Patent application number | Description | Published |
20110153373 | TWO-LAYER DATA ARCHITECTURE FOR RESERVATION MANAGEMENT SYSTEMS - A reservation management system includes at least one processing node that includes a memory and a processor. The at least one processing node further includes a set of reservation processing and transaction modules that manage and process reservation requests and inquiries. At least one general purpose database is communicatively coupled to the at least one processing node. The at least one general purpose database includes a set of pre-allocated tables of fixed length records. At least one persistent storage device is communicatively coupled to the at least one processing node. The general purpose database persistently stores the set of pre-allocated tables of fixed length records on the at least one persistent storage device. | 06-23-2011 |
20120072686 | INTELLIGENT COMPUTER MEMORY MANAGEMENT - A plurality of memory allocators are initialized within a computing system. At least a first memory allocator and a second memory allocator in the plurality of memory allocators are each customizable to efficiently handle a set of different memory request size distributions. The first memory allocator is configured to handle a first memory request size distribution. The second memory allocator is configured to handle a second memory request size distribution. The second memory request size distribution is different than the first memory request size distribution. At least the first memory allocator and the second memory allocator that have been configured are deployed within the computing system in support of at least one application. Deploying at least the first memory allocator and the second memory allocator within the computing system improves at least one of performance and memory utilization of the at least one application. | 03-22-2012 |
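The two-allocator idea in the abstract above can be sketched roughly in Python. This is an illustrative model, not the patented implementation; the size classes, class names, and dispatcher policy are all assumptions:

```python
class SizeClassAllocator:
    """Pre-sized free lists tuned to one request-size distribution.
    The size classes are illustrative assumptions."""
    def __init__(self, size_classes):
        self.size_classes = sorted(size_classes)
        self.free_lists = {s: [] for s in self.size_classes}

    def fits(self, size):
        return 0 < size <= self.size_classes[-1]

    def alloc(self, size):
        # Round up to the smallest size class covering the request.
        cls = next(s for s in self.size_classes if s >= size)
        block = self.free_lists[cls].pop() if self.free_lists[cls] else bytearray(cls)
        return cls, block

    def free(self, cls, block):
        # Return the block to its class's free list for reuse.
        self.free_lists[cls].append(block)


class Dispatcher:
    """Routes each request to the first allocator whose classes cover it."""
    def __init__(self, allocators):
        self.allocators = allocators

    def alloc(self, size):
        for a in self.allocators:
            if a.fits(size):
                return a, a.alloc(size)
        raise MemoryError(size)


small = SizeClassAllocator([16, 32, 64])    # tuned for a small-request workload
large = SizeClassAllocator([1024, 4096])    # tuned for a large-request workload
pool = Dispatcher([small, large])
```

Deploying several allocators this way lets each free-list layout match its own request-size distribution, which is the performance/utilization benefit the abstract claims.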
20120078963 | SUPPORTING LINKED MULTI-USER DECISION MAKING IN ENVIRONMENTS WITH CONSTRAINED SHARED RESOURCES UTILIZING DURABLE FILES - Embodiments of the present invention manage multiple requests to allocate real world resources in a multi-user environment. A set of resource availability information is stored in a first durable data file for each resource in a plurality of resources provided by a database environment. The database environment is shared between a plurality of users. A decision context is associated with a second durable data file. The decision context is associated with a user interacting with the database environment. The decision context exists for a defined duration of time. At least one resource is determined to have been temporarily allocated to the decision context for the defined duration of time. The second durable data file is updated to indicate that the at least one resource has been temporarily allocated to the decision context. The first durable data file is updated to indicate that the at least one resource is currently unavailable. | 03-29-2012 |
20120079212 | ARCHITECTURE FOR SHARING CACHES AMONG MULTIPLE PROCESSES - Various embodiments of the present invention provide a system for caching information in a multi-process environment. The system includes a processor. A shared memory is communicatively coupled to the processor. The shared memory includes a set of data. A writer process is communicatively coupled to the shared memory. The writer process reads and updates the set of data. A plurality of reader processes is communicatively coupled to the shared memory. Each reader process reads at least part of the set of data directly from the shared memory and sends a set of update information to the writer process. The writer process then updates the set of data stored in the shared memory based on the set of update information. | 03-29-2012 |
20120079213 | MANAGING CONCURRENT ACCESSES TO A CACHE - Various embodiments of the present invention manage concurrent accesses to a resource in a parallel computing environment. A plurality of locks is assigned to manage concurrent access to a plurality of parts of a resource. A usage of at least one of the plurality of parts of the resource is monitored. The assignment of the plurality of locks to the plurality of parts of the resource is modified based on the usage that has been monitored. | 03-29-2012 |
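A minimal sketch of the lock-assignment scheme described above, assuming a simple access-count monitor and a greedy reassignment rule (both assumptions, not the patented method):

```python
import threading

class StripedLocks:
    """Assigns parts of a resource to a smaller pool of locks; the
    usage monitor and rebalance rule below are illustrative."""
    def __init__(self, n_parts, n_locks):
        self.locks = [threading.Lock() for _ in range(n_locks)]
        # Initial assignment: part i guarded by lock i mod n_locks.
        self.assignment = {p: p % n_locks for p in range(n_parts)}
        self.access_counts = {p: 0 for p in range(n_parts)}

    def lock_for(self, part):
        # Monitor usage while handing out the lock guarding this part.
        self.access_counts[part] += 1
        return self.locks[self.assignment[part]]

    def rebalance(self):
        # Move the most-accessed part onto the lock carrying the least
        # traffic from the other parts, reducing contention on hot parts.
        hottest = max(self.access_counts, key=self.access_counts.get)
        load = [0] * len(self.locks)
        for p, l in self.assignment.items():
            if p != hottest:
                load[l] += self.access_counts[p]
        self.assignment[hottest] = load.index(min(load))
```

Callers take the lock via `with striped.lock_for(part): ...`; a background step can periodically call `rebalance()` based on the monitored counts.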
20120079391 | SUPPORTING LINKED MULTI-USER DECISION MAKING IN ENVIRONMENTS WITH CONSTRAINED SHARED RESOURCES - Embodiments of the present invention manage multiple requests to allocate real world resources in a multi-user environment. A request for interacting with a database environment comprising records of allocations of a plurality of resources is received from a user in a plurality of users. The database environment is shared between the plurality of users. A set of action choices available for the request is provided to the user via the user interface. A set of resources required by each action choice is identified. The set of resources is associated with a decision context. The decision context exists for a period of time. The set of resources are allocated to the user for a duration of the decision context. The allocating prevents the set of resources from being allocated to other users for the duration of the decision context irrespective of a set of actions performed by the other users. | 03-29-2012 |
20120303908 | MANAGING CONCURRENT ACCESSES TO A CACHE - Various embodiments of the present invention allow concurrent accesses to a cache. A request to update an object stored in a cache is received. A first data structure comprising a new value for the object is created in response to receiving the request. A cache pointer is atomically modified to point to the first data structure. A second data structure comprising an old value for the cached object is maintained until a process, which holds a pointer to the old value of the cached object, either ends or indicates that the old value is no longer needed. | 11-29-2012 |
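The atomic cache-pointer swap described above resembles read-copy-update: readers keep using the snapshot they dereferenced while the writer publishes a new structure. A minimal sketch in Python, where rebinding an attribute stands in for the atomic pointer modification (an assumption about the mechanism, not the patented design):

```python
import threading

class VersionedCache:
    """Readers dereference `current` once and keep using that snapshot;
    the writer never mutates a published dict, only swaps in a copy."""
    def __init__(self):
        self.current = {}               # the "cache pointer"
        self._write_lock = threading.Lock()

    def read(self):
        # Old snapshots remain valid for any reader still holding one.
        return self.current

    def update(self, key, value):
        with self._write_lock:          # serialize writers
            new = dict(self.current)    # first structure: the new value
            new[key] = value
            self.current = new          # atomic pointer swap
```

Here Python's garbage collector plays the role of reclaiming the old structure once no reader holds a reference to it; a C implementation would need explicit grace-period tracking.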
20130019079 | INTELLIGENT COMPUTER MEMORY MANAGEMENT - A plurality of memory allocators are initialized within a computing system. At least a first memory allocator and a second memory allocator in the plurality of memory allocators are each customizable to efficiently handle a set of different memory request size distributions. The first memory allocator is configured to handle a first memory request size distribution. The second memory allocator is configured to handle a second memory request size distribution. The second memory request size distribution is different than the first memory request size distribution. At least the first memory allocator and the second memory allocator that have been configured are deployed within the computing system in support of at least one application. Deploying at least the first memory allocator and the second memory allocator within the computing system improves at least one of performance and memory utilization of the at least one application. | 01-17-2013 |
20130282985 | MANAGING CONCURRENT ACCESSES TO A CACHE - Various embodiments of the present invention allow concurrent accesses to a cache. A request to update an object stored in a cache is received. A first data structure comprising a new value for the object is created in response to receiving the request. A cache pointer is atomically modified to point to the first data structure. A second data structure comprising an old value for the cached object is maintained until a process, which holds a pointer to the old value of the cached object, either ends or indicates that the old value is no longer needed. | 10-24-2013 |
20140006576 | MANAGING SERVICE SPECIFICATIONS AND THE DISCOVERY OF ASSOCIATED SERVICES | 01-02-2014 |
20140006582 | MANAGING SERVICE SPECIFICATIONS AND THE DISCOVERY OF ASSOCIATED SERVICES | 01-02-2014 |
20140025897 | METHOD AND SYSTEM FOR CACHE REPLACEMENT FOR SHARED MEMORY CACHES - A method for managing objects stored in a shared memory cache. The method includes accessing data from the shared memory cache using at least a plurality of cache readers. A system updates data in the shared memory cache using a cache writer. The system maintains a cache replacement process collocated with a cache writer. The cache replacement process makes a plurality of decisions on objects to store in the shared memory cache. Each of the plurality of cache readers maintains information on frequencies with which it accesses cached objects. Each of the plurality of cache readers communicates the maintained information to the cache replacement process. The cache replacement process uses the communicated information on frequencies to make at least one decision on replacing at least one object currently stored in the shared memory cache. | 01-23-2014 |
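A rough model of the reader-frequency feedback loop described above; the reporting protocol and least-frequently-accessed eviction rule are illustrative assumptions:

```python
from collections import Counter

class Reader:
    """Each reader tracks its own access frequencies locally."""
    def __init__(self):
        self.freq = Counter()

    def access(self, cache, key):
        self.freq[key] += 1
        return cache.get(key)

    def report(self):
        # Hand the accumulated counts to the replacement process and reset.
        out, self.freq = self.freq, Counter()
        return out


class ReplacementProcess:
    """Collocated with the writer; merges reader reports and evicts the
    least-frequently-accessed object when the cache is over capacity."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.freq = Counter()

    def receive(self, report):
        self.freq.update(report)

    def evict_if_needed(self, cache):
        while len(cache) > self.capacity:
            victim = min(cache, key=lambda k: self.freq[k])
            del cache[victim]
```

Keeping the counters reader-local and batching them into reports avoids contended shared counters, which is the point of communicating the maintained information rather than updating it in place.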
20140025898 | CACHE REPLACEMENT FOR SHARED MEMORY CACHES - An information processing system and computer program storage product for managing objects stored in a shared memory cache. The system includes at least a plurality of cache readers accessing data from the shared memory cache. The system updates data in the shared memory cache using a cache writer. The system maintains a cache replacement process collocated with a cache writer. The cache replacement process makes a plurality of decisions on objects to store in the shared memory cache. Each of the plurality of cache readers maintains information on frequencies with which it accesses cached objects. Each of the plurality of cache readers communicates the maintained information to the cache replacement process. The cache replacement process uses the communicated information on frequencies to make at least one decision on replacing at least one object currently stored in the shared memory cache. | 01-23-2014 |
20140089495 | PREDICTION-BASED PROVISIONING PLANNING FOR CLOUD ENVIRONMENTS - Various embodiments predict performance of a system including a plurality of server tiers. In one embodiment, a first set of performance information is collected for a base allocation of computing resources across multiple server tiers in the plurality of server tiers for a set of workloads. A set of experimental allocations of the computing resources is generated on a tier-by-tier basis. Each of the set of experimental allocations varies the computing resources allocated by the base allocation for a single server tier of the multiple server tiers. A second set of performance information associated with the single server tier for each of the set of experimental allocations is collected for a plurality of workloads. At least one performance characteristic of at least one candidate allocation of computing resources across the multiple server tiers is predicted for a given workload based on the first and second sets of performance information. | 03-27-2014 |
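One way to read the prediction step above: per-tier measurements from the experimental allocations are combined to score a candidate allocation. The sketch below assumes tiers contribute additively to latency and uses linear interpolation between measured points, both simplifying assumptions not stated in the abstract:

```python
def interpolate(points, x):
    """Linear interpolation over measured (resource, latency) points,
    clamped to the measured range at either end."""
    pts = sorted(points)
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return pts[-1][1] if x > pts[-1][0] else pts[0][1]

def predict_latency(per_tier_measurements, candidate):
    """Score a candidate allocation {tier: resource units} by summing
    each tier's interpolated latency (additive-contribution assumption)."""
    return sum(interpolate(per_tier_measurements[t], candidate[t])
               for t in candidate)
```

The value of the tier-by-tier experiments is that only a linear number of measurements (one sweep per tier) is needed to score an exponential space of candidate allocations.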
20140089509 | PREDICTION-BASED PROVISIONING PLANNING FOR CLOUD ENVIRONMENTS - Various embodiments predict performance of a system including a plurality of server tiers. In one embodiment, a first set of performance information is collected for a base allocation of computing resources across multiple server tiers in the plurality of server tiers for a set of workloads. A set of experimental allocations of the computing resources is generated on a tier-by-tier basis. Each of the set of experimental allocations varies the computing resources allocated by the base allocation for a single server tier of the multiple server tiers. A second set of performance information associated with the single server tier for each of the set of experimental allocations is collected for a plurality of workloads. At least one performance characteristic of at least one candidate allocation of computing resources across the multiple server tiers is predicted for a given workload based on the first and second sets of performance information. | 03-27-2014 |
20140215159 | MANAGING CONCURRENT ACCESSES TO A CACHE - Various embodiments of the present invention allow concurrent accesses to a cache. A request to update an object stored in a cache is received. A first data structure comprising a new value for the object is created in response to receiving the request. A cache pointer is atomically modified to point to the first data structure. A second data structure comprising an old value for the cached object is maintained until a process, which holds a pointer to the old value of the cached object, either ends or indicates that the old value is no longer needed. | 07-31-2014 |
20150026428 | MEMORY USE FOR GARBAGE COLLECTED COMPUTER ENVIRONMENTS - A method, processing system, and computer readable storage medium, reduce heap memory used by an application, where unused memory in the heap memory is reclaimed by a garbage collector. A processor periodically monitors the application's memory usage including maximum heap memory size, committed heap memory size, in use heap memory size, and a garbage collection activity level. The processor, based on determining that the monitored garbage collection activity level is below a threshold, releases unused heap memory from the application by reducing the maximum heap memory size. | 01-22-2015 |
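The shrink decision described above can be modeled as a simple rule. The `stats` dictionary and headroom factor below are hypothetical stand-ins for runtime-specific hooks (for example, memory MXBeans on a JVM), not part of the patented method:

```python
def maybe_shrink_heap(stats, threshold, headroom=1.25):
    """If GC activity is below `threshold`, release unused heap memory by
    lowering the maximum heap size toward in-use memory plus headroom.

    stats keys (all hypothetical names): gc_activity, in_use, min_heap,
    max_heap.
    """
    if stats["gc_activity"] < threshold:
        # Low GC activity suggests the application can tolerate a smaller
        # heap; leave some headroom above the in-use size.
        target = max(int(stats["in_use"] * headroom), stats["min_heap"])
        if target < stats["max_heap"]:
            return target           # new, smaller maximum heap size
    return stats["max_heap"]        # leave the heap alone
```

Gating the shrink on a low GC activity level is what keeps the reduction safe: a heavily collecting application would pay dearly for losing headroom.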
20150026429 | OPTIMIZING MEMORY USAGE ACROSS MULTIPLE GARBAGE COLLECTED COMPUTER ENVIRONMENTS - A method, information processing system, and computer readable storage medium, vary a maximum heap memory size for one application of a plurality of applications based on monitoring garbage collection activity levels for the plurality of applications, each application including a heap memory, and unused memory in the heap memory being reclaimed by a garbage collector. | 01-22-2015 |
20150026687 | MONITORING SYSTEM NOISES IN PARALLEL COMPUTER SYSTEMS - Various embodiments monitor system noise in a parallel computing system. In one embodiment, at least one set of system noise data is stored in a shared buffer during a first computation interval. The set of system noise data is detected during the first computation interval and is associated with at least one parallel thread in a plurality of parallel threads. Each thread in the plurality of parallel threads is a thread of a program. The set of system noise data is filtered during a second computation interval based on at least one filtering condition creating a filtered set of system noise data. The filtered set of system noise data is then stored. | 01-22-2015 |
20150363114 | OPTIMIZING MEMORY USAGE ACROSS MULTIPLE GARBAGE COLLECTED COMPUTER ENVIRONMENTS - A method, information processing system, and computer readable storage medium, vary a maximum heap memory size for one application of a plurality of applications based on monitoring garbage collection activity levels for the plurality of applications, each application including a heap memory, and unused memory in the heap memory being reclaimed by a garbage collector. | 12-17-2015 |
20160048339 | INTELLIGENT COMPUTER MEMORY MANAGEMENT - A plurality of memory allocators are initialized within a computing system. At least a first memory allocator and a second memory allocator in the plurality of memory allocators are each customizable to efficiently handle a set of different memory request size distributions. The first memory allocator is configured to handle a first memory request size distribution. The second memory allocator is configured to handle a second memory request size distribution. The second memory request size distribution is different than the first memory request size distribution. At least the first memory allocator and the second memory allocator that have been configured are deployed within the computing system in support of at least one application. Deploying at least the first memory allocator and the second memory allocator within the computing system improves at least one of performance and memory utilization of the at least one application. | 02-18-2016 |
20100235632 | PROTECTING AGAINST DENIAL OF SERVICE ATTACKS USING TRUST, QUALITY OF SERVICE, PERSONALIZATION, AND HIDE PORT MESSAGES - According to an embodiment of the invention, a system for processing a plurality of service requests in a client-server system includes a challenge server for: presenting a cryptographic challenge to the client; initializing a trust cookie that encodes a client's initial priority level after the client correctly solves the cryptographic challenge; computing a trust level score for the client based on a service request wherein said trust level score is associated with an amount of resources expended by the server in handling the service request such that a higher trust level score is computed for service requests consuming less system resources; assigning the trust level score to the client based on the computation; and embedding the assigned trust level score in the trust cookie included in all responses sent from the server to the client. The system further includes an application server coupled with a firewall. | 09-16-2010 |
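The trust-cookie portion of the scheme above can be sketched with an HMAC-signed cookie that carries the client's score and is returned on every response. The signing key, score arithmetic, and cost threshold are illustrative assumptions; the cryptographic challenge step is omitted:

```python
import hashlib, hmac, json

SECRET = b"server-side-key"   # hypothetical; the real key is server-defined

def make_trust_cookie(client_id, score):
    """Embed a trust score in a tamper-evident, HMAC-signed cookie."""
    payload = json.dumps({"id": client_id, "score": score})
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def read_trust_cookie(cookie):
    """Verify the signature before trusting the embedded score."""
    payload, sig = cookie.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("tampered cookie")
    return json.loads(payload)

def updated_score(old_score, resource_cost):
    """Cheaper requests raise the score; expensive ones lower it
    (threshold of 10 cost units is an illustrative assumption)."""
    return max(0, old_score + (1 if resource_cost < 10 else -1))
```

Because the cookie is signed server-side, the server can remain stateless: each request carries its own priority evidence, and a flooding client that issues expensive requests watches its own priority decay.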
20110078685 | SYSTEMS AND METHODS FOR MULTI-LEG TRANSACTION PROCESSING - Embodiments of the invention broadly contemplate systems, methods and arrangements for processing multi-leg transactions. Embodiments of the invention use a look-ahead mechanism to allow later-arriving orders to be processed while an earlier, tradable multi-leg transaction is pending, without violating any relevant timing or exchange rules. | 03-31-2011 |
20110078686 | METHODS AND SYSTEMS FOR HIGHLY AVAILABLE COORDINATED TRANSACTION PROCESSING - Embodiments of the invention provide a coordinated transaction processing system capable of providing primary-primary high availability as well as minimal response time to queries via utilization of a virtual reply system between partner nodes. One or more global queues ensure peer nodes are synchronized. | 03-31-2011 |
20110219311 | METHOD AND SYSTEM FOR PARTITIONING ASSET MANAGEMENT PLUGINS - An embodiment of the invention includes a system for partitioning asset management plugins. The system includes an application program interface for performing basic CRUD functions on assets having multiple asset types. At least one plugin having plugin components is provided, wherein the plugin manages at least one asset having a specific asset type (of the multiple asset types). The plugin components include a CRUD component, a state component, an actions component, and/or a view component. The system further includes plugin containers for hosting the plugin components, the plugin containers include at least one client-side plugin container and at least one server-side plugin container. The plugin components are partitioned and distributed from the plugin components to the plugin containers by a plugin server based on capabilities of the client. | 09-08-2011 |
20110252127 | METHOD AND SYSTEM FOR LOAD BALANCING WITH AFFINITY - A method and system for distributing requests to multiple back-end servers in client-server environments. A front-end load balancer is used to send requests to multiple back-end servers. In appropriate cases, the load balancer will send requests to the servers based on affinity requirements, while maintaining load balance among servers. | 10-13-2011 |
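A minimal sketch of affinity-aware load balancing as described above, assuming a least-loaded default policy and a sticky affinity table (both assumptions, not the patented design):

```python
class AffinityBalancer:
    """Requests carrying an affinity key stick to the server that first
    handled that key; all other requests go to the least-loaded server."""
    def __init__(self, servers):
        self.load = {s: 0 for s in servers}
        self.affinity = {}    # affinity key -> server

    def route(self, key=None):
        if key is not None and key in self.affinity:
            # Affinity requirement wins over load balance.
            server = self.affinity[key]
        else:
            server = min(self.load, key=self.load.get)
            if key is not None:
                self.affinity[key] = server
        self.load[server] += 1
        return server
```

The affinity table is what lets a front-end keep session or cache locality on the back-ends while still spreading unaffiliated traffic evenly.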
20120297008 | CACHING PROVENANCE INFORMATION - Techniques are disclosed for caching provenance information. For example, in an information system comprising a first computing device requesting provenance data from at least a second computing device, a method for improving the delivery of provenance data to the first computing device, comprises the following steps. At least one cache is maintained for storing provenance data which the first computing device can access with less overhead than accessing the second computing device. Aggregated provenance data is produced from input provenance data. A decision whether or not to cache input provenance data is made based on a likelihood of the input provenance data being used to produce aggregated provenance data. By way of example, the first computing device may comprise a client and the second computing device may comprise a server. | 11-22-2012 |
20130332507 | HIGHLY AVAILABLE SERVERS - Techniques for maintaining high availability servers are disclosed. For example, a method comprises the following steps. One or more client requests are provided to a first server for execution therein. The one or more client requests are also provided to a second server for storage therein. In response to the first server failing, the second server is configured to execute at least one client request of the one or more client requests provided to the first server and the second server that is not properly executed by the first server. | 12-12-2013 |
20140067989 | CACHING PROVENANCE INFORMATION - Techniques are disclosed for caching provenance information. For example, in an information system comprising a first computing device requesting provenance data from at least a second computing device, a method for improving the delivery of provenance data to the first computing device, comprises the following steps. At least one cache is maintained for storing provenance data which the first computing device can access with less overhead than accessing the second computing device. Aggregated provenance data is produced from input provenance data. A decision whether or not to cache input provenance data is made based on a likelihood of the input provenance data being used to produce aggregated provenance data. By way of example, the first computing device may comprise a client and the second computing device may comprise a server. | 03-06-2014 |
20140122320 | METHODS AND SYSTEMS FOR COORDINATED TRANSACTIONS IN DISTRIBUTED AND PARALLEL ENVIRONMENTS - Automated techniques are disclosed for minimizing communication between nodes in a system comprising multiple nodes for executing requests in which a request type is associated with a particular node. For example, a technique comprises the following steps. Information is maintained about frequencies of compound requests received and individual requests comprising the compound requests. For a plurality of request types which frequently occur in a compound request, the plurality of request types is associated to a same node. As another example, a technique for minimizing communication between nodes, in a system comprising multiple nodes for executing a plurality of applications, comprises the steps of maintaining information about an amount of communication between said applications, and using said information to place said applications on said nodes to minimize communication among said nodes. | 05-01-2014 |
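The co-location idea above can be approximated greedily: count how often request types appear together in compound requests, then place frequent partners on the same node so compound requests need fewer cross-node hops. The greedy placement below is an illustrative assumption, not the patented technique:

```python
from collections import Counter
from itertools import combinations

def assign_types_to_nodes(compound_requests, n_nodes):
    """Place request types on nodes so that types which frequently
    co-occur in a compound request land on the same node."""
    # Maintain co-occurrence frequencies of request-type pairs.
    co = Counter()
    for req in compound_requests:
        for a, b in combinations(sorted(set(req)), 2):
            co[(a, b)] += 1

    placement = {}
    nodes = [set() for _ in range(n_nodes)]
    # Process pairs by descending co-occurrence frequency.
    for (a, b), _ in co.most_common():
        for t in (a, b):
            if t not in placement:
                partner = b if t == a else a
                if partner in placement:
                    node = placement[partner]      # join the partner
                else:
                    node = min(range(n_nodes), key=lambda i: len(nodes[i]))
                placement[t] = node
                nodes[node].add(t)
    return placement
```

This is essentially a graph-partitioning heuristic with co-occurrence counts as edge weights; a production system would also need to bound per-node load.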
20140123155 | METHODS AND SYSTEMS FOR COORDINATED TRANSACTIONS IN DISTRIBUTED AND PARALLEL ENVIRONMENTS - Automated techniques are disclosed for minimizing communication between nodes in a system comprising multiple nodes for executing requests in which a request type is associated with a particular node. For example, a technique comprises the following steps. Information is maintained about frequencies of compound requests received and individual requests comprising the compound requests. For a plurality of request types which frequently occur in a compound request, the plurality of request types is associated to a same node. As another example, a technique for minimizing communication between nodes, in a system comprising multiple nodes for executing a plurality of applications, comprises the steps of maintaining information about an amount of communication between said applications, and using said information to place said applications on said nodes to minimize communication among said nodes. | 05-01-2014 |
20140331016 | APPLICATION-DIRECTED MEMORY DE-DUPLICATION - In a computing system including an application executing on top of a virtualization control layer, wherein the virtualization control layer maps portions of a virtual memory to portions of a physical memory, a method for managing memory including: identifying, by the application, a range of virtual memory whose probability of being replicated in the virtual memory exceeds a given threshold; obtaining, by the application, at least one memory address corresponding to the range of virtual memory; and passing, from the application to the virtualization control layer, an identifier for the range of virtual memory and the memory address corresponding to the range of virtual memory, wherein the identifier is useable by the virtualization control layer to identify similar ranges within the virtual memory. | 11-06-2014 |
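The de-duplication flow above can be simulated by hashing fixed-size pages within the ranges the application hints at, so the virtualization layer only scans memory likely to be replicated. The page size, hashing scheme, and function names are illustrative assumptions:

```python
import hashlib

PAGE = 4096   # bytes per page; illustrative

def dedup_candidates(memory, app_hint_ranges):
    """Hash pages inside application-hinted (start, length) ranges and
    group identical pages; each group could share one physical copy.

    `memory` stands in for guest physical memory; in a real system the
    virtualization control layer would do this scan, not the application.
    """
    groups = {}
    for start, length in app_hint_ranges:
        for off in range(start, start + length, PAGE):
            page = bytes(memory[off:off + PAGE])
            digest = hashlib.sha256(page).hexdigest()
            groups.setdefault(digest, []).append(off)
    # Only groups with more than one page offer any sharing opportunity.
    return [offs for offs in groups.values() if len(offs) > 1]
```

The application-directed hint is the key economy: instead of scanning all of memory (as kernel same-page merging does), the layer only hashes ranges whose replication probability the application already knows to be high.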
20140331017 | APPLICATION-DIRECTED MEMORY DE-DUPLICATION - In a computing system including an application executing on top of a virtualization control layer, wherein the virtualization control layer maps portions of a virtual memory to portions of a physical memory, an apparatus for managing memory configured to: identify, by the application, a range of virtual memory whose probability of being replicated in the virtual memory exceeds a given threshold; obtain, by the application, at least one memory address corresponding to the range of virtual memory; and pass, from the application to the virtualization control layer, an identifier for the range of virtual memory and the memory address corresponding to the range of virtual memory, wherein the identifier is useable by the virtualization control layer to identify similar ranges within the virtual memory. | 11-06-2014 |
20090031425 | METHODS, SYSTEMS, AND COMPUTER PROGRAM PRODUCTS FOR DETECTING ALTERATION OF AUDIO OR IMAGE DATA - Using metadata to detect alteration of data. A first set of metadata characteristics including at least one respective semantic description are recorded for a first set of data representing original data. A second set of metadata characteristics including at least one corresponding semantic description are recorded for a second set of data representing data under test. The first and second sets of metadata characteristics are compared. If the first and second sets of metadata characteristics are not identical, these sets are processed to identify locations in the first set of data that have been altered. Using the at least one semantic description for the first set of data and the at least one corresponding semantic description for the second set of data, one or more metadata characteristics that have changed from the first set of data to the second set of data are identified. | 01-29-2009 |
20100014722 | METHODS INVOLVING OPTIMIZING AND MAPPING IMAGES - A method for optimizing images, the method comprising, receiving a designation of a first feature of interest, receiving a designation of a second feature of interest, receiving a target image, receiving an atlas image including labels of first and second features of interest of the target image and a first optimization parameter associated with the first feature of interest and a second optimization parameter associated with the second feature of interest, mapping the atlas image onto the target image resulting in a global mapped image, defining an area of the first feature of interest and an area of the second feature of interest, mapping the reference image onto the area of the first feature of interest on the global mapped image using the first optimization parameter, and mapping the reference image onto the area of the second feature of interest on the global mapped image using the second optimization parameter. | 01-21-2010 |
20100054525 | SYSTEM AND METHOD FOR AUTOMATIC RECOGNITION AND LABELING OF ANATOMICAL STRUCTURES AND VESSELS IN MEDICAL IMAGING SCANS - A system and method for recognizing and labeling anatomical structures in an image includes creating a list of objects such that one or more objects on the list appear before a target object and setting the image as a context for a first object on the list. The first object is detected and labeled by subtracting a background of the image. A local context is set for a next object on the list using the first object. The next object is detected and labeled by registration using the local context. Setting a local context and detecting and labeling the next object are repeated until the target object is detected and labeled. Labeling of the target object is refined using region growing. | 03-04-2010 |
20120281904 | SYSTEM AND METHOD FOR AUTOMATIC RECOGNITION AND LABELING OF ANATOMICAL STRUCTURES AND VESSELS IN MEDICAL IMAGING SCANS - A system and method for recognizing and labeling anatomical structures in an image includes creating a list of objects such that one or more objects on the list appear before a target object and setting the image as a context for a first object on the list. The first object is detected and labeled by subtracting a background of the image. A local context is set for a next object on the list using the first object. The next object is detected and labeled by registration using the local context. Setting a local context and detecting and labeling the next object are repeated until the target object is detected and labeled. Labeling of the target object is refined using region growing. | 11-08-2012 |
20080212641 | APPARATUS FOR THERMAL CHARACTERIZATION UNDER NON-UNIFORM HEAT LOAD - Disclosed is an apparatus for determining the cooling characteristics of a cooling device used for transferring heat from an electronic device. The apparatus comprises a cooling device thermally coupled to a heat pipe. The heat pipe has an exposed surface for the selective application of heat. A localized heat source is selectively applied to at least one region of the exposed surface; the heat source is preferably capable of being varied both in position relative to the exposed surface and in heat intensity. A heat shield is preferably positioned around the exposed surface of the heat pipe to isolate the operational cooling device from the localized heat source. A temperature detector repeatedly measures a temperature distribution across the exposed surface while the cooling device is in a heat transfer mode. The temperature distribution is then used to thermally characterize the cooling device. | 09-04-2008 |
20080215284 | APPARATUS FOR THERMAL CHARACTERIZATION UNDER NON-UNIFORM HEAT LOAD - A method and apparatus for real-time thermal characterization of a fully operating cooling device. | 09-04-2008 |
20080239539 | METHOD AND APPARATUS FOR THREE-DIMENSIONAL MEASUREMENTS - An apparatus and method for measuring the physical quantities of a data center during operation and method for servicing large-scale computing systems is disclosed. The apparatus includes a cart that supports a plurality of sensors. The cart is moveable within the data center. The sensors capture temperature or other physical parameters within the room. The sensor readings, along with position and orientation information pertaining to the cart are transmitted to a computer system where the data is analyzed to select the optimum temperature or other system environmental parameters for the data center. | 10-02-2008 |
20080245506 | COOLING APPARATUSES WITH DISCRETE COLD PLATES COMPLIANTLY COUPLED BETWEEN A COMMON MANIFOLD AND ELECTRONICS COMPONENTS OF AN ASSEMBLY TO BE COOLED - Cooling apparatuses and methods are provided for cooling an assembly including a planar support structure supporting multiple electronics components. The cooling apparatus includes: multiple discrete cold plates, each having a coolant inlet, coolant outlet and at least one coolant carrying channel disposed therebetween; and a manifold for distributing coolant to and exhausting coolant from the cold plates. The cooling apparatus also includes multiple flexible hoses connecting the coolant inlets of the cold plates to the manifold, as well as the coolant outlets to the manifold, with each hose segment being disposed between a respective cold plate and the manifold. A biasing mechanism biases the cold plates away from the manifold and towards the electronics components, and at least one fastener secures the manifold to the support structure, compressing the biasing mechanism, and thereby forcing the parallel coupled cold plates towards their respective electronics components to ensure good thermal interface. | 10-09-2008 |
20080281551 | METHOD AND APPARATUS FOR THREE-DIMENSIONAL MEASUREMENTS - An apparatus and method for measuring the physical quantities of a data center during operation and method for servicing large-scale computing systems is disclosed. The apparatus includes a cart that supports a plurality of sensors. The cart is moveable within the data center. The sensors capture temperature or other physical parameters within the room. The sensor readings, along with position and orientation information pertaining to the cart are transmitted to a computer system where the data is analyzed to select the optimum temperature or other system environmental parameters for the data center. | 11-13-2008 |
20100046574 | APPARATUS FOR THERMAL CHARACTERIZATION UNDER NON-UNIFORM HEAT LOAD - Disclosed is an apparatus for determining the cooling characteristics of a cooling device used for transferring heat from an electronic device. The apparatus comprises a cooling device thermally coupled to a heat pipe. The heat pipe has an exposed surface for the selective application of heat. A localized heat source is selectively applied to at least one region of the exposed surface; the heat source is preferably capable of being varied both in position relative to the exposed surface and in heat intensity. A heat shield is preferably positioned around the exposed surface of the heat pipe to isolate the operational cooling device from the localized heat source. A temperature detector repeatedly measures a temperature distribution across the exposed surface while the cooling device is in a heat transfer mode. The temperature distribution is then used to thermally characterize the cooling device. | 02-25-2010 |
20120201004 | APPARATUS FOR THERMAL CHARACTERIZATION UNDER NON-UNIFORM HEAT LOAD - A method and apparatus for real-time thermal characterization of a fully operating cooling device. | 08-09-2012 |