Patent application number | Description | Published |
20080235684 | Heuristic Based Affinity Dispatching for Shared Processor Partition Dispatching - A mechanism is provided for determining whether to use cache affinity as a criterion for software thread dispatching in a shared processor logical partitioning data processing system. The server firmware may store data about when and/or how often logical processors are dispatched. Given these data, the operating system may collect metrics. Using the logical processor metrics, the operating system may determine whether cache affinity is likely to provide a significant performance benefit relative to the cost of dispatching a particular logical processor to the operating system. | 09-25-2008 |
20090119474 | PARTITION REDISPATCHING USING PAGE TRACKING - Illustrative embodiments provide a computer implemented method and data processing system for redispatching a partition by tracking a set of memory pages belonging to the dispatched partition. In one illustrative embodiment, responsive to determining a page address miss in a page addressing buffer, the method finds an effective page address to real page address mapping for the miss to create a found real page address and page size combination, and saves that combination as an entry in a set of entries in an array. Responsive to determining that the dispatched partition has become an undispatched partition, the method creates a preserved array from the array. Responsive to determining that the undispatched partition is redispatched, the method analyzes each entry of the preserved array for a compressed page and, responsive to finding a compressed page, invokes a partition management firmware function to decompress the compressed page before the partition is redispatched. | 05-07-2009 |
20090204959 | METHOD AND APPARATUS FOR VIRTUAL PROCESSOR DISPATCHING TO A PARTITION BASED ON SHARED MEMORY PAGES - The present invention provides a computer implemented method, data processing system, and computer program product for mapping and dispatching virtual processors in a data processing system having at least a first partition and a second partition. The data processing system runs a first partition on a virtual processor during a first timeslice. The data processing system identifies at least one physical page used by the first partition and the second partition. The data processing system maps the at least one physical page to the first partition and the second partition. The data processing system determines a fitness value based on the mapping. The data processing system dispatches the virtual processor to the second partition on a second timeslice based on the fitness value, wherein the second timeslice immediately succeeds the first timeslice, whereby the at least one physical page remains in cache during at least the first timeslice and the second timeslice. | 08-13-2009 |
20090213122 | Graphical Display of CPU Utilization - A method for graphically displaying central processing unit consumption for at least one variable capacity or uncapped partition includes displaying the CPU utilization or consumption of the at least one partition in a variable-size colored pie chart. The pie chart shows time spent in at least one of user mode, operating system mode, I/O wait mode, or idle mode, with each mode represented by a different color. An entitlement indicator is displayed for the effective minimum capacity of the at least one variable capacity or uncapped partition. | 08-27-2009 |
20090217276 | METHOD AND APPARATUS FOR MOVING THREADS IN A SHARED PROCESSOR PARTITIONING ENVIRONMENT - The present invention provides a computer implemented method and apparatus to assign software threads to a common virtual processor of a data processing system having multiple virtual processors. A data processing system detects cooperation between a first thread and a second thread with respect to a lock associated with a resource of the data processing system. Responsive to detecting cooperation, the data processing system assigns the first thread to the common virtual processor. The data processing system moves the second thread to the common virtual processor, whereby the sleep time associated with the lock experienced by the first thread and the second thread is reduced below the sleep time experienced prior to detecting the cooperation. | 08-27-2009 |
20100115522 | MECHANISM TO CONTROL HARDWARE MULTI-THREADED PRIORITY BY SYSTEM CALL - A method, a system and a computer program product for controlling the hardware priority of hardware threads in a data processing system. A Thread Priority Control (TPC) utility assigns a primary level and one or more secondary levels of hardware priority to a hardware thread. When a hardware thread initiates execution in the absence of a system call, the TPC utility enables execution based on the primary level. When the hardware thread initiates execution within a system call, the TPC utility dynamically adjusts execution from the primary level to the secondary level associated with the system call. The TPC utility adjusts hardware priority levels in order to: (a) raise the hardware priority of one hardware thread relative to another; (b) reduce energy consumed by the hardware thread; and (c) fulfill requirements of time critical hardware sections. | 05-06-2010 |
20110153949 | DELAYED REPLACEMENT OF CACHE ENTRIES - A cache entry replacement unit can delay replacement of more valuable entries by replacing less valuable entries. When a miss occurs, the cache entry replacement unit can determine a cache entry for replacement ("a replacement entry") based on a generic replacement technique. If the replacement entry is an entry that should be protected from replacement (e.g., a large page entry), the cache entry replacement unit can determine a second replacement entry. The cache entry replacement unit can "skip" the first replacement entry by replacing the second replacement entry with a new entry, if the second replacement entry is an entry that should not be protected (e.g., a small page entry). The first replacement entry can be skipped a predefined number of times before the first replacement entry is replaced with a new entry. (A sketch of this mechanism appears after this table.) | 06-23-2011 |
20110153975 | METHOD FOR PRIORITIZING VIRTUAL REAL MEMORY PAGING BASED ON DISK CAPABILITIES - A method manages memory paging operations. Responsive to a request to page out a memory page from a shared memory pool, the method identifies whether physical space within one of a number of paging space devices has been allocated for the memory page. If physical space has not been allocated for the memory page, a page priority indicator for the memory page is identified, and the memory page is allocated to one of a number of memory pools within one of the paging space devices according to that indicator. The memory page is then written to the allocated memory pool. | 06-23-2011 |
20110161539 | OPPORTUNISTIC USE OF LOCK MECHANISM TO REDUCE WAITING TIME OF THREADS TO ACCESS A SHARED RESOURCE - Embodiments of the invention provide a method, apparatus and computer program product for enabling a thread to acquire a lock associated with a shared resource, when a locking mechanism is used therewith, wherein each embodiment reduces waiting time and enhances efficiency in using the shared resource. One embodiment is associated with a plurality of processors, which includes two or more processors that each provides a specified thread to access a shared resource. The shared resource can only be accessed by one thread at a given time, a locking mechanism enables a first one of the specified threads to access the shared resource while each of the other specified threads is retained in a waiting queue, and a second one of the specified threads occupies a position of highest priority in the queue. The method includes the step of identifying a time period between a time when the first specified thread releases access to the shared resource, and a later time when the second specified thread becomes enabled to access the shared resource. Responsive to an additional thread that is not one of the specified threads being provided by a processor to access the shared resource during the identified time period, it is determined whether a first prespecified criterion pertaining to the specified threads retained in the queue has been met. Responsive to the first criterion being met, the method determines whether a second prespecified criterion has been met, wherein the second criterion is that the number of specified threads in the queue has not decreased since a specified prior time. Responsive to the second criterion being met, the method then decides whether to enable the additional thread to access the shared resource before the second specified thread accesses the resource. | 06-30-2011 |
20110246800 | OPTIMIZING POWER MANAGEMENT IN MULTICORE VIRTUAL MACHINE PLATFORMS BY DYNAMICALLY VARIABLE DELAY BEFORE SWITCHING PROCESSOR CORES INTO A LOW POWER STATE - A thread is distributed for running on a physical processor, and the physical processor is enabled to be switched into a low power snooze state when the running thread is IDLE. The switch into the low power state, however, may be delayed by a delay time from an IDLE dispatch of the running thread; the delay is determined by tracking the rate of IDLE dispatches per processor clock interval and dynamically varying the delay time, wherein the delay time is decreased when the rate of IDLE dispatches increases and increased when the rate of IDLE dispatches decreases. (A sketch of this mechanism appears after this table.) | 10-06-2011 |
20120005401 | PAGE BUFFERING IN A VIRTUALIZED, MEMORY SHARING CONFIGURATION - An apparatus includes a processor and a volatile memory that is configured to be accessible in an active memory sharing configuration. The apparatus includes a machine-readable medium encoded with instructions executable by the processor. The instructions include first virtual machine instructions configured to access the volatile memory with a first virtual machine, second virtual machine instructions configured to access the volatile memory with a second virtual machine, and virtual machine monitor instructions configured to page data out from a shared memory to a reserved memory section in the volatile memory responsive to the first virtual machine or the second virtual machine paging the data out from, or in to, the shared memory. The shared memory is shared across the first virtual machine and the second virtual machine. The volatile memory includes the shared memory. | 01-05-2012 |
20120005448 | Demand-Based Memory Management of Non-pagable Data Storage - Management of UNIX-style storage pools is enhanced by specially managing one or more memory management inodes associated with pinned and allocated pages of data storage: providing indirect access to the pinned and allocated pages by one or more user processes via a handle, while preventing direct access to those pages by the user processes without use of the handle; periodically scanning hardware status bits in the inodes to determine which of the pinned and allocated pages have been accessed within a pre-determined period of time; requesting, via a callback communication to each user process, which of the least-recently accessed pinned and allocated pages can be either deallocated or defragmented and compacted; and, responsive to receiving one or more page indicators of pages unpinned by the user processes, compacting or deallocating the pages corresponding to the page indicators. | 01-05-2012 |
20120036292 | POLLING IN A VIRTUALIZED INFORMATION HANDLING SYSTEM - A software thread is dispatched for causing the system to poll a device for determining whether a condition has occurred. Subsequently, the software thread is undispatched and, in response thereto, an interrupt is enabled on the device, so that the device is enabled to generate the interrupt in response to an occurrence of the condition, and so that the system ceases polling the device for determining whether the condition has occurred. Eventually, the software thread is redispatched and, in response thereto, the interrupt is disabled on the device, so that the system resumes polling the device for determining whether the condition has occurred. | 02-09-2012 |
20120060012 | MANAGEMENT OF LOW-PAGING SPACE CONDITIONS IN AN OPERATING SYSTEM - A virtual memory management unit can implement various techniques for managing paging space. The virtual memory management unit can monitor the number of unallocated large-sized pages and determine when that number drops below a page threshold. Unallocated contiguous smaller-sized pages can be aggregated to obtain unallocated larger-sized pages, which can then be allocated to processes as required to improve the efficiency of disk I/O operations. Allocated smaller-sized pages can also be reorganized to obtain the unallocated contiguous smaller-sized pages that can then be aggregated to yield the larger-sized pages. Furthermore, content can also be compressed before being written to the paging space to reduce the number of pages that are to be allocated to processes. This can enable efficient management of the paging space without terminating processes. | 03-08-2012 |
20120102258 | DYNAMIC MEMORY AFFINITY REALLOCATION AFTER PARTITION MIGRATION - A method of dynamically reallocating memory affinity in a virtual machine after migrating the virtual machine from a source computer system to a destination computer system migrates processor states and resources used by the virtual machine from the source computer system to the destination computer system. The method maps memory of the virtual machine to processor nodes of the destination computer system. The method deletes memory mappings in processor hardware, such as translation lookaside buffers and effective-to-real address tables, for the virtual machine on the destination computer system. The method starts the virtual machine on the destination computer system in virtual real memory mode. A hypervisor running on the destination computer system receives a page fault and virtual address of a page for the virtual machine from a processor of the destination computer system and determines if the page is in local memory of the processor. If the hypervisor determines the page to be in the local memory of the processor, the hypervisor returns a physical address mapping for the page to the processor. If the hypervisor determines the page not to be in the local memory of the processor, the hypervisor moves the page to the local memory of the processor and returns a physical address mapping for the page to the processor. | 04-26-2012 |
20120246652 | Processor Management Via Thread Status - Various systems, processes, and products may be used to manage a processor. In particular implementations, managing a processor may include the ability to determine whether a thread is pausing for a short period of time and place a wait event for the thread in a queue based on a short thread pause occurring. Managing a processor may also include the ability to activate a delay thread that determines whether a wait time associated with the pause has expired and remove the wait event from the queue based on the wait time having expired. | 09-27-2012 |
20120278809 | LOCK BASED MOVING OF THREADS IN A SHARED PROCESSOR PARTITIONING ENVIRONMENT - The present invention provides a computer implemented method and apparatus to assign software threads to a common virtual processor of a data processing system having multiple virtual processors. A data processing system detects cooperation between a first thread and a second thread with respect to a lock associated with a resource of the data processing system. Responsive to detecting cooperation, the data processing system assigns the first thread to the common virtual processor. The data processing system moves the second thread to the common virtual processor, whereby the sleep time associated with the lock experienced by the first thread and the second thread is reduced below the sleep time experienced prior to detecting the cooperation. | 11-01-2012 |
20120311605 | PROCESSOR CORE POWER MANAGEMENT TAKING INTO ACCOUNT THREAD LOCK CONTENTION - A method maintains, for each processing element in a processor, a count of threads waiting in a data structure for hand-off locks in order to execute on the processing element. The method maintains the processing element in a first power state if the count of threads waiting for hand-off locks is greater than zero. The method puts the processing element in a second power state if the count of threads waiting for hand-off locks is equal to zero and no thread is ready to be processed by the processing element. The method returns the processing element to the first power state if the count of threads becomes greater than zero, or if a thread becomes ready to be processed by the processing element. (A sketch of this mechanism appears after this table.) | 12-06-2012 |
20130159614 | PAGE BUFFERING IN A VIRTUALIZED, MEMORY SHARING CONFIGURATION - An apparatus includes a processor and a volatile memory that is configured to be accessible in an active memory sharing configuration. The apparatus includes a machine-readable medium encoded with instructions executable by the processor. The instructions include first virtual machine instructions configured to access the volatile memory with a first virtual machine, second virtual machine instructions configured to access the volatile memory with a second virtual machine, and virtual machine monitor instructions configured to page data out from a shared memory to a reserved memory section in the volatile memory responsive to the first virtual machine or the second virtual machine paging the data out from, or in to, the shared memory. The shared memory is shared across the first virtual machine and the second virtual machine. The volatile memory includes the shared memory. | 06-20-2013 |
20130166873 | MANAGEMENT OF LOW-PAGING SPACE CONDITIONS IN AN OPERATING SYSTEM - A virtual memory management unit can implement various techniques for managing paging space. The virtual memory management unit can monitor the number of unallocated large-sized pages and determine when that number drops below a page threshold. Unallocated contiguous smaller-sized pages can be aggregated to obtain unallocated larger-sized pages, which can then be allocated to processes as required to improve the efficiency of disk I/O operations. Allocated smaller-sized pages can also be reorganized to obtain the unallocated contiguous smaller-sized pages that can then be aggregated to yield the larger-sized pages. Furthermore, content can also be compressed before being written to the paging space to reduce the number of pages that are to be allocated to processes. This can enable efficient management of the paging space without terminating processes. | 06-27-2013 |
20130227549 | MANAGING UTILIZATION OF PHYSICAL PROCESSORS IN A SHARED PROCESSOR POOL - Systems, methods and computer program products may be used to manage utilization of one or more physical processors in a shared processor pool. A method of managing such utilization may include determining a current amount of utilization of the one or more physical processors and generating an instruction message, at least partially determined by the current amount of utilization. The method may further include sending the instruction message to a guest operating system having a number of enabled virtual processors. | 08-29-2013 |
20130254775 | EFFICIENT LOCK HAND-OFF IN A SYMMETRIC MULTIPROCESSING SYSTEM - Provided are techniques for providing a first lock, corresponding to a resource, in a memory that is global to a plurality of processors; spinning, by a first thread running on a first processor of the processors, at a low hardware-thread priority on the first lock such that the first processor does not yield processor cycles to a hypervisor; spinning, by a second thread running on a second processor, on a second lock in a memory local to the second processor such that the second processor is configured to yield processor cycles to the hypervisor; acquiring the first lock and the corresponding resource by the first thread; and, in response to the acquiring of the first lock by the first thread, spinning, by the second thread, at the low hardware-thread priority on the first lock rather than the second lock such that the second processor does not yield processor cycles to the hypervisor. | 09-26-2013 |
20130290666 | Demand-Based Memory Management of Non-pagable Data Storage - Management of UNIX-style storage pools is enhanced by specially managing one or more memory management inodes associated with pinned and allocated pages of data storage: providing indirect access to the pinned and allocated pages by one or more user processes via a handle, while preventing direct access to those pages by the user processes without use of the handle; periodically scanning hardware status bits in the inodes to determine which of the pinned and allocated pages have been accessed within a pre-determined period of time; requesting, via a callback communication to each user process, which of the least-recently accessed pinned and allocated pages can be either deallocated or defragmented and compacted; and, responsive to receiving one or more page indicators of pages unpinned by the user processes, compacting or deallocating the pages corresponding to the page indicators. | 10-31-2013 |
20130346967 | Determining Placement Fitness For Partitions Under A Hypervisor - A technique for determining placement fitness for partitions under a hypervisor in a host computing system having non-uniform memory access (NUMA) nodes. In an embodiment, a partition resource specification is received from a partition score requester. The partition resource specification identifies a set of computing resources needed for a virtual machine partition to be created by a hypervisor in the host computing system. Resource availability within the NUMA nodes of the host computing system is assessed to determine possible partition placement options. A partition fitness score of a most suitable one of the partition placement options is calculated. The partition fitness score is reported to the partition score requester. | 12-26-2013 |
20130346972 | Determining Placement Fitness For Partitions Under A Hypervisor - A technique for determining placement fitness for partitions under a hypervisor in a host computing system having non-uniform memory access (NUMA) nodes. In an embodiment, a partition resource specification is received from a partition score requester. The partition resource specification identifies a set of computing resources needed for a virtual machine partition to be created by a hypervisor in the host computing system. Resource availability within the NUMA nodes of the host computing system is assessed to determine possible partition placement options. A partition fitness score of a most suitable one of the partition placement options is calculated. The partition fitness score is reported to the partition score requester. | 12-26-2013 |
20140006741 | Computing Processor Resources for Logical Partition Migration | 01-02-2014 |
20140007124 | Computing Processor Resources for Logical Partition Migration | 01-02-2014 |
20140101662 | EFFICIENT LOCK HAND-OFF IN A SYMMETRIC MULTIPROCESSOR SYSTEM - Provided are techniques for providing a first lock, corresponding to a resource, in a memory that is global to a plurality of processors; spinning, by a first thread running on a first processor of the processors, at a low hardware-thread priority on the first lock such that the first processor does not yield processor cycles to a hypervisor; spinning, by a second thread running on a second processor, on a second lock in a memory local to the second processor such that the second processor is configured to yield processor cycles to the hypervisor; acquiring the first lock and the corresponding resource by the first thread; and, in response to the acquiring of the first lock by the first thread, spinning, by the second thread, at the low hardware-thread priority on the first lock rather than the second lock such that the second processor does not yield processor cycles to the hypervisor. | 04-10-2014 |
20140149672 | SELECTIVE RELEASE-BEHIND OF PAGES BASED ON REPAGING HISTORY IN AN INFORMATION HANDLING SYSTEM - An information handling system (IHS) includes an operating system with a release-behind component that determines which file pages to release from a file cache in system memory. The release-behind component employs a history buffer to determine which file pages to release from the file cache to create room for a current page access. The history buffer stores entries that identify respective pages for which a page fault occurred. For each identified page, the history buffer stores respective repage information that indicates if a repage fault occurred for such page. The release-behind component identifies a candidate previous page for release from the file cache. The release-behind component checks the history buffer to determine if a repage fault occurred for that page. If so, the release-behind component does not discard the candidate previous page from the cache; otherwise, it discards the candidate previous page. (A sketch of this mechanism appears after this table.) | 05-29-2014 |
20140149675 | SELECTIVE RELEASE-BEHIND OF PAGES BASED ON REPAGING HISTORY IN AN INFORMATION HANDLING SYSTEM - An information handling system (IHS) includes an operating system with a release-behind component that determines which file pages to release from a file cache in system memory. The release-behind component employs a history buffer to determine which file pages to release from the file cache to create room for a current page access. The history buffer stores entries that identify respective pages for which a page fault occurred. For each identified page, the history buffer stores respective repage information that indicates if a repage fault occurred for such page. The release-behind component identifies a candidate previous page for release from the file cache. The release-behind component checks the history buffer to determine if a repage fault occurred for that page. If so, the release-behind component does not discard the candidate previous page from the cache; otherwise, it discards the candidate previous page. | 05-29-2014 |
20140223108 | HARDWARE PREFETCH MANAGEMENT FOR PARTITIONED ENVIRONMENTS - This disclosure includes a method for managing the hardware prefetch policy of a partition in a partitioned environment. The method includes dispatching a virtual processor on a physical processor of a first node, assigning the virtual processor a home memory partition in a memory of a second node, determining whether the first node and the second node are different nodes, disabling hardware prefetch for the virtual processor when they are different nodes, and enabling hardware prefetch when they are the same physical node. | 08-07-2014 |
20140223109 | HARDWARE PREFETCH MANAGEMENT FOR PARTITIONED ENVIRONMENTS - This disclosure includes a method for managing the hardware prefetch policy of a partition in a partitioned environment. The method includes dispatching a virtual processor on a physical processor of a first node, assigning the virtual processor a home memory partition in a memory of a second node, determining whether the first node and the second node are different nodes, disabling hardware prefetch for the virtual processor when they are different nodes, and enabling hardware prefetch when they are the same physical node. | 08-07-2014 |
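
The skip-then-evict policy of 20110153949 above can be made concrete with a minimal C sketch. Everything here is an illustrative assumption rather than a detail from the application: the 4-way set, the LRU ordering, the skip limit of 2, and all identifiers.

```c
/* Minimal sketch of skip-then-evict replacement; set size, skip limit,
 * and all names are illustrative assumptions. */
#include <stdbool.h>
#include <stdio.h>

#define SET_WAYS  4
#define MAX_SKIPS 2   /* assumed: times a protected victim may be spared */

struct entry {
    unsigned tag;
    bool     large_page;  /* protected ("more valuable") entry */
    unsigned skips;       /* times this entry was skipped as victim */
    unsigned lru;         /* lower value = less recently used */
};

/* Pick a victim way: take the LRU entry, but spare a protected
 * (large-page) victim up to MAX_SKIPS times, evicting the LRU
 * unprotected (small-page) entry instead. */
static int pick_victim(struct entry set[SET_WAYS])
{
    int lru_way = 0, lru_small = -1;
    for (int w = 1; w < SET_WAYS; w++)
        if (set[w].lru < set[lru_way].lru)
            lru_way = w;
    if (!set[lru_way].large_page || set[lru_way].skips >= MAX_SKIPS)
        return lru_way;                 /* generic replacement applies */
    for (int w = 0; w < SET_WAYS; w++)  /* find the LRU unprotected entry */
        if (!set[w].large_page &&
            (lru_small < 0 || set[w].lru < set[lru_small].lru))
            lru_small = w;
    if (lru_small < 0)
        return lru_way;                 /* nothing unprotected: evict anyway */
    set[lru_way].skips++;               /* record that we spared it */
    return lru_small;
}

int main(void)
{
    /* Way 0 holds the protected large-page entry and is also LRU. */
    struct entry set[SET_WAYS] = {
        { .tag = 0xA, .large_page = true,  .lru = 0 },
        { .tag = 0xB, .large_page = false, .lru = 1 },
        { .tag = 0xC, .large_page = false, .lru = 2 },
        { .tag = 0xD, .large_page = false, .lru = 3 },
    };
    /* LRU state is deliberately left unchanged between misses so the
     * skip counter is what finally lets way 0 be evicted. */
    for (int miss = 0; miss < 3; miss++)
        printf("miss %d evicts way %d\n", miss, pick_victim(set));
    return 0;
}
```

The first two misses fall through to the least-recently-used small-page entry; only the third miss, once the skip limit is exhausted, evicts the protected large-page entry.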
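The dynamically variable pre-snooze delay of 20110246800 reduces to a small feedback loop. The sketch below assumes microsecond units, a fixed adjustment step, and clamped bounds; all three are illustrative choices, not details from the application.

```c
/* Minimal sketch of a dynamically variable pre-snooze delay; units,
 * step size, and bounds are illustrative assumptions. */
#include <stdio.h>

#define MIN_DELAY_US   10
#define MAX_DELAY_US 1000
#define STEP_US        50

struct core_pm {
    unsigned idle_dispatches;  /* IDLE dispatches seen this interval */
    unsigned last_rate;        /* rate measured in the previous interval */
    unsigned delay_us;         /* wait this long after an IDLE dispatch
                                  before entering the snooze state */
};

static void note_idle_dispatch(struct core_pm *pm)
{
    pm->idle_dispatches++;
}

/* Once per processor clock interval: compare this interval's
 * IDLE-dispatch rate with the last one and move the delay the
 * opposite way, clamped to [MIN_DELAY_US, MAX_DELAY_US]. */
static void interval_tick(struct core_pm *pm)
{
    unsigned rate = pm->idle_dispatches;
    if (rate > pm->last_rate)          /* idling more: snooze sooner */
        pm->delay_us = pm->delay_us > MIN_DELAY_US + STEP_US
                     ? pm->delay_us - STEP_US : MIN_DELAY_US;
    else if (rate < pm->last_rate)     /* idling less: stay awake longer */
        pm->delay_us = pm->delay_us + STEP_US < MAX_DELAY_US
                     ? pm->delay_us + STEP_US : MAX_DELAY_US;
    pm->last_rate = rate;
    pm->idle_dispatches = 0;
}

int main(void)
{
    struct core_pm pm = { .delay_us = 500 };
    note_idle_dispatch(&pm);
    note_idle_dispatch(&pm);
    interval_tick(&pm);                        /* rate rose 0 -> 2 */
    printf("delay now %u us\n", pm.delay_us);  /* 450 */
    interval_tick(&pm);                        /* rate fell 2 -> 0 */
    printf("delay now %u us\n", pm.delay_us);  /* 500 */
    return 0;
}
```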
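The contention-aware power management of 20120311605 can likewise be sketched as a counter plus a two-state decision. The per-element counter, the atomics, and the state names below are assumptions for illustration.

```c
/* Minimal sketch of lock-contention-aware power management, assuming a
 * per-processing-element counter of threads queued for hand-off locks;
 * the two-state model and all names are illustrative. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

enum pstate { FULL_POWER, LOW_POWER };

struct element {
    atomic_uint handoff_waiters;  /* threads waiting for hand-off locks here */
    atomic_bool runnable_ready;   /* is any thread ready to run here? */
    enum pstate state;
};

/* Re-evaluate power state: stay at full power while any hand-off waiter
 * or runnable thread exists; otherwise drop to the low power state. */
static void update_pstate(struct element *e)
{
    if (atomic_load(&e->handoff_waiters) > 0 ||
        atomic_load(&e->runnable_ready))
        e->state = FULL_POWER;
    else
        e->state = LOW_POWER;
}

static void enqueue_waiter(struct element *e)
{
    atomic_fetch_add(&e->handoff_waiters, 1);
    update_pstate(e);             /* count > 0: returns to full power */
}

static void dequeue_waiter(struct element *e)
{
    atomic_fetch_sub(&e->handoff_waiters, 1);
    update_pstate(e);             /* count == 0 and idle: may power down */
}

int main(void)
{
    struct element e = { .state = FULL_POWER };
    enqueue_waiter(&e);
    printf("state=%d\n", e.state);   /* 0: FULL_POWER, one waiter queued */
    dequeue_waiter(&e);
    printf("state=%d\n", e.state);   /* 1: LOW_POWER, no waiter, none ready */
    atomic_store(&e.runnable_ready, true);
    update_pstate(&e);
    printf("state=%d\n", e.state);   /* 0: FULL_POWER, a thread became ready */
    return 0;
}
```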
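The repage-history test of 20140149672/20140149675 can be rendered as a small lookup structure. The direct-mapped history below, its size, and the page-ID hashing are illustrative assumptions, not the application's data structure.

```c
/* Minimal sketch of repage-history-guided release-behind, assuming a
 * small direct-mapped history of recently faulted file pages. */
#include <stdbool.h>
#include <stdio.h>

#define HIST_SLOTS 64

struct hist_entry { unsigned long page_id; bool repaged; bool valid; };
static struct hist_entry hist[HIST_SLOTS];

/* Record a page fault; a second fault on a page already in the history
 * is a repage fault, so mark the entry. */
static void note_fault(unsigned long page_id)
{
    struct hist_entry *h = &hist[page_id % HIST_SLOTS];
    if (h->valid && h->page_id == page_id)
        h->repaged = true;          /* faulted again: repage fault */
    else
        *h = (struct hist_entry){ .page_id = page_id, .valid = true };
}

/* Release-behind decision: keep a candidate page cached if the history
 * shows it suffered a repage fault; otherwise (including pages with no
 * history entry) it is safe to discard. */
static bool should_discard(unsigned long page_id)
{
    struct hist_entry *h = &hist[page_id % HIST_SLOTS];
    return !(h->valid && h->page_id == page_id && h->repaged);
}

int main(void)
{
    note_fault(7);             /* first fault on page 7 */
    note_fault(7);             /* faults again: repage fault recorded */
    note_fault(9);             /* single fault on page 9 */
    printf("discard page 7? %s\n", should_discard(7) ? "yes" : "no"); /* no  */
    printf("discard page 9? %s\n", should_discard(9) ? "yes" : "no"); /* yes */
    return 0;
}
```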
Patent application number | Description | Published |
20090055623 | Method and Apparatus for Supporting Shared Library Text Replication Across a Fork System Call - A fork system call by a first process is detected. A second process is created as a replication of the first process in a second affinity domain. If a replication of the shared library is present in the second affinity domain, the effective addresses of that replication are mapped, using a mapping mechanism of the present invention, to physical addresses in the second affinity domain. | 02-26-2009 |
20090182893 | CACHE COHERENCE IN A VIRTUAL MACHINE MANAGED SYSTEM - A method, a system, and computer readable program code for managing cache coherence in a virtual machine managed system are provided. In response to a processor issuing a message to be broadcast, a determination is made as to whether the processor is part of a virtual domain. In response to a determination that the processor is part of the virtual domain, the message and a first bit mask are sent from a source node to a destination node. In response to receiving the message and the first bit mask, one of a primary link or a secondary link is selected to send the message and the first bit mask over, forming a selected link. The message and the first bit mask are sent to the destination node over the selected link. | 07-16-2009 |
20090300317 | SYSTEM AND METHOD FOR OPTIMIZING INTERRUPT PROCESSING IN VIRTUALIZED ENVIRONMENTS - An approach is provided that retrieves a time spent value corresponding to a selected partition that is selected from a group of partitions included in a virtualized environment running on a computer system. The virtualized environment is provided by a Hypervisor. The time spent value corresponds to an amount of time the selected partition has spent processing interrupts. A number of virtual CPUs have been assigned to the selected partition. The time spent value (e.g., a percentage of the time that the selected partition spends processing interrupts) is compared to one or more interrupt threshold values. If the comparison reveals that the time that the partition is spending processing interrupts exceeds a threshold, then the number of virtual CPUs assigned to the selected partition is increased. (A sketch of this mechanism appears after this table.) | 12-03-2009 |
20100287319 | ADJUSTING PROCESSOR UTILIZATION DATA IN POLLING ENVIRONMENTS - A method, system, and computer usable program product for adjusting processor utilization data in polling environments are provided in the illustrative embodiments. An amount of a computing resource consumed during polling performed by a polling application over a predetermined period is received at a processor in a data processing system from the polling application executing in the data processing system. The amount forms a polling amount of the computing resource. Using the polling amount of the computing resource, another amount of the computing resource, consumed for performing meaningful work, is determined. The other amount forms a work amount of the computing resource. Using the work amount of the computing resource, an adjusted utilization of the computing resource is computed over a utilization interval. The adjusted utilization data is saved. | 11-11-2010 |
20100287339 | DEMAND BASED PARTITIONING OF MICROPROCESSOR CACHES - Associativity of a multi-core processor cache memory to a logical partition is managed and controlled by receiving a plurality of unique logical processing partition identifiers into registration of a multi-core processor, each identifier being associated with a logical processing partition on one or more cores of the multi-core processor; responsive to a shared cache memory miss, identifying a position in a cache directory for data associated with the address, the shared cache memory being multi-way set associative; associating a new cache line entry with the data and one of the registered unique logical processing partition identifiers; modifying the cache directory to reflect the association; and caching the data at the new cache line entry, wherein said shared cache memory is effectively shared on a line-by-line basis among said plurality of logical processing partitions of said multi-core processor. | 11-11-2010 |
20100299655 | Determining Performance of a Software Entity - Methods, systems, and products for determining performance of a software entity running on a data processing system. The method comprises allowing extended execution of the software entity without monitoring code. The method also comprises intermittently sampling behavior data for the software entity. Intermittently sampling behavior data may be carried out by injecting monitoring code into the software entity to instrument the software entity, collecting behavior data by utilizing the monitoring code, and removing the monitoring code. The method also comprises repeatedly performing iterations of the allowing and sampling steps until collected behavior data is sufficient for diagnosing performance of the software entity. The method may further comprise analyzing the collected behavior data to diagnose performance of the software entity. | 11-25-2010 |
20110113214 | INFORMATION HANDLING SYSTEM MEMORY MANAGEMENT - An information handling system (IHS) loads an application that may include startup code and steady state operation code. The IHS allocates one region of system memory to the startup code and another region of system memory to the steady state operation code. A programmer inserts a memory release call command at the location that marks the end of execution of the startup code. After executing the startup code, the operating system receives the memory release call command. In response, the operating system releases or de-allocates the region of memory that the IHS previously assigned to the startup code. This makes the released memory available to code other than the startup code, such as other code pages and library pages. | 05-12-2011 |
20120072676 | SELECTIVE MEMORY COMPRESSION FOR MULTI-THREADED APPLICATIONS - A method, system, and computer usable program product for selective memory compression for multi-threaded applications are provided in the illustrative embodiments. An identification of a memory region that is shared by a plurality of threads in an application is received at a first entity in a data processing system. The first entity provides, to a second entity in the data processing system, a request to keep the memory region uncompressed when compressing at least one of a plurality of memory regions that comprise the memory region. | 03-22-2012 |
20120144121 | Demand Based Partitioning of Microprocessor Caches - Associativity of a multi-core processor cache memory to a logical partition is managed and controlled by receiving a plurality of unique logical processing partition identifiers into registration of a multi-core processor, each identifier being associated with a logical processing partition on one or more cores of the multi-core processor; responsive to a shared cache memory miss, identifying a position in a cache directory for data associated with the address, the shared cache memory being multi-way set associative; associating a new cache line entry with the data and one of the registered unique logical processing partition identifiers; modifying the cache directory to reflect the association; and caching the data at the new cache line entry, wherein the shared cache memory is effectively shared on a line-by-line basis among the plurality of logical processing partitions of the multi-core processor. | 06-07-2012 |
20120151146 | Demand Based Partitioning of Microprocessor Caches - Associativity of a multi-core processor cache memory to a logical partition is managed and controlled by receiving a plurality of unique logical processing partition identifiers into registration of a multi-core processor, each identifier being associated with a logical processing partition on one or more cores of the multi-core processor; responsive to a shared cache memory miss, identifying a position in a cache directory for data associated with the address, the shared cache memory being multi-way set associative; associating a new cache line entry with the data and one of the registered unique logical processing partition identifiers; modifying the cache directory to reflect the association; and caching the data at the new cache line entry, wherein said shared cache memory is effectively shared on a line-by-line basis among said plurality of logical processing partitions of said multi-core processor. | 06-14-2012 |
20120204186 | PROCESSOR RESOURCE CAPACITY MANAGEMENT IN AN INFORMATION HANDLING SYSTEM - An operating system or virtual machine of an information handling system (IHS) initializes a resource manager to provide processor resource utilization management during workload or application execution. The resource manager captures short term interval (STI) and long term interval (LTI) processor resource utilization data and stores that utilization data within an information store of the virtual machine. If a capacity on demand mechanism is enabled, the resource manager modifies a reserved capacity value. The resource manager selects previous STI and LTI values for comparison with current resource utilization and may apply a safety margin to generate a reserved capacity or target resource utilization value for the next short term interval (STI). The hypervisor may modify existing virtual processor allocation to match the target resource utilization. | 08-09-2012 |
20120210331 | PROCESSOR RESOURCE CAPACITY MANAGEMENT IN AN INFORMATION HANDLING SYSTEM - An operating system or virtual machine of an information handling system (IHS) initializes a resource manager to provide processor resource utilization management during workload or application execution. The resource manager captures short term interval (STI) and long term interval (LTI) processor resource utilization data and stores that utilization data within an information store of the virtual machine. If a capacity on demand mechanism is enabled, the resource manager modifies a reserved capacity value. The resource manager selects previous STI and LTI values for comparison with current resource utilization and may apply a safety margin to generate a reserved capacity or target resource utilization value for the next short term interval (STI). The hypervisor may modify existing virtual processor allocation to match the target resource utilization. | 08-16-2012 |
20120297216 | DYNAMICALLY SELECTING ACTIVE POLLING OR TIMED WAITS - Dynamically selecting active polling or timed waits by a server in a clustered system includes determining a load ratio of a processor of the server, calculated as the ratio of the instantaneous run queue occupancy to the number of cores of the processor. The processor is occupied by a first runnable thread that requires a message response. A determination may be made whether power management is enabled on the processor, an instantaneous state may be determined based on the load ratio and whether power management is enabled, and a state process corresponding to the instantaneous state may be executed. (A sketch of this selection appears after this table.) | 11-22-2012 |
20130179616 | Partitioned Shared Processor Interrupt-intensive Task Segregator - Interrupt-intensive and interrupt-driven processes are managed among a plurality of virtual processors, wherein each virtual processor is associated with a physical processor, wherein each physical processor may be associated with a plurality of virtual processors, and wherein each virtual processor is tasked to execute one or more of the processes, by determining which of a plurality of the processes executing among a plurality of virtual processors are being or have been driven by at least a minimum count of interrupts over a period of operational time; selecting a subset of the plurality of virtual processors to form a sequestration pool; migrating the interrupt-intensive processes on to the sequestration pool of virtual processors; and commanding by a computer a bias in delivery or routing of the interrupts to the sequestration pool of virtual processors. | 07-11-2013 |
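
The interrupt-load check of 20090300317 above amounts to comparing a time-spent percentage against a threshold and growing the partition's virtual-CPU count when it is exceeded. In this C sketch, the 20% threshold, the one-vCPU increment, and all names are assumptions for illustration.

```c
/* Minimal sketch of the interrupt-load check; threshold, increment,
 * and all names are illustrative assumptions. */
#include <stdio.h>

#define INTR_THRESHOLD_PCT 20.0  /* assumed trigger level */

struct partition {
    const char *name;
    unsigned    vcpus;          /* virtual CPUs currently assigned */
    double      intr_time_pct;  /* % of time spent processing interrupts */
};

/* Grow the partition's virtual-CPU count when interrupt processing
 * consumes more than the threshold share of its time. */
static void balance_vcpus(struct partition *p)
{
    if (p->intr_time_pct > INTR_THRESHOLD_PCT) {
        p->vcpus++;
        printf("%s: interrupts at %.1f%%, raising vCPUs to %u\n",
               p->name, p->intr_time_pct, p->vcpus);
    }
}

int main(void)
{
    struct partition lpar = { "lpar1", 4, 27.5 };
    balance_vcpus(&lpar);   /* over threshold: adds a vCPU */
    return 0;
}
```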
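The poll-versus-wait selection of 20120297216 above hinges on one formula: load ratio = instantaneous run-queue occupancy / core count. The sketch below assumes a threshold of 1.0 and that enabled power management always forces the timed wait; both are assumptions for illustration, not details from the application.

```c
/* Minimal sketch of poll-vs-timed-wait selection; the 1.0 threshold and
 * the power-management rule are illustrative assumptions. */
#include <stdbool.h>
#include <stdio.h>

enum wait_mode { ACTIVE_POLL, TIMED_WAIT };

/* Under low load, an idle-ish core can afford to spin for the message
 * response; under high load, or with power management enabled, a timed
 * wait frees the core for other runnable threads. */
static enum wait_mode select_wait_mode(unsigned runq_occupancy,
                                       unsigned cores,
                                       bool power_mgmt_enabled)
{
    double load_ratio = (double)runq_occupancy / (double)cores;
    if (!power_mgmt_enabled && load_ratio < 1.0)
        return ACTIVE_POLL;
    return TIMED_WAIT;
}

int main(void)
{
    printf("%d\n", select_wait_mode(2, 8, false));  /* 0: ACTIVE_POLL */
    printf("%d\n", select_wait_mode(16, 8, false)); /* 1: TIMED_WAIT  */
    printf("%d\n", select_wait_mode(2, 8, true));   /* 1: TIMED_WAIT  */
    return 0;
}
```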