Patent application number | Description | Published |
20080256302 | Programmable Data Prefetching - A method, computer program product, and system are provided for prefetching data into a cache memory. As a program is executed, an object identifier of a first object of the program is obtained. A lookup operation is performed on a data structure to determine if the object identifier is present in the data structure. Responsive to the object identifier being present in the data structure, a referenced object identifier is retrieved that is referenced by the object identifier. Then, the data associated with the referenced object identifier is prefetched from main memory into the cache memory. | 10-16-2008
20110113406 | SYMMETRIC MULTI-PROCESSOR LOCK TRACING - A symmetric multi-processor (SMP) system includes an SMP processor and operating system (OS) software that performs automatic SMP lock tracing analysis on an executing application program. System administrators, users or other entities initiate an automatic SMP lock tracing analysis. A particular thread of the executing application program requests and obtains a lock for a memory address pointer. A subsequent thread requests the same memory address pointer lock prior to the particular thread's release of that lock. The subsequent thread begins to spin, waiting for the release of that address pointer lock. When the subsequent thread reaches a predetermined maximum amount of wait time, MAXSPIN, a lock testing tool in the kernel of the OS detects the MAXSPIN condition. The OS performs a test to determine whether the subsequent thread and address pointer lock meet the criteria set during initiation of the automatic lock trace method. The OS initiates an SMP lock trace capture automatically if all criteria, or the arguments of the lock trace method, are met. System administrators, software programmers, users or other entities interpret the results of the SMP lock tracing method, which the OS stores in a trace table, to determine performance improvements for the executing application program. | 05-12-2011
20120005580 | Autonomic I/O Tracing and Performance Tuning - In an embodiment, a kernel performs autonomic input/output tracing and performance tuning. A first table is provided in a device driver framework and a second table in a kernel of a computer. An input/output device monitoring tool is provided in the device driver framework. A plurality of instructions in the kernel compares each value in the first table with each value in the second table. Responsive to a match of a value in the first table and a value in the second table, the kernel automatically runs a command line to perform a system trace, a component trace, or a tuning task. The first table is populated with a plurality of values calculated from a plurality of data in a plurality of device memories and in the controller memory and the second table is populated in accordance with a second plurality of inputs to the command line interface. | 01-05-2012 |
20120102499 | OPTIMIZING THE PERFORMANCE OF HYBRID CPU SYSTEMS BASED UPON THE THREAD TYPE OF APPLICATIONS TO BE RUN ON THE CPUs - A hybrid CPU system wherein the plurality of processors forming the hybrid system are initially undifferentiated by type or class. Responsive to the sampling of the threads of a received and loaded computer application to be executed, the function of at least one of the processors is changed so that the threads of the sampled application may be most effectively processed/run on the hybrid system. | 04-26-2012 |
20120124299 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR EXTENDING A CACHE USING PROCESSOR REGISTERS - According to one aspect of the present disclosure, a method and technique for using processor registers for extending a cache structure is disclosed. The method includes identifying a register of a processor, identifying a cache to extend, allocating the register as an extension of the cache, and setting an address of the register as corresponding to an address space in the cache. | 05-17-2012 |
20120303591 | MANAGING ROLLBACK IN A TRANSACTIONAL MEMORY ENVIRONMENT - According to one aspect of the present disclosure, a method and technique for managing rollback in a transactional memory environment is disclosed. The method includes, responsive to detecting a begin transaction directive by a processor supporting transactional memory processing, detecting an access of a first memory location not needing rollback and indicating that the first memory location does not need to be rolled back while detecting an access to a second memory location and indicating that a rollback will be required. The method also includes, responsive to detecting an end transaction directive after the begin transaction directive and a conflict requiring a rollback, omitting a rollback of the first memory location while performing rollback on the second memory location. | 11-29-2012 |
20120303938 | PERFORMANCE IN PREDICTING BRANCHES - A method, data processing system, and computer program product for processing instructions. The instructions are processed by a processor unit while using a first table in a plurality of tables to predict a set of instructions needed by the processor unit after processing of a conditional instruction. An identification is formed that a rate of success in correctly predicting the set of instructions when using the first table is less than a threshold number. A sequence of the instructions being processed by the processor unit is searched for an instruction that matches a marker in a set of markers for identifying when to use the plurality of tables. An identification is formed that the instruction matches the marker. A second table from the plurality of tables referenced by the marker is identified. The second table is used in place of the first table. | 11-29-2012
20120304002 | MANAGING ROLLBACK IN A TRANSACTIONAL MEMORY ENVIRONMENT - A system and technique for managing rollback in a transactional memory environment is disclosed. The system includes a processor, a transactional memory, and a transactional memory manager (TMM) configured to perform a rollback on the transactional memory. The TMM is configured to, responsive to detecting a begin transaction directive by the processor, detect an access of a first memory location of the transactional memory not needing rollback and indicate that the first memory location does not need to be rolled back while detecting an access to a second memory location of the transactional memory and indicating that a rollback will be required. The TMM is also configured to, responsive to detecting an end transaction directive after the begin transaction directive and a conflict requiring a rollback, omit a rollback of the first memory location while performing rollback on the second memory location. | 11-29-2012 |
20130054897 | Use of Cache Statistics to Ration Cache Hierarchy Access - A method, system and program are provided for controlling access to a specified cache level in a cache hierarchy in a multiprocessor system by evaluating cache statistics for a specified application at the specified cache level against predefined criteria, preventing the specified application from accessing the specified cache level if it does not meet the predefined criteria. | 02-28-2013
20140082277 | EFFICIENT PROCESSING OF CACHE SEGMENT WAITERS - For a plurality of input/output (I/O) operations waiting to assemble complete data tracks from data segments, a process, separate from a process responsible for the data assembly into the complete data tracks, is initiated for waking a predetermined number of the waiting I/O operations. A total number of I/O operations to be awoken at each of an iterated instance of the waking is limited. | 03-20-2014 |
20150026409 | DEFERRED RE-MRU OPERATIONS TO REDUCE LOCK CONTENTION - Data operations, requiring a lock, are batched into a set of operations to be performed on a per-core basis. A global lock for the set of operations is periodically acquired, the set of operations is performed, and the global lock is freed so as to avoid excessive duty cycling of lock and unlock operations in the computing storage environment. | 01-22-2015 |
20150052529 | EFFICIENT TASK SCHEDULING USING A LOCKING MECHANISM - For efficient task scheduling using a locking mechanism, a new task is allowed to spin on the locking mechanism if a number of tasks spinning on the locking mechanism is less than a predetermined threshold for parallel operations requiring locks between the multiple threads. | 02-19-2015 |
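The lookup-table prefetch in 20080256302 above can be sketched in miniature. This is a hypothetical illustration only, not the patented implementation: main memory and the cache are modeled as plain dicts, and the names `PrefetchingCache`, `reference_table`, and `load` are invented for the sketch.

```python
class PrefetchingCache:
    """Toy model of table-driven prefetch: a data structure maps an object
    identifier to the identifier of an object it references; when the first
    identifier is accessed, the referenced object's data is prefetched from
    "main memory" (here a dict) into the cache before it is demanded."""

    def __init__(self, main_memory, reference_table):
        self.main_memory = main_memory          # object id -> data
        self.reference_table = reference_table  # object id -> referenced id
        self.cache = {}

    def load(self, object_id):
        # Normal demand load of the requested object.
        if object_id not in self.cache:
            self.cache[object_id] = self.main_memory[object_id]
        # Lookup operation: if this identifier is present in the table,
        # prefetch the object it references.
        referenced = self.reference_table.get(object_id)
        if referenced is not None and referenced not in self.cache:
            self.cache[referenced] = self.main_memory[referenced]
        return self.cache[object_id]
```

For example, loading object "A" with a table entry "A" → "B" pulls both "A" and "B" into the cache, so a later access to "B" hits without touching main memory.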
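The per-core batching idea behind the deferred re-MRU entries (20150026409) can likewise be sketched. This is a simplified model under stated assumptions, not the patented code: the cache list, the per-core batches, and the names `BatchedLRUList`, `touch`, and `flush` are invented here, and every touched item is assumed to already be in the list.

```python
import threading
from collections import defaultdict, deque

class BatchedLRUList:
    """Toy model of deferred re-MRU: instead of taking the list lock on
    every move-to-front operation, each core records the operation in its
    own batch; the batches are drained under the global lock only
    periodically, amortizing the lock/unlock cost."""

    def __init__(self, items):
        self.order = list(items)           # front of list == most recently used
        self.global_lock = threading.Lock()
        self.batches = defaultdict(deque)  # core id -> deferred re-MRU ops

    def touch(self, core_id, item):
        # Deferred path: record the access without taking the global lock.
        self.batches[core_id].append(item)

    def flush(self):
        # Periodic path: acquire the global lock once, apply every batched
        # move-to-front, then release.
        with self.global_lock:
            for batch in self.batches.values():
                while batch:
                    item = batch.popleft()
                    self.order.remove(item)
                    self.order.insert(0, item)
```

The design trade-off the abstract describes falls out directly: `touch` is lock-free from the caller's perspective, at the cost of the list order being stale until the next `flush`.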
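The spin-threshold scheduling in 20150052529 can be sketched as well. Again a hypothetical illustration, not the patented mechanism: `SpinBudgetLock` and `max_spinners` are invented names, and a Python lock stands in for the kernel locking mechanism.

```python
import threading

class SpinBudgetLock:
    """Toy model of threshold-limited spinning: a task may busy-wait (spin)
    on the lock only while the number of current spinners is below a fixed
    threshold; otherwise it falls back to a blocking acquire."""

    def __init__(self, max_spinners=2):
        self.max_spinners = max_spinners
        self._spinners = 0
        self._meta = threading.Lock()  # protects the spinner count
        self._lock = threading.Lock()  # the lock tasks actually contend for

    def acquire(self):
        with self._meta:
            can_spin = self._spinners < self.max_spinners
            if can_spin:
                self._spinners += 1
        if can_spin:
            # Spin: repeatedly try the lock without blocking.
            while not self._lock.acquire(blocking=False):
                pass
            with self._meta:
                self._spinners -= 1
        else:
            # Spin budget exhausted: block until the lock is free.
            self._lock.acquire()

    def release(self):
        self._lock.release()
```

Mutual exclusion still comes from the underlying lock; the threshold only decides which waiting strategy each new task is allowed to use.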
Patent application number | Description | Published |
20100050008 | Estimating Power Consumption in a Computing Environment - A method for determining power consumption in a data storage system is provided. The method comprises determining data access patterns for at least a first storage device in a storage system based on operations performed by the first storage device; and calculating power consumption for the storage system by interpolating costs associated with the operations performed by the first storage device, wherein the cost associated with each operation is determined based on: (1) various levels of activities for the first storage device and a mix of workload characteristics, and (2) predetermined power consumption measurements obtained from one or more benchmarks for the same operations performed by a second storage device in a test environment. | 02-25-2010
20100287319 | ADJUSTING PROCESSOR UTILIZATION DATA IN POLLING ENVIRONMENTS - A method, system, and computer usable program product for adjusting processor utilization data in polling environments are provided in the illustrative embodiments. An amount of a computing resource consumed during polling performed by the polling application over a predetermined period is received at a processor in a data processing system from a polling application executing in the data processing system. The amount forms a polling amount of the computing resource. Using the polling amount of the computing resource, another amount of the computing resource consumed in performing a meaningful task is determined. The other amount forms a work amount of the computing resource. Using the work amount of the computing resource, an adjusted utilization of the computing resource is computed over a utilization interval. The data of the adjusted utilization is saved. | 11-11-2010
20140082231 | EFFICIENT PROCESSING OF CACHE SEGMENT WAITERS - For a plurality of input/output (I/O) operations waiting to assemble complete data tracks from data segments, a process, separate from a process responsible for the data assembly into the complete data tracks, is initiated for waking a predetermined number of the waiting I/O operations. A total number of I/O operations to be awoken at each of an iterated instance of the waking is limited. | 03-20-2014 |
20140082296 | DEFERRED RE-MRU OPERATIONS TO REDUCE LOCK CONTENTION - Data operations, requiring a lock, are batched into a set of operations to be performed on a per-core basis. A global lock for the set of operations is periodically acquired, the set of operations is performed, and the global lock is freed so as to avoid excessive duty cycling of lock and unlock operations in the computing storage environment. | 03-20-2014 |
20140156979 | Performance in Predicting Branches - A method for processing instructions. The instructions are processed by a processor unit while using a first table in a plurality of tables to predict a set of instructions needed by the processor unit after processing of a conditional instruction. An identification is formed that a rate of success in correctly predicting the set of instructions when using the first table is less than a threshold number. A sequence of the instructions being processed by the processor unit is searched for an instruction that matches a marker in a set of markers for identifying when to use the plurality of tables. An identification is formed that the instruction matches the marker. A second table from the plurality of tables referenced by the marker is identified. The second table is used in place of the first table. | 06-05-2014
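The marker-driven table switch in 20140156979 (and its sibling 20120303938 above) can be sketched at a high level. This is a hypothetical illustration under loose assumptions, not the patented predictor: tables are modeled as dicts mapping a branch address to a taken/not-taken guess, and `AdaptiveBranchPredictor`, `observe`, and the table/marker names are invented for the sketch.

```python
class AdaptiveBranchPredictor:
    """Toy model of switching prediction tables: predict with a first
    table; once the measured success rate drops below a threshold, an
    instruction matching a marker triggers a switch to the second table
    that the marker references."""

    def __init__(self, tables, markers, threshold=0.5):
        self.tables = tables      # table name -> {branch addr -> taken?}
        self.markers = markers    # marker instruction -> table name
        self.active = next(iter(tables))
        self.threshold = threshold
        self.hits = 0
        self.total = 0

    def predict(self, addr):
        return self.tables[self.active].get(addr, False)

    def update(self, addr, taken):
        # Record whether the active table predicted this branch correctly.
        self.total += 1
        if self.predict(addr) == taken:
            self.hits += 1

    def observe(self, instruction):
        # If the success rate is below the threshold and this instruction
        # matches a marker, switch to the referenced table and reset stats.
        if self.total and self.hits / self.total < self.threshold:
            target = self.markers.get(instruction)
            if target is not None:
                self.active = target
                self.hits = self.total = 0
```

Note the two conditions the abstract combines: a low success rate alone does not cause a switch; the switch happens only when a marker instruction is also seen in the stream.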
Patent application number | Description | Published |
20090208691 | Device and Method for Repairing Structural Components - A device for repairing a structural component. | 08-20-2009
20110045747 | Abrasive Article - An abrasive article is disclosed that is suitable for cleaning, sanding, scraping, or other such process of removing an outer layer or adherent matter. The abrasive article includes a plurality of abrasive particles at least partially embedded in a frozen liquid. | 02-24-2011 |
20120189807 | Device and Method for Repairing Structural Components - A device and method to repair a structural component. The device includes a plug that bonds to the structural component. The plug includes a flange and a solid shank extending from the flange, the solid shank being disposable in an opening of the structural component. The device further includes a disc that bonds to the structural component such that, when bonded, the disc covers the solid shank of the plug. The method includes bonding the plug to the structural component such that the solid shank of the plug is disposed in the opening of the structural component. The method further includes bonding the disc to the structural component. | 07-26-2012
20140299255 | SINGLE VACUUM DEBULK COMPOSITE PANEL REPAIR - A method of attaching a composite member to a structure. The method includes forming a laminate of fabric impregnated with resin; applying heat at a first temperature to the impregnated laminate; applying vacuum at a first pressure to the impregnated laminate to degas the resin and form a degassed, impregnated laminate; positioning the degassed, impregnated laminate on a structure; and curing the degassed, impregnated laminate on the structure by applying heat at a second temperature and by applying vacuum at a second pressure. | 10-09-2014
Patent application number | Description | Published |
20100243146 | Multi-Stage Debulk and Compaction of Thick Composite Repair Laminates - A method for fabricating a repair laminate for a composite part having an exposed surface includes applying a release film to the exposed surface and forming an uncured ply stack assembly on the release film. The uncured ply stack assembly is formed by forming and compacting a series of uncured ply stacks. The release film and ply stack assembly is then removed from the exposed surface. A bonding material is then applied to the exposed surface, and the uncured ply stack assembly is applied to the bonding material. The ply stack assembly and bonding material are then cured. | 09-30-2010 |
20100258235 | In-Situ, Multi-Stage Debulk, Compaction, and Single Stage Curing of Thick Composite Repair Laminates - A method for fabricating a repair laminate for a composite part having an exposed surface includes applying a bonding material to the exposed surface and forming an uncured ply stack assembly on the bonding material. The uncured ply stack assembly is formed by forming and compacting a series of uncured ply stacks. The ply stack assembly and bonding material are then cured. | 10-14-2010 |
20130043232 | Vacuum Assisted Conformal Shape Setting Device - A device and method for applying a treatment to a structure. The device includes a housing having a first membrane sealed to a second membrane, creating an airtight cavity therebetween for receiving a porous material and a working layer. The method includes conforming the shape of the housing to the structure and thereafter applying a treatment to the structure. | 02-21-2013
20130056131 | Single Stage Debulk and Cure of a Prepreg Material - An apparatus and method for repairing a damaged laminate, including cutting out a damaged section and replacing it with a repair laminate, then covering the repair laminate with a vacuum bag having a heater disposed therein. Heat and pressure are thereafter applied to the repair laminate with the heater and vacuum bag to adequately debulk and cure the repair laminate. | 03-07-2013
20130255856 | PROCESSES FOR REPAIRING COMPLEX LAMINATED COMPOSITES - Complex laminated composites may be repaired by removal of individual damaged plies through peeling in order to exploit the weaker interlaminar properties of these composites. Upon removal of the individual damaged plies through peeling, replacement plies may be added to restore the laminated composite. In addition, when damage extends through the thickness of a laminated composite, a plug may be used to allow plies to be replaced while maintaining contour within the repair region. A caul plate also may be used to stiffen and maintain the contour of a repair region while peeling and removing plies from the repair region. | 10-03-2013 |