Entries |
Document | Title | Date |
20080263279 | DESIGN STRUCTURE FOR EXTENDING LOCAL CACHES IN A MULTIPROCESSOR SYSTEM - A design structure embodied in a machine readable storage medium for designing, manufacturing, and/or testing a design for caching data in a multiprocessor system is provided. The design structure includes a multiprocessor system, which includes a first processor including a first cache associated therewith, a second processor including a second cache associated therewith, and a main memory to store data required by the first processor and the second processor, the main memory being controlled by a memory controller that is in communication with each of the first processor and the second processor through a bus, wherein the second cache associated with the second processor is operable to cache data from the main memory corresponding to a memory access request of the first processor. | 10-23-2008 |
20080270701 | Cluster-type storage system and managing method of the cluster-type storage system - A storage system | 10-30-2008 |
20080301368 | Recording controller and recording control method - Upon retrieving, after occurrence of replacement of a first cache, move out (MO) data that is a write back target, a second cache determines, based on data that is set in a control flag of a register, whether a new registration process of move in (MI) data with respect to a recording position of the MO data is completed. Upon determining that the new registration process is not completed, the second cache cancels the new registration process to ensure that a request of the new registration process is not output to a pipeline. | 12-04-2008 |
20090013130 | MULTIPROCESSOR SYSTEM AND OPERATING METHOD OF MULTIPROCESSOR SYSTEM - According to one aspect of embodiments, a multiprocessor system includes a plurality of processors, cache memories corresponding respectively to the processors, and a cache access controller. In response to an indirect access instruction from one of the processors, the cache access controller accesses at least one of the cache memories other than the cache memory corresponding to the processor that issued the instruction. Accordingly, even when one processor accesses data stored in a cache memory of another processor, data transfer between the cache memories is not required. Therefore, the latency of accesses to data shared by the plurality of processors can be reduced. Moreover, since communication between the cache memories is performed only at the time of executing indirect access instructions, the bus traffic between the cache memories can be reduced. | 01-08-2009 |
20090037658 | Providing an inclusive shared cache among multiple core-cache clusters - In one embodiment, the present invention includes a method for receiving requested data from a system interconnect interface in a first scalability agent of a multi-core processor including a plurality of core-cache clusters, storing the requested data in a line of a local cache of a first core-cache cluster including a requester core, and updating a cluster field and a core field in a vector of a tag array for the line. Other embodiments are described and claimed. | 02-05-2009 |
20090055589 | Cache memory system for a data processing apparatus - A data processing apparatus is provided having a cache memory | 02-26-2009 |
20090055590 | Storage system having function to backup data in cache memory - A storage system comprises a plurality of control modules having a plurality of cache memories respectively. One or more dirty data elements out of a plurality of dirty data elements stored in a first cache memory in a first control module are copied to a second cache memory in a second control module. The one or more dirty data elements stored in the second cache memory are backed up to a non-volatile storage resource. The dirty data elements backed up from the first cache memory to the non-volatile storage resource are dirty data elements other than the one or more dirty data elements of which copying has completed, out of the plurality of dirty data elements. | 02-26-2009 |
20090100227 | Processor architecture with wide operand cache - A programmable processor and method for improving the performance of processors by expanding at least two source operands, or a source and a result operand, to a width greater than the width of either the general purpose register or the data path width. The present invention provides operands which are substantially larger than the data path width of the processor by using the contents of a general purpose register to specify a memory address at which a plurality of data path widths of data can be read or written, as well as the size and shape of the operand. In addition, several instructions and apparatus for implementing these instructions are described which obtain performance advantages if the operands are not limited to the width and accessible number of general purpose registers. | 04-16-2009 |
20090106494 | ALLOCATING SPACE IN DEDICATED CACHE WAYS - A system comprises a processor core and a cache coupled to the core and comprising at least one cache way dedicated to the core, where the cache way comprises multiple cache lines. The system also comprises a cache controller coupled to the cache. Upon receiving a data request from the core, the cache controller determines whether the cache has a predetermined amount of invalid cache lines. If the cache does not have the predetermined amount of invalid cache lines, the cache controller is adapted to allocate space in the cache for new data, where the space is allocable in the at least one cache way dedicated to the core. | 04-23-2009 |
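The allocation policy in the entry above can be sketched as follows. This is an illustrative simulation, not the patent's implementation: the class name, the invalid-line threshold, and the eviction fallback are all assumptions.

```python
# Sketch of an "allocate in dedicated ways when invalid lines run low" policy.
# When plenty of invalid (empty) lines remain, a new line may go in any way;
# below the threshold, space is carved only out of the way(s) dedicated to
# the requesting core. All names and parameters here are hypothetical.

class WayPartitionedCache:
    def __init__(self, num_ways, lines_per_way, dedicated_ways, min_invalid):
        self.num_ways = num_ways
        self.lines_per_way = lines_per_way
        self.dedicated_ways = set(dedicated_ways)  # ways reserved for the core
        self.min_invalid = min_invalid
        # None marks an invalid (empty) line
        self.lines = [[None] * lines_per_way for _ in range(num_ways)]

    def invalid_count(self):
        return sum(line is None for way in self.lines for line in way)

    def allocate(self, tag):
        """Place a new line and return its (way, index)."""
        if self.invalid_count() >= self.min_invalid:
            ways = range(self.num_ways)          # enough room: any way is fine
        else:
            ways = sorted(self.dedicated_ways)   # scarce: dedicated ways only
        for w in ways:
            for i, line in enumerate(self.lines[w]):
                if line is None:
                    self.lines[w][i] = tag
                    return (w, i)
        # No invalid line left: evict from a dedicated way (simplified policy)
        w = sorted(self.dedicated_ways)[0]
        self.lines[w][0] = tag
        return (w, 0)
```

For example, a 2-way cache with way 1 dedicated and `min_invalid=4` allocates the first line wherever it fits, then falls back to the dedicated way once invalid lines become scarce.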
20090138659 | MECHANISM TO ACCELERATE REMOVAL OF STORE OPERATIONS FROM A QUEUE - A processor includes at least one processing core. The processing core includes a memory cache, a store queue, and a post-retirement store queue. The processing core retires a store in the store queue and conveys the store to the memory cache and the post-retirement store queue, in response to retiring the store. In one embodiment, the store queue and/or the post-retirement store queue is a first-in, first-out queue. In a further embodiment, to convey the store to the memory cache, the processing core obtains exclusive access to a portion of the memory cache targeted by the store. The processing core buffers the store in a coalescing buffer and merges with the store, one or more additional stores and/or loads targeted to the portion of the memory cache targeted by the store prior to writing the store to the memory cache. | 05-28-2009 |
20090182942 | Extract Cache Attribute Facility and Instruction Therefore - A facility and cache machine instruction of a computer architecture for specifying a target cache level and a target cache attribute of interest for obtaining a cache attribute of one or more target caches. The requested cache attribute of the target cache(s) is saved in a register. | 07-16-2009 |
20090198895 | CONTROL METHOD, MEMORY, AND PROCESSING SYSTEM UTILIZING THE SAME - A control method for a memory is provided. The memory includes a plurality of storage units, each storing a plurality of bits. In a read mode, a read command is provided to the memory. The value of a most significant bit (MSB) of each storage unit is obtained and recorded. The value of the most significant bits is output. The value of a neighboring bit of each storage unit is obtained and recorded. The neighboring bit neighbors the most significant bit. The value of the neighboring bits is output. | 08-06-2009 |
20090198896 | DUAL WRITING DEVICE AND ITS CONTROL METHOD - A first storage system sets a storage area of a first disk drive as a first volume, and misrepresents an identifier of the storage system and an identifier of the first volume. A second storage system sets a storage area of a second disk drive as a second volume, and misrepresents an identifier of the storage system and an identifier of the second volume. The first storage system copies data of the first volume in the second volume. The second storage system copies data of the second volume in the first volume. A management computer determines whether there is a match between the data of the first volume and the data of the second volume. When it is determined that there is no match, the host computer accesses only the one of the first volume and the second volume that stores the latest data. | 08-06-2009 |
20090204762 | Self Test Apparatus for Identifying Partially Defective Memory - A computing system is provided which includes a processor having a cache memory. The cache memory includes a plurality of independently configurable subdivisions, each subdivision including a memory array. A service element (SE) of the computing system is operable to cause a built-in-self-test (BIST) to be executed to test the cache memory, the BIST being operable to determine whether any of the subdivisions is defective. When it is determined that one of the subdivisions of the cache memory determined defective by the BIST is non-repairable, the SE logically deletes the defective subdivision from the system configuration, and the SE is operable to permit the processor to operate without the logically deleted subdivision. The SE is further operable to determine that the processor is defective when a number of the defective subdivisions exceeds a threshold. | 08-13-2009 |
20090204763 | SYSTEM AND METHOD FOR AVOIDING DEADLOCKS WHEN PERFORMING STORAGE UPDATES IN A MULTI-PROCESSOR ENVIRONMENT - A system and method for avoiding deadlocks when performing storage updates in a multi-processor environment. The system includes a processor having a local cache, a store queue having a temporary buffer with capability to reject exclusive cross-interrogates (XI) while an interrogated cache line is owned exclusive and is to be stored, and a mechanism for performing a method. The method includes setting the processor into a slow mode. A current instruction that includes a data store having one or more target lines is received. The current instruction is executed, with the executing including storing results associated with the data store into the temporary buffer. The store queue is prevented from rejecting an exclusive XI corresponding to the target lines of the current instruction. Each target line is acquired with a status of exclusive ownership, and the contents from the temporary buffer are written to each target line after instruction completion. | 08-13-2009 |
20090210624 | 3-Dimensional L2/L3 Cache Array to Hide Translation (TLB) Delays - Embodiments of the invention provide a look-aside-look-aside buffer (LLB) configured to retain a portion of the real addresses in a translation look-aside buffer (TLB) to allow prefetching of data from a cache. A subset of real address bits associated with an effective address may be retrieved relatively quickly from the LLB, thereby allowing access to the cache before the complete address translation is available and reducing cache access latency. | 08-20-2009 |
20090307430 | SHARING AND PERSISTING CODE CACHES - Computer code from an application program comprising a plurality of modules that each comprise a separately loadable file is code cached in a shared and persistent caching system. A shared code caching engine receives native code comprising at least a portion of a single module of the application program, and stores runtime data corresponding to the native code in a cache data file in the non-volatile memory. The engine then converts the cache data file into a code cache file and enables the code cache file to be pre-loaded as a runtime code cache. These steps are repeated to store a plurality of separate code cache files at different locations in non-volatile memory. | 12-10-2009 |
20090327610 | Method and System for Conducting Intensive Multitask and Multiflow Calculation in Real-Time - The system for conducting intensive multitask and multistream calculation in real time comprises a central processor core (SPP) for supporting the system software and comprising a control unit (ESCU) for assigning threads of an application, the non-critical threads being run by the central processor core (SPP), whereas the intensive or specialized threads are assigned to an auxiliary processing part (APP) comprising a set of N auxiliary calculation units (APU | 12-31-2009 |
20100023694 | Memory access system, memory control apparatus, memory control method and program - A memory control apparatus disposed in a memory access system having a bus, a single storage unit with a bank structure and a bus arbitrating unit, includes: an access-request accepting means for accepting sequential access requests for data located at sequential addresses in the storage unit, sequential access requests for data located at discrete addresses in the storage unit as sequential access requests, or access requests for data located at sequential addresses in the storage unit which cannot be made into a single access request as sequential access requests; and an access-request rearranging means for rearranging sequential access requests accepted by the access-request accepting means in an order of banks of the storage unit within a range of access requests relating to either a data write request output from one of data processing units or a data read request output therefrom to control an access control of the storage unit. | 01-28-2010 |
20100030964 | METHOD AND SYSTEM FOR SECURING INSTRUCTION CACHES USING CACHE LINE LOCKING - A method and system is provided for securing micro-architectural instruction caches (I-caches). Securing an I-cache involves providing security critical instructions to indicate a security critical code section; and implementing an I-cache locking policy to prevent unauthorized eviction and replacement of security critical instructions in the I-cache. Securing the I-cache may further involve dynamically partitioning the I-cache into multiple logical partitions, and sharing access to the I-cache by an I-cache mapping policy that provides access to each I-cache partition by only one logical processor. | 02-04-2010 |
20100042785 | ADVANCED PROCESSOR WITH FAST MESSAGING NETWORK TECHNOLOGY - An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner. | 02-18-2010 |
20100082904 | Apparatus and method to harden computer system - In some embodiments, a non-volatile cache memory may include a segmented non-volatile cache memory configured to be located between a system memory and a mass storage device of an electronic system and a controller coupled to the segmented non-volatile cache memory, wherein the controller is configured to control utilization of the segmented non-volatile cache memory. The segmented non-volatile cache memory may include a file cache segment, the file cache segment to store complete files in accordance with a file cache policy, and a block cache segment, the block cache segment to store one or more blocks of one or more files in accordance with a block cache policy, wherein the block cache policy is different from the file cache policy. The controller may be configured to utilize the file cache segment in accordance with information related to the block cache segment and to utilize the block cache segment in accordance with information related to the file cache segment. Other embodiments are disclosed and claimed. | 04-01-2010 |
20100100681 | System on a chip for networking - A system on a chip for network devices. In one implementation, the system on a chip may include (integrated onto a single integrated circuit), a processor and one or more I/O devices for networking applications. For example, the I/O devices may include one or more network interface circuits for coupling to a network interface. In one embodiment, coherency may be enforced within the boundaries of the system on a chip but not enforced outside of the boundaries. | 04-22-2010 |
20100106911 | METHODS AND SYSTEMS FOR COMMUNICATION BETWEEN STORAGE CONTROLLERS - Methods and systems for communication between two storage controllers. A first storage controller specifies a special frame indicator in a frame of a protocol that is also used by the first storage controller to send a storage command to a storage device. The first storage controller transmits the frame to a second storage controller such that the frame comprises data in a payload field of the frame. | 04-29-2010 |
20100211742 | CONVEYING CRITICAL DATA IN A MULTIPROCESSOR SYSTEM - A system for conveying critical and non-critical words of multiple cache lines includes a first node interface of a first processing node receiving, from a first processor, a first request identifying a critical word of a first cache line and a second request identifying a critical word of a second cache line. The first node interface conveys requests corresponding to the first and second requests to a second node interface of a second processing node. The second node interface receives the corresponding requests and conveys the critical words of the first and second cache lines to the first processing node before conveying non-critical words of the first and second cache lines. | 08-19-2010 |
20100211743 | INFORMATION PROCESSING APPARATUS AND METHOD OF CONTROLLING SAME - Disclosed is an information processing apparatus equipped with first and second CPUs, as well as a method of controlling this apparatus. When the first CPU launches an operating system for managing a virtual memory area that includes a first cache area for a device, the first CPU generates specification data, which indicates the corresponding relationship between the first cache and a second cache for the device and provided in a main memory, and transfers the specification data to the second CPU. In accordance with the specification data, the second CPU transfers data, which has been stored in the device, to a physical memory corresponding to a cache to which the first CPU refers. As a result, the first CPU accesses the first cache area and is thereby capable of accessing the device at high speed. | 08-19-2010 |
20100241808 | CACHE-LINE AWARE COLLECTION FOR RUNTIME ENVIRONMENTS - Target data is allocated into caches of a shared-memory multiprocessor system during a runtime environment. The target data includes a plurality of data items that are allocated onto separate cache lines. Each data item is allocated on a separate cache line regardless of the size of the cache line of the system. The data items become members of wrapper types when the data items are value types. The runtime environment maintains a set of wrapper types of various sizes that are of typical cache line sizes. Garbage data is inserted into the cache line in cases where data items are reference types and data is stored on a managed heap. The allocation also configures garbage collectors in the runtime environment not to slide multiple data items onto the same cache line. Other examples are included where a developer can augment the runtime environment to be aware of cache line sizes. | 09-23-2010 |
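The one-item-per-cache-line idea in the entry above can be illustrated with a small layout calculation. This is an illustrative sketch, not the patent's mechanism: the 64-byte line size and all function names are assumptions.

```python
# Sketch of "each data item on its own cache line": every item's footprint
# is padded up to a whole number of lines, so no two items share a line.
# The 64-byte line size is an assumed value, not taken from the patent.

LINE_SIZE = 64  # assumed cache line size in bytes

def padded_size(item_size, line_size=LINE_SIZE):
    """Round an item's footprint up to a whole number of cache lines."""
    lines = -(-item_size // line_size)  # ceiling division
    return lines * line_size

def layout(item_sizes, line_size=LINE_SIZE):
    """Assign each item a start offset so that items never share a line."""
    offsets, cursor = [], 0
    for size in item_sizes:
        offsets.append(cursor)
        cursor += padded_size(size, line_size)
    return offsets

def lines_touched(offset, size, line_size=LINE_SIZE):
    """Set of cache line indices an item occupies."""
    return set(range(offset // line_size,
                     (offset + size - 1) // line_size + 1))
```

With this layout, the line sets of any two items are disjoint, which is exactly the false-sharing avoidance the abstract describes.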
20100262781 | Loading Data to Vector Renamed Register From Across Multiple Cache Lines - A load instruction that accesses data cache may be off natural alignment, which causes a cache line crossing to complete the access. The illustrative embodiments provide a mechanism for loading data across multiple cache lines without the need for an accumulation register or collection point for partial data access from a first cache line while waiting for a second cache line to be accessed. Because the accesses to separate cache lines are concatenated within the vector rename register without the need for an accumulator, an off-alignment load instruction is completely pipeline-able and flushable with no cleanup consequences. | 10-14-2010 |
20100293331 | STORAGE SYSTEM AND DATA MANAGEMENT METHOD - A storage system, which is coupled to a computer, includes a storage device, a controller, a plurality of cache memory units, and a connecting unit. Each of the plurality of cache memory units includes: a cache memory for storing data; an auxiliary storage device for holding a content of data even after shutdown of power; and a cache controller for controlling an input/output of data to/from the cache memory and the auxiliary storage device. The cache controller stores data stored in the cache memory, which is divided into a plurality of parts, into a plurality of the auxiliary storage devices included in the plurality of cache memory units. | 11-18-2010 |
20100293332 | CACHE ENUMERATION AND INDEXING - In response to a request including a state object, which can indicate a state of an enumeration of a cache, the enumeration can be continued by using the state object to identify and send cache data. Also, an enumeration of cache units can be performed by traversing a data structure that includes object nodes, which correspond to cache units, and internal nodes. An enumeration state stack can indicate a current state of the enumeration, and can include state nodes that correspond to internal nodes in the data structure. Additionally, a cache index data structure can include a higher level table and a lower level table. The higher level table can have a leaf node pointing to the lower level table, and the lower level table can have a leaf node pointing to one of the cache units. Moreover, the lower level table can be associated with a tag. | 11-18-2010 |
20100332755 | METHOD AND APPARATUS FOR USING A SHARED RING BUFFER TO PROVIDE THREAD SYNCHRONIZATION IN A MULTI-CORE PROCESSOR SYSTEM - An apparatus and method for improving synchronization between threads in a multi-core processor system are provided. An apparatus includes a memory, a first processor core, and a second processor core. The memory includes a shared ring buffer for storing data units, and stores a plurality of shared variables associated with accessing the shared ring buffer. The first processor core runs a first thread and has a first cache associated therewith. The first cache stores a first set of local variables associated with the first processor core. The first thread controls insertion of data items into the shared ring buffer using at least one of the shared variables and the first set of local variables. The second processor core runs a second thread and has a second cache associated therewith. The second cache stores a second set of local variables associated with the second processor core. The second thread controls extraction of data items from the shared ring buffer using at least one of the shared variables and the second set of local variables. | 12-30-2010 |
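The shared-variables-plus-cached-local-copies scheme in the entry above can be sketched with a single-producer/single-consumer ring buffer. This is an illustrative, single-threaded model, not the patent's implementation: the class name is hypothetical, and a real multi-core version would need memory barriers.

```python
# Sketch of an SPSC ring buffer: `head`/`tail` are the shared variables;
# each side keeps a cached local copy of the other side's index and
# refreshes it only when the buffer looks full or empty, cutting the
# cross-core cache traffic the abstract alludes to. Names are illustrative.

class SpscRing:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0            # shared: next slot to write (producer-owned)
        self.tail = 0            # shared: next slot to read (consumer-owned)
        self.cached_tail = 0     # producer's local copy of tail
        self.cached_head = 0     # consumer's local copy of head

    def push(self, item):
        nxt = (self.head + 1) % self.capacity
        if nxt == self.cached_tail:        # looks full: refresh local copy
            self.cached_tail = self.tail
            if nxt == self.cached_tail:
                return False               # really full
        self.buf[self.head] = item
        self.head = nxt
        return True

    def pop(self):
        if self.tail == self.cached_head:  # looks empty: refresh local copy
            self.cached_head = self.head
            if self.tail == self.cached_head:
                return None                # really empty
        item = self.buf[self.tail]
        self.tail = (self.tail + 1) % self.capacity
        return item
```

One slot is sacrificed to distinguish full from empty, so a capacity-3 ring holds two items; the local index copies are only re-read from the shared variables at those boundary conditions.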
20100332756 | PROCESSING OUT OF ORDER TRANSACTIONS FOR MIRRORED SUBSYSTEMS - Methods and apparatus relating to processing out of order transactions for mirrored subsystems are described. In one embodiment, a device (that is mirroring data from another device) includes a cache to track out of order write operations prior to writing the data from the write operations to memory. A register may be used to track the state of the cache and cause acknowledgement of commitment of the data to memory once all cache entries, as recorded at a select point by the register, are emptied or otherwise invalidated. Other embodiments are also disclosed. | 12-30-2010 |
20110060879 | SYSTEMS AND METHODS FOR PROCESSING MEMORY REQUESTS - A processing system is provided. The processing system includes a first processing unit coupled to a first memory and a second processing unit coupled to a second memory. The second memory comprises a coherent memory and a private memory that is private to the second processing unit. | 03-10-2011 |
20110072212 | CACHE MEMORY CONTROL APPARATUS AND CACHE MEMORY CONTROL METHOD - A cache memory controller searches a second cache tag memory holding cache state information that indicates whether any of the multiprocessor cores has registered, within its own first cache memory, a registered address of information. When a target address coincides with the obtained registered address, the cache memory controller determines, based on the cache state information, whether an invalidation request or a data request to the processor core including the block is necessary. If invalidation or a data request for the processor core including the block is determined to be necessary, the cache memory controller further determines whether a retry of the instruction is necessary based on a comparison result of the first cache tag memory. | 03-24-2011 |
20110082980 | HIGH PERFORMANCE UNALIGNED CACHE ACCESS - A cache memory device and method for operating the same. One embodiment of the cache memory device includes an address decoder decoding a memory address and selecting a target cache line. A first cache array is configured to output a first cache entry associated with the target cache line, and a second cache array coupled to an alignment unit is configured to output a second cache entry associated with the alignment cache line. The alignment unit coupled to the address decoder selects either the target cache line or a neighbor cache line proximate the target cache line as the alignment cache line output. Selection of either the target cache line or a neighbor cache line is based on an alignment bit in the memory address. The tag array cache is split into even and odd cache line tags, and provides one or two tags for every cache access. | 04-07-2011 |
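The target/neighbor-line selection in the entry above can be illustrated with a short address calculation. This is an illustrative sketch under assumptions (a 64-byte line size, hypothetical function names), not the patent's circuit.

```python
# Sketch of unaligned-access line selection: an access that straddles a line
# boundary needs the target line and its successor. With tags split into
# even and odd arrays, two adjacent lines always land in different arrays,
# so both tags can be read in a single access. LINE_SIZE is an assumption.

LINE_SIZE = 64

def lines_for_access(addr, width, line_size=LINE_SIZE):
    """Return the cache line indices an access touches (one or two)."""
    first = addr // line_size
    last = (addr + width - 1) // line_size
    return list(range(first, last + 1))

def tag_arrays_used(addr, width, line_size=LINE_SIZE):
    """Return which tag arrays ('even'/'odd') the access needs."""
    return ['odd' if line % 2 else 'even'
            for line in lines_for_access(addr, width, line_size)]
```

An aligned 8-byte load at address 0 touches one line and one tag array; the same load at address 60 straddles lines 0 and 1 and needs one even tag and one odd tag, never two from the same array.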
20110082981 | MULTIPROCESSING CIRCUIT WITH CACHE CIRCUITS THAT ALLOW WRITING TO NOT PREVIOUSLY LOADED CACHE LINES - Data is processed using a first and second processing circuit ( | 04-07-2011 |
20110131376 | METHOD AND APPARATUS FOR TILE MAPPING TECHNIQUES - An approach for improving tile-map caching techniques is provided. Whether a tile object is stored in a first cache that is configured to store a plurality of tile objects associated with a map is determined. It is also determined whether a resource locator associated with the tile object is stored in a second cache, if the tile object is not in the first cache. The tile object is retrieved based on the resource locator if the resource locator is stored in the second cache. | 06-02-2011 |
20110153941 | Multi-Autonomous System Anycast Content Delivery Network - A content delivery network includes first and second sets of cache servers, a domain name server, and an anycast island controller. The first set of cache servers is hosted by a first autonomous system and the second set of cache servers is hosted by a second autonomous system. The cache servers are configured to respond to an anycast address for the content delivery network, to receive a request for content from a client system, and provide the content to the client system. The first and second autonomous systems are configured to balance the load across the first and second sets of cache servers, respectively. The domain name server is configured to receive a request from a requestor for a cache server address, and provide the anycast address to the requestor in response to the request. The anycast island controller is configured to receive load information from each of the cache servers, determine an amount of requests to transfer from the first autonomous system to the second autonomous system, and send an instruction to the first autonomous system to transfer the amount of requests to the second autonomous system. | 06-23-2011 |
20110167223 | BUFFER MEMORY DEVICE, MEMORY SYSTEM, AND DATA READING METHOD - Memory access is accelerated by performing a burst read without any problems caused due to rewriting of data. A buffer memory device reads, in response to a read request from a processor, data from a main memory including cacheable and uncacheable areas. The buffer memory device includes an attribute obtaining unit which obtains the attribute of the area indicated by a read address included in the read request; an attribute determining unit which determines whether or not the attribute obtained by the attribute obtaining unit is burst-transferable; a data reading unit which performs a burst read of data including data held in the area indicated by the read address, when determined that the attribute obtained by the attribute obtaining unit is burst-transferable; and a buffer memory which holds the data burst read by the data reading unit. | 07-07-2011 |
20110191541 | TECHNIQUES FOR DISTRIBUTED CACHE MANAGEMENT - Techniques for distributed cache management are provided. A server having a backend resource includes a global cache and a global cache agent. Individual clients each have client cache agents and client caches. When data items associated with the backend resource are added, modified, or deleted in the client caches, the client cache agents report the changes to the global cache agent. The global cache agent records the changes and notifies the other client cache agents to update a status of the changes within their client caches. When the changes are committed to the backend resource, each of the statuses in each of the caches is updated accordingly. | 08-04-2011 |
20110208914 | STORAGE SYSTEM AND METHOD OF OPERATING THEREOF - There are provided a storage system, a storage control unit, and a method of operating thereof. A storage system comprises a permanent storage subsystem comprising a first cache memory and a non-volatile storage medium, and a storage control unit operatively coupled to said subsystem and to a second cache memory operable to cache “dirty” data pending to be written to the permanent storage subsystem and to enable, responsive to at least one command by the storage control unit, destaging of said “dirty” data or part thereof to the permanent storage subsystem. The storage control unit is operable to determine achievement of a “writing criterion” and, upon achievement, to provide at least one command to the permanent storage subsystem requiring flushing of destaged data or part thereof from the first cache memory to the non-volatile storage medium, and to provide at least one command to the second cache memory requiring reclassification of the “washed” data or a respective part thereof into the “clean” data, wherein the storage control unit is further operable to configure the “writing criterion” responsive to indication of one or more predefined events during an operation of the storage system. | 08-25-2011 |
20110213931 | NON BLOCKING REHASHING - An apparatus and a method operating on data at a server node of a data grid system with distributed cache is described. A coordinator receives a request to change a topology of a cache cluster from a first group of cache nodes to a second group of cache nodes. The request includes a cache node joining or leaving the first group. A key for the second group is rehashed without blocking access to the first group while rehashing. | 09-01-2011 |
20110219188 | CACHE AS POINT OF COHERENCE IN MULTIPROCESSOR SYSTEM - In a multiprocessor system, a conflict checking mechanism is implemented in the L2 cache memory. Different versions of speculative writes are maintained in different ways of the cache. A record of speculative writes is maintained in the cache directory. Conflict checking occurs as part of directory lookup. Speculative versions that do not conflict are aggregated into an aggregated version in a different way of the cache. Speculative memory access requests do not go to main memory. | 09-08-2011 |
20110219189 | STORAGE SYSTEM AND REMOTE COPY CONTROL METHOD FOR STORAGE SYSTEM - A storage system maintains consistency of the stored contents between volumes even when a plurality of remote copying operations are executed asynchronously. A plurality of primary storage control devices and a plurality of secondary storage control devices are connected by a plurality of paths, and remote copying is performed asynchronously between respective first volumes and second volumes. Write data transferred from the primary storage control device to the secondary storage control device is held in a write data storage portion. Update order information, including write times and sequential numbers, is managed by update order information management portions. An update control portion collects update order information from each update order information management portion, determines the time at which update of each second volume is possible, and notifies each update portion. By this means, the stored contents of each second volume can be updated up to the time at which update is possible. | 09-08-2011 |
20110231612 | PRE-FETCHING FOR A SIBLING CACHE - One embodiment provides a system that pre-fetches into a sibling cache. During operation, a first thread executes in a first processor core associated with a first cache, while a second thread associated with the first thread simultaneously executes in a second processor core associated with a second cache. During execution, the second thread encounters an instruction that triggers a request to a lower-level cache which is shared by the first cache and the second cache. The system responds to this request by directing a load fill which returns from the lower-level cache in response to the request to the first cache, thereby reducing cache misses for the first thread. | 09-22-2011 |
20110246720 | STORAGE SYSTEM WITH MULTIPLE CONTROLLERS - A first controller, and a second controller coupled to the first controller via a first path are provided. The first controller includes a first relay circuit which is a circuit that controls data transfer, and a first processor coupled to the first relay circuit via a first second path. The second controller includes a second relay circuit which is a circuit that controls data transfer, and is coupled to the first relay circuit via the first path, and a second processor coupled to the second relay circuit via a second second path. The first processor is coupled to the second relay circuit not via the first relay circuit but via a first third path, and accesses the second relay circuit via the first third path during an I/O process. The second processor is coupled to the first relay circuit not via the second relay circuit but via a second third path, and accesses the first relay circuit via the second third path during an I/O process. | 10-06-2011 |
20110271056 | MULTITHREADED CLUSTERED MICROARCHITECTURE WITH DYNAMIC BACK-END ASSIGNMENT - A multithreaded clustered microarchitecture with dynamic back-end assignment is presented. A processing system may include a plurality of instruction caches and front-end units each to process an individual thread from a corresponding one of the instruction caches, a plurality of back-end units, and an interconnect network to couple the front-end and back-end units. A method may include measuring a performance metric of a back-end unit, comparing the measurement to a first value, and reassigning, or not, the back-end unit according to the comparison. Computer systems according to embodiments of the invention may include: a random access memory; a system bus; and a processor having a plurality of instruction caches, a plurality of front-end units each to process an individual thread from a corresponding one of the instruction caches; a plurality of back-end units; and an interconnect network coupled to the plurality of front-end units and the plurality of back-end units. | 11-03-2011 |
20110314225 | COMPUTATIONAL RESOURCE ASSIGNMENT DEVICE, COMPUTATIONAL RESOURCE ASSIGNMENT METHOD AND COMPUTATIONAL RESOURCE ASSIGNMENT PROGRAM - In a multi-core processor system, cache memories are provided respectively for a plurality of processors. An assignment management unit manages assignment of tasks to the processors. A cache status calculation unit calculates a cache usage status such as a memory access count and a cache hit ratio, with respect to each task. A first processor handles a plurality of first tasks that belong to a first process. If computation amount of the first process exceeds a predetermined threshold value, the assignment management unit refers to the cache usage status to preferentially select, as a migration target task, one of the plurality of first tasks whose memory access count is smaller or whose cache hit ratio is higher. Then, the assignment management unit newly assigns the migration target task to a second processor handling another process different from the first processor. | 12-22-2011 |
20120059994 | USING A MIGRATION CACHE TO CACHE TRACKS DURING MIGRATION - Provided are a method, system, and computer program product for using a migration cache to cache tracks during migration. Indication is made in an extent list of tracks in an extent in a source storage subject to Input/Output (I/O) requests. A migration operation is initiated to migrate the extent from the source storage to a destination storage. In response to initiating the migration operation, a determination is made of a first set of tracks in the extent in the source storage indicated in the extent list. A determination is also made of a second set of tracks in the extent. The tracks in the source storage in the first set are copied to a migration cache, wherein updates to the tracks in the migration cache during the migration operation are applied to the migration cache. The tracks in the second set are copied directly from the source storage to the destination storage without buffering in the migration cache. The tracks in the first set are copied from the migration cache to the destination storage. The migration operation is completed in response to copying the first set of tracks from the migration cache to the destination storage and copying the second set of tracks from the source storage to the destination storage, wherein after the migration the tracks in the extent are located in the destination storage. | 03-08-2012 |
20120079200 | UNIFIED STREAMING MULTIPROCESSOR MEMORY - One embodiment of the present invention sets forth a technique for providing a unified memory for access by execution threads in a processing system. Several logically separate memories are combined into a single unified memory that includes a single set of shared memory banks, an allocation of space in each bank across the logical memories, a mapping rule that maps the address space of each logical memory to its partition of the shared physical memory, a circuitry including switches and multiplexers that supports the mapping, and an arbitration scheme that allocates access to the banks. | 03-29-2012 |
20120084510 | Computing Machine and Computing System - According to one embodiment, a computing machine includes a virtual machine operated on a virtual machine monitor, the computing machine includes a first memory device, and a second memory device. The virtual machine monitor is configured to assign a part of a region of the first memory device as a third memory device to the virtual machine and to assign a part of a region of the second memory device as a fourth memory device to the virtual machine. The virtual machine comprises a first cache control module configured to use the fourth memory device as a read cache of the third memory device. | 04-05-2012 |
20120096225 | DYNAMIC CACHE CONFIGURATION USING SEPARATE READ AND WRITE CACHES - Data from storage devices is stored in a read cache, having a read cache size, and a write cache, having a write cache size. The read cache and the write cache are separate caches. Cache configuration of the read cache and the write cache are automatically and dynamically adjusted based, at least in part, upon cache performance parameters. Cache performance parameters include one or more of preference scores, frequency of read and write operations, read and write performance of a storage device, localization information, and contiguous read and write performance. Dynamic cache configuration includes one or more of adjusting read cache size and/or write cache size and adjusting read cache block size and/or write cache block size. | 04-19-2012 |
20120137073 | Extract Cache Attribute Facility and Instruction Therefore - A facility and cache machine instruction of a computer architecture for specifying a target cache cache-level and a target cache attribute of interest for obtaining a cache attribute of one or more target caches. The requested cache attribute of the target cache(s) is saved in a register. | 05-31-2012 |
20120144117 | RECOMMENDATION BASED CACHING OF CONTENT ITEMS - Content item recommendations are generated for users based on metadata associated with the content items and a history of content item usage associated with the users. Each content item recommendation identifies a user and a content item and includes a score that indicates how likely the user is to view the content item. Based on the content item recommendations, and constraints of one or more caches, the content items are selected for storage in one or more caches. The constraints may include users that are associated with each cache, the geographical location of each cache, the size of each cache, and/or costs associated with each cache such as bandwidth costs. The content items stored in a cache are recommended to users associated with the cache. | 06-07-2012 |
20120159072 | MEMORY SYSTEM - According to one embodiment, a memory system includes a chip including a cell array and first and second caches configured to hold data read out from the cell array; an interface configured to manage first and second addresses; a controller configured to issue a readout request to the interface; and a buffer configured to hold the data from the chip. The interface transfers the data in the first cache to the buffer without reading out the data from the cell array if the readout address matches the first address, transfers the data in the second cache to the buffer without reading out the data from the cell array if the readout address matches the second address, and reads out the data from the cell array and transfers the data to the buffer if the readout address does not match either one of the first or second address. | 06-21-2012 |
20120173819 | Accelerating Cache State Transfer on a Directory-Based Multicore Architecture - Technologies are generally described herein for accelerating a cache state transfer in a multicore processor. The multicore processor may include first, second, and third tiles. The multicore processor may initiate migration of a thread executing on the first core at the first tile from the first tile to the second tile. The multicore processor may determine block addresses of blocks to be transferred from a first cache at the first tile to a second cache at the second tile, and identify that a directory at the third tile corresponds to the block addresses. The multicore processor may update the directory to reflect that the second cache shares the blocks. The multicore processor may transfer the blocks from the first cache in the first tile to the second cache in the second tile effective to complete the migration of the thread from the first tile to the second tile. | 07-05-2012 |
20120191912 | STORING DATA ON STORAGE NODES - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for storing data on storage nodes. In one aspect, a method includes receiving a file to be stored across a plurality of storage nodes each including a cache. The file is stored by storing portions of the file, each on a different storage node. A first portion is written to a first storage node's cache until determining that the first storage node's cache is full. A different second storage node is selected in response to determining that the first storage node's cache is full. For each portion of the file, a location of the portion is recorded, the location indicating at least a storage node storing the portion. | 07-26-2012 |
20120215982 | Partial Line Cache Write Injector for Direct Memory Access Write - A cache within a computer system receives a partial write request and identifies a cache hit of a cache line. The cache line corresponds to the partial write request and includes existing data. In turn, the cache receives partial write data and merges the partial write data with the existing data into the cache line. In one embodiment, the existing data is “modified” or “dirty.” In another embodiment, the existing data is “shared.” In this embodiment, the cache changes the state of the cache line to indicate the storing of the partial write data into the cache line. | 08-23-2012 |
20120246406 | EFFECTIVE PREFETCHING WITH MULTIPLE PROCESSORS AND THREADS - A processing system includes a memory and a first core configured to process applications. The first core includes a first cache. The processing system includes a mechanism configured to capture a sequence of addresses of the application that miss the first cache in the first core and to place the sequence of addresses in a storage array; and a second core configured to process at least one software algorithm. The at least one software algorithm utilizes the sequence of addresses from the storage array to generate a sequence of prefetch addresses. The second core issues prefetch requests for the sequence of the prefetch addresses to the memory to obtain prefetched data and the prefetched data is provided to the first core if requested. | 09-27-2012 |
20120290793 | EFFICIENT TAG STORAGE FOR LARGE DATA CACHES - An apparatus, method, and medium are disclosed for implementing data caching in a computer system. The apparatus comprises a first data cache, a second data cache, and cache logic. The cache logic is configured to cache memory data in the first data cache. Caching the memory data in the first data cache comprises storing the memory data in the first data cache and storing in the second data cache, but not in the first data cache, tag data corresponding to the memory data. | 11-15-2012 |
20120303898 | MANAGING UNMODIFIED TRACKS MAINTAINED IN BOTH A FIRST CACHE AND A SECOND CACHE - Provided are a computer program product, system, and method for managing unmodified tracks maintained in both a first cache and a second cache. The first cache has unmodified tracks in the storage subject to Input/Output (I/O) requests. Unmodified tracks are demoted from the first cache to a second cache. An inclusive list indicates unmodified tracks maintained in both the first cache and a second cache. An exclusive list indicates unmodified tracks maintained in the second cache but not the first cache. The inclusive list and the exclusive list are used to determine whether to promote to the second cache an unmodified track demoted from the first cache. | 11-29-2012 |
20120303899 | MANAGING TRACK DISCARD REQUESTS TO INCLUDE IN DISCARD TRACK MESSAGES - Provided are a computer program product, system, and method for managing track discard requests to include in discard track messages. A backup copy of a track in a cache is maintained in the cache backup device. A track discard request is generated to discard tracks in the cache backup device removed from the cache. Track discard requests are queued in a discard track queue. In response to detecting that a predetermined number of track discard requests are queued in the discard track queue while processing in a discard multi-track mode, one discard multiple tracks message is sent indicating the tracks indicated in the queued predetermined number of track discard requests to the cache backup device instructing the cache backup device to discard the tracks indicated in the discard multiple tracks message. In response to determining a predetermined number of periods of inactivity while processing in the discard multi-track mode, processing the track discard requests is switched to a discard single track mode. | 11-29-2012 |
20120311266 | MULTIPROCESSOR AND IMAGE PROCESSING SYSTEM USING THE SAME - To provide a multiprocessor capable of easily sharing data and buffering data to be transferred. | 12-06-2012 |
20120324166 | COMPUTER-IMPLEMENTED METHOD OF PROCESSING RESOURCE MANAGEMENT - A computer-implemented method for managing processing resources of a computerized system having at least a first processor and a second processor, each of the processors operatively interconnected to a memory storing a set of data to be processed by a processor, the method comprising: monitoring data accessed by the first processor while executing; and if the second processor is at a shorter distance than the first processor from the monitored data, instructing to interrupt execution at the first processor and resume the execution at the second processor. | 12-20-2012 |
20120331231 | METHOD AND APPARATUS FOR SUPPORTING MEMORY USAGE THROTTLING - An apparatus for providing system memory usage throttling within a data processing system having multiple chiplets is disclosed. The apparatus includes a system memory, a memory access collection module, a memory credit accounting module and a memory throttle counter. The memory access collection module receives a first set of signals from a first cache memory within a chiplet and a second set of signals from a second cache memory within the chiplet. The memory credit accounting module tracks the usage of the system memory on a per user virtual partition basis according to the results of cache accesses extracted from the first and second set of signals from the first and second cache memories within the chiplet. The memory throttle counter provides a throttle control signal to prevent any access to the system memory when the system memory usage has exceeded a predetermined value. | 12-27-2012 |
20130007367 | INFORMATION PROCESSING APPARATUS AND METHOD OF CONTROLLING SAME - Disclosed is an information processing apparatus equipped with first and second CPUs, as well as a method of controlling this apparatus. When the first CPU launches an operating system for managing a virtual memory area that includes a first cache area for a device, the first CPU generates specification data, which indicates the corresponding relationship between the first cache and a second cache for the device and provided in a main memory, and transfers the specification data to the second CPU. In accordance with the specification data, the second CPU transfers data, which has been stored in the device, to a physical memory corresponding to a cache to which the first CPU refers. As a result, the first CPU accesses the first cache area and is thereby capable of accessing the device at high speed. | 01-03-2013 |
20130013862 | EFFICIENT HANDLING OF MISALIGNED LOADS AND STORES - A system and method for efficiently handling misaligned memory accesses within a processor. A processor comprises a load-store unit (LSU) with a banked data cache (d-cache) and a banked store queue. The processor generates a first address corresponding to a memory access instruction identifying a first cache line. The processor determines the memory access is misaligned which crosses over a cache line boundary. The processor generates a second address identifying a second cache line logically adjacent to the first cache line. If the instruction is a load instruction, the LSU simultaneously accesses the d-cache and store queue with the first and the second addresses. If there are two hits, the data from the two cache lines are simultaneously read out. If the access is a store instruction, the LSU separates associated write data into two subsets and simultaneously stores these subsets in separate cache lines in separate banks of the store queue. | 01-10-2013 |
20130042065 | CUSTOM CACHING - Methods and systems are presented for custom caching. Application threads define caches. The caches may be accessed through multiple index keys, which are mapped to multiple application thread-defined keys. Methods provide for each index key and each application thread-defined key to be symmetrical. The index keys are used for loading data from one or more data sources into the cache stores on behalf of the application threads. Application threads access the data from the cache store by providing references to the caches and the application-supplied keys. Some data associated with some caches may be shared from the cache store by multiple application threads. Additionally, some caches are exclusively accessed by specific application threads. | 02-14-2013 |
20130042066 | STORAGE CACHING - The present disclosure provides a method for processing a storage operation in a system with an added level of storage caching. The method includes receiving, in a storage cache, a read request from a host processor that identifies requested data and determining whether the requested data is in a cache memory of the storage cache. If the requested data is in the cache memory of the storage cache, the requested data may be obtained from the storage cache and sent to the host processor. If the requested data is not in the cache memory of the storage cache, the read request may be sent to a host bus adapter operatively coupled to a storage system. The storage cache is transparent to the host processor and the host bus adapter. | 02-14-2013 |
20130046934 | SYSTEM CACHING USING HETEROGENEOUS MEMORIES - A caching circuit includes tag memories for storing tagged addresses of a first cache. On-chip data memories are arranged in the same die as the tag memories, and the on-chip data memories form a first sub-hierarchy of the first cache. Off-chip data memories are arranged in a different die as the tag memories, and the off-chip data memories form a second sub-hierarchy of the first cache. Sources (such as processors) are arranged to use the tag memories to service first cache requests using the first and second sub-hierarchies of the first cache. | 02-21-2013 |
20130046935 | SHARED COPY CACHE ACROSS NETWORKED DEVICES - A copy cache feature that can be shared across networked devices is provided. Content added to copy cache through a “copy”, a “like”, or similar command through one device may be forwarded to a server providing cloud-based services to a user and/or another device associated with the user such that the content can be inserted into the same or other files on other computing devices by the user. In addition to seamless movement of copy cache content across devices, the content may be made available in a context-based manner and/or sortable manner. | 02-21-2013 |
20130073808 | METHOD AND NODE ENTITY FOR ENHANCING CONTENT DELIVERY NETWORK - The present invention provides a method and a caching node entity for ensuring that at least a predetermined number of copies of a content object is kept stored in a network comprising a plurality of cache nodes for storing copies of content objects. The present invention makes use of ranking state values, deletable or non-deletable, which when assigned to copies of content objects indicate whether a copy is either deletable or non-deletable. At least one copy of each content object is assigned the value non-deletable. The value for a copy of a content object changes from deletable to non-deletable in one cache node of the network, said copy being a candidate for the value non-deletable, if a certain condition is fulfilled. | 03-21-2013 |
20130086322 | SYSTEMS AND METHODS FOR MULTITENANCY DATA - Systems and methods are provided to support multitenant data in an EclipseLink environment. EclipseLink supports shared multitenant tables using tenant discriminator columns, allowing an application to be re-used for multiple tenants and have all their data co-located. Tenants can share the same schema transparently, without affecting one another and can use non-multitenant entity types as per usual. This functionality is flexible enough to allow for its usage at an Entity Manager Factory level or with individual Entity Manager's based on the application's needs. Support for multitenant entities can be done through the usage of a multitenant annotation or xml element configured in an eclipselink-orm.xml mapping file. The multitenant annotation can be used on an entity or mapped superclass and is used in conjunction with a tenant discriminator column or xml element. | 04-04-2013 |
20130086323 | EFFICIENT CACHE MANAGEMENT IN A CLUSTER - A content management system has at least two content server computers, a cache memory corresponding to each content server, the cache memory having a page cache to store cache objects for pages displayed by the content server, a dependency cache to store dependency information for the cache objects, and a notifier cache to replicate changes in dependency information to other caches. | 04-04-2013 |
20130111132 | Cache Memory That Supports Tagless Addressing | 05-02-2013 |
20130138884 | LOAD DISTRIBUTION SYSTEM - Exemplary embodiments of the invention provide load distribution among storage systems using solid state memory (e.g., flash memory) as expanded cache area. In accordance with an aspect of the invention, a system comprises a first storage system and a second storage system. The first storage system changes a mode of operation from a first mode to a second mode based on load of process in the first storage system. The load of process in the first storage system in the first mode is executed by the first storage system. The load of process in the first storage system in the second mode is executed by the first storage system and the second storage system. | 05-30-2013 |
20130138885 | DYNAMIC PROCESS/OBJECT SCOPED MEMORY AFFINITY ADJUSTER - An apparatus, method, and program product for optimizing a multiprocessor computing system by sampling memory reference latencies and adjusting components of the system in response thereto. During execution of processes on the computing system, memory reference sampling of memory locations from shared memory of the computing system referenced in the executing processes is performed. Each sampled memory reference collected from sampling is associated with a latency and a physical memory location in the shared memory. Each sampled memory reference is analyzed to identify segments of memory locations in the shared memory corresponding to a sub-optimal latency, and based on the analyzed sampled memory references, the physical location of the one or more identified segments, the processor on which one or more processes referencing the identified segments execute, and/or a status associated with the one or more identified segments is dynamically adjusted to thereby optimize memory access for the multiprocessor computing system. | 05-30-2013 |
20130159626 | OPTIMIZED EXECUTION OF INTERLEAVED WRITE OPERATIONS IN SOLID STATE DRIVES - A method for data storage includes receiving a plurality of data items for storage in a memory, including at least first data items that are associated with a first data source and second data items that are associated with a second data source, such that the first and second data items are interleaved with one another over time. The first data items are de-interleaved from the second data items, by identifying a respective data source with which each received data item is associated. The de-interleaved first data items and the de-interleaved second data items are stored in the memory. | 06-20-2013 |
20130185511 | Hybrid Write-Through/Write-Back Cache Policy Managers, and Related Systems and Methods - Embodiments disclosed in the detailed description include hybrid write-through/write-back cache policy managers, and related systems and methods. A cache write policy manager is configured to determine whether at least two caches among a plurality of parallel caches are active. If all of one or more other caches are not active, the cache write policy manager is configured to instruct an active cache among the parallel caches to apply a write-back cache policy. In this manner, the cache write policy manager may conserve power and/or increase performance of a singly active processor core. If any of the one or more other caches are active, the cache write policy manager is configured to instruct an active cache among the parallel caches to apply a write-through cache policy. In this manner, the cache write policy manager facilitates data coherency among the parallel caches when multiple processor cores are active. | 07-18-2013 |
20130205087 | FORWARD PROGRESS MECHANISM FOR STORES IN THE PRESENCE OF LOAD CONTENTION IN A SYSTEM FAVORING LOADS - A multiprocessor data processing system includes a plurality of cache memories including a cache memory. In response to the cache memory detecting a storage-modifying operation specifying a same target address as that of a first read-type operation being processed by the cache memory, the cache memory provides a retry response to the storage-modifying operation. In response to completion of the read-type operation, the cache memory enters a referee mode. While in the referee mode, the cache memory temporarily dynamically increases priority of any storage-modifying operation targeting the target address in relation to any second read-type operation targeting the target address. | 08-08-2013 |
20130262766 | Cache Synchronization System, Cache Synchronization Method and Apparatus thereof - Disclosed are a cache synchronization system, a cache synchronization method and a local cache to perform synchronization. The local cache is configured to determine whether to perform synchronization for specific content on the basis of synchronization policy information, if it is determined that synchronization is to be performed, to set a dispersion parameter that defines a synchronization range for the specific content according to the synchronization policy information, and to transmit synchronization information about the specific content, which includes the dispersion parameter, to at least one neighboring local cache. | 10-03-2013 |
20130268733 | CACHE STORAGE OPTIMIZATION IN A CACHE NETWORK - In one embodiment, a method includes receiving data at a cache node in a network of cache nodes, the cache node located on a data path between a source of the data and a network device requesting the data, and determining if the received data is to be cached at the cache node, wherein determining comprises calculating a cost incurred to retrieve the data. An apparatus and logic are also disclosed. | 10-10-2013 |
20130275681 | CACHING FOR HETEROGENEOUS PROCESSORS - A multi-core processor providing heterogeneous processor cores and a shared cache is presented. | 10-17-2013 |
20130297875 | Encoding and Decoding Images - Some embodiments provide a method for encoding a first set of pixels in a first image by reference to a second image in a video sequence. In a first search window within a second image, the method searches to identify a first particular portion in the second image that best matches the first set of pixels in the first image. In the first search window within the second image, the method identifies a first location corresponding to the first particular portion. In a second search window within the second image, the method then searches to identify a second particular portion in the second image that best matches the first set of pixels in the first image, where the second search window is defined about the first location. | 11-07-2013 |
20130318301 | Virtual Machine Exclusive Caching - Techniques, systems and an article of manufacture for caching in a virtualized computing environment. A method includes enforcing a host page cache on a host physical machine to store only base image data, and enforcing each of at least one guest page cache on a corresponding guest virtual machine to store only data generated by the guest virtual machine after the guest virtual machine is launched, wherein each guest virtual machine is implemented on the host physical machine. | 11-28-2013 |
20130339606 | REDUCING STORE OPERATION BUSY TIMES - A computer product for reducing store operation busy times is provided and relates to a method that includes associating first and second platform registers with a cache array, determining that first and second store operations target a same wordline of the cache array, loading control information and data of the store operations into the platform registers, and delaying a commit of the first store operation until the loading of the second platform register is complete. The method further includes committing the data from the platform registers, using the control information from the platform registers, to the wordline of the cache array at a same time to thereby reduce a busy time of the wordline of the cache array. | 12-19-2013 |
20130339607 | REDUCING STORE OPERATION BUSY TIMES - A computer product for reducing store operation busy times is provided. The computer product includes a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method. The method includes associating first and second platform registers with a cache array, determining that first and second store operations target a same wordline of the cache array, loading control information and data of the first and second store operation into the first and second platform registers and delaying a commit of the first store operation until the loading of the second platform register is complete. The method further includes committing the data from the first and second platform registers using the control information from the first and second platform registers to the wordline of the cache array at a same time to thereby reduce a busy time of the wordline of the cache array. | 12-19-2013 |
20140025891 | RELAXED COHERENCY BETWEEN DIFFERENT CACHES - One embodiment sets forth a technique for ensuring relaxed coherency between different caches. Two different execution units may be configured to access different caches that may store one or more cache lines corresponding to the same memory address. During time periods between memory barrier instructions, relaxed coherency is maintained between the different caches. More specifically, writes to a cache line in a first cache that corresponds to a particular memory address are not necessarily propagated to a cache line in a second cache before the second cache receives a read or write request that also corresponds to the particular memory address. Therefore, the first cache and the second cache are not necessarily coherent during time periods of relaxed coherency. Execution of a memory barrier instruction ensures that the different caches will be coherent before a new period of relaxed coherency begins. | 01-23-2014 |
20140047183 | System and Method for Utilizing a Cache with a Virtual Machine - In one embodiment, a computer system includes a cache having one or more memory locations associated with one or more computing systems, one or more cache managers, each cache manager associated with a portion of the cache, a metadata service communicatively linked with the cache managers, a configuration manager communicatively linked with the cache managers and the metadata service, and a data store. | 02-13-2014 |
20140052913 | MULTI-PORTED MEMORY WITH MULTIPLE ACCESS SUPPORT - A multi-ported memory that supports multiple read and write accesses is described herein. The multi-ported memory may include a number of read/write ports that is greater than the number of read/write ports of each memory bank of the multi-ported memory. The multi-ported memory allows for at least one read operation and at least one write operation to be received during the same clock cycle. In the event that an incoming write operation is blocked by the at least one read operation, data for that incoming write operation may be stored in a cache included in the multi-port memory. That cache is accessible to both write operations and read operations. In the event that the incoming write operation is not blocked by the at least one read operation, data for that incoming write operation is stored in the memory bank targeted by that incoming write operation. | 02-20-2014 |
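The conflict-handling behavior in this abstract can be modeled in software: a write blocked by a same-cycle read to its bank is parked in a shared cache, and reads consult that cache first. This is a hypothetical behavioral sketch, not the patented circuit; the bank layout and method names are assumptions:

```python
class MultiPortedMemory:
    """Behavioral sketch: single-access banks plus a shared write cache.

    If an incoming write targets a bank that is busy serving a read in
    the same cycle, the write data is parked in the cache; reads check
    the cache before the banks so they always see the newest data.
    """

    def __init__(self, num_banks, bank_size):
        self.banks = [[0] * bank_size for _ in range(num_banks)]
        self.bank_size = bank_size
        self.cache = {}  # address -> data for blocked writes

    def _locate(self, addr):
        return addr // self.bank_size, addr % self.bank_size

    def cycle(self, reads, writes):
        """Handle reads and writes arriving in the same clock cycle."""
        busy_banks = {self._locate(a)[0] for a in reads}
        results = {a: self.read(a) for a in reads}
        for addr, data in writes:
            bank, offset = self._locate(addr)
            if bank in busy_banks:
                self.cache[addr] = data      # blocked: park in the cache
            else:
                self.banks[bank][offset] = data
                self.cache.pop(addr, None)   # cached copy now stale
        return results

    def read(self, addr):
        if addr in self.cache:               # cache holds the newest data
            return self.cache[addr]
        bank, offset = self._locate(addr)
        return self.banks[bank][offset]
```

A write to address 2 while address 1 (same bank) is being read lands in the cache; a later unblocked write to address 2 drains it back to the bank.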
20140052914 | MULTI-PORTED MEMORY WITH MULTIPLE ACCESS SUPPORT - A multi-ported memory that supports multiple read and write accesses is described. The multi-ported memory may include a number of read/write ports that is greater than the number of read/write ports of each memory bank of the multi-ported memory. The multi-ported memory allows for read operation(s) and write operation(s) to be received during the same clock cycle. In the event that an incoming write operation is blocked by read operation(s), data for that write operation may be stored in one of a plurality of cache banks included in the multi-port memory. The cache banks are accessible to both write and read operations. In the event that the write operation is not blocked by read operation(s), a determination is made as to whether data for that incoming write operation is stored in the memory bank targeted by that incoming write operation or in one of the cache banks. | 02-20-2014 |
20140052915 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing apparatus includes a plurality of cache memories, a plurality of processors configured to respectively access the plurality of cache memories, and a memory, in which each of the plurality of processors executes a program to function as a cache processing unit configured to perform cache processing including at least one of transfer to the memory and discard with respect to all the pieces of data stored in the cache memory. | 02-20-2014 |
20140052916 | Reduced Scalable Cache Directory - A processing network comprising a cache configured to store copies of memory data as a plurality of cache lines, a cache controller configured to receive data requests from a plurality of cache agents, and designate at least one of the cache agents as an owner of a first of the cache lines, and a directory configured to store cache ownership designations of the first cache line, and wherein the directory is encoded to support substantially simultaneous ownership of the first cache line by a plurality but less than all of the cache agents. Also disclosed is a method comprising receiving coherent transactions from a plurality of cache agents, and storing ownership designations of a plurality of cache lines by the cache agents in a directory, wherein the directory is configured to support storage of substantially simultaneous ownership designations for a plurality but less than all of the cache agents. | 02-20-2014 |
20140068191 | SYNCHRONOUS AND ASYNCHRONOUS DISCARD SCANS BASED ON THE TYPE OF CACHE MEMORY - A computational device maintains a first type of cache and a second type of cache. The computational device receives a command from the host to release space. The computational device synchronously discards tracks from the first type of cache, and asynchronously discards tracks from the second type of cache. | 03-06-2014 |
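The synchronous/asynchronous split in this abstract can be sketched with a background thread standing in for the asynchronous discard scan. A minimal sketch, assuming dict-like caches and track keys; none of these names come from the patent:

```python
import threading

def release_space(first_cache, second_cache, tracks_to_discard):
    """Sketch of the two discard styles, assuming dict-like caches.

    Tracks in the first cache are discarded synchronously (before this
    release call returns); tracks in the second cache are discarded on
    a background thread, i.e., asynchronously.
    """
    # Synchronous scan: discard from the first cache inline.
    for track in tracks_to_discard:
        first_cache.pop(track, None)

    # Asynchronous scan: discard from the second cache in the background.
    def async_scan():
        for track in tracks_to_discard:
            second_cache.pop(track, None)

    worker = threading.Thread(target=async_scan)
    worker.start()
    return worker  # caller may join() if it needs completion
```

The caller regains the first cache's space immediately, while the second cache drains without blocking the release command.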
20140082284 | DEVICE FOR CONTROLLING THE ACCESS TO A CACHE STRUCTURE - The present disclosure relates to a device for controlling the access to a cache structure comprising multiple cache sets during the execution of at least one computer program, the device comprising a module for generating seed values during the execution of the at least one computer program; a parametric hash function module for generating a cache set identifier to access the cache structure, the identifier being generated by combining a seed value generated by the module for generating seed values and predetermined bits of an address to access a main memory associated to the cache structure. | 03-20-2014 |
20140082285 | INTERCLUSTER RELATIONSHIP MANAGEMENT - Data storage and management systems can be interconnected as clustered systems to distribute data and operational loading. Further, independent clustered storage systems can be associated to form peered clusters. Provided herein are methods and systems for creating and managing intercluster relationships between independent clustered storage systems, allowing the respective independent clustered storage systems to exchange data and distribute management operations between each other while mitigating administrator involvement. Cluster introduction information is provided on a network interface of one or more nodes in a cluster, and intercluster relationships are created between peer clusters. A relationship can be created by initiating contact with a peer using a logical interface, and respective peers retrieving the introduction information provided on the network interface. Respective peers have a role/profile associated with the provided introduction information, which is mapped to the peers, allowing pre-defined access to respective peers. | 03-20-2014 |
20140115255 | STORAGE SYSTEM AND METHOD FOR CONTROLLING STORAGE SYSTEM - It is provided a storage system, comprising a storage device for storing data and at least one controller for controlling reading/writing of the data from/to the storage device. The at least one controller each includes a first cache memory for temporarily storing the data read from the storage device by file access, and a second cache memory for temporarily storing the data to be read/written from/to the storage device by block access. The processor reads the requested data from the storage device in the case where data requested by a file read request received from a host computer is not stored in the first cache memory, stores the data read from the storage device in the first cache memory without storing the data in the second cache memory, and transfers the data stored in the first cache memory to the host computer that has issued the file read request. | 04-24-2014 |
20140122804 | PROTECTING GROUPS OF MEMORY CELLS IN A MEMORY DEVICE - Methods for memory block protection and memory devices are disclosed. One such method for memory block protection includes programming protection data to protection bytes diagonally across different word lines of a particular memory block (e.g., Boot ROM). The protection data can be retrieved by an erase verify operation that can be performed at power-up of the memory device. | 05-01-2014 |
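One reading of "diagonally across different word lines" is that word line *i* holds its protection byte at column *(start + i) mod width*. The layout function below is an assumed interpretation for illustration only, not the patented placement:

```python
def protection_positions(num_wordlines, bytes_per_wordline, start_column=0):
    """Sketch of placing one protection byte per word line along a
    diagonal: word line i gets its byte at column (start + i) modulo
    the row width, so the bytes step across columns rather than
    stacking in one column. An assumed reading of the abstract.
    """
    return [(wl, (start_column + wl) % bytes_per_wordline)
            for wl in range(num_wordlines)]
```

With 4 word lines and 3 bytes per word line, the positions wrap after the third row.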
20140129772 | PREFETCHING TO A CACHE BASED ON BUFFER FULLNESS - A processor transfers prefetch requests from their targeted cache to another cache in a memory hierarchy based on a fullness of a miss address buffer (MAB) or based on confidence levels of the prefetch requests. Each cache in the memory hierarchy is assigned a number of slots at the MAB. In response to determining the fullness of the slots assigned to a cache is above a threshold when a prefetch request to the cache is received, the processor transfers the prefetch request to the next lower level cache in the memory hierarchy. In response, the data targeted by the access request is prefetched to the next lower level cache in the memory hierarchy, and is therefore available for subsequent provision to the cache. In addition, the processor can transfer a prefetch request to lower level caches based on a confidence level of a prefetch request. | 05-08-2014 |
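The fullness-driven transfer described here reduces to walking down the hierarchy while each level's miss-address-buffer slots exceed a threshold. A behavioral sketch with assumed data shapes (per-level `used`/`total` slot counts), not the processor logic itself:

```python
def route_prefetch(request, mab_slots, thresholds):
    """Decide which cache level receives a prefetch request.

    mab_slots: per-level dicts with 'used' and 'total' MAB slot counts.
    thresholds: per-level fullness thresholds in [0, 1].
    Starts at the level the request targets and transfers the prefetch
    to the next lower-level cache while the current level's assigned
    slots are too full. Field names are illustrative assumptions.
    """
    level = request["target_level"]
    lowest = len(mab_slots) - 1
    while (level < lowest and
           mab_slots[level]["used"] / mab_slots[level]["total"] > thresholds[level]):
        level += 1  # transfer the prefetch to the next lower-level cache
    return level
```

A prefetch targeting L1 with 9 of 10 slots used (threshold 0.8) is redirected to L2; with only 2 slots used it stays at L1.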
20140136783 | HYBRID HARDWARE AND SOFTWARE IMPLEMENTATION OF TRANSACTIONAL MEMORY ACCESS - Embodiments of the invention relate a hybrid hardware and software implementation of transactional memory accesses in a computer system. A processor including a transactional cache and a regular cache is utilized in a computer system that includes a policy manager to select one of a first mode (a hardware mode) or a second mode (a software mode) to implement transactional memory accesses. In the hardware mode the transactional cache is utilized to perform read and write memory operations and in the software mode the regular cache is utilized to perform read and write memory operations. | 05-15-2014 |
20140149668 | PREFETCHING ACCORDING TO ATTRIBUTES OF ACCESS REQUESTS - Attributes of access requests can be used to distinguish one set of access requests from another set of access requests. The prefetcher can determine a pattern for each set of access requests and then prefetch cache lines accordingly. In an embodiment in which there are multiple caches, a prefetcher can determine a destination for prefetched cache lines associated with a respective set of access requests. For example, the prefetcher can prefetch one set of cache lines into one cache, and another set of cache lines into another cache. Also, the prefetcher can determine a prefetch distance for each set of access requests. For example, the prefetch distances for the sets of access requests can be different. | 05-29-2014 |
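Keying prefetch state by a request attribute, as this abstract describes, keeps interleaved access streams from corrupting each other's detected pattern. A minimal stride-detecting sketch; the single-stride model and per-stream distance default are assumptions for illustration:

```python
from collections import defaultdict

class StreamPrefetcher:
    """Sketch keying prefetch state by a request attribute (stream id).

    Each attribute value gets its own detected stride and its own
    prefetch distance, so two interleaved streams produce independent
    prefetch addresses.
    """

    def __init__(self):
        self.last_addr = {}
        self.stride = {}
        self.distance = defaultdict(lambda: 2)  # per-stream prefetch depth

    def observe(self, stream_id, addr):
        """Record an access; return the addresses to prefetch for it."""
        prev = self.last_addr.get(stream_id)
        if prev is not None:
            self.stride[stream_id] = addr - prev
        self.last_addr[stream_id] = addr
        stride = self.stride.get(stream_id)
        if not stride:
            return []   # no pattern detected yet for this stream
        d = self.distance[stream_id]
        return [addr + stride * i for i in range(1, d + 1)]
```

Interleaving a 64-byte-stride stream with an 8-byte-stride stream yields correct prefetches for each, which a single shared stride detector would not.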
20140149669 | CACHE MEMORY AND METHODS FOR MANAGING DATA OF AN APPLICATION PROCESSOR INCLUDING THE CACHE MEMORY - In one example embodiment of the inventive concepts, a cache memory system includes a main cache memory including a nonvolatile random access memory, the main cache memory configured to exchange data with an external device and store the exchanged data, where each piece of exchanged data includes less significant bit (LSB) data and more significant bit (MSB) data. The cache memory system further includes a sub-cache memory including a random access memory, the sub-cache memory configured to store the LSB data of at least a portion of the data stored in the main cache memory, wherein the main cache memory and the sub-cache memory are formed as a single-level cache memory. | 05-29-2014 |
20140164700 | SYSTEM AND METHOD OF DETECTING CACHE INCONSISTENCIES - A system and method of detecting cache inconsistencies among distributed data centers is described. Key-based sampling captures a complete history of a key for comparing cache values across data centers. In one phase of a cache inconsistency detection algorithm, a log of operations performed on a sampled key is compared in reverse chronological order for inconsistent cache values. In another phase, a log of operations performed on a candidate key having inconsistent cache values as identified in the previous phase is evaluated in near real time in forward chronological order for inconsistent cache values. In a confirmation phase, a real time comparison of actual cache values stored in the data centers is performed on the candidate keys identified by both the previous phases as having inconsistent cache values. An alert is issued that identifies the data centers in which the inconsistent cache values were reported. | 06-12-2014 |
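The first phase described here, comparing per-data-center operation logs for a sampled key in reverse chronological order, can be sketched as follows. The log structure (`dc -> key -> [(timestamp, value)]`) is an assumption for illustration, not the system's actual schema:

```python
def find_candidate_keys(logs_by_dc):
    """Phase-one sketch: flag sampled keys whose most recent cached
    values disagree across data centers.

    logs_by_dc maps data-center name -> key -> list of (timestamp,
    value) operations. Each key's log is walked newest-first and the
    latest value per data center is compared; any disagreement makes
    the key a candidate for the later verification phases.
    """
    candidates = set()
    keys = set()
    for ops in logs_by_dc.values():
        keys.update(ops)
    for key in keys:
        latest = set()
        for ops in logs_by_dc.values():
            history = sorted(ops.get(key, []), reverse=True)  # newest first
            if history:
                latest.add(history[0][1])
        if len(latest) > 1:
            candidates.add(key)  # inconsistent cache values observed
    return candidates
```

Candidates found this way would then be re-checked in forward chronological order and finally confirmed against live cache values, per the abstract.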
20140164701 | VIRTUAL MACHINES FAILOVER - Disclosed is a computer system. | 06-12-2014 |
20140164702 | VIRTUAL ADDRESS CACHE MEMORY, PROCESSOR AND MULTIPROCESSOR - An embodiment provides a virtual address cache memory including: a TLB virtual page memory configured to, when a rewrite to a TLB occurs, rewrite entry data; a data memory configured to hold cache data using a virtual page tag or a page offset as a cache index; a cache state memory configured to hold a cache state for the cache data stored in the data memory, in association with the cache index; a first physical address memory configured to, when the rewrite to the TLB occurs, rewrite a held physical address; and a second physical address memory configured to, when the cache data is written to the data memory after the occurrence of the rewrite to the TLB, rewrite a held physical address. | 06-12-2014 |
20140189238 | Two-Level Cache Locking Mechanism - A virtually tagged cache may be configured to index virtual address entries in the cache into lockable sets based on a page offset value. When a memory operation misses on the virtually tagged cache, only the one set of virtual address entries with the same page offset may be locked. Thereafter, this general lock may be released and only an address stored in the physical tag array matching the physical address and a virtual address in the virtual tag array corresponding to the matching address stored in the physical tag array may be locked to reduce the amount and duration of locked addresses. The machine may be stalled only if a particular memory address request hits and/or tries to access one or more entries in a locked set. Devices, systems, methods, and computer readable media are provided. | 07-03-2014 |
20140201442 | CACHE BASED STORAGE CONTROLLER - Systems and techniques for continuously writing to a secondary storage cache are described. A data storage region of a secondary storage cache is divided into a first cache region and a second cache region. A data storage threshold for the first cache region is determined. Data is stored in the first cache region until the data storage threshold is met. Then, additional data is stored in the second cache region while the data stored in the first cache region is written back to a primary storage device. | 07-17-2014 |
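The two-region scheme in this abstract alternates the regions' roles: one accepts new writes while the other destages. The sketch below is synchronous for clarity (the patent's write-back would run concurrently), and a list stands in for primary storage; both are illustrative assumptions:

```python
class DualRegionCache:
    """Sketch of continuous writes to a secondary storage cache: fill
    the active region to a threshold, then accept new data in the
    other region while the full one is written back to primary
    storage. The write-back here is synchronous for simplicity.
    """

    def __init__(self, threshold, primary_storage):
        self.regions = {"A": [], "B": []}
        self.active = "A"
        self.threshold = threshold
        self.primary = primary_storage

    def write(self, data):
        self.regions[self.active].append(data)
        if len(self.regions[self.active]) >= self.threshold:
            # Swap roles: new writes land in the other region while
            # the full region drains to primary storage.
            full, self.active = self.active, ("B" if self.active == "A" else "A")
            self.primary.extend(self.regions[full])
            self.regions[full] = []
```

Writes never stall on the destage because there is always an empty (or draining) region to receive them.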
20140201443 | INTERCONNECTED RING NETWORK IN A MULTI-PROCESSOR SYSTEM - In various embodiments, the present disclosure provides a system comprising a first plurality of processing cores, ones of the first plurality of processing cores coupled to a respective core interface module among a first plurality of core interface modules, the first plurality of core interface modules configured to be coupled to form a first ring network of processing cores; a second plurality of processing cores, ones of the second plurality of processing cores coupled to a respective core interface module among a second plurality of core interface modules, the second plurality of core interface modules configured to be coupled to form a second ring network of processing cores; a first global interface module to form an interface between the first ring network and a third ring network; and a second global interface module to form an interface between the second ring network and the third ring network. | 07-17-2014 |
20140201444 | INTERCONNECTED RING NETWORK IN A MULTI-PROCESSOR SYSTEM - In various embodiments, the present disclosure provides a system comprising a first plurality of processing cores, ones of the first plurality of processing cores coupled to a respective core interface module among a first plurality of core interface modules, the first plurality of core interface modules configured to be coupled to form a first ring network of processing cores; a second plurality of processing cores, ones of the second plurality of processing cores coupled to a respective core interface module among a second plurality of core interface modules, the second plurality of core interface modules configured to be coupled to form a second ring network of processing cores; a first global interface module to form an interface between the first ring network and a third ring network; and a second global interface module to form an interface between the second ring network and the third ring network. | 07-17-2014 |
20140201445 | INTERCONNECTED RING NETWORK IN A MULTI-PROCESSOR SYSTEM - In various embodiments, the present disclosure provides a system comprising a first plurality of processing cores, ones of the first plurality of processing cores coupled to a respective core interface module among a first plurality of core interface modules, the first plurality of core interface modules configured to be coupled to form a first ring network of processing cores; a second plurality of processing cores, ones of the second plurality of processing cores coupled to a respective core interface module among a second plurality of core interface modules, the second plurality of core interface modules configured to be coupled to form a second ring network of processing cores; a first global interface module to form an interface between the first ring network and a third ring network; and a second global interface module to form an interface between the second ring network and the third ring network. | 07-17-2014 |
20140208029 | USE OF FLASH CACHE TO IMPROVE TIERED MIGRATION PERFORMANCE - For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches and tiered levels of storage: at a time when at least one data segment is to be migrated from one level of the tiered storage to another, a data migration mechanism is initiated by copying the data resident in the lower-speed cache that corresponds to the at least one data segment to a target on the other level, reading the remaining data, not previously copied from the lower-speed cache, from a source on the one level, and writing the remaining data to the target. | 07-24-2014 |
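The migration mechanism above splits the segment into a fast path (data already in the lower-speed flash cache) and a slow path (data read from the source tier). A minimal sketch assuming dict-based tiers and track-granularity keys, which are illustrative, not from the patent:

```python
def migrate_segment(segment_tracks, lower_speed_cache, source_tier, target_tier):
    """Sketch of flash-cache-assisted tiered migration.

    Tracks already resident in the lower-speed (flash) cache are copied
    straight to the target tier; only the remaining tracks are read
    from the slower source tier. Returns how many tracks took the fast
    path.
    """
    copied_from_cache = 0
    for track in segment_tracks:
        if track in lower_speed_cache:
            target_tier[track] = lower_speed_cache[track]  # fast path
            copied_from_cache += 1
        else:
            target_tier[track] = source_tier[track]        # read from source
    return copied_from_cache
```

The more of the segment that is cache-resident, the fewer reads hit the source tier, which is the performance win the abstract claims.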
20140208030 | INFORMATION PROCESSING APPARATUS AND CONTROL METHOD OF INFORMATION PROCESSING APPARATUS - An information processing apparatus including a plurality of mutually connected system boards, wherein each of the system boards includes: a plurality of processors; and a plurality of memories, each of which stores data and directory information corresponding to the data and corresponds to one of the processors, and wherein each of the plurality of processors, upon receiving from another processor a read request for data stored in a memory corresponding to its own processor, performs an exclusive logical sum operation on identification information included in the read request and identifying the other processor and a check bit included in the directory information and identifying a processor which holds target data of the read request, increments a count value included in the directory information and indicating the number of processors which hold the target data, and sets presence information included in the directory information and indicating a system board which includes the other processor. | 07-24-2014 |
20140215156 | PRIORITIZED DUAL CACHING METHOD AND APPARATUS - Provided are a prioritized dual caching method and apparatus. The dual caching apparatus includes a content cache unit configured to store a content cache separated into a first (premium) cache and a second (general) cache, a pointer storage unit configured to store a pointer for variably separating the first and second caches, a threshold value storage unit configured to store a first threshold value and a second threshold value that is less than the first threshold value, and a cache policy execution unit configured to receive a request for content, manage a request count value for the content, and execute a cache policy based on results of comparing the request count value to the first threshold value and the second threshold value and whether there is requested content in the content cache. | 07-31-2014 |
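The two-threshold policy this abstract describes can be sketched with per-content request counters: content requested at least `t1` times is held in the premium cache, at least `t2` (< `t1`) times in the general cache, and otherwise not cached. Eviction and the movable pointer between the two cache regions are omitted; all names are illustrative:

```python
class PrioritizedDualCache:
    """Sketch of the prioritized dual caching policy.

    counts tracks a request count per content id; the count decides
    whether fetched content enters the premium cache (>= t1), the
    general cache (>= t2), or stays uncached.
    """

    def __init__(self, t1, t2):
        assert t2 < t1, "second threshold must be less than the first"
        self.t1, self.t2 = t1, t2
        self.counts = {}
        self.premium, self.general = {}, {}

    def request(self, content_id, fetch):
        self.counts[content_id] = self.counts.get(content_id, 0) + 1
        if content_id in self.premium:
            return self.premium[content_id]
        if content_id in self.general:
            data = self.general[content_id]
            if self.counts[content_id] >= self.t1:   # promote to premium
                self.premium[content_id] = self.general.pop(content_id)
            return data
        data = fetch(content_id)                     # cache miss
        if self.counts[content_id] >= self.t1:
            self.premium[content_id] = data
        elif self.counts[content_id] >= self.t2:
            self.general[content_id] = data
        return data
```

With thresholds (3, 2), a third request for the same content promotes it from the general cache to the premium cache.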
20140237185 | ONE-CACHEABLE MULTI-CORE ARCHITECTURE - Technologies are generally described for methods, systems, and devices effective to implement one-cacheable multi-core architectures. In one example, a multi-core processor that includes a first and second tile may be configured to implement a one-cacheable architecture. The second tile may be configured to generate a request for a data block. The first tile may be configured to receive the request for the data block, and determine that the requested data block is part of a group of data blocks identified as one-cacheable. The first tile may further determine that the requested data block is stored in a first cache in the first tile. The first tile may send the data block from the first cache in the first tile to the second tile, and invalidate the data blocks of the group of data blocks in the first cache in the first tile. | 08-21-2014 |
20140281232 | System and Method for Capturing Behaviour Information from a Program and Inserting Software Prefetch Instructions - Methods, systems and software for inserting prefetches into software applications or programs are described. A baseline program is analyzed to identify target instructions for which prefetching may be beneficial using various pattern analyses. Optionally, a cost/benefit analysis can be performed to determine if it is worthwhile to insert prefetches for the target instructions. | 09-18-2014 |
20140281233 | STORING DATA ACROSS A PLURALITY OF STORAGE NODES - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for storing data on storage nodes. In one aspect, a method includes receiving a file to be stored across a plurality of storage nodes each including a cache. The file is stored by storing portions of the file, each on a different storage node. A first portion is written to a first storage node's cache until determining that the first storage node's cache is full. A different second storage node is selected in response to determining that the first storage node's cache is full. For each portion of the file, a location of the portion is recorded, the location indicating at least a storage node storing the portion. | 09-18-2014 |
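The placement loop described in this abstract, writing to a node's cache until it is full, then moving to the next node and recording each portion's location, can be sketched as follows. The uniform cache capacity and list-based node caches are simplifying assumptions:

```python
def store_file(chunks, nodes, cache_capacity):
    """Sketch: write successive file portions to a node's cache until
    it is full, then select a different node, recording each portion's
    location.

    nodes maps node name -> list used as that node's cache; capacity
    is uniform for simplicity. Returns the portion-index -> node-name
    location table. Raises IndexError if every cache fills up.
    """
    locations = {}
    node_names = list(nodes)
    current = 0
    for index, chunk in enumerate(chunks):
        while len(nodes[node_names[current]]) >= cache_capacity:
            current += 1  # this node's cache is full; select another node
        name = node_names[current]
        nodes[name].append(chunk)
        locations[index] = name  # record where this portion lives
    return locations
```

The recorded location table is what later reads would consult to reassemble the file from its scattered portions.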
20140289468 | LIGHTWEIGHT PRIMARY CACHE REPLACEMENT SCHEME USING ASSOCIATED CACHE - One aspect provides a method including: responsive to a request for data and a miss in both a first cache and a second cache, retrieving the data from memory, the first cache storing at least a subset of data stored in the second cache; inferring from information pertaining to the first cache a replacement entry in the second cache; and responsive to inferring from information pertaining to the first cache a replacement entry in the second cache, replacing an entry in the second cache with the data from memory. Other aspects are described and claimed. | 09-25-2014 |
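Because the first cache holds a subset of the second, the second cache's victim can be inferred cheaply: prefer a line that is not also in the first cache, avoiding a back-invalidation. The set layout and fallback choice below are illustrative assumptions about the scheme:

```python
def choose_l2_victim(l2_set, l1_tags):
    """Sketch of the lightweight replacement scheme: since the first
    cache stores a subset of the second, prefer evicting a second-cache
    entry that is absent from the first cache, so no first-cache line
    must be invalidated. Falls back to the first entry if every line
    is also cached in L1.
    """
    for tag in l2_set:
        if tag not in l1_tags:
            return tag
    return l2_set[0]

def handle_double_miss(addr, memory, l2_set, l1_tags):
    """On a miss in both caches, fetch from memory and replace the
    inferred victim in the second cache. Illustrative structure only."""
    data = memory[addr]
    victim = choose_l2_victim(l2_set, l1_tags)
    l2_set[l2_set.index(victim)] = addr
    return data
```

The scheme needs no dedicated replacement state (e.g., LRU bits) in the second cache, which is what makes it "lightweight."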
20140289469 | PROCESSOR AND CONTROL METHOD OF PROCESSOR - A processor includes: processing units, each including a first cache memory; a second cache memory shared among the processing units; an acquiring unit to acquire lock target information, including first storage location information in the first cache memory included in one of the processing units, from an access request to data cached in the second cache memory; a retaining unit to retain the lock target information until response processing for the access request is completed; and a control unit to control an access request to the second cache memory, the access request being related to a replace request to a first cache memory, based on the lock target information and second storage location information of replace target data in the first cache memory, the second storage location information being acquired from the access request related to the replace request. | 09-25-2014 |
20140297956 | ARITHMETIC PROCESSING APPARATUS, INFORMATION PROCESSING APPARATUS AND CONTROL METHOD OF ARITHMETIC PROCESSING APPARATUS - An arithmetic processing apparatus includes a plurality of first processing units to be connected to a cache memory; a plurality of second processing units to be connected to the cache memory and to acquire, into the cache memory, data to be processed by the first processing unit before each of the plurality of first processing units executes processing; and a schedule processing unit to control a schedule for acquiring the data of the plurality of second processing units into the cache memory. | 10-02-2014 |
20140310464 | PARALLEL DESTAGING WITH REPLICATED CACHE PINNING - Methods, apparatus and computer program products implement embodiments of the present invention that include identifying non-destaged first data in a write cache. Upon detecting second data in a master read cache, the second data is copied to one or more backup read caches, and the second data is pinned to the master and the backup read caches. Using the first data stored in the write cache and the second data stored in the master read cache, one or more parity values are calculated, and the first data and the one or more parity values are destaged. | 10-16-2014 |
20140310465 | BACKUP CACHE WITH IMMEDIATE AVAILABILITY - Methods, apparatus and computer program products implement embodiments of the present invention that include receiving, by a processor of a storage system, metadata describing a first cache configured as a master cache having non-destaged data, and defining, using the received metadata, a second cache configured as a backup cache for the master cache. Subsequent to defining the second cache, the non-destaged data is retrieved from the first cache and stored to the second cache. | 10-16-2014 |
20140310466 | Multi-processor bus and cache interconnection system - A multi-processor cache and bus interconnection system. A multi-processor is provided with a segmented cache and an interconnection system for connecting the processors to the cache segments. An interface unit communicates with external devices using module IDs and timestamps. A buffer protocol includes a retransmission buffer and method. | 10-16-2014 |
20140310467 | MULTIPLE-CORE COMPUTER PROCESSOR FOR REVERSE TIME MIGRATION - A multi-core computer processor including a plurality of processor cores interconnected in a Network-on-Chip (NoC) architecture, a plurality of caches, each of the plurality of caches being associated with one and only one of the plurality of processor cores, and a plurality of memories, each of the plurality of memories being associated with a different set of at least one of the plurality of processor cores and each of the plurality of memories being configured to be visible in a global memory address space such that the plurality of memories are visible to two or more of the plurality of processor cores, wherein at least one of a number of the processor cores, a size of each of the plurality of caches, or a size of each of the plurality of memories is configured for performing a reverse-time-migration (RTM) computation. | 10-16-2014 |
20150012706 | MANAGING METADATA FOR CACHING DEVICES DURING SHUTDOWN AND RESTART PROCEDURES - A computer program product, system, and method for managing metadata for caching devices during shutdown and restart procedures. Fragment metadata is generated for each fragment of data from the storage server stored in the cache device. The fragment metadata is written to at least one chunk of storage in a metadata directory in the cache device. For each chunk in the cache device to which fragment metadata is written, chunk metadata is generated for the chunk and written to the metadata directory in the cache device. Header metadata, having information on access of the storage server, is written to the metadata directory in the cache device. The written header metadata, chunk metadata, and fragment metadata are used to validate the metadata directory and the fragment data in the cache device during a restart operation. | 01-08-2015 |
20150046649 | MANAGING CACHING OF EXTENTS OF TRACKS IN A FIRST CACHE, SECOND CACHE AND STORAGE - Provided are a computer program product, system, and method for managing caching of extents of tracks in a first cache, second cache and storage device. A determination is made of an eligible track in a first cache eligible for demotion to a second cache, wherein the tracks are stored in extents configured in a storage device, wherein each extent is comprised of a plurality of tracks. A determination is made of an extent including the eligible track and whether second cache caching for the determined extent is enabled or disabled. The eligible track is demoted from the first cache to the second cache in response to determining that the second cache caching for the determined extent is enabled. Selection is made not to demote the eligible track in response to determining that the second cache caching for the determined extent is disabled. | 02-12-2015 |
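The extent-gated demotion decision in this abstract is a small conditional: an eligible track moves from the first cache to the second only when second-cache caching is enabled for the track's extent. The mapping names below are hypothetical, introduced only for this sketch:

```python
def demote_eligible_track(track, first_cache, second_cache,
                          extent_of, extent_cache_enabled):
    """Sketch: demote an eligible track from the first cache to the
    second cache only if second-cache caching is enabled for the
    extent containing the track; otherwise drop it from the first
    cache without demoting.

    extent_of maps track -> extent id; extent_cache_enabled maps
    extent id -> bool. Returns True if the track was demoted.
    """
    data = first_cache.pop(track)
    extent = extent_of[track]
    if extent_cache_enabled.get(extent, False):
        second_cache[track] = data   # demote to the second cache
        return True
    return False                     # selected not to demote
```

Disabling second-cache caching per extent lets the system skip demotions for extents whose tracks are unlikely to be re-read.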
20150067258 | CACHE MANAGEMENT APPARATUS OF HYBRID CACHE-BASED MEMORY SYSTEM AND THE HYBRID CACHE-BASED MEMORY SYSTEM - A cache management apparatus includes an access pattern analysis unit configured to analyze an access pattern of each of one or more pages present in a first cache by monitoring data input/output (I/O) requests, a page class management unit configured to determine a class of each of the pages based on results of the analysis performed by the access pattern analysis unit, and a page transfer management unit configured to transfer one or more pages classified into a first class including pages to be transferred, to a second cache based on results of the determination performed by the page class management unit. | 03-05-2015 |
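The analysis-classify-transfer pipeline in this abstract can be sketched with a request counter per page; the single-counter "pattern" and the hot-count threshold are simplifying assumptions standing in for the patent's access-pattern analysis:

```python
def classify_pages(access_log, hot_threshold):
    """Sketch of the analysis and classification steps: count I/O
    requests per page and put pages with at least hot_threshold
    accesses into the first class (pages to be transferred).
    Returns (first_class_pages, per-page counts).
    """
    counts = {}
    for page in access_log:
        counts[page] = counts.get(page, 0) + 1
    first_class = {p for p, n in counts.items() if n >= hot_threshold}
    return first_class, counts

def transfer_first_class(first_cache, second_cache, first_class):
    """Sketch of the transfer step: move first-class pages from the
    first cache to the second cache."""
    for page in list(first_cache):
        if page in first_class:
            second_cache[page] = first_cache.pop(page)
```

Pages seen repeatedly in the I/O stream migrate to the second cache; pages touched once stay put.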
20150074352 | Multiprocessor Having Segmented Cache Memory - A sequential data processor having a plurality of data processors, a plurality of memory segments, and a plurality of bus segments selectively interconnecting the data processors and memory segments to form a data cache. | 03-12-2015 |
20150089138 | Fast Data Initialization - A method and system for fast file initialization is provided. An initialization request to create or extend a file is received. The initialization request comprises or identifies file template metadata. A set of allocation units are allocated, the set of allocation units comprising at least one allocation unit for the file on a primary storage medium without initializing at least a portion of the file on the primary storage medium. The file template metadata is stored in a cache. The cache resides in at least one of volatile memory and persistent flash storage. A second request is received corresponding to a particular allocation unit of the set of allocation units. Particular file template metadata associated with the particular allocation unit is obtained. In response to the second request, at least a portion of a new allocation unit is generated. | 03-26-2015 |
20150095576 | CONSISTENT AND EFFICIENT MIRRORING OF NONVOLATILE MEMORY STATE IN VIRTUALIZED ENVIRONMENTS - Updates to nonvolatile memory pages are mirrored so that certain features of a computer system, such as live migration of applications, fault tolerance, and high availability, will be available even when nonvolatile memory is local to the computer system. Mirroring may be carried out when a cache flush instruction is executed to flush contents of the cache into nonvolatile memory. In addition, mirroring may be carried out asynchronously with respect to execution of the cache flush instruction by retrieving content that is to be mirrored from the nonvolatile memory using memory addresses of the nonvolatile memory corresponding to target memory addresses of the cache flush instruction. | 04-02-2015 |
20150127906 | Techniques for Logging Addresses of High-Availability Data - A technique for operating a high-availability (HA) data processing system includes, in response to receiving an HA logout indication at a cache, initiating a walk of the cache to locate cache lines in the cache that include HA data. In response to determining that a cache line includes HA data, an address of the cache line is logged in a first portion of a buffer in the cache. In response to the first portion of the buffer reaching a determined fill level, contents of the first portion of the buffer are logged to another memory. In response to all cache lines in the cache being walked, the cache walk is terminated. | 05-07-2015 |
20150143043 | DECENTRALIZED ONLINE CACHE MANAGEMENT FOR DIGITAL CONTENT - A first cache is provided to cache a first portion of a first block of digital content received over a network connection shared between a first user associated with the first cache and at least one second user. The first cache caches the first portion in response to the first user or the second user(s) requesting the first block. The first cache selects the first portion based on a fullness of the first cache, a number of blocks cached in the first cache, or a cache eviction rule associated with the first cache. | 05-21-2015 |
20150309934 | ARITHMETIC PROCESSING APPARATUS AND METHOD FOR CONTROLLING SAME - An arithmetic processing apparatus includes: first and second core groups, each including cores, first to Nth caches (N being an integer greater than one) that process access requests from the cores, and an intra-core-group bus through which the access requests from the cores are provided to the first to Nth caches; and first to Nth inter-core-group buses, each provided between the corresponding caches of the first and second core groups. The first to Nth caches in the first core group individually store data from first to Nth memory spaces in a memory, respectively. The first to Nth caches in the second core group individually store data from (N+1)th to 2Nth memory spaces, respectively. The first to Nth caches in the first core group access the data in the (N+1)th to 2Nth memory spaces, respectively, via the first to Nth inter-core-group buses. | 10-29-2015 |
20150317093 | STORAGE SYSTEM - A storage controller has a processor, a volatile first cache memory that is coupled to the processor and that temporarily stores data, a nonvolatile second cache memory that is coupled to a microprocessor and that temporarily stores data, and a battery that is configured to supply electrical power to at least the processor and the first cache memory when a power stoppage has occurred. The second cache memory includes a dirty data area for storing dirty data, which is data that is not stored in the storage device, and a remaining area other than the dirty data area. When a power stoppage has occurred, the processor stores as target data in the remaining area of the second cache memory either all or a part of the data stored in the first cache memory. | 11-05-2015 |
20150317182 | THREAD WAITING IN A MULTITHREADED PROCESSOR ARCHITECTURE - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for thread waiting. One of the methods includes starting, by a first thread on a processing core, a task by starting to execute a plurality of task instructions; initiating, by the first thread, an atomic memory transaction using a transactional memory system, including: specifying, to the transactional memory system, at least a first memory address for the atomic memory transaction and temporarily ceasing the task by not proceeding to execute the task instructions; receiving, by the first thread, a signal as a consequence of a second thread accessing the first memory address specified for the atomic memory transaction; and as a consequence of receiving the signal, resuming the task, by the first thread, and continuing to execute the task instructions. | 11-05-2015 |
20150331633 | METHOD AND SYSTEM OF CACHING WEB CONTENT IN A HARD DISK - A method and system of storing and retrieving web content in a cache hard disk memory, in which read requests and write requests are separated in two different queues, and read requests are prioritized. This way, write requests are selectively delayed to favour read operations and improve user experience. | 11-19-2015 |
20150338904 | MULTI-CORE APPARATUS AND METHOD FOR RESTORING DATA ARRAYS FOLLOWING A POWER GATING EVENT - An apparatus includes a fuse array and a plurality of cores. The fuse array is programmed with compressed data. Each of the plurality of cores accesses the fuse array upon power-up/reset to read and decompress the compressed data, and to store decompressed data sets for one or more cache memories within that core in a store that is coupled to the core. Each of the plurality of cores has reset logic and sleep logic. The reset logic employs the decompressed data sets to initialize the one or more cache memories upon power-up/reset. The sleep logic determines that power is restored following a power gating event, and subsequently accesses the store to retrieve and employ the decompressed data sets to initialize the one or more caches following the power gating event. | 11-26-2015 |
20150338905 | MULTI-CORE DATA ARRAY POWER GATING RESTORAL MECHANISM - An apparatus includes a fuse array and a store. The fuse array is programmed with compressed configuration data for a plurality of cores. The store is coupled to the plurality of cores and includes a plurality of sub-stores, each corresponding to one of the plurality of cores. One of the plurality of cores accesses the fuse array upon power-up/reset to read and decompress the compressed configuration data, and stores a plurality of decompressed configuration data sets for one or more cache memories within each of the plurality of cores in the plurality of sub-stores. Each of the plurality of cores has sleep logic configured to subsequently access its corresponding sub-store to retrieve and employ the decompressed configuration data sets to initialize the one or more caches following a power gating event. | 11-26-2015 |
20150339231 | MULTI-CORE MICROPROCESSOR POWER GATING CACHE RESTORAL MECHANISM - An apparatus includes a fuse array and a store. The fuse array is disposed on a die and is programmed with compressed configuration data for a plurality of cores. The store is coupled to the plurality of cores and includes a plurality of sub-stores, each corresponding to one of the plurality of cores. One of the plurality of cores accesses the fuse array upon power-up/reset to read and decompress the compressed configuration data, and stores a plurality of decompressed configuration data sets for one or more cache memories within each of the plurality of cores in the plurality of sub-stores. Following a power gating event, each of the plurality of cores subsequently accesses its corresponding sub-store to retrieve and employ the decompressed configuration data sets to initialize the caches. | 11-26-2015 |
20150339232 | APPARATUS AND METHOD FOR REPAIRING CACHE ARRAYS IN A MULTI-CORE MICROPROCESSOR - An apparatus includes a fuse array, a store, and a plurality of cores. The fuse array is programmed with compressed configuration data. The store provides storage of, and access to, decompressed configuration data sets. One of the plurality of cores accesses the fuse array upon power-up/reset to read and decompress the compressed configuration data, and to store the decompressed configuration data sets for one or more cache memories in the store. Each of the plurality of cores includes reset logic configured to employ the decompressed configuration data sets to initialize the one or more cache memories upon power-up/reset, and sleep logic configured to determine that power is restored following a power gating event and to subsequently access the store to retrieve and employ the decompressed configuration data sets to initialize the one or more caches. | 11-26-2015 |
20150347297 | SYSTEMS AND METHODS FOR IMPLEMENTING A TAG-LESS SHARED CACHE AND A LARGER BACKING CACHE - A computer processing system includes a plurality of nodes, each node having at least one processor core and at least one level of cache memory which is private to the node, a shared, last level cache (LLC) memory device and a shared, last level cache location buffer containing cache location entries, each cache location entry storing an address tag and a plurality of location information. The location information stored in a cache location entry points to an identified cacheline location within the LLC that stores a cacheline associated with the location information. The cacheline stored in the LLC has associated information identifying the cache location entry. | 12-03-2015 |
20150347298 | TRACKING ALTERNATIVE CACHELINE PLACEMENT LOCATIONS IN A CACHE HIERARCHY - Data can be stored in a multi-level cache hierarchy memory system by, for example, storing valid data associated with a cacheline in a primary location in a first cache memory location. The first cache memory also stores location information about an alternative location in a second cache memory associated with the cacheline. Space is allocated in the alternative location of the second cache memory to store data associated with the cacheline. | 12-03-2015 |
20150356015 | PROCESSOR PERFORMANCE BY DYNAMICALLY RE-ADJUSTING THE HARDWARE STREAM PREFETCHER STRIDE - An apparatus may include a first memory, a control circuit, a first address comparator and a second address comparator. The first memory may store a table, which may include an expected address of a next memory access and an offset to increment a value of the expected address. The control circuit may read data at a predicted address in a second memory and store the read data in a cache. The first and second address comparators may determine if a value of a received address is between the value of the expected address and the value of the expected address minus a value of the offset. The control circuit may also modify the value of the offset responsive to determining the value of the received address is between the value of the expected address and the value of the expected address minus the value of the offset. | 12-10-2015 |
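The stride-readjustment check this entry describes can be sketched as follows (a minimal Python sketch; the class and method names are illustrative assumptions, not taken from the patent). A received address is tested against the window between the expected address and the expected address minus the offset, and the offset is narrowed when the access falls short of the prediction:

```python
class StridePrefetcher:
    """Sketch of a stream prefetcher that re-adjusts its stride.

    The entry above describes comparing a received address against the
    range (expected - offset, expected); all names here are illustrative.
    """

    def __init__(self, expected_addr, offset):
        self.expected_addr = expected_addr
        self.offset = offset

    def predicted_addr(self):
        # The next address the prefetcher would read ahead of demand.
        return self.expected_addr + self.offset

    def observe(self, addr):
        """Update prefetcher state for a demand access at `addr`."""
        lower = self.expected_addr - self.offset
        if lower <= addr < self.expected_addr:
            # Access landed short of the expected address: the stride
            # was too large, so shrink it to the observed distance.
            self.offset = self.expected_addr - addr
        elif addr == self.expected_addr:
            # Prediction was exact; advance the expected address.
            self.expected_addr += self.offset
        return self.offset
```

For example, with `expected_addr=0x100` and `offset=0x40`, an access at `0xE0` falls inside the window and shrinks the offset to `0x20`.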
20150370713 | STORAGE SYSTEM AND STORAGE CONTROL METHOD - In a storage system, first and second controllers have respective first and second buffer and cache areas. The first controller stores write data in accordance with a write request in the first cache area without involving the first buffer area and transfers the stored write data to the second cache area without involving the second buffer area. The first controller is configured to determine which of the first and second cache areas is to be used as a copy source and which as a copy destination, depending on whether the storing of the write data in the first cache area was successful or on whether the transfer of the write data from the first cache area to the second controller was successful, and, by copying data from the copy source to the copy destination, recovers data in an area related to a transfer failure. | 12-24-2015 |
20150378913 | CACHING DATA IN A MEMORY SYSTEM HAVING MEMORY NODES AT DIFFERENT HIERARCHICAL LEVELS - A memory system includes a plurality of memory nodes provided at different hierarchical levels of the memory system, each of the memory nodes including a corresponding memory storage and a cache. A memory node at a first of the different hierarchical levels is coupled to a processor with lower communication latency than a memory node at a second of the different hierarchical levels. The memory nodes are to cooperate to decide which of the memory nodes is to cache data of a given one of the memory nodes. | 12-31-2015 |
20160019145 | STORAGE SYSTEM AND CACHE CONTROL METHOD - Of first and second storage controllers, a receiving controller that receives a read request transfers the read request to the associated controller, which is associated with the read source storage area, when the receiving controller is not the associated controller. It is, however, the receiving controller that reads the read-target data from the read source storage device, writes the read-target data to a cache memory of the receiving controller, and transmits the read-target data written in that cache memory to a host apparatus. | 01-21-2016 |
20160034193 | CACHE MOBILITY - A method and system of selecting and migrating relevant data from among data associated with a workload of a virtual machine and stored in source storage cache memory in a dynamic computing environment is described. The method includes selecting one or more policies, the one or more policies including a size policy defining a default maximum size for the relevant data. The method also includes selecting the relevant data from among the data based on the one or more policies in a default mode, and migrating the relevant data from the source storage cache memory to target storage cache memory. | 02-04-2016 |
20160034197 | DATA MIGRATION METHOD AND DATA MIGRATION DEVICE - A data migration method includes creating, by a first control processor that controls a first cache memory storing first cache data cached from first storage data stored in a storage, first management information including information indicating a storage location of the first cache data in the first cache memory and information indicating, for each block of a predetermined data size in the first cache memory, whether or not the first storage data has been updated in accordance with an update of the first cache data; and, when a program that accesses the first cache data migrates to a different node, transmitting, by the first control processor, the first management information to a second control processor that controls a second cache memory capable of being accessed by the program after migration to the different node. | 02-04-2016 |
20160034397 | Method and Apparatus for Processing Data and Computer System - A method and an apparatus for processing data and a computer system are provided. The method includes copying a shared virtual memory page to which a first process requests access into off-chip memory of a computing node, and using the shared virtual memory page copied into the off-chip memory as a working page of the first process; and, before the first process performs a write operation on the working page, creating, in on-chip memory of the computing node, a backup page of the working page, so as to back up the original data of the working page. Because the page data is backed up in the on-chip memory before a write operation is performed on a working page, data consistency is ensured when multiple processes operate on a shared virtual memory page, while off-chip memory is accessed as little as possible and program speed is improved. | 02-04-2016 |
20160041774 | COMMAND AND DATA SELECTION IN STORAGE CONTROLLER SYSTEMS - A storage controller system may include a host controller that queues host commands as data transfer commands in a plurality of queue channels. The storage controller system may also include a data storage controller that selects data transfer commands for execution. The data storage controller may select all data transfer commands associated with a host command when all of the data transfer commands are located at heads of the queue channels. Alternatively, the data storage controller may select for execution data transfer commands at heads of the queue channels when associated cache areas are available to receive data, regardless of whether all of the data transfer commands associated with a host command are at the heads. The host controller may then retrieve the data in the cache areas when all of the data to be sent to the host in response to the host command is being cached. | 02-11-2016 |
20160041906 | SHARDING OF IN-MEMORY OBJECTS ACROSS NUMA NODES - Techniques are provided for sharding objects across different compute nodes. In one embodiment, a database server instance generates, for an object, a plurality of in-memory chunks including a first in-memory chunk and a second in-memory chunk, where each in-memory chunk includes a different portion of the object. The database server instance assigns each in-memory chunk to one of a plurality of compute nodes, assigning the first in-memory chunk to a first local memory of a first compute node and the second in-memory chunk to a second local memory of a second compute node. The database server instance stores an in-memory map that indicates a memory location for each in-memory chunk: the in-memory map indicates that the first in-memory chunk is located in the first local memory of the first compute node and that the second in-memory chunk is located in the second local memory of the second compute node. | 02-11-2016 |
20160062888 | LEAST DISRUPTIVE CACHE ASSIGNMENT - The embodiments are directed to methods and appliances for assigning communication network caches. The methods and appliances can assign storage buckets to caches in a manner that is minimally disruptive to all cache assignments. The methods and appliances can determine a minimum number of cache assignments and reassignments to perform based on a plurality of factors including a number of caches added and removed to a communication network during a given time period, a number of buckets in the communication network, and current cache assignment information. The methods and appliances determine a quantity of buckets to be assigned to each cache, and a quantity of extra buckets to assign, and selectively choose certain buckets for reassignment. | 03-03-2016 |
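The minimally disruptive assignment this entry describes can be illustrated with a short Python sketch (function and variable names are assumptions for illustration, not the patented algorithm): buckets keep their current cache whenever that cache still exists and is within its quota, and only orphaned or over-quota buckets are moved.

```python
def reassign_buckets(assignment, caches):
    """Minimally disruptive bucket-to-cache reassignment (illustrative).

    `assignment` maps bucket id -> cache id; `caches` is the current list
    of cache ids after additions/removals. Buckets keep their cache
    whenever possible; only orphaned buckets and buckets on over-quota
    caches are reassigned.
    """
    n = len(assignment)
    base, extra = divmod(n, len(caches))
    # Each cache gets `base` buckets; the first `extra` caches get one more.
    quota = {c: base + (1 if i < extra else 0) for i, c in enumerate(caches)}

    kept = {}
    to_move = []
    for bucket, cache in sorted(assignment.items()):
        if cache in quota and quota[cache] > 0:
            kept[bucket] = cache          # undisturbed assignment
            quota[cache] -= 1
        else:
            to_move.append(bucket)        # cache removed, or over quota

    # Hand the displaced buckets to caches with remaining quota.
    free = [c for c in caches for _ in range(quota[c])]
    for bucket, cache in zip(to_move, free):
        kept[bucket] = cache
    return kept
```

Adding a cache C to a two-cache system of four buckets, for instance, moves exactly one bucket; removing a cache moves only that cache's buckets.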
20160062915 | STORAGE CONTROL DEVICE AND STORAGE CONTROL METHOD - An apparatus includes a first cache memory, a second cache memory, and a processor coupled to the first cache memory and the second cache memory. The processor is configured to store, in the second cache memory, data deleted from the first cache memory; to store, in the second cache memory, first data stored at a first address of the storage device in a case where the first address is included in first management information and is not included in second management information, according to a request for access to the first address of the storage device, the first management information including the addresses in the storage device of specific data stored in the storage device, and the second management information including the addresses in the storage device of data stored in both the second cache memory and the storage device; and to register the first address in the second management information. | 03-03-2016 |
20160085683 | DATA RECEIVING DEVICE AND DATA RECEIVING METHOD - According to one embodiment, a data receiving device includes: a communication circuit to receive first data and second data over a network; a first storage; a second storage in which data reads and data writes are performed in fixed-size blocks; and a processor. The processor sets a first buffer and a second buffer in the first storage. The processor writes the tail data of the first data into the allocated area in the second buffer. The tail data has a size equal to the remainder when a first value is divided by the size of the first buffer, the first value being the size of the first data minus the size of the area available in the first buffer before the first data is written. The processor writes the second data into an area sequential to the area of the tail data. | 03-24-2016 |
20160092357 | Apparatus and Method to Transfer Data Packets between Domains of a Processor - In an embodiment, a processor includes a first domain to operate according to a first clock. The first domain includes a write source, a payload bubble generator first in first out buffer (payload BGF) to store data packets, and write credit logic to maintain a count of write credits. The processor also includes a second domain to operate according to a second clock. When the write source has a data packet to be stored while the second clock is shut down, the write source is to write the data packet to the payload BGF responsive to the count of write credits being at least one, and after the second clock is restarted the second domain is to read the data packet from the payload BGF. Other embodiments are described and claimed. | 03-31-2016 |
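The credit-gated behavior this entry describes — a write source may deposit a packet into the cross-domain buffer only while it holds a write credit, even when the consumer's clock is shut down — can be sketched in Python (names are illustrative; this is not the patented circuit):

```python
from collections import deque

class CreditFifo:
    """Credit-gated FIFO between two clock domains (illustrative sketch).

    The writer in the first domain may enqueue a packet only while it
    holds at least one write credit, even if the second domain's clock
    is shut down; credits return as the second domain drains packets.
    """

    def __init__(self, depth):
        self.credits = depth          # one credit per free buffer slot
        self.fifo = deque()

    def try_write(self, packet):
        if self.credits == 0:
            return False              # no credit: the writer must stall
        self.credits -= 1
        self.fifo.append(packet)
        return True

    def read(self):
        packet = self.fifo.popleft()  # consumer domain drains a packet...
        self.credits += 1             # ...and returns its credit
        return packet
```

The credit count is the writer's local view of free space, so the writer never needs to observe the consumer's clock to know whether a write is safe.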
20160092363 | Cache-Aware Adaptive Thread Scheduling And Migration - In one embodiment, a processor includes: a plurality of cores each to independently execute instructions; a shared cache memory coupled to the plurality of cores and having a plurality of clusters each associated with one or more of the plurality of cores; a plurality of cache activity monitors each associated with one of the plurality of clusters, where each cache activity monitor is to monitor one or more performance metrics of the corresponding cluster and to output cache metric information; a plurality of thermal sensors each associated with one of the plurality of clusters and to output thermal information; and a logic coupled to the plurality of cores to receive the cache metric information from the plurality of cache activity monitors and the thermal information and to schedule one or more threads to a selected core based at least in part on the cache metric information and the thermal information for the cluster associated with the selected core. Other embodiments are described and claimed. | 03-31-2016 |
20160092364 | MANAGEMENT OF STORAGE SYSTEM - In an approach for managing a storage system, the distribution of storage volumes among a plurality of storage controller groups may be adjusted dynamically or adaptively based on the current access hotness of the respective storage volumes in the storage system. In this way, an optimized distribution of storage volumes can be achieved without user interference, and such redistribution eliminates degradation of the storage system's performance. | 03-31-2016 |
20160103765 | APPARATUS, SYSTEMS, AND METHODS FOR PROVIDING A MEMORY EFFICIENT CACHE - The present disclosure relates to apparatus, systems, and methods that implement a less-recently-used data eviction mechanism for identifying a memory block of a cache for eviction. The less-recently-used mechanism can achieve a similar functionality as the least-recently-used data eviction mechanism, but at a lower memory requirement. A memory controller can implement the less-recently-used data eviction mechanism by selecting a memory block and determining whether the memory block is one of the less-recently-used memory blocks. If so, the memory controller can evict data in the selected memory block; if not, the memory controller can continue to select other memory blocks until the memory controller selects one of the less-recently-used memory blocks. | 04-14-2016 |
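The select-and-test loop this entry describes can be sketched in Python (a hedged illustration with assumed names and an assumed age-threshold policy, plus a fallback the entry does not specify): sample a random block, accept it only if it is among the less-recently-used blocks, and otherwise keep sampling.

```python
import random

def select_victim(blocks, last_access, now, age_threshold,
                  max_tries=16, rng=random):
    """Pick an eviction victim with a 'less-recently-used' test.

    Rather than maintaining a full LRU ordering, repeatedly sample a
    random block and accept it only if it has not been touched within
    `age_threshold` ticks -- i.e. it is one of the less-recently-used
    blocks. The bounded retry count and oldest-sampled fallback are
    illustrative additions, not part of the entry above.
    """
    oldest = None
    for _ in range(max_tries):
        candidate = rng.choice(blocks)
        if now - last_access[candidate] >= age_threshold:
            return candidate              # a less-recently-used block
        if oldest is None or last_access[candidate] < last_access[oldest]:
            oldest = candidate
    return oldest  # fallback: oldest block seen while sampling
```

Only a coarse timestamp per block is needed, which is the memory saving over true LRU: no per-access reordering of a recency list.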
20160110285 | METHOD OF CONTROLLING DATA WRITING TO PERSISTENT STORAGE DEVICE - A second computer transmits, to a first computer, confirmation data including identification information and a version number of copy data updated in a cache. Based on the confirmation data received from the second computer and information stored in the persistent storage device, the first computer extracts the identification information and the version number corresponding to the copy data to be written to the persistent storage device, from the confirmation data, and transmits response data including the extracted identification information and the version number to the second computer. Based on the response data received from the first computer and information stored in the cache, the second computer determines the copy data in the cache to be transmitted to the first computer so as to be written to the persistent storage device. | 04-21-2016 |
20160117241 | METHOD FOR USING SERVICE LEVEL OBJECTIVES TO DYNAMICALLY ALLOCATE CACHE RESOURCES AMONG COMPETING WORKLOADS - A method, device, and non-transitory computer readable medium that dynamically allocates cache resources includes monitoring a hit or miss rate of a service level objective for each of a plurality of prior workloads and a performance of each of a plurality of cache storage resources. At least one configuration for the cache storage resources for one or more current workloads is determined based at least on a service level objective for each of the current workloads, the monitored hit or miss rate for each of the plurality of prior workloads and the monitored performance of each of the plurality of cache storage resources. The cache storage resources are dynamically partitioned among each of the current workloads based on the determined configuration. | 04-28-2016 |
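One simple way to turn per-workload service level objectives into a cache partition, in the spirit of this entry, is to size each workload's share by how far it falls below its SLO. The Python sketch below is a hedged heuristic under assumed names, not the patented method:

```python
def partition_cache(total_blocks, workloads):
    """Divide cache blocks among workloads by how far each misses its SLO.

    `workloads` maps name -> (target_hit_pct, observed_hit_pct). A
    workload further below its target gets proportionally more cache.
    This is an illustrative heuristic, not the patented method.
    """
    # Shortfall below the SLO target, with a floor of 1 so every
    # workload keeps some share of the cache.
    need = {w: max(t - o, 1) for w, (t, o) in workloads.items()}
    total_need = sum(need.values())
    alloc = {w: total_blocks * n // total_need for w, n in need.items()}
    # Hand any rounding remainder to the neediest workload.
    alloc[max(need, key=need.get)] += total_blocks - sum(alloc.values())
    return alloc
```

A workload missing its target hit rate by 30 points would thus receive three times the cache of one missing by 10 points; repartitioning on each monitoring interval makes the allocation dynamic.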
20160117249 | SNOOP FILTER FOR MULTI-PROCESSOR SYSTEM AND RELATED SNOOP FILTERING METHOD - A snoop filter for a multi-processor system has a storage device and a control circuit. The control circuit manages at least a first-type entry and at least a second-type entry stored in the storage device. The first-type entry is configured to record information indicative of a first cache of the multi-processor system and first requested memory addresses that are associated with multiple first cache lines each being only available in the first cache. The second-type entry is configured to record information indicative of multiple second caches of the multi-processor system and at least a second requested memory address that is associated with a second cache line being available in each of the multiple second caches. | 04-28-2016 |
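The two entry types this entry describes — a first type mapping one owning cache to addresses held only there, and a second type mapping one address to the set of caches sharing it — can be sketched in Python (structure and names are illustrative assumptions):

```python
class SnoopFilter:
    """Sketch of the two snoop-filter entry types in the entry above.

    `exclusive` plays the role of first-type entries (one cache, many
    addresses held only in that cache); `shared` plays the role of
    second-type entries (one address, many sharing caches).
    """

    def __init__(self):
        self.exclusive = {}   # cache id -> set of addresses (first type)
        self.shared = {}      # address  -> set of cache ids (second type)

    def record_fill(self, cache, addr):
        """Record that `cache` now holds the line at `addr`."""
        holders = self.shared.get(addr, set())
        for owner, addrs in self.exclusive.items():
            if addr in addrs:
                holders.add(owner)
                addrs.discard(addr)   # line is no longer exclusive
        if holders:
            holders.add(cache)
            self.shared[addr] = holders                        # second type
        else:
            self.exclusive.setdefault(cache, set()).add(addr)  # first type

    def caches_to_snoop(self, requester, addr):
        """Caches that must be snooped for `addr`, excluding the requester."""
        targets = set(self.shared.get(addr, set()))
        for owner, addrs in self.exclusive.items():
            if addr in addrs:
                targets.add(owner)
        targets.discard(requester)
        return targets
```

The point of the split is sizing and filtering precision: exclusive lines dominate in many workloads, so first-type entries can be compact, while second-type entries carry the sharer set only for the lines that actually need it.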
20160117254 | CACHE OPTIMIZATION TECHNIQUE FOR LARGE WORKING DATA SETS - A system and method for recognizing data access patterns in large data sets and for preloading a cache based on the recognized patterns is provided. In some embodiments, the method includes receiving a data transaction directed to an address space and recording the data transaction in a first set of counters and in a second set of counters. The first set of counters divides the address space into address ranges of a first size, whereas the second set of counters divides the address space into address ranges of a second size that is different from the first size. One of a storage device or a cache thereof is selected to service the data transaction based on the first set of counters, and data is preloaded into the cache based on the second set of counters. | 04-28-2016 |
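The two counter granularities this entry describes can be sketched in Python: coarse address ranges drive the cache-versus-backing-store routing decision, and fine ranges drive preloading. Names and the hotness threshold are illustrative assumptions, not the patented design:

```python
from collections import Counter

class PatternTracker:
    """Two-granularity access counters, as sketched in the entry above.

    Coarse ranges (the first size) decide whether a transaction is
    serviced from the cache; fine ranges (the second size) decide what
    to preload into the cache.
    """

    def __init__(self, coarse_size, fine_size, hot_threshold):
        self.coarse_size = coarse_size
        self.fine_size = fine_size
        self.hot_threshold = hot_threshold
        self.coarse = Counter()
        self.fine = Counter()

    def record(self, addr):
        """Record a data transaction in both sets of counters."""
        self.coarse[addr // self.coarse_size] += 1
        self.fine[addr // self.fine_size] += 1

    def use_cache(self, addr):
        # Service from the cache when the coarse range is hot.
        return self.coarse[addr // self.coarse_size] >= self.hot_threshold

    def preload_ranges(self):
        # Fine ranges hot enough to be preloaded into the cache.
        return sorted(r for r, c in self.fine.items()
                      if c >= self.hot_threshold)
```

Using two sizes at once lets a broad hot region qualify for caching even when no single fine range is hot yet, while preloading stays targeted at the fine ranges that actually repeat.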
20160132430 | Memory Control Circuit and Processor - A memory control circuit has request determination circuitry to determine whether a period without read-out requests and write requests to an i-th level cache memory (i being an integer of 1 or more and n or less, n being an integer of 2 or more) among first to n-th level cache memories continues for a first period of time or longer, the i-th level cache memory comprising a first nonvolatile memory, and a power-supply controller to control the timing of a power cut-off to the i-th level cache memory based on the determination of the request determination circuitry. | 05-12-2016 |
20160139849 | NON-VOLATILE BUFFERING FOR DEDUPLICATION - A system and method for storage of data is described in which the data and commands received by a storage system are processed using at least a pair of redundant channels, configured so that received data buffered in a first channel is also buffered by a second channel prior to acknowledgement of the completion of the command execution. This permits a low latency of response to commands while securely storing the data. Data that is buffered in the first channel may be processed for storage, or for deduplication or compression, prior to being sent to the storage system subsequent to the acknowledgement of completion of the command, and the data may then be purged from the data buffers in the redundant channels. A file identified as being smaller than the size allocated to its associated metadata may be stored as part of the metadata without being sent to the storage system. | 05-19-2016 |
20160140039 | PROVIDING MULTIPLE MEMORY MODES FOR A PROCESSOR INCLUDING INTERNAL MEMORY - In one embodiment, a processor comprises: at least one core formed on a die to execute instructions; a first memory controller to interface with an in-package memory; a second memory controller to interface with a platform memory to couple to the processor; and the in-package memory located within a package of the processor, where the in-package memory is to be identified as a more distant memory with respect to the at least one core than the platform memory. Other embodiments are described and claimed. | 05-19-2016 |
20160140051 | TRANSLATION LOOKASIDE BUFFER INVALIDATION SUPPRESSION - Managing a plurality of translation lookaside buffers (TLBs) includes: issuing, at a first processing element, a first instruction for invalidating one or more TLB entries associated with a first context in a first TLB associated with the first processing element. The issuing includes: determining whether or not a state of an indicator indicates that all TLB entries associated with the first context in a second TLB associated with a second processing element are invalidated; if not: sending a corresponding instruction to the second processing element, causing invalidation of all TLB entries associated with the first context in the second TLB, and changing a state of the indicator; and if so: suppressing sending of any corresponding instructions for causing invalidation of any TLB entries associated with the first context in the second TLB to the second processing element. | 05-19-2016 |
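The suppression logic this entry describes — a per-context indicator that records whether all of a remote TLB's entries for a context are already invalidated, so repeat shootdowns can be skipped — can be sketched in Python (class, method, and field names are illustrative assumptions):

```python
class TlbShootdownFilter:
    """Sketch of the invalidation-suppression indicator described above.

    A per-(remote PE, context) flag records that all remote TLB entries
    for that context are already invalid, so repeated shootdowns are
    suppressed until the remote PE fills a new entry for the context.
    """

    def __init__(self):
        self.clean = set()   # (pe, context) pairs known fully invalidated
        self.sent = []       # log of cross-PE invalidate messages

    def invalidate_context(self, remote_pe, context):
        """Returns True if a shootdown was sent, False if suppressed."""
        if (remote_pe, context) in self.clean:
            return False                        # indicator set: suppress
        self.sent.append((remote_pe, context))  # send the shootdown
        self.clean.add((remote_pe, context))    # change the indicator
        return True

    def on_remote_fill(self, remote_pe, context):
        # The remote PE cached a new translation for the context, so the
        # all-invalidated indicator no longer holds.
        self.clean.discard((remote_pe, context))
```

The win is in cross-PE traffic: back-to-back invalidations of the same context cost one message instead of one per invalidation.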
20160147480 | DATA STORAGE MANAGEMENT IN A MEMORY DEVICE - The disclosure is related to systems and methods of managing data storage in a memory device. In a particular embodiment, a method is disclosed that includes receiving, in a data storage device, at least one data packet that has a size that is different from an allocated storage capacity of at least one physical destination location on a data storage medium in the data storage device for the at least one data packet. The method also includes storing the at least one received data packet in a non-volatile cache memory prior to transferring the at least one received data packet to the at least one physical destination location. | 05-26-2016 |
20160170882 | ADAPTABLE DATA CACHING MECHANISM FOR IN-MEMORY CLUSTER COMPUTING | 06-16-2016 |
20160179668 | COMPUTING SYSTEM WITH REDUCED DATA EXCHANGE OVERHEAD AND RELATED DATA EXCHANGE METHOD THEREOF | 06-23-2016 |
20160179670 | POINTER CHASING ACROSS DISTRIBUTED MEMORY | 06-23-2016 |
20160179689 | MULTI-CORE PROGRAMMING APPARATUS AND METHOD FOR RESTORING DATA ARRAYS FOLLOWING A POWER GATING EVENT | 06-23-2016 |
20160179690 | MULTI-CORE DATA ARRAY POWER GATING CACHE RESTORAL PROGRAMMING MECHANISM | 06-23-2016 |
20160179691 | MULTI-CORE MICROPROCESSOR POWER GATING CACHE RESTORAL PROGRAMMING MECHANISM | 06-23-2016 |
20160179692 | MULTI-CORE PROGRAMMING APPARATUS AND METHOD FOR RESTORING DATA ARRAYS FOLLOWING A POWER GATING EVENT | 06-23-2016 |
20160188471 | CONFIGURABLE SNOOP FILTERS FOR CACHE COHERENT SYSTEMS - A cache coherent system includes a directory with more than one snoop filter, each of which stores information in a different set of snoop filter entries. Each snoop filter is associated with a subset of all caching agents within the system. Each snoop filter uses an algorithm chosen for best performance on the caching agents associated with the snoop filter. The number of snoop filter entries in each snoop filter is primarily chosen based on the caching capacity of just the caching agents associated with the snoop filter. The type of information stored in each snoop filter entry of each snoop filter is chosen to meet the desired filtering function of the specific snoop filter. | 06-30-2016 |
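A minimal Python sketch of the partitioned-directory idea in this abstract, with each snoop filter covering only its own subset of caching agents. The bit-vector-style sharer tracking and all class names are assumptions chosen for illustration, not details from the claims:

```python
class SnoopFilter:
    """Tracks which of its associated agents may hold each cache line."""
    def __init__(self, agents, num_entries):
        self.agents = set(agents)
        self.num_entries = num_entries  # sized to these agents' aggregate capacity
        self.sharers = {}               # line address -> set of possible holders

    def record_fill(self, agent, addr):
        assert agent in self.agents
        self.sharers.setdefault(addr, set()).add(agent)

    def snoop_targets(self, addr):
        # Only agents this filter says may hold the line need a snoop.
        return self.sharers.get(addr, set())


class Directory:
    """One snoop filter per agent subset; a snoop consults every filter."""
    def __init__(self, filters):
        self.filters = filters

    def snoop(self, addr, requester):
        targets = set()
        for f in self.filters:
            targets |= f.snoop_targets(addr)
        targets.discard(requester)
        return targets
```

Sizing `num_entries` per filter to just its own agents' caches, rather than to the whole system, is the dimensioning choice the abstract emphasizes.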
20160188474 | HARDWARE/SOFTWARE CO-OPTIMIZATION TO IMPROVE PERFORMANCE AND ENERGY FOR INTER-VM COMMUNICATION FOR NFVS AND OTHER PRODUCER-CONSUMER WORKLOADS - Methods and apparatus implementing hardware/software co-optimization to improve performance and energy for inter-VM communication for NFVs and other producer-consumer workloads. The apparatus includes multi-core processors with multi-level cache hierarchies including an L1 and L2 cache for each core and a shared last-level cache (LLC). One or more machine-level instructions are provided for proactively demoting cachelines from lower cache levels to higher cache levels, including demoting cachelines from L1/L2 caches to an LLC. Techniques are also provided for implementing hardware/software co-optimization in multi-socket NUMA architecture systems, wherein cachelines may be selectively demoted and pushed to an LLC in a remote socket. In addition, techniques are disclosed for implementing early snooping in multi-socket systems to reduce latency when accessing cachelines on remote sockets. | 06-30-2016 |
20160188475 | Concurrent Execution of Critical Sections by Eliding Ownership of Locks - Critical sections of multi-threaded programs, normally protected by locks providing access by only one thread, are speculatively executed concurrently by multiple threads with elision of the lock acquisition and release. Upon completion of the speculative execution without actual conflict, as may be identified using standard cache protocols, the speculative execution is committed; otherwise the speculative execution is squashed. Speculative execution with elision of the lock acquisition allows a greater degree of parallel execution in multi-threaded programs with aggressive lock usage. | 06-30-2016 |
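The elide/commit/squash cycle above is a hardware mechanism, but its control flow can be approximated in software with a version-counter (seqlock-style) sketch. The class, the retry policy, and the use of a version counter in place of cache-protocol conflict detection are all illustrative assumptions:

```python
import threading


class ElidableLock:
    """Software analogue of lock elision: a read-only critical section runs
    without taking the lock and commits only if no lock holder intervened."""
    def __init__(self):
        self._lock = threading.Lock()
        self._version = 0            # even = free, odd = held by a writer

    def acquire(self):
        self._lock.acquire()
        self._version += 1           # odd: conflicts any in-flight elided run

    def release(self):
        self._version += 1           # even again
        self._lock.release()

    def elide_read(self, critical_section, max_retries=10):
        """Run speculatively; squash and retry on conflict, then fall back
        to actually acquiring the lock."""
        for _ in range(max_retries):
            v0 = self._version
            if v0 % 2:                   # lock held: speculation would conflict
                continue
            result = critical_section()
            if self._version == v0:      # no holder intervened: commit
                return result
        with self._lock:                 # fallback path: take the lock for real
            return critical_section()
```

Readers that never conflict proceed fully in parallel, which mirrors the abstract's claim about programs with aggressive lock usage.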
20160202919 | SEMICONDUCTOR DEVICE | 07-14-2016 |
20160253263 | COMPUTER AND MEMORY CONTROL METHOD | 09-01-2016 |
20160253269 | Spatial Sampling for Efficient Cache Utility Curve Estimation and Cache Allocation | 09-01-2016 |
20160378667 | INDEPENDENT BETWEEN-MODULE PREFETCHING FOR PROCESSOR MEMORY MODULES - A processor employs multiple prefetchers at a processor to identify patterns in memory accesses to different memory modules. The memory accesses can include transfers between the memory modules, and the prefetchers can prefetch data directly from one memory module to another based on patterns in the transfers. This allows the processor to efficiently organize data at the memory modules without direct intervention by software or by a processor core, thereby improving processing efficiency. | 12-29-2016 |
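A pattern-detecting prefetcher of the kind this abstract describes can be sketched as a simple stride detector over module-to-module transfers. The confidence threshold, prefetch degree, and class name are assumptions for illustration; the patent does not disclose a specific predictor:

```python
class BetweenModulePrefetcher:
    """Watches transfers from one memory module to another; once `threshold`
    consecutive transfers show the same stride, it prefetches `degree`
    further lines directly between the modules."""
    def __init__(self, threshold=2, degree=2):
        self.last_addr = None
        self.stride = None
        self.confidence = 0
        self.threshold = threshold
        self.degree = degree
        self.prefetched = []     # addresses moved module-to-module ahead of demand

    def observe_transfer(self, addr):
        if self.last_addr is not None:
            stride = addr - self.last_addr
            if stride == self.stride:
                self.confidence += 1
            else:
                self.stride = stride
                self.confidence = 1
            if self.confidence >= self.threshold and self.stride:
                for i in range(1, self.degree + 1):
                    self.prefetched.append(addr + i * self.stride)
        self.last_addr = addr
```

Because the predictions are issued from observed inter-module traffic, no core or software ever has to orchestrate the copies, which is the efficiency claim of the abstract.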
20160378670 | DYNAMIC STRUCTURAL MANAGEMENT OF A DISTRIBUTED CACHING INFRASTRUCTURE - Embodiments of the present invention provide a method, system and computer program product for the dynamic structural management of an n-Tier distributed caching infrastructure. In an embodiment of the invention, a method of dynamic structural management of an n-Tier distributed caching infrastructure includes establishing a communicative connection to a plurality of cache servers arranged in respective tier nodes in an n-Tier cache, collecting performance metrics for each of the cache servers in the respective tier nodes of the n-Tier cache, identifying a characteristic of a specific cache resource in a corresponding one of the tier nodes of the n-Tier crossing a threshold, and dynamically structuring a set of cache resources including the specific cache resource to account for the identified characteristic. | 12-29-2016 |
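The collect-metrics / detect-threshold-crossing / restructure loop in this abstract can be sketched as below. The metric chosen (hit rate), the growth step, and all names are hypothetical; the patent covers the management loop, not any particular policy:

```python
class CacheServer:
    """One cache server in a tier node of the n-Tier cache."""
    def __init__(self, name, tier, hit_rate, capacity_mb):
        self.name, self.tier = name, tier
        self.hit_rate, self.capacity_mb = hit_rate, capacity_mb


def restructure(servers, min_hit_rate=0.5, grow_mb=256):
    """Collect performance metrics from every tier node, identify servers whose
    hit rate crossed the threshold, and dynamically grow their allocation."""
    actions = []
    for s in servers:
        if s.hit_rate < min_hit_rate:          # characteristic crossed a threshold
            s.capacity_mb += grow_mb           # restructure that cache resource
            actions.append((s.tier, s.name, s.capacity_mb))
    return actions
```

In practice the "collect" step would poll the cache servers over the established connection rather than read fields of local objects.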
20170235677 | COMPUTER SYSTEM AND STORAGE DEVICE | 08-17-2017 |
20180024925 | INCREASING INVALID TO MODIFIED PROTOCOL OCCURRENCES IN A COMPUTING SYSTEM | 01-25-2018 |