Entries |
Document | Title | Date |
20080201523 | PRESERVATION OF CACHE DATA FOLLOWING FAILOVER - In a data storage subsystem with disk storage and a pair of clusters, one set of DASD fast write data is in cache of one cluster and in non-volatile data storage of the other. In response to a failover of one of the pair of clusters to a local cluster, the local cluster converts the DASD fast write data in local cache to converted fast write data to prioritize the converted data for destaging to disk storage. In response to failure to destage, the local cluster allocates local non-volatile storage tracks and emulates a host adapter to store the converted fast write data in the local non-volatile storage, reconverting the converted fast write data to local DASD fast write data stored in both the local non-volatile storage and the local cache storage. | 08-21-2008 |
20080215807 | VIDEO DATA SYSTEM - A video data system is presented including buffering video data for a display device, preserving the video data in a non-volatile video random access memory during shutdown of the display device, and restoring the video data to the display device on power-up of the display device. | 09-04-2008 |
20080215808 | RAID CONTROLLER USING CAPACITOR ENERGY SOURCE TO FLUSH VOLATILE CACHE DATA TO NON-VOLATILE MEMORY DURING MAIN POWER OUTAGE - A write-caching RAID controller includes a CPU that manages transfers of posted-write data from host computers to a volatile memory and transfers of the posted-write data from the volatile memory to a redundant array of storage devices when a main power source is supplying power to the RAID controller. A memory controller transfers the posted-write data received from the host computers to the volatile memory and transfers the posted-write data from the volatile memory for transfer to the redundant array of storage devices as managed by the CPU. The memory controller flushes the posted-write data from the volatile memory to the non-volatile memory when main power fails, during which time capacitors provide power to the memory controller, volatile memory, and non-volatile memory, but not to the CPU, in order to reduce the energy storage requirements of the capacitors. During main power provision, the CPU programs the memory controller with information needed to perform the flush operation, such as the location and size of the posted-write data in the volatile memory and various flush operation characteristics. | 09-04-2008 |
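The handoff described in this abstract — the CPU programs the memory controller with the flush parameters while main power is up, and the controller alone performs the copy on capacitor power — can be sketched as follows. This is an illustrative model only, not the patented implementation; all class and method names are hypothetical.

```python
# Sketch of a CPU-less flush: the controller is pre-programmed with the
# location and size of the posted-write data, so on power failure it can
# copy that region to non-volatile memory without CPU involvement.

class MemoryController:
    def __init__(self):
        self.flush_offset = None
        self.flush_length = None

    def program_flush(self, offset, length):
        """Called by the CPU while main power is available."""
        self.flush_offset = offset
        self.flush_length = length

    def on_power_fail(self, volatile_mem, nonvolatile_mem):
        """Runs on capacitor power alone; the CPU is unpowered."""
        start, n = self.flush_offset, self.flush_length
        nonvolatile_mem[start:start + n] = volatile_mem[start:start + n]

ctrl = MemoryController()
vmem = bytearray(b"POSTED-WRITE-DATA" + b"\x00" * 15)
nvmem = bytearray(len(vmem))
ctrl.program_flush(0, 17)      # CPU step, during normal operation
ctrl.on_power_fail(vmem, nvmem)  # controller step, on main power loss
```

Keeping the CPU out of the flush path is what lets the capacitors be sized only for the controller and the two memories.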
20080222353 | METHOD OF CONVERTING A HYBRID HARD DISK DRIVE TO A NORMAL HDD - A method of converting a hybrid hard disk drive (HDD) to a normal HDD when a system is powered on, depending on whether the total number of defective blocks in a non-volatile cache (NVC) exceeds a predetermined threshold. The method, where the HDD has a normal hard disk and a non-volatile cache, includes determining whether a mode conversion flag is enabled during a power-on period. When the mode conversion flag is enabled, the HDD operates as a normal HDD. When the mode conversion flag is disabled, it is determined whether the operating mode of the HDD is a normal mode or a hybrid mode. When the operating mode is the normal mode, the HDD operates as a normal HDD. When the HDD is in the hybrid mode, it is determined whether the total number of defective blocks in the non-volatile cache is greater than a predetermined threshold. The HDD operates as a hybrid HDD when the total number of defective blocks is not greater than the threshold. The mode conversion flag is enabled and the HDD operates as a hybrid HDD when the total number of defective blocks is greater than the threshold. | 09-11-2008 |
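The power-on decision flow in this abstract can be captured in a small function. A minimal sketch, assuming the flag set in the final branch takes effect at the next power-on; the function name and return convention are illustrative, not from the patent.

```python
def select_hdd_mode(conversion_flag_enabled, operating_mode,
                    defective_blocks, threshold):
    """Return (mode_to_run_now, new_flag_value) per the power-on flow."""
    if conversion_flag_enabled:
        return "normal", True                 # flag set: run as normal HDD
    if operating_mode == "normal":
        return "normal", False                # already a normal-mode drive
    # hybrid mode: check defective blocks in the non-volatile cache
    if defective_blocks > threshold:
        return "hybrid", True                 # still hybrid this boot, but
                                              # flag converts it next power-on
    return "hybrid", False                    # NVC healthy: stay hybrid
```

Note the asymmetry: exceeding the threshold does not convert the drive immediately; it only arms the flag, so conversion happens on the following power-on.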
20080229009 | SYSTEMS AND METHODS FOR PUSHING DATA - A system for pushing data includes a source node that stores a coherent copy of a block of data. The system also includes a push engine configured to determine a next consumer of the block of data. The determination is made without the push engine detecting a request for the block of data from the next consumer. The push engine causes the source node to push the block of data to a memory associated with the next consumer to reduce latency of the next consumer accessing the block of data. | 09-18-2008 |
20080229010 | STORAGE SYSTEM AND METHOD FOR CONTROLLING CACHE RESIDENCY SETTING IN THE STORAGE SYSTEM - In a storage system adopting an external storage connection configuration, a first storage apparatus is capable of integrally managing the cache residency settings made in second storage apparatuses, which serve as external storage apparatuses. The first storage apparatus stores the cache residency information for the second storage apparatuses, i.e., external storage apparatuses, in a shared memory thereof. When the storage system receives a cache residency setting request from a management device or the like, the first storage apparatus issues a cache residency setting instruction to a second storage apparatus with reference to the residency information. In accordance with the setting instruction, the second storage apparatus sets a cache-resident area in a cache memory thereof. | 09-18-2008 |
20080229011 | CACHE MEMORY UNIT AND PROCESSING APPARATUS HAVING CACHE MEMORY UNIT, INFORMATION PROCESSING APPARATUS AND CONTROL METHOD - A cache memory unit connected to a main memory system has a cache memory area and a local memory area. In the cache memory area, memory data held by the main memory system may be registered, and registered memory data is accessed by a memory access instruction that accesses the main memory system. In the local memory area, local data to be used by the processing section is registered, and registered local data is accessed by a local memory access instruction, which is different from the memory access instruction. | 09-18-2008 |
20080235446 | Method of monitoring status information of remote storage and storage subsystem - A host computer acquires remote copy status information of storage subsystems that are not directly coupled to the host computer. | 09-25-2008 |
20080244173 | STORAGE DEVICE USING NONVOLATILE CACHE MEMORY AND CONTROL METHOD THEREOF - According to one embodiment, the present invention provides a storage device that makes sophisticated use of the characteristics of a nonvolatile cache memory and a hard disk, and compensates for defects on the hard disk drive side to improve the reliability of the device. The storage device includes a host interface, a command analyzing section, a memory that stores request information which permits or forcibly forbids accessing the hard disk, a device state determining section that determines the request information of the memory, and a media access determining section that, when the determination result of the device state determining section indicates “forbiddance”, forbids accessing the hard disk, and, when the determination result indicates “permission”, permits the access based on the analysis result of the command analyzing section and its own determination result. | 10-02-2008 |
20080250198 | Storage Consolidation Platform - One embodiment of the invention provides a disk-to-tape storage system including a front-end portion and a back-end portion. The front-end portion has a first interface for receiving storage commands and data over a network from an application performing a backup or archive operation. The received storage commands conform to a standardised command format. The back-end portion has a second interface for transmitting storage commands and the received data for storage in a tape library. The disk-to-tape storage system is operable to transform the received storage commands from the standardised command format into an appropriate format for the transmitted storage commands so as to maintain direct accessibility by the application of the received data as stored in the tape library. | 10-09-2008 |
20080250199 | ATOMIC CACHE TRANSACTIONS IN A DISTRIBUTED STORAGE SYSTEM - An atomic write descriptor associated with stripe buffer list metadata. | 10-09-2008 |
20080250200 | SYSTEM AND PROGRAM FOR DEMOTING TRACKS FROM CACHE - Provided are a method, system, and program for destaging a track from cache to a storage device. The destaged track is retained in the cache. Verification is made of whether the storage device successfully completed writing the data. In response to verifying that the storage device is successfully completing the writing of data, destaged tracks that were destaged before the verification are indicated as eligible for removal from the cache. | 10-09-2008 |
20080250201 | Information processing system and control method thereof - A control technique for making information resident in a cache memory, or releasing such resident information, is provided, by which residence is set in a cache memory regardless of the logical volume where a dataset is present, and an unused resident area in the cache memory is automatically deleted. In an information processing system, a host system has a resident management program for automatically acquiring a logical volume name from a dataset name specified by the user, with reference to catalog information for managing datasets stored in the logical volume, and instructing the dataset on the logical volume having the corresponding logical volume name to be made resident. Further, a disk array system has a microprogram for making the dataset on the logical volume having the corresponding logical volume name resident in the cache memory in response to the residence-setting instruction from the resident management program. | 10-09-2008 |
20080270686 | METHODS AND SYSTEM TO CACHE CONTENT ON A VEHICLE - A system and methods of caching content on a vehicle are disclosed herein. One method comprises identifying a passenger for an upcoming ride on the vehicle, identifying content of interest to the passenger, and transmitting at least a portion of the content of interest to a storage device carried by the vehicle for caching prior to a beginning of the ride. | 10-30-2008 |
20080270687 | Cache chunked list concrete data type - An embodiment of the invention provides a concrete data type and a method for providing a cached chunked list concrete data type. The method can perform steps including: storing at least one datum in a chunk in a cache line; and setting a lower bit value (LB) in a link/space pointer in the chunk to indicate the empty slots in the chunk. | 10-30-2008 |
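The low-bit tagging this entry describes — recording the number of empty slots in the unused low bits of the chunk's link/space pointer — can be sketched as follows. A minimal illustration under the assumption that chunks are aligned so a few pointer bits are free; `SLOT_BITS`, `pack_link`, and `unpack_link` are hypothetical names, not from the patent.

```python
SLOT_BITS = 3  # low bits of the link/space pointer hold the free-slot count

def pack_link(next_chunk_addr, empty_slots):
    """Combine a chunk address and its empty-slot count into one word."""
    # alignment assumption: chunk addresses have their low bits clear
    assert next_chunk_addr % (1 << SLOT_BITS) == 0
    assert 0 <= empty_slots < (1 << SLOT_BITS)
    return next_chunk_addr | empty_slots

def unpack_link(link):
    """Split a tagged link word back into (address, empty_slot_count)."""
    mask = (1 << SLOT_BITS) - 1
    return link & ~mask, link & mask
```

The tag rides along for free: following the list only requires masking off the low bits, and the slot count is available without touching the chunk body.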
20080270688 | Direct access storage system with combined block interface and file interface access - A storage system includes a storage controller and storage media for reading data from or writing data to the storage media in response to block-level and file-level access requests. | 10-30-2008 |
20080270689 | Storage controller, data processing method and computer program product for reducing channel processor overhead by efficient cache slot management - When a first channel processor from among a plurality of channel processors receives an I/O request from a host system, a second channel processor, which is to execute a part of the processing to respond to the I/O request, is selected from among the channel processors based on the LM directories of the respective channel processors. The selected second channel processor checks whether there is a cache hit. If there is a cache hit, it transfers the data from the cache memory to the buffer memory. The first channel processor then processes the I/O request using the data transferred to the buffer memory. | 10-30-2008 |
20080276040 | STORAGE APPARATUS AND DATA MANAGEMENT METHOD IN STORAGE APPARATUS - Provided are a storage apparatus and its data management method capable of preventing the loss of data retained in a volatile cache memory even during an unexpected power shutdown. This storage apparatus includes a cache memory configured from a volatile and nonvolatile memory. The volatile cache memory caches data according to a write request from a host system and data staged from a disk drive, and the nonvolatile cache memory only caches data staged from a disk drive. Upon an unexpected power shutdown, the storage apparatus immediately backs up the dirty data and other information cached in the volatile cache memory to the nonvolatile cache memory. | 11-06-2008 |
20080301364 | CACHING OF MICROCODE EMULATION MEMORY - A processor includes a cache hierarchy including a level-1 cache and a higher-level cache. The processor maps a portion of physical memory space to a portion of the higher-level cache, executes instructions, at least some of which comprise microcode, allows microcode to access the portion of the higher-level cache, and prevents instructions that do not comprise microcode from accessing the portion of the higher-level cache. The first portion of the physical memory space can be permanently allocated for use by microcode. The processor can move one or more cache lines of the first portion of the higher-level cache from the higher-level cache to a first portion of the level-1 cache, allow microcode to access the first portion of the level-1 cache, and prevent instructions that do not comprise microcode from accessing the first portion of the level-1 cache. | 12-04-2008 |
20080301365 | Storage unit and circuit for shaping communication signal - The present invention relates to a storage unit comprising: a channel control portion for receiving a data input/output request; a cache memory for storing data; a disk control portion for performing input/output processing on data in accordance with the data input/output request; and a plurality of disk drives for storing data, wherein at least two of the disk drives input data to and output it from the disk control portion at different communication speeds. Further, the storage unit has a plurality of communication paths provided to connect at least one of the disk drives in such a manner as to constitute a loop defined by the FC-AL Fibre Channel standard, so that the communication speeds can be set differently for these different communication paths. | 12-04-2008 |
20080307160 | METHODS AND STRUCTURE FOR IMPROVED STORAGE SYSTEM PERFORMANCE WITH WRITE-BACK CACHING FOR DISK DRIVES - Methods and associated structures for utilizing write-back cache management modes for local cache memory of disk drives coupled to a storage controller while maintaining data integrity of the data transferred to the local cache memories of affected disk drives. In one aspect hereof, a state machine model of managing cache blocks in a storage controller cache memory maintains blocks in the storage controller's cache memory in a new state until verification is sensed that the blocks have been successfully stored on the persistent storage media of the affected disk drives. Responsive to failure or other reset of the disk drive, the written cache blocks may be re-written from the copy maintained in the cache memory of the storage controller. In another aspect, an alternate controller's cache memory may also be used to mirror the cache blocks from the primary storage controller's cache memory as additional data integrity assurance. | 12-11-2008 |
20090006736 | SYSTEMS AND METHODS FOR MANAGING DATA STORAGE - This invention is directed to a system by which data received by an electronic device from a server may be selectively stored in cache. The electronic device may define an anchor that is related to the current position of a playhead reading data stored in cache. The electronic device may then dynamically assign values to each data block of the received file based on the position of the anchor. As the anchor moves the value of data blocks changes, and new incoming data may replace less valuable data previously stored in cache. | 01-01-2009 |
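The playhead-anchor idea in this entry — dynamically valuing cached data blocks by their position relative to an anchor, so new data replaces the least valuable blocks — can be sketched as follows. The valuation formula here is an illustrative assumption (the abstract does not give one); `block_value` and `evict_victim` are hypothetical names.

```python
def block_value(block_index, anchor_index, lookahead_weight=2.0):
    """Value a cached block by its distance from the anchor.

    Blocks ahead of the anchor (about to be played) are weighted higher
    than blocks behind it, and value decays with distance either way.
    """
    distance = block_index - anchor_index
    if distance >= 0:
        return lookahead_weight / (1 + distance)
    return 1.0 / (1 - distance)

def evict_victim(cached_blocks, anchor_index):
    """Pick the least valuable cached block to replace with incoming data."""
    return min(cached_blocks, key=lambda b: block_value(b, anchor_index))
```

As the anchor (playhead) advances, the same blocks are revalued, so a block that was valuable a moment ago can become the next eviction victim.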
20090013128 | Runtime Machine Supported Method Level Caching - A computer system includes a disk space comprising at least one type of memory and an operating system for controlling allocations of and access to the disk space. A runtime machine runs applications through at least one of the operating system or directly on at least one processor of the computer system. In addition, the runtime machine manages a selected runtime disk space allocated to it by the operating system and manages a separate method cache within the selected disk space. The runtime machine controls caching within the method cache of a separate result of at least one method of the application marked as cache capable. For a next instance of the method detected by the runtime machine, the runtime machine accesses the cached separate result of the method in lieu of executing the method again. | 01-08-2009 |
20090019221 | Efficient chunked java object heaps - A mechanism is disclosed for offset-based addressing in the chunks of a chunked heap. The mechanism provides for storing a side data structure within a portion of a chunk, where the side data structure begins at a predetermined offset within the range of virtual memory addresses allocated to the chunk. The side data structure comprises a plurality of entries, where each entry is associated with a corresponding section of the chunk. The mechanism provides for locating a particular entry in the side data structure corresponding to a particular section of the chunk by using the predetermined offset and an index derived based on the particular section, where locating the particular entry does not include performing any memory accessing operations or conditional branch operations to obtain an indirect reference to the side data structure. | 01-15-2009 |
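The branch-free lookup this entry claims — finding a chunk section's side-table entry using only the predetermined offset and an index derived from the address — can be sketched arithmetically. A minimal illustration assuming size-aligned chunks, one-byte entries, and a side table placed at the end of the chunk; all constants and the function name are assumptions, not from the patent.

```python
CHUNK_SIZE = 1 << 16     # chunk is 64 KiB, aligned on its own size
SECTION_SIZE = 1 << 9    # each side-table entry covers a 512-byte section
SIDE_OFFSET = CHUNK_SIZE - (CHUNK_SIZE // SECTION_SIZE)  # table at chunk end

def side_entry_address(obj_addr):
    """Locate the side-table entry for the section containing obj_addr.

    Pure arithmetic: no memory loads and no conditional branches are
    needed to reach the side data structure, because its offset within
    the chunk is fixed.
    """
    chunk_base = obj_addr & ~(CHUNK_SIZE - 1)           # mask to chunk start
    index = (obj_addr & (CHUNK_SIZE - 1)) // SECTION_SIZE
    return chunk_base + SIDE_OFFSET + index
```

Because the offset is the same in every chunk, the compiler can fold `SIDE_OFFSET` into a constant and the whole lookup becomes two masks, a shift, and an add.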
20090043961 | STORAGE SYSTEM, STORAGE DEVICE, AND CONTROL METHOD THEREOF - A storage system including a storage device which includes media for storing data from a host computer, a medium controller for controlling the media, a plurality of channel controllers for connecting to the host computer through a channel and a cache memory for temporarily storing data from the host computer, wherein the media have a restriction on a number of writing times. The storage device includes a bus for directly transferring data from the medium controller to the channel controller. | 02-12-2009 |
20090049237 | Methods and systems for multi-caching - Provided are methods and systems for multi-caching. The methods and systems provided can enhance network content delivery performance in terms of reduced response time and increased throughput, and can reduce communication overhead by decreasing the amount of data that have to be transmitted over the communication paths. | 02-19-2009 |
20090049238 | DISK DRIVE STORAGE DEFRAGMENTATION SYSTEM - The present invention provides a disk drive storage defragmentation system, comprising: providing a cache buffer system coupled to a host system; coupling a disk drive storage system to the cache buffer system; performing a defragmentation process on the disk drive storage system utilizing the cache buffer system; and servicing a data access request by the host system from the cache buffer system. | 02-19-2009 |
20090063766 | STORAGE CONTROLLER AND FIRMWARE UPDATING METHOD - A storage controller and method are provided. The storage controller includes control sections, each including a storage section into which data transmitted from a host unit is cached, one of the control sections being a main control section which controls firmware update in the control sections. The main control section includes an instruction updater sending an update instruction to a sub control section in which firmware is to be updated, and an area instructor requesting the sub control section to transmit area information. The sub control section includes an area information obtainer obtaining the area information according to the request from the area instructor, an area information transmitter transmitting the area information to the area instructor, and an area setter setting the location of the cache area in the storage section on the basis of the instruction. | 03-05-2009 |
20090070526 | USING EXPLICIT DISK BLOCK CACHEABILITY ATTRIBUTES TO ENHANCE I/O CACHING EFFICIENCY - A data caching method comprising identifying whether data stored in a first data block on a storage medium is cacheable; setting a first cacheability attribute associated with the first data block in a data structure to identify whether the data in the first data block is cacheable; monitoring I/O requests submitted for accessing target data in the first data block; determining whether the target data is cacheable based on the first cacheability attribute; and applying algorithms that implement cache policy to the target data, in response to determining that the target data is cacheable. | 03-12-2009 |
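The per-block cacheability attributes in this entry can be sketched with a bitmap and a gatekeeping function. The abstract does not specify the data structure, so the bitmap layout and all names here are illustrative assumptions.

```python
class CacheabilityMap:
    """Per-block cacheability attributes kept as a bitmap (1 = cacheable)."""

    def __init__(self, num_blocks):
        self.bits = bytearray((num_blocks + 7) // 8)

    def set_cacheable(self, block, cacheable=True):
        if cacheable:
            self.bits[block // 8] |= 1 << (block % 8)
        else:
            self.bits[block // 8] &= 0xFF ^ (1 << (block % 8))

    def is_cacheable(self, block):
        return bool(self.bits[block // 8] >> (block % 8) & 1)

def handle_io(cache, cmap, block, data):
    """Apply cache policy only to blocks whose attribute says cacheable."""
    if cmap.is_cacheable(block):
        cache[block] = data   # normal cache-policy path
    # non-cacheable blocks bypass the cache entirely
```

Checking the attribute before running any replacement or prefetch logic keeps known-uncacheable data from polluting the cache in the first place.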
20090070527 | USING INTER-ARRIVAL TIMES OF DATA REQUESTS TO CACHE DATA IN A COMPUTING ENVIRONMENT - A data caching method comprising monitoring read and write requests submitted for accessing target data in a first data block on a storage medium; identifying a sequence of access requests for target data as a first stream; and determining whether the first stream is suitable for direct disk access based on inter-arrival times of the read or write requests in the stream. | 03-12-2009 |
20090077312 | Storage apparatus and data management method in the storage apparatus - A storage apparatus sets up part of non-volatile cache memory as a cache-resident area, and in an emergency such as an unexpected power shutdown, backs up dirty data of data cached in volatile memory to an area other than the cache-resident area in the non-volatile cache memory, together with the relevant cache management information. Further, the storage apparatus monitors the amount of the dirty data in the volatile cache memory so that the dirty data cached in the volatile cache memory is reliably contained in a backup area in the non-volatile memory, and when the dirty data amount exceeds a predetermined threshold value, the storage apparatus releases the cache-resident area to serve as the backup area. | 03-19-2009 |
20090077313 | METHOD AND APPARATUS TO MAINTAIN DATA INTEGRITY IN DISK CACHE MEMORY DURING AND AFTER PERIODS OF CACHE INACCESSIBILITY - A volatile or nonvolatile cache memory can cache mass storage device read data and write data. The cache memory may become inaccessible, and I/O operations may go directly to the mass storage device, bypassing the cache memory. A log of write operations may be maintained to update the cache memory when it becomes available. | 03-19-2009 |
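The write log this entry describes — recording writes that bypassed the inaccessible cache so the cache can be brought up to date later — can be sketched as follows. A minimal model under the assumption that replay only needs to refresh stale cached copies; the class and method names are hypothetical.

```python
class CacheUpdateLog:
    """While the cache is offline, writes go straight to the mass storage
    device and are also appended to this log; when the cache comes back,
    the log is replayed so cached copies match what is now on disk."""

    def __init__(self):
        self.log = []

    def record_write(self, lba, data):
        """Called for each write issued while the cache is inaccessible."""
        self.log.append((lba, data))

    def replay_into(self, cache):
        """Bring the returned cache back in sync, then discard the log."""
        for lba, data in self.log:
            if lba in cache:      # refresh stale cached copies
                cache[lba] = data
            # LBAs not in the cache need no action
        self.log.clear()
```

A simpler variant would just invalidate the logged LBAs in the cache; replaying the data keeps them warm at the cost of logging the payloads.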
20090089500 | MEMORY CACHE SHARING IN HYBRID HARD DISK - A system allows one or more hybrid hard disks or any other storage devices to share a logical non-volatile memory device formed by one or more non-volatile memory devices. The system comprises control logic to reserve on a hybrid hard disk a space that corresponds to a non-volatile memory device in the hybrid hard disk and to use a space access instruction to access the non-volatile memory device. In response to an instruction to access a storage device, the control logic accesses the logical non-volatile memory device in the event that the content of the storage device is stored in the logical non-volatile memory device. | 04-02-2009 |
20090089501 | METHOD OF PREFETCHING DATA IN HARD DISK DRIVE, RECORDING MEDIUM INCLUDING PROGRAM TO EXECUTE THE METHOD, AND APPARATUS TO PERFORM THE METHOD - A method of prefetching data in a hard disk drive includes searching a history of a non-volatile cache of the hard disk drive for a logical block address (LBA) of data requested by an external apparatus, and, if the LBA of the requested data is stored in the history, storing in a buffer of the hard disk drive the data recorded at the LBA that is stored in the history immediately after the LBA of the requested data. | 04-02-2009 |
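The history-driven prefetch in this entry amounts to: on a request for an LBA, look the LBA up in the recorded access history, and prefetch the LBA(s) that historically followed it. A minimal sketch; the function name, list-based history, and `depth` parameter are illustrative assumptions.

```python
def prefetch_candidates(history, requested_lba, depth=1):
    """Return LBAs to prefetch into the drive buffer.

    If the requested LBA appears in the non-volatile-cache access history,
    the LBAs recorded immediately after each occurrence are returned, on
    the expectation that the past access pattern will repeat.
    """
    candidates = []
    for i, lba in enumerate(history):
        if lba == requested_lba:
            candidates.extend(history[i + 1:i + 1 + depth])
    return candidates
```

A real drive would cap the candidate list and deduplicate it, but the core lookup is just this scan of the recorded sequence.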
20090106490 | DATA PROCESSING APPARATUS AND PROGRAM FOR SAME - The present invention provides a data processing apparatus capable of maintaining consistency of specific data without switching between a write-back method and a write-through method. A first microcomputer of an engine ECU performs data updating in the write-back method. When a data writing process is performed on specific data, a data writing process is also performed on dummy data having the same index and a different tag (i.e., a forced write-back). Consequently, the specific data written in a cache memory is evicted from the cache memory immediately by the writing of the dummy data and is written in a main-storage RAM. Therefore, without switching from the write-back method to the write-through method, the same specific data can be stored in both the cache memory and the main-storage RAM. | 04-23-2009 |
20090125676 | INFORMATION HANDLING SYSTEM INCLUDING A LOGICAL VOLUME AND A CACHE AND A METHOD OF USING THE SAME - A system and method of recovering cached data can be used when a particular physical storage device becomes unsuitable for storing data. In one aspect, the method can include providing the information handling system including a logical volume and a cache. The cache includes data that is to be stored within a particular physical storage device. The method can also include persisting the data within a different physical storage device. In one embodiment, the different physical storage device can be used to temporarily store the data when a logical volume is inaccessible. After the particular physical storage device becomes suitable to persist the data, the logical volume can be restored. The method can further include persisting the data within the particular or a replacement physical storage device. In another aspect, a system can be configured to carry out the methods described herein. | 05-14-2009 |
20090125677 | INTELLIGENT CACHING OF MEDIA FILES - A method of receiving and forwarding a multimedia message is provided. The multimedia message is adapted with a first adaptation profile into a first adapted message to be received in a first device. The multimedia message and the first adapted message are stored in a media cache. The message may then be forwarded from the first device to a second device that has a second adaptation profile by retrieving the first adapted message from the media cache and sending it to the second device if the first and second adaptation profiles match, otherwise the multimedia message is retrieved from the media cache and adapted with the second adaptation profile into a second adapted message that is then sent to the second device. In addition, the second adapted message is stored in the media cache. | 05-14-2009 |
20090132760 | APPARATUS, SYSTEM, AND METHOD FOR SOLID-STATE STORAGE AS CACHE FOR HIGH-CAPACITY, NON-VOLATILE STORAGE - An apparatus, system, and method are disclosed for solid-state storage as cache for high-capacity, non-volatile storage. The apparatus, system, and method are provided with a plurality of modules including a cache front-end module and a cache back-end module. The cache front-end module manages data transfers associated with a storage request. The data transfers between a requesting device and solid-state storage function as cache for one or more HCNV storage devices, and the data transfers may include one or more of data, metadata, and metadata indexes. The solid-state storage may include an array of non-volatile, solid-state data storage elements. The cache back-end module manages data transfers between the solid-state storage and the one or more HCNV storage devices. | 05-21-2009 |
20090150607 | DISK CONTROLLER CONFIGURED TO PERFORM OUT OF ORDER EXECUTION OF WRITE OPERATIONS - A controller for a disk drive includes first memory storing first write operations and second write operations received in a first order. A processor arranges the first write operations and the second write operations in a second order based on respective track sectors associated with the first and the second write operations. The second order is different than the first order. A memory controller transfers write operation data corresponding to the first write operations and the second write operations to a disk formatter in the second order in response to a single command from the processor. | 06-11-2009 |
20090150608 | STORAGE SYSTEM AND OPERATION METHOD OF STORAGE SYSTEM - The present invention is able to improve the processing performance of a storage system by respectively virtualizing the external volumes and enabling the shared use of such external volumes by a plurality of available virtualization storage devices. By virtualizing and incorporating the external volume of an external storage device, a first virtualization storage device is able to provide the volume to a host as though it is an internal volume. When the load of the first virtualization storage device increases, a second virtualization storage device | 06-11-2009 |
20090157957 | APPARATUS WITH DISC DRIVE THAT USES CACHE MANAGEMENT POLICY TO REDUCE POWER CONSUMPTION - Data blocks are loaded in multi-block fetch units from a disc. A cache management policy selects data blocks for non-retention in cache memory so as to reduce the number of fetch units that must be fetched. Use is made of the large multi-block fetch unit size to profit from the possibility of loading additional blocks essentially without additional power consumption when a fetch unit has to be fetched to obtain a block. Selection of data blocks for non-retention is biased toward combinations of data blocks that can be fetched together for a next use in one fetch unit. Between fetches of fetch units the disc drive is switched from a read mode to a power saving mode, wherein at least part of the disc drive is deactivated, so that energy consumption is reduced. Retention is managed at a granularity of data blocks, that is, below the level of the fetch units. If a combination of blocks from the same fetch unit can be fetched together in one go before their next use, these blocks are not retained if, as a result, other blocks from a plurality of other fetch units can be retained in place of the combination of blocks. | 06-18-2009 |
20090193189 | Block-based Storage System Having Recovery Memory to Prevent Loss of Data from Volatile Write Cache - A block-based storage system that maximizes data throughput while minimizing data loss has non-volatile mass storage media for receiving and persistently storing WRITE data, and a volatile write cache for receiving and caching WRITE data until the WRITE data has been written to the non-volatile mass storage media. A controller includes a processor in communication with the volatile write cache for writing data to the volatile write cache, and a non-volatile recovery memory in communication with the processor is supplied for receiving and persistently storing a copy of all data that the processor writes to the volatile write cache, so that any data cached in the volatile write cache which is lost due to a loss of power may be re-written to the volatile write cache from the recovery memory. | 07-30-2009 |
20090216944 | EFFICIENT VALIDATION OF WRITES FOR PROTECTION AGAINST DROPPED WRITES - A write cache provides for staging of data units written from a processor for recording on a disk. The order in which destages and validations occur is controlled to make validations more efficient. The data units are arranged in a circular queue according to their respective disk storage addresses. Each data unit is tagged with a state value of 1, 0, or −1. A destaging pointer is advanced one-by-one to each data unit like the hand of a clock, and each data unit pointed to is evaluated as a destage victim, beginning with a check of its state value. A data unit x newly brought into the write cache has its state value reset to 0, and it stays that way until x is overwritten or the destage pointer clocks around to x. On an overwrite of x, the state value is set to 1, indicating recent use of the data unit and postponing its destaging and eviction. If the destage pointer clocks around to x while the state is 0, x is destaged and the state value is changed to −1: a write to the disk occurs, and a later read will be used to verify the write. If the state value was already 1 when the destage pointer clocks around to x, the state value is reset to 0. If the destage pointer clocks around to x when the state is −1, the associated data is read from the disk and validated to be the same as the copy in cache. If not, the destage of x is repeated and the state value remains −1. Otherwise, if the read for validation returns success, data unit x is evicted from the write cache. | 08-27-2009 |
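The three-state clock scheme in this entry can be modeled directly: state 1 (recently overwritten) buys a unit one more lap, state 0 triggers a destage, and state −1 triggers read-back validation and, on success, eviction. A minimal sketch with dictionaries standing in for the cache and the disk; all names are illustrative.

```python
class ClockWriteCache:
    """Circular destage queue; per-unit state: 0 = destage candidate,
    1 = recently overwritten (skip one lap), -1 = destaged, awaiting
    read-back validation before eviction."""

    def __init__(self, units):
        self.order = list(units)            # ordered by disk address
        self.state = {u: 0 for u in units}  # new units start at 0
        self.cache = dict.fromkeys(units, b"")
        self.disk = {}
        self.hand = 0

    def overwrite(self, unit, data):
        self.cache[unit] = data
        self.state[unit] = 1                # postpone destaging of a hot unit

    def tick(self):
        """Advance the destage pointer to the next unit and act on it."""
        unit = self.order[self.hand]
        s = self.state[unit]
        if s == 1:                          # recently used: give another lap
            self.state[unit] = 0
        elif s == 0:                        # time to destage: write to disk
            self.disk[unit] = self.cache[unit]
            self.state[unit] = -1
        else:                               # s == -1: validate the destage
            if self.disk.get(unit) == self.cache[unit]:
                del self.cache[unit]        # validated -> evict from cache
                del self.state[unit]
                self.order.pop(self.hand)
                self.hand -= 1
            else:                           # dropped write detected: redo it
                self.disk[unit] = self.cache[unit]
        if self.order:
            self.hand = (self.hand + 1) % len(self.order)
```

Separating the destage (state 0 to −1) from the validation (−1 to eviction) by a full revolution is what lets the read-back catch dropped writes without stalling the write path.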
20090216945 | STORAGE SYSTEM WHICH UTILIZES TWO KINDS OF MEMORY DEVICES AS ITS CACHE MEMORY AND METHOD OF CONTROLLING THE STORAGE SYSTEM - Provided is a storage system including one or more disk drives, and one or more cache memories for temporarily storing data read from the disk drives or data to be written to the disk drives, in which: the cache memories include volatile first memories and non-volatile second memories; and the storage system receives a data write request, stores the requested data in the volatile first memories, selects one of the memory areas of the volatile first memories if a total capacity of free memory areas contained in the volatile first memories is less than a predetermined threshold, writes the data stored in the selected memory area to the non-volatile second memories, and changes the selected memory area to a free memory area. Accordingly, the capacity of the cache memory can be enlarged using a non-volatile memory device while realizing a speed similar to that of a volatile memory device. | 08-27-2009 |
20090235021 | EFFICIENTLY SYNCHRONIZING WITH SEPARATED DISK CACHES - In a method of synchronizing with a separated disk cache, the separated cache is configured to transfer cache data to a staging area of a storage device. An atomic commit operation is utilized to instruct the storage device to atomically commit the cache data to a mapping scheme of the storage device. | 09-17-2009 |
20090240879 | Disk array system - A technique that distributes the processing of requests from another system without biasing the processing toward a specific processor, and that executes processing efficiently while adopting a configuration in which one port unit is controlled by multiple processors at the channel adapter of a disk array system. The CHA of a controller has a port unit carrying out interface operations and multiple host processor units having host processors. The multiple processors operate in parallel and control the port unit. When the port unit receives a request from another system, the first processor takes charge of the processing on the basis of a judgment of the processing load condition in the processors, including itself; in the event that the second processor is assigned the processing, the first processor communicates with the protocol unit and transfers the request to the second processor unit to enable the second processor to take charge of the processing. | 09-24-2009 |
20090248976 | MULTI-CORE MEMORY THERMAL THROTTLING ALGORITHMS FOR IMPROVING POWER/PERFORMANCE TRADEOFFS - Embodiments of the invention are generally directed to systems, methods, and apparatuses for improving power/performance tradeoffs associated with multi-core memory thermal throttling algorithms. In some embodiments, the priority of shared resource allocation is changed on one or more points in a system, while the system is in dynamic random access memory (DRAM) throttling mode. This may enable the forward progress of cache bound workloads while still throttling DRAM for memory bound workloads. | 10-01-2009 |
20090248977 | VIRTUAL TAPE APPARATUS, VIRTUAL TAPE LIBRARY SYSTEM, AND METHOD FOR CONTROLLING POWER SUPPLY - A virtual tape apparatus, which can switch the power supply state of a tape apparatus to thereby suppress power consumption, has an access instruction unit and a power supply control unit. The access instruction unit determines whether or not it is necessary to supply power to a tape apparatus in which a physical tape is stored and which writes data to the physical tape, based on the update state of data stored in a tape volume cache, and the power supply control unit switches the state of power supplied to the tape apparatus based on the result of the determination executed by the access instruction unit. | 10-01-2009 |
20090282191 | Operating Method for a Memory Subsystem and Devices for Executing the Operating Method - A memory subsystem has at least one first mass memory with a solid-state memory medium, at least one second mass memory with a moving read/write head or moving memory medium, and at least one control unit for controlling the first mass memory and the second mass memory. A method of operating the memory subsystem includes receiving a request for storing or reading data, defining first and second memory regions in the first and second mass memories, respectively, and transmitting first and second subrequests to the first and second memory regions, respectively. | 11-12-2009 |
20090292869 | DATA DELIVERY SYSTEMS - A request for a multi-segment, sequential data file is received from a user. At least a portion of a first segment of the multi-segment, sequential data file is provided from a previously-energized first storage device to the user. A previously-unenergized second storage device that contains a second segment of the multi-segment, sequential data file is energized. | 11-26-2009 |
20090300280 | DETECTING DATA MINING PROCESSES TO INCREASE CACHING EFFICIENCY - Methods and apparatus to detect a data mining process are presented. In one embodiment, the method comprises monitoring a process's access to a resource and classifying whether the process is a data mining process based on at least one of a plurality of monitored values, such as an access rate, an eviction rate, and an I/O consumption value. | 12-03-2009 |
20090300281 | Disk Controller Providing for the Auto-Transfer of Host-Requested-Data from a Cache Memory within a Disk Memory System - A disk-controller ( | 12-03-2009 |
20090307419 | Allocating Clusters to Storage Partitions in a Storage System - The bandwidth of the inter-connection network between clusters is much narrower than that of the inter-connection network within a cluster. When the logical allocation technique is simply applied to a cluster storage system, a logical partition associated with two or more clusters is created, and it is not possible to create logical partitions with performance corresponding to the resources allocated to them. In a storage system including a first cluster and a second cluster, when a resource of the storage system is logically subdivided into logical partitions, a resource of the first cluster is allocated to one logical partition. The system may be configured such that the first and second clusters are connected via switches to disk drives. The system may also be configured such that when failure occurs in the first cluster, the second cluster continues to execute the processing of the first cluster. | 12-10-2009 |
20090319724 | DISTRIBUTED DISK CACHE SYSTEM AND DISTRIBUTED DISK CACHE METHOD - According to an aspect of the embodiment, a packet analyzing apparatus monitors the concentration level of input and output access from an access apparatus to a disk device, specifies a data area for which the concentration level of input and output access exceeds a first threshold, and instructs a storage server to cache the data area. The packet analyzing apparatus then monitors the concentration level of input and output access to the cached data area and, when the concentration level of input and output access falls below a second threshold, instructs the storage server to release the caching. | 12-24-2009 |
20090327600 | OPTIMIZED CACHE COHERENCY IN A DUAL-CONTROLLER STORAGE ARRAY - Data is cached in a dual-controller storage array having a first cache controlled by a first controller, a second cache controlled by a second controller, and a shared array of persistent storage devices, such as disk drives. When one of the controllers receives a write request, it stores the data in persistent storage, stores a copy of that data in the first cache, and transmits identification data to the second controller that identifies the data written to persistent storage. Using the identification data, the second controller invalidates any data stored in the second cache that corresponds to the data that the first controller wrote to persistent storage. If a controller receives a read request, and the requested data is validly stored in its cache, the controller retrieves it from the cache; otherwise, the controller reads the requested data from persistent storage and caches a copy of the requested data. | 12-31-2009 |
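The write-path invalidation protocol of entry 20090327600 above can be illustrated with a minimal sketch. The class and method names are assumed for illustration only, and the shared persistent array is simulated with a dict; the key point shown is that only identification data (the block address), not the data itself, crosses between controllers:

```python
# Minimal sketch (assumed names) of the dual-controller coherency scheme of
# entry 20090327600: on a write, one controller persists the data, caches it
# locally, and ships only identification data so its peer can invalidate
# any stale copy in the peer's cache.

class Controller:
    def __init__(self, storage):
        self.storage = storage      # shared persistent array: {block: data}
        self.cache = {}             # this controller's cache: block -> data
        self.peer = None            # the other controller

    def handle_write(self, block, data):
        self.storage[block] = data      # 1. write to persistent storage
        self.cache[block] = data        # 2. keep a valid copy in the local cache
        self.peer.invalidate(block)     # 3. send identification data to the peer

    def invalidate(self, block):
        self.cache.pop(block, None)     # drop any stale copy of that block

    def handle_read(self, block):
        if block in self.cache:         # valid cached copy: serve from cache
            return self.cache[block]
        data = self.storage[block]      # otherwise read persistent storage
        self.cache[block] = data        # and cache a copy of the requested data
        return data

# Wire two controllers to one shared array of persistent storage.
storage = {}
a, b = Controller(storage), Controller(storage)
a.peer, b.peer = b, a
```

Shipping only the block identifier keeps inter-controller traffic small while still preventing either cache from serving data that its peer has since overwritten.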
20100037018 | DRIVE TRAY BASED CACHING FOR HIGH PERFORMANCE DATA STORAGE SYSTEMS - Methods and systems of drive tray based caching for high performance data storage systems are disclosed. In one embodiment, the data storage system includes a controller module with at least one storage controller for managing flow of data associated with an application software, and a plurality of drive trays. Each drive tray includes a plurality of drives for storing a respective portion of the data and at least one drive controller for managing flow of the respective portion of the data between the at least one storage controller and the plurality of drives. Also, each drive tray includes a drive cache memory coupled to each one of the at least one drive controller for caching the respective portion of the data. | 02-11-2010 |
20100057984 | MEMORY HIERARCHY CONTAINING ONLY NON-VOLATILE CACHE - A storage system that includes non-volatile main memory; non-volatile read cache; non-volatile write cache; and a data path operably coupled between the non-volatile write cache and the non-volatile read cache, wherein the storage system does not include any volatile cache and methods for retrieving and writing data throughout this memory hierarchy system. | 03-04-2010 |
20100070700 | CACHE MANAGEMENT SYSTEM AND METHOD AND CONTENT DISTRIBUTION SYSTEM INCORPORATING THE SAME - A cache management system and method and a content distribution system. In one embodiment, the cache management system includes: (1) a content request receiver configured to receive content requests, (2) a popularity lifetime prediction modeler coupled to the content request receiver and configured to generate popularity lifetime prediction models for content that can be cached based on at least some of the content requests, (3) a database coupled to the popularity lifetime prediction modeler and configured to contain the popularity lifetime prediction models and (4) a popularity lifetime prediction model matcher coupled to the content request receiver and the database and configured to match at least one content request to the popularity lifetime prediction models and control a cache based thereon. | 03-18-2010 |
20100070701 | MANAGING CACHE DATA AND METADATA - Embodiments of the invention provide techniques for ensuring that the contents of a non-volatile memory device may be relied upon as accurately reflecting data stored on disk storage across a power transition such as a reboot. For example, some embodiments of the invention provide techniques for determining whether the cache contents and/or disk contents are modified during a power transition, causing cache contents to no longer accurately reflect data stored in disk storage. Further, some embodiments provide techniques for managing cache metadata during normal (“steady state”) operations and across power transitions, ensuring that cache metadata may be efficiently accessed and reliably saved and restored across power transitions. | 03-18-2010 |
20100082898 | Methods to securely bind an encryption key to a storage device - Embodiments of methods to securely bind a disk cache encryption key to a cache device are generally described herein. Other embodiments may be described and claimed. | 04-01-2010 |
20100088469 | Storage system - Disclosed is a storage system that suppresses the occurrence of a bottleneck in the storage system, efficiently uses the bandwidth of hardware, and achieves high reliability. A storage system includes a storage | 04-08-2010 |
20100100674 | MANAGING A REGION CACHE - A method, system, and computer program product are provided for managing a cache. A region to be stored within the cache is received. The cache includes multiple regions and each of the regions is defined by memory ranges having a starting index and an ending index. The region that has been received is stored in the cache in accordance with a cache invariant. The cache invariant guarantees that at any given point in time the regions in the cache are stored in a given order and none of the regions are completely contained within any other of the regions. | 04-22-2010 |
20100100675 | SYSTEM AND METHOD FOR MANAGING STORAGE DEVICE CACHING - A data storage device comprising at least one non-volatile storage medium, at least one data cache, and a controller configured to perform cache writing operations between the at least one non-volatile storage medium and the at least one data cache based on user-selected caching modes. | 04-22-2010 |
20100115197 | METHODS AND STRUCTURE FOR LIMITING STORAGE DEVICE WRITE CACHING - Methods and structures for limiting the write portion of a local cache memory in one or more disk drives of a storage system such that a storage controller coupled to each may mirror the content and structure of the write portion of each disk drive. The size of the write portion of the local cache memory in a disk drive controller may be controlled by the storage controller or other host device. The size of the write portion may be controlled by switch settings to select among a plurality of predefined sizes or may be programmed by the storage controller or other host device. Programming such a size value may be by setting parameter values in a configuration page of a SCSI disk drive's local memory or may be by a vendor unique command sent by a host device to the disk drive. | 05-06-2010 |
20100122026 | SELECTIVELY READING DATA FROM CACHE AND PRIMARY STORAGE - Techniques are provided for using an intermediate cache to provide some of the items involved in a scan operation, while other items involved in the scan operation are provided from primary storage. Techniques are also provided for determining whether to service an I/O request for an item with a copy of the item that resides in the intermediate cache based on factors such as a) an identity of the user for whom the I/O request was submitted, b) an identity of a service that submitted the I/O request, c) an indication of a consumer group to which the I/O request maps, d) whether the I/O request is associated with an offloaded filter provided by the database server to the storage system, or e) whether the intermediate cache is overloaded. Techniques are also provided for determining whether to store items in an intermediate cache in response to the items being retrieved, based on logical characteristics associated with the requests that retrieve the items. | 05-13-2010 |
20100125704 | STORAGE CONTROL APPARATUS AND STORAGE SYSTEM - Provided is a storage control apparatus, including a Central Processing Unit, a channel interface device, a disk interface device, and an input/output control device. In the storage control apparatus, a main memory device is configured by a battery-backed-up volatile memory that can maintain the nonvolatility of data for a predetermined period of time after the power supply is turned off. By the battery-backed-up main memory device and a memory device of the input/output control device, a memory address space is formed to make it serve as a cache device of the storage control apparatus, and staging and destaging processes are executed between the cache device and the disk drive unit. In such a storage control apparatus being coupled to a host computer and a storage device to control data input/output to/from the storage device, memory devices therein can be used with a good efficiency, and data input/output control can be performed with high reliability and swiftness. | 05-20-2010 |
20100146205 | Storage device and method of writing data - In a particular embodiment, a device is disclosed that includes a controller adapted to store data to a first storage medium. The controller is adapted to receive data storage commands in an order received and to store the received data storage commands at a second storage medium. Further, the controller is adapted to re-order the data storage commands from the order received to an execution order, monitor access activity related to the first storage medium, and store the data from the second storage medium to the first storage medium according to the execution order when the monitored access activity falls below an activity threshold. | 06-10-2010 |
20100153638 | GRID STORAGE SYSTEM AND METHOD OF OPERATING THEREOF - There is provided a storage system and method of operating thereof. The storage system comprises a plurality of disk units adapted to store data at respective ranges of logical block addresses (LBAs), said addresses constituting an entire address space divided between a plurality of virtual partitions (VP), and a storage control grid operatively connected to the plurality of disk units and comprising a plurality of data servers, each server having direct or indirect access to the entire address space. Each certain virtual partition is configured to be controlled by at least two data servers among said plurality of data servers, a primary data server configured to have a primary responsibility for handling requests directed to any range of LBAs corresponding to said certain virtual partition and a secondary data server configured to have a secondary responsibility for handling requests directed to any range of LBAs corresponding to said certain virtual partition and to overtake the primary responsibility for handling respective requests if the primary server fails. Respectively, each data server is configured to have primary responsibility over all LBAs corresponding to at least two virtual partitions and to have secondary responsibility over all LBAs corresponding to at least two other virtual partitions. | 06-17-2010 |
20100153639 | GRID STORAGE SYSTEM AND METHOD OF OPERATING THEREOF - There is provided a method of hot backward compatible upgrade of a storage system comprising a plurality of disk units adapted to store data at respective ranges of logical block addresses (LBAs), said addresses constituting an entire address space divided between a plurality of virtual partitions (VPs), and a storage control grid operatively connected to the plurality of disk units and comprising a plurality of at least three data servers, each server having direct or indirect access to the entire address space. The method comprises: a) configuring each certain virtual partition to be controlled by at least two data servers, a primary data server configured to have a primary responsibility for handling requests directed to any range of LBAs corresponding to said certain virtual partition and a secondary data server configured to have a secondary responsibility for handling requests directed to any range of LBAs corresponding to said certain virtual partition and to overtake the primary responsibility for handling respective requests if the primary server fails; b) respectively configuring each data server among the plurality of data servers to have primary responsibility over all LBAs corresponding to at least two virtual partitions and to have secondary responsibility over all LBAs corresponding to at least two other virtual partitions; c) responsive to a shut-down of a data server for an upgrade purpose, i) re-configuring primary responsibility over each VP previously primary controlled by the shut-down server such that it becomes primary controlled by a server previously configured as a secondary server with respect to this VP; ii) re-allocating secondary responsibility over each VP previously secondary controlled by the shut-down server in a manner that each such VP becomes secondary controlled by a server other than the newly assigned server with primary responsibility. | 06-17-2010 |
20100169569 | STORAGE SYSTEM THAT IS CONNECTED TO EXTERNAL STORAGE - A first storage system is connected to a second storage system, and an external device within the first storage system is provided to a host as a device of the second storage system. The second storage system includes a cache control section having cache adaptors, each controlling a disk and a cache, a protocol conversion section including protocol adaptors that switch requests from the host to appropriate ones of the cache adaptors, a management adaptor, and an internal network that mutually connects the cache adaptors, the protocol adaptors and the management adaptor. The first storage system is connected to the second storage system through any of the protocol adaptors. The second storage system either executes processing for the external device by the cache control section, or connects to the first storage system through the protocol conversion section without the cache control section executing processing for the external device. | 07-01-2010 |
20100174861 | System And Method For Refreshing Cached Data Based On Content Update Frequency - A system for refreshing cached data based on content update frequency includes an application/presentation layer coupled to a caching layer, the caching layer including cached content, and a content management system coupled to the application/presentation layer, the content management system configured to provide a content invalidation message to the caching layer informing the caching layer when the cached content is updated. | 07-08-2010 |
20100174862 | SYSTEMS AND METHODS FOR STORING AND ACCESSING DATA STORED IN A DATA ARRAY - Methods, systems, and apparatus for storing and accessing data stored in a data array are presented. In one embodiment, data is stored in a data array that includes a plurality of nodes. The nodes of the data array are segmented into one or more standard and priority pages. The pages are represented in a packed index. The priority pages are then cached and the standard pages are saved to disk. In another embodiment, data stored in a node of a data array may be accessed wherein the data array is segmented into at least one priority page and at least one standard page and the data array includes a plurality of nodes. A request for data stored in the node may be received. A priority page and/or a standard page may be searched for the node and, when found, the node may be accessed. | 07-08-2010 |
20100174863 | SYSTEM FOR PROVIDING SCALABLE IN-MEMORY CACHING FOR A DISTRIBUTED DATABASE - A system is described for providing scalable in-memory caching for a distributed database. The system may include a cache, an interface, a non-volatile memory and a processor. The cache may store a cached copy of data items stored in the non-volatile memory. The interface may communicate with devices and a replication server. The non-volatile memory may store the data items. The processor may receive an update to a data item from a device to be applied to the non-volatile memory. The processor may apply the update to the cache. The processor may generate an acknowledgement indicating that the update was applied to the non-volatile memory and may communicate the acknowledgment to the device. The processor may then communicate the update to a replication server. The processor may apply the update to the non-volatile memory upon receiving an indication that the update was stored by the replication server. | 07-08-2010 |
20100199037 | Methods and Systems for Providing Translations of Data Retrieved From a Storage System in a Cloud Computing Environment - A method for providing translations of data retrieved from a storage system in a cloud computing environment includes receiving, by an interface object executing on a first physical computing device, a request for provisioning of a virtual storage resource by a storage system. The interface object requests, from a storage system interface object, provisioning of the virtual storage resource. The interface object receives, from the storage system interface object, an identification of the provisioned virtual storage resource. The interface object translates the identification of the provisioned virtual storage resource from a proprietary format implemented by the storage system interface object into a standardized format by accessing an interface translation file mapping each of a plurality of proprietary formats with the standardized format. The interface object responds to the request received from the second physical computing device, with a translation of the received identification. | 08-05-2010 |
20100199038 | REMOTE COPY METHOD AND REMOTE COPY SYSTEM - In a configuration in which data must be transferred from a first storage system to a third storage system through a storage system between them, there is a problem in that an excess logical volume must be given to the second storage system between the storage systems. A remote copy system includes a first storage system that sends and receives data to and from an information processing apparatus, a second storage system, and a third storage system. The second storage system virtually has a second storage area in which the data should be written, and has a third storage area in which the data written to the second storage area and update information concerning the data are written. Data sent from the first storage system is not written to the second storage area but is written to the third storage area as data and update information. The data and the update information written to the third storage area are read out by the third storage system. | 08-05-2010 |
20100205367 | Method And System For Maintaining Cache Data Integrity With Flush-Cache Commands - A non-volatile memory location in a disk drive is utilized to store data residing in a write-cache upon receiving a flush-cache command from a host computer. If a subsequent flush-cache command is not issued within a predetermined time period, any data residing in the write-cache and stored in the non-volatile memory location that has not yet been written to its correct location on disk will be written to its correct location on disk. | 08-12-2010 |
20100205368 | METHOD AND SYSTEM FOR CACHING DATA IN A STORAGE SYSTEM - A method for caching data in a storage system involves receiving a request for a first datum stored on a storage disk, retrieving the first datum from the storage disk when a copy of the first datum is not stored in a main memory and when a copy of the first datum is not stored on an asymmetric cache device (ACD), storing a first copy of the first datum in the main memory, updating a list of data to include the first datum, where each datum in the list of data is a datum for which a copy is stored in the main memory, where the list of data is sorted using a scheme such that a datum at a head of the list of data is most favored by the scheme and a datum at a tail of the list of data is least favored by the scheme, storing, prior to any data being evicted from the main memory, a second copy of the first datum on the ACD, where the first datum is one of a first group of data selected using a head-first search of the list of data, and evicting the first copy of the first datum from the main memory when a first copy of a second datum is designated for storing in the main memory and the main memory is full and the first datum is at the tail of the list of data. | 08-12-2010 |
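As a rough illustration of the method in entry 20100205368 above, the following sketch pairs an LRU-style list for main memory with an asymmetric cache device (ACD). All names are assumed, and the head-first selection policy (copying the most-favored half of the list to the ACD before eviction) is a simplified interpretation of the abstract, not the patented algorithm:

```python
# Illustrative sketch (assumed names) of entry 20100205368: a sorted list
# tracks main-memory residents; before any eviction, data chosen by a
# head-first search is copied to the ACD, so reads of evicted data can
# still be served without touching the storage disk.
from collections import OrderedDict

class TieredCache:
    def __init__(self, capacity, disk):
        self.capacity = capacity
        self.disk = disk                  # storage disk: {key: datum}
        self.main = OrderedDict()         # head (most favored) ... tail (least)
        self.acd = {}                     # asymmetric cache device contents

    def read(self, key):
        if key in self.main:              # hit in main memory
            self.main.move_to_end(key, last=False)   # promote toward the head
            return self.main[key]
        datum = self.acd.get(key)
        if datum is None:
            datum = self.disk[key]        # miss in both tiers: go to the disk
        self._insert(key, datum)
        return datum

    def _insert(self, key, datum):
        if len(self.main) >= self.capacity:
            # Head-first search: copy favored data onto the ACD prior to
            # any data being evicted from main memory.
            for k in list(self.main)[: self.capacity // 2 + 1]:
                self.acd.setdefault(k, self.main[k])
            self.main.popitem(last=True)  # evict the tail (least favored)
        self.main[key] = datum
        self.main.move_to_end(key, last=False)  # new datum joins the head
```

The design point the sketch captures is that the ACD is populated proactively from the favored end of the list, rather than reactively on eviction, so main memory never has to block a new insertion on an ACD write.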
20100205369 | Methods and Systems for Storing Data Blocks of Multi-Streams and Multi-User Applications - A method for storing data, comprises the steps of: defining one or more intervals for one or more virtual disks, wherein each of the intervals has data; receiving a storage command in a cache, wherein the command having a logical address and a data block; determining a respective interval for the data block corresponding to the logical address of the data block; determining whether the data of the respective interval is to be written to a corresponding storage unit; and receiving a next storage command. | 08-12-2010 |
20100211730 | SYSTEM AND METHOD FOR CACHING DATA ON A HARD DISK DRIVE - A method for caching data on a hard disk drive. The method begins by identifying at least one track residing on the hard disk drive to devote to caching. The method continues with determining an average for each data value both residing on the hard disk drive and not residing in random access memory, the average value being the average number of times a given data value was read into memory before the given data value was overwritten. Next, the method detects a period of hard disk activity and, in response to the detecting, concludes by copying to each cache track each data value not residing in random access memory and having an average which exceeds a first threshold. | 08-19-2010 |
20100211731 | Hard Disk Drive with Attached Solid State Drive Cache - Methods, systems, and computer programs for managing storage in a computer system using a solid state drive (SSD) read cache memory are presented. The method includes receiving a read request, which causes a miss in a cache memory. After the cache miss, the method determines whether the data to satisfy the read request is available in the SSD memory. If the data is in SSD memory, the read request is served from the SSD memory. Otherwise, SSD memory tracking logic is invoked and the read request is served from a hard disk drive (HDD). Additionally, the SSD memory tracking logic monitors access requests to pages in memory, and if a predefined criteria is met for a certain page in memory, then the page is loaded in the SSD. The use of the SSD as a read cache improves memory performance for random data reads. | 08-19-2010 |
20100211732 | COPY CONTROL APPARATUS - A copy control apparatus for controlling a copy process between disks includes a copy process execution unit, a data capacity measuring unit, and a changing unit. The copy process execution unit executes the copy process between disks by securing a storage area on a cache. The data capacity measuring unit measures a data capacity contained in a write request accepted from a host system during execution of the copy process between disks by the copy process execution unit. The changing unit changes a capacity of the storage area secured by the copy process execution unit in accordance with the data capacity measured by the data capacity measuring unit. | 08-19-2010 |
20100228914 | DATA CACHING SYSTEM AND METHOD FOR IMPLEMENTING LARGE CAPACITY CACHE - Disclosed is a data caching system and a method for implementing a large capacity cache. The system includes: a record processing apparatus and a record storage apparatus which is configured with a first storage unit configured in a disk unit, a second storage unit and a third storage unit. The record processing apparatus is configured with a record inserting unit; the record inserting unit is adapted to store a record to be cached which comprises one or more data blocks into the first storage unit; the record inserting unit is further adapted to obtain addressing information of each data block of the record to be cached, configure one or more data block nodes in the second storage unit, and store the addressing information in the corresponding data block nodes; and the record inserting unit is further adapted to configure an index node in the third storage unit for the record to be cached, and establish an addressing relationship between the index node and the one or more data blocks of the record to be cached. The method and system provided by the present invention divide information related to the record into three parts according to their functions and store them separately, which sufficiently considers the characteristics of the cache. | 09-09-2010 |
20100241802 | STORAGE UNIT, DATA WRITE METHOD AND DATA WRITE PROGRAM - A storage unit includes a cache memory, a cache controller which accesses the cache memory, one or more disk units, a data receiving unit, a merge interpolation determination unit, a data readout unit, a write data generation unit and a data write unit. The data receiving unit receives, from the cache controller, unit readout data that includes update records updated by the cache controller and is unit of data read from the cache memory. The merge interpolation determination unit determines whether the received unit readout data is merge interpolated. The data readout unit reads, from the disk unit, data corresponding to the unit readout data when the unit readout data is determined to be merge interpolated. The write data generation unit generates data to be written to the disk unit by merge interpolating the unit readout data. The data write unit writes, to the disk unit, the generated data. | 09-23-2010 |
20100262771 | DATA STORAGE SYSTEM AND CACHE DATA-CONSISTENCY ASSURANCE METHOD - According to one embodiment, a data storage system includes a controller which accesses a first storage device using a first module on startup and accesses the first storage device using a second module after the startup. The first module records, when the write-target data is written to the first storage device, trace information indicating the write command in a second storage device. The second module determines, when taking over a reception of a command instructing writing/reading of data from the first module, whether or not unupdated data to be updated as a result of a writing of the first module is cached in the second storage device based on the trace information, and invalidates a data block including the unupdated data when the unupdated data is cached. | 10-14-2010 |
20100274962 | METHOD AND APPARATUS FOR IMPLEMENTING A CACHING POLICY FOR NON-VOLATILE MEMORY - The present disclosure relates to methods, devices and computer-readable medium for implementing a caching policy and/or a cache flushing policy in a peripheral non-volatile storage device operatively coupled to a host device. In some embodiments, data is stored to a cache area of a non-volatile memory within the peripheral non-volatile storage device in accordance with a historical rate at which other data was received by the peripheral storage device from the host device and/or a historical average time interval between successive host write requests received and/or an assessed rate at which data is required to be written to the non-volatile memory and/or a detecting by the peripheral non-volatile memory device that the host has read the storage ready/busy flag. In some embodiments, data is copied from a cache storage area of the non-volatile memory to a main storage area in accordance with the historical rate and/or the historical average time interval. | 10-28-2010 |
20100274963 | STORAGE SYSTEM AND OPERATION METHOD OF STORAGE SYSTEM - The present invention is able to improve the processing performance of a storage system by respectively virtualizing the external volumes and enabling the shared use of such external volumes by a plurality of available virtualization storage devices. By virtualizing and incorporating the external volume of an external storage device, a first virtualization storage device is able to provide the volume to a host as though it is an internal volume. When the load of the first virtualization storage device increases, a second virtualization storage device | 10-28-2010 |
20100274964 | STORAGE SYSTEM FOR CONTROLLING DISK CACHE - A storage system coupled to a host computer, including: a non-volatile medium that stores data; a disk cache that temporarily stores data stored in the non-volatile medium, where the disk cache is divided into a plurality of independent disk cache partitions; a control unit that controls an input and an output of data to and from the non-volatile medium; and a memory unit that stores information used by the control unit, including consistency control information setting respective commands permitted for each of the disk cache partitions, to guarantee consistency of the data; wherein the control unit is configured to determine whether or not to execute a requested command for a given disk cache partition, by referring to the consistency control information setting respective commands permitted for each of the disk cache partitions. | 10-28-2010 |
20100306463 | STORAGE SYSTEM AND ITS CONTROLLING METHOD - In a storage system having an interface coupled to the server, a disk interface coupled to the second memory that stores final data, a cache that stores data temporarily, and an MP which controls them, this invention specifies an area by referring to the stored data and makes the specified virtual memory area resident in the cache. | 12-02-2010 |
20100312960 | METHOD AND APPARATUS FOR PROTECTING THE INTEGRITY OF CACHED DATA IN A DIRECT-ATTACHED STORAGE (DAS) SYSTEM - A DAS system that implements RAID technology is provided in which an array of solid state disks (SSDs) that is external to the DAS controllers of the DAS system is used by the DAS controllers as WB cache memory for performing WB caching operations. Using the external SSD array as WB cache memory allows the DAS system to be fully cache coherent without significantly increasing the complexity of the DAS system and without increasing the amount of bandwidth that is utilized for performing caching operations. In addition, using the external SSD array as WB cache memory obviates the need to mirror DAS controllers. | 12-09-2010 |
20100318734 | APPLICATION-TRANSPARENT HYBRIDIZED CACHING FOR HIGH-PERFORMANCE STORAGE - Systems, apparatus, and computer-implemented methods are provided for the hybridization of cache memory utilizing both magnetic and solid-state memory media. A solid-state cache controller apparatus can be coupled to a host computing system to maximize efficiency of the system in a manner that is transparent to the high-level applications using the system. The apparatus includes an associative memory component and a solid-state cache control component. Solid-state memory is configured to store data blocks of host read operations. If a host-read operation is requested, the controller communicates with a solid-state cache memory controller to determine whether a tag array data structure indicates a cached copy of the requested data block is available in solid-state memory. | 12-16-2010 |
20100318735 | STORAGE SYSTEM - The storage system includes a disk controller for receiving write commands from a computer, and a plurality of disk devices in which data is written in accordance with the control of the disk controller. The size of the first block which constitutes the data unit handled in the execution of the input/output processing of the data in accordance with the write command by the disk controller is different from the size of the second block which constitutes the data unit handled in the execution of the input/output processing of data by the plurality of disk devices. The disk controller issues an instruction for the writing of data to the disk devices using a third block unit of a size corresponding to a common multiple of the size of the first block and the size of the second block. | 12-16-2010 |
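The third-block scheme above can be illustrated with a short sketch (Python here purely for illustration; the patent publishes no code, and the assumption that the "common multiple" chosen is the least common multiple is ours):

```python
from math import gcd

def write_unit_size(controller_block: int, device_block: int) -> int:
    """Smallest write unit that is a whole number of blocks on both
    sides, so no write ever straddles a partial block on either the
    controller or the disk devices."""
    return controller_block * device_block // gcd(controller_block, device_block)

# e.g. a 520-byte controller block (512 data + 8 check bytes) over
# 512-byte disk sectors gives a 33,280-byte write unit
```

Issuing writes in this unit lets the controller avoid read-modify-write cycles at both block boundaries.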
20100325356 | NONVOLATILE STORAGE THRESHOLDING - Embodiments for facilitating data transfer between a nonvolatile storage (NVS) write cache and a pool of target storage devices are provided. Each target storage device in the pool of target storage devices is determined as one of a hard disk drive (HDD) and a solid-state drive (SSD) device, and classified into one of a SSD rank group and a HDD rank group. If no data is received in the NVS write cache for a predetermined time to be written to a target storage device classified in the SSD rank group, a threshold of available space in the NVS write cache is set to allocate at least a majority of the available space to the HDD rank group. Upon receipt of a write request for the SSD rank group, the threshold of the available space is reduced to allocate a greater portion of the available space to the SSD rank group. | 12-23-2010 |
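A rough sketch of the thresholding behavior described above (the split fractions and method names are invented for illustration; the abstract specifies only "at least a majority" for the HDD group):

```python
class NVSThreshold:
    """Divide available NVS write-cache space between SSD- and
    HDD-bound data, shifting the split with observed SSD traffic."""

    def __init__(self, total_space: int):
        self.total = total_space
        self.ssd_share = 0.5           # hypothetical initial split

    def on_ssd_idle(self) -> None:
        # no SSD-bound writes for the timeout: give the HDD rank
        # group at least a majority of the available space
        self.ssd_share = 0.2

    def on_ssd_write(self) -> None:
        # an SSD-bound write arrived: enlarge the SSD group's portion
        self.ssd_share = 0.5

    def ssd_quota(self) -> int:
        return int(self.total * self.ssd_share)
```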
20100332746 | APPARATUS CAPABLE OF COMMUNICATING WITH ANOTHER APPARATUS, METHOD OF CONTROLLING APPARATUS AND COMPUTER-READABLE RECORDING MEDIUM - An apparatus capable of communicating with another apparatus includes: a first writing unit for writing data into a plurality of recording mediums housed in a first housing; a first storage; a first reading unit for reading out data from a plurality of recording mediums housed in a second housing, which houses the recording mediums storing the data written by the first writing unit; a second reading unit for reading out data from the first storage; a second storage for storing cache data of the plurality of recording mediums housed in the second housing; a controller unit for enabling the first and second reading units to read out data on the basis of a determined area; and a second writing unit for writing the data read out by the first and second reading units into the second storage. | 12-30-2010 |
20100332747 | STORAGE DEVICE, INFORMATION PROCESSING SYSTEM, AND COMPUTER PROGRAM PRODUCT - Ease of operation is improved by making it easier for the operator to monitor and select a storage medium device connected to a computer device. The device is a USB hard disk connected to a personal computer, and includes a disk, a cache memory, a push-button, and an LED. When the push-button is pushed (S | 12-30-2010 |
20110004728 | ON-DEVICE DATA COMPRESSION FOR NON-VOLATILE MEMORY-BASED MASS STORAGE DEVICES - A non-volatile memory-based mass storage device that includes a host interface attached to a package, at least one non-volatile memory device within the package, a memory controller connected to the host interface and adapted to access the non-volatile memory device in a random access fashion through a parallel bus, a volatile memory cache within the package, and co-processor means within the package for performing hardware-based compression of cached data before writing the cached data to the non-volatile memory device in random access fashion and performing hardware-based decompression of data read from the non-volatile memory device in random access fashion. | 01-06-2011 |
20110016271 | Techniques For Managing Data In A Write Cache Of A Storage Controller - A technique for limiting an amount of write data stored in a cache memory includes determining a usable region of a non-volatile storage (NVS), determining an amount of write data in a current write request for the cache memory, and determining a failure boundary associated with the current write request. A count of the write data associated with the failure boundary is maintained. The current write request for the cache memory is rejected when a sum of the count of the write data associated with the failure boundary and the write data in the current write request exceeds a determined percentage of the usable region of the NVS. | 01-20-2011 |
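The rejection rule above reduces to a single comparison; a minimal sketch (the 25% cap is an invented placeholder for the "determined percentage"):

```python
def should_reject(write_bytes: int, boundary_count: int,
                  usable_nvs: int, max_fraction: float = 0.25) -> bool:
    """Reject the write if this failure boundary's cached write data,
    plus the new request, would exceed its allowed share of the
    usable NVS region."""
    return boundary_count + write_bytes > usable_nvs * max_fraction
```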
20110022794 | DISTRIBUTED CACHE SYSTEM IN A DRIVE ARRAY - An apparatus comprising a drive array, a first cache circuit, a plurality of second cache circuits and a controller. The drive array may comprise a plurality of disk drives. The plurality of second cache circuits may each be connected to a respective one of the disk drives. The controller may be configured to (i) control read and write operations of the disk drives, (ii) read and write information from the disk drives to the first cache, (iii) read and write information to the second cache circuits, and (iv) control reading and writing of information directly from one of the disk drives to one of the second cache circuits. | 01-27-2011 |
20110040934 | STORAGE APPARATUS HAVING VIRTUAL-TO-ACTUAL DEVICE ADDRESSING SCHEME - A storage apparatus includes a storage unit and a controller, wherein control of inputting/outputting data from/to a device provided in said storage unit is executed in accordance with a request received by said storage apparatus. An actual device of the storage apparatus corresponds to a virtual device which is external to said storage apparatus. The controller operates to perform a process for mapping an actual device address corresponding to a virtual device address, in accordance with a specification of the actual device to be mounted or unmounted to correspond to the virtual device, and storing and retaining mapping information obtained from the mapping in a first table. The controller also performs a data input/output process of receiving an access request for data input/output in which said virtual device address is specified, obtaining the actual device address mapped to said specified virtual device address from said first table, and accessing the actual device by said obtained actual device address. | 02-17-2011 |
20110047329 | Virtualized Storage Performance Controller - An apparatus for real-time performance management of a virtualized storage system operable in a network having managed physical storage and virtual storage presented by an in-band virtualization controller comprises: a monitoring component operable in communication with the network for acquiring performance data from the managed physical storage and the virtual storage; and a cache controller component responsive to the monitoring component for adjusting cache parameters for the virtual storage. The apparatus may further comprise a queue controller component responsive to the monitoring component for adjusting queue parameters for the managed physical storage. The monitoring component, the cache controller component and the queue controller component may be configured to operate periodically during operation of the virtualized storage system. | 02-24-2011 |
20110087836 | STORAGE UNIT AND MEMORY SYSTEM - A storage unit includes: a random access memory device and a storage device to be accessed using an address in units of word and sector, respectively; and a storage controller controlling accesses to the random access memory device and the storage device according to the addresses designated via a bus. The storage controller includes first and second interface functions for access to data stored on the storage device and the random access memory designated using the sector address and the word address provided via the bus, respectively, a function of using the random access memory device as a first disk cache and determining data to be saved in the random access memory device in response to the access by the first interface function, and functions of transferring the data designated using the sector address by repeating register access and by a bus master function as continuous word-sized data through the bus. | 04-14-2011 |
20110113192 | CLUSTERED STORAGE SYSTEM WITH EXTERNAL STORAGE SYSTEMS - A first storage system includes a first storage unit to provide storage volumes, a first storage controller, and a first memory to store a first control program to process an input/output request received by the first storage system. A second storage system includes a second storage unit to provide storage volumes, a second storage controller, and a second memory to store a second control program to process an input/output request received by the second storage system. Each of the first and second storage systems is configured to present the storage volumes of the other storage system to the host computer, so that the host computer can access the storage volumes of each of the first and second storage systems via one of the first and second storage systems if the host computer is unable to access the storage volumes via the other storage system. | 05-12-2011 |
20110119442 | NON-VOLATILE WRITE CACHE FOR A DATA STORAGE SYSTEM - The present disclosure provides a data storage system. In one example, the data storage system includes a data storage media component having a plurality of data storage locations. A first set of the plurality of data storage locations are allocated for a main data storage area. The data storage system also includes a controller configured to define a write cache for the main data storage area by selectively allocating a second set of the plurality of data storage locations. | 05-19-2011 |
20110125962 | MAP UPDATING SYSTEM AND MAP UPDATING PROGRAM USING DYNAMIC CACHE MEMORY - A map updating system includes: an update processing unit for performing update processing by reading data required in the update processing from a cache area of a memory when the data are stored in the cache area and from a map database when the data are not stored in the cache area; a cache storage unit for storing the data read by the update processing unit in the cache area; a processing memory capacity determination unit for determining a processing memory capacity, which is a capacity of the memory required as an update processing area, on the basis of the content of map data to be subjected to the update processing; and a cache capacity determination unit for determining a cache capacity, which is a capacity of the memory allocated to the cache area, on the basis of the processing memory capacity. | 05-26-2011 |
20110131373 | Mirroring Data Between Redundant Storage Controllers Of A Storage System - In one embodiment, the present invention includes canisters to control storage of data in a storage system including a plurality of disks. Each of multiple canisters may have a processor configured for uniprocessor mode and having an internal node identifier to identify the processor and an external node identifier to identify another processor with which it is to mirror cached data. The mirroring of cached data may be performed by communication of non-coherent transactions via a point-to-point (PtP) interconnect, wherein the PtP interconnect otherwise operates according to a cache coherent protocol. Other embodiments are described and claimed. | 06-02-2011 |
20110138120 | INFORMATION PROCESSOR, AND OPTICAL DISC DRIVE USED IN INFORMATION PROCESSOR - The speed of recording or reproduction of data to or from an HDD in an information processor is increased. This is achieved without an increase in device size, system changes, or other inconvenience. A cache memory configured by, for example, a flash memory for the data to be recorded to or reproduced from the HDD is provided not on the HDD side but on the ODD side. When, after the HDD has been replaced, there is inconvenience in using this cache memory as a cache for the data to be recorded, the cache memory is used only as a cache for the data to be reproduced, and if data in the cache memory and data on the HDD do not match each other, the cached data is invalidated. | 06-09-2011 |
20110145497 | Cluster Families for Cluster Selection and Cooperative Replication - An apparatus, system, and method are disclosed to create cluster families for cluster selection and cooperative replication. The clusters are grouped into family members of a cluster family based on their relationships and roles. Members of the cluster family determine which family member is in the best position to obtain replicated information and become cumulatively consistent within their cluster family. Once the cluster family becomes cumulatively consistent, the data is shared within the cluster family so that all copies within the cluster family are consistent. | 06-16-2011 |
20110167214 | Method And Apparatus To Manage Non-Volatile Disk Cache - The present invention provides a method and an apparatus to manage non-volatile (NV) memory as cache on a hard drive disk for data storage. | 07-07-2011 |
20110185118 | STORAGE SYSTEM - The storage system includes a disk controller for receiving write commands from a computer, and a plurality of disk devices in which data is written in accordance with the control of the disk controller. The size of the first block which constitutes the data unit handled in the execution of the input/output processing of the data in accordance with the write command by the disk controller is different from the size of the second block which constitutes the data unit handled in the execution of the input/output processing of data by the plurality of disk devices. The disk controller issues an instruction for the writing of data to the disk devices using a third block unit of a size corresponding to a common multiple of the size of the first block and the size of the second block. | 07-28-2011 |
20110191534 | DYNAMIC MANAGEMENT OF DESTAGE TASKS IN A STORAGE CONTROLLER - Method, system, and computer program product embodiments for facilitating data transfer from a write cache and NVS via a device adapter to a pool of storage devices by a processor or processors are provided. The processor(s) adaptively varies the destage rate based on the current occupancy of the NVS for a particular storage device and stage activity related to that storage device. The stage activity includes one or more of the storage device stage activity, device adapter stage activity, device adapter utilized bandwidth and the read/write speed of the storage device. These factors are generally associated with read response time in the event of a cache miss and not ordinarily associated with dynamic management of the destage rate. This combination maintains the desired overall occupancy of the NVS while improving response time performance. | 08-04-2011 |
20110191535 | METHOD FOR CONTROLLING DISK ARRAY APPARATUS AND DISK ARRAY APPARATUS - According to an aspect of the embodiment, a cache controller sets, when power supply capacity information is acquired at an update period, a size of a permitted area in which the writing of dirty data is permitted and a size of an inhibited area in which the writing of the dirty data is inhibited in a cache memory, according to the power supply capacity information. The cache controller stores the dirty data or read data read out from a disk array in the permitted area, or stores only the read data in the inhibited area. | 08-04-2011 |
20110202716 | STORAGE SYSTEM AND DATA WRITING METHOD - A storage system includes: storage devices storing data and varying in data-writing speed; a cache memory storing the write data until the write data is written to the storage device; a storage section that stores the write data received from an external device in the cache memory; and a writing section that takes the write data from the cache memory and writes the taken write data to the storage device. The storage section stores the write data received from the external device in a storage area according to the type of the storage device to which the write data is to be written, among storage areas resulting from division according to the types of the storage devices. The writing section sequentially takes the write data from the storage area sequentially selected from among the storage areas on the cache memory and writes the taken data to the storage device. | 08-18-2011 |
20110202717 | STORAGE SYSTEM AND CONTROL METHOD THEREOF - The plurality of host systems or the plurality of applications include an insertion unit for sending the identifier. The storage controller includes an analysis unit for identifying a host system or an application based on the identifier contained in the access information and analyzing an access pattern of access information sent from the identified host system or application, a management unit for managing the identifier, the analysis result of the access pattern analyzed with the analysis unit, and a control method for controlling the processing of data to be sent from a host system based on the analysis result or data to be stored in a logical volume, and a data processing controller for controlling the processing of data to be sent from a host system or data to be stored in a logical volume according to the control method managed by the management unit. | 08-18-2011 |
20110202718 | DUAL WRITING DEVICE AND ITS CONTROL METHOD - A data storing system includes first and second storage systems, each including a cache memory and a disk drive, and a third storage system coupled to the first and second storage systems and including a third disk drive to provide a third logical volume based on the third disk drive. First and second write data is transmitted from the first storage system to the second storage system according to copy pair management information. A certain storage system, which is one of the first and second storage systems, destages the second write data from a cache memory of the certain storage system to the third logical volume in the third storage system according to a volume and storage conversion table in the certain storage system. | 08-18-2011 |
20110208909 | REDUCTION OF I/O LATENCY FOR WRITABLE COPY-ON-WRITE SNAPSHOT FUNCTION - According to one aspect of the invention, a method of controlling a storage system comprises storing data in a first volume in the storage system which has volumes including the first volume and a plurality of second volumes; prohibiting write I/O (input/output) access against the first volume after storing the data in the first volume; performing subsequent write requests received by the storage system against the second volumes in the storage system after storing the data in the first volume, each write request having a target volume which is one of the second volumes; and in response to each one write request of the write requests, determining whether the target volume of the one write request is write prohibited or not, and performing the one write request only if the target volume is not write prohibited. | 08-25-2011 |
20110213923 | METHODS FOR OPTIMIZING PERFORMANCE OF TRANSIENT DATA CALCULATIONS - A redundant array of independent disk (RAID) stack loads a parity block of RAID data from a main memory into a first register of a processing device and loading the parity block into a cache memory of the processing device. The RAID stack loads a first data block of the RAID data from the main memory into a second register of the processing device without loading the first data block into the cache memory of the processing device. The processing device performs a first parity calculation based on the parity block of the first register and the first data block of the second register. | 09-01-2011 |
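The register and cache placement in the entry above is a hardware detail that high-level code cannot express, but the parity arithmetic itself is a plain XOR over blocks; a sketch (Python used only for illustration):

```python
def xor_parity(parity: bytes, data: bytes) -> bytes:
    """One step of a RAID parity calculation: fold a data block into
    the running parity. In the patent, the parity block stays hot (in
    a register and the cache) while each data block streams through a
    register only, bypassing the cache."""
    return bytes(p ^ d for p, d in zip(parity, data))
```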
20110213924 | METHODS FOR ADAPTING PERFORMANCE SENSITIVE OPERATIONS TO VARIOUS LEVELS OF MACHINE LOADS - For each of a plurality of memory access routines having different access timing characteristics, a redundant array of independent disk (RAID) stack executes the memory access routine to load predetermined data from a main memory to a register of a processor of a data processing system. The RAID stack determines an amount of cache misses for the execution of the memory access routine. The RAID stack selects one of the plurality of memory access routines that has the least amount of cache misses for further memory accesses for the purpose of parity calculations of RAID data. | 09-01-2011 |
20110213925 | METHODS FOR REDUCING CACHE MEMORY POLLUTION DURING PARITY CALCULATIONS OF RAID DATA - A redundant array of independent disk (RAID) stack loads a first parity block of RAID data into a first memory address of a main memory of a data processing system. A first parity calculation is performed on a first plurality of data blocks of the RAID data with the first parity block loaded from the first memory address of the main memory into a register of the processor of the data processing system and a cache memory associated with the processor. The RAID stack loads subsequent parity blocks of RAID data into subsequent memory addresses of the main memory, where a difference between the first memory address and the subsequent memory addresses equals to one or more multiples of an alias offset associated with the cache memory. A second parity calculation is performed on a second plurality of data blocks and the second parity block of the RAID data. | 09-01-2011 |
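The address-placement trick above — separating successive parity buffers by multiples of the cache's alias offset so they map to the same cache sets and evict only one another — can be sketched as follows (function name and values are illustrative):

```python
def parity_buffer_addresses(base: int, alias_offset: int, count: int) -> list:
    """Lay out parity buffers one alias-offset apart: each new parity
    block then evicts its predecessor from the cache rather than
    polluting unrelated cache lines."""
    return [base + i * alias_offset for i in range(count)]
```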
20110213926 | METHODS FOR DETERMINING ALIAS OFFSET OF A CACHE MEMORY - A redundant array of independent disk (RAID) stack determines a first number of processor cycles to reload first data from a first memory address of a main memory into a processor of a data processing system. The RAID stack loads second data from a second memory address of the main memory into the processor, where the second memory address is configured to be an address offset from the first memory address. The RAID stack reloads the first data from the first memory address of the main memory and determines a second number of processor cycles to reload the first data from the first memory address of the main memory. An alias offset of a cache memory associated with the processor of the data processing system is determined based on the first number of processor cycles and the second number of processor cycles. | 09-01-2011 |
20110225358 | DISK ARRAY DEVICE, DISK ARRAY SYSTEM AND CACHE CONTROL METHOD - The invention proposes a disk array device that can improve response performance while maintaining data consistency even when a write request is received from a host device by a controller that does not have master authority. The disk array device includes a master controller and a slave controller. Upon adding identifying information indicating that write data has been stored in a buffer memory to the write request, the slave controller transmits, to the master controller, the write request to which the identifying information has been added as well as the write data. After having stored the write data, the master controller transmits the write request to which the identifying information has been added to the slave controller. Upon receiving the write request, the slave controller alters the attributes of the buffer memory where the write data has been stored, from buffer memory to cache memory. | 09-15-2011 |
20110238908 | DISC DEVICE - In a disc device according to the present invention, when a controller | 09-29-2011 |
20110258376 | METHODS AND APPARATUS FOR CUT-THROUGH CACHE MANAGEMENT FOR A MIRRORED VIRTUAL VOLUME OF A VIRTUALIZED STORAGE SYSTEM - Methods and apparatus for cut-through cache memory management in write command processing on a mirrored virtual volume of a virtualized storage system, the virtual volume comprising a plurality of physical storage devices coupled with the storage system. Features and aspects hereof within the storage system provide for receipt of a write command and associated write data from an attached host. Using a cut-through cache technique, the write data is stored in a cache memory and transmitted to a first of the plurality of storage devices as the write data is stored in the cache memory thus eliminating one read-back of the write data for transfer to a first physical storage device. Following receipt of the write data and storage in the cache memory, the write data is transmitted from the cache memory to the other physical storage devices. | 10-20-2011 |
20110271048 | STORAGE APPARATUS AND ITS CONTROL METHOD - A storage apparatus and its control method capable of shortening data save time at the time of power shutdown are proposed. | 11-03-2011 |
20110276757 | STORAGE CONTROL DEVICE, AND CONTROL METHOD FOR CACHE MEMORY - The storage control device of the present invention uses a plurality of queues to manage cache segments which are in use, so as to retain cache segments which contain large amounts of data for long periods of time. One of the queues manages segments in which large amounts of valid data are stored. Another queue manages segments in which small amounts of valid data are stored. If the number of unused segments becomes insufficient, then a segment which is positioned at the LRU end of the other queue is released, and is shifted to a free queue. Due to the use of this other queue, it is possible to retain segments in which comparatively large amounts of data are stored for comparatively long periods of time. | 11-10-2011 |
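A toy model of the two-queue policy just described (queue names, the byte threshold, and the single-release API are our own; the abstract describes the policy only in prose):

```python
from collections import OrderedDict

class SegmentQueues:
    """Track in-use cache segments in two LRU queues: 'large' for
    segments holding much valid data, 'small' for the rest. When free
    segments run short, release from the small queue's LRU end first,
    so data-heavy segments survive longer."""

    def __init__(self, large_threshold: int):
        self.threshold = large_threshold
        self.large = OrderedDict()     # seg_id -> valid bytes held
        self.small = OrderedDict()
        self.free = []

    def use(self, seg_id: str, valid_bytes: int) -> None:
        self.large.pop(seg_id, None)   # a segment lives in one queue
        self.small.pop(seg_id, None)
        q = self.large if valid_bytes >= self.threshold else self.small
        q[seg_id] = valid_bytes        # insertion order == LRU order

    def release_one(self) -> str:
        q = self.small if self.small else self.large
        seg_id, _ = q.popitem(last=False)   # pop the LRU end
        self.free.append(seg_id)            # shift to the free queue
        return seg_id
```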
20110296100 | MIGRATING WRITE INFORMATION IN A WRITE CACHE OF A STORAGE SYSTEM - To migrate data from a first storage system to a second storage system, the second storage system detects a migration of a persistent storage media from the first storage system to the second storage system. In response to detecting the migration of the persistent storage media, write information from a write cache in the first storage system is copied to a write cache in the second storage system, where the write caches in the first and second storage systems were not maintained synchronously before the write information from the write cache in the first storage system is copied to the write cache in the second storage system. | 12-01-2011 |
20110296101 | COMPUTER SYSTEM HAVING AN EXPANSION DEVICE FOR VIRTUALIZING A MIGRATION SOURCE LOGICAL UNIT - A migration destination storage creates an expansion device for virtualizing a migration source logical unit. A host computer accesses an external volume by way of an access path of a migration destination logical unit, a migration destination storage, a migration source storage, and an external volume. After destaging all dirty data accumulated in the disk cache of the migration source storage to the external volume, an expansion device for virtualizing the external volume is mapped to the migration destination logical unit. | 12-01-2011 |
20110314217 | ESTIMATING THE SIZE OF AN IN-MEMORY CACHE - This Sampling Object Cache System (“SOCS”) estimates the size of an in-memory heap-based object cache without the need to serialize every object within the cache. SOCS samples objects at a user-determined rate and then computes a “sample size average” for each type of class—whether a top class, type of top class or non top class. Using these sample size averages, a statistically accurate measure of the overall size of the cache is calculated by adding together the total size of the objects in the cache for each class type. | 12-22-2011 |
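The sampling estimate described in the entry above amounts to computing a per-class average sampled object size and scaling it by the class population. A minimal sketch under assumed names, with `len()` standing in for a real serialized-size measurement:

```python
import random

def estimate_cache_size(cache, sample_rate=0.1, seed=0):
    """Estimate total in-memory cache size without serializing every object.

    `cache` maps class name -> list of objects (hypothetical layout).
    For each class, sample objects at `sample_rate`, compute a
    "sample size average", and scale it up by the class population.
    """
    rng = random.Random(seed)
    total = 0.0
    for cls, objects in cache.items():
        n = max(1, int(len(objects) * sample_rate))
        sample = rng.sample(objects, min(n, len(objects)))
        avg = sum(len(o) for o in sample) / len(sample)  # sample size average
        total += avg * len(objects)                      # scale to whole class
    return total
```

With uniform object sizes per class, the estimate is exact; with skewed sizes, its accuracy depends on the sampling rate.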
20120011312 | STORAGE SYSTEM WITH REDUCED ENERGY CONSUMPTION - A control layer of a data storage system is configured to identify one or more physical data units in the physical storage, which are associated only with corresponding logical snapshot data units, and to reallocate such physical snapshot data units to a dedicated storage space. The dedicated storage space can be a low-power storage space, which includes one or more disks designated as low power disks. The reallocation of snapshot data units to low power disks can be carried out according to an energy-aware migration policy, directed for minimizing the activation of the low power disks, and maintaining the disks in an inactive state for longer periods of time. | 01-12-2012 |
20120011313 | STORAGE SYSTEMS WITH REDUCED ENERGY CONSUMPTION - Storage systems with reduced energy consumption, methods of operating thereof, corresponding computer program products and corresponding program storage devices. Some non-limiting examples of a write method include: configuring a plurality of storage disk units such that at any given point in time there are at least two storage disk drives operating in active state in any storage disk unit; caching in a cache memory one or more write requests and generating a consolidated write request corresponding to a stripe in a RAID group; destaging the consolidated write request; and writing the destaged consolidated write request in a write out of place manner to one or more storage disk drives operating at the destage point of time in active state. Some non-limiting examples of a read method include: configuring local storage disk drives so that at any given point in time, a part of the local storage disk drives operates in low power state, wherein the local storage disk drives are operable to switch between low power state and active state; and responsive to a read request for a portion on a local storage disk drive, reading from the local storage disk drive, if active; and if the local storage disk drive is not active, enquiring if a remote mirror storage disk drive storing a copy of the portion is active, and if yes, reading from the remote mirror storage disk drive. | 01-12-2012 |
20120011314 | STORAGE SYSTEM WITH REDUCED ENERGY CONSUMPTION AND METHOD OF OPERATING THEREOF - There are provided a storage system with reduced energy consumption and a method of operating thereof. The method comprises caching in the cache memory a plurality of data portions corresponding to one or more incoming write requests, to yield cached data portions; consolidating the cached data portions characterized by a given level of expected I/O activity addressed thereto into a consolidated write request; and, responsive to a destage event, enabling writing the consolidated write request to one or more disk drives dedicated to accommodate data portions characterized by said given level of expected I/O activity addressed thereto. The cached data portions consolidated into the consolidated write request can be characterized by expected low frequency of I/O activity, and the respective one or more dedicated disk drives can be configured to operate in low-powered state unless activated. | 01-12-2012 |
20120017040 | Maintaining Data Consistency in Mirrored Cluster Storage Systems Using Bitmap Write-Intent Logging - Techniques for maintaining mirrored storage cluster data consistency can employ write-intent logging. The techniques can be scaled to any number of mirror nodes. The techniques can keep track of any outstanding I/Os, data in caches, and data that has gone out of sync between mirrored nodes due to link failures. The techniques can ensure that a power failure on any of the storage nodes does not result in inconsistent data among the storage nodes. The techniques may keep track of outstanding I/Os using a minimal memory foot-print and having a negligible impact on the I/O performance. Properly choosing the granularity of the system for tracking outstanding I/Os can result in a minimal amount of data requiring transfer to synchronize the mirror nodes. The capability to vary the granularity based on physical and logical parameters of the storage volumes may provide performance benefits. | 01-19-2012 |
20120030422 | Resilient Mirroring Utilizing Peer-to-Peer Storage - An apparatus and associated method including a first storage device and a second storage device, each coupled to a remote server independently of the other via a network. Resilient mirroring logic is stored in each of the storage devices that establishes a peer-to-peer communication connection with the other storage device in response to receiving a data access command from the remote server. | 02-02-2012 |
20120042123 | INTELLIGENT CACHE MANAGEMENT - An exemplary storage network, storage controller, and methods of operation are disclosed. In one embodiment, a method of managing cache memory in a storage controller comprises receiving, at the storage controller, a cache hint generated by an application executing on a remote processor, wherein the cache hint identifies a memory block managed by the storage controller, and managing a cache memory operation for data associated with the memory block in response to the cache hint received by the storage controller. | 02-16-2012 |
20120072660 | OPTICAL DISC RECORDER AND BUFFER MANAGEMENT METHOD THEREOF - A buffer management method is provided. A host issues a read command requesting access to a read data block and a write command requesting recording of a write data block. A write buffer is dedicated to storing the write data block; a read buffer is dedicated to storing the read data block. The method comprises causing the optical disc recorder to enter a write loop. During the write loop, the optical disc recorder triggers a write command handling procedure in response to the write command, triggers a read command handling procedure in response to the read command, and triggers a pre-recording procedure to prepare the write data block in the write buffer for recording. The contents of the write buffer and the read buffer are exchangeable during the write command handling procedure, the read command handling procedure, or the pre-recording procedure. | 03-22-2012 |
20120079186 | MULTI-PROCESSOR COMPUTING SYSTEM HAVING FAST PROCESSOR RESPONSE TO CACHE AGENT REQUEST CAPACITY LIMIT WARNING - An apparatus is described that includes a plurality of processors, a plurality of cache slices, and respective cache agents. Each of the cache agents has a buffer to store requests from the processors. The apparatus also includes a network between the processors and the cache slices to carry traffic of transactions that invoke the processors and/or said cache agents. The apparatus further includes communication resources between the processors and the cache agents, reserved to transport warnings from one or more of the cache agents to the processors that those cache agents' respective buffers have reached a storage capacity threshold. | 03-29-2012 |
20120079187 | MANAGEMENT OF WRITE CACHE USING STRIDE OBJECTS - Method, system, and computer program product embodiments are provided for identifying working data on a stride basis, by a processor device, in a computing storage environment that destages data from nonvolatile storage (NVS) to a storage unit. A multi-update bit is established for each stride in a modified cache. The multi-update bit is adapted to indicate at least one track in a working set. A schedule of destage scans is configured based on a plurality of levels of urgency. A destage operation is performed based on at least one of: a number of strides examined by the destage scans, whether the multi-update bit is set, and whether an emergency level of the plurality of levels of urgency is active. | 03-29-2012 |
20120089775 | METHOD AND APPARATUS FOR SELECTING REFERENCES TO USE IN DATA COMPRESSION - A cloud storage appliance generates a plurality of fingerprints of a data chunk, wherein each of the plurality of fingerprints is associated with a different region of the data chunk. The cloud storage appliance identifies a plurality of reference chunks based on the plurality of fingerprints and generates a plurality of reference chunk pairs. The cloud storage appliance then selects a reference chunk pair based on a probability that a number of regions of the data chunk match a reference chunk in the reference chunk pair. The selected reference chunk pair is then used to compress the data chunk. | 04-12-2012 |
20120102268 | METHODS AND SYSTEMS USING SOLID-STATE DRIVES AS STORAGE CONTROLLER CACHE MEMORY - Methods and systems for using one or more solid-state drives (SSDs) as a shared cache memory for a plurality of storage controllers coupled with the SSDs and coupled with a plurality of storage devices through a common switched fabric communication medium. All controllers share access to the SSDs through the switched fabric and thus can assume control for a failed controller by, in part, accessing cached data of the failed controller in the shared SSDs. | 04-26-2012 |
20120110258 | STORAGE DEVICE CACHE - Implementations described and claimed herein provide a method and system for comparing a storage location related to a new write command on a storage device with storage locations of a predetermined number of write commands stored in a first table to determine frequency of write commands to the storage location. If the frequency is determined to be higher than a first threshold, the data related to the write command is stored in a write cache. | 05-03-2012 |
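The frequency test in the entry above can be sketched as a bounded table of recent write LBAs plus a threshold check. All names and policy details below are assumptions, not the patent's implementation:

```python
from collections import deque

class WriteCacheFilter:
    """Cache a write only if its storage location (LBA) appeared at least
    `threshold` times among the last `history` write commands."""

    def __init__(self, history=8, threshold=2):
        self.recent = deque(maxlen=history)  # first table: recent write LBAs
        self.threshold = threshold
        self.write_cache = {}                # LBA -> cached write data

    def on_write(self, lba, data):
        """Return True if this write was admitted to the write cache."""
        hot = self.recent.count(lba) >= self.threshold
        self.recent.append(lba)
        if hot:
            self.write_cache[lba] = data     # frequent location: cache it
        return hot
```

A real device would also bound the write cache itself and evict on pressure; that part is omitted here.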
20120110259 | TIERED DATA STORAGE SYSTEM WITH DATA MANAGEMENT AND METHOD OF OPERATION THEREOF - A method of operation of a data storage system includes: enabling a system interface for receiving host commands; updating a mapping register for monitoring transaction records of a logical block address for the host commands including translating a host virtual block address to a physical address for storage devices; accessing by a storage processor, the mapping register for comparing the transaction records with a tiering policies register; and enabling a tiered storage engine for transferring host data blocks by the system interface and concurrently transferring between a tier zero, a tier one, or a tier two if the storage processor determines the transaction records exceed the tiering policies register. | 05-03-2012 |
20120117321 | STORAGE SYSTEM AND OWNERSHIP CONTROL METHOD FOR STORAGE SYSTEM - When a failure occurs, it is dealt with promptly according to this invention. Triggered by detection of a failure in any specified processor package of a plurality of processor packages, a processor of another processor package is temporarily substituted as the assignment destination of the ownership assigned to the processor of the specified processor package, instead of the ownership actually being transferred, thereby making the transition to an ownership-substituted state. Triggered by the event that the failure is no longer detected in the specified processor package, the processor of the other processor package cancels the ownership-substituted state. | 05-10-2012 |
20120124284 | STORAGE APPARATUS, STORAGE MANAGEMENT METHOD, AND STORAGE MEDIUM STORING STORAGE MANAGEMENT PROGRAM - Provided are a storage apparatus, a storage management method, and a storage management program capable of performing a backing-up operation while a host computer is in operation. | 05-17-2012 |
20120137062 | LEVERAGING COALESCED MEMORY - Embodiments of the invention relate to efficiently processing read transactions in a shared file system having multiple virtual machines. Each virtual machine in the file system has access to disk storage and a local disk cache. At the same time, each virtual machine in the file system has access to the remote disk cache of a remote virtual machine. For each read transaction, the local and/or remote disk cache is employed for data blocks to support the transaction. Disk storage is employed to support the transaction in the event that the data blocks are not available in the local and/or remote disk cache. | 05-31-2012 |
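The lookup order implied by the entry above, local cache first, then a peer VM's remote cache, then disk storage, can be sketched in a few lines (dicts stand in for the real tiers; all names are hypothetical):

```python
def read_block(block_id, local_cache, remote_cache, disk):
    """Serve a read from the local disk cache if possible, else from a
    remote virtual machine's disk cache, else from back-end disk storage.
    Returns (data, tier) so callers can see which tier satisfied the read."""
    if block_id in local_cache:
        return local_cache[block_id], "local"
    if block_id in remote_cache:
        return remote_cache[block_id], "remote"
    return disk[block_id], "disk"
```

In practice the remote lookup is a network round-trip, so it only pays off when it is cheaper than the disk read it avoids.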
20120137063 | AUXILIARY STORAGE DEVICE AND PROCESSING METHOD THEREOF - A storage device receives write data with a first data size from a host, and writes data with a second data size that is greater than the first data size. The storage device includes a storage area unit formatted and managed by a file format including user data and specific management data, the user data and specific management data having a size smaller than the second data size; a cache memory having a capacity of not less than the second data size that stores the specific management data; and a controller that controls reading and writing data from and into the storage area unit and the cache memory when receiving an instruction from the host. | 05-31-2012 |
20120144109 | DYNAMIC ADJUSTMENT OF READ/WRITE RATIO OF A DISK CACHE - Embodiments of the invention are directed to optimizing the performance of a split disk cache. In one embodiment, a disk cache includes a primary region having a read portion and write portion and one or more smaller, sample regions also including a read portion and a write portion. The primary region and one or more sample region each have an independently adjustable ratio of a read portion to a write portion. Cached reads are distributed among the read portions of the primary and sample region, while cached writes are distributed among the write portions of the primary and sample region. The performance of the primary region and the performance of the sample region are tracked, such as by obtaining a hit rate for each region during a predefined interval. The read/write ratio of the primary region is then selectively adjusted according to the performance of the one or more sample regions. | 06-07-2012 |
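One way to read the adjustment loop in the entry above: measure a hit rate for the primary region and for each sample region over an interval, then selectively move the primary's read/write split toward the best-performing sample. A hypothetical sketch, not the patent's actual policy:

```python
def adjust_primary_ratio(primary_fraction, primary_hit_rate, samples):
    """Selectively adopt a sample region's read fraction if that sample
    outperformed the primary region over the last measurement interval.

    `primary_fraction` is the primary region's current read fraction
    (read portion / total); `samples` is a list of
    (read_fraction, hit_rate) pairs for the smaller sample regions.
    """
    best_fraction, best_rate = primary_fraction, primary_hit_rate
    for fraction, rate in samples:
        if rate > best_rate:
            best_fraction, best_rate = fraction, rate
    return best_fraction
```

A production version would damp the adjustment (move partway toward the winner) to avoid oscillating between intervals.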
20120151133 | SAVING LOG DATA USING A DISK SYSTEM AS PRIMARY CACHE AND A TAPE LIBRARY AS SECONDARY CACHE - Various embodiments are provided for saving a plurality of log data in a hierarchical storage management system that uses a disk system as a primary cache and a tape library as a secondary cache. The user data is stored in the primary cache and written into the secondary cache at a subsequent period of time. A plurality of blank tapes in the secondary cache is prepared for storing the user data and the plurality of log data based on priorities. At least one of the plurality of blank tapes is selected, based on priorities, for copying the plurality of log data and the user data from the primary cache to the secondary cache. The plurality of log data is stored in the primary cache. Selecting a blank tape completely filled with the plurality of log data is delayed so that additional amounts of the user data can be written. | 06-14-2012 |
20120151134 | Data Storage Management in a Memory Device - The disclosure is related to systems and methods of managing data storage in a memory device. In a particular embodiment, a method is disclosed that includes receiving, in a data storage device, at least one data packet that has a size that is different from an allocated storage capacity of at least one physical destination location on a data storage medium in the data storage device for the at least one data packet. The method also includes storing the at least one received data packet in a non-volatile cache memory prior to transferring the at least one received data packet to the at least one physical destination location. | 06-14-2012 |
20120159066 | SYSTEM AND METHOD FOR PERFORMING CONTIGUOUS DISK READ ON PSEUDO-CONTIGUOUS DATA BLOCKS WITHIN A DATABASE MANAGEMENT SYSTEM - A system and method to facilitate cache management and improve disk read performance for database systems with large memory and large disks. A contiguous read feature is employed to read multiple pseudo-contiguous data blocks in one large I/O from disk storage into cache memory. The contiguous read feature loads the disk area containing pseudo-contiguous data blocks by issuing a single disk read. A separate virtual space and memory page list is created for each data block, and the page lists are reunited to create one I/O. The pseudo-contiguity of two data blocks is determined by comparing the distance between them, i.e., the size of the hole between the two data blocks, with a predefined maximum distance, over which it is more effective to read the data blocks independently. | 06-21-2012 |
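The pseudo-contiguity test in the entry above reduces to comparing the hole between consecutive data blocks with a predefined maximum distance; blocks whose holes stay under that limit are coalesced into one large read. A sketch with assumed names:

```python
def coalesce_reads(blocks, max_hole):
    """Group (offset, length) data blocks into runs whose inter-block
    holes never exceed `max_hole`; each run becomes one large disk I/O.

    Blocks are sorted by offset first; a block starting within
    `max_hole` of the current run's end extends that run."""
    runs = []
    for off, length in sorted(blocks):
        if runs and off - runs[-1][1] <= max_hole:
            runs[-1][1] = max(runs[-1][1], off + length)  # extend current run
        else:
            runs.append([off, off + length])              # start a new run
    return [(start, end - start) for start, end in runs]
```

The I/O for a run reads the holes too; `max_hole` is the point past which reading the gap costs more than issuing a separate read.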
20120159067 | SYSTEM AND METHOD FOR HANDLING IO TO DRIVES IN A RAID SYSTEM - A system and method for handling IO to drives in a RAID system is described. In one embodiment, the method includes providing a multiple disk system with a predefined strip size. IO request with a logical block address is received for execution on the multiple disk system. A plurality of sub-IO requests with a sub-strip size is generated, where the sub-strip size is smaller than the strip size. The generated sub-IO commands are executed on the multiple disk system. In one embodiment, a cache line size substantially equal to the sub-strip size is assigned to process the IO request. | 06-21-2012 |
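Generating the sub-IO requests described in the entry above is a matter of walking the request's logical block address range in sub-strip-sized steps. A minimal sketch (function and parameter names are assumptions):

```python
def split_io(lba, length, sub_strip):
    """Split an IO request starting at logical block address `lba` and
    covering `length` blocks into sub-IO requests of at most
    `sub_strip` blocks each, returned as (lba, length) pairs."""
    subs = []
    while length > 0:
        chunk = min(sub_strip, length)
        subs.append((lba, chunk))
        lba += chunk
        length -= chunk
    return subs
```

Per the abstract, each sub-IO would then be processed with a cache line sized to match `sub_strip`.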
20120159068 | STORAGE SYSTEM - The storage system includes a disk controller for receiving write commands from a computer, and a plurality of disk devices in which data is written in accordance with the control of the disk controller. The size of the first block which constitutes the data unit handled in the execution of the input/output processing of the data in accordance with the write command by the disk controller is different from the size of the second block which constitutes the data unit handled in the execution of the input/output processing of data by the plurality of disk devices. The disk controller issues an instruction for the writing of data to the disk devices using a third block unit of a size corresponding to a common multiple of the size of the first block and the size of the second block. | 06-21-2012 |
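The third block size in the entry above corresponds to a common multiple of the controller-side and drive-side block sizes; the least common multiple is the natural minimal choice. A sketch:

```python
from math import gcd

def third_block_size(first_block, second_block):
    """Smallest write unit that is a whole number of both the disk
    controller's blocks and the disk devices' blocks: their least
    common multiple."""
    return first_block * second_block // gcd(first_block, second_block)
```

For example, a 520-byte controller block against a 512-byte device block yields a 33,280-byte write unit (64 device blocks, 65 controller blocks), which is why such systems batch writes rather than issue them block by block.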
20120166723 | STORAGE SYSTEM AND MANAGEMENT METHOD OF CONTROL INFORMATION THEREIN - An embodiment of this invention divides a cache memory of a storage system into a plurality of partitions and information in one or more of the partitions is composed of data different from user data and including control information. The storage system dynamically swaps data between an LU storing control information and a cache partition. Through this configuration, in a storage system having an upper limit in the capacity of the cache memory, a large amount of control information can be used while access performance to control information is kept. | 06-28-2012 |
20120198148 | ADAPTIVE PRESTAGING IN A STORAGE CONTROLLER - In one aspect of the present description, at least one of the value of a prestage trigger and the value of the prestage amount, may be modified as a function of the drive speed of the storage drive from which the units of read data are prestaged into a cache memory. Thus, cache prestaging operations in accordance with another aspect of the present description may take into account storage devices of varying speeds and bandwidths for purposes of modifying a prestage trigger and the prestage amount. Still further, a cache prestaging operation in accordance with further aspects may decrease one or both of the prestage trigger and the prestage amount as a function of the drive speed in circumstances such as a cache miss which may have resulted from prestaged tracks being demoted before they are used. Conversely, a cache prestaging operation in accordance with another aspect may increase one or both of the prestage trigger and the prestage amount as a function of the drive speed in circumstances such as a cache miss which may have resulted from waiting for a stage to complete. In yet another aspect, the prestage trigger may not be limited by the prestage amount. Instead, the prestage trigger may be permitted to expand as conditions warrant it by prestaging additional tracks and thereby effectively increasing the potential range for the prestage trigger. Other features and aspects may be realized, depending upon the particular application. | 08-02-2012 |
20120198149 | EFFICIENTLY SYNCHRONIZING WITH SEPARATED DISK CACHES - In a method of synchronizing with a separated disk cache, the separated cache is configured to transfer cache data to a staging area of a storage device. An atomic commit operation is utilized to instruct the storage device to atomically commit the cache data to a mapping scheme of the storage device. | 08-02-2012 |
20120203963 | COMPUTER SYSTEM HAVING AN EXPANSION DEVICE FOR VIRTUALIZING A MIGRATION SOURCE - A migration destination storage creates an expansion device for virtualizing a migration source logical unit. A host computer accesses an external volume by way of an access path of a migration destination logical unit, a migration destination storage, a migration source storage, and an external volume. After destaging all dirty data accumulated in the disk cache of the migration source storage to the external volume, an expansion device for virtualizing the external volume is mapped to the migration destination logical unit. | 08-09-2012 |
20120203964 | SELECTING A VIRTUAL TAPE SERVER IN A STORAGE SYSTEM TO PROVIDE DATA COPY WHILE MINIMIZING SYSTEM JOB LOAD - In a storage system including plural source storage devices, a target storage device selects which source storage device should serve its copy request so as to minimize the load on the entire system. The system calculates first and second load values for job loads being processed. System load values are derived from the job load value of the specific data and the respective load values of the first and second source storage devices. The system compares the system load values to select the storage device that provides the data copy so as to minimize the load on the entire system. | 08-09-2012 |
20120210058 | HARD DISK DRIVE WITH ATTACHED SOLID STATE DRIVE CACHE - Methods, systems, and computer programs for managing storage using a solid state drive (SSD) read cache memory are presented. One method includes an operation for determining whether data corresponding to a read request is available in a SSD memory when the read request causes a miss in a memory cache. The read request is served from the SSD memory when the data is available in the SSD memory, and when the data is not available in the SSD memory, SSD memory tracking logic is invoked and the read request is served from a hard disk drive. Invoking the SSD memory tracking logic includes determining whether a fetch criteria for the data has been met, and loading the data corresponding to the read request in the SSD memory when the fetch criteria has been met. The use of the SSD as a read cache improves memory performance for random data reads. | 08-16-2012 |
20120233398 | STORAGE SYSTEM AND DATA MANAGEMENT METHOD - The present invention comprises a CHA | 09-13-2012 |
20120239878 | METHODS FOR MANAGING A CACHE IN A MULTI-NODE VIRTUAL TAPE CONTROLLER - According to one embodiment, a method for managing cache space in a virtual tape controller includes receiving data from at least one host using the virtual tape controller; storing data received from the at least one host to a cache using the virtual tape controller; sending a first alert to the at least one host when a cache free space size is less than a first threshold and entering into a warning state using the virtual tape controller; sending a second alert to the at least one host when the cache free space size is less than a second threshold and entering into a critical state using the virtual tape controller; and allowing previously mounted virtual drives to continue normal writing activity when in the critical state. | 09-20-2012 |
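The two-threshold scheme in the entry above is a small state machine over cache free space. A sketch of the classification step, with threshold values and state names as assumptions:

```python
def cache_state(free_space, warn_threshold, critical_threshold):
    """Classify virtual tape controller cache pressure from free space.

    Assumes critical_threshold < warn_threshold. Below the warning
    level a first alert goes to the hosts; below the critical level a
    second alert goes out, and only previously mounted virtual drives
    may continue normal writing activity."""
    if free_space < critical_threshold:
        return "critical"
    if free_space < warn_threshold:
        return "warning"
    return "normal"
```

A full controller would also add hysteresis (a higher exit threshold than entry threshold) so the state does not flap around a boundary.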
20120239879 | STORAGE SYSTEM FOR MANAGING A LOG OF ACCESS - Provided is a storage system including: a first interface connected to a host computer; a second interface connected to a manager terminal; a control unit connected to the first interface and the second interface and equipped with a processor and a memory; and one or more disk drives in which data requested for reading by the host computer is stored. The control unit detects an access from the host computer to the first interface and an access from the manager terminal to the second interface, and generates log data of operations according to the accesses. Accordingly, log data concerning every action and every operation of the storage system is maintained and stored. | 09-20-2012 |
20120246402 | COMMUNICATION DEVICE, COMMUNICATION METHOD, AND COMPUTER-READABLE RECORDING MEDIUM STORING PROGRAM - A communication device that reduces the processing time to install data stored on a disc storage medium onto multiple servers is provided. | 09-27-2012 |
20120254531 | STORAGE APPARATUS AND STORAGE CONTROL DEVICE - A storage apparatus configured to store data received from a host system in a drive unit includes a memory unit partitioned into a cache area configured to temporarily store data read out from the drive unit and data to be written in the drive unit and an information storage area assigned for a memory pool configured to hold information for internal processing of the storage apparatus; an information-storage-area management table in which information-storage-area management information including position information on the memory pool in the memory unit is registered; a cache-area management table in which cache-area management information including usage status of the cache area is registered; and a memory control unit configured to acquire a memory area in the cache area having the least amount of write pending data in a pending state for writing in the drive unit by referring to the cache-area management table. | 10-04-2012 |
20120260034 | DISK ARRAY APPARATUS AND CONTROL METHOD THEREOF - Proposed are a disk array apparatus and a control method thereof which facilitate data processing, such as write processing and read processing, even when the block size handled by a host computer differs from that handled by the disk array apparatus. | 10-11-2012 |
20120284459 | WRITE-THROUGH-AND-BACK CACHE - Embodiments are provided for cache memory systems. In one general embodiment, a system includes a storage device and at least one storage class memory device operating as a write cache for the storage device. The storage device further includes a first storage location for data received from a host computer during a host write request and a second storage location. Data received from a host write request is written to the storage class memory device, to the first location in the storage device, and to the second location in the storage device, which logically reflects the location of the data in the storage class memory device configured as a log structured file. | 11-08-2012 |
20120290786 | SELECTIVE CACHING IN A STORAGE SYSTEM - A device, system, and method are disclosed. In one embodiment, a device includes caching logic that is capable of receiving an I/O storage request from an operating system. The I/O storage request includes an input/output (I/O) data type tag that specifies a type of I/O data to be stored or loaded with the I/O storage request. The caching logic is also capable of determining, based at least in part on a priority level associated with the I/O data type, whether to allocate cache to the I/O storage request. | 11-15-2012 |
20120290787 | REMOTE COPY METHOD AND REMOTE COPY SYSTEM - A remote copy system includes: a first storage system having a first logical volume accompanied with a first plurality of disk drives in the first storage system; a second storage system having a second logical volume, which is a virtual volume not accompanied with a second plurality of disk drives in the second storage system, the virtual volume configuring a first remote copy pair with the first logical volume; and a third storage system having a third logical volume accompanied with a third plurality of disk drives in the third storage system, the third logical volume configuring a second remote copy pair with the virtual volume and storing a copied data of data stored in the first logical volume. If the second storage system receives write data sent from the first storage system to the virtual volume, the second storage system transfers the write data to the third logical volume. | 11-15-2012 |
20120297133 | METHODS AND SYSTEMS OF DISTRIBUTING RAID IO LOAD ACROSS MULTIPLE PROCESSORS - A method for distributing IO load in a RAID storage system is disclosed. The RAID storage system may include a plurality of RAID volumes and a plurality of processors. The IO load distribution method may include determining whether the RAID storage system is operating in a write-through mode or a write-back mode; distributing the IO load to a particular processor selected among the plurality of processors when the RAID storage system is operating in the write-through mode, the particular processor being selected based on a number of available resources associated with the particular processor; and distributing the IO load among the plurality of processors when the RAID storage system is operating in the write-back mode, the distribution being determined based on: an index of a data stripe, and a number of processors in the plurality of processors. | 11-22-2012 |
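The write-back distribution in the entry above is determined by the data stripe index and the processor count, for which a modulo mapping is the obvious realization; the write-through case picks the processor with the most available resources. A sketch under those assumptions:

```python
def pick_processor(mode, stripe_index, free_resources):
    """Route a RAID IO to a processor.

    `free_resources[p]` is the number of available resources on
    processor p. In write-through mode, choose the processor with the
    most available resources; in write-back mode, spread load by
    stripe index modulo the number of processors."""
    if mode == "write-through":
        return max(range(len(free_resources)), key=lambda p: free_resources[p])
    return stripe_index % len(free_resources)
```

The modulo mapping keeps all IOs for a given stripe on one processor, which avoids cross-processor coordination on that stripe's cache lines.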
20120303886 | IMPLEMENTING STORAGE ADAPTER PERFORMANCE OPTIMIZATION WITH HARDWARE CHAINS TO SELECT PERFORMANCE PATH - A method and controller for implementing storage adapter performance optimization with a predefined chain of hardware operations configured to implement a particular performance path minimizing hardware and firmware interactions, and a design structure on which the subject controller circuit resides are provided. The controller includes a plurality of hardware engines; and a data store configured to store a plurality of control blocks selectively arranged in one of a plurality of predefined chains. Each predefined chain defines a sequence of operations. Each control block is designed to control a hardware operation in one of the plurality of hardware engines. A resource handle structure is configured to select a predefined chain based upon a particular characteristic of the system. Each predefined chain is configured to implement a particular performance path to maximize performance. | 11-29-2012 |
20120303887 | METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR CACHING AND USING SCATTER LIST METADATA TO CONTROL DIRECT MEMORY ACCESS (DMA) RECEIVING OF NETWORK PROTOCOL DATA - Methods, systems, and computer readable media for caching and using scatter list metadata to control DMA receiving of network protocol data are described. According to one method, metadata associated with partially used scatter list entries is cached in memory of a scatter list caching engine. Data to be written to host system memory is received. The scatter list caching engine provides the metadata associated with partially used scatter list entries to a DMA controller to control the DMA writing of the data into host system memory. | 11-29-2012 |
20120303888 | DESTAGING OF WRITE AHEAD DATA SET TRACKS - Exemplary methods, computer systems, and computer program products for efficient destaging of a write ahead data set (WADS) track in a volume of a computing storage environment are provided. In one embodiment, the computer environment is configured for preventing destage of a plurality of tracks in cache selected for writing to a storage device. For a track N in a stride Z of the selected plurality of tracks, if the track N is a first WADS track in the stride Z, clearing at least one temporal bit for each track in the cache for the stride Z minus 2 (Z−2), and if the track N is a sequential track, clearing the at least one temporal bit for the track N minus a variable X (N−X). | 11-29-2012 |
20120303889 | SMR storage device with user controls and access to status information and parameter settings - Shingled magnetic recording (SMR) devices are described that include a command processor for accepting commands from the host/user for executing selected SMR related operations, setting selected SMR parameters and reading selected SMR related statistics and status indicators. The commands allow a host/user to control defragmentation and destaging operations. Embodiments include some or all of the set of features allowing selection of formatting settings, selection of optimization settings; command to immediately run defragmentation operation; command to change waiting time before starting defragmentation operation; and command to temporarily suspend defragmentation operation until certain usage threshold is met (e.g., E-region(s) near full). | 11-29-2012 |
20120311252 | DATA REPLICATION AMONG STORAGE SYSTEMS - A first storage system stores information relating to the updating of data stored in that system as a journal. More specifically, the journal is composed of a copy of data that was used for updating and update information such as a write command used during updating. Furthermore, the second storage system acquires the journal via a communication line between the first storage system and the second storage system. The second storage system holds a duplicate of the data held by the first storage system and updates the data corresponding to the data of the first storage system in the data update order of the first storage system by using the journal. | 12-06-2012 |
20120311253 | Reallocation of Tape Drive Resources Associated With a Secure Data Erase Process - A method according to one embodiment includes determining whether to reallocate one or more of a plurality of tape drives that are presently allocated for a secure data erase process in response to an evaluation of a quantity of physical volumes to be secure data erased and a minimum queued threshold; and in response to said determination that one or more of said plurality of tape drives is to be reallocated, reallocating the one or more of said plurality of tape drives from the secure data erase process to another function. | 12-06-2012 |
20120317354 | STORAGE DEVICE, CONTROL METHOD FOR SAME AND SYSTEM MANAGEMENT PROGRAM - A storage device has plural data disks including a primary data area and a backup data area. Performance and reliability are secured while conserving power. A system management means includes a disk rotational state detection means, a disk rotational state control means for rotating or stopping a data disk, and a data placement control means for accessing the data disk to move the data. The data placement control means, if the data disk of the primary or backup side has been stopped at writing time, spins it up and accesses it; if the data disk of the primary or backup side has been stopped at reading time, prioritizes and accesses the side that is being rotated; and if the data disks of the primary and backup sides have both been stopped at reading time, spins up and accesses the side that has been stopped for the longer time. | 12-13-2012 |

20120331221 | SEMICONDUCTOR STORAGE DEVICE-BASED HIGH-SPEED CACHE STORAGE SYSTEM - Embodiments of the present invention provide a SSD-based high-speed cache storage system. Specifically, in a typical embodiment, a network cache component (NCC) is coupled to a high-speed cache storage pool (HCSP). The NCC generally comprises: a set of semiconductor storage device (SSD) memory disk units for storing data; a network cache controller coupled to the set of SSD memory units; a network traffic analysis component coupled to the network cache controller; a network interface coupled to the network traffic analysis component; a general storage controller coupled to the network cache controller; and a general storage interface coupled to the general storage controller. Moreover, the HCSP typically comprises a cache server, an internal interface, and a general storage system coupled to one another. | 12-27-2012 |
20130007361 | SELECTIVE DEVICE ACCESS CONTROL - Methods, systems, and computer program products for selective device access control in a data storage system are provided. A method includes initializing a plurality of access groups associated with logical devices used to access the data storage system, each of the plurality of access groups corresponding to a range of the logical devices, pursuant to a mount of a logical volume of the data storage system, and binding an access group name of one of the plurality of access groups to at least one of a metadata of the logical volume at a volume creation and a volume header of the logical volume, wherein the logical volume, once bound to the access group name, is granted access by those of the logical devices in a range of the logical devices corresponding to the one of the plurality of access groups. | 01-03-2013 |
20130024613 | PREFETCHING DATA TRACKS AND PARITY DATA TO USE FOR DESTAGING UPDATED TRACKS - Provided are a computer program product, system, and method for prefetching data tracks and parity data to use for destaging updated tracks. A write request is received including at least one updated track to the group of tracks. The at least one updated track is stored in a first cache device. A prefetch request is sent to the at least one sequential access storage device to prefetch tracks in the group of tracks to a second cache device. A read request is generated to read the prefetch tracks following the sending of the prefetch request. The read prefetch tracks returned to the read request from the second cache device are stored in the first cache device. New parity data is calculated from the at least one updated track and the read prefetch tracks. | 01-24-2013 |
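The final step of the abstract above, computing new parity from the updated track plus the prefetched tracks, corresponds to a full-stride parity calculation. The XOR parity below is an assumption (RAID-5 style; the abstract does not name the parity code), and all names are illustrative:

```python
def new_parity(updated_tracks, prefetched_tracks):
    """Recompute stride parity from the updated track(s) plus the
    prefetched unmodified tracks. XOR parity and equal track lengths
    are illustrative assumptions, not taken from the abstract."""
    parity = bytearray(len(updated_tracks[0]))
    for track in updated_tracks + prefetched_tracks:
        for i, b in enumerate(track):
            parity[i] ^= b
    return bytes(parity)
```

Prefetching the unmodified tracks into the second cache ahead of time is what makes this full-stride computation possible without synchronous reads at destage time.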
20130031306 | APPARATUS AND METHOD FOR PREFETCHING DATA - Apparatuses and methods for prefetching data are disclosed. A method may include receiving a read request at a data storage device, determining a meta key in an address map that includes a logical block address (LBA) of the read request, wherein the meta key includes a beginning LBA and a size field corresponding to a number of consecutive sequential LBAs stored on the data storage device, calculating a prefetch operation to prefetch data based on addresses included in the meta key, and reading data corresponding to the prefetch operation and the read request. An apparatus may include a processor configured to receive a read request, determine a first meta key and a second meta key in an address map, calculate a prefetch operation based on addresses included in the first meta key and the second meta key, and read data corresponding to the prefetch operation and the read request. | 01-31-2013 |
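The meta-key lookup described above (a beginning LBA plus a size field describing a run of consecutive sequential LBAs) can be sketched as a prefetch planner. The single-key case and all names are illustrative assumptions:

```python
def plan_prefetch(meta_keys, request_lba, request_len):
    """Given an address map of meta keys (begin_lba, size), each describing
    a run of consecutive sequential LBAs, return the (start_lba, length)
    worth prefetching for a read request, or None. Illustrative sketch of
    the scheme described in the abstract above."""
    for begin, size in meta_keys:
        if begin <= request_lba < begin + size:
            # Prefetch the remainder of the sequential run past the request.
            start = request_lba + request_len
            end = begin + size
            return (start, end - start) if start < end else None
    return None
```

The two-key variant in the abstract would extend this by also consulting the meta key adjacent to the request to size the prefetch across run boundaries.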
20130031307 | STORAGE APPARATUS, METHOD THEREOF AND SYSTEM - A storage apparatus includes a memory that stores job management information that registers a write job corresponding to a write command upon receiving the write command from another apparatus, a cache memory that stores data designated as target data by the write command, a storage drive that records the data stored in the cache memory to a storage medium based on the write job registered in the job management information, and a controller that controls a timing to output to the other apparatus a completion report of the write command based on a load condition of the storage drive related to an accumulation count of write jobs acquired from the job management information. | 01-31-2013 |
20130036265 | METHOD TO ALLOW STORAGE CACHE ACCELERATION WHEN THE SLOW TIER IS ON INDEPENDENT CONTROLLER - The present invention is directed to a method for providing storage acceleration in a data storage system. In the data storage system described herein, multiple independent controllers may be utilized, such that a first storage controller may be connected to a first storage tier (ex.—a fast tier) which includes a solid-state drive, while a second storage controller may be connected to a second storage tier (ex.—a slower tier) which includes a hard disk drive. The accelerator functionality may be split between the host of the system and the first storage controller of the system (ex.—some of the accelerator functionality may be offloaded to the first storage controller) for promoting improved storage acceleration performance within the system. | 02-07-2013 |
20130080696 | STORAGE CACHING/TIERING ACCELERATION THROUGH STAGGERED ASYMMETRIC CACHING - A multi-tiered system of data storage includes a plurality of data storage solutions. The data storage solutions are organized such that each progressively faster, more expensive solution serves as a cache for the previous solution, and each solution includes a dedicated data block to store individual data sets, newly written in a plurality of write operations, for later migration to slower data storage solutions in a single write operation. | 03-28-2013 |
20130086316 | Using unused portion of the storage space of physical storage devices configured as a RAID - Physical storage devices are configured as a redundant array of independent disks (RAID). As such, storage space of the physical storage devices is allocated to the RAID, and each physical storage device is part of the RAID. Where a portion of the storage space of the physical storage devices is not allocated to the RAID, as can occur with mixed drive capacities, this portion of the storage space is configured so that it is usable and not wasted. | 04-04-2013 |
20130091326 | SYSTEM FOR PROVIDING USER DATA STORAGE ENVIRONMENT USING NETWORK-BASED FILE SYSTEM IN N-SCREEN ENVIRONMENT - A system for providing a user data storage environment using a network-based file system in an N-screen environment is provided. The system may include a memory cache to store a cache file downloaded from a Network File System (NFS) storage that is equipped in a server and stores data for each user, and an NFS interface to store, in the memory cache, the cache file downloaded through an NFS, in order to use the cache file in the N-screen environment. | 04-11-2013 |
20130097375 | STORAGE DEVICE AND REBUILD PROCESS METHOD FOR STORAGE DEVICE - A storage device includes a plurality of magnetic disk devices each having a write cache, a processor unit that redundantly stores data, a rebuild execution control unit that performs a rebuild process, a write cache control unit that, at the time of the rebuild process, enables a write cache of a storage device that stores rebuilt data, and a rebuild progress management unit that is configured using a nonvolatile memory and manages progress information of the rebuild process. In the case where power discontinuity is caused during the rebuild process and then power is restored, the rebuild execution control unit calculates an address that is before an address of last written rebuilt data by an amount corresponding to the capacity of the write cache based on the progress information of the rebuild process managed by the progress management unit and resumes the rebuild process from that calculated address. | 04-18-2013 |
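The resume-address arithmetic described above, backing up from the last written address by the capacity of the drive's write cache, is simple enough to show directly. Block size and the ceiling rounding are illustrative assumptions:

```python
def rebuild_resume_address(last_written_lba, write_cache_bytes, block_size=512):
    """After power is restored, resume the rebuild early enough to re-cover
    any rebuilt data that may still have been sitting in a drive's volatile
    write cache when power was lost. Block size is an assumed parameter."""
    cache_blocks = -(-write_cache_bytes // block_size)  # ceiling division
    return max(0, last_written_lba - cache_blocks)
```

The `max(0, ...)` clamp simply keeps the resume point from running off the start of the device when the loss occurs very early in the rebuild.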
20130111125 | SHARED CACHE MODULE AND METHOD THEREOF | 05-02-2013 |
20130132667 | ADJUSTMENT OF DESTAGE RATE BASED ON READ AND WRITE RESPONSE TIME REQUIREMENTS - A storage controller that includes a cache receives a command from a host, wherein a set of criteria corresponding to read and write response times for executing the command have to be satisfied. The storage controller determines ranks of a first type and ranks of a second type corresponding to a plurality of volumes coupled to the storage controller, wherein the command is to be executed with respect to the ranks of the first type. Destage rates corresponding to the ranks of the first type are adjusted to be less than a default destage rate corresponding to the ranks of the second type, wherein the set of criteria corresponding to the read and write response times for executing the command are satisfied. | 05-23-2013 |
20130159619 | STORAGE SYSTEM HAVING A CHANNEL CONTROL FUNCTION USING A PLURALITY OF PROCESSORS - Host-connected storage system, including: a channel adaptor with a local router having a processor and transfer list index/processor number information, and a protocol processor for host and router data exchange; and plural storage nodes each including a processor and a disk drive and providing the disk drive to the host as a logical unit, wherein the processor number information includes a logical unit and a processor number of the node, wherein the transfer list index/processor number information includes a processor number identifying the processor and index information identifying a transfer list including an instruction sent to the protocol processor, wherein the router determines a first processor as the transfer destination of a write request via the processor number information on receiving the write request from the host through the protocol processor, and wherein the first processor, on receiving the write request, generates a first transfer list including processing instructed to the protocol processor, and first index information indexing the first transfer list. | 06-20-2013 |
20130166837 | DESTAGING OF WRITE AHEAD DATA SET TRACKS - Exemplary methods, computer systems, and computer program products for efficient destaging of a write ahead data set (WADS) track in a volume of a computing storage environment are provided. In one embodiment, the computer environment is configured for preventing destage of a plurality of tracks in cache selected for writing to a storage device. For a track N in a stride Z of the selected plurality of tracks, if the track N is a first WADS track in the stride Z, clearing at least one temporal bit for each track in the cache for the stride Z minus 2 (Z−2), and if the track N is a sequential track, clearing the at least one temporal bit for the track N minus a variable X (N−X). | 06-27-2013 |
20130185501 | CACHING SOURCE BLOCKS OF DATA FOR TARGET BLOCKS OF DATA - Provided are a computer program product, system, and method for processing a read operation for a target block of data. A read operation for the target block of data in target storage is received, wherein the target block of data is in an instant virtual copy relationship with a source block of data in source storage. It is determined that the target block of data in the target storage is not consistent with the source block of data in the source storage. The source block of data is retrieved into a cache. The data in the source block of data in the cache is synthesized to make the data appear to be retrieved from the target storage. The target block of data is marked as read from the source storage. In response to the read operation completing, the target block of data that was read from the source storage is demoted. | 07-18-2013 |
20130185502 | DEMOTING PARTIAL TRACKS FROM A FIRST CACHE TO A SECOND CACHE - A determination is made of a track to demote from the first cache to the second cache, wherein the track in the first cache corresponds to a track in the storage system and is comprised of a plurality of sectors. In response to determining that the second cache includes a stale version of the track being demoted from the first cache, a determination is made as to whether the stale version of the track includes track sectors not included in the track being demoted from the first cache. The sectors from the track demoted from the first cache are combined with sectors from the stale version of the track not included in the track being demoted from the first cache into a new version of the track. The new version of the track is written to the second cache. | 07-18-2013 |
20130198446 | STORAGE SYSTEM FOR ATOMIC WRITE OF ONE OR MORE COMMANDS - Storage systems which allow atomic write operations, methods of operating thereof, and corresponding computer program products. By way of non-limiting example, a possible method includes: receiving indication of a transaction, where a plurality of blocks directed to at least one destination logical volume and relating to at least one command is to be written as an atomic write operation; generating a transaction identifier number for the transaction; enabling tracking of the transaction at least partly based on the transaction identifier number, including temporary location of any one of the plurality of blocks; accommodating at least one block of the plurality temporarily in the storage system; and upon receiving an indication that all blocks in the plurality have been successfully temporarily accommodated in the storage system, enabling data corresponding to the plurality of blocks to subsequently be stored in the at least one destination logical volume and discontinuing tracking of the transaction. | 08-01-2013 |
20130198447 | STORAGE SYSTEM FOR ATOMIC WRITE WHICH INCLUDES A PRE-CACHE - Storage systems which allow atomic write operations, methods of operating thereof, and corresponding computer program products. By way of non-limiting example, a possible method includes: configuring volatile memory into cache memory and pre-cache memory; receiving an indication that a plurality of blocks relating to a command is to be written as an atomic write operation; enabling tracking of the atomic write operation; caching at least one block from the plurality in the pre-cache memory; and upon receiving an indication that all blocks in the plurality have been successfully accommodated in the pre-cache memory, enabling data corresponding to the plurality of blocks to subsequently be cached in the cache memory and discontinuing tracking of the atomic write operation. | 08-01-2013 |
20130198448 | ELASTIC CACHE OF REDUNDANT CACHE DATA - An apparatus for elastic caching of redundant cache data. The apparatus may have a plurality of buffers and a circuit. The circuit may be configured to (i) receive a write request from a host to store write data in a storage volume, (ii) allocate a number of extents in the buffers based upon a redundant organization associated with the write request and (iii) store the write data in the number of extents, where (a) each of the number of extents is located in a different one of the buffers and (b) the number of extents are dynamically linked together in response to the write request. | 08-01-2013 |
20130227215 | PROVIDING RECORD LEVEL SHARING (RLS) TO INDIVIDUAL CATALOGS - In one embodiment, a storage system includes a server system having a processor and a local buffer pool for storing instances for use in catalog requests, and a Direct Access Storage Device (DASD) subsystem electrically coupled to the server system and to at least one DASD, wherein the at least one DASD is adapted for providing at least one catalog configured according to a Basic Catalog Structure (BCS), wherein the at least one catalog includes at least one of: a user catalog including information related to locations of user data sets and system data sets stored to the at least one DASD, and a tape volume catalog including information related to locations of user data sets and system data sets stored to at least one tape medium, and wherein the data storage system is adapted for providing Record Level Sharing (RLS) for the at least one catalog stored to the at least one DASD. | 08-29-2013 |
20130238851 | HYBRID STORAGE AGGREGATE BLOCK TRACKING - Methods and apparatuses for operating a hybrid storage aggregate are provided. In one example, such a method includes operating a first tier of physical storage of the hybrid storage aggregate as a cache for a second tier of physical storage of the hybrid storage aggregate. The first tier of physical storage includes a plurality of assigned blocks. The method also includes updating metadata of the assigned blocks in response to an event associated with at least one of the assigned blocks. The metadata includes block usage information tracking more than two possible usage states per assigned block. The method can further include processing the metadata to determine a caching characteristic of the assigned blocks. | 09-12-2013 |
20130246703 | SHINGLED MAGNETIC RECORDING DISK DRIVE WITH INTER-BAND DISK CACHE AND MINIMIZATION OF THE EFFECT OF FAR TRACK ERASURE ON ADJACENT DATA BANDS - A shingled magnetic recording hard disk drive that uses writeable cache tracks in the inter-band gaps between the annular data bands minimizes the effect of far track erasure (FTE) in the boundary regions of annular data bands caused by writing to the cache tracks. Based on the relative FTE effect for all the tracks in a range of tracks of the cache track being written, a count increment (CI) table or a cumulative count increment (CCI) table is maintained. For every writing to a cache track, a count for each track in an adjacent boundary region, or a cumulative count for each adjacent boundary region, is increased. When the count value for a track, or the cumulative count for a boundary region, reaches a predetermined threshold the data is read from that band and rewritten to the same band. | 09-19-2013 |
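The cumulative-count-increment bookkeeping described above can be sketched as follows. The data shapes (region names, per-region increments, reset-on-rewrite) are illustrative assumptions; the patent's CI/CCI tables would supply the actual increments derived from the relative FTE effect:

```python
def record_cache_write(counts, adjacent_regions, increments, threshold):
    """Apply cumulative count increments for one write to an inter-band
    cache track, and return the boundary regions whose data bands should
    be refreshed (read back and rewritten). Illustrative sketch only."""
    refresh = []
    for region in adjacent_regions:
        counts[region] = counts.get(region, 0) + increments.get(region, 1)
        if counts[region] >= threshold:
            refresh.append(region)
            counts[region] = 0  # count restarts once the band is rewritten
    return refresh
```

A per-track (CI) variant would keep one counter per boundary-region track instead of one cumulative counter per region; the threshold test is otherwise the same.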
20130268730 | Grid Storage System and Method of Operating Thereof - A method of operating a storage system includes: configuring the address space so that each LBA is assigned to at least two servers among a plurality of at least three servers in a control grid: to a primary server with a primary responsibility for handling requests corresponding to said LBA, and to a secondary server with a secondary responsibility for handling requests corresponding to said LBA. In response to a request corresponding to a certain LBA range, generating by a data server having primary responsibility over the certain LBA range, a primary cache object; identifying a data server configured as a secondary data server with regard to the certain LBA range; and generating a redundancy cache object corresponding to the primary cache object only at the identified secondary data server, the redundancy cache object to be used by the identified secondary data server when taking the primary responsibility. | 10-10-2013 |
20130268731 | Host Controlled Hybrid Storage Device - A host based caching technique may be used to determine caching policies for a hybrid hard disk drive. Because the host based caching may make use of knowledge about what data is being cached, improved performance may be achieved in some cases. | 10-10-2013 |
20130275669 | APPARATUS AND METHOD FOR MEETING PERFORMANCE METRICS FOR USERS IN FILE SYSTEMS - A data block storage management capability is presented. A file system includes a plurality of data blocks which are managed using a first storage service and a second storage service, where the first storage service has a lower storage cost and a higher input-output cost than the second storage service. The data blocks stored using the second storage service have associated therewith respective expected storage durations indicative of respective lengths of time for which the data blocks are to be stored using the second storage service (which may be the same or different across the data blocks stored using the second storage service). The expected storage durations of the data blocks are modified based on a comparison of an expected hit rate of the second storage service and a current hit rate of the second storage service or current hit rates of the data blocks. | 10-17-2013 |
20130275670 | MULTIPLE ENHANCED CATALOG SHARING (ECS) CACHE STRUCTURE FOR SHARING CATALOGS IN A MULTIPROCESSOR SYSTEM - Various method and system embodiments for facilitating catalog sharing in multiprocessor systems use multiple ECS cache structures to which catalogs are assigned based on an attribute such as SMS storage class or a high level qualifier (HLQ) (e.g. an N-to-1 mapping) or each individual catalog (e.g. a 1-to-1 mapping). When maintenance is performed on an ECS shared catalog, the multiple ECS cache structure requires that only those catalogs associated with a particular ECS cache structure be disconnected. Any catalogs in the structure that are not involved in or affected by the maintenance may be temporarily or permanently moved to a different ECS cache structure. As a result, VVDS sharing is only required for those catalogs on which maintenance is being performed or that remain associated with that ECS cache structure during maintenance. This reduces I/O activity to the DASD, and results in a significant overall performance improvement. | 10-17-2013 |
20130282977 | CACHE CONTROL DEVICE, CACHE CONTROL METHOD, AND PROGRAM THEREOF - To prevent an increase in the management information and to increase the capacity of a secondary cache, the cache control device includes: a secondary cache having the data of the data sector and management information; and a primary cache having a digest value calculated from the address of the data and secondary management information. A controller includes: a digest value calculation unit which calculates the digest value of the data when reading out the data; a management information searching unit which searches the management information in the primary cache based on the digest value; and a readout control unit which specifies the data sector in the secondary cache based on the management information and reads out the data. | 10-24-2013 |
20130297870 | METHODS AND APPARATUS FOR CUT-THROUGH CACHE MANAGEMENT FOR A MIRRORED VIRTUAL VOLUME OF A VIRTUALIZED STORAGE SYSTEM - Methods and apparatus for cut-through cache memory management in write command processing on a mirrored virtual volume of a virtualized storage system, the virtual volume comprising a plurality of physical storage devices coupled with the storage system. Features and aspects hereof within the storage system provide for receipt of a write command and associated write data from an attached host. Using a cut-through cache technique, the write data is stored in a cache memory and transmitted to a first of the plurality of storage devices as the write data is stored in the cache memory thus eliminating one read-back of the write data for transfer to a first physical storage device. Following receipt of the write data and storage in the cache memory, the write data is transmitted from the cache memory to the other physical storage devices. | 11-07-2013 |
20130304986 | SYSTEMS AND METHODS FOR SECURE HOST RESOURCE MANAGEMENT - Systems and methods are described herein to provide for secure host resource management on a computing device. Other embodiments include apparatus and system for management of one or more host device drivers from an isolated execution environment. Further embodiments include methods for querying and receiving event data from manageable resources on a host device. Further embodiments include data structures for the reporting of event data from one or more host device drivers to one or more capability modules. | 11-14-2013 |
20130326137 | Methods and Systems for Retrieving and Caching Geofence Data - Mobile device systems and methods for monitoring geofences cache a subset of geofences within a likely travel perimeter determined based on speed and direction of travel, available roads, current traffic, etc. A server may download to mobile devices subsets of geofences within a likely travel perimeter determined based on a threshold travel time possible from a current location given current travel speed, direction and roads. The mobile device may receive a list of local geofences from a server, which may maintain or have access to a database containing all geofences. The mobile device may use the cached geofences in the normal manner, by comparing its location to the cached list of local geofences to detect matches. In an embodiment, the mobile device may calculate or receive from the server an update perimeter, which when crossed may prompt the mobile device to request an update to the geofences stored in cache. | 12-05-2013 |
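The update-perimeter check described above reduces to a distance test against the point where the cached geofence list was fetched. The circular perimeter, the equirectangular distance approximation, and all names below are illustrative assumptions:

```python
import math

def needs_geofence_update(current, center, update_radius_m):
    """Return True when the device has crossed a circular update perimeter
    around the point where the cached geofence list was downloaded, which
    should trigger a request for a fresh list. Illustrative sketch; a real
    perimeter need not be circular."""
    # Equirectangular approximation: adequate over the short distances
    # involved in a local geofence cache.
    lat1, lon1 = map(math.radians, current)
    lat2, lon2 = map(math.radians, center)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    dist_m = math.hypot(x, y) * 6_371_000  # mean Earth radius in meters
    return dist_m > update_radius_m
```

On each location fix the device would first match against the cached geofences, then run this check to decide whether the cache itself is stale.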
20130332673 | SELECTING A VIRTUAL TAPE SERVER IN A STORAGE SYSTEM TO PROVIDE DATA COPY WHILE MINIMIZING SYSTEM JOB LOAD - In a storage system including plural source storage devices, a target storage device selects which source storage device should provide a data copy so as to minimize the load on the entire system. The system calculates first and second load values for job loads being processed. System load values for the system are derived from the job load value of specific data and the respective load values for the first and second source storage devices. The system compares the system load values to select the storage device to provide the data copy so as to minimize the load on the entire system. | 12-12-2013 |
20130346688 | COMPUTER SYSTEM AND METHOD OF CONTROLLING I/O WITH RESPECT TO STORAGE APPARATUS - An aspect of this invention is a computer system, including: a storage apparatus for allocating real storage areas of a plurality of tiers of a tiered real storage area pool to a volume, and migrating and relocating data within the volume between the plurality of tiers; and a host apparatus that accesses the volume provided by the storage apparatus. The host apparatus is configured to refer to tier information including information on a corresponding one of the plurality of tiers to which an access destination address within the volume belongs to identify the corresponding one of the plurality of tiers to which the access destination address belongs and refer to settings predetermined for the plurality of tiers to perform I/O control for the access destination address based on settings of the identified corresponding one of the plurality of tiers. | 12-26-2013 |
20130346689 | STORAGE SYSTEM AND MANAGEMENT METHOD OF CONTROL INFORMATION THEREIN - An embodiment of this invention divides a cache memory of a storage system into a plurality of partitions, and the information in one or more of the partitions is composed of data different from user data, including control information. The storage system dynamically swaps data between an LU storing control information and a cache partition. Through this configuration, in a storage system having an upper limit on the capacity of the cache memory, a large amount of control information can be used while access performance to the control information is maintained. | 12-26-2013 |
20140006707 | ICC-NCQ Command Scheduling for Shingle-written Magnetic Recording (SMR) Drives | 01-02-2014 |
20140013047 | DEFINING ADDRESS RANGES USED TO CACHE SPECULATIVE READ DATA - A host read request affects a request address range of a main storage. A speculative address range proximate to the request address range is defined. Speculative data stored in the speculative address range is not requested via the host read request. A criterion is determined that is indicative of future read requests associated with the speculative data. The speculative data is copied from the main storage to at least one of a non-volatile cache and a volatile cache together with data of the host read request in response to the criterion meeting a threshold. The non-volatile cache and the volatile cache mirror respective portions of the main storage. | 01-09-2014 |
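The threshold-gated policy in the abstract above can be sketched as a function that decides which address ranges to copy into cache for a given read. The adjacent-window placement of the speculative range, the numeric score, and all names are illustrative assumptions:

```python
def ranges_to_cache(read_lba, read_len, score, threshold, window=64):
    """Return the (start_lba, length) ranges to copy into cache for a host
    read: always the requested range, plus an adjacent speculative range
    only when a criterion score (likelihood of future reads) meets the
    threshold. Illustrative sketch of the policy described above."""
    ranges = [(read_lba, read_len)]                   # data the host asked for
    if score >= threshold:
        ranges.append((read_lba + read_len, window))  # speculative data
    return ranges
```

In the described scheme the same decision could feed either the volatile or the non-volatile cache, each mirroring its portion of main storage.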
20140019681 | HIGH DENSITY DISK DRIVE PERFORMANCE ENHANCEMENT SYSTEM - The present invention provides an HDD performance enhancement system that utilizes excess disk capacity as cache memory to enhance the I/O performance of the drive. The cache memory is distributed throughout the disk, for example in alternating tracks, sectors dedicated to serving as cache, or other distributed cache track segments or segment groups. Distributing the cache throughout the disk reduces the physical distance of the I/O head to the closest available cache location. The system minimizes the write seek time by storing write data in the closest available cache location. High utilization data blocks are stored in multiple cache locations to reduce read seek time for high utilization data. The cached data is eventually written to permanent memory and cleared from the cache during idle or low data storage utilization periods. | 01-16-2014 |
20140019682 | SAVING LOG DATA USING A DISK SYSTEM AS PRIMARY CACHE AND A TAPE LIBRARY AS SECONDARY CACHE - Various embodiments are provided for saving log data in a hierarchical storage management system using a disk system as a primary cache with a tape library as a secondary cache. The user data is stored in the primary cache and written into the secondary cache at a subsequent period of time. Blank tapes in the secondary cache are prepared for storing the user data and the log data based on priorities. At least one of the blank tapes is selected for copying the log data and the user data from the primary cache to the secondary cache based on priorities. The log data is stored in the primary cache. The selection of at least one of the blank tapes completely filled with the log data is delayed for writing additional amounts of the user data. | 01-16-2014 |
20140052910 | STORAGE CONTROL DEVICE, STORAGE DEVICE, STORAGE SYSTEM, STORAGE CONTROL METHOD, AND PROGRAM FOR THE SAME - A storage control device is configured to control a storage device that includes a first disk, which is in an active state, and a second disk, which is in a standby state. The storage control device includes a communication unit and a control unit. The communication unit transmits a read-out request or a write request to the storage device and receives a response to the read-out request or the write request from the storage device. The control unit controls the communication unit so that the communication unit transmits a rotation start command, which instructs the storage device to start rotation of the second disk, when the time until a response is received to a read-out request or write request transmitted to the first disk, which is in the active state, is longer than a predetermined threshold. | 02-20-2014 |
20140059291 | METHOD FOR PROTECTING STORAGE DEVICE DATA INTEGRITY IN AN EXTERNAL OPERATING ENVIRONMENT - An invention is provided for protecting the data integrity of a cached storage device in an alternate operating system (OS) environment. The invention includes replacing an actual partition table for a disk with a dummy partition table. The dummy partition table is designed to render data on the disk inaccessible when the dummy partition table is used by an OS to access the data. During operation, the data on the disk can be accessed using information based on the actual partition table. In response to receiving a request to disable caching, the dummy partition table on the disk is replaced with the actual partition table, thus rendering the data on the formerly cached disk accessible in an alternate OS environment where appropriate caching software is not present. | 02-27-2014 |
20140059292 | TRANSPARENT HOST-SIDE CACHING OF VIRTUAL DISKS LOCATED ON SHARED STORAGE - Techniques for using a host-side cache to accelerate virtual machine (VM) I/O are provided. In one embodiment, the hypervisor of a host system can intercept an I/O request from a VM running on the host system, where the I/O request is directed to a virtual disk residing on a shared storage device. The hypervisor can then process the I/O request by accessing a host-side cache that resides on one or more cache devices distinct from the shared storage device, where the accessing of the host-side cache is transparent to the VM. | 02-27-2014 |
20140059293 | METHOD FOR PROTECTING A GPT CACHED DISKS DATA INTEGRITY IN AN EXTERNAL OPERATING SYSTEM ENVIRONMENT - An invention is provided for protecting the data integrity of a cached storage device in an alternate operating system (OS) environment. The invention includes replacing a globally unique identifiers partition table (GPT) for a cached disk with a modified globally unique identifiers partition table (MGPT). The MGPT renders cached partitions on the cached disk inaccessible when the MGPT is used by an OS to access the cached partitions, while un-cached partitions on the cached disk remain accessible when using the MGPT. In normal operation, the data on the cached disk is accessed using information based on the GPT, which can be stored on a caching disk, generally via caching software. In response to receiving a request to disable caching, the MGPT on the cached disk is replaced with the GPT, thus rendering all the data on the formerly cached disk accessible in an alternate OS environment where appropriate caching software is not present. | 02-27-2014 |
20140068178 | WRITE PERFORMANCE OPTIMIZED FORMAT FOR A HYBRID DRIVE - An apparatus for optimizing write performance of a hybrid drive includes a magnetic medium that stores data with respect to the hybrid drive and a plurality of write cache regions configured on the magnetic medium. When a write request is received by the hybrid drive, a head of the hybrid drive is automatically positioned to a nearest write cache region for writing of data to at least one write cache region without rotational orientation, thereby eliminating rotational latency and optimizing the write performance of the hybrid drive. The hybrid drive also updates normal data regions of the magnetic medium with data comprising write cached data during drive idle time, freeing up the write cache regions for future writes. | 03-06-2014 |
20140068179 | PROCESSOR, INFORMATION PROCESSING APPARATUS, AND CONTROL METHOD - A processor includes a cache memory that holds data from a main storage device. The processor includes a first control unit that controls acquisition of data and that outputs an input/output request requesting the transfer of the target data. The processor includes a second control unit that controls the cache memory. When an instruction to transfer the target data and a response, output by the first processor on the basis of the input/output request that has been output to the first processor, are received, the second control unit determines whether the destination of the response is the processor and, when it determines that the destination of the response is the processor, outputs the response and the target data for the input/output request to the first control unit. | 03-06-2014 |
20140068180 | DATA ANALYSIS SYSTEM - A data analysis system, particularly a system capable of efficiently analyzing big data, is provided. The data analysis system includes an analyst server, at least one data storage unit, a client terminal independent of the analyst server, and a caching device independent of the analyst server. The caching device includes a caching memory, a data transmission interface, and a controller for obtaining a data access pattern of the client terminal with respect to the at least one data storage unit, performing caching operations on the at least one data storage unit according to a caching criterion to obtain and store cache data in the caching memory, and sending the cache data to the analyst server via the data transmission interface, such that the analyst server analyzes the cache data to generate an analysis result, which may be used to request a change in the caching criterion. | 03-06-2014 |
20140075109 | CACHE OPTIMIZATION - A system and method for management and processing of resource requests at cache server computing devices is provided. Cache server computing devices segment content into an initialization fragment for storage in memory and one or more remaining fragments for storage in a media having higher latency than the memory. Upon receipt of a request for the content, a cache server computing device transmits the initialization fragment from the memory, retrieves the one or more remaining fragments, and transmits the one or more remaining fragments without retaining the one or more remaining fragments in the memory for subsequent processing. | 03-13-2014 |
20140082276 | STORAGE APPARATUS AND METHOD FOR CONTROLLING INTERNAL PROCESS - According to an aspect of the present invention, provided is a storage apparatus including a plurality of solid state drives (SSDs) and a processor. The SSDs store data in a redundant manner. The processor controls a reading process of reading data from an SSD and a writing process of writing data into an SSD. The processor controls an internal process, which is performed during the writing process, to be performed in each of the SSDs when any one of the SSDs satisfies a predetermined condition. | 03-20-2014 |
20140082277 | EFFICIENT PROCESSING OF CACHE SEGMENT WAITERS - For a plurality of input/output (I/O) operations waiting to assemble complete data tracks from data segments, a process, separate from a process responsible for the data assembly into the complete data tracks, is initiated for waking a predetermined number of the waiting I/O operations. A total number of I/O operations to be awoken at each iterated instance of the waking is limited. | 03-20-2014 |
20140108722 | VIRTUAL MACHINE INSTALLATION IMAGE CACHING - The subject matter of this specification can be implemented in, among other things, a computer-implemented method including sending, from a virtual desktop server manager at a data center and over a network, at least one request to a virtual machine storage domain for virtual machine installation images. The virtual machine storage domain stores the virtual machine installation images separate from the data center. The method further includes receiving, from the virtual machine storage domain over the network, the virtual machine installation images. The method further includes caching the virtual machine installation images in a data storage domain within the data center. The method further includes receiving a request to present a list of the virtual machine installation images. The method further includes in response to receiving the request to present the list, presenting the list of the cached virtual machine installation images. | 04-17-2014 |
20140108723 | REDUCING METADATA IN A WRITE-ANYWHERE STORAGE SYSTEM - Systems and methods for reducing metadata in a write-anywhere storage system are disclosed herein. The system includes a plurality of clients coupled with a plurality of storage nodes, each storage node having a plurality of primary storage devices coupled thereto. A memory management unit including cache memory is included in the client. The memory management unit serves as a cache for data produced by the clients before the data is stored in the primary storage. The cache includes an extent cache, an extent index, a commit cache and a commit index. The movement of data and metadata is managed by an interval tree. Methods for reducing data in the interval tree increase the data storage and data retrieval performance of the system. | 04-17-2014 |
20140115251 | Reducing Memory Overhead of Highly Available, Distributed, In-Memory Key-Value Caches - Maintaining high availability of objects for both read and write transactions. Secondary copies of cached objects are created and maintained on disks of a secondary caching node and in remote data storage. In response to an update request, the secondary copies of cached objects are updated. Secondary cached objects are synchronously invalidated in response to the update request, and the update is asynchronously propagated to a secondary caching node. | 04-24-2014 |
20140115252 | BLOCK STORAGE-BASED DATA PROCESSING METHODS, APPARATUS, AND SYSTEMS - The present disclosure relates to the field of information technology, and in particular, to a block storage-based data processing method, apparatus, and system. The block storage-based data processing method provided in embodiments of the present disclosure is applied in a system including at least two storage nodes, each storage node including a CPU, a cache medium, and a non-volatile storage medium, and the cache medium in all the storage nodes forming a cache pool. According to the method, after receiving a data operation request sent by a client, a service processing node sends the data operation request to a corresponding storage node in the system according to a logical address carried in the data operation request, so that the data operation request is processed in the cache medium of the storage node under control of the CPU of the storage node. | 04-24-2014 |
20140136776 | RESILIENT MIRRORING - An apparatus and associated method including a first storage device and a second storage device, each coupled to a remote server independently of the other via a network. Resilient mirroring logic is stored in each of the storage devices that establishes a peer-to-peer communication connection with the other storage device in response to receiving a data access command from the remote server. | 05-15-2014 |
20140173194 | COMPUTER SYSTEM MANAGEMENT APPARATUS AND MANAGEMENT METHOD - The present invention measures an actual utilization frequency of data and controls a location of this data in a storage apparatus in a case where a host computer makes joint use of a storage apparatus and a cache apparatus. A portion of data used by an application program | 06-19-2014 |
20140189234 | PROTECTING VOLATILE DATA OF A STORAGE DEVICE IN RESPONSE TO A STATE RESET - A plurality of aligned or unaligned data packets are received in a data storage device. A data bundle is constructed by concatenating different ones of the plurality of unaligned data packets. Data loss protection identifiers are utilized to track the construction of the data bundle. The data loss protection identifiers are employed to prevent at least one of packet data loss or metadata loss in response to detecting a state reset of the data storage device. | 07-03-2014 |
20140195732 | METHOD AND SYSTEM TO MAINTAIN MAXIMUM PERFORMANCE LEVELS IN ALL DISK GROUPS BY USING CONTROLLER VDs FOR BACKGROUND TASKS - Disclosed is a system and method for performing background tasks, such as reconstruction or on-line capacity expansion, on a disk group configured on a controller with minimal impact on the other disk groups configured on the same controller. A user continuously experiences increased performance on all source virtual disks configured on the controller, since the DRAM is always dedicated to I/O performance. | 07-10-2014 |
20140208017 | THINLY PROVISIONED FLASH CACHE WITH SHARED STORAGE POOL - For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and managed tiered levels of storage, a Solid State Device (SSD) tier is variably shared between the lower-speed cache and the managed tiered levels of storage such that the managed tiered levels of storage are operational on large data segments, and the lower-speed cache is allocated with the large data segments, yet operates with data segments of a smaller size than the large data segments and within the large data segments. | 07-24-2014 |
20140208018 | TIERED CACHING AND MIGRATION IN DIFFERING GRANULARITIES - For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and managed tiered levels of storage, groups of data segments are migrated between the tiered levels of storage such that uniformly hot ones of the groups of data segments are migrated to use a Solid State Drive (SSD) portion of the tiered levels of storage, clumped hot ones of the groups of data segments are migrated to use the SSD portion while using the lower-speed cache for a remaining portion of the clumped hot ones, and sparsely hot ones of the groups of data segments are migrated to use the lower-speed cache while using a lower one of the tiered levels of storage for a remaining portion of the sparsely hot ones. | 07-24-2014 |
20140208019 | CACHING METHOD AND CACHING SYSTEM USING DUAL DISKS - A caching method and a caching system using dual disks, adapted to an electronic apparatus having a first storage unit and a second storage unit, are provided, in which an access speed of the second storage unit is higher than that of the first storage unit. In the method, a data access to the first storage unit is monitored, a data category of the data in an access address of the data access is identified and whether the data category belongs to a cache category is determined. If the data category belongs to the cache category, an access count of the data in the access address being accessed is accumulated and whether the accumulated access count is over a threshold is determined. If the access count is over the threshold, the data in the access address is cached to the second storage unit. | 07-24-2014 |
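The access-count policy this abstract describes (monitor accesses to the slower first storage unit, check the data category against the cache categories, accumulate a per-address count, and promote to the faster second storage unit once the count exceeds a threshold) can be sketched in a few lines. The class and method names below are illustrative assumptions, not taken from the application:

```python
from collections import defaultdict

class DualDiskCache:
    """Sketch of the dual-disk caching decision from the abstract."""

    def __init__(self, cache_categories, threshold):
        self.cache_categories = set(cache_categories)
        self.threshold = threshold
        self.counts = defaultdict(int)  # per-address access counts
        self.cached = set()             # addresses promoted to the fast disk

    def on_access(self, address, category):
        """Record one access; return True if it is now served from cache."""
        if category not in self.cache_categories:
            return False                # data category is not cacheable
        self.counts[address] += 1
        if self.counts[address] > self.threshold:
            self.cached.add(address)    # copy data to the second storage unit
        return address in self.cached
```

On the third access to a cacheable address with a threshold of 2, the sketch promotes the data; non-cacheable categories are never counted.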
20140208020 | USE OF DIFFERING GRANULARITY HEAT MAPS FOR CACHING AND MIGRATION - For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and tiered levels of storage, groups of data segments are migrated between the tiered levels of storage such that uniformly hot ones of the groups of data segments are migrated to utilize a Solid State Drive (SSD) portion of the tiered levels of storage, while sparsely hot ones of the groups of data segments are migrated to utilize the lower-speed cache. | 07-24-2014 |
20140208021 | THINLY PROVISIONED FLASH CACHE WITH SHARED STORAGE POOL - For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and managed tiered levels of storage, a Solid State Device (SSD) tier is variably shared between the lower-speed cache and the managed tiered levels of storage such that the managed tiered levels of storage are operational on large data segments, and the lower-speed cache is allocated with the large data segments, yet operates with data segments of a smaller size than the large data segments and within the large data segments. | 07-24-2014 |
20140250268 | Method and apparatus for efficient cache read ahead - A method for providing improved sequential read performance in a storage controller is provided. In response to the storage controller receiving a host read request from a host computer, the method includes identifying, by the storage controller, a largest burst length of a plurality of burst lengths in a memory of the storage controller, and determining a maximum number of consecutive times between bursts having a value less than a predetermined value. A burst includes a consecutive group of sequential host read requests from the same host computer. The method also includes multiplying the largest burst length of the plurality of burst lengths by the maximum number of consecutive times between bursts having a value less than the predetermined value to obtain an effective burst length and reading into a storage controller cache memory at least the effective burst length of data from storage devices coupled to the storage controller. | 09-04-2014 |
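The read-ahead arithmetic in this abstract (multiply the largest observed burst length by the maximum number of consecutive inter-burst times below a predetermined value) can be illustrated directly; the function and parameter names are assumptions for the sketch:

```python
def effective_burst_length(burst_lengths, inter_burst_times, gap_threshold):
    """Return how much data to read ahead into the controller cache.

    burst_lengths: lengths of observed bursts of sequential host reads.
    inter_burst_times: elapsed times between consecutive bursts.
    gap_threshold: the predetermined value an inter-burst time must stay under.
    """
    largest = max(burst_lengths)
    # Longest run of consecutive inter-burst times below the threshold.
    run = best = 0
    for t in inter_burst_times:
        run = run + 1 if t < gap_threshold else 0
        best = max(best, run)
    return largest * best
```

With a largest burst of 8 units and at most 2 consecutive short gaps, the controller would read ahead 16 units.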
20140258608 | Storage Controller Cache Synchronization Method and Apparatus - A method for a pair of redundant storage controllers to ensure reliable cached write data transfers to storage device logical volumes is provided. The method includes maintaining metadata including a first number identifying which controller currently owns the volume, a second number identifying which controller previously owned the volume, a third number identifying which controller is a preferred owner of the volume, and an indication if the volume is write protected. The method also includes determining if all volumes currently owned by the first controller are write protected. If all volumes currently owned by the first controller are write protected, then the method includes verifying that the second controller is working and transferring cache data from the second controller to the first controller. If all volumes currently owned by the first controller are not write protected, then the method includes updating the second number and placing all volumes online. | 09-11-2014 |
20140258609 | QUALITY OF SERVICE CONTROL METHOD FOR STORAGE SYSTEM - A method and a system for controlling quality of service of a storage system, and a storage system. The method includes: collecting information about processing capabilities of the hard disks in the storage system and obtaining processing capabilities of the hard disks according to the information about processing capabilities; dividing a cache into multiple cache tiers according to the processing capabilities of the hard disks; and writing, for a cache tier in which dirty data reaches a preset threshold, data in the cache tier into at least one hard disk corresponding to the cache tier. The method avoids a phenomenon of preempting page resources in the cache. | 09-11-2014 |
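The per-tier flush rule in this abstract (the cache is divided into tiers according to hard-disk processing capability, and a tier whose dirty data reaches its preset threshold is written out to that tier's disks) might be sketched as below; all names are assumed for illustration:

```python
def tiers_to_flush(dirty_bytes_by_tier, threshold_by_tier):
    """Return the cache tiers whose dirty data has reached the preset threshold.

    dirty_bytes_by_tier: current dirty-data amount per cache tier.
    threshold_by_tier: per-tier flush threshold, sized from the processing
    capability of the hard disks backing that tier.
    """
    return [tier for tier, dirty in dirty_bytes_by_tier.items()
            if dirty >= threshold_by_tier[tier]]
```

Keeping the threshold proportional to each tier's disk capability is what prevents a slow tier from monopolizing cache pages, per the abstract's goal.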
20140281216 | VERTICALLY INTEGRATED STORAGE - Various systems, methods, apparatuses, and computer-readable media for accessing a storage device are described. Techniques are described for vertically integrating the various software functions and hardware functions for accessing storage hardware. In some embodiments, the system is implemented using non-volatile memory. | 09-18-2014 |
20140297940 | STORAGE CONTROL PROGRAM, STORAGE CONTROL METHOD, STORAGE SYSTEM AND HIERARCHY CONTROL APPARATUS THEREOF - A non-transitory computer readable storage medium that stores a storage control program causing a computer to execute a control process of a storage including a first storage device with first access speed, and a second storage device with second access speed that is slower than the first access speed, includes monitoring access frequency to data in the first and second storage devices; relocating data, the access frequency of which exceeds a first reference value, in the second storage device, to the first storage device; and conducting an overload processing of retaining, in the cache memory, at least a part of the data in the second storage device when the second storage device is in an overload state. The access frequency monitoring is executed in a state where the partial data is retained in the cache memory, and the relocating is executed based on the access frequency to the partial data. | 10-02-2014 |
20140304468 | DATA CONSOLIDATION USING A COMMON PORTION ACCESSIBLE BY MULTIPLE DEVICES - Multiple devices are provided access to a common, single instance of data and may use it without consuming resources beyond what would be required if only one device were using that data in a traditional configuration. In order to retain the device-specific differences, they are kept separate, but their relationship to the common data is maintained. All of this is done in a fashion that allows a given device to perceive and use its data as though it was its own separately accessible data. | 10-09-2014 |
20140325142 | Input/Output De-Duplication Based on Variable-Size Chunks - Techniques, systems, and articles of manufacture for input/output de-duplication based on variable-size chunks. A method includes partitioning virtual block data into multiple variable-sized chunks, caching each of the multiple variable-sized chunks in a chunk cache according to content of each of the multiple variable-sized chunks, initializing virtual block-to-chunk mapping and chunk-to-physical block mapping for each of the multiple variable-sized chunks, and detecting duplicate disk input and/or output requests across multiple hosts based on content-based mappings of the input and/or output requests to the chunk cache and the virtual block-to-chunk mapping and chunk-to-physical block mapping for each of the multiple variable-sized chunks in the chunk cache. | 10-30-2014 |
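The content-based chunk mapping this abstract describes can be illustrated roughly as follows. The class, its methods, and the choice of SHA-256 as the content hash are assumptions for the sketch, not details from the application:

```python
import hashlib

class ChunkCache:
    """Sketch: cache variable-sized chunks by content so duplicate
    I/O for identical content across hosts hits one cache entry."""

    def __init__(self):
        self.by_content = {}   # content digest -> chunk bytes
        self.virtual_map = {}  # (host, virtual block) -> content digest

    def write(self, host, vblock, chunk):
        """Cache a chunk; return True if its content was already cached."""
        digest = hashlib.sha256(chunk).hexdigest()
        duplicate = digest in self.by_content
        self.by_content[digest] = chunk
        self.virtual_map[(host, vblock)] = digest   # virtual-block-to-chunk map
        return duplicate

    def read(self, host, vblock):
        """Resolve a virtual block to its chunk via the content mapping."""
        return self.by_content[self.virtual_map[(host, vblock)]]
```

A second host writing identical content is detected as a duplicate, which is the basis for suppressing the redundant disk I/O.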
20140331007 | VIRTUAL LIBRARY CONTROLLER AND CONTROL METHOD - A virtual library controller includes: a substitution logical volume creation unit to create, in a case that a logical volume subject to an instruction to write data from a superior device is not present in a cache disk, a substitution logical volume in the cache disk; and a write process unit to carry out write of the data in the created substitution logical volume. | 11-06-2014 |
20140351504 | APPARATUS AND METHOD FOR TRANSFERRING DATA BETWEEN STORAGES HAVING DIFFERENT ACCESS SPEEDS - An apparatus is connected to a first storage and a second storage which is accessed at an access speed lower than an access speed of the first storage. The apparatus accesses each of blocks stored in the second storage, and counts, for each of the blocks, the number of accesses made for the each block. The apparatus determines, based on the number of accesses that has been counted for each of the blocks, a transfer target block that is a target which is to be transferred from the second storage to the first storage, and determines a transfer time at which transfer of the transfer target block is to be performed. The apparatus transfers the determined transfer target block to the first storage at the determined transfer time. | 11-27-2014 |
20140359211 | METHOD FOR DISK DEFRAG HANDLING IN SOLID STATE DRIVE CACHING ENVIRONMENT - An invention is provided for handling target disk access requests during disk defragmentation in a solid state drive caching environment. The invention includes detecting a request to access a target storage device. In response, data associated with the request is written to the target storage device without writing the data to the caching device, with the proviso that the request is a write request. In addition, the invention includes reading data associated with the request and marking the data associated with the request stored in the caching device for discard, with the proviso that the request is a read request and the data associated with the request is stored on the caching device. Data marked for discard is discarded from the caching device when time permits, for example, upon completion of disk defragmentation. | 12-04-2014 |
20140365725 | Method and apparatus for efficiently destaging sequential I/O streams - A method for destaging write data from a storage controller to storage devices is provided. The method includes determining that a cache element should be transferred from a write cache of the storage controller to the storage devices, calculating that a dirty watermark is above a dirty watermark maximum value, identifying a first cache element to destage from the write cache to the storage devices, transferring a first data container including the first cache element to the storage devices, and incrementing an active destage count. The method also includes repeating determining, calculating, identifying, transferring, and incrementing if the active destage count is less than an active destage count maximum value. The active destage count is a current number of write requests issued to a virtual disk that have not yet been completed, and the virtual disk is a RAID group comprising one or more specific storage devices. | 12-11-2014 |
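The destage loop summarized above (destage while the dirty watermark exceeds its maximum and the active destage count stays under its cap) might look roughly like this minimal sketch; function and parameter names are assumptions:

```python
def destage(write_cache, dirty_max, active_max, issue):
    """Destage cache elements from the write cache to the storage devices.

    write_cache: list of dirty cache elements (oldest first).
    dirty_max: dirty watermark maximum; destaging runs while exceeded.
    active_max: cap on write requests issued but not yet completed.
    issue: callback that transfers one element's data container to the disks.
    """
    active = 0
    while len(write_cache) > dirty_max and active < active_max:
        ce = write_cache.pop(0)  # identify the next cache element to destage
        issue(ce)                # transfer its data container to the RAID group
        active += 1              # one more outstanding destage write
    return active
```

The loop stops either when the watermark drops back below its maximum or when the outstanding-write cap is hit, matching the repeat condition in the abstract.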
20140372693 | SYSTEM, METHOD AND A NON-TRANSITORY COMPUTER READABLE MEDIUM FOR READ THROTLING - A method for managing read requests, the method may include receiving from a requesting entity a read request for reading an information unit stored in a storage system; determining by a control entity of the storage system whether the information unit is cached in a cache memory of the storage system and whether at least a predetermined number of disk drives of the storage system are currently overloaded; introducing a delay to a response to the read request thereby increasing a time difference between a time of the receiving of the read request and a time of a provision of the information unit to the requesting entity, if it is determined that the information unit is not cached in the cache memory and that the at least predetermined number of disk drives of the storage system are currently overloaded; and providing the information unit to the requesting entity. | 12-18-2014 |
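The throttling decision in this abstract (delay the response only when the information unit is not cached and at least a predetermined number of disk drives are overloaded) can be sketched as follows; the names and the sleep-based delay are assumptions for illustration:

```python
import time

def serve_read(block, cache, overloaded_drives, overload_limit, delay_s=0.01):
    """Serve a read request, delaying it only under the two-part condition."""
    if block not in cache and overloaded_drives >= overload_limit:
        time.sleep(delay_s)    # widen the gap between request and response
        throttled = True
    else:
        throttled = False      # cached data, or disks not overloaded enough
    data = cache.get(block, f"<data for {block}>")
    return data, throttled
```

Cache hits are never delayed, and misses are delayed only while enough drives remain overloaded, so the back-pressure disappears as the disks recover.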
20140372694 | Methods and Apparatus for Cut-Through Cache Management for a Mirrored Virtual Volume of a Virtualized Storage System - Methods and apparatus for cut-through cache memory management in write command processing on a mirrored virtual volume of a virtualized storage system, the virtual volume comprising a plurality of physical storage devices coupled with the storage system. Features and aspects hereof within the storage system provide for receipt of a write command and associated write data from an attached host. Using a cut-through cache technique, the write data is stored in a cache memory and transmitted to a first of the plurality of storage devices as the write data is stored in the cache memory thus eliminating one read-back of the write data for transfer to a first physical storage device. Following receipt of the write data and storage in the cache memory, the write data is transmitted from the cache memory to the other physical storage devices. | 12-18-2014 |
20150012698 | RESTORING TEMPORAL LOCALITY IN GLOBAL AND LOCAL DEDUPLICATION STORAGE SYSTEMS - Techniques and mechanisms described herein facilitate the restoration of temporal locality in global and local deduplication storage systems. According to various embodiments, when it is determined that cache memory in a storage system has reached a capacity threshold, each of a plurality of data dictionary entries stored in the cache memory may be associated with a respective merge identifier. Each data dictionary entry may correspond with a respective data chunk. Each data dictionary entry may indicate a storage location of the respective data chunk in the storage system. The respective merge identifier may indicate temporal locality information about the respective data chunk. The plurality of data dictionary entries may be stored to disk memory in the storage system. Each of the stored plurality of data dictionary entries may include the respective merge identifier. | 01-08-2015 |
20150012699 | SYSTEM AND METHOD OF VERSIONING CACHE FOR A CLUSTERING TOPOLOGY - Aspects of the disclosure pertain to a system and method for versioning cache for a clustered topology. In the clustered topology, a first controller mirrors write data from a cache of the first controller to a cache of the second controller. When communication between controllers of the topology is disrupted (e.g., when the second controller goes offline, while the first controller stays online), the first controller increments a cache version number stored in a disk data format of a logical disk, the logical disk being owned by the first controller and associated with the write data. The incremented cache version number provides an indication to the second controller that the data of the cache of the second controller is stale. | 01-08-2015 |
20150012700 | MANAGING A CACHE IN A MULTI-NODE VIRTUAL TAPE CONTROLLER - According to one embodiment, a system includes a virtual tape library having a cache, a virtual tape controller (VTC) coupled to the virtual tape library, and an interface for coupling multiple hosts to the VTC. The cache is shared by the multiple hosts, and a common view of a cache state, a virtual library state, and a number of write requests pending is provided to the hosts by the VTC. In another embodiment, a method includes receiving data from at least one host using a VTC, storing data received from all the hosts to a cache using the VTC, sending an alert to all the hosts when free space is low and entering into a warning state, sending another alert to all the hosts when free space is critically low and entering into a critical state while allowing previously mounted virtual drives to continue normally. | 01-08-2015 |
20150019806 | MEMORY DEVICE WITH PAGE EMULATION MODE - In some examples, a memory device is configured to load multiple pages of an internal page size into a cache in response to receiving an activate command and to write multiple pages of the internal page size into a memory array in response to receiving a precharge command. In some implementations, the memory array is arranged to store multiple pages of the internal page size in a single physical row. | 01-15-2015 |
20150039824 | IMPLEMENTING ENHANCED BUFFER MANAGEMENT FOR DATA STORAGE DEVICES - A method, apparatus and a data storage device for implementing enhanced buffer management for storage devices. An amount of emergency power for the storage device is used to determine a time period for the storage device between emergency power loss and actual shutdown of the electronics. This time period for storing write cache data to non-volatile storage is used to identify the amount of write cache data that can be safely written from the write cache to non-volatile memory after an emergency power loss, and the resulting write cache threshold is used for selected buffer management techniques that provide enhanced storage device performance, including enhanced SSD or HDD performance. | 02-05-2015 |
20150058553 | DATA WRITING METHOD, HARD DISC MODULE, AND DATA WRITING SYSTEM - A data writing method, a hard disc module, and a data writing system for writing data into the hard disc module are provided, wherein the hard disc module includes a plurality of memory units. The data writing method includes the following steps. A cache data is received and a data class of the cache data is determined. If the data class of the cache data belongs to a first type, the cache data is distributed and written to the memory units. If the data class of the cache data belongs to a second type, the cache data is written to one of the memory units. | 02-26-2015 |
20150089129 | COMPUTER SYSTEM AND STORAGE MANAGEMENT METHOD - A storage management system, where, when adding a specified area of storage media to a storage tier in response to a request from a host computer, a management computer: obtains storage media information, including the I/O frequency of a data storage area of a volume(s) as well as performance information and structure information of the storage media, from the storage apparatus; identifies one or more storage media, which have not been allocated to any of the volumes and whose I/O performance exceeds the I/O frequency, on the basis of the structure information of the storage media, so that the data storage area of the volume(s), to which a specified storage tier is allocated, would achieve a specified I/O performance target; and issues an instruction to the storage apparatus to create a storage tier by using the identified storage media. | 03-26-2015 |
20150095567 | STORAGE APPARATUS, STAGING CONTROL METHOD, AND COMPUTER-READABLE RECORDING MEDIUM HAVING STORED STAGING CONTROL PROGRAM - A cache controller controls data input/output of the storage device and causes the semiconductor storage device to function as a cache memory of the storage device. A staging controller performs, when data is staged from the storage device to the cache memory, first staging amount control until a staging amount to the cache memory exceeds a first threshold after the storage apparatus starts up (a first period); performs second staging amount control until a variation per unit time of a read amount from the cache memory falls within a predetermined range after the first period (a second period); and performs third staging amount control after the second period. With this configuration, the semiconductor storage device can be used efficiently. | 04-02-2015 |
20150127901 | SAVING LOG DATA USING A DISK SYSTEM AS PRIMARY CACHE AND A TAPE LIBRARY AS SECONDARY CACHE - Various embodiments are provided for saving log data in a hierarchical storage management system that uses a disk system as a primary cache and a tape library as a secondary cache. User data is stored in the primary cache and written into the secondary cache at a subsequent period of time. Blank tapes in the secondary cache are prepared for storing the user data and the log data. At least one of the blank tapes is selected, based on priorities, for copying the log data and the user data from the primary cache to the secondary cache. The log data is stored in the primary cache. | 05-07-2015 |
20150134900 | CACHE EFFICIENCY IN A SHARED DISK DATABASE CLUSTER - Disclosed herein are system, method, and computer program product embodiments for storing and accessing data in a shared disk database system using a timestamp range to improve cache efficiency. An embodiment operates by retrieving, by a node, from a shared storage, a blockmap identity and a root page associated with a data request, based on a determination that the blockmap identity associated with the data request is present in a cache. The embodiment continues by retrieving, by the node, the logical page by copying a stored logical page from the shared storage and setting a lower timestamp value of the logical page to a timestamp associated with the stored logical page and an upper timestamp value of the logical page to a timestamp associated with the data request, based on a determination that the logical page is not present in the cache. | 05-14-2015 |
20150293700 | CONTROL APPARATUS AND CONTROL METHOD - A control apparatus includes: a storage unit configured to store information of a designated compression parameter value and information of a compression parameter value calculated from a data size of a data file with respect to the data file compressed with the designated compression parameter value that represents a reduction degree of the data size and stored in a storage device; and a controller configured to compare, in terms of the reduction degree, the designated compression parameter value and the calculated compression parameter value for each data file, and set a recompression target by extracting the data file in which the calculated compression parameter value is lower than the designated compression parameter value. | 10-15-2015 |
20150293703 | PAGE TABLE INCLUDING DATA FETCH WIDTH INDICATOR - Embodiments relate to a page table including a data fetch width indicator. An aspect includes allocating a memory page in a main memory to an application. Another aspect includes creating a page table entry corresponding to the memory page in the page table. Another aspect includes determining, by a data fetch width indicator determination logic, the data fetch width indicator for the memory page. Another aspect includes sending a notification of the data fetch width indicator from the data fetch width indicator determination logic to supervisory software. Another aspect includes setting the data fetch width indicator in the page table entry by the supervisory software based on the notification. Another aspect includes, based on a cache miss in the cache memory corresponding to an address that is located in the memory page, fetching an amount of data from the memory page based on the data fetch width indicator. | 10-15-2015 |
20150293704 | MEMORY-AREA PROPERTY STORAGE INCLUDING DATA FETCH WIDTH INDICATOR - Embodiments relate to memory-area property storage including a data fetch width indicator. An aspect includes allocating a memory page in a main memory to an application that is executed by a processor of a computer. Another aspect includes determining the data fetch width indicator for the allocated memory page. Another aspect includes setting the data fetch width indicator in the at least one memory-area property storage in the allocated memory page. Another aspect includes, based on a cache miss in the cache memory corresponding to an address that is located in the allocated memory page: determining the data fetch width indicator in the memory-area property storage associated with the location of the address; and fetching an amount of data from the memory page based on the data fetch width indicator. | 10-15-2015 |
20150301743 | COMPUTER AND METHOD FOR CONTROLLING ALLOCATION OF DATA IN STORAGE APPARATUS HIERARCHICAL POOL - A computer is coupled to a storage apparatus having a hierarchical pool with multiple storage tiers that are multiple page groups having different IO performance. Page data is migrated between storage tiers in accordance with a migration request. The computer has a storage device that stores performance requirement management information denoting a performance requirement of a task, which is an execution unit of a prescribed process, and IO performance information enabling identification of a difference in IO performance between the multiple storage tiers. A control device calculates an amount of change predicted in the task execution performance on supposition that one or more data objects for use in the task are to be migrated to a page in another storage tier, and identifies, based on the predicted amount of change, one or more data objects capable of being migrated to a page in another storage tier. | 10-22-2015 |
20150331696 | NETWORK BOOT SYSTEM - To suppress decreases in boot speed and slow operation of a terminal even when the terminal is booted a second or subsequent time in a private mode in which the terminal writes directly to a virtual disk. | 11-19-2015 |
20150339056 | CLIENT-SIDE DATA CACHING - An apparatus for processing data from a host storage device includes a client processing device configured to be connected by a communication channel to the host storage device. The client processing device includes: a processor configured to request a data set stored at the host storage device, the data set associated with a globally unique identifier; and a cache configured to store a copy of the data set and the globally unique identifier based on the processor receiving the data set from the host storage device, the cache being a persistent storage configured to retain the copy of the data set until the processor stores a new data set in the cache, the cache configured to retain the copy of the data set independent of an amount of time that the data set is stored in the cache. | 11-26-2015 |
20150347022 | READING AND WRITING VIA FILE SYSTEM FOR TAPE RECORDING SYSTEM - Communicating data with a medium is provided. A cache is provided for storing target data of a file identified by an access request from an application of a host. The cache is divided into a read cache, a write cache, and an index cache. Responsive to receiving the access request: the medium is loaded onto a drive using a file system; target data is stored to the write cache and to the read cache; and the index file stored in the index cache is updated to reflect position metadata about the target data stored in the write cache. Responsive to initiating unloading of the medium from the drive: the updated index file stored in the index cache is written to the index partition of the medium; and the target data stored in the write cache is written onto a data partition of the medium without using the file system. | 12-03-2015 |
20150356024 | Translation Lookaside Buffer - The described embodiments include a translation lookaside buffer (“TLB”) that is used for performing virtual address to physical address translations when making memory accesses in a memory in a computing device. In the described embodiments, the TLB includes a hierarchy of tables that are each used for performing virtual address to physical address translations based on the arrangement of pages of memory in corresponding regions of the memory. When performing a virtual address to physical address translation, the described embodiments perform a lookup in each of the tables in parallel for the virtual address to physical address translation and use a physical address that is returned from a lowest table in the hierarchy as the translation. | 12-10-2015 |
20150370715 | Safe and Efficient Dirty Data Flush for Dynamic Logical Capacity Based Cache in Storage Systems - Systems and methods to safely and efficiently handle dirty data flush are disclosed. More specifically, when a cache controller determines that one (or more) storage device of a cache device is running out of space, that storage device is given priority to be flushed before the other storage devices that are not in such a critical condition. In addition, a cache bypass process can be conditionally enabled to conserve the free physical space that is already running low on such critical cache storage devices. | 12-24-2015 |
20150370716 | System and Method to Enable Dynamic Changes to Virtual Disk Stripe Element Sizes on a Storage Controller - A storage controller includes a storage controller memory to store virtual disk metadata including an original stripe size (OSS) field and a logical stripe size (LSS) field, and a cache memory having an OSS buffer and a LSS buffer. The storage controller stores a first block size in the OSS field, configures a RAID array to provide storage blocks of the first block size based on the OSS field, stores a second block size in the LSS field, receives a first data transaction that includes a first data block of the second block size based upon the LSS field, maps the first data block from the second block size to the first block size, and executes the first data transaction on the RAID array using the first block size. | 12-24-2015 |
20160011809 | STORAGE DEVICE AND COMPUTER SYSTEM | 01-14-2016 |
20160034358 | STORAGE APPARATUS AND METHOD FOR CONTROLLING CACHE OF STORAGE APPARATUS - A storage apparatus is connected to a host apparatus and a secondary storage apparatus and includes a memory, a storage device, and a processor. The memory includes a save memory area and a cache memory area that temporarily stores data received from the host apparatus. The storage device stores data that is received from the host apparatus. | 02-04-2016 |
20160055091 | FUZZY COUNTERS FOR NVS TO REDUCE LOCK CONTENTION - A method for data management in a computing storage environment includes a processor device, operable in the computing storage environment, that divides a plurality of counters tracking write and discard storage operations through Non-Volatile Storage (NVS) space into a first, accurate group and a second, fuzzy group, where the first, accurate group is updated on a per-operation basis, while the second, fuzzy group is updated more infrequently as compared to the first, accurate group. | 02-25-2016 |
20160062895 | METHOD FOR DISK DEFRAG HANDLING IN SOLID STATE DRIVE CACHING ENVIRONMENT - An invention is provided for handling target disk access requests during disk defragmentation in a solid state drive caching environment. The invention includes detecting a request to access a target storage device. In response, data associated with the request is written to the target storage device without writing the data to the caching device, with the proviso that the request is a write request. In addition, the invention includes reading data associated with the request and marking the data associated with the request stored in the caching device for discard, with the proviso that the request is a read request and the data associated with the request is stored on the caching device. Data marked for discard is discarded from the caching device when time permits, for example, upon completion of disk defragmentation. | 03-03-2016 |
20160064659 | ELECTRONIC DEVICE - An electronic device includes a semiconductor memory. The semiconductor memory includes a selection element layer; a material layer directly coupled to a first surface of the selection element layer and including a conductive filament; and a variable resistance layer coupled to a second surface of the selection element layer opposite to the first surface. | 03-03-2016 |
20160085451 | DRIVE ARRAY POLICY CONTROL - An apparatus can include an interface; cache memory; a plurality of drives; and a controller that includes detection circuitry, a write through mode and a write back mode, where the write through mode writes information received via the interface to the plurality of drives, where the write back mode writes information received via the interface to the cache memory and writes information written to the cache memory to the plurality of drives, and where the detection circuitry selects the write through mode based at least in part on detection of a first condition and selects the write back mode based at least in part on detection of a second condition, where the first condition and the second condition differ. | 03-24-2016 |
20160098193 | METHOD AND APPARATUS FOR MONITORING SYSTEM PERFORMANCE AND DYNAMICALLY UPDATING MEMORY SUB-SYSTEM SETTINGS USING SOFTWARE TO OPTIMIZE PERFORMANCE AND POWER CONSUMPTION - A method and apparatus are disclosed to monitor system performance and dynamically update memory subsystem settings using software to optimize system performance and power consumption. In an example embodiment, the apparatus monitors a software application's cache performance and provides the software application the cache performance data. The software application, which has a higher-level/macro view of the overall system and better determination of its future requests, analyzes the performance data to determine more optimal memory sub-system settings. The software application provides the system more optimal settings to implement in the memory component to improve the memory and overall system performance and efficiency. | 04-07-2016 |
20160098217 | METHOD AND SYSTEM FOR PRESERVING DATA OF A STORAGE DEVICE - Various embodiments of a method and system for preserving data of a data storage device are disclosed. The method can include determining a number of times data is written to a target track of a storage medium; rewriting data from a track adjacent the target track if the number of times data is written to the target track exceeds a first predetermined threshold; determining a number of times data is rewritten to the adjacent track; copying data from the target track to a first storage location of a media cache if the number of times data is rewritten to the adjacent track exceeds a second predetermined threshold; writing subsequent data designated for the target track to the first storage location of the media cache; and relocating data from the first storage location of the media cache to the target track. | 04-07-2016 |
20160124673 | CACHE ALLOCATION FOR DISK ARRAY - A method for allocating cache for a disk array includes monitoring an I/O distribution of the disk array in a predetermined time period; determining a garbage collection state of the disk array, where the garbage collection state either allows the disk array to perform garbage collection or prevents the disk array from performing garbage collection; and determining an allocation of the cache based on the I/O distribution and the garbage collection state. | 05-05-2016 |
20160154738 | TIERED DATA STORAGE SYSTEM | 06-02-2016 |
20160162210 | OPENSTACK SWIFT INTERFACE FOR TAPE LIBRARY (OSSITL) - A system for providing a Swift Storage Node includes a media library having drives and tapes formatted according to a Linear Tape File System (LTFS). A management system is connected over a first interface to the media library and over a second interface to a Swift Proxy Server. The management system provides a Swift-On-Disk File System Application Programming Interface (API) over the second external interface, uses a Virtual File System (VFS), stores and registers objects in the VFS, controls the media library to move a respective tape into a respective drive, moves the object from the cache data store to the respective tape using the LTFS and updates the VFS. The management system also receives a request over the Swift-On-Disk File System API to read an object, determines in the VFS the location of the object, loads the object using the LTFS and provides the object. | 06-09-2016 |
20160196074 | DATA ARRANGEMENT APPARATUS, STORAGE MEDIUM, AND DATA ARRANGEMENT METHOD | 07-07-2016 |
20160253111 | STORAGE DEVICE AND STORING METHOD | 09-01-2016 |
20170235485 | SHORT STROKING AND DATA TIERING FOR A DISTRIBUTED FILESYSTEM | 08-17-2017 |
20170235648 | INFORMATION MANAGEMENT BY A MEDIA AGENT IN THE ABSENCE OF COMMUNICATIONS WITH A STORAGE MANAGER | 08-17-2017 |
20190146923 | DEVICES, SYSTEMS, AND METHODS FOR CONFIGURING A STORAGE DEVICE WITH CACHE | 05-16-2019 |
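The cache-version staleness mechanism described in entry 20150012699 at the top of this listing can be illustrated with a minimal single-process sketch. The class and method names (`MirroredCache`, `peer_link_lost`) are illustrative assumptions for this example, not details from the patent:

```python
class MirroredCache:
    """Sketch of cache-version staleness detection (cf. entry 20150012699).

    The owning controller records an authoritative cache version in the
    disk data format of the logical disk; the peer's mirrored cache is
    tagged with the version it was built at. When the peer link is lost,
    the surviving owner increments the on-disk version, so a returning
    peer can detect that its mirrored copy is stale.
    """

    def __init__(self):
        self.disk_version = 0      # version stored in the logical disk's data format
        self.mirror_version = 0    # version the peer's mirrored cache was built at
        self.mirrored_data = {}

    def mirror_write(self, key, value):
        # Normal operation: mirror the write and keep the versions in sync.
        self.mirrored_data[key] = value
        self.mirror_version = self.disk_version

    def peer_link_lost(self):
        # The owner continues alone: bump the on-disk version so the
        # peer's mirror is recognizably out of date when it returns.
        self.disk_version += 1

    def peer_cache_is_stale(self):
        return self.mirror_version < self.disk_version
```

A returning peer would call `peer_cache_is_stale()` before trusting its mirrored data, discarding the mirror when the on-disk version has moved ahead.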
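The "fuzzy counters" approach of entry 20160055091 — taking the shared lock only once per batch of operations rather than on every update — can be sketched roughly as below. The class name, batch size, and explicit `flush` method are assumptions made for this example; the patent's NVS-specific details are not reproduced:

```python
import threading


class FuzzyCounter:
    """A counter that reduces lock contention by batching updates.

    Each thread accumulates increments in thread-local state and takes
    the shared lock only once every `batch` increments, trading momentary
    accuracy of the global total for far fewer lock acquisitions.
    """

    def __init__(self, batch=64):
        self.batch = batch
        self._lock = threading.Lock()
        self._total = 0                    # authoritative (but lagging) total
        self._local = threading.local()    # per-thread pending increments

    def increment(self, n=1):
        pending = getattr(self._local, "pending", 0) + n
        if pending >= self.batch:
            with self._lock:               # rare: one lock per `batch` operations
                self._total += pending
            pending = 0
        self._local.pending = pending

    def flush(self):
        """Push this thread's pending increments into the shared total."""
        pending = getattr(self._local, "pending", 0)
        if pending:
            with self._lock:
                self._total += pending
            self._local.pending = 0

    def value(self):
        with self._lock:
            return self._total
```

Until a thread flushes, `value()` can lag the true count by up to `batch - 1` per thread, which is exactly the accuracy-for-contention trade-off the "fuzzy" group of counters accepts.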