17th week of 2014 patent application highlights part 60 |
Patent application number | Title | Published |
20140115189 | METHOD AND SYSTEM FOR IMPLEMENTING ELASTIC NETWORK INTERFACE AND INTERCONNECTION - The disclosure provides a method and system for implementing a resilient network interface and interconnection. The method includes: aggregating one or multiple aggregation ports on one or multiple nodes into one DLAG; and implementing a distributed resilient network interface by the DLAG. Through the disclosure, the problem that the existing ring network and other protection technologies cannot ensure normal transmission of traffic in an arbitrary network is solved, thereby effectively ensuring normal operation of a service in the network and improving the reliability of the network interface and the utilization rate of the link. | 2014-04-24 |
20140115190 | RING-OF-CLUSTERS NETWORK TOPOLOGIES - In a ring-of-clusters network topology, groups of slave devices are accessed in parallel, such that the latency around the ring is proportional to the number of clusters and not proportional to the number of integrated circuits. The devices of a cluster share input and output ring segments such that packets arriving on the input segment are received and interpreted by all the devices in a cluster. In other embodiments, none, some, or all but one of the slaves per cluster are asleep or otherwise disabled so that they do not input and interpret incoming packets. Regardless, in all embodiments, the slaves of a cluster cooperate, potentially under the controller's direction, to ensure that at most one of them is actively driving the output segment at any given time. The devices may be addressed through a device ID, a cluster ID, or a combination thereof. Embodiments of the invention are suited to exploit multi-chip module implementations and forms of vertical circuit stacking. | 2014-04-24 |
20140115191 | METHOD OF ALLOCATING UNIQUE IDENTIFIER AND BATTERY MANAGEMENT SYSTEM USING THE SAME - Disclosed are a method of allocating unique identifiers to slave battery managers for managing battery modules by a master battery manager and a battery management system using the same, and the method includes making a request for allocation information to the slave battery managers; receiving the allocation information from the slave battery managers; and allocating the unique identifiers to the slave battery managers based on the allocation information, wherein the allocation information contains a MAC address of a device performing a calibration between the slave battery manager and the battery module and time information on a time when the calibration is performed. According to the present invention, it is possible to efficiently control and manage a plurality of battery modules by allocating unique identifiers by using allocation information set to each of the plurality of battery modules. | 2014-04-24 |
20140115192 | METHOD AND DEVICE FOR PROVIDING HIGH SPEED DATA TRANSMISSION WITH VIDEO DATA - A method and device for operating a data link having multiple data lanes is provided. The method includes supplying first data (such as video data that follows the DisplayPort protocol) on one or more data lanes of a data interface between a video source device and a video sink device. In addition to being video stream data (such as the above mentioned DisplayPort video data), the first data can also be audio stream data (such as DisplayPort audio data), source-sink interface configuration data (such as DisplayPort AUX data), and sink related interrupt data (such as DisplayPort Hot Plug Detect “HPD” data). The method also includes receiving second data on one or more unidirectional data lanes of the data interface. The second data is data other than video stream data, source-sink interface configuration data, and sink related interrupt data. | 2014-04-24 |
20140115193 | METHOD AND DEVICE FOR ENUMERATING INPUT/OUTPUT DEVICES - Embodiments of the present disclosure relate to a method and a device for enumerating input/output devices (IO devices). The method for enumerating input/output devices includes: acquiring an identifier of each processor and an identifier of each input/output (IO) centralized controller in a system; separately instructing processors to simultaneously enumerate a specific IO centralized controller and an IO device connected to the specific IO centralized controller, according to the identifier of each processor and the identifier of each IO centralized controller; and acquiring related information of IO devices enumerated by the instructed processors. According to the embodiments of the present disclosure, the work of enumerating the system IO devices may be allocated to multiple processors to be carried out simultaneously, so as to greatly reduce the time consumed in the enumeration process and to accelerate the system initialization process. | 2014-04-24 |
20140115194 | Data Card Updating Method, Personal Computer, and Data Card - The technical solutions disclose a data card software updating method, a data card, and a personal computer. In the method, a mapped interface is switched first, and then an existing data card updating method is adopted to update a data card that provides only a remote network driver interface specification (RNDIS) interface through mapping. The technical solutions can conveniently achieve compatibility with the existing data card updating method, and can be implemented simply and efficiently. | 2014-04-24 |
20140115195 | DMA VECTOR BUFFER - According to one example embodiment, a direct memory access (DMA) engine and buffer is disclosed. The vector buffer may be explicitly programmable, and may include advanced logic for reordering non-unity-stride vector data. An example MEMCPY instruction may provide an access request to the DMA buffer, which may then service the request asynchronously. Bitwise guards are set over memory in use, and cleared as each bit is read. | 2014-04-24 |
20140115196 | DEVICE AND METHOD FOR CONVERTING INPUT SIGNAL - Provided is an input signal converting device, including a device input receiving unit configured to receive an input signal from an input device, and extract a type of the input device from which the received input signal is generated and an input event corresponding to the input signal, an input event converting unit configured to obtain an output event in a corresponding user terminal and a type of a user terminal corresponding to the extracted input device type and input event, and a device input transmitting unit configured to generate an output signal corresponding to the output event to be output to the user terminal, and deliver the generated output signal to the user terminal. | 2014-04-24 |
20140115197 | INTER-QUEUE ANTI-STARVATION MECHANISM WITH DYNAMIC DEADLOCK AVOIDANCE IN A RETRY BASED PIPELINE - Methods and apparatus relating to an inter-queue anti-starvation mechanism with dynamic deadlock avoidance in a retry based pipeline are described. In one embodiment, logic may arbitrate between two queues based on various rules. The queues may store data including local or remote requests, data responses, non-data responses, external interrupts, etc. Other embodiments are also disclosed. | 2014-04-24 |
20140115198 | INTERRUPT LATENCY PERFORMANCE COUNTERS - A system and method for finding the sources of increased interrupt latencies. An interrupt controller includes monitoring logic for measuring and storing latencies for servicing interrupt requests. The interrupt controller determines a measured latency is greater than an associated threshold and in response sends an indication of a long latency. The interrupt controller may send the indication to firmware, a device driver, or other software. The interrupt controller stores associated information with the measured latency for debug purposes. Additionally, the monitoring logic may perform statistical analysis in place of, or in addition to, software. | 2014-04-24 |
20140115199 | ELECTRONIC DEVICE AND SEMICONDUCTOR DEVICE - There is a need to alleviate or reduce crosstalk between bonding wires or wires in a device substrate. One selection configuration divides a multiplexed terminal group into three groups according to functions differently from another selection configuration that divides the multiplexed terminal group into two groups. A first multi-pin semiconductor device is configured such that the groups are successively arranged along one edge of the chip. The first semiconductor device connects with a second semiconductor device via a multiplexed terminal group. The multiplexed terminal group includes first through third interface terminal groups that differ from each other in signal input/output configurations. | 2014-04-24 |
20140115200 | DEVICE AND METHOD FOR WRITING/READING A MEMORY REGISTER SHARED BY A PLURALITY OF PERIPHERALS - A device and method for writing/reading a piece of data in/from a memory register shared by a plurality of peripherals, each peripheral having a peripheral clock signal, when two or more of the plurality of peripherals need to write/read such piece of data at the same time, the digital device including a central unit having the memory register and a bank of SL modules in signal communication with the central unit, the bank of SL modules being designed to write/read the piece of data. The bank of SL modules comprises a plurality of writing/reading modules whose priority value ranges between maximum and minimum priority values, each module being connected to a respective peripheral, the central unit includes a multiplexer in signal communication on the one hand with the plurality of writing/reading modules, and on the other hand with the memory register, each module comprises an arbitration cell, such that the first module is identified by the maximum priority value (Prmax′) and the other N−1 modules are identified by decreasing priority values, the central unit operating at a predetermined main clock frequency to write/read the piece of data in the memory register. | 2014-04-24 |
20140115201 | Signal Order-Preserving Method and Apparatus - Embodiments of the present invention relate to a signal order-preserving method and apparatus. When data of a request signal that comes from a corresponding first upstream device is written into a first first-in first-out (FIFO) memory, invalid data is written into a second FIFO memory corresponding to a second upstream device in the same clock cycle; and the data of the request signal is read from the first FIFO memory, the invalid data is read from the second FIFO memory, the invalid data is discarded, and the data of the request signal is conveyed to a downstream device. Through the signal order-preserving method and apparatus in the embodiments of the present invention, the coupling between devices on which there is an order-preserving requirement is reduced while signal order-preserving is achieved. | 2014-04-24 |
20140115202 | ELECTRONIC DEVICE, COMMUNICATION CONTROL METHOD OF ELECTRONIC DEVICE, AND INFORMATION TERMINAL DEVICE - An electronic device includes a connection unit that enables a USB connection to a smartphone, a normal communication mode in which communication is performed by using a device class prepared in advance in the smartphone, a request unit that makes a request for switching to a unique communication mode, a search unit that searches for a device class usable in the normal communication mode after the request for switching to the unique communication mode has been made, a determination unit that determines, on the basis of a search result, whether or not a disadvantageous change has occurred in the normal communication mode, and a reset unit that resets the USB connection when it is determined that a disadvantageous change has occurred. | 2014-04-24 |
20140115203 | SYSTEM AND METHOD OF PROCESSING SEISMIC DATA ON A CO-PROCESSOR DEVICE - A system and method for processing seismic data on one or more co-processor devices that are operatively coupled to a host computing system via a communications channel. The compression of input data transmitted to the co-processor device and/or the size of the storage provided on the co-processor device may enhance the efficiency of the processing of the data on the peripheral device by obviating a bottleneck caused by the relatively slow transfer of data between the host computing system and the co-processor device or by the relatively slow transfer of data within the co-processor device between the co-processor information storage and the co-processor. | 2014-04-24 |
20140115204 | SYSTEM AND METHOD OF PROCESSING SEISMIC DATA ON A CO-PROCESSOR DEVICE - A system and method for processing seismic data on one or more co-processor devices that are operatively coupled to a host computing system via a communications channel. The compression of input data transmitted to the co-processor device and/or the size of the storage provided on the co-processor device may enhance the efficiency of the processing of the data on the peripheral device by obviating a bottleneck caused by the relatively slow transfer of data between the host computing system and the co-processor device or by the relatively slow transfer of data within the co-processor device between the co-processor information storage and the co-processor. | 2014-04-24 |
20140115205 | Secure Digital Card Capable of Transmitting Data Over Wireless Network - The present invention provides an SD card, including: an SDIO interface, a selector switch, a storage unit, a baseband processing unit, a radio frequency circuit, and an antenna. The SDIO interface is configured to provide a data and control interface between a host device and the storage unit. The storage unit is configured to store data. The selector switch includes a first branch and a second branch, and when the selector switch connects to the first branch, a read/write interface of the storage unit is coupled to the SDIO interface, and when the selector switch connects to the second branch, the read/write interface of the storage unit is coupled to the baseband processor. The baseband processor is coupled to the radio frequency circuit, and is configured to process baseband data. | 2014-04-24 |
20140115206 | METHODS AND SYSTEMS FOR RUNNING NETWORK PROTOCOLS OVER PERIPHERAL COMPONENT INTERCONNECT EXPRESS - Methods and devices for running network protocols over Peripheral Component Interconnect Express are disclosed. The methods and devices may receive an electronic signal comprising data. The methods and devices may also determine that the data corresponds to a protocol selected from a set comprising a PCIe protocol and a network protocol. In addition, the methods and devices may configure a CPU based on the determined protocol. The methods and devices may also receive a second electronic signal comprising second data at a pin or land of the CPU, wherein the pin or land is connected to a PCIe lane and wherein the second data is formatted in accordance with the determined protocol. In addition, the methods and devices may process the second data in accordance with the determined protocol. | 2014-04-24 |
20140115207 | HIGH PERFORMANCE INTERCONNECT PHYSICAL LAYER - A periodic control window is embedded in a link layer data stream to be sent over a serial data link, where the control window is configured to provide physical layer information including information for use in initiating state transitions on the data link. The link layer data can be sent during a link transmitting state of the data link and the control window can interrupt the sending of flits. In one aspect, the information includes link width transition data indicating an attempt to change the number of active lanes on the link. | 2014-04-24 |
20140115208 | CONTROL MESSAGING IN MULTISLOT LINK LAYER FLIT - A link layer control message is generated and included in a flit that is to be sent over a serial data link to a device. The flits sent over the data link are to include a plurality of slots. Control messages can include, in some aspects, a viral alert message, a poison alert message, a credit return message, and acknowledgements. | 2014-04-24 |
20140115209 | Flow Control for a Serial Peripheral Interface Bus - Systems and methods for flow control within a Serial Peripheral Interface without additional signal lines are included herein. In one example, a method includes generating a flow control command. The method also includes sending the flow control command from a master device to a slave device with a Serial Peripheral Interface. In addition, the method includes sending a memory address from the master device to the slave device. Furthermore, the method includes detecting a ready indicator in the master device. The method also includes waiting to receive a ready indicator and communicating with the slave device in response to the ready indicator. | 2014-04-24 |
20140115210 | Multi Processor Multi Domain Conversion Bridge with Out of Order Return Buffering - An asynchronous dual domain bridge is implemented between the cache coherent master and the coherent system interconnect. The bridge has two halves, one in each clock/powerdown domain: master and interconnect. The asynchronous bridge is aware of the bus protocols used by each individual processor within the attached subsystem, and can perform the appropriate protocol conversion on each processor's transactions to adapt the transaction to/from the bus protocol used by the interconnect. | 2014-04-24 |
20140115211 | SYSTEM AND METHOD FOR CONTROLLING DEVICES - A method and system for communicating, comprising: at least one master device comprising at least one master driver with at least one intelligent vending controller application; at least one slave device comprising at least one slave driver; and at least one controller area network (CAN) bus facilitating communication between the at least one master device and the at least one slave device; the master device facilitating communication between at least one host application and the at least one master device and the at least one slave device such that the slave device does not require the at least one intelligent vending controller application in order to communicate with the host application; wherein the at least one master device and the at least one slave device are vendor devices. | 2014-04-24 |
20140115212 | SERIAL COMMUNICATION CIRCUIT, INTEGRATED CIRCUIT DEVICE, PHYSICAL QUANTITY MEASURING DEVICE, ELECTRONIC APPARATUS, MOVING OBJECT, AND SERIAL COMMUNICATION METHOD - A serial communication circuit includes a receiving unit configured to serially receive input data including a command and a synchronization identification code that is different from the command and a determining unit configured to receive the synchronization identification code from the receiving unit and when the synchronization identification code coincides with a slave selection value, to instruct a start of response processing based on the command. | 2014-04-24 |
20140115213 | TIERED LOCKING OF RESOURCES - In an embodiment, a lock command is received from a thread that specifies a resource. If tier status in a nodal lock indicates the nodal lock is currently owned, an identifier of the thread is added to a nodal waiters list, and if the thread's lock wait indicator indicates that the thread owns the nodal lock, then a successful completion status is returned for the lock command to the thread after waiting until a next tier wait indicator in the nodal lock indicates that any thread owns a global lock on the resource. If the tier status indicates no thread holds the nodal lock, the tier status is changed to indicate the nodal lock is owned, and if a global waiters and holder list is empty, an identifier of a node at which the thread executes is added to the global waiters and holder list. | 2014-04-24 |
20140115214 | BITMAP LOCKING USING A NODAL LOCK - In an embodiment, in response to a request from a producer thread to set a bit in a global bitmap, a nodal lock is obtained on a nodal bitmap at a node at which the producer thread executes. A determination is made whether a corresponding bit in a pending clear bitmap in the nodal bitmap indicates that a clear of the bit in the global bitmap is pending. If the corresponding bit in the pending clear bitmap in the nodal bitmap indicates that a clear of the bit in the global bitmap is pending, the corresponding bit in the pending clear bitmap is cleared. If the corresponding bit in the pending clear bitmap in the nodal bitmap indicates that the clear of the bit in the global bitmap is not pending, a corresponding bit in a pending set bitmap in the nodal bitmap is set. | 2014-04-24 |
20140115215 | TIERED LOCKING OF RESOURCES - In an embodiment, a lock command is received from a thread that specifies a resource. If tier status in a nodal lock indicates the nodal lock is currently owned, an identifier of the thread is added to a nodal waiters list, and if the thread's lock wait indicator indicates that the thread owns the nodal lock, then a successful completion status is returned for the lock command to the thread after waiting until a next tier wait indicator in the nodal lock indicates that any thread owns a global lock on the resource. If the tier status indicates no thread holds the nodal lock, the tier status is changed to indicate the nodal lock is owned, and if a global waiters and holder list is empty, an identifier of a node at which the thread executes is added to the global waiters and holder list. | 2014-04-24 |
20140115216 | BITMAP LOCKING USING A NODAL LOCK - In an embodiment, in response to a request from a producer thread to set a bit in a global bitmap, a nodal lock is obtained on a nodal bitmap at a node at which the producer thread executes. A determination is made whether a corresponding bit in a pending clear bitmap in the nodal bitmap indicates that a clear of the bit in the global bitmap is pending. If the corresponding bit in the pending clear bitmap in the nodal bitmap indicates that a clear of the bit in the global bitmap is pending, the corresponding bit in the pending clear bitmap is cleared. If the corresponding bit in the pending clear bitmap in the nodal bitmap indicates that the clear of the bit in the global bitmap is not pending, a corresponding bit in a pending set bitmap in the nodal bitmap is set. | 2014-04-24 |
20140115217 | FORMAL VERIFICATION OF ARBITERS - A computer-implemented method, computerized apparatus, and computer program product for formal verification of an arbiter design. The method comprises: performing formal verification of an arbiter design, wherein the arbiter design is based on an original arbiter design comprising a fairness logic and an arbitration logic, and wherein the arbiter design comprises the arbitration logic and a portion of the fairness logic; and wherein the formal verification is performed with respect to a multi-dimensional Complete Random Sequence (CRS) having two or more dimensions. | 2014-04-24 |
20140115218 | ASYMMETRIC MESH NoC TOPOLOGIES - A method of interconnecting blocks of heterogeneous dimensions using a NoC interconnect with sparse mesh topology includes determining a size of a mesh reference grid based on dimensions of the chip, dimensions of the blocks of heterogeneous dimensions, relative placement of the blocks and a number of host ports required for each of the blocks of heterogeneous dimensions, overlaying the blocks of heterogeneous dimensions on the mesh reference grid based on a guidance floor plan for placement of the blocks of heterogeneous dimensions, removing ones of a plurality of nodes and corresponding ones of links to the ones of the plurality of nodes which are blocked by the overlaid blocks of heterogeneous dimensions, based on porosity information of the blocks of heterogeneous dimensions, and mapping inter-block communication of the network-on-chip architecture over remaining ones of the nodes and corresponding remaining ones of the links. | 2014-04-24 |
20140115219 | GENERAL INPUT/OUTPUT ARCHITECTURE, PROTOCOL AND RELATED METHODS TO IMPLEMENT FLOW CONTROL - An enhanced general input/output communication architecture, protocol and related methods are presented. | 2014-04-24 |
20140115220 | SYNCHRONIZING BARRIER SUPPORT WITH ZERO PERFORMANCE IMPACT - The barrier-aware bridge tracks all outstanding transactions from the attached master. When a barrier transaction is sent from the master, it is tracked by the bridge, along with a snapshot of the current list of outstanding transactions, in a separate barrier tracking FIFO. Each barrier is separately tracked with whatever transactions are outstanding at that time. As outstanding transaction responses are sent back to the master, their tracking information is simultaneously cleared from every barrier FIFO entry. | 2014-04-24 |
20140115221 | Processor-Based System Hybrid Ring Bus Interconnects, and Related Devices, Processor-Based Systems, and Methods - Processor-based system hybrid ring bus interconnects, and related devices, systems, and methods are disclosed. In one embodiment, a processor-based system hybrid ring bus interconnect is provided. The processor-based system hybrid ring bus interconnect includes multiple ring buses, each having a bus width and configured to receive bus transaction messages from a requester device(s). The processor-based system hybrid ring bus interconnect also includes an inter-ring router(s) coupled to the ring buses. The inter-ring router(s) is configured to dynamically direct bus transaction messages among the ring buses based on bandwidth requirements of the requester device(s). Thus, less power is consumed than by a crossbar interconnect due to simpler switching configurations. Further, the inter-ring router(s) allows for provision of multiple ring buses that can be dynamically activated and deactivated based on bandwidth requirements. This provides conservation of power when full bandwidth requirements on the processor-based system hybrid ring bus interconnect are not required. | 2014-04-24 |
20140115222 | HIGH SPEED SERIAL PERIPHERAL INTERFACE SYSTEM - A serial peripheral interface (SPI) system including a bus adapter is disclosed. The bus adapter may include a data converter that may be adapted to receive respective first and second data from a first master output peripheral input (MOPI) line and a chip select line from a SPI master device. The data converter may also be adapted to interleave the first and second data, and the data converter may be adapted to transmit the interleaved first and second data synchronously with a second clock signal on a second MOPI line. The bus adapter may also include a clock rate adjuster adapted to generate the second clock signal to transmit to a SPI peripheral device. The second clock signal may be adapted to enable the SPI peripheral device to read the transmitted data. | 2014-04-24 |
20140115223 | DUAL CASTING PCIE INBOUND WRITES TO MEMORY AND PEER DEVICES - Methods and apparatus for supporting dual casting of inbound system memory writes from PCIe devices to memory and a peer PCIe device. An inbound system memory write request from a first PCIe device is received at a PCIe root complex and the memory address is inspected to determine whether it falls within an address window defined for dual casting operations. If it does, an IO write request is generated from the inbound system memory write request and sent to a second PCIe device associated with the address window. During a parallel operation, the original inbound system memory write request is forwarded to a system agent configured to receive such write requests. | 2014-04-24 |
20140115224 | MEMORY INTERCONNECT NETWORK ARCHITECTURE FOR VECTOR PROCESSOR - The present disclosure provides a memory interconnection architecture for a processor, such as a vector processor, that performs parallel operations. An example processor may include a compute array that includes processing elements; a memory that includes memory banks; and a memory interconnect network architecture that interconnects the compute array to the memory. In an example, the memory interconnect network architecture includes a switch-based interconnect network and a non-switch based interconnect network. The processor is configured to synchronously load a first data operand to each of the processing elements via the switch-based interconnect network and a second data operand to each of the processing elements via the non-switch-based interconnect network. | 2014-04-24 |
20140115225 | CACHE MANAGEMENT BASED ON PHYSICAL MEMORY DEVICE CHARACTERISTICS - A processor unit removes, responsive to obtaining a new address, an entry from a memory of a type of memory based on a comparison of a performance of the type of memory to different performances, each of the different performances associated with a number of other types of memory. | 2014-04-24 |
20140115226 | CACHE MANAGEMENT BASED ON PHYSICAL MEMORY DEVICE CHARACTERISTICS - A processor unit removes, responsive to obtaining a new address, an entry from a memory of a type of memory based on a comparison of a performance of the type of memory to different performances, each of the different performances associated with a number of other types of memory. | 2014-04-24 |
20140115227 | SELECTIVE COUPLING OF AN ADDRESS LINE TO AN ELEMENT BANK OF A VECTOR REGISTER FILE - A method includes selectively coupling a first address line of a plurality of address lines and a second address line of the plurality of address lines to a first element bank of a plurality of element banks of a vector register file according to a selection pattern. The method also includes accessing data stored within the first element bank that is selectively addressed by the first address line via a single read port. | 2014-04-24 |
20140115228 | METHOD AND SYSTEM FOR VM-GRANULAR I/O CACHING - Methods are presented for caching I/O data in a solid state drive (SSD) locally attached to a host computer supporting the running of a virtual machine (VM). Portions of the SSD are allocated as cache storage for VMs running on the host computer. A mapping relationship is maintained between unique identifiers for VMs running on the host computer and one or more process identifiers (PIDs) associated with processes running in the host computer that correspond to each VM's execution on the host computer. When an I/O request is received, a PID associated with the I/O request is determined and a unique identifier for the VM is extracted from the mapping relationship based on the determined PID. A portion of the SSD corresponding to the unique identifier of the VM that is used as a cache for the VM can then be accessed in order to handle the I/O request. | 2014-04-24 |
20140115229 | METHOD AND SYSTEM TO REDUCE SYSTEM BOOT LOADER DOWNLOAD TIME FOR SPI BASED FLASH MEMORIES - Method and system for providing increased operating frequency of flash memories compatible with the Serial Peripheral Interface (SPI) bus protocol by delayed data capturing, so that system boot loader download time is reduced for a given memory configuration. Methods and systems are provided for operating the memory at the device rated frequency. | 2014-04-24 |
20140115230 | Flash Memory with Data Retention Partition - A NAND flash memory chip includes a first partition that has smaller memory cells, with smaller charge storage elements, and a second partition that has larger memory cells, with larger charge storage elements, in the same memory array. Data is selected for storage in the first or second partition according to characteristics, or expected characteristics, of the data. | 2014-04-24 |
20140115231 | NAND MEMORY MANAGEMENT - Apparatus, systems, and methods to manage NAND memory are described. In one embodiment, an apparatus comprises memory controller logic to apply a binary parity check code to a binary string and convert the binary string to a ternary string. Other embodiments are also disclosed and claimed. | 2014-04-24 |
20140115232 | Metadata Journaling with Error Correction Redundancy - Method and apparatus for managing a memory, such as but not limited to a flash memory. In accordance with some embodiments, user data and associated metadata are stored in a memory. The metadata are arranged as a first sequence of snapshots of the metadata at different points in time during the operation of the memory, and a second sequence of intervening journals which reflect updates to the metadata from one snapshot to the next. Requested portions of the metadata are recovered from the memory using a selected snapshot in the first sequence and first and second journals in the second sequence. | 2014-04-24 |
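The snapshot-plus-journal scheme above can be sketched in a few lines: recovery starts from a snapshot and replays the intervening journals in time order. The dict-of-updates journal format below is an assumption for demonstration, not the abstract's actual on-media layout.

```python
# Hypothetical sketch of snapshot + journal metadata recovery: each
# journal records updates made since the previous snapshot or journal.
def recover_metadata(snapshot, journals):
    """Rebuild current metadata from a snapshot and later journal updates."""
    state = dict(snapshot)
    for journal in journals:  # journals are applied in time order
        state.update(journal)
    return state

snapshot = {"blockA": "loc1", "blockB": "loc2"}
journals = [{"blockB": "loc3"}, {"blockC": "loc4"}]
print(recover_metadata(snapshot, journals))
# {'blockA': 'loc1', 'blockB': 'loc3', 'blockC': 'loc4'}
```

Journaling only the deltas keeps writes small between snapshots, at the cost of a replay pass during recovery.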
20140115233 | Restoring Virtualized GCU State Information - Method and apparatus for managing a memory, such as but not limited to a flash memory. In accordance with some embodiments, initial state information is stored which identifies an actual state of a garbage collection unit (GCU) of a memory during a normal operational mode. During a restoration mode after a memory power cycle event, a virtualized state of the GCU is determined responsive to the initial state information and to data read from the GCU. The memory is transitioned from the restoration mode to the normal operational mode once the virtualized state for the GCU is determined. | 2014-04-24 |
20140115234 | MEMORY SYSTEM COMPRISING NONVOLATILE MEMORY DEVICE AND RELATED METHOD OF OPERATION - A method of programming a nonvolatile memory device comprises generating write data and metadata associated with the write data, generating a seed associated with the write data and scrambling the generated seed, randomizing the write data and the metadata using the scrambled seed, and programming the randomized write data, the randomized metadata, and the scrambled seed in the nonvolatile memory device. | 2014-04-24 |
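A seed-keyed randomizer like the one the abstract above describes can be modeled as XOR against a pseudo-random stream derived from the scrambled seed; because XOR is its own inverse, regenerating the stream from the stored scrambled seed de-randomizes the data on read. The XOR-constant scramble below is a simple placeholder, not the patented scrambling step.

```python
# Hedged sketch: XOR with a seed-keyed pseudo-random stream stands in
# for the device's randomizer. The scramble function is a placeholder.
import random

def scramble_seed(seed):
    return seed ^ 0xA5A5A5A5  # placeholder scramble, not the patented one

def randomize(data, scrambled_seed):
    rng = random.Random(scrambled_seed)
    return bytes(b ^ rng.randrange(256) for b in data)

seed = 1234
s = scramble_seed(seed)
page = randomize(b"write data", s)
# Reading back: regenerate the same stream from the stored scrambled seed.
assert randomize(page, s) == b"write data"
```

Storing the scrambled seed alongside the randomized page (as the abstract describes) is what makes the read path self-contained.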
20140115235 | CACHE CONTROL APPARATUS AND CACHE CONTROL METHOD - A cache control apparatus comprises a primary cache part, a secondary cache part for caching data destaged from the primary cache part, and a controller connected to the primary cache part and to the secondary cache part. The secondary cache part has a first storage part and a second storage part having a lifetime longer than that of the first storage part. The controller determines whether the data destaged from the primary cache part is to be stored in the first storage part or the second storage part in the secondary cache part, based on a use state indicating whether or not the data has been updated, and stores the data in the first storage part or the second storage part determined. | 2014-04-24 |
20140115236 | SERVER AND METHOD FOR MANAGING REDUNDANT ARRAY OF INDEPENDENT DISK CARDS - In a method for managing redundant array of independent disk (RAID) cards, physical layer (PHY) chips of the RAID card are detected via a serial port. Information about a malfunctioning PHY chip and a standby PHY chip is read and stored in the firmware of a flash erasable programmable read-only memory (EPROM) on the RAID card. The address of the malfunctioning PHY chip is set to the address of the standby PHY chip, and a hard disk electronically connected to the malfunctioning PHY chip is connected to the standby PHY chip. A new serial attached small computer system interface (SAS) address is obtained by amending the original SAS address according to the number and address of the standby PHY chip, and a new firmware is created in the flash EPROM according to the new SAS address. | 2014-04-24 |
20140115237 | ENCODING PROGRAM DATA BASED ON DATA STORED IN MEMORY CELLS TO BE PROGRAMMED - A method of programming data in a nonvolatile memory device comprises receiving program data to be programmed in selected memory cells of the nonvolatile memory device, reading data from the selected memory cells, encoding the program data using at least one encoding scheme selected from among multiple encoding schemes according to a comparison of the program data and the read data, generating flag data including encoding information, and programming the encoded program data and the flag data in the selected memory cells. | 2014-04-24 |
20140115238 | STORAGE CONTROLLERS AND STORAGE CONTROL METHODS - According to various embodiments, a storage controller configured to control storage of data in a pre-determined area of a storage medium may be provided. The storage controller may include a memory configured to store a write pointer, a reclaim pointer, and a wrapped around pointer. The write pointer may indicate a location of the storage medium to write incoming data. The reclaim pointer may indicate a location of the storage medium to perform a space reclamation. The wrapped around pointer may indicate a location of the storage medium where writing is to continue if writing of data reaches an end of the pre-determined area. | 2014-04-24 |
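The three-pointer scheme above (write, reclaim, wrapped-around) can be sketched as a circular log over a fixed region: writing past the end of the pre-determined area continues at the wrapped-around pointer. The list-backed medium and pointer arithmetic below are modeling assumptions.

```python
# Minimal sketch of the write / reclaim / wrapped-around pointer scheme
# over a fixed storage region. Reclaim handling is omitted for brevity.
class CircularLog:
    def __init__(self, size):
        self.size = size
        self.medium = [None] * size
        self.write_ptr = 0    # where the next incoming data lands
        self.reclaim_ptr = 0  # where space reclamation proceeds
        self.wrap_ptr = 0     # where writing continues after the end

    def write(self, item):
        self.medium[self.write_ptr] = item
        self.write_ptr += 1
        if self.write_ptr == self.size:     # reached end of the region:
            self.write_ptr = self.wrap_ptr  # continue at the wrap point

log = CircularLog(3)
for item in ("a", "b", "c", "d"):
    log.write(item)
print(log.medium)     # ['d', 'b', 'c'] -- 'd' wrapped to the start
print(log.write_ptr)  # 1
```

In practice the reclaim pointer would advance ahead of the wrap point so that wrapped writes only land on already-reclaimed space.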
20140115239 | METHOD OF MANAGING DATA IN NONVOLATILE MEMORY DEVICE - A method of managing data in a nonvolatile memory device. The method includes providing a nonvolatile memory device having a hot region and a cold region. The hot region includes first through n-th blocks. Input pages having metadata are received from a host. The input pages are sequentially written to the first through n-th blocks. Valid pages are identified from the input pages written to the first block after the n-th block is written. The valid pages are written to the cold region. | 2014-04-24 |
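The hot-region rotation described above, where the first block's surviving valid pages are migrated to the cold region once the n-th block fills, can be sketched directly. Page validity is modeled with a simple flag here, which is an assumption about bookkeeping the abstract leaves unspecified.

```python
# Hypothetical sketch of hot-region rotation with valid-page migration
# to a cold region, per the abstract's first-in-first-flushed order.
def flush_oldest_block(hot_blocks, cold_region):
    """After the last hot block fills, move the first block's valid pages."""
    oldest = hot_blocks.pop(0)
    valid = [p for p in oldest if p["valid"]]
    cold_region.extend(valid)
    return valid

hot = [[{"id": 1, "valid": True}, {"id": 2, "valid": False}],
       [{"id": 3, "valid": True}]]
cold = []
flush_oldest_block(hot, cold)
print([p["id"] for p in cold])  # [1]
```

Pages invalidated by later metadata updates (like page 2 above) are simply dropped, which is what makes the hot region cheap to recycle.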
20140115240 | STORAGE DEVICES AND METHODS FOR CONTROLLING A STORAGE DEVICE - According to various embodiments, a storage device may be provided. The storage device may include: a first memory including a magnetic recording medium and configured to store user data; a second memory including a solid state drive recording medium and configured to store at least one of metadata or other frequently accessed data; and an interface configured to access the second memory using a pre-determined communication protocol. | 2014-04-24 |
20140115241 | BUFFER MANAGEMENT APPARATUS AND METHOD - A buffer management apparatus ( | 2014-04-24 |
20140115242 | SYSTEMS AND METHODS FOR HANDLING NON-VOLATILE MEMORY OPERATING AT A SUBSTANTIALLY FULL CAPACITY - This can relate to handling a non-volatile memory (“NVM”) operating at substantially full capacity. The non-volatile memory can report its physical capacity to an NVM driver. The NVM driver can scale up the physical capacity a particular number of times to generate a “scaled physical capacity,” which is then reported to the file system. Because the scaled physical capacity is greater than the NVM's actual physical capacity, the file system allocates a logical space to the NVM that is substantially greater than the NVM's capacity. This can cause less crowding of the logical block addresses within the logical space, thus making it easier for the file system to operate and improving system performance. A commitment budget can also be reported to the file system that corresponds to the NVM's physical capacity, and which can define the amount of data the file system can commit for storage in the NVM. | 2014-04-24 |
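The two numbers the driver reports in the abstract above are simple to derive: the logical space is the physical capacity times a scale factor, while the commitment budget stays at the true physical capacity. The scale factor and unit sizes below are hypothetical values, not ones given in the abstract.

```python
# Arithmetic sketch of the scaled-capacity report: the file system sees
# a larger logical space but may only commit up to the real capacity.
def report_to_file_system(physical_capacity, scale_factor):
    scaled_capacity = physical_capacity * scale_factor  # logical space size
    commitment_budget = physical_capacity               # real data limit
    return scaled_capacity, commitment_budget

scaled, budget = report_to_file_system(physical_capacity=64, scale_factor=4)
print(scaled, budget)  # 256 64
```

Spreading logical block addresses over a 4x larger space reduces allocator crowding without letting the file system over-commit real storage.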
20140115243 | RESISTIVE RANDOM-ACCESS MEMORY DEVICES - A resistive random-access memory device includes a memory array, a read circuit, a write-back logic circuit and a write-back circuit. The read circuit reads the data stored in a selected memory cell and accordingly generates a first control signal. The write-back logic circuit generates a write-back control signal according to the first control signal and a second control signal. The write-back circuit performs a write-back operation on the selected memory cell according to the write-back control signal and a write-back voltage, so as to change a resistance state of the selected memory cell from a low resistance state to a high resistance state, and generates the second control signal according to the resistance state of the selected memory cell. | 2014-04-24 |
20140115244 | APPARATUS, SYSTEM AND METHOD FOR PROVIDING A PERSISTENT LEVEL-TWO CACHE - Aspects of the present disclosure disclose systems and methods for providing a level-two persistent cache. In various aspects, a solid-state drive is employed as a level-two cache to expand the capacity of existing caches. In particular, any data that is scheduled to be evicted or otherwise removed from a level-one cache is stored in the level-two cache with corresponding metadata in a manner that is quickly retrievable. | 2014-04-24 |
20140115245 | APPARATUS SYSTEM AND METHOD FOR PROVIDING RAW DATA IN A LEVEL-TWO CACHE - Aspects of the present disclosure disclose systems and methods for managing a level-two persistent cache. In various aspects, a solid-state drive is employed as a level-two cache to expand the capacity of existing caches. Any data stored in the level-two cache may be stored in a particular version or format of data known as “raw” data, in contrast to storing the data in a “cooked” version, as is typically stored in a level-one cache. | 2014-04-24 |
20140115246 | APPARATUS, SYSTEM AND METHOD FOR MANAGING EMPTY BLOCKS IN A CACHE - Aspects of the present disclosure disclose systems and methods for recognizing multiple and distinct references within a cache that identify or otherwise provide access to empty blocks of data. Multiple references identifying empty blocks of data are associated with a single block of empty data permanently stored in the cache. Subsequently, each time an empty block of data is added to the cache, a reference corresponding to the empty block is mapped to a generic empty block of data stored in the cache. When a reference is removed or deleted from the cache, only the reference is deleted; the single generic block of empty data continues to reside in the cache. | 2014-04-24 |
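The many-references-to-one-empty-block idea above maps naturally onto a shared sentinel object: every empty-block reference points at the single resident generic block, and deleting a reference never touches the block itself. The reference table and sentinel below are illustrative assumptions.

```python
# Sketch of deduplicating empty blocks in a cache: all empty-block
# references share one permanently resident generic empty block.
EMPTY_BLOCK = bytes(4096)  # single generic empty block kept in cache

class EmptyAwareCache:
    def __init__(self):
        self.refs = {}  # block reference -> cached data (or EMPTY_BLOCK)

    def add_empty(self, ref):
        self.refs[ref] = EMPTY_BLOCK  # map onto the shared empty block

    def remove(self, ref):
        del self.refs[ref]  # only the reference goes; EMPTY_BLOCK stays

cache = EmptyAwareCache()
cache.add_empty("blk-7")
cache.add_empty("blk-9")
print(cache.refs["blk-7"] is cache.refs["blk-9"])  # True -- one shared block
cache.remove("blk-7")
print(len(EMPTY_BLOCK))  # 4096 -- the generic block still exists
```

However many empty blocks are cached, the storage cost stays at one block plus the reference entries.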
20140115247 | INFORMATION RECORDING DEVICE AND INFORMATION RECORDING METHOD - An information recording device includes a recording medium in which renewal data, which is a target of a data refresh operation, is recorded, a reading module that reads the renewal data recorded in the recording medium, a renewal module that performs updating of a value indicating a state of the data refresh operation, a generation module that generates parity data based on the value and the read renewal data, and a recording module that records the renewal data after recording the generated parity data. | 2014-04-24 |
20140115248 | DEVICE, SYSTEM, AND METHOD OF MEMORY ALLOCATION - Device, system, and method of memory allocation. For example, an apparatus includes: a Dual In-line Memory Module (DIMM) including a plurality of Dynamic Random Access Memory (DRAM) units to store data, wherein each DRAM unit includes a plurality of banks and each bank is divided into a plurality of sub-banks; and a memory management unit to allocate a set of interleaved sub-banks of said DIMM to a memory page of an Operating System, wherein a combined memory size of the set of interleaved sub-banks is equal to a size of the memory page of the Operating System. | 2014-04-24 |
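The sizing constraint in the abstract above, that the combined size of the interleaved sub-banks equals the OS page size, is a one-line calculation. The 4 KiB page and 512-byte sub-bank figures below are hypothetical, chosen only to illustrate the divisibility requirement.

```python
# Arithmetic sketch: how many interleaved sub-banks cover one OS page,
# given that their combined size must equal the page size exactly.
def subbanks_per_page(page_size, subbank_size):
    assert page_size % subbank_size == 0, "page must divide into sub-banks"
    return page_size // subbank_size

# e.g. a 4 KiB OS page mapped over 8 interleaved 512-byte sub-banks
print(subbanks_per_page(4096, 512))  # 8
```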
20140115249 | Parallel Execution Mechanism and Operating Method Thereof - A thread priority control mechanism is provided which uses the completion event of the preceding transaction to raise the priority of the next transaction in the order of execution when the transaction status has been changed from speculative to non-speculative. In one aspect of the present invention, a thread-level speculation mechanism is provided which has content-addressable memory, an address register and a comparator for recording transaction footprints, and a control logic circuit for supporting memory synchronization instructions. This supports hardware transaction memory in detecting transaction conflicts. This thread-level speculation mechanism includes a priority up bit for recording an attribute operand in a memory synchronization instruction, a means for generating a priority up event when a thread wake-up event has occurred and the priority up bit is 1, and a means for preventing the CAM from storing the load/store address when the instruction is a non-transaction instruction. | 2014-04-24 |
20140115250 | PARALLEL ACCESS VIRTUAL TAPE LIBRARY AND DRIVES - A system and method described herein allows a virtual tape library (VTL) to perform multiple simultaneous or parallel read/write or access sessions with disk drives or other storage media, particularly when subject to a sequential SCSI-compliant layer or traditional limitations of VTLs. In one embodiment, a virtualizing or transaction layer can establish multiple sessions with one or more clients to concurrently satisfy the read/write requests of those clients for physical storage resources. A table or other data structure tracks or maps the sessions associated with each client and the location of data on the physical storage devices. | 2014-04-24 |
20140115251 | Reducing Memory Overhead of Highly Available, Distributed, In-Memory Key-Value Caches - Maintaining high availability of objects for both read and write transactions. Secondary copies of cached objects are created and maintained on disks of a secondary caching node and in remote data storage. In response to an update request, the secondary copies of cached objects are updated. Secondary cached objects are synchronously invalidated in response to the update request, and the update is asynchronously propagated to a secondary caching node. | 2014-04-24 |
20140115252 | BLOCK STORAGE-BASED DATA PROCESSING METHODS, APPARATUS, AND SYSTEMS - The present disclosure relates to the field of information technology, and in particular, to a block storage-based data processing method, apparatus, and system. The block storage-based data processing method provided in embodiments of the present disclosure is applied in a system including at least two storage nodes, each storage node including a CPU, a cache medium, and a non-volatile storage medium, and the cache medium in all the storage nodes forming a cache pool. According to the method, after receiving a data operation request sent by a client, a service processing node sends the data operation request to a corresponding storage node in the system according to a logical address carried in the data operation request, so that the data operation request is processed in the cache medium of the storage node under control of the CPU of the storage node. | 2014-04-24 |
20140115253 | GLOBAL DATA ESTABLISHMENT FOR STORAGE ARRAYS CONTROLLED BY A PLURALITY OF NODES - A plurality of data arrays are coupled to a plurality of nodes via a plurality of adapters. The plurality of adapters discover the plurality of data arrays during startup, and information about the plurality of data arrays are communicated to corresponding local nodes of the plurality of nodes, wherein the local nodes broadcast the information to other nodes of plurality of nodes. A director node of the plurality of nodes determines which data arrays of the plurality of data arrays are a current set of global metadata arrays, based on the broadcasted information. | 2014-04-24 |
20140115254 | ACCESS SCHEDULER - Embodiments of the present invention provide a system for scheduling memory accesses for one or more memory devices. This system includes a set of queues configured to store memory access requests, wherein each queue is associated with at least one memory bank or memory device in the one or more memory devices. The system also includes a set of hierarchical levels configured to select memory access requests from the set of queues to send to the one or more memory devices, wherein each level in the set of hierarchical levels is configured to perform a different selection operation. | 2014-04-24 |
20140115255 | STORAGE SYSTEM AND METHOD FOR CONTROLLING STORAGE SYSTEM - It is provided a storage system, comprising a storage device for storing data and at least one controller for controlling reading/writing of the data from/to the storage device. The at least one controller each includes a first cache memory for temporarily storing the data read from the storage device by file access, and a second cache memory for temporarily storing the data to be read/written from/to the storage device by block access. The processor reads the requested data from the storage device in the case where data requested by a file read request received from a host computer is not stored in the first cache memory, stores the data read from the storage device in the first cache memory without storing the data in the second cache memory, and transfers the data stored in the first cache memory to the host computer that has issued the file read request. | 2014-04-24 |
20140115256 | SYSTEM AND METHOD FOR EXCLUSIVE READ CACHING IN A VIRTUALIZED COMPUTING ENVIRONMENT - A technique for efficient cache management demotes a unit of data from a higher cache level to a lower cache level in a cache hierarchy when the higher level cache evicts the unit of data. In a virtualization computing environment, eviction of the unit of data may be inferred by observing privileged memory and disk operations performed by a guest operating system and trapped by virtualization software for execution. When the unit of data is inferred to be evicted, the unit of data is demoted by transferring the unit of data into the lower cache level. This technique enables exclusive caching without direct involvement or modification of the guest operating system. In alternative embodiments, a pseudo-driver installed within the guest operating system explicitly tracks memory operations and transmits page eviction information to the lower level cache, which is able to cache evicted pages while maintaining cache exclusivity. | 2014-04-24 |
20140115257 | PREFETCHING USING BRANCH INFORMATION FROM AN INSTRUCTION CACHE - A processor stores branch information at a “sparse” cache and a “dense” cache. The sparse cache stores the target addresses for up to a specified number of branch instructions in a given cache entry associated with a cache line address, while branch information for additional branch instructions at the cache entry is stored at the dense cache. Branch information at the dense cache persists after eviction of the corresponding cache line until it is replaced by branch information for a different cache entry. Accordingly, in response to the instructions for a given cache line address being requested for retrieval from memory, a prefetcher determines whether the dense cache stores branch information for the cache line address. If so, the prefetcher prefetches the instructions identified by the target addresses of the branch information in the dense cache concurrently with transferring the instructions associated with the cache line address. | 2014-04-24 |
20140115258 | SYSTEM AND METHOD FOR MANAGING A DEDUPLICATION TABLE - Implementations described and claimed herein provide systems and methods for allocating and managing resources for a deduplication table. In one implementation, an upper limit to an amount of memory allocated to a deduplication table is established. The deduplication table has one or more checksum entries, and each checksum entry associates a checksum with unique data. A new checksum entry corresponding to new unique data is prevented from being added to the deduplication table where adding the new checksum entry would cause the deduplication table to exceed a size limit. The new unique data has a checksum that is different from the checksums in the one or more checksum entries in the deduplication table. | 2014-04-24 |
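The cap described above, where a new checksum entry is refused rather than letting the table outgrow its memory allocation, can be sketched with an entry-count limit. The use of `zlib.crc32` as the checksum and a count-based cap in place of a byte-accurate memory limit are both simplifying assumptions.

```python
# Sketch of a size-capped deduplication table: duplicates are always
# accepted (no new entry needed), but new unique data is refused once
# the table has reached its limit.
import zlib

class DedupTable:
    def __init__(self, max_entries):
        self.max_entries = max_entries
        self.table = {}  # checksum -> unique data

    def try_add(self, data):
        checksum = zlib.crc32(data)
        if checksum in self.table:
            return True               # duplicate: already tracked
        if len(self.table) >= self.max_entries:
            return False              # new entry refused: table at its cap
        self.table[checksum] = data
        return True

t = DedupTable(max_entries=1)
print(t.try_add(b"alpha"))  # True  -- fits
print(t.try_add(b"alpha"))  # True  -- duplicate, no new entry needed
print(t.try_add(b"beta"))   # False -- would exceed the size limit
```

Data refused by the table is simply stored without deduplication, trading some space savings for a bounded memory footprint.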
20140115259 | Methods And Apparatuses For Controlling Thread Contention - An apparatus comprises a plurality of cores and a controller coupled to the cores. The controller is to lower an operating point of a first core if a first number based on processor clock cycles per instruction (CPI) associated with a second core is higher than a first threshold. The controller is operable to increase the operating point of the first core if the first number is lower than a second threshold. | 2014-04-24 |
20140115260 | SYSTEM AND METHOD FOR PRIORITIZING DATA IN A CACHE - Implementations described and claimed herein provide a system and methods for prioritizing data in a cache. In one implementation, a priority level, such as critical, high, and normal, is assigned to cached data. The priority level dictates how long the data is cached and consequently, the order in which the data is evicted from the cache memory. Data assigned a priority level of critical will be resident in cache memory unless heavy memory pressure causes the system to reclaim memory and all data assigned a priority state of high or normal has been evicted. High priority data is cached longer than normal priority data, with normal priority data being evicted first. Accordingly, important data assigned a priority level of critical, such as a deduplication table, is kept resident in cache memory at the expense of other data, regardless of the frequency or recency of use of the data. | 2014-04-24 |
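The eviction order the abstract above implies, normal first, then high, with critical surviving unless everything else is gone, reduces to picking the victim with the lowest priority tier. The list model of the cache and entry names below are illustrative assumptions.

```python
# Sketch of priority-ordered eviction: 'normal' entries are evicted
# before 'high', and 'critical' entries outlive both tiers.
EVICTION_ORDER = {"normal": 0, "high": 1, "critical": 2}

def next_victim(entries):
    """Pick the eviction victim from the lowest priority tier present."""
    return min(entries, key=lambda e: EVICTION_ORDER[e["priority"]])

cache = [{"name": "dedup-table", "priority": "critical"},
         {"name": "index", "priority": "high"},
         {"name": "payload", "priority": "normal"}]
victim = next_victim(cache)
print(victim["name"])  # payload
```

Unlike LRU or LFU, this keeps pinned-important data (the abstract's deduplication-table example) resident regardless of how recently or frequently it was used.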
20140115261 | APPARATUS, SYSTEM AND METHOD FOR MANAGING A LEVEL-TWO CACHE OF A STORAGE APPLIANCE - Aspects of the present disclosure disclose systems and methods for managing a level-two persistent cache. In various aspects, a solid-state drive is employed as a level-two cache to expand the capacity of existing caches. In particular, any data that is scheduled to be evicted or otherwise removed from a level-one cache is stored in the level-two cache with corresponding metadata in a manner that is quickly retrievable. The data contained within the level-two cache is managed using a cache list that maintains data chunk entries added to the level-two cache based on the temporal access of each data chunk. | 2014-04-24 |
20140115262 | CACHE CONTROL DEVICE AND CACHE CONTROL METHOD - A cache control device includes an area determination unit that determines an area of a cache memory which is allocated to each instruction flow on the basis of an allocation ratio of an execution time per unit time, which is allocated to each of a plurality of the instruction flows by a CPU. The area determination unit specifies the area allocated to the specified instruction flow in response to an access request from a memory access unit, and accesses the specified area in the cache memory. | 2014-04-24 |
20140115263 | CHILD STATE PRE-FETCH IN NFAs - Disclosed is a method and apparatus for pre-fetching child states in an NFA cell array. A pre-fetch depth value is determined for each transition in an NFA graph. The pre-fetch depth value is accessed for transition from an active state in the NFA graph. The child states of the active state are pre-fetched to the depth of the pre-fetch depth value recursively. A loader loads the pre-fetched states into the NFA cell array. | 2014-04-24 |
20140115264 | MEMORY DEVICE, PROCESSOR, AND CACHE MEMORY CONTROL METHOD - A memory device includes a plurality of ways; a register configured to hold an access history of accessing the plurality of ways; and a way control unit configured to select one or more ways among the plurality of ways according to an access request and the access history, put the selected one or more ways in an operation state, and put one or more of the plurality of ways other than the selected one or more ways in a non-operation state. The way control unit dynamically changes a number of the one or more ways to be selected, according to the access request. | 2014-04-24 |
20140115265 | OPTIMUM CACHE ACCESS SCHEME FOR MULTI ENDPOINT ATOMIC ACCESS IN A MULTICORE SYSTEM - The MSMC (Multicore Shared Memory Controller) described is a module designed to manage traffic between multiple processor cores, other mastering peripherals or DMA, and the EMIF (External Memory InterFace) in a multicore SoC. The invention unifies all transaction sizes belonging to a slave prior to arbitrating the transactions, in order to reduce the complexity of the arbitration process and to provide optimum bandwidth management among all masters. The two consecutive slots assigned per cache line access are always in the same direction for maximum access rate. | 2014-04-24 |
20140115266 | OPTIONAL ACKNOWLEDGEMENT FOR OUT-OF-ORDER COHERENCE TRANSACTION COMPLETION - To enable efficient tracking of transactions, an acknowledgement expected signal is used to give the cache coherent interconnect a hint for whether a transaction requires coherent ownership tracking. This signal informs the cache coherent interconnect to expect an ownership transfer acknowledgement signal from the initiating master upon read/write transfer completion. The cache coherent interconnect can therefore continue tracking the transaction at its point of coherency until it receives the acknowledgement from the initiating master only when necessary. | 2014-04-24 |
20140115267 | Hazard Detection and Elimination for Coherent Endpoint Allowing Out-of-Order Execution - A coherence maintenance address queue tracks each memory access from receipt until the memory reports the access complete. The address of each new access is compared against the address of all entries in the queue. This check is made when the access is ready to transmit to the memory. If there is no address match, then the current access does not conflict with any pending access. If there is an address match, the current access is stalled. The multi-core shared memory controller would then typically proceed to another access waiting a slot to the endpoint memory. Stored addresses in the coherence maintenance address queue are retired when the endpoint memory reports completion of the operation. At this point the access is no longer a hazard to following operations. | 2014-04-24 |
20140115268 | HIGH PERFORMANCE INTERCONNECT COHERENCE PROTOCOL - A coherence protocol message is sent corresponding to a particular cache line. A potential conflict involving the particular cache line is identified and a forward request is sent to a home agent to identify the potential conflict. A forward response can be received in response to the forward request from the home agent and a response to the conflict can be determined. | 2014-04-24 |
20140115269 | Multi Domain Bridge with Auto Snoop Response - An asynchronous dual domain bridge is implemented between the cache coherent master and the coherent system interconnect. The bridge has 2 halves, one in each clock/powerdown domain-master and interconnect. The powerdown mechanism is isolated to just the asynchronous bridge implemented between the master and the interconnect with a basic request/acknowledge handshake between the master subsystem and the asynchronous bridge. | 2014-04-24 |
20140115270 | MULTI PROCESSOR BRIDGE WITH MIXED ENDIAN MODE SUPPORT - An asynchronous dual domain bridge is implemented between the cache coherent master and the coherent system interconnect. The bridge has 2 halves, one in each clock/powerdown domain—master and interconnect. The asynchronous bridge is aware of the endian view used by each individual processor within the attached subsystem, and can perform the appropriate endian conversion on each processor's transactions to adapt the transaction to/from the endian view used by the interconnect. | 2014-04-24 |
20140115271 | COHERENCE CONTROLLER SLOT ARCHITECTURE ALLOWING ZERO LATENCY WRITE COMMIT - This invention speeds operation for coherence writes to shared memory. This invention immediately commits to the memory endpoint coherence write data. Thus this data will be available earlier than if the memory controller stalled this write pending snoop responses. This invention computes write enable strobes for the coherence write data based upon the cache dirty tags. This invention initiates a snoop cycle based upon the address of the coherence write. The stored write enable strobes enable determination of which data to write to the endpoint memory upon a cached and dirty snoop response. | 2014-04-24 |
20140115272 | Deadlock-Avoiding Coherent System On Chip Interconnect - This invention mitigates deadlocking issues by adding a separate non-blocking pipeline for snoop returns. This separate pipeline would not be blocked behind coherent requests. This invention also repartitions the master initiated traffic to move cache evictions (both with and without data) and non-coherent writes to the new non-blocking channel. This non-blocking pipeline removes the need for any coherent requests to complete before the snoop request can reach the memory controller. Repartitioning cache initiated evictions to the non-blocking pipeline prevents deadlock when snoop and eviction occur concurrently. The non-blocking channel of this invention combines snoop responses from memory controller initiated requests and master initiated evictions/non-coherent writes. | 2014-04-24 |
20140115273 | DISTRIBUTED DATA RETURN BUFFER FOR COHERENCE SYSTEM WITH SPECULATIVE ADDRESS SUPPORT - The MSMC (Multicore Shared Memory Controller) described is a module designed to manage traffic between multiple processor cores, other mastering peripherals or DMA, and the EMIF (External Memory InterFace) in a multicore SoC. Each processor has an associated return buffer allowing out of order responses of memory read data and cache snoop responses to ensure maximum bandwidth at the endpoints, and all endpoints receive status messages to simplify the return queue. | 2014-04-24 |
20140115274 | EXTENDING A CACHE COHERENCY SNOOP BROADCAST PROTOCOL WITH DIRECTORY INFORMATION - In one embodiment, a method includes receiving a read request from a first caching agent, determining whether a directory entry associated with the memory location indicates that the information is not present in a remote caching agent, and if so, transmitting the information from the memory location to the first caching agent before snoop processing with respect to the read request is completed. Other embodiments are described and claimed. | 2014-04-24 |
20140115275 | SATISFYING MEMORY ORDERING REQUIREMENTS BETWEEN PARTIAL READS AND NON-SNOOP ACCESSES - A method and apparatus for preserving memory ordering in a cache coherent link based interconnect in light of partial and non-coherent memory accesses is herein described. In one embodiment, partial memory accesses, such as a partial read, is implemented utilizing a Read Invalidate and/or Snoop Invalidate message. When a peer node receives a Snoop Invalidate message referencing data from a requesting node, the peer node is to invalidate a cache line associated with the data and is not to directly forward the data to the requesting node. In one embodiment, when the peer node holds the referenced cache line in a Modified coherency state, in response to receiving the Snoop Invalidate message, the peer node is to writeback the data to a home node associated with the data. | 2014-04-24 |
20140115276 | INTRAPROCEDURAL PRIVATIZATION FOR SHARED ARRAY REFERENCES WITHIN PARTITIONED GLOBAL ADDRESS SPACE (PGAS) LANGUAGES - Partitioned global address space (PGAS) programming language source code is retrieved by an executed PGAS compiler. At least one shared memory array access indexed by an affine expression that includes a distinct thread identifier that is constant and different for each of a group of program execution threads targeted to execute the PGAS source code is identified within the PGAS source code. It is determined whether the at least one shared memory array access results in a local shared memory access by all of the group of program execution threads for all references to the at least one shared memory array access during execution of a compiled executable of the PGAS source code. A direct memory access executable code is generated for each shared memory array access determined to result in the local shared memory access by all of the group of program execution threads. | 2014-04-24 |
20140115277 | METHOD AND APPARATUS FOR OFFLOADING STORAGE WORKLOAD - Exemplary embodiments provide a technique to offload storage workload. In one aspect, a computer comprises: a memory; and a controller operable to manage a relationship among port information of an initiator port, information of a logical volume storing data from the initiator port, and port information of a target port to be used for storing data from the initiator port to the logical volume, and to cause another computer to process a storage function of a storage system including the logical volume and the target port by creating a virtual machine for executing the storage function and by configuring the relationship on said another computer, said another computer sending the data to the logical volume of the storage system after executing the storage function. In specific embodiments, by executing the storage function on said another computer, the workload of executing the storage function on the storage system is eliminated. | 2014-04-24 |
20140115278 | MEMORY ARCHITECTURE - According to one example embodiment, an arbiter is disclosed to mediate memory access requests from a plurality of processing elements. If two or more processing elements try to access data within the same word in a single memory bank, the arbiter permits some or all of the processing elements to access the word. If two or more processing elements try to access different data words in the same memory bank, the lowest-ordered processing element is granted access and the others are stalled. | 2014-04-24 |
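The arbitration rule in this abstract can be sketched as follows. The bank/word request encoding and the choice to grant only the single lowest-numbered element on a conflict are illustrative assumptions about one cycle of arbitration.

```python
from collections import defaultdict

def arbitrate(requests):
    """Decide which memory requests proceed in one cycle.

    requests: list of (pe_id, bank, word) tuples.
    Returns (granted, stalled) as sorted lists of pe_ids.

    Requests to the same word in a bank are all granted; requests to
    different words in one bank grant only the lowest-ordered
    processing element and stall the rest.
    """
    by_bank = defaultdict(list)
    for pe, bank, word in requests:
        by_bank[bank].append((pe, word))
    granted, stalled = [], []
    for reqs in by_bank.values():
        words = {w for _, w in reqs}
        if len(words) == 1:          # same word: shared access for all
            granted.extend(pe for pe, _ in reqs)
        else:                        # conflict: lowest-ordered PE wins
            winner = min(pe for pe, _ in reqs)
            for pe, _ in reqs:
                (granted if pe == winner else stalled).append(pe)
    return sorted(granted), sorted(stalled)
```

Here PEs 0 and 1 sharing a word proceed together, while PEs 2 and 3 conflicting in another bank resolve in favor of PE 2.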
20140115279 | Multi-Master Cache Coherent Speculation Aware Memory Controller with Advanced Arbitration, Virtualization and EDC - This invention is an integrated memory controller/interconnect that provides very high bandwidth access to both on-chip memory and externally connected off-chip memory. This invention includes an arbitration scheme for all memory endpoints, including priority, fairness, and starvation bounds; virtualization; and error detection and correction (EDC) hardware to protect the on-chip SRAM banks, including automated scrubbing. | 2014-04-24 |
20140115280 | FLEXIBLE CONTROL MECHANISM FOR STORE GATHERING IN A WRITE BUFFER - A store gathering policy is enabled or disabled at a data processing device. A store gathering policy to be implemented by a store buffer can be selected from a plurality of store gathering polices. For example, the plurality of store gathering policies can be constrained or unconstrained. A store gathering policy can be enabled by a user programmable storage location. A specific store gathering policy can be specified by a user programmable storage location. A store gathering policy can be determined based upon an attribute of a store request, such as based upon a destination address. | 2014-04-24 |
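A minimal sketch of a write buffer whose store-gathering policy can be enabled or disabled, as the abstract describes. The gather granule size and entry layout are invented for illustration; the patent leaves these to the implementation.

```python
class StoreBuffer:
    """Toy write buffer with a toggleable store-gathering policy."""

    def __init__(self, gathering=True, line=8):
        self.gathering = gathering   # user-programmable enable bit
        self.line = line             # gather granule in bytes (assumed)
        self.entries = []            # list of (base_addr, {offset: byte})

    def store(self, addr, byte):
        base, off = addr - addr % self.line, addr % self.line
        if self.gathering:
            for b, data in self.entries:
                if b == base:        # gather into an existing entry
                    data[off] = byte
                    return
        # gathering disabled, or no matching entry: allocate a new one
        self.entries.append((base, {off: byte}))
```

With gathering enabled, two stores to adjacent bytes coalesce into one buffer entry (and thus one downstream write); with it disabled, each store occupies its own entry.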
20140115281 | MEMORY SYSTEM CONNECTOR - According to one embodiment, a memory system includes a circuit card and a separable area array connector on the circuit card. The system also includes a memory device positioned on the circuit card, wherein the memory device is configured to communicate with a main processor of a computer system via the area array connector. | 2014-04-24 |
20140115282 | WRITING DATA FROM HADOOP TO OFF GRID STORAGE - In one embodiment, data generated via a map process and/or reduce process may be obtained. A request message may be sent to a server, where the request message indicates a request for a location in storage at which the data is to be stored. Upon receiving the location from the server, the data may be copied to the location in the storage. A commit message may be sent to the server, where the commit message indicates that the data has been copied to the location. In addition, the data may be deleted. | 2014-04-24 |
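The request/copy/commit/delete handshake described here can be sketched as follows. The `LocationServer` interface is a hypothetical stand-in for the server in the abstract, not the patented API, and local files stand in for map/reduce output and off-grid storage.

```python
import os
import shutil

class LocationServer:
    """Hands out storage locations and records commits (illustrative)."""

    def __init__(self, root):
        self.root = root
        self.committed = []
        self._next = 0

    def request_location(self):
        path = os.path.join(self.root, "part-%05d" % self._next)
        self._next += 1
        return path

    def commit(self, path):
        self.committed.append(path)

def offload(server, local_file):
    dest = server.request_location()   # 1. ask where the data should go
    shutil.copy(local_file, dest)      # 2. copy the data to that location
    server.commit(dest)                # 3. tell the server the copy is done
    os.remove(local_file)              # 4. delete the local copy
    return dest
```

The commit message is what lets the server distinguish fully-copied data from in-flight transfers before the local copy is deleted.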
20140115283 | BLOCK MEMORY ENGINE WITH MEMORY CORRUPTION DETECTION - Techniques for handling version information using a copy engine. In one embodiment, an apparatus comprises a copy engine configured to perform one or more operations associated with a block memory operation in response to a command. Examples of block memory operations may include copy, clear, move, and/or compress operations. In one embodiment, the copy engine is configured to handle version information associated with the block memory operation based on the command. The one or more operations may include operating on data in a cache and/or modifying entries in a memory. In one embodiment, the copy engine is configured to compare version information in the command with stored version information. The copy engine may overwrite or preserve version information based on the command. The copy engine may be a coprocessing element. The copy engine may be configured to maintain coherency with other copy engines and/or processing elements. | 2014-04-24 |
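The version comparison the copy engine performs might look roughly like this. The word-granular memory layout and the overwrite/preserve switch are illustrative assumptions about the command interface, not the hardware described in the filing.

```python
class VersionMismatch(Exception):
    """Raised when a word's stored version disagrees with the command."""

class CopyEngine:
    """Toy copy engine over mem: a dict of address -> (version, data)."""

    def block_copy(self, mem, src, dst, n, cmd_ver, overwrite_version=True):
        # Check every source word's version against the command's tag.
        for i in range(n):
            ver, _ = mem[src + i]
            if ver != cmd_ver:
                raise VersionMismatch(
                    "word %d: stored %r != command %r" % (src + i, ver, cmd_ver))
        # Copy data; either overwrite or preserve destination versions.
        for i in range(n):
            _, data = mem[src + i]
            if overwrite_version or (dst + i) not in mem:
                mem[dst + i] = (cmd_ver, data)
            else:
                old_ver, _ = mem[dst + i]
                mem[dst + i] = (old_ver, data)   # keep existing version tag
```

This mirrors the abstract's point that the engine may either overwrite or preserve version information depending on the command, and flags corruption when versions disagree.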
20140115284 | PROGRESS RECORDING METHOD AND RECOVERING METHOD FOR ENCODING OPERATION ON STORAGE DEVICE - A progress recording method and a corresponding recovering method adapted to an encoding operation performed on a storage area of a storage device are provided. The progress recording method includes the following steps. A variable set is initialized and stored. The encoding operation includes a plurality of sub-operations, and each of the sub-operations is corresponding to at least one flag variable in the variable set. The flag variables are used for recording execution progresses of the sub-operations. When each of the sub-operations is executed, the corresponding flag variable in the variable set is updated according to the execution progress of the sub-operation. | 2014-04-24 |
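The flag-variable progress recording and recovery flow can be sketched as below. The JSON flag file is an invented persistence format; the abstract specifies only that a variable set with one or more flag variables per sub-operation is initialized, stored, and updated as each sub-operation executes.

```python
import json
import os

def run_encoding(flag_path, sub_ops):
    """Run each sub-operation in order, persisting a per-step flag
    after it completes so an interrupted run can resume.

    sub_ops: list of (name, fn) pairs, executed in order.
    """
    flags = {name: False for name, _ in sub_ops}   # initialize variable set
    if os.path.exists(flag_path):
        with open(flag_path) as f:
            flags.update(json.load(f))             # recover prior progress
    for name, fn in sub_ops:
        if flags[name]:
            continue                               # sub-operation already done
        fn()
        flags[name] = True
        with open(flag_path, "w") as f:
            json.dump(flags, f)                    # record execution progress
```

On a recovery run, sub-operations whose flags were already persisted are skipped, so the encoding resumes where it left off.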
20140115285 | RECONFIGURING A SNAPSHOT OF A VIRTUAL MACHINE - Techniques for reconfiguring a snapshot of a virtual machine (VM) may be provided. The VM may be deployed on a hypervisor running on a computer. The techniques comprise provisioning a VM and installing and configuring an operating system and a base program. A snapshot of the virtual machine, the operating system and the base program may be taken, together with a metadata descriptor holding configuration data that defines the configuration of the virtual machine, the operating system and the base program. All of these may be stored in persistent storage. The content of the metadata descriptor may then be modified, and the virtual machine may be reverted back to the snapshot using the modified content of the metadata descriptor, such that the snapshot of the virtual machine, including the operating system and the base program, is reconfigured upon deployment. | 2014-04-24 |
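A toy sketch of the snapshot-plus-descriptor flow: the snapshot is captured once, and redeployments apply a modified descriptor without reinstalling the operating system or base program. The descriptor schema here is invented for illustration.

```python
import copy
import json

def take_snapshot(vm_state, metadata):
    """Persist VM state plus a metadata descriptor (deep-copied to
    model write-once persistent storage)."""
    return {"state": copy.deepcopy(vm_state),
            "metadata": json.loads(json.dumps(metadata))}

def redeploy(snapshot, overrides):
    """Revert to the snapshot using a modified descriptor, so the VM
    comes back reconfigured without touching the installed OS or
    base program."""
    meta = dict(snapshot["metadata"], **overrides)
    vm = copy.deepcopy(snapshot["state"])
    vm["config"] = meta          # apply the reconfigured descriptor
    return vm
```

The stored snapshot itself is never mutated, so the same snapshot can be redeployed repeatedly with different descriptor overrides.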
20140115286 | System and Method of Digital Content Manipulation - A data storage apparatus includes a primary storage device and a secondary storage device. The primary storage device includes a first non-volatile memory to store a content item. The secondary storage device includes a second non-volatile memory to store a command received from a first content appliance. The command indicates an operation to be performed with respect to the content item stored at the primary storage device. The secondary storage device is configured to send the command to a second content appliance for execution. | 2014-04-24 |
20140115287 | METHOD AND APPARATUS FOR PERFORMING VOLUME REPLICATION USING UNIFIED ARCHITECTURE - Method and apparatus for performing volume replication using a unified architecture are provided. Each volume has an exclusive volume log table (VLT) and an exclusive volume block update table (VBUT). The VLT is mainly used for recording the relationship between the two volumes of a mirroring pair, and the VBUT is used for tracking the state of each data block of the volume itself. Through combined operations on the VLT and the VBUT, various volume replication processes such as volume copying and volume mirroring can be enabled under a unified architecture. For each volume, different replication relationships with other volumes can be handled merely by administering its two exclusive tables. The method and apparatus provided by the present invention can simplify the architecture for synchronous replication and reduce the burden of administering tables, thereby making the operation of a storage system more efficient. | 2014-04-24 |
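A toy model of the two per-volume tables: the VLT records replication relationships with peer volumes, while the VBUT tracks each block's synchronization state. Field names and states are illustrative, not the patent's actual layouts.

```python
class Volume:
    """Volume with an exclusive VLT and an exclusive VBUT."""

    def __init__(self, name, nblocks):
        self.name = name
        self.vlt = []                    # [(peer_name, mode)] relationships
        self.vbut = ["clean"] * nblocks  # per-block state

    def add_mirror(self, peer):
        # Record the mirroring pair in both volumes' VLTs.
        self.vlt.append((peer.name, "mirror"))
        peer.vlt.append((self.name, "mirror"))
        # Every block must be copied before the pair is in sync.
        self.vbut = ["out-of-sync"] * len(self.vbut)

    def sync_block(self, i):
        self.vbut[i] = "clean"           # block i replicated to the peer

    def in_sync(self):
        return all(s == "clean" for s in self.vbut)
```

Because each volume administers only its own two tables, adding a second replication relationship is just another VLT entry plus VBUT state transitions, matching the abstract's "unified architecture" claim.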
20140115288 | CREATION OF LOGICAL UNITS VIA BORROWING OF ALTERNATIVE STORAGE AND SUBSEQUENT MOVEMENT OF THE LOGICAL UNITS TO DESIRED STORAGE - A computational device receives a request to create a logical unit. Associated with the request is a first type of storage pool in which creation of the logical unit is desired. In response to determining that adequate space is not available to create the logical unit in the first type of storage pool, a determination is made as to whether a first indicator is configured to allow borrowing of storage space from a second type of storage pool. In response to determining that the first indicator is configured to allow borrowing of storage space from the second type of storage pool, the logical unit is created in the second type of storage pool and a listener application is initiated. The listener application determines that free space that is adequate to store the logical unit has become available in the first type of storage pool. The logical unit is moved from the second type of storage pool to the first type of storage pool, in response to determining, via the listener application, that free space that is adequate to store the logical unit has become available in the first type of storage pool. | 2014-04-24 |
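The borrow-then-move flow with a listener can be sketched as follows. The pool accounting and listener callback interfaces are hypothetical stand-ins for the computational device and listener application in the abstract.

```python
class PoolManager:
    """Sketch of logical unit creation with borrowing.

    pools maps pool name -> free capacity (arbitrary units).
    """

    def __init__(self, pools, allow_borrow=True):
        self.pools = dict(pools)
        self.allow_borrow = allow_borrow   # the "first indicator"
        self.placement = {}   # lun -> pool it currently lives in
        self.pending = []     # (lun, size, desired_pool) awaiting move home

    def create_lun(self, lun, size, desired, alternate):
        if self.pools[desired] >= size:
            pool = desired
        elif self.allow_borrow and self.pools[alternate] >= size:
            pool = alternate                          # borrow storage
            self.pending.append((lun, size, desired)) # listener will watch
        else:
            raise RuntimeError("no space in either pool")
        self.pools[pool] -= size
        self.placement[lun] = pool

    def on_space_freed(self, pool, amount):
        """Listener callback: move borrowed LUNs home once space frees."""
        self.pools[pool] += amount
        for lun, size, desired in list(self.pending):
            if desired == pool and self.pools[pool] >= size:
                src = self.placement[lun]
                self.pools[src] += size      # release borrowed space
                self.pools[pool] -= size     # claim space in desired pool
                self.placement[lun] = desired
                self.pending.remove((lun, size, desired))
```

A LUN requested in a full "fast" pool lands in the "slow" pool first, then migrates automatically when the listener observes enough freed space.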