31st week of 2012 patent application highlights part 59 |
Patent application number | Title | Published |
20120198142 | SYSTEM AND APPARATUS FOR FLASH MEMORY DATA MANAGEMENT - The system and apparatus for managing flash memory data includes a host transmitting data, wherein when the data transmitted from the host have a first-time transmission trait and the address for the data indicates a temporary address, temporary data are retrieved from the temporary address to an external buffer. A writing command is then executed and the temporary data having a destination address are written to a flash memory buffer. When the flash memory buffer is not full, the buffer data are written into a temporary block of the flash memory. The writing of buffer data into the temporary block includes using an address changing command, or executing a writing command to rewrite the external buffer data to the flash memory buffer so that the data are written into the temporary block. | 2012-08-02 |
20120198143 | Memory Package Utilizing At Least Two Types of Memories - A memory package and methods for writing data to and reading data from the memory package are presented. The memory package includes a volatile memory and a high-density memory. Data is written to the memory package at a bandwidth and latency associated with the volatile memory. A directory map associates a volatile memory address with data in the high-density memory. A copy of the directory map is stored in the high-density memory. The methods allow writing to and reading from the memory package using a first memory read/write interface (e.g. DRAM interface, etc.), though data is stored in a device of a different memory type (e.g. FLASH, etc.). | 2012-08-02 |
20120198144 | DYNAMICALLY SETTING BURST LENGTH OF DOUBLE DATA RATE MEMORY DEVICE BY APPLYING SIGNAL TO AT LEAST ONE EXTERNAL PIN DURING A READ OR WRITE TRANSACTION - A microprocessor system having a microprocessor and a double data rate memory device having separate groups of external pins adapted to receive addressing, data, and control information and a memory controller adapted to set a burst type of the double data rate memory to interleaved or sequential by sending a signal through one of the external pins of the double data rate memory device, such that when a read command is sent by the controller, depending on the burst type set, the double data rate memory device returns interleaved or sequentially output data to the memory controller. | 2012-08-02 |
20120198145 | MEMORY ACCESS APPARATUS AND DISPLAY USING THE SAME - A memory access apparatus and a display using the same are provided. The memory access apparatus includes a dynamic memory, a plurality of clients and a memory management unit. The dynamic memory is used to store a plurality of memory data. The clients access the dynamic memory and each client has a priority. The memory management unit executes an access action of the clients for the dynamic memory respectively according to the priorities thereof. Besides, the memory management unit has at least one buffer area built therein. The buffer area is used to temporarily store a plurality of buffer data generated while the access action is performed. | 2012-08-02 |
20120198146 | SYSTEM AND METHOD FOR STORING DATA WITH HOST CONFIGURATION OF STORAGE MEDIA - Systems and methods for storing and retrieving data on a magnetic tape accessed by a tape drive having an associated tape drive processor in communication with a host computer having an associated host processor include writing data to at least one partition within a logical volume having an associated number of sections designated by the host computer from a predetermined number of sections associated with the magnetic tape, wherein each partition extends across one section. | 2012-08-02 |
20120198147 | COMPUTER SYSTEM, STORAGE SYSTEM AND METHOD FOR SAVING STORAGE AREA BY INTEGRATING SAME DATA - Provided is a storage system capable of saving actually used physical storage areas and of achieving a high speed in write processing. There is disclosed a computer system including a server and a storage system, in which physical storage areas of a disk drive are managed for each one or more physical blocks of predetermined sizes, and allocation of one or more physical blocks to a plurality of logical blocks of predetermined sizes is managed, and the storage system stores data written in a first logical block in a first physical block allocated to the first logical block and allocates the first physical block to a second logical block where the same data as the data stored in the first physical block is to be written. | 2012-08-02 |
20120198148 | ADAPTIVE PRESTAGING IN A STORAGE CONTROLLER - In one aspect of the present description, at least one of the value of a prestage trigger and the value of the prestage amount may be modified as a function of the drive speed of the storage drive from which the units of read data are prestaged into a cache memory. Thus, cache prestaging operations in accordance with another aspect of the present description may take into account storage devices of varying speeds and bandwidths for purposes of modifying a prestage trigger and the prestage amount. Still further, a cache prestaging operation in accordance with further aspects may decrease one or both of the prestage trigger and the prestage amount as a function of the drive speed in circumstances such as a cache miss which may have resulted from prestaged tracks being demoted before they are used. Conversely, a cache prestaging operation in accordance with another aspect may increase one or both of the prestage trigger and the prestage amount as a function of the drive speed in circumstances such as a cache miss which may have resulted from waiting for a stage to complete. In yet another aspect, the prestage trigger may not be limited by the prestage amount. Instead, the prestage trigger may be permitted to expand as conditions warrant, by prestaging additional tracks and thereby effectively increasing the potential range for the prestage trigger. Other features and aspects may be realized, depending upon the particular application. | 2012-08-02 |
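The adjustment policy this abstract describes can be sketched as follows; the class name, method names, step sizes, and bounds are illustrative assumptions, not taken from the application:

```python
# Hypothetical sketch of the adaptive prestage policy: decrease the trigger
# and amount when prestaged tracks are demoted unused, increase them when
# reads stall waiting for staging, scaled by drive speed.

class PrestageTuner:
    def __init__(self, trigger, amount, min_val=1, max_val=64):
        self.trigger = trigger   # remaining tracks that trip the next prestage
        self.amount = amount     # tracks fetched per prestage
        self.min_val = min_val
        self.max_val = max_val

    def _step(self, drive_speed_factor):
        # Faster drives adjust in larger steps (an assumption for this sketch).
        return max(1, int(drive_speed_factor))

    def on_miss_demoted_unused(self, drive_speed_factor):
        # Prestaged tracks were evicted before use: prestage less aggressively.
        step = self._step(drive_speed_factor)
        self.trigger = max(self.min_val, self.trigger - step)
        self.amount = max(self.min_val, self.amount - step)

    def on_miss_waiting_for_stage(self, drive_speed_factor):
        # Reads stalled waiting on staging: prestage more aggressively.
        step = self._step(drive_speed_factor)
        self.trigger = min(self.max_val, self.trigger + step)
        self.amount = min(self.max_val, self.amount + step)
```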
20120198149 | EFFICIENTLY SYNCHRONIZING WITH SEPARATED DISK CACHES - In a method of synchronizing with a separated disk cache, the separated cache is configured to transfer cache data to a staging area of a storage device. An atomic commit operation is utilized to instruct the storage device to atomically commit the cache data to a mapping scheme of the storage device. | 2012-08-02 |
20120198150 | ASSIGNING DEVICE ADAPTORS AND BACKGROUND TASKS TO USE TO COPY SOURCE EXTENTS TO TARGET EXTENTS IN A COPY RELATIONSHIP - Provided are a computer program product, system, and method for assigning device adaptors and background tasks to use to copy source extents to target extents in a copy relationship. A relation is provided of a plurality of source extents in source ranks to copy to a plurality of target extents in target ranks in the storage system. One target rank in the relation is used to determine an order in which the target ranks in the relation are selected to register for copying. For each selected target rank in the relation selected according to the determined order, an iteration of a registration operation is performed to register the selected target rank and a selected source rank copied to the selected target rank in the relation. The registration operation comprises indicating in a device adaptor assignment data structure a source device adaptor and target device adaptor to use to copy the selected source rank to the selected target rank and adding an entry to a priority queue for the relation for the selected target rank. The selected source rank is copied to the selected target rank using the source and target device adaptors indicated in the device adaptor assignment data structure for the selected target rank in response to processing the entry in the priority queue added to the priority queue for the selected target rank. | 2012-08-02 |
20120198151 | STORAGE APPARATUS AND DATA MANAGEMENT METHOD - Provided are a storage apparatus and data management method with which the usage ratio of each of the storage tiers is determined beforehand for each virtual volume and data can be managed by being migrated between storage tiers within a range of predetermined usage ratios. | 2012-08-02 |
20120198152 | SYSTEM, APPARATUS, AND METHOD SUPPORTING ASYMMETRICAL BLOCK-LEVEL REDUNDANT STORAGE - A block-level storage system and method support asymmetrical block-level redundant storage by automatically determining performance characteristics associated with at least one region of each of a number of block storage devices and creating a plurality of redundancy zones from regions of the block storage devices, where at least one of the redundancy zones is a hybrid zone including at least two regions having different but complementary performance characteristics selected from different block storage devices based on a predetermined performance level selected for the zone. Such “hybrid” zones can be used in the context of block-level tiered redundant storage, in which zones may be intentionally created for a predetermined tiered storage policy from regions on different types of block storage devices or regions on similar types of block storage devices but having different but complementary performance characteristics. | 2012-08-02 |
20120198153 | DATA STORAGE - A data storage system comprises a controller, a first lower performance storage medium and a second higher performance storage medium. The controller is connected to the storage mediums and is arranged to control Input/Output (IO) access to the storage mediums. In operation, the controller is arranged to store an image on the first storage medium, initiate a copy function from the first storage medium to the second storage medium, direct all IO access for the image to the second storage medium, and periodically age data from the second storage medium to the first storage medium. | 2012-08-02 |
20120198154 | METHOD OF AND SYSTEM FOR ENHANCED DATA STORAGE - A method of and system for enhanced storage allows more data to be backed up than would otherwise be possible. Instead of storing uncompressed base images and incremental images, differentials of non-current base images are compressed and stored. Furthermore, incremental images that are older than the current base image are removed. By only saving differential base images that are compressed, aside from the newest base image, and deleting older incremental images, a significant amount of space is saved. A removable drive is used as temporary storage in the process of generating a compressed differential base for previous base images. Additionally, a process ensures that previous base images are differentials of the most recent base image and not each other. | 2012-08-02 |
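The space-saving idea above — keep only the newest base image whole, and store older bases as compressed differentials against it — can be sketched minimally. This assumes equal-size images and uses XOR as the differencing step; the function names are hypothetical:

```python
# Illustrative sketch: older base images are stored only as compressed diffs
# against the newest (uncompressed) base image.
import zlib

def make_differential(newest_base: bytes, old_base: bytes) -> bytes:
    # XOR against the newest base: mostly-identical images XOR to long runs
    # of zero bytes, which compress extremely well. Assumes equal lengths.
    diff = bytes(a ^ b for a, b in zip(newest_base, old_base))
    return zlib.compress(diff)

def restore_base(newest_base: bytes, differential: bytes) -> bytes:
    # Decompress the stored diff and XOR it back against the newest base.
    diff = zlib.decompress(differential)
    return bytes(a ^ b for a, b in zip(newest_base, diff))
```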
20120198155 | PORTABLE DATA CARRIER HAVING ADDITIONAL FUNCTIONALITY - In a method in a portable data carrier for executing an additional functionality in the data carrier, the data carrier comprises a memory, and the additional functionality is called up by means of one access of a conventional read command, issued from outside the data carrier, to the memory of the data carrier. The additional functionality is further specified by a respective further access of at least one further conventional read command to the memory of the data carrier. | 2012-08-02 |
20120198156 | SELECTIVE CACHE ACCESS CONTROL APPARATUS AND METHOD THEREOF - A data processor is disclosed that definitively determines that an effective address being calculated and decoded will be associated with an address range that includes a memory local to a data processor unit, and will disable a cache access based upon a comparison between a portion of a base address and a corresponding portion of an effective address input operand. Access to the local memory can be accomplished through a first port of the local memory when it is definitively determined that the effective address will be associated with the address range. Access to the local memory cannot be accomplished through the first port of the local memory when it is not definitively determined that the effective address will be associated with the address range. | 2012-08-02 |
20120198157 | GUEST INSTRUCTION TO NATIVE INSTRUCTION RANGE BASED MAPPING USING A CONVERSION LOOK ASIDE BUFFER OF A PROCESSOR - A method for translating instructions for a processor. The method includes accessing a plurality of guest instructions that comprise multiple guest branch instructions, and assembling the plurality of guest instructions into a guest instruction block. The guest instruction block is converted into a corresponding native conversion block. The native conversion block is stored into a native cache. A mapping of the guest instruction block to corresponding native conversion block is stored in a conversion look aside buffer. Upon a subsequent request for a guest instruction, the conversion look aside buffer is indexed to determine whether a hit occurred, wherein the mapping indicates whether the guest instruction has a corresponding converted native instruction in the native cache. The converted native instruction is forwarded for execution in response to the hit. | 2012-08-02 |
20120198158 | Multi-Channel Cache Memory - A cache memory including: a plurality of parallel input ports configured to receive, in parallel, memory access requests wherein each parallel input port is operable to receive a memory access request for any one of a plurality of processing units; and a plurality of cache blocks wherein each cache block is configured to receive memory access requests from a unique one of the plurality of input ports such that there is a one-to-one mapping between the plurality of parallel input ports and the plurality of cache blocks and wherein each of the plurality of cache blocks is configured to serve a unique portion of an address space of the memory. | 2012-08-02 |
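The one-to-one port-to-block mapping above implies a fixed rule for deciding which cache block serves a given address. A minimal sketch, assuming cache lines are interleaved evenly across blocks (the block count and line size are illustrative):

```python
# Each cache block serves a unique, interleaved slice of the address space,
# so requests from different ports never contend for the same block's slice.

NUM_BLOCKS = 4
LINE_BITS = 6  # 64-byte cache lines (an assumption for this sketch)

def block_for_address(addr: int) -> int:
    # Drop the offset-within-line bits, then interleave lines across blocks.
    return (addr >> LINE_BITS) % NUM_BLOCKS
```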
20120198159 | INFORMATION PROCESSING DEVICE - An information processing device of the invention includes a measurement section which detects changes in the usage of a built-in memory and an external memory, and a control section which monitors the measurement result from the measurement section, changes the configuration of the built-in memory, transfers the data stored in the built-in memory and the external memory, and changes the external memory area and the built-in memory area used by the CPU and other bus master devices, wherein it is possible to detect changes in memory utilization efficiency that cannot be predicted by static analysis, and to maintain an optimal memory configuration. | 2012-08-02 |
20120198160 | Efficient Cache Allocation by Optimizing Size and Order of Allocate Commands Based on Bytes Required by CPU - This invention is a data processing system having a multi-level cache system. The multi-level cache system includes at least one first level cache and a second level cache. Upon a cache miss in both the at least one first level cache and the second level cache, the data processing system evicts and allocates a cache line within the second level cache. The data processing system determines from the miss address whether the request falls within the low half or the high half of the allocated cache line. The data processing system first requests from external memory the data of the missed half of the cache line. Upon receipt, the data is supplied to the at least one first level cache and the CPU. The data processing system then requests from external memory the data for the other half of the second level cache line. | 2012-08-02 |
20120198161 | NON-BLOCKING, PIPELINED WRITE ALLOCATES WITH ALLOCATE DATA MERGING IN A MULTI-LEVEL CACHE SYSTEM - This invention handles write request cache misses. The cache controller stores write data, sends a read request to external memory for a corresponding cache line, merges the write data with data returned from the external memory and stores merged data in the cache. The cache controller includes buffers with plural entries storing the write address, the write data, the position of the write data within a cache line and unique identification number. This stored data enables the cache controller to proceed to servicing other access requests while waiting for response from the external memory. | 2012-08-02 |
20120198162 | Hazard Prevention for Data Conflicts Between Level One Data Cache Line Allocates and Snoop Writes - A comparator compares the address of DMA writes in the final entry of the FIFO stack to all pending read addresses in a monitor memory. If there is no match, then the DMA access is permitted to proceed. If the DMA write is to a cache line with a pending read, the DMA write access is stalled together with any DMA accesses behind the DMA write in the FIFO stack. DMA read accesses are not compared but may stall behind a stalled DMA write access. These stalls occur if the cache read was potentially cacheable. This is possible for some monitored accesses but not all. If a DMA write is stalled, the comparator releases it to complete once there are no pending reads to the same cache line. | 2012-08-02 |
20120198163 | Level One Data Cache Line Lock and Enhanced Snoop Protocol During Cache Victims and Writebacks to Maintain Level One Data Cache and Level Two Cache Coherence - This invention assures cache coherence in a multi-level cache system upon eviction of a higher level cache line. A victim buffer stores data from evicted lines. On a DMA access that may be cached in the higher level cache, the lower level cache sends a snoop write. The address of this snoop write is compared with the victim buffer. On a hit in the victim buffer, the write completes in the victim buffer. When the victim data passes to the next cache level, it is written into a second victim buffer to be retired when the data is committed to cache. DMA write addresses are compared to addresses in this second victim buffer. On a match, the write takes place in the second victim buffer. On a failure to match, the controller sends a snoop write. | 2012-08-02 |
20120198164 | Programmable Address-Based Write-Through Cache Control - This invention is a cache system with a memory attribute register having plural entries. Each entry stores a write-through or a write-back indication for a corresponding memory address range. On a write to cached data, the cache consults the memory attribute register for the corresponding address range. Writes to addresses in regions marked as write-through always update all levels of the memory hierarchy. Writes to addresses in regions marked as write-back update only the first cache level that can service the write. The memory attribute register is preferably a memory mapped control register writable by the central processing unit. | 2012-08-02 |
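The lookup this abstract describes — a per-address-range write policy consulted on every cached write — can be sketched as follows. The entry layout, the range values, and the default policy are assumptions for illustration, not the patented register format:

```python
# Hedged sketch of consulting a memory attribute register (MAR) on a write.

WRITE_THROUGH, WRITE_BACK = "write-through", "write-back"

# Each entry: (start, end, policy) for one memory address range.
mar = [
    (0x0000_0000, 0x0FFF_FFFF, WRITE_BACK),
    (0x1000_0000, 0x1FFF_FFFF, WRITE_THROUGH),  # e.g. device/shared region
]

def write_policy(addr: int) -> str:
    for start, end, policy in mar:
        if start <= addr <= end:
            return policy
    # Unmapped addresses default to write-through (a conservative assumption).
    return WRITE_THROUGH
```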
20120198165 | Mechanism to Update the Status of In-Flight Cache Coherence In a Multi-Level Cache Hierarchy - Separate buffers store snoop writes and direct memory access writes. A multiplexer selects one of these for input to a FIFO buffer. The FIFO buffer is split into multiple FIFOs including: a command FIFO; an address FIFO; and a write data FIFO. Each snoop command is compared with an allocated line set and way, and deleted on a match to avoid data corruption. Each snoop command is also compared with a victim address. If the snoop address matches a victim address, logic redirects the snoop command to a victim buffer and the snoop write is completed in the victim buffer. | 2012-08-02 |
20120198166 | Memory Attribute Sharing Between Differing Cache Levels of Multilevel Cache - The level one memory controller maintains a local copy of the cacheability bit of each memory attribute register. The level two memory controller is the initiator of all configuration read/write requests from the CPU. Whenever a configuration write is made to a memory attribute register, the level one memory controller updates its local copy of the memory attribute register. | 2012-08-02 |
20120198167 | SYNCHRONIZING ACCESS TO DATA IN SHARED MEMORY VIA UPPER LEVEL CACHE QUEUING - A processing unit includes a store-in lower level cache having reservation logic that determines presence or absence of a reservation and a processor core including a store-through upper level cache, an instruction execution unit, a load unit that, responsive to a hit in the upper level cache on a load-reserve operation generated through execution of a load-reserve instruction by the instruction execution unit, temporarily buffers a load target address of the load-reserve operation, and a flag indicating that the load-reserve operation is bound to a value in the upper level cache. If a storage-modifying operation is received that conflicts with the load target address of the load-reserve operation, the processor core sets the flag to a particular state, and, responsive to execution of a store-conditional instruction, transmits an associated store-conditional operation to the lower level cache with a fail indication if the flag is set to the particular state. | 2012-08-02 |
20120198168 | VARIABLE CACHING STRUCTURE FOR MANAGING PHYSICAL STORAGE - A method for managing a variable caching structure for managing storage for a processor. The method includes using a multi-way tag array to store a plurality of pointers for a corresponding plurality of different size groups of physical storage of a storage stack, wherein the pointers indicate guest addresses that have corresponding converted native addresses stored within the storage stack, and allocating a group of storage blocks of the storage stack, wherein the size of the allocation is in accordance with a corresponding size of one of the plurality of different size groups. Upon a hit in the tag array, a corresponding entry is accessed to retrieve a pointer that indicates where in the storage stack a corresponding group of storage blocks of converted native instructions reside. The converted native instructions are then fetched from the storage stack for execution. | 2012-08-02 |
20120198169 | Binary Rewriting in Software Instruction Cache - Mechanisms are provided for dynamically rewriting branch instructions in a portion of code. The mechanisms execute a branch instruction in the portion of code. The mechanisms determine if a target instruction of the branch instruction, to which the branch instruction branches, is present in an instruction cache associated with the processor. Moreover, the mechanisms directly branch execution of the portion of code to the target instruction in the instruction cache, without intervention from an instruction cache runtime system, in response to a determination that the target instruction is present in the instruction cache. In addition, the mechanisms redirect execution of the portion of code to the instruction cache runtime system in response to a determination that the target instruction cannot be determined to be present in the instruction cache. | 2012-08-02 |
20120198170 | Dynamically Rewriting Branch Instructions in Response to Cache Line Eviction - Mechanisms are provided for evicting cache lines from an instruction cache of the data processing system. The mechanisms store, for a portion of code in a current cache line, a linked list of call sites that directly or indirectly target the portion of code in the current cache line. A determination is made as to whether the current cache line is to be evicted from the instruction cache. The linked list of call sites is processed to identify one or more rewritten branch instructions having associated branch stubs, that either directly or indirectly target the portion of code in the current cache line. In addition, the one or more rewritten branch instructions are rewritten to restore the one or more rewritten branch instructions to an original state based on information in the associated branch stubs. | 2012-08-02 |
20120198171 | Cache Pre-Allocation of Ways for Pipelined Allocate Requests - This invention is a data processing system with a data cache. The cache controller responds to a cache miss requiring allocation by pre-allocating a way in the set to an allocation request according to said least recently used indication of said ways and then update the least recently used indication of remaining ways of the set. This permits read allocate requests to the same set to proceed without introducing processing stalls due to way contention. This also allows multiple outstanding allocate requests to the same set and way combination. The cache also compares the address of a newly received allocation request to stall this allocation request if the address matches an address of any pending allocation request. | 2012-08-02 |
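The pre-allocation step above — pick the LRU way, then immediately mark it most recently used so a second pipelined allocate to the same set picks a different way — can be sketched minimally. The class shape is illustrative, not the patented hardware structure:

```python
# Sketch of per-set LRU pre-allocation: promoting the chosen way to MRU at
# allocation time lets back-to-back allocates to the same set proceed
# without way contention stalls.

class SetLRU:
    def __init__(self, num_ways):
        # Index 0 = least recently used, last index = most recently used.
        self.order = list(range(num_ways))

    def pre_allocate(self) -> int:
        way = self.order.pop(0)  # choose the LRU way
        self.order.append(way)   # immediately mark it MRU
        return way
```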
20120198172 | Cache Partitioning in Virtualized Environments - A mechanism is provided in a virtual machine monitor for providing cache partitioning in virtualized environments. The mechanism assigns a virtual identification (ID) to each virtual machine in the virtualized environment. The processing core stores the virtual ID of the virtual machine in a special register. The mechanism also creates an entry for the virtual machine in a partition table. The mechanism may partition a shared cache using a vertical (way) partition and/or a horizontal partition. The entry in the partition table includes a vertical partition control and a horizontal partition control. For each cache access, the virtual machine passes the virtual ID along with the address to the shared cache. If the cache access results in a miss, the shared cache uses the partition table to select a victim cache line for replacement. | 2012-08-02 |
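The vertical (way) partition lookup above can be sketched as a victim search restricted to the ways a virtual ID owns. The bitmask encoding and table values are assumptions for illustration:

```python
# Illustrative sketch of way-partitioned victim selection on a cache miss:
# each virtual ID owns a bitmask of ways, and the victim is the least
# recently used way within that partition.

def select_victim(partition_table, virtual_id, lru_order):
    """Return the LRU way that the virtual machine's partition allows.

    lru_order lists ways least-recently-used first.
    """
    way_mask = partition_table[virtual_id]  # vertical (way) partition control
    for way in lru_order:
        if way_mask & (1 << way):
            return way
    raise RuntimeError("virtual ID owns no ways in this cache")

table = {1: 0b0011, 2: 0b1100}  # VM 1 owns ways 0-1, VM 2 owns ways 2-3
```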
20120198173 | ROUTER AND MANY-CORE SYSTEM - According to one embodiment, a router manages routing of a packet transferred between a plurality of cores and at least one cache memory which the cores can access. The router includes an analyzer, a packet memory and a controller. The analyzer determines whether the packet is a read-packet or a write-packet. The packet memory stores at least part of the write-packet issued by one of the cores. The controller stores cache data of the write-packet and a cache address in the packet memory when the analyzer determines that the packet is the write-packet. The cache address indicates an address in which the cache data is stored. The controller outputs the cache data stored in the packet memory to the core issuing a read-request as response data corresponding to the read-packet when the analyzer determines that the packet is the read-packet and the cache address corresponding to the read-request is stored in the packet memory. | 2012-08-02 |
20120198174 | APPARATUS, SYSTEM, AND METHOD FOR MANAGING EVICTION OF DATA - An apparatus, system, and method are disclosed for managing eviction of data. A cache write module stores data on a non-volatile storage device sequentially using a log-based storage structure having a head region and a tail region. A direct cache module caches data on the non-volatile storage device using the log-based storage structure. The data is associated with storage operations between a host and a backing store storage device. An eviction module evicts data of at least one region in succession from the log-based storage structure starting with the tail region and progressing toward the head region. | 2012-08-02 |
20120198175 | APPARATUS, SYSTEM, AND METHOD FOR MANAGING EVICTION OF DATA - An apparatus, system, and method are disclosed for managing eviction of data. A grooming cost module determines a grooming cost for a selected region of a nonvolatile solid-state cache. The grooming cost includes a cost of evicting the selected region of the nonvolatile solid-state cache relative to other regions. A grooming candidate set module adds the selected region to a grooming candidate set in response to the grooming cost satisfying a grooming cost threshold. A low cost module selects a low cost region within the grooming candidate set. A groomer module recovers storage capacity of the low cost region. | 2012-08-02 |
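The grooming flow above — admit regions whose eviction cost satisfies a threshold into a candidate set, then recover the lowest-cost member — can be sketched with a numeric cost per region. The function name and cost representation are assumptions:

```python
# Minimal sketch of the grooming-candidate selection described above.

def pick_region_to_groom(region_costs, cost_threshold):
    """region_costs: dict mapping region id -> grooming (eviction) cost.

    Returns the lowest-cost region whose cost satisfies the threshold,
    or None if no region qualifies yet.
    """
    candidates = {r: c for r, c in region_costs.items() if c <= cost_threshold}
    if not candidates:
        return None
    return min(candidates, key=candidates.get)
```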
20120198176 | PREFETCHING OF NEXT PHYSICALLY SEQUENTIAL CACHE LINE AFTER CACHE LINE THAT INCLUDES LOADED PAGE TABLE ENTRY - A microprocessor includes a translation lookaside buffer, a request to load a page table entry into the microprocessor generated in response to a miss of a virtual address in the translation lookaside buffer, and a prefetch unit. The prefetch unit receives a physical address of a first cache line that includes the requested page table entry and responsively generates a request to prefetch into the microprocessor a second cache line that is the next physically sequential cache line to the first cache line. | 2012-08-02 |
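The address arithmetic implied by this abstract is simple: align the page table entry's physical address down to its cache line, then step one full line forward. A sketch, with the line size as an assumption:

```python
# Sketch of computing the next physically sequential cache line to prefetch
# after the line containing the loaded page table entry.

LINE_SIZE = 64  # bytes; a power of two (assumed for this sketch)

def next_sequential_line(pte_line_addr: int) -> int:
    # Align down to the line boundary, then advance one full line.
    return (pte_line_addr & ~(LINE_SIZE - 1)) + LINE_SIZE
```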
20120198177 | SELECTIVE MEMORY ACCESS TO DIFFERENT LOCAL MEMORY PORTS AND METHOD THEREOF - A data processor is disclosed that definitively determines that an effective address being calculated and decoded will be associated with an address range that includes a memory local to a data processor unit, and will disable a cache access based upon a comparison between a portion of a base address and a corresponding portion of an effective address input operand. Access to the local memory can be accomplished through a first port of the local memory when it is definitively determined that the effective address will be associated with the address range. Access to the local memory cannot be accomplished through the first port of the local memory when it is not definitively determined that the effective address will be associated with the address range. | 2012-08-02 |
20120198178 | ADDRESS-BASED HAZARD RESOLUTION FOR MANAGING READ/WRITE OPERATIONS IN A MEMORY CACHE - One embodiment provides a cached memory system including a memory cache and a plurality of read-claim (RC) machines configured for performing read and write operations dispatched from a processor. According to control logic provided with the cached memory system, a hazard is detected between first and second read or write operations being handled by first and second RC machines. The second RC machine is suspended and a subset of the address bits of the second operation at specific bit positions are recorded. The subset of address bits of the first operation at the specific bit positions are broadcast in response to the first operation being completed. The second operation is then re-requested. | 2012-08-02 |
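The core comparison above — detecting a hazard from only a subset of address bits at fixed positions — can be sketched as a masked equality test. The chosen bit positions are purely an assumption for illustration:

```python
# Hedged sketch of subset-of-bits hazard detection: equal subsets are treated
# as a (conservative) hazard, unequal subsets are guaranteed hazard-free.

HAZARD_BITS = 0x3FC0  # e.g. address bits 6-13 (illustrative choice)

def hazard(addr_a: int, addr_b: int) -> bool:
    return (addr_a & HAZARD_BITS) == (addr_b & HAZARD_BITS)
```

A matching second operation would be suspended and re-requested once the first operation completes and broadcasts its bits, as the abstract describes.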
20120198179 | AREA-EFFICIENT, WIDTH-ADJUSTABLE SIGNALING INTERFACE - A lateral transfer path within an adjustable-width signaling interface of an integrated circuit component is formed by a chain of logic segments that may be intercoupled in different groups to effect the lateral data transfer required in different interface width configurations, avoiding the need for a dedicated transfer path per width configuration and thereby substantially reducing the number of interconnects (and thus the area) required to implement the lateral transfer structure. | 2012-08-02 |
20120198180 | NONVOLATILE MEMORY SYSTEM AND FLAG DATA INPUT/OUTPUT METHOD FOR THE SAME - Various embodiments of a nonvolatile memory system and related methods are disclosed. In one exemplary embodiment, the memory system may include: a memory area including a main memory area and a flag memory area; and an input/output controller configured to receive main data through a main data input line and provide the received main data to a page buffer circuit in response to a main data input control signal. The input/output controller may be further configured to receive flag data through the main data input line and provide the received flag data to the page buffer circuit in response to a flag data input control signal. | 2012-08-02 |
20120198181 | System and Method for Managing a Memory as a Circular Buffer - System and method for facilitating data transfer between logic systems and a memory according to various conditions. Embodiments include systems and methods for facilitating and improving throughput of data transfers using a shared non-deterministic bus, a system and method for managing a memory as a circular buffer, and a system and method for facilitating data transfer between a first clock domain and a second clock domain. Embodiments may be implemented individually or in combination. | 2012-08-02 |
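Managing a fixed memory region as a circular buffer, as the title describes, reduces to wrapping head/tail indices. A minimal sketch (the class shape and full/empty policy are illustrative assumptions):

```python
# Minimal circular-buffer sketch: a fixed-capacity region with wrapping
# write (head) and read (tail) positions.

class CircularBuffer:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0   # next write position
        self.tail = 0   # next read position
        self.count = 0  # items currently stored

    def push(self, item) -> bool:
        if self.count == len(self.buf):
            return False  # full; caller must drain first (assumed policy)
        self.buf[self.head] = item
        self.head = (self.head + 1) % len(self.buf)
        self.count += 1
        return True

    def pop(self):
        if self.count == 0:
            return None
        item = self.buf[self.tail]
        self.tail = (self.tail + 1) % len(self.buf)
        self.count -= 1
        return item
```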
20120198182 | MULTI-CORE SYSTEM AND METHOD FOR PROCESSING DATA IN PARALLEL IN MULTI-CORE SYSTEM - A multi-core system and a method for processing data in parallel in the multi-core system are provided. In the multi-core system, partitioning and allocating of data may be dynamically controlled based on local memory information. Thus, it is possible to increase the availability of a Central Processing Unit (CPU) and a local memory, and to improve the performance of data parallel processing. | 2012-08-02 |
20120198183 | SUCCESSIVE APPROXIMATION RESISTOR DETECTION - An apparatus comprises a connector configured to receive an electrical contact of an accessory device that is electrically coupled to a resistor of the accessory device, a current source configured to apply a specified current to the resistor to generate a resulting voltage, a comparator configured to receive and compare the resulting voltage to a reference voltage, and a controller configured to store an outcome of the comparison as a bit in a register, to adjust the applied current using the outcome of the comparison, and to determine a resistance value for the resistor using the bit stored in the register. | 2012-08-02 |
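The comparator-driven current search described above is a successive-approximation loop: each trial sets one bit of the current code, and the comparison against the reference voltage decides whether the bit is kept. A sketch using integer microamp/microvolt units (all parameter names and values are illustrative assumptions):

```python
def sar_detect_code(resistance_ohms, v_ref_uv=1_000_000, i_lsb_ua=1, bits=8):
    """Sketch of successive-approximation resistor detection.

    A trial current (code * i_lsb_ua microamps) is applied to the unknown
    resistor; the resulting voltage (Ohm's law: uA * ohms = uV) is compared
    against the reference, and the outcome decides each code bit from MSB
    to LSB.  Units, defaults, and names are illustrative assumptions.
    """
    code = 0
    for bit in reversed(range(bits)):
        trial = code | (1 << bit)                    # tentatively set this bit
        v_uv = trial * i_lsb_ua * resistance_ohms    # resulting voltage
        if v_uv <= v_ref_uv:                         # comparator outcome
            code = trial                             # keep the bit
    return code
```

The final code is the largest current that keeps the resulting voltage at or under the reference, so the resistance can be recovered as roughly `v_ref_uv / (code * i_lsb_ua)`.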
20120198184 | MEMORY MANAGEMENT METHOD, COMPUTER SYSTEM AND COMPUTER READABLE MEDIUM - Provided is a memory management method for releasing an unnecessary area in a memory area used by a program stored in the memory and executed by a computing device. The memory management method includes the steps of: setting in the memory, a first memory area which is used to execute the program; setting in the memory, a second memory area which can be operated by the program; setting a utilized area in the second memory area based on an instruction from the program; storing objects including data in the utilized area of the second memory area based on an instruction from the program; determining whether the program uses the objects stored in the utilized area within the second memory area; and releasing, by the computing device, the utilized area occupied by an object that is not used by the program among the objects stored in the utilized area. | 2012-08-02 |
20120198185 | INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD - A newer generation game terminal according to one embodiment of the present invention is provided with a storage access control unit. The storage access control unit accesses a newer generation storage according to a request for access from an AP designed to execute a synchronization process on the assumption of a speed of access to an older generation storage. The storage access control unit estimates the time required to access the older generation storage in accordance with an evaluation function for calculating the required time. The storage access control unit executes an adjustment process to fill the time gap between the actual time required to access the newer generation storage and the time estimated to be required for access. | 2012-08-02 |
20120198186 | MEMORY DEVICE AND MEMORY SYSTEM - A memory device includes a plurality of nonvolatile memories configured to be erased at updating of data, and a memory controller configured to control the nonvolatile memories. The memory controller includes an address conversion table configured to convert a logical address specified at data writing into a physical address of the nonvolatile memory, an erased physical block managing unit configured to manage an erased physical block address, the nonvolatile memory of the erased physical block address, and an erased physical block count on each nonvolatile memory, an erasable physical block managing unit configured to manage an erasable physical block address, the nonvolatile memory of the erasable physical block address, and an erasable physical block count on each nonvolatile memory, and a memory control unit configured to control writing and erasing on the plurality of nonvolatile memories. | 2012-08-02 |
20120198187 | Technique for preserving memory affinity in a non-uniform memory access data processing system - Techniques for preserving memory affinity in a computer system are disclosed. In response to a request for memory access to a page within a memory affinity domain, a determination is made if the request is initiated by a processor associated with the memory affinity domain. If the request is not initiated by a processor associated with the memory affinity domain, a determination is made if there is a page ID match with an entry within a page migration tracking module associated with the memory affinity domain. If there is no page ID match, an entry is selected within the page migration tracking module to be updated with a new page ID and a new memory affinity ID. If there is a page ID match, then another determination is made whether or not there is a memory affinity ID match with the entry having the page ID match. If there is no memory affinity ID match, the entry is updated with a new memory affinity ID; and if there is a memory affinity ID match, an access counter of the entry is incremented. | 2012-08-02 |
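The tracking-module update rule in the abstract above (miss: record new entry; page hit with affinity mismatch: update affinity; full hit: bump counter) can be sketched as a table update. The dict layout, unbounded table, and counter reset on affinity change are simplifying assumptions; a real module would bound the table and select a victim entry.

```python
def track_access(table, page_id, affinity_id):
    """Sketch of the page-migration-tracking update rule.

    `table` maps page_id -> [affinity_id, access_count].  Entry selection
    on a miss is simplified to inserting a fresh entry, and the counter is
    reset on an affinity change (an assumption; the abstract only says the
    affinity ID is updated).
    """
    entry = table.get(page_id)
    if entry is None:
        # no page-ID match: record new page ID and memory affinity ID
        table[page_id] = [affinity_id, 1]
    elif entry[0] != affinity_id:
        # page-ID match but affinity mismatch: update the affinity ID
        entry[0] = affinity_id
        entry[1] = 1
    else:
        # page-ID and affinity both match: increment the access counter
        entry[1] += 1
    return table[page_id]
```

A page repeatedly accessed from the same remote domain accumulates a count that could then drive a migration decision.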
20120198188 | REGENERATION OF DELETED DATA - To prevent loss of a data volume by unintended deletion, including various versions of the data volume, the data is preserved, and, when needed, is regenerated at a different volume address than that of the deleted volume. In a computer-implemented data storage system, a method responds to a received command to delete a volume of data identified by a volume address, preserving data of the volume prior to deletion, and generates a unique token associated with the volume and version of the volume as of the deletion. The unique token is communicated as associated with the received delete command. The method responds to a received command to regenerate the data of the deleted volume, generating a command to find the data identified by the unique token, and creating a new, different, volume address for the data of the preserved deleted volume, thereby keeping both volume versions. | 2012-08-02 |
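The delete-then-regenerate flow above (preserve the volume's data under a unique token, then restore it at a new address so both versions can coexist) can be sketched as follows; the class, method names, and integer volume addresses are hypothetical illustrations, not the patented storage system's interfaces.

```python
import uuid


class VolumeStore:
    """Sketch of delete-with-preservation and token-based regeneration.

    On delete, the volume's data is preserved and keyed by a unique token
    associated with that version; on regenerate, the preserved data is
    restored at a new, different volume address.  All names and the
    dict-based layout are illustrative assumptions.
    """

    def __init__(self):
        self.volumes = {}    # volume address -> data
        self.preserved = {}  # token -> (original address, preserved data)
        self.next_addr = 0

    def create(self, data):
        addr = self.next_addr
        self.next_addr += 1
        self.volumes[addr] = data
        return addr

    def delete(self, addr):
        token = str(uuid.uuid4())                    # unique per version
        self.preserved[token] = (addr, self.volumes.pop(addr))
        return token                                 # returned with the delete

    def regenerate(self, token):
        _, data = self.preserved[token]
        return self.create(data)                     # new volume address
```

Because regeneration allocates a fresh address, a later volume reusing the old address and the regenerated old version both remain accessible.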
20120198189 | INFORMATION PROCESSING APPARATUS, CONTROL METHOD, AND PROGRAM - An information processing apparatus connected to first and second storage devices via a storage control apparatus is provided. The apparatus includes: an acquisition unit configured to acquire a model number of the second storage device when the storage control apparatus operates in a mirror state; a determination unit configured to determine whether the second storage device needs to be used, based on the model number acquired by the acquisition unit; and a second transfer unit configured to transfer the storage control apparatus from the mirror state to a rebuilding state if the determination unit determines that the second storage device needs to be used. | 2012-08-02 |
20120198190 | MAKING AUTOMATED USE OF DATA VOLUME COPY SERVICE TARGETS - A computer implemented method for automatically managing copies of source data volumes is provided. A copy management agent receives a message that target volume copies of source volumes are available. The copy management agent accesses the target volume copies of the source volumes. The copy management agent analyzes metadata for the target volume copies. The copy management agent determines whether any of the target volume copies is a boot volume copy based on the analyzed metadata. In response to a determination that one of the target volume copies is a boot volume copy, the copy management agent directs a provisioning agent to provision a new host for the target volume copies. The copy management agent directs the storage subsystem to present the target volume copies to a storage area network port associated with the new host. Then, the new host is booted using the boot volume copy. | 2012-08-02 |
20120198191 | METHOD AND APPARATUS FOR DE-DUPLICATION AFTER MIRROR OPERATION - De-duplication operations are applied to mirror volumes. Data stored to a first volume is mirrored to a second volume. The second volume is a virtual volume having a plurality of logical addresses, such that segments of physical storage capacity are allocated for a specified logical address as needed when data is stored to the specified logical address. A de-duplication operation is carried out on the second volume following a split from the first volume. A particular segment of the second volume is identified as having data that is the same as another segment in the second volume or in the same consistency group. A link is created from the particular segment to the other segment and the particular segment is released from the second volume so that physical storage capacity required for the second volume is reduced. | 2012-08-02 |
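The post-split de-duplication above (find segments whose data matches another segment, link the duplicate to the survivor, release its physical capacity) is essentially content-matching over segments. A sketch, with the dict layout, Python's built-in `hash` as a stand-in for a content hash, and all names as illustrative assumptions:

```python
def deduplicate(segments):
    """Sketch of de-duplication over a volume's allocated segments.

    `segments` maps segment address -> data bytes.  A segment whose data
    matches an earlier segment is replaced by a link to it and released.
    Returns (kept segments, links).  Built-in hash() stands in for a real
    content hash; a full data compare guards against hash collisions.
    """
    kept, links, seen = {}, {}, {}
    for addr in sorted(segments):
        digest = hash(segments[addr])
        if digest in seen and segments[seen[digest]] == segments[addr]:
            links[addr] = seen[digest]   # duplicate: link and release segment
        else:
            seen[digest] = addr          # first occurrence: keep physical copy
            kept[addr] = segments[addr]
    return kept, links
```

Only the `kept` segments still consume physical capacity; reads of a linked address follow the link to the surviving segment.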
20120198192 | Programmable Mapping of External Requestors to Privilege Classes for Access Protection - A memory management and protection system that manages memory access requests from a number of requestors. Memory accesses are allowed or disallowed based on the privilege level of the requestor, based on a Privilege Identifier that accompanies each memory access request. An extended memory controller selects the appropriate set of segment registers based on the Privilege Identifier to insure that the request is compared to and translated by the segment register associated with the master originating the request. A set of mapping registers allow flexible mapping of each Privilege Identifier to the appropriate access permission. | 2012-08-02 |
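The mapping described above, from a Privilege Identifier to segment registers that gate each access, can be sketched as a lookup followed by a range-and-permission check. The class, register layout, and permission encoding are illustrative assumptions:

```python
class MemoryProtectionUnit:
    """Sketch of privilege-ID-based segment selection and access checking.

    Each privilege ID maps to a list of (base, limit, permissions) segment
    registers; a request is allowed only if some segment covering the
    address grants the requested operation.  Names and the dict layout
    are illustrative assumptions, not the patented register file.
    """

    def __init__(self):
        self.segments = {}   # privilege ID -> [(base, limit, perms), ...]

    def map_privilege(self, priv_id, base, limit, perms):
        self.segments.setdefault(priv_id, []).append((base, limit, perms))

    def check_access(self, priv_id, address, op):
        # select the segment registers associated with the requesting master
        for base, limit, perms in self.segments.get(priv_id, []):
            if base <= address < base + limit and op in perms:
                return True
        return False         # no covering segment grants the operation
```

Two masters issuing the same address can thus receive different answers purely from the privilege ID carried with the request.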
20120198193 | METHOD TO QUALIFY ACCESS TO A BLOCK STORAGE DEVICE VIA AUGMENTATION OF THE DEVICE'S CONTROLLER AND FIRMWARE FLOW - A method to qualify access to a block storage device via augmentation of the device's controller and firmware flow. The method employs one or more block exclusion vectors (BEVs) that include attributes specifying allowed access operations for corresponding block address ranges. Logic in accordance with the BEVs is programmed into the controller for the block storage device, such as a disk drive controller for a disk drive. In response to an access request, a block address range corresponding to the storage block(s) requested to be accessed is determined. Based on the BEV entries, a determination is made as to whether the determined logical block address range is covered by a corresponding BEV entry. If so, the attributes of the BEV are used to determine whether the access operation is allowed. | 2012-08-02 |
20120198194 | Multi-Bank Memory Accesses Using Posted Writes - Systems and methods for reducing delays between successive write and read accesses in multi-bank memory devices are provided. Computer circuits modify the relative timing between addresses and data of write accesses, reducing delays between successive write and read accesses. Memory devices that interface with these computer circuits use posted write accesses to effectively return the modified relative timing to its original timing before processing the write access. | 2012-08-02 |
20120198195 | DATA STORAGE SYSTEM AND METHOD - A data storage system including a storage device. The storage device may include a plurality of data storage drives that may be logically divided into a plurality of groups and arranged in a plurality of rows and a plurality of columns such that each column contains only data storage drives from distinct groups. Furthermore, the storage device may include a plurality of parity storage drives that correspond to the rows and columns of data storage drives. | 2012-08-02 |
20120198196 | NONVOLATILE MEMORY SYSTEM AND BLOCK MANAGEMENT METHOD - A nonvolatile memory system includes a memory area including a nonvolatile memory apparatus divided into a plurality of blocks, and a controller configured to control the memory area. The controller groups the plurality of blocks of the memory area according to wear level and whether each of the plurality of blocks is in use, and manages the blocks of each group in wear level order. | 2012-08-02 |
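Grouping blocks by wear level and in-use status, and keeping each group in wear order as the abstract above describes, can be sketched as a bucketing pass. The tuple layout and the bucketing rule (`erase_count // 100` as the wear level) are illustrative assumptions:

```python
def group_blocks(blocks):
    """Sketch of grouping flash blocks by wear level and in-use status.

    `blocks` is a list of (block_id, erase_count, in_use) tuples.  Blocks
    are bucketed by (wear level, in use) and each bucket is kept sorted by
    erase count, so the least-worn free block can be picked first.  The
    wear-level rule (erase_count // 100) is an illustrative assumption.
    """
    groups = {}
    for block_id, erase_count, in_use in blocks:
        key = (erase_count // 100, in_use)        # wear bucket + usage flag
        groups.setdefault(key, []).append((erase_count, block_id))
    for members in groups.values():
        members.sort()                            # maintain wear-level order
    return groups
```

A controller allocating a new block would then draw from the lowest-wear not-in-use group, evening out erase counts over time.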
20120198197 | TRANSFERRING DATA IN RESPONSE TO DETECTION OF A MEMORY SYSTEM IMBALANCE - A method begins by a processing module determining an imbalance between inode utilization and data storage utilization. When the imbalance compares unfavorably to an imbalance threshold, the method continues with the processing module determining whether utilization of another inode memory and utilization of another corresponding data storage memory are not imbalanced. When the utilization of the other inode memory and the utilization of the other corresponding data storage memory are not imbalanced, determining whether the inode utilization is out of balance with respect to the data storage utilization. When the inode utilization is out of balance, the method continues with the processing module transferring data objects from a data storage memory to the other corresponding data storage memory and transferring mapping information of data objects from an inode memory to the other inode memory. | 2012-08-02 |
20120198198 | Managing Line Items in a Computer-Based Modular Planning Tool - A computer-implemented modular planning tool and method are provided which allow a line item ( | 2012-08-02 |
20120198199 | Virtual Storage Mirror Configuration in Virtual Host - A method and a system for configuring mirrors of virtual storage devices in a virtual host includes obtaining a topology connection relationship between the virtual storage devices to be configured with mirrors and the virtual host, where the topology connection relationship is a hierarchical relationship in a tree shape with the virtual host as a root node and the virtual storage devices to be configured with mirrors as leaf nodes, and configuring the mirrors of the virtual storage devices to be configured with mirrors in the virtual host according to the obtained topology connection relationship. The method and the system for configuring mirrors of virtual storage devices in a virtual host can increase reliability. | 2012-08-02 |
20120198200 | METHOD AND APPARATUS OF MEMORY OVERLOAD CONTROL - A computer-implemented method, system, apparatus, and article of manufacture for memory overload management. The method includes: collecting memory application information of at least one node of a computer system that is implementing the method; predicting a memory overload period and an overload memory size of a first node where memory overload will occur based on the memory application information; and scheduling a memory space according to the memory overload period and the overload memory size. | 2012-08-02 |
20120198201 | MEMORY MODULE WITH CONFIGURABLE INPUT/OUTPUT PORTS - A memory module is coupled to a number of controllers. The memory module is configured to configure each of a number of data input/output ports thereof as at least one of an input and an output in response to a first command from a particular controller of the controllers. The memory module is configured to partition itself into memory partitions in response to a second command from the particular controller so that each memory partition corresponds to a respective one of the controllers. Each of a number of data input/output ports of the controllers is configurable as at least one of an input and an output to correspond to a respective one of the input/output ports of the memory module. The first and second commands may originate from the particular controller, or the controllers may be coupled in parallel to the memory module. | 2012-08-02 |
20120198202 | Paging Partition Arbitration Of Paging Devices To Shared Memory Partitions - A computer implemented method to establish at least one paging partition in a data processing system. The virtualization control point (VCP) reserves up to a subset of physical memory for use in the shared memory pool. The VCP configures at least one logical partition as a shared memory partition. The VCP assigns a paging partition to the shared memory pool. The VCP determines whether a user requests a redundant assignment of the paging partition to the shared memory pool. The VCP assigns a redundant paging partition to the shared memory pool, responsive to a determination that the user requests a redundant assignment. The VCP assigns a paging device to the shared memory pool. The hypervisor may transmit at least one paging request to a virtual asynchronous services interface configured to support a paging device stream. | 2012-08-02 |
20120198203 | MODIFYING DATA STORAGE IN RESPONSE TO DETECTION OF A MEMORY SYSTEM IMBALANCE - A method begins by a processing module determining an imbalance between inode memory utilization and data storage memory utilization. When the imbalance compares unfavorably to an imbalance threshold, the method continues with the processing module determining whether the inode memory utilization is out of balance with respect to the data storage memory utilization or whether the data storage memory utilization is out of balance with respect to the inode memory utilization. When the inode memory utilization is out of balance with respect to the data storage memory utilization, the method continues with the processing module transferring a set of data objects from a data object section to a data block section and transferring object mapping information of the set of data objects into block mapping information for the set of data objects. | 2012-08-02 |
20120198204 | FAST MASKED SUMMING COMPARATOR - A fast masked summing comparator apparatus includes a comparator unit configured to compare a masked first number to a masked sum of a second number and a third number to determine whether the masked sum is equivalent to the masked first number without performing a summation portion of an addition operation between the second number and the third number. The comparator unit may concurrently mask both the sum and the first number using the same mask value. | 2012-08-02 |
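Deciding whether a + b equals k without completing the addition can be sketched with a classic carry-recurrence identity: the carry vector that would make the sum equal k is d = a ^ b ^ k, and the check verifies d against the carries the operands would actually generate, all in constant logic depth with no carry ripple. This well-known identity is offered as an illustrative stand-in for the patented comparator; the shared-mask handling is omitted.

```python
def sum_equals(a, b, k, width=32):
    """Sketch: test (a + b) mod 2**width == k without a full addition.

    d = a ^ b ^ k is the carry vector that WOULD make the sum equal k;
    the sum equals k exactly when d satisfies the carry recurrence
    d[i+1] = (a & b | (a ^ b) & ~k)[i] with d[0] = 0, i.e. when d equals
    the right-hand side shifted left by one.  A stand-in for the patented
    comparator; masked-operand support is not modeled.
    """
    mask = (1 << width) - 1
    d = (a ^ b ^ k) & mask                              # required carries
    rhs = (((a & b) | ((a ^ b) & ~k)) << 1) & mask      # generated carries
    return d == rhs
```

Each bit of `rhs` depends only on one bit position of the operands, so the equality test is a parallel bitwise compare rather than a ripple-carry sum.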
20120198205 | TRANSACTIONAL MEMORY - Subject matter disclosed herein relates to techniques to perform transactions using a memory device. | 2012-08-02 |
20120198206 | APPARATUS AND METHOD FOR PROTECTING MEMORY IN MULTI-PROCESSOR SYSTEM - Memory mapping in small units using a segment and subsegments is described, making it possible to control memory access with only a small amount of hardware and to reduce hardware costs. Additionally, it is possible to prevent a memory from being destroyed due to a task error in the multi-processor system. | 2012-08-02 |
20120198207 | ASYMMETRIC PERFORMANCE MULTICORE ARCHITECTURE WITH SAME INSTRUCTION SET ARCHITECTURE - A method is described that entails operating enabled cores of a multi-core processor such that both cores support respective software routines with a same instruction set, a first core being higher performance and consuming more power than a second core under a same set of applied supply voltage and operating frequency. | 2012-08-02 |
20120198208 | SHARED FUNCTION MULTI-PORTED ROM APPARATUS AND METHOD - Various embodiments may be disclosed that may share a ROM pull down logic circuit among multiple ports of a processing core. The processing core may include an execution unit (EU) having an array of read only memory (ROM) pull down logic storing math functions. The ROM pull down logic circuit may implement single instruction, multiple data (SIMD) operations. The ROM pull down logic circuit may be operatively coupled with each of the multiple ports in a multi-port function sharing arrangement. Sharing the ROM pull down logic circuit reduces the need to duplicate logic and may result in a savings of chip area as well as a savings of power. | 2012-08-02 |
20120198209 | GUEST INSTRUCTION BLOCK WITH NEAR BRANCHING AND FAR BRANCHING SEQUENCE CONSTRUCTION TO NATIVE INSTRUCTION BLOCK - A method for translating instructions for a processor. The method includes accessing a plurality of guest instructions that comprise multiple guest branch instructions comprising at least one guest far branch, and building an instruction sequence from the plurality of guest instructions by using branch prediction on the at least one guest far branch. The method further includes assembling a guest instruction block from the instruction sequence. The guest instruction block is translated to a corresponding native conversion block, wherein at least one native far branch corresponds to the at least one guest far branch, and wherein the at least one native far branch includes an opposite guest address for an opposing branch path of the at least one guest far branch. Upon encountering a misprediction, a correct instruction sequence is obtained by accessing the opposite guest address. | 2012-08-02 |
20120198210 | Microprocessor Having Novel Operations - A processor. The processor includes a first register for storing a first packed data, a decoder, and a functional unit. The decoder has a control signal input. The control signal input is for receiving a first control signal and a second control signal. The first control signal is for indicating a pack operation. The second control signal is for indicating an unpack operation. The functional unit is coupled to the decoder and the register. The functional unit is for performing the pack operation and the unpack operation using the first packed data. The processor also supports a move operation. | 2012-08-02 |
20120198211 | ARITHMETIC UNIT AND ARITHMETIC PROCESSING METHOD FOR OPERATING WITH HIGHER AND LOWER CLOCK FREQUENCIES - There is a need for providing a battery-less integrated circuit (IC) card capable of operating in accordance with a contact usage or a non-contact usage, preventing coprocessor throughput from degrading despite a decreased clock frequency for reduced power consumption under non-contact usage, and ensuring high-speed processing under non-contact usage. A dual interface card is a battery-less IC card capable of operating in accordance with a contact usage or a non-contact usage. The dual interface card operates at a high clock under contact usage and at a low clock under non-contact usage. A targeted operation comprises a plurality of different basic operations. The dual interface card comprises a basic arithmetic circuit group. Under the contact usage, the basic arithmetic circuit group performs one basic operation of the targeted operation at one cycle. Under the non-contact usage, the basic arithmetic circuit group sequentially performs at least two basic operations of the targeted operation at one cycle. | 2012-08-02 |
20120198212 | Microprocessor and Method for Enhanced Precision Sum-of-Products Calculation on a Microprocessor - A microprocessor, a method for enhanced precision sum-of-products calculation and a video decoding device are provided, in which at least one general-purpose-register is arranged to provide a number of destination bits to a multiply unit, and a control unit is adapted to provide at least a multiply-high instruction and a multiply-high-and-accumulate instruction to the multiply unit. The multiply unit is arranged to receive at least first and second source operands having an associated number of source bits, a sum of source bits exceeding the number of destination bits, connected to a register-extension cache comprising at least one cache entry arranged to store a number of precision-enhancement bits, and adapted to store a destination portion of a result operand in the general-purpose-register and a precision enhancement portion in the cache entry. The result operand is generated by a multiply-high operation or by a multiply-high-and-accumulate operation, depending on the received instructions. | 2012-08-02 |
20120198213 | PACKET HANDLER INCLUDING PLURALITY OF PARALLEL ACTION MACHINES - A packet handler for a packet processing system includes a plurality of parallel action machines, each of the plurality of parallel action machines being configured to perform a respective packet processing function; and a plurality of action machine input registers, wherein each of the plurality of parallel action machines is associated with one or more of the plurality of action machine input registers, and wherein an action machine of the plurality of parallel action machines is automatically triggered to perform its respective packet processing function in the event that data sufficient to perform the action machine's respective packet processing function is written into the action machine's one or more respective action machine input registers. | 2012-08-02 |
20120198214 | N-WAY MEMORY BARRIER OPERATION COALESCING - One embodiment sets forth a technique for N-way memory barrier operation coalescing. When a first memory barrier is received for a first thread group, execution of subsequent memory operations for the first thread group is suspended until the first memory barrier is executed. Subsequent memory barriers for different thread groups may be coalesced with the first memory barrier to produce a coalesced memory barrier that represents memory barrier operations for multiple thread groups. When the coalesced memory barrier is being processed, execution of subsequent memory operations for the different thread groups is also suspended. However, memory operations for other thread groups that are not affected by the coalesced memory barrier may be executed. | 2012-08-02 |
20120198215 | INSTRUCTION EXPLOITATION THROUGH LOADER LATE FIX-UP - A method, computer program product, and data processing system for substituting a candidate instruction in application code being loaded during load time. Responsive to identifying the candidate instruction, a determination is made whether a hardware facility of the data processing system is present to execute the candidate instruction. If the hardware facility is absent from the data processing system, the candidate instruction is substituted with a second set of instructions. | 2012-08-02 |
20120198216 | ENHANCED MONITOR FACILITY - A monitoring facility that is operable in two modes allowing compatibility with prior existing monitoring facilities. In one mode, in response to encountering a monitored event, an interrupt is generated. In another mode, in response to encountering a monitored event, one or more associated counters are incremented without causing an interrupt. | 2012-08-02 |
20120198217 | SELF-PROVISIONING OF CONFIGURATION FOR A SPECIFIC-PURPOSE CLIENT HAVING A WINDOWS-BASED EMBEDDED IMAGE WITH A WRITE-FILTER - Examples of methods and apparatus are provided for self-provisioning of configuration for a specific-purpose local client having a windows-based embedded image with a write-filter and obviating reinstallation of an entire windows-based embedded image onto the specific-purpose local client. The apparatus may include a retrieval module of the specific-purpose local client configured to facilitate locating a repository server containing a configuration file. The retrieval module may be configured to facilitate obtaining the configuration file from the repository server while the write-filter is enabled, while obviating reinstallation of an entire windows-based embedded image onto the specific-purpose local client. The apparatus may include an apply settings module of the specific-purpose local client configured to apply a configuration change to the windows-based embedded image based on the configuration file or another configuration file. | 2012-08-02 |
20120198218 | GENERATING, VALIDATING AND APPLYING CUSTOM EXTENSIBLE MARKUP LANGUAGE (XML) CONFIGURATION ON A CLIENT HAVING A WINDOWS-BASED EMBEDDED IMAGE - Examples of methods and apparatus are provided for generating, validating and applying custom extensible markup language (XML) configuration on a specific-purpose local client having a windows-based embedded image and obviating reinstallation of an entire windows-based embedded image onto the specific-purpose local client. The apparatus may include a configuration generation module configured to generate an XML configuration file and configured to validate the XML configuration file based on a validation file. The apparatus may include a retrieval module of the specific-purpose local client configured to automatically obtain the XML configuration file each time the specific-purpose local client boots up. The apparatus may include an apply settings module of the specific-purpose local client configured to automatically apply, to the windows-based embedded image, a configuration change based on the XML configuration file each time the specific-purpose local client boots up. | 2012-08-02 |
20120198219 | Component Drivers for a Component of a Device - A device including a read only memory to include component drivers for at least one component of the device, a controller to determine whether a bootable storage device includes at least one operating system, and an embedded application to select and load component drivers onto the device for at least one of the components before installing an operating system from the bootable storage device. | 2012-08-02 |
20120198220 | METHODS AND RECONFIGURABLE SYSTEMS TO OPTIMIZE THE PERFORMANCE OF A CONDITION BASED HEALTH MAINTENANCE SYSTEM - Methods and reconfigurable systems are provided for monitoring the health of a complex system. The reconfigurable system comprises a plurality of standardized executable application modules containing instructions to perform one of a plurality of different standardized functions. The system further comprises a plurality of computing nodes arranged in a hierarchical structure comprising one or more layers of computing nodes. Each computing node of the plurality runs a host application and a workflow service module, each computing node thereby being configured by a configuration file that directs the execution of any of the standardized executable application modules in a cooperative fashion by the host application via the workflow service module. The system also comprises a loading means for populating each computing node with one or more standardized executable application modules of the plurality, a communication means, and a configuration means for programming the populated standardized executable application modules. | 2012-08-02 |
20120198221 | SPECIFIC-PURPOSE CLIENT WITH CONFIGURATION HISTORY FOR SELF-PROVISIONING OF CONFIGURATION AND OBVIATING REINSTALLATION OF EMBEDDED IMAGE - Examples of specific-purpose local clients are provided for self-provisioning of configurations and for obviating reinstallation of entire windows-based embedded images onto the specific-purpose local clients. Each local client may have a windows-based embedded image with a write-filter, and may include a configuration history memory unit configured to store a plurality of extensible markup language (XML) configuration files. The configuration history memory unit may be in a persistent storage area of the local client to allow the plurality of XML configuration files to be retained on the local client when it is shut down. The local client may include a retrieval module configured to facilitate automatically locating a remote repository server containing a new XML configuration file, to facilitate automatically obtaining the new XML configuration file from the remote repository server over a network, and to facilitate automatically obtaining a previous XML configuration file from the configuration history memory unit. | 2012-08-02 |
20120198222 | CONFIGURING AND CUSTOMIZING A SPECIFIC-PURPOSE CLIENT HAVING A WINDOWS-BASED EMBEDDED IMAGE USING EXTENSIBLE MARKUP LANGUAGE (XML) CONFIGURATION - Examples of methods and apparatus are provided for configuring and customizing a specific-purpose local client having a windows-based embedded image using extensible markup language (XML) configuration and obviating reinstallation of an entire windows-based embedded image onto the specific-purpose local client. The apparatus may include a retrieval module of the specific-purpose local client configured to automatically locate a remote repository server containing an XML configuration file and automatically obtain the XML configuration file from the remote repository server each time the specific-purpose local client boots up. The apparatus may include an apply settings module of the specific-purpose local client configured to automatically apply a configuration change to the windows-based embedded image based on the XML configuration file each time the specific-purpose local client boots up. The configuration change persists across a reboot of the specific-purpose local client while obviating reinstallation of an entire windows-based embedded image onto the specific-purpose local client. | 2012-08-02 |
20120198223 | AUTOMATIC RETRIEVAL, PARSING AND APPLICATION OF CONFIGURATION FOR A SPECIFIC-PURPOSE CLIENT HAVING A WINDOWS-BASED EMBEDDED IMAGE WITH A WRITE-FILTER - Examples of methods and apparatus are provided for automatic retrieval, parsing and application of configuration for a specific-purpose local client having a windows-based embedded image with a write-filter while obviating reinstallation of an entire windows-based embedded image onto the local client and while allowing persistent configuration change across a reboot. The apparatus may include a retrieval module of the local client configured to, each time the local client boots up, automatically locate a remote repository server containing a configuration file and automatically obtain the configuration file from the repository server over a network. The apparatus may include an apply settings module of the local client configured to, each time the local client boots up, automatically load the configuration file, automatically parse at least a portion of the configuration file, and automatically apply, to the embedded image, a configuration change based on the at least a portion of the configuration file. | 2012-08-02 |
20120198224 | Encryption Keys Distribution for Conditional Access Software in TV Receiver SOC - A method for securely generating and distributing encryption keys includes generating, by a secured server, a pair of keys including a first key and a second key and providing, by a key distributing unit, the first key to a first recipient and a second key to a second recipient. The first recipient may use the first key to encrypt a data file and send the encrypted data file via a non-volatile memory device to a target subscriber. The second recipient may program the second key into a one-time-programmable register contained in a secure element during a manufacturing process. The secure element may further include a random-access memory configured to store an image of the encrypted data file, a read-only memory containing a boot code, and a processing unit coupled to the random-access memory and the read-only memory and operative to decrypt the encrypted data file. | 2012-08-02 |
20120198225 | COMPUTER SYSTEM FOR ACCESSING CONFIDENTIAL DATA BY MEANS OF AT LEAST ONE REMOTE UNIT AND REMOTE UNIT - A computer system for accessing confidential data via at least one remote unit | 2012-08-02 |
20120198226 | CHECKING A CONFIGURATION MODIFICATION FOR AN IED - Exemplary embodiments are directed to a system and method of checking, during regular operation of a Process Control PC or Substation Automation SA system, an intended configuration modification for a mission-critical IED. The IED receives, from an authenticated requestor, a modification request directed to IED configuration, parameter or setting data. The IED then checks the requested configuration modification, rejects it in case no approval or confirmation is made by an approver independent of the requestor, and otherwise accepts and implements it. The IED authenticates the approver prior to receiving the request, and stores, in a local memory, a configuration modification plausibility check provided by the approver. The stored plausibility check is then performed, by a plausibility checking unit, on the intended modification, and the latter is rejected or approved based on a result of the stored plausibility check when applied to specific circumstances of the configuration modification request. | 2012-08-02 |
20120198227 | CIPHER KEY GENERATION IN COMMUNICATION SYSTEM - Techniques are disclosed for generating a cipher key such that an encryption algorithm typically usable in accordance with a first security context can be used in accordance with a second security context. In one example, the first security context is a UMTS security context and the second security context is a GSM security context. | 2012-08-02 |
20120198228 | SYSTEM AND METHOD FOR DIGITAL USER AUTHENTICATION - A method according to a preferred embodiment can include receiving a request at a server from a private key module associated with a first user device; directing a request for a first portion of the private key from the server to a second user device; and, in response to a successful user challenge, creating a first portion of a digital signature and a second portion of a digital signature at the server. The method of the preferred embodiment can further include combining the first portion of the digital signature and the second portion of the digital signature; and delivering the digital signature to the first user device. The method of the preferred embodiment can function to secure the digital signature process by splitting or dividing the user's private key into two or more portions, each of which requires independent authorization from the user in order to create the digital signature. | 2012-08-02 |
20120198229 | INFORMATION PROCESSING APPARATUS, INFORMATION RECORDING MEDIUM MANUFACTURING APPARATUS, INFORMATION RECORDING MEDIUM, INFORMATION PROCESSING METHOD, INFORMATION RECORDING MEDIUM MANUFACTURING METHOD, AND COMPUTER PROGRAM - An information processing apparatus includes: a data processing unit that acquires content codes including a data processing program recorded in an information recording medium and executes data processing according to the content codes; and a memory that stores an apparatus certificate including an apparatus identifier of the information processing apparatus. The data processing unit is configured to execute an apparatus checking process applying the apparatus certificate stored in the memory on the basis of a code for apparatus checking process included in the content codes, acquire the apparatus identifier recorded in the apparatus certificate after the apparatus checking process, and execute data processing applying content codes corresponding to the acquired apparatus identifier. | 2012-08-02 |
20120198230 | Document Security System that Permits External Users to Gain Access to Secured Files - A system includes a server with an access manager configured to restrict access to files of an organization and maintain at least encryption keys for internal and external users and an external access server connected to the server and coupled between the server and a data network. The data network is configured to allow the external users use of the external access server. The external access server is also configured to permit file exchange between the internal users and the external users via the server. | 2012-08-02 |
20120198231 | SECURE COMMUNICATION DEVICE - The invention relates to a confidence core architecture that is more efficient in terms of design and evaluation than the usual architectures. The confidence core respects the partitioning principle of security recommendations, typically the partitioning of the red and black domains and the injection of keys. In this approach, the invention proposes the conversion of an existing single-interface component, namely an evaluated smart card component, into a multi-interface component that respects the partitioning principles. The component for carrying out the interface conversion is designed on a minimal and, if possible, exclusively hardware basis that only implements secure flow routing. | 2012-08-02 |
20120198232 | GENERALIZED POLICY SERVER - A scalable access filter is used together with others like it in a virtual private network to control access by users at clients in the network to information resources provided by servers in the network. Each access filter uses a local copy of an access control database to determine whether to grant an access request made by a user. Changes made by administrators in the local copies are propagated to all of the other local copies. Each user belongs to one or more user groups and each information resource belongs to one or more information sets. Access is permitted or denied according to access policies which define access in terms of the user groups and information sets. | 2012-08-02 |
20120198233 | METHOD FOR RECALLING A MESSAGE AND DEVICES THEREOF - A method for recalling a message and a device thereof are provided, thereby efficiently satisfying a message recall demand, and improving a service quality of a message service. The method includes: sending a message recall request to a message receiving device, in which the message recall request carries a message identifier of the message to be recalled and a message authentication header field, and the message authentication header field includes an encryption algorithm and a value generated by encrypting, through the encryption algorithm, a random number for authenticating the message, so that the message receiving device determines the message to be recalled according to the message identifier and the message authentication header field, and disposes of the message to be recalled according to a local policy and a delivery status of the message to be recalled; and receiving a message recall disposition result returned by the message receiving device. | 2012-08-02 |
20120198234 | METHOD AND APPARATUS FOR ENSURING THE INTEGRITY OF A DOWNLOADED DATA SET - The disclosed embodiments provide a system that ensures the integrity of a downloaded data set. During operation, a browser application executing on a computing device receives a data set that was signed using the private key of a host computer. The browser application stores this signed data set in a browser data store. Subsequently, the browser application also receives a public key from the host computer (e.g., while accessing a web page associated with the signed data set). The browser application ensures the integrity of the data set by executing scripted program code that: uses the public key to decode the signature for the data set; calculates a hash value for the signed data set; and compares the decoded signature with the hash value to validate the data set. | 2012-08-02 |
20120198235 | SECURE MESSAGING WITH READ-UNDENIABILITY AND DELETION-VERIFIABILITY - A cryptographically-secure component is used to provide read-undeniability and deletion-verifiability for messaging applications. When a messaging application of a sending node desires to send a message to a messaging application of a receiving node, the sending node requests an encryption key from the receiving node. The cryptographically-secure component of the receiving node generates an encryption key that is bound to a state of the receiving node. The messaging application of the sending node encrypts the message using the encryption key and sends the encrypted message to the messaging application of the receiving node. Because the encryption key used to encrypt the message is bound to the state associated with reading the message by the cryptographically-secure component, if the receiving node desires to decrypt and read the encrypted message, the receiving node may advance its state to the bound state to retrieve the decryption key. | 2012-08-02 |
20120198236 | SYSTEMS, DEVICES, AND METHODS FOR SECURELY TRANSMITTING A SECURITY PARAMETER TO A COMPUTING DEVICE - Embodiments of the systems, devices, and methods described herein generally facilitate the secure transmittal of security parameters. In accordance with at least one embodiment, a representation of first data comprising a password is generated at the first computing device as an audio signal. The audio signal is transmitted from the first computing device to the second computing device. The password is determined from the audio signal at the second computing device. A key exchange is performed between the first computing device and the second computing device wherein a key is derived at each of the first and second computing devices. In at least one embodiment, one or more security parameters (e.g. one or more public keys) are exchanged between the first and second computing devices, and techniques for securing the exchange of security parameters or authenticating exchanged security parameters are generally disclosed herein. | 2012-08-02 |
20120198237 | DOCUMENT MANAGEMENT SYSTEM AND METHOD - A document management system includes a number generator and/or a secure controller, and a document. The document includes a map-file for each participant in a workflow of the document. Corresponding, randomly generated nonces and/or complementary workflow assurance tokens are distributed within the respective map-files of neighboring participants by the number generator or the secure controller. The system includes a private key that recovers the respective corresponding, randomly generated nonce of a receiving one of the neighboring participants and/or the respective complementary workflow assurance token of the receiving one of the neighboring participants. A communication mechanism enables transmission of the recovered corresponding, randomly generated nonce of the receiving one of the neighboring participants or a signature generated by the receiving one of the neighboring participants to a sending one of the neighboring participants for verification. | 2012-08-02 |
20120198238 | METHOD FOR ESTABLISHING AN ELECTRONIC AUTHORIZATION FOR A USER BEARING AN ELECTRONIC IDENTITY DOCUMENT, AND METHOD FOR SUPERVISING SAID AUTHORIZATION - The invention relates to a method for generating and validating a digital authorization request, as well as to the method for supervising said authorization. The method of the invention enables guaranteeing, at any time and through a combination of a series of signatures, the identity of the bearer of the document and of the validating body. | 2012-08-02 |
20120198239 | METHOD AND APPARATUS FOR INPUT OF CODED IMAGE DATA - An image input device includes a means for inputting image data, a memory for storing secret information, and an operator for carrying out an operation using the image data and the secret information. | 2012-08-02 |
20120198240 | METHOD AND SYSTEM FOR ENTITY PUBLIC KEY ACQUIRING, CERTIFICATE VALIDATION AND AUTHENTICATION BY INTRODUCING AN ONLINE CREDIBLE THIRD PARTY - A method and system for entity public key acquiring, certificate validation and authentication by introducing an online credible third party is disclosed. The method includes the following steps: 1) an entity B transmits a message 1 to an entity A; 2) the entity A transmits a message 2 to a credible third party TP after receiving the message 1; 3) the credible third party TP determines the response RepTA after receiving the message 2; 4) the credible third party TP returns a message 3 to the entity A; 5) the entity A returns a message 4 to the entity B after receiving the message 3; 6) the entity B receives the message 4; 7) the entity B transmits a message 5 to the entity A; 8) the entity A receives the message 5. The present invention can achieve public key acquisition, certificate validation and authentication of the entity by integrating them in one protocol, thereby improving the execution efficiency and the effect of the protocol and facilitating the combination with various public key acquisition and public key certificate state enquiry protocols. The present invention suits a “user-access point-server” access network structure to meet the authentication requirement of the access network. | 2012-08-02 |
20120198241 | SYSTEMS AND METHODS FOR SECURING DATA - Systems and methods are provided for securing data. A processing device receives a data set and identifies a first subset of data from a first dimension of a multi-dimensional representation of the data set. The processing device encrypts the first subset of data using a first encryption technique to yield a first encrypted subset of data and replaces the first subset of data in the multi-dimensional representation of the data set with the first subset of encrypted data. The processing device then identifies a second subset of data from a second dimension of the multi-dimensional representation of the data set, with the second subset of data including at least a portion of the first subset of encrypted data, and encrypts the second subset of data using a second encryption technique to yield a second encrypted subset of data. | 2012-08-02 |
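The verification flow described in application 20120198234 above (decode the signature with the public key, calculate a hash of the data set, compare the two) can be sketched with toy "textbook" RSA. The parameters below are illustrative assumptions far too small for real use, and reducing the hash modulo n is a simplification of real padding schemes such as PKCS #1, not the patent's method:

```python
import hashlib

# Toy "textbook" RSA parameters (assumed for illustration; far too small for real use).
p, q = 61, 53
n = p * q          # modulus 3233
e = 17             # public exponent (the "public key" the browser receives)
d = 2753           # private exponent held by the host; e*d = 1 mod lcm(p-1, q-1)

def digest_int(data: bytes) -> int:
    # Hash of the data set, reduced mod n so it fits the toy modulus.
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(data: bytes) -> int:
    # Host side: transform the hash with the private key to produce the signature.
    return pow(digest_int(data), d, n)

def verify(data: bytes, signature: int) -> bool:
    # Browser side: decode the signature with the public key and compare it
    # with a freshly calculated hash of the stored data set.
    return pow(signature, e, n) == digest_int(data)
```

Any change to the signature (or to the signed data set) makes the decoded value disagree with the recomputed hash, so validation fails.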
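The two-pass encryption in application 20120198241 above can be illustrated with a toy sketch: rows of a two-dimensional representation are encrypted with a first keyed technique, then columns (which by then contain portions of the row ciphertext) are encrypted with a second. The SHA-256-derived XOR keystream below is a stand-in assumption for the patent's unspecified encryption techniques and is not secure:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream: iterated SHA-256 of the key (illustrative only, NOT secure).
    out, block = b"", key
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:length]

def xor_bytes(data: bytes, key: bytes) -> bytes:
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def encrypt_2d(matrix, row_key: bytes, col_key: bytes):
    # Pass 1: encrypt each subset of data along the first dimension (rows).
    rows = [xor_bytes(bytes(row), row_key + bytes([i])) for i, row in enumerate(matrix)]
    # Pass 2: encrypt along the second dimension (columns); each column now
    # includes portions of the first encrypted subset of data.
    n_rows, n_cols = len(rows), len(rows[0])
    cols = [xor_bytes(bytes(rows[i][j] for i in range(n_rows)), col_key + bytes([j]))
            for j in range(n_cols)]
    return [[cols[j][i] for j in range(n_cols)] for i in range(n_rows)]

def decrypt_2d(matrix, row_key: bytes, col_key: bytes):
    # XOR is its own inverse, so undo the column pass, then the row pass.
    n_rows, n_cols = len(matrix), len(matrix[0])
    cols = [xor_bytes(bytes(matrix[i][j] for i in range(n_rows)), col_key + bytes([j]))
            for j in range(n_cols)]
    rows = [bytes(cols[j][i] for j in range(n_cols)) for i in range(n_rows)]
    return [list(xor_bytes(row, row_key + bytes([i]))) for i, row in enumerate(rows)]
```

Because XOR is involutive, decryption simply applies the two passes in reverse order with the same keys.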