45th week of 2011 patent application highlights part 54 |
Patent application number | Title | Published |
20110276728 | METHOD AND APPARATUS FOR STORAGE I/O PATH CONFIGURATION - An aspect of the invention is directed to a method for storage I/O (input/output) path configuration in a system that includes a storage system connected via a network to a plurality of nodes. The method comprises receiving an I/O access to one or more storage volumes in the storage system from one of the nodes; if the I/O access is an initial I/O access to any of the storage volumes in the storage system from any of the nodes in the system, allowing the initial I/O access from the one node and prohibiting I/O access to the storage volumes in the storage system by other nodes in the system; and if the I/O access is not an initial I/O access to any of the storage volumes in the storage system from any of the nodes in the system, allowing the I/O access only if the I/O access is from the one node which made the initial I/O access and rejecting the I/O access for other nodes in the system. | 2011-11-10 |
20110276729 | TRANSITIONS BETWEEN ORDERED AND AD HOC I/O REQUEST QUEUEING - Disclosed is a computer implemented method and apparatus for queuing I/O requests to a pending queue. An I/O device driver coupled to a storage device sets a maximum ordered queue length, then receives an I/O request from an application. The I/O device driver determines whether the pending queue is sorted and, responsive to a determination that the pending queue is sorted, determines whether the queued I/O requests exceed the maximum ordered queue length. Responsive to a determination that the pending queue exceeds the maximum ordered queue length, the I/O device driver adds the I/O request based on a high pointer and points the high pointer to the I/O request. | 2011-11-10 |
20110276730 | PACKET BASED DATA TRANSFER SYSTEM AND METHOD FOR HOST-SLAVE INTERFACE - In a host-slave data transfer system, the slave device receives packet based data from an external device and stores the packet content in a buffer as data segments. The slave merges a plurality of data segments into data streams and transmits the data streams to the host. The host uses direct memory access (DMA) to unpack the data stream from the slave into individual data segments without memory copy. To enable the host to set up DMA, the slave transmits information regarding sizes of the data segments to the host beforehand via an outband channel, e.g. by transmitting the size information in headers and/or trailers inserted into previous data streams. The host utilizes the data segment size information to program descriptor tables, such that each descriptor in the descriptor tables causes one data segment in the data stream to be stored in the system memory of the host. | 2011-11-10 |
20110276731 | DUAL-PORT FUNCTIONALITY FOR A SINGLE-PORT CELL MEMORY DEVICE - A network node ( | 2011-11-10 |
20110276732 | PROGRAMMABLE QUEUE STRUCTURES FOR MULTIPROCESSORS - A command is received from a first agent via a first predetermined memory-mapped register, the first agent being one of multiple agents representing software processes, each being executed by one of processor cores of a network processor in a network element. A first queue associated with the command is identified based on the first predetermined memory-mapped register. A pointer is atomically read from a first hardware-based queue state register associated with the first queue. Data is atomically accessed at a memory location of the memory based on the pointer. The pointer stored in the first hardware-based queue state register is atomically updated, including incrementing the pointer of the first hardware-based queue state register, reading a queue size of the queue from a first hardware-based configuration register associated with the first queue, and wrapping around the pointer if the pointer reaches an end of the first queue based on the queue size. | 2011-11-10 |
20110276733 | Memory System And Device With Serialized Data Transfer - A memory system with serialized data transfer. The memory system includes a memory controller and a plurality of memory devices. The memory controller receives a plurality of write data values from a host and outputs the write data values as respective serial streams of bits. Each of the memory devices receives at least one of the serial streams of bits from the memory controller and converts the serial stream of bits to a set of parallel bits for storage. | 2011-11-10 |
20110276734 | USB Dedicated Charger Identification Circuit - In an embodiment, set forth by way of example and not limitation, a USB dedicated charger identification circuit includes a USB D+ port, a USB D− port, a first circuit conforming to a first identification protocol, a second circuit conforming to a second identification protocol, and logic selectively coupling one of the first circuit and the second circuit to the USB D+ port and the USB D− port. In an alternate embodiment set forth by way of example and not limitation, a method to provide USB charger identification includes providing a first USB charger identification at a USB D+ port and a D− port. Next, it is detected if the first USB charger identification was inappropriate. Then, if the first USB charger identification was inappropriate, a second USB charger identification is provided at the USB D+ port and the D− port. | 2011-11-10 |
20110276735 | INTERCONNECT, BUS SYSTEM WITH INTERCONNECT AND BUS SYSTEM OPERATING METHOD - Provided are an interconnect, a bus system with interconnect, and a bus system operating method. The bus system includes a master, slaves accessed by the master, and an interconnect. The interconnect connects the master with the slaves in response to selection bits identified in a master address provided by the master. | 2011-11-10 |
20110276736 | METHOD AND SYSTEM FOR A RFIC MASTER - Methods and systems for an RFIC master are disclosed. Aspects of one method may include configuring an on-chip programmable device that may function as a master on a bus that has at least one device interface, for example, an RFIC interface, coupled to the bus. The on-chip programmable device may generate at least one signal to control at least one device coupled to the at least one device interface. The on-chip programmable device may communicate the generated signal via the bus upon receiving an input timer signal and may be configured by writing at least one event data and an index-sample data to the on-chip programmable device. The index-sample data may comprise at least a count value and an event data index. When the count value equals a value of the timer signal, event data may be fetched and executed starting with the one specified by the event data index. | 2011-11-10 |
20110276737 | METHOD AND SYSTEM FOR REORDERING THE REQUEST QUEUE OF A HARDWARE ACCELERATOR - The invention discloses a system and method for reordering the request queue of a hardware accelerator, wherein the request queue stores a plurality of coprocessor request blocks (CRBs) to be input into the hardware accelerator. The system includes: a content addressable memory connected to the request queue for storing the state pointer of each CRB in the request queue at a same physical storage location in the request queue, receiving the state pointer of a new CRB in response to the new CRB asking to join the request queue, and outputting the physical storage location of a CRB in the request queue whose state pointer stored in the content addressable memory is the same as the state pointer of the new CRB; and a CRB insertion module for receiving the physical storage location of a CRB in the request queue whose state pointer is the same as the state pointer of the new CRB and inputting the new CRB and that CRB adjacently into the hardware accelerator in the order of entering the request queue. The system and method can improve the processing efficiency of the hardware accelerator. | 2011-11-10 |
20110276738 | SENSOR NODE INCLUDING GENERAL-PURPOSE INTERFACE PORT AND PLUG AND PLAY FUNCTION, SENSOR BOARD INCLUDING GENERAL-PURPOSE INTERFACE PORT AND SENSOR DEVICE DRIVER, GENERAL-PURPOSE INTERFACE PORT, AND OPERATION METHOD OF SENSOR NODE, SENSOR BOARD, AND GENERAL-PURPOSE INTERFACE PORT - Provided is a general-purpose interface port that may interface with a sensor board including multiple types of sensor device drivers and download a sensor device driver from the sensor board, and a sensor node that may recognize a type of sensor included in the sensor board using the downloaded sensor device driver, the sensor node including a micro control unit that may process sensing data received from the sensor board, thereby providing a plug and play function between a micro control unit of a sensor node and a sensing unit including a sensor board in a Ubiquitous Sensor Network (USN) or a Wireless Sensor Network (WSN). | 2011-11-10 |
20110276739 | INTEGRATED MEMORY CONTROL APPARATUS - An integrated memory control apparatus including a first interface decoder, a second interface decoder and an interface controller is provided. Wherein, the first interface decoder is coupled to a control chip through a first serial peripheral interface (SPI), the second interface decoder is coupled to a micro-processor unit through a general transmission interface, and the interface controller is coupled to a memory through a second SPI. When the interface controller receives the request signals from the control chip and the micro-processor unit, the control chip may correctly read data from the memory through the first and second SPI. On the other hand, the micro-processor unit may stop reading data from the memory through the general transmission interface. Therefore, the control chip and the micro-processor unit may share the same memory. | 2011-11-10 |
20110276740 | CONTROLLER FOR SOLID STATE DISK WHICH CONTROLS ACCESS TO MEMORY BANK - A controller for a solid state disk is provided. The controller includes a storage module to store an index of at least one idle bank among a plurality of memory banks, and a control module to control an access to the at least one idle bank using the stored index. Here, the access to the at least one idle bank may be controlled based on a state of a channel corresponding to each of the at least one idle bank. | 2011-11-10 |
20110276741 | MAINTAINING REVERSE MAPPINGS IN A VIRTUALIZED COMPUTER SYSTEM - For a virtual memory of a virtualized computer system in which a virtual page is mapped to a guest physical page which is backed by a machine page and in which a shadow page table entry directly maps the virtual page to the machine page, reverse mappings of guest physical pages are optimized by removing the reverse mappings of certain immutable guest physical pages. An immutable guest physical memory page is identified, and existing reverse mappings corresponding to the immutable guest physical page are removed. New reverse mappings corresponding to the identified immutable guest physical page are no longer added. | 2011-11-10 |
20110276742 | Characterizing Multiple Resource Utilization Using a Relationship Model to Optimize Memory Utilization in a Virtual Machine Environment - An approach is provided that uses a hypervisor to allocate a shared memory pool amongst a set of partitions (e.g., guest operating systems) being managed by the hypervisor. The hypervisor retrieves memory related metrics from shared data structures stored in a memory, with each of the shared data structures corresponding to a different one of the partitions. The memory related metrics correspond to a usage of the shared memory pool allocated to the corresponding partition. The hypervisor identifies a memory stress associated with each of the partitions with this identification based in part on the memory related metrics retrieved from the shared data structures. The hypervisor then reallocates the shared memory pool amongst the plurality of partitions based on the identified memory stress of the plurality of partitions. | 2011-11-10 |
20110276743 | USING EXTERNAL MEMORY DEVICES TO IMPROVE SYSTEM PERFORMANCE - The invention is directed towards a system and method that utilizes external memory devices to cache sectors from a rotating storage device (e.g., a hard drive) to improve system performance. When an external memory device (EMD) is plugged into the computing device or onto a network in which the computing device is connected, the system recognizes the EMD and populates the EMD with disk sectors. The system routes I/O read requests directed to the disk sector to the EMD cache instead of the actual disk sector. The use of EMDs increases performance and productivity on the computing device systems for a fraction of the cost of adding memory to the computing device. | 2011-11-10 |
20110276744 | FLASH MEMORY CACHE INCLUDING FOR USE WITH PERSISTENT KEY-VALUE STORE - Described is using flash memory, RAM-based data structures and mechanisms to provide a flash store for caching data items (e.g., key-value pairs) in flash pages. A RAM-based index maps data items to flash pages, and a RAM-based write buffer maintains data items to be written to the flash store, e.g., when a full page can be written. A recycle mechanism makes used pages in the flash store available by destaging a data item to a hard disk or reinserting it into the write buffer, based on its access pattern. The flash store may be used in a data deduplication system, in which the data items comprise chunk-identifier, metadata pairs, in which each chunk-identifier corresponds to a hash of a chunk of data. The RAM and flash are accessed with the chunk-identifier (e.g., as a key) to determine whether a chunk is a new chunk or a duplicate. | 2011-11-10 |
20110276745 | TECHNIQUES FOR WRITING DATA TO DIFFERENT PORTIONS OF STORAGE DEVICES BASED ON WRITE FREQUENCY - Techniques for writing data to different portions of storage devices based on write frequencies are disclosed. Frequencies of data writes to various portions of a memory are monitored. The memory includes various storage technologies. Each portion includes one of the storage technologies and has a respective lifetime. An order that the portions are written into and recycled is dynamically managed to equalize respective life expectancies of the portions in view of differences in endurance values of the portions, the monitored frequencies of data writes, and the lifetimes. In some embodiments, the storage technologies include Single-Level Cell (SLC) flash memory storage technology and Multi-Level Cell (MLC) flash memory storage technology. The SLC and MLC flash memory storage technologies are optionally integrated in one device. In some embodiments, the storage technologies include two or more different types of SLC flash memory storage technologies, optionally integrated in one device. | 2011-11-10 |
20110276746 | CACHING STORAGE ADAPTER ARCHITECTURE - An interface adapter includes a storage module including non-volatile random access memory (RAM), and a lookup module. The storage module is configured to store metadata in the non-volatile RAM. The metadata identifies data from an external storage device cached in a solid-state storage device. The lookup module is configured to receive a read request. The lookup module is further configured to, based on the metadata and in response to the read request, selectively provide cached data from the solid-state storage device or provide second data retrieved from the external storage device. | 2011-11-10 |
20110276747 | SOFTWARE MANAGEMENT WITH HARDWARE TRAVERSAL OF FRAGMENTED LLR MEMORY - Certain aspects of the present disclosure relate to a method and apparatus for processing wireless communications. According to certain aspects, a linked list of chunks of memory used to store logarithmic likelihood ratio (LLR) values for a transport block is generated. Each chunk holds LLR values for a code block of the transport block. The linked list is then provided to a hardware circuit for traversal. According to certain aspects, the hardware circuit may be an application specific integrated circuit (ASIC) processor or field programmable gate array (FPGA) configured to traverse the linked list of chunks of memory used to store LLR values. | 2011-11-10 |
20110276748 | NON-VOLATILE STORAGE DEVICE, HOST DEVICE, STORAGE SYSTEM, DATA COMMUNICATION METHOD AND PROGRAM - In a memory system including a host device and one or more nonvolatile memory devices, the host device reads, from a nonvolatile memory device connected in the system, a boot code used to operate a CPU of the host device before the CPU is activated. The boot code reading process is required to be performed with a simple method. A host device ( | 2011-11-10 |
20110276749 | MULTI-BIT-PER-CELL FLASH MEMORY DEVICE WITH NON-BIJECTIVE MAPPING - To store a plurality of input bits, the bits are mapped to a corresponding programmed state of one or more memory cells and the cell(s) is/are programmed to that corresponding programmed state. The mapping may be many-to-one or may be an “into” generalized Gray mapping. The cell(s) is/are read to provide a read state value that is transformed into a plurality of output bits, for example by maximum likelihood decoding or by mapping the read state value into a plurality of soft bits and then decoding the soft bits. | 2011-11-10 |
20110276750 | APPARATUS AND METHOD FOR PROCESSING DATA OF FLASH MEMORY - A memory device including a data region storing main data, a first index region storing count data, and a second index region storing inverted count data, where the data region, the first index region, and the second index region are included in one logical address. | 2011-11-10 |
20110276751 | INTEGRATED MEMORY CONTROL APPARATUS AND METHOD THEREOF - An integrated memory control apparatus including a first interface decoder, a second interface decoder and an interface controller is provided. Wherein, the first interface decoder is coupled to a control chip through a first serial peripheral interface (SPI), the second interface decoder is coupled to a micro-processor unit through a general transmission interface, and the interface controller is coupled to a memory through a second SPI. When the interface controller receives the request signals from the control chip and the micro-processor unit, the control chip may correctly read data from the memory through the first and second SPI. On the other hand, the micro-processor unit may stop reading data from the memory through the general transmission interface. Therefore, the control chip and the micro-processor unit may share the same memory. | 2011-11-10 |
20110276752 | POWER EFFICIENT AND RULE MOVEMENT OPTIMIZED TCAM MANAGEMENT - A network device allocates a number of blocks of memory in a ternary content-addressable memory (TCAM) of the network device to each database of multiple databases, and assigns unused blocks of memory of the TCAM to a free pool. The network device also detects execution of a run mechanism by the TCAM, and allocates, based on the execution of the run mechanism, one of the unused blocks of memory to a filter or rule of one of the multiple databases. | 2011-11-10 |
20110276753 | LOCATING LOGICAL VOLUME RECORDS ON A PHYSICAL STACKED VOLUME - According to one embodiment, a method for accessing host data records stored on a VTS system includes receiving a mount request to access at least one host data record on a VTS system, determining a number of host compressed data records per physical block on a sequential access storage medium, determining a PBID that corresponds to the requested at least one host data record, accessing a physical block on the sequential access storage medium corresponding to the PBID, and outputting the physical block without outputting an entire logical volume that the physical block is stored to. In another embodiment, a VTS system includes random access storage, sequential access storage, support for at least one virtual volume, a storage manager having logic for determining a PBID that corresponds to a SLBID, and logic for performing the above described method. Other methods, systems, and computer program products are also described. | 2011-11-10 |
20110276754 | PARTIAL VOLUME ACCESS IN A PHYSICAL STACKED VOLUME - In one embodiment, a VTS system includes a tape volume cache, a storage drive for interacting with sequential access storage media; logic for receiving a mount request to access host data record(s) stored on a storage medium, the mount request including a virtual volume identifier of a logical volume and a logical block identifier of the first requested host data record therein; logic for issuing a locate command to position the sequential access storage medium to about a physical block in the logical volume having at least a portion of the requested host data record(s) therein based on the virtual volume identifier and the logical block identifier; logic for creating and supporting a partial virtual volume in the tape volume cache; and logic for copying at least the physical block to the partial virtual volume. Other systems, methods, and computer program products are also described, according to other embodiments. | 2011-11-10 |
20110276755 | METHOD FOR ANALYZING PERFORMANCE INFORMATION - A performance information display method using a computer, includes the steps, in the computer, of reading out information data of a storage device previously stored in a storage device and information data of a plurality of devices utilizing the storage device, displaying an identifier of the storage device and identifiers of a plurality of devices utilizing the storage device on a screen on the basis of the information data read out, accepting a command to select the displayed identifier of the storage device, and displaying performance information data of the devices utilizing the selected storage device in association on the basis of the accepted command and the information data read out. | 2011-11-10 |
20110276756 | MAPPING LOCATIONS OF LOGICAL VOLUME RECORDS ON A PHYSICAL STACKED VOLUME - In one embodiment, a method for accessing host data records stored in a VTS system includes receiving a mount request to access at least one host data record, determining a SLBID corresponding to the requested host data records, determining a PBID that corresponds to the SLBID, accessing a physical block on a sequential access storage medium corresponding to the PBID, and outputting at least the physical block corresponding to the PBID without outputting an entire logical volume that the physical block is stored to. According to another embodiment, a VTS system includes random access storage, sequential access storage, support for at least one virtual volume, a storage manager having logic for determining a PBID that corresponds to a SLBID, and logic for copying a portion of a logical volume from the sequential access storage to the random access storage without copying the entire logical volume. Other embodiments are disclosed also. | 2011-11-10 |
20110276757 | STORAGE CONTROL DEVICE, AND CONTROL METHOD FOR CACHE MEMORY - The storage control device of the present invention uses a plurality of queues to manage cache segments which are in use, so as to retain cache segments which contain large amounts of data for long periods of time. One of the queues manages segments in which large amounts of valid data are stored. Another queue manages segments in which small amounts of valid data are stored. If the number of unused segments becomes insufficient, then a segment which is positioned at the LRU end of the other queue is released, and is shifted to a free queue. Due to the use of this other queue, it is possible to retain segments in which comparatively large amounts of data are stored for comparatively long periods of time. | 2011-11-10 |
20110276758 | LOAD BALANCING OF DATA READS IN STORAGE ENVIRONMENTS - Exemplary method, system, and computer program product embodiments for, within a data storage system performing data mirroring, performing load balancing pursuant to completing a read request. At least one of a preferred storage controller and preferred storage device to accommodate the read request is determined by performing one of selecting a request queue having a closest offset to an offset of the read request, selecting a request queue having a most requests within a predetermined distance of the offset of the read request, selecting a request queue having a closest median offset to the offset of the read request, selecting a request queue having a closest average offset to the offset of the read request, and selecting a request queue having a predetermined additional number of entries than another request queue. The selected request queue is associated with the preferred storage controller and the preferred storage device. | 2011-11-10 |
20110276759 | DATA STORAGE SYSTEM AND CONTROL METHOD THEREOF - The invention discloses a data storage system and a control method thereof. The data storage system according to the invention includes N groups of storage devices, where N is an integer larger than 1. The invention judges whether the use information of one of the batches of data satisfies the set of condition thresholds for the group of storage devices where that batch of data is stored and, if not, re-allocates that batch of data to one of the groups of storage devices whose condition thresholds are satisfied by the use information of that batch of data, and updates the virtual drive locations of that batch of data mapping to the logical locations of the storage devices. | 2011-11-10 |
20110276760 | NON-COMMITTING STORE INSTRUCTIONS - Techniques relating to a processor that supports a non-committing store instruction that is executable during a scouting thread to provide data to a subsequently executed load instruction. The processor may include a memory access unit configured to perform an instance of the non-committing store instruction by storing a value in an entry of a store buffer without committing the instance of the non-committing store instruction. In response to subsequently receiving an instance of a load instruction of the scouting thread that specifies a load from the memory address, the memory access unit is configured to perform the instance of the load instruction by retrieving the value. The memory access unit may retrieve the value from the store buffer or from a cache of the processor. | 2011-11-10 |
20110276761 | ACCELERATING SOFTWARE LOOKUPS BY USING BUFFERED OR EPHEMERAL STORES - A method and apparatus for accelerating lookups in an address based table is herein described. When an address and value pair is added to an address based table, the value is privately stored in the address to allow for quick and efficient local access to the value. In response to the private store, a cache line holding the value is transitioned to a private state, to ensure the value is not made globally visible. Upon eviction of the privately held cache line, the information is not written-back to ensure locality of the value. In one embodiment, the address based table includes a transactional write buffer to hold addresses, which correspond to tentatively updated values during a transaction. Accesses to the tentative values during the transaction may be accelerated through use of annotation bits and private stores as discussed herein. Upon commit of the transaction, the values are copied to the location to make the updates globally visible. | 2011-11-10 |
20110276762 | COORDINATED WRITEBACK OF DIRTY CACHELINES - A data processing system includes a processor core and a cache memory hierarchy coupled to the processor core. The cache memory hierarchy includes at least one upper level cache and a lowest level cache. A memory controller is coupled to the lowest level cache and to a system memory and includes a physical write queue from which the memory controller writes data to the system memory. The memory controller initiates accesses to the lowest level cache to place into the physical write queue selected cachelines having spatial locality with data present in the physical write queue. | 2011-11-10 |
20110276763 | MEMORY BUS WRITE PRIORITIZATION - A data processing system includes a multi-level cache hierarchy including a lowest level cache, a processor core coupled to the multi-level cache hierarchy, and a memory controller coupled to the lowest level cache and to a memory bus of a system memory. The memory controller includes a physical read queue that buffers data read from the system memory via the memory bus and a physical write queue that buffers data to be written to the system memory via the memory bus. The memory controller grants priority to write operations over read operations on the memory bus based upon a number of dirty cachelines in the lowest level cache memory. | 2011-11-10 |
20110276764 | CRACKING DESTRUCTIVELY OVERLAPPING OPERANDS IN VARIABLE LENGTH INSTRUCTIONS - A method, information processing system, and computer program product manage computer executable instructions. At least one machine instruction for execution is received. The at least one machine instruction is analyzed. The machine instruction is identified as a predefined instruction for storing a variable length first operand in a memory location. Responsive to this identification and based on fields of the machine instruction, a relative location of a variable length second operand of the instruction with respect to the location of the first operand is determined. Responsive to the relative location having a predefined relationship, a first cracking operation is performed. The first cracking operation cracks the instruction into a first set of micro-ops (Uops) to be executed in parallel. The first set of Uops is for storing a first plurality of first blocks in the first operand. Each of the first blocks to be stored is identical. The first set of Uops is executed. | 2011-11-10 |
20110276765 | System and Method for Management of Cache Configuration - Systems and methods for managing cache configurations are disclosed. In accordance with a method, a system management control module may receive access rights of a host to a logical storage unit and may also receive a desired caching policy for caching data associated with the logical storage unit and the host. The system management control module may determine an allowable caching policy indicator for the logical storage unit. The allowable caching policy indicator may indicate whether caching is permitted for data associated with input/output operations between the host and the logical storage unit. The system management control module may further set a caching policy for data associated with input/output operations between the host and the logical storage unit, based on at least one of the desired caching policy and the allowable caching policy indicator. The system management control module may also communicate the caching policy to the host. | 2011-11-10 |
20110276766 | CONFIGURABLE MEMORY CONTROLLER - Controlling access to memory includes receiving a plurality of memory access requests and assigning corresponding time values to each. The assigned time values are adjusted based upon a clock pulse and a priority access list is generated. Factors considered include missed access deadlines, closeness to missing access deadlines, and whether a page is open. The highest ranked client is then passed to a sequencer to allow the requested access. Time values may be assigned and adjusted according to client ID or client type (latency or bandwidth). A plurality of power modes of operation are defined wherein operation in a selected power mode of operation is based at least in part on the assigned or adjusted time values. The processing is performed in hardware in parallel (at the same time) by associated logic circuits. | 2011-11-10 |
20110276767 | Rate Matching and De-Rate Matching on Digital Signal Processors - Provided are systems and methods for rate matching and de-rate matching on digital signal processors. For example, there is a system for rate matching and de-rate matching, where the system includes a memory configured to contain a plurality of blocks of data, and a digital signal processor configured to pre-compute permutation parameters common to the plurality of blocks, wherein the plurality of blocks are subject to a set of given puncturing parameters. The digital signal processor is configured to process each block in the plurality of blocks by computing a block signature from pre-computed puncturing thresholds, matching the block signature to one of a set of pre-computed zone signatures, deriving a zone index corresponding to the one matched pre-computed zone signature, and applying pre-computed permutation and puncturing transformations corresponding to the zone index to the block. | 2011-11-10 |
20110276768 | I/O COMMAND HANDLING IN BACKUP - Systems and methods for input/output command management. In some cases of a write command received from a host, a maximum capacity limit relating to primary memory may be disregarded because data relating to the write command is written to backup memory prior to acknowledging the write command. In some of these cases, timeout is less likely than if the maximum capacity limit had been respected. | 2011-11-10 |
20110276769 | DATA PROCESSOR - A system is described that generates reports from very large data sets. The reports are generated in real-time (or close to real time). Data from the large data set is replicated to a buffer as it arrives in the system. Once sufficient data is obtained (e.g. when the buffer is filled), the data is processed to generate a report. The report may summarize the data obtained and may be stored for later use. By storing summary data instead of the full data, the data storage requirements are reduced. | 2011-11-10 |
20110276770 | METHOD AND SYSTEM FOR ANALYSING MOST RECENTLY USED REGISTRY KEYS - A method is disclosed for analysing a computing device including a non-volatile storage medium in which a set of snapshots is stored. Each snapshot comprises at least one most recently used, MRU, key for an application, the at least one MRU key having a plurality of elements. The method comprises comparing a first MRU key from a first snapshot taken at a first time with a corresponding second MRU key from a second snapshot taken at a second time temporally following the first time. If the second MRU key has a second element identified as less recently used than a first element of the second MRU key, and the second element of the first MRU key is identified as not being less recently used than the first element of the first MRU key, the first element is labelled as having been newly modified between the first and second times. | 2011-11-10 |
20110276771 | STORAGE SYSTEM - A storage system includes: an identification information providing means that provides identification information distinguishing a group of data requested to be stored, to the group of data; a data set generating means that divides storage target data as part of the group of data into multiple pieces and makes the data redundant, thereby generating a data set composed of multiple fragment data; and a distribution storage controlling means that distributes the fragment data composing the data set and stores the fragment data, respectively, in the same positions within storage regions formed in the respective storing means, thereby storing the storage target data. The distribution storage controlling means stores the fragment data composing respective data sets, corresponding to multiple storage target data included in the group of data provided with the same identification information, into the respective storage regions so that the storing positions within the respective storage regions become successive. | 2011-11-10 |
20110276772 | MANAGEMENT APPARATUS AND MANAGEMENT METHOD - Proposed are a management apparatus and a management method capable of supporting and executing storage operation and management so as to improve the utilization ratio of storage resources. With this management apparatus for managing a storage apparatus equipped with a function for providing a virtual logical volume to a host system, and dynamically allocating a storage area to the virtual logical volume upon receiving a write request for writing data into the virtual logical volume, the capacity utilization of the virtual logical volume by a file system is acquired, the capacity utilization of the virtual logical volume, as configured from the capacity of the storage area allocated to the virtual logical volume, is acquired, and the capacity utilization of the file system and the capacity utilization of the corresponding virtual logical volume are associated and displayed. | 2011-11-10 |
20110276773 | METHOD AND SYSTEM FOR GENERATING CONSISTENT SNAPSHOTS FOR A GROUP OF DATA OBJECTS - Snapshots that are consistent across a group of data objects are generated. The snapshots are initiated by a coordinator, which transmits a sequence of commands to each storage node hosting a data object within a group of data objects. The first command prepares a data object for a snapshot. After a data object has been successfully prepared, an acknowledgment is sent to the coordinator. Once all appropriate acknowledgments are received, the coordinator sends a command to confirm that a snapshot has been created for each data object in the respective group. After receiving this confirmation, the coordinator takes action to confirm or record the successful completion of the group-consistent snapshot. | 2011-11-10 |
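The prepare/acknowledge/confirm sequence in the abstract above follows the shape of a two-phase protocol. The following toy sketch simulates it in-process; the class and method names, and the node-side behavior, are illustrative assumptions, not details from the patent:

```python
# Two-phase, coordinator-driven group snapshot (illustrative sketch).
# Phase 1: every storage node hosting an object in the group prepares.
# Phase 2: only after all acknowledgments, the snapshot is committed.

class StorageNode:
    def __init__(self, name):
        self.name = name
        self.prepared = False
        self.snapshots = []

    def prepare(self) -> bool:
        self.prepared = True        # e.g. quiesce writes, flush buffers
        return True                 # acknowledgment back to coordinator

    def snapshot(self, snap_id: str) -> bool:
        if not self.prepared:
            return False
        self.snapshots.append(snap_id)
        self.prepared = False       # resume normal operation
        return True

def group_consistent_snapshot(nodes, snap_id):
    """Coordinator: prepare all nodes, then commit only if all acked."""
    if not all(node.prepare() for node in nodes):
        return False                # a node failed to prepare; abort
    if not all(node.snapshot(snap_id) for node in nodes):
        return False
    return True                     # record successful completion
```

A real implementation would add timeouts and an explicit abort path so a slow node cannot leave the group half-prepared.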
20110276774 | OPERATION MANAGEMENT SYSTEM, MANAGEMENT APPARATUS, MANAGEMENT METHOD AND MANAGEMENT PROGRAM - A management apparatus has a control unit for realizing a management function through a comprehensive process of characteristic information of a storage apparatus, a connecting apparatus and a computer. The management apparatus also has an interface for receiving characteristic information from the storage apparatus, connecting apparatus and computer, in accordance with the standard protocol among the management apparatus, storage apparatus, connecting apparatus and computer. Moreover, an integrated management apparatus is also provided for integrated management based on the result of realization of a plurality of management functions. This integrated management apparatus includes an interface for receiving the result of realization of the management function from the management apparatus, in accordance with the standard protocol between the management apparatus and integrated management apparatus. | 2011-11-10 |
20110276775 | METHOD AND APPARATUS FOR CONCURRENTLY READING A PLURALITY OF MEMORY DEVICES USING A SINGLE BUFFER - A composite memory device including discrete memory devices and a bridge device for controlling the discrete memory devices in response to global memory control signals having a format or protocol that is incompatible with the memory devices. The discrete memory devices can be commercial off-the-shelf memory devices or custom memory devices which respond to native, or local memory control signals. The global and local memory control signals include commands and command signals each having different formats. The composite memory device includes a system in package including the semiconductor dies of the discrete memory devices and the bridge device, or can include a printed circuit board having packaged discrete memory devices and a packaged bridge device mounted thereto. | 2011-11-10 |
20110276776 | Addressing for Huge Direct-Mapped Object Systems - A method, computing system, and computer program product are provided for quickly and space-efficiently mapping an object's address to its home node in a computing system with a very large (possibly multi-petabyte) data set. The addresses of objects comprise three fields: a chunk number, a region sub-index within the chunk, and an offset within the region, with chunks being used to achieve good compromise between small lookup tables and reducing waste of usable virtual address space. | 2011-11-10 |
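The three-field address split described above can be sketched as plain bit-field extraction. The field widths below are assumptions chosen for illustration (the patent does not fix them here), as is the per-chunk lookup-table shape:

```python
# Illustrative decomposition of an object address into
# (chunk number, region sub-index, offset within region).
# Widths are assumed: 20-bit chunk, 14-bit region, 30-bit offset.

OFFSET_BITS = 30
REGION_BITS = 14
CHUNK_BITS = 20

def split_address(addr: int) -> tuple[int, int, int]:
    """Decompose an object address into (chunk, region, offset)."""
    offset = addr & ((1 << OFFSET_BITS) - 1)
    region = (addr >> OFFSET_BITS) & ((1 << REGION_BITS) - 1)
    chunk = (addr >> (OFFSET_BITS + REGION_BITS)) & ((1 << CHUNK_BITS) - 1)
    return chunk, region, offset

def home_node(addr: int, chunk_table: dict[int, dict[int, int]]) -> int:
    """Look up the home node: a small per-chunk table maps each
    region sub-index to the node that owns that region."""
    chunk, region, _ = split_address(addr)
    return chunk_table[chunk][region]
```

The compromise the abstract mentions is visible here: a coarser chunk field keeps `chunk_table` small, while the region sub-index keeps whole chunks from being wasted on one node.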
20110276777 | DATA STORAGE DEVICE AND RELATED METHOD OF OPERATION - A method of storing data in a storage medium of a data storage device comprises storing input data in the storage medium, and reading the input data from the storage medium and compressing the read data during a background operation of the data storage device. | 2011-11-10 |
20110276778 | EFFICIENT SUPPORT OF MULTIPLE PAGE SIZE SEGMENTS - An apparatus, system, and method are disclosed for improved support of MPS segments in a microprocessor. The virtual address is used to generate possible TLB index values for each of the supported page sizes of the MPS segment associated with the virtual address. The possible TLB index values may be a hash generated using the virtual address and one of the supported page sizes. The TLB is searched for actual TLB index values that match the possible TLB index values calculated using the different supported page sizes. TLB entries associated with those actual TLB index values are checked to determine whether any TLB entry is associated with the virtual address. If no match is found, the real address is retrieved from the PT. The actual page size in the PT is used to generate an actual TLB index value for the virtual address and the TLB entry is inserted into the TLB. | 2011-11-10 |
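The probe loop above — one candidate TLB index per supported page size, each a hash of the virtual address and that size — can be sketched as follows. The hash function, table geometry, and supported page sizes are assumptions for illustration only:

```python
# Multi-page-size TLB probe (illustrative sketch). For each supported
# page size, hash the virtual address into a candidate set index, then
# check the entries in that set for a match on VPN and page size.

TLB_SETS = 256
SUPPORTED_PAGE_SHIFTS = [12, 16, 24]  # 4 KiB, 64 KiB, 16 MiB (assumed)

def tlb_index(vaddr: int, page_shift: int) -> int:
    """Hash the virtual page number with the page size to get a set index."""
    vpn = vaddr >> page_shift
    return (vpn ^ page_shift) % TLB_SETS

def lookup(tlb: dict[int, list[tuple[int, int, int]]], vaddr: int):
    """Probe one candidate set per supported page size; an entry
    (vpn, page_shift, frame_base) matches if it covers the address."""
    for shift in SUPPORTED_PAGE_SHIFTS:
        idx = tlb_index(vaddr, shift)
        for vpn, entry_shift, frame_base in tlb.get(idx, []):
            if entry_shift == shift and (vaddr >> shift) == vpn:
                return frame_base | (vaddr & ((1 << shift) - 1))
    return None  # miss: walk the page table, then insert with actual size
```

As the abstract notes, only on a miss is the page table consulted, and the entry is then inserted under the index computed from the page's actual size.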
20110276779 | MEMORY MAPPED INPUT/OUTPUT BUS ADDRESS RANGE TRANSLATION - In an embodiment, a north chip receives a secondary bus identifier that identifies a bus that is immediately downstream from a bridge in a south chip, a subordinate bus identifier that identifies a highest bus identifier of all of buses reachable downstream of the bridge, and an MMIO bus address range that comprises a memory base and a memory limit. The north chip writes a translation of a bridge identifier and a south chip identifier to the secondary bus identifier, the subordinate bus identifier, and the MMIO bus address range. The north chip sends the secondary bus identifier, the subordinate bus identifier, the memory base, and the memory limit to the bridge. The bridge stores the secondary bus identifier, the subordinate bus identifier, the memory base, and the memory limit in the bridge. | 2011-11-10 |
20110276780 | Fast and Low-RAM-Footprint Indexing for Data Deduplication - The subject disclosure is directed towards a data deduplication technology in which a hash index service's index maintains a hash index in a secondary storage device such as a hard drive, along with a compact index table and look-ahead cache in RAM that operate to reduce the I/O to access the secondary storage device during deduplication operations. Also described is a session cache for maintaining data during a deduplication session, and encoding of a read-only compact index table for efficiency. | 2011-11-10 |
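The core idea — a compact in-RAM table of truncated hash signatures that filters lookups so the full index on secondary storage is consulted only on probable hits — can be sketched as below. The signature width, names, and the dictionaries standing in for RAM and disk structures are illustrative assumptions:

```python
# Two-level deduplication index (illustrative sketch): the compact
# in-RAM table screens out most misses cheaply; the full hash index
# (here a dict standing in for secondary storage) is read only when
# the truncated signature matches.

import hashlib

class DedupIndex:
    def __init__(self):
        self.compact = {}        # truncated signature -> offset (RAM)
        self.disk_index = {}     # full hash -> chunk offset ("disk")
        self.store = []          # deduplicated chunk store

    def put(self, chunk: bytes) -> int:
        full = hashlib.sha256(chunk).digest()
        sig = int.from_bytes(full[:2], "big")  # 16-bit signature (assumed)
        if sig in self.compact:
            # Probable hit: only now pay the "I/O" to the full index.
            offset = self.disk_index.get(full)
            if offset is not None:
                return offset    # duplicate chunk, no new storage
        offset = len(self.store)
        self.store.append(chunk)
        self.compact[sig] = offset
        self.disk_index[full] = offset
        return offset
```

Signature collisions only cost an extra full-index read; correctness always rests on the full hash comparison.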
20110276781 | Fast and Low-RAM-Footprint Indexing for Data Deduplication - The subject disclosure is directed towards a data deduplication technology in which a hash index service's index maintains a hash index in a secondary storage device such as a hard drive, along with a compact index table and look-ahead cache in RAM that operate to reduce the I/O to access the secondary storage device during deduplication operations. Also described is a session cache for maintaining data during a deduplication session, and encoding of a read-only compact index table for efficiency. | 2011-11-10 |
20110276782 | RUNNING SUBTRACT AND RUNNING DIVIDE INSTRUCTIONS FOR PROCESSING VECTORS - The described embodiments provide a processor for generating a result vector with subtracted or mathematically divided values from a first input vector. During operation, the processor receives the first input vector, a second input vector, and a control vector, and optionally receives a predicate vector. The processor then records a value from an element at a key element position in the second input vector into a base value. Next, the processor generates a result vector. When generating the result vector, for each active element in the result vector to the right of the key element position, the processor is configured to set the element in the result vector equal to the base value minus a total of the values in each relevant element of the first input vector or to set the element in the result vector equal to the result of dividing the base value by a value in each relevant element of the first input vector, wherein the relevant elements include relevant elements from an element at the key element position to and including a predetermined element in the first input vector. | 2011-11-10 |
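The subtract form of the instruction above amounts to a running subtraction from a base value. The scalar sketch below uses one plausible reading of "relevant elements" (those from the key position up to, but not including, the current element); that convention, the pass-through behavior of inactive elements, and the function shape are illustrative assumptions:

```python
# Running-subtract result vector (illustrative sketch). The base value
# comes from the second input vector at the key position; each active
# element to the right of it receives the base minus the running total
# of first-input values accumulated so far.

def running_subtract(first, second, key, active):
    """first, second: input vectors; key: key element position;
    active: boolean mask selecting result elements to compute."""
    result = list(second)      # inactive elements pass through (assumed)
    base = second[key]
    running = 0
    for i in range(key, len(first)):
        if i > key and active[i]:
            result[i] = base - running
        running += first[i]
    return result
```

The running-divide variant would replace `base - running` with repeated division by the relevant elements, under the same element convention.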
20110276783 | THREAD FAIRNESS ON A MULTI-THREADED PROCESSOR WITH MULTI-CYCLE CRYPTOGRAPHIC OPERATIONS - Systems and methods for efficient execution of operations in a multi-threaded processor. Each thread may include a blocking instruction. A blocking instruction blocks other threads from utilizing hardware resources for an appreciable amount of time. One example of a blocking type instruction is a Montgomery multiplication cryptographic instruction. Each thread can operate in a thread-based mode that allows the insertion of stall cycles during the execution of blocking instructions, during which other threads may utilize the previously blocked hardware resources. At times when multiple threads are scheduled to execute blocking instructions, the thread-based mode may be changed to increase throughput for these multiple threads. For example, the mode may be changed to disallow the insertion of stall cycles. Therefore, the time for sequential operation of the blocking instructions corresponding to the multiple threads may be reduced. | 2011-11-10 |
20110276784 | HIERARCHICAL MULTITHREADED PROCESSING - In one embodiment, a current candidate thread is selected from each of multiple first groups of threads using a low granularity selection scheme, where each of the first groups includes multiple threads and first groups are mutually exclusive. A second group of threads is formed comprising the current candidate thread selected from each of the first groups of threads. A current winning thread is selected from the second group of threads using a high granularity selection scheme. An instruction is fetched from a memory based on a fetch address for a next instruction of the current winning thread. The instruction is then dispatched to one of the execution units for execution, whereby execution stalls of the execution units are reduced by fetching instructions based on the low granularity and high granularity selection schemes. | 2011-11-10 |
20110276785 | BYTE CODE CONVERSION ACCELERATION DEVICE AND A METHOD FOR THE SAME - Provided is a bytecode conversion acceleration device and a method for the same: allowing a reduction in the size of a storage unit for a look-up table including a decoding table, a link table and a native code table; increasing the number of bytecodes that can be processed by hardware by using the look-up table to thereby enhance the overall performance of a virtual machine; and allowing an execution portion to immediately execute the first native code to thereby enhance performance of the virtual machine. | 2011-11-10 |
20110276786 | Shared Prefetching to Reduce Execution Skew in Multi-Threaded Systems - Mechanisms are provided for optimizing code to perform prefetching of data into a shared memory of a computing device that is shared by a plurality of threads that execute on the computing device. A memory stream of a portion of code that is shared by the plurality of threads is identified. A set of prefetch instructions is distributed across the plurality of threads. Prefetch instructions are inserted into the instruction sequences of the plurality of threads such that each instruction sequence has a separate sub-portion of the set of prefetch instructions, thereby generating optimized code. Executable code is generated based on the optimized code and stored in a storage device. The executable code, when executed, performs the prefetches associated with the distributed set of prefetch instructions in a shared manner across the plurality of threads. | 2011-11-10 |
20110276787 | MULTITHREAD PROCESSOR, COMPILER APPARATUS, AND OPERATING SYSTEM APPARATUS - A multithread processor for executing, in parallel, instructions included in a plurality of threads includes: a calculating group including a plurality of calculators each of which is for executing an instruction; instruction grouping units which classify, for each thread, the instructions included in the thread into groups each of which includes instructions that are simultaneously executable by the calculators; a thread selecting unit which selects, per execution cycle of the multithread processor, a thread including instructions to be issued to the calculators, from among the threads, by controlling execution frequency for executing the instructions included in the threads; and an instruction issuing unit which issues, to the calculators, per execution cycle of the multithread processor, the instructions classified into each of the groups and being among the instructions included in the thread selected by the thread selecting unit. | 2011-11-10 |
20110276788 | PIPELINE PROCESSOR - A bypass circuit is provided in a pipeline processor. A pipeline register is provided between an instruction execution stage and a write-back stage. The pipeline register stores a data validity flag and a WRITE control flag to control writing data into a general purpose register unit. The data retained in the pipeline register is allowed to be written back into the general purpose register unit when the WRITE control flag indicates “valid”. The pipeline register continues to retain the retained data even after the writing of the retained data into the general purpose register unit. The first pipeline register supplies the retained data to the second stage through the bypass circuit at the time of executing a subsequent instruction having data dependency on a preceding instruction. | 2011-11-10 |
20110276789 | PARALLEL PROCESSING OF DATA - A data parallel pipeline may specify multiple parallel data objects that contain multiple elements and multiple parallel operations that operate on the parallel data objects. Based on the data parallel pipeline, a dataflow graph of deferred parallel data objects and deferred parallel operations corresponding to the data parallel pipeline may be generated and one or more graph transformations may be applied to the dataflow graph to generate a revised dataflow graph that includes one or more of the deferred parallel data objects and deferred, combined parallel data operations. The deferred, combined parallel operations may be executed to produce materialized parallel data objects corresponding to the deferred parallel data objects. | 2011-11-10 |
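The deferred-execution model above — record operations in a dataflow graph, transform the graph, then materialize — can be illustrated with a toy pipeline whose only transformation fuses runs of consecutive maps into one. The class and method names are illustrative, not the patented implementation:

```python
# Deferred dataflow pipeline with map fusion (illustrative sketch).
# parallel_map() records a node instead of executing; optimize() is a
# graph transformation; materialize() runs the revised graph.

class DeferredCollection:
    def __init__(self, data):
        self.data = data
        self.ops = []            # deferred (kind, fn) nodes

    def parallel_map(self, fn):
        self.ops.append(("map", fn))
        return self

    def optimize(self):
        """Graph transformation: fuse consecutive maps into one node."""
        fused = []
        for kind, fn in self.ops:
            if kind == "map" and fused and fused[-1][0] == "map":
                prev = fused[-1][1]
                fused[-1] = ("map", lambda x, f=fn, g=prev: f(g(x)))
            else:
                fused.append((kind, fn))
        self.ops = fused
        return self

    def materialize(self):
        out = self.data
        for kind, fn in self.ops:
            if kind == "map":
                out = [fn(x) for x in out]
        return out
```

Fusion matters because each unfused node would otherwise be a separate parallel pass over the data; after `optimize()`, two maps cost one traversal.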
20110276790 | INSTRUCTION SUPPORT FOR PERFORMING MONTGOMERY MULTIPLICATION - Techniques are disclosed relating to a processor including instruction support for performing a Montgomery multiplication. The processor may issue, for execution, programmer-selectable instruction from a defined instruction set architecture (ISA). The processor may include an instruction execution unit configured to receive instructions including a first instance of a Montgomery-multiply instruction defined within the ISA. The Montgomery-multiply instruction is executable by the processor to operate on at least operands A, B, and N residing in respective portions of a general-purpose register file of the processor, where at least one of operands A, B, N spans at least two registers of general-purpose register file. The instruction execution unit is configured to calculate P mod N in response to receiving the first instance of the Montgomery-multiply instruction, where P is the product of at least operand A, operand B, and R̂−1. | 2011-11-10 |
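The arithmetic the Montgomery-multiply instruction computes — P mod N with P = A·B·R⁻¹ — can be checked in a few lines of Python. This shows only the math; the patent's instruction operates on multi-register operands in hardware, and the choice R = 2^k with k the bit length of an odd N is a standard convention assumed here:

```python
# What a Montgomery multiply computes: A * B * R^-1 mod N,
# with R = 2^(bit length of N) and N odd so R is invertible mod N.

def montgomery_multiply(a: int, b: int, n: int) -> int:
    """Return a * b * R^-1 mod n, with R = 2^(n.bit_length())."""
    k = n.bit_length()
    r = 1 << k
    r_inv = pow(r, -1, n)          # modular inverse of R mod n (Python 3.8+)
    return (a * b * r_inv) % n
```

The defining property is that operands kept in Montgomery form stay in Montgomery form: montgomery_multiply(a·R mod n, b·R mod n, n) equals a·b·R mod n, which is why RSA-style exponentiation can chain these multiplies with only one conversion in and one out.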
20110276791 | HANDLING A STORE INSTRUCTION WITH AN UNKNOWN DESTINATION ADDRESS DURING SPECULATIVE EXECUTION - The described embodiments provide a system for executing instructions in a processor. While executing instructions in an execute-ahead mode, the processor encounters a store instruction for which a destination address is unknown. The processor then defers the store instruction. Upon encountering a load instruction while the store instruction with the unknown destination address is deferred, the processor determines if the load instruction is to continue executing. If not, the processor defers the load instruction. Otherwise, the processor continues executing the load instruction. | 2011-11-10 |
20110276792 | RESOURCE FLOW COMPUTER - A scalable processing system includes a memory device having a plurality of executable program instructions, wherein each of the executable program instructions includes a timetag data field indicative of the nominal sequential order of the associated executable program instructions. The system also includes a plurality of processing elements, which are configured and arranged to receive executable program instructions from the memory device, wherein each of the processing elements executes executable instructions having the highest priority as indicated by the state of the timetag data field. | 2011-11-10 |
20110276793 | INJECTING A FILE FROM THE BIOS INTO AN OPERATING SYSTEM - Techniques for the BIOS to install a file into the runtime environment of an operating system of a computer. A system management interrupt (SMI) handler, resident within the BIOS, receives a first request. The SMI handler identifies an address in memory at which a first file is to be stored, and determines how to access a function provided by a kernel of the operating system. The SMI handler calls the function using the address as an argument to create a thread in the runtime environment of the operating system. Upon the SMI handler receiving a request from the thread, the SMI handler stores a second file in the memory of the runtime environment of the operating system. The thread may, but need not, store the second file to a file system provided by the operating system. In this way, the BIOS need not include a driver to the file system. | 2011-11-10 |
20110276794 | INFORMATION PROCESSING DEVICE HAVING CONFIGURATION CHANGING FUNCTION AND SYSTEM CONFIGURATION METHOD - An information processing device capable of connecting a device thereto includes a processing unit and a first storage device, wherein the first storage device stores device change information defining a configuration of a device connected to the information processing device and including difference information that is a difference between first system configuration information and each system configuration capable of being taken by the information processing device, and a first computer program causing the processing unit to execute a procedure, the procedure comprising detecting device configuration information that is device information of a device connected to the information processing device when any device connected to the information processing device is changed, and changing the first system configuration information into second system configuration information based on the detected device configuration information and the device change information. | 2011-11-10 |
20110276795 | INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD - A method to allow a value to be written into one PCR domain, only if values from a second PCR domain are valid, thus ensuring the extension of the chain of trust between domains. | 2011-11-10 |
20110276796 | LOADING A PLURALITY OF APPLIANCES INTO A BLADE - A method for enabling a plurality of software appliances to be dynamically loaded onto a blade is described. During run-time and in response to receiving one or more sets of appliance loading instructions corresponding to one or more appliances, the one or more appliances are downloaded. Each appliance has a capability different from the others. The one or more appliances that are downloaded are stored at a first set of locations on a data store, each of which is different from the others. A first appliance of the one or more appliances that are stored is then installed at a second location on the data store. Then, the first appliance that is installed is booted on the blade. | 2011-11-10 |
20110276797 | AUTHENTICATION AND AUTHORIZATION FOR INTERNET VIDEO CLIENT - A device is enabled to display Internet TV by accessing a management server with a secret unique ID and receiving back from the server, assuming the ID is approved, a user token and a service list of content servers with knowledge of the user token. A user can select a content server which causes the device to upload its user token and in response receive a content list from the content server, from which content can be selected for display. Neither list may be modified by the device and the device can access only content on a content list. | 2011-11-10 |
20110276798 | SECURITY MANAGEMENT METHOD AND SYSTEM FOR WAPI TERMINAL ACCESSING IMS NETWORK - The present invention discloses a security management method and a security management system for a WAPI terminal accessing an IMS network. The method comprises: an authentication service unit (ASU) sending, under the circumstance that an access point and the WAPI terminal pass the verification of the ASU, a security information request message to a home subscriber server (HSS) (S | 2011-11-10 |
20110276799 | PERSONAL COMMUNICATION SYSTEM HAVING INDEPENDENT SECURITY COMPONENT - A personal communication system (PCS) incorporates a secure storage device, which includes a device processor, a CPU interface, a system interface, a storage means, and a removable storage media component. The device processor is communicably connected to the CPU of the PCS through the CPU interface, which exclusively enables communications between the device processor and the CPU. The system interface enables the device processor to manage one or more hardware components of the PCS. A network interface is also included to enable the device processor to communicate over a network with select file servers to the exclusion of other file servers. The storage means is communicably connected to the device processor and includes first and second designated storage sections. The device processor has read-write access to both storage sections and gives the CPU read-only access to the first storage section and read-write access to the second storage section. | 2011-11-10 |
20110276800 | Message Service Indication System and Method - Systems and methods for operation upon a data processing device for handling messages with different levels of security are provided herein. A method for operation upon a data processing device for handling messages with different levels of security includes examining an attribute of a message received over a network in order to determine a security-related level associated with the message, generating a visual indication for display to a device user that is indicative of the determined security-related level, wherein the generated visual indication is applied to a displayed portion of text associated with the message, and changing the visual indication when the message is viewed. | 2011-11-10 |
20110276801 | COMMUNICATING ADMISSION DECISIONS AND STATUS INFORMATION TO A CLIENT - In an example embodiment, a technique employs a SAP/SDP packet to communicate data to a client device when a request for a multicast stream, such as a video stream, is denied. Rather than announcing a program, the SAP/SDP packet reports a status to the client device. The SAP/SDP packet may suitably comprise data representative of the video name and a reason code, enabling the client device to provide an output, e.g. a text string, to a user associated with the client device indicating the reason for the denial. In addition, contact information such as an email address and a uniform resource locator (URL) pointing to a predetermined web page may also be included in the SAP/SDP packet, which can inform the associated user of the client device where additional information can be obtained regarding the denial. | 2011-11-10 |
20110276802 | METHODS AND APPARATUS FOR PEER-TO-PEER TRANSFER OF SECURE DATA USING NEAR FIELD COMMUNICATIONS - The present invention discloses an apparatus and method of transferring data from a first device to a second device. The method includes transmitting a request to transfer the data from the first device to the second device, receiving, at the first device, a decryption key to allow transfer of the data stored in a memory of the first device, receiving, at the second device, an encryption key, and transmitting the data from the first device to the second device using peer-to-peer communications. The method also includes encrypting the data at the second device using the encryption key, storing the encrypted data in a memory of the second device, receiving, at the first device, an acknowledgement from the second device, the acknowledgement indicating that the data has been encrypted and stored in the memory of the second device, and deleting the data from the memory of the first device. | 2011-11-10 |
20110276803 | SYSTEM AND METHOD FOR MULTI-CERTIFICATE AND CERTIFICATE AUTHORITY STRATEGY - Operations or functions on a device may require an operational certificate to ensure that the user of the device or the device itself is permitted to carry out the operations or functions. A system and a method are provided for providing an operational certificate to a device, whereby the operational certificate is associated with one or more operations of the device. A manufacturing certificate authority, during the manufacture of the device, obtains identity information associated with the device and provides a manufacturing certificate to the device. An operational certificate authority obtains and authenticates at least a portion of the identity information associated with the device from the manufacturing certificate and, if at least the portion of the identity information is authenticated, the operational certificate is provided to the device. | 2011-11-10 |
20110276804 | SERVER AUTHENTICATION METHOD AND CLIENT TERMINAL - A server authentication method is provided. In the method, a client receives a public key of an evaluated server during establishment of a secure communication path with the evaluated server. The client terminal transmits a first ID to the evaluated server. The client terminal receives a second ID and a first random number from the evaluated server. The client terminal determines that the evaluated server is valid when the received first random number corresponds to the transmitted first ID and a public key stored in a public key management unit configured to manage the public key in advance is identical to the received public key. The client terminal transmits a second random number corresponding to the second ID to the evaluated server when the evaluated server is determined to be valid. | 2011-11-10 |
20110276805 | System and Method for Third Party Creation of Applications for Mobile Appliances - The creation of an application for any mobile appliance, for example Apple's iPhone, requires several elements to be present at compile time. In the Apple example of an enterprise application where an entity wishes to develop applications internally for its staff, two of these elements are the source code and a digital certificate. These must be combined in the compiler so that the application may be properly authorized to run on the appliance. Where the owner of the source code and the owner of the digital certificate are not the same, serious concerns arise because each element must be secured. An intermediating system and method are described that allow each party to cooperate securely through a third party escrow service to produce the compiled application whilst leaving no unwanted residue of the independent parts. | 2011-11-10 |
20110276806 | Creation and Delivery of Encrypted Virtual Disks - The present application is directed to methods and systems for receiving a request for a virtual disk and creating a virtual disk that includes the virtual disk attributes identified in the request or determined by an organization's security policies. The created virtual disk can then be encrypted and in some aspects, an encryption key for the encrypted virtual disk can be stored in an encryption key database. Upon creating and encrypting the virtual disk, the virtual disk can be transmitted to a client. The client, upon receiving the encrypted virtual disk, can mount the virtual disk into the client system. The encrypted virtual disk may be stored as a file within an unencrypted virtual disk, and the unencrypted virtual disk backed up to a local or remote storage location. | 2011-11-10 |
20110276807 | REMOTE UPDATE METHOD FOR FIRMWARE - The present invention relates to a remote firmware update method in which the encoded firmware is decoded and updated using the XOR table, checksum, and signature stored in the header of the remotely delivered new firmware when updating an automated teller machine. The firmware can thus be updated conveniently without moving the automated teller machine off-site, improving the efficiency of managing the machine and preventing illegal operations of the automated teller machine performed by external hacking over a network. | 2011-11-10 |
20110276808 | APPLICATION INSTALLING METHOD - An application installing method is provided in which an application file includes at least two pieces of application encryption data, in which executable files are respectively encrypted using different encryption algorithms, and a license file includes at least two pieces of license encryption data, in which application decryption keys for decrypting the application encryption data are encrypted using respectively different encryption algorithms. The process execution apparatus includes a calculation unit configured to execute the executable file, and a storage unit configured to store the application file and the license file. The method includes a step of decrypting the application encryption data by use of the application decryption key with the calculation unit, based on the level of priority of the predesignated application encryption data stored in the storage unit, and installing the executable file corresponding to the application encryption data. | 2011-11-10 |
20110276809 | Method of Storing Data in a Memory Device and a Processing Device for Processing Such Data - In a method of storing data in a memory device, which data comprise content to be processed in a processing device in which the memory device is installed, the method comprises the steps of writing encrypted content (Enc_K | 2011-11-10 |
20110276810 | Systems and methods for monitoring and characterizing information handling system use behavior - Desktop power use behavior may be detected while a portable information handling system or any other type of battery powered information handling system is operating on external power such as an AC adapter. The desktop power use behavior may be detected by monitoring one or more power usage parameters to detect usage characteristics that indicate a battery powered information handling system is being operated in a manner that is similar to operation of a desktop information handling system. Upon detection of desktop behavior, one or more processing devices of the information handling system may respond by taking one or more desktop use response actions. | 2011-11-10 |
20110276811 | IMAGING APPARATUS - An imaging apparatus includes an imaging section converting an image taken from an object into an original image signal, an apparatus body outputting an image or video signal to an external display and further delivering an image data signal to a terminal device, an external data input/output terminal including a main data input/output terminal for data communication with the terminal device and a secondary data input/output terminal for connection of a peripheral device executing data communication with the terminal device, a single main power supply unit supplying power to the imaging section and the apparatus body, one or more power supply units supplying power to the peripheral device from the secondary data input/output terminal, and a power supply control unit executing control for converting voltage and current of an external power supply into voltages and currents suitable for operations of the imaging section, the apparatus body and the peripheral device, respectively. | 2011-11-10 |
20110276812 | System On Chip, Devices Having The Same, and Method For Power Control of the SOC - Disclosed is an integrated circuit device including a plurality of power domain blocks, which includes a core power domain block. A power control circuit is configured to control power supplied to each of the plurality of power domain blocks independently responsive to control communication from the core power domain block. The power control circuit includes a plurality of power clusters corresponding to the plurality of power domain blocks, respectively. The plurality of power clusters control power supplied to the plurality of power domain blocks, respectively, independently responsive to the control communication from the core power domain block. | 2011-11-10 |
20110276813 | COMMUNICATION DEVICE - A communication device includes a main processing unit and a sub-processing unit. The main processing unit includes a main performing unit that acquires time information indicating a performance time, stores the time information in a time information storing unit, sets a timer unit to detect that it is the time indicated by the time information, and performs the process when detecting that it is the time, and a power saving determining unit that transmits a report of detecting that a power saving performance condition is satisfied to the sub-processing unit. The sub-processing unit includes a power control unit that stops the supply of power to the main processing unit when receiving the report that the power saving performance condition is satisfied, and restarts the supply of power when the timer unit detects that it is the time indicated by the time information of the time information storing unit. | 2011-11-10 |
20110276814 | COMMUNICATION DEVICE - Provided is a communication device connected to a network access device constituting a network and capable of communication through the network, the device including: a circumstance determining unit that determines whether or not the network is in a circumstance of executing a protocol that causes blocking of communication on the network upon a change in the link speed to the network access device; a power saving determining unit that determines whether or not a predetermined power saving performance condition is satisfied; and a link speed control unit that maintains the link speed when it is determined that the power saving performance condition is satisfied and the network is in the circumstance of executing the protocol. | 2011-11-10 |
20110276815 | INTERFACE FREQUENCY MODULATION TO ALLOW NON-TERMINATED OPERATION AND POWER REDUCTION - Embodiments of the invention are generally directed to systems, methods, and apparatuses for using interface frequency modulation to allow non-terminated operation and power reduction. In some embodiments, an apparatus includes an interface having a termination mode and a power management controller coupled with the interface. In some embodiments, the power management controller is capable of dynamically reducing the operating frequency of the interface and disabling the termination mode to reduce the power consumed by the interface. Other embodiments are described and claimed. | 2011-11-10 |
20110276816 | POWER MANAGEMENT OF LOW POWER LINK STATES - A method and apparatus for intelligent power management for low power link states. Some embodiments include methods, apparatuses, and systems for a device coupled to a controller via a link; a link power management engine to alter a power state of the link based on a transaction and some knowledge of future transactions between the device and the controller; and a memory or logic to store the link power management engine. In some embodiments, the memory stores information about at least one of the following: the power state of the link, the device buffering, the controller or device state or a history of transactions. In some embodiments, the device is a peripheral of a computer system. In some embodiments, the method may include transitioning the device to various link states. Other embodiments are described. | 2011-11-10 |
20110276817 | MEMORY POWER MANAGER - Controlling access to memory includes receiving a plurality of memory access requests and assigning corresponding time values to each. The assigned time values are adjusted based upon a clock pulse and a priority access list is generated. Factors considered include missed access deadlines, closeness to missing access deadlines, and whether a page is open. The highest-ranked client is then passed to a sequencer to allow the requested access. Time values may be assigned and adjusted according to client ID or client type (latency or bandwidth). A plurality of power modes of operation are defined, wherein operation in a selected power mode is based at least in part on the assigned or adjusted time values. The processing is performed in hardware in parallel (at the same time) by associated logic circuits. | 2011-11-10 |
20110276818 | POWER CONSUMPTION QUANTITY ESTIMATION SYSTEM - A power consumption quantity estimation system | 2011-11-10 |
20110276819 | COMMUNICATIONS DEVICE - A mobile device is provided having a smart card. The smart card is powered by the mobile device, and a maximum power supply value is defined by the mobile device to control the power drawn by the smart card. Provision is made for the smart card or the mobile device to renegotiate the maximum power supply level for the smart card without having to reset the mobile device. This gives the mobile device dynamic control of the power drawn by the smart card, which can help the mobile device optimize its power-saving management. | 2011-11-10 |
20110276820 | Cross Controller Clock Synchronization - A system may include a plurality of subsystems, e.g. instrumentation units housed in separate chassis, each chassis including multiple instrumentation devices, e.g. data acquisition cards. Each subsystem may generate a local reference clock, which may be phase aligned and locked with respect to one or more similar reference clocks of other subsystems, via a high-level precision time protocol (PTP). Each instrumentation device within a given subsystem may generate its own sample clock based on the local reference clock, and may generate its own trigger clock based on its own sample clock. All trigger clocks may be synchronized with respect to each other through a future time event issued using the PTP, and each instrumentation device may then use its trigger clock to synchronize any received trigger pulses, which may also be issued through future time events using the PTP. This results in synchronizing the received triggers across all participating instrumentation devices across all participating subsystems, ensuring that data acquisition is properly synchronized across the multiple subsystems. | 2011-11-10 |
20110276821 | METHOD AND SYSTEM FOR MIGRATING DATA FROM MULTIPLE SOURCES - An approach is provided for migrating data. Data is received from a plurality of source systems. The received data is processed for conversion to a target system. A failure condition associated with the processing is detected. An action is selectively initiated from a point of failure corresponding to the detected failure condition. The action includes either retrying the processing, aborting the processing, initiating simulation of the process, forcing completion of the processing, or a combination thereof. | 2011-11-10 |
20110276822 | NODE CONTROLLER FIRST FAILURE ERROR MANAGEMENT FOR A DISTRIBUTED SYSTEM - A distributed system provides error handling wherein the system includes multiple nodes, each node being coupled to multiple node controllers for control redundancy. Multiple system controllers couple to the node controllers via a network bus. A particular node controller may detect an error of that particular node controller. The particular node controller may store error information relating to the detected error in respective nonvolatile memory stores in the system controllers and node controllers according to a particular priority order. In accordance with the particular priority order, for example, the particular node controller may first attempt to store the error information to a primary system controller memory store, then to a secondary system controller memory store, and then to sibling and non-sibling node controller memory stores. The primary system controller organizes available error information for use by system administrators and other resources of the distributed system. | 2011-11-10 |
20110276823 | INFORMATION PROCESSING APPARATUS, BACKUP SERVER AND BACKUP SYSTEM - An information processing apparatus includes a backup data storage unit, a monitoring information storage unit and a backup data transfer unit. The backup data storage unit stores backup data. The monitoring information storage unit stores monitoring information that includes at least identification information and priority information of the backup data. The backup data transfer unit transfers the backup data to a backup server via a network in response to a transfer request for the backup data. The transfer request is received from the backup server on the basis of the priority information of the monitoring information which is notified to the backup server from the information processing apparatus. | 2011-11-10 |
20110276824 | NETWORK SWITCH WITH BACKUP POWER SUPPLY - A network switch apparatus includes a housing, a first network port, a second network port, a first instrument port, an active component inside the housing, wherein the active component is configured to receive packets from the first network port, and pass at least some of the packets from the first network port to the first instrument port, a connector for supplying power from a power supply to the active component, and a backup power supply for supplying power to the active component when the active component does not receive power from the power supply. | 2011-11-10 |
20110276825 | DEVICE AND METHOD FOR COORDINATING AUTOMATIC PROTECTION SWITCHING OPERATION AND RECOVERY OPERATION - The present invention relates to a device and a method for coordinating an APS operation and a recovery operation. The device includes a working channel detection unit, a protection channel detection unit, a protection protocol unit and a recovery protocol unit. The method comprises: when the working channel of the current service fails, the working channel detection unit reporting a working channel alarm to the protection protocol unit and the recovery protocol unit of the current node; the recovery protocol unit starting a timer after receiving the working channel alarm, and the protection protocol unit determining, after receiving the working channel alarm, whether the recovery operation needs to be started immediately, and if so, notifying the recovery protocol unit to start the recovery operation immediately; and the recovery protocol unit starting the recovery operation immediately after receiving the notification. The present invention reduces the service interruption time in the case of APS function failure. | 2011-11-10 |
20110276826 | COMMUNICATION DEVICES AND METHODS WITH ONLINE PARAMETER CHANGE - Methods and devices are provided in which, when an online parameter change is requested, a data transfer unit is stopped, the parameter change is communicated, and the data transfer unit is restarted with the changed parameters to generate new data transfer units. | 2011-11-10 |
20110276827 | DELTA CHECKPOINTS FOR A NON-VOLATILE MEMORY INDIRECTION TABLE - According to some embodiments, delta checkpoints are provided for a non-volatile memory indirection table to facilitate a recovery process after a power loss event. | 2011-11-10 |
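The delta-checkpoint idea named in 20110276827 above can be illustrated generically: rather than persisting the whole logical-to-physical indirection table on every change, a full checkpoint is taken occasionally and only the entries modified since then are written as small deltas, which are replayed in order after a power loss. The sketch below is a hedged, in-memory illustration of that general technique only; it does not reproduce the patent's actual method or on-flash layout, and all class and method names are invented for illustration.

```python
class IndirectionTable:
    """Toy logical-to-physical indirection table with delta checkpointing.

    Illustrative only; names and structure are assumptions, not the
    patented design.
    """

    def __init__(self):
        self.l2p = {}          # live table: logical page -> physical page
        self.dirty = {}        # entries changed since the last persist
        self.checkpoint = {}   # last full checkpoint image
        self.deltas = []       # ordered delta records after the checkpoint

    def map(self, logical, physical):
        # Record a new mapping and mark it dirty.
        self.l2p[logical] = physical
        self.dirty[logical] = physical

    def write_delta(self):
        # Persist only the entries modified since the last persist point.
        if self.dirty:
            self.deltas.append(dict(self.dirty))
            self.dirty.clear()

    def write_full_checkpoint(self):
        # Persist the whole table and discard now-redundant deltas.
        self.checkpoint = dict(self.l2p)
        self.deltas.clear()
        self.dirty.clear()

    def recover(self):
        # After power loss: restore the checkpoint, then replay deltas
        # in write order so later mappings override earlier ones.
        table = dict(self.checkpoint)
        for delta in self.deltas:
            table.update(delta)
        return table
```

A remap recorded in a later delta correctly overrides the value captured in the earlier full checkpoint, which is what makes the small deltas sufficient for recovery.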