14th week of 2013 patent application highlights part 52 |
Patent application number | Title | Published |
20130086299 | Security in virtualized computer programs - In an embodiment, a data processing method comprises implementing a memory event interface to a hypercall interface of a hypervisor or virtual machine operating system to intercept page faults associated with writing pages of memory that contain a computer program; receiving a page fault resulting from a guest domain attempting to write a memory page that is marked as not executable in a memory page permissions system; determining a first set of memory page permissions for the memory page that are maintained by the hypervisor or virtual machine operating system; determining a second set of memory page permissions for the memory page that are maintained independent of the hypervisor or virtual machine operating system; determining a particular memory page permission for the memory page based on the first set and the second set; processing the page fault based on the particular memory page permission, including performing at least one security function associated with regulating access of the guest domain to the memory page. | 2013-04-04 |
20130086300 | STORAGE CACHING ACCELERATION THROUGH USAGE OF R5 PROTECTED FAST TIER - A data storage system with redundant SSD cache includes an SSD cache organized into logical stripes, each logical stripe having several logical blocks. The logical blocks of each stripe are organized into logical data blocks and one logical parity block. Data may be written to the SSD cache by performing an exclusive disjunction operation on the logical parity block, the new data and the existing data in logical stripe to update the parity block, then writing the new data over the existing data in a logical data block in the same logical stripe. | 2013-04-04 |
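The exclusive-disjunction parity update described in this abstract can be sketched as follows; the function name and the byte-level block layout are illustrative assumptions, not from the application.

```python
def update_parity(old_parity: bytes, old_data: bytes, new_data: bytes) -> bytes:
    """Recompute stripe parity when one logical data block changes.

    XOR-ing out the old data and XOR-ing in the new data is equivalent to
    recomputing parity over the whole stripe, but touches only three blocks.
    """
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))


# A three-block stripe: parity is the XOR of the two data blocks.
block_a = bytes([0x0F, 0xF0])
block_b = bytes([0xAA, 0x55])
parity = bytes(a ^ b for a, b in zip(block_a, block_b))

# Overwrite block_a; the incremental update must match a full recompute.
new_a = bytes([0x12, 0x34])
new_parity = update_parity(parity, block_a, new_a)
assert new_parity == bytes(a ^ b for a, b in zip(new_a, block_b))
```

Because XOR is its own inverse, applying the same update with old and new data swapped restores the previous parity.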
20130086301 | Direct Memory Address for Solid-State Drives - A storage device is provided for direct memory access. A controller of the storage device performs a mapping of a window of memory addresses to a logical block addressing (LBA) range of the storage device. Responsive to receiving from a host a write request specifying a write address within the window of memory addresses, the controller initializes a first memory buffer in the storage device and associates the first memory buffer with a first address range within the window of memory addresses such that the write address of the request is within the first address range. The controller writes to the first memory buffer based on the write address. Responsive to the buffer being full, the controller persists contents of the first memory buffer to the storage device using logical block addressing based on the mapping. | 2013-04-04 |
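The buffer-then-persist write path in this abstract can be sketched as below; the dict-backed store, the linear window-to-LBA mapping, and the tiny block size are illustrative assumptions, since a real controller would persist to flash.

```python
class DirectWriteWindow:
    """Sketch of mapping a memory-address window onto an LBA range."""

    def __init__(self, window_base: int, block_size: int):
        self.window_base = window_base
        self.block_size = block_size
        self.buffer = bytearray()
        self.buffer_base = None   # start address the current buffer covers
        self.lba_store = {}       # LBA -> persisted block contents

    def write(self, address: int, data: bytes) -> None:
        if self.buffer_base is None:
            self.buffer_base = address   # associate a fresh buffer with this range
        self.buffer.extend(data)
        if len(self.buffer) >= self.block_size:
            # Buffer full: persist via logical block addressing.
            lba = (self.buffer_base - self.window_base) // self.block_size
            self.lba_store[lba] = bytes(self.buffer)
            self.buffer = bytearray()
            self.buffer_base = None


win = DirectWriteWindow(window_base=0x1000, block_size=4)
win.write(0x1008, b"ab")   # buffered, not yet persisted
assert win.lba_store == {}
win.write(0x100A, b"cd")   # buffer full: persisted to LBA 2
assert win.lba_store == {2: b"abcd"}
```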
20130086302 | Enabling Throttling on Average Write Throughput for Solid State Storage Devices - A mechanism is provided for enabling throttling on average write throughput instead of peak write throughput for solid-state storage devices. The mechanism assures an average write throughput within a range but allows excursions of high throughput with periods of low throughput offsetting against those of heavy usage. The mechanism periodically determines average throughput and determines whether average throughput exceeds a high throughput threshold for a certain amount of time without being offset by periods of low throughput. | 2013-04-04 |
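The idea of allowing bursts as long as quiet periods offset them can be sketched with a sliding-window average; the class name, window length, and per-period accounting are assumptions for illustration, not the patent's mechanism verbatim.

```python
from collections import deque


class AverageWriteThrottle:
    """Throttle on average, not peak, write throughput (illustrative sketch).

    Excursions above the limit are tolerated while low-throughput periods
    within the window offset them; only a sustained high average throttles.
    """

    def __init__(self, limit_bytes_per_period: int, window_periods: int):
        self.limit = limit_bytes_per_period
        self.window = deque(maxlen=window_periods)

    def record_period(self, bytes_written: int) -> None:
        self.window.append(bytes_written)

    def should_throttle(self) -> bool:
        if not self.window:
            return False
        return sum(self.window) / len(self.window) > self.limit


throttle = AverageWriteThrottle(limit_bytes_per_period=100, window_periods=4)
for burst in (300, 0, 0, 0):        # one burst offset by three idle periods
    throttle.record_period(burst)
assert not throttle.should_throttle()   # average is 75, under the limit
for burst in (300, 300):            # sustained heavy usage, no offsetting
    throttle.record_period(burst)
assert throttle.should_throttle()       # window average is now 150
```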
20130086303 | APPARATUS, SYSTEM, AND METHOD FOR A PERSISTENT OBJECT STORE - An apparatus, system, and method are disclosed for persistently storing data objects. An object store index module maintains an object store. The object store associates each data object of a plurality of data objects with a unique key value. A storage module persists object store data defining the object store to a logical block address of the solid-state storage device in response to an update event. The logical block address is a member of a restricted set of logical block addresses. The logical block address is mapped to a location of the object store data on the solid-state storage device. A read module provides a requested data object from the plurality of data objects to a requesting client in response to receiving a read request for the requested data object from the requesting client. The read request comprises the key value associated with the requested data object. | 2013-04-04 |
20130086304 | STORAGE SYSTEM COMPRISING NONVOLATILE SEMICONDUCTOR STORAGE MEDIA - Logical-physical translation information comprises information denoting the corresponding relationships between multiple logical pages and multiple logical chunks forming a logical address space of a nonvolatile semiconductor storage medium, and information denoting the corresponding relationships between the multiple logical chunks and multiple physical storage areas. Each logical page is a logical storage area conforming to a logical address range. Each logical chunk is allocated to two or more logical pages of multiple logical pages. Two or more physical storage areas of multiple physical storage areas are allocated to each logical chunk. A controller adjusts the number of physical storage areas to be allocated to each logical chunk. | 2013-04-04 |
20130086305 | NONVOLATILE SEMICONDUCTOR STORAGE SYSTEM - A nonvolatile semiconductor storage system has multiple nonvolatile semiconductor storage media, a control circuit having a media interface group (one or more interface devices) coupled to the multiple nonvolatile semiconductor storage media, and multiple switches. The media interface group and the multiple switches are coupled via data buses, and each switch and each of two or more nonvolatile chips are coupled via a data bus. The switch is configured so as to switch a coupling between a data bus coupled to the media interface group and a data bus coupled to any of multiple nonvolatile chips that are coupled to this switch. The control circuit partitions write-target data into multiple data elements, switches a coupling by controlling the multiple switches, and distributively sends the multiple data elements to multiple nonvolatile chips. | 2013-04-04 |
20130086306 | INFORMATION PROCESSOR AND MEMORY MANAGEMENT METHOD - According to one embodiment, an information processor includes: a controller, a volatile storage module, a non-volatile storage module, and a reader. The volatile storage module is configured to be allocated with a storage area which can be accessed by the controller. The non-volatile storage module is configured to save data stored in the storage area of the volatile storage module at transition to a power-off state. The reader is configured to read, if a state just prior to the transition to the power-off state is to be recovered, the data stored in the non-volatile storage module by each page, and to load the read data to the storage area in the volatile storage module. The page is configured by a plurality of memory cells. | 2013-04-04 |
20130086307 | INFORMATION PROCESSING APPARATUS, HYBRID STORAGE APPARATUS, AND CACHE METHOD - According to one embodiment, an information processing apparatus includes a determination module and a cache module. The determination module is configured to determine whether an access request from a host to the hard disk drive is a request for accessing a preset number of or more consecutive sectors in a hard disk drive. The cache module is configured to use a storage apparatus as a cache for the hard disk drive, and the cache module is configured not to use the storage apparatus as the cache when it is determined that the access request is the request for accessing the preset number of or more consecutive sectors. | 2013-04-04 |
20130086308 | STORAGE DEVICE AND METHOD OF ACCESSING COPY DESTINATION DATA - A storage device copies copy source data stored in a copy source volume to a copy destination volume and manages copied data in units of generations. The storage device includes a first storing unit, a second storing unit, and a processor. The first storing unit stores information representing presence/no-presence of a copy in association with a logical address of the copy destination volume in units of generations. The second storing unit stores a physical address of the copy destination volume in units of generations. The processor determines, when receiving an access request for a copy destination volume, presence/non-presence of a copy of a logical address for which the access request is made by using information in the first storing unit, and to access a physical address of the copy destination volume acquired from the second storing unit for a generation designated by a result of the determination. | 2013-04-04 |
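The two lookups in this abstract (copy presence per generation, then physical address per generation) can be sketched as follows; the dict-backed tables and the fall-back walk to the newest prior copy are illustrative assumptions.

```python
class CopyDestinationVolume:
    """Sketch of generation-managed copy-destination lookup.

    `copied` stands in for the first storing unit (presence/no-presence of a
    copy per logical address) and `physical` for the second storing unit.
    """

    def __init__(self):
        self.copied = {}     # generation -> set of logical addresses copied
        self.physical = {}   # generation -> {logical address: physical address}

    def record_copy(self, generation: int, logical: int, physical: int) -> None:
        self.copied.setdefault(generation, set()).add(logical)
        self.physical.setdefault(generation, {})[logical] = physical

    def resolve(self, generation: int, logical: int):
        """Determine copy presence, walking back to the newest prior copy."""
        for gen in range(generation, -1, -1):
            if logical in self.copied.get(gen, set()):
                return self.physical[gen][logical]
        return None


vol = CopyDestinationVolume()
vol.record_copy(0, logical=7, physical=100)
vol.record_copy(2, logical=7, physical=250)
assert vol.resolve(1, 7) == 100    # generation 1 falls back to the gen-0 copy
assert vol.resolve(2, 7) == 250
assert vol.resolve(2, 8) is None   # address never copied
```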
20130086309 | FLASH-DRAM HYBRID MEMORY MODULE - A memory module that is couplable to a memory controller hub (MCH) of a host system includes a non-volatile memory subsystem, a data manager coupled to the non-volatile memory subsystem, a volatile memory subsystem coupled to the data manager and operable to exchange data with the non-volatile memory subsystem by way of the data manager, and a controller operable to receive read/write commands from the MCH and to direct transfer of data between any two or more of the MCH, the volatile memory subsystem, and the non-volatile memory subsystem based on the commands. | 2013-04-04 |
20130086310 | NON-TRANSITORY STORAGE MEDIUM ENCODED WITH COMPUTER READABLE PROGRAM, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING APPARATUS, AND INFORMATION PROCESSING METHOD - An exemplary embodiment provides a non-transitory storage medium encoded with a computer readable program executable by the computer, for writing data in a semiconductor storage device capable of storing a plurality of bits in one memory cell. The program causes the computer to perform an allocation step of allocating a first area for storing first data in a storage area of a semiconductor storage device and a writing step of writing the first data only in an area of use, with a prescribed size from a boundary of the first area being defined as a protection area and a remaining area being defined as the area of use in response to a request for writing the first data. | 2013-04-04 |
20130086311 | METHOD OF DIRECT CONNECTING AHCI OR NVMe BASED SSD SYSTEM TO COMPUTER SYSTEM MEMORY BUS - A SSD system directly connected to the system memory bus includes at least one system memory bus interface unit, one storage controller with associated data buffer/cache, one data interconnect unit, one nonvolatile memory (NVM) module, and flexible association between storage commands and the NVM module. A logical device interface, the Advanced Host Controller Interface (AHCI) or NVM Express (NVMe), is used for the SSD system programming. The SSD system appears to the computer system physically as a dual-inline-memory module (DIMM) attached to the system memory controller, and logically as an AHCI device or an NVMe device. The SSD system may sit in a DIMM socket and scale with the number of DIMM sockets available to the SSD applications. The invention moves the SSD system from the I/O domain to the system memory domain. | 2013-04-04 |
20130086312 | Semiconductor Device - An object of the present invention is to realize a highly reliable long-life information processor capable of high-speed operation and easy to handle. The processor includes a semiconductor device comprising a nonvolatile memory device including a plurality of overwritable memory cells, and a control circuit device for controlling access to the nonvolatile memory device. The control circuit device sets assignments of second addresses to the nonvolatile memory device independently of first addresses externally supplied, such that the physical disposition of part of the memory cells used for writing of first data to be written externally supplied is one of the first to (N+1)th of every (N+1) memory cells (N: a natural number) at least in one direction. | 2013-04-04 |
20130086313 | METHODS TO SECURELY BIND AN ENCRYPTION KEY TO A STORAGE DEVICE - Embodiments of methods to securely bind a disk cache encryption key to a cache device are generally described herein. Other embodiments may be described and claimed. | 2013-04-04 |
20130086314 | STORAGE SYSTEM, STORAGE DEVICE, AND CONTROL METHOD THEREOF - A storage system including a storage device which includes media for storing data from a host computer, a medium controller for controlling the media, a plurality of channel controllers for connecting to the host computer through a channel and a cache memory for temporarily storing data from the host computer, wherein the media have a restriction on a number of writing times. The storage device includes a bus for directly transferring data from the medium controller to the channel controller. | 2013-04-04 |
20130086315 | DIRECT MEMORY ACCESS WITHOUT MAIN MEMORY IN A SEMICONDUCTOR STORAGE DEVICE-BASED SYSTEM - In general, embodiments of the present invention provide an approach for direct memory access (DMA) without main memory for a semiconductor storage device (SSD)-based system. Specifically, in a typical embodiment, an input/output hub (IOH) is provided with an inter-DMA engine. The IOH is coupled to a central processing unit (CPU), a set of double data rate (DDR) SSD memory disk units, and a graphics card. The graphics card can comprise a cache memory unit or other type of memory unit. Among other things, this embodiment provides one or more of the following features: interchangeability of hardware; resource allocation for DMA in the CPU utilizes inter-DMA resources; direct data transfer to the graphics card/processor; and/or no need to depend on a main memory component needed in previous approaches. | 2013-04-04 |
20130086316 | Using unused portion of the storage space of physical storage devices configured as a RAID - Physical storage devices are configured as a redundant array of independent disks (RAID). As such, storage space of the physical storage devices is allocated to the RAID, and each physical storage device is part of the RAID. Where a portion of the storage space of the physical storage devices is not allocated to the RAID, this portion of the storage space from a mixed drive capacity is configured so that it is usable and is not wasted. | 2013-04-04 |
20130086317 | PASSING HINT OF PAGE ALLOCATION OF THIN PROVISIONING WITH MULTIPLE VIRTUAL VOLUMES FIT TO PARALLEL DATA ACCESS - An information system comprises: a storage system including a processor, a memory, and a plurality of virtual volumes to be allocated pages from a storage pool of volumes; and a metadata server which, upon receiving from a client a write request containing file data to be written to a virtual volume in the storage system, returns the write request to the client with parallel information which is added to a data layout of the file data to be written in the virtual volume. The storage system, upon receiving the write request with the parallel information, allocates, based on the parallel information, pages from the storage pool to the virtual volume for writing the file data, so that the data layout is striped and the allocated pages fit striped data access according to the striped data layout. | 2013-04-04 |
20130086318 | SAFE MANAGEMENT OF DATA STORAGE USING A VOLUME MANAGER - A method, system, and computer program product for safe management of data storage using a VM are provided in the illustrative embodiments. An I/O request is received from the VM. A determination is made whether the I/O request requests a data manipulation on the data storage in an address range that overlaps with an address range of a VM signature stored on the data storage. In response to determining that the address range of the data manipulation overlaps with the address range of the VM signature, a determination is made whether an identifier of the VM matches an identifier of a second VM associated with the signature. In response to determining that the identifier of the VM does not match the identifier of the second VM, the I/O request is failed, thereby preventing an unsafe overwriting of the signature on the data storage. | 2013-04-04 |
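The two checks in this abstract, an address-range overlap test followed by an identity comparison, can be sketched in a few lines; the function names and string return values are illustrative assumptions.

```python
def ranges_overlap(start_a: int, len_a: int, start_b: int, len_b: int) -> bool:
    """True when half-open address ranges [start, start+len) intersect."""
    return start_a < start_b + len_b and start_b < start_a + len_a


def check_io(req_start, req_len, requester_id, sig_start, sig_len, sig_owner_id):
    """Fail an I/O that would overwrite another VM's signature (sketch)."""
    if ranges_overlap(req_start, req_len, sig_start, sig_len):
        if requester_id != sig_owner_id:
            return "failed"      # unsafe overwrite of the signature
    return "allowed"


assert check_io(100, 50, "vm1", 120, 16, "vm2") == "failed"
assert check_io(100, 50, "vm2", 120, 16, "vm2") == "allowed"   # owner may write
assert check_io(0, 50, "vm1", 120, 16, "vm2") == "allowed"     # no overlap
```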
20130086319 | DISK ARRAY DEVICE, DISK ARRAY CONTROL METHOD, AND RECORDING MEDIUM - A disk array device includes a plurality of disk devices, and a computation portion generating redundant code data denoting a redundant code that, based on the actual data and operators, is able to recover actual data sent from a higher-level device when at least part of the actual data is lost. The actual data is written by a predetermined device quantity of first disk devices in a first disk device group composed of disk devices as the first disk devices among the plurality of disk devices. The redundant code data is written by second disk devices of the same quantity as the predetermined device quantity of the first disk devices, in a second disk device group composed of disk devices as the second disk devices other than the first disk devices among the plurality of disk devices. The computation portion generates the redundant code data able to recover the actual data based on the data written into arbitrary disk devices of the device quantity among the plurality of disk devices, while the number of types of the operators used for generating the redundant code data is less than the device quantity by one or more. | 2013-04-04 |
20130086320 | MULTICAST WRITE COMMANDS - Techniques for implementing a multicast write command are described. A data block may be destined for multiple targets. The targets may be included in a list. A multicast write command may include the list. Write commands may be sent to each target in the list. | 2013-04-04 |
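The fan-out described in this abstract, one multicast command expanded into per-target writes, can be sketched as follows; the injected `send` transport callable and all names are illustrative assumptions.

```python
def multicast_write(block: bytes, targets, send):
    """Expand one multicast write command into unicast writes (sketch).

    `block` is the data destined for multiple targets, `targets` is the list
    carried by the multicast command, and `send` issues one write per target.
    """
    return {target: send(target, block) for target in targets}


stores = {}

def send(target, block):
    # Stand-in transport: "persist" the block and report per-target status.
    stores[target] = block
    return "ok"

status = multicast_write(b"payload", ["lun0", "lun1", "lun2"], send)
assert stores == {"lun0": b"payload", "lun1": b"payload", "lun2": b"payload"}
assert all(result == "ok" for result in status.values())
```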
20130086321 | FILE BASED CACHE LOADING - A method for loading a cache is disclosed. Data in a computer file is stored on a storage device. The computer file is associated with a computer program. The first step is to determine which logical memory blocks on the storage device correspond to the computer file. | 2013-04-04 |
20130086322 | SYSTEMS AND METHODS FOR MULTITENANCY DATA - Systems and methods are provided to support multitenant data in an EclipseLink environment. EclipseLink supports shared multitenant tables using tenant discriminator columns, allowing an application to be re-used for multiple tenants and have all their data co-located. Tenants can share the same schema transparently, without affecting one another and can use non-multitenant entity types as per usual. This functionality is flexible enough to allow for its usage at an Entity Manager Factory level or with individual Entity Managers based on the application's needs. Support for multitenant entities can be done through the usage of a multitenant annotation or xml element configured in an eclipselink-orm.xml mapping file. The multitenant annotation can be used on an entity or mapped superclass and is used in conjunction with a tenant discriminator column or xml element. | 2013-04-04 |
20130086323 | EFFICIENT CACHE MANAGEMENT IN A CLUSTER - A content management system has at least two content server computers, a cache memory corresponding to each content server, the cache memory having a page cache to store cache objects for pages displayed by the content server, a dependency cache to store dependency information for the cache objects, and a notifier cache to replicate changes in dependency information to other caches. | 2013-04-04 |
20130086324 | INTELLIGENCE FOR CONTROLLING VIRTUAL STORAGE APPLIANCE STORAGE ALLOCATION - A change in workload characteristics detected at one tier of a multi-tiered cache is communicated to another tier of the multi-tiered cache. Multiple caching elements exist at different tiers, and at least one tier includes a cache element that is dynamically resizable. The communicated change in workload characteristics causes the receiving tier to adjust at least one aspect of cache performance in the multi-tiered cache. In one aspect, at least one dynamically resizable element in the multi-tiered cache is resized responsive to the change in workload characteristics. | 2013-04-04 |
20130086325 | DYNAMIC CACHE SYSTEM AND METHOD OF FORMATION - Embodiments of the present invention provide a dynamic cache system comprising: a multi-level inspector design that handles multi-level data formats; a cache function design that handles multi-level data formats; a cache size controller design that is able to handle the varying cache sizes based on characteristics such as hit-rates, usage patterns, etc.; a cache behavior controller design that handles different types of files; and a heterogeneous storage controller design that is configured to handle volumes of the storage based on the types of storage (RAM Disk, flash, HDD, etc.). Advantages of the system include (among others): caching for different types of data when different types of data need to be cached, and/or cache size can be allocated based on the cache level (which itself can be established). | 2013-04-04 |
20130086326 | SYSTEM AND METHOD FOR SUPPORTING A TIERED CACHE - A computer-implemented method and system can support a tiered cache, which includes a first cache and a second cache. The first cache operates to receive a request to at least one of update and query the tiered cache; and the second cache operates to perform at least one of an updating operation and a querying operation with respect to the request via at least one of a forward strategy and a listening scheme. | 2013-04-04 |
20130086327 | AUTOMATIC CACHING OF PARTIAL RESULTS WHILE EDITING SOFTWARE - An automatic caching system is described herein that automatically determines user-relevant points at which to incrementally cache expensive to obtain data, resulting in faster computation of dependent results. The system can intelligently choose between caching data locally and pushing computation to a remote location collocated with the data, resulting in faster computation of results. The automatic caching system uses stable keys to uniquely refer to programmatic identifiers. The system annotates programs before execution with additional code that utilizes the keys to associate and cache intermediate programmatic results. The system can maintain the cache in a separate process or even on a separate machine to allow cached results to outlive program execution and allow subsequent execution to utilize previously computed results. Cost estimations are performed in order to choose whether utilizing cached values or remote execution would result in a faster computation of a result. | 2013-04-04 |
20130086328 | General Purpose Digital Data Processor, Systems and Methods - The invention provides improved data processing apparatus, systems and methods that include one or more nodes, e.g., processor modules or otherwise, that include or are otherwise coupled to cache, physical or other memory (e.g., attached flash drives or other mounted storage devices) collectively, “system memory.” At least one of the nodes includes a cache memory system that stores data (and/or instructions) recently accessed (and/or expected to be accessed) by the respective node, along with tags specifying addresses and statuses (e.g., modified, reference count, etc.) for the respective data (and/or instructions). The tags facilitate translating system addresses to physical addresses, e.g., for purposes of moving data (and/or instructions) between system memory (and, specifically, for example, physical memory—such as attached drives or other mounted storage) and the cache memory system. | 2013-04-04 |
20130086329 | ALLOCATING CACHE FOR USE AS A DEDICATED LOCAL STORAGE - A method and apparatus dynamically allocates and deallocates a portion of a cache for use as a dedicated local storage. Cache lines may be dynamically allocated and deallocated for inclusion in the dedicated local storage. Cache entries that are included in the dedicated local storage may not be evicted or invalidated. Additionally, coherence is not maintained between the cache entries that are included in the dedicated local storage and the backing memory. A load instruction may be configured to allocate, e.g., lock, a portion of the data cache for inclusion in the dedicated local storage and load data into the dedicated local storage. A load instruction may be configured to read data from the dedicated local storage and to deallocate, e.g., unlock, a portion of the data cache that was included in the dedicated local storage. | 2013-04-04 |
20130086330 | Write-Back Storage Cache Based On Fast Persistent Memory - A storage device uses non-volatile memory devices for caching. The storage device operates in a mode referred to herein as write-back mode. In write-back mode, a storage device responds to a request to write data by persistently writing the data to a cache in a non-volatile memory device and acknowledges to the requestor that the data is written persistently in the storage device. The acknowledgement is sent without necessarily having written the data that was requested to be written to primary storage. Instead, the data is written to primary storage later. | 2013-04-04 |
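The write-back behavior in this abstract, acknowledging once data is persistent in the non-volatile cache and writing to primary storage later, can be sketched as below; the dict-backed stores and method names are illustrative assumptions.

```python
class WriteBackCache:
    """Sketch of write-back mode over a fast persistent cache.

    A write is acknowledged as soon as it lands in the non-volatile cache;
    destaging to primary storage happens later.
    """

    def __init__(self):
        self.nv_cache = {}   # non-volatile cache: address -> data
        self.primary = {}    # primary storage

    def write(self, address: int, data: bytes) -> str:
        self.nv_cache[address] = data   # persisted in NV memory
        return "ack"                    # acknowledged before the primary write

    def read(self, address: int):
        if address in self.nv_cache:    # cached data wins over primary
            return self.nv_cache[address]
        return self.primary.get(address)

    def destage(self) -> None:
        """Write cached data to primary storage and drop it from the cache."""
        self.primary.update(self.nv_cache)
        self.nv_cache.clear()


cache = WriteBackCache()
assert cache.write(0x10, b"hot data") == "ack"
assert cache.primary == {}                 # primary not yet written
assert cache.read(0x10) == b"hot data"     # served from the NV cache
cache.destage()
assert cache.primary == {0x10: b"hot data"}
```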
20130086331 | INFORMATION PROCESSING SYSTEM AND A SYSTEM CONTROLLER - In a system including a plurality of CPU units each having a cache memory of a different capacity and a system controller that connects to the plurality of CPUs and controls cache synchronization, the system controller includes a cache synchronization unit which monitors an address contention between a preceding request and a subsequent request and a setting unit which sets a different monitoring range of the contention between the preceding request and the subsequent request for each capacity of the cache memory in each of the CPU units. | 2013-04-04 |
20130086332 | Task Queuing in a Multi-Flow Network Processor Architecture - Described embodiments generate tasks corresponding to each packet received by a network processor. A destination processing module receives a task and determines, based on the task size, a queue in which to store the task, and whether the task is larger than space available within a current memory block of the queue. If the task is larger, an address of a next memory block in a memory is determined, and the address is provided to a source processing module of the task. The source processing module writes the task to the memory based on a provided offset address and the address of the next memory block, if provided. If a task is written to more than one memory block, the destination processing module preloads the address of the next memory block to a local memory to process queued tasks without stalling to retrieve the address of the next memory block. | 2013-04-04 |
20130086333 | SYSTEM AND METHOD FOR SUPPORTING A SELF-TUNING LOCKING MECHANISM IN A TRANSACTIONAL MIDDLEWARE MACHINE ENVIRONMENT - A lock mechanism can be supported in a transactional middleware system to protect transaction data in a shared memory when there are concurrent transactions. The transactional middleware machine environment comprises a semaphore provided by an operating system running on a plurality of processors. The plurality of processors operates to access data in the shared memory. The transactional middleware machine environment also comprises a test-and-set (TAS) assembly component that is associated with one or more processes. Each said process operates to use the TAS assembly component to perform one or more TAS operations in order to obtain a lock for data in the shared memory. Additionally, a process operates to be blocked on the semaphore and waits for a release of a lock on data in the shared memory, after the TAS component has performed a number of TAS operations and failed to obtain the lock. | 2013-04-04 |
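The spin-then-block pattern in this abstract can be sketched as follows. This is a simplified sketch: a fixed spin limit stands in for the patent's self-tuning policy, a non-blocking `threading.Lock` stands in for the TAS assembly operation, and all names are assumptions.

```python
import threading


class SpinThenBlockLock:
    """Bounded test-and-set spinning with a semaphore fallback (sketch)."""

    def __init__(self, spin_limit: int = 100):
        self.spin_limit = spin_limit
        self._flag = threading.Lock()        # stands in for the TAS word
        self._sem = threading.Semaphore(0)   # waiters block here
        self._meta = threading.Lock()        # guards the waiter count
        self._waiters = 0

    def acquire(self):
        while True:
            for _ in range(self.spin_limit):          # TAS attempts
                if self._flag.acquire(blocking=False):
                    return
            with self._meta:
                self._waiters += 1
            if self._flag.acquire(blocking=False):    # last try before sleeping
                with self._meta:
                    if self._waiters > 0:
                        self._waiters -= 1
                return
            self._sem.acquire()                       # block on the semaphore

    def release(self):
        self._flag.release()
        with self._meta:
            if self._waiters > 0:                     # wake one blocked waiter
                self._waiters -= 1
                self._sem.release()


lock = SpinThenBlockLock(spin_limit=10)
counter = 0

def worker():
    global counter
    for _ in range(1000):
        lock.acquire()
        counter += 1          # protected shared data
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter == 4000
```

A woken waiter loops back to spinning rather than assuming ownership, which keeps the sketch correct even when a semaphore permit arrives spuriously.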
20130086334 | SERIALLY CONNECTED MEMORY HAVING SUBDIVIDED DATA INTERFACE - A memory system has a controller. A plurality of memory devices are serially interconnected with the controller via an n-bit data interface. The memory system is configurable in a first mode to communicate each read and write operation between the controller and the memory devices using all n bits of the data interface. The memory system is configurable in a second mode to concurrently: communicate data associated with a first operation between the controller and a first target memory device using only m bits of the data interface, where m is less than n; and communicate data associated with a second operation between the controller and a second target memory device using the remaining n-m bits of the data interface. A memory device, a memory controller, and a method are also described. | 2013-04-04 |
20130086335 | MEMORY SYSTEM AND MEMORY INTERFACE DEVICE - A memory access source regards a plurality of memory circuits as a single memory circuit and transmits a row address and a column address in time division to an access control circuit. The access control circuit performs a speculative access to the plurality of memory circuits when receiving the row address, and performs an access to a memory circuit which is specified by the column address after receiving the column address and sends a cancel command of the speculative access to the other memory circuits outside the target. Or, in the case of read access, the access control circuit receives read data from the plurality of memory circuits and discards the read data of the memory circuits outside the target specified by the column address. | 2013-04-04 |
20130086336 | SCALABLE STORAGE DEVICES - Techniques using scalable storage devices represent a plurality of host-accessible storage devices as a single logical interface, conceptually aggregating storage implemented by the devices. A primary agent of the devices accepts storage requests from the host using a host-interface protocol, processing the requests internally and/or forwarding the requests as sub-requests to secondary agents of the storage devices using a peer-to-peer protocol. The secondary agents accept and process the sub-requests, and report sub-status information for each of the sub-requests to the primary agent and/or the host. The primary agent optionally accumulates the sub-statuses into an overall status for providing to the host. Peer-to-peer communication between the agents is optionally used to communicate redundancy information during host accesses and/or failure recoveries. Various failure recovery techniques reallocate storage, reassign agents, recover data via redundancy information, or any combination thereof. | 2013-04-04 |
20130086337 | MAINTAINING A TIMESTAMP-INDEXED RECORD OF MEMORY ACCESS OPERATIONS - A memory management system determines a timestamp for a memory access operation that accesses a block of data, and uses the timestamp to access a timestamp-indexed record. The timestamp-indexed record includes a plurality of record arrays, each of which corresponds to a different time range, and includes one or more record entries of a different array-specific time duration. The system selects a record entry that indicates a time range associated with the timestamp, and that indicates an amount of memory accessed during the indicated time range. The system then updates the selected record entry to account for the memory size of the block of data. | 2013-04-04 |
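The multi-resolution record in this abstract, recent history in fine-grained entries and older history in coarser ones, can be sketched as below; the two-array layout, time ranges, and entry durations are illustrative assumptions.

```python
class TimestampIndexedRecord:
    """Sketch of record arrays with array-specific entry durations."""

    def __init__(self):
        # Each record array covers a different time range with a different
        # per-entry duration: 1-second entries for the first minute,
        # 60-second entries out to one hour.
        self.arrays = [
            {"start": 0, "end": 60, "entry_seconds": 1, "entries": {}},
            {"start": 60, "end": 3600, "entry_seconds": 60, "entries": {}},
        ]

    def _slot(self, timestamp: int):
        for array in self.arrays:
            if array["start"] <= timestamp < array["end"]:
                return array, (timestamp - array["start"]) // array["entry_seconds"]
        raise ValueError("timestamp outside all record arrays")

    def record_access(self, timestamp: int, size: int) -> None:
        """Account the memory size of an accessed block in the right entry."""
        array, slot = self._slot(timestamp)
        array["entries"][slot] = array["entries"].get(slot, 0) + size

    def bytes_accessed(self, timestamp: int) -> int:
        """Amount of memory accessed during the entry covering `timestamp`."""
        array, slot = self._slot(timestamp)
        return array["entries"].get(slot, 0)


rec = TimestampIndexedRecord()
rec.record_access(5, 4096)
rec.record_access(5, 4096)
assert rec.bytes_accessed(5) == 8192     # two accesses in one 1-second entry
rec.record_access(120, 512)
assert rec.bytes_accessed(130) == 512    # 120 and 130 share a 60-second entry
```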
20130086338 | LINKING CODE FOR AN ENHANCED APPLICATION BINARY INTERFACE (ABI) WITH DECODE TIME INSTRUCTION OPTIMIZATION - A code sequence made up of multiple instructions and specifying an offset from a base address is identified in an object file. The offset from the base address corresponds to an offset location in a memory configured for storing an address of a variable or data. The identified code sequence is configured to perform a memory reference function or a memory address computation function. It is determined that the offset location is within a specified distance of the base address and that a replacement of the identified code sequence with a replacement code sequence will not alter program semantics. The identified code sequence in the object file is replaced with the replacement code sequence that includes a no-operation (NOP) instruction or has fewer instructions than the identified code sequence. Linked executable code is generated based on the object file and the linked executable code is emitted. | 2013-04-04 |
20130086339 | METHOD AND APPARATUS FOR HIGH BANDWIDTH DICTIONARY COMPRESSION TECHNIQUE USING DELAYED DICTIONARY UPDATE - Method, apparatus, and systems employing novel delayed dictionary update schemes for dictionary-based high-bandwidth lossless compression. A pair of dictionaries having entries that are synchronized and encoded to support compression and decompression operations are implemented via logic at a compressor and decompressor. The compressor/decompressor logic operates in a cooperative manner, including implementing the same dictionary update schemes, resulting in the data in the respective dictionaries being synchronized. The dictionaries are also configured with replaceable entries, and replacement policies are implemented based on matching bytes of data within sets of data being transferred over the link. Various schemes are disclosed for entry replacement, as well as a delayed dictionary update technique. The techniques support line-speed compression and decompression using parallel operations resulting in substantially no latency overhead. | 2013-04-04 |
20130086340 | MANAGING STORAGE DEVICES IN A CLOUD STORAGE ENVIRONMENT - A method of managing storage devices including storage resources that are virtualized and integrated into corresponding intermediate volumes, wherein the intermediate volumes are virtualized and integrated into individual logical volumes. The method comprises acquiring storage resource requirements presented to the logical volumes where the requirements comprise redundancy, obtaining storage resources available from respective intermediate volumes, selecting intermediate volumes to satisfy the storage resource requirements based on the requirements and available storage resources, where a minimum number of the intermediate volumes is determined based on the required redundancy, and storing user data in selected intermediate volumes based on the required redundancy. | 2013-04-04 |
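The selection step above picks intermediate volumes so that a redundancy requirement fixes the minimum number of volumes chosen. A small sketch of one plausible greedy policy (one copy per volume, largest free capacity first); the function name, inputs, and policy are illustrative assumptions, not the application's method:

```python
def select_intermediate_volumes(available, required_capacity, redundancy):
    """Pick `redundancy` intermediate volumes, each able to hold one
    copy of the user data. Greedy: largest free capacity first.
    This policy is an assumption for illustration."""
    if redundancy > len(available):
        raise ValueError("not enough intermediate volumes for the required redundancy")
    candidates = sorted(available.items(), key=lambda kv: kv[1], reverse=True)
    chosen = []
    for name, free in candidates:
        if free >= required_capacity:
            chosen.append(name)
        if len(chosen) == redundancy:
            return chosen
    raise ValueError("capacity requirement cannot be met")

# Hypothetical intermediate volumes with free capacity in GB.
vols = {"iv0": 500, "iv1": 300, "iv2": 800, "iv3": 100}
picked = select_intermediate_volumes(vols, required_capacity=250, redundancy=2)
```

With a redundancy of 2, the two largest volumes that fit the requirement are selected, so user data is stored in two independent intermediate volumes.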
20130086341 | BACKUP STORAGE MANAGEMENT - Methods, systems, and computer-readable media with executable instructions stored thereon for backup storage management are provided. A utilization threshold can be defined for each of a number of Virtual Tape Libraries (VTLs). A number of slipped backup jobs can be identified, wherein each of the number of slipped backup jobs is associated with one of the number of VTLs. A number of storage statistics for each of the number of VTLs can be collected and storage can be allocated for each of the number of slipped backup jobs via a VTL backup storage manager that analyzes the utilization threshold, the number of slipped backup jobs, and the number of storage statistics for each of the number of VTLs. | 2013-04-04 |
20130086342 | MAINTAINING MULTIPLE TARGET COPIES - Provided are techniques for maintaining instant virtual copies. A request to perform an instant virtual copy operation to create an instant virtual copy from a first volume to a new volume is received. It is determined that the first volume has not been modified since a last instant virtual copy operation has been performed. It is determined whether an intermediate volume and an intermediate map have already been created. In response to determining that the intermediate volume and the intermediate map have not already been created, the intermediate volume and the intermediate map are created, the intermediate volume is made dependent on the first volume in a dependency chain based on the intermediate map, and the new volume is made dependent on the intermediate volume in the dependency chain. | 2013-04-04 |
20130086343 | MAINTAINING MULTIPLE TARGET COPIES - Provided are techniques for maintaining instant virtual copies. A request to perform an instant virtual copy operation to create an instant virtual copy from a first volume to a new volume is received. It is determined that the first volume has not been modified since a last instant virtual copy operation has been performed. It is determined whether an intermediate volume and an intermediate map have already been created. In response to determining that the intermediate volume and the intermediate map have not already been created, the intermediate volume and the intermediate map are created, the intermediate volume is made dependent on the first volume in a dependency chain based on the intermediate map, and the new volume is made dependent on the intermediate volume in the dependency chain. | 2013-04-04 |
20130086344 | STORAGE SYSTEM AND STORAGE SYSTEM NETWORK - A storage system includes a first storage unit, a second storage unit and a controller to receive a write request for updated data to the first storage unit from the host and write the updated data into the first storage area. When the controller determines that there is no free area in a storage area to be processed in the second storage unit, the controller changes the storage area to be processed to another storage area to be processed and instructs a change of the storage area to be processed to another storage system connected to the host. The controller also reads the updated data from the first storage unit and transmits the updated data and writing-destination information relating to the updated data to the other storage system for backup. | 2013-04-04 |
20130086345 | STORAGE APPARATUS, CONTROL APPARATUS, AND STORAGE APPARATUS CONTROL METHOD - In a storage apparatus, when a received backup instruction is a backup start instruction, a control unit performs a backup process from a position indicated by an available area start pointer. When the received backup instruction is a backup end instruction, the control unit releases an allocated backup area. In addition, the control unit determines whether the released backup area is adjacent to a position indicated by an available area end pointer. If determining that the released backup area is adjacent to the position indicated by the available area end pointer, the control unit moves the available area end pointer indicating the end of an available area to the end of the released backup area. | 2013-04-04 |
20130086346 | DATA STORAGE APPARATUS - A method for controlling a storage apparatus connectable to a server, the storage apparatus including a first storage area and a second storage area for storing data. The method comprises: copying the data stored in the first storage area into the second storage area; when the storage apparatus receives from the server a command for accessing data stored in a location of either the first storage area or the second storage area, copying the data stored in the location of the first storage area addressed by the command into the corresponding location of the second storage area before execution of the command; and executing the command for accessing at least one of the location in the first storage area and the corresponding location in the second storage area. | 2013-04-04 |
20130086347 | SYSTEM AND METHOD FOR VIRTUALIZING BACKUP IMAGES - A facility for using images created by backup software to recreate an entire machine as it was at the point in time in the past when the backup was taken. The facility can be extended to bring up a set of machines which together serve some logical business function, as in a cluster or federated servers, and further extended so that an entire data center may be virtualized from backup images. The virtualized servers provided may serve as an alternate data center to stand in during a disaster or to meet maintenance windows, achieving low-cost Instant Disaster Recovery. A set of virtual machines may stand in for physical machines for a period of time and then be resynchronized or re-seeded to physical machines via a combination of bare-metal recovery and re-synchronizing from live LUNs that form the virtual machine disks. | 2013-04-04 |
20130086348 | Lock-Clustering Compilation for Software Transactional Memory - A lock-clustering compiler is configured to compile program code for a software transactional memory system. The compiler determines that a group of data structures are accessed together within one or more atomic memory transactions defined in the program code. In response to determining that the group is accessed together, the compiler creates an executable version of the program code that includes clustering code, which is executable to associate the data structures of the group with the same software transactional memory lock. The lock is usable by the software transactional memory system to coordinate concurrent transactional access to the group of data structures by multiple concurrent threads. | 2013-04-04 |
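The clustering step above associates a group of data structures, found to be accessed together in the same atomic transactions, with a single shared lock. A minimal runtime sketch of that association (the class, its method names, and the use of ordinary mutexes in place of STM locks are illustrative assumptions):

```python
import threading

class LockClusterTable:
    """Hypothetical sketch: objects that a compiler determined are
    accessed together inside the same atomic transactions get mapped
    to one shared lock, so a transaction acquires a single lock for
    the whole cluster instead of one lock per object."""
    def __init__(self):
        self._cluster_of = {}  # id(object) -> cluster lock

    def cluster(self, objects):
        lock = threading.Lock()
        for obj in objects:
            self._cluster_of[id(obj)] = lock
        return lock

    def lock_for(self, obj):
        # Objects never clustered fall back to a private lock.
        return self._cluster_of.setdefault(id(obj), threading.Lock())

table = LockClusterTable()
a, b, c = [], {}, set()
table.cluster([a, b])   # a and b accessed together -> same lock
```

Concurrent threads accessing `a` and `b` then coordinate through one lock, while `c`, never clustered with them, gets its own.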
20130086349 | COMPUTER SYSTEM - A computer system includes: a first storage apparatus; a second storage apparatus; a first volume of the first storage apparatus; and a second volume of the second storage apparatus; wherein the first volume and the second volume have a copy pair relationship and a host system recognizes the second volume as the same volume as the first volume; and wherein the first storage apparatus sends reservation information of the first volume to the second storage apparatus; and the second storage apparatus controls access from the host system on the basis of the received reservation information. | 2013-04-04 |
20130086350 | METHOD AND SYSTEM FOR ENHANCED PERFORMANCE IN SERIAL PERIPHERAL INTERFACE - A method of conducting an operation in an integrated circuit having a plurality of memory cells includes receiving an operating command for the memory cells and receiving a first address segment associated with the memory cells in at least one clock cycle after receiving the operating command. The method further includes receiving a first performance enhancement indicator in at least one clock cycle after ending the first address segment while before starting to transfer data, for determining whether an enhanced operation is to be performed. | 2013-04-04 |
20130086351 | OBTAINING ADDITIONAL DATA STORAGE FROM ANOTHER DATA STORAGE SYSTEM - A main data storage system has a main storage control and data storage, and a user interface, the main storage control in communication with a local storage control of a local data storage system with local data storage. In response to a request to increase data storage from the user interface, the main control determines whether the main data storage is out of space. If so, the main control sends a command to the local control to create data space in local data storage. The local control creates the data space and associates the data space with the main control; and, in response to the local control creating data space in the local data storage and notifying the main control, the main control updates its metadata with respect to the data space, creating the impression that all the data is stored in the main data storage. | 2013-04-04 |
20130086352 | DYNAMICALLY CONFIGURABLE STORAGE DEVICE - A method dynamically configures resources in a storage device. The method includes determining a requirement for supplementary resources for processing upcoming storage-specific commands associated with at least one of a plurality of logical units in the storage device. The method also includes identifying the type of supplementary resources required for the logical unit. Furthermore, the method includes determining whether unused resources of the identified resource type are present in a common pool of resources shared between a plurality of logical units, and dynamically configuring the common pool of resources among the plurality of logical units such that the unused resources of the identified resource type present in the common pool of resources are allocated to the logical unit as supplementary resources for processing the upcoming storage-specific commands. | 2013-04-04 |
20130086353 | VARIABLE LENGTH ENCODING IN A STORAGE SYSTEM - A system and method for maintaining a mapping table in a data storage subsystem. A data storage subsystem supports multiple mapping tables including a plurality of entries. Each of the entries comprises a tuple including a key. A data storage controller is configured to encode each tuple in the mapping table using a variable length encoding. Additionally, the mapping table may be organized as a plurality of time ordered levels, with each level including one or more mapping table entries. Further, a particular encoding of a plurality of encodings for a given tuple may be selected based at least in part on a size of the given tuple as unencoded, a size of the given tuple as encoded, and a time to encode the given tuple. | 2013-04-04 |
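One common family of variable-length encodings for integer keys of this kind is the LEB128-style varint (7 value bits per byte, high bit as a continuation flag). The application does not specify a particular scheme, so the sketch below is just one plausible encoding that trades bytes for encode time on small values:

```python
def encode_varint(n):
    """Unsigned LEB128-style variable-length encoding: 7 value bits
    per byte, continuation bit set on all but the last byte. This is
    an illustrative choice, not the scheme from the application."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)   # more bytes follow
        else:
            out.append(byte)          # final byte
            return bytes(out)

def decode_varint(data):
    """Return (value, bytes consumed)."""
    n, shift = 0, 0
    for i, byte in enumerate(data):
        n |= (byte & 0x7F) << shift
        if not byte & 0x80:
            return n, i + 1
        shift += 7
    raise ValueError("truncated varint")

encoded = encode_varint(300)   # small keys encode in few bytes
```

A key such as 300 fits in two bytes instead of a fixed eight-byte field, which is what makes per-tuple variable-length encoding attractive for large mapping tables.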
20130086354 | CACHE AND/OR SOCKET SENSITIVE MULTI-PROCESSOR CORES BREADTH-FIRST TRAVERSAL - Methods, apparatuses and storage devices associated with cache- and/or socket-sensitive breadth-first iterative traversal of a graph by parallel threads are disclosed. In embodiments, a vertices visited array (VIS) may be employed to track graph vertices visited. VIS may be partitioned into VIS sub-arrays, taking into consideration cache sizes of LLC, to reduce likelihood of evictions. In embodiments, potential boundary vertices arrays (PBV) may be employed to store potential boundary vertices for a next iteration, for vertices being visited in a current iteration. The number of PBV generated for each thread may take into consideration a number of sockets, over which the processor cores employed are distributed. In various embodiments, the threads may be load balanced; further data locality awareness to reduce inter-socket communication may be considered, and/or lock-and-atomic free update operations may be employed. Other embodiments may be disclosed or claimed. | 2013-04-04 |
20130086355 | Distributed Data Scalable Adaptive Map-Reduce Framework - A method, an apparatus and an article of manufacture for generating a distributed data scalable adaptive map-reduce framework for at least one multi-core cluster. The method includes partitioning a cluster into at least one computational group, determining at least one key-group leader within each computational group, performing a local combine operation at each computational group, performing a global combine operation at each of the at least one key-group leader within each computational group based on a result from the local combine operation, and performing a global map-reduce operation across the at least one key-group leader within each computational group. | 2013-04-04 |
20130086356 | Distributed Data Scalable Adaptive Map-Reduce Framework - A method for generating a distributed data scalable adaptive map-reduce framework for at least one multi-core cluster. The method includes partitioning a cluster into at least one computational group, determining at least one key-group leader within each computational group, performing a local combine operation at each computational group, performing a global combine operation at each of the at least one key-group leader within each computational group based on a result from the local combine operation, and performing a global map-reduce operation across the at least one key-group leader within each computational group. | 2013-04-04 |
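The two abstracts above describe the same two-phase scheme: a local combine inside each computational group, then a global combine across the per-group key-group leaders. A minimal sequential sketch of that data flow (the group layout and the reduce function are illustrative assumptions; a real framework would run the phases in parallel):

```python
from collections import defaultdict

def hierarchical_reduce(groups, reduce_fn):
    """Sketch of the two-phase combine. `groups` is a list of
    computational groups; each group is a list of per-node lists of
    (key, value) pairs. Phase 1 combines locally within each group;
    phase 2 combines the leaders' partial results globally."""
    # Phase 1: local combine inside each computational group;
    # the result plays the role of that group's key-group leader state.
    leaders = []
    for group in groups:
        local = defaultdict(list)
        for node_pairs in group:
            for key, value in node_pairs:
                local[key].append(value)
        leaders.append({k: reduce_fn(vs) for k, vs in local.items()})
    # Phase 2: global combine across the key-group leaders.
    merged = defaultdict(list)
    for partial in leaders:
        for key, value in partial.items():
            merged[key].append(value)
    return {k: reduce_fn(vs) for k, vs in merged.items()}

counts = hierarchical_reduce(
    [  # two computational groups, each with per-node (key, value) lists
        [[("a", 1), ("b", 2)], [("a", 3)]],
        [[("b", 4)], [("a", 5), ("b", 6)]],
    ],
    reduce_fn=sum,
)
```

The local phase shrinks each group's data to one partial result per key before any cross-group traffic, which is the point of performing the combine hierarchically on a multi-core cluster.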
20130086357 | STAGGERED READ OPERATIONS FOR MULTIPLE OPERAND INSTRUCTIONS - A central processing unit includes a register file having a plurality of read ports, a first execution unit having a first plurality of input ports, and logic operable to selectively couple different arrangements of the read ports to the input ports. A method for reading operands from a register file having a plurality of read ports by a first execution unit having a first plurality of input ports includes scheduling an instruction for execution by the first execution unit and selectively coupling a particular arrangement of the read ports to the input ports based on a type of the instruction. | 2013-04-04 |
20130086358 | COLLECTIVE OPERATION PROTOCOL SELECTION IN A PARALLEL COMPUTER - Collective operation protocol selection in a parallel computer that includes compute nodes may be carried out by calling a collective operation with operating parameters; selecting a protocol for executing the operation and executing the operation with the selected protocol. Selecting a protocol includes: iteratively, until a prospective protocol meets predetermined performance criteria: providing, to a protocol performance function for the prospective protocol, the operating parameters; determining whether the prospective protocol meets predefined performance criteria by evaluating a predefined performance fit equation, calculating a measure of performance of the protocol for the operating parameters; determining that the prospective protocol meets predetermined performance criteria and selecting the protocol for executing the operation only if the calculated measure of performance is greater than a predefined minimum performance threshold. | 2013-04-04 |
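The selection loop above evaluates a per-protocol performance function against the operating parameters and accepts the first prospective protocol whose calculated measure clears a minimum threshold. A compact sketch, with toy performance-fit functions keyed on message size (all names and formulas here are assumptions for illustration):

```python
def select_protocol(protocols, params, min_performance):
    """Iterate prospective protocols; feed the operating parameters to
    each protocol's performance function and select the protocol only
    if its calculated measure exceeds the minimum threshold."""
    for name, perf_fn in protocols:
        measure = perf_fn(params)
        if measure > min_performance:
            return name, measure
    return None, 0.0  # no prospective protocol met the criteria

# Hypothetical performance-fit equations for two collective protocols.
protocols = [
    ("eager",      lambda p: 10.0 / p["message_size"]),
    ("rendezvous", lambda p: p["message_size"] / 1024.0),
]
chosen, score = select_protocol(protocols, {"message_size": 4096},
                                min_performance=1.0)
```

For a 4096-byte message the eager fit scores below the threshold, so the loop moves on and selects the rendezvous protocol.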
20130086359 | Processor Hardware Pipeline Configured for Single-Instruction Address Extraction and Memory Access Operation - Memory access instructions, such as load and store instructions, are processed in a processor-based system. Processor hardware pipeline configurations enable efficient performance of memory access instructions, such as a pipeline configuration that enables, for a memory access operation request by a register-operand based virtual machine, computation of the memory location corresponding to a virtual-machine register by extracting a bit-field from the virtual-machine instruction and accessing (load or store) the computed memory location that represents a virtual register of the virtual-machine, in a single pass through the pipeline. Thus this processor hardware pipeline configuration enables a virtual machine register read/write operation to be performed by a single hardware processor instruction through a single pass in the processor hardware pipeline, for a register-operand based virtual machine. | 2013-04-04 |
20130086360 | FIFO Load Instruction - An instruction identifies a register and a memory location. Upon execution of the instruction by a processor, an item is loaded from the memory location and a shift and insert operation is performed to shift data in the register and to insert the item into the register. | 2013-04-04 |
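The shift-and-insert behavior above can be modeled in a few lines. The fixed register width and list representation are illustrative assumptions; in hardware this would be a single instruction shifting the register contents and inserting the loaded item at the head:

```python
def fifo_load(register, item, width=4):
    """Model of a FIFO load: shift existing register data one slot
    toward the tail and insert the newly loaded item at the head,
    keeping a fixed-width register (width is an assumed parameter)."""
    return ([item] + register)[:width]

reg = [0, 0, 0, 0]
memory = [10, 20, 30]
for addr in range(len(memory)):
    reg = fifo_load(reg, memory[addr])  # load from memory, shift, insert
```

After the three loads the register holds the most recent items in order, with the oldest entries shifted out once the width is exceeded.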
20130086361 | Scalable Decode-Time Instruction Sequence Optimization of Dependent Instructions - Producer-consumer instructions, comprising a first instruction and a second instruction in program order, are fetched requiring in-order execution, the second instruction is modified by the processor so that the first instruction and second instruction can be completed out-of-order, the modification comprising any one of extending an immediate field of the second instruction using immediate field information of the first instruction or providing a source location of the first instruction as an additional source location to source locations of the second instruction. | 2013-04-04 |
20130086362 | Managing a Register Cache Based on an Architected Computer Instruction Set Having Operand First-Use Information - A prefix instruction is executed and passes operands to a next instruction without storing the operands in an architected resource, such that the execution of the next instruction uses the operands provided by the prefix instruction to perform an operation. The operands may be a prefix instruction immediate field or a target register of the prefix instruction execution. | 2013-04-04 |
20130086363 | Computer Instructions for Activating and Deactivating Operands - An instruction set architecture (ISA) includes instructions for selectively indicating last-use architected operands having values that will not be accessed again, wherein architected operands are made active or inactive after an instruction specified last-use by an instruction, wherein the architected operands are made active by performing a write operation to an inactive operand, wherein the activation/deactivation may be performed by the instruction having the last-use of the operand or another (prefix) instruction. | 2013-04-04 |
20130086364 | Managing a Register Cache Based on an Architected Computer Instruction Set Having Operand Last-User Information - A multi-level register hierarchy is disclosed comprising a first level pool of registers for caching registers of a second level pool of registers in a system wherein programs can dynamically release and re-enable architected registers such that released architected registers need not be maintained by the processor, the processor accessing operands from the first level pool of registers, wherein a last-use instruction is identified as having a last use of an architected register before being released, the last-use architected register being released causes the multi-level register hierarchy to discard any correspondence of an entry to said last use architected register. | 2013-04-04 |
20130086365 | Exploiting an Architected List-Use Operand Indication in a Computer System Operand Resource Pool - A pool of available physical registers are provided for architected registers, wherein operations are performed that activate and deactivate selected architected registers, such that the deactivated selected architected registers need not retain values, and physical registers can be deallocated to the pool, wherein deallocation of physical registers is performed after a last-use by a designated last-use instruction, wherein the last-use information is provided either by the last-use instruction or a prefix instruction, wherein reads to deallocated architecture registers return an architected default value. | 2013-04-04 |
20130086366 | Register File with Embedded Shift and Parallel Write Capability - An apparatus includes a register file including a logical circuit. The register file is configured to perform one or more logical operations in conjunction with the logical circuit. The logical operation is performed in response to the register file receiving a register file control instruction. The register file control instruction is independent from an arithmetic logic unit (ALU) control instruction and a multiply-and-accumulate unit (MACU) control instruction. | 2013-04-04 |
20130086367 | Tracking operand liveliness information in a computer system and performance function based on the liveliness information - Operand liveness state information is maintained during context switches for current architected operands of executing programs, the current operand state information indicating whether corresponding current operands are any one of enabled or disabled for use by a first program module, the first program module comprising machine instructions of an instruction set architecture (ISA) for disabling current architected operands, wherein a current operand is accessed by a machine instruction of said first program module, the accessing comprising using the current operand state information to determine whether a previously stored current operand value is accessible by the first program module. | 2013-04-04 |
20130086368 | Using Register Last Use Information to Perform Decode-Time Computer Instruction Optimization - Two computer machine instructions are fetched for execution, but replaced by a single optimized instruction to be executed, wherein a temporary register used by the two instructions is identified as a last-use register, where a last-use register has a value that is not to be accessed by later instructions, whereby the two computer machine instructions are replaced by a single optimized internal instruction for execution, the single optimized instruction not including the last-use register. | 2013-04-04 |
20130086369 | COMPILING CODE FOR AN ENHANCED APPLICATION BINARY INTERFACE (ABI) WITH DECODE TIME INSTRUCTION OPTIMIZATION - Compiling code for an enhanced application binary interface (ABI) including identifying, by a computer, a code sequence configured to perform a variable address reference table function including an access to a variable at an offset outside of a location in a variable address reference table. The code sequence includes an internal representation (IR) of a first instruction and an IR of a second instruction. The second instruction is dependent on the first instruction. A scheduler cost function associated with at least one of the IR of the first instruction and the IR of the second instruction is modified. The modifying includes generating a modified scheduler cost function that is configured to place the first instruction next to the second instruction. An object file is generated responsive to the modified scheduler cost function. The object file includes the first instruction placed next to the second instruction. The object file is emitted. | 2013-04-04 |
20130086370 | COMBINED BRANCH TARGET AND PREDICATE PREDICTION - Embodiments provide methods, apparatus, systems, and computer readable media associated with predicting predicates and branch targets during execution of programs using combined branch target and predicate predictions. The predictions may be made using one or more prediction control flow graphs which represent predicates in instruction blocks and branches between blocks in a program. The prediction control flow graphs may be structured as trees such that each node in the graphs is associated with a predicate instruction, and each leaf associated with a branch target which jumps to another block. During execution of a block, a prediction generator may take a control point history and generate a prediction. Following the path suggested by the prediction through the tree, both predicate values and branch targets may be predicted. Other embodiments may be described and claimed. | 2013-04-04 |
20130086371 | METHOD FOR DEVICE-LESS OPTION-ROM BIOS LOAD AND EXECUTION - An invention is provided for loading an Option-ROM BIOS into memory without the need of an associated hardware device, such as a PCI card. The invention includes loading code from a boot sector of a designated boot device into memory, wherein the code includes Option-ROM location data denoting a location of an Option-ROM BIOS. The Option-ROM BIOS then is loaded into memory utilizing the Option-ROM location data. As above, the Option-ROM BIOS includes MBR location data denoting a location of the MBR associated with the OS. Thus, the MBR can be loaded into memory utilizing the MBR location data in the Option-ROM BIOS and control can be transferred to the MBR. Thereafter, the OS system files are loaded into memory utilizing the MBR. | 2013-04-04 |
20130086372 | INFORMATION PROCESSING APPARATUS AND BOOT CONTROL METHOD - According to one embodiment, an information processing apparatus, to which devices are connected, includes a device information detector, a device setting module and an information storage module. The device information detector detects device information from the devices when the apparatus is booted at a first timing. The device setting module sets the devices to be ready to use using the information. The information storage module stores the information in a nonvolatile memory. The device setting module sets the devices to be ready to use using the information in the nonvolatile memory when the apparatus is booted at a second timing after the first timing. | 2013-04-04 |
20130086373 | CUSTOMIZED CONTENT FOR ELECTRONIC DEVICES - A method for providing customized content to an electronic device. The method may include activating the electronic device through a packaging that substantially surrounds the electronic device, without substantially damaging or removing the packaging. Once the device is activated, the method may include connecting the electronic device to content and providing the content to the electronic device without substantially damaging or removing the packaging. | 2013-04-04 |
20130086374 | FINE-GRAINED CAPACITY MANAGEMENT OF COMPUTING ENVIRONMENTS THAT MAY SUPPORT A DATABASE - Computing capacity of a computing environment can be managed by controlling its associated processing capacity based on a target (or desired) capacity. In addition, fine-grained control over the processing capacity can be exercised. For example, a computing system can change the processing capacity (e.g., processing rate) of at least one processor operating based on a target capacity. The computing system may also be operable to change the processing capacity based on a measured processing capacity (e.g., a measured average of various processing rates of a processor taken over a period of time when a processor may have been operating at different processing rates over that period). By way of example, the processing rate of a processor can be switched between 1/8 and 2/8 of a maximum processing rate to achieve virtually any effective processing rate between them. | 2013-04-04 |
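The 1/8-to-2/8 example above works by time-averaging: spending a fraction of time at the high rate and the rest at the low rate yields any effective rate between them. A one-function sketch of that arithmetic (the function name is an illustration, not from the application):

```python
def duty_cycle_for_target(low, high, target):
    """Fraction of time to spend at the high rate so the time-average
    rate equals the target: target = f*high + (1-f)*low, solved for f."""
    if not low <= target <= high:
        raise ValueError("target outside the achievable range [low, high]")
    return (target - low) / (high - low)

# Switching between 1/8 and 2/8 of the maximum rate to hit 3/16:
f = duty_cycle_for_target(1 / 8, 2 / 8, 3 / 16)
average = f * (2 / 8) + (1 - f) * (1 / 8)
```

Here an effective rate of 3/16 of maximum is achieved by running half the time at 2/8 and half at 1/8, illustrating how two coarse rates give virtually continuous effective capacity.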
20130086375 | PERSONAL POINT OF SALE - Embodiments provided herein include techniques for enabling a mobile device to communicate with smart media in a manner that can sidestep the secure element of the mobile device—and the costs associated with it. The mobile device can communicate with the smart media using near-field communication (NFC) by creating an encrypted connection with a remote computer while bypassing a secure element of the mobile device. This allows the mobile device to provide point-of-sale (POS) functionality by reading and/or writing to the smart media, without compromising the security of the smart media. | 2013-04-04 |
20130086376 | SECURE INTEGRATED CYBERSPACE SECURITY AND SITUATIONAL AWARENESS SYSTEM - An integrated cyber security system for an organization, such as a governmental or private organization, is disclosed. The security system is installable across an organization and configured to monitor and protect against cyberspace or electronic data vulnerabilities. The security system includes a situational awareness application configurable to receive one or more definitions describing known electronic data access points associated with the organization. The system also includes a communication security system providing cryptographic communications among each of a plurality of users affiliated with the organization and configured to establish a plurality of communities of interest. The system also includes a reporting module configured to generate a plurality of reports based on information gathered across the organization from the situational awareness application and communicate one or more of the plurality of reports to one or more of the communities of interest. | 2013-04-04 |
20130086377 | PROCESSING A CERTIFICATE SIGNING REQUEST IN A DISPERSED STORAGE NETWORK - A method begins by a requesting device transmitting a certificate signing request to a managing unit, wherein the certificate signing request includes fixed certificate information and suggested certificate information. The method continues with the managing unit forwarding the certificate signing request to a certificate authority and receiving a signed certificate from the certificate authority, wherein the signed certificate includes a certificate and a certification signature and wherein the certificate includes the fixed certificate information and determined certificate information based on the suggested certificate information. The method continues with the managing unit interpreting the fixed certificate information of the signed certificate to identify the requesting device and forwarding the signed certificate to the identified requesting device. | 2013-04-04 |
20130086378 | PROXY SYSTEM FOR SECURITY PROCESSING WITHOUT ENTRUSTING CERTIFIED SECRET INFORMATION TO A PROXY - First communication units use a public key thereof certified by a certification authority on a PKI (Public Key Infrastructure), which is held by the first communication units in advance, and a secret key of the first communication units or delegation information generated by using secret information, as public key certificate, of the first communication units to thereby allow a proxy server to perform security processing, i.e. key exchange processing, authentication processing or processing for providing compatibility of encryption schemes, between the first communication units and a second communication unit on behalf of the first communication units. | 2013-04-04 |
20130086379 | COMMUNICATION APPARATUS, RECEPTION CONTROL METHOD, AND TRANSMISSION CONTROL METHOD - A lookaside-type communication apparatus and associated reception and transmission control methods enable high-rate communication of packets containing encrypted data. On reception, receive data including encrypted data are supplied to an encryption data processing part and then to a security part through a second bus. The security part converts the encrypted data to plain-text data, which are supplied to the control part through the system bus. On transmission, transmit data whose data body includes plain-text data to be encrypted are supplied to the security part. The security part converts the plain-text data to encrypted data, and the transmit data, now carrying the encrypted data body, are supplied to the encryption data processing part through the second bus. The transmit data are sent in the form of a packet by the transmission and reception part. | 2013-04-04 |
20130086380 | SYSTEM AND METHOD FOR FACILITATING COMMUNICATIONS BASED ON TRUSTED RELATIONSHIPS - Disclosed herein are systems, methods, and non-transitory computer-readable storage media for facilitating communications based on trusted relationships. A system configured to practice the method first receives a communication request from a second communication device for a specific resource, wherein the communication request is based, at least in part, on trust information generated by a previously established trusted relationship. The system confirms, via an access to a trust database and using the trust information, (1) an identity of a sender of the communication request and (2) access permissions for a requested resource. Then, if the identity and the access permissions are confirmed, the system establishes communications between the first communications device and the second communications device in response to the communication request according to the specific resource. The trust information can include a trust user ID and a trust key. | 2013-04-04 |
20130086381 | MULTI-SERVER AUTHENTICATION TOKEN DATA EXCHANGE - A client is authenticated by a server receiving an initial request from the client at the beginning of a session. The server receiving the initial request generates an authentication token and returns the authentication token to the client in response to the client being authenticated. The user's credentials used to authenticate the client are stored in the authentication token along with other information. After receiving the authentication token from the server that generated the authentication token, the client passes the authentication token with each of the future requests to the pool of servers. Using the client to pass the transferrable authentication token, the servers share the user's identity/credentials in a decentralized manner. Any server from the shared pool of servers that receives a subsequent client request is able to decrypt the token and re-authenticate the user without having to prompt the client for authentication credentials again. | 2013-04-04 |
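The decentralized re-authentication above can be sketched in Python. The shared pool key, the XOR keystream, and the HMAC tag are all illustrative stand-ins; the patent does not specify the cipher used to seal the token.

```python
import base64, hashlib, hmac, json, time

POOL_KEY = b"shared-by-every-server-in-the-pool"  # hypothetical pool-wide secret


def _keystream(key: bytes, n: int) -> bytes:
    """Derive n bytes of keystream from the pool key (illustrative)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]


def issue_token(user: str, ttl: int = 3600) -> bytes:
    """The server that first authenticates the client seals the user's
    identity and an expiry into a token the client carries thereafter."""
    payload = json.dumps({"user": user, "exp": time.time() + ttl}).encode()
    ct = bytes(a ^ b for a, b in zip(payload, _keystream(POOL_KEY, len(payload))))
    tag = hmac.new(POOL_KEY, ct, hashlib.sha256).digest()
    return base64.b64encode(tag + ct)


def check_token(token: bytes) -> str:
    """Any server in the pool re-authenticates from the token alone,
    without prompting the client for credentials again."""
    raw = base64.b64decode(token)
    tag, ct = raw[:32], raw[32:]
    if not hmac.compare_digest(tag, hmac.new(POOL_KEY, ct, hashlib.sha256).digest()):
        raise ValueError("token tampered with")
    payload = json.loads(bytes(a ^ b for a, b in zip(ct, _keystream(POOL_KEY, len(ct)))))
    if payload["exp"] < time.time():
        raise ValueError("token expired")
    return payload["user"]
```

Because verification needs only the pool-wide key, no central session store is consulted on subsequent requests, which is the decentralized sharing the abstract describes.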
20130086382 | SYSTEMS AND METHODS FOR SECURELY TRANSFERRING PERSONAL IDENTIFIERS - A system for transferring secured data has an authentication facilitator that transmits data indicative of a graphical key pad to a remote display device of a user computing device and, in response, receives from the user computing device icon location data indicative of locations of icons selected by a user. Additionally, the authentication facilitator recovers a personal identifier (PI) from the icon location data, translates the recovered PI to obtain a translated PI, and transmits the translated PI. The system further has a partner computing apparatus that receives the translated PI and allows the user access to a secured area based upon the translated PI. | 2013-04-04 |
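A minimal sketch of the graphical-keypad exchange, assuming the icon layout is randomized per session and that the "translation" step is a salted hash (the patent leaves the translation method unspecified):

```python
import hashlib, random


def make_keypad(rng: random.Random) -> list:
    """Randomized layout sent to the user's display device:
    screen position i shows the digit digits[i]."""
    digits = list("0123456789")
    rng.shuffle(digits)
    return digits


def recover_pi(keypad: list, clicked_positions: list) -> str:
    """Server side: map reported icon locations back to the digits
    that were shown at those locations, recovering the PI."""
    return "".join(keypad[p] for p in clicked_positions)


def translate_pi(pi: str, salt: bytes = b"partner") -> str:
    """Hypothetical translation: the partner system receives only a
    salted digest, never the raw personal identifier."""
    return hashlib.sha256(salt + pi.encode()).hexdigest()
```

The raw PI never travels over the wire: the client reports only screen positions, which are meaningless without the session's keypad layout.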
20130086383 | VIRTUAL MACHINE IMAGES ENCRYPTION USING TRUSTED COMPUTING GROUP SEALING - A host machine provisions a virtual machine from a catalog of stock virtual machines. The host machine instantiates the virtual machine. The host machine configures the virtual machine, based on customer inputs, to form a customer's configured virtual machine. The host machine creates an image from the customer's configured virtual machine. The host machine unwraps a sealed customer's symmetric key to form a customer's symmetric key. The host machine encrypts the customer's configured virtual machine with the customer's symmetric key to form an encrypted configured virtual machine. The host machine stores the encrypted configured virtual machine to non-volatile storage. | 2013-04-04 |
20130086384 | METHOD AND SYSTEM FOR POWER MANAGEMENT USING ICMPv6 OPTIONS - A method and system that facilitates power management over an IPv6 network connection is described. A first host having an application creates a power management option for managing power management settings of one or more second hosts, which are in network communication with the first host. A neighbor solicitation request is sent with the power management option to the one or more second hosts, wherein the power management option requests the power management settings of the one or more second hosts. A table of the power management settings for each of the one or more second hosts is generated from the responses received from the neighbor solicitation request, and the power management settings are applied to the one or more second hosts. | 2013-04-04 |
20130086385 | System and Method for Providing Hardware-Based Security - In some implementations, a method for managing resources of a device includes receiving, by a system-on-chip (SoC) in the device, from a customer, a request to access one or more resources of the SoC. The SoC includes a non-volatile memory (NVM), a feature register, programming history, and a plurality of resources including the one or more resources. A customer identifier (CID) is identified based on the received request. The customer is authenticated using a certificate including the CID. Whether the SoC grants, to the customer, access to the one or more resources is determined using the feature register and the CID. | 2013-04-04 |
20130086386 | METHOD AND SYSTEM FOR RESTRICTING EXECUTION OF VIRTUAL APPLICATIONS TO A MANAGED PROCESS ENVIRONMENT - Methods and systems for restricting the launch of virtual application files. In one embodiment, a launching application is signed with a digital signature. When the launching application launches a runtime engine and instructs it to execute an application file, the runtime engine determines whether an entity identifier associated with the launching application identifies an authorized entity. If the entity identifier identifies an authorized entity and the digital signature is valid, the runtime engine executes the application file. In another embodiment, a ticket is transmitted to the launching application along with an instruction to launch the application file. The ticket includes a digital signature and an expiration date. The launching application communicates the ticket to the runtime engine, which will execute the application file only if the digital signature is valid and a current date is not later than the expiration date. | 2013-04-04 |
20130086387 | Method for Certifying and Verifying Digital Web Content Using Public Cryptography - There is provided a method of, computer programs for and apparatus for providing and accessing digital content such as a news item. A news provider generates a news item, creates a digitally signed version of the news item and packages them together with a digital certificate issued by a certificate authority containing the public key required to decrypt the digitally signed version. The package is posted to a server and is transmitted, or made available for transmission, over a public data network together with a computer program for verifying the news item. A receiving party receives, over the public data network, the package at a client device and is provided with means for launching, and if necessary first downloading, the verifying program. The verifying program uses the public key contained in the certificate to verify the digitally signed news item. Before being first used to verify a news item, the verifying program receives a shared secret from the receiving party which is stored locally to the client device and is used by the verifying program to confirm that it performed the verification process. | 2013-04-04 |
20130086388 | CREDENTIALS MANAGEMENT - An encrypted file is decrypted to gain access to a stored hash value for a credentials setting component. A test hash value of the credentials setting component is formed. Before decrypting a set of encrypted credentials to form decrypted credentials, it is required that the test hash value of the credentials setting component match the stored hash value of the credentials setting component. The decrypted credentials are then passed to the credentials setting component to set credentials that instructions are to be executed under. | 2013-04-04 |
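The hash-gated decryption step can be sketched as follows. `decrypt` stands in for whatever cipher protects the stored credentials; the point is that decryption is refused unless the credentials-setting component matches the hash sealed alongside them.

```python
import hashlib


def unlock_credentials(component_bytes: bytes, stored_hash: str, decrypt):
    """Gate credential decryption on the integrity of the component that
    will consume the credentials (a sketch; `decrypt` is a caller-supplied
    stand-in for the deployment's actual decryption routine)."""
    # Form a test hash of the credentials setting component ...
    test_hash = hashlib.sha256(component_bytes).hexdigest()
    # ... and require it to match the stored hash before decrypting.
    if test_hash != stored_hash:
        raise RuntimeError("credentials setting component was modified")
    return decrypt()
```

A tampered component therefore never receives the decrypted credentials, closing the window where a swapped-in binary could capture them.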
20130086389 | Security Token and Authentication System - Techniques are provided for entering a secret into a security token using an embedded tactile sensing user interface with the purpose of verifying the secret against a stored representation of the same secret. In particular, an embodiment of the security token according to the invention comprises a tactile sensing user interface being arranged to receive a user-encoded secret, a decoding unit being arranged to generate a decoded secret by decoding the user-encoded secret, a comparison unit being arranged to compare the decoded secret with a copy of the secret stored in the token in order to verify the authenticity of a user. Thereby, the security token provides on-card matching functionality. | 2013-04-04 |
20130086390 | System and Method of Securing Private Health Information - A system and method for the secure processing of private health information. Fully homomorphically encrypted private health information, along with a request to process that information, is transmitted to a third party who performs operations on the encrypted private health information in accordance with the request, yielding an encrypted result. The encrypted result may be decrypted only by the party in possession of the corresponding private key. The invention enables encrypted private health information to be processed by third parties while preventing them from decrypting it. | 2013-04-04 |
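As an illustration of processing encrypted data without decrypting it, here is a toy Paillier scheme. Paillier is only additively homomorphic, standing in for the fully homomorphic scheme the patent assumes, and the primes are deliberately tiny; real deployments use large keys from a vetted library.

```python
from math import gcd

# Toy Paillier keypair (illustration only; requires Python 3.8+ for pow(x, -1, n)).
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)


def L(x: int) -> int:
    return (x - 1) // n


mu = pow(L(pow(g, lam, n2)), -1, n)  # decryption constant


def encrypt(m: int, r: int) -> int:
    """Encrypt m (< n) with randomness r coprime to n."""
    return (pow(g, m, n2) * pow(r, n, n2)) % n2


def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n


# A third party adds two encrypted values it cannot read:
# multiplying ciphertexts corresponds to adding plaintexts.
c_sum = (encrypt(3, 5) * encrypt(4, 7)) % n2
```

`decrypt(c_sum)` recovers 3 + 4 = 7, yet only the holder of the private key (`lam`, `mu`) can perform that decryption, which is the property the abstract relies on.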
20130086391 | SYSTEM, ARCHITECTURE AND METHOD FOR SECURE ENCRYPTION AND DECRYPTION - There is disclosed a system, architecture and method for encryption and decryption of a record. In an embodiment, a method comprises identifying a target record to be encrypted; analyzing one or more clear text linguistic attributes of the target record; generating a linguistic encryption key based on the analysis of one or more clear text linguistic attributes; and encrypting the target record with the linguistic encryption key, the linguistic encryption key operable to decrypt the encrypted target record in a reverse operation. | 2013-04-04 |
20130086392 | INCREASING DATA SECURITY IN ENTERPRISE APPLICATIONS BY USING FORMATTING, CHECKSUMS, AND ENCRYPTION TO DETECT TAMPERING OF A DATA BUFFER - A method, system, and computer program product for using hidden buffer formatting and passing obfuscated encryption key values to detect tampering with and/or prevent unauthorized inspection of a data buffer. The method comprises receiving an unencrypted sequence to be encrypted, selecting a layout version to associate to an encryption method and a checksum method, then encrypting the unencrypted sequence using the encryption method to form an encrypted sequence, and calculating, using the checksum calculation method, an unencrypted sequence checksum. Further, storing the encrypted sequence to form a hidden buffer payload, which hidden buffer has its own hidden buffer payload checksum. Encryption keys are not stored in program data, nor sent in the hidden buffers. Instead, obfuscated encryption key values are used to generate keys on the fly. The receiver of a hidden buffer and obfuscated encryption key values can detect tampering or data corruption of the payload for further processing. | 2013-04-04 |
20130086393 | INCREASING DATA SECURITY IN ENTERPRISE APPLICATIONS BY OBFUSCATING ENCRYPTION KEYS - A method, system, and computer program product for using hidden buffer formatting and passing obfuscated encryption key values to detect tampering with and/or prevent unauthorized inspection of a data buffer. The method comprises receiving an unencrypted sequence to be encrypted, selecting a layout version to associate to an encryption method and a checksum method, then encrypting the unencrypted sequence using the encryption method to form an encrypted sequence, and calculating, using the checksum calculation method, an unencrypted sequence checksum. Further, storing the encrypted sequence to form a hidden buffer payload, which hidden buffer has its own hidden buffer payload checksum. Encryption keys are not stored in program data, nor sent in the hidden buffers. Instead, obfuscated encryption key values are used to generate keys on the fly. The receiver of a hidden buffer and obfuscated encryption key values can detect tampering or data corruption of the payload for further processing. | 2013-04-04 |
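The hidden-buffer flow described in these two abstracts can be sketched as follows, assuming SHA-256 key derivation from the obfuscated values, an XOR keystream, and a CRC-32 payload checksum; all three are illustrative stand-ins for the unspecified layout-version methods.

```python
import hashlib, zlib


def derive_key(obfuscated_parts: list) -> bytes:
    """The key is never stored or sent; both sides regenerate it on the
    fly from obfuscated key values (hypothetical derivation)."""
    return hashlib.sha256(b"|".join(obfuscated_parts)).digest()


def _xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


def seal(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt the sequence, then append a checksum over the hidden
    buffer payload so the receiver can detect tampering."""
    payload = _xor(plaintext, key)
    return payload + zlib.crc32(payload).to_bytes(4, "big")


def open_buffer(buf: bytes, key: bytes) -> bytes:
    """Verify the payload checksum before decrypting."""
    payload, csum = buf[:-4], int.from_bytes(buf[-4:], "big")
    if zlib.crc32(payload) != csum:
        raise ValueError("hidden buffer tampered with or corrupted")
    return _xor(payload, key)
```

Since only the obfuscated inputs travel with the program, an attacker inspecting memory or traffic never sees the key itself, and any corruption of the payload fails the checksum before decryption is attempted.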
20130086394 | STORAGE SYSTEM, STORAGE CONTROL APPARATUS, AND STORAGE CONTROL METHOD - A storage system in which a storage control apparatus writes data in each of divided areas defined by division of one or more storage areas in one or more storage devices, after encryption of the data with an encryption key unique to each divided area. When the storage control apparatus receives, from a management apparatus, designation of one or more of the divided areas allocated as one or more physical storage areas for a virtual storage area to be invalidated and an instruction to invalidate data stored in the one or more of the divided areas, the storage control apparatus invalidates one or more encryption keys associated with the designated one or more of the divided areas. In addition, the storage control apparatus may further overwrite at least part of the designated one or more of the divided areas with initialization data for data erasure. | 2013-04-04 |
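The key-invalidation step (often called crypto-erase) can be sketched as follows; XOR with a random per-area key stands in for the unspecified cipher, and the class names are illustrative.

```python
import os


class EncryptedAreas:
    """Each divided area gets its own key: deleting an area's key makes
    its ciphertext unrecoverable, which is how invalidation works here."""

    def __init__(self, n_areas: int):
        self.keys = {i: os.urandom(16) for i in range(n_areas)}
        self.data = {}

    def _xor(self, area: int, blob: bytes) -> bytes:
        k = self.keys[area]  # raises KeyError once the key is invalidated
        return bytes(b ^ k[i % 16] for i, b in enumerate(blob))

    def write(self, area: int, plaintext: bytes) -> None:
        self.data[area] = self._xor(area, plaintext)

    def read(self, area: int) -> bytes:
        return self._xor(area, self.data[area])

    def invalidate(self, area: int) -> None:
        del self.keys[area]  # crypto-erase: data is now undecryptable
        # optional extra step from the abstract: overwrite with initialization data
        self.data[area] = os.urandom(len(self.data[area]))
```

Dropping the key invalidates every block in the area at once, which is far cheaper than overwriting the whole area, and the optional overwrite adds defense in depth.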
20130086395 | Multi-Core Microprocessor Reliability Optimization - Systems and methods for improving effective aging of a multi-core processor. Aging characteristics of the two or more cores of the multi-core processor are determined. Priority determination logic is configured to assign priorities for powering on the cores based on the aging characteristics. Optionally, an operating environment is detected and assigning priorities to the cores is based on a relative power consumption of each of the cores and the operating environment, in order to improve battery life. | 2013-04-04 |
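The priority assignment might be sketched like this, with `aging` and `power_draw` as hypothetical per-core metrics (the patent does not define how aging characteristics are measured):

```python
def power_on_order(aging, on_battery, power_draw=None):
    """Return core indices in the order they should be powered on:
    least-aged cores first; on battery, break ties toward the cores
    that draw the least power to improve battery life."""
    cores = range(len(aging))
    if on_battery and power_draw is not None:
        return sorted(cores, key=lambda c: (aging[c], power_draw[c]))
    return sorted(cores, key=lambda c: aging[c])
```

Preferring the least-aged cores spreads wear across the die, while folding in per-core power consumption when on battery captures the optional operating-environment term.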
20130086396 | POWER SUPPLY FOR PROCESSOR AND CONTROL METHOD THEREOF - The present invention provides a power supply for a processor and a control method thereof. The power supply comprises a reference adjusting circuit and a voltage regulator. The reference adjusting circuit is configured to receive a VID code from a processor, and adjust a reference voltage based on the VID code. The voltage regulator is coupled to the reference adjusting circuit and converts an input voltage into an output voltage in accordance with the reference voltage. The reference adjusting circuit adjusts the reference voltage in a plurality of steps until the reference voltage reaches a target value corresponding to the VID code. The reference adjusting circuit adjusts the reference voltage by a preset value during each step, and proceeds to the next step only after the output voltage comes within a predetermined range of the reference voltage. | 2013-04-04 |
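The stepwise ramp can be modeled behaviorally; `settled` is a hypothetical callback standing in for the regulator's feedback that the output voltage has reached the current reference.

```python
def ramp_to_vid(vref: float, target: float, step: float, settled) -> float:
    """Move the reference voltage toward the VID target one preset step
    at a time, advancing only after the regulator output settles at the
    current reference (a behavioral sketch, not a circuit model)."""
    while abs(target - vref) > 1e-9:
        delta = min(step, abs(target - vref))  # final step may be partial
        vref += delta if target > vref else -delta
        while not settled(vref):
            pass  # wait for the output voltage to catch up to the reference
    return vref
```

Stepping the reference instead of jumping straight to the VID target limits inrush current and overshoot, which is the rationale for gating each step on the output settling.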
20130086397 | ELECTRONIC APPARATUS AND ITS CONTROL METHOD - One embodiment provides an electronic apparatus, including: a first power supply module configured to supply power to a storage device provided in an external device when the external device is connected to the electronic apparatus; a receiving module configured to receive, from the external device, identification information thereof; and a second power supply module configured to supply power to an input/output control module of the external device if authentication of the identification information received by the receiving module succeeds. | 2013-04-04 |
20130086398 | MANAGING SIDEBAND SEGMENTS IN ON-DIE SYSTEM FABRIC - Methods and apparatus for managing sideband segments in an On-Die System Fabric (OSF) are described. In one embodiment, a sideband OSF includes a plurality of segments that may be reset or powered down independently after power management logic determines that in progress messages have been handled and future messages to the segment being reset or powered down will be blocked. Other embodiments are also disclosed. | 2013-04-04 |