21st week of 2015 patent application highlights part 77
Patent application number | Title | Published |
20150143009 | USE OF AN IO LINK FOR LINKING FIELD DEVICES - The invention relates to the use of an IO link for linking a field device to a master assembly. | 2015-05-21 |
20150143010 | METHOD AND APPARATUS FOR COMPENSATING FOR DELAY IN REAL-TIME EMBEDDED SYSTEM - In a real-time embedded system, if a higher-level interrupt having a higher priority than a lower-level interrupt being processed occurs, the lower-level interrupt is stopped from being processed and the higher-level interrupt is processed. Upon completion of the processing of the higher-level interrupt, delay information about the lower-level interrupt is recorded in a compensation timer register corresponding to the lower-level interrupt, and when the processing is stopped, the lower-level interrupt is restarted. Upon completion of the processing of the lower-level interrupt, the next period of the lower-level interrupt is adjusted based on the delay information recorded in the compensation timer register to compensate for the delay. | 2015-05-21 |
20150143011 | INTERRUPTION FACILITY FOR ADJUNCT PROCESSOR QUEUES - Interruption facility for adjunct processor queues. In response to a queue transitioning from a no replies pending state to a reply pending state, an interruption is initiated. This interruption signals to a processor that a reply to a request is waiting on the queue. In order for the queue to take advantage of the interruption capability, it is enabled for interruptions. | 2015-05-21 |
20150143012 | METHOD FOR CONFIGURING MAXIMUM TRANSMISSION UNIT (MTU), TERMINAL AND USB DATA CARD - A method for configuring a maximum transmission unit (MTU) value, a terminal and a universal serial bus (USB) data card are provided. The method includes the following steps: after detecting that a connection to a USB data card is established (…) | 2015-05-21 |
20150143013 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing apparatus including processing units and a connection control unit that controls the connections between the processing units, in which the connection control unit is provided with a table creation unit which, with respect to a first logical channel established with a processing unit, creates table information showing a correspondence between logical channels without designating a logical channel that corresponds to the first logical channel when there is no second logical channel established with another processing unit that corresponds to the first logical channel, a table storage unit that stores the table information created by the table creation unit, and a table update unit that updates the table information for the second logical channel that is stored in the table storage unit so as to configure the first logical channel as a logical channel that corresponds to the second logical channel when there is a second logical channel. | 2015-05-21 |
20150143014 | SUPPORT FOR IOAPIC INTERRUPTS IN AMBA-BASED DEVICES - One disclosed computing system comprises an x86 processor, memory, a PCIe root complex (RC), a PCIe bus, and an interconnect chip having a PCIe endpoint (EP) that is connected to the PCIe RC through a PCIe link, the PCIe EP being connected to an AMBA® bus. The interconnect chip may communicate with the IO device via the AMBA® bus in an AMBA® compliant manner and communicate with the host system in a PCIe compliant manner. This communication may include receiving a command from the processor, sending the command to the IO device over the AMBA® bus, receiving a response from the IO device over the AMBA® bus, and sending over the AMBA® bus and the PCIe link one or more DMA operations to the memory. Further communication may include sending an IOAPIC interrupt to the processor of the host system according to PCIe ordering rules. | 2015-05-21 |
20150143015 | DMA CONTROLLER AND DATA READOUT DEVICE - A DMA controller (…) | 2015-05-21 |
20150143016 | METHOD AND APPARATUS FOR DELIVERING MSI-X INTERRUPTS THROUGH NON-TRANSPARENT BRIDGES TO COMPUTING RESOURCES IN PCI-EXPRESS CLUSTERS - An apparatus for initialization. The apparatus includes a management I/O device controller for managing initialization of a plurality of I/O devices coupled to a PCI-Express (PCIe) fabric. The management I/O device controller is configured for receiving a request to register a target interrupt register address of a first worker computing resource, wherein the target interrupt register address is associated with a first interrupt generated by a first I/O device coupled to the PCIe fabric. A mapping module of the management I/O device controller is configured for mapping the target interrupt register address to a mapped interrupt register address of a domain in which the first I/O device resides. A translating interrupt register table includes a plurality of mapped interrupt register addresses in the domain that is associated with a plurality of target interrupt register addresses of a plurality of worker computing resources. | 2015-05-21 |
20150143017 | Memory Device Debugging on Host Platforms - A system and method are disclosed for an electronic integrated circuit to communicate with different hosts via different interfaces using the same host protocol. The system may use a host interface circuit to select a first set of electrical contacts or a second set of electrical contacts in order for a first host or a second host, respectively, to communicate with the electronic integrated circuit using a host protocol. The method may include switching from communicating with the first host using the first set of electrical contacts to communicating with the second host using the second set of electrical contacts in order for the second host to test the electronic integrated circuit. | 2015-05-21 |
20150143018 | FLEXIBLE SERVER SYSTEM - A flexible server system includes an embedded processor, one or more processor boards, one or more storage boards, and a switch including a plurality of ports. The embedded processor, the one or more processor boards, and the one or more storage boards are connected to a corresponding one of the plurality of ports. The embedded processor, the one or more processor boards, the one or more storage boards, and the switch each include a peripheral component interconnect express (PCIe) interface. The one or more processor boards and the one or more storage boards are connected to the switch through connectors where each of the corresponding connectors have a same size. The embedded processor controls the switch to route a packet from one of the plurality of ports to another one of the plurality of ports. | 2015-05-21 |
20150143019 | Inexpensive Solid-State Storage Through Write Throttling - Many of the benefits of solid-state-based storage devices can be obtained, while minimizing the costs associated therewith, by write-throttling solid-state storage media in accordance with empirically derived capabilities. Untested solid-state storage media can be obtained inexpensively due to the lack of waste that is otherwise inherent in the testing and subsequent discarding of solid-state storage media whose capabilities do not meet stringent manufacturer standards. The untested solid-state storage media is initialized through a testing procedure that empirically identifies capabilities of individual solid-state blocks, or groupings of blocks, within such solid-state storage media. Such empirically obtained capability information is then utilized to throttle the speed at which data is written to the solid-state storage media. Additionally, it can enable binning of individual solid-state blocks, or individual groupings of blocks, into bins that can comprise different performance thresholds. | 2015-05-21 |
20150143020 | Low Latency Memory Access Control for Non-Volatile Memories - A memory is provided that comprises a bank of non-volatile memory cells configured into a plurality of banklets. Each banklet in the plurality of banklets can be enabled separately and independently of the other banklets in the bank of non-volatile memory cells. The memory further comprises peripheral banklet circuitry, coupled to the bank of a non-volatile memory array, that is configured to enable selected subsets of bit lines within a selected banklet within the plurality of banklets. Moreover, the memory comprises banklet select circuitry, coupled to the peripheral banklet circuitry, that is configured to select data associated with a selected banklet for reading out from the banklet or writing to the banklet. | 2015-05-21 |
20150143021 | EQUALIZING WEAR ON STORAGE DEVICES THROUGH FILE SYSTEM CONTROLS - Data stored in file blocks and storage blocks of a storage device may be tracked by the file system. The file system may track a number of writes performed to each file block and storage block. The file system may also track a state of each storage block. The file system may use information, such as the write count and the block state, to determine locations for updated data to be stored on the storage device. Placement of data by the file system allows the file system to manage wear on storage devices, such as solid state storage devices. | 2015-05-21 |
20150143022 | REMOVABLE MEMORY CARD DISCRIMINATION SYSTEMS AND METHODS - Removable memory card discrimination systems and methods are disclosed. In particular, exemplary embodiments discriminate between secure digital (SD) cards and other removable memory cards that comply with the SD form factor, but support the Universal Flash Storage (UFS) protocol. That is, a host may have a receptacle that supports the SD card form factor and is configured to receive a device. In use, a removable memory card is inserted into the receptacle and, using an SD compliant interrogation signal, the host interrogates a common area on the card so inserted. The common area includes information related to capability descriptors of the card. An SD compliant card will respond with information such as capability descriptors about the SD protocol capabilities, while a UFS compliant card will respond with an indication that the card is UFS compliant. The host may then restart the communication with the card using the UFS protocol. | 2015-05-21 |
20150143023 | Detecting Access Sequences for Data Compression on Non-Volatile Memory Devices - Techniques are presented to allow a non-volatile memory system to operate more efficiently by determining ranges of logical addresses that a host typically accesses together. For example, the system's controller can determine that the host always, or almost always, writes or reads a contiguous set of logical addresses as a single unit. The controller can exploit this information by operating on these ranges as a single unit for data operations it performs. To take one example, the memory system can treat such ranges as single units for on-system data compression prior to writing the data to non-volatile memory, thereby increasing the efficiency of such data compression. | 2015-05-21 |
20150143024 | REDUNDANT ARRAY OF INDEPENDENT MODULES - A Redundant Array of Independent Modules (RAIM) system has a similar function and architecture to a Redundant Array of Independent Disks (RAID) system. It includes a RAID controller coupled to send and receive information to and from a host through an interface, and a plurality of modules coupled to the RAID controller, wherein the plurality of modules are not disk drives but SD/MMC/eMMC modules. Each such module in the RAIM system acts as a single drive in a RAID system. | 2015-05-21 |
20150143025 | Update Block Programming Order - Certain MLC blocks that tend to be reclaimed before they are full may be programmed according to a programming scheme that programs lower pages first and programs upper pages later. This results in more lower page programming than upper page programming on average. Lower page programming is generally significantly faster than upper page programming so that more lower page programming (and less upper programming) reduces average programming time. | 2015-05-21 |
20150143026 | TEMPERATURE BASED FLASH MEMORY SYSTEM MAINTENANCE - A memory system or flash card may include memory maintenance scheduling that improves the endurance of memory. Certain parameters, such as temperature, are measured and used for scheduling maintenance. For example, memory maintenance may be performed or postponed depending on the ambient temperature of the card. The memory maintenance operations may be ranked or classified (e.g. in a memory maintenance queue based on priority) to correspond with threshold values of the parameters for a more efficient scheduling of memory maintenance. For example, at a low temperature threshold, only high priority maintenance operations are performed, while at a higher temperature threshold, any priority maintenance operation is performed. | 2015-05-21 |
20150143027 | SOLID STATE DRIVE WITH RAID FUNCTIONS - A single solid state drive (SSD) includes an SSD controller coupled to send and receive information to and from a host through an interface. The SSD controller includes an embedded RAID controller and a plurality of non-volatile memory modules (NVMs) coupled to the SSD controller. The SSD controller causes storage of the received information in the NVMs and sending of the information from the NVMs under the control of the embedded RAID controller. | 2015-05-21 |
20150143028 | DATA STORAGE APPARATUS AND OPERATING METHOD THEREOF - A data storage apparatus includes a translation section suitable for performing a translation operation for translating first address mapping data to second address mapping data, and an operation memory device suitable for storing the second address mapping data. | 2015-05-21 |
20150143029 | DYNAMIC LOGICAL GROUPS FOR MAPPING FLASH MEMORY - A memory system or flash card may include a controller that indexes a global address table (GAT) with a single data structure that addresses both large and small chunks of data. The GAT may include both large logical groups and smaller logical groups for optimizing write amplification. The addressing space may be organized with a large logical group size for sequential data. For fragmented data, the GAT may reference an additional GAT page or additional GAT chunk that has a smaller logical group size. | 2015-05-21 |
20150143030 | Update Block Programming Order - Certain MLC blocks that tend to be reclaimed before they are full may be programmed according to a programming scheme that programs lower pages first and programs upper pages later. This results in more lower page programming than upper page programming on average. Lower page programming is generally significantly faster than upper page programming so that more lower page programming (and less upper programming) reduces average programming time. | 2015-05-21 |
20150143031 | METHOD FOR WRITING DATA INTO STORAGE DEVICE AND STORAGE DEVICE - A storage device includes a buffer memory and a flash memory, and can be communicably connected to a host computer. A method for writing data into the storage device includes: receiving a first write command from the host, the first write command including the data to be written, the address for the flash memory, and the address for the buffer memory; based on the address for the buffer memory, writing the data to be written to the buffer memory; and based on the address for the flash memory, writing the data to be written to the flash memory. | 2015-05-21 |
20150143032 | STORAGE MEDIUM STORING CONTROL PROGRAM, METHOD OF CONTROLLING INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING DEVICE - According to an embodiment, when data read from a first storage unit which is a backup source is not identical with data indicated by a first function, the read data is written to a second storage unit which is a backup destination. When the data read from the first storage unit is identical with the data indicated by the first function, the read data is not written to the second storage unit and a deletion notification is sent to the second storage unit. | 2015-05-21 |
20150143033 | CONTROLLING WRITE SPEED OF NONVOLATILE MEMORY DEVICE - A system comprises a nonvolatile memory device having multiple download speeds, and a computing device connected to the nonvolatile memory device and configured to determine a download environment of the nonvolatile memory device and to set the nonvolatile memory device to one of the download speeds according to the determined download environment. | 2015-05-21 |
20150143034 | HYBRID MEMORY ARCHITECTURES - Methods and apparatuses for providing a hybrid memory module having both volatile and non-volatile memories to replace a DDR channel in a processing system. | 2015-05-21 |
20150143035 | USER DEVICE HAVING A HOST FLASH TRANSLATION LAYER (FTL), A METHOD FOR TRANSFERRING AN ERASE COUNT THEREOF, A METHOD FOR TRANSFERRING REPROGRAM INFORMATION THEREOF, AND A METHOD FOR TRANSFERRING A PAGE OFFSET OF AN OPEN BLOCK THEREOF - A user device includes a storage device including a flash memory; and a host connected to the storage device via an interface and adapted to transmit data to the storage device. The host provides the storage device with erase count information of the flash memory using a host flash translation layer (FTL), provides the storage device with reprogram information when the flash memory uses a reprogram method, or provides the storage device with page offset information of an open block of the flash memory. | 2015-05-21 |
20150143036 | EXPORTING COMPUTATIONAL CAPABILITIES INTO A BLOCK-ORIENTED DISK MEMORY - A memory controller is provided that includes a host system interface that receives requests from applications and sends read or write commands to a disk for data retrieval. A threadlet core provides threadlets to the host system interface that enable the host system interface to use a logical bit address that can be sent to a memory device for execution without having to read and write entire blocks to and from the memory device. | 2015-05-21 |
20150143037 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR MULTI-THREAD OPERATION INVOLVING FIRST MEMORY OF A FIRST MEMORY CLASS AND SECOND MEMORY OF A SECOND MEMORY CLASS - An apparatus, computer program product, and associated method/processing unit are provided for utilizing a memory subsystem including a first memory of a first memory class, and a second memory of a second memory class communicatively coupled to the first memory. In operation, data is fetched using a time between a plurality of threads. | 2015-05-21 |
20150143038 | STORAGE PROCESSOR MANAGING SOLID STATE DISK ARRAY - A method of writing to one or more solid state disks (SSDs) employed by a storage processor includes receiving a command, creating sub-commands from the command based on a granularity, and assigning the sub-commands to the SSDs independently of the command, thereby causing striping across the SSDs. | 2015-05-21 |
20150143039 | Restoring Virtualized GCU State Information - Method and apparatus for managing a memory, such as but not limited to a flash memory. In accordance with some embodiments, initial state information is stored which identifies an actual state of a garbage collection unit (GCU) of a memory during a normal operational mode. During a restoration mode after a memory power cycle event, a virtualized state of the GCU is determined responsive to the initial state information and to data read from the GCU. The memory is transitioned from the restoration mode to the normal operational mode once the virtualized state for the GCU is determined. | 2015-05-21 |
20150143040 | MEMORY DEVICE AND METHOD HAVING ON-BOARD PROCESSING LOGIC FOR FACILITATING INTERFACE WITH MULTIPLE PROCESSORS, AND COMPUTER SYSTEM USING SAME - A memory device includes an on-board processing system that facilitates the ability of the memory device to interface with a plurality of processors operating in a parallel processing manner. The processing system includes circuitry that performs processing functions on data stored in the memory device in an indivisible manner. More particularly, the system reads data from a bank of memory cells or cache memory, performs a logic function on the data to produce results data, and writes the results data back to the bank or the cache memory. The logic function may be a Boolean logic function or some other logic function. | 2015-05-21 |
20150143041 | STORAGE CONTROL APPARATUS AND CONTROL METHOD - A storage control apparatus includes a plurality of control units that each controls access to one or more first storage areas among a plurality of first storage areas allocated to one or more storage media, and a storage unit configured to store information, for each of the control units, on one or more second storage areas to which unit storage areas reserved in the corresponding one or more first storage areas are allocated. When one of the control units receives an access request to a unit storage area reserved in a first storage area different from the corresponding one or more first storage areas, the one of the control units provides information on the control unit that controls access to the different first storage area as an access path to the unit storage area. | 2015-05-21 |
20150143042 | DISK STORAGE APPARATUS AND DATA STORAGE METHOD - According to one embodiment, a disk storage apparatus includes a disk, a detector, and a controller. The disk includes a first recording area for recording with a first track density, and a second recording area for recording with a second track density lower than the first track density. The detector is configured to detect a variation of an outside environment. The controller is configured to select a nonvolatile memory or the second recording area as a storage destination of write data transferred from a host, based on a content of the variation of the outside environment detected by the detector, and a state of capability or incapability of storage of the nonvolatile memory. | 2015-05-21 |
20150143043 | DECENTRALIZED ONLINE CACHE MANAGEMENT FOR DIGITAL CONTENT - A first cache is provided to cache a first portion of a first block of digital content received over a network connection shared between a first user associated with the first cache and at least one second user. The first cache caches the first portion in response to the first user or the second user(s) requesting the first block. The first cache selects the first portion based on a fullness of the first cache, a number of blocks cached in the first cache, or a cache eviction rule associated with the first cache. | 2015-05-21 |
20150143044 | MECHANISM FOR SHARING PRIVATE CACHES IN A SOC - Systems, processors, and methods for sharing an agent's private cache with other agents within a SoC. Many agents in the SoC have a private cache in addition to the shared caches and memory of the SoC. If an agent's processor is shut down or operating at less than full capacity, the agent's private cache can be shared with other agents. When a requesting agent generates a memory request and the memory request misses in the memory cache, the memory cache can allocate the memory request in a separate agent's cache rather than allocating the memory request in the memory cache. | 2015-05-21 |
20150143045 | CACHE CONTROL APPARATUS AND METHOD - Provided are a cache control apparatus and method for reducing a miss penalty. The cache control apparatus includes a first level cache configured to store data in a memory, a second level cache connected to the first level cache, and configured to be accessed by a processor when the first level cache fails to call data according to a data request instruction, a prefetch buffer connected to the first and second level caches, and configured to temporarily store data transferred from the first and second level caches to a core, and a write buffer connected to the first level cache, and configured to receive address information and data of the first level cache. | 2015-05-21 |
20150143046 | SYSTEMS AND METHODS FOR REDUCING FIRST LEVEL CACHE ENERGY BY ELIMINATING CACHE ADDRESS TAGS - Methods and systems which, for example, reduce energy usage in cache memories are described. Cache location information regarding the location of cachelines which are stored in a tracked portion of a memory hierarchy is stored in a cache location table. Address tags are stored with corresponding location information in the cache location table to associate the address tag with the cacheline and its cache location information. When a cacheline is moved to a new location in the memory hierarchy, the cache location table is updated so that the cache location information indicates where the cacheline is located within the memory hierarchy. | 2015-05-21 |
20150143047 | SYSTEMS AND METHODS FOR DIRECT DATA ACCESS IN MULTI-LEVEL CACHE MEMORY HIERARCHIES - Methods and systems for direct data access in, e.g., multi-level cache memory systems are described. A cache memory system includes a cache location buffer configured to store cache location entries, wherein each cache location entry includes an address tag and a cache location table which are associated with a respective cacheline stored in a cache memory. The system also includes a first cache memory configured to store cachelines, each cacheline having data and an identity of a corresponding cache location entry in the cache location buffer, and a second cache memory configured to store cachelines, each cacheline having data and an identity of a corresponding cache location entry in the cache location buffer. Responsive to a memory access request for a cacheline, the cache location buffer generates access information using one of the cache location tables which enables access to the cacheline without performing a tag comparison at the one of the first and second cache memories. | 2015-05-21 |
20150143048 | MULTI-CPU SYSTEM AND COMPUTING SYSTEM HAVING THE SAME - A multi-CPU data processing system, comprising: a multi-CPU processor, comprising: a first CPU configured with at least a first core, a first cache, and a first cache controller configured to access the first cache; and a second CPU configured with at least a second core, and a second cache controller configured to access a second cache, wherein the first cache is configured from a shared portion of the second cache. | 2015-05-21 |
20150143049 | CACHE CONTROL APPARATUS AND METHOD - Provided is a cache control apparatus and method that, when a plurality of processors read a program from the same memory in a chip, maintain coherency of data and an instruction generated by a cache memory. The cache control apparatus includes a coherency controller configured to include an MESI register, which is included in an instruction cache, and stores at least one of a modified state, an exclusive state, a shared state, and an invalid state for each line of the instruction cache, and a coherency interface connected to the coherency controller and configured to transmit and receive broadcast address information, read or write information, and hit or miss information of another cache to and from the instruction cache. | 2015-05-21 |
20150143050 | REUSE OF DIRECTORY ENTRIES FOR HOLDING STATE INFORMATION - The present application is directed to a control circuit that provides a directory configured to maintain a plurality of entries, wherein each entry can indicate sharing of resources, such as cache lines, by a plurality of agents/hosts. The control circuit of the present invention can further provide consolidation of one or more entries having a first format to a single entry having a second format when resources corresponding to the one or more entries are shared by the agents. The first format can include an address and a pointer representing one of the agents, and the second format can include a sharing vector indicative of more than one of the agents. In another aspect, the second format can utilize, incorporate, and/or represent multiple entries that may be indicative of one or more resources based on a position in the directory. | 2015-05-21 |
20150143051 | Providing Common Caching Agent For Core And Integrated Input/Output (IO) Module - In one embodiment, the present invention includes a multicore processor having a plurality of cores, a shared cache memory, an integrated input/output (IIO) module to interface between the multicore processor and at least one IO device coupled to the multicore processor, and a caching agent to perform cache coherency operations for the plurality of cores and the IIO module. Other embodiments are described and claimed. | 2015-05-21 |
20150143052 | MANAGING FAULTY MEMORY PAGES IN A COMPUTING SYSTEM - Managing faulty memory pages in a computing system, including: tracking, by a page management module, a number of errors associated with a memory page; determining, by the page management module, whether the number of errors associated with the memory page exceeds a predetermined threshold; responsive to determining that the number of errors associated with the memory page exceeds the predetermined threshold, attempting, by the page management module, to retire the memory page; determining, by the page management module, whether the memory page has been successfully retired; and responsive to determining that the memory page has not been successfully retired, generating, by the page management module, a predictive failure alert. | 2015-05-21 |
20150143053 | SYSTEM AND METHOD FOR IMPROVED STORAGE REQUEST HANDLING IN HOST-SIDE CACHES - A system and method of improved storage request handling in host-side caches includes a host-side cache with a cache controller, a plurality of request queues, and a cache memory. The cache controller is configured to receive a storage request, assign a priority to the storage request based on a queuing policy, insert the storage request into a first request queue selected from the plurality of request queues based on the assigned priority, extract the storage request from the first request queue when the storage request is a next storage request to fulfill based on the assigned priority, forward the storage request to a storage controller, and receive a response to the storage request from the storage controller. The queuing policy is implemented using a rule-based policy engine. In some embodiments, the cache controller is further configured to update one or more monitoring metrics based on processing of the storage request. | 2015-05-21 |
20150143054 | Managing Faulty Memory Pages In A Computing System - Managing faulty memory pages in a computing system, including: tracking, by a page management module, a number of errors associated with a memory page; determining, by the page management module, whether the number of errors associated with the memory page exceeds a predetermined threshold; responsive to determining that the number of errors associated with the memory page exceeds the predetermined threshold, attempting, by the page management module, to retire the memory page; determining, by the page management module, whether the memory page has been successfully retired; and responsive to determining that the memory page has not been successfully retired, generating, by the page management module, a predictive failure alert. | 2015-05-21 |
20150143055 | VIRTUAL MACHINE BACKUP - A computer system comprises a processor unit arranged to run a hypervisor running one or more virtual machines; a cache connected to the processor unit and comprising a plurality of cache rows, each cache row comprising a memory address, a cache line, and an image modification flag; and a memory connected to the cache and arranged to store an image of at least one virtual machine. The processor unit is arranged to define a log in the memory, and the cache further comprises a cache controller arranged to set the image modification flag for a cache line modified by a virtual machine being backed up, periodically check the image modification flags, and write only the memory address of the flagged cache rows in the defined log. The processor unit is further arranged to monitor the free space available in the defined log and to trigger an interrupt if the free space available falls below a specific amount. | 2015-05-21 |
20150143056 | DYNAMIC WRITE PRIORITY BASED ON VIRTUAL WRITE QUEUE HIGH WATER MARK - A set associative cache is managed by a memory controller which places writeback instructions for modified (dirty) cache lines into a virtual write queue, determines when the number of the sets containing a modified cache line is greater than a high water mark, and elevates a priority of the writeback instructions over read operations. The controller can return the priority to normal when the number of modified sets is less than a low water mark. In an embodiment wherein the system memory device includes rank groups, the congruence classes can be mapped based on the rank groups. The number of writes pending in a rank group exceeding a different threshold can additionally be a requirement to trigger elevation of writeback priority. A dirty vector can be used to provide an indication that corresponding sets contain a modified cache line, particularly in least-recently used segments of the corresponding sets. | 2015-05-21 |
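The high/low water-mark hysteresis described in this abstract can be modeled as a small pure function. This is an illustrative sketch only; the parameter names and the string priority values are assumptions, not taken from the patent:

```python
def writeback_priority(dirty_sets, high_water, low_water, current_priority):
    """Decide writeback priority from the count of cache sets that hold a
    modified (dirty) line. Crossing the high water mark elevates writebacks
    above reads; dropping below the low water mark restores normal priority.
    Between the two marks the previous state is kept (hysteresis)."""
    if dirty_sets > high_water:
        return "elevated"
    if dirty_sets < low_water:
        return "normal"
    return current_priority  # between marks: no change
```

The gap between the two marks is what prevents the controller from oscillating when the dirty-set count hovers near a single threshold.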
20150143057 | ADAPTIVE DATA PREFETCHING - A system and method for adaptive data prefetching in a processor enables adaptive modification of parameters associated with a prefetch operation. A stride pattern in successive addresses of a memory operation may be detected, including determining a stride length (L). Prefetching of memory operations may be based on a prefetch address determined from a base memory address, the stride length L, and a prefetch distance (D). A number of prefetch misses may be counted at a miss prefetch count (C). Based on the value of the miss prefetch count C, the prefetch distance D may be modified. As a result of adaptive modification of the prefetch distance D, an improved rate of cache hits may be realized. | 2015-05-21 |
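The stride length L, prefetch distance D, and miss count C named in this abstract suggest a simple control loop, sketched below. The adaptation rule (grow D after a fixed number of misses) is an assumption for illustration; the patent does not commit to this exact policy:

```python
class StridePrefetcher:
    """Toy adaptive prefetcher: prefetch address = base + L * D,
    where the distance D is increased when prefetch misses accumulate."""

    def __init__(self, stride, distance, miss_limit):
        self.L = stride           # detected stride length
        self.D = distance         # prefetch distance
        self.C = 0                # prefetch miss count
        self.miss_limit = miss_limit

    def prefetch_addr(self, base):
        return base + self.L * self.D

    def on_prefetch_miss(self):
        self.C += 1
        if self.C >= self.miss_limit:
            self.D += 1           # adapt: prefetch further ahead
            self.C = 0
```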
20150143058 | SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR UTILIZING A DATA POINTER TABLE PRE-FETCHER - A system, method, and computer program product are provided for utilizing a data pointer table pre-fetcher. In use, an assembly of a data pointer table within a main memory is identified. Additionally, the data pointer table is pre-fetched from the main memory. Further, data is sampled from the pre-fetched data pointer table. Further still, the sampled data is stored within a data pointer table cache. | 2015-05-21 |
20150143059 | DYNAMIC WRITE PRIORITY BASED ON VIRTUAL WRITE QUEUE HIGH WATER MARK - A set associative cache is managed by a memory controller which places writeback instructions for modified (dirty) cache lines into a virtual write queue, determines when the number of the sets containing a modified cache line is greater than a high water mark, and elevates a priority of the writeback instructions over read operations. The controller can return the priority to normal when the number of modified sets is less than a low water mark. In an embodiment wherein the system memory device includes rank groups, the congruence classes can be mapped based on the rank groups. The number of writes pending in a rank group exceeding a different threshold can additionally be a requirement to trigger elevation of writeback priority. A dirty vector can be used to provide an indication that corresponding sets contain a modified cache line, particularly in least-recently used segments of the corresponding sets. | 2015-05-21 |
20150143060 | On-Chip Memory (OCM) Physical Bank Parallelism - According to an example embodiment, a processor is provided including an integrated on-chip memory device component. The on-chip memory device component includes a plurality of memory banks, and multiple logical ports, each logical port coupled to one or more of the plurality of memory banks, enabling access to multiple memory banks, among the plurality of memory banks, per clock cycle, each memory bank accessible by a single logical port per clock cycle and each logical port accessing a single memory bank per clock cycle. | 2015-05-21 |
20150143061 | PARTITIONED REGISTER FILE - A system includes a processing unit and a register file. The register file includes at least a first memory structure and a second memory structure. The first memory structure has a lower access energy than the second memory structure. The processing unit is configured to address the register file using a single logical namespace for both the first memory structure and the second memory structure. | 2015-05-21 |
20150143062 | CONTROLLER, STORAGE DEVICE, AND CONTROL METHOD - A controller of an embodiment includes: an interface unit configured to be connected to a storage unit and configured to execute a command performing one or more basic operations for the storage unit in a predetermined order; and a control unit configured to hold, for each category to which the basic operations belong, a control procedure of a signal between the interface unit and the storage unit during execution of the basic operations which belong to the category. The control unit is configured to obtain the basic operations constituting the command executed by the interface unit based on first information indicating the basic operations constituting the command and an order of execution of the basic operations, and to cause the interface unit to execute the obtained basic operations based on second information indicating the category to which the basic operations belong in the order indicated in the first information. | 2015-05-21 |
20150143063 | SUCCESSIVE DATA FINGERPRINTING FOR COPY ACCURACY ASSURANCE - Systems and methods for checking data integrity of a data object copied between storage pools in a storage system by comparing data samples copied from data objects. A series of successive copy operations are scheduled over time for copying a data object from a source data store to a target data store. A first data sample is generated based on a sampling scheme comprising an offset and a period. A second data sample is generated using a similar sampling scheme. The blocks of data in the first data sample and the second data sample are compared to determine if they differ to thereby indicate that the data object at the target store differs from the corresponding data object at the source data store. | 2015-05-21 |
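The offset-plus-period sampling scheme described above can be sketched as follows. Block size and function names are illustrative assumptions; note that equal samples only suggest the copy is intact, while differing samples prove divergence:

```python
def sample_blocks(data, offset, period, block=4):
    """Take a `block`-byte sample starting at `offset`, then every
    `period` bytes thereafter (the offset+period sampling scheme)."""
    return [data[i:i + block] for i in range(offset, len(data), period)]

def copies_match(src, dst, offset, period):
    """Compare samples taken with the same scheme from source and target.
    A mismatch indicates the target object differs from the source."""
    return sample_blocks(src, offset, period) == sample_blocks(dst, offset, period)
```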
20150143064 | TEST-AND-DEVELOPMENT WORKFLOW AUTOMATION - Computerized methods and systems for automating a process of creating and mounting live copies of data to applications in accordance with workflows that specify procedures for creating and mounting the live copies of data to the applications. The methods and systems comprise executing at least one workflow associated with a data object based on a triggering event, and executing a set of configurable work actions associated with the at least one workflow; creating a snapshot of data volumes associated with the data object; creating liveclone volumes based on the snapshot of the data volumes, and mounting and dismounting the liveclone volumes to and from at least one application. | 2015-05-21 |
20150143065 | Data Processing Method and Apparatus, and Shared Storage Device - A data processing method and apparatus, and a shared storage device, where the method includes receiving, by a shared storage device, a copy-on-write request sent by another storage device, where the copy-on-write request includes data on which copy-on-write is to be performed and a logical unit identifier and snapshot time point of the data; storing the data; and searching, according to the logical unit identifier and snapshot time point of the data, a preset shared mapping table for a corresponding entry, and storing, in the corresponding entry, mapping entry information of the data, where the mapping entry information includes the logical unit identifier and snapshot time point of the data and a storage address that is of the data and in the shared storage device, which can improve efficiency of snapshot data processing. | 2015-05-21 |
20150143066 | DATA CONFIGURATION AND MIGRATION IN A CLUSTER SYSTEM - A cluster system includes a plurality of computing nodes connected to a network. Each node is configured to access its own storage device, and to send and receive input/output (I/O) operations associated with its own storage device. Further, each node of the plurality of nodes may be configured to have a function of acting as a first node, which sends a first message to other nodes of the plurality of nodes. The first message may include configuration information indicative of a data placement of data on the plurality of nodes in the cluster system according to an event. Following receipt of the first message from the first node, each of the other nodes may be configured to determine, based at least in part on the configuration information, whether data stored on its own storage device is affected by the event. | 2015-05-21 |
20150143067 | METHOD AND SYSTEM FOR QUALIFICATION OF AN ELEMENT - A method and a system for creating and qualifying one or more elements, such as multimedia content or, more generally, a performance by an author. The invention more particularly aims at associating a qualification level with an element so that a consultation work can be available, as regards relevance, robustness, skills and authorisation, and thus a degree of objective reliability can be granted to said element. Preferably, the invention relates to the generation of a bank of elements such as questions for television or radio quiz shows, on-line games, etc. | 2015-05-21 |
20150143068 | DATA MANAGEMENT WITH MODULAR ERASE IN A DATA STORAGE SYSTEM - A system and method of data management with modular erase in a data storage system with a memory array having an erase block and a target block with the target block in a logical unit separate from the erase block including: performing an erase operation on the erase block, the erase operation having an operation matrix configured for partial erasing of the erase block; updating a command status for the erase block; enabling an intervening command on the target block based on the command status indicating an incomplete erase status with the intervening command updating the command status; performing an erase optimization based on the command status; performing an additional erase operation based on the erase optimization; and updating the command status to an erase complete status based on the additional erase operation. | 2015-05-21 |
20150143069 | Managing Data Delivery - Methods and systems for managing data and/or operations on data such as content are disclosed. A method can comprise receiving data from a source, determining timing information associated with the source and automatically modifying a storage operation of data received from the source based upon the timing information. | 2015-05-21 |
20150143070 | NONVOLATILE STORAGE AND OPERATING METHODS OF COMPUTING DEVICES INCLUDING THE NONVOLATILE STORAGE - A writing and reading method of a nonvolatile storage, which includes a first partition and a second partition and is configured to allow a read operation and a write operation with respect to the second partition only when an authentication is successful in a normal mode, may comprise: assigning a part of a storage space of the second partition to a temporary area by the nonvolatile storage according to a request of changing the normal mode to a secure temporary mode; and/or writing data to the temporary area by the nonvolatile storage. The nonvolatile storage may allow the read operation with respect to the temporary area without the authentication. | 2015-05-21 |
20150143071 | MEMORY EVENT NOTIFICATION - Embodiments of apparatuses and methods for memory event notification are disclosed. In one embodiment, a processor includes address translation hardware and memory event hardware. The address translation hardware is to support translation of a first address, used by software to access a memory, to a second address, used by the processor to access the memory. The memory event hardware is to detect an access to a registered portion of memory. | 2015-05-21 |
20150143072 | METHOD IN A MEMORY MANAGEMENT UNIT FOR MANAGING ADDRESS TRANSLATIONS IN TWO STAGES - A memory management unit (MMU) may manage address translations. The MMU may obtain a first intermediate physical address (IPA) based on a first virtual address (VA) relating to a first memory access request. The MMU may identify, based on the first IPA, a first memory page entry in a second address translation table. The MMU may store, in a second cache memory, a first IPA-to-PA translation based on the identified first memory page entry. The MMU may store, in the second cache memory and in response to the identification of the first memory page entry, one or more additional IPA-to-PA translations that are based on corresponding one or more additional memory page entries in the second address translation table. The one or more additional memory page entries may be contiguous to the first memory page entry. | 2015-05-21 |
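The contiguous-entry caching idea in this abstract can be modeled roughly as below. This is a toy model assuming a flat stage-2 table indexed by IPA page number; real MMUs walk multi-level tables, and the class and field names are invented for the sketch:

```python
class StageTwoTLB:
    """On a stage-2 (IPA -> PA) lookup miss, cache the translation that was
    found plus up to `prefetch` contiguous neighbouring page entries, so
    nearby IPAs later hit without another table walk."""

    def __init__(self, table, prefetch=2):
        self.table = table      # list: IPA page number -> PA page number
        self.cache = {}         # cached IPA -> PA translations
        self.prefetch = prefetch

    def translate(self, ipa_page):
        if ipa_page not in self.cache:
            # "walk" the table: store the found entry and its neighbours
            end = min(ipa_page + 1 + self.prefetch, len(self.table))
            for p in range(ipa_page, end):
                self.cache[p] = self.table[p]
        return self.cache[ipa_page]
```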
20150143073 | DATA PROCESSING SYSTEMS - A data processing system is described in which a plurality of data processing units | 2015-05-21 |
20150143074 | VECTOR EXCEPTION CODE - Vector exception handling is facilitated. A vector instruction is executed that operates on one or more elements of a vector register. When an exception is encountered during execution of the instruction, a vector exception code is provided that indicates a position within the vector register that caused the exception. The vector exception code also includes a reason for the exception. | 2015-05-21 |
20150143075 | VECTOR GENERATE MASK INSTRUCTION - A Vector Generate Mask instruction. For each element in the first operand, a bit mask is generated. The mask includes bits set to a selected value starting at a position specified by a first field of the instruction and ending at a position specified by a second field of the instruction. | 2015-05-21 |
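The per-element mask generation described above can be sketched for a single element. The big-endian bit numbering (position 0 = most significant bit) is an assumption typical of the ISA family this abstract suggests, not something the abstract states:

```python
def generate_mask(start, end, width=8):
    """Build a mask with ones from bit position `start` through `end`
    inclusive, where position 0 is the most significant bit."""
    mask = 0
    for pos in range(start, end + 1):
        mask |= 1 << (width - 1 - pos)
    return mask
```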
20150143076 | VECTOR PROCESSING ENGINES (VPEs) EMPLOYING DESPREADING CIRCUITRY IN DATA FLOW PATHS BETWEEN EXECUTION UNITS AND VECTOR DATA MEMORY TO PROVIDE IN-FLIGHT DESPREADING OF SPREAD-SPECTRUM SEQUENCES, AND RELATED VECTOR PROCESSING INSTRUCTIONS, SYSTEMS, AND METHODS - Vector processing engines (VPEs) employing merging circuitry in data flow paths between execution units and vector data memory to provide in-flight merging of output vector data stored to vector data memory are disclosed. Related vector processing instructions, systems, and methods are also disclosed. Merging circuitry is provided in data flow paths between execution units and vector data memory in the VPE. The merging circuitry is configured to merge an output vector data sample set from execution units as a result of performing vector processing operations in-flight while the output vector data sample set is being provided over the output data flow paths from the execution units to the vector data memory to be stored. The merged output vector data sample set is stored in a merged form in the vector data memory without requiring additional post-processing steps, which may delay subsequent vector processing operations to be performed in execution units. | 2015-05-21 |
20150143077 | VECTOR PROCESSING ENGINES (VPEs) EMPLOYING MERGING CIRCUITRY IN DATA FLOW PATHS BETWEEN EXECUTION UNITS AND VECTOR DATA MEMORY TO PROVIDE IN-FLIGHT MERGING OF OUTPUT VECTOR DATA STORED TO VECTOR DATA MEMORY, AND RELATED VECTOR PROCESSING INSTRUCTIONS, SYSTEMS, AND METHODS - Vector processing engines (VPEs) employing merging circuitry in data flow paths between execution units and vector data memory to provide in-flight merging of output vector data stored to vector data memory are disclosed. Related vector processing instructions, systems, and methods are also disclosed. Merging circuitry is provided in data flow paths between execution units and vector data memory in the VPE. The merging circuitry is configured to merge an output vector data sample set from execution units as a result of performing vector processing operations in-flight while the output vector data sample set is being provided over the output data flow paths from the execution units to the vector data memory to be stored. The merged output vector data sample set is stored in a merged form in the vector data memory without requiring additional post-processing steps, which may delay subsequent vector processing operations to be performed in execution units. | 2015-05-21 |
20150143078 | VECTOR PROCESSING ENGINES (VPEs) EMPLOYING A TAPPED-DELAY LINE(S) FOR PROVIDING PRECISION FILTER VECTOR PROCESSING OPERATIONS WITH REDUCED SAMPLE RE-FETCHING AND POWER CONSUMPTION, AND RELATED VECTOR PROCESSOR SYSTEMS AND METHODS - Vector processing engines (VPEs) employing a tapped-delay line(s) for providing precision filter vector processing operations with reduced sample re-fetching and power consumption are disclosed. Related vector processor systems and methods are also disclosed. The VPEs are configured to provide filter vector processing operations. To minimize re-fetching of input vector data samples from memory to reduce power consumption, a tapped-delay line(s) is included in the data flow paths between a vector data file and execution units in the VPE. The tapped-delay line(s) is configured to receive and provide input vector data sample sets to execution units for performing filter vector processing operations. The tapped-delay line(s) is also configured to shift the input vector data sample set for filter delay taps and provide the shifted input vector data sample set to execution units, so the shifted input vector data sample set does not have to be re-fetched during filter vector processing operations. | 2015-05-21 |
20150143079 | VECTOR PROCESSING ENGINES (VPEs) EMPLOYING TAPPED-DELAY LINE(S) FOR PROVIDING PRECISION CORRELATION / COVARIANCE VECTOR PROCESSING OPERATIONS WITH REDUCED SAMPLE RE-FETCHING AND POWER CONSUMPTION, AND RELATED VECTOR PROCESSOR SYSTEMS AND METHODS - Vector processing engines (VPEs) employing a tapped-delay line(s) for providing precision correlation/covariance vector processing operations with reduced sample re-fetching and/or power consumption are disclosed. The VPEs disclosed herein are configured to provide correlation/covariance vector processing operations, such as code division multiple access (CDMA) correlation/covariance vector processing operations as a non-limiting example. A tapped-delay line(s) is included in the data flow paths between memory and execution units in the VPE. The tapped-delay line (s) is configured to receive and provide an input vector data sample set to execution units for performing correlation/covariance vector processing operations. The tapped-delay line(s) is also configured to shift the input vector data sample set for each filter delay tap and provide the shifted input vector data sample set to the execution units, so the shifted input vector data sample set need not be re-fetched from the vector data file during the filter vector processing operations. | 2015-05-21 |
20150143080 | VECTOR CHECKSUM INSTRUCTION - Elements from a second operand are added together one-by-one to obtain a first result. The adding includes performing one or more end around carry add operations. The first result is placed in an element of a first operand of the instruction. After each addition of an element, a carry out of a chosen position of the sum, if any, is added to a selected position in an element of the first operand. | 2015-05-21 |
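The end-around carry accumulation described in this abstract is the same folding used by one's-complement checksums, and can be modeled per element as below. The element width and folding loop are illustrative; the patent's instruction operates on vector register elements rather than a Python list:

```python
def vector_checksum(elements, width=32):
    """Sum elements one by one; any carry out of the `width`-bit sum is
    added back into the low end (end-around carry add)."""
    limit = 1 << width
    total = 0
    for e in elements:
        total += e
        while total >= limit:
            # fold the carry-out back into the low-order bits
            total = (total & (limit - 1)) + (total >> width)
    return total
```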
20150143081 | PROCESSOR CAPABLE OF SUPPORTING MULTIMODE AND MULTIMODE SUPPORTING METHOD THEREOF - Embodiments include a processor capable of supporting multiple modes and corresponding methods. The processor includes front end units; a number of processing elements greater than the number of front end units; and a controller configured to determine if thread divergence occurs due to conditional branching. If thread divergence occurs, the processor may set control information to control the processing elements using multiple currently activated front end units. If it does not, the processor may set control information to control the processing elements using a single currently activated front end unit. | 2015-05-21 |
20150143082 | Dynamically Erectable Computer System - A fault-tolerant computer system architecture includes two types of operating domains: a conventional first domain (DID) that processes data and instructions, and a novel second domain which includes mentor processors for mentoring the DID according to “meta information” which includes but is not limited to data, algorithms and protective rule sets. The term “mentoring” (as defined herein below) refers to, among other things, applying and using meta information to enforce rule sets and/or dynamically erecting abstractions and virtualizations by which resources in the DID are shuffled around for, inter alia, efficiency and fault correction. Meta Mentor processors create systems and sub-systems by means of fault tolerant mentor switches that route signals to and from hardware and software entities. The systems and sub-systems created are distinct sub-architectures and unique configurations that may be operated as separately or concurrently as defined by the executing processes. | 2015-05-21 |
20150143083 | Techniques for Increasing Vector Processing Utilization and Efficiency Through Vector Lane Predication Prediction - Techniques for increasing vector processing utilization and efficiency through use of unmasked lanes of predicated vector instructions for executing non-conflicting instructions are provided. In one aspect, a method of vector lane predication for a processor is provided which includes the steps of: fetching predicated vector instructions from a memory; decoding the predicated vector instructions; determining if a mask value of the predicated vector instructions is available and, if the mask value of the predicated vector instructions is not available, predicting the mask value of the predicated vector instructions; and dispatching the predicated vector instructions to only masked vector lanes. | 2015-05-21 |
20150143084 | HAND HELD DEVICE TO PERFORM A BIT RANGE ISOLATION INSTRUCTION - Receiving an instruction indicating a source operand and a destination operand. Storing a result in the destination operand in response to the instruction. The result operand may have: (1) first range of bits having a first end explicitly specified by the instruction in which each bit is identical in value to a bit of the source operand in a corresponding position; and (2) second range of bits that all have a same value regardless of values of bits of the source operand in corresponding positions. Execution of instruction may complete without moving the first range of the result relative to the bits of identical value in the corresponding positions of the source operand, regardless of the location of the first range of bits in the result. Execution units to execute such instructions, computer systems having processors to execute such instructions, and machine-readable medium storing such an instruction are also disclosed. | 2015-05-21 |
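One plausible reading of the result described above — the specified bit range kept in place, all other bits forced to a single value (here zero) — can be sketched as:

```python
def isolate_bits(src, start, length, width=32):
    """Keep bits [start, start+length) of `src` unmoved and zero all other
    bits. A hypothetical model of the bit-range-isolation result; the real
    instruction encodes the range endpoints differently."""
    mask = ((1 << length) - 1) << start
    return src & mask & ((1 << width) - 1)
```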
20150143085 | VECTOR PROCESSING ENGINES (VPEs) EMPLOYING REORDERING CIRCUITRY IN DATA FLOW PATHS BETWEEN EXECUTION UNITS AND VECTOR DATA MEMORY TO PROVIDE IN-FLIGHT REORDERING OF OUTPUT VECTOR DATA STORED TO VECTOR DATA MEMORY, AND RELATED VECTOR PROCESSOR SYSTEMS AND METHODS - Vector processing engines (VPEs) employing reordering circuitry in data flow paths between execution units and vector data memory to provide in-flight reordering of output vector data stored to vector data memory are disclosed. Related vector processor systems and methods are also disclosed. Reordering circuitry is provided in data flow paths between execution units and vector data memory in the VPE. The reordering circuitry is configured to reorder output vector data sample sets from execution units as a result of performing vector processing operations in-flight while the output vector data sample sets are being provided over the data flow paths from the execution units to the vector data memory to be stored. In this manner, the output vector data sample sets are stored in the reordered format in the vector data memory without requiring additional post-processing steps, which may delay subsequent vector processing operations to be performed in the execution units. | 2015-05-21 |
20150143086 | VECTOR PROCESSING ENGINES (VPEs) EMPLOYING FORMAT CONVERSION CIRCUITRY IN DATA FLOW PATHS BETWEEN VECTOR DATA MEMORY AND EXECUTION UNITS TO PROVIDE IN-FLIGHT FORMAT-CONVERTING OF INPUT VECTOR DATA TO EXECUTION UNITS FOR VECTOR PROCESSING OPERATIONS, AND RELATED VECTOR PROCESSOR SYSTEMS AND METHODS - Vector processing engines (VPEs) employing format conversion circuitry in data flow paths between vector data memory and execution units to provide in-flight format-converting of input vector data to execution units for vector processing operations are disclosed. Related vector processor systems and methods are also disclosed. Format conversion circuitry is provided in data flow paths between vector data memory and execution units in the VPE. The format conversion circuitry is configured to convert input vector data sample sets fetched from vector data memory in-flight while the input vector data sample sets are being provided over the data flow paths to the execution units to be processed. In this manner, format conversion of the input vector data sample sets does not require pre-processing, storage, and re-fetching from vector data memory, thereby reducing power consumption and not limiting efficiency of the data flow paths by format conversion pre-processing delays. | 2015-05-21 |
20150143087 | SERVICE SYSTEM AND METHOD - A system includes provision of a first set of instructions associated with a product to a user of the product, the user having one or more associated characteristics, reception of a revision to the first set of instructions from the user, determination of whether to modify the first set of instructions based on the revision and on the characteristics, and modification, if a determination is made to modify the first set of instructions, of the first set of instructions based at least in part on the revision to generate a second set of instructions. | 2015-05-21 |
20150143088 | VECTOR ELEMENT ROTATE AND INSERT UNDER MASK INSTRUCTION - A Vector Element Rotate and Insert Under Mask instruction. Each element of a second operand of the instruction is rotated in a specified direction by a specified number of bits. For each bit in a third operand of the instruction that is set to one, the corresponding bit of the rotated elements in the second operand replaces the corresponding bit in a first operand of the instruction. | 2015-05-21 |
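The rotate-then-merge-under-mask operation described above can be modeled for a single element as follows. The 8-bit width and left-rotation direction are assumptions for the sketch; the instruction itself specifies the direction and operates element-wise on vector registers:

```python
def rotate_insert_under_mask(first, second, mask, rot, width=8):
    """Rotate `second` left by `rot` bits; for each mask bit set to one,
    copy that bit of the rotated value into `first`."""
    m = (1 << width) - 1
    rotated = ((second << rot) | (second >> (width - rot))) & m
    # keep first's bits where mask is 0, take rotated bits where mask is 1
    return (first & ~mask & m) | (rotated & mask)
```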
20150143089 | SYSTEM PERFORMANCE ENHANCEMENT WITH SMI ON MULTI-CORE SYSTEMS - Mechanisms for providing enhanced system performance and reliability on multi-core computing devices are discussed. Embodiments use modified hardware and/or software so that when a System Management Interrupt (SMI#) is generated, only a single targeted CPU core enters System Management Mode (SMM) in response to the SMI while the remaining CPU cores continue operating in normal mode. Further, a multi-threaded SMM environment and mutual exclusion objects (mutexes) may allow guarding of key hardware resources and software data structures to enable individual CPU cores among the remaining CPU cores to subsequently also enter SMM in response to a different SMI while the originally selected CPU core is still in SMM. | 2015-05-21 |
20150143090 | SYSTEM AND METHOD FOR CONFIGURING AND EXECUTING SERVICES - Systems and methods for configuring and executing services are disclosed. A plurality of services and a plurality of technology services are configured based on information stored in a knowledge repository. The plurality of services and the plurality of technology services correspond to a plurality of messages. The service is configured for a technology service. The configuration comprises transformation, validation and operation data, a service adapter and tools associated with each service, and a plurality of operations to be performed by the tools corresponding to each service. Based on the configuration, a first service and a first operation are identified corresponding to a first message. The first message is routed to the identified first service and the first operation is performed by invoking a first tool. After performing the operation, a second message is sent to identify a second service. Subsequently, the second service performs a second operation by invoking a second tool and sends the result to the first service. The first service sends the results to the user. | 2015-05-21 |
20150143091 | METHODS AND SYSTEMS OF OPERATING COMPUTING DEVICE - In one or more embodiments, a system can configure a physical mobile device via configuring a configuration for an emulator of the physical mobile device. For example, a user (e.g., a customer) can request a physical mobile device, and a system can provide the user with an emulation of the physical mobile device, where the user can configure the emulation of the physical mobile device. In one or more embodiments, the user can be provided with the configuration via at least one of a network and a physical delivery of the physical mobile device, configured with the configuration. In one example, the user can execute an emulation of the physical mobile device configured with the configuration, received via the network. In another example, the physical mobile device can be configured with the configuration, and subsequently, the physical mobile device can be physically delivered to the user. | 2015-05-21 |
20150143092 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING APPARATUS STARTUP METHOD, AND RECORDING MEDIUM - An information processing apparatus includes a startup condition acquisition unit that acquires a startup condition of multiple program modules, a determination unit that determines a startup order of the multiple program modules by multiple CPU cores, a startup unit that starts up the multiple program modules by executing an executable program module in accordance with the startup order by the multiple CPU cores, an updating unit that updates load information that indicates multiple CPU core load that fluctuates during a startup process, and a limitation unit that limits startup of the program module by the multiple CPU cores based on the load information updated by the updating unit. | 2015-05-21 |
20150143093 | PLURALITY OF INTERFACE FILES USABLE FOR ACCESS TO BIOS - A computer may comprise a processor and a first storage device coupled to the processor. The first storage device contains a basic input/output system (BIOS) executable by the processor. The system may also comprise a second storage device coupled to the processor. The second storage device may contain a management interface usable by an operating system to access the BIOS. A plurality of interface files may also be provided, each interface file being usable by the management interface to access the BIOS and each interface file defining one or more methods for use by the interface or BIOS. Upon execution of the BIOS, the processor is to determine a configuration of the system and, based on the determined configuration, to select a particular interface file for use during run-time. | 2015-05-21 |
20150143094 | System and Method to Perform an OS Boot Using Service Location Protocol and Launching OS Using a Dynamic Update of Network Boot Order Without a Reboot - A system, method, and computer-readable medium are disclosed for a boot mapping system. More specifically, in certain embodiments, BIOS of an information handling system includes a boot mapping system which allows the information handling system to boot up regardless of a boot order change in a network mode of operation or a BIOS boot order change. Additionally, in certain embodiments, the boot mapping system further includes a service location protocol (SLP) which locates operating system images based on the type of network protocol selected for deployment. | 2015-05-21 |
20150143095 | BIOS FAILOVER UPDATE WITH SERVICE PROCESSOR - Certain aspects direct to basic input/output system (BIOS) failover update with a service processor (SP). In certain embodiments, the system includes a host computer and an SP. A CPU of the host computer loads and executes a current BIOS image stored in a BIOS chip to a memory as a BIOS instance. The SP receives, from the executed BIOS instance at the host computer, a BIOS image as a failover backup image, and stores the failover backup image in the volatile memory of the SP. When an error occurs at the host computer, the executed BIOS instance sends a request for the failover backup image to the SP. In response, the SP sends a copy of the failover backup image to the host computer such that the executed BIOS instance may replace the current BIOS image stored in the BIOS chip with the copy of the failover backup image. | 2015-05-21 |
20150143096 | METHOD AND CHIP CARD FOR TRANSMITTING INFORMATION - A card including a data transmission mechanism using annex transmission channels. A method is described for the transmission of data by a chip card at the end of its life using hidden communication channels different from the standard communication channels of the card. The data are transmitted by modulating a binary signal that results from a modification of a hardware parameter of the card. | 2015-05-21 |
20150143097 | INFORMATION PROCESSING APPARATUS THAT SAVES DATA IN MAIN STORAGE DEVICE AND CONTROL METHOD THEREFOR, AND STORAGE MEDIUM - An information processing apparatus that can flexibly change an amount of data to be saved. A determining unit determines, at a time when the information processing apparatus is started up, whether termination processing abnormally ended last time or not. A detecting unit detects an instruction indicating processing that should be performed at the time of the start-up when the determining unit determines that the termination processing abnormally ended last time. A saving unit saves, under the instruction detected by the detecting unit, data stored in a save area determined by the instruction from among storage areas of a nonvolatile main storage device, to a save destination determined by the instruction. | 2015-05-21 |
20150143098 | METHOD FOR UPDATING FIRMWARE OF AN ELECTRONIC DEVICE WITHIN A COMPUTER - A method for updating firmware of a hard disk drive (HDD) within a computer. In order to use the firmware that has been updated without rebooting the computer, the old identification information of the old firmware is loaded into a random-access memory (RAM) of the HDD. The new firmware containing new identification information is written in the non-volatile memory of the HDD during a power-on state of the computer. The new firmware containing new identification information is loaded into the RAM, and the new identification information is rewritten with the old identification information. The old identification information at the RAM of the HDD is sent back in response to a request of identification information from the operating system prior to cold boot. | 2015-05-21 |
20150143099 | METHOD AND APPARATUS FOR ENHANCING A HIBERNATE AND RESUME PROCESS USING USER SPACE SYNCHRONIZATION - Before hibernating a computing device ( | 2015-05-21 |
20150143100 | TRIGGERED CONTROLLED EVENT LISTENER LEARNER - Aspects of the present invention provide a solution for responding to a change in an environment of a computer system. In an embodiment, a set of triggered controlled event listener learners (T-CELLs) are deployed in the computer system. Each T-CELL of the set of T-CELLs is a self-contained, persistent software construct. Further, each T-CELL has the ability to communicate with the other T-CELLs in the computer system. These T-CELLs can, in response to detecting a change in the computer system, automatically create a new T-CELL to respond to the change. | 2015-05-21 |
20150143101 | METHOD AND APPARATUS FOR EMBEDDED SYSTEMS REPROGRAMMING - A reprogramming device is used for reprogramming embedded systems. The reprogramming device comprises a microprocessor, a memory programmed with software to accomplish the reprogramming of distinctly different embedded systems architectures, and one or more hardware devices that facilitate communication over multiple protocols contained in a portable package designed for both one-time and multi-occurrence use scenarios. In some embodiments, the reprogramming device is able to be used to enhance one or more attributes of performance of existing embedded systems through the reconfiguration of internally stored parameters. In some embodiments, the reprogramming device is also able to be used to extract and receive information and instruction from existing embedded systems and enable useful presentation of this information. As a result, the reprogramming device is able to be used to adjust and/or monitor the parameters of the on-board diagnostics computer of a vehicle to ensure peak performance and detect errors. | 2015-05-21 |
20150143102 | SENDING MESSAGES BY OBLIVIOUS TRANSFER - A system includes a server connectable to a client, the server configured to allow the client to acquire a message of an index designated by the client among N messages held by the server where N is an integer of two or more. The server includes a classification unit configured to classify the N messages into M classified messages by contents of the messages; a message encryption unit configured to encrypt each of the M classified messages; a message provision unit configured to provide the M encrypted classified messages to the client; and a key sending unit configured to send the client, by oblivious transfer, a message key for decrypting the classified message corresponding to the message of the index designated by the client. | 2015-05-21 |
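The classification step in 20150143102 can be sketched in code. This is a minimal, illustrative model (not from the patent): N messages are deduplicated into M distinct classes, each class is encrypted once under its own key, and the client gets all M ciphertexts but only the key for the index it designates. The toy SHA-256 keystream cipher and all names are assumptions standing in for a real cipher, and the key delivery is modeled as a plain lookup where the actual scheme would run an oblivious-transfer protocol.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (SHA-256 counter keystream); XOR makes it its
    own inverse. A stand-in for a real symmetric cipher."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

def prepare_server(messages):
    """Classify N messages into M distinct classes by content, encrypt
    each class once under its own key, and record which class each
    original index maps to."""
    classes = []          # distinct message contents, in first-seen order
    index_to_class = []
    for m in messages:
        if m not in classes:
            classes.append(m)
        index_to_class.append(classes.index(m))
    keys = [secrets.token_bytes(32) for _ in classes]
    ciphertexts = [keystream_xor(k, m) for k, m in zip(keys, classes)]
    return ciphertexts, keys, index_to_class

# The client receives all M ciphertexts up front; only the key for the
# class of the requested index is delivered (by oblivious transfer in
# the actual scheme -- modeled here as a direct lookup).
msgs = [b"yes", b"no", b"yes", b"no", b"maybe"]
cts, keys, idx_map = prepare_server(msgs)
assert len(cts) == 3            # five messages collapse into three classes
i = 2                           # client designates index 2
key_i = keys[idx_map[i]]        # would be an OT exchange in practice
assert keystream_xor(key_i, cts[idx_map[i]]) == b"yes"
```

The point of the classification is bandwidth: the server ships M ciphertexts instead of N when many messages share content, while the OT step still hides which index the client chose.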
20150143103 | MESSAGING AND NETWORKING KEEPSAKES - Disclosed herein are systems, methods, and non-transitory computer-readable storage media for allowing parties exchanging digital objects and members of social networks to catalog certain data objects as favorites in a cataloged interface and which allow the parties to access and interact with the catalog of favorited content. | 2015-05-21 |
20150143104 | APPARATUS, SYSTEM, METHOD, AND MEDIUM - An apparatus includes a memory; and a processor coupled to the memory and configured to generate a first common key whose key value varies based on a first elapsed time when a notification of the first elapsed time after a start-up of another apparatus to which a data frame to be encrypted is to be transmitted has been made, generate a second common key whose key value varies based on a second elapsed time after a start-up of the apparatus when a notification of the first elapsed time has not been made, and encrypt the data frame by any one of the first common key and the second common key as a common key and transmit the encrypted data frame to the another apparatus. | 2015-05-21 |
20150143105 | USB INTERFACE FOR PERFORMING TRANSPORT I/O - Systems and methods for implementing a Transport I/O system are described. Network encrypted content may be received by a device. The device may provide the network encrypted content to a secure processor, such as, for example, a smart card. The secure processor obtains a network control word that may be used to decrypt the network encrypted content. The secure processor may decrypt the network encrypted content to produce clear content. In embodiments, the secure processor may then use a local control word to generate locally encrypted content specific to the device. The device may then receive the locally encrypted content from the secure processor and proceed to decrypt the locally encrypted content using a shared local encryption key. The secure processor may connect to the device via a standard connection, such as via a USB 3.0 connector. | 2015-05-21 |
20150143106 | REMOTE AUTHENTICATION SYSTEM - One embodiment of the invention is directed to a method including receiving an alias identifier associated with an account associated with a presenter, determining an associated trusted party using the alias identifier, sending a verification request message to the trusted party after determining the associated trusted party, and receiving a verification response message | 2015-05-21 |
20150143107 | DATA SECURITY TOOLS FOR SHARED DATA - Embodiments of data security tools enable secure data sharing. A data sharing system includes a memory device and a processor. The processor encrypts data with a common key. The processor also assigns separate instances of the common key to each user having permissions to access the data. The processor also encrypts each instance of the common key with corresponding unique keys assigned to each user. | 2015-05-21 |
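The scheme in 20150143107 is recognizable as envelope encryption: one common key protects the data, and a separately wrapped instance of that key is issued per authorized user. The sketch below is a minimal illustration under that reading; the function names are hypothetical and the SHA-256 keystream XOR is a toy stand-in for a real symmetric cipher, not production cryptography.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (SHA-256 counter keystream); XOR makes it its
    own inverse. A stand-in for a real symmetric cipher."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

def share_data(plaintext: bytes, user_keys: dict) -> tuple:
    """Encrypt data once with a common key, then wrap a separate
    instance of that common key for each permitted user."""
    common_key = secrets.token_bytes(32)
    ciphertext = keystream_xor(common_key, plaintext)
    wrapped = {user: keystream_xor(k, common_key)
               for user, k in user_keys.items()}
    return ciphertext, wrapped

def read_data(ciphertext: bytes, wrapped_key: bytes, user_key: bytes) -> bytes:
    """A user unwraps their own instance of the common key, then
    decrypts the shared ciphertext with it."""
    common_key = keystream_xor(user_key, wrapped_key)
    return keystream_xor(common_key, ciphertext)

users = {"alice": secrets.token_bytes(32), "bob": secrets.token_bytes(32)}
ct, wrapped = share_data(b"shared record", users)
assert read_data(ct, wrapped["alice"], users["alice"]) == b"shared record"
assert read_data(ct, wrapped["bob"], users["bob"]) == b"shared record"
```

A design consequence worth noting: revoking one user means deleting only that user's wrapped key instance, without re-encrypting the data for everyone else (until the common key itself is rotated).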
20150143108 | SYSTEM AND METHOD FOR UPDATING AN ENCRYPTION KEY ACROSS A NETWORK - Systems and methods are provided for generating subsequent encryption keys by a client device as one of a plurality of client devices across a network. Each client device is provided with the same key generation information and the same key setup information from an authentication server. Each client device maintains and stores its own key generation information and key setup information. Using its own information, each client device generates subsequent encryption keys that are common or the same across devices. These subsequent encryption keys are generated and maintained the same across devices without any further instruction or information from the authentication server or any other client device. Additionally, client devices can recover the current encryption key by synchronizing information with another client device. | 2015-05-21 |
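One plausible reading of 20150143108 is a one-way key chain: every client is seeded with identical key generation and setup information, derives each subsequent key locally from the previous one, and can recover the current key by syncing an epoch counter with a peer. The sketch below is a hedged illustration of that pattern using an HMAC chain; the class and method names are assumptions, not the patent's own terms.

```python
import hashlib
import hmac

class KeyChainClient:
    """Illustrative client that derives successive common keys from
    shared generation/setup info, with no further server contact.
    All clients seeded identically produce the same key sequence."""

    def __init__(self, generation_info: bytes, setup_info: bytes):
        self.generation_info = generation_info
        # The first key in the chain is derived from the setup info.
        self.current_key = hmac.new(generation_info, setup_info,
                                    hashlib.sha256).digest()
        self.epoch = 0

    def advance(self) -> bytes:
        """Derive the next key from the current one (one-way chain)."""
        self.current_key = hmac.new(self.generation_info, self.current_key,
                                    hashlib.sha256).digest()
        self.epoch += 1
        return self.current_key

    def sync_to(self, other_epoch: int) -> bytes:
        """Recover the current key by fast-forwarding to a peer's
        epoch -- only the epoch number is exchanged, never a key."""
        while self.epoch < other_epoch:
            self.advance()
        return self.current_key

# Two clients provisioned with the same info stay in lockstep.
a = KeyChainClient(b"gen-info", b"setup-info")
b = KeyChainClient(b"gen-info", b"setup-info")
a.advance()
a.advance()
b.sync_to(a.epoch)
assert a.current_key == b.current_key
```

Because the chain is one-way (HMAC of the previous key), a device that learns the current key still cannot walk backwards to earlier keys, and recovery between clients needs only an epoch counter rather than any key material on the wire.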