17th week of 2013 patent application highlights part 62 |
Patent application number | Title | Published |
20130103866 | Method, Device, and System for Packet Transmission on PCIE Bus - In a method, device, and system for packet transmission on the PCIE bus according to embodiments of the present invention, a SCSI protocol packet is encapsulated to obtain an encapsulated SCSI protocol packet, the encapsulated SCSI protocol packet is carried in a PCIE data packet, and the PCIE data packet carrying the encapsulated SCSI protocol packet is transmitted to the receiver device through the PCIE bus. Transmission of SCSI protocol packets is thereby implemented on the PCIE bus, and devices interconnected through the PCIE bus can operate on each other through SCSI protocol packets with high data transmission bandwidth and high processing speed, without requiring a specific physical device or adapter to perform protocol conversion. | 2013-04-25 |
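The two-layer wrapping described in this abstract can be sketched as follows. This is an illustrative framing only: the header layouts, the `SCSI_ENCAP_MAGIC` marker, and the field widths are assumptions for the sketch, not the PCIe TLP format or any format from the application itself.

```python
import struct

SCSI_ENCAP_MAGIC = 0x5C51  # assumed marker identifying an encapsulated SCSI packet

def encapsulate_scsi(scsi_packet: bytes) -> bytes:
    """Prefix the SCSI protocol packet with a minimal encapsulation header."""
    header = struct.pack(">HH", SCSI_ENCAP_MAGIC, len(scsi_packet))
    return header + scsi_packet

def build_pcie_packet(payload: bytes, requester_id: int = 0) -> bytes:
    """Carry the encapsulated SCSI packet as the payload of a PCIE-style data packet."""
    header = struct.pack(">HH", requester_id, len(payload))
    return header + payload

def receive(pcie_packet: bytes) -> bytes:
    """Receiver side: unwrap both layers and recover the original SCSI packet."""
    _, payload_len = struct.unpack(">HH", pcie_packet[:4])
    payload = pcie_packet[4:4 + payload_len]
    magic, scsi_len = struct.unpack(">HH", payload[:4])
    assert magic == SCSI_ENCAP_MAGIC
    return payload[4:4 + scsi_len]
```

A round trip (`receive(build_pcie_packet(encapsulate_scsi(pkt)))`) returns the original SCSI packet, mirroring the sender-to-receiver path the abstract describes.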
20130103867 | Method And Apparatus For Reducing Power Consumption In A Memory Bus Interface By Selectively Disabling And Enabling Sense Amplifiers - A technique includes amplifying data signals from a memory bus interface. The amplified data signals are sampled, and the amplifier is selectively disabled in response to the absence of a predetermined operation occurring over the memory bus. In some embodiments of the invention, the amplification may be selectively enabled in response to the beginning of the predetermined operation over the memory bus. | 2013-04-25 |
20130103868 | INTEGRATED CIRCUIT SYSTEM AND METHOD FOR OPERATING MEMORY SYSTEM - An integrated circuit system includes: a master chip; a slave chip configured to operate under a control of the master chip; and a data channel configured to transfer data between the master chip and the slave chip, wherein a data transfer rate from the master chip to the slave chip through the data channel is different from a data transfer rate from the slave chip to the master chip through the data channel. | 2013-04-25 |
20130103869 | BUS CONNECTION CIRCUIT, SEMICONDUCTOR DEVICE AND OPERATION METHOD OF BUS CONNECTION CIRCUIT - A bus connection circuit connects a bus master and a plurality of bus slaves. The bus connection circuit includes a mirror area access detecting circuit and a processing circuit. The mirror area access detecting circuit detects that the bus master accesses a mirror area of a first bus slave of the plurality of bus slaves, and outputs a detection signal based on a detection result. The processing circuit executes processing, preset in correspondence to the detection result, on an area or data as an access object, based on the detection result. | 2013-04-25 |
20130103870 | INPUT OUTPUT BRIDGING - In one embodiment, a system comprises a memory, and a first bridge unit for processor access with the memory. The first bridge unit comprises a first arbitration unit that is coupled with an input-output bus, a memory free notification unit (“MFNU”), and the memory, and is configured to receive requests from the input-output bus and receive requests from the MFNU and choose among the requests to send to the memory on a first memory bus. The system further comprises a second bridge unit for packet data access with the memory that includes a second arbitration unit that is coupled with a packet input unit, a packet output unit, and the memory and is configured to receive requests from the packet input unit and receive requests from the packet output unit, and choose among the requests to send to the memory on a second memory bus. | 2013-04-25 |
20130103871 | Method of Handling Network Traffic Through Optimization of Receive Side Scaling - An information handling system includes a plurality of processors that each includes a cache memory, and a receive side scaling (RSS) indirection table with a plurality of pointers that each points to one of the processors. A network data packet received by the information handling system determines a pointer to a first processor. In response to determining the pointer, information associated with the network data packet is transferred to the cache memory of the first processor. The information handling system also includes a process scheduler that moves a process associated with the network data packet from a second processor to the first processor, and an RSS module that directs the process scheduler to move the process and associates the pointer with the first processor in response to directing the process scheduler. | 2013-04-25 |
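The steering-plus-migration flow above can be sketched as below. The table size, `NUM_CPUS`, and the `rss_hash` stand-in (real NICs typically use a Toeplitz hash) are assumptions for illustration.

```python
NUM_CPUS = 4
# RSS indirection table: each entry is a pointer to one of the processors.
indirection_table = [i % NUM_CPUS for i in range(16)]

def rss_hash(packet: bytes) -> int:
    # Stand-in for the NIC's packet hash; only its role matters here.
    return sum(packet) % len(indirection_table)

def steer_packet(packet: bytes, process_cpu: dict, pid: int) -> int:
    """Return the CPU whose cache receives the packet data, and have the
    process scheduler move the consuming process (pid) to that same CPU,
    as the abstract describes."""
    cpu = indirection_table[rss_hash(packet)]
    process_cpu[pid] = cpu  # scheduler migrates the process to the packet's CPU
    return cpu
```

After steering, the process and the packet data share one processor's cache, which is the locality benefit the abstract is after.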
20130103872 | COMPUTER APPARATUS AND METHOD FOR DISTRIBUTING INTERRUPT TASKS THEREOF - A computer apparatus and a method for distributing interrupt tasks thereof are provided. The computer apparatus has a plurality of CPUs and a chipset, and the chipset is electrically coupled to each of the CPUs. The chipset is configured for receiving an interrupt request sent from an external hardware device and judging whether or not a task type corresponding to the interrupt request has ever been performed by any one of the CPUs. If a judging result thereof is yes, the chipset assigns the interrupt request to the CPU that has ever performed the task type, so as to perform a corresponding interrupt task. | 2013-04-25 |
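The chipset's dispatch rule in this abstract reduces to a history lookup; a minimal sketch follows. The round-robin fallback for a task type never seen before is an assumption (the abstract does not specify the first-assignment policy).

```python
from itertools import cycle

class InterruptDispatcher:
    """Sketch of the chipset: assign an interrupt to the CPU that has
    already performed its task type, if any."""

    def __init__(self, num_cpus: int):
        self._history = {}                  # task type -> CPU that performed it
        self._fallback = cycle(range(num_cpus))  # assumed first-time policy

    def assign(self, task_type: str) -> int:
        if task_type in self._history:      # judging result is "yes"
            return self._history[task_type]  # reuse the CPU that did this type
        cpu = next(self._fallback)
        self._history[task_type] = cpu
        return cpu
```

Repeated interrupts of one type keep landing on the same CPU, so the handler code and its data stay warm in that CPU's cache.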
20130103873 | SYSTEMS AND METHODS FOR WIRELESS MUSIC PLAYBACK - Systems, methods, apparatus, and articles of manufacture to control audio playback devices via a playback network including a dock interface are disclosed. An example dock includes a docking connection to enable a portable playback device to be connected to the dock and a network communication interface to enable the portable playback device to connect to a playback network via the dock, the network communication interface to provide content from the portable playback device to at least one network playback device for playback of the content via the playback network. | 2013-04-25 |
20130103874 | UPGRADE SYSTEM FOR POWER SUPPLY UNIT - An upgrade system for a power supply unit. The power supply unit includes a master interface for outputting power. The upgrade system includes a test board with a slave interface and an upgrade interface, and an upgrade device. Each of the master interface and the slave interface includes four reserved pins. The four reserved pins of the master interface are correspondingly connected to the four reserved pins of the slave interface. The four reserved pins of the slave interface are further connected to the upgrade interface. The upgrade device communicates with the power supply unit through the upgrade interface and the reserved pins of the master interface and the slave interface. | 2013-04-25 |
20130103875 | CPU INTERCONNECT DEVICE - The present disclosure provides a CPU interconnect device that connects to a first CPU and includes a quick path interconnect (QPI) interface and a serializer/deserializer (SerDes) interface. The QPI interface receives serial QPI data sent from the CPU, converts the received serial QPI data into parallel QPI data, and outputs the parallel QPI data to the SerDes interface; the SerDes interface converts the parallel QPI data output by the QPI interface into high-speed serial SerDes data and then sends the high-speed serial SerDes data to another CPU interconnect device connected to another CPU. This addresses the poor scalability, long data transmission delay, and high cost of existing interconnect systems among CPUs. | 2013-04-25 |
20130103876 | System and Method for Providing PCIE over Displayport - An apparatus and method is disclosed for providing an extensible information handling system (IHS) bus implemented on predetermined channels of a digital video interface. IHS video signal information is multiplexed with IHS bus information by a host multiplexer for transmission across a digital video connector. The multiplexed | 2013-04-25 |
20130103877 | APPARATUS AND METHODS TO COMMUNICATIVELY COUPLE FIELD DEVICES TO CONTROLLERS IN A PROCESS CONTROL SYSTEM - A disclosed example system includes a termination panel, and a shared bus on the termination panel. The shared bus is to removably receive a plurality of bases that removably receive modules to communicate with field devices, and communicatively couple the modules to an input/output card to exchange communications between the modules and a controller that is in communication with the input/output card via a second bus. | 2013-04-25 |
20130103878 | UNIVERSAL USB CHARGER - A universal USB charger connected to an electronic device stored with a set of preset voltage values has a power supply circuit, a USB interface having a V | 2013-04-25 |
20130103879 | LOAD CARD FOR TESTING PERIPHERAL COMPONENT INTERCONNECT SLOTS - A load card for testing different types of PCI slots is provided. The load card includes several gold fingers, and resistor selection circuits. Each gold finger corresponds to one PCI slot. Each resistor selection circuit includes a resistor to test at least one PCI slot working in one working voltage. When a PCI slot working at a working voltage is to be tested, the gold finger connects to the PCI slot, and the resistor selection circuit including the resistor to test the PCI slot working at the working voltage is enabled and others are disabled in response to an operation of the user. | 2013-04-25 |
20130103880 | METHODS AND SYSTEMS FOR HANDLING INTER-PROCESS AND INTER-MODULE COMMUNICATIONS IN SERVERS AND SERVER CLUSTERS - Pluggable modules communicate via a switch fabric dataplane accessible via a backplane. Various embodiments are comprised of varying numbers and arrangements of the pluggable modules in accordance with a system architecture that provides for provisioning virtual servers and clusters of servers from underlying hardware and software resources. The system architecture is a unifying solution for applications requiring a combination of computation and networking performance. Resources may be pooled, scaled, and reclaimed dynamically for new purposes as requirements change, using dynamic reconfiguration of virtual computing and communication hardware and software. | 2013-04-25 |
20130103881 | Multi-Processor Architecture Implementing A Serial Switch And Method Of Operating Same - A multi-processor architecture for a network device that includes a plurality of barrel cards, each including: a plurality of processors, a PCIe switch coupled to each of the plurality of processors, and packet processing logic coupled to the PCIe switch. The PCIe switch on each barrel card provides high speed flexible data paths for the transmission of incoming/outgoing packets to/from the processors on the barrel card. An external PCIe switch is commonly coupled to the PCIe switches on the barrel cards, as well as to a management processor, thereby providing high speed connections between processors on separate barrel cards, and between the management processor and the processors on the barrel cards. | 2013-04-25 |
20130103882 | METHOD AND SYSTEM FOR PROVIDING HARDWARE SUPPORT FOR MEMORY PROTECTION AND VIRTUAL MEMORY ADDRESS TRANSLATION FOR A VIRTUAL MACHINE - A method for providing hardware support for memory protection and virtual memory address translation for a virtual machine. The method includes executing a host machine application within a host machine context and executing a virtual machine application within a virtual machine context. A plurality of TLB (translation look aside buffer) entries for the virtual machine context and the host machine context are stored within a TLB. Memory protection bits for the plurality of TLB entries are logically combined to enforce memory protection on the virtual machine application. | 2013-04-25 |
20130103883 | NONVOLATILE MEMORY APPARATUS AND WRITE CONTROL METHOD THEREOF - A nonvolatile memory apparatus includes a memory cell array, and a write operation controller configured to verify a write operation by comparing input data to the write operation controller with cell data written into the memory cell array, measure a resistance value after a first time has elapsed, and determine whether or not to re-perform the write operation according to the measured resistance value. | 2013-04-25 |
20130103884 | FILE SYSTEM AND CONTROL METHOD THEREOF - A file system including a first memory unit which is non-volatile and has a plurality of blocks, a control unit configured to select one of the plurality of blocks of the first memory unit, determine whether the selected block is a valid block, control a data write with respect to the selected block if the selected block is a valid block, divide the plurality of blocks into valid blocks and bad blocks by checking the plurality of blocks of the first memory unit, generate an address table by mapping the valid blocks and the bad blocks to addresses and control a loading of the address table generated, and a second memory unit which is volatile and stores the address table for the plurality of blocks of the first memory unit. An address table of a flash memory, which is a non-volatile memory, is stored in another memory | 2013-04-25 |
20130103885 | ADMINISTERING THERMAL DISTRIBUTION AMONG MEMORY MODULES OF A COMPUTING SYSTEM - A computing system includes a number of memory modules and temperature sensors. Each temperature sensor measures a temperature of a memory module. In such a computing system a garbage collector during garbage collection, determines whether a temperature measurement of a temperature sensor indicates that a memory module is overheated and, if a temperature measurement of a temperature sensor indicates a memory module is overheated, the garbage collector reallocates one or more active memory regions on the overheated memory module to a non-overheated memory module. Reallocating the active memory regions includes copying contents of the active memory regions from the overheated memory module to the non-overheated memory module. | 2013-04-25 |
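The garbage collector's thermal check above can be sketched as a rebalancing pass. The 85 °C threshold, the module/region naming, and the "first cool module" target choice are assumptions for the sketch.

```python
OVERHEAT_C = 85.0  # assumed overheat threshold

def rebalance(modules: dict, regions: dict) -> dict:
    """modules: module name -> measured temperature (from its sensor).
    regions: active memory region -> module it currently lives on.
    During collection, copy regions off any overheated module onto a
    non-overheated one, as the abstract describes."""
    cool = [m for m, t in modules.items() if t < OVERHEAT_C]
    if not cool:
        return regions  # nowhere cooler to go; leave placement alone
    for region, module in regions.items():
        if modules[module] >= OVERHEAT_C:
            regions[region] = cool[0]  # contents copied to a cool module
    return regions
```

Folding the move into the collector is the key trick: the collector already copies live data, so thermal rebalancing rides along at little extra cost.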
20130103886 | DUAL-FIRMWARE FOR NEXT GENERATION EMULATION - Disclosed is a host bus adapter (HBA) that receives an input/output (I/O) command from an operating system I/O driver. Firmware stored on the host bus adapter includes primary firmware and secondary firmware to process the I/O command. The HBA is to respond to the I/O command under the control of one of the primary firmware or secondary firmware. The selected one of said primary firmware and secondary firmware may be used to certify a hardware driver for either the current generation (primary firmware) or a future generation (secondary firmware). | 2013-04-25 |
20130103887 | COMPUTING SYSTEM WITH NON-DISRUPTIVE FAST MEMORY RESTORE MECHANISM AND METHOD OF OPERATION THEREOF - A method for operating a computing system includes: monitoring a central interface for a power event; accessing a high-speed memory for pre-shutdown data; accessing a non-volatile memory during the power event for the pre-shutdown data previously stored on the high-speed memory; selecting a multiplexer for allowing external access to the high-speed memory; and formatting the pre-shutdown data in the non-volatile memory for access through a non-disruptive interface. | 2013-04-25 |
20130103888 | MEMORY ARRAY INCLUDING MULTI-STATE MEMORY DEVICES - A data storage system including a memory array including a plurality of memory devices programmable in greater than two states. A memory control module may control operations of the memory array, and an encoder module may encode input data for storing to the memory array. The memory array may be an m×n memory array, and the memory control module may control operations of storing data to and retrieving data from the memory array. | 2013-04-25 |
20130103889 | PAGE-BUFFER MANAGEMENT OF NON-VOLATILE MEMORY-BASED MASS STORAGE DEVICES - Mass storage devices and methods that use at least one non-volatile solid-state memory device, for example, one or more NAND flash memory devices, that defines a memory space for permanent storage of data. The mass storage device is adapted to be operatively connected to a host computer system having an operating system and a file system. The memory device includes memory cells organized in pages that are organized into memory blocks for storing data, and a page buffer partitioned into segments corresponding to a cluster size of the operating system or the file system of the host computer system. The size of a segment of the page buffer is larger than the size of any page of the memory device. The page buffer enables logically reordering multiple clusters of data fetched into the segments from pages of the memory device and write-combining segments containing valid clusters. | 2013-04-25 |
20130103890 | CALIBRATING MEMORY - Apparatuses and methods of calibrating a memory interface are described. Calibrating a memory interface can include loading and outputting units of a first data pattern into and from at least a portion of a register to generate a first read capture window. Units of a second data pattern can be loaded into and output from at least the portion of the register to generate a second read capture window. One of the first read capture window and the second read capture window can be selected and a data capture point for the memory interface can be calibrated according to the selected read capture window. | 2013-04-25 |
20130103891 | ENDURANCE ENHANCEMENT CODING OF COMPRESSIBLE DATA IN FLASH MEMORIES - Methods described in the present disclosure may be based on a direct transformation of original data to “shaped” data. The disclosed methods may be performed “on-the-fly” and the disclosed methods may utilize an inherent redundancy in compressible data in order to achieve endurance enhancement and error reduction. In a particular example, a method comprises generating a first portion of output data by applying a mapping of input bit sequences to output bit sequences to a first portion of input data, updating the mapping of the input bit sequences to the output bit sequences based on the first portion of the input data to generate an updated mapping, reading a second portion of the input data, and generating a second portion of the output data by applying the updated mapping of the input bit sequences to the output bit sequences to the second portion of the input data. | 2013-04-25 |
20130103892 | COMBINED MEMORY BLOCK AND DATA PROCESSING SYSTEM HAVING THE SAME - A combined memory block includes a first memory unit configured to store data and an additional memory unit that forms a stacked structure with the first memory unit, wherein the first memory unit and the additional memory unit together form multi-level cells having variable resistance for storing data. | 2013-04-25 |
20130103893 | SYSTEM COMPRISING STORAGE DEVICE AND RELATED METHODS OF OPERATION - A memory system comprises a storage device and a host. The host classifies pages stored in the storage device into a plurality of data groups according to properties of the pages, and transmits setup information regarding the classified data groups to the storage device. | 2013-04-25 |
20130103894 | PROGRAMMING A MEMORY DEVICE TO INCREASE DATA RELIABILITY - Methods for programming a memory array, memory devices, and memory systems are disclosed. In one such method, the target reliability of the data to be programmed is determined. The relative reliability of different groups of memory cells of the memory array is determined. The data is programmed into the group of memory cells of the array having a relative reliability corresponding to the target reliability. | 2013-04-25 |
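The matching step in this method can be sketched as a nearest-reliability lookup. The 0-to-1 reliability scale and the group names are assumptions; the abstract only requires that a group "corresponding to" the target be chosen.

```python
def choose_group(target_reliability: float, groups: dict) -> str:
    """groups: group name -> measured relative reliability (assumed 0..1 scale).
    Program the data into the group whose relative reliability is closest
    to the data's target reliability."""
    return min(groups, key=lambda g: abs(groups[g] - target_reliability))
```

For example, data needing high reliability would be steered to the most reliable cell group, while bulk data tolerant of errors could go to weaker groups, spreading wear by importance.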
20130103895 | FLASH MEMORY STORAGE SYSTEM - A flash memory storage system has a plurality of flash memory devices comprising a plurality of flash memories, and a controller having an I/O processing control unit for accessing, from among the plurality of flash memory devices, a flash memory device specified by a designated access destination in an I/O request received from an external device. A parity group can be composed of flash memory devices having identical internal configurations. | 2013-04-25 |
20130103896 | MEMORY MODULE WITH MEMORY STACK AND INTERFACE WITH ENHANCED CAPABILITIES - A memory module, which includes at least one memory stack, comprises a plurality of DRAM integrated circuits and an interface circuit. The interface circuit interfaces the memory stack to a host system so as to operate the memory stack as a single DRAM integrated circuit. In other embodiments, a memory module includes at least one memory stack and a buffer integrated circuit. The buffer integrated circuit, coupled to a host system, interfaces the memory stack to the host system so as to operate the memory stack as at least two DRAM integrated circuits. In yet other embodiments, the buffer circuit interfaces the memory stack to the host system for transforming one or more physical parameters between the DRAM integrated circuits and the host system. | 2013-04-25 |
20130103897 | SYSTEM AND METHOD FOR TRANSLATING AN ADDRESS ASSOCIATED WITH A COMMAND COMMUNICATED BETWEEN A SYSTEM AND MEMORY CIRCUITS - A memory circuit system and method are provided. An interface circuit is capable of communication with a plurality of memory circuits and a system. In use, the interface circuit is operable to translate an address associated with a command communicated between the system and the memory circuits. | 2013-04-25 |
20130103898 | DRIVER FOR DDR2/3 MEMORY INTERFACES - An apparatus is described that includes a combined drive and termination circuit programmable to interface to DDR2 and DDR3 memory modules. In an exemplary embodiment the apparatus includes a combined output/termination driver, an input driver and a calibration subsystem. The combined output/termination driver includes a number of pull-up circuits and a number of pull-down circuits. One of the pull-up circuits presents a fixed output impedance. The rest of the pull-up circuits have an impedance programmable between two desired impedance values. One of the pull-down circuits presents a fixed output impedance. The rest of the pull-down circuits have an impedance programmable between two desired impedance values. The necessary number of pull-up circuits and pull-down circuits is activated in order to provide a desired driving and termination circuit such as to interface to specific impedance values as defined by the DDR2 and DDR3 interface protocol. | 2013-04-25 |
20130103899 | SYSTEM ON CHIP WITH RECONFIGURABLE SRAM - A system on chip includes electrical components and a first memory including memory blocks. A method of operating the system on chip includes generating an assignment of the memory blocks to the electrical components. The generating includes, initially, during a development phase of the system on chip, generating the assignment so that selected memory blocks of the memory blocks are assigned to first selected electrical components of the electrical components as emulated read-only memory. The generating includes, subsequently, during an operational phase of the system on chip, modifying the assignment so that one or more of the selected memory blocks are re-assigned to second selected electrical components of the electrical components as cache memory. The method also includes, according to the assignment, dynamically creating electrical connectivity between the memory blocks and the electrical components. | 2013-04-25 |
20130103900 | ELECTRONIC SYSTEM AND METHOD AND APPARATUS FOR SAVING DATA THEREOF - An electronic system, and a method and an apparatus for saving data of the electronic system are provided. The electronic system includes a central processing unit (CPU), a temperature sensor, a first controller, a second controller, a first storage device, and a second storage device. When the CPU enters a suspend mode and the first controller, through the temperature sensor, detects that the temperature of the electronic system is lower than a threshold value, the second controller notifies the application program to trigger the CPU to enter a hibernation mode, and operation data is moved from the first storage device to the second storage device. | 2013-04-25 |
20130103901 | DYNAMICALLY SWITCHING COMMAND TYPES TO A MASS STORAGE DRIVE - A method, device, and system are disclosed. In one embodiment method begins by receiving a first new mass storage disk access request. The method then determines the total number of access requests to the mass storage disk received in a window of time. If the total number of requests received over the period of time is greater than or equal to a request threshold number then a request frequency counter is decremented. Otherwise, the counter is incremented. The method continues by generating a legacy advanced technology attachment (ATA)-type command for the first new access request when the counter is greater than or equal to a counter threshold number. Otherwise, the method generates a native command queue (NCQ)-type command for the first new access request. | 2013-04-25 |
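The abstract's counter logic is concrete enough to sketch directly. The two threshold values below are illustrative; the abstract only fixes the structure (busy windows decrement the counter toward NCQ, idle windows increment it toward legacy ATA).

```python
REQUEST_THRESHOLD = 8   # assumed requests-per-window cutoff
COUNTER_THRESHOLD = 0   # assumed switch point for the frequency counter

def next_command_type(requests_in_window: int, counter: int) -> tuple:
    """Apply one window's worth of the abstract's rule and return the
    command type for the next request plus the updated counter."""
    if requests_in_window >= REQUEST_THRESHOLD:
        counter -= 1    # busy: drift toward queued (NCQ) commands
    else:
        counter += 1    # light load: drift toward legacy ATA commands
    cmd = "ATA" if counter >= COUNTER_THRESHOLD else "NCQ"
    return cmd, counter
```

The hysteresis matters: a single busy or idle window only nudges the counter, so the drive does not flip-flop between command types on momentary load spikes.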
20130103902 | METHOD AND APPARATUS FOR IMPLEMENTING PROTECTION OF REDUNDANT ARRAY OF INDEPENDENT DISKS IN FILE SYSTEM - Embodiments of the present invention disclose a method and an apparatus for implementing protection of RAID in a file system, and are applied in the field of communications technologies. In the embodiments of the present invention, after receiving a file operation request, the file system determines the type of the file to be operated on as requested by the file operation request, and performs file operations in a hard disk drive of the file system directly according to a file operation method corresponding to the determined file type, that is, a RAID data protection method. Therefore, corresponding file operations may be performed in a proper operation method according to each different file type, and data of an important file type is primarily protected, thereby improving reliability of data storage. | 2013-04-25 |
20130103903 | Methods And Apparatus For Reusing Prior Tag Search Results In A Cache Controller - Methods and apparatus are provided for reusing prior tag search results in a cache controller. A cache controller is disclosed that receives an incoming request for an entry in the cache having a first tag; determines if there is an existing entry in a buffer associated with the cache having the first tag; and reuses a tag access result from the existing entry in the buffer having the first tag for the incoming request. An indicator can be maintained in the existing entry to indicate whether the tag access result should be retained. Tag access results can optionally be retained in the buffer after completion of a corresponding request. The tag access result can be reused by (i) reallocating the existing entry to the incoming request if the indicator in the existing entry indicates that the tag access result should be retained; and/or (ii) copying the tag access result from the existing entry to a buffer entry allocated to the incoming request if a hazard is detected. | 2013-04-25 |
20130103904 | SYSTEM AND METHOD TO REDUCE MEMORY ACCESS LATENCIES USING SELECTIVE REPLICATION ACROSS MULTIPLE MEMORY PORTS - In one embodiment, a system comprises multiple memory ports distributed into multiple subsets, each subset identified by a subset index and each memory port having an individual wait time. The system further comprises a first address hashing unit configured to receive a read request including a virtual memory address associated with a replication factor, and referring to graph data. The first address hashing unit translates the replication factor into a corresponding subset index based on the virtual memory address, and converts the virtual memory address to a hardware based memory address that refers to graph data in the memory ports within a subset indicated by the corresponding subset index. The system further comprises a memory replication controller configured to direct read requests to the hardware based address to the one of the memory ports within the subset indicated by the corresponding subset index with a lowest individual wait time. | 2013-04-25 |
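The read path in this abstract can be sketched in two steps: map the virtual address (given its replication factor) to a subset of ports, then pick the least-loaded port in that subset. The modulo hash and the `(port_id, wait_time)` representation are assumptions for the sketch.

```python
def select_port(vaddr: int, replication: int, subsets: list) -> tuple:
    """subsets: list of subsets, each a list of (port_id, wait_time) pairs.
    With replication factor r, the graph data exists in r subsets, so the
    address hash may pick any of them; within the chosen subset, the
    replication controller directs the read to the port with the lowest
    individual wait time."""
    subset_index = vaddr % replication            # assumed address hash
    subset = subsets[subset_index]
    port_id, _ = min(subset, key=lambda p: p[1])  # lowest wait time wins
    return subset_index, port_id
```

Replication trades memory capacity for latency: hot graph data copied into several subsets gives the controller more low-wait ports to choose from.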
20130103905 | Optimizing Memory Copy Routine Selection For Message Passing In A Multicore Architecture - In one embodiment, the present invention includes a method to obtain topology information regarding a system including at least one multicore processor, provide the topology information to a plurality of parallel processes, generate a topological map based on the topology information, access the topological map to determine a topological relationship between a sender process and a receiver process, and select a given memory copy routine to pass a message from the sender process to the receiver process based at least in part on the topological relationship. Other embodiments are described and claimed. | 2013-04-25 |
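The selection step described above can be sketched with a toy topological map. The routine names and the core-to-socket layout are hypothetical; the point is that the sender/receiver relationship (same core, same socket, or cross-socket) picks the copy routine.

```python
def build_topology_map(rank_to_core: dict, cores_per_socket: int) -> dict:
    """Map each parallel process (rank) to its (socket, core) position."""
    return {r: (core // cores_per_socket, core)
            for r, core in rank_to_core.items()}

def pick_copy_routine(topo: dict, sender: int, receiver: int) -> str:
    """Choose a memory copy routine from the topological relationship
    between the sender and receiver processes."""
    s_sock, s_core = topo[sender]
    r_sock, r_core = topo[receiver]
    if s_core == r_core:
        return "same_core_copy"        # hypothetical routine names
    if s_sock == r_sock:
        return "shared_cache_copy"     # same socket: ranks may share an LLC
    return "cross_socket_copy"         # NUMA-aware path across sockets
```

Processes sharing a last-level cache can pass messages through it cheaply, while cross-socket transfers warrant a routine tuned for NUMA traffic.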
20130103906 | Combining Write Buffer with Dynamically Adjustable Flush Metrics - In an embodiment, a combining write buffer is configured to maintain one or more flush metrics to determine when to transmit write operations from buffer entries. The combining write buffer may be configured to dynamically modify the flush metrics in response to activity in the write buffer, modifying the conditions under which write operations are transmitted from the write buffer to the next lower level of memory. For example, in one implementation, the flush metrics may include categorizing write buffer entries as “collapsed.” A collapsed write buffer entry, and the collapsed write operations therein, may include at least one write operation that has overwritten data that was written by a previous write operation in the buffer entry. In another implementation, the combining write buffer may maintain the threshold of buffer fullness as a flush metric and may adjust it over time based on the actual buffer fullness. | 2013-04-25 |
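The "collapsed" metric in this abstract can be sketched as tracking, per buffer entry, whether any write has overwritten bytes already written into that entry. The byte-offset granularity and the flush ordering are assumptions for the sketch.

```python
class CombiningWriteBuffer:
    """Sketch: mark an entry collapsed once a write overwrites data that a
    previous write placed in the same entry, and flush collapsed entries
    first."""

    def __init__(self):
        self.entries = {}  # entry address -> {"written": set of offsets,
                           #                   "collapsed": bool}

    def write(self, addr: int, offset: int):
        e = self.entries.setdefault(addr, {"written": set(),
                                           "collapsed": False})
        if offset in e["written"]:   # overwrites an earlier write: collapsed
            e["collapsed"] = True
        e["written"].add(offset)

    def flush_candidates(self):
        """Entries whose collapse suggests further combining is unlikely."""
        return sorted(a for a, e in self.entries.items() if e["collapsed"])
```

A collapsed entry has already absorbed repeated writes to the same bytes, so holding it longer yields diminishing combining benefit; flushing it frees the entry for fresh streams.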
20130103907 | MEMORY MANAGEMENT DEVICE, MEMORY MANAGEMENT METHOD, CONTROL PROGRAM, AND RECORDING MEDIUM - A memory management device includes a prefetch execution unit, which prefetches data from a first memory unit and moves the data to a second memory unit, and an initial data preservation unit, which, before the prefetch execution unit performs the prefetching, preserves as initial data both at least a part of the data items placed in the second memory unit before the prefetching and the data to be prefetched by the prefetch execution unit; the initial data is the data stored in the second memory unit when a system including the first and second memory units is started. | 2013-04-25 |
20130103908 | PREVENTING UNINTENDED LOSS OF TRANSACTIONAL DATA IN HARDWARE TRANSACTIONAL MEMORY SYSTEMS - A method and apparatus are disclosed for implementing early release of speculatively read data in a hardware transactional memory system. A processing core comprises a hardware transactional memory system configured to receive an early release indication for a specified word of a group of words in a read set of an active transaction. The early release indication comprises a request to remove the specified word from the read set. In response to the early release request, the processing core removes the group of words from the read set only after determining that no word in the group other than the specified word has been speculatively read during the active transaction. | 2013-04-25 |
20130103909 | SYSTEM AND METHOD TO PROVIDE NON-COHERENT ACCESS TO A COHERENT MEMORY SYSTEM - In one embodiment, a system comprises a memory and a memory controller that provides a cache access path to the memory and a bypass-cache access path to the memory, receives requests to read graph data from the memory on the bypass-cache access path and receives requests to read non-graph data from the memory on the cache access path. A method comprises receiving a request at a memory controller to read graph data from a memory on a bypass-cache access path, receiving a request at the memory controller to read non-graph data from the memory through a cache access path, and arbitrating, in the memory controller, among the requests. | 2013-04-25 |
20130103910 | CACHE MANAGEMENT FOR INCREASING PERFORMANCE OF HIGH-AVAILABILITY MULTI-CORE SYSTEMS - An apparatus and method for improving performance in high-availability systems are disclosed. In accordance with the illustrative embodiment, pages of memory of a primary system that are to be shadowed are initially copied to a backup system's memory, as well as to a cache in the primary system. A duplication manager process maintains the cache in an intelligent manner that significantly reduces the overhead required to keep the backup system in sync with the primary system, as well as the cache size needed to achieve a given level of performance. Advantageously, the duplication manager is executed on a different processor core than the application process executing transactions, further improving performance. | 2013-04-25 |
20130103911 | METHOD AND APPARATUS FOR SYNCHRONIZING A CACHE - An approach is provided for segmenting a cache into one or more cache segments and synchronizing the cache segments. A cache platform causes, at least in part, a segmentation of at least one cache into one or more cache segments. The cache platform further determines that at least one cache segment of the one or more cache segments is invalid. The cache platform also causes, at least in part, a synchronization of the at least one cache segment. The approach allows for a dynamic optimization of the synchronization of the cache segments based on one or more characteristics associated with the devices and/or the connection associated with the cache synchronization. | 2013-04-25 |
20130103912 | ARRANGEMENT - An arrangement includes a first part and a second part. The first part includes a memory controller for accessing a memory, at least one first cache memory and a first directory. The second part includes at least one second cache memory configured to request access to said memory. The first directory is configured to use a first coherency protocol for the at least one first cache memory and a second, different coherency protocol for the at least one second cache memory. | 2013-04-25 |
20130103913 | SEMICONDUCTOR STORAGE DEVICE, SYSTEM, AND METHOD - A semiconductor storage system includes: a difference determining circuit configured to determine a difference between the number of first state values of sample data written to a memory and the number of first state values of read data read from the memory; and a compensation value determining circuit configured to determine a read voltage level compensation value corresponding to a difference between the number of the first state values of the sample data written to the memory and the number of the first state values of the read data read from the memory. | 2013-04-25 |
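The compensation scheme above can be sketched in a few lines (Python; the 5 mV step size and the sign convention are illustrative assumptions, not values from the patent):

```python
def popcount(data: bytes) -> int:
    """Count bits in the '1' state (the 'first state value') across a byte string."""
    return sum(bin(b).count("1") for b in data)

def read_voltage_compensation(written: bytes, read_back: bytes,
                              step_mv: int = 5) -> int:
    """Return a read-voltage offset (in mV) proportional to the difference
    between the number of 1-bits written as sample data and the number
    read back. A surplus of 1s on read suggests the read threshold is
    skewed one way, so the compensation nudges it in the other."""
    diff = popcount(read_back) - popcount(written)
    return diff * step_mv
```

With identical sample and read data the difference is zero and no compensation is applied.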
20130103914 | APPARATUS, METHOD, AND STORAGE MEDIUM FOR SAMPLING DATA - A data sampling apparatus includes a plurality of first-in first-out memories and a processor that executes a procedure. The procedure includes classifying received data signals in accordance with types of the data signals; storing the classified data signals in the corresponding memories; calculating a sampling rate based on a ratio between a total traffic volume of the received data signals per given time and a traffic volume of data signals stored in each of the memories per given time; and sampling the data signals stored in each of the memories based on the corresponding calculated sampling rate. | 2013-04-25 |
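The per-queue rate calculation can be sketched as follows (Python; taking the rate as the queue's share of total traffic is one plausible reading of the ratio described in the abstract, not a detail confirmed by it):

```python
def sampling_rate(total_volume: int, queue_volume: int) -> float:
    """Sampling rate for one FIFO queue over a measurement window,
    computed here as the queue's traffic volume relative to the total
    traffic volume received in that window."""
    if total_volume == 0:
        return 0.0  # no traffic in the window: nothing to sample
    return queue_volume / total_volume
```

Each classified FIFO then samples its stored signals at its own computed rate, so busier traffic types contribute proportionally more samples.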
20130103915 | SECURE MEMORY ACCESS SYSTEM AND METHOD - A secure memory access system and method for providing secure access to Hyper Management Mode memory ranges is presented. | 2013-04-25 |
20130103916 | CLEARING BLOCKS OF STORAGE CLASS MEMORY - An abstraction for storage class memory is provided that hides the details of the implementation of storage class memory from a program, and provides a standard channel programming interface for performing certain actions, such as controlling movement of data between main storage and storage class memory or managing storage class memory. | 2013-04-25 |
20130103917 | EFFICIENT COMMAND MAPPING SCHEME FOR SHORT DATA BURST LENGTH MEMORY DEVICES - An exemplary system of the present disclosure comprises a memory controller, a command bus, a data bus, a memory device and a memory. The memory device is coupled to the memory controller by the command bus and the data bus. The memory stores instructions that when executed by the computer system perform a method of requesting data from the memory device. This method comprises receiving a plurality of commands for the memory device from the command bus, the memory device clocked by a clock. At least one command of the plurality of commands includes a first command and a second command within a single clock cycle of said clock. At least one of the first command and second command is a data access command. The first command is executed during a first clock cycle and the second command is executed during a second subsequent clock cycle. | 2013-04-25 |
20130103918 | Adaptive Concentrating Data Transmission Heap Buffer and Method - An apparatus includes a data container unloading circuit which frees a container either by discarding the contents or transmitting the contents to its destination. A data container loading circuit receives a plurality of submittals of various sizes and selects an appropriately sized free container. If no free container has sufficient capacity the loading circuit blocks all loading until a container of sufficient size becomes available. A container tailor circuit checks for available free space in the buffer and transfers capacity among free containers to resize one to fit an incoming submittal. The mix of container sizes can be adapted over time to reflect the changing sizes of the traffic. | 2013-04-25 |
20130103919 | ADMINISTERING THERMAL DISTRIBUTION AMONG MEMORY MODULES WITH CALL STACK FRAME SIZE MANAGEMENT - Administering thermal distribution among memory modules in a computing system that includes temperature sensors, where each temperature sensor measures temperature of a memory module and thermal distribution is effected by: determining, in real-time by a user-level application in dependence upon the temperature measurements of the temperature sensors, whether a memory module is overheated; if a memory module is overheated and if a current call stack frame is stored on the overheated memory module, increasing, by the user-level application, a size of the current call stack frame to fill remaining available memory space on the overheated memory module, ensuring a subsequent call stack frame is stored on a different memory module. | 2013-04-25 |
20130103920 | FILE STORAGE METHOD AND APPARATUS - A file storage method includes: splitting each of multiple files into one or more file block objects with different sizes; and writing the file block objects obtained from file splitting into corresponding large object storage files, wherein a preset number of large object storage files are pre-created in a storage apparatus, and storage spaces occupied by the preset number of large object storage files in the storage apparatus are continuous. | 2013-04-25 |
20130103921 | MANAGEMENT METHOD FOR A VIRTUAL VOLUME ACROSS A PLURALITY OF STORAGES - A first storage system includes a plurality of first storage devices and is coupled to a computer. A second storage system includes a plurality of second storage devices and is coupled to the first storage system. A first controller provides a thin provisioning logical volume (LU) to the computer. A second controller provides an external thin provisioning LU to the first storage system. The first controller provides pool areas associated with the thin provisioning LU, including a first pool area mapped to the external thin provisioning LU, and allocates the first pool area to a first region in the thin provisioning LU to store a write data to the first region in the thin provisioning LU. The second controller allocates at least one of a plurality of pool areas to store the write data to the first region in the thin provisioning LU. | 2013-04-25 |
20130103922 | METHOD, COMPUTER PROGRAM PRODUCT AND APPARATUS FOR ACCELERATING RESPONSES TO REQUESTS FOR TRANSACTIONS INVOLVING DATA OPERATIONS - Responding to IO requests made by an application to an operating system within a computing device implements IO performance acceleration that interfaces with the logical and physical disk management components of the operating system and within that pathway provides a system memory based disk block cache. The logical disk management component of the operating system identifies logical disk addresses for IO requests sent from the application to the operating system. These addresses are translated to physical disk addresses that correspond to disk blocks available on a physical storage resource. The disk block cache stores cached disk blocks that correspond to the disk blocks available on the physical storage resource, such that IO requests may be fulfilled from the disk block cache. Provision of the disk block cache between the logical and physical disk management components accommodates tailoring of efficiency to any applications making IO requests, and flexible interaction with various different physical disks. | 2013-04-25 |
20130103923 | MEMORY MANAGEMENT UNIT SPECULATIVE HARDWARE TABLE WALK SCHEME - A system and method for efficiently handling translation look-aside buffer (TLB) misses. A memory management unit (MMU) detects when a given virtual address misses in each available translation-lookaside-buffer (TLB). The MMU determines whether a memory access operation associated with the given virtual address is the oldest, uncompleted memory access operation in a scheduler. If this is the case, a demand table walk (TW) request may be stored in an available entry in a TW queue. During this time, the utilization of the memory subsystem resources may be low. While a demand TW request is stored in the TW queue, subsequent speculative TW requests may be stored in the TW queue. When the TW queue does not store a demand TW request, no more entries of the TW queue may be allocated to store TW requests. | 2013-04-25 |
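The queue admission policy described above can be sketched as follows (Python; the class shape, capacity, and request labels are illustrative assumptions, not the patent's hardware design):

```python
class TableWalkQueue:
    """Sketch of the admission rule: speculative table-walk (TW)
    requests are accepted only while a demand TW request is resident
    in the queue; otherwise no new entries are allocated to them."""

    def __init__(self, capacity: int = 4):
        self.capacity = capacity
        self.entries = []  # list of (kind, virtual address) tuples

    def enqueue(self, kind: str, vaddr: int) -> bool:
        if len(self.entries) >= self.capacity:
            return False  # queue full
        has_demand = any(k == "demand" for k, _ in self.entries)
        if kind == "speculative" and not has_demand:
            return False  # no demand walk pending: reject speculation
        self.entries.append((kind, vaddr))
        return True
```

The effect is that speculative walks consume memory-subsystem bandwidth only during periods when a demand miss already makes utilization low.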
20130103924 | EXPLOIT NONSPECIFIC HOST INTRUSION PREVENTION/DETECTION METHODS AND SYSTEMS AND SMART FILTERS THEREFOR - Exploit nonspecific host intrusion prevention/detection methods, systems and smart filters are described. A portion of network traffic is captured and searched for a network traffic pattern by: searching for a branch instruction transferring control to a first address in the memory; provided the first instruction is found, searching for a subroutine call instruction within a first predetermined interval in the memory starting from the first address and pointing to a second address in the memory; provided the second instruction is found, searching for a third instruction at a third address in the memory, located at a second predetermined interval from the second address; provided the third instruction is a fetch instruction, indicating the presence of the exploit. | 2013-04-25 |
20130103925 | Method and System for Folding a SIMD Array - Systems and methods for folding a single instruction multiple data (SIMD) array include a newly defined processing element group (PEG) that allows interconnection of PEGs by abutment without requiring a row or column weave pattern. The interconnected PEGs form a SIMD array that is effectively folded at its center along the North-South axis, and may also be folded along the East-West axis. The folding of the array provides for north and south boundaries to be co-located and for east and west boundaries to be co-located. The co-location allows wrap-around connections to be done with a propagation distance reduced effectively to zero. | 2013-04-25 |
20130103926 | ESTABLISHING A DATA COMMUNICATIONS CONNECTION BETWEEN A LIGHTWEIGHT KERNEL IN A COMPUTE NODE OF A PARALLEL COMPUTER AND AN INPUT-OUTPUT ('I/O') NODE OF THE PARALLEL COMPUTER - Establishing a data communications connection between a lightweight kernel in a compute node of a parallel computer and an input-output (‘I/O’) node of the parallel computer, including: configuring the compute node with the network address and port value for data communications with the I/O node; establishing a queue pair on the compute node, the queue pair identified by a queue pair number (‘QPN’); receiving, in the I/O node on the parallel computer from the lightweight kernel, a connection request message; establishing by the I/O node on the I/O node a queue pair identified by a QPN for communications with the compute node; and establishing by the I/O node the requested connection by sending to the lightweight kernel a connection reply message. | 2013-04-25 |
20130103927 | CHARACTERIZATION AND VALIDATION OF PROCESSOR LINKS - A processor link that couples a first processor and a second processor is selected for validation and a plurality of communication parameter settings associated with the first and the second processors is identified. The first and the second processors are successively configured with each of the communication parameter settings. One or more test data pattern(s) are provided from the first processor to the second processor in accordance with the communication parameter setting. Performance measurements associated with the selected processor link and with the communication parameter setting are determined based, at least in part, on the test data pattern as received at the second processor. One of the communication parameter settings that is associated with the highest performance measurements is selected. The selected communication parameter setting is applied to the first and the second processors for subsequent communication between the first and the second processors via the processor link. | 2013-04-25 |
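The sweep-and-select procedure above reduces to a simple loop (Python; `run_test` is a hypothetical stand-in for configuring both processors and sending test patterns across the link, and the scoring direction is an assumption):

```python
def select_link_setting(settings, run_test):
    """Configure the link successively with each communication-parameter
    setting, score it via a caller-supplied test (e.g. a quality figure
    derived from received test data patterns), and keep the best one."""
    best_setting, best_score = None, float("-inf")
    for setting in settings:
        score = run_test(setting)  # placeholder for the actual link test
        if score > best_score:
            best_setting, best_score = setting, score
    return best_setting
```

The selected setting is then applied to both processors for all subsequent traffic over the validated link.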
20130103928 | Method, Apparatus, And System For Optimizing Frequency And Performance In A Multidie Microprocessor - With the progress toward multi-core processors, each core cannot readily ascertain the status of the other dies with respect to an idle or active status. A proposal for utilizing an interface to transmit core status among multiple cores in a multi-die microprocessor is discussed. This facilitates thermal management by allowing performance and frequency to be set optimally based on each core's status. | 2013-04-25 |
20130103929 | COUPLING PROCESSORS TO EACH OTHER FOR HIGH PERFORMANCE COMPUTING (HPC) - A High Performance Computing (HPC) node comprises a motherboard, a switch comprising eight or more ports integrated on the motherboard, and at least two processors operable to execute an HPC job, with each processor communicably coupled to the integrated switch and integrated on the motherboard. | 2013-04-25 |
20130103930 | DATA PROCESSING DEVICE AND METHOD, AND PROCESSOR UNIT OF SAME - A processor unit ( | 2013-04-25 |
20130103931 | MACHINE PROCESSOR - Disclosed are machine processors and methods performed thereby. The processor has access to processing units for performing data processing and to libraries. Functions in the libraries are implementable to perform parallel processing and graphics processing. The processor may be configured to acquire (e.g., to download from a web server) a download script, possibly with extensions specifying bindings to library functions. Running the script may cause the processor to create, for each processing unit, contexts in which functions may be run, and to run, on the processing units and within a respective context, a portion of the download script. Running the script may also cause the processor to create, for a processing unit, a memory object, transfer data into that memory object, and transfer data back to the processor in such a way that a memory address of the data in the memory object is not returned to the processor. | 2013-04-25 |
20130103932 | MULTI-ADDRESSABLE REGISTER FILES AND FORMAT CONVERSIONS ASSOCIATED THEREWITH - A multi-addressable register file is addressed by a plurality of types of instructions, including scalar, vector and vector-scalar extension instructions. It may be determined that data is to be translated from one format to another format. If so determined, a convert machine instruction is executed that obtains a single precision datum in a first representation in a first format from a first register; converts the single precision datum of the first representation in the first format to a converted single precision datum of a second representation in a second format; and places the converted single precision datum in a second register. | 2013-04-25 |
20130103933 | METHOD OF SHARING FIRMWARE SETTING VALUE - A method of sharing a firmware setting value suitable for an electronic apparatus is provided. The method is executed by an electronic apparatus and includes the following steps: logging in to a sharing platform, wherein multiple firmware profiles are stored in the sharing platform and each firmware profile includes a firmware setting value and hardware information; searching for candidate profiles matching a search criterion among the firmware profiles; displaying the hardware information in the found candidate profiles; downloading a selected one of the candidate profiles according to a selection instruction; and applying the downloaded candidate profile. | 2013-04-25 |
20130103934 | COMPUTER SYSTEM AND METHOD FOR TAKING OVER MODULE THEREIN - In a computer system comprising an active computer and a standby computer, when an active computer that has stopped due to a failure is switched over to a standby computer, the standby computer cannot access a TPM mounted in the stopped active computer, and therefore cannot take over the TPM in use from the active computer. The present invention provides a computer system comprising a TPM provided outside an active computer, to enable takeover of a TPM in use from an active computer to a standby computer when performing a switchover therebetween. | 2013-04-25 |
20130103935 | METHOD AND SYSTEM FOR PROGRAMMABLE POWER STATE CHANGE IN A SYSTEM-ON-A-CHIP DEVICE - A method and system are set forth for enabling software control of a power management unit (PMU) in a System-On-a-Chip (SoC) device to effect changes in power state without having to adjust external board level states. In one embodiment, once the SoC system controller has been booted, it communicates with the PMU over a communication bus and is able to request changes in power states without requiring external trigger events. Complete remote control of power states according to the method and system set forth herein provides flexibility when debugging and testing SoC devices because there is no need to alter external board states. Also, providing programmable changes in reset states as an alternative to full system reset preserves state data so that the system can be restarted efficiently and quickly from known conditions. | 2013-04-25 |
20130103936 | SYSTEM AND METHOD FOR PROVIDING A PARAMETER FOR AN APPLICATION OPERATING ON AN ELECTRONIC DEVICE - A system and method of activating a set up application operating on an electronic device are provided. The method comprises: upon activation of the electronic device, determining a state of initial configuration for the electronic device from configuration data stored in the electronic device; when the state of initial configuration of the electronic device indicates that the set up application had previously been initiated, presenting a GUI screen on the electronic device where an application operating on the electronic device and the set up application are displayed; and upon activation of the set up application, activating the set up application at a point in its operation based on the operation history. | 2013-04-25 |
20130103937 | INFORMATION DEVICE, STORAGE MEDIUM AND INITIAL STATE RESTORATION METHOD - An information device has a storage medium storing information items which includes a first program provided on a first partition, a second program and data provided on a second partition to restore the first program on the first partition to a predetermined state, a boot block which causes system activation from one of the first partition and the second partition, and an active-partition switching program which indicates, to the boot block, one of the first and second partitions. An input/output system activates the active-partition switching program when a specific operation is performed. The active-partition switching program indicates to the boot block that system activation is to be executed from the second partition. | 2013-04-25 |
20130103938 | Reconfiguring A Secure System - Apparatuses, methods, and systems for reconfiguring a secure system are disclosed. In one embodiment, an apparatus includes a configuration storage location, a lock, and lock override logic. The configuration storage location is to store information to configure the apparatus. The lock is to prevent writes to the configuration storage location. The lock override logic is to allow instructions executed from sub-operating mode code to override the lock. | 2013-04-25 |
20130103939 | Securing Communications of a Wireless Access Point and a Mobile Device - In one or more embodiments, a network provider can receive a request to access a public network via a wireless network implemented via one or more wireless access points. The network provider can receive, via an unsecured wireless communication from a mobile device utilizing the wireless network and via a hypertext transfer protocol secure (HTTPS), an encryption key usable to secure wireless communications from the mobile device utilizing the wireless network. The encryption key can be encrypted via a public encryption key, received from the network provider or previously stored by the mobile device, associated with the network provider. The network provider can decrypt the encryption key and can provide the encryption key to a wireless access point implementing the wireless network and communicating with the mobile device. The wireless access point and the mobile device can communicate in a secure fashion based on the encryption key. | 2013-04-25 |
20130103940 | METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR PERFORMING ENCAPSULATING SECURITY PAYLOAD (ESP) REHASHING - Methods, systems, and computer readable media for accelerating stateless IPsec traffic generation by performing ESP rehashing of ESP packets are disclosed. A first ESP packet is generated by encrypting a portion of the packet and adding ESP headers and trailers to the encrypted portion, hashing the encrypted portion and the ESP header to compute a first ESP integrity check value (ICV), and adding the ESP ICV as a trailer to the ESP packet. At least one second ESP packet is generated by modifying parameters in the first ESP packet. The first and second ESP packets are transmitted to a device under test. | 2013-04-25 |
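The rehashing idea above can be sketched as follows (Python; HMAC-SHA256 truncated to 12 bytes stands in for whatever authentication algorithm the security association would actually negotiate, and the header layout is a simplified assumption, not RFC 4303's exact format):

```python
import hmac
import hashlib

def esp_icv(key: bytes, esp_header: bytes, payload: bytes) -> bytes:
    """Integrity check value (ICV) over the ESP header plus the already
    encrypted payload, truncated to 12 bytes as is common for ESP."""
    return hmac.new(key, esp_header + payload, hashlib.sha256).digest()[:12]

def rehash_packets(key: bytes, base_header: bytes, payload: bytes, seq_numbers):
    """Generate follow-on ESP packets from one fully built packet by
    varying only the sequence number and recomputing the ICV, avoiding
    re-encryption of the unchanged payload."""
    packets = []
    for seq in seq_numbers:
        header = base_header + seq.to_bytes(4, "big")
        packets.append(header + payload + esp_icv(key, header, payload))
    return packets
```

Because the encrypted payload is reused, only the cheap hash is recomputed per packet, which is what makes stateless high-rate ESP traffic generation practical.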
20130103941 | METHOD FOR UPDATING DATA IN A SECURITY MODULE - A method for updating operating data in a security module associated to a user unit for processing digital data broadcast in a transport stream, said unit being connected to a conditional access system transmitting, in said transport stream, to the security module a first stream comprising management messages includes: broadcasting a second stream of operating data patch messages, adding to the first stream of management messages, a trigger message to direct the security module to a conditional access system transmitting a second stream transporting suitable operating data patch messages if a current version of the operating data in the security module requires an update, updating the operating data of the concerned security module with the operating data patch messages from the second stream, directing the security module towards the conditional access system transmitting another stream based on an identifier of the conditional access system in the security module. | 2013-04-25 |
20130103942 | SYSTEM AND METHOD FOR PSEUDO-RANDOM POLYMORPHIC TREE CONSTRUCTION - Disclosed herein are systems, methods, and non-transitory computer-readable storage media for obfuscating data via a pseudo-random polymorphic tree. A server, using a seed value shared with a client device, generates a tag stream according to a byte-string algorithm. The server passes the tag stream and the data to be transmitted to the client device through a pseudo-random polymorphic tree serializer to generate a pseudo-random polymorphic tree, which the server transmits to the client device. The client device, using the same seed and byte-string algorithm, generates the same tag stream as on the server. The client passes that tag stream and the received pseudo-random polymorphic tree through a pseudo-random polymorphic tree parser to extract the data. Data to be transmitted from the server to the client device is hidden in a block of seemingly random data, which changes for different seed values. This approach obfuscates data and has low processing overhead. | 2013-04-25 |
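The shared-seed tag stream is the part of this scheme that is easy to sketch (Python; SHA-256 chaining and the 4-byte tag length are illustrative assumptions standing in for the patent's unspecified byte-string algorithm):

```python
import hashlib

def tag_stream(seed: bytes, n_tags: int, tag_len: int = 4):
    """Derive a deterministic stream of byte-string tags from a shared
    seed by hash chaining, so client and server can independently
    regenerate the identical stream from the same seed."""
    state = seed
    tags = []
    for _ in range(n_tags):
        state = hashlib.sha256(state).digest()
        tags.append(state[:tag_len])
    return tags
```

Both sides feeding the same seed through the same algorithm get the same tags, which is what lets the client's parser undo the server's serializer; a different seed yields a different, seemingly random tree layout.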
20130103943 | DISPLAYING PRIVATE INFORMATION USING ALTERNATE FRAME SEQUENCING - Private information can be displayed using alternate frame sequencing to prevent unauthorized viewing. The private information can be ascertained by an authorized user using an active shutter viewing device synchronized to the alternate frame sequencing display. Private information can be displayed on a portion of the display, while public information, including a basic user interface, can be displayed on a second portion visible to authorized and unauthorized users. For enhanced security, alternate frame sequencing synchronization parameters can be encrypted and exchanged between a display device and the viewing device. When and where to display private information using alternate frame sequencing can be determined using environmental sensors. A single display screen can be configured to simultaneously present private information to multiple users, each user permitted to view a portion of the private information according to the unique synchronization parameters employed by a user's viewing device. | 2013-04-25 |
20130103944 | Hypertext Link Verification In Encrypted E-Mail For Mobile Devices - A method, device and computer readable memory are provided for verifying hypertext links in an encrypted e-mail message to be sent to a mobile device, to remove links that may contain malicious programs, link to a phishing website, or potentially compromise the security of the mobile device or expose the user to unsafe sites or content. The hypertext links are extracted by decrypting the encrypted e-mail message. The hypertext links from the decrypted e-mail message are extracted and for each link the status is determined to verify the link. Actions can then be performed based upon the determined status of respective extracted hypertext links. | 2013-04-25 |
20130103945 | ENCRYPTING DATA OBJECTS TO BACK-UP - Provided are a computer program product, system, and method for encrypting data objects to back-up to a server. A client private key is intended to be maintained only by the client. A data object of chunks to store at the server is generated. A first portion of the chunks in the data object is encrypted with the client private key and the first portion of the chunks in the data object encrypted with the client private key are sent to the server to store. A second portion of the chunks in the data object not encrypted with the client private key are sent to the server to store. | 2013-04-25 |
20130103946 | Location-aware Mobile Connectivity and Information Exchange System - A computer platform and method for managing secure data transactions between user accounts on a server, based on the respective locations of mobile user devices related to the user accounts, where the user devices create a secured mobile communication cloud between themselves to ensure secure data communications. | 2013-04-25 |
20130103947 | TEMPORAL PROXIMITY TO VERIFY PHYSICAL PROXIMITY - A security system assesses the response time to requests for information to determine whether the responding system is in physical proximity to the requesting system. Generally, physical proximity corresponds to temporal proximity. If the response time indicates a substantial or abnormal lag between request and response, the system assumes that the lag is caused by the request and response having to travel a substantial or abnormal physical distance, or caused by the request being processed to generate a response, rather than being answered by an existing response in the physical possession of a user. If a substantial or abnormal lag is detected, for example due to the fact that the information was downloaded from the Internet, the system is configured to limit subsequent access to protected material by the current user, and/or to notify security personnel of the abnormal response lag. | 2013-04-25 |
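The core check reduces to timing a round trip against a threshold (Python; `send_request` is a hypothetical callable wrapping the actual challenge/response exchange, and the 50 ms default is an arbitrary illustrative bound):

```python
import time

def within_temporal_proximity(send_request, max_lag_s: float = 0.05) -> bool:
    """Issue the request via the caller-supplied callable and treat any
    round trip longer than max_lag_s seconds as evidence that the
    responder is not physically nearby, or that the response was
    computed or fetched rather than already in the user's possession."""
    start = time.monotonic()
    send_request()
    lag = time.monotonic() - start
    return lag <= max_lag_s
```

A failed check would then trigger the mitigations the abstract describes: limiting the current user's access to protected material and/or notifying security personnel.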
20130103948 | POINT OF SALE (POS) PERSONAL IDENTIFICATION NUMBER (PIN) SECURITY - A key is securely injected into a POS PIN pad processor in its usual operating environment. In response to entry of a personal identification number (PIN) into a PIN pad, the processor puts the PIN into a PIN block; puts additional random data into the PIN block; and encrypts the entire PIN block using asymmetric cryptography with a public key derived from the injected key residing in the PIN pad processor. The corresponding private key may be held securely and secretly by an acquirer processor for decrypting the PIN block to retrieve the PIN. The encrypted random data defends the PIN against dictionary attacks. Time stamp data and constant data encrypted with the PIN block enables a defense of the PIN against replay attacks and tampering. The method may also include accepting the PIN from a mobile phone in communication with the processor. | 2013-04-25 |
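The randomized PIN block is the piece worth illustrating (Python; the `[length][digits][random padding]` layout is a simplified illustration, not an ISO 9564 format, and the subsequent asymmetric encryption with the injected public key is out of scope here):

```python
import secrets

def build_pin_block(pin: str, block_len: int = 16) -> bytes:
    """Pack the PIN together with fresh random filler so that two
    encryptions of the same PIN never produce the same ciphertext,
    which is what defeats dictionary attacks on the encrypted block.
    Layout: [1-byte PIN length][ASCII PIN digits][random padding]."""
    digits = pin.encode("ascii")
    pad = secrets.token_bytes(block_len - 1 - len(digits))
    return bytes([len(digits)]) + digits + pad
```

In the scheme above, this whole block (optionally with a timestamp and constant data for replay/tamper detection) would then be encrypted with the public key held in the PIN pad processor.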
20130103949 | SECURE PASSWORD GENERATION - A secure password generation method and system is provided. The method includes enabling, by a processor of a computing system, password translation software. The computer processor generates and stores a random translation key. A first password is received and a second associated password is generated. The computer processor associates the second password with a secure application. The computer processor stores the random translation key within an external memory device and disables a connection between the computing system and the external memory device. | 2013-04-25 |
20130103950 | SYSTEM AND METHOD FOR SECURELY CREATING MOBILE DEVICE APPLICATION WORKGROUPS - Presented are systems and methods for providing moderator control in a heterogeneous conference including activating a secure workgroup sharing system between an organizing mobile device and one or more invitee mobile devices, such that activating the secure workgroup sharing system generates a secure workgroup invitation. The secure workgroup sharing system sends the secure workgroup invitation and a security key to one or more invitees associated with the one or more invitee mobile devices. The secure workgroup sharing system receives a security key, matching the sent security key, and an acceptance of the secure workgroup invitation from at least one of the one or more invitee mobile devices, and establishes a peer-to-peer workgroup allowing direct secure communications between the organizing mobile device and at least one of the one or more invitee mobile devices. | 2013-04-25 |
20130103951 | SYSTEMS AND METHODS FOR IDENTIFYING AN INDIVIDUAL - The present application relates to systems and methods using biometric data of an individual for identifying the individual and/or verifying the identity of an individual. These systems and methods are useful for, amongst many applications, more secure identification of high-risk individuals attempting to gain access to an entity, transport, information, location, security organization, law enforcement organization, transaction, services, authorized status, and/or funds. | 2013-04-25 |
20130103952 | User Authentication System and Method for Encryption and Decryption - A system configured to authenticate a user for encryption or decryption includes a user authentication apparatus, a computer-readable medium operable to communicate with the user authentication apparatus, and an encryption and decryption computer communicating with the user authentication apparatus. The computer-readable medium may store user identifying information and encryption and decryption data. The encryption and decryption computer may be configured to receive an application programming interface (API) for interfacing with the user authentication apparatus and receive the user identifying information from the computer-readable medium via the API. A user may be authenticated based on the user identifying information and, once the user is authenticated, the encryption and decryption data may be read. | 2013-04-25 |
20130103953 | APPARATUS AND METHOD FOR ENCRYPTING HARD DISK - An apparatus and method for encrypting a hard disk are provided. The apparatus includes a program management unit, an Internet Protocol (IP) management unit, and an encryption processing unit. The program management unit causes an allowed program or process to be executed based on a result of determination as to whether the program or process to be executed in a host terminal is allowed to gain access. The IP management unit causes data to be transmitted to an allowed destination IP address based on a result of determination as to whether the destination IP address to which the host terminal attempts to transmit the data is allowed to be accessed. The encryption processing unit encrypts and decrypts all data exchanged between the host terminal and the hard disk by applying an algorithm selected by a user. | 2013-04-25 |
20130103954 | KEY USAGE POLICIES FOR CRYPTOGRAPHIC KEYS - A computer program product for secure key management is provided. The computer program product includes a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method. The method includes creating a token and populating the token with key material, and binding key control information to the key material. The key control information includes information relating to usage of the key material populating one or more key usage fields that define attributes that limit actions that may be performed with the key material. | 2013-04-25 |
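The key-usage fields in 20130103954 amount to checking a requested action against attributes bound to the key material in the token. A minimal Python sketch with hypothetical names — the application does not specify a data layout:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class KeyToken:
    """A token binding key material to key control information:
    usage fields that limit what may be done with the key."""
    key_material: bytes
    allowed_usages: frozenset  # e.g. {"encrypt", "decrypt", "wrap"}

    def check_usage(self, action):
        """Reject any action not listed in the key usage fields."""
        if action not in self.allowed_usages:
            raise PermissionError(f"key not authorized for '{action}'")

token = KeyToken(key_material=b"\x00" * 32,
                 allowed_usages=frozenset({"encrypt"}))
token.check_usage("encrypt")    # permitted
# token.check_usage("decrypt")  # would raise PermissionError
```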
20130103955 | Controlling Transmission of Unauthorized Unobservable Content in Email Using Policy - A system, method, and apparatus are disclosed to control a mail server in handling encrypted messages according to a policy. | 2013-04-25 |
20130103956 | METHOD FOR CONTROLLING MOBILE TERMINAL DEVICE, MEDIUM FOR STORING CONTROL PROGRAM, AND MOBILE TERMINAL DEVICE - A method for controlling a mobile terminal device that includes a multi-core CPU and a display that displays an execution result of an application program executed by the multi-core CPU includes detecting an application program of which an execution result is displayed, calculating a CPU load per thread in the application program detected in the detecting, and increasing the number of cores operating in the multi-core CPU when the number of threads, each of the threads causing the CPU load to be equal to or higher than a first value, is equal to or higher than a second value. | 2013-04-25 |
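The core-scaling rule in 20130103956 reduces to counting the threads whose CPU load meets the first value and comparing that count to the second value. A minimal Python sketch with illustrative parameter names:

```python
def should_increase_cores(thread_loads, load_threshold, thread_count_threshold):
    """Return True when the number of threads whose CPU load is equal to
    or higher than load_threshold (the 'first value') is equal to or
    higher than thread_count_threshold (the 'second value')."""
    heavy_threads = sum(1 for load in thread_loads if load >= load_threshold)
    return heavy_threads >= thread_count_threshold

# Two threads at or above 40% load, and the policy requires two such
# threads before another core is brought online:
print(should_increase_cores([0.55, 0.42, 0.10], 0.40, 2))  # True
print(should_increase_cores([0.55, 0.10, 0.10], 0.40, 2))  # False
```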
20130103957 | METHOD AND DEVICE FOR ACTIVATION OF COMPONENTS - A method and electronic device for activating components based on predicted device activity. The method and device include maintaining a set of device activity information storing data collected from components in the device. The device activity information may be maintained over a predetermined time period and may include times associated with the collected component data. The device activity information may include data regarding scheduled events. Device activity and the appropriate activation state of a component on the device may be predicted based on the current time, current data collected from components in the device and data in the device activity information. | 2013-04-25 |
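The prediction step in 20130103957 can be pictured as a lookup over past activity at the current time. A toy Python sketch, assuming a simple per-hour activity history for one component — the application does not prescribe a particular model:

```python
def predict_activation(history, current_hour, threshold=0.5):
    """history: list of (hour, was_active) samples for one component.
    Predict the component should be activated if it was active in at
    least `threshold` of past samples taken at the current hour."""
    samples = [active for hour, active in history if hour == current_hour]
    if not samples:
        return False  # no data for this hour: leave the component off
    return sum(samples) / len(samples) >= threshold

log = [(8, True), (8, True), (8, False), (22, False)]
print(predict_activation(log, 8))   # True  (active 2 of 3 mornings)
print(predict_activation(log, 22))  # False
```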
20130103958 | METHOD AND APPARATUS OF POWER OVER ETHERNET - The present disclosure discloses a method and an apparatus for power over Ethernet, belonging to the field of communications. Ethernet power sourcing equipment sets a power supply port to a sleep state and sets a timer for the port; it enables the port when the timer reaches a set time and detects whether a powered device (PD) is connected to the port. If a PD is connected, it triggers the port to supply power to the PD; if no PD is connected, it resets the port to the sleep state and sets the timer again. Implementation of the present disclosure effectively reduces power consumption of the system and saves energy. | 2013-04-25 |
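One pass of the sleep/timer/detect cycle in 20130103958 can be sketched as follows; `detect_pd` and `supply_power` stand in for hypothetical driver callbacks, and the timer is approximated with a plain sleep:

```python
import time

def poe_port_cycle(detect_pd, supply_power, sleep_interval=2.0):
    """One pass of the power-saving cycle: the port is held in the sleep
    state until the timer fires, then power is supplied only if a powered
    device (PD) is detected; otherwise the port goes back to sleep."""
    time.sleep(sleep_interval)  # port sleeps until the timer reaches the set time
    if detect_pd():
        supply_power()
        return "powering"
    return "sleeping"  # caller re-arms the timer and repeats the cycle
```

Power is consumed by detection circuitry only at timer expiry rather than continuously, which is where the energy saving comes from.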
20130103959 | PROCESSING SYSTEM, PROCESSING DEVICE AND POWER SUPPLY CONTROL METHOD - A microcomputer of an ECU which is a master determines whether to turn on or off the power supply of a slave ECU, and outputs a power supply control signal indicating power-on/off via serial communication on the basis of the result of determination. A signal superposition circuit accepts the output power supply control signal and transmits the accepted power supply control signal to a CAN bus to which a CAN transceiver is connected. In the slave ECU, a signal separation circuit individually receives a CAN communication signal and a serial communication signal transmitted to the CAN bus, while an input/output control circuit to which the serial communication signal is input as the power supply control signal outputs a signal to the power supply circuit to control power-on/off of the microcomputer. | 2013-04-25 |
20130103960 | METHOD AND DEVICE WITH INTELLIGENT POWER MANAGEMENT - A wireless communication device | 2013-04-25 |
20130103961 | Providing Wakeup Logic To Awaken An Electronic Device From A Lower Power Mode - An electronic device | 2013-04-25 |
20130103962 | SLEEP STATE SYNCHRONIZATION METHOD BETWEEN DOCKED TERMINALS AND DOCKING SYSTEM USING THE SAME - A sleep state synchronization method between docked terminals, and a docking system using the same, are disclosed. The state synchronization method includes determining whether or not a first terminal is docked on a second terminal, turning off a signal transmitted to a TMDS data line when it is determined that the first terminal is not docked on the second terminal, and entering, by the second terminal, into sleep mode. Accordingly, the docked terminals are able to enter sleep mode together, enabling interlocked operation instead of independent operation, which prevents confusion for users of the docked terminals. In addition, this method makes it unnecessary to configure each docked terminal separately, and sleep mode can be entered easily using the TMDS signal of an HDMI interface without complex algorithms. | 2013-04-25 |
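The synchronization step in 20130103962 can be sketched as a single check: when the first terminal is no longer docked, turn off the TMDS data line and enter sleep mode. A minimal Python sketch with illustrative callback names:

```python
def sync_sleep_state(is_docked, disable_tmds, enter_sleep):
    """When the first terminal is not docked on the second, turn off the
    signal on the TMDS data line and have the second terminal enter sleep
    mode. All three callbacks are hypothetical driver hooks."""
    if not is_docked():
        disable_tmds()
        enter_sleep()
        return "sleep"
    return "awake"
```

The point of keying the decision off the TMDS line of the HDMI interface is that no extra per-terminal configuration or negotiation protocol is needed.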
20130103963 | POWER SUPPLY CIRCUIT EMPLOYED IN COMPUTER FOR PERFORMING DIFFERENT STANDBY MODES - A power supply circuit includes a basic input output system (BIOS), a super input output (SIO), a bivibrator, a logical selector, and a voltage converter. The BIOS is configured for storing different operation modes of a computer. The SIO is configured for generating standby mode signals according to the different operation modes. The bivibrator is configured for generating a reference signal upon receiving a clock signal from the computer when the computer is turned on. The logical selector is configured for generating a standby control signal in response to the reference signal and one of the standby mode signals. The voltage converter is configured for transforming a first standby voltage into a second standby voltage to drive the SIO. The SIO receives the second standby voltage before the clock signal is delayed and provided to the SIO to start the computer. | 2013-04-25 |
20130103964 | Device and method for the reliable detection of wakeup events in the shutdown phase of a control unit - A wakeup logic element provides reliable detection of wakeup events in the shutdown phase of a control unit for a vehicle. The element has a wakeup source input implemented as an edge-sensitive wakeup input, and an on/off state of the vehicle control unit is controllable by the wakeup source input. A wakeup signal that arrives at the wakeup source input during a shutdown procedure of the control unit can be delayed or temporarily suppressed, such that the wakeup signal is applied at the wakeup source input only after the control unit has been shut down. | 2013-04-25 |
20130103965 | ACCESSORY AND HOST BATTERY SYNCHRONIZATION - A portable electronic device that can receive power from an internal power source (such as a battery) and/or a second portable electronic device is described. In order to coordinate the power availability in these portable electronic devices, a power-management mechanism in the portable electronic device (which may be implemented in hardware and/or software) may determine a power state of an internal power source in the second portable electronic device, and may accordingly adjust a power consumption by circuits in the portable electronic device and/or the power received from the second portable electronic device. In this way, the power-management mechanism may approximately synchronize the power consumption in the portable electronic devices and/or the power states of the internal power sources in the portable electronic devices. This approximate synchronization may facilitate concurrent operation of the portable electronic devices. | 2013-04-25 |