40th week of 2008 patent application highlights part 84 |
Patent application number | Title | Published |
20080244154 | Method for Controlling Access to Data of a Tape Data Storage Medium - A method, system, and machine-readable medium for controlling access to data of a tape data storage medium are disclosed. In accordance with one embodiment, a method is provided which comprises conveying data access control metadata from a tape cartridge comprising a tape data storage medium to a host, receiving decrypted metadata from the host, comparing a checksum value determined utilizing the decrypted metadata with checksum data stored within the tape cartridge; and processing a request to access the tape data storage medium received from the host based upon a comparison of the checksum value and checksum data. In the described method embodiment, the data access control metadata comprises encrypted metadata corresponding to a data storage parameter, where data is stored within the tape data storage medium utilizing the data storage parameter and the decrypted metadata is generated by the host utilizing the encrypted metadata. | 2008-10-02 |
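The checksum gate described in 20080244154 can be sketched as follows. This is a minimal illustration, not the patented implementation: the abstract names no checksum algorithm or metadata format, so SHA-256 and the byte-string metadata here are assumptions.

```python
import hashlib

def checksum(data: bytes) -> str:
    # Stand-in checksum; the abstract does not name an algorithm.
    return hashlib.sha256(data).hexdigest()

def grant_access(decrypted_metadata: bytes, stored_checksum: str) -> bool:
    # The host's access request is honored only when the checksum computed
    # from the host-decrypted metadata matches the value stored in the
    # tape cartridge.
    return checksum(decrypted_metadata) == stored_checksum
```

If the host cannot decrypt the metadata correctly (for example, lacking the key), the checksums diverge and access is refused.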
20080244155 | METHODS AND APPARATUS TO PROTECT DYNAMIC MEMORY REGIONS ALLOCATED TO PROGRAMMING AGENTS - Methods and apparatus to protect dynamic memory regions allocated to programming agents are disclosed. An example method to protect a dynamic memory region disclosed herein comprises mapping protected memory regions to a protected page table for address translation associated with a protected agent, updating the protected page table with address information corresponding to the dynamic memory region during a context switch from execution of an unprotected agent to execution of the protected agent when the dynamic memory region was allocated for the unprotected agent prior to the context switch, and accessing the dynamic memory region during execution of the protected agent based on the address information in the protected page table without causing a subsequent context switch. | 2008-10-02 |
20080244156 | APPLICATION PROCESSORS AND MEMORY ARCHITECTURE FOR WIRELESS APPLICATIONS - In one embodiment, the invention provides a method for accessing memory. The method comprises sending memory transactions to a memory sub-system for a first processor to an intermediate second processor interposed on a communication path between the first processor and the memory sub-system; and controlling when the memory transactions are allowed to pass through the second processor to reach the memory sub-system. | 2008-10-02 |
20080244157 | Semiconductor memory device - A semiconductor memory device includes: a memory core region; a data transfer unit configured to transfer external data to the memory core region; a data code storage unit configured to store test data; and a data selection unit configured to select one of the test data from the data code storage unit and the data from the data transfer unit and output the selected data to the memory core region. | 2008-10-02 |
20080244158 | Drawing Apparatus - A drawing apparatus which can create an exposure pattern rapidly. The drawing apparatus has a raster conversion processing module for converting vector images as wiring patterns into bitmap image data, an image cache module for temporarily storing a predetermined-size cached image supplied from the raster conversion processing module, a first compression module for compressing the cached image stored in the image cache module, a second compression module for compressing the cached image stored in the image cache module in a compression ratio differing from that of the first compression module, a comparison module for comparing data sizes of compressed data generated by the first and second compression modules and selecting one having a smaller data size, a memory access module for writing the compressed data selected by the comparison module, into a storage module, and a cache region control module for controlling a compression status of the cached image. | 2008-10-02 |
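The two-codec comparison in 20080244158 (compress the same cached image two ways, keep the smaller output) can be sketched in a few lines. zlib and LZMA are stand-ins here; the abstract specifies only that the two modules use differing compression ratios, not which codecs.

```python
import zlib
import lzma

def compress_best(cached_image: bytes) -> tuple[str, bytes]:
    # Compress with both modules, then let the "comparison module"
    # select whichever result has the smaller data size.
    a = zlib.compress(cached_image)
    b = lzma.compress(cached_image)
    return ("zlib", a) if len(a) <= len(b) else ("lzma", b)
```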
20080244159 | DATA TRANSFER CONTROL APPARATUS AND DATA TRANSFER CONTROL METHOD - A data transfer control apparatus includes a memory, a write control part controlling data writing to the memory, a read control part controlling data reading from the memory, a read-start calculation part calculating an output timing of a notification which indicates a read-start operation to the read control part based on each transfer condition of the data writing to the memory and the data reading from the memory, and an asynchronous transfer part asynchronously transferring a clock of the notification, and notifying the read control part of the notification. | 2008-10-02 |
20080244160 | Storage Method for a Gaming Machine - In a first aspect the invention provides a storage method for a gaming machine, including allocating program code to one of at least two program categories including a first category of program code that is expected to be modified more frequently than a second category of program code and storing program code from the first and second categories in logically separate storage areas. | 2008-10-02 |
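The category split of 20080244161's neighbor above (20080244160) amounts to routing code into one of two logically separate storage areas by expected modification frequency. A toy sketch, with the area names and dict-backed "storage" entirely hypothetical:

```python
# Two logically separate areas: one for code expected to change often,
# one for code expected to stay stable.
MUTABLE, STABLE = "mutable_area", "stable_area"

storage = {MUTABLE: {}, STABLE: {}}

def store_code(name: str, code: bytes, frequently_modified: bool) -> str:
    # Allocate the program code to a category, then store it in that
    # category's own area.
    area = MUTABLE if frequently_modified else STABLE
    storage[area][name] = code
    return area
```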
20080244161 | Memory management apparatus and method for same - A memory management apparatus uses a link list memory to manage a use area and a vacant area of a data memory. The use of the use area of the data memory by each of plural ports is restricted, and the use area of the data memory is configured to be always provided for each of the plural ports. In this manner, monopolized use of the data memory by a specific port is prevented and each of the plural ports is securely provided with the use area of the data memory under control of the memory management apparatus. | 2008-10-02 |
20080244162 | METHOD FOR READING NON-VOLATILE STORAGE USING PRE-CONDITIONING WAVEFORMS AND MODIFIED RELIABILITY METRICS - Data stored in non-volatile storage is read using sense operations and associated pre-conditioning waveforms. The pre-conditioning waveform provides a short term history for a non-volatile element which is analogous to the conditions experienced during programming when a programming pulse is applied prior to a verify operation. The pre-conditioning waveform can cause electrons to enter and exit trap sites, for instance, so that the accuracy of a probabilistic decoding process is improved. In one approach, multiple read operations are performed, some with pre-conditioning waveforms and some without. Pre-conditioning waveforms with different characteristics, such as amplitude, shape, duration and time before the associated read pulse, can also be used. For probabilistic decoding, initial reliability metrics can be developed based on multiple reads. Tables which store the reliability metrics can then be prepared for use in subsequent decoding. | 2008-10-02 |
20080244163 | PORTABLE DATA ACCESS DEVICE - A portable data access device is applicable to a data processing system. The portable data access device includes at least a first data access sector preset to be a read-only data access sector, for storing at least data and/or application programs executable by the data processing system; at least a second data access sector set to be a general data access sector; and a controller for interfacing with the data processing system and controlling data access to the first data access sector and the second data access sector. The data processing system may execute the application programs and/or access the data through the portable data access device, and the risk of modifying or damaging the data and/or application programs can be reduced by the read-only data access sector. | 2008-10-02 |
20080244164 | STORAGE DEVICE EQUIPPED WITH NAND FLASH MEMORY AND METHOD FOR STORING INFORMATION THEREOF - A storage device equipped with NAND flash memory and method for storing information thereof includes an SLC processing structure to provide fast information access and improve processing performance and an MLC processing structure to increase data density of each storage unit and reduce the cost and size of each unit of information. The data storing method includes storing important information such as operating system programs, application programs and information that have been accessed frequently in the SLC processing structure, and storing ordinary information in the MLC processing structure to reduce the cost and size of each unit of information. | 2008-10-02 |
20080244165 | Integrated Memory Management Device and Memory Device - An example of a device comprises a first MMU converting a logical address into a physical address for a cache, a controller accessing the cache based on the physical address for the cache, a first storage storing history data showing an access state to a main memory outside a processor, a second storage storing relation data showing a relationship between a logical address and a physical address in the main memory, and a second MMU converting a logical address into a physical address for the main memory based on the history and relation data and accessing the main memory based on the physical address for the main memory. The first and second MMU, controller, first storage, second storage are included in the processor. | 2008-10-02 |
20080244166 | System and method for configuration and management of flash memory - A system and a method for configuration and management of flash memory is provided, including a flash memory, a virtual memory region, and a memory logical block region. The flash memory includes a plurality of physical erase units. Each physical erase unit is configured to include at least a consecutive segment, and each segment is configured to include at least a consecutive frame. Each frame is configured to include at least a consecutive page. Each virtual memory region is configured to include a plurality of areas, and each area is configured to include at least a virtual erase unit. The memory logical block region is configured to include a plurality of clusters, and each cluster includes at least a consecutive memory logical block. By forming correspondence among the physical erase unit, segment, frame, page, virtual erase unit, area, memory logical block and cluster to control the data access to the flash memory, the present invention achieves the reconfiguration and management of memory consumption and access efficiency for the flash memory. | 2008-10-02 |
20080244167 | Electronic device and method for installing software - A peripheral for a computer and a method of using the peripheral for installing software onto the computer using Direct Memory Access. The peripheral comprises a computer accessible medium and a program product. The program product has codes to read and write to the Random Access Memory of the computer, and to bypass restrictions of the host computer Operating System that prevent the peripheral from gaining full access to all portions of the host computer's Random Access Memory. The preferred methods of using the peripheral automatically install software on a computer or copy forensic data from the computer's Random Access Memory once the peripheral is connected to the computer. | 2008-10-02 |
20080244168 | METHOD AND APPARATUS FOR A PRIMARY OPERATING SYSTEM AND AN APPLIANCE OPERATING SYSTEM - One embodiment includes a personal computer device comprising at least one machine to execute a primary user operating system, a first physical memory to be used by the primary user operating system, at least one appliance operating system that is independent from the primary user operating system, a second physical memory to be sequestered from the primary user operating system and an access violation monitor to restrict access from the at least one appliance operating system to the second physical memory, wherein the access violation monitor is to run only when the at least one appliance operating system is invoked and at least one appliance operating system is to be invoked only after the primary user operating system has been suspended to a standby state. | 2008-10-02 |
20080244169 | Apparatus for Efficient Streaming Data Access on Reconfigurable Hardware and Method for Automatic Generation Thereof - A content addressable memory (CAM) is disclosed that includes a memory having a first port configured to write a 1-bit data to the memory and a second port configured to read and write N-bit data. To update the CAM, an N-bit zero data word is written to the second port at a first address A | 2008-10-02 |
20080244170 | Intelligent allocation of programmable comparison operations for reducing the number of associative memory entries required - Intelligent allocation of programmable comparison operations may reduce the number of associative memory entries required for programming an associative memory (e.g., ternary content-addressable memory) with multiple matching definitions (e.g., access control list entries, routing information, etc.), which may be particularly useful in identifying packet processing operations to be performed on packets in a packet switching device. The higher-cost comparison operations, in terms of the number of associative memory entries required to natively support such operations, are allocated to one or more comparison evaluators (e.g., programmable logic and/or processing elements configured to evaluate one or more comparison operations) configured to evaluate an input value with one or more of the programmable comparison operations in order to generate and provide one or more values representing results of the evaluations to one or more associative memories for use in identifying the packet processing operations. | 2008-10-02 |
20080244171 | APPARATUS, SYSTEM, AND METHOD FOR UTILIZING TAPE MEDIA SEGMENTATION - An apparatus and system are presented for utilizing tape storage media segmentation to improve data access performance. A segmented tape storage medium within a tape cartridge having a first and second segment is utilized. A selection module allows a user to select a user-defined capacity of the tape storage medium that is less than the usable capacity of the tape storage medium. The user-defined capacity allows the user to prefer improved data access over tape storage capacity. Data, when written to the tape, is written only within the user-defined capacity. Data may be written exclusively on the first segment or written on both the first segment and second segment allowing data access to be improved. In addition, the user-defined capacity may correspond to the full capacity of the tape storage media. | 2008-10-02 |
20080244172 | Method and apparatus for de-duplication after mirror operation - An amount of storage capacity used during mirroring operations is reduced by applying de-duplication operations to the mirror volumes. Data stored to a first volume is mirrored to a second volume. The second volume is a virtual volume having a plurality of logical addresses, such that segments of physical storage capacity are allocated for a specified logical address as needed when data is stored to the specified logical address. A de-duplication operation is carried out on the second volume following a split from the first volume. A particular segment of the second volume is identified as having data that is the same as another segment in the second volume or in the same consistency group. A link is created from the particular segment to the other segment and the particular segment is released from the second volume so that physical storage capacity required for the second volume is reduced. | 2008-10-02 |
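The post-split de-duplication in 20080244172 (find a segment whose content matches another segment, link it, release its physical space) can be sketched with content hashing. Integer segment ids, SHA-256, and the link table are simplifications; the abstract does not specify how duplicates are detected.

```python
import hashlib

def deduplicate(segments: dict[int, bytes]) -> tuple[dict[int, bytes], dict[int, int]]:
    seen: dict[str, int] = {}   # content digest -> first segment holding it
    kept: dict[int, bytes] = {}
    links: dict[int, int] = {}  # released segment -> segment it now links to
    for seg_id in sorted(segments):
        digest = hashlib.sha256(segments[seg_id]).hexdigest()
        if digest in seen:
            # Same data already present: create a link and release this
            # segment's physical storage.
            links[seg_id] = seen[digest]
        else:
            seen[digest] = seg_id
            kept[seg_id] = segments[seg_id]
    return kept, links
```

After the pass, physical capacity is needed only for `kept`; every entry in `links` resolves reads to the surviving copy.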
20080244173 | STORAGE DEVICE USING NONVOLATILE CACHE MEMORY AND CONTROL METHOD THEREOF - According to one embodiment, the present invention provides a storage device that sophisticatedly utilizes the characteristics of a nonvolatile cache memory and a hard disk, and compensates for defects on the hard disk drive side to improve the reliability of the device. The storage device includes a host interface, a command analyzing section, a memory that stores request information which permits or forcibly forbids accessing the hard disk, a device state determining section that determines the request information of the memory, and a media access determining section that, when the determination result of the device state determining section indicates “forbiddance”, forbids accessing the hard disk, and, when the determination result of the device state determining section indicates “permission”, permits the accessing based on the analysis result of the command analyzing section and unique determination result. | 2008-10-02 |
20080244174 | Replication in storage systems - Embodiments include methods, apparatus, and systems for replication in storage systems. One embodiment includes a method that uses a target port on a storage array to function as an initiator port on a host in a storage area network (SAN). The target port discovers storage arrays in the SAN and mimics an initiator port to transmit input/output (I/O) requests. | 2008-10-02 |
20080244175 | MEMORY SYSTEM AND COMPUTER SYSTEM - When a memory card is inserted into a computer, a memory controller sends command information stored in a memory array to the computer. Then, the computer stores the command information received from the memory card into a RAM. The computer generates a command as needed on the basis of the stored command information and sends the generated command to the memory card. When the memory card receives the command from the computer, the memory controller analyzes the received command and performs it while making reference to command analysis information. This makes it possible to reduce a load accompanying the change and addition of commands in a semiconductor memory. | 2008-10-02 |
20080244176 | INFORMATION PROCESSING DEVICE AND DISK ARRAY CONSTRUCTION METHOD - According to one embodiment, an information processing device, includes a connecting port configured to be connected by a second storage device, having a plurality of areas to duplicate data to be stored by an internal storage device having a first capacity and to be disposed in a body, and of which the capacity of each area is larger than the first capacity, an inquiring unit configured to inquire to a user which of areas in the second storage device is selected for constructing a disk array, after the second storage device is connected to the connecting port, a rebuilding unit configured to rebuild the data in the internal storage device into the area selected by the user in response to the inquiry from the inquiring unit, and a disk array construction unit configured to construct the disk array by an area corresponding to the area selected by the user. | 2008-10-02 |
20080244177 | MODULAR SYSTEMS AND METHODS FOR MANAGING DATA STORAGE OPERATIONS - The invention is a modular backup and retrieval system. The software modules making up the backup and retrieval system run independently, and can run either on the same computing devices or on different computing devices. The modular software system coordinates and performs backups of various computing devices communicating to the modules. At least one module on one of the computing devices acts as a system manager for a network backup regimen. A management component acts as a manager for the archival and restoration of the computing devices on the network. It manages and allocates library media usage, maintains backup scheduling and levels, and supervises or maintains the archives themselves through pruning or aging policies. A second software module acts as a manager for each particular library media. | 2008-10-02 |
20080244178 | REBALANCING OF STRIPED DISK DATA - Provided are a method, system, and article of manufacture, where a plurality of extents are stored in a first set of storage units coupled to a controller. A determination is made that a second set of storage units has been coupled to the controller. The plurality of extents are distributed among all storage units included in the first set of storage units and the second set of storage units. | 2008-10-02 |
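The redistribution step of 20080244178 can be sketched as striping extents across the combined set of old and new storage units. The round-robin layout below is a naive assumption: a real controller would minimize data movement rather than recompute the whole layout.

```python
def redistribute(extents: list[str], old_units: int, new_units: int) -> dict[int, list[str]]:
    # Distribute every extent across all units in the first AND the
    # newly attached second set.
    total = old_units + new_units
    layout: dict[int, list[str]] = {u: [] for u in range(total)}
    for i, extent in enumerate(extents):
        layout[i % total].append(extent)
    return layout
```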
20080244179 | Memory device with a built-in memory array and a connector for a removable memory device - A memory device is provided comprising a built-in memory array, a first connector configured to connect to a removable memory device comprising a lower-endurance memory array than the built-in memory array, a second connector configured to connect to a host device, and circuitry operative to control read/write operations to the built-in memory array. In another embodiment, a memory device is provided comprising circuitry operative to determine if a removable memory device connected to a first connector of the memory device comprises a memory controller. In yet another embodiment, a memory device is provided comprising a built-in memory array, a connector configured to connect to a removable memory device comprising a memory array without a memory controller, and circuitry operative to control read/write operations to the built-in memory array and the removable memory device's memory array. | 2008-10-02 |
20080244180 | Navigation apparatus and method - A navigation apparatus and method increase the upper limit for the number of times map data may be written into a portable storage medium. A CPU reads an initial radius and an additional radius increment from a ROM and repeatedly adds the additional radius increment to the initial radius to obtain, with each addition, a new calculated radius centered on coordinates of a central geographic point of a map data extracting-region. The map data for each secondary grid unit within the map data extracting-region is sequentially read out from a CD-ROM. Then, when the map data within the incrementally enlarged map data extracting-region exceeds the maximum storage capacity of the SD memory card, the CPU deducts the last added increment of radius from the calculated radius to obtain a map region to be stored. The CPU sequentially reads out from the CD-ROM the map data for each grid unit within the map region to be stored and sequentially writes the grid units of map data into the SD memory card. | 2008-10-02 |
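The radius-growing loop of 20080244180 (add increments until the extracted map data would exceed the card's capacity, then deduct the last increment) reduces to a small function. `size_for_radius` is a hypothetical callback standing in for reading grid units off the CD-ROM and measuring them.

```python
def fit_map_radius(initial: float, increment: float, capacity: int,
                   size_for_radius) -> float:
    # Repeatedly enlarge the extracting-region radius; stop just before
    # the map data would exceed the storage medium's capacity. This is
    # equivalent to overshooting once and deducting the last increment.
    radius = initial
    while size_for_radius(radius + increment) <= capacity:
        radius += increment
    return radius
```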
20080244181 | Dynamic run-time cache size management - Methods and apparatus relating to dynamic management of cache sizes during run-time are described. In one embodiment, the size of an active portion of a cache may be adjusted (e.g., increased or decreased) based on a cache busyness metric. Other embodiments are also disclosed. | 2008-10-02 |
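A run-time resize driven by a busyness metric, as in 20080244181, might look like the sketch below. The thresholds, doubling/halving step, and size bounds are all illustrative assumptions; the abstract specifies only that the active portion grows or shrinks based on the metric.

```python
def adjust_cache_size(current: int, busyness: float,
                      min_size: int = 64, max_size: int = 1024) -> int:
    # Grow the active portion when the cache is busy, shrink it when
    # mostly idle, and leave it alone in between.
    if busyness > 0.75:
        return min(current * 2, max_size)
    if busyness < 0.25:
        return max(current // 2, min_size)
    return current
```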
20080244182 | Memory content inverting to minimize NTBI effects - In general, in one aspect, the disclosure describes an apparatus that includes a memory device having a plurality of memory cells. An inverter is used to invert data and tag information destined for the memory device. A register is used to capture the inverted data and tag information. A write inverted value logic is used to determine when to enable writing the inverted data and tag information from the register to the memory device. When inverted data and tag information is written to a memory cell, the memory cell is invalidated. | 2008-10-02 |
20080244183 | Storage system - An object of the present invention is to provide a storage system which is shared by a plurality of application programs, wherein optimum performance tuning for a cache memory can be performed for each of the individual application programs. The storage system of the present invention comprises a storage device which provides a plurality of logical volumes which can be accessed from a plurality of application programs, a controller for controlling input and output of data to and from the logical volumes in response to input/output requests from the plurality of application programs, and a cache memory for temporarily storing data input to and output from the logical volume, wherein the cache memory is logically divided into a plurality of partitions which are exclusively assigned to the plurality of logical volumes respectively. | 2008-10-02 |
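The exclusive partitioning of 20080244183 can be sketched as one bounded cache per logical volume, so one application's working set can never evict another's. Per-partition LRU eviction is an assumption; the abstract only says the cache is logically divided into exclusively assigned partitions.

```python
from collections import OrderedDict

class PartitionedCache:
    def __init__(self, partition_sizes: dict[str, int]):
        # One partition per logical volume, each with its own capacity.
        self.parts = {vol: OrderedDict() for vol in partition_sizes}
        self.sizes = partition_sizes

    def put(self, volume: str, key: str, block: bytes) -> None:
        part = self.parts[volume]
        part[key] = block
        part.move_to_end(key)
        if len(part) > self.sizes[volume]:
            # Evict only within this partition; other volumes are untouched.
            part.popitem(last=False)

    def get(self, volume: str, key: str):
        return self.parts[volume].get(key)
```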
20080244184 | In-memory caching of shared customizable multi-tenant data - In a multi-tenant data sharing environment with shared, customizable data, attributes are assigned to requested data and stored in a cache store along with the requested data. For non-customized data designated as system data, one copy is stored in the cache store for use by multiple tenants allowing optimization of memory and performance for each data request/retrieval operation. A “delete sentinel” attribute may be assigned to non-existing data in the cache store enabling notification of requesting tenant(s) without a need to access the tenant data store each time a request for the non-existing data is received. | 2008-10-02 |
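The "delete sentinel" idea from 20080244184 is negative caching: record that a key does not exist so repeated requests skip the tenant data store. A minimal sketch, with the class and counter names invented for illustration:

```python
_DELETE_SENTINEL = object()   # marks "known not to exist" in the cache

class TenantCache:
    def __init__(self, backing_store: dict):
        self.store = backing_store
        self.cache: dict[str, object] = {}
        self.store_hits = 0      # how often the backing store was touched

    def get(self, key: str):
        if key in self.cache:
            value = self.cache[key]
            # A sentinel hit answers "not found" without a store access.
            return None if value is _DELETE_SENTINEL else value
        self.store_hits += 1
        value = self.store.get(key)
        self.cache[key] = _DELETE_SENTINEL if value is None else value
        return value
```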
20080244185 | Reduction of cache flush time using a dirty line limiter - The invention relates to a method for reducing cache flush time of a cache in a computer system. The method includes populating at least one of a plurality of directory entries of a dirty line directory based on modification of the cache to form at least one populated directory entry, and de-populating a pre-determined number of the plurality of directory entries according to a dirty line limiter protocol causing a write-back from the cache to a main memory, where the dirty line limiter protocol is based on a number of the at least one populated directory entry exceeding a pre-defined limit. | 2008-10-02 |
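The dirty line limiter of 20080244185 bounds flush time by capping how many dirty lines can accumulate: once the directory exceeds the limit, entries are de-populated by writing them back. In the sketch below, writing back the oldest entry first is an assumption; the abstract leaves the de-population order to the protocol.

```python
class DirtyLineLimiter:
    def __init__(self, limit: int):
        self.limit = limit
        self.dirty: list[int] = []        # directory of dirty line addresses
        self.written_back: list[int] = [] # lines flushed to main memory

    def mark_dirty(self, line: int) -> None:
        # Populate a directory entry when the cache line is modified.
        if line not in self.dirty:
            self.dirty.append(line)
        # De-populate entries (forcing write-back) whenever the count of
        # populated entries exceeds the pre-defined limit.
        while len(self.dirty) > self.limit:
            self.written_back.append(self.dirty.pop(0))
```

Because at most `limit` lines are ever dirty, a full cache flush touches a bounded number of lines.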
20080244186 | WRITE FILTER CACHE METHOD AND APPARATUS FOR PROTECTING THE MICROPROCESSOR CORE FROM SOFT ERRORS - A write filter cache system for protecting a microprocessor core from soft errors and method thereof are provided. In one aspect, data coming from a processor core to be written in primary cache memory, for instance, L1 cache memory system, is buffered in a write filter cache placed between the primary cache memory and the processor core. The data from the write filter is moved to the main cache memory only if it is verified that the main thread's data is soft error free, for instance, by comparing the main thread's data with that of its redundant thread. The main cache memory only keeps clean data associated with accepted checkpoints. | 2008-10-02 |
20080244187 | PIPELINING D STATES FOR MRU STEERAGE DURING MRU-LRU MEMBER ALLOCATION - A method and apparatus for preventing selection of Deleted (D) members as an LRU victim during LRU victim selection. During each cache access targeting the particular congruence class, the deleted cache line is identified from information in the cache directory. A location of a deleted cache line is pipelined through the cache architecture during LRU victim selection. The information is latched and then passed to MRU vector generation logic. An MRU vector is generated and passed to the MRU update logic, which selects/tags the deleted member as an MRU member. The make MRU operation affects only the lower-level LRU state bits arranged in a tree-based structure, so that the make MRU operation only negates selection of the specific member in the D state, without affecting LRU victim selection of the other members. | 2008-10-02 |
20080244188 | INFORMATION RECORDING APPARATUS AND CONTROL METHOD THEREOF - According to one embodiment, an information recording apparatus has a control unit configured to control mutual transfer of information between each of a disc-shaped recording medium, a cache memory, and a non-volatile memory and the outside, control mutual transfer of information between the disc-shaped recording medium, the cache memory, and the non-volatile memory, and control to set a substituting region corresponding to a defect region generated in the disc-shaped recording medium in the non-volatile memory. | 2008-10-02 |
20080244189 | Method, Apparatus, System and Program Product Supporting Directory-Assisted Speculative Snoop Probe With Concurrent Memory Access - A multiprocessor data processing system includes a memory controller controlling access to a memory subsystem, multiple processor buses coupled to the memory controller, and at least one of multiple processors coupled to each processor bus. In response to receiving a first read request of a first processor via a first processor bus, the memory controller initiates a speculative access to the memory subsystem and a lookup of the target address in a central coherence directory. In response to the central coherence directory indicating that a copy of the target memory block is cached by a second processor, the memory controller transmits a second read request for the target address on a second processor bus. In response to receiving a clean snoop response to the second read request, the memory controller provides to the first processor the target memory block retrieved from the memory subsystem by the speculative access. | 2008-10-02 |
20080244190 | Method, Apparatus, System and Program Product Supporting Efficient Eviction of an Entry From a Central Coherence Directory - In response to a memory access request missing in a central coherence directory of a data processing system, the central coherence directory issues a back-invalidate request and provides an indication of one or more processors possibly caching a copy of a victim memory block associated with a victim memory address. In response to the back-invalidate request, a memory controller initiates a lookup of coherency information for the victim memory address in the central coherence directory and, prior to receipt of the coherency information, speculatively issues a set of back-invalidate commands on one or more of multiple processor buses to invalidate any cached copy of the victim memory block. In response to receipt of the coherency information, the memory controller determines whether the set of speculatively issued back-invalidate commands was under-inclusive, and if not, removes a victim entry associated with the victim memory address from the central coherence directory. | 2008-10-02 |
20080244191 | Processor system management mode caching - In some embodiments, an apparatus comprises one or more processors supporting a system management mode, system management memory, and software controllable caching of memory, one or more memory modules, a memory controller, and a communication bus to couple the one or more memory modules to the memory controller. Other embodiments may be described. | 2008-10-02 |
20080244192 | MULTIPROCESSOR SYSTEM - A multiprocessor system includes cache memories each of which is provided in correspondence with one of processor cores and includes a tag storage unit configured to store validity information representing whether a cache line as a unit to store data is valid, update information representing whether data in the cache line has been rewritten, and address information of the data in the cache line, a shared memory shared by the processor cores, and an arbitration circuit configured to arbitrate access requests from the processor cores to the shared memory and send the arbitrated access request to the cache memories. Each cache memory includes a violation detection circuit configured to detect a violation access by comparing the information in the tag storage unit with the access request from the arbitration circuit. | 2008-10-02 |
20080244193 | ADAPTIVE RANGE SNOOP FILTERING METHODS AND APPARATUSES - Snoop filtering methods and apparatuses for systems utilizing memory are contemplated. Method embodiments comprise receiving a request for contents of a memory line by a home agent, comparing an address of the memory line to a range in a set of adaptive ranges, and snooping an I/O agent for the contents upon a match of the address with the range. Apparatus embodiments comprise a range table, a table updater, a receiver module, and a range comparator. The range tables allow for the tracking of memory addresses as I/O agents assert ownership of the addresses. | 2008-10-02 |
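The range-table check at the heart of 20080244193 is a simple membership test: the home agent snoops the I/O agent only when the requested line's address falls inside a tracked ownership range. A sketch, with range maintenance (the table updater reacting to ownership changes) omitted:

```python
def should_snoop(address: int, ranges: list[tuple[int, int]]) -> bool:
    # Compare the memory line's address against each adaptive range;
    # snoop the I/O agent only on a match.
    return any(lo <= address <= hi for lo, hi in ranges)
```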
20080244194 | METHOD AND APPARATUS FOR FILTERING SNOOP REQUESTS USING STREAM REGISTERS - A method and apparatus for supporting cache coherency in a multiprocessor computing environment having multiple processing units, each processing unit having a local cache memory associated therewith. A snoop filter device is associated with each processing unit and includes at least one snoop filter primitive implementing a filtering method based on usage of stream register sets and associated stream register comparison logic. From the plurality of stream register sets, at least one stream register set is active, and at least one stream register set is labeled historic at any point in time. In addition, the snoop filter block is operatively coupled with cache wrap detection logic whereby the content of the active stream register set is switched into a historic stream register set upon detection of the cache wrap condition, and the content of at least one active stream register set is reset. Each filter primitive implements stream register comparison logic that determines whether a received snoop request is to be forwarded to the processor or discarded. | 2008-10-02 |
20080244195 | METHODS AND APPARATUSES TO SUPPORT MEMORY TRANSACTIONS USING PARTIAL PHYSICAL ADDRESSES - Methods and apparatuses to support memory transactions using partial physical addresses are disclosed. Method embodiments generally comprise home agents monitoring multiple responses to multiple memory requests, wherein at least one of the responses has a partial address for a memory line, resolving conflicts for the memory requests, and suspending conflict resolution for the memory requests which match partial address responses until determining the full address. Apparatus embodiments generally comprise a home agent having a response monitor and a conflict resolver. The response monitor may observe a snoop response of a memory agent, wherein the snoop response only has a partial address and is for a memory line of a memory agent. The conflict resolver may suspend conflict resolution for memory transactions which match the partial address of the memory line until the conflict resolver receives a full address for the memory line. | 2008-10-02 |
20080244196 | Method and apparatus for a unified storage system - A unified storage system for executing a variety of types of storage control software using a single standardized hardware platform includes multiple storage control modules connected to storage devices for storing data related to input/output (I/O) operations. A first type of storage control software is initially installed and executed on a first storage control module for processing a first type of I/O operations. A management module replaces the first type of storage control software by installing a second type of storage control software onto the first storage control module. When the second type of storage control software is installed and executed, the first storage control module processes a second type of I/O operation, different from the first type of I/O operation. Control of volumes originally accessed by the first storage control module may be transferred to a second storage control module having the first type of storage control software installed. | 2008-10-02 |
20080244197 | EXTERNAL MEMORY CONTROLLER NODE - A memory controller to provide memory access services in an adaptive computing engine is provided. The controller comprises: a network interface configured to receive a memory request from a programmable network; and a memory interface configured to access a memory to fulfill the memory request from the programmable network, wherein the memory interface receives and provides data for the memory request to the network interface, the network interface configured to send data to and receive data from the programmable network. | 2008-10-02 |
20080244198 | MICROPROCESSOR DESIGNING PROGRAM, MICROPROCESSOR DESIGNING APPARATUS, AND MICROPROCESSOR - Provided are a microprocessor which can operate with low power consumption, a microprocessor designing program which can design it in a short period of time, and a microprocessor designing apparatus. The microprocessor designing program comprises an execution program storing step of storing each of the execution programs in said specified address areas, in correspondence with the specification of said address areas made for each of the execution programs, an access number totalizing step of counting a total number of accesses given by the computation processing unit for each of the address areas, and an execution program name outputting step of outputting an execution program name in an order based on said total number. The microprocessor designing apparatus packages the microprocessor designing program, and comprises an execution program name displaying component displaying, in characters, the name of an execution program outputted by the execution program name outputting step. | 2008-10-02 |
20080244199 | COMPUTER SYSTEM PREVENTING STORAGE OF DUPLICATE FILES - A plurality of contents intrinsic values that are values intrinsic to respective contents of a plurality of files stored in one or more first storage devices are calculated. Whether two or more identical contents intrinsic values are contained among the plurality of contents intrinsic values is determined. When two or more identical contents intrinsic values are present, an access destination of a first file corresponding to a first contents intrinsic value from among these two or more contents intrinsic values is changed to a position having stored therein a second file corresponding to a second contents intrinsic value from among these two or more contents intrinsic values. | 2008-10-02 |
20080244200 | System for Communicating Command Parameters Between a Processor and a Memory Flow Controller - A system and method for communicating command parameters between a processor and a memory flow controller are provided. The system and method make use of a channel interface as the primary mechanism for communicating between the processor and a memory flow controller. The channel interface provides channels for communicating with processor facilities, memory flow control facilities, machine state registers, and external processor interrupt facilities, for example. These channels may be designated as blocking or non-blocking. With blocking channels, when no data is available to be read from the corresponding registers, or there is no space available to write to the corresponding registers, the processor is placed in a low power “stall” state. The processor is automatically awakened, via communication across the blocking channel, when data becomes available or space is freed. Thus, the channels of the present invention permit the processor to stay in a low power state. | 2008-10-02 |
20080244201 | METHOD FOR DIGITAL STORAGE OF DATA ON A DATA MEMORY WITH LIMITED AVAILABLE STORAGE SPACE - Upon a transfer, the most important data in a first memory of a data processing system are stored in a second data memory of limited capacity. The demarcation between important (and still storable) data on the one hand and less important (and therefore no longer storable) data on the other is made dependent on the available storage volume (SV) of the target data memory. This ensures that an optimal amount of the most important data can be stored on the target data memory. | 2008-10-02 |
20080244202 | Method combining lower-endurance/performance and higher-endurance/performance information storage to support data processing - An information storage arrangement that combines higher-endurance (or performance) storage with lower-endurance (or performance) storage is managed in a manner that makes judicious use of the lower-endurance (or performance) storage. It is therefore possible to exploit the economic advantage associated with lower-endurance (or performance) storage, while also avoiding storage capacity losses that would otherwise be associated with lower-endurance (or performance) storage. | 2008-10-02 |
20080244203 | Apparatus combining lower-endurance/performance and higher-endurance/performance information storage to support data processing - An information storage arrangement that combines higher-endurance (or performance) storage with lower-endurance (or performance) storage is managed in a manner that makes judicious use of the lower-endurance (or performance) storage. It is therefore possible to exploit the economic advantage associated with lower-endurance (or performance) storage, while also avoiding storage capacity losses that would otherwise be associated with lower-endurance (or performance) storage. | 2008-10-02 |
20080244204 | REPLICATION AND RESTORATION OF SINGLE-INSTANCE STORAGE POOLS - A system and method for managing single instance storage. A computer system includes at least two backup servers, each backup server included in a single-instance storage pool. A first backup server conveys a first de-duplicated list identifying data segments from the first storage pool to a second backup server. The first backup server receives from the second backup server a second de-duplicated list identifying a subset of the data segments and conveys the subset of the data segments to the second backup server. In response to receiving the first list from the first backup server, the second backup server de-duplicates the first list against a second storage pool and conveys the second list to the first backup server. In response to receiving the subset of the data segments, the second backup server adds the received data segments to the second storage pool. | 2008-10-02 |
20080244205 | Storage system and storage control method - The correspondence between a plurality of virtual storage positions in a virtual volume for logically holding a snapshot image of a main volume in which data elements transmitted from a higher-level device are written and a plurality of address information elements indicating a plurality of actual storage positions of a plurality of data elements constituting the snapshot image is managed. In the initial backup, all the data elements stored in all the actual storage positions indicated by a plurality of address information elements are backed up, then storage positions where a difference has occurred between the virtual volume and the backup destination storage device are managed, and in the next and subsequent backups, data elements at the storage positions specified from the address information elements corresponding to the differentially managed storage positions are backed up. | 2008-10-02 |
20080244206 | METHOD OF CONTROLLING MEMORY ACCESS - Provided is a method of controlling memory access. In a system including a first layer element executed in a privileged mode having a first priority of permission to access the entire region of a memory and second and third layer elements executed in an unprivileged mode having a second priority of permission to access a partial region of the memory, the method of controlling memory access determines whether the memory is accessible for each page that is an address space unit, based on which mode a layer element currently accessing the memory is executed in between the privileged mode and the unprivileged mode; and determines whether the memory is accessible based on which one of the first, second and third layer elements corresponds to a domain currently being attempted to be accessed from among a plurality of domains of the memory. Accordingly, a memory domain allocated to a guest operating system kernel is effectively protected from an application executed in the unprivileged mode in which the guest operating system kernel is executed. | 2008-10-02 |
20080244207 | System as well as a method for granting a privilege to a chip holder - A system for granting a privilege to a chip holder. The system comprises at least one chip provided with at least one secret key to be activated by a chip holder and at least one associated public key. The system further comprises at least one chip reader, which is connected to a device for carrying out the privilege, and at least one privilege database, which comprises data regarding privileges associated with respective chips. In the system a request route and a reply route are set up between the chip reader and the privilege database over at least one network, wherein a reply from the privilege database can be sent to the chip reader in encoded form via the reply route by means of a public key of the chip obtained from an encryption database. The chip holder can decode the reply by means of the secret key, after which the decoded reply can be transferred to the device for carrying out the privilege. | 2008-10-02 |
20080244208 | Memory card hidden command protocol - A memory card compatible token includes non-memory components accessed using commands hidden in the data stream of a memory card access command. A mobile computing device such as a mobile phone accesses the non-memory components by writing to a specific address, including a known data value in the data stream, or both. The token may be activated using an activation code, and a subsequently chosen password may be used to authenticate the mobile computing device to the token each time a hidden command is issued. | 2008-10-02 |
20080244209 | METHODS AND DEVICES FOR DETERMINING QUALITY OF SERVICES OF STORAGE SYSTEMS - Methods and systems for allowing access to computer storage systems. Multiple requests from multiple applications can be received and processed efficiently to allow traffic from multiple customers to access the storage system concurrently. | 2008-10-02 |
20080244210 | ELIMINATING FRAGMENTATION WITH BUDDY-TREE ALLOCATION - This disclosure describes solutions for reducing the amount of fragmentation on a computer memory device, such as a hard disk, random access memory device, and/or the like. In an aspect, this disclosure describes systems, methods and software for allocating storage space for variable-sized data chunks in a fashion that reduces or eliminates the need for periodic de-fragmentation of the memory device. In another aspect, this disclosure describes solutions that provide for the dynamic re-allocation of existing data blocks on the memory device to provide contiguous available space that can be allocated for new data blocks. | 2008-10-02 |
20080244211 | MEMORY DEVICE AND CONTROLLER - A memory device comprises a nonvolatile memory including memory areas that are defined in accordance with security levels, and a controller configured to write to a first area that is part of the memory areas in an M-value mode and to a second area that is part of the memory areas and provides a lower security level than the first area in an N-value mode (N>M). | 2008-10-02 |
20080244212 | SYSTEM AND METHOD TO ENABLE HIERARCHICAL DATA SPILLING - In some embodiments, the invention involves managing access to firmware non-volatile storage, which is currently an extremely limited resource. A system and method provide a seamless means by which to enable spilling of such access to an alternate non-volatile storage target. One embodiment uses a virtualization platform to proxy NV store I/O requests via a virtual machine manager (VMM). Another embodiment uses an embedded platform to proxy I/O requests. Another embodiment uses IDS redirection in an embedded microcontroller on the platform to proxy I/O requests. Non-priority data may be stored in the alternative medium, even when space is available on the firmware memory store, based on platform policy. Other embodiments are described and claimed. | 2008-10-02 |
20080244213 | WORKLOAD MANAGEMENT IN VIRTUALIZED DATA PROCESSING ENVIRONMENT - A system, method and computer-readable medium for balancing access among multiple logical partitions to the physical system resources of a computer system employing system virtualization. Each of the logical partitions is classified, initially during a startup period, in accordance with a level of allocated dispatch window utilization. Performance metrics of one or more of the physical system resources are determined in association with one or more of the logical partitions. The performance metrics determination is performed at a hardware level independent of programming interrupts. During a dispatch window in which a given set of the physical system resources are configured for allocation to one of the logical partitions, the given set of physical system resources are re-allocated to a replacement logical partition in accordance with the determined performance metrics associated with the replacement logical partition and the dispatch window utilization classification of the replacement logical partition. | 2008-10-02 |
20080244214 | WORKLOAD MANAGEMENT IN VIRTUALIZED DATA PROCESSING ENVIRONMENT - A system, method and computer-readable medium for balancing access among multiple logical partitions to the physical system resources of a computer system employing system virtualization. Each of the logical partitions is classified, initially during a startup period, in accordance with a level of allocated dispatch window utilization. Performance metrics of one or more of the physical system resources are determined in association with one or more of the logical partitions. The performance metrics determination is performed at a hardware level independent of programming interrupts. During a dispatch window in which a given set of the physical system resources are configured for allocation to one of the logical partitions, the given set of physical system resources are re-allocated to a replacement logical partition in accordance with the determined performance metrics associated with the replacement logical partition and the dispatch window utilization classification of the replacement logical partition. | 2008-10-02 |
20080244215 | WORKLOAD MANAGEMENT IN VIRTUALIZED DATA PROCESSING ENVIRONMENT - A system, method and computer-readable medium for balancing access among multiple logical partitions to the physical system resources of a computer system employing system virtualization. Each of the logical partitions is classified, initially during a startup period, in accordance with a level of allocated dispatch window utilization. Performance metrics of one or more of the physical system resources are determined in association with one or more of the logical partitions. The performance metrics determination is performed at a hardware level independent of programming interrupts. During a dispatch window in which a given set of the physical system resources are configured for allocation to one of the logical partitions, the given set of physical system resources are re-allocated to a replacement logical partition in accordance with the determined performance metrics associated with the replacement logical partition and the dispatch window utilization classification of the replacement logical partition. | 2008-10-02 |
20080244216 | User access to a partitionable server - A partitionable server that enables user access thereto is provided. The partitionable server includes a plurality of partitions, each running an independent instance of an operating system (OS), and a first management module located in the partitionable server and interfacing with the plurality of partitions; the first management module is separate from the plurality of partitions and includes a physical user interface for local access to the partitionable server. The first management module is operable to provide mapping of a physical user interface device, which locally accesses the partitionable server through the physical user interface, to a virtual user interface of any one of the plurality of partitions as desired for accessing the one partition. | 2008-10-02 |
20080244217 | SAFETY MODULE FOR A FRANKING MACHINE - The invention relates to a safety module for electronic data processing, with a safety core comprising a core processor and, connected therewith, a core memory and a core interface, the core processor being adapted to import programs/data sets via the core interface, to verify them and, upon successful verification, to store and activate them in the core memory. It is characterized in that the safety core is connected via the core interface to a mass storage of the safety module arranged outside the safety core, wherein the memory capacity of the mass storage is a multiple of the memory capacity of the core memory, that the core processor is adapted to import, verify and activate programs/data sets loaded into the mass storage for a program execution in a partitioned manner in the core memory, and that the core processor is adapted to authenticate partitioned programs/data sets not required for the program execution and stored in the core memory and to export them into the mass storage and/or to delete them in the core memory. | 2008-10-02 |
20080244218 | SYSTEM AND PROGRAM PRODUCT FOR CACHING WEB CONTENT - The invention provides a system and program product for caching dynamic portal pages, without changing the existing caching proxy infrastructure or the transportation protocol used, by providing an advanced caching component. The advanced caching component provides additional dynamic-page-specific cache information as part of the response that includes the portal page. Each component in the portal that dynamically contributes page fragments to be aggregated into a portal page provides dynamic component-specific cache information, which includes component-specific cache scope and expiration values. | 2008-10-02 |
20080244219 | METHOD AND APPARATUS FOR CONTROLLING A SINGLE-USER APPLICATION IN A MULTI-USER OPERATING SYSTEM - A control system enables a plurality of users to execute a single-user application simultaneously in a multi-user OS, which would otherwise cause address conflicts under simultaneous execution, and to save results for each user while avoiding the address conflicts. The control system comprises a control unit. | 2008-10-02 |
20080244220 | Filter and Method For Filtering - A filter and method of filtering modify the computation order to accommodate horizontal symmetric filtering, and modify the source operands while modifying the SIMD computation, so as to eliminate the heavy overhead of transposing a pixel matrix. The filter and method of filtering reformulate the equations involved in the prior art, thereby acquiring the interpolation results while reducing the required clock cycles to three. | 2008-10-02 |
20080244221 | EXPOSING SYSTEM TOPOLOGY TO THE EXECUTION ENVIRONMENT - Embodiments of apparatuses, methods, and systems for exposing system topology to an execution environment are disclosed. In one embodiment, an apparatus includes execution cores and resources on a single integrated circuit, and topology logic. The topology logic is to populate a data structure with information regarding a relationship between the execution cores and the resources. | 2008-10-02 |
20080244222 | MANY-CORE PROCESSING USING VIRTUAL PROCESSORS - The present disclosure provides a method for virtual processing. According to one exemplary embodiment, the method may include partitioning a plurality of cores of an integrated circuit (IC) into a plurality of virtual processors, the plurality of virtual processors having a framework dependent upon a programming application. The method may further include performing at least one task using the plurality of cores. Of course, additional embodiments, variations and modifications are possible without departing from this embodiment. | 2008-10-02 |
20080244223 | BRANCH PRUNING IN ARCHITECTURES WITH SPECULATION SUPPORT - According to one example embodiment of the inventive subject matter, the method and apparatus described herein are used to generate an optimized speculative version of a static piece of code. The portion of code is optimized in the sense that the number of instructions executed will be smaller. However, since the applied optimization is speculative, the optimized version can be incorrect and some mechanism to recover from that situation is required. Thus, the quality of the produced code will be measured by taking into account both the final length of the code as well as the frequency of misspeculation. | 2008-10-02 |
20080244224 | Scheduling a direct dependent instruction - In one embodiment, the present invention includes an apparatus having an instruction selector to select an instruction, where the selector is to store a dependent indicator to indicate a direct dependent consumer instruction of a producer instruction, a decode logic coupled to the instruction selector to receive the dependent indicator when the producer instruction is selected and to generate a wakeup signal for the direct dependent consumer instruction, and wakeup logic to receive the wakeup signal and to indicate that the producer instruction has been selected. Other embodiments are described and claimed. | 2008-10-02 |
20080244225 | Integrated Circuit and Method For Transaction Retraction - An integrated circuit having a plurality of processing modules (I, T) is provided. At least one first processing module (I) issues at least one transaction towards at least one second processing module (T). Said integrated circuit further comprises at least one first transaction retraction unit (TRU). | 2008-10-02 |
20080244226 | Thread migration control based on prediction of migration overhead - A processing system features a first processing core to operate in a first node, a second processing core to operate in a second node, and random access memory (RAM) responsive to the first and second processing cores. The processing system also features control logic to perform operations such as (a) automatically updating a resident set size (RSS) counter to correspond to the RSS for the thread on the first node in response to allocation of a page frame for a thread in the first node, and (b) using the RSS counter to predict migration overhead when determining whether the thread should be migrated from the first processing core to the second processing core. Other embodiments are described and claimed. | 2008-10-02 |
20080244227 | DESIGN STRUCTURE FOR ASYMMETRICAL PERFORMANCE MULTI-PROCESSORS - A design structure embodied in a machine readable storage medium for designing, manufacturing, and/or testing a design, for allocating processing functions between a primary processor and a secondary processor is disclosed. A primary processor is provided that performs routine processing duties, including execution of application program code, while the secondary processor is in a sleep state. When the load on the primary processor is deemed to be excessive, the secondary processor is awakened from a sleep state and assigned to perform processing functions that would otherwise need to be performed by the primary processor. If temperatures in the system rise above a threshold, the secondary processor is returned to the sleep state. | 2008-10-02 |
20080244228 | Electronic Device With an Array of Processing Units - The invention concerns electronic devices, such as X-ray detectors, with an array of pixels. | 2008-10-02 |
20080244229 | Information processing apparatus - In an information processing apparatus, a fetch via the channel to a storage address of a first storage unit is detected, the first storage unit storing a first instruction that is executed first among a plurality of instructions included in software and executed when a processor starts the software. It is detected via the channel that the processor executed a specific instruction within the plurality of instructions. It is determined whether a predetermined time has passed from the detection of the fetch to the storage address until the detection of the execution of the specific instruction. When it is determined that the predetermined time has not passed, it is determined whether an interrupt to the processor is prohibited based on a result of the processor executing the specific instruction, and an access is released to the process according to a result of the determination. | 2008-10-02 |
20080244230 | SCALABLE PROCESSING ARCHITECTURE - A computation node according to various embodiments of the invention includes at least one input port capable of being coupled to at least one first other computation node. | 2008-10-02 |
20080244231 | Method and apparatus for speculative prefetching in a multi-processor/multi-core message-passing machine - In some embodiments, the invention involves a novel combination of techniques for prefetching data and passing messages between and among cores in a multi-processor/multi-core platform. In an embodiment, a receiving core has a message queue and a message prefetcher. Incoming messages are simultaneously written to the message queue and the message prefetcher. The prefetcher speculatively fetches data referenced in the received message so that the data is available when the message is executed in the execution pipeline, or shortly thereafter. Other embodiments are described and claimed. | 2008-10-02 |
20080244232 | Pre-fetch apparatus - Apparatus and computing systems associated with data pre-fetching are described. One embodiment includes a processor that includes a first unit to store data corresponding to a load instruction and an instruction pointer (IP) value associated with the load instruction. The processor also includes a second unit to produce a predicted demand address for a next load instruction, the predicted demand address being based on a constant stride value. The processor also includes a third unit to generate an instruction pointer pre-fetch (IPP) request for the predicted demand address. The processor may also include units to arbitrate between generated IP pre-fetch requests and alternative pre-fetch requests. | 2008-10-02 |
20080244233 | MACHINE CLUSTER TOPOLOGY REPRESENTATION FOR AUTOMATED TESTING - Software (such as server products) operating in a complex networked environment often runs on multi-machine installations that are known as machine clusters. A server product can be tested on a server machine type. The server product can be tested by tracking the constituent machines of a machine cluster, and configuring and recording the roles that each machine in the machine cluster plays. Scenarios targeting a single server machine-type can be seamlessly mapped from the single machine scenario to a machine cluster of any number of machines, while handling actions such as executing tests and gathering log files from all machines of a machine cluster as a unit. | 2008-10-02 |
20080244234 | System and Method for Executing Instructions Prior to an Execution Stage in a Processor - A method of processing a plurality of instructions in multiple pipeline stages within a pipeline processor is disclosed. The method partially or wholly executes a stalled instruction in a pipeline stage that has a function other than instruction execution prior to the execution stage within the processor. Partially or wholly executing the instruction prior to the execution stage in the pipeline speeds up the execution of the instruction and allows the processor to more effectively utilize its resources, thus increasing the processor's efficiency. | 2008-10-02 |
20080244235 | CIRCUIT MARGINALITY VALIDATION TEST FOR AN INTEGRATED CIRCUIT - A high volume manufacturing (HVM) and circuit marginality validation (CMV) test for an integrated circuit (IC) is disclosed. The IC comprises a port binding and bubble logic in the front end to provide flexibility in binding a port to the uop and to create empty spaces (bubbles) in the uop flow. The out-of-order (OOO) cluster of the IC comprises reservation disable logic to control the flow sequence of the uops and stop schedule logic to temporarily stop dispatching the uops from the OOO cluster to the execution (EXE) cluster. The EXE cluster of the IC comprises signal event uops to generate fault information and fused uJump uops to specify combination of branch prediction, direction, and resolution in any portion of the test. Such features provide a tester the flexibility to perform HVM and CMV testing of the OOO and EXE clusters of the IC. | 2008-10-02 |
20080244236 | METHOD AND SYSTEM FOR COMPOSING STREAM PROCESSING APPLICATIONS ACCORDING TO A SEMANTIC DESCRIPTION OF A PROCESSING GOAL - A method for assembling a stream processing application, includes: inputting a plurality of data source descriptions, wherein each of the data source descriptions includes a graph pattern that semantically describes an output of a data source; inputting a plurality of component descriptions, wherein each of the component descriptions includes a graph pattern that semantically describes an input of a component and a graph pattern that semantically describes an output of the component; inputting a stream processing request, wherein the stream processing request includes a goal that is represented by a graph pattern that semantically describes a desired stream processing outcome; assembling a stream processing graph, wherein the stream processing graph includes at least one data source or at least one component that satisfies the desired processing outcome; and outputting the stream processing graph. | 2008-10-02 |
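The assembly step described above can be sketched as naive backward chaining: a source or component whose output pattern covers the goal is selected, and a component's input pattern becomes a new sub-goal. In this minimal sketch, plain tag sets stand in for the patent's semantic graph patterns, and all names (`camera`, `annotator`, and so on) are illustrative assumptions, not from the application.

```python
def assemble(goal, sources, components):
    """Return an ordered list of processing-graph nodes satisfying `goal`.

    sources:    name -> set of output tags
    components: name -> (set of input tags, set of output tags)
    Tag-set inclusion stands in for semantic graph-pattern matching.
    """
    # A data source alone satisfies the goal if its output covers it.
    for name, out in sources.items():
        if goal <= out:
            return [name]
    # Otherwise try a component whose output covers the goal, and
    # recursively assemble a sub-graph producing that component's input.
    for name, (inp, out) in components.items():
        if goal <= out:
            sub = assemble(inp, sources, components)
            if sub is not None:
                return sub + [name]
    return None  # no combination of sources and components satisfies the goal

sources = {"camera": {"video"}}
components = {"annotator": ({"video"}, {"video", "annotated"})}
print(assemble({"video", "annotated"}, sources, components))
# → ['camera', 'annotator']
```

A real planner would also handle cycles and choose among alternative matches; this sketch only shows the covering-and-recursing idea.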
20080244237 | Compute unit with an internal bit FIFO circuit - A compute unit with an internal bit FIFO circuit includes: at least one data register; a lookup table; a configuration register, including FIFO base address, length, and read/write mode fields, for configuring a portion of the lookup table as a bit FIFO circuit; and a read/write pointer register, responsive to an instruction having a lookup table identification field, a length-of-bits field, and a register extract/deposit field, for selectively transferring, in a single cycle, a bit field of specified length between the FIFO circuit and the data register. | 2008-10-02 |
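The deposit/extract behavior of such a configurable bit FIFO can be modeled in software. This is a sketch only: the class and field names are assumptions, and the single-cycle hardware transfer is modeled as an ordinary loop.

```python
class BitFifo:
    """Software model of a bit FIFO carved out of a lookup-table region.

    base and length correspond to the configuration-register fields
    selecting the region; names are illustrative.
    """
    def __init__(self, base, length):
        self.base = base      # FIFO base address within the lookup table
        self.length = length  # FIFO capacity in bits
        self.bits = []        # queued bits, oldest first

    def deposit(self, value, nbits):
        # Transfer a bit field of specified length from a data register
        # into the FIFO, most-significant bit first.
        for i in reversed(range(nbits)):
            if len(self.bits) == self.length:
                raise OverflowError("bit FIFO full")
            self.bits.append((value >> i) & 1)

    def extract(self, nbits):
        # Transfer a bit field of specified length from the FIFO
        # back into a data register.
        value = 0
        for _ in range(nbits):
            value = (value << 1) | self.bits.pop(0)
        return value

fifo = BitFifo(base=0x40, length=32)
fifo.deposit(0b1011, 4)
fifo.deposit(0b01, 2)
print(fifo.extract(6))  # → 45, i.e. 0b101101: the six queued bits, oldest first
```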
20080244238 | Stream processing accelerator - The present invention is a stream processing accelerator which includes multiple coupled processing elements interconnected through a shared register file and a set of global predicates. The stream processing accelerator has two modes: full-processor mode and circuit mode. In full-processor mode, a branch unit, an arithmetic logic unit and a memory unit work together as a regular processor. In circuit mode, each component acts like a functional unit with configurable interconnections. | 2008-10-02 |
20080244239 | Method and System for Autonomic Monitoring of Semaphore Operations in an Application - A method, an apparatus, and a computer program product in a data processing system are presented for using hardware assistance for gathering performance information that significantly reduces the overhead in gathering such information. Performance indicators are associated with instructions or memory locations, and processing of the performance indicators enables counting of events associated with execution of those instructions or events associated with accesses to those memory locations. The performance information that has been dynamically gathered from the assisting hardware is available to the software application during runtime in order to autonomically affect the behavior of the software application, particularly to enhance its performance. For example, the counted events may be used to autonomically collect statistical information about the ability of a software application to successfully acquire a semaphore. | 2008-10-02 |
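The semaphore-acquisition statistics in the example above have a simple software analogue. This sketch models the hardware event counters as plain Python counters around a non-blocking lock; the class and attribute names are assumptions for illustration.

```python
import threading

class MonitoredSemaphore:
    """Counts successful and contended acquisition attempts, mimicking
    the event counts a hardware performance indicator would gather."""
    def __init__(self):
        self._lock = threading.Lock()
        self.acquired = 0   # attempts that got the semaphore immediately
        self.contended = 0  # attempts that found it already held

    def try_acquire(self):
        got = self._lock.acquire(blocking=False)
        if got:
            self.acquired += 1
        else:
            self.contended += 1
        return got

    def release(self):
        self._lock.release()

sem = MonitoredSemaphore()
sem.try_acquire()          # free: succeeds
sem.try_acquire()          # held: counted as contention
sem.release()
print(sem.acquired, sem.contended)  # → 1 1
```

An application could inspect these counters at runtime and, for instance, back off or restructure its locking when the contention ratio grows, which is the autonomic feedback loop the abstract describes.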
20080244240 | SEMICONDUCTOR DEVICE - A semiconductor device includes a first arithmetic engine which executes a first arithmetic process in every cycle and outputs first data representing the result of the first arithmetic process and a first valid signal representing a first or second value in every cycle, and a second arithmetic engine which executes a second arithmetic process in every cycle and outputs second data representing the result of the second arithmetic process and a second valid signal representing the first or second value in every cycle. The device also includes an inter-arithmetic-engine buffer which is used to exchange the first data and the second data between the first and second arithmetic engines, enables write of the first or second data if the first or second valid signal indicates the first value, and inhibits write of the first or second data if the first or second valid signal indicates the second value. | 2008-10-02 |
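The valid-signal gating of the inter-arithmetic-engine buffer can be shown in a few lines. This is a behavioral sketch with illustrative names, not the hardware design: a write is accepted only when the producing engine's valid signal carries the first value.

```python
class InterEngineBuffer:
    """Buffer between two arithmetic engines: write is enabled when the
    producer's valid signal is asserted, and inhibited otherwise."""
    def __init__(self):
        self.data = None

    def write(self, value, valid):
        if valid:               # valid signal at the first value: write enabled
            self.data = value
        # valid signal at the second value: write inhibited, data unchanged

buf = InterEngineBuffer()
buf.write(42, valid=True)
buf.write(99, valid=False)   # inhibited: buffer keeps the earlier result
print(buf.data)  # → 42
```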
20080244241 | HANDLING FLOATING POINT OPERATIONS - A computing system capable of handling floating point operations during program code conversion is described, comprising a processor including a floating point unit and an integer unit. The computing system further comprises a translator unit arranged to receive subject code instructions including at least one instruction relating to a floating point operation and in response to generate corresponding target code for execution on said processor. To handle floating point operations a floating point status unit and a floating point control unit are provided within the translator. These units cause the translator unit to generate either: target code for performing the floating point operations directly on the floating point unit; or target code for performing the floating point operations indirectly, for example using a combination of the integer unit and the floating point unit. In this way the efficiency of the computing system is improved. | 2008-10-02 |
20080244242 | Using a Register File as Either a Rename Buffer or an Architected Register File - A computer implemented method, apparatus, and computer usable program code are provided for implementing a set of architected register files as a set of temporary rename buffers. An instruction dispatch unit receives an instruction that includes instruction data. The instruction dispatch unit determines a thread mode under which a processor is operating. Responsive to determining the thread mode, the instruction dispatch unit determines an ability to use the set of architected register files as the set of temporary rename buffers. Responsive to the ability to use the set of architected register files as the set of temporary rename buffers, the instruction dispatch unit analyzes the instruction to determine an address of an architected register file in the set of architected register files where the instruction data is to be stored. The architected register file operating as a temporary rename buffer stores the instruction data as finished data. | 2008-10-02 |
20080244243 | Computer program product and system for altering execution flow of a computer program - A debugger alters the execution flow of a child computer program of the debugger at runtime by inserting jump statements determined by the insertion of breakpoint instructions. Breakpoints are used to force the child computer program to throw exceptions at specified locations. One or more instructions of the computer program are replaced by jump instructions. The jump destination addresses associated with the break instructions can be specified by input from a user. The debugger changes the instruction pointer of the child program to achieve the desired change in execution flow. No instructions are lost in the child program. | 2008-10-02 |
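The redirect-via-breakpoint mechanism above can be illustrated with a toy interpreter. The debugger's break instructions are modeled here as a `patches` map from instruction addresses to jump destinations; the program, opcodes, and names are all invented for illustration, not taken from the patent.

```python
# Toy child program: a list of (opcode, arg) tuples.
program = [
    ("set", 1),     # 0: acc = 1
    ("add", 10),    # 1: acc += 10
    ("add", 100),   # 2: acc += 100
    ("halt", None)  # 3: stop
]

def run(program, patches=None):
    """Execute the program. `patches` maps instruction addresses to jump
    destinations, standing in for breakpoints the debugger turned into
    jumps; the original instruction is skipped, not erased."""
    patches = patches or {}
    acc, ip = 0, 0
    while True:
        if ip in patches:       # debugger redirects the instruction pointer
            ip = patches[ip]
            continue
        op, arg = program[ip]
        if op == "set":
            acc = arg
        elif op == "add":
            acc += arg
        elif op == "halt":
            return acc
        ip += 1

print(run(program))          # → 111 (normal flow: 1 + 10 + 100)
print(run(program, {1: 2}))  # → 101 (flow altered to jump over the "+10")
```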
20080244244 | PARALLEL INSTRUCTION PROCESSING AND OPERAND INTEGRITY VERIFICATION - A method includes accessing, at a processing device, operand data associated with an instruction operation from a data cache and executing, at the processing device, the instruction operation using the operand data prior to determining the validity of the operand data. The method further includes retiring, at the processing device, the instruction operation in response to determining the operand data is valid. A processing device includes a data cache and an instruction pipeline. The instruction pipeline includes an execution stage configured to execute an instruction operation using operand data accessed from the data cache prior to determining the validity of the operand data and a retire stage configured to retire the instruction operation in response to determining the operand data is valid. | 2008-10-02 |
20080244245 | OPTIMAL SELECTION OF COMPRESSION ENTRIES FOR COMPRESSING PROGRAM INSTRUCTIONS - A method of compressing instructions in a program may include extracting unique bit patterns from the instructions in the program and constructing a linear programming formulation or an integer programming formulation from the unique bit patterns, the instructions, and/or the size of a memory storage. The linear programming formulation or the integer programming formulation may be solved to produce a solution. The method may include compressing at least some of the instructions based on the solution by storing at least some of the unique bit patterns in a memory and placing corresponding indices to the memory in new compressed instructions. | 2008-10-02 |
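The dictionary-compression step that the patent optimizes can be sketched with a simple greedy stand-in. Note the hedge: the application formulates entry selection as a linear or integer program; the frequency-based choice below is a plain substitute used only to show the mechanics of storing unique bit patterns in a memory and replacing instructions with indices. All names are illustrative.

```python
from collections import Counter

def compress(instructions, dict_size):
    """Greedy stand-in for the LP/IP entry selection: keep the most
    frequent unique patterns in a dictionary of fixed size and replace
    matching instructions with short indices into it."""
    freq = Counter(instructions)
    dictionary = [p for p, _ in freq.most_common(dict_size)]
    index = {p: i for i, p in enumerate(dictionary)}
    compressed = [("idx", index[p]) if p in index else ("raw", p)
                  for p in instructions]
    return dictionary, compressed

insns = ["mov r1,r2", "add r1,#1", "mov r1,r2", "mov r1,r2", "sub r3,r4"]
dictionary, out = compress(insns, dict_size=1)
print(dictionary)  # → ['mov r1,r2'], the most frequent pattern
print(out)         # frequent pattern replaced by ('idx', 0); the rest stay raw
```

The LP/IP formulation matters because greedy frequency counting ignores interactions between pattern choices and the exact memory-size constraint; the solver picks the entry set that maximizes overall savings.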
20080244246 | INTEGRATED MPE-FEC RAM FOR DVB-H RECEIVERS - A MPE-FEC memory chip and method for use in a DVB-H receiver, wherein the memory chip comprises a TS demux; a RS decoder; a system bus; and a RAM unit adapted to simultaneously interface to the TS demux, the RS decoder, and the system bus through time-multiplexing, wherein the RAM unit is adapted to (i) access multiple-words per clock cycle, and (ii) cache write and read accesses to reduce memory access from the TS demux and the system bus, and wherein the RAM unit is adapted to be clocked at a speed higher than an interfacing data-path to increase an effective throughput of the RAM unit. The RAM unit may comprise multiple RAM sub units, wherein while a first RAM sub unit is clock gated, the remaining multiple RAM sub units are accessible. | 2008-10-02 |
20080244247 | Processing long-latency instructions in a pipelined processor - There is provided a method and processor for processing a thread. The thread comprises a plurality of sequential instructions, the plurality of sequential instructions comprising short-latency instructions, long-latency instructions, and at least one hazard instruction, the hazard instruction requiring one or more preceding instructions to be processed before the hazard instruction is processed. The method comprises the steps of: a) before processing each long-latency instruction, incrementing by one a counter associated with the thread; b) after each long-latency instruction has been processed, decrementing by one the counter associated with the thread; c) before processing each hazard instruction, checking the value of the counter associated with the thread, and i) if the counter value is zero, processing the hazard instruction, or ii) if the counter value is non-zero, pausing processing of the hazard instruction until a later time. The processor includes means for performing steps a), b) and c) of the method. | 2008-10-02 |
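Steps a), b), and c) above translate directly into a small counter sketch. The thread-state class and instruction labels are assumptions made for illustration.

```python
class ThreadState:
    def __init__(self):
        self.pending_long = 0  # outstanding long-latency instructions

def issue(thread, instruction):
    """Return True if the instruction may proceed now, False if a hazard
    instruction must pause until outstanding long-latency work drains."""
    if instruction == "long":
        thread.pending_long += 1            # step a): increment before processing
        return True
    if instruction == "hazard":
        return thread.pending_long == 0     # step c): proceed only at zero
    return True                             # short-latency: always proceeds

def complete_long(thread):
    thread.pending_long -= 1                # step b): decrement on completion

t = ThreadState()
issue(t, "long")             # long-latency instruction issues; counter -> 1
print(issue(t, "hazard"))    # → False: hazard pauses while counter is non-zero
complete_long(t)             # long-latency result arrives; counter -> 0
print(issue(t, "hazard"))    # → True: hazard may now proceed
```

A per-thread counter like this avoids tracking individual dependencies: the hazard instruction only needs a zero/non-zero test, which is cheap in hardware.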
20080244248 | Apparatus, Method and Program Product for Policy Synchronization - Applications that function under a first operating system continue to function when it becomes necessary to call a second operating system into action, because provision has been made for the configuration and other settings necessary to the execution of such applications (here generically called policy settings or policy source data) to be made available to the second operating system. | 2008-10-02 |
20080244249 | Managed redundant enterprise basic input/output system store update - A basic input/output system may be stored on two different memories coupled to active management technology firmware and a trusted platform module. The trusted platform module ensures that the correct memory is accessed. One of the memories is selected to store an update of the basic input/output system. | 2008-10-02 |
20080244250 | Instant on video - In some embodiments, the invention involves speeding boot-up of a platform by initializing the video card early in the boot process. In an embodiment, processor cache memory is to be used as cache as RAM (CAR). Video graphics adapter (VGA) card initialization uses the CAR instead of system RAM to perform initialization. A portion of the firmware code, interrupt vector tables, and handlers is mirrored from flash memory into the CAR to mimic the behavior of system RAM during video initialization. VGA initialization may occur before system RAM has been initialized, enabling early visual feedback to a user. Other embodiments are described and claimed. | 2008-10-02 |
20080244251 | PREDICTIVE MODEL IMPLEMENTATION SYSTEM AND METHODOLOGY - The invention relates to a methodology and computer executable instructions configured to implement a prediction system. The invention deals with the use of a configuration file specifying at least the interactions to be completed between components of the prediction system, where this configuration file is transmitted to an implementation site. At the implementation site the configuration file is supplied as an input to at least one autonomous software agent where this agent or agents run the components of the prediction system as specified by the interactions defined within the configuration file. An extension to this method is also disclosed where the prediction system is built or constructed at the implementation site using the configuration file. | 2008-10-02 |
20080244252 | USING PROTECTED/HIDDEN REGION OF A MAGNETIC MEDIA UNDER FIRMWARE CONTROL - A method and firmware for accessing a protected area of a magnetic storage device via firmware control. During early system initialization, various firmware components are loaded and executed to initialize a computer system. These firmware components include a firmware driver for accessing magnetic storage devices connected to the computer system. The system firmware enables a protected area on a magnetic storage device's media to be accessed under firmware control. After firmware access is complete, the protected area is closed to non-firmware entities (e.g., operating systems) by “hiding” the true size of the media so that those entities are unaware of this area of the media. Mechanisms are disclosed for providing firmware access to the protected area only during pre-boot, and for both pre-boot and run-time operations. The firmware-controlled media access scheme may be used to load firmware stored on magnetic media during pre-boot and to store system information in the protected area during pre-boot and/or run-time operations. | 2008-10-02 |
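The size-hiding mechanism above — reporting a reduced media capacity once firmware hands off to the operating system — can be sketched as follows. The class, sector counts, and `lock` transition are illustrative assumptions; a real implementation would work through the drive's capacity-reporting commands.

```python
class Disk:
    """Model of firmware-controlled hiding of a protected media area:
    after lock-down, the reported capacity excludes the protected
    region, so non-firmware callers cannot address it."""
    def __init__(self, total_sectors, protected_sectors):
        self.total = total_sectors
        self.protected = protected_sectors
        self.firmware_mode = True  # pre-boot: firmware sees the full media

    def reported_size(self):
        # Non-firmware entities see only the reduced size.
        return self.total if self.firmware_mode else self.total - self.protected

    def lock(self):
        # Called before handing control to the OS: close the protected area.
        self.firmware_mode = False

d = Disk(total_sectors=1000, protected_sectors=64)
print(d.reported_size())  # → 1000 during pre-boot firmware access
d.lock()
print(d.reported_size())  # → 936 once the OS takes over
```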
20080244253 | SYSTEM, METHOD AND PROGRAM FOR SELECTIVELY REBOOTING COMPUTERS AND OTHER COMPONENTS OF A DISTRIBUTED COMPUTER SYSTEM - Selectively rebooting components of a computer system. One or more tables which list respective costs to reboot the components and respective likelihoods that reboots of the respective components will correct respective problems with the computer system are generated. Each of the costs is based on a time to reboot or delays caused by the reboot of the respective component. In response to a subsequent problem with the computer system, an order to reboot components of the computer system is determined from the table based on the costs and likelihoods that the reboot will correct the problem, such that a component of the computer system characterized by a relatively low cost and high likelihood to correct the problem will be rebooted before another component characterized by a relatively high cost and low likelihood to correct the problem. The tables are updated through actual experience. | 2008-10-02 |
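The cost-and-likelihood ordering described above can be sketched with a scored sort. The likelihood-to-cost ratio used here is one reasonable scoring choice, not necessarily the patent's exact rule, and the component table is invented for illustration.

```python
def reboot_order(components):
    """Order components so that low reboot cost and high likelihood of
    correcting the problem come first, per the table-driven scheme."""
    return sorted(components,
                  key=lambda c: c["likelihood"] / c["cost"],
                  reverse=True)

# cost: time to reboot (e.g. seconds); likelihood: chance a reboot fixes it.
table = [
    {"name": "app server", "cost": 30.0,  "likelihood": 0.6},
    {"name": "database",   "cost": 120.0, "likelihood": 0.7},
    {"name": "web proxy",  "cost": 5.0,   "likelihood": 0.4},
]
for c in reboot_order(table):
    print(c["name"])
# → web proxy, then app server, then database
```

Updating the table "through actual experience," as the abstract puts it, would amount to adjusting each component's likelihood after observing whether its reboot actually cleared the problem.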