27th week of 2015 patent application highlights part 45 |
Patent application number | Title | Published |
20150186208 | MEMORY CONTROL APPARATUS - An internal buffer caches data from a memory. A memory address conversion unit receives a read request from a request source. A hit determination unit determines whether data at any one of two or more read-candidate addresses, in which payload data corresponding to the read request is stored, has been cached or is going to be cached in the internal buffer. When data at any one of the addresses has been cached or is going to be cached in the internal buffer, a command issue interval control unit outputs to the memory, after a predetermined delay time has elapsed, a partial read command instructing it to read data from the read-candidate addresses other than the address of the data that has been cached or is going to be cached in the internal buffer. | 2015-07-02 |
20150186209 | CLOCK DOMAIN CROSSING SERIAL INTERFACE - A method for serial interface clock domain crossing includes identifying a data communication command received over a serial interface. An address is decoded to determine whether the address falls within a direct latch address range of a register bank. Data is communicated over the serial interface. A multiplexed output clock is generated, for writing to and reading from the register bank, based on at least one of a current system operating state and a refresh control signal from a host processor. | 2015-07-02 |
20150186210 | METHOD FOR PERFORMING ERROR CORRECTION, ASSOCIATED MEMORY APPARATUS AND ASSOCIATED CONTROLLER THEREOF - A method for performing error correction, an associated memory apparatus and an associated controller thereof are provided, where the method includes: performing a read operation at a specific physical address of a flash memory; after an uncorrectable error of the read operation is detected, performing a first re-read operation at the specific physical address of the flash memory by setting a first retry parameter to the flash memory to obtain first data corresponding to the first retry parameter, and temporarily storing the first data into a volatile memory and performing a first hard decoding operation on the first data; and after decoding failure of the first hard decoding operation is detected, at least according to the first data read from the volatile memory, performing a soft decoding operation to perform error correction corresponding to the specific physical address. | 2015-07-02 |
20150186211 | METHOD, DEVICE AND OPERATING SYSTEM FOR PROCESSING AND USING BURN DATA OF NAND FLASH - A method for processing burn data of NAND flash is provided. The method includes: identifying all half-empty blocks in the burn data of the NAND flash, the half-empty blocks being blocks in which some pages are written with data and the remaining pages are blank; and writing a predetermined label character to all the blank pages of all the half-empty blocks to convert the half-empty blocks to full blocks. With the above approach, the present invention fulfills the requirement that every page is either empty or written with data, so as to prevent data from being damaged in a high-temperature patching process, thereby enhancing product quality and reliability. | 2015-07-02 |
20150186212 | DECODING METHOD, MEMORY STORAGE DEVICE, AND MEMORY CONTROLLING CIRCUIT UNIT - A decoding method, a memory storage device and a memory controlling circuit unit are provided. The decoding method includes: reading at least one memory cell according to a first read voltage to obtain at least one first verification bit; executing a hard bit mode decoding procedure according to the first verification bit, and determining whether a first valid codeword is generated by the hard bit mode decoding procedure; if the first valid codeword is not generated by the hard bit mode decoding procedure, obtaining storage information of the memory cell; deciding a voltage number according to the storage information; reading the memory cell according to second read voltages matching the voltage number to obtain second verification bits; and executing a soft bit mode decoding procedure according to the second verification bits. Accordingly, the speed of decoding is increased. | 2015-07-02 |
20150186213 | DATA PROCESSING SYSTEM HAVING END-TO-END ERROR CORRECTION AND METHOD THEREFOR - In a data processing system having a plurality of error coding function circuitries, a method includes receiving an address which indicates a first storage location for storing a first data value; using a first portion of the address to select one of the plurality of error coding function circuitries as a selected error coding function circuitry; and using the selected error coding function circuitry to generate a first checkbit value, wherein the selected error coding function circuitry uses the first data value to generate the first checkbit value. When the first portion of the address has a first value, a first one of the plurality of error coding function circuitries is selected as the selected error coding function circuitry. When the first portion of the address has a second value, a second one of the plurality of error coding function circuitries is selected as the selected error coding function circuitry. | 2015-07-02 |
20150186214 | ASSIGNING A DISPERSED STORAGE NETWORK ADDRESS RANGE IN A MAINTENANCE FREE STORAGE CONTAINER - A maintenance free storage container includes a container housing, storage servers, and a container controller. The container controller includes a processing module that is operable to maintain virtual storage server to physical storage server mapping information and to maintain storage server failure information. The processing module is further operable to dispersed storage error encode the virtual storage server to physical storage server mapping information to produce encoded mapping slices. The processing module is further operable to send the encoded mapping slices for dispersed storage outside of the maintenance free storage container. The processing module is further operable to dispersed storage error encode the storage server failure information to produce encoded failure data slices. The processing module is further operable to send the encoded failure data slices for dispersed storage outside of the maintenance free storage container. | 2015-07-02 |
20150186215 | Assisted Coherent Shared Memory - An apparatus for coherent shared memory across multiple clusters is described herein. The apparatus includes a fabric memory controller and one or more nodes. The fabric memory controller manages access to a shared memory region of each node such that each shared memory region is accessible using load store semantics, even in response to failure of the node. The apparatus also includes a global memory, wherein each shared memory region is mapped to the global memory by the fabric memory controller. | 2015-07-02 |
20150186216 | METHOD AND APPARATUS FOR SELECTING PROTECTION PATH ON ODU SMP (OPTICAL CHANNEL DATA UNIT SHARED MESH PROTECTION) NETWORK - Provided herein is a method and apparatus for selecting a protection path in an ODU SMP (optical channel data unit shared mesh protection) network, the method including in response to a shared resource managed by an intermediate node being preoccupied by a certain protection path, searching by the intermediate node for an identifier relevant to the shared resource; determining by the intermediate node whether or not the searched identifier is registered; in response to determining that the identifier is registered, searching by the intermediate node for a port corresponding to the identifier; and in response to the port being registered, transmitting by the intermediate node a shared resource availability message to a node corresponding to the port. | 2015-07-02 |
20150186217 | SNAPSHOT-PROTECTED CONSISTENCY CHECKING FILE SYSTEMS - Various of the disclosed embodiments provide for recovery following inadvertent errors in a Log-Structured File System (LSFS). Particularly, embodiments mitigate inadvertent errors introduced by a file system consistency check operation by creating file system images at an appropriate time. The images may be stored within the portion of the file system accessible to a user. The images may be created in conjunction with the mounting of the file system and in such a fashion as to preserve the file system data should an error occur. Restoring the system to one of the images may remove any errors introduced by the consistency check, or similar, operation. | 2015-07-02 |
20150186218 | PERFORMING A BACKGROUND COPY PROCESS DURING A BACKUP OPERATION - A method according to one embodiment includes receiving information from a performance monitoring facility of a database at a data protection manager implemented at least in part in hardware, wherein the performance monitoring facility is configured to measure performance of the database, wherein the data protection manager is configured to control a backup operation of the database on a storage subsystem. The backup operation is started, and the performance monitoring facility is informed about the starting of the backup operation. In response to receiving an indication that a predefined performance criterion has been violated, information from the performance monitoring facility is received at the data protection manager, the data protection manager causing compliance with the predefined performance criterion in response to the information. | 2015-07-02 |
20150186219 | ELECTRONIC DEVICE AND METHOD FOR TRACKING OPERATIONS OF DIFFERENT USERS - An operation tracking method creates a database recording unique identifiers of users and operation listings of the users. The method records operation information of the user in the operation listing of the user, and recovers each finalized data to the corresponding original data according to the operation information of the user when the user logs out of the electronic device. The method further recognizes the unique identifier of a current user when the current user logs into the electronic device, and recovers each original data to the corresponding finalized data according to the operation information of the current user recorded in the operation listings of the current user corresponding to the unique identifier of the current user. | 2015-07-02 |
20150186220 | INCREASING GRANULARITY OF DIRTY BIT INFORMATION - One or more unused bits of a virtual address range are allocated for aliasing so that multiple virtually addressed sub-pages can be mapped to a common memory page. When one bit is allocated for aliasing, dirty bit information can be provided at a granularity that is one-half of a memory page. When M bits are allocated for aliasing, dirty bit information can be provided at a granularity that is 1/(2^M) of a memory page. | 2015-07-02 |
20150186221 | VIRTUAL SNAPSHOT SYSTEM AND METHOD - The present disclosure relates generally to a method and system for creating, replicating, and providing access to virtual snapshots of a disk storage block of a disk storage system or subsystem. In one embodiment, the present disclosure relates to a virtual snapshot accessible to local users of a local data storage device. The virtual snapshot may direct local users to a snapshot stored on computer-readable storage medium at a remote data storage site, but give the appearance as if data of the corresponding snapshot is stored locally. The virtual snapshot is replaced by replication of the snapshot from the remote data storage site to the local data storage device. Each snapshot may relate to data of a logical data volume, the logical data volume being an abstraction of data blocks from one or more physical storage devices. | 2015-07-02 |
20150186222 | Methods and Systems For Vectored Data De-Duplication - The present invention is directed toward methods and systems for data de-duplication. More particularly, in various embodiments, the present invention provides systems and methods for data de-duplication that may utilize a vectoring method for data de-duplication wherein a stream of data is divided into “data sets” or blocks. For each block, a code, such as a hash or cyclic redundancy code may be calculated and stored. The first block of the set may be written normally and its address and hash can be stored and noted. Subsequent block hashes may be compared with previously written block hashes. | 2015-07-02 |
20150186223 | STORAGE MANAGEMENT SYSTEM AND STORAGE MANAGEMENT METHOD - An embodiment of this invention is a storage management system including a processor and a storage device to manage a storage system having one or more copy functions. The processor locates data designated to determine a backup method. The storage device stores copy function management information on the one or more copy functions of the storage system. The processor refers to the copy function management information to ascertain the unit of copy operation of each of the one or more copy functions. The processor determines a candidate for a copy function of the storage system to be used to back up the designated data depending on the data configuration in a volume holding the designated data and the unit of copy operation of the candidate for the copy function. | 2015-07-02 |
20150186224 | DATA STORAGE DEVICE AND FLASH MEMORY CONTROL METHOD - A data storage device and a flash memory control method with a power recovery design. A microcontroller is configured to allocate the flash memory to provide a first block from its blocks to work as a run-time write block for reception of write data. During a power recovery process due to an unexpected power-off event that interrupted write operations on the first block, the microcontroller is configured to allocate the flash memory to provide a second block from its blocks for complete data recovery of the first block and to replace the first block as the run-time write block. | 2015-07-02 |
20150186225 | DATA STORAGE DEVICE AND FLASH MEMORY CONTROL METHOD - A data storage device with flash memory and a flash memory control method are disclosed, in which the flash memory includes multi-level cells (MLCs) and single-level cells (SLCs). A microcontroller is configured to establish a first physical-to-logical address mapping table (F2H table) in a random access memory (RAM) for a first run-time write block containing MLCs. The microcontroller is further configured to establish a second F2H table in the RAM for a second run-time write block containing SLCs. When data that was previously stored in the first run-time write block with un-uploaded mapping information in the first F2H table is updated into the second run-time write block, the microcontroller is configured to update a logical-to-physical address mapping table (H2F table) in accordance with the first F2H table. The H2F table is provided within the flash memory. | 2015-07-02 |
20150186226 | DATA STORAGE WITH VIRTUAL APPLIANCES - A data storage system has at least two universal nodes each having CPU resources, memory resources, network interface resources, and a storage virtualizer. A system controller communicates with all of the nodes. Each storage virtualizer in each universal node is allocated by the system controller a number of storage provider resources that it manages. The system controller maintains a map for dependency of virtual appliances to storage providers, and the storage virtualizer provides storage to its dependent virtual appliances either locally or through a network protocol (N_IOC, S_IOC) to another universal node. The storage virtualizer manages storage providers and is tolerant to fault conditions. The storage virtualizer can migrate from any one universal node to any other universal node. | 2015-07-02 |
20150186227 | EFFICIENT DEBUGGING OF MEMORY MISCOMPARE FAILURES IN POST-SILICON VALIDATION - Debugging techniques performed post-silicon, but with reference to pre-silicon phase data and/or reference model data. For example, one debugging technique is as follows: (i) receiving a first memory location that is subject to a miscompare between an associated simulation value for the first memory location and an associated actual value for the first memory location; (ii) backtracking through instructions of a test case to determine the identity of a set of backtrack locations upon which the first memory location is dependent, with the set of backtrack locations being made up of at least one of: memory locations and register locations; and (iii) comparing respective simulation values and actual values for at least one of the backtrack locations to help determine a cause of the miscompare at the first memory location. | 2015-07-02 |
20150186228 | MANAGING NODES IN A DISTRIBUTED COMPUTING ENVIRONMENT - Various embodiments of systems and methods for managing a plurality of nodes in a distributed computing environment are described herein. Initially a request to process a to-be-processed request is received. Next one or more nodes from a plurality of nodes, included in a cluster, is identified to process the to-be-processed request. Next the to-be-processed request is divided into a plurality of sub-requests. Next the plurality of sub-requests are assigned to the identified one or more nodes and the generated additional node. A node failure of one of the one or more identified nodes is identified. Finally, one or more of the plurality of sub-requests assigned to the failed node is re-assigned to another node of the plurality of nodes. | 2015-07-02 |
20150186229 | EFFICIENT FAIL-OVER IN REPLICATED SYSTEMS - A method for selecting a leader node among a plurality of network nodes, comprising: providing a current configuration of selected nodes in a replicated-state-machine-based system, wherein a first node is set for handling commands received from clients; executing a consensus protocol by the selected nodes under the current configuration; identifying at least one fault indicative event of the first node; calculating a suggested configuration of selected nodes, wherein a second node is set for handling the commands; informing each member of the suggested configuration and the first node of the suggested configuration; executing the consensus protocol in parallel under both the suggested configuration and the current configuration; and when detecting that the first node is faulty, setting the second node for handling the commands under the current configuration in place of the first node and reconfiguring the current configuration to become the suggested configuration. | 2015-07-02 |
20150186230 | PHYSICAL MEMORY FAULT MITIGATION IN A COMPUTING ENVIRONMENT - Effects of a physical memory fault are mitigated. In one example, to facilitate mitigation, memory is allocated to processing entities of a computing environment, such as applications, operating systems, or virtual machines, in a manner that minimizes impact to the computing environment in the event of a memory failure. Allocation includes using memory structure information, including information regarding fault containment zones, to allocate memory to the processing entities. By allocating memory based on fault containment zones, a fault only affects a minimum number of processing entities. | 2015-07-02 |
20150186231 | Allocating Machine Check Architecture Banks - In accordance with embodiments disclosed herein, systems and methods are provided for allocating machine check architecture banks. The processing device includes a plurality of machine check architecture banks to communicate a machine check error. The processing device also includes an allocator to allocate, during runtime of the processor, a target machine check architecture bank of the plurality of machine check architecture banks. The runtime of the processor is during an occurrence of the machine check error. | 2015-07-02 |
20150186232 | DEBUG INTERFACE - Techniques of debugging a computing system are described herein. The techniques may include generating debug data at agents in the computing system. The techniques may include recording the debug data at a storage element, wherein the storage element is disposed in a non-core portion of the circuit interconnect accessible to the agents. | 2015-07-02 |
20150186233 | REMOTE DIAGNOSTICS FOR A COMPUTING DEVICE - For remote diagnostics of a computing device, a method is disclosed that includes collecting failure information from a computing device, wherein the computing device has an error, encapsulating the failure information into a file, and transmitting the file from the computing device to a remote device using a low level file transfer protocol. | 2015-07-02 |
20150186234 | Communication Monitoring System - A system for monitoring the integrity of a communication bus includes a communication bus cooperating with at least one transmitter configured to generate and transmit a signal on the communication bus. At least one receiver is configured to receive a signal generated by the transmitter and transmitted on the communication bus. The receiver is further configured to receive the transmitted signal as well as any reflected signals arising from a non-impedance-matched section in the communication bus, wherein a time difference between the transmitted pulse width and the received pulse width indicates the distance between the non-impedance-matched section and the transmitter on the communication bus. | 2015-07-02 |
20150186235 | Systems and Methods for Random Number Generation Using a Fractional Rate Clock - Systems and methods are provided for generating a pseudo-random bit sequence at an output frequency using a clock signal operating at a first frequency that is lower than the output frequency. A first bit sequence of a particular type is generated using a clock signal operating at a first frequency. A second bit sequence is generated using the clock signal operating at the first frequency, where the second bit sequence is a delayed version of the first bit sequence. A delayed version of the first bit sequence is generated using the second bit sequence and another bit sequence, wherein the delayed version is delayed based on the particular type and a difference between the output frequency and the first frequency. The first bit sequence and the delayed version are combined to generate a pseudo-random bit sequence at the output frequency. | 2015-07-02 |
20150186236 | SCALABLE TESTING IN A PRODUCTION SYSTEM WITH AUTOSCALING - A network-based production service is configured to process client requests for the production service via a network, capture production request data defining the requests and store the production request data in a data store. A test system comprising one or more controllers creates test jobs according to a test plan for testing the production service. The test plan creates a test profile for using specified production request data to simulate a load on the production service. Each job created by the test plan specifies a portion of production request data. A job queue receives and queues test jobs from one or more controllers configured to add test jobs to the job queue according to the test plan. Workers access jobs from the job queue and the production request data from the data store as specified in each job and replay the production request data to the production service. | 2015-07-02 |
20150186237 | SYSTEMS AND METHODS FOR ERROR SIMULATION AND CODE TESTING - A method for error simulation in a data storage subsystem providing abstractions of one or more storage devices. The method includes dividing the data storage subsystem into two or more hierarchically organized subsystems, wherein the subsystems interact using IO Request Packets (IORPs), such that relatively higher level subsystems create and populate IORPs and pass them to relatively lower level subsystems for corresponding processing. The method further includes defining an IORP modifier configured to attach to matching IORPs based on one or more attributes of the IORP modifier and to modify at least one of the processing and one or more attributes of the IORP in order to simulate errors in the data storage subsystem. | 2015-07-02 |
20150186238 | HARDWARE PROFILING - According to one general aspect, an apparatus may include a trace control register, a first output path, and a second output path. The trace control register may be configured to receive one or more signals output by a combinatorial logic block. The trace control register may include a first register portion configured to capture the one or more signals. The trace control register may include a second register portion configured to capture whether an event occurred within the combinatorial logic block. The occurrence of the event is determined by at least a portion of the one or more signals having a predetermined state. The first output path is configured to select between a plurality of captured signals provided by respective trace control registers. The second output path is configured to output one or more captured events provided by one or more respective trace control registers. | 2015-07-02 |
20150186239 | METHOD AND SYSTEM PROVIDING A SELF-TEST ON ONE OR MORE SENSORS COUPLED TO A DEVICE - A method and system for providing a self-test configuration in a device is disclosed. The method and system comprise providing a self-test mechanism in a kernel space of a memory and enabling a hook in a user space of the memory, wherein the hook is in communication with the self-test mechanism. The method and system also include running the self-test driver and utilizing the results. | 2015-07-02 |
20150186240 | EXTENSIBLE I/O ACTIVITY LOGS - A method of managing peripherals is performed in a device coupled to a processor in a computer system. In the method, information associated with I/O activity for one or more peripherals is recorded in a first segment of a log. A second segment of the log is identified based on a next-segment pointer associated with the first segment of the log. In response to detecting a lack of available capacity in the first segment of the log, information associated with further I/O activity for the one or more peripherals is recorded in the second segment of the log. | 2015-07-02 |
20150186241 | ASSESSMENT OF PROCESSOR PERFORMANCE METRICS BY MONITORING PROBES CONSTRUCTED USING INSTRUCTION SEQUENCES - Processor performance metrics are assessed by monitoring probes constructed using instruction sequences. A probe comprising an instruction sequence is selected. The instruction sequence can be configured to measure at least one hardware metric. A first probe value is received. The first probe value can be based, at least in part, on the hardware metric. The first probe value can be determined from execution of the probe in a first execution environment. The probe can be executed a second time to determine a second probe value. The second probe value can be based, at least in part, on the hardware metric. The second probe value is determined in a second execution environment including at least one workload. The first probe value and the second probe value can be compared to produce a performance assessment of the second execution environment. | 2015-07-02 |
20150186242 | ASSESSMENT OF PROCESSOR PERFORMANCE METRICS BY MONITORING PROBES CONSTRUCTED USING INSTRUCTION SEQUENCES - Monitoring probes constructed using instruction sequences are used to assess processor performance metrics. A probe comprising an instruction sequence is selected. The instruction sequence can be configured to measure at least one hardware metric. A first probe value is received. The first probe value can be based, at least in part, on the hardware metric. The first probe value can be determined from execution of the probe in a first execution environment. The probe can be executed a second time to determine a second probe value. The second probe value can be based, at least in part, on the hardware metric. The second probe value is determined in a second execution environment including at least one workload. The first probe value and the second probe value can be compared to produce a performance assessment of the second execution environment. | 2015-07-02 |
20150186243 | APPARATUS AND METHOD FOR ENABLING A USER TO MONITOR SKEW OF RESOURCE USAGE ACROSS DIFFERENT COMPONENTS OF A LARGE DATABASE SYSTEM - A method and apparatus are provided for facilitating performance monitoring of a large database system. The apparatus comprises a processor and a storage device communicatively coupled with the processor. The processor is programmed to (i) retrieve resource usage data points, (ii) calculate an outlier threshold value based upon values of data points, and (iii) determine if the value of each data point is outside of the threshold value. The processor is further programmed to (iv) plot the value of the data point in a histogram with a first color when the data point value is determined to be outside of the threshold value, (v) plot the data point value in the histogram with a second color when the data point value is determined to be not outside of the threshold value, and (vi) display the histogram containing plotted data point values of the first color and plotted data point values of the second color to enable a user to quickly and easily identify a potential skew condition. | 2015-07-02 |
20150186244 | DATA RECORDER FOR FULL EVENT CAPTURE - Embodiments are directed to capturing data associated with an occurrence of an event via a battery-powered recorder, comprising: storing analog data in a delay circuit while a recording circuit is powered off, detecting, by a trigger circuit, the occurrence of the event while the recording circuit is powered off, powering-on the recording circuit based on the detection of the event, converting samples of analog data associated with the event provided by the delay circuit to a digital format while the recording circuit powers on, and storing the converted digital samples after the recording circuit has powered on, wherein a time delay associated with the delay circuit is greater than a time it takes for the recording circuit to power-on. | 2015-07-02 |
20150186245 | INTEGRATED PRODUCTION SUPPORT - Embodiments for integrating production support features are included in systems for receiving modules from a client application associated with an operator device. The embodiments include selecting at least one client module from the received modules, identifying a trace objective for the at least one client module, selecting a data collection level based on the trace objective, and collecting, by a processor, data associated with the at least one client module in response to the selected data collection level. The systems are combinable with additional production support features including event monitoring. | 2015-07-02 |
20150186246 | INCLUDING KERNEL OBJECT INFORMATION IN A USER DUMP - An improved method of analyzing software issues may include retrieving and storing selected data elements from the operating system kernel data prior to performing a memory dump. The method of retrieving the selected kernel data may include creating a thread dedicated to collecting the data and storing it in a memory location for analysis after the memory dump. The operating system kernel data may be analyzed in conjunction with the prior art dump data to identify a root cause of the software issue. | 2015-07-02 |
20150186247 | AUTONOMOUS MEDIA VERSION TESTING - Autonomous media version testing is described. A method may include testing, by a processing device of a server and without human interaction, a plurality of versions of a game, each having a different set of test conditions, using information received from play of the plurality of versions of the game after a first game move has been made in the game. The method may also include determining, by the processing device and without human interaction, which of the plurality of versions of the game to publicly release based on the testing. | 2015-07-02 |
20150186248 | CONTENT RECORDING METHOD AND DEVICE - Disclosed are a content recording method and device, for use in software development. The method includes: capturing the content displayed on a screen in the software development process; acquiring a mouse event related to the content displayed on the screen; and processing the mouse event and the content displayed on the screen to obtain the recorded content, the recorded content containing the content displayed on the screen and the mouse event. The technical solution can record a screen capture and a mouse/keyboard operation related thereto in the software test development process, thus effectively recording the test and development process, and improving test and development efficiency. | 2015-07-02 |
20150186249 | TESTING WEB PAGES USING A DOCUMENT OBJECT MODEL - Methods and systems to test web browser enabled applications are disclosed. In one embodiment, a browser application can allow a user to perform test and analysis processes on a candidate web browser enabled application. The test enabled browser can use special functions and facilities that are built into the test enabled browser. One implementation of the invention pertains to functional testing, and another implementation pertains to site analysis. | 2015-07-02 |
20150186250 | ARCHITECTURAL FAILURE ANALYSIS - Localizing errors by: (i) running the testcase on a software model version of a processor to yield first testcase-run results in the form of a first set of values respectively stored in the set of data storage locations; (ii) creating a resource dependency information set based on the instructions of the testcase; (iii) running the testcase on a hardware version of the processor to yield second testcase-run results in the form of a second set of values respectively stored in the set of data storage locations; (iv) determining a set of miscompare data storage location(s), including at least a first miscompare data storage location, by comparing the first set of values and the second set of values; and (v) creating an initial dynamic slice of the data flow. | 2015-07-02 |
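The miscompare step (iv) in the entry above can be sketched in a few lines. This is a minimal illustration, not the patent's method: the names (`find_miscompares`, `model_results`, `hw_results`) and the dict-based result format are assumptions.

```python
# Illustrative sketch of step (iv): comparing per-location results from a
# software-model run and a hardware run of the same testcase to find the
# miscompare data storage locations. All names here are assumptions.
def find_miscompares(model_results, hw_results):
    """Return the data storage locations whose final values differ."""
    return {loc for loc, val in model_results.items()
            if hw_results.get(loc) != val}

model = {"r0": 1, "r1": 7, "mem0": 42}
hw = {"r0": 1, "r1": 9, "mem0": 42}
print(sorted(find_miscompares(model, hw)))  # prints ['r1']
```

The resulting miscompare set would then seed the dynamic slice of step (v).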
20150186251 | CONTROL FLOW ERROR LOCALIZATION - Localizing errors by: (i) running a testcase on a hardware processor and saving results; (ii) running the testcase on a software model of the processor and saving results; (iii) recording control flow information during the software run; (iv) determining a set of miscompare data storage locations by comparing the results from the hardware run with those from the software run; (v) based on the set of miscompare data storage locations and/or the control flow information, generating and running a modified version of the testcase that takes a different execution path when run on the software model than did the original testcase when run on the software model; and (vi) comparing the results from the hardware run and the results obtained from the modified software run to provide an indication of similarity between execution paths taken in these respective runs. | 2015-07-02 |
20150186252 | TESTING OF TRANSACTION TRACKING SOFTWARE - In a method for generating test transactions across computing systems, a first test function of a first program on a first computing system of a plurality of computing systems receives a plurality of instructions, wherein a first instruction of the plurality of instructions is to invoke a first transaction between a second function of the first program and a second program on a second computing system of the plurality of computing systems. The first test function of the first program causes the transaction between the second function of the first program on the first computing system and the second program on the second computing system. The first test function of the first program sends the plurality of instructions to a second test function on a third computing system of the plurality of computing systems, based on a second instruction of the plurality of instructions. | 2015-07-02 |
20150186253 | STREAMLINED PERFORMANCE TESTING FOR DEVELOPERS - Performance testing is streamlined to facilitate assessing software performance. A performance test can be authored similar to familiar functional tests but with a tag that indicates the test is a performance test and specifies a data collection mechanism. Performance data collected during test execution can subsequently be reported to a software developer in various ways. Performance testing can also be integrated with one or more of a team development system or an individual development system. | 2015-07-02 |
20150186254 | TESTING OF TRANSACTION TRACKING SOFTWARE - In a method for generating test transactions across computing systems, a first test function of a first program on a first computing system of a plurality of computing systems receives a plurality of instructions, wherein a first instruction of the plurality of instructions is to invoke a first transaction between a second function of the first program and a second program on a second computing system of the plurality of computing systems. The first test function of the first program causes the transaction between the second function of the first program on the first computing system and the second program on the second computing system. The first test function of the first program sends the plurality of instructions to a second test function on a third computing system of the plurality of computing systems, based on a second instruction of the plurality of instructions. | 2015-07-02 |
20150186255 | RE-USE OF INVALIDATED DATA IN BUFFERS - Reusing data in a memory buffer. A method includes reading data into a first portion of memory of a buffer implemented in the memory. The method further includes invalidating the data and marking the first portion of memory as free such that the first portion of memory is marked as being usable for storing other data, but where the data is not yet overwritten. The method further includes reusing the data in the first portion of memory after the data has been invalidated and the first portion of the memory is marked as free. | 2015-07-02 |
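The buffer behavior in the entry above can be modeled with a toy class. This is a sketch under assumed structure (a key-indexed slot table with a valid flag), not the claimed implementation: invalidating marks a slot free but leaves its bytes in place, so a later read of the same data avoids re-fetching it as long as the slot has not been overwritten.

```python
class ReuseBuffer:
    """Toy buffer: invalidation marks a slot free but leaves its data in
    place, so the data can be reused until the slot is overwritten."""

    def __init__(self):
        self._slots = {}  # key -> [data, valid_flag]

    def read(self, key, fetch):
        slot = self._slots.get(key)
        if slot is not None:           # data still present, even if freed
            slot[1] = True             # revalidate and reuse it
            return slot[0]
        data = fetch(key)              # otherwise read from the source
        self._slots[key] = [data, True]
        return data

    def invalidate(self, key):
        self._slots[key][1] = False    # freed, but not yet overwritten


fetches = []
buf = ReuseBuffer()
buf.read("a", lambda k: fetches.append(k) or "DATA")
buf.invalidate("a")
value = buf.read("a", lambda k: fetches.append(k) or "DATA")
```

After invalidation, the second read still returns `"DATA"` without a second fetch.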
20150186256 | PROVIDING VIRTUAL STORAGE POOLS FOR TARGET APPLICATIONS - The present disclosure relates to a method and system for providing a virtual storage pool set for a target application by receiving performance requirements associated with the target application; and providing a virtual storage pool set for the target application according to the performance requirements and on the basis of storage capabilities of physical storage resources, the virtual storage pool set comprising one or more virtual storage pools. By means of various embodiments of the present disclosure, virtual storage pools can be provided for the target application efficiently and flexibly and can be adjusted dynamically. | 2015-07-02 |
20150186257 | MANAGING A TRANSFER BUFFER FOR A NON-VOLATILE MEMORY - Embodiments include apparatuses, methods, and systems for managing a transfer buffer associated with a non-volatile memory. In one embodiment, controller logic may be coupled to a non-volatile memory and a transfer buffer. The controller logic may read a plurality of sectors of data from the non-volatile memory and store the read sectors in the transfer buffer. The controller logic may further allocate individual sectors to pages according to a completion time of the read of individual sectors of the plurality of sectors, the individual pages including a plurality of the sectors. The controller logic may further write the pages of sectors to the non-volatile memory responsive to a determination that all sectors of the page have been read. | 2015-07-02 |
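The allocation rule in the entry above can be sketched as follows. The page geometry (`PAGE_SECTORS`) and the function name are assumptions for illustration: sectors are grouped into pages in completion order, and a page is written back only once all of its sector slots are filled.

```python
PAGE_SECTORS = 4  # sectors per page: an assumed, illustrative geometry

def fill_pages(sectors_by_completion):
    """Group sectors into pages in completion order; a page is written
    back only once all of its sector slots have been filled."""
    written, page = [], []
    for sector in sectors_by_completion:  # completion-time order
        page.append(sector)
        if len(page) == PAGE_SECTORS:     # page complete: flush it
            written.append(tuple(page))
            page = []
    return written, page                  # `page` holds the partial page

written, pending = fill_pages([3, 0, 1, 2, 7, 5])
```

Here one full page of four sectors is written, while two sectors remain pending.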
20150186258 | MEMORY SYSTEM INCLUDING PE COUNT CIRCUIT AND METHOD OF OPERATING THE SAME - A memory system includes a memory device. The memory device includes a substrate. A memory array defines a plurality of pages, each page including a data area for storing data and a spare area for storing a program/erase (PE) count value, the PE count value indicating a number of PE cycles performed on the page. A PE count circuit is configured to perform a PE count read operation on a target page. A host determines whether to perform a data write operation on the target page or another PE count read operation on a new target page based on a result of the PE count read operation. PE cycles of a page are controlled by the PE count read operation. The memory array and the PE count circuit are formed in different layers of the substrate. | 2015-07-02 |
20150186259 | METHOD AND APPARATUS FOR STORING DATA IN NON-VOLATILE MEMORY - Apparatus and methods implemented therein are disclosed for storing data in flash memories. The apparatus comprises a flash memory having several physical blocks, a logical to virtual mapping table, a virtual to physical mapping table and a memory controller. The memory controller retrieves a virtual block address from the logical to virtual mapping table. The virtual block address corresponds to an entry in the virtual to physical mapping table. The entry in the virtual to physical mapping table contains a reference to a physical block. The memory controller uses the virtual block address to retrieve the reference to the physical block and stores data in the physical block. The memory controller copies the stored data from the physical block to a second physical block. The memory controller then replaces the reference to the physical block contained in the entry of the virtual to physical mapping table with a reference to the second physical block. | 2015-07-02 |
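The two-level indirection described above can be shown with small dict-based tables. The table names (`l2v`, `v2p`) and sizes are assumptions, not the patent's identifiers; the point is that relocating data rewrites only one virtual-to-physical entry, leaving the logical-to-virtual table untouched.

```python
# Illustrative two-level mapping: a logical block resolves to a virtual
# block address, which indexes the virtual-to-physical table. Relocation
# redirects a single virtual-to-physical entry.
l2v = {0: 10}                # logical block -> virtual block address
v2p = {10: 3}                # virtual block address -> physical block
blocks = {3: None, 7: None}  # physical block contents

def store(logical, data):
    blocks[v2p[l2v[logical]]] = data

def relocate(logical, new_phys):
    old_phys = v2p[l2v[logical]]
    blocks[new_phys] = blocks[old_phys]  # copy data to the second block
    v2p[l2v[logical]] = new_phys         # redirect just one table entry

store(0, "payload")
relocate(0, 7)
```

After `relocate`, logical block 0 still maps through virtual address 10, now pointing at physical block 7.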
20150186260 | TECHNIQUES FOR STORING DATA IN BANDWIDTH OPTIMIZED OR CODING RATE OPTIMIZED CODE WORDS BASED ON DATA ACCESS FREQUENCY - A technique for operating a data storage system that includes a non-volatile memory array controlled by a controller includes storing, in the non-volatile memory array, first data whose frequency of access is above a first access level in a bandwidth optimized code word. Second data whose frequency of access is below a second access level is stored in the non-volatile memory in a code rate optimized code word. | 2015-07-02 |
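The placement rule above reduces to a threshold comparison. The numeric levels and labels below are assumptions for illustration only: hot data goes into bandwidth-optimized code words, cold data into code-rate-optimized ones.

```python
# Sketch of the described placement rule; threshold values are assumed.
HOT_LEVEL = 100    # accesses above this -> bandwidth-optimized code word
COLD_LEVEL = 10    # accesses below this -> code-rate-optimized code word

def choose_codeword(access_frequency):
    if access_frequency > HOT_LEVEL:
        return "bandwidth-optimized"
    if access_frequency < COLD_LEVEL:
        return "code-rate-optimized"
    return "default"
```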
20150186261 | DATA STORAGE DEVICE AND FLASH MEMORY CONTROL METHOD - A data storage device with flash memory and a flash memory control method are disclosed, which upload the physical-to-logical address mapping information of one block to the flash memory section by section. A microcontroller is configured to allocate a flash memory to provide a first run-time write block. Between a first write operation and a second write operation of the first run-time write block, the microcontroller updates a logical-to-physical address mapping table in accordance with just part of a first physical-to-logical address mapping table. The logical-to-physical address mapping table is provided within the flash memory. The first physical-to-logical address mapping table is established in the random access memory to record logical addresses corresponding to physical addresses of one block. | 2015-07-02 |
20150186262 | DATA STORAGE DEVICE AND FLASH MEMORY CONTROL METHOD - A data storage device with flash memory and a flash memory control method are disclosed, in which the flash memory includes multi-level cells (MLCs) and single-level cells (SLCs). A microcontroller is configured to use the random access memory to cache data issued from the host before writing the data into the flash memory. The microcontroller is further configured to allocate the blocks of the flash memory to provide a first run-time write block containing multi-level cells and a second run-time write block containing single-level cells. Under control of the microcontroller, each physical page of data uploaded from the random access memory to the first run-time write block contains sequential data, and random data cached in the random access memory to form one physical page is written into the second run-time write block. | 2015-07-02 |
20150186263 | DATA STORAGE DEVICE AND FLASH MEMORY CONTROL METHOD - A data storage device and a flash memory control method with high erasing efficiency are disclosed. A microcontroller is configured to maintain a plurality of logical-to-physical address mapping tables and a link table on a flash memory to record mapping information between a host and the flash memory. The link table indicates positions of the plurality of logical-to-physical address mapping tables, and each entry in the link table corresponds to one logical-to-physical address mapping table. When erasing user data of logical addresses corresponding to N logical-to-physical address mapping tables, the microcontroller is configured to invalidate N entries corresponding to the N logical-to-physical address mapping tables in the link table, where N is an integer. | 2015-07-02 |
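The erase shortcut in the entry above can be modeled as follows. The `INVALID` marker and entry format are hypothetical: instead of rewriting N logical-to-physical (L2P) tables, the controller invalidates the N link-table entries that point at them.

```python
# Hypothetical sketch: a link table points at N logical-to-physical (L2P)
# mapping tables; erasing the user data behind those tables is done by
# invalidating N link-table entries rather than rewriting the tables.
INVALID = None
link_table = {0: "L2P_0@blk3", 1: "L2P_1@blk5", 2: "L2P_2@blk9"}

def fast_erase(table_ids):
    for tid in table_ids:            # one entry per L2P table
        link_table[tid] = INVALID

fast_erase([0, 2])
```

Only the targeted entries are invalidated; untouched mappings stay live.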
20150186264 | DATA STORAGE DEVICE AND FLASH MEMORY CONTROL METHOD - A data storage device and a flash memory control method with high efficiency are disclosed. The random access memory of the data storage device is allocated to provide a collection and update area for logical-to-physical address mapping tables that correspond to logical addresses recorded into the physical-to-logical address mapping table. When recording a logical address corresponding to a new logical-to-physical address mapping table that has not appeared in the collection and update area into the physical-to-logical address mapping table, the microcontroller of the data storage device is configured to collect the new logical-to-physical address mapping table into the collection and update area and perform an update of the new logical-to-physical address mapping table within the collection and update area. | 2015-07-02 |
20150186265 | Reclaiming Segments in Flash Memory - A storage device made up of multiple storage media is configured such that one such media serves as a cache for data stored on another of such media. The device includes a controller configured to manage the cache by consolidating information concerning obsolete data stored in the cache with information concerning data no longer desired to be stored in the cache, and erase segments of the cache containing one or more of the blocks of obsolete data and the blocks of data that are no longer desired to be stored in the cache to produce reclaimed segments of the cache. | 2015-07-02 |
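The reclaim step above consolidates two kinds of dead data before erasing. The segment layout below is an illustration, not the device's actual structure: obsolete blocks and no-longer-wanted blocks are merged into one set, and every segment containing such a block is erased.

```python
# Sketch of the reclaim step: obsolete blocks and blocks no longer wanted
# in the cache are consolidated, and any segment containing one of them
# is erased to produce a reclaimed segment.
def reclaim(segments, obsolete, unwanted):
    dead = set(obsolete) | set(unwanted)        # consolidated information
    reclaimed = [seg for seg, blks in segments.items() if dead & set(blks)]
    for seg in reclaimed:
        segments[seg] = []                      # erase the whole segment
    return sorted(reclaimed)

segs = {"s0": [1, 2], "s1": [3, 4], "s2": [5, 6]}
freed = reclaim(segs, obsolete={2}, unwanted={5})
```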
20150186266 | Lock-Free, Scalable Read Access To Shared Data Structures Using Garbage Collection - At least one read operation of at least one object of a data container is initiated. The data container includes an anchor object, a first internal data object and a first garbage collection object, the anchor object comprising a pointer to a versioned structure tree. Thereafter, in response to at least one incompatible write operation, a second internal data object and a second garbage collection object are created for the data container. The second garbage collection object has a reference to the second internal data object. Subsequently, the second internal data object is installed in the anchor object and the first garbage collection object is passed to a garbage collection process so that space used by the first garbage collection object in a database can be reused. Related apparatus, systems, techniques and articles are also described. | 2015-07-02 |
20150186267 | METHOD AND APPARATUS FOR DRAM SPATIAL COALESCING WITHIN A SINGLE CHANNEL - Aspects include computing devices, systems, and methods for reorganizing the storage of data in memory to energize less than all of the memory devices of a memory module for read or write transactions. The memory devices may be connected to individual select lines such that a re-order logic may determine the memory devices to energize for a transaction according to a re-ordered memory map. The re-order logic may re-order memory addresses such that memory address provided by a processor for a transaction are converted to the re-ordered memory address according to the re-ordered memory map without the processor having to change its memory address scheme. The re-ordered memory map may provide for reduced energy consumption by the memory devices, or a balance of energy consumption and performance speed for latency tolerant processes. | 2015-07-02 |
20150186268 | EXTENDIBLE INPUT/OUTPUT DATA MECHANISM FOR ACCELERATORS - Embodiments include methods, systems and computer program products for providing an extendable job structure for executing instructions on an accelerator. The method includes creating a number of data descriptor blocks, each having a fixed number of memory location addresses and a pointer to the next data descriptor block. The method further includes creating a last data descriptor block having the fixed number of memory location addresses and a last block indicator. Based on determining that additional memory is required for executing instructions on the accelerator, the method includes modifying the last data descriptor block to become a data extender block having a pointer to one of one or more new data descriptor blocks and creating a new last data descriptor block. | 2015-07-02 |
20150186269 | MANAGING MEMORY - Embodiments of the present disclosure provide a method and apparatus for managing memory, comprising: monitoring the usage status of memory in a first computer device so as to determine available addresses; mapping at least one part of the available addresses to externally accessible shared addresses; and managing the shared addresses on the basis of a memory table so that the at least one part of the available addresses is accessible to a second computer device via the shared addresses, wherein the memory is connected to a dual in-line memory module interface of the first computer device. By means of the method and apparatus described in the present disclosure, memory can be shared between a plurality of computer devices, increasing memory utilization efficiency while reducing cost. | 2015-07-02 |
20150186270 | NON-VOLATILE MEMORY AND METHOD WITH ADAPTIVE LOGICAL GROUPS - A nonvolatile memory is organized into blocks as erase units and physical pages as read/write units. A host addresses data by logical pages, which are storable in corresponding physical pages. Groups of logical pages are further aggregated into logical groups as addressing units. The memory writes host data in either first or second write streams, writing to respective blocks either logical-group by logical-group or logical-page by logical-page in order to reduce the size of logical-to-physical-address maps that are cached in a controller RAM. Only one block at a time needs to be open in the second stream to accept logical pages from multiple logical groups that are active. Garbage collection is performed on the blocks from each write stream independently without data copying between the two streams. | 2015-07-02 |
20150186271 | Memory System Address Modification Policies - A memory system implements a plurality of virtual address modification policies and optionally a plurality of cache eviction policies. Virtual addresses are optionally, selectively, and/or conditionally modified by the memory system in accordance with a plurality of virtual address modification policies. The virtual address modification policies include no modification, modification according to two-dimensional Morton ordering, and modification according to three-dimensional Morton ordering. For example, in response to a reference to a particular virtual address, the particular virtual address is modified according to two-dimensional Morton ordering so that at least two elements in a same column and distinct respective rows of a two-dimensional data structure are loaded into a same cache line and/or are referenced via a same page table entry. | 2015-07-02 |
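A standard two-dimensional Morton (Z-order) bit interleave is one plausible form of the address modification the entry above describes; the bit width below is an assumed parameter. Interleaving makes vertically adjacent elements of a 2-D structure land at nearby addresses, which is why same-column elements can share a cache line or page table entry.

```python
# 2-D Morton (Z-order) interleave: x bits go to even bit positions,
# y bits to odd positions, so nearby (x, y) pairs get nearby codes.
def morton2d(x, y, bits=16):
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)       # x bits -> even positions
        code |= ((y >> i) & 1) << (2 * i + 1)   # y bits -> odd positions
    return code

# Same column, adjacent rows: codes 0 and 2 sit in the same small tile.
print(morton2d(0, 0), morton2d(0, 1), morton2d(3, 3))  # prints 0 2 15
```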
20150186272 | SHARED MEMORY IN A SECURE PROCESSING ENVIRONMENT - Embodiments of an invention for sharing memory in a secure processing environment are disclosed. In one embodiment, a processor includes an instruction unit and an execution unit. The instruction unit is to receive an instruction to match an offer to make a page in an enclave page cache shareable to a bid to make the page shareable. The execution unit is to execute the instruction. Execution of the instruction includes making the page shareable. | 2015-07-02 |
20150186273 | METHOD AND APPARATUS TO FACILITATE SHARED POINTERS IN A HETEROGENEOUS PLATFORM - A method and apparatus to facilitate shared pointers in a heterogeneous platform. In one embodiment of the invention, the heterogeneous or non-homogeneous platform includes, but is not limited to, a central processing core or unit, a graphics processing core or unit, a digital signal processor, an interface module, and any other form of processing cores. The heterogeneous platform has logic to facilitate sharing of pointers to a location of a memory shared by the CPU and the GPU. By sharing pointers in the heterogeneous platform, the data or information sharing between different cores in the heterogeneous platform can be simplified. | 2015-07-02 |
20150186274 | Memory System Cache Eviction Policies - A memory system implements a plurality of cache eviction policies and optionally a plurality of virtual address modification policies. A cache storage unit of the memory system has a plurality of cache storage sub-units. The cache storage unit is optionally managed by a cache management unit in accordance with the cache eviction polices. The cache storage sub-units are allocated for retention of information associated with respective memory addresses and are associated with the cache eviction policies in accordance with the respective memory addresses. For example, in response to a reference to an address that misses in a cache, the address is used to access a page table entry having an indicator specifying an eviction policy to use when selecting a cache line from the cache to evict in association with allocating a cache line of the cache to retain data obtained via the address. | 2015-07-02 |
20150186275 | Inclusive/Non Inclusive Tracking of Local Cache Lines To Avoid Near Memory Reads On Cache Line Memory Writes Into A Two Level System Memory - A processor is described that includes one or more processing cores. The processing core includes a memory controller to interface with a system memory having a near memory and a far memory. The processing core includes a plurality of caching levels above the memory controller. The processor includes logic circuitry to track state information of a cache line that is cached in one of the caching levels. The state information includes a selected one of an inclusive state and a non inclusive state. The inclusive state indicates that a copy or version of the cache line exists in near memory. The non inclusive state indicates that a copy or version of the cache line does not exist in the near memory. The logic circuitry is to cause the memory controller to handle a write request that requests a direct write into the near memory without a read of the near memory beforehand if a system memory write request generated within the processor targets the cache line when the cache line is in the inclusive state. | 2015-07-02 |
20150186276 | REMOVAL AND OPTIMIZATION OF COHERENCE ACKNOWLEDGEMENT RESPONSES IN AN INTERCONNECT - According to one general aspect, a method of performing a cache transaction may include transmitting a cache request to a target device. The method may include receiving a cache response that is associated with the cache request. The method may further include completing the cache transaction without transmitting an exclusive cache response acknowledgement message to the target device. | 2015-07-02 |
20150186277 | CACHE COHERENT NOC WITH FLEXIBLE NUMBER OF CORES, I/O DEVICES, DIRECTORY STRUCTURE AND COHERENCY POINTS - The present application is directed to designing a NoC interconnect architecture by means of a specification, which can indicate implementation parameters of the NoC including, but not limited to, the number of NoC agent interfaces and the number of cache coherency controllers. Flexible identification of NoC agent interfaces and cache coherency controllers allows an arbitrary number of agents to be associated with the NoC upon configuring the NoC from the specification. | 2015-07-02 |
20150186278 | RUNTIME PERSISTENCE - Apparatus, systems, and methods to manage memory operations are described. In one embodiment, a controller is coupled to a processor unit and comprises logic to block additional transactions on the processor unit, initiate a cache flush to flush data from cache memory coupled to the processor unit to a memory controller buffer, block incoming data from the cache memory, and initiate a buffer flush to flush data from the memory controller buffer to a nonvolatile memory. Other examples are also disclosed and claimed. | 2015-07-02 |
20150186279 | SYSTEM AND METHOD TO DEFRAGMENT A MEMORY - A system and method to defragment a memory is disclosed. In a particular embodiment, a method includes loading data stored at a first physical memory address of a memory from the memory into a cache line of a data cache. The first physical memory address is mapped to a first virtual memory address. The method further includes initiating modification, at the data cache, of lookup information associated with the first virtual memory address so that the first virtual memory address corresponds to a second physical memory address of the memory. The method also includes modifying, at the data cache, information associated with the cache line to indicate that the cache line corresponds to the second physical memory address instead of the first physical memory address. | 2015-07-02 |
20150186280 | CACHE REPLACEMENT POLICY METHODS AND SYSTEMS - An embodiment includes a system, comprising: a cache configured to store a plurality of cache lines, each cache line associated with a priority state from among N priority states; and a controller coupled to the cache and configured to: search the cache lines for a cache line with a lowest priority state of the priority states to use as a victim cache line; if the cache line with the lowest priority state is not found, reduce the priority state of at least one of the cache lines; and select a random cache line of the cache lines as the victim cache line if, after performing each of the searching of the cache lines and the reducing of the priority state of at least one cache line K times, the cache line with the lowest priority state is not found. N is an integer greater than or equal to 3; and K is an integer greater than or equal to 1 and less than or equal to N−2. | 2015-07-02 |
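The victim search in the entry above can be sketched structurally. This is an assumed rendering, not the claimed circuit: decrementing every line is one choice within the claim's "at least one", and the list-of-states layout is illustrative. Up to K passes look for a line at the lowest state and otherwise age the lines; a random victim is the fallback.

```python
import random

# Sketch of the described victim search: K passes of search-then-age,
# then a random fallback. Aging all lines is one permitted choice.
def select_victim(priorities, k, rng=random):
    """priorities: mutable list of per-line states, 0 being the lowest."""
    for _ in range(k):
        for idx, state in enumerate(priorities):
            if state == 0:
                return idx                      # lowest-priority victim
        for idx in range(len(priorities)):
            priorities[idx] -= 1                # reduce priority states
    return rng.randrange(len(priorities))       # random fallback victim
```

For example, with states `[2, 1, 1]` and `k=2`, the first pass finds no zero, ages the lines to `[1, 0, 0]`, and the second pass returns line 1.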
20150186281 | SYSTEMS AND METHODS FOR NON-VOLATILE CACHE CONTROL - In some embodiments, a method for controlling a cache having a volatile memory and a non-volatile memory during a power up sequence is provided. The method includes receiving, at a controller configured to control the cache and a storage device associated with the cache, a signal indicating whether the non-volatile memory includes dirty data copied from the volatile memory to the non-volatile memory during a power down sequence, the dirty data including data that has not been stored in the storage device. In response to the received signal, the dirty data is restored from the non-volatile memory to the volatile memory, and flushed from the volatile memory to the storage device. | 2015-07-02 |
20150186282 | REPRESENTING A CACHE LINE BIT PATTERN VIA META SIGNALING - A cache controller with a pattern recognition mechanism can identify patterns in cache lines. Instead of transmitting the entire data of the cache line to a destination device, the cache controller can generate a meta signal to represent the identified bit pattern. The cache controller transmits the meta signal to the destination in place of at least part of the cache line. | 2015-07-02 |
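The meta-signaling idea above can be illustrated with a trivial encoder. The pattern (one repeated byte) and tuple encoding are hypothetical choices, not the patent's signal format: a recognizable line is sent as a short meta signal instead of its full data.

```python
# Hypothetical sketch: if a cache line is a recognizable bit pattern
# (here, a single repeated byte), send a compact meta signal in place
# of the full line; otherwise fall back to the raw data.
def encode_line(line):
    if len(set(line)) == 1:                  # single repeated-byte pattern
        return ("meta", line[0], len(line))  # compact meta signal
    return ("raw", bytes(line))              # full cache-line data

def decode_line(message):
    if message[0] == "meta":
        return bytes([message[1]]) * message[2]
    return message[1]

zero_line = bytes(64)                        # an all-zero cache line
```

An all-zero 64-byte line becomes a three-field meta signal and round-trips losslessly.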
20150186283 | Smart Pre-Fetch for Sequential Access on BTree - Methods and systems configured to facilitate smart pre-fetching for sequentially accessing tree structures such as balanced trees (b-trees) are described herein. According to various described embodiments, a pre-fetch condition can be determined to have been met for a first cache associated with a first level of a tree such as a b-tree. A link to a block of data to be read into the cache can be retrieved by accessing a second level of the tree. The data elements associated with the retrieved link can subsequently be read into the cache. | 2015-07-02 |
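The pre-fetch condition above can be modeled with linked leaves. The `Leaf` structure and the one-element trigger margin are assumptions for illustration: when a sequential scan nears the end of the current leaf, the link to the next block is followed and that block is requested ahead of time.

```python
# Toy sketch: during a sequential scan, once the cursor is within
# `margin` keys of the end of a leaf, the next-leaf link is followed
# and an early read of that block is issued (recorded here in a list).
class Leaf:
    def __init__(self, keys, next_leaf=None):
        self.keys, self.next_leaf = keys, next_leaf

def scan(leaf, margin=1):
    out, prefetched = [], []
    while leaf is not None:
        for i, key in enumerate(leaf.keys):
            out.append(key)
            remaining = len(leaf.keys) - (i + 1)
            if (remaining <= margin and leaf.next_leaf is not None
                    and leaf.next_leaf not in prefetched):
                prefetched.append(leaf.next_leaf)   # issue the early read
        leaf = leaf.next_leaf
    return out, prefetched

tree = Leaf([1, 2, 3], Leaf([4, 5, 6]))
keys, ahead = scan(tree)
```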
20150186284 | CACHE ELEMENT PROCESSING FOR ENERGY USE REDUCTION - A method for accessing a cache memory structure includes dividing multiple cache elements of a cache memory structure into multiple groups. A serial probing process of the multiple groups is performed. Upon a tag hit resulting from the serial probing process, the probing process exits without probing the remaining groups. | 2015-07-02 |
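The energy-saving serial probe above is straightforward to model. The group contents below are illustrative tags: groups are energized one at a time, and a hit ends the process so later groups are never probed.

```python
# Toy model of the serial probe: one group energized per step, early
# exit on a tag hit so the remaining groups cost no energy.
def serial_probe(tag_groups, wanted_tag):
    probes = 0
    for group in tag_groups:
        probes += 1                  # one group energized per step
        if wanted_tag in group:
            return True, probes      # tag hit: exit early
    return False, probes             # miss: every group was probed

hit, probes = serial_probe([[11, 12], [21, 22], [31, 32]], 21)
```

A hit in the second of three groups means the third group is never energized.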
20150186285 | METHOD AND APPARATUS FOR HANDLING PROCESSOR READ-AFTER-WRITE HAZARDS WITH CACHE MISSES - According to one general aspect, an apparatus may include an instruction fetch unit, an execution unit, and a cache resynchronization predictor. The instruction fetch unit may be configured to issue a first memory read operation to a memory address, and a first memory write operation to the memory address, wherein the first memory read operation is stored at an instruction address. The execution unit may be configured to execute the first memory read operation, wherein the execution of the first memory read operation causes a resynchronization exception. The cache resynchronization predictor may be configured to associate the instruction address with a resynchronization exception, and determine if a memory read operation stored at the instruction address comprises a resynchronization predicted store. | 2015-07-02 |
20150186286 | Providing Memory System Programming Interfacing - A memory system implements a plurality of cache eviction policies, a plurality of virtual address modification policies, or both. One or more application programming interfaces provide access to memory allocation and parameters thereof relating to zero or more cache eviction policies and/or zero or more virtual address modification policies associated with memory received via a memory allocation request. The provided application programming interfaces are usable by various software elements, such as any one or more of basic input/output system, driver, operating system, hypervisor, and application software elements. Memory allocated via the application programming interfaces is optionally managed via one or more heaps, such as one heap per unique combination of values for each of any one or more parameters including eviction policy, virtual address modification policy, structure-size, and element-size parameters. | 2015-07-02 |
20150186287 | Using Memory System Programming Interfacing - A memory system implements a plurality of cache eviction policies, a plurality of virtual address modification policies, or both. One or more application programming interfaces are used for memory allocation via parameters thereof relating to zero or more cache eviction policies and/or zero or more virtual address modification policies associated with memory received via a memory allocation request. The application programming interfaces are usable by various software elements, such as any one or more of basic input/output system, driver, operating system, hypervisor, and application software elements. Memory allocated via the application programming interfaces is optionally managed via one or more heaps, such as one heap per unique combination of values for each of any one or more parameters including eviction policy, virtual address modification policy, structure-size, and element-size parameters. | 2015-07-02 |
20150186288 | APPARATUS AND METHOD OF OPERATING CACHE MEMORY - Provided are an apparatus and method of operating a cache memory. The cache memory apparatus includes a cache memory configured to store node data of an acceleration structure as cache data and to store hit frequency data corresponding to the cache data, and a controller configured to determine whether node data corresponding to a request is stored in the cache memory, and to update any one of the cache data based on the hit frequency data. | 2015-07-02 |
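The hit-frequency bookkeeping in 20150186288 can be illustrated with a short simulation. The `FreqCache` name and the concrete eviction rule (replace the least-frequently-hit entry on a miss) are assumptions for illustration; the abstract only says an entry is updated "based on the hit frequency data".

```python
# Sketch of a cache that stores node data plus a per-entry hit count and,
# on a miss with a full cache, replaces the least-frequently-hit entry
# (an assumed LFU-style policy).

class FreqCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}  # node id -> node data
        self.hits = {}     # node id -> hit frequency

    def lookup(self, node_id, load_fn):
        if node_id in self.entries:             # hit: bump the frequency
            self.hits[node_id] += 1
            return self.entries[node_id]
        if len(self.entries) >= self.capacity:  # miss: evict the coldest entry
            victim = min(self.hits, key=self.hits.get)
            del self.entries[victim]
            del self.hits[victim]
        data = load_fn(node_id)                 # fetch node data from memory
        self.entries[node_id] = data
        self.hits[node_id] = 1
        return data
```

With capacity 2, looking up `a` twice then `b` once makes `b` the coldest entry, so a lookup of `c` evicts `b` rather than `a`.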
20150186289 | CACHE ARCHITECTURE - A cache controller for a processing system, the cache controller being capable of providing an interface between a data requester and a plurality of memories including a first memory, a second memory and a cache memory, the cache controller being configured to, in response to receiving a request for data at a specified address in a specified memory, perform the steps of: determining whether either (a) a data field in the cache memory that corresponds to the specified address has been populated from the specified memory or (b) the specified memory is the first memory and the data field corresponding to the specified address in the cache memory has been populated from the second memory; and if that determination is positive, responding to the request by providing the content of the data field in the cache memory corresponding to the specified address. | 2015-07-02 |
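The two-part hit determination in 20150186289 reduces to a small predicate: serve from cache if the field was populated from the requested memory, or if the request targets the first memory but the field was populated from the second. The `Field` record and the memory names `"first"`/`"second"` are illustrative assumptions.

```python
# Sketch of the cache controller's hit determination: case (a) the data
# field was populated from the specified memory, or case (b) the request
# targets the first memory and the field was populated from the second.

from dataclasses import dataclass

@dataclass
class Field:
    data: bytes
    populated_from: str  # "first" or "second"

def can_serve_from_cache(field, requested_memory):
    if field is None:
        return False
    if field.populated_from == requested_memory:   # case (a)
        return True
    # case (b): request for the first memory, field filled from the second
    return requested_memory == "first" and field.populated_from == "second"
```

Note the asymmetry: a field populated from the first memory does not satisfy a request aimed at the second memory.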
20150186290 | SYSTEM, APPARATUS, AND METHOD FOR TRANSPARENT PAGE LEVEL INSTRUCTION TRANSLATION - Detailed herein are systems, apparatuses, and methods for transparent page level instruction translation. Exemplary embodiments include an instruction translation lookaside buffer (iTLB), wherein each iTLB entry includes a linear address of a page in memory, a physical address of the page in memory, and a remapping indicator. | 2015-07-02 |
20150186291 | SYSTEMS AND METHODS FOR MEMORY MANAGEMENT IN A DYNAMIC TRANSLATION COMPUTER SYSTEM - Systems and methods for managing memory in a dynamic translation computer system are provided. Embodiments may include receiving an instruction packet and processing the instruction packet. The instruction packet may include one or more instructions for obtaining a block of virtual memory for use in an emulated operating environment from a slab of virtual memory in a host environment, maintaining a mapping between the block of virtual memory and physical memory when the block is returned to the host environment, and for filling the block of virtual memory with zeros and a pattern based, at least in part, on a detected fill type. | 2015-07-02 |
20150186292 | EFFICIENT FILL-BUFFER DATA FORWARDING SUPPORTING HIGH FREQUENCIES - A Fill Buffer (FB) based data forwarding scheme that stores a combination of Virtual Address (VA), TLB (Translation Look-aside Buffer) entry# or an indication of a location of a Page Table Entry (PTE) in the TLB, and a TLB page size information in the FB and uses these values to expedite FB forwarding. Load (Ld) operations send their non-translated VA for an early comparison against the VA entries in the FB, and are then further qualified with the TLB entry# to determine a “hit.” This hit determination is fast and enables FB forwarding at higher frequencies without waiting for a comparison of Physical Addresses (PA) to conclude in the FB. A safety mechanism may detect a false hit in the FB and generate a late load cancel indication to cancel the earlier-started FB forwarding by ignoring the data obtained as a result of the Ld execution. The Ld is then re-executed later and tries to complete successfully with the correct data. | 2015-07-02 |
20150186293 | HIGH-PERFORMANCE CACHE SYSTEM AND METHOD - A method for facilitating operation of a processor core is provided. The method includes: examining instructions being filled from a second instruction memory to a third instruction memory, extracting instruction information containing at least branch information, and generating a stride length of a base register corresponding to every data access instruction; creating a plurality of tracks based on the extracted instruction information; filling at least one or more instructions that are likely to be executed by the processor core, based on one or more tracks from the plurality of tracks, from a first instruction memory to the second instruction memory; filling at least one or more instructions, based on one or more tracks from the plurality of tracks, from the second instruction memory to the third instruction memory; and calculating a possible data access address of the data access instruction to be executed next time based on the stride length of the base register. | 2015-07-02 |
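The stride-based address calculation in the final step above can be sketched directly: record the stride of each data access instruction's base register and predict the next access address as the last address plus that stride. The `StridePredictor` structure is an assumption; the abstract does not name its bookkeeping.

```python
# Sketch of stride-based data access address prediction: per instruction,
# track the last observed address and the stride between consecutive
# executions, then predict next address = last address + stride.

class StridePredictor:
    def __init__(self):
        self.last_addr = {}   # instruction pc -> last observed access address
        self.stride = {}      # instruction pc -> observed stride

    def observe(self, pc, addr):
        if pc in self.last_addr:
            self.stride[pc] = addr - self.last_addr[pc]
        self.last_addr[pc] = addr

    def predict_next(self, pc):
        # possible data access address when pc executes next time
        if pc in self.stride:
            return self.last_addr[pc] + self.stride[pc]
        return None   # not enough history yet
```

A load seen at addresses 100 then 108 yields a stride of 8 and a predicted next address of 116, which a prefetcher could fill ahead of execution.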
20150186294 | SYSTEMS AND METHODS FOR MANAGING READ-ONLY MEMORY - Embodiments for managing read-only memory. A system includes a memory device including a real memory and a tracking mechanism configured to track relationships between multiple virtual memory addresses and real memory. The system further includes a processor configured to perform the below method and/or execute the below computer program product. One method includes mapping a first virtual memory address to a real memory in a memory device and mapping a second virtual memory address to the real memory. Here, the first virtual memory address is authorized to modify data in the real memory and the second virtual memory address is not authorized to modify the data in the real memory. | 2015-07-02 |
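The dual-mapping idea in 20150186294 can be modeled with a small mapping table: two virtual addresses point at the same real memory, but only one carries modify authorization. The `MappingTable` class and its method names are assumptions for illustration.

```python
# Sketch of tracking relationships between virtual addresses and real
# memory, where one mapping may modify the data and another may not.

class MappingTable:
    def __init__(self):
        self.real = {}   # real address -> data
        self.map = {}    # virtual address -> (real address, writable flag)

    def map_page(self, vaddr, raddr, writable):
        self.map[vaddr] = (raddr, writable)

    def read(self, vaddr):
        raddr, _ = self.map[vaddr]
        return self.real.get(raddr)

    def write(self, vaddr, value):
        raddr, writable = self.map[vaddr]
        if not writable:
            raise PermissionError("virtual address is not authorized to modify")
        self.real[raddr] = value
```

A write through the authorized mapping is immediately visible through the read-only mapping, since both resolve to the same real memory.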
20150186295 | Bridging Circuitry Between A Memory Controller And Request Agents In A System Having Multiple System Memory Protection Schemes - A processor is described that includes one or more processing cores. The processor includes a memory controller to interface with a system memory having a protected region and a non protected region. The processor includes a protection engine to protect against active and passive attacks. The processor includes an encryption/decryption engine to protect against passive attacks. The protection engine includes bridge circuitry coupled between the memory controller and the one or more processing cores. The bridge circuitry is also coupled to the protection engine and the encryption/decryption engine. The bridge circuitry is to route first requests directed to the protected region to the protection engine and to route second requests directed to the non protected region to the encryption/decryption engine. | 2015-07-02 |
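The routing decision made by the bridge circuitry in 20150186295 is a simple address-range dispatch. The region bounds and engine names below are illustrative assumptions, not values from the patent.

```python
# Sketch of the bridge's routing rule: requests to the protected region
# go to the protection engine (active + passive attack protection);
# all other requests go to the encryption/decryption engine (passive only).

PROTECTED_START = 0x8000_0000   # assumed example region bounds
PROTECTED_END = 0x9000_0000

def route_request(address):
    if PROTECTED_START <= address < PROTECTED_END:
        return "protection_engine"
    return "encryption_engine"
```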
20150186296 | Systems And Methods For Security In Computer Systems - Systems and methods are provided for the prevention and mitigation of security attacks in computer systems. Virtualization technology is provided and leveraged to prevent and mitigate exploits in the computer systems. For example, malicious code may be prevented from system execution by inhibiting the delivery of such code in a payload to system memory. In other examples, virtualization technology is leveraged to mask the computer system machine architecture. By masking or otherwise hiding the machine architecture, the delivery of payloads into memory by malicious users can be prevented. In this manner, even if exploits are identified and accessed by malicious users of code, the denial of payload delivery prevents the execution of malicious actions within the computer system. | 2015-07-02 |
20150186297 | BUILDING AN UNDO LOG FOR IN-MEMORY BLOCKS OF DATA - Provided are techniques for building an undo log for in-memory blocks of data. Permission on a block of data in memory is set to prevent updates to that block of data using a memory protection function. In response to an update operation attempting to update the block of data in the memory, an interrupt with a location of the block of data is received, the block of data is copied to an undo log entry in an undo log, and the permission on the block of data in the memory is set to allow the update to that block of data to proceed using the memory protection function. | 2015-07-02 |
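The protocol in 20150186297 can be modeled in pure Python: blocks start write-protected, and the first write to a block triggers a simulated protection fault that copies the old contents into the undo log before unprotecting the block. A real implementation would use something like `mprotect()` and a fault handler; the class below only simulates that control flow.

```python
# Sketch of building an undo log via write protection: log a block's old
# value on the first (faulting) write, then let updates proceed.

class UndoLoggedMemory:
    def __init__(self, blocks):
        self.blocks = dict(blocks)         # block id -> data
        self.protected = set(self.blocks)  # all blocks start write-protected
        self.undo_log = []                 # (block id, old data) entries

    def write(self, block_id, data):
        if block_id in self.protected:
            # simulated protection fault: copy old data to the undo log,
            # then clear the protection so the update can proceed
            self.undo_log.append((block_id, self.blocks[block_id]))
            self.protected.discard(block_id)
        self.blocks[block_id] = data

    def rollback(self):
        for block_id, old in reversed(self.undo_log):
            self.blocks[block_id] = old
            self.protected.add(block_id)
        self.undo_log.clear()
```

Because only the first write to a block faults, repeated writes to the same block cost one log entry, which is the point of the protection-based scheme.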
20150186298 | Location Sensitive Solid State Drive - A data storage system including an SSD includes a capability to detect whether its location is acceptable for operation, and a capability to self-disable in the event the location of the device is unacceptable, or to self-enable only while the location of the device is acceptable. | 2015-07-02 |
20150186299 | LOAD INSTRUCTION FOR CODE CONVERSION - Embodiments of an invention for a load instruction for code conversion are disclosed. In one embodiment, a processor includes an instruction unit and an execution unit. The instruction unit is to receive an instruction having a source operand to indicate a source location and a destination operand to indicate a destination location. The execution unit is to execute the instruction. Execution of the instruction includes checking the access permissions of the source location and loading content from the source location into the destination location if the access permissions of the source location indicate that the content is executable. | 2015-07-02 |
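The permission check described in 20150186299 can be sketched as check-then-load: the load completes only if the source location's permissions mark its content as executable. The page-permission table and 4 KiB page granularity are assumptions for illustration.

```python
# Sketch of a load that succeeds only when the source page is executable.

PAGE_PERMS = {0x1000: "r-x", 0x2000: "rw-"}   # assumed example page permissions

def load_if_executable(memory, src, dst):
    perms = PAGE_PERMS.get(src & ~0xFFF, "---")  # permissions of the source page
    if "x" not in perms:
        raise PermissionError("source location content is not executable")
    memory[dst] = memory[src]                    # perform the load/store
```

This mirrors the abstract's intent for code conversion: only content the system already trusts as executable may be read into the destination.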
20150186300 | Concurrent Execution of Critical Sections by Eliding Ownership of Locks - Critical sections of multi-threaded programs, normally protected by locks providing access by only one thread, are speculatively executed concurrently by multiple threads with elision of the lock acquisition and release. Upon a completion of the speculative execution without actual conflict as may be identified using standard cache protocols, the speculative execution is committed, otherwise the speculative execution is squashed. Speculative execution with elision of the lock acquisition, allows a greater degree of parallel execution in multi-threaded programs with aggressive lock usage. | 2015-07-02 |
20150186301 | BUILDING AN UNDO LOG FOR IN-MEMORY BLOCKS OF DATA - Provided are techniques for building an undo log for in-memory blocks of data. Permission on a block of data in memory is set to prevent updates to that block of data using a memory protection function. In response to an update operation attempting to update the block of data in the memory, an interrupt with a location of the block of data is received, the block of data is copied to an undo log entry in an undo log, and the permission on the block of data in the memory is set to allow the update to that block of data to proceed using the memory protection function. | 2015-07-02 |
20150186302 | INFORMATION PROCESSING APPARATUS AND CONTROL METHOD OF INFORMATION PROCESSING APPARATUS - An information processing apparatus includes a control unit configured to activate the information processing apparatus in a first activation mode or a second activation mode, a receiving unit configured to receive from a user an operation for activating the information processing apparatus in the first activation mode, a notification unit configured to notify the control unit of information corresponding to the user operation received by the receiving unit, and a connection unit configured to connect the control unit and the receiving unit without connecting the notification unit and to notify the control unit that a user has operated the receiving unit, wherein, when it is not notified via the connection unit that a user has operated the receiving unit, the control unit activates the information processing apparatus in the second activation mode without waiting for activation of the notification unit. | 2015-07-02 |
20150186303 | Display System And Operation Optimization Method - The present disclosure provides a display system. The display system comprises a signal source unit, a transmission unit, and a display unit. The signal source unit complies with a first standard and provides an image signal. The transmission unit is coupled to the signal source unit and transmits the image signal, wherein the transmission unit has a plurality of pins. The display unit complies with a second standard and is coupled to the transmission unit. The display unit comprises a detection unit and a determination unit. The detection unit detects the plurality of pins. The determination unit is coupled to the detection unit and determines the voltage levels of the pins, determines the first standard according to the voltage levels of the pins, and configures the display device into a corresponding mode according to the first standard. | 2015-07-02 |
20150186304 | PORTABLE, COMPUTER-PERIPHERAL APPARATUS INCLUDING A UNIVERSAL SERIAL BUS (USB) CONNECTOR - A portable computer-peripheral apparatus comprises a Universal Serial Bus (USB) connector. The apparatus is operable to communicate with a computer terminal (e.g. a ‘PC’). Following connection to the PC, the apparatus initialises (i.e. presents or enumerates itself) as a HID keyboard and then sends to the terminal a first predefined sequence of keycodes automatically without manual interaction, the keycodes complying with the human interface device (HID) keyboard standard protocol. Each keycode represents and simulates a keystroke, such as the one performed when a user strikes a key on the PC keyboard. The keycode sequence automates direct access to content, and/or the initiation of a task or other process. | 2015-07-02 |
20150186305 | I/O CO-PROCESSOR COUPLED HYBRID COMPUTING DEVICE - An apparatus and method provide power to perform functions on a computing device. In one example, the apparatus contains multiple processors that may operate at different power levels to consume different amounts of power. Also, any of the multiple processors may perform different functions. For example, one processor may be a low power processor that may control or operate at least one peripheral device to perform a low capacity function. Control may also switch from the low power processor to a high capacity processor. In one example, the high capacity processor controls the low power processor and further controls the at least one peripheral device through the low power processor. | 2015-07-02 |
20150186306 | METHOD AND AN APPARATUS FOR CONVERTING INTERRUPTS INTO SCHEDULED EVENTS - A method for converting interrupts into scheduled events, and an apparatus embodying the method, are disclosed. The method comprises: receiving an interrupt at an interrupt controller; determining a vector number for the interrupt; determining properties of an interrupt work in accordance with the vector number; and scheduling the interrupt work in accordance with the properties of the interrupt work. | 2015-07-02 |
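The conversion in 20150186306 can be sketched as a table lookup plus a priority queue: the vector number selects the interrupt work's properties, and the work is scheduled rather than handled immediately. The table contents (vector numbers, priorities, work names) are illustrative assumptions.

```python
# Sketch of converting interrupts into scheduled events: map a vector
# number to interrupt-work properties and push the work onto a scheduler
# queue ordered by priority (lower value = runs first).

import heapq

VECTOR_PROPERTIES = {            # vector number -> (priority, work name)
    32: (0, "timer_work"),       # assumed example entries
    33: (2, "keyboard_work"),
    44: (1, "nic_rx_work"),
}

def schedule_interrupt(queue, vector):
    priority, work = VECTOR_PROPERTIES[vector]
    heapq.heappush(queue, (priority, work))

def run_next(queue):
    _, work = heapq.heappop(queue)
    return work
```

Deferring the work this way lets the scheduler interleave interrupt handling with ordinary tasks according to the work's properties instead of running handlers in arrival order.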
20150186307 | ADAPTIVE INTERRUPT MODERATION - Generally, this disclosure relates to adaptive interrupt moderation. A method may include determining, by a host device, a number of connections between the host device and one or more link partners based, at least in part, on a connection identifier associated with each connection; determining, by the host device, a new interrupt rate based at least in part on a number of connections; updating, by the host device, an interrupt moderation timer with a value related to the new interrupt rate; and configuring the interrupt moderation timer to allow interrupts to occur at the new interrupt rate. | 2015-07-02 |
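The adaptation loop in 20150186307 reduces to: count distinct connection identifiers, derive a new interrupt rate from that count, and reprogram the moderation timer with the matching interval. The concrete rate formula (a base rate scaled by connection count, capped at a maximum) is an assumption; the abstract only says the rate depends on the number of connections.

```python
# Sketch of adaptive interrupt moderation: connections are counted by
# distinct connection identifier, and the moderation timer interval is
# derived from the resulting interrupt rate.

BASE_RATE_HZ = 4_000    # assumed base interrupt rate
MAX_RATE_HZ = 20_000    # assumed cap on the interrupt rate

def interrupt_rate(connection_ids):
    connections = len(set(connection_ids))   # distinct connection identifiers
    return min(BASE_RATE_HZ * max(connections, 1), MAX_RATE_HZ)

def moderation_timer_us(rate_hz):
    # interval (microseconds) the interrupt moderation timer is updated with
    return 1_000_000 // rate_hz
```

With more active connections the host tolerates a higher interrupt rate (a shorter timer interval), trading CPU overhead for latency across the busier link.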