25th week of 2019 patent application highlights part 51 |
Patent application number | Title | Published |
20190188047 | On-Demand Provisioning of Customized Developer Environments - Systems and techniques are provided for managing and creating customized testing and development environments by a custom environment manager for computer or data management systems. In a specific implementation, the custom environment manager includes request receivers that receive orders associated with a prioritization from custom environment requestors and store the received orders in a request queue that holds multiple orders having different prioritizations and made by different requestors. The custom environment manager also includes pooled resource managers that track available virtual and physical computing resources needed to build custom environments. The custom environment manager also includes configurators that create custom environments based upon prioritization of the orders and available resources and return the custom environment to the requestor of the order. | 2019-06-20 |
20190188048 | METHODS AND APPARATUS FOR LIMITING DATA TRANSFERRED OVER THE NETWORK BY INTERPRETING PART OF THE DATA AS A METAPROPERTY - Methods and apparatus to customize deployment using metaproperties are disclosed. An example deployment metaproperty manager can generate a first metaproperty payload including an initial application component metaproperty of an application component that provides a logical template of an application. A deployment event broker can reply-back to the deployment metaproperty manager with a second metaproperty payload that includes a processed application component metaproperty. | 2019-06-20 |
20190188049 | APPARATUS AND METHOD TO SELECT SERVICES FOR EXECUTING A USER PROGRAM BASED ON A CODE PATTERN INCLUDED THEREIN - An apparatus selects services to be used by a user from a plurality of candidates for services used to execute a program. The apparatus acquires a constraint condition and the program. When executing the acquired program, the apparatus specifies a set of services satisfying the constraint condition from the plurality of candidates based on a code pattern included in the program, and uses the specified set of services to execute the program. | 2019-06-20 |
20190188050 | RESOURCE BASED VIRTUAL COMPUTING INSTANCE SCHEDULING - Examples provide two-tiered scheduling within a cluster. A coarse-grained analysis is performed on a candidate set of hosts to select a host for a virtual computing instance based on optimization of at least one resource. A host is selected based on the analysis results. The identified virtual computing instance is placed on the selected host. A fine-grained analysis is performed on a set of communication graphs for a plurality of virtual computing instances to generate a set of penalty scores. A set of communicating virtual computing instances are selected based on the set of penalty scores. A first virtual computing instance from a first host is relocated to a second host to minimize a distance between the first virtual computing instance and a second virtual computing instance. Relocating the first virtual computing instance reduces at least one penalty score for the set of communicating virtual computing instances. | 2019-06-20 |
20190188051 | CONSTRAINED PLACEMENT IN HIERARCHICAL RANDOMIZED SCHEDULERS - A distributed scheduler for a virtualized computer system has a hierarchical structure and includes a root scheduler as the root node, one or more branch schedulers as intermediate nodes, and a plurality of hosts as leaf nodes. A request to place a virtual computing instance is propagated down the hierarchical structure to the hosts that satisfy placement constraints of the request. Each host that receives the request responds with a score indicating resource availability on that host, and the scores are propagated back up the hierarchical structure. Branch schedulers that receive such scores compare the received scores and further propagate a “winning” score, such as the highest or lowest score, up the hierarchical structure, until the root scheduler is reached. The root scheduler makes a similar comparison of received scores to select the best candidate among the hosts to place the virtual computing instance. | 2019-06-20 |
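The "winning score" propagation in 20190188051 can be illustrated with a small sketch. The scoring metric (fraction of free capacity), the "higher score wins" rule, and all names below are assumptions for illustration, not details from the application:

```python
# Sketch of hierarchical placement scoring: each host reports a resource
# availability score, branch schedulers forward the best score upward,
# and the root picks the winning host for the virtual computing instance.

def host_score(host):
    # Illustrative metric: fraction of free capacity on the host.
    return host["free"] / host["total"]

def best_placement(tree):
    """Recursively propagate the highest (score, host) pair up the tree."""
    if "hosts" in tree:  # a branch scheduler directly over leaf hosts
        return max((host_score(h), h["name"]) for h in tree["hosts"])
    # root or intermediate scheduler: compare the winners of each child
    return max(best_placement(child) for child in tree["children"])

cluster = {
    "children": [
        {"hosts": [{"name": "h1", "free": 2, "total": 8},
                   {"name": "h2", "free": 7, "total": 8}]},
        {"hosts": [{"name": "h3", "free": 3, "total": 4}]},
    ],
}
score, host = best_placement(cluster)  # h2 wins with score 0.875
```

Because only one (score, host) pair crosses each edge, the root never sees per-host detail from distant subtrees, which is the point of the hierarchy.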
20190188052 | TASK QUEUING AND DISPATCHING MECHANISMS IN A COMPUTATIONAL DEVICE - A plurality of ordered lists of dispatch queues corresponding to a plurality of processing entities are maintained, wherein each dispatch queue includes one or more task control blocks or is empty. A determination is made as to whether a primary dispatch queue of a processing entity is empty in an ordered list of dispatch queues for the processing entity. In response to determining that the primary dispatch queue of the processing entity is empty, a task control block is selected for processing by the processing entity from another dispatch queue of the ordered list of dispatch queues for the processing entity, wherein the another dispatch queue from which the task control block is selected meets a threshold criteria for the processing entity. | 2019-06-20 |
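The primary-queue-first dispatch with a threshold-gated fallback described in 20190188052 can be sketched as follows. The specific threshold criterion (minimum queue length before borrowing from a non-primary queue) is an assumption for this sketch:

```python
from collections import deque

def select_task(dispatch_queues, threshold):
    """Pick a task control block (TCB) for a processing entity.

    dispatch_queues: the entity's ordered list of deques; index 0 is the
    primary dispatch queue. threshold: minimum length a non-primary queue
    must have before the entity may take work from it (illustrative
    stand-in for the patent's 'threshold criteria')."""
    primary = dispatch_queues[0]
    if primary:
        return primary.popleft()
    # Primary is empty: walk the remaining queues in order and take a
    # TCB from the first one that meets the threshold criterion.
    for q in dispatch_queues[1:]:
        if len(q) >= threshold:
            return q.popleft()
    return None  # nothing eligible to dispatch right now

queues = [deque(), deque(["tcb-a"]), deque(["tcb-b", "tcb-c"])]
task = select_task(queues, threshold=2)  # skips the 1-element queue
```

The threshold keeps an idle entity from draining a nearly empty queue that another entity depends on.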
20190188053 | INCREASING OPERATING FREQUENCY OF PROCESSOR CORES - Embodiments of the invention are directed to methods for improving performance of a multi-core processor. The method includes increasing a first operating frequency to a first elevated operating frequency of a first core of a gang of cores, the gang of cores comprising a plurality of cores of the multi-core processor. The method further includes upon a determination that an operating temperature of the first core is above a threshold temperature, switching processing of a thread from the first core to a second core in the gang of cores. The method further includes reducing the first operating frequency of the first core. The method further includes increasing the operating frequency of the second core to a second elevated operating frequency. | 2019-06-20 |
20190188054 | TRANSACTIONAL LOCK ELISION WITH DELAYED LOCK CHECKING - A computer-implemented method includes the following operations. A transactional lock elision transaction including a critical section is executed. The critical section is processed. After the processing of the critical section and prior to a commit point in the transactional lock elision transaction, a status of a lock is checked. Responsive to a determination that a status of the lock is free, a result of the transactional lock elision transaction is committed. | 2019-06-20 |
20190188055 | SUPPRESSION OF SPECULATIVE ACCESSES TO SHARED MEMORY LOCATIONS AT A PROCESSOR - A method of monitoring, by one or more cores of a multi-core processor, speculative instructions, where the speculative instructions store data to a shared memory location, and where a semaphore, associated with the memory location, specifies the availability of the memory location to store data. One or more speculative instructions are flushed when the semaphore specifies the memory location is unavailable. Any further speculative instructions are suppressed from being issued based on a count of flushed speculative instructions above a specified threshold, executing the speculative instructions when the semaphore specifies the memory location is available, and storing the data to the memory location. | 2019-06-20 |
20190188056 | BOOTSTRAPPING A CONVERSATION SERVICE USING DOCUMENTATION OF A REST API - Systems, methods, and computer-readable media for constructing a conversation model using documentation of an application programming interface (API) are disclosed. The conversation model can be used to train a natural language classifier. API endpoints may be represented in the API documentation as (verb, resource, element) tuples. These tuples can be converted into intent and parameters of the API endpoints can be converted into entities. In addition, example utterances may be created for each intent. The conversation model can be generated using the intents, example utterances, and/or entities. | 2019-06-20 |
20190188057 | System and Method to Measure the Response Time of Event Chains - A method to measure response time of a chain of events that is a set of individual events includes identifying a chain of events comprising a set of events for which the response time is to be measured; identifying a sequence for processing of said events; tracking the processing of each of the identified events in said sequence; waiting until all the identified events are executed in said sequence; and calculating response time of said chain of events by marking beginning of execution of first event and the end of execution of last event, after making sure that each of said events is considered executed only when the preceding event in said sequence is executed. | 2019-06-20 |
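The measurement in 20190188057 (mark the start of the first event and the end of the last, counting an event as executed only after its predecessor) can be sketched minimally. The timestamp representation and ordering check below are illustrative assumptions:

```python
def chain_response_time(events):
    """events: list of (name, start, end) tuples in the required
    processing sequence. Returns the end-to-end response time after
    verifying each event completes no earlier than its predecessor,
    i.e. an event counts as executed only once the prior one has."""
    prev_end = None
    for name, start, end in events:
        if end < start:
            raise ValueError(f"{name}: end precedes start")
        if prev_end is not None and end < prev_end:
            raise ValueError(f"{name}: completed out of sequence")
        prev_end = end
    # Beginning of first event's execution to end of last event's.
    return events[-1][2] - events[0][1]

rt = chain_response_time([
    ("parse", 0.0, 1.5),
    ("route", 1.0, 2.0),   # may start before 'parse' ends...
    ("commit", 2.0, 3.25), # ...but must not finish before it
])
```

Note that overlapping execution is allowed; only out-of-sequence completion invalidates the measurement.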
20190188058 | USING RESPONSE TIME OBJECTIVES IN A STORAGE SYSTEM - Techniques are described for determining data movements. A first plurality of performance goals for a plurality of storage pools are received. Each of the first plurality of performance goals specifies a performance goal for one of the plurality of storage pools. A second plurality of performance goals for a plurality of applications are received. Each of the second plurality of performance goals specifies a performance goal with respect to I/O operations directed to one or more logical devices used by one of the plurality of applications. A set of proposed data movements between a first of the plurality of storage pools and a second of the plurality of storage pools is determined in accordance with criteria including any of the first plurality of performance goals and the second plurality of performance goals. | 2019-06-20 |
20190188059 | Task-Related Sorting, Application Discovery, and Unified Bookmarking for Application Managers - This document describes techniques and devices for task-related sorting, application discovery, and unified bookmarking for application managers. Through use of an application manager, multiple applications (including standalone applications, instant applications, websites, and other content) that a person can use to accomplish a single task, or multiple related tasks, are sorted into discrete groups for display in the application manager. The application manager can automatically recognize relationships between activities performed with the applications and recognize user actions with the applications that are related to the activities. Based on the relationships and user actions, the application manager can automatically determine that the activities and actions represent a task and display a task group that includes the applications that represent the task. The task groups may be visually displayed as a stack, strip, or pile of windows or thumbnails representing each application or other content the person could use for the task. | 2019-06-20 |
20190188060 | ENABLING A WEB APPLICATION TO CALL AT LEAST ONE NATIVE FUNCTION OF A MOBILE DEVICE - Enabling a web application to call at least one native function of a mobile device includes accessing the web application by a browser of the mobile device. The web application includes at least one GUI element. The mobile device operates a listener module that is a TCP/IP socket listener listening for an address including a localhost IP address and a port number. Upon selection of the GUI element, a cross domain HTTP request is submitted by the browser to the listener localhost IP address. The listener module receives the request and calls the at least one native function in dependence on the received request. | 2019-06-20 |
20190188061 | Mediating Interactions Among System Agents and System Clients - A system for mediating interactions among system agents and system clients includes a computing platform having a hardware processor and a system memory storing an interaction cueing software code including decision trees corresponding to storylines. The hardware processor executes the interaction cueing software code to receive interaction data corresponding to an interaction of a system client with a first system agent, identify a storyline for use in guiding subsequent interactions with the system client based on the interaction data, and store the interaction data and data identifying the storyline in a client profile assigned to the system client. The interaction cueing software code further determines an interaction cue or cues for coaching the same or another system agent in a second interaction with the system client based on the interaction data and a decision tree corresponding to the storyline, and transmits the interaction cue(s) to the system agent. | 2019-06-20 |
20190188062 | API NOTEBOOK TOOL - Techniques for an application programming interface (API) notebook tool are disclosed. In some implementations, an API notebook is a tool, framework, and ecosystem that enables easy exploration of services that expose APIs, creation and documentation of examples, use cases and workflows, and publishing and collaboration of APIs. In some embodiments, systems, processes, and computer program products for an API notebook tool include receiving a request for a client for calling an API for a service, and dynamically generating the client for the API for the service. | 2019-06-20 |
20190188063 | MAPPING COMPUTER PROGRAMS TO NETWORK PROTOCOL METHODS - A request to deploy a computer program is received. The computer program to be deployed on a computer system connected to a distributed peer-to-peer network. Computer systems in the network are referred to as nodes of a blockchain. The nodes are associated with corresponding databases storing copies of a distributed database record associated with the blockchain. Transactions executed over copies of the distributed database record are automatically replicated across the blockchain nodes. The deploy request includes metadata associated with the computer program. The metadata includes a property that associates a function of the computer program with a network protocol method. A mapping between the function of the program and the method is generated. When a request including the method is received, an invoke request including the function of the program and an endpoint, where the computer program is accessible, is automatically generated. The invoke request is sent to the endpoint. | 2019-06-20 |
20190188064 | ERROR INJECTION FOR ASSESSMENT OF ERROR DETECTION AND CORRECTION TECHNIQUES USING ERROR INJECTION LOGIC AND NON-VOLATILE MEMORY - A memory system includes a non-volatile memory unit, a content-addressable memory unit coupled to the non-volatile memory unit, and an error injection logic unit coupled to the non-volatile memory unit and the content addressable memory unit. The non-volatile memory unit is programmed to allow a first error injection onto a first data word using the error injection logic unit. The error injection logic in combination with the content addressable memory unit replaces a bit cell in the memory system. The memory system performs an evaluation of various error detection and correction techniques. | 2019-06-20 |
20190188065 | COMPUTERIZED HIGH-SPEED ANOMALY DETECTION - Embodiments of the invention include a computer-implemented method for detecting anomalies in non-stationary data in a network of computing entities. The method collects non-stationary data in the network and classifies the non-stationary data according to a non-Markovian, stateful classification, based on an inference model. Anomalies can then be detected, based on the classified data. The non-Markovian, stateful process allows anomaly detection even when no a priori knowledge of anomaly signatures or malicious entities exists. Anomalies can be detected in real time (e.g., at speeds of 10-100 Gbps) and the network data variability can be addressed by implementing a detection pipeline to adapt to changes in traffic behavior through online learning and retain memory of past behaviors. A two-stage scheme can be relied upon, which involves a supervised model coupled with an unsupervised model. | 2019-06-20 |
20190188066 | METHODS AND APPARATUS TO PROVIDE AN EFFICIENT SAFETY MECHANISM FOR SIGNAL PROCESSING HARDWARE - Methods, apparatus, and articles of manufacture providing an efficient safety mechanism for signal processing hardware are disclosed. An example apparatus includes an input interface to receive an input signal; a hardware accelerator to process the input signal, the hardware accelerator including: unprotected memory to store non-critical data corresponding to the input signal; and protected memory to store critical data corresponding to the input signal; and an output interface to transmit the processed input signal. | 2019-06-20 |
20190188067 | Conversational Problem Determination based on Bipartite Graph - A cognitive conversation system that generates effective diagnostic questions is provided. The cognitive conversation system receives a set of currently known symptoms (or currently available answers to diagnostic questions) of a reported problem or fault. The system identifies (i) a set of possible root causes of the reported problem based on the currently known symptoms and (ii) probabilities for the set of possible root causes by using a bipartite graph data structure that links possible symptoms with possible root causes. Upon determining that at least one possible root cause has a probability that is higher than a threshold, the system presents an explanation or solution associated with the at least one possible root cause. Upon determining that none of the possible root causes in the set of possible root causes has a probability higher than the threshold, the system presents a question based on information entropy that is computed based on probabilities of the identified possible root causes. | 2019-06-20 |
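The decision rule in 20190188067 (present a solution once a root cause clears a probability threshold, otherwise ask the question that most reduces uncertainty) can be sketched as follows. The bipartite graph inference itself is elided; the probability values and the per-question expected-entropy map are assumptions for this sketch:

```python
import math

def entropy(probabilities):
    """Shannon entropy (bits) of a distribution over root causes."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

def next_action(cause_probs, threshold, question_entropy):
    """cause_probs: {root_cause: probability}, as inferred by walking the
    bipartite symptom/root-cause graph (not reproduced here). If some
    cause clears the threshold, explain it; otherwise ask the question
    with the lowest expected remaining entropy over the causes."""
    cause, p = max(cause_probs.items(), key=lambda kv: kv[1])
    if p > threshold:
        return ("explain", cause)
    return ("ask", min(question_entropy, key=question_entropy.get))

probs = {"disk": 0.4, "net": 0.35, "cfg": 0.25}
current_uncertainty = entropy(probs.values())  # ~1.56 bits
action = next_action(probs, threshold=0.8,
                     question_entropy={"is_latency_high": 0.9,
                                       "is_disk_full": 0.6})
```

Here no cause reaches 0.8, so the system asks "is_disk_full", the answer expected to leave the least residual entropy.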
20190188068 | SYNCHRONOUSLY GENERATING DIAGNOSTIC DATA - An approach is provided for generating diagnostic data. In response to a determination that an error condition occurs in a first node executing a first process which restarts based on the error condition occurring, a first message is received, where the first message is broadcast from the first node to a second node and other node(s). In response to the first message, in-memory collections of diagnostic data are started in the nodes. Subsequent to receiving the first message, the error condition occurs in the second node. Based on the error condition occurring in the second node, a second message is broadcast from the second node to the first node and the other node(s) which causes the first node, the second node, and the other node(s) to dump the in-memory collections of diagnostic data at a predefined location. | 2019-06-20 |
20190188069 | DUAL PHYSICAL-CHANNEL SYSTEMS FIRMWARE INITIALIZATION AND RECOVERY - Aspects of the present invention include a method, system and computer program product. The method includes a processor operating first and second physical channel identifier (PCHID) devices comprised of a plurality of functional logic components, wherein one or more of the functional logic components are specific to one or more of the first and second PCHIDs and wherein one or more of the functional logic components are in common and not specific to one or more of the first and second PCHIDs; determining that an error condition exists in the first PCHID or the second PCHID; and executing a recovery method to remove the error condition from the first PCHID or the second PCHID in which the error condition exists. | 2019-06-20 |
20190188070 | METHOD AND SYSTEM FOR RESOLVING ERROR IN OPEN STACK OPERATING SYSTEM - Embodiments of the present disclosure disclose a system and method for resolving an error in an open stack OS. An error code relating to an error in an open stack OS associated with the error resolution system may be retrieved. One or more services associated with the error code may be determined and at least one of one or more log files from the open stack OS and a resolver may be retrieved. The one or more services are enabled in the error resolution system for the retrieving. Further, a predefined action plan based on the one or more log files and the resolver may be determined. The error in the open stack OS is resolved based on the determined predefined action plan. | 2019-06-20 |
20190188071 | METHOD FOR CHECKING THE AVAILABILITY AND INTEGRITY OF A DISTRIBUTED DATA OBJECT - A method for checking the availability and integrity of a data object stored on a plurality of servers and having a number N of data words. For the distributed storage on the servers, the data object is fragmented. Each fragment is transmitted to and stored on one server. To check the availability and integrity of the fragments stored on the servers, the same random number is sent from an auditor unit to the servers. A checksum is created by the servers, in each case modified by application of the random number to the data of the respective fragment, and the checksum is transmitted to the auditor unit. The auditor unit uses the consistency check to determine whether the individual checksums sent by the servers are consistent and, if this is the case, establishes the availability and integrity of the data. | 2019-06-20 |
20190188072 | SEMICONDUCTOR DEVICES AND SEMICONDUCTOR SYSTEMS INCLUDING THE SAME - A semiconductor system includes a first semiconductor device and a second semiconductor device. The first semiconductor device generates a first error scrub control signal and a second error scrub control signal according to a logic level combination of an error code including information on the error occurrence number of times. The second semiconductor device performs an error scrub operation of a memory area on a first cycle time in response to the first error scrub control signal during a refresh operation and performs the error scrub operation of the memory area on a second cycle time in response to the second error scrub control signal during the refresh operation. | 2019-06-20 |
20190188073 | DISTRIBUTION OF A CODEWORD ACROSS INDIVIDUAL STORAGE UNITS TO REDUCE THE BIT ERROR RATE - Embodiments are directed towards apparatuses, methods, and systems for a codeword distribution manager to divide a codeword into portions to be written to individual storage units and read from the corresponding different individual storage units to reduce a raw bit error rate (RBER) related to storage of the codeword. In embodiments, the codeword distribution manager is included in a memory controller and the plurality of individual storage units are coupled to the memory controller and include individual memory die or individual pages of a memory die. In embodiments, the codeword is a single codeword and includes encoded data and an error correction code. In some embodiments, the codeword includes a low density parity data check code (LDPC). Additional embodiments may be described and claimed. | 2019-06-20 |
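The striping idea in 20190188073 (divide one codeword across individual storage units so an error burst in one unit corrupts only a portion of it) can be sketched as follows. The equal-size split and byte-string representation are illustrative assumptions:

```python
def distribute_codeword(codeword, num_units):
    """Split a codeword (encoded data + ECC, as bytes) into portions
    written to distinct storage units, so correlated errors confined
    to one unit touch only a fraction of the codeword's bits."""
    size = -(-len(codeword) // num_units)  # ceiling division
    return [codeword[i * size:(i + 1) * size] for i in range(num_units)]

def reassemble(portions):
    """Read the portions back from their units and rebuild the codeword
    before handing it to the ECC decoder."""
    return b"".join(portions)

parts = distribute_codeword(b"DATA+ECCBITS", num_units=4)
restored = reassemble(parts)
```

Spreading the codeword this way converts a localized burst error into scattered errors that a code such as LDPC handles far better.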
20190188074 | ERROR CORRECTION POTENCY IMPROVEMENT VIA ADDED BURST BEATS IN A DRAM ACCESS CYCLE - An embodiment includes a method for use in operating a memory chip, the method comprising: operating the memory chip with an increased burst length relative to a standard burst length of the memory chip; and using the increased burst length to access metadata during a given operation of the memory chip. Another embodiment includes a memory module, comprising a plurality of memory chips, each memory chip being operable with an increased burst length relative to a standard burst length of the memory chip, the increased burst length being used to access metadata during a given operation of the memory module. | 2019-06-20 |
20190188075 | ERROR CORRECTION METHODS AND SEMICONDUCTOR DEVICES USING THE SAME - A semiconductor device includes a read data generation circuit and a syndrome generation circuit. The read data generation circuit generates first read data from first output data and a first output parity which are generated during a first read operation. In addition, the read data generation circuit generates second read data from second output data and a second output parity which are generated during a second read operation. The syndrome generation circuit generates a syndrome signal from the first read data and the second read data. The syndrome generation circuit generates the syndrome signal so that column vectors of a first half matrix corresponding to the first read data are symmetric to column vectors of a second half matrix corresponding to the second read data. | 2019-06-20 |
20190188076 | MEMORY WITH AN ERROR CORRECTION FUNCTION AND RELATED MEMORY SYSTEM - A memory with an error correction function includes a controller and a memory cell array. The controller optionally writes written data to a normal storage area and a backup area of the memory cell array, and when the controller reads first data corresponding to the written data from the normal storage area, if at least two errors are included in the first data, the controller reads the backup area to output second data corresponding to the written data from the backup area. | 2019-06-20 |
20190188077 | MEMORY SYSTEM AND OPERATING METHOD THEREOF - A memory system includes: a memory device, including a plurality of memory cells, suitable for reading and writing data with a parity bit on a basis of a page; and a memory controller suitable for obtaining an error mask pattern based on compressed data when a number of error bits detected based on the data and the parity bit is equal to or less than a first threshold value and greater than a second threshold value, and controlling to write the compressed data, the parity bit updated based on the compressed data in which the error mask pattern is reflected, compression information on the compressed data and pattern information on the error mask pattern to the page. | 2019-06-20 |
20190188078 | METHOD OF OPERATING MEMORY CONTROLLER FOR PERFORMING ENCODING AND DECODING BY USING A CONVOLUTION-TYPE LOW DENSITY PARITY CHECK CODE - A method of operating a memory controller that performs decoding by using a parity check matrix corresponding to a convolution-type low density parity check (LDPC) code includes receiving a codeword from at least one memory device, the codeword including a first sub-codeword and a second sub-codeword; decoding a first sub-codeword into first data by using first sliding windows in a first direction, set based on a first sub-matrix included in the parity check matrix and associated with the first sub-codeword; and decoding a second sub-codeword into second data by using second sliding windows in a second direction, set based on a second sub-matrix included in the parity check matrix and associated with the second sub-codeword. | 2019-06-20 |
20190188079 | DURABLE BLOCK STORAGE IN DATA CENTER ACCESS NODES WITH INLINE ERASURE CODING - Techniques are described in which network devices, such as one or more data center access nodes, are configured to support durable block storage with inline erasure coding, i.e., erasure coding in real time as data is updated. A Durable Block Device (DBD) supports a block level API for one or more storage volumes that may be mapped to one or more applications executed by servers in communication with the data center access nodes. The disclosure describes the operation of the data plane of the DBD that is hosted on one or more access nodes, and its interactions with the management and control planes of the DBD that are hosted on one or more of the servers. The disclosure describes generation of a log structured volume in the DBD configured to gather multiple data blocks into larger chunks of data for inline erasure coding for storage across multiple storage devices. | 2019-06-20 |
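The inline erasure coding in 20190188079 can be illustrated with the simplest possible code: single XOR parity over equal-sized chunks. This is a minimal stand-in for the access node's real erasure code, computed "inline" at write time:

```python
def xor_parity(chunks):
    """Single-parity erasure code: parity byte i is the XOR of byte i
    of every equal-sized data chunk, so any one lost chunk is
    recoverable from the survivors plus the parity."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

def recover(surviving_chunks, parity):
    """Rebuild the single missing chunk: XOR of survivors and parity."""
    return xor_parity(surviving_chunks + [parity])

chunks = [b"blk0", b"blk1", b"blk2"]   # chunks destined for 3 devices
p = xor_parity(chunks)                 # parity stored on a 4th device
rebuilt = recover([chunks[0], chunks[2]], p)  # device holding blk1 lost
```

Real deployments use codes tolerating multiple failures (e.g. Reed-Solomon), but the write-path shape is the same: gather chunks, compute parity, store each piece on a different device.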
20190188080 | SYSTEMS AND METHODS FOR DATA SYNCHRONIZATION AND FAILOVER MANAGEMENT - A Data Synchronization and Failover Management (DSFM) system monitors simultaneous execution of non-identical instances of a software application and may label as a particular result of the software application the earliest output corresponding to that result produced by one of the instances. The DSFM may label one of the instances as a primary instance and the other instances as secondary instances and, if the primary instance fails, may re-label one of the secondary instances that computed all of the operations associated with the last result produced prior to the failure of the primary instance, as a new primary instance. | 2019-06-20 |
20190188081 | ELECTRONIC DEVICE WITH AUTOMATIC AND STABLE SYSTEM RESTART FUNCTION - An electronic device with reliable restart function includes a central processing unit (CPU), a complex programmable logic device (CPLD), and a platform controller hub (PCH). The CPU outputs a trigger signal when a serious error occurs in the electronic device. The CPLD obtains the trigger signal from the CPU, and delays the trigger signal for a first preset time. The PCH chip obtains the trigger signal delayed by the CPLD, and controls the electronic device to perform a system restart according to the trigger signal. | 2019-06-20 |
20190188082 | MEMORY SYSTEM AND OPERATING METHOD THEREOF - A memory system may include: a nonvolatile memory device including a plurality of memory blocks, each of which includes a plurality of pages, and among which a subset of memory blocks are managed as a system area and remaining memory blocks are managed as a normal area; and a controller that may store system data, used to control the nonvolatile memory device, in the system area, and store boot data, used in a host, and normal data, updated in a control operation for the nonvolatile memory device, in the normal area, the controller may perform a checkpoint operation each time storage of N number of boot data among the boot data is completed, and may perform the checkpoint operation each time the control operation for the nonvolatile memory device is completed, ‘N’ being a natural number. | 2019-06-20 |
20190188083 | MEMORY SYSTEM, METHOD OF OPERATING THE SAME, AND DATA PROCESSING SYSTEM INCLUDING THE SAME - A data processing system may include a host and a memory system, the memory system may include a volatile recovery selection register and a nonvolatile memory device, wherein the memory system checks, after being reset, a value of the recovery selection register and determines whether to perform a recovery operation on the nonvolatile memory device, and when a reset is requested from the host, the memory system sets the value of the recovery selection register and resets the nonvolatile memory device, and the host may read set first data from the memory system through a first booting operation that starts during a power-on operation, may request a reset to the memory system, and may read set second data from the memory system through a second booting operation that starts after the reset of the memory system. | 2019-06-20 |
20190188084 | DEVICE FOR CONTROLLING THE REINITIALIZATION OF A COMPUTER ON BOARD AN AUTOMOBILE - The invention pertains to a control device ( | 2019-06-20 |
20190188085 | PERSISTENTLY STORE CACHED DATA OF A WRITE TO A BLOCK DEVICE PRESENTATION - Examples include the persistent storage of cached data of a write to a block device presentation. Some examples may include a block device presentation of data represented by first backup objects stored in a deduplication backup appliance, and may cause the deduplication backup appliance to store second backup objects representing the data stored in a cache for each transient write to the block device presentation. | 2019-06-20 |
20190188086 | REDUNDANCY REDUCTION IN BLOCKCHAINS - Creation of a new block for a blockchain can be initiated. The new block can include a plurality of transactions. The blockchain can include a plurality of existing blocks. Transaction storage data can be generated for the new block. The transaction storage data can indicate the plurality of transactions and specify, for each of the plurality of transactions, which of a plurality of computing nodes are to keep the transaction in the new block. The generated transaction storage data can be added to the new block. The new block can be communicated to the plurality of computing nodes. | 2019-06-20 |
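The transaction storage data in 20190188086 specifies, per transaction, which nodes must keep it. One way to sketch such an assignment is deterministic hashing so every node can compute the same plan independently; the rendezvous-style ranking below is an assumption for illustration, not the application's method:

```python
import hashlib

def assign_keepers(tx_ids, nodes, replicas):
    """For each transaction in a new block, deterministically pick
    `replicas` of the `nodes` that must retain it, by ranking nodes on
    a hash of (transaction id, node id). Every node derives the same
    plan, and no transaction is stored by all nodes."""
    storage_plan = {}
    for tx in tx_ids:
        ranked = sorted(
            nodes,
            key=lambda n: hashlib.sha256((tx + n).encode()).hexdigest(),
        )
        storage_plan[tx] = ranked[:replicas]
    return storage_plan

plan = assign_keepers(["tx1", "tx2"], ["n1", "n2", "n3", "n4"], replicas=2)
```

With two keepers out of four nodes, block storage per node is roughly halved while each transaction remains redundantly held.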
20190188087 | JOINT DE-DUPLICATION-ERASURE CODED DISTRIBUTED STORAGE - Methods and apparatus deduplicate and erasure code a message in a data storage system. One example apparatus includes a first chunking circuit that generates a set of data chunks from a message, an outer precoding circuit that generates a set of precoded data chunks and a set of parity symbols from the set of data chunks, a second chunking circuit that generates a set of chunked parity symbols from the set of parity symbols, a deduplication circuit that generates a set of deduplicated data chunks by deduplicating the set of precoded data chunks or the set of chunked parity symbols, an unequal error protection (UEP) circuit that generates an encoded message from the set of deduplicated data chunks, and a storage circuit that controls the data storage system to store the set of deduplicated data chunks, the set of parity symbols, or the encoded message. | 2019-06-20 |
20190188088 | COLLABORATIVE RESTORE IN A NETWORKED STORAGE SYSTEM - A storage system according to certain embodiments includes a client-side signature repository that includes information representative of a set of data blocks stored in primary storage. During restore operations, the system can use the client-side signature repository to identify data blocks located in primary storage. The system can also use the client-side signature repository to identify multiple locations within primary storage where instances of some of the data blocks to be restored are located. Accordingly, during a restore operation of one client computing device, the system can source a data block to be restored to that client computing device from the primary storage of another client computing device. | 2019-06-20 |
20190188089 | FORECAST RECOMMENDED BACKUP DESTINATION - A method for improving integrity and availability of data in a data center is provided. The data center is part of a network of data centers. The data centers in the network are adapted to act as a backup service provider. The method comprises registering backup service profile data of each of the data centers with viable data. The method also comprises accessing a forecast of monitorable events for a region, analyzing the forecast to predict a potential threat, identifying a data center in the region, and determining a data center within the network of data centers as a backup service provider. If more than one suitable backup service provider is identified, the method comprises determining a best-match backup service provider, establishing a backup communication connection, and transferring data from the source data center to the target data center. | 2019-06-20 |
20190188090 | Snapshot Deletion In A Distributed Storage System - A new snapshot of a storage volume is created by instructing computing nodes to suppress write requests. Once pending write requests from the computing nodes are completed, storage nodes create a new snapshot for the storage volume by allocating a new segment to the new snapshot, and finalize and perform garbage collection with respect to segments allocated to the previous snapshot. The snapshots may be represented by a storage manager in a hierarchy. Deleted snapshots may be flagged as such in the hierarchy, and deletion may be implemented only in memory on a storage node, which is then restored from the hierarchy in the event of a crash. A snapshot is removed from the hierarchy when all segments previously allocated to it are freed by garbage collection. A hybrid storage node may perform both computing and storage services. Data may be written with tags indicating encoding protocols used to encode the data. | 2019-06-20 |
20190188091 | WRITE-AHEAD STYLE LOGGING IN A PERSISTENT MEMORY DEVICE - The techniques disclosed herein improve performance of file system logging by writing log data to persistent memory instead of staging in RAM before writing to disk. In one embodiment, while the log is being written, checksums are inserted, such that during recovery, the checksums can be used to distinguish good log pages from bad log pages. In this way, good log pages can be evaluated to determine whether to roll a portion of a file system transaction forward, backward, or do nothing, while bad log pages can be safely ignored. Additionally or alternatively, non-temporal copies are employed when writing data to persistent memory, thereby reducing an amount of time log data is exposed to be lost in a crash. | 2019-06-20 |
20190188092 | MEMORY ERROR RECOVERY - DRAM errors that are not correctable automatically when detected are handled by replacing corrupt data with replacement data obtained from a cache of the computer system in which the DRAM error is detected. Cached data includes copied datasets and corresponding memory addresses for identifying the copied data at the location where an uncorrected DRAM error occurs. Searching the cache by address identifies the replacement data. | 2019-06-20 |
20190188093 | METHOD AND APPARATUS FOR REDUNDANT DATA PROCESSING - An arrangement for redundant data processing has an integrated circuit in which the functionality of a multi-core processor is implemented. | 2019-06-20 |
20190188094 | DISASTER RECOVERY OF CONTAINERS - In one example, mapping information corresponding to a container running on a private data center may be generated in a public cloud by a processor-based disaster recovery manager. Further, volume data associated with the container may be synchronized to the public cloud based on the mapping information by the disaster recovery manager. Furthermore, a failure of the container running on the private data center may be determined by the disaster recovery manager. In response to the failure of the container running on the private data center, the container may be deployed in the public cloud using the synchronized volume data and the mapping information by the disaster recovery manager. | 2019-06-20 |
20190188095 | CONTENT STREAM INTEGRITY AND REDUNDANCY SYSTEM - A system can include one or more content distribution sites to provide content to one or more content satellite offices for delivery toward a set of destination devices for display. A content distribution site, of the one or more content distribution sites, can include one or more streamer devices. The one or more streamer devices can be associated with a single spoofed Internet protocol (IP) address. The spoofed IP address can facilitate failover among the one or more streamer devices. The one or more streamer devices can be configured to provide the content toward the set of destination devices using multicast with forward error correction (FEC). A content satellite office, of the one or more content satellite offices, can be configured to subscribe to a multicast group associated with the one or more streamer devices. The multicast group can be associated with the spoofed IP address to facilitate the failover among the one or more streamer devices. | 2019-06-20 |
20190188096 | SYSTEM, AND CONTROL METHOD AND PROGRAM FOR INPUT/OUTPUT REQUESTS FOR STORAGE SYSTEMS - Virtual first logical volumes are provided to a host, and a virtual second logical volume correlated with any one of the first logical volumes is created in a storage node in correlation with a storage control module disposed in the storage node. A correspondence relationship between the first and second logical volumes is managed as mapping information. In a case where an I/O request in which a first logical volume is designated as an I/O destination is given from the host, a storage node which is an assignment destination of the I/O request is specified on the basis of the mapping information; the I/O request is assigned to the storage control module of its own node in a case where the specified storage node is its own node, and is assigned to another storage node in a case where the specified storage node is another storage node. | 2019-06-20 |
20190188097 | MIRRORED WRITE AHEAD LOGS FOR DATA STORAGE SYSTEM - Data storage system and method for managing transaction requests to the data storage system utilizes an active write ahead log and a standby write ahead log to apply the transaction requests to a storage data structure stored in a storage system of the data storage system. | 2019-06-20 |
20190188098 | TRACKING AND RECOVERING A DISK ALLOCATION STATE - The subject matter described herein is generally directed towards tracking and recovering a disk allocation state. An on-disk log of operations is maintained to describe operations performed to an in-memory partial reference count map. Upon a crash of a host computing device during a checkpoint operation to an on-disk complete reference count map, the on-disk log of operations is used to undo and then redo the operations, or just redo the operations. In this manner, a disk allocation state prior to the crash is recreated in the on-disk complete reference count map with atomicity and crash consistency. | 2019-06-20 |
20190188099 | RAID ARRAY REBUILD ASSIST FROM EXTERNAL ARRAY COPY - When rebuilding a RAID (Redundant Array of Independent Disks) array in which a drive has failed, if another RAID array contains a mirror copy of the rebuilding RAID array content, this mirroring RAID array can be used to more rapidly rebuild the RAID array with the failed drive. Data requests to the rebuilding RAID array can be redirected to the mirroring RAID array; data can be transferred from the mirroring RAID array; or a combination of these can be used to finish rebuilding more quickly. When transferring data to the rebuilding array from the mirroring array, the transfer can be performed as a direct memory access (DMA) process independently of the RAID module of either array. | 2019-06-20 |
20190188100 | SITE RECOVERY SOLUTION IN A MULTI-TIER STORAGE ENVIRONMENT - A computer implemented method comprises detecting a failure of a primary volume at a first location, the primary volume having data stored on a first plurality of media according to a first heat map; in response to detecting the failure of the primary volume, overwriting a second heat map of a secondary volume at a second location with a copy of the first heat map, the secondary volume having data stored on a second plurality of media according to the second heat map; migrating extents of data on the second plurality of media at the second location according to the copy of the first heat map prior to a next heat map cycle update after detection of the failure; and processing data access requests from the second location using the extents of data on the second plurality of media migrated according to the copy of the first heat map. | 2019-06-20 |
20190188101 | MEMORY SYSTEM AND METHOD OF OPERATING THE SAME - A memory system includes: a nonvolatile memory device including a plurality of memory blocks and spare blocks; and a memory controller configured to control the nonvolatile memory device. The nonvolatile memory device may store spare information to any one block of the memory blocks or the spare blocks. When a bad block is detected from the memory blocks, the nonvolatile memory device replaces the bad block with any one of the spare blocks according to the spare information. | 2019-06-20 |
20190188102 | METHOD AND SYSTEM FOR DATA RECOVERY IN A CLOUD BASED COMPUTING ENVIRONMENT UTILIZING OBJECT STORAGE - A system and method for replicating block storage to an object storage, the method including: receiving write instructions from an original component (OC) in a first network, wherein the write instructions include a data block; mapping the write instructions to at least one object in the object storage; and storing the data block of the write instructions in the mapped at least one object in a second network. | 2019-06-20 |
20190188103 | IN-BAND MONITOR IN SYSTEM MANAGEMENT MODE CONTEXT FOR IMPROVED CLOUD PLATFORM AVAILABILITY - Optimizations are provided for remotely debugging a node in the cloud. Initially, an SMM environment is initialized in a computer's BIOS. Then, a debug agent that is located within the SMM environment receives an instruction indicative of a chipset-specific or platform-specific health-related issue. Based on this instruction, the debug agent executes a script entry by fetching health-related information from the computer's addressable endpoints. This information includes health-related metadata and/or counter information. The debug agent then records the information. Furthermore, the debug agent obtains a resolution for the health-related issue. Here, this resolution is at least partially based on the recorded information. | 2019-06-20 |
20190188104 | METHOD AND APPARATUS FOR MONITORING MEMORY AND FOR DISPLAYING USE IN ELECTRONIC CONTROL DEVICE - An operating method of an electronic control device for performing at least one program including a plurality of functions includes: recognizing a function call depth of the plurality of functions; inserting a probe code into an interrupt service routine (ISR) and a maximum depth function with a maximum function call depth; calculating a use amount of a memory area when the maximum depth function with the probe code inserted into the maximum depth function is performed; and when the probe code is executed, outputting the maximum function call depth or the function call depth of the plurality of functions and the use amount when the ISR is performed. | 2019-06-20 |
20190188105 | Intelligent Diagnostic System - A diagnostic system may utilize telemetry from a monitored system to infer information about the operation of various component systems within the monitored system. In embodiments, inferences may be drawn from a comparison of various component systems using a system of implication and exoneration. Exoneration is utilized to isolate faulty components from functioning components by comparing information between the systems, which may run in parallel. A dynamic grouping algorithm may eventually isolate faulty components and suggest the root cause as well as multiple distinct faults. | 2019-06-20 |
20190188106 | TRACE DATA COMPRESSION METHOD SELECTION DEVICE, METHOD, AND PROGRAM - A compression circuit compresses trace data by a compression method selected from multiple compression methods. A compression circuit optimization section runs a program to be executed in the MCU, and generates the result of execution of the program and the result of trace data simulation. The compression circuit optimization section analyzes the result of execution of the program and the result of trace data simulation, and determines the compression method of the compression circuit in accordance with these analysis results. Further, the compression circuit optimization section generates compression circuit data for operating the compression circuit as a circuit that compresses the trace data by the determined compression method. | 2019-06-20 |
20190188107 | HEALTH MONITORING FOR CLOUD COMPUTING PLATFORMS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for implementing a health monitoring system for a cloud application platform. One of the methods includes deploying, by a health monitoring application of a cloud application platform that provisions resources from an underlying cloud infrastructure system, probes for testing components of the cloud application platform. Each probe is configured to perform tests that measure performance of a component of the cloud application platform. A probe can attempt to provision resources from the underlying cloud infrastructure system by launching a test application on the cloud application platform and determine whether the test application launched successfully using resources from the underlying cloud infrastructure system. The health monitoring application receives results of the tests and provides, for display in a graphical user interface, a representation of a history of the results of the tests for at least one of the components. | 2019-06-20 |
20190188108 | LOAD TEST FRAMEWORK - A computer-implemented method and system involve providing a script-driven framework to monitor performance of operations on two or more sets of objects to be executed on a target system in parallel on separate threads according to a test scenario with user-defined language stipulations from a script file, and executing the script file through the framework to implement the test scenario on the target system. The language stipulations include an operation hierarchy for each of the two or more sets of objects and at least one synchronization point corresponding to a point in time at which operations on the separate threads are to be synchronized. The framework can be XML-compliant. | 2019-06-20 |
20190188109 | METHOD AND APPARATUS FOR TESTING PERFORMANCE OF A PAGE CONTROL AND ELECTRONIC DEVICE - A method for testing performance of a page control comprising: testing each of multiple evaluation dimensions for a page control, the multiple evaluation dimensions comprising at least one of an FPS (Frames Per Second) at the time of scrolling a page, an FPS at the time of opening a message, the number of times of adding a message, an FPS at the time of cutting an image, an FPS at the time of adding a message, and the number of times of rendering a page; determining a test result of each of the evaluation dimensions; determining operation performance of the page control according to the test result of each of the evaluation dimensions. Also disclosed is an apparatus for testing performance of a page control, and an electronic device. Operation performance of the page control can be tested in multiple aspects and the credibility of performance evaluation results is improved. | 2019-06-20 |
20190188110 | INDUSTRIAL CONTROL SYSTEM, AND ASSISTANCE APPARATUS, CONTROL ASSIST METHOD, AND PROGRAM THEREOF - An industrial control apparatus that causes a common processing unit to execute a first execution task for executing processing that does not depend on the number of pieces of data and a second execution task for executing processing that depends on the number of pieces of data, and an assistance apparatus are included. The industrial control apparatus calculates a control load amount of a processing unit incurred by executing the first execution task, and extracts the second execution task from the first and second execution tasks. The assistance apparatus calculates a processing load amount of the processing unit according to the type of the extracted second execution task for the number of pieces of analysis data, and using this processing load amount and the control load amount, calculates a margin of processing that indicates a degree of remaining processing capability of the processing unit. | 2019-06-20 |
20190188111 | METHODS AND APPARATUS TO IMPROVE PERFORMANCE DATA COLLECTION OF A HIGH PERFORMANCE COMPUTING APPLICATION - Methods, apparatus, systems and articles of manufacture to improve performance data collection are disclosed. An example apparatus includes a performance data comparator of a source node to collect the performance data of an application of the source node from the host fabric interface at a polling frequency; an interface to transmit a write back instruction to the host fabric interface, the write back instruction to cause data to be written to a memory address location of memory of the source node to trigger a wake-up mode; and a frequency selector to: set the polling frequency to a first polling frequency for a sleep mode; and increase the polling frequency to a second polling frequency in response to the data in the memory address location identifying the wake-up mode. | 2019-06-20 |
20190188112 | DEBUGGING OF PREFIXED CODE - A debugging capability that enables the efficient debugging of code that has prefixes, referred to herein as prefixed code. To debug application code, in which the application code includes a prefixed instruction to be modified by a prefix, a trap is provided. The trap is configured to report a presence of the prefix, but to otherwise perform the trap functions absent the prefix; i.e., the prefix is otherwise ignored in the processing of the trap. | 2019-06-20 |
20190188113 | METHOD OF REORDERING CONDITION CHECKS - Described is a computer-implemented method of reordering condition checks. Two or more condition checks in computer code that may be reordered within the code are identified. It is determined that a later one of the condition checks is satisfied at a greater frequency than a preceding one of the condition checks. It is determined that there is an absence of side effects in the two or more condition checks. The values of the condition checks are propagated and abstract interpretation is performed on the values that are propagated. It is determined that the condition checks are exclusive of each other, and the condition checks are reordered within the computer code. | 2019-06-20 |
20190188114 | GENERATION OF DIAGNOSTIC EXPERIMENTS FOR EVALUATING COMPUTER SYSTEM PERFORMANCE ANOMALIES - A method includes performing, by a processor: detecting a performance anomaly in a production computer system, generating a snapshot image of software and data that were executed on the production computer system during the performance anomaly, generating diagnostic information for the performance anomaly, communicating the diagnostic information to an experiment computer system, generating an experiment based on the diagnostic information and the snapshot image to create an experimental image, executing the experimental image on the experiment computer system to perform the experiment, and evaluating an effect of the experiment on the performance anomaly. | 2019-06-20 |
20190188115 | COOPERATIVE TRIGGERING - There is disclosed in an example a processor, having: a front end including circuitry to decode instructions from an instruction stream; a data cache unit including circuitry to cache data for the processor; and a core triggering block (CTB) to provide integration between two or more different debug capabilities. | 2019-06-20 |
20190188116 | AUTOMATED SOFTWARE TESTING METHOD AND SYSTEM - A method and system for testing software including computer executable instructions in a networked software quality assurance testing system. The method, executed in a processor of a server computing device, comprises determining user type information and a program state model associated with a software application, and presenting a user with at least one action from an action library associated with the program state model and the user type information. Upon selection of the at least one action, one or more executable test scripts are generated by causing the software application to advance through program states in accordance with the program state model, the one or more executable test scripts specifying a sequence of test steps based at least partly on the user type information. Executing the software application concurrently with the one or more executable test scripts causes performance of the sequence of test steps. | 2019-06-20 |
20190188117 | SYSTEM, METHOD AND RECORDING MEDIUM FOR GENERATING MOBILE TEST SEQUENCES - A test sequence generation method, system, and computer program product, include collecting an action sequence, training a recurrent neural network (RNN) model to encode a frequency of actions in the action sequence and determine meaningful action sequences, and applying the RNN model to prioritize the meaningful action sequences that have a frequency less than a predetermined threshold following the action sequence. | 2019-06-20 |
20190188118 | SYSTEM AND METHOD FOR GENERATING DATABASE INDEPENDENT INPUT TEST DATA - A system and method for generating database independent input test data. The system includes a source database having an input and an output, a plurality of target databases each having an input and an output, a test data generator coupled to the input of the source database, and a data exchanger coupled between the output of the source database and the input of each of the target databases. The test data generator generates input test data that is database independent, and sends the generated input test data to the source database. The source database sends the input test data to the data exchanger. The data exchanger transforms the input test data into a database format compatible with at least one of the plurality of target databases. The data exchanger sends the transformed input test data to the target database for which the transformed input test data is compatible. | 2019-06-20 |
20190188119 | SYSTEM AND A METHOD FOR PROVIDING AUTOMATED PERFORMANCE DETECTION OF APPLICATION PROGRAMMING INTERFACES - A system and a method for automating performance detection of one or more application programming interfaces (APIs) is provided. The present invention provides for retrieving one or more test cases and associated test data as per respective test case IDs and generating one or more test requests by applying a data enrichment technique. Further, the present invention provides for executing one or more generated test requests on an API under test, analyzing a response received from the API under test, performing response validation, detecting any defects in the API based on the received response, and generating a detailed report of the executed test request. Furthermore, the present invention provides a visual interface for selecting test cases, creating test cases, editing test cases, editing test requests, and displaying execution of test requests and test reports. | 2019-06-20 |
20190188120 | SYSTEM, METHOD AND RECORDING MEDIUM FOR OPTIMIZING SOFTWARE TESTING VIA GROUP TESTING - A software optimization method, system, and computer program product include defining a vocabulary of tokens to yield admissible inputs of a system, generating random test inputs by combining inputs and input tuples and applying these inputs to the system, and analyzing the correlations between system failures and the tokens present in respective inputs to localize failures to particular inputs and input tuples. | 2019-06-20 |
20190188121 | Systems and Methods for Use in Certifying Interactions With Hosted Services - Systems and methods are provided for validating customer use of application programming interfaces (APIs). An exemplary method includes selecting an API defining at least one service to be used by a customer and a standard associated with the API for data payloads directed to the API, and identifying the customer. The method also includes selecting at least one test case for the API and the customer, bundling the test case(s) into a test project for the customer, and transmitting the test project to the customer whereby the customer is able to execute the test project. The method further includes monitoring data payloads between the API and the customer and compiling a report indicative of a result of the test case(s) where the report indicates whether the data payloads are compliant with the standard associated with the API and whether the customer is certified to use the API, or not. | 2019-06-20 |
20190188122 | ELECTRONIC PRODUCT TESTING SYSTEMS - An electronic product testing system includes: a retrieving module configured to access an electronic file, the electronic file containing data generated based on a testing of a first product; and a testing device having a processing unit configured to perform testing of a second product based on the data in the electronic file, the second product having at least one feature that is different from the first product. | 2019-06-20 |
20190188123 | TESTING APPS IN MICRO-FLUIDICS BASED DEVICES - A method, system and computer program product are disclosed for remotely testing computing devices including dynamic, shapeable tactile touch screens. In an embodiment, a method comprises establishing a communications connection between a computing device under test and a remote testing computer system, the computing device under test including a dynamic, configurable tactile touch screen; and configuring a portion of the touch screen of the computing device under test, in a defined manner, to form three-dimensional physical features on the touch screen for interacting with the touch screen. In an embodiment, the method further comprises transmitting specified information about said configuring, via the established communications connection, from the computing device under test to the remote testing computer system; and generating a defined visual representation on the remote testing computer system, by using the specified information, of said configuring a portion of the touch screen. | 2019-06-20 |
20190188124 | MULTILEVEL ADDRESSING - In an example, a starting address corresponding to a location of particular information within a non-volatile storage memory is determined during an initialization process using a multilevel addressing scheme. Using the multilevel addressing scheme may include performing multiple reads of the storage memory at respective address levels to determine the starting address corresponding to the location of the particular information. | 2019-06-20 |
20190188125 | Data Storage Device and Methods for Processing Data in the Data Storage Device - A data storage device includes a memory device, an SRAM and a controller. The memory device includes a first buffer configured to store data of a plurality of consecutive logical pages. The SRAM stores a first mapping table. The first mapping table records which logical page the data stored in each physical page of the first buffer directs to. The controller is coupled to the memory device and the SRAM. When the controller performs an erase operation to erase the data stored in the first buffer in response to an erase command, the controller checks whether an interrupt signal or a reset command issued by a host device has been received every time the erase operations of a predetermined number (M) of logical pages have finished. The predetermined number (M) is a positive integer greater than 1. | 2019-06-20 |
20190188126 | MEMORY SYSTEM AND METHOD OF OPERATING THE SAME - Provided herein may be a memory system and a method of operating the memory system. The memory system may include: a nonvolatile memory device configured to perform internal operations in response to command/address sequences; and a memory controller configured to provide the command/address sequences to the nonvolatile memory device. The memory controller may include: a firmware section configured to manage read/write characteristic information about the nonvolatile memory device; and a hardware section configured to generate command/address sequences based on the read/write characteristic information. | 2019-06-20 |
20190188127 | MEMORY SYSTEM AND OPERATING METHOD THEREOF - In a memory system and an operating method thereof, the method includes: receiving a read command and a read logical address; reading a raw map slice stored in a nonvolatile memory device, in a map read phase, in response to the read command, wherein the raw map slice includes a read physical address corresponding to the read logical address; generating a compressed map slice by compressing the raw map slice; storing a compression class corresponding to a ratio of a size of the compressed map slice to a size of the raw map slice in a compression class description table; storing the compressed map slice in a buffer memory; and reading data corresponding to the read command from the nonvolatile memory device, in a data read phase, based on the compressed map slice stored in the buffer memory. | 2019-06-20 |
20190188128 | NONVOLATILE MEMORY SYSTEM AND METHOD OF OPERATING THE SAME - To operate a nonvolatile memory system including a nonvolatile memory device and a memory controller, a mapping memory is divided into a plurality of mapping memory regions where the mapping memory stores mapping data representing a mapping relation between a logical address of a host device and a physical address of the nonvolatile memory device. Occupation information representing whether the mapping data are stored in each mapping memory region of the plurality of mapping memory regions is provided. Based on the occupation information, user data are stored in a corresponding mapping memory region of the plurality of mapping memory regions in which the mapping data are not stored. | 2019-06-20 |
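The occupation-information lookup above amounts to a free-region search over a bitmap. A minimal sketch, with invented names, where each bit marks whether a mapping-memory region already holds mapping data:

```python
# Sketch: occupation bits mark which mapping-memory regions already hold
# mapping data; user data goes into a region whose bit is clear.
def pick_region_for_user_data(occupation_bits):
    """Return the index of the first unoccupied region, or None if all
    regions already hold mapping data."""
    for idx, occupied in enumerate(occupation_bits):
        if not occupied:
            return idx
    return None
```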
20190188129 | DATA STORAGE DEVICE AND OPERATING METHOD FOR DYNAMICALLY EXECUTING GARBAGE-COLLECTION PROCESS - A data storage device for dynamically executing the garbage-collection process is provided, which includes a flash memory and a controller. The flash memory includes a plurality of blocks, wherein each of the blocks includes a plurality of pages. The controller is coupled to the flash memory and is utilized to execute the garbage-collection process on the flash memory according to the number of spare blocks in the flash memory and the number of non-spare blocks corresponding to different ratios of effective pages. The garbage-collection process is utilized for merging at least two non-spare blocks to release at least one spare block. | 2019-06-20 |
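The trigger-and-merge policy described above can be sketched roughly as follows. The threshold, the block representation, and the choice of the two lowest-valid-ratio victims are all invented for illustration; the abstract only says GC runs according to spare-block and valid-page-ratio counts.

```python
# Sketch of a GC trigger/merge policy (thresholds invented): run GC when
# spare blocks fall below a threshold, picking as victims the non-spare
# blocks with the lowest ratio of effective (valid) pages.
def garbage_collect(blocks, pages_per_block, spare_threshold=2):
    spares = [b for b in blocks if not b["valid_pages"]]
    if len(spares) >= spare_threshold:
        return None  # enough spare blocks; skip GC this round
    victims = sorted(
        (b for b in blocks if b["valid_pages"]),
        key=lambda b: len(b["valid_pages"]) / pages_per_block,
    )[:2]
    # merge the victims' valid pages into one new block ...
    merged = {"valid_pages": victims[0]["valid_pages"] + victims[1]["valid_pages"]}
    for v in victims:
        v["valid_pages"] = []  # ... so the victims become spare blocks
    return merged
```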
20190188130 | Data Storage Device and Non-Volatile Memory Control Method - A hybrid data storage device is shown. In addition to a non-volatile memory, the hybrid data storage device has a volatile memory. A microcontroller of the data storage device generates and maintains a first mapping table and a second mapping table. According to the first mapping table, specific logical addresses are mapped to the volatile memory. The second mapping table records mapping information between logical addresses, including the specific logical addresses, and the non-volatile memory. When the data storage device is powered on, the microcontroller uploads data read from the non-volatile memory to the volatile memory according to the first mapping table and the second mapping table. | 2019-06-20 |
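The power-on upload described above uses both tables together: the first table says which logical addresses belong in volatile memory, and the second says where their data lives in the non-volatile memory. A minimal sketch with invented data structures (plain dicts and a list stand in for the device's tables):

```python
# Sketch (structures invented): at power-on, logical addresses listed in
# the first table are located in NVM via the second table's mapping and
# uploaded into volatile memory.
def upload_on_power_on(first_table, second_table, nvm):
    """first_table: logical addresses to cache in volatile memory.
    second_table: logical address -> NVM physical address.
    nvm: physical address -> stored data."""
    volatile = {}
    for lba in first_table:
        volatile[lba] = nvm[second_table[lba]]
    return volatile
```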
20190188131 | MEMORY DEVICE AND OPERATION METHOD THEREOF - Provided is a memory device including a memory array, which includes a flag memory array having a plurality of flag memory cells and a data memory array having a plurality of data memory cells, the corresponding flag memory cells being used to record whether the corresponding data memory cells have been written. During initialization, a control circuit initializes the flag memory array but does not initialize the data memory array. | 2019-06-20 |
20190188132 | SQL SCAN HARDWARE ACCELERATOR - Various systems and methods for hardware acceleration circuitry are described. In an embodiment, circuitry is to perform 1-bit comparisons of elements of variable M-bit width aligned to N-bit width, where N is a power of 2, in a data path of P-bit width. Second and subsequent scan stages use the comparison results from the previous stage to perform 1-bit comparison of adjacent results, so that each subsequent stage results in a full comparison of element widths double that of the previous stage. The total number of stages required to scan, or filter, M-bit elements in N-bit width lanes is equal to 1+log2(N), and the total number of stages required for implementation in the circuitry is 1+log2(P), where P is the maximum width of the data path comprising 1 to P elements. | 2019-06-20 |
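The staged comparison above can be modeled in software to check the 1+log2(N) stage count. This is a behavioral sketch, not the circuit: stage one compares bit pairs, and every later stage ANDs adjacent results, doubling the compared width.

```python
from math import log2

# Behavioral sketch of the staged scan: stage 1 does per-bit equality;
# each later stage ANDs adjacent results, doubling the compared width,
# so N-bit lanes need 1 + log2(N) stages as the abstract states.
def staged_equal(a_bits, b_bits):
    """a_bits/b_bits: equal-length lists of 0/1 with length N a power of 2.
    Returns (equality result for the full N-bit element, stages used)."""
    n = len(a_bits)
    results = [int(x == y) for x, y in zip(a_bits, b_bits)]  # stage 1
    stages = 1
    while len(results) > 1:
        results = [results[i] & results[i + 1] for i in range(0, len(results), 2)]
        stages += 1
    assert stages == 1 + int(log2(n))
    return results[0], stages
```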
20190188133 | ISSUE QUEUE SNOOPING FOR ASYNCHRONOUS FLUSH AND RESTORE OF DISTRIBUTED HISTORY BUFFER - Techniques are disclosed for performing issue queue snooping for an asynchronous flush and restore of a history buffer (HB) in a processing unit. One technique includes identifying an entry of the HB to restore to a register file in the processing unit. A restore ITAG of the HB entry is sent to the register file via a first restore bus, and restore data of the HB entry and the restore ITAG is sent to the register file via a second restore bus. After the restore ITAG and restore data are sent, an instruction is dispatched before the register file obtains the restore data. After it is determined that the restore data is still available via the second restore bus, a snooping operation is performed to obtain the restore data from the second restore bus for the dispatched instruction. | 2019-06-20 |
20190188134 | MEMORY SYSTEM AND OPERATING METHOD THEREOF - A memory system may include: a nonvolatile memory device including a memory cell array and a page buffer coupled to the memory cell array; and a controller configured to interface with the nonvolatile memory device, wherein the controller moves descriptors on a cache command from a command queue to a cache queue, the cache command being transferred to the nonvolatile memory device, and selectively moves the descriptors moved to the cache queue to a response queue. | 2019-06-20 |
20190188135 | PROVIDING ROLLING UPDATES OF DISTRIBUTED SYSTEMS WITH A SHARED CACHE - Disclosed herein are system, apparatus, article of manufacture, method, and/or computer program product embodiments for providing rolling updates of distributed systems with a shared cache. An embodiment operates by receiving a platform update request to update data item information associated with a first version of a data item cached in a shared cache memory. The embodiment may further operate by transmitting a cache update request to update the data item information of the first version of the data item cached in the shared cache memory, and isolating the first version of the data item cached in the shared cache memory based on a collection of version specific identifiers and a version agnostic identifier associated with the data item. | 2019-06-20 |
20190188136 | EFFICIENT DATA TRANSFER BETWEEN A PROCESSOR CORE AND AN ACCELERATOR - A processor writes input data to a cache line of a shared cache, wherein the input data is ready to be operated on by an accelerator. It then notifies an accelerator that the input data is ready to be processed. The processor then determines that output data of the accelerator is ready to be consumed, the output data being located at the cache line or an additional cache line of the shared cache, wherein the cache line or the additional cache line comprises a first flag that is set to indicate the cache line or the additional cache line was modified by the accelerator and that prevents the output data from being removed from the cache line or the additional cache line until the output data is read by the processor. The processor reads and processes the output data from the cache line or the additional cache line. | 2019-06-20 |
20190188137 | REGION BASED DIRECTORY SCHEME TO ADAPT TO LARGE CACHE SIZES - Systems, apparatuses, and methods for maintaining a region-based cache directory are disclosed. A system includes multiple processing nodes, with each processing node including a cache subsystem. The system also includes a cache directory to help manage cache coherency among the different cache subsystems of the system. In order to reduce the number of entries in the cache directory, the cache directory tracks coherency on a region basis rather than on a cache line basis, wherein a region includes multiple cache lines. Accordingly, the system includes a region-based cache directory to track regions which have at least one cache line cached in any cache subsystem in the system. The cache directory includes a reference count in each entry to track the aggregate number of cache lines that are cached per region. If a reference count of a given entry goes to zero, the cache directory reclaims the given entry. | 2019-06-20 |
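The reference-counted directory entry above maps naturally onto a dictionary keyed by region number. A minimal sketch with invented class and method names, where a region covers 64 lines and an entry is reclaimed the moment its count reaches zero:

```python
# Sketch of the region-based directory idea (names invented): one entry
# per region with a reference count of cached lines; the entry is
# reclaimed when the count drops to zero.
class RegionDirectory:
    def __init__(self, lines_per_region=64):
        self.shift = lines_per_region.bit_length() - 1  # log2 for powers of 2
        self.entries = {}  # region number -> count of cached lines

    def line_cached(self, line_addr):
        region = line_addr >> self.shift
        self.entries[region] = self.entries.get(region, 0) + 1

    def line_evicted(self, line_addr):
        region = line_addr >> self.shift
        self.entries[region] -= 1
        if self.entries[region] == 0:
            del self.entries[region]  # reclaim the directory entry
```

One entry per 64-line region instead of one per line is where the directory-size saving the abstract claims comes from.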
20190188138 | COHERENCE PROTOCOL PROVIDING SPECULATIVE COHERENCE RESPONSE TO DIRECTORY PROBE - A data processing system includes first and second processing nodes and response logic coupled by an interconnect fabric. A first coherence participant in the first processing node is configured to issue a memory access request specifying a target memory block, and a second coherence participant in the second processing node is configured to issue a probe request regarding a memory region tracked in a memory coherence directory. The first coherence participant is configured to, responsive to receiving the probe request after the memory access request and before receiving a systemwide coherence response for the memory access request, detect an address collision between the probe request and the memory access request and, responsive thereto, transmit a speculative coherence response. The response logic is configured to, responsive to the speculative coherence response, provide a systemwide coherence response for the probe request that prevents the probe request from succeeding. | 2019-06-20 |
20190188139 | ARITHMETIC PROCESSING UNIT, MEMORY ACCESS CONTROLLER, AND METHOD FOR CONTROLLING ARITHMETIC PROCESSING UNIT - An arithmetic processing unit includes a processing unit, a cache control unit that issues a request for the memory access, and a memory access controller that includes a request queue, and a request selection unit which selects a request from among requests enqueued in the request queue and issues the selected request to a memory. After issue of a previous request in the request queue, the request selection unit inhibits, during an issue inhibition period corresponding to the issued previous request, issue of a subsequent request corresponding to the issue inhibition period, and the request selection unit issues a second request in preference to a first request in a case where the requests in the request queue are in a first state, the first request being one of a read request and a write request in the request queue, and the second request being a request in the request queue. | 2019-06-20 |
20190188140 | DATA-LESS HISTORY BUFFER WITH BANKED RESTORE PORTS IN A REGISTER MAPPER - A microprocessor has a data-less history buffer. Operands associated with program instructions are stored in logical registers (LREGs) which are resolvable to physical registers that are not part of the history buffer. Register re-naming maintains integrity of data dependencies for instructions processed out of program order. The history buffer has pointers (RTAGs) to the LREGs. Entries in the history buffer are grouped into ranges. A mapper has a single port associated with each LREG, and each port receives data from a single range of entries in the history buffer. Multiple entries, one from each range, may be restored concurrently from the history buffer to the mapper. | 2019-06-20 |
20190188141 | GENERAL PURPOSE INPUT/OUTPUT DATA CAPTURE AND NEURAL CACHE SYSTEM FOR AUTONOMOUS MACHINES - A mechanism is described for facilitating general purpose input/output data capture and a neural cache system for autonomous machines. A method of embodiments, as described herein, includes capturing, by an image capturing device, one or more images of one or more objects, where the one or more images represent input data associated with a neural network. The method may further include determining accuracy of first output results generated by a default neural caching system by comparing the first output results with second output results predicted by a custom neural caching system. The method may further include outputting, based on the accuracy, final output results including at least one of the first output results or the second output results. | 2019-06-20 |
20190188142 | DEFRAGMENTED AND EFFICIENT MICRO-OPERATION CACHE - A processor includes a processor core and a micro-op cache communicably coupled to the processor core. The micro-op cache includes a micro-op tag array, wherein tag array entries in the micro-op tag array are indexed according to set and way of set-associative cache, and a micro-op data array to store multiple micro-ops. The data array entries in the micro-op data array are indexed according to bank number of a plurality of cache banks and to a set within one cache bank of the plurality of cache banks. | 2019-06-20 |
20190188143 | SYSTEMS AND METHODS FOR ACCELERATING DATA COMPUTATION - Systems and methods for precomputing data and storing cache objects corresponding to the precomputed data are described. A system creates a new cache object when a user interacts with the system. The system precomputes formulas in the newly created cache object by replacing the formulas with corresponding calculated values. The system precomputes the formulas in the background (i.e., the user is not presented with the precomputed values while the user is manipulating the data). The system may persistently store a precomputed version cache object in a dedicated version cache storage for later use. If updates are performed to the structure and/or values of a version represented in a precomputed version cache object, affected parts of the version cache object are invalidated by replacing calculated values with the underlying formulas. | 2019-06-20 |
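The precompute/invalidate cycle above — swap each formula for its value, then swap the formula back when a dependency changes — can be sketched with callables standing in for formulas. All names here are invented for illustration:

```python
# Sketch (names invented): cells hold either a literal value or a formula
# (modeled as a callable); precomputing replaces formulas with their
# values, and invalidation puts the original formula back.
class VersionCache:
    def __init__(self, cells):
        self.cells = dict(cells)   # name -> value or callable
        self.formulas = {}         # remembered formulas, for invalidation

    def precompute(self):
        for name, cell in self.cells.items():
            if callable(cell):
                self.formulas[name] = cell
                self.cells[name] = cell()   # replace formula with its value

    def invalidate(self, name):
        if name in self.formulas:
            self.cells[name] = self.formulas[name]  # back to the formula
```

Keeping the original formula around is what makes invalidation cheap: no dependency re-analysis, just a swap back to the lazy form.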
20190188144 | MEMORY SYSTEM AND OPERATING METHOD THEREOF - An operating method of a memory system may include: transmitting, by a descriptor generation unit, cache descriptors to a memory interface unit, and ordering cache output descriptors in a response order and suspending the ordered cache output descriptors; generating, by the memory interface unit, cache commands based on the cache descriptors, and transmitting the cache commands to memory devices; transmitting, by the descriptor generation unit, the cache output descriptors to the memory interface unit according to the response order, when the suspensions of the cache output descriptors are released; and generating, by the memory interface unit, cache output commands based on the cache output descriptors, and transmitting the cache output commands to the memory devices. | 2019-06-20 |
20190188145 | CACHE MEMORY DEVICE AND FPGA INCLUDING THE SAME - A cache memory device includes a tag memory configured to store tag data for a plurality of ways corresponding to a set address; and a plurality of data memories each configured to store data corresponding to the plurality of ways that correspond to the set address, wherein each of the plurality of data memories is configured to store a corresponding one of a plurality of divisions of a plurality of word data, the plurality of word data corresponding to a same set address and a same way address, the plurality of word data being divided into the plurality of divisions. | 2019-06-20 |
20190188146 | METHOD, SYSTEM, AND APPARATUS FOR STRESS TESTING MEMORY TRANSLATION TABLES - Disclosed is a system, method, and/or computer product that includes generating translation requests that are identical but have different expected results, and transmitting the translation requests from a MMU tester to a non-core MMU disposed on a processor chip, where the non-core MMU is external to a processing core of the processor chip, and where the MMU tester is disposed on a computing component external to the processor chip. The method also includes receiving memory translation results from the non-core MMU at the MMU tester, and comparing the results to determine if there is a flaw in the non-core MMU. | 2019-06-20 |