16th week of 2020 patent application highlights part 45
Patent application number | Title | Published |
20200117529 | DIAGNOSIS OF DATA CENTER INCIDENTS WITH AUGMENTED REALITY AND COGNITIVE ANALYTICS - One embodiment provides a method for diagnosing data center incidents including receiving a data center incident report including information technology (IT) device incident information. Augmented reality (AR) is applied for an AR interface for receiving incident evidence information based on the IT device incident information. The incident evidence information is sent to a cognitive analytical process. Using the cognitive analytical process, statistical inference is determined and an incident diagnosis recommendation including analytical results is generated. The analytical results are received by the AR interface for determining a root cause of the incident report. | 2020-04-16 |
20200117530 | APPLICATION PERFORMANCE MANAGEMENT SYSTEM WITH COLLECTIVE LEARNING - An application performance management system is disclosed. Operational elements are dynamically discovered and extended when changes occur. Programmatic knowledge is captured. Particular instances of operational elements are recognized after changes have been made using a fingerprint/signature process. Metrics and metadata associated with a monitored operational element are sent in a compressed form to a backend for analysis. Metrics and metadata from multiple similar systems may be used to adjust/create expert rules to be used in the analysis of the state of an operational element. A 3-D user interface with both physical and logical representations may be used to display the results of the performance management system. | 2020-04-16 |
20200117531 | ERROR SOURCE MODULE IDENTIFICATION AND REMEDIAL ACTION - A server computing device is provided, including non-volatile memory and a processor. The processor may receive a plurality of telemetry signals from a plurality of modules executed on a plurality of computing devices. The plurality of modules may be arranged in a dependency hierarchy. The processor may further determine that the plurality of telemetry signals include a plurality of error signals indicating errors at one or more of the modules. Based on the plurality of error signals and a representation of the dependency hierarchy, the processor may further identify an error source module that, among the plurality of modules from which error signals are received, is highest in the dependency hierarchy. The processor may further select a remedial action based on the identification of the error source module. The processor may further output a remedial action notification including an indication of the error source module and/or the remedial action. | 2020-04-16 |
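The core of the error-source selection above is a walk up the dependency hierarchy. A minimal sketch, assuming a simple child-to-parent map; the module names, hierarchy, and action table are illustrative inventions, not details from the application:

```python
# Hypothetical sketch of the idea in 20200117531: given modules arranged in a
# dependency hierarchy and a set of modules reporting errors, pick the erroring
# module highest in the hierarchy as the likely error source, then select a
# remedial action for it. All names below are illustrative assumptions.

DEPENDS_ON = {            # child -> parent (closer to the root = higher)
    "auth": "gateway",
    "gateway": "loadbalancer",
    "billing": "gateway",
    "loadbalancer": None,
}

def depth(module):
    """Distance from the root of the dependency hierarchy (root = 0)."""
    d = 0
    while DEPENDS_ON[module] is not None:
        module = DEPENDS_ON[module]
        d += 1
    return d

def error_source(error_modules):
    """Among modules reporting errors, return the one highest in the hierarchy."""
    return min(error_modules, key=depth)

REMEDIAL_ACTIONS = {"loadbalancer": "restart", "gateway": "failover"}

source = error_source({"auth", "billing", "gateway"})
action = REMEDIAL_ACTIONS.get(source, "notify-operator")
```

Here `gateway` is chosen because `auth` and `billing` both sit below it, so their errors are plausibly downstream symptoms.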
20200117532 | DATACENTER IoT-TRIGGERED PREEMPTIVE MEASURES USING MACHINE LEARNING - One example method includes performing a machine learning process that involves performing an assessment of a state of a computing system, and the assessment includes analyzing information generated by an IoT edge sensor in response to a sensed physical condition in the computing system, and identifying an entity in the computing system potentially impacted by an event associated with the physical condition. The example method further includes identifying a preemptive recovery action and associating the preemptive recovery action with an entity, and the preemptive recovery action, when performed, reduces or eliminates an impact of the event on the entity, determining a cost associated with implementation of the preemptive recovery action, evaluating the cost associated with the preemptive recovery actions and identifying the preemptive recovery action with the lowest associated cost, implementing the preemptive recovery action with the lowest associated cost, and repeating part of the machine learning process. | 2020-04-16 |
20200117533 | Method and Apparatus for Predictive Failure Handling of Interleaved Dual In-Line Memory Modules - An information handling system includes interleaved dual in-line memory modules (DIMMs) that are partitioned into logical partitions, wherein each logical partition is associated with a namespace. A DIMM controller sets a custom DIMM-level namespace-based threshold to detect a DIMM error and to identify one of the logical partitions of the DIMM error using the namespace associated with the logical partition. The detected DIMM error is repaired if it exceeds an error correcting code (ECC) threshold. | 2020-04-16 |
20200117534 | ONLINE FAILURE SPAN DETERMINATION - An indication is received from a storage device that an attempt to read a portion of data from a block of the storage device has failed. A command is transmitted to the storage device to perform a scan on data stored at the block comprising the portion of data to acquire failure information associated with a plurality of subsets of the data stored at the block. The failure information associated with the plurality of subsets of the data stored at the block is received from the storage device. | 2020-04-16 |
20200117535 | SYSTEM FOR GENERATING DATAFLOW LINEAGE INFORMATION IN A DATA NETWORK - A system for aggregating dataflow lineage information is disclosed. The system receives one or more input data elements and determines a dataflow path for the one or more input data elements. The dataflow path includes at least a data storage node and a computation node. Then, the system identifies a lineage control value associated with the data storage node and a version control value associated with the computation node. The system generates an output lineage for the one or more input data elements by appending the lineage control value to the version control value. | 2020-04-16 |
20200117536 | Data Storage System and Method for Decoding Data based on Extrapolated Flipped-Bit Data - An error management system for a data storage device can generate soft-decision log-likelihood ratios (LLRs) using multiple reads of memory locations. 0-to-1 and 1-to-0 bit flip count data provided by multiple reads of reference memory locations can be used to generate probability data that is used to generate possible LLR values for decoding target pages. Possible LLR values are stored in one or more look-up tables. | 2020-04-16 |
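The flip-count-to-LLR step described above can be sketched directly. This is a simplified illustration assuming equal priors and a plain maximum-likelihood estimate of the flip probabilities from reference reads; the function names and counts are invented:

```python
import math

# Illustrative sketch of 20200117536's idea: use 0->1 and 1->0 flip counts from
# reads of reference memory locations to estimate flip probabilities, then turn
# them into soft-decision log-likelihood ratios (LLRs) for decoding. The simple
# ML probability estimate and equal-prior assumption are choices of this sketch.

def flip_probabilities(n_ref_zeros, n_ref_ones, flips_0_to_1, flips_1_to_0):
    """Estimate P(read 1 | stored 0) and P(read 0 | stored 1) from reference reads."""
    p01 = flips_0_to_1 / n_ref_zeros
    p10 = flips_1_to_0 / n_ref_ones
    return p01, p10

def llr(read_bit, p01, p10):
    """LLR = log P(stored 0 | read) / P(stored 1 | read), assuming equal priors."""
    if read_bit == 0:
        return math.log((1 - p01) / p10)
    return math.log(p01 / (1 - p10))

p01, p10 = flip_probabilities(1000, 1000, 20, 50)
# A bit read as 0 is strong evidence for a stored 0 (positive LLR):
strong = llr(0, p01, p10)
```

In a real device these values would populate the look-up tables the abstract mentions, indexed by read pattern rather than computed per bit.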
20200117537 | METHODS AND DEVICES FOR ERROR CORRECTION - Methods, systems, and devices are described herein for using codewords to detect or correct errors in data (e.g., data stored in a memory device). A host device may generate one or more codewords associated with data to be stored in the memory device. In some cases, the host device may generate one or more codewords for error detection and correction (e.g., corresponding to data transmitted by the host device to the memory device). In some cases, the host device may transmit the codewords and the associated data using an extended (e.g., adjustable) burst length such that the one or more codewords may be included in the burst along with the data. Additionally or alternatively, the host device may transmit one or more of the codewords over one or more channels different than the one or more channels used to transmit the data. | 2020-04-16 |
20200117538 | MOBILE NAND PARITY INFORMATION TECHNIQUES - Disclosed in some examples are techniques for handling parity data of a non-volatile memory device with limited cache memory. In certain examples, user data can be programmed into the non-volatile memory of the non-volatile memory device in data stripes, and parity information can be calculated for each individual data stripe within a limited capacity cache of the non-volatile memory device. The individual parity information can be swapped between a swap block of the non-volatile memory and the limited capacity cache as additional data stripes are programmed. | 2020-04-16 |
20200117539 | STORING DEEP NEURAL NETWORK WEIGHTS IN NON-VOLATILE STORAGE SYSTEMS USING VERTICAL ERROR CORRECTION CODES - Techniques are presented for efficiently storing deep neural network (DNN) weights or similar type data sets in non-volatile memory. For data sets, such as DNN weights, where the elements are multi-bit values, bits of the same level of significance from the elements of the data set are formed into data streams. For example, the most significant bits of the data elements are formed into one data stream, the next most significant bits into a second data stream, and so on. The different bit streams are then encoded with differing strengths of error correction code (ECC), with streams corresponding to more significant bits encoded with stronger ECC code than streams corresponding to less significant bits, giving the more significant bits of the data set elements a higher level of protection. | 2020-04-16 |
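The bit-plane split described above is easy to show concretely. A minimal sketch assuming 8-bit weights; a real implementation would be vectorized and would feed each stream to a different-strength ECC encoder, which is not shown:

```python
# Sketch of the bit-plane idea in 20200117539: split 8-bit weight values into
# eight bit streams by significance, so the MSB stream can be given a stronger
# ECC than the LSB stream. The weight values are illustrative.

def to_bit_streams(weights, bits=8):
    """streams[0] holds the MSBs of all weights, streams[bits-1] the LSBs."""
    return [[(w >> (bits - 1 - i)) & 1 for w in weights] for i in range(bits)]

def from_bit_streams(streams):
    bits = len(streams)
    n = len(streams[0])
    return [sum(streams[i][j] << (bits - 1 - i) for i in range(bits))
            for j in range(n)]

weights = [200, 5, 129, 77]
streams = to_bit_streams(weights)      # 8 streams of 4 bits each
restored = from_bit_streams(streams)   # round-trips back to the weights
```

An error in the LSB stream perturbs a weight by at most 1, while an MSB error changes it by 128, which is the rationale for the unequal protection.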
20200117540 | ERROR CORRECTION MANAGEMENT FOR A MEMORY DEVICE - Methods, systems, and devices for error correction management are described. A system may include a memory device that supports internal detection and correction of corrupted data, and whether such detection and correction functionality is operating properly may be evaluated. A known error may be included (e.g., intentionally introduced) into either data stored at the memory device or an associated error correction codeword, among other options, and data or other indications subsequently generated by the memory device may be evaluated for correctness in view of the error. Thus, either the memory device or a host device coupled with the memory device, among other devices, may determine whether error detection and correction functionality internal to the memory device is operating properly. | 2020-04-16 |
20200117541 | STORING DATA IN A DATA SECTION AND PARITY IN A PARITY SECTION OF COMPUTING DEVICES - A method includes generating, by a processing entity of a computing system, a plurality of parity blocks from a plurality of lines of data blocks. A first number of parity blocks of the plurality of parity blocks is generated from a first line of data blocks of the plurality of lines of data blocks. The method further includes storing, by the processing entity, the plurality of lines of data blocks in data sections of memory of a cluster of computing devices of the computing system in accordance with a read/write balancing pattern and a restricted file system. The method further includes storing, by the processing entity, the plurality of parity blocks in parity sections of memory of the cluster of computing devices in accordance with the read/write balancing pattern and the restricted file system. | 2020-04-16 |
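The per-line parity generation above can be sketched with single XOR parity; the application allows a configurable number of parity blocks per line, and the block contents here are invented:

```python
# Minimal sketch of generating a parity block per line (stripe) of data blocks,
# in the spirit of 20200117541. Only single XOR parity is shown.

def parity_block(data_blocks):
    """XOR the bytes of every data block in one line into a single parity block."""
    parity = bytearray(len(data_blocks[0]))
    for block in data_blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

line = [b"\x01\x02\x03", b"\x04\x05\x06", b"\x08\x10\x20"]
p = parity_block(line)

# Any single lost block can be rebuilt by XORing the parity with the survivors:
rebuilt = parity_block([p, line[1], line[2]])
```

Storing `p` in a separate parity section, as the abstract describes, keeps rebuild reads from competing with data reads on the same devices.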
20200117542 | MULTIPLE NODE REPAIR USING HIGH RATE MINIMUM STORAGE REGENERATION ERASURE CODE - A distributed storage system can use a high rate MSR erasure code to repair multiple nodes when multiple node failures occur. An encoder constructs m r-ary trees to determine the symbol arrays for the parity nodes. These symbol arrays are used to generate the parity data according to parity definitions or parity equations. The m r-ary trees are also used to identify a set of recovery rows across helper nodes for repairing a systematic node. When failed systematic nodes correspond to different ones of the m r-ary trees, a decoder may select additional recovery rows. The decoder selects additional recovery rows when the parity definitions do not provide a sufficient number of independent linear equations to solve the unknown symbols of the failed nodes. The decoder can select recovery rows contiguous to the already identified recovery rows for access efficiency. | 2020-04-16 |
20200117543 | METHOD, ELECTRONIC DEVICE AND COMPUTER READABLE STORAGE MEDIUM FOR DATA BACKUP AND RECOVERY - Embodiments of the present disclosure relate to a method, electronic device and computer readable storage medium for data backup and recovery. The data backup method comprises: receiving data to be backed up and metadata describing the data to be backed up, the data to be backed up comprising a file and a directory, the metadata comprising file data associated with the file and directory data associated with the directory; generating path data associated with both the file and the directory based on the file data and the directory data; and storing the file data, the directory data, and the path data in association with the data to be backed up. Correspondingly, the data recovery method comprises: receiving information about data to be recovered; in response to the information being related to a path, obtaining path data associated with both a file and a directory, the path data being generated based on file data associated with the file and directory data associated with the directory; determining, based on the path data, metadata describing the data to be recovered; obtaining, based on the metadata, the data to be recovered; and transmitting the data to be recovered to implement data recovery. In this way, rapid recovery of the backup data is achieved. | 2020-04-16 |
20200117544 | DATA BACKUP SYSTEM AND DATA BACKUP METHOD - The disclosure provides a data backup system. The data backup system comprises an electronic device and a server. The electronic device is configured to store original data. The server predicts a data size of predicted compressing data and first predicted compressing times corresponding to the predicted compressing data, which are generated by compressing the original data with a plurality of compressing algorithms, respectively. The server fetches computing resource data of the electronic device and predicts a plurality of second predicted compressing times for which the electronic device compresses the original data, according to the computing resource data and the plurality of first predicted compressing times. The server computes a plurality of reference data and generates a recommending command according to a default compressing algorithm, of the plurality of compressing algorithms, which corresponds to the minimal reference data. | 2020-04-16 |
20200117545 | METHOD, DEVICE AND COMPUTER PROGRAM PRODUCT FOR BACKUPING VIRTUAL MACHINE - Embodiments of the present disclosure relate to a method, device, and computer program product for backing up a virtual machine. In one embodiment, the method includes obtaining a first file path in a first operating system installed on a virtual machine, wherein the virtual machine executes on a second operating system. The method further includes determining a second file path in the second operating system corresponding to the first file path based on the first file path and backing up one or more files in the virtual machine based on the second file path. | 2020-04-16 |
20200117546 | MEMORY EFFICIENT PERFECT HASHING FOR LARGE RECORDS - Embodiments are described for memory-efficient perfect hashing of large records. A container ID set is divided into multiple fixed range sizes. These ranges are then mapped into perfect hash buckets until each bucket is filled, to uniformly distribute the container IDs across different perfect hash buckets so that the number of CIDs in every perfect hash bucket is the same or nearly the same. Individual perfect hash functions are created for each perfect hash bucket. With container IDs as keys, the process maps n keys to n positions to reduce any extra memory overhead. The perfect hash function is implemented using a compress, hash, displace (CHD) algorithm using two levels of hash functions. The level 1 hash function divides the keys into multiple internal buckets with a defined average number of keys per bucket. The CHD algorithm iteratively tries different level 2 hash variables to achieve collision-free mapping. | 2020-04-16 |
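The two-level hash-and-displace construction can be sketched as follows. This is a simplified illustration without CHD's compression step; the bucket load factor, choice of BLAKE2b as the hash, and the container-ID format are assumptions of this sketch, not details from the application:

```python
import hashlib

# Rough sketch of the two-level construction referenced in 20200117546: a
# level-1 hash buckets the keys, then a per-bucket displacement is searched
# until every key in the bucket lands in a free slot, mapping n keys to
# exactly n positions (a minimal perfect hash).

def h(key, seed, mod):
    digest = hashlib.blake2b(f"{seed}:{key}".encode(), digest_size=8).digest()
    return int.from_bytes(digest, "big") % mod

def build_phf(keys, n_buckets=None):
    n = len(keys)
    n_buckets = n_buckets or max(1, n // 2)   # ~2 keys per internal bucket
    buckets = [[] for _ in range(n_buckets)]
    for k in keys:
        buckets[h(k, 0, n_buckets)].append(k)           # level-1 hash
    displacements = [0] * n_buckets
    occupied = [False] * n
    for b in sorted(range(n_buckets), key=lambda i: -len(buckets[i])):
        d = 1
        while True:                                     # level-2 displacement search
            slots = [h(k, d, n) for k in buckets[b]]
            if len(set(slots)) == len(slots) and not any(occupied[s] for s in slots):
                for s in slots:
                    occupied[s] = True
                displacements[b] = d
                break
            d += 1
    return displacements, n_buckets, n

def lookup(key, phf):
    displacements, n_buckets, n = phf
    return h(key, displacements[h(key, 0, n_buckets)], n)

cids = [f"container-{i}" for i in range(64)]
phf = build_phf(cids)
positions = sorted(lookup(c, phf) for c in cids)
```

Only the displacement array needs to be kept in memory, which is the source of the memory efficiency the abstract claims.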
20200117547 | SYSTEM STATE RECOVERY IN A DISTRIBUTED, CLOUD-BASED STORAGE SYSTEM - The system state recovery methods, systems and products disclosed herein enable an efficient means of recovering from a permanent site outage event in a distributed, block-based storage system. Embodiments teach using directory trees and journal updates for neighboring zones, which are still operational, as a means of recovering data for the site experiencing an outage. We further disclose load balancing techniques in order to improve efficiency of recovery. Load balancing is performed by selecting a leader zone and a group of non-leaders, which will comprise a set of recovery drivers. The systems within the set of recovery drivers are used to piece together lost data from the zone experiencing an outage. In embodiments, the systems, methods and products could be used with an Elastic Cloud System™. | 2020-04-16 |
20200117548 | OPTIMIZED BACKUP OF CLUSTERS WITH MULTIPLE PROXY SERVERS - Systems and methods for backing up and restoring virtual machines in a cluster environment. Proxy nodes in the cluster are configured with agents. The agents are configured to perform backup operations and restore operations for virtual machines operating in the cluster. During a backup operation or during a restore operation, a load associated with the backup/restore operation is distributed across at least some of the proxy nodes. The proxy nodes can backup/restore virtual machines on any of the nodes in the cluster. | 2020-04-16 |
20200117549 | SYSTEM AND METHOD FOR DEVICE INDEPENDENT BACKUP IN DISTRIBUTED SYSTEM - A production host for hosting applications includes a persistent storage and a production agent. The persistent storage stores application data of the applications. The production agent obtains a backup analysis request for an application executing on the production host; in response to obtaining the backup analysis request, it obtains an identity of the application, identifies backups in a backup storage that are associated with the identity of the application, and performs a backup policy compliance analysis of the identified backups to generate a backup protection map for the application. | 2020-04-16 |
20200117550 | METHOD, DEVICE AND COMPUTER PROGRAM PRODUCT FOR BACKING UP DATA - Embodiments of the present disclosure provide a method, a device and a computer program for data backup. A method of backing up data comprises: in response to receiving, from an application system, a request for backing up first data, storing the first data into a first backup node; generating first metadata corresponding to the first data, the first metadata comprising first digest information of the first data; storing the first metadata into a block chain system to which the first backup node belongs; and verifying integrity of the first data stored in the first backup node with the first metadata stored in the block chain system. By utilizing a block chain system, the embodiments of the present disclosure ensure that data backed up are protected from being tampered with. | 2020-04-16 |
20200117551 | EFFICIENT RECOVERY OF BACKUPS FOR DELETED CHECKPOINTS - Backup operations may save a full backup and subsequent checkpoints. Systems and methods are provided for handling backup and restore operations when checkpoints are deleted. Checkpoints can be merged during a restore operation to account for deleted checkpoints. Also, the backup can continue to leverage existing backups even though checkpoints have been deleted. | 2020-04-16 |
20200117552 | TIERED FORENSICS OF IoT SYSTEMS IN CLOUD AND TIME SERIES DATABASES - One example method includes creating an empty reconstruction stream database, identifying a data time interval, identifying data sources in which data was stored during the data time interval, reading data from the data sources, where the data read out from the data sources are associated with respective timestamps that fall within the data time interval, inserting the read out data into the empty reconstruction stream database so as to create a high resolution data stream, where the data are ordered in the empty reconstruction stream database according to timestamp, processing the data in the high resolution data stream and, based on the processing of the data, identifying and resolving a problem relating to an operating environment in which the data was initially generated. | 2020-04-16 |
20200117553 | METHOD, APPARATUS, AND COMPUTER-READABLE STORAGE MEDIUM HAVING INSTRUCTIONS FOR CANCELLING A REDUNDANCY OF TWO OR MORE REDUNDANT MODULES - A method, an apparatus, and a computer-readable storage medium having instructions for cancelling a redundancy of two or more redundant modules. Results of the two or more redundant modules are received; reliabilities of the results are ascertained; and, based on the ascertained reliabilities, an overall result is determined from the results. The overall result is output for further processing. | 2020-04-16 |
20200117554 | IPS SOC PLL MONITORING AND ERROR REPORTING - The systems and methods described herein provide the ability to detect a clocking element fault within an IC device and switch to an alternate clock. In response to detection of a fault in a phase-locked loop (PLL) clocking element, the device may switch to an alternate clock so that error reporting logic can make forward progress on generating an error message. The error message may be generated within an Intellectual Property (IP) core (e.g., an IP block), and may be sent from the IP core to a system-on-a-chip (SOC), such as through an SOC Functional Safety (FuSA) error reporting infrastructure. In various examples, the clocking error may also be output to a hardware SOC pin, such as to provide a redundant path for error indication. | 2020-04-16 |
20200117555 | TECHNIQUES FOR MAINTAINING COMMUNICATIONS SESSIONS AMONG NODES IN A STORAGE CLUSTER SYSTEM - Various embodiments are generally directed to techniques for preparing to respond to failures in performing a data access command to modify client device data in a storage cluster system. An apparatus may include a processor component of a first node coupled to a first storage device; an access component to perform a command on the first storage device; a replication component to exchange a replica of the command with the second node via a communications session formed between the first and second nodes to enable at least a partially parallel performance of the command by the first and second nodes; and a multipath component to change a state of the communications session from inactive to active to enable the exchange of the replica based on an indication of a failure within a third node that precludes performance of the command by the third node. Other embodiments are described and claimed. | 2020-04-16 |
20200117556 | MAINTAINING STORAGE ARRAY ONLINE - Embodiments of the present disclosure relate to a method, system and computer program product for maintaining a storage array online. According to the method, an unrecoverable error is detected by one or more processors as having occurred in a failed disk of a storage array in first storage. The failed disk is replaced with a spare disk in the first storage. Data is retrieved from a second storage for storing into a stripe of the first storage based on address information of a data block of the failed disk. The second storage stores mirrored data of data stored in the first storage. The stripe includes data blocks distributed across all disks in the storage array of the first storage. The retrieved data is caused to be written into the stripe of the storage array of the first storage. In other embodiments, a system and a computer program product are disclosed. | 2020-04-16 |
20200117557 | REACTIVE READ BASED ON METRICS TO SCREEN DEFECT PRONE MEMORY BLOCKS - A variety of applications can include apparatus and/or methods to preemptively detect defect-prone memory blocks in a memory device and handle these memory blocks before they fail and trigger a data loss event. Metrics based on memory operations can be used to facilitate the examination of the memory blocks. One or more metrics associated with a memory operation on a block of memory can be tracked and a Z-score for each metric can be generated. In response to a comparison of a Z-score for a metric to a Z-score threshold for the metric, operations can be performed to control possible retirement of the memory block, beginning with the comparison. Additional apparatus, systems, and methods are disclosed. | 2020-04-16 |
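The Z-score screening step is simple to sketch. The metric ("read retry count"), its values, and the threshold below are invented for illustration:

```python
import statistics

# Sketch of the Z-score screening idea in 20200117557: track a per-block metric,
# convert each block's value into a Z-score against the population, and flag
# blocks whose score crosses a threshold as candidates for retirement.

def z_scores(values):
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values) or 1.0   # guard the no-spread case
    return [(v - mean) / stdev for v in values]

def defect_prone_blocks(retry_counts, z_threshold=2.0):
    return [block for block, z in enumerate(z_scores(retry_counts))
            if z > z_threshold]

retries = [3, 2, 4, 3, 2, 3, 4, 2, 3, 40]   # block 9 is an outlier
suspects = defect_prone_blocks(retries)
```

A population-relative score like this adapts to drive-wide wear, whereas a fixed retry threshold would trip on every block as the device ages.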
20200117558 | INTELLIGENT POST-PACKAGING REPAIR - Techniques are provided for storing a row address of a defective row of memory cells to a bank of non-volatile storage elements (e.g., fuses or anti-fuses). After a memory device has been packaged, one or more rows of memory cells may become defective. In order to repair (e.g., replace) the rows, a post-package repair (PPR) operation may occur to replace the defective row with a redundant row of the memory array. To replace the defective row with a redundant row, an address of the defective row may be stored (e.g., mapped) to an available bank of non-volatile storage elements that is associated with a redundant row. Based on the bank of non-volatile storage elements storing the address of the defective row, subsequent access operations may utilize the redundant row and not the defective row. | 2020-04-16 |
20200117559 | DATA STORAGE DEVICE AND OPERATING METHOD THEREOF - A data storage device includes a memory device comprising a plurality of dies, each of the dies comprising a plurality of planes, and each of the planes comprising a plurality of memory blocks; and a controller controlling an operation of the memory device. The controller generates a super block including memory blocks among the plurality of memory blocks, determines, when a bad block is present in the generated super block, whether there is a replacement block obtainable by way-interleaving, and regenerates the super block by replacing the bad block with a replacement block obtained through channel-interleaving when no replacement block is obtainable by way-interleaving. The controller assigns the regenerated super block for an operation when data obtained by channel-interleaving is received, and controls the memory device to store the data in the regenerated super block. | 2020-04-16 |
20200117560 | ERASURE CODING REPAIR AVAILABILITY - Distributed storage systems frequently use a centralized metadata repository that stores metadata in an eventually consistent distributed database. However, a metadata repository cannot be relied upon for determining which erasure coded fragments are lost because of storage node failure(s). Instead, when recovering a failed storage node, a list of missing fragments is generated based on fragments stored in storage devices of available storage nodes. A storage node performing the recovery sends a request to one or more of the available storage nodes for a fragment list. The fragment list is generated, not based on a metadata database, but on scanning storage devices for fragments related to the failed storage node. The storage node performing the recovery merges retrieved lists to create a master list indicating fragments that should be regenerated for recovery of the failed storage node(s). | 2020-04-16 |
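The list-merging step described above can be sketched with plain set operations. The (object, fragment-index) naming scheme and the 4+2 code geometry are invented stand-ins for whatever the real system uses:

```python
# Sketch of the recovery flow in 20200117560: the recovering node asks each
# available storage node to scan its drives and report the erasure-coded
# fragments it holds, then merges the lists to infer which fragments were
# lost with the failed node. A 4+2 code (6 fragments per object) is assumed.

FRAGMENTS_PER_OBJECT = 6   # e.g. 4 data + 2 parity

def missing_fragments(per_node_scan_results):
    """Merge scan lists of (object_id, fragment_index) and list what's absent."""
    present = set()
    for scan in per_node_scan_results:
        present.update(scan)
    objects = {obj for obj, _ in present}
    return sorted((obj, idx) for obj in objects
                  for idx in range(FRAGMENTS_PER_OBJECT)
                  if (obj, idx) not in present)

scans = [
    [("obj-a", 0), ("obj-b", 1)],                                 # node 1
    [("obj-a", 1), ("obj-a", 2), ("obj-b", 2)],                   # node 2
    [("obj-a", 3), ("obj-a", 4), ("obj-b", 0), ("obj-b", 3), ("obj-b", 4)],  # node 3
]
lost = missing_fragments(scans)   # fragments the failed node must have held
```

Deriving the master list from live scans rather than the metadata database sidesteps the eventual-consistency problem the abstract points out.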
20200117561 | USING SYSTEM ERRORS AND MANUFACTURER DEFECTS IN SYSTEM COMPONENTS CAUSING THE SYSTEM ERRORS TO DETERMINE A QUALITY ASSESSMENT VALUE FOR THE COMPONENTS - Provided are a computer program product, system, and method for using system errors and manufacturer defects in system components causing the system errors to determine a quality assessment value for the components. A system error message indicates at least one system error resulting from an operation of at least one component deployed in the system. A manufacturing defect for the at least one component whose operation results in the at least one system error is determined from information from a manufacturer of the component. A quality assessment value is determined from the system error and manufacturing defect, for each of the at least one component for which there is a manufacturing defect. A message is transmitted to an administrator of the system indicating a negative assessment of the component in response to a comparison of the quality assessment value and a threshold value indicating a negative assessment. | 2020-04-16 |
20200117562 | IMPLEMENTING POWER UP DETECTION IN POWER DOWN CYCLE TO DYNAMICALLY IDENTIFY FAILED SYSTEM COMPONENT RESULTING IN LOSS OF RESOURCES PREVENTING IPL - A method and apparatus for implementing power up detection in a power down cycle to dynamically determine whether a failed component in a system prevents another Initial Program Load (IPL) or re-IPL, or results in a loss of resources. Predefined mandatory functions are called to collect power down/up data that prevents re-IPL or results in the reduction of resources. A user is notified, allowing the customer to continue utilizing the system while ordering hardware to be replaced. | 2020-04-16 |
20200117563 | TTCN-based test system and method for testing test-cases, non-transitory computer-readable recording medium - The present invention relates to a TTCN-based test system for testing test-cases, the test system comprising: a test executable comprising a compiled TTCN code; a simulated device under test (DUT) comprising a pre-recorded log-file which describes at least partially the behavior of the simulated DUT; a test runtime interface between the test executable and the simulated DUT; and a test computer (PC) which is configured to perform the testing by executing the compiled TTCN code and by exchanging protocol messages with the simulated DUT via the test runtime interface during the execution of the compiled TTCN code. The invention further relates to a TTCN-based test method for testing test-cases and a non-transitory computer-readable recording medium. | 2020-04-16 |
20200117564 | TEST CONTROLLER FOR CONCURRENT TESTING OF AN APPLICATION ON MULTIPLE DEVICES - A test controller interfacing between a master computing device and slave computing devices includes a processor configured to launch a master application on the master computing device and a slave application to be tested on each respective slave computing device, with each slave application being the same as the master application. The processor is also configured to receive from the master computing device an input test command along with a test result based on execution of the input test command by the master application, and transmit the received input test command to each slave computing device. In addition, the processor is configured to receive a respective test result from each slave computing device based on execution of the received input test command, and compare each respective test result from the slave computing devices to the test result from the master computing device. | 2020-04-16 |
20200117565 | ENHANCED IN-SYSTEM TEST COVERAGE BASED ON DETECTING COMPONENT DEGRADATION - In various examples, permanent faults in hardware component(s) and/or connections to the hardware component(s) of a computing platform may be predicted before they occur using in-system testing. As a result of this prediction, one or more remedial actions may be determined to enhance the safety of the computing platform (e.g., an autonomous vehicle). A degradation rate of a performance characteristic associated with the hardware component may be determined, detected, and/or computed by monitoring values of performance characteristics over time using fault testing. | 2020-04-16 |
20200117566 | Methods and Systems to Determine Baseline Event-Type Distributions of Event Sources and Detect Changes in Behavior of Event Sources - Automated methods and systems to determine a baseline event-type distribution of an event source and use the baseline event-type distribution to detect changes in the behavior of the event source are described. In one implementation, blocks of event messages generated by the event source are collected and an event-type distribution is computed for each block of event messages. Candidate baseline event-type distributions are determined from the event-type distributions, and the candidate with the largest entropy is selected as the baseline event-type distribution. A normal discrepancy radius of the event-type distributions is computed from the baseline event-type distribution and the event-type distributions. A block of run-time event messages generated by the event source is collected. A run-time event-type distribution is computed from the block of run-time event messages. When the run-time event-type distribution is outside the normal discrepancy radius, an alert is generated indicating abnormal behavior of the event source. | 2020-04-16 |
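The pipeline above (distributions per block, max-entropy baseline, discrepancy radius, run-time check) can be sketched end to end. The L1 distance measure, the event types, and the message counts are illustrative choices, not details from the application:

```python
import math
from collections import Counter

# Sketch of 20200117566's pipeline: compute an event-type distribution per
# block of event messages, take the candidate with the largest entropy as the
# baseline, derive a normal discrepancy radius from the historical
# distributions, and flag a run-time block that falls outside it.

EVENT_TYPES = ["info", "warn", "error"]

def distribution(block):
    counts = Counter(block)
    total = len(block)
    return [counts[t] / total for t in EVENT_TYPES]

def entropy(dist):
    return -sum(p * math.log(p) for p in dist if p > 0)

def l1(d1, d2):
    return sum(abs(a - b) for a, b in zip(d1, d2))

history = [
    ["info"] * 90 + ["warn"] * 8 + ["error"] * 2,
    ["info"] * 85 + ["warn"] * 12 + ["error"] * 3,
    ["info"] * 80 + ["warn"] * 15 + ["error"] * 5,
]
dists = [distribution(b) for b in history]
baseline = max(dists, key=entropy)                 # highest-entropy candidate
radius = max(l1(baseline, d) for d in dists)       # normal discrepancy radius

runtime_block = ["info"] * 40 + ["warn"] * 10 + ["error"] * 50
abnormal = l1(baseline, distribution(runtime_block)) > radius
```

Choosing the highest-entropy candidate makes the baseline the most "spread out" distribution seen in normal operation, so ordinary variation stays inside the radius.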
20200117567 | DUMP ANALYSIS GENERATION - A method includes recording, in a first database table, user interactions of one or more users with a user interface, retrieving a list of runtime errors that have occurred in a system resulting from the user interactions, for each runtime error in the list, identifying a type of the runtime error comprising one of a first type and a second type, and retrieving a corresponding call stack comprising a sequence of function calls that led to the runtime error, storing information from the call stack in a second database table, correlating the user interactions recorded in the recording step with the function calls in the call stack, and providing, on a display device, a visual reproduction of processing steps leading up to the runtime error using the correlations in the correlating step. | 2020-04-16 |
20200117568 | DESIGN METHOD FOR IMPLEMENTING BACKPLANE LIGHTING FOR MULTIPLE NVME HARD DISKS - A method for lighting a backplane lamp of multiple NVMe hard disks is provided. The method includes: transmitting a VPP address to the backplane in a cyclic manner by the controller, and analyzing the address transmitted by the controller by a programmable logic device of the backplane after a data stream transmitted by the controller is received; transmitting, by the controller, hard disk lamp lighting information of a corresponding disk position to the programmable logic device of the backplane, if a VPP address analyzed by the backplane is the same as the VPP address transmitted by the controller; and performing logical conversion on the hard disk lamp lighting information, to convert a serial data stream on the VPP signal wires into a parallel signal, lighting a backplane lamp at a corresponding port, and uploading information of a position of the hard disk to the controller. | 2020-04-16 |
20200117569 | GRAPHICAL USER INTERFACE FOR VISUAL CORRELATION OF VIRTUAL MACHINE INFORMATION AND STORAGE VOLUME INFORMATION - The disclosed embodiments include a method for identifying a performance metric to diagnose a cause of a performance issue of a virtual machine. The method includes obtaining data of a virtual machine, an indication that a storage volume contains data of the virtual machine, data about the storage volume, and an identification of the storage volume. The data of the virtual machine is correlated with the data about the storage volume based on the indication that the storage volume contains data of the virtual machine and the identification of the storage volume. A performance metric is identified based at least in part on an outcome of the correlating. The performance metric indicates that the storage volume is a cause of a performance issue of the virtual machine. A state related to the storage volume is changed to mitigate the cause of the performance issue of the virtual machine. | 2020-04-16 |
20200117570 | ANALYZING LARGE-SCALE DATA PROCESSING JOBS - Methods, systems, and apparatus for data analysis in a distributed computing system by accessing data stored at a first processing zone associated with a distributed data processing job, detecting information identifying a particular child job associated with the distributed data processing job, comparing the identifying information to data stored at a second processing zone, and identifying an additional child job as associated with the distributed data processing job based on a result of the comparison. The methods, systems and apparatus are further for correlating particular output data associated with the particular child job and additional output data associated with the additional child job for the distributed data processing job, determining performance data for the distributed data processing job based on the output data associated with each of the particular child job and the additional child job, and providing for display the performance data for the distributed data processing job. | 2020-04-16 |
20200117571 | DERIVING COMPONENT STATISTICS FOR A STREAM ENABLED APPLICATION - A technique for generating component usage statistics involves associating components with blocks of a stream-enabled application. When the stream-enabled application is executed, block requests may be logged by Block ID in a log. The frequency of component use may be estimated by analyzing the block request log with the block associations. | 2020-04-16 |
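The block-log analysis in this abstract reduces to a counting pass. A minimal sketch, assuming a log of Block IDs and a block-to-component association map (names are illustrative, not from the application):

```python
from collections import Counter

def component_usage(block_log, block_to_components):
    """Estimate component use frequency from logged Block IDs.

    block_log: iterable of Block IDs requested during streaming execution.
    block_to_components: maps each Block ID to the components it contains.
    """
    usage = Counter()
    for block_id in block_log:
        for component in block_to_components.get(block_id, ()):
            usage[component] += 1
    return usage
```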
20200117572 | PROBLEM DIAGNOSIS TECHNIQUE OF MEMORY CORRUPTION BASED ON REGULAR EXPRESSION GENERATED DURING APPLICATION COMPILING - According to one embodiment, a method, computer system, and computer program product for memory corruption diagnosis is provided. The present invention may include generating a pattern expression (PE) header file, wherein a plurality of common datatypes associated with a software program are pre-defined. The invention may further include generating a PE for each of the plurality of common datatypes, and generating a PE table by merging the generated PEs for each of the plurality of common datatypes. Upon discovery that memory corruption has occurred, the invention may include transmitting a recorded state of the software program as a core dump file to a server, and using a dump utility to identify overlay content of the core dump file. Lastly, the invention may include identifying a possible source program of the memory corruption by matching the PE table against the illegally-written overlay content. | 2020-04-16 |
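The PE-table matching step might look roughly like the following. The pattern expressions and datatype names below are invented for illustration; the application's actual PE format is not specified in the abstract:

```python
import re

# Hypothetical pattern expressions for common datatypes, keyed by the
# source structure they describe (not the application's actual PE format).
PE_TABLE = {
    "timestamp_record": re.compile(rb"\d{10}:\w{4}"),
    "session_struct":   re.compile(rb"SESS-\d{4}-[A-F0-9]{8}"),
}

def identify_corruption_source(overlay_bytes):
    """Match the overlay content of a core dump against the PE table
    to guess which datatype (and hence which program) overwrote memory."""
    return [name for name, pe in PE_TABLE.items() if pe.search(overlay_bytes)]
```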
20200117573 | LINKING SOURCE CODE WITH COMPLIANCE REQUIREMENTS - Concepts for linking source code with compliance requirements are presented. One example comprises analyzing a set of compliance requirements to identify one or more compliance topics. The example further comprises determining keywords for the identified one or more compliance topics. An item of source code is then analyzed to identify occurrences of the keywords in the source code. Mapping information representing a relationship between the item of source code and the compliance requirements is then generated based on the identified occurrences of the keywords. | 2020-04-16 |
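The keyword-mapping step can be sketched as a simple scan; function, topic, and keyword names here are hypothetical:

```python
import re

def map_source_to_requirements(source_code, topic_keywords):
    """Map compliance topics to the source lines containing their keywords.

    topic_keywords: maps a compliance topic to its keyword list.
    Returns {topic: [1-based line numbers where a keyword occurs]}.
    """
    mapping = {}
    lines = source_code.splitlines()
    for topic, keywords in topic_keywords.items():
        pattern = re.compile("|".join(re.escape(k) for k in keywords),
                             re.IGNORECASE)
        hits = [i + 1 for i, line in enumerate(lines) if pattern.search(line)]
        if hits:
            mapping[topic] = hits
    return mapping
```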
20200117574 | AUTOMATIC BUG VERIFICATION - Embodiments of the present disclosure relate to a method, device and computer program product for software bug verification. In one embodiment, the method includes determining a test action for verification of a software bug to be verified based on an identification of the software bug. The method further includes determining similarities between the test action and a plurality of historical test actions. The method further includes in response to a similarity between the test action and at least one of the plurality of historical test actions exceeding a threshold similarity, associating the test action with a code fragment category associated with the at least one historical test action. The method further includes verifying the software bug by running one code fragment in the code fragment category. | 2020-04-16 |
20200117575 | SYSTEMS AND METHODS FOR VALIDATING DOMAIN SPECIFIC MODELS - Model driven engineering (MDE) approaches necessitate verification and validation (V&V) of the models used. Balancing usability of modeling languages with verifiability of the specification presents several challenges. Conventional modeling languages have automated solvers but are hard to interpret and use. Implementations of the present disclosure provide systems and methods for validating domain specific models wherein rules and vocabularies in a domain specific model are translated to machine interpretable rules (MIR) and machine interpretable vocabularies (MIV), which are processed (via a logic programming technique) to generate a logic programming representation (LPR) of the MIR and MIV, based on which solution scenarios are generated for validating the domain specific model. Validation of the model involves verifying the LPR using a set of ground facts. During validation of the solution scenarios, the system also checks for inconsistencies in the rules, if any. Inconsistent rules are corrected and the solution scenarios are re-generated to obtain anomaly-free solution scenarios. | 2020-04-16 |
20200117576 | ASSESSING THE CONTAINER-READINESS OF SOFTWARE APPLICATIONS - Techniques are disclosed relating to assessing the container-readiness of a software application. In some embodiments, a computer system performs an assessment of the container-readiness of a software application relative to a specified containerization procedure. The assessment may be based on a plurality of parameters associated with the software application. In some embodiments, the assessment includes parsing program code associated with the software application to determine, for one or more static parameters, corresponding static parameter scores. The assessment may further include analyzing runtime information corresponding to the software application to determine a runtime parameter score for at least one runtime parameter. Further, the assessment may include generating a container-readiness value for the software application based on the runtime parameter score and the static parameter scores. In some embodiments, the container-readiness value is indicative of a degree of compliance with the specified containerization procedure. | 2020-04-16 |
20200117577 | SCALABLE AI FRAMEWORK FOR TEST AUTOMATION - In one aspect, there is provided a method for software testing. The method may include executing a test script including at least one test instruction requiring an input at a user interface element displayed on a screen of a device under test; determining, based on a machine learning model, a candidate location on the screen of the device under test, the candidate location representing a candidate portion of the screen having the user interface element for the required input associated with the at least one test instruction; recognizing, based on optical character recognition, one or more characters in the determined candidate location; selecting, based on the recognized characters, the determined candidate location as the user interface element having the required input; and executing an inserted value at the determined candidate location to test a result of the test script execution. | 2020-04-16 |
20200117578 | METHOD AND APPARATUS FOR DEBUGGING DEVICES - Techniques are described for debugging node devices. A node device may be connected to a host device for debugging purposes. A debugger, providing debug functionality, such as a debugging web application, may run on a remote server and be accessed via a web browser running at the host device, to debug the node device. Alternatively, the debugging web application may execute in the web browser running at the host device to debug the node device. In another alternative, the debugging web application may execute at a gateway device provided between the node device and the host device. In all cases the debugging web application is controlled via a debug user interface running at the web browser. Consequently, a user of the host device is not required to install a debugger at the host device in order to debug a node device. | 2020-04-16 |
20200117579 | STATELESS INJECTED SCRIPT DEBUGGING - Debugger requests for debugging a script injected into a web application during a debug session are received. Each of the debugger requests includes the same debug session identifier. A different one of the debugger requests is associated with each of the break points set for debugging the script. For each of the debugger requests: a new stateless debugger node is connected with a single stateless target tester node. Stateless debugger nodes and stateless target tester nodes reside inside of the multi-node cloud system. The script is debugged on the same stateless target tester node while the debugging is controlled from a developer computer system that is outside of the multi-node cloud system. After completion of each of the debugger requests: a current stateless debugger node is disconnected, and state stored in the multi-node cloud system used for servicing a current debugger request is destroyed. | 2020-04-16 |
20200117580 | Validation Sets for Machine Learning Algorithms - A computing device receives data comprising inputs representing a respective option for each of the factors in each of the test cases. The data comprises a response of the system for each of the test cases. The computing device receives a request requesting an evaluation of the data for generating a model (e.g., a machine learning algorithm) to predict responses based on the factors. The computing device obtains different group identifiers for each of the groups for distributing the test cases for the system (e.g., groups of a K-fold cross-validation). For each of one or more validations, the computing device: generates a data set comprising a respective data element for each of the test cases; and controls assignment of one of the different group identifiers to each of the respective data elements. The computing device outputs an indication of the one or more generated data sets for the validations. | 2020-04-16 |
20200117581 | CONFIGURATION FILE UPDATING SYSTEM FOR USE WITH CLOUD SOLUTIONS - A hybrid cloud computing and local computing system is provided. The system may include a cloud computing environment and a local computing environment. The system may include a source control repository, a development repository, a production repository, a developer experience development repository and/or a disposable development and testing environment. The disposable environment may include a plurality of configuration files and an export function. The system may activate the export function to export the plurality of configuration files to the local computing environment. The system may store the configuration files within the local computing environment. The system may receive, at the local computing environment, modifications to the configuration files and transmit the modifications to the source control repository. The system may perform a comparison between the modifications and a set of source control configuration files and update the set of source control configuration files with the modifications. | 2020-04-16 |
20200117582 | GENERATING REPRESENTATIVE UNSTRUCTURED DATA TO TEST ARTIFICIAL INTELLIGENCE SERVICES FOR BIAS - A bias detection method, system, and computer program product to evaluate bias in an artificial intelligence service include selecting a bias context, the bias context having a bias specification associated with the bias context, generating test data for determining the bias in the artificial intelligence service based on the bias specification and the bias context, and testing the artificial intelligence service for the bias with the generated test data. | 2020-04-16 |
20200117583 | AUTOMATED TESTCASE BUILD ENGINE AND PREDICTION-BASED TEST SYSTEM - In some examples, a computing device may predict, using a machine learning module, scenarios and transactions associated with a usage of a software package. The computing device may select at least a portion of the scenarios and the transactions to cover a predetermined percentage of a predicted usage of the software package. The computing device may select a subset of unit test cases (e.g., created by software designers to test software units that are components of the software package) and execute the test cases to generate test results to determine whether the software package is ready to be deployed to customers. The computing device may train the machine learning module using at least one of the test results, the portion of the scenarios and the transactions, or the test cases. The test results may be evaluated to determine an effectiveness of the set of test cases. | 2020-04-16 |
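The selection of scenarios to cover a predetermined percentage of predicted usage can be illustrated with a greedy pass over usage shares. This is a simplification with invented names; the application's actual selection logic is not described at this level of detail:

```python
def select_scenarios(predicted_usage, target_coverage):
    """Greedily pick scenarios, highest predicted usage first, until their
    combined share reaches the target coverage fraction.

    predicted_usage: {scenario: predicted fraction of total usage}.
    target_coverage: fraction of predicted usage to cover, e.g. 0.9.
    """
    selected = []
    covered = 0.0
    for scenario, share in sorted(predicted_usage.items(),
                                  key=lambda kv: kv[1], reverse=True):
        if covered >= target_coverage:
            break
        selected.append(scenario)
        covered += share
    return selected, covered
```

Unit test cases tied to the selected scenarios would then form the subset that is executed before deployment.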
20200117584 | ZERO CODING AUTOMATION WITH NATURAL LANGUAGE PROCESSING, SUCH AS FOR USE IN TESTING TELECOMMUNICATIONS SOFTWARE AND RESOURCES - A framework for automated testing for use in testing telecommunications software and resources is disclosed that reuses testing code modules, thereby reducing redundancy and increasing efficiency and productivity. The zero coding automation system disclosed herein provides an end-to-end testing automation framework, which minimizes (and in some cases eliminates) the requirement for testers to write software code to test software modules. Instead, the coding automation systems and methods provide a hierarchical framework to translate testing requests (commands, statements, and so on) received in a natural language (for example, English) to testing code modules written in, for example, one or more programming languages (for example, tool specific Application Program Interface (API)/libraries developed to test functionality). | 2020-04-16 |
20200117585 | METHOD AND APPARATUS FOR COMPUTER-AIDED TESTING OF A BLOCKCHAIN - Provided is a method for testing a blockchain in a computer-aided manner, having the following method steps: generating a specified transaction and/or a specified smart contract, the specified transaction and/or the specified smart contract being paired with a respective specification value; adding the specified transaction and/or the specified smart contract into the blockchain; carrying out the specified transaction and/or the specified smart contract, a measurement value of the specified transaction and/or the specified smart contract being detected; and testing the measurement value using the specification value, wherein a control signal is provided in the event of a deviation from the specification value. | 2020-04-16 |
20200117586 | METHOD, DEVICE AND COMPUTER PROGRAM PRODUCT FOR EXECUTING TEST CASES - A method and device for executing test cases includes obtaining a set of test cases to be executed, and determining a test platform type and a test script associated with each test case in the set of test cases based on a knowledge base. The set of test cases may be divided into a plurality of test subsets or test suites based on the test platform type, and test cases in each test subset executed using the respective test environment and test script. The test suites may be generated automatically based on the knowledge base, and the respective test environment and test script are used for executing each test suite. Automatic generation and execution of the test suites can improve the operation efficiency for test cases. | 2020-04-16 |
20200117587 | Log File Analysis - Disclosed is a method, system, and computer readable medium to implement differential log file analysis using a computer device. The differential technique includes obtaining one or more log file entries representative of successful test executions of a computer process. The computer process may execute on a single computer system or be a distributed application across multiple computer systems in a target environment. Acceptable deviations between log file entries associated with different instances of successful test executions may be used to create a pattern matching representation in the form of a line pattern, sequence pattern, timing pattern, branching sequence pattern, cyclical sequence pattern, or a combination thereof. Matching of run-time log file entries against known acceptable success patterns may provide indication of system or application anomalies based on a failed comparison. A state machine implementation may be used to perform the matching function. | 2020-04-16 |
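A minimal line-pattern matcher in the spirit of this abstract, assuming the success patterns are expressed as regular expressions (the state-machine and sequence/timing variants are omitted; names are illustrative):

```python
import re

def matches_success_pattern(log_lines, line_patterns):
    """Check run-time log lines against known-good line patterns.

    line_patterns: regexes derived from logs of successful runs; a run-time
    line that matches no pattern signals a possible anomaly.
    Returns (ok, first_bad_line), where first_bad_line is None on success.
    """
    compiled = [re.compile(p) for p in line_patterns]
    for line in log_lines:
        if not any(p.fullmatch(line) for p in compiled):
            return False, line
    return True, None
```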
20200117588 | DYNAMIC FEATURE AND PERFORMANCE TESTING AND ADJUSTMENT - Apparatuses, methods, systems, and computer program products are presented for dynamic feature and performance testing and adjustment. An audit module is configured to dynamically test a plurality of image capture settings for a camera of a mobile device of an end user in an executable mobile application executing on the mobile device. A feature module is configured to select one of a plurality of image capture settings for a camera of a mobile device based on a dynamic test. An adjustment module is configured to dynamically configure, during runtime of an executable mobile application on a plurality of different mobile devices of different end users, the different mobile devices to use a selected one of a plurality of image capture settings. | 2020-04-16 |
20200117589 | TECHNIQUES AND DEVICES FOR CLOUD MEMORY SIZING - Systems, apparatuses, and methods for cloud memory sizing are disclosed. An initial database memory allocation is determined for the provisioning of a database server instance. Periodically, sizes of key database tables of the database server instance are measured and an upper and a lower bound ratio are determined based on the key database table sizes and a buffer pool size. The upper and lower bound ratios are used to determine a desired memory allocation from which a report is generated including an interface for generating an instance move action for re-provisioning the database server instance with the desired memory allocation. | 2020-04-16 |
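One plausible reading of the bound-ratio logic, sketched with invented parameter names (the abstract does not give the actual ratio computation, so this is an assumption throughout):

```python
def desired_memory(table_sizes, buffer_pool, lower_ratio, upper_ratio):
    """Pick a memory allocation so the buffer pool stays between
    lower_ratio and upper_ratio of the total key-table size."""
    total = sum(table_sizes)
    if buffer_pool < lower_ratio * total:
        return lower_ratio * total   # grow toward the lower bound
    if buffer_pool > upper_ratio * total:
        return upper_ratio * total   # shrink toward the upper bound
    return buffer_pool               # current allocation is acceptable
```

The resulting value would feed the report and the instance-move action that re-provisions the database server instance.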
20200117590 | UNRETIRING MEMORY DEVICE BLOCKS - Various examples are directed to systems and methods for managing a memory device. Processing logic may identify a set of retired blocks at the memory device that were retired during use of the memory device. The processing logic may modify a first table entry referencing the first block to indicate that the first block is not retired. The processing logic may also modify a second table entry referencing the second block to indicate that the second block is not retired. The processing logic may also recreate a logical-to-physical table entry for a first page of the first block, the logical-to-physical table entry associating a logical address with the first page. | 2020-04-16 |
20200117591 | ENCODER, ASSOCIATED ENCODING METHOD AND FLASH MEMORY CONTROLLER - An encoder of a flash memory controller is provided, which includes a barrel shifter module, an inverse matrix calculating circuit and a calculating circuit. The barrel shifter module processes multiple data blocks to generate multiple partial parity blocks including a first portion, a second portion and a third portion. The inverse matrix calculating circuit performs inverse matrix calculating operations on the first portion to generate a first portion of parity blocks. The calculating circuit performs inverse matrix calculating operations on the second portion and the third portion according to the first portion of the parity blocks, to generate a second portion of the parity blocks and a third portion of the parity blocks. The first portion of the parity blocks, the second portion of the parity blocks, and the third portion of the parity blocks serve as multiple parity blocks generated in response to encoding the data blocks. | 2020-04-16 |
20200117592 | HEURISTICS FOR SELECTING SUBSEGMENTS FOR ENTRY IN AND ENTRY OUT OPERATIONS IN AN ERROR CACHE SYSTEM WITH COARSE AND FINE GRAIN SEGMENTS - A memory device comprises a memory bank comprising a plurality of addressable memory cells, wherein the memory bank is divided into a plurality of segments. Further, the device comprises a cache memory operable for storing a second plurality of data words, wherein each data word of the second plurality of data words is either awaiting write verification associated with the memory bank or is to be re-written into the memory bank. The cache memory is divided into a plurality of primary segments, wherein each primary segment of the cache memory is direct mapped to a corresponding segment of the plurality of segments, wherein each primary segment is sub-divided into a plurality of secondary segments, and wherein each of the plurality of secondary segments comprises at least one counter for tracking a number of entries stored therein. | 2020-04-16 |
20200117593 | INFORMATION PROCESSING DEVICE, NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM, AND INFORMATION PROCESSING SYSTEM - According to one embodiment, an information processing device includes a nonvolatile memory, assignment unit, and transmission unit. The assignment unit assigns logical address spaces to spaces. Each of the spaces is assigned to at least one write management area included in a nonvolatile memory. The write management area is a unit of an area which manages the number of writes. The transmission unit transmits a command for the nonvolatile memory and identification data of a space assigned to a logical address space corresponding to the command. | 2020-04-16 |
20200117594 | IMPLEMENTING LOW COST AND LARGE CAPACITY DRAM-BASED MEMORY MODULES - A heterogeneous dynamic random access memory (DRAM) module, including: a first set of DRAM chips; a second set of DRAM chips, wherein the DRAM chips in the second set of DRAM chips have a lower storage reliability than the DRAM chips in the first set of DRAM chips; and a controller coupled to the first and second sets of DRAM chips, wherein the controller includes a DRAM access engine for accessing the second set of DRAM chips and for ensuring a data storage integrity of the second set of DRAM chips. | 2020-04-16 |
20200117595 | System and Method for a Storage Controller Having a Persistent Memory Interface to Local Memory - A storage system with a controller having a persistent memory interface to local memory is provided. The persistent memory can be used to store a logical-to-physical address table. A logical-to-physical address table manager, local to the controller or remote in a secondary controller, can be used to access the logical-to-physical address table. The manager can be configured to improve bandwidth and performance in the storage system. | 2020-04-16 |
20200117596 | A Memory Allocation Manager and Method Performed Thereby for Managing Memory Allocation - A memory allocation manager and a method performed thereby for managing memory allocation, within a data centre, to an application are provided. The data centre comprises at least a Central Processing Unit, CPU, pool and at least one memory pool. The method comprises receiving ( | 2020-04-16 |
20200117597 | MEMORY WITH PROCESSING IN MEMORY ARCHITECTURE AND OPERATING METHOD THEREOF - A memory with a processing in memory architecture and an operating method thereof are provided. The memory includes a memory array, a mode register, an artificial intelligence core, and a memory interface. The memory array includes a plurality of memory regions. The mode register stores a plurality of memory mode settings. The memory interface is coupled to the memory array and the mode register, and is externally coupled to a special function processing core. The artificial intelligence core is coupled to the memory array and the mode register. The plurality of memory regions are respectively selectively assigned to the special function processing core or the artificial intelligence core according to the plurality of memory mode settings of the mode register, so that the special function processing core and the artificial intelligence core respectively access different memory regions in the memory array according to the plurality of memory mode settings. | 2020-04-16 |
20200117598 | SYSTEM AND METHOD TO IMPROVE INPUT OUTPUT COMMAND LATENCY BY DYNAMIC SIZE LOGICAL TO PHYSICAL CACHING - A method and apparatus are provided to divide a logical to physical table into multiple parts, one part in a first fast memory and a second part in a second non-volatile memory, wherein an algorithm may be used in the division. | 2020-04-16 |
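The two-part logical-to-physical table can be sketched as a small fast cache in front of a larger backing table. The evict-oldest policy below is a placeholder for whatever algorithm the application actually uses; all names are illustrative:

```python
class SplitL2P:
    """Logical-to-physical table split between a small fast cache
    and a larger slow (non-volatile) backing table."""

    def __init__(self, full_table, cache_size):
        self.backing = dict(full_table)   # stands in for the NAND-resident part
        self.cache = {}                   # stands in for the SRAM/DRAM-resident part
        self.cache_size = cache_size

    def lookup(self, lba):
        """Translate a logical block address to a physical block address."""
        if lba in self.cache:             # fast path
            return self.cache[lba]
        pba = self.backing[lba]           # slow path: load from backing store
        if len(self.cache) >= self.cache_size:
            self.cache.pop(next(iter(self.cache)))  # evict oldest insertion
        self.cache[lba] = pba
        return pba
```

Keeping hot translations in the fast part is what reduces I/O command latency; the cache size could itself be adjusted dynamically as the abstract suggests.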
20200117599 | Memory Mapping for Hibernation - A computing system has a processing device (e.g., CPU, FPGA, or GPU) and memory regions (e.g., in a DRAM device) used by the processing device during normal operation. The computing system is configured to: monitor use of the memory regions in volatile memory; based on monitoring the use of the memory regions, identify at least one of the memory regions of the volatile memory; initiate a hibernation process; and during the hibernation process, copy data stored in the identified memory regions to non-volatile memory. | 2020-04-16 |
20200117600 | MULTICORE SHARED CACHE OPERATION ENGINE - Techniques for accessing memory by a memory controller, comprising receiving, by the memory controller, a memory management command to perform a memory management operation at a virtual memory address, translating the virtual memory address to a physical memory address, wherein the physical memory address comprises an address within a cache memory, and outputting an instruction to the cache memory based on the memory management command and the physical memory address. | 2020-04-16 |
20200117601 | STORAGE CONTROLLER, STORAGE SYSTEM, STORAGE CONTROLLER CONTROLLING METHOD, AND PROGRAM - An object is to suppress a process of evicting cached data so as to improve a throughput of an entire system. A storage controller includes an access request section and an operation management section. The access request section requests access to a first storage and to a second storage that is higher in response speed than the first storage, the second storage storing part of data stored in the first storage. The operation management section manages, based on a usage state of the second storage, whether or not to transfer from the first storage to the second storage data targeted for access but not stored in the second storage. | 2020-04-16 |
20200117602 | DELAYED SNOOP FOR IMPROVED MULTI-PROCESS FALSE SHARING PARALLEL THREAD PERFORMANCE - Techniques for maintaining cache coherency comprising storing data blocks associated with a main process in a cache line of a main cache memory, storing a first local copy of the data blocks in a first local cache memory of a first processor, storing a second local copy of the set of data blocks in a second local cache memory of a second processor executing a first child process of the main process to generate first output data, writing the first output data to the first data block of the first local copy as a write through, writing the first output data to the first data block of the main cache memory as a part of the write through, transmitting an invalidate request to the second local cache memory, marking the second local copy of the set of data blocks as delayed, and transmitting an acknowledgment to the invalidate request. | 2020-04-16 |
20200117603 | MULTICORE, MULTIBANK, FULLY CONCURRENT COHERENCE CONTROLLER - A system includes a multi-core shared memory controller (MSMC). The MSMC includes a snoop filter bank, a cache tag bank, and a memory bank. The cache tag bank is connected to both the snoop filter bank and the memory bank. The MSMC further includes a first coherent slave interface connected to a data path that is connected to the snoop filter bank. The MSMC further includes a second coherent slave interface connected to the data path that is connected to the snoop filter bank. The MSMC further includes an external memory master interface connected to the cache tag bank and the memory bank. The system further includes a first processor package connected to the first coherent slave interface and a second processor package connected to the second coherent slave interface. The system further includes an external memory device connected to the external memory master interface. | 2020-04-16 |
20200117604 | FLUSHING ENTRIES IN A CACHE - Techniques are provided for performing a flush operation in a non-coherent cache. In response to determining to perform a flush operation, a cache unit flushes certain data items. The flush operation may be performed in response to a lapse of a particular amount of time, such as a number of cycles, or an explicit flush instruction that does not indicate any cache entry or data item. The cache unit may store change data that indicates which entry stores a data item that has been modified but not yet been flushed. The change data may be used to identify the entries that need to be flushed. In one technique, a dirty cache entry that is associated with one or more relatively recent changes is not flushed during a flush operation. | 2020-04-16 |
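The change-data mechanism amounts to tracking a dirty set and flushing only its members. A toy sketch with dictionaries standing in for cache entries and backing memory (names are invented; the recency exception for recently modified entries is omitted):

```python
class NonCoherentCache:
    """Cache that tracks modified entries and, on a flush operation that
    names no particular entry, writes back only those entries."""

    def __init__(self, backing):
        self.backing = backing
        self.entries = {}
        self.dirty = set()   # "change data": entries modified since last flush

    def write(self, key, value):
        self.entries[key] = value
        self.dirty.add(key)

    def flush(self):
        """Write back every dirty entry; clears the change data."""
        for key in sorted(self.dirty):
            self.backing[key] = self.entries[key]
        self.dirty.clear()
```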
20200117605 | RECEIVE BUFFER MANAGEMENT - Examples described herein can be used to allocate replacement receive buffers for use by a network interface, switch, or accelerator. Multiple refill queues can be used to receive identifications of available receive buffers. A refill processor can select one or more identifications from a refill queue and allocate the identifications to a buffer queue. None of the refill queues is locked from receiving identifications of available receive buffers, but only one of the refill queues is accessed at a time to provide identifications of available receive buffers. Identifications of available receive buffers from the buffer queue are provided to the network interface, switch, or accelerator to store content of received packets. | 2020-04-16 |
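The refill scheme above — producers post free buffer IDs to any of several never-locked refill queues, while a refill processor drains one queue at a time into a single buffer queue — can be modeled roughly as follows. The round-robin visiting order is an assumption; the abstract only requires that a single refill queue be accessed at a time:

```python
from collections import deque

class RefillAllocator:
    """Toy refill processor: moves free receive-buffer IDs from several
    refill queues (one accessed at a time) into one buffer queue."""
    def __init__(self, n_queues):
        self.refill = [deque() for _ in range(n_queues)]
        self.buffer_queue = deque()
        self.next_q = 0

    def post(self, q, buf_id):
        self.refill[q].append(buf_id)   # producers never block each other

    def replenish(self, count):
        moved = 0
        empty_streak = 0                # stop after a full lap of empty queues
        while moved < count and empty_streak < len(self.refill):
            q = self.refill[self.next_q]
            self.next_q = (self.next_q + 1) % len(self.refill)
            if q:
                self.buffer_queue.append(q.popleft())
                moved += 1
                empty_streak = 0
            else:
                empty_streak += 1
        return moved

alloc = RefillAllocator(2)
alloc.post(0, "buf_a")
alloc.post(0, "buf_b")
alloc.post(1, "buf_c")
n = alloc.replenish(3)
```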
20200117606 | MULTI-POWER-DOMAIN BRIDGE WITH PREFETCH AND WRITE MERGING - Techniques for accessing data, comprising receiving a first memory request associated with a first clock domain, converting a first memory address of the first memory request from a first memory address format associated with the first clock domain to a second memory address format associated with a second clock domain, transitioning the first memory request to the second clock domain, creating a first scoreboard entry associated with the first memory request, transmitting the first memory request to a memory based on the converted first memory address, receiving a first response to the first memory request, transitioning the first response to the first clock domain, and clearing the first scoreboard entry based on the received response. | 2020-04-16 |
20200117607 | CACHE LINE REPLACEMENT USING REFERENCE STATES BASED ON DATA REFERENCE ATTRIBUTES - A method comprises receiving input reference attributes from a data reference interface and selecting a replacement data location of a cache to store data. The replacement data location is selected based on the input reference attributes and reference states associated with cached-data stored in data locations of the cache and an order of state locations of a replacement stack storing the reference states. The reference states are based on reference attributes associated with the cached-data and can include a probability count. The order of state locations is based on the reference states and the reference attributes. In response to receiving some input reference attributes, reference states stored in the state locations can be modified and a second order of the state locations can be determined. A reference state can be stored in the replacement stack based on the second order. A cache can comprise a data reference interface, reference attributes, reference states, cached-data locations, a replacement stack, and a cache manager. The cache manager can perform the method. | 2020-04-16 |
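A rough model of the replacement-stack idea above: each data location carries a reference state (here a probability-like count), the stack orders state locations by those states, and the replacement data location is taken from the bottom of the order. The two-element cache, the count-based state, and the sort-on-update policy are all simplifying assumptions for illustration:

```python
class RefStateCache:
    """Toy cache whose replacement choice is driven by per-location
    reference states ordered on a replacement stack."""
    def __init__(self, n_locations):
        self.data = [None] * n_locations
        self.count = [0] * n_locations          # reference state per data location
        self.stack = list(range(n_locations))   # top (index 0) = most retained

    def _reorder(self):
        # Second order of state locations after states are modified:
        # highest reference state on top, weakest at the bottom.
        self.stack.sort(key=lambda loc: self.count[loc], reverse=True)

    def reference(self, value):
        if value in self.data:                  # hit: strengthen its state
            loc = self.data.index(value)
            self.count[loc] += 1
        else:                                   # miss: replace the weakest state
            loc = self.stack[-1]
            self.data[loc] = value
            self.count[loc] = 1
        self._reorder()
        return loc

c = RefStateCache(2)
c.reference("a")
c.reference("a")    # "a" now has a stronger reference state
c.reference("b")
c.reference("c")    # evicts "b", the location lowest in the order
```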
20200117608 | STATE AND PROBABILITY BASED CACHE LINE REPLACEMENT - A cache comprises data locations in a storage medium and a set of reference states corresponding to the data locations and based on reference attributes associated with data stored in the data locations. The cache receives reference information associated with a data reference and selects a data location to store the data based on a reference state corresponding to the data location. The cache modifies reference states based on reference attributes associated with data references. A method of managing a cache includes receiving reference information associated with a data reference and selecting a data location in a storage medium to store data based on reference attributes associated with data stored in the selected data location. The method can include modifying reference states in response to receiving reference information. The cache and the method can include a count, based on reference attributes, in a reference state. | 2020-04-16 |
20200117609 | COHERENT MEMORY ACCESS - Apparatuses and methods related to providing coherent memory access. An apparatus for providing coherent memory access can include a memory array, a first processing resource, a first cache line and a second cache line coupled to the memory array, a first cache controller, and a second cache controller. The first cache controller coupled to the first processing resource and to the first cache line can be configured to provide coherent access to data stored in the second cache line and corresponding to a memory address. A second cache controller coupled through an interface to a second processing resource external to the apparatus and coupled to the second cache line can be configured to provide coherent access to the data stored in the first cache line and corresponding to the memory address. Coherent access can be provided using a first cache line address register of the first cache controller which stores the memory address and a second cache line address register of the second cache controller which also stores the memory address. | 2020-04-16 |
20200117610 | ERROR CACHE SYSTEM WITH COARSE AND FINE SEGMENTS FOR POWER OPTIMIZATION - A memory device for storing data comprises a memory bank comprising a plurality of addressable memory cells, wherein the memory bank is divided into a plurality of segments. The memory device also comprises a cache memory operable for storing a plurality of data words, wherein each data word of the plurality of data words is either awaiting write verification or is to be re-written into the memory bank. The cache memory is divided into a plurality of primary segments, wherein each primary segment of the cache memory is direct mapped to a corresponding segment of the plurality of segments of the memory bank, wherein each primary segment of the plurality of primary segments of the cache memory is sub-divided into a plurality of secondary segments, and each of the plurality of secondary segments comprises at least one counter for tracking a number of valid entries stored therein. | 2020-04-16 |
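The coarse/fine segmentation above can be sketched as nested structures: one primary cache segment direct-mapped per bank segment, each sub-divided into secondary segments carrying a valid-entry counter, so that empty sub-segments can be skipped (or power-gated). The modulo mapping of addresses to secondary segments is an invented placement policy for illustration:

```python
class SegmentedErrorCache:
    """Toy error cache: primary segments direct-mapped to bank segments,
    secondary segments each tracking their own valid-entry count."""
    def __init__(self, bank_segments, secondary_per_primary):
        self.secondary_per_primary = secondary_per_primary
        self.primary = [
            [{"entries": {}, "valid": 0} for _ in range(secondary_per_primary)]
            for _ in range(bank_segments)
        ]

    def _secondary(self, bank_segment, addr):
        return self.primary[bank_segment][addr % self.secondary_per_primary]

    def insert(self, bank_segment, addr, word):
        sec = self._secondary(bank_segment, addr)
        if addr not in sec["entries"]:
            sec["valid"] += 1              # counter tracks valid entries
        sec["entries"][addr] = word        # word awaits verify or re-write

    def evict(self, bank_segment, addr):
        sec = self._secondary(bank_segment, addr)
        if addr in sec["entries"]:
            del sec["entries"][addr]
            sec["valid"] -= 1

    def active_secondaries(self, bank_segment):
        # Only sub-segments with valid entries need to be searched/powered.
        return [i for i, sec in enumerate(self.primary[bank_segment]) if sec["valid"]]

cache = SegmentedErrorCache(bank_segments=2, secondary_per_primary=4)
cache.insert(0, 5, "w1")
cache.insert(0, 9, "w2")   # 5 % 4 == 9 % 4 == 1: same secondary segment
cache.insert(1, 2, "w3")
```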
20200117611 | SYSTEM AND METHOD FOR DATA PROCESSING - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data processing are provided. One of the methods includes: obtaining a bytecode compiled from source code comprising one or more input parameters, the source code including an encoding function to encode the one or more input parameters, save the encoded one or more input parameters in a memory segment, and provide a memory location of the memory segment; executing, according to the bytecode, the encoding function to encode the one or more input parameters to obtain the memory location of the memory segment storing the encoded one or more input parameters; and providing the memory location to a function for retrieving and decoding the encoded one or more input parameters to obtain the one or more input parameters. | 2020-04-16 |
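The flow described above — encode the input parameters, save them in a memory segment, and hand only the memory location to a function that retrieves and decodes them — can be mimicked in a few lines. JSON serialization, the flat `bytearray` memory, and the `(offset, length)` location format are stand-ins for whatever encoding and addressing the compiled bytecode actually uses:

```python
import json

MEMORY = bytearray(256)   # toy "memory"; stands in for the real address space

def encode_params(*params):
    """Encoding function: serialize the parameters, save them in a
    memory segment, and provide the segment's location."""
    blob = json.dumps(params).encode("utf-8")
    offset = 0                                # toy allocator: segment start
    MEMORY[offset:offset + len(blob)] = blob
    return offset, len(blob)

def call_with_encoded(location, fn):
    """The callee receives only the memory location, then retrieves and
    decodes the parameters itself before running."""
    offset, length = location
    params = json.loads(MEMORY[offset:offset + length].decode("utf-8"))
    return fn(*params)

loc = encode_params(2, 3)
result = call_with_encoded(loc, lambda a, b: a + b)
```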
20200117612 | TRANSPARENT SELF-REPLICATING PAGE TABLES IN COMPUTING SYSTEMS - An example method of managing memory in a computer system implementing non-uniform memory access (NUMA) by a plurality of sockets each having a processor component and a memory component is described. The method includes replicating page tables for an application executing on a first socket of the plurality of sockets across each of the plurality of sockets; associating metadata for pages of the memory storing the replicated page tables in each of the plurality of sockets; and updating the replicated page tables using the metadata to locate the pages of the memory that store the replicated page tables. | 2020-04-16 |
20200117613 | Configuration Cache For The ARM SMMUv3 - A method of translating a virtual address into a physical memory address in an ARM SMMUv3 system may comprise searching a Configuration Cache memory for a matching tag that matches the associated tag upon receiving the virtual address and an associated tag, and extracting, in a single memory lookup cycle, a matching data field associated with the matching tag when the matching tag is found in the Configuration Cache memory. The matching data field of the Configuration Cache may comprise a matching Stream Table Entry (STE) and a matching Context Descriptor (CD), both associated with the matching tag. The Configuration Cache may be configured as a content-addressable memory. The method may further comprise storing entries associated with a multiple memory lookup cycle virtual address-to-physical address translation into the Configuration Cache memory, each of the entries comprising a tag, an associated STE and an associated CD. | 2020-04-16 |
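The payoff described above is that one tag match returns both SMMUv3 configuration structures at once, instead of the separate Stream Table and Context Descriptor walks. A dict keyed by tag gives the gist; the tag composition and the field contents shown here are illustrative, with only the STE/CD terminology taken from SMMUv3:

```python
class ConfigCache:
    """Toy content-addressable Configuration Cache: one lookup on the
    tag yields both the Stream Table Entry (STE) and the Context
    Descriptor (CD)."""
    def __init__(self):
        self.cam = {}   # tag -> (STE, CD)

    def fill(self, tag, ste, cd):
        # Populated after a slow multi-lookup translation resolves the tag.
        self.cam[tag] = (ste, cd)

    def lookup(self, tag):
        # Single memory lookup cycle: both config structures together.
        return self.cam.get(tag)

cache = ConfigCache()
# Hypothetical tag of (StreamID, SubstreamID); fields are made up.
cache.fill(tag=(0x21, 0x3), ste={"s2_enabled": True}, cd={"asid": 0x3})
hit = cache.lookup((0x21, 0x3))
miss = cache.lookup((0x22, 0x1))
```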
20200117614 | COMPUTING DEVICE AND METHOD - The present disclosure provides a computation device. The computation device is configured to perform a machine learning computation, and includes an operation unit, a controller unit, a conversion unit, and a storage unit. The storage unit is configured to obtain input data and a computation instruction. The controller unit is configured to extract and parse the computation instruction from the storage unit to obtain one or more operation instructions, and to send the one or more operation instructions and the input data to the operation unit. The operation unit is configured to perform operations on the input data according to one or more operation instructions to obtain a computation result of the computation instruction. In the examples of the present disclosure, the input data involved in machine learning computations is represented by fixed-point data, thereby improving the processing speed and efficiency of training operations. | 2020-04-16 |
20200117615 | APPARATUS AND METHOD FOR HANDLING PAGE PROTECTION FAULTS IN A COMPUTING SYSTEM - Method and apparatus for handling page protection faults, in combination particularly with the dynamic conversion of binary code executable by one computing platform into binary code executed instead by another computing platform. In one exemplary aspect, a page protection fault handling unit is used to detect memory accesses, to check page protection information relevant to the detected access by examining the contents of a page descriptor store, and to selectively allow the access or pass on page protection fault information in accordance with the page protection information. | 2020-04-16 |
20200117616 | INVALIDATION OF A TARGET REALM IN A REALM HIERARCHY - An apparatus has processing circuitry for performing data processing in response to software processes and memory access circuitry for enforcing ownership rights for memory regions. A given memory region is associated with an owner realm specified from among multiple realms, with each realm corresponding to a portion of at least one software process. The owner realm has a right to exclude other realms from accessing data stored in the given memory region (including realms executed at a higher privilege level). The realms are managed according to a realm hierarchy in which each realm other than a root realm is a child realm initialised in response to a command triggered by its parent realm. In response to an invalidation command, a realm management unit makes the target realm and any descendant realm of the target realm inaccessible to the processing circuitry. | 2020-04-16 |
20200117617 | SYSTEM AND METHOD FOR APPLICATION MIGRATION FOR A DOCKABLE DEVICE - Described is a method and apparatus for application migration between a dockable device and a docking station in a seamless manner. The dockable device includes a processor and the docking station includes a high-performance processor. The method includes determining a docking state of a dockable device while at least an application is running. Application migration from the dockable device to a docking station is initiated when the dockable device is moving to a docked state. Application migration from the docking station to the dockable device is initiated when the dockable device is moving to an undocked state. The application continues to run during the application migration from the dockable device to the docking station or during the application migration from the docking station to the dockable device. | 2020-04-16 |
20200117618 | VIRTUAL NETWORK PRE-ARBITRATION FOR DEADLOCK AVOIDANCE AND ENHANCED PERFORMANCE - A device includes a data path, a first interface configured to receive a first memory access request from a first peripheral device, and a second interface configured to receive a second memory access request from a second peripheral device. The device further includes an arbiter circuit configured to, in a first clock cycle, a pre-arbitration winner between a first memory access request and a second memory access request based on a first number of credits allocated to a first destination device and a second number of credits allocated to a second destination device. The arbiter circuit is further configured to, in a second clock cycle select a final arbitration winner from among the pre-arbitration winner and a subsequent memory access request based on a comparison of a priority of the pre-arbitration winner and a priority of the subsequent memory access request. | 2020-04-16 |
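The two-cycle scheme above separates a credit-based pre-arbitration from a priority-based final decision. A sketch, with the tie-breaking rule (more destination credits wins cycle one, strictly higher priority wins cycle two) chosen for illustration rather than taken from the application:

```python
def pre_arbitrate(req_a, req_b, credits):
    """Cycle 1: pick a pre-arbitration winner by destination credits."""
    return req_a if credits[req_a["dest"]] >= credits[req_b["dest"]] else req_b

def final_arbitrate(pre_winner, subsequent):
    """Cycle 2: compare the pre-arbitration winner against a newly
    arrived request by priority; strictly higher priority wins."""
    if subsequent is not None and subsequent["prio"] > pre_winner["prio"]:
        return subsequent
    return pre_winner

credits = {"mem0": 4, "mem1": 1}
a = {"id": "A", "dest": "mem0", "prio": 1}
b = {"id": "B", "dest": "mem1", "prio": 1}
late = {"id": "C", "dest": "mem0", "prio": 5}

pre = pre_arbitrate(a, b, credits)       # "A": its destination has more credits
winner = final_arbitrate(pre, late)      # "C": the late arrival outranks it
```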
20200117619 | CREDIT AWARE CENTRAL ARBITRATION FOR MULTI-ENDPOINT, MULTI-CORE SYSTEM - A device includes a data path, a first interface configured to receive a first memory access request from a first peripheral device, and a second interface configured to receive a second memory access request from a second peripheral device. The device further includes an arbiter circuit configured to determine a first destination device connected to the data path and associated with the first memory access request and a first credit threshold corresponding to the first memory access request. The arbiter circuit is further configured to determine a second destination device connected to the data path and associated with the second memory access request and a second credit threshold corresponding to the second memory access request. The arbiter circuit is configured to arbitrate access to the data path by the first memory access request and the second memory access request based on the first credit threshold and the second credit threshold. | 2020-04-16 |
20200117620 | ADAPTIVE CREDIT-BASED REPLENISHMENT THRESHOLD USED FOR TRANSACTION ARBITRATION IN A SYSTEM THAT SUPPORTS MULTIPLE LEVELS OF CREDIT EXPENDITURE - A device includes an arbiter circuit configured to receive a first request for a resource. The first request is associated with a first credit cost. The arbiter circuit is further configured to receive a second request for the resource. The second request is associated with a second credit cost. The arbiter circuit is further configured to select the first request for the resource as an arbitration winner. The arbiter circuit is further configured to decrement a number of available credits associated with the resource by the first credit cost. The arbiter circuit is further configured to, in response to the number of available credits associated with the resource falling to a lower credit threshold, wait until the number of available credits associated with the resource reaches an upper credit threshold to select an additional arbitration winner for the resource. | 2020-04-16 |
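The lower/upper threshold pair above gives the arbiter hysteresis: once credits fall to the lower threshold, granting pauses until replenishment lifts them back to the upper threshold, which avoids thrashing right at the boundary. A minimal sketch, with invented numbers and the single-requester simplification:

```python
class HysteresisArbiter:
    """Toy credit arbiter with a lower (stop) and upper (resume) threshold."""
    def __init__(self, credits, lower, upper):
        self.credits = credits
        self.lower = lower
        self.upper = upper
        self.blocked = False

    def grant(self, cost):
        if self.blocked or self.credits < cost:
            return False
        self.credits -= cost           # decrement by the winner's credit cost
        if self.credits <= self.lower:
            self.blocked = True        # wait for replenishment to the upper mark
        return True

    def replenish(self, amount):
        self.credits += amount
        if self.credits >= self.upper:
            self.blocked = False       # resume selecting arbitration winners

arb = HysteresisArbiter(credits=5, lower=1, upper=4)
g1 = arb.grant(2)    # credits 5 -> 3
g2 = arb.grant(2)    # credits 3 -> 1: hits the lower threshold, blocks
g3 = arb.grant(1)    # refused while blocked
arb.replenish(2)     # credits 3: still below the upper threshold
g4 = arb.grant(1)    # still refused
arb.replenish(2)     # credits 5 >= upper: unblocked
g5 = arb.grant(1)
```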
20200117621 | MULTI-PROCESSOR, MULTI-DOMAIN, MULTI-PROTOCOL, CACHE COHERENT, SPECULATION AWARE SHARED MEMORY AND INTERCONNECT - A device includes an interconnect and a plurality of devices connected to the interconnect. The plurality of devices includes a first interface connected to the interconnect and a second interface connected to the interconnect. The plurality of devices further includes a first memory bank connected to the interconnect and a second memory bank connected to the interconnect. The plurality of devices further includes an external memory interface connected to the interconnect and a controller configured to establish virtual channels among the plurality of devices connected to the interconnect. | 2020-04-16 |
20200117622 | FLEXIBLE BUS MANAGEMENT - Methods, systems, and devices for flexible bus management are described. A memory device may transfer data between the memory device and another device (e.g., host device) using a bus including a plurality of data pins. The memory device may transfer data according to a first bus configuration (e.g., according to a first width corresponding to using all of the data pins). After receiving an indication to adjust the configuration, the memory device may adjust the first bus configuration to a second bus configuration where the bus operates according to a second width (e.g., using a subset of the data pins). The memory device may adjust the bus width between the other device and the memory device without adjusting an internal bus width of the memory device (e.g., internal busses that transfer data from the data pins to various components within the memory device). | 2020-04-16 |
20200117623 | Adaptive Interrupt Coalescing - A storage device retrieves commands from a command queue of a host and monitors depth of the command queue. Commands are executed from the command queue and outcomes of the commands are written to a completion queue of the host. Interrupts for the completed commands are coalesced until an aggregation threshold or aggregation delay is met. Coalescing is disabled, and interrupts are generated upon completion of commands, when the depth of the command queue is below a threshold. | 2020-04-16 |
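The adaptive policy above — coalesce completions until a count threshold or delay expires, but fire immediately when the host's command queue is shallow and latency matters — can be modeled as follows. The parameter names and the way elapsed time is fed in are illustrative assumptions:

```python
class InterruptCoalescer:
    """Toy model of adaptive interrupt coalescing for a storage device."""
    def __init__(self, agg_threshold, agg_delay, queue_depth_threshold):
        self.agg_threshold = agg_threshold     # max coalesced completions
        self.agg_delay = agg_delay             # max accumulated wait
        self.qd_threshold = queue_depth_threshold
        self.pending = 0       # completions awaiting an interrupt
        self.elapsed = 0       # time since the oldest pending completion
        self.fired = 0         # interrupts actually raised

    def complete_command(self, queue_depth, elapsed=0):
        self.pending += 1
        self.elapsed += elapsed
        if (queue_depth < self.qd_threshold        # shallow queue: no coalescing
                or self.pending >= self.agg_threshold
                or self.elapsed >= self.agg_delay):
            self.fired += 1                        # raise one interrupt for all
            self.pending = 0
            self.elapsed = 0

co = InterruptCoalescer(agg_threshold=4, agg_delay=100, queue_depth_threshold=2)
for _ in range(3):
    co.complete_command(queue_depth=8)   # deep queue: keep coalescing
deep_fired = co.fired                    # still zero interrupts
co.complete_command(queue_depth=8)       # fourth completion hits the threshold
co.complete_command(queue_depth=1)       # shallow queue: interrupt immediately
```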
20200117624 | SCALABLE INTERRUPT VIRTUALIZATION FOR INPUT/OUTPUT DEVICES - Implementations of the disclosure provide a processing device comprising: an interrupt managing circuit to receive an interrupt message directed to an application container from an assignable interface (AI) of an input/output (I/O) device. The interrupt message comprises an address space identifier (ASID), an interrupt handle and a flag to distinguish the interrupt message from a direct memory access (DMA) message. Responsive to receiving the interrupt message, a data structure associated with the interrupt managing circuit is identified. An interrupt entry from the data structure is selected based on the interrupt handle. It is determined that the ASID associated with the interrupt message matches an ASID in the interrupt entry. Thereupon, an interrupt in the interrupt entry is forwarded to the application container. | 2020-04-16 |
20200117625 | MANAGEMENT OF FAULT NOTIFICATIONS - Examples described herein relate to configuring an interrupt controller to gather zero or more interrupts of a first type and provide the zero or more interrupts of the first type to a first core after a threshold amount of time has elapsed. The interrupt controller is configured to transfer interrupts of a second type to a second core that executes at least one network protocol processing-related task. However, in some examples, the first core can perform any network protocol processing-related task. The first type of interrupts can be associated with faults that are correctable by an interrupt issuer or its delegate. The first core can be configured to perform a corrective action and acknowledge receipt of the group of interrupts or to merely acknowledge receipt of the group of interrupts but not perform a corrective action. | 2020-04-16 |
20200117626 | SPLIT DIRECT MEMORY ACCESS (DMA) - An integrated circuit (IC) includes first and second memory devices and a bridge. The IC also includes a first interconnect segment coupled between the first memory device and the bridge. The IC further includes a second interconnect segment coupled between the first and second memory devices, and a third interconnect segment coupled between the bridge and the second memory device. The IC includes a first DMA circuit coupled to the first interconnect segment, and a second DMA circuit coupled to the second interconnect segment. A fourth interconnect segment is coupled between the first and second DMA circuits. | 2020-04-16 |
20200117627 | Stacked Semiconductor Device Assembly in Computer System - This application is directed to a stacked semiconductor device assembly including a plurality of identical stacked integrated circuit (IC) devices. Each IC device further includes a master interface, a channel master circuit, a slave interface, a channel slave circuit, a memory core, and a modal pad configured to receive a selection signal for the IC device to communicate data using one of its channel master circuit or its channel slave circuit. In some implementations, the IC devices include a first IC device and one or more second IC devices. In accordance with the selection signal, the first IC device is configured to communicate read/write data via the channel master circuit of the first IC device, and each of the one or more second IC devices is configured to communicate respective read/write data via the channel slave circuit of the respective second IC device. | 2020-04-16 |
20200117628 | AUDIO TRANSFER - This application relates to transfer of digital audio data between a host device and an accessory apparatus that may be connected to the host device via a suitable connector, such as a USB connector. A path selector is operable to establish either a first digital data path or a second digital data path for transfer of digital data. The first digital data path includes a first data bus host and a general purpose digital data interface suitable for bulk data transfer between the first data bus host and the applications processor of the device. This may be a default USB path. The second digital data path includes a second data bus host and at least one pair of second path data interfaces. The second data bus host does not form part of the applications processor and each of said second path data interfaces comprises a digital audio interface suitable for streaming of audio data. The path selector selectively establishes the first data path for bulk digital data transfer or the second data path for streaming of audio data where latency is important. | 2020-04-16 |