27th week of 2020 patent application highlights part 50 |
Patent application number | Title | Published |
20200210241 | METHOD AND SYSTEM FOR GPU VIRTUALIZATION BASED ON CONTAINER - A GPU virtualization method based on a container comprises the steps of: transmitting, if the container is created, a configuration file including GPU resource constraint information and an API profile to the container, by a node controller; and implementing a virtual GPU, when the container is executed, by intercepting a library call and changing an argument related to a GPU resource amount by a library controller provided in the container, and by intercepting a system call and changing argument and return values by a system call controller. | 2020-07-02 |
20200210242 | METHOD AND SYSTEM FOR GPU VIRTUALIZATION BASED ON CONTAINER - A GPU virtualization method based on a container comprises the steps of: transmitting, if the container is created, a configuration file including GPU resource constraint information and an API profile to the container, by a node controller; and implementing a virtual GPU, when the container is executed, by intercepting a library call and changing an argument related to a GPU resource amount by a library controller provided in the container, and by intercepting a system call and changing argument and return values by a system call controller. | 2020-07-02 |
20200210243 | SYSTEM AND METHOD FOR OFFLOADING COMPUTATION TO STORAGE NODES IN DISTRIBUTED SYSTEM - One embodiment described herein provides a distributed computing system. The distributed computing system can include a compute cluster comprising one or more compute nodes and a storage cluster comprising a plurality of storage nodes. A respective compute node can be configured to: receive a request for a computation task; obtain path information associated with data required by the computation task; identify at least one storage node based on the obtained path information; send at least one computation instruction associated with the computation task to the identified storage node; and receive computation results from the identified storage node subsequently to the identified storage node performing the computation task. | 2020-07-02 |
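The compute-to-storage routing described in this abstract can be sketched as follows. This is a minimal illustration, not the patent's implementation: the path table, node names, and the "sum" instruction are all assumed for the example.

```python
# Hypothetical metadata: which storage node holds which dataset.
PATH_TABLE = {"/warehouse/orders": "storage-node-2"}

# Hypothetical stand-in for data resident on each storage node.
STORAGE_DATA = {"storage-node-2": {"/warehouse/orders": [10, 20, 30]}}

def offload(path: str, instruction: str) -> int:
    # Identify the storage node from the path info, then run the
    # computation instruction on that node instead of moving the data.
    node = PATH_TABLE[path]
    data = STORAGE_DATA[node][path]
    if instruction == "sum":
        return sum(data)
    raise ValueError(f"unsupported instruction: {instruction}")
```

The point of the design is that only a small instruction and a small result cross the network, while the bulk data stays on the storage node.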
20200210244 | VIRTUAL RESOURCE PLACEMENT - Various example embodiments for supporting placement of virtual resources in a resource virtualization system are presented. The resource virtualization system may include a set of hosts configured to host virtual resources based on underlying physical resources and a set of schedulers configured to receive and handle requests for virtual resources. The handling of requests for virtual resources by the schedulers may include selecting ones of the hosts to handle the requests for virtual resources and initiating instantiation of the virtual resources on the ones of the hosts selected to handle the requests for virtual resources. The selection of the ones of the hosts to handle the requests for virtual resources may be performed by the schedulers using groups of hosts that include subsets of the hosts of the resource virtualization system. | 2020-07-02 |
20200210245 | METHOD AND DEVICE FOR AIDING DECISION-MAKING FOR THE ALLOCATION OF COMPUTING MEANS ON A HIGH PERFORMANCE COMPUTING INFRASTRUCTURE - The invention relates to a method for aiding decision-making for the allocation of resources on an HPC-type infrastructure allowing identification of a set of instances that meet a resource requirement. The invention further relates to a corresponding computer device. | 2020-07-02 |
20200210246 | DE-CENTRALIZED LOAD-BALANCING AT PROCESSORS - A mechanism is described for facilitating localized load-balancing for processors in computing devices. A method of embodiments, as described herein, includes facilitating hosting, at a processor of a computing device, a local load-balancing mechanism. The method may further include monitoring balancing of loads at the processor and serving as a local scheduler to maintain de-centralized load-balancing at the processor and between the processor and other one or more processors. | 2020-07-02 |
20200210247 | Computer System with Concurrency For Multithreaded Applications - Threads running in a computer system are managed. Responsive to a thread for an application attempting to acquire a lock to a shared computing resource to perform a task for the application, a determination is made by the computer system as to whether the lock for the shared computing resource was acquired by the thread for the application. An unrelated task for the application is assigned by the computer system to the thread in the absence of a determination that the lock was acquired. | 2020-07-02 |
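The try-lock-with-fallback pattern this abstract describes can be sketched as below; the function names and fallback mechanics are illustrative assumptions, not the claimed mechanism.

```python
import threading

lock = threading.Lock()

def run_task_or_fallback(primary, fallback):
    # If the lock is acquired, perform the primary task; otherwise the
    # thread is given an unrelated task for the same application rather
    # than blocking idle on the lock.
    if lock.acquire(blocking=False):
        try:
            return primary()
        finally:
            lock.release()
    return fallback()
```

The design choice is to keep threads productive: a failed lock acquisition yields useful work instead of a wait.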
20200210248 | CONFIGURABLE INTER-PROCESSOR SYNCHRONIZATION SYSTEM - The disclosure relates to an interprocessor synchronization system, comprising a plurality of processors; a plurality of unidirectional notification lines connecting the processors in a chain; in each processor: a synchronization register having bits respectively associated with the notification lines, connected to record the respective states of upstream notification lines, propagated by an upstream processor, and a gate controlled by a configuration register to propagate the states of the upstream notification lines on downstream notification lines to a downstream processor. | 2020-07-02 |
20200210249 | SYSTEM AND METHOD FOR PROMOTING READER GROUPS FOR LOCK COHORTING - NUMA-aware reader-writer locks may leverage lock cohorting techniques that introduce a synthetic level into the lock hierarchy (e.g., one whose nodes do not correspond to the system topology). The synthetic level may include a global reader lock and a global writer lock. A writer thread may acquire a node-level writer lock, then the global writer lock, and then the top-level lock, after which it may access a critical section protected by the lock. The writer may release the lock (if an upper bound on consecutive writers has been met), or may pass the lock to another writer (on the same node or a different node, according to a fairness policy). A reader may acquire the global reader lock (whether or not node-level reader locks are present), and then the top-level lock. However, readers may only hold these locks long enough to increment reader counts associated with them. | 2020-07-02 |
20200210250 | Data Engine - Systems and methods for processing and/or presenting data are disclosed. In an aspect, one method can comprise receiving a request for information and detecting a type of data representing the information requested. The data can be processed via a type-dependent agent and the processed data can be provided via an agnostic data engine. | 2020-07-02 |
20200210251 | NOTIFICATION CONTROL DEVICE, NOTIFICATION CONTROL METHOD, AND STORAGE MEDIUM - A notification control device includes a surrounding situation determination unit that determines a situation around a user, an impact detection unit that detects an impact on an electronic device possessed by the user, a drop determination unit that determines whether or not the impact occurs by drop of the electronic device, a notification method selection unit that selects a notification method in accordance with the situation around the user in a case where the impact occurs by the drop, and a notification control unit that controls a notification unit configured to notify the user of information such that the notification is performed by the notification method selected by the notification method selection unit. | 2020-07-02 |
20200210252 | QUANTIFICATION OF COMPUTE PERFORMANCE ACROSS MULTIPLE INDEPENDENTLY EXECUTED MICROSERVICES WITH A STATE MACHINE SUPPORTED WORKFLOW GRAPH - A bipartite workflow graph, representing an understanding of an overall service, comprises two different graph elements: entities and processes and each individual microservice defines their logical constructs as either an entity or a process in accordance with a universal schema. Notifications from such microservices conform to the universal schema, thereby enabling microservices to individually change how they operate internally, without affecting an understanding of the overall system as represented by the workflow graph. Each graph element has its state maintained by a separately addressable execution unit executing a state machine, which can be individually updated based on information received from the microservices. Changes to the workflow graph are logged and an insight engine monitors such a log to insert insight markers in accordance with predefined events, thereby enabling the collection of metrics on a service wide basis and across multiple microservices. | 2020-07-02 |
20200210253 | INTERACTIVE PROCESSING DEVICE AND INTERACTIVE PROCESSING SYSTEM - An interactive processing device according to an embodiment acquires one/more items' information for predetermined operation through interaction with a user, and includes one/more itemized-processing-units and an interaction-control-unit. The itemized-processing-units corresponds to the items. The interaction-control-unit controls the interaction collaboratively with the itemized-processing-units. The interaction-control-unit transmits user-input information to each itemized-processing-unit. Each itemized-processing-unit extracts a candidate for to-be-acquired information and transmits the candidate to the interaction-control-unit with likelihood information of being the to-be-acquired information. The interaction-control-unit determines a candidate having first-threshold-satisfying likelihood as information of an item corresponding to the candidate-transmitted itemized-processing-unit. When undetermined information item exists, the interaction-control-unit outputs inquiry for the item information. Each itemized-processing-unit is implemented by a general-purpose processor given corresponding-item name and an operation-parameter determining each itemized-processing-unit's operation. The general-purpose processing unit has a basic rule for extracting the candidate and calculating the likelihood using interactive knowledge corresponding to the to-be-acquired information type. | 2020-07-02 |
20200210254 | INFORMATION PROCESSING SYSTEM - An information processing system includes: a plurality of information processing devices each including a processor; and a relay device that connects the information processing devices via an expansion bus and relays communication between the information processing devices. The relay device includes a control unit that represents, for one of the information processing devices, the rest of the information processing devices, and communicates with the one of the information processing devices as an integrated information processing device of the relay device and the rest of the information processing devices. | 2020-07-02 |
20200210255 | MESSAGE BUFFER FOR COMMUNICATING INFORMATION BETWEEN VEHICLE COMPONENTS - A method of communicating between a plurality of modules on a vehicle, each module configured as a publisher or subscriber node that communicate in the operation of the autonomous vehicle utilizing a shared memory communication system. The method may include generating groups of messages by publisher nodes, each group associated with a unique topic and generated by a single publisher node associated with the unique topic, writing a group of messages in a message buffer associated with a single topic, writing in a registry, location information indicating where the messages were written, reading new message information from the registry, the new message information indicative of whether a new message associated with a particular topic is available, reading location information indicating where the new message is stored if a new message is available, and reading the new message from the respective message buffer. | 2020-07-02 |
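The topic-keyed buffer-and-registry scheme in this abstract can be sketched roughly as follows. The in-memory dictionaries stand in for the shared memory system, and all names are assumptions for illustration.

```python
# Per-topic message buffers and a registry of the newest message location.
buffers: dict = {}
registry: dict = {}

def publish(topic: str, message: str) -> None:
    # A single publisher per topic appends to that topic's buffer and
    # records the location of the newest message in the registry.
    buffers.setdefault(topic, []).append(message)
    registry[topic] = len(buffers[topic]) - 1

def read_new(topic: str, last_seen: int):
    # A subscriber checks the registry for a message newer than the one
    # it last read, then reads it from the topic's buffer.
    loc = registry.get(topic, -1)
    if loc > last_seen:
        return buffers[topic][loc], loc
    return None, last_seen
```

Separating the registry (small, frequently polled) from the buffers (large, read on demand) keeps the new-message check cheap for subscribers.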
20200210256 | SAFE, SECURE, VIRTUALIZED, DOMAIN SPECIFIC HARDWARE ACCELERATOR - This disclosure relates to various implementations of an embedded computing system. The embedded computing system comprises a hardware accelerator (HWA) thread user and a second HWA thread user that create and send out message requests. The HWA thread user and the second HWA thread user are in communication with a microcontroller (MCU) subsystem. The embedded computing system also comprises a first inter-processor communication (IPC) interface between the HWA thread user and the MCU subsystem and a second IPC interface between the second HWA thread user and the MCU subsystem, where the first IPC interface is isolated from the second IPC interface. The MCU subsystem is also in communication with a first domain specific HWA and a second domain specific HWA. | 2020-07-02 |
20200210257 | Deduplication of Application Program Interface Calls - Embodiments regard deduplication of application program interface calls. An embodiment of an apparatus includes one or more processors to process data; a computer memory; and a network interface, wherein the apparatus includes an intermediary layer between one or more components of the apparatus and the network interface, the intermediary layer to perform deduplication of multiple server API calls from one or more components for the one or more APIs, wherein the deduplication includes one or more of preventing transmission of duplicated server calls from the one or more components to the one or more APIs; and generating one or more combined server calls based at least in part on the plurality of server API calls and transmitting the one or more combined server calls to the one or more APIs. | 2020-07-02 |
20200210258 | Validation Framework for Runtime Connected API Systems - Provided is a validation framework for modelling possible failures that might occur when an orchestrated transaction calls external services to ensure that error handling and reporting is robust and well designed. The disclosed techniques ensure that no changes are necessary to either the code making a call or the services that might be called. The techniques are not limited to web servers and REST APIs as they may be used to test and validate any kind of system that employs well defined APIs. The claimed subject matter, or “validation framework” may be added to an existing API or created as a new module that acts as a proxy server in a non-micro service type of system. Although described with respect to a gateway-API service, the claimed subject matter is equally applicable to other systems that process orchestrated transactions. | 2020-07-02 |
20200210259 | READ WINDOW SIZE - A processing device in a memory system receives a memory command indicating a read window size and a first read voltage and identifies a read window for a first data block of the memory component having the read window size and centered at the first read voltage. The processing device determines whether a number of bit flips for the first data block within the read window exceeds an error threshold and, in response to the number of bit flips exceeding the error threshold, refreshes data stored on the first data block of the memory component. | 2020-07-02 |
20200210260 | FLOW BASED PATTERN INTELLIGENT MONITORING SYSTEM - Systems, methods, and computer program products for identifying a data pattern change anomaly use a distributed computing environment that processes thousands of different data flows. Numerous data flows are collected from the application computing environment over a configurable time period. The flows are aggregated into aggregated data according to at least one attribute from the flows and without losing information included in the flows. Historical data that includes aggregated data from multiple flows that occurred prior to a time during which the numerous data flows were collected is provided from a distributed disk storage. An anomaly that indicates change in data patterns in the flows is identified by comparing the aggregated data to the historical data using one or more models that are tailored to the numerous flows. An alert that includes the anomaly and a reason for the anomaly is transmitted and recorded in the system. | 2020-07-02 |
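One common way to compare aggregated data against historical data, used here purely as an illustrative stand-in for the patent's tailored models, is a standard-deviation band around the historical mean:

```python
from statistics import mean, stdev

def is_anomalous(current_value: float, historical_values: list, sigma: float = 3.0) -> bool:
    # Flag the current aggregated value when it deviates more than
    # `sigma` standard deviations from the historical mean.
    m, s = mean(historical_values), stdev(historical_values)
    return abs(current_value - m) > sigma * s
```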
20200210261 | TECHNOLOGIES FOR MONITORING NODE CLUSTER HEALTH - Technologies for monitoring node cluster health include a plurality of managed nodes of a node cluster communicatively coupled across a data network to a resource manager server. The resource manager server is configured to receive health data, via an out-of-band network, from each of the managed nodes of the node cluster. The resource manager server is further configured to identify whether a managed node of the plurality of managed nodes has indicated a failure, determine a cause of the failure, and classify the failure as being one of a soft failure or a hard failure as a function of the received health data and the cause of the failure. Additionally, the resource manager server is configured to transmit a health state change event to each of the other managed nodes of the plurality of managed nodes of the node cluster. Other embodiments are described herein. | 2020-07-02 |
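The soft/hard classification step might look like the sketch below. The cause categories and the report structure are assumptions for illustration; the patent classifies as a function of both health data and cause.

```python
from dataclasses import dataclass
from typing import Optional

# Assumed cause categories: transient causes are soft, permanent ones hard.
SOFT_CAUSES = {"thermal_throttle", "transient_link_error"}

@dataclass
class HealthReport:
    node_id: str
    failed: bool
    cause: str

def classify_failure(report: HealthReport) -> Optional[str]:
    # Classify a reported failure as "soft" (recoverable) or "hard"
    # (requires intervention) based on its cause.
    if not report.failed:
        return None
    return "soft" if report.cause in SOFT_CAUSES else "hard"
```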
20200210262 | METHOD OF DETECTING COMPATIBLE SYSTEMS FOR SYSTEMS WITH ANOMALIES - Systems and methods are provided for detecting system anomalies and detecting compatible modules for replacing computing systems. The described technique includes receiving system parameters specifying functionality of a first computing system, and interrogating a state model using the received system parameters to detect an anomaly within the first computing system. Responsive to detecting an anomaly in the first computing system based on the state model, the system re-interrogates the state model based on at least one candidate module such that the system parameters of the first computing system are replaced by equivalent system parameters of the candidate module. The system then selects the at least one candidate module based on a determination that the candidate module is compatible with the first computing system, and that no anomaly was detected during the repeat interrogation of the state model using the system parameters of the candidate module. | 2020-07-02 |
20200210263 | SYSTEM AND METHOD FOR DETECTING ANOMALIES IN CYBER-PHYSICAL SYSTEM WITH DETERMINED CHARACTERISTICS - Systems and methods for determining a source of anomaly in a cyber-physical system (CPS). A forecasting tool can obtain a plurality of CPS feature values during an input window and forecast the plurality of CPS feature values for a forecast window. An anomaly identification tool can determine a total forecast error for the plurality of CPS features in the forecast window, identify an anomaly in the cyber-physical system when the total forecast error exceeds a total error threshold, and identify at least one CPS feature as the source of the anomaly. | 2020-07-02 |
20200210264 | SYSTEM AND METHOD OF GENERATING DATA FOR MONITORING OF A CYBER-PHYSICAL SYSTEM FOR EARLY DETERMINATION OF ANOMALIES - The present disclosure provides systems and methods of early determination of anomalies using a graphical user interface. In one aspect such a method comprises: receiving information about one or more features of a cyber-physical system, receiving information about a period of time for monitoring the one or more features, generating a forecast of values of the one or more features of the cyber-physical system over the period of time based on a forecasting model for graphing in a graphical user interface, determining a total error of the forecast for all of the one or more features and determining an error for each of the one or more features over the period of time, determining that the error for one feature of the one or more features is greater than a predetermined threshold and identifying the one feature as a source of an anomaly in the cyber-physical system. | 2020-07-02 |
20200210265 | METHOD AND SYSTEM FOR PREDICTION OF CORRECT DISCRETE SENSOR DATA BASED ON TEMPORAL UNCERTAINTY - This disclosure relates generally to a method and system for prediction of correct discrete sensor data, thus enabling continuous flow of data even when a discrete sensor fails. The activities of humans/subjects, housed in a smart environment is continuously monitored by plurality of non-intrusive discrete sensors embedded in living infrastructure. The collected discrete sensor data is usually sparse and largely unbalanced, wherein most of the discrete sensor data is ‘No’ and comparatively only a few samples of ‘Yes’, hence making prediction very challenging. The proposed prediction techniques based on introduction of temporal uncertainty is performed in several stages which includes pre-processing of received discrete sensor data, introduction of temporal uncertainty techniques followed by prediction based on neural network techniques of learning pattern using historical data. | 2020-07-02 |
20200210266 | UNIVERSAL SERIAL BUS DOCKING APPARATUS AND ERROR DETECTING METHOD THEREOF - A universal serial bus (USB) docking apparatus and an error detecting method thereof are provided. The USB docking apparatus includes a connection port and a charging controller. The connection port is configured to connect an external electrical device electrically. The charging controller is electrically connected to the connection port to receive a pin-configuration signal and decides to execute a first power transition mode or a second power transition mode based on the pin-configuration signal. The charging controller decides to execute the first power transition mode and counts a first suspending time, wherein when the first suspending time is larger than or equal to a threshold time value, the charging controller switches the first power transition mode to the second power transition mode. | 2020-07-02 |
20200210267 | NORMALIZATION OF DETECTING AND REPORTING FAILURES FOR A MEMORY DEVICE - Methods, systems, and apparatuses related to detecting and reporting failures for a memory device are described. When a count of bit-flip errors is above a fail threshold, a memory device can report a failure. Failure reports can indicate a rate at which the memory device is accumulating errors. An offset fail threshold may be applied instead of a default fail threshold, such as a standardized or specified threshold. The offset fail threshold can be a summation of the default fail threshold and an offset determined from an initial error count determined before the memory device has accumulated errors from use. | 2020-07-02 |
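The offset-threshold arithmetic in this abstract is simple enough to state directly; the sketch below uses assumed names, but the summation itself is as the abstract describes.

```python
def offset_fail_threshold(default_threshold: int, initial_error_count: int) -> int:
    # The offset fail threshold is the default (standardized) threshold
    # plus the error count measured before the device accumulated
    # errors from use.
    return default_threshold + initial_error_count

def should_report_failure(bit_flip_count: int, default_threshold: int,
                          initial_error_count: int) -> bool:
    # Report a failure only when bit flips exceed the offset threshold,
    # so pre-existing manufacturing-time errors do not trigger reports.
    return bit_flip_count > offset_fail_threshold(default_threshold,
                                                  initial_error_count)
```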
20200210268 | IMAGING MODALITY SMART SYMPTOM MAINTENANCE SYSTEMS AND METHODS - Methods, apparatus, systems and articles of manufacture providing an image modality smart symptom maintenance are disclosed. The example apparatus includes a system processor to identify a distinguishing symptom of a first subset of issues corresponding to an imaging device. The apparatus further includes an interface to transmit a prompt corresponding to an identification of the distinguishing symptom. The apparatus further includes a filter to filter out issues of the first subset of issues based on a response to the prompt to generate a second subset of issues. The apparatus further includes the system processor to transform the first subset of issues into a solution for servicing the imaging device by applying at least one of the symptom or the first subset of issues to an artificial intelligence model corresponding to the imaging device. The apparatus further includes a care package generator to generate a data structure based on the solution, the data structure including information to assist in the repair of the imaging device. | 2020-07-02 |
20200210269 | EVENT ACTIVATED ERROR RESOLUTION - Computerized systems and methods are provided to intelligently and dynamically monitor at least one account using an evolutionary algorithm to identify and resolve errors. After receiving one or more indications to initiate a controller process as a result of identifying one or more errors within one or more accounts, the controller process is activated. A local cloud controller determines whether the one or more errors are located on a job list that includes a plurality of errors and instruction sets to resolve each of the plurality of errors. Then, the local cloud controller creates one or more agents to implement the instruction sets on the one or more errors to resolve them. Following this, one or more reports are generated that include the status of the one or more errors after the instruction sets have been implemented. | 2020-07-02 |
20200210270 | MEMORY EVALUATION METHOD AND APPARATUS - A memory evaluation method and apparatus are provided. The method includes: determining a health degree evaluation model indicating a relationship in which a health degree of a memory changes with at least one health degree influencing factor of the memory; obtaining at least one running parameter value corresponding to each of the at least one health degree influencing factor; separately matching the at least one running parameter value corresponding to each health degree influencing factor to the health degree evaluation model, to obtain the health degree of the memory; and outputting health degree indication information which indicates whether the memory needs to be replaced. Therefore, when the memory is not yet faulty but its health degree is relatively low, a user is prompted to replace the memory. | 2020-07-02 |
20200210271 | SYSTEM AND METHOD OF DETERMINING COMPATIBLE MODULES - Systems and methods are provided for detecting compatible modules for replacing anomalous elements in computing systems. The described technique includes receiving system parameters specifying functionality of a first computing system, and querying a state model using the received system parameters to detect an anomaly within the first computing system. In response to detecting an anomaly in the first computing system based on the state model, the system determines a recovery method based on a recovery-method model and information about the detected anomaly, and selects, from a tool database, a third-party, system-compatible tool configured to implement the determined recovery method. | 2020-07-02 |
20200210272 | SYSTEMS AND METHODS FOR MEMORY FAILURE PREVENTION, MANAGEMENT, AND MITIGATION - Some embodiments described herein are directed to memory page or bad block monitoring and retirement algorithms, systems and methods for random access memory (RAM). Reliability issues or errors can be detected for multiple memory pages using one or more retirement criterion. In some embodiments, when reliability errors are detected, it may be desired to remove such pages from operation before they create a more serious problem, such as a computer crash. Thus, bad block retirement and replacement mechanisms are described herein. | 2020-07-02 |
20200210273 | SYSTEM AND METHOD FOR DATA REDISTRIBUTION IN A DATABASE - A method for data redistribution of a job data in a first datanode (DN) to at least one additional DN in a Massively Parallel Processing (MPP) Database (DB) is provided. The method includes recording a snapshot of the job data, creating a first data portion in the first DN and a redistribution data portion in the first DN, collecting changes to a job data copy stored in a temporary table, and initiating transfer of the redistribution data portion to the at least one additional DN. | 2020-07-02 |
20200210274 | DATA PROCESSING DEVICE - A data processing device includes a plurality of variable nodes configured to receive and store a plurality of target bits; a plurality of check nodes each configured to receive stored target bits from one or more corresponding variable nodes of the plurality of variable nodes, check whether received target bits have an error bit, and transmit a check result to the corresponding variable nodes; and a group state value manager configured to determine group state values of variable node groups into which the plurality of variable nodes are grouped. | 2020-07-02 |
20200210275 | INDEXING AND RECOVERING ENCODED BLOCKCHAIN DATA - Disclosed herein are computer-implemented methods, computer-implemented systems, and non-transitory, computer-readable media, to index blockchain data for storage. One computer-implemented method includes generating one or more encoded blocks by executing error correction coding (ECC) on one or more blocks of a blockchain. Each of the one or more encoded blocks are divided into a plurality of datasets. An index is provided for the one or more encoded blocks, where the index is used to index each dataset of the plurality of datasets to a blockchain node at which a respective dataset is stored. | 2020-07-02 |
20200210276 | SYSTEM AND METHODS FOR HARDWARE-SOFTWARE COOPERATIVE PIPELINE ERROR DETECTION - An error reporting system utilizes a parity checker to receive data results from execution of an original instruction and a parity bit for the data. A decoder receives an error correcting code (ECC) for data resulting from execution of a shadow instruction of the original instruction, and data error correction is initiated on the original instruction result on condition of a mismatch between the parity bit and the original instruction result, and the decoder asserting a correctable error in the original instruction result. | 2020-07-02 |
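The parity-mismatch detection in this abstract can be illustrated with a minimal even-parity sketch; the function names are assumptions, and real pipeline parity would be computed in hardware, not software.

```python
def parity_bit(data: int) -> int:
    # Even parity: 1 if the data word has an odd number of set bits.
    return bin(data).count("1") & 1

def detect_mismatch(result: int, stored_parity: int) -> bool:
    # A mismatch between the recomputed parity of the original
    # instruction's result and the stored parity bit flags a possible
    # error, prompting correction via the shadow instruction's ECC.
    return parity_bit(result) != stored_parity
```

Parity alone only detects single-bit errors; the ECC from the shadow instruction is what allows correction, which is why the scheme combines both.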
20200210277 | DATA STORAGE DEVICE EMPLOYING MULTI-LEVEL PARITY SECTORS FOR DATA RECOVERY PROCEDURE - A data storage device is disclosed comprising a head actuated over a disk. A first plurality of codewords and corresponding parity sector are generated, and a second plurality of codewords and corresponding parity sector are generated. The first and second plurality of codewords are written to the disk, and during a read of the first and second set of codewords, M codeword locations within the data track that are unrecoverable are saved, and N codeword locations out of the M codeword locations are selected based on a quality metric of the read. The N codewords are reread from the data track at the N codeword locations and reliability metrics associated with the N codewords are saved. The saved reliability metrics are updated using at least one of the first parity sector or the second parity sector. | 2020-07-02 |
20200210278 | ERROR CORRECTION IN ROW HAMMER MITIGATION AND TARGET ROW REFRESH - Methods, systems, and apparatuses for memory (e.g., DRAM) having an error check and scrub (ECS) procedure in conjunction with refresh operations are described. While a refresh operation reads the code words of a memory row, ECS procedures may be performed on some of the sensed code words. When the write portion of the refresh begins, a code word discovered to have errors may be corrected before it is written back to the memory row. The ECS procedure can be incremental across refresh operations, beginning, for example, each ECS at the code word where the previous ECS for that row left off. The ECS procedure can include an out-of-order (OOO) procedure where ECS is performed more often for certain identified code words. | 2020-07-02 |
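The incremental-resume behavior can be sketched as a wrapping index calculation; names and the fixed per-pass count are illustrative assumptions.

```python
def next_ecs_range(last_index: int, codewords_per_pass: int,
                   row_length: int) -> list:
    # Resume scrubbing at the code word where the previous ECS pass for
    # this row left off, wrapping around the end of the row so every
    # code word is eventually covered across refresh operations.
    start = last_index % row_length
    return [(start + i) % row_length for i in range(codewords_per_pass)]
```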
20200210279 | METHODS AND SYSTEM WITH DYNAMIC ECC VOLTAGE AND FREQUENCY - Apparatus and methods are disclosed, including using a memory controller to monitor at least one parameter related to power level of a host processor of a host device, and dynamically adjusting at least one of a clock frequency and a voltage level of an error-correcting code (ECC) subsystem of the memory controller based on the at least one parameter to control power usage of the host device. | 2020-07-02 |
20200210280 | MULTI-PAGE PARITY PROTECTION WITH POWER LOSS HANDLING - A variety of applications can include use of parity groups in a memory system with the parity groups arranged for data protection of the memory system. Each parity group can be structured with multiple data pages in which to write data and a parity page in which to write parity data generated from the data written in the multiple data pages. Each data page of a parity group can have storage capacity to include metadata of data written to the data page. Information can be added to the metadata of a data page with the information identifying an asynchronous power loss status of data pages that precede the data page in an order of writing data to the data pages of the parity group. The information can be used in re-construction of data in the parity group following an uncorrectable error correction code error in writing to the parity group. | 2020-07-02 |
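A common way to realize the parity-page idea, used here as an illustrative assumption rather than the patent's actual scheme, is XOR parity: the parity page is the XOR of all data pages, and any one lost page is the XOR of the parity page with the survivors.

```python
def parity_page(data_pages: list) -> bytes:
    # XOR all data pages of the parity group together byte by byte
    # to produce the parity page.
    page = bytearray(len(data_pages[0]))
    for dp in data_pages:
        for i, b in enumerate(dp):
            page[i] ^= b
    return bytes(page)

def reconstruct_page(surviving_pages: list, parity: bytes) -> bytes:
    # A single lost data page is recovered by XORing the parity page
    # with all surviving data pages.
    return parity_page(surviving_pages + [parity])
```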
20200210281 | METHOD AND DEVICE FOR DETERMINING CHECK SUMS, BUFFER MEMORY AND PROCESSOR - A method for determining check sums for a buffer memory for a processor, the method including a step of reading in a data unit of the buffer memory marked as changed by an access of the processor, a step of ascertaining a check sum for the data unit using a check sum unit of the buffer memory and a step of supplementing the data unit with the check sum and marking the data unit as changed with a valid check sum. | 2020-07-02 |
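The read-in, ascertain, and supplement-and-mark steps above can be sketched as follows; CRC32 and all field names are assumptions chosen for illustration, not details from the filing.

```python
import zlib

# Illustrative sketch: a data unit marked as changed by a processor access is
# read in, a check sum is ascertained, and the unit is re-marked as
# "changed with a valid check sum".

class DataUnit:
    def __init__(self, payload: bytes):
        self.payload = payload
        self.checksum = None
        self.state = "changed"  # dirtied by a processor access

def ascertain_checksum(unit: DataUnit) -> None:
    if unit.state == "changed":  # only units marked as changed are processed
        unit.checksum = zlib.crc32(unit.payload)
        unit.state = "changed_valid_checksum"

unit = DataUnit(b"cache line contents")
ascertain_checksum(unit)
print(unit.state)  # changed_valid_checksum
```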
20200210282 | DISPOSABLE PARITY - Devices and techniques for disposable parity are described herein. First and second portions of data can be obtained, and respective parity values stored in adjacent memory locations. An entry mapping the respective parity values to the first and second portions of data is updated when the parity values are stored. If an error occurs when writing a portion of data, the mapping entry is used to retrieve the parity data to correct the error. Otherwise, the parity data is discarded. | 2020-07-02 |
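A minimal sketch of the disposable-parity flow above: parity for two data portions is kept only until both writes succeed, then discarded; on a write error it is used to recover the failed portion. XOR parity and the dict-based mapping entry are illustrative assumptions, not the patent's actual mechanism.

```python
# Disposable parity sketch: XOR parity protects two portions during the write,
# then the mapping entry is deleted once both writes succeed.

def xor_parity(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

parity_map = {}  # entry mapping a parity location to the portions it protects

def write_portions(p1: bytes, p2: bytes, write_ok: bool):
    parity_map["loc0"] = xor_parity(p1, p2)
    if not write_ok:
        # error while writing p2: recover it from p1 and the retained parity
        return xor_parity(parity_map["loc0"], p1)
    del parity_map["loc0"]  # both writes succeeded: the parity is disposable
    return None

ok = write_portions(b"\x01\x02", b"\x03\x04", write_ok=True)
print(ok, parity_map)  # None {}
recovered = write_portions(b"\x01\x02", b"\x03\x04", write_ok=False)
print(recovered)  # b'\x03\x04'
```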
20200210283 | EXTENDED ERROR CORRECTION IN STORAGE DEVICE - Devices and techniques for extended error correction in a storage device are described herein. A first set of data, that has a corresponding logical address and physical address, is received. A second set of data can be selected based on the logical address. Secondary error correction data can be computed from the first set of data and the second set of data. Primary error correction data can be differentiated from the secondary error correction data by being computed from the first set of data and a third set of data. The third set of data can be selected based on the physical address of the first set of data. The secondary error correction data can be written to the storage device based on the logical address. | 2020-07-02 |
20200210284 | MINIMAL ALIASING BIT-ERROR CORRECTION CODE - Systems and methods related to data encoders that can perform error detection or correction. The encoders and decoders may minimize the addition of errors due to aliasing in error correction codes by implementing operators associated with reduced aliasing parity generating or reduced aliasing error checking matrices. | 2020-07-02 |
20200210285 | FLASH MEMORY AND OPERATION METHOD THEREOF - Disclosed are a nonvolatile memory and an operation method thereof. The nonvolatile memory includes a memory cell array and a controller. The controller is configured to: read out raw data from a plurality of memory cells in the memory cell array; correct the raw data by using error correction code (ECC) data to obtain corrected data; determine an address of a memory cell having a data loss error in the plurality of memory cells; and program the memory cell having the data loss error. After the ECC correction in the read operation, the data loss error is corrected by a program operation. | 2020-07-02 |
20200210286 | SOFT CHIPKILL RECOVERY FOR BITLINE FAILURES - Disclosed are devices, systems and methods for improving performance of a block of a memory device. In an example, performance is improved by implementing soft chipkill recovery to mitigate bitline failures in data storage devices. An exemplary method includes encoding each horizontal row of cells of a plurality of memory cells of a memory block to generate each of a plurality of codewords, and generating a plurality of parity symbols, each of the plurality of parity symbols based on diagonally positioned symbols spanning the plurality of codewords. | 2020-07-02 |
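A hedged sketch of the diagonal-parity idea above: each parity symbol is formed from diagonally positioned symbols spanning the row codewords, so a failed bitline (an entire column) leaves every diagonal with at most one lost symbol. XOR stands in for the real parity computation purely for illustration.

```python
from functools import reduce

# Diagonal parity sketch: one parity symbol per diagonal across row codewords,
# where diagonal d collects column (d + r) % cols from row r.

def diagonal_parity(codewords):
    """codewords: equal-length rows of integer symbols; one parity per diagonal."""
    rows, cols = len(codewords), len(codewords[0])
    parity = []
    for d in range(cols):
        diagonal = [codewords[r][(d + r) % cols] for r in range(rows)]
        parity.append(reduce(lambda a, b: a ^ b, diagonal))
    return parity

cw = [[1, 2, 3, 4],
      [5, 6, 7, 8],
      [9, 10, 11, 12]]
print(diagonal_parity(cw))  # [12, 9, 2, 11]
```

Because each diagonal touches a different column in every row, losing one column (a bitline) costs each diagonal only one symbol, which the parity can restore.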
20200210287 | Error Correction Hardware With Fault Detection - Error correction code (ECC) hardware includes write generation (Gen) ECC logic and a check ECC block coupled to an ECC output of a memory circuit with read Gen ECC logic coupled to an XOR circuit that outputs a syndrome signal to a syndrome decode block coupled to a single bit error correction block. A first MUX that receives the write data is in series with an input to the write Gen ECC logic, or a second MUX that receives the read data from the memory circuit is in series with an input of the read Gen ECC logic. A cross-coupling connector couples the read data from the memory circuit to a second input of the first MUX, or couples the write data to a second input of the second MUX. An ECC bit comparator compares an output of the write Gen ECC logic to the read Gen ECC logic output. | 2020-07-02 |
20200210288 | SYSTEM AND METHOD FOR FACILITATING DIFFERENTIATED ERROR CORRECTION IN HIGH-DENSITY FLASH DEVICES - Embodiments described herein provide a system for facilitating modulation-assisted error correction. The system can include a plurality of flash memory cells, an organization module, a mapping module, and a modulation module. During operation, the organization module groups bits of a cluster of cells in the plurality of flash memory cells into a first group and a second group. A respective one of the first and second groups includes bits from a respective cell of the cluster of cells. The mapping module generates a modulation map that maps a subset of bits indicated by the first group in such a way that the subset of bits is repeated in a respective domain of bits indicated by the second group. The modulation module then programs user data bits in the cluster of cells based on the modulation map. | 2020-07-02 |
20200210289 | ERROR CORRECTING SYSTEM SHARED BY MULTIPLE MEMORY DEVICES - An error correcting system is provided. The error correcting system includes an error correcting code (ECC) circuit and a control circuit. The ECC circuit is configured to encode input data received from M input terminals to generate encoded data in response to a write operation, and output the encoded data. The input data includes write data associated with the write operation, and the encoded data includes the input data and associated parity data. The control circuit is coupled to at least one of the M input terminals. When the write operation is directed to a memory device having a data bit width less than M bits, the write data is inputted to a portion of the M input terminals, the control circuit is configured to provide reference data to another portion of the M input terminals, and the write data and the reference data serve as the input data. | 2020-07-02 |
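The width-adaptation step above can be sketched as follows: when the target memory device has a data bit width less than the encoder's M input bits, the write data fills part of the M inputs and fixed reference data pads the remainder, so one ECC circuit serves devices of different widths. M = 16 and zero padding are illustrative assumptions.

```python
# Sketch: pad narrow write data with reference data up to the M encoder inputs
# so a single shared ECC encoder can serve memory devices of different widths.

M = 16  # encoder input width in bits (example value)

def form_encoder_input(write_bits, reference_bit=0):
    """Combine write data and reference data to form the full encoder input."""
    assert len(write_bits) <= M
    return write_bits + [reference_bit] * (M - len(write_bits))

narrow_write = [1, 0, 1, 1, 0, 0, 1, 0]  # data for an 8-bit-wide device
print(form_encoder_input(narrow_write))
```

Because the reference bits are fixed and known, they contribute a constant amount to the parity and need not be stored on the narrow device.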
20200210290 | METHOD AND APPARATUS FOR PERFORMING DYNAMIC RECOVERY MANAGEMENT REGARDING REDUNDANT ARRAY OF INDEPENDENT DISKS - A method and apparatus for performing dynamic recovery management regarding a RAID are provided. The method includes: writing a first set of protected data into a first protected access unit of multiple protected access units of the RAID, and recording a first set of management information corresponding to the first set of protected data, for data recovery of the first set of protected data; and when any storage device of multiple storage devices of the RAID malfunctions, writing a second set of protected data into a second protected access unit of the protected access units, and recording a second set of management information corresponding to the second set of protected data, for data recovery of the second set of protected data. Any set of the first set of protected data and the second set of protected data includes data and multiple parity-check codes. | 2020-07-02 |
20200210291 | STORAGE SYSTEM - Provided is a storage system that performs inter-node movement of parity and reconfiguration of a stripe when a node configuration is changed. The storage system includes a plurality of nodes and a management unit, in which the nodes are targets for data write and read requests, form a stripe by a plurality of data stored in different nodes and parity generated based on the plurality of data, and store the parity of the stripe to which the data under the write request belongs in a node different from the plurality of nodes that store the plurality of data so as to perform redundancy; and the management unit transmits, to the node, an arrangement change request to perform the inter-node movement of the parity and the reconfiguration of the stripe when the node configuration is changed. | 2020-07-02 |
20200210292 | ERROR CORRECTION APPARATUS, OPERATION METHOD THEREOF AND MEMORY SYSTEM USING THE SAME - An error correction apparatus may include: an input component configured to receive data; an error information generation component having a first error detection ability to detect L errors and a second error detection ability to detect K errors, where L is a positive integer and K is an integer larger than L, and configured to generate error information including the number of errors contained in the received data and the positions of the errors, based on the first error detection ability, and generate the error information based on the second error detection ability, when the error information is not generated on the basis of the first error detection ability; an error correction component configured to correct the errors of the received data based on the generated error information; and an output component configured to output the corrected data. | 2020-07-02 |
20200210293 | APPLICATION HEALTH MONITORING AND AUTOMATIC REMEDIATION - An application health monitoring system automatically resolves anomalies arising among clients of a messaging server. The messaging server clients (MSCs) include one or more applications and services included in the applications. The anomalies include MSC anomalies and process starter anomalies. When a messaging session is disconnected due to server restarts, the service may be automatically restarted a predetermined number of times to re-establish the connection. Similarly, if a process starter of a service fails to start up properly, the service can be automatically restarted a predetermined number of times before the anomaly is flagged for human review. The monitoring system also automatically implements rules whenever service configurations are changed in addition to validating web service ports and cloud provider queues. | 2020-07-02 |
20200210294 | SYNTHESIZING A RESTORE IMAGE FROM ONE OR MORE SECONDARY COPIES TO FACILITATE DATA RESTORE OPERATIONS TO A FILE SERVER - An illustrative media agent (MA) in a data storage management system instructs a NAS file server (filer) to restore an MA-created synthesized-copy instead of larger filer-created backup copies. The synthesized-copy is designed only for the particular files to be restored and mimics, and is typically much smaller than, a filer-created backup copy. The synthesized-copy is fed to the filer on restore as a “restore data image.” When receiving a restore request for certain backed-up data files, the MA synthesizes the synthesized-copy on the fly. The MA generates a header mimicking a filer-created backup header; extracts files from filer-created backup copies arranging them within the synthesized-copy as if in filer-created backups; and instructs filer to perform a full-volume restore from the synthesized-copy. The MA serves the synthesized-copy piecemeal as available, rather than waiting to synthesize the entire synthesized-copy. The synthesized-copy is not stored at the MA. | 2020-07-02 |
20200210295 | MEMORY SYSTEM AND OPERATING METHOD THEREOF - A memory system includes: a memory device including a master block and a back-up master block; and a controller suitable for performing a boot operation by using boot data that is read from the master block or the back-up master block, wherein the controller includes: a booting manager suitable for reading boot data from the back-up master block when an operation of reading the boot data from the master block fails; and a test read manager suitable for performing a test read operation on the back-up master block whenever the number of times that the boot data is read reaches a threshold, and performing a recovery operation on the back-up master block when the test read operation fails. | 2020-07-02 |
20200210296 | THREE-DIMENSIONAL STACKED MEMORY DEVICE AND METHOD - A three-dimensional stacked memory device includes a buffer die having a plurality of core die memories stacked thereon. The buffer die is configured as a buffer to occupy a first space in the buffer die. The first memory module, disposed in a second space unoccupied by the buffer, is configured to operate as a cache of the core die memories. The controller is configured to detect a fault in a memory area corresponding to a cache line in the core die memories based on a result of a comparison between data stored in the cache line and data stored in the memory area corresponding to the cache line in the core die memories. The second memory module, disposed in a third space unoccupied by the buffer and the first memory module, is configured to replace the memory area when the fault is detected in the memory area. | 2020-07-02 |
20200210297 | INFORMATION PROCESSING APPARATUS AND METHOD OF CONTROLLING INFORMATION PROCESSING APPARATUS - A main controller of an information processing apparatus includes a central processing unit (CPU), and the CPU includes a hard disk drive (HDD) control unit that performs write and read of data on an HDD, and controls the HDD control unit to control write and read on the HDD. In a case where the CPU detects that an error has occurred such that write or read of data on the HDD cannot be performed by the HDD control unit, the CPU performs a recovery process that restores the area where the error has occurred. | 2020-07-02 |
20200210298 | METHOD AND DEVICE FOR REBUILDING RAID - Embodiments of the present disclosure provide a method and device for RAID rebuilding. In some embodiments, there is provided a computer-implemented method. The method comprises: determining a spare redundant array of independent disks (RAID) group with spare capacity from a plurality of disks included in at least one RAID group of a storage pool; building spare logic units from the spare RAID group; and in response to a RAID group of the at least one RAID group of the storage pool being in a degradation state, rebuilding a failed disk in a degraded RAID group using the spare logic units. | 2020-07-02 |
20200210299 | SELF-TEST DURING IDLE CYCLES FOR SHADER CORE OF GPU - The disclosure describes techniques for a self-test of a graphics processing unit (GPU) independent of instructions from another processing device. The GPU may perform the self-test in response to a determination that the GPU enters an idle mode. The self-test may be based on information indicating a safety level, where the safety level indicates how many faults in circuits or memory blocks of the GPU need to be detected. | 2020-07-02 |
20200210300 | DIAGNOSTIC SCAN - One embodiment provides a method, including: receiving, at an information handling device and in a pre-operating system (OS) environment, an indication to run a diagnostic application; conducting, using the diagnostic application, a diagnostic scan on one or more of the information handling device components; and generating, based on the diagnostic scan, a results report. Other aspects are described and claimed. | 2020-07-02 |
20200210301 | DEBUG FOR MULTI-THREADED PROCESSING - A system to implement debugging for a multi-threaded processor is provided. The system includes a hardware thread scheduler configured to schedule processing of data, and a plurality of schedulers, each configured to schedule a given pipeline for processing instructions. The system further includes a debug control configured to control at least one of the plurality of schedulers to halt, step, or resume the given pipeline of the at least one of the plurality of schedulers for the data to enable debugging thereof. The system further includes a plurality of hardware accelerators configured to implement a series of tasks in accordance with a schedule provided by a respective scheduler in accordance with a command from the debug control. Each of the plurality of hardware accelerators is coupled to at least one of the plurality of schedulers to execute the instructions for the given pipeline and to a shared memory. | 2020-07-02 |
20200210302 | USAGE PROFILE BASED RECOMMENDATIONS - A server may receive a device profile from a computing device. The device profile may identify a usage of at least software applications associated with the computing device. The server may perform a comparison of the device profile with other device profiles associated with other computing devices, determine a similarity index of the device profile with individual ones of the other device profiles, and select a subset of the other device profiles based on the similarity index to create a set of similar device profiles. The server may determine configuration differences between the device profile of the computing device and individual device profiles of the similar device profiles, determine recommendations based on the configuration differences, and send the recommendations to the computing device. Implementing one or more of the recommendations may cause the one or more tasks to execute faster or use less of one or more computing resources. | 2020-07-02 |
20200210303 | CONTROLLER AND OPERATION METHOD THEREOF - Provided is a controller for controlling a memory device. The controller may include a media scanner suitable for performing a media scan operation of reading a predetermined size of data from the memory device in a predetermined cycle, detecting an error of the read data, generating corrected data of the read data, and storing the corrected data in the memory device, a period calculator suitable for calculating a power-off period, and a media scan controller suitable for changing the predetermined cycle according to the power-off period. | 2020-07-02 |
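The cycle-adjustment step above can be sketched as below: the media-scan cycle is shortened in proportion to the calculated power-off period, on the view that retention errors accumulate while power is absent. The linear scaling rule and all constants are assumptions; the abstract only says the cycle changes according to the power-off period.

```python
# Sketch: shrink the media-scan cycle as the power-off period grows, with a
# floor so the cycle never collapses to zero. Scaling rule is an assumption.

def adjust_scan_cycle(base_cycle_s, power_off_s, shrink_per_hour=0.1, min_cycle_s=10):
    """Return a shortened scan cycle (seconds) for a given power-off period."""
    hours_off = power_off_s / 3600
    cycle = base_cycle_s * max(0.0, 1.0 - shrink_per_hour * hours_off)
    return max(min_cycle_s, cycle)

print(adjust_scan_cycle(base_cycle_s=600, power_off_s=7200))    # 480.0
print(adjust_scan_cycle(base_cycle_s=600, power_off_s=360000))  # 10 (floor)
```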
20200210304 | SERVER POWER CONSUMPTION MANAGEMENT METHOD AND DEVICE - A power consumption management method and a power consumption management device are provided. When a power module of a power supply for a server is faulty, the power consumption management device receives fault information sent by the power supply and, based on the fault information, reduces the server's first power consumption, calculated when the power module works normally, by a first value to a second power consumption. The first value is not less than the reduced value of the server's power consumption calculated when the power module is faulty. In addition, the power consumption management device adjusts the second power consumption of the server based on a power consumption capping value of the server. This solution avoids a breakdown of the server and improves power utilization after the power module becomes faulty. | 2020-07-02 |
20200210305 | SYSTEM, DEVICE AND METHOD FOR FROZEN PERIOD DETECTION IN SENSOR DATASETS - A method is disclosed herein of detecting at least one frozen period in at least one sensor dataset associated with at least one sensor in a technical system. The method includes receiving the at least one sensor dataset in time series and computing run-lengths for the at least one sensor dataset, wherein each of the run-lengths is length of consecutive repetitions of a sensor value in the at least one sensor dataset. The method includes clustering the run-lengths into one of two clusters based on a run frequency, wherein the run frequency is a number of times the run-lengths are repeated in the at least one sensor dataset. Further, the method includes identifying a cluster from the two clusters with lower run frequency and detecting the at least one frozen period in the at least one sensor dataset based on the identified cluster. | 2020-07-02 |
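The steps above can be sketched as follows: compute run-lengths of consecutive repeated sensor values, count how often each run-length occurs (its "run frequency"), split the run-lengths into two clusters, and take the low-frequency cluster as the frozen-period candidates. The mean-frequency threshold below stands in for whatever two-way clustering the patent actually uses.

```python
from itertools import groupby

# Frozen-period sketch: rare, long runs of an unchanging sensor value fall
# into the low-frequency cluster and are flagged as frozen-period candidates.

def run_lengths(series):
    """Lengths of consecutive repetitions of each value in the series."""
    return [len(list(group)) for _, group in groupby(series)]

def frozen_run_lengths(series):
    lengths = run_lengths(series)
    freq = {}
    for length in lengths:
        freq[length] = freq.get(length, 0) + 1
    # two clusters by run frequency, split at the mean frequency (assumption)
    mean_freq = sum(freq.values()) / len(freq)
    return {length for length, f in freq.items() if f < mean_freq}

# short runs dominate; the single long run of 7s is the frozen-period candidate
data = [1, 2, 3, 4, 5, 7, 7, 7, 7, 7, 7, 8, 9]
print(frozen_run_lengths(data))  # {6}
```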
20200210306 | METHOD INCLUDING COLLECTING AND QUERYING SOURCE CODE TO REVERSE ENGINEER SOFTWARE - A computer-implemented method includes: collecting, by a processor of a computer, analysis information from a source code of a computer program; tracing, by the processor of the computer, program behavior starting with a main entry point to produce trace data; and visualizing, by the processor of the computer, the trace data as a sequence diagram, wherein the trace data comprises a representation corresponding to a sequence diagram. | 2020-07-02 |
20200210307 | METHOD FOR AUTOMATICALLY ANALYZING BOTTLENECK IN REAL TIME AND AN APPARATUS FOR PERFORMING THE METHOD - The present invention relates to a method for automatically analyzing a bottleneck in real time and an apparatus for performing the method. The method for automatically analyzing a bottleneck in real time may comprise the steps of: an application server receiving a bottleneck analysis component; and the application server installing the bottleneck analysis component, wherein the bottleneck analysis component may add a call code for a performance information collector to an application installed on the application server that is to be monitored, wherein the bottleneck analysis component may call the performance information collector according to execution of a service function of the application, requested by a client, to generate service performance information for analyzing a bottleneck phenomenon. | 2020-07-02 |
20200210308 | GENERATION, ADMINISTRATION AND ANALYSIS OF USER EXPERIENCE TESTING - Systems and methods for generating, administering and analyzing a user experience study are provided. In particular, intents can be generated from a user experience study by applying one or more screener questions to participants and subjecting the screened participants to one or more tasks. Corresponding clickstreams and success data for each participant engaging in the tasks can be recorded. The success and clickstream data can also be aggregated for all the screened participants as aggregated results. Video data including audio for each of the screened participants can also be recorded. | 2020-07-02 |
20200210309 | CONTROLLER AND OPERATION METHOD THEREOF - Provided is a controller for controlling a memory device including a plurality of memory blocks. The controller may include a monitoring component suitable for monitoring a memory block usage of the plurality of memory blocks, and storing an actual memory block usage for a predetermined cycle, a memory block usage comparator suitable for calculating a desired memory block usage indicating a maximum memory block usage for the predetermined cycle, and comparing the desired memory block usage to the actual memory block usage, and a background operation manager suitable for performing a background operation according to the memory block usage comparison result. | 2020-07-02 |
20200210310 | ANALYTICS-BASED ARCHITECTURE COMPLIANCE TESTING FOR DISTRIBUTED WEB APPLICATIONS - A method includes: accessing a plurality of service side logs containing data pertaining to the performance of a computing system in a data center with respect to infrastructure resource consumption; evaluating the performance for architectural compliance based on the accessed data by comparing request patterns against expected resource usage models in the architectural design to identify departures from the expected resource usage models; and publishing the evaluation results with respect to the identified departures, the evaluation results including details of components, target resource uniform resource identifiers, frequency of usage, and infrastructure resource consumption. | 2020-07-02 |
20200210311 | CONFIGURING DATA PROCESSING PIPELINES - Systems and methods are provided that are useful for configuring data processing pipelines. During building of a dataset in a data processing pipeline, statistics can be calculated relating to the dataset. | 2020-07-02 |
20200210312 | TRACKING DATA FLOW THROUGH DATA SERVICES USING A PROCESSING REQUEST IDENTIFIER IN CALLSTACK DATA - There are provided systems and methods for tracking data flow through data services using a processing request identifier in callstack data. During processing requests with a service provider, each request is assigned a particular identifier, called a correlation identifier. The correlation identifier is stored in callstack data and may be used to map these individual data processing flows for the requests to the data processing services of the service provider used during the flows. Once the data flows are determined, the actually used services may be identified. The mapping system may also provide for removal of erroneous callstack data and reassembly of callstack data during asynchronous service calls. Additionally, the data flows may be used to see where multiple callstacks have divergent data flows. A service provider may utilize the data flows for determination of service usage rates. | 2020-07-02 |
20200210313 | MANAGEMENT OF INTERNET OF THINGS DEVICES - A method and system for communicating with IoT devices to gather information related to device failure or error(s) is disclosed. The system receives log files from an IoT device (e.g., a smart refrigerator) that recently failed. The system determines which log files the IoT device created before and/or after a failure. After gathering this information, the system stores the information in a database, sends it to the IoT device manufacturer, or sends it to a cloud provider. The system can also send the failure-related information to the IoT device-related entities (e.g., IoT device manufacturers), and the entity uses this information to troubleshoot the failure and send a fix or software update to the IoT device. | 2020-07-02 |
20200210314 | METHOD, APPARATUS, AND DEVICE FOR STORING OPERATION RECORD BASED ON TRUSTED EXECUTION ENVIRONMENT - In an implementation, operation instructions, sent by a first client device and indicating application data to be used for performing one or more operations, are received. The application data is determined based on the operation instructions. One or more second client devices associated with the application data are determined. Operation codes to be executed in a trusted execution environment (TEE) associated with the application data are determined. It is determined, based on an indicator, that the operation codes have been executed K times. The operation codes are executed in the TEE based on the application data to generate an operation log. An indicator indicating a number of times the operation codes are executed is updated. The operation log and the indicator are sent as an operation record to a database server, the first client device, and the one or more second client devices to be stored. | 2020-07-02 |
20200210315 | METHOD AND SYSTEM FOR CACHE AGENT TRACE AND CAPTURE - In one embodiment, a processor comprises a fabric interconnect to couple a first cache agent to at least one of a memory controller or an input/output (I/O) controller; and a first cache agent comprising a cache controller coupled to a cache; and a trace and capture engine to periodically capture a snapshot of state information associated with the first cache agent; trace events to occur at the first cache agent in between captured snapshots; and send the captured snapshots and traced events via the fabric interconnect to the memory controller or I/O controller for storage at a system memory or storage device. | 2020-07-02 |
20200210316 | METHOD FOR VISUALIZING OBJECTS IN COMPUTER MEMORY - A computer-implemented method is disclosed that includes receiving content associated with a heap dump of a computer application, generating a plurality of files based on the heap dump content, and loading the files into a graph database. The files so generated are compatible with the graph database. In some implementations, additional analysis and route finding (e.g., finding the relationship between two nodes) may be performed on the resulting object graph. | 2020-07-02 |
20200210317 | GRAPHICS PROCESSING UNIT FOR DERIVING RUNTIME PERFORMANCE CHARACTERISTICS, COMPUTER SYSTEM, AND OPERATION METHOD THEREOF - A computing system is provided. The computing system includes: a memory configured to store a shader program; and a graphics processing unit (GPU) configured to obtain the shader program stored in the memory in a profile mode, the GPU being configured to perform: inserting, into the shader program, one or more monitor associative codes; compiling the shader program, into which the one or more monitor associative codes are inserted, into a language that is capable of being processed by a plurality of cores; and obtaining a runtime performance characteristic of the shader program by executing the compiled shader program and the one or more monitor associative codes. | 2020-07-02 |
20200210318 | EXPOSING AND REPRODUCING SOFTWARE RACE CONDITIONS - Techniques for exposing and reproducing race conditions are presented herein. The method comprises identifying a synchronization mechanism of a grouping of operating system synchronization mechanisms; based on a tunable probability value, adjusting a race window associated with the synchronization mechanism; and based on the race window, raising a likelihood of revealing a race condition. | 2020-07-02 |
20200210319 | COMPRESSED TRACE LOG - A method for debugging a program includes executing, by a computing platform, a given program with a plurality of loops. Each of the plurality of loops includes multiple candidate iterations, and each loop in the given program includes a set of executable statements. A particular loop of the plurality of loops can include at least a particular iteration and one or more other iterations. The method can also include executing at least the particular iteration and the one or more other iterations for the particular loop. During execution of at least the particular iteration and the one or more other iterations for the particular loop, information that indicates which iteration of the particular loop is being executed is stored. Further, the method includes discarding temporarily stored information about the one or more other iterations without storing the information in the log. | 2020-07-02 |
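A minimal sketch of the trace-compression idea above: while a loop executes, per-iteration trace information is held only temporarily; the iteration of interest is committed to the log, and information about the other iterations is discarded without ever being stored. All names are illustrative, not taken from the filing.

```python
# Compressed trace-log sketch: only the particular iteration's trace data is
# kept; temporary data for the other iterations is discarded, not logged.

def run_traced_loop(iterations, iteration_of_interest, log):
    for i in range(iterations):
        temp = {"iteration": i, "events": [f"stmt@{i}"]}  # temporary trace data
        if i == iteration_of_interest:
            log.append(temp)  # store only the particular iteration
        # otherwise `temp` is dropped here, keeping the trace log compressed

log = []
run_traced_loop(iterations=1000, iteration_of_interest=42, log=log)
print(len(log), log[0]["iteration"])  # 1 42
```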
20200210320 | TRACE MANAGEMENT DURING ABORTED SPECULATIVE OPERATIONS - A method for tracing software code executing on a core of a processor is described. The method includes generating a set of packets for a trace packet stream based on a main cycle counter, which maintains a count of cycles elapsing in the core since a packet was emitted into the trace packet stream, and a commit cycle counter, which maintains a cycle count in the core since the last commit operation, wherein the generating comprises (1) storing a value of the main cycle counter in the commit cycle counter in response to detecting a commit operation and (2) storing a value of the commit cycle counter in the main cycle counter in response to detecting an abort in the core; and emitting the set of packets from the processor into the trace packet stream for tracing execution of the software code. | 2020-07-02 |
20200210321 | TERMINAL FAILURE BUSTER - A system and method may include a mobile electronic device including a primary electronic display and a processor in communication with the primary electronic display. A secondary electronic display may be in communication with the processor. The processor may execute software that, in the event of an execution error of a software program or hardware component, causes at least one message inclusive of information to assist a user with a debugging process to be displayed on the secondary electronic display. | 2020-07-02 |
20200210322 | APPLICATION MANAGEMENT SERVICE INCIDENT RESOLUTION METHOD - This invention relates to an application management service incident resolution method comprising, for at least a first category of complex incidents: selecting, from a library, an incident resolution workflow which is relevant for a given specific complex incident having occurred and which comprises a succession of activities, said activities being categorized as either automated or manual, automated activities proposing at least one or more robotic process automation agents and/or one or more automation scripts to be activated by the user, and manual activities proposing at least one or more contextual video clips to be displayed to the user; and performing effective incident resolution, by the user at least selecting and combining, and preferably amending, adding, and/or removing, at least part of said activities among said succession of activities, said automated activities or part of them being simply activated by the user, and said manual activities being manually performed by the user based on at least the teachings of said one or more contextual video clips. This effective incident resolution, if successful, may be converted to a new alternative incident resolution workflow that is then included in said library. | 2020-07-02 |
20200210323 | SELF SUSTAINED REGRESSION TESTING FRAMEWORK - Systems, methods, and computer program products for testing new software are provided. Multiple payloads that correspond to scenarios in a production computing environment are identified. From the multiple payloads unique payloads are identified. User data that corresponds to the unique payloads is created. A first testing environment conducts a test using software components in the production environment, the unique payloads, and the user data to generate expected results. A second testing environment conducts a test using new software that replaces at least one of the software components in the production environment, the unique payloads, and the user data, to generate actual results. The one or more attributes in the expected results are compared to the one or more attributes in the actual results to determine if the new software causes an error. | 2020-07-02 |
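The regression-testing flow above (deduplicate production payloads, run both the baseline and the candidate software over them, then compare selected attributes of the results) can be sketched as follows. This is an illustrative reduction under assumed names, not the patented framework itself; payloads are modeled as plain dicts.

```python
def unique_payloads(payloads):
    """Deduplicate production payloads by their full contents."""
    seen, out = set(), []
    for p in payloads:
        key = tuple(sorted(p.items()))
        if key not in seen:
            seen.add(key)
            out.append(p)
    return out

def regression_diff(baseline_fn, candidate_fn, payloads, attrs):
    """Run baseline and candidate over the unique payloads and collect
    payloads whose compared attributes differ (expected vs. actual)."""
    errors = []
    for p in unique_payloads(payloads):
        expected = baseline_fn(p)   # production software components
        actual = candidate_fn(p)    # new software under test
        if any(expected.get(a) != actual.get(a) for a in attrs):
            errors.append((p, expected, actual))
    return errors
```

An empty `errors` list means the new software reproduced the expected results for every unique production scenario.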
20200210324 | SYSTEM PROGRAM CHANGE DETECTION VIA PARALLEL MONITORING TECHNIQUES - Methods, apparatus, and processor-readable storage media for system program change detection via parallel monitoring techniques are provided herein. An example computer-implemented method includes determining multiple user interface elements to monitor at each of one or more action points during execution of at least one system program within an automated testing framework, wherein the at least one system program is designed for operation across multiple at least partially interconnected system devices. The method also includes monitoring, in parallel, at a given one of the one or more action points, for changes to the multiple user interface elements corresponding to the given action point, and performing, based at least in part on the monitoring and on processing of one or more data structures, at least one action within the automated testing framework in response to detection of a change to any one of the multiple user interface elements. | 2020-07-02 |
20200210325 | STREAMLINED CREATION OF INTEGRATION TESTS - Systems and methods for testing software programs during development are described that are provided in part by a software testing framework that can create unit tests for testing individual modules of code, and create corresponding integration tests for testing those code modules during later integration testing, without duplication of effort. The framework receives function calls, each corresponding to a unit test function. The framework generates unit test code based on the function calls, which is executed on a development device. Upon successful execution of the unit test code, the framework receives an indication to test the function calls in a test environment. The framework identifies dependencies of the function calls. The framework then generates integration test code corresponding to the function calls. The integration test code includes dependency resolution code for the evaluated dependencies. The generated integration test code is then deployed in a test environment. | 2020-07-02 |
20200210326 | RESOURCES USAGE FOR FUZZ TESTING APPLICATIONS - Improved utilization of spare resources for fuzz testing is provided. A production environment that includes a plurality of running applications having a plurality of user input fields is monitored over a period of time for consumer use. Actual usage data for the plurality of user input fields are determined during the period of time. Each user input field in the plurality of user input fields is ranked for fuzz testing based on, at least in part, the actual usage data corresponding to each respective user input field during the period of time. The fuzz testing is selectively performed on a portion of the plurality of user input fields based on user input field rankings. | 2020-07-02 |
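The ranking step described here (order user input fields by observed production usage, then fuzz only the top-ranked portion) reduces to a short helper. A minimal sketch, with hypothetical names and the selection fraction as an assumed parameter:

```python
def rank_fields_for_fuzzing(usage_counts, top_fraction=0.5):
    """Rank user-input fields by actual usage over the monitoring period
    and select the most-used fraction as fuzz-testing targets.

    usage_counts: dict mapping field name -> number of observed uses.
    """
    ranked = sorted(usage_counts, key=usage_counts.get, reverse=True)
    k = max(1, int(len(ranked) * top_fraction))
    return ranked[:k]
```

Spare fuzzing resources are then spent on the fields consumers actually exercise most, rather than spread uniformly across all inputs.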
20200210327 | M2M APPLICATION TEST DEVICE AND METHOD - An M2M application test apparatus and method are disclosed. According to one embodiment of the present invention, the M2M application test apparatus for testing an M2M application on the basis of the oneM2M standard can comprise: an application storage for storing at least one application to be tested; and at least one server, which configures a test triggering message on the basis of the application to be tested and test-related information so as to transmit the test triggering message to the application to be tested, and receives a test result from the application to be tested and provides the same. | 2020-07-02 |
20200210328 | RULES TESTING FRAMEWORK - A method may include receiving a request for an output decision corresponding to a set of input parameters, the request including a test indication. The method also includes determining that the output decision is dependent on an output of a first module that is configured by default to communicate with a second module, the output of the first module being dependent on the output of the second module. Further, the method includes based on the test indication, causing the first module to transmit an information request to a simulation module, the simulation module identifying a predetermined output value corresponding to the first module in response to the information request. The method also includes generating the output decision according to a decisioning rule that is triggered based on the predetermined output value and transmitting the output decision to a testing module. | 2020-07-02 |
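The pattern in this abstract (a first module that normally calls a live second module is redirected, under a test indication, to a simulation module returning predetermined values, and a decisioning rule fires on that value) is essentially dependency substitution. A hedged sketch with invented names and an assumed score-threshold rule:

```python
class SecondModule:
    """Stands in for the live downstream dependency."""
    def query(self, params):
        raise RuntimeError("live dependency not available in this sketch")

class SimulationModule:
    """Returns a predetermined value instead of calling the second module."""
    def __init__(self, canned_value):
        self.canned_value = canned_value
    def query(self, params):
        return self.canned_value

class FirstModule:
    """By default talks to SecondModule; under test, to SimulationModule."""
    def __init__(self, downstream):
        self.downstream = downstream
    def output(self, params):
        return self.downstream.query(params)

def decide(params, test=False, canned_value=None):
    downstream = SimulationModule(canned_value) if test else SecondModule()
    value = FirstModule(downstream).output(params)
    # Decisioning rule triggered on the (possibly simulated) output value;
    # the 700 threshold is an arbitrary illustrative choice.
    return "approve" if value >= 700 else "decline"
```

The test indication thus lets the rule be exercised end to end without the real second module ever being contacted.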
20200210329 | MEMORY MANAGEMENT METHOD, ELECTRONIC DEVICE AND STORAGE MEDIUM - The present disclosure provides a memory management method, and belongs to the technical field of networks. The method includes: allocating a first memory address to video frame data based on a memory multiplexing queue, wherein the memory multiplexing queue records a memory address of video frame data that has been rendered; storing the video frame data in a memory space indicated by the first memory address; and adding the first memory address to the memory multiplexing queue after performing rendering based on the video frame data. | 2020-07-02 |
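The memory multiplexing queue described here is a buffer-reuse pool: addresses of already-rendered frames are queued, and new frames claim a queued buffer before falling back to a fresh allocation. A minimal sketch (names are hypothetical, and buffers stand in for raw memory addresses):

```python
from collections import deque

class FrameBufferPool:
    """Reuses buffers of rendered frames instead of reallocating memory."""

    def __init__(self):
        self.reusable = deque()   # the "memory multiplexing queue"
        self.allocations = 0      # how many fresh allocations were needed

    def acquire(self, size):
        if self.reusable:
            # Reuse the memory of a frame that has already been rendered.
            return self.reusable.popleft()
        self.allocations += 1
        return bytearray(size)

    def release_after_render(self, buf):
        # Record the buffer's "address" as reusable once rendering is done.
        self.reusable.append(buf)
```

Steady-state playback then cycles a fixed set of buffers rather than allocating one per frame.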
20200210330 | GARBAGE COLLECTION CANDIDATE SELECTION USING BLOCK OVERWRITE RATE - A processing device in a memory system determines whether a first data block of a plurality of data blocks on the memory component satisfies a first threshold criterion pertaining to a first number of the plurality of data blocks having a lower amount of valid data than a remainder of the plurality of data blocks. Responsive to the first data block satisfying the first threshold criterion, the processing device determines whether the first data block satisfies a second threshold criterion pertaining to a second number of the plurality of data blocks having been written to more recently than the remainder of the plurality of data blocks. Responsive to the first data block satisfying the second threshold criterion, the processing device determines whether a rate of change of an amount of valid data on the first data block satisfies a third threshold criterion. Responsive to the rate of change satisfying the third threshold criterion, the processing device identifies the first data block as a candidate for garbage collection on the memory component. | 2020-07-02 |
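The three-stage filter in this abstract can be sketched directly: a block becomes a garbage-collection candidate only if it is among the blocks with the least valid data, among the most recently written blocks, and its valid-data rate of change clears a threshold. An illustrative reduction with invented field names, not the claimed controller logic:

```python
def gc_candidates(blocks, n_low_valid, n_recent, overwrite_rate_min):
    """Select garbage-collection candidates from a list of block dicts.

    Each block has: 'valid' (amount of valid data), 'last_write'
    (recency stamp), and 'valid_delta' (rate of change of valid data).
    """
    # First criterion: among the n_low_valid blocks with the least valid data.
    by_valid = sorted(blocks, key=lambda b: b["valid"])[:n_low_valid]
    # Second criterion: among the n_recent most recently written blocks.
    by_recent = sorted(blocks, key=lambda b: b["last_write"],
                       reverse=True)[:n_recent]
    recent_ids = {id(b) for b in by_recent}
    # Third criterion: rate of change of valid data meets the threshold.
    return [b for b in by_valid
            if id(b) in recent_ids and b["valid_delta"] >= overwrite_rate_min]
```

Recently written blocks that are rapidly losing valid data are exactly the hot, heavily overwritten ones where garbage collection pays off soonest.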
20200210331 | USING A COMMON POOL OF BLOCKS FOR USER DATA AND A SYSTEM DATA STRUCTURE - A request to add content to a system data structure can be received. A first set of blocks of a common pool of blocks are allocated to the system data structure and a second set of blocks of the common pool of blocks are allocated to user data. A determination can be made as to whether a garbage collection operation associated with the first set of blocks of the common pool allocated to the system data structure satisfies a garbage collection performance condition. Responsive to determining that the garbage collection operation satisfies the garbage collection performance condition, a block from the common pool can be allocated to the first set of blocks allocated to the system data structure. | 2020-07-02 |
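The gating step described above (allocate a common-pool block to the system data structure only when its garbage collection satisfies a performance condition) can be sketched as follows. The abstract does not define the condition, so this sketch assumes a reclaim-ratio check; all names and the threshold are hypothetical.

```python
class CommonBlockPool:
    """One pool of blocks shared between user data and a system structure."""

    def __init__(self, total_blocks, gc_efficiency_min=0.5):
        self.free = total_blocks
        self.system_blocks = 0          # first set: system data structure
        self.user_blocks = 0            # second set: user data
        self.gc_efficiency_min = gc_efficiency_min

    def allocate_for_system(self, gc_reclaimed, gc_scanned):
        """Grant the system structure a block only if its GC keeps up.

        Assumed performance condition: fraction of scanned blocks that
        GC managed to reclaim meets a minimum.
        """
        efficiency = gc_reclaimed / gc_scanned if gc_scanned else 1.0
        if self.free > 0 and efficiency >= self.gc_efficiency_min:
            self.free -= 1
            self.system_blocks += 1
            return True
        return False
```

If garbage collection on the system set falls behind, further growth of that set is refused, leaving the common pool available for user data.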
20200210332 | DYNAMIC CONTROL OF MEMORY BANDWIDTH ALLOCATION FOR A PROCESSOR - Examples include a computing system for receiving memory class of service parameters; setting performance monitoring configuration parameters, based at least in part on the memory class of service parameters, for use by a performance monitor of a memory controller to generate performance monitoring statistics by monitoring performance of one or more workloads by a plurality of processor cores based at least in part on the performance monitoring configuration parameters; receiving the performance monitoring statistics from the performance monitor; and generating, based at least in part on the performance monitoring statistics, a plurality of memory bandwidth settings to be applied by a memory bandwidth allocator to the plurality of processor cores to dynamically adjust priorities of memory bandwidth allocated for the one or more workloads to be processed by the plurality of processor cores. | 2020-07-02 |
20200210333 | JUST-IN-TIME DATA PROVISION BASED ON PREDICTED CACHE POLICIES - Systems, methods, and computer readable mediums are provided for predicting a cache policy based on usage patterns. Usage pattern data can be received and used with a predictive model to determine a cache policy associated with a datastore. The cache policy can identify the configuration of predicted output data to be provisioned in the datastore and subsequently provided to a client in a just-in-time manner. The predictive model can be trained to output the cache policy based on usage pattern data received from a usage point, a provider point, or a datastore configuration. | 2020-07-02 |
20200210334 | Cache Hit Ratio Simulation Using A Partial Data Set - A method of cache hit ratio simulation using a partial data set includes determining a set of sampled addresses, the set of sampled addresses being a subset of all addresses of a storage system of a storage environment. The method further includes using, by a simulation engine, a cache management algorithm to determine a cache hit ratio of the sampled addresses, the cache management algorithm being also used by a cache manager to place a portion of the addresses of the storage system into cache during a runtime operation. The method further includes determining a quantity of memory access operations to frequently accessed addresses in the set of sampled addresses, and correcting, by the simulation engine, the cache hit ratio of the sampled addresses based on the quantity of memory access operations to the frequently accessed addresses in the set of sampled addresses. The simulation also handles sequential operations accurately. | 2020-07-02 |
20200210335 | Incomplete Write Group Journal - Example storage systems, storage devices, and methods provide a write group journal for identifying incomplete writes. Related write request indicators are stored in a non-volatile journal in a solid state drive to identify a related write group and indicate whether the related write group has been stored in storage locations corresponding to physical page addresses. An event notification is sent to a host system when the related write request indicator indicates that the group was incomplete at the time of a data loss event. | 2020-07-02 |
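The journal mechanism described here can be modeled in a few lines: each related write group records its pending page addresses in a non-volatile journal, pages are checked off as they land, and any group still open at recovery time was incomplete and triggers a host notification. A toy sketch with hypothetical names:

```python
class WriteGroupJournal:
    """Tracks related write groups so incomplete ones are found on recovery."""

    def __init__(self):
        # group_id -> set of physical page addresses not yet written.
        # In the real device this would live in non-volatile storage.
        self.journal = {}

    def begin_group(self, group_id, pages):
        self.journal[group_id] = set(pages)

    def page_written(self, group_id, page):
        self.journal[group_id].discard(page)
        if not self.journal[group_id]:
            del self.journal[group_id]   # all pages stored: group complete

    def incomplete_groups(self):
        # Called after a data loss event; nonempty entries identify the
        # related write groups that should be reported to the host.
        return sorted(self.journal)
```

Completed groups leave no journal entry, so recovery cost scales with the number of writes in flight at the moment of the loss, not with total writes.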
20200210336 | METHOD AND DEVICE FOR SITUATION-DEPENDENT STORAGE OF DATA OF A SYSTEM - This disclosure relates to a method for situation-dependent storage of data of a system, in which data of the system is detected, is amalgamated in at least one data block and is stored in a volatile memory, and in which, in response to the occurrence of at least one predefined trigger event in the at least one data block, amalgamated data are transferred from the volatile memory into a read-only memory, and in which a time window, in which the data for the at least one data block is captured, is selected automatically and dynamically according to the at least one trigger event. | 2020-07-02 |
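The trigger-driven transfer above (data accumulates in volatile memory; on a predefined trigger event, the blocks inside a time window chosen for that trigger are copied to persistent storage) can be sketched like this. Window lengths per trigger type are assumed values; names are illustrative only.

```python
from collections import deque

class EventRecorder:
    """Volatile ring of recent data blocks, persisted on trigger events."""

    def __init__(self, windows):
        self.volatile = deque()     # (timestamp, block) pairs
        self.persistent = []        # stands in for the read-only memory
        self.windows = windows      # trigger event -> seconds of history

    def record(self, block, ts):
        self.volatile.append((ts, block))

    def trigger(self, event, now):
        # The time window is selected automatically per trigger event.
        window = self.windows[event]
        self.persistent.extend(
            block for ts, block in self.volatile if now - ts <= window)
```

A crash trigger with a short window persists only the immediately preceding blocks, while a different event type could be configured to capture a longer history.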
20200210337 | SYSTEM AND METHOD FOR EARLY DRAM PAGE-ACTIVATION - A system and a method provide a memory-access technique that effectively parallelizes DRAM operations and coherency operations to reduce memory-access latency. The system may include a memory controller, an interconnect and a processor. The interconnect may be coupled to the memory controller. The processor may be coupled to the memory controller through a first path and a second path in which the first path is through the interconnect and the second path bypasses the interconnect. The processor may be configured to send substantially concurrently a memory access request to the memory controller via the first path and send a page activation request or a hint request to the memory controller via the second path so that the DRAM access operations appear to be masked, or hidden by the coherency operations. | 2020-07-02 |
20200210338 | EXTEND GPU/CPU COHERENCY TO MULTI-GPU CORES - In an example, an apparatus comprises a plurality of processing unit cores, a plurality of cache memory modules associated with the plurality of processing unit cores, and a machine learning model communicatively coupled to the plurality of processing unit cores, wherein the plurality of cache memory modules share cache coherency data with the machine learning model. Other embodiments are also disclosed and claimed. | 2020-07-02 |
20200210339 | SYSTEM, METHOD, AND APPARATUS FOR ENHANCED POINTER IDENTIFICATION AND PREFETCHING - System and method for prefetching pointer-referenced data. A method embodiment includes: tracking a plurality of load instructions which includes a first load instruction to access a first data that identifies a first memory location; detecting a second load instruction which accesses a second memory location for a second data, the second memory location matching the first memory location identified by the first data; responsive to the detecting, updating a list of pointer load instructions to include information identifying the first load instruction as a pointer load instruction; prefetching a third data for a third load instruction prior to executing the third load instruction; identifying the third load instruction as a pointer load instruction based on information from the list of pointer load instructions and responsively prefetching a fourth data from a fourth memory location, wherein the fourth memory location is identified by the third data. | 2020-07-02 |
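The detection-then-prefetch loop in this abstract can be sketched as follows: when one load's data value later shows up as another load's address, the first load is marked as a pointer load; thereafter, whenever that load executes, the value it returns is treated as an address and dereferenced early. This is a simplified software model of the idea, not the claimed hardware; memory is a plain dict.

```python
class PointerPrefetcher:
    """Learns pointer loads and prefetches the data they point to."""

    def __init__(self):
        self.recent = {}          # loaded value -> pc of the producing load
        self.pointer_pcs = set()  # list of identified pointer load PCs
        self.prefetched = []      # data fetched ahead of dependent loads

    def on_load(self, pc, addr, value, memory):
        # Detection: this load's address equals a value a tracked earlier
        # load produced, so that earlier load was a pointer load.
        if addr in self.recent:
            self.pointer_pcs.add(self.recent[addr])
        self.recent[value] = pc
        # Prefetch: a known pointer load's data is itself an address;
        # fetch its target before the dependent load executes.
        if pc in self.pointer_pcs:
            self.prefetched.append(memory.get(value))
```

On a second traversal of the same pointer chain, the dependent load's data is already in flight when the pointer load completes.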
20200210340 | Cache Management Method, Cache and Storage Medium - There are provided in the present disclosure a cache management method for a computing device, a cache and a storage medium, the method including: storing, according to a first request sent by a processing unit of the computing device, data corresponding to the first request in a first cache line of a cache set, and setting age of the first cache line to a first initial age value according to a priority of the first request. | 2020-07-02 |
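The age-on-insert idea above (a new cache line starts at an initial age chosen from the request's priority, and eviction removes the oldest line) can be sketched for a single cache set. The priority-to-age mapping and names are assumptions for illustration:

```python
class AgedCacheSet:
    """Single cache set where a line's starting age depends on priority."""

    def __init__(self, ways, initial_age):
        self.ways = ways
        self.initial_age = initial_age   # request priority -> starting age
        self.lines = {}                  # tag -> current age

    def insert(self, tag, priority):
        if len(self.lines) >= self.ways and tag not in self.lines:
            # Evict the line with the greatest age.
            victim = max(self.lines, key=self.lines.get)
            del self.lines[victim]
        # Age every resident line, then install the new line at the
        # initial age given by the request's priority.
        for t in self.lines:
            self.lines[t] += 1
        self.lines[tag] = self.initial_age[priority]
```

A low-priority request thus inserts its line already "old", making it the preferred eviction victim over lines cached for high-priority requests.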