24th week of 2020 patent application highlights part 54 |
Patent application number | Title | Published |
20200183770 | WINDOW TYPE WATCHDOG TIMER AND SEMICONDUCTOR DEVICE - A window type watchdog timer includes a frequency dividing circuit for generating a frequency-divided clock signal by dividing a frequency of a reference clock signal; a monitoring circuit for monitoring, based on the frequency-divided clock signal, occurrence of a first error in which clear control from a target device is interrupted for a first time or more, and occurrence of a second error in which an interval between two consecutive clear controls from the target device is shorter than a second time shorter than the first time, and for outputting an error signal when the first error or the second error is detected; and a setting circuit for variably setting the first time and the second time by variably setting a frequency division ratio in the frequency dividing circuit and variably setting a detection condition of the first error and the second error. | 2020-06-11 |
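The window-watchdog behavior in the abstract above can be sketched as two checks: a "clear" arriving too soon after the previous clear (the second error) and a clear failing to arrive before the upper bound (the first error). This is a minimal illustrative model; class and method names are assumptions, not from the patent.

```python
class WindowWatchdog:
    """Illustrative window-type watchdog: clears must land inside [t_min, t_max)."""

    def __init__(self, t_min, t_max):
        assert t_min < t_max
        self.t_min = t_min          # "second time" (lower window bound)
        self.t_max = t_max          # "first time" (upper window bound)
        self.last_clear = 0.0

    def on_clear(self, now):
        """Return None if the clear falls inside the window, else an error name."""
        interval = now - self.last_clear
        self.last_clear = now
        if interval < self.t_min:
            return "second_error"   # two clears arrived too close together
        return None

    def on_tick(self, now):
        """Polled between clears; detects the 'clear interrupted too long' case."""
        if now - self.last_clear >= self.t_max:
            return "first_error"    # clear control interrupted for t_max or more
        return None
```

In a real device the window bounds would be derived from the frequency-division ratio rather than passed in directly, mirroring the variable setting circuit the abstract describes.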
20200183771 | BIT ERROR RATE BASED DYNAMIC PROGRAM STEP CHARACTERISTIC ADJUSTMENT - A BER corresponding to a group of memory cells programmed via a programing signal having one or more program step characteristics is determined. The determined BER and a target BER is compared. In response to the determined BER being different than the target BER, one or more program step characteristics are adjusted to adjust the determined BER to the target BER. | 2020-06-11 |
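The BER feedback loop above can be sketched as a simple controller that nudges one program step characteristic (step voltage here) whenever the measured BER differs from the target. The direction convention (coarser steps raise BER, finer steps lower it) and all names are illustrative assumptions.

```python
def adjust_step_voltage(measured_ber, target_ber, step_voltage,
                        delta=0.05, v_min=0.1, v_max=1.0):
    """Return an updated program step voltage based on measured vs. target BER."""
    if measured_ber > target_ber:
        step_voltage -= delta    # finer steps to pull the error rate down
    elif measured_ber < target_ber:
        step_voltage += delta    # coarser steps tolerable; program faster
    # clamp to the device's permissible range
    return min(max(step_voltage, v_min), v_max)
```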
20200183772 | SYSTEMS AND METHODS OF ANALYZING USER RESPONSES TO INQUIRIES TO DIAGNOSE AND MITIGATE REPORTED PERFORMANCE ISSUES ON A CLIENT DEVICE - A system analyzes descriptions of performance issues that are submitted responsive to inquiries to expediently diagnose and mitigate performance issues. In implementation, inquiries associated with features of an application are exposed at a client device. Then, user responses to the inquiries are provided to relief evaluators that analyze the user responses to diagnose reported performance issues. The relief evaluators include diagnostic packages that diagnose predetermined performance issues by analyzing individual user responses to particular inquiries. The relief evaluators also include relief packages that mitigate the predetermined performance issues. A relief package may mitigate the predetermined performance issue by displaying a message that informs the user how to adjust the system state to prevent the reported performance issue “symptom” from reoccurring. Additionally, or alternatively, a relief package may mitigate the performance issue by automatically adjusting the system state to prevent the reported performance issue “symptom” from reoccurring. | 2020-06-11 |
20200183773 | ENTITY RESOLUTION FRAMEWORK FOR DATA MATCHING - Systems and methods are described for matching a corrupted database record with a record of a validated database. The system receives a corrupted record from a first database. The corrupted record is vectorized to create an input data vector. A denoised data vector is generated by applying a denoising autoencoder to the input data vector, where the denoising autoencoder is specific to the first database. The system compares the denoised data vector with each of a plurality of validated data vectors generated based on records of the validated database to determine that a first denoised data vector matches a matching vector. In response, the system trains the denoising autoencoder using a data pair that includes the input data vector and the matching vector. The system also outputs the validated record that was used to generate the first matching vector. | 2020-06-11 |
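The matching step above can be sketched with the denoising autoencoder abstracted as a callable: denoise the input vector, then find the closest validated vector above a similarity threshold. The cosine-similarity nearest-neighbor search, the threshold, and all names are illustrative assumptions.

```python
def match_record(input_vec, denoiser, validated_vecs, threshold=0.9):
    """Denoise input_vec, then return (index, denoised) for the closest
    validated vector by cosine similarity, or (None, denoised) if no
    candidate clears the threshold."""
    denoised = denoiser(input_vec)

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    best_i, best_s = None, threshold
    for i, v in enumerate(validated_vecs):
        s = cosine(denoised, v)
        if s >= best_s:
            best_i, best_s = i, s
    return best_i, denoised
```

In the scheme the abstract describes, a successful match would additionally feed the (input vector, matching vector) pair back into autoencoder training.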
20200183774 | PROFILING AND DIAGNOSTICS FOR INTERNET OF THINGS - A computing device and method for profiling and diagnostics in an Internet of Things (IoT) system, including matching an observed solution characteristic of the IoT system to an anomaly in an anomaly database. | 2020-06-11 |
20200183775 | CHRONOLOGICALLY ORDERED LOG-STRUCTURED KEY-VALUE STORE FROM FAILURES DURING GARBAGE COLLECTION - One embodiment provides a method for recovery from failures during garbage collection processing in a system including recording, by a processor, a specific offset within a garbage collection target slot of a log structure associated with a garbage collection transaction. Each transaction record of the garbage collection transaction includes a garbage collection target slot, a victim slot and a beginning offset in the garbage collection target slot. | 2020-06-11 |
20200183776 | MEMORY SYSTEM AND OPERATING METHOD THEREOF - Disclosed is a memory system and a method of operating the memory system. The memory system includes a semiconductor memory device configured to read data stored in a selected logical page among a plurality of logical pages by applying different read voltages to a selected word line corresponding to the plurality of logical pages. The memory system also includes a controller configured to perform an operation for detecting and correcting an error of the data whenever each of the read voltages is applied to the selected word line. | 2020-06-11 |
20200183777 | MEMORY SYSTEM AND OPERATING METHOD OF MEMORY SYSTEM - A memory system includes a memory device including memory cells, and a controller that performs a write operation, a read operation, and a check operation on the memory device. During the check operation, the controller controls the memory device to read check data from target memory cells of the memory cells by using a check level, compares the check data with original data stored in the target memory cells, and determines a reliability of the target memory cells or the check data based on a result of the comparison. | 2020-06-11 |
20200183778 | MEMORY CONTROLLER, MEMORY SYSTEM AND APPLICATION PROCESSOR COMPRISING THE MEMORY CONTROLLER - According to an aspect of inventive concepts, there is provided a memory controller configured to control a memory device including a plurality of memory pages, the memory controller including an error correction code (ECC) region manager configured to manage the plurality of memory pages by dividing the plurality of memory pages into ECC enable regions and ECC disable regions, and an ECC engine configured to perform an ECC operation on data included in the ECC enable regions. | 2020-06-11 |
20200183779 | NAND DEVICE MIXED PARITY MANAGEMENT - Devices and techniques for NAND device mixed parity management are described herein. A first portion of data that corresponds to a first data segment and a second data segment—respectively defined with respect to a structure of a NAND device—are received. A parity value using the first portion of data and the second portion of data is computed and then stored for error correction operations. | 2020-06-11 |
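The parity computation above can be sketched as an XOR across the data segments; storing the XOR lets any single lost segment be rebuilt from the parity plus the survivors. This is a generic RAID-style illustration of the idea, not the patent's exact scheme.

```python
def compute_parity(segments):
    """XOR equal-length byte strings into one parity value."""
    parity = bytearray(len(segments[0]))
    for seg in segments:
        for i, b in enumerate(seg):
            parity[i] ^= b
    return bytes(parity)

def recover_segment(parity, surviving):
    """Rebuild the single missing segment from parity plus all survivors
    (XOR is its own inverse, so this is just another parity pass)."""
    return compute_parity([parity] + list(surviving))
```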
20200183780 | SOLID STATE DEVICE IMPLEMENTING DYNAMIC POLAR ENCODING - A method for operating a solid state storage device comprising memory cells exhibiting respective threshold voltage distributions comprises: providing sets of frozen bits each one associated with a respective RBER estimate being estimated according to a respective shape of the threshold voltage distributions; determining a current value of operative parameter(s) affecting the shape of the threshold voltage distributions; based on the current value of the operative parameter(s), determining a current shape of the threshold voltage distributions; determining a current RBER estimate associated with the current shape of the threshold voltage distributions; selecting a current set of frozen bits associated with the current RBER estimate; encoding the information bits and the current set of frozen bits with a polar code; storing the polar encoded bits in selected memory cells; reading the stored polar encoded bits and decoding them according to said current set of frozen bits. | 2020-06-11 |
20200183781 | DIRECT-INPUT REDUNDANCY SCHEME WITH DEDICATED ERROR CORRECTION CODE CIRCUIT - Methods, systems, and devices for performing an error correction operation using a direct-input column redundancy scheme are described. A device that has read data from data planes may replace data from one of the planes with redundancy data from a data plane storing redundancy data. The device may then provide the redundancy data to an error correction circuit coupled with the data plane that stored the redundancy data. The error correction circuit may operate on the redundancy data and transfer the result of the operation to select components in a connected error correction circuit. The components to which the output is transferred may be selected based on data plane replaced by the redundancy data. The device may generate syndrome bits for the read data by performing additional operations on the outputs of the error correction circuit. | 2020-06-11 |
20200183782 | DIRECT-INPUT REDUNDANCY SCHEME WITH ADAPTIVE SYNDROME DECODER - Methods, systems, and devices for operating memory cell(s) using a direct-input column redundancy scheme are described. A device that has read data from data planes may replace data from one of the planes with redundancy data from a data plane storing redundancy data. The device may then provide the redundancy data to an error correction circuit coupled with the data plane that stored the redundancy data. An output of the error correction circuit may be used to generate syndrome bits, which may be decoded by a syndrome decoder. The syndrome decoder may indicate whether a bit of the data should be corrected by selectively reacting to inputs based on the type of data to be corrected. For example, the syndrome decoder may react to a first set of inputs if the data bit to be corrected is a regular data bit, and react to a second set of inputs if the data bit to be corrected is a redundant data bit. | 2020-06-11 |
20200183783 | MANAGEMENT OF CORRUPTIVE READ IN MEMORY SYSTEMS - Described herein are embodiments related to one-direction error recovery flow (ERF) operations on memory components of memory systems. A processing device determines that data from a read operation is not successfully decoded because of a partial write of the data. The partial write results from a number of memory cells written as a first state and read as a second state. The processing device performs a one-direction ERF on the memory cells by monotonically adjusting a read voltage level for one or more re-read operations from a first discrete read voltage level towards a second read voltage level in a first direction until the data from the one or more re-read operations is successfully decoded. The first direction corresponds to an opposite direction of a state shift of the partial write. The processing device can also can determine a directional EBC and perform a refresh write if necessary. | 2020-06-11 |
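The one-direction ERF above can be sketched as a monotonic sweep of the read voltage, re-reading until decode succeeds or the second level is reached. The read and decode callbacks, the step size, and the return convention are illustrative assumptions.

```python
def one_direction_erf(read_at, decodes, v_start, v_end, step):
    """Sweep the read voltage monotonically from v_start toward v_end
    (the direction opposite to the partial write's state shift) and return
    (voltage, data) on the first successful decode, else (None, None)."""
    direction = 1 if v_end >= v_start else -1
    v = v_start
    while (v - v_end) * direction <= 0:   # not yet past the final level
        data = read_at(v)
        if decodes(data):
            return v, data
        v += direction * step
    return None, None
```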
20200183784 | NONVOLATILE MEMORY DEVICE AND MEMORY SYSTEM INCLUDING NONVOLATILE MEMORY DEVICE - A nonvolatile memory device performs a compare and write operation. The compare and write operation includes reading read data from memory cells, inverting first write data to generate second write data, adding a first flag bit to the first write data to generate third write data and adding a second flag bit to the second write data to generate fourth write data, performing a reinforcement operation on each of the third write data and the fourth write data to generate fifth write data and sixth write data, and comparing the read data with each of the fifth write data and the sixth write data and writing one of the fifth and sixth write data in the memory cells based on a result of the comparison. | 2020-06-11 |
20200183785 | MEMORY SYSTEM INCLUDING FIELD PROGRAMMABLE GATE ARRAY (FPGA) AND METHOD OF OPERATING SAME - A memory system includes: a memory device, a memory controller including a first interface, a second interface, and a first data processor having a first error correction code (ECC) engine, and a field programmable gate array (FPGA) including a third interface connected to the first interface, a fourth interface connected to the second interface, a fifth interface connected to an external host, and a second data processor having a second ECC engine. The memory controller may configure a normal write operation path or a highly reliable write operation path. | 2020-06-11 |
20200183786 | METHOD, MEMORY CONTROLLER, AND MEMORY SYSTEM FOR READING DATA STORED IN FLASH MEMORY - An exemplary method for reading data stored in a flash memory includes: selecting an initial gate voltage combination from a plurality of predetermined gate voltage combination options; controlling a plurality of memory units in the flash memory according to the initial gate voltage combination, and reading a plurality of bit sequences; performing a codeword error correction upon the plurality of bit sequences, and determining if the codeword error correction successful; if the codeword error correction is not successful, determining an electric charge distribution parameter; determining a target gate voltage combination corresponding to the electric charge distribution parameter by using a look-up table; and controlling the plurality of memory units to read a plurality of updated bit sequences according to the target gate voltage combination. | 2020-06-11 |
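The fallback path above can be sketched as: read with an initial gate-voltage combination, and on ECC failure estimate a charge-distribution parameter and pick a corrective combination from a lookup table. The callbacks, table contents, and names are illustrative assumptions.

```python
def read_with_fallback(read, ecc_ok, estimate_charge_param, lut, initial_combo):
    """Return (combo_used, bits) after at most one LUT-guided retry."""
    bits = read(initial_combo)
    if ecc_ok(bits):
        return initial_combo, bits
    # codeword correction failed: characterize the charge distribution
    param = estimate_charge_param(bits)
    target = lut[param]              # look up the corrective combination
    return target, read(target)
```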
20200183787 | Accessing Error Statistics From DRAM Memories Having Integrated Error Correction - In described examples, a memory module includes a memory array with a primary access port coupled to the memory array. Error correction logic is coupled to the memory array. A statistics register is coupled to the error correction logic. A secondary access port is coupled to the statistics register to allow access to the statistics register by an external device without using the primary access port. | 2020-06-11 |
20200183788 | DATA PROCESSING PIPELINE FAILURE RECOVERY - Techniques are disclosed for re-executing a data processing pipeline following a failure of at least one of its components. The techniques may include a syntax for defining a compute graph associated with the data processing pipeline and receiving such a compute graph in association with a specific data processing pipeline. The techniques may include executing the data processing pipeline, determining that a component of the data processing pipeline failed, and determining a portion of the data processing pipeline to execute/re-execute based at least in part on dependencies defined by the data processing pipeline in association with the failed component. Re-executing the one or more components may comprise retrieving an output saved in association with a component upon which the failed component depends. | 2020-06-11 |
20200183789 | INSTALLATION FILE PROCESSING METHOD AND DEVICE, AND SERVER - Embodiments of the present disclosure provide an installation file processing method and device, and a server. The method includes: receiving first installation information sent by a client and indicating a failed installation of a target software, the first installation information including a target software identifier, a first version number corresponding to the target software, installation environment information and installation parameter information; obtaining multiple pieces of second installation information each including the target software identifier and the installation environment information, and generating parameter statistics results corresponding to multiple second version numbers contained in the multiple pieces of second installation information; and selecting a target version number of which the parameter statistics result meets a first preset condition from the multiple second version numbers, and sending an installation file corresponding to the target version number to the client. | 2020-06-11 |
20200183790 | SINGLE EVENT EFFECT MITIGATION - A multi-logic device system, an electronic engine controller, and a method of operating the multi-logic device system. The multi-logic device system includes a primary logic device which is more resilient to single event effects, and one or more secondary logic devices, each secondary logic device being powered by a respective power supply unit and being more susceptible to single event effects. The primary logic device is configured to run, for each secondary logic device, a respective watchdog timer. Each watchdog timer is restarted upon receipt of a restart signal from the respective secondary logic device. The primary logic device is also configured, in response to a watchdog timer timing out, to identify and reset the secondary logic device corresponding to the timed out watchdog timer. | 2020-06-11 |
20200183791 | VOLATILE MEMORY DEVICE AND METHOD FOR EFFICIENT BULK DATA MOVEMENT, BACKUP OPERATION IN THE VOLATILE MEMORY DEVICE - A volatile memory device and a method for efficient bulk data movement and backup operation in the volatile memory device are provided. The volatile memory device includes: a plurality of subarrays configured to access data, wherein the subarrays are electrically coupled to each other; a row address control configured to control the rows of the plurality of subarrays; a column control configured to control the columns of the plurality of subarrays; a plurality of sense amplifiers, each adapted to one of the plurality of subarrays and periodically enabled during the data access operation; and a plurality of sub word drivers, adjacent to the plurality of subarrays, providing driving signals to the corresponding word lines in the plurality of subarrays. The volatile memory device performs a data movement operation in a predetermined block and determines odd data and even data in the predetermined block. The volatile memory device enables a first backup block and a second backup block in a dynamic memory array through the row address control and backs up the odd data and the even data simultaneously into the first backup block and the second backup block. | 2020-06-11 |
20200183792 | PLUGGABLE DATABASE ARCHIVE - Techniques herein make and use a pluggable database archive file (AF). In an embodiment, a source database server of a source container database (SCD) inserts contents into an AF from a source pluggable database (SPD). The contents include data files from the SPD, a listing of the data files, rollback scripts, and a list of patches applied to the SPD. A target database server (TDS) of a target container database (TCD) creates a target pluggable database (TPD) based on the AF. If a patch on the list of patches does not exist in the TCD, the TDS executes the rollback scripts to adjust the TPD. In an embodiment, the TDS receives a request to access a block of a particular data file. The TDS detects, based on the listing of the data files, a position of the block within the AF. The TDS retrieves the block based on the position. | 2020-06-11 |
20200183793 | VOLUME GROUP STRUCTURE RECOVERY IN A VIRTUALIZED SERVER RECOVERY ENVIRONMENT - A method and system for performing a volume group structure recovery. A first physical volume is accessed. A last valid volume group backup for a volume group whose volume group structure is to be recovered is retrieved. The volume group is a logical group of one or more physical volumes that include the first physical volume. The volume group backup includes respective volume group identifiers corresponding to the physical volumes of the volume group. An existing volume group identifier is stored in a temporary file with a generated random volume group identifier for identifying the volume group. A set of new volume group identifiers are generated during re-initialization of all listed physical volumes. The volume group identifiers in the last valid volume group backup is replaced with the generated new volume group identifiers. The volume group's volume group structure is restored using a backup structure stored in the temporary file. | 2020-06-11 |
20200183794 | EVALUATION AND REPORTING OF RECOVERY READINESS IN A DATA STORAGE MANAGEMENT SYSTEM - An illustrative report server interoperates with one or more enhanced storage managers to evaluate whether backup operations and restore operations meet their recovery point objectives (RPO) and recovery time objectives (RTO), respectively. RTO is evaluated using a tiered approach based on past performance of restore and/or backup operations. The illustrative storage manager executes pre-defined queries that extract relevant information from an associated database that houses information about storage operations. The report server recommends alternative kinds of backup operations for data that fails to meet its RTO using traditional backups. The report server is configured to analyze and report RPO and RTO readiness for several levels of data entities, including multiple systems, single system, groups of clients, single clients, and subclients. | 2020-06-11 |
20200183795 | METHOD AND APPARATUS FOR PROCESSING INFORMATION - Embodiments of the present disclosure relate to a method and apparatus for processing information. The method can include: acquiring virtual machine device status information and physical address information from a target memory in response to determining a crash of a production kernel, the target memory being a memory pre-allocated to a running virtual machine by the production kernel, and the virtual machine device status information of the virtual machine and the physical address information corresponding to a virtual address of the memory of the virtual machine being stored into the target memory by the production kernel; acquiring data as target data according to the physical address information; and storing a file into a shared storage area according to the target data and the virtual machine device status information. | 2020-06-11 |
20200183796 | RECOVERY STRATEGY FOR A STREAM PROCESSING SYSTEM - The technology disclosed relates to discovering multiple previously unknown and undetected technical problems in fault tolerance and data recovery mechanisms of modern stream processing systems. In addition, it relates to providing technical solutions to these previously unknown and undetected problems. In particular, the technology disclosed relates to discovering the problem of modification of batch size of a given batch during its replay after a processing failure. This problem results in over-count when the input during replay is not a superset of the input fed at the original play. Further, the technology disclosed discovers the problem of inaccurate counter updates in replay schemes of modern stream processing systems when one or more keys disappear between a batch's first play and its replay. This problem is exacerbated when data in batches is merged or mapped with data from an external data store. | 2020-06-11 |
20200183797 | ARRAY INTEGRATION FOR VIRTUAL MACHINE BACKUP - Methods and systems for improving the performance of a primary system that is running one or more virtual machines and capturing snapshots of the one or more virtual machines over time are described. The performance penalty on the primary system when a hypervisor running the one or more virtual machines is used to capture the snapshots of the one or more virtual machines may be reduced by leveraging storage array snapshots to reduce the amount of time that the hypervisor must freeze virtual disks of the one or more virtual machines. In this case, changed block tracking information for changed data blocks associated with the snapshots may be acquired from the hypervisor and the changed data blocks themselves may be pulled from the storage array snapshots without requiring the hypervisor to keep the virtual disks of the one or more virtual machines in a frozen state. | 2020-06-11 |
20200183798 | COMMUNICATION OF DIAGNOSTIC PARAMETERS OF A DATA MIRRORING CONFIGURATION FROM A STORAGE CONTROLLER TO A HOST - A storage controller is configured to communicate with a host over a first storage area network. Data controlled via the storage controller is mirrored to another storage controller over a second storage area network. The storage controller receives a request from the host to provide read diagnostic parameters of the second storage area network. In response to receiving the request, the storage controller secures the read diagnostic parameters of the second storage area network. The storage controller transmits the read diagnostic parameters of the second storage area network to the host. | 2020-06-11 |
20200183799 | GENERATION OF HOST REQUESTS TO A STORAGE CONTROLLER FOR READ DIAGNOSTIC PARAMETERS FOR A DATA MIRRORING CONFIGURATION - A host is configured to communicate with a storage controller over a first storage area network. A request is transmitted from the host to the storage controller to provide read diagnostic parameters of a second storage area network that is used to mirror data controlled by the storage controller to another storage controller. The host receives the read diagnostic parameters of the second storage area network from the storage controller. | 2020-06-11 |
20200183800 | DYNAMIC DATA RESTORATION FROM MULTIPLE RECOVERY SITES IMPLEMENTING SYNCHRONOUS REMOTE MIRRORING - A computer-implemented method, according to one embodiment, includes: detecting an outage at a production site, and transferring I/O functionality to a first recovery site. In response to resolving the outage, first and second out-of-sync bitmaps are received from the first and second recovery sites, respectively. The out-of-sync bitmaps are merged together. Performance data which corresponds to achievable throughput is received from each of the first and second recovery sites, and the performance data is used to divide the merged out-of-sync bitmap into two portions. A request is sent to the first recovery site for data which corresponds to the first portion of the merged out-of-sync bitmap. Similarly, a request is sent to the second recovery site for data which corresponds to the second portion of the merged out-of-sync bitmap. Finally, data is received which corresponds to the first and second portions of the merged out-of-sync bitmap respectively, in parallel. | 2020-06-11 |
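The recovery step above can be sketched as: OR the two out-of-sync bitmaps together, then partition the dirty regions between the two recovery sites in proportion to their reported achievable throughput. The simple proportional split and all names are illustrative assumptions.

```python
def merge_and_split(bitmap_a, bitmap_b, throughput_a, throughput_b):
    """Return (merged, indices_for_a, indices_for_b): the OR-merged bitmap
    and the dirty-region indices each recovery site should serve."""
    merged = [a | b for a, b in zip(bitmap_a, bitmap_b)]
    dirty = [i for i, bit in enumerate(merged) if bit]
    # site A's share of the dirty regions, proportional to its throughput
    share_a = round(len(dirty) * throughput_a / (throughput_a + throughput_b))
    return merged, dirty[:share_a], dirty[share_a:]
```

The production site could then request both portions in parallel, one from each recovery site, as the abstract describes.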
20200183801 | TRANSFERRING A WRITABLE DATA SET TO A CLOUD SERVICE - Transferring data from a storage device to cloud service includes initiating a snapshot of the data, accessing each block of the data corresponding to the snapshot to transfer each block to the cloud service, and terminating the snapshot after all of the blocks have been transferred to the cloud service. At least some blocks of the storage device that are modified after initiating the snapshot may be copied from the storage device to a storage pool prior to modification. Only a first modification of a particular one of the blocks of the storage device may cause the particular one of the blocks to be copied to the storage pool. Accessing each block of the data may include accessing blocks of the storage pool. Modifying a particular one of the blocks of the storage device may include modifying a corresponding block of a storage pool. | 2020-06-11 |
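The copy-on-first-write behavior above can be sketched as: before the first modification of a block after the snapshot is initiated, preserve the original block in a storage pool; snapshot reads then prefer the pool copy. Class and attribute names are illustrative assumptions.

```python
class SnapshotDevice:
    """Illustrative block device with a point-in-time snapshot backed by a pool."""

    def __init__(self, blocks):
        self.blocks = list(blocks)   # live device contents
        self.pool = {}               # block index -> pre-modification copy

    def write(self, i, value):
        if i not in self.pool:       # only the FIRST modification copies out
            self.pool[i] = self.blocks[i]
        self.blocks[i] = value

    def snapshot_read(self, i):
        """Read block i as it was when the snapshot was initiated."""
        return self.pool.get(i, self.blocks[i])
```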
20200183802 | ASSIGNING BACKUP RESOURCES BASED ON FAILOVER OF PARTNERED DATA STORAGE SERVERS IN A DATA STORAGE MANAGEMENT SYSTEM - An illustrative data storage management system is aware that certain data storage resources for storing/serving primary data operate in a partnered configuration. Illustrative components of the data storage management system analyze the failover status of the partnered primary data storage resources to determine which is currently serving/storing primary data and/or snapshots targeted for backup. When detecting that a first partnered primary data storage resource has failed over to a second primary data storage resource, the example storage manager changes the assignment of backup resources that are pre-administered for the targeted data. Accordingly, the example storage manager assigns backup resources, including at least one media agent, that are associated with the second primary data storage resource, and which are “closer” thereto from a geography and/or network topology perspective, even if the pre-administered backup resources are available for backup. | 2020-06-11 |
20200183803 | System For Completely Testing Communication Links Inside Processor According To Processor Information And Method Thereof - A system for completely testing communication links inside a processor according to processor information and a method thereof are provided. By the technical means of configuring each thread corresponding to each node for each processing core according to processor information and component information after obtaining the processor information of a processor on a motherboard and the component information of an external component; using each processing core to execute the each thread to access the external component through the node, which corresponds to the executed thread and is connected to the external component, to generate an access result; and determining whether the access result is correct or not, the system and the method can test the stability between the processor and other components of a computing device and achieve the technical effect of improving the effectiveness of the test. | 2020-06-11 |
20200183804 | FLEXIBLE MICROCONTROLLER SUPPORT FOR DEVICE TESTING AND MANUFACTURING - The disclosed technology is generally directed to microcontrollers. In one example of the technology, an operating system is run on at least one processor of a multi-core controller. At the operating system, a command that is associated with a manufacturer test mode is received. A permission associated with the command is requested. The permission is based, at least in part, on the status of a one-way e-fuse. Responsive to the permission associated with the command being granted, the command is caused to be processed. | 2020-06-11 |
20200183805 | LOG ANALYSIS METHOD, SYSTEM, AND PROGRAM - The present invention provides a log analysis method, a system, and a program that can accurately output information associated with a particular event without prior knowledge of a log content. A log analysis system | 2020-06-11 |
20200183806 | SOFTWARE PERFORMANCE TESTING - Systems and methods for performance testing software using computer vision. Systems can include a performance testing computer vision system and a computer vision-based performance testbed system. Methods can include generating a computer vision-based testing package and performance testing software in one or more testing environments on at least one virtualized testbed machine according to testing constraints using the computer vision-based testing package. | 2020-06-11 |
20200183807 | MONITORING USER ACTIVITY WITHIN A PHYSICAL AREA - Monitoring user activity within a physical area controlled by an entity includes determining a location of a mobile device with respect to the physical area, in response to determining the location, exchanging communications with remotely located components associated with the entity to remotely configure the mobile device with a digital map of the physical area, information about things within the physical area and the ability to access the things, storing records of interactions between the mobile device and a thing relative to the digital map, and transmitting the records as transactions of a blockchain to the remotely located components. The thing may have a QRC label and/or an RFID tag affixed thereto and remotely configuring the user device may include configuring the user device with an ability to read the QRC label and/or the RFID tag. | 2020-06-11 |
20200183808 | METHODS FOR TREATING OCULAR INFLAMMATORY DISEASES - A method of treating blepharitis includes administering to the affected eye of a subject an effective amount of an active ingredient in an ophthalmically acceptable vehicle for a sufficient period of time to treat blepharitis. The active ingredient consists essentially of a glucocorticoid in an ophthalmically acceptable vehicle that includes an aqueous polymer suspension that when mixed with tear fluid provides a sustained release of said active ingredient. The aqueous polymer suspension includes a carboxyl-containing polymer having less than about 5% by weight cross-linking agent and has a viscosity in a range from about 1,000 to about 30,000 centipoises. A kit includes: (a) a composition comprising about 0.1% by weight dexamethasone in this ophthalmically acceptable vehicle and (b) instructions for using the composition of (a) for the treatment of blepharitis. | 2020-06-11 |
20200183809 | USAGE AMOUNT MONITORING METHOD AND MONITORING UNIT OF ELECTRONIC CONTROL UNIT FOR VEHICLE - A usage amount monitoring method is provided. The method may include: recording a usage time that captures a maximum usage amount of the central processing unit (CPU) by recording a start time and an end time of a task and an interrupt service routine (ISR); storing data in a non-volatile memory by obtaining the maximum usage amount of the CPU, an engine revolutions per minute (RPM), a software operating mode, a fault code, a number of tasks started, and a task response time; and, when the maximum usage amount of the CPU is updated, transmitting relevant information over an external communication channel, after storing the previous record in chronological order, such that the relevant information may be confirmed on a personal computer (PC) in chronological order. | 2020-06-11 |
20200183810 | DYNAMICALLY DETERMINED ADAPTIVE TIMEOUT VALUE FOR DETECTING GRAPHICAL USER INTERFACE ELEMENT OF APPLICATION UNDERGOING FUNCTIONAL TESTING - An adaptive timeout value for a script operation associated with functional testing of an application is determined. The script operation specifies detecting display of a specific graphical user interface (GUI) element by the application. The adaptive timeout value is dynamically determined based on prior functional testing of the application. Responsive to encountering the script operation within a script while functionally testing the application under direction of the script, waiting occurs until display of the specific GUI element by the application has been detected, or until timing out has occurred in correspondence with the dynamically determined adaptive timeout value. | 2020-06-11 |
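The adaptive-timeout idea above can be sketched as a percentile over previously observed detection times. This is only one plausible realization: the percentile, floor, and cap parameters are illustrative assumptions, not the patent's actual method.

```python
def adaptive_timeout(prior_detect_times, percentile=95, floor=1.0, cap=60.0):
    """Derive a timeout (seconds) for a GUI-element detection step from how
    long detection took in prior functional-testing runs.

    percentile, floor, and cap are hypothetical tuning knobs, not from the
    patent; with no history the maximum wait is used as a safe default.
    """
    if not prior_detect_times:
        return cap  # no history yet: wait the maximum allowed time
    ordered = sorted(prior_detect_times)
    idx = min(len(ordered) - 1, int(len(ordered) * percentile / 100))
    return max(floor, min(cap, float(ordered[idx])))
```

A high percentile keeps the script from timing out on normal variance while still bounding the wait well below a fixed worst-case timeout.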
20200183811 | Automatically Performing and Evaluating Pilot Testing of Software - A method of and system for performing pilot testing of a software program in an organization is carried out by collecting pilot testing data generated from a pilot testing of a software program run on one or more hardware assets in the organization, determining whether a sufficient amount of pilot testing data has been collected, and, when so, calculating one or more pilot test metrics from the collected data. The calculated pilot test metrics may then be compared to similar metrics in a target population to evaluate the software program. | 2020-06-11 |
20200183812 | OPTIMIZING TEST COVERAGE BASED ON ACTUAL USE - Implementations of the present disclosure include methods, systems, and computer-readable storage mediums for test optimization based on actual use of configuration parameters. Actions include receiving a parameter set from a monitoring system, the parameter set including multiple configuration parameters corresponding to development artifacts detected by the monitoring system, retrieving statistical data from a central data analysis infrastructure, the statistical data being retrieved from application systems executing software created out of the development artifacts, processing the parameter set using the statistical data to generate parameter clusters, and providing the parameter clusters to an integrated development environment to generate a test scope proposal based on the parameter clusters. | 2020-06-11 |
20200183813 | AUTOMATED TEST SCRIPT GENERATOR - A computer-implemented method, system and computer program product for automatically generating one or more test scripts for at least one software application based on one or more business documents, by: analyzing the business documents to identify one or more screens, including one or more fields on the screens, defined therein; and automatically generating the test scripts for the software application, wherein the test scripts are used to validate the screens, including the fields on the screens, as defined in the business documents. | 2020-06-11 |
20200183814 | FUZZ TESTING FOR QUANTUM SDK - The subject disclosure relates generally to an automated testing tool for quantum software development kits (SDKs). A system in accordance with an embodiment comprises a memory that stores computer-executable components. A processor is operably coupled to the memory and executes the computer-executable components stored in the memory. The computer-executable components comprise: a transformation component that receives a qasm program and transforms the qasm program; a testing component that tests the transformed qasm program on the SDK; and a reporting component that reports whether a quantum SDK has functioned properly or failed for the transformed qasm program. | 2020-06-11 |
20200183815 | Virtual Assistant Domain Selection Analysis - A virtual assistant platform provides a user interface for app developers to configure the enablement of domains for virtual assistants. Sets of test queries can be uploaded and statistical analyses displayed for the numbers of test queries served by each selected domain and costs for usage of each domain. Costs can vary according to complex pricing models. The user interface provides display views of tables, cost stack charts, and histograms to inform decisions that trade-off costs with benefits to the virtual assistant user experience. The platform interface shows, for individual queries, responses possible from different domains. Platform providers promote certain chosen domains. | 2020-06-11 |
20200183816 | SYSTEM LEVEL TEST GENERATION USING DNN TRANSLATION FROM UNIT LEVEL TEST - Embodiments of the present systems and methods may provide techniques for unit-level testing of an SUT in which the unit-level test is translated into a valid test of the SUT itself. For example, in an embodiment, a computer-implemented method for testing a system may comprise analyzing the system to determine sub-components of the system and inputs to the sub-components, performing dynamic testing of the system and collecting pairs of inputs to the system and inputs to the sub-components, training a machine learning model to translate from inputs to the sub-components to inputs to the system using the collected pairs of inputs to the system and inputs to the sub-components, and performing sub-component level testing and translating the sub-component level testing to system level testing. | 2020-06-11 |
20200183817 | COMBINATORIAL TESTING OF SOFTWARE FOR MULTI-LEVEL DATA STRUCTURES - Methods and apparatus are disclosed for efficient combinatorial testing of multi-level datatypes and data objects. A multi-level datatype associated with a software library has a plurality of linked levels with corresponding metadata attributes. A sparse set of metadata combinations is generated, providing full coverage of identified tuples of the metadata. Multi-level test datatypes are defined, with metadata attributes following the generated metadata combinations, and used to execute a test suite and validate the software library. A user interface of the software library can be tested and validated directly using the defined test datatypes. Alternatively, functions of the software library can be tested with test objects that are instances of the test datatypes. In variations, the software library can be tested for combinations of data values, or a mix of data and metadata. The software library can be a rules framework providing configuration and implementation of if-then rules for client applications. | 2020-06-11 |
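The "sparse set of metadata combinations ... providing full coverage of identified tuples" described above is, for pairs, a classic covering-array problem. A minimal greedy sketch under assumed inputs (a dict of attribute names to value lists) might look like this; it enumerates the full product per round, so it is only viable for small parameter spaces, and it stands in for whatever generation method the patent actually uses.

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedily build a sparse set of combinations covering every pair of
    attribute values. `params` maps attribute name -> list of values.
    A simplified stand-in for the covering-array generation in the abstract.
    """
    names = sorted(params)
    # every (attribute, value) pair across distinct attributes to be covered
    uncovered = set()
    for a, b in combinations(names, 2):
        for va, vb in product(params[a], params[b]):
            uncovered.add(((a, va), (b, vb)))
    suite = []
    while uncovered:
        best_row, best_gain = None, -1
        # pick the full combination covering the most still-uncovered pairs
        for combo in product(*(params[n] for n in names)):
            row = dict(zip(names, combo))
            gain = sum(1 for pa, pb in uncovered
                       if row[pa[0]] == pa[1] and row[pb[0]] == pb[1])
            if gain > best_gain:
                best_row, best_gain = row, gain
        suite.append(best_row)
        uncovered = {(pa, pb) for pa, pb in uncovered
                     if not (best_row[pa[0]] == pa[1] and best_row[pb[0]] == pb[1])}
    return suite
```

For three binary attributes this yields far fewer rows than the eight-row exhaustive product while still covering all twelve value pairs.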
20200183818 | DETECTION AND CORRECTION OF CODING ERRORS IN SOFTWARE DEVELOPMENT - Techniques and solutions are described for automatically analyzing code for code principle violations. A code analysis can be configured that includes one or more tests for one or more code principle violations. The code analysis can be applied statically, against previously generated code, or can be conducted in a dynamic manner as code is being written or edited. Code, such as automatically generated code, can be excluded from analysis, or reports of analysis results. When a code principle violation is detected, the violation can be displayed to a user. Information regarding correcting the violation can be displayed. In some cases, a code principle violation can be automatically corrected. Code violations can be classified, such as by severity, and can be associated with particular code, such as code packages or objects, or particular developers or development groups. Reports can be prepared summarizing changes in code principle violations over time. | 2020-06-11 |
20200183819 | METHOD FOR EXECUTING A PROGRAM IN A COMPUTER - The disclosed embodiments relate to a method for memory modification resulting in a test probe for examining a program under test substantially during run-time. The ability to inject faults or errors in order to test the program's reaction to a fault in a particular state, and to individually replace access to a regular operand with access to a shadow operand, allows for non-intrusive tests while the program is substantially executed in real-time, whereby the program itself is not substantially altered for testing purposes. | 2020-06-11 |
20200183820 | NON-REGRESSIVE INJECTION OF DECEPTION DECOYS - Systems and methods, as well as computing architecture for implementing the same, for decoy injection into an application. The systems and methods include splitting a standard test phase operation into two complementary phases, and adding new unit tests to the process, dedicated to testing the proper coverage of the decoys and avoiding regression of the original code. | 2020-06-11 |
20200183821 | IDENTIFYING FLAKY TESTS - The present disclosure provides a method and apparatus for identifying flaky tests. Historical running data of a test case may be obtained. Statistical analysis may be performed based on the historical running data. It may be determined whether the test case is a flaky test based on the statistical analysis. | 2020-06-11 |
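One simple statistical analysis of a test's historical running data is to measure how often its outcome flips between pass and fail. The flip-rate heuristic, the minimum-run requirement, and the threshold values below are assumptions for illustration, not the patent's analysis.

```python
def is_flaky(history, min_runs=10, flip_threshold=0.2):
    """Flag a test as flaky when its ordered pass/fail history (True = pass)
    alternates state unusually often.

    min_runs and flip_threshold are hypothetical tuning parameters.
    """
    if len(history) < min_runs:
        return False  # too little data for a statistical call
    failures = history.count(False)
    if failures == 0 or failures == len(history):
        return False  # consistently passing or consistently failing: not flaky
    flips = sum(1 for prev, cur in zip(history, history[1:]) if prev != cur)
    return flips / (len(history) - 1) >= flip_threshold
```

A consistently failing test is a genuine regression rather than a flaky one, which is why the all-fail case is excluded before the flip rate is checked.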
20200183822 | MOCK SERVER FOR TESTING - Systems of the present disclosure provide a versatile, reusable mock server to respond to Application-Programming-Interface (API) requests. The mock server receives an API request and a cookie associated with the API request. The mock server identifies response instructions found in the cookie. The response instructions may include a static response value, a name of an API server for the mock server to imitate, or code for the mock server to execute in the process of generating a mock API response. The mock server generates a mock API response based on the response instructions and sends the mock API response in reply to the API request. | 2020-06-11 |
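The three instruction kinds the abstract lists (static value, server to imitate, code to execute) can be sketched as a dispatch over a JSON-encoded cookie. The cookie schema, the `IMITATIONS` registry, and the use of `eval` are assumptions made for this sketch; executing caller-supplied code is only tolerable in a test-only mock.

```python
import json

# hypothetical registry of imitations of named API servers
IMITATIONS = {
    "user-service": lambda path: {"id": 1, "path": path},
}

def mock_response(request_path, cookie):
    """Build a mock API response from instructions carried in a cookie.
    The 'static' / 'imitate' / 'code' keys are an assumed encoding of the
    three instruction kinds described in the abstract.
    """
    instructions = json.loads(cookie)
    if "static" in instructions:
        return instructions["static"]          # fixed canned value
    if "imitate" in instructions:
        # delegate to a registered imitation of a named API server
        return IMITATIONS[instructions["imitate"]](request_path)
    if "code" in instructions:
        # evaluate caller-supplied code to synthesize the response (test-only!)
        return eval(instructions["code"], {"path": request_path})
    return {"error": "no response instructions in cookie"}
```

Because the instructions travel in the cookie, one mock server instance can serve many different test scenarios without redeployment, which is the reuse the abstract emphasizes.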
20200183823 | Hardware-Based Memory Management For System-On-Chip (SOC) Integrated Circuits - Systems and related methods are disclosed to manage memory for an integrated circuit including a processor and logic circuitry to manage the memory. The memory includes segments available for storage of data, and the processor stores data within the memory. Logic circuitry is configured to manage the memory, forms a plurality of sections within the segments, and applies tokens to the plurality of sections. Further, for each storage operation, the logic circuitry searches the tokens to identify blocks of continuous available tokens based upon data length, selects a block from the blocks identified in the search, determines a first token for the selected block, and outputs a memory address to the processor based upon the first token. The processor stores the data at the memory address. For one embodiment, the storage operations are associated with storage of data within packets received from network communications. | 2020-06-11 |
20200183824 | SHARING METHOD, APPARATUS, STORAGE MEDIUM, AND TERMINAL - Provided are a sharing method and apparatus. The method acquires a first transmission parameter and a number of first channels supported by one network mode; calculates a first storage parameter corresponding to the one network mode according to the number of first channels, the first transmission parameter, and a preset calculation model; determines a first storage area satisfying the first storage parameter; and allocates storage space for the first channels according to the first storage area. Further provided is a terminal. | 2020-06-11 |
20200183825 | DUAL MEDIA PACKAGING TARGETED FOR SSD USAGE - The present disclosure generally relates to data storage devices comprising one or more memory packages. At least one memory package of the storage device comprises a first stack of memory dies coupled together by a first chip select line and a second stack of memory dies coupled together by a second chip select line. Both the first stack and the second stack comprise a plurality of non-volatile memory dies and a dissimilar memory die disposed on top of the plurality of non-volatile memory dies. Within both the first stack and the second stack, the plurality of non-volatile memory dies is a different type of memory than the dissimilar memory die. Additionally, within both the first stack and the second stack, the plurality of non-volatile memory dies is configured to store host data, and the dissimilar memory die is configured to store cached data. | 2020-06-11 |
20200183826 | METHOD AND SYSTEM FOR IN-LINE ECC PROTECTION - A memory system having an interconnect configured to receive commands from a system to read data from and/or write data to a memory device. The memory system also has a bridge configured to receive the commands from the interconnect, to manage ECC data and to perform address translation between system addresses and physical memory device addresses by calculating a first ECC memory address for a first ECC data block that is after and adjacent to a first data block having a first data address, calculating a second ECC memory address that is after and adjacent to the first ECC block, and calculating a second data address that is after and adjacent to the second ECC block. The bridge may also check and calculate ECC data for a complete burst of data, and/or cache ECC data for a complete burst of data that includes read and/or write data. | 2020-06-11 |
20200183827 | EFFICIENT SCALING AND IMPROVED BANDWIDTH OF STORAGE SYSTEM - A system including embedded storage devices is described. A method of system operation includes determining, by a processing device of a storage system controller operatively coupled via a network to embedded storage devices, that data is to be stored in a first storage portion of a first storage device of the embedded storage devices. The method also includes buffering the data in a second storage portion of a second embedded storage device of the embedded storage devices. | 2020-06-11 |
20200183828 | DATA RELOCATION IN MEMORY HAVING TWO PORTIONS OF DATA - The present disclosure includes apparatuses, methods, and systems for data relocation in memory having two portions of data. An embodiment includes a memory having a plurality of physical blocks of memory cells, and a first and second portion of data having a first and second, respectively, number of logical block addresses associated therewith. Two of the plurality of physical blocks of cells do not have data stored therein. Circuitry is configured to relocate the data of the first portion that is associated with one of the first number of logical block addresses to one of the two physical blocks of cells that do not have data stored therein, and relocate the data of the second portion that is associated with one of the second number of logical block addresses to the other one of the two physical blocks of cells that do not have data stored therein. | 2020-06-11 |
20200183829 | OWNERSHIP-BASED GARBAGE COLLECTION OF DATA - The described technology is generally directed towards data storage using a node cluster, and garbage collecting unused chunks (data storage units) in the cluster based on which node owns the particular unused chunks. A node determines which chunks are in use, and exchanges datasets identifying those chunks with other nodes such that the other nodes know which of the chunks that they own are in use. When a node obtains the dataset identifying the chunks in use, the node determines the chunks not in use by a difference of those owned and those in use. This difference dataset is used to garbage collect owned, unused chunks. Garbage collection via this technology is able to be performed in a single cycle. | 2020-06-11 |
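The core of the ownership-based scheme above is a set difference: a node collects the chunks it owns that no node reported as in use. A minimal sketch, assuming chunk ids are hashable values and the exchanged datasets arrive as per-node sets:

```python
def chunks_to_collect(owned, in_use_datasets):
    """Unused chunks the local node may garbage collect, computed as the
    difference between the chunks it owns and every chunk any node in the
    cluster reported as in use. A minimal sketch of the dataset-exchange
    step described in the abstract.
    """
    in_use = set().union(*in_use_datasets) if in_use_datasets else set()
    return owned - in_use
```

Because each node only deletes chunks it owns, no coordination beyond the one dataset exchange is needed, which is what lets collection complete in a single cycle.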
20200183830 | METHOD FOR GARBAGE COLLECTING FOR NON-VOLATILE MEMORY - A method for garbage collecting for non-volatile memories is disclosed. The method includes steps: a) providing a SSD, connected to a host, containing a plurality of TLC blocks and SLC blocks; b) reading 3M TLC pages in a TLC block having data; c) moving valid data in the TLC blocks to at least one clean TLC block; d) sending a host program command of 1 page to the host; e) repeating step b) to step d) until valid data in 8 TLC blocks are moved; f) reading 1 SLC page in a SLC block having data; g) moving valid data in the SLC block to the at least one clean TLC block; h) sending a host program command of α page to the host; and i) repeating step f) to step h) until valid data in the SLC block having data are moved. | 2020-06-11 |
20200183831 | STORAGE SYSTEM AND SYSTEM GARBAGE COLLECTION METHOD - A storage system and a system garbage collection method are provided. The storage system includes a first controller, a second controller, and a solid state disk. The first controller or the second controller manages storage space of the solid state disk in a unit of a segment. The first controller is configured to perform system garbage collection on multiple segments of segments managed by the first controller. The second controller is configured to: when the first controller performs system garbage collection, perform system garbage collection on multiple segments of segments managed by the second controller. The multiple segments of the segments managed by the first controller and the multiple segments of the segments managed by the second controller are allocated within a same time period. Therefore, a quantity of times of write amplification in the solid state disk can be reduced. | 2020-06-11 |
20200183832 | CONTROLLER, MEMORY SYSTEM HAVING THE SAME, AND OPERATING METHOD THEREOF - There are provided a controller, a memory system having the same, and an operating method thereof. The controller includes: a host interface configured to receive a format request from a host, and output an internal format request including initial logical unit information; and a flash translation layer configured to initialize a map table for storing information on mapping between logical and physical unit numbers according to the initial logical unit information. | 2020-06-11 |
20200183833 | VIRTUAL SPACE MEMORY BANDWIDTH REDUCTION - A processing system includes a central processing unit (CPU) and a graphics processing unit (GPU) that has a plurality of compute units. The GPU receives an image from the CPU and determines a total result area in a virtual-matrix-multiplication space of a virtual matrix-multiplication output matrix based on convolutional parameters associated with the image in an image space. The GPU partitions the total result area of the virtual matrix-multiplication output matrix into a plurality of virtual segments. The GPU allocates convolution operations to the plurality of compute units based on each virtual segment of the plurality of virtual segments. | 2020-06-11 |
20200183834 | METHOD AND DEVICE FOR DETERMINING MEMORY SIZE - A method can be used to determine an overall memory size of a global memory area to be allocated in a memory intended to store input data and output data from each layer of a neural network. An elementary memory size of an elementary memory area intended to store the input data and the output data from the layer is determined for each layer. The elementary memory size is in the range between a memory size for the input data or output data from the layer and a size equal to the sum of the memory size for the input data and the memory size for the output data from the layer. The overall memory size is determined based on the elementary memory sizes associated with the layers. The global memory area contains all the elementary memory areas. | 2020-06-11 |
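The sizing rule above can be made concrete: a layer's elementary area lies between max(input, output) and input + output. As a sketch, assume a per-layer flag saying whether the output may overwrite the input in place, and read "the global memory area contains all the elementary memory areas" as the areas being reused over time so that the global size is the largest elementary size; both assumptions are ours, not the patent's.

```python
def layer_memory(in_size, out_size, can_overlap):
    """Elementary memory size for one layer: max(in, out) when the layer may
    compute in place, in + out when input and output must coexist.
    `can_overlap` is an assumed per-layer flag, not from the patent."""
    return max(in_size, out_size) if can_overlap else in_size + out_size

def global_memory(layers):
    """Overall size of the global area under the simplifying assumption that
    elementary areas are reused layer by layer, so the global area only needs
    to contain the largest of them.

    layers: iterable of (input_size, output_size, can_overlap) tuples."""
    return max(layer_memory(i, o, ov) for i, o, ov in layers)
```

In this reading, a network with a 100/50 in-place layer and a 50/200 non-overlapping layer needs a global area sized for the latter's 250 units, not the 350-unit sum.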
20200183835 | Multi-Ring Shared, Traversable, and Dynamic Advanced Database - Examples of the present disclosure describe systems and methods for sharing memory using a multi-ring shared, traversable and dynamic database. In aspects, the database may be synchronized and shared between multiple processes and/or operation mode protection rings of a system. The database may also be persisted to enable the management of information between hardware reboots and application sessions. The information stored in the database may be view independent, traversable, and resizable from various component views of the database. In some aspects, an event processor is additionally described. The event processor may use the database to allocate memory chunks of a shared heap to components/processes in one or more protection modes of the operating system. | 2020-06-11 |
20200183836 | METADATA FOR STATE INFORMATION OF DISTRIBUTED MEMORY - An approach is disclosed that maintains a status of a data granule. A local node maintains the status by tracking a set of state information associated with the data granule using system memory metadata. The state information indicates whether the data granule that is associated with a block of memory is currently stored in a physical address on the local node. An interrupt is generated in response to detecting an access of the data granule when the data granule associated with the block of memory is not stored at the physical address on the local node. | 2020-06-11 |
20200183837 | DATAFLOW ACCELERATOR ARCHITECTURE FOR GENERAL MATRIX-MATRIX MULTIPLICATION AND TENSOR COMPUTATION IN DEEP LEARNING - A tensor computation dataflow accelerator semiconductor circuit is disclosed. The data flow accelerator includes a DRAM bank and a peripheral array of multiply-and-add units disposed adjacent to the DRAM bank. The peripheral array of multiply-and-add units are configured to form a pipelined dataflow chain in which partial output data from one multiply-and-add unit from among the array of multiply-and-add units is fed into another multiply-and-add unit from among the array of multiply-and-add units for data accumulation. Near-DRAM-processing dataflow (NDP-DF) accelerator unit dies may be stacked atop a base die. The base die may be disposed on a passive silicon interposer adjacent to a processor or a controller. The NDP-DF accelerator units may process partial matrix output data in parallel. The partial matrix output data may be propagated in a forward or backward direction. The tensor computation dataflow accelerator may perform a partial matrix transposition. | 2020-06-11 |
20200183838 | DYNAMIC CACHE RESIZE TAKING INTO ACCOUNT UNDERLYING RAID CHARACTERISTICS - A method for resizing write cache in a storage system is disclosed. In one embodiment, such a method includes maintaining, in a write cache, write data to be destaged to RAID arrays implemented on persistent storage drives. The method dynamically resizes the write cache in a way that takes into account the following: (1) an amount of battery power available to destage the write data to the persistent storage drives in the event of an emergency; and (2) underlying characteristics of the RAID arrays to which the write data is to be destaged. A corresponding system and computer program product are also disclosed. | 2020-06-11 |
20200183839 | Non-Uniform Pagination of Columnar Data - A computer implemented system and method of memory management for an in-memory database. The system implements a paged data vector using non-uniform compression of its chunks. In this manner, the system achieves greater compression than systems that use uniform compression. | 2020-06-11 |
20200183840 | CACHING DATA FROM REMOTE MEMORIES - An approach is disclosed that caches distant memories within the storage of a local node. The approach provides a memory caching infrastructure that supports virtual addressing by utilizing memory in the local node as a cache of distant memories for data granules. The data granules are accessed along with metadata and an ECC associated with the data granule. The metadata is updated to indicate storage of the selected data granule in the cache. | 2020-06-11 |
20200183841 | RELAY CONSISTENT MEMORY MANAGEMENT IN A MULTIPLE PROCESSOR SYSTEM - Methods and apparatus for memory management are described. In one example, this disclosure describes a method that includes executing, by a first processing unit, first work unit operations specified by a first work unit message, wherein execution of the first work unit operations includes accessing data from shared memory included within the computing system, modifying the data, and storing the modified data in a first cache associated with the first processing unit; identifying, by the computing system, a second work unit message that specifies second work unit operations that access the shared memory; updating, by the computing system, the shared memory by storing the modified data in the shared memory; receiving, by the computing system, an indication that updating the shared memory with the modified data is complete; and enabling the second processing unit to execute the second work unit operations. | 2020-06-11 |
20200183842 | TRACKING TRANSACTIONS USING EXTENDED MEMORY FEATURES - An approach is disclosed that tracks memory transactions by a node. The approach establishes a transaction processing state corresponding to common virtual addresses accessed by processing threads. Transactions are executed by the threads. A selected transaction is allowed to complete. In response to detecting a conflict in the transaction processing state, completion of a non-selected transaction is inhibited. | 2020-06-11 |
20200183843 | TRANSLATION ENTRY INVALIDATION IN A MULTITHREADED DATA PROCESSING SYSTEM - A multiprocessor data processing system includes a processor core having a translation structure for buffering a plurality of translation entries. In response to receipt of a translation invalidation request, the processor core determines from the translation invalidation request that the translation invalidation request does not require draining of memory referent instructions for which address translation has been performed by reference to a translation entry to be invalidated. Based on the determination, the processor core invalidates the translation entry in the translation structure and confirms completion of invalidation of the translation entry without regard to draining from the processor core of memory access requests for which address translation was performed by reference to the translation entry. | 2020-06-11 |
20200183844 | METHODS AND SYSTEMS FOR DISTRIBUTING MEMORY REQUESTS - A memory request, including an address, is accessed. The memory request also specifies a type of an operation (e.g., a read or write) associated with an instance (e.g., a block) of data. A group of caches is selected using a bit or bits in the address. A first hash of the address is performed to select a cache in the group. A second hash of the address is performed to select a set of cache lines in the cache. Unless the operation results in a cache miss, the memory request is processed at the selected cache. When there is a cache miss, a third hash of the address is performed to select a memory controller, and a fourth hash of the address is performed to select a bank group and a bank in memory. | 2020-06-11 |
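The four-hash routing described above can be sketched end to end. The mixing function, field widths, and counts (4 caches per group, 1024 sets, 2 controllers, 4×4 bank groups/banks) below are placeholders; the patent does not specify which hash functions are used.

```python
def _mix(addr, salt):
    # toy 64-bit multiplicative mix standing in for the unspecified hashes
    x = ((addr ^ salt) * 0x9E3779B97F4A7C15) & 0xFFFFFFFFFFFFFFFF
    return x ^ (x >> 32)

def route_request(addr, hit):
    """Route a memory request: an address bit picks the cache group, then
    successive hashes pick the cache, the set of cache lines, and, on a
    miss, the memory controller and bank group/bank. All widths and hash
    choices here are illustrative assumptions."""
    route = {
        "group": (addr >> 6) & 0x1,      # one address bit selects the cache group
        "cache": _mix(addr, 1) % 4,      # first hash: cache within the group
        "set": _mix(addr, 2) % 1024,     # second hash: set of cache lines
    }
    if not hit:                          # cache miss: route onward to memory
        route["controller"] = _mix(addr, 3) % 2    # third hash: memory controller
        sel = _mix(addr, 4)                        # fourth hash: bank group + bank
        route["bank_group"], route["bank"] = sel % 4, (sel // 4) % 4
    return route
```

Hashing at each level spreads hot address ranges across caches, controllers, and banks instead of letting simple bit-slicing concentrate them.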
20200183845 | SYSTEMS AND METHODS FOR PREFETCHING CONTENT ITEMS - Systems and methods for prefetching content items for display by applications executed on computing devices are provided. The method can include transmitting a first request for content to display within an environment of the application, the first request for content including a first parameter to be used to determine a first content item for display; storing in an associated memory element, the first parameter; transmitting a follow-on request for content including the first parameter of the first request for content; receiving a follow-on content item responsive to the follow-on request for content; storing the follow-on content item in a local cache structure specific to the application; transmitting a second request for content; retrieving, in response to the second request, the follow-on content item from the local cache structure; and displaying, in response to the second request, the follow-on content item within the environment of the application on the computing device. | 2020-06-11 |
20200183846 | METHOD AND DEVICE FOR OPTIMIZATION OF DATA CACHING - A computer-implemented method includes caching data from a persistent storage device into a cache. The method also includes caching a physical address and a logical address of the data in the persistent storage device into the cache. The method further includes in response to receiving an access request for the data, accessing the data cached in the cache using at least one of the physical address and the logical address. The embodiments of the present disclosure also provide an electronic apparatus and a computer program product. | 2020-06-11 |
20200183847 | STORAGE SYSTEM DE-THROTTLING TO FACILITATE EMERGENCY CACHE DESTAGE - A method for destaging data from cache is disclosed. In one embodiment, such a method includes maintaining, in cache, modified data to be destaged to persistent storage drives. The method further detects an emergency situation wherein the modified data needs to be promptly destaged to the persistent storage drives. In response to the emergency situation, the method automatically disables artificially-imposed throughput limits associated with the persistent storage drives. The method then destages the modified data to the persistent storage drives without restriction from the artificially-imposed throughput limits. A corresponding system and computer program product are also disclosed. | 2020-06-11 |
20200183848 | CACHE FOR STORING REGIONS OF DATA - Systems, apparatuses, and methods for efficiently performing memory accesses in a computing system are disclosed. A computing system includes one or more clients, a communication fabric and a last-level cache implemented with low latency, high bandwidth memory. The cache controller for the last-level cache determines a range of addresses corresponding to a first region of system memory with a copy of data stored in a second region of the last-level cache. The cache controller sends a selected memory access request to system memory when the cache controller determines a request address of the memory access request is not within the range of addresses. The cache controller services the selected memory request by accessing data from the last-level cache when the cache controller determines the request address is within the range of addresses. | 2020-06-11 |
20200183849 | SECTOR CACHE FOR COMPRESSION - In an example, an apparatus comprises a plurality of execution units, and a cache memory communicatively coupled to the plurality of execution units, wherein the cache memory is structured into a plurality of sectors, wherein each sector in the plurality of sectors comprises at least two cache lines. Other embodiments are also disclosed and claimed. | 2020-06-11 |
20200183850 | HYBRID MEMORY ACCESS FREQUENCY - Techniques that facilitate hybrid memory access frequency are provided. In one example, a system stores access frequency data for storage class memory and volatile memory in a translation lookaside buffer. The access frequency data is indicative of a frequency of access to the storage class memory and the volatile memory. The system also determines whether to store data in the storage class memory or the volatile memory based on the access frequency data stored in the translation lookaside buffer. | 2020-06-11 |
20200183851 | CONTROLLER AND OPERATION METHOD THEREOF - Provided is an operation method of a controller which controls a memory device including a plurality of memory blocks. The operation method may include calculating a number of extended free blocks in the memory device based on valid page counts of the respective memory blocks, when a number of substantive free blocks in the memory device is less than a first threshold value, and performing a garbage collection operation when the number of extended free blocks is less than a second threshold value. | 2020-06-11 |
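The two-stage trigger in this abstract can be sketched directly; the way extended free blocks are estimated from valid page counts is an assumption, since the abstract does not give the formula:

```python
def should_collect(free_blocks, valid_page_counts, pages_per_block,
                   first_threshold, second_threshold):
    """Two-stage garbage-collection trigger (parameter names illustrative).
    Extended free blocks estimate how many whole blocks could be reclaimed
    by compacting the invalid pages spread across used blocks."""
    if free_blocks >= first_threshold:
        return False  # enough substantive free blocks; no further check
    invalid_pages = sum(pages_per_block - v for v in valid_page_counts)
    extended_free = free_blocks + invalid_pages // pages_per_block
    return extended_free < second_threshold
```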
20200183852 | MEMORY HAVING A STATIC CACHE AND A DYNAMIC CACHE - The present disclosure includes memory having a static cache and a dynamic cache. A number of embodiments include a memory, wherein the memory includes a first portion configured to operate as a static single level cell (SLC) cache and a second portion configured to operate as a dynamic SLC cache when the entire first portion of the memory has data stored therein. | 2020-06-11 |
20200183853 | TRANSLATION ENTRY INVALIDATION IN A MULTITHREADED DATA PROCESSING SYSTEM - A multiprocessor data processing system includes a processor core having a translation structure for buffering a plurality of translation entries. The processor core receives a sequence of a plurality of translation invalidation requests. In response to receipt of each of the plurality of translation invalidation requests, the processor core determines that each of the plurality of translation invalidation requests indicates that it does not require draining of memory referent instructions for which address translation has been performed by reference to a respective one of a plurality of translation entries to be invalidated. Based on the determination, the processor core invalidates the plurality of translation entries in the translation structure without regard to draining from the processor core of memory access requests for which address translation was performed by reference to the plurality of translation entries. | 2020-06-11 |
20200183854 | IDENTIFYING LOCATION OF DATA GRANULES IN GLOBAL VIRTUAL ADDRESS SPACE - An approach is disclosed that identifies a home node of a data granule. The process is performed by an information handling system (a local node) that retrieves a global virtual address directory. The global virtual address directory maps shared virtual addresses to a number of nodes, including the local node, with one of the nodes being the home node. The shared virtual addresses correspond to a plurality of memory addresses that are stored in a shared virtual memory that is shared amongst the plurality of nodes. The approach receives a selected shared virtual address, retrieves, from the global virtual address directory, the home node associated with the selected shared virtual address, and accesses the data granule corresponding to the selected shared virtual address from the home node. | 2020-06-11 |
20200183855 | LOGICAL TO PHYSICAL MAPPING - The present disclosure includes apparatuses and methods for logical to physical mapping. A number of embodiments include a logical to physical (L2P) update table, an L2P table cache, and a controller. The controller may be configured to cause a list of updates, which are to be applied to an L2P table, to be stored in the L2P update table. | 2020-06-11 |
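The buffered-update scheme this abstract describes can be sketched as follows; the dict-backed table, the flush policy, and all names are illustrative assumptions rather than the claimed controller design:

```python
class L2PMapper:
    """Sketch of a logical-to-physical map with a pending-update table."""

    def __init__(self):
        self.l2p_table = {}     # main L2P table (simplified as a dict)
        self.update_table = []  # pending (logical, physical) updates

    def write(self, logical, physical):
        # Record the update instead of rewriting the table immediately.
        self.update_table.append((logical, physical))

    def flush(self):
        # Apply buffered updates to the L2P table in order, then clear.
        for logical, physical in self.update_table:
            self.l2p_table[logical] = physical
        self.update_table.clear()

    def lookup(self, logical):
        # The newest pending update wins over a stale table entry.
        for lba, phys in reversed(self.update_table):
            if lba == logical:
                return phys
        return self.l2p_table.get(logical)
```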
20200183856 | BUFFER AND METHODS FOR ADDRESS TRANSLATIONS IN A PROCESSOR - A method and system of translating addresses are disclosed that include receiving an effective address for translation, and providing a processor and a translation buffer where the translation buffer has a plurality of entries, wherein each entry contains a mapping of an effective address directly to a corresponding real address, and information on a corresponding intermediate virtual address. The method and system further include determining whether the translation buffer has an entry matching the effective address, and in response to the translation buffer having an entry with a matching effective address, providing the real address translation from the entry having the matching effective address. | 2020-06-11 |
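A minimal sketch of the effective-to-real translation buffer described above, assuming page-granular entries and a 4 KiB page size (both assumptions; the abstract specifies neither):

```python
class ERATBuffer:
    """Effective-to-real translation buffer sketch. Each entry maps an
    effective page directly to a real page and also records the
    intermediate virtual page, e.g. for invalidation bookkeeping."""

    def __init__(self):
        self.entries = {}  # effective page -> (real page, virtual page)

    def install(self, eff_page, real_page, virt_page):
        self.entries[eff_page] = (real_page, virt_page)

    def translate(self, eff_addr, page_size=4096):
        eff_page, offset = divmod(eff_addr, page_size)
        entry = self.entries.get(eff_page)
        if entry is None:
            return None  # miss: a real design would walk the page tables
        real_page, _virt_page = entry
        return real_page * page_size + offset
```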
20200183857 | LOCATING NODE OF NAMED DATA ELEMENTS IN COORDINATION NAMESPACE - An approach is disclosed that locates a named data element by a local node. A name corresponding to the named data element is received; the named data element exists in a Coordination Namespace allocated in a memory area that is distributed amongst a set of nodes that includes the local node and remote nodes. A predicted node identifier is received, and the named data element is then requested from the predicted node based on that identifier. | 2020-06-11 |
20200183858 | Methods and Systems for Incorporating Non-Tree Based Address Translation Into a Hierarchical Translation Lookaside Buffer (TLB) - A computer system includes a translation lookaside buffer (TLB) data cache and a processor. The TLB data cache includes a hierarchical configuration comprising a first TLB array, a second TLB array, a third TLB array, and a fourth TLB array. The processor is configured to receive a first address for translation to a second address, and determine whether translation should be performed using a hierarchical page table or a hashed page table. The processor also determines (using a first portion of the first address) whether the first TLB array stores a mapping of the first portion of the first address in response to determining that the translation should be performed using the hashed page table, and retrieves the second address from the third TLB array or the fourth TLB array in response to determining that the first TLB array stores the mapping of the first portion of the first address. | 2020-06-11 |
20200183859 | DISTRIBUTED DIRECTORY OF NAMED DATA ELEMENTS IN COORDINATION NAMESPACE - An approach is described that provides a distributed directory structure within a storage of an information handling system (a local node). A request is received, the request corresponding to a shared virtual address. The shared virtual address is shared amongst a number of nodes that includes the local node and some remote nodes. A Global Address Space Directory (GASD) is retrieved that corresponds to a global virtual address space. The GASD is stored in a Coordination Namespace that is stored in a memory that is distributed amongst the nodes. A mapping that is included in the GASD is used to determine the node where the shared virtual address currently resides. The shared virtual address is then accessed from the node where it currently resides. | 2020-06-11 |
20200183860 | PROCESSING DEVICE AND METHOD FOR CHANGING FUNCTION OF PINS - A processing device is provided. The processor of the processing device executes a first command to generate initial setting values used to set the functions of the pins and writes the initial setting values into the register. When the initial setting values need to be changed to change the function of one or more pins, the processor executes a second command to generate second setting values used to set the functions of the pins and writes the second setting values into the register to replace the initial setting values. When the second setting values are written into the register, the register determines whether to replace the initial setting values with the second setting values according to the second setting values. In making this determination, the register ignores any value of 0 in the second setting values. | 2020-06-11 |
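The "ignore the value 0" update rule from this abstract can be sketched per pin; representing the register as a list of per-pin values is an illustrative simplification:

```python
def apply_pin_update(current, new_values):
    """Per-pin register update where a new value of 0 means 'leave this
    pin's setting unchanged' (rule taken from the abstract; the per-pin
    list layout is an assumption)."""
    return [new if new != 0 else old for old, new in zip(current, new_values)]
```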
20200183861 | METHOD AND APPARATUS FOR SHARING SECURITY METADATA MEMORY SPACE - The presently disclosed method and apparatus for sharing security metadata memory space proposes a technique that allows metadata to be shared between two different encryption techniques. A section of memory encrypted using a first type of encryption and having first security metadata associated therewith is converted to a section of memory encrypted using a second type of encryption and having second security metadata associated therewith. At least a portion of said first security metadata shares a memory space with at least a portion of said second security metadata for a same section of memory. | 2020-06-11 |
20200183862 | DATA STORAGE MODULE AND SYSTEM HOST HAVING THE SAME - A data storage module includes an adapter and at least two data storage units. The adapter includes a hybrid port and at least two transmit ports coupled to the hybrid port, and the hybrid port is a hybrid U.2 transmission interface compatible with transmission protocols of SATA, SAS, and NVMe. Each of the data storage units has a transmission protocol different from that of the other data storage units and couples to one of the compatible transmit ports according to its transmission protocol. A system host simultaneously communicates for data storage or data access with each of the data storage units through the hybrid port and the transmit port correspondingly coupled to that data storage unit. | 2020-06-11 |
20200183863 | ELECTRONIC DEVICE - An electronic device is disclosed. The electronic device comprises a circuit board, a memory part comprising a plurality of first memory chips mounted on the circuit board, a socket part comprising a plurality of terminals electrically connected to a memory module which comprises a plurality of second memory chips, a memory controller for controlling the operation of the plurality of first memory chips and, when the memory module is connected to the socket part, controlling the operation of the plurality of first memory chips and the plurality of second memory chips, a conductive pattern comprising a control line which sequentially connects, from the memory controller, one or more of the plurality of terminals on the socket part and the plurality of first memory chips, and a capacitive element connected to the control line at a preset position between the one or more terminals on the socket part and the memory controller. | 2020-06-11 |
20200183864 | PERIPHERAL DEVICE WITH EMBEDDED VIDEO CODEC FUNCTIONALITY - An NVMe™ or NVMe-over-fabrics enabled device with video codec functionality may be seen to overcome the scalability problem of known hardware-assisted video codec solutions. The device of aspects of the present application may or may not have storage media. A host computer communicates with the device through NVMe™ commands. The device may be in one of many SSD form factors, such as U.2 or AIC. The device may be provided as a component in NVMe-enabled computers or NVMe-over-fabrics-enabled systems. | 2020-06-11 |
20200183865 | DATA TRANSMISSION/RECEPTION CONTROL SYSTEM, METHOD AND PROGRAM - A data transmission/reception control system that can generate an appropriate combination of a data providing unit and a data acquiring unit in a case where value of the data providing unit varies is provided. A first data storage unit | 2020-06-11 |
20200183866 | Communications Interface Between Host System and State Machine - A communications interface for interfacing between a host system and a state machine includes an event slot, the event slot comprising a plurality of registers including: a write register for writing by the host system, and a read register for reading by the host system, wherein the event slot is addressed from the host system by a single address location permitting the host system to write data to the write register and/or read data from the read register; and wherein the write register and the read register are individually addressable by the state machine. | 2020-06-11 |
20200183867 | COMMUNICATION MODULE AND LIGHTING BUS SYSTEM HAVING A NETWORK INTERFACE - The invention relates to a communication module for connecting a lighting bus system ( | 2020-06-11 |
20200183868 | DATA TRANSMISSION USING FLIPPABLE CABLE - A data transmission medium includes first and second conductors and a first reversible plug connector coupled to a first end thereof. The first reversible plug connector includes a plurality of signal pins, a crossbar switch, a receiver, and a transmitter. In response to a first configuration state, the plurality of signal pins includes a first predetermined number of reception pins and a second predetermined number of transmission pins. The first and second predetermined numbers are different from each other and each is greater than zero. The crossbar switch couples the first predetermined number of reception pins to a first port and the second predetermined number of transmission pins to a second port. The receiver has an input coupled to the first conductor, and an output coupled to the first port. The transmitter has an input coupled to the second port and an output coupled to the second conductor. | 2020-06-11 |
20200183869 | ALIGNING RECEIVED BAD DATA INDICATORS (BDIS) WITH RECEIVED DATA ON A CROSS-CHIP LINK - Aligning received BDIs with received data on a cross-chip link including receiving, from the cross-chip link, a control flit comprising incoming data flit information for a plurality of incoming data flits; adding the incoming data flit information to a control structure; receiving, from the cross-chip link, the plurality of incoming data flits; directing each of the plurality of incoming data flits to virtual channel queues based on the incoming data flit information at a first read pointer in the control structure; receiving a bookend flit comprising a plurality of BDIs for the plurality of data flits; and associating each of the BDIs with the plurality of data flits based on the incoming data flit information at a second read pointer in the control structure. | 2020-06-11 |