2nd week of 2021 patent application highlights part 46 |
Patent application number | Title | Published |
20210011778 | METHODS AND APPARATUS FOR DEPLOYING A DISTRIBUTED SYSTEM USING OPERATING SYSTEM VIRTUALIZATION - Methods and apparatus are disclosed for deploying a distributed system using operating system or container virtualization. An example apparatus includes a management container including a configuration manager and a container manager. The example configuration manager is to receive an instruction for a desired deployment state and is to apply a first change to a first current deployment state of the management container based on the desired deployment state. The example container manager is to apply a second change to a second current deployment state of a deployed container based on the desired deployment state. The container manager is to return information indicative of the desired deployment state to the configuration manager when the second change from the second current deployment state to the desired deployment state is achieved. | 2021-01-14 |
20210011779 | PERFORMANCE-BASED WORKLOAD/STORAGE ALLOCATION SYSTEM - A performance-based workload/storage allocation system includes a workload/storage allocation device coupled via controller device(s) to storage devices that each include a respective storage device attribute structure having storage device attributes that identify performance capabilities of that storage device. The workload/storage allocation device identifies a first workload that requires storage resources, and retrieves first workload performance requirement(s) associated with the first workload. The workload/storage allocation device then retrieves the storage device attributes that identify the performance capabilities of each of the storage devices via the controller device(s) and from the respective storage device attribute structure included in each of the storage devices, and uses them to determine that at least one of the plurality of storage devices includes performance capabilities that satisfy the first workload performance requirement(s). The workload/storage allocation device then allocates the at least one of the plurality of storage devices for use with the first workload. | 2021-01-14 |
20210011780 | EXCHANGING RUNTIME STATE INFORMATION BETWEEN DATACENTERS USING A CONTROLLER BRIDGE - In an embodiment, a computer-implemented method for dynamically exchanging runtime state data between datacenters using a controller bridge is disclosed. In an embodiment, the method comprises: requesting, and receiving, one or more first runtime state data from one or more logical sharding central control planes (“CCPs”) controlling one or more logical sharding hosts; requesting, and receiving, one or more second runtime state data from one or more physical sharding CCPs controlling one or more physical sharding hosts; aggregating, to aggregated runtime state data, the one or more first runtime state data and the one or more second runtime state data; determining updated runtime state data based on the aggregated runtime state data, the one or more first runtime state data, and the one or more second runtime state data; and transmitting the updated runtime state data to the logical sharding CCPs and physical sharding CCPs. | 2021-01-14 |
20210011781 | EXCHANGING RUNTIME STATE INFORMATION BETWEEN DATACENTERS WITH A GATEWAY USING A CONTROLLER BRIDGE - In an embodiment, a computer-implemented method for dynamically exchanging runtime state data between datacenters with a gateway using a controller bridge is disclosed. In an embodiment, the method comprises: receiving one or more first runtime state data from one or more logical sharding central control planes (“CCPs”) controlling one or more logical sharding hosts; receiving one or more second runtime state data from a gateway that is controlled by a CCP that also controls one or more physical sharding hosts; aggregating, to aggregated runtime state data, the one or more first runtime state data received from the one or more logical sharding CCPs and the one or more second runtime state data received from the gateway; determining updated runtime state data based on the aggregated runtime state data, the one or more first runtime state data, and the one or more second runtime state data; and transmitting the updated runtime state data to at least one of the one or more logical sharding CCPs and the gateway. | 2021-01-14 |
20210011782 | CLOUD ENVIRONMENT CONFIGURATION BASED ON TASK PARALLELIZATION - Example methods and computer systems for cloud environment configuration based on task parallelization. One example method may comprise: obtaining a task data structure specifying execution dependency information associated with a set of multiple configuration tasks that are executable to perform cloud environment configuration. The method may also comprise: in response to identifying a first configuration task and a second configuration task that are ready for execution based on the task data structure, triggering execution of the first configuration task and the second configuration task. The method may further comprise: in response to a determination that the first configuration task has been completed, identifying third configuration task(s) that are ready for execution based on the task data structure; and triggering execution of the third configuration task(s) by respective third compute node(s). | 2021-01-14 |
20210011783 | Optimization of Parallel Processing Using Waterfall Representations - Event data for an application execution is accessed from a table of logged events, the event data comprising a sequence, a hierarchy, and a start time and duration for each event. Dependency data for each event is also accessed to determine whether the start time for an event is dependent on the prior completion of at least one other event. A waterfall representation is then generated, the representation including an entry for each event in the sequence, with a start time and duration represented for each event. Based on the dependencies and hierarchy, it is determined, for each event with a start time that is later than the start time of an event which precedes it in the sequence, whether the event's start time is dependent on the prior completion of at least one preceding event. The start time for each event may then be advanced based on the determination. | 2021-01-14 |
20210011784 | DIGITAL SIGNAL PROCESSING PLUG-IN IMPLEMENTATION - In some examples, digital signal processing plug-in implementation may include obtaining attributes of a user interface for a digital signal processing plug-in, and obtaining attributes of digital signal processing logic for the digital signal processing plug-in. The digital signal processing plug-in implementation may include generating, based on the attributes of the user interface and the attributes of the digital signal processing logic, a plug-in process to control operation of the user interface and the digital signal processing logic. Further, the digital signal processing plug-in implementation may include establishing, based on the generated plug-in process, a two-way communication link between a host and the plug-in process to implement the digital signal processing plug-in. | 2021-01-14 |
20210011785 | METHODS AND APPARATUS FOR CORRECTING OUT-OF-ORDER DATA TRANSACTIONS BETWEEN PROCESSORS - Methods and apparatus for correcting out-of-order data transactions over an inter-processor communication (IPC) link between two (or more) independently operable processors. In one embodiment, a peripheral-side processor receives data from an external device and stores it to memory. The host processor writes data structures (transfer descriptors) describing the received data, regardless of the order the data was received from the external device. The transfer descriptors are written to a memory structure (transfer descriptor ring) in memory shared between the host and peripheral processors. The peripheral reads the transfer descriptors and writes data structures (completion descriptors) to another memory structure (completion descriptor ring). The completion descriptors are written to enable the host processor to retrieve the stored data in the correct order. In optimized variants, a completion descriptor describes groups of transfer descriptors. In some variants, the peripheral processor caches the transfer descriptors to offload them from the transfer descriptor ring. | 2021-01-14 |
20210011786 | OPERATION METHOD OF ROBOT OPERATING SYSTEM AND A ROBOT CONTROL METHOD - An operation method of a robot operating system and a robot control method are provided in this invention. The operation method of the robot operating system includes the steps of: monitoring an operating state of the Linux kernel through the security kernel when the security kernel and the Linux kernel of the robot operating system are both started; and hosting the Linux kernel through the security kernel when the Linux kernel runs abnormally or crashes. The technical scheme of the present invention is able to improve the stability and safety of robot operation. | 2021-01-14 |
20210011787 | TECHNOLOGIES FOR SCALING INTER-KERNEL TECHNOLOGIES FOR ACCELERATOR DEVICE KERNELS - Systems and methods for inter-kernel communication using one or more semiconductor devices. The semiconductor devices include a kernel. The kernel may be in an inactive state unless performing an operation. One kernel of a first device may monitor data for an event. Once an event has occurred, the kernel sends an indication to a first inter-kernel communication circuitry. The inter-kernel communication circuitry determines that an activation function of a plurality of activation functions is to be generated, generates the activation function, and transmits the activation function to a second kernel of a second device to wake it and perform a function using a peer-to-peer connection. | 2021-01-14 |
20210011788 | VECTOR PROCESSING FOR RPC INFORMATION - Embodiments of the present specification disclose a vector-processing method, apparatus, and device for RPC information. The scheme comprises: acquiring an RPC-information sequence consisting of a plurality of RPC-information units of a user; establishing and initializing feature vectors of the RPC-information units; and training the feature vectors according to the RPC-information sequence and the feature vectors, so as to obtain feature vectors with accurate expression. | 2021-01-14 |
20210011789 | SAMPLING MANAGEMENT OF APPLICATION PROGRAMMING INTERFACE (API) REQUESTS - Systems, methods, and software described herein manage and process application programming interface (API) statistics associated with an API provider. In one example, a monitoring service may identify API statistics from a set of API requests to an API provider. From the statistics, the monitoring service may determine trends of interest in the API requests and modify at least one sampling rate of API requests to the API provider to obtain the API statistics. | 2021-01-14 |
20210011790 | API Topology Hiding Method, Device, and System - Embodiments of this application relate to the field of communications technologies, and disclose an application programming interface (API) topology hiding method, a device, and a system, to hide, from an API invoker, an API exposing function (AEF) that provides an API. The method includes: receiving, by a common API framework core function (CCF) from a topology hiding request entity, a request message that includes information about an API and that is used to request to hide an AEF that provides the API; determining, based on the request message, a topology hiding entry point used by an API invoker to invoke the API; and sending, to the topology hiding entry point, an identifier of the API and an identifier of the AEF that provides the API, so that the topology hiding entry point hides the AEF that provides the API. | 2021-01-14 |
20210011791 | ABNORMALITY DETECTION SYSTEM, ABNORMALITY DETECTION METHOD, ABNORMALITY DETECTION PROGRAM, AND METHOD FOR GENERATING LEARNED MODEL - A method and system that efficiently selects sensors without requiring advanced expertise or extensive experience even in a case of new machines and unknown failures. An abnormality detection system includes a storage unit for storing a latent variable model and a joint probability model, an acquisition unit for acquiring sensor data that is output by a sensor, a measurement unit for measuring the probability of the sensor data acquired by the acquisition unit based on the latent variable model and the joint probability model stored by the storage unit, a determination unit for determining whether the sensor data is normal or abnormal based on the probability of the sensor data measured by the measurement unit, and a learning unit for learning the latent variable model and the joint probability model based on the sensor data output by the sensor. | 2021-01-14 |
20210011792 | CONTROLLER, DIAGNOSIS METHOD, AND DIAGNOSIS PROGRAM - It is desired to be able to easily identify a cause of a communication abnormality in an industrial machine. A controller of an industrial machine which communicates with an external device through a network includes: a plurality of communication units which respectively correspond to a plurality of communication protocols; and a diagnosis unit which starts up the communication units in a predetermined order and attempts communication using the communication protocols corresponding to each communication unit that is started up so as to diagnose the conditions of communication step by step. | 2021-01-14 |
20210011793 | DETERMINING ROOT-CAUSE OF FAILURES BASED ON MACHINE-GENERATED TEXTUAL DATA - A method and system for determining root-causes of incidences using machine-generated textual data. The method comprises receiving machine-generated textual data from at least one data source; classifying the received machine-generated textual data into at least one statistical metric; processing the statistical metric to recognize a plurality of incidence patterns; correlating the plurality of incidence patterns to identify at least a root-cause of an incidence that occurred in a monitored environment; and generating an alert indicating at least the identified root-cause. | 2021-01-14 |
20210011794 | ENHANCED IDENTIFICATION OF COMPUTER PERFORMANCE ANOMALIES BASED ON COMPUTER PERFORMANCE LOGS - In an exemplary embodiment, computer circuitry determines term characterization values for terms in computer performance logs and generates vectors that indicate the term characterization values. The computer circuitry determines vector similarity scores for these vectors. The computer circuitry aggregates the computer performance logs into aggregated logs based on the vector similarity scores. The computer circuitry selects rare logs from these aggregated logs and obtains computer performance anomaly labels for the rare logs. The computer circuitry matches new computer performance logs with the rare logs to detect the labeled computer performance anomalies. | 2021-01-14 |
20210011795 | COMPUTER SYSTEM, CONTROL METHOD, AND RECORDING MEDIUM - An FPGA includes a CRAM that records configuration data for defining a circuit configuration, a main circuit unit of which the circuit configuration is determined according to the configuration data, and an error detection unit that executes memory check processing of detecting whether or not any error is present in the configuration data. Upon receiving a query requesting execution of predetermined processing, a control unit causes the main circuit unit to sequentially execute a plurality of sub-processing steps obtained by segmenting the predetermined processing, and enables the error detection unit to execute the memory check processing for each of the sub-processing steps. | 2021-01-14 |
20210011796 | AUTOMATED POWER DOWN BASED ON STATE OF FIRMWARE - Apparatus and methods are disclosed, including determining whether firmware has been successfully loaded and whether the firmware version is valid and operable, and if the firmware has not been successfully loaded or the firmware is not valid and operable, tracking a number of unsuccessful attempts to load the firmware or an elapsed time for unsuccessful attempts to load the firmware, and entering a memory device into a reduced-power state if either the number of unsuccessful attempts or the elapsed time has reached a programmable threshold. | 2021-01-14 |
20210011797 | CORRECTIVE DATABASE CONNECTION MANAGEMENT - Systems and methods are provided for predicting impending failure of a database and preemptively initiating mitigating failover actions, for example by shedding connections or redirecting connection requests to an alternate database that can fulfill resources being requested. In an example embodiment, to detect a slow or unstable database, connection wait times are monitored over a rolling window of time intervals, a quantity of intervals in which at least one excessive wait time event occurred is counted during the time window, and if the quantity exceeds a threshold, the database is deemed unavailable, thereby triggering connection adjustments. | 2021-01-14 |
20210011798 | STREAMING SERVER STATISTICS AND PREDICTIVE MITIGATION - Aspects of the present disclosure involve systems and methods for improving the performance of a telecommunications network by monitoring the performance of one or more storage drives. Operational data is received from a plurality of storage drives of a storage server of a telecommunications network. A plurality of operational coefficients for each of the plurality of storage drives is derived based on the operational data, and a cluster plot is created from the plurality of operational coefficients for each of the plurality of storage drives. A distance is calculated between a subset of operational coefficients of the plurality of operational coefficients of the cluster plot, and a remedial action is initiated on a storage drive of the plurality of storage drives when a calculated distance of an operational coefficient associated with the storage drive exceeds a distance value from a cluster of the cluster plot. | 2021-01-14 |
20210011799 | GENERATING ERROR CHECKING DATA FOR ERROR DETECTION DURING MODIFICATION OF DATA IN A MEMORY SUB-SYSTEM - A request to store a first data is received. The first data and a first error-checking data are received. The first error-checking data can be based on a cyclic redundancy check (CRC) operation of the first data. A second data is generated by removing a portion of the first data. A second error-checking data of the second data is generated by using the first error-checking data and the removed portion of the first data. | 2021-01-14 |
20210011800 | GENERATING ERROR CHECKING DATA FOR ERROR DETECTION DURING MODIFICATION OF DATA IN A MEMORY SUB-SYSTEM - A request to store a first data is received. The first data and a first error-checking data are received. The first error-checking data can be based on a cyclic redundancy check (CRC) operation of the first data. A second data is generated by modifying the first data. A second error-checking data of the second data is generated by using the first error-checking data and a difference between the first data and the second data. | 2021-01-14 |
20210011801 | LOGIC BASED READ SAMPLE OFFSET IN A MEMORY SUB-SYSTEM - The present disclosure is directed to logic based read sample offset operations in a memory sub-system. A processing device performs a first read, a second read, and a third read of data from a memory devices using a first center value corresponding to a first read level threshold, a negative offset value, and a positive offset value, respectively. The processing device performs a XOR operation on results from the first and second reads to obtain a first value and a XOR operation on results from the second and third reads to obtain a second value. The processing device performs a first count operation on the first value to determine a first difference bit count and a second count operation on the second value to determine a second difference bit count. The processing device can store or output the first difference bit count and the second difference bit count. | 2021-01-14 |
20210011802 | READ LEVEL EDGE FIND OPERATIONS IN A MEMORY SUB-SYSTEM - The present disclosure is directed to read level edge find operations in a memory sub-system. A processing device receives a request to locate a first distribution edge at a target bit error rate (BER) of a first programming distribution. The processing device measures a first BER sample of the first programming distribution using a first offset value that is offset from a first center value corresponding to a first read level threshold and a second BER sample using a second offset value that is offset from the first offset value. The processing device determines that the second BER sample exceeds the target BER and the first BER sample does not exceed the target BER. The processing device determines a first location of the first distribution edge by interpolating between the first BER sample and the second BER sample. | 2021-01-14 |
20210011803 | SYSTEMS AND METHODS FOR PERFORMING A WRITE PATTERN IN MEMORY DEVICES - A semiconductor device may include a memory bank and a plurality of mode registers that communicatively couple to the memory bank. The plurality of mode registers may include a pattern of data stored therein. The semiconductor device may also include a bank control that receives a write pattern command that causes the bank control to write the pattern of data into the memory bank, send a signal to a multiplexer to couple the plurality of mode registers to the memory bank, and write the pattern of data to the memory bank via the plurality of mode registers. | 2021-01-14 |
20210011804 | READ RECOVERY CONTROL CIRCUITRY - An apparatus includes an error correction component coupled to read recovery control circuitry. The error correction component can be configured to perform one or more initial error correction operations on codewords contained within a managed unit received thereto. The read recovery control circuitry can be configured to receive the error corrected codewords from the error correction component and determine whether codewords among the error corrected codewords contain an uncorrectable error. The read recovery control circuitry can be further configured to determine that a redundant array of independent disks (RAID) codeword included in the plurality of error corrected codewords contains the uncorrectable error, request that codewords among the error corrected codewords that contain the uncorrectable error are rewritten in response to the determination, and cause the plurality of error corrected codewords to be transferred to a host coupleable to the read recovery control circuitry. | 2021-01-14 |
20210011805 | SINGLE SNAPSHOT FOR MULTIPLE AGENTS - A data storage system according to certain aspects can share a single snapshot for multiple applications and/or agents. For example, the data storage system can receive snapshot commands from multiple applications and/or agents, and can group them for a single snapshot (e.g., based on time of receipt of the snapshot commands). Data associated with the multiple applications and/or agents may reside on a single LUN or volume. The data storage system can take a single snapshot of the LUN or volume, and generate metadata regarding which portion of the snapshot is related to which application. The single snapshot can be stored in one or more secondary storage devices. The single snapshot may be partitioned into portions relating to different applications and stored separately. | 2021-01-14 |
20210011806 | MEMORY DEVICE FAILURE RECOVERY SYSTEM - A memory device failure recovery system includes a memory device management engine that is coupled to a first memory device via a first memory device slot, and a memory device management database. The memory device management engine identifies that the first memory device has experienced a failure in a configuration region of the first memory device during a current boot operation and, in response, retrieves memory device component information and memory device configuration information that is stored in the memory device management database and that was retrieved as part of a prior boot operation from a memory device that was connected to the first memory device slot. During the current boot operation, the memory device management engine determines whether first memory device components on the first memory device correspond to the memory device component information and, if so, uses the memory device configuration information to configure the first memory device. | 2021-01-14 |
20210011807 | METHODS AND SYSTEMS FOR RECOGNIZING UNINTENDED FILE SYSTEM CHANGES - A computing system includes a memory device, a persistent storage device, and a processor. The persistent storage device includes a filesystem having filesystem objects and a protection system stored thereon. The protection system includes a filesystem minifilter driver and a protection service. The minifilter driver intercepts an input/output (I/O) event directed to a target filesystem object and extracts system event metadata from the I/O event. The system event metadata includes an identifier of the target filesystem object. The system event metadata is transmitted to the protection service and recorded in a record file. A backup copy of the target filesystem object is created. The I/O event is released after recordation of the system event metadata and creation of the backup copy, thereby enabling the I/O event to be performed on the target filesystem object. During a system restore operation, the target filesystem object is replaced with the backup copy. | 2021-01-14 |
20210011808 | AUTOMATICALLY SETTING A DYNAMIC RESTORE POLICY IN A NATIVE CLOUD ENVIRONMENT - One example method includes identifying a group of microservices that form respective portions of an application, capturing any relations among microservices in the group of microservices, generating one or more restore policies for the application, based on identified relations among the microservices in the group of microservices, and configuring one of the restore policies so that such restore policy specifies restoring, together, a microservice that was identified as a partial cause of a problem, and any other microservices that are dependent on that microservice. | 2021-01-14 |
20210011809 | CAPACITOR ENERGY MANAGEMENT FOR UNEXPECTED POWER LOSS IN DATACENTER SSD DEVICES - Various implementations described herein relate to systems and methods for a Solid State Drive (SSD) to manage data in response to a power loss event, including writing data received from a host to a volatile storage of the SSD, detecting the power loss event before the data is written to a non-volatile storage of the SSD, storing the write commands to a non-volatile storage of the SSD, marking at least one storage location of the SSD associated with the write commands as uncorrectable, for example, after the power is restored. | 2021-01-14 |
20210011810 | Synthetic Full Backup Storage Over Object Storage - Disclosed embodiments include a method (system and non-transitory computer-readable medium) for backing up updated portions of a plurality of files having hierarchical relationships through object storage. In one or more embodiments, a file is segregated into chunks, and objects corresponding to the chunks are generated for storage at an object storage. For a chunk, an object for storing the chunk and additional objects for storing mapping information are generated. The mapping information may include path information identifying a path of the file in a hierarchical structure, a file version list identifying a version of the file, a chunk list describing an association between the file and the chunks, a chunk version list identifying a version of the chunk, etc. When a portion of the file is updated, objects corresponding to the updated portion of the file can be generated, and stored at the object storage. | 2021-01-14 |
20210011811 | Scalable Cloud-Based Backup Method - A computer-implemented system and method of backing up and restoring a containerized application or a cloud-based application using a datamover service includes determining a stateful set of services of the containerized application or cloud-based application to be backed up. A persistent volume associated with the determined stateful set of services of the containerized application or cloud-based application is identified. Then, a snapshot of the identified persistent volume is created and a new persistent volume is created from the snapshot. The created new persistent volume is attached to a datamover service. Data from the created new persistent volume is then copied to a network file system or storage system using the datamover service, thereby creating backup data stored in a storage system. | 2021-01-14 |
20210011812 | PREPARING CONTAINERIZED APPLICATIONS FOR BACKUP USING A BACKUP SERVICES CONTAINER AND A BACKUP SERVICES CONTAINER-ORCHESTRATION POD - A “backup services container” comprises “backup toolkits,” which include scripts for accessing containerized applications plus enabling utilities/environments for executing the scripts. The backup services container is added to Kubernetes pods comprising containerized applications without changing other pod containers. For maximum value and advantage, the backup services container is “over-equipped” with toolkits. The backup services container selects and applies a suitable backup toolkit to a containerized application to ready it for a pending backup. Interoperability with a proprietary data storage management system provides features that are not possible with third-party backup systems. Some embodiments include one or more components of the proprietary data storage management within the illustrative backup services container. Some embodiments include one or more components of the proprietary data storage management system in a backup services pod configured in a Kubernetes node. All configurations and embodiments are suitable for cloud and/or non-cloud computing environments. | 2021-01-14 |
20210011813 | Information Backup Method and Related Device - This application discloses an information backup method and a related device, to ensure continuity of a user service. The method is applied to a communications system including a primary device, a secondary device, and a cloud device, and the method is performed by the primary device. The method includes: sending a first identity notification to the cloud device, where the first identity notification is a notification indicating that the primary device has a primary device identity; and uploading obtained first user information to the cloud device when determining that a communication status of the cloud device is normal, where the first user information is stored by the cloud device and provided to the secondary device, and the first user information is to-be-backed-up information of user equipment that gets online from the primary device when the communication status of the cloud device is normal. | 2021-01-14 |
20210011814 | SYSTEM AND METHOD FOR RESTORATIONS OF VIRTUAL MACHINES IN VIRTUAL SYSTEMS - A method for restoring virtual machines in accordance with one or more embodiments of the invention includes obtaining, by a data protection manager, a restoration request, and in response to the restoration request: identifying a plurality of virtual machines (VMs) to restore based on the restoration request, determining a restoration process based on the plurality of virtual machines, and initiating a deployment of a production agent based on the restoration process, wherein the production agent initiates a restoration on at least a portion of the plurality of VMs. | 2021-01-14 |
20210011815 | BACKUP DATA RESTORATION WITHOUT USER INTERVENTION - According to examples, an apparatus may include a processor that may automatically restore a backup copy from a remote backup storage system to a user device without a user request to do so. For example, the apparatus may, at various times without user intervention, determine whether a restoration of a backup copy of local data is to be performed based on various criteria for automatically restoring the backup copy without a user request to do so. Based on satisfaction of the criteria, the apparatus may restore the backup copy to the original location of the local data (to immediately replace the local data) and/or to a temporary location accessible to the apparatus (such as via onboard storage) from which the backup copy may replace the local data at a later time (such as on-demand). | 2021-01-14 |
20210011816 | PREPARING CONTAINERIZED APPLICATIONS FOR BACKUP USING A BACKUP SERVICES CONTAINER IN A CONTAINER-ORCHESTRATION POD - A “backup services container” comprises “backup toolkits,” which include scripts for accessing containerized applications plus enabling utilities/environments for executing the scripts. The backup services container is added to Kubernetes pods comprising containerized applications without changing other pod containers. For maximum value and advantage, the backup services container is “over-equipped” with toolkits. The backup services container selects and applies a suitable backup toolkit to a containerized application to ready it for a pending backup. Interoperability with a proprietary data storage management system provides features that are not possible with third-party backup systems. Some embodiments include one or more components of the proprietary data storage management system within the illustrative backup services container. Some embodiments include one or more components of the proprietary data storage management system in a backup services pod configured in a Kubernetes node. All configurations and embodiments are suitable for cloud and/or non-cloud computing environments. | 2021-01-14 |
20210011817 | Virtual Machine Recovery Method and Virtual Machine Management Device - A virtual machine recovery method, where after receiving a virtual machine recovery command for recovering a to-be-recovered virtual machine, a virtual machine management device obtains configuration information of the to-be-recovered virtual machine from a cloud server. Then the virtual machine management device creates, according to the configuration information, a recovered virtual machine and a local storage. After basic system data is downloaded from the cloud server, the recovered virtual machine is started according to that data. When receiving an input/output (IO) request for accessing first data of the to-be-recovered virtual machine, the virtual machine management device downloads the first data from the cloud server to the local storage. | 2021-01-14 |
20210011818 | MEMORY SYSTEM AND OPERATING METHOD THEREOF - There are provided a memory system and an operating method thereof. A method for operating a memory system includes: performing a program operation on a first page of a first page group included in a first memory block and storing physical-logical address mapping information on the first page in a physical-logical address mapping information storing section; performing a program operation on a second page of the first page group included in the first memory block and storing physical-logical address mapping information on the second page in the physical-logical address mapping information storing section; and copying the physical-logical address mapping information on the first and second pages of the first page group, which are stored in the physical-logical address mapping information storing section, to a second memory block. | 2021-01-14 |
20210011819 | TECHNIQUES FOR MANAGING CONTEXT INFORMATION FOR A STORAGE DEVICE WHILE MAINTAINING RESPONSIVENESS - Disclosed are techniques for managing context information for data stored within a computing device. According to some embodiments, the method can include the steps of (1) loading, into a volatile memory of the computing device, the context information from a non-volatile memory of the computing device, where the context information is separated into a plurality of portions, and each portion of the plurality of portions is separated into a plurality of sub-portions, (2) writing transactions into a log stored within the non-volatile memory, and (3) each time a condition is satisfied: identifying a next sub-portion to be processed, where the next sub-portion is included in the plurality of sub-portions of a current portion being processed, identifying a portion of the context information that corresponds to the next sub-portion, converting the portion from a first format to a second format, and writing the portion into the non-volatile memory. | 2021-01-14 |
20210011820 | ELECTRONIC APPARATUS AND CONTROLLING METHOD THEREOF - An electronic apparatus is provided. The electronic apparatus communicates with an external display apparatus including plural display modules, and includes first and second communication interfaces and a processor. The first communication interface is connected to a first display module from among the display modules connected together in a daisy chain configuration. The second communication interface is connected to a second display module from among the display modules. The processor transmits control data to the first display module through the first communication interface, and based on identifying that an error has occurred in a reception of the control data in any of the display modules, controls the second communication interface to transmit the control data to the second display module. | 2021-01-14 |
20210011821 | REMOTE HEALTH MONITORING IN DATA REPLICATION ENVIRONMENTS - A method for more effectively utilizing computing resources in a data replication environment is disclosed. In one embodiment, such a method detects, at a primary system, activity occurring on the primary system. This activity is recorded in system logs located at the primary system. The method automatically mirrors the system logs from the primary system to a secondary system that is in a mirroring relationship with the primary system. The system logs are analyzed at the secondary system. In the event abnormal activity is detected in the system logs at the secondary system, the method automatically sends, from the secondary system to the primary system, one or more commands that are designed to address the abnormal activity. A corresponding system and computer program product are also disclosed. | 2021-01-14 |
20210011822 | PREDICTABLE SYNCHRONOUS DATA REPLICATION - The method, apparatus, and system disclosed herein relate to a faster and more predictable way to achieve synchronous phase from non-synchronous phase for synchronous data replication between a source volume and a destination volume. Consistency data and replication data are sent in parallel during a pre-synchronous phase to reestablish a synchronous phase of operation. Sequence identifiers and consistency sequence identifiers are used to determine whether to write consistency data to the destination volume, or to leave consistency data unwritten for blocks already updated with replication data during the process of reestablishing synchronization. | 2021-01-14 |
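The skip-or-write decision this abstract describes, where consistency data is left unwritten for blocks already updated with newer replication data, can be illustrated with a small sketch. All names and the sequence-identifier scheme below are hypothetical choices for illustration; the filing does not publish an implementation.

```python
class DestinationVolume:
    """Toy destination volume: per-block data plus the sequence
    identifier of the last write applied to each block."""
    def __init__(self):
        self.blocks = {}
        self.seq = {}  # block_id -> sequence id of last applied write

def apply_replication(dest, block_id, data, seq_id):
    # Replication writes carry the newest sequence identifiers and
    # are always applied.
    dest.blocks[block_id] = data
    dest.seq[block_id] = seq_id

def apply_consistency(dest, block_id, data, consistency_seq):
    # Consistency data is skipped for any block already updated with
    # replication data at or past the consistency sequence identifier.
    if dest.seq.get(block_id, -1) >= consistency_seq:
        return False  # newer replication data already landed; skip
    dest.blocks[block_id] = data
    dest.seq[block_id] = consistency_seq
    return True
```

Sending both streams in parallel is safe under this rule because a consistency write can never clobber a block that replication has already brought further forward.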
20210011823 | CONTINUOUS TESTING, INTEGRATION, AND DEPLOYMENT MANAGEMENT FOR EDGE COMPUTING - Various aspects of methods, systems, and use cases for testing, integration, and deployment of failure conditions in an edge computing environment are provided through use of perturbations. In an example, operations to implement controlled perturbations in an edge computing platform include: identifying at least one perturbation parameter available to be implemented with hardware components of an edge computing system that provides a service using the hardware components; determining values, which disrupt operation of the service, to implement the perturbation parameter among the hardware components; deploying the perturbation parameters to the hardware components, during operation of the service to process a computing workload, to cause perturbation effects on the service; collecting telemetry values associated with the hardware components, produced during operation of the service, that indicate the perturbation effects upon the operation of the service; and causing a computing operation to occur based on the collected telemetry values. | 2021-01-14 |
20210011824 | SYSTEM, APPARATUS AND METHODS FOR AUTOMATICALLY TESTING MOBILE DEVICES - Apparatus and methods for automatically testing mobile devices are disclosed according to various embodiments. In one example, a disclosed apparatus includes: a robot having a retention device into which the mobile device to be tested is positioned; a test computer having a processor and a non-transitory computer readable storage medium storing test software for testing the mobile device; and a user monitor electrically connected to the test computer and configured for providing a result of the testing of the mobile device. The mobile device is wirelessly connected to the test computer and has a test application installed thereon corresponding to the test software. The robot is configured for performing interaction and manipulation of the mobile device in cooperation with the test application and the test software during the testing. | 2021-01-14 |
20210011825 | SEAMLESS MULTI-CLOUD SDWAN DISASTER RECOVERY USING ORCHESTRATION PLANE - The present disclosure is directed to management of migration of SD-WAN solutions in a multi-cloud structure upon detection of a failover event. In one aspect, a method includes monitoring, using virtual bonds of a network orchestration component, clusters of virtual management components of multiple cloud networks, corresponding virtual management components of one of the multiple cloud networks implementing one or more services of a Software-Defined Wide Area Network (SD-WAN) solution; detecting, using the virtual bonds, a failover event at the one of the multiple cloud networks; and identifying, by the virtual bonds, a new destination cloud network to which to migrate the one or more services of the SD-WAN solution from a source cloud network at which the failover event is detected. | 2021-01-14 |
20210011826 | Flattened Historical Material Extracts - A system to generate historical usage data of a computing resource includes a module configured to use at least one processor of the system to receive a query including a target and a time window and to retrieve historical file system data from backups of computing resources, where the historical file system data includes a file system object that was processed by the target during the time window. The module is further configured to use at least one processor of the system to generate historical usage data by converting the historical file system data to a temporally flat format that preserves a provenance of the file system object and to store the historical usage data in a hierarchical data structure. The module is additionally configured to use at least one processor of the system to provide the hierarchical data structure in a response to the received query. | 2021-01-14 |
20210011827 | INFORMATION PROCESSING APPARATUS - An information processing apparatus includes a thread scheduler that allocates a process to multiple process execution hardware that process a program having a graph structure. The information processing apparatus includes: a code reader that reads a diagnostic code stored in advance; and an allocator that causes the multiple process execution hardware to execute the diagnostic code so as to complete diagnosis within a mean time to failure. | 2021-01-14 |
20210011828 | SYSTEMS AND METHODS FOR DYNAMICALLY SIMULATING LOAD TO AN APPLICATION UNDER TEST - In some embodiments, apparatuses and methods are provided herein useful to simulate a load to an application under test. In some embodiments, there is provided a system including one or more control circuits configured to implement a plurality of agent test virtual machines (VMs) each cooperatively configured to simulate a load; a database; and a main control circuit configured to execute in parallel a load testing tool associated with each agent test VM. The main control circuit is configured to send an execute signal; change a status of at least one of the free agent test VMs to a running agent test VM; in response to a determination to increase an overall simulated load, send the execute signal to another one of the free agent test VMs; and in response to a determination to decrease the overall simulated load, send a stop signal to one of the running agent test VMs. | 2021-01-14 |
20210011829 | ELECTRONIC DEVICE AND ON-DEVICE METHOD FOR ENHANCING USER EXPERIENCE IN ELECTRONIC DEVICE - Embodiments herein provide an on-device method for enhancing user experience in an electronic device. The method includes monitoring a plurality of parameters associated with an operation of the electronic device. The method includes identifying an anomaly associated with the electronic device based on the plurality of parameters associated with the operation of the electronic device and identifying a class of anomaly to which the anomaly associated with the electronic device belongs using a first on-device model. Further, the method includes presenting at least one question associated with the identified class of anomaly to a user of the electronic device using a second on-device model and receiving at least one user input for the at least one question. Furthermore, the method includes performing at least one action for enhancing the user experience based on the at least one user input for the at least one question. | 2021-01-14 |
20210011830 | PREDICTIVE STORAGE MANAGEMENT SYSTEM - A predictive storage management system includes a storage system having storage devices, and a predictive storage management device coupled to the storage system via a network. The predictive storage management device includes a statistical time-series storage device usage sub-engine that retrieves first storage device usage data from a first storage device in the storage system and uses it to generate a first storage device usage trend model. A machine-learning storage system usage sub-engine in the predictive storage management device retrieves storage system implementation information from the storage system and uses it to generate a storage system implementation model. A storage management sub-engine in the predictive storage management device analyzes the first storage device usage trend model and the storage system implementation model to predict future usage of the first storage device and, based on that predicted future usage, performs a management action associated with the first storage device. | 2021-01-14 |
20210011831 | INGESTING DATA TO A PROCESSING PLATFORM - Example embodiments describe a method performed by one or more processors. The method may comprise sending over a network, to a software component installed at a remote data source, a request to download data stored at, or in association with, the remote data source, the software component being configured to access performance data at said remote data source. In response to sending the request, the method may comprise receiving from the software component at least an indication of the performance data accessed by said software component, and determining whether to proceed with the data download request or to modify the data download request based on the received performance data. | 2021-01-14 |
20210011832 | LOG ANALYSIS SYSTEM, LOG ANALYSIS METHOD, AND STORAGE MEDIUM - Provided are a log analysis system, a log analysis method, and a storage medium that can generate information indicating a state of a system without requiring the state of the target system to be manually defined in advance. The log analysis system includes: a feature extraction unit that extracts at least one feature of a text log file including a plurality of text log messages corresponding to information in which an event in a target system and a time when the event occurred are associated with each other; and an index generation unit that, based on the feature and numerical data including numerical information related to the target system and a time when the numerical information was stored, generates an index indicating a state of the target system. | 2021-01-14 |
20210011833 | Handling Trace Data for Jumps in Program Flow - A processor supervisory unit for monitoring the program flow executed by a processor, the supervisory unit being arranged to store a set of values representing locations to which the program flow is expected to return after jumps in the program flow, the unit being capable of: in a first mode, on detecting a jump in the program flow to store a location value representing a location to which the program flow is expected to return from that jump; and in a second mode, on detecting a jump in the program flow to increment a counter associated with a location value representing a location to which the program flow is expected to return from that jump. | 2021-01-14 |
20210011834 | Service Upgrade Management Method, Apparatus, And Storage Medium - Example service upgrade management methods, apparatuses, and storage medium are provided. One example method includes creating a gray release policy and a gray traffic distribution rule. A gray traffic distribution status can then be controlled. The gray release policy, the gray traffic distribution rule, and the gray traffic distribution status can then be delivered to a gray traffic distribution device, where the gray release policy, the gray traffic distribution rule, and the gray traffic distribution status are used by the gray traffic distribution device to control a flow direction of a service message. | 2021-01-14 |
20210011835 | SYSTEM AND METHOD FOR DEBUGGING SOURCE CODE OF AN APPLICATION - Disclosed are a method and system for debugging source code of an application. The method includes establishing a WebSocket connection with the Chrome DevTools Protocol, using a unique Uniform Resource Locator (URL) created by Node.js. The Chrome DevTools Protocol is a V8 inspector. The method further includes listening for asynchronous messages from the V8 inspector. The asynchronous messages are defined by the V8 inspector protocol. Responses received from the V8 inspector protocol are processed, and debugger operations selected by a user are translated into operations known to the V8 inspector. The source code of the application is executed on Node.js utilizing launch configuration and data required for starting a Node.js process, with responsibility for terminating the Node.js process when the user ends the debug session. | 2021-01-14 |
20210011836 | DETERMINE ERRORS AND ANOMALIES IN A FLOW PROCESS INTEGRATION INSTANCE THAT INTEGRATES SERVICES - Implementations include a method and system in which first information is collected during the processing of a flow process integration instance in a known environment while a stress test is applied to a first service, and the processing is recorded as a data recording. The data recording is analyzed to determine a nodal structure of the flow process integration instance. An updated version of the data recording with a second service that is modified is received. The updated version of the data recording is processed in the known environment. Second information pertaining to errors and anomalies associated with the updated version is collected while traversing the nodal structure during the processing of the updated version of the data recording in the known environment. The first information is compared with the second information to determine whether the errors and the anomalies are within an error threshold. | 2021-01-14 |
20210011837 | SYSTEMS AND METHODS FOR FUZZING WITH FEEDBACK - A system can include one or more processors and computer-readable instructions that when executed by the one or more processors, cause the one or more processors to provide a first test signal to an electronic device, monitor at least one parameter of the electronic device during a time period subsequent to the test signal being provided to the electronic device, determine, based on the at least one parameter, a detected response of the electronic device to the first test signal, determine, using a response model, an expected response of the electronic device to the first test signal, and provide a second test signal based on the detected response and the expected response to the electronic device. The system can include a communications circuit that provides the test signal and receives at least some feedback indicating the parameters, and sensors that receive at least some feedback indicating the parameters. | 2021-01-14 |
20210011838 | PARTIAL-RESULTS POST-SILICON HARDWARE EXERCISER - A method for testing an integrated circuit, comprising: accessing a database associated with a test template, wherein said test template is configured to test a selected function of the integrated circuit; storing, in said database, data corresponding to at least partial predicted results of one or more random instruction sequences generated based on said test template; generating, by an automated test generation tool, a random instruction sequence based on said test template; executing said instruction sequence by a hardware exerciser, in the integrated circuit; and comparing results of said instruction sequence with said at least partial predicted results, to verify a function of said integrated circuit. | 2021-01-14 |
20210011839 | Automation of Enterprise Software Inventory and Testing - Disclosed herein are system, method, and computer program product embodiments for automating component management in enterprise applications. An embodiment operates by receiving metadata associated with the enterprise application implementation and storing an inventory including at least a portion of the metadata. The system then determines one or more component dependencies of the enterprise application implementation based on the inventory and provides one or more recommendations for component installation or deletion based on the inventory and the one or more component dependencies. The system also generates one or more testcases based on the inventory and the one or more component dependencies. | 2021-01-14 |
20210011840 | SOFTWARE TESTING METHOD, SYSTEM, APPARATUS, DEVICE, MEDIUM, AND COMPUTER PROGRAM PRODUCT - The present disclosure provides a software testing method, system, apparatus, device, medium, and computer program product. The method includes: acquiring, by an automated compilation and deployment platform, a first source code and first code information corresponding to the first source code; compiling and deploying, by the automated compilation and deployment platform, the first source code to obtain first deployment information of the first source code; creating a test-version software according to the first deployment information, and determining a first test case corresponding to the first code information according to a preset correspondence between code information and test cases; performing, by an automated testing platform, a functional test for the test-version software based on the first test case to obtain a test result. In the embodiments of the present disclosure, the correspondence between code information and test cases may be established in advance and preset. | 2021-01-14 |
20210011841 | SYSTEMS AND METHODS FOR MOBILE APPLICATION ACCESSIBILITY TESTING - Systems and methods for mobile application accessibility testing are disclosed. According to one embodiment, in a test bench comprising at least one computer processor, a method for mobile application accessibility testing may include: (1) identifying an accessibility checkpoint for testing; (2) generating a test command for the accessibility checkpoint; (3) communicating the test command to a mobile electronic device, the mobile electronic device having a mobile application to be tested, an instrument application, and a probe application in a memory thereof; (4) executing the test command on the mobile application to be tested using the instrument application; and (5) collecting results of the execution using the probe application. | 2021-01-14 |
20210011842 | CONTROLLER AND OPERATION METHOD THEREOF - A controller configured to control memory chips in communication with the controller is provided. The controller comprises: a host interface configured to receive a request from a host; an address mapper configured to, upon receipt of both a turbo write request for writing data to one or more high-speed storage blocks at a high speed and a normal write request for writing data to one or more storage blocks at a lower speed, allocate a first plane including a memory block configured to perform write operations in a single level cell mode at the high speed to a first plane group in order to respond to the turbo write request, and allocate a second plane to a second plane group at the lower speed in order to respond to the normal write request; and a memory interface configured to control the memory chips. | 2021-01-14 |
20210011843 | MEMORY SYSTEM AND OPERATION METHOD THEREOF - A memory system may include: a memory device; and a controller. When at least one data group is received, the data group including a plurality of data which is required to be collectively processed, the controller reads preceding logical-to-physical (L2P) map information for the data group from a first table and stores the read L2P map information in a second table before reception of the plurality of the data of the data group is committed, and the controller stores the plurality of the data in the memory device, and the controller updates the L2P map information for the data group that is stored in the first table in response to the storing of the plurality of the data in the memory device. | 2021-01-14 |
20210011844 | MEMORY CONTROLLER AND OPERATING METHOD THEREOF - A memory controller for performing garbage collection without moving data of a valid page, controls a memory device including a plurality of memory blocks in which data is stored. The memory controller includes a victim block setting circuit for selecting a victim block among the memory blocks by receiving memory block information representing whether a valid page and an invalid page are included in each of the plurality of memory blocks, when garbage collection is performed, and a sub-block controller for outputting a sub-block read command for determining valid pages included in each of sub-blocks within the victim block, by dividing the victim block into the sub-blocks, and outputting a sub-block erase command for selectively erasing a part of the sub-blocks included in the victim block, by receiving sub-block information corresponding to the sub-block read command from the memory device. | 2021-01-14 |
20210011845 | CAPTURING TIME-VARYING STORAGE OF DATA IN MEMORY DEVICE FOR DATA RECOVERY PURPOSES - A memory device (or memory sub-system) includes one or more memory components having multiple blocks, the multiple blocks containing pages of data. A processing device is coupled to the one or more memory components. The processing device executes firmware to: track write timestamps of the pages of data that have been marked as invalid; retain a storage state stored for each page marked as invalid, wherein invalid data of the marked pages remains accessible via the storage states; in response to a write timestamp of a page being beyond a retention time window, mark the page as expired, indicating that the page is an expired page; and reclaim the expired page for storage of new data during a garbage collection operation. | 2021-01-14 |
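The valid, invalid, expired lifecycle described in this abstract can be modeled with a toy sketch. The state names, `RETENTION_WINDOW` constant, and function names below are assumptions for illustration, not the patented firmware.

```python
import time

RETENTION_WINDOW = 7 * 24 * 3600  # assumed retention window, in seconds

class Page:
    def __init__(self, data, write_ts):
        self.data = data
        self.write_ts = write_ts
        self.state = "valid"   # lifecycle: valid -> invalid -> expired

def invalidate(page):
    # Invalidated data stays accessible for recovery until it expires.
    page.state = "invalid"

def mark_expired(pages, now=None):
    # Pages whose write timestamp fell outside the retention window
    # become expired and eligible for reclamation.
    now = time.time() if now is None else now
    for p in pages:
        if p.state == "invalid" and now - p.write_ts > RETENTION_WINDOW:
            p.state = "expired"

def garbage_collect(pages):
    # Only expired pages are reclaimed; invalid-but-retained pages
    # survive so their old contents can still be recovered.
    return [p for p in pages if p.state != "expired"]
```

The key difference from conventional garbage collection is the extra invalid-but-retained state: invalidated pages are not immediately reclaimable, so old data remains recoverable for the length of the window.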
20210011846 | SYSTEMS AND METHODS FOR READING AND WRITING SPARSE DATA IN A NEURAL NETWORK ACCELERATOR - Disclosed herein includes a system, a method, and a device for reading and writing sparse data in a neural network accelerator. A plurality of slices can be established to access a memory having an access size of a data word. A first slice can be configured to access a first side of the data word in memory. Circuitry can access a mask identifying byte positions within the data word having non-zero values. The circuitry can modify the data word to have non-zero byte values stored starting at an end of the first side, and any zero byte values stored in a remainder of the data word. A determination can be made whether a number of non-zero byte values is less than or equal to a first access size of the first slice. The circuitry can write the modified data word to the memory via at least the first slice. | 2021-01-14 |
20210011847 | OPTIMIZED SORTING OF VARIABLE-LENGTH RECORDS - Techniques are disclosed for sorting variable-length records using an optimized amount of memory while maintaining good locality of reference. The amount of memory required for sorting the variable-length records is optimized by reusing some of the memory used for storing the variable-length records being sorted. Pairs of input runs storing variable-length records may be merged into a merged run that contains the records in sorted order by incrementally scanning, sorting, and copying the records from the two input runs being merged into memory pages of the merged run. When all the records of a memory page of an input run have been processed or copied to the merged run, that memory page can be emptied and released to a cache of empty memory pages. Memory pages available from the cache of empty memory pages can then be used for generating the merged run. | 2021-01-14 |
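The page-reuse idea in this abstract, releasing fully consumed input pages to a cache of empty pages and drawing the merged run's output pages from that cache, can be sketched as follows. The page capacity and all names are hypothetical; this is an illustration of the technique, not the patented implementation.

```python
import heapq

PAGE_CAPACITY = 4  # assumed number of records per memory page

def merge_runs(run_a, run_b):
    """Merge two sorted runs (each a list of pages, each page a list
    of records), reusing emptied input pages for the merged output."""
    free_pages = []        # cache of released (emptied) memory pages
    out_pages, out = [], None

    def records_of(run):
        for page in run:
            for rec in page:
                yield rec
            page.clear()             # page fully consumed ...
            free_pages.append(page)  # ... release it to the cache

    for rec in heapq.merge(records_of(run_a), records_of(run_b)):
        if out is None or len(out) == PAGE_CAPACITY:
            # Prefer a recycled page from the cache; allocate only
            # when the cache is empty.
            out = free_pages.pop() if free_pages else []
            out_pages.append(out)
        out.append(rec)
    return out_pages
```

Because a page is released only after its last record has been copied out, the merged run's memory demand stays close to the size of the input runs rather than doubling, which is the memory optimization the abstract describes.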
20210011848 | DATA PROCESSING FOR ALLOCATING MEMORY TO APPLICATION CONTAINERS - A system and related method for managing memory in data processing comprises allocating each of a plurality of application containers a respective portion of a memory communicatively coupled to a plurality of processing units. The method further comprises allocating each of the plurality of application containers a respective group of the plurality of processing units and allocating, to each of the plurality of application containers, nursery and tenured heap spaces in the memory. The method then comprises performing, responsive to a request from an application container, garbage collection from the nursery and tenured heap spaces allocated to the application container. | 2021-01-14 |
20210011849 | PROCESSOR CLUSTER ADDRESS GENERATION - Techniques for data manipulation using processor cluster address generation are disclosed. One or more processor clusters capable of executing software-initiated work requests are accessed. A plurality of dimensions from a tensor is flattened into a single dimension. A work request address field is parsed, where the address field contains unique address space descriptors for each of the plurality of dimensions, along with a common address space descriptor. A direct memory access (DMA) engine coupled to the one or more processor clusters is configured. Addresses are generated based on the unique address space descriptors and the common address space descriptor. The plurality of dimensions can be summed to generate a single address. Memory is accessed using two or more of the addresses that were generated. The addresses are used to enable DMA access. | 2021-01-14 |
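Flattening a tensor's dimensions into a single address, as this abstract describes, commonly reduces to a base address plus a sum of index-times-stride terms. A minimal sketch follows; the names and the row-major stride convention are assumptions, since the abstract does not fix a layout.

```python
def flatten_address(base, indices, strides, element_size=1):
    """Collapse a multi-dimensional tensor index into one linear
    address: base + sum(index_i * stride_i) * element_size."""
    assert len(indices) == len(strides)
    offset = sum(i * s for i, s in zip(indices, strides))
    return base + offset * element_size

def row_major_strides(shape):
    # Row-major (C-order) strides, in elements:
    # the last dimension varies fastest.
    strides, acc = [], 1
    for dim in reversed(shape):
        strides.append(acc)
        acc *= dim
    return tuple(reversed(strides))
```

In hardware terms, the per-dimension strides play the role of the unique address space descriptors and the base address that of the common descriptor, with the summation producing the single DMA address.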
20210011850 | DETECTING AND CORRECTING CACHE MEMORY LEAKS - Embodiments of the present disclosure relate to an apparatus comprising a memory and at least one processor. The at least one processor is configured to monitor one or more processing threads of a storage device. Each of the one or more processing threads includes two or more cache states. The at least one processor also updates one or more data structures to indicate a subject cache state of each of the one or more processing threads and detect an event that disrupts at least one of the one or more processing threads. Further, the processor determines a cache state of the at least one of the one or more processing threads contemporaneous to the disruption event using the one or more data structures and performs a recovery process for the disrupted at least one of the one or more processing threads. | 2021-01-14 |
20210011851 | DETERMINING PRE-FETCHING PER STORAGE UNIT ON A STORAGE SYSTEM - A pre-fetching technique determines what data, if any, to pre-fetch on a per-logical storage unit basis. For a given logical storage unit, what, if any, data to prefetch is based at least in part on a collective sequential proximity of the most recently requested pages of the logical storage unit. Determining what, if any, data to pre-fetch for a logical storage unit may include determining a value for a proximity metric indicative of the collective sequential proximity of the most recently requested pages, comparing the value to a predetermined proximity threshold value, and determining whether to pre-fetch one or more pages of the logical storage unit based on the result of the comparison. A data structure may be maintained that includes most recently requested pages for one or more logical storage units. This data structure may be a table. | 2021-01-14 |
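The per-logical-storage-unit decision in 20210011851 above — compute a proximity metric over recently requested pages, compare it to a threshold — can be sketched as below. The specific metric (mean gap between sorted page numbers) is an assumption; the application does not specify a formula.

```python
# Hedged sketch of a per-LSU pre-fetch decision: small average gap
# between the most recently requested pages means the accesses are
# collectively sequential, so pre-fetching is likely to pay off.

def should_prefetch(recent_pages, threshold):
    """Return True when the value of the (assumed) proximity metric is
    within the predetermined proximity threshold."""
    if len(recent_pages) < 2:
        return False
    pages = sorted(recent_pages)
    gaps = [b - a for a, b in zip(pages, pages[1:])]
    return sum(gaps) / len(gaps) <= threshold
```

A near-sequential history such as pages 10, 11, 12, 14 passes a threshold of 2; a scattered history such as 10, 100, 500 does not.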
20210011852 | DATA PLACEMENT IN WRITE CACHE ARCHITECTURE SUPPORTING READ HEAT DATA SEPARATION - A computer-implemented method, according to one approach, includes: receiving write requests, accumulating the write requests in a destage buffer, and determining a current read heat value of each logical page which corresponds to the write requests. Each of the write requests is assigned to a respective write queue based on the current read heat value of each logical page which corresponds to the write requests. Moreover, each of the write queues corresponds to a different page stripe which includes physical pages, the physical pages included in each of the respective page stripes being of a same type. Furthermore, data in the write requests is destaged from the write queues to their respective page stripes. Other systems, methods, and computer program products are described in additional approaches. | 2021-01-14 |
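The queue-assignment step in 20210011852 above can be sketched as a simple bucketing of pages by read heat; the number of queues and the bucketing formula are assumptions, not from the application.

```python
# Sketch of assigning incoming writes to per-heat write queues, so that
# pages with similar read heat are destaged to the same page stripe.
from collections import defaultdict

def assign_writes(write_requests, read_heat, num_queues=4):
    """Bucket each logical page into one of num_queues write queues by
    its current read heat value (assumed simple linear bucketing)."""
    queues = defaultdict(list)
    max_heat = max(read_heat.values(), default=0) or 1
    for page in write_requests:
        heat = read_heat.get(page, 0)
        q = min(num_queues - 1, heat * num_queues // (max_heat + 1))
        queues[q].append(page)
    return dict(queues)
```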
20210011853 | GRAPHICS MEMORY EXTENDED WITH NONVOLATILE MEMORY - An embodiment of an electronic processing system may include an application processor, system memory communicatively coupled to the application processor, a graphics processor communicatively coupled to the application processor, graphics memory communicatively coupled to the graphics processor, and persistent storage media communicatively coupled to the application processor and the graphics processor to store one or more graphics assets, wherein the graphics processor is to access the one or more graphics assets mapped from the persistent storage media. The persistent storage media may include a low latency, high capacity, and byte-addressable nonvolatile memory. The one or more graphics assets may include one or more of a mega-texture and terrain data. Other embodiments are disclosed and claimed. | 2021-01-14 |
20210011854 | DISTRIBUTED STORAGE ADDRESSING - A method of applying an address space to data storage in a non-volatile solid-state storage is provided. The method includes receiving a plurality of portions of user data for storage in the non-volatile solid-state storage and assigning to each successive one of the plurality of portions of user data one of a plurality of sequential, nonrepeating addresses of an address space. The address range of the address space exceeds a maximum number of addresses expected to be applied during a lifespan of the non-volatile solid-state storage. The method includes writing each of the plurality of portions of user data to the non-volatile solid-state storage such that each of the plurality of portions of user data is identified and locatable for reading via the one of the plurality of sequential, nonrepeating addresses of the address space. | 2021-01-14 |
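The address-assignment scheme in 20210011854 above — each incoming portion of user data gets the next address from a sequential, non-repeating sequence — can be sketched as below; the counter and the address-to-portion index are illustrative assumptions.

```python
# Minimal sketch of assigning sequential, non-repeating addresses to
# successive portions of user data, so each portion is identified and
# locatable for reading via its unique address.
import itertools

class AddressAllocator:
    def __init__(self):
        self._next = itertools.count()  # monotonically increasing, never reused
        self.index = {}                 # address -> data portion

    def write(self, portion):
        """Assign the next sequential address and record the portion."""
        addr = next(self._next)
        self.index[addr] = portion
        return addr
```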
20210011855 | ZERO COPY METHOD THAT CAN SPAN MULTIPLE ADDRESS SPACES FOR DATA PATH APPLICATIONS - A system and method for transferring data between a user space buffer in the address space of a user space process running on a virtual machine and a storage system are described. The user space buffer is represented as a file with a file descriptor. In the method, a file system proxy receives a request for I/O read or write from the user space process without copying data to be transferred. The file system proxy then sends the request to a file system server without copying data to be transferred. The file system server then requests that the storage system perform the requested I/O directly between the storage system and the user space buffer, the only transfer of data being between the storage system and the user space buffer. | 2021-01-14 |
20210011856 | Method and Apparatus for Enhancing Isolation of User Space from Kernel Space - A method and an apparatus for enhancing isolation of user space from kernel space, to divide an extended page table into a kernel-mode extended page table and a user-mode extended page table, such that user-mode code cannot access some or all content in the kernel space, and/or kernel-mode code cannot access some content in the user space, thereby enhancing isolation of the user space from the kernel space and preventing content leakage of the kernel space. | 2021-01-14 |
20210011857 | METHOD AND APPARATUS FOR BUFFERING DATA BLOCKS, COMPUTER DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM - A method and apparatus for caching a data block are provided. The method includes: obtaining, from a terminal, an access request for requesting access to a first data block; determining that the first data block is missed in a cache space of a storage system; detecting whether a second data block satisfies a lazy condition, the second data block being a candidate elimination block in the cache space and the lazy condition being a condition for determining whether to delay replacing the second data block from the cache space according to a re-access probability; determining that the second data block satisfies the lazy condition; and accessing the first data block from a storage space of the storage system and skipping replacing the second data block from the cache space. | 2021-01-14 |
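The miss-handling path in 20210011857 above can be sketched as below: on a cache miss, the candidate victim is kept when it satisfies the lazy condition. The probability estimator and threshold are assumptions; the application only says the condition depends on a re-access probability.

```python
# Sketch of lazy eviction: when the candidate elimination block is
# likely to be re-accessed, serve the missed block straight from
# storage and skip replacing the victim in the cache.

def handle_miss(cache, victim, reaccess_prob, lazy_threshold=0.5):
    """Return 'bypass' (keep victim cached, read the missed block from
    storage) when the victim satisfies the lazy condition, else
    'replace' it as usual."""
    if victim in cache and reaccess_prob(victim) >= lazy_threshold:
        return "bypass"
    return "replace"
```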
20210011858 | Memory Space Protection - Executable memory space is protected by receiving, from a process, a request to configure a portion of memory with a memory protection attribute that allows the process to perform at least one memory operation on the portion of the memory. Thereafter, the request is responded to with a grant, configuring the portion of memory with a different memory protection attribute than the requested memory protection attribute. The different memory protection attribute restricts the at least one memory operation from being performed by the process on the portion of the memory. In addition, it is detected when the process attempts, in accordance with the grant, the at least one memory operation at the configured portion of memory. Related systems and articles of manufacture, including computer program products, are also disclosed. | 2021-01-14 |
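The grant-but-restrict behavior in 20210011858 above can be sketched as below; the attribute names and the violation log are hypothetical illustrations of the detection step, not details from the application.

```python
# Sketch: the process asks for a permissive attribute, receives an
# apparent grant, but the portion of memory is actually configured with
# a more restrictive attribute; operations attempted under the grant
# that the real attribute forbids are detected.

class MemoryGuard:
    def __init__(self):
        self.granted = {}     # region -> attribute actually configured
        self.violations = []  # detected restricted operations

    def request(self, region, attr):
        """Respond with a grant while silently configuring a different,
        more restrictive attribute (assumed read-only for read-write)."""
        effective = "read-only" if attr == "read-write" else attr
        self.granted[region] = effective
        return attr  # the process believes its request was honored

    def attempt(self, region, op):
        """Detect an operation the configured attribute restricts."""
        if op == "write" and self.granted.get(region) == "read-only":
            self.violations.append((region, op))
            return False
        return True
```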
20210011859 | METHOD AND COMPUTER PROGRAM PRODUCT AND APPARATUS FOR CONTROLLING DATA ACCESS OF A FLASH MEMORY DEVICE - The invention introduces a method for controlling data access to a flash memory, performed by a processing unit, including steps of: obtaining a logical address associated with a data read operation; determining whether a group table corresponding to the logical address is queued in a locked queue, or a hot zone of a swap queue; and prohibiting content of the locked queue and the swap queue from being modified when the group table corresponding to the logical address is queued in the locked queue, or the hot zone of the swap queue. | 2021-01-14 |
20210011860 | DATA STORAGE DEVICE, DATA PROCESSING SYSTEM, AND ACCELERATION DEVICE THEREFOR - A data processing system includes a host device and a data storage device. The host device is configured to select a speed mode of a memory bandwidth based on a network model, or a batch size, or both. The data storage device includes an accelerator configured to change a structure of a processing element (PE) array by controlling transmission paths of first input data and second input data that are input to the PE array based on the speed mode of the memory bandwidth. Computing power and memory power of the accelerator are adjusted according to the selection of the speed mode. | 2021-01-14 |
20210011861 | APPARATUS AND METHOD AND COMPUTER PROGRAM PRODUCT FOR EXECUTING HOST INPUT-OUTPUT COMMANDS - The invention introduces a method for executing host input-output (IO) commands, performed by a processing unit of a device side when loading and executing program code of a first layer, at least including: receiving a host IO command from a host side through a frontend interface; generating a slot bit table (SBT) including an entry according to the host IO command; creating a thread of a second layer; and sending addresses of callback functions and the SBT to the thread of the second layer, thereby enabling the thread of the second layer to call the callback functions according to the IO operation of the SBT for driving the frontend interface to interact with the host side to transmit user data read from a storage unit to the host side, or receive user data to be programmed into the storage unit from the host side. | 2021-01-14 |
20210011862 | APPARATUS AND METHOD AND COMPUTER PROGRAM PRODUCT FOR EXECUTING HOST INPUT-OUTPUT COMMANDS - The invention introduces a method for executing host input-output (IO) commands, performed by a processing unit of a device side, at least including: in response to different types of host IO commands, using multiple stages of a generic framework to drive a frontend interface to interact with a host side for transmitting user data read from a storage unit to the host side, and receiving user data to be programmed into the storage unit from the host side. | 2021-01-14 |
20210011863 | NON-VOLATILE MEMORY BASED PROCESSORS AND DATAFLOW TECHNIQUES - A monolithic integrated circuit (IC) including one or more compute circuitry, one or more non-volatile memory circuits, one or more communication channels and one or more communication interfaces. The one or more communication channels can communicatively couple the one or more compute circuitry, the one or more non-volatile memory circuits and the one or more communication interfaces together. The one or more communication interfaces can communicatively couple one or more circuits of the monolithic integrated circuit to one or more circuits external to the monolithic integrated circuit. | 2021-01-14 |
20210011864 | SYSTEM, APPARATUS AND METHODS FOR DYNAMICALLY PROVIDING COHERENT MEMORY DOMAINS - In one embodiment, an apparatus includes: a table to store a plurality of entries, each entry to identify a memory domain of a system and a coherency status of the memory domain; and a control circuit coupled to the table. The control circuit may be configured to receive a request to change a coherency status of a first memory domain of the system, and dynamically update a first entry of the table for the first memory domain to change the coherency status between a coherent memory domain and a non-coherent memory domain, in response to the request. Other embodiments are described and claimed. | 2021-01-14 |
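The table-and-control-circuit arrangement in 20210011864 above can be sketched as below; the class and status names are illustrative assumptions.

```python
# Minimal sketch of a dynamically updatable coherency table: each entry
# maps a memory domain to its coherency status, and a control path
# flips the status in response to a change request.

class CoherencyTable:
    COHERENT, NON_COHERENT = "coherent", "non-coherent"

    def __init__(self, domains):
        # One entry per memory domain; assume all start coherent.
        self.entries = {d: self.COHERENT for d in domains}

    def request_change(self, domain, status):
        """Dynamically update the entry for a memory domain in response
        to a coherency-status change request."""
        if domain not in self.entries:
            raise KeyError(domain)
        self.entries[domain] = status
```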
20210011865 | METHOD AND APPARATUS FOR SYMBOL DETECTION - Apparatus and method for symbol detection are disclosed. The solution comprises obtaining ( | 2021-01-14 |
20210011866 | PROTOCOL INCLUDING TIMING CALIBRATION BETWEEN MEMORY REQUEST AND DATA TRANSFER - The described embodiments provide a system for controlling an integrated circuit memory device by a memory controller. During operation, the system sends a memory-access request from the memory controller to the memory device using a first link. After sending the memory-access request, the memory controller sends to the memory device a command that specifies performing a timing-calibration operation for a second link. The system subsequently transfers data associated with the memory-access request using the second link, wherein the timing-calibration operation occurs between sending the memory-access request and transferring the data associated with the memory-access request. | 2021-01-14 |
20210011867 | COORDINATING MEMORY OPERATIONS USING MEMORY-DEVICE-GENERATED REFERENCE SIGNALS - A memory system includes a memory controller coupled to multiple memory devices. Each memory device includes an oscillator that generates an internal reference signal that oscillates at a frequency that is a function of physical device structures within the memory device. The frequencies of the internal reference signals are thus device specific. Each memory device develops a shared reference signal from its internal reference signal and communicates the shared reference signal to the common memory controller. The memory controller uses the shared reference signals to recover device-specific frequency information from each memory device, and then communicates with each memory device at a frequency compatible with the corresponding internal reference signal. | 2021-01-14 |
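The frequency-recovery step in 20210011867 above — the controller derives each device's oscillator frequency from its shared reference signal — can be sketched as below, with assumed edge timestamps standing in for the observed reference signal; the averaging method is an assumption.

```python
# Sketch: recover a device-specific frequency from timestamps of
# successive edges of its shared reference signal, so the controller can
# communicate at a frequency compatible with that device's oscillator.

def recover_frequency(edge_timestamps):
    """Estimate frequency (Hz) as the reciprocal of the mean period
    between successive edge timestamps (seconds)."""
    periods = [b - a for a, b in zip(edge_timestamps, edge_timestamps[1:])]
    return 1.0 / (sum(periods) / len(periods))
```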
20210011868 | APPARATUSES AND METHODS INCLUDING MEMORY COMMANDS FOR SEMICONDUCTOR MEMORIES - Apparatuses and methods including memory commands for semiconductor memories are described. A controller provides a memory system with memory commands to access memory. The commands are decoded to provide internal signals and commands for performing operations, such as operations to access the memory array. The memory commands provided for accessing memory may include timing command and access commands. Examples of access commands include a read command and a write command. Timing commands may be used to control the timing of various operations, for example, for a corresponding access command. The timing commands may include opcodes that set various modes of operation during an associated access operation for an access command. | 2021-01-14 |
20210011869 | SEMICONDUCTOR DEVICES INCLUDING COMMAND PRIORITY POLICY MANAGEMENT AND RELATED SYSTEMS - Provided are a semiconductor device and a semiconductor system. A semiconductor device can include a command priority policy manager circuit which generates command priority policy information including a command priority compliance policy for a command directed to a device. A host interface circuit can be coupled to the command priority policy manager circuit to receive the command priority policy information from the command priority policy manager circuit, where the host interface circuit is operable to transmit the command priority policy information via an electrical interface to the device. | 2021-01-14 |
20210011870 | NETWORK-ON-CHIP FOR NEUROLOGICAL DATA - The embodiments disclosed herein relate to chips used to receive and process neurological events in brain matter as captured by electrodes. Such chips may include an array of amplifiers and electrodes to receive neurological voltage signals, the chip including configuration circuitry in communication with the array of amplifiers and a controller. The configuration circuitry is configured to receive program instructions, set a voltage threshold for the amplifiers, and instruct the controller to pass on signals from only specific rows and columns of amplifiers. The controller, in communication with the array of amplifiers, is configured to packetize the neurological voltage signals into data packets. | 2021-01-14 |
20210011871 | METHOD FOR CONTROLLING COMMANDS SUITABLE TO BE PROCESSED BY A PERIPHERAL SUCH AS AN ACTUATOR - Method for controlling commands suitable to be processed by a peripheral ( | 2021-01-14 |
20210011872 | MULTICORE BUS ARCHITECTURE WITH NON-BLOCKING HIGH PERFORMANCE TRANSACTION CREDIT SYSTEM - This invention is a bus communication protocol. A master device stores bus credits. The master device may transmit a bus transaction only if it holds a sufficient number and type of bus credits. Upon transmission, the master device decrements the number of stored bus credits. The bus credits correspond to resources on a slave device for receiving bus transactions. The slave device must receive the bus transaction if accompanied by the proper credits. The slave device services the transaction. The slave device then transmits a credit return. The master device adds the corresponding number and types of credits to the stored amount. The slave device is ready to accept another bus transaction and the master device is re-enabled to initiate the bus transaction. In many types of interactions a bus agent may act as both master and slave depending upon the state of the process. | 2021-01-14 |
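The credit exchange in 20210011872 above — decrement on send, block at zero, increment on credit return — can be sketched as below; the class names and single credit type are simplifying assumptions (the protocol distinguishes credit types).

```python
# Sketch of the non-blocking credit protocol: the master transmits only
# while holding credits; the slave must accept a credited transaction
# and returns the credit once the transaction is serviced.

class CreditMaster:
    def __init__(self, initial_credits):
        self.credits = initial_credits

    def try_send(self, slave, txn):
        """Transmit only with a credit in hand, decrementing on send."""
        if self.credits == 0:
            return False
        self.credits -= 1
        slave.receive(txn, on_done=self._credit_return)
        return True

    def _credit_return(self):
        self.credits += 1  # slave returned the credit after servicing

class Slave:
    def __init__(self):
        self.pending = []  # guaranteed buffer space backing the credits

    def receive(self, txn, on_done):
        self.pending.append((txn, on_done))

    def service_one(self):
        txn, on_done = self.pending.pop(0)
        on_done()  # transmit the credit return
        return txn
```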
20210011873 | INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND SEMICONDUCTOR DEVICE - A bridge apparatus includes slave circuits connected to each other via a bus. Each of the slave circuits is connected to one of master apparatuses, function as a slave for the master apparatus connected thereto, and performs communication in accordance with a protocol in which the number of masters in a system is restricted. Addresses of memories are respectively set in the slave circuits, and the memories are connected to the master apparatuses to which the slave circuits are respectively connected. When a first master apparatus accesses a memory connected to a second master apparatus by specifying a first address of the memory, the bridge apparatus causes the first master apparatus and the second master apparatus to communicate via a first slave circuit, a second slave circuit in which an address corresponding to the first address is set, and the bus, based on the addresses of the memories. | 2021-01-14 |
20210011874 | Systems and Methods for Aligning Received Data - The present application is directed to an electronic device that includes a receiver configured to receive data from a second electronic device. The data includes a plurality of blocks, and each block of the plurality of blocks comprises a sync header. The receiver is also configured to align the data by performing 2 to 1 multiplexing and output the aligned data. | 2021-01-14 |
20210011875 | CONFIGURATION VIA HIGH SPEED SERIAL LINK - Mechanisms and techniques for configuring a configurable slave device using a high speed serial link where a different number of lanes of the high speed serial link are used to send data between the slave device and a master device, depending on whether the slave device is in configuration mode or in normal operations mode, are provided. | 2021-01-14 |
20210011876 | MEMORY WITH ALTERNATIVE COMMAND INTERFACES - A memory device or module selects between alternative command ports. Memory systems with memory modules incorporating such memory devices support point-to-point connectivity and efficient interconnect usage for different numbers of modules. The memory devices and modules can be of programmable data widths. Devices on the same module can be configured to select different command ports to facilitate memory threading. Modules can likewise be configured to select different command ports for the same purpose. | 2021-01-14 |
20210011877 | BOARD PORTAL SUBSIDIARY MANAGEMENT SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT - A board portal system provides the ability to manage multiple boards, where each of the boards may be a separate legal entity. The board portal may provide the ability to establish links between the multiple boards and create parent-child relationships with subsidiary boards. With the board portal, users can create content and make it viewable and accessible across multiple boards that are related through a parent-child relationship. At the same time, the board portal maintains a requisite level of separation between the related boards in the portal using encryption and/or other separation techniques. As a result, the board portal facilitates flexible workflow patterns and communication processes based on the proper hierarchical structure that exists between the parent organization and its subsidiaries. | 2021-01-14 |