48th week of 2021 patent applcation highlights part 54 |
Patent application number | Title | Published |
20210373968 | LOAD LEVELING DEVICE, LOAD LEVELING METHOD, AND LOAD LEVELING PROGRAM - Load leveling between hosts (computes) is realized in a virtual infrastructure regardless of application restrictions on the virtualization technique and by reducing the influence on services. A load leveling apparatus | 2021-12-02 |
20210373969 | METHODS, SYSTEMS AND APPARATUS TO DYNAMICALLY FACILITATE BOUNDARYLESS, HIGH AVAILABILITY M:N WORKING CONFIGURATION SYSTEM MANAGEMENT - In a Boundaryless Control High Availability (“BCHA”) system (e.g., industrial control system) comprising multiple computing resources (or computational engines) running on multiple machines, technology for computing in real time the overall system availability based upon the capabilities/characteristics of the available computing resources, applications to execute and the distribution of the applications across those resources is disclosed. In some embodiments, the disclosed technology can dynamically manage, coordinate recommend certain actions to system operators to maintain availability of the overall system at a desired level. High Availability features may be implemented across a variety of different computing resources distributed across various aspects of a BCHA system and/or computing resources. Two example implementations of BCHA systems described involve an M:N working configuration and M:N+R working configuration. | 2021-12-02 |
20210373970 | DATA PROCESSING METHOD AND CORRESPONDING APPARATUS - A data processing method applied to a data processing apparatus is provided. The data processing apparatus includes a central processing unit and a sensor processing unit set, and the sensor processing unit set includes at least one sensor processing unit. This solution resolves problems such as high costs and high function requirements, or overload and frequent system breakdown caused by massive data processing tasks in an existing data processing architecture. A data processing apparatus and a computer-readable storage medium are also provided. | 2021-12-02 |
20210373971 | CROSS-CLUSTER LOAD BALANCER - Various examples are disclosed for performing cross-cluster load balancing. In some aspects, a workload is selected for cross-cluster migration. A destination cluster is identified for a migration of the workload from a source cluster to the destination cluster. A cross-cluster migration recommendation is generated to migrate the workload from the source cluster to the destination cluster. | 2021-12-02 |
20210373972 | VGPU SCHEDULING POLICY-AWARE MIGRATION - Disclosed are aspects of virtual graphics processing unit (vGPU) scheduling-aware virtual machine migration. Graphics processing units (GPUs) that are compatible with a current virtual GPU (vGPU) profile for a virtual machine are identified. A scheduling policy matching order for a migration of the virtual machine is determined based on a current vGPU scheduling policy for the virtual machine. A destination GPU is selected based on a vGPU scheduling policy of the destination GPU being identified as a best available vGPU scheduling policy according to the scheduling policy matching order. The virtual machine is migrated to the destination GPU. | 2021-12-02 |
20210373973 | Workload Placement Based On Carbon Emissions - Workload placement based on carbon emissions, including: calculating, for each execution environment of a plurality of execution environments, a carbon emission cost associated with a workload; selecting, based on each carbon emission cost for the plurality of execution environments, a target execution environment; and executing the workload on the target execution environment. | 2021-12-02 |
20210373974 | Accelerated Operation of a Graph Streaming Processor - Methods, systems and apparatuses for graph processing are disclosed. One graph streaming processor includes a thread manager, wherein the thread manager is operative to dispatch operation of the plurality of threads of a plurality of thread processors before dependencies of the dependent threads have been resolved, maintain a scorecard of operation of the plurality of threads of the plurality of thread processors, and provide an indication to at least one of the plurality of thread processors when a dependency between the at least one of the plurality of threads that a request has or has not been satisfied. Further, a producer thread provides a response to the dependency when the dependency has been satisfied, and each of the plurality of thread processors is operative to provide processing updates to the thread manager, and provide queries to the thread manager upon reaching a dependency. | 2021-12-02 |
20210373975 | WORKGROUP SYNCHRONIZATION AND PROCESSING - A processing system monitors and synchronizes parallel execution of workgroups (WGs). One or more of the WGs perform (e.g., periodically or in response to a trigger such as an indication of oversubscription) a waiting atomic instruction. In response to a comparison between an atomic value produced as a result of the waiting atomic instruction and an expected value, WGs that fail to produce a correct atomic value are identified as being in a waiting state (e.g., waiting for a synchronization variable). Execution of WGs in the waiting state is prevented (e.g., by a context switch) until corresponding synchronization variables are released. | 2021-12-02 |
20210373976 | SYSTEMS AND METHOS FOR STATELESS MAINTENANCE OF A REMOTE STATE MACHINE - Systems and methods of implementing a finite-state machine using electronic notifications delivered to a client device in a computer networking environment are provided. A content item can be received, along with first and second notifications associated with the content item. The first and second notifications can be stored in a queue. In some implementations, a state machine can be maintained in which at least some states may cause the first or second notifications to be displayed, and in which transitional conditions between states may depend at least in part on user interaction with the displayed notifications. | 2021-12-02 |
20210373977 | DETERMINATIONS OF WHETHER EVENTS ARE ANOMALOUS - According to examples, an apparatus may include a memory on which is stored instructions that when executed by a processor, cause the processor to cluster a set of data points into a hierarchy of a plurality of clusters, in which each level of the hierarchy includes a different number of the plurality of clusters. The processor may also select a number of the plurality of clusters to be used in modeling behaviors of the plurality of clusters and for each cluster of the selected number of the plurality of clusters, determine a distribution type of the cluster. The processor may further merge the distribution types of the clusters to generate a mixture model, identify an event, evaluate the event based on the generated mixture model to determine whether the event is likely anomalous, and based on a determination that the event is likely anomalous, execute a response action. | 2021-12-02 |
20210373978 | FINDING THE OPTIMUM TIME TO PRESTART CONTAINER RUNNING FUNCTIONS USING EVENTS - A method includes identifying a first event that has been at least partly performed, wherein the first event comprises an element of a sequence of events, and the first event comprises performance of a first computing function, predicting a second event expected to occur next in the sequence after completion of the first event, and the second event comprises performance of a second computing function, predicting a start time of the second event, based on information about the second event, identifying a particular container capable of implementing the second computing function associated with the second event, predicting a start time for start-up of the container, starting up the container, and completing start-up of the container prior to receipt of a request for the second computing function to be performed by the container, wherein the container is ready to perform the second computing function immediately after start-up has been completed. | 2021-12-02 |
20210373979 | FILE UPLOAD MODIFICATIONS FOR CLIENT SIDE APPLICATIONS - Methods and systems are provided for a client computing device including a browser that renders a web page. Program code generates a mock upload event and a corresponding mock data transfer object for uploading data using the web page. The mock upload event and the corresponding mock data transfer object are propagated to an upload event listener of the web page and executed. Prior to generating the mock upload event and corresponding mock data transfer object, an embedded upload event listener may receive an upload event, read the upload event, drop the received upload event from an event handler pipeline, and call synchronously or asynchronously, code to perform logic on the received upload event for the generation of the mock upload event and a corresponding mock data transfer object. | 2021-12-02 |
20210373980 | DIGITAL PROCESSING SYSTEMS AND METHODS FOR THIRD PARTY BLOCKS IN AUTOMATIONS IN COLLABORATIVE WORK SYSTEMS - Systems, methods, and computer-readable media for remotely automating changes to third party applications from within a primary application are disclosed. The systems and methods may involve maintaining in the primary application, a table having rows, columns, and cells at intersections of the rows and columns, wherein the primary application is configured to enable the construction of automations defined by conditional rules for altering internal information in the primary application and external information in the third party applications; receiving an automation definition conditional on specific information input into at least one specific cell in the table of the primary application, wherein the automation definition is constructed using internal blocks and external blocks, the external blocks having links to the external third party applications; monitoring the at least one specific cell of the primary application for an occurrence of the specific information. | 2021-12-02 |
20210373981 | CONSERVATION OF ELECTRONIC COMMUNICATIONS RESOURCES AND COMPUTING RESOURCES VIA SELECTIVE PROCESSING OF SUBSTANTIALLY CONTINUOUSLY UPDATED DATA - In a system including a primary process followed by a secondary process, which are performed serially and sequentially, i.e., in a FIFO manner, where the secondary process is downstream of the primary process, the disclosed embodiments relate to selective/conditional secondary processing of electronic data transaction request messages, which speeds up the primary processing of the electronic data transaction request messages, reduces reduce the amount of computing resources wasted on calculating inaccurate information, and reducing the usage of network resources associated with publishing market data feeds and receiving new responsive messages. | 2021-12-02 |
20210373982 | LOCK-FREE METHOD OF TRANSMITTING HIERARCHICAL COLLECTIONS THROUGH SHARED MEMORY - Systems and methods for creating a new entry in a hierarchical state data structure with object entries is disclosed. The method includes allocating a shared memory buffer for a new entry in a shared memory. A request to create the new entry for a child object in a hierarchical state data structure in the shared memory is received. The new entry is to span at least one shared memory buffer uniquely identifiable in a location of the shared memory. The child object is a logical representation of a state of a system. In response to a request for an allocation of a shared memory buffer within a region of the shared memory for the new entry, a location identifier corresponding to a location of a parent entry holding a parent object to the child object in the hierarchical state data structure of an allocated region is received. The child object is created in the shared memory buffer for the new entry, and the new entry is available for concurrent access by one or more readers of the shared memory. | 2021-12-02 |
20210373983 | LEASING PRIORITIZED ITEMS IN NAMESPACE INDICES - A method, system, and computer program product for implementing indexes in a dispersed storage network (dsNet) are provided. The method accesses a work queue containing a set of work items as a set of key-value pairs. The key-value pairs are tuples including a work identifier and a work lease timestamp. The method selects a first work identifier and a first lease timestamp for a new work. The set of work items and the new work item are ordered according to a priority scheme to generate a modified work queue. Based on the modified work queue, the method transmits a work request to a plurality of data source units. The work request including a hash parameter and a bit parameter. The hash parameter is associated with a key-value pair of the modified work queue. The bit parameter indicates a number of bits of the hash parameter to consider. | 2021-12-02 |
20210373984 | AUTOMATIC CODE GENERATION FOR API MASHUPS - According to an aspect of an embodiment, operations include receiving a first input corresponding to a selection of one or more Application Programming Interface (API)-based trigger options associated with one or more electronic trigger events. The operations further include receiving a second input corresponding to a selection of one or more trigger rules which are applicable on event data associated with the one or more electronic trigger events and receiving a third input corresponding to a selection of one or more API-based actions. The operations further include constructing an API mashup template based on the first input, the second input, and the third input and generating an API mashup code based on the constructed API mashup template. The API mashup code is configured to be computer-executable on a runtime system. | 2021-12-02 |
20210373985 | METHOD AND SYSTEM FOR PREDICTING AN OCCURRENCE OF A FAILURE CONDITION IN A VDI ENVIRONMENT - The present disclosure is related to Virtual Desktop Infrastructure (VDI) that discloses a method and system for predicting an occurrence of a failure condition in a VDI environment. A failure prediction system simulates a workload condition, to generate a functional experience corresponding to each information system. Thereafter, the failure prediction system determines a deviation in, performance patterns of each information system, and the functional experience corresponding to each information system, based on historical data of the corresponding information system and transactional data of an enterprise. Finally, an occurrence of a failure condition in a VDI environment is predicted by performing predictive analytics on the determined deviation, based on one or more benchmark metrics. The present disclosure rectifies the performance issues based on the prediction, which in turn prevents the occurrence of the failure condition, thereby improving user experience and productivity in the VDI environment. | 2021-12-02 |
20210373986 | TRANSPORTATION OF CONFIGURATION DATA WITH ERROR MITIGATION - A method for mitigating errors in the transportation of configuration data may include identifying, at a development system, dependent configuration data associated with a first transport request. The dependent configuration data may implement a customization to a software application hosted at a production system. A reference table identifying the dependent configuration data may be sent to the production system. A missing object list identifying dependent configuration data absent from the production system may be generated at the production system based on the reference table. The missing object list may be sent to the development system where a corrective action may be performed such that the dependent configuration data identified by the missing object list as being absent from the production system is sent to the production system in the first transport request and/or a second transport request. Related systems and articles of manufacture, including computer program products, are also provided. | 2021-12-02 |
20210373987 | REINFORCEMENT LEARNING APPROACH TO ROOT CAUSE ANALYSIS - Aspects of the invention include generating a vector representation of a root node of the error based on a hierarchical topology of a computing system; generating a respective vector representations of each subject matter expert of a plurality of subject matter experts based at least in part on the hierarchical topology; selecting a subject matter expert based at least in part on the vector representation of root cause of the error; and uploading a diagnostic software to the computing system. | 2021-12-02 |
20210373988 | CIRCUIT DETECTION METHOD AND DATA DETECTION CIRCUIT - Embodiments of the present disclosure disclose a circuit detection method and a data detection circuit. The circuit detection method comprises: if a current time point reaches a preset detection time period, based on a data storage address of a detected module, reading a data to be detected corresponding to the detected module from a storage area corresponding to the data storage address; using a preset calculation method corresponding to the detected module to perform a calculation on the data to be detected to obtain a first calculation result; based on the first calculation result and a preset calculation result corresponding to the data storage address, determining a fault state of the detected module. The embodiments of the present disclosure can detect the storage circuit in a timely and accurate manner without data verification by adding hardware, thereby saving space occupied by the system and reducing power consumption. | 2021-12-02 |
20210373989 | BIG TELEMATICS DATA NETWORK COMMUNICATION FAULT IDENTIFICATION SYSTEM - Apparatus, device, methods and system relating to a vehicular telemetry environment for the for identifying in real time unpredictable network communication faults based upon pre-processed raw telematics big data logs that may include GPS data and an indication of vehicle status data, and supplemental data that may further include location data and network data. | 2021-12-02 |
20210373990 | CLUSTERING OF STRUCTURED LOG DATA BY KEY-VALUES - Clustering structured log data by key-values includes receiving, via a user interface, a request to apply an operator to cluster a set of raw log messages according to values for a set of keys associated with the request. At least a portion of each raw log message comprises structured machine data including a set of key-value pairs. It further includes receiving a raw log message in the set of raw log messages. It further includes determining whether to include the raw log message in a cluster based at least in part on an evaluation of values in the structured machine data of the raw log message for the set of keys associated with the request. The cluster is included in a plurality of clusters. Each cluster in the plurality is associated with a different combination of values for the set of keys associated with the request. It further includes providing, via the user interface, information associated with the cluster | 2021-12-02 |
20210373991 | AUTOMATED ALERT AUGMENTATION FOR DEPLOYMENTS OF SOFTWARE-DEFINED STORAGE - Methods, apparatus, and processor-readable storage media for automated alert augmentation for deployments of software-defined storage are provided herein. An example computer-implemented method includes obtaining an alert from at least one software-defined storage device; determining one or more items of additional information pertaining to one or more of the alert and the at least one software-defined storage device; augmenting the alert based at least in part on the one or more determined items of additional information; generating a modified version of the augmented alert by incorporating, into the augmented alert, dependency information pertaining to the at least one software-defined storage device and one or more additional software-defined storage devices; and performing one or more automated actions based at least in part on the modified version of the augmented alert. | 2021-12-02 |
20210373992 | MEMORY SYSTEM - According to one embodiment, a memory system includes a first memory, an interface circuit, and a processor. The interface circuit is configured to receive a first request from an external device. The processor is configured to select a mode among a plurality of modes in response to the first request, and perform, on data read from the first memory, error correction of the selected mode. | 2021-12-02 |
20210373993 | DATA SHAPING FOR INTEGRATED MEMORY ASSEMBLY - A non-volatile memory system comprises an integrated memory assembly in communication with a memory controller. The integrated memory assembly comprises a memory die bonded to a control die with bond pads. The control die includes one or more control circuits for controlling the operation of the memory die. The one or more control circuits are configured to receive data to be programmed into the memory die, select a number of parity bits, encode the data to add error correction information and form a codeword that includes the number of parity bits, shape the codeword, and program the shaped codeword into the memory die. | 2021-12-02 |
20210373994 | VIRTUAL DISK FILE RESILIENCY FOR CONTENT BASED READ CACHE (CBRC) ENABLED ENVIRONMENT - Disclosed herein is a system and method for checking and maintaining consistency of blocks stored in a virtual disk with a content based read cache (CBRC). When blocks are written to the cache and virtual disk, a hash is computed for the block and stored in a digest file on the virtual disk. In the background, each block is obtained from the virtual disk, its hash is recomputed, and the hash is compared to the stored hash in the digest file. If the comparison indicates a mismatch, then an error is reported. | 2021-12-02 |
20210373995 | METHOD FOR ACCESSING SEMICONDUCTOR MEMORY MODULE - A method for accessing a memory module includes; encoding first data of a first partial burst length to generate first parities and first cyclic redundancy codes, encoding second data of a second partial burst length to generate second parities and second cyclic redundancy codes, writing the first data and the second data to first memory devices, and writing the first parities, the first cyclic redundancy codes, the second parities, and the second cyclic redundancy codes to a second memory device and a third memory device. | 2021-12-02 |
20210373996 | MEMORY MODULE AND OPERATING METHOD - A memory module includes a memory device configured to receive a first refresh command from a host, and perform a refresh operation in response to the first refresh command during a refresh time, and a computing unit configured to detect the first refresh command provided from the host to the memory device, and write a first error pattern at a first address of the memory device during the refresh time. | 2021-12-02 |
20210373997 | DYNAMIC DATA VERIFICATION AND RECOVERY IN A STORAGE SYSTEM - In one implementation, storage system includes embedded storage devices, where each embedded storage device includes a direct-mapped solid state drive (SSD) storage portion and storage system controllers. The storage system controllers may be operatively coupled to the embedded storage devices via a bus. The storage system controllers may receive data to be written to the plurality embedded storage devices, select a plurality of available allocation units from the direct-mapped SSD storage portions of the plurality of embedded storage devices, respectively, and calculate a verification signature corresponding to the data. The storage system controllers may also write the data and the verification signature to a first subset of the plurality of available allocation units, calculate an erasure code corresponding to the data and the verification signature, and write the erasure code to a second subset of allocation units. | 2021-12-02 |
20210373998 | NAMESPACE INDICES IN DISPERSED STORAGE NETWORKS - A method, system, and computer program product for implementing indices in a dispersed storage network (dsNet) are provided. The method receives a key-value pair to be stored in a dsNet. The method routes the key and the value within a data source containing a SourceName repository and a data buffer. The key is routed to the SourceName repository and the value is routed to the data buffer. The data source is erasure encoded into a set of data slices having a slice name and a slice buffer. The method stores the set of data slices within the dsNet. The method generates a namespace index with an index entry for the key-value pair. The index entry represents the key-value pair as a SourceName and a data source indicator with the SourceName and the data source indicator being associated with the set of data slices. | 2021-12-02 |
20210373999 | LISTING AND PROTOCOL FOR NAMESPACE INDEX - A method, system, and computer program product for implementing indexes in a dispersed storage network (dsNet) are provided. The method receives a key-value list request including a start key. In response to the key-value list request, a set of key-value list requests are transmitted to a plurality of data source units within a dispersed storage network (dsNet). The method identifies a set of keys returned from the plurality of data source units and identifies a subset of keys from the set of keys returned. The subset of keys including a union of the set of keys. The key-value pairs associated with the subset of keys are restored. The method generates a key-value list response to the key-value list request. The key-value list response includes the restored key-value pairs associated with the subset of keys. | 2021-12-02 |
20210374000 | MEMORY INTEGRITY PERFORMANCE ENHANCEMENT SYSTEMS AND METHODS - A write request causes controller circuitry to write an encrypted data line and First Tier metadata portion including MAC data and a first portion of ECC data to a first memory circuitry portion and a second portion of ECC data to a sequestered, second memory circuitry portion. A read request causes the controller circuitry to read the encrypted data line and the First Tier metadata portion from the first memory circuitry portion. Using the first portion of the ECC data included in the First Tier metadata portion, the controller circuitry determines if an error exists in the encrypted data line. If no error is detected, the controller circuitry decrypts and verifies the data line using the MAC data included in the First Tier metadata portion. If an error in the data line is detected by the controller circuitry, the Second Tier metadata portion, containing the second portion of the ECC data is fetched from the sequestered, second memory circuitry portion and the error corrected. | 2021-12-02 |
20210374001 | MEMORY DEVICE AND MEMORY MODULE INCLUDING SAME - A memory device includes a peripheral circuit communicating with memory banks. Each of the banks includes a memory cell array including memory cells, a row decoder connected with the memory cells through word lines, bit line sense amplifiers connected with the memory cells through bit lines including first bit lines and second bit lines, and a column decoder configured to connect the bit line sense amplifiers with the peripheral circuit. The memory cell array includes a first section connected with the first bit lines and a second section connected with the second bit lines, and the first section and second section are independent of each other with regard to a row-dependent error. | 2021-12-02 |
20210374002 | PROCESSING-IN-MEMORY INSTRUCTION SET WITH HOMOMORPHIC ERROR CORRECTION - A method includes transmitting an ECC encoded first data and an ECC encoded second data from a memory to a logic circuit, and generating an ECC encoded output data by executing an ECC-Space operation using the ECC encoded first data as a first operand and the ECC encoded second data as a second operand. The ECC encoded first data and the ECC encoded second data are the corresponding results of encoding a first data and a second data with an ECC algorithm. The ECC-Space operation is translated from a two operands operation that is operative to transform the first data and the second data into a third data. The ECC encoded output data is identical to a result of encoding the third data with the ECC algorithm if the third data is encoded with the ECC algorithm. | 2021-12-02 |
20210374003 | ZNS Parity Swapping to DRAM - The present disclosure generally relates to methods of operating storage devices. The storage device comprises a controller comprising first random access memory (RAM1), second random access memory (RAM2), and a storage unit divided into a plurality of zones. A first command to write data to a first zone is received, first XOR data is generated in the RAM1, and the data of the first command is written to the first zone. When a second command to write data to a second zone is received, the generated first XOR data is copied from the RAM1 to the RAM2, and second XOR data for the second zone is copied from the RAM2 to the RAM1. The second XOR data is updated with the second command, and the data of the second command is written to the second zone. The updated second XOR data is copied from the RAM1 to the RAM2. | 2021-12-02 |
20210374004 | FAULT TOLERANT MEMORY SYSTEMS AND COMPONENTS WITH INTERCONNECTED AND REDUNDANT DATA INTERFACES - A memory system includes dynamic random-access memory (DRAM) components that include interconnected and redundant component data interfaces. The redundant interfaces facilitate memory interconnect topologies that accommodate considerably more DRAM components per memory channel than do traditional memory systems, and thus offer considerably more memory capacity per channel, without concomitant reductions in signaling speeds. The memory components can be configured to route data around defective data connections to maintain full capacity and continue to support memory transactions. | 2021-12-02 |
20210374005 | SYSTEMS AND METHODS FOR VERIFYING AND PRESERVING THE INTEGRITY OF BASIC INPUT/OUTPUT SYSTEM BEFORE POWERING ON OF HOST SYSTEM AND MANAGEMENT ENGINE - A method may include responsive to a power event associated with an information handling system, intercepting the power event and holding a platform controller hub of the information handling system from completing the power event to prevent execution of code of a basic input/output system of the information handling system, attempting to verify image integrity of the basic input/output system, and allowing the platform controller hub to complete the power event to allow execution of the code of the basic input/output system responsive to successful verification of the image integrity of the basic input/output system. | 2021-12-02 |
20210374006 | REFRESH MANAGEMENT FOR DRAM - A memory controller interfaces with a dynamic random access memory (DRAM). The memory controller selectively places memory commands in a memory interface queue, and transmits the commands from the memory interface queue to a memory channel connected to at least one dynamic random access memory (DRAM). The transmitted commands are stored in a replay queue. A number of activate commands to a memory region of the DRAM is counted. Based on this count, a refresh control circuit signals that an urgent refresh command should be sent to the memory region. In response to detecting a designated type of error, a recovery sequence initiates to re-transmit memory commands from the replay queue. Designated error conditions can cause the recovery sequence to restart. If an urgent refresh command is pending when such a restart occurs, the recovery sequence is interrupted to allow the urgent refresh command to be sent. | 2021-12-02 |
20210374007 | GROUPING OF MULTIPLE CONCURRENT SCHEDULES OF POINT-IN-TIME SNAPSHOTS FOR CONSISTENTLY DEFINED DATA IMAGE PROTECTION - Targetless snapshot schedules are defined by policy objects that include a snap creation interval, maximum snap count, and schedule ID. Multiple schedule IDs can be associated with a single storage object to implement different concurrent targetless snapshot schedules on that object. Multiple storage objects may use the same targetless snapshot schedule independently. Because the targetless snapshot schedules are implemented independently, discarding old snapshots to maintain the snap count for a first storage object does not cause discard of snapshots for a second storage object. Likewise, discarding old snapshots to maintain the snap count for a first schedule does not cause discard of snapshots for a second schedule applied to the same storage object. | 2021-12-02 |
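The policy objects and their independent retention described above can be sketched as below (all names are illustrative): discarding snapshots to honor one schedule's snap count never touches another schedule's snapshots on the same storage object.

```python
class SnapshotSchedule:
    """Policy object: snap creation interval, maximum snap count, schedule ID."""
    def __init__(self, schedule_id, interval_s, max_count):
        self.schedule_id = schedule_id
        self.interval_s = interval_s
        self.max_count = max_count


class StorageObject:
    """A storage object may be associated with several schedule IDs at once;
    each schedule's snapshots are retained independently."""
    def __init__(self, name):
        self.name = name
        self._snaps = {}  # schedule_id -> list of snapshot timestamps

    def take_snapshot(self, schedule, now):
        snaps = self._snaps.setdefault(schedule.schedule_id, [])
        snaps.append(now)
        # Discarding to honor max_count affects only THIS schedule's snaps.
        while len(snaps) > schedule.max_count:
            snaps.pop(0)

    def snapshots(self, schedule_id):
        return list(self._snaps.get(schedule_id, []))
```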
20210374008 | METHOD, ELECTRONIC DEVICE AND COMPUTER PROGRAM PRODUCT FOR BACKUP - Embodiments of the present disclosure relate to a method, an electronic device, and a computer program product for backup. The method includes: determining a plurality of buffer periods associated with a plurality of pending backup operations of a plurality of objects, each buffer period being a preprocessing period for a corresponding pending backup operation. The method further includes: determining a plurality of predicted execution durations of the plurality of pending backup operations based on historical execution durations of respective historical backup operations of the plurality of objects. The method further includes: determining priorities of the plurality of pending backup operations based on the plurality of predicted execution durations and the plurality of buffer periods. The method further includes: executing the plurality of pending backup operations based on the priorities. | 2021-12-02 |
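The prioritization above (predict each backup's duration from its history, then rank operations by how little of their buffer period would remain) might look like the following least-slack-first sketch; the slack heuristic and field names are assumptions, not details from the filing.

```python
def schedule_backups(jobs):
    """jobs: list of dicts with 'name', 'buffer_s' (the preprocessing/buffer
    period) and 'history_s' (historical execution durations).  Returns job
    names in execution order, least slack first."""
    def slack(job):
        # Predicted duration: mean of historical durations (an assumption).
        predicted = sum(job["history_s"]) / len(job["history_s"])
        return job["buffer_s"] - predicted  # less slack -> higher priority
    return [j["name"] for j in sorted(jobs, key=slack)]
```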
20210374009 | METHOD AND APPARATUS FOR SUBSCRIBER MANAGEMENT - Aspects of the subject disclosure may include, for example, a method including enabling, by a system comprising a processor, a first modification of a first user profile at a secondary data repository of a communication network; the first modification is directed by a first application operating at a first communication device of the communication network. A usage event is identified that is associated with the first communication device according to the first modification to the first user profile. A second modification to a second user profile is replicated at a primary data repository according to a change in operation of a second application associated with the usage event that is identified to the first communication device; the replication is performed according to an update policy for the primary data repository. Other embodiments are disclosed. | 2021-12-02 |
20210374010 | EMULATING HIGH-FREQUENCY APPLICATION-CONSISTENT SNAPSHOTS BY FORMING RESTORE POINT DATA SETS BASED ON REMOTE SITE REPLAY OF I/O COMMANDS - The disclosed systems emulate high-frequency application-consistent snapshots by forming restore point data sets based on remote site replay of I/O commands. A method embodiment commences upon identifying a primary computing site and a secondary computing site, then identifying an application to be restored from the secondary computing site after a disaster. Prior to the disaster, a group of computing entities of the application to be restored from the secondary computing site are identified. Input/output operations that are performed over any of the computing entities at the primary site are streamed to the secondary site where they are stored. An I/O map that associates a time with an indication of a last received I/O command that had been performed over a changing set of computing entities is sent to the secondary site. An agent at the secondary site accesses the I/O map and the streamed-over I/Os to construct recovery data. | 2021-12-02 |
20210374011 | DATA OBJECT BACKUP VIA OBJECT METADATA - Examples are disclosed that relate to backing up data objects. One example provides, at a computing device, a method, comprising detecting one or more conditions triggering backup of a data object, and in response to detecting the one or more conditions, accessing the data object to retrieve, from metadata associated with the data object, instructions for backing up the data object. The method further comprises executing one or more backup sequences specified by the instructions in which at least a portion of the data object is backed up to one or more storage devices. | 2021-12-02 |
20210374012 | EFFICIENT METHOD TO BUILD A BACKUP SCHEDULING PLAN FOR A SERVER - One example method includes identifying a group of asset backups to be performed, and each asset backup is associated with a respective asset and has an associated backup time and RPO, selecting an asset backup to run first, and the asset backup that will run first is chosen based on a start deadline of that asset backup relative to respective start deadlines of one or more other asset backups, and the start deadline falls within a time slot, selecting a stream from a group of streams for the selected asset backup, and the selected stream is a stream with a lowest value of first available time slot, and backing up the asset at a backup server by running the selected asset backup, and backup begins at a start time that is a time when the selected stream becomes available, and the asset backup runs on the selected stream. | 2021-12-02 |
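A rough reading of this scheduling plan (run the asset with the earliest start deadline first, placing each backup on the stream with the lowest first-available time) can be sketched as follows; taking the start deadline to be RPO minus backup time is an assumption on our part.

```python
def build_plan(assets, n_streams):
    """assets: list of (name, backup_duration_s, rpo_s) tuples.
    Returns a plan as a list of (name, stream_index, start_time) tuples."""
    streams = [0.0] * n_streams  # first available time per stream
    plan = []
    # Earliest start deadline first (deadline assumed to be rpo - duration).
    for name, dur, rpo in sorted(assets, key=lambda a: a[2] - a[1]):
        s = min(range(n_streams), key=lambda i: streams[i])
        start = streams[s]           # backup begins when the stream frees up
        plan.append((name, s, start))
        streams[s] = start + dur
    return plan
```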
20210374013 | DATA PLACEMENT METHOD BASED ON HEALTH SCORES - Embodiments described herein relate to techniques for placing backup data based on health scores. The techniques may include: obtaining data items associated with a first data domain restorer; obtaining data items associated with a second data domain restorer; making a first prediction that the first data domain restorer is operating normally; making a second prediction that the second data domain restorer is operating normally; assigning a first confidence value to the first prediction; making a classification of the first data domain restorer in a first group based on the first confidence value; assigning a second confidence value to the second prediction; making a classification of the second data domain restorer in a second group based on the second confidence value; and performing a data backup to the first data domain restorer from a first computing device based on the classification and a first service level required for the first computing device. | 2021-12-02 |
20210374014 | SYSTEM AND METHOD FOR AN APPLICATION CONTAINER EVALUATION BASED ON CONTAINER EVENTS - A method for a backup operation in accordance with one or more embodiments of the invention includes obtaining, by a vulnerability analyzer executing on a backup server, a plurality of container event entries, wherein a container event entry of the plurality of container event entries specifies an application container identifier, a container event identifier, an event severity, and an event type, selecting the container event identifier, identifying a portion of the plurality of container event entries that specify the container event identifier, generating a mean event severity based on the portion of the plurality of container event entries, generating a first vulnerability value associated with the application container identifier, and initiating a backup policy update based on a vulnerability ranking, wherein the vulnerability ranking is based on the first vulnerability value. | 2021-12-02 |
20210374015 | SYSTEM AND METHOD FOR AN APPLICATION CONTAINER BACKUP PRIORITIZATION BASED ON CONTAINER INFORMATION AND RESOURCE AVAILABILITY - A method for performing a backup operation includes obtaining, by a backup server, container information associated with a plurality of application containers, generating a container score for each application container in the plurality of application containers based on the container information, identifying a resource availability for a backup agent associated with the plurality of application containers, generating an ordering of the plurality of application containers based on the availability and the container scores, and sending a prioritization list update request to the backup agent based on the ordering. | 2021-12-02 |
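The ordering step in the abstract above might be sketched as below; the scoring formula combining criticality and size, and the capacity cut-off standing in for the backup agent's resource availability, are illustrative assumptions.

```python
def prioritize(containers, agent_capacity):
    """containers: dict of container name -> info dict with 'criticality'
    and 'size_mb' keys.  Returns the prioritization list sent to the backup
    agent, capped by the agent's available capacity."""
    def score(item):
        name, info = item
        # Illustrative score: favor critical containers, penalize large ones.
        return info["criticality"] * 10 - info["size_mb"] / 1024
    ordered = [n for n, _ in sorted(containers.items(), key=score, reverse=True)]
    return ordered[:agent_capacity]
```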
20210374016 | SYNCHRONIZATION OF A DATABASE BY RESTORING COPIES OF CHANGED DATABASE OBJECTS - Systems and methods for performing backup and other secondary copy operations for large databases (e.g., “big data”), such as the Greenplum database, are described. In some cases, the systems and methods may maintain a second instance of a source database (e.g., Greenplum) using live synchronization (e.g., “Live Sync”), which performs incremental replication between a virtual machine containing a large database (e.g., a virtual machine containing a Greenplum database) and a synced copy of the virtual machine. | 2021-12-02 |
20210374017 | SNAPSHOT METADATA DEDUPLICATION - Snapshots may be managed on a data storage system including logical storage unit including data portions. For a first logical storage unit, a first snapshot pointer structure may be provided including entries, each entry corresponding to a physical storage location at which data is stored for a data portion of the first logical storage unit at a particular point in time. A first virtual snapshot lookup table may be provided for a first portion of the first logical storage unit, the first virtual snapshot lookup table including a plurality of entries, each entry corresponding to a respective data portion of the first logical storage unit and including a reference to a respective entry of the first snapshot pointer structure. The virtual lookup table may correspond to multiple snapshots of the first logical storage unit that have a same value for each data portion of the at least first portion. | 2021-12-02 |
20210374018 | LOAD BALANCING ACROSS MULTIPLE DATA PATHS - Multiple data paths may be available to a data management system for transferring data between a primary storage device and a secondary storage device. The data management system may be able to gain operational advantages by performing load balancing across the multiple data paths. The system may use application-layer characteristics of the data when transferring from primary storage to backup storage during a data backup operation, and correspondingly from the secondary or backup storage system to the primary storage system during restoration. | 2021-12-02 |
20210374019 | SYSTEM AND METHOD FOR AN APPLICATION CONTAINER PRIORITIZATION DURING A RESTORATION - The method includes obtaining, by a restoration policy manager, a restoration request for a plurality of application containers, and in response to the restoration request: obtaining, by a backup server, container information associated with the plurality of application containers, assigning a restoration type to each application container in the plurality of application containers, updating a restoration type list based on the assigning, and initiating a restoration of the plurality of application containers using the restoration type list. | 2021-12-02 |
20210374020 | TRANSACTION CONSENSUS PROCESSING METHOD AND APPARATUS FOR BLOCKCHAIN AND ELECTRONIC DEVICE - A transaction consensus processing method for a blockchain is provided. A target node that initiates a proposition performs compression processing on proposed transaction data based on a compression algorithm, and fragments the compressed transaction data into a number of data fragments based on an erasure code algorithm. The method includes: receiving a data fragment of the transaction data that is sent by the target node in a unicast mode, data fragments sent by the target node to nodes in the unicast mode being different; broadcasting the received data fragment to other nodes, and receiving data fragments of the transaction data that are broadcast by the other nodes; performing data recovery on the received data fragment based on an erasure code reconstruction algorithm, performing decompression processing on the recovered transaction data based on a decompression algorithm to obtain original content of the transaction data, and completing the consensus. | 2021-12-02 |
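The fragment/broadcast/reconstruct cycle above can be imitated with compression plus a single XOR parity fragment, a toy stand-in for the erasure code in the filing (a real implementation would more plausibly use Reed-Solomon): any k of the k+1 fragments suffice to recover the original transaction data.

```python
import zlib
from functools import reduce


def fragment(tx_data: bytes, k: int):
    """Compress the proposed transaction data, then split it into k data
    fragments plus one XOR parity fragment.  Returns (fragments, comp_len)."""
    comp = zlib.compress(tx_data)
    size = -(-len(comp) // k)  # ceiling division
    chunks = [comp[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)
    return chunks + [parity], len(comp)


def reconstruct(fragments, comp_len, k):
    """fragments: list of (index, data) pairs for any k of the k+1 fragments.
    Recovers and decompresses the original transaction data."""
    have = dict(fragments)
    if all(i in have for i in range(k)):
        chunks = [have[i] for i in range(k)]
    else:
        # XOR of any k fragments (including parity) rebuilds the missing one.
        missing = next(i for i in range(k) if i not in have)
        rebuilt = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                         list(have.values()))
        chunks = [have.get(i, rebuilt) for i in range(k)]
    return zlib.decompress(b"".join(chunks)[:comp_len])
```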
20210374021 | AUTOMATED MEDIA AGENT STATE MANAGEMENT - Described herein are techniques for automating media agent state management. For example, if a media agent is running poorly, then the media agent can be disabled and an alternate media agent can perform secondary copy job operations in place of the poorly running media agent. To determine whether a media agent is running poorly, a storage manager can determine whether the media agent has an anomalous number of failed jobs, pending jobs, and/or long running jobs and/or can determine whether the amount of resources used by the media agent is high or is increasing constantly, at a constant rate, or at a near constant rate. | 2021-12-02 |
20210374022 | SERIAL INTERFACE WITH IMPROVED DIAGNOSTIC COVERAGE - A serial interface, such as a serial peripheral interface (SPI), with improved diagnostic coverage is disclosed. The serial interface includes a data verification module that selects an error detection value in response to a mode signal indicating if the transmitting device is in user mode or test mode. For example, the data verification module computes a cyclic redundancy check (CRC) value and selects either the computed CRC value or its inverse based on the mode. The receiving device can determine the mode of the transmitting device based on the error detection value used. The serial interface further includes a read detector for clearing the transmit data buffer after data is read out. The serial interface may further include a loopback circuit for verifying that the data output from an output pin matches the data from the transmit data buffer. | 2021-12-02 |
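The mode-dependent error detection value above is easy to illustrate: compute a CRC and transmit either it or its bitwise inverse, letting the receiver infer the transmitter's mode from which value matches. CRC-32 is used here purely for convenience; an SPI implementation would more likely use CRC-8 or CRC-16.

```python
import zlib


def error_detection_value(payload: bytes, test_mode: bool) -> int:
    """Compute CRC-32; in test mode send its bitwise inverse so the
    receiving device can tell user mode from test mode."""
    crc = zlib.crc32(payload)
    return (~crc & 0xFFFFFFFF) if test_mode else crc


def infer_mode(payload: bytes, received_crc: int) -> str:
    """Receiver side: determine the transmitter's mode from the value used."""
    crc = zlib.crc32(payload)
    if received_crc == crc:
        return "user"
    if received_crc == (~crc & 0xFFFFFFFF):
        return "test"
    return "corrupt"
```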
20210374023 | FLEXIBLE INTERFACE - A system and method are provided on one or more companion chips having a plurality of cores. Each core has core circuitry and a test interface for carrying out tests in relation to the core circuitry. The test interface has an address register to hold an address of the core and address determination circuitry. The address determination circuitry is configured to compare an address received on an address line to the address held in the address register to determine whether a core is being addressed. The address determination circuitry is also configured to direct the test interface to carry out a testing operation in response to the determination. | 2021-12-02 |
20210374024 | IMMERSIVE WEB-BASED SIMULATOR FOR DIGITAL ASSISTANT-BASED APPLICATIONS - Immersive web-based simulator for digital assistant-based applications is provided. A system can provide, for display in a web browser, an inner iframe configured to load, in a secure, access restricted computing environment, an application configured to integrate with a digital assistant. The application can be provided by a third-party developer device. The system can provide, for display in a web browser, an outer iframe configured with a two-way communication protocol to communicate with the inner iframe. The system can provide a state machine to identify a current state of the application loaded in the inner iframe, and load a next state of the application responsive to a control input. | 2021-12-02 |
20210374025 | Fault Injection Method and Apparatus, Electronic Device and Storage Medium - A fault injection method and apparatus, an electronic device and a storage medium are provided, which are related to the technical fields of computers and cloud computing, in particular to the field of testing. The fault injection method includes: acquiring a fault injection task, which includes at least one target service identification and a fault scenario corresponding to the target service identification; determining a target service according to each target service identification, and acquiring a state of the target service; and injecting the fault scenario corresponding to the target service identification into the target service in a case that the state of the target service is a normal state. This approach helps reduce labor costs. | 2021-12-02 |
20210374026 | SYSTEM AND METHOD FOR AUTOMATED DESKTOP ANALYTICS TRIGGERS - The present invention is a method and system for automatically producing at least one desktop analytics trigger. Upon receiving at least one type of data input, the system analyzes the data input and produces at least one desktop analytics trigger based on the results of the analysis of the data input. The data input can include data on the programs, applications, or information a user utilizes during a task, to allow use of desktop process analytics. This process may be used to either generate a new desktop analytics trigger or update an existing desktop analytics trigger. | 2021-12-02 |
20210374027 | SELF-LEARNING ALERTING AND ANOMALY DETECTION - Methods and systems for evaluating metrics (e.g., quality of service metrics) corresponding to a monitored computer, detecting metric anomalies, and issuing alerts, are disclosed. A metrics collecting agent, operating on a monitored computer, collects metrics corresponding to the monitored computer and/or one or more monitored services. These metrics are transmitted to a monitoring server that dynamically determines metric thresholds corresponding to normal metrics and anomalous metrics. Using these metric thresholds, along with a machine learning model, the monitoring server can determine whether one or more metrics are anomalous, automatically issue alerts to security and operations teams, and/or transmit a control instruction to the monitored computer in order to fix the issue causing the anomalous metrics. | 2021-12-02 |
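The dynamic thresholding above can be sketched with a simple mean-plus-k-sigma rule over recent normal samples, a statistical stand-in for the machine learning model the filing describes.

```python
import statistics


def dynamic_threshold(history, k=3.0):
    """Upper threshold = mean + k * stdev of recently observed normal
    metric samples (e.g., response latency in ms)."""
    mu = statistics.fmean(history)
    sigma = statistics.pstdev(history)
    return mu + k * sigma


def is_anomalous(history, sample, k=3.0):
    """A new sample above the dynamically determined threshold would
    trigger an alert (or a control instruction to the monitored computer)."""
    return sample > dynamic_threshold(history, k)
```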
20210374028 | DATABASE MONITORING USING SHARED INFRASTRUCTURE - A method for database monitoring may include sending, to a central database, a query on a database view at the central database. The database view may include a first operational data from a first local database and a second operational data from a second local database. An operational state of the central database may be determined based on the response to the query on the database view. Moreover, in response to receiving, from the central database, a response including the first operational data, a first operational state of the first local database may be determined based on the first operational data. Alternatively and/or additionally, in response to receiving, from the central database, a response including the second operational data, a second operational state of the second local database may be determined based on the second operational data. Related systems and articles of manufacture, including computer program products, are also provided. | 2021-12-02 |
20210374029 | System and Method for Monitoring Computing Platform Parameters and Dynamically Generating and Deploying Monitoring Packages - A system for monitoring a computing platform is configured to receive a particular metric from metrics associated with the computing platform. A plurality of layers of the computing platform are monitored. Monitoring parameters of the plurality of layers are determined. Heuristics of each monitoring parameter are determined over a time period. Monitoring packages are created from the monitoring parameters based on correlations between groups of monitoring parameters, the plurality of layers, and the metrics. Based on the particular metric, a string of monitoring packages is dynamically created from the monitoring packages. A behavior of the particular metric is determined for a configurable time duration in the future using the dynamically created string of monitoring packages. Possible failures of the computing platform related to the particular metric are predicted in an environment of the computing platform based on the determined behavior of the particular metric. | 2021-12-02 |
20210374030 | COMPUTATION OF AFTER-HOURS ACTIVITIES METRICS - A system and method for computing a metric indicating a user operation of an application is described. The system accesses online activity data of a user operation of the application. The system filters the online activity data based on a preset time range. An after-hours activity score is calculated based on a type of activity or a duration of an activity from the filtered online activity data, and a weight assigned to the type of activity or the duration of the activity. The system computes an after-hours metric based on the after-hours activity score. A configuration setting for the application based on the after-hours metric is applied to the application. | 2021-12-02 |
20210374031 | EVENT MONITORING APPARATUS, METHOD AND PROGRAM RECORDING MEDIUM - Provided is an apparatus configured to: calculate a periodicity of time series data; generate a plurality of subsequences from the time series data, with the length of each subsequence set to the periodicity; calculate feature values of the plurality of subsequences; categorize the plurality of subsequences, based on the feature values thereof, into one or more groups; find a periodicity of the subsequences belonging in common to one group, based on an occurrence order of those subsequences; and perform missing event detection by identifying a subsequence whose occurrence is expected according to the periodicity of the subsequences belonging in common to the one group but is not found. | 2021-12-02 |
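The missing-event step above (infer the period of one group of similar subsequences from their occurrence order, then flag expected-but-absent occurrences) might be sketched as follows; inferring the period as the most common gap between occurrences is an assumption.

```python
from collections import Counter


def missing_occurrences(indices, total):
    """indices: sorted subsequence positions at which one group of similar
    subsequences was observed; total: number of subsequences overall.
    Returns the positions where an occurrence was expected but not found."""
    gaps = [b - a for a, b in zip(indices, indices[1:])]
    period = Counter(gaps).most_common(1)[0][0]  # dominant inter-occurrence gap
    expected = range(indices[0], total, period)
    observed = set(indices)
    return [i for i in expected if i not in observed]
```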
20210374032 | DIGITAL TWIN WORKFLOW SIMULATION - Systems, methods and computer program products for simulating workflows and activities of physical assets using digital twin models. User-defined simulations are performed by selecting digital twin components being analyzed during the simulation, concentrating the analysis on the selectively defined components and bypassing components that will not be simulated. Users can design the digital twin simulation using one or more available digital twin models. The model can be the most current digital twin model, a previous version of a model or a hybridized model comprising components or portions from multiple versions of the available digital twins. Users can further customize simulations by selecting components or sections of the digital twin model to selectively bypass during the simulation or provide overriding values for non-simulated portions of the digital twin which can be used as entry criteria inputted into the next simulated section or component of the digital twin, to complete the simulation. | 2021-12-02 |
20210374033 | FEATURE DEPLOYMENT READINESS PREDICTION - Systems and methods directed to generating a predicted quality metric are provided. Telemetry data may be received from a first group of devices executing first software. A quality metric for the first software may be generated based on the first telemetry data. Telemetry data from a second group of devices may be received, where the second group of devices is different from the first group of devices. Covariates impacting the quality metric based on features included in the first telemetry data and the second telemetry data may be identified, and a coarsened exact matching process may be performed utilizing the identified covariates to generate a predicted quality metric for the first software based on the second group of devices. | 2021-12-02 |
20210374034 | DYNAMIC TUNING OF COMPUTING DEVICES USING APPLICATION LOG DATA - A system includes a memory and at least one processor in communication with the memory. A processor is configured to receive a first log message denoting an event associated with a first application executing in the system. A machine learning model generates a predicted log message based at least in part on the first log message. The predicted log message represents a prediction of a subsequent log message to be received from the first application. First metric data associated with the predicted log message is determined. The first metric data describes system conditions of the system associated with the predicted log message. A tuning profile associated with the system conditions is determined and the current system configuration of the system is modified using the tuning profile. | 2021-12-02 |
20210374035 | MANAGEMENT OF EVENT LOG INFORMATION OF A MEMORY SUB-SYSTEM - A set of log entries associated with a memory sub-system is caused to be stored in a first log subject to a wrapping process. A summarized log entry representing data of a portion of the set of log entries matching a pattern is generated and caused to be stored in a second log not subject to the wrapping process, where the summarized log entry is preserved from deletion via the wrapping process of the first log. The set of log entries is deleted from the first log by executing the wrapping process. | 2021-12-02 |
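The two-log arrangement above (a wrapping first log plus a non-wrapping second log holding summarized entries that survive the wrap) might be sketched as below; the substring-based pattern matching and the summary format are assumptions.

```python
from collections import deque


class EventLog:
    """First log wraps at a fixed capacity; entries matching a pattern are
    condensed into a summarized entry kept in a second log that is not
    subject to the wrapping process."""

    def __init__(self, capacity, pattern):
        self.first = deque(maxlen=capacity)  # wrapping log
        self.second = []                     # preserved summarized entries
        self.pattern = pattern
        self._matched = 0

    def append(self, entry):
        if self.pattern in entry:
            self._matched += 1
            # Keep one up-to-date summary instead of N raw copies.
            summary = f"{self.pattern} x{self._matched}"
            if self.second and self.second[-1].startswith(self.pattern):
                self.second[-1] = summary
            else:
                self.second.append(summary)
        self.first.append(entry)  # may evict the oldest entry (the wrap)
```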
20210374036 | APPLICATION-SPECIFIC LOG ROUTING - Implementations for application-specific log routing are described. An example method may include receiving, by an application server, a log message; responsive to determining that a log router associated with the application server is enabled, identifying a thread context associated with an execution thread that created the log message; responsive to identifying a logger associated with the thread context, forwarding the log message to the logger; and processing the log message by the logger. | 2021-12-02 |
20210374037 | DEBUGGING SHARED MEMORY ERRORS - There is provided a method for debugging errors in a shared memory. The method comprises executing instrumented machine code of a plurality of processes to generate a recorded execution of each of the plurality of processes for deterministic replay of the recorded execution. The method further comprises logging accesses to the shared memory by each of the plurality of processes in a shared memory log for debugging errors in the shared memory by analysing the recorded executions and the shared memory log. The shared memory log is accessible by each of the plurality of processes. | 2021-12-02 |
20210374038 | SELECTIVE TRACKING OF REQUESTS THROUGH ENTERPRISE SYSTEMS BY MULTIPLE VENDORS WITH A SINGLE TRACKING TOKEN SCHEME - Aspects of the invention include receiving requests to be executed by a processing system, and receiving a first stakeholder token from a first monitoring agent and a second stakeholder token from a second monitoring agent, the first and second stakeholder tokens being indicators to track at least one of the requests through the processing system. A tracking token is built for tracking at least one of the requests through the processing system, the tracking token having a format acceptable by protocols of the processing system and including the first and second stakeholder tokens and a unique correlator. The requests are transmitted to the processing system, where the tracking token is associated with at least one of the requests in the processing system. Access is enabled to tracking information generated by the processing system in association with the requests, for the first monitoring agent based on the first stakeholder token and for the second monitoring agent based on the second stakeholder token. | 2021-12-02 |
20210374039 | AUTOMATED TEST INPUT GENERATION FOR INTEGRATION TESTING OF MICROSERVICE-BASED WEB APPLICATIONS - Techniques for automated generation of inputs for testing microservice-based applications are provided. In one example, a computer-implemented method comprises: traversing, by a system operatively coupled to a processor, a user interface of a microservices-based application by performing actions on user interface elements of the user interface; and generating, by the system, an aggregated log of user interface event sequences and application program interface call sets based on the traversing. The computer-implemented method also comprises: determining, by the system, respective user interface event sequences that invoke application program interface call sets; and generating, by the system, respective test inputs based on the user interface event sequences that invoke the application program interface call sets. | 2021-12-02 |
20210374040 | Auto Test Generator - The technology disclosed relates to generating automated test plan scripts. A selection of a first test plan to automate is received. Test scripts and data from a repository are retrieved and the test scripts and the data correspond to the first test plan. Test steps of the first test plan are performed. A prediction of a reusable component for a particular test step or test validation is provided for each of the test steps. A selection of at least one prediction for at least one of the test steps is received. An automated test plan script corresponding to the selection of the at least one prediction is generated. | 2021-12-02 |
20210374041 | HIGHLY SCALABLE SYSTEM AND METHOD FOR AUTOMATED SDK TESTING - A highly scalable automated SDK testing system includes an automated testing controller, an automated testing message server and an automated testing message terminal running on a set of target devices. The controller and the terminals register themselves with the message server. A testing case is programmed for testing an SDK on a number of target devices concurrently and transformed into a set of command messages in JSON message format. The controller sends the set of messages to the message server. The message server then distributes the set of command messages to the terminals. In response, the terminal calls corresponding APIs of the SDK. The APIs called can be the same or different between the devices within the set of target devices. The SDK returns a result that is forwarded to the server. The server sends the results from the target devices to the controller. The controller verifies the results. | 2021-12-02 |
20210374042 | AUTOMATIC PORTABLE DEVICE TESTING METHOD AND SYSTEM - Systems and methods of testing portable access devices are described. One method includes receiving, by a reader controller, a configuration file comprising configuration data and providing, by the reader controller, a configuration command associated with the configuration data to an access application on an access device. The method also includes configuring the access application on the access device for an interaction scenario based on the configuration command and initiating the interaction scenario between the access device and a test tool. The method also includes receiving, by the access device, interaction data from the test tool and processing, by the access device, the interaction data, thereby generating test output data. The method also includes providing, by the access device, the test output data to the reader controller and sending, by the reader controller, the test output data to the test tool. | 2021-12-02 |
20210374043 | ZERO CODING AUTOMATION WITH NATURAL LANGUAGE PROCESSING, SUCH AS FOR USE IN TESTING TELECOMMUNICATIONS SOFTWARE AND RESOURCES - A framework, such as for automated testing of telecommunications software and resources, is disclosed that reuses code modules, thereby reducing redundancy and increasing efficiency and productivity. The zero coding automation system disclosed herein provides an end-to-end automation framework, which minimizes (and in some cases eliminates) the requirement to write software code, e.g. to test software modules. Instead, the coding automation systems and methods provide a hierarchical framework to translate requests (e.g. testing commands, statements, and so on) received in a natural language (for example, English) to testing code modules written in, for example, one or more programming languages (for example, tool specific Application Program Interface (API)/libraries developed to test functionality). | 2021-12-02 |
20210374044 | TESTING AGENT FOR APPLICATION DEPENDENCY DISCOVERY, REPORTING, AND MANAGEMENT TOOL - Techniques for monitoring operating statuses of an application and its dependencies are provided. A monitoring application may collect and report the operating status of the monitored application and each dependency. Through use of existing monitoring interfaces, the monitoring application can collect operating status without requiring modification of the underlying monitored application or dependencies. The monitoring application may determine a problem service that is a root cause of an unhealthy state of the monitored application. Dependency analyzer and discovery crawler techniques may automatically configure and update the monitoring application. Machine learning techniques may be used to determine patterns of performance based on system state information associated with performance events and provide health reports relative to a baseline status of the monitored application. Also provided are techniques for testing a response of the monitored application through modifications to API calls. Such tests may be used to train the machine learning model. | 2021-12-02 |
20210374045 | SAVING VIRTUAL MEMORY SPACE IN A CLONE ENVIRONMENT - Virtual memory space may be saved in a clone environment by leveraging the similarity of the data signatures in swap files when a chain of virtual machines (VMs) includes clones spawned from a common parent and executing common applications. Deduplication is performed across the chain, rather than merely within each VM. Examples include generating a common deduplication identifier (ID) for the chain; generating a logical addressing table linked to the deduplication ID, for each of the VMs in the chain; and generating a hash table for the chain. Examples further include, based at least on a swap out request, generating a hash value for a block of memory to be written to a storage medium; and based at least on finding the hash value within the hash table, updating the logical addressing table to indicate a location of a prior-existing duplicate of the block on the storage medium. | 2021-12-02 |
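The chain-wide deduplication flow described in 20210374045 — one hash table shared across all clones, per-VM logical addressing tables pointing at prior-existing copies — can be sketched as follows. This is a minimal illustration; the class, method, and field names are assumptions, not taken from the patent.

```python
import hashlib

class ChainDeduplicator:
    """Sketch of chain-level swap deduplication: every VM spawned from a
    common parent shares one hash table keyed by the chain's dedup ID."""

    def __init__(self, chain_id):
        self.chain_id = chain_id        # common deduplication ID for the chain
        self.hash_table = {}            # block hash -> storage location
        self.logical_tables = {}        # vm_id -> {logical block -> storage location}
        self.next_location = 0

    def swap_out(self, vm_id, logical_block, data: bytes):
        """On swap-out, physically write the block only if no VM in the
        chain has written an identical block before."""
        table = self.logical_tables.setdefault(vm_id, {})
        digest = hashlib.sha256(data).hexdigest()
        if digest in self.hash_table:
            # duplicate found: point this VM's logical table at the
            # prior-existing copy instead of writing again
            table[logical_block] = self.hash_table[digest]
            return False                # no physical write needed
        location = self.next_location
        self.next_location += 1
        self.hash_table[digest] = location
        table[logical_block] = location
        return True                     # block physically written
```

Two clones swapping out the same page trigger only one physical write; both logical tables end up pointing at the same storage location.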
20210374046 | PERFORMANCE COUNTERS FOR COMPUTER MEMORY - In some examples, performance counters for computer memory may include ascertaining a request associated with a memory address range of computer memory. The memory address range may be assigned to a specified performance tier of a plurality of specified performance tiers. A performance value associated with a performance attribute of the memory address range may be ascertained, and based on the ascertained performance value, a weight value may be determined. Based on the ascertained request and the determined weight value, a count value associated with a counter associated with the memory address range may be incremented. Based on an analysis of the count value associated with the counter, a determination may be made as to whether the memory address range is to be assigned to a different specified performance tier of the plurality of specified performance tiers. Based on a determination that the memory address range is to be assigned to the different specified performance tier, the memory address range may be assigned to the specified different performance tier. | 2021-12-02 |
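The weighted-counter scheme in 20210374046 — increment a per-range counter by a weight derived from a performance attribute, then reassign the range to a different tier based on the count — might look like the following sketch. The weight derivation, thresholds, and tier numbering here are illustrative assumptions.

```python
class TieredRangeCounter:
    """Sketch of a weighted access counter for one memory address range,
    used to decide whether the range should change performance tier."""

    def __init__(self, tier, weight=1):
        self.tier = tier        # lower number = hotter/faster tier (assumed)
        self.weight = weight    # derived from a performance attribute, e.g. latency
        self.count = 0

    def record_request(self):
        """Each request to the range increments the counter by the weight."""
        self.count += self.weight

    def suggest_tier(self, promote_at, demote_at):
        """Analyze the count and return the tier the range should move to."""
        if self.count >= promote_at and self.tier > 0:
            return self.tier - 1    # promote to a faster tier
        if self.count <= demote_at:
            return self.tier + 1    # demote to a slower tier
        return self.tier
```

A frequently accessed range accumulates a weighted count that crosses the promotion threshold, while an idle range drifts toward demotion.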
20210374047 | METHODS, DEVICES, AND MEDIA FOR HARDWARE-SUPPORTED OBJECT METADATA RETRIEVAL - Methods and devices for hardware-supported schemes for efficient metadata retrieval are described. The schemes may use hardware to efficiently enforce type safety and speed up memory bound checks without imposing undue memory overhead. Multiple such schemes may be supported by a device, permitting the selection of an optimal scheme based on a given memory allocation request. The schemes may be compatible with legacy code and applicable to a wide range of data objects and system constraints. Compilation, instrumentation, and linking of code to effect such schemes is also described. | 2021-12-02 |
20210374048 | METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR STORAGE MANAGEMENT - Embodiments of the present disclosure relate to a method, an electronic device, and a computer program product for storage management. According to an example implementation of the present disclosure, a method for storage management includes: determining a state of cached data stored in an initial cache space of a storage system including a plurality of cache disks, the state indicating that a size of the cached data does not match a size of the initial cache space; determining, based on the state, a target cache space of the storage system; and storing at least a part of the cached data into the target cache space to change the size of the initial cache space. Therefore, the management performance can be improved, and the storage costs can be reduced. | 2021-12-02 |
20210374049 | SELECT DECOMPRESSION HEADERS AND SYMBOL START INDICATORS USED IN WRITING DECOMPRESSED DATA - One or more units of decompressed data of a plurality of units of decompressed data is written to a target location for subsequent writing to memory. The plurality of units of decompressed data includes a plurality of symbol outputs and has associated therewith a plurality of decompression headers. A determination is made that the subsequent writing to memory of at least a portion of another unit of decompressed data to be written to the target location is to be stalled. A symbol start position of the other unit of decompressed data and a decompression header of a selected unit of the one or more units of decompressed data written to the target location are provided to a component of the computing environment. The decompression header is used for the subsequent writing of the other unit of decompressed data to memory. | 2021-12-02 |
20210374050 | SYSTEM AND METHOD FOR EFFICIENT CACHE COHERENCY PROTOCOL PROCESSING - To reduce latency and bandwidth consumption, systems and methods are provided for grouping multiple cache line request messages in a related and speculative manner. That is, multiple cache lines are likely to have the same state and ownership characteristics, and therefore, requests for multiple cache lines can be grouped. Information received in response can be directed to the requesting processor socket, and information received speculatively (not actually requested, but likely to be requested) can be maintained in a queue or other memory until a request is received for that information, or until discarded to free up tracking space for new requests. | 2021-12-02 |
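The grouped, speculative request scheme of 20210374050 can be sketched as below: one coherence request covers an aligned group of lines, and the lines that were not explicitly asked for sit in a bounded speculative buffer until requested or discarded. Group width, buffer capacity, and all names are illustrative assumptions.

```python
from collections import OrderedDict

GROUP_SIZE = 4  # assumed width of one grouped request

class GroupedRequester:
    """Sketch: a read for one line issues a single grouped request for its
    whole aligned group; extra lines are held speculatively."""

    def __init__(self, memory, queue_capacity=8):
        self.memory = memory               # line address -> data (stand-in for the fabric)
        self.speculative = OrderedDict()   # bounded speculative buffer
        self.queue_capacity = queue_capacity
        self.requests_sent = 0

    def read_line(self, addr):
        if addr in self.speculative:
            return self.speculative.pop(addr)   # speculative hit: no new request
        base = addr - addr % GROUP_SIZE
        self.requests_sent += 1                 # one request for the whole group
        for a in range(base, base + GROUP_SIZE):
            if a == addr:
                continue
            if len(self.speculative) >= self.queue_capacity:
                self.speculative.popitem(last=False)  # discard oldest to free tracking space
            self.speculative[a] = self.memory[a]
        return self.memory[addr]
```

Reading two neighboring lines costs one grouped request instead of two, which is the latency and bandwidth saving the abstract describes.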
20210374051 | STORAGE APPARATUS AND METHOD - A storage apparatus includes: a memory that stores data and main management information, the main management information identifying a physical address of the data; and processing circuitry configured to generate preliminary management information that includes information of the same content as the main management information, and select, as use management information, any one of the main management information and the preliminary management information upon start of the storage apparatus. Access to the data stored in the memory is performed using the selected use management information. | 2021-12-02 |
20210374052 | RELOCATING DATA IN A MEMORY DEVICE - Methods that can facilitate more optimized relocation of data associated with a memory are presented. In addition to a memory controller component, a memory manager component can be employed to increase available processing resources to facilitate more optimal execution of higher level functions. Higher level functions can be delegated to the memory manager component to allow execution of these higher level operations with reduced or no load on the memory controller component resources. A uni-bus or multi-bus architecture can be employed to further optimize data relocation operations. A first bus can be utilized for data access operations including read, write, erase, refresh, or combinations thereof, among others, while a second bus can be designated for higher level operations including data compaction, error code correction, wear leveling, or combinations thereof, among others. | 2021-12-02 |
20210374053 | MEMORY CONTROLLER, MEMORY SYSTEM AND OPERATING METHOD OF MEMORY DEVICE - A memory controller includes a block ratio calculator configured to calculate a ratio of free blocks among memory blocks for storing data; a policy selector configured to select, based on the calculated ratio of free blocks, any one garbage collection policy of a first garbage collection policy of specifying priorities to be used to select a victim block depending on attributes of the data, and a second garbage collection policy of specifying the priorities to be used to select the victim block regardless of the attributes of the data; and a garbage collection performing component configured to perform a garbage collection operation on at least one memory block of the memory blocks according to the garbage collection policy selected by the policy selector. | 2021-12-02 |
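The free-block-ratio policy selection in 20210374053 lends itself to a short sketch: compute the ratio, pick an attribute-aware or attribute-agnostic victim-selection policy, and choose a victim block accordingly. The threshold value and the specific attribute (cold data) are assumptions for illustration.

```python
def select_gc_policy(free_blocks, total_blocks, threshold=0.2):
    """Pick a garbage-collection policy from the free-block ratio
    (the 0.2 threshold is an illustrative assumption)."""
    ratio = free_blocks / total_blocks
    # Plenty of free blocks: afford the attribute-aware policy.
    # Under pressure: select purely by invalid-page count.
    return "attribute_aware" if ratio >= threshold else "attribute_agnostic"

def pick_victim(blocks, policy):
    """blocks: list of (block_id, invalid_pages, holds_cold_data)."""
    if policy == "attribute_aware":
        # prioritize blocks holding cold data, then the most invalid pages
        return max(blocks, key=lambda b: (b[2], b[1]))[0]
    return max(blocks, key=lambda b: b[1])[0]  # most invalid pages wins
```

With ample free blocks the cold-data block is chosen even though another block has more invalid pages; under pressure the invalid-page count alone decides.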
20210374054 | SYSTEM AND METHOD FOR OPTIMIZING NON-VOLATILE RANDOM ACCESS MEMORY USAGE - An information handling system includes a non-volatile memory (NVRAM) and a processor. The NVRAM stores a plurality of NVRAM variables and a basic input/output system (BIOS) of the information handling system. The BIOS includes system BIOS variable services. The processor executes the system BIOS variable services. While executing the system BIOS variable services, the processor determines whether a holding area of a first NVRAM variable of the NVRAM variables is completely used. In response to the holding area being completely used, the processor calculates a new size of the holding area based on metadata of the first NVRAM variable, and creates a new storage area for the first NVRAM variable. The size of a second holding area of the new storage area equals the new size. | 2021-12-02 |
20210374055 | MEMORY ACCESS COMMANDS WITH NEAR-MEMORY ADDRESS GENERATION - A memory controller may be configured with command logic that is capable of sending a memory access command having incomplete address information via a command/address bus that connects the memory controller to memory modules. The memory controller may send the memory access command via the bus for accessing data stored at memory locations of the memory modules. The memory locations may correspond to different near-memory-generated addresses, reflecting that the data is not address-aligned across the memory modules. Nonetheless, because of the near-memory address generation, the memory controller can send the memory access command having incomplete address information for accessing the data stored at the different addresses, as opposed to having to send multiple memory access commands specifying complete address information on the bus for accessing the data at the different addresses, thereby conserving usage of the available bus bandwidth, reducing power consumption, and increasing compute throughput. | 2021-12-02 |
20210374056 | SYSTEMS AND METHODS FOR SCALABLE AND COHERENT MEMORY DEVICES - Provided are systems, methods, and apparatuses for providing a storage resource. The method can include: operating a first controller coupled to a network interface in accordance with a cache coherent protocol; performing at least one operation on data associated with a cache using a second controller coupled to the first controller and coupled to a first memory; and storing the data on a second memory coupled to one of the first controller or the second controller. | 2021-12-02 |
20210374057 | LOW LATENCY INTER-CHIP COMMUNICATION MECHANISM IN A MULTI-CHIP PROCESSING SYSTEM - Systems and methods of multi-chip processing with low latency and congestion. In a multi-chip processing system, each chip includes a plurality of clusters arranged in a mesh design. A respective interconnect controller is disposed at the end of each column. The column is linked to a corresponding remote column in the other chip. A share cache controller in the column is paired with a corresponding cache controller in the remote column, the pair of cache controllers are configured to control data caching for a same set of main memory locations. Communications between cross-chip cache controllers are performed within linked columns of clusters via the column-specific inter-chip interconnect controllers. | 2021-12-02 |
20210374058 | PARTITIONED MID-TIER CACHE BASED ON USER TYPE - A server includes a data cache for storing data objects requested by users logged in under different user roles. Different user roles may have different permissions to access individual fields within a data object. When a cache miss occurs, the cache may begin loading portions of a requested data object from various data sources. Instead of waiting for the entire object to load to change the object state to “valid,” the cache may incrementally update the state through various levels of validity based on the user role of the request. When a portion of the data object used by a low-level user role is received, the object state can be upgraded to be valid for that user role while data for higher-level user roles continues to load. The portion of the data object can then be sent to the low-level user roles without waiting for the rest of the data object to load. | 2021-12-02 |
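The incremental validity-by-role idea in 20210374058 — serve a low-privilege request as soon as its fields arrive, while higher-privilege fields keep loading — can be sketched with a small state machine. The validity levels and field names here are illustrative assumptions, not the patent's terminology.

```python
from enum import IntEnum

class Validity(IntEnum):
    EMPTY = 0
    BASIC = 1   # fields a low-privilege role may read have arrived
    FULL = 2    # all fields loaded

class RoleAwareCacheEntry:
    """Sketch: upgrade a cache entry's state incrementally so requests from
    low-privilege roles are answered before the whole object loads."""

    def __init__(self):
        self.fields = {}
        self.state = Validity.EMPTY

    def load_basic(self, basic_fields):
        """First data source returns: entry becomes valid for low-level roles."""
        self.fields.update(basic_fields)
        self.state = max(self.state, Validity.BASIC)

    def load_full(self, remaining_fields):
        """Remaining sources return: entry becomes valid for all roles."""
        self.fields.update(remaining_fields)
        self.state = Validity.FULL

    def get(self, required: Validity):
        """Return the fields only if the entry is valid enough for this role."""
        return dict(self.fields) if self.state >= required else None
```

A low-level role's request succeeds after `load_basic`, while a high-level role's request for the same entry waits until `load_full` completes.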
20210374059 | Core-to-Core Cache Stashing and Target Discovery - A method and apparatus are disclosed for transferring data from a first processor core to a second processor core. The first processor core executes a stash instruction having a first operand associated with a data address of the data. A second processor core is determined to be a stash target for a stash message, based on the data address or a second operand. A stash message is sent to the second processor core, notifying the second processor core of the written data. Responsive to receiving the stash message, the second processor core can opt to store the data in its cache. The data may be included in the stash message or retrieved in response to a read request by the second processing core. The second processor core may be determined by prediction based, at least in part, on monitored data transactions. | 2021-12-02 |
20210374060 | Timed Data Transfer between a Host System and a Memory Sub-System - A memory sub-system configured to schedule the transfer of data from a host system for write commands to reduce the amount and time of data being buffered in the memory sub-system. For example, after receiving a plurality of streams of write commands from a host system, the memory sub-system identifies a plurality of media units in the memory sub-system for concurrent execution of a plurality of write commands respectively. In response to the plurality of commands being identified for concurrent execution in the plurality of media units respectively, the memory sub-system initiates communication of the data of the write commands from the host system to a local buffer memory of the memory sub-system. The memory sub-system has capacity to buffer write commands in a queue, for possible out of order execution, but limited capacity for buffering only the data of a portion of the write commands that are about to be executed. | 2021-12-02 |
20210374061 | INSTRUCTION CACHING SCHEME FOR MEMORY DEVICES - Methods, systems, and devices for an enhanced instruction caching scheme are described. A memory controller may include a first closely-coupled memory component that is associated with storing data and control information and a second closely-coupled memory component that is associated with storing control information. The memory controller may be configured to retrieve data from the first closely-coupled memory component and control information from the second closely-coupled memory component. Control information may be stored in the first closely-coupled memory component, and a memory controller may access the control information stored in the first closely-coupled memory component by transferring, from the first closely-coupled memory component, the control information into the second closely-coupled memory component. After transferring the control information into the second closely-coupled memory component, the memory controller may access the control information from the second closely-coupled memory component. | 2021-12-02 |
20210374062 | SECTOR CACHE FOR COMPRESSION - In an example, an apparatus comprises a plurality of execution units, and a cache memory communicatively coupled to the plurality of execution units, wherein the cache memory is structured into a plurality of sectors, wherein each sector in the plurality of sectors comprises at least two cache lines. Other embodiments are also disclosed and claimed. | 2021-12-02 |
20210374063 | METHOD FOR PROCESSING PAGE FAULT BY PROCESSOR - Disclosed is a method for processing a page fault. The method includes performing demand paging depending on an application operation in a system including a processor and an operating system, and loading, at the processor, data on a memory in response to the demand paging. | 2021-12-02 |
20210374064 | BYPASS PREDICTOR FOR AN EXCLUSIVE LAST-LEVEL CACHE - A system and a method to allocate data to a first cache increment a first counter if a reuse indicator for the data indicates that the data is likely to be reused and decrement the first counter if the reuse indicator for the data indicates that the data is likely not to be reused. A second counter is incremented upon eviction of the data from a second cache, which is a higher-level cache than the first cache. The data is allocated to the first cache if the value of the first counter is equal to or greater than a first predetermined threshold or the value of the second counter equals zero, and the data is bypassed from the first cache if the value of the first counter is less than the first predetermined threshold and the value of the second counter is not equal to zero. | 2021-12-02 |
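The two-counter allocate/bypass decision in 20210374064 is compact enough to sketch directly. Counter widths, the threshold value, and all names are illustrative assumptions; the decision rule follows the abstract's wording.

```python
class BypassPredictor:
    """Sketch of the two-counter bypass predictor: allocate to the
    lower-level cache unless both conditions say bypass."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.reuse_counter = 0      # first counter: tracks reuse hints
        self.eviction_counter = 0   # second counter: higher-level cache evictions

    def on_reuse_hint(self, likely_reused: bool):
        """Increment on a likely-reused hint, decrement otherwise."""
        self.reuse_counter += 1 if likely_reused else -1

    def on_higher_level_eviction(self):
        self.eviction_counter += 1

    def should_allocate(self):
        """Allocate if the reuse counter meets the threshold, or if no
        higher-level evictions have occurred; bypass otherwise."""
        return (self.reuse_counter >= self.threshold
                or self.eviction_counter == 0)
```

Before any higher-level eviction the predictor always allocates; after evictions begin, allocation requires the reuse counter to reach the threshold.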
20210374065 | EFFECTIVE AVOIDANCE OF LINE CACHE MISSES - A system includes a line cache, a memory device, and a processing device to execute firmware to detect that a received event is located in an events list, wherein events stored in the events list are associated with critical functions that occur no more than once per a threshold number of days and time out after between 15 microseconds and a predetermined number of hundreds of seconds. The firmware is further to enable access to the line cache and execute a critical function associated with the received event out of an always-loaded area of the line cache. | 2021-12-02 |
20210374066 | MEMORY SYSTEM WITH A PREDICTABLE READ LATENCY FROM MEDIA WITH A LONG WRITE LATENCY - Systems and methods related to a memory system with a predictable read latency from media with a long write latency are described. An example memory system includes an array of tiles configured to store data corresponding to a cache line associated with a host. The memory system further includes control logic configured to, in response to a write command from a host, initiate writing of a first cache line to a first tile in a first row of the tiles, a second cache line to a second tile in a second row of the tiles, a third cache line to a third tile in a third row of the tiles, and a fourth cache line in a fourth row of the tiles. The control logic is configured to, in response to a read command from the host, initiate reading of data stored in an entire row of tiles. | 2021-12-02 |
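The row-striped layout in 20210374066 — each cache line of a write goes to a tile in a different row, and a read returns an entire row — can be illustrated with a toy tile array. The 4x4 geometry and column-per-write placement are assumptions made for the sketch.

```python
class TiledMedia:
    """Sketch of the row-striped tile layout: one write distributes its
    cache lines down a column (one per row), and a read returns a full row."""

    def __init__(self, rows=4, cols=4):
        self.tiles = [[None] * cols for _ in range(rows)]
        self.next_col = 0

    def write_lines(self, lines):
        """Write one cache line per row, all landing in the same column,
        so the long per-tile write latency is overlapped across rows."""
        assert len(lines) == len(self.tiles)
        col = self.next_col
        for row, line in enumerate(lines):
            self.tiles[row][col] = line
        self.next_col += 1

    def read_row(self, row):
        """A read command returns the data stored in an entire row of tiles."""
        return list(self.tiles[row])
```

Because every write touches each row exactly once, a row read never waits behind more than one in-flight write per tile, which is what makes the read latency predictable despite the slow write path.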
20210374067 | Moving Change Log Tables to Align to Zones - The present disclosure generally relates to methods of operating storage devices. The storage device is comprised of a controller, a random access memory (RAM) unit, and a NVM unit, wherein the NVM unit is comprised of a plurality of zones. The RAM unit comprises a first logical to physical address table and the NVM unit comprises a second logical to physical address table. The zones are partitioned into sections, and each partitioned section aligns with a change log table. Data is written to each zone sequentially, and only one partitioned section is updated at a time for each zone. Each time a zone is erased or written to in the NVM unit, the first logical to physical address table is updated and the second logical to physical address table is periodically updated to match the first logical to physical address table. | 2021-12-02 |
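The zone-aligned change-log scheme in 20210374067 — a RAM-resident logical-to-physical table updated on every write, with per-zone change logs periodically folded into the NVM copy — can be sketched as follows. Class names, table shapes, and the flush trigger are assumptions for illustration.

```python
class ZonedL2P:
    """Sketch: per-zone change logs in RAM are flushed periodically into
    the NVM copy of the logical-to-physical (L2P) address table."""

    def __init__(self, zones):
        self.ram_l2p = {}    # first (RAM) L2P table, always current
        self.nvm_l2p = {}    # second (NVM) L2P table, updated periodically
        self.change_logs = {z: {} for z in range(zones)}

    def write(self, zone, logical, physical):
        """Every zone write updates the RAM table and the zone's change log."""
        self.ram_l2p[(zone, logical)] = physical
        self.change_logs[zone][logical] = physical

    def flush_zone(self, zone):
        """Periodic sync: fold one zone's change log into the NVM table so
        it matches the RAM table for that zone."""
        for logical, physical in self.change_logs[zone].items():
            self.nvm_l2p[(zone, logical)] = physical
        self.change_logs[zone].clear()
```

Between flushes the NVM table lags behind the RAM table; after `flush_zone` the two agree for that zone and the change log is empty, mirroring the periodic-match behavior the abstract describes.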