18th week of 2021 patent application highlights part 55 |
Patent application number | Title | Published |
20210132951 | METHOD FOR SOLVING THE PROBLEM OF CLUSTERING USING CELLULAR AUTOMATA BASED ON HEAT TRANSFER PROCESS - A computer-implemented method, which enables data to be clustered without performing any distance calculations among the points of the dataset, includes assigning points of a dataset to cells of a cellular automaton; assigning each cell having a data point a distinct state value and a constant temperature value; assigning all cells without a data point a unique state value, different from the state values used for cells having a data point, and a temperature lower than the constant temperature value; selecting a cell in the cellular automaton randomly; calculating the average temperature of the selected cell and its neighbor cells; setting the temperature of the cells having no data point to the average temperature; and, if a neighbor cell's temperature is above a predetermined threshold value, moving that neighbor cell to the state of the selected cell. | 2021-05-06 |
20210132952 | CODE AND DATA SHARING AMONG MULTIPLE INDEPENDENT PROCESSORS - A system includes a memory and multiple processors. The memory further includes a shared section and a non-shared section. The processors further include at least a first processor and a second processor, both of which have read-only access to the shared section of the memory. The first processor and the second processor are operable to execute shared code stored in the shared section of the memory, and execute non-shared code stored in a first sub-section and a second sub-section of the non-shared section, respectively. The first processor and the second processor execute the shared code according to a first scheduler and a second scheduler, respectively. The first scheduler operates independently of the second scheduler. | 2021-05-06 |
20210132953 | ARITHMETIC DEVICES FOR NEURAL NETWORK - An arithmetic device includes an input distribution signal generation circuit, an output distribution signal generation circuit, and an output distribution signal compensation circuit. The input distribution signal generation circuit generates an input distribution signal and a compensation signal based on an arithmetic result signal generated from a result of a multiplying-accumulating (MAC) calculation. The output distribution signal generation circuit applies the input distribution signal to an activation function to generate first and second output distribution signals. The output distribution signal compensation circuit compensates for the first output distribution signal based on the compensation signal, the first output distribution signal, and the second output distribution signal to generate a compensated distribution signal. | 2021-05-06 |
20210132954 | ARITHMETIC DEVICES FOR NEURAL NETWORK - An arithmetic device includes a multiplying-accumulating (MAC) operator and an activation function (AF) circuit. The MAC operator performs a MAC arithmetic operation for weight data and vector data to generate an arithmetic result signal. The AF circuit extracts a first bit group and a second bit group from the arithmetic result signal. In addition, the AF circuit generates an input distribution signal based on the first bit group and the second bit group. Moreover, the AF circuit selects and outputs an output distribution signal that corresponds to the input distribution signal based on an activation function. | 2021-05-06 |
20210132955 | Secure Start System for an Autonomous Vehicle - A secure start system for an autonomous vehicle can include a communications router comprising an input interface to receive a boot-loader to enable network communications with a backend system. The secure start system utilizes a tunnel key from the backend system to establish a private communications session with a backend data vault. The secure start system then retrieves a set of decryption keys from the backend data vault, via the private communications session, to decrypt a plurality of encrypted drives of the autonomous vehicle, which enables one or more functions of the autonomous vehicle. | 2021-05-06 |
20210132956 | METHOD AND APPARATUS FOR MANAGING SCHEDULING OF SERVICES DURING BOOT-UP - The embodiments herein provide a method for managing scheduling of services during a boot-up process in an electronic device including a multi-core processor. The method includes determining a plurality of services initiated during the boot-up process of the electronic device. Further, the method includes registering system parameters associated with the electronic device for each one of the determined services. Further, the method includes determining whether each service is critical or non-critical for the boot-up process. Further, the method includes tagging label data to each one of the determined services, wherein the label data represents whether the service is critical or non-critical. Further, the method includes clustering each of the services into one of an accelerating cluster and a decelerating cluster based on the registered system parameters associated with the electronic device and the tagged label data. | 2021-05-06 |
20210132957 | CONFIGURATION AFTER CLUSTER MIGRATION - A method, computer system and computer program product for processing configuration after a cluster migration are provided. In this method, a network booting program is received at a computing node from a management node for a cluster. The cluster includes at least one computing node. An operating system is booted in a memory of the computing node with the received network booting program. Configuration changes are received from the management node, and the configurations in a local storage of the computing node are updated according to the received configuration changes. | 2021-05-06 |
20210132958 | System and Method for Suspending and Processing Commands on a Configuration Object - A method, computer program product, and computing system for receiving a plurality of input/output (IO) commands for a plurality of configuration objects of a storage system. A modification command for a configuration object of the plurality of configuration objects may be received. The configuration object may be suspended in response to receiving the modification command. One or more IO commands directed to the suspended configuration object from the plurality of IO commands may be processed before the configuration object is modified. | 2021-05-06 |
20210132959 | BOOTSTRAPPING FRAMEWORKS FROM A GENERATED STATIC INITIALIZATION METHOD FOR FASTER BOOTING - A system includes a memory, a processor in communication with the memory, and a compiler. The compiler is configured to initialize at least one class for an application at compilation time, start a framework at compilation time, and serialize a framework container of the framework into a native image at compilation time. The processor is configured to run the native image to start the application. | 2021-05-06 |
20210132960 | VISUAL TRIGGER CONFIGURATION OF A CONVERSATIONAL BOT - A visual trigger node, indicative of a trigger, is displayed on a first display portion. A trigger configuration user interface is also displayed. A property input, indicative of a property that activates the trigger, is detected and code is generated to map the trigger to the property. | 2021-05-06 |
20210132961 | CONFIGURATION SYNTHESIS UTILIZING INFORMATION EXTRACTION FROM SERVICE ORIENTED ARCHITECTURES - A method to generate configuration data to enable and/or to enhance real-time communication in a cyber-physical system or in a cyber-physical system of systems. The system includes components connected to each other by a communication infrastructure. The components each execute at least one application, and each application exchanges information with at least one application being executed on another component. The components are configured to send and/or receive said information according to configuration data. First configuration data for two or more of the components, on each of which at least one application is executed, is generated by execution of a publish-subscribe protocol, which is executed by the two or more components for which the first configuration data are provided. The first configuration data is used as input to a process that produces second configuration data, wherein (i) the first and second configuration data are not equal, and (ii) the two or more components, on each of which at least one application is executed, use said second configuration data as configuration data for their real-time communication. | 2021-05-06 |
20210132962 | REAL TIME RESTRUCTURING OF ENTERPRISE OR SUPPLY CHAIN APPLICATION - The present invention provides a system and method for restructuring of an enterprise application (EA) or a supply chain management (SCM) application. The system and method enable a user to restructure the applications dynamically by operating with configurable user interface components without a need to involve a developer for coding. The system includes a platform layer component and a data layer component associated with UI component for executing a task. The system further includes a rule engine configured to interact with a dynamic module injector for conditionally loading modules on the user interface for restructuring the applications thereby redefining EA and SCM operations. | 2021-05-06 |
20210132963 | SYSTEM AND METHOD FOR PRESENTING NOTIFICATIONS ON HEAD-MOUNTED DISPLAY DEVICE AND EXTERNAL COMPUTING DEVICE - A system for presenting notifications on a display device and an external computing device. The display device includes an image renderer, an external visual indicator, and a first processor; the external computing device comprises a display and a second processor. The system comprises first and second clients executing on the first processor and a third client executing on the second processor, wherein the first, second, and third clients are configured to generate and render first, second, and third user interfaces on the image renderer, the external visual indicator, and the display, respectively; and a control server. The control server is configured to obtain information; detect whether or not a notification is to be presented; determine the notification type and content; and select clients from amongst the plurality of clients and send the content to the selected clients, wherein the selected clients are configured to generate and render their respective user interfaces to present the notification substantially simultaneously. | 2021-05-06 |
20210132964 | SYSTEM AND METHOD FOR PRESENTING AN OBJECT - Method, system for presenting an object on a computing device. A metaphor application on a computing device organizes a user interface based upon a metaphor. The metaphor organizes a document, file, application, or combination thereof based on geospheric direction, geolocation, or both. The metaphor may also organize a document, file, application, data, or a combination thereof based on a solid geometrical figure in three-dimensional Euclidean space. A document, file, application, or any combination thereof may be associated with a geospheric direction, a geolocation, or both. The document, file, application, data, or any combination thereof may further be associated with a solid geometrical figure. A presentation object containing data on the document, file, application, data, or combination thereof, and the geospheric direction, geolocation, or both is formatted into data blocks for rendering on a display. The display may be the display screen of the computing device. The metaphor application causes the presentation object to be rendered on the display when the computing device is pointing in the geospheric direction, located in the geolocation, or both, as associated with the presentation object. | 2021-05-06 |
20210132965 | SYSTEM AND METHOD FOR DISPLAYING CUSTOMIZED USER GUIDES IN A VIRTUAL CLIENT APPLICATION - Systems and methods for displaying a user guide in a Client Virtual Application (“CVA”). The methods include determining, by a client device, a position associated with a user interaction in the CVA. The position of the user interaction may be a mouse position relative to the CVA's window or a position of a widget of the CVA's window with which the user is interacting via an input device. The client device transfers the position data and application name to a server device on the network. The server device subsequently retrieves, from a content datastore, user guide content associated with the application and position, and transfers the user guide content for rendering on the client device. The server device may also determine a display position and send it to the client device. The client device may render the user guide according to a rendering policy. | 2021-05-06 |
20210132966 | CUSTOMIZABLE ANIMATIONS - Disclosed herein are system, method, and device embodiments for implementing dynamic customizable animations. A multi-tenant service may configure a visual component of an application to present an animation based on a rule, generate a multi-tenant entity record defining the rule, and generate application code associated with the application. Further, the multi-tenant service may receive a request for animation information associated with the multi-tenant entity record, and send the animation information to the client device. Further, in some embodiments, a client device executing the application may present the animation based on evaluating the animation information. | 2021-05-06 |
20210132967 | SYSTEM AND METHOD FOR CONFIGURATION MANAGEMENT DATABASE, GOVERNANCE, AND SECURITY IN A COMPUTING ENVIRONMENT - A Hybrid Configuration Management Database methodology is disclosed. In a computer-implemented method, components of a computing environment are automatically monitored, and have a feature selection analysis performed thereon. Provided the feature selection analysis determines that features of the components are subjectively defined, a classification of the features is performed. Provided the feature selection analysis determines that features of the components are not well defined, a similarity analysis of the features is performed. Results of the feature selection methodology are generated. | 2021-05-06 |
20210132968 | SYSTEM AND METHOD FOR IMPLEMENTING TRUSTED EXECUTION ENVIRONMENT ON PCI DEVICE - System and method for providing trusted execution environments uses a peripheral component interconnect (PCI) device of a computer system to receive and process commands to create and manage a trusted execution environment for a software process running in the computer system. The trusted execution environment created in the PCI device is then used to execute operations for the software process. | 2021-05-06 |
20210132969 | Quantum Virtual Machine for Simulation of a Quantum Processing System - Quantum operations can be simulated on a classical processing system using a quantum virtual machine (QVM). The QVM receives a quantum virtual state including a virtual wavefunction of n qubits. The virtual wavefunction is represented by probability amplitudes stored in a memory location of the classical processing system. The QVM simulates a received quantum operation by determining a set of virtual partial wavefunctions, accessing probability amplitudes for the virtual partial wavefunctions, and executing the quantum operation on the sub-bitstrings. The QVM can measure the result of the quantum operation, add noise, share the virtual wavefunction, or generate efficient machine instructions when simulating the quantum operation. | 2021-05-06 |
20210132970 | SYSTEM AND METHOD FOR IMPLEMENTING A GENERIC PARSER MODULE - Various methods, apparatuses/systems, and media for implementing a generic parser module are disclosed. A repository stores a plurality of files, each having a corresponding file format. A processor accesses the repository to obtain the plurality of files and format each file into a class-based logical hierarchy. The processor also creates a Java model based on the formatted files having the class-based logical hierarchy, each file having a plurality of data and the Java model having file-level classes that contain a list of said plurality of data; generates Java annotations identifying each file type included in the file-level classes; injects the Java annotations into the file-level classes of the Java model to indicate how to process each file; calls a method along with the file containing the data; and parses the file to read the data into the Java model injected with the Java annotations. | 2021-05-06 |
20210132971 | CHANNEL IDENTIFIER COUPLING WITH VIRTUALIZED COMMUNICATIONS HARDWARE - Embodiments include a method of organizing communications channels associated with virtual functions of a single root input and output virtualization (SR-IOV) adaptor. The method includes organizing a first coupling channel according to a first channel path identifier bound to designated communications over a first virtual function of the SR-IOV adaptor allocated according to first virtual function resources that support the first coupling channel. The method also includes enabling access to the first coupling channel to a first guest operating system. The method also includes receiving a teardown command associated with the first coupling channel. The method further includes initiating a reset of the first virtual function that deallocates the first virtual function resources associated with the first virtual function. | 2021-05-06 |
20210132972 | Data Storage System Employing Dummy Namespaces For Discovery of NVMe Namespace Groups as Protocol Endpoints - A data storage system (DSS) in a cluster provides virtual-volume data storage to virtual-computing (VC) hosts using NVMe-oF storage interconnect. A DSS creates protocol endpoints (PEs) and corresponding namespace groups, each being a grouping of namespaces corresponding to virtual volumes (vVols) to be bound for access by a respective VC host, and each namespace being mapped to corresponding underlying physical storage. Each namespace group is initially created with a corresponding in-band discoverable dummy namespace. In response to in-band storage discovery commands from the VC hosts, and based on the existence of the dummy namespaces, the DSS responds with responses identifying the namespace groups. Then in response to subsequent vVol creation commands from the VC hosts, the DSS creates new namespaces in respective namespace groups and provides namespace identifiers for the new namespaces to the VC hosts for use in accessing data of the vVols. | 2021-05-06 |
20210132973 | GUEST-TO-HOST VIRTUAL NETWORKING - Guest-to-host virtual networking can include linking a virtual entity proxy to a network adapter of a host machine through a virtual bridge. In response to a request that starts a guest running on the host machine, the guest can be configured to point to the virtual entity proxy and to communicatively couple to a network through the virtual entity proxy linked to the network adapter of the host machine. The virtual entity proxy can be bound to the network, such that the virtual entity proxy intermediates communications between the guest and one or more other guests running on one or more different host machines that are also communicatively coupled to the network. | 2021-05-06 |
20210132974 | SYSTEM FOR PEERING CONTAINER CLUSTERS RUNNING ON DIFFERENT CONTAINER ORCHESTRATION SYSTEMS - A system of one or more computers is configured to peer container clusters running on different container orchestration systems. One general aspect includes moving an endpoint service container between an original cluster and a target cluster of a cluster mesh. In at least one embodiment, a remote service endpoint container is instantiated at the original cluster using service registry information accessed by a mesh operator. In at least one embodiment, the service registry information includes the hostname/path information for the endpoint service container operating at the target cluster. The remote service endpoint container is configured to allow the dependent container at the original cluster to consume services available at the endpoint service container at the target cluster as though the endpoint service container is local to the dependent container. | 2021-05-06 |
20210132975 | AUTOMATED HOST ATTESTATION FOR SECURE RUN-TIME ENVIRONMENTS - A virtualization host is identified for an isolated run-time environment. One or more records generated at a security module of the host, which indicate that a first phase of a multi-phase establishment of an isolated run-time environment has been completed by a virtualization management component of the host, is transmitted to a resource verifier. In response to a host approval indicator from the resource verifier, the multi-phase establishment is completed at the virtualization host. | 2021-05-06 |
20210132976 | CLOUD-BASED MANAGED NETWORKING SERVICE THAT ENABLES USERS TO CONSUME MANAGED VIRTUALIZED NETWORK FUNCTIONS AT EDGE LOCATIONS - A method for providing a managed networking service for a cloud computing system enables users to consume managed virtualized network functions (VNFs) at edge locations. The method includes registering a plurality of third-party vendors for the managed networking service. The plurality of third-party vendors provide a plurality of VNFs for the managed networking service. The method also includes receiving user input from a user of the cloud computing system. The user input includes a request to deploy the plurality of VNFs at an edge location. The plurality of VNFs can be provided by different third-party vendors through the managed networking service. The method also includes causing the plurality of VNFs to be deployed on an edge device that is located at the edge location. The plurality of VNFs can be represented as logical entities in a database that is utilized by the managed networking service. | 2021-05-06 |
20210132977 | INFORMATION PROCESSING APPARATUS, PROGRAM, AND INFORMATION PROCESSING SYSTEM - According to one embodiment, an information processing apparatus includes: a resource calculator configured to calculate a computing resource amount required to execute a test on a computer platform, the test causing an emulator to transmit data based on a communication model defined in a test scenario and causing a service to receive the data, and configured to determine allocation of the emulator for a computer on the computer platform; a first controller configured to access the computer platform to acquire the computing resource amount; and a second controller configured to configure a setting of the emulator allocated to the computer. | 2021-05-06 |
20210132978 | VIRTUALIZATION SYSTEM AND OPERATION MANAGEMENT METHOD - In a virtualization system that includes a hypervisor that performs OSID management for linking a plurality of OSs with resources, a guest OS that receives an initial value from the hypervisor and sets an OSID for each resource, and an OSID manager that sets an OSID for each resource, a new OSID created by the OSID generator in the OSID manager after a certain period of time has elapsed since setting the initial value is set to the guest OS and the IP (resource), and an update to the new OSID is requested by the update controller in the OSID manager. This enables simultaneous updating of the OSIDs of the guest operating system and the resources, thus achieving high robustness. | 2021-05-06 |
20210132979 | GOAL-DIRECTED SOFTWARE-DEFINED NUMA WORKING SET MANAGEMENT - Initializing a software-defined server having software-defined NUMA domains includes, when booting a virtual environment defined by a set of hyper-kernels running on a plurality of physically interconnected computing nodes, accessing information associated with a software-defined NUMA domain configuration. It further includes, based at least in part on the accessed information, assigning software-defined NUMA domains to computing nodes. It further includes assigning virtualized resources to the software-defined NUMA domains. | 2021-05-06 |
20210132980 | MULTI-SITE VIRTUAL INFRASTRUCTURE ORCHESTRATION OF NETWORK SERVICE IN HYBRID CLOUD ENVIRONMENTS - A method of deploying a network service (NS) across multiple data centers includes identifying virtual network functions (VNFs) associated with the NS in response to a request for or relating to the NS, generating commands to deploy VNFs based on VNF descriptors, and issuing the commands to the data centers to deploy VNFs. The data centers each have a cloud management server in which cloud computing management software is run to provision virtual infrastructure resources thereof for a plurality of tenants. The cloud computing management software of a first data center is different from the cloud computing management software of a second data center, and the commands issued to the first and second data centers are each a generic command that is not in a command format of the cloud computing management software of either the first data center or the second data center. | 2021-05-06 |
20210132981 | MULTI-SITE VIRTUAL INFRASTRUCTURE ORCHESTRATION OF NETWORK SERVICE IN HYBRID CLOUD ENVIRONMENTS - A method of deploying a virtual network function of a network service in a data center having a cloud management server running a cloud computing management software to provision virtual infrastructure resources of the data center to at least one tenant, includes generating at least first and second API calls to the cloud computing management software in response to external commands received at the data center to deploy a virtual network function, and executing at least the first and second API calls by the cloud computing management software to deploy the virtual network function. The cloud computing management software creates at least one virtual machine by executing the first API call and at least one virtual disk by executing the second API call. | 2021-05-06 |
20210132982 | MULTISITE SERVICE PERSONALIZATION HYBRID WORKFLOW ENGINE - A method of executing workflows in virtual machines that have been deployed to implement virtual network functions of a network service, wherein the virtual machines are running in a plurality of data centers each having a cloud management server running a cloud computing management software to provision virtual infrastructure resources thereof for a plurality of tenants, includes upon receiving a request to execute a workflow along with a plurality of parameters including first and second parameters at a data center, identifying a virtual machine deployed in the data center, in which the workflow is to be executed based on the first parameter, designating one of a plurality of methods by which the workflow is to be executed in the virtual machine according to the second parameter, and issuing a command to the virtual machine to execute the workflow according to the designated method. | 2021-05-06 |
20210132983 | SECURING A MANAGED FORWARDING ELEMENT THAT OPERATES WITHIN A DATA COMPUTE NODE - Some embodiments provide a method for a first managed forwarding element operating within a first data compute node (DCN) that executes on a host machine. From the first DCN, the method receives a packet destined for a second DCN that is logically connected to the first DCN through a set of logical forwarding elements of a logical network. The method performs forwarding processing on the packet in order to (i) identify a particular logical forwarding element in the set of logical forwarding elements, a logical port of which is coupled to the second DCN, and (ii) identify a second managed forwarding element that implements the logical port of the particular logical forwarding element. The method forwards the packet to the second managed forwarding element. | 2021-05-06 |
20210132984 | Handling Memory Requests - A converter module is described which handles memory requests issued by a cache (e.g. an on-chip cache), where these memory requests include memory addresses defined within a virtual memory space. The converter module receives these requests, issues each request with a transaction identifier and uses that identifier to track the status of the memory request. The converter module sends requests for address translation to a memory management unit and, where the translation is not available in the memory management unit, receives further memory requests from the memory management unit. The memory requests are issued to a memory via a bus and the transaction identifier for a request is freed once the response has been received from the memory. When issuing memory requests onto the bus, memory requests received from the memory management unit may be prioritized over those received from the cache. | 2021-05-06 |
20210132985 | SHADOW LATCHES IN A SHADOW-LATCH CONFIGURED REGISTER FILE FOR THREAD STORAGE - A processing system includes a processor core and a scheduler coupled to the processor core. The processing system executes a first active thread and a second active thread in the processor core and detects a swap event for the first active thread or the second active thread. Based on the swap event, the processing system uses a shadow-latch configured fixed mapping system to replace either the first active thread or the second active thread with a shadow-based thread, the shadow-based thread being stored in a shadow-latch configured register file. | 2021-05-06 |
20210132986 | BACK-END TASK FULFILLMENT FOR DIALOG-DRIVEN APPLICATIONS - A determination is made as to whether a value of a first parameter of a first application is to be obtained using a natural language interaction. Based on received input, a first service of a plurality of services is identified. The first service is to be used to perform a first task associated with the first parameter. Portions of the first application to determine the value of the first parameter and to invoke the first service are generated. | 2021-05-06 |
20210132987 | COMPUTER PROGRAM FOR ASYNCHRONOUS DATA PROCESSING IN A DATABASE MANAGEMENT SYSTEM - Disclosed is a non-transitory computer readable medium storing a computer program, in which when the computer program is executed by one or more processors of a computing device, the computer program performs operations for asynchronous data processing in a database management system, and the operations include: dividing an operation corresponding to a query into one or more tasks when the query issued from a client is received; allocating a subtask for each of the one or more tasks to each of one or more worker threads; determining a balance of the processing of the one or more tasks; and reallocating a subtask of a task related to an imbalance to a worker thread if the processing of the one or more tasks is determined to be imbalanced. | 2021-05-06 |
20210132988 | DATA STORAGE DEVICE AND OPERATING METHOD THEREOF - A data storage device includes a shared command queue, a queue controller, a processor, and a memory. The shared command queue is configured to queue a plurality of jobs transmitted from a plurality of host processors. The queue controller is configured to classify the plurality of jobs into a plurality of levels of jobs according to priority threshold values and assign jobs of the plurality of levels to the processor. The processor is configured to process the jobs assigned by the queue controller. The memory may store data needed to process the jobs. | 2021-05-06 |
20210132989 | TASK PROCESSING METHOD, EQUIPMENT, STORAGE MEDIUM AND DEVICE - Disclosed are a task processing method, equipment, storage medium and device. The method includes: acquiring associated conditions of target tasks, and matching the associated conditions with the target tasks to obtain a matching result; establishing a task association table among the target tasks according to the matching result; acquiring an initial execution sequence of the target tasks, and generating a task matrix according to the initial execution sequence and the task association table; upon determining that the task matrix is not in a preset format, adjusting the initial execution sequence until a task matrix obtained according to the task association table and an adjusted execution sequence meets the preset format, and taking the adjusted execution sequence as a target execution sequence; and taking the target execution sequence as a task planning scheme of the target tasks. | 2021-05-06 |
20210132990 | Operator Operation Scheduling Method and Apparatus - An operator operation scheduling method includes obtaining an operator parameter and a processor parameter corresponding to an operator operation, creating N scheduling policies based on the operator parameter and the processor parameter, where the N scheduling policies are classified into M scheduling policy subsets, and each scheduling policy subset includes at least one scheduling policy, filtering the M scheduling policy subsets based on the operator parameter and the processor parameter, to obtain K feasible scheduling policies, where the K feasible scheduling policies are optimal scheduling policies of K feasible scheduling subsets in the M scheduling policy subsets, inputting the operator parameter and the K feasible scheduling policies into a cost model to obtain K operator operation costs, and determining, based on a target requirement and the K operator operation costs, an optimal scheduling policy used for the operator operation. | 2021-05-06 |
20210132991 | TECHNIQUES FOR BEHAVIORAL PAIRING IN A TASK ASSIGNMENT SYSTEM - Techniques for behavioral pairing in a task assignment system are disclosed. In one particular embodiment, the techniques may be realized as a method for behavioral pairing in a task assignment system comprising: determining, by at least one computer processor communicatively coupled to and configured to operate in the task assignment system, a priority for each of a plurality of tasks; determining, by the at least one computer processor, an agent available for assignment to any of the plurality of tasks; and assigning, by the at least one computer processor, a first task of the plurality of tasks to the agent using a task assignment strategy, wherein the first task has a lower priority than a second task of the plurality of tasks. | 2021-05-06 |
20210132992 | TECHNIQUES FOR BEHAVIORAL PAIRING IN A TASK ASSIGNMENT SYSTEM - Techniques for behavioral pairing in a task assignment system are disclosed. In one particular embodiment, the techniques may be realized as a method for behavioral pairing in a task assignment system comprising: determining, by at least one computer processor communicatively coupled to and configured to operate in the task assignment system, a priority for each of a plurality of tasks; determining, by the at least one computer processor, an agent available for assignment to any of the plurality of tasks; and assigning, by the at least one computer processor, a first task of the plurality of tasks to the agent using a task assignment strategy, wherein the first task has a lower priority than a second task of the plurality of tasks. | 2021-05-06 |
20210132993 | RATE LIMITING COMPLIANCE ASSESSMENTS WITH MULTI-LAYER FAIR SHARE SCHEDULING - The embodiments disclosed herein relate to predictive rate limiting. A workload for completing a request is predicted based on, for example, characteristics of a ruleset to be applied and characteristics of a target set upon which the ruleset is to be applied. The workload is mapped to a set of tokens or credits. If a requestor has sufficient tokens to cover the workload for the request, the request is processed. The request may be processed in accordance with a set of processing queues. Each processing queue is associated with a maximum per-tenant workload. A request may be added to a processing queue as long as adding the request does not result in exceeding the maximum per-tenant workload. Requests within a processing queue may be processed in a First In First Out (FIFO) order. | 2021-05-06 |
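The workload-to-token mapping in the entry above can be sketched as a minimal token-balance check: a predicted workload is converted into a token cost, and the request proceeds only if the requestor's balance covers it. The prediction function and token scale are illustrative assumptions, not the patented mapping.

```python
# Sketch of predictive rate limiting: map a predicted workload
# (here, hypothetically, ruleset size times target-set size) to a
# token cost, and admit the request only if the requestor holds
# enough tokens.

def predict_tokens(rule_count, target_count, tokens_per_unit=1):
    # Hypothetical workload model: cost grows with both the ruleset
    # and the target set it is applied to.
    return rule_count * target_count * tokens_per_unit

def try_admit(balances, requestor, rule_count, target_count):
    cost = predict_tokens(rule_count, target_count)
    if balances.get(requestor, 0) >= cost:
        balances[requestor] -= cost
        return True
    return False

balances = {"tenant-a": 100}
accepted = try_admit(balances, "tenant-a", rule_count=5, target_count=10)
rejected = try_admit(balances, "tenant-a", rule_count=20, target_count=10)
```

The first request costs 50 tokens and is admitted; the second would cost 200 against a remaining balance of 50 and is refused.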
20210132994 | PREDICTIVE RESOURCE ALLOCATION FOR NETWORK GROWTH IN AN EDGE COMPUTING NETWORK - The present technology relates to improving computing services in a distributed network of remote computing resources, such as edge nodes in an edge compute network. In an aspect, the technology relates to a method that includes aggregating historical request data for a plurality of requests for services to be performed by one or more edge nodes; training a machine learning model based on the aggregated historical request data; generating, from the trained machine learning model, a prediction for an amount of requests for services at the one or more edge nodes; generating a predicted capacity needed to perform the predicted amount of requests; comparing the predicted capacity to a current capacity for the one or more edge nodes; and based on the comparison, generating a recommendation for an alteration of hardware resources at the one or more edge nodes. | 2021-05-06 |
20210132995 | System, Method, and Computer Program Product for Processing Large Data Sets by Balancing Entropy between Distributed Data Segments - Systems, methods, and computer program products are provided for load balancing for processing large data sets. The method includes identifying a number of segments and a transaction data set comprising transaction data for a plurality of transactions, the transaction data for each transaction of the plurality of transactions comprising a transaction value, determining an entropy of the transaction data set based on the transaction value of each transaction of the plurality of transactions, segmenting the transaction data set into the number of segments based on the entropy of the transaction data set and balancing respective entropies of each segment of the number of segments, and distributing processing tasks associated with each segment of the number of segments to at least one processor of a plurality of processors to process each transaction in each respective segment. | 2021-05-06 |
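The entropy-balanced segmentation in the entry above can be sketched with a Shannon-entropy calculation over transaction values plus a simple split that gives each segment a similar spread of values. The round-robin-after-sorting strategy is an illustrative assumption, not the patented balancing algorithm.

```python
import math
from collections import Counter

# Sketch of entropy-based segmentation: compute the Shannon entropy
# of transaction values, then split the data set so each segment's
# entropy is roughly balanced.

def shannon_entropy(values):
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def segment(transactions, num_segments):
    # Sort by value, then deal round-robin so every segment sees a
    # similar spread of values (and hence a similar entropy).
    ordered = sorted(transactions, key=lambda t: t["value"])
    segments = [[] for _ in range(num_segments)]
    for i, txn in enumerate(ordered):
        segments[i % num_segments].append(txn)
    return segments

txns = [{"value": v} for v in [1, 1, 2, 2, 3, 3, 4, 4]]
segs = segment(txns, 2)
```

For this symmetric data set both segments end up with values `[1, 2, 3, 4]` and therefore identical entropy, so the downstream processors receive equally "hard" work.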
20210132996 | DYNAMIC DETERMINATION OF MEMORY REQUIREMENTS FOR FUNCTION AS A SERVICE MULTI-INVOCATION FLOWS - Embodiments of the present systems and methods may provide techniques to provide simple and accurate estimate of memory requirements for application invocation in a serverless environment. For example, a method may comprise selecting sample invocations of functions as a service from a larger plurality of invocations, submitting for execution the plurality of sample invocations and, for each sample invocation, submitting a specification of a memory size to be used for execution of each sample invocation, determining, whether the specification of the memory size to be used for execution of each sample invocation results in unsuccessful execution of at least some of the sample invocations due to insufficient memory and, if so, adjusting the specification of the memory size for at least some of the sample invocations, and submitting for execution at least those invocations in the larger plurality of invocations that were not included in the plurality of sample invocations. | 2021-05-06 |
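The adjust-on-failure loop in the entry above can be sketched as a calibration routine that grows a trial memory specification until every sampled invocation fits; the peak-memory model and the 1.5x growth factor are illustrative assumptions.

```python
# Sketch of sampling-based memory sizing for function-as-a-service
# invocations: run a sample of invocations under a trial memory
# size, and whenever any sample would fail for insufficient memory,
# grow the specification and retry.

def calibrate(sample_peak_mib, trial_mib, growth=1.5):
    """sample_peak_mib: observed peak memory (MiB) of each sampled
    invocation. Returns the adjusted memory specification."""
    spec = trial_mib
    while any(peak > spec for peak in sample_peak_mib):
        spec = int(spec * growth)  # insufficient memory: grow and retry
    return spec

spec = calibrate(sample_peak_mib=[90, 140, 260], trial_mib=128)
```

Starting from 128 MiB, the 260 MiB sample forces two growth steps (192, then 288), after which the full set of invocations could be submitted with the calibrated specification.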
20210132997 | RESOURCE CONTROL DEVICE, RESOURCE CONTROL METHOD, AND COMPUTER READABLE MEDIUM - A process control unit ( | 2021-05-06 |
20210132998 | SEMICONDUCTOR DEVICE AND CONTROL METHOD THEREOF - A semiconductor device performs exclusive control between a first processor element and a second processor element using a spinlock. Each of the first processor element and the second processor element includes a processing unit and a storage unit. The processing unit generates first spinlock trace information and second spinlock trace information, determines, based on a first spinlock operation state after the first spinlock trace information is generated and the second spinlock trace information, a second spinlock operation state after the second spinlock trace information is generated, and generates an output control signal for determining whether to store the second spinlock trace information in the storage unit in accordance with the second spinlock operation state. | 2021-05-06 |
20210132999 | INTER-SERVER MEMORY POOLING - A memory allocation device for deployment within a host server computer includes control circuitry, a first interface to a local processing unit disposed within the host computer and local operating memory disposed within the host computer, and a second interface to a remote computer. The control circuitry allocates a first portion of the local memory to a first process executed by the local processing unit and transmits, to the remote computer via the second interface, a request to allocate to a second process executed by the local processing unit a first portion of a remote memory disposed within the remote computer. The control circuitry further receives instructions via the first interface to store data at a memory address within the first portion of the remote memory and transmits those instructions to the remote computer via the second interface. | 2021-05-06 |
20210133000 | LIGHTWEIGHT REMOTE PROCESS EXECUTION - The present disclosure involves systems, software, and computer implemented methods for remotely executing binaries in a containerized computing environment using a lightweight inter-process communication (IPC) protocol and UNIX domain sockets. One example method includes establishing, in a shared computing image comprising a plurality of containers, a listening UNIX domain socket, where the listening UNIX domain socket is shared between all containers in the shared computing image. A request to execute a binary in the target container is received at a target container and from a client container using the listening UNIX domain socket. A worker service is generated in the target container. The worker service executes the binary in the target container. A return exit code associated with the executed binary is received and sent to the client container using the UNIX domain socket. | 2021-05-06 |
20210133001 | METHODS AND SYSTEMS FOR OPTIMIZING PROCESSOR USAGE - A method for managing file systems includes receiving, by a processor coordinator, a first operation request, identifying a file system associated with the first operation request, making a first determination that the file system is local, and in response to the first determination identifying a core thread pool associated with the file system, and directing operation of the first operation request to be executed on a core associated with the core thread pool, wherein the core is associated with a processor. | 2021-05-06 |
20210133002 | USING SCRIPTS TO BOOTSTRAP APPLICATIONS WITH METADATA FROM A TEMPLATE - Systems and methods are described for bootstrapping an application with metadata specified in a template. The template specifies a stack of resources that will be used to execute an application and also includes a set of metadata for customizing the resources and the application. When the system receives the template, it instantiates a compute node which will execute the application. The compute node may contain at least one initialization script for bootstrapping the application with the metadata contained in the template. This functionality allows users to bootstrap the application running on the compute node with data from within the template that was used to create the stack of resources for executing the application. In this manner, metadata, configuration files, package names and versions can be passed by the application owner to the remote compute node. | 2021-05-06 |
20210133003 | STATELESS CONTENT MANAGEMENT SYSTEM - One embodiment comprises a stateless container of binaries and a broker. The stateless container of binaries includes a code memory having stored thereon code for a first version of a first functional component of a content management system, the first functional component executable to provide a first version of a service. The broker may be executable to: receive a request for the service from a client application, the request associated with a user of the content management system; determine that the first version of the service is accessible with regard to the user; determine an available first server that hosts the first version of the service; provide an indication of the first version of the service to the client application; and provide an IP address and a port number associated with the available first server to the client application. | 2021-05-06 |
20210133004 | METHOD FOR INSTALLING A VIRTUALISED NETWORK FUNCTION - A method is described for installing a virtualized network function. The method is implemented in a service operating entity, and contributes to the implementation of a communications service, in a first data center of a group of data centers of a virtualized communications architecture, on the basis of a profile defined by a set of virtualized network function placement parameters specific to the type of virtualized network function. The profile, obtained from a profile management entity, is used to determine the first data center of the group. The identifier of the data center is then added to the profile and is transmitted to an administration entity of the virtualized architecture with a request for installation of the virtualized function in the determined first data center. | 2021-05-06 |
20210133005 | OPERATING SYSTEM FOR A SENSOR OF A SENSOR NETWORK AND ASSOCIATED SENSOR - This operating system ( | 2021-05-06 |
20210133006 | SYSTEM AND METHOD FOR DYNAMICALLY DELIVERING CONTENT - Systems and methods for dynamically delivering content from a content provider system to a user device. A bridging device is configured to dynamically connect an application executing on the user device to the content provider system during execution of an application extension of the application. The application extension is configured to activate a connection to the bridging device, in response to the detection of an activation condition. The application is configured to transmit at least some of the data items comprised in each input data block received at the user device as input data via an application interface to the bridging device during the connection to the bridging device. The bridging device is configured to generate a request for content according to a predefined request format using the data items in the input data received from the application and to transmit the request to the content provider system. | 2021-05-06 |
20210133007 | System Using Adaptive Interrupts for Controlling Notifications to a User - A method, system and computer-usable medium are disclosed for implementing a machine learning system for using adaptive interrupts to control notifications to a user. In at least one embodiment, a computer-implemented method for adaptively interrupting a user with communication notifications at an information handling system is disclosed, including: receiving a communication for a user at the information handling system; intercepting a notification relating to the received communication; assessing a degree of importance of the notification using contextual information associated with the notification; assessing a degree of busyness of the user at the information handling system by actively monitoring interactions between the user and the information handling system; and selectively interrupting the user with the notification based on the busyness of the user at the information handling system and the degree of importance of the notification. | 2021-05-06 |
20210133008 | THROTTLING USING MESSAGE PARTITIONING AND BUFFERING - Provided are techniques for throttling using message partitioning and buffering. A plurality of messages are stored in an input topics buffer, where the input topics buffer is stored in a plurality of partitions, and where each of the partitions of the plurality of partitions is associated with a tenant subgroup of a plurality of tenant subgroups of a tenant. A message of the plurality of messages from the tenant subgroup of the tenant is selected. A throttle count for the tenant subgroup is retrieved. A maximum message threshold for the tenant is retrieved. In response to determining that the throttle count is less than the maximum message threshold, the message is moved from the input topics buffer to a work topics buffer. In response to determining that the throttle count is equal to or greater than the maximum message threshold, the tenant subgroup is throttled. | 2021-05-06 |
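The buffer-to-buffer throttling decision in the entry above can be sketched as a drain loop: a message moves from the input topics buffer to the work topics buffer only while the subgroup's throttle count stays below the tenant's maximum message threshold. The buffer data structures are illustrative assumptions.

```python
# Sketch of per-subgroup throttling between two message buffers.
# Messages move to the work buffer until the subgroup's throttle
# count reaches the tenant-wide maximum; the rest stay queued.

def drain(input_buffer, work_buffer, throttle_counts, max_threshold, subgroup):
    """Move messages for one subgroup; return how many were throttled."""
    throttled = 0
    remaining = []
    for msg in input_buffer[subgroup]:
        if throttle_counts[subgroup] < max_threshold:
            work_buffer.append(msg)
            throttle_counts[subgroup] += 1
        else:
            remaining.append(msg)  # throttled: stays in the input buffer
            throttled += 1
    input_buffer[subgroup] = remaining
    return throttled

input_buffer = {"sub-1": ["m1", "m2", "m3"]}
work_buffer = []
counts = {"sub-1": 1}
throttled = drain(input_buffer, work_buffer, counts, max_threshold=3, subgroup="sub-1")
```

With a starting throttle count of 1 and a threshold of 3, two messages are promoted to the work buffer and the third remains queued until the count is reset.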
20210133009 | APPLICATION PROGRAMMING INTERFACE FINGERPRINT DATA GENERATION AT A MOBILE DEVICE EXECUTING A NATIVE MOBILE APPLICATION - Systems, methods, and computer-readable storage devices to enable secured data access from a mobile device executing a native mobile application and a headless browser are disclosed. | 2021-05-06 |
20210133010 | FORECASTING FAILURES OF INTERCHANGEABLE PARTS - A material failure forecasting system accesses historical failure data to forecast future failures. The failure data of a material is analyzed using text processing techniques to identify failures and suspensions. The text processing techniques provide for identifying failures when fault words are associated with negations. A fault ontology establishes different failure modes that include primary, secondary and tertiary levels which enable identifying a sequence of failures. The failures thus identified are fitted to a data distribution selected from a plurality of data distributions. The parameters from the data distribution are used for simulating a demand profile for the material which considers interchangeability. Similarly, failure data of the materials in an equipment can be analyzed and the reliability of the equipment can be estimated. | 2021-05-06 |
20210133011 | STORAGE MANAGEMENT SYSTEM AND METHOD - A method, computer program product, and computing system for processing memory page metadata received from a cache memory system within a data storage system to determine if the memory page metadata includes corruption due to a power failure event; if the memory page metadata includes post-acknowledgement data corruption, initiating a data recovery process to attempt to recover content associated with the post-acknowledgement data corruption; and if the memory page metadata includes pre-acknowledgement data corruption, reobtaining content associated with the pre-acknowledgement data corruption. | 2021-05-06 |
20210133012 | PROGRAM PULSE CONTROL USING ENVIRONMENTAL PARAMETERS - A method comprising receiving, at a memory sub-system from a host system, configuration parameters associated with usage of the memory sub-system, monitoring environmental parameters of the memory sub-system, wherein the environmental parameters comprise characteristics of the memory sub-system and an environment of the memory sub-system, and selecting values for program pulse characteristics of the memory sub-system based on the configuration parameters and environmental parameters, the program pulse characteristics comprising at least a program pulse voltage. | 2021-05-06 |
20210133013 | METHOD OF MONITORING CLOSED SYSTEM, APPARATUS THEREOF AND MONITORING DEVICE - A method of monitoring a closed system, an apparatus thereof and a monitoring device are provided. The method of monitoring the closed system includes: performing a page capturing on a web page of the closed system; searching from a captured page, according to configuration information of data to be monitored of the closed system, a text content corresponding to the data to be monitored; and converting the text content corresponding to the data to be monitored into monitored data which a system monitoring platform is capable of recognizing, and storing the monitored data. | 2021-05-06 |
20210133014 | TRACKING ERROR PROPAGATION ACROSS MICROSERVICES BASED APPLICATIONS USING DISTRIBUTED ERROR STACKS - A method of performing error analysis in a system comprising microservices comprises identifying a root cause error span from among a plurality of error spans for a trace associated with a user-request, wherein an error span is a span that returns an error to a microservice initiating a call resulting in the span, and wherein a root cause error span is an error span associated with an error originating microservice. The method further comprises determining a call path associated with the root cause error span, where the call path comprises a chain of spans starting at the root cause error span, and where each subsequent span in the chain is a parent span of a prior span. Subsequently the method comprises mapping each span in the chain to a span error frame to create an error stack and rendering an image of the error stack. | 2021-05-06 |
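The call-path construction in the entry above can be sketched as a walk from the root cause error span up the parent links, mapping each span in the chain to a frame in an error stack. The span fields and service names are illustrative assumptions.

```python
# Sketch of building a distributed error stack: start at the root
# cause error span and follow parent links upward, then map each
# span in the chain to a stack frame.

def call_path(spans, root_cause_id):
    by_id = {s["id"]: s for s in spans}
    chain = []
    span = by_id[root_cause_id]
    while span is not None:
        chain.append(span)
        span = by_id.get(span.get("parent"))  # None at the root span
    return chain

def error_stack(chain):
    return [{"service": s["service"], "operation": s["op"]} for s in chain]

spans = [
    {"id": "a", "parent": None, "service": "gateway", "op": "GET /checkout"},
    {"id": "b", "parent": "a", "service": "orders", "op": "create"},
    {"id": "c", "parent": "b", "service": "payments", "op": "charge"},  # root cause
]
stack = error_stack(call_path(spans, "c"))
```

The resulting stack reads bottom-up like a conventional stack trace: the error-originating service first, its callers after it.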
20210133015 | IN A MICROSERVICES-BASED APPLICATION, MAPPING DISTRIBUTED ERROR STACKS ACROSS MULTIPLE DIMENSIONS - A method of tracking errors in a system comprising microservices comprises ingesting a plurality of spans generated by the microservices during a given duration of time. The method further comprises consolidating the plurality of spans associated with the given duration of time into a plurality of traces, wherein each trace comprises a subset of the plurality of spans that comprise a common trace identifier. For each trace, the method comprises: a) mapping a respective trace to one or more error stacks computed for the respective trace and to one or more attributes determined for the respective trace; and b) emitting each error stack computed from the respective trace with an associated pair of attributes. The method then comprises reducing duplicate pairs of error stack and associated attributes and maintaining a count for each pair of error stack and associated attributes. | 2021-05-06 |
20210133016 | INFORMATION PROCESSING APPARATUS AND METHOD, COMPUTER PROGRAM, AND RECORDING MEDIUM - An information processing device is provided with: a first processing unit that generates first information by performing first processing with respect to sensor information acquired from a sensor; a second processing unit that generates second information by performing, with respect to the first information, second processing that is different from the first processing; and a third processing unit, which generates third information by performing, with respect to the first information, third processing, i.e., processing that corresponds to at least a part of the second processing, and which acquires the second information, and outputs the second information and the third information. | 2021-05-06 |
20210133017 | APPROACH TO PREDICTING ENTITY FAILURES THROUGH DECISION TREE MODELING - Systems and methods for predicting device failure, including inputting a plurality of records for electronic communication devices, each including one or more attributes and a label, as a table to a modeling algorithm, wherein there are separate tables for each period in a time sequence; building a multi-stage decision tree from the time sequence of records using the modeling algorithm running on a processor device; inputting a record for a device having an empty label value into the decision tree to determine the likelihood of entity failure; and reporting a predicted failure for the device to a user on a display to initiate replacement before a next time period. | 2021-05-06 |
20210133018 | A UNIFYING SEMI-SUPERVISED APPROACH FOR MACHINE CONDITION MONITORING AND FAULT DIAGNOSIS - A computer-implemented method for performing machine condition monitoring for fault diagnosis includes collecting multivariate time series data from a plurality of sensors in a machine and partitioning the multivariate time series data into a plurality of segment clusters. Each segment cluster corresponds to one of a plurality of class labels related to machine condition monitoring. Next, the segment clusters are clustered into segment cluster prototypes. The segment clusters and the segment cluster prototypes are used to learn a discriminative model that predicts a class label. Then, as new multivariate time series data is collected from the sensors in the machine, the discriminative model may be used to predict a new class label corresponding to segments included in the new multivariate time series data. If the new class label indicates a potential fault in operation of the machine, a notification may be provided to one or more users. | 2021-05-06 |
20210133019 | SYSTEM AND METHOD FOR TARGETED EFFICIENT LOGGING OF MEMORY FAILURES - An information handling system includes a memory controller with an error logger, and a DIMM coupled to the memory controller via a memory channel. The DIMM includes a non-volatile memory device mapped to include event blocks that store error information associated with memory events occurring in the memory controller, the DIMM, and the memory channel. Each event block includes a flag field and a data field. The error logger receives an indication that a memory event has occurred, reads first flag information from a flag field of an event block, determines whether the event block is locked based upon the first flag information, and if the event block is not locked, then writes second flag information to the flag field and writes event information to a data field of the event block. The event information describes the memory event. | 2021-05-06 |
20210133020 | SEMICONDUCTOR DEVICE AND SEMICONDUCTOR SYSTEM EQUIPPED WITH THE SAME - A semiconductor device includes a master circuit which outputs a first write request signal for requesting to write data, a bus which receives the data and the first write request signal, a bus control unit which is arranged on the bus, generates an error detection code for the data and generates a second write request signal which includes second address information corresponding to first address information included in the first write request signal and memory controllers which each write the data into a storage area of an address designated by the first write request signal and writes the error detection code into a storage area of an address designated by the second write request signal in the storage areas of memories. | 2021-05-06 |
20210133021 | MEMORY SYSTEM AND MEMORY CONTROLLER - A memory system and a memory controller are disclosed. By determining whether an error has occurred in target data stored in a predetermined target memory area of the memory device and determining, in response to whether an error has occurred in the target data, the magnitude of the supplied power based on a first operation parameter selected among predetermined candidate operation parameters in connection with the magnitude of the supplied power, the memory controller may stably drive a firmware, and may handle an operation error of the firmware due to a change in external environment. | 2021-05-06 |
20210133022 | MEMORY SCRUB SYSTEM - A memory scrubbing system includes a persistent memory device coupled to an operating system (OS) and a Basic Input/Output System (BIOS). During a boot process and prior to loading the OS, the BIOS retrieves a known memory location list that identifies known memory locations of uncorrectable errors in the persistent memory device and performs a partial memory scrubbing operation on the known memory locations. The BIOS adds any known memory locations that maintain an uncorrectable error to a memory scrub error list. The BIOS then initiates a full memory scrubbing operation on the persistent memory device, causes the OS to load and enter a runtime environment while the full memory scrubbing operation is being performed, and provides the memory scrub error list to the OS. | 2021-05-06 |
20210133023 | ERROR IDENTIFICATION IN EXECUTED CODE - The present disclosure includes apparatuses, methods, and systems for error identification on executed code. An embodiment includes memory and circuitry configured to read data stored in a secure array of the memory, identify a different memory having an error correcting code (ECC) corresponding to the read data of the memory, execute an integrity check to compare the ECC to the read data of the memory; and take an action in response to the comparison of the read data of the memory and the ECC, wherein the comparison indicates that the ECC identified an error in the read data of the memory. | 2021-05-06 |
20210133024 | SYSTEM AND METHOD FOR FACILITATING HIGH-CAPACITY SYSTEM MEMORY ADAPTIVE TO HIGH-ERROR-RATE AND LOW-ENDURANCE MEDIA - The system receives a request to write a first piece of data to a non-volatile memory. The system encodes, based on an error correction code (ECC), the first piece of data to obtain a first ECC codeword which includes a plurality of ordered parts and a first parity. The system writes the plurality of ordered parts in multiple rows. The system writes the first parity to a same row in which a starting ordered part is written. The system updates, in a data structure, entries associated with the ordered parts. A respective entry indicates: a virtual address associated with a respective ordered part, a physical address at which the respective ordered part is written, and an index corresponding to a virtual address associated with a next ordered part. A first entry associated with the starting ordered part further indicates a physical address at which the first parity is written. | 2021-05-06 |
20210133025 | RANDOM SELECTION OF CODE WORDS FOR READ VOLTAGE CALIBRATION - Method and apparatus for managing data in a non-volatile memory (NVM) of a storage device, such as a solid-state drive (SSD). In some embodiments, flash memory cells are arranged along word lines to which read voltages are applied to sense programmed states of the memory cells, with the flash memory cells along each word line being configured to concurrently store multiple pages of data. An encoder circuit is configured to apply error correction encoding to input data to form code words having user data bits and code bits, where an integral number of the code words are written to each page. A reference voltage calibration circuit is configured to randomly select a single selected code word from each page and to use the code bits from the single selected code word to generate a set of calibrated read voltages for the associated page. | 2021-05-06 |
20210133026 | Erasure Coded Data Shards Containing Multiple Data Objects - Example storage systems, storage nodes, and methods provide erasure coding of data shards containing multiple data objects. Storage nodes store data shards having a data shard size and each containing a plurality of data objects, where the sum of the data object sizes is less than the data shard size. Some storage nodes store a parity shard containing parity data for the other data shards. | 2021-05-06 |
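The parity-shard scheme in the entry above can be sketched with a bytewise XOR parity over fixed-size data shards, each packing several data objects whose combined size is less than the shard size. The shard size, packing, and single-parity layout are illustrative assumptions; real erasure codes often use more elaborate schemes such as Reed-Solomon.

```python
# Sketch of erasure-coded data shards containing multiple objects:
# objects are packed (zero-padded) into fixed-size data shards, and
# a parity shard is the bytewise XOR of the data shards, letting any
# single lost data shard be rebuilt.

SHARD_SIZE = 8  # hypothetical shard size in bytes

def pack(objects):
    """Pack objects into one shard, zero-padded to the shard size."""
    payload = b"".join(objects)
    assert len(payload) <= SHARD_SIZE, "objects must fit in one shard"
    return payload.ljust(SHARD_SIZE, b"\x00")

def xor_parity(shards):
    parity = bytearray(SHARD_SIZE)
    for shard in shards:
        for i, b in enumerate(shard):
            parity[i] ^= b
    return bytes(parity)

shard_a = pack([b"obj1", b"o2"])   # two objects in one shard
shard_b = pack([b"obj3"])
parity = xor_parity([shard_a, shard_b])

# Losing shard_a: XOR of the parity with the surviving shard
# reconstructs it.
recovered = xor_parity([parity, shard_b])
```

Because XOR is its own inverse, combining the parity shard with the surviving data shards yields the missing shard exactly, padding included.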
20210133027 | NON-VOLATILE MEMORY ON CHIP - A system-on-chip is provided that includes functional circuitry that performs a function. Control circuitry controls the function based one or more configuration parameters. Non-volatile storage circuitry includes a plurality of non-volatile storage cells each being adapted to write at least a bit of the one or more configuration parameters in a rewritable, persistent manner a plurality of times. Read circuitry locally accesses the non-volatile storage circuitry, obtains the one or more configuration parameters from the non-volatile storage circuitry and provides the one or more configuration parameters to the control circuitry. Write circuitry obtains the one or more configuration parameters and provides the one or more configuration parameters to the non-volatile storage circuitry by locally accessing the non-volatile storage circuitry. | 2021-05-06 |
20210133028 | MEMORY CONTROLLERS, MEMORY SYSTEMS INCLUDING THE SAME AND MEMORY MODULES - A memory controller configured to control a memory module including a plurality of memory devices which constitute a first channel and a second channel includes an error correction code (ECC) engine, and a control circuit configured to control the ECC engine. The ECC engine is configured to generate a codeword including a plurality of symbols by adaptively constructing, based on device information including mapping information, each of the plurality of symbols from a predetermined number of data bits received via a plurality of input/output pads of each of the plurality of memory devices, and transmit the codeword to the memory module. The mapping information indicates whether each of the plurality of input/output pads is mapped to the same symbol among the plurality of symbols or different symbols among the plurality of symbols. Each of the plurality of symbols corresponds to a unit of error correction of the ECC engine. | 2021-05-06 |
20210133029 | METHODS FOR DATA WRITING AND FOR DATA RECOVERY, ELECTRONIC DEVICES, AND PROGRAM PRODUCTS - Techniques for data recovery involve: reading target data corresponding to a first logical block from a first data block of a stripe of a RAID system, the target data being a compressed version of data in the first logical block; in accordance with a determination that an error occurs in the target data, reading data from a plurality of second data blocks of the stripe and first parity information from a first parity block of the stripe; comparing respective checksums of the data read from the plurality of second data blocks with a first predetermined checksum and a checksum of the first parity information with a second predetermined checksum; and determining recoverability of the target data based on a result of the comparison. Accordingly, it is possible to simplify the data recovery process, reduce the calculation and time costs in the data recovery, and improve the data recovery efficiency. | 2021-05-06 |
20210133030 | DYNAMIC DATA PLACEMENT FOR REPLICATED RAID IN A STORAGE SYSTEM - A method is disclosed for destaging data to a storage device set that is arranged to maintain M replicas of the data, the storage device set having M primary storage devices and N secondary storage devices, the method comprising: detecting a destage event; and in response to the destage event, destaging a data item that is stored in a journal, the destaging including: issuing M primary write requests for storing the data item, each of the M primary write requests being directed to a different one of the M primary storage devices; in response to detecting that L of the primary write requests have failed, issuing L secondary write requests for storing the data item, each of the L secondary write requests being directed to a different secondary storage device; and updating a bitmap to identify all primary and secondary storage devices where the data item has been stored. | 2021-05-06 |
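The destage flow claimed in 20210133030 reduces to a small amount of control logic. A minimal sketch, assuming hypothetical device stubs (the `Device` class and `destage` function below are not from the patent):

```python
class Device:
    """Hypothetical storage-device stub; a real device would issue I/O."""
    def __init__(self, dev_id, healthy=True):
        self.id = dev_id
        self.healthy = healthy
        self.stored = []

    def write(self, data):
        if self.healthy:
            self.stored.append(data)
            return True
        return False  # simulated write failure

def destage(data_item, primaries, secondaries):
    """Issue M primary write requests; for the L that fail, issue L
    secondary write requests to distinct secondary devices. The bitmap
    (here, a set of device ids) records every device holding a replica."""
    bitmap = set()
    failed = 0
    for dev in primaries:
        if dev.write(data_item):
            bitmap.add(dev.id)
        else:
            failed += 1
    for dev in secondaries[:failed]:  # one secondary per failed primary
        if dev.write(data_item):
            bitmap.add(dev.id)
    return bitmap
```

With one failed primary out of three, exactly one secondary receives the item and the bitmap names the three devices that hold replicas.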
20210133031 | DATA MANAGEMENT PLATFORM - Some examples relate generally to a data management platform comprising a storage device configured to store secondary data and one or more processors in communication with the storage device and configured to perform certain operations. The operations may include identifying an aspect of the secondary data stored in the storage device, the secondary data including a backup of respective primary data stored in a primary data source; identifying or receiving an indication of a target to receive data associated with the identified aspect of the secondary data; transmitting the data associated with the aspect of the secondary data to the target as a push transmission; and performing data management operations related to the secondary data subsequent to the push transmission. | 2021-05-06 |
20210133032 | Application High Availability via Application Transparent Battery-Backed Replication of Persistent Data - Techniques for achieving application high availability via application-transparent battery-backed replication of persistent data are provided. In one set of embodiments, a computer system can detect a failure that causes an application of the computer system to stop running. In response to detecting the failure, the computer system can copy persistent data written by the application and maintained locally at the computer system to one or more remote destinations, where the copying is performed in a manner that is transparent to the application and while the computer system runs on battery power. The application can then be restarted on another computer system using the copied data. | 2021-05-06 |
20210133033 | Versioned file system using structured data representations - A versioned file system comprises a set of structured data representations. At a first time, an interface creates and exports to a cloud data store a first structured data representation corresponding to a first version of the local file system. The first structured data representation is an XML tree having a root element, one or more directory elements associated with the root element, and one or more file elements associated with a given directory element. Upon a change within the file system, the interface creates and exports a second structured data representation corresponding to a second version of the file system. The second structured data representation differs from the first structured data representation up to and including the root element of the second structured data representation. The interface continues to generate and export the structured data representations to the data store. | 2021-05-06 |
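The key property in 20210133033, that any change differs "up to and including the root element", is the same change-propagation idea used in hash trees. A minimal sketch, assuming a hash-based version stamp in place of the patent's XML representation (the dict layout and function names are illustrative assumptions):

```python
import hashlib

def tree_version(tree):
    """Compute a version stamp for a directory tree.
    tree: {'name': str, 'dirs': [subtrees], 'files': {name: content}}.
    A directory's stamp depends on its children's stamps, so changing
    any file changes every ancestor's stamp up to and including the root."""
    h = hashlib.sha256(tree["name"].encode())
    for name, content in sorted(tree.get("files", {}).items()):
        h.update(hashlib.sha256((name + content).encode()).digest())
    for sub in tree.get("dirs", []):
        h.update(tree_version(sub).encode())  # child stamp feeds parent
    return h.hexdigest()
```

Editing a single file deep in the tree yields a different root stamp, which is what lets the interface detect that a new structured data representation must be exported.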
20210133034 | MANAGING FILES ACCORDING TO CATEGORIES - According to various embodiments, with respect to a target set of files being managed (e.g., protected by data snapshots), each file in the target set of files is classified into one of two or more filesets (discontiguous filesets), where each of these filesets comprises one or more files that are related to each other by one or more factors, such as frequency of file change or purpose of existence (e.g., used by a software application). Once classified, files within the target set of files can be uniquely processed by a data management operation (e.g., incremental data snapshot process) according to their association to a discontiguous fileset. | 2021-05-06 |
20210133035 | DATA MANAGEMENT PLATFORM - Some examples relate generally to a data management platform comprising: a storage device configured to store secondary data and one or more processors in communication with the storage device and configured to perform certain operations. The operations may include identifying an aspect of the secondary data stored in the storage device, the secondary data including a backup of respective primary data stored in a primary data source; identifying or receiving an indication of a target to receive data associated with the identified aspect of the secondary data; and transmitting the data associated with the aspect of the secondary data to the target. | 2021-05-06 |
20210133036 | METHOD AND SYSTEM FOR ASSET PROTECTION THREAT DETECTION AND MITIGATION USING INTERACTIVE GRAPHICS - A method and system for asset protection threat detection and mitigation using interactive graphics. Specifically, the disclosed method and system entail discerning protection vulnerabilities exhibited by assets (or databases) based on maintained backup metadata. These protection vulnerabilities may subsequently be visualized as part of a projected graphical user interface, which may not only disclose the protection vulnerabilities to a user but also may enable the user to rectify the disclosed protection vulnerabilities through on-demand asset backup operations. | 2021-05-06 |
20210133037 | METHOD AND SYSTEM FOR OPTIMIZING BACKUP AND BACKUP DISCOVERY OPERATIONS USING CHANGE BASED METADATA TRACKING (CBMT) - A method and system for optimizing backup and backup discovery operations using change-based metadata tracking (CBMT). Specifically, the disclosed method and system entail eliminating the storage and subsequent transmission of redundant asset metadata information to a central coordination point during backup discovery operations, which may strain central coordination point resources as well as client device resources. Accordingly, rather than re-sharing the same asset metadata information every time a backup discovery initiates, the client device tracks, maintains, and transmits only changes in asset metadata, thereby conserving resource utilization. | 2021-05-06 |
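The core of a change-based tracking scheme like the one in 20210133037 is computing a metadata delta between discoveries. A minimal sketch, assuming asset metadata is a flat dict keyed by asset id (the representation and function name are illustrative assumptions, not the patent's):

```python
def metadata_delta(previous, current):
    """Return only the asset-metadata changes since the last discovery,
    so the client transmits a delta instead of the full catalog."""
    return {
        "added":   {k: v for k, v in current.items() if k not in previous},
        "removed": [k for k in previous if k not in current],
        "changed": {k: v for k, v in current.items()
                    if k in previous and previous[k] != v},
    }
```

When nothing has changed, the delta is empty and the discovery transmits essentially no payload, which is where the claimed resource savings come from.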
20210133038 | METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR MANAGING FILE SYSTEM - Techniques for managing a file system involve, in response to receiving, at a first backup device of the file system, a request for replicating data of the file system from the first backup device to a second backup device of the file system, determining a synchronization state between the first backup device and the file system, the second backup device being a backup device located downstream of the first backup device; creating, based on the synchronization state, a target snapshot associated with the file system; and causing the data to be replicated from the first backup device to the second backup device based on the target snapshot. Therefore, the data backup flexibility and accuracy of a file system can be significantly improved, and the reliability of the whole system may be enhanced. | 2021-05-06 |
20210133039 | SYSTEM AND METHOD FOR A HYBRID WORKFLOW BACKUP OPERATION OF DATA IN A CLOUD-BASED SERVICE WITH THIRD-PARTY APPLICATIONS - A method for performing a backup operation includes obtaining, by a backup agent, a backup request, and in response to the backup request: obtaining a complete application listing, wherein the complete application listing specifies a plurality of applications associated with the backup request, comparing the complete application listing to a cloud-based application listing, wherein the cloud-based application listing specifies a portion of the plurality of applications, making a first determination that the complete application listing specifies more than the portion of the plurality of applications, and in response to the first determination, initiating a hybrid workflow, wherein the hybrid workflow specifies backing up each of the plurality of applications. | 2021-05-06 |
20210133040 | SYSTEM AND METHOD FOR INDEXING IMAGE BACKUPS - A backup manager for providing backup services includes persistent storage and a backup orchestrator. The persistent storage includes protection policies. The backup orchestrator generates a backup for a client based on the protection policies; identifies a portion of the backup that includes an allocation scheme; extracts system metadata from the backup using the allocation scheme; generates an index for the backup using the system metadata; and stores the backup and the index in backup storage. | 2021-05-06 |
20210133041 | ACHIEVING GUARANTEED APPLICATION PERFORMANCE USING TRANSACTIONAL I/O SCHEDULING FOR SSD STORAGE BY INTERLEAVING AND SPLITTING READ/WRITE I/Os WITH REQUIRED LATENCY CONFIGURATIONS - Embodiments are described for prioritizing input/output (I/O) operations dispatched to a solid-state device (SSD) cache in a network, by defining a maximum write I/O operation size for writing data to the SSD cache, splitting large write I/O operations into smaller write I/O operations, each with a size less than the maximum write I/O operation size, interleaving cache read I/O operations in between the smaller write I/O operations, and performing the cache read I/O operations and the smaller write I/O operations in an order created by the interleaving. The network may comprise a deduplication backup system storing data to storage media including the SSD cache. | 2021-05-06 |
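The split-and-interleave scheduling described in 20210133041 can be sketched in a few lines. This is an illustrative toy (sizes as integers, one read interleaved per write chunk), not the patent's scheduler:

```python
def schedule(write_sizes, reads, max_write):
    """Split each write into chunks no larger than max_write, then
    interleave one pending cache read after each write chunk so reads
    are not starved behind a large write; leftover reads run last."""
    chunks = []
    for size in write_sizes:
        while size > max_write:
            chunks.append(("W", max_write))
            size -= max_write
        if size > 0:
            chunks.append(("W", size))
    order, i = [], 0
    for chunk in chunks:
        order.append(chunk)
        if i < len(reads):          # interleave the next pending read
            order.append(("R", reads[i]))
            i += 1
    order.extend(("R", r) for r in reads[i:])
    return order
```

A 10-unit write with a 4-unit cap becomes three chunks, and the two pending reads land between them instead of waiting for the whole write to complete.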
20210133042 | METHOD AND SYSTEM FOR INTELLIGENTLY MIGRATING TO A CENTRALIZED PROTECTION FRAMEWORK - A method and system for intelligently migrating to a centralized protection framework. Specifically, the disclosed method and system entail redirecting the target of asset backup operations for any given asset from one or more legacy backup devices to a centrally-managed backup device. | 2021-05-06 |
20210133043 | SYSTEM AND METHOD FOR RESILIENT DATA PROTECTION - A manager for providing services to clients includes persistent storage and an orchestration manager. The persistent storage includes protection policies. The orchestration manager obtains a backup from a client of the clients based on a protection policy of the protection policies; makes a determination that an application catalog associated with the client is not stored in backup storages; in response to making the determination: obtains the application catalog from the client; stores the application catalog in the backup storages; and stores the obtained backup in the backup storages. | 2021-05-06 |
20210133044 | SYSTEM AND METHOD FOR VISUAL REPRESENTATION OF BACKUPS - A backup manager for managing backup services includes persistent storage and a backup analyzer. The persistent storage includes a backup data repository and protection policies. The backup analyzer identifies a new backup stored in backup storage; performs a backup compatibility analysis on the new backup to determine inter-backup compatibility of the identified new backup; updates the backup data repository based on the inter-backup compatibility to obtain an updated backup data repository; and modifies a backup schedule using the updated backup data repository to meet a requirement of a protection policy of the protection policies. | 2021-05-06 |
20210133045 | METHOD, DEVICE AND COMPUTER PROGRAM PRODUCT TO BACKUP DATA - Techniques for backing up data involve receiving, by a proxy server and from an application, a backup request comprising a backup path. The techniques further involve determining a target server associated with the backup request according to the backup path, where the proxy server and the target server share the same storage processor. The techniques further involve backing up a file in the target server associated with the backup path. Along these lines, a dedicated proxy server may be provided, and the proxy server may be used to handle backup requests for all file systems on the storage processor. Then, the backup requests can be forwarded to the corresponding target servers through a virtual file system without configuring a corresponding Internet Protocol (IP) address for each target server. | 2021-05-06 |
20210133046 | SYSTEM AND METHOD FOR EFFICIENT BACKUP GENERATION - A backup manager for providing backup services includes persistent storage and a backup orchestrator. The persistent storage includes protection policies. The backup orchestrator identifies a last backup generation time for a client in response to a protection policy of the protection policies triggering a backup generation for the client; obtains system metadata for the client; identifies a portion of client data that has been modified since the last backup generation time using the system metadata; generates an incremental backup based on the identified portion of the client data; and stores the incremental backup in backup storage. | 2021-05-06 |
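The selection step at the heart of 20210133046, identifying data modified since the last backup from system metadata, can be sketched with modification timestamps; the dict layout and function name here are illustrative assumptions:

```python
def incremental_backup(file_metadata, last_backup_time):
    """Select only the files whose modification time (taken from
    system metadata) is newer than the last backup generation time;
    these form the incremental backup."""
    return {path: meta for path, meta in file_metadata.items()
            if meta["mtime"] > last_backup_time}
```

Because only timestamps are consulted, no file contents need to be scanned to decide what belongs in the incremental backup.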
20210133047 | SYSTEM AND METHOD FOR FAST REBUILD OF METADATA TIER - A method, computer program product, and computer system for identifying a bit for an allocation unit. It may be determined whether data has been modified on the allocation unit while degraded. A rebuild of the allocation unit may be executed when the bit is a first value. The rebuild of the allocation unit may be skipped when the bit is a second value. | 2021-05-06 |
20210133048 | System and Method for Weight Based Data Protection - A method, computer program product, and computer system for determining, by a computing device, a weight of an indirect block page. The weight of the indirect block page may be compared to a threshold. It may be determined that the weight of the indirect block page is greater than the threshold. A copy of the indirect block page may be created as a backup page based upon, at least in part, determining that the weight of the indirect block page is greater than the threshold. | 2021-05-06 |
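The weight-threshold decision in 20210133048 is a simple predicate over page metadata. A minimal sketch, assuming pages are keyed by id with an integer weight (how the weight is computed is left open by the abstract, so this is purely illustrative):

```python
def select_for_backup(pages, threshold):
    """Return the indirect block pages whose weight is greater than
    the threshold; only these are copied as backup pages."""
    return [page_id for page_id, weight in pages.items()
            if weight > threshold]
```

Pages whose weight equals the threshold are not selected, matching the abstract's strict "greater than" comparison.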
20210133049 | Two-Step Recovery employing Erasure Coding in a Geographically Diverse Data Storage System - Recovery of chunk segments stored via hierarchical erasure coding in a geographically diverse data storage system is disclosed. Chunks can be stored according to a first-level erasure coding scheme in zones of a geographically diverse data storage system. The chunks can then be further protected via one or more second-level erasure coding schema within a corresponding zone of the geographically diverse data storage system. In response to determining that a segment of a chunk has become less accessible, at least the segment can be recovered according to the hierarchical erasure coding scheme of relevant chunks at relevant zones of the geographically diverse data storage system, enabling intra-zone recovery of the compromised chunk. | 2021-05-06 |
20210133050 | METHOD AND APPARATUS FOR INVOKING SYSTEM FILE, AND STORAGE MEDIUM - A method for invoking a system file includes: detecting, during a startup process, a trigger operation that is input; establishing, in response to the detecting of the trigger operation, a link with a cloud server, the trigger operation being used to trigger an invocation of the system file; and invoking, based on the link, the system file from the cloud server. As such, the system file can be invoked directly from the cloud server during the startup process. | 2021-05-06 |