46th week of 2021 patent application highlights part 45 |
Patent application number | Title | Published |
20210357244 | DISTRIBUTED RESOURCE SCHEDULER AS A SERVICE - Methods and systems for balancing resources in a virtual machine computing environment are disclosed. A server can receive data illustrating the configuration of host machines and virtual machines in a client computing environment. A simulated computing environment can be created that mirrors the configuration of the client computing environment. Data relating to resource usage (e.g., processor, memory, and storage) of the host machines can be received. The resource usage can be simulated in the simulated computing environment to mirror the usage of the client computing environment. A recommendation to execute a migration of a virtual machine can be received from the simulated computing environment. Instructions to execute a migration corresponding to the recommended migration can be generated and sent to the client computing environment. | 2021-11-18 |
20210357245 | INFERENCE ENGINE FOR CONFIGURATION PARAMETERS IN A NETWORK FUNCTIONS VIRTUALIZATION ORCHESTRATOR - A system and method for deploying a virtual network function (VNF) are disclosed. Deploying a VNF includes receiving a request to instantiate a VNF in a network virtualization infrastructure, obtaining input from a user providing parameters needed for performing the instantiation of the VNF, determining a type of deployment for the VNF, and adding parameters inferred from the type of deployment to the user data to complete the parameters needed for deployment of the VNF, wherein the added parameters are inferred based on stored data regarding previous instantiations of the VNF. Determining the type of deployment for the VNF includes determining a number of instances of the VNFs to be deployed and a number of virtual infrastructure managers that will be instructed. | 2021-11-18 |
20210357246 | LIVE MOUNT OF VIRTUAL MACHINES IN A PUBLIC CLOUD COMPUTING ENVIRONMENT - Live mounting a virtual machine (VM) causes the VM to run off a backup copy or snapshot previously taken of a “live” production VM. The live-mounted VM is generally intended for temporary use such as to validate the integrity and contents of the backup copy for disaster recovery validation, or to access some contents of the backup copy from the live-mounted VM without restoring all backed up files. These uses contemplate that changes occurring during live mount are not preserved after the live-mounted VM expires or is taken down. Thus, live mounting a VM is not a restore operation and usually does not involve access to every block of data in the backup copy. However, live mounting provides live VM service in the cloud sooner than waiting for all of the backup copy/snapshot to be restored. | 2021-11-18 |
20210357247 | SYSTEMS AND METHODS FOR JAVA VIRTUAL MACHINE MANAGEMENT - A virtual machine (VM) management utility tool may deploy an object model that may persist one or more virtual machine dependencies and relationships. Through a web front-end interface, for example, the VMs may be started in a specific order or re-booted, and the tool automatically determines the additional VMs that need to be re-booted in order to maintain the integrity of the environment. Through the web interface, for example, the object model may be managed, and start-up orders or VM dependencies may be updated. For VMs that may not start under load, the object model may restrict access to the VM until the VM is fully initialized. | 2021-11-18 |
20210357248 | VM/CONTAINER AND VOLUME ALLOCATION DETERMINATION METHOD IN HCI ENVIRONMENT AND STORAGE SYSTEM - Provided is a resource allocation determination method for a VM/container, volume, and the like created as a new VM/container or volume without exceeding an upper limit of a computer resource of a node in an HCI environment. In order to determine allocation of at least one of a virtual machine, a container, and a volume in a system of the HCI environment, a use state of a computer resource shared by a virtual machine and a storage controller operating on each node is managed, and an allocation destination node of the new virtual machine, container, or volume is determined based on the use state without exceeding an upper limit of a computer resource of the allocation destination node. | 2021-11-18 |
20210357249 | PROVIDING PHYSICAL HOST HARDWARE STATE INFORMATION TO VIRTUAL MACHINES DEPLOYED ON THE PHYSICAL HOST - A device may receive, from a virtual machine deployed on the device, a request to register for an event associated with a hardware component of the device, and may create a path to a script associated with providing information about the event when the event occurs. The device may provide the script to an event plugin associated with the event and the hardware component, and may register the event plugin with a kernel associated with the device. The device may receive, from the kernel, information indicating occurrence of the event associated with the hardware component, and may cause, via the event plugin, execution of the script based on the occurrence of the event associated with the hardware component. The device may provide, based on execution of the script, a notification to the virtual machine, where the notification may indicate the occurrence of the event associated with the hardware component. | 2021-11-18 |
20210357250 | PROCESSING FILES VIA EDGE COMPUTING DEVICE - Examples are disclosed that relate to processing files between a local network and a cloud computing service. One example provides a computing device configured to be located between a local network and a cloud computing service, comprising a logic machine and a storage machine comprising instructions executable to receive, from a device within the local network, a file at a local share of the computing device, and in response to receiving the file, generate a file event indicating receipt of the file at the local share and provide the file event to a virtual machine executing on the computing device. The instructions are further executable to, based upon a property of the file, provide the file to a program operating within a container in the virtual machine to process the file, and send a result of executing the program on the file to the cloud computing service. | 2021-11-18 |
20210357251 | ELECTRONIC DEVICE AND NON-TRANSITORY STORAGE MEDIUM IMPLEMENTING TEST PATH COORDINATION METHOD - A test path coordination method includes obtaining information of a number of products to be tested, obtaining information of each test device, and planning a test path of each product according to a preset rule according to the information of the products and the information of each test device. The information of the products includes the number of the products, test items of each product, and test devices required for testing the test items. The information of each test device includes whether the test device is currently testing a product and test information of the product currently being tested. The test information of the product includes a length of time the product has been tested and a test result. The test path includes a test sequence of each product and a test sequence of the test items of each product. | 2021-11-18 |
20210357252 | APPLICATION MANAGEMENT METHOD AND APPARATUS, AND DEVICE - The exemplary embodiments may provide an application management method and apparatus, and a device, to unfreeze some processes in an application. The method includes: obtaining an unfreezing event, where the unfreezing event includes process information, and the unfreezing event is used to trigger an unfreezing operation to be performed on some processes in a frozen application; and performing an unfreezing operation on those processes based on the process information. | 2021-11-18 |
20210357253 | AGENT CONTROL DEVICE - An agent control device configured to execute a plurality of agents and including a processor, the processor being configured to: request execution of each of the agents at a prescribed trigger; store an interruptibility list that stipulates interruptibility of execution for each function of a given agent being executed or for an execution status of the given agent; reference the interruptibility list in order to set permissibility information relating to executability of another one of the agents in conjunction with execution of the given agent; and perform management such that, in a case in which there is a request for execution of the other agent while the given agent is executing and the permissibility information indicates that the other agent is not executable, execution of the given agent continues without responding to the request. | 2021-11-18 |
20210357254 | MULTI-PROCESSOR SYSTEM AND METHOD ENABLING CONCURRENT MULTI-PROCESSING UTILIZING DISCRETE COMPONENT PROCESSOR ELEMENTS - A system and method for the dynamic, run-time configuration of logic core register files, and the provision of an associated execution context. The dynamic register files as well as the associated execution context information are software-defined so as to be virtually configured in random-access memory. This virtualization of both the processor execution context and register files enables the size, structure and performance to be specified at run-time and tailored to the specific processing, instructions and data associated with a given processor state or thread, thereby minimizing both the aggregate memory required and the context switching time. In addition, the disclosed system and method provides for processor virtualization which further enhances the flexibility and efficiency. | 2021-11-18 |
20210357255 | SYSTEM AND METHOD FOR RESOURCE SCALING FOR EFFICIENT RESOURCE MANAGEMENT - A system and method for automatically adjusting computing resources provisioned for a computer service or application by applying historical resource usage data to a predictive model to generate predictive resource usage. The predictive resource usage is then simulated for various service configurations, determining scaling requirements and resource wastage for each configuration. A cost value is generated based on the scaling requirement and resource wastage, with the cost value for each service configuration used to automatically select a configuration to apply to the service. Alternatively, the method for automatically adjusting computer resources provisioned for a service may include receiving resource usage data of the service, applying it to a linear quadratic regulator (LQR) to find an optimal stationary policy (treating the resource usage data as states and resource-provisioning variables as actions), and providing instructions for configuring the service based on the optimal stationary policy. | 2021-11-18 |
20210357256 | SYSTEMS AND METHODS OF RESOURCE CONFIGURATION OPTIMIZATION FOR MACHINE LEARNING WORKLOADS - Systems and methods are provided for optimally allocating resources used to perform multiple tasks/jobs, e.g., machine learning training jobs. The possible resource configurations or candidates that can be used to perform such jobs are generated. A first batch of training jobs can be randomly selected and run using one of the possible resource configuration candidates. Subsequent batches of training jobs may be performed using other resource configuration candidates that have been selected using an optimization process, e.g., Bayesian optimization. Upon reaching a stopping criterion, the resource configuration resulting in a desired optimization metric, e.g., fastest job completion time can be selected and used to execute the remaining training jobs. | 2021-11-18 |
20210357257 | Dynamic Resource Optimization For High Availability Virtual Network Functions - Dynamically allocating workloads to a fixed number of CPU resources within a compute platform. Determining whether a workload should be in a Dedicated Class of workloads and assigned to a dedicated CPU resource or in a Shared Class of workloads that is handled by a set of at least one shared CPU resource, wherein a shared CPU resource may service more than one workload. The determination may be made based on a comparison of a parameter from two samples of a parameter taken at different times. The determination may be made using metadata associated with the workload. The determination may be made repeatedly so that some workloads may change from being in the Dedicated Class to the Shared Class or from the Shared Class to the Dedicated Class. High availability virtual network functions may be handled economically by deeming the failover workloads to be in the Shared Class. | 2021-11-18 |
20210357258 | METHOD, DEVICE AND MEDIUM FOR ALLOCATING RESOURCE BASED ON TYPE OF PCI DEVICE - A method, a device and a medium for allocating a resource based on a type of a PCI device are provided. In a case of running a BIOS program during a start-up process, information of a Switch chip captured by a PCI enumeration operation is acquired. It is determined whether the PCI device is connected to a GPU server based on the information of the Switch chip. An operation of allocating the PCI device with an IO resource is cancelled in a case that the PCI device is connected to the GPU server, and the PCI device is allocated with an IO resource and a memory resource based on a preset allocation rule in a case that the PCI device is not connected to the GPU server. | 2021-11-18 |
20210357259 | COGNITIVE PROCESSING RESOURCE ALLOCATION - A processor may run a background process to identify a first task being initiated by a first user on a device, where the first task is associated with a first application. The processor may identify the first user of the device. The processor may analyze one or more interactions of the first user associated with the first application on the device. The processor may allocate, based at least in part on identification of the first user, identification of the first task, or analysis of the one or more interactions of the first user, computing resources to one or more hardware components on the device. | 2021-11-18 |
20210357260 | SYSTEMS AND METHODS FOR MAINTAINING POOLED TIME-DEPENDENT RESOURCES IN A MULTILATERAL DISTRIBUTED REGISTER - The present disclosure is directed to a novel system for using a distributed register to generate, manage, and store data for interest-pooled time deposit resource accounts. The invention leverages a pooled resource account approach, allowing for multiple disparate resource accounts to benefit from an enhanced interest return by pooling resource accounts. The system components of the invention contemplate the use of distributed register technology to provide a verified ledger of information related to one or more resource accounts, as well as store system data, user data, and metadata related to the movement and management of resources. By using a distributed register approach to store and verify data related to time-dependent resource account services, the invention provides an automated system and methods for enhancing the flow of sensitive verified information, reducing the need for manual review and increasing the speed at which various resource account services can be validated and executed. | 2021-11-18 |
20210357261 | FAST SHUTDOWN OF LARGE SCALE-UP PROCESSES - A system for shutting down a process of a database is provided. In some aspects, the system performs operations including tracking, during startup of a process, code locations of the process in the at least one memory. The operations may further include tracking, during runtime of the process and in response to the tracking of the code locations, memory segments of the at least one memory allocated to the process. The operations may further include receiving an indication for a shutdown of the process. The operations may further include waking, in response to the indication, at least one processing thread of a plurality of processing threads allocated to a database system. The operations may further include allocating a list of memory mappings to the plurality of processing threads. The operations may further include freeing, by each woken processing thread, the physical memory assigned to that thread by the memory mappings. | 2021-11-18 |
20210357262 | MULTI-DIMENSIONAL MODELING OF RESOURCES FOR INTERACTION SYSTEMS - A system that provides an exchange platform for resource interaction processors that qualify under a modelling process. The platform is continuously updated with information regarding the resource interaction processors with both private and public information. The platform includes regulatory and other rules dictating interaction parameters associated with the resource interaction processors. The platform provides a viability metric of the resource interaction processors for providing resource interaction processor services to the resource interaction processors. Authorized entities may access and view the merchant exchange platform to gather information about the resource interaction processors for use in determining whether to provide services. Furthermore, the platform provides information regarding regulatory oversight for the specific resource interaction processors, allows for the establishment of a dialogue between resource interaction processors and service providers, and provides ongoing review of resource interaction processors with real-time data updating in an encrypted environment. | 2021-11-18 |
20210357263 | FLEXIBLE COMPUTING - Embodiments of the present disclosure may provide dynamic and fair assignment techniques for allocating resources on a demand basis. Assignment control may be separated into at least two components: a local component and a global component. Each component may have an active dialog with each other; the dialog may include two aspects: 1) a demand for computing resources, and 2) a total allowed number of computing resources. The global component may allocate resources from a pool of resources to different local components, and the local components in turn may assign their allocated resources to local competing requests. The allocation may also be throttled or limited at various levels. | 2021-11-18 |
20210357264 | Assignment of Resources to Database Connection Processes Based on Application Information - Techniques are disclosed relating to using different process groups to control allocation of execution resources for database connection processes that handle application requests. In disclosed embodiments, a database server receives a request from an application server for database resources, including application information specifying one or more attributes of the request. The server may assign a database connection process to access a database for the request and assign the database connection process to a process group based on the application information. The server may assign execution resources based on resource allocation parameters that are associated with the assigned process group. In disclosed embodiments, tenants that are using inappropriate amounts of resources are identified and requests from the identified tenants may be assigned to process groups whose processes are allocated a smaller amount of resources per process than other process groups, which may reduce performance degradation in a database system. | 2021-11-18 |
20210357265 | AUTOMATED INSTANTIATION OF VIRTUAL PRODUCTION CONTROL ROOM - Systems and methods for providing an environment for creating media content are disclosed. According to at least one embodiment, a method for providing an environment including a virtual production control room (VPCR) for creating media content is disclosed. The method includes: requesting, from a user, information regarding the environment to be provided; receiving, from the user, the information; and creating the environment by provisioning a plurality of resources for the VPCR from among a plurality of cloud computing resources based on the received information. | 2021-11-18 |
20210357266 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD AND COMPUTER READABLE MEDIUM - A first division unit ( | 2021-11-18 |
20210357267 | DEFINING AND ACCESSING DYNAMIC REGISTERS IN A VIRTUAL MULTI-PROCESSOR SYSTEM - A system and method for the dynamic, run-time configuration of logic core register files, and the provision of an associated execution context. The dynamic register files as well as the associated execution context information are software-defined so as to be virtually configured in random-access memory. This virtualization of both the processor execution context and register files enables the size, structure and performance to be specified at run-time and tailored to the specific processing, instructions and data associated with a given processor state or thread, thereby minimizing both the aggregate memory required and the context switching time. In addition, the disclosed system and method provides for processor virtualization which further enhances the flexibility and efficiency. | 2021-11-18 |
20210357268 | ENSEMBLE MACHINE LEARNING FRAMEWORK FOR PREDICTIVE OPERATIONAL LOAD BALANCING - There is a need for more effective and efficient constrained-optimization-based operational load balancing. In one example, a method comprises determining constraint-satisfying operator-unit mapping arrangements that satisfy an operator unity constraint and an operator capacity constraint; for each constraint-satisfying operator-unit mapping arrangement, determining an arrangement utility measure; processing each arrangement utility measure using an optimization-based ensemble machine learning model that is configured to determine an optimal operator-unit mapping arrangement of the plurality of constraint-satisfying operator-unit mapping arrangements; and performing one or more operational load balancing operations based on the optimal operator-unit mapping arrangement. | 2021-11-18 |
20210357269 | QUALITY OF SERVICE SCHEDULING WITH WORKLOAD PROFILES - Examples described herein include systems and methods for prioritizing workloads, such as virtual machines, to enforce quality of service (“QoS”) requirements. An administrator can assign profiles to workloads, the profiles representing different QoS categories. The profiles can extend scheduling primitives that can determine how a distributed resource scheduler (“DRS”) acts on workloads during various workflows. The scheduling primitives can be used to prioritize workload placement, determine whether to migrate a workload during load balancing, and determine an action to take during host maintenance. The DRS can also use the profile to determine which resources at the host to allocate to the workload, distributing higher portions to workloads with higher QoS profiles. Further, the DRS can factor in the profiles in determining total workload demand, leading to more efficient scaling of the cluster. | 2021-11-18 |
20210357270 | LOAD BALANCING AND FAULT TOLERANT SERVICE IN A DISTRIBUTED DATA SYSTEM - Techniques for load balancing and fault tolerant service are described. An apparatus may comprise load balancing and fault tolerant component operative to execute a load balancing and fault tolerant service in a distributed data system. The load balancing and fault tolerant service distributes a load of a task to a first node in a cluster of nodes using a routing table. The load balancing and fault tolerant service stores information to indicate the first node from the cluster of nodes is assigned to perform the task. The load balancing and fault tolerant service detects a failure condition for the first node. The load balancing and fault tolerant service moves the task to a second node from the cluster of nodes to perform the task for the first node upon occurrence of the failure condition. | 2021-11-18 |
20210357271 | SYNCHRONIZATION OF DATA PROCESSING IN A CALCULATING SYSTEM - A control node and method therein to split and distribute a processing task to multiple calculating nodes for synchronization of their data processing in a calculating system are disclosed. The calculating system includes a primary system input interface, a secondary system input interface, the control node and the multiple calculating nodes, which are independent and perform data processing in parallel. The control node receives a second processing task from the secondary system input interface and splits the second processing task into a number of execution requests according to the number of the multiple calculating nodes. The control node queries any one of the multiple calculating nodes for a time reference retrieved from a time source common to all of the multiple calculating nodes and calculates an execute time for the execution requests to be processed in the multiple calculating nodes. | 2021-11-18 |
20210357272 | METHOD AND APPARATUS FOR PREVENTING TASK-SIGNAL DEADLOCK DUE TO CONTENTION FOR MUTEX IN RTOS - A method for preventing a task-signal deadlock arising due to contention for a mutex in a real-time operating system (RTOS) includes detecting, by a processing unit, a signal notification sent to a task for execution of a signal handler; identifying, by the processing unit, a mutex to be acquired by the signal handler, when the signal notification is detected; determining whether the identified mutex has been acquired by the task; and utilizing, by the processing unit, an alternative stack for execution of the signal handler, in response to determining that the mutex has been acquired by the task, for preventing a task-signal deadlock during the execution. | 2021-11-18 |
20210357273 | METHOD OF CONTENTIONS MITIGATION FOR AN OPERATIONAL APPLICATION, ASSOCIATED COMPUTER PROGRAM PRODUCT AND METHOD FOR DETERMINING A STRESS APPLICATION - The present invention relates to a method of contentions mitigation for an operational application implemented by an embedded platform comprising a plurality of cores and a plurality of shared resources. | 2021-11-18 |
20210357274 | INFORMATION PROCESSING APPARATUS, CONTROL METHOD, AND PROGRAM - A first information processing apparatus ( | 2021-11-18 |
20210357275 | MESSAGE STREAM PROCESSOR MICROBATCHING - Embodiments provide a batching system that conforms message batches to publication constraints and also to message ordering requirements. An output array of messages is formed from messages received from a plurality of input streams, in which the messages are ordered. The output array preserves the ordering of the messages found in the source input streams. Messages are added from a head of the output array to a batch until addition of a next message to the batch would violate a particular batch processing constraint imposed on the batch. According to embodiments, one or more additional messages are included in the current batch when addition of the one or more additional messages to the batch (a) does not violate the particular batch processing constraint, and (b) continues to preserve the ordering of the messages, in the batch, with respect to the respective ordering of each of the plurality of input streams. | 2021-11-18 |
20210357276 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR SHARING INFORMATION IN A DISTRIBUTED FRAMEWORK - A system, method and computer program product are provided for receiving information associated with a message, issuing a storage resource request in connection with a storage resource and determining whether the storage resource is available. In use, the information is capable of being shared in less than one second, utilizing an automotive electronic control unit which includes a plurality of interfaces. | 2021-11-18 |
20210357277 | SYSTEM AND METHOD FOR PROCESSING DATA OF ANY EXTERNAL SERVICES THROUGH API CONTROLLED UNIVERSAL COMPUTING ELEMENTS - Disclosed herein are systems and methods for multi-system connectivity and automation via universal computing elements. Universal computing elements may comprise an object queue, one or more counters, and a function operating on parameters of objects in the object queue. Universal computing elements may be interconnected into processes of arbitrary complexity. Universal computing elements may facilitate modular and scalable business process development, including application programming interface and database connectivity. | 2021-11-18 |
20210357278 | METHOD FOR SERVING CLOUD OF QUANTUM COMPUTING AND COMPUTING DEVICE FOR EXECUTING THE METHOD - A method for serving cloud of quantum computing according to an embodiment may include receiving a request to perform quantum computing from a client and providing a standard resource description to the client, receiving, from the client, resource-related information corresponding to the standard resource description, providing a standard application program interface (API) to the client, and receiving, from the client, standard quantum computing code created through the standard API, selecting quantum computing hardware, which is to execute the standard quantum computing code, among a plurality of quantum computing hardware, based on one or more among the resource-related information and the standard quantum computing code, converting the standard quantum computing code into quantum computing code executable on the selected quantum computing hardware, and executing the converted quantum computing code using the selected quantum computing hardware. | 2021-11-18 |
20210357279 | HANDLING OPERATION SYSTEM (OS) IN SYSTEM FOR PREDICTING AND MANAGING FAULTY MEMORIES BASED ON PAGE FAULTS - A method of operating a system running a virtual machine that executes an application and an operating system (OS) includes performing first address translation from first virtual addresses to first physical addresses, identifying faulty physical addresses among the first physical addresses, each faulty physical address corresponding to a corresponding first physical address associated with a faulty memory cell, analyzing a row address and a column address of each faulty physical address and specifying a fault type of the faulty physical addresses based on the analyzing of the row address and the column address of each faulty physical address, and performing second address translation from second virtual addresses to second physical addresses based on a faulty address, thereby excluding the faulty address from the second physical addresses. | 2021-11-18 |
20210357280 | SYSTEMS AND METHODS FOR APPLICATION OPERATIONAL MONITORING - A method for application operational monitoring may include an operational monitoring computer program: (1) ingesting a plurality of service level indicator (SLI) metrics for an application, each SLI metric identifying a number of successful observations and a number of total observations; (2) calculating a SLI score for each SLI metric based on the number of successful observations and the number of total observations for the SLI metric; (3) weighting the SLI score for each SLI metric; (4) combining the weighted SLI scores into an application SLI score; (5) calculating a calculated error budget based on the application SLI score; (6) determining that the calculated error budget exceeds an error budget for the application; (7) generating a notification in response to the calculated error budget breaching the error budget; and (8) causing implementation of a restriction on the application, wherein the restriction prevents enhancements to the application. | 2021-11-18 |
20210357281 | Using User Equipment Data Clusters and Spatial Temporal Graphs of Abnormalities for Root Cause Analysis - Concepts and technologies are disclosed herein for using user equipment data clusters and spatial temporal graphs of abnormalities for root cause analysis. User equipment data can be obtained from a cellular network. A filter having a threshold can be applied to the user equipment data to obtain records. A determination is made whether the threshold is to be adaptively adjusted. If a determination is made that the threshold is not to be adjusted, the records can be added to a record set. The records in the subset of records can be correlated based on a key to obtain a filtered and correlated version of the record set, a spatial temporal graph of abnormalities associated with the cellular network can be generated based on the filtered and correlated version of the record set, and a root cause of a failure can be determined based on the spatial temporal graph of abnormalities. | 2021-11-18 |
20210357282 | METHODS AND SYSTEMS FOR SERVER FAILURE PREDICTION USING SERVER LOGS - Embodiments provide methods and systems of predicting server failures. A method may include accessing distinct log clusters representing instructions executed in a server, applying a first density machine learning model over an input vector of distinct log clusters, with length equal to the number of distinct log clusters, for obtaining a first prediction output, applying a first sequential machine learning model over a time length sequence of distinct log clusters for obtaining a second prediction output, applying a second density machine learning model over the input vector for obtaining a third prediction output, applying a second sequential machine learning model over the time length sequence of distinct log clusters for obtaining a fourth prediction output, aggregating the first, second, third and fourth prediction outputs by an ensemble model, and predicting the likelihood of next log clusters having anomalous behavior based on the aggregating. The first density and first sequential models are trained by normal logs. The second density and second sequential models are trained by abnormal logs. | 2021-11-18 |
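Two of the building blocks this abstract describes — the count vector over distinct log clusters fed to the density models, and the ensemble aggregation of the four prediction outputs — could look roughly like this. The equal ensemble weights are an assumption; the application does not specify them.

```python
from collections import Counter

def cluster_count_vector(log_cluster_ids, n_clusters):
    """Input vector for the density models: one occurrence count per
    distinct log cluster, with length equal to the number of clusters."""
    counts = Counter(log_cluster_ids)
    return [counts.get(i, 0) for i in range(n_clusters)]

def ensemble_aggregate(outputs, weights=None):
    """Aggregate the four model outputs (two density, two sequential)
    into a single anomaly likelihood via a weighted sum."""
    if weights is None:
        weights = [1.0 / len(outputs)] * len(outputs)
    return sum(w * p for w, p in zip(weights, outputs))
```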
20210357283 | RELIABLE VIRTUALIZED NETWORK FUNCTION SYSTEM FOR A CLOUD COMPUTING SYSTEM - A reliable network function virtualization (rVNF) system includes a virtualized network function (VNF) application instance that includes a plurality of physical VNF instances. A load balancer provides an interface between a client and the VNF application instance. A load balancer interface facilitates delivery of packets related to a particular user context to the same physical VNF instance. A communication interface facilitates communication between the client and the VNF application instance. Application storage stores session data associated with the VNF application instance. | 2021-11-18 |
20210357284 | INCIDENT MANAGEMENT FOR TRIAGING SERVICE DISRUPTIONS - For incident management, parsing is performed responsive to an incident ticket being opened relative to a first application, the parsing identifying a set of incident data. A set of applications is identified using a dependency graph, wherein each application in the set of applications is dependent on the first application through at least one dependency relationship. Responsive to the incident ticket, a subset of a set of users of a second application is notified about the incident ticket related to the first application, the second application being a member of the set of applications, the subset of the set of users of the second application performing a type of transaction with the second application, wherein the type of transaction is indicated in a dependency relationship between the first application and the second application. A user in the subset of users is prevented from creating a second incident ticket. | 2021-11-18 |
20210357285 | Program Generation Apparatus and Parallel Arithmetic Device - A program for causing a parallel arithmetic device including a plurality of arithmetic groups to execute parallel arithmetic is input. The program includes information defining each of the following: application arithmetic constituting predetermined processing; redundant arithmetic (which is redundant arithmetic of the application arithmetic and is arithmetic assigned to a surplus core(s) in a diagnosis target arithmetic group); and diagnostic arithmetic (arithmetic that is a comparison of results of the same redundant arithmetic by two or more diagnosis target arithmetic groups and is assigned to surplus cores in an arithmetic group for diagnosis). The surplus core(s) is a core(s) to which no application arithmetic is assigned. | 2021-11-18 |
20210357286 | STORAGE APPARATUS, DUMP DATA MANAGEMENT METHOD, AND DUMP DATA MANAGEMENT PROGRAM - Dump data of a memory is allowed to be easily and appropriately managed. In a storage apparatus including a CPU, a DRAM, and a drive, the CPU executes storage processing to store data of the DRAM to be stored into the drive as dump data upon a failure occurrence in the storage apparatus. The CPU deduplicates the dump data in the storage processing or after storing the dump data into the drive. The CPU may execute deduplication processing between the dump data and other dump data stored in the drive or may execute deduplication processing within the data to be stored. | 2021-11-18 |
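Chunk-level deduplication of the kind this abstract mentions — within a single dump or between a dump and previously stored dumps — can be sketched as below. The chunk size and SHA-256 fingerprinting are illustrative choices, not details from the application.

```python
import hashlib

def dedupe_dump(dump, store=None, chunk_size=4096):
    """Split dump bytes into chunks and keep one stored copy per unique
    chunk fingerprint. Passing an existing store deduplicates this dump
    against other dumps as well as within itself."""
    store = {} if store is None else store
    recipe = []
    for i in range(0, len(dump), chunk_size):
        block = dump[i:i + chunk_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # store the chunk only once
        recipe.append(digest)
    return recipe, store

def restore_dump(recipe, store):
    """Reassemble the original dump from its chunk recipe."""
    return b"".join(store[d] for d in recipe)
```

A dump containing two identical 4 KiB chunks and one distinct chunk yields a three-entry recipe but only two stored chunks.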
20210357287 | MEMORY CONTROLLERS, MEMORY SYSTEMS AND MEMORY MODULES - A memory controller to control a memory module includes an error correction code (ECC) engine, a central processing unit to control the ECC engine and an error managing circuit. The ECC engine performs an ECC decoding on a read codeword set from the memory module to generate a first syndrome and a second syndrome in a read operation, corrects a correctable error in a user data set based on the first syndrome and the second syndrome and provides the error managing circuit with the second syndrome associated with the correctable error. The error managing circuit counts error addresses associated with correctable errors detected through read operations, stores second syndromes associated with the correctable errors by accumulating the second syndromes, determines an attribute of the correctable errors based on the counting and the accumulated second syndromes, and determines an error management policy on a memory region associated with the correctable errors. | 2021-11-18 |
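One plausible reading of the error-managing circuit's bookkeeping — counting error addresses, accumulating syndromes, and deriving a policy from them — is sketched below. The repeat threshold and the policy names ("repair", "scrub") are assumptions for illustration.

```python
from collections import defaultdict

class ErrorManager:
    """Track correctable-error addresses and syndromes; derive a policy."""

    def __init__(self, repeat_threshold=3):
        self.counts = defaultdict(int)
        self.syndromes = defaultdict(list)
        self.repeat_threshold = repeat_threshold

    def record(self, address, syndrome):
        """Count one correctable error and accumulate its syndrome."""
        self.counts[address] += 1
        self.syndromes[address].append(syndrome)

    def policy(self, address):
        """Repeated errors with an identical syndrome suggest a permanent
        fault, making the region a repair candidate (e.g. remapping);
        otherwise a scrub (rewrite of corrected data) is assumed to suffice."""
        syn = self.syndromes[address]
        if self.counts[address] >= self.repeat_threshold and len(set(syn)) == 1:
            return "repair"
        return "scrub"
```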
20210357288 | SEMICONDUCTOR STORAGE APPARATUS AND ECC RELATED INFORMATION READING METHOD - A semiconductor storage apparatus and an error checking and correction (ECC) related information reading method, which can output various information related to pages that have been error-corrected during a continuous reading operation, are provided. A NAND flash memory includes a memory cell array, a continuous reading component, an ECC related information memory part, and an output component. The continuous reading component continuously reads pages of the memory cell array. The ECC related information memory part stores page addresses of all of the pages that have been error-corrected by an ECC circuit regarding the pages continuously read by the continuous reading component. The output component outputs page addresses stored in the ECC related information memory part in response to a read command after the continuous reading operation. | 2021-11-18 |
20210357289 | MEMORY SYSTEM - A memory system includes a semiconductor storage device and a memory controller including a storage circuit that stores correction value for read voltages in association with the word line, and a control circuit that reads data from the memory cells, performs a correction operation on the read data to determine a number of error bits therein, determines the correction value for each read voltage based on the number of error bits and a ratio of a lower tail fail bit count and an upper tail fail bit count, and stores the correction values for the read voltages in the storage circuit. The lower tail fail bit count represents the number of memory cells in a first state having threshold voltages of an adjacent state, and the upper tail fail bit count represents the number of memory cells in the adjacent state having threshold voltages of the first state. | 2021-11-18 |
20210357290 | Retrieval of Data Objects with a Common Trait in a Storage Network - A method includes identifying an independent data object of a plurality of independent data objects for retrieval from dispersed storage network (DSN) memory. The method further includes determining a mapping of the plurality of independent data objects into a data matrix, wherein the mapping is in accordance with the dispersed storage error encoding function. The method further includes identifying, based on the mapping, an encoded data slice of the set of encoded data slices corresponding to the independent data object. The method further includes sending a retrieval request to a storage unit of the DSN memory regarding the encoded data slice. When the encoded data slice is received, the method further includes decoding the encoded data slice in accordance with the dispersed storage error encoding function and the mapping to reproduce the independent data object. | 2021-11-18 |
20210357291 | IDENTIFYING A FAULT DOMAIN FOR A DELTA COMPONENT OF A DISTRIBUTED DATA OBJECT - The disclosure herein describes placing a delta component of a base component in a target fault domain. A delta component associated with a base component is generated. The generation includes selecting a first fault domain as a target fault domain for the delta component based on the first fault domain including a witness component associated with the distributed data object of the base component. Otherwise, the generation includes selecting a second fault domain as the target fault domain based on the second fault domain including at least one data component that includes a different address space than the base component. Otherwise, the generation includes selecting a third fault domain as the target fault domain based on the third fault domain being unused. Then, the delta component is placed on the target fault domain, whereby data durability of the distributed data object is enhanced, and available fault domains are preserved. | 2021-11-18 |
20210357292 | DATA COMMUNICATION - There is described a method for communicating data, the method comprising: receiving an incomplete data stream, wherein the incomplete data stream comprises a plurality of sequences of data points having respective values and a plurality of sequences of missing data points; receiving a missing data model; determining values for each of the plurality of sequences of missing data points, comprising: selecting a sequence of missing data points that has not previously been processed, wherein the sequence of missing data points to be processed is selected as a smallest sequence of missing data points of the plurality of sequences of missing data points that have not previously been processed; processing the incomplete data stream to determine values for the selected sequence of missing data points based upon the missing data model; updating the incomplete data stream to include the determined values for the selected sequence of missing data points; and wherein values for subsequent sequences of missing data points are generated based upon the updated data stream; and outputting a corrected data stream comprising the determined values for each of the plurality of sequences of missing data points. | 2021-11-18 |
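The gap-filling order the claim describes — always process the smallest unprocessed run of missing points, then update the stream so larger gaps see the earlier reconstructions — can be sketched as follows. Representing missing points as `None` and passing the missing-data model as a callback are illustrative assumptions.

```python
def find_gaps(stream):
    """Return (start, length) for each run of missing (None) points."""
    gaps, i = [], 0
    while i < len(stream):
        if stream[i] is None:
            j = i
            while j < len(stream) and stream[j] is None:
                j += 1
            gaps.append((i, j - i))
            i = j
        else:
            i += 1
    return gaps

def fill_missing(stream, model):
    """Fill gaps smallest-first, updating the stream in place so that
    subsequent (larger) gaps are reconstructed from the updated data.
    `model(stream, start, length)` returns the values for one gap."""
    for start, length in sorted(find_gaps(stream), key=lambda g: g[1]):
        stream[start:start + length] = model(stream, start, length)
    return stream
```

With a simple linear-interpolation model, `[0, None, 2, None, None, 8]` becomes `[0, 1, 2, 4, 6, 8]`: the single-point gap is filled first, then the two-point gap.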
20210357293 | OPTIMIZING INCREMENTAL BACKUP FOR CLIENTS IN A DEDUPE CLUSTER TO PROVIDE FASTER BACKUP WINDOWS WITH HIGH DEDUPE AND MINIMAL OVERHEAD - An intelligent method of handling incremental backups concurrent with load balancing movement. The file system uses placement tags, incremental backup requests and capacity balancing data movement to make intelligent decisions to avoid affecting any backup windows for clients or backup apps. The file system tracks capacity balancing file movements inside the cluster. When switching locations of files in a cluster from one node to another, it is performed as an atomic change of switching inode attributes by the file system after the contents of the file have been copied over to the new node. During the file movement for capacity balancing, the file system handles requests for full backups differently than requests for incremental backups. The file system continues to handle virtual synthesis and fastcopy requests on the node that hosts the previous backup to ensure that the incremental backup succeeds with the expected smaller backup window from the client. | 2021-11-18 |
20210357294 | Object Store Backup Method and System - A computer-implemented method of backing up an application to an object storage system includes receiving a policy with a retention attribute for the application being backed up, and receiving a file including data from the application being backed up at a locally-mounted-file-system representation. A manifest is generated including file segment metadata based on the file, at least one attribute associated with the locally-mounted-file-system representation, and at least one version. A file segment is generated and stored, the file segment including data corresponding to at least one version in the manifest, with at least some of the data in a bucket comprising an object lock in the object storage system. The manifest is stored as an object in the object storage system. | 2021-11-18 |
20210357295 | RECOVERY IMAGE DOWNLOADS VIA DATA CHUNKS - An example non-transitory computer-readable storage medium comprising instructions that when executed cause a processor of a computing device to: in response to receiving a chunk size request from a recovery agent executable at an operating system of the computing device, determine a chunk size via firmware instructions of the computing device; transmit the chunk size from the firmware instructions to the recovery agent; receive data chunks of a recovery image from the recovery agent in sequence; store the data chunks in a storage device of the computing device; and construct the recovery image using the data chunks. | 2021-11-18 |
20210357296 | CROSS-APPLICATION DATABASE RESTORE - A system according to certain aspects improves the process of restoring database objects and converting those objects into another database file format. According to certain aspects, a database file is backed up in a block-level fashion. Instead of restoring the entire backup file, the information management system may restore a particular database object from a backup database file that is stored as multiple blocks or other granular units. Then, the information management system can extract the desired data from the restored block(s). By using block-level mapping and storage techniques described herein, the system can restore a database object in a backup database file without restoring the entire backup database file, thereby speeding up restore operations and reducing consumption of storage and processing resources. In addition, the information management system can convert the blocks, using a staging memory, to another database file format as desired. | 2021-11-18 |
20210357297 | REAL TIME DATABASE BACKUP STATUS INDICATION AND RESTORE - A computer-implemented method at a data management system comprises: retrieving start and end times of a backup of a database; retrieving time stamps of log backups of the database; retrieving sequence numbers of the log backups; generating a graphical user interface illustrating a timeline of availability of database restoration and unavailability; making a second backup of the database; illustrating, on the graphical user interface during the making, pending availability of the second database backup; receiving a command to restore the database at an available time as illustrated by the graphical user interface; and restoring the database. | 2021-11-18 |
20210357298 | RECOVERY VIA BACKUPS OF RECOVERY INFORMATION - An example computing device includes a controller to control operation of a firmware subsystem of the computing device. The controller is separate from a main processor of the computing device. A memory stores subsystem data that is useable by the controller. The subsystem data includes recovery information executable by the controller to initiate recovery of the subsystem. The computing device further includes recovery coordination instructions. The recovery coordination instructions determine integrity of the recovery information as stored on the memory. In response to determining that the recovery information lacks integrity, the recovery coordination instructions initiate recovery of the firmware subsystem using a backup of the recovery information and perform recovery of the firmware subsystem using an update to the firmware subsystem. | 2021-11-18 |
20210357299 | TEST PLATFORM EMPLOYING TEST-INDEPENDENT FAULT INSERTION - A method of testing a data storage system includes maintaining libraries of test routines, a first library including a set of normal-functional tests each operable to test corresponding normal functionality of the data storage system, a second library including a set of fault inserters each being independently operable to induce a corresponding fault condition into the data storage system. Normal-functional tests are executed concurrently with one or more of the fault inserters to cause the normal-functional tests to encounter the corresponding fault conditions during execution and thereby test a response of the normal functionality of the data storage system to the occurrence of the fault conditions. | 2021-11-18 |
20210357300 | METHOD AND SYSTEM FOR CLASSIFICATION AND RANKING OF DELTA ALARMS - Source code of any application may be edited/modified to accommodate new changes. The changes in the source code may also affect static analysis alarms that were generated for the original source code. Changes in the source code may result in newly generated alarms, and some of the alarms in the original source code may repeat in the new source code. Many of the repeated alarms get suppressed using appropriate techniques. The repeated alarms that remain after the suppression, and the newly generated alarms together form the delta alarms. Each of the delta alarms may have been generated due to different reasons. Classification of the delta alarms is performed based on reasons/causes for their generation. The system further performs ranking of the classes of the delta alarms and thus ranking of the delta alarms. Further, the system groups the alarms having a common cause and reports the delta alarms with their causes. | 2021-11-18 |
20210357301 | MONITORING SYSTEM, MONITORING APPARATUS, AND MONITORING METHOD - The invention makes it possible to grasp the situation and respond more quickly when a failure occurs in a monitored system. A monitoring system | 2021-11-18 |
20210357302 | DYNAMICALLY MAPPING SOFTWARE INFRASTRUCTURE UTILIZATION - A computer-based system and method for real-time monitoring computer resource usage, including obtaining, by a monitoring application executed by a processor, from a plurality of applications, each application executed by a processor, a report upon the accessing of at least one accessed resource by at least one accessing user; and generating, by the monitoring application based on the report, a map of resources accessed by the plurality of applications. If a notification that a resource has been compromised is obtained, a list of all applications that have accessed the resource may be generated based on the map. | 2021-11-18 |
20210357303 | REALTIME DATA STREAM CLUSTER SUMMARIZATION AND LABELING SYSTEM - A method is provided for automatically discovering topics in electronic posts, such as social media posts. The method includes receiving a corpus that includes a plurality of electronic posts. The method further includes identifying a plurality of candidate terms within the corpus and selecting, as a trimmed lexicon, a subset of the plurality of candidate terms using predefined criteria. The method further includes clustering at least a subset of the plurality of electronic posts according to a plurality of clusters using the lexicon to produce a plurality of statistical topic models. The method further includes storing information corresponding to the statistical topic models. | 2021-11-18 |
20210357304 | IN-MEMORY DATABASE SERVICE TRACE MINING - In an example embodiment, a solution is provided to mine trace data, detangle it, and rewrite the trace data without redundancy. In an example embodiment, mining may take place before detangling, but such an ordering is not mandatory. Combining mining with detangling solves the technical problem of the production of difficult-to-read service traces, as it mines the “interesting” parts, corrects the timestamp order, and removes redundancy. | 2021-11-18 |
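A toy version of the "correct the timestamp order and remove redundancy" step, assuming trace entries can be modeled as (timestamp, message) pairs, might look like this:

```python
def detangle_trace(entries):
    """Sort trace entries by timestamp and drop exact duplicates,
    keeping one copy of each (timestamp, message) pair."""
    seen, out = set(), []
    for entry in sorted(entries):
        if entry not in seen:
            seen.add(entry)
            out.append(entry)
    return out
```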
20210357305 | METHODS AND SYSTEMS FOR MEASURING USER AND SYSTEM METRICS - A method including receiving, from a user device, a user request to access data associated with a web page; generating, by a processor, a first transaction identification; collecting transaction information, the transaction information comprising server-side metrics; integrating, by the processor, the first transaction identification with the transaction information; transmitting, by the processor, the first transaction identification to the user device; receiving, from the user device, client-side data associated with a second transaction identification; integrating, by the processor, the server-side metrics and the client-side data; and analyzing, by the processor, the integrated server-side metrics and the client-side data. | 2021-11-18 |
20210357306 | MEASURING MOBILE APPLICATION PROGRAM RELIABILITY CAUSED BY RUNTIME ERRORS - A quality score for a computer application release is determined using a first number of unique users who have launched the computer application release on user devices and a second number of unique users who have encountered at least once an abnormal termination with the computer application release on user devices. Additionally or optionally, an application quality score can be computed for a computer application based on quality scores of computer application releases that represent different versions of the computer application. Additionally or optionally, a weighted application quality score can be computed for a computer application by further taking into consideration the average application quality score and popularity of a plurality of computer applications. | 2021-11-18 |
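A simple instantiation of the two scores described here — a per-release score from unique users who launched versus those who hit an abnormal termination, and an aggregate score across releases — is sketched below. The crash-free-user ratio and the launch-weighted mean are assumed formulas, not taken from the application.

```python
def release_quality(users_launched, users_crashed):
    """Fraction of unique users who never encountered an abnormal
    termination of this release."""
    if users_launched == 0:
        return 1.0
    return 1.0 - users_crashed / users_launched

def app_quality(releases):
    """Aggregate over releases (different versions of one application):
    a launch-weighted mean of (users_launched, users_crashed) pairs."""
    total = sum(launched for launched, _ in releases)
    if total == 0:
        return 1.0
    return sum(l * release_quality(l, c) for l, c in releases) / total
```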
20210357307 | AUTOMATED PROGRAM REPAIR TOOL - An automated program repair tool utilizes a neural transformer model with attention to predict the contents of a bug repair in the context of source code having a bug of an identified bug type. The neural transformer model is trained on a large unsupervised corpus of source code using a span-masking denoising optimization objective, and fine-tuned on a large supervised dataset of triplets containing a bug-type annotation, software bug, and repair. The bug-type annotation is derived from an interprocedural static code analyzer. A bug type edit centroid is computed for each bug type and used in the inference decoding phase to generate the bug repair. | 2021-11-18 |
20210357308 | SYSTEMS AND METHODS FOR DEBUGGING AND APPLICATION DEVELOPMENT - Disclosed are implementations for software debugging and application development, including a software debugging method that includes receiving from a remote device, by an instrumentation agent operating at an application system, one or more instrumentation requests for application data resulting from execution of an application process on the application system, processing the instrumentation requests to generate injection point objects, inserting the objects into code (e.g., bytecode) of the application process, and capturing blocks of application data resulting from the inserted one or more injection point objects. At least one of the injection point objects is a multi-point object that includes multiple cooperating object parts that are each inserted into one of multiple insertion points of the code, with the multi-point injection point object configured to capture application data generated from execution of a segment of the code bounded by the multiple insertion points of the code corresponding to the multiple object parts. | 2021-11-18 |
20210357309 | SYSTEMS AND METHODS FOR DEBUGGING AND APPLICATION DEVELOPMENT - Disclosed are implementations for software debugging and application development, including a method that includes receiving an instrumentation request, associated with one or more contextual conditions, for application data resulting from execution of an application process on an application system, the application process corresponding to source code with a segment to capture data at a first observability level. The instrumentation request includes information to cause adjustment of the first observability level to a second observability level different from the first observability level. The method also includes identifying running code segment of the application process corresponding to the segment of the source code, and modifying the identified running code segment into a modified conditional running code segment to capture data at the adjusted second level of observability upon determination that current system contextual information matches at least some of the one or more contextual conditions associated with the instrumentation request. | 2021-11-18 |
20210357310 | SYSTEMS AND METHODS FOR DEBUGGING AND APPLICATION DEVELOPMENT - Disclosed are implementations for software debugging and application development, including a method that includes receiving instrumentation requests for application data resulting from execution of an application process on an application system, generating from the received instrumentation requests injection point objects configured to obtain blocks of application data, determining risk of adverse impact by an injection point object on performance and/or state of the application system, and processing the injection point object based on the determined risk of adverse impact. The processing includes evaluating the injection point object by the application process if the injection point object is determined to be safe for evaluation by the application process, evaluating the injection point object by an evaluation process if the injection point object has an uncertain risk of adverse impact, or performing mitigation operations if the injection point object is associated with a high risk of causing adverse impact. | 2021-11-18 |
20210357311 | DEBUGGING A MEMORY SUB-SYSTEM WITH DATA TRANSFER OVER A SYSTEM MANAGEMENT BUS - A processing device in a memory system receives, from a host system, a request for a debug slave address associated with a system management bus port of a memory sub-system and sends a response comprising the debug slave address to the host system. The processing device receives, from the host system, a request to enable the system management bus port to receive a request for debug information directed to the debug slave address, receives, from the host system, the request for debug information directed to the debug slave address, and sends the debug information to the host system over a system management bus coupled to the system management bus port of the memory sub-system. | 2021-11-18 |
20210357312 | METHOD AND DEVICE FOR TESTING ROBUSTNESS AND STABILITY OF SMM, AND STORAGE MEDIUM - There are provided a method and a device for testing robustness and stability of an SMM, and a computer readable storage medium. In the method, a target variable is firstly obtained, and it is judged whether an SMI is triggered. Once it is judged that the SMI is triggered, the SMM is entered. In the SMM, a target testing model corresponding to a current value of the target variable is determined, and a target SMM function is tested with the target testing model pre-stored in a system management memory, and serial port information is printed by using firmware to determine the robustness and stability of the SMM, thereby achieving the testing for the robustness and stability of the SMM. | 2021-11-18 |
20210357313 | SYSTEMS AND METHODS FOR TEST DEPLOYMENT OF COMPUTATIONAL CODE ON VIRTUAL SERVERS - Methods and systems for test deployment of computational code on virtual servers are disclosed. In one embodiment, an exemplary method comprises receiving test computational code programmed to provide resources; selecting a test virtual server from a plurality of virtual servers; uploading the test computational code to the test virtual server; initializing the test computational code on the test virtual server; receiving computational performance measurements of the test virtual server and a remainder of the plurality of virtual servers; calculating a test score of the test virtual server based on the received computational performance measurements; and stopping the test computational code if the test score is outside a set range. | 2021-11-18 |
20210357314 | SMART REGRESSION TEST SELECTION FOR SOFTWARE DEVELOPMENT - A method of testing a change in a software code includes, searching a database of tests to identify a subset of the tests that include a function that executes the change, forming, from the subset, a multitude of groups each having a different execution path. The tests in the same group have the same execution path. The method further includes prioritizing the tests within each of the multitude of groups based on one or more testing characteristics, and selecting, from each of the groups, one or more of the prioritized tests to test the change. | 2021-11-18 |
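The selection flow in this abstract — keep tests whose functions execute the change, group by execution path so each group shares one path, prioritize within a group, and pick from each group — can be sketched as follows. The test-record shape and the use of runtime as the prioritization characteristic are illustrative assumptions.

```python
from collections import defaultdict

def select_regression_tests(tests, changed_function, per_group=1):
    """tests: dicts with 'name', 'functions' (set of functions exercised),
    'path' (execution-path tuple) and 'duration'. Tests sharing a path land
    in the same group; the cheapest test(s) in each group are selected."""
    groups = defaultdict(list)
    for t in tests:
        if changed_function in t["functions"]:   # test executes the change
            groups[t["path"]].append(t)
    selected = []
    for group in groups.values():
        group.sort(key=lambda t: t["duration"])  # illustrative priority
        selected.extend(t["name"] for t in group[:per_group])
    return sorted(selected)
```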
20210357315 | METHOD AND APPARATUS FOR CALCULATING A SOFTWARE STABILITY INDEX - A method is provided, comprising: deploying source code to a non-production instance of a software application; executing one or more tests on the non-production instance of the software application and logging any events that are generated during the tests in one or more test logs; retrieving data from the one or more test logs and calculating a stability index for the source code based on the data that is retrieved from the one or more test logs; and deploying the source code to a production-instance of the software application based on the stability index of the source code. | 2021-11-18 |
20210357316 | Synthesizing Data based on Topic Modeling for Training and Testing Machine Learning Systems - Systems and methods for generating a dataset of synthesized data items from a dataset of original data items are disclosed herein. Some embodiments include (i) selecting an original data item from the dataset of original data items, where each original data item (a) comprises a combination of first-type codes and second-type codes, and (b) is associated with a topic in a topic model; and (ii) generating a synthesized data item based on the original data item and the topic associated with the original data item, where the synthesized data item comprises a combination of first-type codes and second-type codes that differs from the combination of first-type codes and second-type codes in the original data item by one first-type code or one second-type code. | 2021-11-18 |
20210357317 | MEMORY DEVICE AND OPERATION METHOD - An operation method is applied to a memory device. The memory device includes a plurality of memory tiles. The operation method includes following steps: utilizing a first wear leveling process to perform an intra-tile wear leveling on the plurality of memory tiles by a processor; and utilizing a second wear leveling process to perform an inter-tile wear leveling on the plurality of memory tiles by the processor. | 2021-11-18 |
20210357318 | MEMORY CONTROLLER AND METHOD OF OPERATING THE SAME - Memory controller devices, memory systems, and operating methods for memory controller devices and memory systems are disclosed. In one aspect, a memory controller having improved wear leveling performance is disclosed. The memory controller may control a first memory area and a second memory area, and include a first software layer configured to control the first memory area based on first logical addresses, a second software layer configured to control the second memory area based on second logical addresses, and a logical address manager configured to compare a logical address received from a host with a reference address selected from among a plurality of logical addresses to be used by the host, and transmit the logical address received from the host to the first software layer or the second software layer according to a criterion selected from between a first criterion and a second criterion based on the comparison. | 2021-11-18 |
20210357319 | USER DEVICE INCLUDING A NONVOLATILE MEMORY DEVICE AND A DATA WRITE METHOD THEREOF - An access method of a nonvolatile memory device included in a user device includes receiving a write request to write data into the nonvolatile memory device; detecting an application issuing the write request, a user context, a queue size of a write buffer, an attribute of the write-requested data, or an operation mode of the user device; and deciding one of a plurality of write modes to use for writing the write-requested data into the nonvolatile memory device according to the detected information. The write modes have different program voltage and verify voltage sets. | 2021-11-18 |
20210357320 | MEMORY CONTROLLER, MEMORY SYSTEM AND OPERATING METHOD OF MEMORY DEVICE - A memory controller includes a block ratio calculator configured to calculate a ratio of free blocks among memory blocks for storing data; a policy selector configured to select, based on the calculated ratio of free blocks, any one garbage collection policy of a first garbage collection policy of specifying priorities to be used to select a victim block depending on attributes of the data, and a second garbage collection policy of specifying the priorities to be used to select the victim block regardless of the attributes of the data; and a garbage collection performing component configured to perform a garbage collection operation on at least one memory block of the memory blocks according to the garbage collection policy selected by the policy selector. | 2021-11-18 |
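The policy selection described above can be illustrated with a short sketch. This is an assumed, simplified model (block tuples, threshold, and "cold data" attribute are invented), showing only the idea of switching victim-selection criteria based on the free-block ratio.

```python
# Illustrative sketch (not the patent's implementation) of choosing a
# garbage-collection victim either by data attributes (first policy) or
# purely by invalid-page count (second policy), based on free-block ratio.

def select_victim(blocks, free_ratio, threshold=0.2):
    """Each block is (invalid_pages, is_cold_data). When free space is
    plentiful, prefer cold-data blocks (attribute-aware policy); when
    free space is scarce, take the most-invalid block regardless."""
    if free_ratio > threshold:
        # first policy: attribute-aware priority (cold data first)
        return max(range(len(blocks)),
                   key=lambda i: (blocks[i][1], blocks[i][0]))
    # second policy: attribute-agnostic, maximize reclaimed space
    return max(range(len(blocks)), key=lambda i: blocks[i][0])

blocks = [(10, False), (4, True), (9, False)]
print(select_victim(blocks, free_ratio=0.5))  # attribute-aware
print(select_victim(blocks, free_ratio=0.1))  # greedy on invalid pages
```

The design intuition: with ample free space the controller can afford to optimize data placement by attribute; under pressure it must reclaim the most space per erase.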
20210357321 | TWO-WAY INTERLEAVING IN A THREE-RANK ENVIRONMENT - A memory controller maintains a mapping of target ranges in system memory space interleaved two-ways across locations in a three-rank environment. For each range of the target ranges, the mapping comprises a two-way interleaving of the range across two ranks of the three-rank environment and offsets from base locations in the two ranks. At least one of the ranges has offsets that differ relative to each other. Such offsets allow the three ranks to be fully interleaved, two ways. An instruction to read data at a rank-agnostic location in the diverse-offset range causes the memory controller to map the rank-agnostic location to two interleaved locations offset different amounts from their respective base locations in their ranks. The controller may then effect the transfer of the data at the two interleaved locations. | 2021-11-18 |
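A hedged sketch of the mapping idea: each system-address range is interleaved across a pair of the three ranks, with possibly different offsets from each rank's base, so the three rank pairs (0,1), (1,2), (0,2) together cover all three ranks. The range table, granularity, and offsets below are invented for illustration.

```python
# Hypothetical range table: (range_base, size, rank_a, rank_b, off_a, off_b).
# The differing per-rank offsets are what let all three ranks be fully
# interleaved two ways, as the abstract describes.
RANGES = [
    (0x0000, 0x1000, 0, 1, 0x000, 0x000),
    (0x1000, 0x1000, 1, 2, 0x800, 0x000),  # offsets differ between ranks
    (0x2000, 0x1000, 0, 2, 0x800, 0x800),
]

def map_address(addr, unit=64):
    """Map a rank-agnostic system address to (rank, offset-within-rank)."""
    for base, size, ra, rb, offa, offb in RANGES:
        if base <= addr < base + size:
            chunk = (addr - base) // unit          # interleave granularity
            rank = ra if chunk % 2 == 0 else rb    # alternate between pair
            off = offa if chunk % 2 == 0 else offb
            return rank, off + (chunk // 2) * unit + (addr - base) % unit
    raise ValueError("unmapped address")

print(map_address(0x1000))  # even chunk of second range -> rank 1
print(map_address(0x1040))  # odd chunk of second range  -> rank 2
```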
20210357322 | STORAGE SYSTEM JOURNAL OWNERSHIP MECHANISM - A storage system in one embodiment comprises storage nodes, an address space, address mapping sub-journals and write cache data sub-journals. Each address mapping sub-journal corresponds to a slice of the address space, is under control of one of the storage nodes and comprises update information corresponding to updates to an address mapping data structure. Each write cache data sub-journal is under control of the one of the storage nodes and comprises data pages to be later destaged to the address space. A given storage node is configured to store write cache metadata in a given address mapping sub-journal that is under control of the given storage node. The write cache metadata corresponds to a given data page stored in a given write cache data sub-journal that is also under control of the given storage node. | 2021-11-18 |
20210357323 | WRITE SORT MANAGEMENT IN A MULTIPLE STORAGE CONTROLLER DATA STORAGE SYSTEM - In one aspect of write sort management in accordance with the present disclosure, a sort/no-sort determination is made prior to issuing a write command to a target storage controller. The write command identifies a write data unit such as track write data, for example, of a first write list of write data units to be written to storage locations of storage. The write command further identifies the storage location at which the write data unit of the first write list is to be stored. In one embodiment, the sort/no-sort determination determines whether an insertion point for an entry in a target write list is to be determined as a function of a write list search such as a logarithmic time search for a write list sort. As a result, the write list search for a write list sort may be selectively either performed or bypassed for insertion of the target write list entry as a function of the sort/no-sort determination. Other aspects and advantages are provided, depending upon the particular application. | 2021-11-18 |
20210357324 | SECTOR-BASED TRACKING FOR A PAGE CACHE - Exemplary methods, apparatuses, and systems include identifying that a first cache line from a first cache is subject to an operation that copies data from the first cache to a non-volatile memory. A first portion of the first cache line stores clean data and a second portion of the first cache line stores dirty data. A redundant copy of the dirty data is stored in a second cache line of the first cache. In response to identifying that the first cache line is subject to the operation, metadata associated with the redundant copy of the dirty data is used to copy the dirty data to a non-volatile memory while omitting the clean data. | 2021-11-18 |
20210357325 | EFFICIENT MEMORY DUMP - A method of operating a storage unit having non-volatile random-access memory (NVRAM) and solid-state memory is provided. The method includes transferring contents of the NVRAM to the solid-state memory, in response to detecting a power loss. The method includes during the transferring, having each of a plurality of channels in parallel, reading one or more of a plurality of logical unit numbers (LUNs) each corresponding to a portion of the NVRAM, performing an XOR of data of each of the one or more of the plurality of LUNs with data of a preceding LUN, and writing results of the XOR to the solid-state memory. | 2021-11-18 |
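The per-channel XOR chaining in the abstract above can be sketched as follows. This is an assumed model (byte strings standing in for LUN contents, invented names), showing only the idea that each LUN's data is XORed with the preceding LUN's data before being written out, with the first LUN having an all-zero predecessor.

```python
# Rough sketch of one channel's dump: XOR each LUN's data with the data
# of the preceding LUN, then write the result to solid-state memory.
# Recovery works in the other direction: lun[i] = dumped[i] XOR lun[i-1].

def dump_channel(luns):
    """Return the XOR results written out for one channel's LUNs."""
    out = []
    prev = bytes(len(luns[0]))  # all-zero "preceding LUN" for the first
    for data in luns:
        out.append(bytes(a ^ b for a, b in zip(data, prev)))
        prev = data
    return out

luns = [b"\x0f\x0f", b"\xff\x00", b"\x0f\x0f"]
dumped = dump_channel(luns)
print([d.hex() for d in dumped])  # ['0f0f', 'f00f', 'f00f']
```

Running the channels in parallel, each doing this read-XOR-write pipeline, is what lets the NVRAM contents be flushed quickly within the power-loss hold-up window.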
20210357326 | BIT MASKING VALID SECTORS FOR WRITE-BACK COALESCING - A processing device identifies a portion of data in a cache memory to be written to a managed unit of a separate memory device and determines, based on respective memory addresses, whether an additional portion of data associated with the managed unit is stored in the cache memory. The processing device further generates a bit mask identifying a first location and a second location in the managed unit, wherein the first location is associated with the portion of data and the second location is associated with the additional portion of data, and performs, based on the bit mask, a read-modify-write operation to write the portion of data to the first location in the managed unit of the separate memory device and the additional portion of data to the second location in the managed unit of the separate memory device. | 2021-11-18 |
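A minimal sketch of the bit-mask idea, with invented sector counts and names: set a bit for every sector of the managed unit that has valid data in the cache, then use the mask to merge cached sectors into the unit during one read-modify-write.

```python
# Illustrative sketch (assumed details, not the patent's implementation)
# of coalescing two cached portions of the same managed unit into a
# single read-modify-write driven by a sector bit mask.

SECTORS_PER_UNIT = 8

def build_mask(cached_sectors):
    """cached_sectors: iterable of sector indices present in the cache."""
    mask = 0
    for s in cached_sectors:
        mask |= 1 << s
    return mask

def read_modify_write(old_unit, cache, mask):
    """Merge cached sectors over the unit read from the memory device."""
    return [cache[i] if mask & (1 << i) else old_unit[i]
            for i in range(SECTORS_PER_UNIT)]

mask = build_mask([0, 3])                 # two dirty sectors, one unit
old = ["d%d" % i for i in range(8)]       # unit as read from the device
cache = {0: "c0", 3: "c3"}
print(bin(mask))                          # 0b1001
print(read_modify_write(old, cache, mask))
```

Coalescing both portions under one mask means one device write instead of two separate read-modify-write cycles on the same managed unit.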
20210357327 | METHOD OF VERIFYING ACCESS OF MULTI-CORE INTERCONNECT TO LEVEL-2 CACHE - The present disclosure provides a method and a system of verifying access by a multi-core interconnect to an L2 cache in order to solve problems of delays and difficulties in locating errors and generating check expectation results. A consistency transmission monitoring circuitry detects, in real time, interactions among a multi-core interconnect system, all single-core processors, an L2 cache and a primary memory, and sends collected transmission information to an L2 cache expectation generator and a check circuitry. The L2 cache expectation generator obtains information from a global memory precise control circuitry according to a multi-core consistency protocol and generates an expected result. The check circuitry is responsible for comparing the expected result with an actual result, thus implementing determination of the multi-core interconnect's access accuracy to the L2 cache without delay. | 2021-11-18 |
20210357328 | DYNAMIC RECONFIGURABLE MULTI-LEVEL CACHE FOR MULTI-PURPOSE AND HETEROGENEOUS COMPUTING ARCHITECTURES - Embodiments of a system for dynamic reconfiguration of cache are disclosed. Accordingly, the system includes a plurality of processors and a plurality of memory modules executed by the plurality of processors. The system also includes a dynamic reconfigurable cache comprising a multi-level cache implementing a combination of an L1 cache, an L2 cache, and an L3 cache. One or more of the L1 cache, the L2 cache, and the L3 cache are dynamically reconfigurable to one or more sizes based at least in part on an application data size associated with an application being executed by the plurality of processors. In an embodiment, the system includes a reconfiguration control and distribution module configured to perform dynamic reconfiguration of the dynamic reconfigurable cache based on the application data size. | 2021-11-18 |
20210357329 | MEMORY SYSTEM - A memory system may include a storage medium, a first cache, a second cache, and a control unit suitable for preferentially or selectively storing, in the first cache, write data corresponding to a write request received from a host device and preferentially or selectively checking the second cache in response to a read request received from the host device. | 2021-11-18 |
20210357330 | LOCK-FREE SHARING OF LIVE-RECORDED CIRCULAR BUFFER RESOURCES - Novel techniques are described for lock-free sharing of a circular buffer. Embodiments can provide shared, lock-free, constant-bitrate access by multiple consumer systems to a live stream of audiovisual information being recorded to a circular buffer by a producer. For example, when a producer system writes a data stream to the circular buffer, the producer system records shared metadata. When a consumer system desires to begin reading from the shared buffer at a particular time, the shared metadata is used to compute a predicted write pointer location and corresponding dirty region around the write pointer at the desired read time. A read pointer of the consumer system can be set to avoid the dirty region, thereby permitting read access to a stable region of the circular buffer without relying on a buffer lock. | 2021-11-18 |
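The read-side computation described above can be sketched in a few lines. This is a hypothetical model: the shared metadata is reduced to a reference write position, its timestamp, and the constant bitrate, and the padding policy is invented. The idea is to predict the producer's current write pointer and place the read pointer just past the surrounding dirty region, with no lock taken.

```python
# Hedged sketch of lock-free read placement in a constant-bitrate ring:
# predict the write pointer from shared metadata, pad it into a "dirty
# region", and start reading just past that region.

def safe_read_pos(buf_size, ref_pos, ref_time, bytes_per_sec, now, pad):
    """Return a read offset that avoids the predicted dirty region."""
    elapsed = now - ref_time
    predicted_write = (ref_pos + int(elapsed * bytes_per_sec)) % buf_size
    # dirty region: [predicted_write - pad, predicted_write + pad)
    return (predicted_write + pad) % buf_size

# producer recorded pos=1000 at t=10.0s, writing 4000 B/s into a 64 KiB ring
print(safe_read_pos(65536, 1000, 10.0, 4000, now=10.5, pad=512))
```

Because the consumer only reads shared metadata and never takes a lock, any number of consumers can attach to the live stream without stalling the producer.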
20210357331 | MONITORING SERVICE FOR PRE-CACHED DATA MODIFICATION - The described technology is generally directed towards detecting and propagating changes that affect information maintained in a cache. Data may be pre-cached in advance of its actual need, however such data can change, including in various different source locations. A change monitoring/signaling service detects relevant changes and publishes change events to downstream listeners, including to a cache population service that updates pre-cache data as needed in view of such data changes. Per-user-specific data also may be pre-cached, such as when a user logs into a data service. | 2021-11-18 |
20210357332 | DYNAMICALLY SIZED REDUNDANT WRITE BUFFER WITH SECTOR-BASED TRACKING - Exemplary methods, apparatuses, and systems include detecting an operation to write dirty data to a cache. The cache is divided into a plurality of channels. In response to the operation, the dirty data is written to a first cache line in the cache, the first cache line being accessed via a first channel. Additionally, a redundant copy of the dirty data is written to a second cache line in the cache. The second cache line serves as a redundant write buffer and is accessed via a second channel, the first and second channels differing from one another. A metadata entry for the second cache line is updated to reference a location of the dirty data in the first cache line. | 2021-11-18 |
20210357333 | Method and Apparatus for Accessing Caches in Clustered Storage Systems - A clustered storage system includes a plurality of storage devices, each of which contributes a portion of its memory to form a global cache of the clustered storage system that is accessible by the plurality of storage devices. Cache metadata for accessing the global cache may be organized in a multi-layered structure. In one embodiment, the multi-layered structure has a first layer including a first address array, and the first address array includes addresses pointing to a plurality of second address arrays in a second layer. Each second address array in the second layer includes addresses, each of which points to data that has been cached in the global cache. | 2021-11-18 |
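The two-layer metadata lookup described above amounts to a simple indirection, sketched below with invented array sizes and integer "addresses" standing in for cache locations.

```python
# Sketch of the two-layer lookup: the first-layer array selects a
# second-layer array, whose entry holds the cached data's address.
# Sizes and values are illustrative only.

L2_SIZE = 4  # entries per second-layer address array (assumed)

def lookup(layer1, key):
    """Resolve a cache key through the two address layers."""
    l2 = layer1[key // L2_SIZE]   # first layer picks a second-layer array
    return l2[key % L2_SIZE]      # second layer holds the data address

layer1 = [[10, 11, 12, 13], [20, 21, 22, 23]]
print(lookup(layer1, 5))  # second array, second entry
```

Splitting the metadata this way lets each storage device hold only the second-layer arrays for its own slice of the global cache while sharing the small first layer.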
20210357334 | SYSTEM AND METHOD FOR CACHE DIRECTORY TCAM ERROR DETECTION AND CORRECTION - Systems and methods are provided for addressing die area inefficiencies associated with the use of redundant ternary content-addressable memory (TCAM) for facilitating error detection and correction. Only a portion of redundant TCAMs (or portions of the same TCAM) are reserved for modified coherency directory cache entries, while remaining portions are available for unmodified coherency directory cache entries. The amount of space reserved for redundant, modified coherency directory cache entries can be programmable and adaptable. | 2021-11-18 |
20210357335 | ARBITRATION CONTROL FOR PSEUDOSTATIC RANDOM ACCESS MEMORY DEVICE - An arbitration control circuit in a pseudo-static random access memory (PSRAM) device includes a set-reset latch circuit receiving a normal access request signal and a refresh access request signal as first and second input signals and generating a first output signal having zero or more signal transitions in response to the order in which the first input signal and the second input signal are asserted. The arbitration control circuit further includes a unidirectional delay circuit applying a unidirectional delay to the first output signal and a D-flip-flop circuit latching the first output signal as data in response to the delayed signal as clock. The D-flip-flop generates a second output signal having a first logical state indicative of granting the normal access request and a second logical state indicative of granting the refresh access request to the memory cells of the PSRAM device. | 2021-11-18 |
20210357336 | EFFICIENT MEMORY BUS MANAGEMENT - A memory controller includes an arbiter which causes streaks of read commands and streaks of write commands over the memory channel. During a streak, the arbiter monitors an indicator of data bus efficiency of the memory channel. Responsive to the indicator showing that data bus efficiency is less than a designated threshold, the arbiter stops the current streak and starts a streak of the other type. | 2021-11-18 |
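The arbiter's streak-switching decision reduces to a small rule, sketched here with an invented threshold and names; the real indicator would be a hardware counter of utilized versus total data-bus cycles.

```python
# Minimal sketch (assumed threshold, not the patent's circuitry) of the
# arbiter behavior: keep issuing the current streak type while measured
# data-bus efficiency stays above a threshold, otherwise switch.

def next_streak_type(current, efficiency, threshold=0.75):
    """current is 'read' or 'write'; efficiency is utilized/total cycles."""
    if efficiency < threshold:
        return "write" if current == "read" else "read"
    return current

print(next_streak_type("read", efficiency=0.9))  # stay in the read streak
print(next_streak_type("read", efficiency=0.6))  # switch to a write streak
```

The rationale: turning the bus around between reads and writes costs idle cycles, so streaks are kept long, but only while they are actually keeping the data bus busy.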
20210357337 | SYSTEM AND METHOD FOR DIRECT MEMORY ACCESS - A method for direct memory access includes: receiving a direct memory access request designating addresses in a data block to be accessed in a memory; randomizing an order in which the addresses of the data block are accessed; and accessing the memory at the addresses in the randomized order. A system for direct memory access is also disclosed. | 2021-11-18 |
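The randomized-order access above can be sketched in a few lines; the RNG and dictionary memory model below are placeholders for illustration, not the patent's circuitry.

```python
# Simple sketch: serve a DMA request by visiting the requested block's
# addresses in a randomized order rather than sequentially.
import random

def dma_access(memory, addresses, seed=None):
    """Access the requested addresses in a randomized order."""
    order = list(addresses)
    random.Random(seed).shuffle(order)   # randomize the access order
    return [(a, memory[a]) for a in order]

mem = {a: a * 2 for a in range(8)}
accesses = dma_access(mem, range(4), seed=1)
print(accesses)  # same data returned, address order randomized
```

Randomizing the access order is a common side-channel countermeasure: the same data is transferred, but the address sequence observable on the bus no longer reveals the block's layout.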
20210357338 | DUAL MEMORY SECURE DIGITAL (SD) CARD AND SYSTEM AND METHOD FOR WIRELESSLY AND AUTOMATICALLY UPDATING DATA IN HOST COMPUTER USING DUAL MEMORY SD CARD - A dual memory Secure Digital (SD) card is provided which allows for remote data updates without disruption to a currently executing program, as well as a system and method that utilize the dual memory SD card. The dual memory SD card may include a primary memory, an independent secondary memory, and a microcontroller or Application Specific Integrated Circuit (ASIC) that can load either memory upon boot up of a host computer. The dual memory SD card may also include a wireless interface, such as Wi-Fi or Bluetooth, in addition to a standard SD pin interface. An automated data synchronization system is provided which allows a new version of data to be uploaded onto the secondary memory of the dual memory SD card while an existing data version is running on that same dual memory SD card and swapped into operation upon the next reboot of a host device. | 2021-11-18 |
20210357339 | EFFICIENT MANAGEMENT OF BUS BANDWIDTH FOR MULTIPLE DRIVERS - Systems and methods are disclosed for efficient management of bus bandwidth among multiple drivers. An example method may comprise: receiving a request from a driver to write data via a bus; reading contents of a random access memory (RAM) at a specified interval of time to determine whether the data written by the driver is accumulated in the RAM; responsive to determining that the data written by the driver is accumulated in the RAM, determining whether a bandwidth of the bus satisfies a bandwidth condition; and responsive to determining that the bandwidth satisfies the bandwidth condition, forwarding, via the bus, a portion of the data written by the driver in the RAM to a device memory of a device. | 2021-11-18 |
20210357340 | Gateway Processing - A gateway for use in a computing system to interface a host with a subsystem acting as a work accelerator to the host, the gateway having a streaming engine for controlling the streaming of batches of data into and out of the gateway in response to pre-compiled data exchange synchronisation points attained by the subsystem, wherein the streaming of batches of data is selectively via at least one of an accelerator interface, a data connection interface, a gateway interface and a memory interface, wherein the streaming engine is configured to perform data preparation processing of the batches of data streamed into the gateway prior to said batches of data being streamed out of the gateway, wherein the data preparation processing comprises at least one of: data augmentation; decompression; and decryption. | 2021-11-18 |
20210357341 | PRIORITY SCHEDULING IN QUEUES TO ACCESS CACHE DATA IN A MEMORY SUB-SYSTEM - A processing device in a memory sub-system generates a fill operation to store data from a memory device at a cache of a memory sub-system, assigns a first priority indicator to the fill operation associated with the data, and assigns a second priority indicator to a read operation associated with a request to read the data from the memory sub-system. The processing device further determines a schedule of executing the fill operation and the read operation based on the first priority indicator and the second priority indicator and executes the fill operation and the read operation based on the determined schedule. | 2021-11-18 |
20210357342 | INTERRUPT MIGRATION - Apparatuses, methods, program products, and systems are presented for interrupt migration in connection with migration of a logical partition. | 2021-11-18 |
20210357343 | COORDINATING OPERATIONS OF MULTIPLE COMMUNICATION CHIPS VIA LOCAL HUB DEVICE - Embodiments relate to coordinating the operations of subsystems in a communication system of an electronic device where a coexistence hub device monitors the state information transmitted as coexistence messages over one or more multi-drop buses, processes the monitored coexistence messages and sends out control messages as coexistence messages to other systems on chips (SOCs). The coexistence hub device can also update the operations of the communication system. The coexistence hub device may receive an operation policy from a central processor and may execute the operation policy without further coordination of the central processor. The coexistence hub device broadcasts the control messages as coexistence messages according to the executed operation policy. | 2021-11-18 |