40th week of 2021 patent application highlights part 46 |
Patent application number | Title | Published |
20210311740 | CIRCULAR SHADOW STACK IN AUDIT MODE - Performing shadow stack functionality for a thread in an audit mode includes initiating execution of a thread at the processor. Execution of the thread includes initiating execution of executable code of an application binary as part of the thread and enabling shadow stack functionality for the thread in an audit mode. Based at least on the execution of the thread in the audit mode, at least a portion of the shadow stack is enabled to be a circular stack. In response to determining that usage of the shadow stack has reached a defined threshold, one or more currently used entries of the shadow stack are overwritten, preventing the shadow stack from overflowing. | 2021-10-07 |
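The circular-overwrite behavior described in this abstract can be sketched as a toy model in Python. This is illustrative only — the class, the deque-based buffer, and the addresses are assumptions for demonstration, not the patented hardware design:

```python
from collections import deque

class CircularShadowStack:
    """Toy model of a shadow stack whose tail becomes circular: once the
    capacity threshold is reached, the oldest return address is
    overwritten instead of letting the stack overflow."""

    def __init__(self, capacity):
        # a deque with maxlen drops the oldest entry when a push exceeds capacity
        self.entries = deque(maxlen=capacity)

    def push(self, return_address):
        self.entries.append(return_address)

    def pop(self):
        return self.entries.pop()

stack = CircularShadowStack(capacity=3)
for addr in (0x10, 0x20, 0x30, 0x40):
    stack.push(addr)         # the fourth push silently overwrites 0x10
```

In audit mode the point is exactly this trade-off: the oldest entries are sacrificed so the thread keeps running rather than faulting on shadow-stack overflow.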
20210311741 | PROCESSOR HAVING READ SHIFTER AND CONTROLLING METHOD USING THE SAME - A processor that includes a register file, a read shifter, a decode unit and a plurality of functional units is introduced. The register file includes a read port. The read shifter includes a plurality of shifter entries and is configured to shift out a shifter entry among the plurality of shifter entries every clock cycle. Each of the plurality of shifter entries is associated with a clock cycle and each of the plurality of shifter entries comprises a read value that indicates an availability of the read port of the register file for a read operation in the clock cycle. The decode unit is coupled to the read shifter and is configured to decode and issue an instruction based on the read values included in the plurality of shifter entries of the read shifter. The plurality of functional units is coupled to the decode unit and the register file and is configured to execute the instruction issued by the decode unit and perform the read operation to the read port of the register file. | 2021-10-07 |
20210311742 | AN APPARATUS AND METHOD FOR PREDICTING SOURCE OPERAND VALUES AND OPTIMIZED PROCESSING OF INSTRUCTIONS - An apparatus and method are provided for processing instructions. The apparatus has execution circuitry for executing instructions, where each instruction requires an associated operation to be performed using one or more source operand values in order to produce a result value. Issue circuitry is used to maintain a record of pending instructions awaiting execution by the execution circuitry, and prediction circuitry is used to produce a predicted source operand value for a chosen pending instruction. Optimisation circuitry is then arranged to detect an optimisation condition for the chosen pending instruction when the predicted source operand value is such that, having regard to the associated operation for the chosen pending instruction, the result value is known without performing the associated operation. In response to detection of the optimisation condition, an optimisation operation is implemented instead of causing the execution circuitry to perform the associated operation in order to execute the chosen pending instruction. This can lead to significant performance and/or power consumption improvements. | 2021-10-07 |
20210311743 | MICROPROCESSOR HAVING SELF-RESETTING REGISTER SCOREBOARD - A microprocessor using a counter in a scoreboard is introduced to handle data dependency. The microprocessor includes a register file having a plurality of registers mapped to entries of the scoreboard. Each entry of the scoreboard has a counter that tracks the data dependency of each of the registers. The counter decrements every clock cycle until it resets itself when it counts down to 0. With the implementation of the counter in the scoreboard, the instruction pipeline may be managed according to the number of clock cycles a previously issued instruction takes to access the register, which is recorded in the counter of the scoreboard. | 2021-10-07 |
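The self-resetting counter idea can be modeled in a few lines of Python (a behavioral sketch with invented names, not the microprocessor's actual logic):

```python
class Scoreboard:
    """Toy register scoreboard: each register entry holds a counter of the
    clock cycles remaining before the instruction writing that register
    completes; a counter that reaches 0 has 'reset itself'."""

    def __init__(self, num_registers):
        self.counters = [0] * num_registers

    def issue(self, reg, latency):
        # record how many cycles the issued instruction needs the register
        self.counters[reg] = latency

    def tick(self):
        # every clock cycle each nonzero counter decrements; no explicit
        # clear step is needed once it counts down to 0
        self.counters = [c - 1 if c > 0 else 0 for c in self.counters]

    def ready(self, reg):
        # a dependent instruction may issue only when the counter is 0
        return self.counters[reg] == 0

sb = Scoreboard(4)
sb.issue(2, latency=3)       # a 3-cycle instruction writes r2
stalled = not sb.ready(2)    # dependent reads of r2 must wait
for _ in range(3):
    sb.tick()
```

After three ticks the counter for r2 has counted down to 0 on its own, so a dependent instruction can issue without any separate clearing broadcast.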
20210311744 | Microprocessor with Multistep-Ahead Branch Predictor - A microprocessor with a multistep-ahead branch predictor is shown. The branch predictor is coupled to an instruction cache and has an N-stage pipelined architecture, which is configured to perform branch prediction to control the instruction fetching of the instruction cache. The branch predictor performs branch prediction for (N-1) instruction-address blocks in parallel, wherein the (N-1) instruction-address blocks include a starting instruction-address block and (N-2) subsequent instruction-address blocks. The branch predictor is thereby ahead of branch prediction of the starting instruction-address block. The branch predictor stores reference information about branch prediction in at least one memory and performs a parallel search of the memory for the branch prediction of the (N-1) instruction-address blocks. | 2021-10-07 |
20210311745 | COMPUTER RESOURCE MANAGEMENT BASED ON PRIORITIZATION OF COMPUTER EXECUTABLE EVENTS - Systems and methods directed to managing computer resource allocation by monitoring signals indicating demand for services utilizing computer resources are described. A method includes maintaining, for each first event of first events, historical registration data and respective parameter values of the first event and identifying, for a second event having an open registration status, respective parameter values of the second event, and registration data for the second event. The method includes computing a similarity score between the second event and each first event of the plurality of first events, based on the respective parameter values of the first event and the second event and the registration data of the second event and the historical registration data of the first event, generating, for the second event, a projected number of entities based on determined information and determining a ranking of the second event. | 2021-10-07 |
20210311746 | METHOD AND DEVICE FOR RECOGNIZING APPARATUS AND COMPUTER READABLE STORAGE MEDIUM AND PROGRAM - A method and device for recognizing an apparatus, a computer-readable storage medium, and a program are provided. In an embodiment, the method includes reading a combined sequence table including candidate device information, candidate communication parameters and historical occurrence numbers of combinations of the candidate device information and the candidate communication parameters for each candidate device information; determining priority levels of the combinations according to the historical occurrence numbers; and determining a current combination according to the priority levels, sending a message to the apparatus to be recognized by using a candidate communication parameter in the current combination, and determining whether the current combination is the correct combination capable of establishing a communication with the apparatus to be recognized according to a feedback from the apparatus to be recognized. The recognition efficiency may be improved effectively and the recognition time may be shortened significantly through the method. | 2021-10-07 |
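The priority-ordered probing this abstract describes can be sketched as follows. The table rows, field layout, and `probe` callback are all invented for illustration; the real method operates on device communication parameters:

```python
# Toy combined-sequence table: each row pairs candidate device info with
# candidate communication parameters and the combination's historical count.
combined_table = [
    ("plc-a", "baud=9600",  12),
    ("plc-a", "baud=19200",  3),
    ("plc-b", "baud=9600",   7),
]

def combinations_by_priority(table):
    # higher historical occurrence -> higher priority -> tried first
    return sorted(table, key=lambda row: row[2], reverse=True)

def recognize(table, probe):
    """probe(device, params) stands in for sending a message and checking
    the feedback; trying frequent combinations first shortens the average
    recognition time."""
    for device, params, _count in combinations_by_priority(table):
        if probe(device, params):
            return device, params
    return None
```

Sorting by historical frequency is the whole trick: the expected number of probes drops when the most common combinations are attempted first.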
20210311747 | SERVER WITH SYSTEM SETTING DATA SYNCHRONIZATION FUNCTION - A local server is provided. The local server includes a BIOS memory and control circuit. The BIOS memory stores a BIOS code and an actual setting data. The control circuit reads a current setting data corresponding to the local server from a cloud server in a POST procedure of the local server, and compares the actual setting data with the current setting data, and when the actual setting data does not match the corresponding current setting data, the control circuit sends the actual setting data to the cloud server, so that the actual setting data overwrites the current setting data in the cloud server. | 2021-10-07 |
20210311748 | SYSTEM AND METHOD FOR IDENTIFYING, INDEXING, AND NAVIGATING TO DEEP STATES OF MOBILE APPLICATIONS - A mobile application development system includes a developer portal that receives an application from a developer and provides a routing library to the developer to augment the application. An offline analysis system analyzes the application to (i) determine a set of activities that a handler within the application is programmed to resume in response to respective resumption requests from a host operating system and (ii) determine parameters for each of the activities. The offline analysis system generates a set of links that each corresponds to a respective one of the activities. The routing library, installed as part of the augmented application onto a user device, receives a link, from the user device's operating system, that identifies a first activity. The routing library includes instructions for generating a first resumption request based on parameters corresponding to the first activity and transmitting the first resumption request to the augmented application's handler. | 2021-10-07 |
20210311749 | RESOURCE MANAGEMENT WITH DYNAMIC RESOURCE POLICIES - A method and apparatus of a device for resource management by using a hierarchy of resource management techniques with dynamic resource policies is described. The device terminates several misbehaving application programs when available memory on the device is running low. Each of those misbehaving application programs consumes more memory space than a memory consumption limit assigned to the application program. If available memory on the device is still low after terminating those misbehaving application programs, the device further sends memory pressure notifications to all application programs. If available memory on the device is still running low after sending the memory pressure notifications, the device further terminates background, idle, and suspended application programs. The device further terminates foreground application programs when available memory on the device is still low after terminating the background, idle, and suspended application programs. | 2021-10-07 |
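The four-stage escalation described above can be modeled as a single function. This is a hedged sketch — the dict fields, the megabyte units, and the assumption that notified apps shed about a quarter of their memory are all invented for illustration:

```python
def handle_memory_pressure(apps, free_mb, low_mb):
    """Escalating reclamation: misbehaving apps first, then pressure
    notifications, then background/idle/suspended apps, then foreground."""

    def kill(app):
        nonlocal free_mb
        if app["alive"]:
            app["alive"] = False
            free_mb += app["usage_mb"]

    # Stage 1: terminate apps exceeding their assigned consumption limit
    for app in apps:
        if app["usage_mb"] > app["limit_mb"]:
            kill(app)
    if free_mb >= low_mb:
        return free_mb

    # Stage 2: notify all surviving apps to voluntarily release memory
    for app in apps:
        if app["alive"]:
            released = app["usage_mb"] // 4      # assumption: apps shed ~25%
            app["usage_mb"] -= released
            free_mb += released
    if free_mb >= low_mb:
        return free_mb

    # Stage 3: terminate background, idle, and suspended apps
    for app in apps:
        if app["alive"] and app["state"] in ("background", "idle", "suspended"):
            kill(app)
    if free_mb >= low_mb:
        return free_mb

    # Stage 4: last resort -- terminate foreground apps
    for app in apps:
        if app["alive"] and app["state"] == "foreground":
            kill(app)
    return free_mb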
20210311750 | SYSTEM FOR FACILITATING ADVANCED CODING TO INDIVIDUALS WITH LIMITED DEXTERITY - A method and system for improving accessibility of advanced coding for individuals with limited dexterity, using a personalized screen touch option as a replacement for standard typing, tailored to the user's functional mobility ability. The system and method involve a touch-screen coding platform and a personalized screen touch option corresponding to the user's functional mobility ability, which allows the user to write in their preferred coding language at a professional level. All actions on the coding platform may be performed with finger taps, allowing individuals with limited dexterity to perform complex coding. | 2021-10-07 |
20210311751 | MACHINE-LEARNING MODELS APPLIED TO INTERACTION DATA FOR DETERMINING INTERACTION GOALS AND FACILITATING EXPERIENCE-BASED MODIFICATIONS TO INTERFACE ELEMENTS IN ONLINE ENVIRONMENTS - A method includes identifying interaction data associated with user interactions with a user interface of an interactive computing environment. The method also includes computing goal clusters of the interaction data based on sequences of the user interactions and performing inverse reinforcement learning on the goal clusters to return rewards and policies. Further, the method includes computing likelihood values of additional sequences of user interactions falling within the goal clusters based on the policies corresponding to each of the goal clusters and assigning the additional sequences to the goal clusters with greatest likelihood values. Furthermore, the method includes computing interface experience metrics of the additional sequences using the rewards and the policies corresponding to the goal clusters of the additional sequences and transmitting the interface experience metrics to the online platform. The interface experience metrics are usable for changing arrangements of interface elements to improve the interface experience metrics. | 2021-10-07 |
20210311752 | ELECTRONIC APPARATUS AND OPERATING METHOD THEREOF - An electronic device and an operating method are provided. The electronic device includes a display and a processor. The processor may be configured to display a first-mode launch screen for an application on the display based on an application launching request in a state where a lock function is set, switch the first-mode launch screen displayed on the display to a second-mode launch screen of the application based on a mode switching request, and determine whether to proceed with an authentication operation based on an operation selected from the second-mode launch screen. | 2021-10-07 |
20210311753 | METHODS AND SYSTEMS FOR CONTENT GENERATION VIA TEMPLATES WITH RULES AND/OR TRIGGERS - Adding electronic content by a user within the prior art requires that the user format every item or use a template that predetermines the position and type of content added. However, it would be beneficial to provide users with templates which provide rules which are applied to the content as it is added based upon aspects of the template and/or data associated with the content. It would be beneficial if such templates automatically associated format elements, icons, other display elements, sourced additional content etc. based upon aspects such as the region of the template the content is added or data associated with the content. Further, where rendering is based upon data associated with the content, if the user modifies the rendered content then these changes should be beneficially reflected in the data associated with the content such that a subsequent rendering reflects the user adjustments, etc. | 2021-10-07 |
20210311754 | EMULATING SCRATCHPAD FUNCTIONALITY USING CACHES IN PROCESSOR-BASED DEVICES - Emulating scratchpad functionality using caches in processor-based devices is disclosed. In one aspect, each cache line within a cache of a processor-based device is associated with a corresponding scratchpad indicator indicating whether the corresponding cache line is exempt from the replacement policy used to select a cache line for eviction. Upon receiving data that corresponds to a memory access operation indicated as requiring scratchpad functionality, the cache controller stores the data in a cache line of the cache, and then sets the corresponding scratchpad indicator for the cache line. Subsequently, the cache controller emulates scratchpad functionality by allowing conventional memory read and write operations to be performed on the cache line, but does not apply its replacement policy to that cache line when selecting a cache line as a candidate for eviction. In this manner, the cache line may remain in the cache for use as scratchpad memory by software. | 2021-10-07 |
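A pinned-line cache of this kind can be sketched with an LRU model in which scratchpad-marked lines are simply skipped by the replacement policy. The class, addresses, and `OrderedDict`-based LRU are illustrative assumptions, not the patented cache controller:

```python
from collections import OrderedDict

class ScratchpadCache:
    """Toy LRU cache where lines whose scratchpad indicator is set are
    exempt from eviction, so they behave like scratchpad memory."""

    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.lines = OrderedDict()           # addr -> (data, scratchpad_flag)

    def write(self, addr, data, scratchpad=False):
        if addr not in self.lines and len(self.lines) >= self.num_lines:
            self._evict()
        # preserve an existing scratchpad marking on ordinary rewrites
        keep = scratchpad or (addr in self.lines and self.lines[addr][1])
        self.lines[addr] = (data, keep)
        self.lines.move_to_end(addr)

    def _evict(self):
        # replacement policy: evict the least-recently-used line that is
        # NOT marked as scratchpad
        for addr, (_, pinned) in self.lines.items():
            if not pinned:
                del self.lines[addr]
                return
        raise RuntimeError("all lines pinned as scratchpad")

    def read(self, addr):
        self.lines.move_to_end(addr)
        return self.lines[addr][0]
```

Reads and writes to a pinned line work exactly as for any other cache line; the only special-casing is in `_evict`, mirroring the abstract's claim that conventional accesses proceed normally while eviction passes the line over.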
20210311755 | SCHEMA-BASED CLASSIFICATION OF DATA ON A SYSTEM - A virtualized computing system includes a plurality of hosts, each of which is configured with a virtualization software for supporting execution of virtual machines therein. A method of managing a configuration of a system service in the virtualized computing system includes: upon receiving an API call to operate on a configuration object for the system service that is backed by a configuration schema of the system service, updating a database in accordance with the configuration schema to update or store the configuration of the system service, so that the system service executes with the configuration stored in the database. | 2021-10-07 |
20210311756 | GENERATING AND PRESERVING DEFAULT CONFIGURATIONS OF A SYSTEM - A virtualized computing system includes a plurality of hosts, each of which is configured with a virtualization software for supporting execution of virtual machines therein. A method of managing a configuration of a system service in the virtualized computing system includes upon receiving an application program interface (API) call to operate on a configuration object for a system service that is backed by a configuration schema of the system service, updating a database in accordance with the configuration schema to update or store the configuration of the system service. The system service executes with a configuration that is a combination of a default configuration that is defined in a default configuration schema of the system service and the configuration stored in the database. | 2021-10-07 |
20210311757 | INTROSPECTION INTO WORKLOADS RUNNING WITHIN VIRTUAL MACHINES - Introspection into containers running in virtual machines (VMs) that are instantiated on a host computer is achieved. A method of processing an introspection command for a container, running in a virtual machine, is carried out by a VM management process, and includes the steps of receiving a first request that is formulated according to a first protocol, e.g., transmission control protocol, and includes the introspection command, identifying the virtual machine from the first request, formulating a second request that includes the introspection command, according to a second protocol (e.g., virtual socket protocol), and transmitting the second request to a container management process running in the virtual machine for the container management process to execute the introspection command. | 2021-10-07 |
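The protocol-translation step at the heart of this method can be sketched as a small dispatcher. The dict-shaped requests and the callable standing in for a vsock channel are assumptions for illustration:

```python
def handle_introspection_request(tcp_request, vm_channels):
    """Toy VM-management handler: receive an introspection command over a
    first protocol, identify the target VM from the request, then reissue
    the same command over a second, VM-internal channel."""
    vm_id = tcp_request["vm"]                   # identify the VM from request
    command = tcp_request["command"]
    channel = vm_channels[vm_id]                # stand-in for a vsock connection
    second_request = {"proto": "vsock", "command": command}
    # the container management process inside the VM executes the command
    return channel(second_request)
```

The command itself passes through unchanged; only the transport is reformulated, which is why the in-VM container management process can execute it directly.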
20210311758 | MANAGEMENT OF A CONTAINER IMAGE REGISTRY IN A VIRTUALIZED COMPUTER SYSTEM - A container image registry is managed in a virtualized computing system. The container image registry manages container images for deploying containers in a host cluster, the host cluster includes hosts and a virtualization layer executing on hardware platforms of the hosts, and the virtualization layer supports execution of virtual machines (VMs). The method includes: creating a namespace for an orchestration control plane integrated with the virtualization layer, the namespace including constraints for deploying workloads in the VMs; invoking, by a registry service in response to creation of the namespace, a management application programming interface (API) of the container image registry to create a project for the container images; and invoking, by the registry service, the management API of the container image registry to both add members to the project, and assign image registry roles to the members, in response to bindings of users and namespace roles derived from the constraints. | 2021-10-07 |
20210311759 | EPHEMERAL STORAGE MANAGEMENT FOR CONTAINER-BASED VIRTUAL MACHINES - A virtualized computing system includes: a host cluster including hosts executing a virtualization layer on hardware platforms thereof, the virtualization layer configured to support execution of virtual machines (VMs), the VMs including a pod VM, the pod VM including a container engine configured to support execution of containers in the pod VM, the pod VM including a first virtual disk attached thereto; and an orchestration control plane integrated with the virtualization layer, the orchestration control plane including a master server in communication with a pod VM controller, the pod VM controller configured to execute in the virtualization layer external to the VMs and cooperate with a pod VM agent in the pod VM, the pod VM agent generating root directories for the containers in the pod VM, each of the root directories comprising a union of a read/write ephemeral layer stored on the first virtual disk and a read-only layer. | 2021-10-07 |
20210311760 | SOFTWARE-DEFINED NETWORK ORCHESTRATION IN A VIRTUALIZED COMPUTER SYSTEM - An example method of orchestrating a software-defined (SD) network layer of a virtualized computing system is described, the virtualized computing system including a host cluster, a virtualization management server, and a network management server each connected to a physical network, the host cluster having hosts and a virtualization layer executing on hardware platforms of the hosts. The method includes receiving, at the virtualization management server, a declarative specification describing a proposed state of an SD network for the host cluster, deploying, by the virtualization management server, virtualized infrastructure components in the host cluster in response to the proposed state in the declarative specification, and deploying, by the virtualization management server in cooperation with the network management server, logical network services supported by the virtualized infrastructure components in response to the proposed state in the declarative specification. | 2021-10-07 |
20210311761 | METHOD FOR ACCESSING APPLICATION LOGS WITHIN VIRTUAL MACHINES BASED ON OPERATOR-DEFINED CRITERIA - Log information is retrieved from a log of a container running in a virtual machine in response to a request for the log information, by accessing a virtual disk of the virtual machine, reading the log of the container from the virtual disk and preparing the requested log information from the log, and transmitting the requested log information to a virtual machine (VM) management process running in a host computer of the virtual machine for the VM management process to forward to a requestor of the log information. Alternatively, log data of the container running in the virtual machine may be streamed to the VM management process over a virtual socket communication channel that is established between the virtual machine and the VM management process. | 2021-10-07 |
20210311762 | GUEST CLUSTER DEPLOYED AS VIRTUAL EXTENSION OF MANAGEMENT CLUSTER IN A VIRTUALIZED COMPUTING SYSTEM - An example virtualized computing system includes: a host cluster having hosts and a virtualization layer executing on hardware platforms of the hosts, the virtualization layer supporting execution of virtual machines (VMs); an orchestration control plane integrated with the virtualization layer, the orchestration control plane including a master server executing in a first VM of the VMs; guest cluster infrastructure software (GCIS) executing in the master server, the GCIS configured to create a set of objects defining a container orchestration cluster, and manage lifecycles of second VMs of the VMs based on state of the set of objects; and guest software executing in the second VMs to implement the container orchestration cluster as a guest cluster of the host cluster, the guest software having components that interface with the GCIS. | 2021-10-07 |
20210311763 | SOFTWARE COMPATIBILITY CHECKING FOR MANAGED CLUSTERS IN A VIRTUALIZED COMPUTING SYSTEM - An example method of checking compatibility of a guest cluster executing as a virtual extension of a host cluster having an orchestration control plane managing the guest cluster, the host cluster being part of a software defined data center (SDDC), is described. The method includes: receiving, at the orchestration control plane, a guest cluster infrastructure software (GCIS) compatibility document that specifies what a GCIS of the orchestration control plane requires and offers; receiving a request for a compatibility check on the guest cluster with respect to the GCIS; obtaining, at the orchestration control plane in response to the request, an SDDC compatibility document for the SDDC and a guest cluster compatibility document for the guest cluster; computing, at the orchestration control plane, the compatibility check in response to the GCIS compatibility document, the SDDC compatibility document, and the guest cluster compatibility document; and transmitting a result of the compatibility check from the orchestration control plane to a virtual infrastructure (VI) control plane of the SDDC. | 2021-10-07 |
20210311764 | CONTAINER ORCHESTRATION IN A CLUSTERED AND VIRTUALIZED COMPUTER SYSTEM - An example virtualized computing system includes a host cluster having a virtualization layer directly executing on hardware platforms of hosts, the virtualization layer supporting execution of virtual machines (VMs), the VMs including pod VMs, the pod VMs including container engines supporting execution of containers in the pod VMs; an orchestration control plane integrated with the virtualization layer, the orchestration control plane including a master server and pod VM controllers, the pod VM controllers executing in the virtualization layer external to the VMs, the pod VM controllers configured as agents of the master server to manage the pod VMs; pod VM agents, executing in the pod VMs, configured as agents of the pod VM controllers to manage the containers executing in the pod VMs. | 2021-10-07 |
20210311765 | OPERATIONAL HEALTH OF AN INTEGRATED APPLICATION ORCHESTRATION AND VIRTUALIZED COMPUTING SYSTEM - An example method of determining operational health of a virtualized computing system includes: monitoring, at a service executing in the virtualized computing system, a current configuration of a software-defined data center (SDDC) with respect to a desired state, the desired state including: a host cluster having hosts executing a virtualization layer thereon; a software-defined (SD) network deployed in the host cluster; shared storage accessible by the host cluster; a virtual infrastructure (VI) control plane managing the host cluster, the SD network, and the shared storage; and an orchestration control plane integrated with the virtualization layer and the VI control plane; determining a configuration status for the current configuration of the SDDC; monitoring, at the service, operational status of an application management system executing on the SDDC having the current configuration; and determining at least one measure of the operational health in response to the configuration status and the operational status. | 2021-10-07 |
20210311766 | VALIDATION AND PRE-CHECK OF COMBINED SOFTWARE/FIRMWARE UPDATES - An image of a virtualization software and firmware in a plurality of hosts are upgraded by performing the steps of: validating a desired image of the virtualization software by extracting dependencies and conflicts defined in metadata of all payloads of the desired image of the virtualization software, and confirming there are no violations of the extracted dependencies and conflicts; performing a pre-check of the desired image of the virtualization software against a current image of the virtualization software and a pre-check of the desired version of the firmware against a current version of the firmware; and upon determining from results of the pre-check that the virtualization software can be upgraded to the desired image and the firmware can be upgraded to the desired version, upgrading the current image of the virtualization software to the desired image and upgrading the current version of the firmware to the desired version. | 2021-10-07 |
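The dependency/conflict validation step can be sketched generically. The payload metadata fields (`name`, `depends`, `conflicts`) are invented stand-ins for whatever the real image metadata contains:

```python
def validate_image(payloads):
    """Toy validation of a desired image: extract each payload's declared
    dependencies and conflicts and confirm none are violated."""
    names = {p["name"] for p in payloads}
    for p in payloads:
        for dep in p.get("depends", []):
            if dep not in names:
                return False, f"{p['name']} missing dependency {dep}"
        for con in p.get("conflicts", []):
            if con in names:
                return False, f"{p['name']} conflicts with {con}"
    return True, "ok"
```

Only after this validation (and the subsequent pre-check against the currently running versions) would an upgrade of software and firmware proceed together.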
20210311767 | STORAGE SYSTEM, STORAGE DEVICE THEREFOR, AND OPERATING METHOD THEREOF - A storage system includes a plurality of storage devices coupled to at least one host through a network and configured to form a virtual network of virtual machines generated when the plurality of storage devices are coupled to the network, wherein each of the plurality of storage devices allocates memory resources to the virtual machines and shares device information for the plurality of storage devices through the virtual machines and wherein the host: selects from the plurality of storage devices a main storage device, and transmits a storage pool generation condition to the main storage device that identifies a number of storage pools and a capacity of each storage pool, wherein the main storage device generates at least one storage pool that satisfies the storage pool generation condition using the memory resources allocated to each of the virtual machines. | 2021-10-07 |
20210311768 | SWITCHING BETWEEN MASTER AND STANDBY CONTAINER SYSTEMS - A method and an apparatus for switching between master and standby container systems are provided. The method may be applied to a first container system deployed in a physical network device and functioning as a master container system, and a shared file is created in the physical network device. The method includes: receiving a system upgrade instruction input by a user; in response to the system upgrade instruction, sending container system data to a second container system functioning as a standby container system corresponding to the master container system in the physical network device; when receiving a data backup completion notification from the second container system, writing a master/standby container system switching notification into the shared file, so that the second container system switches to function as the master container system when detecting the master/standby container system switching notification in the shared file. | 2021-10-07 |
20210311769 | JOINT PLACEMENT AND CHAINING OF VIRTUAL NETWORK FUNCTIONS FOR VIRTUALIZED SYSTEMS BASED ON A SCALABLE GENETIC ALGORITHM - A system performs joint placement and chaining of virtual network functions (VNFs) based on a genetic algorithm in response to a request for virtual network services, including an in-line service. The request includes a description of a virtual network of VNFs and virtual links connecting the VNFs. A description of a physical network including servers and physical links is provided. Each chromosome in a population encodes a mapping between the virtual links enumerated to form a locus and a corresponding sequence of server pairs. Each chromosome is evaluated against objective functions subject to constraints to identify a chromosome as a solution. The VNFs are placed on the servers according to the mapping encoded in the identified chromosome. According to the mapping, each VNF is mapped to one of the servers and each virtual link is mapped to a path composed of one or more of the physical links. | 2021-10-07 |
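The chromosome encoding described here — virtual links enumerated as a locus, each gene a server pair — can be sketched with a deliberately tiny genetic algorithm. Everything below (the single co-location objective, the mutation scheme, the server and VNF names) is an illustrative assumption; the patent describes multiple objective functions and constraints:

```python
import random

def random_chromosome(num_links, servers, rng):
    # one (server, server) gene per virtual link, laid out in locus order
    return [(rng.choice(servers), rng.choice(servers)) for _ in range(num_links)]

def fitness(chromosome):
    # toy single objective: count virtual links whose endpoints map to
    # different physical servers (a stand-in for path cost on physical links)
    return sum(1 for src, dst in chromosome if src != dst)

def evolve(virtual_links, servers, generations=60, pop_size=20, seed=7):
    rng = random.Random(seed)
    population = [random_chromosome(len(virtual_links), servers, rng)
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)              # lower cost is fitter
        survivors = population[: pop_size // 2]
        children = []
        for parent in survivors:
            child = list(parent)
            i = rng.randrange(len(child))         # point mutation: remap one link
            child[i] = (rng.choice(servers), rng.choice(servers))
            children.append(child)
        population = survivors + children
    return min(population, key=fitness)

# virtual network: fw -> nat -> lb, to be placed on three physical servers
best = evolve([("fw", "nat"), ("nat", "lb")], ["s1", "s2", "s3"])
```

With this encoding, decoding a solution is direct: each gene of the winning chromosome tells you which server pair hosts the endpoints of the corresponding virtual link, and a path over physical links is then chosen between them.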
20210311770 | METHOD FOR IMPLEMENTING SMART CONTRACT BASED ON BLOCKCHAIN - A method for implementing a smart contract based on a blockchain, a device, and a medium are provided. The detailed implementation includes: creating a resident process for a resident smart contract and creating a virtual machine by the resident process when achieving an enable condition of the resident smart contract; loading codes of the resident smart contract into a memory through the virtual machine; receiving a data access request generated by a normal smart contract within a block generation cycle through an across-contract calling interface of the virtual machine; and executing the resident smart contract through the virtual machine to process the data access request and returning a data access result to the normal smart contract. | 2021-10-07 |
20210311771 | PERFORMANCE OF CONTAINERS - A method, computer program product, and a system where a processor(s), in a computing environment comprised of multiple containers comprising modules, parses a module originating from a given container in the computing environment by copying various identifying aspects of a module file comprising the module and calculating, based on contents of the module file, a digest value as a unique identifier for the module file. The processor(s) stores the various identifying aspects of the module file and the digest value in one or more memory objects, wherein the one or more memory objects comprise a module content map to correlate the unique identifier for the module file with the contents of the module, images in the module file with the unique identifier for the module file, and layers with the unique identifier for the module file. | 2021-10-07 |
20210311772 | PROVIDING SERVICES WITH GUEST VM MOBILITY - Some embodiments provide novel methods for performing services for machines operating in one or more datacenters. For instance, for a group of related guest machines (e.g., a group of tenant machines), some embodiments define two different forwarding planes: (1) a guest forwarding plane and (2) a service forwarding plane. The guest forwarding plane connects to the machines in the group and performs L2 and/or L3 forwarding for these machines. The service forwarding plane (1) connects to the service nodes that perform services on data messages sent to and from these machines, and (2) forwards these data messages to the service nodes. In some embodiments, the guest machines do not connect directly with the service forwarding plane. For instance, in some embodiments, each forwarding plane connects to a machine or service node through a port that receives data messages from, or supplies data messages to, the machine or service node. In such embodiments, the service forwarding plane does not have a port that directly receives data messages from, or supplies data messages to, any guest machine. Instead, in some such embodiments, data associated with a guest machine is routed to a port proxy module executing on the same host computer, and this other module has a service plane port. This port proxy module in some embodiments indirectly can connect more than one guest machine on the same host to the service plane (i.e., can serve as the port proxy module for more than one guest machine on the same host). | 2021-10-07 |
20210311773 | Efficient Condition Variables via Delegated Condition Evaluation - Efficient use of condition variables for communication between threads of a multi-threaded application may be ensured using delegated condition evaluation. A thread in a runnable state may request to wait for a change to a condition, the request including instructions that, when executed, return a value indicating if the wait is to be terminated. The thread may then be placed in a non-runnable state waiting for a change to the condition, and upon determining a change to the condition, the instructions are executed to receive the value indicating if the wait is to be terminated. If the value indicates that the wait is to be terminated, the thread is placed in a runnable state. If the value indicates that the wait is not to be terminated, the thread remains in a non-runnable state. | 2021-10-07 |
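The delegated-evaluation flow this abstract describes — a waiter registers instructions that return whether the wait should end, and those instructions are run on each change to the condition — closely resembles Python's `threading.Condition.wait_for`, which re-evaluates a caller-supplied predicate on every wakeup. A minimal sketch under that analogy (class and method names are made up for illustration):

```python
import threading

class DelegatedWait:
    """Toy sketch of delegated condition evaluation: the waiting thread
    supplies a predicate; each change notification runs the predicate to
    decide whether the wait terminates or the thread stays non-runnable."""

    def __init__(self):
        self._cond = threading.Condition()
        self._value = 0

    def wait_until(self, predicate):
        # The thread blocks (non-runnable); wait_for re-runs the delegated
        # predicate on each notification and returns once it is true.
        with self._cond:
            self._cond.wait_for(lambda: predicate(self._value))
            return self._value

    def publish(self, value):
        # A change to the condition wakes waiters so their predicates run.
        with self._cond:
            self._value = value
            self._cond.notify_all()

dw = DelegatedWait()
results = []
t = threading.Thread(
    target=lambda: results.append(dw.wait_until(lambda v: v >= 3)))
t.start()
dw.publish(1)   # predicate false: the waiter remains non-runnable
dw.publish(3)   # predicate true: the waiter is made runnable
t.join()
print(results)  # [3]
```

Because `wait_for` checks the predicate before first sleeping, the sketch is safe even if the notifier runs before the waiter blocks.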
20210311774 | Contextual Application Switch Based on User Behaviors - Methods and systems for using machine learning to provide application recommendations are described herein. A computing device may capture a first edge frame of a first application displayed at the computing device. The computing device may apply machine learning to the first edge frame of the first application to identify a context tag. The computing device may identify applications subsequently accessed in a sequential manner after accessing the first application in a context corresponding to the identified context tag, where each of the applications corresponds to a context level score. The computing device may identify a second application, which may have a higher context level score than remaining applications. Along with the first application, the computing device may display a selectable interface element corresponding to the second application, and selection of the selectable interface element may cause display of an application list that includes the second application. | 2021-10-07 |
20210311775 | METHOD OF PROVIDING SESSION CONTAINER MOUNTED WITH PLURALITY OF LIBRARIES REQUESTED BY USER - A method for providing a session container mounted with a plurality of libraries requested by a user. The method includes: in response to receiving a container execution request from the user, searching for a library container in a container catalog by a node controller; checking, by the node controller, whether a session container to be mounted with a library in the library container and the library container are compatible; and when the session container to be mounted with the library in the library container and the library container are compatible, generating, by a container engine, a session container mounted with the library in the library container. | 2021-10-07 |
20210311776 | Techniques and Architectures for Importation of Large Data Load - Techniques and architectures for data ingestion in an environment having a distributed data storage system. A request to import data into the environment is received; the data is to be imported from an external source through an application programming interface (API). The request is analyzed to determine whether it corresponds to a request to import a large data load. The data is staged if the request is for the large data load. Data management jobs are created to cause the data to be stored in one or more nodes of the distributed data storage system. The data management jobs are transmitted to corresponding nodes in the distributed data storage system. The jobs are executed asynchronously to cause the data to be stored in the nodes. | 2021-10-07 |
20210311777 | DETERMINING ACTION SELECTION POLICIES OF AN EXECUTION DEVICE - Computer-implemented methods, systems, and apparatus, including computer-readable medium, for generating an action selection policy for causing an execution device to complete a task are described. Data representing a task that is divided into a sequence of subtasks are obtained. Data specifying a strategy neural network (SNN) for a subtask in the sequence of subtasks are obtained. The SNN receives inputs include a sequence of actions that reach an initial state of the subtask, and predicts an action selection policy of the execution device for the subtask. The SNN is trained based on a value neural network (VNN) for a next subtask that follows the subtask in the sequence of subtasks. An input to the SNN is determined. The input includes a sequence of actions that reach a subtask initial state of the subtask. An action selection policy for completing the subtask is determined based on an output of the SNN. | 2021-10-07 |
20210311778 | DETERMINING ACTION SELECTION POLICIES OF AN EXECUTION DEVICE - Computer-implemented methods, systems, and apparatus, including computer-readable medium, for generating an action selection policy for causing an execution device to complete a task are described. Data representing a task that is divided into a sequence of subtasks are obtained. For a specified subtask except for a first subtask in the sequence of subtasks, a value neural network (VNN) is trained. The VNN receives inputs include reach probabilities of reaching a subtask initial state of the specified subtask, and predicts a reward of the execution device in the subtask initial state of the specified subtask. A strategy neural network (SNN) for a prior subtask that precedes the specified subtask is trained based on the VNN. The SNN receives inputs include a sequence of actions that reach a subtask state of the prior subtask, and predicts an action selection policy of the execution device in the subtask state of the prior subtask. | 2021-10-07 |
20210311779 | DATA PROCESSING DEVICE, DATA PROCESSING SYSTEM, DATA PROCESSING METHOD, AND PROGRAM - A data processing device ( | 2021-10-07 |
20210311780 | METHOD AND SYSTEM FOR ARRANGING BUSINESS PROCESS, COMPUTING DEVICE, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM - A method and a system for arranging business process, a computing device, and a non-transitory computer readable storage medium are provided. The method includes: receiving an application module and a business process rule that are transmitted by a user terminal, wherein the application module is configured to indicate a processing logic in a link in a business process, and wherein the business process rule is configured to indicate a processing rule of the application module; determining a target edge device corresponding to the application module; and transmitting the application module and the business process rule to the target edge device for the target edge device to execute the application module according to the business process rule. | 2021-10-07 |
20210311781 | METHOD AND SYSTEM FOR SCALABLE JOB PROCESSING - The present invention relates to methods for processing jobs within a cluster architecture. One method comprises the pausing of a job when waiting upon external dependencies. Another method comprises the transmission of messages relating to the ongoing processing of jobs back to a client via a persistent messaging channel. Yet another method comprises determining capacity at a node before allocating a job for processing by the node or adding the job to a cluster queue. A system for processing jobs within a cluster architecture is also disclosed. | 2021-10-07 |
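The capacity-check method above — test a node for spare capacity before allocating a job to it, otherwise add the job to a cluster queue — can be sketched as follows (the `Node` shape and first-fit policy are illustrative assumptions, not taken from the patent):

```python
from collections import deque

class Node:
    """A cluster node with a fixed job capacity (hypothetical model)."""
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.jobs = []

    def has_capacity(self):
        return len(self.jobs) < self.capacity

def dispatch(job, nodes, cluster_queue):
    """Allocate the job to the first node with spare capacity; if no node
    has capacity, add the job to the shared cluster queue instead."""
    for node in nodes:
        if node.has_capacity():
            node.jobs.append(job)
            return node.name
    cluster_queue.append(job)
    return None

nodes = [Node("a", 1), Node("b", 1)]
queue = deque()
assert dispatch("j1", nodes, queue) == "a"
assert dispatch("j2", nodes, queue) == "b"
assert dispatch("j3", nodes, queue) is None   # no capacity: queued
assert list(queue) == ["j3"]
```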
20210311782 | THREAD SCHEDULING FOR MULTITHREADED DATA PROCESSING ENVIRONMENTS - Methods, apparatus, systems and articles of manufacture (e.g., physical storage media) to implement thread scheduling for multithreaded data processing environments are disclosed. Example thread schedulers disclosed herein for a data processing system include a buffer manager to determine availability of respective buffers to be acquired for respective processing threads implementing respective functional nodes of a processing flow, and to identify first ones of the processing threads as stalled due to unavailability of at least one buffer in the respective buffers to be acquired for the first ones of the processing threads. Disclosed example thread schedulers also include a thread execution manager to initiate execution of second ones of the processing threads that are not identified as stalled. | 2021-10-07 |
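The buffer-manager step described above — mark a thread as stalled when any buffer it must acquire is unavailable, and initiate execution only of threads not identified as stalled — reduces to a simple partition. A sketch with hypothetical thread records rather than real OS threads:

```python
def schedule(threads, free_buffers):
    """Partition processing threads into runnable and stalled sets based on
    whether every buffer a thread needs to acquire is currently available.
    `threads` is a list of {"name": ..., "needs": set-of-buffer-ids} dicts
    (an illustrative shape, not the patent's actual data structure)."""
    runnable, stalled = [], []
    for t in threads:
        if all(b in free_buffers for b in t["needs"]):
            runnable.append(t["name"])
        else:
            # at least one required buffer is unavailable: thread is stalled
            stalled.append(t["name"])
    return runnable, stalled

threads = [
    {"name": "decode", "needs": {"buf0"}},
    {"name": "filter", "needs": {"buf1", "buf2"}},
]
runnable, stalled = schedule(threads, free_buffers={"buf0", "buf1"})
assert runnable == ["decode"]   # all needed buffers available
assert stalled == ["filter"]    # buf2 unavailable, so filter stalls
```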
20210311783 | ELECTRONIC DEVICE FOR PROCESSING BACKGROUND TASK BY USING EXTERNAL INPUT AND STORAGE MEDIUM THEREOF - A storage medium according to various embodiments may store instructions. When the instructions are executed by a processor of a computer including an operating system which provides an activation state in which the instructions are executed in a foreground in relation to at least one application program, a suspended state in which the instructions are executed but do not perform a command in a background in relation thereto, and a background state in which the instructions are executed and perform a command in a background in relation thereto, the instructions may cause the computer to: in relation to an application program configured to perform a task, store information on the task in a memory of the computer; after a state of the application program is changed from an activation state to a suspended state, receive a push message from an outside of the computer; change the state of the application program from the suspended state to a background state, according to reception of the push message; and allow the application program to perform the task on the basis of the information on the task in the background state. Further, an electronic device according to various embodiments may be provided. | 2021-10-07 |
20210311784 | METHODS AND SYSTEMS FOR PROVIDING ON-DEMAND CLOUD COMPUTING ENVIRONMENTS - A cloud management system can be configured to provide a cloud computing environment in response to a request for an execution platform. The cloud management system can be configured to determine a set of resources from non-dedicated cloud controlled resources or third-party resources that meet specifications of the requested execution platform. The cloud management system can be configured to create the on-demand cloud from the determined set of resources to serve as the execution platform. | 2021-10-07 |
20210311785 | SYSTEM AND METHODS FOR GENERATION AND ANALYSIS OF REAL-TIME RESOURCE REQUESTS - Embodiments of the invention are directed to a system, method, or computer program product for generation and analysis of real-time resource requests via a resource platform. A resource platform is provided for receiving and automating the management and processing of resource requests submitted by entities or users. The system embraces a fully digital approach to resource request processing, analysis, authentication, and reporting. In addition, the invention allows for analysis of reconciliation data of executed resource transfers for identification of useful trends that can be used for proactive accommodation of entity policies to align with perceived user preferences. | 2021-10-07 |
20210311786 | WORKLOAD MANAGEMENT USING REINFORCEMENT LEARNING - Aspects of the invention include determining, by a machine learning model, a predicted workload for a system and a current system state of the system, determining an action to be enacted for the system based at least in part on the predicted workload and the current system state, enacting the action for the system, evaluating a state of the system after the action has been enacted, determining a reward for the machine learning model based at least in part on the state of the system after the action has been enacted, and updating the machine learning model based on the reward. | 2021-10-07 |
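The loop above — predict workload and state, enact an action, evaluate the resulting state, compute a reward, update the model — is the standard reinforcement-learning cycle. A tabular sketch (discrete workload/state/action labels and the learning rate are assumptions for illustration; the patent's model is a general machine learning model):

```python
from collections import defaultdict

class WorkloadAgent:
    """Toy tabular stand-in for the learned workload-management model."""
    def __init__(self, actions, lr=0.5):
        self.q = defaultdict(float)  # (workload, state, action) -> value
        self.actions = actions
        self.lr = lr

    def choose(self, workload, state):
        # determine the action to enact given predicted workload and state
        return max(self.actions, key=lambda a: self.q[(workload, state, a)])

    def update(self, workload, state, action, reward):
        # move the action's value toward the observed reward
        key = (workload, state, action)
        self.q[key] += self.lr * (reward - self.q[key])

agent = WorkloadAgent(actions=["scale_up", "scale_down", "hold"])
# After enacting actions on a loaded system, rewards teach the model
# that scaling up was the right response to high predicted workload:
agent.update("high", "loaded", "scale_up", reward=1.0)
agent.update("high", "loaded", "scale_down", reward=-1.0)
assert agent.choose("high", "loaded") == "scale_up"
```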
20210311787 | SYSTEM AND METHOD FOR STATE MANAGEMENT OF DEVICES - A deployment manager includes storage for storing a state repository including state transitions associated with event descriptions generated by a computing device, and a computing device manager. The computing device manager obtains a new event description associated with the computing device and a workload performed by the computing device; in response to obtaining the new event description: matches the new event description to a state transition of the state transitions; and manages the workload based on a predicted next state associated with the state transition. | 2021-10-07 |
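The match-then-manage step — look up the new event description against stored state transitions and act on the predicted next state — can be sketched with a transition table (the states, events, and migration action below are hypothetical examples):

```python
# Hypothetical state repository: (current state, event) -> predicted next state
transitions = {
    ("healthy", "disk_error"): "degraded",
    ("degraded", "disk_replaced"): "healthy",
}

def manage_workload(state, event, workload):
    """Match the new event description to a stored transition and manage the
    workload based on the predicted next state; unmatched events leave the
    state unchanged."""
    nxt = transitions.get((state, event))
    if nxt == "degraded":
        # predicted degradation: flag the workload for migration elsewhere
        workload["migrate"] = True
    return nxt if nxt is not None else state

wl = {}
assert manage_workload("healthy", "disk_error", wl) == "degraded"
assert wl == {"migrate": True}
assert manage_workload("healthy", "unknown_event", {}) == "healthy"
```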
20210311788 | SYSTEM AND METHODS FOR PROCESSING AND AUTHORIZATION OF REAL-TIME RESOURCE REQUESTS - Embodiments of the invention are directed to a system, method, or computer program product for processing and authorization of electronic resource requests via a resource platform. A platform is provided for receiving and processing resource request data, and automating the authorization process involved with resource requests. The resource platform may be leveraged by one or more entities via a secure communication interface through the conversion of electronic resource request data to industry-standard formatting as well as formatting for multiple digital channel types. Authorization of electronic resource requests may include intelligent user configuration for convenient authentication via the use of multiple authenticated users. | 2021-10-07 |
20210311789 | ELECTRONIC DEVICE AND CONTROL METHOD THEREOF - Disclosed are an electronic apparatus and a method of controlling the same, the electronic apparatus including: a storage; a processor configured to execute a program stored in the storage; and a memory configured to load the program based on execution of the program, wherein: the program includes an operating system (OS) and an application program, a user stack is assigned to a process of the application program, data of the user stack is stored in a certain area of the memory based on execution of the process, and the OS discards an area of the memory, in which data not to be used among the data of the user stack is stored, while changes to the data of the user stack are stopped, and allows the discarded area to be usable by another process. | 2021-10-07 |
20210311790 | DATA STORAGE RESOURCE MANAGEMENT - A resource management system in a data center includes one or more data storage resource providers and a transaction server. The transaction server is configured to receive, from a client, a request for read and/or write access for a data storage resource, the request comprising one or more specifications, to provide, to the one or more data storage resource providers, at least a portion of the request, and to receive, from the one or more data storage resource providers, respective responses to the request, the responses respectively comprising one or more allocation options. The transaction server is further configured to select one of the one or more allocation options for registration, and register the selected allocation option with a data manager. At least one of the one or more data storage providers is configured to provide the data storage resource in accordance with the registered allocation option. | 2021-10-07 |
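The transaction-server flow — forward the request's specifications to providers, gather their allocation options, and select one to register — might look like the following sketch (the option fields and lowest-cost selection rule are assumptions; the patent does not specify a selection criterion):

```python
def select_allocation(request, providers):
    """Provide the request to each data storage resource provider, collect
    the allocation options they respond with, and select one that meets the
    request's specifications (here: the cheapest viable option)."""
    options = []
    for provider in providers:
        options.extend(provider(request))
    viable = [o for o in options if o["capacity_gb"] >= request["capacity_gb"]]
    return min(viable, key=lambda o: o["cost"]) if viable else None

# Providers modeled as callables returning lists of allocation options:
providers = [
    lambda req: [{"provider": "p1", "capacity_gb": 100, "cost": 5}],
    lambda req: [{"provider": "p2", "capacity_gb": 200, "cost": 3}],
]
chosen = select_allocation({"capacity_gb": 50}, providers)
assert chosen["provider"] == "p2"   # both viable; p2 is cheaper
```

The chosen option would then be registered with the data manager so the winning provider can provision the resource accordingly.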
20210311791 | SYSTEMS AND METHODS FOR MANAGING USAGE OF COMPUTING RESOURCES - A processor-implemented method is disclosed. The method includes: obtaining, from an activity logging system, activity data associated with one or more defined computing tasks, the activity data indicating progress towards completion of the one or more defined computing tasks, the defined computing tasks being associated with one or more projects; obtaining, from a resource usage monitoring system, time-based resource tracking data associated with at least one of the projects, the resource tracking data including project identifying data associated with the at least one project and project time data identifying one or more time periods reflecting use of a computing resource in association with the at least one project; determining mappings of the one or more time periods to the one or more defined computing tasks based on the project identifying data and the activity data associated with the one or more defined computing tasks; determining, based on the mappings, that at least one task-based resource usage criterion is satisfied; and in response to determining that the at least one task-based resource usage criterion is satisfied, generating a notification of resource usage for display on a computing device. | 2021-10-07 |
20210311792 | NAMESPACES AS UNITS OF MANAGEMENT IN A CLUSTERED AND VIRTUALIZED COMPUTER SYSTEM - An example method of managing an application in a virtualized computing system that includes a cluster of hosts managed by a virtualization management server, the hosts including a virtualization layer executing on hardware platforms is described. The method includes: receiving a specification for a namespace at the virtualization management server, the specification defining resource constraints and authorization constraints for the namespace; preparing an environment within the virtualized computing system for the namespace in response to the specification, the environment including: a resource pool implementing at least a portion of the resource constraints as reservations and limits of resources in the virtualized computing system; and a user access policy implementing the authorization constraints within the virtualized computing system for the namespace; and managing, by the virtualization management server as a single unit, workloads of the application, the workloads deployed on the virtualization layer within the resource pool consistent with the user access policy. | 2021-10-07 |
20210311793 | SYSTEM AND METHOD FOR ALLOCATING CENTRAL PROCESSING UNIT (CPU) CORES FOR SYSTEM OPERATIONS - A method, computer program product, and computing system for allocating a first set of cores of a plurality of cores of a multicore central processing unit (CPU) for processing host input-output (IO) operations of a plurality of operations on a storage system. A second set of cores of the plurality of cores may be allocated for processing flush operations of the plurality of operations on the storage system. A third set of cores of the plurality of cores may be allocated for processing rebuild operations of the plurality of operations on the storage system. At least one of one or more host IO operations, one or more rebuild operations, and one or more flush operations may be processed, via the plurality of cores and based upon, at least in part, the allocation of the plurality of cores for processing the plurality of operations. | 2021-10-07 |
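Partitioning a multicore CPU into three disjoint core sets for host IO, flush, and rebuild operations can be sketched as below; the share parameters are illustrative defaults, not ratios from the patent:

```python
def allocate_cores(n_cores, io_share=0.5, flush_share=0.25):
    """Split core ids 0..n_cores-1 into three disjoint sets: a first set for
    host IO operations, a second for flush operations, and the remainder
    for rebuild operations (shares are hypothetical tuning knobs)."""
    cores = list(range(n_cores))
    n_io = int(n_cores * io_share)
    n_flush = int(n_cores * flush_share)
    return {
        "io": cores[:n_io],
        "flush": cores[n_io:n_io + n_flush],
        "rebuild": cores[n_io + n_flush:],
    }

sets = allocate_cores(8)
assert sets["io"] == [0, 1, 2, 3]
assert sets["flush"] == [4, 5]
assert sets["rebuild"] == [6, 7]
# the three sets are pairwise disjoint and cover every core
assert not set(sets["io"]) & set(sets["flush"]) & set(sets["rebuild"])
```

Operations of each class would then be dispatched only onto their allocated set, isolating host IO latency from background flush and rebuild work.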
20210311794 | SYSTEM AND METHOD FOR IMPLEMENTING A STANDALONE APPLICATION MODULE - Various methods, apparatuses/systems, and media for implementing a standalone application module are disclosed. A configuration database stores information about one or more infrastructure resources. A receiver receives a request to connect to an infrastructure resource. A processor accesses the configuration database to fetch information about the infrastructure resource; accesses one or more external resource databases to fetch infrastructure resources that are required to run an application; dynamically creates the infrastructure resources accessed from the one or more external resource databases; and establishes and maintains a connection to the dynamically created infrastructure resources required by the application to function running in background in a user's system. | 2021-10-07 |
20210311795 | METHOD AND APPARATUS FOR ALLOCATING DATABASE SERVER - The present disclosure provides a database server allocation method and apparatus. The database server allocation method may be performed by an allocation server interfaced with a plurality of database servers each of which collects data from one or more data generators. The method includes: allocating an initial database server for each data generator; receiving information on an amount of data generated by each data generator from the plurality of database servers; analyzing the amount of the data generated by each data generator to determine a data generation pattern for each data generator; and grouping the data generators according to the data generation pattern for each data generator and reallocating a new database server for each data generator. | 2021-10-07 |
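The pattern-grouping and reallocation steps — analyze each generator's data volumes, derive a generation pattern, group generators sharing a pattern, and assign each group a database server — can be sketched as follows. Using the peak hour as the "pattern" is an assumption for illustration; the patent leaves the pattern analysis unspecified:

```python
from collections import defaultdict

def regroup(volumes, servers):
    """Group data generators whose hourly data volume peaks at the same hour
    (a toy generation pattern) and reallocate one database server per
    group. `volumes` maps generator name -> list of per-hour amounts."""
    groups = defaultdict(list)
    for gen, hourly in volumes.items():
        peak_hour = max(range(len(hourly)), key=hourly.__getitem__)
        groups[peak_hour].append(gen)
    allocation = {}
    for i, (_, gens) in enumerate(sorted(groups.items())):
        for g in gens:
            # round-robin groups over the available database servers
            allocation[g] = servers[i % len(servers)]
    return allocation

volumes = {
    "sensor_a": [9, 1, 1],  # peaks in hour 0
    "sensor_b": [8, 2, 1],  # peaks in hour 0: same pattern as sensor_a
    "sensor_c": [1, 1, 9],  # peaks in hour 2
}
alloc = regroup(volumes, servers=["db1", "db2"])
assert alloc["sensor_a"] == alloc["sensor_b"] == "db1"
assert alloc["sensor_c"] == "db2"
```

Grouping generators with aligned peaks onto separate servers spreads simultaneous write bursts across the server pool.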
20210311796 | SYSTEM AND METHOD FOR DETERMINING AND TRACKING CLOUD CAPACITY METRICS - A cloud capacity system enables calculation and tracking of cloud capacity metrics for data center pods. The system includes a “Cloud Capacity Snapshot” table having a number of different cloud capacity columns; a “Cloud Capacity Query” table that stores a respective, customizable query for each of the cloud capacity columns defining criteria for selecting and combining data to calculate the corresponding cloud capacity metric value; and a “Cloud Capacity URLs” table that stores cloud capacity universal resource locator (URLs). Each cloud capacity URL embodies or encodes a respective cloud capacity query of the “Cloud Capacity Query” table for a given combination of a particular cloud capacity column and a particular pod in the “Cloud Capacity Snapshot” table. As such, by executing the queries encoded in the “Cloud Capacity URLs” table, each cloud capacity field of the “Cloud Capacity Snapshot” table is populated with the corresponding cloud capacity metric value. | 2021-10-07 |
20210311797 | ELECTRONIC APPARATUS AND CONTROLLING METHOD THEREOF - Provided herein are an electronic apparatus and a controlling method thereof. An electronic apparatus according to the disclosure includes a communicator, a memory storing information on a recipe wherein a plurality of unit functions for provision of a service are combined, and a processor configured to, based on receiving information for a unit function that can be performed at each electronic apparatus from each of a plurality of electronic apparatuses through the communicator, identify a plurality of electronic apparatuses matched to the plurality of unit functions included in the recipe based on the received information, and control the communicator to transmit a signal for performing each matched unit function to each of the plurality of identified electronic apparatuses. | 2021-10-07 |
20210311798 | DYNAMIC MICROSERVICES ALLOCATION MECHANISM - A computing platform comprising a plurality of disaggregated data center resources and an infrastructure processing unit (IPU), communicatively coupled to the plurality of resources, to compose a platform of the plurality of disaggregated data center resources for allocation of a microservices cluster. | 2021-10-07 |
20210311799 | WORKLOAD ALLOCATION AMONG HARDWARE DEVICES - An example method corresponding to workload allocation among hardware devices can include monitoring, by a processing unit, workload characteristics associated with execution of workloads by a plurality of hardware devices, such as hardware accelerators. The method can include determining, by the processing unit, particular characteristics corresponding to a workload processed by at least one of the hardware devices and performing, by the processing unit, an action to determine that a particular hardware device exhibits higher performance in executing the workload than a different hardware device. The method can further include allocating a subsequent workload that has characteristics corresponding to the workload exhibiting the particular characteristics to the hardware device that exhibits higher performance in executing the workload than a different hardware device. | 2021-10-07 |
20210311800 | CONNECTING ACCELERATOR RESOURCES USING A SWITCH - The present disclosure describes a number of embodiments related to devices and techniques for implementing an interconnect switch to provide a switchable low-latency bypass between node resources such as CPUs and accelerator resources for caching. A resource manager may be used to receive an indication of a node of a plurality of nodes and an indication of an accelerator resource of a plurality of accelerator resources to connect to the node. If the indicated accelerator resource is connected to another node of the plurality of nodes, the resource manager transmits one or more hot-remove commands to an interconnect switch. The resource manager may then transmit to the interconnect switch one or more hot-add commands to connect the node resource and the accelerator resource. | 2021-10-07 |
20210311801 | SYSTEM AND METHOD FOR OFFLOADING COMPUTATION TO STORAGE NODES IN DISTRIBUTED SYSTEM - One embodiment described herein provides a distributed computing system. The distributed computing system can include a compute cluster comprising one or more compute nodes and a storage cluster comprising a plurality of storage nodes. A respective compute node can be configured to: receive a request for a computation task; obtain path information associated with data required by the computation task; identify at least one storage node based on the obtained path information; send at least one computation instruction associated with the computation task to the identified storage node; and receive computation results from the identified storage node after the identified storage node performs the computation task. | 2021-10-07 |
20210311802 | RESOURCE ALLOCATION FOR VIRTUAL MACHINES - A system and method include reception of a request to create a virtual machine associated with a requested number of resource units of each of a plurality of resource types, determination, for each of the plurality of resource types, of a pool of available resource units, random selection, for each of the plurality of resource types, of the requested number of resource units from the pool of available resource units of the resource type, and allocation of the selected resource units of each of the plurality of resource types to the virtual machine. | 2021-10-07 |
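The allocation flow above — determine the pool of available units per resource type, then randomly select the requested number of units from each pool — maps directly onto `random.sample`. A sketch (the pool representation and seeded generator are illustrative choices):

```python
import random

def allocate_vm(request, pools, rng=random.Random(0)):
    """For each requested resource type, randomly select the requested
    number of units from that type's pool of available units and remove
    them from the pool. `request` maps resource type -> unit count."""
    allocation = {}
    for rtype, count in request.items():
        available = pools[rtype]
        if len(available) < count:
            raise RuntimeError(f"not enough {rtype} units available")
        # random selection from the pool, as the abstract describes
        chosen = rng.sample(sorted(available), count)
        available.difference_update(chosen)
        allocation[rtype] = chosen
    return allocation

pools = {"cpu": set(range(8)), "memory_gb": set(range(16))}
alloc = allocate_vm({"cpu": 2, "memory_gb": 4}, pools)
assert len(alloc["cpu"]) == 2 and len(alloc["memory_gb"]) == 4
# allocated units leave the availability pools
assert pools["cpu"].isdisjoint(alloc["cpu"])
assert len(pools["cpu"]) == 6 and len(pools["memory_gb"]) == 12
```

Random (rather than first-fit) selection spreads allocations evenly across the physical units, which can reduce hot-spotting.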
20210311803 | DEFINING SERVICES FOR VIRTUAL INTERFACES OF WORKLOADS - Some embodiments of the invention provide a method for deploying network elements for a set of machines in a set of one or more datacenters. The datacenter set is part of one availability zone in some embodiments. The method receives intent-based API (Application Programming Interface) requests, and parses these API requests to identify a set of network elements to connect and/or perform services for the set of machines. In some embodiments, the API is a hierarchical document that can specify multiple different compute and/or network elements at different levels of compute and/or network element hierarchy. The method performs automated processes to define a virtual private cloud (VPC) to connect the set of machines to a logical network that segregates the set of machines from other machines in the datacenter set. In some embodiments, the set of machines include virtual machines and containers, the VPC is defined with a supervisor cluster namespace, and the API requests are provided as YAML files. | 2021-10-07 |
20210311804 | SYSTEM AND METHOD FOR MANAGING A HYBRID COMPUTER ENVIRONMENT - Disclosed are systems, hybrid compute environments, methods and computer-readable media for dynamically provisioning nodes for a workload. In the hybrid compute environment, each node communicates with a first resource manager associated with the first operating system and a second resource manager associated with a second operating system. The method includes receiving an instruction to provision at least one node in the hybrid compute environment from the first operating system to the second operating system, after provisioning the second operating system, polling at least one signal from the resource manager associated with the at least one node, processing at least one signal from the second resource manager associated with the at least one node and consuming resources associated with the at least one node having the second operating system provisioned thereon. | 2021-10-07 |
20210311805 | RESOURCE RESERVATION MANAGEMENT DEVICE, RESOURCE RESERVATION MANAGEMENT METHOD, AND RESOURCE RESERVATION MANAGEMENT PROGRAM - [Problem] Available resources are efficiently used even in a case in which continuous available resources cannot be secured on a cloud. | 2021-10-07 |
20210311806 | COMPUTER SYSTEM WORKLOAD MANAGER - A computer-implemented method includes storing usage of a resource of said computer system as time-stamped resource usage values, comparing said time-stamped resource usage values with predetermined time-stamped performance goal values, assigning a time-stamped priority value to an application that is running based on at least one of said performance goal values, identifying a future workload demand value by applying a time-series analysis algorithm to at least some of said time-stamped resource usage values and a corresponding at least some of said time-stamped performance goal values for said application resulting in workload demand time frames and related amplitudes of said workload demand time frames, and adjusting a dispatch priority value for said application by setting a minimum dispatch priority for said application based on said future workload demand value. | 2021-10-07 |
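The core steps — apply a time-series analysis to the stored usage values to identify future workload demand, then adjust the application's dispatch priority with a minimum floor — can be sketched with a deliberately naive forecaster (a moving average standing in for the patent's unspecified time-series algorithm; priority values and the goal threshold are made up):

```python
def forecast_demand(usage, window=3):
    """Naive time-series step: predict the next resource-usage value as the
    mean of the last `window` time-stamped samples."""
    recent = usage[-window:]
    return sum(recent) / len(recent)

def adjust_priority(current_priority, predicted, goal, floor=1):
    """Raise the dispatch priority when predicted demand exceeds the
    performance goal, never letting it fall below the minimum floor."""
    if predicted > goal:
        return max(current_priority + 1, floor)
    return max(current_priority, floor)

usage = [40, 55, 70, 85, 100]       # time-stamped resource usage values
predicted = forecast_demand(usage)  # (70 + 85 + 100) / 3
assert predicted == 85.0
# demand is forecast above the goal, so the dispatcher boosts priority
assert adjust_priority(2, predicted, goal=80) == 3
assert adjust_priority(2, predicted, goal=90) == 2
```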
20210311807 | Update of Model Parameters in a Parallel Processing System - A data processing system comprising a plurality of processing nodes that are arranged to update a model in a parallel manner. Each of the processing nodes starts with a different set of updates to model parameters. Each of the processing nodes is configured to perform one or more reduce-scatter collectives so as to exchange and reduce the updates. Having done so, each processing node is configured to apply the reduced set of updates to obtain an updated set of model parameters. The processing nodes then exchange the updated model parameters using an all-gather so that each processing node ends up with the same model parameters at the end of the process. | 2021-10-07 |
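The reduce-scatter/all-gather pair above is the standard decomposition of an allreduce: after reduce-scatter, node *i* owns the fully reduced slice *i* of the update vector; all-gather then distributes every slice to every node. A sequential simulation of the data flow (not a real multi-node implementation):

```python
def reduce_scatter_all_gather(updates):
    """Simulate the two collectives over n 'nodes', each starting with a
    different full-length update vector; afterwards every node holds the
    identical, fully reduced vector. Assumes the vector length divides
    evenly by the node count (a simplification)."""
    n = len(updates)
    chunk = len(updates[0]) // n
    # reduce-scatter: node i ends up owning the reduced slice i
    owned = []
    for i in range(n):
        lo, hi = i * chunk, (i + 1) * chunk
        owned.append([sum(u[j] for u in updates) for j in range(lo, hi)])
    # all-gather: every node collects all reduced slices
    full = [v for slice_ in owned for v in slice_]
    return [list(full) for _ in range(n)]

updates = [[1, 2, 3, 4], [10, 20, 30, 40]]  # two nodes' local updates
result = reduce_scatter_all_gather(updates)
assert result[0] == result[1] == [11, 22, 33, 44]
```

In the patented flow the nodes would apply `owned` (their reduced slice) to the model parameters before the all-gather, so the gathered values are updated parameters rather than raw updates; the collective structure is the same.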
20210311808 | Control of Data Transfer Between Processing Nodes - A data processing system comprising a plurality of processing nodes, each comprising at least one memory configured to store an array of data items, wherein each of the plurality of processing nodes is configured to execute compute instructions during a compute phase and following a precompiled synchronisation barrier, enter at least one exchange phase. During the at least one exchange phase, a series of collective operations are carried out. Each processing node is configured to perform a reduce scatter collective in at least one first dimension. Using the results of the reduce scatter collective, each processing node performs an allreduce in a second dimension. The processing nodes then perform an all-gather collective in the at least one first dimension using the results of the allreduce. | 2021-10-07 |
20210311809 | APPARATUS AND METHOD FOR LOCKING PCIE NETWORK HAVING NON-TRANSPARENT BRIDGING - An interconnected computer system includes a Peripheral Component Interconnect Express (PCIe) fabric, a first computer system communicatively coupled to the PCIe fabric, a second computer system communicatively coupled to the PCIe fabric, and a shared single-access hardware resource coupled to the PCIe fabric. The first computer system includes a first processor and first memory coupled to the first processor configured to store a first flag indicating a desire of the first computer system to access the shared single-access hardware resource and a turn variable indicating which of the first computer system and the second computer system has access to the shared single-access hardware resource. The second computer system includes a second processor and second memory coupled to the second processor configured to store a second flag indicating a desire of the second computer system to access the shared single-access hardware resource. | 2021-10-07 |
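The flag/turn scheme this abstract describes for two computer systems on a PCIe fabric is essentially Peterson's two-process mutual-exclusion algorithm. A toy sketch with Python threads standing in for the two systems and a counter standing in for the shared single-access resource (this relies on CPython's per-bytecode atomicity; real PCIe shared memory would need explicit ordering guarantees):

```python
import threading

flag = [False, False]  # each system's desire to access the shared resource
turn = 0               # which system currently has priority
counter = 0            # the shared single-access resource

def worker(me):
    global turn, counter
    other = 1 - me
    for _ in range(1000):
        flag[me] = True          # announce the desire to enter
        turn = other             # yield priority to the other system
        while flag[other] and turn == other:
            pass                 # busy-wait until it is safe to proceed
        counter += 1             # critical section: touch the resource
        flag[me] = False         # release: leave the critical section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter == 2000   # mutual exclusion: no increments were lost
```

Setting `turn` to the *other* party before waiting is what breaks the tie when both systems raise their flags simultaneously: exactly one of them sees `turn` pointing at itself and waits.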
20210311810 | COMPLEX SYSTEM FOR KNOWLEDGE LAYOUT FACILITATED EPICENTER ACTIVE EVENT RESPONSE CONTROL - A system maintains a knowledge layout to support the analysis of active events and determination of epicenter and aftershock nodes via an event reach stack. At an input layer of the event reach stack, the system may receive active event data. At a semantic layer, the system may parse the active event data to determine event phrases. Based on the event phrases, the system may identify epicenter nodes directly affected by the active event. At an analytic model layer, the system may successively determine aftershock nodes by traversing the knowledge layout outward from the epicenter nodes. The system then directs the response to the active event to the aftershock and epicenter nodes, via action at a focus response layer of the event reach stack. | 2021-10-07 |
20210311811 | CACHING IDENTIFIERS FOR ACCESS COMMANDS - Methods, systems, and devices for caching identifiers for access commands are described. A memory sub-system can receive an access command to perform an access operation on a transfer unit of the memory sub-system. The memory sub-system can store an identifier associated with the access command in a memory component and can generate an internal command using a first core of the memory sub-system. In some embodiments, the memory sub-system can store the identifier in a shared memory that is accessible by the first core and can issue the internal command to perform the access operation on the memory sub-system. | 2021-10-07 |
20210311812 | ELECTRONIC DEVICE AND FAULT DIAGNOSIS METHOD OF ELECTRONIC DEVICE - An electronic device for diagnosing a fault of a plurality of external devices is disclosed. The electronic device comprises a communication unit and a processor. The processor receives, from the plurality of external devices, information related to an operation of the plurality of external devices through the communication unit; on the basis of the information related to the operation of any one of the plurality of external devices, determines whether the one external device is operating abnormally; when the one external device is operating abnormally, diagnoses the cause of the abnormality on the basis of the information related to the operation of the one external device and information related to an operation of another external device of the plurality of external devices that is relevant to the operation of the one external device; and provides, via the communication unit, information on the diagnosed abnormality to at least one of the one external device and a communication device of a user of the external devices. | 2021-10-07 |
20210311813 | FAULT PREDICTION METHOD, APPARATUS AND STORAGE MEDIUM - The present application discloses a fault prediction method, apparatus, and storage medium, and relates to the fields of cloud computing and fault processing. An embodiment includes acquiring a fault alarm request, wherein the fault alarm request is obtained from at least two fault triggering parameters among a fault generated by cloud operation, a hidden danger generated by cloud operation, and a change generated by terminal operation at a user level; analyzing the fault alarm request to obtain the at least two fault triggering parameters of the fault, the hidden danger, and the change; establishing an association between the at least two fault triggering parameters of the fault, the hidden danger, and the change, to obtain an association result; and predicting a fault causing the fault alarm request according to the association result, to obtain a fault prediction result. | 2021-10-07 |
20210311814 | PATTERN RECOGNITION FOR PROACTIVE TREATMENT OF NON-CONTIGUOUS GROWING DEFECTS - Pattern recognition is used to proactively treat defects of repeating circuit topologies. A component of a computing environment is monitored for failures. The component includes one or more repeating circuit topologies. A determination is made as to whether a new failure within a repeating circuit topology of the one or more repeating circuit topologies has occurred within a predefined amount of time from a previous failure matching a selected pattern, in which the selected pattern indicates a non-contiguous growing defect. Based on determining the new failure has occurred within the predefined amount of time from the previous failure matching the selected pattern, corrective action for the component is proactively taken. | 2021-10-07 |
20210311815 | PROACTIVE OUTREACH PLATFORM FOR SERVER-DRIVEN COMMUNICATION CHANNEL PRESENTATION - There are provided systems and methods for a proactive outreach platform for server-driven communication channel presentation. A service provider, such as an electronic transaction processor for digital transactions, may utilize a data aggregation operation to detect different error or help events occurring across different domains, pages, and interfaces of the service provider. Using these events, mappings of user intents that resulted in the error event, and assistance channels may be generated. Rules may be established for mapping user intents to channels based on the error events, as well as issue priority and user engagement. Thereafter, when a user accesses a particular online service of the service provider and is performing an operation that results in an error, such as an account access failure, the mappings and rules may be used to proactively reach out to the user with a particular assistance channel that may resolve the user's error event. | 2021-10-07 |
20210311816 | CONTROL UNIT AND METHOD FOR OPERATING A CONTROL UNIT - A control unit having a plurality of error shutdown interfaces by which, upon activation, in each case one or more components to be controlled by the control unit is/are able to be switched off. The control unit is set up to run one or more different applications, each of which is equipped to trigger an error shutdown if necessary, the control unit additionally being set up to provide internal interfaces for the one or more applications. The internal interfaces and the error shutdown interfaces are predefinably assignable to each other, so that in response to an invocation of one of the internal interfaces, the one or more error shutdown interfaces assigned to it is/are activated. A method for operation of the control unit is also described. | 2021-10-07 |
20210311817 | APPLICATION LOGGING MECHANISM - A system to facilitate application logging is described. The system includes a processor and a machine readable medium storing instructions that, when executed, cause the processor to record a system state, perform application logging at a first logging rate, record an occurrence of task failures during the logging, determine a predicted queue size threshold value based on the recorded occurrence of task failures, determine whether the predicted queue size threshold value is less than an actual queue size, and perform the application logging at a second logging rate upon a determination that the predicted queue size threshold value is less than an actual queue size, wherein the second logging rate is greater than the first logging rate. | 2021-10-07 |
20210311818 | RUNTIME POST PACKAGE REPAIR FOR DYNAMIC RANDOM ACCESS MEMORY - Systems, apparatuses and methods may provide for technology that handles failures in memory hardware (e.g., dynamic random access memory (DRAM)) via runtime post package repair. Such technology may include operations to perform a runtime post package repair in response to a memory hardware failure detected in the memory. In such an example, the runtime post package repair may be done after power up boot operations have been completed. | 2021-10-07 |
20210311819 | Cloud-based Providing of One or More Corrective Measures for a Storage System - An illustrative method includes detecting, by a cloud-based storage system services provider based on a problem signature, that a storage system has experienced a problem that is associated with the problem signature; and deploying, without user intervention, one or more corrective measures that modify the storage system to resolve the problem. | 2021-10-07 |
20210311820 | SEMICONDUCTOR MEMORY DEVICES AND MEMORY SYSTEMS - A semiconductor memory device includes a memory cell array, an error correction code (ECC) engine circuit, an error information register and a control logic circuit. The memory cell array includes memory cell rows. The control logic circuit controls the ECC engine circuit to generate an error generation signal based on performing a first ECC decoding on first sub-pages in a first memory cell row in a scrubbing operation and based on performing a second ECC decoding on second sub-pages in a second memory cell row in a normal read operation on the second memory cell row. The control logic circuit records error information in the error information register and controls the ECC engine circuit to skip an ECC encoding and an ECC decoding on a selected memory cell row of the first memory cell row and the second memory cell row based on the error information. | 2021-10-07 |
20210311821 | SEMICONDUCTOR MEMORY DEVICES - A semiconductor memory device including: a buffer die; memory dies stacked on the buffer die; and TSVs, wherein at least one of the memory dies includes: a memory cell array; an error correction code (ECC) engine; an error information register; and a control logic circuit configured to control the ECC engine to perform a read-modify-write operation, wherein the control logic circuit is configured to: record, in the error information register, a first address associated with a first codeword based on an error generation signal and a first syndrome obtained by an ECC decoding; and determine an error attribute of the first codeword based on a change of the first syndrome, recorded in the error information register, across a plurality of read-modify-write operations. | 2021-10-07 |
20210311822 | SEMICONDUCTOR DEVICE WITH MODIFIED ACCESS AND ASSOCIATED METHODS AND SYSTEMS - Memory devices, systems including memory devices, and methods of operating memory devices are described, in which a host device may access a group of memory cells (e.g., portion of an array configurable to store ECC parity bits) otherwise reserved for ECC functionality of a memory device. The memory device may include a register to indicate whether its ECC functionality is enabled or disabled. When the register indicates the ECC functionality is disabled, the memory device may increase a storage capacity available to the host device by making the group of memory cells available for user-accessible data. Additionally or alternatively, the memory device may store metadata associated with various operational aspects of the memory device in the group of memory cells. Moreover, the memory device may modify a burst length to accommodate additional information to be stored in or read from the group of memory cells. | 2021-10-07 |
20210311823 | DETECTION OF A COLD BOOT MEMORY ATTACK IN A DATA PROCESSING SYSTEM - A method is provided for detecting a cold boot attack in a data processing system. The data processing system includes a processor, a memory with ECC, and a monitor circuit. In the method, during a boot process of the data processing system, the monitor circuit counts read and write accesses to the memory and maintains a count of the number of errors in the memory detected by the ECC. The read and write access count and the error count are used to detect suspicious activity that may indicate a cold boot attack on the memory. A data processing system that implements the method is also provided. | 2021-10-07 |
20210311824 | ERASURE DECODING FOR A MEMORY DEVICE - Methods, systems, and devices for erasure decoding for a memory device are described. In accordance with the described techniques, a memory device may be configured to identify conditions associated with an erasure, a possible erasure, or an otherwise indeterminate logic state (e.g., of a memory cell, of an information position of a codeword). Such an identification may be used to enhance aspects of error handling operations, including those that may be performed at the memory device or a host device (e.g., error handling operations performed at a memory controller external to the memory device). For example, error handling operations may be performed using speculative codewords, where information positions associated with an indeterminate or unassigned logic state are assigned with a respective assumed logic state, which may extend a capability of error detection or error correction compared to handling errors with unknown positions. | 2021-10-07 |
20210311825 | APPARATUS AND METHOD FOR CONTROLLING INPUT/OUTPUT THROUGHPUT OF A MEMORY SYSTEM - A memory system includes a memory device including a plurality of memory units capable of inputting or outputting data individually, and a controller coupled with the plurality of memory units via a plurality of data paths. The controller is configured to perform a correlation operation on two or more read requests among a plurality of read requests input from an external device, so that the plurality of memory units output plural pieces of data corresponding to the plurality of read requests via the plurality of data paths based on an interleaving manner. The controller is configured to determine whether to load map data associated with the plurality of read requests before a count of the plurality of read requests reaches a threshold, to divide the plurality of read requests into two groups based on whether to load the map data, and to perform the correlation operation per group. | 2021-10-07 |
20210311826 | SEMICONDUCTOR STORING APPARATUS AND READOUT METHOD - A semiconductor storing apparatus capable of shortening an ECC processing time of a readout operation is provided. The apparatus includes a flash memory including: a memory cell array; a page buffer/sense circuit holding data read out from a selected page of the memory cell array; an error correcting code circuit receiving data from the page buffer/sense circuit and holding error address information of the data; an output circuit selecting data from the page buffer/sense circuit based on a column address, and outputting the selected data to a data bus; and an error correction part correcting data of the data bus based on the error address information. | 2021-10-07 |
20210311827 | ERROR CORRECTING MEMORY SYSTEMS - Error correcting memory systems and methods of operating the memory systems are disclosed. In some embodiments, a memory system includes: a data memory; an ECC memory; and a data scrubbing circuit electrically coupled to the ECC memory and the data memory. The data scrubbing circuit may be configured to, in response to receiving a scrub data command, correct an error in the data memory. A code word length used to correct the error may be longer than a word length used during normal access of the data memory. In some embodiments, a memory system includes a first memory circuit associated with a first bit error rate and a second memory circuit associated with a second bit error rate. In some embodiments, a memory system includes an error correctable multi-level cell (MLC) array. | 2021-10-07 |
20210311828 | COPY-BACK OPERATIONS IN A MEMORY DEVICE - Devices and techniques for performing copy-back operations in a memory device are disclosed herein. A trigger to perform a copy-back operation in relation to a section of data stored on the memory device can be detected. Circuitry of the memory device can then read the section of data at two voltage levels within a read window to obtain a first set of bits and a second set of bits respectively. The first and second sets of bits—which should be the same under normal circumstances—are compared to determine whether a difference between the sets of bits is beyond a threshold. If the difference is beyond a threshold, error correction is invoked prior to completion of the copy-back operation. | 2021-10-07 |
20210311829 | EXTENDED ERROR DETECTION FOR A MEMORY DEVICE - Methods, systems, and devices for extended error detection for a memory device are described. For example, during a read operation, the memory device may perform an error detection operation capable of detecting single-bit errors, double-bit errors, and errors that impact more than two bits and indicate the detected error to a host device. The memory device may use parity information to perform an error detection procedure to detect and/or correct errors within data retrieved during the read operation. In some cases, the memory device may associate each bit of the data read during the read operation with two or more bits of parity information. For example, the memory device may use two or more sets of parity bits to detect errors within a matrix of the data. Each set of parity bits may correspond to a dimension of the matrix of data. | 2021-10-07 |
20210311830 | STORAGE DEVICE AND METHOD OF OPERATING THE SAME - The present technology relates to an electronic device. According to the present technology, a storage device having improved original data recovery capability may include a memory device including a plurality of memory cells, and configured to perform a read operation on data stored in the plurality of memory cells according to read mode information, and to output read data associated with the read operation and a memory controller configured to receive the read data, change the read mode information when error correction decoding for the read data fails, and control the memory device to perform the read operation again according to the changed read mode information. The read mode information may include information on a data interface between the memory device and the memory controller. | 2021-10-07 |
20210311831 | DATA PROCESSING SYSTEM, MEMORY CONTROLLER THEREFOR, AND OPERATING METHOD THEREOF - A data processing system may include a memory module; and a controller configured to exchange data with the memory module in response to a request received from a host. The controller divides first data into a first data group subject to error correction and a second data group not subject to error correction in response to the first data and a first data write request received from the host, generates first meta data for error correction for the first data group, configures a first data chunk that includes the first data and the first meta data, and transmits the first data chunk to the memory module. | 2021-10-07 |
20210311832 | REGISTER FAULT DETECTOR - A fault detector has a processor configured to receive, during a register write event, first data that are to be stored on a first register; determine a first encoded value from the first data using an encoding operation; receive second data from the first register from one or more bit locations on which the first data were to be stored; determine a second encoded value from the second data using the encoding operation; and compare the first encoded value and the second encoded value. If the first encoded value is the same as the second encoded value, the fault detector operates according to a first operational mode; and if the first encoded value is different from the second encoded value, the fault detector operates according to a second operational mode. | 2021-10-07 |
20210311833 | TARGETED REPAIR OF HARDWARE COMPONENTS IN A COMPUTING DEVICE - A method for targeted repair of a hardware component in a computing device that is part of a cloud computing system includes monitoring a plurality of hardware components in the computing device. At some point, a defective sub-component within the hardware component of the computing device is identified. In addition to the defective sub-component, the hardware component also includes at least one sub-component that is functioning properly and a spare component that can be used in place of the defective sub-component. The method also includes initiating a targeted repair action while the computing device is connected to the cloud computing system. The targeted repair action prevents the defective sub-component from being used by the computing device without preventing sub-components that are functioning properly from being used by the computing device. The targeted repair action causes the spare component to be used in place of the defective sub-component. | 2021-10-07 |
20210311834 | Consistent Recovery Of A Dataset - Servicing I/O operations in a cloud-based storage system, including: receiving, by the cloud-based storage system, a request to write data to the cloud-based storage system; storing, in solid-state storage of the cloud-based storage system, the data; storing, in object storage of the cloud-based storage system, the data; detecting that at least some portion of the solid-state storage of the cloud-based storage system has become unavailable; identifying data that was stored in the portion of the solid-state storage of the cloud-based storage system that has become unavailable; retrieving, from object storage of the cloud-based storage system, the data that was stored in the portion of the solid-state storage of the cloud-based storage system that has become unavailable; and storing, in solid-state storage of the cloud-based storage system, the retrieved data. | 2021-10-07 |
20210311835 | FILE-LEVEL GRANULAR DATA REPLICATION BY RESTORING DATA OBJECTS SELECTED FROM FILE SYSTEM SNAPSHOTS AND/OR FROM PRIMARY FILE SYSTEM STORAGE - A replication feature for providing faster granular file-level replication between distinct data storage devices is managed and orchestrated by components of an illustrative data storage management system. Information and data objects extracted from snapshots or from primary storage at a source file system are replicated to a destination file system by way of a special-purpose restore operation. The file-level granular replication approach selectively transmits only net changed data from source to destination without passing through a backup copy phase. The illustrative replication operation causes source data to be snapshotted; identifies net changed data in the file system since a preceding replication, e.g., add, change, delete, move, etc.; selectively extracts new/changed data objects from the snapshot along with additional information on moves and deletions; and restores the extracted net changed data to the destination. The illustrative replication feature does not rely on making backup copies. | 2021-10-07 |
20210311836 | METHOD FOR READING AND WRITING AND MEMORY DEVICE - The embodiments provide a method for reading and writing and a memory device. The method includes: applying a read command to the memory device, the read command pointing to address information; reading data to be read out from a memory cell corresponding to the address information pointed to by the read command; and storing the address information pointed to by the read command into a preset memory space if an error occurs in the data to be read out, and backing up the address information stored in the preset memory space into a non-volatile memory cell according to a preset rule. | 2021-10-07 |
20210311837 | On-demand Virtualized Data Recovery Apparatuses, Methods and Systems - The On-demand Virtualized Data Recovery Apparatuses, Methods and Systems (“OVDR”) transforms data recovery request, mailbox backup data selection response inputs via OVDR components into mailbox backup data selection request, data recovery response outputs. A mailbox data recovery request datastructure associated with a user is obtained. Available mailbox backup data accessible to the user is determined. A selection of a subset of the available mailbox backup data to recover is obtained. A temporary mailbox environment associated with the mailbox data recovery request datastructure is spawned. A mailbox, corresponding to a mailbox account included in the selected subset of the available mailbox backup data, is created on the temporary mailbox environment. Mailbox data items, corresponding to mailbox data items associated with the mailbox account that are included in the selected subset of data, are restored to the created mailbox. An access notification indicating that the temporary mailbox environment is ready is generated. | 2021-10-07 |
20210311838 | Incident-Responsive, Computing System Snapshot Generation - A method of remote device diagnosis and mitigation includes receiving a signal indicative of an intermittent technical state of a first device. Immediately responsive thereto, the method includes interrogating the first device for parameters. The method includes interrogating the first device for the parameters at a third time outside receipt of the signal. The parameters include a transient parameter present at a first time of the intermittent technical state and not present at a second time following the first time. The method includes recording the parameters from the first time in a first data file and the parameters for the third time in an additional data file. The first data file is compared with the additional data file to identify a difference in a parameter indicative of a cause of the intermittent technical state. The method includes remotely implementing a change on the first device to mitigate the cause. | 2021-10-07 |
20210311839 | RESTORING ARCHIVED OBJECT-LEVEL DATABASE DATA - According to certain aspects, a system may include a data agent configured to: process a database file residing on a primary storage device(s) to identify a subset of data in the database file for archiving, the database file generated by a database application; and extract the subset of the data from the database file and store the subset of the data in an archive file on the primary storage device(s) as a plurality of blocks having a common size; and at least one secondary storage controller computer configured to, as part of a secondary copy operation in which the archive file is copied to a secondary storage device(s): copy the plurality of blocks to the secondary storage devices to create a secondary copy of the archive file; and create a table that provides a mapping between the copied plurality of blocks and corresponding locations in the secondary storage device(s). | 2021-10-07 |
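As an illustrative aside, not drawn from any filing text above: application 20210311829 describes associating each data bit with two or more parity bits, one set per dimension of a data matrix, so that the intersection of a failing row check and a failing column check locates an error. A minimal sketch of that classic two-dimensional parity idea follows; the function names, even-parity convention, and 4x4 matrix size are assumptions for illustration, not details taken from the application.

```python
# Sketch of two-dimensional parity error detection: every data bit is
# covered by one row-parity bit and one column-parity bit, so a
# single-bit error is located at the intersection of the failing
# row check and the failing column check. Illustrative only.

def compute_parities(matrix):
    """Return (row_parities, col_parities) using even parity."""
    rows = [sum(row) % 2 for row in matrix]
    cols = [sum(col) % 2 for col in zip(*matrix)]
    return rows, cols

def locate_error(matrix, row_par, col_par):
    """Return (r, c) of a single flipped bit, or None if all checks pass."""
    bad_rows = [i for i, row in enumerate(matrix)
                if sum(row) % 2 != row_par[i]]
    bad_cols = [j for j, col in enumerate(zip(*matrix))
                if sum(col) % 2 != col_par[j]]
    if not bad_rows and not bad_cols:
        return None                       # no detectable error
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        return bad_rows[0], bad_cols[0]   # single-bit error located
    raise ValueError("multi-bit error detected, not single-bit correctable")

data = [[1, 0, 1, 1],
        [0, 1, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 1, 1]]
rp, cp = compute_parities(data)

data[2][3] ^= 1                           # inject a single-bit fault
r, c = locate_error(data, rp, cp)         # → (2, 3)
data[r][c] ^= 1                           # flip it back to correct
```

Unlike a single parity bit per word, the per-dimension sets both detect and locate a single-bit error, at the cost of one parity bit per row and per column.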