45th week of 2021 patent application highlights part 43 |
Patent application number | Title | Published |
20210349745 | SYSTEMS AND METHODS FOR VIRTUAL DESKTOP USER PLACEMENT IN A MULTI-CLOUD ENVIRONMENT - Embodiments herein provide an analytics-based solution for recommending the initial placement of one or more virtual desktop (VD) users in a multi-cloud environment and/or the subsequent migration of one or more users in a multi-cloud environment. In one or more embodiments, a placement (initial or migratory) recommendation may be based on one or more metrics related to the cloud deployment and user conditions/requirements. In one or more embodiments, a placement recommendation is based on assessing functionality requirements of a user or users and a correlation analysis with other functionality or functionalities as they relate to functionality that is available at specific cloud deployments. In one or more embodiments, the recommendation may alternatively or additionally be based upon latency analytics, in which end-to-end latency from the user to an application or applications may be considered as metric(s) in the recommendation determination. | 2021-11-11 |
20210349746 | EFFICIENT HANDLING OF NETWORK TOPOLOGY CHANGE NOTIFICATION FOR VIRTUAL MACHINES - Systems and methods for memory management for virtual machines. An example method may include receiving, by a hypervisor running on a host computer system, a request that no topology change notifications be delivered to a virtual machine managed by the hypervisor. The method may then include installing a packet filter on a virtual network interface controller (vNIC) associated with the virtual machine. Responsive to receiving, by the packet filter, a topology change notification packet, the method may include dropping the topology change notification packet. | 2021-11-11 |
20210349747 | CONTAINER MANAGEMENT BASED ON APPLICATION PERFORMANCE INDICATORS - Techniques for managing containers based on application key performance indicators (KPIs), associated with instances of network applications executing within containers in a telecommunication network, are discussed herein. For example, a container manager can determine that an instance of the network application is underperforming a target KPI value, is otherwise experiencing problems, and/or may be likely to experience future problems. The container manager can accordingly take one or more corrective and/or preventative actions, such as to terminate and replace the container associated with the underperforming instance of the network application, or to scale out a set of containers by adding additional containers and corresponding additional instances of the network application to reduce the load on individual instances of the network application. | 2021-11-11 |
20210349748 | VIRTUAL MACHINE RESTORATION FOR ANOMALY CONDITION EVALUATION - Architectures and mechanisms for anomaly recovery are disclosed. A virtual machine point-in-time copy having an isolated network connection is generated in response to an anomaly condition in an original virtual machine. The virtual machine copy is a point-in-time copy of the original virtual machine. A first point-in-time backup copy is restored to the virtual machine copy utilizing the isolated network connection to generate a first restored virtual machine. The first restored virtual machine is evaluated for the anomaly condition. The original virtual machine is replaced with the first restored virtual machine if the anomaly condition does not exist in the first restored virtual machine. A second point-in-time backup copy is restored to the virtual machine copy utilizing the isolated network connection to generate a second restored virtual machine if the anomaly condition exists in the first restored virtual machine. | 2021-11-11 |
20210349749 | SYSTEMS AND METHODS FOR DYNAMIC PROVISIONING OF RESOURCES FOR VIRTUALIZED - In one aspect, a method for dynamically provisioning storage for virtual machines by meeting the service level objectives (SLOs) set in a service level agreement (SLA) is provided. The SLA pertains to the operation of a first virtual machine. The method includes the step of monitoring the workload of the first virtual machine. The method includes the step of establishing at least one SLO, typically on performance, in response to the workload. The SLO comprises a set of specific performance targets for a service level of the workload of the first virtual machine that are designed to be met by the provisioned resource so as to comply with the SLA by meeting the SLO. The provisioned resource is associated with the first virtual machine. The method determines an SLA that specifies the first SLO. The SLA comprises a contract that includes consequences of meeting or missing the SLO. | 2021-11-11 |
20210349750 | ENFORCING COMPLIANCE RULES USING GUEST MANAGEMENT COMPONENTS - A system can include a host device that includes a host management component and a virtual machine execution environment with a guest management component. The guest management component receives a data object generated by the host management component. The data object specifies host parameters detected for the host device and hypervisor parameters detected for the hypervisor component. The hypervisor component relays the data object from the host management component to the guest management component, which identifies a violation of a compliance rule using this information. The guest management component performs an action based on the violation. | 2021-11-11 |
20210349751 | SHARING TEMPLATES AND MULTI-INSTANCE CLOUD DEPLOYABLE APPLICATIONS - A method and system for determining whether a deployment has been prepared for launch on a cloud. The method includes receiving, by a server computer, a set of associated image templates into a template repository. The method further includes receiving, in the template repository by a processing device of the server computer, a compatible deployable template that is compatible with, and distinct from, the set of associated image templates, wherein the compatible deployable template comprises information for launching a cloud server by starting a plurality of virtual machines from a plurality of virtual machine images together to create the cloud server. The method further includes providing the compatible deployable template. | 2021-11-11 |
20210349752 | METHOD AND APPARATUS FOR SWITCHING TASKS - A method and an electronic device are provided in which a stack of partially overlaid visual elements is displayed in response to a first user input. Each visual element corresponds to an application that is running in the electronic device and includes an index item representing the corresponding application. A second user input for selecting a visual element from the stack of partially overlaid visual elements is received through the touchscreen. An execution screen of an application corresponding to the selected visual element is displayed. | 2021-11-11 |
20210349753 | TASK SHIFTING BETWEEN COMPUTING DEVICES - In some embodiments, a method includes: displaying, on a first client device, a plurality of tasks; identifying, by the first client device, a task from the plurality of tasks, the task transferrable to a second client device in communication with the first client device; and sending, by the first client device, metadata for the task to the second client device in response to input received by the first client device, the metadata allowing the second client device to display the task in the same manner as the task was displayed by the first client device. | 2021-11-11 |
20210349754 | Device and Method of Performing Content Channel Generation - A content channel generation device comprises a resource unit assignment circuit, for assigning scheduled station(s) as node(s) of a full binary tree according to a search algorithm; a node computing circuit, for determining first node connection information of the full binary tree, and to determine second node connection information of a smallest full binary tree according to a smallest binary tree algorithm and the first node connection information; a load balance circuit, for determining user field numbers corresponding to content channels according to a load balance function and the second node connection information; a user field generation circuit, for generating a traversal result of the smallest full binary tree according to a traversal algorithm and the second node connection information, and for generating user fields corresponding to the content channels according to the traversal result, to generate the content channels. | 2021-11-11 |
20210349755 | UTILIZATION-AWARE RESOURCE SCHEDULING IN A DISTRIBUTED COMPUTING CLUSTER - Embodiments are disclosed for a utilization-aware approach to cluster scheduling that addresses resource fragmentation and improves cluster utilization and job throughput. In some embodiments a resource manager at a master node considers actual usage of running tasks and schedules opportunistic work on underutilized worker nodes. The resource manager monitors resource usage on these nodes and preempts opportunistic containers in the event this over-subscription becomes untenable. In doing so, the resource manager effectively utilizes wasted resources, while minimizing adverse effects on regularly scheduled tasks. | 2021-11-11 |
20210349756 | WEIGHTED RESOURCE COST MATRIX SCHEDULER - A scheduler for a storage node uses multi-dimensional weighted resource cost matrices to schedule processing of IOs. A separate matrix is created for each computing node of the storage node via machine learning or regression analysis. Each matrix includes distinct dimensions for each emulation of the computing node for which the matrix is created. Each dimension includes modeled costs in terms of amounts of resources of various types required to process an IO of various IO types. An IO received from a host by a computing node is not scheduled for processing by that computing node unless enough resources are available at each emulation of that computing node. If enough resources are unavailable at an emulation, then the IO is forwarded to a different computing node that has enough resources at each of its emulations. A weighted resource cost for processing the IO is calculated and used to determine scheduling priority. The weights or regression coefficients from the model may be used to calculate weighted resource cost. | 2021-11-11 |
20210349757 | INDICATING RELATIVE URGENCY OF ACTIVITY FEED NOTIFICATIONS - An example computing system is disclosed that may send a first notification to a first client device, the first notification indicating a first task to be performed by a first user with respect to a resource accessible to the computing system. The computing system may determine a second task of a second user with respect to the resource, and may further determine that the second user has completed the second task. Based at least in part on the second user having completed the second task, the computing system may determine a parameter indicating an urgency level of the first task, and may cause an indication of the urgency level to be presented on the first client device. | 2021-11-11 |
20210349758 | TRADE PLATFORM WITH REINFORCEMENT LEARNING NETWORK AND MATCHING ENGINE - A system for reinforcement learning in a dynamic resource environment includes at least one memory and at least one processor configured to provide an electronic resource environment comprising: a matching engine and a resource-generating agent configured for: obtaining from a historical data processing task database a plurality of historical data processing tasks, each historical data processing task including respective task resource requirement data; for a historical data processing task of the plurality of historical data processing tasks, generating layers of data processing tasks wherein a first layer data processing task has an incremental variant in its resource requirement data relative to resource requirement data for a second layer data processing task; and providing the layers of data processing tasks for matching by the matching engine. | 2021-11-11 |
20210349759 | DYNAMIC THROTTLING BASED ON HEALTH METRICS - Techniques are disclosed for dynamically adjusting a throttling threshold in a multi-tenant virtualized computing environment. System health parameters are collected during a predetermined time interval. A system health status of the multi-tenant virtualized computing environment is determined. Based on the system health status, a throttling threshold for service requests for the multi-tenant virtualized computing environment is determined. The throttling threshold is applied for further service requests. During a subsequent time interval, an updated system health status of the multi-tenant virtualized computing environment is determined based on system health parameters received during the subsequent time interval. The throttling threshold is updated based on the updated system health status. The updated throttling threshold is applied for further service requests. | 2021-11-11 |
20210349760 | METHODS AND APPARATUS TO MANAGE COMPUTE RESOURCES IN A HYPERCONVERGED INFRASTRUCTURE COMPUTING ENVIRONMENT - Methods, apparatus, systems and articles of manufacture are disclosed for managing compute resources in a computing environment. Disclosed examples are to select an offering workload in a computing environment to lend at least one resource to a needy workload in the computing environment. Disclosed examples are also to cause a host associated with the offering workload to at least one of (i) instantiate a first virtual machine when the host is implemented with a second virtual machine or (ii) instantiate a first container when the host is implemented with a second container. Disclosed examples are further to assign the first virtual machine or the first container to the needy workload. | 2021-11-11 |
20210349761 | TECHNIQUES FOR SCALING DICTIONARY-BASED COMPRESSION - Accesses between a processor and its external memory are reduced when the processor internally maintains a compressed version of values stored in the external memory. The processor can then refer to the compressed version rather than access the external memory. One compression technique involves maintaining a dictionary on the processor mapping portions of a memory to values. When all of the values of a portion of memory are uniform (e.g., the same), the value is stored in the dictionary for that portion of memory. Thereafter, when the processor needs to access that portion of memory, the value is retrieved from the dictionary rather than from external memory. Techniques are disclosed herein to extend, for example, the capabilities of such dictionary-based compression so that the number of accesses between the processor and its external memory is further reduced. | 2021-11-11 |
20210349762 | System and Method for Sharing Central Processing Unit (CPU) Resources with Unbalanced Applications - A method, computer program product, and computing system for monitoring utilization of each central processing unit (CPU) core of a plurality of CPU cores. An average input/output (IO) latency for an operating system thread executing on a CPU core of the plurality of CPU cores may be determined. The IO polling cadence for at least one operating system thread executing on at least one CPU core may be adjusted based upon, at least in part, the utilization of each CPU core of the plurality of CPU cores and the average IO latency for the operating system thread executing on each CPU core of the plurality of CPU cores. | 2021-11-11 |
20210349763 | TECHNIQUE FOR COMPUTATIONAL NESTED PARALLELISM - One embodiment of the present invention sets forth a technique for performing nested kernel execution within a parallel processing subsystem. The technique involves enabling a parent thread to launch a nested child grid on the parallel processing subsystem, and enabling the parent thread to perform a thread synchronization barrier on the child grid for proper execution semantics between the parent thread and the child grid. This technique advantageously enables the parallel processing subsystem to perform a richer set of programming constructs, such as conditionally executed and nested operations and externally defined library functions without the additional complexity of CPU involvement. | 2021-11-11 |
20210349764 | SYSTEMS AND METHODS FOR OPTIMIZED EXECUTION OF PROGRAM OPERATIONS ON CLOUD-BASED SERVICES - Disclosed herein are systems and methods for efficiently executing a program operation on a cloud-based service. In an exemplary aspect, a method comprises receiving a request to perform a program operation on a cloud-based service and at least one user constraint for performing the program operation, and determining a plurality of sub-operations that are comprised in the program operation. The method comprises identifying a plurality of service component combinations offered by the service provider that can execute the program operation, and identifying, based on a status of each service component, at least one processing constraint of each service component. The method comprises determining, by a machine learning algorithm, a service component combination from the plurality of service component combinations for executing the program operation based on the at least one user constraint and processing constraints. The method comprises executing the program operation by the determined service component combination. | 2021-11-11 |
20210349765 | ENDPOINT GROUP CONTAINING HETEROGENEOUS WORKLOADS - Some embodiments of the invention provide a method for deploying network elements for a set of machines in a set of one or more datacenters. The datacenter set is part of one availability zone in some embodiments. The method receives intent-based API (Application Programming Interface) requests, and parses these API requests to identify a set of network elements to connect and/or perform services for the set of machines. In some embodiments, the API is a hierarchical document that can specify multiple different compute and/or network elements at different levels of compute and/or network element hierarchy. The method performs automated processes to define a virtual private cloud (VPC) to connect the set of machines to a logical network that segregates the set of machines from other machines in the datacenter set. In some embodiments, the set of machines include virtual machines and containers, the VPC is defined with a supervisor cluster namespace, and the API requests are provided as YAML files. | 2021-11-11 |
20210349766 | CLUSTER TUNER - A production cluster executes a workload such that jobs associated with the executed workload are allocated according to a first configuration. A cluster monitor extracts production cluster information from the production cluster, monitors configuration information during execution of the workload, and transmits each to a cluster tuner. The cluster tuner receives the information and determines a first recommended configuration for the production cluster. The cluster tuner causes a test cluster to execute a simulated workload according to the first recommended configuration. In response to determining that the first recommended configuration results in a decrease in resource consumption, the cluster tuner causes the production cluster to operate according to the first recommended configuration. | 2021-11-11 |
20210349767 | MIGRATING VIRTUAL MACHINES BETWEEN COMPUTING ENVIRONMENTS - Virtual machines can be migrated between computing environments. For example, a system can receive a request to perform a migration process involving migrating a virtual machine from a source computing environment to a target computing environment. The target computing environment may be a cloud computing environment. In response to the request, the system can receive first configuration data for a first version of the virtual machine that is located in the source computing environment. The first configuration data can describe virtualized features of the first version of the virtual machine. The system can use the first configuration data to generate second configuration data for a second version of the virtual machine that is to be deployed in the target computing environment. The system can then deploy the second version of the virtual machine within one or more containers of the target computing environment in accordance with the second configuration data. | 2021-11-11 |
20210349768 | UID AND GID SHIFTING FOR CONTAINERS IN USER NAMESPACES - A request to access an image stored by a host operating system (OS) may be received from a process running in a container. The container may run in a namespace including a plurality of namespace user identifiers (UIDs). A host UID corresponding to the namespace UID of the process may be synchronized with a host UID of an owner of the image based on configuration data of the namespace. | 2021-11-11 |
20210349769 | PRIORITY BASED ARBITRATION - Methods of arbitrating between requestors and a shared resource are described. The method comprises generating a vector with one bit per requestor, each initially set to one. Based on a plurality of select signals (one per decision node in a first layer of a binary decision tree, where each select signal is configured to be used by the corresponding decision node to select one of two child nodes), bits in the vector corresponding to non-selected requestors are set to zero. The method is repeated for each subsequent layer in the binary decision tree, based on the select signals for the decision nodes in those layers. The resulting vector is a one-hot vector (in which only a single bit has a value of one). Access to the shared resource is granted, for a current processing cycle, to the requestor corresponding to the bit having a value of one. | 2021-11-11 |
20210349770 | BLOCKCHAIN-BASED IMPORT CUSTOM CLEARANCE DATA PROCESSING - Disclosed herein are methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing data. One of the systems includes: a service platform comprising a plurality of service modules and an application programming interface layer comprising a plurality of application programming interfaces (APIs) to enable users to invoke the service modules to process information related to an order associated with importation of merchandise, wherein the information is provided, or to be provided, to a service authority for requesting clearance for the order, wherein a first application programming interface of the plurality of application programming interfaces enables a user of the service platform to invoke a smart contract on a blockchain managed by a blockchain network, in which the smart contract performs at least one of processing of the information or processing of information related to another user of the service platform. | 2021-11-11 |
20210349771 | SYSTEMS AND METHODS FOR GENERATING AN API CACHING LIBRARY USING A SHARED RESOURCE FILE - The present disclosure is directed to systems and methods for generating an API caching library using a shared resource file. For example, a method may include: receiving, at a first platform, a shared resource file comprising metadata for declaratively deriving an application programming interface (API) caching library for a native application operating on the first platform and a corresponding application related to the native application for a second platform; parsing the shared resource file to extract the metadata at run-time of the native application; declaratively deriving the API caching library based on the extracted metadata, the declaratively deriving the API caching library comprising creating a plurality of objects that represent respective API endpoints of the API caching library; and executing a function of the native application based on at least one of the API endpoints. | 2021-11-11 |
20210349772 | PREDICTION OF PERFORMANCE DEGRADATION WITH NON-LINEAR CHARACTERISTICS - Described are techniques for predicting gradual performance degradation with non-linear characteristics. The techniques including a method comprising inputting a new data sample to a failure prediction model, wherein the failure prediction model is trained using a labeled historical dataset, wherein respective data points are associated with a look-back window and a prediction horizon to create respective training samples, wherein the respective training samples are clustered in a plurality of clusters, and wherein the plurality of clusters are each associated with a normalcy score and an anomaly score. The method further comprises outputting a classification associated with the new data sample based on comparing a first anomaly score of a first cluster of the plurality of clusters that includes the new data sample to an average anomaly score of clusters of the plurality of clusters having the normalcy score greater than the anomaly score. | 2021-11-11 |
20210349773 | PERFORMANCE EVENT TROUBLESHOOTING SYSTEM - Aspects of the present invention disclose a method and system for troubleshooting. The method includes identifying data sources providing sensor data, including a first group of measurands. The method further includes processors determining that values of a second group of the measurands of a subset of the sensor data (provided by a given data source, comprising a component set) indicates an anomaly. The method further includes determining a third group of the measurands that are root cause candidates of the anomaly. The measurands of the third group are provided by the component set. The method further includes assigning a set of coefficients to respective measurands. Each coefficient is indicative of a comparison result of each measurand with a measurand of the third group. The method further includes determining, using the sets of coefficients, whether a specific subset of the component set can be identified as an anomaly root cause. | 2021-11-11 |
20210349774 | SYSTEM AND METHOD FOR INFERRING DEVICE MODEL BASED ON MEDIA ACCESS CONTROL ADDRESS - A system and method for inferring device models. The method includes determining block statistics for each block of a plurality of blocks of a plurality of media access control (MAC) addresses, the plurality of blocks having a plurality of respective prefixes, wherein the plurality of blocks are grouped based on commonalities among the plurality of respective prefixes; generating an aggregated statistical model for the plurality of blocks based on the plurality of MAC addresses and the block statistics, wherein each block is a string of digits included in one of the plurality of MAC addresses; and applying the aggregated statistical model to the block statistics of at least one block of the plurality of blocks in order to determine at least one inferred device model, wherein each of the at least one block is grouped into the same group. | 2021-11-11 |
20210349775 | METHOD OF DATA MANAGEMENT AND METHOD OF DATA ANALYSIS - A method of data management is to be implemented by a baseboard management controller (BMC) of a server. The method includes: collecting normal (abnormal) operation information that is related to current statuses of hardware and firmware components; selecting a portion of the normal (abnormal) operation information; classifying each piece of data included in the portion of the normal (abnormal) operation information as a hardware class or a firmware class; and storing the portion of the normal (abnormal) operation information in the storage. | 2021-11-11 |
20210349776 | DATACENTER IoT-TRIGGERED PREEMPTIVE MEASURES USING MACHINE LEARNING - One example method includes performing a machine learning process that involves performing an assessment of a state of a computing system, and the assessment includes analyzing information generated by an IoT edge sensor in response to a sensed physical condition in the computing system, and identifying an entity in the computing system potentially impacted by an event associated with the physical condition. The example method further includes identifying a preemptive recovery action and associating the preemptive recovery action with an entity, and the preemptive recovery action, when performed, reduces or eliminates an impact of the event on the entity, determining a cost associated with implementation of the preemptive recovery action, evaluating the cost associated with the preemptive recovery actions and identifying the preemptive recovery action with the lowest associated cost, implementing the preemptive recovery action with the lowest associated cost, and repeating part of the machine learning process. | 2021-11-11 |
20210349777 | FAULT PROCESSING METHOD, RELATED DEVICE, AND COMPUTER STORAGE MEDIUM - A fault processing method includes: a fault processing apparatus receives first hardware fault information generated by a first PCIe device on a first PCIe link. The first hardware fault information includes a device identifier of the first PCIe device and is used to indicate that a hardware fault occurs on the first PCIe device. Further, the fault processing apparatus performs, based on the first hardware fault information, fault recovery on the first PCIe link on which the first PCIe device is located, and interrupts a software service related to the first PCIe device. | 2021-11-11 |
20210349778 | ADAPTIVE FOLDING FOR INTEGRATED MEMORY ASSEMBLY - A non-volatile storage system includes a memory controller connected to an integrated memory assembly. The integrated memory assembly includes a memory die comprising non-volatile memory cells and a control die bonded to the memory die. The memory controller provides data to the control die for storage on the memory die. Data is initially stored on the memory die as single bit per memory cell data to increase the performance of the programming process. Subsequently, the control die performs an adaptive folding process which comprises reading the single bit per memory cell data from the memory die, adaptively performing one of multiple decoding options, and programming the data back to the memory die as multiple bit per memory cell data. | 2021-11-11 |
20210349779 | MEMORY, ERROR RESTORATION METHOD OF THE MEMORY, AND BATTERY DEVICE COMPRISING THE MEMORY - Discussed is a memory having an application area that stores at least one application; a flash bootloader (FBL) area that includes codes for updating the application area; and a BUM module that is activated after a defect is detected in the FBL area, deletes the FBL area, writes binary code information of an FBL image into the FBL area, determines whether the binary code written into the FBL area matches the binary code information of the FBL image, and is deactivated when the two match. The FBL image and the BUM module may be provided in the application area. | 2021-11-11 |
20210349780 | SYSTEMS, METHODS, AND DEVICES FOR DATA RECOVERY WITH SPARE STORAGE DEVICE AND FAULT RESILIENT STORAGE DEVICE - A method may include operating a first storage device and a second storage device as a redundant array, operating the first storage device in a fault resilient mode with at least partial read capability based on a fault condition of the first storage device, and rebuilding information from the first storage device on a spare storage device based on the fault condition of the first storage device. Rebuilding information from the first storage device on the spare storage device may include copying information from the first storage device to the spare storage device. The information from the first storage device may include data and/or parity information. The method may further include reading first information for a read or write operation from the first storage device based on a rebuild point of the spare storage device. | 2021-11-11 |
20210349781 | SYSTEMS, METHODS, AND DEVICES FOR DATA RECOVERY USING PARITY SPACE AS RECOVERY SPACE - A method may include operating a first storage device and a second storage device as a redundant array configured to use parity information to recover information from a faulty storage device, operating the first storage device in a fault resilient mode with at least partial read capability based on a fault condition of the first storage device, and rebuilding information from the first storage device in a parity space of the second storage device. Rebuilding the information from the first storage device in the parity space of the second storage device may include copying the information from the first storage device to the parity space of the second storage device. The method may further include copying the rebuilt information from the parity space of the second storage device to a replacement storage device. | 2021-11-11 |
20210349782 | SYSTEMS, METHODS, AND DEVICES FOR FAULT RESILIENT STORAGE - A method of operating a storage device may include determining a fault condition of the storage device, selecting a fault resilient mode based on the fault condition of the storage device, and operating the storage device in the selected fault resilient mode. The selected fault resilient mode may include one of a power cycle mode, a reformat mode, a reduced capacity read-only mode, a reduced capacity mode, a reduced performance mode, a read-only mode, a partial read-only mode, a temporary read-only mode, a temporary partial read-only mode, or a vulnerable mode. The storage device may be configured to perform a namespace capacity management command received from the host. The namespace capacity management command may include a resize subcommand and/or a zero-size namespace subcommand. The storage device may report the selected fault resilient mode to a host. | 2021-11-11 |
20210349783 | STORAGE BACKED MEMORY PACKAGE SAVE TRIGGER - Devices and techniques for a storage backed memory package save trigger are disclosed herein. Data can be received via a first interface. The data is stored in a volatile portion of the memory package. Here, the memory package includes a second interface arranged to connect a host to a controller in the memory package. A reset signal can be received at the memory package via the first interface. The data stored in the volatile portion of the memory package can be saved to a non-volatile portion of the memory package in response to the reset signal. | 2021-11-11 |
20210349784 | Data Replication Method, Apparatus, and System - A data replication method includes obtaining differential data information corresponding to differential data, where the differential data information includes a storage address of the differential data, and a determining value of the differential data, replicating the differential data from the primary volume to the secondary volume according to the storage address of the differential data that is located in the primary volume when the determining value is not less than a preset threshold, and taking a snapshot for the primary volume when the determining value is less than the preset threshold and replicating the differential data to the secondary volume. | 2021-11-11 |
20210349785 | AUTOMATIC BACKUP STRATEGY SELECTION - A system and method to receive, from a database service executing on a cloud infrastructure, information indicating metrics regarding backups for the database service, the information including at least an indication of an age of a last complete backup for the database service, an indication of a size of changed data since the last complete backup, and an indication of a number of data units changed since the last complete backup; determine a type of backup strategy to instruct the database service to perform based on the received information, the type of backup strategy being one of a complete backup of the database service, a delta backup of the database service, and no backup of the database service; and issue, in response to the determination, an instruction to the database service to execute the determined type of backup. | 2021-11-11 |
20210349786 | CURRENT MONITORING IN HYPERSCALER ENVIRONMENT - A system and method providing monitoring of services hosted by a hyperscaler environment. The method including receiving an indication of at least one metric related to a backup storage process for each of a plurality of hyperscaler hosted database service instances; determining at least one value for each of the plurality of database service instances; storing a record of the determined at least one value for each of the plurality of database service instances in a persistent data storage device that is distinct and separate from the database service instances; receiving a request from a third-party entity for the stored at least one value for at least one of the plurality of database service instances; and transmitting the requested one or more of the at least one value for the database service instances specified in the request. | 2021-11-11 |
20210349787 | CUSTOMER SPECIFIC BACKUP OBJECT SIZES - A system and method providing for the reception of metrics related to data storage processes of a plurality of different database service instances deployed on a cloud services infrastructure providing data storage space for the plurality of database service instances; determining an amount of the data storage space consumed by a first database service instance; and storing a record of the determined amount of the data storage space consumed by the first database service instance in a persistent data storage device that is distinct and separate from the cloud services infrastructure. The system and method further reporting, on demand, the amount of the data storage space determined to be consumed by the first database service instance at a particular time. | 2021-11-11 |
20210349788 | HYPERSCALER INDEPENDENT VERSIONING OF CLOUD STORAGE OBJECTS - A system and method to receive, by a backup service layer of a database service instance, a request to create a data backup; create, in response to the request and internally of the backup service, a backup having a filename including a version identifier; and transmit the created backup to a hyperscaler to be stored in a cloud object storage of the hyperscaler, the filename of the backup being a key for the storage of the backup in the cloud object storage. | 2021-11-11 |
20210349789 | CATALOG RESTORATION - A method can include obtaining catalog data of a catalog. The catalog can include one or more records. The method can further include detecting one or more damaged records among the one or more records and isolating the one or more damaged records. The method can further include identifying one or more undamaged records among the one or more records. The method can further include transferring the one or more undamaged records to a backup catalog. The method can further include obtaining a transfer status of a first undamaged record of the one or more undamaged records. The method can further include obtaining an access request corresponding to the first undamaged record. The method can further include determining, based on the transfer status, a response to the access request and generating, based at least in part on the backup catalog, a restored catalog. | 2021-11-11 |
20210349790 | SYSTEM AND METHOD OF RESYNCING DATA IN ERASURE-CODED OBJECTS ON DISTRIBUTED STORAGE SYSTEMS WITHOUT REQUIRING CHECKSUM IN THE UNDERLYING STORAGE - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for resynchronizing data in a storage system. One of the methods includes determining that a particular primary disk of a capacity object of a storage system has failed, wherein the capacity object comprises a plurality of segments, and wherein each segment comprises: a plurality of primary columns each corresponding to a respective primary disk of the capacity object, and a plurality of parity columns each corresponding to a respective parity disk of the capacity object; and resynchronizing, for each segment of one or more segments of the capacity object, the primary column of the segment corresponding to the particular primary disk using i) the primary columns of the segment corresponding to each other primary disk of the capacity object, ii) one or more parity columns of the segment, and iii) the column summaries of the segment. | 2021-11-11 |
20210349791 | SYSTEM AND METHOD OF RESYNCING N-WAY MIRRORED METADATA ON DISTRIBUTED STORAGE SYSTEMS WITHOUT REQUIRING CHECKSUM IN THE UNDERLYING STORAGE - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for resynchronizing data in a storage system. One of the methods includes receiving, by a first storage subsystem, a plurality of write requests corresponding to respective meta data blocks, wherein the first storage subsystem comprises a meta object; storing, by the first storage subsystem and for each write request, in each disk of the meta object, a version of the corresponding meta data block; determining that a particular disk of the meta object has failed; determining whether one or more valid versions of the meta data block are stored in respective other disks of the meta object; and in response to determining that one or more valid versions of the meta data block are stored in respective other disks of the meta object, resynchronizing the meta data block in the particular disk. | 2021-11-11 |
20210349792 | FILTER RESET FOR CLOUD-BASED ANALYTICS ENGINE - A method for accessing data stored in a database may include generating a snapshot of a definition of a data story that includes a software widget configured to receive inputs for creating, based on a portion of data stored in the database, a data presentation providing a visual representation of the data. In response to a first indication to apply a filter removing some of the data associated with the data presentation, modifying a runtime definition of the data story to reflect the application of the filter. In response to a second indication to reset the filter, restoring the data story to a state prior to the application of the filter by replacing the runtime definition of the data story with the snapshot of the definition of the data story. Related systems and articles of manufacture are also provided. | 2021-11-11 |
20210349793 | SYSTEM AND METHODS OF EFFICIENTLY RESYNCING FAILED COMPONENTS WITHOUT BITMAP IN AN ERASURE-CODED DISTRIBUTED OBJECT WITH LOG-STRUCTURED DISK LAYOUT - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for resynchronizing data in a storage system. One of the methods includes determining that a particular disk of a capacity object of a storage system was offline for an interval of time, wherein the capacity object comprises a plurality of segments, and wherein the storage system comprises a segment usage table identifying a linked list of particular segments of the capacity object that are in use; determining a time point at which the particular disk went offline; determining one or more first segments of the capacity object that were modified after the time point, wherein determining one or more first segments comprises determining each segment of the segment usage table having a transaction ID that is larger than the time point; and resynchronizing, for each first segment, a portion of the particular disk corresponding to the first segment. | 2021-11-11 |
20210349794 | FENCING NON-RESPONDING PORTS IN A NETWORK FABRIC - A computer-implemented method according to one aspect includes determining whether an operating system of a node of a distributed computing environment is functioning correctly by sending a first management query to the node; in response to determining that the operating system of the node is not functioning correctly, determining whether the node has an active communication link by sending a second management query to ports associated with the node; and in response to determining that the node has an active communication link, resetting the active communication link for the node by sending a reset request to the ports associated with the node. | 2021-11-11 |
20210349795 | AUTOMATING THE FAILOVER OF A RELATIONAL DATABASE IN A CLOUD COMPUTING ENVIRONMENT - Described herein is a method, system, and non-transitory computer readable medium for helping customers access data through an application from a replica database, detecting whether the replica database, zone of availability of the replica database, or geographical region encompassing the zone of availability is experiencing an outage or other failure, and re-routing traffic to a backup replica database accordingly. To assess the status of the database, metrics are pushed in a secure manner from a private subnet to a public-facing monitoring agent, achieving a clear segregation of private subnet and public-facing components. Further, circuit-breaker logic is included to prevent failures while DNS addresses are updated during the re-routing process. | 2021-11-11 |
20210349796 | SYSTEM AND METHOD FOR PROVIDING FAULT TOLERANCE AND RESILIENCY IN A CLOUD NETWORK - In accordance with an embodiment, described herein is a system and method for providing fault tolerance and resiliency within a cloud network. A cloud computing environment provides access, via the cloud network, to software applications executing within the cloud environment. The cloud network can include a plurality of network devices, of which various network devices can be configured as virtual chassis devices, cluster members, or standalone devices. A fault tolerance and resiliency framework can monitor the network devices, to receive status information associated with the devices. In the event the system determines a failure or error associated with a network device, it can attempt to perform recovery operations to restore the cloud network to its original capacity or state. If the system determines that a particular network device cannot recover from the failure or error, it can alert an administrator for further action. | 2021-11-11 |
20210349797 | AUTOMATIC PART TESTING - Automatic part testing includes: booting a part under testing into a first operating environment; executing, via the first operating environment, one or more test patterns on the part; performing a comparison between one or more observed characteristics associated with the one or more test patterns and one or more expected characteristics; and modifying one or more operational parameters of a central processing unit of the part based on the comparison. | 2021-11-11 |
20210349798 | DIGITAL PROCESSING SYSTEMS AND METHODS FOR SELF-MONITORING SOFTWARE RECOMMENDING MORE EFFICIENT TOOL USAGE IN COLLABORATIVE WORK SYSTEMS - Systems, methods, and computer-readable media for self-monitoring software usage to optimize performance are disclosed. The systems and methods may involve at least one processor configured to: maintain a table; present to an entity a plurality of tools for manipulating data in the table; monitor tool usage by the entity to determine at least one tool historically used by the entity; compare the at least one tool historically used by the entity with information relating to the plurality of tools to thereby identify at least one alternative tool in the plurality of tools whose substituted usage is configured to provide improved performance over the at least one historically used tool; and present to the entity during a table use session a recommendation to use the at least one alternative tool. | 2021-11-11 |
20210349799 | CONTEXT AWARE DYNAMIC RELATIVE POSITIONING OF FOG NODES IN A FOG COMPUTING ECOSYSTEM - Methods and systems for context aware dynamic relative positioning of fog nodes in a fog computing ecosystem are disclosed. A method includes: receiving, by a computing device, data from a plurality of Internet-of-Things (IoT) sensors in an environment; creating, by the computing device, a model using the data from the plurality of IoT sensors; determining, by the computing device, a number of computing nodes based on the model and additional data received from the plurality of IoT sensors; and deploying, by the computing device, at least one mobile computing node in the environment based on the determined number of computing nodes and a number of existing computing nodes in the environment. | 2021-11-11 |
20210349800 | SYSTEMS AND METHODS FOR INTELLIGENT FAN IDENTIFICATION - Systems and methods for intelligent fan identification are described. In some embodiments, an Information Handling System (IHS) may include: an embedded controller (EC); and a memory coupled to the EC, the memory having program instructions stored thereon that, upon execution by the EC, cause the IHS to: detect a cooling fan configuration issue; determine that a number of cooling fans in the IHS has not changed between a previous configuration and a current configuration; and in response to the determination, abstain from identifying the cooling fan configuration issue as a cooling fan error. | 2021-11-11 |
20210349801 | SOFTWARE CONTAINER IMAGE QUALITY RANKING BASED ON PERFORMANCE OF INSTANCES OF THE SOFTWARE CONTAINER IMAGES - An apparatus comprises a processing device comprising a processor coupled to a memory. The processing device is configured to obtain metrics characterizing performance, over two or more periods of time, of software container instances of each of a plurality of software container images. The processing device is also configured to determine, for each of the two or more periods of time, a periodic quality ranking of the plurality of software container images based at least in part on the obtained metrics. The processing device is further configured to generate an overall quality ranking of the plurality of software container images utilizing a consensus ranking aggregation algorithm configured to aggregate the periodic quality rankings of the plurality of software container images across the two or more periods of time, and to publish the overall quality ranking of the plurality of software container images to a software container registry. | 2021-11-11 |
20210349802 | MULTI-LAYERED COMPUTING SYSTEM ATTRIBUTE DEPENDENCY - A method, computer program product, and a system where a processor(s) obtains, from a data source, a list of objects at different layers of a computing system. The processor(s) generates exploration lists from the list (each exploration list with objects for a layer). The processor(s) identifies updated and new data at the layers associated with the objects on the list; the identified data comprises attributes for each layer. The processor(s) applies machine learning algorithm(s) to enrich the data by identifying dependencies between the attributes for each layer as influencers for one or more key performance indicators of the computing system. The processor(s) generates, from the enriched data, a hierarchy matrix. The processor(s) determines, based on the hierarchy matrix, that an event associated with one or more computing resources of the computing system will influence a particular key performance indicator. | 2021-11-11 |
20210349803 | AUDIT LOGGING DATABASE SYSTEM AND USER INTERFACE - Systems and methods are provided for improved auditing of user actions associated with a software application. The system includes functionality to log user actions in a structured, standardized way. The system includes interactive user interfaces for analyzing the logs. The logging is based on a well-defined categorization of available actions. The log information includes (and distinguishes among) user details, context details, user inputs, and/or system outputs (including identification of data objects). The interactive user interfaces enable a user to view structured log data in an efficient manner, such as by presenting logs in a tabular format, executing queries on the log data, and/or presenting visualizations that summarize the log data. The interactive user interfaces provide functionality that allows a user to investigate and/or audit user interactions with a data object. A reviewer is permitted to drag and drop one or more data objects of interest from the software application directly into the interactive user interfaces. The interactive interfaces present log entries associated with the object(s) for further review by the reviewer. | 2021-11-11 |
20210349804 | APPLICATION CURATION - Methods, systems and computer program products for user-specific curation of applications from heterogeneous application sources. Multiple components are interconnected to perform user-specific curation operations. The user-specific curation operations comprise accessing application metadata corresponding to a plurality of applications from a plurality of application sources. The application sources may be heterogeneous and may be situated at local sites or at remote sites. A set of rules are applied to the application metadata to determine if one or more applications are authorized for use by a particular user or group. Publication attributes that control accessibility by a particular user or particular group of users are associated with the authorized applications. Based on the publication attributes as they apply to a particular user, one or more curated applications are selected from the authorized applications. A user-specific application marketplace is presented in a user interface to show a portion of the user-specific curated applications. | 2021-11-11 |
20210349805 | COPROCESSOR-BASED LOGGING FOR TIME TRAVEL DEBUGGING - A tracing coprocessor that records execution trace data based on a cache coherency protocol (CCP) message. The tracing coprocessor comprises logic that causes the tracing coprocessor to listen on a bus that is communicatively coupled to a primary processor that executes executable code instructions. The logic also causes the tracing coprocessor to, based on listening on the bus, identify at least one CCP message relating to activity at a processor cache. The logic also causes the tracing coprocessor to identify, from the at least one CCP message, a memory cell consumption by the primary processor. The logic also causes the tracing coprocessor to initiate logging, into an execution trace, at least a memory cell data value consumed by the primary processor in connection with execution of at least one executable code instruction. | 2021-11-11 |
20210349806 | IDENTIFYING SOFTWARE INTERACTION DEFECTS USING DIFFERENTIAL SPEED PROCESSORS - Aspects of the invention include methods, systems and computer program products for identifying interaction software defects. Aspects include singly executing a first testcase at a normal processing speed and singly executing a second testcase at the normal processing speed. Aspects also include simultaneously executing the first testcase at a first processing speed and a second testcase at a second processing speed. Based on determining the single and simultaneous testcase results do not match, aspects further include creating an error notification. | 2021-11-11 |
20210349807 | GENERATION OF OPTIMAL PROGRAM VARIATION - Provided is a system and method for generating a subset of optimal variations of a software program which allow some statements of the control flow to be exposed to side channels. Furthermore, the subset of optimal variations may be selected based on a security and a performance trade-off analysis. In one example, the method may include identifying a set of statements within a control flow of a software program, generating a plurality of variations of the software program which comprise different subsets of statements which are exposed to side channels, respectively, determining one or more pareto-optimal variations of the software program based on side channel leakage values and performance values of the plurality of variations of the software program, and outputting information about the one or more pareto-optimal variations of the software program to a user device. | 2021-11-11 |
20210349808 | SOURCE QUALITY CHECK SERVICE - Techniques and solutions are provided for a source quality check service configured to analyze source text and identify issues in the source text. The source quality check service may identify the issues by performing a selected subset of checks with a centralized source quality check engine, and may be called from within one or more of an Integrated Development Environment (IDE), a build process, and/or a translation process to perform the selected subset of checks. The source quality check service may be further configured to output a report of the identified one or more issues. | 2021-11-11 |
20210349809 | Defect Prediction Operation - A system, method, and computer-readable medium are disclosed for predicting a defect within a computer program comprising: accessing a code base of the computer program, the code base of the computer program comprising a plurality of computer program files; training the defect prediction system, the training including performing a historical analysis of defect occurrence patterns in the code base of the computer program; analyzing a commit of the computer program to identify a likelihood of defect occurrence within each of the plurality of files of the computer program; and, calculating a defect prediction metric for each of the plurality of files of the computer program, the defect prediction metric providing an objective measure of defect prediction for each of the plurality of files of the computer program. | 2021-11-11 |
20210349810 | ASYNCHRONOUS CONSUMER-DRIVEN CONTRACT TESTING IN MICRO SERVICE ARCHITECTURE - A method of verifying, during a continuous integration (CI) and continuous delivery (CD) process, that an asynchronous message contract between a consumer service and a provider service in a microservice architecture has not been broken by a change to the provider service is disclosed. The asynchronous message contract is retrieved from a central server. A test message queue is created, the test message queue being separate from an existing message queue. Generation of a message based on a precondition specified in the asynchronous message contract is triggered. The message is retrieved from the test queue. The message is verified according to the asynchronous message contract, the verifying based on a build error not being generated during the CI and CD process. | 2021-11-11 |
20210349811 | REGRESSION PREDICTION IN SOFTWARE DEVELOPMENT - Described are techniques for predictive regression testing. The techniques include a method comprising constructing a call graph of a modified codebase including at least one modified node corresponding to a modified function. The method further comprises generating a subset of codebase tests by removing respective codebase tests that do not call for the at least one modified node. The method further comprises generating respective partial Abstract Syntax Trees (AST) sequences for relevant test paths in the call graph that connect the at least one modified node to one of the subset of codebase tests. The method further comprises inputting, to a machine learning model, the respective partial AST sequences, and generating, based on output from the machine learning model, predicted regression testing results for the relevant test paths. | 2021-11-11 |
20210349812 | OPTIMIZED TEST CASE SELECTION FOR QUALITY ASSURANCE TESTING OF VIDEO GAMES - A test case selection system and method uses a test selection model to select test cases from a library of test cases to be used for quality assurance (QA) testing of a software application to maximize the chances of finding bugs from executing the selected test cases. The test case selection model may be a machine learning based regression model trained using outcomes of previous QA testing. In some case, the test case selection system may provide periodic and/or continuous refinement of the test case selection model from one QA testing run to the next. The model refinements may include updating weights associated with the test case selection model in the form of a regression model. Additionally, the test case selection system may provide performance analytics between a test case selection model-based selection of test cases and random selection of test cases. | 2021-11-11 |
20210349813 | DYNAMIC USER INTERFACE TESTING AND RECONCILIATION - Systems, devices, and methods for UI testing and reconciliation are presented. In one example, a method of UI testing includes generating a wireframe model of the UI. The method also includes generating a code segment from a portion of the wireframe model of the UI. Additionally, the example method includes determining whether an error in the code segment exists based on a comparison of an aspect of the wireframe model to an aspect of the code segment. The method may include generating an updated code segment from the portion of the wireframe model of the UI when determining the error in the code segment. | 2021-11-11 |
20210349814 | Pipeline performance improvement using stochastic dags - Software development pipeline tools construct pipelines by combining tools, files, and other resources, to build, integrate, test, deploy, or otherwise implement operational functionality in computing systems. Some pipelines are simple, but others are stochastic due to conditional execution, task addition or removal, resource availability randomness, and other causes. Some stochastic pipelines also include a hierarchy with multiple levels of task groupings, which adds complexity. Pipeline performance optimization uses critical paths, but critical paths are challenging to identify in stochastic pipelines. Tools and techniques are presented to automatically identify likely or actual critical paths and to indicate constituent critical tasks as improvement options for stochastic pipelines in software development or other industrial activities. Pipeline representations include directed acyclic graph data structures of constituent tasks. Computationally applying relevance filters helps identify performance improvement options based on historic execution data, without requiring the predefined task dependency information that stochasticity prevents. | 2021-11-11 |
20210349815 | AUTOMATICALLY INTRODUCING REGISTER DEPENDENCIES TO TESTS - Method, apparatus and product for automatically introducing register dependency into tests. A test template represents an abstract test scenario to be utilized for testing a target processor. The abstract test scenario requires that a value be assigned to a register. A test that implements the abstract test scenario is generated. The test is a set of instructions that are executable by the target processor. The generation of the test comprises: determining a memory address to retain the value in a memory that is accessible to the target processor; and adding to the test an instruction to load to the register the value from the memory address, whereby adding a register dependency to the test that is not required by the abstract test scenario. The test can be executed on the target processor or simulation thereof. | 2021-11-11 |
20210349816 | Swarm Management - Systems, methods, and other embodiments associated with swarm management are described. One example system comprises a communication component configured to establish a communication link with at least one element, where the at least one element is part of a swarm. The example system also comprises a management component configured to manage performance of a task list by the swarm through the communication link. | 2021-11-11 |
20210349817 | METHOD FOR RELEASING MEMORY - A method for releasing memory allocated by a contiguous memory allocator that merges a to-be-released memory page with an adjacent free page to form a memory block that can be released more efficiently than would be the case when releasing a series of un-merged memory pages. | 2021-11-11 |
20210349818 | METHOD AND SYSTEM FOR FACILITATING DATA PLACEMENT AND CONTROL OF PHYSICAL ADDRESSES WITH MULTI-QUEUE I/O BLOCKS - A system is provided to receive a request to write a sector of data to a non-volatile storage device, wherein the request is associated with a physical address in the non-volatile storage device at which the sector of data is to be written. The system identifies, based on the physical address, a channel buffer to which the sector of data is to be transmitted, and stores the sector of data in the channel buffer. Responsive to determining that the channel buffer stores other sectors, the system writes the sector of data and the other sectors of data to the non-volatile storage device based on the physical address. | 2021-11-11 |
20210349819 | SEMICONDUCTOR DEVICE - A semiconductor device performs a software lock-step. The semiconductor device includes a first circuit group including a first Intellectual Property (IP) to be operated in a first address space, a first bus, and a first memory, a second circuit group including a second IP to be operated in a second address space, a second bus, and a second memory, a third bus connectable to a third memory, and a transfer control circuit coupled to the first to third buses. When the software lock-step is performed, the second circuit group converts an access address from the second IP to the second memory such that an address assigned to the second memory in the second address space is the same as an address assigned to the first memory in the first address space. | 2021-11-11 |
20210349820 | MEMORY ALLOCATION FOR DISTRIBUTED PROCESSING DEVICES - Examples described herein relate to an offload processor to receive data for transmission using a network interface or received in a packet by a network interface. In some examples, the offload processor can include a packet storage controller to determine whether to store data in a buffer of the offload processing device or a system memory after processing by the offload processing device. In some examples, determining whether to store data in a buffer of the offload processor or a system memory is based on one or more of: available buffer space, latency limit associated with the data, priority associated with the data, or available bandwidth through an interface between the buffer and the system memory. In some examples, the offload processor is to receive a descriptor and specify a storage location of data in the descriptor, wherein the storage location is within the buffer or the system memory. | 2021-11-11 |
20210349821 | MULTI-PROCESSOR BRIDGE WITH CACHE ALLOCATE AWARENESS - Techniques for loading data, comprising receiving a memory management command to perform a memory management operation to load data into the cache memory before execution of an instruction that requests the data, formatting the memory management command into one or more instructions for a cache controller associated with the cache memory, and outputting an instruction to the cache controller to load the data into the cache memory based on the memory management command. | 2021-11-11 |
20210349822 | THREE TIERED HIERARCHICAL MEMORY SYSTEMS - Systems, apparatuses, and methods related to three tiered hierarchical memory systems are described herein. A three tiered hierarchical memory system can leverage persistent memory to store data that is generally stored in a non-persistent memory, thereby increasing an amount of storage space allocated to a computing system at a lower cost than approaches that rely solely on non-persistent memory. An example apparatus may include a persistent memory, and one or more non-persistent memories configured to map an address associated with an input/output (I/O) device to an address in logic circuitry prior to the apparatus receiving a request from the I/O device to access data stored in the persistent memory, and map the address associated with the I/O device to an address in a non-persistent memory subsequent to the apparatus receiving the request and accessing the data. | 2021-11-11 |
20210349823 | Dynamic Adaptive Drain for Write Combining Buffer - In one embodiment, a processor includes a write combining buffer that includes a memory having a plurality of entries. The entries may be allocated to committed store operations transmitted by a load/store unit in the processor, and subsequent committed store operations may merge data with previous store memory operations in the buffer if the subsequent committed store operations are to addresses that match addresses of the previous committed store operations within a predefined granularity (e.g. the width of a cache port). The write combining buffer may be configured to retain up to N entries of committed store operations, but may also be configured to write one or more of the entries to the data cache responsive to receiving more than a threshold amount of non-merging committed store operations in the write combining buffer. | 2021-11-11 |
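The drain behavior in 20210349823 can be modeled as a buffer that merges stores falling in the same aligned region and writes an entry out once too many non-merging stores arrive. This is a toy behavioral model, not the patented hardware; the 64-byte granularity and the threshold value are assumptions.

```python
# Toy write-combining buffer: merge stores within an aligned granule
# (e.g. a cache-port width); drain the oldest entry after more than a
# threshold number of non-merging stores. Parameters are assumptions.
GRANULE = 64        # merge granularity in bytes
THRESHOLD = 3       # non-merging stores tolerated before draining

class WriteCombiningBuffer:
    def __init__(self):
        self.entries = {}       # aligned base address -> merged byte count
        self.non_merging = 0    # consecutive non-merging store counter
        self.drained = []       # bases written out to the data cache

    def store(self, addr, nbytes):
        base = addr - (addr % GRANULE)
        if base in self.entries:              # merges with existing entry
            self.entries[base] += nbytes
            self.non_merging = 0
        else:                                 # non-merging store
            self.entries[base] = nbytes
            self.non_merging += 1
            if self.non_merging > THRESHOLD:  # drain the oldest entry
                victim = next(iter(self.entries))
                self.drained.append(victim)
                del self.entries[victim]
                self.non_merging = 0

wcb = WriteCombiningBuffer()
for a in (0, 8, 64, 128, 192, 256):   # 0 and 8 merge; the rest do not
    wcb.store(a, 8)
# the run of non-merging stores forces the oldest entry (base 0) out
```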
20210349824 | EFFICIENT WORK UNIT PROCESSING IN A MULTICORE SYSTEM - Techniques are described in which a system having multiple processing units processes a series of work units in a processing pipeline, where some or all of the work units access or manipulate data stored in non-coherent memory. In one example, this disclosure describes a method that includes identifying, prior to completing processing of a first work unit with a processing unit of a processor having multiple processing units, a second work unit that is expected to be processed by the processing unit after the first work unit. The method also includes processing the first work unit, and prefetching, from non-coherent memory, data associated with the second work unit into a second cache segment of the buffer cache, wherein prefetching the data associated with the second work unit occurs concurrently with at least a portion of the processing of the first work unit by the processing unit. | 2021-11-11 |
20210349825 | MEMORY CONTROLLER - A controller that controls a memory device including a plurality of pages each corresponding to a physical address, the controller may include: a memory suitable for storing a plurality of logical-to-physical (L2P) chunks each indicating mapping between one or more logical addresses and one or more physical addresses and an original valid page bitmap (VPB) indicating whether each of the plurality of pages is a valid page that stores valid data; and a processor suitable for generating a reconstructed VPB based on normal L2P chunks when a corrupted L2P chunk is detected, detecting pages having different states in the original VPB and the reconstructed VPB, obtaining logical addresses mapped to physical addresses of the detected pages, respectively, and recovering the corrupted L2P chunk based on the physical addresses of the detected pages and the obtained logical addresses. | 2021-11-11 |
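The detection step of 20210349825 — diffing the original VPB against one rebuilt from the surviving chunks — can be illustrated in a few lines. This is a simplified reading with made-up page counts and mappings; the final step of the abstract (obtaining the logical addresses for the detected pages, e.g. from per-page metadata) is omitted.

```python
# Sketch: pages valid in the original VPB but absent from a VPB rebuilt
# out of the intact L2P chunks must have been mapped by the corrupted
# chunk. Chunk contents and the VPB below are hypothetical.
def find_orphan_pages(original_vpb, normal_chunks, num_pages):
    """Return physical addresses mapped only by the corrupted chunk."""
    reconstructed = [False] * num_pages
    for chunk in normal_chunks:        # each chunk maps LBA -> PA
        for pa in chunk.values():
            reconstructed[pa] = True
    return [pa for pa in range(num_pages)
            if original_vpb[pa] and not reconstructed[pa]]

normal = [{0: 4, 1: 7}]                # surviving chunk: LBA -> PA
vpb = [False, True, False, False, True, False, False, True]
# pages 1, 4, 7 are valid; the intact chunk accounts for 4 and 7,
# so page 1 must belong to the corrupted chunk
assert find_orphan_pages(vpb, normal, 8) == [1]
```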
20210349826 | METHODS AND DEVICES FOR BYPASSING THE INTERNAL CACHE OF AN ADVANCED DRAM MEMORY CONTROLLER - A calculation system comprises a computing device having one or more instruction-controlled processing cores and a memory controller, the memory controller including a cache memory; and a memory circuit coupled to the memory controller via a data bus and an address bus, the memory circuit being adapted to have a first m-bit memory location accessible by a plurality of first addresses provided on the address bus, the calculation device being configured to select, for each memory operation accessing the first m-bit memory location, one address among the plurality of first addresses. | 2021-11-11 |
20210349827 | SLOT/SUB-SLOT PREFETCH ARCHITECTURE FOR MULTIPLE MEMORY REQUESTORS - A prefetch unit generates a prefetch address in response to an address associated with a memory read request received from the first or second cache. The prefetch unit includes a prefetch buffer that is arranged to store the prefetch address in an address buffer of a selected slot of the prefetch buffer, where each slot of the prefetch unit includes a buffer for storing a prefetch address, and two sub-slots. Each sub-slot includes a data buffer for storing data that is prefetched using the prefetch address stored in the slot, and one of the two sub-slots of the slot is selected in response to a portion of the generated prefetch address. Subsequent hits on the prefetcher result in returning prefetched data to the requestor in response to a subsequent memory read request received after the initial received memory read request. | 2021-11-11 |
20210349828 | PAGE MODIFICATION ENCODING AND CACHING - Modifying a page stored in a non-volatile storage includes receiving one or more requests to modify data stored in the page with new data. One or more lines are identified in the page that include data to be modified by the one or more requests. The identified one or more lines correspond to one or more respective byte ranges each of a predetermined size in the page. Encoded data is created based on the new data and respective locations of the one or more identified lines in the page. The encoded data is cached, and at least a portion of the cached encoded data is used to rewrite the page in the non-volatile storage to include at least a portion of the new data. | 2021-11-11 |
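The encoding described in 20210349828 — recording which fixed-size lines of a page changed, then applying the cached deltas in one rewrite — can be sketched as follows. The 16-byte line size and the function names are assumptions for illustration, not details from the patent.

```python
# Sketch: encode page modifications as (line index, new line bytes)
# entries, cache them, and apply them in a single page rewrite.
LINE = 16  # bytes per line (the predetermined byte range; an assumption)

def encode_modifications(page, offset, new_data):
    """Return (line_index, new_line_bytes) entries covering the change."""
    page = bytearray(page)
    page[offset:offset + len(new_data)] = new_data
    first, last = offset // LINE, (offset + len(new_data) - 1) // LINE
    return [(i, bytes(page[i * LINE:(i + 1) * LINE]))
            for i in range(first, last + 1)]

def rewrite_page(page, encoded):
    """Apply cached encoded lines to produce the rewritten page."""
    page = bytearray(page)
    for i, line in encoded:
        page[i * LINE:(i + 1) * LINE] = line
    return bytes(page)

page = bytes(64)                                    # a 4-line page
enc = encode_modifications(page, 20, b"\xff" * 4)   # touches line 1 only
assert [i for i, _ in enc] == [1]
assert rewrite_page(page, enc)[20:24] == b"\xff" * 4
```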
20210349829 | COMPRESSED LOGICAL-TO-PHYSICAL MAPPING FOR SEQUENTIALLY STORED DATA - Methods, systems, and devices for compressed logical-to-physical mapping for sequentially stored data are described. A memory device may use a hierarchical set of logical-to-physical mapping tables for mapping logical block address generated by a host device to physical addresses of the memory device. The memory device may determine whether all of the entries of a terminal logical-to-physical mapping table are consecutive physical addresses. In response to determining that all of the entries contain consecutive physical addresses, the memory device may store a starting physical address of the consecutive physical addresses as an entry in a higher-level table along with a flag indicating that the entry points directly to data in the memory device rather than pointing to a terminal logical-to-physical mapping table. The memory device may, for subsequent reads of data stored in one or more of the consecutive physical addresses, bypass the terminal table to read the data. | 2021-11-11 |
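The compression in 20210349829 hinges on one check: if every entry of a terminal L2P table is consecutive, the parent entry can hold just the starting physical address plus a direct flag, and reads can bypass the terminal table. A toy model (field names are hypothetical):

```python
# Sketch: collapse an all-consecutive terminal L2P table into a parent
# entry {direct, start_pa}; lookups then skip the terminal table walk.
def compress(terminal):
    """Return a parent-table entry for one terminal L2P table."""
    if all(terminal[i] == terminal[0] + i for i in range(len(terminal))):
        return {"direct": True, "start_pa": terminal[0]}
    return {"direct": False, "table": terminal}

def lookup(parent_entry, index):
    if parent_entry["direct"]:            # bypass the terminal table
        return parent_entry["start_pa"] + index
    return parent_entry["table"][index]   # fall back to a table walk

sequential = [100, 101, 102, 103]         # PAs written sequentially
scattered = [7, 42, 9, 100]               # PAs with no pattern
assert compress(sequential) == {"direct": True, "start_pa": 100}
assert lookup(compress(sequential), 2) == 102
assert lookup(compress(scattered), 1) == 42
```

The win is that sequentially written data needs neither terminal-table storage nor a terminal-table read on the lookup path.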
20210349830 | LOGICAL TO PHYSICAL TABLE FRAGMENTS - Logical to physical tables each including logical to physical address translations for first logical addresses can be stored. Logical to physical table fragments each including logical to physical address translations for second logical addresses can be stored. A first level index can be stored. The first level index can include a physical table address of a respective one of the logical to physical tables for each of the first logical addresses and a respective pointer to a second level index for each of the second logical addresses. The second level index can be stored and can include a physical fragment address of a respective logical to physical table fragment for each of the second logical addresses. | 2021-11-11 |
20210349831 | CLASS OF SERVICE - Described herein is an accelerator device having a cache memory for which limits may be specified for a memory allocation according to a class of service associated with a thread, application, or virtual machine that created the memory allocation. The limits can include a specific set of enumerated cache ways that are designated as eligible to cache data for memory allocations associated with a class of service. | 2021-11-11 |
20210349832 | METHOD AND APPARATUS FOR VECTOR PERMUTATION - A method is provided that includes performing, by a processor in response to a vector permutation instruction, permutation of values stored in lanes of a vector to generate a permuted vector, wherein the permutation is responsive to a control storage location storing permute control input for each lane of the permuted vector, wherein the permute control input corresponding to each lane of the permuted vector indicates a value to be stored in the lane of the permuted vector, wherein the permute control input for at least one lane of the permuted vector indicates a value of a selected lane of the vector is to be stored in the at least one lane, and storing the permuted vector in a storage location indicated by an operand of the vector permutation instruction. | 2021-11-11 |
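The permute semantics of 20210349832 are compact: each control entry names the source lane whose value lands in that lane of the result. A behavioral sketch (lane count and control encoding are assumptions):

```python
# Sketch: vector permutation driven by per-lane control input, where
# control[i] selects the source lane stored into result lane i.
def vpermute(vec, control):
    return [vec[src] for src in control]

v = [10, 20, 30, 40]
# broadcast lane 0 into lanes 0-1, swap the upper two lanes
assert vpermute(v, [0, 0, 3, 2]) == [10, 10, 40, 30]
```

Real ISAs typically also allow the control input to inject constants or zero a lane; the abstract only requires selecting a source lane, so that is all the sketch models.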
20210349833 | LOGICAL MEMORY ALLOCATION AND PROVISIONING - A method and system of managing memory, the method including receiving a request for storage space in the memory system; obtaining a timestamp for a new Logical Unit Number (LUN); allocating a range of logical blocks to the new LUN in accordance with its requested size, the range of logical blocks including a starting logical block and a number of blocks; assigning the timestamp to the new LUN as the LUN creation timestamp; and saving the LUN creation timestamp with other metadata identifying the new LUN and the allocated logical blocks. Methods and systems for deleting LUNs using a deletion timestamp are disclosed, as is a process to format a LUN. | 2021-11-11 |
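The allocation flow of 20210349833 — allocate a contiguous block range, stamp the LUN at creation, keep the timestamp with the rest of the metadata — maps naturally onto a small allocator. An illustrative sketch (the class and field names are invented):

```python
# Sketch: LUN creation/deletion with timestamps saved alongside the
# LUN's identifying metadata. Structure names are hypothetical.
import time

class LogicalAllocator:
    def __init__(self):
        self.next_block = 0     # next unallocated logical block
        self.next_id = 0        # next LUN id to hand out
        self.luns = {}          # LUN id -> metadata

    def create_lun(self, num_blocks):
        lun_id = self.next_id
        self.luns[lun_id] = {
            "start_block": self.next_block,  # starting logical block
            "num_blocks": num_blocks,        # size of the range
            "created_at": time.time(),       # LUN creation timestamp
        }
        self.next_block += num_blocks
        self.next_id += 1
        return lun_id

    def delete_lun(self, lun_id):
        # record a deletion timestamp rather than erasing metadata
        self.luns[lun_id]["deleted_at"] = time.time()

alloc = LogicalAllocator()
a = alloc.create_lun(128)
b = alloc.create_lun(64)
assert alloc.luns[b]["start_block"] == 128   # ranges do not overlap
alloc.delete_lun(a)
assert "deleted_at" in alloc.luns[a]
```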
20210349834 | SYSTEMS AND METHODS FOR MANAGING CACHE REPLACEMENT - A method of managing load units of executable instructions between internal memory in a microcontroller with multiple bus masters, and a non-volatile memory device external to the microcontroller. A copy of the load units are loaded from the external memory device into the internal memory for use by corresponding bus masters. Each load unit is associated with a corresponding load entity queue and each load entity queue is associated with a corresponding one of the multiple bus masters. Each load entity queue selects an eviction candidate from the associated copy of the load units currently loaded in the internal memory. Information identifying the eviction candidate for each load entity queue is broadcasted to all load entity queues. The eviction candidate is added to a set of managed eviction candidates if none of the load entity queues vetoes using the eviction candidate. | 2021-11-11 |
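The broadcast-and-veto step of 20210349834 is easy to model: a nominated candidate only joins the managed eviction set if no queue objects. This is a simplified sketch; the veto criterion (a queue still needing the load unit) and the names are assumptions.

```python
# Sketch: an eviction candidate is broadcast to every load entity
# queue; any queue that still needs the load unit vetoes the eviction.
class LoadEntityQueue:
    def __init__(self, name, in_use):
        self.name = name
        self.in_use = set(in_use)   # load units this bus master still needs

    def vetoes(self, load_unit):
        return load_unit in self.in_use

def broadcast_candidate(candidate, queues, managed_candidates):
    """Add candidate to the managed set only if no queue vetoes it."""
    if not any(q.vetoes(candidate) for q in queues):
        managed_candidates.add(candidate)

queues = [LoadEntityQueue("dma", {"lu3"}), LoadEntityQueue("cpu", {"lu1"})]
managed = set()
broadcast_candidate("lu7", queues, managed)   # nobody needs lu7: accepted
broadcast_candidate("lu1", queues, managed)   # cpu vetoes lu1: rejected
assert managed == {"lu7"}
```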
20210349835 | SYSTEM CACHE OPTIMIZATIONS FOR DEEP LEARNING COMPUTE ENGINES - In an example, an apparatus comprises a plurality of compute engines; and logic, at least partially including hardware logic, to detect a cache line conflict in a last-level cache (LLC) communicatively coupled to the plurality of compute engines; and implement context-based eviction policy to determine a cache way in the cache to evict in order to resolve the cache line conflict. Other embodiments are also disclosed and claimed. | 2021-11-11 |
20210349836 | FIELD-REPLACEABLE UNIT (FRU) SECURE COMPONENT BINDING - Systems and methods are provided for binding one or more components to an identification component of a hardware module. Each of the serial numbers for the one or more components is included within a module-specific authentication certificate that is stored within the identification component of the hardware module. When connected to a computing platform, an authentication system of the computing platform is capable of retrieving the module-specific authentication certificate. The authentication system can compare the list of serial numbers included in the module-specific authentication certificate with one or more serial numbers read over a first interface. If the two lists of serial numbers match, the authentication system can flag the hardware module as authentic through authentication of all components of the hardware module. | 2021-11-11 |
20210349837 | SYSTEMS, METHODS, AND DEVICES FOR NEAR DATA PROCESSING - A memory module may include one or more memory devices, and a near-memory computing module coupled to the one or more memory devices, the near-memory computing module including one or more processing elements configured to process data from the one or more memory devices, and a memory controller configured to coordinate access of the one or more memory devices from a host and the one or more processing elements. A method of processing a dataset may include distributing a first portion of the dataset to a first memory module, distributing a second portion of the dataset to a second memory module, constructing a first local data structure at the first memory module based on the first portion of the dataset, constructing a second local data structure at the second memory module based on the second portion of the dataset, and merging the first and second local data structures. | 2021-11-11 |
20210349838 | Priority Based Arbitration - Methods of arbitrating between requestors and a shared resource wherein for each processing cycle a plurality of select signals are generated and then used by decision nodes in a binary decision tree to select a requestor. The select signals are generated using valid bits and priority bits. Each valid bit corresponds to one of the requestors and indicates whether, in the processing cycle, the requestor is requesting access to the shared resource. Each priority bit corresponds to one of the requestors and indicates whether, in the processing cycle, the requestor has priority. Corresponding valid and priority bits are combined in an AND logic element to generate a valid_and_priority bit for each requestor. Pair-wise OR-reduction is then performed on both the valid bits and the valid_and_priority bits to generate additional valid bits and valid_and_priority bits for sets of requestors and these are then used to generate the select signal. | 2021-11-11 |
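The bit-level structure of 20210349838 — per-requestor AND of valid and priority, pair-wise OR-reduction, then a decision tree — can be traced for four requestors. The tie-breaking at each node (preferring the left group here) is an assumption; the patent does not fix it in the abstract.

```python
# Sketch: 4-requestor binary-tree arbitration. vp = valid AND priority
# per requestor; pair-wise ORs feed the decision nodes.
def arbitrate(valid, priority):
    """Pick a requestor index, preferring valid-and-priority requestors."""
    vp = [v & p for v, p in zip(valid, priority)]    # per-requestor AND
    # Pair-wise OR-reduction for requestor groups (0,1) and (2,3).
    v01, v23 = valid[0] | valid[1], valid[2] | valid[3]
    vp01, vp23 = vp[0] | vp[1], vp[2] | vp[3]
    # Root node: take the left group if it has a valid-and-priority
    # requestor, or if neither group does but the left has a valid one.
    left = vp01 or (not vp23 and v01)
    if left:
        return 0 if (vp[0] or (not vp[1] and valid[0])) else 1
    return 2 if (vp[2] or (not vp[3] and valid[2])) else 3

# requestors 1 and 3 are valid; only requestor 3 has priority
assert arbitrate([0, 1, 0, 1], [0, 0, 0, 1]) == 3
# no priority set: the left-most valid requestor wins in this sketch
assert arbitrate([0, 1, 1, 0], [0, 0, 0, 0]) == 1
```

In hardware each of these boolean expressions is a small gate network, and the same pattern extends to 2^n requestors by adding OR-reduction levels.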
20210349839 | MULTI-PORTED NONVOLATILE MEMORY DEVICE WITH BANK ALLOCATION AND RELATED SYSTEMS AND METHODS - A nonvolatile memory device can include a serial port having at least one serial clock input, and at least one serial data input/output (I/O) configured to receive command, address and write data in synchronism with the at least one serial clock input. At least one parallel port can include a plurality of command address inputs configured to receive command and address data in groups of parallel bits and a plurality of unidirectional data outputs configured to output read data in parallel on rising and falling edges of a data clock signal. Each of a plurality of banks can include nonvolatile memory cells and be configurable for access by the serial port or the parallel port. When a bank is configured for access by the serial port, the bank is not accessible by the at least one parallel port. Related methods and systems are also disclosed. | 2021-11-11 |
20210349840 | System, Apparatus And Methods For Handling Consistent Memory Transactions According To A CXL Protocol - In one embodiment, an apparatus includes: an interface to couple a plurality of devices of a system and enable communication according to a Compute Express Link (CXL) protocol. The interface may receive a consistent memory request having a type indicator to indicate a type of consistency to be applied to the consistent memory request. A request scheduler coupled to the interface may receive the consistent memory request and schedule it for execution according to the type of consistency, based at least in part on a priority of the consistent memory request and one or more pending consistent memory requests. Other embodiments are described and claimed. | 2021-11-11 |
20210349841 | LOCAL NON-VOLATILE MEMORY EXPRESS VIRTUALIZATION DEVICE - A server system is provided that includes one or more compute nodes that include at least one processor and a host memory device. The server system further includes a plurality of solid-state drive (SSD) devices, a local non-volatile memory express virtualization (LNV) device, and a non-transparent (NT) switch for a peripheral component interconnect express (PCIe) bus that interconnects the plurality of SSD devices and the LNV device to the at least one processor of each compute node. The LNV device is configured to virtualize hardware resources of the plurality of SSD devices. The plurality of SSD devices are configured to directly access data buffers of the host memory device. The NT switch is configured to hide the plurality of SSD devices such that the plurality of SSD devices are not visible to the at least one processor of each compute node. | 2021-11-11 |
20210349842 | Programmable Hardware Virtual Network Interface - Systems and methods for communication between heterogenous processors via a virtual network interface implemented via programmable hardware and one or more buses. The programmable hardware may be configured with a multi-function bus such that the programmable hardware appears as both a network device and a programmable device to a host system. Additionally, the programmable hardware may be configured with a second bus to appear as a network device to an embedded system. Each system may implement network drivers to allow access to direct memory access engines configured on the programmable hardware. The configured programmable hardware and the network drivers may enable a virtual network connection between the systems to allow for information transfer via one or more network communication protocols. | 2021-11-11 |
20210349843 | SYSTEM COMPONENT AND USE OF A SYSTEM COMPONENT - A system component, including an interface for a data bus, a defined communication protocol being used on the data bus which determines the data sequence of access requests for sending and receiving data. The data of an access request includes pieces of information about the access direction. The system component includes a register unit including data registers. The system component includes a processing unit for the data of an access request. The interface is optionally operable in a first or a second operating mode. In the first operating mode, the data of an access request is supplied to the register unit to identify a register address, so that the corresponding read or write access takes place on the identified data register. In the second mode, the data of an access request is supplied to the processing unit and the corresponding read or write access is handled by the processing unit. | 2021-11-11 |
20210349844 | SYSTEM AND METHOD TO SELECTIVELY REDUCE USB-3 INTERFERENCE WITH WIRELESS COMMUNICATION DEVICES - An information handling system includes a processor that provides a USB-2 channel and a USB-3 channel to a device. The device provides the USB-2 and -3 channels to selected ports. Each port includes a USB-3 enable setting. When the USB-3 enable setting for each particular USB port is in a first state, the associated device USB-3 channel is active, and when the USB-3 enable setting for each particular USB port is in a second state, the associated device USB-3 channel is inactive. The USB-3 enable setting for at least one of the USB ports is placed into the second state to reduce electromagnetic interference between the associated USB-3 channel and an antenna. | 2021-11-11 |