21st week of 2022 patent application highlights part 44 |
Patent application number | Title | Published |
20220164201 | SYSTEMS AND METHODS FOR BUILDING DYNAMIC INTERFACES - First data indicative of a first plurality of transactions by a user may be processed to generate first behavioral information describing the user. The first behavioral information may be displayed by an interactive user interface. A user input made in response to the first behavioral information may be received and analyzed to generate user preference information indicating a relationship between the first user input and the first behavioral information. Second data indicative of a second plurality of transactions by the user may be received and processed with the user preference information to generate second behavioral information describing the user. The second behavioral information may be displayed by the interactive user interface differently from the first behavioral information as a result of the processing of the second data and the user preference information together. | 2022-05-26 |
20220164202 | SYSTEM AND METHOD FOR PRESENTING AN OBJECT - A method and system for presenting an object on a computing device. A metaphor application on a computing device organizes a user interface based upon a metaphor. The metaphor organizes a document, file, application, or combination thereof based on geospheric direction, geolocation, or both. The metaphor may also organize a document, file, application, data, or a combination thereof based on a solid geometrical figure in three-dimensional Euclidean space. A document, file, application, or any combination thereof may be associated with a geospheric direction, a geolocation, or both. The document, file, application, data, or any combination thereof may further be associated with a solid geometrical figure. A presentation object containing data on the document, file, application, data, or combination thereof, and the geospheric direction, geolocation, or both is formatted into data blocks for rendering on a display. The display may be the display screen of the computing device. The metaphor application causes the presentation object to be rendered on the display when the computing device is pointing in the geospheric direction, in the geolocation, or both associated with the presentation object. | 2022-05-26 |
20220164203 | STREAMING APPLICATION VISUALS USING PAGE-LIKE SPLITTING OF INDIVIDUAL WINDOWS - The disclosure relates to the transfer of visuals (e.g., window visuals) over virtual frames that may be stored in any number of video frames of one or more video streams. The visuals may be split into two-dimensional (2D) pages of a virtual frame, with each of the 2D pages being a fraction of the size of video frames of the video stream(s). The virtual frame may be encoded to the video frames of the video stream(s) and later reconstructed in accordance with a page table. | 2022-05-26 |
20220164204 | SELF-LEARNING ARTIFICIAL INTELLIGENCE VOICE RESPONSE BASED ON USER BEHAVIOR DURING INTERACTION - A system is provided for recommending guidance instructions to a user. The system includes a memory having computer readable instructions and a processor for executing the computer readable instructions. The computer readable instructions control the processor to perform operations of monitoring an ongoing task comprising at least one act performed by a user, generating image data depicting the ongoing task, and displaying the ongoing task based on the image data. The system analyzes the ongoing task and generates an augmented image. The augmented image is overlaid on the image data so that the augmented image is displayed simultaneously with the ongoing task to direct the user to progress the ongoing task. | 2022-05-26 |
20220164205 | FEATURE EXPOSURE FOR MODEL RECOMMENDATIONS AND FEEDBACK - Systems, methods and apparatus for providing user feedback to an action selection model. In an aspect, a method includes displaying interaction elements for recommendations selected by a selection model. Each interaction element may be selected by one of a first interaction mode or a second interaction mode. Selection by the first interaction mode indicates an acceptance of the recommendation described by the interaction element. Selection by the second interaction mode causes the user device to display the decision data that caused the selection model to select the recommendation described by the interaction element. In some implementations, the recommendations are actions that a user device may perform. In other implementations, each recommendation may be one of an action that the user device may perform or content that a user may consume. | 2022-05-26 |
20220164206 | COMMAND LINE INTERFACE EXTENSION PROCESS - In some embodiments, a method for processing a command line interface is provided. The method parses a definition file for extending a command line interface (CLI) with an extension. The definition file is processed through an external interface that is different from an internal interface used to develop the CLI. The method translates a definition from the definition file to an internal structure supported by the internal interface. The translated definition implements the extension in the CLI. The internal structure is added to a data structure for the CLI where a command set of the CLI is extended to perform a command that is defined by the definition. | 2022-05-26 |
20220164207 | SYSTEM FOR PROVIDING AN ADAPTABLE PLUGIN FRAMEWORK FOR APPLICATION TRANSFORMATION TO CLOUD - A system and a method for application transformation to cloud by conversion of an application source code to a cloud native code is provided. First and second transformation recommendation paths are received and remediation templates based on the same are applied. A pre-defined transformation process flow is applied on application source code based on first and second transformation recommendation paths, including a pre-processing stage involving analysis of source code and target framework. A plugin unit is provided which provides an adaptable plugin framework for creating multiple plugin types. The adaptable plugin framework allows addition of a semi-automated workflow that applies functionality to accelerate application development or application-to-cloud transformation, or addition of semi-automated steps to accelerate greenfield application development and transformation of application source code to cloud native code. The functionality may include assessment of application source code and generation of application source code. | 2022-05-26 |
20220164208 | COORDINATED CONTAINER SCHEDULING FOR IMPROVED RESOURCE ALLOCATION IN VIRTUAL COMPUTING ENVIRONMENT - The technology provides for allocating an available resource in a computing system by bidirectional communication between a hypervisor and a container scheduler in the computing system. The computing system for allocating resources includes one or more processors configured to receive a first scheduling request to initiate a first container on a first virtual machine having a set of resources. A first amount of resources is allocated from the set of resources to the first container on the first virtual machine in response to the first scheduling request. A hypervisor is notified in a host of the first amount of resources allocated to the first container. A second amount of resources from the set of resources is allocated to a second virtual machine in the host. A reduced amount of resources available in the set of resources is determined. A container scheduler is notified by the hypervisor for the reduced amount of resources of the set of resources available on the first virtual machine. | 2022-05-26 |
20220164209 | CONTAINERIZED COMPUTING ENVIRONMENTS - Building images that enable improved utilization of previously built image layers. An image build system evaluates commands prior to their use and differentiates between stateful and stateless commands. Employing such an approach enables stateless commands to be identified (e.g. labeled), thus enabling the image build system to handle the stateless commands differently from stateful commands. This enables the re-use of cached/stored image layers, thus reducing image size by avoiding the creation of new image layers. | 2022-05-26 |
20220164210 | VIRTUALIZED FABRIC NAME SERVER FOR STORAGE AREA NETWORK - Techniques for a virtualized fabric name server for a storage area network are described herein. An aspect includes operating a storage area network, the storage area network including a hybrid control plane. Another aspect includes managing, using a virtualized fabric name server and the hybrid control plane, the storage area network, wherein the virtualized fabric name server is disposed in a container that is hosted on an element of the storage area network. | 2022-05-26 |
20220164211 | DATA PROTECTION MANAGEMENT OF APPLICATION INSTANCE EXECUTING IN ACCORDANCE WITH VIRTUAL VOLUME-BASED STORAGE SYSTEM - Data protection management techniques in information processing systems are disclosed. For example, a method is provided to manage generation of a copy of data of an application instance executed by a virtual processing device of a host device operatively coupled to a virtual volume-based storage system. Generation of the copy of the data of the application instance is caused to be performed on the virtual volume-based storage system independent of a virtualization layer associated with the host device. | 2022-05-26 |
20220164212 | SYSTEMS AND METHODS FOR ASSIGNING DOMAIN IDENTIFIERS TO REMOTE PERIPHERAL DEVICES USING A HYPERVISOR - A processing system includes an interconnect, a master processing device including processing cores coupled to the interconnect, a hypervisor coupled to the interconnect and configured to allocate the processing cores to one or more virtual machines, domain configuration information including a domain identifier for each of the one or more virtual machines, remote peripheral devices coupled to the interconnect, and a domain access controller coupled to the interconnect and configured to receive the domain identifiers for the remote peripherals directly from the hypervisor through the interconnect. | 2022-05-26 |
20220164213 | CLOUD BASED AUDIO / VIDEO OPERATING SYSTEMS - Technology is disclosed for establishing and administering multiple virtual machines, each with an audio, video and control (AVC) operating system (OS). The technology can also establish and administer cloud based AVC OSs. A server implementing this technology can perform real-time AVC processing, alongside soft and non-real-time processing and can host multiple, independent, virtual AVC OSs. Each AVC OS can perform the processing for an AVC setup. Each of the AVC OSs can be operated by a corresponding virtual machine controlled by a hypervisor running on the server. A cloud based AVC OS can perform processing for a corresponding remote AVC setup comprising multiple AVC devices. An AVC routing system can cause AVC signals from a particular AVC setup to reach a corresponding cloud AVC OS and conversely can cause signals from an AVC OS to reach the correct destination device. | 2022-05-26 |
20220164214 | CONTAINER PLATFORM-ORIENTED TRUSTED SOFTWARE AUTHORIZATION AND VERIFICATION SYSTEM AND METHOD - Provided are a container platform-oriented trusted software authorization and verification system and a method, the system including a public key infrastructure builder, a container image identity builder, a signature list builder, a container image verifier, a signature list and user certificates loader, and a container program verifier. The method is capable of conveniently authorizing container images and software running in the container, and verifying the container images and programs in the container at the right time, so as to ensure that container images running on the container platform are trusted, and the software running in the container is also trusted, thereby improving the security of the container platform. | 2022-05-26 |
20220164215 | VIRTUAL MACHINE MIGRATION METHOD AND DEVICE - A first device receives a migration instruction sent by a second device and creates a target virtual machine of a to-be-migrated virtual machine that is in the first device. Then, the first device receives a memory bitmap of the to-be-migrated virtual machine from the second device, where the memory bitmap may indicate whether data of each memory unit in a memory of the to-be-migrated virtual machine is stored in a non-volatile memory or a volatile memory. In a virtual machine migration process, the first device stores, based on the memory bitmap, data in the memory of the to-be-migrated virtual machine into a volatile memory and a non-volatile memory of the target virtual machine. | 2022-05-26 |
20220164216 | VIRTUALIZING HARDWARE COMPONENTS THAT IMPLEMENT AI APPLICATIONS - A computing environment can include a host system that maintains a guest system, and a hardware component configured to implement artificial intelligence ("AI") methods of processing and analyzing data. The guest system can provide a virtual computing environment that receives a request to implement an AI application, and utilize a framework and a guest library to convert data from the AI application into an intermediate representation ("IR"). The host system can receive the IR with a virtual device ("VD"), and utilize an IR backend to translate the IR into hardware operations for the hardware component. Translated hardware operations can be provided to, and carried out by, the hardware component to provide an implementation of the AI application. Results of the hardware operations can be transmitted from the VD of the host system to a VD driver of the guest system, virtualizing the hardware component relative to the guest system. | 2022-05-26 |
20220164217 | MERGING DATA FOR WRITE ALLOCATE - A method includes receiving, by a level two (L2) controller, a write request for an address that is not allocated as a cache line in a L2 cache. The write request specifies write data. The method also includes generating, by the L2 controller, a read request for the address; reserving, by the L2 controller, an entry in a register file for read data returned in response to the read request; updating, by the L2 controller, a data field of the entry with the write data; updating, by the L2 controller, an enable field of the entry associated with the write data; and receiving, by the L2 controller, the read data and merging the read data into the data field of the entry. | 2022-05-26 |
20220164218 | SYSTEMS, METHODS, AND APPARATUSES FOR HETEROGENEOUS COMPUTING - Embodiments of systems, methods, and apparatuses for heterogeneous computing are described. In some embodiments, a hardware heterogeneous scheduler dispatches instructions for execution on one or more of a plurality of heterogeneous processing elements, the instructions corresponding to a code fragment to be processed by the one or more of the plurality of heterogeneous processing elements, wherein the instructions are native instructions to at least one of the one or more of the plurality of heterogeneous processing elements. | 2022-05-26 |
20220164219 | PROCESSING SYSTEM, PROCESSING METHOD, HIGHER-LEVEL SYSTEM, LOWER-LEVEL SYSTEM, HIGHER-LEVEL PROGRAM, AND LOWER-LEVEL PROGRAM - In asynchronous processing, processing of a low-order system is checked. A high-order system | 2022-05-26 |
20220164220 | Circuit for Fast Interrupt Handling - A circuit for fast interrupt handling is disclosed. An apparatus includes a processor circuit having an execution pipeline and a table configured to store a plurality of pointers that correspond to interrupt routines stored in a memory circuit. The apparatus further includes an interrupt redirect circuit configured to receive a plurality of interrupt requests. The interrupt redirect circuit may select a first interrupt request among a plurality of interrupt requests of a first type. The interrupt redirect circuit retrieves a pointer from the table using information associated with the request. Using the pointer, the execution pipeline retrieves a first program instruction from the memory circuit to execute a particular interrupt routine. | 2022-05-26 |
20220164221 | PRESERVING PERSISTENT LINK CONNECTIONS DURING A CLOUD-BASED SERVICE SYSTEM UPGRADE - A method and a microservice system for preserving link connections during an upgrade. The system includes a memory and an electronic processor. The processor is configured to initiate a client process upgrade for a first instance of a plurality of instances, each configured to establish and maintain a link connection between at least one of a plurality of electronic endpoint devices, store state data regarding the link connection between the first instance and an electronic endpoint device, and instantiate, for the first instance, an upgraded instance of the link adapter service. The processor is configured to shut down the first instance, causing the first instance to terminate the link connection to the endpoint device, and immediately establish a new link connection between the endpoint device and the upgraded instance, a state of the new link connection being established according to the stored state data. | 2022-05-26 |
20220164222 | Execution of Services Concurrently - A method to execute an orchestration of computing services concurrently, the method including developing a representation of a set of services where each service relates to other services via different types of relationships. Also, applying a set of dependency rules for each type of relationship within the set of services such that the application of the dependency rules creates inter-step dependencies between steps representing state transitions of the set of services, and developing the orchestration plan based on the inter-step dependencies that allows for concurrent execution of nondependent steps. | 2022-05-26 |
20220164223 | ANTICIPATED CONTAINERIZED INFRASTRUCTURE USED IN PERFORMING CLOUD MIGRATION - Technology for causing a computer system to: receive a migration plan for migration of computer data and/or computer software, generate containerized migration file(s) according to the migration plan; copy the containerized migration file(s) into a set of container(s) so that the migration plan can be implemented using a container from the set of containers; and migrate computer data and/or computer software between a source computer sub-system and a target computer sub-system using a container from the set of containers to implement the migration plan. | 2022-05-26 |
20220164224 | LONG-TERM PROGRAMMATIC WORKFLOW MANAGEMENT - A system and method for long-term programmatic workflow execution, including: iteratively, while a suspension event is not detected: with a run, executing the code block using passed variable values from another code block; when a suspension event is detected, suspending run execution and persistently storing the run state; and resuming run execution responsive to receipt of a valid run resumption request. | 2022-05-26 |
20220164225 | SELF-PLAY TO IMPROVE TASK-ORIENTED DIALOG SYSTEMS AND METHODS - An automatic agent may be trained using reinforcement learning. A secret task may be obtained for a simulated user, and the secret task may be unknown to the automatic agent. At least one instruction to complete the secret task may be obtained from the simulated user according to at least one RL policy. At least one action may be generated by the automatic agent based on the at least one instruction and the at least one RL policy. Rewards may be determined for the simulated user and the automatic agent in response to determining that the at least one action successfully completes the secret task. The at least one RL policy may be adjusted based on the determined rewards. | 2022-05-26 |
20220164226 | LEVEL TWO FIRST-IN-FIRST-OUT TRANSMISSION - A hardware state machine connected to a processor, the hardware state machine configured to receive operational codes from the processor; a multiplexer connected to the processor, the hardware state machine and a checksum circuit, the multiplexer configured to receive data from the processor; and a transmit circuit connected to the multiplexer, the transmit circuit configured to receive data from the multiplexer for transmission to a far end device, wherein the hardware state machine is further configured to, responsive to receiving one or more operational codes from the processor: cause the checksum circuit to alter a checksum value of a first data packet being transmitted by the transmit circuit; and cause the transmit circuit to preempt transmission of the first data packet and begin transmitting a second data packet once the checksum value so altered has been transmitted from the transmit circuit. | 2022-05-26 |
20220164227 | CLUSTERING TENANTS BASED ON TENANCY KNOWLEDGE GRAPH - A computer-implemented method includes constructing a tenancy knowledge graph having a plurality of tenant nodes representing respective tenants in a multitenant computing environment, a plurality of property nodes representing respective properties of the tenants, and a plurality of edges connecting the plurality of tenant nodes and the plurality of property nodes, transforming the plurality of property nodes to corresponding property vectors, performing random walks starting from the plurality of tenant nodes of the tenancy knowledge graph, feeding sequences of nodes traversed by the random walks into a neural network to generate a plurality of tenant vectors corresponding to the plurality of tenant nodes, and clustering the plurality of tenant nodes into one or more tenant clusters based on similarity of the plurality of tenant vectors. | 2022-05-26 |
20220164228 | FINE-GRAINED VIRTUALIZATION RESOURCE PROVISIONING FOR IN-PLACE DATABASE SCALING - Fine-grained virtualization provisioning may be performed for in-place database scaling. Computing resource utilization for a database on a host system is obtained for a period of time. The computing resource utilization may be evaluated with respect to a target capacity for the database. If a scaling event is detected based on the evaluation, a modified target capacity may be determined and used to make an adjustment of the computing resources permitted to be used by the database. | 2022-05-26 |
20220164229 | MANAGING DEPLOYMENT OF WORKLOADS - Examples described herein relate to a management node and a method for managing deployment of a workload. The management node may obtain values of resource labels related to platform characteristics of a plurality of worker nodes. Further, the management node may determine values of one or more custom resource labels for each of the plurality of worker nodes, wherein a value of each custom resource label of the one or more custom resource labels is determined based on values of a respective set of resource labels of the resource labels. Furthermore, the management node may receive a workload deployment request including a workload description of a workload. Moreover, the management node may deploy the workload on a worker node of the plurality of worker nodes based on the workload description and the values of the one or more custom resource labels. | 2022-05-26 |
20220164230 | DISTRIBUTED MEDICAL SOFTWARE PLATFORM - Intelligent, distributed medical software management (e.g., using a computerized tool) is enabled. A system can comprise a processor, and a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations, comprising determining requirement information representative of one or more requirements of a medical application of a group of medical applications, wherein the medical application is associated with a medical device, based on the requirement information, allocating elements of a cluster employable to host and run the medical application in a medical application container, wherein the elements of the cluster are determined to satisfy the requirement information, and in response to allocating the elements of the cluster, hosting the medical application in the medical application container, wherein hosting the medical application comprises communicatively coupling the medical application to the medical device. | 2022-05-26 |
20220164231 | DETERMINE SPECIFIC DEVICES FROM FOLLOW-UP QUESTIONS - In some examples, a system can include a processing resource and a memory resource storing machine readable instructions to cause the processing resource to receive a description of a plurality of features of a specific device, identify distinguishable features for a plurality of different devices based on the description of the plurality of features, provide a plurality of follow-up questions based on the distinguishable features of the plurality of different devices, and determine a unique identifier for the specific device from the plurality of different devices based on the description and information received in response to the plurality of follow-up questions. | 2022-05-26 |
20220164232 | METHOD FOR MANAGING RESOURCES, COMPUTING DEVICE AND COMPUTER-READABLE STORAGE MEDIUM - A method for managing resources, a computing device, and a computer-readable storage medium are provided. The method includes obtaining device information of multiple physical devices included in a computing node to confirm physical devices supporting a predetermined hardware resource management method; initializing at least one physical device among the physical devices supporting the predetermined hardware resource management method as a unified device view device; allocating a virtual storage address of the unified device view device, where the virtual storage address is mapped to a physical storage address of the physical device participating in the unified device view; transmitting data to the virtual storage address of the unified device view device; and issuing a computing task to the unified device view device via a task queue for using the physical device participating in the unified device view to execute the computing task. | 2022-05-26 |
20220164233 | ACTIVITY ASSIGNMENT BASED ON RESOURCE AND SERVICE AVAILABILITY - An embodiment for resource management is provided. The embodiment may include receiving created text of an assigned activity to a proposed assignee. The embodiment may also include identifying information about the assigned activity. The embodiment may further include predicting resources and capabilities required to complete the assigned activity. The embodiment may also include identifying the proposed assignee. The embodiment may further include analyzing the resources and capabilities available on one or more devices of the proposed assignee. The embodiment may also include in response to determining the proposed assignee is able to complete the assigned activity, displaying to an assignor a predicted start time and time of completion of the assigned activity and in response to determining the proposed assignee is unable to complete the assigned activity, recommending to the assignor another assignee that is able to complete the assigned activity. | 2022-05-26 |
20220164234 | INTELLIGENT QUERY PLANNING FOR METRIC GATEWAY - In accordance with some aspects of the present disclosure, an apparatus is disclosed. The apparatus includes a processor and a memory, wherein the memory includes programmed instructions that when executed by the processor, cause the apparatus to receive a request to join a plurality of entity data structures using a first join order, determine a first performance cost of the first join order, determine a second performance cost of a second join order, determine whether the second performance cost is lower than the first performance cost, in response to determining that the second performance cost is lower than or exceeds the first performance cost, select the second join order or the first join order, respectively, join the plurality of entity data structures using the selected join order, and send the joined plurality of entity data structures. | 2022-05-26 |
20220164235 | Server-Based Workflow Management Using Priorities - A system prioritizes workflows based on priority levels and calculates an amount of resource consumption associated with the workflows. Each client starts a predefined time period with a certain amount of credits that indicate a degree of resource consumption. Workflows may be run with high priorities using credits. Workflows that are run with higher priorities are scheduled to run before workflows with lower priorities. A degree of resource consumption for running a workflow may be calculated based on resources consumed by the system, such as central processing unit (CPU), memory storage, network usage and elapsed time. The degree of resource consumption for running a workflow may be calculated and converted to an amount of credits and the respective amount of credits may be deducted from a credit balance associated with the client's account. The degree of resource consumption associated with a workflow may be estimated before the workflow starts running. | 2022-05-26 |
20220164236 | SYSTEM AND METHOD FOR ADAPTIVE RELEASE OF APPLICATION PROGRAM IN DIFFERENT ENVIRONMENTS - A system for adaptive release of an application program in different environments includes application servers and a configuration library which includes a configuration parameter matching with each of the application servers. When an application program that does not include a configuration parameter is transmitted to the configuration library, the configuration library configures, for the application program, the configuration parameter that corresponds to the application server in which release is to be made, and the application program that includes the configuration parameter is then transmitted to the corresponding application server. The present invention separates the configuration parameter from the application program and arranges an independent configuration library to store a configuration parameter matching with each one of the application servers, and when the application program is to be released, the configuration library conducts a corresponding configuration according to a release environment, reducing the workload of development staff and increasing working efficiency. | 2022-05-26 |
20220164237 | CONFIGURATION OF A PACKET PROCESSING PIPELINE - Examples described herein relate to a packet processing device comprising a programmable packet processing pipeline that is logically partitioned into multiple domains including privileged and unprivileged domains. The multiple domains can span one or more stages of the programmable packet processing pipeline, wherein at least one stage is to perform match action operations. | 2022-05-26 |
20220164238 | Computerized System for User-Directed Customization and User Interface Transformation - A system includes a processor and memory that stores asset identifiers. The asset identifiers correspond to a respective index and a respective category. The memory stores instructions for execution by the processor. The instructions include, in response to receiving a request signal from a user device, obtaining a set of asset identifiers corresponding to a first index indicated in the request signal and filtering the set of asset identifiers based on a first category indicated in the request signal. The instructions include generating an adjusted set of asset identifiers by, for each category represented in the filtered set of asset identifiers, adjusting a representation ratio of the asset identifiers associated with the corresponding category in response to the request signal including the representation ratio associated with the corresponding category, and transforming an interface of the user device by rendering a graphical depiction of the adjusted set of asset identifiers. | 2022-05-26 |
20220164239 | METHOD AND DEVICE FOR DELETING RESOURCE IN M2M SYSTEM - A method for operating a machine-to-machine (M2M) device in an M2M system includes generating a request message including information associated to deletion of a resource, the deletion being performed in response to an operation, and transmitting the request message to a counterpart M2M device. The information includes at least one of information notifying that the resource is deleted in response to the operation, information indicating the resource, information indicating at least one operation that is a condition of the deletion, information indicating content of the condition, or information for identifying an entity that performs an operation causing the deletion. | 2022-05-26 |
20220164240 | METHOD AND SYSTEM FOR PROVIDING ONE-CLICK DISTRIBUTION SERVICE IN LINKAGE WITH CODE REPOSITORY - Provided is a method for providing a one-click distribution service in linkage with a code repository, including transmitting, by a service platform, information on a source code included in a code repository selected by a user client to a manager node, allocating, by the manager node, a task associated with the source code to one or more worker nodes, and receiving information on a task result of the task from the one or more worker nodes, receiving, by the service platform, the information on the task result from the manager node, and transmitting, by the service platform, the information on the task result to the user client so that the user client outputs a user interface to execute the task result. | 2022-05-26 |
20220164241 | Methods and Apparatus for Selection of a Virtualisation Engine - Embodiments described herein relate to methods and apparatuses for selecting a first virtualisation engine to execute an application deployment request. A method in a selection engine is provided. | 2022-05-26 |
20220164242 | EDGE COMPUTING WORKLOAD BALANCING - A set of workload criteria is determined from a workload associated with a plurality of sources. The workload is divided among a set of workload groups according to the set of workload criteria and a first workload scheduler. A set of edge computing resources is assigned to each workload group within the set according to the set of workload criteria and the set of workload groups. A portion of the workload associated with a subset of the plurality of sources is handled by a first subset of edge computing resources and a second workload scheduler, where the subset of sources is associated with a first workload group. The handling includes balancing, by the second workload scheduler, the portion of the workload among the subset of sources. The handled workload is reported to a control center. | 2022-05-26 |
20220164243 | METHOD AND SYSTEM FOR ENABLING COMMUNICATION BETWEEN MULTIPLE VIRTUAL PLATFORMS - A computer system configured to enable communication between two or more virtual platforms is disclosed. The computer system comprises a physical processor configured to run the two or more virtual platforms. The computer system further comprises a memory. The memory comprises one or more separate memory portions allocated to each of the two or more virtual platforms, wherein within at least one memory portion allocated to one of the virtual platforms a predefined range of addresses is configured as a shared device memory, the shared device memory being accessible by all the virtual platforms. Firmware running on a first virtual platform is configured to transfer a data packet from the first virtual platform to one or more further virtual platforms via the shared device memory. | 2022-05-26 |
20220164244 | AGGREGATED HEALTH MONITORING OF A CLUSTER DURING TEST AUTOMATION - A system includes a cluster of nodes, memory, and a processor, where the cluster includes an application programming interface (API) server and one or more components. The processor is configured to initialize an interface to the API server, where the interface is operable to send status information from the one or more components within the cluster via a single output stream. The API server is configured to modify the single output stream of the API server to output status information associated with a first component of the one or more components within the cluster. The status information is aggregated and it is determined whether the cluster is at a failure point. In response to determining that the cluster is at a failure point, an execution signal is set to false, where the execution signal is accessible to an automation tool in communication with the cluster. | 2022-05-26 |
20220164245 | ON-BOARD FEEDBACK SYSTEM FOR AUTONOMOUS VEHICLES - A system includes an on-board electronic device of an autonomous vehicle, and a computer-readable medium having one or more programming instructions. The system receives one or more forecast messages pertaining to a track, where each of the forecast messages includes a unique identifier associated with the track, and receives one or more inference messages pertaining to the track, where each of the inference messages includes the unique identifier. The system aggregates the one or more forecast messages and the one or more inference messages to generate a message set, and applies a set of processing operations to the message set to generate a feedback message. The system identifies one or more events from the feedback message, automatically generates an annotation for one or more of the events that is identified, and embeds the generated annotations in an event log for the autonomous vehicle. | 2022-05-26 |
20220164246 | COMPUTER-BASED SYSTEMS AND/OR COMPUTING DEVICES CONFIGURED FOR ROOT CAUSE ANALYSIS OF COMPUTING INCIDENTS USING MACHINE LEARNING TO DETERMINE WHEN A SOFTWARE VERSION CHANGE IS A CAUSE OF A COMPUTING INCIDENT - An example method includes receiving incident data for historical incidents of downtime or interrupted service. The incident data includes identification information about one or more first computing applications, devices, or services affected by the downtime or the interrupted service. The incident data further includes timing information relating to the historical incidents and version history information of the one or more first computing applications, devices, or services. The method further includes receiving root cause data indicating a cause of the historical incidents and receiving action data indicating a corrective or preventative action taken or to be taken in response to each of the historical incidents. The method further includes training a machine learning algorithm using the incident, root cause, and action data to create a trained model configured to determine a root cause and a new corrective or preventative action for a new incident. | 2022-05-26 |
20220164247 | SYSTEM AND METHOD FOR MISSION CRITICAL SCREEN VALIDATION ON COMMERCIAL DISPLAYS - A method for verifying the integrity, continuity, and availability (ICA) of information displayed on an uncertified display is disclosed. The method includes creating application data for display on the uncertified display device that includes a subliminal symbol that is periodically embedded in a few out of multiple tens of frames and that is camouflaged in the application data using steganography; transmitting the application data to the uncertified display device for display; receiving images of the application data displayed on the display screen; determining whether the subliminal symbol is detected in the captured images by extracting the symbol from the captured images and comparing the extracted symbol to an expected symbol; determining that the application data is not corrupted when the extracted symbol matches the expected symbol; and identifying a loss of ICA when the subliminal symbol is not detected or does not match the expected symbol. | 2022-05-26 |
20220164248 | A SYSTEM AND METHOD FOR LABELING BITS OF CONTROLLER AREA NETWORK (CAN) MESSAGES - A system for generating a set of rules for detecting Controller Area Network (CAN) message anomalies, the system comprising a processing resource configured to: obtain a training set including a plurality of CAN messages, each CAN message having properties; train a model, using the training set, the model characterizing statistical relationships between one or more first types of CAN messages, each of a respective first CAN message type, and one or more second types of CAN messages, each of a respective second CAN message type, wherein the statistical relationships are based on one or more of the properties of the CAN messages of the training set; wherein the model is usable for identifying anomalies within a sequence of input CAN messages. | 2022-05-26 |
20220164249 | REAL-TIME TRIGGER TO DUMP AN ERROR LOG - In various embodiments, techniques can be provided to address debug efficiency for failures found on an operational system. The techniques can utilize a real-time trigger to notify a memory device to dump an error log to timely capture all needed information. In response to detecting one or more error conditions associated with the memory device, a system that interfaces with the memory device can generate a trigger signal to the memory device. In response to identifying the trigger signal, the memory device can dump an error log of the memory device to a memory component in the memory device. The error log can later be retrieved from the memory component for failure analysis. | 2022-05-26 |
20220164250 | PROACTIVE VOLTAGE DROOP REDUCTION AND/OR MITIGATION IN A PROCESSOR CORE - Techniques facilitating voltage droop reduction and/or mitigation in a processor core are provided. In one example, a system can comprise a memory that stores, and a processor that executes, computer executable components. The computer executable components can comprise an observation component that detects one or more events at a first stage of a processor pipeline. An event of the one or more events can be a defined event determined to increase a level of power consumed during a second stage of the processor pipeline. The computer executable components can also comprise an instruction component that applies a voltage droop mitigation countermeasure prior to the increase of the level of power consumed during the second stage of the processor pipeline and a feedback component that provides a notification to the instruction component that indicates a success or a failure of a result of the voltage droop mitigation countermeasure. | 2022-05-26 |
20220164251 | APPARATUS WITH LATCH CORRECTION MECHANISM AND METHODS FOR OPERATING THE SAME - Methods, apparatuses, and systems related to an apparatus are described. The apparatus may include (1) a fuse array configured to provide non-volatile storage of fuse data and (2) local latches configured to store the fuse data during runtime of the apparatus. The apparatus may further include an error processing circuit configured to determine error detection-correction data for the fuse data. The apparatus may subsequently broadcast data stored in the local latches to the error processing circuit to determine, using the error detection-correction data, whether the locally latched data has been corrupted. The error processing circuit may generate corrected data to replace the locally latched data based on determining corruption in the locally latched data. | 2022-05-26 |
20220164252 | ERROR CORRECTING CODES FOR MULTI-MASTER MEMORY CONTROLLER - An apparatus includes a central processing unit (CPU) core and a cache subsystem coupled to the CPU core. The cache subsystem includes a memory configured to store a line of data and an error correcting code (ECC) syndrome associated with the line of data, where the ECC syndrome is calculated based on the line of data and the ECC syndrome is a first type ECC. The cache subsystem also includes a controller configured to, in response to a request from a master configured to implement a second type ECC, the request being directed to the line of data, transform the first type ECC syndrome for the line of data to a second type ECC syndrome and send a response to the master. The response includes the line of data and the second type ECC syndrome associated with the line of data. | 2022-05-26 |
20220164253 | QUANTUM COMPUTING SYSTEM AND OPERATION METHOD THEREOF - Disclosed is a quantum computing system including a first quantum chip including first physical qubits, a second quantum chip including second physical qubits, and a management device. The management device includes a physical qubit layer that manages physical qubit mapping including information about physical channels between first and second physical qubits, an abstraction qubit layer that manages abstraction qubit mapping including information about abstraction qubits and abstraction channels between the abstraction qubits based on the physical qubit mapping, a logical qubit layer that divides the abstraction qubits into logical qubits and manages logical qubit mapping including information about logical channels between the logical qubits, based on the abstraction qubit mapping, and an application qubit layer that allocates at least one logical qubit corresponding to a qubit request received from a quantum application program based on the logical qubit mapping. | 2022-05-26 |
20220164254 | Instruction Error Handling - An instruction storage circuit within a processor includes an instruction memory and a memory control circuit. The instruction memory is configured to store instructions of a program for the processor. The memory control circuit is configured to receive a particular instruction from the instruction memory, detect a data integrity error in the particular instruction, and generate and store a corrected version of the particular instruction in an error storage circuit within the instruction memory. A flush of an execution pipeline may be performed in response to the error. In response to a refetch of the particular instruction after the pipeline flush, the instruction storage circuit may be configured to cause the particular instruction to be provided from the error storage circuit to the execution pipeline to permit forward progress of the processor. | 2022-05-26 |
20220164255 | Checkpointing - A system comprising: a first subsystem comprising at least one first processor, and a second subsystem comprising one or more second processors. A first program is arranged to run on the at least one first processor, the first program being configured to send data from the first subsystem to the second subsystem. A second program is arranged to run on the one or more second processors, the second program being configured to operate on the data content from the first subsystem. The first program is configured to set a checkpoint at one or more points in time. At each checkpoint it records in memory of the first subsystem i) a program state of the second program, comprising a state of one or more registers on each of the second processors at the time of the checkpoint, and ii) a copy of the data content sent to the second subsystem since the respective checkpoint. | 2022-05-26 |
20220164256 | CAPTURING AND RESTORING DISPLAY CONTENT FROM MULTIPLE DISPLAY DEVICES - Capturing and restoring display content from multiple display devices, including: receiving a request to create a profile associated with one or more display devices; capturing, from the one or more display devices, information describing windows displayed on each of the display devices; associating the captured information with the profile; receiving a request to restore display content associated with a particular profile; and creating, on the one or more display devices, one or more windows using the information describing windows displayed on each of the display devices. | 2022-05-26 |
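The capture/restore flow in the abstract above can be sketched as below. The window data model, profile store, and function names are all hypothetical; a real implementation would issue window-manager calls rather than return dictionaries:

```python
# Illustrative sketch: window geometry per display is captured into a named
# profile, then replayed to recreate the layout on the same displays.

profiles: dict = {}

def capture_profile(name: str, displays: list) -> None:
    # Record every window (owning display, app, position, size).
    profiles[name] = [
        {"display": d["id"], **w} for d in displays for w in d["windows"]
    ]

def restore_profile(name: str) -> list:
    # Return the window-creation requests that would be replayed.
    return [dict(w) for w in profiles.get(name, [])]

displays = [
    {"id": 0, "windows": [{"app": "editor", "x": 0, "y": 0, "w": 800, "h": 600}]},
    {"id": 1, "windows": [{"app": "terminal", "x": 100, "y": 50, "w": 640, "h": 480}]},
]
capture_profile("work", displays)
restored = restore_profile("work")
```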
20220164257 | METHODS, APPARATUSES AND COMPUTER PROGRAM PRODUCTS FOR UPDATING A CARD DATA OBJECT RENDERING INTERFACE BASED ON CARD ACTION INPUTS, REVERSAL CARD ACTION INPUTS, REINSTATE CARD ACTION INPUTS, AND REPLICATE CARD ACTION INPUTS - Various embodiments herein described are directed to methods, apparatuses and computer program products configured for improving human-user interactions and interfaces in card-based collaborative workflow management systems. In some embodiments, a client device may generate card action data objects that may monitor, track, and/or support a sequence of user inputs associated with a card data object, such that one or more user inputs can be reverted, reinstated, and/or replicated on another data object. Additional example embodiments provide various example card data object rendering interfaces that facilitate various user inputs and software operations in a card-based collaborative workflow management system. | 2022-05-26 |
20220164258 | HASHING INFORMATION OF AN INPUT/OUTPUT (I/O) REQUEST AGAINST A PLURALITY OF GATEWAY NODES - A computer-implemented method according to one embodiment includes receiving, on a first cluster site, a first I/O request to migrate a plurality of filesets from a second cluster site to the first cluster site. The first cluster site includes a plurality of gateway nodes. The method further includes identifying at least two of the gateway nodes having resources available to perform operations of the migration, and hashing information of a plurality of filesets against the identified gateway nodes. The information includes inode numbers of entities that are mounted during fulfillment of the first I/O request. Operations of the first I/O request are distributed to the identified gateway nodes based on the hashing, and the identified gateway nodes are instructed to fulfill the operations. | 2022-05-26 |
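The distribution step in the abstract above — hashing inode numbers against the identified gateway nodes — can be sketched as a simple modular hash. The hash choice and node names are illustrative assumptions, not details from the patent:

```python
# Illustrative sketch: each fileset's inode number is hashed against the set
# of available gateway nodes, so migration operations are distributed
# deterministically across them.
import hashlib

def assign_gateway(inode: int, gateways: list) -> str:
    digest = hashlib.sha256(str(inode).encode()).hexdigest()
    return gateways[int(digest, 16) % len(gateways)]

gateways = ["gw-a", "gw-b"]
assignments = {ino: assign_gateway(ino, gateways) for ino in [101, 102, 103, 104]}
```

Because the mapping depends only on the inode number and the node list, repeated I/O for the same fileset lands on the same gateway without any shared coordination state.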
20220164259 | CREATING A BACKUP DATA SET - A computer-implemented method according to one embodiment includes identifying a first data set to be backed up, where the first data set is stored on a first storage volume; removing empty data tracks from the first data set to create an intermediary data set; storing the intermediary data set at a plurality of secondary storage volumes different from the first storage volume; and creating a backup data set for the first data set, utilizing the intermediary data set. | 2022-05-26 |
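The intermediary-data-set idea in the abstract above can be sketched as follows. Modeling an "empty track" as all-zero bytes and replicating by copying lists are simplifying assumptions for illustration:

```python
# Illustrative sketch: empty tracks are stripped from the source data set to
# form a smaller intermediary set, which is then stored on several secondary
# volumes and used to create the backup.

def make_intermediary(tracks: list) -> list:
    # An "empty" track is modeled here as all zero bytes.
    return [t for t in tracks if any(t)]

def store_backup(intermediary: list, volumes: int) -> list:
    # Replicate the intermediary set across the secondary volumes.
    return [list(intermediary) for _ in range(volumes)]

tracks = [b"\x00\x00", b"data1", b"\x00\x00\x00", b"data2"]
intermediary = make_intermediary(tracks)
backups = store_backup(intermediary, volumes=2)
```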
20220164260 | SYSTEM AND METHOD FOR ROBUST, EFFICIENT, ADAPTIVE STREAMING REPLICATION APPLICATION PROTOCOL WITH DANCING RECOVERY FOR HIGH-VOLUME DISTRIBUTED LIVE SUBSCRIBER DATASETS - A system and method for facilitating a robust, efficient, adaptive streaming replication application protocol with dancing recovery for high-volume distributed subscriber datasets. Master computing devices stream data packets to downstream replicated peer computing devices on a network to maintain live replicated peers. Upon receipt, data packets may be evaluated to determine whether they are next-in-line using efficient checksum disambiguation which enables unambiguous onboarding of next-in-line packets. Links among master and replicated peer devices, as well as replicated peers having replicated peers of their own, can be ranked to determine the most efficient routes and most reliable devices to achieve live continuous streaming of data on potentially unreliable devices and links. Link-based scoring and popularity rankings among replicated peers and masters achieve optimization of the network of replicated peers. Dancing recovery of replicated peers after being taken offline from masters enables seamless recovery and rejoining of live streaming. | 2022-05-26 |
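The "next-in-line" onboarding check described in the abstract above can be sketched roughly as below. The use of a CRC-32 checksum and the `Replica` class shape are illustrative assumptions; the patent's actual checksum disambiguation scheme is not specified here:

```python
# Illustrative sketch: a replicated peer accepts a streamed packet only when
# its sequence number is the one expected next AND a checksum over the
# payload matches, so out-of-order or corrupted packets are rejected.
import zlib

class Replica:
    def __init__(self):
        self.expected_seq = 0
        self.log = []

    def onboard(self, seq: int, payload: bytes, checksum: int) -> bool:
        if seq != self.expected_seq or zlib.crc32(payload) != checksum:
            return False          # not next-in-line, or corrupted: reject
        self.log.append(payload)  # unambiguous onboarding of the packet
        self.expected_seq += 1
        return True

r = Replica()
ok_first = r.onboard(0, b"update-a", zlib.crc32(b"update-a"))
ok_skip = r.onboard(2, b"update-c", zlib.crc32(b"update-c"))  # gap: rejected
```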
20220164261 | MANAGING RESTORE WORKLOADS USING WEIBULL MODULUS - One example method includes determining a modulus such as a Weibull modulus for a recovery operation. Enablement and disablement of a read ahead cache are performed based on the modulus. The modulus is a linearization of a cumulative distribution function, where failures correspond to non-sequential accesses and successes correspond to sequential accesses. | 2022-05-26 |
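The Weibull linearization the abstract above relies on is standard: the CDF F(x) = 1 - exp(-(x/λ)^m) rearranges to ln(-ln(1 - F)) = m·ln(x) - m·ln(λ), so the modulus m is the slope of a straight-line fit. The sketch below fits m from synthetic samples; the median-rank plotting positions and the sample data are illustrative choices, not from the patent:

```python
# Illustrative sketch: estimate the Weibull modulus as the least-squares
# slope of the linearized CDF, ln(-ln(1 - F)) versus ln(x).
import math

def weibull_modulus(samples: list) -> float:
    xs = sorted(samples)
    n = len(xs)
    # Median-rank estimate of F for each ordered sample.
    pts = [(math.log(x), math.log(-math.log(1 - (i - 0.3) / (n + 0.4))))
           for i, x in enumerate(xs, start=1)]
    # Least-squares slope of the linearized points = modulus m.
    mx = sum(px for px, _ in pts) / n
    my = sum(py for _, py in pts) / n
    num = sum((px - mx) * (py - my) for px, py in pts)
    den = sum((px - mx) ** 2 for px, _ in pts)
    return num / den

# Exact quantiles of a Weibull(m=2, lam=1) should recover a modulus near 2.
samples = [(-math.log(1 - q)) ** 0.5 for q in [0.1, 0.3, 0.5, 0.7, 0.9]]
m = weibull_modulus(samples)
```

In the abstract's setting, a low fitted modulus (dominated by non-sequential "failures") would argue for disabling the read-ahead cache, and a high one for enabling it.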
20220164262 | CRITICAL DATA STORAGE - An example system for critical data storage can include a first controller comprising a processor and a non-transitory machine-readable medium (MRM) communicatively coupled to the processor. The non-transitory MRM can include instructions executable by the processor to cause the processor to receive a request for a new critical data type, store the new critical data type in a reserve within the non-transitory MRM, and restore the new critical data type from the reserve to a second controller responsive to replacement of the first controller with the second controller and a subsequent firmware update. | 2022-05-26 |
20220164263 | ERROR-HANDLING FLOWS IN MEMORY DEVICES BASED ON BINS - An example memory sub-system includes a memory device and a processing device, operatively coupled to the memory device. The processing device is configured to detect a power-up state of the memory device following a power loss event; detect a read error with respect to data residing in a block of the memory device, wherein the block is associated with a current voltage offset bin; and perform temporal voltage shift (TVS)-oriented calibration for associating the block with a new voltage offset bin. | 2022-05-26 |
20220164264 | REDUNDANT BUS CIRCUIT BREAKER ADAPTER ASSEMBLY AND POWER DISTRIBUTION SYSTEM - An apparatus, system and method of efficiently configuring a power distribution system includes the provision of a dual-bus power distribution assembly, each bus of which may be connected to power sources and where the circuit breakers are adapted such that power from either of the two buses can be routed to an electrical load to provide redundant or non-redundant power, as required. Each circuit breaker position is capable of being configured to connect between either of the two buses and an individual load equipment supply bus. The circuit breaker may be a plug-in type where one of the terminals is adapted by a part that may be installed in one of two orientations. In the first orientation a first bus is connected to the individual load equipment supply bus and in the second orientation the second bus is connected to the individual load equipment supply bus. | 2022-05-26 |
20220164265 | MEDIATOR ASSISTED SWITCHOVER BETWEEN CLUSTERS - Techniques are provided for metadata management for enabling automated switchover. An initial quorum vote may be performed before a node executes an operation associated with metadata comprising operational information and switchover information. After the initial quorum vote is performed, the node executes the operation upon one or more mailbox storage devices. Once the operation has executed, a final quorum vote is performed. The final quorum vote and the initial quorum vote are compared to determine whether the operation is to be designated as successful or failed, and whether any additional actions are to be performed. | 2022-05-26 |
20220164266 | CLIENT-LESS DATABASE SYSTEM RECOVERY - One or more computer processors install a trigger on a primary database. The one or more computer processors, responsive to the trigger activating and a data modification associated with the primary database, format the data modification into a universal format for a plurality of backup databases. The one or more computer processors rotate an active backup database from the plurality of backup databases based on a determined alternating backup period. The one or more computer processors synchronize in real-time the primary database with the active backup database. | 2022-05-26 |
20220164267 | FAILOVER BETWEEN DECENTRALIZED IDENTITY STORES - A computing system is configured to receive user data from a user associated with a decentralized identifier (DID) and authenticate the user based on the DID via data recorded on a distributed ledger. In response to authenticating the user, the computing system stores the user data redundantly at each of a plurality of decentralized identity stores. One of the plurality of decentralized identity stores is designated as a primary decentralized identity store. In particular, redundantly storing the user data includes storing the user data at the primary decentralized identity store, and causing each remaining decentralized identity store in the plurality of decentralized identity stores to store the user data following the primary decentralized identity store. | 2022-05-26 |
20220164268 | TRANSMISSION LINK TESTING - A computing system can comprise a processing resource and a memory device coupled together via a first transmission link. The processing resource can be configured to test the first transmission link in response to the memory device failing to execute a command by sending the command to the memory device again for retry and monitoring the first transmission link for signals that indicate whether the command was executed by the memory device. | 2022-05-26 |
20220164269 | TEST INPUT/OUTPUT SPEED CONVERSION AND RELATED APPARATUSES AND METHODS - Test input/output speed conversion and related apparatuses and methods are disclosed. An apparatus includes a glue circuit and a BIST circuit for core circuitry of an integrated circuit device. The BIST circuit includes a test interface, one or more inputs, and one or more outputs. The BIST circuit is configured to operate at a first speed. The glue circuit is configured to interface with the test interface, the one or more inputs, and the one or more outputs of the BIST circuit. The glue circuit is configured to convert between second speed test interface signals and second speed input/output signals operating at a second speed and first speed test interface signals and first speed input/output signals operating at the first speed. The second speed is different from the first speed. | 2022-05-26 |
20220164270 | AUTOMATIC OPTIMIZATION AND HARDENING OF APPLICATION IMAGES - A computer receives, from within a system application comprising one or more applications that communicate with one or more operating systems (OS), a selection of a target application. The computer creates a stub application for the target application that mimics the entry and exit points of the target application. The computer isolates the target application externally to the system application. The computer establishes one or more network connections connecting the isolated target application and the stub application to process communication between the isolated target application and the system application. The computer generates an OS tracing system that logs file and directory accesses of the isolated target application. The computer monitors the runtime behavior of the isolated target application, using logs of the OS tracing system, to identify files used by the target application. The computer determines a set of files not used by the target application. The computer hardens the target application by either removing the determined set of files not used by the target application or monitoring access to the determined set of files and generating an alert upon such access. | 2022-05-26 |
20220164271 | MANAGING SYNCHRONIZED REBOOT OF A SYSTEM - Examples described herein relate to a system including a first management system having a primary memory including a free memory, a used memory, and a loosely reserved memory, where the loosely reserved memory comprises cache memory having a reclaimable memory; and a processing resource coupled to the primary memory. The processing resource may monitor an amount of the used memory and an amount of an available memory during runtime of the first management system. Further, the processing resource may enable a synchronized reboot of the first management system if the amount of the used memory is greater than a memory exhaustion first threshold or the amount of the available memory is less than a memory exhaustion second threshold, wherein the memory exhaustion first threshold and the memory exhaustion second threshold are determined based on usage of the reclaimable memory and a number of major page faults. | 2022-05-26 |
20220164272 | APPLICATION PROGRAM MANAGEMENT METHOD AND APPARATUS, AND STORAGE MEDIUM - An application program management method and apparatus, and a non-transitory computer-readable storage medium are disclosed. The application program management method may include: determining a current extra inspection policy for a target application program according to a current running type of the target application program in response to a determination that a freezing detection of the target application program is required; determining a current inspection policy corresponding to the target application program based on a basic inspection policy corresponding to the target application program and the current extra inspection policy; and freezing the target application program in response to a determination that a running state of the target application program satisfies the current inspection policy. | 2022-05-26 |
20220164273 | CALCULATING INDIVIDUAL CARBON FOOTPRINTS - Behavior data associated with a user is obtained. The behavior data is generated when the user uses an Internet service and includes a user identification and identification information indicating the Internet service. At least one predefined carbon-saving quantity quantization algorithm is determined based on the identification information related to the Internet service. A carbon-saving quantity associated with the user is calculated based on the obtained behavior data and the determined at least one predefined carbon-saving quantity quantization algorithm. Based on the calculated carbon-saving quantity associated with the user and the user identification, user data is processed. The user data is related to the carbon-saving quantity associated with the user. | 2022-05-26 |
20220164274 | METHOD, DEVICE, AND PROGRAM PRODUCT FOR MANAGING STORAGE POOL OF STORAGE SYSTEM - Storage devices in a pool are divided into at least one group, with a first number of storage devices in an existing group kept within a range. When a second number of storage devices are added to the pool, a sum of the first number and the second number is determined. A new group is created based on at least a portion of the second number of storage devices when the sum does not satisfy the range, and another portion of the second number of storage devices is added to the existing group. A first storage space portion in each of a set of shared storage devices selected from the existing group is allocated to the existing group, and a second storage space portion in each of the set of shared storage devices is allocated to the new group. The storage space utilization rate can be increased. | 2022-05-26 |
20220164275 | METHOD FOR BLOCKING EXTERNAL DEBUGGER APPLICATION FROM ANALYSING CODE OF SOFTWARE PROGRAM - A method for blocking an external debugger application from analysing the code of a software program installed on a computing device. The method includes initializing the software program, which includes an application program and an internal debugger application. The software program, upon initialization thereof, instructs the internal debugger application to load the application program in the internal debugger application. The internal debugger application is configured to utilize kernel resources of an operating system of the computing device. The method includes executing the internal debugger application to set one or more break-points in the code of the application program to define an execution path for the code of the application program, executing the application program as per the defined execution path for the code thereof, stopping execution of the code of the application program upon reaching any of the one or more break-points therein, and handing control to the internal debugger application to provide an address for the next instruction to be executed in the defined execution path for the code of the application program. | 2022-05-26 |
20220164276 | PRELOADING DEBUG INFORMATION BASED ON THE INCREMENT OF APPLICATION - A method, computer program product and system are provided for preloading debug information based on the presence of incremental source code files. Based on parsed input parameters to a source code debugger, a source code repository and a local storage area are searched for an incremental file. In response to the incremental file being located, a preload indicator in the incremental file, which is a source code file, is set. Based on the preload indicator being set, debug symbol data from the incremental file is merged to a preload symbol list. In response to receiving a command to examine the debug symbol data from the incremental file, the preload symbol list is searched for the requested debug symbol data. | 2022-05-26 |
20220164277 | Analysis and Testing of Embedded Code - A method, system and product comprising determining a characterization of a terminal of a plurality of terminals within a binary code based on influences of the terminal, wherein the characterization of the terminal indicates a role of the terminal in the binary code; based on the characterization of the terminal, determining that the terminal is potentially affected by external input that is inputted to a device executing the binary code; determining for the terminal a corresponding propagation path within the binary code, wherein the propagation path indicates a reachability of the terminal within the binary code; locating in the binary code a code patch associated with a functionality of the binary code, wherein the code patch is associated with the propagation path of the terminal, wherein the code patch can be executed independently from the binary code; extracting the code patch from the binary code for testing; and generating an emulation of the code patch to enable fuzz testing of the emulation, whereby the code patch is tested independently. | 2022-05-26 |
20220164278 | SYSTEM AND METHOD FOR AUTOMATED TESTING OF AN ACCESSIBILITY FEATURE OF A USER DEVICE BY EMULATING ACTIONS OF USERS - A system for automating testing of an accessibility screen-reader for a software application includes an accessibility testing module. The accessibility testing module communicates a set of input commands to a user device in which the software application is installed. The set of input commands emulates a set of actions being performed on the software application. For each input command, an audio of a string of utterances is received when the accessibility screen-reader produces the audio. The audio is converted to a text of the string of utterances. The text is compared with a corresponding test string that is expected to be uttered by the accessibility screen-reader when a corresponding action is performed on the software application. If it is determined that the text matches the corresponding test string, it is concluded that the accessibility screen-reader uttered the corresponding test string that was expected to be uttered. | 2022-05-26 |
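The comparison step in 20220164278 — matching the transcribed screen-reader audio against the test string expected for an action — might look like the sketch below. The whitespace and case normalization is an assumption, since the application does not specify how the strings are compared:

```python
def verify_utterance(transcribed, expected):
    """Compare the speech-to-text output of the screen-reader audio against
    the expected test string, ignoring case and extra whitespace."""
    norm = lambda s: " ".join(s.lower().split())
    return norm(transcribed) == norm(expected)
```

A passing comparison supports the conclusion that the screen-reader uttered what was expected for the emulated action; a failing one flags an accessibility defect.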
20220164279 | TEST AUTOMATION FOR ROBOTIC PROCESS AUTOMATION - A robotic process automation (RPA) robot performs fuzzing on a workflow. The robot provides a randomized typed data input in a workflow, and executes the workflow as a black box with the randomized typed data input. The robot creates a test case when a new path is discovered based on an output of the workflow, and terminates the fuzzing when a desired path coverage has been reached. | 2022-05-26 |
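The black-box fuzzing loop of 20220164279 can be sketched in a few lines: random typed inputs go in, each newly observed output path yields a test case, and fuzzing stops at a coverage goal. Treating the workflow output itself as the path identifier is an assumption for illustration:

```python
import random

def fuzz_workflow(workflow, input_space, target_coverage, max_iters=1000):
    """Black-box fuzzing: feed randomized typed inputs, record a test case
    for each newly discovered path, stop when the coverage goal is met."""
    seen_paths, test_cases = set(), []
    for _ in range(max_iters):
        value = random.choice(input_space)
        path = workflow(value)          # output identifies the path taken
        if path not in seen_paths:
            seen_paths.add(path)
            test_cases.append((value, path))
        if len(seen_paths) >= target_coverage:
            break
    return test_cases
```

Each recorded `(input, path)` pair can later replay the workflow deterministically as a regression test.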
20220164280 | STORAGE DEVICE AND OPERATING METHOD THEREOF - A storage device can be designed to reduce latency in a read operation. Such a storage device can include: a memory device including a plurality of pages that include a first page and a second page different from the first page, each page including a plurality of memory cells that are configured to store data; and a memory controller in communication with the memory device and for sequentially storing result values of a function with respect to a plurality of input values in the plurality of memory cells, and controlling the memory device to store a result value in a last area of the first page and a start area of the second page. | 2022-05-26 |
20220164281 | WRITE GRANULARITY FOR STORAGE SYSTEM - A storage array controller may receive a write request comprising data to be stored at one or more solid-state storage devices. A write granularity associated with the write request may be generated that is less than a logical block size associated with the storage array controller. The data associated with the write request may be segmented based on the generated write granularity. The write request may be executed to store the segmented data at the one or more solid-state storage devices. | 2022-05-26 |
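The segmentation step of 20220164281 — splitting a write payload into chunks of a granularity smaller than the controller's logical block size — reduces to simple slicing. The byte-oriented representation is an assumption for illustration:

```python
def segment(data, granularity):
    """Split a write payload into granularity-sized chunks; the final chunk
    may be shorter when the payload is not an exact multiple."""
    return [data[i:i + granularity] for i in range(0, len(data), granularity)]
```

Each chunk would then be dispatched to the solid-state storage devices as its own write.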
20220164282 | REDUCING LOAD BALANCING WORK STEALING - Embodiments are disclosed for a method. The method includes determining that a thief thread attempted a work steal from a garbage collection (GC) owner queue. Additionally, the method includes determining that a number of tasks in the GC owner queue meets a predetermined threshold. Further, the method includes determining that the GC owner queue comprises a heavy-weight task. The method also includes moving the heavy-weight task to a top position of the GC owner queue. | 2022-05-26 |
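The queue manipulation in 20220164282 can be sketched as follows. The direction of the threshold test and the assumption that "top position" means the owner's end of the deque (out of thieves' reach) are interpretations; the abstract leaves both open:

```python
from collections import deque

def protect_from_steal(queue, threshold, is_heavy):
    """After a steal attempt: if the GC owner queue is at or below the
    threshold and holds a heavy-weight task, move that task to the top
    (owner's end) so it is not stolen."""
    if len(queue) > threshold:
        return queue
    for task in list(queue):
        if is_heavy(task):
            queue.remove(task)
            queue.appendleft(task)   # top position = owner's end (assumption)
            break
    return queue
```

Keeping the heavy-weight task with the owner avoids paying the transfer cost of work stealing on the one task where it hurts most.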
20220164283 | SELECTIVE GARBAGE COLLECTION - Methods, systems, and devices for selective garbage collection are described. A host system may determine that a battery level is below a threshold or determine whether a power parameter of a memory system that includes a memory device satisfies a criterion. The host system may set a value of a flag. The memory system may perform an access operation and identify the value of the flag. The memory system may determine whether performing a garbage collection procedure is permitted based on identifying the value of the flag. | 2022-05-26 |
20220164284 | IN-MEMORY ZERO VALUE DETECTION - In some embodiments, an integrated circuit may include a substrate and a memory array disposed on the substrate, where the memory array includes a plurality of discrete memory banks. The integrated circuit may also include a processing array disposed on the substrate, where the processing array includes a plurality of processor subunits, each one of the plurality of processor subunits being associated with one or more discrete memory banks among the plurality of discrete memory banks. The integrated circuit may also include a controller configured to implement at least one security measure with respect to an operation of the integrated circuit and take one or more remedial actions if the at least one security measure is triggered. | 2022-05-26 |
20220164285 | COMPENSATING FOR DRAM ACTIVATION PENALTIES - In some embodiments, an integrated circuit may include a substrate and a memory array disposed on the substrate, where the memory array includes a plurality of discrete memory banks. The integrated circuit may also include a processing array disposed on the substrate, where the processing array includes a plurality of processor subunits, each one of the plurality of processor subunits being associated with one or more discrete memory banks among the plurality of discrete memory banks. The integrated circuit may also include a controller configured to implement at least one security measure with respect to an operation of the integrated circuit and take one or more remedial actions if the at least one security measure is triggered. | 2022-05-26 |
20220164286 | MEMORY CONTROLLER, SYSTEM INCLUDING THE SAME, AND OPERATING METHOD OF MEMORY DEVICE - A device includes: a first interface circuit configured to communicate with a host processor; a second interface circuit configured to communicate with a memory comprising a plurality of storage regions; a cache memory including a plurality of cache lines configured to temporarily store data; and a controller configured to receive an integrated command from the host processor, the integrated command comprising memory operation information and cache management information, configured to control the memory based on a first command that is instructed according to the memory operation information, and configured to control at least one of the plurality of cache lines based on the cache management information. | 2022-05-26 |
20220164287 | CACHE COHERENCE SHARED STATE SUPPRESSION - A method includes receiving, by a level two (L2) controller, a first request for a cache line in a shared cache coherence state; mapping, by the L2 controller, the first request to a second request for a cache line in an exclusive cache coherence state; and responding, by the L2 controller, to the second request. | 2022-05-26 |
20220164288 | Configurable Cache Coherency Controller - Entries in a cluster-to-caching agent map table of a data processing network identify one or more caching agents in a caching agent cluster. A snoop filter cache stores coherency information that includes coherency status information and a presence vector, where a bit position in the presence vector is associated with a caching agent cluster in the cluster-to-caching agent map table. In response to a data request, a presence vector in the snoop filter cache is accessed to identify a caching agent cluster and the map table is accessed to identify target caching agents for snoop messages. In order to reduce message traffic, snoop messages are sent only to the identified targets. | 2022-05-26 |
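The presence-vector lookup in 20220164288 can be sketched as expanding set bits through the cluster-to-caching-agent map. Representing the map as a list indexed by bit position is an illustrative choice:

```python
def snoop_targets(presence_vector, cluster_map):
    """Expand a per-cluster presence vector into the list of caching agents
    that must receive snoop messages; clusters with a clear bit are skipped,
    which is what reduces snoop traffic."""
    targets = []
    for bit, agents in enumerate(cluster_map):
        if presence_vector & (1 << bit):   # cluster `bit` may hold the line
            targets.extend(agents)
    return targets
```

Because a bit covers a whole cluster rather than a single agent, the presence vector stays small even as the number of caching agents grows.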
20220164289 | COMPUTING METHOD AND DEVICE WITH DATA SHARING - A computing method and device with data sharing are provided. The method includes loading, by a loader, input data of an input feature map stored in a memory in loading units according to a loading order, storing, by a buffer controller, the loaded input data in a reuse buffer at an address rotationally allocated according to the loading order, and transmitting, by each of a plurality of senders, to an executer respective input data corresponding to each output data of respective convolution operations among the input data stored in the reuse buffer, wherein portions of the transmitted respective input data overlap each other. | 2022-05-26 |
20220164290 | STACKED MEMORY DICE FOR COMBINED ACCESS OPERATIONS - Methods, systems, and devices for stacked memory dice and combined access operations are described. A device may include multiple memory dice. One die may be configured as a master, and another may be configured as a slave. The master may communicate with a host device. A slave may be coupled with the master but not the host device. The device may include a first die (e.g., master) and a second die (e.g., slave). The first die may be coupled with a host device and configured to output a set of data in response to a read command. The first die may supply a first subset of the data and obtain a second subset of the data from the second die. In some cases, the first die may select, based on a data rate, a modulation scheme (e.g., PAM4, NRZ, etc.) and output the data using the selected modulation scheme. | 2022-05-26 |
20220164291 | Effective PCIe Utilization by PCIe TLP Coalescing - The present disclosure generally relates to effective transaction layer packet (TLP) utilization. When the controller of the data storage device generates a request for transferring data to or from the storage device, the request is stored in a merging buffer. The merging buffer may include previously generated requests, where the previously generated requests and the new requests are merged. A timeout counter is initialized for the requests stored in the merging buffer. The timeout counter has a configurable threshold value that corresponds to a weight value, adjusted for latency or bandwidth considerations. When the merged request is greater than the maximum TLP size, the merged request is partitioned, where at least one partition is in the size of the maximum TLP size. The request is sent from the buffer when the request is in the size of the maximum TLP size or when the threshold value is exceeded. | 2022-05-26 |
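The merge/partition/timeout behavior of 20220164291 can be sketched as a small buffer class. The maximum TLP size, the tick-based timeout, and restarting the counter on new data are all illustrative assumptions:

```python
MAX_TLP_SIZE = 512          # assumed maximum TLP payload, in bytes
TIMEOUT_THRESHOLD = 4       # assumed configurable timeout, in ticks

class MergingBuffer:
    """Coalesce small transfer requests into maximum-size TLPs; flush a
    partial request once its timeout counter exceeds the threshold."""
    def __init__(self):
        self.pending = b""
        self.ticks = 0

    def add(self, request):
        """Merge a new request; emit any full-size TLP partitions."""
        self.pending += request
        self.ticks = 0                                 # restart timeout on new data
        out = []
        while len(self.pending) >= MAX_TLP_SIZE:
            out.append(self.pending[:MAX_TLP_SIZE])    # full-size TLP partition
            self.pending = self.pending[MAX_TLP_SIZE:]
        return out

    def tick(self):
        """Advance the timeout counter; flush a timed-out partial TLP."""
        self.ticks += 1
        if self.pending and self.ticks > TIMEOUT_THRESHOLD:
            out, self.pending = self.pending, b""
            return [out]
        return []
```

The threshold trades latency against bandwidth: a higher value gives more time for requests to coalesce into full-size TLPs, a lower one flushes partial TLPs sooner.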
20220164292 | DATA STORAGE DEVICE AND OPERATING METHOD THEREOF - A data storage device may include a first memory apparatus including a plurality of data blocks having data classified in units of data blocks; a second memory apparatus in communication with the first memory apparatus to store data cached from the first memory apparatus; and a controller in communication with the first memory apparatus and the second memory apparatus and configured to control the first memory apparatus with respect to data stored in the first memory apparatus to be cached in the second memory apparatus in units of caching groups, wherein the controller is configured to perform a caching group based caching operation by controlling the first memory apparatus to cache data from the first memory apparatus in the second memory apparatus on a caching group basis, and each caching group includes a first data block requested for caching and one or more other data blocks having the same write count as a write count of the first data block. | 2022-05-26 |
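The caching-group rule of 20220164292 — the requested block plus every block sharing its write count — can be sketched directly. Representing write counts as a dictionary keyed by block is an illustrative choice:

```python
def caching_group(blocks, write_counts, requested):
    """A caching group = the block requested for caching plus every other
    block whose write count equals the requested block's write count."""
    wc = write_counts[requested]
    return [b for b in blocks if write_counts[b] == wc]
```

The whole group would then be cached from the first memory apparatus into the second in one caching operation.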
20220164293 | MULTI-MODE PROTECTED MEMORY - Multi-mode protected memory in accordance with the present description includes a permanent mode and a transient mode of operation. In one embodiment of the permanent mode, an authentication key is programmable once and a write counter is not decrementable or resettable. In one embodiment of the transient mode, an authentication key may be programmed many times and a write counter may be reset many times. Other features and advantages may be realized, depending upon the particular application. | 2022-05-26 |
20220164294 | CYBER SECURITY AND TAMPER DETECTION TECHNIQUES WITH A DISTRIBUTED PROCESSOR MEMORY CHIP - In some embodiments, an integrated circuit may include a substrate and a memory array disposed on the substrate, where the memory array includes a plurality of discrete memory banks. The integrated circuit may also include a processing array disposed on the substrate, where the processing array includes a plurality of processor subunits, each one of the plurality of processor subunits being associated with one or more discrete memory banks among the plurality of discrete memory banks. The integrated circuit may also include a controller configured to implement at least one security measure with respect to an operation of the integrated circuit and take one or more remedial actions if the at least one security measure is triggered. | 2022-05-26 |
20220164295 | BUS ARBITRATION CIRCUIT AND DATA TRANSFER SYSTEM INCLUDING THE SAME - A bus arbitration circuit includes a first bus port, a second bus port, a first output circuit connected to the first bus port, a second output circuit connected to the second bus port, a control circuit, and a switch circuit. The control circuit includes a first input port, a second input port, a control signal output port, and an output port. The first input port receives data of the first bus port, the second input port receives data of the second bus port, and data is outputted from the output port to an input port of the first output circuit. The switch circuit has an input port connected to the first bus port, a control port connected to the control signal output port of the control circuit, and an output port from which data of a host bus is outputted to an input port of the second output circuit. | 2022-05-26 |
20220164296 | PERFORMING SAVE STATE SWITCHING IN SELECTIVE LANES BETWEEN ELECTRONIC DEVICES IN UFS SYSTEM - Disclosed are a method and a Universal Flash Storage (UFS) system for performing save state switching using selective lanes between a first electronic device and a second electronic device. The method includes: determining, by the first electronic device, whether a data request is received from an application layer of the first electronic device; and performing, by the first electronic device, at least one of: setting a first lane from among a plurality of lanes between the first electronic device and the second electronic device to an active state and the other lanes from among the plurality of lanes to a power save state based on determining that the data request is not received from the application layer of the first electronic device; and setting the plurality of lanes between the first electronic device and the second electronic device to the active state based on determining that the data request is received from the application layer of the first electronic device. | 2022-05-26 |
20220164297 | DISTRIBUTED PROCESSOR MEMORY CHIP WITH MULTI-PORT PROCESSOR SUBUNITS - In some embodiments, an integrated circuit may include a substrate and a memory array disposed on the substrate, where the memory array includes a plurality of discrete memory banks. The integrated circuit may also include a processing array disposed on the substrate, where the processing array includes a plurality of processor subunits, each one of the plurality of processor subunits being associated with one or more discrete memory banks among the plurality of discrete memory banks. The integrated circuit may also include a controller configured to implement at least one security measure with respect to an operation of the integrated circuit and take one or more remedial actions if the at least one security measure is triggered. | 2022-05-26 |
20220164298 | MEMORY SEQUENCER SYSTEM AND A METHOD OF MEMORY SEQUENCING USING THEREOF - A memory sequencer system for external memory protocols includes a control center with a microcontroller; a control-center network-on-chip having nodes connected point-to-point to synchronize and coordinate communication; a command and address sequencer to generate command, control and address commands for specific memory protocols; and at least one data sequencer to generate pseudo-random or deterministic data patterns for each byte lane of a memory interface. The command and address sequencer and the data sequencer are chained to form complex address and data sequences for memory interface training, calibration and debugging. The control-center network-on-chip interconnects the control center with the command and address sequencer and the data sequencer to provide firmware controllability. | 2022-05-26 |
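A per-byte-lane pseudo-random pattern of the kind the data sequencer in 20220164298 generates is commonly produced with a linear-feedback shift register; the LFSR width and tap positions below are illustrative, not taken from the application:

```python
def lfsr_pattern(seed, length, taps=(7, 5, 4, 3)):
    """8-bit Fibonacci LFSR producing a deterministic, pseudo-random byte
    pattern for one byte lane of the memory interface."""
    state, out = seed & 0xFF, []
    for _ in range(length):
        out.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1       # XOR the tapped bits
        state = ((state << 1) | fb) & 0xFF
    return out
```

Because the sequence is fully determined by the seed, the training logic can regenerate the expected pattern on the receive side and compare it against what the memory returns.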
20220164299 | PEER STORAGE DEVICES SHARING HOST CONTROL DATA - Systems and methods for peer storage devices to share host control data are described. Storage devices may include a host interface configured to connect to a host system and a control bus interface to connect to a control bus. Peer storage devices may establish peer communication through the control bus interface to share host control data, such as access parameters for host resources allocated to peer storage devices. A storage device may access host resources using access parameters allocated to that device, receive peer access parameters from a peer storage device, and access host resources allocated to the peer storage device using the peer access parameters. For example, a storage device may use a peer host memory buffer to store buffer data prior to releasing the host memory buffer allocated to it. | 2022-05-26 |
20220164300 | HEAD OF LINE ENTRY PROCESSING IN A BUFFER MEMORY DEVICE - A method of operating a buffer memory device, a storage system, and a buffer memory device are provided. The method, for a buffer memory device having a lower tier memory and a higher tier memory, may include receiving a new entry request, determining that the new entry request includes a head-of-line (HOL) entry, selecting an entry on the higher tier memory to be tiered down to the lower tier memory in response to that determination, removing the selected entry from the higher tier memory, storing the HOL entry in the higher tier memory of the buffer memory device, and outputting the HOL entry to an arbiter. | 2022-05-26 |
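The tier-down-then-admit sequence of 20220164300 can be sketched as follows. The victim-selection policy (oldest entry first) and the fixed higher-tier capacity are assumptions; the application leaves the selection criterion open:

```python
def admit_hol(higher, lower, hol_entry, capacity):
    """Admit a head-of-line entry into the higher tier, demoting a victim
    to the lower tier when the higher tier is full, then output the HOL
    entry to the arbiter."""
    if len(higher) >= capacity:
        victim = higher.pop(0)      # tier down the oldest entry (assumption)
        lower.append(victim)
    higher.insert(0, hol_entry)     # HOL entry goes to the front
    return hol_entry                # handed to the arbiter
```

Keeping HOL entries in the faster tier prevents a slow lower-tier access from blocking everything queued behind the head of the line.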