10th week of 2022 patent application highlights part 46 |
Patent application number | Title | Published |
20220075590 | DISPLAY CONTROL DEVICE AND DISPLAY CONTROL METHOD - A display control device, which controls images to be displayed on a single display mounted on a vehicle, includes: a plurality of image determination units that determine the images to be displayed on the display; a state monitoring unit that successively monitors, as monitoring targets, the plurality of image determination units to determine whether an abnormality occurs in the plurality of image determination units; and a display mode determination unit that switches a display mode of the display in response to the state monitoring unit determining that an error image is displayed on the display due to an abnormality that occurred in the plurality of image determination units. The display mode determination unit switches the display mode such that the error image is less likely to cause discomfort to a user. | 2022-03-10 |
20220075591 | COLOCATED SHARED AUGMENTED REALITY WITHOUT SHARED BACKEND - Methods and systems are disclosed for creating a shared augmented reality (AR) session. The methods and systems perform operations comprising: receiving, by a client device, input that selects a shared augmented reality (AR) experience from a plurality of shared AR experiences; in response to receiving the input, determining one or more resources associated with the selected shared AR experience; determining, by the client device, that two or more users are located within a threshold proximity of the client device; and activating the selected shared AR experience in response to determining that the two or more users are located within the threshold proximity of the client device. | 2022-03-10 |
20220075592 | VOICE PROCESSING SYSTEM, VOICE PROCESSING METHOD AND RECORDING MEDIUM RECORDING VOICE PROCESSING PROGRAM - A voice processing system includes: a display processing processor that displays an operation screen for an operation target application serving as a target to be operated by the user; a support information presenter that presents operation support information for the operation target application such that the operation support information is associated with the operation screen; a voice receiver that receives the voice of the user; a command identifier that identifies, based on the voice received by the voice receiver, a first command for the operation target application; and a command executor that executes, on the operation target application, the first command identified by the command identifier. | 2022-03-10 |
20220075593 | TEXT INPUT DEVICE AND METHOD THEREFOR - Electronic device includes display, microphone, and processor configured to activate voice input function based on user input, display graphic representation for indicating that the voice input function is activated, provide, on the display, a text display area for displaying text inputted by a plurality of user input methods and a keyboard input interface for receiving a user keyboard input, the plurality of user input methods including user keyboard input method and user voice input method, receive, via the keyboard input interface, the user keyboard input corresponding to a first text, display the first text in the text display area based on receiving the user keyboard input, receive user voice input corresponding to a second text while the keyboard input interface is provided and the voice input function is activated, and display the second text next to the first text in the text display area based on the user voice input. | 2022-03-10 |
20220075594 | METHODS, SYSTEMS, AND MEDIA FOR REWINDING MEDIA CONTENT BASED ON DETECTED AUDIO EVENTS - Methods, systems, and media for rewinding media content based on detected audio events are provided. In some embodiments, a method for providing media guidance is provided, the method comprising: causing media content to be presented; receiving, using an audio input device, audio data that includes ambient sounds in an environment in which the media content is being presented; receiving a user command to rewind the media content; detecting that a portion of the audio data corresponds to an audio event that occurred during the presentation of the media content in response to receiving the user command to rewind the media content; determining a playback position in the media content based on the audio event; and causing the media content to be presented from the determined playback position. | 2022-03-10 |
20220075595 | FLOATING POINT COMPUTATION FOR HYBRID FORMATS - Various embodiments are provided for performing hybrid precision floating point format computation via a simplified superset floating point unit in a computing system. One or more inputs, represented as a plurality of floating point number formats, may be converted into a superset floating point format prior to computation by one or more simplified superset floating point units (ssFPUs). A compute operation may be performed on the one or more inputs represented as the superset floating point format using the one or more ssFPUs. | 2022-03-10 |
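To make the conversion step in 20220075595 concrete, here is a minimal Python/NumPy sketch that treats float32 as the superset format and widens float16 values (and raw bfloat16 bit patterns) into it before a multiply-add. The choice of float32 as the superset and the bfloat16 bit-pattern handling are illustrative assumptions, not the patent's actual encoding.

```python
import numpy as np

# Minimal sketch: treat float32 as a "superset" format wide enough to hold
# float16 and bfloat16 inputs without loss, then compute in that one format.

def bfloat16_to_float32(raw_bits: np.ndarray) -> np.ndarray:
    """Expand raw bfloat16 bit patterns (uint16) to float32 by shifting left 16 bits."""
    return (raw_bits.astype(np.uint32) << 16).view(np.float32)

def to_superset(x: np.ndarray) -> np.ndarray:
    """Convert a supported narrow format to the assumed superset (float32) format."""
    if x.dtype == np.float16:
        return x.astype(np.float32)      # exact: every fp16 value fits in fp32
    if x.dtype == np.uint16:             # assume raw bfloat16 bit patterns
        return bfloat16_to_float32(x)
    return x.astype(np.float32)

def superset_multiply_add(a, b, acc):
    """One fused compute step done entirely in the superset format."""
    return to_superset(a) * to_superset(b) + to_superset(acc)

a = np.array([1.5, -2.25], dtype=np.float16)
b = np.array([0.5, 4.0], dtype=np.float16)
acc = np.zeros(2, dtype=np.float32)
print(superset_multiply_add(a, b, acc))  # 0.75 and -9.0
```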
20220075596 | INTEGER MATRIX MULTIPLICATION BASED ON MIXED SIGNAL CIRCUITS - A multiply-accumulate device comprises a digital multiplication circuit and a mixed signal adder. The digital multiplication circuit is configured to input L m | 2022-03-10 |
20220075597 | MULTI-DIE DOT-PRODUCT ENGINE TO PROVISION LARGE SCALE MACHINE LEARNING INFERENCE APPLICATIONS - Systems and methods are provided for a multi-die dot-product engine (DPE) to provision large-scale machine learning inference applications. The multi-die DPE leverages a multi-chip architecture. For example, a multi-chip interface can include a plurality of DPE chips, where each DPE chip performs inference computations for performing deep learning operations. A hardware interface between a memory of a host computer and the plurality of DPE chips communicatively connects the plurality of DPE chips to the memory of the host computer system during an inference operation such that the deep learning operations are spanned across the plurality of DPE chips. Due to the multi-die architecture, multiple silicon devices are allowed to be used for inference, thereby enabling power-efficient inference for large-scale machine learning applications and complex deep neural networks. The multi-die DPE can be used to build a multi-device DNN inference system performing specific applications, such as object recognition, with high accuracy. | 2022-03-10 |
20220075598 | Systems and Methods for Numerical Precision in Digital Multiplier Circuitry - In one embodiment, multiplier circuitry multiplies operands of a first format. One or more storage register circuits store digital bits corresponding to an operand and another operand of the first format. A decomposing circuit decomposes the operand into a first plurality of operands, and the other operand into a second plurality of operands. Each multiplier circuit multiplies a respective first operand of the first plurality of operands with a respective second operand of the second plurality of operands to generate a corresponding partial result of a plurality of partial results. An accumulator circuit accumulates the plurality of partial results using a second format to generate a complete result of the second format that is stored in the accumulator circuit. A conversion circuit truncates the complete result of the second format and converts the truncated result into an output result of an output format. | 2022-03-10 |
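A minimal sketch of the decomposition idea in 20220075598: each 16-bit operand is split into 8-bit halves, the four partial products are accumulated in a wider format, and the complete result is truncated to an output width. The bit widths and the final right shift are assumptions chosen for illustration.

```python
# Minimal sketch of operand decomposition for a multiplier, assuming unsigned
# 16-bit operands split into 8-bit halves; the widths and the truncation to a
# 16-bit output are illustrative assumptions, not the circuit's actual formats.

def decompose(x: int, half_bits: int = 8):
    """Split an operand into (high, low) halves."""
    mask = (1 << half_bits) - 1
    return (x >> half_bits) & mask, x & mask

def multiply_by_decomposition(a: int, b: int, half_bits: int = 8) -> int:
    a_hi, a_lo = decompose(a, half_bits)
    b_hi, b_lo = decompose(b, half_bits)
    # Four partial results, each small enough for a narrow multiplier circuit.
    partials = [
        (a_hi * b_hi, 2 * half_bits),   # weight 2^16
        (a_hi * b_lo, half_bits),       # weight 2^8
        (a_lo * b_hi, half_bits),       # weight 2^8
        (a_lo * b_lo, 0),               # weight 2^0
    ]
    # Accumulate in a wider "second format" (a plain Python int here).
    acc = sum(p << shift for p, shift in partials)
    # Truncate the complete result down to a 16-bit output format.
    return acc >> 16

a, b = 40000, 50000
assert multiply_by_decomposition(a, b) == (a * b) >> 16
print(multiply_by_decomposition(a, b))
```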
20220075599 | MEMORY DEVICE AND OPERATION METHOD THEREOF - A memory device and an operation method thereof are provided. The memory device includes: a memory array including a plurality of memory cells for storing a plurality of weights; a multiplication circuit coupled to the memory array, for performing bitwise multiplication on a plurality of input data and the weights to generate a plurality of multiplication results; a counting unit coupled to the multiplication circuit, for performing bitwise counting on the multiplication results to generate a MAC (multiplication and accumulation) operation result. | 2022-03-10 |
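The bitwise multiply-and-count scheme in 20220075599 reduces to an AND followed by a popcount when inputs and weights are binary; the sketch below assumes exactly that simplification.

```python
# Minimal sketch of the bitwise multiply-and-count idea, assuming binary (0/1)
# inputs and weights so that multiplication reduces to AND and accumulation
# reduces to a popcount. The cell layout is an assumption for illustration.

def bitwise_mac(inputs, weights):
    """inputs, weights: lists of 0/1 bits of equal length."""
    products = [i & w for i, w in zip(inputs, weights)]   # bitwise multiplication
    return sum(products)                                  # bitwise counting (popcount)

inputs  = [1, 0, 1, 1, 0, 1]
weights = [1, 1, 1, 0, 0, 1]
print(bitwise_mac(inputs, weights))   # 3
```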
20220075600 | MEMORY DEVICE AND OPERATION METHOD THEREOF - A memory device and an operation method thereof are provided. The memory device includes: a memory array including a plurality of memory cells for storing a plurality of weights; a multiplication circuit for performing bitwise multiplication on a plurality of input data and the weights to generate a plurality of multiplication results, wherein in performing bitwise multiplication, the memory cells generate a plurality of memory cell currents; a digital accumulating circuit for performing a digital accumulating on the multiplication results; an analog accumulating circuit for performing an analog accumulating on the memory cell currents to generate a first MAC operation result; and a decision unit for deciding whether to perform the analog accumulating, the digital accumulating, or a hybrid accumulating, wherein in performing the hybrid accumulating, whether the digital accumulating circuit is triggered is based on the first MAC operation result. | 2022-03-10 |
20220075601 | IN-MEMORY COMPUTING METHOD AND IN-MEMORY COMPUTING APPARATUS - An in-memory computing method and an in-memory computing apparatus are adapted to perform multiply-accumulate (MAC) operations on a memory by a processor. In the method, a pre-processing operation is respectively performed on input data and weight data to be written into input lines and memory cells of the memory to divide the input data and weight data into a primary portion and a secondary portion. The input data and the weight data divided into the primary portion and the secondary portion are written into the input lines and the memory cells in batches to perform the MAC operations and obtain a plurality of computation results. According to a numeric value of each of the computation results, the computation results are filtered. According to the portions to which the computation results correspond, a post-processing operation is performed on the filtered computation results to obtain output data. | 2022-03-10 |
20220075602 | MECHANISM FOR INFORMATION PROPAGATION AND RESOLUTION IN GRAPH-BASED PROGRAMMING LANGUAGES - A visual-programming tool processes nodes of a graph corresponding to operations or functions in program code associated with a plurality of programs, (e.g., games), stored as graph of nodes with logical connections signifying inputs, outputs, and/or units of connected nodes. The visual-programming tool resolves valid types and/or units associated with respective connected nodes and can propagate valid types and/or units throughout the graph. | 2022-03-10 |
20220075603 | DYNAMIC ROBOT TRAY BY ROBOTIC PROCESSES - Disclosed herein is a computing device that includes a memory and a processor, which is coupled to the memory. The memory stores processor executable instructions for a robotic process engine. In operation, the robotic process engine generates a robot tray comprising a canvas and dynamically configures the canvas based on inputs. The dynamic configuring includes adding a widget onto the canvas. | 2022-03-10 |
20220075604 | UNIFIED OPERATING SYSTEM FOR DISTRIBUTED COMPUTING - In some embodiments, a real-time event is detected and context is determined based on the real-time event. An application model is fetched based on the context and meta-data associated with the real-time event, the application model referencing a micro-function and including pre-condition and post-condition descriptors. A graph is constructed based on the micro-function. The micro-function is transformed into micro-capabilities by determining a computing resource for execution of a micro-capability by matching pre-conditions and post-conditions of the micro-capability, and enabling execution and configuration of the micro-capability on the computing resource by providing access in a target environment to an API capable of calling the micro-capability to configure and execute the micro-capability. A request is received from the target environment to execute and configure the micro-capability on the computing resource. The micro-capability is executed and configured on the computing resource, and an output of the micro-capability is provided to the target environment. | 2022-03-10 |
20220075605 | TRAINING AND USING ARTIFICIAL INTELLIGENCE (AI) / MACHINE LEARNING (ML) MODELS TO AUTOMATICALLY SUPPLEMENT AND/OR COMPLETE CODE OF ROBOTIC PROCESS AUTOMATION WORKFLOWS - Training and using artificial intelligence (AI)/machine learning (ML) models to automatically supplement and/or complete code of RPA workflows is disclosed. A trained AI/ML model may intelligently and automatically predict and complete the next series of activities in RPA workflows (e.g., one, a few, many, the remainder of the workflow, etc.). Actions users take while creating workflows over a time period may be captured and stored. The AI/ML model may then be trained and used to match the stored actions with stored workflow sequences of actions in order to predict and complete the workflow. As more and more workflow sequences are captured and stored over time, the AI/ML model may be retrained to predict a larger number of sequences and/or to more accurately make predictions. Auto-completion may occur in real-time in some embodiments to save time and effort by the user. | 2022-03-10 |
20220075606 | COMPILING METHOD AND APPARATUS FOR NEURAL NETWORKS - Disclosed are compiling methods and apparatuses, where a compiling method includes receiving a single-core-based code and input data for an operation to be performed based on the single-core-based code, generating kernel clusters by performing graph clustering based on one or more operation kernels in the single-core-based code and the input data, and generating a multi-core-based code based on the kernel clusters. | 2022-03-10 |
20220075607 | Systems and Methods for a Digital Ecosystem - Systems and methods for aggregating a dependency structure based on application logging data, application metadata, customer intent and journey, organizational structure, and operational support information. The method includes receiving data using an application programming interface. The method further includes, for each user, determining a start point and an end point corresponding to user activity on a networked system. The method also includes, for each user, determining a task based on the start point and end point corresponding to the user activity. The method further includes, for each user, determining operations data corresponding to the user activity. The method also includes, for each user, determining a dependency structure based on the task and the operations data. The method also includes aggregating the dependency structure, the task, and the operations data into a visualization. The method further includes generating for display the visualization on a user device. | 2022-03-10 |
20220075608 | Hardware Acceleration Method, Compiler, and Device - A hardware acceleration method includes obtaining compilation policy information and a source code, where the compilation policy information indicates that a first code type matches a first processor and a second code type matches a second processor; analyzing a code segment in the source code according to the compilation policy information; determining a first code segment belonging to the first code type or a second code segment belonging to the second code type; compiling the first code segment into a first executable code; sending the first executable code to the first processor; compiling the second code segment into a second executable code; and sending the second executable code to the second processor. | 2022-03-10 |
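One way to read the compilation-policy flow in 20220075608 is as a classify-compile-dispatch loop over code segments; the sketch below assumes a toy two-entry policy (scalar code to a CPU, loop-over-array code to an FPGA) purely for illustration.

```python
# Minimal sketch of applying a compilation policy that maps code-segment types
# to target processors. The segment types ("scalar", "dataflow") and targets
# ("cpu", "fpga") are illustrative assumptions, not the patent's taxonomy.

compilation_policy = {"scalar": "cpu", "dataflow": "fpga"}

def classify(segment: str) -> str:
    # Toy classifier: loops over arrays are treated as dataflow-friendly.
    return "dataflow" if "for " in segment and "[" in segment else "scalar"

def compile_and_dispatch(source_segments):
    dispatched = {"cpu": [], "fpga": []}
    for seg in source_segments:
        target = compilation_policy[classify(seg)]
        executable = f"compiled({seg!r}) for {target}"   # stand-in for real codegen
        dispatched[target].append(executable)
    return dispatched

segments = ["x = a + b", "for i in range(n): y[i] = k * x[i]"]
for target, code in compile_and_dispatch(segments).items():
    print(target, code)
```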
20220075609 | SYSTEM AND METHOD FOR APPLICATION RELEASE ORCHESTRATION AND DEPLOYMENT - Aspects of the present disclosure involve systems, methods, devices, and the like for application release and orchestration. In one embodiment, a system is introduced that can communicate with a centralized automation server via an autonomous program. The system centrally tests and validates the code and application release using an iterative data deployment process. | 2022-03-10 |
20220075610 | NODE SELECTION METHOD AND APPARATUS - A node selection method and apparatus are disclosed. The method includes: receiving a request message, where the request message is used to request to provide an installation package of a product required by a user; searching, based on the product information in the request message, a node state table for a target node corresponding to the product information, where the node state table includes at least one correspondence; and sending the request message to the target node, so that the target node builds the corresponding product installation package for the product required by the user. | 2022-03-10 |
20220075611 | Driver Update Via Sideband Processor - Techniques are disclosed relating to a method that includes executing, by a processor of a computer system, one or more processes. The processor may use a peripheral device coupled to the computer system, wherein the peripheral device utilizes a particular version of a driver. A sideband processor included in the computer system may receive, via a network, instructions for an updated version of the driver to replace the particular version of the driver. The sideband processor may cause the processor to pause use of the peripheral device. While the processor executes the one or more processes, the sideband processor may send a series of commands to install the instructions for the updated version of the driver. The sideband processor may also notify the processor that the peripheral device is available for use. | 2022-03-10 |
20220075612 | PROGRAM UPDATE METHOD AND UPDATE SYSTEM - An update system includes: a first server that stores a control program; a second server that stores a common program; a difference extraction device that generates difference data between the common program and the control program; and a reprogramming tool that transmits the difference data to a vehicle equipped with an ECU to be updated. The difference extraction device searches a search range by a search unit to find whether search target data of the control program is included in the common program, and generates the difference data. The search range includes, for example, an address of the search target data in the control program and addresses rearward of that address. An offset area is provided in a head area of the common program. | 2022-03-10 |
20220075613 | ADAPTIVE FEEDBACK BASED SYSTEM AND METHOD FOR PREDICTING UPGRADE TIMES AND DETERMINING UPGRADE PLANS IN A VIRTUAL COMPUTING SYSTEM - A system and method for updating a cluster of a virtual computing system includes receiving a maintenance window from a user during which to upgrade the cluster, determining available upgrades for the cluster, presenting one or more upgrade plans to the user, such that each of the one or more upgrade plans is created to be completed within the maintenance window and includes one or more of the available upgrades selected based on a total upgrade time computed for each of the available upgrades, receiving selection of one of the one or more upgrade plans from the user, and upgrading the cluster based on the one of the one or more upgrade plans that is selected. | 2022-03-10 |
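The plan-building step in 20220075613 can be pictured as packing predicted upgrade times into the user's maintenance window; the greedy sketch below assumes the predicted durations are already available (in the patent they would come from the adaptive feedback model).

```python
# Minimal sketch of building an upgrade plan that fits a maintenance window,
# using a greedy pass over predicted upgrade times. The example upgrades and
# durations are illustrative assumptions.

def build_plan(available_upgrades, window_minutes):
    """available_upgrades: list of (name, predicted_minutes)."""
    plan, total = [], 0
    # Prefer shorter upgrades first so more of them fit in the window.
    for name, minutes in sorted(available_upgrades, key=lambda u: u[1]):
        if total + minutes <= window_minutes:
            plan.append(name)
            total += minutes
    return plan, total

upgrades = [("hypervisor", 90), ("storage-firmware", 45), ("cluster-manager", 30)]
plan, total = build_plan(upgrades, window_minutes=120)
print(plan, total)   # ['cluster-manager', 'storage-firmware'] 75
```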
20220075614 | SYSTEMS AND METHODS FOR UPDATING SOFTWARE IN A HAZARD DETECTION SYSTEM - Systems and methods for updating software in a hazard detection system are described herein. Software updates may be received by, stored within, and executed by a hazard detection system, without disturbing the system's ability to monitor for alarm events and sound an alarm in response to a monitored hazard event. The software updates may be received as part of a periodic over-the-air communication with a remote server or as part of a physical connection with a data source such as a computer. The software updates may include several portions of code designed to operate with different processors and/or devices within the hazard detection system. The software updates may also include language specific audio files that can be accessed by the hazard detection system to play back language specific media files via a speaker. | 2022-03-10 |
20220075615 | Operating System Update Via Sideband Processor - Techniques are disclosed relating to a method that includes executing, by a processor of a computer system, a particular operating system (OS) from a system memory coupled to the processor. A sideband processor of the computer system may receive, via a network, instructions for an updated version of the OS. While the processor executes the particular OS, the sideband processor may send, to a controller hub, a series of commands that cause the controller hub to store the received instructions into one or more regions of the system memory. The sideband processor may then cause the processor to switch, without rebooting, from executing the particular OS to executing the updated version of the OS. | 2022-03-10 |
20220075616 | SENTIMENT BASED OFFLINE VERSION MODIFICATION - A method, a computer program product, and a computer system modify a version of an application. The method includes determining a sentiment being experienced by a second user while using a second version of the application installed on a second device associated with the second user based on sensory information indicative of the sentiment. The method includes generating sentiment associated information associating the sentiment of the second user with the second version of the application, the sentiment associated information configured to be exchanged with the first device over an offline, ad hoc connection. The method includes transmitting the sentiment associated information to the second device. The sentiment associated information is indicative of whether the first version of the application is to be modified on the first device to the second version of the application. | 2022-03-10 |
20220075617 | MODEL TRAINING USING BUILD ARTIFACTS - The subject technology detects a code commit at a code repository. The subject technology sends a request for a build job to a build server. The subject technology determines that the build job is completed. The subject technology sends a training request and user token to a proxy authenticator. The subject technology determines that the user token is validated. The subject technology sends a training request and the user token to a training job manager. Further, the subject technology determines that the training job is completed. | 2022-03-10 |
20220075618 | ENHANCED PRODUCT DEVELOPMENT EFFICIENCY, COMMUNICATION, AND/OR SECURITY THROUGH COMPONENT-BASED EVENT GENERATION AND/OR SUBSCRIPTION - Disclosed is a method, a device, a system and/or a manufacture of secure and efficient product development through subscription to an event associated with a restricted design dependency tree. In one embodiment, a method for secure development of design data includes receiving a request for retrieval of a root version of a dependency tree. A dependency reference from the root version is followed to a version of a sub-component. The version of the sub-component is determined to have a positive authorization status for read access through a database association with a unique identifier of a user and/or a group profile. A restricted tree data comprising the unique identifier of the root version and the version of the sub-component is returned. The user and/or the group profile is then subscribed to receive a message on a client device generated in response to an event associated with the restricted design dependency tree. | 2022-03-10 |
20220075619 | USING BIG CODE TO CONSTRUCT CODE CONDITIONAL TRUTH TABLES - A method of analyzing code is provided. The method includes generating an abstract representation of the code, identifying conditional statements in the abstract representation, populating a truth table for each conditional statement that has been identified with all possible outcomes of the conditional statement and assessing the truth table for each conditional statement to identify issues. | 2022-03-10 |
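For 20220075619, Python's own ast module can stand in for the abstract representation: find each conditional, take its atomic comparisons as truth-table columns, and enumerate every outcome. Treating the operands of a BoolOp as the columns is an illustrative assumption; the sketch requires Python 3.9+ for ast.unparse.

```python
# Minimal sketch: parse code, locate if-statements, and populate a truth table
# with all possible outcomes of each conditional.

import ast
from itertools import product

source = """
if x > 0 and y < 10:
    do_work()
"""

tree = ast.parse(source)
for node in ast.walk(tree):
    if isinstance(node, ast.If):
        test = node.test
        terms = test.values if isinstance(test, ast.BoolOp) else [test]
        labels = [ast.unparse(t) for t in terms]     # truth-table columns
        print("conditional:", ast.unparse(test))
        for row in product([False, True], repeat=len(labels)):
            # Combine columns with the conditional's own boolean operator.
            outcome = all(row) if isinstance(getattr(test, "op", None), ast.And) else any(row)
            print(dict(zip(labels, row)), "->", outcome)
```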
20220075620 | COMPUTING 2-BODY STATISTICS ON GRAPHICS PROCESSING UNITS (GPUs) - Disclosed are various embodiments for computing 2-body statistics on graphics processing units (GPUs). Various types of two-body statistics (2-BS) are regarded as essential components of data analysis in many scientific and computing domains. However, the quadratic complexity of these computations hinders timely processing of data. Accordingly, various embodiments of the present disclosure involve parallel algorithms for 2-BS computation on Graphics Processing Units (GPUs). Although the typical 2-BS problems can be summarized into a straightforward parallel computing pattern, traditional wisdom from (general) parallel computing often falls short in delivering the best possible performance. Therefore, various embodiments of the present disclosure involve techniques to decompose 2-BS problems and methods for effective use of computing resources on GPUs. We also develop analytical models that guide users towards the appropriate parameters of a GPU program. Although 2-BS problems share the same core computations, each 2-BS problem, however, carries its own characteristics that call for different strategies in code optimization. Accordingly, various embodiments of the present disclosure involve a software framework that automatically generates high-performance GPU code based on a few parameters and short primer code input. | 2022-03-10 |
20220075621 | RESOURCE ALLOCATION IN A MULTI-PROCESSOR SYSTEM - A system includes a memory-mapped register (MMR) associated with a claim logic circuit, a claim field for the MMR, a first firewall for a first address region, and a second firewall for a second address region. The MMR is associated with an address in the first address region and an address in the second address region. The first firewall is configured to pass a first write request for an address in the first address region to the claim logic circuit associated with the MMR. The claim logic circuit associated with the MMR is configured to grant or deny the first write request based on the claim field for the MMR. Further, the second firewall is configured to receive a second write request for an address in the second address region and grant or deny the second write request based on a permission level associated with the second write request. | 2022-03-10 |
20220075622 | METHODS OF BREAKING DOWN COARSE-GRAINED TASKS FOR FINE-GRAINED TASK RE-SCHEDULING - A method of scheduling instructions in a processing system comprising a processing unit and one or more co-processors comprises dispatching a plurality of instructions from a master processor to a co-processor of the one or more co-processors, wherein each instruction of the plurality of instructions comprises one or more additional fields, wherein at least one field comprises grouping information operable to consolidate the plurality of instructions for decomposition, and wherein at least one field comprises control information. The method also comprises decomposing the plurality of instructions into a plurality of fine-grained instructions, wherein the control information comprises rules associated with decomposing the plurality of instructions into the plurality of fine-grained instructions. Further, the method comprises scheduling the plurality of fine-grained instructions to execute on the co-processor, wherein the scheduling is performed in a non-sequential order. | 2022-03-10 |
20220075623 | SYSTEM AND METHOD FOR REACTIVE FLATTENING MAP FOR USE WITH A MICROSERVICES OR OTHER COMPUTING ENVIRONMENT - In accordance with an embodiment, described herein is a system and method for providing a reactive flattening map for use with a microservices or other computing environment. In a cloud computing environment, reactive programming can be used with publishers and subscribers, to abstract execution away from the thread of execution while providing rigorous coordination of various state transitions. The described approach provides support for processing streams of data involving one or more publishers and subscribers, by use of a multi-flat-map publisher component, to flatten or otherwise combine events emitted by multiple publishers concurrently, into a single stream of events for use by a downstream subscriber. | 2022-03-10 |
20220075624 | ALTERNATE PATH FOR BRANCH PREDICTION REDIRECT - Branch prediction circuitry predicts an outcome of a branch instruction. A pipeline circuitry processes instructions along a first path from a predicted branch of the branch instruction. The instructions along the first path are processed concurrently with processing instructions along a second path from an unpredicted branch of the branch instruction. Information representing the state of the second portion while processing the second path is stored in one or more buffers. The instructions are processed along the second path using the information stored in the buffers in response to a misprediction of the outcome of the branch instruction. In some cases, the branch prediction circuitry determines a confidence level for the predicted outcome and the instructions along the second path from the unpredicted branch are processed in response to the confidence level being below a threshold confidence. | 2022-03-10 |
20220075625 | METHOD AND ARRANGEMENT FOR HANDLING MEMORY ACCESS FOR A TCF-AWARE PROCESSOR - An arrangement for handling shared data memory access for a TCF-aware processor. The arrangement comprises at least a flexible latency handling unit. | 2022-03-10 |
20220075626 | PROCESSOR WITH INSTRUCTION CONCATENATION - A processor includes a plurality of execution units. At least one of the execution units is configured to determine, based on a field of a first instruction, a number of additional instructions to execute in conjunction with the first instruction and prior to execution of the first instruction. | 2022-03-10 |
20220075627 | HIGHLY PARALLEL PROCESSING ARCHITECTURE WITH SHALLOW PIPELINE - Techniques for task processing using a highly parallel processing architecture with a shallow pipeline are disclosed. A two-dimensional array of compute elements is accessed. Each compute element within the array of compute elements is known to a compiler and is coupled to its neighboring compute elements within the array of compute elements. Control for the array of compute elements is provided on a cycle-by-cycle basis. The control is enabled by a stream of wide, variable length, microcode control words generated by the compiler. Relevant portions of the control word are stored within a cache associated with the array of compute elements. The control words are decompressed. The decompressing occurs cycle-by-cycle out of the cache over multiple cycles. A compiled task is executed on the array of compute elements, based on the decompressing. Simultaneous execution of two or more potential compiled task outcomes is provided. | 2022-03-10 |
20220075628 | System and Method for Supervising Processes Among Embedded Systems - System and method for ensuring the integrity of multiple processes executing cooperatively on a plurality of embedded computers. The system can access at least one configuration file that can include the processes that need to be started, monitored, and stopped on the embedded computer. The configuration file can configure executive instances of the executive system to report to an executive prime, if necessary. The executive prime can, for example, communicate with and control aspects of the executive instances from each embedded computer, such as cooperative shutdowns under pre-selected conditions. | 2022-03-10 |
20220075629 | ELECTRONIC DEVICE AND OPERATING METHOD THEREOF, AND NETWORK SYSTEM - An operating method of an electronic device including controllers includes updating, by a first-level controller of the controllers, a first-level firmware of the first-level controller, writing, by the first-level controller, a second-level firmware to one of second-level controllers of the controllers having a lower level than the first-level controller, booting, by the one of the second-level controllers, by performing a reset operation, verifying, by the first-level controller or the booted second-level controller, whether there is a target second-level controller with out-of-date firmware, and writing, by the first-level controller or the booted second-level controller in response to a result of the verifying, the second-level firmware to the target second-level controller. | 2022-03-10 |
20220075630 | NON-TRANSITORY RECORDING MEDIUM, INFORMATION PROCESSING DEVICE, AND INFORMATION PROCESSING SYSTEM - A non-transitory recording medium storing a plurality of instructions which, when executed by one or more processors, causes the processors to perform a method. The method includes acquiring information on operation of one or more peripheral devices connectable to an information processing device and causing a peripheral device being connected to the information processing device to operate according to the information on the operation of the one or more peripheral devices. | 2022-03-10 |
20220075631 | PLATFORM-BASED ENTERPRISE TECHNOLOGY SERVICE PORTFOLIO MANAGEMENT - Techniques related to hosted client management comprising providing a hosted client instance over a network interface for communicatively coupling with a remote client device, the hosted client instance including a first application component for performing a first plurality of actions associated with the hosted client instance and a second application component for performing a second plurality of actions, monitoring, by the first application component, the second application component for an event associated with the second application component, determining that the event impacts the first application component based on one or more dependency tables associated with the second application component, and displaying, in a user interface of the first application component, information related to the event. | 2022-03-10 |
20220075632 | AUTOMATED GRAPHICAL USER INTERFACE CONFIGURATION - Automated configuration of graphical user interface screens of industrial software applications. An application executing on a computing device utilizes a navigation model representing hierarchies of navigation items to automate placement of graphical components in appropriate locations on the graphical user interface. | 2022-03-10 |
20220075633 | Method and Device for Process Data Sharing - In one implementation, a method of accessing shared data among processes is performed by a device including processor(s), non-transitory memory, and an image acquisition interface. The method includes obtaining image data acquired by the image acquisition interface. The method further includes determining pose data based at least in part on inertial measurement unit (IMU) information measured by the image acquisition interface. The method also includes determining a gaze estimation based at least in part on eye tracking information obtained through the image acquisition interface. Based at least in part on characteristics of processes, the method includes determining an arrangement for the image data, the pose data, and the gaze estimation. The method additionally includes determining an access schedule for the processes based at least in part on at least one of: the arrangement, the characteristics of the processes, and hardware timing parameters associated with the device. | 2022-03-10 |
20220075634 | DETECTION OF USER INTERFACE CONTROLS VIA INVARIANCE GUIDED SUB-CONTROL LEARNING - Computerized detection of one or more user interface objects is performed by processing an image file containing one or more user interface objects of a user interface generated by an application program. Sub-control objects can be detected in the image file, where each sub-control object can form a portion of a user interface object that receives user input. Extraneous sub-control objects can be detected. Sub-control objects that overlap with or that are within a predetermined vicinity of an identified set of sub-control objects can be removed. Sub-control objects in the identified set of sub-control objects can be correlated to combine one or more of the sub-control objects in the identified set of sub-control objects to generate control objects that correspond to certain of the user interface objects of the user interface generated by the application program. | 2022-03-10 |
20220075635 | Instant Virtual Application Launch - Methods and systems for persisting a protocol state from a first instance of a virtual desktop application to a second instance of the virtual desktop application are described herein. In some embodiments, a computing platform may establish, by a first virtual desktop instance, a secure session with a virtual delivery agent (VDA), resulting in a protocol state of the first virtual desktop instance. Further, the computing platform may persist, using the first virtual desktop instance, the protocol state. Next, the computing platform may transmit, from the first virtual desktop instance to a second virtual desktop instance, the protocol state. Additionally, the computing platform may authenticate, using authentication tokens comprising the protocol state, a connection between the second virtual desktop instance and a gateway device. Subsequently, the computing platform may re-establish, after the authenticating, the secure session, wherein the secure session comprises a connection between the VDA and the second virtual desktop instance. | 2022-03-10 |
20220075636 | SYSTEM AND METHOD FOR VERSIONED SCRIPT MANAGEMENT - This disclosure is directed to a versioned script management (VSM) system that enables a client instance to implement versioned script management. A versioned scripts table includes one or more fields storing version information for each script. The version information tracks platform release information (e.g., family, patch, and/or hotfix release version information) of each script, while also tracking client-specific versions of these scripts that have been modified after release. The VSM system includes instructions to create a modified version of an existing script and to perform a platform release update of platform scripts without overwriting or changing the behavior of client-modified versions of these scripts. As such, the VSM system enables script modifications, as part of client customization and/or platform updates, while avoiding the possibility of introducing regressions as a result of these modifications. | 2022-03-10 |
20220075637 | Techniques for Concurrently Supporting Virtual NUMA and CPU/Memory Hot-Add in a Virtual Machine - Techniques for concurrently supporting virtual non-uniform memory access (virtual NUMA) and CPU/memory hot-add in a virtual machine (VM) are provided. In one set of embodiments, a hypervisor of a host system can compute a node size for a virtual NUMA topology of the VM, where the node size indicates a maximum number of virtual central processing units (vCPUs) and a maximum amount of memory to be included in each virtual NUMA node. The hypervisor can further build and expose the virtual NUMA topology to the VM. Then, at a time of receiving a request to hot-add a new vCPU or memory region to the VM, the hypervisor can check whether all existing nodes in the virtual NUMA topology have reached the maximum number of vCPUs or maximum amount of memory, per the computed node size. If so, the hypervisor can create a new node with the new vCPU or memory region and add the new node to the virtual NUMA topology. | 2022-03-10 |
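The hot-add decision in 20220075637 hinges on one check: if every existing virtual NUMA node is already at the computed node size, create a new node for the added vCPU or memory. A minimal sketch, with illustrative node-size numbers:

```python
# Minimal sketch of virtual NUMA hot-add. Node-size limits and the growth
# policy are illustrative assumptions, not the hypervisor's actual behavior.

class VirtualNumaTopology:
    def __init__(self, max_vcpus_per_node, max_mem_gb_per_node):
        self.max_vcpus = max_vcpus_per_node
        self.max_mem = max_mem_gb_per_node
        self.nodes = [{"vcpus": 0, "mem_gb": 0}]

    def hot_add_vcpu(self):
        for node in self.nodes:
            if node["vcpus"] < self.max_vcpus:
                node["vcpus"] += 1        # room left in an existing node
                return node
        new_node = {"vcpus": 1, "mem_gb": 0}   # all nodes full: grow the topology
        self.nodes.append(new_node)
        return new_node

topology = VirtualNumaTopology(max_vcpus_per_node=2, max_mem_gb_per_node=64)
for _ in range(5):
    topology.hot_add_vcpu()
print(topology.nodes)   # three nodes holding 2, 2, and 1 vCPUs
```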
20220075638 | MEMORY BANDWIDTH THROTTLING FOR VIRTUAL MACHINES - Systems and methods are disclosed for throttling memory bandwidth accessed by virtual machines (VMs). A technique for dynamically throttling the virtual computer processing units (vCPUs) assigned to a VM (tenant) controls the memory access rate of the VM. When the memory is shared by multiple VMs in a cloud-computing environment, one VM increasing its memory access rate may cause another VM to suffer memory access starvation. This behavior violates the principle of VM isolation in cloud computing. In contrast to conventional systems, a software solution for dynamically throttling the vCPUs may be implemented within a hypervisor and is therefore portable across CPU families and doesn't require specialized server-class CPU capabilities or limit the system configuration. | 2022-03-10 |
20220075639 | EXECUTING AN APPLICATION WITH MULTIPLE PROCESSORS - In one example, a system for executing applications can include a main processor to initialize a virtual machine to execute an application. The main processor can also determine a main utilization indicator of the main processor is above a threshold and an auxiliary utilization indicator of an auxiliary processor is below a threshold, wherein the auxiliary processor is based on an auxiliary instruction set architecture. Additionally, the main processor can transmit an instruction from the application to the auxiliary processor for execution and update context data for the application in response to receiving an execution result from the auxiliary processor. | 2022-03-10 |
20220075640 | THIN PROVISIONING VIRTUAL DESKTOP INFRASTRUCTURE VIRTUAL MACHINES IN CLOUD ENVIRONMENTS WITHOUT THIN CLONE SUPPORT - Systems and methods for operating a cloud based computing system. The methods comprise: receiving, by a cloud server, a request for accessing Virtual Hard Disk (“VHD”) data associated with a first location in the VHD of a Virtual Machine (“VM”) hosted by a remote computing device; extracting, by the cloud server, at least a first address specifying the first location from the request; translating, by the cloud server, the first address into a second address specifying a second location in a cloud storage where the VHD data is stored; and communicating from the cloud server the second address to the remote computing device for facilitating access to the VHD data stored in the cloud storage. | 2022-03-10 |
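The address translation in 20220075640 maps a first address (a VHD offset) to a second address in cloud storage; the sketch below assumes a fixed chunk size and an object-naming scheme purely for illustration.

```python
# Minimal sketch of translating a VHD offset into a cloud-storage address.
# The 4 MiB chunk size and the object key format are illustrative assumptions.

CHUNK_BYTES = 4 * 1024 * 1024   # assume the VHD is stored as 4 MiB objects

def translate(vhd_offset: int):
    chunk_index = vhd_offset // CHUNK_BYTES
    offset_in_chunk = vhd_offset % CHUNK_BYTES
    return f"vhd-chunks/{chunk_index:08d}", offset_in_chunk

object_key, offset = translate(vhd_offset=9 * 1024 * 1024 + 123)
print(object_key, offset)   # vhd-chunks/00000002 1048699
```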
20220075641 | TENANT-CONTROLLED CLOUD UPDATES - Systems and methods are taught for providing customers of a cloud computing service to control when updates affect the services provided to the customers. Because multiple customers share the cloud's infrastructure, each customer may have conflicting preferences for when an update and associated downtime occurs. Preventing and resolving conflicts between the preferences of multiple customers while providing them with input for scheduling a planned update may reduce the inconvenience posed by updates. Additionally, the schedule for the update may be transmitted to customers so that they can prepare for the downtime of services associated with the update. | 2022-03-10 |
20220075642 | VIRTUAL MACHINE REDEPLOYMENT - One or more techniques and/or systems are disclosed for redeploying a baseline VM (BVM) to one or more child VMs (CVMs) by merely cloning virtual drives of the BVM, instead of the entirety of the parent BVM. A temporary directory is created in a datastore that has the target CVMs that are targeted for virtual drive replacement (e.g., are to be “re-baselined”). One or more replacement virtual drives (RVDs) are created in the temporary directory, where the RVDs comprise a clone of a virtual drive of the source BVM. The one or more RVDs are moved from the temporary directory to a directory of the target CVMs, replacing existing virtual drives of the target CVMs so that the target CVMs are thus re-baselined to the state of the parent BVM. | 2022-03-10 |
20220075643 | UNIFIED RESOURCE MANAGEMENT FOR CONTAINERS AND VIRTUAL MACHINES - Various aspects are disclosed for unified resource management of containers and virtual machines. A podVM resource configuration for a pod virtual machine (podVM) is determined using container configurations. The podVM comprising a virtual machine (VM) that provides resource isolation for a pod based on the podVM resource configuration. A host selection for the podVM is received from a VM scheduler. The host selection identifies hardware resources for the podVM. A container scheduler is limited to bind the podVM to a node corresponding to the hardware resources of the host selection from the VM scheduler. The podVM is created in a host corresponding to the host selection. Containers are started within the podVM. The containers correspond to the container configurations. | 2022-03-10 |
20220075644 | VIRTUAL AUTOCALIBRATION OF SENSORS - The present disclosure describes methods and systems for virtually calibrating geometric sensors with overlapping fields of view. In some embodiments, a geometric sensor may be virtually calibrated by applying a correction value to profile data obtained by the geometric sensor to generate adjusted profile data. The correction value may be determined based at least in part on X-Y offsets and/or rotational offsets of prior profile data obtained by the geometric sensor relative to corresponding profile data obtained by a reference geometric sensor, and may be recalculated or updated as new sets of profile data are obtained. The adjusted profile data may be used in place of the original profile data in various data processing operations to functionally offset a positional error of the geometric sensor. | 2022-03-10 |
20220075645 | OPERATION METHOD OF HOST PROCESSOR AND ACCELERATOR, AND ELECTRONIC DEVICE INCLUDING THE SAME - An operation method includes: dividing a model to be executed in an accelerator into a plurality of stages; determining, for each of the stages, a maximum batch size processible in an on-chip memory of the accelerator; determining the determined maximum batch sizes to each be a candidate batch size to be applied to the model; and determining, to be a final batch size to be applied to the model, one of the determined candidate batch sizes that minimizes a sum of a computation cost of executing the model in the accelerator and a memory access cost. | 2022-03-10 |
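The final batch-size selection in 20220075645 can be read as a one-line minimization over the per-stage candidates; the cost functions in the sketch below are illustrative assumptions standing in for real accelerator compute and memory-access models.

```python
# Minimal sketch: the per-stage maximum batch sizes become the candidates, and
# the final batch size minimizes compute cost plus memory-access cost.
# Both cost models below are toy assumptions.

def compute_cost(batch):      # e.g. poorer utilization at tiny batches
    return 1000.0 / batch

def memory_cost(batch):       # e.g. more off-chip traffic at large batches
    return 2.0 * batch

def choose_batch_size(per_stage_max_batches):
    candidates = sorted(set(per_stage_max_batches))
    return min(candidates, key=lambda b: compute_cost(b) + memory_cost(b))

per_stage_max = [8, 16, 32, 64]          # max batch that fits on-chip per stage
print(choose_batch_size(per_stage_max))  # 16 under these toy cost models
```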
20220075646 | Malware Behavioral Monitoring - Systems and methods for monitoring a process are provided. An example method commences with providing a management platform. The management platform is configured to receive user rules for processing at least one function call within the process. A high-level script can be used based on the user rules to develop and install at least one library to execute synchronously within the process. The at least one library can be configured to monitor the process for at least one function call and capture argument values of the function call before the argument values are passed to a function. The at least one library can filter the function call based at least in part on the argument values. The method can continue with selectively creating an API event for execution by a dedicated worker thread. The execution of the API event is performed asynchronously with regard to the process. | 2022-03-10 |
20220075647 | METHODS AND APPARATUS TO PROTECT OPEN AND CLOSED OPERATING SYSTEMS - Methods, apparatus, systems and articles of manufacture are disclosed. An example apparatus includes at least one memory, instructions in the apparatus, at least one processor to execute the instructions to, in response to identifying malicious data: a) in response to determining that the at least one processor is controlled by the first operating system type, block a download from being executed, and b) in response to determining a switch from the first operating system type to the second operating system type, remove, from the at least one memory, an object downloaded in the download. | 2022-03-10 |
20220075648 | System And Method For Intelligent Data Center Power Management And Energy Market Disaster Recovery - Systems and methods for intelligent data center power management and energy market disaster recovery are comprised of a data collection layer, infrastructure elements, application elements, power elements, virtual machine elements, an analytics/automation/actions layer, an analytics or predictive analytics engine, automation software, actions software, and an energy markets analysis layer and software with intelligent energy market analysis elements or software. A plurality of data centers employ the systems and methods, comprising a plurality of Tier 2 data centers that may be running applications, virtual machines, and physical computer systems, to enable data center and application disaster recovery from utility energy market outages. The systems and methods may be employed to enable application load balancing and data center power load balancing across a plurality of data centers, and may lead to financial benefits when moving application and power loads from one data center location using power during peak energy hours to another data center location using power during off-peak hours. | 2022-03-10 |
20220075649 | MIGRATION BETWEEN CPU CORES - Methods, non-transitory machine-readable media, and computing devices for transitioning tasks and interrupt service routines are provided. An example method includes processing, by a plurality of processor cores of a storage controller, tasks and interrupt service routines. A performance statistic is determined corresponding to the plurality of processor cores. Based on detecting that the performance statistic passes a threshold, a number of the plurality of processor cores that are assigned to the tasks and the interrupt service routines are reduced. | 2022-03-10 |
20220075650 | CACHED AND PIPELINED EXECUTION OF SOFTWARE MODULES - Systems and methods for executing software modules in a pipelined fashion. A listing of modules to be executed is received and each module is executed in turn. Prior to execution, each module is code and input checked to determine if it corresponds to a previously executed module. If there is correspondence, then cached results from the previously executed module is used in place of executing the module. If there is no correspondence, then the module is executed, and its results are cached such that these results are available to subsequently executed modules. At least one of the modules may be an implementation of a machine learning model. | 2022-03-10 |
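The code-and-input check in 20220075650 amounts to keying a result cache on both the module's code and its inputs; the sketch below assumes SHA-256 over the compiled bytecode plus a repr of the inputs as that key.

```python
# Minimal sketch of code- and input-checked caching: a module is re-executed
# only when the hash of its code and of its inputs has not been seen before.
# The hashing scheme is an illustrative assumption.

import hashlib

_cache = {}

def cache_key(func, inputs):
    material = func.__code__.co_code + repr(inputs).encode()
    return hashlib.sha256(material).hexdigest()

def run_pipeline(modules, initial_input):
    data = initial_input
    for module in modules:
        key = cache_key(module, data)
        if key in _cache:
            data = _cache[key]          # reuse cached result, skip execution
        else:
            data = module(data)
            _cache[key] = data          # make result available downstream
    return data

def clean(rows):  return [r.strip() for r in rows]
def upcase(rows): return [r.upper() for r in rows]

print(run_pipeline([clean, upcase], [" a ", " b "]))   # ['A', 'B']
print(run_pipeline([clean, upcase], [" a ", " b "]))   # same result, served from cache
```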
20220075651 | HIGHLY PARALLEL PROCESSING ARCHITECTURE WITH COMPILER - Techniques for task processing using a highly parallel processing architecture with a compiler are disclosed. A two-dimensional array of compute elements is accessed. Each compute element within the array of compute elements is known to a compiler and is coupled to its neighboring compute elements within the array of compute elements. A set of directions is provided to the hardware, through a control word generated by the compiler, for compute element operation and memory access precedence. The set of directions enables the hardware to properly sequence compute element results. The set of directions controls data movement for the array of compute elements. A compiled task is executed on the array of compute elements, based on the set of directions. The compute element results are generated in parallel in the array, and the compute element results are ordered independently from control word arrival at each compute element. | 2022-03-10 |
20220075652 | SCHEDULING TASKS USING WORK FULLNESS COUNTER - A method of activating scheduling instructions within a parallel processing unit includes checking if an ALU targeted by a decoded instruction is full by checking a value of an ALU work fullness counter stored in the instruction controller and associated with the targeted ALU. If the targeted ALU is not full, the decoded instruction is sent to the targeted ALU for execution and the ALU work fullness counter associated with the targeted ALU is updated. If, however, the targeted ALU is full, a scheduler is triggered to de-activate the scheduled task by changing the scheduled task from the active state to a non-active state. When an ALU changes from being full to not being full, the scheduler is triggered to re-activate an oldest scheduled task waiting for the ALU by removing the oldest scheduled task from the non-active state. | 2022-03-10 |
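The dispatch decision in 20220075652 reduces to comparing a per-ALU work fullness counter against a capacity before sending a decoded instruction; capacities and the stall handling in the sketch below are illustrative assumptions.

```python
# Minimal sketch of the fullness-counter check: an instruction is only
# dispatched when the target ALU's counter is below capacity; otherwise the
# issuing task would be de-activated until the ALU drains.

ALU_CAPACITY = 4
fullness = {"alu0": 0, "alu1": 0}        # per-ALU work fullness counters

def try_dispatch(instruction, target_alu):
    if fullness[target_alu] >= ALU_CAPACITY:
        return False                      # ALU full: caller de-activates the task
    fullness[target_alu] += 1             # counter updated on dispatch
    print(f"dispatched {instruction} to {target_alu}")
    return True

def retire(target_alu):
    fullness[target_alu] -= 1             # ALU no longer full: re-activate waiters

for n in range(5):
    if not try_dispatch(f"op{n}", "alu0"):
        print(f"op{n} stalled; task would move to a non-active state")
retire("alu0")
print("alu0 fullness after one retire:", fullness["alu0"])
```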
20220075653 | SCHEDULING METHOD AND APPARATUS, AND RELATED DEVICE - This application provides a scheduling method and apparatus, and a related device. The method includes a central cluster receiving a scheduling request sent by a first cluster, and determining a second cluster that meets the scheduling request. The central cluster indicates the first cluster to execute a task by using the second cluster. The method can support task cross-cluster scheduling, thereby implementing resource sharing between different clusters, and increasing resource utilization. | 2022-03-10 |
20220075654 | OPTIMIZING RUNTIME FRAMEWORK FOR EFFICIENT HARDWARE UTILIZATION AND POWER SAVING - A system and method are disclosed for polling in a multi-thread computing system. In one embodiment, a method includes actively polling at least one work queue associated with a worker thread; as a result of the at least one work queue being empty during the polling for a first period of time, causing the worker thread to alternately: poll the at least one work queue during at least one polling interval; and enter an autonomous sleep state during at least one sleep interval; and, as a result of the at least one work queue being empty during each polling interval of a back-off period, causing the worker thread to enter a non-autonomous sleep state for a yield | 2022-03-10 |
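The poll/sleep alternation in 20220075654 can be sketched as a loop that polls while recently busy, takes short autonomous sleeps once the queue has been empty for a first period, and yields into a longer non-autonomous sleep after a full back-off period; every interval length below is an assumption for illustration.

```python
# Minimal sketch of a worker thread that backs off from active polling to
# autonomous sleeps, and then to a non-autonomous sleep, as its queue stays
# empty. All durations are illustrative assumptions.

import queue, time

def worker(work_queue, poll_interval=0.01, sleep_interval=0.05,
           first_period=0.1, backoff_period=0.5, stop_after=1.0):
    start = empty_since = time.monotonic()
    while time.monotonic() - start < stop_after:
        try:
            item = work_queue.get_nowait()        # active poll
            print("processed", item)
            empty_since = time.monotonic()
            continue
        except queue.Empty:
            pass
        idle = time.monotonic() - empty_since
        if idle >= backoff_period:
            print("entering non-autonomous sleep (yield to scheduler)")
            time.sleep(sleep_interval * 4)
            empty_since = time.monotonic()
        elif idle >= first_period:
            time.sleep(sleep_interval)            # autonomous sleep interval
        else:
            time.sleep(poll_interval)             # keep actively polling

q = queue.Queue()
q.put("job-1")
worker(q)
```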
20220075655 | EFFICIENT ACCELERATOR OFFLOAD IN MULTI-ACCELERATOR FRAMEWORK - Methods, apparatus, and software for efficient accelerator offload in multi-accelerator frameworks. One multi-accelerator framework employs a compute platform including a plurality of processor cores and a plurality of accelerator devices. An application is executed on a first core and a portion of the application workload is offloaded to a first accelerator device. In connection with moving execution of the application to a second core, a second accelerator devices to be used for the offloaded workload is selected based on core-to-accelerator cost information for the second core. This core-to-accelerator cost information includes core-accelerator cost information for combinations of core-accelerator pairs, which are based, at least on part, on latencies projected for interconnect paths between cores and accelerators. Both single-socket and multi-socket platform are supported. The solutions include mechanisms for moving offloaded workloads for multiple accelerator devices, as well as synchronizing accelerator operations and workflows. | 2022-03-10 |
20220075656 | DYNAMIC WORKLOAD SHIFTING WITHIN A CONNECTED VEHICLE - A system for dynamic job shifting includes an interface and a processor. The interface is configured to receive a job request to perform a job. The processor is configured to monitor available resources for performing the job. The available resources include a set of vehicle carried systems accessible to a vehicle event recorder via a communication link. The vehicle event recorder is coupled to a vehicle. The processor is further configured to determine a vehicle carried system of the set of vehicle carried systems for performing the job; provide the job to the vehicle carried system, where the job is configured to create one or more checkpoint data files; and receive an indication of creation of a checkpoint data file of the one or more checkpoint data files. | 2022-03-10 |
20220075657 | RESOURCE ALLOCATION CONTROL DEVICE, COMPUTER SYSTEM, AND RESOURCE ALLOCATION CONTROL METHOD - In a management node that controls the amount of hardware resources of storage nodes to be allocated to the software of distributed data stores executed by storage nodes, the management node includes a disk device that stores a performance model indicating the correspondence relationship between the amount of hardware resources and the performance that can be implemented by the hardware of the resource amount, and a central processing unit (CPU) connected to the disk device, in which the CPU receives the target performance by distributed data stores, determines the hardware resource amount required to achieve the target performance based on the performance model, and sets to allocate hardware of the determined resource amount to the programs of the distributed data stores. | 2022-03-10 |
20220075658 | HIERARCHICAL SCHEDULER - Methods, systems, and computer programs are directed to the implementation of configurable hierarchical schedulers with multiple levels, where each level may use one of several types of queueing mechanisms. A configurable, hierarchical scheduler is designed to handle large-scale processing of requests (e.g., transmitting outgoing messages). The hierarchical scheduler distributes the loads to different queues handling different types of messages (e.g., by user ID, by Internet Protocol (IP) address, by schedule). The different layers of the hierarchical scheduler are configurable to queue and schedule traffic based on many factors, such as IP address, handling reputation, available downstream bandwidth, fairness, concurrency rates to handle multiple constraints, scheduling per client, time-of-delivery constraints, rate limits per user, domain scheduling per user, concurrency throttling per outbound channel, and sharing global rate limits across service processors. | 2022-03-10 |
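The layered queueing in 20220075658 is easiest to see in miniature. The two levels used below (user ID, then destination IP) and the round-robin drain are illustrative assumptions, not the scheduler's actual configuration.

```python
from collections import defaultdict, deque

class HierarchicalScheduler:
    """Two-level sketch: messages are bucketed by a key at each level
    (e.g. user ID, then destination IP), then drained round-robin so that
    no single key can monopolize the outbound channel."""

    def __init__(self, level_keys):
        # level_keys: two functions, one per level, extracting the bucket key
        self.level_keys = level_keys
        self.root = defaultdict(lambda: defaultdict(deque))

    def enqueue(self, message):
        k1 = self.level_keys[0](message)
        k2 = self.level_keys[1](message)
        self.root[k1][k2].append(message)

    def drain(self):
        """Yield messages, interleaving first-level buckets fairly."""
        while any(q for buckets in self.root.values() for q in buckets.values()):
            for k1 in list(self.root):
                for k2 in list(self.root[k1]):
                    q = self.root[k1][k2]
                    if q:
                        yield q.popleft()


# Usage: messages from users "a" and "b" are interleaved on the way out.
sched = HierarchicalScheduler([lambda m: m["user"], lambda m: m["ip"]])
sched.enqueue({"user": "a", "ip": "10.0.0.1", "body": "m1"})
sched.enqueue({"user": "b", "ip": "10.0.0.2", "body": "m2"})
sched.enqueue({"user": "a", "ip": "10.0.0.1", "body": "m3"})
print([m["body"] for m in sched.drain()])   # ['m1', 'm2', 'm3']
```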
20220075659 | RUNTIME CONFIGURABLE REGISTER FILES FOR ARTIFICIAL INTELLIGENCE WORKLOADS - There is disclosed a system and method of performing an artificial intelligence (AI) inference, including: programming an AI accelerator circuit to solve an AI problem with a plurality of layer-specific register file (RF) size allocations, wherein the AI accelerator circuit comprises processing elements (PEs) with respective associated RFs, wherein the RFs individually are divided into K sub-banks of size B bytes, wherein B and K are integers, and wherein the RFs include circuitry to individually allocate a sub-bank to one of input feature (IF), output feature (OF), or filter weight (FL), and wherein programming the plurality of layer-specific RF size allocations comprises accounting for sparse data within the layer; and causing the AI accelerator circuit to execute the AI problem, including applying the layer-specific RF size allocations at run-time. | 2022-03-10 |
20220075660 | COMPUTING RESOURCE ALLOCATION WITH SUBGRAPH ISOMORPHISM - A computing system is provided, including a processor configured to generate a directed weighted graph indicating a plurality of functions configured to be executed on a plurality of communicatively connected processing devices. For each of a plurality of pairs of the functions, the processor may determine a shortest path between the pair of functions. The processor may generate a second graph indicating the plurality of pairs of functions connected by the shortest paths. The processor may receive a pipeline directed acyclic graph (DAG) specifying a data pipeline of a plurality of processing stages. The processor may determine a subgraph isomorphism between the pipeline DAG and the second graph. The processor may convey, to one or more processing devices of the plurality of processing devices, instructions to execute the plurality of processing stages as specified by the subgraph isomorphism. | 2022-03-10 |
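One way to picture the mapping in 20220075660: build a second graph connecting every pair of functions by its shortest-path cost, then search that graph for a subgraph matching the pipeline DAG. The sketch below assumes the third-party networkx library; the function names, weights, and stage names are invented.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Directed weighted graph of functions on communicatively connected devices.
functions = nx.DiGraph()
functions.add_weighted_edges_from([
    ("decode", "filter", 2.0),
    ("filter", "aggregate", 1.0),
    ("decode", "aggregate", 5.0),
])

# Second graph: each pair of functions connected by its shortest-path cost.
pairs = nx.DiGraph()
for src in functions:
    lengths = nx.single_source_dijkstra_path_length(functions, src)
    for dst, cost in lengths.items():
        if src != dst:
            pairs.add_edge(src, dst, weight=cost)

# Pipeline DAG of processing stages to be placed onto the function graph.
pipeline = nx.DiGraph()
pipeline.add_edge("stage_read", "stage_reduce")

matcher = isomorphism.DiGraphMatcher(pairs, pipeline)
if matcher.subgraph_is_isomorphic():
    # mapping from nodes of `pairs` to pipeline stages; invert it for placement
    placement = {stage: node for node, stage in matcher.mapping.items()}
    print(placement)
```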
20220075661 | TECHNOLOGIES FOR SCHEDULING ACCELERATION OF FUNCTIONS IN A POOL OF ACCELERATOR DEVICES - Technologies for scheduling acceleration in a pool of accelerator devices include a compute device. The compute device includes a compute engine to execute an application. The compute device also includes an accelerator pool including multiple accelerator devices. Additionally, the compute device includes an acceleration scheduler logic unit to obtain, from the application, a request to accelerate a function, determine a capacity of each accelerator device in the accelerator pool, schedule, in response to the request and as a function of the determined capacity of each accelerator device, acceleration of the function on one or more of the accelerator devices to produce output data, and provide, to the application and in response to completion of acceleration of the function, the output data to the application. Other embodiments are also described and claimed. | 2022-03-10 |
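A toy version of the capacity-based scheduling in 20220075661 might look as follows; the slot-based capacity model and device names are assumptions made for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Accelerator:
    name: str
    total_slots: int
    used_slots: int = 0

    @property
    def free_slots(self) -> int:
        return self.total_slots - self.used_slots


def schedule(pool, required_slots):
    """Pick the accelerator(s) with enough spare capacity for the function."""
    candidates = sorted(pool, key=lambda a: a.free_slots, reverse=True)
    chosen, remaining = [], required_slots
    for acc in candidates:
        if remaining <= 0:
            break
        take = min(acc.free_slots, remaining)
        if take > 0:
            acc.used_slots += take
            chosen.append((acc.name, take))
            remaining -= take
    if remaining > 0:
        raise RuntimeError("insufficient accelerator capacity")
    return chosen


pool = [Accelerator("fpga0", 4, 3), Accelerator("gpu0", 8, 2), Accelerator("gpu1", 8, 6)]
print(schedule(pool, 7))   # [('gpu0', 6), ('gpu1', 1)]
```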
20220075662 | MESH AGENTS FOR DISTRIBUTED COMPUTING - A method to broker events of event-driven application components, within a distributed computing environment and using a mesh broker, is described. The mesh broker is instantiated as several mesh agents, the mesh agents being provisioned to support mediation activities relating to a plurality of computational nodes within the distributed computing environment. The mesh agents are further deployed as a mesh network among the computational nodes of the distributed computing environment. A connectivity catalog stores cost data associated with transmission of an event notification between each of multiple pairs of the computational nodes. Routes across the mesh network are automatically selected by the mesh agents, using the cost data to determine low-cost routes. | 2022-03-10 |
20220075663 | SYSTEM FOR PROVIDING A SERVICE - A method and a system for providing a service to a user are disclosed. The system comprises a module (VAHI) associated with the user and configured to act on her/his behalf and a plurality of modules (VAE) associated with and representing respective physical resources. The system comprises a runtime distributed execution environment for running the VAHI and VAEs, such environment being provided with a distributed operating system supporting access by the VAHI and VAEs to the physical resources. The VAHI is capable of interacting with the VAEs and, as a result of such interaction, providing the user with a proposal for the provision of a service. Upon receiving approval from the user, the VAHI is capable of instructing at least one of the VAEs to request the associated physical resource to perform an action providing at least part of the service according to the proposal. | 2022-03-10 |
20220075664 | OPTIMIZING RESOURCE ALLOCATION FOR DISTRIBUTED STREAM PROCESSING SYSTEMS - Computer software executing on computer hardware that performs the following operations: (i) training a machine learning model to determine allocations of computing resources to processing elements of a stream processing job according to a specified objective; and (ii) allocating a set of computing resources to the processing elements by: allocating to the processing elements a first subset of the set of computing resources based, at least in part, on a minimum resource requirement for the processing elements, and allocating to the processing elements a second subset of the set of computing resources based, at least in part, on an allocation determined using the trained machine learning model. | 2022-03-10 |
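The two-phase allocation in 20220075664 -- a guaranteed minimum per processing element, then a model-driven split of the remainder -- can be sketched as below. The proportional-weights "model" is a stand-in for the trained machine learning model, and all names and numbers are illustrative.

```python
def allocate(total_units, elements, min_required, model_weights):
    """elements: list of PE names; min_required/model_weights: dicts keyed by PE."""
    # Phase 1: satisfy each element's minimum resource requirement.
    allocation = {pe: min_required[pe] for pe in elements}
    remaining = total_units - sum(allocation.values())
    if remaining < 0:
        raise ValueError("total resources below the combined minimum requirement")
    # Phase 2: split the remainder in proportion to the model's weights.
    weight_sum = sum(model_weights[pe] for pe in elements) or 1.0
    for pe in elements:
        allocation[pe] += int(remaining * model_weights[pe] / weight_sum)
    return allocation


print(allocate(
    total_units=32,
    elements=["source", "parse", "join", "sink"],
    min_required={"source": 2, "parse": 4, "join": 4, "sink": 2},
    model_weights={"source": 0.1, "parse": 0.4, "join": 0.4, "sink": 0.1},
))   # {'source': 4, 'parse': 12, 'join': 12, 'sink': 4}
```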
20220075665 | SCHEDULING METHOD FOR SELECTING OPTIMAL CLUSTER WITHIN CLUSTER OF DISTRIBUTED COLLABORATION TYPE - A cloud management method and a cloud management apparatus are provided for rapidly scheduling the arrangement of service resources while considering equal distribution of resources in a large-scale, distributed-collaboration container environment. The cloud management method according to an embodiment includes: receiving, by a cloud management apparatus, a resource allocation request for a specific service; monitoring, by the cloud management apparatus, the current status of available resources in a plurality of clusters and selecting the clusters to which the requested resource can be allocated; calculating, by the cloud management apparatus, a suitability score for each of the selected clusters; and selecting, by the cloud management apparatus, the cluster most suitable for executing the requested service from among the selected clusters, based on the respective suitability scores. Accordingly, a model that selects a candidate group and then selects the cluster best suited to the required resource can be supported, enabling equal resource arrangement between associated clusters according to the characteristics of the required resource. | 2022-03-10 |
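A minimal sketch of the candidate-then-score selection in 20220075665. The headroom-based score below is an assumed formula, not the suitability score defined in the application, and the cluster fields are invented.

```python
def pick_cluster(clusters, request):
    """clusters: dicts with free/total cpu and mem; request: cpu/mem needed."""
    candidates = [c for c in clusters
                  if c["free_cpu"] >= request["cpu"] and c["free_mem"] >= request["mem"]]
    if not candidates:
        return None

    def score(cluster):
        # Favor clusters whose remaining headroom stays largest after placement,
        # roughly capturing "equal distribution of resources".
        cpu_left = (cluster["free_cpu"] - request["cpu"]) / cluster["total_cpu"]
        mem_left = (cluster["free_mem"] - request["mem"]) / cluster["total_mem"]
        return min(cpu_left, mem_left)

    return max(candidates, key=score)["name"]


clusters = [
    {"name": "edge-a", "total_cpu": 16, "free_cpu": 4,  "total_mem": 64,  "free_mem": 20},
    {"name": "edge-b", "total_cpu": 32, "free_cpu": 20, "total_mem": 128, "free_mem": 90},
]
print(pick_cluster(clusters, {"cpu": 4, "mem": 16}))   # "edge-b"
```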
20220075666 | CONTAINERIZED VNF DEPLOYMENT METHOD AND RELATED DEVICE - A containerized virtualised network function (VNF) deployment method and a related device are disclosed. The disclosed method, implemented by a VNF manager (VNFM), includes receiving a first VNF instantiation request from a network functions virtualisation orchestrator (NFVO), where the first VNF instantiation request carries a first VNF instance identifier and a first virtualised network function descriptor (VNFD) identifier. The method further includes determining a container object package identifier referenced by a VNFD identified by the first VNFD identifier, sending a container object package management request to a container management entity, and receiving a container object package management response, which indicates that a container object instance in a container object package is successfully created, from the container management entity. Furthermore, the method includes creating a VNF instance identified by the first VNF instance identifier, and maintaining a mapping relationship between the first VNF instance identifier and the container object package identifier. | 2022-03-10 |
20220075667 | WORKLOAD IDENTIFICATION AND CAPTURE - In an embodiment, a method includes receiving user-behavior data for a plurality of software applications. The method also includes determining activity windows for the plurality of software applications. The method also includes generating a time map of the activity windows. The method also includes detecting a usage pattern, where the usage pattern indicates two or more applications of the plurality of software applications that are used in combination. The method also includes identifying a user-centric workload from the usage pattern. The method also includes generating a user-centric workload specification for the user-centric workload, where the user-centric workload identifies the two or more applications indicated by the usage pattern. The method also includes associating the user-centric workload specification with an optimization trigger and an optimization profile. The method also includes optimizing the user-centric workload in accordance with the optimization trigger and the optimization profile. | 2022-03-10 |
20220075668 | DISTRIBUTED COMPUTING SYSTEM AND METHOD OF OPERATION THEREOF - There is provided a distributed computation system that establishes a consensus related to a computational value of a computational task, wherein the distributed computation system includes a plurality of computing nodes. | 2022-03-10 |
20220075669 | Non-Blocking Simultaneous MultiThreading (NB-SMT) - A method for non-blocking multithreading, the method may include (a) providing, during a deep neural network (DNN) calculation iteration, to a shared computational resource, input information units related to multiple DNN threads; (b) determining whether to reduce a numerical precision of one or more DNN calculations related to at least one of the multiple DNN threads; and (c) executing, based on the determining, DNN calculations on at least some of the input information units to provide one or more results of the DNN processing. | 2022-03-10 |
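The precision trade-off in 20220075669 can be illustrated with a toy shared multiply-accumulate: when two DNN threads contend in the same iteration, their operands are requantized to a narrower format so both can issue. The bit-widths and the uniform quantizer are invented for the sketch.

```python
import numpy as np

def quantize(x, bits):
    """Uniformly quantize an array to a signed range representable in `bits` bits."""
    scale = (2 ** (bits - 1) - 1) / max(np.max(np.abs(x)), 1e-12)
    return np.round(x * scale) / scale


def shared_mac(thread_inputs, full_bits=16, reduced_bits=8):
    """Execute per-thread dot products on one shared resource per iteration."""
    contended = len(thread_inputs) > 1
    bits = reduced_bits if contended else full_bits
    results = []
    for a, b in thread_inputs:
        results.append(float(np.dot(quantize(a, bits), quantize(b, bits))))
    return results


rng = np.random.default_rng(0)
a0, b0 = rng.normal(size=64), rng.normal(size=64)
a1, b1 = rng.normal(size=64), rng.normal(size=64)
print(shared_mac([(a0, b0)]))             # single thread: full precision
print(shared_mac([(a0, b0), (a1, b1)]))   # two threads: reduced precision
```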
20220075670 | SYSTEMS AND METHODS FOR REPLACING SENSITIVE DATA - A model optimizer is disclosed for managing training of models with automatic hyperparameter tuning. The model optimizer can perform a process including multiple steps. The steps can include receiving a model generation request, retrieving from a model storage a stored model and a stored hyperparameter value for the stored model, and provisioning computing resources with the stored model according to the stored hyperparameter value to generate a first trained model. The steps can further include provisioning the computing resources with the stored model according to a new hyperparameter value to generate a second trained model, determining a satisfaction of a termination condition, storing the second trained model and the new hyperparameter value in the model storage, and providing the second trained model in response to the model generation request. | 2022-03-10 |
20220075671 | High Availability Events in a Layered Architecture - Techniques are provided for high availability events in a layered architecture. In an example two computing nodes coordinate to provide a computing service, where each node has a base operating system configured to fence the other base operating system, and an application configured to fence the other application. In some examples, fencing requests by an application are routed through its base operating system, which coordinates application-level fencing requests and operating system-level fencing requests. | 2022-03-10 |
20220075672 | Host Routed Overlay With Deterministic Host Learning And Localized Integrated Routing And Bridging - Systems, methods, and devices for improved routing operations in a network computing environment. A system includes a virtual customer edge router and a host routed overlay comprising a plurality of host virtual machines. The system includes a routed uplink from the virtual customer edge router to one or more of the plurality of leaf nodes. The system is such that the virtual customer edge router is configured to provide localized integrated routing and bridging (IRB) service for the plurality of host virtual machines of the host routed overlay. | 2022-03-10 |
20220075673 | Routing Optimizations In A Network Computing Environment - Systems, methods, and devices for improved routing operations in a network computing environment. A system includes a network topology comprising a spine node and a plurality of leaf nodes. The system is such that at least one of the plurality of leaf nodes is associated with one or more networking prefixes. The spine node stores a prefix table. The prefix table includes a listing of networking prefixes in the network topology. The prefix table includes an indication of at least one equal-cost multipath routing (ECMP) group associated with each of the networking prefixes in the network topology. The prefix table includes an indication of at least one leaf node of the plurality of leaf nodes associated with each of the networking prefixes in the network topology. | 2022-03-10 |
20220075674 | Configuring an API to provide customized access constraints - Systems and methods for configuring an Application Programming Interface (API) to provide a set of customized access constraints are provided. In one implementation, a computing system includes a processing device and a memory device configured to store an API and computer software. The computer software has a plurality of software components configured to enable the processing device to utilize internal data for performing a plurality of functions. The API is configured to define interactions between the software components and is further configured to define access constraints with respect to the computing system. The access constraints are configured to restrict access by an end user associated with the computing system with respect to the internal data and software components. Also, the computer software is configured to adjust the access constraints of the API. | 2022-03-10 |
20220075675 | API Topology Hiding Method, Device, and System - Embodiments of this application relate to the field of communications technologies, and disclose an application programming interface (API) topology hiding method, a device, and a system. A common API framework core function (CCF) receives, from a topology hiding request entity, a request message that includes information about an API and that is used to request to hide an API exposing function (AEF) that provides the API. Based on the request message, a topology hiding entry point used by an API invoker to invoke the API is determined. An identifier of the API and an identifier of the AEF that provides the API are sent to the topology hiding entry point so that the topology hiding entry point hides the AEF that provides the API. | 2022-03-10 |
20220075676 | USING A MACHINE LEARNING MODULE TO PERFORM PREEMPTIVE IDENTIFICATION AND REDUCTION OF RISK OF FAILURE IN COMPUTATIONAL SYSTEMS - Input on a plurality of attributes of a computing environment is provided to a machine learning module to produce an output value that comprises a risk score that indicates a likelihood of a potential malfunctioning occurring within the computing environment. A determination is made as to whether the risk score exceeds a predetermined threshold. In response to determining that the risk score exceeds a predetermined threshold, an indication is transmitted to indicate that potential malfunctioning is likely to occur within the computing environment. A modification is made to the computing environment to prevent the potential malfunctioning from occurring. | 2022-03-10 |
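The threshold-and-remediate flow in 20220075676 reduces to a few lines. The attribute names, weights, and threshold below are placeholders for the trained machine learning module and tuning of the actual system.

```python
RISK_THRESHOLD = 0.8

def risk_score(attributes):
    """Map environment attributes to a 0..1 risk value (toy weighted sum)."""
    weights = {"cpu_util": 0.4, "queue_depth": 0.3, "io_errors": 0.3}
    return min(1.0, sum(weights[k] * attributes[k] for k in weights))


def check_environment(attributes, remediate):
    score = risk_score(attributes)
    if score > RISK_THRESHOLD:
        print(f"alert: risk score {score:.2f} exceeds threshold")
        remediate(attributes)            # modify the environment preemptively
    return score


check_environment(
    {"cpu_util": 0.95, "queue_depth": 0.9, "io_errors": 0.7},
    remediate=lambda attrs: print("throttling workload on affected host"),
)
```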
20220075677 | METHOD AND APPARATUS WITH NEURAL NETWORK PROFILING - A processor-implemented neural network method includes: receiving an event corresponding to a neural network operation and a control program for performing the neural network operation; detecting a missing event based on the event and the control program; and generating a profile of the neural network operation based on a result of the detecting. | 2022-03-10 |
20220075678 | COMPUTER-READABLE RECORDING MEDIUM STORING FAILURE CAUSE IDENTIFICATION PROGRAM AND METHOD OF IDENTIFYING FAILURE CAUSE - A non-transitory computer-readable recording medium stores a failure cause identification program for causing a computer to execute a process including: collecting process information related to one or more processes that operate in a container environment; obtaining a derivative relationship of a process for each container on the basis of the process information; generating symbol information in which a function of each of the processes is associated with a container in which each of the processes operates according to the derivative relationship of the process for each container; generating an aggregation result in which a frequency of the function is aggregated according to the symbol information; and identifying a cause at a time of failure occurrence on the basis of the aggregation result. | 2022-03-10 |
20220075679 | LOG ANALYSIS IN VECTOR SPACE - The disclosed embodiments provide for identification of a remedial action based on analysis of a system log file. In some example embodiments, messages from the system log file are used as input to generate vectors within a vector space. Portions of the log messages may generate vectors that cluster into a region in the vector space. The region of vector space is associated with one or more remedial actions. The disclosed embodiments are configured, in some example embodiments, to perform the one or more remedial actions when activity in the log file maps to the region of vector space associated with the one or more remedial actions. In some example embodiments, a remedial action can include submitting a problem report to a problem tracking database. | 2022-03-10 |
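A rough sketch of the idea in 20220075679: embed each log message as a vector and trigger a remedial action when it falls close to a known trouble region. The hashing-based embedding, similarity threshold, and action table are stand-ins for the learned components.

```python
import math
from collections import Counter

def embed(message, dims=32):
    """Rough bag-of-tokens hashing embedding, normalized to unit length."""
    vec = [0.0] * dims
    for token, count in Counter(message.lower().split()).items():
        vec[hash(token) % dims] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))


# Region centroids "learned" offline, each tied to a remedial action.
regions = {
    "disk_full": (embed("no space left on device"), "open ticket: expand volume"),
}

def handle_log_line(line, threshold=0.7):
    v = embed(line)
    for name, (centroid, action) in regions.items():
        if cosine(v, centroid) >= threshold:
            print(f"log matched region '{name}': {action}")


handle_log_line("ERROR write failed: no space left on device")
```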
20220075680 | DIVERSE INTEGRATED PROCESSING USING PROCESSORS AND DIVERSE FIRMWARE - A fault detection system includes a sensor configured to measure a physical quantity and generate a measurement of the physical quantity; a first processor configured to receive the measurement, execute a first firmware based on the measurement, and output a first result of the executed first firmware; a second processor configured to receive the measurement from the sensor, execute a second firmware based on the measurement, and output a second result of the executed second firmware, wherein the first firmware and the second firmware provide a same nominal function in a diverse manner for calculating the first result and the second result, respectively, such that the first result and the second result are expected to be within a predetermined margin; and a fault detection circuit configured to detect a fault when the first result and the second result are not within the predetermined margin. | 2022-03-10 |
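The dual-result comparison in 20220075680 is essentially the check below: one measurement runs through two deliberately different computations, and a fault is flagged when the results disagree by more than a margin. The two conversion routines and the margin are invented examples of "diverse" implementations of one nominal function.

```python
PREDETERMINED_MARGIN = 0.01

def firmware_a(raw_counts):
    # e.g. convert ADC counts to degrees Celsius with floating-point math
    return raw_counts * 0.125 - 40.0

def firmware_b(raw_counts):
    # same nominal function, computed diversely via scaled integer math
    return (raw_counts * 125 - 40_000) / 1000.0

def check(raw_counts):
    a = firmware_a(raw_counts)
    b = firmware_b(raw_counts)
    fault = abs(a - b) > PREDETERMINED_MARGIN
    return a, b, fault


print(check(512))    # (24.0, 24.0, False): results agree, no fault flagged
```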
20220075681 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - Failure of a processing unit that processes a plurality of information pieces is discovered in a short time. An information processing device | 2022-03-10 |
20220075682 | RESET AND REPLAY OF MEMORY SUB-SYSTEM CONTROLLER IN A MEMORY SUB-SYSTEM - In an embodiment, a system includes a plurality of memory components and a processing device that is operatively coupled with the plurality of memory components. The processing device includes a host interface, an access management component, a media management component (MMC), and an MMC-restart manager that is configured to perform operations including detecting a triggering event for restarting the MMC, and responsively performing MMC-restart operations that include suspending operation of the access management component; determining whether the MMC is operating, and if so then suspending operation of the MMC; resetting the MMC; resuming operation of the MMC; and resuming operation of the access management component. | 2022-03-10 |
20220075683 | METHODS AND SYSTEMS FOR SELF-HEALING IN CONNECTED COMPUTING ENVIRONMENTS - Methods and systems for networked systems are provided. A reinforcement learning (RL) agent is deployed during runtime of a networked system having at least a first component and a second component. The RL agent detects a first degradation signal in response to an error associated with the first component and a second degradation signal from the second component, the second degradation signal generated in response to the error. The RL agent identifies from a learned data structure an action for fixing degradation, at both the first component and the second component; and continues to update the learned data structure, upon successful and unsuccessful attempts to fix degradation associated with the first component and the second component. | 2022-03-10 |
20220075684 | TECHNOLOGIES FOR PRESERVING ERROR CORRECTION CAPABILITY IN COMPUTE-IN-MEMORY OPERATIONS - Technologies for preserving error correction capability in compute-near-memory operations in a memory include memory media and a media access circuitry coupled with the memory media. The media access circuitry is to detect an error code adjustment state indicative of a failure in the initiated error correction. The media access circuitry is to adjust a voltage to the memory media to eliminate the error code correction adjustment state. Once eliminated, the media access circuitry is to perform the error correction on the read data. | 2022-03-10 |
20220075685 | Data Storage System for Improving Data Throughput and Decode Capabilities - Systems and methods for storing data are described. A system can comprise a controller, a plurality of physical non-volatile memory devices, and a bus comprising a plurality of input/output (I/O) lines. The controller is configured to receive data, encode the received data into a codeword, and transfer, in parallel, different portions of the codeword to different physical non-volatile memory devices among the plurality of physical non-volatile memory devices. | 2022-03-10 |
20220075686 | MEMORY SYSTEM AND CONTROL METHOD - According to one embodiment, a memory system includes a non-volatile memory, a memory interface that reads data recorded in the non-volatile memory as a received value, a converting unit that converts the received value to first likelihood information by using a first conversion table, a decoder that decodes the first likelihood information, a control unit that outputs an estimated value with respect to the received value, which is a decoding result obtained by the decoding, when decoding by the decoder has succeeded, and a generating unit that generates a second conversion table based on a decoding result obtained by the decoding, when decoding of the first likelihood information by the decoder has failed. When the generating unit generates the second conversion table, the converting unit converts the received value to second likelihood information by using the second conversion table, and the decoder decodes the second likelihood information. | 2022-03-10 |
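The retry path in 20220075686 -- decode with a first conversion table, and on failure generate a second table and decode again -- can be sketched with a toy parity "decoder". The tables, read levels, and the way the second table is derived here are all assumptions; the real system would build the second table from the decoder's soft output.

```python
def to_llr(received, table):
    return [table[v] for v in received]

def decode(llrs):
    """Toy decoder: hard-decide each bit, succeed only if even parity holds."""
    bits = [1 if llr < 0 else 0 for llr in llrs]
    return bits, (sum(bits) % 2 == 0)

def read_with_retry(received, first_table):
    bits, ok = decode(to_llr(received, first_table))
    if ok:
        return bits
    # Decoding failed: as a toy stand-in for generating a second conversion
    # table, nudge the least-confident level's LLR across zero and retry.
    weakest = min(first_table, key=lambda v: abs(first_table[v]))
    second_table = dict(first_table)
    second_table[weakest] = -second_table[weakest]
    bits, ok = decode(to_llr(received, second_table))
    return bits if ok else None


first_table = {0: +4.0, 1: -4.0, 2: -1.0, 3: +1.0}   # read level -> LLR
# First decode fails (odd parity); the retry with the adjusted table succeeds.
print(read_with_retry([0, 1, 2, 1], first_table))    # [0, 1, 0, 1]
```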
20220075687 | Data Address Management In Non-Volatile Memory - A method, an apparatus, and a system for data address management in non-volatile memory. Write data is allocated to each of a plurality of multi-level pages configured for storage on a page of a non-volatile memory array. A digest is associated with the write data of one multi-level page based on an attribute for that multi-level page. This attribute differs from the attributes of at least one of the other multi-level pages. An amount of redundancy data to be stored with write data on the multi-level page is reduced to account for the associated digest. A digest may be distributed among a plurality of ECC codewords of a multi-level page. The reduced redundancy data, the digest, and the write data for the multi-level page are stored on the page along with the write data for each of the other multi-level pages of the plurality of multi-level pages. | 2022-03-10 |
20220075688 | Circuits And Methods For Correcting Errors In Memory - An electronic system includes a processor circuit, a memory circuit, and an error correction circuit. The error correction circuit receives information read from the memory circuit. The error correction circuit detects if the information contains an error. The error correction circuit corrects the error in the information to generate corrected information and provides the corrected information and an error signal to the processor circuit. The processor circuit provides the corrected information and a write command to the memory circuit based on the error signal indicating the error. The memory circuit overwrites the information stored in the memory circuit with the corrected information in response to the write command. | 2022-03-10 |
20220075689 | MEMORY WORDLINE ISOLATION FOR IMPROVEMENT IN RELIABILITY, AVAILABILITY, AND SCALABILITY (RAS) - A memory device that performs internal ECC (error checking and correction) can treat an N-bit channel as two N/2-bit channels for application of ECC. The memory device includes a memory array to store data and prefetches data bits and error checking and correction (ECC) bits from the memory array for a memory access operation. The memory device includes internal ECC hardware to apply ECC, with a first group of a first half the data bits checked by a first half of the ECC bits in parallel with a second group of a second half of the data bits checked by a second half of the ECC bits. | 2022-03-10 |
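The split-channel scheme in 20220075689 can be modelled in miniature: treat a word as two independent halves, each carrying its own small correcting code that is checked in parallel. The position-XOR code below is a deliberately simple stand-in for the device's real ECC, and the word width is reduced for readability.

```python
def encode_half(bits):
    check = 0
    for i, b in enumerate(bits):
        if b:
            check ^= (i + 1)          # XOR of 1-based positions of set bits
    return check

def correct_half(bits, stored_check):
    s = stored_check ^ encode_half(bits)
    if s:                              # nonzero syndrome: flip the indicated bit
        bits = list(bits)
        bits[s - 1] ^= 1
    return bits

def write_word(word16):
    halves = [word16[:8], word16[8:]]
    return [(half, encode_half(half)) for half in halves]

def read_word(stored):
    return [bit for half, check in stored for bit in correct_half(half, check)]


word = [1, 0, 1, 1, 0, 0, 1, 0,  0, 1, 1, 0, 1, 0, 0, 1]
stored = write_word(word)
stored[1][0][2] ^= 1                   # inject a single-bit error in the upper half
print(read_word(stored) == word)       # True: each half is corrected independently
```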