6th week of 2022 patent application highlights part 44 |
Patent application number | Title | Published |
20220043616 | Job Management Program - When a job management program is executed by an arithmetic processing device in a terminal device, the arithmetic processing device operates as a job management unit for inputting a print job to an image forming apparatus. When a predetermined command is designated by a user operation, the job management unit inputs, to the image forming apparatus, an image quality control print job for printing a specific color patch chart image on a target medium without applying color correction based on a color profile, so as to create a color profile for the target medium desired by the user. | 2022-02-10 |
20220043617 | OPERATING MACHINE INFORMATION DISPLAY SYSTEM - An operating machine information acquisition unit acquires information on an operating machine, including position information on the operating machine. A terminal device, which is carried by a worker around the operating machine and is capable of displaying information, includes a terminal device information acquisition unit. The terminal device information acquisition unit acquires position information and orientation information of the terminal device. A calculation unit causes the terminal device to display state information relating to a state of the operating machine based on the position information of the operating machine and the position information and the orientation information of the terminal device. | 2022-02-10 |
20220043618 | MEDICAL INFORMATION PROCESSING APPARATUS AND MEDICAL INFORMATION PROCESSING METHOD - According to one embodiment, a medical information processing apparatus includes processing circuitry. The processing circuitry generates progress information on a communication process, based on an anatomical site of a subject included in a communication-target medical image, in the communication process of the medical image related to the subject. The processing circuitry displays the progress information. The processing circuitry terminates the communication process in response to a trigger that is a signal for terminating the communication process. | 2022-02-10 |
20220043619 | METHOD AND ELECTRONIC DEVICE FOR PROJECTING EVENT INFORMATION ON REFLECTOR DEVICE CONNECTED TO ELECTRONIC DEVICE - A method for operating an electronic device includes: based on detection of an event, activating at least one reflector device connected to the electronic device; controlling the at least one reflector device to have an angle with respect to the electronic device such that a view position of a user is placed on the at least one reflector device; and controlling to display, on a display of the electronic device, event information associated with the detected event to be reflected to the at least one reflector device. | 2022-02-10 |
20220043620 | SCREEN CREATION ASSISTANCE DEVICE, DISPLAY DEVICE, AND COMPUTER READABLE STORAGE MEDIUM - A screen creation assistance device includes: a master data creation unit that creates master data including specific information that specifies each of the elements included in screen creation data for causing a display device to display a screen; a sub-project data creation unit that creates sub-project data including reference data for referring to the master data, and the specific information on elements that are not included in the master data; and a communication unit that transmits the master data and the sub-project data to a display device. | 2022-02-10 |
20220043621 | ELECTROMAGNETIC BANDGAP STRUCTURES - Devices for mitigating or stopping noise or surface current on a display are provided. An electronic device including a display may include a display substrate, a mid-support plate that is adjacent to the display substrate, and a lower support plate that is adjacent to the mid-support plate. A space exists between the mid-support plate and the lower support plate. The mid-support plate includes one or more electromagnetic band gap (EBG) structures formed through the mid-support plate, one or more electromagnetic band gap structures mounted onto the mid-support plate, or both. The one or more electromagnetic band gap structures may mitigate or stop surface current flow across the display. | 2022-02-10 |
20220043622 | SYSTEMS AND METHODS FOR COLLABORATING PHYSICAL-VIRTUAL INTERFACES - Aspects of systems and methods for collaborating physical-virtual interfaces are disclosed. In an example, a method may include transmitting digital content to display in a virtual environment that includes one or more human inhabited characters, the digital content corresponding to content displayed on one or more interactive devices. The method may also include receiving first input data representing content markup of the digital content from a first user of the first interactive device. The method may also include determining an action state or an appearance state for a human inhabited character in response to the first input data. The method may also include transmitting the first input data to display in the virtual environment and transmitting the human inhabited character to display in the virtual environment according to the action state or the appearance state. | 2022-02-10 |
20220043623 | INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING APPARATUS, AND SCREEN-SHARING TERMINAL CONTROLLING METHOD - An information processing system includes: an information processing apparatus configured to manage sharing of a screen performed by a plurality of screen-sharing terminals coupled communicably to the information processing apparatus; and an administrator terminal configured to have administrative authority and to be coupled to the information processing apparatus via a network. The information processing apparatus includes an information management unit and an instruction delivery unit. The administrator terminal includes an accepting unit and a communication control unit. | 2022-02-10 |
20220043624 | CONTROL POD FOR A WIRELESS HEADSET - A headset control pod stores a headset for charging when not in use. When the headset is in signal communication with a smart device, a user can use the control pod to mute and unmute a call of the smart device with no need to access the smart device. When the headset is used for music playback from the smart device, the control pod can be used to pause or advance playback. The control pod can also control playback by a headset of music stored in the control pod. | 2022-02-10 |
20220043625 | MEDIA PLAYBACK ACTIONS BASED ON KNOB ROTATION - A system is provided for streaming media content in a vehicle. The system includes a personal media streaming appliance system configured to connect to a media delivery system and receive media content from the media delivery system at least via a cellular network. The personal media streaming appliance system operates to transmit a media signal representative of the received media content to a vehicle media playback system so that the vehicle media playback system operates to play the media content in the vehicle. Various types of rotations of a knob part of the personal media streaming appliance system result in different media playback actions. | 2022-02-10 |
20220043626 | METHODS AND USER INTERFACES FOR SHARING AUDIO - While an electronic device is connected to a first external device, display a first user interface including a first affordance. Detect an input selecting the first affordance. In response to detecting the input corresponding to selection of the first affordance, initiate a process to provide audio data concurrently to the first external device and a second external device different from the first external device. After initiating the process to provide audio data concurrently to the first external device and the second external device, detect an indication that a physical proximity between the electronic device and the second external device satisfies a proximity condition. In response to detecting the indication that the physical proximity between the electronic device and the second external device satisfies the proximity condition, display a second user interface indicating that the physical proximity between the electronic device and the second external device satisfies the proximity condition. | 2022-02-10 |
20220043627 | Receiving Apparatus And Control Method - A receiving apparatus includes a control unit that performs a process of searching for one of a channel and a content on the basis of a phrase that is recognized from an uttered speech of a user, a process of selecting one of a single channel and a single content from among a plurality of channels and a plurality of contents obtained through the search process, a process of displaying, on a display unit, one of the selected content and a content that is being broadcasted on the selected channel, and a process of displaying, as options, item images representing a plurality of channels or a plurality of contents that are obtained through the search process on the display unit. | 2022-02-10 |
20220043628 | ELECTRONIC DEVICE AND METHOD FOR GENERATING SHORT CUT OF QUICK COMMAND - An electronic device and method are disclosed. The electronic device includes a display, a microphone, a communication circuit, a processor, and a memory. The memory stores instructions that, when executed by the processor, implement the method. The method includes determining whether the electronic device is communicatively coupled to an external display device, when the electronic device is not communicatively coupled to the external display device, receiving a first user utterance, executing a task corresponding to at least one of a word, phrase or sentence included in the first user utterance as indicated by the mapping, the task preconfigured by a user, and when the electronic device is communicatively coupled to the external display device, displaying at least one of a text and a graphical user interface (GUI) indicating the at least one word, phrase, and sentence on the external display device. | 2022-02-10 |
20220043629 | USER-INTERFACE SYSTEM FOR A LAUNDRY APPLIANCE - A laundry appliance includes a cabinet. A door is coupled to the cabinet. The door is operable between an opened position and a closed position. An audio interface is disposed on the door. The audio interface includes a microphone for receiving a voice command and a speaker for projecting an audio output. A visual interface is disposed on the door. The visual interface is configured to display a message in response to at least one of the voice command and the audio output. A microcontroller is disposed on the door. The microcontroller is operably coupled to the audio interface and the visual interface. A proximity sensor is configured to communicate sensed information to the microcontroller. The microcontroller is configured to activate at least one of the audio interface and the visual interface in response to the sensed information. | 2022-02-10 |
20220043630 | ELECTRONIC DEVICE AND CONTROL METHOD THEREFOR - An electronic device and a method of controlling the same are provided. The electronic device includes a memory configured to store coefficient data and identification code data in which kernel data is quantized; a first operation circuit configured to, based on a plurality of target elements of target data being sequentially input, select an output value corresponding to at least one of the plurality of target elements that is sequentially input according to an identification code corresponding to the one of the plurality of target elements, and accumulate the selected output value; and a second operation circuit configured to output a convolution operation result based on output data that is output from the first operation circuit and a coefficient corresponding to the output data. | 2022-02-10 |
20220043631 | CONTROLLING CARRY-SAVE ADDERS IN MULTIPLICATION - A multiplier circuit is provided to multiply a first operand and a second operand. The multiplier circuit includes a carry-save adder network comprising a plurality of carry-save adders to perform partial product additions to reduce a plurality of partial products to a redundant result value that represents a product of the first operand and the second operand. A number of the carry-save adders that is used to generate the redundant result value is controllable and is dependent on a width of at least one of the first operand and the second operand. | 2022-02-10 |
20220043632 | PROCESSING-IN-MEMORY (PIM) DEVICES - A processing-in-memory (PIM) device includes first to Lth multiplication/accumulation (MAC) operators, first to Lth memory banks, and a plurality of data input/output (I/O) circuits. The first to Lth MAC operators include first to Lth left MAC operators and first to Lth right MAC operators. The plurality of data I/O circuits include left data I/O circuits and right data I/O circuits. A Uth MAC operator among the first to Lth MAC operators is configured to output one of the first to Mth MAC result data through a Uth left MAC operator among the first to Lth left MAC operators or a Uth right MAC operator among the first to Lth right MAC operators. The PIM device is configured to output the MAC result data outputted through the left MAC operators through the left data I/O circuits, and output the MAC result data outputted through the right MAC operators through the right data I/O circuits. | 2022-02-10 |
20220043633 | RANDOM NUMBER SUPPLY METHOD AND DEVICE - A random number supply device that generates three states required for operation of a signal processing unit from a two-bit random number includes a decision section that decides whether a first random number generated by a first random number generator matches a predetermined value, and a control section that supplies the signal processing unit with a two-bit random number including the first random number, by not using a second random number generated by a second random number generator when the first random number matches the predetermined value, and by using the second random number when the first random number does not match the predetermined value. | 2022-02-10 |
20220043634 | Compact Digitization System for Generating Random Numbers - System for generating random numbers comprising an optical component configured to generate two optical signals, and two photodetectors connected to the optical component, wherein the first photodetector is adapted to receive the first optical signal and to generate a first electrical signal and the second photodetector is adapted to receive the second optical signal and to generate a second electrical signal, wherein the optical component is adapted to generate first and second optical signals that randomly result in first and second electrical signals where the first and second electrical signals are either equal or one is larger than the other, the system characterized in that the photodetectors are adapted to transmit the first and second electrical signals to a comparator, wherein the comparator is adapted to provide an output based on a comparison of the first and second electrical signals, thereby providing the random number. | 2022-02-10 |
20220043635 | DATAFLOW GRAPH CONFIGURATION - A method for configuring a first computer executable program includes: through a user interface, receiving information indicative of a source of data and a data target; and through the user interface, receiving a characterization of a process, the characterization including a type of the process and values for respective characteristics associated with the process. The method includes based on the received information, automatically assigning values to respective parameters of the first computer executable program to cause the first computer executable program to, when executed, receive data from the source of data and output data to the data target. The method includes based on the received characterization of the process, automatically configuring the first computer executable program to reference a second computer executable program. The configuring includes identifying the second computer executable program based on the type of the process; and assigning values to respective parameters of the second computer executable program based on the values for the respective characteristics associated with the process. | 2022-02-10 |
20220043636 | APPLICATION PROGRAM FOR EXTENSION AND DEPLOYMENT OF INTEGRATED AND EXPORTABLE CROSS PLATFORM DIGITAL TWIN MODEL - A method and apparatus for extending, customizing and validating a simulation-based digital twin model is described. In an exemplary embodiment, the device transmits a model to a client, where the model is a simulation-based digital twin model. In addition, the device receives a customization to the model, where the customization adds a functionality to the model. Furthermore, the device deploys the model in a model platform, where the model is used in a simulation with the model platform and the model is coupled with the model platform. | 2022-02-10 |
20220043637 | PROVIDING COMMUNICATION BETWEEN A CLIENT SYSTEM AND A PROCESS-BASED SOFTWARE APPLICATION - Resource-to-operation pairs are received at a user interface. The resource-to-operation pairs are stored in a model abstraction as a flat structure. The resource-to-operation pairs in the flat structure are converted into Representational State Transfer (REST) connectors. The REST connectors are encoded in a conventional interface description language. The REST connectors are stored in webpage code. A request is received at the webpage code from the client system for a service of a process step of the process-based software application. A REST connector in the webpage code translates the request to a message that conforms with the service. A response from the service is received at the webpage code. The response from the service is processed at the webpage code. Data retrieved by the processing of the response is accessed via a browser located at the client system. | 2022-02-10 |
20220043638 | SYSTEMS, DEVICES, AND METHODS FOR SOFTWARE CODING - Provided are a method and system that allow dynamic rendering of a reflexive questionnaire based on a modifiable spreadsheet for users with little to no programming experience and knowledge. The method comprises receiving, from a first computer, a modifiable spreadsheet with multiple rows, each row comprising rendering instructions for a reflexive questionnaire, such as a data type cell, a statement cell, a logic cell, and a field identifier; rendering a graphical user interface, on a second computer, comprising a label and an input element corresponding to the rendering instructions of a first row of the spreadsheet; receiving an input from the second computer; evaluating the input against the logic cell of the spreadsheet; and, in response to the input complying with the logic cell of the spreadsheet, dynamically rendering a second label and a second input element to be displayed on the graphical user interface based on the logic of the first row. | 2022-02-10 |
20220043639 | CONTROL OF MISSION DATA TOOL APPLICATION PROGRAM INTERFACES - A system and method decouple algorithms from a Mission Data Tool suite, or the like, into a scalable, modular and multithreaded tool set callable from a variety of environments. The system generates data objects representing inputs, outputs and field validation capable of being accessed from a framework callable from Java or C#, for example. Data objects are defined in an abstract Independent Definition Language (IDL) and the objects are then generated into the target languages for use by a software component's application programming interface (API). The resulting generated code may also contain serialization/deserialization routines needed for object transfer between different systems in a seamless manner. In some cases, transfer of the objects, algorithmic generation of data into the objects, and the transfer of outputs back to the calling system is possible with minimal overhead and interaction on either end. | 2022-02-10 |
20220043640 | ELECTRONIC SYSTEM FOR DYNAMIC ANALYSIS AND DETECTION OF TRANSFORMED TRANSIENT DATA IN A DISTRIBUTED SYSTEM NETWORK - Embodiments of the invention are directed to systems, methods, and computer program products for dynamic analysis and detection of transformed transient data in a distributed system network. The system is structured for validating, determining and evaluating temporal data transformations associated with technology resource components across iterations of technology applications for maintaining backward compatibility. The system comprises an execution module structured for executing technology resource components in a plurality of testing technology environments concurrently. The system further comprises an analysis module structured for evaluating iterations of a first technology resource component by comparing the transformed first testing output with the transformed second testing output to determine modifications to the first iteration of the first technology resource component in the second iteration of the first technology resource component that succeeds the first iteration. | 2022-02-10 |
20220043641 | SIMULATING CONTAINER DEPLOYMENT - A computer-implemented method, computer system, and computer program product for a container deployment simulation. The method may include performing a container deployment simulation. The method may include detecting a container deployment simulation error. In response to detecting the container deployment simulation error, the method may include providing one or more recommendations to a user. In response to receiving an acceptance of the recommendation from the user, the method may include implementing the recommendation. In response to receiving a rejection of the recommendation from the user, the method may include receiving a user recommendation. The method may include implementing the user recommendation and performing the container deployment simulation. The one or more recommendations may have a weight value. The weight value of the one or more recommendations may be increased when the user accepts the one or more recommendations or reduced when the user rejects the one or more recommendations. | 2022-02-10 |
20220043642 | MULTI-CLOUD LICENSED SOFTWARE DEPLOYMENT - Methods, systems, and computer program products for flexible virtualization system deployment into different cloud computing environments. A set of floating licenses to virtualization system software components is established. The set of floating licenses are configured to permit usage of the virtualization system software components on different cloud computing infrastructures. Workload parameters of a workload to be deployed to one of the different cloud computing infrastructures are considered with respect to cloud attributes corresponding to the different cloud computing infrastructures. One or more candidate target cloud computing infrastructures are selected based upon a comparison between workload attributes of a computing workload and cloud attributes of the candidate target cloud computing infrastructures. Virtualization system software components are deployed into the selected target cloud computing infrastructures. Licenses to the virtualization system software components can float between any combination of different cloud computing infrastructures, including floating the licenses between private clouds and public clouds. | 2022-02-10 |
20220043643 | AUTONOMOUS SERVER INSTALLATION - Aspects of the subject disclosure may include, for example, a system for preparing servers for service over a network, where the servers include out of band management cards. The system may include a processor, a database of server configuration information, and a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations. The operations can include establishing a secure hypertext transfer protocol session over the network with the out of band management card for the server to mount and execute a preinstall image that performs an installation, without any specific network configuration, and where the installation is performed without deploying an agent. Other embodiments are disclosed. | 2022-02-10 |
20220043644 | DISTRIBUTED GEOIP INFORMATION UPDATING - Methods and systems for providing distributed GeoIP information updating. One method includes receiving, with a data processing server, an update event associated with an update to an active version of GeoIP information, wherein the active version of the GeoIP information is actively accessible by an application execution server for request enrichment. The method also includes generating, with the data processing server, an updated version of the GeoIP information according to the update. The method also includes replacing, with the data processing server, the active version with the updated version in storage, where, in response to storing the updated version, the updated version is actively accessible by the application execution server for request enrichment. | 2022-02-10 |
20220043645 | DISTRIBUTED USER AGENT INFORMATION UPDATING - Methods and systems for providing distributed user agent information updating. One system includes a data processing server configured to receive an update event associated with an update to an active version of user agent information. The active version of the user agent information is actively accessible for request enrichment. The data processing server is also configured to generate an updated version of the user agent information according to the update and replace the active version with the updated version in storage. In response to the storage of the updated version, the updated version is actively accessible for request enrichment. | 2022-02-10 |
20220043646 | SYSTEM FOR FACILITATING SOFTWARE CODE BUILD AND REVIEW WITH REAL-TIME ERROR FEEDBACK - Embodiments of the present invention provide a system for providing real-time feedback during software code build and review sessions. The system is configured for determining that a first user is accessing a review session user interface via a user device, determining initiation of a first review session from the user device by the first user, wherein the first review session involves review of a software code associated with an entity application that is developed by at least one of the first user and a plurality of users, continuously capturing the first review session in real-time via the user device, in response to capturing the first review session, processing the first review session, and generating a recommendation based on processing the first review session. | 2022-02-10 |
20220043647 | SECURE DELIVERY OF ASSETS TO A TRUSTED DEVICE - Embodiments described herein provide a system and method for secure delivery of assets to a trusted device. Multiple levels of verification are implemented to enable components of a software update and asset delivery system to verify other components within the system. Furthermore, updates are provided only to client devices that are authorized to receive such updates. In one embodiment, the specific assets provided to a client device during a software update can be tailored to the client device, such that individual client devices can receive updated versions of software asset at a faster or slower rate than mass market devices. For example, developer or beta tester devices can receive pre-release assets, while enterprise devices can receive updates at a slower rate relative to mass market devices. | 2022-02-10 |
20220043648 | MANAGEMENT OF TRANSPORT SOFTWARE UPDATES - An example operation may include one or more of receiving a software update at a transport of a subset of transports, validating the software update based on one or more of: a period of time when the software update is in use, and a number of utilizations of the software update by the subset of the transports, propagating the software update based on the validating, to a further subset of transports, wherein the further subset of the transports is larger than the subset of the transports. | 2022-02-10 |
20220043649 | DISTRIBUTION AND EXECUTION OF INSTRUCTIONS IN A DISTRIBUTED COMPUTING ENVIRONMENT - Methods and apparatus for distribution and execution of instructions in a distributed computing environment are disclosed. An example apparatus includes memory; first instructions; and processor circuitry to execute the first instructions to manage an instruction queue. The instruction queue includes indications of second instructions to be executed at a component server. The processor circuitry is to add a first indication of a corresponding one of the second instructions to the instruction queue. The first indication is to identify: (1) a location of the second instruction and (2) a format of the second instruction. In response to a second indication that the second instruction has been executed, the processor circuitry is to remove the first indication from the instruction queue. | 2022-02-10 |
20220043650 | SOFTWARE CHECKPOINT-RESTORATION BETWEEN DISTINCTLY COMPILED EXECUTABLES - A system and method for software checkpoint-restoration between distinctly compiled executables is disclosed. A first compiled version of the software, such as Version A, is executed, after which checkpointing is performed in order to generate a checkpoint image. After checkpointing, restarting execution is performed with at least some of a second compiled version of the software, such as Version B, being executed using a switching function that is configured to switch execution upon restart at least partly to the second compiled version of the software. In this way, different executable versions may be used during the restart than during the initial execution, such as an unoptimized build during the restart versus an optimized build during the initial execution, so that software testing and/or debugging may be performed more efficiently. | 2022-02-10 |
20220043651 | METHODS AND SYSTEMS THAT SHARE RESOURCES AMONG MULTIPLE, INTERDEPENDENT RELEASE PIPELINES - The current document is directed to automated application-release-management facilities that, in a described implementation, coordinate continuous development and release of cloud-computing applications. The application-release-management process is specified, in the described implementation, by application-release-management pipelines, each pipeline comprising one or more stages, with each stage comprising one or more tasks. The currently described methods and systems allow resources to be shared among multiple, interdependent release pipelines and allow access to shared resources to be controlled. | 2022-02-10 |
20220043652 | SYSTEMS, METHODS, AND APPARATUS FOR TILE CONFIGURATION - Embodiments detailed herein relate to matrix (tile) operations. For example, discussed are decode circuitry to decode an instruction having fields for an opcode and a memory address, and execution circuitry to execute the decoded instruction to set a tile configuration for the processor to utilize tiles in matrix operations based on a description retrieved from the memory address, wherein a tile is a set of 2-dimensional registers. | 2022-02-10 |
20220043653 | REDUCING SAVE RESTORE LATENCY FOR POWER CONTROL - A method of save-restore operations includes monitoring, by a power controller of a parallel processor (such as a graphics processing unit), a register bus for one or more register write signals. The power controller determines that a register write signal is addressed to a state register that is designated to be saved prior to changing a power state of the parallel processor from a first state to a second state having a lower level of energy usage. The power controller instructs a copy of data corresponding to the state register to be written to a local memory module of the parallel processor. Subsequently, the parallel processor receives a power state change signal and writes state register data saved at the local memory module to an off-chip memory prior to changing the power state of the parallel processor. | 2022-02-10 |
20220043654 | Techniques For Metadata Processing - Techniques are described for metadata processing that can be used to encode an arbitrary number of security policies for code running on a processor. Metadata may be added to every word in the system and a metadata processing unit may be used that works in parallel with data flow to enforce an arbitrary set of policies. In one aspect, the metadata may be characterized as unbounded and software programmable to be applicable to a wide range of metadata processing policies. Techniques and policies have a wide range of uses including, for example, safety, security, and synchronization. Additionally, described are aspects and techniques in connection with metadata processing in an embodiment based on the RISC-V architecture. | 2022-02-10 |
20220043655 | HISTOGRAM OPERATION - A digital data processor includes an instruction memory storing instructions each specifying a data processing operation and at least one data operand field, an instruction decoder coupled to the instruction memory for sequentially recalling instructions from the instruction memory and determining the data processing operation and the at least one data operand, and at least one operational unit coupled to a data register file and to an instruction decoder to perform a data processing operation upon at least one operand corresponding to an instruction decoded by the instruction decoder and storing results of the data processing operation. The operational unit is configured to increment histogram values in response to a histogram instruction by incrementing a bin entry at a specified location in a specified number of at least one histogram. | 2022-02-10 |
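The bin-increment behavior described in this abstract can be modeled in software; the function name and optional weight parameter below are illustrative assumptions, not details from the filing:

```python
# Hypothetical software model of the histogram instruction: increment the bin
# entry at a specified location in a specified one of several histograms.
def histogram_increment(histograms, hist_index, bin_index, weight=1):
    """Increment one bin of one histogram, as a single instruction would."""
    histograms[hist_index][bin_index] += weight
    return histograms

# Binning a stream of values into two parallel histograms.
hists = [[0] * 4, [0] * 4]
for value, target in [(1, 0), (1, 0), (3, 1), (0, 0)]:
    histogram_increment(hists, target, value)
# hists is now [[1, 2, 0, 0], [0, 0, 0, 1]]
```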
20220043656 | PERFORMANCE SCALING FOR BINARY TRANSLATION - Embodiments relate to improving user experiences when executing binary code that has been translated from other binary code. Binary code (instructions) for a source instruction set architecture (ISA) cannot natively execute on a processor that implements a target ISA. The instructions in the source ISA are binary-translated to instructions in the target ISA and are executed on the processor. The overhead of performing binary translation and/or the overhead of executing binary-translated code are compensated for by increasing the speed at which the translated code is executed, relative to non-translated code. Translated code may be executed on hardware that has one or more power-performance parameters of the processor set to increase the performance of the processor with respect to the translated code. The increase in power-performance for translated code may be proportional to the degree of translation overhead. | 2022-02-10 |
20220043657 | SYSTEM AND METHOD FOR CONVOLVING IMAGE WITH SPARSE KERNELS - An image processing system for convolving an image includes processing circuitry that is configured to retrieve the image including a set of rows, a merged kernel, multiple skip values and a pixel base address. The merged kernel includes all non-zero coefficients of a set of kernels. Each skip value corresponds to a location offset of each non-zero coefficient with respect to a previous non-zero coefficient. Further, the processing circuitry is configured to execute a multiply-accumulate (MAC) instruction and a load instruction in parallel in one clock cycle, multiple times, on the set of rows and the merged kernel to convolve the image with the merged kernel. Each row on which the MAC and load instructions are executed is associated with a corresponding non-zero coefficient and a corresponding skip value. The load instruction is executed based on the pixel base address, the corresponding skip value, and a width of each row. | 2022-02-10 |
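The skip-value addressing scheme can be sketched for a single row; the names and data below are ours, not the patent's, and the hardware MAC/load pairing is reduced to a plain loop:

```python
# Illustrative sketch: convolving one image row with a merged kernel of
# non-zero coefficients, where each skip value is the offset of a coefficient
# from the previous non-zero coefficient.
def sparse_mac(row, coeffs, skips, base):
    """Multiply-accumulate non-zero coefficients at skip-derived offsets."""
    acc, addr = 0, base
    for coeff, skip in zip(coeffs, skips):
        addr += skip              # load address advances by the skip value
        acc += row[addr] * coeff  # MAC: multiply pixel by coefficient
    return acc

# Kernel [2, 0, 0, 3] merges to coeffs [2, 3] with skips [0, 3].
print(sparse_mac([1, 5, 5, 4, 9], [2, 3], [0, 3], 0))  # 1*2 + 4*3 = 14
```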
20220043658 | SYSTEMS AND METHODS FOR MANAGING SYSTEM ROLLUP OF ACCELERATOR HEALTH - An information handling system may include a processor, one or more accelerators communicatively coupled to the processor, and a management controller communicatively coupled to the processor and the one or more accelerators and configured for out-of-band management of the information handling system, the management controller further configured to receive information regarding the one or more accelerators, determine a criticality factor for each of the one or more accelerators based on the information, determine an accelerator health status for each of the one or more accelerators, and determine an overall system health of the information handling system based on the criticality factors and the accelerator health statuses. | 2022-02-10 |
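One way the rollup might combine criticality factors with health statuses is a weighted average; the status scale and weights below are invented for illustration:

```python
# A minimal sketch (criticality scale and status weights are our assumptions)
# of rolling up per-accelerator health into an overall system health figure.
def system_health(accelerators):
    """Weight each accelerator's health status by its criticality factor."""
    status_score = {"ok": 1.0, "warning": 0.5, "critical": 0.0}
    total = sum(a["criticality"] for a in accelerators)
    return sum(a["criticality"] * status_score[a["status"]]
               for a in accelerators) / total

print(system_health([
    {"criticality": 3, "status": "ok"},
    {"criticality": 1, "status": "critical"},
]))  # 0.75
```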
20220043659 | STATE SEMANTICS KEXEC BASED FIRMWARE UPDATE - A kexec-based system update process wherein user-specific data is transferred on reboot of the second kernel. Upon initializing kexec load, buffer memory is assigned to the second kernel and the system loads control pages of fixed size for the second kernel boot, and also loads user-specific data onto extended control pages of variable size. Upon boot of the second kernel, the user-specific data is extracted from the extended control pages and transferred to the corresponding applications. | 2022-02-10 |
20220043660 | SYSTEM FOR SOFTWARE MODULE DEVELOPMENT - Systems and methods for use in software module development. A configuration file and a process agent module operate cooperatively in conjunction with a computer system to provision one or more execution environments to implement one or more instances of a user's software module in development. The configuration file contains the hardware and software configuration that defines the limits and capabilities of the execution environment as well as parameters needed by the software module. The process agent launches the execution environment and ensures that the software module executing in the execution environment has access to the resources set out in the configuration file. Once execution of the software module is complete, performance results are then passed to the process agent for collation and analysis. These results can then be used to determine which implementation of the software module performs best. | 2022-02-10 |
20220043661 | SYSTEMS, DEVICES AND METHODS FOR DYNAMIC GENERATION OF DIGITAL INTERACTIVE CONTENT - Systems and methods for dynamic generation of a user interface for display on a display device of a user are described herein. A first set of payload elements associated with a user interface element to be rendered on the user interface can be identified. The first set of payload elements can be filtered by comparing keywords of each payload element to a user interface keyword to generate a second set of payload elements. The second set of payload elements can be filtered by comparing logic of each payload element to user parameters. A final payload element can be selected based on weighted random selection. The user interface can be rendered on the display with the final payload element as the user interface element. | 2022-02-10 |
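The keyword-filter-then-weighted-random-selection flow can be sketched as follows; the field names ("keywords", "weight") are illustrative assumptions rather than terms from the filing:

```python
import random

# Hedged sketch of filtering payload elements by a UI keyword, then picking
# the final payload element by weighted random selection.
def pick_payload(payloads, ui_keyword, rng=random):
    candidates = [p for p in payloads if ui_keyword in p["keywords"]]
    weights = [p["weight"] for p in candidates]
    return rng.choices(candidates, weights=weights, k=1)[0]

payloads = [
    {"id": "a", "keywords": {"banner"}, "weight": 1},
    {"id": "b", "keywords": {"banner", "promo"}, "weight": 9},
    {"id": "c", "keywords": {"footer"}, "weight": 5},
]
chosen = pick_payload(payloads, "banner", random.Random(0))
print(chosen["id"])  # one of "a" or "b"; "c" is filtered out by keyword
```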
20220043662 | Application Publishing In A Virtualized Environment - Methods and systems for application publishing in a virtualized environment are described herein. A system may facilitate publishing of one or more shortcuts based on inputs made in the virtual desktop environment (e.g., when a user “drag-and-drops” a shortcut onto a publishing icon on a desktop). The system may determine application information and instance information for the application, and may publish a shortcut for that application to the storefront. As a result, users may be permitted to self-publish shortcuts for preferred applications onto personalized storefronts, which may be unique to each user. | 2022-02-10 |
20220043663 | METHOD OF DEFINING AND PERFORMING DYNAMIC USER-COMPUTER INTERACTION, COMPUTER GUIDED NAVIGATION, AND APPLICATION INTEGRATION FOR ANY PROCEDURE, INSTRUCTIONS, INSTRUCTIONAL MANUAL, OR FILLABLE FORM - Various embodiments described in this disclosure relate to methods and computer-based systems that implement those methods for overlaying or superimposing computer-user interaction widgets and application interface widgets on top of content blocks contained within digitized documents which represent written procedures or instructions, instructional manuals and fillable forms, in order to support means of providing dynamic user-computer interaction and computer guided navigation, as well as application functions and data integration, driven through structured interaction and integration metadata definitions, as it relates to said content blocks, during job performance. | 2022-02-10 |
20220043664 | SYSTEMS AND METHODS FOR ACCELERATOR TASK PROFILING VIA VIRTUAL ACCELERATOR MANAGER BASED ON SLOT SPEED - An information handling system may include a plurality of hardware accelerator devices and a processor subsystem having access to a memory subsystem and having access to the plurality of hardware accelerator devices, wherein the memory subsystem stores instructions executable by the processor subsystem, the instructions, when executed by the processor subsystem, causing the processor subsystem to: responsive to issuance of, by an application executing on a virtual machine of a hypervisor executing on the processor subsystem, an instruction triggering an event for use of a selected hardware accelerator device of the plurality of hardware accelerator devices, invoke a virtual acceleration manager of the hypervisor to handle the instruction; determine by the virtual acceleration manager an amount of data to be transferred between the processor subsystem and the selected hardware accelerator device; select by the virtual acceleration manager the selected hardware accelerator based on the amount of data to be transferred; and distribute by the virtual acceleration manager the instruction to the selected hardware accelerator device. | 2022-02-10 |
20220043665 | SYSTEMS AND METHODS FOR MULTI-LINK PLATFORM CONFIGURATION WITH CONTAINERIZED COMPUTE INSTANCES - An information handling system may include a processor subsystem and non-transitory computer-readable media communicatively coupled to the processor subsystem and storing instructions, the instructions configured to, when read and executed by the processor subsystem: execute a basic input/output service to create a link aggregation table with details based on wireless and wired network interface modules present within the information handling system; execute a first operating system service on a container instantiated on a hypervisor of the information handling system to instantiate virtual link aggregation tables for the container based on a network bandwidth policy of the container and link aggregation capabilities as set forth in the link aggregation table; and execute a second operating system service on the hypervisor to instantiate an operating system driver based on operating systems for network instances of link aggregation drivers and dynamic detection of network driver requirements determined by the first operating system service. | 2022-02-10 |
20220043666 | EXECUTING INTERRUPT PROCESSING OF VIRTUAL MACHINES USING PROCESSOR'S ARITHMETIC UNIT - A data processing device that can monitor properly the state of the interrupt processing of a virtual machine is provided. The data processing device according to an aspect of the present disclosure includes an arithmetic unit that executes multiple virtual machines, respectively, and an interrupt controller that instructs execution of the interrupt processing to the arithmetic unit with the virtual machine information to specify at least one of the multiple virtual machines. The interrupt controller includes a counter to count the number of interrupts for each virtual machine based on the virtual machine information. | 2022-02-10 |
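A minimal software model of the per-VM interrupt counter might look like the following; the class and method names are our own stand-ins for the hardware interrupt controller:

```python
from collections import Counter

# Minimal model of the interrupt controller's per-VM counter: each interrupt
# carries virtual-machine information identifying its target VM.
class InterruptController:
    def __init__(self):
        self.counts = Counter()

    def raise_interrupt(self, vm_id):
        self.counts[vm_id] += 1   # count interrupts per virtual machine
        return vm_id              # the VM the arithmetic unit should service

ic = InterruptController()
for vm in ["vm0", "vm1", "vm0"]:
    ic.raise_interrupt(vm)
print(dict(ic.counts))  # {'vm0': 2, 'vm1': 1}
```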
20220043667 | NETWORK-BASED SIGNALING TO CONTROL VIRTUAL MACHINE PLACEMENT - A virtualized computing environment includes a plurality of host computers, each host being connected to a physical network and having a hypervisor executing therein. To provision a virtual machine requiring a connection to a virtual network in one of the hosts, a candidate host for hosting the virtual machine, the candidate host having the virtual network configured therein, is selected. A request is then made for a status of the virtual network to the candidate host. The status of the virtual network is then received from the candidate host. If the virtual network is available, then the virtual machine is deployed to the candidate host. If the virtual network is not available, then a second candidate host is selected for hosting the virtual machine. | 2022-02-10 |
20220043668 | SYSTEM AND METHODS FOR IMPLEMENTING A COMPUTER PROCESS AUTOMATION TOOL - Systems and methods for implementing an automation platform that is configured to analyze computing activities from a plurality of users so as to identify potential automation processes are provided. In one or more examples, a plurality of data collection agents are deployed across a plurality of computing devices and can be configured to collect and record activities performed on the computing device by one or more users of the computing devices. In one or more examples, each agent deployed on a computing device can be configured to transmit the collected data to a central server that can store the collected data in memory. The central server can be configured to collect the data from each agent and can be configured to apply one or more data science algorithms that can be configured to cluster various activities collected by the agents into groups for potential automation. | 2022-02-10 |
20220043669 | COMPLETING AN SMI TASK ACROSS MULTIPLE SMI EVENTS - An SMI task can be completed across multiple SMI events. An OS agent can be employed to determine a current load on a computing device. Based on the load, the OS agent can create an SMI message that specifies a maximum duration for an SMI event and that segments the SMI data for the SMI task. The OS agent can provide the SMI message to BIOS as part of requesting that the SMI task be performed. During the resulting SMI event, the BIOS can reassemble the segmented SMI data and then perform the SMI task. If this processing cannot be completed within the specified maximum duration for an SMI event, the BIOS can pause its processing and cause a subsequent SMI event to occur during which the processing can be resumed. In this way, the SMI task can be completed across multiple SMI events while ensuring that no single SMI event exceeds the specified maximum duration. | 2022-02-10 |
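The pause-and-resume segmentation can be sketched abstractly; this framing (segments per event standing in for a maximum duration) is our own simplification:

```python
# Sketch of segmenting SMI data so no single SMI event exceeds a maximum
# budget: process a few segments, pause, resume in the next SMI event.
def run_smi_task(segments, max_per_event):
    """Yield the work done in each SMI event until the task completes."""
    pending = list(segments)
    while pending:
        event_work, pending = pending[:max_per_event], pending[max_per_event:]
        yield event_work  # one SMI event's worth of processing

events = list(run_smi_task(["s0", "s1", "s2", "s3", "s4"], 2))
print([len(e) for e in events])  # [2, 2, 1]: three SMI events
```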
20220043670 | STREAMING ENGINE WITH SHORT CUT START INSTRUCTIONS - A streaming engine employed in a digital data processor specifies a fixed read-only data stream recalled from memory. Streams are started by one of two types of stream start instructions. A stream start ordinary instruction specifies a register storing a stream start address and a register storing a stream definition template which specifies stream parameters. A stream start short-cut instruction specifies a register storing a stream start address and an implied stream definition template. A functional unit is responsive to a stream operand instruction to receive at least one operand from a stream head register. The stream template supports plural nested loops with short-cut start instructions limited to a single loop. The stream template supports data element promotion to larger data element size with sign extension or zero extension. A set of allowed stream short-cut start instructions includes various data sizes and promotion factors. | 2022-02-10 |
20220043671 | MATERIALIZATION OF AN ANALYTICAL WORKSPACE - Techniques are disclosed for creating a workspace. A data processing system receives a request to create a workspace to implement a portion of a model deployed in a production environment. One or more data objects and associated metadata thereof relevant to the portion of the model, and an execution venue for the workspace are obtained. A set of instructions is generated for executing the one or more data objects and the associated metadata in the workspace. The workspace is created within the execution venue by instantiating the portion of the model, the one or more data objects, and the associated metadata in the workspace. The portion of the model in the workspace is processed using the one or more data objects and the associated metadata in accordance with the set of instructions, and the production environment is updated by the data processing system based on the processing. | 2022-02-10 |
20220043672 | VIRTUAL QUEUE - Methods and non-transitory machine-readable media associated with a virtual queue are described. A method can include receiving, by a processing resource, a request from a user to join a virtual queue, adding, by the processing resource, the user to the virtual queue, determining, by the processing resource, a queue optimization based on an estimated wait time for the user in the virtual queue, and providing to the user, by the processing resource, the queue optimization including the estimated wait time for the user in the virtual queue. The virtual queue can be updated in an example. | 2022-02-10 |
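The abstract does not specify how the estimated wait time is computed; as an illustration only, the sketch below estimates it as queue position times an average service time:

```python
# Illustrative only: wait estimate = position in queue x average service time,
# a simple stand-in for the patent's unspecified queue optimization.
class VirtualQueue:
    def __init__(self, avg_service_seconds):
        self.users, self.avg = [], avg_service_seconds

    def join(self, user):
        self.users.append(user)           # add the user to the virtual queue
        return self.estimated_wait(user)  # provide the queue optimization

    def estimated_wait(self, user):
        return self.users.index(user) * self.avg

q = VirtualQueue(avg_service_seconds=90)
q.join("alice")
print(q.join("bob"))  # 90: one user is ahead of bob
```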
20220043673 | FPGA ACCELERATION FOR SERVERLESS COMPUTING - In one embodiment, a method for FPGA accelerated serverless computing comprises receiving, from a user, a definition of a serverless computing task comprising one or more functions to be executed. A task scheduler performs an initial placement of the serverless computing task to a first host determined to be a first optimal host for executing the serverless computing task. The task scheduler determines a supplemental placement of a first function to a second host determined to be a second optimal host for accelerating execution of the first function, wherein the first function is not able to be accelerated by one or more FPGAs in the first host. The serverless computing task is executed on the first host and the second host according to the initial placement and the supplemental placement. | 2022-02-10 |
20220043674 | SATELLITE DATA PROCESSING METHOD, APPARATUS, AND SATELLITE BACKUP SUBSYSTEM - A satellite data processing method, apparatus, and a satellite backup subsystem belong to the technical field of satellites. This method is applied to the satellite backup subsystem. The method comprises: receiving a data task, wherein the data task comprises data backup or data restoration; splitting the data task into a plurality of single-orbit tasks; and executing a respective single-orbit task in each orbital flight. | 2022-02-10 |
20220043675 | GRAPH COMPUTING METHOD AND APPARATUS - This application discloses a graph computing method and apparatus, so that concurrent graph computing performed by using a plurality of algorithms can be supported. A plurality of subgraphs of a graph are loaded into a plurality of computing units, and the plurality of computing units execute a plurality of algorithms in parallel, so that a same graph can be shared by the plurality of algorithms, and the plurality of algorithms are executed in parallel on the same graph. In this way, the delay caused when one algorithm needs to be executed after execution of another algorithm ends is avoided, so that overall efficiency of performing graph computing by using the plurality of algorithms is improved, and overall time of performing graph computing by using the plurality of algorithms is shortened. | 2022-02-10 |
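A toy version of sharing one loaded graph among algorithms run in parallel can be sketched with threads; the adjacency-dict graph and the two stand-in "algorithms" below are our own:

```python
from concurrent.futures import ThreadPoolExecutor

# One graph, loaded once, shared by several algorithms executing in parallel.
graph = {"a": ["b", "c"], "b": ["c"], "c": []}

def edge_count(g):
    return sum(len(neighbors) for neighbors in g.values())

def vertex_count(g):
    return len(g)

# Both algorithms read the same shared graph concurrently.
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(alg, graph) for alg in (edge_count, vertex_count)]
    results = [f.result() for f in futures]
print(results)  # [3, 3]
```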
20220043676 | EVENT PROXIES FOR FUNCTIONS-AS-A-SERVICE (FAAS) INFRASTRUCTURES - Techniques for implementing event proxies in a Functions-as-a-Service (FaaS) infrastructure are provided. In one set of embodiments, a computer system implementing an event proxy can receive an event emitted by an event source, where the computer system is part of a first computing cloud including the FaaS infrastructure, and where the event source is a software service running in a second computing cloud that is distinct from the first computing cloud. The computer system can translate the event from a first format understood by the event source to a second format understood by a function scheduler of the FaaS infrastructure, where the function scheduler is configured to schedule execution of functions on hosts of the FaaS infrastructure. The computer system can then make the translated event available to the function scheduler. | 2022-02-10 |
20220043677 | VIRTUAL MACHINES SCHEDULING - A computer implemented method of scheduling a plurality of virtual machines for execution by a physical computing infrastructure, each virtual machine being deployable to a subset of the physical computing infrastructure to execute a computing task, the method including determining, for each virtual machine, a subset of the infrastructure and a time period for deployment of the virtual machine, so as to schedule the virtual machines to execute to completion over an aggregate of all time periods, wherein the determination is based on a mathematical optimization of a risk function for each of the plurality of virtual machines corresponding to a relative risk that at least one virtual machine will fail to fully execute its task to completion. | 2022-02-10 |
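The risk-function optimization can be sketched as a brute-force search over assignments; the risk values and the choice of summed risk as the objective are our assumptions:

```python
from itertools import product

# Hedged sketch: brute-force the VM-to-host assignment that minimizes the
# aggregate risk that a VM fails to execute its task to completion.
def schedule(vms, hosts, risk):
    """risk[vm][host] is the risk of non-completion for that placement."""
    best = min(product(hosts, repeat=len(vms)),
               key=lambda assign: sum(risk[vm][h]
                                      for vm, h in zip(vms, assign)))
    return dict(zip(vms, best))

risk = {"vm1": {"h1": 0.1, "h2": 0.4}, "vm2": {"h1": 0.5, "h2": 0.2}}
print(schedule(["vm1", "vm2"], ["h1", "h2"], risk))
# {'vm1': 'h1', 'vm2': 'h2'}: total risk 0.3, the minimum of four assignments
```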
20220043678 | EFFICIENT DISTRIBUTED SCHEDULER FOR A DATA PARTITIONED SYSTEM - Presented herein are methods, non-transitory computer readable media, and devices for optimizing thread assignment to schedulers, avoiding starvation of individual data partitions, and maximizing parallelism in the presence of hierarchical data partitioning, which include: partitioning, by a network storage server, a scheduler servicing a data partitioned system into a plurality of autonomous schedulers; determining what fraction of thread resources in the data partitioned system at least one of the plurality of autonomous schedulers is to receive; and determining, with minimal synchronization, when it is time to allow the at least one of the plurality of autonomous schedulers servicing a coarse hierarchy to run. | 2022-02-10 |
20220043679 | TECHNOLOGIES FOR PROVIDING PREDICTIVE THERMAL MANAGEMENT - Technologies for providing predictive thermal management include a compute device. The compute device includes a compute engine and an execution assistant device to assist the compute engine in the execution of a workload. The compute engine is configured to obtain a profile that relates a utilization factor indicative of a present amount of activity of the execution assistant device to a predicted temperature of the execution assistant device, determine, as the execution assistant device assists in the execution of the workload, a value of the utilization factor of the execution assistant device, determine, as a function of the determined value of the utilization factor and the obtained profile, the predicted temperature of the execution assistant device, determine whether the predicted temperature satisfies a predefined threshold temperature, and adjust, in response to a determination that the predicted temperature satisfies the predefined threshold temperature, an operation of the compute device to reduce the predicted temperature. Other embodiments are also described and claimed. | 2022-02-10 |
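The profile relating a utilization factor to a predicted temperature might be realized as interpolation over measured points; the profile values and threshold below are invented for illustration:

```python
# Minimal stand-in for the utilization-to-temperature profile: linear
# interpolation over (utilization, temperature) pairs.
def predict_temp(profile, utilization):
    """Interpolate the predicted temperature for a utilization factor."""
    pts = sorted(profile)
    for (u0, t0), (u1, t1) in zip(pts, pts[1:]):
        if u0 <= utilization <= u1:
            return t0 + (t1 - t0) * (utilization - u0) / (u1 - u0)
    raise ValueError("utilization outside profile range")

profile = [(0.0, 40.0), (0.5, 60.0), (1.0, 95.0)]
temp = predict_temp(profile, 0.75)
print(temp, temp > 85.0)  # 77.5 False: below threshold, no adjustment needed
```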
20220043680 | INSTANCE CREATION IN A COMPUTING SYSTEM - A system and method for efficiently creating and managing application instances in distributed computing systems is disclosed. Controls are presented for specifying an application for instantiation, a data file for use with the application, and a destination for results from the application. Application resources and topology may be recommended to the user based on prior application execution, and CPU, GPU, and interconnect parameters such as bandwidth and latency. The controls may enable to user to customize the recommendations prior to automated instantiation based on the user's needs, such as whether the application is to be run in batch mode or interactive mode. | 2022-02-10 |
20220043681 | MEMORY USAGE PREDICTION FOR MACHINE LEARNING AND DEEP LEARNING MODELS - Herein, a computer receives a new training dataset for a target ML model. Proven or unproven respective values of hyperparameters of the target ML model are selected. An already-trained ML metamodel predicts an amount of memory that the target ML model will need, when configured with the respective values of the hyperparameters, to train with the new training dataset. In an embodiment, supervised training of the ML metamodel is as follows. The ML metamodel receives feature vectors that each contains distinct details of a respective past training of the target ML model of many and varied trainings of the target ML model. Those distinct details of each past training includes: respective values of the hyperparameters, and respective values of metafeatures of a respective training dataset of many training datasets. Each feature vector is labeled with a respective amount of memory that the target ML model needed during the respective past training. | 2022-02-10 |
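A toy metamodel gives the flavor of the prediction step; we substitute a 1-nearest-neighbour lookup for whatever model the filing actually trains, and the feature values are invented:

```python
# Toy metamodel: predict peak training memory from a feature vector of
# hyperparameter values and training-dataset metafeatures, using the label
# of the most similar past training (1-nearest-neighbour stand-in).
def predict_memory(history, query):
    """history: list of (feature_vector, memory_mb) from past trainings."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(history, key=lambda rec: dist(rec[0], query))[1]

history = [
    ((10, 0.1, 1_000), 512),         # (layers, lr, dataset rows) -> MB used
    ((50, 0.1, 1_000_000), 8192),
]
print(predict_memory(history, (48, 0.05, 900_000)))  # 8192
```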
20220043682 | CONTROLLING MEMORY UTILIZATION BY A TOPIC IN A PUBLISH-SUBSCRIBE ENVIRONMENT - An apparatus is provided to manage memory utilization by a topic in a publish-subscribe environment, wherein the topic is a logical container for the messages. The apparatus includes a primary memory device configured to store messages published to a topic, and a secondary storage device. A processor operationally coupled to the primary and secondary memory devices is configured to monitor utilization of a portion of the primary memory device assigned to the topic. In response to detecting that the utilization of the portion of the primary memory device has equaled or exceeded a threshold for memory utilization, the processor performs at least one of throttling the rate of publishing to the topic and transferring a portion of the messages from the topic to the secondary memory device. Each of the throttling and the transferring keeps the portion of the primary memory device assigned to the topic from overloading. | 2022-02-10 |
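The transfer mitigation can be sketched as follows (the throttling alternative is omitted for brevity); the capacity/threshold numbers are illustrative:

```python
# Sketch of one mitigation from the abstract: once the topic's share of
# primary memory reaches the threshold, transfer the oldest messages to
# secondary storage so the primary portion never overloads.
class Topic:
    def __init__(self, capacity, threshold):
        self.primary, self.secondary = [], []
        self.capacity, self.threshold = capacity, threshold

    def publish(self, msg):
        if len(self.primary) / self.capacity >= self.threshold:
            self.secondary.append(self.primary.pop(0))  # spill oldest message
        self.primary.append(msg)

t = Topic(capacity=4, threshold=0.5)
for i in range(5):
    t.publish(i)
print(len(t.primary), len(t.secondary))  # 2 3: primary stays under threshold
```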
20220043683 | SYSTEM MANAGEMENT MEMORY COHERENCY DETECTION - In an example, a system includes a firmware controller to initiate an SM execution mode of the system. The firmware controller scans memory for a process pool tag. The firmware controller compares the process pool tag to a set of operating system process pool tags and detects a coherency discrepancy between the process pool tag and the set of operating system process pool tags. The firmware controller exits the SM execution mode of the system. | 2022-02-10 |
20220043684 | MEMORIES COMPRISING PROCESSOR PROFILES - In an example, a memory includes processor profiles to load into a processor. The processor may provide an address to access a processor profile. The address may be modified to select a processor profile to load into the processor. | 2022-02-10 |
20220043685 | SYSTEMS AND METHODS FOR REAL-TIME PROCESSING - A method for real-time data processing is described. The method is implemented on a computer system having one or more physical processors programmed with computer program instructions which, when executed, perform the method. The method comprises allocating a real-time dataset associated with a real-time data interaction to a node in a chain of nodes, wherein each node is representative of a user in the real-time data interaction; setting a node status of the node for the real-time dataset to pending; and independently of (i) a node status of the one or more upstream nodes and (ii) a node status of the one or more downstream nodes: periodically determining, by the computer system, an availability status of the node; and in response to the availability status satisfying the criterion, setting the node status for the real-time dataset as settled. | 2022-02-10 |
20220043686 | ALLOCATING COMPUTING RESOURCES BASED ON PROPERTIES ASSOCIATED WITH LOCATION - Various examples are disclosed for predictive allocation of computing resources based on the predicted location of a user. A computing environment can generate a predictive usage model that predicts a location of a user and allocate computing resources, such as VDI sessions or VMs, to a host device that optimizes latency to the predicted location. | 2022-02-10 |
20220043687 | METHODS AND APPARATUS FOR SCALABLE MULTI-PRODUCER MULTI-CONSUMER QUEUES - Methods and apparatus are disclosed for scalable multi-producer multi-consumer queues. At least one non-transitory machine-readable medium comprises instructions that, when executed, cause a processor to enqueue a first value into a first element of a queue using an atomic operation, the first element identified by a producer index, update the producer index to identify a second element of the queue using an atomic operation, the second element determined by one or more of the producer index and a length of the queue, dequeue a second value from a third element of the queue using an atomic operation, the third element identified by a consumer index, and update the consumer index to identify a fourth element of the queue using an atomic operation, the fourth element determined by one or more of the consumer index and the length of the queue. | 2022-02-10 |
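The producer/consumer index arithmetic (modulo the queue length) can be sketched in Python; since Python exposes no user-level atomic compare-and-swap, a lock stands in for the patent's atomic operations:

```python
import threading

# Ring-buffer sketch of a multi-producer multi-consumer queue. The lock is a
# substitute for atomic operations, which plain Python cannot express.
class MPMCQueue:
    def __init__(self, length):
        self.buf = [None] * length
        self.prod = self.cons = 0
        self.lock = threading.Lock()

    def enqueue(self, value):
        with self.lock:                           # stands in for atomic op
            self.buf[self.prod % len(self.buf)] = value
            self.prod += 1                        # atomic producer-index update

    def dequeue(self):
        with self.lock:                           # stands in for atomic op
            value = self.buf[self.cons % len(self.buf)]
            self.cons += 1                        # atomic consumer-index update
            return value

q = MPMCQueue(4)
for v in "abc":
    q.enqueue(v)
print(q.dequeue(), q.dequeue())  # a b
```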
20220043688 | Heterogeneous Scheduling for Sequential Compute Dag - Embodiments of this disclosure provide techniques for splitting a DAG computation model and constructing sub-DAG computation models for inter-node parallel processing. In particular, a method is provided where a plurality of processors split the DAG computation into a plurality of non-interdependent sub-nodes within each respective node of the DAG computation model. The plurality of processors includes at least two different processing unit types. The plurality of processors construct a plurality of sub-DAG computations, each sub-DAG computation including at least a non-interdependent sub-node from different nodes of the DAG computation. The plurality of processors process each of the plurality of sub-DAG computations in parallel. | 2022-02-10 |
20220043689 | COMPUTERIZED SYSTEMS AND METHODS FOR FAIL-SAFE LOADING OF INFORMATION ON A USER INTERFACE USING A CIRCUIT BREAKER - Systems and methods are provided for fail-safe loading of information on a user interface, comprising receiving, via a modular platform, requests for access to a mobile application platform from a plurality of mobile devices, opening and directing the requests for access to the mobile application platform to a sequential processor of an application programming interface (API) gateway when a parallel processor of the API gateway is unresponsive to requests for access to the mobile application platform for a predetermined period of time, periodically checking a status of the parallel processor, and redirecting the requests for access to the mobile application platform to the parallel processor when the parallel processor is capable of processing requests for access to the mobile application platform. | 2022-02-10 |
20220043690 | PARALLELIZED SEGMENT GENERATION VIA KEY-BASED SUBDIVISION IN DATABASE SYSTEMS - A method for execution by a record processing and storage system includes assigning each of a plurality of key space sub-intervals of a cluster key domain to a corresponding one of a plurality of processing core resources, and generating a plurality of segments from the set of records via the plurality of processing core resources. Each processing core resource in the plurality of processing core resources generates a subset of the plurality of segments by identifying a proper subset of the set of records based on having cluster key values included in a corresponding one of the plurality of key space sub-intervals, and by generating the subset of the plurality of segments to include the proper subset of the set of records. | 2022-02-10 |
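The key-based subdivision can be illustrated with a small record set; representing each sub-interval by its upper bound is our simplification:

```python
# Illustrative partitioning of a cluster-key domain into sub-intervals, each
# assigned to one "processing core" that segments only its own records.
def partition_records(records, boundaries):
    """boundaries: sorted upper bounds of each key-space sub-interval."""
    segments = [[] for _ in boundaries]
    for key, payload in records:
        for core, upper in enumerate(boundaries):
            if key < upper:
                segments[core].append((key, payload))
                break
    return segments

records = [(3, "r0"), (11, "r1"), (7, "r2"), (15, "r3")]
print(partition_records(records, [10, 20]))
# [[(3, 'r0'), (7, 'r2')], [(11, 'r1'), (15, 'r3')]]
```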
20220043691 | CLUSTERING AND VISUALIZING DEMAND PROFILES OF RESOURCES - A system and method are presented for processing demand data for a set of resources in a technology platform. A method is provided that includes collecting demand profiles for the set of resources; reformatting each demand profile into a cumulative demand plot; calculating a distance metric for each pair of cumulative demand plots based on an area between the pair of cumulative demand plots; clustering the resources into a set of clusters based on calculated distance metrics; and generating a characterization for each of the clusters to facilitate management or control of the technology platform. | 2022-02-10 |
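The area-based distance metric can be sketched for discrete profiles; computing the area as a sum over aligned time steps is our simplification of whatever integration the filing uses:

```python
# Sketch of the distance metric: the area between two cumulative demand
# plots, approximated as a discrete sum over aligned time steps.
def cumulative(demands):
    total, out = 0, []
    for d in demands:
        total += d
        out.append(total)          # running cumulative demand
    return out

def demand_distance(a, b):
    """Area between the cumulative plots of two equal-length profiles."""
    return sum(abs(x - y) for x, y in zip(cumulative(a), cumulative(b)))

print(demand_distance([1, 2, 3], [3, 2, 1]))  # |1-3| + |3-5| + |6-6| = 4
```

A clustering step would then feed these pairwise distances into any standard distance-based clustering algorithm.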
20220043692 | SAAS INFRASTRUCTURE FOR FLEXIBLE MULTI-TENANCY - Techniques for implementing a software-as-a-service (SaaS) infrastructure that supports flexible multi-tenancy are provided. In various embodiments, this SaaS infrastructure employs a hybrid design that can flexibly accommodate both single-tenant and multi-tenant instances of a SaaS application. Accordingly, with this infrastructure, a SaaS provider can advantageously support high levels of isolation between certain tenants of its application (as dictated by the tenants' needs and/or other criteria) while keeping the marginal cost of operating the infrastructure as low as possible. | 2022-02-10 |
20220043693 | METHODS, SYSTEMS AND APPARATUS FOR CLIENT EXTENSIBILITY DURING PROVISIONING OF A COMPOSITE BLUEPRINT - Methods, apparatus and articles of manufacture to provide client extensibility during provisioning of a composite blueprint are disclosed. An example virtual appliance in a cloud computing environment includes an orchestrator to facilitate provisioning of a virtual computing resource based on a blueprint, the provisioning associated with an event defined by the blueprint. The example virtual appliance also includes an event broker to maintain a set of subscribers to the event broker, each of the set of subscribers further subscribing to at least one event topic through the event broker, the event broker to trigger a notification of a first subscriber to a first event topic associated with the event when the event broker determines that the first subscriber is a blocking subscriber for the first event topic, the event broker to facilitate modification of the event by a blocking subscriber but not by a non-blocking subscriber. | 2022-02-10 |
20220043694 | SYSTEM AND METHOD FOR ALLOCATION OF RESOURCES WITHIN AN ENVIRONMENT - Embodiments of the present invention provide a system for allocating resources to one or more users within an environment. The system is configured for registering one or more users of an entity to a closed distributed register, registering one or more resources of the entity to the closed distributed register, establishing a communication link with one or more entity systems, wherein the one or more entity systems are located in the environment of the entity, identifying one or more patterns associated with the one or more users based on the communication link with the one or more entity systems, and allocating the one or more resources to the one or more users based on the one or more patterns. | 2022-02-10 |
20220043695 | Migrating Workloads Using Active Disaster Recovery - Migrating workloads among execution environments including storage systems includes: selecting a target execution environment for supporting a workload and migrating the workload to the target execution environment utilizing active disaster recovery. Migrating the workload can include: assigning storage resources of the workload to a first pod; linking for replication, the first pod to a second pod of the target execution environment; and replicating the storage resources of the workload to the second pod of the target execution environment. | 2022-02-10 |
20220043696 | DISTRIBUTED INFERENCING USING DEEP LEARNING ACCELERATORS WITH INTEGRATED RANDOM ACCESS MEMORY - Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. For example, an integrated circuit device may be configured to execute instructions with matrix operands and configured with random access memory. At least one interface of the integrated circuit device is configured to receive input data from a data source, and to receive, from a server system over a computer network, parameters of a first Artificial Neural Network (ANN) and instructions executable by the Deep Learning Accelerator to perform matrix computation of the first ANN. The Deep Learning Accelerator may execute the instructions to generate an output of the first ANN responsive to the input data; and the at least one interface is configured to transmit the output to the server system over the computer network as an input to a second ANN in the server system. | 2022-02-10 |
20220043697 | SYSTEMS AND METHODS FOR ENABLING INTERNAL ACCELERATOR SUBSYSTEM FOR DATA ANALYTICS VIA MANAGEMENT CONTROLLER TELEMETRY DATA - An information handling system may include a processor, a management controller configured for out-of-band management of the information handling system, and an internal accelerator communicatively coupled to the management controller and configured to execute an analytics engine for receiving events from the management controller and analyzing the events to perform one or more tasks based on the events. | 2022-02-10 |
20220043698 | CONTENT REFERENCE MANAGEMENT SYSTEM - A first disclosed method involves storing, by a computing system, data indicating that first content accessible by an endpoint device includes a reference to second content of a first application hosted on a remote computing device. The computing system receives an indication of an update to the second content, and sends a notification to cause the endpoint device to output an indicator of the update to the second content. A second disclosed method involves sending, by a first computing system to a second computing system, a request to register that first content accessible by an endpoint device includes a reference to second content of an application hosted on a remote computing device. The first computing system receives a notification of an update to the second content from the second computing system, and causes the endpoint device to output an indicator of the update in response to receiving the notification. | 2022-02-10 |
20220043699 | INTELLIGENT SCALING IN MICROSERVICE-BASED DISTRIBUTED SYSTEMS - In an approach to intelligent scaling in a cloud platform, an attribute template is stored for one or more target services based on one or more system data. One or more request metrics for each target service is stored, wherein the request metrics are based on an analysis of one or more incoming requests of one or more service call chains. Responsive to receiving a request for a target service in a service call chain, the target service is scaled based on the attribute template of the target service and the request metrics of the target service. | 2022-02-10 |
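A minimal sketch of template-driven scaling, assuming a hypothetical attribute template holding per-replica capacity and replica bounds, and a request metric of requests per second (field names are illustrative, not from the filing):

```python
import math

def scale_target(template, metrics):
    """Pick a replica count for a target service from its attribute
    template and its observed request metrics."""
    needed = math.ceil(metrics["requests_per_sec"]
                       / template["capacity_per_replica"])
    # Clamp to the bounds declared in the attribute template.
    return max(template["min_replicas"],
               min(needed, template["max_replicas"]))
```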
20220043700 | Stack Safety for Independently Defined Operations - Systems and methods are disclosed for swapping or changing between stacks associated with respective applications when one application calls the other. | 2022-02-10 |
20220043701 | MOBILE APPLICATION SERVICE ENGINE (MASE) - Third party applications are deployed as “containerized applications” on one or more wireless AP devices. The containerized applications are confined to a pre-allocated segregated disk space within a file system of a wireless AP device. The containerized applications have access to standard Linux services as well as to advanced features provided by the AP. | 2022-02-10 |
20220043702 | METHOD FOR DETERMINISTICALLY REPORTING CAUSE AND EFFECT IN SOFTWARE SYSTEMS - Negative outcomes experienced by a user in a live software system can be automatically, deterministically, and contemporaneously traced back to the root conditions that caused those outcomes, by generating causal event entries in a database for those root conditions as they occur, assigning unique causal IDs to those causal events, and propagating causal IDs alongside the software system state changes that are known to produce negative outcomes and which are effected by those root conditions. By selectively passing causal IDs based on the input and output values of the operation, subsequent causal events and negative outcomes can be linked accurately to causal IDs of parent events, making it simpler to trace negative outcomes for the user back to their root cause events in a software system. | 2022-02-10 |
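The causal-ID mechanism can be sketched as follows, with an in-memory list standing in for the causal-event database and all function names hypothetical: root conditions get causal IDs, downstream state changes carry a parent link, and a negative outcome is traced by walking the parent links back to the root.

```python
import itertools

_causal_ids = itertools.count(1)
causal_log = []   # stands in for the causal-event database table

def record_cause(description):
    """Record a root condition as a causal event; return its unique causal ID."""
    cid = next(_causal_ids)
    causal_log.append({"id": cid, "cause": description, "parent": None})
    return cid

def propagate(parent_id, description):
    """Record a downstream state change or negative outcome, linked to the
    causal ID of the event that produced it."""
    cid = next(_causal_ids)
    causal_log.append({"id": cid, "cause": description, "parent": parent_id})
    return cid

def trace(cid):
    """Walk parent links from an outcome back to its root cause."""
    by_id = {e["id"]: e for e in causal_log}
    chain = []
    while cid is not None:
        event = by_id[cid]
        chain.append(event["cause"])
        cid = event["parent"]
    return chain
```

The selective passing described in the abstract would add a predicate on operation inputs/outputs before calling `propagate`, so IDs flow only along paths known to produce negative outcomes.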
20220043703 | METHOD AND APPARATUS FOR INTELLIGENT OPERATION MANAGEMENT OF INFRASTRUCTURE - An intelligent operation management apparatus for infrastructure may include a memory and a processor. Herein, the processor may be configured to: collect data by monitoring a resource of an operation target, perform anomaly detection analysis on the collected data using various graph-based visualization methods, perform an abnormality prediction analysis on the collected data, and perform pre-maintenance intelligent management based on a result of the anomaly detection analysis and a result of the abnormality prediction analysis. Such an apparatus and method for intelligent operation management of infrastructure can be expected to reduce operating expenses and to continuously maintain quality of service (QoS). | 2022-02-10 |
20220043704 | Increasing Or Decreasing The Amount Of Log Data Generated Based On Performance Characteristics Of A Device - Dynamically adjusting an amount of log data generated for a storage system that includes a plurality of storage devices, including: setting, for a component within the storage system, a logging level for the component, the logging level specifying the extent to which log data should be generated for a particular component; determining, in dependence upon one or more measured operating characteristics of the storage system, whether the logging level for the component should be changed; and responsive to determining that the logging level for the component should be changed, changing the logging level associated with the component. | 2022-02-10 |
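One plausible reading of the level-adjustment step, sketched in Python with CPU load and I/O latency as the assumed measured operating characteristics (the thresholds and level names are illustrative): reduce verbosity when the device is under pressure, restore it when the device is healthy.

```python
LEVELS = ["ERROR", "WARN", "INFO", "DEBUG"]   # increasing verbosity

def adjust_logging_level(current, cpu_load, io_latency_ms,
                         load_limit=0.85, latency_limit=50.0):
    """Decide whether a component's logging level should change based on
    measured operating characteristics of the storage system."""
    i = LEVELS.index(current)
    if cpu_load > load_limit or io_latency_ms > latency_limit:
        return LEVELS[max(i - 1, 0)]              # generate less log data
    return LEVELS[min(i + 1, len(LEVELS) - 1)]    # generate more log data
```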
20220043705 | STORAGE CIRCUIT WITH HARDWARE READ ACCESS - A method for configuring a storage circuit, including: writing data via an input line into the storage circuit by a software write access; writing a bit-wise inverted form of the data via the input line into the storage circuit by a subsequent software write access; and generating an error signal if a comparison based on the written data and the written bit-wise inverted form of the data indicates a storage circuit configuration error, wherein the storage circuit permits hardware read access and lacks software read access. | 2022-02-10 |
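The double-write consistency check can be sketched in Python, though the subject matter is a hardware storage circuit; the 32-bit register width and the software interface below are assumptions for illustration only:

```python
MASK = 0xFFFFFFFF  # assumed 32-bit configuration register

class ConfigRegister:
    """Sketch of the check: software writes the data, then its bit-wise
    inverse; the circuit flags a configuration error if the second write
    is not the exact complement of the first. There is no software read
    access; only the error signal is observable."""

    def __init__(self):
        self._data = None     # hardware-internal latch, invisible to software
        self.error = False    # the generated error signal

    def write(self, value):
        if self._data is None:
            self._data = value & MASK      # first write: the data itself
        else:
            inverted = value & MASK        # second write: expected complement
            self.error = inverted != (~self._data & MASK)
            self._data = None              # ready for the next write pair
```

The scheme catches stuck bits and corrupted writes: any bit that reads back the same in both writes fails the complement comparison.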
20220043706 | PRIORITIZATION OF ERROR CONTROL OPERATIONS AT A MEMORY SUB-SYSTEM - A failure of a first memory access operation is detected at a memory device. Responsive to the detection, a first error control operation and a second error control operation are performed. In response to a determination that the second error control operation has remedied the failed first memory access operation, the second error control operation is associated with a second priority which is higher than a first priority associated with the first error control operation. | 2022-02-10 |
20220043707 | ENRICHED HIGH FIDELITY METRICS - A system including a data repository storing metrics describing operational behavior of software programs executing in an enterprise system. The system also includes an application programming interface (API) gateway configured to receive the metrics. The system also includes an ingestion layer configured to ingest the metrics to form ingested metrics. The system also includes a tumbling window processor configured to process the ingested metrics into heat maps, sort the heat maps into time slices, and populate the time slices with the ingested metrics. | 2022-02-10 |
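A tumbling window simply sorts timestamped values into fixed, non-overlapping time slices; a minimal sketch, assuming the metrics arrive as (timestamp, value) pairs:

```python
def tumbling_windows(metrics, window_s):
    """Assign each (timestamp, value) metric to the fixed-width,
    non-overlapping time slice its timestamp falls in."""
    slices = {}
    for ts, value in metrics:
        key = int(ts // window_s) * window_s   # start time of the slice
        slices.setdefault(key, []).append(value)
    return slices
```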
20220043708 | METHOD AND SYSTEM FOR VALIDATING A MEMORY DEVICE - The present invention relates to a method of validating a memory device. The method includes validating a second memory device based on one or more first microcode instructions stored in a validated predetermined part of a first memory device to detect the operational status of the second memory device. Further, the method includes receiving one or more second microcode instructions upon validating the second memory device. Finally, the method includes validating the first memory device based on the one or more second microcode instructions stored in the second memory device to detect the operational status of the first memory device. | 2022-02-10 |
20220043709 | DYNAMIC REBUILD CAPABILITY IN REDUNDANT ARRAY OF INDEPENDENT DISKS (RAID) ARRAYS USING COMPRESSING DRIVES - Method and system are provided for dynamic rebuild capability in redundant array of independent disks (RAID) arrays using compressing drives. The method includes providing an array including a physical rebuild area for the multiple drives of the array and dynamically adjusting a number of allocated rebuild zones available within the rebuild area, wherein each allocated rebuild zone has capacity to store a drive rebuild based on a current physical usage of the multiple drives of the array. | 2022-02-10 |
20220043710 | DATA STORAGE APPARATUS AND OPERATING METHOD THEREOF - A data storage apparatus is provided to include a storage including a main data region for storing first data and a spare region for storing second data indicating attributes of the first data; and a controller in communication with a host and configured to control the storage based on a request from the host, wherein the controller comprises: a first error check and correction (ECC) engine configured to perform an error correction on the first data stored in the main data region of the storage; and a second ECC engine configured to perform an error correction on the second data stored in the spare region of the storage. | 2022-02-10 |
20220043711 | DYNAMICALLY SELECTING OPTIMAL INSTANCE TYPE FOR DISASTER RECOVERY IN THE CLOUD - The selection of an optimal restore instance type based on a customer's speed/cost tradeoff resolution is disclosed. The anticipated time and cost to complete the recovery may be extrapolated from a baseline or test recovery and/or from actual recovery times and costs. An automated restore activity may be performed on a baseline test VM of a predefined size using different restore instance types. An optimal restore instance type is used to form worker VMs that perform the recovery operations. | 2022-02-10 |
20220043712 | LIGHTWEIGHT METADATA HANDLING FOR FILE INDEXING AND LIVE BROWSE OF BACKUP COPIES - The disclosed enhancements optimize the use of the live browse cache and pseudo-disk storage areas, improving metadata handling so that it can be used more effectively to speed up live browse and file indexing of backup copies in a data storage management system. The enhancements operate granularly to identify within each extent being backed up smaller sectors that comprise metadata. The disclosed approach pre-fetches the metadata of the backup copy before allowing the file scan of the file indexing and/or the live browse operation to proceed. The backup operation, the file indexing operation, and the live browse operation are enhanced to handle the more granular metadata sectors without changing the granularity of the full extents generated and stored in the backup. | 2022-02-10 |
20220043713 | Meta Data Protection against Unexpected Power Loss in a Memory System - A memory system having a set of non-volatile media, a volatile memory, a buffer memory, and a controller configured to process requests from a host system to store data in the non-volatile media or retrieve data from the non-volatile media. The buffer memory is capable of holding data for at least a predetermined period of time after the volatile memory loses data during an event of power outage in the memory system. A power manager monitors a power supply of the memory system to detect an onset of power outage and, in response to the onset of power outage, causes the controller to copy meta data in the volatile memory to the buffer memory. | 2022-02-10 |
20220043714 | DATA PROTECTION METHOD, ELECTRONIC DEVICE AND COMPUTER PROGRAM PRODUCT - Embodiments of the present disclosure provide a data protection method, an electronic device, and a computer program product. The method includes determining an object feature for each protection object in a set of protection objects that generate protected data, the set of protection objects including at least one protection object configured with a predetermined data protection strategy. The method further includes determining a set of candidate objects belonging to the same class as the at least one protection object from the set of protection objects according to the determined object features. The method further includes configuring the predetermined data protection strategy to at least one candidate object in the set of candidate objects. | 2022-02-10 |
20220043715 | RETENTION TIME BASED CONSISTENT HASH RING - A retention-based consistent hash ring process defines each file name in the system to include its expiration date (or time) as a prefix or suffix that is stored and indexed as metadata. The process uses a virtual node to represent adjacent expiration days to create virtual nodes based on individual days of the week. Each physical node contains the same number of labeled virtual nodes, and the consistent hash ring process is used to move files with the same expiration day to different physical nodes by looking for next labeled virtual nodes on the hash ring. This provides a way to locate the virtual node storage location by specifying a file's expiration date as part of the key used in the hash ring process, and distributes files that may otherwise be assigned to the same physical node through a backup policy. | 2022-02-10 |
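A sketch of the retention-based ring, with day-of-week labels on the virtual nodes and the file's expiration day folded into the hash key (the node naming and choice of hash are illustrative, not the patent's exact scheme): locating a file means walking clockwise from its key hash to the next virtual node labeled with the matching expiration day.

```python
import bisect
import hashlib

class RetentionHashRing:
    """Consistent hash ring whose virtual nodes are labeled by day of
    week, so files with the same expiration day are grouped while still
    being spread across physical nodes by the hash."""

    DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

    def __init__(self, physical_nodes):
        # Every physical node carries one labeled virtual node per day.
        self.ring = sorted(
            (self._hash(f"{node}:{day}"), day, node)
            for node in physical_nodes
            for day in self.DAYS
        )

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def locate(self, filename, expiration_day):
        """Clockwise from hash(expiration_day:filename), return the
        physical node of the next virtual node with a matching label."""
        h = self._hash(f"{expiration_day}:{filename}")
        i = bisect.bisect(self.ring, (h,))
        for k in range(len(self.ring)):
            _, day, node = self.ring[(i + k) % len(self.ring)]
            if day == expiration_day:
                return node
        raise KeyError(expiration_day)
```

Lookups are deterministic, so the same filename and expiration day always resolve to the same physical node, which is the property the expiration-based placement relies on.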