2nd week of 2021 patent application highlights part 45 |
Patent application number | Title | Published |
20210011678 | MULTI-DISPLAY BASED DEVICE - An electronic device is provided that includes a first display and a second display. The electronic device also includes a processor configured to allocate a first set of resources to the first display and a second set of resources to the second display. The first set of resources is different from the second set of resources. Each of the first set of resources and the second set of resources includes one or more of at least one available hardware resource and at least one available software resource. | 2021-01-14 |
20210011679 | DISPLAY SYSTEM AND PROGRAM - The present invention provides a display system configured to control a plurality of display devices by combining a plurality of information processing devices and the display devices, even if the display system includes a display device incapable of communicating a switch signal with all of the information processing devices. | 2021-01-14 |
20210011680 | Screen-Projection Emitter, Screen-Projection Emission System and Screen-Projection System - The present disclosure relates to a screen-projection emitter, a screen-projection emission system and a screen-projection system. The screen-projection emitter includes a memory which is configured to store a screen-projection data processing software instruction set. A computer device is configured to receive the screen-projection data processing software instruction set from the memory, run the screen-projection data processing software instruction set locally to process data of content to be projected, and provide the processed data of the content to be projected to the screen-projection emitter. | 2021-01-14 |
20210011681 | DISPLAY DEVICE - A display device including a display panel including a first substrate and a pixel array layer disposed on a first surface of the first substrate, a first sound generation device disposed on a second surface of the first substrate opposing the first surface, and configured to vibrate the display panel and output first sound, and a circuit board disposed on the second surface of the first substrate, in which the first sound generation device includes a bobbin fixed on one surface of the first substrate, a voice coil surrounding a side surface of the bobbin, a magnet disposed on the bobbin and spaced apart from the bobbin, and a plate disposed on the magnet and fixed to the circuit board. | 2021-01-14 |
20210011682 | SYSTEMS AND METHODS FOR PROVIDING AUDIO TO A USER BASED ON GAZE INPUT - According to the invention, a method for providing audio to a user is disclosed. The method may include determining, with an eye tracking device, a gaze point of a user on a display. The method may also include causing, with a computer system, an audio device to produce audio to the user, where content of the audio may be based at least in part on the gaze point of the user on the display. | 2021-01-14 |
20210011683 | VEHICLE CONTROL DEVICE AND VEHICLE TRAVEL CONTROL SYSTEM - A vehicle control device including: a remote operation signal reception section configured to be input with a remote operation signal based on an operation by an operator at a command center external to a vehicle; a control section configured to control the vehicle in a remote operation mode, based on the remote operation signal output from the command center, in a state in which remote operation of the vehicle from the command center side has been enabled; and a speech communication device configured to enable conversation between an occupant of the vehicle and an operator at the command center. | 2021-01-14 |
20210011684 | DYNAMIC AUGMENTED REALITY INTERFACE CREATION - A method for dynamic augmented reality interface creation is provided. The method detects an utterance from a user of an augmented reality device and determines an ambiguity level of the utterance. The method generates a set of visual artifacts based on the utterance and the ambiguity level. The visual artifacts are generated within an augmented reality user interface, with each visual artifact corresponding to a selectable function. The method detects an interaction with a first visual artifact corresponding to a first selectable function. The method modifies the augmented reality user interface in response to the interaction with the first visual artifact. | 2021-01-14 |
20210011685 | System and Method for Storing Data Records - The present disclosure discloses systems and methods for storing data records of a table or any data collection in a database system. The records are stored in a plurality of data files on a computer server. The system considers both the sequential I/O and random I/O options when writing data records to a disk, and finds the best approach to writing data to the disk. Under certain conditions, the method analyzes and recognizes that sequential I/O may perform better. Under other conditions, the method analyzes and recognizes that random I/O may perform better. Under still other conditions, the method analyzes and recognizes that a combination of sequential I/O and random I/O may perform better. The method chooses the option that has the minimum cost for storing data records in a disk file. In doing so, the method considers and applies system constraints, such as memory resources and I/O latency. | 2021-01-14 |
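The minimum-cost write-path selection described in application 20210011685 can be sketched as a simple cost comparison. The cost constants, the rewrite-factor parameter, and the two-term cost model below are illustrative assumptions, not details taken from the filing.

```python
# Hypothetical cost model for choosing between sequential and random I/O
# when writing data records, in the spirit of application 20210011685.

SEEK_COST = 10.0      # assumed fixed cost of one random seek
TRANSFER_COST = 1.0   # assumed cost per record transferred

def sequential_cost(n_records, rewrite_factor):
    # Sequential I/O pays one seek but may rewrite a larger region
    # (rewrite_factor >= 1 scales the extra records touched).
    return SEEK_COST + TRANSFER_COST * n_records * rewrite_factor

def random_cost(n_records):
    # Random I/O touches only the target records but seeks for each one.
    return n_records * (SEEK_COST + TRANSFER_COST)

def choose_write_strategy(n_records, rewrite_factor):
    """Return the minimum-cost option: 'sequential' or 'random'."""
    costs = {
        "sequential": sequential_cost(n_records, rewrite_factor),
        "random": random_cost(n_records),
    }
    return min(costs, key=costs.get)
```

Under this toy model, a large batch of contiguous records favors sequential writes, while a few scattered records favor random writes; the filing additionally weighs constraints such as memory resources and I/O latency.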
20210011686 | ARITHMETIC OPERATION DEVICE AND ARITHMETIC OPERATION SYSTEM - Provided is an arithmetic operation device including a multiplying section in which multiplying units are divided and assigned to each of one or more groups such that each group includes one or more of the multiplying units according to a calculation precision mode, and each multiplying unit multiplies together an individual multiplier, which is a digit range of at least a portion of a multiplier for the group, and an individual multiplicand, which is a digit range of at least a portion of a multiplicand for the group, and an adding section in which adding units are divided and assigned to each of one or more groups such that each group includes one or more of the adding units according to the calculation precision mode, and the adding units add together each multiplication result obtained by each multiplying unit and output a product of the multiplier and the multiplicand. | 2021-01-14 |
20210011687 | PRODUCT-SUM CALCULATION DEVICE AND PRODUCT-SUM CALCULATION METHOD - To provide a product-sum calculation device and a product-sum calculation method capable of more efficient operation. A product-sum calculation device includes: a plurality of synapses including a transistor and having a variable resistance value; a plurality of input lines extending in a first direction and configured to propagate an input signal to each of the plurality of synapses; a plurality of output lines extending in a second direction orthogonal to the first direction, and configured to output a product-sum calculation result of the input signal from each of the plurality of synapses; and a charge and discharge control unit configured to control an output state of the product-sum calculation result by controlling a charge and discharge state of the output line on the basis of a polarity of the transistor. | 2021-01-14 |
20210011688 | AUTOMATIC DISCOVERY OF MICROSERVICES FROM MONOLITHIC APPLICATIONS - A method, computer program product, and a system to replace monolithic applications with microservices includes a processor(s) obtaining a requirement for the monolithic application. The processor(s) automatically identifies, based on a sentence comprising the requirement, a given component of the monolithic application, based on analyzing the requirement. The processor(s) determines, based on syntax of the sentence, an initial class and a responsibility for the given component. The processor(s) generates a bounded context for the given component, based on analyzing one or more additional sentences comprising the requirement, to identify additional classes beyond the initial class associated with the responsibility in the requirement. The processor(s) identifies, in a microservices architecture executing in a shared computing environment, one or more microservices within the bounded context. The processor(s) generates a stub for use by the user, via the client, for accessing the one or more identified microservices. | 2021-01-14 |
20210011689 | METHOD AND APPARATUS FOR QUICK PROTOTYPING OF EMBEDDED PERIPHERALS - The disclosure describes methods and apparatus for quickly prototyping of a solution developed using one or more sensing devices (e.g., sensors), functional blocks, algorithm libraries, and customized logic. The methods produce firmware executable by a processor (e.g., a microcontroller) on an embedded device such as a development board, expansion board, or the like. By performing these methods on the apparatus described, a user is able to create a function prototype without having deep knowledge of the particular sensing device or any particular programming language. Prototypes developed as described herein enable the user to rapidly test ideas and develop sensing device proofs-of-concept. The solutions produced by the methods and apparatus improve the functioning of the sensor being prototyped and the operation of the embedded device where the sensor is integrated. | 2021-01-14 |
20210011690 | DESIGN SYSTEM FOR CREATING GRAPHICAL CONTENT - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for obtaining a content theme that includes a set of visual components and accessing control panels of a design system. The control panels are configured to provide control functions for adjusting attributes of the components. The control panels receive a selection of a first component that is linked to at least a second component in the set of visual components. An attribute of the first component is adjusted in response to detecting user interaction with a control panel. The user interaction causes adjustment of an attribute of a second component based on the adjusting of the attribute of the first component because of the second component being linked to the first component. Graphical content is created for output at a display based on the adjusted attributes of the first and second components. | 2021-01-14 |
20210011691 | PROGRAM CREATION ASSISTING SYSTEM, METHOD FOR SAME, AND PROGRAM - A program creation assisting system includes a camera that captures an image of a chip array that includes a special chip indicating a task for programming, a task management table managed in a state where the task and an image of the plurality of chips used for the task are associated with each other, an image processing unit that recognizes an image relating to the chip from the image of the chip array acquired by the camera, and a program creation processing unit | 2021-01-14 |
20210011692 | BACK-END APPLICATION CODE STUB GENERATION FROM A FRONT-END APPLICATION WIREFRAME - As part of identifying a theme corresponding to a wireframe, the wireframe comprising a set of graphical elements is analyzed, the set of graphical elements specifying a graphical representation of a user interface of a front-end application. A similarity measure is computed, the similarity measure quantifying a degree of similarity between the theme and an entry in a feature implementation history stored in a code repository. From the entry, a first feature to be implemented in a back-end application is extracted, the first feature servicing a data request from the front-end application. A source code stub extracted from the code repository, comprising a partial implementation of the first feature in the back-end application, is coupled with a first graphical element in the set of graphical elements in the wireframe. | 2021-01-14 |
20210011693 | BACK-END APPLICATION CODE STUB GENERATION FROM A FRONT-END APPLICATION WIREFRAME - As part of identifying a theme corresponding to a wireframe, the wireframe comprising a set of graphical elements is analyzed, the set of graphical elements specifying a graphical representation of a user interface of a front-end application. A similarity measure is computed, the similarity measure quantifying a degree of similarity between the theme and an entry in a feature implementation history stored in a code repository. From the entry, a first feature to be implemented in a back-end application is extracted, the first feature servicing a data request from the front-end application. A source code stub extracted from the code repository, comprising a partial implementation of the first feature in the back-end application, is coupled with a first graphical element in the set of graphical elements in the wireframe. | 2021-01-14 |
20210011694 | TRANSLATING BETWEEN PROGRAMMING LANGUAGES USING MACHINE LEARNING - Techniques are described herein for translating source code in one programming language to source code in another programming language using machine learning. In various implementations, one or more components of one or more generative adversarial networks, such as a generator machine learning model, may be trained to generate “synthetically-naturalistic” source code that can be used as a translation of source code in an unfamiliar language. In some implementations, a discriminator machine learning model may be employed to aid in training the generator machine learning model, e.g., by being trained to discriminate between human-generated (“genuine”) and machine-generated (“synthetic”) source code. | 2021-01-14 |
20210011695 | TECHNIQUES FOR AUTOMATICALLY DETECTING PROGRAMMING DEFICIENCIES - A quality control (QC) engine analyzes sample code provided by a user and then generates example code that more effectively performs the same or similar operations performed by the sample code. An objective model analyzes the sample code to generate one or more tags indicating the intended objective(s) of the sample code. The quality model analyzes the sample code to generate one or more ratings indicating the degree to which the sample code achieves each intended objective. The performance model analyzes the tags and the ratings and estimates the performance of the sample code when executed in a production environment. The recommendation engine queries a database of code based on the tags, the ratings, and the estimated performance of the sample code to determine example code that achieves the same or similar objective(s) as the sample code, but with at least one of higher ratings and greater performance. | 2021-01-14 |
20210011696 | BLACKBOX MATCHING ENGINE - A method and apparatus are disclosed for enhancing operable functionality of input source code files from a software program by identifying a first code snippet and a first library function which generate similar outputs from a shared input by parsing each and every line of code in a candidate code snippet to generate a templatized code snippet data structure for the first code snippet, and then testing the templatized code snippet data structure against extracted library function information to check for similarity of outputs between the first code snippet and the first library function in response to a shared input so that the developer is presented with a library function recommendation which includes the first code snippet, the first library function, and instructions for replacing the first code snippet with the first library function. | 2021-01-14 |
20210011697 | MULTI-VERSION SHADERS - Described herein are techniques for generating a stitched shader program. The techniques include identifying a set of shader programs to include in the stitched shader program, wherein the set includes at least one multiversion shader program that includes a first version of instructions and a second version of instructions, wherein the first version of instructions uses a first number of resources that is different than a second number of resources used by the second version of instructions. The techniques also include combining the set of shader programs to form the stitched shader program. The techniques further include determining a number of resources for the stitched shader program. The techniques also include based on the determined number of resources, modifying the instructions corresponding to the multiversion shader program to, when executed, execute either the first version of instructions, or the second version of instructions. | 2021-01-14 |
20210011698 | SOFTWARE AUTOMATION DEPLOYMENT AND PERFORMANCE TRACKING - Systems and methods are described herein for provisioning a software automation and tracking the performance of the software automation throughout its lifecycle. A realized benefit of deploying the software automation can be determined and automatically reported according to a schedule. The reports may be provided to specified recipients such as project managers, executive officers, sales and/or vendor relations managers, and the like for analysis and processing by the various parties associated with the operation of the software automation. This all-in-one system provides a platform from which one or more software automation projects may be automatically managed through completion and deployment, improving the efficiency of each project and the management of all deployed software automations for a more cost-effective suite of such programs. | 2021-01-14 |
20210011699 | FACILITATING CLOUD NATIVE EDGE COMPUTING VIA BEHAVIORAL INTELLIGENCE - Behavioral intelligence can be used with cloud native computing to enhance software deployment for various infrastructures by analyzing and deploying software functions according to the various infrastructures. Because different providers can have their own systems and controls for managing their infrastructures, it is costly to deploy software functions that are coupled together. However, if the software functions are disaggregated and translated according to the systems and controls relative to the various infrastructures, then the software functions can be failed and scaled independently of one another, thereby generating efficiencies. | 2021-01-14 |
20210011700 | SYSTEM AND METHOD FOR UPDATING NETWORK COMPUTER SYSTEMS - An update system configured to provide software updates, software patches and/or other data packets to one or more computer systems via a network is disclosed. The update system may interact with a network management system, such as an enterprise management system, to distribute data packets and gather configuration information. The update system may generate and send commands to the network management system. The network management system may carry out the commands to distribute data packets and/or gather configuration information. | 2021-01-14 |
20210011701 | SYSTEM UPDATE DEVICE AND SYSTEM UPDATE METHOD - A system update device | 2021-01-14 |
20210011702 | SYSTEMS AND METHODS FOR UPDATING TELEVISION RECEIVING DEVICES - Systems and methods for updating television receiving devices (such as cable and satellite set-top boxes) include functionality that pre-downloads software or firmware updates for the receiving device on a mobile device, such as a smartphone, of a television service provider technician. In order to communicate the updates to the receiving device from the mobile device, during initial installation of the set-top box at the customer premises, the technician connects his or her mobile device to the same input of the set-top box that normally receives the television programming and remote software or firmware updates from the television service provider. The mobile device may include or be coupled to an adapter that adapts a signal and hardware interface from an output interface of the mobile device to a signal and hardware interface compatible with the input interface of the set-top box. | 2021-01-14 |
20210011703 | System for Improved Evaluation of Semiconductor Hardware and Corresponding Method - A system and method for improved evaluation of semiconductor hardware is provided. The system comprises a firmware repository server, which firmware repository server comprises a plurality of firmware packages for the one or more evaluation hardware boards. The firmware repository server is further configured to: receive a firmware request for a user evaluation hardware board from a first of the client devices, search the plurality of firmware packages for compatible firmware packages for the user evaluation hardware board, generate a catalog of the compatible firmware packages for the user evaluation hardware board, transmit the catalog to the first client device, receive a request for a user selected firmware package from the catalog of compatible firmware packages, and to transmit firmware of the user selected firmware package to the client device for installation on the user evaluation hardware board. | 2021-01-14 |
20210011704 | PROGRESS MANAGEMENT SYSTEM, PROGRESS MANAGEMENT METHOD, AND INFORMATION PROCESSING APPARATUS - A progress management system in which a plurality of progress management terminals that execute browser software and an information processing apparatus communicate with each other. One of the progress management terminals sends an issue whose progress has been updated to the information processing apparatus, and the information processing apparatus, when the issue whose progress has been updated is received, reflects the updated issue on other progress management terminals that display the same screen as that of the one progress management terminal using bidirectional communications, and the browser software of the other progress management terminals automatically updates that same screen. | 2021-01-14 |
20210011705 | A METHOD OF AND DEVICES FOR PERFORMING AN OVER-THE-AIR, OTA, UPGRADE IN A NETWORK OF COMMUNICATIVELY INTERCONNECTED DEVICES | 2021-01-14 |
20210011706 | MEMORY DEVICE FIRMWARE UPDATE AND ACTIVATION WITHOUT MEMORY ACCESS QUIESCENCE - Examples include updating firmware for a persistent memory module in a computing system during runtime. Examples include copying a new version of persistent memory module firmware into an available area of random-access memory (RAM) in the persistent memory module, and transferring processing of a current version of persistent memory module firmware to the new version of persistent memory module firmware during runtime of the computing system, without a reset of the computing system and without quiesce of access to persistent memory media in the persistent memory module, while continuing to perform critical event handling by the current version of persistent memory module firmware. Examples further include initializing the new version of persistent memory module firmware; and transferring processing of critical event handling from the current version of persistent memory module firmware to the new version of persistent memory module firmware when initializing the new version of persistent memory module firmware is completed. | 2021-01-14 |
20210011707 | METHOD AND SYSTEM FOR A CLIENT TO SERVER DEPLOYMENT VIA AN ONLINE DISTRIBUTION PLATFORM - An apparatus and a method for a client to server deployment via an online distribution platform can include a mechanism to update at least part of a system software or server-side software via a parallel client software update. Online distribution platforms such as mobile application stores can be utilized in embodiments of the apparatus and method to provide not only the client update, but also the system software update in the underlying system (e.g. server-side version). | 2021-01-14 |
20210011708 | DEVICE-DRIVEN AUTO-RECOVERY USING MULTIPLE RECOVERY SOURCES - Examples for device-driven auto-recovery using multiple recovery sources are disclosed herein. At least one storage device or storage disk includes instructions that, when executed, cause at least one processor to at least detect a flaw in a first configuration of a program to be installed on a programmable device, the first configuration recorded on a first chain of a distributed ledger of a blockchain; correct the flaw in the first configuration to generate a corrected configuration; commit the corrected configuration to the distributed ledger, the corrected configuration to create a second chain of the distributed ledger; detect an update of the first configuration to a first updated configuration and an update to the corrected configuration to an updated corrected configuration; and prevent the first updated configuration from being installed on the programmable device by replacing the first updated configuration with the updated corrected configuration on the second chain. | 2021-01-14 |
20210011709 | PROGRAM UPDATE SYSTEM, PROGRAM UPDATE METHOD, AND COMPUTER PROGRAM - A program update system includes an in-vehicle communication apparatus connected to an in-vehicle control apparatus including a control program for controlling an operation of equipment mounted in a vehicle, and a mobile device that can communicate with the in-vehicle communication apparatus, and transmits, to the in-vehicle communication apparatus, update data for the control program obtained from an external server, the control program being updated as a result of the in-vehicle communication apparatus transmitting, to the in-vehicle control apparatus, the update data received from the mobile device. The in-vehicle communication apparatus includes an obtaining unit that obtains update information indicating an update state of the control program, and an in-vehicle transmission unit that transmits the obtained update information to the mobile device. The mobile device receives the update information transmitted from the in-vehicle transmission unit and transmits the received update information to the external server. | 2021-01-14 |
20210011710 | ELECTRONIC DEVICE MANAGEMENT - According to an example aspect, there is provided a method comprising: generating a smart contract with information of a number of targeted devices to perform an update, providing the smart contract for a distributed ledger for the number of targeted devices, receiving an indication to the distributed ledger on readiness to perform the update by the number of targeted devices, and receiving another indication to the distributed ledger on the performance of the update by the number of targeted devices. | 2021-01-14 |
20210011711 | CONTROL DEVICE, CONTROL METHOD, AND COMPUTER PROGRAM - A control device according to this disclosure includes: a communication unit configured to communicate with an on-vehicle control device via an in-vehicle communication line; and a control unit configured to control the communication unit. The control unit executes: an acquisition process of acquiring a first time and a second time described below; and a determination process of determining, based on a result of comparison between the first time and the second time that have been acquired, whether or not rollback to a control program before update is necessary in the on-vehicle control device that is updating the control program. | 2021-01-14 |
20210011712 | USER COMPETENCY BASED CHANGE CONTROL - A change to a collaborative data repository made by a developer is detected. Using an analysis of the change, a change score corresponding to the change is computed, wherein the analysis comprises determining a complexity score of the change, a writing quality score of the change, a value score of the change, and a criticality score of the change. Using an analysis of the developer, a first developer score is computed, wherein the analysis comprises determining a role score of the developer and a history score corresponding to a previous change of the developer. Based on the change score and the first developer score, a restriction on implementing the first change is enforced. A result of the change and the restriction is detected. Based on the result, the change score, and the first developer score, a second developer score is generated. | 2021-01-14 |
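The scoring flow in application 20210011712 can be illustrated with a small sketch. The weights, thresholds, and restriction labels below are hypothetical; the filing names the sub-scores (complexity, writing quality, value, criticality, role, history) but does not specify how they are combined.

```python
# Hedged sketch of competency-based change control, loosely following
# application 20210011712. All numeric weights and cutoffs are assumptions.

def change_score(complexity, writing_quality, value, criticality):
    # Combine the four per-change sub-scores (each assumed in [0, 1])
    # with equal, illustrative weights.
    return 0.25 * (complexity + writing_quality + value + criticality)

def developer_score(role_score, history_score):
    # Combine role and change-history sub-scores (each assumed in [0, 1]).
    return 0.5 * role_score + 0.5 * history_score

def restriction(change, developer):
    # Enforce a stricter restriction when a risky change meets a
    # low-scoring developer; the thresholds are illustrative.
    if change > 0.7 and developer < 0.4:
        return "require-review"
    return "allow"
```

A real system would then feed the outcome of the restricted change back into a second developer score, as the abstract describes.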
20210011713 | DEFECT DESCRIPTION GENERATION FOR A SOFTWARE PRODUCT - The present invention provides a method, computer system and computer program product for generating a defect description. According to the method, one or more keywords from a user for depicting a defect encountered when using a software product are received. A search for one or more terms matching at least one of the keywords and a path corresponding to the one or more terms in an operation map is conducted, wherein the operation map includes terms and paths describing all possible operations of a software product based on a user interface of the software product. And in response to the one or more terms matching the at least one of the keywords and a corresponding path are searched out, a defect description based on the one or more matched terms, the corresponding path and the received keywords is generated. | 2021-01-14 |
20210011714 | SYSTEM, METHOD AND COMPUTER-ACCESSIBLE MEDIUM FOR PREDICTING A TEAM'S PULL REQUEST COMMENTS TO ADDRESS PROGRAM CODE CONCERNS BEFORE SUBMISSION - An exemplary system, method, and computer-accessible medium for providing feedback on a section(s) of computer code, can include receiving the section(s) of computer code, analyzing a portion(s) of the section(s), and providing the feedback on the analyzed portion using a machine learning procedure. The machine learning procedure can be a recurrent neural network. The portion(s) can be automatically identified (e.g., using a computer). The portion can be identified based on a label(s) associated with the portion(s). The label(s) can be located in a comments section associated with the computer code. The portion(s) can be a topic model associated with the computer code. The feedback can include an approval or a rejection of the portion(s). Semantics of the portion(s) can be identified, and feedback can be provided based on the semantics. | 2021-01-14 |
20210011715 | BIT STRING LOOKUP DATA STRUCTURE - Systems, apparatuses, and methods related to bit string operations using a computing tile are described. An example apparatus includes a computing device (or “tile”) that includes a processing unit and a memory resource configured as a cache for the processing unit. A data structure can be coupled to the computing device. The data structure can be configured to receive a bit string that represents a result of an arithmetic operation, a logical operation, or both, and store the bit string that represents the result of the arithmetic operation, the logical operation, or both. The bit string can be formatted in a format different than a floating-point format. | 2021-01-14 |
20210011716 | PROCESSING CIRCUIT, INFORMATION PROCESSING APPARATUS, AND INFORMATION PROCESSING METHOD - An information processing circuit includes an accelerator function unit (AFU), an FPGA interface unit (FIU), a tag check unit, and an output control unit. The AFU sequentially obtains write control instructions for a plurality of kinds of data, including an output waiting instruction that stops output of a subsequent instruction. The FIU sequentially outputs the write control instructions via a first path or a second path. The tag check unit receives responses to the write control instructions output from the FIU. The output control unit selects one of the first path and the second path based on the storage address of each write control instruction, determines whether write control instructions need to be mixed, mixes them as needed, and causes the FIU to output the result. | 2021-01-14 |
20210011717 | Verified Stack Trace Generation And Accelerated Stack-Based Analysis With Shadow Stacks - A verified stack trace can be generated by utilizing information contained in a shadow stack, such as a hardware protected duplicate stack implemented for malware prevention and computer security. The shadow stack contains return addresses which are obtainable without requiring an unwinding of the traditional call stack. As such, triaging based on return address information can be performed more quickly and more efficiently, and with a reduced utilization of processing resources. Additionally, the generation of a verified stack trace can be performed, with such a verified stack trace containing return addresses that are known to be correct and not corrupted. The return addresses can either be read from the traditional call stack, or derived therefrom, and then verified by comparison to corresponding return addresses from the shadow stack, or they can be read directly from the shadow stack. | 2021-01-14 |
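The verification step in application 20210011717 amounts to cross-checking each return address from the traditional call stack against the hardware-protected shadow stack. The sketch below models both stacks as plain Python lists of addresses, which is an assumption made only for illustration.

```python
# Minimal sketch of shadow-stack verification in the spirit of
# application 20210011717. Real shadow stacks are hardware-protected;
# here both stacks are ordinary lists of integer return addresses.

def verified_stack_trace(call_stack, shadow_stack):
    """Return the verified trace, or raise if a return address on the
    call stack does not match its shadow-stack counterpart."""
    trace = []
    for call_addr, shadow_addr in zip(call_stack, shadow_stack):
        if call_addr != shadow_addr:
            raise ValueError(f"corrupted return address {call_addr:#x}")
        trace.append(call_addr)
    return trace
```

As the abstract notes, a faster variant can skip the comparison entirely and read return addresses directly from the shadow stack.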
20210011718 | AUTOMATION OF SEQUENCES OF ACTIONS - Traditional manual macro-recorders may not work under a dynamically changing operating environment. Technical solutions are disclosed to automatically generate macros to increase productivity. After a new sequence of actions is detected, the system will prompt the user with the information of an existing macro if the existing macro contains a similar sequence. Otherwise, the system will attempt to automatically generate a new macro based on the sequence of actions. | 2021-01-14 |
20210011719 | SORT AND MERGE INSTRUCTION FOR A GENERAL-PURPOSE PROCESSOR - A Sort Lists instruction is provided to perform a sort and/or a merge operation. The instruction is an architected machine instruction of an instruction set architecture and is executed by a general-purpose processor of the computing environment. The executing includes sorting a plurality of input lists to obtain one or more sorted output lists, which are output. | 2021-01-14 |
20210011720 | VECTOR SEND OPERATION FOR MESSAGE-BASED COMMUNICATION - Methods and systems for conducting vector send operations are provided. The processor of a sender node receives a request to perform a collective send operation (e.g., MPI_Broadcast) from a user application, requesting that a copy of data in one or more send buffers be sent to each of a plurality of destinations in a destination vector. The processor invokes a vector send operation from a software communications library, placing a remote enqueue atomic send command for each destination node of the destination vector in an entry of a transmit data mover (XDM) command queue in a single call. The processor executes all of the commands in the XDM command queue and writes the data in the one or more send buffers into each receive queue of each destination identified in the destination vector. | 2021-01-14 |
20210011721 | PREFETCHING WORKLOADS WITH DEPENDENT POINTERS - A set of dependence relationships in a set of program instructions is detected by a processor. The set of dependence relationships comprises a first load instruction to load a first data object and a second load instruction to load a second data object from a second address that is provided by address data within the first data object. The processor identifies a number of dependence instances in the set of dependence relationships and determines that the number is over a pattern threshold. The processor sends an enhanced load request to a memory controller. The enhanced load request comprises instructions to load the first data object from a first address on a physical page, locate address data in the first data object based on a memory offset, load the second data object from a second address in the address data, and transmit the first and second data objects to the processor. | 2021-01-14 |
20210011722 | TARGET INJECTION SAFE METHOD FOR INLINING INSTANCE-DEPENDENT CALLS - A method for redirecting indirect calls to direct calls on a per-process basis includes accessing a memory code region of an operating system kernel that has a different mapping for each of one or more user processes running on the operating system kernel. The memory code region stores a first trampoline that refers directly to a second trampoline, which is an inline or outline trampoline that is correlated with a particular user process. Executing the first trampoline invokes the second trampoline, as a result of which the indirect calls are redirected to direct calls. | 2021-01-14 |
20210011723 | APPARATUS AND METHOD AND COMPUTER PROGRAM PRODUCT FOR EXECUTING HOST INPUT-OUTPUT COMMANDS - The invention introduces a method for executing host input-output (IO) commands, performed by a processing unit of a device side when loading and executing program code of a first layer, at least including: receiving a slot bit table (SBT) including an entry from a second layer, where each entry is associated with an IO operation; receiving a plurality of addresses of callback functions from the second layer; and repeatedly executing a loop until IO operations of the SBT have been processed completely, and, in each iteration of the loop, calling the callback functions implemented in the second layer for a write operation or a read operation of the SBT to drive the frontend interface through the second layer. | 2021-01-14 |
20210011724 | NAND TYPE LOOKUP-TABLE HARDWARE SEARCH ENGINE - A lookup-table type TL-TCAM hardware search engine includes an SL decoder and a TL-TCAM array. The data stored in the TL-TCAM hardware search engine is obtained by performing a lookup-table operation on the corresponding TCAM hardware search engine. The SL decoder decodes the search word and sends it to the TL-TCAM hardware search engine array; the decoding converts a search word SL corresponding to data in a TCAM hardware search engine table into a search word LSL corresponding to TL-TCAM hardware search engine table data. The effect is that the TCAM adds a decoder which, cooperating with the decoder and by the lookup-table method, converts the TCAM table data to a new circuit unit that can be adapted to the added search line. | 2021-01-14 |
20210011725 | MEMORY CONTROLLER AND MEMORY SYSTEM INCLUDING THE SAME - Embodiments of the present invention include a memory controller including a buffer memory configured to store program data, an instruction set configurator configured to configure an instruction set describing a procedure for programming program data stored in the buffer memory to target memory blocks, an instruction set performer configured to sequentially perform instructions in the instruction set and generate an interrupt at a time of completion of performance of a last instruction among the instructions, and a central processing unit configured to erase the program data stored in the buffer memory when the interrupt is received from the instruction set performer. The instruction set configurator may configure the instruction set differently according to whether a non-interleaving block group exists among the target memory blocks. | 2021-01-14 |
20210011726 | METHODS AND APPARATUS TO DYNAMICALLY ENABLE AND/OR DISABLE PREFETCHERS - Methods, apparatus, and articles of manufacture to dynamically enable and/or disable prefetchers are disclosed. An example apparatus includes an interface to access telemetry data, the telemetry data corresponding to a counter of a core in a central processing unit, the counter corresponding to a first phase of a workload executed at the central processing unit; a prefetcher state selector to select a prefetcher state for a subsequent phase based on the telemetry data; and the interface to instruct the core in the central processing unit to operate in the subsequent phase according to the prefetcher state. | 2021-01-14 |
20210011727 | STM32 LOWPOWER SMART CACHE PREFETCH - In an embodiment a method for operating an integrated circuit includes sequentially requesting, by a processor of an integrated circuit, different instruction lines; determining, by a first comparator of the integrated circuit, while the processor processes a current instruction line supplied in response to a corresponding request, whether or not at least one of the instructions of the current instruction line is a branch instruction by comparing the at least one of the instructions to reference instructions; executing, by the processor, all instructions of the current instruction line before executing a next instruction line when the at least one instruction is a branch instruction from a program memory of the integrated circuit; and executing, by the processor, all instructions of the current instruction line before executing a next instruction line from first and second volatile memory of the integrated circuit when the at least one instruction is not a branch instruction. | 2021-01-14 |
20210011728 | TARGET INJECTION SAFE METHOD FOR DYNAMICALLY INLINING BRANCH PREDICTIONS - A method for redirecting an indirect call in an operating system kernel to a direct call is disclosed. The direct calls are contained in trampoline code called an inline jump switch (IJS) or an outline jump switch (OJS). The IJS and OJS can operate in a use mode, redirecting an indirect call to a direct call, a learning and update mode, or a fallback mode. In the learning and update mode, target addresses in a trampoline code template are learned and updated by a jump switch worker thread that periodically runs as a kernel process. When building the kernel binary, a plug-in is integrated into the kernel. The plug-in replaces call sites with a trampoline code template containing a direct call so that the template can be later updated by the jump switch worker thread. | 2021-01-14 |
20210011729 | Managing Commit Order for an External Instruction Relative to Queued Instructions - In a pipeline configured for out-of-order issuing, handling translation of virtual addresses to physical addresses includes: storing translations in a translation lookaside buffer (TLB), and updating at least one entry in the TLB based at least in part on an external instruction received from outside a first processor core. Managing external instructions includes: updating issue status information for each of multiple instructions stored in an instruction queue, processing the issue status information in response to receiving a first external instruction to identify at least two instructions in the instruction queue, including a first queued instruction and a second queued instruction. An instruction for performing an operation associated with the first external instruction is inserted into a stage of the pipeline so that the operation associated with the first external instruction is committed before the first queued instruction is committed and after the second queued instruction is committed. | 2021-01-14 |
20210011730 | SHARED LOCAL MEMORY TILING MECHANISM - An apparatus to facilitate memory tiling is disclosed. The apparatus includes a memory, one or more execution units (EUs) to execute a plurality of processing threads via access to the memory and tiling logic to apply a tiling pattern to memory addresses for data stored in the memory. | 2021-01-14 |
20210011731 | PLC DEVICE - An object is to preferentially execute an instruction with higher priority in a case where the CNC is unable to respond due to unresponsive timing, load on the bus, or the like. A PLC device includes: a special instruction control unit that sets a priority degree indicating a degree of priority for executing predetermined processing to a special instruction for performing the predetermined processing in a control device that controls an industrial machine, and transmits the special instruction in which the priority degree is set to the control device; an instruction storage determining unit that determines whether or not to queue the special instruction according to an operation state of the control device; and an instruction storage unit that sequentially stores the special instruction received, on the basis of a determination result of the instruction storage determining unit. | 2021-01-14 |
20210011732 | Matrix Data Reuse Techniques in Processing Systems - Techniques are described for computing matrix convolutions in a plurality of multiply and accumulate units, including data reuse of adjacent values. The data reuse can include reading a current value of the first matrix in from memory for concurrent use by the plurality of multiply and accumulate units. The data reuse can also include reading a current value of the second matrix in from memory to a serial shift buffer coupled to the plurality of multiply and accumulate units. The data reuse can also include reading a current value of the second matrix in from memory for concurrent use by the plurality of multiply and accumulate units. | 2021-01-14 |
20210011733 | MULTIPLE CLIENT SUPPORT ON DEVICE-BASED LwM2M - A module may have more than one device, such as an IoT device, that requires bootstrapping. A first device may be provisioned with a pre-shared key (PSK). The first device, such as an IoT device, may bootstrap in a conventional manner using its PSK. A second device without a PSK may be added to the module post-manufacture. The first device may share registration details with the second device and also with an LwM2M server. When contacted by the second device, the LwM2M server may associate the second device with the first device and treat them as one from an operational standpoint, reducing the need for pre-shared keys across domains lacking an existing trust relationship. | 2021-01-14 |
20210011734 | INDUSTRIAL INTERNET OF THINGS GATEWAY BOOT METHODS - An industrial internet of things gateway boot method is described wherein installation, operation and maintenance phases are controlled to limit the chance of a malicious attack on a connected network. | 2021-01-14 |
20210011735 | IDENTIFYING A TRANSPORT ROUTE USING AN INFORMATION HANDLING SYSTEM FIRMWARE DRIVER - Methods, systems, and computer programs for receiving, by an information handling system firmware driver, a request for a feature associated with information handling system firmware, the feature stored in a baseboard management controller; determining a transfer size associated with the feature; identifying a plurality of connectivity points, each of the plurality of connectivity points communicatively coupling the information handling system firmware driver to the baseboard management controller, and each of the plurality of connectivity points associated with a bandwidth; selecting a transport route from the plurality of connectivity points based in part on the transfer size associated with the feature; and in response to selecting the transport route: transmitting the request for the feature to the baseboard management controller via the transport route; receiving the feature from the baseboard management controller via the transport route; and providing the feature to the information handling system firmware for execution. | 2021-01-14 |
20210011736 | METHOD AND APPARATUS FOR MANAGING APPLICATION - An electronic device is disclosed that includes a memory storing a first application run based on a first sandbox environment and a processor connected with the memory. The memory stores instructions which, when executed, cause the processor to determine whether it is necessary to change a first user identifier (UID) for the first application in response to an application installation request requesting to update the first application to a second application, assign a second UID for the second application using a UID mapping resident program based on it being necessary to change the first UID, and construct a second sandbox environment for the second application to have the second UID and a resource included in the first sandbox environment. | 2021-01-14 |
20210011737 | PROFILE TRANSMISSION METHOD, RELATED DEVICE, AND STORAGE MEDIUM - Embodiments of this application disclose a profile transmission method, a related device, and a storage medium, to ensure that a terminal can download a profile to a corresponding OS. This improves accuracy of downloading the profile by the terminal. The method in the embodiments of this application includes: when the terminal runs a first operating system OS, obtaining, by the terminal, a second OS identifier, where the second OS identifier matches a second profile; switching, by the terminal, to a second OS based on the second OS identifier; sending, by the terminal, a target message to a server, where the target message is used to request the second profile; and obtaining, by the terminal, the second profile from the server. | 2021-01-14 |
20210011738 | TARGET INJECTION SAFE METHOD FOR INLINING REGISTRATION CALLS - A method of redirecting an indirect call in a callback list associated with a list of functions that are registered, includes the steps of: upon registering the list of functions, determining a list of function pointers, each of which corresponds to an address in an associated callback; for each function pointer in the list of function pointers, adding a direct call instruction to the registration trampoline corresponding to the associated callback of the function pointer; and upon invoking the associated callback of one of the function pointers in the list of function pointers, invoking the corresponding direct call instruction in the registration trampoline. | 2021-01-14 |
20210011739 | MANAGEMENT OF DEPENDENCIES BETWEEN CLUSTERS IN A COMPUTING ENVIRONMENT - Described herein are systems, methods, and software to manage configurations between dependent clusters. In one implementation, a management system maintains a data structure that indicates relationships between clusters in a computing environment. The management system further identifies a configuration modification to a first cluster and identifies other clusters associated with the first cluster based on the data structure. Once the other clusters are identified, the management system may determine configuration modifications for the other clusters based on the data structure and initiate deployment of the configuration modifications. | 2021-01-14 |
20210011740 | METHOD AND SYSTEM FOR CONSTRUCTING LIGHTWEIGHT CONTAINER-BASED USER ENVIRONMENT (CUE), AND MEDIUM - A method and system for constructing a lightweight container-based user environment (CUE), and a medium, the method including: preparing, by a main process, for communication, cloning a child process, and then becoming a parent process; elevating, by the child process, permission, executing namespace isolation, and cloning a grandchild process, and setting, by the parent process, cgroups for the grandchild process; and setting, by the grandchild process, permission of the grandchild process to execute a command and a file, preparing an overlay file system, setting a hostname, restricting permission, and executing an initialization script to start the container. Multiple users are allowed to customize their own environments, enabling the users to customize their environments more flexibly, achieving privacy isolation, and making it easier and more secure to update a system. Therefore, it is particularly applicable to a high-performance computing cluster. | 2021-01-14 |
20210011741 | DEVICE ENHANCEMENTS FOR SOFTWARE DEFINED SILICON IMPLEMENTATIONS - Methods, apparatus, systems and articles of manufacture (e.g., physical storage media) to provide device enhancements for software defined silicon implementations are disclosed. Example apparatus disclosed herein include a request interface to receive a request for a timestamp. Disclosed example apparatus also include a property checker to determine a first value of an electrical property of a feature embedded in a silicon product, the feature having electrical properties that change over time. Disclosed example apparatus further include a relative time determiner to calculate a relative time between the request and a previous event based on the first value of the electrical property and a second value of the electrical property, the second value of the electrical property associated with the previous event. | 2021-01-14 |
20210011742 | DEPLOYMENT AND ISOLATION OF PLUGINS IN A VIRTUALIZED COMPUTING ENVIRONMENT - In an architecture of a virtualized computing system, plugins are less tightly integrated with a core user interface of a management server. Rather than being installed and executed at the management server as local plugins, the plugins are served as remote plugins from a plugin server, and may be accessed by a web client through a reverse proxy at the management server. Plugin operations may be executed at the plugin server and/or invoked from a user device where the web client resides. Furthermore, a plugin sandbox and other isolation configurations are provided at the user device, so as to further control access capability and interaction of the plugins. | 2021-01-14 |
20210011743 | METHOD AND SYSTEM OF INSTANTIATING PERSONA BASED USER INTERFACE NOTIFICATIONS - A method and system of instantiating a user interface (UI) notification. The method comprises accessing a persona configuration, the persona configuration specified at least in part based on one or more script agents, the script agents including script code arranged in accordance with an execution state flow associated with execution of a software program; executing, in one or more processors of a server device, object code of the software program in accordance with the execution state flow in conjunction with the one or more script agents to generate a set of resultant parameters; and transmitting, to a UI display of a display device that is authorized in association with the persona configuration, information instantiated based on at least a subset of the resultant parameters. | 2021-01-14 |
20210011744 | CONTENT PRESENTATION WITH ENHANCED CLOSED CAPTION AND/OR SKIP BACK - Apparatuses, methods and storage medium associated with content consumption are disclosed herein. In embodiments, an apparatus may include a decoder, a user interface engine, and a presentation engine. The decoder may be configured to receive and decode a streaming of the content. The user interface engine may be configured to receive user commands. The presentation engine may be configured to present the content as the content is decoded from the stream, in response to received user commands. Further, the decoder, the user interface engine, the presentation engine, and/or combination/sub-combination thereof, may be arranged to adapt the presentation to enhance user experience during response to a skip back command, where the adaptation is in addition to a nominal response to the skip back command, e.g., display of closed captions. Other embodiments may be described and/or claimed. | 2021-01-14 |
20210011745 | SYSTEMS AND METHODS FOR REMOTE COMPUTING SESSIONS WITH VARIABLE FORWARD ERROR CORRECTION (FEC) - A computing device may include a memory and a processor cooperating with the memory to generate data to correct errors in transmission of packets to a client device based upon a ratio of a first bandwidth in which to transfer content of a buffer and a second bandwidth in which to transfer the generated data, the packets to transfer the content and the generated data to the client device via a channel. The processor may further adjust the ratio based upon a parameter of the channel, and send the content of the buffer and the generated data via packets and through the channel to the client device based on the adjusted ratio. | 2021-01-14 |
20210011746 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM - An information processing apparatus includes: a memory that stores a plurality of applications; and circuitry to display on a display, a screen including notification information on a function of a particular application of the plurality of applications, and to activate the particular application in response to an input of a request to execute the function on the screen. The particular application configures the circuitry to execute the function in accordance with guide information defining one or more setting items to be set to execute the function and an order of setting the setting items. | 2021-01-14 |
20210011747 | CONTROLLER FOR BRIDGING DATABASE ARCHITECTURES - A method of bridging a first database and a second database. The method includes maintaining a state machine representing a state of a virtual node in the first database, wherein the state of the virtual node conforms to a native protocol for native nodes of the first database, said native protocol of the first database differing from a foreign protocol of the second database. The method further includes receiving an incoming message for the virtual node from one of the native nodes according to the native protocol, and based on the incoming message, accessing the second database. The method further includes updating the state of the virtual node based on the incoming message according to the native protocol, and based on the state of the virtual node as updated, sending an outgoing message to one or more of the native nodes according to the native protocol. | 2021-01-14 |
20210011748 | QUANTUM VARIATIONAL METHOD, APPARATUS, AND STORAGE MEDIUM FOR SIMULATING QUANTUM SYSTEMS - The present disclosure describes a method for obtaining optimal variational parameters of a ground state wavefunction for a Hamiltonian system. The method includes initializing a plurality of variational parameters and sending the variational parameters to a quantum computing portion to output a plurality of measurement results. The method includes transmitting the measurement results to a classical computing portion to update the plurality of variational parameters based on the plurality of measurement results and an update rule, and determining whether a measured energy satisfies a convergence rule. When the measured energy does not satisfy the convergence rule, the method includes sending the plurality of updated variational parameters to the quantum computing portion for a next iteration; and when the measured energy satisfies the convergence rule, the method includes obtaining a plurality of optimal variational parameters for the Hamiltonian system. | 2021-01-14 |
20210011749 | SYSTEMS AND METHODS TO MONITOR A COMPUTING ENVIRONMENT - A system may include a registration module to register the system with a server cluster and a resource collector module operatively connected to the registration module, the resource collector module to identify a list of resources for a container running on the server cluster. The system may also include a resource monitor module operatively connected to the resource collector module, the resource collector module to receive the list of resources for the container, monitor a resource in the list of resources for the container, and generate an event for the container and an event manager module operatively connected to the resource monitor module, the event manager to receive the event and determine a recovery action for the container. | 2021-01-14 |
20210011750 | ARCHITECTURAL DATA MOVER FOR RAID XOR ACCELERATION IN A VIRTUALIZED STORAGE APPLIANCE - Systems and methods for I/O acceleration in a virtualized system include receiving, at a hypervisor from an application executing under a guest OS, a request to write new data to a RAID system, redirecting the request to the VSA owning the RAID drives, moving the new data from guest OS physical address space to VSA physical address space, preparing, by a RAID driver in the VSA, the new data for writing according to a RAID redundancy policy, reading, by the RAID driver into a first buffer, old data and old parity information, performing, by an architectural data mover, inline XOR copy operations to compute a difference between the old and new data, compute new parity information, and write the difference and new parity information into the second buffer, and writing, by the RAID driver, the difference and new parity information to the RAID system using the redundancy policy. | 2021-01-14 |
20210011751 | MEMORY-AWARE PLACEMENT FOR VIRTUAL GPU ENABLED SYSTEMS - Disclosed are aspects of memory-aware placement in systems that include graphics processing units (GPUs) that are virtual GPU (vGPU) enabled. In some embodiments, a computing environment is monitored to identify graphics processing unit (GPU) data for a plurality of virtual GPU (vGPU) enabled GPUs of the computing environment, a plurality of vGPU requests are received. A respective vGPU request includes a GPU memory requirement. GPU configurations are determined in order to accommodate vGPU requests. The GPU configurations are determined based on an integer linear programming (ILP) vGPU request placement model. Configured vGPU profiles are applied for vGPU enabled GPUs, and vGPUs are created based on the configured vGPU profiles. The vGPU requests are assigned to the vGPUs. | 2021-01-14 |
20210011752 | Collaborative Hosted Virtual Systems And Methods - A method including: receiving, by a computing device, a request from a user device for access to a hosted virtual machine; dedicating, by the computing device, a port to forward a cast of a particular hosted virtual machine instance to the user device; establishing a connection between the user device and the particular hosted virtual machine instance through the dedicated port; receiving, by the computing device and from the user device, instructions to execute an application on the particular hosted virtual machine instance; logging external calls made by the particular hosted virtual machine instance; and transmitting, by the computer device, the log of external calls to be stored on a server, the logs being synced by the server with the user device in substantially real time. | 2021-01-14 |
20210011753 | METHODS AND APPARATUS FOR MULTI-PROVIDER VIRTUAL NETWORK SERVICES - Embodiments herein provide methods and apparatus for providing multi-provider virtual network service. A first lifecycle management, LCM, component is provided controlled by a first service provider in a virtual network, wherein the virtual network comprises a trusted LCM component controlled by a trusted provider configured to provide a decentralised trust system between a plurality of LCM components controlled by different service providers in the virtual network. The first LCM component may perform a method comprising receiving a service request to provide a first service; responsive to a determination that the first service cannot be fully provided by the first service provider, generating a first tag representing a portion of the first service that the first service provider cannot provide; transmitting a discovery request to the trusted LCM component, wherein the discovery request comprises the first tag; receiving, from the trusted LCM component based on the first tag, a list of LCM components comprising a second LCM component controlled by a second service provider, wherein the second service provider is capable of providing a part of the portion of the first service; and transmitting a provision request to the second LCM component to provide the part of the portion of the first service. | 2021-01-14 |
20210011754 | HYPERVISOR-INDEPENDENT BLOCK-LEVEL LIVE BROWSE FOR ACCESS TO BACKED UP VIRTUAL MACHINE (VM) DATA AND HYPERVISOR-FREE FILE-LEVEL RECOVERY (BLOCK-LEVEL PSEUDO-MOUNT) - Hypervisor-independent block-level live browse is used for directly accessing backed up virtual machine (VM) data. Hypervisor-free file-level recovery (block-level pseudo-mount) from backed up VMs also is disclosed. Backed up virtual machine (“VM”) data can be browsed without needing or using a hypervisor. Individual backed up VM files can be requested and restored to anywhere without a hypervisor and without the need to restore the rest of the backed up virtual disk. Hypervisor-agnostic VM backups can be browsed and recovered without a hypervisor and from anywhere, and individual backed up VM files can be restored to anywhere, e.g., to a different VM platform, to a non-VM environment, without restoring an entire virtual disk, and without a recovery data agent at the destination. | 2021-01-14 |
20210011755 | SYSTEMS, METHODS, AND DEVICES FOR POOLED SHARED/VIRTUALIZED OR POOLED MEMORY WITH THIN PROVISIONING OF STORAGE CLASS MEMORY MODULES/CARDS AND ACCELERATORS MANAGED BY COMPOSABLE MANAGEMENT SOFTWARE - Provided are systems, methods, and devices for management of storage class memory modules. Methods include receiving a request from an application running on a server, the request received at a memory controller, and maintaining a page table comprising page numbers, server numbers, storage class memory (SCM) dual-inline memory module (DIMM) numbers, and pointers mapping blocks of memory to SCM DIMMs in devices connected to the server through a network interface. The methods also include allocating memory using the request from the application, wherein whether the memory is locally allocated or remotely allocated remains transparent to the application. | 2021-01-14 |
20210011756 | DEVICE SUCH AS A CONNECTED OBJECT PROVIDED WITH MEANS FOR CHECKING THE EXECUTION OF A PROGRAM EXECUTED BY THE DEVICE - The present invention relates to a device, such as a connected object, provided with means for checking the execution of a program executed by the device. | 2021-01-14 |
20210011757 | SYSTEM FOR OPERATIONALIZING HIGH-LEVEL MACHINE LEARNING TRAINING ENHANCEMENTS FROM LOW-LEVEL PRIMITIVES - In an embodiment, a method for inspecting and transforming a machine learning model includes receiving a request that includes the machine learning model and a configuration object that provides an indication of a selected strategy. In the embodiment, the method includes creating a partially specified task graph that includes a first placeholder node for a future expanded task node. In the embodiment, the method includes performing a dynamic expansion and execution phase that includes, repeatedly (a) using a cognitive engine to evaluate whether to revise the partially specified task graph based at least in part on the selected strategy, and (b) using a processor-based execution engine to perform an action specified by the complete node. In an embodiment, the dynamic expansion and execution phase repeats until after the cognitive engine adds a consolidated results node. | 2021-01-14 |
20210011758 | INFORMATION PROCESSING METHOD AND INFORMATION PROCESSING APPARATUS - A non-transitory computer-readable recording medium has stored therein a program that causes a first apparatus to execute a process, the process including: when a load of a first resource existing in a first group is equal to or more than a first threshold value, searching the first group for a first destination resource that is a migration destination of a first task performed using the first resource, the first apparatus being included in the first group; when the first destination resource is not found in the first group, selecting a second group based on first information; transmitting a first request to search for the first destination resource to a second apparatus included in the second group; and when a second resource that is the first destination resource is found in the second group, updating the first information based on second information that is transmitted from the second apparatus. | 2021-01-14 |
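The two-level search above can be sketched as: look for a destination resource in the local group first, then pick a peer group using cached "first information" and update that cache with what the peer reports. Thresholds, data shapes, and the update rule are assumptions for illustration.

```python
FIRST_THRESHOLD = 0.8  # assumed load threshold triggering migration search

def find_destination(local_group, peer_groups, group_info):
    """Return (group_name, resource) able to take the migrated task, or None."""
    # 1. Search the first (local) group.
    for resource, load in local_group["resources"].items():
        if load < FIRST_THRESHOLD:
            return local_group["name"], resource
    # 2. Not found locally: select a second group based on first information
    #    (here: the group with the lowest remembered average load).
    second = min(peer_groups, key=lambda g: group_info.get(g["name"], 1.0))
    for resource, load in second["resources"].items():
        if load < FIRST_THRESHOLD:
            # 3. Update the first information with the peer's reported loads.
            loads = second["resources"].values()
            group_info[second["name"]] = sum(loads) / len(loads)
            return second["name"], resource
    return None

local = {"name": "g1", "resources": {"r1": 0.95, "r2": 0.9}}
peers = [{"name": "g2", "resources": {"r3": 0.2}}]
info = {"g2": 0.3}
result = find_destination(local, peers, info)
```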
20210011759 | MULTI-CORE SYSTEM AND METHOD OF CONTROLLING OPERATION OF THE SAME - A method of controlling an operation of a multi-core system including a plurality of processor cores, includes, monitoring task execution delay times with respect to tasks respectively assigned to the plurality of processor cores, monitoring core execution delay times with respect to the plurality of processor cores and controlling an operation of the multi-core system based on the task execution delay times and the core execution delay times. | 2021-01-14 |
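The monitoring-and-control loop described can be sketched as two delay checks feeding one decision. The delay limits and the corrective action (flagging cores for rebalancing) are illustrative assumptions, not the patent's control policy.

```python
TASK_DELAY_LIMIT_MS = 50  # assumed per-task execution delay limit
CORE_DELAY_LIMIT_MS = 20  # assumed per-core execution delay limit

def control(task_delays, core_delays):
    """task_delays: {(core, task): ms}; core_delays: {core: ms}.

    Returns the sorted list of cores to rebalance.
    """
    overloaded = set()
    for core, delay in core_delays.items():
        if delay > CORE_DELAY_LIMIT_MS:
            overloaded.add(core)
    for (core, task), delay in task_delays.items():
        if delay > TASK_DELAY_LIMIT_MS:
            overloaded.add(core)
    return sorted(overloaded)

cores = control(
    task_delays={("core0", "audio"): 12, ("core1", "render"): 75},
    core_delays={"core0": 5, "core1": 18, "core2": 31},
)
```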
20210011760 | VMID AS A GPU TASK CONTAINER FOR VIRTUALIZATION - Systems, apparatuses, and methods for abstracting tasks in virtual memory identifier (VMID) containers are disclosed. A processor coupled to a memory executes a plurality of concurrent tasks including a first task. Responsive to detecting one or more instructions of the first task which correspond to a first operation, the processor retrieves a first identifier (ID) which is used to uniquely identify the first task, wherein the first ID is transparent to the first task. Then, the processor maps the first ID to a second ID and/or a third ID. The processor completes the first operation by using the second ID and/or the third ID to identify the first task to at least a first data structure. In one implementation, the first operation is a memory access operation and the first data structure is a set of page tables. Also, in one implementation, the second ID identifies a first application of the first task and the third ID identifies a first operating system (OS) of the first task. | 2021-01-14 |
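The ID indirection in this abstract can be illustrated with a small mapping table: a task-unique first ID is transparently mapped to an application ID and an OS ID, which together select the task's page tables. A hedged sketch; the class, the demand-mapping behavior, and all IDs are invented for illustration.

```python
class VMIDContainer:
    def __init__(self):
        self.id_map = {}       # first_id -> (app_id, os_id)
        self.page_tables = {}  # (app_id, os_id) -> page table dict

    def register(self, first_id, app_id, os_id):
        self.id_map[first_id] = (app_id, os_id)
        self.page_tables.setdefault((app_id, os_id), {})

    def memory_access(self, first_id, virt_page):
        """Complete a memory operation via the second/third IDs."""
        app_id, os_id = self.id_map[first_id]  # map first ID -> second/third
        table = self.page_tables[(app_id, os_id)]
        # Demand-map the page in this sketch; a real GPU would walk hardware
        # page tables identified by these IDs.
        return table.setdefault(virt_page, len(table))

vm = VMIDContainer()
vm.register(first_id=42, app_id=1, os_id=0)
frame = vm.memory_access(42, virt_page=0x10)
```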
20210011761 | MOBILE TASKS - Activities related to data analyses are managed in part using task objects representing tasks that need to be performed. In one embodiment, a method comprises: receiving a first request to generate a task object that describes a task; responsive to the first request, generating the task object, the task object being a data structure that comprises values for task object fields that represent attributes of the task; identifying, in a repository of data objects, a particular data object to associate with the task object; determining that a first field of the task object fields corresponds to a second field of the particular data object, the second field of the particular data object having a particular value; and assigning the first field of the task object to the particular value of the corresponding second field. In another embodiment, task objects are associated with geolocation data, and mapped or otherwise presented accordingly. | 2021-01-14 |
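The field-assignment step in the first embodiment can be sketched directly: build a task object, find a matching data object in the repository, and copy values from corresponding fields. Field names and the same-name matching rule are illustrative assumptions.

```python
def generate_task_object(description, fields):
    """Create a task object whose attribute fields start unassigned."""
    return {"description": description, **{f: None for f in fields}}

def link_task_to_data_object(task, data_object):
    """Assign each unassigned task field from a same-named data object field."""
    for field in task:
        if task[field] is None and field in data_object:
            task[field] = data_object[field]
    return task

repository = [
    {"id": "site-9", "geolocation": (47.6, -122.3), "owner": "ops"},
]
task = generate_task_object("inspect pump", ["geolocation", "owner"])
task = link_task_to_data_object(task, repository[0])
```

Carrying `geolocation` this way is also what enables the second embodiment's map presentation.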
20210011762 | Deep Learning Job Scheduling Method and System and Related Device - A deep learning job scheduling method includes obtaining a job request of a deep learning job, determining a target job description file template from a plurality of pre-stored job description file templates based on the job request, determining an identifier of a target job basic image from identifiers of a plurality of pre-stored job basic images based on the job request, generating a target job description file based on the target job description file template and the identifier of the target job basic image, sending the target job description file to a container scheduler, selecting the target job basic image from the pre-stored job basic images based on the target job description file, and creating at least one container for executing the job request. | 2021-01-14 |
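The template-plus-image pipeline above can be sketched as a lookup followed by rendering. This is a hypothetical sketch; the template keys, image tags, and YAML-like output are invented, and a real system would hand the rendered file to a container scheduler such as a Kubernetes controller.

```python
# Assumed pre-stored job description file templates, keyed by framework.
TEMPLATES = {
    "tensorflow": "kind: Job\nimage: {image}\ncommand: python train.py",
    "pytorch": "kind: Job\nimage: {image}\ncommand: torchrun train.py",
}
# Assumed pre-stored job basic image identifiers.
BASE_IMAGES = {"tensorflow": "tf-base:2.4", "pytorch": "torch-base:1.7"}

def build_job_description(job_request):
    """Select template and base image from the request, render the file."""
    framework = job_request["framework"]
    template = TEMPLATES[framework]        # target job description template
    image_id = BASE_IMAGES[framework]      # target job basic image identifier
    return template.format(image=image_id)

description = build_job_description({"framework": "pytorch"})
```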
20210011763 | SUBSTRATE PROCESSING DEVICE AND DETERMINATION METHOD - In a substrate processing device, a technique that achieves OR transfer satisfying a specified condition is provided. The substrate processing device includes an identifying unit configured to refer to usage states of a plurality of process modules at process time slots to identify one or more process modules that can execute processes of substrates included in a control job to be processed among process modules that can be used at the process time slots, a calculating unit configured to assign the processes of the substrates to respective process time slots of the one or more process modules identified by the identifying unit to calculate a time duration from a start to an end of the processes of the substrates, and a determining unit configured to determine start timing of starting the processes of the substrates so that the time duration calculated by the calculating unit satisfies a specified condition. | 2021-01-14 |
20210011764 | WORKLOAD/PROCESSOR RESOURCE SCHEDULING SYSTEM - A workload/processor resource scheduling system is coupled to a processing system. The workload/processor resource scheduling system monitors a performance of first workload(s) by the processing system according to a workload/processor resource schedule, and identifies a correlation between the performance of the first workload(s) according to the workload/processor resource schedule, and an operating level of a processing system operating parameter for the processing system when performing the first workload(s) according to the workload/processor resource schedule. Based on the correlation, the workload/processor resource schedule and the processing system operating parameter are linked. Subsequently, an operating-parameter-based request is received to produce the operating level of the processing system operating parameter when performing a second workload and, based on the operating-parameter-based request and the linking of the workload/processor resource schedule and the processing system operating parameter, the second workload is performed by the processing system based on the workload processor resource schedule. | 2021-01-14 |
20210011765 | ADAPTIVE LIMITED-DURATION EDGE RESOURCE MANAGEMENT - Systems and techniques for adaptive limited-duration edge resource management are described herein. Available capacity may be calculated for a resource for a node of the edge computing network based on workloads executing on the node. Available set-aside resources may be determined based on the available capacity. A service request may be received from an application executing on the edge computing node. A priority category may be determined for the service request. Set-aside resources from the available set-aside resources may be assigned to a workload associated with the service request based on the priority category. | 2021-01-14 |
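The assignment rule sketched below follows the steps in the abstract under stated assumptions: set-aside capacity is what remains after running workloads, and only requests at or above a priority cutoff may claim it. The categories and cutoff are illustrative, not from the source.

```python
PRIORITY = {"critical": 2, "standard": 1, "best-effort": 0}
SET_ASIDE_CUTOFF = 1  # assumed: "standard" and above may use set-aside

def available_set_aside(total_capacity, workloads):
    """Capacity left over after the node's executing workloads."""
    return max(0, total_capacity - sum(workloads.values()))

def assign(request_units, category, total_capacity, workloads):
    """Grant set-aside units to the request if its priority qualifies."""
    spare = available_set_aside(total_capacity, workloads)
    if PRIORITY[category] >= SET_ASIDE_CUTOFF and request_units <= spare:
        return request_units
    return 0

workloads = {"w1": 40, "w2": 30}
granted = assign(20, "standard", total_capacity=100, workloads=workloads)
denied = assign(20, "best-effort", total_capacity=100, workloads=workloads)
```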
20210011766 | Predictive System Resource Allocation - A system may include a processing unit; a storage device comprising instructions, which when executed by the processing unit, configure the processing unit to perform operations comprising: retrieving a meeting count of meetings scheduled during a future time period; generating a predicted usage level of a service during the future time period based on the meeting count; determining a resource modification for the service based on the predicted usage level; and implementing the resource modification prior to the future time period. | 2021-01-14 |
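The prediction step above can be sketched with a linear usage model: predicted load grows with the meeting count, and the resource modification is the instance delta needed before the future time period. The coefficients are invented for illustration, not from the source.

```python
BASELINE_LOAD = 100        # assumed requests/min with no meetings
LOAD_PER_MEETING = 12      # assumed extra requests/min per scheduled meeting
CAPACITY_PER_INSTANCE = 250

def predict_usage(meeting_count):
    """Predicted service load during the future time period."""
    return BASELINE_LOAD + LOAD_PER_MEETING * meeting_count

def resource_modification(meeting_count, current_instances):
    """Instances to add (or remove, if negative) ahead of the period."""
    needed = -(-predict_usage(meeting_count) // CAPACITY_PER_INSTANCE)  # ceil
    return needed - current_instances

delta = resource_modification(meeting_count=50, current_instances=2)
```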
20210011767 | DYNAMIC SIZE OF STATIC SLC CACHE - Apparatus and methods are disclosed, including using a memory controller to track a maximum logical saturation over the lifespan of the memory device, where logical saturation is the percentage of capacity of the memory device written with data. A portion of a pool of memory cells of the memory device is reallocated from single level cell (SLC) static cache to SLC dynamic cache storage based at least in part on a value of the maximum logical saturation, the reallocating including writing at least one electrical state to a register, in some examples. | 2021-01-14 |
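The reallocation trigger described can be illustrated with a controller that remembers the highest logical saturation ever seen and, past a threshold, moves blocks from the static SLC cache pool to the dynamic pool. An illustrative sketch only; the threshold, pool sizes, and halving rule are assumptions.

```python
class SLCCacheManager:
    REALLOC_THRESHOLD = 0.75  # assumed max-saturation level triggering a move

    def __init__(self, static_blocks, dynamic_blocks):
        self.static_blocks = static_blocks
        self.dynamic_blocks = dynamic_blocks
        self.max_saturation = 0.0  # tracked over the device lifespan

    def on_write(self, logical_saturation):
        """Update lifetime max saturation; reallocate past the threshold."""
        self.max_saturation = max(self.max_saturation, logical_saturation)
        if self.max_saturation >= self.REALLOC_THRESHOLD and self.static_blocks:
            moved = self.static_blocks // 2
            self.static_blocks -= moved
            # A real device would also record the new split in a register.
            self.dynamic_blocks += moved

mgr = SLCCacheManager(static_blocks=64, dynamic_blocks=0)
mgr.on_write(0.40)  # below threshold: nothing moves
mgr.on_write(0.80)  # lifetime max now 0.80: reallocate half the static pool
```

Note the max is monotonic: even if saturation later drops, the reallocation decision keys off the lifetime peak, matching the abstract.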
20210011768 | THREAD ASSOCIATED MEMORY ALLOCATION AND MEMORY ARCHITECTURE AWARE ALLOCATION - A method and system for thread aware, class aware, and topology aware memory allocations. Embodiments include a compiler configured to generate compiled code (e.g., for a runtime) that when executed allocates memory on a per class per thread basis that is system topology (e.g., for non-uniform memory architecture (NUMA)) aware. Embodiments can further include an executable configured to allocate a respective memory pool during runtime for each instance of a class for each thread. The memory pools are local to a respective processor, core, etc., where each thread executes. | 2021-01-14 |
20210011769 | MANAGEMENT OF UNMAPPED ALLOCATION UNITS OF A MEMORY SUB-SYSTEM - An indication that an allocation unit of a memory sub-system has become unmapped can be received. In response to receiving the indication that the allocation unit of the memory sub-system has become unmapped, the allocation unit can be programmed with a data pattern. Data to be written to the unmapped allocation unit can be received. A write operation can be performed to program the received data at the unmapped allocation unit by using a read voltage that is based on the data pattern. | 2021-01-14 |
20210011770 | QUIESCE RECONFIGURABLE DATA PROCESSOR - A reconfigurable data processor comprises an array of configurable units configurable to allocate a plurality of sets of configurable units in the array to implement respective execution fragments of the data processing operation. Quiesce logic is coupled to configurable units in the array, configurable to respond to a quiesce control signal to quiesce the sets of configurable units in the array on quiesce boundaries of the respective execution fragments, and to forward quiesce ready signals for the respective execution fragments when the corresponding sets of processing units are ready. An array quiesce controller distributes the quiesce control signal to configurable units in the array, and receives quiesce ready signals for the respective execution fragments from the quiesce logic. | 2021-01-14 |
20210011771 | MEASUREMENT SEQUENCE DETERMINATION FOR QUANTUM COMPUTING DEVICE - A computing system is provided, including a processor configured to identify a plurality of measurement sequences that implement a logic gate. Each measurement sequence may include a plurality of measurements of a quantum state of a topological quantum computing device. The processor may be further configured to determine a respective estimated total resource cost of each measurement sequence of the plurality of measurement sequences. The processor may be further configured to determine a first measurement sequence that has a lowest estimated total resource cost of the plurality of measurement sequences. The topological quantum computing device may be configured to implement the logic gate by applying the first measurement sequence to the quantum state. | 2021-01-14 |
20210011772 | MEMORY MANAGEMENT AND RESOURCE UTILIZATION ON SERVICE HOSTING COMPUTING DEVICES - The present disclosure relates to systems, non-transitory computer-readable media, and methods of a process management system that improves memory management and resource utilization on host devices that utilize a pre-fork worker process model (e.g., uWSGI). For example, the process management system can utilize the memory consumption of the host device to determine how many worker processes to terminate as well as which worker processes to terminate. In addition, the process management system can utilize adaptive respawning to determine when to respawn each of the terminated worker processes. | 2021-01-14 |
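The "how many and which workers" decision can be sketched as a watermark policy: above a high-water mark, terminate the heaviest workers until enough memory would be freed to reach a target level. The thresholds and heaviest-first selection are assumptions for illustration, not actual uWSGI behavior.

```python
MEMORY_HIGH_WATERMARK = 0.85  # assumed: start terminating above this fraction
MEMORY_TARGET = 0.70          # assumed: free enough to aim back at this level

def workers_to_terminate(host_used_frac, host_total_mb, workers_mb):
    """workers_mb: {worker_id: resident MB} -> ids to stop, heaviest first."""
    if host_used_frac < MEMORY_HIGH_WATERMARK:
        return []
    to_free = (host_used_frac - MEMORY_TARGET) * host_total_mb
    victims, freed = [], 0.0
    for worker, mb in sorted(workers_mb.items(), key=lambda kv: -kv[1]):
        if freed >= to_free:
            break
        victims.append(worker)
        freed += mb
    return victims

victims = workers_to_terminate(
    host_used_frac=0.90, host_total_mb=8000,
    workers_mb={"w1": 900, "w2": 1700, "w3": 400},
)
```

The adaptive-respawn half of the abstract would then restart these workers once usage falls back below the target.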
20210011773 | MEMORY-AWARE PLACEMENT FOR VIRTUAL GPU ENABLED SYSTEMS - Disclosed are aspects of memory-aware placement in systems that include graphics processing units (GPUs) that are virtual GPU (vGPU) enabled. Virtual graphics processing unit (vGPU) data is identified for graphics processing units (GPUs). A configured GPU list and an unconfigured GPU list are generated using the GPU data. The configured GPU list specifies configured vGPU profiles for configured GPUs. The unconfigured GPU list specifies a total GPU memory for unconfigured GPUs. A vGPU request is assigned to a vGPU of a GPU. The GPU is a first fit, from the configured GPU list or the unconfigured GPU list that satisfies a GPU memory requirement of the vGPU request. | 2021-01-14 |
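The first-fit rule above can be sketched directly: try GPUs already configured with the requested vGPU profile, then fall back to unconfigured GPUs whose total memory can hold it. Profile names, sizes, and list shapes are illustrative assumptions.

```python
def place_vgpu(request_profile, profile_mem_gb, configured, unconfigured):
    """First fit across the configured then unconfigured GPU lists.

    configured: [(gpu, profile, free_slots)]; unconfigured: [(gpu, total_mem_gb)].
    """
    # 1. A configured GPU must already carry the requested profile.
    for gpu, profile, free_slots in configured:
        if profile == request_profile and free_slots > 0:
            return gpu
    # 2. An unconfigured GPU only needs enough total GPU memory; placing the
    #    request would configure it with this profile.
    need = profile_mem_gb[request_profile]
    for gpu, total_mem in unconfigured:
        if total_mem >= need:
            return gpu
    return None

profiles = {"grid_v100-8q": 8, "grid_v100-16q": 16}
configured = [("gpu0", "grid_v100-16q", 0), ("gpu1", "grid_v100-8q", 1)]
unconfigured = [("gpu2", 16)]

hit = place_vgpu("grid_v100-8q", profiles, configured, unconfigured)
fallback = place_vgpu("grid_v100-16q", profiles, configured, unconfigured)
```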
20210011774 | APPLICATION PROGRAM MANAGEMENT METHOD AND APPARATUS - This application provides an application program management method and apparatus. The method is performed in a database cluster system including at least two database nodes, at least one database object is stored in each database node, and the method includes: running an application program on a first database node in a first time period; determining a target database node based on at least one historical database object accessed by the application program in the first time period, where the target database node stores the historical database object; and running the application program on the target database node in a second time period. According to this application, a database node on which an application program runs can be dynamically adjusted, to avoid overload of the database node. | 2021-01-14 |
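The placement rule described can be sketched as a frequency count: tally which node holds the database objects the application touched in the first time period, and run the application on the busiest node in the second period. The data below is illustrative.

```python
from collections import Counter

def choose_target_node(access_log, object_location):
    """access_log: [object]; object_location: {object: node} -> busiest node."""
    hits = Counter(object_location[obj] for obj in access_log)
    node, _count = hits.most_common(1)[0]
    return node

object_location = {"orders": "node1", "users": "node2", "invoices": "node2"}
access_log = ["users", "invoices", "users", "orders"]  # first time period
target = choose_target_node(access_log, object_location)
# In the second time period the application would run on `target`,
# keeping it close to its hot data and relieving the overloaded node.
```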
20210011775 | EXECUTION OF CONTAINERIZED PROCESSES WITHIN CONSTRAINTS OF AVAILABLE HOST NODES - The technology disclosed herein enables optimized managing of cluster deployment on a plurality of host nodes. In a particular embodiment, a method includes defining parameters of a cluster for executing a process that will execute in a plurality of containers distributed across one or more of the plurality of host nodes. The method further provides adding a first container portion of the plurality of containers to a first host node portion of the plurality of host nodes. After adding the first container portion, the method includes determining that a remaining host node portion of the plurality of host nodes will not support more of the plurality of containers and adjusting the parameters of the cluster to allow the process to execute on the first host node portion. | 2021-01-14 |
20210011776 | MANAGING OWNERSHIP TRANSFER OF FILE SYSTEM INSTANCE IN VIRTUALIZED DISTRIBUTED STORAGE SYSTEM - Example implementations relate to managing ownership transfer of a file system instance in a virtualized distributed storage system. The virtualized distributed storage system includes a first node having a first virtual controller that acts as an owner of a file system instance in a distributed storage, and a second node having a second virtual controller coupled to the first node over a network. A failure condition associated with a first node is detected. Further, in response to detection of the failure condition, an ownership of the file system instance may be transferred from the first virtual controller to the second virtual controller no later than an IP address switchover of the first virtual controller. | 2021-01-14 |
20210011777 | ENTANGLEMENT OF PAGES AND GUEST THREADS - Entanglement of pages and threads is disclosed. An indication is received of a stalling event caused by a requested portion of memory being inaccessible. It is determined that the requested portion of memory is an entangled portion of memory that is entangled with a physical node in a plurality of physical nodes. A type of the entangled portion of memory is determined. The stalling event is handled based at least in part on the determined type of the entangled portion of memory. | 2021-01-14 |