18th week of 2022 patent application highlights part 53 |
Patent application number | Title | Published |
20220137933 | METHOD AND SYSTEM FOR EXTRACTING NATURAL LANGUAGE ELEMENTS EMBEDDED IN APPLICATION SOURCE CODE - Natural language elements are present in both the executable and non-executable lines of code. The rich information hidden within them is often ignored in code analysis, as extracting meaningful insights from its raw form is not straightforward. A system and method for extracting natural language elements from application source code is provided. The disclosure provides a method for performing detailed analytics on the natural language elements, classifying them using deep learning networks, and creating meaningful insights. The system understands the different types of natural language elements and comment patterns present in the application source code and segregates authentic comments holding valuable insights, version comments, and data-element-level comments from other non-value-adding comments. The embedded intelligence finally maps the extracted natural language elements to the code blocks, making them consumable and opening a range of applications in domain contextualization, code documentation, and maintenance (see the sketch after the table). | 2022-05-05 |
20220137934 | PROVIDING SERVICES FOR ASSISTING PROGRAMMING - Systems and methods for services for assisting programming are disclosed. The systems and methods can be used, during edit time, to identify one or more services available to program code or data of interest, generate a context for the one or more services, execute code for the one or more services within the context to generate a result for each service, analyze the result for each service to select a subset of results based on criteria associated with the program code, the data of interest, or the services, and offer, to a user, the services corresponding to the subset of results, or the subset of results themselves, as suggestions to facilitate further development of the program code or use of the data of interest. | 2022-05-05 |
20220137935 | SYSTEM AND METHOD OF COLLECTIVELY TRACKING A BEHAVIOR OF A MEMBER ACROSS ONE OR MORE DIMENSIONS - A method of triggering an action by collectively tracking a behavior of a member across a plurality of dimensions is provided. The method includes obtaining an action rule that specifies the action when the member performs the behavior in the plurality of dimensions; specifying a tensor counter, a data structure that tracks the behavior based on the action rule and comprises a first data object storing the name of the behavior and a second data object comprising a plurality of keys and values; determining the name and an updated value of the behavior, and a dimension associated with the behavior; modifying the value associated with the key to track the member's behavior in that dimension; updating the tensor counter to collectively track the member's behavior across the dimensions; and triggering the action for the member when the member's behavior matches the action rule (see the sketch after the table). | 2022-05-05 |
20220137936 | Efficient State Machines for Real-Time Dataflow Programming - An efficient state-machine-based pattern matching technique processes tokens in an input queue and identifies patterns in the sequence of tokens that match one or more predetermined input patterns without backtracking. Tokens can include data or no data, plus a time component. The tokens can be a stream of data generated by a sensor, which transforms a physical property into a digital quantity. The pattern matching technique processes the input queue in a single direction and never re-examines a previously examined token. In an implementation, the specific patterns to be matched are specified using a state machine, where the state machine is defined in a state table and operates using a state stack (see the sketch after the table). | 2022-05-05 |
20220137937 | AUTOMATED VALIDATION SCRIPT GENERATION AND EXECUTION ENGINE - Methods and systems for automatically generating validation scripts for software applications are disclosed. A computing device may receive a first application. The computing device may compare the first application to a plurality of stored applications. The computing device may determine a second application among the plurality of stored applications based on the comparing. The computing device may determine a first validation script associated with the second application. The computing device may automatically generate a second validation script for the first application based on the first validation script and a result of a comparison of the first application and the second application. The computing device may validate the first application using the second validation script. | 2022-05-05 |
20220137938 | SYSTEM AND METHOD FOR AUTOMATED USER INTERFACE LAYOUT PRESENTATION BASED ON TASK - A system and method to assist in repetitively performing a task within an application/system is provided. Tracking information, in terms of how a user interacts with the application/system is stored. The tracking information includes a series of navigation actions the user takes to perform a task, and includes a sequence of user interface layouts presented in the course of performing the task. Based on the stored tracking information, a task-centric user interface layout is determined, and this is presented to the user in response to the user performing one of the navigation actions within the sequence of interface layouts. This saves the user from having to execute the series of navigation actions each time the user is to perform the task. | 2022-05-05 |
20220137939 | REPRESENTATION AND ANALYSIS OF WORKFLOWS USING ABSTRACT SYNTAX TREES - A workflow for an operational process may be defined using a functional programming language. A computer system may parse the workflow to generate an abstract syntax tree, which may include states of the workflow and transitions from one workflow state to another. The computer system may generate code paths from the abstract syntax tree representing sequences of execution. Reflection on the workflow may be performed using the abstract syntax tree and code paths to allow intelligent decision-making (see the sketch after the table). | 2022-05-05 |
20220137940 | USING FUSION TO REDUCE ENCRYPTION IN STREAMING APPLICATIONS - An embodiment includes analyzing data associated with an original flow graph comprising a plurality of operators of a stream computing application, including identifying a secure network connection between a first operator and a second operator that uses encryption. The embodiment fuses the first operator with the second operator such that a first logical function of the first operator is combined with a second logical function of the second operator. The embodiment then generates a modified flow graph as a modification of the original flow graph that combines the first operator and the second operator and lacks encryption between the first operator and the second operator. | 2022-05-05 |
20220137941 | COMPILATION METHOD, APPARATUS, COMPUTING DEVICE AND MEDIUM - A compilation method, a compilation apparatus suitable for an In-Memory Computing apparatus, a computing device, and a storage medium are provided. The compilation method includes: acquiring calculation information of an algorithm to be compiled; converting the algorithm into a first intermediate representation according to the calculation information; mapping the first intermediate representation to a second intermediate representation; and compiling the algorithm into instruction information recognized by the In-Memory Computing apparatus according to hardware information, so that the In-Memory Computing apparatus executes the instruction information. The compilation method can thus compile the calculation information into instructions directly executable by the In-Memory Computing apparatus, accelerating the operations of various algorithms by using the In-Memory Computing apparatus. | 2022-05-05 |
20220137942 | NATIVE EMULATION COMPATIBLE APPLICATION BINARY INTERFACE FOR SUPPORTING EMULATION OF FOREIGN CODE - A function is compiled against a first application binary interface (ABI) and a second ABI of a native first instruction set architecture (ISA). The second ABI defines context data not exceeding a size expected by a third ABI of a foreign second ISA, and uses a subset of registers of the first ISA that are mapped to registers of the second ISA. Use of the subset of registers by the second ABI results in some functions being foldable when compiled using both the first and second ABIs. First and second compiled versions of the function are identified as foldable, or not, based on whether the compiled versions match. Both the first and second compiled versions are emitted into a binary file when they are not foldable, and only one of the first or second compiled versions is emitted into the binary file when they are foldable. | 2022-05-05 |
20220137943 | SELECTION OF RANKED CONFIGURATIONS - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selection of ranked configurations. In one aspect, a method includes providing a plurality of class definitions for selection, each class definition modeling a respective data or functional component of a cloud-based environment using a group of configurable class parameters, each class definition supporting instantiation and inheritance of the class definition in a configuration specification for a cloud-based deployment; deriving respective performance metrics associated with each of the plurality of class definitions based on aggregated performance of multiple cloud-based deployments, wherein the multiple cloud-based deployments had been carried out according to respective configuration specifications that require instantiation of the class definition or a new class definition derived from the class definition; and utilizing the respective performance metrics associated with each of the plurality of class definitions in ranking the plurality of class definitions. | 2022-05-05 |
20220137944 | ACCELERATING APPLICATION AND SUB-PACKAGE INSTALLATIONS - In some examples, a method includes downloading, from an application provider, a patch to be applied to a first application element and a stripped version of the application that does not include one or more application elements to be reused during installation of the application, decompressing the first application element to generate a decompressed version of the first application element, and decompressing the patch to generate a decompressed version of the patch. The method may also include applying the decompressed version of the patch to the decompressed version of the first application element to generate a patched application element, compressing the patched application element to generate a compressed patched application element, and installing the application using the compressed patched application element, the stripped version of the application, and the one or more application elements other than the first application element. | 2022-05-05 |
20220137945 | Multiple Virtual Machines in a Mobile Virtualization Platform - Systems and methods are described for embodiments of a mobile virtualization platform (MVP) that may be embedded in an end user mobile device or comprise part of the firmware loaded on the device. The MVP may implement a thin layer of software embedded on the device to decouple applications and data from the underlying hardware, thus enabling the device to concurrently run multiple operating systems. Furthermore, the MVP may enable applications to run concurrently per baseband. | 2022-05-05 |
20220137946 | STATE-DRIVEN VIRTUALIZATION SYSTEM IMAGING - A method for forming a virtualization system image. A specification of an expressed end state of a virtualization system image is analyzed. The specification is decomposed into lower level specifications and the lower level specifications are decomposed into idempotent operations. The virtualization system image corresponding to the expressed end state is assembled by processing the idempotent operations. The expressed end state, decomposed lower level intents, and decomposed idempotent operations are codified into a decomposition hierarchy. The decomposition hierarchy is query-able such that, for a given intent, an idempotent operation is returned. An idempotent operation code library is query-able such that, for a given idempotent operation, a corresponding set of executable code is returned. An image builder executes the executable code. When all of the idempotent operations have been successfully executed, the virtualization system image is complete. A virtualization system image is deployed to computing nodes that constitute a computing cluster. | 2022-05-05 |
20220137947 | INTERFACE DEVICE HAVING UPDATABLE FIRMWARE, MOBILE DEVICE, AND FIRMWARE UPDATE METHOD - According to an embodiment disclosed herein, an interface device to be connected to an external mobile device may comprise: a connector; at least one integrated circuit (IC); a memory for storing firmware for the at least one IC and instructions; and at least one processor configured to execute the stored instructions, wherein the instructions, when executed by the processor, cause the processor to: transmit identification data of the interface device, including data associated with the firmware, to the external mobile device through the connector when the interface device is connected to the external mobile device through the connector; receive firmware update data for the at least one IC, which corresponds to the identification data, from the external mobile device through the connector; verify the integrity of the firmware update data; and update the firmware stored in the memory by using the firmware update data when the integrity of the firmware update data has been verified. | 2022-05-05 |
20220137948 | EDGE-BASED INTELLIGENCE FOR OVER THE AIR UPDATE - A computing device receives one or more idle state conditions that indicate an idle device state for a class of devices associated with the computing device. The computing device receives an over the air (OTA) update of a firmware of the computing device, where the OTA update is to be applied by the computing device responsive to detecting the idle device state of the computing device. The computing device identifies a device state of the computing device and determines whether the device state satisfies the one or more idle state conditions. Responsive to determining that the device state of the computing device satisfies the one or more idle state conditions, the computing device applies the OTA update of the firmware to the computing device. | 2022-05-05 |
20220137949 | METHOD AND SYSTEM FOR COMPUTER UPDATE REQUIREMENT ASSIGNMENT USING A MACHINE LEARNING TECHNIQUE - A novel data collection technique is disclosed. This data collection technique gathers data relating to various technical processes implemented in a computer network. A database including past assignments of requirements is also provided. These requirements can be software update requirements relating to a computer network. A classification model is trained using the assignment records for these requirements and the data collected relating to the technical processes. Using the classification model, new requirements can be classified and implemented on the computer network (see the sketch after the table). | 2022-05-05 |
20220137950 | SOFTWARE UPDATE MANAGEMENT DEVICE AND SOFTWARE UPDATE MANAGEMENT METHOD - To ensure efficiency of software update operations while maintaining reliability of an entire network during the software update operations. A software update management apparatus | 2022-05-05 |
20220137951 | SYSTEM AND METHOD FOR TRANSFERRING AN OPERATING SOFTWARE UPDATE TO A SECURITY-ORIENTED DEVICE - A system including a safety-oriented device and an electronic storage device which is separate therefrom and in which exactly one piece of data content for the device is stored. The exactly one piece of data content is either an operating software update or an address. The electronic storage device has a first connection unit for mechanical and electrical coupling to the device, the first connection unit comprising a first mechanical coding means. The device has a storage unit in which an operating system is stored, a microcontroller and a second connection unit for mechanical and electrical coupling to the electronic storage device, the second connection unit having a second mechanical coding means. The microcontroller is designed to recognize whether the electronic storage device is connected via its first connection unit to the second connection unit of the device, and in this case is also designed to download the exactly one piece of data content stored in the electronic storage device to the storage unit of the device. | 2022-05-05 |
20220137952 | SYSTEM AND METHOD FOR UPDATING FIRMWARE OF A COOKING APPARATUS - Techniques are described for updating the firmware of a second cooking apparatus under the control of a first cooking apparatus that controls recipe execution by the first and second cooking apparatuses in a joint cooking process. The first cooking apparatus queries an update server to check whether the current firmware version of the second cooking apparatus corresponds to the latest firmware version available for providing a particular cooking function. If the latest firmware version differs from the current firmware version, the latest firmware version is downloaded to the first cooking apparatus. The downloaded firmware version is uploaded to the second cooking apparatus while preventing interruption of the joint cooking process. Completion of the firmware update is registered by the first cooking apparatus after receipt of a confirmation from the second cooking apparatus. The first cooking apparatus then sends the recipe instructions for performing the particular cooking function to the second cooking apparatus. | 2022-05-05 |
20220137953 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - An information processing apparatus stores first software and second software, the first software is stored in a first storage medium accessible by a central processing unit and an embedded controller, and the second software is used to restore the first software and stored in a second storage medium accessible by the embedded controller. The information processing apparatus includes an update unit configured to update the second software using the first software depending on a result of a comparison between version information about the first software and version information about the second software, a falsification detection unit configured to detect whether the first software is falsified; and a restoration unit configured to restore the first software using the second software in a case where the falsification detection unit detects that the first software is falsified. | 2022-05-05 |
20220137954 | SYSTEMS AND METHODS FOR PROVIDING EVENT ATTRIBUTION IN SOFTWARE APPLICATIONS - A method and apparatus for event attribution during software experimentation are described. The method may include receiving, by a server computer system, a plurality of event tracking messages associated with an end user system, each event tracking message including at least a customer identifier, an end user identifier, and a timestamp. The method may also include storing each event tracking message with the received customer identifier, end user identifier, and timestamp in a customer data store. Furthermore, the method may include applying, by the server computer system, a feature treatment to a configurable application executing on the end user system, where the feature treatment is specified by a customer system associated with the customer identifier and configures one or more features of the configurable application associated with the end user identifier. The method may also include attributing a first set of events from the received plurality of event tracking messages to the application of the feature treatment. | 2022-05-05 |
20220137955 | Method and apparatus for firmware patching - A method of handling a firmware update for a device is disclosed, comprising: determining a device to be in an updatable state; setting the device into an updating state after determining the updatable state; and, after the device is in the updating state, writing a firmware update to memory for the device. After writing the firmware update, the device is switchable to a working state in which the device operates based on the firmware update (see the sketch after the table). | 2022-05-05 |
20220137956 | UPGRADE OF A DISTRIBUTED SERVICE IN A VIRTUALIZED COMPUTING SYSTEM - An example method of performing an upgrade operation for a distributed service in a virtualized computing system is described. The virtualized computing system includes a host cluster, the host cluster having hosts and a virtualization layer executing on hardware platforms of the hosts. The method includes: receiving, at a controller of the distributed service, a first upgrade operation from a user, the distributed service including the controller and a plurality of service engine groups, each of the plurality of service engine groups including a plurality of service engines; and performing, by the controller, the first upgrade operation on software of the controller exclusive of software of the service engines in each of the service engine groups, the software of the controller and the software of the plurality of service engines in each of the plurality of service engine groups executing in a plurality of hosts. | 2022-05-05 |
20220137957 | ASSEMBLING DATA DELTAS IN CONTROLLERS AND MANAGING INTERDEPENDENCIES BETWEEN SOFTWARE VERSIONS IN CONTROLLERS USING TOOL CHAIN - Disclosed embodiments relate to operations for receiving and integrating a delta file in a vehicle. Operations may include receiving, at an Electronic Control Unit (ECU) in the vehicle, a delta file comprising a plurality of deltas corresponding to a software update for software on the ECU and startup code for executing the delta file in the ECU; executing the delta file, based on the startup code, in the ECU; and updating memory addresses in the ECU to correspond to the plurality of deltas from the delta file. | 2022-05-05 |
20220137958 | METHODS AND SYSTEMS FOR AUTOMATIC DETERMINATION OF A DEVICE-SPECIFIC CONFIGURATION FOR A SOFTWARE APPLICATION OPERATING ON A USER DEVICE - A method and system for automatically determining a device-specific configuration for a software application operating on a user device. A configuration monitoring program monitors local user data stored on a user device and generates a device-specific prediction model using a machine learning algorithm applied to the monitored local data. The configuration monitoring program also receives a global prediction model generated remotely using global user data collected from a plurality of user devices. The configuration monitoring program generates a predicted device-specific configuration of the application operating on the user device using prediction data from both the device-specific prediction model and the global prediction model and updates the configuration of the given application using the predicted device-specific configuration. | 2022-05-05 |
20220137959 | DETECTING DUPLICATED CODE PATTERNS IN VISUAL PROGRAMMING LANGUAGE CODE INSTANCES - In various embodiments, a process for detecting duplicated code patterns in visual programming language code instances includes analyzing a repository of graph based visual programming language code instances and detecting a similar code portion pattern duplicated among a group of those instances, including by using an index and by tokenizing a flow corresponding to at least one graph based visual programming language code instance in the group. The process includes visually indicating elements belonging to the detected similar code portion pattern within a visual representation of at least one of the group of graph based visual programming language code instances. | 2022-05-05 |
20220137960 | SYSTEM AND METHOD FOR PROCESSING LARGE DATASETS - An apparatus comprises a bulk array of non-volatile memory cells on an integrated circuit die and an arithmetic logic unit on the die coupled to the bulk array. The arithmetic logic unit is operable to perform arithmetic logic operations on contents of the bulk array responsive to instructions received from outside of the die. The non-volatile memory cells may include NAND-type flash memory cells. | 2022-05-05 |
20220137961 | Controller with Early Termination in Mixed-Integer Optimal Control Optimization - A system is controlled by solving a mixed-integer optimal control optimization problem using branch-and-bound (B&B) optimization that searches for a global optimal solution within a search space. The B&B optimization iteratively partitions the search space into a nested tree of regions, and prunes at least one region from the nested tree of regions before finding a local optimal solution for each region when a dual objective value of a projection of a sub-optimal dual solution estimate for each region into a dual feasible space is greater than an upper bound or lesser than a lower bound of the global optimal solution maintained by the B&B optimization. | 2022-05-05 |
20220137962 | LOGARITHMIC NUMBER SYSTEM - A processor comprising a register file comprising a bias register for holding a bias and a plurality of operand registers, each for holding a respective number which, together with the bias, represents a respective value in a logarithmic number system; and an execution unit configured to, in response to receiving a logarithmic addition opcode: retrieve first and second numbers from first and second sources respectively; subtract the first number from the second number to determine a difference; if the determined difference is less than or equal to a predetermined number, retrieve, from a look-up table, a third number mapped to the determined difference, and add the third number to the first number to determine a result; if the determined difference is greater than the predetermined number, determine the result to be the greatest of the first and second numbers; and store the result (see the worked sketch after the table). | 2022-05-05 |
20220137963 | NEURAL NETWORK ACCELERATOR AND OPERATING METHOD THEREOF - Disclosed are a neural network accelerator and an operating method thereof, which include an instruction analyzer that analyzes a first instruction instructing an operation with respect to a first layer of a neural network algorithm from an external device, a polymorphic operator array including a plurality of operators that performs the operation with respect to the first layer under a control of the instruction analyzer, an interface that communicates with the external device and an external memory under the control of the instruction analyzer, an internal memory, a type converter, a type conversion data mover that stores data received from the external memory through the interface in the internal memory under the control of the instruction analyzer, and an internal type converter that performs a conversion of data stored in the internal memory or data generated by the polymorphic operator array under the control of the instruction analyzer. | 2022-05-05 |
20220137964 | METHODS AND SYSTEMS FOR OPTIMIZING FILE SYSTEM USAGE - A method for generating a thread queue includes obtaining, by a user space file system, CPU socket data and, based on the CPU socket data, generating a plurality of thread handles for a plurality of cores, ordering the plurality of thread handles in the thread queue for a first core of the plurality of cores, and saving the thread queue to a region of shared memory. | 2022-05-05 |
20220137965 | AUTOMATION METHOD, AUTOMATION SYSTEM AND AUTOMATION PROGRAM - The problem to be solved by the present invention is to implement an automation method which enhances the user experience in software operation. A computer processor executes: a building step for registering a macro that includes one or more information processing procedures and an execution order for those procedures; and an execution step for performing either a sequential execution, which runs the procedures in the execution order in response to the macro being specified, or an individual execution, which runs at least one of the procedures in response to that procedure being specified. | 2022-05-05 |
20220137966 | PROCESSOR AND OPERATING METHOD THEREOF - A processor and an operating method thereof are provided. The processor comprises a plurality of physical registers and a renaming circuit. The renaming circuit is coupled to the plurality of physical registers and is configured to receive and check an instruction sequence. When a current instruction of the instruction sequence comprises a move instruction, the renaming circuit assigns a first physical register, which was previously assigned to a source logical register of the current instruction, to a destination logical register of the current instruction. The first physical register is one of the plurality of physical registers (see the sketch after the table). | 2022-05-05 |
20220137967 | GRAPHICS PROCESSOR DATA ACCESS AND SHARING - Embodiments are generally directed to graphics processor data access and sharing. An embodiment of an apparatus includes a circuit element to produce a result in processing of an application; a load-store unit to receive the result and generate pre-fetch information for a cache utilizing the result; and a prefetch generator to produce prefetch addresses based at least in part on the pre-fetch information; wherein the load-store unit is to receive software assistance for prefetching, and wherein generation of the pre-fetch information is based at least in part on the software assistance. | 2022-05-05 |
20220137968 | Hardware Verification of Dynamically Generated Code - In an embodiment, dynamically-generated code may be supported in the system by ensuring that the code either remains executing within a predefined region of memory or exits to one of a set of valid exit addresses. Software embodiments are described in which the dynamically-generated code is scanned prior to permitting execution of the dynamically-generated code to ensure that various criteria are met, including exclusion of certain disallowed instructions and control of branch target addresses. Hardware embodiments are described in which the dynamically-generated code is permitted to execute but is monitored to ensure that the execution criteria are met. | 2022-05-05 |
20220137969 | MULTI-VENDOR ACCELERATOR MANAGEMENT PROTOCOL INTEROPERABILITY - An information handling system may include at least one central processing unit (CPU); and a special-purpose processing unit implementing a particular management interface that is one of a plurality of management interfaces. The information handling system may be configured to: receive management instructions for the special-purpose processing unit, wherein the management instructions are in accordance with a cross-platform management interface different from the particular management interface; translate the management instructions into translated instructions that are in accordance with the particular management interface; and perform management of the special-purpose processing unit by causing the special-purpose processing unit to execute the translated instructions. | 2022-05-05 |
20220137970 | LOOK-UP TABLE INITIALIZE - A digital data processor includes an instruction memory storing instructions specifying a data processing operation and a data operand field, an instruction decoder coupled to the instruction memory for recalling instructions from the instruction memory and determining the operation and the data operand, and an operational unit coupled to a data register file and to an instruction decoder to perform a data processing operation upon an operand corresponding to an instruction decoded by the instruction decoder and storing results of the data processing operation. The operational unit is configured to perform a table write in response to a look up table initialization instruction by duplicating at least one data element from a source data register to create duplicated data elements, and writing the duplicated data elements to a specified location in a specified number of at least one table and a corresponding location in at least one other table. | 2022-05-05 |
20220137971 | INSTRUCTION LENGTH BASED PARALLEL INSTRUCTION DEMARCATOR - Instruction length based parallel instruction demarcators and methods for parallel instruction demarcation are described, wherein an instruction sequence comprising a plurality of instruction syllables is received and stored at an instruction buffer. A length of instructions and at least one boundary are determined using one or more logic blocks arranged in a sequence. Additionally, using a controlling logic block, the sequence is demarcated into individual instructions. | 2022-05-05 |
20220137972 | PROCESSING DEVICE WITH A MICROBRANCH TARGET BUFFER FOR BRANCH PREDICTION USING LOOP ITERATION COUNT - An integrated circuit comprising instruction processing circuitry for processing a plurality of program instructions and instruction prediction circuitry. The instruction prediction circuitry comprises circuitry for detecting successive occurrences of a same program loop sequence of program instructions. The instruction prediction circuitry also comprises circuitry for predicting a number of iterations of the same program loop sequence of program instructions, in response to detecting, by the circuitry for detecting, that a second occurrence of the same program loop sequence of program instructions comprises a same number of iterations as a first occurrence of the same program loop sequence of program instructions. | 2022-05-05 |
20220137973 | Program Thread Selection Between a Plurality of Execution Pipelines - Techniques are disclosed relating to an apparatus that includes a plurality of execution pipelines including first and second execution pipelines, a shared circuit that is shared by the first and second execution pipelines, and a decode circuit. The first and second execution pipelines are configured to concurrently perform operations for respective instructions. The decode circuit is configured to assign a first program thread to the first execution pipeline and a second program thread to the second execution pipeline. In response to determining that respective instructions from the first and second program threads that utilize the shared circuit are concurrently available for dispatch, the decode circuit is further configured to select between the first program thread and the second program thread. | 2022-05-05 |
20220137974 | BRANCH DENSITY DETECTION FOR PREFETCHER - In one embodiment, a microprocessor comprises: first logic configured to dynamically adjust a maximum prefetch count based on a total count of predicted taken branches over a predetermined quantity of cache lines; and second logic configured to prefetch instructions based on the adjusted maximum prefetch count (see the sketch after the table). | 2022-05-05 |
20220137975 | Coprocessor Operation Bundling - In an embodiment, a processor includes a buffer in an interface unit. The buffer may be used to accumulate coprocessor instructions to be transmitted to a coprocessor. In an embodiment, the processor issues the coprocessor instructions to the buffer when ready to be issued to the coprocessor. The interface unit may accumulate the coprocessor instructions in the buffer, generating a bundle of instructions. The bundle may be closed based on various predetermined conditions and then the bundle may be transmitted to the coprocessor. If a sequence of coprocessor instructions appears consecutively in a program, the rate at which the instructions are provided to the coprocessor (on average) at least matches the rate at which the coprocessor consumes the instructions, in an embodiment. | 2022-05-05 |
20220137976 | Removal of Dependent Instructions from an Execution Pipeline - Techniques are disclosed relating to an apparatus, including a data storage circuit having a plurality of entries, and a load-store pipeline configured to allocate an entry in the data storage circuit in response to a determination that a first instruction includes an access to an external memory circuit. The apparatus further includes an execution pipeline configured to make a determination, while performing a second instruction and using the entry in the data storage circuit, that the second instruction uses a result of the first instruction, and cease performance of the second instruction in response to the determination. | 2022-05-05 |
20220137977 | PREDICTING LOAD-BASED CONTROL INDEPENDENT (CI) REGISTER DATA INDEPENDENT (DI) (CIRDI) INSTRUCTIONS AS CI MEMORY DATA DEPENDENT (DD) (CIMDD) INSTRUCTIONS FOR REPLAY IN SPECULATIVE MISPREDICTION RECOVERY IN A PROCESSOR - Predicting load-based control independent (CI), register data independent (DI) (CIRDI) instructions as CI memory data dependent (DD) (CIMDD) instructions for replay in speculative misprediction recovery in a processor. The processor predicts if a source of a load-based CIRDI instruction will be forwarded by a store-based instruction (i.e. “store-forwarded”). If a load-based CIRDI instruction is predicted as store-forwarded, the load-based CIRDI instruction is considered a CIMDD instruction and is replayed in misprediction recovery. If a load-based CIRDI instruction is not predicted as store-forwarded, the processor considers such load-based CIRDI instruction as a pending load-based CIRDI instruction. If this pending load-based CIRDI instruction is determined in execution to be store-forwarded, the instruction pipeline is flushed and the pending load-based CIRDI instruction is also replayed in misprediction recovery. If this pending load-based CIRDI instruction is not determined in execution to be store-forwarded, the pending load-based CIRDI instruction is not replayed in misprediction recovery. | 2022-05-05 |
20220137978 | METHOD AND APPARATUS FOR STATELESS PARALLEL PROCESSING OF TASKS AND WORKFLOWS - In a method for parallel processing of a data stream, a processing task is received to process the data stream, which includes a plurality of segments. A split operation is performed on the data stream to split the plurality of segments into N sub-streams, where N is a positive integer and each of the N sub-streams includes one or more of the segments. N sub-processing tasks are performed on the N sub-streams to generate N processed sub-streams. A merge operation is performed on the N processed sub-streams based on a merge buffer to generate a merged output data stream. The merge buffer includes an output iFIFO buffer and N sub-output iFIFO buffers coupled to the output iFIFO buffer. The merged output data stream is identical to the output data stream that would be generated if the processing task were applied directly to the data stream without the split operation (see the sketch after the table). | 2022-05-05 |
20220137979 | METHOD AND SYSTEM FOR CONTROLLING SYSTEM BOOT - A method and computer system are disclosed for controlling a system boot of the computer system. Both involve determining that a chassis of the computer system was opened, determining whether the opening of the chassis was authorized, and controlling the system boot of the computer system based on whether the opening of the chassis was authorized. | 2022-05-05 |
20220137980 | CONFIGURABLE MEDIA STRUCTURE - Systems, apparatuses, and methods related to configurable media structure are described. A memory device can be configured to boot up in a variety of configurations. The variety of configurations can include using the memory device for persistent memory storage, for non-persistent memory storage, etc. For instance, an apparatus can include a first memory array and a second memory array. The apparatus can include a memory controller coupled to the first memory array and the second memory array. The second memory array can be configured to store at least two boot images. The first memory array can be configured to operate based on which of the at least two boot images is used. | 2022-05-05 |
20220137981 | SYSTEM FOR AUTOMATICALLY GENERATING ELECTRONIC ARTIFACTS USING EXTENDED FUNCTIONALITY - A system is provided for automatically generating electronic artifacts using extended functionality. In particular, the system may use a template-based process to automatically generate artifacts based on a defined set of parameters and/or variables. The system may further use one or more plugins which may provide extended functionality with respect to the artifact generation process. Accordingly, the artifact generation process may include initializing a parameter list based on application parameters and/or plugin parameters, processing the parameters, generating variables based on the parameters, and replacing variables in scheme template files with appropriate values (e.g., user supplied or plugin generated values) to output an artifact file to a predetermined location. In this way, the system provides a robust and efficient way to automatically generate artifacts. | 2022-05-05 |
20220137982 | SYSTEMS AND METHODS FOR ACHIEVING FASTER BOOT TIMES USING BIOS ATTRIBUTE MITIGATION - A BIOS may include a plurality of BIOS attributes associated with the information handling system, each attribute of the plurality of BIOS attributes having metadata defining a priority for such attribute. The BIOS may also include an attribute engine configured to execute a preboot process prior to booting of an operating system of the information handling system, wherein the preboot process is configured to identify boot-critical attributes of the plurality of BIOS attributes based on the metadata and load the boot-critical attributes. The attribute engine may also execute a steady-state process after booting of the operating system of the information handling system, wherein the steady-state process is configured to load attributes of the plurality of BIOS attributes other than the boot-critical attributes in an order based on the metadata. | 2022-05-05 |
20220137983 | REAL TIME PERIPHERAL CONFIGURATION VIA HARDWARE REQUEST - A method for configuring a target peripheral via a hardware request is provided. The method includes receiving a hardware request from one of a plurality of initiator peripherals, receiving a configuration selection from the requesting initiator peripheral, and selecting a configuration from a plurality of hardware memory locations based at least in part on the configuration selection. The method also includes configuring one or more signal processing modules within the target peripheral based at least in part on the configuration, receiving a signal from an electronic device, and processing the signal from the electronic device with the one or more signal processing modules. The method further includes transmitting a processed signal from the signal processing modules to the requesting initiator peripheral. | 2022-05-05 |
20220137984 | Efficient Hibernation Apparatus and Method for Digital Devices - Hibernating an Android device includes freezing one or more tasks, processes, drives, data and/or files of open applications, or other RAM data, and creating a hibernation image. A resume image is generated based on the hibernation image. The resume image is stored to disk along with one or more hibernation parameters that are configured to guide Linux to specific memory locations of certain resume image data. Power to both the processor and the RAM storage devices of the Android device is then cut off. | 2022-05-05 |
20220137985 | COMPUTATIONAL CONFIGURATION AND MULTI-LAYER CLUSTER ANALYSIS - Systems and methods are provided for computationally configuring computing devices and performing multi-layer cluster analysis. For example, the system can identify multiple layers of clusters of devices (e.g., shared hardware configuration, shared application configuration, number of applications, etc.) in a large scale infrastructure environment automatically. For each layer of the clusters of devices, parameters of these devices are provided to a machine learning model to produce an objective function (e.g., minimum number of devices, utilization under 80%, etc.), whose output can be provided to a datacenter operator or other user in the large scale infrastructure environment so they can make further configuration changes to the devices in each cluster. | 2022-05-05 |
20220137986 | APPLICATION-BASED DYNAMIC HETEROGENEOUS MANY-CORE SYSTEMS AND METHODS - A method for dynamically configuring multiple processors based on needs of applications includes receiving, from an application, an acceleration request message including a task to be accelerated. The method further includes determining a type of the task and searching a database of available accelerators to dynamically select a first accelerator based on the type of the task. The method further includes sending the acceleration request message to a first acceleration interface located at a configurable processing circuit. The first acceleration interface sends the acceleration request message to a first accelerator, and the first accelerator accelerates the task upon receipt of the acceleration request message. | 2022-05-05 |
20220137987 | UPDATED SHARED LIBRARY RELOADING WITHOUT STOPPING THE EXECUTION OF AN APPLICATION - Techniques include executing a software program having a function call to a shared library and reloading the shared library without stopping execution of the software program. A global offset table (GOT) is updated responsive to resolving a link address associated with the function call. An entry in the GOT includes a link address field, an index field, and a resolved field; the updating includes updating the index field with an affirmative value and marking the resolved field with an affirmative flag for the entry in the GOT. Responsive to reloading the shared library, the entry in the GOT is found having the affirmative value in the index field and the affirmative flag in the resolved field. An address value in the link address field is returned for the entry having the affirmative value in the index field, responsive to a subsequent execution of the function call to the shared library (see the sketch after the table). | 2022-05-05 |
20220137988 | VIRTUALIZATION FOR WEB-BASED APPLICATION WORKLOADS - In virtualized web-based application workloads, web-based application requests from clients are intercepted. The type of request is determined, and results for the requested web-based application are replaced with redirected output using application code that is separate from the client. | 2022-05-05 |
20220137989 | SYSTEMS AND METHODS FOR ZERO-FOOTPRINT AND SAFE EXECUTION OF QUANTUM COMPUTING PROGRAMS - Systems and methods for zero-footprint and safe execution of quantum computing programs are disclosed. According to one embodiment, in an electronic device comprising at least one computer processor, a method for cloud-based execution of quantum-computing programs may include: (1) receiving, from a user interface on a client device, a serialized file comprising a domain, an application, and an algorithm; (2) receiving, from the user interface, problem data and an identification of a quantum computing backend for executing the problem data; (3) instantiating a quantum program for execution and communicating the quantum program and the problem data to the quantum computing backend for execution; (4) receiving, from the quantum computing backend, an output of the execution; and (5) communicating the output to the user interface on the client device. | 2022-05-05 |
20220137990 | UNIFIED INTELLIGENT EDITOR TO CONSOLIDATE ACTIONS IN A WORKSPACE - A computing device includes a display, and a processor coupled to the display. The processor is configured to monitor user input for a template keyword that matches with one or more templates, and display on the display one or more application service options in response to the template keyword matching the one or more of the templates. Each application service option corresponds to an action that can be performed. The processor provides template content data to a server, with the template content data defining the action corresponding to the template associated with the application service option selected by a user of the computing device. | 2022-05-05 |
20220137991 | ON-DEMAND APPLICATIONS - A virtual server includes at least one processor to create a single composited layered image comprising an operating system layer and an application shortcut that includes a representation of an application while not including the application. The single composited layered image is provided as a virtual session to a client computing device. An application layer is mounted to the single composited layered image in response to a user of the client computing device interacting with the application shortcut, with the application layer including the application. | 2022-05-05 |
20220137992 | VIRTUAL AGENT TEAM - The present invention relates to a system for providing a team of at least two virtual agents, the system comprising at least one central processing unit, at least one non-transitory computer readable storage medium, at least one input device, and at least one output device. The virtual agents are distinctively presented to a user by the at least one output device, each of the at least two virtual agents is assigned to a different field of interest, and each is configured to provide visual and/or audio recommendations for its assigned field of interest to the user on the basis of user-specific data and the current state of the user. At least two virtual agents of the team, assigned to at least two different fields of interest, provide recommendations to the user in the current state, so that the user receives bidirectional, multidirectional, or even contradictory recommendations from at least two different fields of interest. | 2022-05-05 |
20220137993 | System and Methods for Live Help - A live help system provides an intuitive display of help information on a user's graphical user interface. A request is received from a client device for help, and a live help provider interface is initiated at a live help location. Data is acquired regarding a user's location, including data on external devices in the user's location. Indicators are provided to allow the live help provider to point to or otherwise indicate items on the user interface or outside of the user interface. Live help input is captured at the live help provider interface. Instructions are then transmitted to the display of the client device to display live help input, as though the agent were present and interacting with or indicating items on the screen or off the screen. | 2022-05-05 |
20220137994 | INSTANCES OF JUST-IN-TIME (JIT) COMPILATION OF CODE USING DIFFERENT COMPILATION SETTINGS - In some examples, just-in-time (JIT) control instructions upon execution cause a system to initiate a plurality of instances of JIT compilation of a first code called by a program, where the initiating of the plurality of instances of the JIT compilation of the first code is under control of the JIT control instructions that are outside the program, and the plurality of instances of the JIT compilation of the first code use respective different compilation settings, and are to produce respective JIT compiled instances of the first code. | 2022-05-05 |
20220137995 | PROVIDING CLOCK TIMES TO VIRTUAL DEVICES - Providing clock times to virtual devices. In one embodiment, a method includes identifying a real-time clock device of a host computing device. The host computing device comprises a hypervisor and a virtual machine. The method also includes determining that a virtual device used by the virtual machine will use clock times obtained from the real-time clock device. The method further includes obtaining, by a processing device of the host computing device, a current clock time from the real-time clock device of the host computing device. The method further includes providing the current clock time to the virtual device. | 2022-05-05 |
20220137996 | Systems and Methods Involving Aspects of Hardware Virtualization such as Separation Kernel Hypervisors, Hypervisors, Hypervisor Guest Context, Hypervisor Context, Anti-Fingerprinting and/or Other Features - Systems, methods, computer readable media and articles of manufacture consistent with innovations herein are directed to computer virtualization, computer security and/or hypervisor fingerprinting. According to some illustrative implementations, innovations herein may utilize and/or involve a separation kernel hypervisor which may include the use of a guest operating system virtual machine protection domain, a virtualization assistance layer, and/or a CPU ID instruction handler (which may be proximate in temporal and/or spatial locality to malicious code, but isolated from it). The CPU ID instruction handler may perform processing, inter alia, to return configurable values different from the actual values for the physical hardware. The virtualization assistance layer may further contain virtual devices, which when probed by guest operating system code, return the same values as their physical counterparts. In addition, the virtualization assistance layer may vary its internal I/O and memory addresses in a configurable manner. | 2022-05-05 |
20220137997 | PLATFORM UPDATE USING SELF-INSTALLING CONTAINERIZED MICROSERVICE - A system and method for self-installing a container platform are disclosed. A method includes implementing an active version and a passive version of the container platform, wherein the active version actively runs on a computing infrastructure and the passive version is maintained in a storage area; loading an updater container, from a container registry containing updates to the container platform, into the container engine; and running the updater container in the container engine, including: mapping the passive version from the storage area to the updater container, writing update data to the passive version, installing the passive version as a new active version, and rebooting the host operating system. | 2022-05-05 |
20220137998 | STORAGE VIRTUALIZATION DEVICE SUPPORTING VIRTUAL MACHINE, OPERATION METHOD THEREOF, AND OPERATION METHOD OF SYSTEM HAVING THE SAME - Disclosed is an operation method of a storage virtualization device which communicates with a host device and a storage device set, includes a first submission queue (SQ) and a first completion queue (CQ), and supports a first virtual machine executable by the host device. The method includes fetching a first command of a first virtual submission queue (VSQ) of the first virtual machine, distributing the first command thus fetched to the first SQ, providing the first command of the first SQ to the storage device set, receiving, from the storage device set, a first completion indicating that the first command is processed, wherein the first completion is written in the first CQ, distributing the first completion of the first CQ to a virtualization layer, and writing the first completion thus distributed to a first virtual completion queue (VCQ) of the first virtual machine. | 2022-05-05 |
20220137999 | COMPUTING DEVICE WITH ETHERNET CONNECTIVITY FOR VIRTUAL MACHINES ON SEVERAL SYSTEMS ON A CHIP - A computing device, in particular for automotive applications, includes Ethernet connectivity for virtual machines on several systems on a chip. A vehicle comprises such a computing device. The computing device comprises two or more systems on a chip, each system on a chip comprising one or more virtual machines, wherein one system on a chip provides a connection to an Ethernet network, and wherein the two or more systems on a chip are connected by a switch. The virtual machines are connected via a virtual Ethernet link. For this purpose, each system on a chip comprises an instance of a distributed virtual switch, which is configured to provide a virtualized access to the Ethernet network for the virtual machines of the respective system on a chip. | 2022-05-05 |
20220138000 | COMPUTING DEVICE WITH ETHERNET CONNECTIVITY FOR VIRTUAL MACHINES ON SEVERAL SYSTEMS ON A CHIP THAT ARE CONNECTED WITH POINT-TO-POINT DATA LINKS - A computing device, in particular for automotive applications, provides Ethernet connectivity for virtual machines on several systems on a chip that are connected with point-to-point data links. The computing device includes two or more systems on a chip. One system on a chip is a root system on a chip, and the other systems on a chip are end point systems on a chip that are connected to the root system on a chip with point-to-point data links. Each system on a chip includes one or more virtual machines, and one system on a chip provides a connection to an Ethernet network. The virtual machines are connected via a virtual Ethernet link. For this purpose, each system on a chip comprises an instance of a distributed virtual switch, which is configured to provide virtualized access to the Ethernet network for the virtual machines of the respective system on a chip. | 2022-05-05 |
20220138001 | MEASURING HOST UTILIZATION IN A DATACENTER - Various examples are disclosed for generating heatmaps and plotting utilization of hosts in a datacenter environment. A collector virtual machine can rove the datacenter and collect utilization data. The utilization data can be plotted on a heatmap to illustrate utilization hotspots in the datacenter environment. | 2022-05-05 |
20220138002 | PIPELINED MATRIX MULTIPLICATION AT A GRAPHICS PROCESSING UNIT - A graphics processing unit (GPU) schedules recurrent matrix multiplication operations at different subsets of CUs of the GPU. The GPU includes a scheduler that receives sets of recurrent matrix multiplication operations, such as multiplication operations associated with a recurrent neural network (RNN). The multiple operations associated with, for example, an RNN layer are fused into a single kernel, which is scheduled by the scheduler such that one work group is assigned per compute unit, thus assigning different ones of the recurrent matrix multiplication operations to different subsets of the CUs of the GPU. In addition, via software synchronization of the different workgroups, the GPU pipelines the assigned matrix multiplication operations so that each subset of CUs provides corresponding multiplication results to a different subset, and so that each subset of CUs executes at least a portion of the multiplication operations concurrently. | 2022-05-05 |
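A loose, pure-Python simulation of the pipelining idea may help: recurrent matrix multiplications are split across stages, each stage standing in for a subset of CUs running one fused workgroup and handing its result to the next. Real scheduling, kernel fusion, and workgroup synchronization are GPU-side concerns omitted here.

```python
# Toy pipeline: stage i consumes the output of stage i - 1, mimicking
# CU subsets that pass multiplication results downstream.

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def pipelined_rnn_steps(x, weight_per_stage):
    for stage, w in enumerate(weight_per_stage):
        x = matmul(w, x)
        print(f"CU subset {stage} produced {x}")
    return x

identity = [[1, 0], [0, 1]]
double = [[2, 0], [0, 2]]
pipelined_rnn_steps([[1, 2], [3, 4]], [identity, double, double])
```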
20220138003 | AUTOMATIC LOCALIZATION OF ACCELERATION IN EDGE COMPUTING ENVIRONMENTS - Methods, apparatus, systems and machine-readable storage media of an edge computing device which is enabled to access and select the use of local or remote acceleration resources for edge computing processing are disclosed. In an example, an edge computing device obtains first telemetry information that indicates availability of local acceleration circuitry to execute a function, and obtains second telemetry information that indicates availability of a remote acceleration function to execute the function. An estimated time (and cost or other identifiable or estimable considerations) to execute the function at the respective location is identified. The use of the local acceleration circuitry or the remote acceleration resource is selected based on the estimated time and other appropriate factors in relation to a service level agreement. | 2022-05-05 |
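The selection step reduces to a small decision rule, sketched below under stated assumptions — the telemetry fields and SLA structure are illustrative, not the application's actual interfaces:

```python
# Estimate execution time per location from telemetry and pick the
# feasible candidate that best satisfies the service level agreement.

def select_acceleration(local_telemetry, remote_telemetry, sla_deadline_ms):
    candidates = {
        "local": local_telemetry["estimated_ms"],
        "remote": remote_telemetry["estimated_ms"] + remote_telemetry["network_ms"],
    }
    # Keep only candidates that meet the SLA deadline.
    feasible = {k: v for k, v in candidates.items() if v <= sla_deadline_ms}
    if not feasible:
        return None   # no placement can meet the SLA
    return min(feasible, key=feasible.get)

print(select_acceleration({"estimated_ms": 40},
                          {"estimated_ms": 10, "network_ms": 15},
                          sla_deadline_ms=50))   # -> "remote"
```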
20220138004 | SYSTEM AND METHOD FOR AUTOMATED PRODUCTION AND DEPLOYMENT OF PACKAGED AI SOLUTIONS - A data processing method and system for automated construction, resource provisioning, data processing, feature generation, architecture selection, pipeline configuration, hyperparameter optimization, evaluation, execution, production, and deployment of machine learning models in an artificial intelligence solution development lifecycle. In accordance with various embodiments, a graphical user interface of an end user application is configured to provide a pre-configured template comprising an automated ML framework for data import, data preparation, data transformation, feature generation, algorithm selection, hyperparameter tuning, model training, evaluation, interpretation, and deployment to an end user. A configurable workflow is configured to enable a user to assemble one or more transmissible AI build/products containing one or more pipelines and/or ML models for executing one or more AI solutions. Embodiments of the present disclosure may enable full serialization and versioning of all entities relating to an AI build/product for deployment within an enterprise architecture. | 2022-05-05 |
20220138005 | DISTRIBUTED COMPUTING SYSTEM AND METHOD OF OPERATION THEREOF - There is provided a distributed computation system that establishes a consensus related to a computational value of a computational task, wherein the distributed computation system includes a plurality of computing nodes. The distributed computation system distributes the computational task to the plurality of computing nodes; each of a first set of computing nodes, from the plurality of computing nodes, performs a partial evaluation of the computational task, wherein the partial evaluations of the computational task are stored in a ledger arrangement; each of a second set of computing nodes from the plurality of computing nodes generates a computational value corresponding to each of the partial evaluations stored in the ledger arrangement and determines a correctness proof of each of the computational values; and a third set of computing nodes from the plurality of computing nodes validates the correctness proof of each of the computational values to establish consensus related to the computational values. | 2022-05-05 |
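A compact sketch of the three node roles, assuming a SHA-256 hash as a stand-in for a real correctness proof: the first set writes partial evaluations to a ledger, the second set derives values and proofs, and the third set validates the proofs to reach consensus.

```python
import hashlib

ledger = []

def partial_evaluate(node_id, task_chunk):
    result = sum(task_chunk)                  # toy partial evaluation
    ledger.append({"node": node_id, "partial": result})

def prove(entry):
    value = entry["partial"]
    proof = hashlib.sha256(str(value).encode()).hexdigest()
    return value, proof

def validate(value, proof):
    return hashlib.sha256(str(value).encode()).hexdigest() == proof

# First set of nodes evaluates chunks of the task.
partial_evaluate("n1", [1, 2, 3])
partial_evaluate("n2", [4, 5])

# Second set generates values and proofs; third set validates them.
proven = [prove(e) for e in ledger]
consensus = all(validate(v, p) for v, p in proven)
print("consensus on", sum(v for v, _ in proven), "=", consensus)
```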
20220138006 | DISTRIBUTED STREAMING SYSTEM SUPPORTING REAL-TIME SLIDING WINDOWS - In various embodiments, a process for providing a distributed streaming system supporting real-time sliding windows includes receiving a stream of events at a plurality of distributed nodes and routing the events into topic groupings. The process includes using one or more events in at least one of the topic groupings to determine one or more metrics of events with at least one window and an event reservoir including by: tracking, in a volatile memory of the event reservoir, beginning and ending events within the at least one window; and tracking, in a persistent storage of the event reservoir, events associated with tasks assigned to a respective node. The process includes updating the one or more metrics based on one or more previous values of the one or more metrics as a new event is added or an existing event is expired from the at least one window. | 2022-05-05 |
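The key mechanism here is the incremental update: the metric is adjusted as events enter and expire rather than rescanned per window. A minimal sketch, with a deque playing the role of the reservoir's in-memory begin/end tracking and persistence omitted:

```python
from collections import deque

class SlidingWindowSum:
    def __init__(self, window_ms: int):
        self.window_ms = window_ms
        self.events = deque()   # (timestamp, value) pairs inside the window
        self.total = 0          # metric maintained incrementally

    def add(self, ts: int, value: float):
        self.events.append((ts, value))
        self.total += value
        # Expire events that have slid out of the window.
        while self.events and self.events[0][0] <= ts - self.window_ms:
            _, old = self.events.popleft()
            self.total -= old

w = SlidingWindowSum(window_ms=1000)
for ts, v in [(0, 5), (400, 3), (1200, 2)]:
    w.add(ts, v)
    print(ts, w.total)   # at ts=1200 the event from ts=0 has expired
```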
20220138007 | METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR ADJUSTING COMPUTING LOAD - Embodiments of the present disclosure relate to a method, an electronic device, and a computer program product for adjusting a computing load. The method in an illustrative embodiment includes: determining a total computing power demand of at least one user device that will be switched, due to movement, to being provided a computing service by a computing node; determining an available computing power of the computing node; and if the available computing power is unable to meet the total computing power demand, by adjusting a computing load of the computing node, adjusting the available computing power before the at least one user device is switched to being provided the computing service by the computing node, so as to meet the total computing power demand. | 2022-05-05 |
20220138008 | METHODS AND APPARATUS TO MANAGE RESOURCES IN A HYBRID WORKLOAD DOMAIN - Methods and apparatus to manage resources in a hybrid workload domain are disclosed. An example apparatus includes a usage monitor to monitor resource utilization of a workload allocated within a hybrid workload domain, and an orchestrator to: determine a first type of the workload domain in the hybrid workload domain; in response to determining that under-utilized resources of the first type are not available, identify resources of a second type that are available; convert the resources from the first type to the second type; and allocate the converted resources to the workload. | 2022-05-05 |
20220138009 | INFORMATION PROCESSING APPARATUS, METHOD OF CONTROLLING INFORMATION PROCESSING APPARATUS, AND PROGRAM FOR CONTROLLING INFORMATION PROCESSING APPARATUS - A method, an apparatus and a medium storing a program for controlling an information processing apparatus that manages a plurality of processing nodes each including a buffer and a processor that processes data held in the buffer are disclosed. The method includes predicting a boundary between processed data and unprocessed data in the buffer at a predicted reaching time at which a resource load of a certain processing node during data processing will reach a predetermined amount; and transferring, in reverse processing order toward the boundary, the unprocessed data to another processing node that will take over the data processing. | 2022-05-05 |
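A sketch of the takeover step under a simplifying assumption of a constant processing rate: predict where the processed/unprocessed boundary will sit at overload time, then ship the unprocessed tail to a peer in reverse order, walking back toward the boundary.

```python
def predict_boundary(buffer_len, processed, rate, seconds_until_overload):
    # Assume the node keeps processing at `rate` items/sec until overload.
    return min(buffer_len, processed + int(rate * seconds_until_overload))

def transfer_unprocessed(buffer, boundary, send_to_peer):
    # The data farthest from being processed moves first.
    for i in range(len(buffer) - 1, boundary - 1, -1):
        send_to_peer(buffer[i])

buf = ["d0", "d1", "d2", "d3", "d4", "d5"]
b = predict_boundary(len(buf), processed=2, rate=1.0, seconds_until_overload=2)
transfer_unprocessed(buf, b, send_to_peer=print)   # prints d5, then d4
```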
20220138010 | QUIESCENT STATE-BASED RECLAIMING STRATEGY FOR PROGRESSIVE CHUNKED QUEUE - A system includes a memory for storing a plurality of memory chunks and a processor for executing a plurality of producer threads. A producer thread increases a producer sequence and determines (i) a first chunk identifier associated with the producer sequence of an identified memory chunk and (ii) a position from the producer sequence to offer an item. The producer thread determines a second chunk identifier of a last created/appended memory chunk and determines whether the second chunk identifier is valid (e.g., matches the first chunk identifier). The producer thread reads a current memory chunk and determines whether a third chunk identifier associated with the current memory chunk is valid (e.g., matches the first chunk identifier). The producer thread writes the item into the identified memory chunk at the position. | 2022-05-05 |
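A simplified sketch of the producer path: the producer claims a slot by bumping a shared sequence, derives the chunk identifier and in-chunk position from it, and writes only into a chunk whose identifier matches the claimed one. A lock stands in for the atomic operations of the real lock-free design, and reclamation details are omitted.

```python
import threading

CHUNK_SIZE = 4

class ChunkedQueue:
    def __init__(self):
        self.lock = threading.Lock()            # stands in for atomics
        self.producer_sequence = -1
        self.chunks = {0: [None] * CHUNK_SIZE}  # chunk id -> slots

    def offer(self, item):
        with self.lock:
            self.producer_sequence += 1
            seq = self.producer_sequence
        chunk_id, position = divmod(seq, CHUNK_SIZE)
        with self.lock:
            # Validate that a chunk with this identifier exists,
            # appending a new one if not, then write at the position.
            chunk = self.chunks.setdefault(chunk_id, [None] * CHUNK_SIZE)
            chunk[position] = item

q = ChunkedQueue()
for i in range(6):
    q.offer(i)
print(q.chunks)   # items 0-3 in chunk 0, items 4-5 in chunk 1
```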
20220138011 | NON-TRANSITORY COMPUTER-READABLE MEDIUM, MANAGEMENT APPARATUS, RELAY APPARATUS AND DISPLAY CONTROL METHOD - A non-transitory computer-readable medium storing a display control program readable by a computer of a relay apparatus of a terminal management system having a management apparatus, the relay apparatus, and a terminal apparatus, the display control program, when executed by the computer, causing the relay apparatus to perform: obtaining, from the management apparatus, execution timing information indicating an execution timing of a relay application configured to relay data between the management apparatus and the terminal apparatus via the relay apparatus, the relay application being activated by a user's activation operation performed on the relay apparatus; and based on the execution timing information obtained, setting a reservation for executing display processing of displaying, on the relay apparatus, an activation notification image prompting the user to activate the relay application when the execution timing comes. | 2022-05-05 |
20220138012 | Computing Resource Scheduling Method, Scheduler, Internet of Things System, and Computer Readable Medium - Various embodiments of the teachings herein include a resource scheduling method comprising: receiving data to be processed collected by a sensor in an Internet of Things system; determining a processing priority of the data to be processed; predicting, according to the determined processing priority, a computing resource amount and duration required for processing the data to be processed; and scheduling a computing resource of an edge computing device in the IoT system according to the predicted computing resource amount and duration to process the data to be processed. | 2022-05-05 |
20220138013 | WORKLOAD COMPLIANCE GOVERNOR SYSTEM - A workload compliance governor system includes a management system coupled to a computing system. A workload compliance governor subsystem in the computing system receives a workload performance request associated with a workload, exchanges hardware compose communications with the management system to compose hardware components for the workload, and receives back an identification of hardware components. The workload compliance governor subsystem then determines that the identified hardware components satisfy hardware compliance requirements for the workload, and configures the identified hardware components in the computing system based on the software compliance requirements for the workload in order to cause those identified hardware components to provide an operating system and at least one application that operate to perform the workload. | 2022-05-05 |
20220138014 | INTRA-FOOTPRINT COMPUTING CLUSTER BRING-UP - Methods, systems and computer program products for intra-footprint computing cluster bring-up within a virtual private cloud. A network connection is established between an initiating module and a virtual private cloud (VPC). An initiating module allocates resources of the virtual private cloud including a plurality of nodes that correspond to members of a to-be-configured computing cluster. A cluster management module having coded therein an intended computing cluster configuration is configured into at least one of the plurality of nodes. The members of the to-be-configured computing cluster interoperate from within the VPC to accomplish a set of computing cluster bring-up operations that configure the plurality of members into the intended computing cluster configuration. Execution of bring-up instructions of the management module serves to allocate networking IP addresses of the virtual private cloud. The allocated networking IP addresses of the virtual private cloud are assigned to networking interfaces of the plurality of nodes. | 2022-05-05 |
20220138015 | SHARED ENTERPRISE CLOUD - A cloud-enterprise resource management system enables sharing of computing resources belonging to different datacenters by one or more clients of a resource pooling and sharing service. Each datacenter includes a first partition of computing resources and a second partition of computing resources. The first partition is designated as reserved for use by an enterprise operating the datacenter. The second partition is designated as available for use by one or more clients of the resource pooling and sharing service. A workload manager in each datacenter predicts workload and transfers (i) a first computing resource from the first partition to the second partition when the predicted workload is below a first threshold and (ii) a second computing resource from the second partition to the first partition when the predicted workload is above a second threshold. | 2022-05-05 |
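The workload manager's transfer rule is a simple two-threshold policy, sketched here with lists standing in for the partitions:

```python
# Below the first threshold, lend a reserved resource to the shared pool;
# above the second, reclaim a shared resource for the enterprise partition.

def rebalance(reserved, shared, predicted_load, low, high):
    if predicted_load < low and reserved:
        shared.append(reserved.pop())     # lend to pooling clients
    elif predicted_load > high and shared:
        reserved.append(shared.pop())     # reclaim for the enterprise

reserved, shared = ["r1", "r2", "r3"], ["s1"]
rebalance(reserved, shared, predicted_load=0.2, low=0.3, high=0.8)
print(reserved, shared)   # one reserved resource moved to the shared pool
```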
20220138016 | METHODS AND APPARATUS TO STORE AND ACCESS MULTI-DIMENSIONAL DATA - Methods, apparatus, systems and articles of manufacture to store and access multi-dimensional data are disclosed. An example apparatus includes a memory; a memory allocator to allocate part of the memory for storage of a multi-dimensional data object; and a storage element organizer to: separate the multi-dimensional data into storage elements; store the storage elements in the memory, the stored storage elements being selectively executable; store starting memory address locations for the storage elements in an array in the memory, the array to facilitate selectable access of data of the stored elements; store a pointer for the array into the memory. | 2022-05-05 |
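A toy sketch of the storage layout: the multi-dimensional object is split into per-row storage elements, their starting addresses are kept in an array, and a pointer to that array is what callers hold. List indices stand in for raw memory addresses here.

```python
memory = []   # flat toy memory: each append returns a new "address"

def store(block):
    memory.append(block)
    return len(memory) - 1          # address of the stored block

def store_multidimensional(data_2d):
    element_addresses = [store(row) for row in data_2d]   # storage elements
    return store(element_addresses)                       # pointer to array

def load_element(array_pointer, index):
    # Selectable access: follow the pointer, then one element address.
    return memory[memory[array_pointer][index]]

ptr = store_multidimensional([[1, 2], [3, 4], [5, 6]])
print(load_element(ptr, 2))   # -> [5, 6], fetched without touching rows 0-1
```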
20220138017 | RESOURCE MANAGEMENT DEVICE AND RESOURCE MANAGEMENT METHOD - The processing performance of an entire system is enhanced by efficiently using CPU resources shared by a plurality of guests. A server | 2022-05-05 |
20220138018 | CROSSBOW DE-COCKING MECHANISM - A crossbow de-cocking mechanism may include a trigger mechanism, a trigger latch mechanism and a winch assembly. A first rotational input to the winch may move a trigger latch to disengage the trigger mechanism. A second rotational input to the trigger latch, opposite to the first, may move the trigger mechanism to move a crossbow bowstring from a cocked position to an un-cocked position. | 2022-05-05 |
20220138019 | METHOD AND SYSTEM FOR PERFORMING WORKLOADS IN A DATA CLUSTER - A method for performing workloads is performed by a recommendation engine. The method includes obtaining, by the recommendation engine, a workload; generating workload features associated with the workload; obtaining hardware specification information associated with hardware of data nodes of a data cluster; determining compliant hardware configurations of the data cluster using the workload features, the hardware specification information, and a first machine learning model; generating performance predictions associated with the compliant hardware configurations using the workload features, a portion of the hardware specification information associated with the compliant hardware configurations, and a second machine learning model; generating, using the performance predictions, a recommendation that specifies a hardware configuration of the compliant hardware configurations; sending the recommendation to the data cluster; and initiating the performance of the workload on the hardware configuration. | 2022-05-05 |
20220138020 | COMPUTING ARCHITECTURE FOR OPTIMALLY EXECUTING SERVICE REQUESTS BASED ON NODE ABILITY AND INTEREST CONFIGURATION - The present disclosure relates to a system for executing a plurality of service requests (SRs) from corresponding plurality of user computing devices, the system comprising a distributed compute (DC) that forms part of a distributed network, the DC having at least one processor that executes one or more routines stored in an operatively coupled memory to enable receipt of the plurality of service requests in a heterogeneous interaction pool, wherein the DC further comprises a system state manager (SSM) that, based on at least one common attribute of each SR in the interaction pool, identifies an appropriate node (N) from one or more available nodes that has an ability and the attribute-based interest configuration to execute the respective SR, and transmits the respective SR to the identified node (N) for execution. | 2022-05-05 |
20220138021 | COMMUNICATIONS FOR WORKLOADS - Examples described herein relate to a sender process having a capability to select from use of a plurality of connections to at least one target process, wherein the plurality of connections to at least one target process comprise a connection for the sender process and/or one or more connections allocated per job. In some examples, the connection for the sender process comprises a datagram transport for message transfers. In some examples, the one or more connections allocated per job utilize a kernel bypass datagram transport for message transfers. In some examples, the one or more connections allocated per job comprise a connection oriented transport and wherein multiple remote direct memory access (RDMA) write operations for a plurality of processes are to be multiplexed using the connection oriented transport. | 2022-05-05 |
20220138022 | Compact Synchronization in Managed Runtimes - A computer including multiple processors and memory implements a managed runtime providing a synchronization application programming interface (API) for threads that perform synchronized accesses to shared objects. A standardized header of objects includes a memory word storing an object identifier. To lock the object for synchronized access, the memory word may be converted to store the tail of a linked list of first-in-first-out synchronization structures for threads waiting to acquire the lock, with the object identifier relocated to the list structure. The list structure may further include a stack of threads waiting on events related to the object, with the synchronization API additionally providing wait, notify and related synchronization operations. Upon determining that no threads hold or desire to hold the lock for the object and that no threads are waiting on events related to the object, the memory word may be restored to contain the object identifier. | 2022-05-05 |
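The compact-header idea fits in a few lines: one word either holds the object identifier directly, or points to a waiter-list structure that has absorbed the identifier, and the word is restored when the list drains. A minimal sketch, with a dict as the header and hypothetical class names:

```python
class WaiterList:
    def __init__(self, object_id):
        self.object_id = object_id   # identifier relocated into the list
        self.waiters = []            # FIFO: current holder plus waiters

def lock(header, thread_name):
    if isinstance(header["word"], int):
        # Uncontended: convert the word from object id to list tail.
        header["word"] = WaiterList(header["word"])
    header["word"].waiters.append(thread_name)

def unlock(header):
    tail = header["word"]
    tail.waiters.pop(0)
    if not tail.waiters:
        header["word"] = tail.object_id   # restore the plain identifier

obj = {"word": 0xBEEF}    # header word currently stores the object id
lock(obj, "t1"); lock(obj, "t2")
unlock(obj); unlock(obj)
print(hex(obj["word"]))   # -> 0xbeef, the identifier is back in place
```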
20220138023 | MANAGING ALERT MESSAGES FOR APPLICATIONS AND ACCESS PERMISSIONS - Managing alert messages and access permissions for applications. In one embodiment, a method is provided. The method includes determining that one or more errors have occurred in a set of applications executing in a set of containers. The method also includes identifying a set of users in view of one or more of the set of containers and a set of files for the set of applications. The method further includes sending, via a set of messaging systems, a set of messages to the set of users to indicate that the one or more errors have occurred in the set of applications. | 2022-05-05 |
20220138024 | EVENT TRANSLATION FOR BUSINESS OBJECTS - Methods and systems for translating events for use by business objects. In one embodiment, a method is provided that includes receiving a schema via a discovery function. The schema may correspond to a data source within a business object environment and may describe properties of the data source. A business object may be created within the business object environment and may include an inbox to receive events and a translation function. An event may be received from the event source at the inbox. The event may be translated according to the translation function into a business object event that corresponds to a property of the data source. The business object event may be provided to at least one business object within the business object environment. | 2022-05-05 |
20220138025 | TECHNOLOGIES FOR PROVIDING EFFICIENT REPROVISIONING IN AN ACCELERATOR DEVICE - Technologies for providing efficient reprovisioning in an accelerator device include an accelerator sled. The accelerator sled includes a memory and an accelerator device coupled to the memory. The accelerator device is to configure itself with a first bit stream to establish a first kernel, execute the first kernel to produce output data, write the output data to the memory, configure itself with a second bit stream to establish a second kernel, and execute the second kernel with the output data in the memory used as input data to the second kernel. Other embodiments are also described and claimed. | 2022-05-05 |
20220138026 | SHARING DATA STRUCTURE VALUES BETWEEN SOFTWARE APPLICATIONS - The present disclosure provides for sharing data structure values between applications via messaging in a computer operating system. A plurality of data structures are defined, each with a given topic name, and a data structure including a collection of defined formats of multiple data elements. Interest by applications in topics is registered. Within an application a collection of multiple data elements having the formats of a defined data structure are identified, and an item is stored in association with the given topic name of the defined data structure, where the item is a collection of data values of the identified data elements. The item is made available to an application registered to the topic for input of the values in a corresponding data structure in the application. | 2022-05-05 |
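A short sketch of the topic-based sharing flow: data structures are defined per topic, applications register interest, and storing an item under a topic fans the element values out to every registered application. Topic names and callbacks below are illustrative.

```python
topics = {"gps-fix": ["latitude", "longitude"]}   # topic -> element formats
subscribers = {}                                  # topic -> callbacks

def register_interest(topic, callback):
    subscribers.setdefault(topic, []).append(callback)

def store_item(topic, values):
    fields = topics[topic]
    item = dict(zip(fields, values))   # collection of data element values
    for deliver in subscribers.get(topic, []):
        deliver(item)                  # input into the app's own structure

register_interest("gps-fix", lambda item: print("nav app got", item))
register_interest("gps-fix", lambda item: print("log app got", item))
store_item("gps-fix", [59.33, 18.07])
```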
20220138027 | METHOD FOR TRANSMITTING A MESSAGE IN A COMPUTING SYSTEM, AND COMPUTING SYSTEM - In a method for transmitting a message in a computing system, the message is transmitted by a transmitter and received by a receiver. The transmitter is granted access to a memory area for the purpose of transmitting using a first virtual address allocated to the transmitter by a memory management unit, whereas the access to the memory area by the transmitter is revoked after transmitting. Subsequently, the receiver is granted access to the memory area for the purpose of receiving using a second virtual address allocated to the receiver by a memory management unit. The first virtual address may be different from the second virtual address. | 2022-05-05 |
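The hand-off can be pictured with a dict-based page table mapping at most one virtual address to the shared physical area at a time: the sender's mapping exists only while writing, then the receiver gets a (possibly different) virtual address onto the same area. A toy sketch:

```python
physical_area = bytearray(16)
page_table = {}   # virtual address -> physical buffer, per current grant

def grant(vaddr):
    page_table.clear()               # at most one party mapped at a time
    page_table[vaddr] = physical_area

def transmit(vaddr, payload: bytes):
    page_table[vaddr][:len(payload)] = payload

def receive(vaddr, length):
    return bytes(page_table[vaddr][:length])

grant(0x1000)                        # sender's virtual address
transmit(0x1000, b"hello")
grant(0x2000)                        # receiver's different virtual address
print(receive(0x2000, 5))            # -> b'hello'
```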
20220138028 | SELF-EXECUTING BOT BASED ON CACHED USER DATA - Cached data is obtained from a device. The cached data includes data saved on the device in response to electronic searches or electronic messaging performed by a user using the device. A determination is made, at least in part via the cached data, regarding an intended use context associated with the electronic searches or the electronic messaging. Using the intended use context, a confidence level is determined. In response to the determined confidence level meeting or exceeding a predefined threshold, a transaction involving the user is automatically executed, or an electronic communication is automatically sent on behalf of the user. | 2022-05-05 |
20220138029 | DISTRIBUTED LOAD BALANCING FOR ACCESS POINTS - Third party applications are deployed as “containerized applications” on one or more wireless AP devices. The containerized applications are confined to a pre-allocated segregated disk space within a file system of a wireless AP device. The containerized applications have access to standard Linux services as well as access to advanced features provided by an AP. | 2022-05-05 |
20220138030 | COMMON GATEWAY PLATFORM - Systems, methods, and software disclosed herein relate to a common gateway platform system. In an implementation, program instructions direct a computing system to execute a common gateway platform environment comprising an adapter comprising an adapter name identification, a broker connected to the adapter, and an application connected to the adapter. The application also generates an action configured according to a common gateway platform protocol, transmits the action to the broker, and receives a reaction from the broker. The broker is configured to identify the adapter based on the adapter identifier and transmit the action to the adapter. The broker also receives the reaction from the adapter and transmits the reaction to the application. The adapter is configured to acquire the data from the industrial automation environment based on an adapter instruction in the action, generate the reaction comprising the acquired data, and transmit the reaction to the broker. | 2022-05-05 |
20220138031 | AUTO-HYPOTHESES ITERATION TO CONVERGE INTO SITUATION-SPECIFIC SCIENTIFIC CAUSATION USING INTUITION TECHNOLOGY FRAMEWORK - Methods and systems correlating hypotheses outcomes using relevance scoring for intuition-based forewarning are disclosed. For one example, an intuition-based forewarning method includes collecting and storing core data and surroundings data, wherein the core data includes parameters describing a system and ring data includes parameters describing surroundings of the system. The collected core data and ring data are analyzed to determine one or more changing situations of the system. A relevance score is provided for each determined changing situation of the system based on the analyzed core data and ring data. Each determined situation is correlated with one or more hypotheses outcomes representing a future system state based on the relevance score. The hypotheses may be modified using, for example, auto-hypothesis generation. A system forewarning is generated based on the correlated hypotheses outcomes which can be observed by one or more users. | 2022-05-05 |
20220138032 | ANALYSIS OF DEEP-LEVEL CAUSE OF FAULT OF STORAGE MANAGEMENT - Storage management is performed. For example, a computing device may determine that a fault belongs to one of a plurality of predefined fault categories based on description information of the fault of a storage system. Then, the computing device may determine at least one fault cause associated with the fault category at a first level of a hierarchical structure of predetermined fault causes. Further, the computing device may determine a first fault cause that causes the fault among the at least one fault cause. After that, the computing device may determine a target fault cause at the deepest level that causes the fault based on the first fault cause. As a result, the root cause of a fault of a storage system may be accurately and efficiently determined, thereby providing the possibility of fundamentally eliminating the fault. | 2022-05-05 |
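The drill-down this abstract describes maps naturally onto a tree walk: classify the fault into a category, pick the matching first-level cause, then descend the hierarchy until a cause with no children remains. A minimal sketch with an illustrative taxonomy, not the application's actual one:

```python
cause_tree = {
    "io-error": {
        "disk-failure": {
            "worn-flash-cells": {},       # leaf: deepest-level cause
        },
        "cable-fault": {},
    },
}

def deepest_cause(category, matches):
    level = cause_tree[category]
    path = []
    while level:
        cause = next(c for c in level if matches(c))
        path.append(cause)
        level = level[cause]              # descend one level deeper
    return path

# `matches` stands in for the per-level diagnosis that picks one cause.
print(deepest_cause("io-error", matches=lambda c: c != "cable-fault"))
# -> ['disk-failure', 'worn-flash-cells']
```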