27th week of 2020 patent application highlights part 49 |
Patent application number | Title | Published |
20200210141 | HEARING DEVICE INCORPORATING USER INTERACTIVE AUDITORY DISPLAY - A hearing device comprises a processor configured to generate a virtual auditory display comprising a sound field, a plurality of disparate sound field zones, and a plurality of quiet zones that provide acoustic contrast between the sound field zones. The sound field zones and the quiet zones remain positionally stationary within the sound field. One or more sensors are configured to sense a plurality of inputs from the wearer. The processor is configured to facilitate movement of the wearer within the sound field in response to a navigation input received from the one or more sensors. The processor is also configured to select one of the sound field zones for playback via a speaker or actuation of a hearing device function in response to a selection input received from the one or more sensors. | 2020-07-02 |
20200210142 | METHOD AND APPARATUS FOR CONTROLLING VIRTUAL SPEECH ASSISTANT, USER DEVICE AND STORAGE MEDIUM - The present disclosure discloses a method and an apparatus for controlling a virtual speech assistant, a user device and a storage medium, which address the problem of poor feedback for user input on a user device in this field. The method includes: displaying a virtual speech assistant icon in a floating manner on a human-machine interaction interface of a user device; receiving a speech instruction when a microphone of the user device is enabled; and performing an operation according to the speech instruction, and producing a speech output corresponding to an operation result of the operation. | 2020-07-02 |
20200210143 | Attribute Combination Discovery for Predisposition Determination - A method and system are presented in which a query attribute is used as the basis for accessing stored attribute combinations and their frequencies of occurrence for query-attribute-positive individuals and query-attribute-negative individuals and tabulating, based on frequencies of occurrence, those attribute combinations that are most likely to co-occur with the query attribute. | 2020-07-02 |
20200210144 | DATA SORTING DEVICE AND METHOD, AND MONITORING AND DIAGNOSIS DEVICE - The present invention provides a data sorting device and method and a monitoring and diagnosis device, which are able to create a model conveniently. A data sorting device and a monitoring and diagnosis device of the present invention include an operating data database which stores operating data of the plant equipment in a time-series manner. The devices take as input, from among the operating data stored in the operating data database, the operating data that are strongly associated in physical and engineering terms, input processing attributes relevant to those operating data, create a model simulating characteristics of the plant equipment, and perform data sorting, monitoring, and diagnosis through use of the model. | 2020-07-02 |
20200210145 | Modulo Hardware Generator - A method of generating a hardware design to calculate a modulo value for any input value in a target input range with respect to a constant value d using one or more range reduction stages. The hardware design is generated through an iterative process that selects the optimum component for mapping successively increasing input ranges to the target output range until a component is selected that maps the target input range to the target output range. Each iteration includes generating hardware design components for mapping the input range to the target output range using each of a plurality of modulo preserving range reduction methods, synthesizing the generated hardware design components, and selecting one of the generated hardware design components based on the results of the synthesis. | 2020-07-02 |
20200210146 | BINARY PARALLEL ADDER AND MULTIPLIER - An arithmetic logic unit (ALU) including a binary, parallel adder and multiplier to perform arithmetic operations is described. The ALU includes an adder circuit coupled to a multiplexer to receive input operands that are directed to either an addition operation or a multiplication operation. During the multiplication operation, the ALU is configured to determine partial product operands based on first and second operands and provide the partial product operands to the adder circuit via the multiplexer, and the adder circuit is configured to provide an output having a value equal to the product of the first and second operands. During an addition operation, the ALU is configured to provide the first and second operands to the adder circuit via the multiplexer, and the adder circuit is configured to provide the output having a value equal to a sum of the first and second operands. | 2020-07-02 |
20200210147 | RANDOM NUMBER GENERATOR - Provided is a random number generator including a single-photon emitter configured to emit single photons by pumping, a waveguide configured to guide the single photons emitted from the single-photon emitter to the inside of the waveguide, the waveguide including a first output terminal and a second output terminal that are respectively provided at both end portions of the waveguide, the single photons being output from the first output terminal and the second output terminal, and a first single-photon detector and a second single-photon detector respectively provided at the first output terminal and the second output terminal and configured to detect the single photons output from the first output terminal and the second output terminal, respectively. | 2020-07-02 |
20200210148 | METHOD FOR ADAPTING TO BLOCKCHAIN AND DEVICE, TERMINAL AND MEDIUM PERFORMING THE SAME - The present disclosure discloses a method for adapting to blockchain and a device, terminal and medium performing the same. The method comprises: receiving a development requirement including a blockchain communication requirement of a target blockchain from a plurality of candidate blockchains associated with the processor and a target language associated with the target blockchain; and providing an SDK corresponding to the target language based on the development requirement, wherein the SDK includes a calling interface corresponding to the development requirement, the calling interface being configured to trigger a communication channel to communicate with the target blockchain. | 2020-07-02 |
20200210149 | AUTOMATING SERVICE MATURITY ANALYSIS AND ESTIMATION - Techniques for computer-implemented automation of analysis of service maturity and automation of estimation of service maturity for software applications and services are provided, in which a service is identified so that its service maturity level can be determined against an identified task comprising optimal service maturity criteria. In response to identifying the service and task, it is determined whether each criterion of the task is met by the service. Subsequently, a score for the overall task and for the individual criteria is updated based on the total service conformity to the criteria. The scores and differences can be displayed in an interface to indicate the maturity of the service. | 2020-07-02 |
20200210150 | METHOD FOR MANUFACTURING A SECURE, MODULAR BUSINESS-SPECIFIC HARDWARE APPLICATION AND CORRESPONDING OPERATING SYSTEM - Disclosed is a method for manufacturing a secure, modular business-specific hardware application, including: a step of selecting: a hardware computer integrated into a closed case that isolates it from the outside so as to make the hardware resources of this hardware computer structurally non-expandable because these resources cannot be accessed from outside the case without damaging them, an operating system for managing containers in a generic, lightweight fashion, associated with the computer, a software development kit, associated with the operating system and with the computer, container templates, business-specific software components, a step of deploying the business-specific software components in instantiated containers based on the container templates. | 2020-07-02 |
20200210151 | SYSTEM AND METHOD FOR INTERFACING INCIDENT AND CONTINUOUS INTEGRATION SYSTEMS - A computing system includes a server. The server is communicatively coupled to a data repository and is configured to perform operations comprising creating, via a visual information flow creation tool, at least one information flow object. The server is additionally configured to perform operations comprising creating an incident management interface for the at least one information flow object, and executing the incident management interface to communicate with an incident management system. | 2020-07-02 |
20200210152 | MIXED MODE PROGRAMMING - A mixed mode programming method permitting users to program with graphical coding blocks and textual code within the same programming tool. The mixed mode preserves the advantages of graphical block programming while introducing textual coding as needed for instructional reasons and/or for functional reasons. Converting a graphical code block or group of blocks to a textual block lets the user see a portion of the textual code in the context of a larger program. Within one programming tool the mixed mode method allows users to learn programming and build purely graphical blocks; then transition into mixed graphical and textual code and ultimately lead to their ability to program in purely textual code. The mixed mode further allows users to program using any combination of drag-and-drop graphical blocks and typed textual code in various forms. | 2020-07-02 |
20200210153 | INVERSION OF CONTROL FRAMEWORK FOR MULTIPLE BEHAVIORS ON TOP OF A PROCESS - Implementations generally relate to providing process modes. In some implementations, a method includes receiving view descriptors at a client device, where the view descriptors define how a process model is rendered and define how the process model behaves when rendered. The method further includes storing the view descriptors at the client device. The method further includes receiving, at the client device, a process mode selection from a user, where the process mode selection selects a process mode of a plurality of process modes, and where the selected process mode is associated with a set of the view descriptors. The method further includes retrieving the process model from a server. The method further includes applying the process model at the client device based on the set of the view descriptors associated with the selected process mode. | 2020-07-02 |
20200210154 | FUNCTION BLOCK FRAMEWORK GENERATION - For function block framework generation, a method generates a function block framework for a hardware device. The function block framework includes function block framework source code and a function block framework description. The hardware device includes a logic engine and automation hardware. The function block framework presents a standard interface to a function block core executed by the logic engine. The method instantiates the function block framework and the function block core as an executable image for the hardware device. The method further configures the logic engine to execute the executable image using the function block framework description. The method executes the executable image with the logic engine. | 2020-07-02 |
20200210155 | SYSTEMS AND METHODS FOR INTEGRATING MODULES INTO A SOFTWARE APPLICATION - Methods and systems are presented for providing a platform that seamlessly integrates software modules into an application. In particular, the platform allows software modules to access services provided by other software modules, through a host module, without the need to expose the source code of any of the software modules. The application is configured as a host module by integrating one or more software modules into the application. The application may directly consume services provided by the software modules and also facilitate service access between software modules that are integrated into the application. As such, a software module does not need to interact with another software module directly to access the services that module provides, but rather uses the application as a medium to interact with the other software module. | 2020-07-02 |
20200210156 | Compiler-Generated Asynchronous Enumerable Object - A single asynchronous enumerable object is generated that contains the data and methods needed to iterate through an enumerable asynchronously. The asynchronous enumerable object contains the code for traversing the enumerable one step at a time and the operations needed to suspend an iteration to await completion of an asynchronous operation and to resume the iteration upon completion of the asynchronous operation. The allocation of a single object to perform all of these tasks reduces the memory consumption needed to execute an asynchronous enumeration. | 2020-07-02 |
20200210157 | AUTOMATIC TRANSLATION OF COMPUTER CODE - One or more lines of computer code are accessed. An electronic dictionary file is retrieved in response to the accessing of the computer code. The electronic dictionary file contains definitions for a plurality of commands or terms associated with the one or more lines of computer code. Based on the definitions contained in the electronic dictionary file, the one or more lines of computer code are parsed. An output is generated based on the parsing of the computer code. The output contains information explaining the one or more lines of computer code or an intended result of an execution thereof. | 2020-07-02 |
20200210158 | AUTOMATED OR MACHINE-ENHANCED SOURCE CODE DEBUGGING - Analyzing software, in particular a voluminous quantity of source code, is a significant burden for many computing platforms. Bugs must be found, and features added, removed, and modified, all without inducing new errors. By providing a Dependency Ordered Behavior (DOB), a language-agnostic model of software may be machine-derived and associated with natural human terminology for a particular domain. As a result, software may be reviewed and/or automatically edited with confidence in knowing what portions of the code will and will not be impacted. | 2020-07-02 |
20200210159 | SOURCE TO SOURCE COMPILER, COMPILATION METHOD, AND COMPUTER-READABLE MEDIUM FOR PREDICTABLE MEMORY MANAGEMENT - Described are various embodiments of a source-to-source compiler, compilation method, and computer-readable medium for predictable memory management. One embodiment is described as a memory management system operable on input source code for an existing computer program, the system comprising: a computer-readable medium having computer-readable code portions stored thereon to implement, when executed, a deterministic memory manager (DMM), wherein said code portions comprise smart pointer code portions and associated node pointer code portions for implementing a smart pointer that automatically corrects for memory misallocations in target memory allocation source code portions. | 2020-07-02 |
20200210160 | Data Polarization - The Data Polarization process is performed on computer systems to make binary data information streams more efficient. The process does this by polarizing the binary segments and adding a signature to indicate how the segments were polarized for unpackaging. Polarizing, in Data Polarization, means that within a binary information segment all of the zeros are turned into ones and all of the ones are turned into zeros. Afterwards, after computations or transmissions with the data package, with the signature, the information can be correctly interpreted and unpackaged. This helps computer systems use less energy in transmission and computation, as fewer ones, or bursts of energy, are used overall in the system because of the optimized segments. This has many uses in a variety of computer systems including undersea cable relays, quantum computers, or Bitcoin miners. | 2020-07-02 |
20200210161 | METHOD OF ENFORCING CONTROL FLOW INTEGRITY IN A MONOLITHIC BINARY USING STATIC ANALYSIS - Method of enforcing control flow integrity (CFI) for a monolithic binary using static analysis by: marking evaluated functions as core functions by a chosen heuristic or empirically; generating a binary call graph; merging all function nodes of core functions as a node of highest privilege (set 0); merging all leaf functions in one node without privilege (set n); merging all nodes without privilege that reach functions of privilege i and setting the merged node privilege to i+1; checking if there is a node without privilege besides a trivial function; in a positive case, returning to merging all nodes without privilege and setting the merged node privilege to i+1; and in a negative case, setting the privilege of trivial functions as i+2. | 2020-07-02 |
20200210162 | Computer Processing and Outcome Prediction Systems and Methods - Computer processing and outcome prediction systems and methods used to generate algorithm time prediction polynomials, inverse algorithm time prediction polynomials, determine race conditions, determine when a non-linear algorithm can be treated as if it were linear, as well as automatically generate parallel and quantum solutions from classical software or from the relationship between monotonic attribute values. | 2020-07-02 |
20200210163 | OPERATING SYSTEM INSTALLATION - The disclosure provides a method and device for installing an operating system. According to an example of the method, in a temporary system, a target physical hard disk to be used for installing the operating system is determined in response to a user operation of specifying the physical hard disk, and it is judged whether the temporary system and the operating system to be installed are of the same type. Then, in a system stage corresponding to a judgment result, a logical drive letter of the target physical hard disk on the operating system to be installed is queried by a query means corresponding to the judgment result. In this way, in a small system for the operating system to be installed, the operating system to be installed can be installed on the target physical hard disk identified by the logical drive letter. | 2020-07-02 |
20200210164 | PREVENTING DATABASE PACKAGE UPDATES TO FAIL CUSTOMER REQUESTS AND CAUSE DATA CORRUPTIONS - A method for processing database package connections and updates has been developed. First, an execution request for a database package is received. A connection context which can execute the execution request is then established. It is determined whether the database package is valid with proper package updates prior to establishing the connection. If the database package is determined not to be valid, change packages are retrieved for the database package. The valid change packages are then compiled for the database package and the connection request is executed for the updated database package with the connection context. | 2020-07-02 |
20200210165 | METHOD AND SYSTEM FOR DOWNLOADING INFORMATION - Methods and systems for downloading software information are disclosed herein. In one example embodiment, the method includes performing a first determination as to whether a first number of inquiries or download requests received by a server computer is or has been excessive and, if the first determination is that the first number of inquiries or download requests is not or has not been excessive, sending a signal including a first permission to download a software package. Also, the method includes performing a second determination as to whether either the first number or a second number of inquiries or download requests received by the server computer is or has been excessive and, if the second determination is that the first or second number of inquiries or download requests is not or has not been excessive, sending a first part of the software package for receipt by a first client computer. | 2020-07-02 |
20200210166 | SYSTEMS AND METHODS FOR ENFORCING UPDATE POLICIES WHILE APPLYING UPDATES FROM BOOTABLE IMAGE FILE - An information handling system may include a host system processor and a computer-readable storage medium communicatively coupled to the host system processor and having stored thereon a bootable update image file for performing a firmware update associated with the information handling system. The bootable update image file may be configured to, when read and executed by the processor, read policy settings stored within the information handling system setting forth update policies to be applied during application of updates defined within the bootable update image file and perform updates defined within the bootable update image file in accordance with the update policies. | 2020-07-02 |
20200210167 | CONTROL APPARATUS, CONTROL METHOD, AND COMPUTER PROGRAM - A control apparatus that includes an in-vehicle communication unit configured to communicate with an on-vehicle control device, a storage unit configured to store a plurality of types of communication paths from the in-vehicle communication unit to the on-vehicle control device, and a selection unit configured to select a transmission path for transmitting an update program to the on-vehicle control device, among the plurality of types of stored communication paths. | 2020-07-02 |
20200210168 | SYSTEMS AND METHODS FOR UTILIZING ENCRYPTION IN MICROCONTROLLERS FOR FOTA - A firmware update system includes a microcontroller which includes an encryption module configured to perform an encryption function. An update module is configured to communicate with the microcontroller to provide a firmware update. The update module includes a decryption module configured to convert the firmware update from plaintext into decryption ciphertext using a decryption function. The encryption module is configured to convert the decryption ciphertext into plaintext such that the microcontroller can execute the plaintext to implement the firmware update. | 2020-07-02 |
20200210169 | SOFTWARE GLOBALIZATION MANAGEMENT - Embodiments of the present invention provide a method, system and computer program product for software globalization management. In an embodiment of the invention, a method for software globalization management includes loading markup in a browser for rendering in the browser and parsing the markup to identify different markup language tags disposed in the markup. Thereafter, on condition that during the parsing a globalization tag is detected, an internationalization key associated with a textual resource is extracted in connection with the globalization tag, a locale setting is retrieved for the browser, the key is submitted to a remote repository with the locale setting in order to retrieve therefrom a translated form of the textual resource in accordance with the locale setting, the translated form of the textual resource is received in response to the request, and the markup is rendered with the translated form of the textual resource. | 2020-07-02 |
20200210170 | RESOLVING POTENTIAL MERGE CONFLICTS BY UTILIZING A DISTRIBUTED BLOCKCHAIN FRAMEWORK - Disclosed is a system for resolving merge conflict using a distributed blockchain framework including a set of computing nodes. A block of a code snippet, of a plurality of code snippets, is registered in a distributed blockchain framework. A subset of computing nodes comprising a copy of the code snippet is identified. A change set associated with the code snippet is sent to the subset of computing nodes. A conflict is determined between the change set and the copy of the code snippet stored at a computing node of the subset of computing nodes. An approval is received to push the change set into the code repository when the conflict is determined. The approval indicates that the conflict is resolved when the change set is executed locally by the computing node. The change set is pushed into the code repository indicating modification in the copy of code snippet. | 2020-07-02 |
20200210171 | TREE-CONVERSION DELTA ENCODING - A first data tree of a first version of the software and a second data tree of a second version of the software may be provided. The first data tree may be converted into a first data tree file, and the second data tree may be converted into a second data tree file. A delta for the first data tree and the second data tree may be generated based on a comparison of the first data tree file and the second data tree file. The delta may be packaged for provision to a client-side agent. The client-side agent may be configured to modify a client-side version of the software based on the delta. | 2020-07-02 |
20200210172 | DYNAMIC CONFIGURATION OF A DATA FLOW ARRAY FOR PROCESSING DATA FLOW ARRAY INSTRUCTIONS - A system for processing data flow array instructions is described. The system includes a data flow array, which includes a plurality of processing elements; a decoder to receive a data flow array instruction and generate a set of microinstructions based on the data flow array instruction; a reservation station to receive and dispatch each microinstruction in the set of microinstructions, wherein the set of microinstructions includes a configuration microinstruction for configuring the data flow array for processing the data flow array instruction; a configuration watcher to receive the configuration microinstruction and to add a configuration identifier and a set of parameters of the configuration microinstruction to a configuration queue for the data flow array, wherein the data flow array is to configure the plurality of processing elements based on configuration information associated with the configuration identifier and the set of parameters. | 2020-07-02 |
20200210173 | SYSTEMS AND METHODS FOR PERFORMING NIBBLE-SIZED OPERATIONS ON MATRIX ELEMENTS - Disclosed embodiments relate to systems and methods for performing nibble-sized operations on matrix elements. In one example, a processor includes fetch circuitry to fetch an instruction, decode circuitry to decode the fetched instruction the fetched instruction having fields to specify an opcode and locations of first source, second source, and destination matrices, the opcode to indicate the processor is to, for each pair of corresponding elements of the first and second source matrices, logically partition each element into nibble-sized partitions, perform an operation indicated by the instruction on each partition, and store execution results to a corresponding nibble-sized partition of a corresponding element of the destination matrix. The exemplary processor includes execution circuitry to execute the decoded instruction as per the opcode. | 2020-07-02 |
20200210174 | APPARATUSES, METHODS, AND SYSTEMS FOR STENCIL CONFIGURATION AND COMPUTATION INSTRUCTIONS - Systems, methods, and apparatuses relating to performing stencil configuration and computation operations are described. In one embodiment, a matrix operations accelerator circuit includes a two-dimensional grid of fused multiply accumulate circuits coupled by a network; a first plurality of registers that represents a first two-dimensional matrix coupled to the matrix operations accelerator circuit; a second plurality of registers that represents a second two-dimensional matrix coupled to the matrix operations accelerator circuit; a decoder, of a core coupled to the matrix operations accelerator circuit, to decode a single instruction into a decoded single instruction; and an execution circuit of the core to execute the decoded single instruction to: switch the matrix operations accelerator circuit from a first mode to a second mode where a first set of input values from the first plurality of registers is sent to a first plurality of fused multiply accumulate circuits that form a first row of the two-dimensional grid, a second set of input values from the first plurality of registers is sent to a second plurality of fused multiply accumulate circuits that form a second row of the two-dimensional grid, a first coefficient value from the second plurality of registers is broadcast to a third plurality of fused multiply accumulate circuits that form a first column of the two-dimensional grid, and a second coefficient value from the second plurality of registers is broadcast to a fourth plurality of fused multiply accumulate circuits that form a second column of the two-dimensional grid. | 2020-07-02 |
20200210175 | REGISTER FILES IN A MULTI-THREADED PROCESSOR - A processor comprising a barrel-threaded execution unit for executing concurrent threads, and one or more register files comprising a respective set of context registers for each concurrent thread. One of the register files further comprises a set of shared weights registers common to some or all of the concurrent threads. The types of instruction defined in the instruction set of the processor include an arithmetic instruction having operands specifying a source and a destination from amongst a respective set of arithmetic registers of the thread in which the arithmetic instruction is executed. The execution unit is configured so as, in response to the opcode of the arithmetic instruction, to perform an operation comprising multiplying an input from the source by at least one of the weights from at least one of the shared weights registers, and to place a result in the destination. | 2020-07-02 |
20200210176 | SYSTEMS AND METHODS FOR COMPONENT FAULT DETECTION - Systems, methods, and non-transitory computer-readable media can receive power consumption information for a component in a vehicle indicative of power consumption by the component. The power consumption information is compared with a nominal power signature associated with the component. A determination is made that power consumption of the component deviates from the nominal power signature. Corrective action is executed based on the determining that power consumption of the component deviates from the nominal power signature. | 2020-07-02 |
20200210177 | EXTENDING OPERATIONAL LIFETIME OF SPATIAL LIGHT MODULATOR DEVICES WITH CONTENT-DEPENDENT ENCODING - This disclosure relates to various implementations that improve the operational lifetime of a spatial light modulator device. One or more controllers are able to determine a target duty cycle based on an operating condition for a spatial light modulator device. The spatial light modulator device comprises one or more light modulation components that modulate light. The controllers allocate a refresh time period for encoding spatial light modulator refresh instructions based on the target duty cycle and encode spatial light modulator refresh instructions within the refresh time period. Afterwards, the controllers transmit the spatial light modulator refresh instructions to the spatial light modulator device. | 2020-07-02 |
20200210178 | BRANCH TYPE LOGGING IN LAST BRANCH REGISTERS - A processor includes a counter to store a cycle count that tracks a number of cycles between retirement of a first branch instruction and retirement of a second branch instruction during execution of a set of instructions. The processor further includes a stack of registers coupled to the counter, wherein the stack of registers is to store branch type information including: a first value of the counter when the first branch instruction is retired; a second value of the counter when the second branch instruction is retired; a first type information value indicating a type of the first branch instruction; and a second type information value indicating a type of the second branch instruction. | 2020-07-02 |
20200210179 | UNIFORMITY CHECK BASED INSTRUCTIONS - Novel instructions, their format, and support thereof are described. For example, an instruction having an opcode to indicate that execution circuitry is to: perform a packed data operation as indicated by the opcode as a scalar and broadcast operation when there is uniformity between packed data elements in identified source operands by performing a single scalar operation using scalar circuitry that is equivalent to the packed data operation using one packed data element of each of the identified source operands to generate a single result and broadcast the single result into each packed data element positions of the identified packed data destination operand. | 2020-07-02 |
20200210180 | APPARATUSES, METHODS, AND SYSTEMS FOR VECTOR LOGICAL OPERATION AND TEST INSTRUCTIONS - Systems, methods, and apparatuses relating to performing logical operations on packed data elements and testing the results of that logical operation to generate a packed data resultant are described. In one embodiment, a processor includes a decoder to decode an instruction into a decoded instruction, the instruction having fields that identify a first packed data source, a second packed data source, and a packed data destination, and an opcode that indicates a bitwise logical operation to perform on the first packed data source and the second packed data source and indicates a width of each element of the first packed data source and the second packed data source; and an execution circuit to execute the decoded instruction to perform the bitwise logical operation indicated by the opcode on the first packed data source and the second packed data source to produce a logical operation result of packed data elements having a same width as the width indicated by the opcode, perform a test operation on each element of the logical operation result to set a corresponding bit in a packed data test operation result to a first value when any of the bits in a respective element of the logical operation result are set to the first value, and set the corresponding bit to a second value otherwise, and store the packed data test operation result into the packed data destination. | 2020-07-02 |
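The operate-then-test behavior described in the abstract above can be modeled in a few lines of Python. The sketch below is illustrative only, not the patented hardware: all names are hypothetical, element width is handled by masking, and bitwise AND is assumed as the logical operation.

```python
def vptest_like(src1, src2, width_bits, op=lambda a, b: a & b):
    """Model of a vector logical-operation-and-test instruction.

    Applies a bitwise op (AND by default) to each pair of packed
    elements, then sets one bit per element in the result mask:
    1 if any bit of the element's logical result is set, else 0.
    """
    mask = (1 << width_bits) - 1
    result_bits = 0
    for i, (a, b) in enumerate(zip(src1, src2)):
        element = op(a & mask, b & mask)  # the logical operation result
        if element != 0:                  # test: any bit set in this element?
            result_bits |= 1 << i
    return result_bits

# Elements 0 and 2 share set bits with their counterparts; element 1 does not.
print(bin(vptest_like([0b1010, 0b0101, 0xFF], [0b0010, 0b1010, 0x01], 8)))  # prints 0b101
```

A hardware implementation would evaluate all lanes in parallel; the loop here stands in for that lane-wise datapath.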
20200210181 | APPARATUSES, METHODS, AND SYSTEMS FOR VECTOR ELEMENT SORTING INSTRUCTIONS - Systems, methods, and apparatuses relating to performing a sort operation on a packed data source to generate a packed data resultant are described. In one embodiment, a processor includes a decoder to decode a single instruction into a decoded single instruction, the single instruction having at least one field that identifies a packed data source and a packed data destination, and an opcode that is to indicate a sort type; and an execution circuit to execute the decoded single instruction to: provide storage for a comparison matrix to store a comparison value for each element of the packed data source against the other elements of the packed data source, perform a same comparison operation on each element of the packed data source against the other elements of the packed data source to populate the comparison matrix, add each column of results in the comparison matrix to generate each element of a packed data count, move each element of the packed data source according to the packed data count to generate a packed data result that is sorted by the sort type indicated by the opcode, and store the packed data result into the packed data destination. | 2020-07-02 |
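The comparison-matrix sort outlined above is essentially a rank (counting) sort: compare every element against every other, sum each column of the matrix to obtain the element's destination index, then scatter elements by rank. A minimal Python sketch, with hypothetical names and a tie-break on original position to keep ranks unique:

```python
def comparison_matrix_sort(src, descending=False):
    """Sort via an all-pairs comparison matrix, as the abstract outlines.

    matrix[i][j] = 1 when element j should be placed after element i.
    Column sums give each element's destination index (its rank).
    """
    n = len(src)
    matrix = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            after = src[j] > src[i] or (src[j] == src[i] and j > i)
            if descending:
                after = not after
            matrix[i][j] = 1 if after else 0
    # "Add each column of results" -> the packed data count (rank of element j)
    counts = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    # "Move each element according to the packed data count"
    result = [0] * n
    for j, rank in enumerate(counts):
        result[rank] = src[j]
    return result
```

The O(n²) comparisons are the point: in hardware they happen in parallel, making the sort constant-latency for a fixed vector length.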
20200210182 | SYSTEMS AND METHODS FOR PERFORMING DUPLICATE DETECTION INSTRUCTIONS ON 2D DATA - Disclosed embodiments relate to systems and methods for performing duplicate detection instructions on two-dimensional (2D) data. In one example, a processor includes fetch circuitry to fetch an instruction, decode circuitry to decode the fetched instruction having fields to specify an opcode and locations of a source matrix comprising M×N elements and a destination, the opcode to indicate execution circuitry is to use a plurality of comparators to discover duplicates in the source matrix, and store indications of locations of discovered duplicates in the destination. The execution circuitry is to execute the decoded instruction as per the opcode. | 2020-07-02 |
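Functionally, the duplicate-detection instruction above amounts to grouping matrix positions by value and reporting the values seen more than once. A software sketch (hypothetical names; the hardware uses parallel comparators rather than a hash map):

```python
def find_duplicates(matrix):
    """Report locations of duplicate values in an M x N matrix.

    Returns a dict mapping each value that occurs more than once to
    the list of (row, col) positions where it appears, modeling the
    'indications of locations of discovered duplicates' destination.
    """
    seen = {}
    for r, row in enumerate(matrix):
        for c, value in enumerate(row):
            seen.setdefault(value, []).append((r, c))
    return {v: locs for v, locs in seen.items() if len(locs) > 1}

print(find_duplicates([[1, 2, 3], [4, 2, 1]]))
# prints {1: [(0, 0), (1, 2)], 2: [(0, 1), (1, 1)]}
```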
20200210183 | VECTORIZATION OF LOOPS BASED ON VECTOR MASKS AND VECTOR COUNT DISTANCES - Systems, apparatuses and methods may provide for technology that identifies that an iterative loop includes a first code portion that executes in response to a condition being satisfied, generates a first vector mask that is to represent one or more instances of the condition being satisfied for one or more values of a first vector of values, and one or more instances of the condition being unsatisfied for the first vector of values, where the first vector of values is to correspond to one or more first iterations of the iterative loop, and conducts a vectorization process of the iterative loop based on the first vector mask. | 2020-07-02 |
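The mask-then-vectorize idea above (a lane mask records which iterations satisfy the condition, and the conditional body is applied only in the set lanes) can be sketched as follows. The helper names are hypothetical; a real vectorizer emits predicated SIMD instructions rather than Python loops:

```python
def make_condition_mask(values, cond):
    """Build a vector mask marking which lanes satisfy the condition."""
    return [1 if cond(v) else 0 for v in values]

def masked_apply(values, mask, fn):
    """Apply fn only in lanes where the mask is set (predicated execution)."""
    return [fn(v) if m else v for v, m in zip(values, mask)]

# Scalar loop: for x in data: if x < 0: x = -x   -> vectorized form:
data = [3, -1, -4, 2]
mask = make_condition_mask(data, lambda v: v < 0)
print(masked_apply(data, mask, lambda v: -v))  # prints [3, 1, 4, 2]
```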
20200210184 | CONTROLLING POWER STATE DEMOTION IN A PROCESSOR - In an embodiment, a processor for demotion includes a plurality of cores to execute instructions and a demotion control circuit. The demotion control circuit is to: for each core of the plurality of cores, determine an average count of power state break events in the core; determine a sum of the average counts of the plurality of cores; determine whether the average count of a first core exceeds a first demotion threshold; determine whether the sum of the average counts of the plurality of cores exceeds a second demotion threshold; and in response to a determination that the average count of the first core exceeds the first demotion threshold and the sum of the average counts exceeds the second demotion threshold, perform a power state demotion of the first core. Other embodiments are described and claimed. | 2020-07-02 |
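The two-threshold demotion test in the abstract above reduces to a simple predicate. The sketch below uses assumed names and treats the per-core average break-event counts as a plain list:

```python
def should_demote(core_counts, core_index, per_core_threshold, total_threshold):
    """Decide whether to demote a core's power state.

    core_counts: average count of power-state break events per core.
    Demote only when the core's own average exceeds the per-core
    threshold AND the sum across all cores exceeds the total threshold.
    """
    return (core_counts[core_index] > per_core_threshold
            and sum(core_counts) > total_threshold)
```

Requiring both conditions avoids demoting a busy core when the package as a whole is quiet, and vice versa.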
20200210185 | METHOD FOR MIGRATING CPU STATE FROM AN INOPERABLE CORE TO A SPARE CORE - An apparatus is disclosed in which the apparatus may include a plurality of cores, including a first core, a second core and a third core, and circuitry coupled to the first core. The first core may be configured to process a plurality of instructions. The circuitry may be configured to detect that the first core stopped committing a subset of the plurality of instructions, and to send an indication to the second core that the first core stopped committing the subset. The second core may be configured to disable the first core from further processing instructions of the subset responsive to receiving the indication, and to copy data from the first core to a third core responsive to disabling the first core. The third core may be configured to resume processing the subset dependent upon the data. | 2020-07-02 |
20200210186 | APPARATUS AND METHOD FOR NON-SPATIAL STORE AND SCATTER INSTRUCTIONS - Embodiments of systems, apparatuses, and methods for storing data elements in a processor are described. For example, execution circuitry executes a decoded instruction, the instruction having a first field identifying a location in main memory, a second field identifying a register storing a data element to be stored at the location in main memory, and an opcode to indicate to the execution circuitry to store the data element at the location in main memory without storing the data element in a data cache of the processor. | 2020-07-02 |
20200210187 | LOAD-STORE INSTRUCTION - A processor having an instruction set including a load-store instruction having operands specifying, from amongst the registers in at least one register file, a respective destination of each of two load operations, a respective source of a store operation, and a pair of address registers arranged to hold three memory addresses, the three memory addresses being a respective load address for each of the two load operations and a respective store address for the store operation. The load-store instruction further includes three immediate stride operands each specifying a respective stride value for each of the two load addresses and one store address, wherein at least some possible values of each immediate stride operand specify the respective stride value by specifying one of a plurality of fields within a stride register in one of the one or more register files, each field holding a different stride value. | 2020-07-02 |
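The stride selection above can be modeled in software. The encoding in this sketch is an assumption made for illustration (small immediate values select a field of the stride register, larger ones are used literally); the patent's actual encoding is not given in the abstract:

```python
def step_addresses(addresses, stride_operands, stride_register, field_width=16):
    """Advance load/store addresses by strides selected per operand.

    Each immediate stride operand either selects a field of the stride
    register holding the stride, or (assumed here, for values >= 4)
    encodes the stride directly.
    """
    mask = (1 << field_width) - 1
    new = []
    for addr, imm in zip(addresses, stride_operands):
        if imm < 4:  # select field `imm` within the stride register
            stride = (stride_register >> (imm * field_width)) & mask
        else:        # literal immediate stride value
            stride = imm
        new.append(addr + stride)
    return new

# Stride register holding field 0 = 8 and field 1 = 16:
print(step_addresses([100, 200, 300], [0, 1, 32], 8 | (16 << 16)))  # prints [108, 216, 332]
```

Packing several stride values into one register is what lets a single instruction carry three independently strided addresses without widening its encoding.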
20200210188 | SYSTEMS AND METHODS FOR PERFORMING MATRIX ROW- AND COLUMN-WISE PERMUTE INSTRUCTIONS - Disclosed embodiments relate to systems and methods for performing matrix row-wise and column-wise permute instructions. In one example, a processor includes fetch circuitry to fetch an instruction, decode circuitry to decode the fetched instruction having fields to specify an opcode and locations of a source matrix and a destination matrix, the opcode indicating the processor is to perform a permutation by copying, into each of a plurality of equal-sized logical partitions of the destination matrix, a selected logical partition of a same size from the source matrix, the selection being indicated by a permute control, and execution circuitry to execute the decoded instruction as per the opcode. | 2020-07-02 |
20200210189 | INSTRUCTION TIGHTLY-COUPLED MEMORY AND INSTRUCTION CACHE ACCESS PREDICTION - Disclosed herein are systems and methods for instruction tightly-coupled memory (iTIM) and instruction cache (iCache) access prediction. A processor may use a predictor to enable access to the iTIM or the iCache and a particular way (a memory structure) based on a location state and program counter value. The predictor may determine whether to stay in an enabled memory structure, move to and enable a different memory structure, or move to and enable both memory structures. Stay and move predictions may be based on whether a memory structure boundary crossing has occurred due to sequential instruction processing, branch or jump instruction processing, branch resolution, and cache miss processing. The program counter and a location state indicator may use feedback and be updated each instruction-fetch cycle to determine which memory structure(s) need to be enabled for the next instruction fetch. | 2020-07-02 |
20200210190 | MICRO-OPERATION CACHE USING PREDICTIVE ALLOCATION - According to one general aspect, an apparatus may include an instruction fetch unit circuit configured to retrieve instructions from a memory. The apparatus may include an instruction decode unit configured to convert instructions into one or more micro-operations that are provided to an execution unit circuit. The apparatus may also include a micro-operation cache configured to store micro-operations. The apparatus may further include a branch prediction circuit configured to: determine when a kernel of instructions is repeating, store at least a portion of the kernel within the micro-operation cache, and provide the stored portion of the kernel to the execution unit circuit without the further aid of the instruction decode unit circuit. | 2020-07-02 |
20200210191 | EXIT HISTORY BASED BRANCH PREDICTION - A computer-implemented method includes fetching a fetch-packet containing a first hyper-block from a first address of a memory, the fetch-packet containing a bitwise distance from an entry point of the first hyper-block to a predicted exit point; executing a first branch instruction of the first hyper-block, wherein the first branch instruction corresponds to a first exit point, and wherein the first branch instruction includes an address corresponding to an entry point of a second hyper-block; storing, responsive to executing the first branch instruction, a bitwise distance from the entry point of the first hyper-block to the first exit point; and moving a program counter from the first exit point of the first hyper-block to the entry point of the second hyper-block. | 2020-07-02 |
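The exit-history mechanism above can be approximated in software as a lookup table keyed by hyper-block entry address, storing the distance to the exit last taken. The class and the plain-offset representation below are illustrative assumptions, not the patented design:

```python
class ExitHistoryPredictor:
    """Predict a hyper-block's exit point from its last observed exit.

    Keyed by hyper-block entry address; the stored value stands in for
    the abstract's 'bitwise distance' from entry point to the exit
    actually taken (simplified here to an instruction offset).
    """
    def __init__(self):
        self.history = {}

    def predict(self, entry_addr, default_exit):
        # Use the recorded exit if this hyper-block was seen before.
        return self.history.get(entry_addr, default_exit)

    def record(self, entry_addr, taken_exit_distance):
        # Update history when a branch out of the hyper-block retires.
        self.history[entry_addr] = taken_exit_distance
```

Storing a distance rather than an absolute target keeps the prediction valid even if the hyper-block is relocated, since the exit is always expressed relative to the entry point.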
20200210192 | INSTRUCTION CACHE IN A MULTI-THREADED PROCESSOR - A processor comprising: a barrel-threaded execution unit for executing concurrent threads, and a repeat cache shared between the concurrent threads. The processor's instruction set includes a repeat instruction which takes a repeat count operand. When the repeat cache is not claimed and the repeat instruction is executed in a first thread, a portion of code is cached from the first thread into the repeat cache, the state of the repeat cache is changed to record it as claimed, and the cached code is executed a number of times. When the repeat instruction is then executed in a further thread, then the already-cached portion of code is again executed a respective number of times, each time from the repeat cache. For each of the first and further instructions, the repeat count operand in the respective instruction specifies the number of times to execute the cached code. | 2020-07-02 |
20200210193 | HARDWARE PROFILER TO TRACK INSTRUCTION SEQUENCE INFORMATION INCLUDING A BLACKLISTING MECHANISM AND A WHITELISTING MECHANISM - A processor includes a set of execution units in an out-of-order execution pipeline, and a hardware profiler in the out-of-order execution pipeline coupled to the set of execution units and to profile instructions executed by the set of execution units, the hardware profiler to generate a profiling interrupt, the profiling interrupt to initiate an optimization of a basic block of instructions in response to determining that a whitelist bit is set corresponding to the basic block of instructions, the whitelist bit to identify the basic block of instructions for immediate optimization. | 2020-07-02 |
20200210194 | TECHNIQUES FOR SCHEDULING INSTRUCTIONS IN COMPILING SOURCE CODE - Examples described herein generally relate to generating, from a listing of source code, a plurality of basic blocks for compiling into intermediate language, determining, for a first basic block of the plurality of basic blocks, first heuristics related to applying a first plurality of optimizations to the first basic block, determining, for a second basic block of the plurality of basic blocks, second heuristics related to applying a second plurality of optimizations to the second basic block, and applying, based on the first heuristics and the second heuristics, one of the first plurality of optimizations to the first basic block to schedule first instructions for the first basic block and one of the second plurality of optimizations to the second basic block to schedule second instructions for the second basic block. | 2020-07-02 |
20200210195 | Managing Trace Information Storage Using Pipeline Instruction Insertion and Filtering - At least some instructions executed in a pipeline are each associated with corresponding trace information that characterizes execution of that instruction in the pipeline. Store instructions of a predetermined type flow through a subset of contiguous stages of the pipeline. A signal is received to store a portion of the trace information. A stage before the subset of contiguous stages is stalled. A store instruction of the predetermined type is inserted into a stage at the beginning of the subset of contiguous stages to enable the store instruction to reach the memory access stage at which an operand of the store instruction including the portion of the trace information is sent out of the pipeline. The store instruction is filtered from a stage of the subset of contiguous stages that occurs earlier in the pipeline than a stage in which trace information is generated. | 2020-07-02 |
20200210196 | HARDWARE PROCESSORS AND METHODS FOR EXTENDED MICROCODE PATCHING - Hardware processors and methods for extended microcode patching through on-die and off-die secure storage are described. In one embodiment, the additional storage resources used for storing micro-operations are section(s) of a cache that are unused at runtime and/or unused by a configuration of a processor. For example, the additional storage resources may be a section of a cache that is used to store context information from a core when the core is transitioned to a power state that shuts off voltage to the core. Non-limiting examples of such sections are one or more sections for: storage of context information for a transition of a thread to idle or off, storage of context information for a transition of a core for a multiple core processor to idle or off, or storage of coherency information for a transition of a cache coherency circuit (e.g., cache box (CBo)) to idle or off. | 2020-07-02 |
20200210197 | SECURE PREDICTORS FOR SPECULATIVE EXECUTION - Systems and methods are disclosed for secure predictors for speculative execution. Some implementations may eliminate or mitigate side-channel attacks, such as the Spectre-class of attacks, in a processor. For example, an integrated circuit (e.g., a processor) for executing instructions includes a predictor circuit that, when operating in a first mode, uses data stored in a set of predictor entries to generate predictions. For example, the integrated circuit may be configured to: detect a security domain transition for software being executed by the integrated circuit; responsive to the security domain transition, change a mode of the predictor circuit from the first mode to a second mode and invoke a reset of the set of predictor entries, wherein the second mode prevents the use of a first subset of the predictor entries of the set of predictor entries; and, after completion of the reset, change the mode back to the first mode. | 2020-07-02 |
20200210198 | Optimized Result Writeback and Mode Switching for CPUs with Software Controlled Pipeline Protection - Techniques related to executing a plurality of instructions by a processor are described, including a method comprising detecting a pipeline hazard based on one or more instructions provided for execution by an instruction execution pipeline, beginning execution of an instruction of the one or more instructions on the instruction execution pipeline, stalling a portion of the instruction execution pipeline based on the detected pipeline hazard, storing a register state associated with the execution of the instruction based on the stalling, determining that the pipeline hazard has been resolved, and restoring the register state to the instruction execution pipeline based on the determination. | 2020-07-02 |
20200210199 | MASK GENERATION USING REDUCTION OPERATORS AND SCATTER USE THEREOF - Novel instructions, their format, and support thereof are described. For example, an instruction including a field for an opcode to indicate a reduction-based mask generation operation is to be performed, a field to identify a first packed data source operand, a field to identify a second packed data source operand, and a field to identify a destination operand to store the reduction-based generated mask, together with its hardware support, is described. | 2020-07-02 |
20200210200 | Multi-Operating System Device, Notification Device and Methods Thereof - The multi-operating system device comprises a processor, a transceiver, and an output device. The processor is configured to host a first operating system in the foreground and a second operating system (OS | 2020-07-02 |
20200210201 | INFORMATION PROCESSING SYSTEM AND RELAY DEVICE - An information processing system includes a plurality of information processing devices and a relay device. The information processing devices each include a processor. The relay device connects the information processing devices via an expansion bus and relays communication between the information processing devices. The relay device includes a power supply controller that controls supply of power to the information processing devices, and performs control to shut off supply of power to the relay device and the information processing devices after detecting shutdown of all the information processing devices. | 2020-07-02 |
20200210202 | Data Storage Devices, Access Device and Data Processing Methods - An access device includes a memory controller coupled to a memory device and configured to access the memory device. The memory controller is further configured to perform a test procedure on the memory device to obtain a test result, write a boot code index, which indicates a predetermined address for storing predetermined system data of the memory device and a copy rule adopted for generating one or more duplicates of the predetermined system data, in the memory device, establish system data of the memory device according to the test result, write the system data into the predetermined address as the predetermined system data, and write the system data in one or more memory blocks of the memory device as the duplicates of the predetermined system data according to the copy rule. | 2020-07-02 |
20200210203 | SYSTEMS AND METHODS FOR HANDLING FIRMWARE DRIVER DEPENDENCIES IN HOST OPERATING SYSTEMS WHILE APPLYING UPDATES FROM BOOTABLE IMAGE FILE - A bootable update image file may be configured to, if operating system driver updates associated with a firmware update are boot-critical: modify a boot order of the information handling system to cause the information handling system to boot to an operating system of the information handling system such that the operating system fetches driver update packages from an update partition of the information handling system, applies the driver update packages, and modifies the boot order to cause the information handling system to boot to the bootable image file in a subsequent boot and in the subsequent boot, apply the firmware update; and, if driver updates are non-boot-critical: apply the firmware update and modify the boot order to cause the information handling system to boot to the operating system such that the operating system fetches the driver update packages from an update partition and applies the driver update packages. | 2020-07-02 |
20200210204 | SOFTWARE UPGRADE AND DISASTER RECOVERY ON A COMPUTING DEVICE - A method, a device, and a non-transitory storage medium are provided to: execute a first stage boot loader during a boot-up of the device; determine whether a disaster recovery service is invoked based on the first stage boot loader reading a first file that indicates whether the disaster recovery service was invoked during a previous cycle of the device and detecting a position of a button of the device; execute a second stage boot loader in response to a determination that the disaster recovery service is invoked; again determine whether the disaster recovery service is invoked; and reboot in response to a determination that the disaster recovery service is not invoked. | 2020-07-02 |
20200210205 | INDEPENDENT OPERATION OF AN ETHERNET SWITCH INTEGRATED ON A SYSTEM ON A CHIP - An Ethernet switch and a switch microcontroller or CPU are integrated onto a system-on-a-chip (SoC). The Ethernet switch remains independently operating at full speed even though the remainder of the SoC is being reset or is otherwise nonoperational. The Ethernet switch is on a separated power and clock domain from the remainder of the integrated SoC. A warm reset signal is trapped by a control microcontroller (MCU) to allow the switch CPU to isolate the Ethernet switch and save state. When the Ethernet switch is isolated and operating independently, the warm reset request is provided to the other entities on the integrated SoC. When warm reset is completed, the state is restored and the various DMA and flow settings are redeveloped in the integrated SoC to allow return to normal operating condition. | 2020-07-02 |
20200210206 | CONTROLLING OPERATIONAL STATE OF AN ELECTRONIC APPARATUS BASED ON USER PROXIMITY AND USER INPUT - An electronic apparatus includes a processing unit configured to execute system processing, an object detection unit configured to detect an object present within a predetermined detection range, and an operation control unit configured to control the system processing according to a detection state detected by the object detection unit to make a transition to one of a first operating state and a second operating state in which at least part of the system processing is more limited than that in the first operating state. When a transition from the first operating state to the second operating state is made regardless of the detection state detected by the object detection unit, the operation control unit prohibits the transition back to the first operating state according to the detection state detected by the object detection unit. | 2020-07-02 |
20200210207 | NEGOTIATED POWER-UP FOR SSD DATA REFRESH - An embodiment of an electronic apparatus may include one or more substrates, and logic coupled to the one or more substrates, the logic to manage a persistent storage media, provide a host with an indication of a time for the host to initiate a subsequent wake-up for data management of the persistent storage media, and perform data management of the persistent storage media in response to a host-initiated wake-up from a zero power state. Other embodiments are disclosed and claimed. | 2020-07-02 |
20200210208 | CHANGE PROCEDURE GENERATION DEVICE, CHANGE PROCEDURE GENERATION METHOD, AND CHANGE PROCEDURE GENERATION PROGRAM - A change procedure generation device | 2020-07-02 |
20200210209 | Loader Application with Secondary Embedded Application Object - Methods, systems, and non-transitory computer-readable media for embedding a secondary application object within a loader application are described herein. In some embodiments, a computing platform may initiate a first iOS application comprising a first name and a first instance of UIApplication comprising an NSObject class. Further, the computing platform may embed into the first iOS application, a second iOS application comprising a second name, a second instance of UIApplication, and a first derived class. Next, the computing platform may generate, based on NSObject and the first derived class, a second derived class. Additionally, the computing platform may generate an iPhone Application (IPA) file comprising the first iOS application, wherein the first iOS application comprises the second derived class and the second name. Subsequently, the computing platform may distribute, via a communication interface, the IPA file. | 2020-07-02 |
20200210210 | SYSTEMS AND METHODS FOR ENABLING WIDGET CUSTOMIZATION VIA EXTENSION POINTS - The present approach relates to updating a customer-extended application in such a manner that customer extensions of certain widgets in the application are preserved through the update. A cloud-computing system may facilitate customer extension of a widget of an initial version of the application by providing a first subset of script associated with one or more extension point hooks, such that the first subset of the script may receive customer script to extend the widget. In this manner, customers may extend aspects of the widget by modifying the script via the one or more extension point hooks to cater those extensible widgets to specific customer needs in a manner that can be maintained as updates occur over time, which may allow the customer to save time and resources that would otherwise be consumed by modifying the application after the enterprise upgrades the application. | 2020-07-02 |
20200210211 | PERSONALIZATION OF RESOURCE STRINGS IN A COMPUTING DEVICE - A method for personalizing resource strings within a user interface of a computing device. The method includes accessing a personalization editor via the computing device, and receiving a user modification to a resource string associated with one or more applications. The method also includes storing an original unmodified resource string associated with the resource string, the modified resource string, and the associated applications. The method also includes receiving a request from a first application for a first resource string associated with a specified resource identifier. The method also includes determining if the first resource string has an associated modified resource string stored in the personalized resource string database, and displaying the modified resource string based on the first resource string being determined to be associated with the modified resource string stored in the personalized resource string database. | 2020-07-02 |
20200210212 | SYSTEMS, METHODS, STORAGE MEDIA, AND COMPUTING PLATFORMS FOR END USER PRE-INFERRED CONTEXT DRIVEN APPLICATIONS - Systems, methods, storage media, and computing platforms for end user pre-inferred context driven applications are disclosed. Exemplary implementations may: perform upfront analysis of the end user's pre-inferred context; retrieve end user pre-inferred context metadata and/or data as the initial read operation; render a user interface that displays end user pre-inferred context metadata and/or data; retrieve additional end user pre-inferred context metadata and/or data; render a superimposed user interface with additional end user pre-inferred context metadata and/or data. | 2020-07-02 |
20200210213 | DISPLAY METHOD OF MULTI-APPLICATION BASED ON ANDROID SYSTEM, AND TERMINAL DEVICE - A display method of multi-application based on an Android system and a terminal device are provided. The display method includes: upon reception of a first instruction, determining whether a first application is a non-Android application, where the first instruction is configured to start the first application; when the first application is the non-Android application, invoking a non-native display module; and drawing a display interface of the first application based upon the non-native display module. | 2020-07-02 |
20200210214 | AUDITING CLIPBOARD OPERATIONS IN VIRTUAL DESKTOP ENVIRONMENTS - Techniques are described for auditing clipboard operations in virtual desktop environments. The auditing takes place by detecting clipboard operations that are being redirected between the virtual desktop and the client device and recording log entries containing information about each clipboard operation that was redirected. In order to reduce potential noise in the audit information, the redirection process is modified to delay populating the clipboard and recording log entries until a paste operation or other request for the clipboard is detected. In some situations, the clipboard redirection may be blocked based on the auditing capability, such as in cases where the auditing cannot take place due to the client computing device lacking support for auditing the clipboard operation and recording the necessary log entry. | 2020-07-02 |
20200210215 | SYSTEM AND METHOD FOR SIMPLE OBJECT ACCESS PROTOCOL (SOAP) INTERFACE CREATION - A computing system includes a server. The server is communicatively coupled to a data repository and is configured to perform operations comprising creating, via a visual information flow creation tool, at least one information flow object. The server is additionally configured to perform operations comprising creating a simple object access protocol (SOAP) interface for the at least one information flow object, and executing the at least one information flow object to communicate with an external system via the SOAP interface. | 2020-07-02 |
20200210216 | USAGE CHECKS FOR CODE RUNNING WITHIN A SECURE SUB-ENVIRONMENT OF A VIRTUAL MACHINE - A system may include an application server and one or more tenants served by the application server. The application server may host a virtual machine with multiple isolated sub-environments. Each tenant of the application server may request to run a program in a tenant-specific sub-environment of the virtual machine. The sub-environments may be configured so the execution of one tenant's code does not affect execution of another tenant's code or the hosting virtual machine, for example, by considering the resources used to execute the code. The application server may implement techniques to securely execute “untrusted” code, programmed using one or more different programming languages, in the sub-environments by enforcing resource restrictions and restricting the sub-environments from accessing the host's local file system. In this way, one tenant's code does not negatively impact execution of another tenant's code by using too many resources of the virtual machine. | 2020-07-02 |
20200210217 | CONTAINER-BASED MANAGEMENT METHOD FOR INTELLIGENT COMPONENTS - A method for managing AI components installed in containers is provided. The container-based component management method creates a container, installs at least one selected from a plurality of components in the container, and manages the components installed in the container. Accordingly, the execution priorities of the AI components installed in the containers can be managed and operated, such that degradation of system performance and frequent error occurrence can be prevented. | 2020-07-02 |
20200210218 | CONFIGURATION MANAGEMENT FOR HYBRID CLOUD - A system and method include tracking virtual entities in a hybrid cloud system. The method includes receiving, at a management processor of a virtual hybrid computing system, a migration request to move a virtual entity from a first cloud sub-system to a second cloud sub-system; adding, by the management processor, a source proxy object representing the virtual entity exiting the first cloud sub-system to a first database, wherein the source proxy object includes a destination identifier and an exit time; executing, by the management processor, a logical migration of the virtual entity from the first cloud sub-system to the second cloud sub-system; receiving, at the management processor, a management query including a query time corresponding to a time for which status information of the virtual entity is requested; determining, at the management processor, whether the query time is before the exit time of the source proxy object; and invoking, from the management processor, a first management software application program interface (API) for the first cloud sub-system when the query time is before the exit time of the source proxy object. | 2020-07-02 |
20200210219 | STORAGE CONTROL METHOD AND STORAGE CONTROLLER FOR USER INDIVIDUAL SERVICE ENVIRONMENT - Disclosed are a storage control method and a storage controller for a virtualization environment with which to provide a virtualization service. The disclosed storage control method may include adjusting an over-provisioning proportion for a virtual storage device allotted to each virtual machine according to an I/O workload pattern for each of the virtual machines; and allotting an over-provisioning space for each of the virtual storage devices according to the over-provisioning proportion. | 2020-07-02 |
20200210220 | SYSTEMS AND METHODS FOR JAVA VIRTUAL MACHINE MANAGEMENT - A virtual machine (VM) management utility tool may deploy an object model that may persist one or more virtual machine dependencies and relationships. Through a web front-end interface, for example, the VMs may be started in a specific order or re-booted, and the tool automatically determines the additional VMs that need to be re-booted in order to maintain the integrity of the environment. Through the web interface, for example, the object model may be managed, and start-up orders or VM dependencies may be updated. For VMs that may not start under load, the object model may block access to the VM until the VM is fully initialized. | 2020-07-02 |
20200210221 | Management of IoT Devices in a Virtualized Network - Specialized, service optimized virtual machines are assigned to handle specific types of Internet of Things (IoT) devices. An IoT context mapping policy engine within the context of a virtualized network function manages IoT context mapping policy functions in load balancers. The IoT context mapping policy functions select service optimized virtual machines based on IoT device IDs, and assign those virtual machines to handle the devices. The IoT context mapping policy functions provide load data to the IoT context mapping policy engine. Based on the load data, the IoT context mapping policy engine maintains appropriate scaling by creating or tearing down instances of the virtual machines. | 2020-07-02 |
20200210222 | VIRTUALIZATION SYSTEM, VIRTUALIZATION PROGRAM, AND STORAGE MEDIUM - A virtualization system in which a plurality of virtual machines operate on a single physical machine having a plurality of cores is provided. The virtualization system comprises: a plurality of hardware; a hardware control core that controls operation of the plurality of hardware; a plurality of virtual machine cores each operating a guest OS; and a shared memory that is a memory configured to be accessed by the hardware control core and the plurality of virtual machine cores concurrently and in parallel. | 2020-07-02 |
20200210223 | TASK EXECUTION WITH NON-BLOCKING CALLS - Techniques are disclosed relating to task execution with non-blocking calls. A computer system may receive a request to perform an operation comprising a plurality of tasks, each of which corresponds to a node in a graph. A particular one of the plurality of tasks specifies a call to a downstream service. The computer system may maintain a plurality of task queues, each of which is associated with a thread pool. The computer system may enqueue, in an order specified by the graph, the plurality of tasks in one or more of the plurality of task queues. The computer system may process the plurality of tasks. Such processing may include a thread of a particular queue in which the particular task is enqueued performing a non-blocking call to the downstream service. After processing the plurality of tasks, the computer system may return a result of performing the operation. | 2020-07-02 |
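The abstract above describes tasks forming a graph, enqueued in graph order, with calls to downstream services made without blocking a worker thread. A minimal sketch of that flow, assuming a simple dependency dict and simulating the non-blocking downstream call with asyncio (all names are illustrative, not from the patent):

```python
# Sketch: enqueue tasks in the order a dependency graph specifies, and make the
# downstream call without blocking (simulated here with asyncio.sleep(0)).
import asyncio

GRAPH = {"fetch": [], "enrich": ["fetch"], "respond": ["enrich"]}  # node -> deps

def topo_order(graph):
    """Return nodes so every dependency precedes its dependents."""
    order, seen = [], set()
    def visit(n):
        if n in seen:
            return
        seen.add(n)
        for dep in graph[n]:
            visit(dep)
        order.append(n)
    for node in graph:
        visit(node)
    return order

async def downstream_call(name):
    await asyncio.sleep(0)  # stand-in for a non-blocking call to a service
    return f"{name}:done"

async def run_operation(graph):
    results = {}
    for task in topo_order(graph):  # enqueue in the order the graph specifies
        results[task] = await downstream_call(task)
    return results

result = asyncio.run(run_operation(GRAPH))
```

In the patented design each queue has its own thread pool; the single-loop version here only illustrates the ordering and the non-blocking call.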
20200210224 | METHODS AND APPARATUS FOR VERIFYING COMPLETION OF GROUPS OF DATA TRANSACTIONS BETWEEN PROCESSORS - Methods and apparatus for acknowledging and verifying the completion of data transactions over an inter-processor communication (IPC) link between two (or more) independently operable processors. In one embodiment, a host-side processor delivers payloads over the IPC link using one or more transfer descriptors (TDs) that describe the payloads. The TDs are written in a particular order to a transfer descriptor ring (TR) in a shared memory between the host and peripheral processors. The peripheral reads the TDs over the IPC link and transacts, in proper order, the data retrieved based on the TDs. To acknowledge the transaction, the peripheral processor writes completion descriptors (CDs) to a completion descriptor ring (CR). The CD may complete one or more TDs; in optimized completion schemes the CD completes all outstanding TDs up to and including the expressly completed TD. | 2020-07-02 |
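The optimized completion scheme above — one completion descriptor (CD) acknowledging every outstanding transfer descriptor (TD) up to and including the one it names — can be modeled in a few lines. Field and class names below are assumptions for illustration:

```python
# Toy model: a single CD retires all outstanding TDs up to and including its index.
from collections import deque

class TransferRing:
    def __init__(self):
        self.outstanding = deque()   # TD indices written by the host, in order

    def write_td(self, idx):
        self.outstanding.append(idx)

    def complete_through(self, cd_idx):
        """Apply one CD: retire every TD up to and including cd_idx."""
        retired = []
        while self.outstanding and self.outstanding[0] <= cd_idx:
            retired.append(self.outstanding.popleft())
        return retired

tr = TransferRing()
for i in range(5):
    tr.write_td(i)
retired = tr.complete_through(3)   # one CD completes TDs 0..3
```

The batching is the point: the peripheral writes one CD instead of five, and the host still learns the fate of every earlier TD.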
20200210225 | ARCHITECTURE FOR SIMULATION OF DISTRIBUTED SYSTEMS - Systems and methods are provided for the deterministic simulation of distributed systems, such as vehicle-based processing systems. A distributed system may be represented as a plurality of subsystems or “nodelets” executing with a single process of a computing device during a simulation. The nodelets may communicate using in-process communication. A task scheduler can schedule the nodelets to execute separately in serially-occurring frames. A simulated clock may be used to mitigate the variability in timestamped data that may be caused by latency or jitter. | 2020-07-02 |
20200210226 | SERVER-TO-CONTAINER MIGRATION - For each server under consideration for container migration, whether the server has a value for a first parameter that precludes the server from being migrated to a container is determined. Each server having a value that precludes the server from being migrated to a container is removed from further consideration. For each server remaining under consideration, a value of the server for each second parameter of a number of second parameters is determined, and the values of the server for the second parameters are weighted to yield a weight for the server. The servers remaining under consideration for migration are ranked based at least on the weights for the servers, yielding an order in which the servers are to be migrated. | 2020-07-02 |
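The two-stage screening above — drop servers with a precluding first-parameter value, then rank the rest by a weighted score over second parameters — is straightforward to sketch. Parameter names, weights, and the "lighter servers migrate first" rule are invented for illustration:

```python
# Sketch: filter servers by a precluding parameter, weight the rest, rank them.
servers = [
    {"name": "s1", "os": "zos",   "cpu_util": 0.2, "io_rate": 0.1},  # precluded OS
    {"name": "s2", "os": "linux", "cpu_util": 0.5, "io_rate": 0.4},
    {"name": "s3", "os": "linux", "cpu_util": 0.1, "io_rate": 0.2},
]
PRECLUDED = {"zos"}                           # first-parameter values blocking migration
WEIGHTS = {"cpu_util": 0.7, "io_rate": 0.3}   # second-parameter weights

def migration_order(servers):
    eligible = [s for s in servers if s["os"] not in PRECLUDED]
    def weight(s):
        return sum(WEIGHTS[p] * s[p] for p in WEIGHTS)
    # Assumed ordering rule: lower weighted load migrates earlier.
    return [s["name"] for s in sorted(eligible, key=weight)]

order = migration_order(servers)
```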
20200210227 | COORDINATION OF DATA TRANSMISSION AND PROCESSING - Techniques are disclosed relating to coordinating processing data transmissions between computing systems over a network. In various embodiments, a system includes a coordinator that receives information about an expected batch transmission between a data publishing application at a first computing system and a data processing application at a second computing system. Based on the received information, the coordinator determines a time when the data publishing application is expected to publish the batch transmission to the data processing application and causes the second computing system to initiate execution of the data processing application in conjunction with the determined time to receive and process the batch transmission from the data publishing application. | 2020-07-02 |
20200210228 | Scheduling Applications in CPU and GPU Hybrid Environments - A method may include receiving instructions to process a first application in response to a user request. The method also includes determining whether to store the first application in a first processing queue or a second processing queue based on a comparison between a CPU processing cost associated with the first application and a GPU processing cost associated with the first application. Further, the method includes grouping a first set of applications stored in the first processing queue according to CPU grouping criteria and grouping a second set of applications stored in the second processing queue according to GPU batching criteria. The method also includes causing a CPU to process the grouped first set of applications and a plurality of GPUs to process the grouped second set of applications. | 2020-07-02 |
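The routing step above — place an application in the CPU queue or the GPU queue depending on which processing cost is lower — reduces to a comparison per application. Cost figures and names below are illustrative:

```python
# Sketch: route each application to the queue with the lower processing cost.
def route(apps):
    cpu_q, gpu_q = [], []
    for app in apps:
        if app["cpu_cost"] <= app["gpu_cost"]:
            cpu_q.append(app["name"])
        else:
            gpu_q.append(app["name"])
    return cpu_q, gpu_q

apps = [
    {"name": "report", "cpu_cost": 3,  "gpu_cost": 9},
    {"name": "train",  "cpu_cost": 40, "gpu_cost": 5},
]
cpu_q, gpu_q = route(apps)
```

In the patented method each queue's contents are then grouped (CPU grouping criteria, GPU batching criteria) before dispatch; the sketch covers only the routing comparison.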
20200210229 | REDUCING MINIMUM OPERATING VOLTAGE THROUGH HETEROGENEOUS CODES - Preferred embodiments of systems and methods are disclosed to reduce a minimal working voltage, Vmin, and/or increase the frequency of Vmin while executing multithreaded computer programs with better reliability, efficiency, and performance. A computer compiler compiles multiple copies of high-level code, each with a different set of resource allocators, so system resources are allocated during simultaneous execution of multiple threads in a way that allows reducing Vmin at a given reference voltage frequency and/or increasing the frequency of Vmin at a given Vmin value. | 2020-07-02 |
20200210230 | Multi-Processor Queuing Model - An apparatus includes multiple processors, a classifier and queue management logic. The classifier is configured to classify tasks, which are received for execution by the processors, into multiple processor queues, each processor queue associated with a single processor or thread, and configured to temporarily store task entries that represent the tasks, and to send the tasks for execution by the associated processors. The queue management logic is configured to set, based on queue-lengths of the queues, an affinity strictness measure that quantifies a strictness with which the tasks of a same classified queue are to be processed by a same processor, and to assign the task entries to the queues while complying with the affinity strictness measure. | 2020-07-02 |
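The queue-management idea above — derive an affinity strictness measure from queue lengths and decide whether a task must stay on its classified queue or may go elsewhere — can be illustrated with a simple skew threshold. The threshold rule and names are assumptions, not the patent's formula:

```python
# Toy affinity strictness: keep a task on its home queue unless that queue is
# much longer than the shortest queue, in which case spill it.
def assign(task_class, queues, max_skew=3):
    """queues: dict mapping class -> list of pending task entries."""
    home = queues[task_class]
    shortest = min(queues.values(), key=len)
    # Strict affinity while the home queue is not much longer than the shortest.
    strict = len(home) - len(shortest) <= max_skew
    target = home if strict else shortest
    target.append(task_class)
    return "home" if target is home else "spill"

queues = {"a": [1, 2, 3, 4, 5], "b": []}
placement = assign("a", queues, max_skew=3)
```

Strict affinity preserves cache locality; relaxing it under imbalance trades locality for throughput, which is the tension the strictness measure quantifies.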
20200210231 | HIGH AVAILABILITY CLUSTER MANAGEMENT OF COMPUTING NODES - Techniques and solutions are described for providing high-availability computing resources to service client requests. Groups of computing nodes are organized into loops, a given loop being configured to execute a particular subset of tasks, such as tasks with a hash value in a particular range serviced by a loop. Computing nodes within a loop can evaluate a task request to determine whether the task request conflicts with another task currently assigned to a node. If a computing node which sent out a task request determines that no conflict was identified, it can execute the task request. Communications within a loop can occur unidirectionally, such that a node which initiated a communication will receive the communication from the last loop node. Loops can be connected to form a ribbon, the ribbon providing a namespace for task execution, where hash ranges for the namespace are uniquely assigned to loops of the ribbon. | 2020-07-02 |
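The routing half of the ribbon design above — partition the namespace's hash range among loops and send each task to the loop owning its hash — looks like this in miniature. The hash function and the two-way range split are illustrative choices:

```python
# Sketch: route a task to the loop whose hash range contains the task's hash.
import hashlib

LOOPS = {"loop0": range(0, 128), "loop1": range(128, 256)}  # disjoint hash ranges

def owning_loop(task_key):
    h = hashlib.sha256(task_key.encode()).digest()[0]  # hash into 0..255
    for loop, rng in LOOPS.items():
        if h in rng:
            return loop
    raise ValueError("hash ranges do not cover the namespace")

loop = owning_loop("task-42")
```

Because the ranges are disjoint and cover the namespace, every task key maps to exactly one loop, which is what lets conflict checks stay local to that loop.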
20200210232 | INFORMATION PROCESSING APPARATUS AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PROGRAM - An information processing apparatus includes an execution section that executes a series of processes according to an execution instruction which is formed to include a series of execution commands, and an execution control section that, in a case where execution of the series of processes by the execution section is stopped in the middle of the execution and, thereafter, is executed again, controls the execution section to not execute a first process according to a first execution command, which is decided in advance, among the series of execution commands, and to perform re-execution from a second process according to a second execution command other than the first execution command among the series of execution commands. | 2020-07-02 |
20200210233 | OPERATION METHOD, DEVICE AND RELATED PRODUCTS - The invention relates to an operation method and device and a related product, the product comprises a control module, and the control module comprises an instruction caching unit, an instruction processing unit and a storage queue unit; the instruction caching unit is used for storing a calculation instruction associated with the artificial neural network operation; the instruction processing unit is used for analyzing the calculation instruction to obtain a plurality of operation instructions; the storage queue unit is used for storing instruction queues, and the instruction queues comprise a plurality of operation instructions or calculation instructions to be executed according to the sequence of the queues. Through the method, the operation efficiency of related products during operation of the neural network model can be improved. | 2020-07-02 |
20200210234 | DISTRIBUTED SYSTEM TASK MANAGEMENT USING A SIMULATED CLOCK - Systems and methods are provided for the deterministic simulation of distributed systems, such as vehicle-based processing systems. A distributed system may be represented as a plurality of subsystems or “nodelets” executing with a single process of a computing device during a simulation. A simulated clock may be used during execution of the nodelets to mitigate the variability in timestamped data that may be caused by latency or jitter. In some embodiments, all timestamps generated during a given frame of work will be assigned the same time value, regardless of when within the frame the timestamps were generated. A task scheduler can update the value of the simulated clock as execution proceeds through different frames of work. | 2020-07-02 |
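The simulated-clock rule above — every timestamp taken during one frame of work gets the same value, and the scheduler advances the clock only between frames — is easy to model. Class and method names are invented:

```python
# Sketch: a clock that is constant within a frame and advanced by the scheduler.
class SimClock:
    def __init__(self):
        self.frame_time = 0

    def now(self):
        return self.frame_time          # constant within a frame

    def advance_frame(self, dt=1):
        self.frame_time += dt           # only the scheduler moves time forward

clock = SimClock()
stamps_frame0 = [clock.now() for _ in range(3)]  # jitter-free: all identical
clock.advance_frame()
stamp_frame1 = clock.now()
```

This is what removes latency and jitter from the picture: two nodelets stamping data at different real moments within a frame still record identical times, so repeated simulation runs are deterministic.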
20200210235 | EFFICIENT METRIC TRACKING OF UNITS IN A CLIENT APPLICATION - A pool of units of a client application that satisfy one or more targeting rules associated with a primary unit is identified, wherein the pool of units includes a subset of a total number of units of the client application. A determination as to whether a first metric for a first unit of the pool of units satisfies one or more target criteria associated with the primary unit is made. In response to determining that the first metric satisfies the one or more target criteria, the first unit of the pool of units is selected as a target unit for the primary unit. | 2020-07-02 |
20200210236 | HIGHLY AVAILABLE DISTRIBUTED QUEUE USING REPLICATED MESSAGES - Methods and systems for implementing a highly available distributed queue using replicated messages are disclosed. An enqueue request is received from a client at a particular queue host of a plurality of queue hosts. The enqueue request comprises a message and a replica count greater than one. One or more copies of a replication request are sent from the particular queue host to one or more additional queue hosts. The replication request comprises the message. The quantity of copies of the replication request is determined based at least in part on the replica count. An initial replica of the message is enqueued at the particular queue host. One or more additional replicas of the message are enqueued at the one or more additional queue hosts. A quantity of the one or more additional replicas is determined based at least in part on the replica count. | 2020-07-02 |
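The replication flow above — the receiving host enqueues one replica locally and sends enough replication requests to peers that the total equals the replica count — can be sketched as below. Host names and the round-robin peer choice are assumptions:

```python
# Sketch: fan out replica_count - 1 replication requests after the local enqueue.
def enqueue(message, replica_count, hosts, primary="h0"):
    queues = {h: [] for h in hosts}
    queues[primary].append(message)               # initial replica on the receiver
    peers = [h for h in hosts if h != primary]
    for i in range(replica_count - 1):            # additional replicas on peers
        queues[peers[i % len(peers)]].append(message)
    return queues

queues = enqueue("msg", replica_count=3, hosts=["h0", "h1", "h2"])
replicas = sum(len(q) for q in queues.values())
```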
20200210237 | Methods, Systems and Computer Program Products for Optimizing Computer System Resource Utilization During In-Game Resource Farming - Disclosed are methods, systems and computer program products for optimizing computer system resource utilization during in-game resource farming. In some non-limiting embodiments or aspects, the present disclosure describes a method for optimizing computer system resource utilization during in-game resource farming, the method including detecting a gameplay state associated with an executing instance of a gaming application and based on the detected gameplay state selecting a gaming application mode from among a plurality of available gaming application modes. In some non-limiting embodiments or aspects, the method may also include implementing the selected gaming application mode for subsequent execution of the gaming application on the computing system. | 2020-07-02 |
20200210238 | HYBRID LOW POWER HOMOGENEOUS GRAPHICS PROCESSING UNITS - In an example, an apparatus comprises a plurality of execution units comprising at least a first type of execution unit and a second type of execution unit and logic, at least partially including hardware logic, to analyze a workload and assign the workload to one of the first type of execution unit or the second type of execution unit. Other embodiments are also disclosed and claimed. | 2020-07-02 |
20200210239 | System and method of scheduling and computing resource allocation optimization of machine learning flows - A distributed machine learning optimization flow processing engine is proposed. The processing engine takes into account the structure of the programming to assign proper allocation within a distributed computing infrastructure. The processing engine also takes into account availability and loads of the different computing elements within the distributed infrastructure to maximize their utilization according to the software being executed. | 2020-07-02 |
20200210240 | METHOD AND SYSTEM FOR DEADLINE INHERITANCE FOR RESOURCE SYNCHRONIZATION - Example embodiments of the present invention provide a method, a system, and a computer program product for managing tasks in a system. The method comprises running a first task on a system, wherein the first task has a first priority of execution time and the execution of which first task locks a resource on the system, and running a second task on the system, wherein the second task has a second priority of execution time earlier than the first priority of execution time of the first task and the execution of which second task requires the resource on the system locked by the first task. The system then may promote the first task having the later first priority of execution time to a new priority of execution time at least as early as the second priority of execution time of the second task and resume execution of the first task having the later first priority of execution time. | 2020-07-02 |
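The inheritance rule above — when a task with an earlier deadline blocks on a resource held by a later-deadline task, the holder is promoted to at least the blocker's deadline — fits in a few lines. Representing deadlines as plain integers and tasks as dicts is an illustrative simplification:

```python
# Sketch: deadline inheritance — the lock holder inherits the earlier deadline
# of the task it is blocking, so it cannot be preempted into a deadlock-like stall.
def inherit_deadline(holder, blocker):
    """Promote the lock holder so it runs no later than the blocked task."""
    if blocker["deadline"] < holder["deadline"]:
        holder["deadline"] = blocker["deadline"]   # inherit the earlier deadline
    return holder

t1 = {"name": "t1", "deadline": 100}  # holds the resource
t2 = {"name": "t2", "deadline": 20}   # needs the resource, earlier deadline
t1 = inherit_deadline(t1, t2)
```

This is the deadline-scheduling analogue of priority inheritance: the promotion lasts only until the holder releases the resource, after which its original deadline would be restored (not shown).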