18th week of 2020 patent application highlights part 50 |
Patent application number | Title | Published |
20200133637 | SELECTING AN ITH LARGEST OR A PTH SMALLEST NUMBER FROM A SET OF N M-BIT NUMBERS - A method of selecting, in hardware logic, an ith largest or a pth smallest number from a set of n m-bit numbers. | 2020-04-30 |
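The published abstract for 20200133637 is cut short, but the title points at a classic selection problem. One well-known approach (a software sketch of bit-serial radix selection, an assumption here — the patent's actual hardware logic may differ) scans bit positions from most to least significant, keeping only the candidates consistent with the bits chosen so far:

```python
def ith_largest_bitwise(values, i, m):
    """Select the i-th largest (1-indexed) of n m-bit numbers by scanning
    bit positions from MSB to LSB, the way a bit-serial selection circuit
    would narrow the candidate set one bit per cycle."""
    candidates = list(values)
    result = 0
    for bit in range(m - 1, -1, -1):
        ones = [v for v in candidates if (v >> bit) & 1]
        if len(ones) >= i:
            # The i-th largest must have a 1 in this position.
            candidates = ones
            result |= 1 << bit
        else:
            # It has a 0 here; discard the larger 'ones' group and adjust i.
            i -= len(ones)
            candidates = [v for v in candidates if not (v >> bit) & 1]
    return result
```

The pth smallest of n numbers is simply the (n + 1 - p)th largest, so one routine covers both halves of the title.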
20200133638 | SYSTEM AND METHOD FOR A COMPUTATIONAL NOTEBOOK INTERFACE - Example implementations described herein are directed to an improved interface for a computational notebook that represents cells of the computational notebook in a graph form involving nodes and links. Through such an improved interface, the execution order of the cells can be immediately understood, as well as the dependencies between different cells of executable code and the variables contained therein. | 2020-04-30 |
20200133639 | CLIENT-SIDE SOURCE CODE DEPENDENCY RESOLUTION IN LANGUAGE SERVER PROTOCOL-ENABLED LANGUAGE SERVER - Examples of techniques for client-side source code dependency resolution in a language server protocol (LSP) enabled language server are disclosed. In one example, a method includes parsing, by the LSP-enabled language server, a source code file received from a client language editor to identify dependencies in the source code file. The method further includes, based at least in part on identifying a dependency in the source code file during the parsing, transmitting, by the LSP-enabled language server, a diagnostic message to the client language editor to request resolution of the dependency. The method further includes receiving, by the LSP-enabled language server, a dependency resolution from the client language editor, the dependency resolution being responsive to the diagnostic message. The method further includes continuing, by the LSP-enabled language server, the parsing the source code file based at least in part on the dependency resolution received from the client language editor. | 2020-04-30 |
20200133640 | DIGITAL COMPLIANCE PLATFORM - Provided is a method and system for building a compliance software service using reusable and configurable components. In one example, the method may include receiving a request to build a software in association with an identified jurisdiction from among a plurality of jurisdictions, retrieving a plurality of configurable software components which comprise built-in functionality that is generic across the plurality of jurisdictions, dynamically configuring non-generic functionality for the identified jurisdiction within the plurality of configurable software components based on inputs received from a user, and creating a software program for the identified jurisdiction based on the dynamically configured software components and storing a file including the created software program in a storage device. | 2020-04-30 |
20200133641 | MACHINE LEARNING MODELS FOR CUSTOMIZATION OF A GRAPHICAL USER INTERFACE - Techniques disclosed herein relate generally to generating a user-specific customized graphical user interface. More specifically, some embodiments disclosed herein relate to implementing a plurality of machine learning models to a plurality of aspects of a user's interaction with a cloud-based application suite. In one embodiment, the machine learning models may generate one or more aspects of a graphical user interface. The graphical user interface may then be used to interact with the cloud-based application suite. | 2020-04-30 |
20200133642 | USER INTERFACE (UI) DESIGN SYSTEM MAPPING PLATFORM FOR AUTOMATICALLY MAPPING DESIGN SYSTEM COMPONENTS TO A DESIGN DOCUMENT FILE FROM A DESIGN TOOL TO GENERATE A MAPPED SPECIFICATION - A user interface (UI) design system mapping platform is provided that can receive and process a design document file and a library file to automatically generate a mapped specification that maps the design document file to the library file. The library file can be generated at a design system and includes design system components for development reuse and their definitions. The design system components can be assembled, via an external design tool, to build user interfaces, applications or layouts. The design document file can be imported from the design tool to the UI design system mapping platform. The design document file is generated based on selected ones of the design system components and includes various design artifacts including layout and assets that describe a user interface of an application. | 2020-04-30 |
20200133643 | Automatic Identification of Types of User Interface Components - Techniques are disclosed relating to identifying types of user interface components based on one or more existing user interfaces. The disclosed techniques may include, for example, determining a plurality of visible elements of a graphical user interface, based on user interface code. Additionally, techniques include determining coordinates for bounding boxes for ones of the plurality of visible elements, based on the user interface code. Disclosed techniques may also include grouping the visible elements into at least first and second groups and determining types of elements within the first and second groups. The techniques also include, in response to detecting a match between the types of elements within the first and second groups, determining a similarity metric for the first and second groups based on the coordinates of determined bounding boxes within the first and second groups. In response to the similarity metric meeting a threshold value, the techniques include storing information defining a component type that corresponds to the first and second groups. | 2020-04-30 |
20200133644 | Automatic Classification of User Interface Elements - Techniques are disclosed relating to classifying user interface elements of existing user interfaces. This may include for example, storing information specifying known metadata values for a plurality of metadata fields and indications of relationships between ones of the known metadata values and a plurality of types of visible user interface elements. The techniques also include determining respective metadata values for a plurality of visible elements of a graphical user interface, where the metadata values are included in user interface code that specifies the plurality of visible elements. The disclosed techniques also include, based on the stored indications of relationships and the determined metadata values, scoring ones of the plurality of visible elements to generate score values for each of the plurality of types of visible elements. Finally, the disclosed techniques include, based on the scoring, classifying the plurality of visible elements according to the plurality of types of visible elements and storing information specifying the classified elements. | 2020-04-30 |
20200133645 | USER INTERFACE AND FRONT END APPLICATION AUTOMATIC GENERATION - A method for generating a user interface for facilitating user interaction with a software application is provided. The method includes identifying an application; displaying, to a user, a list of selectable features; receiving user selections from the list; and automatically generating a user interface based on the identified application and the received user selections. As a result of the automatic generation of the user interface, command logic is generated and integrated into the user interface, thereby providing the user with an integration platform to be further connected with a database, back-end system, or network. | 2020-04-30 |
20200133646 | TRANSFORMATION OF INTER-ORGANIZATION PROCESS FOR EXECUTION VIA BLOCKCHAIN - An example operation may include one or more of receiving a plurality of state representations of a plurality of off-chain systems for performing a multi-party process via a blockchain, wherein each state representation identifies send and receive events of a respective off-chain system, removing one or more events from a state representation of an off-chain system to generate a reduced state representation, generating executable chaincode for the blockchain based on the plurality of state representations including the reduced state representation, and storing the generated chaincode via a blockchain node of the blockchain. | 2020-04-30 |
20200133647 | GENERATING CODE FOR FUNCTION CALLS THAT USE MULTIPLE ADDRESSING MODES - A compiler and linker include multiple addressing mode resolvers that generate code to resolve a plurality of function calls that use different addressing modes. A first addressing mode is defined where a first address for first data is specified as an offset from a base pointer. A second, relative addressing mode is defined where a second address for second data is specified as an offset from an address of an instruction that references the second data. The generated code assures correct operation when functions with different addressing modes are included in the computer program. The generated code preserves a base pointer when executing a function that uses relative addressing, when needed. The compiler inserts one or more relocation markers that trigger certain functions in the linker. A linker resolves the relocation markers inserted by the compiler, and generates code, when needed, that handles a mismatch between addressing modes. | 2020-04-30 |
20200133648 | GENERATING CODE FOR FUNCTION CALLS THAT USE MULTIPLE ADDRESSING MODES - A compiler and linker include multiple addressing mode resolvers that generate code to resolve a plurality of function calls that use different addressing modes. A first addressing mode is defined where a first address for first data is specified as an offset from a base pointer. A second, relative addressing mode is defined where a second address for second data is specified as an offset from an address of an instruction that references the second data. The generated code assures correct operation when functions with different addressing modes are included in the computer program. The generated code preserves a base pointer when executing a function that uses relative addressing, when needed. The compiler inserts one or more relocation markers that trigger certain functions in the linker. A linker resolves the relocation markers inserted by the compiler, and generates code, when needed, that handles a mismatch between addressing modes. | 2020-04-30 |
20200133649 | PROCESSOR CONTROLLED PROGRAMMABLE LOGIC DEVICE MODIFICATION - Systems or methods of the present disclosure may provide a computing system that includes a processor and one or more implemented designs in one or more configurable circuits of a programmable logic fabric. The computing system also includes a memory coupled to the programmable logic fabric. The computing system further includes an accelerator that is located in-line between the one or more configurable circuits and the memory. The accelerator is defined using a low-level programming language. The processor is coupled to the accelerator and is configured to enable modification of the definition of the accelerator by converting a high-level programming language to the low-level programming language to change the way that the accelerator operates. | 2020-04-30 |
20200133650 | SECURITY MODEL FOR LIVE APPLICATIONS IN A CLOUD COLLABORATION PLATFORM - Disclosed herein are system, method, and computer program product embodiments for providing a security model to customizable live applications in a cloud collaboration platform. The security approach may dedicate a frame to each live application, serving the frame from a different domain than a document in which the live application is embedded. This approach ensures that more stringent security requirements may be required of the live application and allows the data presented to the live application to be narrowly tailored. The security model may further leverage sandbox attributes and content-security policies to restrict the behavior of sandboxed and non-sandboxed live applications in accordance with best security practices. | 2020-04-30 |
20200133651 | RELEASE AUTOMATION SERVICE IN SOFTWARE DEVELOPMENT TOOLS - Data is received at a release automation system indicating a project opened in an integrated development environment (IDE) on another system. A deployment model of the release automation system is identified as associated with the project, the deployment model including at least a definition of a workflow to be executed to perform automated deployment of applications and a definition of an environment including one or more target computing systems. Data is received at the release automation system indicating a user interaction with the IDE and a particular definition within the deployment model is determined as being relevant to the user interaction and the project. An interface between the IDE and the release automation system is used to cause information from the particular definition to be presented in a graphical user interface of the IDE based on the user interaction. | 2020-04-30 |
20200133652 | TECHNIQUES OF UPDATING HOST DEVICE FIRMWARE VIA SERVICE PROCESSOR - In an aspect of the disclosure, a method, a computer-readable medium, and a computer system are provided. The computer system includes an initialization component of a host. The initialization component requests from a service processor of the host a first replacement firmware image of a first device of the host. The initialization component then receives the first replacement firmware image from the service processor. The initialization component further provides the first replacement firmware image to a first updating program. The initialization component uses the first updating program to replace the first existing firmware image on the first device with the first replacement firmware image. | 2020-04-30 |
20200133653 | GENERATION OF RECOMMENDED MULTIFUNCTION PERIPHERAL FIRMWARE AND APPLICATIONS BASED ON GROUP MACHINE LEARNING - A system and method for machine learning generation of a customized and optimized list of candidate software for use on devices such as MFPs includes a processor and associated memory. A network interface communicates data with a plurality of multifunction peripherals. Inventory data corresponding to an inventory of software associated with each of a plurality of multifunction peripherals is received, along with software installation data corresponding to software installed on each device. Device operation data corresponding to document processing operations completed on each multifunction peripheral is also received. The processor generates software installation recommendations specific to each multifunction peripheral in accordance with inventory data, software installation data and device operation data received from each of the plurality of multifunction peripherals. | 2020-04-30 |
20200133654 | METHOD FOR REMOTELY UPDATING FIRMWARE OF FIELD PROGRAMMABLE GATE ARRAY - A method for remotely updating firmware of a field programmable gate array (FPGA) includes: by a controller, transmitting a storing instruction and relaying an entry of configuration data received from a remote device to a processor of the FPGA; by the processor, performing an updating subtask to store a file segment recorded in the entry of configuration data in an update-storage area indicated by location information recorded in the entry of configuration data; by the controller, determining whether the processor has successfully completed the updating subtask, and when affirmative, enabling the remote device to transmit another entry of configuration data; and repeating the aforementioned steps. | 2020-04-30 |
20200133655 | UPDATE METHOD, SYSTEM, END NODE AND ELECTRONIC DEVICE - An updating method including: acquiring, by an active update node, update information from a server; and acquiring or receiving, by a passive update node, the update information from the active update node through a local network. By using the update method and system, end node and electronic device provided in the present disclosure, an active update node of a plurality of end nodes in the same local network acquires update information from a server, and a passive update node acquires the update information from the active update node, thereby reducing the number of end nodes acquiring the update information from the server, and reducing the burden on the cloud. | 2020-04-30 |
20200133656 | SYSTEMS AND METHODS FOR DIFFERENTIAL BUNDLE UPDATES - In an embodiment, a system includes a processor coupled with a data store, the processor configured to: receive a client product version number from a client device; identify a differential bundle based on a difference between the client product version number and a current product version number, wherein the differential bundle comprises a set of bytewise differences between an executable client product binary file associated with the client product version number and an executable current product binary file associated with the current product version number; determine whether the differential bundle is available in the data store; retrieve the differential bundle from the data store in response to determining that the differential bundle is available in the data store; produce the differential bundle in response to determining that the differential bundle is not available in the data store; and send the differential bundle to the client device. | 2020-04-30 |
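The core of 20200133656 is a bundle of bytewise differences between two binaries. A minimal sketch of that idea, assuming equal-length binaries for simplicity (real differential formats also handle insertions and deletions), records runs of differing bytes as (offset, replacement) pairs and replays them on the client side:

```python
def make_diff(old: bytes, new: bytes):
    """Record runs of bytewise differences as (offset, replacement) pairs.
    Assumes len(old) == len(new); a production differential bundle would
    also encode length changes."""
    diffs, run_start = [], None
    for idx in range(len(old)):
        if old[idx] != new[idx]:
            if run_start is None:
                run_start = idx
        elif run_start is not None:
            diffs.append((run_start, new[run_start:idx]))
            run_start = None
    if run_start is not None:
        diffs.append((run_start, new[run_start:]))
    return diffs

def apply_diff(old: bytes, diffs):
    """Rebuild the current binary from the client's binary plus the bundle."""
    patched = bytearray(old)
    for offset, replacement in diffs:
        patched[offset:offset + len(replacement)] = replacement
    return bytes(patched)
```

Shipping only the (offset, replacement) pairs is what makes the bundle small relative to the full binary when versions are close.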
20200133657 | METHOD, ELECTRONIC DEVICE AND COMPUTER READABLE STORAGE MEDIUM OF STORAGE MANAGEMENT - Storage management techniques involve: generating, based on a first file created at a first time, a first package including first file information, information regarding a physical location of the first file and a first block associated with the first file; generating an upgrade package from a second package based on the first package, the second package based on a second file created at a second time prior to the first time, the second package including second file information, information regarding a physical location of the second file and a second block associated with the second file; and transmitting the upgrade package to an HCI system node for upgrade. The techniques may also include upgrading the node based on the upgrade package and the second block. Accordingly, installation time is saved, the normal operation of the node is ensured and the influence over other nodes is reduced. | 2020-04-30 |
20200133658 | CHANGE GOVERNANCE USING BLOCKCHAIN - A method is used in maintaining a software project in computing environments. A software project maintenance module processes at least one transaction associated with source code of the software project, where the transaction requires validation by a plurality of contributors in a decentralized network. The software project maintenance module updates the decentralized network by adding the at least one processed transaction as a block in the decentralized network. | 2020-04-30 |
20200133659 | PACKAGED APPLICATION RESOURCES FOR MOBILE APPLICATIONS - Some embodiments provide a program. The program receives through an application operating on the mobile device a request for a set of visualizations of data. The program further determines a version of application resources to use for generating the set of visualizations of data. Based on the version of application resources, the program also determines whether a set of application resources having the version is included in the application. Upon determining that the set of application resources having the version is included in the application, the program further uses the set of application resources to generate the set of visualizations of data. Upon determining that the set of application resources having the version is not included in the application, the program retrieves the set of application resources from a computing system and uses the retrieved set of application resources to generate the set of visualizations of data. | 2020-04-30 |
20200133660 | TECHNOLOGY COMPATIBILITY DURING INTEGRATION OF AN IT SYSTEM - A method of integrating a computing system that includes: identifying component products for the computing system; identifying possible versions of each of the component products; prioritizing the possible versions for each of the component products so as to emphasize those versions that are most important to a user of the computing system; selecting one product as a primary product of the computing system with the remaining products being subsidiary products; forming a technology matrix of possible combinations of primary product, subsidiary products and possible versions of the subsidiary products; and selecting the combination of primary product, subsidiary product and subsidiary product version having the highest prioritization; wherein the method is implemented by a processor. | 2020-04-30 |
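The technology matrix in 20200133660 amounts to enumerating every combination of subsidiary-product versions and keeping the most highly prioritized one. A toy sketch of that enumeration (the data shape and scoring-by-sum are assumptions for illustration, not the patent's method):

```python
from itertools import product

def best_combination(primary, subsidiaries):
    """subsidiaries maps each subsidiary product to [(version, priority), ...],
    higher priority meaning more important to the user. Enumerate the full
    technology matrix and keep the combination with the highest total priority."""
    names = sorted(subsidiaries)
    best, best_score = None, float("-inf")
    for combo in product(*(subsidiaries[n] for n in names)):
        score = sum(priority for _, priority in combo)
        if score > best_score:
            best_score = score
            best = {n: v for n, (v, _) in zip(names, combo)}
    return primary, best
```

Brute-force enumeration is fine while the matrix is small; for many products and versions a real integration tool would prune dominated versions first.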
20200133661 | SYSTEM AND METHOD FOR AUTOMATED GENERATION OF SOFTWARE DEVELOPMENT LIFE CYCLE AUDIT DOCUMENTATION - An embodiment of the present invention may be directed to an automated generation of software development life cycle audit documentation tool that enables development teams to move from point-in-time documentation to living documentation while still satisfying software development life cycle (SDLC) audit and risk concerns. An embodiment of the present invention is directed to generating release artifacts for application teams, to avoid costly application development time being used to fill in paperwork. An embodiment of the present invention may run as a Command Line Interface, or as a part of the build pipeline for application teams. This enables development teams to spend their time focusing on delivering high quality business solutions in a rapid fashion. | 2020-04-30 |
20200133662 | AUTOMATIC GENERATION OF DOCUMENTATION AND AGGREGATION OF COMMUNITY CONTENT - A system and method may provide assistance to programmers related to the creation of documentation. In some aspects, the system may automatically generate documentation-related text in source code. In other aspects, the system may automatically detect the need for the programmer to edit long-from documentation when changes are detected in code. Moreover, the system may provide for the aggregation or creation of documentation content based on one or more data sources, such as by embedding links to those data sources into documentation. In some aspects, some components of the system are based on machine learning methods and are trained on collected data. | 2020-04-30 |
20200133663 | AUTOMATIC GENERATION OF MULTI-SOURCE BREADTH-FIRST SEARCH FROM HIGH-LEVEL GRAPH LANGUAGE FOR DISTRIBUTED GRAPH PROCESSING SYSTEMS - Techniques are described herein for automatic generation of multi-source breadth-first search (MS-BFS) from high-level graph processing language that can be executed in a distributed computing environment. In an embodiment, a method involves a computer analyzing original software instructions. The original software instructions are configured to perform multiple breadth-first searches to determine a particular result. Each breadth-first search originates at each of a subset of vertices of a graph. Each breadth-first search is encoded for independent execution. Based on the analyzing, the computer generates transformed software instructions configured to perform a MS-BFS to determine the particular result. Each of the subset of vertices is a source of the MS-BFS. In an embodiment, the transformed software instructions comprise a node iteration loop and a neighbor iteration loop, and the plurality of vertices of the distributed graph comprise active vertices and neighbor vertices. The node iteration loop is configured to iterate once per each active vertex of the plurality of vertices of the distributed graph, and the node iteration loop is configured to determine the particular result. The neighbor iteration loop is configured to iterate once per each active vertex of the plurality of vertices of the distributed graph, and each iteration of the neighbor iteration loop is configured to activate one or more neighbor vertices of the plurality of vertices for the following iteration of the neighbor iteration loop. | 2020-04-30 |
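The MS-BFS transformation in 20200133663 replaces many independent BFS runs with one sweep in which each source owns a bit in a per-vertex mask. A compact single-machine sketch of the standard MS-BFS idea (this is the textbook technique, not the patent's generated distributed code) shows the node and neighbor iteration loops:

```python
def ms_bfs(adj, sources):
    """Run BFS from all sources at once. seen[v] is a bitmask of which
    sources have reached v; dist[(s, v)] is the BFS level at which
    source s first reached vertex v."""
    seen = {v: 0 for v in adj}
    frontier, dist = {}, {}
    for bit, s in enumerate(sources):
        seen[s] |= 1 << bit
        frontier[s] = frontier.get(s, 0) | (1 << bit)
        dist[(s, s)] = 0
    level = 0
    while frontier:
        level += 1
        nxt = {}
        for v, mask in frontier.items():      # node iteration loop
            for w in adj[v]:                  # neighbor iteration loop
                new = mask & ~seen[w]         # sources reaching w for the first time
                if new:
                    seen[w] |= new
                    nxt[w] = nxt.get(w, 0) | new
                    for bit, s in enumerate(sources):
                        if (new >> bit) & 1:
                            dist[(s, w)] = level
        frontier = nxt
    return dist
```

The payoff is that a vertex's adjacency list is traversed once per level for all sources together, instead of once per source.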
20200133664 | VIOLATION MATCH SETS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for attributing violation introductions and removals. One of the methods includes receiving a request to compute a number of violation introductions attributable to a particular developer entity in a plurality of ancestor snapshots of an original snapshot in a revision graph of a code base. A respective match set for each of a plurality of violations occurring in the plurality of ancestor snapshots of the original snapshot are computed, wherein each match set for a particular violation in a particular snapshot includes any transitively matching violations in the ancestor snapshots of the particular snapshot that transitively match the particular violation. A count of unique match sets having at least one violation that was introduced by the particular developer entity is computed. The number of unique match sets is provided in response to the request. | 2020-04-30 |
20200133665 | METHODS, SYSTEMS, AND ARTICLES OF MANUFACTURE TO PERFORM HETEROGENEOUS DATA STRUCTURE SELECTION VIA PROGRAMMER ANNOTATIONS - Methods, apparatus, systems, and articles of manufacture to perform heterogeneous data structure selection via programmer annotations. An example apparatus includes a phase tracker to identify a first phase and a second phase, a cost predictor to estimate interaction costs of interacting with respective types of data structures within the first phase and the second phase, a tree constructor to construct a tree corresponding to a first data structure type, the tree including a first node in the first phase, a second node in the second phase, and an edge connecting the first node and the second node, the second node representing a second data structure type different from the first data structure type, a transformation cost calculator to calculate a transformation cost for the edge, and a branch selector to select a sequence of data structures based on the combined interaction costs and transformation costs. | 2020-04-30 |
20200133666 | APPLICATION LIFECYCLE MANAGEMENT SYSTEM - A computer-implemented method or system is provided to automate actions for one or more applications executed via a platform using at least one virtual machine in a guest system. Each virtual machine includes a guest operating system, a guest agent and an application to be executed on the virtual machine. The method or system stores in a memory user-defined automation actions and causal relationships between the user-defined automation actions from which an automation graph is derived for the application to be executed on the virtual machine on the guest system; launches the guest system and the virtual machine via the platform; and executes the user-defined automation actions via the guest agent of the virtual machine according to the automation graph after the guest system and the virtual machine are launched. | 2020-04-30 |
20200133667 | MICROCONTROLLER CAPABLE TO EXECUTE A CONFIGURABLE PROCESSING IN AN ACCELERATED MANNER - A microcontroller includes a processor and a hardware accelerator coupled to the processor. The microcontroller is programmed to execute a processing operation able to be parameterized by at least one parameter by delivering the at least one parameter from the processor to the hardware accelerator. The microcontroller can be part of an on-board vehicle computer. | 2020-04-30 |
20200133668 | DATA STORAGE OPTIMIZATION USING REPLICATION STATISTICS TO AUTOMATICALLY GENERATE NVMe STREAM IDENTIFIERS - An aspect of optimizing storage of data in a data replication system includes, for a plurality of write requests received from a source site, determining transfer statistics corresponding to each of the write requests and updating a table with the transfer statistics. An aspect also includes grouping pages in the table having common transfer statistics, assigning a unique non-volatile memory express (NVMe) stream identifier (ID) to each of the groups, and identifying grouped pages based on the assigned NVMe stream ID. An aspect further includes selecting a storage optimization technique for each of the groups based on the common transfer statistics and storing data of the write requests for each of the groups according to the selected optimization technique. | 2020-04-30 |
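20200133668 turns on grouping pages with common transfer statistics and tagging each group with a stream identifier. A minimal sketch of that grouping step (the statistics tuple and integer IDs are hypothetical stand-ins for real replication statistics and NVMe stream IDs):

```python
from collections import defaultdict
from itertools import count

def assign_stream_ids(page_stats):
    """Group pages whose transfer statistics match exactly and hand each
    group a unique integer stream identifier (stand-in for an NVMe stream ID).
    page_stats maps page -> hashable statistics tuple."""
    groups = defaultdict(list)
    for page, stats in page_stats.items():
        groups[stats].append(page)
    ids = count(1)
    # stats -> (stream_id, pages sharing those statistics)
    return {stats: (next(ids), pages) for stats, pages in sorted(groups.items())}
```

Once pages are keyed by stream ID, each group can be routed to a storage optimization suited to its access pattern, which is the abstract's final step.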
20200133669 | TECHNIQUES FOR DYNAMIC PROXIMITY BASED ON-DIE TERMINATION - Techniques for proximity based on-die termination (ODT) include a memory device determining what ODT setting to apply during execution of a command by another memory device that is coupled to a same data channel as the memory device based on the memory device's proximity to the other memory device and whether the command is a read command or a write command. | 2020-04-30 |
20200133670 | PROBABILISTIC TECHNIQUES FOR FORMATTING DIGITAL COMPONENTS - Methods, systems, and apparatus, including an apparatus for using probabilistic techniques to provide reformatted versions of digital components. In one aspect, a process includes obtaining data specifying a distribution parameter limit for a given reformattable digital component that is eligible for reformatting using a set of digital component extensions. For each of multiple digital component requests, a determination is made that a given digital component extension has an additional selection requirement that, when combined with a base selection requirement for the given reformattable digital component, would exceed the distribution parameter limit. A determination is made, using a probabilistic technique, of a probability at which the given digital component extension will be selected for use in generating a reformatted version of the given reformattable digital component such that an aggregate selection requirement for distributing the given reformattable digital component in response to requests over time is within the distribution parameter limit. | 2020-04-30 |
20200133671 | PREFETCH STREAM ALLOCATION FOR MULTITHREADING SYSTEMS - A computer system for prefetching data in a multithreading environment includes a processor having a prefetching engine and a stride detector. The processor is configured to perform requesting data associated with a first thread of a plurality of threads, and prefetching requested data by the prefetching engine, where prefetching includes allocating a prefetch stream in response to an occurrence of a cache miss. The processor performs detecting each cache miss, and based on detecting the cache miss, monitoring the prefetching engine to detect subsequent cache misses and to detect one or more events related to allocations performed by the prefetching engine. The processor further performs, based on the stride detector detecting a selected number of events, directing the stride detector to switch from the first thread to a second thread by ignoring stride-1 allocations for the first thread and evaluating stride-1 allocations for potential strided accesses on the second thread. | 2020-04-30 |
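The stride detector in 20200133671 watches per-thread access patterns and only commits to a thread once it sees enough consecutive stride-1 events. A toy software sketch of that detection idea (thresholds, unit-stride-only tracking, and the class shape are illustrative assumptions, not the patented design):

```python
class StrideDetector:
    """Per-thread stride-1 run detector: observe() returns True once a
    thread has produced `threshold` consecutive accesses whose addresses
    differ by exactly one cache line, suggesting a strided stream worth
    allocating a prefetch stream for."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.last = {}   # thread -> last line address seen
        self.runs = {}   # thread -> length of current stride-1 run
    def observe(self, thread, line_addr):
        prev = self.last.get(thread)
        if prev is not None and line_addr - prev == 1:
            self.runs[thread] = self.runs.get(thread, 0) + 1
        else:
            self.runs[thread] = 0   # run broken (or first access)
        self.last[thread] = line_addr
        return self.runs[thread] >= self.threshold
```

In hardware the equivalent state would live in a small table indexed by thread, with the "switch threads" decision taken when another thread's run crosses the threshold first.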
20200133672 | HYBRID AND EFFICIENT APPROACH TO ACCELERATE COMPLICATED LOOPS ON COARSE-GRAINED RECONFIGURABLE ARRAYS (CGRA) ACCELERATORS - A coarse-grained reconfigurable array includes a processing element array, instruction memory circuitry, data memory circuitry, and an instruction fetch unit. The processing element array includes a number of processing elements. The instruction memory circuitry is coupled to the processing element array and configured to store a set of instructions. During each one of a number of processing cycles, the instruction memory circuitry provides instructions from the set of instructions to the processing elements. The instruction fetch unit is coupled to the processing element array and the instruction memory circuitry and configured to receive a result of a conditional instruction evaluated by one of the processing elements and provide the instruction fetch signals based at least in part on the result of the conditional instruction such that only instructions associated with a correct branch of the conditional instruction are provided to the plurality of processing elements. | 2020-04-30 |
20200133673 | CIRCUITRY AND METHOD - Circuitry comprises a prediction register having one or more entries each storing prediction data; prediction circuitry configured to map a value of the stored prediction data to a prediction of whether or not a branch represented by a given branch instruction is predicted to be taken, according to a data mapping; and control circuitry configured to selectively vary the data mapping between the prediction and the value of the stored prediction data. | 2020-04-30 |
20200133674 | CIRCUITRY AND METHOD - Circuitry comprises a prediction register storing a plurality of entries each having respective data values for association with one or more branch instructions; prediction circuitry to detect, using prediction data derived by a mapping function from the stored data values associated with a given branch instruction, whether or not a branch represented by the given branch instruction is predicted to be taken; update circuitry to modify the stored data values associated with the given branch instruction in dependence upon a resolution of whether the branch represented by the given branch instruction is taken or not; and control circuitry configured to selectively alter one or more of the data values other than data values associated with the given branch instruction. | 2020-04-30 |
20200133675 | APPARATUS AND METHOD OF MANAGING PREDICTION MECHANISMS USED WHEN FETCHING INSTRUCTIONS - Aspects of the present disclosure relate to an apparatus comprising instruction execution circuitry and fetch circuitry to fetch, from memory, instructions for execution by the instruction execution circuitry. The fetch circuitry comprises a plurality of prediction components, each prediction component being configured to predict instructions in anticipation of the predicted instructions being required for execution by the instruction execution circuitry. The fetch circuitry is configured to fetch instructions in dependence on the predicting. The apparatus further comprises prediction tracking circuitry to maintain, for each of a plurality of execution regions, a prediction performance metric for each prediction component. The fetch circuitry is configured, based on at least one of the prediction performance metrics for a given execution region, to implement a prediction adjustment action in respect of at least one of the prediction components. | 2020-04-30 |
20200133676 | PARALLELIZATION OF NUMERIC OPTIMIZERS - A method for parallelization of a numeric optimizer includes detecting an initialization of a numeric optimization process of a given function. The method computes a vector-distance between an input vector and a first neighbor vector of a set of neighbor vectors. The method predicts, using the computed vector-distance, a subset of the set of neighbor vectors. The method pre-computes, in a parallel processing system, a set of evaluation values in parallel, each evaluation value corresponding to one of the subset of the set of neighbor vectors. The method detects a computation request from the numeric optimization process, the computation request involving at least one of the set of evaluation values. The method supplies, in response to receiving the computation request and without performing the requested computation itself, a pre-computed evaluation value from the set of evaluation values to the numeric optimization process. | 2020-04-30 |
20200133677 | Universal Pointers for Data Exchange in a Computer System having Independent Processors - A system, method and apparatus to facilitate data exchange via pointers. For example, in a computing system having a first processor and a second processor that is separate and independent from the first processor, the first processor can run a program configured to use a pointer identifying a virtual memory address having an ID of an object and an offset within the object. The first processor can use the virtual memory address to store data at a memory location in the computing system and/or identify a routine at the memory location for execution by the second processor. After the pointer is communicated from the first processor to the second processor, the second processor can access the same memory location identified by the virtual memory address. The second processor may operate on the data stored at the memory location or load the routine from the memory location for execution. | 2020-04-30 |
20200133678 | BRANCH PREDICTION FOR INDIRECT BRANCH INSTRUCTIONS - Examples of techniques for branch prediction for indirect branch instructions are described herein. An aspect includes detecting a first register setting instruction in an instruction pipeline of a processor, wherein the first register setting instruction stores a target instruction address in a first register of the processor. Another aspect includes looking up the first register setting instruction in a first table. Another aspect includes, based on there being a hit for the first register setting instruction in the first table, determining instruction address data corresponding to a first indirect branch instruction that is associated with the first register setting instruction in a first entry in the first table. Another aspect includes updating a branch prediction for the first indirect branch instruction in a branch prediction logic of the processor based on the target instruction address. | 2020-04-30 |
20200133679 | APPARATUSES AND METHODS FOR SPECULATIVE EXECUTION SIDE CHANNEL MITIGATION - Methods and apparatuses relating to mitigations for speculative execution side channels are described. Speculative execution hardware and environments that utilize the mitigations are also described. For example, three indirect branch control mechanisms and their associated hardware are discussed herein: (i) indirect branch restricted speculation (IBRS) to restrict speculation of indirect branches, (ii) single thread indirect branch predictors (STIBP) to prevent indirect branch predictions from being controlled by a sibling thread, and (iii) indirect branch predictor barrier (IBPB) to prevent indirect branch predictions after the barrier from being controlled by software executed before the barrier. | 2020-04-30 |
20200133680 | MANAGING PIPELINE INSTRUCTION INSERTION FOR RECEIVED EXTERNAL INSTRUCTIONS - In a pipeline configured for out-of-order issuing, handling translation of virtual addresses to physical addresses includes: storing translations in a translation lookaside buffer (TLB), and updating at least one entry in the TLB based at least in part on an external instruction received from outside a first processor core. Managing external instructions includes: updating issue status information for each of multiple instructions stored in an instruction queue, processing the issue status information in response to receiving a first external instruction to identify at least two instructions in the instruction queue, including a first queued instruction and a second queued instruction. An instruction for performing an operation associated with the first external instruction is inserted into a stage of the pipeline so that the operation associated with the first external instruction is committed before the first queued instruction is committed and after the second queued instruction is committed. | 2020-04-30 |
20200133681 | ENABLING SOFTWARE SENSOR POWER OPERATION REQUESTS VIA BASEBOARD MANAGEMENT CONTROLLER (BMC) - An information handling system (IHS), baseboard management controller (BMC), and method provide for coordinating the BMC and the host processor subsystem to avoid conflicts between power operations by the BMC and maintenance activities by the host processor subsystem. In response to determining that a power operation is requested for the host processor subsystem, a service processor of the BMC determines whether a planned power operation (PPO) software sensor contains information indicating that the host processor subsystem is executing a critical operation utility. In response to determining that the host processor subsystem is not executing the critical operation utility, the service processor modifies the information contained in the PPO software sensor to indicate that a power operation is scheduled. The modified information prevents the host processor subsystem from subsequently initiating execution of the critical operation utility. The service processor also schedules the power operation of the host processor subsystem. | 2020-04-30 |
20200133682 | SYSTEM AND METHOD FOR AUTOMATICALLY RECONFIGURING A COMPUTER BETWEEN A CRYPTOCURRENCY MINING MODE AND A GAMING MODE - A system, method, and apparatus for automatically reconfiguring a computer from a gaming/graphics mode to a cryptocurrency mining/compute mode, and from mining/compute mode back to gaming/graphics mode, including performance optimization in both modes. The system switches operating parameters of one or more Graphics Processing Units (GPUs) between compute and graphics modes automatically with per-mode performance optimization. The invention solves the problem of switching back and forth between graphics and compute modes automatically (without the need for extensive programming) and ensures that the PC and GPUs will run safely (without overheating) and profitably in both modes. The system may also include a dashboard that shows the operational efficiency of the user's GPUs in compute/mining mode. | 2020-04-30 |
20200133683 | TECHNOLOGIES FOR FAST BOOTING WITH ERROR-CORRECTING CODE MEMORY - Technologies for fast boot-up of a compute device with error-correcting code (ECC) memory are disclosed. A basic input/output system (BIOS) of a compute device may assign memory addresses of the ECC memory to different processors on the compute device. The processors may then initialize the ECC memory in parallel by writing to the ECC memory. The processors may write to the ECC memory with direct-store operations that are immediately written to the ECC memory instead of being cached. The BIOS may continue to operate on one processor while the rest of the processors initialize the ECC memory. | 2020-04-30 |
20200133684 | AUTOMATED SYSTEM FOR RATING EMPLOYER SCREENING PRACTICES AND CORPORATE MANAGEMENT - A novel automated system develops a uniform rating of the screening practices of organizations/industry entities (IEs) in a given industry. The uniform rating is based upon objective employee screening practices during pre-employment screening. The system allows IEs to view the employee screening parameters of their own records stored on the system, add information, and update the values of these employee screening parameters. These records are also made available to all users. The system creates weighting factors based upon the relative importance of each employee screening parameter, then creates the rating based upon a combination of the weighted employee screening practices used by an IE. This rating reflects an estimate of the completeness or thoroughness of the employment screening practices of each IE, which has a direct effect upon the quality of the employees hired and, ultimately, upon the quality of the products produced by these employees or the services rendered. | 2020-04-30 |
20200133685 | OPEN ARCHITECTURE, REAL TIME, MULTI-INPUT THREAT CORRELATION - A system and method for combining the inputs of various sensors used in platforms, primarily those having a military use, having or making use of a number of inputs capable of determining information about an environment in which they are operating. Each input is configured to output a standardized set of information regarding its capabilities and the environment in which it is operating. A correlator is configured to adaptably process the information based on rules contained in at least one changeable correlation matrix and information contained in at least one changeable configuration file. The information, or simply an alert, is provided to at least one output that provides the information to a human operator. | 2020-04-30 |
20200133686 | REMOTE DEPLOYMENT OF OPERATING SYSTEMS - Example approaches for remote deployment of an operating system (OS) in an electronic device are described. In an example, a Baseboard Management Controller (BMC) of the electronic device is set as the first bootable component in the order of initialization of hardware components of the electronic device during a boot operation. The BMC is initialized during the boot operation. Remote server information indicative of a network address of a remote server and a path directed to boot files of the OS stored in the remote server is received. The boot files are for deployment of the OS in the electronic device. The boot files are downloaded from the remote server over a dedicated communication channel associated with the BMC, based on the remote server information. The boot files are executed to deploy the OS in the electronic device. | 2020-04-30 |
20200133687 | Operating System Extension Framework - Systems, methods, and computer program products are described herein for generating application extension frameworks for operating systems. A host application receives data encapsulating a modification to an extension configuration file that defines one or more extensions for use by the host application. The host application includes a plurality of binary files. The host application provides the modified extension configuration file to an extension framework for instantiation of a first extension of the one or more extensions. The extension framework generates an interface for the first extension for communication with the extension framework. A new application encompassing the extension framework, the first extension, and the host application is generated without modification to the plurality of binary files of the host application. The first extension communicates with the extension framework via the interface. | 2020-04-30 |
20200133688 | AUTOMATED MECHANISMS FOR ENSURING CORRECTNESS OF EVOLVING DATACENTER CONFIGURATIONS - Herein are computerized techniques for generation, costing/scoring, optimal selection, and reporting of intermediate configurations for a datacenter change plan. In an embodiment, a computer receives a current configuration of a datacenter and a target configuration. New configurations are generated based on the current configuration. A cost function is applied to calculate a cost of each new configuration based on measuring a logical difference between the new configuration and the target configuration. A particular new configuration is selected that has a least cost. When the particular configuration satisfies the target configuration, the datacenter is reconfigured based on the particular configuration. Otherwise, this process is (e.g. iteratively) repeated with the particular configuration instead used as the current configuration. In embodiments, new configurations are randomly, greedily, and/or manually generated. In an embodiment, new configurations obey design invariants that constrain which changes and/or configurations are attainable. | 2020-04-30 |
20200133689 | Disaggregated Cloud-Native Network Architecture - A cloud based network includes a plurality of nodes, each of which include at least one containerized microservice that enables intent-driven operation of the cloud based network. One or more resource controllers, each designated to manage a custom resource, communicate with a master controller of the node to manage operational and configuration states of the node and any microservices containerized within the node. The master controller enables a user to monitor and automate the management of microservices and the cloud based network as a whole. The containerized microservice architecture allows user customizable rendering of microservices, reconciliation of old and new versions of microservices, and facilitated management of a plurality of nodes. | 2020-04-30 |
20200133690 | BIDIRECTIONAL PROTECTION OF APPLICATION PACKAGE - Embodiments provide bidirectional signature protection for packaged apps by verifying an authored app as executable and downloadable from a trusted marketplace service in response to determining that a (first) unique signature embedded within binary code defining the authored app matches an original trusted marketplace service signature acquired from the trusted marketplace service. Embodiments store another (second) signature acquired from the binary code defining the authored app into a storage item of the trusted marketplace service, wherein the second signature is unique to the authored app and different from the first signature; and offer the verified, authored app for download from the trusted marketplace service, wherein the first signature and the second signature are embedded in binary code defining the authored app. | 2020-04-30 |
20200133691 | MODIFYING CONTENT INTERFACE BASED UPON LEVEL OF ACTIVITY - One or more computing devices, systems, and/or methods for modifying content interfaces based upon levels of activity are provided. For example, a first content interface may be displayed using a device. First activity performed using the first content interface may be detected. A first activity profile associated with the device may be generated based upon the first activity. The first activity profile may be indicative of a first level of activity associated with the device. Second activity performed using the first content interface, indicative of a second level of activity, may be detected. It may be determined that a difference between the first level of activity and the second level of activity is greater than a threshold difference. Responsive to determining that the difference is greater than the threshold difference, the content interface may be modified to a modified version of the content interface associated with an exhaustion management mode. | 2020-04-30 |
20200133692 | Automatic User Interface Architecture - Techniques are disclosed relating to automatically generating user interfaces. In some embodiments, input data to be displayed is grouped into components (e.g., based on depth within hierarchical code, coordinates in a display space, etc.). These components may be based on template information that defines a set of known component types. In some embodiments, the system formats the selected components according to display parameters and causes display of a user interface that displays the components based on the formatting. In various embodiments, the disclosed techniques may allow automatic creation of effective user interfaces without information specifying layout and formatting for input data. This may provide flexible, quality interfaces without requiring design or coding expertise. Further, disclosed techniques may allow the automatic interface generator to generate interfaces similar to other existing interfaces. | 2020-04-30 |
20200133693 | Dynamic User Interface for Predicted Procedures - Techniques are disclosed relating to predicting events based on automation parameters and initiating a procedure to request user input. In some embodiments, the system automatically selects one or more component types to display an interface to request the user input, e.g., based on user interface elements associated with the procedure. These component-based techniques may be used to imitate another interface while automatically formatting the input data without a known template. | 2020-04-30 |
20200133694 | INDIVIDUAL APPLICATION WINDOW STREAMING SUITABLE FOR REMOTE DESKTOP APPLICATIONS - The present disclosure relates to streaming individual application windows and/or other desktop elements of a remote desktop. Data used to represent irrelevant desktop areas may be replaced with lower entropy data that may be highly compressed in a video stream and/or with data representative of other visual content. The video stream may also include desktop metadata (e.g., locations for desktop visuals, etc.) used to render the desktop elements on the local desktop. The desktop visuals of an application window may be rendered in a proxy window on the local desktop. | 2020-04-30 |
20200133695 | STREAMING APPLICATION VISUALS USING PAGE-LIKE SPLITTING OF INDIVIDUAL WINDOWS - The disclosure relates to the transfer of visuals (e.g., window visuals) over virtual frames that may be stored in any number of video frames of one or more video streams. The visuals may be split into two-dimensional (2D) pages of a virtual frame, with each of the 2D pages being a fraction of the size of video frames of the video stream(s). The virtual frame may be encoded to the video frames of the video stream(s) and later reconstructed in accordance with a page table. | 2020-04-30 |
20200133696 | STREAMING PER-PIXEL TRANSPARENCY INFORMATION USING TRANSPARENCY-AGNOSTIC VIDEO CODECS - The disclosure relates to the transfer of per-pixel transparency information using video codecs that do not provide an alpha channel (alternatively referred to as “transparency-agnostic video codecs”). For example, alpha information of visual elements may be transcoded into the supported channels of a video stream to generate additional samples of a supported color space, which are representative of the alpha information. After being encoded by a “transparency-agnostic video codec” and transmitted, the received alpha information may then be extracted from the supported channels of the video stream to render the received visuals with corresponding per-pixel transparency. | 2020-04-30 |
20200133697 | COLLABORATIVE COMMAND LINE INTERFACE - A collaborative command line interface is disclosed. In some embodiments, a robot (bot) representing a prescribed service employed by an entity is added as a user in a collaboration service channel associated with the entity. The bot facilitates in making the collaboration service channel a command line interface that interfaces with the prescribed service. A command associated with the command line interface that is received on the collaboration service channel is responded to with a response from the prescribed service. The prescribed service is at least in part integrated in the collaboration service channel via the bot and associated command line interface. | 2020-04-30 |
20200133698 | ALERTING, DIAGNOSING, AND TRANSMITTING COMPUTER ISSUES TO A TECHNICAL RESOURCE IN RESPONSE TO A DEDICATED PHYSICAL BUTTON OR TRIGGER - Systems, methods, and software for providing a dedicated physical user input device to trigger collection and curation of real-time data and submission to a technical resource. Computer usage data and system diagnostic data from a computing device is monitored and collected during use by an end user. Upon detecting the press of a dedicated physical button connected to the computing device, additional information regarding the issue is collected from the end user while additional system diagnostic data about the computing device at a time of occurrence of the issue is collected. The collected information and data is then curated by the computing device in regard to the issue and transmitted to a remote server for access by a technical resource. | 2020-04-30 |
20200133699 | OPTIMIZING ACCESS TO PRODUCTION DATA - Various systems, methods, and processes for optimizing access to production data in application development and testing environments are disclosed. If an input/output (I/O) operation is a read operation, a storage location on a virtual storage unit at which the read operation is to be performed is determined. Also determined is whether an earlier write operation was performed at the storage location. If an earlier write operation was performed at the storage location, the read operation is performed on one or more virtual data files. However, if the earlier write operation was not performed at the storage location, the read operation is performed on allocated storage space. | 2020-04-30 |
20200133700 | Life Cycle Management for Cloud-Based Application Executors with Key-Based Access to Other Devices - Life cycle management techniques are provided for cloud-based application executors with key-based access to other devices. An exemplary method comprises determining that a retention time for a first cloud-based application executor (e.g., a virtual machine or a container) has elapsed, wherein the first cloud-based application executor has key-based access to at least one other device using a first key; in response to the determining, performing the following steps: creating a second cloud-based application executor; and determining a second key for the second cloud-based application executor that is different than the first key, wherein the second cloud-based application executor uses the first key to add the second key to one or more trusted keys of the at least one other device and deactivates the first key from the one or more trusted keys. | 2020-04-30 |
20200133701 | SOFTWARE SERVICE INTERVENTION IN A COMPUTING SYSTEM - A system may include multiple computing nodes, each including a hypervisor, a controller virtual machine, and multiple virtual machines. The hypervisor may include a host agent configured to start a service and determine whether the performance of the service has met a criterion. If the performance of the service has met the criterion, the hypervisor may further determine whether the service has any pending critical operations and, if no critical operations are pending, stop the service. In some examples, each service may create a process configured to monitor the performance of the service. Examples of the performance of the service may include memory utilization and the service response time. | 2020-04-30 |
20200133702 | VIRTUAL WORKLOAD MIGRATIONS - An example system including a plurality of computing resources, distributed across a plurality of hosts, to execute virtual workloads; and a computing device, communicatively coupled to the plurality of hosts, comprising a processing resource and a memory resource. The memory resource may store instructions executable by the processing resource to monitor utilization data of the plurality of computing resources executing assigned respective virtual workloads; predict a destination host from the plurality of hosts with available computing resources in an amount to accommodate a predicted resource utilization of a particular virtual workload based on the monitored utilization data of the plurality of computing resources; and migrate the particular virtual workload assigned to a source host of the plurality of hosts to the destination host to be executed. | 2020-04-30 |
20200133703 | SHARING DATA BY A VIRTUAL MACHINE - A memory block is provided that is shared between two endpoints. The first endpoint is either a host for a virtual machine or the virtual machine. The second endpoint is either the host or another virtual machine. The shared memory block includes a buffer, a post counter, and an acknowledgment counter. The block is employed for communicating data from the first endpoint to the second endpoint. Sending data to the second endpoint includes identifying the buffer as being currently owned by the first endpoint and storing data in the buffer. It is then detected that the acknowledgment counter is equal to the post counter. The post counter is then incremented to signal that data has been stored for receipt by the second endpoint. Receiving the data by the second endpoint includes detecting that the post counter has changed and then incrementing the acknowledgment counter to acknowledge receipt of the data. | 2020-04-30 |
20200133704 | DYNAMICALLY UPDATING VIRTUAL CENTRAL PROCESSING UNITS - A method includes receiving, by a hypervisor running on a host computer system, a request pertaining to a microcode update from a guest operating system of a virtual machine running on the host computer system. The method also includes identifying, by a hypervisor, a central processing unit (CPU) model including one or more features associated with the microcode update. The method also includes emulating, by the hypervisor, the CPU model on a virtual central processing unit (vCPU) of the virtual machine to provide access to the one or more features of the CPU model to the guest operating system of the virtual machine. | 2020-04-30 |
20200133705 | GENERALIZED VIRTUALIZATION PLATFORM FOR SYSTEMS USING HARDWARE ABSTRACTION SOFTWARE LAYERS - Techniques for testing a physical hardware system by executing hardware system application software on a corresponding emulated proxy physical hardware system in a proxy virtual machine are presented. The techniques include: obtaining a proxy physical hardware system that matches aspects of the physical hardware system; constructing, in a virtualization system, the proxy virtual machine; emulating, using the virtualization system, hardware components of the proxy physical hardware system in the proxy virtual machine; executing a hardware abstraction software layer in the proxy virtual machine; executing, by the hardware abstraction software layer of the virtualization system, the hardware system application software in the proxy virtual machine on the proxy physical hardware system using a memory map and at least one adapter; and testing, using the virtualization system, the physical hardware system by executing the hardware system application software in the proxy virtual machine on the proxy physical hardware system. | 2020-04-30 |
20200133706 | ASYNCHRONOUS WORKLOAD MIGRATION CONTROL - Systems and methods for workload migration control. Migration control operations commence upon identifying a workload comprising two or more virtual machines to be migrated from a source computing environment to a target computing environment. A migration process initiates migration of the two or more virtual machines to the target computing environment. After the migration process has begun, a user identifies a prioritized virtual machine from among the two or more virtual machines that are in the process of being migrated. In response to the user input, a migration manager suspends progression of the other virtual machines while accelerating the migration of the prioritized virtual machine, which is not suspended. The migration of the prioritized virtual machine preferentially receives computing resources. After migration of the prioritized virtual machine has completed, the migration manager releases the suspension of the suspended virtual machines so that their migration continues. | 2020-04-30 |
20200133707 | POWER EFFICIENT WORKLOAD PLACEMENT AND SCHEDULING IN A VIRTUALIZED COMPUTING ENVIRONMENT - An apparatus referred to as a profiling server monitor receives data corresponding to the operation of physical hardware in a virtual computing environment. An example is power consumption data. The profiling server monitor analyzes the data received and determines an operation to perform or a business rule to follow in order to, as one example, reduce power consumption of the virtual computing environment. | 2020-04-30 |
20200133708 | Method for Managing VNF Instantiation and Device - A method for managing virtualized network function (VNF) instantiation, including a first device receiving, from a requester device, a request for instantiating a network service (NS), wherein the request carries instance information of a VNF that is in the NS and is to be instantiated using a second device. After receiving the request, the first device records an instance identifier (ID) of the VNF. In a VNF instantiation phase, the first device authorizes instantiation of the VNF based on the instance ID of the VNF that is recorded during instantiation of the NS. Because the instance ID of the VNF to be instantiated by the second device is carried in the request, the first device can obtain a relationship between the VNF to be instantiated by the second device and the NS, and therefore can control instantiation of the VNF based on the relationship. | 2020-04-30 |
20200133709 | SYSTEM AND METHOD FOR CONTENT - APPLICATION SPLIT - Virtual machine storage and runtime provisioning comprises accessing a base generalized reusable virtual machine image and configuring memory according to an instantiated copy of the base virtual machine image. Project specific content is accessed from a central content store configured to store content separately from the base virtual machine image. The instantiated machine image and project specific content are linked at launch time to form a project specific functioning virtual machine. After completion of the project specific function, the project specific content is stored separately and the project specific functioning virtual machine is removed from memory. This limits use of virtual machine instances to only when needed and makes content available to other users. This also minimizes proliferation of unused project specific function virtual machine images, frees up storage space, and enables easier automated maintenance of separately stored base virtual machine images and updating of the centralized plurality of project specific contents. | 2020-04-30 |
20200133710 | AN APPARATUS AND METHOD FOR MANAGING USE OF CAPABILITIES - An apparatus and method are provided for managing use of capabilities. The apparatus has processing circuitry to execute instructions, and a plurality of capability storage elements accessible to the processing circuitry and arranged to store capabilities used to constrain operations performed by the processing circuitry when executing instructions. The processing circuitry is operable at a plurality of exception levels, each exception level having different software execution privilege. Further, capability configuration storage is provided to identify capability configuration information for each of the plurality of exception levels. For each exception level, the capability configuration information identifies at least whether the operations performed by the processing circuitry when executing instructions at that exception level are constrained by capabilities. During a switch operation from a source exception level to a target exception level, the capability configuration information in the capability configuration storage pertaining to at least one of the source exception level and the target exception level is used to determine how execution state of the processing circuitry is managed during the switch operation. This provides a great deal of flexibility in the management of capabilities. | 2020-04-30 |
20200133711 | Event-Triggered Configuration of Workflow Processes for Computer Software Development Systems - The present disclosure relates generally to software development and more particularly to methods and systems for automated configuration and execution of context-optimized software development workflow processes for software. A method can perform a pre-configuration of a software development workflow process in advance of execution of the workflow process based upon one or more sources of configuration data. The method can subsequently create an optimized workflow process configuration wherein evaluation of workflow process triggering event context data results in event-optimized workflow process configuration and execution. A system can include: a user interface through which users can interact with the system; a component that can perform pre-configuration of software development workflow processes; a component that can perform context-optimized configuration of software development workflow processes; and a component that can perform automated execution of such context-optimized software development workflow processes. | 2020-04-30 |
20200133712 | TECHNIQUES OF SECURELY PERFORMING LOGIC AS SERVICE IN BMC - In an aspect of the disclosure, a method, a computer-readable medium, and a computer system are provided. The computer system includes an initialization component of a host. The initialization component obtains a process file for dynamically creating a processing component on a management platform on a baseboard management controller (BMC) of the host, the process file defining a logic to be implemented by the processing component, the initialization component operating to load an operating system of the host. The initialization component sends the process file to the BMC. The initialization component further sends a message to the BMC, the message including data to be processed by the processing component. | 2020-04-30 |
20200133713 | Stack Overflow Processing Method and Apparatus - A method and an apparatus for stack overflow processing are provided. The method includes using a memory management device to detect whether any stack overflow occurs on a specified stack; and triggering a memory access interrupt by the memory management device when a stack overflow is detected on the specified stack. By using the memory management device to detect stack overflows, the timeliness of stack overflow detection can be effectively improved, and occasional problems such as a stack overflow can be discovered promptly, shortening the time required to troubleshoot them. | 2020-04-30 |
20200133714 | Tracking Method, Apparatus, Device, and Machine-Readable Medium - A tracking method, an apparatus, a device, and a machine-readable medium are provided. The method specifically includes: writing a tracking result of an activity of an operating system and/or a running activity of a program into a buffer when an interrupt is disabled; and reading and sending the tracking result from the buffer when the interrupt is enabled. The embodiments of the present disclosure can effectively shorten the maximum time during which interrupts are disabled for an operating system, and thereby can effectively improve the performance of the operating system and/or a program. | 2020-04-30 |
20200133715 | SYSTEM TO PROTECT CRITICAL REMOTELY ACCESSIBLE MEDIA DEVICES - Techniques for controlling the performance of remote operations on computing devices within a video processing environment are described. One embodiment determines to perform a remote operation on a remote device in a media processing environment and determines a signal chain within the media processing environment that includes the remote device. An operational status of the signal chain is determined, based on a media processing schedule for the signal chain. Upon determining that the operational status indicates that the remote device is available for performance of the remote operation, embodiments initiate the performance of the remote operation on the remote device. | 2020-04-30 |
20200133716 | TRANSFORMATION OF INTER-ORGANIZATION PROCESS FOR EXECUTION VIA BLOCKCHAIN - An example operation may include one or more of storing chaincode comprising executable steps of a multi-party process generated from a state diagram in which a blockchain is an intermediary between a plurality of off-chain systems, receiving a request to execute the multi-party process, processing a step of the multi-party process based on the request via execution of the stored chaincode including the executable steps of the multi-party process to generate a processed result for the step, and storing an identification of the processed step and the generated processed result via a data block among a hash-linked chain of data blocks of the blockchain. | 2020-04-30 |
20200133717 | CONTEXTUAL AWARENESS ASSOCIATED WITH RESOURCES - Contextual awareness associated with resources can be employed to facilitate controlling access to resources of a system, including function blocks. A resource manager component (RMC) can pre-load a defined number of respective versions of configuration parameter data associated with respective applications in each resource. With regard to each application, the RMC can associate a context value, unique for each application, with the respective versions of configuration parameter data associated with that application. When a current application is being changed to a next application, the RMC can write the context value associated with the next application to a context select component (CSC). Each resource can read the context value in the CSC, identify and retrieve the version of configuration parameter data associated with the next application based on the context value, and configure the function block based on the version of configuration parameter data. | 2020-04-30 |
20200133718 | VIRTUAL MACHINE MIGRATION TASK MANAGEMENT - Systems and methods for preferential treatment of a prioritized virtual machine during migration of a group of virtual machines from a first virtualized computing environment to a second virtualized computing environment. A data structure is allocated to store virtual machine migration task attributes that are associated with a plurality of in-process virtual machine migration tasks. As migration proceeds, the migration task attributes in the data structure are updated to reflect ongoing migration task scheduling adjustments and ongoing migration task resource allotments. A user interface or other process indicates a request to prioritize migration of a particular one of the to-be-migrated virtual machines. Based on the request, at least some of the virtual machine migration task attributes are modified to indicate a reduced scheduling priority of some of the to-be-migrated virtual machine migration tasks so as to preferentially deliver computing resources to the prioritized virtual machine migration tasks. | 2020-04-30 |
20200133719 | METHOD OF EFFICIENTLY MIGRATING DATA FROM ONE TIER TO ANOTHER WITH SUSPEND AND RESUME CAPABILITY - In an embodiment, a system and method for supporting a seeding process with suspend and resume capabilities are described. A resumable seeding component in a data seeding module can be used to move data from a source tier to a target tier. A resumption context including a perfect hash function (PHF) and a perfect hash vector (PHV) persists a state of a seeding process at the end of each operation in the seeding process. The PHV represents data segments of the data using the PHF. The resumption context is loaded into memory upon resumption of the seeding process after it is suspended. Information in the resumption context is used to determine a last successfully completed operation, and a last copied container. The seeding process is resumed by executing an operation following the completed operation in the resumption context. | 2020-04-30 |
20200133720 | METHOD TO CHECK FILE DATA INTEGRITY AND REPORT INCONSISTENCIES WITH BULK DATA MOVEMENT - In an embodiment, a method for validating data integrity of a seeding process is described. The seeding process for migrating data from a source tier to a target tier persists a perfect hash vector (PHV) to a disk when the seeding process is suspended for various reasons. The PHV includes bits for fingerprints for data segments corresponding to the data, and can be reloaded into memory upon resumption of the seeding process. One or more bits corresponding to fingerprints for copied data segments are reset prior to starting the copy phase in the resumed run. A checksum of the PHV is calculated after the seeding process completes copying data segments in the containers. A non-zero checksum of the PHV indicates that one or more data segments are missing on the source tier or the data segments are not successfully copied to the target tier. The missing data segments and/or one or more related files are reported to a user via a user interface. | 2020-04-30 |
20200133721 | SEMICONDUCTOR DEVICE AND SYSTEMS USING THE SAME - A semiconductor device capable of suppressing performance degradation and systems using the same are provided. The semiconductor device includes a plurality of processors CPU | 2020-04-30 |
20200133722 | CALCULATOR AND JOB SCHEDULING METHOD THEREOF - A method for scheduling jobs for a calculator includes: measuring core utilization of a second-type processor; when the measured core utilization is less than a reference value, transmitting, by a first-type processor, a job suspension instruction to the second-type processor to suspend a first job that is currently being executed; in response to the job suspension instruction, copying data of a region occupied by the first job in a memory of the second-type processor to a main memory; copying data of a second job stored in the main memory to the memory of the second-type processor; and transmitting, by the first-type processor, an instruction to execute the second job to the second-type processor. | 2020-04-30 |
20200133723 | MICRO KERNEL SCHEDULING METHOD AND APPARATUS - A micro kernel scheduling method and apparatus are disclosed in embodiments of this disclosure. The method is applied to a software platform and includes: receiving a scheduling instruction for a current micro kernel; and switching the current micro kernel to a target micro kernel. In some embodiments, a micro kernel is switched directly according to a scheduling instruction, and this is completed without any thread of the software platform, which solves the problems in the conventional system of high micro kernel switching cost and poor real-time performance caused by one-to-one correspondence between micro kernels and threads of the software platform. | 2020-04-30 |
20200133724 | TASK PRIORITY PROCESSING METHOD AND PROCESSING DEVICE - In a multitask computing system, multiple tasks include a first task, a second task, and a third task, and the first task has a higher priority than the second task and the third task. A method includes: raising the priority of the second task, which shares a first critical section with the first task and is accessing the first critical section, when the first task is blocked due to failure to access the first critical section; determining whether there is a third task that shares a second critical section with the second task and is accessing the second critical section; and raising, when the third task is present, the priority of the third task. The techniques of the present disclosure prevent a low-priority third task from delaying the execution of a second task, thus avoiding the priority inversion caused by the delayed execution of a high-priority first task. | 2020-04-30 |
20200133725 | METHODS, SYSTEMS, ARTICLES OF MANUFACTURE, AND APPARATUS TO OPTIMIZE THREAD SCHEDULING - An apparatus comprising: a model to generate adjusted tuning parameters of a thread scheduling policy based on a tradeoff indication value of a target system; and a workload monitor to: execute a workload based on the thread scheduling policy; obtain a performance score and a power score from the target system based on execution of the workload, the performance score and the power score corresponding to a tradeoff indication value; compare the tradeoff indication value to a criterion; and based on the comparison, initiate the model to re-adjust the adjusted tuning parameters. | 2020-04-30 |
20200133726 | Provenance Driven Job Relevance Assessment - Described herein is a system and method for ranking and/or taking an action regarding execution of jobs of a shared computing cluster based upon predicted user impact. Information regarding previous executions of a plurality of jobs is obtained, for example, from job execution log(s). Data dependencies of the plurality of jobs are determined. Job impact of each of the plurality of jobs as a function of the determined data dependencies is calculated. User impact of each of the plurality of jobs as a function of the determined data dependencies, the calculated job impact, and time is calculated. The plurality of jobs are ranked in accordance with the calculated user impact. An action is taken in accordance with the ranking of the plurality of jobs. The action can include automatic scheduling of the jobs and/or providing information regarding the rankings to a user. | 2020-04-30 |
20200133727 | SEMICONDUCTOR DEVICE - A semiconductor device capable of executing a plurality of tasks in real time and improving performance is provided. The semiconductor device comprises a plurality of processors and a plurality of DMA controllers as masters, a plurality of memory ways as slaves, and a real-time schedule unit for controlling the plurality of masters such that the plurality of tasks are executed in real time. The real-time schedule unit RTSD uses the memory access monitor circuit and the data determination register to determine whether or not the input data of a task has been determined, and causes a task whose input data has been determined to be executed preferentially. | 2020-04-30 |
20200133728 | DATA BASED SCHEDULING FOR HORIZONTALLY SCALABLE CLUSTERS - An apparatus comprising: a processing resource; a memory resource to store instructions executable by the processing resource to: associate a plurality of consumer containers with a data container, wherein the plurality of consumer containers accesses the data container; identify a node of a cloud computing system that hosts the data container; and schedule the plurality of consumer containers to execute on the node based on the association between the plurality of consumer containers and the data container. | 2020-04-30 |
20200133729 | OPTIMIZED MANAGEMENT OF APPLICATION RESOURCES FOR MOBILE APPLICATIONS - Some embodiments provide a program that determines a version of a first set of application resources. The program further determines whether a version of a second set of application resources is different and compatible with the version of the first set of application resources. Upon determining that the version of the second set of application resources is different and compatible with the version of the first set of application resources, the program also uses the second set of application resources to generate visualizations of data while downloading the first set of application resources for later use. Upon determining that the version of the second set of application resources is different and not compatible with the version of the first set of application resources, the program further downloads the first set of application resources and uses the first set of application resources to generate visualizations of data. | 2020-04-30 |
20200133730 | MEMORY TRANSACTION REQUEST MANAGEMENT - A method of requesting data items from storage. The method comprises allocating each of a plurality of memory controllers a unique identifier and assigning memory transaction requests for accessing data items to a memory controller according to the unique identifiers. The data items are spatially local to one another in storage. The data items are requested from the storage via the memory controllers according to the memory transaction requests and then buffered if the data items are received out of order relative to the order in which the data items are requested. | 2020-04-30 |
20200133731 | Resource Conservation for Containerized Systems - A method for conserving resources in a distributed system includes receiving an event-criteria list from a resource controller. The event-criteria list includes one or more events watched by the resource controller and the resource controller controls at least one target resource and is configured to respond to events from the event-criteria list that occur. The method also includes determining whether the resource controller is idle. When the resource controller is idle, the method includes terminating the resource controller, determining whether any event from the event-criteria list occurs after terminating the resource controller, and, when at least one event from the event-criteria list occurs after terminating the resource controller, recreating the resource controller. | 2020-04-30 |
20200133732 | COORDINATING MAIN MEMORY ACCESS OF A PLURALITY OF SETS OF THREADS - A computing device includes a plurality of nodes, where a first node operates in accordance with a computing device operation system (OS) and remaining nodes operate in accordance with a custom OS. The remaining nodes include a plurality of sets of processing core resources that process a plurality of sets of threads of an application. The computing device also includes a main memory divided into a computing device memory section and a custom memory section that includes portions logically allocated as a plurality of buffers. The computing device also includes a memory access control module operable to coordinate access to the plurality of buffers by at least some of the plurality of sets of threads in accordance with the custom OS. The computing device also includes disk memory and a disk memory access control module operable to coordinate access to the disk memory in accordance with the computing device OS. | 2020-04-30 |
20200133733 | HYPER-CONVERGED INFRASTRUCTURE (HCI) EPHEMERAL WORKLOAD/DATA PROVISIONING SYSTEM - A Hyper-Converged Infrastructure (HCI) ephemeral workload/data provisioning system includes a workload system coupled to a plurality of HCI systems by a manager system. The manager system identifies a first ephemeral workload that is provided by the workload system and that is configured to operate on one of the plurality of HCI systems for less than a first time period. In response to identifying the first ephemeral workload, the manager system determines first data that is to be utilized by the first ephemeral workload and that is stored on a first HCI system that is included in the plurality of HCI systems. In response to determining that the first HCI system includes the first data that is to be utilized by the first ephemeral workload, the manager system causes the first ephemeral workload to be provisioned on the first HCI system. | 2020-04-30 |
20200133734 | APPARATUS THAT GENERATES OPTIMAL LAUNCH CONFIGURATIONS - Launch configurations of a hardware acceleration device are determined, which minimize hardware thread management overhead in running a program code. Based on received hardware behaviors, the architectural features, the thread resources and the constraints associated with the hardware acceleration device, possible launch configurations and impossible launch configurations are generated. A ranking of at least some of the possible launch configurations may be generated and output, based on how well each of said at least some of the possible launch configurations satisfies at least some of the constraints. Parametric values of said at least some of the possible launch configurations, an explanation why the impossible launch configurations have been determined as being impossible, and one or more strategies for scheduling, latencies and efficiencies associated with the hardware acceleration device, are output. | 2020-04-30 |
20200133735 | METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR ASSIGNING TASKS TO DEDICATED PROCESSING RESOURCES - A method comprises obtaining hardware information of a plurality of dedicated processing resources, wherein the plurality of dedicated processing resources comprises a first dedicated processing resource and a second dedicated processing resource, and the hardware information comprises first hardware information of the first dedicated processing resource and second hardware information of the second dedicated processing resource. The method further comprises generating a first task based on the first hardware information and a second task based on the second hardware information, and allocating the first task to the first dedicated processing resource and the second task to the second dedicated processing resource. For task scheduling in heterogeneous dedicated processing resources (for example, accelerator devices) scenario, the method generates corresponding kernel codes according to different hardware capabilities. Thus, dynamic optimization for the heterogeneous dedicated processing resources is implemented, thereby improving resource utilization rate and execution efficiency. | 2020-04-30 |
20200133736 | COORDINATED APPLICATION PROCESSING - Coordinated application processing. A method identifies processing engines available for coordinated application processing, distributes to the processing engines an application configured for execution to perform image processing, and distributes images to the processing engines. The images cover an image area that includes multiple different sub-areas, where the image processing proceeds across multiple cycles of image processing to process a respective set of images of each sub-area of the multiple different sub-areas, and where the distributing the images includes, for each sub-area of the multiple different sub-areas: selecting for that sub-area a respective processing engine of the processing engines to perform the image processing across the multiple cycles to process the respective set of images of that sub-area, and distributing, across the multiple cycles of the image processing, the images of the respective set of images of that sub-area to the respective processing engine selected for that sub-area. | 2020-04-30 |
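Several of the scheduling entries above, notably 20200133724, describe transitive priority inheritance: when a high-priority task blocks on a critical section, every task in the chain of holders temporarily inherits the blocked task's priority so a low-priority holder cannot stall it. The following is a minimal illustrative sketch of that idea in Python; all class and function names are hypothetical and are not taken from the patent text.

```python
# Illustrative sketch of transitive priority inheritance (cf. abstract
# 20200133724 above). All names here are hypothetical examples.

class Task:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority          # higher number = higher priority
        self.base_priority = priority     # restored when the section is released
        self.waiting_on = None            # critical section this task is blocked on

class CriticalSection:
    def __init__(self, name):
        self.name = name
        self.holder = None                # task currently inside the section

def boost_on_block(blocked_task, section):
    """Called when blocked_task fails to enter `section`: transitively raise
    the priority of every holder in the chain, so a low-priority holder
    cannot delay the high-priority blocked task (priority inversion)."""
    blocked_task.waiting_on = section
    holder = section.holder
    while holder is not None and holder.priority < blocked_task.priority:
        holder.priority = blocked_task.priority   # inherit the higher priority
        # The holder itself may be blocked on another section; follow the chain.
        next_section = holder.waiting_on
        holder = next_section.holder if next_section else None

def release(section):
    """On exit from the section, the holder drops back to its base priority."""
    task = section.holder
    if task is not None:
        task.priority = task.base_priority
        section.holder = None

# Usage: high-priority task1 blocks on cs1 held by task2, which is itself
# blocked on cs2 held by low-priority task3; both holders inherit priority 30.
task1, task2, task3 = Task("t1", 30), Task("t2", 20), Task("t3", 10)
cs1, cs2 = CriticalSection("cs1"), CriticalSection("cs2")
cs1.holder, cs2.holder = task2, task3
boost_on_block(task2, cs2)   # t2 blocks on cs2; t3 inherits priority 20
boost_on_block(task1, cs1)   # t1 blocks on cs1; t2 and t3 inherit priority 30
print(task2.priority, task3.priority)  # → 30 30
```

Real systems implement this inside the kernel's mutex code (e.g., POSIX offers a `PTHREAD_PRIO_INHERIT` mutex protocol); the sketch only shows the bookkeeping the abstract describes.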