16th week of 2020 patent application highlights part 44 |
Patent application number | Title | Published |
20200117429 | Method and Apparatus for an Internet of Things Controller - An Internet of Things Controller, and related tools. The tools address application development and deployment. An Application Meta-Data Editor permits a developer to specify meta-data that guides the structure of actual data that can be passed to the application when invoked as a particular execution (or task) for a particular end-user. Once an application is developed, and has undergone a test deployment, the developer can upload the application to an online Application Store, from which the application can be downloaded and deployed by others. A Data Editor permits an end-user to create his/her own data in accordance with the developer's meta-data, adapting the execution to his/her particular needs. While permitting adaptation, the Data Editor ensures that the data created follows the overall pattern of the meta-data, as provided by the developer. Facilities for internationalization of a deployed application's documentation, on a crowd-sourced basis, are also provided. | 2020-04-16 |
20200117430 | Model Configuration Using Partial Model Data - As a non-limiting representative example, a system is disclosed that includes a product configurator user interface that displays a configurable model and receives configuration input for the model and a modeling platform comprising a segmenting engine networked together. The segmenting engine performs operations such as receiving configuration input and generating a first partial structured data set for evaluation by the configuration engine. The system also includes a configuration engine that communicates with the modeling platform. The modeling platform sends the first partial structured data set to the configuration engine for evaluation and receives back an evaluated instance reflecting an outcome of the configuration of the configurable model. | 2020-04-16 |
20200117431 | SYSTEM AND METHOD FOR THE GENERATION OF AN ADAPTIVE USER INTERFACE IN A WEBSITE BUILDING SYSTEM - A system for a website building system implemented on a server, the server having at least one processor and a memory and including a site analyzer to generate a representative component for each of a cluster of multiple components of a website of a user, based on an analysis of the attributes of the multiple components; and an editor UI builder to create a dynamically modified user interface at least from the representative components for a visual editor of said website; where the site analyzer and the editor UI builder change the dynamically modified user interface as the user edits the website; and where the visual editor includes a regular user interface and said dynamically modified user interface. | 2020-04-16 |
20200117432 | USER INTERFACE RESOURCE FILE OPTIMIZATION - Technologies described herein reduce the size of a software application. In some embodiments, the size of one or more resource files of an application are reduced. Resource files include key/value pairs that define elements of the application. In some embodiments, the application's source code is analyzed to determine if an entry in a resource file may be removed. For instance, initialization functions in the application's source code may be analyzed to determine if a value loaded from a resource file is replaced before being used. For example, a button with a color property may be defined as grey by a resource, but later set to orange in an initialization function. In this case, the resource entry defining the button as grey is superfluous and may be safely removed. This technique allows for entries to be removed from a resource file even though the source code references the entries. | 2020-04-16 |
20200117433 | CODE OPTIMIZATION CONVERSATIONS FOR CONNECTED MANAGED RUNTIME ENVIRONMENTS - A method of providing by a code optimization service an optimized version of a code unit to a managed runtime environment is disclosed. Information related to one or more runtime conditions associated with the managed runtime environment that is executing in a different process than that of the code optimization service is obtained, wherein the one or more runtime conditions are subject to change during the execution of the code unit. The optimized version of the code unit and a corresponding set of one or more speculative assumptions are provided to the managed runtime environment, wherein the optimized version of the code unit produces the same logical results as the code unit unless at least one of the set of one or more speculative assumptions is not true, wherein the set of one or more speculative assumptions are based on the information related to the one or more runtime conditions. | 2020-04-16 |
20200117434 | METHODS, SYSTEMS, AND PORTAL FOR ACCELERATING ASPECTS OF DATA ANALYTICS APPLICATION DEVELOPMENT AND DEPLOYMENT - The present disclosure relates to methods and systems for accelerating the development and distribution of data science workloads, including a consistent, portable and pre-configured data science workspace for development of data science applications allowing for the creation of a standardized, modular and reusable library of data science code product that can be maintained, extended and reused in a clear and repeatable manner. The code may be submitted to a build and deployment process that ensures consistency across multiple environments in terms of the application code and the operating system environment. Runtime execution may be managed through the authoring of definitions which detail aspects of how the workload should operate within a certain environment. | 2020-04-16 |
20200117435 | METHOD AND SYSTEM FOR APPLICATION INSTALLATION AND INTERACTION DURING A PODCAST - The present disclosure provides a method and system to perform installation of one or more applications based on the interaction of a user with a podcast. The method includes a first step to insert one or more advertisements during the broadcasting of the podcast. In addition, the method includes another step to detect the user's mode of listening to the podcast. Further, the method includes yet another step to determine one or more gestures. Furthermore, the method includes yet another step to receive one or more gesture inputs from the user. Moreover, the method includes yet another step to perform one or more actions at the application installation system. | 2020-04-16 |
20200117436 | POST-INSTALL APPLICATION INTERACTION - Methods, systems, and apparatus include computer programs encoded on a computer-readable storage medium, including a method for providing content. Data specifying a post-install activity is received from a provider of an application. An opportunity is identified to provide third-party content to a user. A likelihood is determined that the user will perform the specified post-install activity based on one or more attributes of the user and attributes of users that have previously performed the specified post-install activity in the application. A selection value is adjusted for third-party content that identifies the application based on the determined likelihood, wherein the selection value increases as the likelihood increases. The third-party content identifying the application is selected based on the adjusted selection value. The third-party content identifying the application is distributed to a client device of the user. | 2020-04-16 |
20200117437 | Automated Identification Of Generic Module Location Per Electrical Signatures - A database stores electrical signatures of mounting points for generic modules within a vehicle model. Software for programming each mounting point is mapped to the mounting points. For a production unit of the vehicle model, generic modules are placed at the mounting points without being programmed to perform a specific function. The generic modules measure the electrical signature of the mounting point at which they are mounted. The generic modules then coordinate with a server to identify a matching electrical signature in the database and to program the generic modules with corresponding software for performing specific functions. | 2020-04-16 |
20200117438 | SCHEDULING SIMPLIFICATION VIA GEOFENCE TIME ZONE RESOLUTION - A boundary of a geofence spanning a plurality of time zones is received by a vehicle. A primary time zone of the geofence is identified. Installation of a software update is initiated responsive to the vehicle being located within the geofence and a current time within the primary time zone being within a period of time for software updates. | 2020-04-16 |
20200117439 | Systems and Methods for Reinforced Update Package Authenticity - A system for ensuring update package authenticity includes an update package transaction ledger and a repository. Change managers are configured to maintain the update package transaction ledger, create a transaction block using metadata of an update, and determine a package value based on the transaction ledger and on the update. The change managers also incorporate the package value and the update into a package, and upload the package to the repository. A client obtains the package from the repository, obtains the transaction block from the update package transaction ledger, determines a calculated value based on the transaction block and on the update, and compares the calculated value and the package value. The update is installed when the calculated value and the package value match. | 2020-04-16 |
20200117440 | HIERARCHICAL EQUIPMENT SOFTWARE UPDATE FOR AN ELECTRICAL DISTRIBUTION GRID - The disclosure relates to updating software intended to be executed by equipment of an electrical distribution grid. Each equipment unit forms a node of a command-control network communicating with other nodes of this command-control network, and the nodes of the command-control network have respective identifiers. The method provides in particular the steps implemented by a current node: obtaining first data of the identifier of the at least one secondary node for which the current node is configured for allowing a software update; and upon receiving a software update request for a secondary node, using these first data for a software update at least for the secondary node identified in these first data. | 2020-04-16 |
20200117441 | SYSTEM AND METHOD OF UPDATING A NETWORK ELEMENT - A method and apparatus of a device that performs a hitless update of a boot image of a network element. In this embodiment, the device identifies the network element to update and determines if the network element has redundant paths. If this network element has redundant paths, the device configures the network element to drain data processing of the network element. In addition, the device updates the network element to a new boot image when the data processing of the network element is drained. | 2020-04-16 |
20200117442 | METHOD, SYSTEM AND PROGRAM PRODUCT FOR MONITORING AND MANAGING EMERGENCY ALERT SYSTEM DEVICES - A method of monitoring and managing Emergency Alert System/Common Alerting Protocol (EAS/CAP) devices includes providing a system, the system including processor(s) in communication with memory(ies) storing instructions for execution by the processor(s), the instructions enabling the monitoring and assisting with managing EAS/CAP devices, monitoring by the system a status of the EAS/CAP devices, the status relating to configuration settings and updates to software and firmware for the EAS/CAP devices, aggregating by the system government required compliance logs for the EAS/CAP devices, resulting in aggregated compliance logs, generating report(s) regarding the EAS/CAP devices with assistance from the system, the report(s) including a consolidated compliance report for providing to government agency(ies), the consolidated compliance report including the aggregated compliance logs, and managing the configuration settings and updates to software and firmware for the EAS/CAP devices with assistance from the system. | 2020-04-16 |
20200117443 | SELECTIVE APPLICATION UPDATES BASED ON USAGE ANALYTICS - To prevent utilization of device storage space for storage of application updates affecting features which a user does not use, selective application updates are offered on a per-feature, per-user basis based on analysis of monitored application use patterns. Application analytics software monitors user behavior and interaction with the application and uploads usage metrics to an application store server (“server”). The server determines for each user the features for which a high, low, or no usage activity has been observed. The server maintains data structures to correlate code units with application features. The server determines which application features are impacted by updates to the application source code based on correlations indicated in the data structures. The server determines if a user's device should receive an auto-update or a notification that an update is available if the application update affects the features for which high user activity has been observed. | 2020-04-16 |
20200117444 | Intelligent Visual Regression System - Methods and apparatus, including computer program products, implementing and using techniques for identifying program code changes causing a changed visual appearance of a graphical user interface of a computer application. A code change is received for the computer application in a build system. The code change results in a changed computer application. The changed computer application is run and screenshots of the visual appearance of the graphical user interface are captured and saved during the run. Differences between the saved screenshots are detected to identify changes in the visual appearance of the graphical user interface. The identified changes in the visual appearance are correlated with a particular section of program code causing the changes in the visual appearance. | 2020-04-16 |
20200117445 | CHANGESET CONFLICT REBASING - In example embodiments, techniques are provided to implement changeset conflict rebasing when performing conflict-detection and merging in an infrastructure modeling software architecture that uses an optimistic concurrency policy. Changeset conflict rebasing involves adjusting the pre-change values in a local changeset so they match post-change values of a remote version, rather than the original base version, or removing changes from the local changeset entirely. | 2020-04-16 |
20200117446 | CODE SEARCH AND CODE NAVIGATION - A system and method may provide assistance to programmers during programming to reduce the number of routine tasks that must be performed. In some aspects, the system may provide for searching a corpus of source code based on keyword or natural language search input. Search results including code entities and snippets of code that are described by the search input are then provided as search results. Some embodiments relate to using a neural network encoder to generate tensor embeddings of source code and related text in a joint tensor space. Relatedness between embeddings in this joint tensor space for text and associated source code is used in some embodiments to facilitate code search. | 2020-04-16 |
20200117447 | SYSTEMS AND METHODS FOR PROVIDING RANKED DEPLOYMENT OPTIONS - Systems, methods, and non-transitory computer-readable media can receive information about an application design plan. The application design plan can be associated with at least one deployment criterion. One or more available infrastructure resources can be identified based on the information about the application design plan. A plurality of deployment options can be determined based on the one or more available infrastructure resources. The plurality of deployment options can be determined to be compliant with the at least one deployment criterion. The plurality of deployment options can be ranked to produce an ordered set of deployment options. | 2020-04-16 |
20200117448 | Methods and Systems for Managing Agile Development - Aspects of the present disclosure provide systems for managing product development that include receiving development data. The systems record an amount of time spent developing one or more project features; calculate, based at least in part on development data and the amount of time spent developing the feature, business momentum; and calculate, based on certain development data, project agility and market agility. | 2020-04-16 |
20200117449 | Accelerated Access to Computations Results Generated from Data Stored in Memory Devices - An integrated circuit (IC) memory device encapsulated within an IC package. The memory device includes: multiple memory regions configured to store one or more lists of operands; an arithmetic compute element matrix coupled to access the memory regions in parallel; and a communication interface to receive a request from an external processing device. In response to the request, the arithmetic compute element matrix computes an output from the plurality of lists of operands stored in the plurality of memory regions; and the communication interface provides the output as a response to the request. For example, the request can be a memory read command addressing a memory location where an opcode is stored; and the output can be provided as if the output had been pre-calculated and stored at the memory location. | 2020-04-16 |
20200117450 | REGISTER-BASED MATRIX MULTIPLICATION - Techniques for performing matrix multiplication in a data processing apparatus are disclosed, comprising apparatuses, matrix multiply instructions, methods of operating the apparatuses, and virtual machine implementations. Registers, each register for storing at least four data elements, are referenced by a matrix multiply instruction and in response to the matrix multiply instruction a matrix multiply operation is carried out. First and second matrices of data elements are extracted from first and second source registers, and plural dot product operations, acting on respective rows of the first matrix and respective columns of the second matrix are performed to generate a square matrix of result data elements, which is applied to a destination register. A higher computation density for a given number of register operands is achieved with respect to vector-by-element techniques. | 2020-04-16 |
20200117451 | DATA ELEMENT REARRANGEMENT, PROCESSORS, METHODS, SYSTEMS, AND INSTRUCTIONS - A processor includes a decode unit to decode an instruction indicating a source packed data operand having source data elements and indicating a destination storage location. Each of the source data elements has a source data element value and a source data element position. An execution unit, in response to the instruction, stores a result packed data operand having result data elements each having a result data element value and a result data element position. Each result data element value is one of: (1) equal to a source data element position of a source data element, closest to one end of the source operand, having a source data element value equal to the result data element position of the result data element; and (2) a replacement value, when no source data element has a source data element value equal to the result data element position of the result data element. | 2020-04-16 |
20200117452 | METHOD FOR MIN-MAX COMPUTATION IN ASSOCIATIVE MEMORY - A method for finding an extreme value among a plurality of numbers in an associative memory includes creating a spread-out representation (SOR) for each number of the plurality of numbers, storing each SOR in a column of the associative memory array and performing a horizontal bit-wise Boolean operation on rows of the associative memory array to produce an extreme SOR (ESOR) having the extreme value. A system for finding an extreme value includes an associative memory array to store the plurality of numbers, each number storable in a column; a spread-out representation (SOR) creator to create a SOR for each number of the plurality of numbers and to store each SOR in a column of the associative memory array, and an extreme SOR (ESOR) finder to find an extreme value using a horizontal bit-wise Boolean operation on rows of the associative memory array storing bits of the SORs. | 2020-04-16 |
20200117453 | COMPUTING DEVICE AND METHOD - The present disclosure provides a computation device. The computation device is configured to perform a machine learning computation, and includes a storage unit, a controller unit, an operation unit, and a conversion unit. The storage unit is configured to obtain input data and a computation instruction. The controller unit is configured to extract and parse the computation instruction from the storage unit to obtain one or more operation instructions, and to send the one or more operation instructions and the input data to the operation unit. The operation unit is configured to perform operations on the input data according to one or more operation instructions to obtain a computation result of the computation instruction. In the examples of the present disclosure, the input data involved in machine learning computations is represented by fixed-point data, thereby improving the processing speed and efficiency of training operations. | 2020-04-16 |
20200117454 | VECTOR REGISTERS IMPLEMENTED IN MEMORY - Systems and methods related to implementing vector registers in memory. A memory system for implementing vector registers in memory can include an array of memory cells, where a plurality of rows in the array serve as a plurality of vector registers as defined by an instruction set architecture. The memory system for implementing vector registers in memory can also include a processing resource configured to, responsive to receiving a command to perform a particular vector operation on a particular vector register, access a particular row of the array serving as the particular register to perform the vector operation. | 2020-04-16 |
20200117455 | EFFICIENT MULTI-CONTEXT THREAD DISTRIBUTION - Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to determine a first number of threads to be scheduled for each context of a plurality of contexts in a multi-context processing system, allocate a second number of streaming multiprocessors (SMs) to the respective plurality of contexts, and dispatch threads from the plurality of contexts only to the streaming multiprocessor(s) allocated to the respective plurality of contexts. Other embodiments are also disclosed and claimed. | 2020-04-16 |
20200117456 | PERFORMANCE SCALING FOR BINARY TRANSLATION - Embodiments relate to improving user experiences when executing binary code that has been translated from other binary code. Binary code (instructions) for a source instruction set architecture (ISA) cannot natively execute on a processor that implements a target ISA. The instructions in the source ISA are binary-translated to instructions in the target ISA and are executed on the processor. The overhead of performing binary translation and/or the overhead of executing binary-translated code are compensated for by increasing the speed at which the translated code is executed, relative to non-translated code. Translated code may be executed on hardware that has one or more power-performance parameters of the processor set to increase the performance of the processor with respect to the translated code. The increase in power-performance for translated code may be proportional to the degree of translation overhead. | 2020-04-16 |
20200117457 | PROCESSING MERGING PREDICATED INSTRUCTION - A merging predicated instruction controls a processing pipeline to perform a processing operation to determine a processing result based on at least one source operand, and to perform a merging operation to merge the processing result with a previous value of a destination register under control of a predicate value identifying, for each of a plurality of portions of the destination register, whether that portion is to be set to a corresponding portion of the processing result or a corresponding portion of the previous value. The merging predicated instruction is permitted to be issued to the pipeline with a timing which results in the previous value of the destination register still being unavailable when the merging predicated instruction is at a given pipeline stage at which the processing result is determined. This can help to improve performance of subsequent instructions which are independent of the merging predicated instruction. | 2020-04-16 |
20200117458 | APPARATUS AND METHOD FOR CONTROLLING A CHANGE IN INSTRUCTION SET - An apparatus and method are provided for controlling a change in instruction set. The apparatus has processing circuitry to execute instructions of an instruction set, with the processing circuitry being arranged to operate in a capability domain comprising capabilities used to constrain operations performed by the processing circuitry when executing the instructions. A program counter capability storage element is used to store a program counter capability used by the processing circuitry to determine a program counter value. The processing circuitry is arranged to employ a capability based mechanism to change the instruction set. In particular, in response to execution of at least one type of instruction that is used to load an identified capability into the program counter capability storage element, the processing circuitry is arranged to invoke the capability based mechanism in order to perform a capability check operation in respect of the identified capability, and to cause the instruction set to be identified by an instruction set identifier field from the identified capability provided the capability check operation is passed. This provides a controlled mechanism for allowing the instruction set to be changed, thereby alleviating the risk of inadvertent or malicious attempts to change the instruction set. | 2020-04-16 |
20200117459 | CALCULATION PROCESSING APPARATUS AND METHOD FOR CONTROLLING CALCULATION PROCESSING APPARATUS - By including a storing device that stores a plurality of memory access instructions decoded by a decoder and outputs the stored memory access instructions to a cache memory; a determiner that determines whether the storing device has capacity to store the plurality of memory access instructions; and an inhibitor that, when the determiner determines that the storing device lacks capacity to store a first memory access instruction included in the plurality of memory access instructions, inhibits execution of a second memory access instruction, included in the plurality of memory access instructions and subsequent to the first memory access instruction, for a predetermined time period, regardless of the determiner's result for the second memory access instruction, the calculation processing apparatus inhibits a switch of the order of a store instruction and a load instruction. | 2020-04-16 |
20200117460 | MEMORY INTEGRATED CIRCUIT AND PRE-FETCH ADDRESS DETERMINING METHOD THEREOF - A memory integrated circuit and a pre-fetch address determining method thereof are provided. The memory integrated circuit includes an interface circuit, a memory, a memory controller, and a pre-fetch accelerator circuit. The interface circuit receives a normal read request from an external device. When the pre-fetch accelerator circuit receives the normal read request from the interface circuit, the pre-fetch accelerator circuit adds a current address of the normal read request to a training address group as a new training address. The pre-fetch accelerator circuit reorders a plurality of training addresses of the training address group. The pre-fetch accelerator circuit calculates a pre-fetch stride according to the reordered training addresses of the training address group. The pre-fetch accelerator circuit calculates a pre-fetch address of a pre-fetch request according to the pre-fetch stride and the current address. | 2020-04-16 |
20200117461 | ARITHMETIC PROCESSING DEVICE AND CONTROL METHOD FOR ARITHMETIC PROCESSING DEVICE - An arithmetic processing device includes: a pipeline circuit including an instruction fetch circuit, an instruction decoder that performs a first branch misprediction determination for a branch instruction, and issues the instructions in-order, a branch instruction processing circuit which performs a second branch misprediction determination for the branch instruction; and a commit processing circuit that executes a commit processing of the processed instructions in-order. When a branch misprediction is established in the first branch misprediction determination, the instruction decoder inhibits issuing of the instructions to the branch prediction destination from the instruction decoder, and when the first branch instruction for which the branch misprediction is established is inputted, the branch instruction processing circuit clears the pipeline state in the instruction decoder, allows the instruction fetch circuit to start fetching instructions to a correct branch destination, and releases the inhibit of issuing of the instructions from the instruction decoder. | 2020-04-16 |
20200117462 | MEMORY INTEGRATED CIRCUIT AND PRE-FETCH METHOD THEREOF - A memory integrated circuit and a pre-fetch method thereof are provided. The memory integrated circuit includes an interface circuit, a memory, a memory controller, and a pre-fetch accelerator circuit. The interface circuit receives a normal read request from an external device. After the pre-fetch accelerator circuit sends a pre-fetch request to the memory controller, the pre-fetch accelerator circuit pre-fetches at least one pre-fetch data from the memory through the memory controller. When the pre-fetch data in the pre-fetch accelerator circuit has a target data of the normal read request, the pre-fetch accelerator circuit takes the target data from the pre-fetch data and returns the target data to the interface circuit. When the pre-fetch data in the pre-fetch accelerator circuit has no target data, the pre-fetch accelerator circuit sends the normal read request with higher priority than the pre-fetch request to the memory controller. | 2020-04-16 |
20200117463 | CACHE CONTROL CIRCUITRY AND METHODS - An apparatus comprises execution circuitry to perform operations on source data values and to generate result data values; issue circuitry comprising one or more issue queues identifying pending operations awaiting performance by the execution circuitry, and selection circuitry to select pending operations to issue to the execution circuitry; data value cache storage comprising first and second cache regions; and cache control circuitry to control the storing to the first cache region of result data values generated by the execution circuitry and the eviction of stored result data values from the first cache region in response to newly generated result data values being stored in the first cache region; the cache control circuitry being configured to store to the second cache region result data values required as source data values for one or more oldest pending operations identified by the one or more issue queues and to inhibit eviction of a given result data value stored in the second cache region until initiation of execution of a pending operation which requires that given result data value as a source data value. | 2020-04-16 |
20200117464 | EXECUTING BRANCH INSTRUCTIONS FOLLOWING A SPECULATION BARRIER INSTRUCTION - An apparatus comprising processing circuitry is provided, the processing circuitry comprising execution circuitry, commit circuitry, issue circuitry comprising an issue queue and selection circuitry, and a branch predictor. The processing circuitry is configured to identify a speculation barrier instruction in the commit queue. While an entry in the commit queue identifies a speculation barrier instruction, when a branch instruction that follows the speculation barrier instruction in the program order is selected for issue, the processing circuitry performs a first execution of the instruction, inhibiting updating of branch prediction data items associated with the branch instruction and inhibiting the selection circuitry from invalidating the associated issue queue entry. When the speculation barrier instruction completes, the processing circuitry is configured to perform a second execution of the instruction, updating the branch prediction data items associated with the branch instruction and allowing the issue circuitry to invalidate the associated issue queue entry. | 2020-04-16 |
20200117465 | MULTI-AGENT INSTRUCTION EXECUTION ENGINE FOR NEURAL INFERENCE PROCESSING - Multi-agent instruction execution engines for neural inference processing are provided. In various embodiments, a neural core is provided. The neural core includes an instruction memory. The instruction memory comprises a plurality of instruction streams, each instruction stream associated with one of a plurality of agents. The neural core further comprises a plurality of shared functional units. The neural core is adapted to concurrently execute the plurality of instruction streams on the plurality of associated agents. The execution includes maintaining a separate program counter for each of the plurality of agents, determining a plurality of operations from the instructions of each instruction stream, and directing the operations to the shared functional units. The instructions of each instruction stream are statically scheduled prior to runtime to ensure their execution is conflict free. | 2020-04-16 |
20200117466 | COMBINING INSTRUCTIONS FROM DIFFERENT BRANCHES FOR EXECUTION IN A SINGLE N-WAY VLIW PROCESSING ELEMENT OF A MULTITHREADED PROCESSOR - A data processing system includes a processor operable to execute a program partitioned into a number of discrete instructions, the processor having multiple processing elements each capable of executing more than one instruction per cycle, and an interface configured to read a first program and, on detecting a branch operation by that program creating m number of branches each having a different sequence of instructions, combine an instruction from one of the branches with an instruction from at least one of the other branches so as to cause a processing element to execute the combined instructions during a single cycle. | 2020-04-16 |
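The instruction-combining step described in 20200117466 can be illustrated with a small sketch that zips instruction sequences from divergent branches into bundles, each of which an n-way processing element would issue in a single cycle. The instruction names and the simple zip-based pairing are hypothetical, not the patented mechanism.

```python
from itertools import zip_longest

def combine_branches(branches):
    """Combine instructions from m divergent branches into VLIW-style
    bundles, one bundle per cycle (illustrative sketch only)."""
    return [tuple(instr for instr in bundle if instr is not None)
            for bundle in zip_longest(*branches)]

# Two branches of unequal length yield three single-cycle bundles.
bundles = combine_branches([["add r1", "mul r2"], ["ld r3", "st r4", "br l1"]])
```

Shorter branches simply stop contributing, so later bundles shrink rather than pad with no-ops.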
20200117467 | CONFIGURABLE CACHE FOR MULTI-ENDPOINT HETEROGENEOUS COHERENT SYSTEM - A device includes a memory bank. The memory bank includes data portions of a first way group. The data portions of the first way group include a data portion of a first way of the first way group and a data portion of a second way of the first way group. The memory bank further includes data portions of a second way group. The device further includes a configuration register and a controller configured to individually allocate, based on one or more settings in the configuration register, the first way and the second way to one of an addressable memory space and a data cache. | 2020-04-16 |
20200117468 | SHUTDOWN SEQUENCE OF THIN CLIENTS - Examples disclosed herein relate to a thin client. A thin client device can include a power receiver and a shutdown power source sized to store an amount of power for a shutdown sequence of the thin client. The thin client may include a basic input output system (BIOS) to initialize a shutdown sequence powered by the shutdown power source in response to detection of a power loss condition at the power receiver. | 2020-04-16 |
20200117469 | GENERATING A PREDICTED PROFILE FOR TARGET CODE BASED ON A PROFILE OF SAMPLED CODE - A predicted profile is generated for target code to be executed on a processor of the computing environment. The predicted profile is based on a profile of sampled code. The sampled code is a different version of code than the target code and is a complex build of modules for which it is difficult to determine which versions of the modules have been profiled. Based on the predicted profile for the target code, a determination is made of predicted execution information for the target code. Based on the determining the predicted execution information for the target code, an action is performed to facilitate processing within the computing environment. | 2020-04-16 |
20200117470 | REDUCING THE STARTUP LATENCY OF FUNCTIONS IN A FAAS INFRASTRUCTURE - Techniques for reducing the startup latency of functions in a Functions-as-a-Service (FaaS) infrastructure are provided. In one set of embodiments, a function manager of the FaaS infrastructure can receive a request to invoke a function uploaded to the infrastructure and can retrieve information associated with the function. The retrieved information can include an indicator of whether instances of the function may be sticky (i.e., kept in host system primary memory after function execution is complete), and a list of zero or more host systems in the FaaS infrastructure that currently have an unused sticky instance of the function in their respective primary memories. If the indicator indicates that instances of the function may be sticky and if the list identifies at least one host system with an unused sticky instance of the function in its primary memory, the function manager can select the at least one host system for executing the function. | 2020-04-16 |
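The sticky-instance host selection in 20200117470 might look roughly like the sketch below. The metadata fields (`sticky`, `sticky_hosts`) and the first-match selection rule are hypothetical placeholders for whatever the function manager actually stores.

```python
def select_host(function_meta, fallback_hosts):
    """Prefer a host that already holds an unused sticky (warm) instance
    of the function; otherwise fall back to a cold start."""
    if function_meta.get("sticky") and function_meta.get("sticky_hosts"):
        return function_meta["sticky_hosts"][0]  # warm path: skip startup latency
    return fallback_hosts[0]                     # cold path: full startup cost

warm = select_host({"sticky": True, "sticky_hosts": ["host-b"]}, ["host-a"])
cold = select_host({"sticky": False, "sticky_hosts": []}, ["host-a"])
```

The point of the warm path is that the function's process image is already resident in the host's primary memory, so invocation avoids cold-start initialization.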
20200117471 | COMPUTING DEVICE WITH MULTIPLE OPERATING SYSTEMS AND OPERATIONS THEREOF - A computing device includes main volatile memory and a node. The node includes a central processing module, non-volatile memory, and a non-volatile memory interface unit. A combination of the non-volatile memory and the main volatile memory stores an application specific operating system and at least a portion of a computing device operating system. The application specific operating system includes a plurality of application specific system level operations and the computing device operating system includes a plurality of general system level operations. A first processing module of the central processing module operates in accordance with a selected operating system and ignores operations not included in the selected operating system. The selected operating system includes one or more selected application specific level operations of the application specific operating system. | 2020-04-16 |
20200117472 | RACK LEVEL SERVER BOOT - Methods, systems and computer program products for remotely providing local server boot capabilities are provided. Aspects include receiving a command to boot a specified server of a plurality of servers by a rack level server boot device from remote user device. Aspects also include identifying a target emulated hard drive of a plurality of emulated hard drives of the rack level server boot device. The target emulated hard drive may be associated with a port of a plurality of ports of the rack level server that is connected to the specified server. Aspects also include selecting a specified OS boot image of one or more OS boot images stored by a memory of the rack level server boot device. Aspects further include causing the specified server to boot from the target emulated hard drive using the specified OS boot image. | 2020-04-16 |
20200117473 | Application Launching in a Multi-Display Device - Techniques for application launching in a multi-display device are described. In one or more implementations, an apparatus such as a mobile device includes multiple interconnected display devices. According to one or more implementations, techniques described herein enable application launching behavior to be determined based on context information. For instance, based on a determined context condition of a multi-display client device, an application launch behavior is determined and used to launch an application on the client device. | 2020-04-16 |
20200117474 | NETWORK BOOTING IN A PEER-TO-PEER ENVIRONMENT USING DYNAMIC MAGNET LINKS - A method, computer program product, and system include a processor(s) connecting a first computer system to a boot swarm, initiating formation of a peer to peer network. The processor(s) receive, from a second computer system, a request for a file. The processor(s) configure the second computer system, including implementing a client application hosted from a resource in the first computer system, to facilitate the second computer system joining the peer to peer network. The processor(s) determine immediate peer(s) in the peer to peer network available to provide the file to the second computer system. The processor(s) generate a magnet link that includes a listing of address(es) of the immediate peer(s), ranking address(es) from best source to worst source for downloading the file. The processor(s) provide the second computer system with the magnet link to utilize in downloading the file from a peer. | 2020-04-16 |
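The ranked magnet link generation in 20200117474 could be sketched as below, using the standard `x.pe` (direct peer address) magnet parameter to carry the best-to-worst peer list. The scoring dictionary and addresses are hypothetical; the patent does not specify how peers are scored.

```python
def build_magnet_link(info_hash, peers_by_score):
    """Rank peer addresses best-to-worst and embed them in a magnet link
    via repeated x.pe (direct peer address) parameters."""
    ranked = sorted(peers_by_score, key=peers_by_score.get, reverse=True)
    params = "&".join(f"x.pe={addr}" for addr in ranked)
    return f"magnet:?xt=urn:btih:{info_hash}&{params}"

link = build_magnet_link("abc123",
                         {"10.0.0.2:6881": 0.9, "10.0.0.3:6881": 0.4})
```

Because the link is generated per request, the peer ordering can reflect current swarm conditions, which is what makes the magnet link "dynamic."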
20200117475 | FUNCTION EVALUATION USING MULTIPLE VALUES LOADED INTO REGISTERS BY A SINGLE INSTRUCTION - A technique for efficient calling of functions on a processor generates an executable program having a function call by analysing an interface for the function that defines an argument expression and an internal value used solely within the function, and an argument declaration defining an argument value to be provided to the function when the program is run. A data structure is generated including the internal value and a resolved argument value derived from the argument expression and the argument value. A single instruction is encoded in the program to utilise the data structure. When the program is executed on a processor, the single instruction causes the processor to load the argument value and internal value from the data structure into registers in the processor, prior to evaluating the function. The function can then be executed without further register loads being performed. | 2020-04-16 |
20200117476 | AUTOMATED CONFIGURATION OF AN APPLICATION INSTANCE - A method for automated configuration of an application instance includes selecting a set of end users of corresponding instances of an application and classifying each of the end users, logging interactions between each of the end users and the corresponding instances, and grouping the end users into clusters according to commonly observed ones of the logged interactions. For each grouping, a configuration common to the end users clustered therein is generated. Thereafter, a request for automated application instance configuration is received from a new end user and, in response to the request, the new end user is classified, one of the set of end users of same classification identified along with a corresponding grouping, and finally, the generated configuration for the corresponding grouping is retrieved and then applied to a new instance of the application for the end user. | 2020-04-16 |
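The "configuration common to the end users clustered therein" step of 20200117476 might be derived as in this sketch: keep the settings observed for at least a threshold fraction of the cluster's users. The setting names and the majority-vote heuristic are assumptions for illustration, not the patented method.

```python
from collections import Counter

def common_config(cluster_interactions, threshold=0.5):
    """Return the settings observed for at least `threshold` of the users
    in a cluster (illustrative majority-vote heuristic)."""
    counts = Counter()
    for interactions in cluster_interactions:
        counts.update(set(interactions))  # count each user at most once per setting
    n = len(cluster_interactions)
    return sorted(s for s, c in counts.items() if c / n >= threshold)

cfg = common_config([{"dark_mode", "autosave"},
                     {"dark_mode"},
                     {"dark_mode", "autosave"}])
```

A new user classified into this cluster would then receive `cfg` as the starting configuration for their fresh application instance.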
20200117477 | EDGE CONFIGURATION OF SOFTWARE SYSTEMS FOR MANUFACTURING - In one embodiment, a method includes receiving a specification of a manufacturing system that is generated using a modeling language. The modeling language allows a layout of machines to be specified. Available software systems from a software provider are retrieved and the specification is analyzed to determine applicable software systems for the manufacturing system. The method generates a configuration based on the analysis of the layout of the machines for the manufacturing system. The configuration specifies instances of software systems to be installed on the machines and an edge system between the manufacturing system and a remote computing environment. The instances of the software systems are deployed on the edge system and the machines and the edge system orchestrates operations on the machines operating in the manufacturing system in real-time. Also, the edge system communicates with the remote computing environment for non-real time operations on the set of machines. | 2020-04-16 |
20200117478 | METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR MANAGING SERVICE CONTAINER - The present disclosure relates to a method, apparatus and computer program product for managing service containers. According to example implementations of the present disclosure, there is provided a method for managing a group of service containers. In the method, in response to receiving a backup demand on a group of to-be-generated service containers, a configuration file for generating the group of service containers is built on the basis of the backup demand, the configuration file comprising scripts for installing backup agencies that perform backup operations to the group of service containers. An image file for initializing the group of service containers is loaded to at least one node in a service container management system so as to create a group of basic running environments. The configuration file is deployed to the group of basic running environments to generate the group of service containers, so that a corresponding backup agency comprised in a corresponding service container in the group of generated service containers performs the backup operation to the group of generated service containers. Further, there is provided an apparatus and computer program product for managing a group of service containers. | 2020-04-16 |
20200117479 | AUTOMATED PROPAGATION OF SERVER CONFIGURATION ON A SERVER CLUSTER - Techniques are disclosed to automate secure propagation of a configuration to a plurality of servers in a server cluster. For example, the techniques may include a method. The method may include receiving, at a first computing device, a first public key associated with a target computing device, the first computing device having an updated configuration. The method may further include encrypting, at the first computing device, the updated configuration using the first public key. The method may further include sending the encrypted configuration to the target computing device. The method may further include decrypting, at the target computing device, the encrypted configuration using a first private key associated with the target computing device, wherein the first public key and the first private key are a first keypair associated with the target computing device. The method may further include updating the target computing device with the updated configuration. | 2020-04-16 |
20200117480 | TRIGGER CORRELATION FOR DYNAMIC SYSTEM RECONFIGURATION - A system is reconfigured at runtime when triggers are issued in response to events taking place in the system. The triggers, which are issued on configuration entities, are correlated by transferring relations of the configuration entities to relations of the triggers to thereby identify related triggers. Elasticity rules are selected for the triggers, where the elasticity rules specify actions for resource allocation or deallocation at runtime. Selected actions of the selected elasticity rules for the related triggers are executed to reconfigure the system according to a set of action correlation meta-rules which provide an ordering of the actions. | 2020-04-16 |
20200117481 | SYSTEM AND METHOD FOR THIRD PARTY APPLICATION ENABLEMENT - Disclosed herein are system, method, and computer program product embodiments for enabling and/or configuring cloud-based applications. In an embodiment, a cloud system provides a cloud-based computing platform accessible by user input devices to perform cloud-based computing. The cloud system also includes an application exchange, allowing user input devices to select other cloud-based applications and/or software to enable and use with the cloud-based computing platform. The cloud-based application may be hosted by a third party cloud system that enables the functionality of the cloud-based application. When a user input device selects a cloud-based application to enable, the cloud system generates an installation and configuration process to seamlessly install and configure the cloud-based application within the cloud-based computing platform. In this manner, the cloud system integrates the configuration process into the cloud-based computing platform. | 2020-04-16 |
20200117482 | DYNAMIC LOADING OF A JAVA AGENT INTO A RUNNING PROCESS THROUGH DIRECT PROCESS MEMORY MANIPULATION - A method includes performing, by a processor: instantiating a first process corresponding to a first user identification and modifying a Java Runtime Environment (JRE) associated with a second process corresponding to a second user identification that is different from the first user identification to include a loader, the loader being configured to load a Java agent into the second process. Modifying the JRE comprises modifying the JRE using the first process. | 2020-04-16 |
20200117483 | DATA PROCESSING SYSTEM AND METHOD - A data processing system includes a data processing arrangement, wherein the data processing arrangement includes computing hardware for executing one or more software products, wherein execution of the one or more software products configures the data processing arrangement to access data from a file system arrangement. The data processing arrangement is operable to load a dynamic linker that is operable to include an intercept library that intercepts file access operations of an executable software product wherein: | 2020-04-16 |
20200117484 | PERSONALIZED INTERACTIVE DATA PRESENTATION THROUGH AUGMENTED REALITY - A method, system, and computer program product for data presentation through augmented reality include creating a file container for a presentation that supports raw data embedding and definition of available interactions and levels of confidentiality of information for the presentation, streaming presentation content from the file container to a plurality of augmented reality devices, and generating a personalized individual interactive experience of the presentation content for at least one person wearing an augmented reality device in the plurality of augmented reality devices. | 2020-04-16 |
20200117485 | Generating User Interface Containers - A system for generating a user interface described herein can include a processor to detect a plurality of display characteristics from a user interface manager, wherein the plurality of display characteristics correspond to a type of a device. The processor can also detect a list of applications being executed by the system and generate a user interface container by applying the plurality of display characteristics to each of the applications from the list of applications. | 2020-04-16 |
20200117486 | Method and Apparatus for Composite User Interface Creation - A method and apparatus for modelling and generating a composite user interface comprising a plurality of user interface elements provided by at least one source application. Modelling the composite user interface comprises modelling at least part of a user interface provided by the or each source application, and modelling relationships between the at least part of the user interface provided by the or each source application and the composite user interface. | 2020-04-16 |
20200117487 | TERMINAL AND METHOD FOR CONTROLLING THE SAME - The present invention relates to a terminal and a method of operating the terminal. The terminal can execute at least one task, can cause the display to display a soft key and information related to the executed at least one task at a first region of the display, and can cause the display to display at least one function indicator related to the information at a second region of the display. | 2020-04-16 |
20200117488 | CAPTURING AND PROCESSING INTERACTIONS WITH A USER INTERFACE OF A NATIVE APPLICATION - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for evaluating interactions with a user interface of an application are disclosed. In one aspect, a method includes receiving frame bundles for a user session with a native application. Each frame bundle can include data specifying, for each of one or more points in time, a presentation position of each presentation object used by the native application to generate a user interface of the native application at the point in time, and, for one or more presentation objects, one or more drawing operations performed to generate the visual representation of the presentation object. Playback data that presents visual changes of the user interface corresponding to the drawing operations performed to generate the visual representation of each presentation object is generated based on the data specified by the frame bundles. | 2020-04-16 |
20200117489 | SYSTEMS AND METHODS FOR TRAFFIC OPTIMIZATION VIA SYSTEM ON CHIP OF INTERMEDIARY DEVICE - Embodiments described include systems and methods for delivering a network application. An intermediary device between a client device and a server hosting a network application establishes a connection with the network application. The intermediary device receives encoded application data and decodes the encoded application data. The application data is encoded graphics data or audio data. The decoded application data is renderable at the client device. The intermediary device transmits the decoded application graphics and/or audio data to a client application of the client device for rendering to provide user access to the network application. | 2020-04-16 |
20200117490 | Interface for Generating Models with Customizable Interface Configurations - A method includes receiving, via a model building platform, historical user behavior including historical data analysis characteristics; generating, based on the historical data analysis characteristics, a blueprint for guiding user action to accomplish a task, the generating including constructing the blueprint using the historical data analysis characteristics; receiving, via a graphical user interface, user input requesting generation of a model and a task description; determining, using the blueprint and based on the task description, data analysis characteristics; and rendering, within the graphical user interface, a prompt to select the determined data analysis characteristics. Related apparatus, systems, techniques and articles are also described. | 2020-04-16 |
20200117491 | FRAMEWORK FOR CUSTOM ACTIONS ON AN INFORMATION FEED - Systems and methods for providing a custom action for an information post are described. In one embodiment, data for generating a user interface component for display at a client machine may be transmitted from a server to the client machine. The user interface component displaying one or more information posts may be capable of being generated in accordance with first computer programming language instructions provided by a first entity. Each information post may include information relating to a record stored on a storage medium accessible to the server. Selected ones of the information posts may have associated therewith a custom action activation mechanism for activating a custom action relating to the associated information post. The custom action activation mechanism may be capable of being generated in accordance with second computer programming language instructions provided by a second entity. | 2020-04-16 |
20200117492 | USING A GENERATIVE MODEL TO FACILITATE SIMULATION OF POTENTIAL POLICIES FOR AN INFRASTRUCTURE AS A SERVICE SYSTEM - A method for evaluating at least one potential policy for an IaaS system may include determining a predicted workload for the IaaS system based on at least one generative model corresponding to the IaaS system. The at least one potential policy for the IaaS system may be simulated based on the predicted workload, thereby producing one or more simulation metrics that indicate effects of the at least one potential policy. The performance of the IaaS system may be optimized based on the one or more simulation metrics. | 2020-04-16 |
20200117493 | SYSTEM AND METHOD FOR EXECUTING DIFFERENT TYPES OF BLOCKCHAIN CONTRACTS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for executing blockchain contracts are provided. One of the methods includes: obtaining a bytecode of a blockchain contract, wherein the bytecode comprises one or more indicators, and the one or more indicators comprise a first indicator indicating a virtual machine type for executing the blockchain contract; and executing the blockchain contract using a virtual machine of the virtual machine type associated with the first indicator. | 2020-04-16 |
20200117494 | MINIMIZING IMPACT OF MIGRATING VIRTUAL SERVICES - The present disclosure relates to systems, methods, and computer readable media that utilize a low-impact live-migration system to reduce unfavorable impacts caused as a result of live-migrating computing containers between physical server devices of a cloud computing system. For example, systems disclosed herein evaluate characteristics of computing containers on server devices to determine a predicted unfavorable impact of live-migrating the computing containers between the server devices. Based on the predicted impact, the systems disclosed herein can selectively identify which computing containers to live-migrate as well as carry out live-migration of the selected computing containers in such a way that significantly reduces unfavorable impacts to a customer or client device associated with the computing containers. | 2020-04-16 |
20200117495 | ZONE COMPUTE AND CONTROL ARCHITECTURE - A method of zone controlling of features and functions of a vehicle is provided. The zone controlling includes a backbone, which is communicatively coupled to a connected compute center and zone modules, communicating inputs and outputs with respect to the features and the functions. The connected compute center hosts processing operations that control the features and the functions of the vehicle based on the inputs and outputs communicated via the backbone. The zone modules distribute the inputs and outputs to and from the features and the functions of the vehicle. | 2020-04-16 |
20200117496 | VIRTUAL AUTOCALIBRATION OF SENSORS - The present disclosure describes methods and systems for virtually calibrating geometric sensors with overlapping fields of view. In some embodiments, a geometric sensor may be virtually calibrated by applying a correction value to profile data obtained by the geometric sensor to generate adjusted profile data. The correction value may be determined based at least in part on X-Y offsets and/or rotational offsets of prior profile data obtained by the geometric sensor relative to corresponding profile data obtained by a reference geometric sensor, and may be recalculated or updated as new sets of profile data are obtained. The adjusted profile data may be used in place of the original profile data in various data processing operations to functionally offset a positional error of the geometric sensor. | 2020-04-16 |
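Applying an X-Y offset and a rotational offset to profile points, as in 20200117496, is a standard rigid 2-D transform; the sketch below shows that step only. The function name and the particular offset values are illustrative, and how the offsets are estimated against the reference sensor is not shown.

```python
import math

def apply_correction(profile, dx, dy, theta):
    """Apply a rotation by theta then an (dx, dy) translation to each
    (x, y) profile point, functionally offsetting the sensor's
    positional error."""
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return [(x * cos_t - y * sin_t + dx, x * sin_t + y * cos_t + dy)
            for x, y in profile]

# Rotate a single point 90 degrees, then shift by (0.5, -0.5).
adjusted = apply_correction([(1.0, 0.0)], dx=0.5, dy=-0.5, theta=math.pi / 2)
```

Recalibrating as new profile sets arrive amounts to re-estimating `dx`, `dy`, and `theta` and reapplying this transform to subsequent profiles.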
20200117497 | SYSTEM AND METHOD FOR STORAGE DURING VIRTUAL MACHINE MIGRATION - A system and method include receiving a request to transfer first data from a first storage space to a second storage space, receiving a write request to write second data to a location during the transfer of the first data, determining from an access data structure that the location is not in use, writing the second data to the second storage space, and updating a location data structure indicating the location of the second data to be in the second storage space. | 2020-04-16 |
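The write path of 20200117497 can be sketched with plain dictionaries standing in for the two storage spaces and the access/location data structures. The names and the dict-based modeling are assumptions; real implementations would operate on block storage.

```python
def handle_write(location, data, in_use, source, target, location_map):
    """Route a write issued during migration: if the location is not
    currently being transferred, write it straight to the target storage
    space and record the move in the location data structure."""
    if location in in_use:
        source[location] = data              # still transferring; stay on source
    else:
        target[location] = data
        location_map[location] = "target"    # reads now resolve to the target

source, target, location_map = {"blk0": b"old"}, {}, {}
handle_write("blk1", b"new", in_use=set(), source=source,
             target=target, location_map=location_map)
```

Writing directly to the target avoids re-copying freshly written data, since the migration only needs to transfer locations the write path has not already placed on the target.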
20200117498 | AUTOMATIC DOMAIN JOIN FOR VIRTUAL MACHINE INSTANCES - A customer submits a request to a virtual computer system service to launch a virtual machine instance and to join this instance to a managed directory. The service may obtain, from the customer, a domain name and Internet Protocol addresses for the selected directory, which is then stored within a systems management server. When launched, the instance may initiate an agent, which may communicate with the systems management server to obtain the configuration information. The agent may use this configuration information to establish a communications channel with the managed directory and create a temporary set of computer credentials that may be used to verify that the customer is authorized to join the virtual machine instance to the managed directory. If the credentials are valid, the managed directory may generate a computer account within the managed directory, which may be used to join the virtual machine instance to the managed directory. | 2020-04-16 |
20200117499 | METHOD FOR CONTROLLING EXECUTION OF HETEROGENEOUS OPERATING SYSTEMS AND ELECTRONIC DEVICE AND STORAGE MEDIUM THEREFOR - An electronic device is provided. The electronic device includes a display, at least one processor, and a memory operatively connected with the display and the at least one processor and configured to store a plurality of applications including a first application configured to execute using a first operating system (OS) and a second application configured to execute using a second OS, wherein the memory stores instructions configured to, when executed, cause the at least one processor to output a first object and a second object on a screen of the first OS, wherein the first object is associated with execution of the first application and the second object is associated with execution of the second application and, when the second object is selected, output an execution screen of the second application on the screen of the first OS. Other embodiments are also possible. | 2020-04-16 |
20200117500 | METHOD FOR CONTROLLING A MULTI-CORE PROCESSOR AND ASSOCIATED COMPUTER - The invention relates to a control method for a multi-core processor comprising a plurality of cores sharing at least one common material resource according to a sharing policy based on different time windows, each time window being attributed to at least one core. | 2020-04-16 |
20200117501 | CUSTOMIZING JVM NATIVE SERVICES - Applications can invoke native services provided by an operating system via an interface library of functions. The interface library may provide a more limited set of parameters than the full range of parameters supported by the native services. An application can customize an interface function by requesting that a parameter be added to the parameter list whenever the interface function is called by the application. For example, an application can request that a parameter that is supported by a native service but is not provided by the interface function to the native service be added prior to calling the native service. | 2020-04-16 |
20200117502 | INVOKING AN AUTOMATED ASSISTANT TO PERFORM MULTIPLE TASKS THROUGH AN INDIVIDUAL COMMAND - Methods, apparatus, systems, and computer-readable media for engaging an automated assistant to perform multiple tasks through a multitask command. The multitask command can be a command that, when provided by a user, causes the automated assistant to invoke multiple different agent modules for performing tasks to complete the multitask command. During execution of the multitask command, a user can provide input that can be used by one or more agent modules to perform their respective tasks. Furthermore, feedback from one or more agent modules can be used by the automated assistant to dynamically alter tasks in order to more effectively use resources available during completion of the multitask command. | 2020-04-16 |
20200117503 | METHOD AND SYSTEM FOR PROCESSING DATA - A method for processing data includes receiving an adjustment request for adjusting a number of consumer instances from a first number to a second number, and determining a migration overhead for adjusting a first distribution of states associated with the first number of consumer instances to a second distribution of the states associated with the second number of consumer instances, wherein the states are intermediate results of processing the data and the migration overhead includes a latency and a bandwidth shortage incurred for migrating the states. Based on the determined migration overhead, the states are migrated between the first number of consumer instances and the second number of consumer instances, and thereafter the data is processed based on the second distribution of the states at the second number of consumer instances. | 2020-04-16 |
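The state redistribution in 20200117503 can be sketched by hashing each state key to an owning consumer instance before and after the rescale; the number of keys that change owner is a simple proxy for the migration overhead the patent weighs. The CRC32-modulo assignment is an assumption for illustration.

```python
import zlib

def owner(key, n):
    # Stable hash so the same key maps to the same instance across runs.
    return zlib.crc32(key.encode()) % n

def redistribute(keys, old_n, new_n):
    """Reassign keyed states from old_n to new_n consumer instances and
    count how many states must migrate (a proxy for migration overhead)."""
    assignment = {k: owner(k, new_n) for k in keys}
    moved = sum(1 for k in keys if owner(k, old_n) != owner(k, new_n))
    return assignment, moved

assignment, moved = redistribute(["s1", "s2", "s3", "s4"], old_n=2, new_n=3)
```

A scheme that moves fewer keys (e.g. consistent hashing) would lower the latency and bandwidth cost of the adjustment, which is exactly the trade-off the determined migration overhead captures.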
20200117504 | EVOLUTIONARY MODELLING BASED NON-DISRUPTIVE SCHEDULING AND MANAGEMENT OF COMPUTATION JOBS - In an embodiment, one or more non-transitory computer-readable storage media store one or more sequences of instructions, which when executed using one or more processors causes the one or more processors to perform various functions, such as accessing data stored in memory regarding a plurality of computation jobs. Such data includes, for instance, periodicity constraints that specify execution frequencies for the plurality of computation jobs, categorization data that categorizes the plurality of computation jobs into a plurality of job types, and organization data that organizes the plurality of computation jobs into a plurality of ordered arrangements. In this embodiment, there are at least a first ordered arrangement and a second ordered arrangement, each ordered arrangement comprises references to each of the plurality of computation jobs according to job type, and the job types are associated with relative priority indicia. The functions performed by the one or more processors also include populating a resulting ordered arrangement by selecting priority indicia from between respective job types in the first ordered arrangement and second ordered arrangement, and generating a schedule for execution of the plurality of computation jobs in accordance with the relative priority indicia in the resulting ordered arrangement and the periodicity constraints. The one or more processors are also configured to execute the one or more sequences of instructions to cause execution of the plurality of computation jobs according to the schedule. | 2020-04-16 |
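The "populating a resulting ordered arrangement by selecting priority indicia" step of 20200117504 might, in its simplest form, compare the two parent arrangements position by position and keep the job type with the higher relative priority. The job-type names, priority values, and positional selection rule are hypothetical simplifications of the evolutionary scheme.

```python
def merge_arrangements(first, second, priority):
    """Populate a resulting arrangement by picking, at each position, the
    job type with the higher relative priority (illustrative rule)."""
    return [a if priority[a] >= priority[b] else b
            for a, b in zip(first, second)]

merged = merge_arrangements(["report", "etl"], ["etl", "report"],
                            {"etl": 2, "report": 1})
```

The resulting arrangement then drives schedule generation, subject to each job type's periodicity constraint (its required execution frequency).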
20200117505 | MEMORY PROCESSOR-BASED MULTIPROCESSING ARCHITECTURE AND OPERATION METHOD THEREOF - A memory processor-based multiprocessing architecture and an operation method thereof are provided. The memory processor-based multiprocessing architecture includes a main processor and a plurality of memory chips. The memory chips include a plurality of processing units and a plurality of data storage areas. The processing units and the data storage areas are respectively disposed one-to-one in the memory chips. The data storage areas are configured to share a plurality of sub-datasets of a large dataset. The main processor assigns a computing task to one of the processing units of the memory chips, so that the one of the processing units accesses the corresponding data storage area to perform the computing task according to a part of the sub-datasets. | 2020-04-16 |
20200117506 | CORRELATION OF THREAD INTENSITY AND HEAP USAGE TO IDENTIFY HEAP-HOARDING STACK TRACES - Embodiments identify heap-hoarding stack traces to optimize memory efficiency. Some embodiments can determine a length of time when heap usage by processes exceeds a threshold. Some embodiments may then determine heap information of the processes for the length of time, where the heap information comprises heap usage information for each interval in the length of time. Next, some embodiments can determine thread information of the one or more processes for the length of time, wherein determining the thread information comprises determining classes of threads and wherein the thread information comprises, for each of the classes of threads, thread intensity information for each of the intervals. Some embodiments may then correlate the heap information with the thread information to identify code that corresponds to the heap usage exceeding the threshold. Some embodiments may then initiate actions associated with the code. | 2020-04-16 |
20200117507 | System, Method, and Computer Program Product for Load Balancing to Process Large Data Sets - Systems, methods, and computer program products are provided for load balancing for processing large data sets. The method includes identifying a number of segments and a transaction data set comprising transaction data for a plurality of transactions, the transaction data for each transaction of the plurality of transactions comprising a transaction value, determining an entropy of the transaction data set based on the transaction value of each transaction of the plurality of transactions, segmenting the transaction data set into the number of segments based on the entropy of the transaction data set and balancing respective entropies of each segment of the number of segments, and distributing processing tasks associated with each segment of the number of segments to at least one processor of a plurality of processors to process each transaction in each respective segment. | 2020-04-16 |
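A minimal sketch of the segmentation step in 20200117507, assuming Shannon entropy over transaction values and a greedy cut once a segment accumulates its fair share of the total surprisal. The helper names and the greedy strategy are illustrative assumptions, not the claimed method.

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (bits) of the distribution of transaction values."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def segment_by_entropy(values, num_segments):
    """Greedily split values into segments with roughly equal entropy shares.

    Each value's surprisal -log2(p) is treated as its entropy contribution;
    a segment is cut once it accumulates its fair share of the total.
    """
    counts = Counter(values)
    n = len(values)
    total = shannon_entropy(values) * n  # total surprisal mass
    target = total / num_segments
    segments, current, acc = [], [], 0.0
    for v in values:
        current.append(v)
        acc += -math.log2(counts[v] / n)
        if acc >= target and len(segments) < num_segments - 1:
            segments.append(current)
            current, acc = [], 0.0
    segments.append(current)
    return segments
```

The resulting segments would then be distributed as processing tasks across the available processors.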
20200117508 | DETERMINING AN ALLOCATION OF COMPUTING RESOURCES FOR A JOB - A device may receive a computing resource request. The computing resource request may be related to allocating computing resources for a job. The device may process the computing resource request to identify a set of parameters related to the computing resource request or to the job. The set of parameters may be used to determine an allocation of the computing resources for the job. The device may utilize multiple machine learning models to process data related to the set of parameters identified in the computing resource request. The device may determine the allocation of the computing resources for the job based on utilizing the multiple machine learning models to process the data. The device may generate a set of scripts related to causing the computing resources to be allocated for the job according to the allocation. The device may perform a set of actions based on the set of scripts. | 2020-04-16 |
20200117509 | MANAGING MULTIPLE ISOLATED EXECUTION CONTEXTS IN A SINGLE PROCESS - A method may include generating, for a host application, an image including an image heap including objects and a writeable object partition including a subset of the objects. The method may further include initializing, by executing the image in a process of a computer system, a first isolate including a first address space and a first read-only map of the image heap. The first read-only map may designate the writeable object partition of the image heap as copy-on-write. The method may further include initializing, by executing the image in the process, a second isolate including a second address space and a second read-only map of the image heap. The method may further include performing, in the first isolate and using the first read-only map, a first task that accesses an object, and performing, in the second isolate and using the second read-only map, a second task that accesses the object. | 2020-04-16 |
20200117510 | DATA SET COMPRESSION WITHIN A DATABASE SYSTEM - A method includes receiving, by a host computing device of a storage cluster of computing devices, a segment group of data. The method further includes processing, by the host computing device, the segment group of data to produce data segments. The method further includes sending, by the host computing device, the data segments to the computing devices of the storage cluster. The method further includes allocating, by a host node of the first computing device, data segment divisions of the first data segment to nodes of the first computing device. The method further includes allocating, by a host processing core resource of the first node, data segment sub-divisions of the first data segment division to processing core resources of the first node. The method further includes storing, by the first computing device, the first data segment having the data segment divisions and the first data segment division having the data segment sub-divisions. | 2020-04-16 |
20200117511 | SYSTEMS AND METHODS FOR SCHEDULING PROGRAMS FOR DEDICATED EXECUTION ON A QUANTUM PROCESSOR - Systems and methods for scheduling usage time for programs that can be executed on a hybrid computing system including a quantum processing unit (QPU) and a central processing unit (CPU). Programs can comprise both QPU-executable tasks and CPU-executable tasks. Some programs can be considered high performance programs that are intolerant of interruptions to QPU-executable tasks and some programs can be considered low performance programs that are tolerant of interruptions to QPU-executable tasks. After a high performance program finishes executing QPU-executable tasks on a QPU, a low performance program may execute QPU-executable tasks on the QPU while the high performance program executes CPU-executable tasks on a CPU. Execution of QPU-executable tasks of a low performance program on a QPU can pause or stop if a high performance program is queued. | 2020-04-16 |
20200117512 | ALLOCATING COMPUTING RESOURCES TO A CONTAINER IN A COMPUTING ENVIRONMENT - Computing resources can be allocated to a container in a computing environment. For example, a computing device can determine that a dependent computing resource is to be allocated to the container. The dependent computing resource can depend on another computing resource being allocated to the container before the dependent computing resource is allocated to the container. The computing device can determine a parameter value for a backoff process for checking the availability of the dependent computing resource. The parameter value can be determined using another parameter value for another backoff process for checking the availability of the other computing resource. The computing device can then determine that the dependent computing resource is available by executing the backoff process using the parameter value. In response to determining that the dependent computing resource is available, the computing device can allocate the dependent computing resource to the container. | 2020-04-16 |
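A hedged sketch of the backoff scheme in 20200117512: an exponential backoff polls each resource for availability, and the dependent resource's backoff parameter is derived from the parent resource's parameter. The derivation rule (half the parent's base delay) and the function names are illustrative assumptions.

```python
import time

def wait_for(is_available, base_delay, max_attempts=8):
    """Exponential backoff: poll is_available() until it returns True."""
    delay = base_delay
    for _ in range(max_attempts):
        if is_available():
            return True
        time.sleep(delay)
        delay *= 2  # exponential growth between checks
    return False

def allocate_with_dependency(parent_ready, child_ready, parent_base=0.01):
    """Wait for the parent resource, then for the dependent resource,
    whose backoff parameter is derived from the parent's (here: half the
    base delay, since the parent is usually settled by then)."""
    if not wait_for(parent_ready, parent_base):
        raise TimeoutError("parent resource unavailable")
    child_base = parent_base / 2  # derived parameter, an assumption
    if not wait_for(child_ready, child_base):
        raise TimeoutError("dependent resource unavailable")
    return "allocated"
```

Once both checks succeed, the dependent resource can be bound to the container.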
20200117513 | SYSTEMS AND METHODS FOR SPLITTING PROCESSING BETWEEN DEVICE RESOURCES AND CLOUD RESOURCES - An example described herein includes a device to receive, from a user device, a request message to split processing of an application between the user device and a server device; determine a processing capability of the server device; and determine whether the server device is capable of executing a process of the application based on the processing capability of the server device. When the server device is determined to be capable of executing the process of the application, the device may: send an acceptance message to the user device, wherein the acceptance message identifies a first set of processes of the application and includes instructions to permit the user device to execute the first set of processes; and execute a second set of processes of the application, wherein the user device executes the first set of processes of the application substantially simultaneously as the server device executes the second set of processes of the application. | 2020-04-16 |
20200117514 | ELECTRONIC DEVICE FOR EXECUTING MULTIPLE OPERATING SYSTEMS AND METHOD OF CONTROLLING SAME - An electronic device for executing various operating systems is provided. The electronic device includes first and second hardware devices, a first operating system (OS), a second OS different from the first OS, and a processor configured to control the first hardware device to process first data from a first program executed on the first OS, obtain a command for executing the second OS, generate a container for executing the second OS based on a kernel of the first OS in response to the command for executing the second OS, execute the second OS on the generated container, execute a second program on the second OS, obtain second data regarding the second program from the second OS via socket communication by a control application installed on the first OS, and control the second hardware device to process the second data regarding the second program based on the first OS using the installed control application. | 2020-04-16 |
20200117515 | METHOD AND DEVICE FOR EXECUTING A FUNCTION BETWEEN A PLURALITY OF ELECTRONIC DEVICES - The present disclosure relates to a sensor network, Machine Type Communication (MTC), Machine-to-Machine (M2M) communication, and technology for Internet of Things (IoT). The present disclosure may be applied to intelligent services based on the above technologies, such as smart home, smart building, smart city, smart car, connected car, health care, digital education, smart retail, security and safety services. The present disclosure relates to method and device for executing a function, the method for executing a function in a relay device may include: receiving, from a plurality of electronic devices, capability value information on at least one function executable in each of the plurality of electronic devices; determining at least one electronic device to perform the at least one function based on the received capability value information; and transmitting at least one command message instructing an execution of the at least one function to the at least one electronic device. | 2020-04-16 |
20200117516 | METHODS, SYSTEMS, AND COMPUTER READABLE MEDIUMS FOR WORKLOAD CLUSTERING - Methods, systems, and computer readable mediums for optimizing a system configuration are disclosed. In some examples, a method includes determining whether a system configuration for executing a workload using a distributed computer system is optimizable and in response to determining that the system configuration is optimizable, modifying the system configuration such that at least one storage resource for storing workload data is located at a server node that is executing the workload in the distributed computer system. | 2020-04-16 |
20200117517 | SYSTEM AND METHOD FOR OUTWARD COMMUNICATION IN A COMPUTATIONAL STORAGE DEVICE - A method of facilitating communication from an embedded computer in a computational storage device to a host or an external device includes receiving a message from an embedded user process for transmission to a user process running at either the host or the external device, determining that a destination address of the message corresponds to the host or the external device, in response to the determination, forwarding the message to an embedded relay process associated with the host or the external device, instructing a storage controller of the computational storage device about the message to be delivered, notifying a host relay process at the host of a presence of the message, receiving a send message request from the host in response to the notification, and in response to receiving the send message request, transmitting the message to the host. | 2020-04-16 |
20200117518 | SYSTEM AND METHOD FOR COMPUTATIONAL STORAGE DEVICE INTERCOMMUNICATION - A method of facilitating communication to an embedded computer in a computational storage device via a host includes receiving a message for transmission to an embedded process running at the embedded computer, determining that a destination address of the message corresponds to the embedded computer within the computational storage device, in response to the determination, forwarding the message to a host relay process associated with the embedded computer, and encapsulating the message to generate a proprietary command for transmission to the computational storage device. | 2020-04-16 |
20200117519 | DATA SHARING SYSTEM AND DATA SHARING METHOD THEREFOR - A data sharing system may include a storage module and at least two processing modules. The at least two processing modules may share the storage module and the at least two processing modules communicate to implement data sharing. A data sharing method for the data sharing system is provided. According to the disclosure, a storage communication overhead may be reduced, and a data access delay may be effectively reduced. | 2020-04-16 |
20200117520 | SYSTEM AND METHOD FOR COMMUNICATING BETWEEN COMPUTATIONAL STORAGE DEVICES - A method of computational storage device intercommunication includes receiving a notification from a first storage controller of a first computational storage device indicating a presence of a message, in response to receiving the notification, transmitting a send message request to the first storage controller of the first computational storage device, and receiving the message from the first storage controller, storing the message to a host memory and notifying a host pseudo network device driver of availability of the message, determining whether a destination address of the message corresponds to a host user process that is local to the host or to a second user process that is local to a second embedded computer of a second computational storage device, and providing the message to the host user process or to the second embedded computer associated with the destination address. | 2020-04-16 |
20200117521 | PROCESSING SYSTEM WITH INTERSPERSED PROCESSORS WITH MULTI-LAYER INTERCONNECT - Embodiments of a multi-processor array are disclosed that may include a plurality of processors and configurable communication elements coupled together in an interspersed arrangement. Each configurable communication element may include a local memory and a plurality of routing engines. The local memory may be coupled to a subset of the plurality of processors. Each routing engine may be configured to receive one or more messages from a plurality of sources, assign each received message to a given destination of a plurality of destinations dependent upon configuration information, and forward each message to its assigned destination. The plurality of destinations may include the local memory, and routing engines included in a subset of the plurality of configurable communication elements. | 2020-04-16 |
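A toy model of the routing behavior described in 20200117521: each engine assigns an incoming message to a destination chosen from configuration information, where the destination may be the engine's local memory or a neighboring engine. The class shape and the source-to-destination table are illustrative assumptions, not the disclosed hardware.

```python
class RoutingEngine:
    """Toy routing engine: forwards messages to destinations chosen by a
    configuration table; local memory is one possible destination."""
    def __init__(self, name, config):
        self.name = name
        self.config = config    # source name -> destination name
        self.local_memory = []  # messages delivered locally
        self.neighbors = {}     # destination name -> RoutingEngine

    def connect(self, name, engine):
        self.neighbors[name] = engine

    def receive(self, source, message):
        # unknown sources default to local delivery (an assumption)
        dest = self.config.get(source, "local")
        if dest == "local":
            self.local_memory.append(message)
        else:
            self.neighbors[dest].receive(self.name, message)
```

For example, an engine configured to forward CPU-originated messages to a neighbor delivers them into that neighbor's local memory.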
20200117522 | LIGHTWEIGHT APPLICATION PROGRAMMING INTERFACE (API) CREATION AND MANAGEMENT - Examples of techniques for lightweight application programming interface (API) creation and management are described herein. An aspect includes sending an API response to a client based on a first API request from the client. Another aspect includes receiving a first data consumption record corresponding to the API response, wherein the first data consumption record indicates an amount of data that was discarded from the API response by the client. Another aspect includes determining, based on the first data consumption record, a lightweight API. Another aspect includes sending the lightweight API to the client based on a second API request from the client. | 2020-04-16 |
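One plausible reading of the determination step in 20200117522, as a sketch: a field is dropped from the lightweight variant only if the client's consumption records show it was discarded from every response. Representing records as lists of discarded field names is an assumption for illustration.

```python
def lightweight_fields(all_fields, discard_records):
    """Derive the field set for a lightweight API variant: drop any field
    the client discarded from every recorded response.

    discard_records: one list of discarded field names per API response
    (a stand-in for the abstract's data consumption records).
    """
    if not discard_records:
        return list(all_fields)  # no evidence yet; keep everything
    always_discarded = set(all_fields)
    for record in discard_records:
        always_discarded &= set(record)  # keep only fields never used
    return [f for f in all_fields if f not in always_discarded]
```

Subsequent requests from that client could then be served the reduced field set.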
20200117523 | STATISTICAL DEEP CONTENT INSPECTION OF API TRAFFIC TO CREATE PER-IDENTIFIER INTERFACE CONTRACTS - Embodiments of the present disclosure relate to deep content inspection of API traffic. Initially, messages are received from users of an API at an API gateway. The messages comprise a structure and metadata and are intended for an API server. The API gateway selectively communicates copies of the messages to a traffic sampler. The traffic sampler comprises a database of traffic samples, a machine learning system, and a database comprising one or more models. The traffic sampler communicates the models corresponding to usage of the API servers to the API gateway. The models are built by the machine learning system based on the structure and metadata of the traffic samples and may be utilized to perform tests on the API servers. | 2020-04-16 |
20200117524 | REPLACING GENERATED PROCEDURE CALLS WITH GENERATED INTER-PROCESS COMMUNICATION - A package generated by a compiler of a computing environment is to be used in inter-process communication between one module and another module running in a single address space of the computing environment. The one module is one class of module and the other module is another class of module, in which the one class of module is different from the other class of module. The one module calls the other module using the inter-process communication, which employs the package generated by the compiler. The called module performs one or more operations, and the one module is placed in a wait state. | 2020-04-16 |
20200117525 | NOVEL METHOD FOR NVME SSD BASED STORAGE SERVICE USING RPC AND GRPC TUNNELING OVER PCIE + - A host machine is disclosed. The host machine may include a host processor, a memory, an operating system running on the host processor, and an application running under the operating system on the host processor. The host machine may also include a Peripheral Component Interconnect Express (PCIe) tunnel to a Non-Volatile Memory Express (NVMe) Solid State Drive (SSD) and an RPC capture module which may capture a remote procedure call (RPC) from the application and deliver a result of the RPC to the application as though from the host processor, where the NVMe SSD may execute the RPC to generate the result. | 2020-04-16 |
20200117526 | TECHNIQUES FOR HANDLING ERRORS IN PERSISTENT MEMORY - Examples may include a basic input/output system (BIOS) for a computing platform communicating with a controller for a non-volatile dual in-line memory module (NVDIMM). Communication between the BIOS and the controller may include a request for the controller to scan and identify error locations in non-volatile memory at the NVDIMM. The non-volatile memory may be capable of providing persistent memory for the NVDIMM. | 2020-04-16 |
20200117527 | REDUCING BLOCK CALIBRATION OVERHEAD USING READ ERROR TRIAGE - A computer-implemented method, according to one embodiment, includes: detecting that an error count resulting from reading a first page in a block of storage space in memory is above a first threshold, and reading a second page in the block of storage space. The second page is one which had a highest error count of the pages in the block of storage space following a last calibration of the block of storage space. Moreover, a determination is made as to whether an error count resulting from reading the second page is above the first threshold. In response to determining that the error count resulting from reading the second page is above the first threshold, the block of storage space is calibrated. Other systems, methods, and computer program products are described in additional embodiments. | 2020-04-16 |
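The triage logic in 20200117527 can be sketched as follows, with the block, its page error counts, and calibration simulated rather than drawn from real NAND; the class shape and parameter names are assumptions for illustration.

```python
class Block:
    """Minimal model of a flash block for read-error triage (page error
    counts and calibration are simulated, not real NAND)."""
    def __init__(self, page_errors, threshold):
        self.page_errors = page_errors  # per-page stored error counts
        self.threshold = threshold
        self.calibrated = False
        # the page with the highest error count since the last calibration
        self.worst_page = max(page_errors, key=page_errors.get)

    def read(self, page):
        return self.page_errors[page]

    def on_read(self, page, observed_errors):
        """Triage a high-error read: calibrate only if the historically
        worst page also exceeds the threshold, avoiding needless
        calibrations for isolated bad reads."""
        if observed_errors <= self.threshold:
            return False  # healthy read; nothing to do
        if self.read(self.worst_page) > self.threshold:
            self.calibrate()  # only now pay the calibration overhead
            return True
        return False  # likely a transient error; skip calibration

    def calibrate(self):
        self.calibrated = True
```

The point of the second read is to distinguish a block-wide drift (worst page also bad, so calibrate) from a one-off read error (worst page fine, so skip).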
20200117528 | APPARATUS AND METHODS FOR FAULT DETECTION IN A SYSTEM CONSISTED OF DEVICES CONNECTED TO A COMPUTER NETWORK - A method and apparatus of a management device that determine a root cause of a device failure is described. In an exemplary embodiment, the management device receives an indication of the device failure for a first device, where the first device is part of a system of a plurality of devices. The management device further applies a set of one or more rules to measure a set of input signals of the first device. If the set of input signals are within a corresponding operating range, the management device marks the first device as failed. Alternatively, if one or more of the set of input signals are outside the corresponding operating range, the management device iteratively determines that a second device of the plurality of devices that is coupled to the first device has at least one output signal that is outside of the corresponding operating range and the input signals of the second device are within the corresponding operating range, and marks the second device as a root cause failure of the first device. | 2020-04-16 |
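The iterative rule in 20200117528 can be sketched as a walk upstream from the failed device: if a device's inputs are all in range, it is the root cause; otherwise follow an out-of-range input to the device driving it. The dictionary schema below is a hypothetical representation, and the sketch assumes the signal graph is acyclic.

```python
def in_range(value, lo, hi):
    return lo <= value <= hi

def find_root_cause(device, devices):
    """Walk upstream from a failed device until reaching the device whose
    own inputs are all within their operating ranges.

    devices maps name -> {"inputs": {signal: (value, lo, hi)},
                          "feeds": {input_signal: upstream_device_name}}
    (a hypothetical schema for illustration; assumes no cycles).
    """
    current = device
    while True:
        info = devices[current]
        bad_inputs = [s for s, (v, lo, hi) in info["inputs"].items()
                      if not in_range(v, lo, hi)]
        if not bad_inputs:
            return current  # inputs healthy: mark this device as root cause
        # follow the first out-of-range input to the device driving it
        current = info["feeds"][bad_inputs[0]]
```

For example, a sensor failing because its supply rail is low traces back to the power supply, whose own inputs (mains) are in range.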