14th week of 2019 patent application highlights part 34 |
Patent application number | Title | Published |
20190102132 | COMMUNICATION BETWEEN DISPLAY AND DEVICE UTILIZING A COMMUNICATION AND DISPLAY PROTOCOL - A system is disclosed. The system includes one or more devices, wherein the one or more devices include communication functions and at least one application contained therein. The system also includes one or more displays. The one or more displays do not have any application programs contained therein. The one or more displays and the one or more devices communicate via a communication and display protocol. | 2019-04-04 |
20190102133 | DISPLAY DEVICE - A video wall-type display device may include: a plurality of display modules, each including: a display area including: a first sub-area; and at least one second sub-area disposed to adjoin and to surround the first sub-area; and a non-display area disposed to surround the display area, wherein the first sub-area may include at least one first pixel unit. The at least one first pixel unit includes: a first pixel configured to display a first color; a second pixel configured to display a second color; and a third pixel configured to display a third color, the first, second, and third colors being different from one another. The second sub-area may include at least one second pixel unit, the at least one second pixel unit including: the first pixel, the second pixel, and the third pixel; and at least one fourth pixel configured to display a white color. | 2019-04-04 |
20190102134 | DISPLAY SYSTEM, DISPLAY DEVICE, AND DISPLAY METHOD OF DISPLAY SYSTEM - Disclosed are a display system, a display device and a display method of a display system. A main display device corrects display properties of a displayed image of a secondary display device according to display properties of a displayed image of the main display device, so as to adjust the displayed image of the secondary display device to have the same display properties as the displayed image of the main display device, thereby effectively improving the convenience of use of the display system and the display device. | 2019-04-04 |
20190102135 | SCALABLE INTERACTION WITH MULTI-DISPLAYS - Systems and methods for multiple users to interact with a multi-display using multiple modalities. The multi-display allows a lucid transition between personal non-private work environment and shared work environment for multiple groups of users in an open workspace. This provides users with freedom to use large amounts of space (with or without whiteboard) and the aggregate compute and storage resources available in an open workspace in any configuration suitable for their work dynamics and applications. It allows users to explore and manipulate data using a branch explore merge paradigm via a combination of personal display spaces to create shared display spaces and segregation of personal displays thereof on-demand using interaction modalities like hand gestures, laser pointers and even personal devices. The result is a paradigm where the displays are used as mediums for interacting with the data. | 2019-04-04 |
20190102136 | Display Device - To provide a display device that is suitable for an increase in size. To provide a display device in which display unevenness is suppressed. To provide a display device that can display an image along a curved surface. The display device includes two display panels, two plates, two stages, two driver circuits, two adjusting units, and a frame. Each display panel includes a display portion, an operating circuit portion, a terminal, an external electrode, a transparent portion, and a first portion, and has flexibility. Each transparent portion includes a region transmitting visible light. The display panels are fixed so that the transparent portions and parts of the display portions extend beyond the plates. The display portion of one of the two display panels overlaps with the transparent portion of the other display panel. | 2019-04-04 |
20190102137 | DISPLAY MAPPING - Examples associated with display mapping are described. One example system includes a display mapping module. The display mapping module maps display components connected to the system to virtual channels to which the system is subscribed. A communication module transmits a content instruction to subscribers of a virtual channel. The instruction controls the subscribers of the virtual channel to display content associated with the content instruction on display components the respective subscribers have mapped to the virtual channel. A display module causes a display component mapped to the virtual channel to display content associated with the content instruction. | 2019-04-04 |
20190102138 | METHOD FOR PREVENTING DEROGATORY LYRICS IN A SONG FROM BEING PLAYED AND SYSTEM - A method for preventing derogatory lyrics in a song from being played and a system for recognizing derogatory lyrics in a song. It allows users to listen to music in front of all types of audiences without offensive lyrics being played. The method includes downloading a computing device application on a computing device; using a sensor to identify derogatory lyrics in music; omitting the derogatory lyrics; and producing a song with clean lyrics. The music may be received and played from a radio station, video channel, video music, or from the computing device application. The method may include the ability to skip through songs with inappropriate lyrics to a song with appropriate lyrics. | 2019-04-04 |
20190102139 | SYSTEMS AND METHODS OF ASSOCIATING MEDIA CONTENT WITH CONTEXTS - Systems, devices, apparatuses, components, methods, and techniques for saving media content to a context for later playback are provided. An example media-playback device for identifying and playing media content for a user traveling in a vehicle includes a context detecting device, a context-driven playback engine, and a media playback engine. Contexts are established by parameters that can be detected by a media-playback device. Contexts are situations that are defined by one or more locations, times, events, activities, people, and devices. Media content is saved to the contexts for later playback. The contexts are detected by the context detecting device, the associated media content is identified by the context-driven playback engine, and the media content is automatically played through the media playback engine, without additional input required by the user. | 2019-04-04 |
20190102140 | AUTOMATICALLY GENERATED MEDIA PREVIEW - Systems, devices, apparatuses, components, methods, and techniques for automatically generating media previews are provided. An example media system for automatically generating media previews for a particular artist includes a trailer generation application configured to receive input specifying an artist and a duration of a trailer, automatically select clips from two or more media items by the artist, and automatically arrange and combine the clips into a media trailer for later playback. | 2019-04-04 |
20190102141 | SCENE SOUND EFFECT CONTROL METHOD, AND ELECTRONIC DEVICE - A method for controlling a scene sound effect, and an electronic device. After the device is turned on, it starts a service having a monitoring function; the device monitors the audio track of the device by means of that service and determines whether the audio track includes an audio output. A mapping exists between the audio tracks and the applications in the device. If the device determines that the audio track includes an audio output, the device then determines, on the basis of the mapping, the application mapped to the audio track; the device obtains the scene sound effect corresponding to the application and sets the scene sound effect as the current sound effect of the device. The setting of the scene sound effects is autonomous, thereby simplifying operations and enhancing the utilization efficiency of the device while ensuring higher accuracy of the scene sound effect. | 2019-04-04 |
20190102142 | SYSTEM WITH A COMPUTING PROGRAM AND A SERVER FOR HEARING DEVICE SERVICE REQUESTS - An electronic device includes: a communication interface configured to communicate with a hearing device, the hearing device configured to be worn by a user, the hearing device comprising a processing unit configured to receive an input signal and provide an output signal for compensating a hearing loss of the user; a processing unit configured to generate a request upon a detection of the output signal being unsatisfactory, wherein the processing unit is also configured to receive a wireless response that is generated in response to the request, the response being based at least in part on one or more of a plurality of initial fitting parameters of the hearing device, audiogram(s), one or more of a plurality of current settings of the hearing device, or any combination of the foregoing; and a screen configured to display information regarding an adjustment for improving a performance of the hearing device. | 2019-04-04 |
20190102143 | WIRELESS AUDIO SPLITTER - A host device communicating with a plurality of accessory devices transmits audio data packets via a broadcast channel to the plurality of accessory devices. When one of the plurality of accessory devices determines an audio data packet has not been received, the accessory device sends a negative-acknowledgement signal (NACK) via a unicast channel. The NACK indicates that the at least one of the accessory devices did not receive at least one audio data packet. The host device retransmits the at least one audio data packet indicated as not being received via the broadcast channel to the plurality of accessory devices. Other aspects are also described and claimed. | 2019-04-04 |
20190102144 | Identifying Music as a Particular Song - In general, the subject matter described in this disclosure can be embodied in methods, systems, and program products for indicating a reference song. A computing device stores reference song characterization data that identifies a plurality of audio characteristics for each reference song in a plurality of reference songs. The computing device receives digital audio data that represents audio recorded by a microphone, converts the digital audio data from time-domain format into frequency-domain format, and uses the digital audio data in the frequency-domain format in a music-characterization process. In response to determining that characterization values for the digital audio data are most relevant to characterization values for a particular reference song, the computing device outputs an indication of the particular reference song. | 2019-04-04 |
20190102145 | Media Playback System with Voice Assistance - Example techniques involve invoking voice assistance for a media playback system. In some embodiments, the media playback system is configured to (i) capture a voice input via at least one microphone device, (ii) detect inclusion of one or more of the commands within the voice input, (iii) determine that the one or more commands meet corresponding command criteria associated with the one or more commands within the set of command information, and (iv) in response to the determination, select a first voice assistant service (VAS) and (a) forego selection of a second VAS, (b) send the voice input to the first VAS, and (c) after sending the voice input, receive a response to the voice input from the first VAS. | 2019-04-04 |
20190102146 | EFFICIENT DIRECT STORE DELIVERY SYSTEM AND METHODS OF USING THE SAME - Provided is an improved direct store delivery (DSD) system for providing customized information to a user. The DSD system includes a device that receives a first voice command from the user and determines a current location of the user. The first voice command is then transformed into a first text request based on a conversion of the points in speech to points in data. Further, the first text request is processed to determine a first set of information, based on a current location of the device, to be provided audibly to the user. The first set of information is transmitted to the user through speech. The device further receives a second voice command from the user to interact with the first set of information. The second voice command is processed to determine a final set of information, and the final set of information is visually displayed to the user in an instance in which the user has arrived at a desired location. | 2019-04-04 |
20190102147 | Memory Filtering for Disaggregate Memory Architectures - Examples may include a data center in which memory sleds are provided with logic to filter data stored on the memory sled responsive to filtering requests from a compute sled. Memory sleds may include memory filtering logic arranged to receive filtering requests, filter data stored on the memory sled, and provide filtering results to the requesting entity. Additionally, a data center is provided in which the fabric interconnect protocols by which sleds in the data center communicate are provided with filtering instructions, such that compute sleds can request filtering on memory sleds. | 2019-04-04 |
20190102148 | Development Environment for Real-Time Application Development - According to certain embodiments, a development environment for mobile applications includes a design environment executed by a computing system in communication with a group of viewing applications operating on a group of mobile devices. The viewing applications correspond to version(s) of an application under development. In some embodiments, the design environment is capable of receiving inputs from a designer to modify the application under development. In some embodiments, the design environment provides to the viewing applications, during run-time and in real time, dynamic instructions based on the designer's modifications. In some embodiments, each viewing application executed by each mobile device includes localized features corresponding to features of the application under development, each localized feature optimized for the mobile device. In some embodiments, each viewing application is capable of receiving a dynamic instruction, and modifying, during run-time and in real time, the corresponding localized feature based on the received dynamic instruction. | 2019-04-04 |
20190102149 | METHOD FOR PROVIDING AN INTEGRATED PROCESS FOR CONTROL UNIT DEVELOPMENT AND A SIMULATION DEVICE FOR CONTROL UNIT DEVELOPMENT - A method for providing an integrated control unit development process based on a plurality of simulation models, having at least one first simulation model and a second simulation model, wherein the integrated process simulates a control unit or an environment of a control unit and is executable on a simulation device. The method includes the steps of: isolating externally visible first communication parameters of the first simulation model and isolating externally visible second communication parameters of the second simulation model; comparing the first communication parameters and the second communication parameters and identifying identically named communication parameters; and modifying the identically named communication parameters for at least one of the first simulation model and the second simulation model such that the integrated process is executable on a single processor core. | 2019-04-04 |
20190102150 | METHODS AND APPARATUS TO PERFORM REGION FORMATION FOR A DYNAMIC BINARY TRANSLATION PROCESSOR - Methods, apparatus, systems and articles of manufacture to perform region formation for usage by a dynamic binary translator are disclosed. An example apparatus includes an initial region former to form an initial region starting at a first block of hot code of a control flow graph. The initial region former also adds blocks of hot code lying on a first hottest path of the control flow graph. A region extender extends the initial region to form an extended region including the initial region. The extended region begins at a hottest exit of the initial region and includes blocks of hot code lying on a second hottest path until either a threshold path length has been satisfied or a back edge of the control flow graph is added to the extended region. A region pruner prunes the extended region to remove all loop nests except a selected loop nest, which forms a final region. | 2019-04-04 |
20190102151 | METHODS AND APPARATUS TO MAP SINGLE STATIC ASSIGNMENT INSTRUCTIONS ONTO A DATA FLOW GRAPH IN A DATA FLOW ARCHITECTURE - Methods, apparatus, systems and articles of manufacture to map a set of instructions onto a data flow graph are disclosed herein. An example apparatus includes a variable handler to modify a variable in the set of instructions. The variable is used multiple times in the set of instructions and the set of instructions are in a static single assignment form. The apparatus also includes a PHI handler to replace a PHI instruction contained in the set of instructions with a set of control data flow instructions and a data flow graph generator to map the set of instructions modified by the variable handler and the PHI handler onto a data flow graph without transforming the instructions out of the static single assignment form. | 2019-04-04 |
20190102152 | INTERACTIVE CODE OPTIMIZER - Methods and devices for generating program code representations may include receiving program code or edited program code for an application executing on the computer device. The methods and devices may include receiving an identification of a selected pipeline from a plurality of pipelines that defines a plurality of passes of actions to execute on the program code or the edited program code to optimize the program code or the edited program code. The methods and devices may include running the selected pipeline and generating optimizer output with a program code representation of the program code. | 2019-04-04 |
20190102153 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM RECORDING PROGRAM - An information processing apparatus includes a memory, and a processor coupled to the memory, wherein the processor is configured to acquire, by analyzing a program, a first address of the memory at which a memory access instruction in the program is stored, and a second address of the memory to be accessed by the memory access instruction, and generate first information indicating a correspondence between the first address and the second address. | 2019-04-04 |
20190102154 | METHOD OF DISTRIBUTED GRAPH LOADING FOR MINIMAL COMMUNICATION AND GOOD BALANCE VIA LAZY MATERIALIZATION AND DIRECTORY INDIRECTION USING INDEXED TABULAR REPRESENTATION - Techniques herein minimally communicate between computers to repartition a graph. In embodiments, each computer receives a partition of edges and vertices of the graph. For each of its edges or vertices, each computer stores an intermediate representation into an edge table (ET) or vertex table. Different edges of a vertex may be loaded by different computers, which may cause a conflict. Each computer announces that a vertex resides on the computer to a respective tracking computer. Each tracking computer makes assignments of vertices to computers and publicizes those assignments. Each computer that loaded conflicted vertices transfers those vertices to computers of the respective assignments. Each computer stores a materialized representation of a partition based on: the ET and vertex table of the computer, and the vertices and edges that were transferred to the computer. Edges stored in the materialized representation are stored differently than edges stored in the ET. | 2019-04-04 |
20190102155 | ARTIFICIAL INTELLIGENCE DRIVEN CONFIGURATION MANAGEMENT - Techniques for artificial intelligence driven configuration management are described herein. In some embodiments, a machine-learning process determines a feature set for a plurality of deployments of a software resource. Based on varying values in the feature set, the process clusters each of the plurality of deployments into a cluster of a plurality of clusters. Each cluster of the plurality of clusters comprises one or more nodes and each node of the one or more nodes corresponds to at least a subset of values of the feature set that are detected in at least one deployment of the plurality of deployments of the software resource. The process determines a representative node for each cluster of the plurality of clusters. An operation may be performed based on the representative node for at least one cluster. | 2019-04-04 |
20190102156 | Streamlined Technique For Deploying Application In Cloud Computing Environment - A method is provided for emulating a mainframe development application in a secure partition of computing resources in a cloud computing environment. Privileges are granted to an execution configurator to access services in the secure partition of the cloud computing environment. The services include an application streaming service that emulates the mainframe development application on a web browser. A given instance of the application streaming service is instantiated by the execution configurator in the secure partition of the computing resources. Computing infrastructure which implements the application streaming service is configured by the execution configurator. In particular, the computing infrastructure is configured by generating a script and executing the script in the secure partition of the cloud computing environment, where the script interacts with the given instance of the application streaming service via a command-line interface of a software development kit for the application streaming service. | 2019-04-04 |
20190102157 | OPTIMIZING REDEPLOYMENT OF FUNCTIONS AND SERVICES ACROSS MULTIPLE CONTAINER PLATFORMS AND INSTALLATIONS - A method of distributing microservice containers for a service across a plurality of computing environments may include receiving a service that is built from a plurality of containerized microservices to be deployed in a container platform. The container platform may include a plurality of computing environments. The method may also include receiving deployment criteria for deploying the service in the container platform; accessing characteristics of the plurality of computing environments; and deploying the plurality of containerized microservices across the plurality of computing environments based on the deployment criteria and the characteristics of the plurality of computing environments. | 2019-04-04 |
20190102158 | SYSTEM AND METHOD FOR PROVIDING SOFTWARE UPDATES IN ASSEMBLY LINE AND DEALERSHIP LOT ENVIRONMENTS - In an example, an access point device for providing software updates to vehicles is disclosed. In an example implementation, the access point device includes a communication module that is configured to establish a wireless connection with respective control modules of the vehicles. The communication module is configured to receive vehicle identification information from the respective vehicles. The access point device also includes a vehicle update determination module that is configured to establish a connection with a server and determine whether a software update is available from the server for the vehicles based upon the vehicle identification information. The vehicle update determination module is configured to initiate a download of the software update from the server when the software update is available. | 2019-04-04 |
20190102159 | ECU AND PERIPHERALS UPDATE USING CENTRAL DISPATCH UNIT - A computer implemented method of updating software of embedded devices connected to a central dispatch device, comprising using one or more processors of the central dispatch device, the processor(s) being adapted to execute code for obtaining a respective update package for one or more of a plurality of embedded devices which are operatively connected to the central dispatch device via a communication interconnection, transferring a transient update agent to the embedded device(s), and transferring the update package to the embedded device(s). The one or more embedded devices execute the transient update agent to apply the update package, and discard the transient update agent after the update package is applied. | 2019-04-04 |
20190102160 | SOFTWARE MANAGEMENT SYSTEM, SOFTWARE UPDATER, SOFTWARE UPDATING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM STORING SOFTWARE UPDATE PROGRAM - A software management system includes a control system connected to a first network and including at least one of control circuitry and a sensor, each controllable by updating first control software in the respective at least one of the control circuitry and the sensor; a software distribution device including first circuitry that stores second control software and transmits the second control software through a second network; and a software update device including second circuitry that, while the software update device is connected to the second network, receives the second control software from the software distribution device and stores it, and, while the software update device is connected to the first network, transmits the second control software to the respective at least one of the control circuitry and the sensor to cause the respective at least one of the control circuitry and the sensor to update the first control software using the second control software. | 2019-04-04 |
20190102161 | AUTOMATED USAGE DRIVEN ENGINEERING - Implementations directed to providing a computer-implemented method for automating vehicle feature updates, the method being executed by one or more processors and comprising receiving telematics data identifying an actual usage of a vehicle; performing a gap analysis between the actual usage of the vehicle and an expected usage of the vehicle; determining a feature update based on the gap analysis; providing the feature update to a product engineering module when the feature cannot be implemented by a software update; and providing the feature update to an onboard computer system when the feature can be implemented by a software update. | 2019-04-04 |
20190102162 | Application Templates and Upgrade Framework for a Multi-Tenant Identity Cloud Service - A system manages tenant application updates in a multi-tenant cloud-based identity and access management (IAM) system by defining one or more application templates; creating one or more applications for one or more tenants of the multi-tenant cloud-based IAM system using the one or more application templates; applying a change to at least one of the one or more application templates; determining whether the one or more applications need to be updated in an automatic mode, a semi-automatic mode, or a manual mode, to incorporate the change; and updating at least one of the one or more applications in an applicable one of the automatic mode, the semi-automatic mode, or the manual mode, based on the outcome of the determining. | 2019-04-04 |
20190102163 | System and Method for a Blockchain-Supported Programmable Information Management and Data Distribution System - A system, method, and computer program for a blockchain-supported programmable information management and data distribution system for decentralized application development that integrates scalability power with functional decentralized application development environment and high data storage capacity. The system may comprise a virtual machine that supports and unites on-chain logic capabilities and off-chain data management capabilities. On-chain logic capabilities may comprise application of rate-limiting capabilities, use of currency, and delegated asynchronous proof-of-stake functionality. Off-chain data management capabilities may comprise an artifact network, multi-party protocol, and analytic capabilities. Incorporation of smart contracts may be supported directly by the virtual machine. | 2019-04-04 |
20190102164 | OVER THE AIR UPDATES USING DRONES - A computer implemented method of using a drone to provide update packages to embedded devices, comprising using one or more processors mounted on the drone to execute code for maneuvering the drone to be in range of one or more wireless interfaces of each of a plurality of embedded devices, communicating with each embedded device through the wireless interface(s) to identify one or more attributes of each embedded device, selecting one of a plurality of update packages according to the identified attribute(s), and transmitting the selected update package to each embedded device through the wireless interface(s). | 2019-04-04 |
20190102165 | METHOD AND SYSTEM FOR IDENTIFYING OPEN-SOURCE SOFTWARE PACKAGE BASED ON BINARY FILES - The present disclosure provides a method and system for identifying an open-source software package from a binary file for which an open-source license is to be checked. The method includes: accessing an open-source database generated to include a plurality of reference binary files and a plurality of reference open-source software packages having a plurality of reference open-source files, based on a plurality of first hash values extracted from the plurality of reference binary files generated from the plurality of reference open-source files; receiving the target binary file; extracting a plurality of second hash values including at least two general hash values from the target binary file; extracting at least two first hash values corresponding to the plurality of second hash values among the plurality of first hash values; and identifying a reference open-source software package corresponding to the at least two first hash values based on the open-source database. | 2019-04-04 |
20190102166 | CREATION AND EXECUTION OF CUSTOMISED CODE FOR A DATA PROCESSING PLATFORM - A method of executing computer-readable code for interaction with one or more data resources on a data processing platform is disclosed, wherein the method is performed using one or more processors. The method may comprise receiving a request message including an identifier identifying executable code stored in a data repository. Another operation may comprise determining, using the identifier, an execution environment mapped to the executable code. Another operation may comprise executing the identified executable code using the determined execution environment. A further operation may comprise passing requests made with the executable code to one or more data resources via a proxy. Also disclosed is a method of creating customised computer-readable code for interaction with one or more data resources on a data processing platform, wherein the method is performed using one or more processors. This method may comprise receiving, through a code creation tool, user entered computer-readable code, committing the entered code to a data repository and creating an identifier which maps to the committed code and to an execution environment for running the committed code on the data processing platform. | 2019-04-04 |
20190102167 | GENERATING AN OPERATING PROCEDURE MANUAL - A device generates an operating procedure manual for software including a captured image of a screen displayed by the software. An image acquiring hardware unit acquires a plurality of captured images of a plurality of screens displayed by software in response to a plurality of operations with respect to the software. A dividing hardware unit divides the plurality of captured images into a plurality of captured image groups, to each of which at least one captured image acquired in response to at least one operation constituting a meaningful chunk belongs. A generating hardware unit generates an operating procedure manual including, for each captured image group, a captured image belonging to that captured image group. | 2019-04-04 |
20190102168 | APPARATUS AND METHOD FOR PERFORMING DUAL SIGNED AND UNSIGNED MULTIPLICATION OF PACKED DATA ELEMENTS - An apparatus and method for performing dual concurrent multiplications of packed data elements. For example, one embodiment of a processor comprises: a decoder to decode a first instruction to generate a decoded instruction; a first source register to store a first plurality of packed byte data elements; a second source register to store a second plurality of packed byte data elements; execution circuitry to execute the decoded instruction, the execution circuitry comprising: multiplier circuitry to concurrently multiply each of the packed byte data elements of the first plurality with a corresponding packed byte data element of the second plurality to generate a plurality of products; adder circuitry to add specified sets of the products to generate temporary results for each set of products; zero-extension or sign-extension circuitry to zero-extend or sign-extend the temporary result for each set to generate an extended temporary result for each set; accumulation circuitry to combine each of the extended temporary results with a selected packed data value stored in a third source register to generate a plurality of final results; and a destination register to store the plurality of final results as a plurality of packed data elements in specified data element positions. | 2019-04-04 |
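The multiply/add/accumulate flow described in 20190102168 can be sketched in a few lines of Python. The function name, the group size of the "specified sets," and the use of arbitrary-precision integers (which makes sign/zero extension implicit) are assumptions for illustration; the abstract leaves them to the instruction encoding.

```python
def dual_mul_accumulate(src1, src2, acc, set_size=4):
    # Multiply corresponding packed elements, add each group of
    # `set_size` products (the "specified sets"), then accumulate the
    # extended sums into the accumulator elements from the third source.
    products = [a * b for a, b in zip(src1, src2)]
    results = list(acc)
    for i in range(len(results)):
        results[i] += sum(products[i * set_size:(i + 1) * set_size])
    return results
```

With `src1=[1,2,3,4]`, `src2=[5,6,7,8]`, and one accumulator element of 10, the four products (5, 12, 21, 32) are summed and accumulated into 80.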
20190102169 | EFFECTIVE DETERMINATION OF PROCESSOR PAIRS FOR TRANSFERRING DATA PROCESSED IN PARALLEL - An apparatus serves as at least one of a plurality of information processing devices each including a group of arithmetic processors, where the plurality of information processing devices are configured to perform parallel processing by using calculation result data of the groups of arithmetic processors included in the plurality of information processing devices. The apparatus includes a memory configured to store bandwidth information indicating a communication bandwidth with which an arithmetic processor included in the groups of arithmetic processors communicates with another arithmetic processor included in the groups of arithmetic processors. For a source arithmetic processor that is any one of the arithmetic processors in the groups, the apparatus determines a destination arithmetic processor that is one of the arithmetic processors in the groups to which the calculation result data of the source arithmetic processor is to be transferred, based on the bandwidth information stored in the memory. | 2019-04-04 |
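The selection step in 20190102169 reduces to a lookup over stored bandwidth records. A minimal Python sketch follows; picking the highest-bandwidth peer is an assumed policy (the abstract only says the choice is "based on the bandwidth information"), and the function and table names are illustrative.

```python
def pick_destination(source, candidates, bandwidth):
    # bandwidth: dict mapping (src, dst) processor pairs to a measured
    # communication bandwidth; choose the best-connected destination.
    return max(candidates, key=lambda dst: bandwidth[(source, dst)])
```

For example, with bandwidths of 10 from `p0` to `p1` and 40 from `p0` to `p2`, the transfer target chosen for `p0` is `p2`.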
20190102170 | TECHNIQUES FOR CURRENT-SENSING CIRCUIT DESIGN FOR COMPUTE-IN-MEMORY - A compute-in-memory (CIM) circuit enables a multiply-accumulate (MAC) operation based on a current-sensing readout technique. An operational amplifier is coupled with a bitline of a column of bitcells included in a memory array of the CIM circuit to cause the bitcells to act like ideal current sources, for use in determining an analog voltage value outputted from the operational amplifier for given states stored in the bitcells and for given input activations for the bitcells. The analog voltage value is sensed by processing circuitry of the CIM circuit and converted to a digital value to compute the multiply-accumulate (MAC) value. | 2019-04-04 |
20190102171 | PROCESSOR MICRO-ARCHITECTURE FOR COMPUTE, SAVE OR RESTORE MULTIPLE REGISTERS, DEVICES, SYSTEMS, METHODS AND PROCESSES OF MANUFACTURE - An electronic circuit ( | 2019-04-04 |
20190102172 | VECTOR POPULATION COUNT DETERMINATION IN MEMORY - Examples of the present disclosure provide apparatuses and methods for determining a vector population count in a memory. An example method comprises determining, using sensing circuitry, a vector population count of a number of fixed length elements of a vector stored in a memory array. | 2019-04-04 |
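The operation in 20190102172 — a population count per fixed-length element of a packed vector — can be sketched as below. Representing the packed vector as one wide Python integer is an illustrative choice; the patent performs this in sensing circuitry within the memory array.

```python
def vector_popcount(vector, element_bits, num_elements):
    # Count the set bits in each fixed-length element packed into the
    # integer `vector`, least-significant element first.
    mask = (1 << element_bits) - 1
    return [bin((vector >> (i * element_bits)) & mask).count("1")
            for i in range(num_elements)]
```

For a vector of three 4-bit elements 0xF, 0x0, 0xF (packed as 0xF0F), the per-element counts are 4, 0, and 4.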
20190102173 | METHODS AND SYSTEMS FOR TRANSFERRING DATA BETWEEN A PROCESSING DEVICE AND EXTERNAL DEVICES - At the inputs and/or outputs, memories are assigned to a reconfigurable module to achieve decoupling of internal data and, in particular, decoupling of the reconfiguration cycles from the external data streams (to/from peripherals, memories, etc.). | 2019-04-04 |
20190102174 | APPARATUS AND METHOD FOR MULTIPLY, ADD/SUBTRACT, AND ACCUMULATE OF PACKED DATA ELEMENTS - An apparatus and method for performing dual concurrent multiplications, subtraction/addition, and accumulation of packed data elements. For example, one embodiment of a processor comprises: a decoder to decode an instruction to generate a decoded instruction; a first source register to store first and second packed data elements; a second source register to store third and fourth packed data elements; execution circuitry to execute the decoded instruction, the execution circuitry comprising: multiplier circuitry to multiply the first and third packed data elements to generate a first temporary product and to concurrently multiply the second and fourth packed data elements to generate a second temporary product, the first through fourth packed data elements all being a first width; circuitry to negate the first temporary product to generate a negated first product; adder circuitry to add the first negated product to a first accumulated packed data element from a third source register to generate a first result, the first result being a second width which is at least twice as large as the first width; the adder circuitry to concurrently add the second temporary product to a second accumulated packed data element to generate a second result of the second width; the first and second results to be stored in specified first and second data element positions within a destination register. | 2019-04-04 |
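The dataflow of 20190102174 — two concurrent multiplies, with only the first product negated before accumulation — can be sketched as follows. The function name is illustrative, and Python integers stand in for the widened second-width results.

```python
def mul_negsub_accumulate(a, b, c, d, acc1, acc2):
    # First lane: negate the product a*c, then accumulate.
    # Second lane: accumulate the product b*d directly.
    return (acc1 - a * c, acc2 + b * d)
```

With elements (2, 3) and (4, 5) and both accumulators at 100, the first lane yields 100 - 8 = 92 and the second 100 + 15 = 115.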
20190102175 | HYBRID ANALOG-DIGITAL FLOATING POINT NUMBER REPRESENTATION AND ARITHMETIC - A hybrid floating-point arithmetic processor includes a scheduler, a hybrid register file, and a hybrid arithmetic operation circuit. The scheduler has an input for receiving floating-point instructions, and an output for providing decoded register numbers in response to the floating-point instructions. The hybrid register file is coupled to the scheduler and contains circuitry for storing a plurality of floating-point numbers each represented by a digital sign bit, a digital exponent, and an analog mantissa. The hybrid register file has an output for providing selected ones of the plurality of floating-point numbers in response to the decoded register numbers. The hybrid arithmetic operation circuit is coupled to the scheduler and to the hybrid register file, for performing a hybrid arithmetic operation between two floating-point numbers selected by the scheduler and providing a hybrid result represented by a result digital sign bit, a result digital exponent, and a result analog mantissa. | 2019-04-04 |
20190102176 | APPARATUS AND METHOD FOR PERFORMING DUAL SIGNED AND UNSIGNED MULTIPLICATION OF PACKED DATA ELEMENTS - An apparatus and method for performing dual concurrent multiplications of packed data elements. For example, one embodiment of a processor comprises: a decoder to decode a first instruction to generate a decoded instruction; a first source register to store a first plurality of packed doubleword data elements; a second source register to store a second plurality of packed doubleword data elements; and execution circuitry to execute the decoded instruction, the execution circuitry comprising: multiplier circuitry to multiply a first doubleword data element from the first source register with a second doubleword data element from the second source register to generate a first quadword product and to concurrently multiply a third doubleword data element from the first source register with a fourth doubleword data element from the second source register to generate a second quadword product; and a destination register to store the first quadword product and the second quadword product as first and second packed quadword data elements. | 2019-04-04 |
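The widening multiply of 20190102176 — two concurrent 32-bit x 32-bit multiplies each producing a full 64-bit product — can be sketched as below. This models the unsigned case only; the dual signed/unsigned behavior of the title would require sign interpretation not shown here.

```python
MASK32 = (1 << 32) - 1  # doubleword mask

def dual_widening_multiply(a1, a2, b1, b2):
    # Two concurrent doubleword multiplies, each yielding a quadword
    # product (Python ints hold the full 64-bit results).
    return ((a1 & MASK32) * (b1 & MASK32),
            (a2 & MASK32) * (b2 & MASK32))
```

Note that the products are never truncated to 32 bits, which is the point of storing them as packed quadwords.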
20190102177 | APPARATUS AND METHOD FOR SHIFTING QUADWORDS AND EXTRACTING PACKED WORDS - An apparatus and method for performing left-shifting operations on packed quadword data. For example, one embodiment of a processor comprises: a decoder to decode a left-shift instruction to generate a decoded left-shift instruction; a first source register to store a plurality of packed quadword data elements; execution circuitry to execute the decoded left-shift instruction, the execution circuitry comprising shift circuitry to left-shift at least first and second packed quadword data elements from first and second packed quadword data element locations, respectively, in the first source register by an amount specified in an immediate value or in a control value in a second source register, to generate first and second left-shifted quadwords; the execution circuitry to cause selection of the 16 most significant bits of the first and second left-shifted quadwords to be written to the 16 least significant bit regions of first and second quadword data element locations, respectively, of a destination register; and the destination register to store the specified set of the 16 most significant bits of the first and second left-shifted quadwords. | 2019-04-04 |
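The shift-then-extract behavior of 20190102177 can be sketched per quadword as below. The sketch returns only the extracted 16-bit words rather than merging them into an existing destination register, and the function name is illustrative.

```python
MASK64 = (1 << 64) - 1  # quadword mask

def shift_extract_words(quads, shift):
    # Left-shift each 64-bit element, then take its 16 most significant
    # bits as the value written to the low 16 bits of the destination.
    out = []
    for q in quads:
        shifted = (q << shift) & MASK64   # shift within quadword width
        out.append(shifted >> 48)         # top 16 bits -> low 16 bits
    return out
```

Shifting bit 47 left by one moves it into bit 48, the lowest bit of the extracted word, so the result is 1.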
20190102178 | Converting a Stream of Data Using a Lookaside Buffer - A stream of data is accessed from a memory system by an autonomous memory access engine, converted on the fly by the memory access engine, and then presented to a processor for data processing. A portion of a lookup table (LUT) containing converted data elements is preloaded into a lookaside buffer associated with the memory access engine. As the stream of data elements is fetched from the memory system each data element in the stream of data elements is replaced with a respective converted data element obtained from the LUT in the lookaside buffer according to a content of each data element to thereby form a stream of converted data elements. The stream of converted data elements is then propagated from the memory access engine to a data processor. | 2019-04-04 |
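The streaming conversion of 20190102178 can be modeled with a dictionary standing in for the lookaside buffer. The refill-on-miss policy and the `preload` count are assumptions for illustration; the abstract only says a portion of the LUT is preloaded and that each element is replaced according to its content.

```python
def convert_stream(stream, full_lut, preload=256):
    # Preload part of the LUT into a small lookaside buffer, then
    # convert each streamed element, refilling the buffer on a miss.
    buffer = dict(list(full_lut.items())[:preload])
    out = []
    for element in stream:
        if element not in buffer:
            buffer[element] = full_lut[element]  # assumed miss handling
        out.append(buffer[element])
    return out
```

The processor downstream then sees only the converted stream, never the raw fetched data.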
20190102179 | PROCESSORS AND METHODS FOR PRIVILEGED CONFIGURATION IN A SPATIAL ARRAY - Methods and apparatuses relating to privileged configuration in spatial arrays are described. In one embodiment, a processor includes processing elements; an interconnect network between the processing elements; and a configuration controller coupled to a first subset and a second, different subset of the plurality of processing elements, the first subset having an output coupled to an input of the second, different subset, wherein the configuration controller is to configure the interconnect network between the first subset and the second, different subset of the plurality of processing elements to not allow communication on the interconnect network between the first subset and the second, different subset when a privilege bit is set to a first value and to allow communication on the interconnect network between the first subset and the second, different subset of the plurality of processing elements when the privilege bit is set to a second value. | 2019-04-04 |
20190102180 | OPTIMIZING SOFTWARE-DIRECTED INSTRUCTION REPLICATION FOR GPU ERROR DETECTION - Software-only and software-hardware optimizations to reduce the overhead of intra-thread instruction duplication on a GPU or other instruction processor are disclosed. The optimizations trade off error containment for performance and include ISA extensions with limited hardware changes and area costs. | 2019-04-04 |
20190102181 | APPARATUS AND METHOD FOR SHIFTING AND EXTRACTING PACKED DATA ELEMENTS - An apparatus and method for performing left-shifting operations on packed quadword data. For example, one embodiment of a processor comprises: a decoder to decode a left-shift instruction to generate a decoded left-shift instruction; a first source register to store a plurality of packed quadword data elements; execution circuitry to execute the decoded left-shift instruction, the execution circuitry comprising shift circuitry to left-shift at least first and second packed quadword data elements from first and second packed quadword data element locations, respectively, in the first source register by an amount specified in an immediate value or in a control value in a second source register, to generate first and second left-shifted quadwords; the execution circuitry to cause selection of a specified set of most significant bits of the first and second left-shifted quadwords to be written to least significant bit regions of first and second quadword data element locations, respectively, of a destination register; and the destination register to store the specified set of the most significant bits of the first and second left-shifted quadwords. | 2019-04-04 |
20190102182 | APPARATUS AND METHOD FOR PERFORMING DUAL SIGNED AND UNSIGNED MULTIPLICATION OF PACKED DATA ELEMENTS - An apparatus and method for performing dual concurrent multiplications of packed data elements. For example, one embodiment of a processor comprises: a decoder to decode a first instruction to generate a decoded instruction; a first source register to store a first plurality of packed data elements; a second source register to store a second plurality of packed data elements; execution circuitry to execute the decoded instruction, the execution circuitry comprising: multiplier circuitry to perform concurrent dual multiplications of a first packed data element from the first source register with a second packed data element from the second source register and a third packed data element from the first source register with a fourth packed data element from the second source register to generate first and second products, respectively, wherein the first and third packed data elements have a width twice as large as a width of the second and fourth packed data elements; the multiplier circuitry to select the first and third packed data elements from the first source register and the second and fourth packed data elements from the second source register in accordance with an immediate of the first instruction to generate the first and second products. | 2019-04-04 |
20190102183 | APPARATUS AND METHOD FOR MULTIPLICATION AND ACCUMULATION OF COMPLEX AND REAL PACKED DATA ELEMENTS - An apparatus and method for multiplying packed real and imaginary components of complex numbers. For example, one embodiment of a processor comprises: a decoder to decode a first instruction to generate a decoded instruction; a first source register to store a first plurality of packed real and imaginary data elements; a second source register to store a second plurality of packed real and imaginary data elements; execution circuitry to execute the decoded instruction, the execution circuitry comprising: multiplier circuitry to select real and imaginary data elements in the first source register and second source register to multiply, the multiplier circuitry to multiply each selected imaginary data element in the first source register with a selected real data element in the second source register, and to multiply each selected real data element in the first source register with a selected imaginary data element in the second source register to generate a plurality of imaginary products, adder circuitry to add a first subset of the plurality of imaginary products to generate a first temporary result and to add a second subset of the plurality of imaginary products to generate a second temporary result; accumulation circuitry to combine the first temporary result with first data from a destination register to generate a first final result and to combine the second temporary result with second data from the destination register to generate a second final result and to store the first final result and second final result back in the destination register. | 2019-04-04 |
20190102184 | APPARATUS AND METHOD FOR SHIFTING QUADWORDS AND EXTRACTING PACKED WORDS - An apparatus and method for performing right-shifting operations on packed quadword data. For example, one embodiment of a processor comprises: a decoder to decode a right-shift instruction to generate a decoded right-shift instruction; a first source register to store a plurality of packed quadword data elements; execution circuitry to execute the decoded right-shift instruction, the execution circuitry comprising shift circuitry to right-shift at least first and second packed quadword data elements from first and second packed quadword data element locations, respectively, in the first source register by an amount specified in an immediate value or in a control value in a second source register, to generate first and second right-shifted quadwords; the execution circuitry to cause selection of the 16 most significant bits of the first and second right-shifted quadwords to be written to the 16 least significant bit regions of first and second quadword data element locations, respectively, of a destination register; and the destination register to store the specified set of the 16 most significant bits of the first and second right-shifted quadwords. | 2019-04-04 |
20190102185 | SYSTEMS, APPARATUSES, AND METHODS FOR MULTIPLICATION, NEGATION, AND ACCUMULATION OF VECTOR PACKED SIGNED VALUES - Embodiments of systems, apparatuses, and methods for multiplication, negation, and accumulation of data values in a processor are described. For example, execution circuitry executes a decoded instruction to multiply selected data values from a plurality of packed data element positions in first and second packed data source operands to generate a plurality of first result values, sum the plurality of first result values to generate one or more second result values, negate the one or more second result values to generate one or more third result values, accumulate the one or more third result values with one or more data values from the destination operand to generate one or more fourth result values, and store the one or more fourth result values in one or more packed data element positions in the destination operand. | 2019-04-04 |
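The multiply/sum/negate/accumulate chain of 20190102185 can be sketched per destination element as below. The group size and function name are assumptions; the abstract does not fix how many products feed each sum.

```python
def mul_negate_accumulate(src1, src2, dest, set_size=2):
    # Multiply matching element pairs, sum each group of `set_size`
    # products, negate the sum, and accumulate with the destination.
    products = [a * b for a, b in zip(src1, src2)]
    return [d - sum(products[i * set_size:(i + 1) * set_size])
            for i, d in enumerate(dest)]
```

With sources [1, 2, 3, 4] and [5, 6, 7, 8] and both destination elements at 100, the grouped sums 17 and 53 are negated and accumulated to give 83 and 47.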
20190102186 | SYSTEMS, APPARATUSES, AND METHODS FOR MULTIPLICATION AND ACCUMULATION OF VECTOR PACKED UNSIGNED VALUES - Embodiments of systems, apparatuses, and methods for multiplication and accumulation of data values in a processor are described. For example, execution circuitry executes a decoded instruction to multiply selected unsigned data values from a plurality of packed data element positions in first and second packed data source operands to generate a plurality of first unsigned result values, sum the plurality of first unsigned result values to generate one or more second unsigned result values, accumulate the one or more second unsigned result values with one or more data values from the destination operand to generate one or more third unsigned result values, and store the one or more third unsigned result values in one or more packed data element positions in a destination operand. | 2019-04-04 |
20190102187 | Processors, Methods, Systems, and Instructions to Generate Sequences of Integers in which Integers in Consecutive Positions Differ by a Constant Integer Stride and Where a Smallest Integer is Offset from Zero by an Integer Offset - A method of an aspect includes receiving an instruction. The instruction indicates an integer stride, indicates an integer offset, and indicates a destination storage location. A result is stored in the destination storage location in response to the instruction. The result includes a sequence of at least four integers in numerical order with a smallest one of the at least four integers differing from zero by the integer offset and with all integers of the sequence in consecutive positions differing by the integer stride. Other methods, apparatus, systems, and instructions are disclosed. | 2019-04-04 |
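The result described in 20190102187 is a simple arithmetic progression. A minimal Python sketch, with an illustrative function name and a default count of four (the abstract's minimum):

```python
def stride_sequence(stride, offset, count=4):
    # Integers in numerical order: the smallest differs from zero by
    # `offset`, and consecutive positions differ by `stride`.
    return [offset + i * stride for i in range(count)]
```

For a stride of 3 and an offset of 5, the four-element result is [5, 8, 11, 14].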
20190102188 | APPARATUS AND METHOD FOR NON-SERIALIZING SPLIT LOCKS - An apparatus and method are described for performing split lock operations in a multi-core processor. For example, one embodiment of a processor comprises: a plurality of cores to execute instructions, each core comprising a core cache to cache data during instruction execution; a shared cache to be shared by two or more of the plurality of cores; a locking agent on a first core to initiate a split lock operation in response to detecting a transaction targeting at least two cache lines, the locking agent to transmit a request for the two cache lines to be set to an Exclusive state; at least one coherence enforcement engine to receive the request from the locking agent and to responsively cause any copies of the two cache lines in other cores to be invalidated; the locking agent to permit the transaction targeting the two cache lines to complete upon receipt of an indication that the cache lines are in the Exclusive state and, upon completion of the transaction, to transmit an indication that the transaction is complete to the coherence enforcement engine. | 2019-04-04 |
20190102189 | WORKFLOW GENERATING APPARATUS DISPLAYING OUTPUT DATA OF JOB TO BE ADDED TO WORKFLOW, AND COMPUTER READABLE STORAGE MEDIUM STORING PROGRAM INSTRUCTIONS FOR CAUSING WORKFLOW GENERATING APPARATUS TO DISPLAY THE OUTPUT DATA - A non-transitory computer readable storage medium stores a set of program instructions for a workflow generating apparatus including a processor and a display. The workflow generating apparatus is capable of generating a workflow by combining a plurality of jobs. The set of program instructions, when executed by the processor, causes the workflow generating apparatus to perform: receiving a selection of one job to be added to the workflow from among a plurality of selectable jobs; acquiring input data to be inputted to the one job; specifying, on the basis of the acquired input data, output data to be outputted by execution of the one job, the output data being usable as input data for a job to be executed after execution of the one job in the workflow; and displaying, on the display, the specified output data in a distinguishable manner. | 2019-04-04 |
20190102190 | APPARATUS AND METHOD FOR PERFORMING MULTIPLICATION WITH ADDITION-SUBTRACTION OF REAL COMPONENT - An apparatus and method for performing a transform on complex data. For example, one embodiment of a processor comprises: multiplier circuitry to multiply packed real N-bit data elements in the first source register with packed real M-bit data elements in the second source register and to multiply packed imaginary N-bit data elements in the first source register with packed imaginary M-bit data elements in the second source register to generate at least four real products, adder circuitry to subtract a first selected real product from a second selected real product to generate a first temporary result and to subtract a third selected real product from a fourth selected real product to generate a second temporary result, the adder circuitry to add the first temporary result to a first packed N-bit data element from the third source register to generate a first pre-scaled result, to subtract the first temporary result from the first packed N-bit data element to generate a second pre-scaled result, to add the second temporary result to a second packed N-bit data element from the third source register to generate a third pre-scaled result, and to subtract the second temporary result from the second packed N-bit data element to generate a fourth pre-scaled result; scaling circuitry to scale the first, second, third and fourth pre-scaled results to a specified bit width to generate first, second, third, and fourth final results; and a destination register to store the first, second, third, and fourth final results in specified data element positions. | 2019-04-04 |
20190102191 | SYSTEMS, APPARATUSES, AND METHODS FOR DUAL COMPLEX BY COMPLEX CONJUGATE MULTIPLY OF SIGNED WORDS - Embodiments of systems, apparatuses, and methods for dual complex number by complex conjugate multiplication in a processor are described. For example, execution circuitry executes a decoded instruction to multiplex data values from a plurality of packed data element positions in the first and second packed data source operands to at least one multiplier circuit, the first and second packed data source operands including a plurality of pairs of complex numbers, each pair of complex numbers including data values at shared packed data element positions in the first and second packed data source operands; calculate a real part and an imaginary part of a product of a first complex number and a complex conjugate of a second complex number; and store the real result to a first packed data element position in the destination operand and store the imaginary result to a second packed data element position in the destination operand. | 2019-04-04 |
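The core arithmetic of 20190102191 follows from expanding (a_r + a_i·i)(b_r - b_i·i): the real part is a_r·b_r + a_i·b_i and the imaginary part is a_i·b_r - a_r·b_i. A minimal sketch over (real, imag) tuples, with an illustrative function name:

```python
def dual_conj_multiply(pairs_a, pairs_b):
    # For each shared element position, multiply complex a by the
    # conjugate of complex b, producing (real, imag) result pairs.
    out = []
    for (ar, ai), (br, bi) in zip(pairs_a, pairs_b):
        out.append((ar * br + ai * bi, ai * br - ar * bi))
    return out
```

For (1 + 2i) times the conjugate of (3 + 4i), the result is 11 + 2i, matching (1 + 2i)(3 - 4i).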
20190102192 | APPARATUS AND METHOD FOR SHIFTING AND EXTRACTING PACKED DATA ELEMENTS - An apparatus and method for performing right-shifting operations on packed quadword data. For example, one embodiment of a processor comprises: a decoder to decode a right-shift instruction to generate a decoded right-shift instruction; a first source register to store a plurality of packed quadword data elements; execution circuitry to execute the decoded right-shift instruction, the execution circuitry comprising shift circuitry to right-shift at least first and second packed quadword data elements from first and second packed quadword data element locations, respectively, in the first source register by an amount specified in an immediate value or in a control value in a second source register, to generate first and second right-shifted quadwords; the execution circuitry to cause selection of a specified set of most significant bits of the first and second right-shifted quadwords to be written to least significant bit regions of first and second quadword data element locations, respectively, of a destination register; and the destination register to store the specified set of the most significant bits of the first and second right-shifted quadwords. | 2019-04-04 |
20190102193 | APPARATUS AND METHOD FOR COMPLEX BY COMPLEX CONJUGATE MULTIPLICATION - An apparatus and method for multiplying packed real and imaginary components of complex numbers. For example, one embodiment of a processor comprises: a decoder to decode a first instruction to generate a decoded instruction; a first source register to store a first plurality of packed real and imaginary data elements; a second source register to store a second plurality of packed real and imaginary data elements; and execution circuitry to execute the decoded instruction, the execution circuitry comprising: multiplier circuitry to select real and imaginary data elements in the first source register and second source register to multiply, the multiplier circuitry to multiply each selected imaginary data element in the first source register with a selected real data element in the second source register, and to multiply each selected real data element in the first source register with a selected imaginary data element in the second source register to generate a plurality of imaginary products, adder circuitry to add a first subset of the plurality of imaginary products and subtract a second subset of the plurality of imaginary products to generate a first temporary result and to add a third subset of the plurality of imaginary products and subtract a fourth subset of the plurality of imaginary products to generate a second temporary result, accumulation circuitry to combine the first temporary result with first data from a destination register to generate a first final result and to combine the second temporary result with second data from the destination register to generate a second final result and to store the first final result and second final result back in the destination register. | 2019-04-04 |
20190102194 | APPARATUS AND METHOD FOR MULTIPLICATION AND ACCUMULATION OF COMPLEX AND REAL PACKED DATA ELEMENTS - An apparatus and method for multiplying packed real and imaginary components of complex numbers. For example, one embodiment of a processor comprises: a decoder to decode a first instruction to generate a decoded instruction; a first source register to store a first plurality of packed real and imaginary data elements; a second source register to store a second plurality of packed real and imaginary data elements; execution circuitry to execute the decoded instruction, the execution circuitry comprising: multiplier circuitry to select real and imaginary data elements in the first source register and second source register to multiply, the multiplier circuitry to multiply each selected imaginary data element in the first source register with a selected real data element in the second source register, and to multiply each selected real data element in the first source register with a selected imaginary data element in the second source register to generate a plurality of imaginary products, adder circuitry to add a first subset of the plurality of imaginary products to generate a first temporary result and to add a second subset of the plurality of imaginary products to generate a second temporary result; negation circuitry to negate the first temporary result to generate a third temporary result and to negate the second temporary result to generate a fourth temporary result; accumulation circuitry to combine the third temporary result with first data from a destination register to generate a first final result and to combine the fourth temporary result with second data from the destination register to generate a second final result and to store the first final result and second final result back in the destination register. | 2019-04-04 |
20190102195 | APPARATUS AND METHOD FOR PERFORMING TRANSFORMS OF PACKED COMPLEX DATA HAVING REAL AND IMAGINARY COMPONENTS - An apparatus and method for performing a transform on complex data. For example, one embodiment of a processor comprises: a decoder to decode a first instruction to generate a decoded instruction; a first source register to store a first plurality of packed real and imaginary data elements; a second source register to store a second plurality of packed real and imaginary data elements; a third source register to store a third plurality of packed real and imaginary data elements; execution circuitry to execute the decoded instruction, the execution circuitry comprising: multiplier circuitry to select real and imaginary data elements in the first and second source registers to multiply based on an immediate of the first instruction, the multiplier circuitry to multiply first packed data elements from the first source register with second packed data elements from the second source register in accordance with the immediate to generate a plurality of real and imaginary products, adder circuitry to select real and imaginary data elements in the third source register based on the immediate, the adder circuitry to add and subtract selected real and imaginary values from the real and imaginary products to generate first real and imaginary results; scaling, rounding, and/or saturation circuitry to scale, round, and/or saturate the first real and imaginary results to generate final real and imaginary data elements; and a destination register to store the final real and imaginary data elements in specified data element positions. | 2019-04-04 |
20190102196 | SYSTEMS AND METHODS FOR PERFORMING INSTRUCTIONS TO TRANSFORM MATRICES INTO ROW-INTERLEAVED FORMAT - Disclosed embodiments relate to systems and methods for performing instructions to transform matrices into a row-interleaved format. In one example, a processor includes fetch and decode circuitry to fetch and decode an instruction having fields to specify an opcode and locations of source and destination matrices, wherein the opcode indicates that the processor is to transform the specified source matrix into the specified destination matrix having the row-interleaved format; and execution circuitry to respond to the decoded instruction by transforming the specified source matrix into the specified RowInt-formatted destination matrix by interleaving J elements of each J-element sub-column of the specified source matrix in either row-major or column-major order into a K-wide submatrix of the specified destination matrix, the K-wide submatrix having K columns and enough rows to hold the J elements. | 2019-04-04 |
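The relayout in 20190102196 takes J vertically adjacent elements of each source column and lays them out contiguously in a destination row. A minimal sketch over Python lists-of-lists, assuming a row-major source whose row count is a multiple of J (the abstract also covers column-major sources and padding cases not modeled here):

```python
def to_row_interleaved(matrix, j):
    # Interleave each J-element sub-column of the source into a wider
    # destination row: destination has rows/J rows of J*cols elements.
    rows, cols = len(matrix), len(matrix[0])
    dest = []
    for r0 in range(0, rows, j):
        row = []
        for c in range(cols):
            for r in range(j):
                row.append(matrix[r0 + r][c])
        dest.append(row)
    return dest
```

A 4x2 source with J = 2 becomes two rows in which each column's vertical pair sits adjacently, which is the layout that lets dot-product hardware read J operands from one row.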
20190102197 | SYSTEM AND METHOD FOR MERGING DIVIDE AND MULTIPLY-SUBTRACT OPERATIONS - According to one general aspect, an apparatus may include a decoder circuit, a scheduler circuit, and an execution circuit. The decoder circuit may be configured to detect, within an instruction stream, a first instruction followed by a second instruction, wherein the first instruction takes as input a dividend and a divisor, and wherein the second instruction produces a remainder. The scheduler circuit may be configured to: merge the first and second instructions into a third instruction, wherein the third instruction takes as input the dividend and the divisor, and produces the remainder, replace, within an instruction pipeline, the first instruction with the third instruction, and delete, within the instruction pipeline, the second instruction. The execution circuit may be configured to execute the third instruction. | 2019-04-04 |
20190102198 | SYSTEMS, APPARATUSES, AND METHODS FOR MULTIPLICATION AND ACCUMULATION OF VECTOR PACKED SIGNED VALUES - Embodiments of systems, apparatuses, and methods for multiplication and accumulation of signed data values in a processor are described. For example, execution circuitry executes a decoded instruction to multiply selected signed data values from a plurality of packed data element positions in first and second packed data source operands to generate a plurality of first signed result values, sum the plurality of first signed result values to generate one or more second signed result values, accumulate the one or more signed result values with one or more data values from a destination operand to generate one or more third signed result values, and store the one or more third signed result values in one or more packed data element positions in the destination operand. | 2019-04-04 |
20190102199 | METHODS AND SYSTEMS FOR EXECUTING VECTORIZED PYTHAGOREAN TUPLE INSTRUCTIONS - Disclosed embodiments relate generally to computer processor architecture, and, more specifically, to methods and systems for executing vectorized Pythagorean tuple instructions. In one example, a processor includes fetch circuitry to fetch an instruction having an opcode, an order, a destination identifier, and N source identifiers, N being equal to the order, and the order being one of two, three, and four, decode circuitry to decode the fetched instruction, and execution circuitry, for each element of the identified destination, to generate N squares by squaring each corresponding element of the N identified sources and generate a sum of the N squares and previous contents of the element. | 2019-04-04 |
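The per-element behavior described above — squaring each of the N source elements and summing the squares with the destination's previous contents — can be sketched in plain Python; the function name and operand values are illustrative:

```python
def pythagorean_tuple(dest, *sources):
    """For each element position, add the squares of the corresponding
    source elements to the previous destination contents."""
    return [d + sum(s[i] ** 2 for s in sources)
            for i, d in enumerate(dest)]

# Order 2 (two sources): dest[i] += src1[i]**2 + src2[i]**2
out = pythagorean_tuple([0, 1], [3, 1], [4, 2])
# → [25, 6]
```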
20190102200 | Loader Application with Secondary Embedded Application Object - Methods, systems, and non-transitory computer-readable media for embedding a secondary application object within a loader application are described herein. In some embodiments, a computing platform may initiate a first iOS application comprising a first name and a first instance of UIApplication comprising an NSObject class. Further, the computing platform may embed into the first iOS application, a second iOS application comprising a second name, a second instance of UIApplication, and a first derived class. Next, the computing platform may generate, based on NSObject and the first derived class, a second derived class. Additionally, the computing platform may generate an iPhone Application (IPA) file comprising the first iOS application wherein the first iOS application comprises the second derived class and the second name. Subsequently, the computing platform may distribute via a communication interface, the IPA file. | 2019-04-04 |
20190102201 | COMPONENT INVOKING METHOD AND APPARATUS, AND COMPONENT DATA PROCESSING METHOD AND APPARATUS - A component invoking method includes obtaining component invoking data corresponding to a child application. The component invoking data includes a component identifier that identifies a first native component in a parent application corresponding to the current system platform, and a second native component that runs on another system platform and has the same function as the first native component. The method further includes transferring the component invoking data to a native layer corresponding to the parent application using a communications channel corresponding to the current system platform, and invoking the first native component by the native layer based on the component invoking data. | 2019-04-04 |
20190102202 | METHOD AND APPARATUS FOR DISPLAYING HUMAN MACHINE INTERFACE - A method and apparatus that generate a human machine interface are provided. The method includes detecting at least one from among a number of one or more users and a location of the one or more users, generating a human machine interface based on the at least one from among the number of the one or more users and the location of the one or more users, and outputting the generated human machine interface. | 2019-04-04 |
20190102203 | SYSTEMS, METHODS, AND APPARATUS THAT PROVIDE MULTI-FUNCTIONAL LINKS FOR INTERACTING WITH AN ASSISTANT AGENT - Methods, apparatus, systems, and computer-readable media are provided for introducing a user to functions of various applications through interactions with an assistant agent. The assistant agent can correspond to an assistant application that can provide a user interface with multiple selectable elements, each of which can correspond to a separate application. When a user selects one of the selectable elements, a function of an application can be demonstrated to the user, in order that the user might become more familiar with functions of the application. In some implementations, a portion of the selectable element can be selected to cause information about the application to be presented to the user. This allows the user to have the option to try out or learn about an application before investing computational resources through downloading and installing the entire application. | 2019-04-04 |
20190102204 | Application Profiling via Loopback Methods - A system, method, and computer-readable medium are disclosed for performing a dynamic application optimization operation, comprising: instrumenting a plurality of system parameters of a client information handling system for monitoring; instructing a user to execute a particular application on the client information handling system; obtaining a plurality of samples of the plurality of system parameters; performing a machine learning operation using the plurality of samples of the plurality of system parameters, the machine learning operation training a machine learning model to generate a profile for the particular application and an operating mode of the particular application; applying the profile to the client information handling system to provide a new information handling system configuration, the new information handling system configuration optimizing the information handling system for the particular application. | 2019-04-04 |
20190102205 | METHOD AND APPARATUS FOR SECURITY CERTIFIED SMART CARD OS DIVERSIFICATION SYSTEM - Various embodiments relate to a method and apparatus for embedding an operating system in a smart card product, which is certified and which derives multiple variants from the operating system, the method including the steps of: certifying a target of evaluation, the target of evaluation including an OS core mask and a plurality of components which includes OS components and plugin placeholders; building, by an image builder tool, romized content and runtime content including at least one of the plurality of components; and customizing which of the plurality of components to include on the smart card product. | 2019-04-04 |
20190102206 | LEVERAGING MICROSERVICE CONTAINERS TO PROVIDE TENANT ISOLATION IN A MULTI-TENANT API GATEWAY - A system can host APIs for a plurality of different tenants and receive requests from many different client devices. As requests are received, an associated tenant can be identified, and a router can determine if a container instance is available to service the request. A container instance may be an empty container instance including an internal endpoint, a Web server, and a runtime environment. An empty container instance can be unassociated with a particular tenant. To associate a container instance with a tenant, a data store, such as a key-value data store, can retrieve configuration files that turn the tenant-agnostic container instance into a container instance that is associated with a particular tenant and includes configuration code to perform the requisite API functions. The pool of empty and populated containers can be managed efficiently. | 2019-04-04 |
20190102207 | AUTHORIZING A BIOS POLICY CHANGE FOR STORAGE - Examples herein disclose receiving a basic input output system (BIOS) policy change and authorizing the BIOS policy change. Upon the authorization of the BIOS policy change, a first copy of the BIOS policy is stored in a first memory accessible by a central processing unit. Additionally, a second copy of the BIOS policy change is transmitted for storage in a second memory electrically isolated from the central processing unit. | 2019-04-04 |
20190102208 | METHOD AND APPARATUS FOR AUTOMATIC PROCESSING OF SERVICE REQUESTS ON AN ELECTRONIC DEVICE - Embodiments of the present application provide methods and apparatus capable of recording operation/display events on a mobile device used to fulfill a first service request entered via a user interface of the mobile device. A recorded sequence of operation/display events is used to build a script file, which is associated with the service request or a template extracted from the service request. When a second service request that is the same as or similar to the first service request is received again on the same or a different mobile device, the script file associated with the service request is retrieved and provided to the mobile device, which executes the script file to automatically bring up a series of operation/display events to fulfill the service request. Thus, a user of the mobile device does not need to manually navigate through the sequence of operation/display events again in order to fulfill the service request. | 2019-04-04 |
20190102209 | MULTITENANT HOSTED VIRTUAL MACHINE INFRASTRUCTURE - A multi-tenant virtual machine infrastructure (MTVMI) allows multiple tenants to independently access and use a plurality of virtual computing resources via the Internet. Within the MTVMI, different tenants may define unique configurations of virtual computing resources and unique rules to govern the use of the virtual computing resources. The MTVMI may be configured to provide valuable services for tenants and users associated with the tenants. | 2019-04-04 |
20190102210 | LOG MANAGEMENT DEVICE AND LOG MANAGEMENT METHOD - A log management device includes one or more memories configured to store management information indicating each relationship between identification information of each virtual machine and identification information regarding each environment in which each virtual machine operates, and one or more processors coupled to the one or more memories and configured to, by referring to the management information, obtain a first log regarding a first environment operating a first virtual machine on the basis of identification information of the first virtual machine, perform generation of a second log in which specific information included in the first log is changed to a specific identifier regarding the first environment, and output the second log in response to receiving a request for a log regarding the first virtual machine. | 2019-04-04 |
20190102211 | COMPUTER SYSTEM PROVIDING USER SPECIFIC SESSION PRELAUNCH FEATURES AND RELATED METHODS - A virtualization server may include a memory and a processor cooperating therewith to determine when electronic devices associated with respective different users enter within a virtual geographic boundary, and pre-launch unauthenticated user-specific virtual computing sessions for respective users based upon determining that the electronic devices for the respective users have entered within the virtual geographic boundary. The processor may further authenticate the pre-launched user-specific virtual computing sessions based upon unique identifiers associated with the users and the respective electronic devices being within the virtual geographic boundary. | 2019-04-04 |
20190102212 | PLATFORM INDEPENDENT GPU PROFILES FOR MORE EFFICIENT UTILIZATION OF GPU RESOURCES - Disclosed are various examples of platform independent graphics processing unit (GPU) profiles for more efficient utilization of GPU resources. A computing device can identify a platform independent configuration of a virtual machine, such as one made by an administrator, that indicates that a virtual graphics processing unit (vGPU) is to be utilized in execution, where the configuration comprises a graphics computing requirement for the virtual machine. The computing device can identify one or more hosts available in a computing environment to place the virtual machine, where each of the hosts comprises at least one GPU. The computing device can identify the most suitable one of the hosts to place the virtual machine based at least in part on the graphics computing requirement and whether a preferred graphics card was specified. | 2019-04-04 |
20190102213 | INTELLIGENT SELF-CONFIGURING DECISION METHOD FOR THE CLOUD RESOURCES MANAGEMENT AND ORCHESTRATION - According to an embodiment of the invention, a method is provided for reducing the monitored-data load and the processing load/delay on a cloud management system (CMS). The method includes applying a prediction process to provide predicted key performance indicator (KPI) values for a plurality of VMs managed by the CMS during a first monitoring epoch; collecting, during the first monitoring epoch, observed KPI values for the plurality of VMs managed by the CMS; assessing the accuracy of the prediction process by way of calculating, according to a reward function, reward values for the plurality of VMs based on a deviation between the observed KPI values and the predicted KPI values; calculating a monitoring frequency for collecting monitoring information during a second monitoring epoch based on the reward values; and collecting the monitoring information during the second monitoring epoch according to the calculated monitoring frequency. | 2019-04-04 |
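A rough Python sketch of the reward-driven monitoring loop described above, under the assumption (not stated in the abstract) that reward decreases linearly with relative prediction deviation and that well-predicted VMs are monitored less frequently:

```python
def reward(observed, predicted):
    """Reward in [0, 1]: 1.0 means a perfect KPI prediction."""
    deviation = abs(observed - predicted) / max(abs(observed), 1e-9)
    return max(0.0, 1.0 - deviation)

def next_monitoring_interval(rewards, base_interval=10.0):
    """Well-predicted VMs (high average reward) are sampled less often,
    lowering the monitored-data load on the CMS."""
    avg = sum(rewards) / len(rewards)
    return base_interval * (1.0 + avg)

# One perfect and one poor prediction across the first epoch:
r = [reward(100.0, 100.0), reward(100.0, 50.0)]
interval = next_monitoring_interval(r)
# → 17.5
```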
20190102214 | CONTAINER MANAGEMENT APPARATUS, CONTAINER MANAGEMENT METHOD, AND NONVOLATILE RECORDING MEDIUM - A host computer | 2019-04-04 |
20190102215 | VIRTUAL MACHINE MIGRATION - Migrating servers from client networks to virtual machines (VMs) on a provider network. A migration appliance is installed or booted on the client network, and a migration initiator is instantiated on the provider network. A VM and associated volumes are instantiated on the provider network. The initiator sends a request for a boot sector to the appliance; the appliance reads the blocks from a volume on the client network, converts the blocks to a format used by the VM, and sends the blocks to the initiator. The initiator boots the VM using the boot sector and the VM begins execution. The initiator then retrieves all data blocks for the VM from volumes on the client network via the appliance, stores the data to the volumes on the provider network, and fulfills requests from the VM from either local volumes or the remote volumes via the appliance. | 2019-04-04 |
20190102216 | Automatically Limiting Repeated Checking On Completion Of A Command Without Relinquishing A Processor - A process or thread is implemented to issue a command which executes without use of a processor that issues the command, retain control of the processor to check whether the issued command has completed, and when the issued command has not completed repeat the checking without relinquishing the processor, until a limiting condition is satisfied. The limiting condition may be determined specifically for a current execution of the command, based on one or more factors, such as durations of executions of the command after start of the process or thread and/or an indicator of delay in a current execution of the command. When the limiting condition is satisfied, the processor is relinquished by the process or thread issuing a sleep command, after setting an interrupt. After the command completes, the limiting condition is determined anew based on the duration of the current execution, for use in a next execution. | 2019-04-04 |
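The bounded polling pattern described above can be sketched in Python; the limiting condition is reduced here to a simple check count, whereas the abstract derives it dynamically from prior and current execution durations:

```python
def wait_for_completion(is_done, limit_checks):
    """Poll for command completion without relinquishing the processor,
    up to limit_checks attempts; returns True if completion was seen."""
    checks = 0
    while checks < limit_checks:
        if is_done():
            return True
        checks += 1
    # Limiting condition satisfied: the caller would now set an
    # interrupt and issue a sleep command to relinquish the processor.
    return False

# A command that "completes" on the third check:
state = {"n": 0}
def probe():
    state["n"] += 1
    return state["n"] >= 3

done = wait_for_completion(probe, limit_checks=10)
# → True, after 3 checks
```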
20190102217 | Method and Apparatus for Determination of Slot-Duration in Time-Triggered Control System - A method for a determination of the optimal duration of a time slot for computational actions in a time-triggered controller. The controller includes a sensor subsystem, a computational subsystem, an actuator subsystem, and a time-triggered communication system. The time-triggered communication system is placed between the sensor subsystem, the computational subsystem, the actuator subsystem, and a monitor subsystem. An anytime algorithm is executed in the computational subsystem. A plurality of execution slot durations of the anytime algorithm is probed during the development phase, starting from the minimum execution slot duration and increasing the slot duration by the execution slot granularity until the maximum execution slot duration is reached. For each of the execution slot durations, a multitude of frames is executed in a destined application environment. In each frame, the computational subsystem calculates imprecise anticipated values of observable state variables by interrupting execution of the anytime algorithm at the end of the provided execution slot duration, using data received from the sensor subsystem at the beginning of the frame. | 2019-04-04 |
20190102218 | AUTOMATIC SYNOPSIS GENERATION FOR COMMAND-LINE INTERFACES - Aspects of the disclosure provide for mechanisms for automatically generating synopsis data for command-line commands. A method of the disclosure includes processing source code implementing a command; identifying, in view of the processing, a plurality of command options related to the command; generating, by a processing device, relationship data representing dependencies of the command options; and generating, by the processing device, synopsis data for the command in view of the relationship data. In some embodiments, the relationship data may include a graph, the graph including an arc that associates a first node of the graph with a second node of the graph. The first node may correspond to a first command option, and the second node may correspond to a second command option. | 2019-04-04 |
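A toy Python rendering of the graph-to-synopsis step described above, assuming a bracketed synopsis convention; the command name, options, and output format are hypothetical and not taken from the patent:

```python
def build_synopsis(command, arcs):
    """Render a synopsis string from option-dependency arcs:
    an arc (a, b) means option b depends on option a."""
    deps = {}
    for parent, child in arcs:
        deps.setdefault(parent, []).append(child)
    parts = [command]
    for parent in sorted(deps):
        children = " ".join(sorted(deps[parent]))
        parts.append(f"[{parent} [{children}]]")
    return " ".join(parts)

# --log-file only makes sense when --verbose is given:
synopsis = build_synopsis("mycmd", [("--verbose", "--log-file")])
# → "mycmd [--verbose [--log-file]]"
```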
20190102219 | PROGRAM EXECUTING APPARATUS AND PROGRAM EXECUTION METHOD - According to one embodiment, a program executing apparatus includes an event management portion to configure migration timing of the program on the basis of a state of the program being executed at reception of a migration request from the program, and a state transmission portion to migrate state information of the program on the basis of the migration timing configured in the event management portion. | 2019-04-04 |
20190102220 | JOB PROCESSING IN QUANTUM COMPUTING ENABLED CLOUD ENVIRONMENTS - A compatibility is ascertained between a configuration of a quantum processor (q-processor) of a quantum cloud compute node (QCCN) in a quantum cloud environment (QCE) and an operation requested in a first instruction in a portion (q-portion) of a job submitted to the QCE, the QCE including the QCCN and a conventional compute node (CCN), the CCN including a conventional processor configured for binary computations. In response to the ascertaining, a quantum instruction (q-instruction) is constructed corresponding to the first instruction. The q-instruction is executed using the q-processor of the QCCN to produce a quantum output signal (q-signal). The q-signal is transformed into a corresponding quantum computing result (q-result). A final result is returned to a submitting system that submitted the job, wherein the final result comprises the q-result. | 2019-04-04 |
20190102221 | THREAD SCHEDULING USING PROCESSING ENGINE INFORMATION - In an embodiment, a processor includes a plurality of processing engines (PEs) to execute threads, and a guide unit. The guide unit is to: monitor execution characteristics of the plurality of PEs and the threads; generate a plurality of PE rankings, each PE ranking including the plurality of PEs in a particular order; and store the plurality of PE rankings in a memory to be provided to a scheduler, the scheduler to schedule the threads on the plurality of PEs using the plurality of PE rankings. Other embodiments are described and claimed. | 2019-04-04 |
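A minimal Python sketch of ranking-driven scheduling as described above; the guide unit's monitored execution characteristics are reduced to a single score per PE, and all names are illustrative:

```python
def rank_pes(pe_metrics):
    """Order processing engines by a monitored performance metric
    (higher is better), as the guide unit's ranking would."""
    return sorted(pe_metrics, key=pe_metrics.get, reverse=True)

def schedule(thread, ranking, busy):
    """Place the thread on the highest-ranked idle PE."""
    for pe in ranking:
        if pe not in busy:
            return pe
    return None

ranking = rank_pes({"pe0": 0.7, "pe1": 0.9, "pe2": 0.4})
chosen = schedule("t0", ranking, busy={"pe1"})
# ranking → ["pe1", "pe0", "pe2"]; chosen → "pe0"
```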
20190102222 | SYSTEMS AND METHODS DEFINING THREAD SPECIFICATIONS - Methods and systems are disclosed for executing tasks in a partially out-of-order execution environment. Input is received indicating a task and task type for execution within an environment. Functions associated with the task and type of task may be selected. An instruction may be generated for each function indicating that the function is configured for static scheduling or dynamic scheduling. A schedule for instantiating each function may be generated, where functions configured for static scheduling are scheduled for instantiation according to a position of the function within the list and functions configured for dynamic scheduling are scheduled for instantiation at runtime based on an environment in which the function is instantiated and a position of the function of the subset of the set of functions within the list. A thread specification may then be generated using the functions and list. The thread specification may be transmitted to remote devices. | 2019-04-04 |
20190102223 | System, Apparatus And Method For Real-Time Activated Scheduling In A Queue Management Device - In one embodiment, a hardware queue manager is to receive tasks from a plurality of producer threads and allocate the tasks to a plurality of consumer threads. The hardware queue manager may include: a plurality of input queues each associated with one of the plurality of producer threads, each of the plurality of input queues having a plurality of entries to store a queue element associated with a task, the queue element including a task portion and timing information associated with the task; and an arbiter to select a consumer thread of the plurality of consumer threads to receive a task and select the task from a plurality of tasks stored in the plurality of input queues, based at least in part on the timing information of the queue element associated with the task. Other embodiments are described and claimed. | 2019-04-04 |
20190102224 | TECHNOLOGIES FOR OPPORTUNISTIC ACCELERATION OVERPROVISIONING FOR DISAGGREGATED ARCHITECTURES - Technologies for opportunistic acceleration overprovisioning for disaggregated architectures include a compute device. The compute device includes accelerator devices and a management logic unit. The management logic unit is to receive a plurality of job execution requests, each job execution request including a job requested to be accelerated received from an orchestrator server. The management logic unit is also to determine one or more job parameters of each requested job based on the corresponding job execution request, select an accelerator device of the compute device to execute each job based at least in part on the job parameters of the corresponding job, determine, for each job, whether one or more kernels are to be registered on the corresponding accelerator device selected for the corresponding job to enable the corresponding accelerator device to execute the job, register, in response to a determination that the one or more kernels are to be registered, the one or more kernels on the corresponding accelerator device, and schedule, for each accelerator device of the compute device, the kernels of the corresponding accelerator device based on a kernel prediction. | 2019-04-04 |
20190102225 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM STORING PROGRAM - An information processing device includes a processor. The processor is configured to decide a deletion deadline of a second environment based on a predetermined condition relating to an application having a first environment that is a production environment and the second environment that is a staging environment, and delete the second environment when the decided deletion deadline arrives. | 2019-04-04 |
20190102226 | DYNAMIC NODE REBALANCING BETWEEN CONTAINER PLATFORMS - A method may include deploying a plurality of container pods to a plurality of container nodes in a container environment. Each of the plurality of container pods may include one or more services. Each of the plurality of container nodes may include one or more container pods. The plurality of container pods may be deployed to the plurality of container nodes based on initial characterizations of usage factors for each of the plurality of container pods. The method may also include monitoring actual usage factors for each of the plurality of container pods after deployment to the plurality of container nodes; identifying one or more container pods in the plurality of container pods that deviate from their initial characterizations of usage factors; and redistributing the one or more container pods throughout the plurality of container nodes based on the actual usage factors. | 2019-04-04 |
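The deviation-based redistribution test described above might look like the following Python sketch, assuming a single scalar usage factor per pod and a fixed deviation threshold (neither is specified in the abstract):

```python
def pods_to_rebalance(initial, actual, threshold=0.25):
    """Flag pods whose actual usage deviates from their initial
    characterization by more than the threshold fraction."""
    flagged = []
    for pod, expected in initial.items():
        observed = actual[pod]
        if abs(observed - expected) / expected > threshold:
            flagged.append(pod)
    return flagged

out = pods_to_rebalance(
    initial={"pod-a": 1.0, "pod-b": 2.0},
    actual={"pod-a": 1.1, "pod-b": 3.0},
)
# → ["pod-b"]  (50% deviation exceeds the 25% threshold)
```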
20190102227 | THREAD SCHEDULING USING PROCESSING ENGINE INFORMATION - In an embodiment, a processor includes a plurality of processing engines (PEs) to execute threads, and a guide unit. The guide unit is to: monitor execution characteristics of the plurality of PEs and the threads; generate a plurality of PE rankings, each PE ranking including the plurality of PEs in a particular order; and store the plurality of PE rankings in a memory to be provided to a scheduler, the scheduler to schedule the threads on the plurality of PEs using the plurality of PE rankings. Other embodiments are described and claimed. | 2019-04-04 |
20190102228 | UNIFIED WORK BACKLOG - Systems and methods are related to a global ranking for a unified list of tasks. From a plurality of work projects each having one or more tasks, a processor may receive a first set of selections of at least two work projects for generating a work backlog having a unified list of tasks. The processor may generate a list of potential tasks to include in the work backlog from the selected work projects. The processor may receive a second set of selections of one or more of the potential tasks to include in the work backlog. The processor may send signals to display the unified list of tasks of the work backlog based on the potential tasks selected. The unified list of tasks comprises at least two types of tasks from two different work projects having disparate priority metrics. | 2019-04-04 |
20190102229 | DYNAMIC PERFORMANCE BIASING IN A PROCESSOR - Technologies are provided in embodiments to dynamically bias performance of logical processors in a core of a processor. One embodiment includes identifying a first logical processor associated with a first thread of an application and a second logical processor associated with a second thread, obtaining first and second thread preference indicators associated with the first and second threads, respectively, computing a first relative performance bias value for the first logical processor based, at least in part, on a relativeness of the first and second thread preference indicators, and adjusting a performance bias of the first logical processor based on the first relative performance bias value. Embodiments can further include increasing the performance bias of the first logical processor based, at least in part, on the first relative performance bias value indicating a first performance preference that is higher than a second performance preference. | 2019-04-04 |
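A simple Python sketch of computing a relative performance bias from two thread preference indicators, as described above; the linear scaling is an assumption for illustration, not the patented method:

```python
def relative_bias(pref_a, pref_b, scale=10):
    """Relative performance bias for logical processor A, derived from
    how its thread's preference compares with the sibling thread's."""
    return (pref_a - pref_b) * scale

def adjust(current_bias, pref_a, pref_b):
    """Raise A's bias when its thread prefers higher performance
    than the sibling thread; lower it otherwise."""
    return current_bias + relative_bias(pref_a, pref_b)

new_bias = adjust(current_bias=50, pref_a=8, pref_b=5)
# → 80
```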
20190102230 | MANAGING SPLIT PACKAGES IN A MODULE SYSTEM - Techniques for managing split packages in a module system are disclosed. A code conflict exists between two packages, in different modules, based at least in part on the packages being named identically and including executable code. No code conflict exists between two other identically-named packages, in different modules, based at least in part on the packages not including any executable code. Managing split packages may be based, at least in part, on module membership records associated with the modules. | 2019-04-04 |
20190102231 | ACQUISITION AND MAINTENANCE OF COMPUTE CAPACITY - A system for providing low-latency computational capacity from a virtual compute fleet is provided. The system may be configured to maintain a plurality of virtual machine instances on one or more physical computing devices, wherein the plurality of virtual machine instances comprises a first pool comprising a first sub-pool of virtual machine instances and a second sub-pool of virtual machine instances, and a second pool comprising virtual machine instances used for executing one or more program codes thereon. The first sub-pool and/or the second sub-pool may be associated with one or more users of the system. The system may be further configured to process code execution requests and execute program codes on the virtual machine instances of the first or second sub-pool. | 2019-04-04 |