24th week of 2017 patent application highlights part 43 |
Patent application number | Title | Published |
20170168769 | FLEXIBLE DISPLAY DEVICE - A flexible display device including a memory; a display including an extendable and reducible screen; a sensor configured to detect a size of the screen; and a controller configured to display information on the extended screen, and store the information in the memory in response to the size of the screen being reduced while the information is selected. | 2017-06-15 |
20170168770 | ELECTRONIC DEVICE AND METHOD FOR ONBOARD DISPLAY CONTROL - A vehicle-mounted display control method and an electronic device thereof are disclosed, applied to a vehicle-mounted system, the vehicle-mounted system being connected to a portable terminal. The method includes: receiving interface information sent by the portable terminal, and displaying a virtual interface according to the interface information; receiving an input instruction via the virtual interface, and sending an operation instruction generated according to the input instruction to the portable terminal; and receiving and outputting operation data generated by the portable terminal according to the operation instruction. According to the present disclosure, resources are shared between the vehicle-mounted system and the portable terminal, the display and play effects of the portable terminal are improved, and the functionality of the vehicle-mounted system is extended. | 2017-06-15 |
20170168771 | SONG PLAYING PROGRESS CONTROL METHOD AND ELECTRONIC DEVICE - Disclosed are a song playing progress control method and an electronic device, wherein the method includes the following steps: detecting whether a touch operation occurs on a current playing progress bar; if a touch operation occurs on a current playing progress bar, determining the type of the touch operation, the types including click touch and slide; if the type of the touch operation is click touch, determining whether the time of the touch operation is less than a preset time threshold value; if the time of the touch operation is less than a preset time threshold value, switching to a time point corresponding to the position where the touch operation is located to conduct playing; otherwise, detecting and determining whether the strength of the touch operation is less than a preset strength threshold value; if the strength of the touch operation is less than a preset strength threshold value, previewing the lyrics at the time point corresponding to the position where the touch operation is located; otherwise, switching to display the lyrics at the time point corresponding to the position where the touch operation is located. | 2017-06-15 |
20170168772 | ELECTRONIC DEVICE AND METHOD OF OPERATING THE SAME - Provided is an electronic device including a communicator including communication circuitry configured to perform wireless communication with a peripheral device and to receive information indicating at least one audio connection protocol for connecting the peripheral device to the electronic device; a processor; a memory; and one or more programs comprising instructions, stored in the memory, which, when executed by the processor, cause the processor to perform operations corresponding to the instructions, the one or more programs including instructions for selecting one of the at least one audio connection protocol based on pre-set priorities of the audio connection protocols; and instructions for outputting an audio signal to the peripheral device based on the selected audio connection protocol. | 2017-06-15 |
20170168773 | MODIFYING HAPTIC FEEDBACK PROVIDED TO A USER TO ACCOUNT FOR CHANGES IN USER PERCEPTION OF HAPTIC FEEDBACK - A system modifies data generating haptic feedback to account for changes in user perception of haptic feedback. The system identifies haptic data and determines an estimated amplitude of haptic feedback corresponding to a portion of the haptic data. Responsive to the estimated amplitude of the haptic feedback corresponding to the portion of the haptic data exceeding a threshold value, a refractory period is determined that will occur after haptic feedback corresponding to the portion of the haptic data is applied to the user. The portion of the haptic data is provided to an input interface, and a set of haptic data associated with times within a duration of the refractory period from the identified haptic data is removed to form an adjusted data set that is provided to the input interface to provide haptic feedback to the user in accordance with the adjusted haptic data set. | 2017-06-15 |
20170168774 | IN-VEHICLE INTERACTIVE SYSTEM AND IN-VEHICLE INFORMATION APPLIANCE - To improve convenience for a user. An in-vehicle information appliance | 2017-06-15 |
20170168775 | Methods and Apparatuses for Performing Multiplication - In a novel computation device, a plurality of partial product generators is communicatively coupled to a binary number multiplier. The binary number is partitioned in the computation device into non-overlapping subsets of binary bits, and each subset is coupled to one of the plurality of partial product generators. Each partial product generator, upon receiving a subset of binary bits representing a number, generates a multiplication product of the number and a predetermined constant. The multiplication products from all partial product generators are summed to generate the final product of the predetermined constant and the binary number. The partial product generators are constructed from logic gates and wires connecting the logic gates, including an AND gate. The partial product generators are free of memory elements. | 2017-06-15 |
20170168776 | Extracting Entropy From Mobile Devices To Generate Random Numbers - Embodiments include methods, systems, and computer program products for extracting entropy from mobile devices to generate random numbers. In some embodiments, first vibration data may be received from a first device. Second vibration data may be received from a second device. A first piece of entropy data may be generated using the first vibration data and a second piece of entropy data may be generated using the second vibration data. The first piece of entropy data and the second piece of entropy data may be aggregated. The first piece of entropy data and the second piece of entropy data may be stored in an entropy pool. | 2017-06-15 |
20170168777 | INTEGRATED DEVELOPMENT TOOL FOR AN INTERNET OF THINGS (IOT) SYSTEM - A system and method are described for an IoT integrated development tool. For example, one embodiment of an apparatus comprises: an Internet of Things (IoT) development application comprising a GUI through which a developer is to specify a configuration for a new IoT device; a development database comprising configuration data related to different IoT device configurations, the IoT development application to utilize the data in the development database based on the configuration specified by the developer for the new IoT device; an IoT device engine to generate an IoT device profile responsive to the development application specifying input/output functions to be performed by the new IoT device; a client app engine to generate a user experience (UX) profile responsive to the development application specifying features of a client app or application related to operation of the new IoT device; and an IoT service engine to generate a cloud application programming interface (API) profile responsive to the development application specifying features of an IoT service related to operation of the new IoT device. | 2017-06-15 |
20170168778 | DETERMINING THE IDENTITY OF SOFTWARE IN SOFTWARE CONTAINERS - One or more processors scan a first software container template for one or more identities of software present on a first software container associated with the first software container template. One or more processors generate a map of the one or more identities of software present on the first software container. The one or more identities of software present on the first software container are mapped with one or both of: an identifier of the first software container template and an identifier of the first software container associated with the first software container template. | 2017-06-15 |
20170168779 | AUTOMATED GENERATION OF MEMORY CONSUMPTION AWARE CODE - Techniques generate memory-optimization logic for concurrent graph analysis. A computer analyzes domain-specific language logic that analyzes a graph having vertices and edges. The computer detects parallel execution regions that create thread locals. Each thread local is associated with a vertex or edge. For each parallel region, the computer calculates how much memory is needed to store one instance of each thread local. The computer generates instrumentation that determines how many threads are available and how many vertices and edges will create thread locals. The computer generates tuning logic that determines how much memory is originally needed for the parallel region based on how much memory is needed to store the one instance, how many threads are available, and graph size. The tuning logic detects a memory shortage based on the original amount of memory needed exceeding how much memory is available and accordingly adjusts the execution of the parallel region. | 2017-06-15 |
20170168780 | AUTOMATIC PROGRAM SYNTHESIS USING MONADIC SECOND-ORDER LOGIC - A method is provided for synthesizing a computer program by a hardware processor and a program synthesizer. The method includes representing program components and registers by position set variables and constraints on the position set variables using Monadic Second-Order Logic. The method further includes determining potential combinations of the program components by solving the constraints. The method also includes forming the computer program from at least one of the potential combinations. | 2017-06-15 |
20170168781 | OPTIMIZING LATEST USER DEFINED CLASS LOADER - A computer-implemented method for class load optimizing. The method determines whether a caller method within the class has a specific signature call using the context of the class. The method determines a callee method within the class using the context of the class. Furthermore, the method retrieves a class object of the class and converts the callee method to a second method, in response to the caller method having the specific signature, the callee method being of the specific signature, and the callee method being the first argument of the caller method. | 2017-06-15 |
20170168782 | SYSTEM AND METHOD FOR CREATING A UNIVERSALLY COMPATIBLE APPLICATION DEVELOPMENT SYSTEM - A software application development system and method for producing, delivering and displaying scalable, adaptable, interchangeable software applications that provide universal consistency, operability and compatibility on any hardware and/or operating system of any digital device. The system and method provide, through a distinctive Hierarchical Access Navigation and Menu System, seamless integration of a plurality of software applications, external applications, web pages, and URLs without exiting a defined core application environment. The system uses Architectural Blueprints and Composite Hyper Displays to determine the composition and geometry of User Access (UA), the User Interface (UI), the User Experience (UX) and the User Content (UC). | 2017-06-15 |
20170168783 | GENERATING LOGIC WITH SCRIPTING LANGUAGE IN SOFTWARE AS A SERVICE ENTERPRISE RESOURCE PLANNING - Systems and methods are described for converting business logic architecture from a native language (e.g., processor compiled code) into a scripting language (e.g., scripted code) for software as a service (SaaS) delivery methods for enterprise resource planning. The systems and methods may include generating a plurality of business process patterns representing business logic associated with one or more of a plurality of business objects, obtaining a plurality of code portions that correspond to one or more of the plurality of business process patterns, the plurality of code portions being coded in a scripting language and stored in a script repository, defining at least one extension point for each business process pattern. Each extension point may represent an interface, within the business logic, in which to insert one or more of the plurality of code portions into processor-compiled architecture in a software application. | 2017-06-15 |
20170168784 | METHOD AND DEVICE FOR VISUALLY IMPLEMENTING SOFTWARE CODE - The present invention relates to a method and device for visually implementing a software code. To this end, a method for visually implementing a software code according to the present invention comprises the steps of: generating, by a code block generation unit, a code block used for implementing a software code by the unit of block depending on a requirement and a function; and setting, by a code block setting unit, a code block attribute or an internal attribute code included in the code block on the basis of information input from a user, wherein the step of setting the code block attribute or the internal attribute code comprises the step of including function information on the code block, description information on the function information, and the internal attribute code in the code block. | 2017-06-15 |
20170168785 | COMPUTER PROGRAMMING SYSTEM AND METHOD - A method of computer programming includes the steps of making a writable system catalog, and developing grammar by building an abstract grammar tree. Another method of computer programming involves use of a data model and a user interface, and includes the step of decoupling the user interface from the data model. | 2017-06-15 |
20170168786 | Source Code Generation From Prototype Source - Methods, systems, and computer program products for generating source code from a compilable annotated source code prototype are disclosed. A computer-implemented method may include receiving two or more schemas that each describe attributes of respective source code modules to be generated by a source code generator, receiving a compilable source code prototype comprising annotations associated with the source code generator to generate each of the respective source code modules, detecting the annotations from the source code prototype as part of generating the respective source code modules, determining that one or more of the annotations from the source code prototype correspond with one or more attributes of a schema associated with one of the respective source code modules to be generated, and generating each of the respective source code modules based on the annotations from the source code prototype in view of attributes described in an associated schema. | 2017-06-15 |
20170168787 | OPTIMIZED COMPILING OF A TEMPLATE FUNCTION - A template function is received. The template function includes one or more data types. A single abstract instantiation of the template function is created. An abstract internal descriptor for each data type is created. A map set for each abstract internal descriptor is created. The number of instantiations required and the type of instantiation required is provided. A finished object is created using each map set. The finished object is a translation of the intermediate representation into assembly code. | 2017-06-15 |
20170168788 | OPTIMIZED COMPILING OF A TEMPLATE FUNCTION - A template function is received. The template function includes one or more data types. A single abstract instantiation of the template function is created. An abstract internal descriptor for each data type is created. A map set for each abstract internal descriptor is created. The number of instantiations required and the type of instantiation required is provided. A finished object is created using each map set. The finished object is a translation of the intermediate representation into assembly code. | 2017-06-15 |
20170168789 | OPTIMIZED COMPILING OF A TEMPLATE FUNCTION - A template function is received. The template function includes one or more data types. A single abstract instantiation of the template function is created. An abstract internal descriptor for each data type is created. A map set for each abstract internal descriptor is created. The number of instantiations required and the type of instantiation required is provided. A finished object is created using each map set. The finished object is a translation of the intermediate representation into assembly code. | 2017-06-15 |
20170168790 | PARALLELIZATION METHOD, PARALLELIZATION TOOL, AND IN-VEHICLE APPARATUS - A method is for generating a parallel program for a multicore microcomputer from processes in a single program for a single core. The method includes an extraction procedure, an association procedure, and an analysis procedure. The extraction procedure extracts (i) an extracted address of an accessed data item, which is among data items stored in a storage area together with the processes and accessed when each process is executed, and (ii) an extracted symbol name of the accessed data item. The association procedure associates an associated address in the storage area storing the accessed data item of the extracted symbol name with the extracted symbol name. The analysis procedure analyzes a dependency between each process based on the extracted address and the associated address, and determines that two processes accessing an identical address have a dependency while determining that two processes not accessing an identical address have no dependency. | 2017-06-15 |
20170168791 | REARRANGEABLE CODE OPTIMIZATION MODULES - Disclosed are ways to flexibly arrange, rearrange, and execute optimization modules for program code in user-customizable sequences. In various embodiments, computer programmers can select an order of multiple standalone optimizers that each perform an optimization function on program code, forming a pipeline of a series of optimization modules. The pipeline can be modified by, for example, adding, removing, rearranging, repeating, and/or replacing optimization modules. | 2017-06-15 |
20170168792 | SPACE AND TIME AWARE ORGANIZATION AND ISOLATION OF COMPONENTS IN REAL TIME SYSTEMS - A method includes obtaining, by a first processor, a first software architecture description file and obtaining, by the first processor, a platform independent model file. The method also includes obtaining, by the first processor, a platform architecture definition file and performing, by the first processor, a first source-to-source compilation in accordance with the first software architecture description file, the platform independent model file, and the platform architecture definition file, to produce generated interface code. Additionally, the method includes generating, by the first processor, run time code, in accordance with the generated interface code and running, by a second processor in real time, the run time code. | 2017-06-15 |
20170168793 | SPLIT INSTALLATION OF A SOFTWARE PRODUCT - Various embodiments of systems and methods to provide split installation of a software product are described herein. In one aspect, a request for split installation of a software product is received. A pre-installation document corresponding to installation of the software product in a first phase of the split installation is generated and stored while at least one other application continues to run. The pre-installation document includes installation information of the software product. Further, one or more deployment units are cached for installation of the software product based on the installation information. The software product is installed by installing the one or more deployment units based on the pre-installation document in a second phase of the split installation. | 2017-06-15 |
20170168794 | Enhanceable Cross-Domain Rules Engine For Unmatched Registry Entries Filtering - Identification of unmatched registry entries may be provided, by scanning a file system, discovering software, collecting first attribute values of the discovered software, and receiving a plurality of filtering rules including a method and an attribute. The attribute may comprise a software-specific condition. The method may further comprise collecting native registry entries comprising second attribute values indicated by said attributes of at least one of said rules, and comparing said first attribute values of said discovered software with related ones of said second attribute values of said collected native registry entries. Then, the native registry entries may be grouped into two groups. The first group represents matched registry entries and the second group represents unmatched registry entries. The unmatched registry entries may be identified as unequivocal entries for further software discovery. Finally, the filtering rules may be applied against said collected registry entries based on said filtering method. | 2017-06-15 |
20170168795 | METHOD AND SYSTEM FOR MANAGING MICRO RELEASE AUTOMATION IN AN APPLICATION DELIVERY SYSTEM - The present invention relates to a method and system for managing micro release automation in an application delivery system. The method includes generating, by a build server, a second mount action version of a second installation package for at least one target process of a target application. The at least one target process operates based on a first installation package including a first mount action version. The second installation package is stored within a mount action repository that includes the first installation package. The method further includes broadcasting a mount action change event message for the at least one target process to at least one target node of the application delivery system. The mount action change event message is broadcasted based on detection of the second installation package in the mount action repository. The mount action change event message includes geospatial data of the at least one target node. | 2017-06-15 |
20170168796 | METHOD AND ELECTRONIC APPARATUS FOR TRANSFERRING APPLICATION PROGRAM FROM PC TO MOBILE APPARATUS - A method and electronic apparatus for transferring application program from PC to mobile apparatus including: receiving information that the mobile apparatus is already connected to the PC sent from the PC by a PC end application program installed at the PC and in an opened state; detecting whether the mobile apparatus is permitted to install the application program and whether a user agrees to install the application program; installing the application program, which is previously downloaded to the PC and related to the PC end application program, to the mobile apparatus by the PC end application program when the mobile apparatus is permitted to install the application program to the mobile apparatus and the user agrees to install the application program to the mobile apparatus, thereby transferring the application program from the PC end to the mobile apparatus end and avoiding the problems that existing transfer approaches affect user experience and have low transfer efficiency. | 2017-06-15 |
20170168797 | MODEL-DRIVEN UPDATES DISTRIBUTED TO CHANGING TOPOLOGIES - Examples of the disclosure enable updates to be deployed to a modifiable distributed topology. In one aspect, a computer-implemented method, system, and computer storage medium for distributing model-driven updates are provided. An instruction to define a task is received. A model defining a first instance of a plurality of components for a distributed cloud application is received, the plurality of components including a first component and an update component. The first instance of the plurality of components is deployed. The update component determines whether an update to the distributed cloud application is available. In response to determining that the update is available, a second template associated with the update is retrieved, with the second template defining a second instance of the first component. The second instance of the first component is deployed. | 2017-06-15 |
20170168798 | APPLYING PROGRAM PATCH SETS - Embodiments of the present invention disclose a method, computer program product, and system for applying a plurality of program patch sets on a plurality of computer programs. Virtual machines are prepared to be patchable, in response to a suspended computer program. Synchronized snapshots of the virtual machines are created. A plurality of binary code sections of each of the synchronized snapshots are determined. Symbol data information of each of the synchronized snapshots is analyzed, based on the program patch sets. The determined binary code sections are replaced with a set of patch data, based on the plurality of program patch sets, resulting in patched snapshots for each of the synchronized snapshots. Dependencies of the patch data are adjusted, based on the replaced plurality of binary code sections, and the execution of the computer program on each of the virtual machines is resumed using the plurality of patched snapshots. | 2017-06-15 |
20170168799 | SYSTEMS AND METHODS FOR MANAGING COMPUTER COMPONENTS - A computer-based method for managing a plurality of computer components in an organization is provided. The method is implemented using a Component Manager (CM) computing device. The method includes receiving, from a stakeholder computing device, component data for at least one computer component of the plurality of computer components. The method also includes storing the component data in a memory block in the memory device. The method further includes assigning a first lifecycle classification, a domain, and at least one stakeholder to the at least one computer component by updating the memory block in the memory device. The method also includes causing the stakeholder computing device to electronically display an interactive dashboard that includes a graphical representation of the at least one computer component. The method further includes prompting a stakeholder to update a component utilization scheme for the computer component, by electronically displaying the graphical representation. | 2017-06-15 |
20170168800 | Reporting Marine Electronics Data and Performing Software Updates on Marine Electronic Peripheral Devices - Various implementations described herein are directed to technologies for reporting data and updating marine electronic peripheral device software. A reporting function that captures and saves current settings information of a marine electronics device and peripheral devices in communication with the marine electronics device is provided. The reporting function further captures and saves current information pertaining to a network that facilitates communication between the marine electronics device and the peripheral devices. A software update function that updates the peripheral devices using a cloud server and at least one member of a group consisting of: a portable storage device, the marine electronics device, and a handheld computer device is provided. | 2017-06-15 |
20170168801 | VERSION CONTROL FOR CUSTOMIZED APPLICATIONS - A customer's VCS is set up to store files associated with an application having application versions. The customer's VCS includes a set of branches defined correspondingly to a set of systems of a customer change management landscape. A first branch comprises files of a first version of the application. A second version of the application is populated into the first branch. Existing customizations, modifications, and created runtime authoring objects during design time and runtime of the first version of the application are applied over the second version. The changes are submitted into the customer's VCS and an updated version is generated in the first branch. The updated version is transported to a second branch through merging the first branch and the second branch. When a request for deployment is received, a reference to the second branch pointing to the updated version of the application is provided. | 2017-06-15 |
20170168802 | DELEGATING DATABASE QUERIES - The disclosure is directed to pushing data updates to client computing devices (“clients”) in real-time. Clients can obtain data from a data storage layer by sending queries to the data storage layer that will return data compatible with the client's local data model. These queries are stored in a database and the identifier for the query (“query ID”) is used instead of the query itself. In the query stored in the database, a marker is used as a proxy for a content ID of the content to be retrieved. When querying, both the query ID and the content ID are passed to the data storage layer. The query stored with the query id is loaded, the marker is substituted with the content ID, and then executed. | 2017-06-15 |
20170168803 | METHOD AND APPARATUS FOR PERFORMING HITLESS UPDATE OF LINE CARDS OF A NETWORK DEVICE - A method in a first network device of performing a software update of a line card of a second network device without disruption to data traffic. The method includes causing a redundant control plane component of the second network device to be updated according to the software update. The method continues with causing the second network device to instantiate, based on the software update, a line card virtual machine (LC VM) as a redundant data plane component for the line card. The method further includes causing a third network device to forward data traffic to both the line card and the LC VM of the second network device, and causing the second network device to update the line card according to the software update while processing the received data traffic using the LC VM. | 2017-06-15 |
20170168804 | APPLYING PROGRAM PATCH SETS - Embodiments of the present invention disclose a method, computer program product, and system for applying a plurality of program patch sets on a plurality of computer programs. Virtual machines are prepared to be patchable, in response to a suspended computer program. Synchronized snapshots of the virtual machines are created. A plurality of binary code sections of each of the synchronized snapshots are determined. Symbol data information of each of the synchronized snapshots is analyzed, based on the program patch sets. The determined binary code sections are replaced with a set of patch data, based on the plurality of program patch sets, resulting in patched snapshots for each of the synchronized snapshots. Dependencies of the patch data are adjusted, based on the replaced plurality of binary code sections, and the execution of the computer program on each of the virtual machines is resumed using the plurality of patched snapshots. | 2017-06-15 |
20170168805 | METHOD AND ELECTRONIC DEVICE FOR SYSTEM UPDATING - The embodiments of the disclosure disclose a method for system updating of an electronic device, the electronic device being in a screen locked state, wherein the method includes: receiving a system update message from a server end; reading the update type in the system update message; and, when the update type is an urgent or forced update, pushing system update information in a current lock screen application. With the embodiments of the disclosure, a user can be notified of system update information in time; even if the mobile phone is in a screen locked state, the user can still learn the system update information without unlocking the mobile phone, which greatly improves the system upgrade rate of mobile phones; moreover, a user can install an upgrade package updated by the system in time, ensuring that the mobile phone can deliver its best performance and patch system vulnerabilities in time. | 2017-06-15 |
20170168806 | METHOD AND ELECTRONIC DEVICE FOR MOBILE TERMINAL UPGRADE - Embodiments of the present disclosure provide a method and an electronic device for mobile terminal upgrading. The method includes: triggering to close a CPU upon application upgrade on a mobile terminal; maintaining a desktop system and triggering to close an application icon on the desktop system during restart of the CPU; and reloading the application icon to the desktop system upon restart of the CPU. | 2017-06-15 |
20170168807 | METHOD AND ELECTRONIC DEVICE FOR UPDATING APPLICATION PROGRAM - Disclosed are a method and an electronic device for updating an application program. The method includes: receiving implementation module updating information sent by a server, wherein the implementation module updating information carries an implementation module plugin needing to be updated in the application program; loading the implementation module plugin according to a base interface in a local application program, wherein the base interface comprises a plurality of functional interfaces, and different functional interfaces correspond to different implementation module plugins; and updating a corresponding implementation module plugin in the local application program by means of the implementation module plugin. | 2017-06-15 |
20170168808 | INFORMATION PROCESSING APPARATUS, METHOD FOR PROCESSING INFORMATION, AND INFORMATION PROCESSING SYSTEM - An information processing apparatus that executes second software capable of communicating with first software running on an information terminal, includes a receiver unit to receive version information of the first software from the information terminal; a function identification unit configured, if a newer version of the first software is obtainable, to refer to function information in which available functions are associated with the version information of the first software and the second software, respectively, so as to obtain first functions available with the first software after a version upgrade to the newer version, and second functions available with the current version of the second software, and to identify the functions commonly available between the first functions and the second functions; and a transmitter unit to transmit the identified available functions to the information terminal. | 2017-06-15 |
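The function-identification step described in 20170168808 above amounts to a set intersection keyed by software version. A minimal sketch, in which the shape of the `function_info` mapping and all names are assumptions of this illustration:

```python
def common_functions(function_info, first_sw_newest, second_sw_current):
    """Identify functions commonly available to both pieces of software.

    function_info maps (software, version) -> set of available functions.
    """
    first = function_info[("first", first_sw_newest)]    # after the version upgrade
    second = function_info[("second", second_sw_current)]  # current version
    return first & second  # only functions both sides support are offered
```

The intersection is what gets transmitted back to the information terminal, so the terminal never sees a function the other side cannot honor.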
20170168809 | TECHNOLOGIES FOR CUSTOMIZED CROWD-SOURCED FEATURES, AUTOMATED SAFETY AND QUALITY ASSURANCE WITH A TECHNICAL COMPUTING ENVIRONMENT - Technologies for customized crowd-sourced update and validation include a computing device having a technical computing environment (TCE)-based engine that receives user information from one or more user devices, executes a TCE model with the user information to generate behavior data of the TCE model, and generates a software update for the TCE model based on the behavior data. The TCE model may be a model for an autonomous system such as a self-driving vehicle, and the software update may be a safety update for the autonomous vehicle. The user information may include sensor data, such as distance detection sensor data. The computing device may transmit an incentive such as a software update, feature update, or safety software update to the user devices. The computing device may also receive information associated with the TCE model from one or more developer devices. Other embodiments are described and claimed. | 2017-06-15 |
20170168810 | TECHNIQUE FOR UPDATING SOFTWARE ON BOARD AN AIRCRAFT - An interface unit for use in updating software of an aircraft component on board an aircraft. The interface unit comprises a first interface to connect to a server providing one or more software updates for upload to the aircraft component, and a second interface to connect to the aircraft component. The interface unit further comprises a receiving component configured to receive a software update from the server via the first interface for upload to the aircraft component, and an uploading component configured to upload the received software update to the aircraft component via the second interface. | 2017-06-15 |
20170168811 | DEPLOYING UPDATES IN A DISTRIBUTED DATABASE SYSTEMS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for instrumentation and control of workloads in a massively parallel processing database. Deployment is in a cluster which mirrors the cluster of the database to be controlled. The system includes data publishing modules, action agents, rules processing modules, deployment managers, rule compilation and management tools. Together these provide a distributed, fault tolerant system for the automated rules-based control of work in a database cluster. For example, in deploying an update, a deployment manager pushes the update to one or more nodes and instructs each of the one or more nodes to restart in a bootstrap mode. The deployment manager generates a respective configuration package for each of the one or more nodes, and restarts each of the one or more nodes in a production mode. | 2017-06-15 |
20170168812 | Integration of non-supported dental imaging devices into legacy and proprietary dental imaging software - A method for integrating a non-supported dental imaging device into dental imaging software operates on a computer which is coupled to a display that is capable of displaying dental x-rays and dental photographs. An originally supported dental imaging device has an API binary file with an original filename accessible to the computer. The method includes the steps of creating a replacement alternate API binary file which contains equivalent functionality as the API binary file of the originally supported dental imaging device and placing the replacement alternate API binary file either onto or accessible to the computer. The replacement alternate API binary file has the same filename as the original filename of the API binary file of the originally supported dental imaging device. The method also includes the step of having the replacement alternate API binary file operated on by the dental imaging software by means of the computer. The dental imaging software is unaware that it is not communicating with the originally supported dental imaging device. The replacement alternate API binary file delivers image data acquired by the non-supported imaging device to the dental imaging software. | 2017-06-15 |
20170168813 | Resource Provider SDK - Embodiments provide a library that allows developers to very quickly build and deploy services or resource providers without having to interpret a complex cloud protocol specification. The SDK implements resource storage, automatically handles resource lifecycle, provides appropriate hooks to plug into external systems, facilitates handling of subscription-wide operations, implements complex flows such as moving of resources, enables appropriate security features, and creates necessary endpoints for the developer's service. | 2017-06-15 |
20170168814 | System and Method for Registration of a Custom Component in a Distributed Computing Pipeline - The present disclosure relates to system(s) and method(s) for registration of a custom component on a Stream Analytics Platform. The system is configured to receive a program file and one or more registration instructions corresponding to the custom component, from a primary user of the Stream Analytics Platform. Further, a program code may be extracted from the program file by parsing the program file. Further, the system is configured to register the program code as a registered custom component, on the Stream Analytics Platform based on the one or more registration instructions. Once registered, the registered custom component is available over a Graphical User Interface (GUI) of the Stream Analytics Platform. The system enables at least the primary user or a set of secondary users of the Stream Analytics platform to use the registered custom component, based on the one or more registration instructions, for designing a distributed processing pipeline. | 2017-06-15 |
20170168815 | AUTOMATICALLY EXPIRING OUT SOURCE CODE COMMENTS - Aspects include a method for expiring out source code comments. The method includes parsing source code to locate one or more comments. The method also includes, for each of the located one or more comments: determining whether the comment specifies expiration criteria; determining whether the expiration criteria meets an expiration threshold based on the comment specifying expiration criteria; and deleting the comment from the source code based on determining that the expiration criteria meets the expiration threshold, the deleting resulting in updated source code. The method further includes storing the updated source code. | 2017-06-15 |
20170168816 | AUTOMATICALLY EXPIRING OUT SOURCE CODE COMMENTS - Aspects include a method for expiring out source code comments. The method includes parsing source code to locate one or more comments. The method also includes, for each of the located one or more comments: determining whether the comment specifies expiration criteria; determining whether the expiration criteria meets an expiration threshold based on the comment specifying expiration criteria; and deleting the comment from the source code based on determining that the expiration criteria meets the expiration threshold, the deleting resulting in updated source code. The method further includes storing the updated source code. | 2017-06-15 |
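The comment-expiration method shared by the two applications above (parse comments, check an expiration criterion against a threshold, delete expired comments, store the updated source) can be sketched in a few lines. The `# expires: YYYY-MM-DD` tag format and the `expire_comments` helper are assumptions of this illustration, not the patents' actual syntax; the sketch also only handles whole-line comments:

```python
import re
from datetime import date

# Hypothetical expiration tag format: "# expires: YYYY-MM-DD".
EXPIRES_RE = re.compile(r"#\s*expires:\s*(\d{4})-(\d{2})-(\d{2})")

def expire_comments(source: str, today: date) -> str:
    """Delete comment lines whose expiration criteria meet the threshold (today)."""
    kept = []
    for line in source.splitlines():
        m = EXPIRES_RE.search(line)
        if m:
            expires = date(*map(int, m.groups()))
            if expires <= today:  # expiration threshold met: drop the comment
                continue
        kept.append(line)
    return "\n".join(kept)  # the "updated source code" to be stored
```

Comments with no expiration tag, or with a date still in the future, pass through unchanged.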
20170168817 | CONDITIONAL OPERATION IN AN INTERNAL PROCESSOR OF A MEMORY DEVICE - The present techniques provide an internal processor of a memory device configured to selectively execute instructions in parallel, for example. One such internal processor includes a plurality of arithmetic logic units (ALUs), each connected to conditional masking logic, and each configured to process conditional instructions. A condition instruction may be received by a sequencer of the memory device. Once the condition instruction is received, the sequencer may enable the conditional masking logic of the ALUs. The sequencer may toggle a signal to the conditional masking logic such that the masking logic masks certain instructions if a condition of the condition instruction has been met, and masks other instructions if the condition has not been met. In one embodiment, each ALU in the internal processor may selectively perform instructions in parallel. | 2017-06-15 |
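A software analogue of the per-lane conditional masking in 20170168817: every lane executes both instruction paths, and the per-lane mask (the condition result) selects which result commits, so no lane ever branches. All function names here are illustrative:

```python
def run_conditional(lanes, cond_fn, then_fn, else_fn):
    """Predicated execution across lanes: the mask, not a branch, picks each result."""
    mask = [cond_fn(x) for x in lanes]            # condition evaluated per lane
    then_results = [then_fn(x) for x in lanes]    # "taken" path runs in all lanes
    else_results = [else_fn(x) for x in lanes]    # "not taken" path runs in all lanes
    # Masking logic: commit the then-result where the condition held, else-result otherwise.
    return [t if m else e for m, t, e in zip(mask, then_results, else_results)]
```

This mirrors how the sequencer's toggled signal lets each ALU mask one set of instructions when the condition is met and the other set when it is not.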
20170168818 | OPERATION OF A MULTI-SLICE PROCESSOR WITH REDUCED FLUSH AND RESTORE LATENCY - Operation of a multi-slice processor that includes execution slices and load/store slices coupled via a results bus, including: for a target instruction targeting a logical register, determining whether an entry in a general purpose register representing the logical register is pending a flush; if the entry in the general purpose register representing the logical register is pending a flush: cancelling the flush in the entry of the general purpose register; storing the target instruction in the entry of the general purpose register representing the logical register, and if an entry in a history buffer targeting the logical register is pending a restore, cancelling the restore for the entry of the history buffer. | 2017-06-15 |
20170168819 | INSTRUCTION AND LOGIC FOR PARTIAL REDUCTION OPERATIONS - In one embodiment, a processor includes: a fetch logic to fetch instructions, the instructions including a partial reduction instruction; a decode logic to decode the partial reduction instruction and provide the decoded partial reduction instruction to one or more execution units; and the one or more execution units to, responsive to the decoded partial reduction instruction, perform a plurality of N partial reduction operations to generate a result array including N output data elements, where an input array comprises N lanes, and where each of the N partial reduction operations is to reduce a set of input data elements included in a corresponding lane of the N lanes. Other embodiments are described and claimed. | 2017-06-15 |
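The partial reduction semantics of 20170168819 — reduce each of N lanes independently, producing one output element per lane — can be modeled briefly. The even lane split and the use of addition as the reduction operation are assumptions of this sketch:

```python
def partial_reduce(input_array, n_lanes):
    """Reduce each lane of the input independently, producing one output per lane."""
    lane_len = len(input_array) // n_lanes  # assumes lanes of equal width
    result = []
    for lane in range(n_lanes):
        lane_elems = input_array[lane * lane_len:(lane + 1) * lane_len]
        result.append(sum(lane_elems))  # reduction operation (addition) within one lane
    return result  # result array of N output data elements
```

Unlike a full reduction, which would collapse the whole input to one value, the result keeps one element per lane.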
20170168820 | DATA PROCESSING - Data processing apparatus comprises vector processing circuitry to apply a vector processing instruction to data vectors having a data vector length, each data vector comprising a plurality of data items equal in number to the data vector length, the vector processing circuitry having circuitry defining a plurality of processing lanes, there being at least as many processing lanes as a maximum data vector length; and control circuitry to selectively vary the data vector length used by the vector processing circuitry amongst a plurality of possible data vector length values up to the maximum data vector length and to disable operation of a subset of the processing lanes so that the disabled subset of processing lanes are unavailable for use by the vector processing circuitry and there remain at least as many enabled processing lanes as the data vector length set by the control circuitry. | 2017-06-15 |
20170168821 | OPERATION OF A MULTI-SLICE PROCESSOR WITH SPECULATIVE DATA LOADING - Operation of a multi-slice processor that includes a plurality of execution slices and a plurality of load/store slices coupled via a results bus includes: retrieving, from the results bus into an entry of a register file of an execution slice, speculative result data of a load instruction generated by a load/store slice; and determining, from the load/store slice after expiration of a predetermined period of time, whether the result data is valid. | 2017-06-15 |
20170168822 | OPERATION OF A MULTI-SLICE PROCESSOR WITH SELECTIVE PRODUCER INSTRUCTION TYPES - Operation of a multi-slice processor including execution slices and load/store slices, where the load/store slices are coupled to the execution slices via a results bus and the results bus includes segments assigned to carry results of a different instruction type, includes: receiving a producer instruction that includes an identifier of an instruction type and an identifier of the producer instruction, including storing the identifier of the instruction type and the identifier of the producer instruction in an entry of a register; receiving a source instruction dependent upon the result of the producer instruction including storing, in an issue queue, the source instruction, the identifier of the instruction type of the producer instruction, and an identifier of the producer instruction; and snooping the identifier of the producer instruction only from the segment of the results bus assigned to carry results of the instruction type of the producer instruction. | 2017-06-15 |
20170168823 | HANDLING UNALIGNED LOAD OPERATIONS IN A MULTI-SLICE COMPUTER PROCESSOR - Handling unaligned load operations, including: receiving a request to load data stored within a range of addresses; determining that the range of addresses includes addresses associated with a plurality of caches, wherein each of the plurality of caches are associated with a distinct processor slice; issuing, to each distinct processor slice, a request to load data stored within a cache associated with the distinct processor slice, wherein the request to load data stored within the cache associated with the distinct processor slice includes a portion of the range of addresses; executing, by each distinct processor slice, the request to load data stored within the cache associated with the distinct processor slice; and receiving, over a plurality of data communications busses, execution results from each distinct processor slice, wherein each data communications busses is associated with one of the distinct processor slices. | 2017-06-15 |
20170168824 | AGE MANAGEMENT LOGIC - A system, method and computer program product for maintaining an age and validity of entries in a structure associated with a processor is disclosed. An age tracking matrix is created for the structure. Each row of the age tracking matrix corresponds to an entry of the structure and each column of the age tracking matrix corresponds to an entry of the structure. When initiating an entry: a row corresponding to the entry is determined and a field in the determined row that is on a diagonal of the matrix is marked. For each other field in the determined row, the values that are in a diagonal field that is in a same column of the field are copied into the field. A relative age of the entries is determined by counting a number of marked fields in a column of the age tracking matrix. | 2017-06-15 |
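The allocate-and-copy scheme of 20170168824 can be sketched in software. `AgeMatrix` and its method names are illustrative, and the sketch covers allocation and relative-age lookup only (validity clearing is omitted):

```python
class AgeMatrix:
    """Age tracking matrix: one row and one column per structure entry."""
    def __init__(self, n):
        self.n = n
        self.m = [[0] * n for _ in range(n)]

    def allocate(self, i):
        """Record entry i as the newest entry."""
        # Copy each diagonal field into the field of row i in the same column,
        # then mark the diagonal field of row i itself.
        for j in range(self.n):
            self.m[i][j] = self.m[j][j]
        self.m[i][i] = 1

    def relative_age(self, i):
        """Count marked fields in column i: a higher count means an older entry."""
        return sum(self.m[j][i] for j in range(self.n))
```

After allocating entries 0, 1, 2 in order, column 0 holds three marks and column 2 holds one, so counting marks per column recovers the allocation order without timestamps.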
20170168825 | AUXILIARY BRANCH PREDICTION WITH USEFULNESS TRACKING - According to an aspect, management of auxiliary branch prediction in a processing system including a primary branch predictor and an auxiliary branch predictor is provided. A congruence class of the auxiliary branch predictor is located based on receiving a primary branch predictor misprediction indicator corresponding to a mispredicted target address of the primary branch predictor. An entry is identified in the congruence class having an auxiliary usefulness level set to a least useful level with respect to one or more other entries of the congruence class. Auxiliary data corresponding to the mispredicted target address is installed into the entry. The auxiliary usefulness level of the entry is reset to an initial value based on installing the auxiliary data. | 2017-06-15 |
20170168826 | OPERATION OF A MULTI-SLICE PROCESSOR WITH REDUCED FLUSH AND RESTORE LATENCY - Operation of a multi-slice processor that includes execution slices and load/store slices coupled via a results bus, including: for a target instruction targeting a logical register, determining whether an entry in a general purpose register representing the logical register is pending a flush; if the entry in the general purpose register representing the logical register is pending a flush: cancelling the flush in the entry of the general purpose register; storing the target instruction in the entry of the general purpose register representing the logical register, and if an entry in a history buffer targeting the logical register is pending a restore, cancelling the restore for the entry of the history buffer. | 2017-06-15 |
20170168827 | SORTING DATA AND MERGING SORTED DATA IN AN INSTRUCTION SET ARCHITECTURE - A processing device includes a sorting module, which adds to each of a plurality of elements a position value of a corresponding position in a register, resulting in a plurality of transformed elements in corresponding positions. The plurality of elements include a plurality of bits. The sorting module compares each of the plurality of transformed elements to itself and to one another. The sorting module also assigns one of an enabled or disabled indicator to each of the plurality of the transformed elements based on the comparison. The sorting module further counts a number of the enabled indicators assigned to each of the plurality of the transformed elements to generate a sorted sequence of the plurality of elements. | 2017-06-15 |
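The compare-and-count scheme of 20170168827 is a rank sort: appending each element's position makes the transformed elements distinct, an all-pairs comparison assigns enabled indicators, and counting them yields each element's sorted position. A scalar sketch under those assumptions (the hardware version would do the comparisons in parallel):

```python
def rank_sort(elements):
    """Sort by all-pairs comparison and indicator counting (rank sort)."""
    # Append each element's position as a tie-breaker ("transformed elements").
    transformed = [(v, pos) for pos, v in enumerate(elements)]
    result = [None] * len(elements)
    for i, ti in enumerate(transformed):
        # Enabled indicator: another transformed element compares strictly lower.
        rank = sum(1 for tj in transformed if tj < ti)
        result[rank] = elements[i]  # the count is the element's sorted position
    return result
```

The position tie-breaker guarantees every rank is unique, so duplicates land in adjacent, distinct slots rather than colliding.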
20170168828 | PERCEPTRON BRANCH PREDICTOR WITH VIRTUALIZED WEIGHTS - According to an aspect, virtualized weight perceptron branch prediction is provided in a processing system. A selection is performed between two or more history values at different positions of a history vector based on a virtualization map value that maps a first selected history value to a first weight of a plurality of weights, where a number of history values in the history vector is greater than a number of the weights. The first selected history value is applied to the first weight in a perceptron branch predictor to determine a first modified virtualized weight. The first modified virtualized weight is summed with a plurality of modified virtualized weights to produce a prediction direction. The prediction direction is output as a branch predictor result to control instruction fetching in a processor of the processing system. | 2017-06-15 |
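The virtualized-weight prediction in 20170168828 can be sketched as follows: the virtualization map picks which history positions feed the (fewer) weights, each selected history value modifies its weight, and the sum's sign is the prediction direction. The ±1 history encoding and the bias weight in slot 0 are assumptions of this sketch:

```python
def predict(history, weights, virt_map):
    """Perceptron prediction over a history vector longer than the weight table."""
    s = weights[0]  # bias weight (an assumption of this sketch)
    for k, w in enumerate(weights[1:]):
        h = history[virt_map[k]]  # virtualization map selects the history position
        s += w * h                # modified virtualized weight
    return 1 if s >= 0 else -1   # sign of the sum is the prediction direction
```

Because `virt_map` can point anywhere in the long history vector, a small weight table can still exploit correlations with old branches, which is the point of virtualizing the weights.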
20170168829 | PROCESSOR, COMPUTING SYSTEM COMPRISING THE SAME AND METHOD FOR DRIVING THE PROCESSOR - A processor includes a first architectural register configured to store first data based on a result of executing an instruction in a first loop, the first architectural register being mapped to one of a plurality of physical registers; and a control unit configured to determine, before execution of the instruction in an n-th loop (n being a natural number greater than 1), at least one of whether the first data stored in the first architectural register is changed and whether a physical register, among the plurality of physical registers, to which the first architectural register is mapped is changed, and, based on a result of determination, execute the instruction in the n-th loop. | 2017-06-15 |
20170168830 | ENERGY EFFICIENT SOURCE OPERAND ISSUE - In an approach for decreasing a rate of logic voltage level transitions in a multiplexor, one of a plurality of inputs to a multiplexor is selected with a first multiplexor select value at a first clock, wherein each input to the multiplexor is identified as one of i) valid and ii) invalid and the first multiplexor select value is latched in a latch until the first multiplexor select value is replaced by a second multiplexor select value. The second multiplexor select value is determined. The second multiplexor select value is applied to the multiplexor at a second clock if and only if the second multiplexor select value is different from the first multiplexor select value and the second multiplexor select value selects a valid input, wherein the second clock follows the first clock. Subsequent to applying the second multiplexor select value, the second multiplexor value is latched in the latch. | 2017-06-15 |
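The gating condition in 20170168830 — apply a new select value only when it both differs from the latched value and points at a valid input — can be modeled cycle by cycle. The class and counter here are illustrative, not the patent's circuit:

```python
class GatedMuxSelect:
    """Latched mux select that toggles only for a different, valid selection."""
    def __init__(self, initial_select=0):
        self.select = initial_select  # value held in the latch
        self.transitions = 0          # logic-level transitions saved vs. incurred

    def clock(self, new_select, valid):
        # Apply new_select iff it differs from the latch AND selects a valid input;
        # otherwise the latch (and the mux select lines) stay put.
        if new_select != self.select and valid[new_select]:
            self.select = new_select
            self.transitions += 1
        return self.select
```

Holding the select lines steady when the chosen input is unchanged or invalid is what reduces the rate of voltage-level transitions, and hence dynamic power.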
20170168831 | OPERATION OF A MULTI-SLICE PROCESSOR WITH INSTRUCTION QUEUE PROCESSING - Operation of a multi-slice processor that includes execution slices and load/store slices coupled via a results bus includes: receiving, by an execution slice, a producer instruction, including: storing, in an entry of an issue queue, the producer instruction; and storing, in a register, an issue queue entry identifier representing the entry of the issue queue in which the producer instruction is stored; receiving, by the execution slice, a source instruction, the source instruction dependent upon the result of the producer instruction, including: storing, in another entry of the issue queue, the source instruction and the issue queue entry identifier of the producer instruction; determining in dependence upon the issue queue entry identifier of the producer instruction that the producer instruction has issued from the issue queue; and responsive to the determination that the producer instruction has issued from the issue queue, issuing the source instruction from the issue queue. | 2017-06-15 |
20170168832 | INSTRUCTION WEIGHTING FOR PERFORMANCE PROFILING IN A GROUP DISPATCH PROCESSOR - Methods, apparatuses, and computer program products for instruction weighting for performance profiling in a group dispatch processor are described. In a particular embodiment, a post processing profiler retrieves an execution sample including an instruction address of a youngest instruction in a dispatch group that has completed execution in a group dispatch processor and a number of instructions in the dispatch group. In the particular embodiment, the post processing profiler identifies, based on the instruction address of the youngest instruction and the number of instructions in the dispatch group, all of the instructions that are in the dispatch group at the time that the dispatch group completes execution. In the particular embodiment, the post processing profiler applies within an execution profile, the result of the execution sample, equally to all of the identified instructions that are in the dispatch group. | 2017-06-15 |
20170168833 | INSTRUCTION WEIGHTING FOR PERFORMANCE PROFILING IN A GROUP DISPATCH PROCESSOR - Methods, apparatuses, and computer program products for instruction weighting for performance profiling in a group dispatch processor are described. In a particular embodiment, a post processing profiler retrieves an execution sample including an instruction address of a youngest instruction in a dispatch group that has completed execution in a group dispatch processor and a number of instructions in the dispatch group. In the particular embodiment, the post processing profiler identifies, based on the instruction address of the youngest instruction and the number of instructions in the dispatch group, all of the instructions that are in the dispatch group at the time that the dispatch group completes execution. In the particular embodiment, the post processing profiler applies within an execution profile, the result of the execution sample, equally to all of the identified instructions that are in the dispatch group. | 2017-06-15 |
20170168834 | OPERATION OF A MULTI-SLICE PROCESSOR WITH SELECTIVE PRODUCER INSTRUCTION TYPES - Operation of a multi-slice processor including execution slices and load/store slices, where the load/store slices are coupled to the execution slices via a results bus and the results bus includes segments assigned to carry results of a different instruction type, includes: receiving a producer instruction that includes an identifier of an instruction type and an identifier of the producer instruction, including storing the identifier of the instruction type and the identifier of the producer instruction in an entry of a register; receiving a source instruction dependent upon the result of the producer instruction including storing, in an issue queue, the source instruction, the identifier of the instruction type of the producer instruction, and an identifier of the producer instruction; and snooping the identifier of the producer instruction only from the segment of the results bus assigned to carry results of the instruction type of the producer instruction. | 2017-06-15 |
20170168835 | OPERATION OF A MULTI-SLICE PROCESSOR WITH INSTRUCTION QUEUE PROCESSING - Operation of a multi-slice processor that includes execution slices and load/store slices coupled via a results bus includes: receiving, by an execution slice, a producer instruction, including: storing, in an entry of an issue queue, the producer instruction; and storing, in a register, an issue queue entry identifier representing the entry of the issue queue in which the producer instruction is stored; receiving, by the execution slice, a source instruction, the source instruction dependent upon the result of the producer instruction, including: storing, in another entry of the issue queue, the source instruction and the issue queue entry identifier of the producer instruction; determining in dependence upon the issue queue entry identifier of the producer instruction that the producer instruction has issued from the issue queue; and responsive to the determination that the producer instruction has issued from the issue queue, issuing the source instruction from the issue queue. | 2017-06-15 |
20170168836 | OPERATION OF A MULTI-SLICE PROCESSOR WITH SPECULATIVE DATA LOADING - Operation of a multi-slice processor that includes a plurality of execution slices and a plurality of load/store slices coupled via a results bus includes: retrieving, from the results bus into an entry of a register file of an execution slice, speculative result data of a load instruction generated by a load/store slice; and determining, from the load/store slice after expiration of a predetermined period of time, whether the result data is valid. | 2017-06-15 |
20170168837 | PROCESSING OF MULTIPLE INSTRUCTION STREAMS IN A PARALLEL SLICE PROCESSOR - A method of managing instruction execution for multiple instruction streams using a processor core having multiple parallel instruction execution slices. An event is detected indicating that either resource requirement or resource availability for a subsequent instruction of an instruction stream will not be met by the instruction execution slice currently executing the instruction stream. In response to detecting the event, dispatch of at least a portion of the subsequent instruction is made to another instruction execution slice. The event may be a compiler-inserted directive, may be an event detected by logic in the processor core, or may be determined by a thread sequencer. The instruction execution slices may be dynamically reconfigured among single-instruction-multiple-data (SIMD) instruction execution, ordinary instruction execution, and wide instruction execution. When an instruction execution slice is busy processing a current instruction for one of the streams, another slice can be selected to proceed with execution. | 2017-06-15 |
20170168838 | METHODS AND COMPUTER SYSTEMS OF SOFTWARE LEVEL SUPERSCALAR OUT-OF-ORDER PROCESSING - Embodiments include methods, computer systems and computer program products for performing superscalar out-of-order processing in software in a computer system. Aspects include: loading opcodes into an analysis thread of the computer system, analyzing opcodes to identify certain non-independent opcode snippets, distributing non-independent opcode snippets to separate threads of computer system, instructing each of separate threads to execute each of non-independent opcode snippets, respectively, and collecting results of executions of each of separate threads by a consolidation thread. In exemplary embodiments, analyzing may include analyzing the opcodes using arbitrarily large variable size instruction windows to identify the non-independent opcode snippets, and distributing may include distributing opcode snippets: to a thread of same ISA, and to a code morphing thread when the opcode snippets need to be executed in threads of different ISA and then distributing the opcode snippets to the threads of different ISA by the code morphing thread. | 2017-06-15 |
20170168839 | BRANCHING TO ALTERNATE CODE BASED ON RUNAHEAD DETERMINATION - The description covers a system and method for operating a micro-processing system having a runahead mode of operation. In one implementation, the method includes providing, for a first portion of code, a runahead correlate. When the first portion of code is encountered by the micro-processing system, a determination is made as to whether the system is operating in the runahead mode. If so, the system branches to the runahead correlate, which is specifically configured to identify and resolve latency events likely to occur when the first portion of code is encountered outside of runahead. Branching out of the first portion of code may also be performed based on a determination that a register is poisoned. | 2017-06-15 |
20170168840 | SYSTEM OPERATING METHOD AND SYSTEM OPERATING DEVICE - The present disclosure provides an operation method of a system and an operation device of the system. The operation method includes: setting target operation areas corresponding to each of a plurality of systems on an operation interface of the terminal, according to a received setting command of a system operation area; and operating the target system corresponding to a target operation area according to a first specified operation action when that target operation area receives the first specified operation action. The present disclosure enables quick access to any of a plurality of systems and quick invocation of the applications of any system to be operated. | 2017-06-15 |
20170168841 | HARDWARE POWER-ON INITIALIZATION OF AN SOC THROUGH A DEDICATED PROCESSOR - In an example, a system-on-chip (SoC) includes a hardware power-on-reset (POR) sequencer circuit coupled to a POR pin. The SoC further includes a platform management unit (PMU) circuit, coupled to the hardware POR sequencer circuit, the PMU including one or more central processing units (CPUs) and a read only memory (ROM). The SoC further includes one or more processing units configured to execute a boot process. The hardware POR sequencer circuit is configured to initialize the PMU. The one or more CPUs of the PMU are configured to execute code stored in the ROM to perform a pre-boot initialization. | 2017-06-15 |
20170168842 | CIRCUIT AND METHOD OF POWER ON INITIALIZATION FOR CONFIGURATION MEMORY OF FPGA - A circuit and method of power on initialization for a configuration memory of an FPGA. The circuit includes: a decoding circuit, a driving circuit, and a configuration memory. | 2017-06-15 |
20170168843 | THREAD-AGILE EXECUTION OF DYNAMIC PROGRAMMING LANGUAGE PROGRAMS - Methods, systems, and products are provided for thread-agile dynamic programming language (‘DPL’) program execution. Thread-agile DPL program execution may be carried out by receiving, in a message queue, a message for an instance of a DPL program and determining whether the host application has a stored state object for the instance of the DPL program identified by the message. If the host application has a stored state object for the DPL program, thread-agile DPL program execution may also be carried out by retrieving the state object; preparing a thread available from a thread pool for execution of the instance of the DPL program in dependence upon the state object and an execution context for the instance of the DPL program; providing, to an execution engine for executing the DPL program, the state object and the prepared thread; and passing the message to the execution engine. | 2017-06-15 |
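The thread-agile flow in 20170168843 can be sketched as follows. This is a minimal illustration, not the patented implementation: the `state_store` dictionary, `handle_message`, and `execute` names are assumptions, and a `ThreadPoolExecutor` stands in for the thread pool described in the abstract.

```python
# Hedged sketch of thread-agile execution: per-instance state objects
# survive across messages, while any pooled thread may run the work.
from concurrent.futures import ThreadPoolExecutor
from queue import Queue

state_store = {}                       # instance id -> stored state object
pool = ThreadPoolExecutor(max_workers=4)
messages = Queue()

def execute(state, msg):
    # stand-in for the execution engine: mutate state per message
    state["counter"] += msg.get("delta", 1)
    return state

def handle_message(msg):
    instance_id = msg["instance"]
    # retrieve the stored state object for this program instance (if any)
    state = state_store.get(instance_id, {"counter": 0})
    # "prepare a thread from the pool": the state travels with the task,
    # so no particular thread is pinned to the instance
    future = pool.submit(execute, state, msg)
    state_store[instance_id] = future.result()   # persist for the next message

messages.put({"instance": "a", "delta": 2})
messages.put({"instance": "a", "delta": 3})
while not messages.empty():
    handle_message(messages.get())
print(state_store["a"]["counter"])  # 5
```

The point of the design is that state lives in the state object, not in thread-local storage, so any available thread can resume any instance.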
20170168844 | SYSTEM MANAGEMENT MODE DISABLING AND VERIFICATION TECHNIQUES - Various configurations and methods for disabling system management mode (SMM) and verifying a disabled status of SMM in a computing system are disclosed. In various examples, SMM may be disabled through a hardware strap, soft-straps, or firmware functions, and the indication of the SMM disabled status may be included in a model specific register (MSR) value accessible to the central processing unit (CPU). Additionally, techniques for verifying whether SMM is disabled in hardware or firmware, preventing access of SMM functionality, and handling secure software operations are disclosed. | 2017-06-15 |
20170168845 | MANAGING DEPENDENCIES FOR HUMAN INTERFACE INFRASTRUCTURE (HII) DEVICES - Systems and methods for managing dependencies for Human Interface Infrastructure (HII) devices are described. In some embodiments, an Information Handling System (IHS) may include a host processor and a Baseboard Management Controller (BMC) coupled to the host processor, the BMC having program instructions stored thereon that, upon execution by the BMC, cause the BMC to: receive, from another IHS remotely located with respect to the IHS, a request to change a value of a given attribute of a Human Interface Infrastructure (HII) device coupled to the IHS; and use a dependency matrix to determine how the change is affected by a current value of another attribute. | 2017-06-15 |
20170168846 | HOST INTERFACE CONTROLLER AND CONTROL METHOD FOR STORAGE DEVICE - A host interface controller with improved boot up efficiency, which uses a buffer mode setting register to set the operation mode of a first and a second buffer set provided within the host interface controller. When a cache memory of a central processing unit (CPU) at the host side has not started up, the first and second buffer sets operate in a cache memory mode to respond to read requests that the CPU repeatedly issues for data of specific addresses of the storage device. When the cache memory has started up, the first buffer set and the second buffer set operate in a ping-pong buffer mode to respond to read requests that the CPU issues for data of sequential addresses of the storage device. | 2017-06-15 |
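The two buffer modes in 20170168846 can be modeled in a few lines. This is an illustrative sketch only; the `HostInterfaceBuffers` class, its method names, and the mode strings are assumptions, not the controller's actual register interface.

```python
# Illustrative model: before the CPU cache is up, the buffer sets answer
# repeated reads of the same addresses (cache memory mode); afterwards they
# alternate as fill/drain buffers (ping-pong buffer mode).
class HostInterfaceBuffers:
    def __init__(self):
        self.mode = "cache"            # buffer mode setting register
        self.cache = {}                # buffer sets acting as a small cache
        self.ping, self.pong = [], []
        self.active = 0                # which buffer set is being filled

    def read(self, addr, fetch):
        if self.mode == "cache":
            if addr not in self.cache:           # fetch once from storage,
                self.cache[addr] = fetch(addr)   # then serve repeats locally
            return self.cache[addr]
        # ping-pong mode: alternate buffer sets for sequential reads
        buf = self.ping if self.active == 0 else self.pong
        buf.append(fetch(addr))
        self.active ^= 1
        return buf[-1]

storage = {0x10: b"boot", 0x20: b"data"}
ctl = HostInterfaceBuffers()
assert ctl.read(0x10, storage.get) == b"boot"
assert ctl.read(0x10, storage.get) == b"boot"   # repeat served from buffer
ctl.mode = "pingpong"                            # CPU cache has started up
assert ctl.read(0x20, storage.get) == b"data"
```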
20170168847 | REBOOTING TIMING ADJUSTMENT FOR IMPROVED PERFORMANCE - A method, computer program product, and system identify a low-cost time to re-boot a system. The method includes a processor obtaining a request for a re-boot of a system. The processor obtains identifiers of uncompleted tasks executing in the system. Based on obtaining the identifiers, the processor obtains a task cost of each task of the uncompleted tasks, where a value of the task cost of each task relates to a portion of each task completed by the processor at a given time. The processor determines, based on the task costs associated with the uncompleted tasks, a re-boot cost for re-booting the system at the given time. The processor determines a system cost for not re-booting the system at the given time. The processor compares the re-boot cost to the system cost to determine whether to re-boot the system at the given time in response to the request. | 2017-06-15 |
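The cost comparison in 20170168847 reduces to simple arithmetic. The cost model below is an assumption for illustration (wasted completed work versus a fixed degradation penalty per pending task); the function names are hypothetical.

```python
# Hedged sketch of the re-boot cost comparison, assuming task cost is the
# seconds of work already completed (lost if we re-boot now).
def reboot_cost(tasks):
    # work already done on each uncompleted task is wasted by a re-boot
    return sum(t["completed_s"] for t in tasks)

def system_cost(tasks, degraded_s_per_task=100):
    # assumed cost of NOT re-booting now: degraded operation per pending task
    return degraded_s_per_task * len(tasks)

def should_reboot_now(tasks):
    return reboot_cost(tasks) <= system_cost(tasks)

# Little completed work would be wasted -> re-boot now
assert should_reboot_now([{"completed_s": 120}, {"completed_s": 30}])
# Lots of completed work at stake -> defer the re-boot
assert not should_reboot_now([{"completed_s": 300}, {"completed_s": 300}])
```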
20170168848 | OPERATING SYSTEM STARTUP ACCELERATION - Embodiments are disclosed for methods and systems for selectively initializing elements of an operating system of a computing device. In some embodiments, a method of selectively loading classes during an initialization of an operating system of a computing device comprises initializing a virtual machine, loading classes selected from a first class list, and loading resources. The method further includes loading a service-loading process configured to initialize services of the operating system and register the services with a service manager, and loading classes selected from a second class list after loading the service-loading process. | 2017-06-15 |
20170168849 | Computer Device and Memory Startup Method of Computer Device - A computer device and a memory startup method of a computer are provided, where a basic input/output system initializes only a first part of memory in a memory initialization phase after a computer is powered on and started, so that an operating system can be started, and after the operating system is started, the basic input/output system continues to initialize memory that is not initialized in the computer, so that, in a startup phase, the computer can start the operating system without needing to wait until all memory has been initialized; therefore, a time from being started to entering the operating system is reduced for the computer, and a user can quickly enter the operating system to perform an operation, thereby improving user experience. | 2017-06-15 |
20170168850 | METHOD OF DOWNLOADING CONFIGURATION CODE, SYSTEM AND TIMER/COUNTER CONTROL REGISTER - The present invention provides a method of downloading a configuration code, applied in a timer/counter control register, and the method comprises steps of acquiring a first boot code through a first default interface of the timer/counter control register; calculating a first check code according to the acquired first boot code; and determining whether the calculated first check code and a first standard check code are the same; if the two are the same, downloading the configuration code through the first default interface from the storage element coupled to the first default interface; if the two are different, downloading the configuration code through a second default interface of the timer/counter control register from a storage element coupled to the second default interface. Therefore, the present invention promotes the flexibility of use of the timer/counter control register. The present invention further provides a system of downloading a configuration code and a timer/counter control register. | 2017-06-15 |
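The interface-fallback logic in 20170168850 can be sketched briefly. CRC32 is used here as a stand-in for the unspecified check-code algorithm, and the dictionary-based interface objects are illustrative assumptions.

```python
# Hedged sketch: verify the boot code from the first interface against a
# standard check code; fall back to the second interface on mismatch.
import zlib

def download_config(iface1, iface2, standard_check):
    boot_code = iface1["boot_code"]        # acquire first boot code
    check = zlib.crc32(boot_code)          # calculate first check code
    if check == standard_check:            # same as the standard check code?
        return iface1["config"]            # download via the first interface
    return iface2["config"]                # otherwise use the second interface

iface1 = {"boot_code": b"\x01\x02", "config": b"CFG-A"}
iface2 = {"boot_code": b"\x03\x04", "config": b"CFG-B"}
good = zlib.crc32(b"\x01\x02")
assert download_config(iface1, iface2, good) == b"CFG-A"
assert download_config(iface1, iface2, good ^ 1) == b"CFG-B"  # corrupt code
```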
20170168851 | SYSTEM AND METHOD FOR MANAGING BIOS SETTING CONFIGURATIONS - A BIOS settings configuration may be stored in BIOS of a computer system. A default BIOS status may be set as a locked state. The BIOS status can be changed from the locked state to an unlocked state when an authentication request is received and when the received authentication information matches stored authentication information in BIOS. In some embodiments, a BIOS settings change request can be received. The BIOS settings can be modified based on the BIOS settings change request. The BIOS status can be changed back to the locked state after the BIOS settings modification has been made. | 2017-06-15 |
20170168852 | METHOD FOR INITIALIZING PERIPHERAL DEVICES AND ELECTRONIC DEVICE USING THE SAME - A method for initializing a peripheral device and an electronic device using the method. The electronic device includes one or more peripheral devices having registers, a memory having a data storing module, and an instruction capturing module. The instruction capturing module captures a plurality of hardware register settings from a driver execution process of the one or more peripheral devices, stores the plurality of hardware register settings in the data storing module, and serializes or concatenates the plurality of hardware register settings to form serialized hardware register settings, when the electronic device is performing a non-hibernation resume or non-wakeup cold boot to execute an initialization process of the one or more peripheral devices. The one or more peripheral devices are initialized by the serialized hardware register settings, when the electronic device is performing cold boot again due to a hibernation resume or wakeup to execute the initialization process. | 2017-06-15 |
20170168853 | DYNAMIC PREDICTIVE WAKE-UP TECHNIQUES - Dynamic predictive wake-up techniques are disclosed. A central processing unit (CPU) may initiate an input/output (I/O) transfer. The CPU may ascertain if a predicted time for the transfer exceeds an amount of time required to enter and exit a low-power mode and enter the low-power mode after the transfer is initiated. An I/O controller may calculate how long the transfer will take and compare that calculation to a known exit latency associated with the CPU. The calculated value is decremented by the amount of the known exit latency and the I/O controller may generate an early wake command at the decremented value. The CPU receives the early wake command and wakes such that the CPU is awake and ready to process data at conclusion of the transfer. | 2017-06-15 |
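The early-wake arithmetic in 20170168853 is straightforward: enter the low-power mode only if the transfer outlasts the enter-plus-exit latency, and schedule the wake command at the predicted completion time minus the exit latency. The function name and the microsecond figures below are illustrative assumptions.

```python
# Hedged sketch of the dynamic predictive wake-up decision.
def plan_low_power(transfer_us, enter_us, exit_us):
    """Return (enter_sleep, wake_at_us) for a transfer of transfer_us."""
    if transfer_us <= enter_us + exit_us:
        return False, None          # not worth entering the low-power mode
    # decrement the predicted transfer time by the known exit latency so
    # the CPU is awake exactly when the transfer concludes
    return True, transfer_us - exit_us

sleep, wake_at = plan_low_power(transfer_us=500, enter_us=50, exit_us=100)
assert sleep and wake_at == 400     # early wake command fires at t=400us
assert plan_low_power(120, 50, 100) == (False, None)  # transfer too short
```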
20170168854 | APPLICATION SPECIFIC CONFIGURABLE GRAPHICAL USER INTERFACE - Methods and system are disclosed that manage behavior of a graphical user interface associated with an application during a runtime of the application. In one aspect, the graphical user interface (GUI) may be configured with attributes associated with the application by a GUI configuration manager. Upon determining application configuration information, a data field metadata manager may determine data fields to be mapped onto the GUI. The data field metadata manager may read the metadata information associated with the data fields that may include data field attributes and domain values. A GUI metadata manager may retrieve metadata information associated with the mapped data fields. A GUI runtime manager may manage the behavior of the GUI and the data received by the data fields may be saved in a data store in a data format associated with the application. | 2017-06-15 |
20170168855 | Social Filtering of User Interface - In one embodiment, a method includes identifying a content object for display based at least in part on one or more filtering criteria. The filtering criteria is a measure of suitability of each content object for presentation based at least in part on social-graph information between a first user and one or more second users or a current geo-location of the first user. The method also includes applying the filtering criteria to the content object; and providing for display on a user interface (UI) the content object based on whether the content object is suitable for presentation based at least in part on the filtering criteria. | 2017-06-15 |
20170168856 | SENDING FEATURE-INSTRUCTION NOTIFICATIONS TO USER COMPUTING DEVICES - A system and method for sending feature-instruction notifications to user computing devices. In one implementation, an online content management system detects a feature-instruction triggering event. If the user that caused the feature-instruction triggering event did not use a feature system of the online content management system, a feature-instruction notification may be sent to the user to educate the user on the feature system of the online content management system. | 2017-06-15 |
20170168857 | INFORMATION PROMPTING METHOD, DEVICES, COMPUTER PROGRAM AND STORAGE MEDIUM - The present disclosure relates to information prompting methods, information prompting devices and non-transitory computer-readable media for same. According to implementations herein, an exemplary information prompting method may include acquiring an operation set of a certain application in the current interface, and detecting whether the operation set includes a second operation or not; if the operation set does not include the second operation, displaying second operation prompt information for prompting a user to carry out the second operation in the current interface; acquiring a user's response operation executed according to the prompt information, and closing the operation prompt function according to the response operation. | 2017-06-15 |
20170168858 | PATTERN BASED VIDEO FRAME NAVIGATION AID - Recommending and graphically displaying viewed video data sensitive to the viewing pattern of a user. Responsive to viewing a plurality of video frames of a video file, a navigation profile is captured to document the viewing pattern of the video frames. Specifically, attributes of the video frames are documented such as the frequency of plays of the video frames. Where multiple navigation profiles are captured, the navigation profiles are stored, aggregated, and represented graphically on a display. Additional video may be recommended based on the aggregated data. | 2017-06-15 |
20170168859 | USING PUBLIC KEY INFRASTRUCTURE FOR AUTOMATIC DEVICE CONFIGURATION - A device may receive a digital voucher, a customer certificate, and configuration information for automatically configuring the device. The digital voucher may include a first customer identifier that identifies a customer associated with the device and a device identifier that identifies the device. The customer certificate may include a second customer identifier that identifies the customer and a customer public key associated with the customer. The configuration information may include information that identifies a configuration for automatically configuring the device. The device may validate at least one of the digital voucher, the customer certificate, or the configuration information. The device may configure the device, using the configuration, based on validating at least one of the digital voucher, the customer certificate, or the configuration information. | 2017-06-15 |
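The validation chain in 20170168859 can be sketched structurally. A real deployment would verify the voucher and certificate cryptographically against the customer public key; the dictionary fields and `validate` function below are assumptions used only to show the cross-checks the abstract describes.

```python
# Hedged structural sketch: the voucher must name this device, the voucher
# and certificate must agree on the customer, and the configuration must
# actually carry settings, before the device configures itself.
def validate(voucher, certificate, config, device_id):
    if voucher["device_id"] != device_id:
        return False                              # voucher must name this device
    if voucher["customer_id"] != certificate["customer_id"]:
        return False                              # voucher and cert must agree
    return "configuration" in config              # config must carry settings

voucher = {"device_id": "dev-1", "customer_id": "cust-9"}
cert = {"customer_id": "cust-9", "public_key": "pk-bytes"}
config = {"configuration": {"hostname": "edge-1"}}
assert validate(voucher, cert, config, "dev-1")
assert not validate(voucher, cert, config, "dev-2")   # wrong device
```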
20170168860 | DYNAMICALLY BINDING DATA IN AN APPLICATION - In a method for dynamically binding data in an application, an expression describing a relation between a first property of a first data of the application to a first property of a second data of the application is received. A binding is created between the first data and the second data based on the relation. A change is propagated to the first property of the second data based on a change to the first property of the first data. It is determined when to execute the expression in the application. | 2017-06-15 |
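The expression-based binding in 20170168860 can be illustrated with a tiny propagation model. The `Binding` class and the lambda expression are illustrative assumptions, not the patented implementation.

```python
# Hedged sketch: an expression relates a property of the first data to a
# property of the second; a change to the first is propagated by executing
# the expression.
class Binding:
    def __init__(self, source, target, expr):
        self.source, self.target, self.expr = source, target, expr

    def propagate(self):
        # execute the expression to derive the target property's new value
        self.target["value"] = self.expr(self.source["value"])

first = {"value": 10}
second = {"value": None}
binding = Binding(first, second, lambda v: v * 2)  # the binding expression
first["value"] = 21        # change to the first data's property...
binding.propagate()        # ...is propagated to the second data's property
assert second["value"] == 42
```

Deciding *when* to execute the expression (eagerly on every change, or lazily on read) is the scheduling question the abstract's last sentence refers to.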
20170168861 | Optimizations and Enhancements of Application Virtualization Layers - Methods, systems, and computer-readable media for optimizing and enhancing delivery of application virtualization layers to client computing devices are described herein. In various embodiments, an application virtualization layer optimization service may identify a first and a second application virtualization layer to be delivered to one or more client computing devices. Each application virtualization layer may represent a package of one or more applications. A layer analysis service may analyze the first and second application virtualization layers to determine conflicts between the layers, using predetermined conflict analysis rules, and generate an actionable conflict resolution report based on the analysis. Based on the actionable conflict resolution report, the application virtualization layer optimization service may resolve conflicts between the first and second application virtualization layers, order the first and second application virtualization layers, and deliver the ordered layers to the one or more client computing devices. | 2017-06-15 |
20170168862 | SELECTING A VIRTUAL MACHINE ON A MOBILE DEVICE BASED UPON CONTEXT OF AN INCOMING EVENT - A method of selecting a virtual machine (VM) on a mobile device within a wireless communication network based upon context of an incoming event. For example, a virtual intelligence engine can select a VM to handle an incoming phone call based upon the context of the phone call. If the phone call is work-related, then the virtual intelligence engine may select a first VM, while if the incoming phone call is a personal phone call, then the virtual intelligence engine may select a second VM different from the first VM. The VMs can utilize different operating systems. | 2017-06-15 |
20170168863 | METHOD AND APPARATUS FOR DYNAMIC ROUTING OF USER CONTEXTS - In one example, a method and apparatus for dynamic routing of user contexts are disclosed. In one example, a method for supporting a context associated with a connection between a user and a first virtual machine of a virtual function includes receiving a notification of a change in a behavior of the user that affects the context, wherein the context is supported by the first virtual machine of the virtual function, and reassigning the context to a second virtual machine of the virtual function, different from the first virtual machine, based at least in part on the change in the behavior. | 2017-06-15 |
20170168864 | Directing Data Traffic Between Intra-Server Virtual Machines - Systems and methods for improving data communications between intra-server virtual machines are described herein. An example method may commence with receiving, from a first virtual machine, a data packet directed to a second virtual machine, routing the data packet via an external routing environment, and receiving the data packet allowed for delivery to the second virtual machine. Based on the receipt, it may be determined that a data flow associated with the data packet is allowed, and a unique identifier of the first virtual machine may be replaced with a first unique identifier and a unique identifier of the second virtual machine may be replaced with a second unique identifier. The first and second unique identifiers may be associated with corresponding interfaces of the intra-server routing module and used to direct the data flow internally within the server between the first virtual machine and the second virtual machine. | 2017-06-15 |
20170168865 | Method and Apparatus for Hypervisor Based Monitoring of System Interactions - A security system and method efficiently monitors and secures a computer to defend against malicious intrusions, and includes an in-band software monitor disposed within a kernel in communication with an operating system (OS) of the computer. The monitor intercepts system calls made from an MSR (Model Specific Register), to execute monitoring operations, and subsequently returns execution to the OS. An out-of-band hypervisor communicably coupled to the OS, has read shadow means for trapping read requests to the MSR, and write mask means for trapping write requests to the MSR. The hypervisor includes means for responding to the trapped read and write requests so that presence of the monitor is obscured. | 2017-06-15 |
20170168866 | METHOD AND APPARATUS FOR MANAGING IT INFRASTRUCTURE IN CLOUD ENVIRONMENTS - In example implementations, when a management program deploys new virtual machines, the management program may identify candidate virtual machines for replacement, score the possibilities of replacement and relate the new virtual machines to candidate virtual machines if it determines the probability of replacement is high. The management program may also migrate virtual machines and storage volumes used by the virtual machines to other physical servers and storage arrays by related pairs of virtual machines. The management program may also inherit management policies from existing virtual machines being replaced and leverage them to manage new virtual machines, which replace the existing virtual machines. | 2017-06-15 |
20170168867 | INFORMATION PROCESSING SYSTEM AND CONTROL METHOD - One or more virtual machines included in an information processing system each issue an instruction to delete a message from a queue storing a plurality of messages, before completion of processing of the data specified by the contents of the message, if a status indicating a processing error associated with identification information included in the message is stored in a storage. | 2017-06-15 |
20170168868 | SIMULTANEOUS MULTIPLE-USER POSTAGE METER/SHIPPING DEVICE - A mail processing device whose operation can be accomplished remotely and support multiple users simultaneously is provided. The present invention utilizes secure simultaneous multiple user/application access over a network. This supports software configuration, data access, control of a postal security device (PSD), indicium creation, and control of a print engine. The invention is realized by binding core embedded software to a web server within the device and using a web technology stack that allows many users to access the software. HTTPS and a web socket based API protocol are used to enable communications with external processing devices and web applications. | 2017-06-15 |