52nd week of 2016 patent application highlights part 43 |
Patent application number | Title | Published |
20160378468 | CONVERSION OF BOOLEAN CONDITIONS - A Set Boolean machine instruction is provided that has associated therewith a result location to be used for a set Boolean operation and a mask. The mask is configured to test a plurality of types of conditions, including simple conditions and composite conditions. The machine instruction is executed, and the executing includes performing a first logical operation between the mask and contents of a selected field to obtain an output. The mask indicates a condition to be tested, and the condition is one type of condition of the plurality of types of conditions. The executing further includes performing a second logical operation on the output to obtain a first value represented as one data type, and placing a result in the result location based on the first value. The result includes a second value of another data type, the other data type being different from the one data type. | 2016-12-29 |
20160378469 | INSTRUCTION TO PERFORM A LOGICAL OPERATION ON CONDITIONS AND TO QUANTIZE THE BOOLEAN RESULT OF THAT OPERATION - A machine instruction is provided that has associated therewith a result location to be used for a set operation, a first source, a second source, and an operation select field configured to specify a plurality of selectable operations. The machine instruction is executed, which includes obtaining the first source, the second source, and a selected operation, and performing the selected operation on the first source and the second source to obtain a result in one data type. That result is quantized to a value in a different data type, and the value is placed in the result location. | 2016-12-29 |
20160378470 | INSTRUCTION AND LOGIC FOR TRACKING FETCH PERFORMANCE BOTTLENECKS - A processor includes a front end, an execution unit, a retirement stage, a counter, and a performance monitoring unit. The front end includes logic to receive an event instruction to enable supervision of a front end event that will delay execution of instructions. The execution unit includes logic to set a register with parameters for supervision of the front end event. The front end further includes logic to receive a candidate instruction and match the candidate instruction to the front end event. The counter includes logic to generate the front end event upon retirement of the candidate instruction. | 2016-12-29 |
20160378471 | INSTRUCTION AND LOGIC FOR EXECUTION CONTEXT GROUPS FOR PARALLEL PROCESSING - A processor includes cores and a context management circuit. The circuit includes logic to determine an execution context group (ECG) to be migrated between cores. The ECG is to include application threads. The circuit also includes logic to halt all execution contexts in the ECG before migrating the ECG, reassign processor affinity to designate the target core, and restart execution of the ECG. | 2016-12-29 |
20160378472 | Instruction and Logic for Predication and Implicit Destination - A processor includes a front end to receive an instruction. The processor also includes a core to execute the instruction. The core includes logic to execute a base function of the instruction to yield a result, generate a predicate value of a comparison of the result based upon a predication setting in the instruction, and set the predicate value in a register. The processor also includes a retirement unit to retire the instruction. | 2016-12-29 |
20160378473 | INSTRUCTION AND LOGIC FOR CHARACTERIZATION OF DATA ACCESS - A processor includes a front end to receive a first instruction, a decoder to decode the first instruction, a core to execute the first instruction, and a retirement unit to retire the first instruction. The core includes logic to execute the first instruction, including logic to repeatedly record a translation lookaside buffer (TLB) until a designated number of records are determined, and flush the TLB after a flush interval. | 2016-12-29 |
20160378474 | CONVERSION OF BOOLEAN CONDITIONS - A Set Boolean machine instruction is provided that has associated therewith a result location to be used for a set Boolean operation and a mask. The mask is configured to test a plurality of types of conditions, including simple conditions and composite conditions. The machine instruction is executed, and the executing includes performing a first logical operation between the mask and contents of a selected field to obtain an output. The mask indicates a condition to be tested, and the condition is one type of condition of the plurality of types of conditions. The executing further includes performing a second logical operation on the output to obtain a first value represented as one data type, and placing a result in the result location based on the first value. The result includes a second value of another data type, the other data type being different from the one data type. | 2016-12-29 |
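The Set Boolean mechanism described in the two CONVERSION OF BOOLEAN CONDITIONS filings above can be illustrated with a minimal sketch. All names, the choice of AND as the first logical operation, and the bit-level encoding here are assumptions for illustration, not details taken from the filings:

```python
def set_boolean(mask, condition_field):
    """Sketch of the described flow: a first logical operation (here AND)
    between the mask and the selected condition field yields an output,
    and a second operation collapses that output to a single truth value.
    The result is stored as an integer 0/1, a different data type than
    the raw condition bits."""
    output = mask & condition_field    # first logical operation
    return 1 if output != 0 else 0     # second operation: reduce to a Boolean result
```

For example, with a mask selecting one condition bit that is set in the condition field, `set_boolean(0b0100, 0b0110)` yields `1`.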
20160378475 | INSTRUCTION TO PERFORM A LOGICAL OPERATION ON CONDITIONS AND TO QUANTIZE THE BOOLEAN RESULT OF THAT OPERATION - A machine instruction is provided that has associated therewith a result location to be used for a set operation, a first source, a second source, and an operation select field configured to specify a plurality of selectable operations. The machine instruction is executed, which includes obtaining the first source, the second source, and a selected operation, and performing the selected operation on the first source and the second source to obtain a result in one data type. That result is quantized to a value in a different data type, and the value is placed in the result location. | 2016-12-29 |
20160378476 | NON-DEFAULT INSTRUCTION HANDLING WITHIN TRANSACTION - Embodiments relate to non-default instruction handling within a transaction. An aspect includes entering a transaction, the transaction comprising a first plurality of instructions and a second plurality of instructions, wherein a default manner of handling of instructions in the transaction is one of atomic and non-atomic. Another aspect includes encountering a non-default specification instruction in the transaction, wherein the non-default specification instruction comprises a single instruction that specifies the second plurality of instructions of the transaction for handling in a non-default manner comprising one of atomic and non-atomic, wherein the non-default manner is different from the default manner. Another aspect includes handling the first plurality of instructions in the default manner. Yet another aspect includes handling the second plurality of instructions in the non-default manner. | 2016-12-29 |
20160378477 | INSTRUCTIONS TO COUNT CONTIGUOUS REGISTER ELEMENTS HAVING SPECIFIC VALUES - A machine instruction to find a condition location within registers, such as vector registers. The machine instruction has associated therewith a register to be examined and a result location. The register includes a plurality of elements. In execution, the machine instruction counts a number of contiguous elements of the plurality of elements of the register having a particular value in a selected location within the contiguous elements. Other locations within the contiguous elements are ignored for the counting. The counting provides a count placed in the result location. | 2016-12-29 |
20160378478 | INSTRUCTIONS TO COUNT CONTIGUOUS REGISTER ELEMENTS HAVING SPECIFIC VALUES - A machine instruction to find a condition location within registers, such as vector registers. The machine instruction has associated therewith a register to be examined and a result location. The register includes a plurality of elements. In execution, the machine instruction counts a number of contiguous elements of the plurality of elements of the register having a particular value in a selected location within the contiguous elements. Other locations within the contiguous elements are ignored for the counting. The counting provides a count placed in the result location. | 2016-12-29 |
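As a rough model of the counting behavior in the two INSTRUCTIONS TO COUNT CONTIGUOUS REGISTER ELEMENTS filings above (element width, scan direction, and all names are assumptions; the abstracts do not specify them):

```python
def count_contiguous(elements, bit_pos, target):
    """Count how many contiguous elements, scanning from the first element,
    hold the value `target` at bit position `bit_pos`. All other bit
    positions within each element are ignored for the counting, and the
    scan stops at the first mismatch; the count is the result."""
    count = 0
    for element in elements:
        if (element >> bit_pos) & 1 == target:
            count += 1
        else:
            break
    return count
```

With elements `[0b1, 0b11, 0b10, 0b1]` and bit position 0, only the first two elements have a 1 in the selected location, so the count is 2; the set bit at position 1 of the third element is ignored.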
20160378479 | DECOUPLED PROCESSOR INSTRUCTION WINDOW AND OPERAND BUFFER - A processor core in an instruction block-based microarchitecture is configured so that an instruction window and operand buffers are decoupled for independent operation in which instructions in the block are not tied to resources such as control bits and operands that are maintained in the operand buffers. Instead, pointers are established among instructions in the block and the resources so that control state can be established for a refreshed instruction block (i.e., an instruction block that is reused without re-fetching it from an instruction cache) by following the pointers. Such decoupling of the instruction window from the operand space can provide greater processor efficiency, particularly in multiple core arrays where refreshing is utilized (for example when executing program code that uses tight loops), because the operands and control bits are pre-validated. | 2016-12-29 |
20160378480 | Systems, Methods, and Apparatuses for Improving Performance of Status Dependent Computations - Embodiments for systems, methods, and apparatuses for improving performance of status dependent computations are detailed. In an embodiment, a hardware apparatus comprises decoder hardware to decode an instruction, operand retrieval hardware to retrieve data from at least one source operand associated with the instruction decoded by the decoder hardware, and execution hardware to execute the decoded instruction to generate a result including at least one status bit and to cause the result and at least one status bit to be stored in a single destination physical storage location, wherein the at least one status bit and result are accessible through a read of the single register. | 2016-12-29 |
20160378481 | INSTRUCTION AND LOGIC FOR ENCODED WORD INSTRUCTION COMPRESSION - A processor includes a memory and a decompressor. The memory is to store compressed instructions. The decompressor includes logic to receive a request for an instruction in the compressed instructions to be executed by the processor, determine a block in the memory including the requested instruction, and determine a start address of the block in the compressed instructions. The decompressor also includes logic to decompress chunks of the block, a given chunk to include parts of a plurality of very-long instruction word (VLIW) instructions. | 2016-12-29 |
20160378482 | EFFICIENT QUANTIZATION OF COMPARE RESULTS - A set machine instruction is provided that has associated therewith a result location to be used with a set operation. The set machine instruction is executed, which includes checking contents of a selected field, and determining, based on the checking, whether the contents of the selected field indicate a first condition, a second condition or a third condition represented in one data type. The result location is set to a value based on the determining, wherein the value, based on the setting, is of a data type different from the one data type and represents a result of a previously executed instruction, the result of the previously executed instruction being one of the first condition, the second condition or the third condition. | 2016-12-29 |
20160378483 | REUSE OF DECODED INSTRUCTIONS - Systems and methods are disclosed for reusing fetched and decoded instructions in block-based processor architectures. In one example of the disclosed technology, a system includes a plurality of block-based processor cores and an instruction scheduler. A respective core is capable of executing one or more instruction blocks of a program. The instruction scheduler can be configured to identify a given instruction block of the program that is resident on a first processor core of the processor cores and is to be executed again. The instruction scheduler can be configured to adjust a mapping of instruction blocks in flight so that the given instruction block is re-executed on the first processor core without re-fetching the given instruction block. | 2016-12-29 |
20160378484 | MAPPING INSTRUCTION BLOCKS BASED ON BLOCK SIZE - A processor core in an instruction block-based microarchitecture utilizes instruction blocks having headers that include an index to a size table that may be expressed using one of memory, register, logic, or code stream. A control unit in the processor core determines how many instructions to fetch for a current instruction block for mapping into an instruction window based on the block size that is indicated from the size table. As instruction block sizes are often unevenly distributed for a given program, utilization of the size table enables more flexibility in matching instruction blocks to the sizes of available slots in the instruction window as compared to arrangements in which instruction blocks have a fixed sized or are sized with less granularity. Such flexibility may enable denser instruction packing which increases overall processing efficiency by reducing the number of nops (no operations, such as null functions) in a given instruction block. | 2016-12-29 |
20160378485 | EFFICIENT QUANTIZATION OF COMPARE RESULTS - A set machine instruction is provided that has associated therewith a result location to be used with a set operation. The set machine instruction is executed, which includes checking contents of a selected field, and determining, based on the checking, whether the contents of the selected field indicate a first condition, a second condition or a third condition represented in one data type. The result location is set to a value based on the determining, wherein the value, based on the setting, is of a data type different from the one data type and represents a result of a previously executed instruction, the result of the previously executed instruction being one of the first condition, the second condition or the third condition. | 2016-12-29 |
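The three-way quantization in the two EFFICIENT QUANTIZATION OF COMPARE RESULTS filings above resembles mapping a condition code left by an earlier compare into a small signed integer. A sketch under assumed encodings — the actual condition-code values and the target data type are not given in the abstracts:

```python
# Hypothetical encodings for the three conditions in the selected field.
CC_EQUAL, CC_LOW, CC_HIGH = 0, 1, 2

def quantize_compare(cc):
    """Check the selected field (a condition code from a previously executed
    compare), determine which of the three conditions it indicates, and set
    the result to a value of a different data type (here a signed integer)."""
    return {CC_EQUAL: 0, CC_LOW: -1, CC_HIGH: 1}[cc]
```

This collapses a condition-code check and a per-condition branch into a single set operation, which is the efficiency claim the title alludes to.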
20160378486 | METHOD AND APPARATUS FOR EXECUTION MODE SELECTION - An apparatus and method for performing high performance instruction emulation. For example, one embodiment of the invention includes a processor to process an instruction set including high-power and standard instructions comprising: an analysis module to determine whether a number of high-power instructions within a specified window is above or below a specified threshold; an execution mode selection module to select a native execution of the high-power instructions if the number of high-power instructions is above the specified threshold, or to select an emulated execution of the high-power instructions if the number of high-power instructions is below the specified threshold. | 2016-12-29 |
20160378487 | EFFICIENT INSTRUCTION FUSION BY FUSING INSTRUCTIONS THAT FALL WITHIN A COUNTER-TRACKED AMOUNT OF CYCLES APART - A technique to enable efficient instruction fusion within a computer system. In one embodiment, a processor logic delays the processing of a second instruction for a threshold amount of time if a first instruction within an instruction queue is fusible with the second instruction. | 2016-12-29 |
20160378488 | ACCESS TO TARGET ADDRESS - Systems, methods, and computer-readable storage are disclosed for providing early access to target addresses in block-based processor architectures. In one example of the disclosed technology, a method of performing a branch in a block-based architecture can include executing one or more instructions of a first instruction block using a first core of the block-based architecture. The method can include, before the first instruction block is committed, initiating non-speculative execution of instructions of a second instruction block. | 2016-12-29 |
20160378489 | REGISTER FILE MAPPING - An apparatus for processing instructions includes a mapping unit comprising a plurality of mappers wherein each mapper of the plurality of mappers maps a logical sub-register reference to a physical sub-register reference, a decoding unit configured to receive an instruction and determine a plurality of logical sub-register references therefrom, and an execution unit. The mapping unit may be configured to distribute the plurality of logical sub-register references amongst the plurality of mappers according to at least one bit in the instruction and provide a corresponding plurality of physical sub-register references. The execution unit may be configured to execute the instruction using the plurality of physical sub-register references. Corresponding methods are also disclosed herein. | 2016-12-29 |
20160378490 | PROTECTING CONFIDENTIAL DATA WITH TRANSACTIONAL PROCESSING IN EXECUTE-ONLY MEMORY - Generally, this disclosure provides systems, devices, methods and computer readable media for protecting confidential data with transactional processing in execute-only memory. The system may include a memory module configured to store an execute-only code page. The system may also include a transaction processor configured to enforce a transaction region associated with at least a portion of the code page. The system may further include a processor configured to execute a load instruction fetched from the code page, the load instruction configured to load at least a portion of the confidential data from an immediate operand of the load instruction if a transaction mode of the transaction region is enabled. | 2016-12-29 |
20160378491 | DETERMINATION OF TARGET LOCATION FOR TRANSFER OF PROCESSOR CONTROL - Methods and apparatus are disclosed for eliminating explicit control flow instructions (for example, branch instructions) from atomic instruction blocks according to a block-based instructions set architecture (ISA). In one example of the disclosed technology, an explicit data graph execution (EDGE) ISA processor is configured to fetch instruction blocks from a memory and execute at least one of the instruction blocks, each of the instruction blocks being encoded to have one or more exit points determining a target location of a next instruction block. Processor control circuitry evaluates one or more predicates for instructions encoded within a first one of the instruction blocks, and based on the evaluating, transfers control of the processor to a second instruction block at a target location that is not specified by a control flow instruction in the first instruction block. | 2016-12-29 |
20160378492 | Decoding Information About a Group of Instructions Including a Size of the Group of Instructions - A method including fetching a group of instructions, where the group of instructions is configured to execute atomically by a processor is provided. The method further includes decoding at least one of a first instruction or a second instruction, where: (1) decoding the first instruction results in a processing of information about a group of instructions, including information about a size of the group of instructions, and (2) decoding the second instruction results in a processing of at least one of: (a) a reference to a memory location having the information about the group of instructions, including information about the size of the group of instructions or (b) a processor status word having information about the group of instructions, including information about the size of the group of instructions. | 2016-12-29 |
20160378493 | BULK ALLOCATION OF INSTRUCTION BLOCKS TO A PROCESSOR INSTRUCTION WINDOW - A processor core in an instruction block-based microarchitecture includes a control unit that allocates instructions into an instruction window in bulk by fetching blocks of instructions and associated resources including control bits and operands at once. Such bulk allocation supports increased efficiency in processor core operations by enabling consistent management and policy implementation across all the instructions in the block during execution. For example, when an instruction block branches back on itself, it may be reused in a refresh process rather than being re-fetched from the instruction cache. As all of the resources for that instruction block are in one place, the instructions can remain in place and only valid bits need to be cleared. Bulk allocation also facilitates operand sharing by instructions in a block and explicit messaging among instructions. | 2016-12-29 |
20160378494 | Processing Encoding Format to Interpret Information Regarding a Group of Instructions - A method including fetching information regarding a group of instructions, where the group of instructions is configured to execute atomically by a processor, including an encoding format for the information regarding the group of instructions, is provided. The method further includes processing the encoding format to interpret the information regarding the group of instructions. | 2016-12-29 |
20160378495 | Locking Operand Values for Groups of Instructions Executed Atomically - A method including fetching a group of instructions, including a group header for the group of instructions, where the group of instructions is configured to execute by a processor, and where the group header includes a field including locking information for at least one operand, is provided. The method further includes storing a value of the at least one operand in at least one operand buffer of the processor and, based on the locking information, locking a value of the at least one operand in the at least one operand buffer such that the at least one operand is not cleared from the at least one operand buffer of the processor in response to completing the execution of the group of instructions. | 2016-12-29 |
20160378496 | Explicit Instruction Scheduler State Information for a Processor - A method including fetching a group of instructions, where the group of instructions is configured to execute atomically by a processor, is provided. The method further includes scheduling at least one of the group of instructions for execution by the processor before decoding the at least one of the group of instructions based at least on pre-computed ready state information associated with the at least one of the group of instructions. | 2016-12-29 |
20160378497 | Systems, Methods, and Apparatuses for Thread Selection and Reservation Station Binding - Embodiments of systems, methods, and apparatuses for thread selection and reservation station binding are disclosed. In an embodiment, an apparatus includes allocation hardware including reservation station binding logic to bind an operation to one of a plurality of reservation stations. In an embodiment, an apparatus includes thread selection logic to select a thread to be processed by a pipeline stage, wherein the thread selection logic to evaluate a plurality of conditions to select a thread, wherein the conditions include if a thread is active, if a thread has operations in an instruction queue, if a thread has available resources, and if a thread has no known stall. | 2016-12-29 |
20160378498 | Systems, Methods, and Apparatuses for Last Branch Record Support - Systems, methods, and apparatuses for last branch record support are described. In an embodiment, a hardware processor core comprises a hardware execution unit to execute a branch instruction, at least two last branch record (LBR) registers to store a source and destination information of a branch taken during program execution, wherein an entry in a LBR register to include an encoding of the branch, a write bit array to indicate which LBR register is architecturally correct, an architectural bit array to indicate when an LBR register has been written, and a plurality of top of stack pointers to indicate which LBR register in a LBR register stack is to be written. | 2016-12-29 |
20160378499 | VERIFYING BRANCH TARGETS - Apparatus and methods are disclosed for implementing bad jump detection in block-based processor architectures. In one example of the disclosed technology, a block-based processor includes one or more block-based processing cores configured to fetch and execute atomic blocks of instructions and a control unit configured to, based at least in part on receiving a branch signal indicating a target location from one of the instruction blocks, verify that the target location is a valid branch target. | 2016-12-29 |
20160378500 | SPLIT-LEVEL HISTORY BUFFER IN A COMPUTER PROCESSING UNIT - A split level history buffer in a central processing unit is provided. A history buffer is partitioned into a first portion and a second portion, wherein the first portion includes a first tagged instruction. A result is generated for the first tagged instruction. A determination whether a second tagged instruction is to be stored in the first portion of the history buffer is made. Responsive to the determination that the second tagged instruction is to be stored in the first portion of the history buffer, the first tagged instruction and the generated result for the first tagged instruction is written to the second portion of the history buffer. | 2016-12-29 |
20160378501 | SPLIT-LEVEL HISTORY BUFFER IN A COMPUTER PROCESSING UNIT - A split level history buffer in a central processing unit is provided. A history buffer is partitioned into a first portion and a second portion, wherein the first portion includes a first tagged instruction. A result is generated for the first tagged instruction. A determination whether a second tagged instruction is to be stored in the first portion of the history buffer is made. Responsive to the determination that the second tagged instruction is to be stored in the first portion of the history buffer, the first tagged instruction and the generated result for the first tagged instruction is written to the second portion of the history buffer. | 2016-12-29 |
20160378502 | AGE-BASED MANAGEMENT OF INSTRUCTION BLOCKS IN A PROCESSOR INSTRUCTION WINDOW - A processor core in an instruction block-based microarchitecture includes a control unit that explicitly tracks instruction block state including age or priority for current blocks that have been fetched from an instruction cache. Tracked instruction blocks are maintained in an age-ordered or priority-ordered list. When an instruction block is identified by the control unit for commitment, the list is checked for a match and a matching instruction block can be refreshed without re-fetching from the instruction cache. If a match is not found, an instruction block can be committed and replaced based on either age or priority. Such instruction state tracking typically consumes little overhead and enables instruction blocks to be reused and mispredicted instructions to be skipped to increase processor core efficiency. | 2016-12-29 |
20160378503 | TECHNIQUES TO WAKE-UP DEPENDENT INSTRUCTIONS FOR BACK-TO-BACK ISSUE IN A MICROPROCESSOR - Techniques are disclosed for back-to-back issue of instructions in a processor. A first instruction is stored in a queue position in an issue queue. The issue queue stores instructions in a corresponding queue position. The first instruction includes a target instruction tag and at least a source instruction tag. The target instruction tag is stored in a table storing a plurality of target instruction tags associated with a corresponding instruction. Each stored target instruction tag specifies a logical register that stores a target operand. Upon determining, based on the source instruction tag associated with the first instruction and the target instruction tag associated with a second instruction, that the first instruction is dependent on the second instruction, a pointer to the first instruction is associated with the second instruction. The pointer is used to wake up the first instruction upon issue of the second instruction. | 2016-12-29 |
20160378504 | TECHNIQUES TO WAKE-UP DEPENDENT INSTRUCTIONS FOR BACK-TO-BACK ISSUE IN A MICROPROCESSOR - Techniques are disclosed for back-to-back issue of instructions in a processor. A first instruction is stored in a queue position in an issue queue. The issue queue stores instructions in a corresponding queue position. The first instruction includes a target instruction tag and at least a source instruction tag. The target instruction tag is stored in a table storing a plurality of target instruction tags associated with a corresponding instruction. Each stored target instruction tag specifies a logical register that stores a target operand. Upon determining, based on the source instruction tag associated with the first instruction and the target instruction tag associated with a second instruction, that the first instruction is dependent on the second instruction, a pointer to the first instruction is associated with the second instruction. The pointer is used to wake up the first instruction upon issue of the second instruction. | 2016-12-29 |
20160378505 | SYSTEM OPERATION QUEUE FOR TRANSACTION - Embodiments relate to a system operation queue for a transaction. An aspect includes determining whether a system operation is part of an in-progress transaction of a central processing unit (CPU). Another aspect includes based on determining that the system operation is part of the in-progress transaction, storing the system operation in a system operation queue corresponding to the in-progress transaction. Yet another aspect includes, based on the in-progress transaction ending, processing the system operation in the system operation queue. | 2016-12-29 |
20160378506 | EFFICIENT POWER MANAGEMENT OF A SYSTEM WITH VIRTUAL MACHINES - Efficient power management of a system with virtual machines is disclosed. In particular, such efficient power management may enable coordination of system-wide power changes with virtual machines. Additionally, such efficient power management may enable coherent power changes in a system with a virtual machine monitor. Furthermore, such efficient power management may enable dynamic control and communication of power state changes. | 2016-12-29 |
20160378507 | FIRMWARE BLOCK DISPATCH BASED ON FUSING - The present disclosure is directed to firmware block dispatch based on fusing. A device may determine firmware blocks to load during initialization of the device based on fuses set in a processing module in the device. A firmware module may comprise at least a nonvolatile (NV) memory including boot code and a firmware information table (FIT). During initialization the boot code may cause the processing module to read fuse information from a fuse module and to determine at least one firmware block to load based on the fuse information. For example, the fuse information may comprise a fuse string and the processing module may compare the fuse string to the FIT, determine at least one pointer in the FIT associated with the fuse string and load at least one firmware block based on a location (e.g., offset) in the NV memory identified by the at least one pointer. | 2016-12-29 |
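The fuse-driven dispatch in the FIRMWARE BLOCK DISPATCH abstract above amounts to a table lookup. A minimal sketch, assuming the FIT maps fuse strings to (offset, length) pairs in NV memory — the real table layout and names are not specified in the abstract:

```python
def select_firmware_block(fuse_string, fit, nv_memory):
    """Look up the fuse string read from the fuse module in the FIT, then
    return the firmware block stored at the NV-memory offset referenced by
    the pointer associated with that fuse string."""
    offset, length = fit[fuse_string]          # pointer entry for this fuse string
    return nv_memory[offset:offset + length]   # firmware block at that location
```

A device fused for a given SKU would thus load only the firmware blocks its fuse string points at, without reflashing the NV memory.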
20160378508 | JNI OBJECT ACCESS - Embodiments of the present invention disclose a method, system, and computer program product for a JNI object access system. A computer receives a JNI reference and obtains the pointer data and call site of the referenced object. The computer determines whether a record of the object and call site exist and, if not, the respective records are created. The computer applies a heuristic analysis of the object and call site in which it determines whether the object is larger than a threshold size, whether the object is part of a particular region of the heap, whether the call site is associated with a read-only or a read-write function, and whether the object or call site has caused more non-moving garbage collections than a threshold number. Based on the heuristic, the computer either copies the object data or pins the object and any non-moving garbage collections are recorded. | 2016-12-29 |
20160378509 | SYSTEMS AND METHODS FOR REDUCING BIOS REBOOTS - In accordance with embodiments of the present disclosure, a method may include during boot of an information handling system, obtaining from a management controller integral to the information handling system information regarding resource requirements for one or more peripheral devices communicatively coupled to the one or more processor sockets integral to the information handling system and the management controller. The method may also include determining whether a default allocation of resources for the one or more peripheral devices among the one or more processor sockets by a basic input/output system integral to the information handling system satisfies the resource requirements. The method may further include, in response to determining the default allocation does not satisfy the resource requirements, rebalancing resources among the one or more processor sockets to satisfy the resource requirements prior to enumeration of the one or more peripheral devices. | 2016-12-29 |
20160378510 | Configuration Method, Data Exchange Method and Server System - A configuration method, a data exchange method, and a server system are described. The configuration method includes virtualizing at least one first storage apparatus into M booting virtual storage spaces and virtualizing at least one second storage apparatus into M data virtual storage spaces, where M is an integer greater than or equal to 2; creating an i-th first correspondence between the i-th server of the M servers and the i-th booting virtual storage space of the M booting virtual storage spaces, with i from 1 to M in order; and creating an i-th second correspondence between the i-th server and the i-th data virtual storage space of the M data virtual storage spaces, with i from 1 to M in order. | 2016-12-29 |
20160378511 | ELECTRONIC DEVICE HAVING AN EXTERNAL MEMORY AND METHOD FOR OPERATING THE SAME - An electronic device having an external memory according to various embodiments of the present disclosure may include a communication unit; an internal memory configured to store first electronic device information of the electronic device and first booting data in a first booting area, the first booting data being loaded when electric power is supplied to the electronic device; an external memory configured to store second electronic device information of the electronic device, firmware corresponding to the electronic device in a firmware storage area, and updated firmware received via the communication unit in a firmware update information storage area; and a controller configured to compare the second electronic device information stored in the external memory with the first electronic device information stored in the internal memory and, when the first and second electronic device information are not identical, to change the firmware in the firmware storage area based on the updated firmware stored in the firmware update information storage area during booting of the electronic device. | 2016-12-29 |
20160378512 | CIRCUIT, METHOD, AND DEVICE FOR WAKING UP MASTER MCU - The present disclosure relates to a circuit that includes: a master microcontroller unit (MCU) having a clock line connected with a master clock signal; a peripheral interface chip; and a peripheral processing chip connected to the master MCU via the peripheral interface chip, wherein each of a clock line of the peripheral processing chip and a clock line of the peripheral interface chip is connected with a slave clock signal; wherein the peripheral processing chip is configured to remain working normally after the master MCU enters a deep sleep mode; and wherein the peripheral interface chip is configured to: remain working normally after the master MCU enters the deep sleep mode; monitor an amount of data sent by the peripheral processing chip to the peripheral interface chip; and send a wake-up signal to the master MCU when the amount of the data exceeds a threshold. | 2016-12-29 |
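The wake-up policy of the interface chip can be sketched as a byte counter with a threshold. A minimal model, with the class and callback names assumed for illustration (a real design would assert a wake line in hardware):

```python
# Sketch: the peripheral interface chip keeps working while the master MCU is
# in deep sleep, counts data arriving from the peripheral processing chip,
# and wakes the MCU once the accumulated amount exceeds a threshold.

class PeripheralInterface:
    def __init__(self, wake_threshold, wake_mcu):
        self.wake_threshold = wake_threshold
        self.wake_mcu = wake_mcu      # callback standing in for the wake line
        self.pending = 0              # bytes seen since the last wake-up

    def on_data(self, nbytes):
        """Called for each transfer from the peripheral processing chip."""
        self.pending += nbytes
        if self.pending > self.wake_threshold:
            self.wake_mcu()
            self.pending = 0          # reset after signalling the MCU
```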
20160378513 | Method and Apparatus for User Interface Modification - A method and apparatus for modifying a user interface. The method comprises receiving user interface data at a client from a first server, receiving modification computer program code at said client, and executing said modification computer program code at said client to modify said user interface data to generate modified user interface data. The modification computer program code can be received from said first server or from a further server. | 2016-12-29 |
20160378514 | AUTOMATED TESTING OF GUI MIRRORING - Testing correct mirroring of a GUI. Two GUI specifications are received: a reference GUI specification and a mirrored GUI specification that corresponds to a horizontally mirrored version of the reference GUI specification. For each child element in the reference GUI specification, its start position, its width, and the width of its parent GUI element are determined from the reference GUI specification; for the corresponding mirrored GUI element, a mirrored start position and a mirrored width are determined from the mirrored GUI specification; and a calculated mirrored start position is determined for the mirrored GUI element, based on the start position, the width, and the width of the child GUI element's parent GUI element. If the mirrored start position or the mirrored width is not within a predefined tolerance of the calculated mirrored start position or the width, respectively, the mirrored GUI specification is updated with the calculated mirrored start position or the width, respectively. | 2016-12-29 |
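The arithmetic implied by "horizontally mirrored" can be made concrete: for a child starting at `start` with `width` inside a parent of `parent_width`, the mirrored start is `parent_width - (start + width)` and the width is unchanged. A small sketch of the check (the formula is inferred from the abstract, which does not spell it out):

```python
def calculated_mirrored_start(start, width, parent_width):
    """Expected start of a horizontally mirrored child inside its parent."""
    return parent_width - (start + width)

def mirroring_ok(start, width, parent_width,
                 mirrored_start, mirrored_width, tolerance=1):
    """True if the mirrored element is within tolerance of the expectation."""
    expected = calculated_mirrored_start(start, width, parent_width)
    return (abs(mirrored_start - expected) <= tolerance
            and abs(mirrored_width - width) <= tolerance)
```

For example, a child at x=10 with width 30 inside a 100-wide parent should reappear at x=60 in the mirrored specification.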
20160378515 | REMOTELY EMULATING COMPUTING DEVICES - Disclosed are various embodiments that facilitate remote emulation of computing devices. A request is received from a client device to evaluate an application without installing the application upon the client device. The application is then executed in a hosted environment, and a video signal from the application is captured. User interface data for a browser executed in the client device is generated. The browser renders the video signal and captures user input relative to the video signal. The user interface data and data encoding the video signal are sent to the client device. | 2016-12-29 |
20160378516 | MODIFYING AN INSTANCE CATALOG TO PERFORM OPERATIONS - The present disclosure is related to methods, systems, and machine-readable media for modifying an instance catalog to perform operations. A storage system can include a plurality of packfiles that store data. The storage system can include a plurality of streams that include a plurality of hashes that identify the plurality of packfiles. The storage system can include an instance catalog that includes an identification of the plurality of streams. The storage system can include an operation engine to perform a number of operations on the plurality of packfiles by modifying the instance catalog using the identification of the plurality of streams. | 2016-12-29 |
20160378517 | METHODS AND APPARATUS TO MONITOR VIRTUAL COMPUTING ENVIRONMENTS - Methods, apparatus, systems and articles of manufacture to monitor virtual computing environments are described. An example method includes determining a computing resource status of a computing host that is operating a container engine, comparing the computing resource status to a threshold, and in response to determining that the computing resource status does not exceed the threshold, executing a monitoring operation in a container hosted by the container engine. | 2016-12-29 |
20160378518 | POLICY BASED PROVISIONING OF CONTAINERS - Techniques for placing containers in a cloud (e.g., into virtual machines (“VMs”)) based on container policies. The container policies may specify compute-related qualities, storage-related qualities, and/or network-related qualities that are to be met by the underlying software and hardware that supports execution of the virtual machines. A cloud director or other entity receives requests to place containers in a particular virtual machine based on the container policies and directs placement of the virtual machine based on the policies. The cloud director may migrate and/or reconfigure VMs, virtual machine disk files, and/or virtual network interface controllers to satisfy the container placement policy. After placement, the cloud director may disable migration to maintain the VM in a desired state. | 2016-12-29 |
20160378519 | METHOD AND SYSTEM FOR ANTICIPATING DEMAND FOR A COMPUTATIONAL RESOURCE BY CONTAINERS RUNNING ABOVE GUEST OPERATING SYSTEMS WITHIN A DISTRIBUTED, VIRTUALIZED COMPUTER SYSTEM - The current document is directed to methods and systems for efficiently executing OSL-virtualization containers within the execution environments provided by virtual machines that execute above traditional virtualization layers within large, virtualized, distributed computing systems. The currently disclosed methods and systems anticipate the need for additional virtual machines in order to meet anticipated demands for one or more computational resources by the containers. In addition, the methods and systems provision and launch virtual machines with computational-resource allocations that minimize overhead and computational-resource wastage. In one implementation, computational-resource utilization of virtual machines and containers within the virtualized, distributed computer system is periodically monitored in order to estimate future demand for the computational resource and, when necessary, to launch additional virtual machines to meet the estimated future demand for the computational resource. | 2016-12-29 |
20160378520 | ADJUSTING VIRTUAL MACHINE MIGRATION PLANS BASED ON ALERT CONDITIONS RELATED TO FUTURE MIGRATIONS - Migration of virtual machines within a computing environment is facilitated. A processor obtains a current virtual machine to host mapping in the computing environment, as well as a plurality of future virtual machine to host mappings. A current migration plan to migrate from a current state of the computing environment to another state of the computing environment is also obtained. Based on the current virtual machine to host mapping and one or more future virtual machine to host mappings of the plurality of future virtual machine to host mappings a determination is made that one or more potential alert conditions exist in the current migration plan. The current migration plan and/or one or more future virtual machine to host mappings are displayed. The current migration plan is adjusted to address at least one potential alert condition of the one or more potential alert conditions to improve processing within the computing environment. | 2016-12-29 |
20160378521 | AUTOMATED TEST OPTIMIZATION - As disclosed herein a method, executed by a computer, includes receiving an indication from a test monitoring operation that an automated test has reached an input checkpoint on a first virtual machine, and receiving a plurality of input responses corresponding to the input checkpoint. The method further includes communicating with a hypervisor to request creation of at least one cloned virtual machine, corresponding to the first virtual machine, to provide a plurality of virtual machines. The method further includes providing each input response of the plurality of input responses to a corresponding virtual machine of the plurality of virtual machines to provide a parallel automated test for the plurality of input responses. A computer system, and a computer program product corresponding to the above method are also disclosed herein. | 2016-12-29 |
20160378522 | PROTECTING STATE INFORMATION FOR VIRTUAL MACHINES - A processing system includes a processor that implements registers to define a state of a virtual machine (VM) running on the processor. The processor detects exit conditions of the VM. The processing system also includes a memory element to store contents of the registers in a first data structure that is isolated from a hypervisor of the VM in response to the processor detecting an exit condition. The VM is to selectively expose contents of a subset of the registers to the hypervisor. | 2016-12-29 |
20160378523 | PERFORMANCE OF VIRTUAL MACHINE FAULT TOLERANCE MICRO-CHECKPOINTING USING TRANSACTIONAL MEMORY - Techniques disclosed herein generally describe providing fault tolerance in a virtual machine cluster using hardware transactional memory. According to one embodiment, a micro-checkpointing tool suspends execution of a virtual machine instance on a primary server. The micro-checkpointing tool identifies one or more memory pages associated with the virtual machine instance that were modified since a previous synchronization. The micro-checkpointing tool maps a first task to an operation to be performed on a memory of the primary server, where the first task is to resume the virtual machine instance. The micro-checkpointing tool also maps a second task to an operation to be performed on the memory of the primary server, where the second task is to copy the identified memory pages associated with the virtual machine instance to a secondary server. The first and second tasks are then performed on the memory. | 2016-12-29 |
20160378524 | Optimizing order of migrating virtual computing instances for increased cloud services engagement - The order of migrating virtual computing instances from a private data center to a public cloud is optimized using a TSP solver. The method of migrating a plurality of virtual computing instances that are in communication with each other within a private data center to a public cloud includes the steps of assigning, for each different pair of virtual computing instances, a numerical value that represents an amount of data transmission between the pair over a predetermined period of time, determining a recommended order of migration for the virtual computing instances based on the assigned numerical values, and migrating the virtual computing instances according to the recommended order. | 2016-12-29 |
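The abstract's TSP-style ordering can be approximated with a greedy nearest-neighbor pass over the pairwise traffic values: migrate next the instance with the most traffic to the one just moved, so chatty pairs straddle the private/public boundary for as short a time as possible. This greedy heuristic is a stand-in for whatever TSP solver the application actually uses, and the data layout below is assumed:

```python
def migration_order(vms, traffic):
    """Greedy nearest-neighbor ordering of VM migrations.

    traffic maps frozenset({vm_a, vm_b}) -> bytes exchanged over the
    measurement window (an assumed representation of the pairwise values).
    """
    remaining = set(vms)
    # Start with the VM carrying the most total traffic to the others.
    current = max(remaining,
                  key=lambda v: sum(traffic.get(frozenset((v, o)), 0)
                                    for o in remaining if o != v))
    order = [current]
    remaining.remove(current)
    while remaining:
        # Next, migrate the VM talking most to the one just moved.
        current = max(remaining,
                      key=lambda v: traffic.get(frozenset((current, v)), 0))
        order.append(current)
        remaining.remove(current)
    return order
```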
20160378525 | TECHNOLOGIES FOR APPLICATION MIGRATION USING LIGHTWEIGHT VIRTUALIZATION - Technologies for migrating an application from a source computing device to a destination computing device using lightweight virtualization includes a migration management module on each of the source and destination computing devices. The migration management module of the source computing device is configured to determine information of dependencies of the application to be migrated and perform a checkpointing operation on the application to generate application checkpoint data. The source computing device is further configured to transmit the dependencies and the application checkpoint data to the destination computing device. The migration management module of the destination computing device is configured to generate a container based on the dependency information and restore the application using the application checkpoint data. Other embodiments are described herein and claimed. | 2016-12-29 |
20160378526 | SEAMLESS ADDRESS REASSIGNMENT VIA MULTI-TENANT LINKAGE - The technology described herein manages the deployment of a group of machines from a staged state to a production state, while maintaining both the production and staged machines behind a single virtual internet protocol (VIP) address. The machines may be deployed within one or more data centers. Requests for service addressed to the VIP can be sent by a load balancer to machines within a staged pool or a production pool. The load balancer can evaluate characteristics of the request against a policy to determine whether to communicate the request to a machine in the first or second pool. | 2016-12-29 |
20160378527 | CLONING A VIRTUAL MACHINE FROM A PHYSICAL DEVICE BASED ON A LOCAL SNAPSHOT - Techniques are described for creating a virtual machine clone of a physical host computing device. A hosted hypervisor running within a host operating system on the physical computing device receives a request to boot a virtual machine clone of the device. In response to the request, the hosted hypervisor synthesizes a virtual disk that is comprised of a master boot record of the host computing device, a read-only snapshot obtained from a volume snapshot service of the host operating system and a delta virtual disk for recording changes. The hosted hypervisor then launches the virtual machine clone by attaching the synthesized virtual disk to the virtual machine clone and booting the guest operating system from the master boot record and the snapshot. Any changes made during the use of the virtual machine clone can be automatically propagated back and applied to the physical host device. | 2016-12-29 |
20160378528 | PROPAGATING CHANGES FROM A VIRTUAL MACHINE CLONE TO A PHYSICAL HOST DEVICE - Techniques are described for creating a virtual machine clone of a physical host computing device. A hosted hypervisor running within a host operating system on the physical computing device receives a request to boot a virtual machine clone of the device. In response to the request, the hosted hypervisor synthesizes a virtual disk that is comprised of a master boot record of the host computing device, a read-only snapshot obtained from a volume snapshot service of the host operating system and a delta virtual disk for recording changes. The hosted hypervisor then launches the virtual machine clone by attaching the synthesized virtual disk to the virtual machine clone and booting the guest operating system from the master boot record and the snapshot. Any changes made during the use of the virtual machine clone can be automatically propagated back and applied to the physical host device. | 2016-12-29 |
20160378529 | UTM INTEGRATED HYPERVISOR FOR VIRTUAL MACHINES - Systems and methods for integrating firewall and Unified Threat Management (UTM) features directly within a hypervisor are provided. According to one embodiment, a system is provided that includes multiple virtual machines (VMs) and an integrated hypervisor that manages the VMs. The integrated hypervisor has integrated therein a unified threat management (UTM) layer. In operation, the integrated hypervisor intercepts network traffic directed to or originated by the VMs and provides network security using the UTM layer. | 2016-12-29 |
20160378530 | REMOTE-DIRECT-MEMORY-ACCESS-BASED VIRTUAL MACHINE LIVE MIGRATION - The current document is directed to methods and systems for moving executing virtual machines between host systems in a virtual data center. In described implementations, remote-direct memory access is used for transferring memory contents and, in certain implementations, additional data between the host systems to facilitate live migration of virtual machines. To provide increased efficiency, transfer of the contents of a shared memory page from a source host system to a target host system during migration of a virtual machine is deferred until the relocated virtual machine attempts to write to the shared memory page. | 2016-12-29 |
20160378531 | ADJUSTING VIRTUAL MACHINE MIGRATION PLANS BASED ON ALERT CONDITIONS RELATED TO FUTURE MIGRATIONS - Migration of virtual machines within a computing environment is facilitated. A processor obtains a current virtual machine to host mapping in the computing environment, as well as a plurality of future virtual machine to host mappings. A current migration plan to migrate from a current state of the computing environment to another state of the computing environment is also obtained. Based on the current virtual machine to host mapping and one or more future virtual machine to host mappings of the plurality of future virtual machine to host mappings a determination is made that one or more potential alert conditions exist in the current migration plan. The current migration plan and/or one or more future virtual machine to host mappings are displayed. The current migration plan is adjusted to address at least one potential alert condition of the one or more potential alert conditions to improve processing within the computing environment. | 2016-12-29 |
20160378532 | MANAGING VIRTUAL MACHINE MIGRATION - Systems and methods for the management of migrations of virtual machine instances are provided. A migration manager monitors the resource usage of a virtual machine instance over time in order to create a migration profile. When migration of a virtual machine instance is desired, the migration manager schedules the migration to occur such that the migration conforms to the migration profile. | 2016-12-29 |
20160378533 | COMPUTER AND HYPERVISOR-BASED RESOURCE SCHEDULING METHOD - A simple hypervisor, in addition to a hypervisor, is operated on a computer. A guest OS whose continued operation needs to be guaranteed when a fault occurs in the hypervisor is operated on the simple hypervisor, and the other guest OSs are operated on the hypervisor. The hypervisor performs resource scheduling (determining the resources to be allocated to or deallocated from each guest OS), and the simple hypervisor executes, in place of the hypervisor, the allocation or deallocation of resources to or from the guest OS whose continued operation needs to be guaranteed. | 2016-12-29 |
20160378534 | APPARATUS AND METHOD FOR VIRTUAL DESKTOP SERVICE - Disclosed herein are an apparatus and method for virtual desktop service. The apparatus for virtual desktop service includes a connection broker for performing a task for coordinating a delivery protocol that is used between at least one user terminal that uses virtual desktop service and multiple servers that provide the virtual desktop service, a resource pool for providing software resources including an Operating System (OS) for the virtual desktop service, and virtual machine infrastructure for supporting hardware resources. | 2016-12-29 |
20160378535 | APPARATUS AND METHOD FOR IN-MEMORY-BASED VIRTUAL DESKTOP SERVICE - Disclosed herein are an apparatus and method for in-memory-based virtual desktop service. The apparatus for in-memory-based virtual desktop service includes a connection broker for performing a task for coordinating a delivery protocol that is used between at least one user terminal that uses virtual desktop service and multiple servers that provide the virtual desktop service, a resource pool for providing software resources including an Operating System (OS) for the virtual desktop service; and virtual machine infrastructure for supporting hardware resources, and dynamically allocating software stored in the software resources to the hardware resources. | 2016-12-29 |
20160378536 | CONTROL METHOD AND INFORMATION PROCESSING DEVICE - A control method executed by a computer includes determining whether a virtual machine being operated is a first virtual machine that executes a real-time process or a second virtual machine that executes a batch process; stopping the virtual machine being operated when the process executed by it is finished and it is the second virtual machine; and maintaining operation of the virtual machine being operated when the process executed by it is finished and it is the first virtual machine. | 2016-12-29 |
20160378537 | Method and Apparatus for Controlling Virtual Machine Migration - A method and an apparatus for controlling virtual machine migration is presented, where the method includes obtaining information about an application running on a first virtual machine, where the first virtual machine runs on a first host; determining, according to the information about the application, whether an application associated with the application running on the first virtual machine runs on a second virtual machine, where the second virtual machine is any virtual machine running on a second host; and if no application associated with the application running on the first virtual machine runs on the second virtual machine, migrating the first virtual machine to the second host. The embodiments of the present disclosure can ensure that reliability of an application is not affected during a virtual machine migration process. | 2016-12-29 |
20160378538 | PARTITIONING PROCESSES ACROSS CLUSTERS BY PROCESS TYPE TO OPTIMIZE USE OF CLUSTER SPECIFIC CONFIGURATIONS - A system and method for virtualization and cloud security are disclosed. According to one embodiment, a system comprises a first multi-core processing cluster and a second multi-core processing cluster in communication with a network interface card and software instructions. When the software instructions are executed by the second multi-core processing cluster they cause the second multi-core processing cluster to receive a request for a service, create a new or invoke an existing virtual machine to service the request, and return a desired result indicative of successful completion of the service to the first multi-core processing cluster. | 2016-12-29 |
20160378539 | MIGRATING VIRTUAL MACHINES BASED ON RELATIVE PRIORITY OF VIRTUAL MACHINE IN THE CONTEXT OF A TARGET HYPERVISOR ENVIRONMENT - A method, system and computer program product for selecting a target hypervisor to run a migrated virtual machine. An “effective priority value,” representing the virtual machine's priority with respect to the other virtual machines running on the same hypervisor, is calculated for the virtual machine when it is running on the source hypervisor as well as if it were to run on a target hypervisor for each possible target hypervisor. The target hypervisor associated with the minimum difference in absolute value terms between the virtual machine's effective priority value calculated when it is running on the source hypervisor and its effective priority value calculated if it were to be migrated to run on a target hypervisor is selected to receive the migrating virtual machine. In this manner, the effective priority metric has enabled a target hypervisor to be chosen that most closely matches the priority environment of the source hypervisor. | 2016-12-29 |
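The selection rule in this abstract (minimize the absolute difference between source and target effective priorities) is easy to sketch. The normalization used for "effective priority" below is an assumption; the abstract only says it represents the VM's priority relative to the other VMs on the same hypervisor:

```python
def effective_priority(vm_priority, copriorities):
    """One plausible normalization (assumed, not from the application):
    the VM's share of the total priority weight on a hypervisor."""
    return vm_priority / (vm_priority + sum(copriorities))

def pick_target(source_effective, targets):
    """targets maps hypervisor name -> effective priority the VM would have
    there. Pick the hypervisor minimizing |target - source| so the VM lands
    in the priority environment closest to the one it left."""
    return min(targets, key=lambda h: abs(targets[h] - source_effective))
```

For instance, a VM holding half the priority weight on its source hypervisor is steered to the candidate where it would again hold closest to half.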
20160378540 | MULTITHREADED TRANSACTIONS - Embodiments relate to multithreaded transactions. An aspect includes assigning a same transaction identifier (ID) corresponding to the multithreaded transaction to a plurality of threads of the multithreaded transaction, wherein the plurality of threads execute the multithreaded transaction in parallel. Another aspect includes determining one or more memory areas that are owned by the multithreaded transaction. Another aspect includes receiving a memory access request from a requester that is directed to a memory area that is owned by the transaction. Yet another aspect includes based on determining that the requester has a transaction ID that matches the transaction ID of the multithreaded transaction, performing the memory access request without aborting the multithreaded transaction. | 2016-12-29 |
20160378541 | ADDRESS PROBING FOR TRANSACTION - Embodiments relate to address probing for a transaction. An aspect includes determining, before starting execution of a transaction, a plurality of addresses that will be used by the transaction during execution. Another aspect includes probing each address of the plurality of addresses to determine whether any of the plurality of addresses has an address conflict. Yet another aspect includes, based on determining that none of the plurality of addresses has an address conflict, starting execution of the transaction. | 2016-12-29 |
20160378542 | MULTITHREADED TRANSACTIONS - Embodiments relate to multithreaded transactions. An aspect includes assigning a same transaction identifier (ID) corresponding to the multithreaded transaction to a plurality of threads of the multithreaded transaction, wherein the plurality of threads execute the multithreaded transaction in parallel. Another aspect includes determining one or more memory areas that are owned by the multithreaded transaction. Another aspect includes receiving a memory access request from a requester that is directed to a memory area that is owned by the transaction. Yet another aspect includes based on determining that the requester has a transaction ID that matches the transaction ID of the multithreaded transaction, performing the memory access request without aborting the multithreaded transaction. | 2016-12-29 |
20160378543 | IMPLEMENTING PSEUDO NON-MASKING INTERRUPTS BEHAVIOR USING A PRIORITY INTERRUPT CONTROLLER - A method is provided for handling interrupts in a processor, the interrupts including regular interrupts having a range of priorities and a pseudo non-maskable interrupt (PNMI) that is of a higher priority than any of the regular interrupts. The method includes the steps of obtaining an interrupt vector corresponding to a received interrupt, and if the received interrupt is a regular interrupt, enabling interrupts in the processor so that a PNMI can be received while handling the regular interrupt, executing a regular interrupt handler using the interrupt vector, and disabling interrupts in the processor. On the other hand, if the received interrupt is a PNMI, a PNMI interrupt handler is executed using the interrupt vector as an input thereto. | 2016-12-29 |
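The dispatch the abstract describes can be modeled in a few lines: a regular interrupt handler runs with interrupts re-enabled so a PNMI can preempt it, while a PNMI handler runs with interrupts left masked. The callbacks below stand in for real CPU mask/unmask operations; this is a behavioral sketch, not processor code:

```python
def handle_interrupt(vector, is_pnmi, enable_irqs, disable_irqs, handlers):
    """Dispatch an interrupt vector per the PNMI scheme in the abstract."""
    if is_pnmi:
        # PNMI outranks every regular interrupt: run at once, still masked.
        handlers["pnmi"](vector)
    else:
        enable_irqs()                 # allow a PNMI to arrive mid-handler
        handlers["regular"](vector)
        disable_irqs()                # restore masked state before returning
```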
20160378544 | INTELLECTIVE SWITCHING BETWEEN TASKS - Methods, computer program products, and systems are presented. The methods include, for instance: identifying, by one or more processor, a current task; obtaining, by the one or more processor, an indicator of a commencement of a switching event, where the switching event includes a transition originating from the current task and concluding at a new task; and obtaining, by the one or more processor, behavior analysis data relating to a plurality of past switching events, where each past switching event includes a transition originating from the current task and concluding at a target task. The behavior analysis data includes a timestamp for each past switching event. The method also includes determining, by the one or more processor, based on the behavior analysis data, at least one recommended task, where the at least one recommended task includes at least one target task. | 2016-12-29 |
20160378545 | METHODS AND ARCHITECTURE FOR ENHANCED COMPUTER PERFORMANCE - Methods and systems for enhanced computer performance improve software application execution in a computer system using, for example, a symmetrical multi-processing operating system including OS kernel services in kernel space of main memory, by placing groups of related applications in isolated areas in user space, such as containers, and using a reduced, application-group-specific set of resource management services stored with each application group in user space, rather than the OS kernel facilities in kernel space, to manage shared resources during execution of an application, process or thread from that group. The reduced sets of resource management services may be optimized for the group stored therewith. Execution of each group may be exclusive to a different core of a multi-core processor, and multiple groups may therefore execute separately and simultaneously on the different cores. | 2016-12-29 |
20160378546 | VIRTUAL MACHINE INSTANCE MIGRATION USING A TRIANGLE APPROACH - Techniques for preserving the state of virtual machine instances during a migration from a source location to a target location are described herein. A set of credentials configured to provide access to a storage device by a virtual machine instance at the source location is provided to the virtual machine instance. When the migration from the source location to the target location starts, a second set of credentials configured to provide access to a storage device by a virtual machine instance at the source location is provided to the virtual machine instance. During the migration, a response to an input-output request is provided to one or more of the locations using the set of credentials and based at least in part on the state of the migration. | 2016-12-29 |
20160378547 | PRESERVING STATE DURING VIRTUAL MACHINE INSTANCE MIGRATION - Techniques for preserving the state of virtual machine instances during a migration from a source location to a target location are described herein. A set of credentials configured to provide access to a storage device by a virtual machine instance at the source location is provided to the virtual machine instance. When the migration from the source location to the target location starts, a second set of credentials configured to provide access to a storage device by a virtual machine instance at the source location is provided to the virtual machine instance. During the migration, state information associated with the block storage device is copied from the source location to the target location based on the migration phase. | 2016-12-29 |
20160378548 | HYBRID HETEROGENEOUS HOST SYSTEM, RESOURCE CONFIGURATION METHOD AND TASK SCHEDULING METHOD - A hybrid heterogeneous host system, a resource configuration method and a task scheduling method are disclosed. The system includes: a basic unit, including computing resource nodes, storage resource nodes and input/output (I/O) resource nodes, wherein multiple basic units are connected via a high-speed internetwork; and a software definition unit, configured to: when system resources are increased or reduced, extend the address space of an added hardware resource into the current address space, or delete the address space of a removed hardware resource from the current address space, and update a system resource view. Through the embodiments of the present invention, the extendibility of a tightly coupled shared memory system can be guaranteed, and the design complexity and cost of the multiway system can be greatly reduced, which improves the flexibility and reusability of the system. | 2016-12-29 |
20160378549 | Goal-Oriented, Socially-Connected, Task-Based, Incentivized To-Do List Application System and Method - A system may provide a socially connected application for managing a list of tasks assignable by one or more assignors to be performed by one or more assignees. The system may provide a multi-platform application whose method, when executed on a processor, includes receiving a plurality of tasks to be performed, tracking completion, and tracking points associated with successful completion. Tasks may be assigned and confirmed complete by assignors, and performed by assignees. The system may include a point value system enabling redemption upon completion of tasks having point value or currency value. The point reward system may be integrated with other technologies to facilitate transfer of, for example, fiat or crypto currency, loyalty program points, desired product purchases on behalf of the assignee, or other benefits. The method may include: receiving, by a processor, a plurality of tasks; and managing a point system associated with completion of each task. The system may provide instructional steps for how to perform a task, may provide proof of accomplishment, initiation, or completion of a task by the assignees to the assignors, and may provide a listing of tasks for users to sign up for in return for a specified bounty. The system may receive and/or provide an assessment of the completed task by the assignors. | 2016-12-29 |
20160378550 | OPTIMIZATION OF APPLICATION WORKFLOW IN MOBILE EMBEDDED DEVICES - An aspect includes optimizing an application workflow. The optimizing includes characterizing the application workflow by determining at least one baseline metric related to an operational control knob of an embedded system processor. The application workflow performs a real-time computational task encountered by at least one mobile embedded system of a wirelessly connected cluster of systems supported by a server system. The optimizing of the application workflow further includes performing an optimization operation on the at least one baseline metric of the application workflow while satisfying at least one runtime constraint. An annotated workflow that is the result of performing the optimization operation is output. | 2016-12-29 |
20160378551 | ADAPTIVE HARDWARE ACCELERATION BASED ON RUNTIME POWER EFFICIENCY DETERMINATIONS - Systems and methods may provide for making a power efficiency determination at runtime based on one or more runtime usage notifications and scheduling a workload for execution on a hardware accelerator if the power efficiency determination indicates that execution of the workload on the hardware accelerator will be more efficient than execution of the workload on a host processor. Additionally, the workload may be scheduled for execution on the host processor if the power efficiency determination indicates that execution of the workload on the host processor will be more efficient than execution of the workload on the hardware accelerator. In one example, making the power efficiency determination includes applying one or more configurable rules to at least one of the one or more runtime usage notifications. | 2016-12-29 |
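The abstract above describes a runtime decision: apply configurable rules to runtime usage notifications and schedule the workload on whichever target the rules deem more power-efficient. A minimal sketch of that decision loop, in which the rule shape, the `batch_size` field, and the majority-vote combination are all illustrative assumptions rather than the patent's specified mechanism:

```python
def schedule(notifications, rules):
    """Return 'accelerator' or 'host' by applying configurable rules
    to runtime usage notifications; the majority of per-rule votes
    decides the execution target."""
    votes = [rule(n) for n in notifications for rule in rules]
    accel = sum(1 for v in votes if v == "accelerator")
    host = sum(1 for v in votes if v == "host")
    return "accelerator" if accel > host else "host"

# Example configurable rule (hypothetical): large batches favor the accelerator.
def batch_rule(notification):
    return "accelerator" if notification.get("batch_size", 0) >= 64 else "host"

target = schedule([{"batch_size": 128}, {"batch_size": 256}], [batch_rule])
```

Because the rules are plain callables, an operator could swap in or reconfigure them at runtime without touching the scheduler itself, which matches the "configurable rules" framing of the abstract.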
20160378552 | AUTOMATIC SCALING OF COMPUTING RESOURCES USING AGGREGATED METRICS - A computing resource monitoring service receives a plurality of measurements for a metric associated with an auto-scale group. Each measurement is associated with metadata for the measurement, which specifies attributes for the measurement. The computing resource monitoring service determines, for each measurement and based at least in part on the metadata, a fully qualified metric identifier for the measurement. The service partitions the plurality of measurements into a plurality of logical partitions associated with one or more in-memory datastores. The service transmits the measurements from the plurality of logical partitions to the one or more datastores for storage of the measurements. These measurements are provided to one or more computing resource managers for the auto-scale group to enable automatic scaling of computing resources of the group based at least in part on the measurements. | 2016-12-29 |
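The partitioning step above — derive a fully qualified metric identifier from each measurement's metadata, then route measurements into logical partitions backed by in-memory datastores — can be sketched as consistent hashing on the identifier. The metadata fields and the colon-joined identifier scheme are assumptions for illustration; the patent does not fix a format:

```python
import hashlib

def fq_metric_id(measurement):
    """Build a fully qualified metric identifier from measurement metadata
    (field names here are illustrative)."""
    meta = measurement["metadata"]
    return f'{meta["namespace"]}:{meta["metric"]}:{meta["dimensions"]}'

def partition(measurements, num_partitions):
    """Group measurements into logical partitions by hashing the identifier,
    so every sample of one metric lands in the same in-memory datastore."""
    partitions = {i: [] for i in range(num_partitions)}
    for m in measurements:
        digest = hashlib.sha256(fq_metric_id(m).encode()).hexdigest()
        partitions[int(digest, 16) % num_partitions].append(m)
    return partitions
```

Hashing the full identifier (rather than, say, only the metric name) keeps each time series together while still spreading distinct series across datastores.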
20160378553 | Resource Management Method and Device for Terminal System - The present document relates to a system resource management method and device for a terminal. The method includes: partitioning a memory chip of the terminal into a customized data partition and at least one operating system partition, the customized data partition being used for storing system characteristic resource data, and the operating system partition being used for storing system general function resource data; and respectively managing the resource data of the customized data partition and the at least one operating system partition, and sharing the resource data of the customized data partition in the at least one operating system partition. The present document avoids the influence of system operation and update on customized data, reduces the system maintenance complexity and operating cost of the terminal, and at the same time decreases the download traffic of update data. | 2016-12-29 |
20160378554 | Parallel and Distributed Computing Using Multiple Virtual Machines - Systems and techniques are described for using virtual machines to write parallel and distributed applications. One of the techniques includes receiving a job request, wherein the job request specifies a first job to be performed by a plurality of special purpose virtual machines, wherein the first job includes a plurality of tasks; selecting a parent special purpose virtual machine from a plurality of parent special purpose virtual machines to perform the first job; instantiating a plurality of child special purpose virtual machines from the selected parent special purpose virtual machine; partitioning the plurality of tasks among the plurality of child special purpose virtual machines by assigning one or more of the plurality of tasks to each of the child special purpose virtual machines; and performing the first job by causing each of the child special purpose virtual machines to execute the tasks assigned to it. | 2016-12-29 |
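The instantiate-partition-execute sequence above can be shown with toy objects standing in for the special purpose virtual machines. The class, the round-robin assignment, and the function names are illustrative assumptions, not the patent's implementation:

```python
class SpecialPurposeVM:
    """Toy stand-in for a special purpose virtual machine."""
    def __init__(self, name):
        self.name = name
        self.tasks = []

    def run(self):
        # Execute every task assigned to this child VM.
        return [task() for task in self.tasks]

def perform_job(parent_name, tasks, num_children):
    # Instantiate child VMs from the selected parent VM...
    children = [SpecialPurposeVM(f"{parent_name}-child{i}")
                for i in range(num_children)]
    # ...partition the job's tasks among them round-robin...
    for i, task in enumerate(tasks):
        children[i % num_children].tasks.append(task)
    # ...and perform the job by running each child's assigned tasks.
    return [result for child in children for result in child.run()]
```

In this sketch a "task" is any zero-argument callable, so the same partitioning works regardless of what the tasks compute.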
20160378555 | GENERATING TIMING SEQUENCE FOR ACTIVATING RESOURCES LINKED THROUGH TIME DEPENDENCY RELATIONSHIPS - A method, and associated computer program product and computer system. A Directed Acyclic Graph (DAG) includes nodes and directed edges. Each node represents a unique resource and is either a predefined Recovery Time Objective (RTO) node or an undefined RTO node. Each directed edge directly connects two nodes and represents a time delay between the two nodes. The nodes are topologically sorted to order the nodes in a dependency sequence of ordered nodes. A corrected RTO is computed for each ordered node. | 2016-12-29 |
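The sort-then-correct computation above lends itself to a worked sketch: topologically order the DAG, then take each node's corrected RTO as the maximum of its own predefined RTO (if any) and every predecessor's corrected RTO plus the connecting edge's delay. That propagation rule, and all field names, are assumptions for illustration; the patent abstract does not state a formula:

```python
from collections import deque

def corrected_rtos(nodes, edges, predefined):
    """Topologically sort a DAG (Kahn's algorithm) and propagate corrected
    RTOs: a resource can only meet its objective after each predecessor's
    corrected RTO plus the connecting time delay has elapsed."""
    indegree = {n: 0 for n in nodes}
    successors = {n: [] for n in nodes}
    for u, v, delay in edges:
        successors[u].append((v, delay))
        indegree[v] += 1
    # Undefined-RTO nodes start at 0; predefined-RTO nodes keep their value.
    rto = {n: predefined.get(n, 0) for n in nodes}
    queue = deque(n for n in nodes if indegree[n] == 0)
    while queue:
        u = queue.popleft()
        for v, delay in successors[u]:
            rto[v] = max(rto[v], rto[u] + delay)
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    return rto
```

For a chain db → app → web with delays 5 and 2 and a predefined RTO of 10 on db, the corrected RTOs come out as 10, 15, and 17 — i.e., the activation timing sequence the title refers to.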
20160378556 | METHOD, DEVICE, AND MOBILE TERMINAL FOR CLEANING UP APPLICATION PROCESS - A method and device for clearing an application process, and a mobile terminal are provided. The method for clearing an application process includes: clearing the application process; obtaining a restart interval of the application process; and clearing the application process continuously according to the restart interval, until the restart interval is greater than a predetermined time. | 2016-12-29 |
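The clear-and-observe loop above is simple enough to sketch directly: clear the process, measure how long it takes to restart, and repeat until the restart interval exceeds the predetermined time. The callable parameters stand in for platform-specific operations and are hypothetical:

```python
def clear_until_settled(clear, get_restart_interval, threshold_s, max_rounds=100):
    """Repeatedly clear the application process until the observed restart
    interval exceeds threshold_s (i.e., the process has stopped restarting
    aggressively). Returns the final interval, or None if max_rounds is hit."""
    for _ in range(max_rounds):
        clear()  # platform call to kill/clear the application process
        interval = get_restart_interval()  # seconds until it respawned
        if interval > threshold_s:
            return interval
    return None
```

The `max_rounds` guard is an added safety valve (not in the abstract) so a process that always restarts instantly cannot make the loop spin forever.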
20160378557 | TASK ALLOCATION DETERMINATION APPARATUS, CONTROL METHOD, AND PROGRAM - A distributed system ( | 2016-12-29 |
20160378558 | COORDINATING MULTIPLE COMPONENTS - A system and method including: determining, by a manager module, a need to determine a primary software component of a client device; identifying a first software component and a second software component of the client device; identifying a set of characteristics of the first software component and the second software component; determining that the first software component is the primary software component based on the set of characteristics of each software component, where determining the primary software component further includes comparing the set of characteristics of each software component and selecting the primary software component based on the set of characteristics with a highest priority; and instructing, by the manager module, the one or more processors to cause functionality associated with the second software component to be at least partially suspended. | 2016-12-29 |
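The selection step above — compare each component's characteristic set and pick the one with the highest priority — can be sketched as a keyed maximum. The specific characteristics (foreground status, recency of use) and their ordering are illustrative assumptions; the abstract does not say which characteristics carry the highest priority:

```python
def select_primary(components):
    """Pick the primary software component by comparing characteristic
    sets; the component whose characteristics rank highest wins."""
    def priority(component):
        c = component["characteristics"]
        # Hypothetical ordering: foreground use outranks recency of use.
        return (c.get("foreground", 0), c.get("last_used", 0))
    return max(components, key=priority)["name"]
```

Once the primary component is chosen, the manager module in the abstract would instruct the processors to partially suspend the others' functionality — a step outside the scope of this sketch.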
20160378559 | EXECUTING A FOREIGN PROGRAM ON A PARALLEL COMPUTING SYSTEM - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for a distributed parallel computing system to adapt a foreign program to execute on the distributed parallel computing system. The foreign program is a program written for a computing framework that is different from a computing framework of the parallel computing system. The distributed parallel computing system includes a master node computer and one or more worker node computers. A scheduler executing on the master node computer acts as an intermediary between the foreign program and the parallel computing system. The scheduler negotiates with a resource manager of the parallel computing system to acquire computing resources. The scheduler then allocates the computing resources to the worker node computers as containers. The foreign program executes in the containers on the worker node computers in parallel. | 2016-12-29 |
20160378560 | EXECUTING A FOREIGN PROGRAM ON A PARALLEL COMPUTING SYSTEM - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for a task centric resource scheduling framework. A scheduler executing on a master node computer of a distributed parallel computing system allocates computing resources of the parallel computing system to a program according to one or more policies associated with the program. Each policy includes a set of pre-determined computing resource constraints. Allocation of the computing resources includes performing multiple iterations of negotiation between the scheduler and a resource manager of the parallel computing system. In each iteration, a policy engine of the scheduler submits requests to get more resources from, or requests to release already acquired resources to, the resource manager. The policy engine generates the requests by balancing suggestions provided by analyzer components of the policy engine and a corresponding policy. The policy engine can then determine an allocation plan on how to allocate resources. | 2016-12-29 |
20160378561 | JOB DISTRIBUTION WITHIN A GRID ENVIRONMENT - According to one aspect of the present disclosure, a technique for job distribution within a grid environment includes receiving jobs at a submission cluster for distribution of the jobs to at least one of a plurality of execution clusters where each execution cluster includes one or more execution hosts. Resource attributes are determined corresponding to each execution host of the execution clusters. For each execution cluster, execution hosts are grouped based on the resource attributes of the respective execution hosts. For each grouping of execution hosts, a mega-host is defined for the respective execution cluster where the mega-host for a respective execution cluster defines resource attributes based on the resource attributes of the respective grouped execution hosts. Resource requirements for the jobs are determined, and candidate mega-hosts are identified for the jobs based on the resource attributes of the respective mega-hosts and the resource requirements of the jobs. | 2016-12-29 |
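The mega-host construction above (which also underlies the sibling application 20160378562 below) reduces to grouping an execution cluster's hosts by identical resource attributes and then matching job requirements against the groups. The attribute keys (`cpu`, `mem_gb`) and dictionary shapes are assumptions for illustration:

```python
from collections import defaultdict

def build_mega_hosts(hosts):
    """Group execution hosts with identical resource attributes; each group
    becomes one mega-host carrying its members' shared attributes."""
    groups = defaultdict(list)
    for h in hosts:
        groups[(h["cpu"], h["mem_gb"])].append(h["name"])
    return [{"cpu": cpu, "mem_gb": mem, "hosts": names}
            for (cpu, mem), names in groups.items()]

def candidate_mega_hosts(mega_hosts, job):
    """A mega-host is a candidate for a job if its attributes satisfy
    the job's resource requirements."""
    return [m for m in mega_hosts
            if m["cpu"] >= job["cpu"] and m["mem_gb"] >= job["mem_gb"]]
```

Matching jobs against a handful of mega-hosts instead of every individual execution host is what makes this approach attractive at grid scale: the candidate search cost depends on the number of distinct host configurations, not the number of hosts.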
20160378562 | JOB DISTRIBUTION WITHIN A GRID ENVIRONMENT - According to one aspect of the present disclosure, a technique for job distribution within a grid environment includes receiving jobs at a submission cluster for distribution of the jobs to at least one of a plurality of execution clusters where each execution cluster includes one or more execution hosts. Resource attributes are determined corresponding to each execution host of the execution clusters. For each execution cluster, execution hosts are grouped based on the resource attributes of the respective execution hosts. For each grouping of execution hosts, a mega-host is defined for the respective execution cluster where the mega-host for a respective execution cluster defines resource attributes based on the resource attributes of the respective grouped execution hosts. Resource requirements for the jobs are determined, and candidate mega-hosts are identified for the jobs based on the resource attributes of the respective mega-hosts and the resource requirements of the jobs. | 2016-12-29 |
20160378563 | VIRTUAL RESOURCE SCHEDULING FOR CONTAINERS WITH MIGRATION - A method for scheduling computing resources with container migration includes determining a resource availability for one or more hosts, a resource allocation for one or more virtual machines (VMs), and a resource usage for one or more containers. The method includes identifying the hosts on which VMs and containers can be consolidated based on resource availability. The method also includes calculating a target resource configuration for one or more VMs. The method further includes removing or adding resources to the VMs for which a target resource configuration was calculated to achieve the target resource configuration. The method further includes allocating the one or more VMs on the one or more hosts based on the resource availability of the one or more hosts, and allocating the one or more containers on the one or more VMs based on the resource configuration of each VM and the resource usage of each container. | 2016-12-29 |
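The two-level allocation in this abstract and the next — VMs onto hosts by availability, containers onto VMs by configuration and usage — can be sketched as nested first-fit placement. This is only the placement step; the migration and target-resource-configuration logic the patents describe is omitted, and all field names are assumptions:

```python
def allocate(hosts, vms, containers):
    """First-fit sketch: place each VM on the first host with enough free
    capacity, then each container on the first VM with enough headroom."""
    placement = {"vm": {}, "container": {}}
    free_host = {h["name"]: h["capacity"] for h in hosts}
    free_vm = {}
    for vm in vms:
        for host_name, free in free_host.items():
            if free >= vm["alloc"]:
                placement["vm"][vm["name"]] = host_name
                free_host[host_name] -= vm["alloc"]
                free_vm[vm["name"]] = vm["alloc"]
                break
    for c in containers:
        for vm_name, free in free_vm.items():
            if free >= c["usage"]:
                placement["container"][c["name"]] = vm_name
                free_vm[vm_name] -= c["usage"]
                break
    return placement
```

First-fit is a deliberately naive consolidation heuristic here; a production scheduler would weigh fragmentation and rebalancing, which is precisely where the migration variant (20160378563) and the no-migration variant (20160378564) diverge.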
20160378564 | VIRTUAL RESOURCE SCHEDULING FOR CONTAINERS WITHOUT MIGRATION - A method for scheduling computing resources without container migration includes determining a resource availability for one or more hosts, a resource allocation for one or more virtual machines (VMs), and a resource usage for one or more containers. The method further includes calculating a target resource configuration for one or more VMs, wherein calculating a target resource configuration comprises determining an upper limit of resource demand on a VM from one or more containers allocated on the VM, based at least in part on the resource usage. The method also includes removing or adding resources to each of the one or more VMs for which a target resource configuration was calculated to achieve the target resource configuration for each VM. The method further includes allocating the one or more VMs on the one or more hosts based on the resource availability of the one or more hosts. | 2016-12-29 |
20160378565 | METHOD AND APPARATUS FOR REGULATING PROCESSING CORE LOAD IMBALANCE - Briefly, methods and apparatus to rebalance workloads among processing cores utilizing a hybrid work donation and work stealing technique are disclosed that reduce workload imbalances within processing devices such as, for example, GPUs. In one example, the methods and apparatus allow for workload distribution between a first processing core and a second processing core by providing queue elements from one or more workgroup queues, associated with workgroups executing on the first processing core, to a first donation queue also associated with those workgroups. The method and apparatus also determine whether a queue level of the first donation queue is beyond a threshold and, if so, steal one or more queue elements from a second donation queue associated with workgroups executing on the second processing core. | 2016-12-29 |
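The hybrid scheme above combines two moves: workgroups donate surplus queue elements into a per-core donation queue, and a core whose donation queue drains past the threshold steals from another core's donation queue. A toy model, interpreting "beyond a threshold" as the queue level falling below it (the abstract's wording is ambiguous), with all class and method names as illustrative assumptions:

```python
from collections import deque

class Core:
    """Toy model of one processing core with workgroup queues and a
    donation queue, sketching the hybrid donation/stealing scheme."""
    def __init__(self):
        self.workgroup_queues = []
        self.donation_queue = deque()

    def donate(self):
        # Work donation: workgroup queues push surplus elements (all but
        # one, here) into this core's donation queue for others to take.
        for q in self.workgroup_queues:
            while len(q) > 1:
                self.donation_queue.append(q.pop())

    def steal_from(self, victim, threshold):
        # Work stealing: if our donation queue has drained below the
        # threshold, pull elements from another core's donation queue.
        while len(self.donation_queue) < threshold and victim.donation_queue:
            self.donation_queue.append(victim.donation_queue.popleft())
```

Stealing from donation queues rather than directly from workgroup queues is the point of the hybrid: donors decide what is safe to give away, so thieves never contend with a workgroup over its own queue.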
20160378566 | RUNTIME FUSION OF OPERATORS - The streams environment includes a plurality of operators coupled with processing elements, including a first processing element coupled with a first operator configured with first programming instructions, and a second processing element coupled with a second operator configured with second programming instructions. A workload of the first processing element and a workload of the second processing element are measured. A first threshold for the workload of the first processing element and a second threshold for the workload of the second processing element are determined. The first programming instructions and the second programming instructions are compared to determine whether the first operator and the second operator are susceptible to fusion. The first operator is de-coupled from the first processing element and fused to the second processing element in response to determining that the first threshold is met and that the first operator and the second operator are susceptible to fusion. | 2016-12-29 |
20160378567 | MOBILE DEVICE BASED WORKLOAD DISTRIBUTION - Mobile device based workload distribution may include determining whether a processing requirement for a workload exceeds an operational threshold of an associated mobile device, and detecting, in response to a determination that the processing requirement for the workload exceeds the operational threshold of the associated mobile device, a performance degradation of the associated mobile device. In response to the detected performance degradation of the associated mobile device, the workload may be divided into a plurality of workload portions. A workload portion of the plurality of workload portions may be distributed to a further mobile device for workload processing. Mobile device based workload distribution may further include receiving, from the further mobile device, a processed workload portion related to the distributed workload portion, and assembling the processed workload portion related to the distributed workload portion with a plurality of processed workload portions, for example, for rendering on the associated mobile device. | 2016-12-29 |
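The divide-distribute-assemble flow above can be sketched with a toy model in which a workload is a list of items and the processing requirement is its length; both simplifications, and the even-split heuristic, are assumptions for illustration:

```python
def distribute(workload, threshold, num_peers):
    """If the processing requirement (here, item count) exceeds the
    device's operational threshold, divide the workload into portions:
    one kept locally, the rest handed to peer mobile devices."""
    if len(workload) <= threshold:
        return {"local": workload, "peers": []}
    chunk = -(-len(workload) // (num_peers + 1))  # ceiling division
    portions = [workload[i:i + chunk] for i in range(0, len(workload), chunk)]
    return {"local": portions[0], "peers": portions[1:]}

def assemble(local_result, peer_results):
    """Reassemble processed portions received back from the peers with
    the locally processed portion, e.g. for rendering on the device."""
    combined = list(local_result)
    for result in peer_results:
        combined.extend(result)
    return combined
```

A real implementation would trigger on an observed performance degradation rather than a simple size check, and would size portions to each peer's capability, but the split/merge skeleton is the same.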