13th week of 2016 patent application highlights part 35 |
Patent application number | Title | Published |
20160092192 | SYSTEM AND METHOD FOR AUTOMATIC RELOADING OF SOFTWARE INTO EMBARKED EQUIPMENT - A system and method for automating detection of a change of configuration of an embedded unit and the reloading of the appropriate software configuration into the unit. An automatic reloading system comprises a transmitter associated with the unit to transmit a configuration frame comprising a set of identifiers of the current hardware and/or software configuration of the unit, a detector to capture the configuration frame and to detect a change of hardware and/or software configuration of the unit, and a software loader to automatically reload a set of appropriate software elements into the unit according to the detected change of configuration. | 2016-03-31 |
20160092193 | METHOD AND APPARATUS FOR OPERATING A PROCESSING AND/OR PRODUCTION INSTALLATION - The invention relates to a method for operating a processing and/or production installation having at least two engineering systems, each producing an output file including an operating variable for at least one component of the installation. The first output file of a first engineering system is transmitted from the first engineering system to a second engineering system. A second output file is provided by the second engineering system using the first output file, and the processing and/or production installation is operated using the second output file. The method also provides first origin data describing an origin of the first output file of the first engineering system, and second origin data describing an origin of the second output file from the second engineering system. | 2016-03-31 |
20160092194 | DATACENTER PLATFORM SWITCHING TOOL - A system associated with a data center may include an active first point of deployment (POD) configured to provide a business functionality to users communicating via a network based on a first version of a platform template, a dark second POD configured with a different second version of the platform template, and a switching tool to manage communication to and from the first POD and the second POD. The switching tool may send a received message to a destination based, at least in part, on the source of the received message, where the switching tool may send messages received from user devices to the first POD and may send messages received from testing devices to the second POD. The switching tool may, in response to a switching command, switch operation of the business functionality from the first POD to the second POD. | 2016-03-31 |
20160092195 | POPULATING CONTENT FOR A BASE VERSION OF AN IMAGE - Techniques are described for standardizing configuration levels across a plurality of software deployments. In one embodiment, a standardization management system generates, based on a plurality of software deployments that have different source components, an end state definition that identifies a recommended standard set of source components for the plurality of software deployments. Based on the end state definition that identifies the recommended standard set of source components for the plurality of software deployments, the standardization management system generates an image that includes the standard set of source components for the plurality of software deployments. The image may be used to update software deployments that are part of the plurality of software deployments and do not include the standard set of source components. | 2016-03-31 |
20160092196 | DEPLOYMENT AND ACTIVATION OF UPDATES ON TARGET HOSTS - Techniques are described for managing updates across one or more targets using standard software images. In one embodiment, a first version of a software application is deployed on a set of one or more targets. A software binary is then generated for an updated version of the software application. The software binary for the updated version of the software application is deployed to the set of one or more targets. While the software binary for the updated version of the software application is deployed, the previous version of the software application remains active on a particular target. The updated version of the software application is activated, using the software binary, on the particular target. | 2016-03-31 |
20160092197 | CIRCULAR BUFFER OF SOFTWARE VERSIONS - Techniques are described for managing a plurality of different versions of a software application for set of software deployments. In one embodiment, a reference for a set of one or more target software deployments is maintained on a computing device. The reference is associated with a plurality of versions of a software application. An updated version of the software application is received for the set of one or more target software deployments. In response, a particular version of the software application is replaced with the updated version of the software application. After replacing the particular version of the software application with the updated version of the software application, the reference is associated with the updated version of the software application and not the particular version of the software application. | 2016-03-31 |
20160092198 | SYSTEMS AND METHODS FOR LIGHTING CONTROL - Methods, apparatuses and systems are described herein for harmonizing firmware among lighting units. | 2016-03-31 |
20160092199 | DEVICES, SYSTEMS AND METHODS FOR SEGMENTED DEVICE BEHAVIOR - Device segmentation systems and methods enable devices to be manufactured and placed into the stream of commerce without customization for a particular retailer. The customization occurs at a later point in time when the device is activated after sale. This allows devices to be transferred among retailers without the additional expense of re-customizing the device. | 2016-03-31 |
20160092200 | Remote Update of A Portable Storage Device - This invention relates to a method for remotely updating a portable storage device, comprising the steps of: a) delaying an access to an identification card interface of a portable storage device communicating with a hosting device by acknowledging commands issued by the hosting device with one or more consecutive no-operation procedure bytes, the portable storage device being installed in an identification card socket; b) updating the portable storage device with new data through an updating device connected to an update interface of the portable storage device during the access of said hosting device to the identification card interface; and c) providing the new data to the hosting device through the identification card interface after the new data is updated in the portable storage device. | 2016-03-31 |
20160092201 | Method and Device for Updating Program Data - A method is provided which includes receiving an input instruction of a user, determining a startup mode of a terminal device according to the input instruction, and sending, to the terminal device, a control instruction that is used to control the terminal device to enter the determined startup mode. A host device determines that the startup mode of the terminal device is a forcibly loading startup mode, and the terminal device performs data updating according to program data acquired from a storage card, so as to implement program data updating of the terminal device by means of forcibly loading, without updating over a network or dismantling a housing. | 2016-03-31 |
20160092202 | Live Operating System Update Mechanisms - Mechanisms are provided for performing a live update of an operating system. The mechanisms receive an update to an operating system and clone a root volume group associated with an operating system instance executing in a first logical partition of the data processing system to generate a cloned root volume group. The mechanisms apply the update to the cloned root volume group to generate an updated and cloned root volume group and boot a second logical partition of the data processing system using the updated and cloned root volume group. Moreover, the mechanisms mirror the original root volume group associated with an operating system instance executing in a first logical partition and import the mirrored root volume group into the second logical partition. The mechanisms migrate application instances to the second logical partition by restarting the application instances in the second logical partition using the mirrored root volume group. | 2016-03-31 |
20160092203 | Live Operating System Update Mechanisms - Mechanisms are provided for performing a live update of an operating system. The mechanisms receive an update to an operating system and clone a root volume group associated with an operating system instance executing in a first logical partition of the data processing system to generate a cloned root volume group. The mechanisms apply the update to the cloned root volume group to generate an updated and cloned root volume group and boot a second logical partition of the data processing system using the updated and cloned root volume group. Moreover, the mechanisms mirror the original root volume group associated with an operating system instance executing in a first logical partition and import the mirrored root volume group into the second logical partition. The mechanisms migrate application instances to the second logical partition by restarting the application instances in the second logical partition using the mirrored root volume group. | 2016-03-31 |
20160092204 | LIVE UPDATING OF A SHARED PLUGIN REGISTRY WITH NO SERVICE LOSS FOR ACTIVE USERS - Embodiments can enable the uploading of a newer version of a plugin package to a plugin service without affecting an existing user session that is using an older version of the plugin package. When a new user session begins, the plugin service can monitor one or more plugin packages and the versions used during the new user session. Throughout the user session, the plugin service continues to make the plugin packages available to the user regardless of newer versions being uploaded to the plugin service. In the meantime, multiple clients with different user sessions may be using different and possibly newer versions of the plugin packages at the same time. The plugin service can remove an older version of a plugin package when it determines that there are no longer any active user sessions utilizing the older version of the plugin package. | 2016-03-31 |
20160092205 | SYSTEM AND METHOD FOR SUPPORTING DYNAMIC DEPLOYMENT OF EXECUTABLE CODE IN A DISTRIBUTED COMPUTING ENVIRONMENT - A system and method supports dynamic deployment of executable code in a distributed computing environment. A server node in the distributed computing environment can receive a class definition from a client to execute, and generate and load into memory an instance of a class using said class definition without restarting or redeploying the server node. The class definition can define a new or updated class otherwise unavailable to the server node. Classes are identified with unique class identities, which enables determination of whether a class is new or updated. The class identity can be used to determine the need for transmitting a class definition to a server node and also to ensure that a correct version of a class is implemented. In a particular case the new or updated class definition implements a lambda expression. | 2016-03-31 |
20160092206 | MANAGING EXECUTABLE FILES - Executable files are managed. A determination is made as to whether in a second executable file there exists a function that is the same as a function called in a first executable file. A data package is generated on a portion other than the function in the first executable file and the second executable file, and the function is stored in relation to the data package. The data package includes a first address of the function in the first executable file and a second address of the function in the second executable file. | 2016-03-31 |
20160092207 | DATACENTER CONFIGURATION MANAGEMENT TOOL - A system may include a first point of deployment (POD) and a second POD at a data center, where each of the first POD and the second POD may be configured to support a first version of a platform template. The first POD may include a first set of servers based on a first hardware platform, and the second POD may include a second set of servers based on a second hardware platform. A configuration manager may be configured to determine a difference between the first hardware platform and the second hardware platform and generate a second version of the platform template based on the difference between the first hardware platform and the second hardware platform. In some cases, the second version of the platform template may be installed on the second POD as part of an upgrade process. | 2016-03-31 |
20160092208 | MANAGING ACCESS TO RESOURCE VERSIONS IN SHARED COMPUTING ENVIRONMENTS - The disclosed embodiments provide a system that manages access to resource versions in a shared computing environment. Routing data including locations of the resource versions is used to route a request to a resource version in the shared computing environment. For an application that is implemented by a set of resources, the routing data and the request are used to execute the application using an alternative version of a resource that is under test and default versions of other resources that are not under test. | 2016-03-31 |
20160092209 | VERSION MANAGEMENT OF IMAGES - Techniques are described for standardizing software configuration levels across targets. In one embodiment, a subscription is maintained that identifies a group of targets that subscribe to a particular image, where the particular image represents a standard to follow for targets that belong to the group of targets. The particular image may further include a first image version having a first set of source components. In response to receiving an update to the particular image, a second image version is generated for the particular image, where the second image version includes a second set of source components that are different than the first set of source components. Two or more targets in the group of targets that subscribe to the particular image may be updated based on the second image version. | 2016-03-31 |
20160092210 | CREATION OF A SOFTWARE CONFIGURATION SIGNATURE FOR SOFTWARE - Techniques are described for generating configuration level signatures. In an embodiment, one or more computing devices are used to generate a first signature for a particular software deployment that is configured at a particular configuration level. The first signature is generated based on digest information that identifies a plurality of deployed source components for the particular software deployment. Mapping data is stored that maps the first signature to the digest information identifying the plurality of deployed source components for the particular software deployment. A second signature is generated based on information that defines target source components for a set of software deployments that includes the particular software deployment. The first signature is compared with the second signature to determine whether the deployed source components satisfy the target source components. An indication of whether the deployed source components satisfy the target source components is stored. | 2016-03-31 |
20160092211 | VISUALIZATIONS OF INTER-ENTITY CALLS - The disclosure generally describes computer-implemented methods, software, and systems, including methods for generating visualizations. On the client side, a user request is received for an inter-entity call visualization. Code analysis data is accessed. A visualization model is built. The visualization is shown. User inputs are received for interacting with the visualization. The visualization is updated based on the received user inputs. On the server side, a request is received for code analysis data. The requested data is collected, including running analyzers for any available data. The requested data is sent. The code analysis data can be used for purposes other than visualizations. | 2016-03-31 |
20160092212 | DYNAMIC ISSUE MASKS FOR PROCESSOR HANG PREVENTION - Embodiments include issuing dynamic issue masks for processor hang prevention. Aspects include storing an instruction in an issue queue for execution by an execution unit, the instruction including a default issue mask. Aspects further include determining whether the instruction in the issue queue is likely to be rescinded by the execution unit. Based on determining that the instruction is not likely to be rescinded by the execution unit, aspects include issuing the instruction to the execution unit with the default issue mask. Based on determining that the instruction is likely to be rescinded by the execution unit, aspects include issuing the instruction to the execution unit with a likely to be rescinded issue mask. | 2016-03-31 |
20160092213 | COMPUTER SYSTEM INCLUDING RECONFIGURABLE ARITHMETIC DEVICE WITH NETWORK OF PROCESSOR ELEMENTS - A reconfigurable arithmetic device includes a plurality of processor elements configured to perform first arithmetic processes corresponding to a first type of instruction and second arithmetic processes corresponding to a second type of instruction, a random-access memory (RAM), and a control unit. The first type of instruction is written into the RAM at a first address, data for the first type of instruction is written into the RAM at a second address, and data for the second type of instruction is written into the RAM at a third address. When the first type of instruction is written at the first address, the control unit decodes the first type of instruction and configures the processor elements to perform the first arithmetic processes. When data for the second type of instruction is written at the third address, the control unit configures the processor elements to perform the second arithmetic processes. | 2016-03-31 |
20160092214 | OPTIMIZING GROUPING OF INSTRUCTIONS - Embodiments include optimizing the grouping of instructions in a microprocessor. Aspects include receiving a first clump of instructions from a streaming buffer, pre-decoding each of instructions for select information and sending the instructions to an instruction queue. Aspects further include storing initial grouping information for the instructions in a local register, wherein the initial grouping information is based on the select information. Aspects further include updating the initial group information stored in the local register when additional pre-decode information becomes available and grouping the instructions that are ready to be dispatched into a dispatch group based on the grouping information stored in the local register. Aspects further include dispatching the dispatch group to an issue unit. | 2016-03-31 |
20160092215 | INSTRUCTION AND LOGIC FOR MULTIPLIER SELECTORS FOR MERGING MATH FUNCTIONS - A processor includes a front end with logic to identify a multiplier, multiplicand, and mathematical mode based upon an instruction. The processor also includes a multiplier circuit to apply Booth encoding to multiply the multiplier and multiplicand. The multiplier circuit includes circuitry to determine leftmost and rightmost partial products of multiplying the multiplier and multiplicand using Booth encoding. The circuitry includes a most significant bit (MSB) array and least significant bit (LSB) array corresponding to the multiplier. The multiplier circuit also includes logic to selectively enable selectors of the circuitry to find partial products based upon the mathematical mode of the instruction. | 2016-03-31 |
20160092216 | OPTIMIZING GROUPING OF INSTRUCTIONS - Embodiments include optimizing the grouping of instructions in a microprocessor. Aspects include receiving a first clump of instructions from a streaming buffer, pre-decoding each of instructions for select information and sending the instructions to an instruction queue. Aspects further include storing initial grouping information for the instructions in a local register, wherein the initial grouping information is based on the select information. Aspects further include updating the initial group information stored in the local register when additional pre-decode information becomes available and grouping the instructions that are ready to be dispatched into a dispatch group based on the grouping information stored in the local register. Aspects further include dispatching the dispatch group to an issue unit. | 2016-03-31 |
20160092217 | Compare Break Instructions - In an embodiment, a processor may implement a vector instruction set including one or more compare break instructions. The compare break instruction may take a pair of operands which may be compared to determine loop termination conditions, and may output a predicate vector indicating which vector elements correspond to loop iterations that are executed and which vector elements correspond to loop iterations that are not executed. The predicate vector may serve as a predicate to vector instructions forming the body of the loop, correctly executing the specified number of iterations. The compare break instruction may be coded to check for a variety of conditions (e.g. equal, not equal, greater than, less than, etc.). In an embodiment, the compare break instruction may take a predicate operand as well, which may be combined with the predicate vector produced by the comparison operations to produce the output vector. | 2016-03-31 |
20160092218 | Conditional Stop Instruction with Accurate Dependency Detection - In an embodiment, a processor may implement a conditional stop instruction that includes a first predicate vector identifying the active elements of the instruction, a second predicate vector indicating true and false results for a conditional expression within a loop that is being vectorized, and a source operand specifying which combinations in the true and false results may indicate a dependency. The conditional stop instruction may generate a vector result indicating vector elements that have a dependency on a prior vector element, as well as an identification of which element position the dependency is on. More particularly, dependencies may be detected only on active elements as indicated by the first predicate vector. False dependencies that may occur due to inactive elements may be avoided, which may improve performance and/or provide for correct functional operation. | 2016-03-31 |
20160092219 | ACCELERATING CONSTANT VALUE GENERATION USING A COMPUTED CONSTANTS TABLE, AND RELATED CIRCUITS, METHODS, AND COMPUTER-READABLE MEDIA - Accelerating constant value generation using a computed constants table, and related circuits, methods, and computer-readable media are disclosed. In one aspect, an instruction processing circuit provides a computed constants table containing one or more entries, each comprising an address and a constant value. The instruction processing circuit is configured to detect, in an instruction stream, a constant-generating instruction sequence, and to determine whether an address of the constant-generating instruction sequence is present in an entry of the computed constants table. If the address of the constant-generating instruction sequence is present in the entry of the computed constants table, the instruction processing circuit provides the constant value stored in the entry for execution of at least one instruction dependent on the constant-generating instruction sequence. In this manner, the generation of constant values by a constant-generating instruction sequence may be accelerated, allowing dependent instructions to use the constant values with zero-cycle latency. | 2016-03-31 |
20160092220 | Instruction and Logic for Machine Check Interrupt Management - A processor includes a front end including a decoder to decode an instruction, a scheduler to assign execution of the instruction to a core, and a core to execute the instruction. The instruction specifies that interrupts such as corrected machine check interrupts are to be selectively suppressed. The processor further includes an error handling unit including logic to determine that an interrupt caused by an error is to be created and that an error consumer has requested interrupt notification. The error handling unit further includes logic to, based on the instruction specifying that interrupts are to be selectively suppressed, send the interrupt to a producer that issued the instruction rather than the error consumer. | 2016-03-31 |
20160092221 | DEPENDENCY-PREDICTION OF INSTRUCTIONS - Systems and methods for dependency-prediction include executing instructions in an instruction pipeline of a processor and detecting a conditionality-imposing control instruction, such as an If-Then (IT) instruction, which imposes dependent behavior on a conditionality block size of one or more dependent instructions. Prior to executing a first instruction, a dependency-prediction is made to determine if the first instruction is a dependent instruction of the conditionality-imposing control instruction, based on the conditionality block size and one or more parameters of the instruction pipeline. The first instruction is executed based on the dependency-prediction. When the first instruction is dependency-mispredicted, an associated dependency-misprediction penalty is mitigated. If the first instruction is a branch instruction, the mitigation involves training a branch prediction tracking mechanism to correctly dependency-predict future occurrences of the first instruction. | 2016-03-31 |
20160092222 | INSTRUCTION AND LOGIC FOR BULK REGISTER RECLAMATION - A processor includes a front end, a decoder, an allocator, and a retirement unit. The decoder includes logic to identify an end-of-live-range (EOLR) indicator. The EOLR indicator specifies an architectural register and a location in code for which the architectural register is unused. The allocator includes logic to scan for a mapping of the architectural register to a physical register, based upon the EOLR indicator. The allocator also includes logic to generate a request to disassociate the architectural register from the physical register. The retirement unit includes logic to disassociate the architectural register from the physical register. | 2016-03-31 |
20160092223 | PERSISTENT STORE FENCE PROCESSORS, METHODS, SYSTEMS, AND INSTRUCTIONS - A processor of an aspect includes a decode unit to decode a persistent store fence instruction. The processor also includes a memory subsystem module coupled with the decode unit. The memory subsystem module, in response to the persistent store fence instruction, is to ensure that a given data corresponding to the persistent store fence instruction is stored persistently in a persistent storage before data of all subsequent store instructions is stored persistently in the persistent storage. The subsequent store instructions occur after the persistent store fence instruction in original program order. Other processors, methods, systems, and articles of manufacture are also disclosed. | 2016-03-31 |
20160092224 | CHECKPOINTS FOR A SIMULTANEOUS MULTITHREADING PROCESSOR - According to an aspect, a system for checkpoint acceleration in a simultaneous multithreading (SMT) processor includes circuitry of a processor core of the SMT processor to execute one or more threads in a processing pipeline. The processing pipeline includes a completion stage followed by a checkpoint stage. The system also includes a checkpoint accelerator disposed between the completion stage and the checkpoint stage. The checkpoint accelerator includes a backlog queue that stores a list of next-to-complete groups of instructions from the one or more threads anticipated to complete in an upcoming cycle. The checkpoint accelerator also includes a selection control that drives one or more of the next-to-complete groups of instructions from the backlog queue to the checkpoint stage based on one or more completion indicators that identify which of the next-to-complete groups of instructions actually completed. | 2016-03-31 |
20160092225 | CHECKPOINTS FOR A SIMULTANEOUS MULTITHREADING PROCESSOR - According to an aspect, a method of checkpoint acceleration in a simultaneous multithreading (SMT) processor includes executing one or more threads in a processing pipeline of a processor core of the SMT processor, where the processing pipeline includes a completion stage followed by a checkpoint stage. A list of next-to-complete groups of instructions from the one or more threads anticipated to complete in an upcoming cycle is stored in a backlog queue. One or more of the next-to-complete groups of instructions are driven from the backlog queue to the checkpoint stage based on one or more completion indicators identifying which of the next-to-complete groups of instructions actually completed. | 2016-03-31 |
20160092226 | Systems, Apparatuses, and Methods for Zeroing of Bits in a Data Element - Embodiments of systems, methods and apparatuses for executing a VPBZHI instruction are described. The execution of a VPBZHI instruction causes, on a per-data-element basis of a second source, a zeroing of bits higher (more significant) than a starting point in the data element. The starting point is defined by the contents of a data element in a first source. The resultant data elements are stored in a corresponding data element position of a destination. | 2016-03-31 |
20160092227 | Robust and High Performance Instructions for System Call - Robust system call and system return instructions are executed by a processor to transfer control between a requester and an operating system kernel. The processor includes execution circuitry and registers that store pointers to data structures in memory. The execution circuitry receives a system call instruction from a requester to transfer control from a first privilege level of the requester to a second privilege level of an operating system kernel. In response, the execution circuitry swaps the data structures that are pointed to by the registers between the requester and the operating system kernel in one atomic transition. | 2016-03-31 |
20160092228 | SINGLE OPERATION ARRAY INDEX COMPUTATION - Embodiments are directed to a processor for adjusting an index, wherein the index identifies a location of an element within an array. The processor includes a shift circuit configured to perform a single operation that adjusts a first parameter of the index to match a parameter of an array address. The single operation further adjusts a second parameter of the index to match a parameter of an array element. | 2016-03-31 |
20160092229 | SYSTEMS AND METHODS FOR MANAGING RETURN STACKS IN A MULTI-THREADED DATA PROCESSING SYSTEM - A processor is configured to execute instructions of a first thread and a second thread. A first return stack corresponds to the first thread, and a second return stack corresponds to the second thread. Control circuitry pushes a return address to the first return stack in response to a branch-to-subroutine instruction in the first thread. If the first return stack is full and borrowing is not enabled by a borrow enable indicator, the control circuitry removes the oldest return address from the first return stack and does not store the removed return address in the second return stack. If the first return stack is full, borrowing is enabled by the borrow enable indicator, and the second thread is not enabled, the control circuitry removes the oldest return address from the first return stack and pushes the removed return address onto the second return stack. | 2016-03-31 |
20160092230 | LOOP PREDICTOR-DIRECTED LOOP BUFFER - A loop predictor trains a branch instruction to determine a trained loop count of a loop. When the loop fits in an instruction buffer, the processor stops fetching from an instruction cache, sends the loop instructions to an execution engine from the buffer without fetching from the cache, maintains a loop pop count of times the branch is sent to the execution engine from the buffer, and predicts the branch instruction is taken when the loop pop count is less than the trained loop count and otherwise predicts not taken. | 2016-03-31 |
20160092231 | INDEPENDENT MAPPING OF THREADS - Embodiments of the present invention provide systems and methods for mapping the architected state of one or more threads to a set of distributed physical register files to enable independent execution of one or more threads in a multiple slice processor. In one embodiment, a system is disclosed including a plurality of dispatch queues which receive instructions from one or more threads and an even number of parallel execution slices, each parallel execution slice containing a register file. A routing network directs an output from the dispatch queues to the parallel execution slices and the parallel execution slices independently execute the one or more threads. | 2016-03-31 |
20160092232 | PROPAGATING CONSTANT VALUES USING A COMPUTED CONSTANTS TABLE, AND RELATED APPARATUSES AND METHODS - Propagating constant values using a computed constants table, and related apparatuses and methods are disclosed. In one aspect, an apparatus comprises an instruction processing circuit configured to provide a computed constants table containing one or more entries. Each entry of the computed constants table comprises an attribute and a computed constant value. The instruction processing circuit is configured to detect a deterministic instruction in an instruction stream. Upon detecting the deterministic instruction, the instruction processing circuit determines whether an attribute of the deterministic instruction matches an entry of the computed constants table. If so, the instruction processing circuit provides the computed constant value stored in the entry to at least one dependent instruction. In this manner, a computed constant value may be propagated between instructions without requiring the deterministic instruction to be re-executed. | 2016-03-31 |
20160092233 | DYNAMIC ISSUE MASKS FOR PROCESSOR HANG PREVENTION - Embodiments include issuing dynamic issue masks for processor hang prevention. Aspects include storing an instruction in an issue queue for execution by an execution unit, the instruction including a default issue mask. Aspects further include determining whether the instruction in the issue queue is likely to be rescinded by the execution unit. Based on determining that the instruction is not likely to be rescinded by the execution unit, aspects include issuing the instruction to the execution unit with the default issue mask. Based on determining that the instruction is likely to be rescinded by the execution unit, aspects include issuing the instruction to the execution unit with a likely to be rescinded issue mask. | 2016-03-31 |
20160092234 | METHOD AND APPARATUS FOR SPECULATIVE VECTORIZATION - An apparatus and method for speculative vectorization. For example, one embodiment of a processor comprises: a queue comprising a set of locations for storing addresses associated with vectorized memory access instructions; and execution logic to execute a first vectorized memory access instruction to access the queue and to compare a new address associated with the first vectorized memory access instruction with existing addresses stored within a specified range of locations within the queue to detect whether a conflict exists, the existing addresses having been previously stored responsive to one or more prior vectorized memory access instructions. | 2016-03-31 |
20160092235 | METHOD AND APPARATUS FOR IMPROVED THREAD SELECTION - An apparatus and method are described for improved thread selection. For example, one embodiment of a processor comprises: first logic to maintain a history table comprising a plurality of entries, each entry in the table associated with an instruction and including history data indicating prior hits and/or misses to a cache level and/or a translation lookaside buffer (TLB) for that instruction; and second logic to select a particular thread for execution at a particular processor pipeline stage based on the history data. | 2016-03-31 |
20160092236 | MECHANISM FOR ALLOWING SPECULATIVE EXECUTION OF LOADS BEYOND A WAIT FOR EVENT INSTRUCTION - A processor includes a mechanism that checks for and flushes only speculative loads and any respective dependent instructions that are younger than an executed wait for event (WEV) instruction, and which also match an address of a store instruction that has been determined to have been executed by a different processor prior to execution of the paired SEV instruction by the different processor. The mechanism may allow speculative loads that do not match the address of any store instruction that has been determined to have been executed by a different processor prior to execution of the paired SEV instruction by the different processor. | 2016-03-31 |
20160092237 | Variable Length Execution Pipeline - In an aspect, a pipelined execution resource can produce an intermediate result for use in an iterative approximation algorithm in an odd number of clock cycles. The pipelined execution resource executes SIMD requests by staggering commencement of execution of the requests from a SIMD instruction. When executing one or more operations for a SIMD iterative approximation algorithm, and an operation for another SIMD iterative approximation algorithm is ready to begin execution, control logic causes intermediate results completed by the pipelined execution resource to pass through a wait state, before being used in a subsequent computation. This wait state presents two open scheduling cycles in which both parts of the next SIMD instruction can begin execution. Although the wait state increases latency to complete an in-progress algorithm, a total throughput of execution on the pipeline increases. | 2016-03-31 |
20160092238 | COPROCESSOR FOR OUT-OF-ORDER LOADS - Systems and methods for implementing certain load instructions, such as vector load instructions by cooperation of a main processor and a coprocessor. The load instructions which are identified by the main processor for offloading to the coprocessor are committed in the main processor without receiving corresponding load data. Post-commit, the load instructions are processed in the coprocessor, such that latencies incurred in fetching the load data are hidden from the main processor. By implementing an out-of-order load data buffer associated with an in-order instruction buffer, the coprocessor is also configured to avoid stalls due to long latencies which may be involved in fetching the load data from levels of memory hierarchy, such as L2, L3, L4 caches, main memory, etc. | 2016-03-31 |
20160092239 | METHOD AND APPARATUS FOR UNSTRUCTURED CONTROL FLOW FOR SIMD EXECUTION ENGINE - An apparatus and method for a SIMD unstructured branching. For example, one embodiment of a processor comprises: an execution unit having a plurality of channels to execute instructions; and a branch unit to process unstructured control flow instructions and to maintain a per channel count value for each channel, the branch unit to store instruction pointer tags for the unstructured control flow instructions in a memory and identify the instruction pointer tags using tag addresses, the branch unit to further enable and disable the channels based at least on the per channel count value. | 2016-03-31 |
20160092240 | METHOD AND APPARATUS FOR SIMD STRUCTURED BRANCHING - An apparatus and method for a SIMD structured branching. For example, one embodiment of a processor comprises: an execution unit having a plurality of channels to execute instructions; and a branch unit to process control flow instructions and to maintain a per channel count for each channel and a control instruction count for the control flow instructions, the branch unit to enable and disable the channels based at least on the per channel count. | 2016-03-31 |
20160092241 | SINGLE INSTRUCTION ARRAY INDEX COMPUTATION - Embodiments are directed to a method of adjusting an index, wherein the index identifies a location of an element within an array. The method includes executing, by a computer, a single instruction that adjusts a first parameter of the index to match a parameter of an array address. The single instruction further adjusts a second parameter of the index to match a parameter of the array element. The adjustment of the first parameter includes a sign extension. | 2016-03-31 |
20160092242 | Fast Start - Aspects of the disclosure relate to methods, systems, and apparatuses of a fast start system. A computing device may automatically restart itself based on a restart schedule from a fast start network server. The computing device may initiate a booting sequence and retrieve login credentials of a user stored on the computing device. Using the stored login credentials, the computing device can log the user in to the system. In response to successfully logging in the user, the computing device may initialize at least one startup application on the computing device. Once the user is successfully logged in, the computing device may automatically lock itself to the user to prevent any unauthorized use of the workstation. | 2016-03-31 |
20160092243 | HARDWARE SECURITY MODULE ACCESS MANAGEMENT IN A CLOUD COMPUTING ENVIRONMENT - Trusted firmware on a host server is used for managing access to a hardware security module (HSM) connected to the host server. The HSM stores confidential information associated with an operating system. As part of access management, the firmware detects a boot device identifier associated with a boot device configured to boot the operating system on the host server. The firmware then receives a second boot device identifier from the HSM. The boot device identifier and the second boot device identifier are then compared by the firmware. Based on the comparison, the firmware determines that the boot device identifier matches with the second boot device identifier. Based on this determination, the firmware grants the operating system access to the HSM. | 2016-03-31 |
20160092244 | CONFIGURATION GRADING AND PRIORITIZATION DURING REBOOT - Various exemplary embodiments relate to a method of configuring a device in a network, the method including loading one or more system configuration commands into an active memory; processing the one or more system configuration commands; loading one or more blocks of customer commands into the active memory; and processing each of the one or more blocks of customer commands, wherein each block is processed as soon as it is loaded into the active memory. | 2016-03-31 |
20160092245 | DATA RICH TOOLTIP FOR FAVORITE ITEMS - Embodiments of the present invention can comprise a data rich tooltip or other graphical or textual preview of a selected object of interest. This preview can provide the user with additional information about the object so that the user does not need to waste time opening multiple objects or records in order to find the desired one. Instead, the summary view can provide enough information that the user does not need to open the record; the tooltip has the information that they want to know about the item. According to one embodiment, the summary view can be generated for each record based on a pre-configured template. Content presented in the summary view can be defined by the template and may be text about the object (e.g., object field values), about other related objects, images, or other information that would help the user find the desired object without opening it. | 2016-03-31 |
20160092246 | REVERSE DEPENDENCY INJECTION IN A SYSTEM WITH DYNAMIC CODE LOADING - Embodiments are directed to utilizing reverse dependency injection for managing bootstrapping of applications in web browser and mobile environments. By using reverse dependency injection, embodiments enable a component to declare that it is a “dependency of” another component in a visual analyzer application. This ensures that the dependencies are loaded before the other component is loaded, thereby minimizing delays when a user starts up an application. In some embodiments, information identifying a plugin to be loaded can be received. Embodiments can determine configuration information for the plugin where the configuration information includes both forward and reverse dependencies. Embodiments may generate, based on the configuration information, a data structure that represents the forward and reverse dependencies. Embodiments may analyze the data structure to determine an ordered list of loadings. Some embodiments may load the individual components per the ordered list of loadings and indicate that the plugin is ready for execution. | 2016-03-31 |
20160092247 | SELECTING AN OPERATOR GRAPH CONFIGURATION FOR A STREAM-BASED COMPUTING APPLICATION - First and second simulated processing of a stream-based computing application using respective first and second simulation conditions may be performed. The first and second simulation conditions may specify first and second operator graph configurations. Each simulated processing may include inputting a stream of test tuples to the stream-based computing application, which may operate on one or more compute nodes. Each compute node may have one or more computer processors and a memory to store one or more processing elements. Each simulated processing may be monitored to determine one or more performance metrics. The first and second simulated processings may be sorted based on a first performance metric to identify a simulated processing having a first rank. An operator graph configuration associated with the simulated processing having the first rank may be selected if the first performance metric for the simulated processing having the first rank is within a processing constraint. | 2016-03-31 |
20160092248 | HOOK FRAMEWORK - An application process can be executed based on an initialization instruction, where the application process includes instructions associated with a hook framework. A virtual machine configured to load the hook framework on the virtual machine based on instructions included in the application process can be initiated and the instructions associated with the hook framework can be executed upon initiation of the virtual machine to insert a hook on the virtual machine. A nascent process configured to initiate an additional virtual machine can be initiated based on a request to load an application, where the additional virtual machine is hooked via the hook inserted on the virtual machine. | 2016-03-31 |
20160092249 | ABSTRACTION OF BACKTRACKING - A computer-implemented method, computer program product, and computing system is provided for providing a framework for logically representing the discretization of logic for a backtracking algorithm. In an implementation, a method may include defining a validation class representing a validation logic to be tested. A processable class may be defined representing a backtracking logic flow to be implemented. The processable class may be associated with the validation class. One or more candidate options may be evaluated based upon, at least in part, the validation logic and the backtracking logic flow. | 2016-03-31 |
20160092250 | DYNAMIC CODE DEPLOYMENT AND VERSIONING - A system for providing dynamic code deployment and versioning is provided. The system may be configured to receive a first request to execute a newer program code on a virtual compute system, determine, based on the first request, that the newer program code is a newer version of an older program code loaded onto an existing container on a virtual machine instance on the virtual compute system, initiate a download of the newer program code onto a second container on the same virtual machine instance, and cause the first request to be processed with the older program code in the existing container. | 2016-03-31 |
20160092251 | PROGRAMMATIC EVENT DETECTION AND MESSAGE GENERATION FOR REQUESTS TO EXECUTE PROGRAM CODE - A service manages a plurality of virtual machine instances for low latency execution of user codes. The service can provide the capability to execute user code in response to events triggered on an auxiliary service, to provide implicit and automatic rate matching and scaling between events being triggered on the auxiliary service and the corresponding execution of user code on various virtual machine instances. An auxiliary service may be configured as an event triggering service to detect events and generate event messages for execution of the user codes. The service can request, receive, or poll for event messages directly from the auxiliary service or via an intermediary message service. Event messages can be rapidly converted to requests to execute user code on the service. The time from processing the event message to initiating a request to begin code execution is less than a predetermined duration, for example, 100 ms. | 2016-03-31 |
20160092252 | THREADING AS A SERVICE - A service manages a plurality of virtual machine instances for low latency execution of user codes. The plurality of virtual machine instances can be configured based on a predetermined set of configurations. One or more containers may be created within the virtual machine instances. In response to a request to execute user code, the service identifies a pre-configured virtual machine instance suitable for executing the user code. The service can allocate the identified virtual machine instance to the user, create a new container within an instance already allocated to the user, or re-use a container already created for execution of the user code. When the user code has not been activated for a time-out period, the service can invalidate the allocation of the virtual machine instance and destroy the container. The time from receiving the request to beginning code execution is less than a predetermined duration, for example, 100 ms. | 2016-03-31 |
20160092253 | OVERCOMMITTING VIRTUAL MACHINE HOSTS - A host-side overcommit value is set upon a physical node that implements virtual machines (VM Node). The overcommit value is determined by receiving a selected enablement template that includes a selected computing capacity and a selected overcommit value. A user-side normalization factor is determined that normalizes the selected computing capacity against a reference data handling system. A comparable computing capacity of the VM Node is determined. A host-side normalization factor is determined that normalizes the comparable computing capacity against the reference data handling system. The host-side overcommit value is determined from the selected overcommit value, the user-side normalization factor, and the host-side normalization factor. The host-side overcommit value may indicate the degree to which the comparable computing capacity is overcommitted to virtual machines deployed upon heterogeneous VM Nodes, as normalized against the reference system. | 2016-03-31 |
20160092254 | SYSTEMS AND METHODS FOR PROVIDING AVAILABILITY TO RESOURCES - Methods and systems for providing a communication path are disclosed. Information can be received via a first communication session based on a first messaging protocol. The first communication session can be terminated at a virtual machine of a group of virtual machines. A dynamically bound communication path to a resource can be selected based on a dynamically reconfigurable routing table for the group of virtual machines. A second communication session can be initiated, at the virtual machine, via the selected dynamically bound communication path. The information can be transmitted to the resource via the second communication session based on a second messaging protocol. | 2016-03-31 |
20160092255 | ALLOCATING ALL OR A PORTION OF THE MEMORY IN A CACHE MODULE IN EACH HYPERVISOR IN A POOL OF HYPERVISORS TO FORM A SHARED CACHE MODULE TO BE UTILIZED BY THE VIRTUAL MACHINES RUN BY THE POOL OF HYPERVISORS - A method, system and computer program product for efficiently utilizing a virtual file system cache across cloud computing nodes. A determination is made as to which hypervisors will be able to share all or a portion of the memory in its cache module (look-aside cache) to become a hypervisor in a “pool of hypervisors” based on the workload of the virtual machines run by the hypervisor. All or a portion of the memory in the cache module in each hypervisor in the pool of hypervisors that is available to be utilized by other virtual machines is allocated to form a “shared cache module” to be utilized by virtual machines run by the pool of hypervisors. In this manner, the look-aside cache available to the hypervisor will be utilized more effectively since any available memory can be utilized by other virtual machines running on different hypervisors on different cloud computing nodes. | 2016-03-31 |
20160092256 | MULTI-SITE DISASTER RECOVERY CONSISTENCY GROUP FOR HETEROGENEOUS SYSTEMS - Methods and arrangements for managing a consistency group for computing sites. A plurality of computing sites are communicated with, each of the sites comprising one or more of (i) and (ii): (i) at least one virtual machine; and (ii) at least one server. Updates captured at each of the sites are received, and the captured updates are batched. The batched updates are communicated to the plurality of sites, thereby ensuring data consistency across the plurality of sites. Other variants and embodiments are broadly contemplated herein. | 2016-03-31 |
20160092257 | CENTRALIZED CONTROLLER FOR DISTRIBUTING NETWORK POLICIES OF VIRTUAL MACHINES IN A DATACENTER - A physical computing device that operates in a network. The device includes a group of tenant virtual machines (VMs). Each VM is hosted on a host machine that includes a virtualization software. The device receives network bandwidth allocation policies for the group of VMs. The device determines a set of potential communication peers for each VM. The device sends the network bandwidth allocation policy of each VM to the virtualization software of the host machines of each potential communication peer of the VM. | 2016-03-31 |
20160092258 | NUMA I/O AWARE NETWORK QUEUE ASSIGNMENTS - Systems and methods for preferentially assigning virtual machines (VMs) on a particular NUMA node with network queues on the same NUMA node are described. A load balancer process on a host assigns multiple VMs to network queues. The assignment of the VMs to a network queues is performed with a bias toward assigning VMs using a particular NUMA node to network queues on the same NUMA node. A scheduler on the host assigns VMs to NUMA nodes. The scheduler is biased toward assigning VMs to the same NUMA node as the PNIC and/or the same NUMA node as a network queue assigned to the VM. | 2016-03-31 |
20160092259 | NUMA I/O AWARE NETWORK QUEUE ASSIGNMENTS - Systems and methods for preferentially assigning virtual machines (VMs) on a particular NUMA node with network queues on the same NUMA node are described. A load balancer process on a host assigns multiple VMs to network queues. The assignment of the VMs to a network queues is performed with a bias toward assigning VMs using a particular NUMA node to network queues on the same NUMA node. A scheduler on the host assigns VMs to NUMA nodes. The scheduler is biased toward assigning VMs to the same NUMA node as the PNIC and/or the same NUMA node as a network queue assigned to the VM. | 2016-03-31 |
20160092260 | DETERMINATION METHOD AND DETERMINATION DEVICE - A determination method includes: receiving a request of a change from a first system configured by a first configuration to a second system configured by a second configuration, the request of the change including configuration data related to the first configuration and change data related to the change; extracting a functional requirement for a function that is realized in the first system based on the configuration data; identifying an operational requirement for realizing the first system based on the functional requirement and data about an operational process that is used for the first system; identifying a constraint condition about the second system based on configuration elements of the second configuration that are identified by the configuration data and the change data; and determining feasibility of the change to the second system based on the functional requirement, the operational requirement, and the constraint condition. | 2016-03-31 |
20160092261 | METHOD AND SYSTEM FOR PHYSICAL COMPUTER SYSTEM VIRTUALIZATION - The present disclosure provides a physical computer virtualization method. The method includes receiving a virtualization instruction input by a user on a physical computer; restarting the physical computer; and loading the physical computer with a virtual machine management system mirror image file after restarting the physical computer to boot the physical computer into a virtual machine management system. The method also includes obtaining physical disks of the physical computer; and creating a virtual machine through the virtual machine management system and using the physical disks of the physical computer. | 2016-03-31 |
20160092262 | AUTOMATED CREATION OF EXECUTABLE WORKFLOW - A computing device receives information describing one or more workflow components. The computing device determines whether at least one executable step can be determined for each of the one or more workflow components. The computing device provides an indication of whether at least one executable step can be determined for each of the one or more workflow components. | 2016-03-31 |
20160092263 | SYSTEM AND METHOD FOR SUPPORTING DYNAMIC THREAD POOL SIZING IN A DISTRIBUTED DATA GRID - A system and method supports dynamic thread pool sizing suitable for use in a multi-threaded processing environment such as a distributed data grid. Dynamic thread pool resizing utilizes measurements of thread pool throughput and worker thread utilization in combination with analysis of the efficacy of prior thread pool resizing actions to determine whether to add or remove worker threads from a thread pool in a current resizing action. Furthermore, the dynamic thread pool resizing system and method can accelerate or decelerate the iterative resizing analysis and the rate of worker thread addition and removal depending on the needs of the system. Optimizations are incorporated to prevent settling on a local maximum throughput. The dynamic thread pool sizing/resizing system and method thereby provides rapid and responsive adjustment of thread pool size in response to changes in work load and processor availability. | 2016-03-31 |
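A feedback loop of this general shape, grow while throughput keeps improving, reverse and slow down when it stops, can be sketched as follows. The 5% improvement threshold, step doubling, and bounds are invented for the example and are not taken from the application.

```python
# Illustrative sketch of throughput-feedback thread-pool resizing.
# Thresholds and step sizes are invented for the example, not taken from the patent.
class ResizingController:
    def __init__(self, min_threads=1, max_threads=64):
        self.size = min_threads
        self.min, self.max = min_threads, max_threads
        self.step = 1
        self.last_throughput = 0.0
        self.last_action = 0          # +1 grew, -1 shrank, 0 none yet

    def on_measurement(self, throughput: float) -> int:
        """Return the new pool size given the latest throughput sample."""
        improved = throughput > self.last_throughput * 1.05
        if self.last_action != 0 and not improved:
            self.last_action = -self.last_action     # prior action did not help: reverse
            self.step = 1                            # decelerate
        else:
            self.last_action = self.last_action or 1
            self.step = min(self.step * 2, 8)        # accelerate while it keeps helping
        self.size = max(self.min, min(self.max, self.size + self.last_action * self.step))
        self.last_throughput = throughput
        return self.size
```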
20160092264 | POST-RETURN ASYNCHRONOUS CODE EXECUTION - A method, system, and computer program product for the prioritization of code execution. The method includes accessing a thread in a context containing a set of code instances stored in memory; identifying sections of the set of code instances that correspond to deferrable code tasks; executing the thread in the context; determining that the thread is idle; and executing at least one of the deferrable code tasks. The deferrable code task is executed within the context and in response to determining that the thread is idle. | 2016-03-31 |
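The core idea, record deferrable work during request handling and run it only once the thread goes idle, can be illustrated with a minimal sketch. The defer/run names and the audit-log example are assumptions for illustration, not the claimed mechanism.

```python
# Illustrative sketch: run deferrable tasks only after the time-critical work
# has returned and the thread would otherwise be idle.
deferred = []

def defer(task):
    """Record a low-priority task instead of running it inline."""
    deferred.append(task)

def handle_request(payload):
    result = payload.upper()                              # time-critical work
    defer(lambda: print(f"audit log for {payload!r}"))    # deferrable work
    return result

def run(requests):
    for r in requests:
        print(handle_request(r))
    while deferred:                                       # thread is now idle
        deferred.pop(0)()

if __name__ == "__main__":
    run(["alpha", "beta"])
```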
20160092265 | Systems and Methods for Utilizing Futures for Constructing Scalable Shared Data Structures - A multithreaded application that includes operations on a shared data structure may exploit futures to improve performance. For each operation that targets the shared data structure, a thread of the application may create a future and store it in a thread-local list of futures (under weak or medium futures linearizability policies) or in a shared queue of futures (under strong futures linearizability policies). Prior to a thread evaluating a future, type-specific optimizations may be performed on the list or queue of pending futures. For example, futures may be sorted temporally or by key, or multiple operations indicated in the futures may be combined or eliminated. During an evaluation of a future, a thread may compute the results of the operations indicated in one or more other futures. The order in which operations take effect and the optimization operations performed may be dependent on the futures linearizability policy. | 2016-03-31 |
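One narrow instance of the combining optimization mentioned above, buffering pending operations as futures in a thread-local list and merging them before touching the shared structure, is sketched below for a simple counter. The dict-based "future", the combine-all-adds rule, and the shared-snapshot result are simplifying assumptions, not the patented policies.

```python
# Illustrative sketch: buffer add operations on a shared counter as futures in a
# thread-local list and combine them when one of them is evaluated.
import threading

class SharedCounter:
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()
        self._local = threading.local()

    def _pending(self):
        if not hasattr(self._local, "futures"):
            self._local.futures = []
        return self._local.futures

    def add_async(self, delta):
        """Return a 'future'; the add takes effect only when a future is evaluated."""
        fut = {"op": "add", "arg": delta, "result": None, "done": False}
        self._pending().append(fut)
        return fut

    def evaluate(self, fut):
        """Apply all pending adds in one critical section (combining optimization)."""
        pending = self._pending()
        combined = sum(f["arg"] for f in pending)
        with self._lock:
            self.value += combined
            snapshot = self.value
        for f in pending:                 # all combined futures see the same snapshot
            f["result"], f["done"] = snapshot, True
        pending.clear()
        return fut["result"]
```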
20160092266 | DYNAMIC RELOCATION OF APPLICATIONS IN A CLOUD APPLICATION SERVICE MODEL - Software that performs the following steps: (i) running a first customer application on a first set of virtual machine(s), with the first customer application including a first plurality of independently migratable elements, including a first independently migratable element and a second independently migratable element; (ii) dynamically checking a status of the first set of virtual machine(s) to determine whether a first migration condition exists; and (iii) on condition that the first migration condition exists, migrating the first independently migratable element to a second set of virtual machine(s) without migrating the second independently migratable element to the second set of virtual machine(s). | 2016-03-31 |
20160092267 | CROSS-DOMAIN MULTI-ATTRIBUTE HASHED AND WEIGHTED DYNAMIC PROCESS PRIORITIZATION - In response to receipt of a process-level input request that is subject to business-level requirements, multiple sets of attributes are identified. The sets of attributes are each from one of multiple informational domains that represent processing factors associated with at least the process-level input request, contemporaneous infrastructure processing capabilities, and historical process performance of similar processes. The multiple sets of attributes from the multiple informational domains are hashed as a vector into an initial process prioritization. The attributes of the hashed vector of the multiple sets of attributes from the multiple informational domains are weighted in the initial process prioritization into a hashed-weighted resulting process prioritization. The process-level input request is assigned to a process category based upon the hashed-weighted resulting process prioritization. | 2016-03-31 |
20160092268 | SYSTEM AND METHOD FOR SUPPORTING A SCALABLE THREAD POOL IN A DISTRIBUTED DATA GRID - A system and method for supporting a scalable thread pool in multi-threaded processing environments such as a distributed data grid. A work distribution system utilizes a collection of association piles to hold elements communicated between a service thread and multiple worker threads. Worker threads associated with the association piles poll elements in parallel. Polled elements are not released until returned from the worker thread. First in first out ordering of operations is maintained with respect to related elements by ensuring related elements are held in the same association pile and preventing polling of related elements until any previously polled and related elements have been released. By partitioning the elements across multiple association piles while ensuring proper ordering of operations with respect to related elements, the scalable thread pool enables the use of large thread pools with reduced contention compared to a conventional single producer multiple consumer queue. | 2016-03-31 |
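The ordering rule, keep related elements in the same pile and never hand out a later element for a key while an earlier one is still in flight, can be sketched as follows. The class name, key hashing, and single-threaded bookkeeping are illustrative assumptions; a real implementation would need thread-safe data structures.

```python
# Illustrative sketch: partition work items across several "piles" by key so
# that items for the same key keep their order, while unrelated items can be
# polled by worker threads in parallel. Thread-safety is omitted for brevity.
class AssociationPiles:
    def __init__(self, n_piles=4):
        self.piles = [[] for _ in range(n_piles)]
        self.in_flight = [set() for _ in range(n_piles)]   # keys currently polled

    def add(self, key, item):
        self.piles[hash(key) % len(self.piles)].append((key, item))

    def poll(self, pile_idx):
        """Return the oldest item whose key has no earlier item still in flight."""
        pile, busy = self.piles[pile_idx], self.in_flight[pile_idx]
        for i, (key, item) in enumerate(pile):
            if key not in busy:
                del pile[i]
                busy.add(key)
                return key, item
        return None

    def release(self, pile_idx, key):
        """Called when the worker finishes, allowing later items for the key."""
        self.in_flight[pile_idx].discard(key)
```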
20160092269 | TUNABLE COMPUTERIZED JOB SCHEDULING - A computer-implemented method for scheduling a set of jobs executed in a computer system can include determining a workload-time parameter for a set of at least one job. The workload-time parameter can relate to execution-time parameters for the set of at least one job. The method can include determining a schedule tuning parameter for the set of at least one job, the schedule tuning parameter based on the workload-time parameter. The method can include generating a scheduling factor for each job in the set, the scheduling factor generated based on the schedule tuning parameter. The method can include scheduling the set of at least one job based on the scheduling factor. | 2016-03-31 |
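The abstract leaves the exact relationship between the workload-time parameter, the tuning parameter, and the scheduling factor open; the sketch below shows one plausible way such quantities could be composed, purely as an illustration of the pipeline, not the claimed formula.

```python
# Illustrative only: derive a per-job scheduling factor from observed execution
# times and a tuning parameter, then order jobs by it. The formula is invented.
def schedule(jobs, exec_times):
    """jobs: {name: priority}; exec_times: {name: seconds observed}."""
    workload_time = sum(exec_times.values())                # workload-time parameter
    tuning = 1.0 / workload_time if workload_time else 1.0  # schedule tuning parameter
    factors = {name: prio + tuning * exec_times[name]       # scheduling factor per job
               for name, prio in jobs.items()}
    return sorted(jobs, key=lambda name: factors[name], reverse=True)

if __name__ == "__main__":
    jobs = {"etl": 5, "report": 3, "cleanup": 1}
    times = {"etl": 120.0, "report": 30.0, "cleanup": 5.0}
    print(schedule(jobs, times))   # jobs ordered by descending scheduling factor
```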
20160092270 | ALGORITHM FOR FASTER CONVERGENCE THROUGH AFFINITY OVERRIDE - A method is implemented by a network device having a symmetric multi-processing (SMP) architecture. The method improves response time for processes implementing routing algorithms in a network. The method manages core assignments for the processes during a network convergence process. The method includes determining a number of interrupts or system events processed by a subset of cores of a set of cores of a central processing unit and identifying a core within the subset of cores with the lowest number of interrupts or system events processed. The method further includes changing an affinity mask of at least one process implementing the routing algorithms during the network convergence to target the core within the subset of cores with the lowest number of interrupts or system events processed. | 2016-03-31 |
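On Linux, the two ingredients, counting interrupts per core and overriding a process's affinity mask, could look roughly like this. The /proc parsing is deliberately simplified and the function names are invented; this is a sketch of the general technique, not the patented algorithm.

```python
# Illustrative sketch: pick the core with the fewest interrupts handled and
# re-pin a routing process to it. Linux-specific; os.sched_setaffinity needs
# suitable privileges, and the /proc/interrupts parsing here is simplified.
import os

def interrupts_per_cpu(path="/proc/interrupts"):
    with open(path) as f:
        cpus = f.readline().split()                 # header: CPU0 CPU1 ...
        totals = [0] * len(cpus)
        for line in f:
            cols = line.split()[1:1 + len(cpus)]
            for i, col in enumerate(cols):
                if col.isdigit():
                    totals[i] += int(col)
    return totals

def pin_to_quietest_core(pid, eligible_cores):
    totals = interrupts_per_cpu()
    quietest = min(eligible_cores, key=lambda c: totals[c])
    os.sched_setaffinity(pid, {quietest})           # override the affinity mask
    return quietest
```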
20160092271 | MERGING CONNECTION POOLS TO FORM A LOGICAL POOL OF CONNECTIONS DURING A PRESET PERIOD OF TIME THEREBY MORE EFFICIENTLY UTILIZING CONNECTIONS IN CONNECTION POOLS - A method, system and computer program product for efficiently utilizing connections in connection pools. A period of time an application running on a virtual machine needs a greater number of connections to an external resource than allocated in its pool of connections is identified. The connection pool for this application as well as the connection pools for the other applications containing connections to the same external resource are merged to form a logical pool of connections to be shared by those applications during the identified period of time. Alternatively, in an application server cluster environment, the connection pools utilized by the application servers to access the external resource may be reconfigured based on the weight assigned to each member (or application server) of the cluster which is based on the member's load size. In these manners, the resource connections in these pools of connections will be more efficiently utilized. | 2016-03-31 |
20160092272 | CONGESTION AVOIDANCE IN NETWORK STORAGE DEVICE USING DYNAMIC WEIGHTS - Methods, systems, and computer programs are presented for allocating CPU cycles and disk Input/Outputs (IOs) to resource-creating processes based on dynamic weights that change according to the current percentage of resource utilization in the storage device. One method includes operations for assigning a first weight to a processing task that increases resource utilization of a resource for processing incoming input/output (IO) requests, and for assigning a second weight to a generating task that decreases the resource utilization of the resource. Further, the method includes an operation for dynamically adjusting the second weight based on the current resource utilization in the storage system. Additionally, the method includes an operation for allocating the CPU cycles and disk IOs to the processing task and to the generating task based on their respective first weight and second weight. | 2016-03-31 |
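The weighting idea, give the resource-freeing task a larger share as utilization climbs, can be illustrated with a toy allocation function. The particular curve and constants below are invented for the example; only the direction of adjustment follows the abstract.

```python
# Illustrative sketch: lower the weight of the foreground (utilization-increasing)
# task and raise the weight of the background (utilization-decreasing) task as
# resource utilization grows. The curve is an invented example.
def dynamic_weights(utilization, base_fg=8, base_bg=2):
    """utilization: 0.0..1.0 of the constrained resource."""
    bg = base_bg + int(utilization * 10)            # reclaiming weight grows with pressure
    fg = max(1, base_fg - int(utilization * 6))     # ingest weight shrinks
    return fg, bg

def allocate(cycles, utilization):
    fg, bg = dynamic_weights(utilization)
    fg_share = cycles * fg // (fg + bg)
    return fg_share, cycles - fg_share

if __name__ == "__main__":
    for u in (0.1, 0.5, 0.9):
        print(u, allocate(1000, u))   # foreground share falls as utilization rises
```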
20160092273 | SYSTEM AND METHOD FOR MANAGING THE ALLOCATING AND FREEING OF OBJECTS IN A MULTI-THREADED SYSTEM - A memory management system for managing objects which represent memory in a multi-threaded operating system extracts the ID of the home free-list from the object header to determine whether the object is remote and adds the object to a remote object list if the object is determined to be remote. The memory management system determines whether the number of objects on the remote object list exceeds a threshold. If the threshold is exceeded, the system batch-removes the objects on the remote object list and then adds those objects to the appropriate one or more remote home free-lists. | 2016-03-31 |
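A minimal sketch of the remote-free batching pattern follows: frees destined for another thread's home free-list are parked locally and returned home in one pass once a threshold is reached. The class name, threshold, and single-threaded bookkeeping are assumptions for illustration.

```python
# Illustrative sketch: objects freed by a thread other than their home
# free-list's owner are parked on a remote list and returned home in batches.
from collections import defaultdict

class FreeListManager:
    def __init__(self, batch_threshold=8):
        self.home_lists = defaultdict(list)       # home_id -> free objects
        self.remote = []                           # (home_id, obj) pairs awaiting return
        self.threshold = batch_threshold

    def free(self, obj, home_id, local_home_id):
        if home_id == local_home_id:
            self.home_lists[home_id].append(obj)   # cheap local free
            return
        self.remote.append((home_id, obj))         # remote free: defer
        if len(self.remote) >= self.threshold:
            self._flush()

    def _flush(self):
        batch, self.remote = self.remote, []
        for home_id, obj in batch:                 # one pass returns the whole batch
            self.home_lists[home_id].append(obj)
```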
20160092274 | Heterogeneous Thread Scheduling - Heterogeneous thread scheduling techniques are described in which a processing workload is distributed to heterogeneous processing cores of a processing system. The heterogeneous thread scheduling may be implemented based upon a combination of periodic assessments of system-wide power management considerations used to control states of the processing cores and higher frequency thread-by-thread placement decisions that are made in accordance with thread specific policies. In one or more implementations, a system workload context is periodically analyzed for a processing system having heterogeneous cores including power efficient cores and performance oriented cores. Based on the periodic analysis, core states are set for some of the heterogeneous cores to control activation of the power efficient cores and performance oriented cores for thread scheduling. Then, individual threads are scheduled in dependence upon the core states to allocate the individual threads between active cores of the heterogeneous cores on a per-thread basis. | 2016-03-31 |
20160092275 | TUNABLE COMPUTERIZED JOB SCHEDULING - A computer-implemented method for scheduling a set of jobs executed in a computer system can include determining a workload-time parameter for a set of at least one job. The workload-time parameter can relate to execution-time parameters for the set of at least one job. The method can include determining a schedule tuning parameter for the set of at least one job, the schedule tuning parameter based on the workload-time parameter. The method can include generating a scheduling factor for each job in the set, the scheduling factor generated based on the schedule tuning parameter. The method can include scheduling the set of at least one job based on the scheduling factor. | 2016-03-31 |
20160092276 | INDEPENDENT MAPPING OF THREADS - Embodiments of the present invention provide systems and methods for mapping the architected state of one or more threads to a set of distributed physical register files to enable independent execution of one or more threads in a multiple slice processor. In one embodiment, a system is disclosed including a plurality of dispatch queues which receive instructions from one or more threads and an even number of parallel execution slices, each parallel execution slice containing a register file. A routing network directs an output from the dispatch queues to the parallel execution slices and the parallel execution slices independently execute the one or more threads. | 2016-03-31 |
20160092277 | OVERCOMMITTING VIRTUAL MACHINE HOSTS - A host-side overcommit value is set upon a physical node that implements virtual machines (VM Node). The overcommit value is determined by receiving a selected enablement template that includes a selected computing capacity and a selected overcommit value. A user-side normalization factor is determined that normalizes the selected computing capacity against a reference data handling system. A comparable computing capacity of the VM Node is determined. A host-side normalization factor is determined that normalizes the comparable computing capacity against the reference data handling system. The host-side overcommit value is determined from the selected overcommit value, the user-side normalization factor, and the host-side normalization factor. The host-side overcommit value may indicate the degree the comparable computing capacity is overcommitted to virtual machines deployed upon heterogeneous VM Nodes as normalized against the reference system. | 2016-03-31 |
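The abstract implies an arithmetic relationship between the selected overcommit value and the two normalization factors without stating it; the snippet below shows one plausible reading, purely to make the normalization step concrete. The actual relationship is defined by the claims, not this example.

```python
# Illustrative arithmetic only: one plausible way the normalization factors
# could combine into a host-side overcommit value.
def host_overcommit(selected_capacity, selected_overcommit,
                    host_capacity, reference_capacity):
    user_norm = selected_capacity / reference_capacity    # user-side normalization factor
    host_norm = host_capacity / reference_capacity        # host-side normalization factor
    return selected_overcommit * user_norm / host_norm

# e.g. a 2x overcommit selected against a template half as capable as the host
# becomes a 1x commitment on that host:
print(host_overcommit(selected_capacity=50, selected_overcommit=2.0,
                      host_capacity=100, reference_capacity=100))   # -> 1.0
```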
20160092278 | SYSTEM AND METHOD FOR PROVIDING A PARTITION FILE SYSTEM IN A MULTITENANT APPLICATION SERVER ENVIRONMENT - In accordance with an embodiment, described herein is a system and method for providing a partition file system in a multitenant application server environment. The system enables application server components to work with partition-specific files for a given partition, instead of or in addition to domain-wide counterpart files. The system also allows the location of some or all of a partition-specific storage to be specified by higher levels of the software stack. In accordance with an embodiment, also described herein is a system and method for resource overriding in a multitenant application server environment, which provides a means for administrators to customize, at a resource group level, resources that are defined in a resource group template referenced by a partition, and to override resource definitions for particular partitions. | 2016-03-31 |
20160092279 | DISTRIBUTED REAL-TIME COMPUTING FRAMEWORK USING IN-STORAGE PROCESSING - According to one general aspect, a scheduler computing device may include a computing task memory configured to store at least one computing task. The computing task may be executed by a data node of a distributed computing system, wherein the distributed computing system includes at least one data node, each data node having a central processor and an intelligent storage medium, wherein the intelligent storage medium comprises a controller processor and a memory. The scheduler computing device may include a processor configured to assign the computing task to be executed by either the central processor of a data node or the intelligent storage medium of the data node, based, at least in part, upon an amount of data associated with the computing task. | 2016-03-31 |
20160092280 | Adaptive Lock for a Computing System having Multiple Runtime Environments and Multiple Processing Units - A method for operating a lock in a computing system having plural processing units and running under multiple runtime environments is provided. When a requester thread attempts to acquire the lock while the lock is held by a holder thread, determine whether the holder thread is suspendable or non-suspendable. If the holder thread is non-suspendable, put the requester thread in a spin state regardless of whether the requester thread is suspendable or non-suspendable; otherwise determine whether the requester thread is suspendable or non-suspendable unless the requester thread quits acquiring the lock. If the requester thread is non-suspendable, arrange the requester thread to attempt acquiring the lock again; otherwise add the requester thread to a wait queue as an additional suspended thread. Suspended threads stored in the wait queue may be resumed later for lock acquisition. The method is applicable to computing systems with a multicore processor. | 2016-03-31 |
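The spin-versus-suspend decision table reads directly into code: suspend the requester only when both the holder and the requester are suspendable, otherwise spin and retry. The sketch below models suspendability as an explicit flag; detecting it from the runtime environment, as the application describes, is outside the scope of this illustration.

```python
# Illustrative sketch of the spin-vs-suspend decision. Suspendability is passed
# in explicitly; real detection depends on the runtime environments involved.
import threading, time
from collections import deque

class AdaptiveLock:
    def __init__(self):
        self._owner = None               # (thread, suspendable?) or None
        self._guard = threading.Lock()
        self._waiters = deque()          # events for suspended requesters

    def acquire(self, suspendable: bool):
        me = threading.current_thread()
        while True:
            event = None
            with self._guard:
                if self._owner is None:
                    self._owner = (me, suspendable)
                    return
                holder_suspendable = self._owner[1]
                if holder_suspendable and suspendable:
                    event = threading.Event()      # suspend only in this case
                    self._waiters.append(event)
            if event is None:
                time.sleep(0)            # spin / immediately retry
            else:
                event.wait()             # suspended until release() wakes us

    def release(self):
        with self._guard:
            self._owner = None
            waiter = self._waiters.popleft() if self._waiters else None
        if waiter:
            waiter.set()
```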
20160092281 | INFORMATION PROCESSING APPARATUS, METHOD FOR CONTROLLING THE SAME, AND STORAGE MEDIUM - An information processing apparatus includes a generation unit configured to generate a second script for setting the specified setting value, and an execution unit configured to execute a first script using the work setting value and the plurality of setting values to be set excluding the specified setting value, wherein the execution unit executes the generated second script after executing the first script. | 2016-03-31 |
20160092282 | CENTRAL REGISTRY FOR BINDING FEATURES USING DYNAMIC POINTERS - A first feature (e.g., chart or table) includes a reference to a dynamic pointer. Independently, the pointer is defined to point to a second feature (e.g., a query). The first feature is automatically updated to reflect a current value of the second feature. The reference to the pointer and pointer definition are recorded in a central registry, and changes to the pointer or second feature automatically cause the first feature to be updated to reflect the change. A mapping between features can be generated using the registry and can identify interrelationships to a developer. Further, changes in the registry can be tracked, such that a developer can view changes pertaining to a particular time period and/or feature of interest (e.g., corresponding to an operation problem). | 2016-03-31 |
20160092283 | EXECUTION OF END-TO-END-PROCESSES ACROSS APPLICATIONS - An orchestrator executes an end-to-end process across applications. The executing of the end-to-end process by the orchestrator comprises executing flow logic by the orchestrator, the flow logic according to a data model defining arguments to include in interactions between the orchestrator and each of the applications. A message broker exchanges information among the orchestrator and the applications. | 2016-03-31 |
20160092284 | ESTIMATING FLASH QUALITY USING SELECTIVE ERROR EMPHASIS - A method for data storage includes reading from a memory device data that is stored in a group of memory cells as respective analog values, and classifying readout errors in the read data into at least first and second different types, depending on zones in which the analog values fall. A memory quality that emphasizes the readout errors of the second type is assigned to the group of the memory cells, based on evaluated numbers of the readout errors of the first and second types. | 2016-03-31 |
20160092285 | Method and Apparatus for Approximating Detection of Overlaps Between Memory Ranges - A computer-implemented method for managing loop code in a compiler includes using a conflict detection procedure that detects across-iteration dependency for arrays of single memory addresses to determine whether a potential across-iteration dependency exists for arrays of memory addresses for ranges of memory accessed by the loop code. | 2016-03-31 |
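At its core this is an interval-intersection test applied conservatively to the address ranges a loop touches: if any written range may intersect any other accessed range, the compiler must assume a dependency. A minimal sketch, with invented function names and example addresses, follows.

```python
# Illustrative sketch: conservative overlap test between the address ranges
# touched by accesses in a loop. A "maybe overlaps" answer forces the safe,
# non-vectorized code path.
def ranges_may_overlap(base_a, len_a, base_b, len_b):
    """True if [base_a, base_a+len_a) and [base_b, base_b+len_b) intersect."""
    return base_a < base_b + len_b and base_b < base_a + len_a

def loop_has_potential_conflict(reads, writes):
    """reads/writes: lists of (base_address, byte_length) accessed per iteration."""
    return any(ranges_may_overlap(*w, *r)
               for w in writes
               for r in (reads + writes) if w is not r)

if __name__ == "__main__":
    writes = [(1000, 16)]
    reads = [(1008, 8), (2000, 8)]
    print(loop_has_potential_conflict(reads, writes))   # True: 1008 falls inside the write
```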
20160092286 | TRACING AND DISCOVERING THE ORIGINS AND GENEALOGY OF INSTALL ERRORS - The disclosure generally describes computer-implemented methods, software, and systems for presenting error information. An indication is received of a selected error for a product installation. Installations are identified having a matching stream, build number and error. Other builds in a same stream having the same error are identified. Information is provided for displaying a graph having a horizontal line graph including first nodes representing builds in the same stream having the same error. Other occurrences of the error in builds of other streams are identified. Information for updating the graph is provided with parallel lines for each of the other streams, each parallel line including second nodes representing builds. An oldest one of the first nodes and second nodes is identified. Information is provided for presenting a list of potential changes occurring before the date associated with the oldest node and that are candidates for causing the error. | 2016-03-31 |
20160092287 | EVIDENCE-BASED REPLACEMENT OF STORAGE NODES - Apparatus, systems, and methods for a recovery algorithm in memory are described. In one embodiment, a controller comprises logic to receive reliability information from at least one component of a storage device coupled to the controller, store the reliability information in a memory communicatively coupled to the controller, generate at least one reliability indicator for the storage device, and forward the reliability indicator to an election module. Other embodiments are also disclosed and claimed. | 2016-03-31 |
20160092288 | DETECT PROCESS HEALTH REMOTELY IN A REALTIME FASHION - A system of remote nodes may be divided into sets of partner nodes. One remote node becomes a partner of another remote node. As partners, the nodes agree to monitor each other's health and report anomalies, such as a failure of one of the nodes, to a monitoring server. The nodes do so using a persistent communication link, such as an open socket. Using the described techniques, the monitoring load of a system is distributed in part away from the monitoring server and to the nodes themselves. This may reduce the resources required of the monitoring server. At the same time, since nodes are now being monitored by partner nodes that are likely to be closer than the monitoring server, and/or on account of the monitoring being performed via a persistent communication link, certain failures can be detected in real-time or near real-time. | 2016-03-31 |
20160092289 | DETERMINATION METHOD, SELECTION METHOD, AND DETERMINATION DEVICE - A determination method, for determining a possibility of a new failure in a system, includes: obtaining first setting values for a plurality of setting items of the system when a failure in the system occurs; obtaining second setting values for the plurality of setting items when an input that the failure has been recovered is received; identifying at least one setting item from among the plurality of setting items based on the first setting values and the second setting values, the at least one setting item having a first setting value different from a second setting value; determining a value from among the first value and the second value of the at least one setting item; comparing an input value regarding the at least one setting item and the value; determining the possibility based on a result of the comparing; and outputting information regarding the possibility. | 2016-03-31 |
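The comparison step is straightforward to illustrate: diff the setting values captured at failure time against those captured at recovery time, then flag any proposed input that moves one of the changed items back to its failing value. The function names and example settings below are assumptions for illustration only.

```python
# Illustrative sketch: remember which setting items changed between the failure
# snapshot and the recovered snapshot, then flag new inputs that reintroduce a
# failing value.
def changed_items(at_failure: dict, at_recovery: dict) -> dict:
    return {k: (at_failure[k], at_recovery[k])
            for k in at_failure if at_failure[k] != at_recovery.get(k)}

def failure_risk(changed: dict, proposed: dict) -> list:
    """Return setting items whose proposed value equals the value seen at failure."""
    return [k for k, (bad, _good) in changed.items()
            if k in proposed and proposed[k] == bad]

if __name__ == "__main__":
    changed = changed_items({"timeout": 1, "pool": 10}, {"timeout": 30, "pool": 10})
    print(failure_risk(changed, {"timeout": 1}))     # ['timeout'] -> possible repeat failure
```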
20160092290 | PROCESSING DATA ERRORS FOR A DATA PROCESSING SYSTEM - Processing data errors in a data processing system includes a computer receiving one or more patterns and a data set. The one or more patterns describe characteristics of an erroneous data record and are associated with a root cause. The root cause includes a description of a technical deficiency causing the data error in the erroneous data record. Responsive to the computer determining that a first set of data records in the received data set have characteristics that match a first pattern of the one or more patterns, the computer assigns the first set of data records of the received data set having characteristics that match the first pattern to a first error group. | 2016-03-31 |
20160092291 | ANALYSIS SYSTEM, ANALYSIS METHOD AND PROGRAM - Analysis system, analysis method and program. The system includes: trace means for acquiring a command issued by software executed in an information processing system and a physical address of a memory used by the command as trace data, and recording the trace data to storage means; event detecting means for detecting an event caused to occur by the software and acquiring event information; conversion means for converting the event information to a memory access pattern configured with a plurality of commands for accessing the memory and a plurality of physical addresses; and memory accessing means for accessing the memory using the converted memory access pattern, causing the trace means to acquire trace data and record the trace data to the storage means. | 2016-03-31 |