Entries |
Document | Title | Date |
20080256551 | System and Method For Storing State Information - A method for storing state information, the method includes storing, at a first circuit, state information representative of a state of a second circuit while the second circuit enters a low power mode; characterized by receiving an indication that a task switch from a first task to a second task should occur; storing state information representative of a state of the second circuit, at the first circuit; receiving an indication that the first task should be resumed; and writing the stored state information from the first circuit to the second circuit. A system includes a first circuit and a second circuit, wherein the first circuit is connected to the second circuit and is adapted to store state information representative of a state of the second circuit; characterized by including a controller adapted to control storage of the state information if at least a portion of the second circuit is powered down or if the second circuit is associated with a task switching operation. | 10-16-2008 |
20080271042 | TESTING MULTI-THREAD SOFTWARE USING PRIORITIZED CONTEXT SWITCH LIMITS - Testing multithreaded application programs for errors can be carried out in an efficient and productive manner at least in part by prioritizing thread schedules based on the number of context switches between threads therein. In particular, each thread schedule in a multithreaded application program can be prioritized based on whether a given thread schedule has a number of context switches that is the same as or less than some maximum value. A model checker module can then iteratively execute thread schedules that fit within a given context switch maximum value, or a progressively higher value up to some limit. In one implementation, for example, the model checker module executes all thread schedules that have zero preempting context switches, then all thread schedules that have only one preempting context switch, etc. Most errors in an application program can be identified by executing only those thread schedules with relatively few preempting context switches. | 10-30-2008 |
20080271043 | Accurate measurement of multithreaded processor core utilization and logical processor utilization - An embodiment of the invention provides an apparatus and method for accurate measurement of utilizations in a hardware multithreaded processor core. The apparatus and method perform the acts including: determining idle time spent cycles, which are cycles that are spent in idle by a hardware thread in a processor core; determining idle consumed cycles, which are cycles that are consumed in the idle time spent cycles by the hardware thread; and determining at least one of a processor core utilization and a logical processor utilization based upon at least one of the idle time spent cycles and the idle consumed cycles. | 10-30-2008 |
20080271044 | Method and System for Multithreaded Request Dispatching - A method and a system are described that involve processing a request in multiple threads and dispatching the request to a set of applications. The method includes receiving the request, wherein the request contains application context and session data, creating a request context object and associating it with the application context and the session data, storing an identifier of a first thread that processes the request in the request context object associated with the thread, creating a set of threads from the first thread to process the request in parallel threads, each thread in the set having a unique identifier and inheriting the request context object from the first thread, and invoking a request dispatcher on each thread in the set to forward the request to the set of applications. | 10-30-2008 |
20080301700 | Filtering of performance monitoring information - In one embodiment, the present invention includes a method for receiving a signal in a filter register of a performance monitor from an execution unit to enable a field of the filter register associated with a first thread when a filter enable instruction is executed during execution of code of the first thread, receiving a thread identifier and event information in the performance monitor from the execution unit, and determining if the thread that corresponds to the received thread identifier is enabled in the filter register and if so, storing the event information in a first counter of the performance monitor. Other embodiments are described and claimed. | 12-04-2008 |
20080313644 | SYSTEM ANALYSIS APPARATUS AND SYSTEM ANALYSIS METHOD - For an information system to be analyzed, restriction information indicating a restriction to be satisfied in the case where the information system is normal is acquired, and an anomalous state failing to satisfy the restriction is specified in a state transition model involving only the automatic transition. Also, the transition from an anomalous state to a normal state is retrieved in the state transition model involving only the manual transition thereby to output management work information indicating a management work specified by the retrieved manual transition as related to the anomalous state. | 12-18-2008 |
20090007137 | Order preservation in data parallel operations - Various technologies and techniques are disclosed for preserving input element ordering in data parallel operations. This ordering may be based on element ordinal position in the input or a programmer-specified key-selection routine that generates sortable keys for each input element. Complex data parallel operations are re-written to contain individual data parallel operations that introduce partitioning and merging. Each partition is then processed independently in parallel. The system ensures that downstream operations remember ordering information established by certain other operations, using techniques that vary depending upon which categories the consumer operations are in. Data is merged back into one output stream using a final merge process that is aware of the ordering established among data elements. | 01-01-2009 |
20090037927 | Apparatus and method for direct switching of software threads - An embodiment of the invention provides an apparatus and a method for direct switching of software threads. The apparatus and method include performing acts including: issuing a wakeup call from a first thread to a second thread in a sleep state; removing the second thread from the sleep state; switching out the first thread from the resource; switching in the second thread to the resource; and running the second thread on the resource. | 02-05-2009 |
20090037928 | System for Intelligent Context-Based Adjustments of Coordination and Communication Between Multiple Mobile Hosts Engaging in Services - A system and method for intelligent, context-sensitive enhancement of transactions among a plurality of mobile hosts, each having a local coordinator, engaging in services comprising an actual coordinator and an intelligence coordinator that determines context regarding the mobile hosts, and leverages the context to enhance the transactions between the local coordinators and the actual coordinator. The context can be leveraged by reducing the number and/or the amount of data of the transactions. The context can comprise a physical location, temporal data, and a network load near and at a network location of the mobile host. The system can also have an application operating on the services, in which the intelligence coordinator can improve performance of the application. The intelligence coordinator can receive and parse a meta-expression piggy-backed on a transaction message to enhance transactions. | 02-05-2009 |
20090049451 | MULTI-THREADED PROCESSING WITH REDUCED CONTEXT SWITCHING - Multi-threaded processing with reduced context switching is disclosed. Context switches may be avoided through the use of pre-emption notification, a pre-emption wait time attribute and a no-context-save yield. | 02-19-2009 |
20090070774 | LIVE LOCK FREE PRIORITY SCHEME FOR MEMORY TRANSACTIONS IN TRANSACTIONAL MEMORY - A method and apparatus for avoiding live-lock during transaction execution is herein described. Counting logic is utilized to track successfully committed transactions for each processing element. When a data conflict is detected between transactions on multiple processing elements, priority is provided to the processing element with the lower counting logic value. Furthermore, if the values are the same, then the processing element with the lower identification value is given priority, i.e. allowed to continue while the other transaction is aborted. To avoid live-lock between processing elements that both have predetermined counting logic values, such as maximum counting values, when one processing element reaches the predetermined counting value all counters are reset. In addition, a failure at maximum value (FMV) counter may be provided to count a number of aborts of a transaction when counting logic is at a maximum value. When the FMV counter is at a predetermined number of aborts the counting logic is reset to avoid live lock. | 03-12-2009 |
20090077564 | FAST CONTEXT SWITCHING USING VIRTUAL CPUS - Various technologies and techniques are disclosed that provide fast context switching. One embodiment provides a method for a context switch comprising preloading a host virtual machine context in a first portion of a processor, operating a guest virtual machine in a second portion of the processor, writing parameters of the host virtual machine context to a memory location shared by the host virtual machine and the guest virtual machine, and operating the host virtual machine in the processor. In this manner, a fast context switch may be accomplished by preloading the new context in a virtual processor, thus reducing the delay to switch to the new context. | 03-19-2009 |
20090083754 | IMPLEMENTATION OF MULTI-TASKING ON A DIGITAL SIGNAL PROCESSOR - The present invention relates to implementing multi-tasking on a digital signal processor. For that purpose, blocking functions are arranged such that they do not make use of a processor's hardware stack. Respective function calls are replaced with a piece of inline assembly code, which instead performs a branch to the correct routine for carrying out said function. If a blocking condition of the blocking function is encountered, a task switch can be done to resume another task. Because the hardware stack is not used when a task switch might have to occur, mixed-up contents of the hardware stack among function calls performed by different tasks are avoided. | 03-26-2009 |
20090083755 | TASK SCHEDULING OF FIBER-OPTIC TRANSCEIVER FIRMWARE - Systems and methods for optimizing the task scheduling efficiency of firmware and/or software associated with optoelectronic transceiver devices. In one example, a scheduling module executes microcode that schedules tasks based on the operational parameters. The scheduling module compares operational parameters with their last known values and then flags necessary tasks to be initiated. The scheduling module flags only those tasks that rely on a particular operational parameter and only if the operational parameter has changed in value since the most recent time that it has been measured. Specifically, the scheduling module identifies leading tasks and dependent tasks and flags tasks only if data that relies on the operating parameter has changed since a previous task scheduling determination. | 03-26-2009 |
20090125913 | CONTEXT SWITCH DATA PREFETCHING IN MULTITHREADED COMPUTER - An apparatus initiates, in connection with a context switch operation, a prefetch of data likely to be used by a thread prior to resuming execution of that thread. As a result, once it is known that a context switch will be performed to a particular thread, data may be prefetched on behalf of that thread so that when execution of the thread is resumed, more of the working state for the thread is likely to be cached, or at least in the process of being retrieved into cache memory, thus reducing cache-related performance penalties associated with context switching. | 05-14-2009 |
20090133033 | ADVANCING AND REWINDING A REPLAYED PROGRAM EXECUTION - In an embodiment, a data processing system comprises a storage system coupled to a unit under test comprising a heap memory, a static memory and a stack; second logic operable to perform: detecting one or more changes in a first state of the heap memory and the static memory; storing, in the storage system, as a state point of the unit under test, the one or more changes in the first state of the heap memory and the static memory; third logic operable to perform: receiving a request to change the memory under test to a particular state point; in response to the request, loading the particular state point from the storage system and applying the state point to the heap memory and the static memory to result in changing the heap memory and the static memory to a second state that is substantially equivalent to the first state. | 05-21-2009 |
20090158295 | DEVICE SETTINGS RESTORE POINT - A method and a device may be provided for saving and restoring one or more settings associated with the device. The one or more settings may be saved and changed before performing a task. After completion of the task, or after a determined failure of the task to complete, the one or more settings may be restored. Communications may be exchanged between a host and the device to create a restore point for saving the one or more settings, to change any of the one or more settings before performing the task, and to restore the one or more settings after completion of the task, or after determining the failure of the task to complete. The device may create and store the one or more settings in a restore point in the device, or may send the one or more settings to the host for storing. | 06-18-2009 |
20090187916 | TASK SWITCHING WITH STATE PRESERVATION FOR PROGRAMS RUNNING ON AN ELECTRONIC DEVICE - A method and system providing switching between a plurality of installed programs in a computer system. Embodiments include a jump function comprising the steps: (1) determining a jump program that is to be the next program to be run, possibly from a plurality of possible choices; (2) creating input data for the jump program based on data in the current program; (3) storing the program state of the currently running program into a context packet and saving the context packet to memory; (4) releasing temporary memory that is used by the program, so as to allow other programs to use the memory; (5) calling the jump program with the created input data as input and terminating the currently running program. | 07-23-2009 |
20090217290 | Method and System for Task Switching with Inline Execution - The present disclosure is directed to a method and system for task switching with inline execution. In accordance with a particular embodiment of the present disclosure, a first state and a second state are identified for a function executing in the first state. A switch routine is invoked at a particular execution point in the function. A work element is generated in the switch routine. The work element includes status information for the function. The work element is transmitted to at least one alternate state task. The first state is altered to the second state according to the work element. Execution of the function in the second state is resumed at the particular execution point. | 08-27-2009 |
20090254919 | Sharing Operating System Sub-Processes Across Tasks - An operating system permits sharing of a sub-process (or process unit) across multiple processes (or tasks). Each shared sub-process has its own context. The sharing is enabled by tracking when a process invokes a sub-process. When a process invokes a sub-process, the process is designated as a parent process of the child sub-process. The invoked sub-process may require use of process level variable data. To enable storage of the process level variable data for each calling process, the variable data is stored in memory using a base address and a fixed offset. Although the base address may vary from process to process, the fixed offset remains the same across processes. | 10-08-2009 |
20090265714 | DIVIDED DISK COMMAND PROCESSING SYSTEM AND METHOD THEREOF - The present invention is a divided disk command processing system and method thereof, for processing a disk command by executing the multiple computing processes of the disk command separately in multiple processing stages, to reduce the frequent storing and restoring of state during context switching of a CPU. The processing capability of the CPU in each stage is adequately employed to speed up disk command processing. | 10-22-2009 |
20090271801 | Split stage call sequence restoration method - Embodiments of the present invention provide for collecting a minimal subset of task execution context in real time and for restoring the task execution context and performing procedure frame unwinding operations at a post-processing stage. A first data structure may be constructed in real time to contain procedure linkage information along with references to the memory area or to a processor register context where each procedure linkage information element (procedure return address or a procedure frame pointer) was originally found. Procedure return addresses may be determined by decoding the instruction preceding the address in question and checking if it is a procedure call instruction. Procedure return addresses may also be determined using other methods (e.g., by checking whether the memory region the address in question belongs to is executable) if the probability of retrieving the correct result is acceptable for a particular area of application of an embodiment of the present invention. Procedure frame pointers may be determined as the conventional memory area elements whose value points back to the conventional memory area. Procedure frame pointers, depending on particular processor architecture, may also have other properties that differentiate them from other elements of the conventional memory area. The conventional memory area for purposes of the present invention may be non-contiguous. The contents of the first data structure may then be employed in reconstruction of the task execution environment at the post-processing stage. Then, the procedure frame unwinding operations may be performed over the restored task execution context. | 10-29-2009 |
20090307708 | Thread Selection During Context Switching On A Plurality Of Compute Nodes - Methods, apparatus, and products are disclosed for thread selection during context switching on a plurality of compute nodes that includes: executing, by a compute node, an application using a plurality of threads of execution, including executing one or more of the threads of execution; selecting, by the compute node from a plurality of available threads of execution for the application, a next thread of execution in dependence upon power characteristics for each of the available threads; determining, by the compute node, whether criteria for a thread context switch are satisfied; and performing, by the compute node, the thread context switch if the criteria for a thread context switch are satisfied, including executing the next thread of execution. | 12-10-2009 |
20100050185 | Context Conflict Resolution and Automatic Context Source Maintenance - Techniques are disclosed for detecting and resolving conflicts in context information from various sources. That information may be used to automatically update one or more context sources and/or to validate or invalidate (until further notice or for a period of time) input from one or more context sources. Or, the updates can be made in response to the user's instructions. Rules are used in preferred embodiments to dictate the conflict resolution approach for individual users. Updating the context source is particularly useful when the source is an electronic calendar. Updates that may be made to the calendar include adding, deleting, or changing scheduled events and/or working hours. Invalidating data from a context source is particularly useful for lost, forgotten, misplaced, or loaned devices. Marking data from a context source as valid is preferably done when harmony among several context sources is detected. Context suppliers may be notified of errors or discrepancies in their context data. | 02-25-2010 |
20100083275 | TRANSPARENT USER MODE SCHEDULING ON TRADITIONAL THREADING SYSTEMS - Embodiments for performing cooperative user mode scheduling between user mode schedulable (UMS) threads and primary threads are disclosed. In accordance with one embodiment, an asynchronous procedure call (APC) is received on a kernel portion of a user mode schedulable (UMS) thread. The status of the UMS thread as it is being processed in a multi-processor environment is determined. Based on the determined status, the APC is processed on the UMS thread. | 04-01-2010 |
20100115530 | METHOD TO TRACK APPLICATION CONTEXT AND APPLICATION IDENTIFICATION - One particular implementation may take the form of a system or method for tracking application identification and application context in a context-isolated computing environment. The method may store such application information to reduce redundant information being stored on a stack. More particularly, the embodiment may store the application information in a context-specific marker frame. The context-specific marker frame may be stored once on the stack or it may be stored separately from the stack to maintain a small stack size. In another implementation, an invocation handler method may be called to store the redundant information about the executing application. The invocation handler may store the necessary information in a well-known location for later use by the virtual machine. The invocation handler may also provide further benefits, such as synchronization to ensure thread safety on shareable objects. | 05-06-2010 |
20100199287 | Method, Apparatus, and Computer Program Product for Context-Based Contact Information Management - An apparatus for context-based contact information management may include a processor. The processor may be configured to receive contact information and associated sender-based context information. In this regard, the contact information and sender-based context information may have been transmitted from a sending device. The processor may also be configured to associate receiver-based context information with the contact information and identify a historical context within a historical multi-dimensional context environment based at least in part on the sender-based context information and the receiver-based context information. Further, the processor may be configured to link the contact information to the historical context. Associated methods and computer program products may also be provided. | 08-05-2010 |
20100199288 | Multi-Tasking Real-Time Operating System for Microprocessors with Limited Memory - A real-time operating system (RTOS) for use with minimal-memory controllers has a kernel for managing task execution, including context switching, a plurality of defined tasks, individual ones of the tasks having subroutines callable in nested levels for accomplishing tasks. In the RTOS, context switching is constrained to occur only at task level, and cannot occur at any lower sub-routine level. This system can operate with a single call/return stack, saving memory. The single stack can be implemented as either a general-purpose stack or as a hardware call/return stack. In other embodiments, novel methods are taught for generating return addresses, and for using timing functions in an RTOS. | 08-05-2010 |
20100223624 | METHOD FOR PUSHING WORK REQUEST-ASSOCIATED CONTEXTS INTO AN IO DEVICE - A system and method employing the system for pushing work request associated contexts into a computer device includes issuing a request to a device in a computer system. Context data is fetched from a data storage device for the device. Context is determined for specified data requests, and context misses in the device are predicted. The system and method then initiates a context push and pushes the context into the device using a controller when a context miss is detected. Thereby, reducing the context miss latency time or delay in retrieving context data. | 09-02-2010 |
20100242050 | METHOD AND SYSTEM FOR DEADLOCK DETECTION IN A DISTRIBUTED ENVIRONMENT - A method of deadlock detection is disclosed which adjusts the detection technique based on statistics maintained for tracking the number of actual deadlocks that are detected in a distributed system, and for which types of locks are most frequently involved in deadlocks. When deadlocks occur rarely, the deadlock detection may be tuned down, for example, by reducing a threshold value which determines timeouts for waiting lock requests. When it is determined that actual deadlocks are detected frequently, the processing time for deadlock detection may be reduced, for example, by using parallel forward or backward search operations and/or by according higher priority in deadlock detection processing to locks which are more likely to involve deadlocks. | 09-23-2010 |
20100251260 | PRE-EMPTIBLE CONTEXT SWITCHING IN A COMPUTING DEVICE - Context switching between threads belonging to different user-side processes is a time consuming procedure because of the need to move a potentially large number of memory mappings around and the need to flush the data cache on hardware architectures which utilise a virtually tagged data cache. This invention allows the modification of page directory entries and the flushing of the data cache during a context switch to occur with pre-emption enabled; if a third process needs to run during a context switch, and this third process doesn't own any user memory or require modification of the page tables, this is now possible. By means of this invention, switches to kernel threads and threads in fixed user processes can occur much faster; these threads don't belong to processes that own any user memory and are the very ones that need to run with a lower guaranteed latency to ensure real-time performance. | 09-30-2010 |
20100262976 | Task Processor - A task processor includes a CPU, a save circuit, and a task control circuit. A task control circuit is provided with a task selection circuit and state storage units associated with respective tasks. When executing a predetermined system call instruction, the CPU notifies the task control circuit accordingly. When informed of the execution of a system call instruction, the task control circuit selects a task to be subsequently executed in accordance with an output from the selection circuit. When an interrupt circuit receives a high-speed interrupt request signal, the task switching circuit controls the state transition of a task by executing an interrupt handling instruction designated by the interrupt circuit. | 10-14-2010 |
20100287561 | DEVICE FOR AND METHOD OF WEIGHTED-REGION CYCLE ACCOUNTING FOR MULTI-THREADED PROCESSOR CORES - An aspect of the present invention improves the accuracy of measuring processor utilization of multi-threaded cores by providing a calibration facility that derives utilization in the context of the overall dynamic operating state of the core by assigning weights to idle threads and assigning weights to run threads, depending on the status of the core. From previous chip designs it has been established in a Simultaneous Multi Thread (SMT) core that not all idle cycles in a hardware thread can be equally converted into useful work. Competition for core resources reduces the conversion efficiency of one thread's idle cycles when any other thread is running on the same core. | 11-11-2010 |
20100293553 | FAIR SCALABLE READER-WRITER MUTUAL EXCLUSION - Implementing fair scalable reader writer mutual exclusion for access to a critical section by a plurality of processing threads in a processing system is accomplished by creating a first queue node for a first thread on the first thread's stack, the queue node representing a request by the first thread to access the critical section; adding the first queue node to a queue pointed to by a single word reader writer mutex for the critical section, the queue representing a list of threads desiring access to the critical section, each queue node in the queue being on a stack of a thread of the plurality of processing threads; waiting until the first queue node has no preceding write requests as indicated by predecessor queue nodes on the queue; entering the critical section by the first thread; exiting the critical section by the first thread; and removing the first queue node from the queue. | 11-18-2010 |
20100319000 | EXECUTION CONTEXT ISOLATION - Methods, systems, apparatuses and program products are disclosed for providing execution context isolation during the DXE phase of computer start-up. | 12-16-2010 |
20100319001 | COMMUNICATION IN ISOLATED EXECUTION CONTEXTS - Methods, systems, apparatuses and program products are disclosed for providing for communications within an environment that provides for execution isolation, especially a DXE (Driver Execution Environment) phase of a PC (personal computer) startup process. | 12-16-2010 |
20110029986 | Supporting Administration of a Multi-Application Landscape - A computer-implemented method for supporting administration of a multi-application landscape includes initiating, in a multi-application computer system, a business process that involves executing multiple applications and uses run control statements associated with process steps of the business process where a business process state is subject to change. The method includes executing the run control statements as part of performing the business process. The method includes, for each run control statement being executed, selecting at least one of multiple state indicators associated with the run control statement, the state indicator representing run-state information of the business process. The method includes generating a representation of the business process state and storing the representation in a repository, the representation comprising (i) each state indicator selected in executing the run control statements, and (ii) an identifier for the process step where the business process state changed. | 02-03-2011 |
20110067034 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM - An information processing device for causing a processor to execute a plurality of threads by switching between them. Each thread performs a process in correspondence with an obtainment of an event. The information processing device, when causing a second thread to transit from a non-execution state to an execution state to replace a first thread, detects whether or not, in the first thread having transited to the non-execution state, a next start position of a process belongs to an already processed part, detects whether or not a start position of a process in the second thread in the execution state belongs to the processed part; and determines whether or not to set a context for execution of the second thread into the processor in accordance with detection results of the first and second detection units, and performs processing in accordance with the determination. | 03-17-2011 |
20110078702 | MULTIPROCESSOR SYSTEM - To suppress bias among processors executing a non-routinely-executed program, such as event processing, and thus improve multiprocessor system performance. For this purpose, a multiprocessor system includes: a first context memory which is shared and stores context data of a program that is non-routinely executed by any one of plural processors; saving-and-restoring control units of the same number as the processors, each of which, when an execution request for a program is issued to the corresponding processor, saves and restores context data to and from the first context memory in the case where the requested program is the program that is non-routinely executed; and a selecting-and-requesting unit which, each time an execution request for the program that is non-routinely executed is issued, requests execution of such program to a selected processor. | 03-31-2011 |
20110093863 | Context switching in a data processing apparatus - A data engine that can be interrupted is disclosed, the data engine comprising a plurality of elements for storing, routing and processing data, the plurality of elements comprising: processing elements for processing the data; and registers for storing the data being processed; the data engine being configured to receive a clock signal and, in response to the clock signal, to periodically transmit a plurality of control signals to a corresponding plurality of the elements in parallel; the data engine further comprising control circuitry configured, in response to receipt of an external interrupt request: to pause transmission of the control signals to the elements and to transmit a copy of the register data stored in the plurality of registers to a store; to transmit in parallel a next plurality of the control signals in the stream of control signals to a corresponding plurality of the elements, and to transmit a copy of output data output by the processing elements in response to the next plurality of control signals to the store; and to repeat the transmitting and copying procedure a number of times. The state of the engine has then been stored. To restore the state, the control circuitry requests the stored register data from the store and restores it to the plurality of registers; transmits the next plurality of control signals to the corresponding plurality of elements, and copies the output data received from the store (corresponding to data previously output by the processing elements in response to the next plurality of control signals) to the output locations of the processing elements, repeating the transmitting and copying procedure the same number of times; and then recommences the periodic transmission of the plurality of control signals to the corresponding plurality of the elements in parallel, in response to the clock signal. | 04-21-2011 |
20110099554 | ANALYSIS AND VISUALIZATION OF APPLICATION CONCURRENCY AND PROCESSOR RESOURCE UTILIZATION - An analysis and visualization depicts how an application is leveraging computer processor cores in time. The analysis and visualization enables a developer to readily identify the degree of concurrency exploited by an application at runtime. Information regarding processes or threads running on the processor cores over time is received, analyzed, and presented to indicate portions of processor cores that are used by the application, idle, or used by other processes in the system. The analysis and visualization can help a developer understand contention for processor resources, confirm the degree of concurrency, or identify serial regions of execution that might provide opportunities for exploiting parallelism. | 04-28-2011 |
20110099555 | Reconfigurable processor and method - Disclosed are a reconfigurable processor and processing method, a reconfiguration control apparatus and method, and a thread modeler and modeling method. A memory area of a reconfigurable processor may be divided into a plurality of areas, and a context enabling a thread process may be stored in respective divided areas, in advance. Accordingly, when a context switching is performed from one thread to another thread, the other thread may be executed by using information stored in an area corresponding to the other thread. | 04-28-2011 |
20110126210 | RESPONSIVE USER INTERFACE WITH BACKGROUND APPLICATION LOGIC - A user interface can be maintained in a responsive state on a user interface thread while synchronous application logic is running on a background thread. The application logic can access an object on the background thread, and the user interface can access the same object on the user interface thread. Additionally, a request for work to be done on an object can be received. If the request is to be dispatched to a background thread, then the work can be dispatched to the background thread without blocking the user interface thread. However, if the request is to be dispatched to the user interface thread, then the work can be dispatched to the user interface thread, and the background thread can be blocked. | 05-26-2011 |
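The dispatch rule in this abstract — background work never blocks the UI thread, while UI-bound work blocks its background caller until the UI thread has run it — can be sketched as follows. This is an illustrative model only; the thread and queue names are assumptions, not taken from the patent.

```python
import threading, queue

ui_queue = queue.Queue()          # work destined for the single UI thread

def dispatch(work, to_background):
    """Route work per the abstract: background work runs on its own thread
    without blocking the UI; UI-bound work is queued to the UI thread and
    the caller blocks until it completes."""
    if to_background:
        t = threading.Thread(target=work)
        t.start()
        return t                  # UI thread is not blocked
    done = threading.Event()
    ui_queue.put((work, done))
    done.wait()                   # background caller blocks until the UI thread runs it

results = []
t = dispatch(lambda: results.append("bg"), to_background=True)
t.join()

# Minimal UI-thread loop (one iteration shown): drain queued UI work.
def ui_worker():
    work, done = ui_queue.get()
    work()
    done.set()

threading.Thread(target=ui_worker).start()
dispatch(lambda: results.append("ui"), to_background=False)
print(sorted(results))  # ['bg', 'ui']
```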
20110161982 | Task Controlling A Multitask System - Ensuring real-time performance of multitask control and improving the processing efficiency of a system provided with a processor that processes while switching between a plurality of tasks. The system includes an execution unit executing instructions of individual tasks while switching from one task to another, and a distinguishing unit distinguishing whether an instruction to be executed is a predetermined instruction. The system further includes a determination unit which, on condition that the instruction to be executed is the predetermined instruction, determines, based on a predetermined condition, whether to allow the execution unit to execute the predetermined instruction or to perform a task switching process without executing it. | 06-30-2011 |
20110173631 | Wake-and-Go Mechanism for a Data Processing System - A wake-and-go mechanism is provided for a data processing system. When a thread is waiting for an event, rather than performing a series of get-and-compare sequences, the thread updates a wake-and-go array with a target address associated with the event. The thread then goes to sleep until the event occurs. The wake-and-go array may be a content addressable memory (CAM). When a transaction appears on the symmetric multiprocessing (SMP) fabric that modifies the value at a target address in the CAM, the CAM returns a list of storage addresses at which the target address is stored. The operating system or a background sleeper thread associates these storage addresses with the threads waiting for an event at the target addresses, and may wake the one or more threads waiting for the event. | 07-14-2011 |
20110173632 | Hardware Wake-and-Go Mechanism with Look-Ahead Polling - A hardware wake-and-go mechanism is provided for a data processing system. The wake-and-go mechanism looks ahead in a thread for programming idioms that indicates that the thread is waiting for an event. The wake-and-go mechanism performs a look-ahead polling operation for each of the programming idioms. If each of the look-ahead polling operations fails, then the wake-and-go mechanism updates a wake-and-go array with a target address associated with the event for each recognized programming idiom. | 07-14-2011 |
20110173633 | Task migration system and method thereof - A task migration system is provided which transmits a migration request signal for a plurality of first tasks to a migration manager using a resource manager, transmits information used in response to the migration request signal from a migration initiation handler to the migration manager when a first task, of which a migration point is in a capture ready state, among the plurality of first tasks is received from a processor, and captures, using the migration manager, the migration point of the first task in the capture ready state, in response to a migration request signal for the first task in the capture ready state, so that the first task with the captured migration point migrates to a second task. | 07-14-2011 |
20110173634 | Synchronizing Multiple Threads Efficiently - In one embodiment, the present invention includes a method of assigning a location within a shared variable for each of multiple threads and writing a value to a corresponding location to indicate that the corresponding thread has reached a barrier. In such manner, when all the threads have reached the barrier, synchronization is established. In some embodiments, the shared variable may be stored in a cache accessible by the multiple threads. Other embodiments are described and claimed. | 07-14-2011 |
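The barrier scheme this abstract describes — each thread signals arrival by writing to its own assigned location within a shared variable, with synchronization established once every location is written — can be sketched as below. The class and slot layout are illustrative assumptions; the patent does not prescribe an API.

```python
import threading

class SlotBarrier:
    """Barrier where each thread marks arrival by writing to its own
    assigned slot in a shared array (one location per thread)."""

    def __init__(self, num_threads):
        self.slots = [0] * num_threads   # the shared variable, one slot per thread
        self.cond = threading.Condition()

    def arrive(self, thread_index):
        with self.cond:
            self.slots[thread_index] = 1  # this thread has reached the barrier
            if all(self.slots):           # synchronization established when every slot is set
                self.cond.notify_all()
            else:
                self.cond.wait_for(lambda: all(self.slots))

order = []
barrier = SlotBarrier(3)

def worker(i):
    barrier.arrive(i)
    order.append(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(order))  # all three threads passed the barrier: [0, 1, 2]
```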
20110209158 | ANALYSIS OF SHORT TERM CPU SPIKES IN AN OPERATING SYSTEM KERNEL - A profiler may analyze processes being run by a processor. The profiler may include logic to periodically sample a value of an instruction pointer that indicates an instruction in the first process that is currently being executed by the processor and logic to update profile data based on the sampled value. The profiler may additionally include logic to determine, in response to a context switch that includes the operating system switching the active process from the first process to another of the plurality of processes, whether the first process executes for greater than a first length of time; logic to stop operation of the profiler when the first process executes for greater than the first length of time; and logic to clear the profile data when the first process fails to execute for greater than the first length of time. | 08-25-2011 |
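The profiler policy in this abstract — keep the sampled data and stop profiling when a process ran longer than a threshold at the context switch, otherwise clear the data — can be modeled in a few lines. This is an illustrative reimplementation of the described logic, not code from the patent.

```python
class SpikeProfiler:
    """Keeps sampled profile data only for CPU bursts longer than a threshold:
    short runs are discarded at the context switch; a long run stops the
    profiler so the captured spike can be inspected."""

    def __init__(self, min_spike):
        self.min_spike = min_spike
        self.samples = []
        self.running = True

    def sample(self, instruction_pointer):
        if self.running:
            self.samples.append(instruction_pointer)

    def on_context_switch(self, run_time):
        if run_time > self.min_spike:
            self.running = False      # spike detected: freeze the captured profile
        else:
            self.samples.clear()      # short run: not a spike, discard its data

p = SpikeProfiler(min_spike=10)
p.sample(0x4000); p.sample(0x4010)
p.on_context_switch(run_time=3)       # too short: data cleared
print(len(p.samples))                 # 0
p.sample(0x4020)
p.on_context_switch(run_time=25)      # spike: profiler stops, data kept
print(p.running, len(p.samples))      # False 1
```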
20110209159 | CONTEXTUAL CORRELATION ENGINE - Embodiments of the present invention are directed to a communication system that provides various automated operations, including linking applications and metadata across computational devices, using a stimulus to automatically find and launch associative and/or contextual materials and/or information required to conduct a work session without manually having to locate and launch each of these materials and/or information, and, by monitoring user behavior, creating and maintaining tokens defining the state of an instance of a workflow for later workflow resumption. | 08-25-2011 |
20110239225 | APPARATUS AND METHOD FOR ADAPTIVE CONTEXT SWITCHING SCHEDULING SCHEME FOR FAST BLOCK INPUT AND OUTPUT - Provided is a method and apparatus for an adaptive context switching for a fast block input/output. The adaptive context switching method may include: requesting, by a process, an input/output device to perform an input/output of data; comparing a Central Processing Unit (CPU) effectiveness based on whether the context switching is performed; and performing the input/output through the context switching to a driver context of the input/output device, or directly performing, by the process, the input/output based on a comparison result of the CPU effectiveness. | 09-29-2011 |
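The CPU-effectiveness comparison this abstract describes — deciding per request whether to context-switch to the driver or poll directly — reduces to comparing the expected I/O wait against the cost of switching away and back. The cost model below is a hypothetical sketch, not the patent's actual comparison.

```python
def choose_io_strategy(expected_io_wait_us, context_switch_cost_us):
    """Compare CPU effectiveness of the two paths: yielding the CPU pays the
    context-switch cost twice (switch out and back in), while polling burns
    the CPU for the whole expected wait. (Hypothetical cost model.)"""
    yield_cost = 2 * context_switch_cost_us
    if expected_io_wait_us > yield_cost:
        return "context-switch"   # long wait: hand the CPU to another task
    return "poll"                 # short wait: polling wastes less CPU time

print(choose_io_strategy(expected_io_wait_us=500, context_switch_cost_us=30))  # context-switch
print(choose_io_strategy(expected_io_wait_us=20, context_switch_cost_us=30))   # poll
```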
20110271287 | CONTEXT-BASED COMMUNICATION SERVICE - A method for providing a context-based service to a terminal of a communication network, includes, at a context server cooperating with the communication network: a) receiving a query from a service application suitable for implementing the context based service, the query indicating that the context server should perform an action when a query condition is fulfilled, the query condition referring to one or more attributes of derived context information indicative of a context of the terminal; b) generating a query evaluation trigger indicating that the query condition should be evaluated when the derived context information is updated; c) identifying raw context information allowing to derive the derived context information; d) generating a calculation trigger indicating that the derived context information should be calculated when an update of the raw context information is received from the terminal; e) receiving from the terminal an update of the raw context information and, according to the calculation trigger, calculating a new value of the derived context information according to the update; and f) according to the query evaluation trigger, evaluating the query condition by using the new value and, if the query condition is fulfilled, performing the action. | 11-03-2011 |
20110296430 | CONTEXT AWARE DATA PROTECTION - A method, system, and computer usable program product for context aware data protection. Information about an access context is received in a data processing system. A resource affected by the access context is identified. The identification of the resource may include deriving knowledge about resource by making an inference from a portion of contents of the resource that the access context affects the resource, making an inference that the access context affects a second resource thereby inferring that the resource has to be modified, determining that the access context is relevant to the resource, or a combination thereof. The resource is received. A policy that is applicable to the access context is identified. A part of the resource to modify according to the policy is determined. The part is modified according to the policy and the access context to form a modified resource. The modified resource is transmitted. | 12-01-2011 |
20110314480 | Apparatus, System, And Method For Persistent User-Level Thread - Embodiments of the invention provide a method of creating, based on an operating-system-scheduled thread running on an operating-system-visible sequencer and using an instruction set extension, a persistent user-level thread to run on an operating-system-sequestered sequencer independently of context switch activities on the operating-system-scheduled thread. The operating-system-scheduled thread and the persistent user-level thread may share a common virtual address space. Embodiments of the invention may also provide a method of causing a service thread running on an additional operating-system-visible sequencer to provide operating system services to the persistent user-level thread. Embodiments of the invention may further provide apparatus, system, and machine-readable medium thereof. | 12-22-2011 |
20120023505 | APPARATUS AND METHOD FOR THREAD SCHEDULING AND LOCK ACQUISITION ORDER CONTROL BASED ON DETERMINISTIC PROGRESS INDEX - Provided is a method and apparatus for ensuring a deterministic execution characteristic of an application program to perform data processing and execute particular functions in a computing environment using a micro architecture. A lock controlling apparatus based on a deterministic progress index (DPI) may include a loading unit to load a DPI of a first core and a DPI of a second core among DPIs of a plurality of cores at a lock acquisition point in time of each thread, a comparison unit to compare the DPI of the first core and the DPI of the second core, and a controller to assign a lock to a thread of the first core when the DPI of the first core is less than the DPI of the second core and when the second core corresponds to a last core to be compared among the plurality of cores. | 01-26-2012 |
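The lock-arbitration rule this abstract describes — grant the lock to the core whose deterministic progress index (DPI) is smallest among the requesters — can be sketched as a one-line comparison. The tie-break on core id is an assumption; the abstract does not specify one.

```python
def grant_lock(dpis, requesters):
    """Among the cores requesting the lock, grant it to the one whose
    deterministic progress index (DPI) is smallest, i.e. the thread that has
    logically made the least progress. Hypothetical tie-break: lowest core id."""
    return min(requesters, key=lambda core: (dpis[core], core))

dpis = {0: 120, 1: 95, 2: 95, 3: 200}
print(grant_lock(dpis, requesters=[0, 1, 3]))  # core 1 has the smallest DPI
print(grant_lock(dpis, requesters=[1, 2]))     # DPI tie: lowest core id wins
```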
20120047516 | CONTEXT SWITCHING - The disclosure relates generally to techniques, methods and apparatus for controlling context switching at a central processing unit. Alternatively, methods and apparatus are provided for providing security to memory blocks. Alternatively, methods and apparatus are provided for enabling transactional processing using a multi-core device. | 02-23-2012 |
20120072920 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING APPARATUS CONTROL METHOD - An information processing apparatus having a processor is controlled to execute a procedure of reading, from the memory, attribute information indicating a usage frequency of a register used by a process to be executed as a next process by the processor when the processor switches a process currently being executed, saving a value of the register used by the next process to be executed by the processor to the memory when the usage frequency of the register indicated by the attribute information is larger than a certain frequency, reading, from the memory, owner information indicating a process using the register to be used by the next process when the usage frequency of the register indicated by the attribute information is larger than the certain frequency, and restoring a register value saved in the memory to the register when the owner information indicates a process other than the next process. | 03-22-2012 |
20120079503 | Two-Level Scheduler for Multi-Threaded Processing - One embodiment of the present invention sets forth a technique for scheduling thread execution in a multi-threaded processing environment. A two-level scheduler maintains a small set of active threads called strands to hide function unit pipeline latency and local memory access latency. The strands are a sub-set of a larger set of pending threads that is also maintained by the two-level scheduler. Pending threads are promoted to strands and strands are demoted to pending threads based on latency characteristics. The two-level scheduler selects strands for execution based on strand state. The longer latency of the pending threads is hidden by selecting strands for execution. When the latency for a pending thread is expired, the pending thread may be promoted to a strand and begin (or resume) execution. When a strand encounters a latency event, the strand may be demoted to a pending thread while the latency is incurred. | 03-29-2012 |
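The promote/demote cycle between the small strand set and the larger pending pool can be modeled with two queues. The class name, pool sizes, and FIFO promotion order below are illustrative assumptions, not details from the patent.

```python
from collections import deque

class TwoLevelScheduler:
    """Toy two-level scheduler: a small set of active 'strands' hides short
    latencies; longer-latency threads wait in the pending pool."""

    def __init__(self, max_strands):
        self.max_strands = max_strands
        self.strands = deque()    # small active set, cheap to select from
        self.pending = deque()    # larger pool of waiting threads

    def add(self, thread_id):
        if len(self.strands) < self.max_strands:
            self.strands.append(thread_id)
        else:
            self.pending.append(thread_id)

    def on_latency_event(self, thread_id):
        """A strand hit a long-latency event: demote it and promote a
        pending thread into the freed strand slot."""
        self.strands.remove(thread_id)
        if self.pending:
            self.strands.append(self.pending.popleft())
        self.pending.append(thread_id)

sched = TwoLevelScheduler(max_strands=2)
for t in ["A", "B", "C"]:
    sched.add(t)
print(list(sched.strands), list(sched.pending))  # ['A', 'B'] ['C']
sched.on_latency_event("A")                      # A stalls: demoted, C promoted
print(list(sched.strands), list(sched.pending))  # ['B', 'C'] ['A']
```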
20120084790 | SCHEDULING THREADS IN A PROCESSOR - Guiding OS thread scheduling in multi-core and/or multi-threaded microprocessors by: determining, for each thread among the active threads, the power consumed by each instruction type associated with an instruction executed by the thread during the last context switch interval; determining for each thread among the active threads, the power consumption expected for each instruction type associated with an instruction scheduled by said thread during the next context switch interval; generating at least one combination of N threads among the active threads (M), and for each generated combination determining if the combination of N threads satisfies a main condition related to the power consumption per instruction type expected for each thread of the thread combination during the next context switch interval and to the thread power consumption per instruction type determined for each thread of the thread combination during the last context switch interval; and selecting a combination of N threads. | 04-05-2012 |
20120192202 | Context Switching On A Network On Chip - A network on chip (NOC) that includes IP blocks, routers, memory communications controllers, and network interface controllers, each IP block adapted to the network by an application messaging interconnect including an inbox and an outbox, one or more of the IP blocks including computer processors supporting a plurality of threads, the NOC also including an inbox and outbox controller configured to set pointers to the inbox and outbox, respectively, that identify valid message data for a current thread; and software running in the current thread that, upon a context switch to a new thread, is configured to: save the pointer values for the current thread, and reset the pointer values to identify valid message data for the new thread, where the inbox and outbox controller are further configured to retain the valid message data for the current thread in the boxes until context switches again to the current thread. | 07-26-2012 |
20120198471 | FAIR SCALABLE READER-WRITER MUTUAL EXCLUSION - Implementing fair scalable reader writer mutual exclusion for access to a critical section by a plurality of processing threads is accomplished by creating a first queue node for a first thread, the first queue node representing a request by the first thread to access the critical section; setting at least one pointer within a queue to point to the first queue node, the queue representing at least one thread desiring access to the critical section; waiting until a condition is met, the condition comprising the first queue node having no preceding write requests as indicated by at least one predecessor queue node on the queue; permitting the first thread to enter the critical section in response to the condition being met; and causing the first thread to release a spin lock, the spin lock acquired by a second thread of the plurality of processing threads. | 08-02-2012 |
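The queueing condition this abstract states — a request enters the critical section once it has no preceding write requests in the queue — implies readers run concurrently with earlier readers but never overtake an earlier writer. A minimal sketch of that admission rule, as an illustrative model rather than the patent's spin-lock implementation:

```python
def can_enter(queue, index):
    """Fair admission rule from the abstract: a request may enter the critical
    section once no *preceding* write request remains in the queue. Readers
    ('R') thus batch together; a writer ('W') needs the head of the queue."""
    kind = queue[index]
    if kind == "W":
        return index == 0            # a writer requires exclusive access
    return "W" not in queue[:index]  # a reader only waits for earlier writers

queue = ["R", "R", "W", "R"]
print([can_enter(queue, i) for i in range(len(queue))])  # [True, True, False, False]
```

Both leading readers may enter together; the writer and the reader queued behind it must wait, which preserves fairness (no reader starves the writer by overtaking it).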
20120198472 | TASK SWITCHING WITH STATE PRESERVATION FOR PROGRAMS RUNNING ON AN ELECTRONIC DEVICE - A method and system providing switching between a plurality of installed programs in a computer system. Embodiments include a jump function comprising the steps: (1) determining a jump program that is to be the next program to be run, possibly from a plurality of possible choices; (2) creating input data for the jump program based on data in the current program; (3) storing the program state of the currently running program into a context packet and saving the context packet to memory; (4) releasing temporary memory that is used by the program, so as to allow other programs to use the memory; (5) calling the jump program with the created input data as input and terminating the currently running program. | 08-02-2012 |
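The numbered steps of the jump function above can be sketched directly: save the current program's state as a context packet, derive input data for the jump target from the current data, then invoke the target. Program names and the context-packet format are hypothetical; step 4 (releasing temporary memory) is left to the host runtime here.

```python
programs = {}         # registry of installed programs (illustrative)
saved_contexts = {}   # context packets persisted per program

def jump(current_name, current_state, jump_name):
    """Jump-function sketch: store the current program state as a context
    packet (step 3), build input data for the target from the current data
    (step 2), then call the jump program (step 5)."""
    saved_contexts[current_name] = dict(current_state)
    input_data = {"from": current_name,
                  "shared": current_state.get("shared")}
    return programs[jump_name](input_data)

def notes_app(input_data):
    return f"notes opened by {input_data['from']} with {input_data['shared']}"

programs["notes"] = notes_app

result = jump("editor", {"cursor": 42, "shared": "draft.txt"}, "notes")
print(result)                    # notes opened by editor with draft.txt
print(saved_contexts["editor"])  # {'cursor': 42, 'shared': 'draft.txt'}
```

The saved context packet is what lets the terminated program resume later with its state intact.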
20120272247 | SOFTWARE EMULATION OF MASSIVE HARDWARE THREADING FOR TOLERATING REMOTE MEMORY REFERENCES - A method and system for software emulation of hardware support for multi-threaded processing using virtual hardware threads is provided. A software threading system executes on a node that has one or more processors, each with one or more hardware threads. The node has access to local memory and access to remote memory. The software threading system manages the execution of tasks of a user program. The software threading system switches between the virtual hardware threads representing the tasks as the tasks issue remote memory access requests while in user privilege mode. Thus, the software threading system emulates more hardware threads than the underlying hardware supports and switches the virtual hardware threads without the overhead of a context switch to the operating system or change in privilege mode. | 10-25-2012 |
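The core idea — suspend a virtual hardware thread at each remote memory reference and run another in the meantime, all without an OS context switch — maps naturally onto coroutines. The sketch below uses Python generators as the virtual threads; the memory map and task bodies are made up for illustration.

```python
def remote_load(addr):
    """A task yields at each remote memory reference; the value arrives on resume."""
    return (yield ("remote_read", addr))

def task(base):
    a = yield from remote_load(base)
    b = yield from remote_load(base + 1)
    return a + b

MEMORY = {100: 7, 101: 5, 200: 1, 201: 2}

def run(tasks):
    """Round-robin over 'virtual hardware threads': on every remote reference
    the task is suspended and another runs, hiding latency in user mode."""
    results = {}
    pending = [(name, g, None) for name, g in tasks]
    while pending:
        name, g, send_value = pending.pop(0)
        try:
            op, addr = g.send(send_value)            # runs until the next remote reference
            pending.append((name, g, MEMORY[addr]))  # the 'reply' arrives next time around
        except StopIteration as e:
            results[name] = e.value
    return results

print(run([("A", task(100)), ("B", task(200))]))  # {'A': 12, 'B': 3}
```

Switching here is an ordinary function return inside user code, which is the point of the patent's emulation: no privilege-mode change is needed per switch.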
20120297398 | Reduced data transfer during processor context switching - Data transfer during processor context switching is reduced, particularly in relation to a time-sharing microtasking programming model. Prior to switching context of a processor having local memory from a first to a second process, a portion of the local memory that does not require transfer to system memory for proper saving of data associated with the first process is determined. The context of the processor is then switched from the first to the second process, including transferring all of the local memory as the data associated with the first process, to system memory—except for the portion of the local memory that has been determined as not requiring saving to the system memory for proper saving of the data associated with the first process. Therefore, switching the context from the first to the second process results in a reduction of data transferred from the local memory to the system memory. | 11-22-2012 |
20130036426 | INFORMATION PROCESSING DEVICE AND TASK SWITCHING METHOD - Disclosed is an information processing device and a task switching method that can reduce the time required for switching of tasks in a plurality of coprocessors. The information processing device includes a processor core; coprocessors including operation units that perform operation in response to a request from the processor core and operation storage units that store the contents of operation of the operation units, save storage units that store the saved contents of operation, a task switching control unit that outputs a save/restore request signal when switching a task on which operation is performed by the coprocessors, and save/restore units that perform at least one of saving of the contents of operation in the operation storage units to the save storage units and restoration of the contents of operation in the save storage units to the operation storage units in response to the save/restore request signal. | 02-07-2013 |
20130061239 | System and Method for Operating a Processor - A method and system are provided that determine a likelihood that at least one special purpose register (SPR) will be required during execution of a thread; after determining that the SPR is not likely required during execution of the thread, set a flag for the thread to indicate that the SPR is not required; and after determining that the SPR is likely required during execution of the thread, set the flag to indicate that the SPR is required. | 03-07-2013 |
20130074096 | Hierarchical Contexts to Drive Live Sensor Applications - A method for operating a sensor based application includes receiving a context hierarchy for the sensor based application, the context hierarchy comprising a plurality of contexts, wherein each of the contexts is assigned a level of interest and a priority, reading the context hierarchy and discovering at least one sensor associated with each of the plurality of contexts, and reading at least one value of each of the sensors, and applying the values. | 03-21-2013 |
20130081055 | TASK PROCESSOR - A task processor includes a CPU, a save circuit, and a task control circuit. The task control circuit is provided with a task selection circuit and state storage units associated with respective tasks. When executing a predetermined system call instruction, the CPU notifies the task control circuit accordingly. When informed of the execution of a system call instruction, the task control circuit selects a task to be subsequently executed in accordance with an output from the selection circuit. When an interrupt circuit receives a high-speed interrupt request signal, the task switching circuit controls the state transition of a task by executing an interrupt handling instruction designated by the interrupt circuit. | 03-28-2013 |
20130097613 | APPARATUS AND METHOD FOR THREAD PROGRESS TRACKING - Provided is a method and apparatus for measuring a progress or a performance of an application program in a computing environment using a micro-architecture. An apparatus for thread progress tracking may select a thread included in an application program, may determine, based on a predetermined criterion, whether an execution scheme for at least one instruction included in the thread corresponds to an effective execution scheme in which an execution time is uniform or a non-effective execution scheme in which a delayed cycle is included and the execution time is non-uniform, and may generate an effective progress index (EPI) by accumulating an execution time of an instruction executed by the effective execution scheme other than an instruction executed by the non-effective execution scheme. | 04-18-2013 |
20130117760 | Software-Assisted Instruction Level Execution Preemption - One embodiment of the present invention sets forth a technique for instruction level execution preemption. Preempting at the instruction level does not require any draining of the processing pipeline. No new instructions are issued and the context state is unloaded from the processing pipeline. Any in-flight instructions that follow the preemption command in the processing pipeline are captured and stored in a processing task buffer to be reissued when the preempted program is resumed. The processing task buffer is designated as a high priority task to ensure the preempted instructions are reissued before any new instructions for the preempted context when execution of the preempted context is restored. | 05-09-2013 |
20130152105 | LOCK FREE USE OF NON-PREEMPTIVE SYSTEM RESOURCE - A computer-implemented method for lock-free use of a non-preemptive system resource by a preemptive thread, which may be interrupted. The method comprises registering a non-preemptive system resource and a first level reclaim handler for the non-preemptive system resource with the kernel of an operating system, registering a second level reclaim handler with the kernel, wherein the second level reclaim handler is included in an application program, and running the application program as a preemptive thread using the non-preemptive system resource. The first level reclaim handler is code that is a part of the implementation of the non-preemptive system resource in the kernel. The second level reclaim handler is code that is part of the application and is registered with the kernel before the application uses the non-preemptive system resource. The method enables a preemptive thread using a non-preemptive system resource to be preempted without crashing. | 06-13-2013 |
20130179897 | Thread Selection During Context Switching On A Plurality Of Compute Nodes - Methods, apparatus, and products are disclosed for thread selection during context switching on a plurality of compute nodes that includes: executing, by a compute node, an application using a plurality of threads of execution, including executing one or more of the threads of execution; selecting, by the compute node from a plurality of available threads of execution for the application, a next thread of execution in dependence upon power characteristics for each of the available threads; determining, by the compute node, whether criteria for a thread context switch are satisfied; and performing, by the compute node, the thread context switch if the criteria for a thread context switch are satisfied, including executing the next thread of execution. | 07-11-2013 |
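The selection step this abstract describes — choose the next thread of execution from the available set in dependence upon per-thread power characteristics, then switch only if the switch criteria are satisfied — can be sketched as below. The lowest-power policy and the profile values are assumptions; the abstract does not fix a particular selection function.

```python
def select_next_thread(available, power_profile):
    """Pick the next thread of execution using its power characteristics;
    here the (assumed) policy is simply the lowest expected power draw."""
    return min(available, key=lambda t: power_profile[t])

power_profile = {"t0": 4.2, "t1": 1.8, "t2": 3.1}  # watts, illustrative

def context_switch(current, available, power_profile, criteria_met):
    if not criteria_met:             # criteria for a thread context switch not satisfied
        return current
    return select_next_thread(available, power_profile)

print(context_switch("t0", ["t1", "t2"], power_profile, criteria_met=True))   # t1
print(context_switch("t0", ["t1", "t2"], power_profile, criteria_met=False))  # t0
```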
20130191847 | SAVING PROGRAM EXECUTION STATE - Techniques are described for managing distributed execution of programs. In at least some situations, the techniques include decomposing or otherwise separating the execution of a program into multiple distinct execution jobs that may each be executed on a distinct computing node, such as in a parallel manner with each execution job using a distinct subset of input data for the program. In addition, the techniques may include temporarily terminating and later resuming execution of at least some execution jobs, such as by persistently storing an intermediate state of the partial execution of an execution job, and later retrieving and using the stored intermediate state to resume execution of the execution job from the intermediate state. Furthermore, the techniques may be used in conjunction with a distributed program execution service that executes multiple programs on behalf of multiple customers or other users of the service. | 07-25-2013 |
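The suspend/resume cycle this abstract describes — persist an intermediate state of a partially executed job, then later retrieve it and continue from that state — can be sketched with a checkpoint file. The job (summing a list) and the JSON state format are illustrative assumptions.

```python
import json, os, tempfile

def run_job(data, state=None):
    """Resumable execution job: sums a list element by element, carrying its
    progress in `state`. Returns (done, state); a stored state lets a later
    call resume exactly where execution stopped."""
    state = state or {"index": 0, "total": 0}
    while state["index"] < len(data):
        state["total"] += data[state["index"]]
        state["index"] += 1
        if state["index"] == 2:         # simulate a temporary termination mid-run
            return False, state
    return True, state

checkpoint = os.path.join(tempfile.mkdtemp(), "job.json")
data = [10, 20, 30, 40]

done, state = run_job(data)             # partial execution
with open(checkpoint, "w") as f:        # persistently store the intermediate state
    json.dump(state, f)

with open(checkpoint) as f:             # later: retrieve the stored state...
    state = json.load(f)
done, state = run_job(data, state)      # ...and resume from it
print(done, state["total"])             # True 100
```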
20130219408 | COMPUTER PROGRAM PRODUCT, AND INFORMATION PROCESSING APPARATUS AND METHOD - According to an embodiment, a computer program product includes a computer-readable medium including a program which, when executed by a computer, causes a plurality of modules to be run by the computer. The computer includes a memory having a shared area, which is an area accessible to only those modules which run cooperatively and storing therein execution module identifiers. Each of the modules includes a first operation configured to store, just prior to a switchover of operations to an other module that runs cooperatively, an identifier of the other module as the execution module identifier in the shared area; and a second operation configured to execute, when the execution module identifier stored in the shared area matches with an identifier of its own module immediately after a switchover of operations from the other module, a function inside its own module. | 08-22-2013 |
20130305260 | SYSTEM AND METHOD FOR DETERMINISTIC CONTEXT SWITCHING IN A REAL-TIME SCHEDULER - A system and method deterministically switches context in a real-time scheduler to guarantee schedule periodicity. The method includes determining a time slice for each of the plurality of processes. The method includes determining a time slice switch duration between consecutive ones of the time slices. The method includes determining a starting point for each time slice. The method includes generating a schedule as a function of the time slices, the time slice switch durations, and the starting points of the time slices. The schedule includes an order for each of the time slices for a respective one of the plurality of processes. Each of the time slices and each of the time slice switch durations are required to run for their entire duration to guarantee a periodicity of the schedule. | 11-14-2013 |
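The schedule construction above — time slices separated by fixed switch durations, each running for its entire duration so the period is guaranteed — can be laid out arithmetically. Process names and durations below are illustrative.

```python
def build_schedule(time_slices, switch_duration):
    """Lay out a deterministic cyclic schedule: each process's slice starts at
    a fixed offset, separated by a full-length switch interval. Because slices
    and switch intervals always run to completion, the period is constant."""
    schedule = []
    t = 0
    for name, length in time_slices:
        schedule.append((name, t, t + length))   # (process, start, end)
        t += length + switch_duration            # the switch always takes its full duration
    return schedule, t                           # t is the guaranteed period

slices = [("P1", 5), ("P2", 3), ("P3", 4)]
schedule, period = build_schedule(slices, switch_duration=1)
print(schedule)   # [('P1', 0, 5), ('P2', 6, 9), ('P3', 10, 14)]
print(period)     # 15
```

Every cycle repeats with the same starting points, which is what guarantees the schedule's periodicity.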
20130347003 | Intelligent Service Management and Process Control Using Policy-Based Automation - Mechanisms are provided for dynamically determining one or more automation levels for tasks of a workflow. The mechanisms receive a workflow from a source component and receiving context and state information for an environment in which the workflow is to be performed. One or more tasks and associated task attributes are identified in the workflow and applying one or more automation rules to the context and state information and the task attributes to generate one or more automation level settings from the one or more tasks. The one or more tasks are performed in the environment in accordance with the one or more automation level settings. The automation level settings specify a degree of automation to be used when performing the one or more tasks. | 12-26-2013 |
20140013333 | CONTEXT-STATE MANAGEMENT - Extended features such as registers and functions within processors are made available to operating systems (OS) using an extended-state driver and by modifying instruction set extensions, such as XSAVE. A map-table designates a correspondence between memory locations for storing data relating to extended features not supported by the OS and called by an application. As a result, applications may utilize processor resources which are unsupported by the OS. | 01-09-2014 |
20140019990 | INTEGRATED CIRCUIT DEVICE AND METHOD FOR ENABLING CROSS-CONTEXT ACCESS - An integrated circuit device comprising an instruction processing module for performing operations on data in accordance with received instructions. The instruction processing module comprises a context selector unit arranged to selectively provide access to at least one process attribute within a plurality of process contexts in accordance with at least one context selector value received thereby. The instruction processing module is arranged to receive an instruction comprising a context indication for a process attribute with which an operation is to be performed, provide the context selector value based at least partly on the context indication to the context selector unit, and execute the operation to be performed with the process attribute for at least one process context to which the context selector unit provides access in accordance with the context selector value. | 01-16-2014 |
20140019991 | ENHANCED MICROPROCESSOR OR MICROCONTROLLER - A microcontroller device has a central processing unit (CPU); a data memory coupled with the CPU divided into a plurality of memory banks; and a plurality of special function registers and general purpose registers which may be memory-mapped, wherein at least the following special function registers are memory-mapped to all memory banks: a status register, a bank select register, a plurality of indirect memory address registers, a working register, and a program counter high latch. Upon occurrence of a context switch, the CPU is operable to automatically save the content of the status register, the bank select register, the plurality of indirect memory address registers, the working register, and the program counter high latch, and upon return from the context switch restores the content of the status register, the bank select register, the plurality of indirect memory address registers, the working register, and the program counter high latch. | 01-16-2014 |
20140075450 | MULTI-THREADED PROCESSING WITH REDUCED CONTEXT SWITCHING - Multi-threaded processing with reduced context switching is disclosed. Context switches may be avoided through the use of pre-emption notification, a pre-emption wait time attribute and a no-context-save yield. | 03-13-2014 |
20140082632 | COOPERATIVE PREEMPTION - Preempting the execution of a thread is disclosed. Preempting includes receiving an indication that a preemption of the thread is desired and context switching the thread out at a thread safe point in the event that a thread safe point is reached. | 03-20-2014 |
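The safe-point discipline in 20140082632 can be illustrated with a small sketch. This is a hedged stand-in, not the patented mechanism: the class, flag, and step counter are invented, and real implementations would perform an actual register/stack context switch rather than record a step number.

```python
# Illustrative sketch of cooperative preemption: a worker honors a posted
# preemption request only when it reaches a thread-safe point, where its
# shared state is consistent. All names here are hypothetical.

class CooperativeWorker:
    def __init__(self):
        self.preempt_requested = False
        self.switched_out_at = None   # step at which the context switch occurred

    def request_preemption(self):
        self.preempt_requested = True  # indication that preemption is desired

    def safe_point(self, step):
        # Only honor the request at a safe point.
        if self.preempt_requested and self.switched_out_at is None:
            self.switched_out_at = step  # context switch the thread out here

    def run(self, steps):
        for step in range(steps):
            # ... a unit of work whose intermediate state is NOT safe ...
            self.safe_point(step)        # a thread-safe point is reached
            if self.switched_out_at is not None:
                return

w = CooperativeWorker()
w.request_preemption()  # preemption desired before the worker starts
w.run(10)               # worker switches out at its first safe point
```

The point of the pattern is that the requester never interrupts the worker mid-operation; the switch is deferred until the next safe point.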
20140130060 | USER-LEVEL RE-INITIALIZATION INSTRUCTION INTERCEPTION - A data processing system comprising an operating system for supporting processes, such that the processes are associated with one or more resources, the operating system being arranged to police the accessing of resources by processes so as to inhibit a process from accessing resources with which it is not associated. Part of this system is an interface for interfacing between each process and the operating system and a memory for storing state information for at least one process. The interface may be arranged to analyze instructions from the processes to the operating system, and upon detecting an instruction to re-initialize a process, cause state information corresponding to the pre-existing state information to be stored in the memory as state information for the re-initialized process and to be associated with the resource. | 05-08-2014 |
20140149993 | Application Load Adaptive Multi-stage Parallel Data Processing Architecture - Systems and methods provide an extensible, multi-stage, realtime application program processing load adaptive, manycore data processing architecture shared dynamically among instances of parallelized and pipelined application software programs, according to processing load variations of said programs and their tasks and instances, as well as contractual policies. The invented techniques provide, at the same time, both application software development productivity, through presenting for software a simple, virtual static view of the actually dynamically allocated and assigned processing hardware resources, together with high program runtime performance, through scalable pipelined and parallelized program execution with minimized overhead, as well as high resource efficiency, through adaptively optimized processing resource allocation. | 05-29-2014 |
20140149994 | PARALLEL COMPUTER AND CONTROL METHOD THEREOF - A disclosed parallel computer includes plural nodes, and one node of the plural nodes collects information concerning a state of progress of barrier synchronization from each of the plural nodes, upon detecting that execution of a program for a job is stopped in each of the plural nodes. And, the one node of the plural nodes in the parallel computer determines a restart position of the program for the job in the one node, based on a stop position of the program for the job in the one node and the information collected from each of the plural nodes. | 05-29-2014 |
20140157287 | Optimized Context Switching for Long-Running Processes - Methods, systems, and computer readable storage media embodiments allow for low overhead context switching of threads. In embodiments, applications, such as, but not limited to, iterative data-parallel applications, substantially reduce the overhead of context switching by adding user- or higher-level-program configurability of the state to be saved upon preemption of an executing thread. These methods, systems, and computer readable storage media include aspects of running a group of threads on a processor, saving state information by respective threads in the group in response to a signal from a scheduler, and pre-empting running of the group after the saving of the state information. | 06-05-2014 |
20140173628 | DYNAMIC DEVICE VIRTUALIZATION - A system and method for providing dynamic device virtualization is herein disclosed. According to one embodiment, the computer-implemented method includes providing a device virtualization via context switching between a guest user process and a host. The guest user process has an address space comprising at least a guest kernel and a host kernel. The guest user process is capable of making a first direct call into the host via the guest kernel of the address space. The host is capable of making a second direct call to the guest user process. | 06-19-2014 |
20140189712 | Memory Address Collision Detection Of Ordered Parallel Threads With Bloom Filters - A semiconductor chip is described having a load collision detection circuit comprising a first bloom filter circuit. The semiconductor chip has a store collision detection circuit comprising a second bloom filter circuit. The semiconductor chip has one or more processing units capable of executing ordered parallel threads coupled to the load collision detection circuit and the store collision detection circuit. The load collision detection circuit and the store collision detection circuit are to detect younger stores for load operations of said threads and younger loads for store operations of said threads. | 07-03-2014 |
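The Bloom-filter membership test behind 20140189712 can be sketched in a few lines. This toy version uses two invented hash functions over a 64-bit field; it is not the patented circuit. The key property it demonstrates: a Bloom filter can report false positives (a "may collide" that is actually safe) but never false negatives, so a miss proves there is no address collision.

```python
# Toy Bloom filter: record store addresses of older ordered threads, then ask
# whether a younger load might touch one of them. Hash functions are invented.

class BloomFilter:
    def __init__(self, bits=64):
        self.bits = bits
        self.field = 0          # the bit field backing the filter

    def _hashes(self, addr):
        # Two cheap, hypothetical hash functions over the address.
        yield addr % self.bits
        yield (addr * 31 + 7) % self.bits

    def insert(self, addr):
        for h in self._hashes(addr):
            self.field |= 1 << h

    def may_contain(self, addr):
        # True means "possible collision"; False is a guaranteed miss.
        return all(self.field & (1 << h) for h in self._hashes(addr))

stores = BloomFilter()
stores.insert(0x1000)                 # an older thread stored to 0x1000
hit = stores.may_contain(0x1000)      # a younger load to 0x1000 must be flagged
miss = stores.may_contain(0x2A)       # an unrelated address misses here
```

In hardware the filter is cleared or rebuilt as threads retire; that lifecycle is omitted from the sketch.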
20140189713 | APPARATUS AND METHOD FOR INVOCATION OF A MULTI THREADED ACCELERATOR - A processor is described having logic circuitry of a general purpose CPU core to save multiple copies of context of a thread of the general purpose CPU core to prepare multiple micro-threads of a multi-threaded accelerator for execution to accelerate operations for the thread through parallel execution of the micro-threads. | 07-03-2014 |
20140201760 | Application Context Intercommunication for Mobile Devices - A method and system include a backend mobile development framework. The framework includes a library having a plurality of apps for mobile devices, the apps divided into groups of related apps. A communications module is operable to communicate with multiple mobile devices. A data store provides interconnectivity between apps in each group such that a first app has content information shared with a second app. | 07-17-2014 |
20140201761 | Context Switching with Offload Processors - A method for context switching of multiple offload processors is disclosed. The method can include receiving network packets for processing through a memory bus connected socket, organizing the network packets into multiple sessions for processing, suspending processing of at least one session by reading a cache state of at least one of the offload processors into a context memory by operation of a scheduling circuit, with virtual memory locations and physical cache locations being aligned, and subsequently directing transfer of the cache state to at least one of the offload processors for processing by operation of the scheduling circuit. | 07-17-2014 |
20140215488 | TASK PROCESSOR - A task processor includes a CPU, a save circuit, and a task control circuit. A task control circuit is provided with a task selection circuit and state storage units associated with respective tasks. When executing a predetermined system call instruction, the CPU notifies the task control circuit accordingly. When informed of the execution of a system call instruction, the task control circuit selects a task to be subsequently executed in accordance with an output from the selection circuit. When an interrupt circuit receives a high-speed interrupt request signal, the task switching circuit controls the state transition of a task by executing an interrupt handling instruction designated by the interrupt circuit. | 07-31-2014 |
20140282606 | META-APPLICATION MANAGEMENT IN A MULTITASKING ENVIRONMENT - Techniques are disclosed to identify concurrently used applications based on application state. Upon determining that usage of a plurality of applications, including a first state of a first application of the plurality of applications, satisfies a criterion for identifying concurrently used applications, the plurality of applications is designated as a first meta-application having a uniquely identifiable set of concurrently used applications. The first meta-application has an associated criterion for launching the first meta-application. Upon determining that the criterion for launching the first meta-application is satisfied, at least one of the plurality of applications is programmatically invoked. | 09-18-2014 |
20140282607 | SYSTEM MANAGEMENT AND INSTRUCTION COUNTING - Techniques for managing a plurality of threads on a multi-threading processing core. Embodiments provide an instruction count threshold condition that determines how many countable instructions of a thread the multi-threading processing core will execute before context switching to another one of the plurality of threads. A first plurality of instructions for a first one of the plurality of threads is processed on the multi-threading processing core. Embodiments determine, for each of the first plurality of instructions, whether the instruction is a countable instruction, wherein at least one of the first plurality of instructions is not a countable instruction. A count of the countable instructions is maintained. Upon determining that the instruction count threshold condition is satisfied, based on the maintained count, embodiments context switch the multi-threading processing core to process a second plurality of instructions for a second one of the plurality of threads. | 09-18-2014 |
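The counting policy in 20140282607 lends itself to a short sketch. This is an illustrative stand-in, not the patented logic: the instruction list, the "countable" flag, and the function name are all invented. The essential behavior is that only countable instructions advance the count, and the switch fires once the threshold condition is satisfied.

```python
# Illustrative sketch: execute one thread's instructions, counting only the
# "countable" ones, and context switch when the count reaches a threshold.

def run_until_switch(instructions, threshold):
    """instructions: list of (opcode, countable) pairs.
    Returns how many instructions were processed before the switch point."""
    count = 0
    for i, (opcode, countable) in enumerate(instructions, start=1):
        # ... execute opcode on the multi-threading core ...
        if countable:
            count += 1
        if count >= threshold:      # instruction count threshold satisfied
            return i                 # context switch to the next thread here
    return len(instructions)         # thread finished before the threshold

# "nop" is marked non-countable here, so it does not advance the count.
program = [("add", True), ("nop", False), ("mul", True), ("load", True)]
switched_after = run_until_switch(program, threshold=2)
```

With a threshold of 2, the switch occurs after the third instruction: the non-countable "nop" is executed but does not bring the count closer to the threshold.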
20140298352 | COMPUTER WITH PLURALITY OF PROCESSORS SHARING PROCESS QUEUE, AND PROCESS DISPATCH PROCESSING METHOD - A dispatcher stack is allocated to each of a plurality of processors sharing a run queue. Each processor, in process dispatch processing, saves in a switch-source process stack the context of a switch-source process (the process being run), saves in the dispatcher stack of each of the processors a dispatcher context, inserts the switch-source process into the run queue, removes a switch-destination process from the run queue, and, in addition, restores the context of the switch-destination process from the switch-destination process stack. | 10-02-2014 |
20140337858 | APPARATUS AND METHOD FOR ADAPTIVE CONTEXT SWITCHING SCHEDULING SCHEME FOR FAST BLOCK INPUT AND OUTPUT - Provided is a method and apparatus for adaptive context switching for fast block input/output. The adaptive context switching method may include: requesting, by a process, an input/output device to perform an input/output of data; comparing Central Processing Unit (CPU) effectiveness with and without the context switching; and performing the input/output through the context switching to a driver context of the input/output device, or directly performing, by the process, the input/output, based on a comparison result of the CPU effectiveness. | 11-13-2014 |
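The trade-off in 20140337858 can be sketched as a simple cost comparison. This is a hedged illustration, not the patented scheme: the fixed switch cost, the thresholding rule, and all names are invented. The idea it captures is that for a very fast block device, paying for a context switch out to the driver and back can cost more CPU time than simply letting the process perform the I/O directly.

```python
# Illustrative sketch: choose between context switching to the device driver
# and direct I/O by the process, based on expected I/O time. Figures invented.

SWITCH_COST_US = 4.0   # assumed combined cost of switching out and back in

def choose_io_path(expected_io_us):
    """Return 'context_switch' when sleeping frees more CPU time than the
    switch costs; otherwise 'direct_poll' for very fast block devices."""
    if expected_io_us > SWITCH_COST_US:
        return "context_switch"   # CPU is more effectively spent elsewhere
    return "direct_poll"          # switching would cost more than waiting

slow = choose_io_path(expected_io_us=100.0)  # e.g. a slower block device
fast = choose_io_path(expected_io_us=1.5)    # e.g. a very fast block device
```

A real implementation would measure or model the switch cost rather than hard-code it, and could re-evaluate the decision per device or per request size.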
20140351828 | APPARATUS AND METHOD FOR CONTROLLING MULTI-CORE SYSTEM ON CHIP - An apparatus and method for controlling a multi-core SoC including a main core and at least one sub-core are disclosed. The apparatus includes a determination unit, a storage unit, and a control unit. The determination unit determines whether or not to drive the sub-core by taking the performance or power of the multi-core SoC into consideration. The storage unit stores state information including a register of the main core or the sub-core in accordance with a determination of the determination unit. The control unit performs control so that the main core and the sub-core execute a sub-task, that is, a task of the sub-core, through exchange by sharing the state information. | 11-27-2014 |
20140359636 | MULTI-CORE SYSTEM PERFORMING PACKET PROCESSING WITH CONTEXT SWITCHING - A multi-core processing system includes a first processing core, a second processing core, and a task manager coupled to the first and second processing cores. The task manager is operable to receive context information of a task from the first processing core and provide the context information to the second processing core. The second processing core continues executing the task using the context information. | 12-04-2014 |
20140359637 | TASK CONTINUANCE ACROSS DEVICES - Architecture that facilitates a user experience for continuing computer and/or application tasks across user devices. Task status can be synchronized across devices via a cloud service or via a short-range wireless peer-to-peer (P2P). When applied to searching, for example, the user experience enables users to resume the same search session across devices in several ways. The disclosed architecture can also be extended to other tasks such as web browsing, online meetings, office application sessions, etc. The client application of each device collects the states of each application (e.g., document links, websites, online meeting information, etc.) as part of the synchronization, and uses the states to resume the same applications on different devices (e.g., open the same word processing document, a browser to the same websites, re-join online meetings, etc.). | 12-04-2014 |
20140366038 | MANAGING MULTI-APPLICATION CONTEXTS - A method and system for managing software application states selects a plurality of stateful applications for reinstatement at a later time. A set of data contexts is generated based on the selected applications. The set of data contexts is pushed onto a data stack. Thereafter the set of data contexts is popped from the data stack for reinstatement. Each step or function may be initiated automatically or through user input, and may be used in a single-user, multi-user or collaborative setting. | 12-11-2014 |
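The stack discipline in 20140366038 maps naturally onto a short sketch. This is a minimal illustration with invented names and placeholder contexts, not the patented system: a set of data contexts for concurrently selected applications is pushed as one unit, and the most recently pushed set is popped first for reinstatement.

```python
# Illustrative sketch: push a set of per-application data contexts onto a
# stack, then pop sets in LIFO order to reinstate them later.

class ContextStack:
    def __init__(self):
        self._stack = []

    def push(self, contexts):
        """contexts: dict mapping app name -> serializable state."""
        self._stack.append(dict(contexts))   # copy so later edits don't leak in

    def pop(self):
        """Remove and return the most recently pushed context set."""
        return self._stack.pop()

stack = ContextStack()
stack.push({"editor": {"file": "a.txt"}, "browser": {"url": "example.org"}})
stack.push({"editor": {"file": "b.txt"}})
restored = stack.pop()   # the most recent set is reinstated first
```

Each push or pop could equally be triggered automatically or by user input, as the abstract notes; the data structure is the same either way.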
20150135195 | COMPACTED CONTEXT STATE MANAGEMENT - Embodiments of an invention related to compacted context state management are disclosed. In one embodiment, a processor includes instruction hardware and state management logic. The instruction hardware is to receive a first save instruction and a second save instruction. The state management logic is to, in response to the first save instruction, save context state in an un-compacted format in a first save area. The state management logic is also to, in response to the second save instruction, save a compaction mask and context state in a compacted format in a second save area and set a compacted-save indicator in the second save area. The state management logic is also to, in response to a single restore instruction, determine, based on the compacted-save indicator, whether to restore context from the un-compacted format in the first save area or from the compacted format in the second save area. | 05-14-2015 |
20150150024 | METHOD OF DETECTING STACK OVERFLOWS AND PROCESSOR FOR IMPLEMENTING SUCH A METHOD - A method of detecting stack overflows includes the following steps: storing in at least one dedicated register at least one data item chosen from: a data item (SPHaut) indicating a maximum permitted value for a stack pointer, and a data item (SPBas) indicating a minimum permitted value for said stack pointer; effecting a comparison between a current value (SP) or past value (SPMin, SPMax) of said stack pointer and said data item or each of said data items; and generating a stack overflow exception if said comparison indicates that said current or past value of said stack pointer is greater than said maximum permitted value or less than said minimum permitted value. A processor for implementing such a method is also provided. | 05-28-2015 |
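The bounds check in 20150150024 is easy to sketch. This is an illustrative stand-in for the hardware comparison, using the abstract's own register names (SPHaut for the maximum permitted value, SPBas for the minimum) as Python parameters; the exception class and function name are invented.

```python
# Illustrative sketch: compare a stack pointer value against dedicated limit
# registers and raise a stack overflow exception when it leaves the window.

class StackOverflowException(Exception):
    pass

def check_stack_pointer(sp, sp_haut, sp_bas):
    """Raise if sp lies outside the permitted [sp_bas, sp_haut] window;
    the same check can be applied to past values (SPMin, SPMax)."""
    if sp > sp_haut or sp < sp_bas:
        raise StackOverflowException(
            f"SP {sp:#x} outside [{sp_bas:#x}, {sp_haut:#x}]")
    return True

ok = check_stack_pointer(0x7F00, sp_haut=0x8000, sp_bas=0x7000)
try:
    check_stack_pointer(0x6FFF, sp_haut=0x8000, sp_bas=0x7000)
    overflowed = False
except StackOverflowException:
    overflowed = True   # one byte below SPBas triggers the exception
```

In the patented processor this comparison runs in hardware on every relevant update, which is what makes overflow detection immediate rather than dependent on guard pages or polling.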
20150293779 | CONTROLLER SYSTEM WITH PEER-TO-PEER REDUNDANCY, AND METHOD TO OPERATE THE SYSTEM - Exemplary controllers in a system are associated with technical entities and are configured to selectively execute tasks in a primary mode when the controllers interact with the associated technical entities with respect to the tasks, and to execute tasks in a secondary mode when the controllers do not interact with the associated technical entities with respect to the task. The system distributes task instructions of a first task to a first controller that is configured to execute the first task in the primary mode, and to distribute the task instructions of the first task to a second controller that is configured to execute the first task in the secondary mode. The system distributes task instructions of a second task to the second controller that is configured to execute the second task in the primary mode. | 10-15-2015 |
20150301856 | TASK PROCESSOR - A task processor includes a CPU, a save circuit, and a task control circuit. A task control circuit is provided with a task selection circuit and state storage units associated with respective tasks. When executing a predetermined system call instruction, the CPU notifies the task control circuit accordingly. When informed of the execution of a system call instruction, the task control circuit selects a task to be subsequently executed in accordance with an output from the selection circuit. When an interrupt circuit receives a high-speed interrupt request signal, the task switching circuit controls the state transition of a task by executing an interrupt handling instruction designated by the interrupt circuit. | 10-22-2015 |
20150324224 | APPARATUS AND METHOD OF DATA CAPTURE - A method of capturing the state of a target program that is running within the environment of an operating system is provided. The method includes identifying threads associated with the target program, suspending threads associated with the target program, preserving data characterising the threads, and preserving data accessible by the threads when in operation. A method of changing the state of a target program that is running within the environment of an operating system is also provided. This method includes identifying threads associated with the target program, suspending threads associated with the target program, replacing data characterising the threads with previously preserved data, and replacing data accessible by the threads when in operation with previously preserved data. In either case, the threads are then resumed to allow the target program to continue operation. | 11-12-2015 |
20150355936 | METHOD AND SYSTEM FOR PERFORMING ADAPTIVE CONTEXT SWITCHING - Exemplary embodiments provide a method for managing a transaction for a memory module in a computer system in which the memory modules have latencies. A busyness level of the memory module for the transaction is determined. A projected response time for the transaction is predicted based on the busyness level. In some embodiments, whether to perform a context switching for the transaction is determined based on the projected response time and context switching policies. The context switching may be performed based on this determination. | 12-10-2015 |
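The two-step decision in 20150355936 — predict a response time from busyness, then apply a policy — can be sketched briefly. This is a hypothetical stand-in: the linear latency model and the single-threshold policy are invented placeholders for whatever model and policies an implementation would actually use.

```python
# Illustrative sketch: predict a memory module's response time from its
# busyness level, then decide whether the wait justifies a context switch.

def projected_response_time_us(base_latency_us, busyness):
    """Assumed model: response time grows linearly with busyness."""
    return base_latency_us * (1.0 + busyness)

def should_context_switch(projected_us, policy_threshold_us):
    """Assumed policy: switch only when the projected wait is long enough
    that the CPU can do useful work elsewhere in the meantime."""
    return projected_us >= policy_threshold_us

proj = projected_response_time_us(base_latency_us=10.0, busyness=3.0)
decision = should_context_switch(proj, policy_threshold_us=25.0)
```

A busier module yields a longer projected response time, tipping the policy toward switching; an idle module keeps the requester on-CPU to avoid switch overhead.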
20150363225 | CHECKPOINTING FOR A HYBRID COMPUTING NODE - According to an aspect, a method for checkpointing in a hybrid computing node includes executing a task in a processing accelerator of the hybrid computing node. A checkpoint is created in a local memory of the processing accelerator. The checkpoint includes state data to restart execution of the task in the processing accelerator upon a restart operation. Execution of the task is resumed in the processing accelerator after creating the checkpoint. The state data of the checkpoint are transferred from the processing accelerator to a main processor of the hybrid computing node while the processing accelerator is executing the task. | 12-17-2015 |
20150363227 | DATA PROCESSING UNIT AND METHOD FOR OPERATING A DATA PROCESSING UNIT - A data processing unit providing a core instruction set, wherein the core instruction set comprises a specific core instruction that is adapted to: receive data for specifying a hardware component to be called; call the hardware component for executing a job; perform a first context switch that suspends an actual task, wherein the actual task previously called the hardware component using the specific core instruction; and perform a second context switch that resumes the actual task when the hardware component has finished the job. A method for operating such a data processing unit is also provided. | 12-17-2015 |
20160041840 | AUTOMATED TICKETING - Methods, systems, and computer program products for automatically issuing travel documents. Tasks relating to issuance of travel documents are generated by an originating application in response to booking a travel service. The tasks are received and stored in a first queue until a triggering event, such as the arrival of a time for issuance of a document. In response to the triggering event, a task in the first queue may be placed in a second queue for transmission to an issuing application. The documents to be issued may be determined based on records in a passenger name record (PNR) stored in a database. The PNR may be determined based on the task. The PNR may be updated with information indicating whether task processing was successful. In the event of an error, information indicating the cause of the error may be added to the PNR. | 02-11-2016 |
20160062797 | SYSTEM AND METHOD FOR DYNAMICALLY MANAGED TASK SWITCH LOOKAHEAD - A processing system includes a processor pipeline, a detector circuit, and a task scheduler. The detector circuit includes a basic block detector circuit to determine that the processor pipeline received a first instruction of a first instance of a basic block, and to determine that a last-in-order instruction of the first instance of the basic block is a resource switch instruction (RSWI), and an indicator circuit to provide an indication in response to determining that the processor pipeline received the first instruction of a second instance of the basic block. The task scheduler initiates a resource switch, in response to the indication, at a time subsequent to the first instruction being received that is based on a cycle count that indicates a first number of processor cycles between receiving the first instruction and receiving the RSWI. | 03-03-2016 |
20160085583 | Multi-Threaded Processing of User Interfaces for an Application - A method performed at an electronic device with a display includes: processing tasks in an application program; at least partially processing a plurality of layout objects in the application program; in accordance with a determination that one or more predefined control criteria are satisfied, pausing the processing of the plurality of layout objects in the application program; while the processing of the plurality of layout objects in the application program is paused, processing system tasks; and, after processing the system tasks while the processing of the plurality of layout objects in the application program is paused, resuming the processing of the plurality of layout objects. | 03-24-2016 |
20160103701 | Storing and Resuming Application Runtime State - Execution of an application is suspended and the runtime state of the application is collected and persisted. Maintenance operations may then be performed on the computer that the application was executing upon. The runtime state might also be moved to another computer. In order to resume execution of the application, the runtime state of the application is restored. Once the runtime state of the application has been restored, execution of the application may be restarted from the point at which execution was suspended. A proxy layer might also be utilized to translate requests received from the application for resources that are modified after the runtime state of the application is persisted. | 04-14-2016 |
20160117191 | CONTROLLING EXECUTION OF THREADS IN A MULTI-THREADED PROCESSOR - Execution of threads in a processor core is controlled. The processor core supports simultaneous multi-threading (SMT) such that there can be effectively multiple logical central processing units (CPUs) operating simultaneously on the same physical processor hardware. Each of these logical CPUs is considered a thread. In such a multi-threading environment, it may be desirable for one thread to stop other threads on the processor core from executing. This may be in response to running a critical sequence or other sequence that needs the processor core resources or is manipulating processor core resources in a way that other threads would interfere with its execution. | 04-28-2016 |
20160132354 | APPLICATION SCHEDULING IN HETEROGENEOUS MULTIPROCESSOR COMPUTING PLATFORMS - Methods and apparatus to schedule applications in heterogeneous multiprocessor computing platforms are described. In one embodiment, information regarding performance (e.g., execution performance and/or power consumption performance) of a plurality of processor cores of a processor is stored (and tracked) in counters and/or tables. Logic in the processor determines which processor core should execute an application based on the stored information. Other embodiments are also claimed and disclosed. | 05-12-2016 |
20160203018 | RE-LAUNCHING CONTEXTUALLY RELATED APPLICATION SETS | 07-14-2016 |
20160203021 | STACK HANDLING USING MULTIPLE PRIMARY USER INTERFACES | 07-14-2016 |
20190146829 | HIGH PERFORMANCE CONTEXT SWITCHING FOR VIRTUALIZED FPGA ACCELERATORS | 05-16-2019 |
20190146832 | CONTEXT SWITCH BY CHANGING MEMORY POINTERS | 05-16-2019 |