50th week of 2015 patent application highlights part 47 |
Patent application number | Title | Published |
20150355918 | INFORMATION PROCESSING SYSTEM, COMPUTER PROGRAM PRODUCT, AND INFORMATION PROCESSING METHOD - An information processing system includes: an information processing apparatus including: a shared operation unit that performs verification of operation of inter-model common processing common to multiple models out of processes of an application with a first program for realizing operation common to the models, and sends a result of the operation verification to the application; a processing requesting unit that requests an external device to perform verification of operation of model-dependent processing specific to each model with a second program for realizing operation specific to each model; and an acquiring unit that acquires a result of the verification of operation of model-dependent processing from the external device, and sends the result to the application, and external devices that perform verification of operation of model-dependent processing specific to each model out of the processes of the application with the second program for realizing operation specific to each model. | 2015-12-10 |
20150355919 | System and Method for Real Time Virtualization - A system includes a plurality of compute modules and a first processor configured to implement a virtualization layer, where the virtualization layer is configured to support real time jobs. The system also includes a hardware support layer coupled between the plurality of compute modules and the virtualization layer, where the hardware support layer is configured to provide an interface between the virtualization layer and the plurality of compute modules. | 2015-12-10 |
20150355920 | SYSTEM AND METHODS FOR GENERATING AND MANAGING A VIRTUAL DEVICE - Embodiments of the present disclosure may be configured to permit development and validation of a device driver or a device application program by using improved virtual devices. Such improved virtual devices may facilitate driver development without use of physical devices or hardware prototypes. In various embodiments, advanced validation of a device-driver combination may be permitted that would be difficult to achieve even with a physical device. Certain embodiments also may detect inconsistencies between virtual and physical devices, which may be used to improve drivers and device application programs and increase compatibility of such drivers and device application programs with physical devices. | 2015-12-10 |
20150355921 | AVOIDING OR DEFERRING DATA COPIES - Methods and systems for avoiding or deferring data copies are disclosed. Using a virtual machine, it is determined whether a set of program code comprises references to a data object after an operation to generate a copy of the data object. If not, a set of optimized program code is generated in which the operation to copy the data object is replaced with an operation to update a reference. Using the virtual machine, it is determined whether the set of program code comprises an operation to generate a copy of a buffer object. If so, a set of further optimized program code is generated, comprising an allocation of one or more memory pages to store the buffer object with a copy-on-write parameter instead of the operation to generate the copy of the buffer object. | 2015-12-10 |
20150355922 | SELECTING A HOST FOR A VIRTUAL MACHINE USING A HARDWARE MULTITHREADING PARAMETER - A cloud manager monitors available resources on host computer systems, including a number of hardware threads supported by CPUs on the host computer systems. The cloud manager receives a request to provision a virtual machine (VM) that includes a hardware multithreading parameter that specifies the amount of hardware multithreading required on the host computer system. The cloud manager then selects a host computer system for the VM taking the hardware multithreading parameter into consideration. | 2015-12-10 |
20150355923 | CONFIGURING VIRTUAL MACHINES IN A CLOUD COMPUTING PLATFORM - There is provided a virtual machine control method that includes the ability to configure a virtual machine's behavior when certain conditions are met. The conditions may include configuring a virtual machine based on an amount of time of inactivity. The conditions may include configuring a virtual machine based on exceeding a cost for a given time frame. The control method removes the need to have a user manually monitor and shut down unused virtual machines. Accordingly, the virtual machine may be automatically commanded to shut down if it is inactive beyond a threshold amount of time. The control method may also provide a method of calculating the projected cost of running a virtual machine. | 2015-12-10 |
20150355924 | Decentralized Demand-Based Virtual Machine Migration Management - Embodiments perform decentralized virtual machine (VM) migration decisions. By comparing a set of VM-specific rules with current statistics (e.g., resource usage), one host determines whether to migrate the VM and lazily selects another host to receive the VM. The rules define, for example, threshold values for resource usage. The host makes the migration decision and performs the migration without input from a centralized server. In this manner, migration decisions are offloaded to migration modules executing on each host for reactive and/or proactive migration. Proactive migration involves migrating a VM before the VM violates its rules. | 2015-12-10 |
20150355925 | ADAPTIVE VIRTUAL MACHINE REQUEST APPROVER - An adaptive request handler (ARH) receives a virtual machine (VM) request from a user and determines whether to automatically approve the VM request using a tolerance that defines an allowable amount of deviation from preset resource specifications. In some embodiments, the ARH adaptively varies the tolerance based on one or more monitored factors, such as an aggregate system resource utilization by and/or a billing history of the user or a group that includes the user. In some embodiments, the VM request is based on a template selected by the user from among a plurality of templates eligible for automatic approval, wherein a plurality of tolerances each defines an allowable amount of deviation from preset resource specifications of a respective one of the eligible templates. The ARH may, in some embodiments, vary each of the plurality of tolerances independently based on one or more monitored factors. | 2015-12-10 |
20150355926 | SELECTING A HOST FOR A VIRTUAL MACHINE USING A HARDWARE MULTITHREADING PARAMETER - A cloud manager monitors available resources on host computer systems, including a number of hardware threads supported by CPUs on the host computer systems. The cloud manager receives a request to provision a virtual machine (VM) that includes a hardware multithreading parameter that specifies the amount of hardware multithreading required on the host computer system. The cloud manager then selects a host computer system for the VM taking the hardware multithreading parameter into consideration. | 2015-12-10 |
20150355927 | AUTOMATIC VIRTUAL MACHINE RESIZING TO OPTIMIZE RESOURCE AVAILABILITY - In one embodiment, a configuration associated with an application may be ascertained, where the configuration indicates a number of instances and a first instance type. Requests associated with the application may be routed among two or more sets of instances, where each of the two or more sets of instances have a different, corresponding instance type of two or more instance types including the first instance type. Metrics associated with the routing of requests to each of the two or more sets of instances may be obtained. The metrics may be analyzed to identify an optimal instance type for the application. Further requests associated with the application may be routed to a set of the number of instances having the optimal instance type. | 2015-12-10 |
20150355928 | PLACEMENT OF VIRTUAL CPUS USING A HARDWARE MULTITHREADING PARAMETER - A cloud manager monitors available resources on host computer systems, including a number of hardware threads supported by CPUs on the host computer systems. The cloud manager receives a request to provision a virtual machine (VM) that includes a hardware multithreading parameter that specifies whether hardware multithreading is allowed on the host computer system. The cloud manager then selects a host computer system for the VM taking the hardware multithreading parameter into consideration. The VM is then placed on the selected host computer system using the hardware multithreading parameter. | 2015-12-10 |
20150355929 | PROVISIONING VIRTUAL CPUS USING A HARDWARE MULTITHREADING PARAMETER IN HOSTS WITH SPLIT CORE PROCESSORS - A cloud manager monitors available resources on host computer systems, including a number of hardware threads supported by CPUs on the host computer systems and whether or not the CPUs have split core enabled. The cloud manager receives a request to provision a virtual machine (VM) that includes a hardware multithreading parameter that specifies whether hardware multithreading is allowed on the host computer system. The cloud manager then selects a host computer system for the VM taking into consideration the hardware multithreading parameter, the hardware threads supported by the CPU, and the split core settings. The VM is then placed on the selected host computer system using the hardware multithreading parameter. The result is more efficient utilization of CPU resources in a host for a virtual machine. | 2015-12-10 |
20150355930 | PLACEMENT OF VIRTUAL CPUS USING A HARDWARE MULTITHREADING PARAMETER - A cloud manager monitors available resources on host computer systems, including a number of hardware threads supported by CPUs on the host computer systems. The cloud manager receives a request to provision a virtual machine (VM) that includes a hardware multithreading parameter that specifies whether hardware multithreading is allowed on the host computer system. The cloud manager then selects a host computer system for the VM taking the hardware multithreading parameter into consideration. The VM is then placed on the selected host computer system using the hardware multithreading parameter. | 2015-12-10 |
20150355931 | PROVISIONING VIRTUAL CPUS USING A HARDWARE MULTITHREADING PARAMETER IN HOSTS WITH SPLIT CORE PROCESSORS - A cloud manager monitors available resources on host computer systems, including a number of hardware threads supported by CPUs on the host computer systems and whether or not the CPUs have split core enabled. The cloud manager receives a request to provision a virtual machine (VM) that includes a hardware multithreading parameter that specifies whether hardware multithreading is allowed on the host computer system. The cloud manager then selects a host computer system for the VM taking into consideration the hardware multithreading parameter, the hardware threads supported by the CPU, and the split core settings. The VM is then placed on the selected host computer system using the hardware multithreading parameter. The result is more efficient utilization of CPU resources in a host for a virtual machine. | 2015-12-10 |
20150355932 | ADAPTIVE VIRTUAL MACHINE REQUEST APPROVER - An adaptive request handler (ARH) receives a virtual machine (VM) request from a user and determines whether to automatically approve the VM request using a tolerance that defines an allowable amount of deviation from preset resource specifications. In some embodiments, the ARH adaptively varies the tolerance based on one or more monitored factors, such as an aggregate system resource utilization by and/or a billing history of the user or a group that includes the user. In some embodiments, the VM request is based on a template selected by the user from among a plurality of templates eligible for automatic approval, wherein a plurality of tolerances each defines an allowable amount of deviation from preset resource specifications of a respective one of the eligible templates. The ARH may, in some embodiments, vary each of the plurality of tolerances independently based on one or more monitored factors. | 2015-12-10 |
20150355933 | SYSTEM AND METHODS FOR GENERATING AND MANAGING A VIRTUAL DEVICE - Embodiments of the present disclosure may be configured to permit development and validation of a device driver or a device application program by using improved virtual devices. Such improved virtual devices may facilitate driver development without use of physical devices or hardware prototypes. In various embodiments, advanced validation of a device-driver combination may be permitted that would be difficult to achieve even with a physical device. Certain embodiments also may detect inconsistencies between virtual and physical devices, which may be used to improve drivers and device application programs and increase compatibility of such drivers and device application programs with physical devices. | 2015-12-10 |
20150355934 | METHOD FOR GENERATING CONFIGURATION INFORMATION, AND NETWORK CONTROL UNIT - A method for generating configuration information includes: a network control unit receives a virtual machine association message, where the VM association message includes an identifier of a first VM and an identifier of a first virtual built-in network element (NE), where a state of the first VM changes and the first virtual built-in NE detects that the state of the first VM changes; and the network control unit determines first information according to the identifier of the first VM, where the first information includes at least one of: a first forwarding entry, a location information mapping entry of the first VM, and a first network policy. Because the network control unit determines the first information according to the identifier, included in the VM association message, of the first VM whose state changes, network configuration efficiency and network performance are improved. | 2015-12-10 |
20150355935 | MANAGEMENT SYSTEM, MANAGEMENT PROGRAM, AND MANAGEMENT METHOD - A plurality of process content is retained, said process content including identifiers of a plurality of part content included in each process and information which denotes dependencies among the plurality of part content. When information is inputted which designates a first process and the part content of a problem portion which is included in the first process, a process similar to the first process is retrieved. On the basis of whether there is a change in any of the plurality of part content which is included in the retrieved process, an evaluation value of the retrieved process is either incremented or decremented, and the information relating to the plurality of processes is outputted on the basis of the evaluation value. | 2015-12-10 |
20150355936 | METHOD AND SYSTEM FOR PERFORMING ADAPTIVE CONTEXT SWITCHING - Exemplary embodiments provide a method for managing a transaction for a memory module in a computer system. The memory modules have latencies. A busyness level of the memory module for the transaction is determined. A projected response time for the transaction is predicted based on the busyness level. In some embodiments whether to perform a context switching for the transaction is determined based on the projected response time and context switching policies. The context switching may be performed based on this determination. | 2015-12-10 |
20150355937 | INDICATING NEARING THE COMPLETION OF A TRANSACTION - In a multi-processor transaction execution environment, a transaction executes a hint instruction indicating proximity to completion of the transaction. Pending aborts of the transaction due to memory conflicts are suppressed based on the proximity of the transaction to completion. | 2015-12-10 |
20150355938 | SYSTEM AND METHOD FOR CONDITIONAL TASK SWITCHING DURING ORDERING SCOPE TRANSITIONS - A data processing system includes a processor core and a hardware module that acts as an ordering scope manager. The processor core performs tasks on data packets. The ordering scope manager stores a first value in a first storage location. The first value indicates that exclusive execution of a first task in a first ordering scope is enabled. In response to a relinquish indicator being received, the ordering scope manager stores a second value in the first storage location. The second value indicates that the exclusive execution of the first task in the first ordering scope is disabled. | 2015-12-10 |
20150355939 | INVERSION OF CONTROL FOR EXECUTABLE EXTENSIONS IN A RUN-TIME ENVIRONMENT - A system, method, and non-transitory computer readable medium implemented as programming on a suitable computing device, the system for inversion of control of executable extensions including a run-time environment configured to push data to one or a plurality of extensions, wherein said one or a plurality of extensions are configured to comprise one or a plurality of signatures, wherein said one or a plurality of extensions are compilable, designable, and testable outside of the run-time environment, and wherein the run-time environment may be configured to accept an extension and to push data to that extension as per said one or a plurality of signatures. | 2015-12-10 |
20150355940 | IDLE TIME ACCUMULATION IN A MULTITHREADING COMPUTER SYSTEM - Embodiments relate to idle time accumulation in a multithreading computer system. According to one aspect, a computer-implemented method for idle time accumulation in a computer system is provided. The computer system includes a configuration having a plurality of cores and an operating system (OS)-image configurable between single thread (ST) mode and a multithreading (MT) mode in a logical partition. The MT mode supports multiple threads on shared resources per core simultaneously. The method includes executing a query instruction on an initiating core of the plurality of cores. The executing includes obtaining, by the OS-image, a maximum thread identification value indicating a current maximum thread identifier of the cores within the logical partition. The initiating core also obtains a multithreading idle time value for each of the cores indicating an aggregate amount of idle time of all threads enabled on each of the cores in the MT mode. | 2015-12-10 |
20150355941 | INFORMATION PROCESSING DEVICE AND METHOD FOR CONTROLLING INFORMATION PROCESSING DEVICE - An information processing device includes arithmetic processing devices, a cooling device, and a job assignment device. Each of the arithmetic processing devices is configured to perform a job. The cooling device is connected to the arithmetic processing devices. The cooling device includes a circulation unit, a cooling unit, and an adjustment unit. The circulation unit is configured to circulate refrigerant through a supply route. The refrigerant absorbs heat generated by the arithmetic processing devices. The cooling unit is configured to cool the refrigerant circulated by the circulation unit. The adjustment unit is configured to adjust, in response to a temperature of the refrigerant, a cooling capacity of the cooling unit to cool the refrigerant. The job assignment device includes a processor configured to control, on the basis of cooling capacity information indicating the cooling capacity, job assignment to the arithmetic processing devices. | 2015-12-10 |
20150355942 | ENERGY-EFFICIENT REAL-TIME TASK SCHEDULER - An energy efficient task scheduler for use with a processor that provides multiple reduced energy use modes. In one embodiment, a system for executing tasks includes a processor and a task scheduler. The processor provides a plurality of different reduced energy use modes. The task scheduler is executable by the processor to schedule execution a plurality of sleep tasks. Each of the sleep tasks corresponds to a different one of the reduced energy use modes. The task scheduler is executable by the processor to execute each of the sleep tasks, and as part of the execution of the sleep task to: place the processor in the reduced energy use mode corresponding to the sleep task, and exit the corresponding reduced energy use mode at suspension of the sleep task. | 2015-12-10 |
20150355943 | WEIGHTED STEALING OF RESOURCES - In a computer system with multiple job queues and limited resources, an initial allocation of resources is given to each job queue. The utilization of these initially allocated resources is monitored, and queues with excess resources may have those resources stolen and temporarily redistributed to queues with unmet resource needs. | 2015-12-10 |
20150355944 | USING FUNCTIONAL RESOURCES OF A COMPUTING DEVICE WITH WEB-BASED PROGRAMMATIC RESOURCES - A request is received from a web-based programmatic resource executing within an application that is installed on the computing device. From the request, one or more functional resources of the computing device are identified. The functional resources are not otherwise accessible to the web-based programmatic resource executing within the installed application on the computing device. A task is performed using the identified one or more functional resources. | 2015-12-10 |
20150355945 | Adaptive Scheduling Policy for Jobs Submitted to a Grid - Machines, systems and methods for providing a job description for execution in a computing environment, the method comprising receiving a job description, wherein the job description defines a set of job alternatives based on an order of priority and conditions associated with execution of the job alternatives; processing the job alternatives to determine whether resources for executing at least a first job alternative are available, considering respective first conditions defined in the job description for the first job alternative; selecting a first computing element implemented in a virtualized computing environment, wherein the selected first computing element has sufficient resources to satisfy resource requirements defined in the job description for the first job alternative; and submitting the job to the first computing element for execution. | 2015-12-10 |
20150355946 | "Systems of System" and method for Virtualization and Cloud Computing System - A "systems of system" and method for virtualization and cloud computing system are disclosed. | 2015-12-10 |
20150355947 | RESOURCE PROVISIONING BASED ON LOGICAL PROFILES AND PIECEWISE OBJECTIVE FUNCTIONS - Described are techniques for selecting resources for provisioning. A usage definition, including a piecewise objective function, and first set of logical profiles based on core criteria are selected. Each of the logical profiles in the first set represents a resource set characterized by a core criteria value set that specifies values for the core criteria. A second set of resulting objective function values are determined by evaluating one piece of the objective function for each of the logical profiles in the first set. A highest ranked one of the resulting objective function values in the second set is selected having a corresponding first logical profile of the first set and a corresponding core criteria value set. A third set of resources is selected which is characterized by the corresponding core criteria value set for the first logical profile. The third set of resources is any of recommended or selected for provisioning. | 2015-12-10 |
20150355948 | DYNAMICALLY CONFIGURABLE HARDWARE QUEUES FOR DISPATCHING JOBS TO A PLURALITY OF HARDWARE ACCELERATION ENGINES - A computer system having a plurality of processing resources, including a sub-system for scheduling and dispatching processing jobs to a plurality of hardware accelerators, the subsystem further comprising a job requestor, for requesting jobs having bounded and varying latencies to be executed on the hardware accelerators; a queue controller to manage processing job requests directed to a plurality of hardware accelerators; and multiple hardware queues for dispatching jobs to the plurality of hardware acceleration engines, each queue having a dedicated head of queue entry, dynamically sharing a pool of queue entries, having configurable queue depth limits, and means for removing one or more jobs across all queues. | 2015-12-10 |
20150355949 | DYNAMICALLY CONFIGURABLE HARDWARE QUEUES FOR DISPATCHING JOBS TO A PLURALITY OF HARDWARE ACCELERATION ENGINES - A computer system having a plurality of processing resources, including a sub-system for scheduling and dispatching processing jobs to a plurality of hardware accelerators, the subsystem further comprising a job requestor, for requesting jobs having bounded and varying latencies to be executed on the hardware accelerators; a queue controller to manage processing job requests directed to a plurality of hardware accelerators; and multiple hardware queues for dispatching jobs to the plurality of hardware acceleration engines, each queue having a dedicated head of queue entry, dynamically sharing a pool of queue entries, having configurable queue depth limits, and means for removing one or more jobs across all queues. | 2015-12-10 |
20150355950 | RESOURCE ALLOCATION FOR VIRTUAL MACHINES AND LOGICAL PARTITIONS - A computer determines that a utilization level of a resource has satisfied a threshold. The computer scales the allocation of the resource to the furthest of the current allocation of the resource plus a parameter and of a historical limit. The computer determines if the scaled allocation of the resource is outside the historical limit and if so, sets the historical limit equal to the scaled allocation of the resource. The computer determines whether the scaling of the allocation of the resource will result in an allocation oscillation. The computer determines if the scaled allocation of the resource is outside a boundary parameter and if so, sets the allocation of the resource equal to the boundary parameter. | 2015-12-10 |
20150355951 | OPTIMIZING EXECUTION AND RESOURCE USAGE IN LARGE SCALE COMPUTING - A method for tuning workflow settings in a distributed computing workflow comprising sequential interdependent jobs includes pairing a terminal stage of a first job and a leading stage of a second, sequential job to form an optimization pair, in which data segments output by the terminal stage of the first job comprises data input for the leading stage of the second job. The performance of the optimization pair is tuned by determining, with a computational processor, an estimated minimum execution time for the optimization pair and increasing the minimum execution time to generate an increased execution time. The method further includes calculating a minimum number of data segments that still permit execution of the optimization pair within the increased execution time. | 2015-12-10 |
20150355952 | INFORMATION PROCESSING SYSTEM AND PROGRAM MIGRATION METHOD - A first executing unit executes a first program by emulating information processing in a first operational environment in which the first program is executable. A generating unit generates, in parallel with the execution of the first program, a second program which is executable in a second operational environment of an information processing system and which is capable of acquiring the same processing result as the first program. A second executing unit terminates the execution of the first program by the first executing unit and also executes the second program, after the generation of the second program is completed. | 2015-12-10 |
20150355953 | Low Overhead Contention-Based Switching Between Ticket Lock And Queued Lock - A technique for low overhead contention-based switching between ticket locking and queued locking to access shared data may include establishing a ticket lock, establishing a queue lock, operating in ticket lock mode using the ticket lock to access the shared data during periods of relatively low data contention, and operating in queue lock mode using the queue lock to access the shared data during periods of relatively high data contention. | 2015-12-10 |
20150355954 | TRUSTED CLIENT-CENTRIC APPLICATION ARCHITECTURE - Trusted Client-Centric Application Architecture (TC | 2015-12-10 |
20150355955 | OPERATING SYSTEM USER ACTIVITY PROFILES - A user profile and one or more activity profiles selectively operable when the user profile is active are stored in one or more data structures accessible by an operating system of a data processing system. The one or more activity profiles each specify an application set for a respective user activity supported by the data processing system. In response to receiving a user input selecting an activity profile among the one or more activity profiles while the data processing system is executing under the user profile, the operating system automatically starts each application in the application set specified by the activity profile and customizes a user experience by applying, to one or more applications in the application set, context information recorded during previous execution under the activity profile. During execution under the activity profile, context information for the activity profile in the one or more data structures is recorded. | 2015-12-10 |
20150355956 | METHOD, APPARATUS AND COMPUTER PROGRAM FOR ADMINISTERING MESSAGES WHICH A CONSUMING APPLICATION FAILS TO PROCESS - Disclosed is a method for administering messages. In response to a determination that one or more consuming applications have failed to process the same message on a queue a predetermined number of times, the message is made unavailable to consuming applications. Responsive to determining that a predetermined number of messages have been made unavailable to consuming applications, one or more consuming applications are prevented from consuming messages from the queue. | 2015-12-10 |
20150355957 | SYSTEM AND METHOD FOR REAL-TIME DETECTION OF ANOMALIES IN DATABASE USAGE - A system and method for real-time detection of anomalies in database or application usage is disclosed. Embodiments provide a mechanism to detect anomalies in database or application usage, such as data exfiltration attempts, first by identifying correlations (e.g., patterns of normalcy) in events across different heterogeneous data streams (such as those associated with ordinary, authorized and benign database usage, workstation usage, user behavior or application usage) and second by identifying deviations/anomalies from these patterns of normalcy across data streams in real-time as data is being accessed. An alert is issued upon detection of an anomaly, wherein a type of alert is determined based on a characteristic of the detected anomaly. | 2015-12-10 |
20150355958 | SALVAGING HARDWARE TRANSACTIONS - A transactional memory system salvages a partially executed hardware transaction. A processor of the transactional memory system determines information about an about-to-fail handler for transactional execution of a code region of a hardware transaction. The processor saves state information of the hardware transaction, the state information usable to determine whether the hardware transaction is to be salvaged or to be aborted. The processor detects an about-to-fail condition during the transactional execution of the hardware transaction. The processor, based on the detecting, executes the about-to-fail handler using the information about the about-to-fail handler, the about-to-fail handler determining whether the hardware transaction is to be salvaged or to be aborted. | 2015-12-10 |
20150355959 | DEPENDENCY MONITORING - Dependency monitoring can include monitoring a first application and a second application for un-expected behavior. Dependency monitoring can also include receiving a description of a number of dependencies between a number of applications wherein the description of the number of dependencies is created before monitoring of the first application and the second application begins. Dependency monitoring can include sending a message to an information technology (IT) personnel, wherein the message identifies a dependency from the number of dependencies between the first application and the second application based on the description of the number of dependencies. | 2015-12-10 |
20150355960 | MAINTAINING DATA STORAGE IN ACCORDANCE WITH AN ACCESS METRIC - When an access metric regarding an encoded data object exceeds an access threshold, a method begins by a processing module of a dispersed storage network (DSN) retrieving encoded data slices of a first plurality of sets of encoded data slices and recovering the data object utilizing first dispersed storage error encoding parameters. The method continues with the processing module re-encoding the recovered data object using second dispersed storage error encoding parameters to produce a re-encoded data object, where the re-encoded data object includes a second plurality of sets of encoded data slices. The method continues with the processing module outputting the second plurality of sets of encoded data slices to storage units of the DSN for storage therein and sending a message to retrieving devices of the DSN, where the message indicates use of the second plurality of sets of encoded data slices for the data object. | 2015-12-10 |
20150355961 | CONTROLLING ERROR PROPAGATION DUE TO FAULT IN COMPUTING NODE OF A DISTRIBUTED COMPUTING SYSTEM - A technique includes receiving an alert indicator in a distributed computer system that includes a plurality of computing nodes coupled together by cluster interconnection fabric. The alert indicator indicates detection of a fault in a first computing node of the plurality of computing nodes. The technique indicates regulating communication between the first computing node and at least one of the other computing nodes in response to the alert indicator to contain error propagation due to the fault within the first computing node. | 2015-12-10 |
20150355962 | MALFUNCTION ESCALATION - A data processing apparatus | 2015-12-10 |
20150355963 | MRAM SMART BIT WRITE ALGORITHM WITH ERROR CORRECTION PARITY BITS - Some embodiments relate to a system that includes write circuitry, read circuitry, and comparison circuitry. The write circuitry is configured to attempt to write an expected multi-bit word to a memory location in a memory device. The read circuitry is configured to read an actual multi-bit word from the memory location. The comparison circuitry is configured to compare the actual multi-bit word read from the memory location with the expected multi-bit word which was previously written to the memory location to distinguish between a number of erroneous bits in the actual multi-bit word and a number of correct bits in the actual multi-bit word. The write circuitry is further configured to re-write the number of erroneous bits to the memory location without attempting to re-write the number of correct bits to the memory location. | 2015-12-10 |
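The compare-and-rewrite idea in the abstract above can be sketched as a toy model (illustrative Python, not the patented circuitry): XOR-ing the expected word against the word read back distinguishes erroneous bits from correct ones, so only the failing positions need a re-write.

```python
def smart_rewrite(expected: int, actual: int, width: int = 16):
    """Toy model of the abstract's comparison circuitry.

    Returns the erroneous bit positions and the corrected word; only
    the positions in `error_bits` would be driven on the re-write.
    """
    diff = expected ^ actual                      # 1 where the write failed
    error_bits = [i for i in range(width) if (diff >> i) & 1]
    corrected = actual ^ diff                     # flip only the failing bits
    return error_bits, corrected

bits, word = smart_rewrite(0b1011_0010, 0b1011_1010, width=8)
```

Here a single bit (position 3) failed, so only one bit would be re-written instead of the whole word.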
20150355964 | CONTROLLER DEVICE FOR USE WITH ELECTRICALLY ERASABLE PROGRAMMABLE MEMORY CHIP WITH ERROR DETECTION AND RETRY MODES OF OPERATION - A memory system includes a link having at least one signal line and a controller. The controller includes at least one transmitter coupled to the link to transmit first data, and a first error protection generator coupled to the transmitter. The first error protection generator dynamically adds an error detection code to at least a portion of the first data. At least one receiver is coupled to the link to receive second data. A first error detection logic determines if the second data received by the controller contains at least one error and, if an error is detected, asserts a first error condition. The system includes a memory device having at least one memory device transmitter coupled to the link to transmit the second data. A second error protection generator coupled to the memory device transmitter dynamically adds an error detection code to at least a portion of the second data. | 2015-12-10 |
20150355965 | HIGH SPEED FLASH CONTROLLERS - A high speed USB memory controller includes a microprocessor, flash memory, memory buffers directly accessible to the microprocessor and flash memory, and a USB interface for writing data directly into the memory buffers. This allows devices with multiple flash die to operate at full bus speed. | 2015-12-10 |
20150355966 | VERIFYING A STATUS LEVEL OF STORED ENCODED DATA SLICES - A method begins by a processing module of a dispersed storage network (DSN) retrieving a decode threshold number of encoded data slices of a set of encoded data slices from a first grouping of storage units of the DSN. The method continues with the processing module determining a first status level indication of the retrieved decode threshold number of encoded data slices and sending check status request messages to a second grouping of storage units of the DSN. The method continues with the processing module receiving check status response messages and processing the check status response messages to produce a second status level indication. When the second status level indication is substantially equal to the first status level indication, the method continues with the processing module indicating that the decode threshold number of encoded data slices is of a common status level with other encoded data slices of the set of encoded data slices. | 2015-12-10 |
20150355967 | METHOD OF READING AND WRITING TO A SPIN TORQUE MAGNETIC RANDOM ACCESS MEMORY WITH ERROR CORRECTING CODE - A method includes destructively reading bits of a spin torque magnetic random access memory, using error correcting code (ECC) for error correction, and storing inverted or non-inverted data in data-store latches. When a subsequent write operation changes the state of data-store latches, parity calculation and majority detection of the bits are initiated. A majority bit detection and potential inversion of write data minimizes the number of write current pulses. A subsequent write operation received within a specified time or before an original write operation is commenced will cause the majority detection operation to abort. | 2015-12-10 |
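The majority-detection-and-inversion trick above can be sketched in a few lines (an illustrative model with made-up names, not the patented logic): if writing the new word directly would flip more than half the bits, store its complement plus an inversion flag, cutting the number of write-current pulses.

```python
def plan_write(current: int, new: int, width: int):
    """Decide whether to store `new` inverted to minimize write pulses.

    Toy model: each bit that differs from the currently stored value
    costs one write-current pulse; storing inverted data plus a flag
    bit bounds the cost at roughly half the word width.
    """
    mask = (1 << width) - 1
    direct = bin((current ^ new) & mask).count("1")
    inverted = bin((current ^ (~new & mask)) & mask).count("1")
    if inverted < direct:
        return (~new & mask, True, inverted)   # store inverted, flag set
    return (new, False, direct)

data, flag, pulses = plan_write(0b0000, 0b1110, width=4)
```

Writing 0b1110 over 0b0000 directly would cost three pulses; storing the inverted word 0b0001 costs one.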
20150355968 | SYSTEMS AND METHODS FOR SEQUENTIAL RESILVERING - Implementations claimed and described herein provide systems and methods for the efficient rebuilding of a failed storage device through sequential resilvering. In one implementation, blocks for resilvering are discovered. The blocks correspond to input/output requests not successfully completed for a failed storage device. A coarse grained sorting of the blocks is performed based on a block location of each of the blocks on the failed storage device. The block locations of the blocks are stored in memory according to the coarse grained sorting. A fine grained sorting of the blocks is performed based on the coarse grained sorting of the blocks. The blocks are sequentially resilvered based on the fine grained sorting. | 2015-12-10 |
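The two-phase ordering described above can be sketched as follows (a minimal illustration; the region size and names are assumptions, not from the patent): a coarse pass buckets block offsets by disk region so the working set stays bounded, then a fine pass sorts within each region, yielding sequential rebuild I/O.

```python
def sequential_resilver_order(blocks, region_size=1000):
    """Order block offsets for resilvering a failed device.

    Coarse-grained pass: bucket each block by its disk region.
    Fine-grained pass: sort within each region, then emit regions
    in order, so the rebuild reads and writes sequentially.
    """
    buckets = {}
    for b in blocks:                        # coarse-grained sorting
        buckets.setdefault(b // region_size, []).append(b)
    ordered = []
    for region in sorted(buckets):          # fine-grained sorting
        ordered.extend(sorted(buckets[region]))
    return ordered

order = sequential_resilver_order([2500, 10, 990, 1200, 30])
```

On this toy input the result equals a full sort; the point of the two-phase scheme is that only one region's blocks need fine sorting in memory at a time.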
20150355969 | Storage Cluster - A plurality of storage nodes is provided. The plurality of storage nodes is configured to communicate together as a storage cluster. Each of the plurality of storage nodes includes nonvolatile solid-state memory. The plurality of storage nodes is configured to distribute user data and metadata associated with the user data throughout the plurality of storage nodes such that the plurality of storage nodes maintain the ability to read the user data, using erasure coding, despite a loss of one of the plurality of storage nodes. A chassis enclosing the plurality of storage nodes includes power distribution, a high speed communication bus and the ability to install one or more storage nodes which may use the power distribution and communication bus in some embodiments. A method for accessing user data in a plurality of storage nodes having nonvolatile solid-state memory is also provided. | 2015-12-10 |
20150355970 | MECHANISM FOR PERSISTING MESSAGES IN A STORAGE SYSTEM - A plurality of storage nodes in a single chassis is provided. The plurality of storage nodes in the single chassis is configured to communicate together as a storage cluster. Each of the plurality of storage nodes includes nonvolatile solid-state memory for user data storage. The plurality of storage nodes is configured to distribute the user data and metadata associated with the user data throughout the plurality of storage nodes such that the plurality of storage nodes maintain the ability to read the user data, using erasure coding, despite a loss of two of the plurality of storage nodes. The plurality of storage nodes is configured to initiate an action based on the redundant copies of the metadata, responsive to achieving a level of redundancy for the redundant copies of the metadata. A method for accessing user data in a plurality of storage nodes having nonvolatile solid-state memory is also provided. | 2015-12-10 |
20150355971 | FAILURE DOMAIN BASED STORAGE SYSTEM DATA STRIPE LAYOUT - A method for performing stripe placement within a storage system is disclosed. After a set of failure domains within a storage system has been identified, the failure domains are then organized to form a hierarchy of failure domains. A failure domain is defined as a group of one or more disks that are more likely to fail together because a common component is shared by that group of disks. Stripe placement is performed across all active failure domains within the storage system using a greedy algorithm. | 2015-12-10 |
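A greedy placement of the kind the abstract describes can be sketched like this (an illustrative model; the flat domain map and disk names are assumptions, since the patent organizes domains into a hierarchy): each stripe unit is taken from the not-yet-used failure domain with the most free disks, so no two units of a stripe share a common failure component.

```python
def place_stripe(domains, stripe_width):
    """Greedily pick one disk per failure domain for each stripe unit.

    `domains` maps a failure-domain name to its list of free disks.
    Raises if the stripe cannot span enough independent domains.
    """
    chosen, used = [], set()
    for _ in range(stripe_width):
        candidates = [(len(disks), name) for name, disks in domains.items()
                      if name not in used and disks]
        if not candidates:
            raise ValueError("not enough independent failure domains")
        _, best = max(candidates)           # greedy: most free disks first
        used.add(best)
        chosen.append((best, domains[best][0]))
    return chosen

layout = place_stripe({"rackA": ["d1", "d2"], "rackB": ["d3"], "rackC": ["d4"]}, 3)
```

Every unit of the resulting stripe lands in a distinct rack, so a single shared-component failure loses at most one unit.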
20150355972 | SALVAGING HARDWARE TRANSACTIONS - A transactional memory system salvages a partially executed hardware transaction. A processor of the transactional memory system saves state information in a first code region of a first hardware transaction, the state information useable to determine whether the first hardware transaction is to be salvaged or to be aborted. The processor detects an about-to-fail condition in the first code region of the first hardware transaction. The processor, based on the detecting, executes an about-to-fail handler, the about-to-fail handler using the saved state information to determine whether the first hardware transaction is to be salvaged or to be aborted. The processor executing the about-to-fail handler, based on the transaction being to be salvaged, uses the saved state information to determine what portion of the first hardware transaction to salvage. | 2015-12-10 |
20150355973 | LOAD CONTROL BACKUP SIGNAL GENERATING CIRCUIT - A load control backup signal generating circuit for supplying a backup control signal to a switch of a load connected to an output of a control processor in a case that abnormality occurs in the control processor, includes a first input terminal that receives a constant period signal that is output periodically from the control processor when the control processor is normal, a constant period signal monitoring section that monitors a state of the constant period signal for identifying whether a length of the time during which a high or low level state of the constant period signal continues is longer than a predetermined time, and that outputs the signal corresponding to a result of the identification, and a backup signal output section that outputs the backup control signal when the output of the constant period signal monitoring section satisfies a predetermined condition. | 2015-12-10 |
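The monitoring rule in the abstract above reduces to a timing check (a sketch with illustrative names and units): if the constant-period signal from the control processor stays in one level longer than a predetermined time, the backup signal output should take over.

```python
def watchdog_check(edge_times, max_gap):
    """Return True if the backup control signal should be asserted.

    `edge_times` are timestamps of observed transitions of the
    constant-period signal; a gap between consecutive transitions
    longer than `max_gap` means the signal is stuck high or low.
    """
    for prev, cur in zip(edge_times, edge_times[1:]):
        if cur - prev > max_gap:
            return True                 # signal stuck: abnormality
    return False

needs_backup = watchdog_check([0.0, 1.0, 2.0, 5.0], max_gap=1.5)
```

A real implementation would run on a hardware timer rather than over a recorded list, but the predicate is the same.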
20150355974 | REBUILDING DATA ACROSS STORAGE NODES - A method for proactively rebuilding user data in a plurality of storage nodes of a storage cluster is provided. The method includes distributing user data and metadata throughout the plurality of storage nodes such that the plurality of storage nodes can read the user data, using erasure coding, despite loss of two of the storage nodes. The method includes determining that one of the storage nodes is unreachable and determining to rebuild the user data for the one of the storage nodes that is unreachable. The method includes reading the user data across a remainder of the plurality of storage nodes, using the erasure coding and writing the user data across the remainder of the plurality of storage nodes, using the erasure coding. A plurality of storage nodes within a single chassis that can proactively rebuild the user data stored within the storage nodes is also provided. | 2015-12-10 |
20150355975 | METHOD AND APPARATUS FOR PROCESSING REDO DATA IN DATABASE - Embodiments of the present invention provide a method and an apparatus for processing redo data in a database, where the method includes: generating redo data according to a database modification operation, accordingly saving the redo data in a buffer allocated to each application thread, saving an identifier of the application thread in a time sequence queue after a time sequence queue lock is acquired; and determining that a data reading condition is satisfied, reading a sequence of the identifiers of the application threads from the time sequence queue, successively reading a piece of redo data from the buffer of each application thread corresponding to the identifier of each application thread in the time sequence queue, and writing the piece of redo data to a redo queue. Redo data processing efficiency can be improved by separating a time sequence queue from a data queue. | 2015-12-10 |
20150355976 | Selecting During A System Shutdown Procedure, A Restart Incident Checkpoint Of An Incident Analyzer In A Distributed Processing System - Methods, apparatuses, and computer program products for selecting during a system shutdown procedure, a restart incident checkpoint of an incident analyzer in a distributed processing system. Embodiments include the incident analyzer determining whether at least one incident is in a queue. If at least one incident is in the queue, the incident analyzer selects as the restart incident checkpoint, a last incident completed checkpoint. If at least one incident is not in the queue, the incident analyzer determines whether the last incident completed checkpoint matches a last incident analysis pool selection checkpoint. If the last incident completed checkpoint matches a last incident analysis pool selection checkpoint, the incident analyzer selects as the restart incident checkpoint, a monitor checkpoint. If the last incident completed checkpoint does not match the last incident analysis pool selection checkpoint, the incident analyzer selects as the restart incident checkpoint, the last incident completed checkpoint. | 2015-12-10 |
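The three-way selection rule in the abstract above can be written out as a small decision function (names are illustrative, not from the patent):

```python
def select_restart_checkpoint(queue_nonempty, last_completed,
                              last_pool_selection, monitor):
    """Pick the restart incident checkpoint per the abstract's rule.

    - incidents still queued        -> last incident completed checkpoint
    - queue empty, checkpoints match -> monitor checkpoint
    - queue empty, they differ       -> last incident completed checkpoint
    """
    if queue_nonempty:
        return last_completed
    if last_completed == last_pool_selection:
        return monitor
    return last_completed
```

For example, with an empty queue and matching checkpoints the analyzer restarts from the monitor checkpoint; in every other case it restarts from the last incident completed checkpoint.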
20150355977 | System and Method for Making a Backup Copy of Live Data - A system and method for backing up data on computer-readable physical medium, especially useful for databases, such as those using POSIX standard function calls, whereby select operations performed by a user of the database are intercepted and, while performed, are also translated into a shadow file having information about a database file to be backed up and the operations performed on that file. The resulting shadow file can be used to reconstitute the database file. In another mode of operation, the system and method create a copy of the database and concurrently make the same changes to the copy as the user commands while also concurrently keeping a shadow file system related to the database copy. | 2015-12-10 |
20150355978 | SYSTEMS AND METHODS FOR BACKING UP STORAGE VOLUMES IN A STORAGE SYSTEM - Systems and methods for backing up storage volumes are provided. One system includes a primary side, a secondary side, and a network coupling the primary and secondary sides. The secondary side includes first and second VTS including a cache and storage tape. The first VTS is configured to store a first portion of a group of storage volumes in its cache and migrate the remaining portion to its storage tape. The second VTS is configured to store the remaining portion of the storage volumes in its cache and migrate the first portion to its storage tape. One method includes receiving multiple storage volumes from a primary side, storing the storage volumes in the cache of the first and second VTS, and migrating a portion of the storage volumes from the cache to storage tape in the first VTS. | 2015-12-10 |
20150355979 | ACCESSING DATA BASED ON A DISPERSED STORAGE NETWORK REBUILDING ISSUE - A method begins by a set of storage units of a dispersed storage network (DSN) storing a plurality of encoded data slices, where each storage unit stores a unique sub-set of encoded data slices. The method continues with each storage unit dispersed storage error encoding at least a recovery threshold number of encoded data slices to produce a local set of encoded recovery data slices. In response to a retrieval request, the method continues with a device identifying a storage unit of an initial recovery number of storage units having a rebuilding issue and determining whether the rebuilding issue is correctable at a DSN level. When the rebuilding issue is correctable at the DSN level the method continues with the device selecting another storage unit to replace the storage unit to produce a recovery number of storage units and sending retrieve requests to the recovery number of storage units. | 2015-12-10 |
20150355980 | RELIABLY RECOVERING STORED DATA IN A DISPERSED STORAGE NETWORK - A method begins by storage units of a dispersed storage network (DSN) receiving a retrieval request for a data object, where each storage unit stores a unique group of encoded data slices of the data object and a local set of encoded recovery data slices. The method continues with some storage units sending the unique group of encoded data slices to a requesting computing device and with one storage unit sending an encoded recovery data slice to the requesting computing device. The method continues with the requesting computing device identifying an errant unique group encoded data slice, correcting the errant encoded data slice based on received data slices to produce an updated unique group of encoded data slices, and dispersed storage error decoding the updated unique group of encoded data slices and the unique groups of encoded data slices from other storage units to recover the data object. | 2015-12-10 |
20150355981 | Hybrid SCM-DRAM Transactional Storage Engine For Fast Data Recovery - A data recovery system and method are disclosed. Primary data is stored in a database in byte-addressable NVRAM, where the database includes one or more persistent tables of data in a byte-addressable, RAM format, and a persistent memory allocator that maps persistent memory pointers of the persistent memory to virtual memory pointers of a virtual memory associated with the database. Secondary data is stored in volatile DRAM. A failure recovery includes recovering the persistent memory allocator, mapping the persistent memory to the virtual memory to recover primary data using their persistent memory pointers, translating the persistent memory pointers to virtual memory pointers, undoing changes to the primary data made by unfinished transactions of the query execution at the time of failure of one of the one or more queries, and reconstructing the secondary data from the primary data. | 2015-12-10 |
20150355982 | VM AND HOST MANAGEMENT FUNCTION AVAILABILITY DURING MANAGEMENT NETWORK FAILURE IN HOST COMPUTING SYSTEMS IN A FAILOVER CLUSTER - Techniques for virtual machine (VM) management function availability during management network failure in a first host computing system in a cluster are described. In one example embodiment, management network failure is identified in the first host computing system. The management network being coupled to virtual management software in a management server and used for VM and host management functions. VM and host management functions on the first host computing system are then initiated via a failover agent associated with an active host computing system that is connected to the management network in the cluster and a shared storage network. | 2015-12-10 |
20150355983 | Automatic Management of Server Failures - In embodiments of the invention LPARs can be run on any server in a group of servers. Upon detecting a server has failed, each LPAR then running on the failed server is identified, and servers in the group that are available for restarting the identified LPARs are determined. Identified LPARs are assigned to an available server for restarting, wherein each LPAR has a value associated with a specified LPAR priority criterion, and a given LPAR is assigned in accordance with its value. Responsive to assigning the given LPAR to an available server, a specified storage resource is connected for use by the server in association with the given LPAR, wherein the specified storage resource was previously connected for use by the failed server in association with the given LPAR. | 2015-12-10 |
20150355984 | DISASTER RECOVERY AT HIGH RELIABILITY IN A STORAGE CLUSTER - A storage grid is provided. The storage grid includes a first cluster, a second cluster, and a third cluster. Each of the first cluster, the second cluster and the third cluster is configured to store an amount of data ranging from a portion of a copy of the data to a full copy of the data. The first cluster has a full copy of data written to the first cluster and at least a partial copy of data written to the second and third cluster. The second cluster has a full copy of data written to the second cluster, and at least a partial copy of the data written to the first and third cluster. The third cluster has a full copy of data written to the third cluster and at least a partial copy of the data written to the first and second cluster. A method of storing data is also provided. | 2015-12-10 |
20150355985 | RECOVERY CONSUMER FRAMEWORK - A recovery consumer framework provides for execution of recovery actions by one or more recovery consumers to enable efficient recovery of information (e.g., data and metadata) in a storage system after a failure event (e.g., a power failure). The recovery consumer framework permits concurrent execution of recovery actions so as to reduce recovery time (i.e., duration) for the storage system. The recovery consumer framework may coordinate (e.g., notify) the recovery consumers to serialize execution of the recovery actions by those recovery consumers having a dependency while allowing concurrent execution between recovery consumers having no dependency relationship. Each recovery consumer may register with the framework to associate a dependency on one or more of the other recovery consumers. The dependency association may be represented as a directed graph where each vertex of the graph represents a recovery consumer and each directed edge of the graph represents a dependency. The framework may traverse (i.e., walk) the framework graph and for each vertex encountered, notify the associated recovery consumer to initiate its respective recovery actions. | 2015-12-10 |
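The directed-graph coordination described above maps naturally onto a batched topological sort (a sketch using Python's standard `graphlib`; the consumer names are illustrative): consumers within a batch have no dependency between them and may run concurrently, while batches execute in order.

```python
from graphlib import TopologicalSorter

def recovery_batches(dependencies):
    """Group recovery consumers into concurrently runnable batches.

    `dependencies` maps each consumer (a graph vertex) to the set of
    consumers it depends on (its incoming directed edges). Consumers
    in the same batch have no dependency relationship.
    """
    ts = TopologicalSorter(dependencies)
    ts.prepare()
    batches = []
    while ts.is_active():
        ready = sorted(ts.get_ready())   # all consumers now unblocked
        batches.append(ready)
        ts.done(*ready)                  # mark them finished
    return batches

order = recovery_batches({"fs": {"log"}, "quota": {"fs"},
                          "snap": {"log"}, "log": set()})
```

Here "fs" and "snap" both depend only on "log", so they recover concurrently in the second batch, serialized after "log" and before "quota".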
20150355986 | COOPERATIVE MEMORY ERROR DETECTION AND REPAIR - Some embodiments include apparatuses and methods having a memory structure included in a memory device and a control unit included in the memory device. The control unit can provide information obtained from the memory structure during a memory operation to a host device (e.g., a processor) in response to a command from the host device. If the control unit receives a notification from the host device indicating that the host device has detected an error in the information obtained from the memory structure, then a repair unit included in the memory device performs a memory repair operation to repair a portion in the memory structure. | 2015-12-10 |
20150355987 | AT-SPEED TEST ACCESS PORT OPERATIONS - This disclosure describes different ways to improve the operation of a device's 1149.1 TAP to where the TAP can perform at-speed Update & Capture, Shift & Capture and Back to Back Capture & Shift operations. In a first embodiment of the disclosure the at-speed operations are achieved by time division multiplexing CMD signals onto the TMS input to the TAP. The CMD signals are input to a CMD circuit that operates in conjunction with a Dual Port Router to execute the at-speed operations of a circuit. In a second embodiment of the disclosure the at-speed operations are achieved by detecting the TAP's Exit1DR state as a CMD signal that is input to the CMD circuit that operates in conjunction with a Dual Port Router to execute the at-speed operations of a circuit. In a third embodiment of the disclosure the at-speed operations are achieved by detecting the TAP's Exit1DR and PauseDR states and in response producing Capture and Update signals that are input to a Programmable Switch that operates in conjunction with a Dual Port Router to execute the at-speed operations of a circuit. In a fourth embodiment of the disclosure the at-speed operations are achieved by detecting the TAP's Exit1DR and PauseDR states and inputting these states to a Dual Port Router to control the at-speed operations of a circuit. Each of the embodiments may be augmented to include externally accessible Update and Capture input signals that can be selected to allow a tester to directly control the at-speed operations of a circuit. The improvements of the disclosure are achieved without requiring any additional IC pins beyond the 4 required TAP pins, except for examples showing use of additional data input pins (TDI or WPI signals), additional data output pins (TDO or WPO signals) or examples showing use of additional control input pins (Capture and Update signals). Devices including the TAP improvements can be operated compliantly in a daisy-chain arrangement with devices that don't include the TAP improvements. | 2015-12-10 |
20150355988 | SIMPLIFIED PASSENGER SERVICE UNIT (SSPU) TESTER - Systems, methods, and apparatus for testing a passenger service unit (PSU) of a cabin of a vehicle are disclosed. In one or more embodiments, the disclosed method involves installing a test interface panel (TIP) in the cabin of the vehicle such that the TIP is connected to a power source and is able to communicate with the PSU. The method further involves connecting a user interface to the TIP. Also, the method involves sending at least one command, from the user interface, to the PSU via the TIP. Further, the method involves sending at least one response from the PSU. | 2015-12-10 |
20150355989 | SAFETY NODE IN INTERCONNECT DATA BUSES - In safety-critical computer systems, fault tolerance is an important design requirement. Data buses for on-chip interconnection in these processor-based systems are exposed to risk arising from faults in the interconnect itself or in any of the connected peripherals. To provide sufficient fault tolerance, a safety node is inserted between an upstream master section and a downstream slave section of an on-chip bus hierarchy or network. The safety node provides a programmable timeout monitor for detecting a timeout condition for a transaction. If timeout has occurred, the safety node transmits a dummy response back to the master, assumes the role of a master, and waits for the slave device to respond. Furthermore, the safety node rejects any subsequent requests by any of the masters on the upstream section by transmitting a dummy response to those subsequent requests, thus enabling these masters to avoid deadlock or stall. | 2015-12-10 |
20150355990 | Self-Spawning Probe in a Distributed Computing Environment - According to one embodiment, a system includes probes operable to monitor information associated with a host device and includes a controller operable to control the probes. A first probe instance is associated with a plurality of monitoring modules. Each monitoring module is operable to monitor information associated with the host device. The first probe instance is operable to determine a resource usage associated with the first probe instance and determine whether the resource usage exceeds a threshold. The first probe instance is operable to divide the plurality of monitoring modules into a first subset of monitoring modules and a second subset of monitoring modules. The first probe instance is operable to spawn a second probe instance, wherein the second probe instance is associated with the second subset of monitoring modules. The first probe instance is operable to associate the first probe instance with the first subset of monitoring modules. | 2015-12-10 |
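The spawn-on-overload behavior above can be sketched as a simple split (an illustrative model; the even halving, module names, and threshold semantics are assumptions): when resource usage exceeds the threshold, the probe keeps one subset of its monitoring modules and hands the rest to a newly spawned instance.

```python
def maybe_spawn(modules, usage, threshold):
    """Split monitoring modules when resource usage exceeds the threshold.

    Returns (modules kept by this probe instance, modules for a newly
    spawned instance, or None if no spawn is needed).
    """
    if usage <= threshold:
        return modules, None            # within budget: keep everything
    mid = len(modules) // 2
    return modules[:mid], modules[mid:]  # second subset goes to new probe

kept, spawned = maybe_spawn(["cpu", "mem", "disk", "net"],
                            usage=0.9, threshold=0.8)
```

A production probe would pick the split by per-module cost rather than by count, but the control flow matches the abstract.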
20150355991 | TRACE CAPTURE DEVICE WITH COMMON STRUCTURE AND RELATED TRACE CAPTURE METHOD - A trace capture device includes a processing system, a trace capture control unit and a bus unit. The processing system includes at least one function block arranged to generate first data, second data, and correlation information corresponding to the first data. The trace capture control unit is arranged to receive the first data and correlation information corresponding to the first data from the processing system, and generate third data according to the first data and the correlation information. The bus unit is coupled to the processing system, the trace capture control unit and a data link interface. The bus unit is arranged to use the data link interface to transmit information derived from the second data in a first mode, and reuse the data link interface to transmit information derived from the third data in a second mode. | 2015-12-10 |
20150355992 | APPLICATION PERFORMANCE PERCEPTION METER - Embodiments of the present invention provide a method, system and computer program product for application performance perception metering. In an embodiment of the invention, an application performance perception metering method includes initially monitoring resource performance in a computing device during utilization of a computer program through the computing device. Thereafter, the monitored resource performance is compared with historical resource performance during past utilization of the computer program through the computing device. Finally, a prompt can be displayed in the computing device responsive to a determination that the monitored resource performance is deficient relative to the historical resource performance. However, a prompt also can be displayed in the computing device indicating that the computer program is performing poorly based upon a determination that the monitored resource consumption is comparable to the historical resource consumption. | 2015-12-10 |
20150355993 | DETECTING POTENTIAL CLASS LOADER PROBLEMS USING THE CLASS SEARCH PATH SEQUENCE FOR EACH CLASS LOADER - A method, system and computer program product for identifying potential class loader problems prior to or during the deployment of the classes to the production environment. A set of class loaders is loaded into memory. The set of class loaders is arranged hierarchically into parent-child relationships. The class search path sequence for each class loader in the hierarchy is generated to detect and identify potential class loader problems. Those class loaders with a duplicate class in its class search path sequence are identified as those class loaders that may pose a potential problem. A message may then be displayed to the user identifying these class loaders as posing a potential problem. By identifying these class loaders prior to or during the deployment of the classes to the production environment, class loader problems may be prevented from occurring. | 2015-12-10 |
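Detecting the duplicates the abstract describes amounts to a single walk over a loader's class search path sequence (an illustrative sketch; the path representation and names are assumptions): the first location to supply a class wins, and any later location supplying the same class flags a potential class loader problem.

```python
def find_duplicate_classes(search_path):
    """Find classes resolvable from more than one search-path location.

    `search_path` is an ordered list of (location, classes) pairs for
    one class loader. Returns (class, winning location, shadowed
    location) triples; a non-empty result marks the loader as a
    potential problem before deployment.
    """
    seen, duplicates = {}, []
    for location, classes in search_path:
        for cls in classes:
            if cls in seen:
                duplicates.append((cls, seen[cls], location))
            else:
                seen[cls] = location     # first occurrence wins
    return duplicates

dups = find_duplicate_classes([
    ("app.jar", ["com.example.Foo", "com.example.Bar"]),
    ("lib.jar", ["com.example.Foo"]),
])
```

Here `com.example.Foo` in `lib.jar` is shadowed by the copy in `app.jar`, which is exactly the situation the method reports to the user.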
20150355994 | DETECTING POTENTIAL CLASS LOADER PROBLEMS USING THE CLASS SEARCH PATH SEQUENCE FOR EACH CLASS LOADER - A method, system and computer program product for identifying potential class loader problems prior to or during the deployment of the classes to the production environment. A set of class loaders is loaded into memory. The set of class loaders is arranged hierarchically into parent-child relationships. The class search path sequence for each class loader in the hierarchy is generated to detect and identify potential class loader problems. Those class loaders with a duplicate class in its class search path sequence are identified as those class loaders that may pose a potential problem. A message may then be displayed to the user identifying these class loaders as posing a potential problem. By identifying these class loaders prior to or during the deployment of the classes to the production environment, class loader problems may be prevented from occurring. | 2015-12-10 |
20150355995 | DETECTING MERGE CONFLICTS AND COMPILATION ERRORS IN A COLLABORATIVE INTEGRATED DEVELOPMENT ENVIRONMENT - A method, and associated computer system and computer program product, of detecting source code merge conflicts and compilation errors. Uncommitted changes associated with a source code are received periodically at each time of a sequence of times. A temporary branch corresponding to each uncommitted change associated with the source code is created. The temporary branch corresponding to each uncommitted change is merged to corresponding portions of the source code. It is ascertained that no merge conflict resulted from the merging and in response, a compilation of a merged version of the source code is performed, wherein the merged version of the source code includes the temporary branch corresponding to each uncommitted change. It is determined that no compilation error occurred from the compilation and in response, a version of a product that includes the merged version of the source code is created. | 2015-12-10 |
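The conflict test at the heart of the abstract above can be illustrated with a toy line-wise three-way merge. The line granularity and the keep-the-base-line behavior on conflict are simplifying assumptions; real merge tools emit conflict markers and handle insertions and deletions:

```python
def three_way_merge(base, ours, theirs):
    """Toy line-wise three-way merge: a conflict arises when both sides
    change the same base line to different content."""
    merged, conflict = [], False
    for b, o, t in zip(base, ours, theirs):
        if o != b and t != b and o != t:
            conflict = True      # both branches edited this line differently
            merged.append(b)     # keep the base line; real tools emit markers
        elif t != b:
            merged.append(t)
        else:
            merged.append(o)
    return merged, conflict
```

In the abstract's pipeline, only when this kind of check reports no conflict does the merged version proceed to compilation.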
20150355996 | SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR COLLECTING EXECUTION STATISTICS FOR GRAPHICS PROCESSING UNIT WORKLOADS - A system, method, and computer program product are provided for collecting trace information based on a computational workload. The method includes the steps of compiling source code to generate a program, launching a workload to be executed by the parallel processing unit, collecting one or more records of trace information associated with a plurality of threads configured to execute the program, and correlating the one or more records to one or more corresponding instructions included in the source code. Each record in the one or more records includes at least a value of a program counter and a scheduler state of the thread. | 2015-12-10 |
20150355997 | Server-Platform Simulation Service - A server-platform simulation service process involves receiving requests for server-platform simulation service. Simulation scenes, including checkpoint simulation scenes, corresponding to respective requests are identified. Respective computer hosts are configured for executing the server-platform simulator based on the identified scenes. Users are provided access to the computer hosts configured with the identified scenes. | 2015-12-10 |
20150355998 | ERROR DEVELOPER ASSOCIATION - Systems, methods, and machine-readable and executable instructions are provided for error developer association. Error developer association can include identifying a number of portions of the source code associated with a message, wherein the message is associated with an error. Error developer association can also include associating a developer with a portion of the source code of the number of portions of the source code. Error developer association can also include identifying a developer of a number of developers to resolve the error. | 2015-12-10 |
20150355999 | TEST COVERAGE ANALYSIS - A test coverage analysis method and corresponding apparatus are disclosed. By executing the program under test using one or more test cases, generating one or more heapdump files containing the call stack information of the program under test, and analyzing the call stack information in the one or more heapdump files, the coverage information of the one or more test cases, in terms of functions in the program under test, is obtained. | 2015-12-10 |
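The coverage computation described above reduces to a set union over the call stacks found in the heapdump files. A minimal sketch, assuming stacks have already been parsed out of the dumps into lists of function names:

```python
def coverage_from_stacks(stacks, all_functions):
    """Union the functions seen in heapdump call stacks, then report which
    functions of the program under test were exercised and the ratio."""
    seen = set()
    for stack in stacks:
        seen.update(stack)
    covered = seen & set(all_functions)
    return sorted(covered), len(covered) / len(all_functions)
```

Functions of the program that never appear in any collected stack are the uncovered remainder the test cases missed.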
20150356000 | REMEDIATION OF KNOWN DEFECTS AND VULNERABILITIES IN CLOUD APPLICATION PACKAGES - A method for applying remediation policy to a cloud application package having a set of components is described. The method is initiated in response to discovery of a new vulnerability. It begins by comparing information from a deployment description against a data set of known problems associated with the one or more of the components. The deployment description represents the set of components and their interrelationships. For each of the one or more components, one or more known problems that satisfy a given severity and/or complexity criteria are identified. Thereafter, and with respect to at least one of the components for which one known problem satisfying the given criteria has been identified, the remediation policy (e.g., an update, a replacement, a patch, an additional installable) is applied to attempt to rectify the known problem. After applying the policy, the old version of the package is replaced with the new version. | 2015-12-10 |
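The matching step above — compare a deployment description against a data set of known problems and keep only those meeting the severity criteria — can be sketched as a dictionary join. The field names (`affects`, `severity`, `fix`) and the numeric severity scale are assumptions for illustration:

```python
def problems_to_remediate(deployment, known_problems, min_severity):
    """Compare a deployment description (component -> version) against a
    known-problem data set and collect the remediation (patch, update,
    replacement) for each problem meeting the severity criterion."""
    actions = {}
    for component, version in deployment.items():
        for problem in known_problems.get(component, []):
            if problem["affects"] == version and problem["severity"] >= min_severity:
                actions[component] = problem["fix"]
    return actions
```

Applying the collected actions and swapping the old package version for the new one would be the abstract's final step.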
20150356001 | UNIT TEST AUTOMATION FOR BUSINESS RULES AND APPLICATIONS - There are provided systems and methods for unit test automation for business rules and applications. A service provider, such as a payment provider, may wish to integrate software and platforms offered by Pegasystems, Inc., in particular PegaRULES Process Commander ("PRPC"), which offers a business process management system. PRPC allows the service provider to create and manage business rules and build business applications and platforms, such as a customer support platform. In order to provide a more flexible and comprehensive automated unit testing mechanism, a Java framework may be utilized that runs test cases in PRPC for the business rules. The Java framework may feed data into test cases and may enable dynamic data to be entered for the test cases. Additionally, the Java framework may allow for editing of data for the PRPC test cases and may allow the test cases to be reused and deleted. | 2015-12-10 |
20150356002 | DEPLOYMENT PATTERN MONITORING - A computer system can detect a request for status information relating to a particular deployment pattern; query, in response to the request, a deployment pattern registry for deployment configuration information about the particular deployment pattern; test deployment capabilities for the particular deployment pattern by: verifying installation files for the particular deployment pattern are accessible; identifying one or more candidate deployment components for a hypothetical deployment of the particular deployment pattern; installing, on the one or more candidate deployment components, a virtual machine that is configured to test computing resources of the one or more candidate deployment components; and deleting the virtual machine in response to receiving test results regarding the resources of the one or more candidate deployment components. The system can generate a notification in response to detecting a failure in the testing. | 2015-12-10 |
20150356003 | METHOD FOR SHARING REFERENCE DATA AMONG APPLICATION PROGRAMS EXECUTED BY A PLURALITY OF VIRTUAL MACHINES AND REFERENCE DATA MANAGEMENT APPARATUS AND SYSTEM THEREOF - Apparatus, method and systems for managing reference data, which can prevent duplicated data loading of reference data and eliminate redundancy of I/O operations for loading of the same reference data required by different virtual machines present in the same physical node to reduce use memory and I/O through sharing virtual machine leveled memories, are provided. | 2015-12-10 |
20150356004 | MEMORY CONTROLLER FOR REQUESTING MEMORY SPACES AND RESOURCES - In an example, an apparatus includes a memory controller. The memory controller may be configured to communicate a request to a computer program for a resource, to initialize a memory, and to perform operations on the memory as instructed. The computer program may be configured to make resources available in response to requests for the resources. The memory controller may be further configured to use the resource in response to an indication from the computer program that the resource is available. | 2015-12-10 |
20150356005 | AUTOMATICALLY RECONFIGURING A STORAGE MEMORY TOPOLOGY - A storage cluster is provided. The storage cluster includes a plurality of storage nodes within a single chassis. Each of the plurality of storage nodes has nonvolatile solid-state memory for storage of user data. The plurality of storage nodes are configured to distribute the user data and metadata throughout the plurality of storage nodes with erasure coding of the user data such that the plurality of storage nodes can access the user data, via the erasure coding, with a failure of two of the plurality of storage nodes. The plurality of storage nodes are configured to employ the erasure coding to reconfigure redundancy of the user data responsive to one of adding or removing a storage node. | 2015-12-10 |
20150356006 | NONVOLATILE MEMORY ARRAY LOGIC - A method for implementing nonvolatile memory array logic includes configuring a crosspoint memory array in a first configuration and applying an input voltage to the crosspoint array in the first configuration to produce a setup voltage. The crosspoint array is configured in a second configuration and an input voltage is applied to the crosspoint array in the second configuration to produce a sense voltage. The setup voltage and the sense voltage are compared to perform a logical operation on data stored in the crosspoint array. A system for performing nonvolatile memory array logic is also provided. | 2015-12-10 |
20150356007 | PARALLEL GARBAGE COLLECTION IMPLEMENTED IN HARDWARE - Embodiments of the invention provide a method and system for dynamic memory management implemented in hardware. In an embodiment, the method comprises storing objects in a plurality of heaps, and operating a hardware garbage collector to free heap space. The hardware garbage collector traverses the heaps and marks selected objects, uses the marks to identify a plurality of the objects, and frees the identified objects. In an embodiment, the method comprises storing objects in a heap, each of at least some of the objects including a multitude of pointers; and operating a hardware garbage collector to free heap space. The hardware garbage collector traverses the heap, using the pointers of some of the objects to identify others of the objects; processes the objects to mark selected objects; and uses the marks to identify a group of the objects, and frees the identified objects. | 2015-12-10 |
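The traverse-mark-free cycle the abstract describes is classic mark-and-sweep; a minimal software sketch (the patent implements this in hardware, and the dict-of-pointer-lists heap model here is an assumption) looks like:

```python
def mark_and_sweep(heap, roots):
    """Mark phase: follow pointers reachable from the roots and mark each
    visited object. Sweep phase: free every object left unmarked."""
    marked = set()
    stack = list(roots)
    while stack:
        obj = stack.pop()
        if obj in marked:
            continue
        marked.add(obj)
        stack.extend(heap.get(obj, []))  # push this object's pointers
    freed = [obj for obj in heap if obj not in marked]
    for obj in freed:
        del heap[obj]                    # reclaim unreachable heap space
    return freed
```

The hardware collector in the abstract performs the same traversal and marking over one or more heaps, freeing the objects the marks identify as unreachable.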
20150356008 | METHOD FOR ACCESS TO ALL THE CELLS OF A MEMORY AREA FOR PURPOSES OF WRITING OR READING DATA BLOCKS IN SAID CELLS - A method for access to all cells in a memory area for purposes of writing or reading data blocks in the cells may include, for each access time (Ti with i=0 to N) to the cells in the memory area to be accessed, a process of determining the address (ADRj, with j=0 to N) of the cell of the memory area to be accessed at the access time (Ti), an address (ADRj) determined for an access time Ti not being once again determined for another access time (Tk, k≠j). The process of determining each address (ADRj) may be a pseudorandom process. The method may be used, for example, in any type of card, chip card, SIM card, etc., which includes a processing unit, such as a microcontroller, for manipulating cryptographic data serving to identify and/or authenticate a user of such a card. | 2015-12-10 |
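The core property above — every cell address is produced exactly once, in an order that looks pseudorandom — can be realized by a coprime-stride walk. This is only one illustrative way to build such a permutation, not the patented pseudorandom process:

```python
import math

def access_order(n_cells, stride, offset):
    """Produce an access order that visits every cell address exactly once.
    A stride coprime to the number of cells makes the walk a permutation,
    so no address determined for one access time recurs at another."""
    assert math.gcd(stride, n_cells) == 1  # coprimality guarantees full coverage
    return [(offset + i * stride) % n_cells for i in range(n_cells)]
```

Randomizing the `stride` and `offset` parameters per run would give different-looking orders while preserving the visit-each-cell-once guarantee, which is what makes such schemes attractive against side-channel observation of cryptographic data accesses.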
20150356009 | DATA STORAGE LAYOUT - Examples of the present disclosure provide apparatuses and methods for determining a data storage layout. An example apparatus comprises a first address space of a memory array comprising a first number of memory cells coupled to a plurality of sense lines and to a first select line. The first address space is configured to store a logical representation of a first portion of a value. The example apparatus also comprises a second address space of the memory array comprising a second number of memory cells coupled to the plurality of sense lines and to a second select line. The second address space is configured to store a logical representation of a second portion of the value. The example apparatus also comprises sensing circuitry configured to receive the value and perform a logical operation using the value without performing a sense line address access. | 2015-12-10 |
20150356010 | DATA STORAGE IN A MOBILE DEVICE WITH EMBEDDED MASS STORAGE DEVICE - A mobile device ( | 2015-12-10 |
20150356011 | ELECTRONIC DEVICE AND DATA WRITING METHOD - An electronic device includes a first storage unit, a second storage unit and a control unit. The first storage unit stores the cache of the data. The second storage unit stores the data. The control unit calculates a first ratio of the cache corresponding to the data according to the capacity of the first storage unit. The control unit sends a distribution signal to the processing unit when the control unit reads the data from the second storage unit. The processing unit obtains a first distribution result corresponding to the cache according to the first ratio, and stores the cache to the first storage unit according to the first distribution result. | 2015-12-10 |
20150356012 | DATA FLUSH OF GROUP TABLE - A group table includes one or more groups. A synch command including a synch address range is received. An order in which data of the one or more groups is flushed is determined by whether the synch address range is included in the one or more groups. | 2015-12-10 |
20150356013 | SYSTEM AND METHOD FOR MANAGING TRANSACTIONS - A method for writing data, the method may include: receiving or generating, by an interfacing module, a data unit coherent write request for performing a coherent write operation of a data unit to a first address; receiving, by the interfacing module and from a circuit that comprises a cache and a cache controller, a cache coherency indicator that indicates that a most updated version of the content stored at the first address is stored in the cache; and instructing, by the interfacing module, the cache controller to invalidate a cache line of the cache that stored the most updated version of the first address without sending the most updated version of the content stored at the first address from the cache to a memory module that differs from the cache if a length of the data unit equals a length of the cache line. | 2015-12-10 |
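The optimization above — skip the write-back when the incoming write covers the whole cache line, because every byte is about to be overwritten — can be modeled with a toy byte-list cache. The data structures are illustrative assumptions, not the patented interfacing module:

```python
def coherent_write(cache, memory, addr, data, line_size):
    """Coherent write of `data` to `addr`. A full-line write invalidates the
    cached (most up-to-date) dirty copy without writing it back, since the
    write overwrites the entire line; a partial write must flush it first."""
    if addr in cache:
        if len(data) == line_size:
            del cache[addr]                       # invalidate only, no write-back
        else:
            memory[addr] = list(cache.pop(addr))  # flush the dirty line first
    line = list(memory.get(addr, [0] * line_size))
    line[:len(data)] = data                       # apply the incoming write
    memory[addr] = line
```

The saved write-back is the memory traffic the abstract's length-equality test is designed to avoid.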
20150356014 | DYNAMICALLY ADJUSTING THE HARDWARE STREAM PREFETCHER PREFETCH AHEAD DISTANCE - An apparatus for prefetching data for a processor is presented. The apparatus may include a memory, a first counter, a second counter, and a control circuit. The memory may include a table with at least one entry in which the at least one entry may include an expected address of a next memory access and a next address from which to fetch data, wherein the next address is an offset value different from the expected address. The at least one entry may also include a maximum limit for the offset value. The first counter may increment responsive to an address of a memory access matching the expected address. The second counter may increment responsive to the address of the memory access resulting in a cache miss. The control circuit may be configured to increment the maximum limit for the offset value dependent upon a value of the second counter. | 2015-12-10 |
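The two-counter scheme above can be sketched as a toy software model: one counter tracks expected-address matches, the other tracks cache misses, and the miss counter drives growth of the prefetch-ahead distance up to a limit. The bump-every-four-misses threshold is an illustrative assumption:

```python
class StreamPrefetcher:
    """Toy model of one stream-prefetcher table entry."""
    def __init__(self, stride, max_offset):
        self.stride = stride
        self.offset = 1            # prefetch-ahead distance, in strides
        self.max_offset = max_offset
        self.expected = None
        self.match_count = 0       # first counter: expected-address matches
        self.miss_count = 0        # second counter: cache misses

    def access(self, addr, cache_miss):
        if addr == self.expected:
            self.match_count += 1
        if cache_miss:
            self.miss_count += 1
            # misses mean prefetches arrive too late: look further ahead
            if self.miss_count % 4 == 0 and self.offset < self.max_offset:
                self.offset += 1
        self.expected = addr + self.stride
        return addr + self.offset * self.stride  # next address to prefetch
```

A hot stream that keeps missing thus pulls its prefetches progressively further ahead of the demand accesses, which is the dynamic adjustment the title refers to.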
20150356015 | PROCESSOR PERFORMANCE BY DYNAMICALLY RE-ADJUSTING THE HARDWARE STREAM PREFETCHER STRIDE - An apparatus may include a first memory, a control circuit, a first address comparator and a second address comparator. The first memory may store a table, which may include an expected address of a next memory access and an offset to increment a value of the expected address. The control circuit may read data at a predicted address in a second memory and store the read data in a cache. The first and second address comparators may determine if a value of a received address is between the value of the expected address and the value of the expected address minus a value of the offset. The control circuit may also modify the value of the offset responsive to determining the value of the received address is between the value of the expected address and the value of the expected address minus the value of the offset. | 2015-12-10 |
20150356016 | METHOD OF ESTABLISHING PRE-FETCH CONTROL INFORMATION FROM AN EXECUTABLE CODE AND AN ASSOCIATED NVM CONTROLLER, A DEVICE, A PROCESSOR SYSTEM AND COMPUTER PROGRAM PRODUCTS - A method of establishing pre-fetch control information from an executable code is described. The method comprises inspecting the executable code to find one or more instructions corresponding to an unconditional change in program flow during an execution of the executable code when the executable code is retrieved from a non-volatile memory [NVM] comprising a plurality of NVM lines. For each unconditional change of flow instruction in the executable code, the method comprises establishing a NVM line address of the NVM line containing said unconditional change of flow instruction; establishing a destination address associated with the unconditional change of flow instruction; determining whether the destination address is in an address range corresponding to a NVM-pre-fetch starting from said NVM line address; establishing a pre-fetch flag indicating whether the destination address is in the address range corresponding to a NVM-pre-fetch starting from said NVM line address; and recording the pre-fetch flag in a pre-fetch control information record. Also, a NVM controller, a device, a processor system and computer program products are described. | 2015-12-10 |
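The inspection loop above — find each unconditional change-of-flow instruction, compute its NVM line address, and flag whether the destination falls inside the pre-fetch window starting at that line — can be sketched over a toy instruction list. The two-line pre-fetch window and the `("jmp", dest)` encoding are assumptions for illustration:

```python
def build_prefetch_records(code, line_size, window_lines=2):
    """Scan instructions for unconditional jumps and record, per jump, a
    pre-fetch flag: does the destination lie inside the NVM pre-fetch
    window that starts at the jump's own NVM line?"""
    records = []
    for addr, (op, dest) in enumerate(code):
        if op == "jmp":                        # unconditional change of flow
            line_addr = (addr // line_size) * line_size
            in_window = line_addr <= dest < line_addr + window_lines * line_size
            records.append({"line": line_addr, "dest": dest, "prefetch": in_window})
    return records
```

At run time an NVM controller could consult such a record to decide whether the pre-fetch already in flight covers the jump target or must be restarted at the destination line.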
20150356017 | PREDICTIVE CACHE APPARATUS AND METHOD OF CACHE PREDICTION - The present disclosure discloses a predictive cache apparatus, particularly but not exclusively for controlling the cache update of a database, the predictive cache apparatus including a CEP processor configured to detect events generated by the database or operational units, and to generate a cache operation order based on detected events, and a cache distributor configured to control the data to be cached in cache units based on the cache operation order generated by the CEP processor. The disclosure also discloses a method of cache prediction that can be implemented by such a predictive cache apparatus. | 2015-12-10 |