21st week of 2016 patent application highlights part 44 |
Patent application number | Title | Published |
20160147517 | METHOD AND COMPUTER PROGRAM PRODUCT FOR DISASSEMBLING A MIXED MACHINE CODE - A method and a computer program product for disassembling a mixed machine code are described. The machine code is provided as a sequence of code items including one or more instructions and one or more data items. The method comprises: storing the sequence of code items in accordance with a corresponding sequence of addresses; executing the machine code, thereby generating an execution trace; and partitioning the sequence of addresses into instruction address blocks and data address blocks on the basis of control data, the control data comprising at least the execution trace. | 2016-05-26 |
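The trace-guided partitioning this abstract describes can be sketched briefly; the Python below is an illustrative reconstruction under assumed data shapes (a sorted address list and a set of addresses observed as executed in the trace), not the claimed method itself.

```python
def partition_addresses(addresses, executed):
    """Split a sorted sequence of code addresses into maximal runs,
    labeling a run 'instruction' if its addresses appear in the
    execution trace (the control data) and 'data' otherwise."""
    blocks = []
    for addr in addresses:
        kind = "instruction" if addr in executed else "data"
        if blocks and blocks[-1][0] == kind:
            blocks[-1][1].append(addr)      # extend the current run
        else:
            blocks.append((kind, [addr]))   # start a new block
    # report each block as (kind, first_address, last_address)
    return [(kind, run[0], run[-1]) for kind, run in blocks]
```

In the real setting the executed-address set would come from instrumented execution of the machine code; here it is just a plain set.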
20160147518 | MODEL BASED ENFORCEMENT OF SOFTWARE COMPLIANCE - A method for enforcing a model deployment specification for a software application in execution in a virtualised computing environment, the method comprising: retrieving a compliance characteristic for the application, the compliance characteristic having associated a compliance criterion; receiving a model deployment specification for the compliance characteristic, the model deployment specification including an identification of a set of model resources being selected to, when instantiated, satisfy the compliance criterion; identifying a set of instantiated resources as resources instantiated for execution of the application; in response to a determination that the set of model resources includes absent resources as resources outside the set of instantiated resources, modifying the set of instantiated resources by instantiating the absent resources for execution of the application such that the absent resources are included in the set of instantiated resources. | 2016-05-26 |
20160147519 | METHOD FOR INSTALLING AT HIGH SPEED AND INITIALIZING SOFTWARE IN CLIENT PC USING CLOUD SERVER AND CLIENT LAUNCHER - Provided is a method for high-speed installation and initialization of software in a client PC using a cloud server and a client launcher, which can prevent the waste of resources, such as PC storage space, by selectively installing only the software required by a user in a client PC chiefly used in an organization, such as a school or a company. Further provided is a method in which the system of a client PC can be stably maintained because the client PC is automatically initialized after the software installed in it has been used. | 2016-05-26 |
20160147520 | DEVICE DRIVER AGGREGATION IN OPERATING SYSTEM DEPLOYMENT - A tool for managing device driver aggregation during operating system deployment. The tool receives, by a first computer processor, a request for a device bundle, the request including a unique identifier. The tool determines, by the first computer processor, whether an available driver bundle matches the requested device bundle based, at least in part, on the unique identifier. Responsive to determining an available driver bundle does not match a requested device bundle, the tool creates, by the first computer processor, an associated driver bundle for the requested device bundle. | 2016-05-26 |
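The match-or-create flow for driver bundles reduces to a keyed lookup with a fallback builder. A minimal sketch (the function and parameter names are assumptions, not from the application):

```python
def get_driver_bundle(bundles, unique_id, build_bundle):
    """Return the driver bundle matching the request's unique identifier.
    If no available bundle matches, create an associated bundle with
    build_bundle and cache it for subsequent requests."""
    if unique_id not in bundles:
        bundles[unique_id] = build_bundle(unique_id)
    return bundles[unique_id]
```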
20160147521 | SYSTEM FOR DETECTING COMPONENTS OF A VEHICLE - System for detecting components ( | 2016-05-26 |
20160147522 | APPLICATION BROKER FOR MULTIPLE VIRTUALISED COMPUTING ENVIRONMENTS - A method for deploying a software application for execution, the method comprising: receiving an application specification for the application, the application specification including an identification of one or more resources required for execution of the application; receiving a set of infrastructure specifications, each infrastructure specification including an identification of one or more resources associated with a virtualised computing environment in a set of virtualised computing environments; receiving a set of compliance characteristics for the application, each compliance characteristic including one or more criteria, each of the criteria being based on one or more formal parameters concerning a resource; receiving a set of software component definitions, each software component definition including one or more of: a) an indication of one or more actual parameters the software component is operable to provide; and b) an indication of one or more virtualised computing environments in the set of virtualised computing environments with which the software component is operable to execute; selecting, for each of the resources identified in the application specification, a virtualised computing environment based on the set of infrastructure specifications, the set of compliance characteristics and the set of software component definitions, wherein the selected virtualised computing environments are operable to generate actual parameters corresponding to one or more formal parameters for the criteria such that a measure of a number of evaluable criteria meets a predetermined threshold. | 2016-05-26 |
20160147523 | SYSTEM AND METHOD FOR UPDATING MONITORING SOFTWARE USING CONTENT MODEL WITH VALIDITY ATTRIBUTES - According to some embodiments, each of a plurality of computer systems to be monitored receives a monitoring solution agent code portion and a monitoring solution agent content portion, the monitoring solution agent code portion and monitoring solution agent content portion together forming a monitoring solution agent. The monitoring solution agent code portion and monitoring solution agent content portion may comprise a remote monitoring solution agent that executes at the computer system to be monitored. According to some embodiments, the monitoring solution agent content portion includes a data source layer, a data provider layer, a request layer, and a user interface layer. | 2016-05-26 |
20160147524 | SYSTEM AND METHOD FOR UPDATING MONITORING SOFTWARE USING CONTENT MODEL - According to some embodiments, each of a plurality of computer systems to be monitored receives a monitoring solution agent code portion and a monitoring solution agent content portion, the monitoring solution agent code portion and monitoring solution agent content portion together forming a monitoring solution agent. The monitoring solution agent code portion and monitoring solution agent content portion may comprise a remote monitoring solution agent that executes at the computer system to be monitored. According to some embodiments, the monitoring solution agent content portion includes a data source layer, a data provider layer, a request layer, and a user interface layer. | 2016-05-26 |
20160147525 | SYSTEM AND METHOD FOR FIRMWARE UPDATE OF VEHICLE - A system and a method for a firmware update of a vehicle, wherein the system includes a telematics terminal provided in a vehicle; a mobile communication server configured to provide firmware for the telematics terminal; and a telematics server configured to manage the firmware of the telematics terminal and provide firmware update information to a mobile terminal when the update information is received from the mobile communication server. The mobile terminal is configured to check whether the download has progressed, based on the firmware update information provided from the telematics server, while the remote service is executed in a state in which the ignition of the vehicle is turned off, and to request an update download of the firmware from the telematics server according to the check result. | 2016-05-26 |
20160147526 | CENTRALIZED CLIENT APPLICATION MANAGEMENT - Systems and methods for centralized client application management are provided. In an example embodiment, device data is received from a user device. The user device is identified according to an identification rule. A client state is received from the user device. A match between the client state and a specified state is determined. Based on the client state matching the specified state, an instruction to be performed on the user device is generated. The instruction is caused to be performed on the user device. The instruction causes a change to the client state stored on the user device. | 2016-05-26 |
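The compare-and-instruct step (reported client state vs. specified state) can be sketched as a diff that emits an instruction only for mismatched keys; the dict-based shapes below are illustrative assumptions:

```python
def reconcile(client_state, specified_state):
    """Compare the client state received from a user device with the
    specified state, and generate one instruction per key that must
    change on the device to make the states match."""
    return [("set", key, wanted)
            for key, wanted in specified_state.items()
            if client_state.get(key) != wanted]
```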
20160147527 | Electronic Device and Method for Firmware Updating Thereof - A firmware update method applied to a host device and a peripheral device, wherein the peripheral device includes a memory device and a controller. The firmware update method includes: transmitting a first firmware data sector to a peripheral device from the host device, wherein the first firmware data sector has a first mode parameter; and retransmitting the first firmware data sector having a second mode parameter to the peripheral device from the host device after an interruption event has occurred on the memory device during the transmission. | 2016-05-26 |
20160147528 | SYSTEM AND METHOD FOR UPDATING CONTENT WITHOUT DOWNTIME - A local monitoring system of a computer system to be monitored may receive a monitoring solution agent code portion and a first monitoring solution agent content portion. Version D may be assigned to the first content portion, and a status of version D may be set to active such that new end user sessions are initialized with a local agent comprising the code portion and version D. A second content portion may be uploaded and assigned to be version A. Responsive to an end user request, the status of version D may be set to ready and the status of version A may be set to active such that new sessions are initialized with an agent comprising the code portion and version A. A third content portion may then be uploaded and assigned to be version B. Responsive to an end user request, the status of version A may be set to ready and the status of version B may be set to active such that new sessions are initialized with an agent comprising the code portion and version B. When sessions using version A no longer exist, version A may be deleted. | 2016-05-26 |
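The active/ready lifecycle in this abstract is a small state machine: new sessions bind to the active version, and a superseded version is deleted only once no session still uses it. A sketch under assumed names (the class and its methods are inventions for illustration):

```python
class ContentVersions:
    """Zero-downtime content switch: new sessions always bind to the
    'active' version; 'ready' versions are deleted once unused."""

    def __init__(self):
        self.versions = {}   # version name -> 'active' or 'ready'
        self.sessions = {}   # version name -> live session count
        self.active = None

    def upload(self, name):
        self.versions[name] = "ready"
        self.sessions.setdefault(name, 0)

    def activate(self, name):
        if self.active is not None:
            self.versions[self.active] = "ready"
        self.versions[name] = "active"
        self.active = name

    def open_session(self):
        """New end-user sessions are initialized with the active version."""
        self.sessions[self.active] += 1
        return self.active

    def close_session(self, name):
        self.sessions[name] -= 1
        if self.versions[name] == "ready" and self.sessions[name] == 0:
            del self.versions[name]   # no remaining users: safe to delete

    def __contains__(self, name):
        return name in self.versions
```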
20160147529 | Source Code Management for a Multi-Tenant Platform-as-a-Service (PaaS) System - Aspects of the disclosure provide source code management for a multi-tenant Platform-as-a-Service (PaaS) system. A method of the disclosure includes creating, by a processing device of a platform-as-a-service (PaaS) system, a first container to host a first source code management repository for an application; receiving, at the first container, source code associated with the application; creating, by the processing device, a second container to provide deployment functionality for the application, the second container comprising resource-constrained processing space of a node of the PaaS system to execute functionality of the application; and deploying, by the processing device, the source code on the PaaS system using the second container. Aspects of the disclosure may be implemented using high-availability (HA) clusters by replicating the SCM container(s). Aspects of the disclosure may provide users with cost-effective, scaled, and secure PaaS services using reduced infrastructure. | 2016-05-26 |
20160147530 | STRUCTURE FOR MICROPROCESSOR ARITHMETIC LOGIC UNITS - Examples of techniques for designing processors are described herein. In one example, a design structure can be tangibly embodied in a machine readable medium for designing, manufacturing, or testing an integrated circuit. The design structure can include a logic to determine whether a received instruction is an updating fixed point instruction or a non-updating fixed point instruction. The design structure can include a first arithmetic logic unit (ALU) to execute the received instruction if the received instruction is determined to be an updating fixed point instruction and store an update value in a general register. The design structure can include a second arithmetic logic unit (ALU) to execute the received instruction if the received instruction is determined to be a non-updating fixed point instruction. | 2016-05-26 |
20160147531 | DESIGN STRUCTURE FOR MICROPROCESSOR ARITHMETIC LOGIC UNITS - A method in a computer-aided design system for generating a functional design model of a processor, is described herein. The method comprises generating a functional representation of logic to determine whether an instruction is an updating instruction or a non-updating instruction. The method further comprises generating a functional representation of a first arithmetic logic unit (ALU) coupled to a general register in the processor, the first ALU to execute the instruction if the instruction is an updating instruction and store an update value in the general register, and generating a functional representation of a second ALU in the processor to execute the instruction if the instruction is a non-updating instruction. | 2016-05-26 |
20160147532 | METHOD FOR HANDLING INTERRUPTS - Provided is a method for handling interrupts. The method includes receiving a first interrupt, and allocating the first interrupt to a first task queue of a first processing unit among a plurality of processing units, receiving a second interrupt, and allocating the second interrupt to the first task queue, handling the first interrupt allocated to the first task queue on the first processing unit, selecting a second processing unit that will handle the second interrupt among the plurality of processing units while the first interrupt is handled, and transferring the second interrupt allocated to the first task queue to a second task queue of the selected second processing unit. | 2016-05-26 |
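The transfer step in this abstract — moving a pending interrupt off a busy processing unit's queue to another unit — can be sketched as a queue rebalance; the list/deque shapes and function name are assumptions:

```python
from collections import deque

def rebalance(queues, busy, first_cpu=0):
    """While the first CPU is handling one interrupt, transfer a further
    interrupt pending in its task queue to the task queue of the
    least-loaded idle CPU, if any idle CPU exists."""
    if busy[first_cpu] and queues[first_cpu]:
        idle = [i for i in range(len(queues)) if not busy[i]]
        if idle:
            target = min(idle, key=lambda i: len(queues[i]))
            queues[target].append(queues[first_cpu].popleft())
    return queues
```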
20160147533 | INSTRUCTION TO LOAD DATA UP TO A SPECIFIED MEMORY BOUNDARY INDICATED BY THE INSTRUCTION - A Load to Block Boundary instruction is provided that loads a variable number of bytes of data into a register while ensuring that a specified memory boundary is not crossed. The boundary may be specified a number of ways, including, but not limited to, a variable value in the instruction text, a fixed instruction text value encoded in the opcode, or a register based boundary. | 2016-05-26 |
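The boundary clamp itself is simple address arithmetic. A Python model of the semantics — the instruction would do this in hardware, with the boundary taken from the instruction text, the opcode, or a register, as the abstract notes:

```python
def load_to_block_boundary(memory, start, max_len, boundary):
    """Load up to max_len bytes beginning at `start`, clamped so the
    access never crosses the next multiple of `boundary` (e.g. a
    page or cache-line edge)."""
    next_edge = (start // boundary + 1) * boundary
    length = min(max_len, next_edge - start)
    return memory[start:start + length]
```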
20160147534 | METHOD FOR MIGRATING CPU STATE FROM AN INOPERABLE CORE TO A SPARE CORE - An apparatus is disclosed in which the apparatus may include a plurality of cores, including a first core, a second core and a third core, and circuitry coupled to the first core. The first core may be configured to process a plurality of instructions. The circuitry may be configured to detect that the first core stopped committing a subset of the plurality of instructions, and to send an indication to the second core that the first core stopped committing the subset. The second core may be configured to disable the first core from further processing instructions of the subset responsive to receiving the indication, and to copy data from the first core to a third core responsive to disabling the first core. The third core may be configured to resume processing the subset dependent upon the data. | 2016-05-26 |
20160147535 | VARIABLE REGISTER AND IMMEDIATE FIELD ENCODING IN AN INSTRUCTION SET ARCHITECTURE - A method and apparatus provide means for compressing instruction code size. An Instruction Set Architecture (ISA) encodes instructions in compact, usual, or extended bit lengths. Commonly used instructions are encoded in both compact and usual bit lengths, with the compact or usual form chosen based on power, performance, or code-size requirements. Instructions of the ISA can be used in both privileged and non-privileged operating modes of a microprocessor. The instruction encodings can be used interchangeably in software applications. Instructions from the ISA may be executed on any programmable device enabled for the ISA, including a single instruction set architecture processor or a multi-instruction set architecture processor. | 2016-05-26 |
20160147536 | Transitioning the Processor Core from Thread to Lane Mode and Enabling Data Transfer Between the Two Modes - Techniques for switching between two (thread and lane) modes of execution in a dual execution mode processor are provided. In one aspect, a method for executing a single instruction stream having alternating serial regions and parallel regions in a same processor is provided. The method includes the steps of: creating a processor architecture having, for each architected thread of the single instruction stream, one set of thread registers, and N sets of lane registers across N lanes; executing instructions in the serial regions of the single instruction stream in a thread mode against the thread registers; executing instructions in the parallel regions of the single instruction stream in a lane mode against the lane registers; and transitioning execution of the single instruction stream from the thread mode to the lane mode or from the lane mode to the thread mode. | 2016-05-26 |
20160147537 | Transitioning the Processor Core from Thread to Lane Mode and Enabling Data Transfer Between the Two Modes - Techniques for switching between two (thread and lane) modes of execution in a dual execution mode processor are provided. In one aspect, a method for executing a single instruction stream having alternating serial regions and parallel regions in a same processor is provided. The method includes the steps of: creating a processor architecture having, for each architected thread of the single instruction stream, one set of thread registers, and N sets of lane registers across N lanes; executing instructions in the serial regions of the single instruction stream in a thread mode against the thread registers; executing instructions in the parallel regions of the single instruction stream in a lane mode against the lane registers; and transitioning execution of the single instruction stream from the thread mode to the lane mode or from the lane mode to the thread mode. | 2016-05-26 |
20160147538 | PROCESSOR WITH MULTIPLE EXECUTION PIPELINES - An apparatus and method for increasing performance in a processor or other instruction execution device while minimizing energy consumption. A processor includes a first execution pipeline and a second execution pipeline. The first execution pipeline includes a first decode unit and a first execution control unit coupled to the first decode unit. The first execution control unit is configured to control execution of all instructions executable by the processor. The second execution pipeline includes a second decode unit, and a second execution control unit coupled to the second decode unit. The second execution control unit is configured to control execution of a subset of the instructions executable via the first execution control unit. | 2016-05-26 |
20160147539 | INFORMATION HANDLING SYSTEM PERFORMANCE OPTIMIZATION SYSTEM - A performance optimization system includes a plurality of system components. A monitoring plug-in and a configuration plug-in are coupled to each of the plurality of system components. A monitoring engine receives monitoring information for each of the plurality of system components from their respective monitoring plug-in. A configuration engine sends configuration setting information to each of the plurality of system components through their respective configuration plug-ins. A performance optimization engine receives the monitoring information from the monitoring engine, determines a policy associated with the monitoring information and, in response, retrieves configuration setting information that is associated with the policy and sends the configuration setting information to the configuration engine in order to change the configuration of at least one of the plurality of system components. | 2016-05-26 |
20160147540 | SERVER SYSTEM - A server system is disclosed herein, which includes a first BIOS chip, a second BIOS chip, a platform controller, and a baseboard management controller. The platform controller and the baseboard management controller are electrically connected to a first multi-way selector and a second multi-way selector, respectively. The first multi-way selector and the second multi-way selector are individually electrically connected to both the first BIOS chip and the second BIOS chip. When either of the first and second BIOS chips fails to activate the server system, the server system can be automatically activated by the other BIOS chip. Further, the baseboard management controller can simultaneously update the firmware of the BIOS chip that failed to activate, thereby improving the security and reliability of the server system. | 2016-05-26 |
20160147541 | DEVICE DRIVER AGGREGATION IN OPERATING SYSTEM DEPLOYMENT - A tool for managing device driver aggregation during operating system deployment. The tool receives, by a first computer processor, a request for a device bundle, the request including a unique identifier. The tool determines, by the first computer processor, whether an available driver bundle matches the requested device bundle based, at least in part, on the unique identifier. Responsive to determining an available driver bundle does not match a requested device bundle, the tool creates, by the first computer processor, an associated driver bundle for the requested device bundle. | 2016-05-26 |
20160147542 | INFORMATION PROCESSING APPARATUS, SERVER APPARATUS, INFORMATION PROCESSING SYSTEM, CONTROL METHOD, AND COMPUTER PROGRAM - An information processing apparatus having a function of entering and returning from a hibernation state and communicable with a server apparatus performing device certification includes a storage unit configured to, in a case where a software module is activated, store a hash value of the activated software module in a volatile memory, a request unit configured to request device certification based on a hash value stored in the volatile memory from the server apparatus, and an excluding unit configured to, in a case where the device certification is requested after returning from the hibernation state, exclude a software module activated before entering the hibernation state from a target of the device certification. | 2016-05-26 |
20160147543 | SELECTIVE HIBERNATION OF ACTIVITIES IN AN ELECTRONIC DEVICE - In an electronic device capable of running multiple software applications concurrently, applications, documents, cards, or other activities can be selected for hibernation so as to free up system resources for other activities that are in active use. A determination is made as to which activities should hibernate, for example based on a determination as to which activities have not been used recently or based on relative resource usage. When an activity is to hibernate, its state is preserved on a storage medium such as a disk, so that the activity can later be revived in the same state and the user can continue with the same task that was being performed before the activity entered hibernation. | 2016-05-26 |
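The recency-based selection this abstract describes amounts to ranking activities by last use and hibernating the rest, preserving each one's state on disk. A sketch with invented field names (`last_used`, `state`) and an assumed JSON-serializable state:

```python
import json

def pick_hibernation_candidates(activities, keep_active=3):
    """Rank activities by recency of use and return everything beyond
    the `keep_active` most recently used, as hibernation candidates."""
    ranked = sorted(activities, key=lambda a: a["last_used"], reverse=True)
    return ranked[keep_active:]

def hibernate(activity, path):
    """Preserve the activity's state on a storage medium so it can
    later be revived in the same state."""
    with open(path, "w") as f:
        json.dump(activity["state"], f)
```

A resource-usage criterion would just swap the sort key (e.g. memory footprint) for `last_used`.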
20160147544 | ASSISTED CLIENT APPLICATION ACCESSIBILITY - Various embodiments provide accessibility features on a computing device. For example, for a setup installer that installs a client application for a content management system (CMS) but is not accessibility-feature enabled, a computing device can output accessibility information to prompt the user to perform an action. If the user performs this action, such as a keyboard shortcut, the computing device exchanges an identifier with the CMS for a token, which the CMS encodes into a URL. When a web browser is opened to the URL, the computing device becomes linked with the CMS through the web browser, enabling accessibility features to be routed through the web browser so the user can continue setting up an account or linking the computing device to an existing account. | 2016-05-26 |
20160147545 | Real-Time Optimization of Many-Core Systems - An embodiment is a device including a processor having a plurality of cores, each of the plurality of cores including a real-time monitoring circuit, each of the real-time monitoring circuits configured to determine a status of the respective core and generate status signals based on the determined status of the respective core. The device further comprises a controller configured to: receive the status signals from the real-time monitoring circuits of the plurality of cores; and configure an operation of each of the plurality of cores based on their respective status signals. | 2016-05-26 |
20160147546 | Managing the Customizing of Appliances - Disclosed is a method of customizing an appliance. The method includes steps of pre-storing a public key in the appliance; connecting the appliance to an external storage device; and booting up the appliance to automatically proceed with the following customization process: obtaining a customization file from the external storage device; authenticating the customization file with the public key; and executing customization with the customization file if the authentication succeeds. | 2016-05-26 |
20160147547 | METADATA-BASED CLASS LOADING USING A CONTENT REPOSITORY - An example method of loading classes from a content repository includes storing a set of files in a content repository. The set of files includes a representation of a set of classes. The method also includes extracting first metadata that describes the set of classes and storing the first metadata in a content repository. The method further includes receiving a request including second metadata corresponding to one or more classes of the set of classes. The request is from a repository-class loader executable in a node. The method also includes selecting, based on the second metadata, a class of the set of classes. The method further includes sending the selected class to the repository-class loader for loading into the node. | 2016-05-26 |
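The selection step — matching a repository-class loader's metadata request against the metadata extracted for stored classes — can be sketched as a filtered lookup; the dict-based repository shape is an assumption for illustration:

```python
def select_class(repository, request_metadata):
    """Pick the stored class whose extracted metadata satisfies every
    key/value pair the repository-class loader requested, returning
    its name and payload for sending back to the loader."""
    for name, (metadata, payload) in repository.items():
        if all(metadata.get(k) == v for k, v in request_metadata.items()):
            return name, payload
    raise LookupError("no stored class matches the requested metadata")
```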
20160147548 | VIRTUAL MACHINE ARRANGEMENT DESIGN APPARATUS AND METHOD, SYSTEM, AND PROGRAM - An apparatus includes an input unit that receives a requested resource, and a VM arrangement destination computation unit that predicts the traffic volume flowing through a network with the physical machines connected thereto in a case where the virtual machine is arranged on a physical machine that conforms to a condition specified by the requested resource, and, based on the predicted traffic volume, selects the physical machine that balances the link utilization of the network as the arrangement destination of the virtual machine. | 2016-05-26 |
20160147549 | OPTIMIZING VIRTUAL MACHINE ALLOCATION TO CLUSTER HOSTS - Systems and methods for optimizing a virtual machine cluster. An example method may comprise receiving, by a processing device, an information characterizing a virtual machine cluster, the information comprising at least one of: values of one or more cluster configuration parameters, values of one or more cluster state parameters, or values of one or more user request parameters; and producing, in view of the received information, an ordered list of cluster configuration operations to be performed on virtual machines of the virtual machine cluster, the cluster configuration operations designed to yield a resulting configuration of the virtual machine cluster, wherein the resulting configuration is characterized by a quasi-optimal configuration score among configuration scores of two or more candidate configurations, the configuration score determined by applying one or more virtual machine scheduling policy rules to parameters of a candidate configuration. | 2016-05-26 |
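The scoring idea — evaluate candidate configurations against scheduling-policy rules and keep the quasi-optimal one — can be sketched generically; the rule-as-penalty-function convention below is an assumption, not the patented scoring scheme:

```python
def best_configuration(candidates, policy_rules):
    """Score each candidate cluster configuration by summing the penalty
    each scheduling-policy rule assigns to it, and return the candidate
    with the lowest (quasi-optimal) score together with that score."""
    scored = [(sum(rule(c) for rule in policy_rules), c) for c in candidates]
    return min(scored, key=lambda pair: pair[0])
```

A rule is any callable mapping a candidate to a numeric penalty, so cluster configuration, state, and user-request parameters can all be folded into the rule set.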
20160147550 | Monitoring and Reporting Resource Allocation and Usage in a Virtualized Environment - Various aspects of the disclosure relate to monitoring of resource usage in a virtualized environment, including usage of a physical processor that executes a virtual machine or an application of the virtualized environment. By monitoring physical computing resources (e.g., by number and type) that are used to execute a virtual machine or an application of the virtualized environment, a user may, for example, be informed as to when physical computing resources are used in excess or less than the limits set by the license. In some embodiments, additional actions may be taken to update the license to better satisfy the user's resource requirements or reduce the amount paid annually for ongoing technical services. To inform a user, or form the basis for the additional actions, a report may be generated that includes data describing how a virtual machine or application executed on the physical computing resources. | 2016-05-26 |
20160147551 | PARAVIRTUALIZED ACCESS FOR DEVICE ASSIGNMENT BY BAR EXTENSION - A hypervisor associates a combined register space with a virtual device to be presented to a guest operating system of a virtual machine, the combined register space comprising a default register space and an additional register space. Responsive to detecting an access of the additional register space by the guest operating system of the virtual machine, the hypervisor performs an operation on behalf of the virtual machine, the operation pertaining to the access of the additional register space. | 2016-05-26 |
20160147552 | TRAFFIC-AWARE DATA CENTER VM PLACEMENT CONSIDERING JOB DYNAMIC AND SERVER HETEROGENEITY - A method is implemented by a computing device to provide traffic-aware virtual machine (VM) placement onto physical servers of a data center where the placement takes incremental VM job arrival and physical server heterogeneity into consideration. The method forms a graph including a new VM node, an existing VM node, and an edge between the nodes, where the edge is assigned a weight that represents a traffic demand. The method marks the existing VM node as belonging to one of the physical servers, adds dummy VM nodes to the graph, adds pseudo VM nodes to the graph, connects nodes belonging to a same physical server using an infinite weight pseudo edge, runs a balanced minimum k-cut problem algorithm on the graph to thereby divide the graph into sub-graphs, and maps the new VM to one of the physical servers based on the division of sub-graphs. | 2016-05-26 |
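Balanced minimum k-cut is NP-hard in general, so the greedy stand-in below only illustrates the objective — co-locate heavy-traffic VM pairs within per-server capacity — not the graph algorithm claimed in the application; all names and the traffic-dict shape are assumptions:

```python
def place_vms(new_vms, servers, capacity, traffic):
    """Greedily map each new VM to the physical server whose resident
    VMs it exchanges the most traffic with, respecting per-server
    slot capacity (a stand-in for the balanced min k-cut step)."""
    for vm in new_vms:
        candidates = [s for s, members in servers.items()
                      if len(members) < capacity[s]]
        best = max(candidates,
                   key=lambda s: sum(traffic.get(frozenset((vm, m)), 0)
                                     for m in servers[s]))
        servers[best].add(vm)
    return servers
```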
20160147553 | MINIMIZING GUEST OPERATING SYSTEM LICENSING COSTS IN A PROCESSOR BASED LICENSING MODEL IN A VIRTUAL DATACENTER - Techniques for optimizing guest operating system (OS) utilization cost in a processor-based licensing model in a virtual datacenter are described. In one example embodiment, a virtual machine (VM) that has or is scheduled to have an instance of an operating system (OS) that requires a license is identified. Availability of a physical processor of a first host computing system that is licensed to execute the OS is determined based on the computing resource requirements of the VM, the processor-based license, and/or assigned affinity to physical processors in the first host computing system. The VM is then migrated/placed to/on the physical processor of the first host computing system or migrated/placed to/on a physical processor of a second host computing system based on the outcome of the determination. | 2016-05-26 |
20160147554 | HOT-SWAPPING STORAGE POOL BACKEND FUNCTIONAL MODULES - Systems and methods for hot-swapping storage pool backend functional modules of a host computer system. An example method may comprise: identifying, by a processing device of a host computer system executing a virtual machine managed by a virtual machine manager, a storage pool backend functional module; and activating the identified storage pool backend functional module by directing, to the identified storage pool backend functional module, backend storage function calls. | 2016-05-26 |
20160147555 | Hardware Accelerated Virtual Context Switching - In a virtual computing environment, a system configured to switch between isolated virtual contexts. A system includes a physical processor. The physical processor includes an instruction set architecture. The instruction set architecture includes an instruction included in the instruction set architecture for the physical processor that when invoked indicates that a virtual processor implemented using the physical processor should switch directly from a first virtual machine context to a second virtual machine context. The first and second virtual machine contexts are isolated from each other. | 2016-05-26 |
20160147556 | MULTI-HYPERVISOR VIRTUAL MACHINES - Standard nested virtualization allows a hypervisor to run other hypervisors as guests, i.e. a level-0 (L0) hypervisor can run multiple level-1 (L1) hypervisors, each of which can run multiple level-2 (L2) virtual machines (VMs), with each L2 VM restricted to running on only one L1 hypervisor. Span provides a multi-hypervisor VM in which a single VM can simultaneously run on multiple hypervisors, permitting a VM to benefit from different services provided by multiple hypervisors that co-exist on a single physical machine. Span allows (a) the memory footprint of the VM to be shared across two hypervisors, and (b) the responsibility for CPU and I/O scheduling to be distributed between the two hypervisors. Span VMs can achieve performance comparable to traditional (single-hypervisor) nested VMs for common benchmarks. | 2016-05-26 |
20160147557 | File Transfer Using Standard Blocks and Standard-Block Identifiers - Instead of transferring a large original file, such as a virtual-machine image file, from a source system to a target system, the original file is encoded to define a recipe file that is transferred. The recipe is then decoded to yield a duplicate of the original file on the target system. Encoding involves identifying standard blocks in the original file and including standard-block identifiers for the standard blocks in the recipe in lieu of the original blocks. Decoding involves an exchange with a standard-block identifier server system, which provides standard blocks in response to received standard-block identifiers. | 2016-05-26 |
20160147558 | Virtual machine disk image installation - A processor copies first and second installable binary files into first and second disk images of first and second virtual machines, respectively, before instantiating the images. The processor can copy first installation parameters and second installation parameters into the first image. The processor copies additional first installation parameters and additional second installation parameters into the second image. The processor at least partially executes a first installation process, based on the first installation parameters, to install the first installable binary files, and a second installation process, based on the additional second installation parameters, to install the second installable binary files. The processor at least partially executes the installation processes in an interleaved manner in relation to one another, based on dependencies. After instantiating the images, the processor can execute scripts based on the second installation parameters and the additional second installation parameters to complete installation. | 2016-05-26 |
20160147559 | MODIFICATION OF CONTEXT SAVING FUNCTIONS - A method for modifying a context saving function is disclosed. The method identifies a context saving function within a code fragment. The method further modifies the context saving function to determine a size of a register save buffer, allocate the register save buffer using the determined size, and save a register value in the register save buffer. | 2016-05-26 |
20160147560 | Light-Weight Lifecycle Management of Enqueue Locks - In an example embodiment, a request for an enqueue lock for a first piece of data is received from a client application. At an enqueue server separate from an application server instance, a light-weight enqueue session is then created, including generating a light-weight enqueue session identification for the light-weight enqueue session. An enqueue lock for the first piece of data is stored in the light-weight enqueue session. The light-weight enqueue session identification is then sent to the client application. In response to a detection that a session between the client application and the application server instance has been terminated, all enqueue locks in the light-weight enqueue session are deleted and the light-weight enqueue session is deleted. | 2016-05-26 |
20160147561 | INFORMATION PROCESSING METHOD AND INFORMATION PROCESSING SYSTEM - A job management unit manages a registered job by associating the job with an identifier specific to the job, and reports a request to execute the job, together with the associated identifier, to a job execution unit on the execution date and time of the job. The job execution unit manages a generation file of a next generation, which is created by executing the job, by associating the generation file with the identifier reported together with the execution request. If the generation file associated with the reported identifier already exists when the job is executed based on the execution request from the job management unit, the job execution unit executes a designated operation. | 2016-05-26 |
20160147562 | BATCH SCHEDULING - There is provided a method to schedule execution of a plurality of batch jobs by a computer system. The method includes: reading one or more constraints that constrain the execution of the plurality of batch jobs by the computer system and a current load on the computer system; grouping the plurality of batch jobs into at least one run frequency that includes at least one batch job; setting the at least one run frequency to a first run frequency; computing a load generated by each batch job in the first run frequency on the computer system based on each batch job's start time; and determining an optimized start time for each batch job in the first run frequency that meets the one or more constraints and that distributes each batch job's load on the computer system using each batch job's computed load and the current load. | 2016-05-26 |
20160147563 | Method and Apparatus for Brought-In Device Communication Request Handling - A system includes a processor configured to receive an incoming message request identifying a requesting application and requested user interface. The processor is also configured to determine an incoming message priority value. The processor is further configured to determine a message type. Also, the processor is configured to determine a driver attention demand value and provide access to the requested user interface when the priority value, message type, and driver attention demand value match parameters defined for the requested user interface. | 2016-05-26 |
20160147564 | APPARATUS AND METHOD FOR ALLOCATING RESOURCES USING PRIORITIZATION OF REQUESTS AND UPDATING OF REQUESTS - A system and method for allocating resources receive one or more resource requests describing tasks, each resource request having a request priority, a requested configuration type, and a requestor identifier. In a winner-take-all circuit, all of the existing resource priorities within each configuration of the requested configuration type are compared to determine the highest-priority task occupying each assignment. In a loser-take-all circuit, the current highest resource priorities of each configuration within the requested configuration type, as output from the winner-take-all circuit, are compared, and the configuration having the lowest current priority is identified as the lowest-priority current resource configuration. The requested configuration type is allocated to the resource request if the request priority is higher than the lowest current priority output from the loser-take-all circuit. Otherwise, if the lowest current priority within the requested configuration is higher than or equal to the request priority, the requested configuration type remains allocated to the tasks currently occupying it. | 2016-05-26 |
20160147565 | INTERACTIONS WITH CONTEXTUAL AND TASK-BASED COMPUTING ENVIRONMENTS - Concepts and technologies are described herein for interacting with contextual and task-focused computing environments. Tasks associated with applications are described by task data. Tasks and/or batches of tasks relevant to activities occurring at a client are identified, and a UI for presenting the tasks is generated. The UIs can include tasks and workflows corresponding to batches of tasks. Workflows can be executed, interrupted, and resumed on demand. Interrupted workflows are stored with data indicating progress, contextual information, UI information, and other information. The workflow is stored and/or shared. When execution of the workflow is resumed, the same or a different UI can be provided, based upon the device used to resume execution of the workflow. Thus, multiple devices and users can access workflows in parallel to provide collaborative task execution. | 2016-05-26 |
20160147566 | Cross-Platform Scheduling with Long-Term Fairness and Platform-Specific Optimization - Methods, systems, and computer program products for cross-platform scheduling with fairness and platform-specific optimization are provided herein. A method includes determining dimensions of a set of containers in which multiple tasks associated with a request are to be executed; assigning each of the containers to a processing node on one of multiple platforms based on the dimensions of the given container, and to a platform owner selected from the multiple platforms based on a comparison of resource requirements of each of the multiple platforms and the dimensions of the given container; and generating container assignments across the set of containers by incorporating the assigned node of each container in the set of containers, the assigned platform owner of each container in the set of containers, one or more scheduling requirements of each of the platforms, one or more utilization objectives, and enforcing a sharing guarantee of each of the platforms. | 2016-05-26 |
20160147567 | Incentive-Based App Execution - Systems and methods of a personal daemon, executing as a background process on a mobile computing device, for providing personal assistance to an associated user are presented. Also executing on the mobile computing device is a scheduling manager. The personal daemon executes one or more personal assistance actions on behalf of the associated user. The scheduling manager responds to events in support of the personal daemon. More particularly, in response to receiving an event the scheduling manager determines a set of apps that are responsive to the received event and from that set of apps, identifies at least a first subset of apps for execution on the mobile computing device. The scheduling manager receives feedback information regarding the usefulness of the executed apps of the first subset of apps and updates the associated score of each of the apps of the first subset of apps. | 2016-05-26 |
20160147568 | METHOD AND APPARATUS FOR DATA TRANSFER TO THE CYCLIC TASKS IN A DISTRIBUTED REAL-TIME SYSTEM AT THE CORRECT TIME - The invention relates to a method for the time-correct data transfer between cyclic tasks in a distributed real-time system, which real-time system comprises a real-time communication system and a multiplicity of computer nodes, wherein a local real-time clock in each computer node is synchronised with the global time, wherein all periodic trigger signals z | 2016-05-26 |
20160147569 | DISTRIBUTED TECHNIQUE FOR ALLOCATING LONG-LIVED JOBS AMONG WORKER PROCESSES - A distributed computing system that executes a set of long-lived jobs is described. During operation, each worker process performs the following operations. First, the worker process identifies a set of jobs to be executed and a set of worker processes that can execute the set of jobs. Next, the worker process sorts the set of worker processes based on unique identifiers for the worker processes. Then, the worker process assigns jobs to each worker process in the set of worker processes, wherein approximately the same number of jobs is assigned to each worker process, and jobs are assigned to the worker processes in sorted order. While assigning jobs, the worker process uses an identifier for each worker process to seed a pseudorandom number generator, and then uses the pseudorandom number generator to select jobs for each worker process to execute. | 2016-05-26 |
20160147570 | COMPONENT SERVICES INTEGRATION WITH DYNAMIC CONSTRAINT PROVISIONING - Resource provisioning information links to resource provisioning information of at least one reusable component resource that satisfies at least a portion of user-specified resource development constraints of a new resource under development are identified within a resource provisioning-link registry. Using the identified resource provisioning information links, the resource provisioning information of the at least one reusable component resource is programmatically collected from at least one data provider repository that stores reusable resources and that publishes the resource provisioning information links to the resource provisioning-link registry. The programmatically-collected resource provisioning information of the at least one reusable component resource is analyzed. Based upon the analyzed programmatically-collected resource provisioning information of the at least one reusable component resource, a resource integration recommendation is provided that uses the at least one reusable component resource and that satisfies at least the portion of the user-specified resource development constraints of the new resource under development. | 2016-05-26 |
20160147571 | METHOD FOR OPTIMIZING THE PARALLEL PROCESSING OF DATA ON A HARDWARE PLATFORM - The invention relates to a method for optimizing the parallel processing of data on a hardware platform, the hardware platform comprising at least one computing unit comprising a plurality of processing units able to execute a plurality of executable tasks in parallel, the data to be processed forming a data set that can be broken down into data subsets, a same sequence of operations being performed on each data subset. | 2016-05-26 |
20160147572 | MODIFYING MEMORY SPACE ALLOCATION FOR INACTIVE TASKS - Provided are a computer program product, system, and method for modifying memory space allocation for inactive tasks. Information is maintained on computational resources consumed by tasks running in the computer system allocated memory space in the memory. The information on the computational resources consumed by the tasks is used to determine inactive tasks of the tasks. The allocation of the memory space allocated to at least one of the determined inactive tasks is modified. | 2016-05-26 |
20160147573 | COMPUTING SYSTEM WITH HETEROGENEOUS STORAGE AND PROCESS MECHANISM AND METHOD OF OPERATION THEREOF - A computing system includes: a monitor block configured to calculate a total access time based on a device access time, a traffic latency, a traffic information, or a combination thereof; a name node block, coupled to the monitor block, configured to determine a data location of a data content; and a scheduler block, coupled to the name node block, configured to distribute a task assignment based on the total access time, the data location, device performance criteria, or a combination thereof for accessing the data content from a target device. | 2016-05-26 |
20160147574 | FACILITATING PROVISIONING IN A MIXED ENVIRONMENT OF LOCALES - Aspects capable of dynamically and flexibly supporting a plurality of locales upon provisioning are provided. An associated management server includes a storage table configured to store a plurality of logical device operations, a plurality of locales, and a plurality of workflows, wherein each resource server among all resource servers connected to the management server is associated with a different one of the plurality of locales. The management server further includes a provisioning circuit configured to dynamically determine, for a required logical device operation, a resource server among all of the resource servers connected to the management server by way of provisioning. The management server further includes a calling circuit configured to search the storage table using a locale among the plurality of locales that is associated with the dynamically determined resource server to select a workflow from the plurality of workflows for the required logical device operation. | 2016-05-26 |
20160147575 | PRIORITIZING AND DISTRIBUTING WORKLOADS BETWEEN STORAGE RESOURCE CLASSES - A method includes storing a plurality of workloads in a first disk resource associated with a high end disk classification. The method further includes determining a corresponding activity level for each of the plurality of workloads. The method also includes classifying each of the plurality of workloads into a first set indicative of high-priority workloads and a second set indicative of low-priority workloads based on whether the corresponding activity level is greater than a threshold activity level. The method further includes determining whether a second disk resource associated with a low end disk classification can accommodate storage of a first particular workload in the second set based on an available storage capacity of the second disk resource. The method additionally includes migrating the first particular workload from the first disk resource to the second disk resource. | 2016-05-26 |
20160147576 | WAKE-UP ORDERING OF PROCESSING STREAMS USING SEQUENTIAL IDENTIFIERS - Systems and methods for waking up waiting processing streams in a manner that reduces the number of spurious wakeups. An example method may comprise: assigning a first identifier of a sequence of identifiers to a processing stream in a waiting state; receiving a wakeup signal associated with a second identifier of the sequence of identifiers; comparing, by a processing device, the first identifier with the second identifier; and waking the processing stream responsive to determining, in view of comparing, that the processing stream began waiting prior to an initiation of the wakeup signal. | 2016-05-26 |
20160147577 | SYSTEM AND METHOD FOR ADAPTIVE THREAD CONTROL IN A PORTABLE COMPUTING DEVICE (PCD) - Systems and methods for adaptive thread control in a portable computing device (PCD) are provided. During operation a plurality of parallelized tasks for an application on the PCD are created. The application is executed with at least one processor of the PCD processing at least one main thread of the application. A determination is made whether a portion of the application being executed includes one or more of the parallelized tasks. A determination is made whether to perform the parallelized tasks in parallel. Based on that determination, the parallelized tasks are executed by the at least one main thread of the application if they are not to be performed in parallel; otherwise, at least one worker thread is activated to execute the parallelized tasks in parallel with the main thread. | 2016-05-26 |
20160147578 | API VALIDATION SYSTEM - A system that validates an application programming interface (API) call is provided. A key and a value associated with the key are read from a test script. The key and the value are separated by a colon, the key is included in first double quotes, and the value is included in second double quotes. Whether the key matches a plurality of keys defined for an API call is determined. Based on the key matching the plurality of keys defined for the API call, the API call is configured using the key and the value without any of the colon, the first double quotes, or the second double quotes. The configured API call is executed. | 2016-05-26 |
20160147579 | Event Generation Management For An Industrial Controller - An improved system for handling events in an industrial control system is disclosed. A module in an industrial controller is configured to generate an event responsive to a predefined signal or combination of signals occurring. The event is transferred to an event queue for subsequent execution. The event queue may also be configured to store a copy of the state of the module at the time the event is generated. The event queue may hold multiple events and each event is configured to trigger at least one event task. Subsequent events that occur during execution of the event task are stored in the event queue for later execution. An event, or combination of events, may trigger execution of an event task within the module, within the controller to which the module is connected, or within multiple controllers. | 2016-05-26 |
20160147580 | INFORMATION PROCESSING APPARATUS, METHOD OF CONTROLLING THE SAME AND NON-TRANSITORY COMPUTER READABLE MEDIUM - An information processing apparatus capable of executing processing in a background comprises a control unit configured to control execution of other processing in the background when processing is executed in accordance with a request from an application. The control unit suppresses execution of the other processing in the background in accordance with the request from the application, and, in a case where release of the suppression is not requested by the application, releases the suppression when a predetermined interval elapses from when the suppression started. | 2016-05-26 |
20160147581 | ENHANCED NOTIFICATIONS - A facility for providing enhanced time-sensitive notifications on an electronic device is described. In some such notifications, the facility replaces an icon or name of an application presenting the notification with another image or other text, respectively. In some such notifications, the facility renders certain aspects of the notification on an optional basis, such as based on the capabilities of the electronic device. | 2016-05-26 |
20160147582 | READ LEVEL GROUPING FOR INCREASED FLASH PERFORMANCE - A table of error counts is generated based on reading wordlines of a flash memory device, the table storing an error count for each combination of wordline and respective read level voltage used to read the wordlines. A plurality of offset wordline groups are generated based on the table of error counts, with each group associating a different read level offset voltage with a plurality of wordline addresses. A storage device is configured to read memory cells using a read level offset voltage of a generated offset wordline group associated with a wordline address of the memory cells to be read. After a predetermined point in a life cycle of a respective memory block, the table is regenerated and the plurality of offset wordline groups are regenerated based on the regenerated table of error counts. | 2016-05-26 |
20160147583 | System and Method for Transforming Observed Metrics into Detected and Scored Anomalies - A system includes a normal behavior characterization module configured to receive values for a first metric of a plurality of metrics and generate a baseline profile indicating normal behavior of the first metric based on the received values. The system also includes an anomaly identification module configured to identify an anomaly in response to present values of the metric deviating outside the baseline profile. The system also includes an anomaly behavior characterization module configured to analyze a plurality of prior anomalies identified by the anomaly identification module and develop a model of the anomalies of the first metric. The system also includes an anomaly scoring module configured to determine a first score for a present anomaly detected by the anomaly identification module for the first metric. The first score is based on characteristics of the present anomaly and the model of the anomalies of the first metric. | 2016-05-26 |
20160147584 | ERROR DETECTION METHOD OF FAILSAFE SOFTWARE - Disclosed is an error detection method for fail-safe software, the method including: outputting a pulse signal according to an operation state of fail-safe software that monitors a fail-safe function of a motor control device; determining the presence or absence of an error in the fail-safe software using the frequency of the pulse signal; and controlling an output of the motor control device based on the presence or absence of the error. Accordingly, even when an error in the fail-safe software prevents determining whether the motor control device is in an abnormal state, excessive motor torque can be prevented. | 2016-05-26 |
20160147585 | PERFORMANCE ANOMALY DIAGNOSIS - The described implementations relate to tunable predicate discovery. One implementation is manifest as a method for obtaining a data set and determining anomaly scores for anomalies of an attribute of interest in the data set. The method can also generate a ranked list of predicates based on the anomaly scores and cause at least one of the predicates of the ranked list to be presented. | 2016-05-26 |
20160147586 | DEVICE AND METHOD FOR EXECUTING A PROGRAM, AND METHOD FOR STORING A PROGRAM - A device and a method for executing a program, and a method for storing a program are described. The method of executing a program includes a sequence of instruction cycles, wherein each instruction cycle comprises: updating the program counter value; reading a data word from a memory location identified by the updated program counter value, wherein the data word comprises an instruction and a protection signature; determining a verification signature by applying a signature function associated with the program counter value to the instruction; executing the instruction if the verification signature and the protection signature are consistent with each other; and initiating an error action if they are inconsistent with each other. A method for storing a program on a data carrier is also described. | 2016-05-26 |
20160147587 | METHOD OF ANALYZING A FAULT OF AN ELECTRONIC SYSTEM - In a method of analyzing a fault and/or error of an electronic system according to some example embodiments, a system call that accesses hardware is replaced with a hooking system call including a code that executes the system call and a code that obtains monitoring information. The monitoring information, including system call execution information and hardware performance information, is obtained by executing the hooking system call when the hooking system call is called instead of the system call, and the monitoring information is recorded so that the fault and/or error of the electronic system can be analyzed based on it. | 2016-05-26 |
20160147588 | CONTROL MECHANISM BASED ON TIMING INFORMATION - There is provided an apparatus comprising thresholding means adapted to check whether an average frequency of occurrence of timing violations is outside a range; and controlling means adapted to control at least one of a clock frequency, processing, heat generation, a bias voltage, a current, and a temperature in a direction that brings the average frequency of occurrence of timing violations back into the range if it is outside the range. | 2016-05-26 |
20160147589 | Identifying Anomalous Conditions in Machine Data - Embodiments are directed towards the visualization of machine data received from computing clusters. Embodiments may enable improved analysis of computing cluster performance, error detection, troubleshooting, error prediction, or the like. Individual cluster nodes may generate machine data that includes information and data regarding the operation and status of the cluster node. The machine data is received from each cluster node for indexing by one or more indexing applications. The indexed machine data including the complete data set may be stored in one or more index stores. A visualization application enables a user to select one or more analysis lenses that may be used to generate visualizations of the machine data. The visualization application employs the analysis lens to produce visualizations of the computing cluster machine data. | 2016-05-26 |
20160147590 | DETERMINE MALFUNCTION STATE OF POWER SUPPLY MODULE - A method and system including a power supply module. The method and system determine whether the power supply module is in a malfunction state. | 2016-05-26 |
20160147591 | METHOD AND SYSTEM FOR EFFICIENT TRANSITION OF INFORMATION TECHNOLOGY OPERATIONS - This disclosure relates generally to computing devices, and more particularly to transition of IT operations. In one embodiment, a method and system is provided for generating an efficient transition plan for IT operations while addressing aspects such as coverage, risk, time, and cost. The IT operations are modeled through graphs and use well-defined problems in graph theory to build solutions. Heavy hitter issues are identified to maximize coverage. To minimize risk, severity of an issue is determined, wherein the severity is based on the instability caused or penalties associated with the issue. Further, transition time is minimized by finding issue-communities for parallel transition by finding maximum cliques. Yet further, the bin-packing algorithm is used to optimize the teams of resolvers and thus minimize cost. Finally, a transition plan is generated by systematically identifying issue communities for transition using the minimum hitting set and minimum vertex cover problem. | 2016-05-26 |
20160147592 | HEADER PARITY ERROR HANDLING - A parity error is detected in a header, where the header is in a particular one of a plurality of queues, the header is to include a plurality of fields, and each of the queues is to correspond to a respective transaction type. Fabricated header data is generated for one or more of the plurality of fields to indicate the parity error and replace data of one or more of the plurality of fields. An error containment mode is entered based on the parity error. | 2016-05-26 |
20160147593 | DETECTING STORAGE ERRORS IN A DISPERSED STORAGE NETWORK - A method includes dividing a data object into data partitions. The method further includes, for each data partition: dividing the data partition into data segments; dispersed storage error encoding the data segments to produce sets of encoded data slices; storing the sets of encoded data slices in a first set of storage units; and generating a segment allocation table regarding storage information of the sets of encoded data slices. The method further includes generating a directory of segment allocation tables. The method further includes receiving an access request regarding at least a portion of the data object. The method further includes accessing the directory to identify one or more segment allocation tables containing storage information for the at least a portion of the data object. The method further includes accessing encoded data slices of the at least the portion of the data object based on the storage information. | 2016-05-26 |
20160147594 | METHOD AND APPARATUS FOR PREVENTING AND MANAGING CORRUPTION OF FLASH MEMORY CONTENTS - The present invention relates to methods and apparatuses for eliminating or mitigating the effects of the corruption of contents in a flash memory, such as that which can occur during a power interruption. Embodiments of the invention include methods for preventing the corruption of code stored in flash memory. Such methods can include partitioning code in separate physical blocks as data in a flash memory. Embodiments of the invention also include methods for mitigating the effects of corruption of data stored in flash memory. Such methods can include a book-keeping mechanism that allows for the detection of corruption events, along with the affected locations in flash memory. | 2016-05-26 |
20160147595 | MANAGING INTEGRITY OF FRAMED PAYLOADS USING REDUNDANT SIGNALS - A frame error correction circuit may identify and correct errors in data frames provided to a receiver as part of a diversity communications scheme. The frame error correction circuit may further align the data frames so that the data frames can be compared. The frame error correction circuit may perform a bit-wise comparison of the data frames and identify inconsistent bit positions where bits in the data frames differ from one another. Once inconsistent bit positions have been identified, the frame error correction circuit may access a permutation table of permutations of bits at the inconsistent bit positions. In some implementations, the frame error correction circuit uses the permutation table to reassemble permutations of the data frames. In various implementations, the frame error correction circuit performs a CRC of each permutation of the data frames, and provides a valid permutation to a network. | 2016-05-26 |
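The compare-and-permute loop described above can be sketched in a few lines, assuming just two diversity copies of a frame, a trailing CRC-32 as the validity check, and brute-force enumeration of the bits at the disagreeing positions (the function name and frame layout are illustrative, not taken from the patent):

```python
import itertools
import zlib

def recover_frame(frame_a, frame_b):
    """Bit-wise compare two copies of a frame, enumerate every permutation
    of bits at the positions where they disagree, and return the first
    permutation whose trailing CRC-32 validates (None if none does)."""
    bits = lambda f: [(byte >> i) & 1 for byte in f for i in range(7, -1, -1)]
    a, b = bits(frame_a), bits(frame_b)
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    for choice in itertools.product((0, 1), repeat=len(diff)):
        cand = a[:]
        for pos, bit in zip(diff, choice):
            cand[pos] = bit
        data = bytes(sum(bit << (7 - j) for j, bit in enumerate(cand[k:k + 8]))
                     for k in range(0, len(cand), 8))
        payload, crc = data[:-4], int.from_bytes(data[-4:], "big")
        if zlib.crc32(payload) == crc:   # valid permutation found
            return data
    return None
```

In a real diversity receiver the frames would first be aligned and the check would be the link's own CRC; the exponential enumeration is only practical because the aligned copies disagree in few positions.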
20160147596 | ERROR DETECTION CONSTANTS OF SYMBOL TRANSITION CLOCKING TRANSCODING - Apparatus, systems and methods for error detection in transmissions on a multi-wire interface are disclosed. A method for transmitting data on the multi-wire interface includes obtaining a plurality of bits to be transmitted over a plurality of connectors, converting the plurality of bits into a sequence of symbols, and transmitting the sequence of symbols on the plurality of connectors. A predetermined number of least significant bits in the plurality of bits may be used for error detection. The predetermined number of least significant bits may have a constant value that is different from each of a plurality of error values. A symbol error affecting one or two symbols in the sequence of symbols may cause a decoded version of the predetermined number of least significant bits to have a value that is one of the plurality of error values. | 2016-05-26 |
20160147597 | DYNAMIC PARTIAL BLOCKING OF A CACHE ECC BYPASS - An aspect includes receiving a fetch request for a data block at a cache memory system that includes cache memory that is partitioned into a plurality of cache data ways including a cache data way that contains the data block. The data block is fetched and it is determined whether the in-line ECC checking and correcting should be bypassed. The determining is based on a bypass indicator corresponding to the cache data way. Based on determining that in-line ECC checking and correcting should be bypassed, returning the fetched data block to the requestor and performing an ECC process for the fetched data block subsequent to returning the fetched data block to the requestor. Based on determining that in-line ECC checking and correcting should not be bypassed, performing the ECC process for the fetched data block and returning the fetched data block to the requestor subsequent to performing the ECC process. | 2016-05-26 |
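A toy sketch of the bypass decision above, using a single even-parity bit as a stand-in for real ECC (the data layout and all names are invented for illustration; the patent's ECC and bypass-indicator mechanics are hardware-level):

```python
def check_parity(data, parity):
    """Stand-in for in-line ECC: a single even-parity bit over the data."""
    if parity != sum(data) % 2:
        raise ValueError("parity mismatch")

def fetch(cache, way, bypass_indicator, deferred_ecc):
    """Return the block in `way`; if the way's bypass bit is set, return it
    immediately and queue the ECC check to run after the return, otherwise
    check before the block leaves the cache."""
    data, parity = cache[way]
    if bypass_indicator[way]:
        deferred_ecc.append(way)   # ECC runs off the critical path
        return data
    check_parity(data, parity)     # in-line ECC on the critical path
    return data
```

The payoff of the bypass is latency: the requestor gets its data one step sooner, and the (usually clean) ECC check completes in the background.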
20160147598 | OPERATING A MEMORY UNIT - A method for operating a memory unit is disclosed. The method includes encoding data from a cache line divided into a plurality of groups and generating a plurality of codewords. The method further includes storing the local error detection (LED) data for the cache line combined with the data of the cache line retrieved from a first portion of the codewords across a plurality of chips in the memory unit to create a first tier of protection. The method also includes storing the global error correction (GEC) data for the cache line retrieved from a second portion of the codewords across the plurality of chips to create a second tier of protection for the cache line. The method also includes receiving information corresponding to the first tier of protection, determining whether an error exists in the data of the cache line, decoding the data of the cache line, and outputting the data of the cache line at the controller. | 2016-05-26 |
20160147599 | Memory Systems that Perform Rewrites of Resistive Memory Elements and Rewrite Methods for Memory Systems Including Resistive Memory Elements - A method of operating a nonvolatile memory device, such as a resistive memory device, is provided. The method includes performing error correction code (ECC) processing on data read from resistive memory cells to detect whether any of the resistive memory cells is a soft error cell; checking completion of a read operation after storing an address of the soft error cell when the soft error cell is detected; and selectively rewriting error-corrected data into a soft error cell corresponding to the stored address in response to determining that the read operation is completed. | 2016-05-26 |
20160147600 | MEMORY ACCESS METHOD AND APPARATUS FOR MESSAGE-TYPE MEMORY MODULE - A memory access apparatus includes a read-write module and a processing module. The read-write module is configured to store an error detecting code in an (M+2) | 2016-05-26 |
20160147601 | METHOD FOR SCHEDULING HIGH SPEED CACHE OF ASYMMETRIC DISK ARRAY - A method for asymmetrically scheduling the buffer cache of a disk array, the method comprising: (1) detecting whether an access from an upper layer is hit in a buffer cache, proceeding to (7) if yes, and proceeding to (2) if no; (2) detecting whether the buffer cache is full, proceeding to (3) if yes, and proceeding to (5) if no; (3) detecting whether the number of pages of a sacrificial disk is greater than a threshold, proceeding to (4) if yes, and proceeding to (6) if no; (4) selecting and replacing a cold page of the sacrificial disk; (5) buffering data requested by a user in a blank page in the buffer cache; (6) selecting and replacing all cold pages of the buffer cache; and (7) reading or writing the data, and changing positions or status thereof in pages of the buffer cache. | 2016-05-26 |
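The seven-step flow above maps naturally onto an LRU-style sketch. Here the cache is an `OrderedDict` keyed by `(disk, page)`, a "cold page" is the least recently used one, and step (6) is simplified to evicting a single coldest page; all of this is an illustrative reading of the abstract, not the patented algorithm:

```python
from collections import OrderedDict

def access(cache, capacity, disk, page, sacrificial_disk, threshold):
    """One request through steps (1)-(7); `cache` maps (disk, page) -> data
    in LRU order, oldest entry first."""
    key = (disk, page)
    if key in cache:                              # (1) hit
        cache.move_to_end(key)                    # (7) refresh LRU status
        return cache[key]
    if len(cache) >= capacity:                    # (2) cache full
        victims = [k for k in cache if k[0] == sacrificial_disk]
        if len(victims) > threshold:              # (3) enough sacrificial pages
            cache.pop(victims[0])                 # (4) evict its coldest page
        else:
            cache.popitem(last=False)             # (6) evict the coldest page
    data = f"data:{disk}:{page}"                  # simulated read from disk
    cache[key] = data                             # (5) buffer in a blank page
    return data
```

The asymmetry is the point: pages belonging to the sacrificial disk are evicted preferentially once they exceed the threshold, shielding the other disks' cached pages.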
20160147602 | I/O HANDLING BETWEEN VIRTUALIZATION AND RAID STORAGE - A method for handling input/output (I/O) in a data storage system comprising a RAID subsystem storing data according to a RAID level utilizing a parity scheme, where RAID stripes have been configured across a plurality of data storage devices. The method may include monitoring write requests to the RAID subsystem, identifying write requests destined for the same RAID stripe, and bundling the identified write requests for substantially simultaneous execution at the corresponding RAID stripe. Monitoring write requests to the RAID subsystem may include delaying at least some of the write requests to the RAID subsystem so as to build-up a queue of write requests. In some embodiments, identifying write requests and bundling the identified write requests may include identifying and bundling a number of write requests as required to perform a full stripe write to the corresponding RAID stripe. | 2016-05-26 |
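The bundling step above can be sketched as a grouping pass over the delayed write queue. The stripe geometry is an assumption for the sketch (the abstract fixes no stripe size), and addresses here are simple block numbers:

```python
from collections import defaultdict

STRIPE_BLOCKS = 4  # data blocks per stripe; an assumption for the sketch

def bundle_writes(queue):
    """Group delayed write requests by the stripe they fall in; a group that
    covers every block of its stripe can be issued as one full-stripe write
    (parity computed fresh, no read-modify-write)."""
    by_stripe = defaultdict(dict)
    for block, data in queue:
        by_stripe[block // STRIPE_BLOCKS][block % STRIPE_BLOCKS] = data
    full, partial = [], []
    for stripe, blocks in sorted(by_stripe.items()):
        (full if len(blocks) == STRIPE_BLOCKS else partial).append((stripe, blocks))
    return full, partial
```

Deliberately delaying writes to build the queue trades a little latency for the chance to convert several small writes into one parity-cheap full-stripe write.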
20160147603 | ALLOCATION OF REPLICA-SETS IN A STORAGE CLUSTER - A data storage system may be configured to allocate replica-sets in a balanced manner and mark some of these balanced replica-sets as being spares. As one or more drives or machines fail, the data storage system may move all copies of an affected replica-set to a marked spare replica-set and mark the affected replica-set as being inactive or invalid. As the failed drives are replaced, the data storage system may reconfigure those inactive replica-sets and use them as new spares. The data storage system may implement a coordinator module that handles the balancing and allocation of spares within a sub-cluster. The coordinator may also reallocate entire replica-sets across sub-clusters to maintain balance at the cluster level. | 2016-05-26 |
20160147604 | SERVER SYSTEM - A server system is disclosed herein, which comprises a first BIOS (Basic Input/Output System) chip, a second BIOS chip, a baseboard management controller (BMC), a platform controller and a multiplexer. In a preset mode, the platform controller is conductively connected with the first BIOS chip through the BMC and the multiplexer so that the server system is activated by the first BIOS chip. Upon detecting a failure of a POST (Power-on self-test) initialization of the first BIOS chip, the BMC transmits a control command to the multiplexer so as to make the platform controller conductively connect with the second BIOS chip through the BMC and the multiplexer so that the server system is activated by the second BIOS chip. | 2016-05-26 |
20160147605 | SYSTEM ERROR RESOLVING METHOD - The present invention provides a system error resolving method including the following steps. In a resolving period of a first system error, it is determined whether a second system error occurs. When the second system error occurs, a system status is identified. A second error type corresponding to the second system error is identified, wherein a first error type of the first system error and the second error type of the second system error are associated with a first priority value and a second priority value, respectively. According to the first priority value of the first error type and the second priority value of the second error type, the first system error and the second system error are sorted into a resolving sequence. | 2016-05-26 |
20160147606 | DETECTING AND SPARING OF OPTICAL PCIE CABLE CHANNEL ATTACHED IO DRAWER - A method, system and computer program product are provided for detecting the state of, and sparing, optical Peripheral Component Interconnect Express (PCI-Express or PCIE) cable channels attached to an IO drawer. System firmware is provided for implementing health check functions and state detection and sparing functions. One or more optical cables are connected between a host bridge and a PCIE enclosure; each optical cable includes one or more spare optical channels. An identified failed optical channel is rerouted to a spare optical channel. | 2016-05-26 |
20160147607 | VIRTUAL MACHINE CHANGE BLOCK TRACKING - According to certain aspects, a system includes a client device that includes a virtual machine (VM) executed by a hypervisor, a driver located within the hypervisor, and a data agent. The VM may include a virtual hard disk file and a change block bitmap file. The driver may intercept a first write operation generated by the VM to store data in a first sector, determine an identity of the first sector based on the intercepted write operation, determine an entry in the change block bitmap file that corresponds with the first sector, and modify the entry in the change block bitmap file to indicate that data in the first sector has changed. The data agent may generate an incremental backup of the VM based on the change block bitmap file in response to an instruction from a storage manager, where the incremental backup includes the data in the first sector. | 2016-05-26 |
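The driver's bitmap bookkeeping described above can be sketched as a bit-per-sector structure plus a backup pass that copies only dirty sectors (class and function names are illustrative; a real implementation lives inside the hypervisor's I/O path):

```python
class ChangeBlockTracker:
    """Sketch of driver-side change block tracking: intercept writes,
    flip a bit per sector, and later back up only the dirty sectors."""
    def __init__(self, n_sectors):
        self.bitmap = bytearray((n_sectors + 7) // 8)

    def on_write(self, sector):
        # The intercepted write identifies the sector; mark it changed.
        self.bitmap[sector // 8] |= 1 << (sector % 8)

    def changed_sectors(self):
        return [s for s in range(len(self.bitmap) * 8)
                if self.bitmap[s // 8] >> (s % 8) & 1]

def incremental_backup(disk, tracker):
    # Copy only the sectors whose bitmap entry is set.
    return {s: disk[s] for s in tracker.changed_sectors()}
```

Because the bitmap is tiny relative to the virtual hard disk, the incremental backup never has to scan the full disk to find what changed.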
20160147608 | USING GEOGRAPHICAL LOCATION INFORMATION TO PROVISION MULTIPLE TARGET STORAGES FOR A SOURCE DEVICE - Provided are a computer program product, system, and method for using geographical location information to provision multiple target storages for a source device. A determination is made of a geographical location of the source device and a distance between the source device and each of the target storages and between each pair of target storages. A determination is further made of qualifying k-tuples of the target storages, wherein each k-tuple comprises a group of k target storages to which the source data is to be backed-up. A qualifying k-tuple has one target storage that satisfies a distance requirement with respect to the source device and a distance between any two target storages in the k-tuple satisfies the distance requirement. A selected qualifying k-tuple is indicated to use to backup the source data at the k target storages in the qualifying k-tuple. | 2016-05-26 |
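The k-tuple qualification test above can be sketched as a filter over all combinations. Two hedges: the abstract does not pin down whether "satisfies a distance requirement" means near or far, so the requirement is a predicate parameter here, and locations are plain (x, y) points with Euclidean distance rather than real geographic coordinates:

```python
from itertools import combinations

def qualifying_k_tuples(source, storages, k, satisfies):
    """Enumerate k-tuples of target storages where at least one member
    satisfies the distance requirement w.r.t. the source and every pair of
    members satisfies it w.r.t. each other. `storages` maps storage id to
    an (x, y) location; `satisfies` is a predicate on a distance."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    result = []
    for combo in combinations(sorted(storages), k):
        if not any(satisfies(dist(source, storages[s])) for s in combo):
            continue
        if all(satisfies(dist(storages[a], storages[b]))
               for a, b in combinations(combo, 2)):
            result.append(combo)
    return result
```

The selected qualifying k-tuple would then be handed to the provisioning step that configures the k backup targets.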
20160147609 | SNAPSHOT MANAGEMENT - Systems and methods are disclosed for backing up a computer. The method includes choosing a time window to back up the computer; determining the jobs that need to be synced during the time window and the existing snapshots; determining an optimal set of snapshots that covers all jobs; altering job records in the database to point to one of the optimal snapshots; and deleting all snapshots not in the optimal set. | 2016-05-26 |
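The "optimal set of snapshots that covers all jobs" step above is a set-cover problem; exact minimum set cover is NP-hard, so a practical implementation would likely use a heuristic such as the standard greedy one sketched here (names and data shapes are illustrative):

```python
def choose_snapshots(jobs, snapshots):
    """Greedy approximation of the smallest set of snapshots covering all
    jobs. `snapshots` maps snapshot id -> set of job ids it covers."""
    uncovered, chosen = set(jobs), []
    while uncovered:
        # Pick the snapshot covering the most still-uncovered jobs.
        best = max(snapshots, key=lambda s: len(snapshots[s] & uncovered))
        if not snapshots[best] & uncovered:
            raise ValueError("some jobs are covered by no snapshot")
        chosen.append(best)
        uncovered -= snapshots[best]
    # Every snapshot not in `chosen` is a deletion candidate.
    return chosen
```

After the cover is chosen, each job record is re-pointed at one of the chosen snapshots and the rest can be deleted, as the abstract describes.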
20160147610 | USING GEOGRAPHICAL LOCATION INFORMATION TO PROVISION A TARGET STORAGE FOR A SOURCE DEVICE - Provided are a computer program product, system, and method for using geographical location information to provision one or more target storages for a source device. A determination is made of a geographical location of the source device and of geographical locations of the target storages. A determination is made of one of the target storages whose distance from the source device based on the geographical locations of the source device and the target storages satisfies at least one distance requirement. A configuration procedure is initiated to configure the source device and the determined target storage to have the source data backed-up from the source device to the target storage over the network. | 2016-05-26 |
20160147611 | USING GEOGRAPHICAL LOCATION INFORMATION TO PROVISION MULTIPLE TARGET STORAGES FOR A SOURCE DEVICE - Provided are a computer program product, system, and method for using geographical location information to provision multiple target storages for a source device. A determination is made of a geographical location of the source device and a distance between the source device and each of the target storages and between each pair of target storages. A determination is further made of qualifying k-tuples of the target storages, wherein each k-tuple comprises a group of k target storages to which the source data is to be backed-up. A qualifying k-tuple has one target storage that satisfies a distance requirement with respect to the source device and a distance between any two target storages in the k-tuple satisfies the distance requirement. A selected qualifying k-tuple is indicated to use to backup the source data at the k target storages in the qualifying k-tuple. | 2016-05-26 |
20160147612 | METHOD AND SYSTEM TO AVOID DEADLOCKS DURING A LOG RECOVERY - A method, medium, and system to receive a request to perform a log recovery to restore multiple database services; determine log backup entries corresponding to a target log position for a first database service of the multiple database services; read from a sequential stream device, by the first database service, the log backup entries corresponding to the target log position for the first database service; inform a second database service of the multiple database services that the first database service has concluded executing the log backup entries corresponding to the target log position for the first database service from the sequential stream device; assure that no resources of the sequential stream device are blocked by the first database service; and read log backup entries of the second database service corresponding to a target log position for the second database service from the sequential stream device. | 2016-05-26 |
20160147613 | DATABASE RECOVERY USING FOREIGN BACKUPS - A system includes reception, at a target database system, of a request to recover a backup created by a source database system into the target database system, determination of a backup tool configuration file associated with the source database system, determination of a filepath of the backup, determination of a backup filepath associated with the target database system, and request of a recovery of the backup using the backup tool configuration file, wherein the request using the backup tool configuration file includes the filepath of the backup and the backup filepath associated with the target database system. | 2016-05-26 |
20160147614 | Synchronized Backup and Recovery of Database Systems - Disclosed herein are system, method, and computer program product embodiments for utilizing a backup catalog to perform synchronized backup and recovery of heterogeneous database systems. An embodiment operates by performing a global data backup of a heterogeneous database system comprising a first database management system (DBMS) at a first server and a second DBMS at a second server and recording a global data backup entry identifying the global data backup into a backup catalog. Upon receiving log backup notifications regarding asynchronous log backups on the first server and the second server, log backup entries identifying the asynchronous log backups are recorded into the backup catalog. To successfully perform a point-in-time recovery, the embodiment operates by using the backup catalog to identify data and log backups required for the recovery of the first and second servers to a recovery timestamp associated with the point-in-time recovery. | 2016-05-26 |
20160147615 | DATABASE RECOVERY AFTER SYSTEM COPY - A system includes reception, at a target database system, of a request to recover a backup created by a source database system into the target database system, where the request comprises a system identifier of the source database system, determination of a backup tool configuration file associated with the source database system based on the system identifier of the source database system, request of a recovery of the backup into the target database system using the backup tool configuration file, copying of a backup catalog of the source database system into a storage location associated with the target database system, and appending of a system change marker to the copied backup catalog, wherein the system change marker comprises the system identifier of the source database system. | 2016-05-26 |
20160147616 | RECOVERY STRATEGY WITH DYNAMIC NUMBER OF VOLUMES - A system includes reception of a command to recover a database to a point in time, determining a log backup which covers the point in time, determination of a sequence identifier associated with the log backup, collection of log backups which are older than the determined log backup and associated with the sequence identifier, and a data backup associated with the sequence identifier, and execution of a recovery of the database based on the determined log backup and the collected log backups and data backup. | 2016-05-26 |
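The collection step above can be sketched as: find the log backup whose interval covers the target time, then gather the older log backups and the data backup that share its backup-sequence identifier (the tuple layout and function name are illustrative assumptions):

```python
def plan_recovery(point_in_time, log_backups, data_backups):
    """Sketch of the backup-collection step of a point-in-time recovery.
    `log_backups`: list of (start, end, seq_id) tuples sorted by start time.
    `data_backups`: maps seq_id -> data backup id."""
    covering = next((b for b in log_backups
                     if b[0] <= point_in_time <= b[1]), None)
    if covering is None:
        raise ValueError("no log backup covers the requested point in time")
    seq = covering[2]
    # Collect older log backups from the same backup sequence, plus the
    # data backup that starts that sequence.
    older = [b for b in log_backups if b[2] == seq and b[1] <= covering[0]]
    return data_backups[seq], older, covering
```

Recovery then restores the data backup, replays the collected older log backups in order, and finally replays the covering log backup up to the requested timestamp.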