1st week of 2016 patent application highlights part 46 |
Patent application number | Title | Published |
20160004510 | RANDOM NUMBER GENERATOR - An integrated random signal generation circuit includes two logic gates, the output of each gate coupled to a respective first input of the other gate via assemblies of delay elements. The respective delays introduced by the assemblies of delay elements are adjustable. | 2016-01-07 |
20160004511 | METHOD FOR IMPLEMENTING PRECOMPUTATION OF LARGE NUMBER IN EMBEDDED SYSTEM - Disclosed is a method for implementing precomputation of a large number in an embedded system. A modulo module, a modulo adding module, and a Montgomery modular multiplier are invoked according to a data format of a modulus length and a value of each data bit of a binary number corresponding to the modulus length, to perform an iterative operation, so that a precomputation result of a large number can be obtained when the modulus length is an arbitrary value, thereby improving the data processing speed. | 2016-01-07 |
20160004512 | METHOD OF PROJECTING A WORKSPACE AND SYSTEM USING THE SAME - A method of projecting a workspace includes the following steps. Firstly, a projectable space instance which is instantiated from a unified script is provided through a URI (uniform resource identifier). The unified script is defined to configure at least one of a matterizer, information and tool to model a workspace. The projectable space instance is used for building a projected workspace corresponding to the workspace so as to provide an interface for operating at least one of the matterizer, the information and the tool to perform a task. Then, a projector is used to parse the projectable space instance and build a working environment to configure at least one of the matterizer, the information and the tool. Consequently, the projected workspace is executed for providing interaction between at least one user and the projected workspace. | 2016-01-07 |
20160004513 | DESIGN ASSISTING SYSTEM, DESIGN ASSISTING METHOD, AND STORAGE MEDIUM STORING PROGRAM THEREFOR - To design parameters for building a system quickly, a design assisting system is provided, comprising: a processor configured to execute a program; and a memory configured to store the program to be executed by the processor. The memory is configured to store a plurality of parameter sets for defining usage of resources of a computer system on which a business operation is run. The processor is configured to: calculate a resource value score of each resource for each of the plurality of parameter sets, based on information for defining items to be taken into account in system designing; sum up the calculated resource value scores for each of the plurality of parameter sets separately; and determine a priority level of each of the plurality of parameter sets, based on the resource value scores summed up for each of the plurality of parameter sets separately. | 2016-01-07 |
20160004514 | METHOD OF UNIFYING INFORMATION AND TOOL FROM A PLURALITY OF INFORMATION SOURCES AND COMPUTER PROGRAM PRODUCT AND MATTERIZER USING THE SAME - A method of unifying information and tool from a plurality of information sources includes the following steps. Firstly, an access scheme is provided to retrieve attributes and an associated link from an original information and/or attributes and an associated link from an original tool. Then, the original information is modeled into a unified information unit with a first unified data model by re-organizing the attributes and the associated link of the original information, and/or the original tool is modeled into a unified tool with a second unified data model by re-organizing the attributes and the associated link of the original tool. A format of the original information is modeled by the first unified data model and/or a format of the original tool is modeled by the second unified data model. | 2016-01-07 |
20160004515 | METHOD FOR PROVIDING APPLICATION DEVELOPMENT ENVIRONMENT AND DEVICE - A method for providing an application development environment according to one embodiment of the present invention comprises the steps of: displaying a first screen showing a connection relationship among a plurality of pages forming an application; and displaying a second screen showing a connection relationship among a plurality of components for any one of the plurality of pages, wherein the first component containing event information is connected to a second component containing action information on the second screen. | 2016-01-07 |
20160004516 | Code Generation Framework for Application Program Interface for Model - A code generating framework generates code for a model Application Program Interface (API). The framework comprises three components: an API code generator, a serialization code generator, and a deserialization code generator. The API code generator generates code for a model API. This model API produces a first model instance version in a first language. Code from the serialization code generator converts the model instance into a second version in a different language accessible to other applications (e.g., graphical modeling editors). Code from the deserialization code generator converts the second version of the model instance back into the original language. In a particular embodiment, the code generation framework generates JavaScript/XSJS APIs for manipulating model instances compatible with the Eclipse Modeling Framework (EMF). Serialization and deserialization code generated by the framework converts the JavaScript/XSJS model instance into XMI recognized by other applications built on top of EMF, and back again into JavaScript/XSJS. | 2016-01-07 |
20160004517 | SOFTWARE DEVELOPMENT IMPROVEMENT TOOL - iREVIEW - Aspects of the disclosure relate to providing a tool for detecting defects in iSeries source code. The tool addresses a requirement of a software development team to review code requirements while developing software. There may be strict coding guidelines that, if not followed properly, increase the likelihood of a programming defect. The tool may scan code written by the software development team to ensure compliance with the coding guidelines and generate a list of defects/warnings in a text file format. The tool may transmit the list of defects/warnings, such as by email, to a manager of the software development team. The tool may measure productivity of the software development team based on the number of defects/warnings reported. The tool may include a control table. The control table may maintain the coding guidelines as rules which may be easily amendable by a user of the tool. | 2016-01-07 |
20160004518 | PROFILE GUIDED OPTIMIZATION IN THE PRESENCE OF STALE PROFILE DATA - Profile guided optimization (PGO) in the presence of stale profile data as described herein can be based on path profiling, whereby different paths through a program's call graph are uniquely identified. Stale profile data is data collected in a training run of a previous version of the program. Profile data can be collected along these paths and optimization decisions can be made using the collected data. The paths can be numbered using an algorithm that assigns path increments to all the callees of a function. The path increment assignments (which can be stored in the profile database) can be used to locate the profile data for that path and to make corresponding optimization decisions. PGO optimizations along call graph paths involving edited functions can be performed. | 2016-01-07 |
20160004519 | SYSTEM FOR DYNAMIC COMPILATION OF AT LEAST ONE INSTRUCTION FLOW - A compilation system for at least one instruction flow to be executed on a target circuit comprises a hardware acceleration circuit performing the functions of loading a set of at least one portion of said flow to a memory internal to the circuit and of decoding the set; the instructions resulting from the loading and from the decoding being transmitted to a programmable core operating in parallel to the hardware acceleration circuit, the programmable core producing the transcription of the decoded instructions into a machine code suitable for execution on the target circuit. | 2016-01-07 |
20160004520 | OPTIMIZATION OF COMPUTER MODULES FOR THE DEPLOYMENT OF A COMPUTER SERVICE - The invention relates to an automated selection of optimal computer modules for deploying a computer service. For this purpose, the following steps are provided: | 2016-01-07 |
20160004521 | SYSTEM AND METHOD OF PROVIDING CONTEXT SENSITIVE HELP FOR ALARM SYSTEM INSTALLATION - Systems and methods of providing context sensitive help for alarm system installation are provided. Methods can include the alarm system transmitting a first piece of information to a smart device responsive to an event, where the first piece of information can include information displayed on a user interface of the alarm system or an alternative representation of the information displayed on the user interface of the alarm system. Methods can also include the smart device receiving the first piece of information, the smart device identifying and retrieving a second piece of information associated with the first piece of information, and the smart device displaying the second piece of information on a user interface of the smart device. | 2016-01-07 |
20160004522 | Multiple Virtual Machines in a Mobile Virtualization Platform - Systems and methods are described for embodiments of a mobile virtualization platform (MVP) where in some aspects a wireless mobile device including multiple virtual machines (VMs) may receive data from a remote content provider and process/execute the data using an appropriate virtual machine. In some examples, the MVP may facilitate communication between and coordination among different virtual machines in the MVP, such as to facilitate optimization of data processing/execution. | 2016-01-07 |
20160004523 | SOFTWARE SIGNATURE DISCOVERY - In a method for determining and scoring a signature for a software package. A processor determines a signature of a first software package, wherein the signature comprises an indication of a first set of files on a computer system after installation of the first software package that were not present on the computer system before the installation, and an indication of a second set of files not remaining on the computer system after an uninstall of the first software package. A processor compares the first and the second set of files indicated in the signature of the first software package to files indicated in one or more other signatures of other software packages. A processor determines a score for the signature of the first software package based on the comparison. | 2016-01-07 |
20160004524 | RETIRING TARGET MACHINES BY A PROVISIONING SERVER - A provisioning server can provide and interact with an eraser agent on target machines. The eraser agent can execute on one or more target machines to erase all the contents of storage on the target machines. In particular, the eraser agent can utilize secure algorithms to alter and obscure the information stored on the storage devices of the target machines. The provisioning server can instruct the target machines to alter their power state (e.g. power cycle), if necessary, to provide and to initiate the eraser agent. | 2016-01-07 |
20160004525 | METHOD OF DETERMINING WHETHER INPUT OPERATION DIALOGUE IS DISPLAYABLE AND COMPUTER SYSTEM - A method of determining whether a dialogue is displayable includes recording a previous software use state in a terminal that has responded to an input operation, comparing the recorded previous software use state with a current software use state in the terminal, and displaying an input operation dialogue according to a result of the comparison. | 2016-01-07 |
20160004526 | BRIDGE MODULE FOR UPDATING BASIC INPUT/OUTPUT SYSTEM AND UPDATING METHOD THEREOF - A bridge module for updating basic input/output system (BIOS) includes a universal serial bus (USB) port and a serial peripheral interface (SPI) port which are used to connect to a communication device and an electrical device respectively. The communication device stores BIOS updating data and includes a USB port. The electrical device includes a motherboard, an SPI port and a BIOS chip. The SPI port is electrically connected to the motherboard. The BIOS chip is electrically connected to the SPI port and stores old BIOS data. The method includes the steps of making the bridge module connect to the communication device and the electrical device; and triggering the updating process to make the communication device transmit the BIOS updating data to the electrical device through the bridge module to update the electrical device. | 2016-01-07 |
20160004527 | TECHNIQUES FOR CUSTOMIZING MOBILE APPLICATIONS - A mobile data processing device (MT), including a memory, a processing system and a detector system for detecting environmental identifier(s) from the environment. The memory includes at least one application, which defines a set of functions, and one or more plugins. At least one plugin is currently active. The currently active plugin defines operations of the mobile data processing device during execution of the application. The operations defined by the currently active plugin include one or more tasks. At least one task defines a subset of the functions defined by the application, including at least one function for operating the detector system. The plugin further defines an order of execution for the subset of functions. The device enables customized functionality in applications that must be submitted to compliance checking by platform providers. | 2016-01-07 |
20160004528 | EFFICIENT APPLICATION PATCHING IN HETEROGENEOUS COMPUTING ENVIRONMENTS - Techniques are disclosed for efficiently updating multiple computing systems in potentially heterogeneous computing environments. Embodiments provide for efficient patching of multiple software applications executing in multiple execution environments. For example, a custom installation tool can be provided to each of the computing devices in the cloud infrastructure system. The computing devices can execute the custom installation tool and identify, retrieve, and apply the necessary patches to applications on the computing devices. The patch-related processing across the multiple computing devices may occur in parallel such that at least a portion of the processing is performed concurrently. | 2016-01-07 |
20160004529 | INTEGRATION OF SOCIAL NETWORKS WITH INTEGRATED DEVELOPMENT ENVIRONMENT (IDE) - Disclosed herein is a framework for integrating social networks with integrated development environment (IDE). In accordance with one aspect, the framework automatically downloads social information based on a user's workspace content. The downloaded social information may be filtered and further displayed. Software development may be performed using the displayed social information and updated social information may be stored in a database. | 2016-01-07 |
20160004530 | INCREMENTAL UPGRADE METHOD, APPARATUS FOR APPLYING METHOD AND STORAGE MEDIUM - The present invention provides an incremental upgrade method, an apparatus applying the method and a storage medium. The incremental upgrade method comprises: sending a request to a server for downloading an incremental upgrade package corresponding to a local old version file on a terminal; receiving the incremental upgrade package, which comprises unmatched block data obtained by comparing a first compressed file with a second compressed file, and the start and end information of the unmatched block data, wherein the first compressed file is a compressed file that concatenates an old version file and a new version file on the server, and the second compressed file is a compressed file of the old version file on the server; concatenating the unmatched block data in the incremental upgrade package and a local second compressed file generated from the local old version file on the terminal to generate a concatenated compressed file; and finally decompressing the concatenated compressed file to obtain the new version file. The method reduces data traffic and occupied bandwidth resources. | 2016-01-07 |
20160004531 | INTERACTIVE CONTENT DEVELOPMENT - Techniques for developing and deploying software applications in a virtualized computing environment are described. A developer user is presented with a user interface providing options for accessing a software development project. Inputs are provided to the software development project. The inputs may include data and selection of a software component. A plurality of predefined data objects are accessed and an executable software application is generated. The application executes on virtual machine instances of the virtualized computing environment and is accessible by a plurality of end-users. The executable software application is developed within the multi-user computing and network services platform via the web-based user interface and is hosted by the multi-user computing and network services platform for use by end-users. | 2016-01-07 |
20160004532 | SELF-DESCRIBING DEVICE MODULE AND SYSTEM AND COMPUTER-READABLE MEDIUM FOR THE PRODUCTION THEREOF - A system, method, and computer-readable medium for generation of a controlled device Module are provided. Various components are provided to a Module designer for selection, and the designer defines the interface APIs specifying the component functionalities. The designer may specify custom commands or events for the Module including Commands, Properties, and Parameters, and custom components corresponding to the custom commands are generated. A self-describing capabilities component is then generated for each component, and a composite capabilities component may then be generated from the capabilities components of each of the components. The completed Module package is then produced by an integrated development environment station. | 2016-01-07 |
20160004533 | Methods And Apparatuses For Reducing Power Consumption Of Processor Switch Operations - Methods and apparatuses for reducing power consumption of processor switch operations are disclosed. One or more embodiments may comprise specifying a subset of registers or state storage elements to be involved in a register or state storage operation, performing the register or state storage operation, and performing a switch operation. The embodiments may minimize the number of registers or state storage elements involved with the standby operation by specifying only the subset of registers or state storage elements, which may involve considerably fewer than the total number of registers or state storage elements of the processor. The switch operation may be a switch from one mode to another, such as a transition to or from a sleep mode, a context switch, or the execution of various types of instructions. | 2016-01-07 |
20160004534 | CONTROL OF SWITCHING BETWEEN EXECUTED MECHANISMS - A data processing apparatus | 2016-01-07 |
20160004535 | METHOD OF OPERATING A MULTI-THREAD CAPABLE PROCESSOR SYSTEM, AN AUTOMOTIVE SYSTEM COMPRISING SUCH MULTI-THREAD CAPABLE PROCESSOR SYSTEM, AND A COMPUTER PROGRAM PRODUCT - A method of operating a multi-thread capable processor system comprising a plurality of processor pipelines is described. The method comprises fetching an instruction comprising an address and selecting an operation mode based on the address of the fetched instruction, the operation mode being selected from at least a lock-step mode and a multi-thread mode. If the operation mode is selected to be the lock-step mode, the method comprises letting at least two processor pipelines of the multi-thread capable processor system execute the instruction in lock-step mode to obtain respective lock-step results, comparing the respective lock-step results against a comparison criterion for determining whether the respective lock-step results match, and, if the respective lock-step results match, determining a matching result from the respective lock-step results, and writing back the matching result. | 2016-01-07 |
20160004536 | Systems And Methods For Processing Inline Constants - Disclosed is a digital processor comprising an instruction memory having a first input, a second input, a first output, and a second output. A program counter register is in communication with the first input of the instruction memory. The program counter register is configured to store an address of an instruction to be fetched. A data pointer register is in communication with the second input of the instruction memory. The data pointer register is configured to store an address of a data value in the instruction memory. An instruction buffer is in communication with the first output of the instruction memory. The instruction buffer is arranged to receive an instruction according to a value at the program counter register. A data buffer is in communication with the second output of the instruction memory. The data buffer is arranged to receive a data value according to a value at the data pointer register. | 2016-01-07 |
20160004537 | COMMITTING HARDWARE TRANSACTIONS THAT ARE ABOUT TO RUN OUT OF RESOURCE - A transactional memory system determines whether a hardware transaction can be salvaged. A processor of the transactional memory system begins execution of a transaction in a transactional memory environment. Based on detection that an amount of available resource for transactional execution is below a predetermined threshold level, the processor determines whether the transaction can be salvaged. Based on determining that the transaction can not be salvaged, the processor aborts the transaction. Based on determining the transaction can be salvaged, the processor performs a salvage operation, wherein the salvage operation comprises one or more of: determining that the transaction can be brought to a stable state without exceeding the amount of available resource for transactional execution, and bringing the transaction to a stable state; and determining that a resource can be made available, and making the resource available. | 2016-01-07 |
20160004538 | MULTIPLE ISSUE INSTRUCTION PROCESSING SYSTEM AND METHOD - A multiple issue instruction processing system is provided. The system includes a central processing unit (CPU), a memory system and an instruction control unit. The CPU is configured to execute one or more of the executable instructions at the same time. The memory system is configured to store the instructions. The instruction control unit is configured to, based on the location of a branch instruction stored in a track table, control the memory system to output the instructions likely to be executed to the CPU. | 2016-01-07 |
20160004539 | OPERATING ENVIRONMENT SWITCHING BETWEEN A PRIMARY AND A SECONDARY OPERATING SYSTEM - Provided is a manner of switching between the operating environment of a primary OS and the operating environment of a secondary OS. In certain embodiments, a HDD keeps a runtime image of the secondary OS generated in a system memory. A DMA space for allowing the secondary OS to operate is formed in a physical address space where a memory image of the primary OS is active. The runtime image of the secondary OS is transferred to the DMA space. The operation of the memory image of the primary OS is stopped and the runtime image of the secondary OS is executed in the DMA space. Before activating the memory image of the primary OS, the runtime image of the secondary OS is saved to the HDD again. | 2016-01-07 |
20160004540 | FAST BOOTING A COMPUTING DEVICE TO A SPECIALIZED EXPERIENCE - Described is a technology by which independent computing functions such as corresponding to separate operating systems may be partitioned into coexisting partitions. A virtual machine manager, or hypervisor, manages the input and output of each partition to operate computer system hardware. One partition may correspond to a special purpose operating system that quickly boots, such as to provide appliance-like behavior, while another partition may correspond to a general purpose operating system that may load while the special purpose operating system is already running. The computer system that contains the partitions may transition functionality and devices from one operating system to the other. The virtual machine manager controls which computer hardware devices are capable of being utilized by which partition at any given time, and may also facilitate inter-partition communication. | 2016-01-07 |
20160004541 | METHOD, APPARATUS, AND SYSTEM FOR RUNNING AN APPLICATION - According to an example, a computer creates an application entry in a microblog page, receives a triggering operation command associated with the application entry, generates, based on the triggering operation command, a floating layer at a predetermined position on the microblog page, receives application data at the floating layer, and runs an application in the floating layer based on the application data. | 2016-01-07 |
20160004542 | BOOTING METHOD FOR COMPUTER SYSTEM WITH MULTIPLE CENTRAL PROCESSING UNITS - A booting method for computer system with multiple central processing units is provided. The method includes: initializing at least two CPUs of the multiple CPUs at start of a booting process; accessing, by each of the at least two initialized CPUs, a task description chart (TDC) stored in the computer system, wherein the TDC includes information of at least two tasks of the booting process; and selecting, by each of the at least two initialized CPUs, a task from at least two tasks according to selection information of the at least two tasks in the TDC; obtaining, by each of the at least two initialized CPUs, the selected task according to address information of the selected task in the TDC; and executing, by the initialized CPUs, the selected tasks at least partially in parallel. | 2016-01-07 |
20160004543 | PARALLEL PROCESSING DEVICE, PARALLEL PROCESSING METHOD, AND PARALLEL PROCESSING PROGRAM STORAGE MEDIUM - Provided is a parallel processing device whereby a plurality of single processes is efficiently and simply parallel processed by a plurality of processors. The parallel processing device includes: a first processor which executes, upon data which is included in data sets, a first program which defines a single process which is executed with the data as an input thereof, and outputs a first result; and includes a second processor which executes, upon the inputted data, a second program which defines a unit process and outputs a second result. A selection unit selects, based on a prescribed index which denotes either performance or function of the first processor and the second processor, a first partial set and a second partial set from the data set. A first processor control unit inputs into the first processor first data which is included in the first partial set. A second processor control unit inputs into the second processor second data which is included in the second partial set. The first and second programs are executed in parallel by the first and second processors. | 2016-01-07 |
20160004544 | DATA VISUALIZATION TECHNIQUES - To provide visualization data to a client device, a server generates a plurality of display objects for selectively displaying at the client device to a user. Each display object includes at least one of a data portion and a graphics portion. The plurality of display objects is assigned to a plurality of vertices organized as a logical display tree. A mask specifying visual characteristics of the plurality of vertices is created. The visual characteristics of a given vertex simultaneously controls display attributes of all display objects assigned to the given vertex. The server transmits a description of the plurality of display objects, the logical display tree and the mask in a payload format. | 2016-01-07 |
20160004545 | METHOD AND EMBEDDED DEVICE FOR LOADING DRIVER - The invention discloses a method and a device for loading a driver, where the method includes: determining a model identifier corresponding to a component included in an embedded device, and searching for a driver associated with the model identifier; loading a found driver into a memory of the embedded device, and controlling the driver to drive the component. In this solution, when a driver is loaded onto a component onto which a driver is to be loaded, an associated driver is searched for according to a model identifier of the component onto which the driver is to be loaded, and then the associated driver may be loaded. A combination of drivers of multiple components does not need to be searched for, or a combination of identifiers corresponding to drivers of multiple components does not need to be generated. Therefore, consumed time is reduced and loading efficiency is improved. | 2016-01-07 |
20160004546 | SYSTEMS AND METHODS FOR MONITORING AND MAINTAINING CONSISTENCY OF A CONFIGURATION - The present application is directed towards systems and methods for monitoring and maintaining consistency of a configuration across a plurality of cores or packet engines in a multi-core system. A configuration manager handles communication of configuration commands to a plurality of cores or packet engines. If a command executes successfully on a first packet engine but fails on a second packet engine, the configuration manager may communicate an undo command to the first packet engine. Successful execution of the undo command may restore the packet engines to a consistent configuration. | 2016-01-07 |
20160004547 | APPARATUS, METHOD, PROGRAM AND SYSTEM FOR PROCESSING INFORMATION UTILIZING A MULTI-PLATFORM CAPABLE OF MANAGING A PLURALITY OF APPLICATIONS - There is provided an information processing apparatus, including a multi-platform capable of managing a plurality of applications, and an operating system which operates on the multi-platform, and is capable of being activated by a command of the multi-platform. | 2016-01-07 |
20160004548 | NOTIFICATION CONVERSION PROGRAM AND NOTIFICATION CONVERSION METHOD - A non-transitory computer-readable storage medium storing a notification conversion program causing a computer to execute a process that includes: determining, in response to acquisition of a first notification transmitted from a first virtual machine, a type of first management software capable of executing the first notification by referring to a first storage unit storing therein first information in which a notification transmitted from a virtual machine and a type of management software capable of executing the notification are associated with each other; converting the first notification into a second notification executable by second management software that has acquired the first notification, based on the determined type of the first management software; and executing the second notification obtained by the conversion. | 2016-01-07 |
20160004549 | METHOD AND APPARATUS TO CONCEAL THE CONFIGURATION AND PROCESSING OF THE REPLICATION BY VIRTUAL STORAGE - A computer comprises a memory, and a processor being operable to manage a relationship between an image of a virtual machine and a plurality of storage systems forming a virtual storage system, and storing the relationship in the memory. The processor is operable to create a new image of the virtual machine in a target storage system of the plurality of storage systems based on the relationship, the new image of the virtual machine to be used to deploy the virtual machine in the target storage system. When the relationship indicates that the target storage system does not have the image, the processor is operable to copy the image from another storage system of the plurality of storage systems to the target storage system and to create a new image of the virtual machine in the target storage system from the copied image in the target storage system. | 2016-01-07 |
20160004550 | VIRTUALIZATION SYSTEM - A virtualization system includes: a virtualizing means for activating a virtual machine to which identification information identifying the virtual machine is assigned and which is capable of executing a predetermined function; a plurality of virtual machines activated by the virtualizing means; and a correspondence table creating means for creating a function correspondence table in which the identification information assigned to each of the virtual machines is associated with a function to be executed by the virtual machine among functions that can be executed by the virtual machines. The virtual machine specifies an associated function in the function correspondence table on the basis of the identification information assigned to the virtual machine, and executes the specified function. | 2016-01-07 |
20160004551 | RESOURCE MANAGEMENT SYSTEM AND RESOURCE MANAGEMENT METHOD - The integrated resource management unit | 2016-01-07 |
20160004552 | COMPUTER SYSTEM AND CONTROL METHOD THEREFOR - The performance of a virtual machine is maintained, by migrating an appropriate target virtual machine for migration to an appropriate destination resource, in response to a load on the virtual machine. | 2016-01-07 |
20160004553 | INFORMATION PROCESSING SYSTEM AND METHOD FOR RELOCATING APPLICATION - In a system of a plurality of bases connected to each other via a network, each base having an information processing device that operates an application by a virtual machine unit, an application owned by the information processing device of any base is relocated, together with the virtual machine unit, to another base. The migration of the virtual machine unit that executes the application is controlled, the information processing device of the base that is the relocation destination is determined, and the backup generation is tracked base by base and application by application. Information processing devices of a plurality of bases are selected as bases for backing up the data required to execute the application, and the data is moved to and stored in each of the selected information processing devices. | 2016-01-07 |
20160004554 | INFORMATION PROCESSING DEVICE AND RESOURCE ALLOCATION METHOD - A device includes a storage which has stored therein setting information that specifies, for each virtual machine to be created, the number of arithmetic processing unit cores that have to be allocated to the virtual machine, and group information that represents, as a group, a plurality of virtual machines operating in cooperation from among the virtual machines represented by the setting information; and a virtual machine monitor which, when a first virtual machine from among the virtual machines represented by the setting information has been created, refers to the setting information and the group information so as to allocate as many arithmetic processing unit cores as the setting information specifies to the first virtual machine, according to a rule that takes account of a decrease in operation performance of all the operable virtual machines that is associated with a failure occurring in any of the arithmetic processing units. | 2016-01-07 |
20160004555 | DATA PROCESSING APPARATUS AND DATA PROCESSING METHOD - A data processing apparatus generates, by a stream processing control program, for a time-series first stream data group of stream data out of a time-series stream data sequence, first vector data including elements acquired by collecting respective pieces of stream data of the time-series first stream data group; generates, by the stream processing control program, for a time-series second stream data group including, as a head, a piece of intermediate stream data of the time-series first stream data group and having the same number of pieces of data as the time-series first stream data group, second vector data including elements acquired by collecting respective pieces of stream data of the time-series second stream data group; and inputs, by the stream processing control program, the first and second vector data thus generated to a batch program to control the batch program to carry out batch processing. | 2016-01-07 |
20160004556 | DYNAMIC PREDICTION OF HARDWARE TRANSACTION RESOURCE REQUIREMENTS - A transactional memory system dynamically predicts the resource requirements of hardware transactions. A processor of the transactional memory system predicts resource requirements of a first hardware transaction to be executed based on any one of a resource hint and a previous execution of a prior hardware transaction. The processor allocates resources for the first hardware transaction based on the predicted resource requirements. The processor executes the first hardware transaction. The processor saves resource usage information of the first hardware transaction for future prediction. | 2016-01-07 |
20160004557 | ABORT REDUCING METHOD, ABORT REDUCING APPARATUS, AND ABORT REDUCING PROGRAM - A system and method for reducing the number of aborts caused by a runtime helper being called during the execution of a transaction block. When a runtime helper is called during the execution of a transaction block while a program using hardware transactional memory is running, the runtime helper passes ID information indicating the type of runtime helper to an abort handler. When there is an abort caused by a call to a runtime helper, the abort handler responds by acquiring the ID information of the runtime helper that caused the abort, disabling the transaction block with respect to the specific type of runtime helper, executing the non-transactional path corresponding to the transaction block, and re-enabling the transaction block when predetermined conditions are satisfied. | 2016-01-07 |
20160004558 | ALERTING HARDWARE TRANSACTIONS THAT ARE ABOUT TO RUN OUT OF SPACE - A transactional memory system determines whether to pass control of a transaction to an about-to-run-out-of-resource handler. A processor of the transactional memory system determines information about an about-to-run-out-of-resource handler for transaction execution of a code region of a hardware transaction. The processor dynamically monitors an amount of available resource for the currently running code region of the hardware transaction. The processor detects that the amount of available resource for transactional execution of the hardware transaction is below a predetermined threshold level. The processor, based on the detecting, saves speculative state information of the hardware transaction, and executes the about-to-run-out-of-resource handler, the about-to-run-out-of-resource handler determining whether the hardware transaction is to be aborted or salvaged. | 2016-01-07 |
20160004559 | SOFTWARE ENABLED AND DISABLED COALESCING OF MEMORY TRANSACTIONS - A program controls coalescing of outermost memory transactions, the coalescing causing committing of memory store data to memory for a first transaction to be done at transaction execution (TX) end of a second transaction, wherein optimized machine instructions are generated based on an intermediate representation of a program, and wherein two atomic tasks are either merged into a single coalesced transaction or executed as separate transactions. | 2016-01-07 |
20160004560 | METHOD FOR SINGLETON PROCESS CONTROL - A method for singleton process control in a computer environment is provided. A process identification (PID) for a background process is stored in a first temporary file. A determination operation is performed for determining if the parent process is alive for a predetermined number of tries. The PID of the background process is written from the first temporary file into a first PID variable when the parent process ends. A determination operation is performed for determining whether a second, global temporary file is empty. The background process is exited if an active PID is determined to exist in a second, global temporary file. The PID from the first temporary file is stored into the second, global temporary file. A singleton code block is then executed. | 2016-01-07 |
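The singleton check of 20160004560 reduces to a PID-file handshake. A minimal Python sketch, condensing the patent's two-temporary-file scheme to one global lock file (the file path and function names are illustrative, not the patent's):

```python
import os
import tempfile

# Hypothetical path standing in for the "second, global temporary file".
PID_FILE = os.path.join(tempfile.gettempdir(), "singleton_demo.pid")

def pid_alive(pid):
    """Probe whether a process with this PID exists (POSIX signal-0 check)."""
    try:
        os.kill(pid, 0)          # signal 0 sends nothing; it only probes
    except ProcessLookupError:
        return False             # no such process
    except PermissionError:
        return True              # process exists but belongs to another user
    return True

def run_singleton(code_block):
    """Execute code_block only when no live instance holds the PID file."""
    if os.path.exists(PID_FILE):
        text = open(PID_FILE).read().strip()
        if text.isdigit() and pid_alive(int(text)):
            return False         # an active PID exists: exit without running
    with open(PID_FILE, "w") as f:
        f.write(str(os.getpid()))  # store our PID in the global file
    try:
        code_block()             # the singleton code block
        return True
    finally:
        os.remove(PID_FILE)      # release the slot for the next instance
```

Note the sketch omits the patent's retry loop that waits on the parent process; a production version would also need an atomic create-if-absent (e.g. `os.open` with `O_CREAT | O_EXCL`) to close the check-then-write race.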
20160004561 | Model Driven Optimization of Annotator Execution in Question Answering System - Mechanisms are provided for scheduling execution of pre-execution operations of an annotator of a question and answer (QA) system pipeline. A model is used to represent a system of annotators of the QA system pipeline, where the model represents each annotator as a node having one or more performance parameters indicating a performance of an execution of the annotator corresponding to the node. For each annotator in a set of annotators of the system of annotators, an effective response time for the annotator is calculated based on the performance parameters. A pre-execution start interval for a first annotator is calculated based on an effective response time of a second annotator, where execution of the first annotator is sequentially after execution of the second annotator. Execution of pre-execution operations associated with the first annotator is scheduled based on the calculated pre-execution start interval for the first annotator. | 2016-01-07 |
20160004562 | Method of Centralized Planning of Tasks to be Executed by Computers Satisfying Certain Qualitative Criteria Within a Distributed Set of Computers - A method of disseminating a planning of tasks in a network of distributed computers that includes a planning server. The method comprises: programming, on the planning server, a planning of tasks for at least one class of distributed computers; independently defining ranges of transfer, to the distributed computers, of the information allocated to each distributed computer, each transfer range being defined as a function of the constraints of the network; splitting the planned tasks into scheduling information for each distributed computer and for a period of time dependent on the defined transfer ranges, the scheduling information being generated as a function of the class or classes to which the computer belongs; and transferring the scheduling information to the distributed computers while complying with the defined transfer ranges. | 2016-01-07 |
20160004563 | MANAGING NODES IN A HIGH-PERFORMANCE COMPUTING SYSTEM USING A NODE REGISTRAR - A method of managing nodes in a high-performance computing (HPC) system, which includes a management subsystem and a job scheduler subsystem, includes providing a node registrar subsystem. Logical node management functions are performed with the node registrar subsystem. Other management functions are performed with the management subsystem using the node registrar subsystem. Job scheduling functions are performed with the job scheduler subsystem using the node registrar subsystem. | 2016-01-07 |
20160004564 | METHOD FOR TASK SCHEDULING AND ELECTRONIC DEVICE USING THE SAME - A method for task scheduling and an electronic device using the same are provided. The method for scheduling tasks in an electronic device includes assigning a task to one of first processing units functionally connected to the electronic device, measuring a task load of the task, and controlling migration of the task to one of second processing units functionally connected to the electronic device based on the task load. | 2016-01-07 |
20160004565 | System and Method for Implementing Workflow Management Using Messaging - A system provides workflow management functions over a messaging or data protocol. A workflow management object defining functions and values and events for sending and receiving workflow management data is defined on a first device and transmitted to a second device. On the second device the workflow is rendered for interaction and response, and an interaction with the workflow object is captured. A captured or generated response is transmitted back to the first device or intermediary system via the messaging protocol. The response to the workflow object (e.g. an event) may be used by the device or intermediary systems to update a status of a workflow such as hosted by a remote server system. Events detected by a workflow system may invoke processing of subsequent workflow objects in a chain such that a complex workflow may be processed over the messaging protocol. | 2016-01-07 |
20160004566 | EXECUTION TIME ESTIMATION DEVICE AND EXECUTION TIME ESTIMATION METHOD - A device includes: a memory configured to store a condition of exclusive execution for a plurality of processes, and execution time ranges of each of one or more modules, the execution time ranges indicating a range from a shortest estimation time to a longest estimation time; and a processor configured to estimate an entire execution time by executing estimation processing so as to cause simulation of the estimation processing to progress, the estimation processing including: generating one or more cases, for each of the plurality of processes, in order of the one or more modules based on the execution time ranges, determining whether there is a possibility that exclusion waiting occurs based on the condition, for each of the one or more cases, and setting the exclusion waiting for a certain case in which it is determined that there is the possibility, from among the one or more cases. | 2016-01-07 |
20160004567 | SCHEDULING APPLICATIONS IN A CLUSTERED COMPUTER SYSTEM - Disclosed is a method for scheduling applications for a clustered computer system having a plurality of computers and at least one resource, the clustered computer system executing one or more applications. A method includes: monitoring hardware counters in at least one of the resources and the plurality of computers of the clustered computer system for each of the applications; responsive to said monitoring, determining the utilization of at least one of the resources and the plurality of computers of the clustered computer system by each of the applications; for each of the applications, storing said utilization of at least one of the resources and the plurality of computers of the clustered computer system; and upon receiving a request to schedule an application on one of said computers, scheduling a computer to execute the application based on the stored utilization for the application and the stored utilizations of other applications executing on the computers. | 2016-01-07 |
20160004568 | DATA PROCESSING SYSTEM AND METHOD - A method of optimizing an application in a system having a plurality of processors, the method comprising: analyzing the application for a first period to obtain a first activity analysis; selecting one of the processors based on the activity analysis for running the application; and binding the application to the selected processor. | 2016-01-07 |
20160004569 | METHOD FOR ASSIGNING PRIORITY TO MULTIPROCESSOR TASKS AND ELECTRONIC DEVICE SUPPORTING THE SAME - A method for determining task priorities in an electronic device is provided. The method includes receiving, at the electronic device, a request to perform a task, identifying a threshold parameter and a weighted value in accordance with a type of the requested task, measuring the threshold parameter of the task based on the identified weighted value, and assigning the requested task to one of a first operational unit and a second operational unit based on the measured threshold parameter and weighted value. | 2016-01-07 |
20160004570 | PARALLELIZATION METHOD AND ELECTRONIC DEVICE - A parallelization method includes: obtaining profiling information for each job step of a job by performing profiling of the job to be executed on an electronic device; determining at least one job step to be parallelized on a central processing unit (CPU) and at least one heterogeneous unit of the electronic device among a plurality of job steps of the job based on the profiling information; determining a unit to process each unit data among the CPU and the heterogeneous unit based on the profiling information, with respect to the determined at least one job step; and determining a unit to process each task among the CPU and the heterogeneous unit based on the profiling information, with respect to at least one job step including a plurality of separately executable tasks in the determined at least one job step. | 2016-01-07 |
20160004571 | SYSTEM AND METHOD FOR LOAD BALANCING IN A DISTRIBUTED SYSTEM BY DYNAMIC MIGRATION - A system and method for load balancing between components of a distributed data grid. The system and method support dynamic data migration of selected data partitions in response to detection of hot spots in the data grid which degrade system performance. In embodiments, the system and method rely upon analysis of per-partition performance statistics both for the identification of data nodes which would benefit from data migration and for the selection of data nodes for migration. Tuning of the data migration thresholds and method provides for optimizing throughput of the data grid to avoid degradation of performance resulting from load-induced hot spots. | 2016-01-07 |
20160004572 | METHODS FOR SINGLE-OWNER MULTI-CONSUMER WORK QUEUES FOR REPEATABLE TASKS - There are provided methods for single-owner multi-consumer work queues for repeatable tasks. A method includes permitting the single owner thread of a single-owner, multi-consumer work queue to access the work queue using atomic instructions limited to only a single access and using non-atomic operations. The method further includes restricting the single owner thread from accessing the work queue using atomic instructions involving more than one access. The method also includes synchronizing among the other threads with respect to their respective accesses to the work queue. | 2016-01-07 |
20160004573 | SALVAGING HARDWARE TRANSACTIONS WITH INSTRUCTIONS - A transactional memory system salvages a hardware transaction. A processor of the transactional memory system records information about an about-to-fail handler for transactional execution of a code region, and records information about a lock elided to begin transactional execution of the code region. The processor detects a pending point of failure in the code region during the transactional execution, and based on the detecting, stops transactional execution at a first instruction in the code region and executes the about-to-fail handler using the information about the about-to-fail handler. The processor, executing the about-to-fail handler, acquires the lock using the information about the lock, commits speculative state of the stopped transactional execution, and starts non-transactional execution at a second instruction following the first instruction in the code region. | 2016-01-07 |
20160004574 | METHOD AND APPARATUS FOR ACCELERATING SYSTEM RUNNING - The invention discloses a method and apparatus for accelerating system running. The method comprises: an acceleration enabling step of constructing and displaying an acceleration panel containing a one-key acceleration control when a preset enabling condition is triggered; and an acceleration execution step of detecting the one-key acceleration control within the acceleration panel in real time, and swapping memory occupied by all currently running processes to virtual memory to assist the system in running acceleration when the one-key acceleration control is triggered. The method and the apparatus of the invention can organize the system running condition for a user at the fastest speed, free redundant resources, increase the real-time system running speed for the user, and solve the problem in the prior art that the system running speed cannot be increased effectively. | 2016-01-07 |
20160004575 | METHODS AND SYSTEMS FOR MULTIPLE ACCESS TO A SINGLE HARDWARE DATA STREAM - Methods for providing simultaneous access to a hardware data stream to multiple applications are disclosed. The first application to access a hardware device is responsible for providing and publishing an application programming interface (API) that provides access to the hardware device's data stream, which other applications can then call to gain access to the data stream. In some examples, the first application may be a server process or daemon dedicated to managing the hardware device data stream and publishing the API. In some further examples, the first application may instead carry out user functionality unrelated to managing the hardware device. | 2016-01-07 |
20160004576 | APPARATUS FOR MANAGING APPLICATION PROGRAM AND METHOD THEREFOR - An embodiment of the present invention relates to an apparatus for managing an application program (AP) and a method therefor, and includes a processing module which, if the AP execution process thread corresponding to an AP to be terminated in a program block of an information processing device is terminated, reads the module information of each thread and the stack information of each module so as to select, from among the modules of each thread and the stacks of each module, the module and stack in charge of processing a dynamic data exchange (DDE) message, and releases the termination of the thread including the selected module and stack. Thus, even in the state of terminating each AP execution process thread of the AP to be terminated (for example, an AP unused by the user), various problems due to the delay of processing the DDE message may be readily avoided. | 2016-01-07 |
20160004577 | TECHNOLOGY FOR STALL DETECTION - Detecting stalling of a software process in a computer system includes receiving identification of a task thread group executing in a work process executing on a computer system. The task thread group includes one or more threads and the receiving includes receiving identification of the one or more threads by a control process executing on a computer system. The detecting includes detecting whether there is a thread state change for the task thread group, marking the task as running responsive to detecting a thread state change for the task thread group, marking the task as stalled responsive to detecting an absence of a thread state change for at least a predefined amount of time, and marking the work process as stalled responsive to detecting an absence of a predetermined signal from the work process for at least a predefined amount of time. | 2016-01-07 |
20160004578 | REALTIME PROCESSING OF STREAMING DATA - The invention described here is intended for enhancing the technology domain of real-time and high-performance distributed computing. This invention provides a connotative and intuitive grammar that allows users to define how data is to be automatically encoded/decoded for transport between computing systems. This capability eliminates the need for hand-crafting custom solutions for every combination of platform and transport medium. This is a software framework that can serve as a basis for real-time capture, distribution, and analysis of large volumes and variety of data moving at rapid or real-time velocity. It can be configured as-is or can be extended as a framework to filter-and-extract data from a system for distribution to other systems (including other instances of the framework). Users control all features for capture, filtering, distribution, analysis, and visualization by configuration files (as opposed to software programming) that are read at program startup. It enables large scalable computation of high velocity data over distributed heterogeneous platforms. As compared with conventional approaches to data capture which extract data in proprietary formats and rely upon post-run standalone analysis programs in non-real-time, this invention also allows data streaming in real-time to an open range of analysis and visualization tools. Data treatment options are specified via end-user configuration files as opposed to hard-coding software revisions. | 2016-01-07 |
20160004579 | METHOD OF GENERATING AUTOMATIC CODE FOR REMOTE PROCEDURE CALL - A method of generating a code for a remote procedure call (RPC) includes obtaining a source code including information indicating a part where the RPC is to be performed, and generating a code for calling the RPC and a code for executing an RPC procedure, by analyzing the source code including information indicating the part where the RPC is to be performed. | 2016-01-07 |
20160004580 | System and Method for Bruteforce Intrusion Detection - Systems and methods are shown for detecting potential attacks on a domain, where one or more servers, in response to a failure event, obtain a lambda value from a baseline model of historical data associated with a current time interval corresponding to the failure event, determine a probability of whether a total count of failure events for the current time interval is within an expected range using a cumulative density function based on the lambda value, and identify a possible malicious attack if the probability is less than or equal to a selected alpha value. | 2016-01-07 |
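The test in 20160004580 is a Poisson tail-probability check: given a baseline rate (lambda) for the interval, flag the interval when the observed failure count would be improbably high. A stdlib-only Python sketch (the function names and the `alpha` default are illustrative choices, not the patent's):

```python
import math

def poisson_tail(lam, count):
    """P(X >= count) for X ~ Poisson(lam), via the cumulative density function."""
    cdf = sum(math.exp(-lam) * lam ** k / math.factorial(k)
              for k in range(count))
    return 1.0 - cdf

def is_possible_attack(lam, count, alpha=0.01):
    """Flag a possible malicious attack when the tail probability <= alpha."""
    return poisson_tail(lam, count) <= alpha

# With a baseline of 3 failures per interval, 4 failures is unremarkable,
# but 15 failures falls far outside the expected range.
assert not is_possible_attack(3.0, 4)
assert is_possible_attack(3.0, 15)
```

The direct factorial sum is fine for small counts; for large counts one would use the regularized incomplete gamma function (e.g. `scipy.stats.poisson.sf`) to avoid overflow.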
20160004581 | SYSTEMS AND METHODS FOR SYNCHRONIZING MICROPROCESSORS WHILE ENSURING CROSS-PROCESSOR STATE AND DATA INTEGRITY | 2016-01-07 |
20160004582 | MANAGEMENT SYSTEM AND MANAGEMENT PROGRAM - A management system manages a plurality of management target devices. A storage device stores one or more rules, plan information, and plan history information. A control device specifies, based on the one or more rules, a first cause event that is a candidate cause of an event that has occurred in any one of the management target devices, specifies, based on the plan information, a plurality of first plans that can be executed in the case in which the first cause event is the cause, calculates, based on the plan history information, an index value indicating the possibility of succeeding in a failure recovery in the case in which each of the plurality of first plans is executed, and displays data indicating any one or more plans of the plurality of first plans according to a display mode decided based on the index value. | 2016-01-07 |
20160004583 | SYSTEM FOR PROJECT MANAGEMENT FROM NON-FUNCTION EVALUATION, METHOD FOR PROJECT MANAGEMENT FROM NON-FUNCTION EVALUATION, AND PROGRAM FOR PROJECT MANAGEMENT FROM NON-FUNCTION EVALUATION - Provided is a progress management technique for a project, the technique also covering a non-functional requirement of the project. A parameter required for evaluating the non-functional requirement is adjusted according to the progress of the project, and a calculation is made, using the adjusted parameter, of the extent to which the non-functional requirement may finally differ from a target value. | 2016-01-07 |
20160004584 | METHOD AND COMPUTER SYSTEM TO ALLOCATE ACTUAL MEMORY AREA FROM STORAGE POOL TO VIRTUAL VOLUME - An exemplary event analysis method generates a topology, indicating a correlation between management objects corresponding to a correlation between events defined in a selected event propagation model, from configuration management information. It generates, from the selected event propagation model and the topology, a causality indicating a correlation between the causal event, identified by an identifier of the management object and the type of the event, and the derivative events sequentially taking place from the causal event. In generating the causality, when the topology for identifying the identifier of a derivative event cannot be generated, it identifies the type of the management object where the derivative event takes place and the type of the event, without identifying the identifier of that management object. It performs an event analysis by comparing the generated causality with the events actually taking place at the management target apparatuses. | 2016-01-07 |
20160004585 | APPARATUS AND A METHOD FOR PROVIDING AN ERROR SIGNAL FOR A CONTROL UNIT - An apparatus for providing an error signal for a control unit, the error signal indicating a malfunction of a sensor unit. The apparatus includes an input module configured to receive a sensor signal from the sensor unit, the sensor signal being a periodic signal between an upper level and a lower level of a physical quantity. Further, the apparatus includes a determination module configured to determine the malfunction of the sensor unit and an output module configured to provide the error signal indicating the malfunction for the control unit. The error signal comprises a predetermined level of the physical quantity which differs from the upper level and from the lower level. | 2016-01-07 |
20160004586 | DELAYED DISK RECOVERY - A method of recovering content stored on a computer readable medium transported by a vehicle comprises identifying, by one or more computer processors, an error on the computer readable medium, storing, by the one or more computer processors, an indication of the error, and detecting, by the one or more computer processors, an interval of travel of the vehicle during which the computer readable medium has access to stable power. The method further includes, during the detected interval of travel, initiating, by the one or more computer processors, a recovery of the computer readable medium based on the indication of the error. | 2016-01-07 |
20160004587 | METHOD, APPARATUS AND SYSTEM FOR HANDLING DATA ERROR EVENTS WITH A MEMORY CONTROLLER - Techniques and mechanisms for providing error detection and correction for a platform comprising a memory including one or more spare memory segments. In an embodiment, a memory controller performs first scrubbing operations including detection for errors in a plurality of currently active memory segments. Additional patrol scrubbing is performed for one or more memory segments while the memory segments are each available for activation as a replacement memory segment. In another embodiment, a first handler process (but not a second handler process) is signaled if an uncorrectable error event is detected based on the active segment scrubbing, whereas the second handler process (but not the first handler process) is signaled if an uncorrectable error event is detected based on the spare segment scrubbing. Of the first handler process and the second handler process, only signaling of the first handler process results in a crash event of the platform. | 2016-01-07 |
20160004588 | PROBLEM MANAGEMENT SOFTWARE - Computer systems are managed by providing systems programmers with visual displays and user interfaces that identify certain issues and allow the system programmer to readily apply fixes, patches, and other updates without tediously sifting through a mountain of information and manually addressing those issues. The systems herein provide a more streamlined approach for the system programmer by reducing the possibility of overlooking a particular issue that may adversely affect the system. | 2016-01-07 |
20160004589 | SALVAGING HARDWARE TRANSACTIONS WITH INSTRUCTIONS - A transactional memory system salvages a hardware transaction. A processor of the transactional memory system executes a salvage indicator instruction, such execution including obtaining salvage indication information specified by the salvage indicator instruction, and saving the salvage indication information comprising a salvage indication. Based on a pending point of failure being detected, the processor uses the saved salvage indication information to avoid aborting a hardware transaction, wherein absent salvage indication information, the pending point of failure causes a hardware transaction to abort. The processor detects the point of failure, and based on the detecting, determines whether the salvage indication has been recorded. Based on determining that the salvage indication has been recorded, the processor executes an about-to-fail handler, and based on determining that the salvage indication has not been recorded, the processor aborts the transactional execution of the code region. | 2016-01-07 |
20160004590 | SALVAGING HARDWARE TRANSACTIONS WITH INSTRUCTIONS - A transactional memory system salvages a hardware transaction. A processor of the transactional memory system executes a first salvage checkpoint instruction in a code region during transactional execution of the code region, and based on the executing the first salvage checkpoint instruction, the processor records transaction state information comprising an address of the first salvage checkpoint instruction within the code region. The processor detects a pending point of failure in the code region during the transactional execution, and based on the detecting, determines that the transaction state information has been recorded, and further based on the detecting, executes an about-to-fail handler. Based on executing the about-to-fail handler, the processor returns to the execution of the code region of the transaction at the address of the checkpoint instruction. | 2016-01-07 |
20160004591 | METHOD AND DEVICE FOR PROCESSING DATA - A method for processing data includes coding a data item to obtain a coded data item that includes a predefinable number of bits, influencing at most k bits of the coded data item to obtain a changed data item, decoding the changed data item by using a fault-correcting code to obtain a decoded data item, and processing the decoded data item. | 2016-01-07 |
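The pipeline this abstract describes (encode, influence at most k bits, decode with a fault-correcting code) can be sketched with a triple-repetition code; this is a stand-in chosen here for k = 1, as the abstract does not name a specific code, and all function names are hypothetical:

```python
def encode(bits):
    # 3x repetition code: each data bit becomes three coded bits.
    return [b for bit in bits for b in (bit, bit, bit)]

def influence(coded, positions):
    # Influence (flip) at most k bits of the coded data item.
    out = list(coded)
    for p in positions:
        out[p] ^= 1
    return out

def decode(coded):
    # Majority vote per triple corrects one flipped bit per triple.
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

data = [1, 0, 1, 1]
changed = influence(encode(data), [2])   # k = 1 bit influenced
assert decode(changed) == data           # decoding recovers the original item
```

With k = 1 per triple, the majority vote always recovers the original item; a stronger code would be needed for larger k.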
20160004592 | METHOD FOR DETECTING ERROR OF DATA, STORAGE DEVICE, AND RECORDING MEDIUM - A method includes storing, by a processor that is configured to avoid adding the error correcting code to the data when the data passes through the inside of the processor, the data received from a host device and an error correcting code in the buffer memory; reading the data from the buffer memory and transmitting the read data to a calculating circuit; calculating, by the calculating circuit, a first checksum of the data and transmitting the data to the processor; storing, by the processor, the data and the error correcting code in a sub memory; reading the data from the sub memory and transmitting the read data to the calculating circuit through the processor; calculating, by the calculating circuit, a second checksum of the data; and determining, by the processor, whether an error of the data occurs within the processor by comparing the first checksum with the second checksum. | 2016-01-07 |
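The detection scheme above — checksumming the same data before and after it passes through the processor and comparing the two results — can be sketched as follows. The names are hypothetical and CRC-32 stands in for whatever checksum the calculating circuit computes:

```python
import zlib

def checksum(data: bytes) -> int:
    # Stand-in for the calculating circuit's checksum.
    return zlib.crc32(data)

def pass_through_processor(data: bytes, inject_error: bool = False) -> bytes:
    # Model of the processor path; optionally corrupts one byte
    # to simulate an error occurring inside the processor.
    if inject_error:
        return bytes([data[0] ^ 0x01]) + data[1:]
    return data

def error_in_processor(data: bytes, inject_error: bool = False) -> bool:
    first = checksum(data)                             # before the processor
    after = pass_through_processor(data, inject_error)
    second = checksum(after)                           # after the processor
    return first != second                             # mismatch => error inside

assert not error_in_processor(b"hello")
assert error_in_processor(b"hello", inject_error=True)
```

The point of the comparison is that the processor itself adds no error-correcting code, so a first/second checksum mismatch localizes the corruption to the processor path.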
20160004593 | MEMORY DEVICE WITH RETRANSMISSION UPON ERROR - A controller includes a link interface that is to couple to a first link to communicate bidirectional data and a second link to transmit unidirectional error-detection information. An encoder is to dynamically add first error-detection information to at least a portion of write data. A transmitter, coupled to the link interface, is to transmit the write data. A delay element is coupled to an output from the encoder. A receiver, coupled to the link interface, is to receive second error-detection information corresponding to at least the portion of the write data. Error-detection logic is coupled to an output from the delay element and an output from the receiver. The error-detection logic is to determine errors in at least the portion of the write data by comparing the first error-detection information and the second error-detection information, and, if an error is detected, is to assert an error condition. | 2016-01-07 |
20160004594 | CONTROLLER DEVICE WITH RETRANSMISSION UPON ERROR - A controller includes a link interface that is to couple to a first link to communicate bi-directional data and a second link to transmit unidirectional error-detection information. An encoder is to dynamically add first error-detection information to at least a portion of write data. A transmitter, coupled to the link interface, is to transmit the write data. A delay element is coupled to an output from the encoder. A receiver, coupled to the link interface, is to receive second error-detection information corresponding to at least the portion of the write data. Error-detection logic is coupled to an output from the delay element and an output from the receiver. The error-detection logic is to determine errors in at least the portion of the write data by comparing the first error-detection information and the second error-detection information, and, if an error is detected, is to assert an error condition. | 2016-01-07 |
20160004595 | SHIFTING READ DATA - This disclosure relates to avoiding a hard error in memory during write time by shifting data to be programmed to memory to mask the hard error. In one implementation, a method of programming data to a memory array includes obtaining error data corresponding to a selected memory cell, shifting a data pattern such that a value to be stored by the selected memory cell matches a value associated with a hard error, and programming the shifted data pattern to the memory array such that the value programmed to the selected memory cell matches the value associated with the hard error. | 2016-01-07 |
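The masking idea can be sketched in Python: rotate the data pattern until the bit that lands on the faulty cell equals the cell's stuck-at value, and remember the shift so reads can undo it. The helper names are hypothetical and a simple rotation stands in for whatever shift the device applies:

```python
def shift_to_mask(pattern, fault_index, stuck_value):
    """Rotate the data pattern until the bit landing on the faulty
    cell equals the value the cell is stuck at; return the rotated
    pattern and the shift used (needed to undo the shift on read)."""
    n = len(pattern)
    for shift in range(n):
        rotated = pattern[-shift:] + pattern[:-shift] if shift else pattern[:]
        if rotated[fault_index] == stuck_value:
            return rotated, shift
    return None, None  # no bit in the pattern matches the stuck value

def unshift(rotated, shift):
    # Undo the rotation when reading the data back.
    return rotated[shift:] + rotated[:shift]

data = [1, 0, 1, 1, 0, 0, 1, 0]
programmed, s = shift_to_mask(data, fault_index=1, stuck_value=1)
assert programmed[1] == 1            # faulty cell now stores its stuck value
assert unshift(programmed, s) == data
```

Because the stuck cell is forced to hold exactly the value it is stuck at, the hard error becomes invisible to the write, at the cost of storing the shift amount alongside the data.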
20160004596 | DATA STORAGE DEVICE WITH IN-MEMORY PARITY CIRCUITRY - A data storage device includes a memory die. The memory die includes parity circuitry and a memory having a three-dimensional (3D) memory configuration. The memory includes a first block, a second block, and a third block. A method includes generating parity information based on first data associated with a first word line of the first block and further based on second data associated with a second word line of the second block. The parity information is generated by the parity circuitry. The method further includes writing the parity information to a third word line of the third block. | 2016-01-07 |
20160004597 | Memory Controller With Error Detection And Retry Modes Of Operation - A memory system includes a link having at least one signal line and a controller. The controller includes at least one transmitter coupled to the link to transmit first data, and a first error protection generator coupled to the transmitter. The first error protection generator dynamically adds an error detection code to at least a portion of the first data. At least one receiver is coupled to the link to receive second data. A first error detection logic determines if the second data received by the controller contains at least one error and, if an error is detected, asserts a first error condition. The system includes a memory device having at least one memory device transmitter coupled to the link to transmit the second data. A second error protection generator coupled to the memory device transmitter dynamically adds an error detection code to at least a portion of the second data. | 2016-01-07 |
20160004598 | GROUPING CHUNKS OF DATA INTO A COMPRESSION REGION - Examples disclosed herein relate to grouping chunks of data into a compression region. Examples relate to a chunk container comprising a first plurality of chunks of data in a plurality of first compression regions, and include grouping a second plurality of the chunks into a second compression region, and compressing the chunks of the second compression region relative to each other. | 2016-01-07 |
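A minimal sketch of grouping chunks into a compression region, assuming zlib as the compressor (the abstract does not specify one; names are hypothetical): concatenating the chunks before compressing lets similar chunks share context, which is what "compressing the chunks relative to each other" buys over compressing each chunk alone.

```python
import zlib

def compress_region(chunks):
    """Group chunks into one compression region: record each chunk's
    offset, concatenate, and compress the whole region together."""
    offsets, blob, pos = [], b"", 0
    for c in chunks:
        offsets.append((pos, len(c)))
        blob += c
        pos += len(c)
    return zlib.compress(blob), offsets

def read_chunk(region, offsets, i):
    # Reading one chunk decompresses the region and slices it out.
    blob = zlib.decompress(region)
    start, length = offsets[i]
    return blob[start:start + length]

chunks = [b"abcabcabc" * 20, b"abcabcabd" * 20, b"xyz" * 20]
region, offsets = compress_region(chunks)
assert read_chunk(region, offsets, 1) == chunks[1]
# Grouping similar chunks compresses better than compressing each alone:
separate = sum(len(zlib.compress(c)) for c in chunks)
assert len(region) < separate
```

The trade-off is typical of compression regions: better ratio from shared context, but reading any one chunk requires decompressing (at least a prefix of) the region.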
20160004599 | FILE BASED INCREMENTAL BLOCK BACKUP FROM USER MODE - A system for incremental backup comprises a storage device and a processor. The processor is configured to: 1) start tracking, wherein file changed-block info is tracked in map(s), wherein each of the map(s) tracks writes indicated via a node of a set of nodes; 2) receive a request for an incremental backup of a volume of one or more volumes, wherein the map(s) track changed blocks from writes to the volume; 3) halt writes to the volume and queue writes to the volume after halting; 4) freeze the map(s) of changed blocks; 5) change tracking, wherein the changed-block info is tracked in a new set of maps; 6) determine changed blocks using the map(s); 7) write changed blocks to a backup volume; and 8) release writes to the volume. | 2016-01-07 |
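The core of steps 1 and 4-7 can be sketched with an in-memory changed-block map. The class and method names are hypothetical, and real implementations track writes per node and must quiesce I/O (steps 3 and 8), which is elided here:

```python
class IncrementalTracker:
    """Sketch: writes mark blocks in the active map; a backup freezes
    that map, swaps in a fresh one, and copies only marked blocks."""

    def __init__(self, volume, backup):
        self.volume, self.backup = volume, backup
        self.changed = set()          # active map of changed block numbers

    def write(self, block_no, data):
        self.volume[block_no] = data
        self.changed.add(block_no)    # track the change

    def incremental_backup(self):
        # Freeze the current map and start tracking into a new one.
        frozen, self.changed = self.changed, set()
        for block_no in frozen:       # copy only the changed blocks
            self.backup[block_no] = self.volume[block_no]
        return sorted(frozen)

volume, backup = {}, {}
t = IncrementalTracker(volume, backup)
t.write(3, b"aaa"); t.write(7, b"bbb")
assert t.incremental_backup() == [3, 7]
assert backup == {3: b"aaa", 7: b"bbb"}
t.write(7, b"ccc")
assert t.incremental_backup() == [7]  # only blocks changed since last backup
```

Swapping in a fresh map atomically at freeze time is what lets tracking continue while the frozen map drives the copy.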
20160004600 | DATA PROCESSING DEVICE - A data processing device includes a digital data processing unit and a control unit. The digital data processing unit includes a computing unit that computes digital data and a power source management unit that transceives commands with the computing unit and manages a power supply to the computing unit. The control unit controls a user interface unit that provides a user interface function. The control unit diagnoses an operation of the digital data processing unit by monitoring a transfer state of the commands between the computing unit and the power source management unit. When determining an abnormality occurrence in the operation of the digital data processing unit, the control unit resets all parts of the digital data processing unit or a part of the digital data processing unit without interrupting an operation of the user interface unit. | 2016-01-07 |
20160004601 | LIGHTWEIGHT DATA RECONSTRUCTION BASED ON BACKUP DATA - An information management system allows a user to search through a secondary copy of data, such as a backup, archive, or snapshot, without first retrieving the secondary copy of data. Instead, the system constructs lightweight data that can be displayed to a user as a representation of the search results. Lightweight data may include metadata or other information that identifies data included in the secondary copy of data. The lightweight data may be perceived as being the secondary copy of data and allow a user to browse through search results. Once the user identifies a search result that is of interest, information in the lightweight data can be used to retrieve the secondary copy of data. Because lightweight data may have a smaller file size than the file size of the secondary copy of data, the latency of performing a search may be reduced. | 2016-01-07 |
20160004602 | COPY-ON-READ PROCESS IN DISASTER RECOVERY - Systems, methods, and computer products for copy-on-read processes in disaster recovery include: making a disaster recovery storage volume available for read access before all data from a corresponding primary storage volume has been copied to a disaster recovery storage volume; maintaining a record of regions of the disaster recovery storage volume; in response to receiving a read request for data at the disaster recovery system: looking up the record of regions of the disaster recovery storage volume to determine available data for the read request; reading any available data from the disaster recovery storage volume; obtaining data unavailable at the disaster recovery storage volume from the corresponding primary storage volume; updating the disaster recovery storage volume with the obtained data; supplying the obtained data to the read request; and updating the record of regions of the disaster recovery storage volume for the regions of the obtained data. | 2016-01-07 |
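The copy-on-read path above can be sketched as follows; the names are hypothetical, and dicts stand in for the storage volumes while a set stands in for the record of regions:

```python
class DRVolume:
    """Sketch of copy-on-read: serve reads from the disaster recovery
    copy when the region is already present, otherwise pull it from
    the primary, store it locally, and record the region as available."""

    def __init__(self, primary):
        self.primary = primary        # region -> data on the primary volume
        self.local = {}               # regions copied to the DR volume so far
        self.available = set()        # record of regions present locally

    def read(self, region):
        if region in self.available:
            return self.local[region]     # data already at the DR volume
        data = self.primary[region]       # obtain from the primary volume
        self.local[region] = data         # update the DR volume
        self.available.add(region)        # update the record of regions
        return data

primary = {0: b"alpha", 1: b"beta"}
dr = DRVolume(primary)
assert dr.read(0) == b"alpha"      # first read pulls from the primary
assert 0 in dr.available
primary[0] = b"changed-later"      # later reads no longer hit the primary
assert dr.read(0) == b"alpha"
```

This is why the DR volume can be made readable before the full copy finishes: every read either hits the record of already-copied regions or fills the gap on demand.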
20160004603 | STORAGE SYSTEM WITH VIRTUAL DISKS - An administrator provisions a virtual disk in a remote storage platform and defines policies for that virtual disk. A virtual machine writes to and reads from the storage platform using any storage protocol. Virtual disk data within a failed storage pool is migrated to different storage pools while still respecting the policies of each virtual disk. Snapshot and revert commands are given for a virtual disk at a particular point in time and overhead is minimal. A virtual disk is cloned utilizing snapshot information and no data need be copied. Any number of Zookeeper clusters are executing in a coordinated fashion within the storage platform, thus increasing overall throughput. A timestamp is generated that guarantees a monotonically increasing counter, even upon a crash of a virtual machine. Any virtual disk has a “hybrid cloud aware” policy in which one replica of the virtual disk is stored in a public cloud. | 2016-01-07 |
20160004604 | NON-DESTRUCTIVE DATA STORAGE - Non-destructive data storage is disclosed. An information change is stored that is associated with a business object such that tracking of the information change is enabled with respect to a transaction time and/or an effective time. The stored information change is accessed with respect to a time. | 2016-01-07 |
20160004605 | LIGHTWEIGHT DATA RECONSTRUCTION BASED ON BACKUP DATA - An information management system allows a user to search through a secondary copy of data, such as a backup, archive, or snapshot, without first retrieving the secondary copy of data. Instead, the system constructs lightweight data that can be displayed to a user as a representation of the search results. Lightweight data may include metadata or other information that identifies data included in the secondary copy of data. The lightweight data may be perceived as being the secondary copy of data and allow a user to browse through search results. Once the user identifies a search result that is of interest, information in the lightweight data can be used to retrieve the secondary copy of data. Because lightweight data may have a smaller file size than the file size of the secondary copy of data, the latency of performing a search may be reduced. | 2016-01-07 |
20160004606 | METHOD, SYSTEM AND DEVICE FOR VALIDATING REPAIR FILES AND REPAIRING CORRUPT SOFTWARE - A system and method for repairing corrupt software components of a computer system. Corrupt software is detected and repaired utilizing an automated component repair service. Repair files are downloaded from an external storage location and used to repair the corruption. The downloaded files are preferably the smallest amount of data necessary to repair the identified corruption. The process of repairing corrupt files is used in conjunction with a software updating service to resolve problems that occur when corrupt software is updated by allowing a corrupt component to be repaired and then uninstalled such that an updated component can be properly installed. | 2016-01-07 |
20160004607 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - An information processing apparatus includes a storage unit that stores a location and a file name of a change target file which is changed in a prescribed process, and a processor that executes a process including obtaining and saving the change target file by using the location and the file name of the change target file; conducting the installation; detecting a failure that has occurred during the installation; obtaining progress information which represents progress of the installation, and identifying the prescribed process at occurrence of the failure as a failure time process on the basis of the progress information when the failure has been detected; restoring a file changed in the failure time process by using the saved change target file that corresponds to the changed file; and resuming the installation from a point in time at which the failure time process started. | 2016-01-07 |
20160004608 | METHOD AND DEVICE FOR SYNCHRONOUSLY RUNNING AN APPLICATION IN A HIGH AVAILABILITY ENVIRONMENT - A method for synchronously running an application in a high availability environment including a plurality of calculating modules interconnected by a very high-speed broadband network, includes: configuring the modules into partitions, including a primary partition, a secondary partition, and a monitoring partition; running the application on each running partition, with inputs-outputs processed by the primary partition transmitted to the secondary running partition via the monitoring partition; synchronizing the runs by exploiting microprocessor context changes; transmitting a catastrophic error signal to the monitoring partition; and continuing the run by switching to a degraded mode, the run continuing on a single partition. | 2016-01-07 |
20160004609 | FAULT TOLERANT COMMUNICATIONS - Apparatuses, systems and methods are disclosed for tolerating fault in a communications grid. Specifically, various techniques and systems are provided for detecting a fault or failure by a node in a network of computer nodes in a communications grid, adjusting the grid to avoid grid failure, and taking action based on the failure. In an example, a system may include receiving grid status information at a backup control node, the grid status information including a project status, storing the grid status information within the backup control node, receiving a failure communication including an indication that a primary control node has failed, designating the backup control node as a new primary control node, receiving updated grid status information based on the indication that the primary control node has failed, and transmitting a set of instructions based on the updated grid status information. | 2016-01-07 |