51st week of 2015 patent application highlights part 41 |
Patent application number | Title | Published |
20150363169 | MULTIPLIER UNIT WITH SPECULATIVE ROUNDING FOR USE WITH DIVISION AND SQUARE-ROOT OPERATIONS - Embodiments of a multiplier unit that may be used for division and square root operations are disclosed. The embodiments may provide a reduced and fixed latency for denormalization and rounding used in the division and square root operations. A storage circuit may be configured to receive first and second source operands. A multiplier circuit may be configured to perform a plurality of multiplication operations dependent upon the first and second source operands. Each result after an initial result of the multiplier may also depend on at least one previous result. Circuitry may be configured to perform a shift operation and a rounding operation on a given result of the plurality of results. An error of the given result may be less than a predetermined threshold value. | 2015-12-17 |
20150363170 | CALCULATION OF A NUMBER OF ITERATIONS - Performing an arithmetic operation in a data processing unit, including calculating a number of iterations for performing the arithmetic operation with a given number of bits per iteration. The number of bits per iteration is a positive natural number. A number of consecutive digit positions of a digit in a sequence of bits represented in the data processing unit is counted. The length of the sequence is a multiple of the number of bits per iteration. A quotient of the number of consecutive digit positions divided by the number of bits per iteration is calculated, as well as a remainder of the division. | 2015-12-17 |
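The arithmetic described in application 20150363170 is concrete enough to sketch: count consecutive leading digits, then split that count by the bits-per-iteration figure into a quotient and a remainder. Below is a minimal, hypothetical Python rendering — the function and parameter names are invented for illustration, not taken from the filing.

```python
def iteration_counts(bit_sequence: str, bits_per_iteration: int, digit: str = "0"):
    """Count leading consecutive occurrences of `digit`, then split that
    count into full iterations (quotient) and leftover bits (remainder)."""
    consecutive = 0
    for b in bit_sequence:
        if b == digit:
            consecutive += 1
        else:
            break
    quotient, remainder = divmod(consecutive, bits_per_iteration)
    return quotient, remainder

# 5 leading zeros with 2 bits per iteration -> 2 full iterations, 1 bit left over
print(iteration_counts("0000010110", 2))  # (2, 1)
```

The claim's constraint that the sequence length be a multiple of the bits-per-iteration count is not enforced here; a production version would validate it.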
20150363171 | GENERATING VIRTUALIZED APPLICATION PROGRAMMING INTERFACE (API) IMPLEMENTATION FROM NARRATIVE API DOCUMENTATION - A virtualized Application Program Interface (API) implementation is generated based upon narrative API documentation that includes sentences that describe the API, by generating programming statements for the virtualized API implementation based upon parsing the narrative API documentation, and generating the virtualized API implementation based upon the programming statements for the virtualized API implementation. The parsing of the narrative documentation may use a natural language parser and a domain-specific ontology that may be obtained or created for the API. The virtualized API implementation may be generated using an API virtualizer. | 2015-12-17 |
20150363172 | AUTOMATED MODIFICATION INTEGRATION AND CONFLICT RESOLUTION FOR SOFTWARE DEVELOPMENT - Proposed changes to source code generated by client computing devices are integrated with a master version of the code resident on a server computing system remote from the client devices. The client devices connect to the server system over a network and transmit proposed changes to the server system, where resident integration/conflict-resolution software automatically integrates the proposed changes into the master version. Any unresolved conflicts remaining after the automatic integration are identified, and the server system automatically sends an email notifying the client devices that there are unresolved conflicts. The email includes a link that enables a client device to launch a window on a monitor, and the client device user employs the window to transmit commands directly to, and receive further communications directly from, the integration/conflict-resolution software in an interactive operation to attempt to manually resolve the conflicts. | 2015-12-17 |
20150363173 | APPARATUS, METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - This invention provides an apparatus usage environment with greater flexibility and convenience. To achieve this, in a program including a first program layer with an instruction set to be interpreted and executed by a processor and a second program layer with an instruction set compiled in advance by a unit other than the processor, communication between an external device and the first program layer is performed via the second program layer. Based on information about the external device received via this communication, the display contents of a screen for using a function of the external device, displayed in the first program layer, are controlled. | 2015-12-17 |
20150363174 | COMPLEX CONSTANTS - In an approach, a virtual machine identifies, within a set of instructions, an instruction to load a constant; identifies, based on the instruction to load the constant, a first entry in a data structure that identifies a particular constant type of the one or more constant types, wherein the first entry specifies at least constant data and a first set of instructions for assembling a value or partial value from the constant data; executes the first set of instructions to assemble the value or the partial value from the constant data; and stores a particular value or a reference to the particular value onto a run-time data structure used to pass values or references between sets of instructions executing in a run-time environment, wherein the particular value is based on the value or the particular value assembled from the constant data. | 2015-12-17 |
20150363175 | Computing Expression Medium, Development Environment, and Device Communication and Control Architecture - During a process called live design, a computing system may receive, from a repository, an instance of a first component comprising a first set of one or more metaobjects that provides a binary representation of the instance of the first component. In turn, the computing system may render the instance of the first component as an icon and a first set of one or more underlying panes that provide a visual expression of the instance of the first component. The computing system may then receive, via the first set of one or more underlying panes, a user modification to the instance of the first component. Thereafter, the computing device and/or the repository may determine whether the user modification to the instance of the first component is valid, and may process the user modification in accordance with the determining. | 2015-12-17 |
20150363176 | METHOD AND APPARATUS FOR CODE CONVERSION - The present invention relates to the field of computer programming, in particular to a method and apparatus for code conversion. The codes in the code file or code tree to be converted are read and stored into a stack, then popped in the stack's last-in first-out order; the code line or child node currently popped is resolved into the file to be converted; and lastly a natural-semantics comparison table is traversed so that conversion between the codes and natural language is carried out automatically. This spares programmers from manually adding marks and notes to the codes, which greatly decreases their workload, intuitively displays the logical relationships of the codes, and allows the codes to be represented selectively in different forms depending on the situation, facilitating the creation, searching, and maintenance of the codes. | 2015-12-17 |
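The stack-based pass described in application 20150363176 can be illustrated with a short, hypothetical sketch; the lookup-table contents and function names below are invented for illustration, and untranslated lines simply pass through unchanged.

```python
def codes_to_natural_language(code_lines, semantics_table):
    """Push code lines onto a stack, pop them in last-in first-out order,
    and translate each popped line via a natural-semantics lookup table."""
    stack = []
    for line in code_lines:
        stack.append(line)
    translated = []
    while stack:
        line = stack.pop()  # last-in, first-out
        translated.append(semantics_table.get(line, line))
    return translated

table = {"a = 1": "set a to 1"}
print(codes_to_natural_language(["a = 1", "b = 2"], table))
```

Note that LIFO popping emits lines in reverse order — the filing presumably handles ordering and tree traversal in more detail than this toy version.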
20150363177 | MULTI-BRANCH DETERMINATION SYNTAX OPTIMIZATION APPARATUS - A multi-branch determination syntax optimization apparatus includes: a memory that retains multi-branch determination syntax including tokens; a database that retains (1) CPU performance information, a parameter that depends on the CPU incorporated in the apparatus and is set based on the time required for multi-branch determination processing, and (2) a frequently-appearing-token table representing the types and appearance rates of tokens, sorted by appearance frequency, obtained by statically analyzing the source code and performing lexical analysis in advance; and a conversion section that executes determination for the multi-branch determination syntax by referring to the CPU performance information and the frequently-appearing-token table, and creates a branch code converted to use a speculatively executable branch for token types with a high appearance frequency and a jump-table branch for token types with a low appearance frequency. | 2015-12-17 |
20150363178 | DEPLOYING SOFTWARE IN A COMPUTER NETWORK - A central server in a network stores, or has access to, data relating to software stored on computers in subnets of the network. The central server is able to designate a computer in each subnet as a wake-up master for that subnet. The wake-up master maintains an awoken state and is able to issue a wake-up signal to any computer designated by the central server in the subnet. A computer in a subnet requesting software from another computer in the subnet, but unable to find it because the other computer may not be awake, issues a request to the central server. The central server identifies a computer in the subnet likely to have the software and causes the wake-up master of the subnet to wake up the identified computer so the requesting computer can communicate with, and download, the requested software from the identified computer. | 2015-12-17 |
20150363179 | Platform on a Platform System - A platform on a platform system has a first platform that provides deployment and configuration settings for applications developed on the platform; and a second platform developed using the deployment and configuration settings provided by the first platform. The second platform provides second deployment and second configuration settings, and the second platform also provides services that allow second applications to interact with the first platform through the second platform services. | 2015-12-17 |
20150363180 | SOFTWARE DEPLOYMENT IN A DISTRIBUTED VIRTUAL MACHINE ENVIRONMENT - A computer implemented method for deploying, in a distributed virtual environment, a multi-component software product is disclosed. The method may include requesting and receiving product installation parameters, which may include virtual machine IDs corresponding to subsets of the product installation parameters. The method may also include copying software product installation files and parameters onto a first virtual machine, halting the first virtual machine, cloning the first virtual machine to a second virtual machine and setting virtual machine IDs on the virtual machines. The method may also include starting the virtual machines and identifying, based on virtual machine IDs, subsets of the product installation parameters. The method may also include deploying, based on subsets of the product installation parameters, the software product by installing first and second components of the software product on the first and second virtual machines, respectively. | 2015-12-17 |
20150363181 | SOFTWARE DEPLOYMENT IN A DISTRIBUTED VIRTUAL MACHINE ENVIRONMENT - A computer implemented method for deploying, in a distributed virtual environment, a multi-component software product is disclosed. The method may include requesting and receiving product installation parameters, which may include virtual machine IDs corresponding to subsets of the product installation parameters. The method may also include copying software product installation files and parameters onto a first virtual machine, halting the first virtual machine, cloning the first virtual machine to a second virtual machine and setting virtual machine IDs on the virtual machines. The method may also include starting the virtual machines and identifying, based on virtual machine IDs, subsets of the product installation parameters. The method may also include deploying, based on subsets of the product installation parameters, the software product by installing first and second components of the software product on the first and second virtual machines, respectively. | 2015-12-17 |
20150363182 | SMART DEVICE, WEARABLE DEVICE AND METHOD FOR PUSHING & RECEIVING INSTALLATION PACKAGE - A method for pushing an installation package from a wearable device to a smart device is provided. The method includes the steps of: pre-storing an installation package required by a smart device into a wearable device; pairing with a smart device via a transmission link; and automatically pushing the installation package to the smart device via the transmission link to install the installation package on the smart device when the wearable device and the smart device are paired with each other for the first time. | 2015-12-17 |
20150363183 | Automated Configuration and Installation of Virtualized Solutions - An installation system for a multiple device, multiple application solution may include options for installing and configuring one or more of the devices as virtual machines. The installation system may start from bare hardware, install a virtual machine host, and configure one or more devices as virtual machines. The installation system may provide a set of predefined configurations from which an administrator may choose, and some embodiments may provide various algorithms or optimization routines to select an appropriate configuration based on intended uses or other factors. The configurations may be customized to create one or more documents that may be consumed during the installation process to automate many configuration settings. | 2015-12-17 |
20150363184 | METHODS AND DEVICES FOR PROMPTING APPLICATION REMOVAL - A method for a device to prompt application removal is provided. The method includes: receiving an instruction for removing an application; acquiring historical use information of the application based on the instruction for removing the application; and outputting removal prompting information based on the historical use information. | 2015-12-17 |
20150363185 | UPDATING SOFTWARE BASED ON UTILIZED FUNCTIONS - A method for managing updates for a software product includes receiving a request to install a software product update, wherein the software product update modifies a software product on a computing device. The method further includes identifying a first set of one or more functions of the software product that are to be modified by the software product update. The method further includes identifying historical usage information corresponding to the software product, wherein the historical usage information indicates a second set of one or more functions of the software product and a number of times each respective function of the second set has been used by the computing device. The method further includes determining whether the software product update modifies at least one function of the software product whose historical usage information exceeds a minimum usage threshold condition. | 2015-12-17 |
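The threshold check at the heart of application 20150363185 reduces to a small predicate: does the update touch any function whose recorded usage meets a minimum threshold? A speculative Python sketch, with all names and data invented:

```python
def update_touches_used_function(modified_functions, usage_counts, min_usage=1):
    """Return True if the update modifies at least one function whose
    recorded usage count meets the minimum-usage threshold."""
    return any(usage_counts.get(fn, 0) >= min_usage for fn in modified_functions)

print(update_touches_used_function(
    {"export_pdf", "sync"},             # functions the update would modify
    {"export_pdf": 42, "search": 3},    # historical usage counts
    min_usage=10))                      # True
```

A device might prompt the user (or prioritize the update) only when this predicate holds, since the update then affects functionality the user actually exercises.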
20150363186 | MANAGING SOFTWARE SUITE COMPONENT VERSIONS - A component manager may be used to install or upgrade components of a software suite. The component manager may be installed via an application store of an electronic device. The component manager may retrieve binaries for the components of the software suite. The component manager may determine a plurality of applications to install or upgrade based at least in part on the versions of the applications and a version numbering scheme. The version numbering scheme is designed to maintain compatibility between the applications in the software suite. The component manager may install the plurality of applications and/or upgrade a database schema in order to maintain compatibility between the components of the software suite. | 2015-12-17 |
20150363187 | SYSTEMS AND METHODS FOR INSTALLING UPGRADED SOFTWARE ON ELECTRONIC DEVICES - Systems, methods, and computer-readable media for upgrading electronic devices are provided. An exemplary method executed by a hardware processor may comprise providing a management agent on an electronic device for communicating with one or more device drivers associated with the electronic device. The management agent may be installed, for example, using a downloaded upgrade package. The method may further comprise upgrading the one or more device drivers to enable a direct connection between the management agent and the one or more device drivers. This direct connection, in some embodiments, may enable the management agent to access, using the one or more device drivers, persistent storage associated with the electronic device. The method may further comprise providing a new boot loader to the management agent, and overwriting, by the management agent, an existing boot loader in the persistent storage with the received new boot loader, using the one or more device drivers. | 2015-12-17 |
20150363188 | Information Processing Device and Information Processing System - An information processing device includes: a notification processing unit that notifies a user that a terminal device is able to update its software; a confirmation receiving part that receives, from the user, confirmation that an update is to be performed; and a transmitting section that transmits an execution instruction to update the software. | 2015-12-17 |
20150363189 | APPARATUS AND METHOD FOR SOFTWARE INFORMATION MANAGEMENT - A storage unit stores the programs of software. An operation unit monitors processes generated by an installer for the software when the software is installed. The operation unit generates, based on file information about files accessed by the generated processes and the relationships between the processes, software configuration information associating the file information about the files with the software. | 2015-12-17 |
20150363190 | DYNAMIC PACING FOR SERVICE UPGRADES - Disclosed herein are systems, methods, and software to enhance the upgrade process with respect to software service deployments. In at least one implementation, a user interface to an administrative portal for administering an initial deployment of a software service is presented and a notification that an upgrade is available is surfaced therein. In response to a selection of the notification in the user interface, upgrade controls are surfaced in the user interface for controlling a pace of the upgrade with respect to service components of the initial deployment. The upgrade is then applied incrementally to the service components based at least in part on the pace of the upgrade specified via the upgrade controls. | 2015-12-17 |
20150363191 | CONFIGURATION-BASED PROCESSING OF REQUESTS BY CONDITIONAL EXECUTION OF SOFTWARE CODE TO RENDER REGIONS IN A DISPLAY - Server(s) prepare requests to obtain user input indicative of at least one of approval or disapproval by conditionally including therein one or more regions based on rules. The rules are configurable, and each rule is associated with an identifier of a software code. On receipt of a message identifying a request, rules corresponding to regions includable in the request are evaluated to identify regions to be rendered. For a region to be rendered, a software code identified in a rule-action pair is executed to obtain one or more rows, each row including multiple name-value pairs. The server(s) prepare the content of the request by including each name-value pair in a single line among multiple lines for a row, the multiple lines being sequenced relative to one another in a specific, configurable sequence to be displayed by a mobile device. | 2015-12-17 |
20150363192 | COMPUTER-IMPLEMENTED TOOLS AND METHODS FOR EXTRACTING INFORMATION ABOUT THE STRUCTURE OF A LARGE COMPUTER SOFTWARE SYSTEM, EXPLORING ITS STRUCTURE, DISCOVERING PROBLEMS IN ITS DESIGN, AND ENABLING REFACTORING - An interrelated set of tools and methods are disclosed for recording the identity of software components responsible for creating files, recording the identity of software components that access software files, reasoning about the dependency relationships between software components, identifying and reporting undesirable dependencies between them, and reporting other useful information about a large-scale software architecture by instrumenting a software build process or test process. | 2015-12-17 |
20150363193 | AUTOMATIC SOFTWARE CATALOG CONTENT CREATION BASED ON BIO-INSPIRED COMPUTING PREDICTION - A computer system for automatically creating a software catalog content that includes a plurality of software components associated with a computing system is provided. The computer system may include creating a population comprising a plurality of potential software signatures associated with the plurality of software components. The computer system may include ranking the population based on a highest ratio value. The computer system may include selecting a set of parent software signatures based on the ranking. The computer system may include creating a new population of potential software signatures based on the selected set of parent software signatures. The computer system may include performing recombination on the new population of potential software signatures. The computer system may include predicting at least one potential software signature from the new population of potential software signatures based on a comparison between the performed recombination and the created new population of potential software signatures. | 2015-12-17 |
20150363194 | AUTOMATIC SOFTWARE CATALOG CONTENT CREATION BASED ON BIO-INSPIRED COMPUTING PREDICTION - A method for automatically creating a software catalog content that includes a plurality of software components associated with a computing system is provided. The method may include creating a population comprising a plurality of potential software signatures associated with the plurality of software components. The method may include ranking the population comprising the potential software signatures based on a highest ratio value. The method may include selecting a set of parent software signatures based on the ranking. Additionally, the method may include creating a new population of potential software signatures based on the selected set of parent software signatures. The method may include performing recombination on the new population of potential software signatures. Furthermore, the method may include predicting at least one potential software signature from the new population of potential software signatures based on a comparison between the performed recombination and the created new population of potential software signatures. | 2015-12-17 |
20150363195 | SOFTWARE PACKAGE MANAGEMENT - Systems and methods for software package management are provided to avoid dependency conflicts that occur when running software packages. According to one embodiment of this disclosure, there is provided a computer-implemented method for organizing a plurality of software modules. The method can include receiving a request for organizing the plurality of software modules, determining a dependency relationship between the plurality of software modules, creating or updating a file system of nested folders based on the dependency relationship between the plurality of software modules, and storing the plurality of software modules in the file system of nested folders, wherein each folder is associated with one of the software modules based on the dependency relationship between the plurality of software modules. | 2015-12-17 |
20150363196 | Systems And Methods For Software Corpora - Systems, methods, and computer program products are shown for providing a corpus. An example embodiment includes automatically obtaining a plurality of software files, determining a plurality of artifacts for each of the plurality of software files, and storing the plurality of artifacts for each of the plurality of software files in a database. Additional embodiments determine some of the artifacts for each of the software files by converting each of the software files into an intermediate representation and determining at least some of the artifacts from the intermediate representation for each of the software files. Certain example embodiments determine at least some of the artifacts for each of the software files by extracting a string of characters from each of the plurality of software files. The software files can be in a source code or a binary format. | 2015-12-17 |
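Extracting "a string of characters" from a binary file, as in application 20150363196, resembles the classic Unix `strings` pass. A minimal sketch under that assumption (threshold and names are illustrative):

```python
import string

def extract_strings(data: bytes, min_len: int = 4):
    """Pull runs of printable ASCII characters out of a binary blob,
    one simple way to derive textual artifacts from a software file."""
    printable = set(string.printable.encode()) - set(b"\t\n\r\x0b\x0c")
    runs, current = [], []
    for byte in data:
        if byte in printable:
            current.append(byte)
        else:
            if len(current) >= min_len:
                runs.append(bytes(current).decode("ascii"))
            current = []
    if len(current) >= min_len:  # flush a trailing run
        runs.append(bytes(current).decode("ascii"))
    return runs

print(extract_strings(b"\x00\x01hello\x02world!\x00"))  # ['hello', 'world!']
```

Strings recovered this way (symbol names, version banners, error messages) are exactly the kind of artifact a corpus database could index alongside artifacts drawn from an intermediate representation.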
20150363197 | Systems And Methods For Software Analytics - Systems, methods, and computer program products are provided for locating design patterns in software. An example method includes accessing a database having multiple artifacts corresponding to multiple software files, and identifying a design pattern for at least one of the software files by automatically analyzing at least one of the artifacts associated with that software. Additional embodiments also provide for storing an identifier for the design pattern for the software in the database. For certain example embodiments, the artifacts include developmental artifacts, which may be searched for a string that denotes a design pattern, such as a flaw, feature, or repair. Additional example embodiments also include finding in the software file a program fragment that implements the design pattern. | 2015-12-17 |
20150363198 | DYNAMIC CALL TRACKING METHOD BASED ON CPU INTERRUPT INSTRUCTIONS TO IMPROVE DISASSEMBLY QUALITY OF INDIRECT CALLS - Embodiments presented herein describe techniques to track and correct indirect function calls in disassembled object code. Assembly language source code is generated from a binary executable object. The assembly language source code may include indirect function calls. Memory addresses associated with the function calls are identified. A central processing unit (CPU) interrupt instruction is inserted in the disassembled source code at each indirect function call. The disassembled source code is executed. When the interrupt at each indirect function call is triggered, the function name of a function referenced by a register may be determined. | 2015-12-17 |
20150363199 | PERFORMING A CLEAR OPERATION ABSENT HOST INTERVENTION - Optimizations are provided for frame management operations, including a clear operation and/or a set storage key operation, requested by pageable guests. The operations are performed, absent host intervention, on frames not resident in host memory. The operations may be specified in an instruction issued by the pageable guests. | 2015-12-17 |
20150363200 | QoS Based Dynamic Execution Engine Selection - In one embodiment, a processor includes plural processing cores, and plural instruction stores, each instruction store storing at least one instruction, each instruction having a corresponding group number, each instruction store having a unique identifier. The processor also includes a group execution matrix having a plurality of group execution masks and a store execution matrix comprising a plurality of store execution masks. The processor further includes a core selection unit that, for each instruction within each instruction store, selects a store execution mask from the store execution matrix. The core selection unit for each instruction within each instruction store selects at least one group execution mask from the group execution matrix. The core selection unit performs logic operations to create a core request mask. The processor includes an arbitration unit that determines instruction priority among each instruction, assigns an instruction for each available core, and signals the instruction store. | 2015-12-17 |
20150363201 | PREDICTING INDIRECT BRANCHES USING PROBLEM BRANCH FILTERING AND PATTERN CACHE - Predicting indirect branch instructions may comprise predicting a target address for a fetched branch instruction. Accuracy of the target address may be tracked. The fetched branch instruction may be flagged as a problematic branch instruction based on the tracking. A pattern cache may be trained for predicting more accurate target address for the fetched branch instruction, and the next time the fetched branch instruction is again fetched, a target address may be predicted from the pattern cache. | 2015-12-17 |
20150363202 | BRANCH PREDICTION BASED ON CORRELATING EVENTS - Branch prediction using a correlating event, such as an unconditional branch that calls a routine including the branch, instead of the branch itself, to predict the behavior of the branch. The circumstances in which the branch is employed, and not the actual branch itself, are used to predict how strongly taken or not taken the branch is to behave. A correlating value associated with the branch (e.g., an address of the instruction calling a routine that includes the branch), an address of the branch, and a value that represents the number of selected branch instructions between the anchor point and the branch are used to select information to be used to predict the direction of the branch. | 2015-12-17 |
20150363203 | Apparatus and Method for Bias-Free Branch Prediction - Aspects of the present invention provide an apparatus and method for filtering biased conditional branches in a branch predictor in favor of non-biased conditional branches. Biased conditional branches, which are consistently skewed toward one direction or outcome, are filtered such that an increased number of non-biased conditional branches which resolve in both directions may be considered. As a result, more useful branches may be captured over larger distances, thereby providing correlations deeper in a global history to provide greater prediction accuracy. In addition, by tracking only the latest occurrences of non-biased conditional branches using a recency stack structure, even more distant branch correlations may be made. | 2015-12-17 |
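The bias filter in application 20150363203 can be approximated in a few lines: a branch stays "biased" while it has only ever resolved one way, and only non-biased branches would be handed to the costlier correlating predictor. A toy software model of that idea — not the patented hardware structure:

```python
class BiasFilter:
    """Track per-branch outcomes; a branch is 'biased' while every observed
    outcome has gone the same direction."""
    def __init__(self):
        self.outcomes = {}  # branch PC -> set of outcomes seen (True/False)

    def observe(self, pc, taken):
        self.outcomes.setdefault(pc, set()).add(taken)

    def is_biased(self, pc):
        # Zero or one distinct outcome so far -> consistently skewed.
        return len(self.outcomes.get(pc, set())) <= 1

bf = BiasFilter()
bf.observe(0x40, True)
bf.observe(0x40, True)
print(bf.is_biased(0x40))   # True: only ever taken
bf.observe(0x40, False)
print(bf.is_biased(0x40))   # False: now resolves both ways
```

Filtering the biased branches out frees predictor capacity for the branches that actually vary, which is what lets correlations reach deeper into the global history.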
20150363204 | BRANCH PREDICTION BASED ON CORRELATING EVENTS - Branch prediction using a correlating event, such as an unconditional branch that calls a routine including the branch, instead of the branch itself, to predict the behavior of the branch. The circumstances in which the branch is employed, and not the actual branch itself, are used to predict how strongly taken or not taken the branch is to behave. A correlating value associated with the branch (e.g., an address of the instruction calling a routine that includes the branch), an address of the branch, and a value that represents the number of selected branch instructions between the anchor point and the branch are used to select information to be used to predict the direction of the branch. | 2015-12-17 |
20150363205 | IMPLEMENTING OUT OF ORDER PROCESSOR INSTRUCTION ISSUE QUEUE - A method and apparatus are provided for implementing an enhanced out of order processor instruction issue queue in a computer system. Instructions are selectively accepted into an instruction issue queue and ages are assigned to the accepted queue entry instructions using a queue counter. The queue entry instructions are issued based upon resources being ready and ages of the instructions. Ages of the queue entry instructions and the queue counter are selectively decremented, responsive to issuing instructions. | 2015-12-17 |
20150363206 | IMPLEMENTING OUT OF ORDER PROCESSOR INSTRUCTION ISSUE QUEUE - A method and apparatus are provided for implementing an enhanced out of order processor instruction issue queue in a computer system. Instructions are selectively accepted into an instruction issue queue and ages are assigned to the accepted queue entry instructions using a queue counter. The queue entry instructions are issued based upon resources being ready and ages of the instructions. Ages of the queue entry instructions and the queue counter are selectively decremented, responsive to issuing instructions. | 2015-12-17 |
20150363207 | ALLOWING A COMPUTING DEVICE TO OPERATE IN A DEMO MODE AND A CONSUMER MODE - Systems and methods for a demo mode for a computing device are disclosed. In some implementations, a computing device receives a first input for entering a demo mode. The computing device prompts, in response to the first input, for a user input indicating whether the user wishes to place the computing device in the demo mode. The computing device receives the user input indicating that the user wishes to place the computing device in the demo mode. The computing device enters the demo mode responsive to the user input indicating that the user wishes to place the computing device in the demo mode. Entering the demo mode includes adjusting battery settings of the computing device. | 2015-12-17 |
20150363208 | UPDATING A COMMIT LIST TO INDICATE DATA TO BE WRITTEN TO A FIRMWARE INTERFACE VARIABLE REPOSITORY - Examples disclosed herein relate to updating a commit list to indicate data to be written to a firmware interface (FI) variable repository. Examples include storing target data in a variable repository cache of system management memory of a computing device during a given SMM event, updating a commit list, during the given SMM event, to indicate that the target data is to be written to the FI variable repository, and ending the given SMM event without at least some portion of the target data being written to the FI variable repository during the given SMM event. | 2015-12-17 |
20150363209 | INFORMATION PROCESSING APPARATUS AND COMPUTER-READABLE RECORDING MEDIUM - An information processing apparatus, while in an electrical power saving mode that limits its functions, receives a control packet an arbitrary number of times within a predetermined time period, the control packet including control information indicating a device or an application that is a startup target. When the control packet is received, the information processing apparatus starts up a control program that starts up each device or application. Thereafter, when the information processing apparatus receives the control packet after the control program has started, it starts up, by using the control program, the device or the application that is the startup target specified by the control information included in the control packet. | 2015-12-17 |
20150363210 | VEHICLE DOWNLOAD BY REMOTE MOBILE DEVICE - A nomadic device may be configured to receive an indication of a software update to download over a local data connection to a vehicle computing system, download the software update from an update server over an approved wide-area data connection unavailable to the nomadic device when located within the vehicle, and provide the software update to the vehicle computing system for installation to the vehicle. A vehicle computing system may be configured to receive, from an update server, an indication of a software update, provide the indication, over a local connection, to a nomadic device configured to download software updates when connected to a wide-area data connection unavailable to the nomadic device when located within the vehicle, and receive the software update, downloaded when connected to the wide-area data connection, from the nomadic device over the local connection. | 2015-12-17 |
20150363211 | SYSTEMS AND METHODS FOR MANAGING DISTRIBUTED SALES, SERVICE AND REPAIR OPERATIONS - The systems and methods of the present disclosure are generally related to managing distributed sales, service and repair operations. In particular, the systems and methods of the present disclosure relate to managing a distributed network of sales, service and/or repair operations that include automated features. | 2015-12-17 |
20150363212 | MINIMIZING PERFORMANCE LOSS ON WORKLOADS THAT EXHIBIT FREQUENT CORE WAKE-UP ACTIVITY - A processor may include a cause-agnostic frequency dither filter (FD filter), which may reduce frequency transitions while maintaining performance levels. The FD filter may minimize the performance loss that may otherwise accrue from these frequency transitions, while trying to maximize the peak frequency of the processor. The FD filter may determine a minimum and maximum limit, which may be used by a power management unit (PMU) to restrict the number of frequency transitions to be within a specified threshold. The FD filter may determine the maximum and minimum limits based on transition data stored in internal tables captured during one or more time windows (or observation windows). Based on an average system behavior, the PMU may apply either the minimum or the maximum limit over the subsequent time window. | 2015-12-17 |
20150363213 | Method For Model-Based Generation Of Startup Configurations Of Embedded Systems - A method for model-based generation of startup configurations of embedded systems includes importing into a modeling module a first startup configuration in textual representation by a system synchronization module, generating a graphical representation of the startup configuration by the modeling module, modifying the graphical representation of the first startup configuration, generating from the modified graphical representation a second, modified startup configuration in textual representation by the modeling module, and exporting the second, modified startup configuration into the system synchronization module. This can simplify generation of startup configurations of an embedded system. | 2015-12-17 |
20150363214 | SYSTEMS AND METHODS FOR CLUSTERING TRACE MESSAGES FOR EFFICIENT OPAQUE RESPONSE GENERATION - In a method of service emulation, ones of a plurality of messages communicated between a system under test and a target system for emulation are clustered into message clusters. A request is received from the system under test, and one of the message clusters is identified as corresponding to the request based on a distance measure. A response to the request is generated using the one of the message clusters that was identified. Related computer systems and computer program products are also discussed. | 2015-12-17 |
20150363215 | SYSTEMS AND METHODS FOR AUTOMATICALLY GENERATING MESSAGE PROTOTYPES FOR ACCURATE AND EFFICIENT OPAQUE SERVICE EMULATION - In a method of service emulation, a plurality of messages communicated between a system under test and a target system for emulation are recorded in a computer-readable memory. Ones of the messages are clustered to define a plurality of message clusters, and respective cluster prototypes are generated for the message clusters. The respective cluster prototypes include a commonality among the ones of the messages of the corresponding message clusters. One of the message clusters is identified as corresponding to a request from the system under test based on a comparison of the request with the respective cluster prototypes, and a response to the request for transmission to the system under test is generated based on the one of the message clusters that was identified. Related computer systems and computer program products are also discussed. | 2015-12-17 |
20150363216 | METHOD AND SYSTEM FOR MANAGING HOSTS THAT RUN VIRTUAL MACHINES WITHIN A CLUSTER - Embodiments of a non-transitory computer-readable storage medium and a computer system are disclosed. In an embodiment, a non-transitory computer-readable storage medium containing program instructions for managing host computers that run virtual machines by organizing them into host-groups within a cluster is disclosed. When executed, the instructions cause one or more processors to perform steps including determining if a virtual machine entity needs additional resources and, if the virtual machine entity needs additional resources, mapping a host computer to a host-group with which the virtual machine entity is associated. | 2015-12-17 |
20150363217 | TECHNIQUES FOR UTILIZING A RESOURCE FOLD FACTOR IN PLACEMENT OF PHYSICAL RESOURCES FOR A VIRTUAL MACHINE - A technique for assigning physical resources of a data processing system to a virtual machine (VM) includes reading, by a hypervisor executing on the data processing system, a fold factor attribute for the VM. The fold factor attribute defines an anticipated usage of physical resources of the data processing system by the VM. The technique also includes mapping based on a value of the fold factor attribute, by the hypervisor, allocated virtual processors of the VM to the physical resources to maximize processor core access to local memory for ones of the allocated virtual processors that are anticipated to be utilized. | 2015-12-17 |
20150363218 | TECHNIQUES FOR UTILIZING A RESOURCE FOLD FACTOR IN PLACEMENT OF PHYSICAL RESOURCES FOR A VIRTUAL MACHINE - A technique for assigning physical resources of a data processing system to a virtual machine (VM) includes reading, by a hypervisor executing on the data processing system, a fold factor attribute for the VM. The fold factor attribute defines an anticipated usage of physical resources of the data processing system by the VM. The technique also includes mapping based on a value of the fold factor attribute, by the hypervisor, allocated virtual processors of the VM to the physical resources to maximize processor core access to local memory for ones of the allocated virtual processors that are anticipated to be utilized. | 2015-12-17 |
20150363219 | OPTIMIZATION TO CREATE A HIGHLY SCALABLE VIRTUAL NETWORK SERVICE/APPLICATION USING COMMODITY HARDWARE - A method of deployment of virtual machines (VMs) includes receiving traffic having characteristics from clients and, based on the traffic, dynamically bringing up son VMs and, when the traffic goes down, removing the son VMs. A cache is shared between the son VMs, with the VMs directly accessing the cache when receiving traffic from existing clients and performing encryption/decryption for new clients. | 2015-12-17 |
20150363220 | VIRTUAL COMPUTER SYSTEM AND DATA TRANSFER CONTROL METHOD FOR VIRTUAL COMPUTER SYSTEM - A computer has an adapter coupled to storage devices. The adapter transmits and receives data, and measures, for each virtual computer, the transfer amount of data that has been transmitted and received and the number of I/O accesses. A virtualization part, on the basis of the data transfer amount and the number of I/O accesses acquired from the adapter, computes an upper limit for the data transfer amount and an upper limit for the number of I/O accesses for each virtual computer, and reports them to the virtual computers. The virtual computers retain, in a queue, the data to be transferred to and received from the storage devices, and control the data output from the queue so as not to exceed the upper limits of the data transfer amount and the number of I/O accesses. | 2015-12-17 |
20150363221 | METHOD OF MANAGING TENANT NETWORK CONFIGURATION IN ENVIRONMENT WHERE VIRTUAL SERVER AND NON-VIRTUAL SERVER COEXIST - A non-virtual server and a virtual server make up the same tenant in an environment in which virtual servers, created by dividing a single physical server into a plurality of computer environments, coexist with a non-virtual server that directly uses a single physical server as a computer environment without server virtualization. A management computer is provided with virtual switch management information that shows a correlation between the virtual servers and an internal network to which a relevant virtual server connects, and physical switch management information that shows a correlation between the non-virtual server and an internal network to which the non-virtual server connects. The management computer creates a virtual server that belongs to the same tenant as a physical instance, identifies a first internal network to which the non-virtual server connects, and configures the tenant so that the relevant virtual server is connected to the first internal network. | 2015-12-17 |
20150363222 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR DATA PROCESSING AND SYSTEM DEPLOYMENT IN A VIRTUAL ENVIRONMENT - In one embodiment, a method for deploying a data processing system in a virtual environment includes deploying a data processing system call interface in a virtual machine in a virtualization environment, the system call interface being configured to trigger a locally called data processing instruction. The method also includes deploying a data processing driver in a virtual machine management platform in the virtualization environment, the data processing driver being configured to read the data processing instruction triggered by the system call interface. Moreover, the method includes deploying a data processing instruction optimizer in the virtualization environment, the optimizer being configured to optimize the data processing instruction read by the deployed data processing driver. | 2015-12-17 |
20150363223 | PREDICTING THE LENGTH OF A TRANSACTION - In a multi-processor transaction execution environment a transaction is executed a plurality of times. Based on the executions, a duration is predicted for executing the transaction. Based on the predicted duration, a threshold is determined. Pending aborts of the transaction due to memory conflicts are suppressed based on the transaction exceeding the determined threshold. | 2015-12-17 |
20150363224 | MOBILE AND REMOTE RUNTIME INTEGRATION - An application program may be analyzed to identify candidate classes or methods that may be executed using a remote computing node. Candidate classes or methods may be translocated to the remote computing node based on performance characteristics of the device on which the application program is running, the environment in which the device operates, and on the performance and availability of the remote computing node. An application program publisher may be assigned instances of virtual machines that may be dedicated to hosting translocated classes and methods. | 2015-12-17 |
20150363225 | CHECKPOINTING FOR A HYBRID COMPUTING NODE - According to an aspect, a method for checkpointing in a hybrid computing node includes executing a task in a processing accelerator of the hybrid computing node. A checkpoint is created in a local memory of the processing accelerator. The checkpoint includes state data to restart execution of the task in the processing accelerator upon a restart operation. Execution of the task is resumed in the processing accelerator after creating the checkpoint. The state data of the checkpoint are transferred from the processing accelerator to a main processor of the hybrid computing node while the processing accelerator is executing the task. | 2015-12-17 |
20150363226 | RUN TIME ESTIMATION SYSTEM OPTIMIZATION - Methods, systems, and computer program products for training an optimized time estimation system for completing a data processing job to be run on a data processing device that operates within a distributed processing system having a range of platforms. Embodiments include creating a prediction algorithm based upon retrieved operational parameters associated with a data processing job. Embodiments also include retrieving further operational parameters associated with the data processing job. Embodiments include updating the prediction algorithm based on the further operational parameters, in which the prediction algorithm is updated by modifying parameter values associated with variable parameters of the prediction algorithm. | 2015-12-17 |
20150363227 | DATA PROCESSING UNIT AND METHOD FOR OPERATING A DATA PROCESSING UNIT - A data processing unit provides a core instruction set comprising a specific core instruction adapted to receive data specifying a hardware component to be called, call the hardware component to execute a job, perform a first context switch that suspends the current task (the task that called the hardware component using the specific core instruction), and perform a second context switch that resumes the task when the hardware component has finished the job. A method for operating such a data processing unit is also disclosed. | 2015-12-17 |
20150363228 | INTEGRATING SOFTWARE SOLUTIONS TO EXECUTE BUSINESS APPLICATIONS - Various embodiments of systems and methods to integrate software solutions to execute business applications are described herein. A request is received at a first software solution to execute a business application. In one aspect, the request is forwarded to a second software solution when a resource required to execute the business application is associated with the second software solution. A response is received from the second software solution corresponding to the execution of the business application. In another aspect, the business application is executed at the first software solution when the resource required to execute the business application is associated with the first software solution. The response corresponding to the execution of the business application is rendered on a computer generated UI associated with the first software solution. | 2015-12-17 |
20150363229 | RESOLVING TASK DEPENDENCIES IN TASK QUEUES FOR IMPROVED RESOURCE MANAGEMENT - A database system comprises a database server and a database storage system comprising a storage processing node and a queue. The database server is operable to define a priority for each of a plurality of database tasks. The storage processing node is operable to receive database tasks from the database server and place them into the queue based upon their priority. The storage processing node is further operable to determine whether there are dependencies between a first database task and a second database task with a previously defined higher priority so that the storage processing node is operable to place the first database task into a same queue as the second database task. The second database task is dependent upon the first database task when an input of the second database task is waiting for an output of the first database task. | 2015-12-17 |
20150363230 | PARALLELISM EXTRACTION METHOD AND METHOD FOR MAKING PROGRAM - A method of extracting parallelism of an original program by a computer includes: a process of determining whether or not a plurality of macro tasks to be executed after a condition of one conditional branch included in the original program is satisfied are executable in parallel; and a process of copying the conditional branch regarding which the macro tasks are determined to be executable in parallel, to generate a plurality of conditional branches. | 2015-12-17 |
20150363231 | DATA VISUALIZATION AND ACCUMULATION DEVICE FOR CONTROLLING STEPS IN CONTINUOUS PROCESSING SYSTEM - A visualization device visualizes the process flow of an entire multiple-item continuous processing system on a time-series basis and supports investigating the cause of a system loss. The device manages an operating state in a first processing step, to which a step loss propagates back, in a multiple-item continuous process in which batch processing is performed item by item. Icons indicating the various kinds of items show the progress of the operation in the processing step sequentially, item by item, on a time-series basis in a matrix of cells, wherein the vertical length of the matrix is divided into cells of operating hours and the rows of the respective operating hours are partitioned by each one-batch processing time of the first processing step. The data is accumulated in the matrix and is utilized while being visualized. | 2015-12-17 |
20150363232 | METHODS AND SYSTEMS FOR CALCULATING STATISTICAL QUANTITIES IN A COMPUTING ENVIRONMENT - This disclosure is directed to methods and systems for calculating statistical quantities of computational resources used by distributed data sources in a computing environment. In one aspect, a master node receives a query regarding use of computational resources used by distributed data sources of a computing environment. The data sources generate metric data that represents use of the computational resources and distribute the metric data to two or more worker nodes. The master node directs each worker node to generate worker-node data that represents the metric data received by each of the worker nodes and each worker node sends worker-node data to the master node. The master node receives the worker-node data and calculates a master-data structure based on the worker-node data, which may be used to estimate percentiles of the metric data in response to the query. | 2015-12-17 |
20150363233 | LEDGER-BASED RESOURCE TRACKING - Disclosed are systems, methods, and non-transitory computer-readable storage media for tracking and managing resource usage through a ledger feature that can trigger complex real-time reactions. The resource tracking can be managed through a ledger module and a ledger data structure. The ledger data structure can be updated each time a task requests a resource. Additionally, as part of the update, the ledger module can verify whether a resource has been over consumed. In response to the detection of an over consumption, the ledger module can set a flag. At some later point, when the thread is at a stable, well-understood point, the ledger module can check if the flag has been set. If the flag has been set, the ledger module can call the appropriate callback function, which can react to the over consumption in a resource-specific manner. | 2015-12-17 |
20150363234 | RESOURCE ALLOCATION FOR MIGRATION WITHIN A MULTI-TIERED SYSTEM - A method and system for intelligent tiering is provided. The method includes receiving a request for enabling a tiering process with respect to data. The computer processor retrieves a migration list indicating migration engines associated with the data. Additionally, an entity list of migration entities is retrieved and each migration entity is compared to associated policy conditions. In response, it is determined if matches exist between the migration entities and the associated policy conditions and a consolidated entity list is generated. | 2015-12-17 |
20150363235 | AUTOMATING APPLICATION PROVISIONING FOR HETEROGENEOUS DATACENTER ENVIRONMENTS - Disclosed is a method of managing computer resources in a dynamic computing environment. The method includes identifying available resources from an available pool based on an augmented model, the available pool including unallocated resources, allocating the identified available resources in accordance with the augmented model, identifying reserve resources from a reserve pool based on the augmented model, the reserve pool including resources not allocated and not configured, and upon determining the available pool includes a number of resources below a threshold, replenishing the available pool with the identified reserve resources. | 2015-12-17 |
20150363236 | DATA REUSE TRACKING AND MEMORY ALLOCATION MANAGEMENT - Exemplary methods, apparatuses, and systems receive a first request for a storage address at a first access time. Entries are added to first and second data structures. Each entry includes the storage address and the first access time. The first data structure is sorted in an order of storage addresses. The second data structure is sorted in an order of access times. A second request for the storage address is received at a second access time. The first access time is determined by looking up the entry in the first data structure using the storage address received in the second request. The entry in the second data structure is looked up using the determined first access time. A number of entries in the second data structure that were subsequent to the second entry is determined. A hit count for a reuse distance corresponding to the determined number of entries is incremented. | 2015-12-17 |
20150363237 | MANAGING RESOURCE CONSUMPTION IN A COMPUTING SYSTEM - Embodiments relate to managing resource consumption in a computing system. An aspect includes providing a resource policy by defining a plurality of threshold values relating to the resource consumption, wherein the resources are consumed by a plurality of user-defined functions performing tasks for a database management system, wherein the user-defined functions are executed by a plurality of processes external to the database management system. Another aspect includes performing an action, as defined by the resource policy, on at least one of the user-defined functions. | 2015-12-17 |
20150363238 | RESOURCE MANAGEMENT IN A VIRTUALIZED COMPUTING ENVIRONMENT - According to examples of the present disclosure, a method is provided to perform resource management in a virtualized computing environment. The method may comprise monitoring multiple first virtual machines to update a status of each first virtual machine based on a resource consumption level of resources allocated to the first virtual machine. The method may further comprise: in response to receiving a request to allocate resources to a second virtual machine, selecting at least one of the multiple first virtual machines with an inactive status to satisfy the request. Resources allocated to the selected at least one of the multiple first virtual machines may then be released and reallocated to the second virtual machine. | 2015-12-17 |
20150363239 | DYNAMIC TASK SCHEDULING METHOD FOR DISPATCHING SUB-TASKS TO COMPUTING DEVICES OF HETEROGENEOUS COMPUTING SYSTEM AND RELATED COMPUTER READABLE MEDIUM - One dynamic task scheduling method includes: receiving a task, wherein the task comprises a kernel and a plurality of data items to be processed by the kernel; dynamically partitioning the task into a plurality of sub-tasks, each having the kernel and a variable-sized portion of the data items; and dispatching the sub-tasks to a plurality of computing devices of a heterogeneous computing system. Another dynamic task scheduling method includes: receiving a task, wherein the task comprises a kernel and a plurality of data items to be processed by the kernel; partitioning the task into a plurality of sub-tasks, each having the kernel and a same fixed-sized portion of the data items; and dynamically dispatching the sub-tasks to a plurality of computing devices of a heterogeneous computing system. | 2015-12-17 |
20150363240 | SYSTEM FOR CONTROLLING RESOURCES, CONTROL PATTERN GENERATION APPARATUS, CONTROL APPARATUS, METHOD FOR CONTROLLING RESOURCES AND PROGRAM - A system for controlling resources includes a control pattern generation unit for generating a plurality of control patterns from a virtual system model, produced by modeling the behaviors of a network element and a server of a virtual system operating on a virtual datacenter, and from a resource allocation change policy stipulating a policy or policies of change of allocation of resources to the virtual systems. The control patterns provide candidates for control commands to network resources and server resources of the virtual systems. The system also includes a control unit for carrying out prediction of a service level using the control patterns, selecting, with the use of the result of prediction, a control pattern that satisfies the service level of the virtual systems and also satisfies a preset standard or reference for selection from among the control patterns, and putting the selected control pattern to use. | 2015-12-17 |
20150363241 | METHOD AND APPARATUS TO MIGRATE STACKS FOR THREAD EXECUTION - A method and an apparatus that generate a request from a first thread of a process using a first stack for a second thread of the process to execute a code are described. Based on the request, the second thread executes the code using the first stack. Subsequent to the execution of the code, the first thread receives a return of the request using the first stack. | 2015-12-17 |
20150363242 | METHODS AND APPARATUS TO MANAGE CONCURRENT PREDICATE EXPRESSIONS - Methods, apparatus, systems and articles of manufacture are disclosed to manage concurrent predicate expressions. An example method discloses inserting a first condition hook into a first thread, the first condition hook associated with a first condition, inserting a second condition hook into a second thread, the second condition hook associated with a second condition, preventing the second thread from executing until the first condition is satisfied, and identifying a concurrency violation when the second condition is satisfied. | 2015-12-17 |
20150363243 | ADAPTIVE PROCESS FOR DATA SHARING WITH SELECTION OF LOCK ELISION AND LOCKING - In a Hardware Lock Elision (HLE) environment, predictively determining whether an HLE transaction should actually acquire a lock and execute non-transactionally is provided. Included is, based on encountering an HLE lock-acquire instruction, determining, based on an HLE predictor, whether to elide the lock and proceed as an HLE transaction or to acquire the lock and proceed as a non-transaction; based on the HLE predictor predicting to elide, setting the address of the lock as a read-set of the transaction, and suppressing any write by the lock-acquire instruction to the lock and proceeding in HLE transactional execution mode until an xrelease instruction is encountered, wherein the xrelease instruction releases the lock, or the HLE transaction encounters a transactional conflict; and based on the HLE predictor predicting not-to-elide, treating the HLE lock-acquire instruction as a non-HLE lock-acquire instruction, and proceeding in non-transactional mode. | 2015-12-17 |
20150363244 | METHODS AND SYSTEMS FOR PROVIDING APPLICATION PROGRAMMING INTERFACES AND APPLICATION PROGRAMMING INTERFACE EXTENSIONS TO THIRD PARTY APPLICATIONS FOR OPTIMIZING AND MINIMIZING APPLICATION TRAFFIC - Methods and systems for providing APIs and API extensions to third party applications for optimizing and minimizing application traffic are provided. According to one aspect, a method for optimizing and minimizing application traffic in a wireless network includes defining an application programming interface (API) for controlling application traffic between an application client residing on a mobile device that operates within a wireless network and an application server not residing on the mobile device and using the API to optimize application traffic in the wireless network. | 2015-12-17 |
20150363245 | APPARATUS, METHOD AND COMPUTER PROGRAM FOR PROCESSING OUT-OF-ORDER EVENTS - Embodiments relate to a concept for ordering events of an event stream, comprising out-of-order events, for an event detector, wherein the events have associated thereto individual event occurrence times. | 2015-12-17 |
20150363246 | Application Service Aggregation and Management - A method and system for aggregating services is provided. The method includes receiving and processing a service request. The service request is submitted to a service catalog and dispatched to a data integration and API module. The service request is transmitted to a management module and processed with respect to a plurality of service providers. Inter process communications associated with the service request are managed. Additionally, an account associated with the service request and the plurality of service providers is managed. | 2015-12-17 |
20150363247 | Accurate and Fast In-Service Estimation of Input Bit Error Ratio of Low Density Parity Check Decoders - A device receives signals over a communication medium and uses a low density parity check decoder to decode data in the signals. A number of unsatisfied parity checks is counted prior to a first decoding iteration of the low density parity check decoder on a basis of log likelihood ratios computed from the signals. An operational characteristic of the low density parity check decoder is computed based on an accumulated number of unsatisfied parity checks. | 2015-12-17 |
20150363248 | THREE DIMENSIONAL (3D) MEMORY INCLUDING ERROR DETECTION CIRCUITRY - A method performed at a non-volatile memory of a data storage device includes determining, at error detection circuitry included in the non-volatile memory, an indication of a number of errors associated with a portion of the non-volatile memory. The method also includes providing the indication to a controller of the data storage device, where the controller includes error correction circuitry. The non-volatile memory has a 3D configuration that is monolithically formed in one or more physical levels of arrays of memory cells having an active area disposed above a silicon substrate. The non-volatile memory includes circuitry associated with operation of the memory cells. | 2015-12-17 |
20150363249 | EVALUATION METHOD AND EVALUATION APPARATUS - A calculation unit calculates, for each of a plurality of systems in which a countermeasure is taken, a maturity index of the system, indicating the degree of operational stability of the system, based on a value related to a non-functional requirement of the system. An evaluation unit evaluates usefulness of the countermeasure for a particular system based on similarity of configuration between the particular system and the system, timing that the countermeasure is taken, effects of the countermeasure, and the calculated maturity index. | 2015-12-17 |
20150363250 | SYSTEM ANALYSIS DEVICE AND SYSTEM ANALYSIS METHOD - In state detection of a system using a correlation destruction pattern, the versatility of the correlation destruction pattern is improved. A system analysis device for such state detection is disclosed. | 2015-12-17 |
20150363251 | METHOD FOR GENERATING A MACHINE HEARTBEAT - A method and system generate a heartbeat of a process that includes at least one machine configured to perform a process cycle consisting of timed events performed in a process sequence. The method includes determining the duration of each timed event during performance of the process cycle, ordering the durations of the timed events in the process sequence, and generating a heartbeat defined by the ordered durations of a process cycle. One or more process parameters can be sensed and displayed with the heartbeat in real time. The variance of a current heartbeat from a baseline heartbeat and/or a comparison of a process parameter to a parameter limit can be analyzed to monitor and/or control the process or machine. The heartbeat and/or the process parameter corresponding to the heartbeat can be displayed on a user interface, which can include a message corresponding to the heartbeat and/or the process parameter. | 2015-12-17 |
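The heartbeat construction and the variance check can be sketched minimally in Python. The event names and durations below are invented examples, and the two function names are assumptions rather than the patented interface:

```python
def heartbeat(timed_events):
    """Build a machine 'heartbeat': the event durations ordered as they
    occur in the process sequence. timed_events is a list of
    (event_name, duration_seconds) pairs from one process cycle."""
    return [duration for _, duration in timed_events]

def heartbeat_variance(current, baseline):
    """Per-event deviation of a current heartbeat from a baseline
    heartbeat, used to flag drift in individual steps of the cycle."""
    return [c - b for c, b in zip(current, baseline)]
```

A cycle of clamp/weld/release steps taking 1.0 s, 2.5 s, and 0.5 s yields the heartbeat `[1.0, 2.5, 0.5]`; a later cycle taking 1.5 s, 3.0 s, and 0.5 s deviates by half a second on the first two steps.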
20150363252 | DETERMINING AND CORRECTING SOFTWARE SERVER ERROR CONDITIONS - A system and method of diagnosing and correcting errors in a server computer. A server computer is coupled by a communication path to a client computer. A storage device stores a diagnostic error detecting and correcting program, and the server computer is programmed to implement the diagnostic error detecting and correcting program. The server computer detects several selected operating parameters during operation of the server process and determines if at least a first of the selected operating parameters is outside a pre-determined specification for the selected operating parameters. In response to the selected operating parameters being outside the pre-determined specification, the server computer notifies the client computer of an error with the server process. The server computer can also detect communication errors and attempt to restore communications by modifying communication parameter(s). | 2015-12-17 |
20150363253 | NON-VOLATILE FAULT INDICATION IN A STORAGE ENCLOSURE - Apparatus and method for providing a non-volatile fault indication in a multi-device storage enclosure. In some embodiments, a storage enclosure includes a plurality of storage devices housed within a storage enclosure housing. A non-volatile display element is arranged to provide a persistent display of fault information relating to a component of the storage enclosure after power is removed from the display element. | 2015-12-17 |
20150363254 | STORAGE SYSTEM AND STORAGE SYSTEM FAILURE MANAGEMENT METHOD - Failures in a storage system are managed at low cost and with high reliability. A storage system is coupled to a file command issuing apparatus and a block command issuing apparatus, and processes commands from both. The storage system is provided with: a first control portion which is provided extending across a plurality of clusters and which is configured to control block access requests to a disk device; a plurality of second control portions which are configured to process file access requests and provided respectively in the clusters, and operate on virtual machines managed by a virtualization control portion; failure detecting portions which are configured to detect failures within each of the clusters; and a failure information management portion which is provided in the first control portion and which is configured to consolidate and manage failure information relating to failures detected by the failure detecting portions. | 2015-12-17 |
20150363255 | BANK-LEVEL FAULT MANAGEMENT IN A MEMORY SYSTEM - According to one aspect, bank-level fault management in a memory system is provided. The memory system includes a plurality of ranks, each rank including a plurality of memory devices each having a plurality of banks. A first error is detected in a first bank number of a first memory device of a rank. The first bank number of the first memory device is marked with a bank-level chip mark. The bank-level chip mark isolates declaration of an error condition to the first bank number. A bank-level fault management action is performed based on the bank-level chip mark to accommodate the error condition. | 2015-12-17 |
20150363256 | ADMISSION CONTROL BASED ON THE END-TO-END AVAILABILITY - Providing admission control for a request may comprise creating a process flow associated with the request, the process flow identifying a plurality of computer-implemented components and a flow of transactions occurring between the computer-implemented components; executing the flow of transactions on the plurality of computer-implemented components; logging the flow of transactions; monitoring the flow of transactions to detect a problem in the flow or one or more of the components, or combination thereof; responsive to not detecting a problem in the flow or one or more of the components, or combination thereof, allowing the request to proceed; and responsive to detecting a problem in the flow or one or more of the components, or combination thereof, not allowing the request to proceed. | 2015-12-17 |
20150363257 | RESISTIVE MEMORY DEVICE AND OPERATING METHOD - Provided are a resistive memory device and an operating method for the resistive memory device. The operating method includes detecting a write cycle, determining whether or not to perform a recovery operation by comparing the detected write cycle with a first reference value, and upon determining to perform the recovery operation, performing the recovery operation on target memory cells of the memory cell array. | 2015-12-17 |
20150363258 | DEVICE AND SYSTEM INCLUDING ADAPTIVE REPAIR CIRCUIT - A device, system, and/or method includes an internal circuit configured to perform at least one function, an input-output terminal set and a repair circuit. The input-output terminal set includes a plurality of normal input-output terminals connected to an external device via a plurality of normal signal paths and at least one repair input-output terminal selectively connected to the external device via at least one repair signal path. The repair circuit repairs at least one failed signal path included in the normal signal paths based on a mode signal and fail information signal, where the mode signal represents whether to use the repair signal path and the fail information signal represents fail information on the normal signal paths. Using the repair circuit, various systems adopting different repair schemes may be repaired and cost of designing and manufacturing the various systems may be reduced. | 2015-12-17 |
20150363259 | MANAGING A STORAGE DEVICE USING A HYBRID CONTROLLER - Methods, apparatuses, and computer program products for managing a storage device using a hybrid controller are provided where the storage device comprises an internal peripheral component interconnect express (PCIe) interface to control solid state memory within the storage device. In particular embodiments, the storage device includes a first external interface configured to establish an external PCIe link and a second external interface configured to establish at least one of an external serial attached small computer system interface (SAS) link and an external serial advanced technology attachment (SATA) link. Embodiments include receiving from an external source, by the hybrid controller, a first command at the first external interface and a second command at the second external interface; and concurrently implementing, by the hybrid controller, the first command using a PCIe protocol and the second command using one of a SAS protocol and a SATA protocol. | 2015-12-17 |
20150363260 | DATA BUS INVERSION USABLE IN A MEMORY SYSTEM - Implementations of Data Bus Inversion (DBI) techniques within a memory system are disclosed. In one embodiment, a set of random access memory (RAM) integrated circuits (ICs) is separated from a logic system by a bus. The logic system can contain many of the logic functions traditionally performed on conventional RAM ICs, and accordingly the RAM ICs can be modified to not include such logic functions. The logic system, which can be a logic integrated circuit intervening between the modified RAM ICs and a traditional memory controller, additionally contains DBI encoding and decoding circuitry. In such a system, data is DBI encoded and at least one DBI bit issued when writing to the modified RAM ICs. The RAM ICs in turn store the DBI bit(s) with the encoded data. When the encoded data is read from the modified RAM ICs, it is transmitted across the bus in its encoded state along with the DBI bit(s). The logic integrated circuit then decodes the data using the DBI bit(s) to return it to its original state. | 2015-12-17 |
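The abstract does not pin down which DBI encoding the logic system uses; a common variant is "minimum-ones" DBI, which inverts a byte whenever more than half its bits are 1. The sketch below shows that variant as an assumption, with hypothetical function names:

```python
def dbi_encode(byte):
    """'Minimum-ones' Data Bus Inversion for one byte: if more than
    half of the 8 data bits are 1, transmit the inverted byte and
    assert the DBI bit, bounding the number of 1 bits driven on the
    bus to at most 4."""
    if bin(byte).count("1") > 4:
        return byte ^ 0xFF, 1
    return byte, 0

def dbi_decode(byte, dbi_bit):
    """Undo the inversion using the DBI bit stored alongside the
    encoded data, recovering the original byte."""
    return byte ^ 0xFF if dbi_bit else byte
```

As in the abstract, the DBI bit travels (and is stored) with the encoded data, so the data can cross the bus in its encoded state and be decoded only at the logic integrated circuit.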
20150363261 | RAM REFRESH RATE - A refresh rate of a random-access memory (RAM) is increased if a number of errors is greater than an error threshold and the refresh rate has not reached a maximum rate. The refresh rate of the RAM is set to a normal rate if the number of errors is less than or equal to the error threshold. | 2015-12-17 |
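The decision rule in this abstract is simple enough to state directly in code. This is a sketch only; the step size, units, and function name are illustrative assumptions:

```python
def adjust_refresh_rate(current_rate, error_count,
                        error_threshold, normal_rate, max_rate, step):
    """Raise the RAM refresh rate by one step when the error count
    exceeds the threshold (capped at max_rate); otherwise fall back
    to the normal rate, as described in the abstract."""
    if error_count > error_threshold:
        if current_rate < max_rate:
            return min(current_rate + step, max_rate)
        return current_rate  # already at the maximum rate
    return normal_rate
```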
20150363262 | ERROR CORRECTING CODE ADJUSTMENT FOR A DATA STORAGE DEVICE - A data storage device includes a non-volatile memory and a controller operationally coupled to the non-volatile memory. The controller is configured to access information stored at the non-volatile memory. The information includes a user data portion and an error correcting code (ECC) portion corresponding to the user data portion. The controller is further configured to modify the ECC portion in response to an error rate associated with the information exceeding a threshold. One or more ECC parameters of the ECC portion are modified without erasing or re-programming the user data portion. | 2015-12-17 |
20150363263 | ECC Encoder Using Partial-Parity Feedback - ECC encoders are disclosed that process packets of p bits (with p>1) of a data block in parallel and generate a set of N parity/check bits that are stored along with the original data in the memory block. Encoders according to the invention can be used to create a nonvolatile NAND Flash memory write cache with BCH-ECC for use in a disk drive that can speed up the response time for some write operations. Encoder embodiments of the invention use partial-parity feedback along with a XOR-matrix logic module, which calculates N output bits from p input bits, and a shift register module that accumulates N check bits. The XOR-matrix logic module is designed using a precalculated matrix of p×N bits, which is translated into the VHDL design language to generate the hardware gates. High-order p-bit partial-parity feedback improves over LFSR designs and achieves a minimal critical path length of p. | 2015-12-17 |
20150363264 | CELL-TO-CELL PROGRAM INTERFERENCE AWARE DATA RECOVERY WHEN ECC FAILS WITH AN OPTIMUM READ REFERENCE VOLTAGE - An apparatus comprising a memory and a controller. The memory may be configured to process a plurality of read/write operations. The memory may comprise a plurality of memory modules each having a size less than a total size of the memory. The controller may be configured to recover data stored in the memory determined to exceed a maximum number of errors after performing a first read operation using a first read reference voltage. The controller may perform a second read operation using a second read reference voltage. The controller may identify a victim cell having a threshold voltage in a region between the first read reference voltage and the second read reference voltage. The controller may perform a third read operation on aggressor cells of the victim cell. The controller may perform a fourth read operation using the first read reference voltage with bit-fixed values on the victim cell based on a type of interference from the aggressor cells. | 2015-12-17 |
20150363265 | METHOD FOR CONTROLLING MEMORY APPARATUS, AND ASSOCIATED MEMORY APPARATUS AND CONTROLLER THEREOF - A method for controlling a memory apparatus and the associated memory apparatus thereof and the associated controller thereof are provided, where the method includes: reading encoded data of a second set of error correction configuring parameters from a system block, and utilizing an LDPC engine to decode the encoded data to obtain the second set of error correction configuring parameters, where the LDPC engine stores a first set of error correction configuring parameters, and during decoding the encoded data, the LDPC engine performs decoding corresponding to a first LDPC characteristic matrix based on the first set of error correction configuring parameters; and controlling the LDPC engine to perform operations corresponding to a second LDPC characteristic matrix based on the second set of error correction configuring parameters in RAM, in order to make the LDPC engine be equipped with new encoding and decoding capabilities corresponding to the second LDPC characteristic matrix. | 2015-12-17 |
20150363266 | PARITY SCHEME FOR A DATA STORAGE DEVICE - A data storage device includes a non-volatile memory. The non-volatile memory may include a first word line, a second word line, and a third word line. The second word line may be between the first word line and the third word line. The non-volatile memory may further include a first string and a second string. The first string may be adjacent to the second string. The data storage device may further include circuitry configured to store parity information at a fourth word line of the non-volatile memory. The parity information may correspond to a combination of first data associated with the first word line and the first string, second data associated with the first word line and the second string, third data associated with the third word line and the first string, and fourth data associated with the third word line and the second string. | 2015-12-17 |
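The combination of four pages into one parity page, and the recovery it enables, can be illustrated with simple XOR arithmetic over bytes. Representing word-line/string data as `bytes` objects and the function names below are assumptions for illustration:

```python
def make_parity(pages):
    """XOR equal-length data pages (bytes objects) into one parity
    page, analogous to combining the four word-line/string data
    portions described in the abstract."""
    parity = bytearray(len(pages[0]))
    for page in pages:
        for i, b in enumerate(page):
            parity[i] ^= b
    return bytes(parity)

def recover(surviving_pages, parity):
    """Rebuild a single lost page: XOR of the parity page with all
    remaining data pages cancels the surviving pages and leaves the
    lost one."""
    return make_parity(list(surviving_pages) + [parity])
```

Because XOR is its own inverse, any one of the four data portions can be reconstructed from the parity information and the other three.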
20150363267 | ERROR DETECTION IN STORED DATA VALUES - A data storage apparatus is provided which has a plurality of data storage units, each respective data storage unit configured to store a respective data bit of a data word. Stored data value parity generation circuitry is configured to generate a parity bit for the data word in dependence on the data bits of the data word stored in the plurality of data storage units. The stored data value parity generation circuitry is configured such that switching within the stored data value parity generation circuitry does not occur when the data word is read out from the plurality of data storage units. Transition detection circuitry is configured to detect a change in value of the parity bit. | 2015-12-17 |
20150363268 | ERROR DETECTION IN STORED DATA VALUES - An apparatus has a plurality of storage units. A parity generator is configured to generate a parity value in dependence on the respective values stored in the plurality of storage units. The parity generator is configured such that determination of the parity value is independent of a read access to the data stored in the plurality of storage units. A detector is configured to detect a change in value of the parity value. | 2015-12-17 |
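The last two abstracts both describe hardware circuitry, but the underlying parity-transition idea can be modelled in software: compute the word's parity once, then flag an error if the regenerated parity ever differs while no write has occurred. The class and function names below are illustrative, not the patented circuit:

```python
def parity_bit(word_bits):
    """Parity of a stored data word: XOR of all its bits."""
    p = 0
    for bit in word_bits:
        p ^= bit
    return p

class TransitionDetector:
    """Software model of the transition-detection circuitry: remembers
    the parity of the word as stored and reports an error when the
    regenerated parity changes, i.e. a stored bit has flipped."""
    def __init__(self, word_bits):
        self.expected = parity_bit(word_bits)

    def check(self, word_bits):
        # True means the parity transitioned: a single-bit upset occurred.
        return parity_bit(word_bits) != self.expected
```

A single-bit flip always toggles the parity, so it is always detected; like any single parity bit, the scheme cannot detect an even number of flips.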