26th week of 2015 patent application highlights part 49
Patent application number | Title | Published |
20150178099 | INTERCONNECTING PORTAL COMPONENTS WITH DIALOG STATE TRANSITIONS - In one embodiment, a method for interconnecting portlets is disclosed. A first view is displayed on a computing device, wherein the first view is associated with a software application in a first state and includes a first portlet. A first event is received from the first portlet. A state model for the software application is accessed, wherein the state model includes a plurality of transitions between states of the software application, and wherein one transition of the plurality of transitions is triggered to bring the software application into a second state based on a type of the first portlet and data associated with the type of the first portlet. The one transition is executed to bring the software application into the second state, and a second view is displayed, wherein the second view is associated with the software application in the second state. | 2015-06-25 |
20150178100 | TRIGGER BASED PORTABLE DEVICE MORPHING - Embodiments relate to morphing a portable device. An aspect includes a computer program product for morphing. The computer program product includes a storage medium readable by a processing circuit of the portable device and storing instructions for execution by the processing circuit for performing a method. The method includes obtaining an input based on a result of executing an application among a set of applications of the portable device, and comparing the input to a plurality of triggers. The method also includes activating a profile of the portable device, based on the input matching one of the plurality of triggers, the profile corresponding with the one of the plurality of triggers, the profile being one of a plurality of profiles, each of the plurality of profiles defining enabled applications, security settings for information access by the portable device, and redundancy settings of the portable device including frequency of document backup. | 2015-06-25 |
20150178101 | ADJUSTING SETTINGS BASED ON SENSOR DATA - Various techniques for adjusting settings based on sensor data are described herein. In one example, a method includes detecting sensor data from a sensor and ranking the sensor data based on predetermined zones. The method can also include identifying a dominant zone from the predetermined zones, and adjusting a setting based on the dominant zone. | 2015-06-25 |
20150178102 | SYSTEM-ON-CHIP, METHOD OF MANUFACTURE THEREOF AND METHOD OF CONTROLLING A SYSTEM-ON-CHIP - A system-on-chip comprises a plurality of functional domains. The plurality of functional domains comprise a first domain and a second domain, the first domain having a first active mode of operation and the second domain having a second active mode of operation different from the first active mode of operation. The system-on-chip also comprises a control unit operably coupled to the first and second domains and capable of placing the first domain in the first active mode and the second domain in the second active mode so that the first domain is in the first active mode and the second domain is in the second active mode substantially contemporaneously. The first active mode of operation is functionally different from the second active mode of operation. | 2015-06-25 |
20150178103 | APPARATUS AND METHOD FOR STORAGE AND DECOMPRESSION OF CONFIGURATION DATA - An apparatus includes a plurality of cores and a fuse array. The plurality of cores is disposed on a die. The fuse array is disposed on the die and is coupled to each of the plurality of cores, where the fuse array includes a plurality of semiconductor fuses that are programmed with compressed configuration data for the each of the plurality of cores, and where the each of the plurality of cores accesses and decompresses all of the compressed configuration data upon power-up/reset, for initialization of elements within the each of the plurality of cores. | 2015-06-25 |
20150178104 | METHODS AND APPARATUS TO VALIDATE TRANSLATED GUEST CODE IN A DYNAMIC BINARY TRANSLATOR - Methods, apparatus, systems and articles of manufacture are disclosed to validate translated guest code in a dynamic binary translator. An example apparatus disclosed herein includes a translator to generate a first translation of code to execute on a host machine, the first translation of the guest code to facilitate creating a first translated guest code, and the translator to generate a second translation of the translated guest code to execute on the host machine. The example apparatus also includes a translation versions manager to identify a first host machine state based on executing a portion of the first translation, and the translation versions manager to identify a second host machine state based on executing a portion of the second translation. The example system also includes a validator to determine a state divergence status of the second translation based on a comparison between the first host machine state and the second host machine state. | 2015-06-25 |
20150178105 | Method and System for Optimizing Virtual Disk Provisioning - A first computing device is provided for virtual disk provisioning. The first computing device includes one or more processors configured to provide a first virtual disk and a first publish differencing disk. The one or more processors are further configured to obtain meta data associated with the first virtual disk and the first publish differencing disk, and generate one or more first differencing patches and one or more second differencing patches. The first and second differencing patches have a binary format. The first computing device further includes a storage configured to store data associated with the first virtual disk and the first publish differencing disk, the meta data, and the one or more first and second differencing patches. The first computing device further includes a communication subsystem configured to provide one or more first and second differencing patches to provision the virtual machine associated with a second computing device. | 2015-06-25 |
20150178106 | VIRTUAL MACHINE DATA REPLICATION WITH SHARED RESOURCES - Systems and methods for virtual machine data replication with shared resources. An example method may include: identifying resources that are shared across a plurality of virtual machines, storing a copy of the resources, receiving an indication of a portion of virtual storage of a virtual machine to be replicated, determining that the portion of virtual storage is not included in the resources, in response to the determination, updating a replicated copy of the virtual machine in view of the portion of virtual storage, the replicated copy further including the resources that are shared across a plurality of virtual machines, determining an initialization efficiency metric in relation to the replicated copy, and in response to the determination that the initialization efficiency metric exceeds an efficiency threshold, storing a copy of the virtual storage. | 2015-06-25 |
20150178107 | Elastic Compute Fabric Using Virtual Machine Templates - Embodiments include an infrastructure shared among cloud services that supports fast provisioning of virtual machines (VMs). A set of powered-on parent VM templates and a set of powered-off child VMs are maintained by the infrastructure in a hierarchy. The child VMs are instantiated from the parent VM templates, and pre-registered to a cloud operating system in some embodiments. In response to requests from the cloud services for the child VMs, where the requests specify child VM configurations, child VMs from the set of powered-off child VMs are selected and customized based on the child VM configurations, and then deployed for use by the cloud services. In some embodiments, the fast provisioning of VMs is supported by forking operations in the infrastructure. | 2015-06-25 |
20150178108 | Fast Instantiation of Virtual Machines - Embodiments support instant forking of virtual machines (VMs) and state customization. Virtual device state and persistent storage of a child VM are defined based on virtual device state and persistent storage of parent VMs. After forking, a state of the child VM is customized based on configuration data. Customizing the state includes configuring one or more identities of the child VM, before bootup completes on the child VM. | 2015-06-25 |
20150178109 | Provisioning Customized Virtual Machines without Rebooting - Embodiments provision and customize virtual machines (VMs), such as desktop VMs, without rebooting the desktop VMs. In response to a request to provision the VMs, a computing device creates a clone VM from a parent VM template identified in the request. One or more customizations that prompt rebooting of the clone VM are applied to the clone VM. The computing device instantiates a plurality of child VMs from the customized clone VM. A child VM configuration is applied to at least one of the instantiated child VMs without provoking a reboot of those child VMs. | 2015-06-25 |
20150178110 | State Customization of Forked Virtual Machines - Embodiments support instant forking of virtual machines (VMs) and state customization. A computing device initiates execution of a first group of services (e.g., identity-independent) in a first VM. A second VM is instantiated from the first VM. The second VM shares memory and storage with the first VM. The computing device customizes the second VM based on configuration data associated with the second VM. A second group of services (e.g., identity-dependent) starts executing on the second VM after configuring the identity of the second VM. Customizing the second VM includes configuring one or more identities of the second VM. In some embodiments, a domain identity is selected from a pool of previously-created identities and applied to the second VM, before bootup completes on the second VM. | 2015-06-25 |
20150178111 | HYPERVISOR MANAGED SCHEDULING OF VIRTUAL MACHINES - A hypervisor determines that a virtual machine is important. In response, the hypervisor selects one or more processing devices of a multiprocessor computer system and pins the virtual machine to the selected processing devices. The virtual machine exclusively executes in the selected processing devices resulting in an unfair resource allocation. | 2015-06-25 |
20150178112 | METHOD FOR CERTIFICATION OF RECONFIGURABLE RADIO EQUIPMENT WHEN RECONFIGURATION SOFTWARE DEVELOPED BY THIRD PARTY - A radio equipment comprises a waveform generator to receive input data and to generate output baseband waves corresponding to the received input data, and a radio-frequency component to transform the generated baseband waves into radio waves. The waveform generator comprises a Radio Virtual Machine (RVM) that has been compiled to operate on hardware underlying the RVM. The RVM comprises an associated RVM class that establishes a level of reconfigurability of low-level parameters of the RVM. The RVM class comprises one of a plurality of RVM classes in which each RVM class comprises a corresponding level of reconfigurability of low-level RVM parameters and a corresponding level of certification testing for reconfigured RVMs of the class. In one exemplary embodiment, the plurality of RVM classes comprises at least one RVM class comprising full reconfigurability of low-level RVM parameters and at least one RVM class comprising limited reconfigurability of low-level RVM parameters. | 2015-06-25 |
20150178113 | LOADING RUNTIME CONFIGURATION FILES INTO VIRTUAL MACHINE INSTANCES - Systems and methods for loading runtime configuration files into virtual machine instances. An example method may comprise: storing, by a processing device, a plurality of virtual machine configuration files in a storage memory accessible by a virtual machine instance; creating a file list referencing a virtual machine configuration file of the plurality of virtual machine configuration files, the file list further specifying a target location of the virtual machine configuration file in the virtual machine instance; and causing a boot process of the virtual machine instance to download the virtual machine configuration file specified by the file list from the storage memory into the target location specified by the file list. | 2015-06-25 |
20150178114 | Parallel Processing of Data - An untrusted application is received at a data center including one or more processing modules and providing a native processing environment. The untrusted application includes a data parallel pipeline. Secured processing environments are used to execute the untrusted application. | 2015-06-25 |
20150178115 | OPTIMAL ASSIGNMENT OF VIRTUAL MACHINES AND VIRTUAL DISKS USING MULTIARY TREE - A multiary tree represents relationships among physical storage units and host computing devices. Virtual machines are optimally assigned among the host computing devices, and virtual disks of the virtual machines are optimally assigned among the physical storage units, using and extending the multiary tree based on constraints. The constraints regard the physical storage units, the host computing devices, the virtual machines, and the virtual disks. | 2015-06-25 |
20150178116 | PROVIDING SERVICE QUALITY LEVELS THROUGH CPU SCHEDULING - In this disclosure, a resource scheduler is described that allows virtual machine instances to earn resource credits during the low activity levels. Virtual machine instances that spend a predominant amount of time operating at low activity levels are able to quickly gain resource credits. Once these virtual machine instances acquire enough resource credits to surpass a threshold level, the resource scheduler can assign a high priority level to the virtual machine instances that provide them with priority access to CPU resources. The next time that the virtual machine instances enter a high activity level, they have a high priority level that allows them to preempt other, lower priority virtual machine instances. Thus, these virtual machine instances are able to process operations and/or respond to user requests with low latency. | 2015-06-25 |
20150178117 | SELECTING CLOUD COMPUTING RESOURCE BASED ON FAULT TOLERANCE AND NETWORK EFFICIENCY - The disclosure is related to selecting and allocating one of resources in a cloud computing system to create a virtual machine. A control server may determine a resource selection reference by selecting one of fault-tolerance and network efficiency upon receipt of a request message for creating a target virtual machine. The control server may calculate at least one of a fault-tolerance point and a network efficiency point of each candidate resource according to the selected resource selection reference. The control server may select one of candidate resources to create a requested virtual machine based on at least one of the calculated fault-tolerance point and the calculated network efficiency point of the candidate resources. | 2015-06-25 |
20150178118 | GUEST CUSTOMIZATION - A system for guest customization includes a processor and a data storage device. A service operating system is stored on the data storage device that, when executed by the processor, boots a virtual machine into maintenance mode. A response file creation module is stored on the storage device that, when executed by the processor, creates a response file. A customization agent is embedded within the service operating system that, when executed by the processor on its startup, automatically performs customizations based on the response file, including at least one of adding or removing files within the data storage device and injecting main operating system or virtual machine agent startup scripts to complete customization once the virtual machine is rebooted into the main operating system. | 2015-06-25 |
20150178119 | HYPERVISOR-BASED SERVER DUPLICATION SYSTEM AND METHOD AND STORAGE MEDIUM STORING SERVER DUPLICATION COMPUTER PROGRAM - Disclosed herein are a server duplication system and method and a storage medium storing a server duplication computer program. The server duplication system includes a primary server including a hypervisor including a hypervisor-based fault tolerance module and a first file system virtual machine (FS VM), and a first standby server including a hypervisor including a hypervisor-based fault tolerance module that exchanges data with a fault tolerance module provided on the hypervisor of the primary server and duplicates the primary server. The first FS VM provides a first file system that is shared by a user virtual machine (USER VM), and a buffer cache that is used in conjunction with the first file system is provided on virtual memory. The first FS VM is duplicated into the standby server using the hypervisor-based fault tolerance module of the primary server and the hypervisor-based fault tolerance module of the standby server. | 2015-06-25 |
20150178120 | Method And System For Estimating Power Consumption For Aggregate System Workload - A method for estimating power consumption by a target host involves estimating a per-workload in-scenario utilization function of time for each workload running on said host in a what-if scenario so as to yield per-workload in-scenario utilization functions of time. The utilization functions are aggregated to yield a target host utilization function of time. The target host utilization function of time is converted to a host power-consumption function of time. | 2015-06-25 |
20150178121 | REPLICATION OF BATCH JOBS OF COMPUTING SYSTEMS - A method for replicating the effect of batch jobs using a replication agent is provided. The method comprises a replicating agent maintaining a maximum level in rows and a minimum level in rows for one or more columns in a database table of one or more database systems. The replicating agent further analyzes a mapping defined in the replicating agent to identify source columns of a source database system of the one or more database systems which are mapped to a target column of a target database system of the one or more database systems. According to at least one embodiment, the replicating agent further identifies a logical clause of the source database system and the target database system to define a range refresh. The replication agent further initiates the defined range refresh. | 2015-06-25 |
20150178122 | METHOD AND SYSTEM FOR PROVIDING A HIGH-AVAILABILITY APPLICATION - A system, method, and techniques for providing high availability to an application are provided. An example system includes a plurality of databases and a persistence layer that generates, based on a request, one or more sets of database commands that is specific to a database. The system also includes a high-availability layer that is an intermediary between the persistence layer and the plurality of databases, and includes a transaction manager and an execution engine. The transaction manager starts a composite transaction including a sub-transaction corresponding to each database of the plurality of databases and determines whether each applied sub-transaction has successfully completed. A sub-transaction includes a set of database commands. The execution engine applies each sub-transaction to its corresponding database. | 2015-06-25 |
20150178123 | LATENCY AGNOSTIC TRANSACTION BUFFER FOR REQUEST-GRANT PROTOCOLS - According to one embodiment, an apparatus includes a transaction data storage to store transaction data to be transmitted over an interconnect of a data processing system, a transaction buffer coupled to the transaction data storage to buffer at least a portion of the transaction data, and a transaction logic coupled to the transaction data storage and the transaction buffer to transmit a request (REQ) signal to an arbiter associated with the interconnect in response to first transaction data that becomes available in the transaction data storage, in response to a grant (GNT) signal received from the arbiter, retrieve second transaction data from the transaction buffer and transmit the second transaction data onto the interconnect, and refill the transaction buffer with third transaction data retrieved from the transaction data storage after the second transaction data has been transmitted onto the interconnect. | 2015-06-25 |
20150178124 | BACKFILL SCHEDULING FOR EMBARRASSINGLY PARALLEL JOBS - Backfill scheduling for embarrassingly parallel jobs. A disclosed method includes: receiving an initial schedule having a plurality of jobs scheduled over time on a plurality of nodes, determining that a first job can be split into a plurality of sub-tasks that can respectively be performed in parallel on different nodes, splitting the first job into the plurality of sub-tasks, and moving a first sub-task from its position in the initial schedule to a new position to yield a first revised schedule. | 2015-06-25 |
20150178125 | REDUCING SYNCHRONIZATION OF TASKS IN LATENCY-TOLERANT TASK-PARALLEL SYSTEMS - Techniques are provided for reducing synchronization of tasks in a task scheduling system. A task queue includes multiple tasks, some of which require an I/O operation while other tasks require data stored locally in memory. A single thread is assigned to process tasks in the task queue. The thread determines if a task at the head of the task queue requires an I/O operation. If so, then the thread generates an I/O request, submits the I/O request, and places the task at (or toward) the end of the task queue. When the task reaches the head of the task queue again, the thread determines if data requested by the I/O request is available yet. If so, then the thread processes the request. Otherwise, the thread places the task at (or toward) the end of the task queue again. | 2015-06-25 |
20150178126 | BACKFILL SCHEDULING FOR EMBARRASSINGLY PARALLEL JOBS - Backfill scheduling for embarrassingly parallel jobs. A disclosed method includes: receiving an initial schedule having a plurality of jobs scheduled over time on a plurality of nodes, determining that a first job can be split into a plurality of sub-tasks that can respectively be performed in parallel on different nodes, splitting the first job into the plurality of sub-tasks, and moving a first sub-task from its position in the initial schedule to a new position to yield a first revised schedule. | 2015-06-25 |
20150178127 | OPTIMALLY PROVISIONING AND MERGING SHARED RESOURCES TO MAXIMIZE RESOURCE AVAILABILITY - A shared resource system, a method of managing resources on the system and computer program products therefor. A resource consolidation unit causes identification of identical memory segments on host computers. The resource consolidation unit may be in one or more host computers. Each identical memory segment is associated with multiple instances of resources provisioned on at least two host computers. The resource consolidation unit causes provisioned resources to be migrated for at least one instance from one of the two hosts to another. On the other host computer the migrated resources share respective identical memory segments with resources already provisioned on the other host. | 2015-06-25 |
20150178128 | CROSS ARCHITECTURE VIRTUAL MACHINE MIGRATION - A system, method and computer program for transferring a running virtual machine from a first to a second physical machine, where each of the physical machines has a different instruction set architecture. The system may comprise a receiver for receiving a transfer request; responsive to receiving the transfer request, means for pausing the virtual machine; and means for collecting a state of the virtual machine. The system may include means for stopping a first interface component operable on the first physical machine; means for starting a second interface component operable on the second physical machine; and means for transferring the state to the second interface component. The system may further comprise means for starting the virtual machine on the second physical machine in response to the state transfer. | 2015-06-25 |
20150178129 | RESOURCE BOTTLENECK IDENTIFICATION FOR MULTI-STAGE WORKFLOWS PROCESSING - Identifying resource bottleneck in multi-stage workflow processing may include identifying dependencies between logical stages and physical resources in a computing system to determine which logical stage involves what set of resources; for each of the identified dependencies, determining a functional relationship between a usage level of a physical resource and concurrency level of a logical stage; estimating consumption of the physical resources by each of the logical stages based on the functional relationship determined for each of the logical stages; and performing a predictive modeling based on the estimated consumption to determine a concurrency level at which said each of the logical stages will become bottleneck. | 2015-06-25 |
20150178130 | Method and System for Secure Data Processing - Described herein are embodiments that relate to a method for use in data processing. An embodiment includes providing an arithmetic unit configured to perform any one in a set of operations. An embodiment includes providing a control register configured to hold control data. An embodiment includes providing in the set of operations, a control operation to provide process control, the control operation to operate on an operand that is coupled to the control data. A system for use in data processing is also disclosed having process registers and a control register. Further, a non-transitory computer-readable medium storing instruction code thereon for use in data processing is disclosed. When executed, the code causes a control operation forming part of a set of operations to operate on an operand that is coupled to control data held in a control register. | 2015-06-25 |
20150178131 | HONORING HARDWARE ENTITLEMENT OF A HARDWARE THREAD - A method for scheduling the execution of a computer instruction receives an entitlement processor resource percentage for a logical partition on a computer system. The logical partition is associated with a hardware thread of a processor of the computer system. The entitlement processor resource percentage for the logical partition is stored in a register of the hardware thread associated with the logical partition. An instruction is received from the logical partition of the computer system and the processor dispatches the instruction based on the entitlement processor resource percentage stored in the register of the hardware thread associated with the logical partition. | 2015-06-25 |
20150178132 | FUNCTIONAL UNIT FOR SUPPORTING MULTITHREADING, PROCESSOR COMPRISING THE SAME, AND OPERATING METHOD THEREOF - A functional unit for supporting multithreading, a processor including the same, and an operating method of the processor are provided. The functional unit for supporting multithreading includes a plurality of input ports configured to receive opcodes and operands for a plurality of threads, wherein each of the plurality of input ports is configured to receive an opcode and an operand for a different thread, a plurality of operators configured to perform operations using the received operands, an operator selector configured to select, based on each opcode, an operator from among the plurality of operators to perform a specific operation using an operand from among the received operands, and a plurality of output ports configured to output operation results of operations for each thread. | 2015-06-25 |
20150178133 | PRIORITIZING DATA REQUESTS BASED ON QUALITY OF SERVICE - Systems, methods, and software described herein facilitate servicing of data requests based on quality of service assigned to processing jobs. In one example, a method of prioritizing data requests in a computing system based on quality of service includes identifying a plurality of data requests from a plurality of processing jobs. The method further includes prioritizing the plurality of data requests based on a quality of service assessed to each of the plurality of processing jobs, and assigning cache memory in the computing system to each of the plurality of data requests based on the prioritization. | 2015-06-25 |
20150178134 | Hybrid Crowdsourcing Platform - Systems and methods for implementing a hybrid crowdsourcing platform are provided. The hybrid crowdsourcing platform can receive a work request having a task with a plurality of units of work. One or more of the units of work can be suitable for completion by either a computer-based resource or a crowdsourcing resource. The individual units of work for the task can be analyzed to identify metrics associated with completion of the unit of work by the crowdsourcing resource and by the computer-based resource. Based on these metrics, the units of work can be assigned for completion by either the crowdsourcing resource or by the computer-based resource to improve the utility of the solution to the task. | 2015-06-25 |
20150178135 | FACILITATING TIERED SERVICE MODEL-BASED FAIR ALLOCATION OF RESOURCES FOR APPLICATION SERVERS IN MULTI-TENANT ENVIRONMENTS - In accordance with embodiments, there are provided mechanisms and methods for facilitating tiered service model-based fair allocation of resources for application servers in multi-tenant environments. In one embodiment and by way of example, a method includes collecting, by and incorporating into the database system, data relating to job types associated with one or more tenants of a plurality of tenants within a multi-tenant database system, computing, based on the data, an actual resource use and an expected resource allocation associated with each job type, and assigning classifications to the job types based on their corresponding actual resource use and the expected resource allocation. The method may further include routing the job types between tiers based on the assigned classifications, where the routing includes at least one of promoting, demoting, and maintaining one or more tiers for the job types. | 2015-06-25 |
20150178136 | Generating Hardware Accelerators and Processor Offloads - System and method for generating hardware accelerators and processor offloads. System for hardware acceleration. System and method for implementing an asynchronous offload. Method of automatically creating a hardware accelerator. Computerized method for automatically creating a test harness for a hardware accelerator from a software program. System and method for interconnecting hardware accelerators and processors. System and method for interconnecting a processor and a hardware accelerator. Computer implemented method of generating a hardware circuit logic block design for a hardware accelerator automatically from software. Computer program and computer program product stored on tangible media implementing the methods and procedures of the invention. | 2015-06-25 |
20150178137 | DYNAMIC SYSTEM AVAILABILITY MANAGEMENT - Server cluster management includes dynamically migrating machines between different server pools within the server cluster. The server pools include an active pool and at least one standby pool. Different standby pools can also be maintained to provide machines in different states of standby, including but not limited to different powered down or hibernation states. Machines are migrated between the different server pools based on network demands and machine status and capabilities. In some instances, the network demands are determined by forecasting future demands. The status and capability of the individual machines is evaluated on a continual basis to determine whether there is adequate capacity of the machines in the active pool to satisfy the one or more network demands, as well as to determine which machine is the most appropriate machine to migrate between server pools. Machines can also be migrated between the different standby pools. | 2015-06-25 |
20150178138 | MULTI-CORE DYNAMIC WORKLOAD MANAGEMENT - A dynamic scheduler is provided that schedules tasks for a plurality of cores based upon current operating characteristics for the cores. The current operating characteristics include a predicted leakage current for each core based upon an analytical model. | 2015-06-25 |
20150178139 | METHOD AND SYSTEM FOR TRANSFORMING INPUT DATA STREAMS - A system and method for processing an input data stream in a first data format of a plurality of first data formats to an output data stream in a second data format of a plurality of second data formats. A plurality of input connector modules receive respective input data streams and at least one input queue stores the received input data streams. A plurality of job threads is operatively connected to the at least one input queue, each job thread formatting a stored input data stream to produce an output data stream. At least one output queue stores the output data streams from the plurality of job threads. A plurality of output connector modules is operatively connected to the at least one output queue, the output connector modules supplying respective output data streams. | 2015-06-25 |
20150178140 | INFORMATION PROCESSING SYSTEM AND MONITORING METHOD - An abnormal state of an allocated task in a processing node is detected correctly without using any resource of the processing node. A power usage statistics storing unit | 2015-06-25 |
20150178141 | REPORT CREATION SYSTEM AND PROGRAM - A report creation system according to one embodiment creates a report indicative of a condition of a cooperative service in which a plurality of services are made to cooperate, the plurality of services being provided by a plurality of service providing apparatuses including at least one service providing apparatus in another company. The report creation system creates the report, based on the first log information and the error occurrence information collected by each of the information collection apparatuses, and second log information of each of the services collected from each of the service providing apparatuses. | 2015-06-25
20150178142 | EXCHANGE ERROR INFORMATION FROM PLATFORM FIRMWARE TO OPERATING SYSTEM - A computing system can include a platform firmware to monitor hardware errors and to notify an operating system when a corrective action is to be performed to address a hardware error. The computing system can also include an extended error log to describe a hardware error. The computing system can further include an action record to direct the operating system to perform the corrective action to address the hardware error. | 2015-06-25 |
20150178143 | USING DARK BITS TO REDUCE PHYSICAL UNCLONABLE FUNCTION (PUF) ERROR RATE WITHOUT STORING DARK BITS LOCATION - Dark-bit masking technologies for physically unclonable function (PUF) components are described. A computing system includes a processor core and a secure key manager component coupled to the processor core. The secure key manager includes the PUF component, and a dark-bit masking circuit coupled to the PUF component. The dark-bit masking circuit is to measure a PUF value of the PUF component multiple times during a dark-bit window to detect whether the PUF value of the PUF component is a dark bit. The dark bit indicates that the PUF value of the PUF component is unstable during the dark-bit window. The dark-bit masking circuit is to output the PUF value as an output PUF bit of the PUF component when the PUF value is not the dark bit and set the output PUF bit to be a specified value when the PUF value of the PUF component is the dark bit. | 2015-06-25 |
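The dark-bit window described in the abstract above lends itself to a short sketch: sample the PUF bit repeatedly, and if its value flips within the window, mask it. This is a minimal illustration rather than the patented circuit; the function name, sample count, and fill value are all assumptions.

```python
def mask_dark_bits(measure, n_samples=8, fill_value=0):
    """Classify a PUF bit as dark (unstable) by sampling it repeatedly.

    `measure` is a callable returning one raw PUF bit (0 or 1) per call.
    Returns (output_bit, is_dark): a stable bit passes through unchanged,
    while an unstable ("dark") bit is replaced by `fill_value`.
    """
    samples = [measure() for _ in range(n_samples)]
    is_dark = len(set(samples)) > 1  # value flipped during the window
    return (fill_value if is_dark else samples[0]), is_dark
```

In hardware the masking happens without storing dark-bit locations; here the classification is simply recomputed from fresh measurements each time.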
20150178144 | ELECTRONIC CONTROL UNIT - An ECU having a microcomputer for controlling a control object includes: a detection device that detects an anomalous operation of the microcomputer; a first reset device that outputs a reset signal for the microcomputer when the detection device detects the anomalous operation; a failsafe control device that executes a failsafe control operation for controlling the control object to be safer than the control object before resetting the microcomputer when the microcomputer is reset to a normal state; a counting device that counts a number of times of occurrence of the anomalous operation when the detection device detects the anomalous operation again after the failsafe control device starts to execute the failsafe control operation; and a second reset device that outputs the reset signal and holds an output of the reset signal when the number of times of occurrence reaches a predetermined number of times. | 2015-06-25 |
20150178145 | Error Resilient Pipeline - For an error resilient pipeline, a Dynamically Adaptable Resilient Pipeline (DARP) controller determines a minimum error pipeline stage of a processor instruction pipeline with a minimum number of errors. In addition, the DARP controller determines a maximum error pipeline stage of the instruction pipeline with a maximum number of errors. The DARP controller increases a clock frequency for the instruction pipeline if the minimum number of errors of the minimum error pipeline stage is zero and the maximum number of errors of the maximum error pipeline stage does not exceed an error threshold. In addition, the DARP controller decreases the clock frequency if the minimum number of errors exceeds an error constant. | 2015-06-25 |
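The DARP frequency policy above reduces to two comparisons on per-stage error counts. A minimal sketch follows; the step size, threshold, and constant are illustrative assumptions, since the abstract specifies only the comparisons themselves.

```python
def adjust_clock(freq, stage_errors, error_threshold=4, error_constant=1,
                 step=0.05):
    """Adapt the pipeline clock frequency from per-stage error counts.

    Raise the frequency when the least-erroneous stage is error-free and
    the worst stage does not exceed `error_threshold`; lower it when even
    the best stage exceeds `error_constant`; otherwise hold steady.
    """
    min_err, max_err = min(stage_errors), max(stage_errors)
    if min_err == 0 and max_err <= error_threshold:
        return freq * (1 + step)  # headroom available: speed up
    if min_err > error_constant:
        return freq * (1 - step)  # systemic errors: slow down
    return freq
```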
20150178146 | METHOD AND APPARATUS FOR CIPHER FAULT DETECTION - Disclosed is an embodiment of a method for ciphering data. Data is provided for ciphering. The data is ciphered in a plurality of steps. For each step, an error detection encoding of the data to be processed within the step is determined, along with an expected output error detection encoding. The data is then processed within the step to produce an output error detection encoding, which is verified against the expected output error detection encoding. When the produced encoding differs from the expected encoding, a signal indicative of an error within the cipher process is provided. | 2015-06-25
20150178147 | SELF MONITORING AND SELF REPAIRING ECC - Exemplary embodiments of the present invention disclose a method and system for monitoring a first Error Correcting Code (ECC) device for failure and replacing the first ECC device with a second ECC device if the first ECC device begins to fail or fails. In a step, an exemplary embodiment performs a loopback test on an ECC device if a specified number of correctable errors is exceeded or if an uncorrectable error occurs. In another step, an exemplary embodiment replaces an ECC device that fails the loopback test with an ECC device that passes a loopback test. | 2015-06-25 |
20150178148 | THRESHOLD VOLTAGE CALIBRATION USING REFERENCE PATTERN DETECTION - A memory controller identifies a predominant type of error of a memory unit of solid state memory cells. An error type differential is calculated. The error type differential is a difference between a number of charge loss errors and a number of charge gain errors of the memory unit. A V | 2015-06-25 |
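The error type differential above is a plain subtraction of error counts. In the sketch below, the direction of the threshold shift is an assumed convention (charge loss dominating suggests cells drifted down, so the read threshold moves down), since the abstract is truncated before the calibration detail.

```python
def error_type_differential(loss_errors, gain_errors):
    """Difference between charge-loss and charge-gain error counts."""
    return loss_errors - gain_errors

def calibrate_threshold(v_t_mv, diff, step_mv=25):
    """Shift a read threshold voltage toward the predominant error type.

    Positive differential (charge loss dominates): lower the threshold.
    Negative differential (charge gain dominates): raise it.
    """
    if diff > 0:
        return v_t_mv - step_mv
    if diff < 0:
        return v_t_mv + step_mv
    return v_t_mv
```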
20150178149 | METHOD TO DISTRIBUTE USER DATA AND ERROR CORRECTION DATA OVER DIFFERENT PAGE TYPES BY LEVERAGING ERROR RATE VARIATIONS - An apparatus includes a memory and a controller. The memory includes a plurality of memory devices. Each memory device has a plurality of page types. The plurality of page types are classified based on error rate variations. The controller may be configured to write user data and error-correction data to the memory. The user data and the error-correction data are organized as a super-page. The super-page includes a plurality of sub-pages. The plurality of sub-pages are written across the plurality of memory devices such that the plurality of sub-pages are stored using more than one of the plurality of page types. | 2015-06-25 |
20150178150 | Techniques for Assessing Pass/Fail Status of Non-Volatile Memory - Examples are disclosed for assessing pass/fail status of non-volatile memory. In some examples, information may be received to indicate a block having memory pages associated with non-volatile memory cells. The information may indicate at least some of the memory pages have bit errors in excess of an error correction code (ECC) ability to correct. For these examples, the block may be selected for read testing. Read testing may include programming the memory pages with a known pattern and waiting a period of time. Following the period of time each memory page may be read and if a resulting pattern read matches the known pattern programmed to each memory page, the memory page passes. The block may be taken offline if the number of passing memory pages is below a pass threshold number. Other examples are described and claimed. | 2015-06-25 |
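The pass/fail decision above comes down to counting pages that still hold the known pattern after the wait period. A minimal sketch, with page contents modeled as plain values rather than real flash reads; the names are assumptions.

```python
def assess_block(pages_read_back, known_pattern, pass_threshold):
    """Decide whether a block stays online after read testing.

    `pages_read_back` holds the pattern read from each page after the
    known pattern was programmed and a wait period elapsed. A page passes
    if it still matches; the block is taken offline when the number of
    passing pages falls below `pass_threshold`.
    """
    passing = sum(1 for page in pages_read_back if page == known_pattern)
    return {"passing_pages": passing, "online": passing >= pass_threshold}
```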
20150178151 | DATA STORAGE DEVICE DECODER AND METHOD OF OPERATION - A data storage device includes a nonvolatile memory and a controller having a decoder. The nonvolatile memory is operatively coupled to the controller. The nonvolatile memory is configured to store a set of bits. The decoder is configured to receive the set of bits from the memory. The decoder is further configured to perform a decoding operation using the set of bits based on a parity check matrix. The parity check matrix includes a block row. The block row has a first non-zero sub-matrix and a second non-zero sub-matrix that is separated from the first non-zero sub-matrix within the block row by at least a threshold number of null sub-matrices of the block row. | 2015-06-25 |
20150178152 | PREVENTING PROGRAMMING ERRORS FROM OCCURRING WHEN PROGRAMMING FLASH MEMORY CELLS - Mis-programming of MSB data in flash memory is prevented by using ECC decoding logic on the flash die that error corrects the LSB values prior to the LSB values being used in conjunction with the MSB values to determine the proper reference voltage ranges. Error correcting the LSB page data prior to using it in combination with the MSB page data to determine the reference voltage ranges ensures that the reference voltage ranges will be properly determined and programmed into the flash cells. | 2015-06-25 |
20150178153 | MEMORY SYSTEM - Provided is a memory system having a memory device. The memory system includes a memory device suitable for performing an even read operation of even memory cells connected to a word line and an odd read operation of odd memory cells connected to the word line, and a controller suitable for performing an error correction operation on even data read out from the even memory cells according to even probability information and odd data read out from the odd memory cells according to odd probability information, and the controller is configured to correct the even probability information or the odd probability information according to characteristics of the even memory cells and the odd memory cells. | 2015-06-25 |
20150178154 | MEMORY CONTROLLER OPERATING METHOD AND MEMORY CONTROLLER - A method of operating a memory controller includes: receiving hard decision data and first soft decision data from a non-volatile memory device; performing a first ECC (error correction code) decoding operation using the hard decision data and the first soft decision data; and then determining a second soft decision read voltage or a reclaim operation of the non-volatile memory device based on the number of iterations of the first ECC decoding operation. | 2015-06-25
20150178155 | MEMORY CONTROLLER, STORAGE DEVICE INCLUDING THE SAME AND DATA ENCODING AND DECODING METHODS THEREOF - A storage device is provided which includes an ECC circuit. At a write operation, the ECC circuit generates a CRC (cyclic redundancy check) parity corresponding to data and generates an ECC (error correction code) parity corresponding to the data using an error correction code. At a read operation on the data stored in at least one nonvolatile memory device, the ECC circuit corrects an error of the data using the CRC parity and the ECC parity. | 2015-06-25
20150178156 | MEMORY SYSTEM - A memory system is provided. The memory system includes a memory device suitable for reading out data from memory cells by a plurality of read voltages having various levels, and a controller suitable for updating probabilistic information based on the read out data when the read out data is input to the controller, and performing an error correction operation by the updated probabilistic information, wherein the controller updates the probabilistic information a predetermined number of times that the memory device reads out the data. | 2015-06-25 |
20150178157 | MEMORY MANAGEMENT SYSTEM AND METHOD - A memory system and method of operating the same is described, where the memory system is used to store data in a RAIDed manner. The stored data may be retrieved, including the parity data so that the stored data is recovered when the first of either the stored data without the parity data, or the stored data from all but one memory module and the parity data, has been received. The writing of data, for low write data loads, is managed such that only one of the memory modules of a RAID stripe is being written to, or erased, during a time interval. | 2015-06-25 |
20150178158 | SHAPING CODES FOR MEMORY - Apparatuses and methods associated with shaping codes for memory are provided. One example apparatus comprises an array of memory cells and a shaping component coupled to the array and configured to encode each of a number of received digit patterns according to a mapping of received digit patterns to shaping digit patterns. The mapping of received digit patterns to shaping digit patterns obeys a shaping constraint that limits, to an uppermost amount, an amount of consecutive digits of the shaping digit patterns allowed to have a particular digit value. | 2015-06-25 |
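The shaping constraint above — no more than an uppermost number of consecutive occurrences of a particular digit — can be checked with a run-length scan. A sketch under assumed names; verifying the constraint is only one small part of the patented mapping component.

```python
def obeys_shaping_constraint(pattern, digit, max_run):
    """Return True if `pattern` never contains more than `max_run`
    consecutive occurrences of `digit` (the shaping constraint)."""
    run = longest = 0
    for d in pattern:
        run = run + 1 if d == digit else 0
        longest = max(longest, run)
    return longest <= max_run
```

A valid mapping would pair each received digit pattern with a shaping digit pattern for which this check holds.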
20150178159 | MEMORY DEVICE AND CONTROL METHOD OF MEMORY DEVICE - A memory device includes: a memory including a first port and a second port that are accessible; an error check and correct encoding circuit that applies an error check and correct code to data and writes them into the first port of the memory; an error check and correct decoding circuit that receives input of the data and the error check and correct code read from the first port of the memory, and corrects the inputted data when an error in the inputted data is detected based on the inputted error check and correct code; and a control circuit that writes the corrected data and the error check and correct code into the second port of the memory when the error is detected and a current access address to the first port of the memory differs from a previous access address. | 2015-06-25
20150178160 | Transforming Data in a Distributed Storage and Task Network - A method begins with a computing device dividing data into data partitions. For a data partition of the data partitions, the method continues with the computing device associating indexing information with the data partition. The method continues with the computing device segmenting the data partition into a plurality of data segments. The method continues with the computing device dispersed storage error encoding the plurality of data segments to produce a plurality of sets of encoded data slices. The method continues with the computing device grouping encoded data slices of the plurality of sets of encoded data slices to produce a set of groupings of encoded data slices. | 2015-06-25 |
20150178161 | Error Correction for Storage Devices - Systems and techniques relating to fault tolerant data storage in storage devices, such as storage devices that employ Shingled Magnetic Recording (SMR) and/or storage devices that employ solid state memory, include a method, in some implementations, including: receiving, at a storage controller, a data request for a storage device; reading, in response to the data request, data from discrete units of storage in the storage device, the data comprising stored data read from two or more of the discrete units of storage and parity data read from at least one of the discrete units of storage; detecting an error in the stored data from the reading; and recovering stored data for at least one of the discrete units of storage using the parity data and the stored data read from one or more remaining ones of the two or more of the discrete units of storage. | 2015-06-25 |
20150178162 | Method for Recovering Recordings in a Storage Device and System for Implementing Same - The memory of the storage device is divided into information areas of identical size selected from different parts of the storage device, and control zones are selected from different parts of the device. Each group of data is a set of code words written to a corresponding information zone. Three reference control sums S | 2015-06-25 |
20150178163 | SYSTEM AND METHOD FOR TRANSMITTING FILES - A system and method for transmitting files are provided. The system is applied for transmitting data in a multiple-session transmission, wherein the data includes a plurality of split files and every split file is regarded as a transmission unit. The system includes a transmitting device that transmits the split files and, when a repair-file mechanism is triggered, generates a repair file. The system further includes a receiving device that receives the repair file and generates, according to the repair file, the split files which have not been received. | 2015-06-25
20150178164 | RECONSTRUCTING AN INDIRECTION TABLE FROM LOGGED MEDIA ADDRESSES - The storage system uses a combination of checkpoint data and journal data to reconstruct an indirection table. The checkpoint data comprises compacted media addresses from the indirection table that are then stored in a relatively few number of media blocks. This allows the media controller to quickly read the compacted checkpoint data from the solid state media. The media controller generates the journal data from logical addresses and associated media addresses for additional write operations received while creating the checkpoint data. The media controller uses metadata when errors are identified in the checkpoint data or journal data. | 2015-06-25 |
20150178165 | CHECKPOINTS FOR A FILE SYSTEM - Aspects of the subject matter described herein relate to checkpoints for a file system. In aspects, updates to the file system are organized into checkpoint buckets. When a checkpoint is desired, subsequent updates are directed to another checkpoint bucket. After global tables have been updated for updates in the current checkpoint bucket, a logical copy of the global tables is created. This logical copy is stored as part of the checkpoint data. To assist in recovery, a checkpoint manager may wait until all updates of the current checkpoint bucket have been written to storage before writing final checkpoint data to storage. This final checkpoint data may refer to the logical copy of the global tables and include a validation code to verify that the checkpoint data is correct. | 2015-06-25 |
20150178166 | APPARATUS AND METHOD FOR MONITORING MULTIPLE MICRO-CORES - An apparatus and method for monitoring multiple micro-cores enable one watchdog to monitor a plurality of micro-cores. The multiple micro-core monitoring apparatus includes: a plurality of micro-cores that periodically output clear signals having different pulse waves; and a watchdog that respectively receives the clear signals having different pulse waves so as to determine presence or absence of an error in the micro-cores, and reset an erroneous micro-core. | 2015-06-25 |
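The single-watchdog scheme above distinguishes cores by their distinct clear-signal pulses. A sketch of the comparison step, with pulse widths as plain numbers; all names here are illustrative assumptions.

```python
def check_cores(observed_pulses, expected_pulses):
    """Identify micro-cores whose clear signal is missing or malformed.

    `expected_pulses` maps each core to the pulse width unique to it;
    `observed_pulses` holds the widths actually seen this period. Any
    core whose observed pulse is absent or wrong is returned for reset.
    """
    return [core for core, width in expected_pulses.items()
            if observed_pulses.get(core) != width]
```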
20150178167 | SYSTEMS AND METHODS FOR GENERATING CATALOGS FOR SNAPSHOTS - A computer-implemented method for generating catalogs for snapshots may include (1) identifying an initial snapshot and a subsequent snapshot for a protected volume, (2) providing identifiers of the initial snapshot and the subsequent snapshot to a storage vendor application programming interface (API), (3) receiving, from the storage vendor API, an indication of at least one difference between the initial snapshot and the subsequent snapshot, and (4) synthetically generating a catalog for the subsequent snapshot based on a preexisting catalog for the initial snapshot such that the synthetically generated catalog reflects the difference between the initial snapshot and the subsequent snapshot indicated by the storage vendor API. Various other methods, systems, and computer-readable media are also disclosed. | 2015-06-25 |
20150178168 | Persistent Data Across Reboots - A method, system and computer-usable medium are disclosed for persisting Lightweight Memory Trace (LMT) data across reboots of a system. One or more LMT traces are stored in a predetermined pinned memory area with a server's operating system (OS) through a system reboot. A pointer to each LMT is likewise stored in nonvolatile storage (NVS) at a known memory location. The pointers in NVS point to a page which describes where the LMT trace and other kernel structures are in real memory. During initialization, the OS guards these preserved pages to prevent them from being used. By keeping the current and prior address within NVS, the current LMT and prior traces can be retrieved and processed to determine the cause of the system reboot. | 2015-06-25 |
20150178169 | VIRTUAL FULL BACKUPS - According to embodiments described herein, a backup server maintains backup data for a set of data, which includes data for a first block and a second block. Backup data for the first and second block include backup data for a plurality of versions of the first and second block. A distinct watermark is stored for each version of the first block and each version of the second block. In response to a request to perform a restoration operation on the set of data, a particular version of the first block and a particular version of the second block are selected to use in the restoration operation by comparing a restoration target with the watermarks of the version of the first block and second block. The selected version of the first block has a different watermark than the selected version of the second block. | 2015-06-25 |
20150178170 | Method and Apparatus for Recovering Data - In a data recovery method, there are a server and a plurality of storage devices each storing a copy of a data block. The server divides each copy of the data block into N segments corresponding to a sequence of N partitions. And then, the server constructs a plurality of different trial data blocks each including N segments corresponding to the sequence of N partitions. After that, the server calculates a check code for each trial data block, and continues to identify a trial data block having a check code identical to a pre-stored standard check code of the data block. At last, the server replaces at least one of the copies of the data block with the identified trial data block having the check code identical to the pre-stored standard check code. | 2015-06-25 |
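The trial-block search above can be sketched directly from the abstract: split each copy into N segments, try every per-partition choice of segments across copies, and accept the first trial block whose check code matches the pre-stored standard. CRC-32 stands in here for whatever check code the patent actually uses.

```python
import itertools
import zlib

def recover_block(copies, n_partitions, standard_crc):
    """Rebuild a data block from several partially corrupted copies.

    Each copy is divided into `n_partitions` equal segments. Trial blocks
    are assembled by choosing, for every partition, a segment from any
    copy, until one trial block's CRC-32 matches `standard_crc`.
    Returns the recovered bytes, or None if no combination matches.
    """
    seg_len = len(copies[0]) // n_partitions
    # segments[i] lists each copy's candidate bytes for partition i
    segments = [[c[i * seg_len:(i + 1) * seg_len] for c in copies]
                for i in range(n_partitions)]
    for choice in itertools.product(*segments):
        trial = b"".join(choice)
        if zlib.crc32(trial) == standard_crc:
            return trial
    return None
```

Note the search is exponential in the number of partitions times copies; it only works because corruption is assumed to hit different partitions in different copies.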
20150178171 | FILE CORRUPTION RECOVERY IN CONCURRENT DATA PROTECTION - An incremental backup system that performs the following (not necessarily in the following order): (i) making a plurality of time-ordered journal entries; (ii) determining that a corruption condition exists; (iii) responsive to a corruption condition, constructing a first incremental mirror data set that reflects a backup data set and all journal entries up to a first corrupted journal entry which is the earliest in time journal entry, of the plurality of journal entries, that is a corrupted journal entry; (iv) responsive to a corruption condition, constructing a second incremental mirror data set that reflects the backup data set and all journal entries up to the first corrupted journal entry; and (v) checking for corruption in the first and second incremental mirror data sets to determine the latest uncorrupted version of the data set. | 2015-06-25 |
20150178172 | VALIDATING CONNECTION, STRUCTURAL CHARACTERISTICS AND POSITIONING OF CABLE CONNECTORS - In one or more aspects, a determination is made as to whether a connector is securely fastened, whether the connector connected within a socket structure is the expected connector for that socket structure, and/or whether connectors coupled to one another via one or more cables are properly positioned for communication between them. Information on selected physical connection elements of a connector is used to determine one or more structural characteristics of the cable(s) connected to the connector and to determine whether the connector is the expected connector for a particular socket structure. | 2015-06-25 |
20150178173 | VALIDATING CONNECTION, STRUCTURAL CHARACTERISTICS AND POSITIONING OF CABLE CONNECTORS - In one or more aspects, a determination is made as to whether a connector is securely fastened, whether the connector connected within a socket structure is the expected connector for that socket structure, and/or whether connectors coupled to one another via one or more cables are properly positioned for communication between them. Information on selected physical connection elements of a connector is used to determine one or more structural characteristics of the cable(s) connected to the connector and to determine whether the connector is the expected connector for a particular socket structure. | 2015-06-25 |
20150178174 | Generating and Detecting Hang Scenarios in a Partially Populated Simulation Environment - A method, system and computer-usable medium are disclosed for detecting the cause of a system hang in a verification environment. Hardware components associated with the design under test that are not included in the verification environment are replaced by software drivers. A dependency is set between a first driver and a second driver such that quiescing of the first driver is prevented until the second driver is quiesced. Each driver in a simulation test is designated to be either independent or dependent, with each dependent driver being associated with at least one independent driver. The independent driver is quiesced at a predetermined time. Dependent drivers do not quiesce until all of their associated drivers have quiesced and completed all of their respectively issued instructions. | 2015-06-25
20150178175 | INFORMATION PROCESSING DEVICE AND MONITORING METHOD - An information processing device includes a processing unit, a control unit, and a monitoring unit. The processing unit executes an OS. The control unit controls an I/O device connected to the processing unit, and obtains, from the processing unit, management information about the I/O device. The monitoring unit monitors a boot-up state of the OS based on the management information obtained by the control unit. | 2015-06-25 |
20150178176 | SYSTEMS, METHODS, AND COMPUTER PROGRAMS PRODUCTS PROVIDING RELEVANT CORRELATION OF DATA SOURCE PERFORMANCE - A method performed by a monitoring tool in a computer system, the method including: for a set of network nodes in a computer system: applying a correlation formula on an input based on performance data of the set, and determining a correlation score based on applying the correlation formula, the correlation score indicating a correlation between network nodes in the set; determining, based on the correlation scores, a first list including a first plurality of network nodes having a correlation score that satisfies a first threshold; identifying a second plurality of network nodes included in the first list, the second plurality of network nodes having a correlation score that satisfies a second threshold, which indicates a correlation that is higher than the first threshold; analyzing the performance data of the second plurality against a constancy metric; and removing, based on the analyzing, the second plurality from the first list. | 2015-06-25 |
20150178177 | COHERENCE PROTOCOL TABLES - An agent is provided to include state table storage to hold a set of state tables to represent a plurality of coherence protocol actions, where the set of state tables is to include at least one nested state table. The agent further includes protocol logic associated with the state table storage, the protocol logic to receive a coherence protocol message, and determine a coherence protocol action of the plurality of coherence protocol actions from the set of state tables based at least in part on the coherence protocol message. | 2015-06-25 |
20150178178 | Call Graph Simplification/Comparison and Automatic Initial Suspects Finding of Performance Degradations - In one embodiment, a method for call graph analysis is provided. The method includes determining a plurality of nodes in a call graph. The plurality of nodes represent resource consumption of functions of a software program executed in a software system. A simplification factor is determined. A first set of nodes in the plurality of nodes is then eliminated based on exclusive values for the plurality of nodes, inclusive values for the plurality of nodes, and the simplification factor. An inclusive value for a node is a first amount of resources consumed by the node and any descendent nodes of that node. An exclusive value for the node is a second amount of resources consumed by the node. A simplified call graph is output including a second set of nodes in the plurality of nodes. The second set of nodes does not include the eliminated first set of nodes. | 2015-06-25 |
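The elimination step above can be sketched with an assumed rule; the abstract says nodes are eliminated based on exclusive values, inclusive values, and the simplification factor, but does not give the exact formula, so the cutoff below is a guess for illustration.

```python
def simplify_call_graph(nodes, factor):
    """Drop low-cost nodes from a profiled call graph.

    `nodes` maps a function name to an (inclusive, exclusive) cost pair:
    inclusive counts the node plus all its descendants, exclusive counts
    the node alone. Assumed rule: a node is eliminated when both costs
    fall below `factor` times the total exclusive cost of the graph.
    """
    total = sum(excl for _, excl in nodes.values())
    cutoff = factor * total
    return {name: (incl, excl) for name, (incl, excl) in nodes.items()
            if incl >= cutoff or excl >= cutoff}
```

Keeping nodes whose inclusive cost is large preserves cheap wrappers that dominate the graph structurally, which matters when comparing graphs to find initial suspects for a degradation.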
20150178179 | CREATING TRACE DATA FROM RECENT SOFTWARE OUTPUT AND ACTIVITY - Creating additional trace entries by dynamically processing recently captured output data, working data, and input data to diagnose a software error. Integrating additional trace entries in chronological order with conventional trace entries into a single trace dataset for analysis. | 2015-06-25 |
20150178180 | CREATING TRACE DATA FROM RECENT SOFTWARE OUTPUT AND ACTIVITY - Creating additional trace entries by dynamically processing recently captured output data, working data, and input data to diagnose a software error. Integrating additional trace entries in chronological order with conventional trace entries into a single trace dataset for analysis. | 2015-06-25 |
20150178181 | METHODS AND SYSTEMS FOR INTERNALLY DEBUGGING CODE IN AN ON-DEMAND SERVICE ENVIRONMENT - A remote debug session for a server group is provided. A server group including multiple servers that perform workload sharing receives a request to debug code executed at the server group. The code is executed on behalf of a client of a database associated with the server group. One of the servers of the group initiates a debugging session and establishes a communication connection with the client. The one server maintains the connection open with the client for the duration of the debugging session. Subsequent requests related to the debug session can be handled in a number of ways by the server group, and all communication to the client about processing the requests is through the connection by the one server. | 2015-06-25 |
20150178182 | SOFTWARE TESTING PLATFORM AND METHOD - An integrated test accelerator platform that enables discrete testing solutions to be integrated to work together in harmony, and resources (people, data, and process) allocated to these solutions to collaborate and work in tandem. The platform provides a flexible way of interconnecting accelerators (i.e., automation framework, regression optimization, risk based testing, test data management, pair-wise testing, and metrics) using coupling and decoupling mechanisms. The accelerators are configurable and customizable for any unique test execution workflow scenario. This provides solutions for the end-to-end test process. | 2015-06-25
20150178183 | PAYLOAD GENERATION FOR COMPUTER SOFTWARE TESTING - A method of generating test payloads for a target system includes receiving a plurality of reference programs, each reference program modelling at least one aspect of the target system, building a specification for each received reference program, each specification defining illegal states for the respective reference program, analyzing each specification to determine one or more entry constraints that would generate an illegal state from a specific input, and synthesizing one or more payloads from the determined entry constraints. | 2015-06-25 |
20150178184 | TEST MANAGEMENT USING DISTRIBUTED COMPUTING - Embodiments of the present invention relate to test management using distributed computing. A computing device transmits a test job to a test client for servicing, wherein the test client has an idle resource. The computing device receives results of the test job servicing from the test client, wherein the test job includes computer code that is under development and predefined information that reflects how the test client is to execute the test job. | 2015-06-25 |
20150178185 | Technique for Controlling Memory Accesses - A technique for controlling memory accesses of a data stream is provided. The data is streamed between selected ports of a plurality of ports coupled to a memory. | 2015-06-25 |
20150178186 | METHOD AND APPARATUS FOR SWAPPING IN DATA TO MEMORY - A method for swapping in data to a memory is disclosed. The method includes: determining, according to a received instruction command, a process in which data needs to be swapped in to a physical memory of an operating system; and swapping in all data, in a swap partition of the operating system, of at least one process of the determined processes to the physical memory of the operating system according to a current free memory capacity of the physical memory of the operating system. According to embodiments of the present invention, data that needs to be swapped in to a physical memory of an operating system and corresponds to at least one process can be actively swapped in to the physical memory at a time according to a user requirement. | 2015-06-25 |
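The swap-in policy above can be sketched as follows. This is an illustrative simplification, not the patent's implementation: each selected process's swap-partition data is moved to physical memory in full, as long as the current free memory capacity allows it. The `Process` class and `swap_in_processes` function are hypothetical names.

```python
# Hypothetical sketch of the swap-in policy: given a free-memory budget,
# swap in the full swap-partition footprint of each selected process
# that fits. Names are illustrative, not from the patent.
from dataclasses import dataclass

@dataclass
class Process:
    pid: int
    swapped_bytes: int  # data currently resident in the swap partition

def swap_in_processes(selected, free_memory):
    """Return the pids whose swapped data is moved to physical memory,
    swapping each chosen process in fully while staying within the
    current free memory capacity."""
    swapped_in = []
    for proc in selected:
        if proc.swapped_bytes <= free_memory:
            free_memory -= proc.swapped_bytes
            swapped_in.append(proc.pid)
    return swapped_in
```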
20150178187 | SINGLE COMMAND, MULTIPLE COLUMN-OPERATION MEMORY DEVICE - A memory access command, column address and plurality of write data values are received within an integrated-circuit memory chip via external signaling links. In response to the memory access command, the integrated-circuit memory chip (i) decodes the column address to select address-specified sense amplifiers from among a plurality of sense amplifiers that constitute a sense amplifier bank, (ii) reads first data, constituted by a plurality of read data values, out of the address-specified sense amplifiers, and (iii) overwrites the first data within the address-specified sense amplifiers with second data constituted by one or more of the write data values and by one or more of the read data values. | 2015-06-25 |
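The three-step column operation can be modeled as a read-modify-write on a list standing in for the sense amplifier bank. This is a behavioral sketch only; the byte-granular write mask used to merge write data with read data is an assumption of the sketch, not a claim about the device's actual merge rule.

```python
# Behavioral model of the single-command column operation: decode a
# column address, read the addressed sense amplifiers, then overwrite
# them with a merge of incoming write data and the just-read data.
def column_read_modify_write(sense_amps, col_addr, write_data, write_mask):
    """Read data at col_addr, then write back write_data where the mask
    is set and the original read data elsewhere. Returns the read data."""
    width = len(write_data)
    read_data = sense_amps[col_addr:col_addr + width]   # (ii) read first data
    merged = [w if m else r                             # (iii) merge
              for r, w, m in zip(read_data, write_data, write_mask)]
    sense_amps[col_addr:col_addr + width] = merged      # overwrite in place
    return read_data
```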
20150178188 | Storage Module and Method for Re-Enabling Preloading of Data in the Storage Module - A storage module and method for re-enabling preloading of data in the storage module are disclosed. In one embodiment, a storage module is provided with a memory and a register. In response to receiving a register-setting command, the storage module sets a value in the register to enable preloading of data in the memory. The storage module then receives the data for storage in the memory. After the storage module has determined that all of the data has been received, the storage module changes the value in the register to disable further preloading of data. In response to receiving a register-resetting command, the storage module resets the value in the register to re-enable preloading of data even though the storage module already changed the value in the register to disable further preloading of data. | 2015-06-25 |
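The register behavior described in the abstract amounts to a small state machine: a set command enables preloading, completion of the data transfer disables it, and a reset command re-enables it. The class and method names below are illustrative, not the storage module's actual interface.

```python
# Minimal state-machine sketch of the preload-enable register.
class PreloadRegister:
    def __init__(self):
        self.preload_enabled = False

    def set_command(self):     # register-setting command received
        self.preload_enabled = True

    def data_complete(self):   # all preload data has been received
        self.preload_enabled = False

    def reset_command(self):   # register-resetting command received
        self.preload_enabled = True   # re-enable even after completion
```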
20150178189 | Systems and Methods for Scheduling Post-Write Read in Nonvolatile Memory - Post-write reading of data stored in a memory is performed only after a threshold amount of time has elapsed from the time the data was programmed. The threshold amount of time is at least the relaxation time of the memory cells, so that memory cells have reached stable states when post-write reading is performed. | 2015-06-25 |
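The scheduling rule reduces to a single comparison: a post-write read of a block is permitted only once at least the relaxation time has elapsed since programming. A minimal sketch, assuming a seconds-based clock:

```python
# Sketch of the post-write read gate: reading is allowed only after the
# cells' relaxation time has passed, so they have reached stable states.
def can_post_write_read(programmed_at, now, relaxation_time):
    """True when enough time has elapsed since programming."""
    return (now - programmed_at) >= relaxation_time
```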
20150178190 | DETECTING HOT SPOTS THROUGH FLASH MEMORY MANAGEMENT TABLE SNAPSHOTS - Decisions about how to correlate logical addresses to physical addresses in a flash memory (or other non-volatile random access memory) are based at least in part upon how frequently a logical address is accessed over time. Accordingly, software tracks accesses, by logical address, to the stored data using a flash memory metadata structure, and calculates a frequency-of-access value for each logical address of the set of logical addresses, corresponding to the relative frequency with which that logical address is accessed, based, at least in part, on the flash memory metadata structure. For example, logical addresses with low frequency may be grouped together so that the frequency of erasure operations (which are often done on a block-by-block basis) will tend to be reduced. | 2015-06-25 |
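The tracking step can be sketched as a per-address access counter with a hot/cold partition over the results. The threshold-based split is an illustrative grouping policy, not the patent's exact method, and the names are hypothetical.

```python
# Hypothetical sketch: count accesses per logical address in a metadata
# table, then group the coldest addresses together so the blocks holding
# them tend to be erased less often.
from collections import Counter

class AccessTracker:
    def __init__(self):
        self.counts = Counter()   # stands in for the metadata snapshot

    def record_access(self, lba):
        self.counts[lba] += 1

    def split_by_heat(self, lbas, threshold):
        """Partition addresses into (hot, cold) by access frequency."""
        hot = [a for a in lbas if self.counts[a] >= threshold]
        cold = [a for a in lbas if self.counts[a] < threshold]
        return hot, cold
```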
20150178191 | COLLABORATIVE HEALTH MANAGEMENT IN A STORAGE SYSTEM - In at least one embodiment, multiple controllers implement collaborative management of a non-volatile hierarchical storage system. In the storage system, a first controller receives health reports from at least second and third controllers regarding health of multiple storage units of physical storage under control of the second and third controllers and maintains a health database of information received in the health reports. In response to a health event and based on information in the health database, the first controller modifies logical-to-physical address mappings of one or more of multiple storage units under its control such that data having greater access heat is mapped to relatively healthier storage units and data having less access heat is mapped to relatively less healthy storage units. Thereafter, the first controller directs write requests to storage units under its control in accordance with the modified logical-to-physical address mappings. | 2015-06-25 |
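The remapping policy can be sketched by sorting storage units by reported health and logical extents by access heat, then pairing hottest with healthiest. The flat dictionaries below are an illustrative simplification of the health database and heat statistics.

```python
# Sketch of the health-aware mapping: hotter data goes to healthier
# storage units, colder data to less healthy ones.
def remap_by_health(heat_by_lba, health_by_unit):
    """Return {lba: unit} mapping hotter data to healthier units."""
    lbas_hot_first = sorted(heat_by_lba, key=heat_by_lba.get, reverse=True)
    units_healthy_first = sorted(health_by_unit, key=health_by_unit.get,
                                 reverse=True)
    return dict(zip(lbas_hot_first, units_healthy_first))
```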
20150178192 | NONVOLATILE MEMORY DEVICE AND DATA STORAGE DEVICE INCLUDING THE SAME - A data storage device including a first nonvolatile memory device having a first state information transmission block, a second nonvolatile memory device having a second state information transmission block, which shares a state information line with the first state information transmission block, and a controller having a state information reception block which is suitable for transmitting a control signal for controlling the first state information transmission block and the second state information transmission block to transmit a state information frame, and sequentially receiving a first state information frame transmitted from the first state information transmission block and a second state information frame transmitted from the second state information transmission block, through the state information line. | 2015-06-25 |
20150178193 | APPARATUS AND METHOD FOR MANAGING FLASH MEMORY BY MEANS OF WRITING DATA PATTERN RECOGNITION - An apparatus and method for managing flash memory based on recognition of patterns of write-target data are disclosed. A data analysis unit analyzes bit storage patterns that are stored in cells of the flash memory, and a data matching unit matches corresponding alternative patterns to the bit storage patterns based on the results of the analysis of the data analysis unit. According to the present invention, the reliability and durability of NAND flash memory can be improved because a minimum number of “0” bits are stored in a page. Furthermore, the application of the technology is easy and simple because a memory controller can perform management without changes in the structure and cell arrangement of a NAND flash device. | 2015-06-25 |
20150178194 | SYSTEMS AND METHODS OF ADDRESS-AWARE GARBAGE COLLECTION - A method includes determining a first logical block address (LBA) range of a first set of data units of a first candidate block of the memory. The method also includes determining a second LBA range of a second set of data units of a relocation block of the memory. The method also includes determining that the first LBA range matches the second LBA range. The method further includes relocating first valid data of the first candidate block to the relocation block of the memory in response to determining that the first LBA range matches the second LBA range, where the first LBA range corresponds to multiple LBAs. | 2015-06-25 |
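The matching check can be sketched directly from the abstract: valid data of a candidate block is relocated only when its LBA range matches that of the open relocation block. Representing a range as the (min, max) of a block's LBAs is an assumption of this sketch.

```python
# Sketch of the address-aware relocation decision.
def lba_range(lbas):
    """The LBA range of a set of data units, as (lowest, highest) LBA."""
    return (min(lbas), max(lbas))

def should_relocate(candidate_lbas, relocation_lbas):
    """Relocate the candidate's valid data only if its LBA range
    matches the relocation block's LBA range."""
    return lba_range(candidate_lbas) == lba_range(relocation_lbas)
```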
20150178195 | METHOD AND AN APPARATUS FOR MEMORY ADDRESS ALIGNMENT - A method, and a system embodying the method, for memory address alignment is disclosed, comprising: configuring one or more naturally aligned buffer structures; providing a return address pointer in a buffer of one of the naturally aligned buffer structures; determining a configuration of that naturally aligned buffer structure; applying modulo arithmetic to the return address and at least one parameter of the determined configuration; and providing a stacked address pointer determined in accordance with the applied modulo arithmetic. | 2015-06-25 |
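The modulo step can be illustrated concretely: for a naturally aligned buffer, modulo arithmetic on an interior pointer and the buffer size recovers the aligned base address. Power-of-two sizing is the natural-alignment assumption of this sketch.

```python
# Sketch of the modulo-arithmetic alignment step: recover the naturally
# aligned buffer base from a pointer into the buffer.
def aligned_base(return_addr, buffer_size):
    """Strip the offset within a naturally aligned buffer."""
    assert buffer_size & (buffer_size - 1) == 0, "size must be a power of two"
    return return_addr - (return_addr % buffer_size)
```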
20150178196 | APPARATUS AND METHOD FOR CONFIGURABLE REDUNDANT FUSE BANKS - An apparatus is contemplated for storing and providing configuration data to an integrated circuit device, the apparatus having a fuse array and a plurality of cores. The fuse array is disposed on a die and has a first plurality of semiconductor fuses and a second plurality of semiconductor fuses. The plurality of cores is disposed on the die, where each of the plurality of cores is coupled to the fuse array. Each of the plurality of cores includes an array control configured to access the first and second pluralities of fuses, and to process first states of the first plurality of semiconductor fuses and second states of the second plurality of semiconductor fuses according to contents of a configuration data register. | 2015-06-25 |
20150178197 | Addressing Auto Address Assignment and Auto-Routing in NAND Memory Network - A topology for memory circuits of a non-volatile memory system reduces capacitive loading. For a given channel, a single memory chip can be connected to the controller, but is in turn connected to multiple other memory devices that fan out in a tree-like structure, which can also fan back in to a single memory device. In addition to the usual circuitry, such as a memory array and associated peripheral circuitry, the memory chip also includes a flip-flop circuit and can function in several modes, including pass-through and active modes. Techniques are presented for the addressing of memory chips within such a topology, including an address assignment scheme. | 2015-06-25 |
20150178198 | Hypervisor Managing Memory Addressed Above Four Gigabytes - Approaches for performing memory management by a hypervisor. A host operating system and a hypervisor are executed on a device. The host operating system is not configured to access physical memory addressed above four gigabytes. The hypervisor manages memory for a device, including memory addressed above four gigabytes. When the hypervisor instantiates a virtual machine, the hypervisor may allocate memory pages for the newly instantiated virtual machine by preferentially using any unassigned memory addressed above four gigabytes before using memory allocated from the host (and hence addressed below four gigabytes). | 2015-06-25 |