7th week of 2014 patent application highlights part 61
Patent application number | Title | Published |
20140047104 | Systems and Methods for Load Balancing Using Predictive Routing - Systems and methods are disclosed for routing requests for information based on predictive data. The systems and methods may receive measurement data indicative of states of each of a plurality of destination servers, and generate predicted measurement data values for each of the plurality of destination servers based on the received measurement data. The predicted measurement data values may represent predicted states of each of the destination servers at a time later than a time corresponding to the received measurement data. The systems and methods may also receive requests for information from a client computer, and route the received requests for information to one of the plurality of destination servers based on the predicted measurement data values. | 2014-02-13 |
20140047105 | LINK AGGREGATION USING DIGESTS - Methods, systems and computer readable media for link aggregation using digests are described. In some implementations, the method can include obtaining information about each port in a group of one or more ports. The method can also include computing a digest corresponding to each port in the group of one or more ports, the digest being based on the information about the corresponding port. The method can further include determining whether each port in the group of one or more ports is suitable for aggregation in a link aggregation group by comparing the digest corresponding to each port with a digest of a port in the link aggregation group. | 2014-02-13 |
20140047106 | REAL-TIME COMPRESSIVE DATA COLLECTION FOR CLOUD MONITORING - Technologies are presented for implementing a compressive-sensing-based data collection system in a cloud environment. In some examples, high-dimensional sensor data may be compressed using sparsity transforms and compressive sampling. The resulting low-dimensional data messages may be steered through a switch network to a cloud service manager, which then reconstructs the compressed messages for subsequent analysis, reporting, and/or comparable actions. | 2014-02-13 |
20140047107 | REMOTE INDUSTRIAL MONITORING AND ANALYTICS USING A CLOUD INFRASTRUCTURE - A cloud-based infrastructure facilitates gathering, transmitting, and remote storage of control and automation data using an agent-based communication channel. The infrastructure collects the industrial data from an industrial enterprise and intelligently sorts and organizes the acquired data based on selected criteria. Message queues can be configured on the cloud platform to segregate the industrial data according to priority, data type, or other criteria. Behavior assemblies stored in customer-specific manifests on the cloud platform define customer-specific preferences for processing data stored in the respective message queues. An agent-based analytics framework in the cloud platform performs desired analytics on the data using distributed parallel processing. | 2014-02-13 |
20140047108 | SELF ORGANIZING NETWORK EVENT REPORTING - A system and method for detecting, reporting and collecting information associated with network events is provided. A network element, such as an Event Reporter, detects an event that requires reporting to a manager of the network. The Event Reporter can determine if it is the lead responsible for reporting the event and forward it directly to the manager. Alternatively, it can forward the event report to a peer entity that has been designated as the lead. If the Event Reporter is the lead for the event, it can determine that it should relinquish the lead responsibility and initiate a hand-over of the lead role to a peer. | 2014-02-13 |
20140047109 | Integrated Adaptive Anycast For Content Distribution - A system includes first and second cache servers, a domain name server, and a route controller. The cache servers are each configured to respond to an anycast address. Additionally, the first cache server is configured to respond to a first unicast address, and the second cache server is configured to respond to a second unicast address. The route controller is configured to determine whether the status of the first cache server is non-overloaded, overloaded, or offline. The route controller is further configured to instruct the domain name server to provide the second unicast address when the status is overloaded or offline, and modify routing of the anycast address to direct a content request sent to the anycast address to the second cache server when the status is offline. The domain name server is configured to receive a request from a requestor for a cache server address. Additionally, the domain name server is configured to provide an anycast address to the requestor when the status of the first cache server is non-overloaded, and provide the second unicast address to the requestor when the status of the first cache server is offline or overloaded. | 2014-02-13 |
20140047110 | DEVICE LEVEL ENABLEMENT OF A COMMUNICATIONS PROTOCOL - An apparatus, system, and method are disclosed for device level enablement of a communications protocol. An adapter compatibility module determines an adapter compatibility status for a plurality of host adapters. A positive adapter compatibility status indicates that each host adapter in the plurality of host adapters is compatible with a communications protocol. A processor compatibility module determines a processor compatibility status for one or more processors. The one or more processors coordinate data transfers to and from the plurality of host adapters. A positive processor compatibility status indicates that each of the one or more processors is compatible with the communications protocol. A compatibility summary module determines a compatibility summary for the plurality of host adapters and the one or more processors. The compatibility summary indicates a positive compatibility relative to the communications protocol in response to a positive processor compatibility status and a positive adapter compatibility status. | 2014-02-13 |
20140047111 | SYSTEMS AND METHODS TO CONTROL WEB SCRAPING - Systems and methods to control web scraping through a plurality of web servers using real time access statistics are described. | 2014-02-13 |
20140047112 | NETWORK ANALYSIS ASSISTANCE DEVICE, NETWORK ASSESSMENT DEVICE, NETWORK ANALYSIS ASSISTANCE METHOD, NETWORK ASSESSMENT METHOD, NETWORK ANALYSIS ASSISTANCE PROGRAM AND NETWORK ASSESSMENT PROGRAM - A first electronic message collector collects electronic messages travelling on a first network and stores the electronic messages in a first storage. A second electronic message collector collects electronic messages travelling on a second network and stores the electronic messages in a second storage. An electronic message associator retains a mapping table in which the correlation, or similar, between electronic messages travelling from the first network to a gateway device and electronic messages travelling from the gateway device to the second network is defined. The electronic message associator references the mapping table and associates the electronic messages stored in the second storage with the electronic messages stored in the first storage. From the result of the above-mentioned association, a status analyzer determines whether or not receipt of electronic messages between the first network and the second network is accomplished normally. | 2014-02-13 |
20140047113 | HIERARCHICAL CRITERIA-BASED TIMEOUT PROTOCOLS - A method of applying a timeout protocol by an access manager to a plurality of resources may include storing the timeout protocol comprising at least one criterion, and receiving a request for a first resource. Each of the resources can be segregated into separate application domains, the first resource can be associated with a first attribute, and the first attribute can be assigned a first value. The method may also include determining that the first value satisfies the at least one criterion, associating the timeout protocol with the first resource, and associating the timeout protocol with each resource that is associated with the first attribute assigned a value that satisfies the at least one criterion. The method may further include granting access to the first resource according to the timeout protocol. | 2014-02-13 |
20140047114 | VIRTUAL DESKTOP POLICY CONTROL - In one implementation, a network device provides virtual desktop policy control. The network device detects a number of sessions hosted by a virtual desktop interface (VDI) server, and performs a comparison of the number of sessions to a predetermined threshold capacity of the network device. When a request for a new session to be hosted by the VDI server is received at the network device, the new session request is forwarded according to the comparison of the number of sessions to the predetermined threshold capacity. In one example, the new request is forwarded to establish a new VDI session with the VDI server but with limited capabilities. For example, the client device of the new VDI session may have access to a generic desktop set of necessary applications but not all applications otherwise available to the client device. | 2014-02-13 |
20140047115 | IMMEDIATELY LAUNCHING APPLICATIONS - Disclosed are various embodiments for a deployment management system. A second version of a deployable application is executed concurrently with a first version. Network traffic sent to the first version of the application is redirected to the second version. In the event of an error, network traffic is directed back to the first version of the application. After a period of concurrent execution, the first version of the application is terminated. | 2014-02-13 |
20140047116 | SERVICE MANAGEMENT MODES OF OPERATION IN DISTRIBUTED NODE SERVICE MANAGEMENT - A distributed node service management system utilizes multiple existing processor nodes of a distributed computing system, in support of the primary data processing functions of the distributed computing system. The distributed node service management system coordinates and manages service functions on behalf of processor nodes of the distributed computing system. Other features and aspects may be realized, depending upon the particular application. | 2014-02-13 |
20140047117 | RESOLVING INFORMATION IN A MULTITENANT DATABASE ENVIRONMENT - Disclosed herein are techniques for creating a representation of dependency relationships between computing resources within a computing environment. In some implementations, one or more sources for dependency analysis may be identified. Each source may be capable of being accessed to provide computing functionality via the computing environment. Each source may include one or more references to a respective one or more computing resources. Each computing resource may define a unit of the computing functionality available within the computing environment. A plurality of dependency relationships may be identified based on the one or more sources. A dependency relationship representation may be created based on the identified dependency relationships. | 2014-02-13 |
20140047118 | OPTIMIZING RESOURCE CONFIGURATIONS - Systems and methods for monitoring the performance associated with fulfilling resource requests and determining optimizations for improving such performance are provided. A processing device obtains and processes performance metric information associated with processing a request corresponding to a set of resources. The processing device uses the processed performance metric information to determine a resource configuration to be associated with the set of resources. In some embodiments, in making such a determination, the processing device assesses performance metric information collected and associated with subsequent requests corresponding to the content associated with the set of resources and using each of a variety of alternative resource configurations. The processing device may also consider a number of factors. Aspects of systems and methods for generating recommendations to use a particular resource configuration to process a subsequent request corresponding to the content associated with the set of resources are also provided. | 2014-02-13 |
20140047119 | METHOD AND SYSTEM FOR MODELING AND ANALYZING COMPUTING RESOURCE REQUIREMENTS OF SOFTWARE APPLICATIONS IN A SHARED AND DISTRIBUTED COMPUTING ENVIRONMENT - An application manager receives or defines a service specification for a first application that defines a set of required computing resources that are necessary to run each application component of the first application. A resource supply manager in communication with the application manager manages a plurality of computing resources in a shared computing environment. The application manager is operable to request the set of required computing resources from the computing resource supply manager, and wherein the resource supply manager determines the availability of the required computing resources within the shared computing environment according to resource allocation policies and allocates computing resources to the application manager, and wherein the application manager is operable to manage allocation of the computing resources to the first application, the application manager operable to create and manage deployment of instances of each application component of the first application on the allocated computing resources. | 2014-02-13 |
20140047120 | ONTOLOGY BASED RESOURCE PROVISIONING AND MANAGEMENT FOR SERVICES - Techniques are disclosed for integration, provisioning and management of entities and processes in a computing system such as, by way of example only, business entities and business processes. In particular, techniques are disclosed for ontology based resource provisioning and management for services. For example, such an ontology based approach can be utilized in conjunction with a business support system which may be employed in conjunction with a cloud computing environment. | 2014-02-13 |
20140047121 | FAST SETUP RESPONSE PREDICTION - Mechanisms are provided to accelerate Real-Time Streaming Protocol (RTSP) setup messages. A client transmits an RTSP request to a server. The server responds to the request and preemptively responds with acknowledgements for messages not yet received. For example, a server responds to an RTSP describe message with an RTSP describe acknowledgement, an RTSP setup acknowledgement, and an RTSP play response before setup and play messages are received by the server or even transmitted by the client. The client processes the anticipatory responses and transmits setup and play responses when the anticipatory responses are processed. | 2014-02-13 |
20140047122 | HIGH AVAILABILITY SESSION RECONSTRUCTION - A first message is received at a primary container that is replicated by a secondary container. The first message is an initial message to initiate a first session. The first message is processed by an application in the primary container. At a point in time, the primary container becomes unavailable, and the system and method detect that it is unavailable. A second message is received. The second message is associated with the first session. The second message is modified by moving at least a portion of a header in the second message into a different header in the second message and adding an additional header to the second message in response to the primary container being unavailable. | 2014-02-13 |
20140047123 | QUALITY OF EXPERIENCE REPORTING FOR COMBINED UNICAST-MULTICAST/BROADCAST STREAMING OF MEDIA CONTENT - Embodiments of the present disclosure describe devices, methods, computer-readable media and systems configurations for monitoring and reporting quality of experience (QoE) metrics that are associated with an access method. Other embodiments may be described and claimed. | 2014-02-13 |
20140047124 | TRIVIAL FILE TRANSFER PROTOCOL (TFTP) DATA TRANSFERRING PRIOR TO FILE TRANSFER COMPLETION - One embodiment is directed toward a method of transferring data using a Trivial File Transfer Protocol (TFTP). The method includes receiving a first subset of TFTP data packets containing first payload data for the file, extracting the first payload data from the first subset of TFTP data packets, conforming the first payload data to a second protocol, and sending the first payload data to another device prior to receiving a last TFTP data packet for the file, wherein the first payload data is sent in compliance with the second protocol. The method also includes receiving one or more remaining TFTP data packets containing the remaining payload data for the file, the one or more remaining TFTP data packets including the last TFTP data packet for the file, extracting the remaining payload data from the one or more remaining TFTP data packets, and sending the remaining payload data to the other device. | 2014-02-13 |
20140047125 | APPARATUS AND METHOD FOR CONTROLLING VIRTUAL SWITCHES - An apparatus includes a controller, a converter, and table information storing physical identification information identifying a physical port of first and second physical switches in association with virtual identification information identifying a virtual port of a virtual switch. For each virtual switch, the controller outputs control information determined based on a predetermined protocol and manages the received control information. The converter, based on the table information, converts the physical identification information added to the received control information and identifying a physical port via which the first physical switch has received the control information, into the associated virtual identification information, converts the virtual identification information added to the control information outputted from the controller and identifying a virtual port via which the control information is to be transmitted, into the associated physical identification information, and relays the control information between the controller and the second physical switch. | 2014-02-13 |
20140047126 | COORDINATED ENFORCEMENT OF TRAFFIC SHAPING LIMITS IN A NETWORK SYSTEM - Methods and protocols coordinate enforcement of application traffic shaping limits within clusters of middleware appliance information handling systems (MA IHSs). The protocols dynamically set the local traffic shaping requirements at each entry point of an MA IHS. Each MA IHS receives from other MA IHSs runtime statistics containing local shaping requirements and rates of requests. The method uses runtime statistics to measure performance against specified traffic shaping goals, and based on this comparison uses unique protocols to dynamically adjust the local shaping requirements in each MA IHS. The method may eliminate the need to statically bind service domains to particular MA IHSs. Additional MA IHSs activate and/or deactivate service domains to accommodate service domain (server farm) CPU resource demands. | 2014-02-13 |
20140047127 | COORDINATED ENFORCEMENT OF TRAFFIC SHAPING LIMITS IN A NETWORK SYSTEM - Methods and protocols coordinate enforcement of application traffic shaping limits within clusters of middleware appliance information handling systems (MA IHSs). The protocols dynamically set the local traffic shaping requirements at each entry point of an MA IHS. Each MA IHS receives from other MA IHSs runtime statistics containing local shaping requirements and rates of requests. The method uses runtime statistics to measure performance against specified traffic shaping goals, and based on this comparison uses unique protocols to dynamically adjust the local shaping requirements in each MA IHS. The method may eliminate the need to statically bind service domains to particular MA IHSs. Additional MA IHSs activate and/or deactivate service domains to accommodate service domain (server farm) CPU resource demands. | 2014-02-13 |
20140047128 | METHOD FOR GENERATING ADDRESSES IN A COMPUTER NETWORK - In a method for creating a plurality of addresses (h) for a network element of a communication network, the following steps are provided: a) creating (1) a virtual identifier (c) for each address to be created from an existing identifier (a) of said network element and from at least one configured piece of additional information (b); b) creating (2) an address from at least one created virtual identifier; c) checking (3) the virtual identifiers created in such a way or the addresses created from said virtual identifiers for the presence of a collision; d) discarding (4) colliding virtual identifiers or the addresses created from said virtual identifiers. | 2014-02-13 |
20140047129 | METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR INTERFACING WITH AN UNIDENTIFIED HEALTH INFORMATION TECHNOLOGY SYSTEM - A method for analyzing data received from an unidentified third party health information technology system to identify the third party health information technology system is provided. The method may include, by a third party identification apparatus, receiving data from a health information technology system, analyzing the data, identifying the format of the data based on an analysis of the data, and extracting one or more data elements from the data in accordance with the format that was identified. Corresponding apparatuses and computer program products are also provided. | 2014-02-13 |
20140047130 | SYNCHRONIZATION OF ARTIFACTS ACROSS DIFFERENT DOMAINS - A method of synchronizing artifacts of a first domain with artifacts of a second domain is provided. The method includes: loading a first set of transformed artifacts and a first artifact map from a first domain into a second domain; generating an association model based on an evaluation of the first artifact map and a second artifact map; comparing a first transformed artifact of the first set of transformed artifacts with a second artifact of a second set of artifacts corresponding to the second artifact map based on the association model; determining differences based on the comparing; and selectively updating the second artifact map based on the differences. | 2014-02-13 |
20140047131 | TECHNIQUE FOR SYNCHRONIZED CONTENT SHARING - Multiple media devices … | 2014-02-13 |
20140047132 | STACKING ELECTRONIC SYSTEM - A stacking electronic system including a first master device, a second master device having a first connector, a second connector, and at least one slave device having a third connector is provided. The second and the third connectors have an error-proofing structure corresponding to each other, such that the first master device is structurally connected to the second master device to control the slave device directly. | 2014-02-13 |
20140047133 | Method and System for Late Binding of Features - A system, method, and computer-readable medium are disclosed for entitling the implementation of a feature associated with a device after it is manufactured. A feature entitlement management system receives a device's unique identifier, which is then processed to determine which features associated with the device are available for implementation. Once determined, the available features are provided to the user of the device, who in turn selects a feature for implementation. A feature entitlement is then generated by performing late binding entitlement operations to associate the selected feature's corresponding entitlement data with the device's unique identifier. The resulting feature entitlement is then processed to implement the selected feature. | 2014-02-13 |
20140047134 | METHODS AND STRUCTURE FOR HARDWARE MANAGEMENT OF SERIAL ADVANCED TECHNOLOGY ATTACHMENT (SATA) DMA NON-ZERO OFFSETS IN A SERIAL ATTACHED SCSI (SAS) EXPANDER - Methods and structure for enhanced SAS expander functionality to store and forward buffered information transmitted from a SATA end device to an STP initiator device while managing use of Non-Zero Offset ("NZO") field values in DMA Setup FISs transmitted by the SATA end device. The enhanced expander establishes a connection between an STP initiator and a SATA end device. The expander forwards a read command from the initiator to the end device. If NZO use is supported and enabled in the end device, the end device may return read data in any order by use of the NZO field values in multiple DMA Setup FISs. The expander is further adapted to store received data and the associated multiple DMA Setup FISs from the end device in its buffer and to forward the stored data to the initiator device. In another embodiment, use of NZO in the end device is disabled. | 2014-02-13 |
20140047135 | SYSTEMS AND METHODS FOR ENHANCING MULTIMEDIA EXPERIENCE - Systems and methods for enhancing multimedia experience are disclosed. A system includes a multimedia device adapted to obtain a multimedia data stream comprising multimedia data and at least one multimedia enhancement data sequence, and adapted to obtain instructions from a multimedia enhancement data sequence. The system further includes auxiliary devices communicatively coupled to the multimedia device, and adapted to receive the instructions from a multimedia device. The multimedia enhancement data sequences each include a start section, a target section, an instruction section, and an end section. A multimedia device is adapted to send instructions to at least one auxiliary device. A method includes obtaining a multimedia data stream comprising multimedia data and at least one multimedia enhancement data sequence using a multimedia device, obtaining instructions from a multimedia enhancement data sequence, and sending instructions to auxiliary devices communicatively coupled to the multimedia device. | 2014-02-13 |
20140047136 | LOCALIZED DEVICE MISSING DELAY TIMERS IN SAS/SATA TOPOLOGY - A SAS expander includes DMD timers for each PHY so that the expander can track disconnected devices directly connected to the expander and signal a SAS controller when the DMD is exceeded. A system including such SAS expanders may reduce the load on the system controller. A controller may recognize expanders capable of tracking DMDs for backwards compatibility. | 2014-02-13 |
20140047137 | INPUT/OUTPUT MODULE FOR PROGRAMMABLE LOGIC CONTROLLER BASED SYSTEMS - An input/output module for use in an industrial control system and connectable to a programmable logic controller (PLC), the input/output module having an interface configured for an electrical connection to the PLC, a plurality of pins configured for connection to one of a plurality of peripherals, an application specific integrated circuit (ASIC) disposed in the I/O module and electrically coupled to a system controller, the ASIC having a plurality of connection paths, each path being configured for a function, and a switch block configured to reassign a signal from a first connection path of the plurality of connection paths to a second connection path of the plurality of connection paths. | 2014-02-13 |
20140047138 | EXPANSION MODULE AND CLOUD DEVICE THEREOF - An expansion module is configured to provide expansion functions to a mobile electronic device. The expansion module includes a cloud device and at least one first expansion device. The cloud device includes a first expansion bus interface and a network interface. The first expansion device is coupled to the cloud device in a daisy-chain manner, wherein each first expansion device includes at least one first peripheral device. The cloud device is coupled to the mobile electronic device through the first expansion bus interface or the network interface, and provides the first peripheral device to the mobile electronic device for use. | 2014-02-13 |
20140047139 | METHOD AND APPARATUS TO MIGRATE EXISTING DATA AMONG STORAGE SYSTEMS - According to an aspect of the invention, a computer comprises a memory; and a processor operable to manage a plurality of path groups, each of which includes a plurality of logical paths associated with a host computer, wherein each logical path of the plurality of logical paths connects the host computer to a logical volume of one or more logical volumes in one or more storage systems. The processor is operable to manage a priority of each path group of the plurality of path groups, and to use a logical path of a first path group instead of a logical path of a second path group for I/O (input/output) usage between the host computer and the one or more storage systems, representing a migration of I/O usage from the second path group to the first path group, based on at least the priorities of the first and second path groups. | 2014-02-13 |
20140047140 | SYSTEM AND METHOD FOR PROVIDING A LINEARIZABLE REQUEST MANAGER - Described herein are systems and methods for improving concurrency of a request manager for use in an application server or other environment. A request manager receives a request, and upon receiving the request the request manager associates a token with the request. A reference to the request is enqueued in each of a plurality of queues, wherein each queue stores a local copy of the token. A first reference to the request is dequeued from a particular queue, wherein when the first reference to the request is dequeued, the token is modified to create a modified token. Thereafter the request is processed. When other references to the request are dequeued from other queues, the other references to the request are discarded. | 2014-02-13 |
20140047141 | BUFFER-RELATED USB COMMUNICATION - According to various embodiments, apparatuses and methods to communicate buffer allocation information are presented. The disclosed apparatuses and methods may include transmitting a buffer message by a wireless USB device to a wireless USB host, which may indicate an available storage space in a buffer of the USB device to store data from the USB host. The buffer message may be transmitted independent of whether or not the USB device has received a request message (e.g., from the USB host) for information relating to the available storage space in the buffer. Additionally, the buffer message may be transmitted independent of any data exchange mechanism between the USB host and the USB device. The USB device may receive a data packet from the USB host, and transmit a data packet acknowledgement message including data packet status information, and information regarding the available storage space in the buffer. | 2014-02-13 |
20140047142 | DATA TRANSMISSION SYSTEM AND METHOD THEREOF - A data transmission system and method are provided. The data transmission method receives a second format data packet sent by a host; decodes the second format data packet sent by the host, and translates the decoded second format data packet into a first format data packet; transmits the first format data packet to a first device; receives a transmission response sent by the first device in response to the first format data packet, determines whether to transmit the transmission response to the host, and performs a re-try flow when the transmission response does not need to be transmitted to the host. Preferably, a data transmission rate of the first device is slower than that of a second device, and the data transmission system is backward compatible with the first device, and the second format data packet is consistent with the second device. | 2014-02-13 |
20140047143 | WIRELESS VIDEO CAMERA AND CONNECTION METHODS INCLUDING A USB EMULATION - Systems and methods for connecting wireless cameras are provided. A computing device may include a network interface, and a processor configured to establish a virtual USB bus available to an operating system of the computing device, establish a virtual USB camera device, and report to the operating system that the virtual USB camera device is connected to the virtual USB bus. The virtual USB camera may be configured to establish a network connection to a network camera using the network interface, receive video data from the network camera via the network interface, and send the video data via the virtual USB bus. Alternatively, the virtual USB camera may send the video data to the operating system as USB packets, without establishing a virtual USB bus. | 2014-02-13 |
20140047144 | I/O DEVICE AND STORAGE MANAGEMENT SYSTEM - An input/output (I/O) device includes at least one communication port; at least one storage device that is attached to the I/O device and configured to provide a storage volume; and an I/O manager configured to manage operations of the I/O device. The I/O manager is configured to receive a request to create a new logical volume, create a new logical volume on the storage device based on the request, and define a first relationship between the created logical volume and a virtual I/O instance based on the request. The virtual I/O instance is a virtual access point for enabling a computer system connected to the I/O device via the communication port to access the created logical volume. | 2014-02-13 |
20140047145 | EXPANSION MODULE - An expansion module including a first expansion device and at least one second expansion device is provided. The first expansion device includes a first expansion bus interface, a second expansion bus interface and at least one first peripheral device. The first expansion device is coupled to a mobile electronic device via the first expansion bus interface. The first expansion bus interface provides the first peripheral device to the mobile electronic device for use. Each of the second expansion devices includes a third expansion bus interface and at least one second peripheral device. The second peripheral device is coupled to the third expansion bus interface and coupled to the second expansion bus interface in a daisy chain via the third expansion bus interface. The first expansion bus interface and the second expansion bus interface provide the second peripheral device to the mobile electronic device via the third expansion bus interface. | 2014-02-13 |
20140047146 | COMMUNICATION LOAD DETERMINING APPARATUS - A communication load determining apparatus is used for a communication system which includes a plurality of communication devices performing communication via a common bus. The communication system operates in accordance with a communication protocol which defines that a priority order is set for each of the frames transmitted from the communication devices, and that a frame having a lower priority has a longer transmission latency before being transmitted to the bus. In the communication load determining apparatus, a low-priority frame, having a lower priority than the other frames, is transmitted to the bus, and a transmission latency of the low-priority frame is measured. The communication load determining apparatus determines whether or not abnormality has occurred in a communication load in the bus on the basis of the measured transmission latency to produce a determination result. The produced determination result is stored. | 2014-02-13 |
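The load-determination step in this abstract lends itself to a short sketch: probe the bus with a low-priority frame, measure how long it waits, and flag an abnormal load when the latency exceeds a bound. This is an illustrative Python rendering only; the function name and the threshold value are assumptions, not taken from the application.

```python
# Sketch of the bus-load check: a low-priority probe frame's measured
# transmission latency is compared against a threshold; latencies above
# the threshold are classified as an abnormal communication load.
def classify_bus_load(probe_latencies_ms, threshold_ms=50.0):
    """Return a list of (latency, is_abnormal) determination results."""
    results = []
    for latency in probe_latencies_ms:
        results.append((latency, latency > threshold_ms))
    return results

# Three probe measurements; only the last exceeds the 50 ms bound.
history = classify_bus_load([3.2, 12.0, 75.5], threshold_ms=50.0)
```

Because the probe frame has the lowest priority, its waiting time is an upper bound on what any frame on the bus currently experiences, which is why a single threshold on its latency can stand in for overall load.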
20140047147 | BUS CONTROL DEVICE, IMAGE PROCESSING APPARATUS, AND BUS CONTROL METHOD - A bus control device includes a plurality of bus masters classified into a plurality of groups according to a priority level, a plurality of group buses each group bus being connected to a corresponding group of bus masters and assigned with a priority level determined according to the priority levels of the corresponding group of bus masters, an upper priority bus that arbitrates a plurality of bus obtaining requests received from the plurality of bus masters via the plurality of group buses, a plurality of masks respectively provided for the plurality of bus masters to mask the bus obtaining request addressed to the corresponding group bus from the corresponding bus master, and a plurality of mask controllers respectively provided for the plurality of group buses to output at least one mask signal that controls operation of at least one corresponding mask connected to the corresponding group bus. | 2014-02-13 |
20140047148 | DATA PROCESSING APPARATUS AND A METHOD FOR SETTING PRIORITY LEVELS FOR TRANSACTIONS - A data processing apparatus and a method for setting priority levels for transactions are provided. The apparatus has a shared resource for processing transactions, and at least one master device for issuing the transactions to the shared resource. The master device provides a plurality of sources of the transactions, and each of the transactions has a priority level associated therewith. Arbitration circuitry applies an arbitration policy to select a transaction from amongst multiple transactions issued to the shared resource. Adaptive priority circuitry is associated with at least one of the sources and monitors throughput indication data for previously issued transactions from the associated source. For each new transaction from the associated source, the circuitry sets the priority level to one of a plurality of predetermined priority levels dependent on the throughput indication data. The adaptive priority circuitry sets the lowest priority level from amongst the plurality of predetermined priority levels. | 2014-02-13 |
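The adaptive-priority idea above — pick one of several predetermined levels based on observed throughput, defaulting to the lowest level when throughput is adequate — can be sketched in a few lines. The mapping from shortfall to level, the level values, and the function name are all assumptions for illustration.

```python
# Sketch: map a source's throughput shortfall to one of a fixed set of
# priority levels. Full throughput earns the lowest level; larger
# shortfalls escalate toward the highest level.
def set_priority(observed_bps, target_bps, levels=(0, 1, 2, 3)):
    if observed_bps >= target_bps:
        return levels[0]                      # no shortfall: lowest priority
    shortfall = (target_bps - observed_bps) / target_bps   # in (0, 1]
    index = min(len(levels) - 1, 1 + int(shortfall * (len(levels) - 1)))
    return levels[index]
```

A usage intuition: a source meeting its 300 B/s target stays at level 0, one at half throughput rises to level 2, and a fully starved source reaches level 3, so the arbiter gradually favors sources that are falling behind.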
20140047149 | Interrupt Priority Management Using Partition-Based Priority Blocking Processor Registers - A method and circuit for a data processing system ( | 2014-02-13 |
20140047150 | Processor Interrupt Interface with Interrupt Partitioning and Virtualization Enhancements - A method and circuit for a data processing system ( | 2014-02-13 |
20140047151 | INTERRUPT PROCESSING UNIT FOR PREVENTING INTERRUPT LOSS - Techniques are disclosed relating to systems that allow sending and receiving of interrupts between processing elements. In various embodiments, a system includes an interrupt processing unit that in turn includes various indicators corresponding to processing elements. In some embodiments, the interrupt processing unit may be configured to receive an interrupt and determine whether a first processing element associated with the interrupt is available to receive interrupts. The system may initiate a corrective action if the first processing element is not available to receive interrupts. In some embodiments, the corrective action may include redirecting the interrupt to a second processing element. In some embodiments, the interrupt processing unit may include a dropped interrupt management register to store information corresponding to the second processing element. In some embodiments, the corrective action may include altering the power state of the first processing element such that it becomes available to receive interrupts. | 2014-02-13 |
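The redirect-on-unavailable corrective action described in this abstract reduces to a small routing decision. The sketch below is an assumption-laden illustration (function name, availability map, and the boolean "redirected" flag are all invented), not the application's actual circuit behavior.

```python
# Sketch: deliver an interrupt to its first-choice processing element if
# that element can receive interrupts; otherwise redirect it to a
# fallback element and report that a redirect occurred.
def route_interrupt(irq, first_pe, availability, fallback_pe):
    """availability: processing element id -> can it take interrupts?"""
    if availability.get(first_pe, False):
        return first_pe, False        # delivered normally
    return fallback_pe, True          # corrective action: redirected
```

In the application's terms, the second return value is what a dropped-interrupt management register would record, so software can later see which element the interrupt actually reached.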
20140047152 | Data communication interface for an agricultural utility vehicle - A data communication interface for an agricultural utility vehicle, particularly an agricultural tractor, having an interface connector that can be connected either to a first data communication network or to a second data communication network by means of an electrically operatable changeover device, wherein the first data communication network is terminated at a line end associated with the interface connector by means of a disconnectable terminating resistor, and having a control unit that connects the interface connector to the first data communication network by means of appropriate operating of the changeover device exclusively when it infers the presence of a control signal that is provided for disconnecting the terminating resistor. | 2014-02-13 |
20140047153 | COMPUTING APPARATUS WITH ENHANCED PARALLEL I/O FEATURES - Provided is a parallel I/O computing apparatus that includes a plurality of computing devices that may have different response characteristics depending on a number of parallel I/Os that are processed by the computing devices. The computing apparatus also includes an I/O dispatcher that distributes a different number of I/Os to one or more of the computing devices based on characteristics of the computing devices. | 2014-02-13 |
20140047154 | INTER-CHIP COMMUNICATIONS FOR IMPLANTABLE STIMULATING DEVICES - A device including a first integrated circuit (IC), a second IC configured to provide instructions to the first IC based on received data, wherein the first IC is a high-voltage IC and the second IC is a low-voltage IC, and a communication interface between the first and second ICs including a data bus of parallel data lines. The second IC is configured to select, based on the received data, one of a plurality of different communication modes for providing the instructions to the first IC via the communication interface, wherein each mode is defined by a quantity of address data and a quantity of configuration data used to provide the instructions to the first IC. | 2014-02-13 |
20140047155 | MEMORY MODULE THREADING WITH STAGGERED DATA TRANSFERS - A method of transferring data between a memory controller and at least one memory module via a primary data bus having a primary data bus width is disclosed. The method includes accessing a first one of a memory device group via a corresponding data bus path in response to a threaded memory request from the memory controller. The accessing results in data groups collectively forming a first data thread transferred across a corresponding secondary data bus path. Transfer of the first data thread across the primary data bus width is carried out over a first time interval, while using less than the primary data transfer continuous throughput during that first time interval. During the first time interval, at least one data group from a second data thread is transferred on the primary data bus. | 2014-02-13 |
20140047156 | HYBRID COMPUTING SYSTEM - A hybrid computing system comprising: a network fabric; at least one Root Complex board (RCB) and at least one Endpoint Board (EB). Each Root Complex board (RCB) comprises a first processor; a PCIe root complex connected to the first processor; and a first PCIe network switch directly connected to the PCIe root complex. Each Endpoint Board (EB) comprises a second processor; a PCIe interface connected to the second processor; and a second PCIe network switch connected to the PCIe interface. The PCIe network switches of each board (RCB, EB) are connected to the network fabric wherein each Root Complex board (RCB) and each Endpoint Board (EB) are configured for simultaneous use within the hybrid computing system. | 2014-02-13 |
20140047157 | PARALLEL COMPUTER SYSTEM, CROSSBAR SWITCH, AND METHOD OF CONTROLLING PARALLEL COMPUTER SYSTEM - A parallel computer system includes a plurality of processors including a first processor and a plurality of second processors; and a crossbar switch provided with a plurality of ports; wherein the first processor transmits data to a first port among the plurality of ports, and transmits standby time information to the first port in the case where the plurality of second processors are unable to transmit data to the first port despite receiving a communication authorization notification from the first port, and the first port receives the standby time information, and after the standby time elapses, selects one of the plurality of second processors. | 2014-02-13 |
20140047158 | SYNCHRONOUS WIRED-OR ACK STATUS FOR MEMORY WITH VARIABLE WRITE LATENCY - A memory controller comprises a command interface to transmit a memory command to a plurality of memory devices associated with the memory controller. The memory controller also comprises an acknowledgement interface to receive an acknowledgment status packet from the plurality of memory devices over a shared acknowledgement link coupled between the memory controller and the plurality of memory devices, the acknowledgement status packet indicating whether the command was received by the plurality of memory devices. In addition, the memory controller comprises a memory controller core to decode the acknowledgment status packet to identify a portion of the acknowledgement status packet corresponding to each of the plurality of memory devices. | 2014-02-13 |
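The decode step in this abstract — identifying the portion of a shared acknowledgement packet corresponding to each memory device — can be illustrated as a per-device bitfield split. Treating the packet as one bit per device is an assumption for the sketch; the real packet format is not given in the abstract.

```python
# Sketch: decode a wired-OR acknowledgement status packet, assuming one
# status bit per memory device (bit i set = device i acknowledged the
# command).
def decode_ack(status_byte, num_devices):
    return [bool((status_byte >> i) & 1) for i in range(num_devices)]
```

For example, a status value of 0b101 across three devices would mean devices 0 and 2 acknowledged while device 1 did not, letting the controller core retry only the missing device.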
20140047159 | ENTERPRISE SERVER WITH FLASH STORAGE MODULES - A server system, such as an enterprise server, may include an array of memory devices. The memory devices may include non-volatile or flash memory and be referred to as flash storage modules (“FSM”). The server system includes a host computer or host server that communicates with the array of FSM. The host may include a media management layer or flash transformation layer that is implemented by drivers on the host for controlling the FSM. | 2014-02-13 |
20140047160 | DATA WRITING METHOD, AND MEMORY CONTROLLER AND MEMORY STORAGE APPARATUS USING THE SAME - A data writing method for writing data into a memory cell of a rewritable non-volatile memory module, and a memory controller and a memory storage apparatus using the same are provided. The method includes recording a wear degree of the memory cell and adjusting an initial write voltage and a write voltage pulse time corresponding to the memory cell based on the wear degree thereof. The method further includes programming the memory cell by applying the initial write voltage and the write voltage pulse time, thereby writing the data into the memory cell. Accordingly, data can be accurately stored into the rewritable non-volatile memory module by the method. | 2014-02-13 |
20140047161 | System Employing MRAM and Physically Addressed Solid State Disk - A computer system includes a Central Processing Unit (CPU) that has a physically-addressed solid state disk (SSD), addressable using physical addresses associated with user data and provided by a host. The user data is to be stored in or retrieved from the physically-addressed SSD in blocks. Further, a non-volatile memory module is coupled to the CPU and includes flash tables used to manage blocks in the physically addressed SSD. The flash tables have tables that are used to map logical to physical blocks for identifying the location of stored data in the physically addressed SSD. The flash tables are maintained in the non-volatile memory modules thereby avoiding reconstruction of the flash tables upon power interruption. | 2014-02-13 |
20140047162 | MEMORY SYSTEM CAPABLE OF PREVENTING DATA DESTRUCTION - According to one embodiment, a memory system includes a memory unit, a first controller, and a second controller. In the memory unit, first to fourth levels (first level | 2014-02-13 |
20140047163 | NONVOLATILE MEMORY DEVICE AND PROGRAMMING METHOD - A non-volatile memory (NVM) includes a memory cell array of multi-level memory cells (MLC) arranged in physical pages. A programming method for the NVM includes: receiving first data and partitioning the first data according to a single bit page capacity of a physical page to generate partitioned first data, programming the partitioned first data as single-bit data to a plurality of physical pages, and receiving second data and programming the second data as multi-bit data to a selected physical page among the plurality of physical pages, wherein the second data is simultaneously programmed to the MLC of the selected physical page. | 2014-02-13 |
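The partitioning step in the method above — splitting incoming data into chunks sized to a page's single-bit capacity, one chunk per physical page — is simple to sketch. The function name and byte-oriented representation are assumptions; real devices partition at the page-buffer level.

```python
# Sketch: partition first data into per-page chunks no larger than the
# single-bit capacity of one physical page.
def partition_first_data(data, page_capacity):
    return [data[i:i + page_capacity]
            for i in range(0, len(data), page_capacity)]
```

With a (hypothetical) 3-byte single-bit page capacity, 8 bytes of first data would span three pages, the last partially filled; each chunk is then programmed in the fast single-bit mode before later multi-bit programming.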
20140047164 | Physically Addressed Solid State Disk Employing Magnetic Random Access Memory (MRAM) - A computer system includes a central processing unit (CPU), a system memory coupled to the CPU and including flash tables, and a physically-addressable solid state disk (SSD) coupled to the CPU. The physically-addressable SSD includes a flash subsystem and a non-volatile memory and is addressable using physical addresses. The flash subsystem includes one or more copies of the flash tables and the non-volatile memory includes updates to the copy of the flash tables. The flash tables include tables used to map logical to physical blocks for identifying the location of stored data in the physically addressable SSD, wherein the updates to the copy of the flash tables and the one or more copies of the flash tables are used to reconstruct the flash tables upon power interruption. | 2014-02-13 |
20140047165 | Storage System Employing MRAM and Physically Addressed Solid State Disk - A storage system includes a Central Processing Unit (CPU) that has a physically-addressed solid state disk (SSD), addressable using physical addresses associated with user data and provided by a host. The user data is to be stored in or retrieved from the physically-addressed SSD in blocks. Further, a non-volatile memory module is coupled to the CPU and includes flash tables used to manage blocks in the physically addressed SSD. The flash tables have tables that are used to map logical to physical blocks for identifying the location of stored data in the physically addressed SSD. The flash tables are maintained in the non-volatile memory modules thereby avoiding reconstruction of the flash tables upon power interruption. | 2014-02-13 |
20140047166 | STORAGE SYSTEM EMPLOYING MRAM AND ARRAY OF SOLID STATE DISKS WITH INTEGRATED SWITCH - A storage system includes a central processing unit (CPU) subsystem including a CPU, a physically-addressed solid state disk (SSD) that is addressable using physical addresses associated with user data, provided by the CPU, to be stored in or retrieved from the physically-addressed SSD in blocks. Further, the storage system includes a non-volatile memory module, the non-volatile memory module having flash tables used to manage blocks in the physically addressed SSD, the flash tables include tables used to map logical to physical blocks for identifying the location of stored data in the physically addressed SSD. Additionally, the storage system includes a peripheral component interconnect express (PCIe) switch coupled to the CPU subsystem and a network interface controller coupled through a PCIe bus to the PCIe switch, wherein the flash tables are maintained in the non-volatile memory modules thereby avoiding reconstruction of the flash tables upon power interruption. | 2014-02-13 |
20140047167 | NONVOLATILE MEMORY DEVICE AND METHOD OF CONTROLLING SUSPENSION OF COMMAND EXECUTION OF THE SAME - A nonvolatile memory device includes a memory cell array, a row decoder, a page buffer, and control logic. The memory cell array includes memory cells connected to word lines and bit lines, the memory cell array being configured to store data. The row decoder is configured to selectively activate a string selection line, a ground selection line, and the word lines of the memory cell array. The page buffer is configured to temporarily store external data and to apply a predetermined voltage to the bit lines according to the stored data during a program operation, and to sense data stored in selected memory cells using the bit lines during a read operation or a verification operation. The control logic is configured to control the row decoder and the page buffer. During execution of commands, when a request to suspend the execution of the commands is received, chip information is backed up to a storage space separate from the control logic. | 2014-02-13 |
20140047168 | DATA STORAGE SYSTEM AND METHOD OF OPERATING DATA STORAGE SYSTEM - A method of operating a data storage device includes providing a memory cell array that includes a first word line, a second word line and a buffer configured to store second data to be programmed into the second word line, reading the second data from the buffer, and programming first data into the first word line. A programming condition of the first data is changed based on the second data read from the buffer. | 2014-02-13 |
20140047169 | METHOD FOR OPERATING A MEMORY CONTROLLER AND A SYSTEM HAVING THE MEMORY CONTROLLER - A method for operating a memory controller includes determining a number of free blocks to be created during an idle time by using a block consumption history, and controlling a non-volatile memory device to perform a garbage collection operation during the idle time to create the determined number of free blocks. | 2014-02-13 |
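The idle-time determination above — deciding how many free blocks to create from a block consumption history — can be sketched as a simple estimator. Averaging the history and adding a safety margin is one plausible reading; the estimator, its margin, and the function name are assumptions, not the claimed method.

```python
# Sketch: estimate how many free blocks garbage collection should create
# during idle time, from the number of blocks consumed in each recent
# busy period.
def free_blocks_needed(block_consumption_history, safety_margin=2):
    if not block_consumption_history:
        return safety_margin          # no history yet: keep only the margin
    avg = sum(block_consumption_history) / len(block_consumption_history)
    return int(avg) + safety_margin
```

The design intent is to do just enough garbage collection: freeing roughly one busy period's worth of blocks (plus a margin) avoids both running out of free blocks and burning idle time on unneeded erases.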
20140047170 | MAINTAINING ORDERING VIA A MULTI-LEVEL MAP OF A SOLID-STATE MEDIA - Described embodiments provide a media controller that processes requests including a logical address and address range. A map of the media controller determines physical addresses of a media associated with the logical address and address range of the request. The map is a multi-level map having a plurality of leaf-level map pages that are stored in the media, with a subset of the leaf-level map pages stored in a map cache. Based on the logical address and address range, it is determined whether a corresponding leaf-level map page is stored in the map cache. If the leaf-level map page is stored in the map cache, a cache index and control indicators of the map cache entry are returned in order to enforce ordering rules that selectively enable access to a corresponding leaf-level map page based on the control indicators and a determined request type. | 2014-02-13 |
20140047171 | SYSTEM AND METHOD OF CACHING INFORMATION - A system and method is provided wherein, in one aspect, a currently-requested item of information is stored in a cache based on whether it has been previously requested and, if so, the time of the previous request. If the item has not been previously requested, it may not be stored in the cache. If the subject item has been previously requested, it may or may not be cached based on a comparison of durations, namely (1) the duration of time between the current request and the previous request for the subject item and (2) for each other item in the cache, the duration of time between the current request and the previous request for the other item. If the duration associated with the subject item is less than the duration of another item in the cache, the subject item may be stored in the cache. | 2014-02-13 |
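The admission policy in this abstract is unusually concrete: never cache a first-time item, and cache a repeat item only when its inter-request gap beats the gap of some item already cached. The sketch below fills in details the abstract leaves open (evicting the single worst-gap item, a full cache, all names), so treat it as one possible reading.

```python
# Sketch of the duration-based admission policy: an item enters the
# (assumed full) cache only if it has been requested before and its gap
# since the previous request is shorter than the largest gap among
# cached items, whose holder it then replaces.
def handle_request(item, now, last_seen, cache):
    """last_seen: item -> time of its previous request.
    cache: item -> duration between its last two requests."""
    prev = last_seen.get(item)
    last_seen[item] = now
    if prev is None:
        return False                  # first request ever: do not cache
    duration = now - prev
    if item in cache:
        cache[item] = duration        # refresh the cached item's gap
        return True
    victim = max(cache, key=cache.get, default=None)
    if victim is not None and duration < cache[victim]:
        del cache[victim]             # replace the worst (largest-gap) item
        cache[item] = duration
        return True
    return False

seen = {}
cache = {"a": 10}
first = handle_request("b", 100, seen, cache)   # unseen: not cached
second = handle_request("b", 105, seen, cache)  # gap 5 < 10: replaces "a"
```

The effect is a frequency-biased filter: one-hit-wonders never pollute the cache, and among repeat items the ones requested most closely together win the space.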
20140047172 | DATA STORAGE DEVICE - A data storage device may include a first memory board having multiple memory chips and a controller board that is arranged and configured to operably connect to the first memory board. The controller board may include an interface to a host and a controller that is arranged and configured to control command processing for multiple different types of memory chips, automatically recognize a type of the memory chips on the first memory board, receive commands from the host using the interface, and execute the commands using the memory chips. | 2014-02-13 |
20140047173 | APPLICATION PRE-LAUNCH TO REDUCE USER INTERFACE LATENCY - A device stores a plurality of applications and a list of associations for those applications. The applications are preferably stored within a secondary memory of the device, and once launched each application is loaded into RAM. Each application is preferably associated to one or more of the other applications. Preferably, no applications are launched when the device is powered on. A user selects an application, which is then launched by the device, thereby loading the application from the secondary memory to RAM. Whenever an application is determined to be associated with a currently active state application, and that associated application has yet to be loaded from secondary memory to RAM, the associated application is pre-launched such that the associated application is loaded into RAM, but is set to an inactive state. | 2014-02-13 |
20140047174 | SECURE DATA PROTECTION WITH IMPROVED READ-ONLY MEMORY LOCKING DURING SYSTEM PRE-BOOT - Generally, this disclosure provides methods and systems for secure data protection with improved read-only memory locking during system pre-boot including protection of Advanced Configuration and Power Interface (ACPI) tables. The methods may include selecting a region of system memory to be protected, the selection occurring in response to a system reset state and performed by a trusted control block (TCB) comprising a trusted basic input/output system (BIOS); programming an address decoder circuit to configure the selected region as read-write; moving data to be secured to the selected region; programming the address decoder circuit to configure the selected region as read-only; and locking the read-only configuration in the address decoder circuit. | 2014-02-13 |
20140047175 | IMPLEMENTING EFFICIENT CACHE TAG LOOKUP IN VERY LARGE CACHE SYSTEMS - A method and circuit for implementing a cache directory and efficient cache tag lookup in very large cache systems, and a design structure on which the subject circuit resides are provided. A tag cache includes a fast partial large (LX) cache directory maintained separately on chip apart from a main LX cache directory (LXDIR) stored off chip in dynamic random access memory (DRAM) with large cache data (LXDATA). The tag cache stores most frequently accessed LXDIR tags. The tag cache contains predefined information enabling access to LXDATA directly on tag cache hit with matching address and data present in the LX cache. Only on tag cache misses the LXDIR is accessed to reach LXDATA. | 2014-02-13 |
20140047176 | DRAM ENERGY USE OPTIMIZATION USING APPLICATION INFORMATION - An application program identifies a plurality of least recently accessed constructs of the application program that reside in DRAM memory. The application program causes the aggregation of at least a portion of the identified least recently accessed constructs onto one or more pages of the DRAM memory. The application program then causes the one or more memory pages of the DRAM memory to be put into self-refresh operation mode. | 2014-02-13 |
20140047177 | MIRRORED DATA STORAGE PHYSICAL ENTITY PAIRING IN ACCORDANCE WITH RELIABILITY WEIGHTINGS - A mirrored data storage system with a plurality of data storage physical entities (drives) arranged in mirrored pairs in accordance with reliability weightings that have been assigned to each of the drives. Each mirrored pair comprises one drive with at least a median and greater reliability weighting, and one drive with at least a median and lesser reliability weighting. In an example, the assigned reliability weightings are sorted into descending order; with the drives having weightings in the upper half of the sorted order assigned to a first set with greater reliability weighting, and the drives having weightings in the lower half of the sorted order assigned to a second set with lesser reliability weighting. Each mirrored pair has one drive selected from the first set and one drive selected from the second set. | 2014-02-13 |
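The pairing procedure in this abstract is fully algorithmic: sort the reliability weightings in descending order, split at the median into a more-reliable and a less-reliable set, and build each mirrored pair from one drive of each set. The sketch below follows that description; pairing the sets positionally (best with median, and so on) is an added assumption, since the abstract does not fix which member of each set pairs with which.

```python
# Sketch: pair drives into mirrors so each pair has one drive from the
# upper (more reliable) half and one from the lower (less reliable) half
# of the sorted reliability weightings. Assumes an even drive count.
def pair_drives(weights):
    """weights: drive id -> reliability weighting."""
    drives = sorted(weights, key=weights.get, reverse=True)
    half = len(drives) // 2
    upper, lower = drives[:half], drives[half:]
    return list(zip(upper, lower))
```

With weightings {"d0": 9, "d1": 1, "d2": 7, "d3": 3}, the sorted order is d0, d2, d3, d1, giving pairs (d0, d3) and (d2, d1) — every mirror mixes a stronger drive with a weaker one instead of concentrating the weak drives in one pair.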
20140047178 | STORAGE SYSTEM AND STORAGE CONTROL METHOD - A storage system includes: a grouping unit configured to generate one or more storage device sub-groups, each of the storage device sub-groups including a storage device used to store data, from storage devices included in a plurality of storage device groups that respectively include a plurality of storage devices; a selection unit configured to select any of the one or more storage device sub-groups; and a control unit configured to shut off power supply to a non-selected device group, which is a storage device sub-group other than a selected storage device sub-group and included in a storage device group including the selected storage device sub-group, and shut off power supply to storage devices included in a storage device group other than the storage device group including the selected storage device sub-group. | 2014-02-13 |
20140047179 | MANAGEMENT METHOD FOR A VIRTUAL VOLUME ACROSS A PLURALITY OF STORAGES - A first storage system includes a plurality of first storage devices and is coupled to a computer. A second storage system includes a plurality of second storage devices and is coupled to the first storage system. A first controller provides a thin provisioning logical volume (LU) to the computer. A second controller provides an external thin provisioning LU to the first storage system. The first controller provides pool areas associated with the thin provisioning LU, including a first pool area mapped to the external thin provisioning LU, and allocates the first pool area to a first region in the thin provisioning LU to store a write data to the first region in the thin provisioning LU. The second controller allocates at least one of a plurality of pool areas to store the write data to the first region in the thin provisioning LU. | 2014-02-13 |
20140047180 | METHOD, DEVICE, AND SYSTEM FOR DETERMINING DRIVE LETTER - The present disclosure discloses a method for determining a drive letter, including: obtaining a number of a port connecting a redundant array of independent disks (RAID) controller to an exchange chip and a location number, of a disk, meeting a report condition in each RAID group under the control of the RAID controller, where the location number, of the disk, meeting the report condition is a location number, of a disk, on a preset location after location numbers of all disks included in each RAID group, when each RAID group is configured, are sorted according to a preset sequence; and determining a drive letter corresponding to each RAID group according to the number of the port connecting the RAID controller to the exchange chip and the location number, of the disk, meeting the report condition in each RAID group. | 2014-02-13 |
20140047181 | System and Method for Updating Data in a Cache - In one embodiment, a computing system includes a cache having one or more memories and a cache manager. The cache manager is able to receive a request to write data to a first portion of the cache, write the data to the first portion of the cache, update a first map corresponding to the first portion of the cache, receive a request to read data from the first portion of the cache, read from a storage communicatively linked to the computing system data according to the first map, and update a second map corresponding to the first portion of the cache. The cache manager may also be able to write data to the storage according to the first map. | 2014-02-13 |
20140047182 | METHOD AND DEVICE FOR PROCESSING DATA USING WINDOW - Provided are a data processing method and device using a window. The data processing method may include caching data by applying a window to data stored in a memory on a per channel basis, and transmitting the cached data to a core processor using location information of a point. | 2014-02-13 |
20140047183 | System and Method for Utilizing a Cache with a Virtual Machine - In one embodiment, a computer system includes a cache having one or more memory locations associated with one or more computing systems, one or more cache managers, each cache manager associated with a portion of the cache, a metadata service communicatively linked with the cache managers, a configuration manager communicatively linked with the cache managers and the metadata service, and a data store. | 2014-02-13 |
20140047184 | TUNABLE MULTI-TIERED STT-MRAM CACHE FOR MULTI-CORE PROCESSORS - A multi-core processor is presented. The multi-core processor includes a first spin transfer torque magnetoresistive random-access memory (STT-MRAM) cache associated with a first core of the multi-core processor and tuned according to first attributes and a second STT-MRAM cache associated with a second core of the multi-core processor and tuned according to second attributes. | 2014-02-13 |
20140047185 | System and Method for Data Redundancy Within a Cache - In one embodiment, a computing system includes a cache and a cache manager. The cache manager is able to receive data, write the data to a first portion of the cache, write the data to a second portion of the cache, and delete the data from the second portion of the cache when the data in the first portion of the cache is flushed. | 2014-02-13 |
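The redundancy scheme in this abstract — keep a second copy of cached data until the first copy has been safely flushed — can be sketched with two dictionaries standing in for the two cache portions. The class shape, dict-based backing store, and flush-everything policy are illustrative assumptions.

```python
# Sketch: data is written to both cache portions; the secondary copy is
# deleted only once the primary copy has been flushed to backing storage,
# so an unflushed write always exists in at least two places.
class RedundantCache:
    def __init__(self):
        self.primary = {}
        self.secondary = {}

    def write(self, key, value):
        self.primary[key] = value
        self.secondary[key] = value

    def flush(self, backing):
        for key, value in self.primary.items():
            backing[key] = value              # persist the primary copy
            self.secondary.pop(key, None)     # redundant copy no longer needed

cache = RedundantCache()
cache.write("k", 1)
backing_store = {}
cache.flush(backing_store)
```

The window of exposure is thus limited to data that is both dirty and unflushed; once durable, the second copy's space is reclaimed immediately.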
20140047186 | TRANSACTIONAL MEMORY SYSTEM WITH EFFICIENT CACHE SUPPORT - Embodiments relate to a transaction program. An aspect includes, based on determining that one instruction is part of an active atomic instruction group (AIG), determining whether a private-to-transaction (PTRAN) bit associated with an address of the one instruction in a main memory is set, the PTRAN bit being located in a main memory comprising a plurality of memory increments each having a respective directly addressable PTRAN bit in the main memory. Another aspect includes, based on determining that the PTRAN bit is not set: setting the PTRAN bit; adding a new entry to a cache structure and a transaction table including an old data state of the address of the one instruction stored in the cache structure and control information stored in the transaction table; and completing the one instruction as part of the active AIG. | 2014-02-13 |
20140047187 | ADJUSTMENT OF THE NUMBER OF TASK CONTROL BLOCKS ALLOCATED FOR DISCARD SCANS - A controller receives a request to perform a release space operation. A determination is made that a new discard scan has to be performed on a cache, in response to the received request to perform the release space operation. A determination is made as to how many task control blocks are to be allocated to perform the new discard scan, based on how many task control blocks have already been allocated for performing one or more discard scans that are already in progress. | 2014-02-13 |
20140047188 | Method and Multi-Core Communication Processor for Replacing Data in System Cache - A method for replacing data in a system cache includes obtaining, by the system cache, an access statistics value corresponding to each piece of header data in the system cache, wherein the access statistics value corresponding to the header data represents the difference between the predetermined number of access times of the header data and the number of times that the header data has been accessed by central processing units (CPUs); obtaining, by the system cache according to the access statistics value corresponding to each piece of header data, header data to be transferred; and transferring, by the system cache, the header data, which is to be transferred, to an external memory. | 2014-02-13 |
20140047189 | Optimizing Write and Wear Performance for a Memory - Determining and using the ideal size of memory to be transferred from high speed memory to a low speed memory may result in speedier saves to the low speed memory and a longer life for the low speed memory. | 2014-02-13 |
20140047190 | Location and Relocation of Data Within a Cache - In one embodiment, a computer system includes a cache having one or more memories and a metadata service. The metadata service is able to receive requests for data stored in the cache from a first client and from a second client. The metadata service is further able to determine whether the performance of the cache would be improved by relocating the data stored in the cache. The metadata service is further operable to relocate the data stored in the cache when such relocation would improve the performance of the cache. | 2014-02-13 |
20140047191 | SYSTEM AND METHOD OF CACHING INFORMATION - A system and method is provided wherein, in one aspect, a currently-requested item of information is stored in a cache based on whether it has been previously requested and, if so, the time of the previous request. If the item has not been previously requested, it may not be stored in the cache. If the subject item has been previously requested, it may or may not be cached based on a comparison of durations, namely (1) the duration of time between the current request and the previous request for the subject item and (2) for each other item in the cache, the duration of time between the current request and the previous request for the other item. If the duration associated with the subject item is less than the duration of another item in the cache, the subject item may be stored in the cache. | 2014-02-13 |
20140047192 | OPPORTUNISTIC BLOCK TRANSMISSION WITH TIME CONSTRAINTS - A technique for determining a data window size allows a set of predicted blocks to be transmitted along with requested blocks. A stream enabled application executing in a virtual execution environment may use the blocks when needed. | 2014-02-13 |
20140047193 | System and Method for Utilizing Non-Volatile Memory in a Cache - In one embodiment, a computing system includes a cache having one or more memories, a cache journal operable to store data associated with one or more portions of the cache, and a configuration manager operable to access the cache and the cache journal. The configuration manager is operable to determine whether the cache journal includes data associated with a first portion of the cache, and to create, in the cache journal, data associated with the first portion of the cache if the cache journal does not yet comprise data associated with the first portion of the cache. The configuration manager is also operable to determine whether the first portion of the cache is valid for use, and to communicate with a memory manager associated with the first portion of the cache regarding whether the first portion of the cache is valid for use. | 2014-02-13 |
20140047194 | PROCESSOR AND CONTROL METHOD THEREOF - A processor has a first core unit which outputs history information and occupancy mode information related to an arithmetic processing, a memory which has a first storage area and a second storage area, and a control circuit which writes the history information outputted by the first core unit into the first storage area of the memory when the occupancy mode information outputted by the first core unit indicates invalidity, and writes the history information outputted by the first core unit into the first storage area and the second storage area of the memory when the occupancy mode information outputted by the first core unit indicates validity. | 2014-02-13 |
20140047195 | TRANSACTION CHECK INSTRUCTION FOR MEMORY TRANSACTIONS - A processing unit of a data processing system having a shared memory system executes a memory transaction including a transactional store instruction that causes a processing unit of the data processing system to make a conditional update to a target memory block of the shared memory system conditioned on successful commitment of the memory transaction. The memory transaction further includes a transaction check instruction. In response to executing the transaction check instruction, the processing unit determines, prior to conclusion of the memory transaction, whether the target memory block of the shared memory system was modified after the conditional update caused by execution of the transactional store instruction. In response to determining that the target memory block has been modified, a condition register within the processing unit is set to indicate a conflict for the memory transaction. | 2014-02-13 |
20140047196 | TRANSACTION CHECK INSTRUCTION FOR MEMORY TRANSACTIONS - A processing unit of a data processing system having a shared memory system executes a memory transaction including a transactional store instruction that causes a processing unit of the data processing system to make a conditional update to a target memory block of the shared memory system conditioned on successful commitment of the memory transaction. The memory transaction further includes a transaction check instruction. In response to executing the transaction check instruction, the processing unit determines, prior to conclusion of the memory transaction, whether the target memory block of the shared memory system was modified after the conditional update caused by execution of the transactional store instruction. In response to determining that the target memory block has been modified, a condition register within the processing unit is set to indicate a conflict for the memory transaction. | 2014-02-13 |
20140047197 | MULTIPORT MEMORY EMULATION USING SINGLE-PORT MEMORY DEVICES - A multiport memory emulator receives a first and a second memory command for concurrent processing of memory commands in one operation clock cycle. Data operands are stored in a memory array of bitcells that is arranged as rows and memory banks. An auxiliary memory bank provides a bitcell for physically storing an additional word for each row. The bank address portion of each of the first and second memory commands is respectively translated into a first and second physical bank address. The second physical bank address is assigned a bank address of a bank that is currently unused in response to a determination that the bank address portions are equal, and the bank associated with the first bank address is designated as a currently unused bank for subsequently received memory commands in response to the determination that the bank address portions are equal. Simultaneous read and write operations are possible. | 2014-02-13 |
20140047198 | DATA ACQUISITION DEVICE WITH REAL TIME DIGITAL TRIGGER - A data acquisition device incorporates a front end analog-to-digital converter (ADC), which is responsive to an applied analog input signal, samples that signal and provides digital data representative of the sampled signal. The digital data is applied to a data channel connected to a data acquisition memory, which stores data values representative of the sampled analog input signal. The digital data from the ADC is also applied to a real-time trigger channel connected to a composite function trigger equalizer and filter, a trigger processor and to a trigger memory. The trigger channel operates in real time to identify trigger events and store real-time trigger event occurrence signals in the trigger memory. A controller reads out the stored data values from the data acquisition memory by way of a data equalizer, in synchronism with corresponding real-time trigger event occurrence signals from the trigger memory. | 2014-02-13 |
20140047199 | Memory-Link Compression for Graphic Processor Unit - A graphic processing unit having multiple computational elements flexibly interconnected to memory elements provides for data compressors/decompressors in the memory channels communicating between the computational elements and memory elements to provide an effective increase in bandwidth of those connections by the compression of data transferred thereon. | 2014-02-13 |
20140047200 | REDUCING PEAK CURRENT IN MEMORY SYSTEMS - A memory device includes a plurality of memory cells, a token input interface, a token output interface and control circuitry. The control circuitry is configured to accept a storage command, to condition execution of at least a part of the storage command on a presence of a token pulse on the token input interface, to execute the storage command, including the conditioned part, in the memory cells upon reception of the token pulse on the token input interface, and to reproduce the token pulse on the token output interface upon completion of the execution. | 2014-02-13 |
20140047201 | MEMORY-ACCESS-RESOURCE MANAGEMENT - The present application is directed to a memory-access-multiplexing memory controller that can multiplex memory accesses from multiple hardware threads, cores, and processors according to externally specified policies or parameters, including policies or parameters set by management layers within a virtualized computer system. A memory-access-multiplexing memory controller provides, at the physical-hardware level, a basis for ensuring rational and policy-driven sharing of the memory-access resource among multiple hardware threads, cores, and/or processors. | 2014-02-13 |
20140047202 | Systems, Methods, and Computer Program Products Providing Change Logging In a Deduplication Process - A method performed in a network storage system, the method including receiving a plurality of data blocks at a secondary storage subsystem from a primary storage subsystem, generating a first log that includes a first plurality of entries, one entry for each of the data blocks, in which each entry of the first plurality of entries includes a name for a respective data block and a fingerprint of the respective data block, receiving metadata at the secondary storage subsystem from the primary storage subsystem, the metadata describing relationships between the plurality of blocks and a plurality of files, generating a second log that includes a second plurality of entries, and merging the first log with the second log to generate a change log. | 2014-02-13 |
20140047203 | SYSTEM AND METHOD FOR PREDICTING BACKUP DATA VOLUMES - A method and system for predicting the managed backup occupancy of a backup system are disclosed. The method includes determining the variables m | 2014-02-13 |
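The duration-comparison caching policy of application 20140047191 (SYSTEM AND METHOD OF CACHING INFORMATION) can be illustrated with a small sketch. This is an illustrative model only, not the claimed implementation; the class name, capacity parameter, and eviction tie-breaking are assumptions. An item is admitted only if it has been requested before and its inter-request gap beats the gap of some item already cached:

```python
class DurationCache:
    """Sketch of a duration-based admission policy: cache an item only
    if the time between its current and previous request is shorter
    than the recorded inter-request gap of some cached item."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.last_seen = {}   # item -> time of its most recent request
        self.cache = {}       # item -> duration between its last two requests

    def request(self, item, now):
        """Handle a request at time `now`; return True if `item` ends up cached."""
        prev = self.last_seen.get(item)
        self.last_seen[item] = now
        if prev is None:
            return False          # never requested before: not stored
        duration = now - prev
        if item in self.cache:
            self.cache[item] = duration
            return True
        if len(self.cache) < self.capacity:
            self.cache[item] = duration
            return True
        # cache full: evict the item with the longest inter-request gap,
        # but only if the subject item's gap is shorter
        victim = max(self.cache, key=self.cache.get)
        if duration < self.cache[victim]:
            del self.cache[victim]
            self.cache[item] = duration
            return True
        return False
```

First-time requests are deliberately rejected, matching the abstract's "If the item has not been previously requested, it may not be stored in the cache."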
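The replacement scheme of application 20140047188 (Method and Multi-Core Communication Processor for Replacing Data in System Cache) keys on an access statistics value: a predetermined access count minus the accesses already received. A minimal sketch, with hypothetical names and the victim-selection rule (smallest remaining value) assumed:

```python
class HeaderDataCache:
    """Sketch of access-statistics-based replacement: each piece of
    header data starts with a predetermined number of expected CPU
    accesses; the counter decrements per access, and the entry with
    the smallest remaining value is transferred to external memory."""

    def __init__(self):
        self.remaining = {}       # header id -> predetermined accesses minus accesses so far
        self.external_memory = []  # headers transferred out of the system cache

    def insert(self, header_id, predetermined_accesses):
        self.remaining[header_id] = predetermined_accesses

    def cpu_access(self, header_id):
        # each CPU access consumes one of the header's expected accesses
        if header_id in self.remaining:
            self.remaining[header_id] -= 1

    def transfer_one(self):
        """Transfer the header least likely to be accessed again."""
        victim = min(self.remaining, key=self.remaining.get)
        del self.remaining[victim]
        self.external_memory.append(victim)
        return victim
```

The intuition is that a header whose expected accesses are nearly exhausted is the cheapest to move out, since few (or no) further CPU reads of it are anticipated.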
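The transaction check instruction of applications 20140047195/20140047196 can be modeled in a few lines. This is a simplified software analogy (version counters standing in for hardware conflict tracking), not the patented processor logic; all class and method names are invented for illustration:

```python
class SharedBlock:
    """Target memory block in the shared memory system, modeled with a
    version counter that increments on every modification."""
    def __init__(self):
        self.version = 0

    def modify(self):
        self.version += 1


class Transaction:
    """Models a memory transaction: a transactional store records the
    block's version, and a later transaction check flags a conflict if
    the block was modified after that conditional update."""
    def __init__(self, block):
        self.block = block
        self.stored_version = None
        self.condition_register = 0   # 0 = no conflict, 1 = conflict

    def transactional_store(self):
        # conditional update to the target block; remember its version
        self.block.modify()
        self.stored_version = self.block.version

    def transaction_check(self):
        # prior to transaction conclusion, test whether the block was
        # modified after the transactional store
        if self.block.version != self.stored_version:
            self.condition_register = 1
        return self.condition_register
```

As in the abstract, the check happens before the transaction concludes, letting software branch on the condition register instead of waiting for the commit to fail.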