5th week of 2013 patent application highlights part 63 |
Patent application number | Title | Published |
20130031229 | TRAFFIC REDUCTION METHOD FOR DISTRIBUTED KEY-VALUE STORE - In a system of local DHT overlays, each overlay has KVS nodes, including one super node. The super nodes organize a global DHT overlay. Each super node maintains Bloom filters, updated and converted from counting filters, of the keys in its local DHT overlay for all key ranges. To obtain data corresponding to a key from other local DHT overlays, a super node sends a request to the node responsible for the key range hashed from the specified key. The responsible node determines, according to the Bloom filters registered by the super nodes of the local DHT overlays, which local DHT overlays may have data corresponding to the key. Requests are then sent only to the super nodes of the local DHT overlays identified by the responsible node. Requests are thus not needlessly sent to super nodes of local DHT overlays that do not have data corresponding to the key, thereby reducing traffic. | 2013-01-31 |
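The Bloom-filter pre-check at the heart of this abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the `BloomFilter` class, the overlay names, and the filter parameters are all hypothetical, and the counting-filter-to-Bloom-filter conversion step is omitted.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions over an m-bit integer."""
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, key):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, key):
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def might_contain(self, key):
        # False means "definitely absent"; True means "possibly present".
        return all(self.bits >> pos & 1 for pos in self._positions(key))

# Hypothetical responsible node: one filter per remote local overlay.
overlay_filters = {"overlay-a": BloomFilter(), "overlay-b": BloomFilter()}
overlay_filters["overlay-a"].add("user:42")

def overlays_to_query(key):
    """Return only the overlays whose Bloom filter may hold the key,
    so requests are never sent to overlays that cannot have the data."""
    return [name for name, bf in overlay_filters.items() if bf.might_contain(key)]
```

A lookup for `"user:42"` queries only `overlay-a`; the empty filter for `overlay-b` guarantees a definite miss, which is the traffic reduction the abstract claims.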
20130031230 | METHOD AND SYSTEM FOR MANAGING NETWORK ELEMENTS - A management request complying with a first protocol is generated by a management application executed by a computing system coupled to a network element. A processor executable agent encapsulates a management request using a second protocol. The encapsulated management request is transmitted using a third protocol via a link used by the computing system to send input/output requests for reading and writing data to a storage device. The management request is de-encapsulated to provide the management request complying with the first protocol to a management module of the network element. The management module of the network element prepares a response to the management request complying with the first protocol. A processor executable service at the network element encapsulates the response using the second protocol. The encapsulated response is transmitted to the computing system using the third protocol. The response complying with the first protocol is extracted from the encapsulated response. The response complying with the first protocol is provided to the management application. | 2013-01-31 |
20130031231 | METHOD AND APPARATUS FOR NOTIFYING ACCOUNT INFORMATION OF A DATA-TYPE-ORIENTED USER EQUIPMENT - The invention discloses a method of and apparatus for notifying account information of a data-type-oriented user equipment. The method comprises the steps of: determining whether each user satisfies a predetermined condition; and, if the predetermined condition is satisfied, causing a packet data network gateway to activate account information notification control information for the user, wherein the account information notification control information includes a network address of a web notification server. With the invention, the terminal user learns his account information in a more timely manner and can adjust how he uses the wireless broadband data service accordingly, for example by accessing the service less frequently or disabling it for a period to avoid a large bill, or by prepaying to prevent an important connection from being dropped over an overdue account. An operator and a service provider can thereby improve the user experience of the wireless broadband data service, reducing user complaints and the loss of customers in the field of wireless broadband data services. The method also retains good compatibility with other types of user equipment while offering the new function. | 2013-01-31 |
20130031232 | System and Method For Sharing Electronic Information - In one embodiment, the present invention provides a system for interfacing at least one requestor to at least one performer, wherein the at least one requestor is able to transfer electronic data, or portions thereof, ubiquitously with at least one performer, regardless of the format in which the at least one requestor and the at least one performer manage their respective electronic information. In particular, the present invention provides a system wherein the at least one requestor and the at least one performer update and/or modify electronic information, or portions thereof, dynamically, in response to requests and responses from the at least one requestor and the at least one performer. At any given time, any requestor may become a performer and vice versa. | 2013-01-31 |
20130031233 | NETWORK FILTERING IN A VIRTUALIZED ENVIRONMENT - A physical host executes a hypervisor or virtual machine monitor (VMM) that instantiates at least one virtual machine (VM) and a virtual input/output server (VIOS). The VIOS determines by reference to a policy data structure a disposition of a packet of network communication with the VM, where the disposition includes one of dropping the packet and forwarding the packet. Thereafter, the determined disposition is applied to a subsequent packet in a same packet flow as the packet. | 2013-01-31 |
20130031234 | METHODS AND APPARATUS TO COLLABORATIVELY MANAGE A CLIENT USING MULTIPLE SERVERS - An example device includes a processor configured to execute an Open Mobile Alliance (OMA) Device Management (DM) server, wherein the OMA DM server uses or includes an interface to send communications directly to a second OMA DM server for delegating management of a DM client. | 2013-01-31 |
20130031235 | METHOD, NETWORK MANAGEMENT CENTER, AND A RELATED DEVICE FOR CONFIGURING A NETWORK POLICY FOR A VIRTUAL PORT - A method, a network management center, and a related device. The method includes: obtaining a physical network policy group and a media access control (MAC) address of a virtual port; associating the physical network policy group and the MAC address of the virtual port to form a virtual port policy association table; and selecting the physical network policy group corresponding to the MAC address carried by a request from the virtual port policy association table, and delivering the physical network policy group to a physical switch sending the request. When a virtual machine (VM) on the server is migrated, the method may be used to migrate the network policy for the virtual port on a real-time basis. Therefore, the real-time effect of services provided by the VM is improved in the virtualization process of the server. | 2013-01-31 |
20130031236 | Method and System for Platform Level Data Model for Indications Based Event Control and Data Transfer - For a platform level data model for indications based event control and data transfer, a management controller may enable performing indications based management operations that may be based on a management service utilizing CIM Indications model. The management controller may enable communication of indications based messaging and/or data. The indications may be triggered based on events generated and/or triggered in a plurality of managed entities. The events generation may be performed dynamically within the plurality of managed entities, or may be initiated via the management controller. The management controller may enable processing of partially generated indications, via the plurality of managed entities, and/or as pass-through router for full indications processed via the plurality of managed entities. The indications based management operations may also comprise subscription related operations wherein the management controller may enable performing processing of subscription requests, modifications, and/or deletions to facilitate external access via the device. | 2013-01-31 |
20130031237 | NETWORK COMPONENT MANAGEMENT - A network component management system includes a first network element and a second network element. The second network element is at a customer location and is configured to communicate with the first network element over a communication network. A computing device is configured to communicate with the first and second network elements over the communication network and includes a visibility tool configured to actively monitor the second network element and present a status of the second network element. The status of the second network element indicates whether the second network element is provisioned and able to communicate over the communication network. A method includes querying the network element, determining the status of the network element, presenting the status, and initiating a troubleshooting procedure if the network element is not provisioned or is unable to communicate over the communication network. | 2013-01-31 |
20130031238 | HEALTH AND WELFARE MONITORING OF NETWORK SERVER OPERATIONS - Methods, systems and computer program products for monitoring and analysis of network servers and web analytics programs are disclosed. A monitoring program, for example, is configured to monitor the status of web analytics program(s) associated with one or more network servers. The monitoring program may monitor server-specific metrics such as server free disk space, server available memory, server on-line/off-line status, report processing time, difference between system time and log file time, table size details, etc. The program may be configured to present status indicators to the user that are indicative of the health of the web analytics program and/or server. A user may receive an alert generated by the monitoring program regarding a change in web analytics program status. Additionally, the monitoring program may be configured to automatically take corrective action to remedy or prevent a critical event that may cause loss of data or delay web analytics reporting. | 2013-01-31 |
20130031239 | DATA COMMUNICATION METHOD AND APPARATUS - There is provided a method of scheduling requests from a plurality of services to at least one data storage resource. The method comprises receiving, on a computer system, service requests from said plurality of services. The service requests comprise metadata specifying a service ID and a data size of payload data associated with said service request, and at least some of said service IDs have service throughput metadata specifying a required service throughput associated therewith. The method further includes arranging, in a computer system, said requests into FIFO throttled queues based on said service ID and then setting a deadline for processing of a request in a throttled queue. The deadline is selected in dependence upon the size of the request and the required service throughput associated therewith. Then, the deadline of each throttled queue is monitored and, if a request in a throttled queue has reached or exceeded the deadline the request is processed in a data storage resource. | 2013-01-31 |
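The deadline rule in this abstract — deadline proportional to request size and inversely proportional to the required service throughput — can be sketched as follows. The service IDs, throughput figures, and function names are hypothetical; this is an illustration of the scheduling rule, not the patented apparatus.

```python
import collections

# Hypothetical per-service throughput guarantees, in bytes per second.
REQUIRED_THROUGHPUT = {"svc-a": 100.0, "svc-b": 50.0}

# One FIFO throttled queue per service ID, as in the abstract.
queues = collections.defaultdict(collections.deque)

def enqueue(service_id, payload_size, now):
    """Place a request in its service's queue with a deadline that grows
    with request size and shrinks as the required throughput rises."""
    deadline = now + payload_size / REQUIRED_THROUGHPUT[service_id]
    queues[service_id].append((deadline, payload_size))
    return deadline

def requests_due(now):
    """Pop the head of each FIFO queue whose deadline has been reached;
    these are the requests dispatched to the data storage resource."""
    due = []
    for service_id, q in queues.items():
        if q and q[0][0] <= now:
            due.append((service_id, q.popleft()))
    return due
```

A 200-byte request for `svc-a` (100 B/s guaranteed) is due after 2 s; the same request for `svc-b` (50 B/s) is due after 4 s, so the slower guarantee tolerates a longer wait.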
20130031240 | Capacity Evaluation of Computer Network Capabilities - A method and apparatus are provided for evaluating the capacity of a capability enabled by network devices in a computer network. The method includes identifying a network capability enabled by one or more network devices, monitoring a plurality of hardware resources of the one or more network devices during implementation of one or more instances of the identified network capability and capturing respective device-specific metrics representative of a utilization level of each of the plurality of hardware resources during implementation of the one or more instances. The method also includes identifying which one of the plurality of hardware resources is most limiting for a remaining capacity of the identified network capability, calculating, based on the hardware resource that is most limiting for the remaining capacity of the identified network capability, a maximum remaining capacity for additional instances of the identified network capability, and providing an indication of the maximum remaining capacity of the identified network capability. | 2013-01-31 |
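The bottleneck calculation this abstract describes — find the most limiting hardware resource, then derive the maximum remaining instances from it — reduces to simple arithmetic. The resource names and per-instance costs below are invented for illustration only.

```python
# Hypothetical utilization fraction consumed by ONE instance of the
# network capability (e.g. one VPN tunnel), per hardware resource.
PER_INSTANCE_COST = {"cpu": 0.02, "memory": 0.05, "tcam": 0.10}

# Hypothetical current utilization of each resource, as a fraction.
CURRENT_UTILIZATION = {"cpu": 0.40, "memory": 0.50, "tcam": 0.70}

def max_remaining_instances():
    """Identify the bottleneck resource and how many more instances of
    the capability it still allows (the minimum across resources)."""
    remaining = {
        r: (1.0 - CURRENT_UTILIZATION[r]) / PER_INSTANCE_COST[r]
        for r in PER_INSTANCE_COST
    }
    bottleneck = min(remaining, key=remaining.get)
    return bottleneck, int(remaining[bottleneck])
```

With these numbers, CPU allows 30 more instances and memory 10, but TCAM allows only 3, so TCAM is the most limiting resource and 3 is the reported maximum remaining capacity.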
20130031241 | DISTRIBUTED SERVICE INSTANTIATION FOR INFORMATION-CENTRIC NETWORKS - An exemplary communication device includes a node having a processor configured to instantiate a service at the node responsive to the processor determining that the node is a superior instantiation candidate relative to a next upstream node on a downstream path of the service. An exemplary method of communicating includes instantiating a service at a node responsive to the node determining that the node is a superior instantiation candidate relative to a next upstream node on a downstream path of the service. | 2013-01-31 |
20130031242 | QUANTITATIVE MANAGEMENT ASSESSMENTS OF DATA COMMUNICATION NETWORKS WITH CONVERGED ARCHITECTURES - Techniques for quantitative converged network assessment are described. Performance information, associated with network infrastructure elements and application or service elements of a computer network, is received. One or more key performance indicators of a reference network architecture are compared with at least a portion of the performance information. A plurality of first scores is determined based on the comparison of the one or more key performance indicators and at least a portion of the performance information. Based on the plurality of first scores, a single second score is determined that indicates a converged state of the computer network with respect to the reference network architecture. | 2013-01-31 |
20130031243 | METHODS, SYSTEMS, AND COMPUTER-READABLE MEDIA FOR SELF-LEARNING INTERACTIVE COMMUNICATIONS PRIVILEGES FOR GOVERNING INTERACTIVE COMMUNICATIONS WITH ENTITIES OUTSIDE A DOMAIN - Methods, systems, and computer-readable media for self-learning interactive communications privileges for governing interactive communications with entities outside a domain are disclosed. The interactive communications privileges can be used to process interactive communications requests between entities inside and outside a domain. The requested interactive communications are allowed if the interactive communications privileges configured for the entity outside the domain allow for the requested interactive communications. The interactive communications privileges are determined in an automated, self-learning manner in response to monitoring communication interactions between the entities inside and outside the domain. In this manner, the interactive communications privileges are not required to be provisioned and maintained by an administrator. The interactive communications privileges can be determined by gathering insight about the entities outside the domain. Insight about an entity outside the domain is information that is useful in determining which interactive communications privileges to configure for an entity outside the domain. | 2013-01-31 |
20130031244 | Greening the Network with the Power Consumption Statuses of Network Components - In an embodiment, the disclosure includes an apparatus comprising a data store which comprises cost data associated with use of a path in a communications network. The data store also comprises power consumption data associated with the use of the path. The apparatus further comprises at least one processor configured to determine a score for the path based on the cost data and the power consumption data. The disclosure also includes an apparatus comprising a path computation element (PCE) configured to receive data from a plurality of network elements (NEs). The data comprises cost and power consumption data for establishing a path between a plurality of the NEs. The PCE is configured to determine a score for the path based on the cost and power consumption data. | 2013-01-31 |
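The scoring step in this abstract — combining a path's cost data with its power consumption data into a single score — might look like the weighted sum below. The weight, the field names, and the linear combination are assumptions for illustration; the application does not specify the scoring function.

```python
def path_score(cost, power_watts, power_weight=0.5):
    """Combine monetary cost and power draw into one comparable score.
    power_weight is an assumed tuning knob; lower scores are better."""
    return (1.0 - power_weight) * cost + power_weight * power_watts

def greenest_path(paths):
    """Pick the candidate path with the lowest combined score, as a PCE
    aggregating cost and power data from NEs might do."""
    return min(paths, key=lambda p: path_score(p["cost"], p["power"]))

# Two hypothetical candidate paths between the same endpoints.
paths = [
    {"name": "short-hot", "cost": 10.0, "power": 40.0},
    {"name": "long-cool", "cost": 14.0, "power": 20.0},
]
```

Here the cheaper path loses to the cooler one (score 25.0 vs 17.0), which is the "greening" trade-off the title refers to.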
20130031245 | GENERATING A CONFIGURATION FILE BASED UPON AN APPLICATION REGISTRY - A system and method are provided for generating a configuration file based upon an application registry. The method, for example, includes, but is not limited to, determining, by a processor, which users are logged into the server that are associated with a predetermined group, determining, by the processor, for each user logged into the server associated with the predetermined group, which applications each user is running, and generating, by the processor, the configuration file based upon which applications each user is running and storing the registry file in a memory. | 2013-01-31 |
20130031246 | NETWORK MONITORING CONTROL APPARATUS AND MANAGEMENT INFORMATION ACQUISITION METHOD - A network monitoring control apparatus includes: a traffic information acquisition unit to acquire traffic information of a network component included in a network; a decision information switching unit to set decision information for the network component based on a comparison result between the traffic information and one of a congestion decision threshold and a congestion recovery decision threshold of the network component; and a management information acquisition unit to acquire management information of the network component based on the decision information. | 2013-01-31 |
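The two-threshold comparison in this abstract is a hysteresis scheme: a component enters the congested state above one threshold and leaves it only below a lower recovery threshold, so the decision information does not flap. A minimal sketch, with invented threshold values:

```python
def update_decision(traffic, congested, congest_thresh, recover_thresh):
    """Return the new congestion decision for a network component.

    Hysteresis: cross congest_thresh going up to enter the congested
    state; drop to recover_thresh or below to leave it. In between,
    the previous decision is kept."""
    if not congested and traffic >= congest_thresh:
        return True
    if congested and traffic <= recover_thresh:
        return False
    return congested
```

Management-information acquisition would then be driven by this decision, e.g. polling a component more often while its decision is "congested".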
20130031247 | GENERATING DISPERSED STORAGE NETWORK EVENT RECORDS - A method begins by a dispersed storage (DS) processing module collecting an event record, a record regarding processing of an event request, and a plurality of records regarding processing of a plurality of sub-event requests to produce a collection of records. The event record includes information regarding an event, wherein the event is a user access operation or a system administrative operation initiated by a device affiliated with the DSN. The record regarding processing of the event request includes information regarding a dispersed storage (DS) processing module of the DSN processing the event request to produce the plurality of sub-event requests. The plurality of records regarding processing of the plurality of sub-event requests includes information regarding a plurality of DS units of the DSN processing the plurality of sub-event requests. The method continues with the DS processing module evaluating the collection of records to produce performance information regarding the DSN. | 2013-01-31 |
20130031248 | NODE DETECTION APPARATUS, NODE DETECTION METHOD AND COMPUTER READABLE MEDIUM - There is provided a node detection apparatus including: an acquisition section that acquires address information of communication equipment managed by a DNS server; an operation confirmation section that confirms operations of the communication equipment based on the address information acquired by the acquisition section; and a node registration section that registers the communication equipment having the address information acquired by the acquisition section as an operating node, based on a result of the operation confirmation by the operation confirmation section. | 2013-01-31 |
20130031249 | SYSTEM AND METHOD FOR SERVICING FIELD DEVICES IN AN AUTOMATION PLANT - A system for servicing field devices in an automation plant, comprising: a computing unit, which accesses the field devices via a communications network; a communications hardware; a server; an interpreter for electronic device descriptions; and a software component. The server, the interpreter for the electronic device descriptions and the software component are associated with the communications hardware. The software component, upon the occurrence of an event, identifies, by means of scanning, or polling, the field devices arranged in the communications network and utilizes the identification of the field devices, in order to activate corresponding electronic device descriptions in the interpreter and to provide correspondingly prepared information via the server to the computing unit for the purpose of servicing the field devices. | 2013-01-31 |
20130031250 | APPARATUS, METHOD AND SYSTEM FOR IMPROVING APPLICATION PERFORMANCE ACROSS A COMMUNICATIONS NETWORK - An apparatus, method and system to enable dynamic replication of Web servers across a wide area in response to access patterns by Web clients as well as in response to customer requests. The method dynamically replicates one or more parent nodes on a network in response to a user request received by a policy manager. The policy manager transmits the user request to an event module. The event module transmits the user request to a data consistency module, wherein the data consistency module maintains the integrity of the data on the parent node. The event module also communicates with a resource management module to ensure proper utilization of network resources, and transmits the routing request to a request routing module for appropriately balancing the network load. The request routing module is capable of providing optimal routing based on the network resources. | 2013-01-31 |
20130031251 | COMPLEX EVENT PROCESSING SYSTEM AND METHOD - A complex event processing system comprises a complex event processing engine. | 2013-01-31 |
20130031252 | FAIL-OPEN NETWORK TECHNIQUES - A network device may receive, from a user device, a request for network access to a network and communicate a request to a subscriber data storage for subscriber data corresponding to the user device, to verify whether the user device may be granted network access. The network device may determine, in response to communicating the request to the subscriber data storage, that the subscriber data storage is non-responsive, and execute a fail-open function in response to that determination. The fail-open function may include processing the request for network access without subscriber data from the subscriber data storage and granting network access to the user device without verifying that the user device is permitted to access the network. | 2013-01-31 |
20130031253 | NETWORK MANAGEMENT SYSTEM SCHEDULING FOR LOW POWER AND LOSSY NETWORKS - In one embodiment, a network management system (NMS) determines an intent to initialize a request-response exchange with a plurality of clients in a low power and lossy network (LLN). In response, the NMS adaptively schedules corresponding responses from the clients to distribute the responses across a period of time based on a network state of the LLN. Accordingly, requests may be generated by the NMS with an indication of a corresponding schedule to be used by the clients to respond, and transmitted into the LLN to solicit the responses, which are then received at the NMS according to the indicated schedule. | 2013-01-31 |
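The adaptive scheduling this abstract describes — spreading client responses across a window so a low-power and lossy network is not congested by simultaneous replies — can be sketched with a simple even-slotting scheme. The uniform spacing is one assumed policy; the application allows any distribution based on network state.

```python
def schedule_responses(client_ids, window_seconds):
    """Assign each client a response offset, spreading replies evenly
    across the window instead of letting all clients answer at once."""
    slot = window_seconds / len(client_ids)
    return {cid: i * slot for i, cid in enumerate(client_ids)}
```

The NMS would embed each client's offset in its request, and each client would delay its response by its assigned offset.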
20130031254 | Sharing A Transmission Control Protocol Port By A Plurality Of Applications - Methods, apparatuses, and computer program products for sharing a transmission control protocol (TCP) port by a plurality of applications are provided. Embodiments include receiving, by a transmission controller from a client, a first TCP packet that includes an indication of a new TCP connection for a TCP port; determining, by the transmission controller, an origination of the first TCP packet; identifying, by the transmission controller, a TCP sequence number range associated with the determined origination; selecting, by the transmission controller, an initial sequence number (ISN) within the identified TCP sequence number range; and sending, by the transmission controller to the client, a second TCP packet that includes the selected ISN. | 2013-01-31 |
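The port-sharing mechanism in this abstract hinges on carving the TCP sequence-number space into per-origination ranges and choosing each connection's ISN inside the range for its application. The mapping below (subnets to ranges) is an invented example, and real TCP sequence numbers wrap as data flows, so this sketch only shows the ISN-selection and demultiplexing idea.

```python
import random

# Hypothetical mapping from packet origination to a disjoint
# initial-sequence-number range reserved for one application.
ISN_RANGES = {
    "10.0.0.0/24": (0x00000000, 0x3FFFFFFF),  # application A
    "10.0.1.0/24": (0x40000000, 0x7FFFFFFF),  # application B
}

def select_isn(origination, rng=random):
    """Pick an ISN inside the range tied to the packet's origination."""
    low, high = ISN_RANGES[origination]
    return rng.randint(low, high)

def origin_for_sequence(seq):
    """Map a sequence number back to the origination (and hence the
    application) whose range contains it."""
    for origin, (low, high) in ISN_RANGES.items():
        if low <= seq <= high:
            return origin
    return None
```

Because the ranges are disjoint, packets arriving on the shared port can be routed to the right application by sequence number alone.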
20130031255 | Hierarchical Delegation and Reservation of Lookup Keys - A method of reserving lookup keys in a computer communication system including a hierarchy of key manager nodes includes receiving a first reservation request at a first key manager node at a first level of the hierarchy of key manager nodes. The first reservation request requests reservation of a lookup key. The methods include determining whether or not the first key manager node has authority to grant the reservation request, and, in response to determining that the first key manager node does not have authority to grant the reservation request, sending a second reservation request requesting reservation of the lookup key to a second key manager node that is at a second level of the hierarchy of key manager nodes. | 2013-01-31 |
20130031256 | Method And Apparatus For Reliable Session Migration - Various embodiments provide a reliable session migration method and apparatus without requiring additional option headers on each packet or inducing transmission delay. This is achieved by utilizing aggregated checksums that facilitate session migration upon a migration event. Advantageously, some such embodiments may permit applications to continue when the endpoint device physically moves from one access network to another. Similarly, some such embodiments may allow dynamic migration between access networks based on load, pricing or other factors. Moreover, some such embodiments may permit traffic to be split along multiple paths so as to increase the aggregate throughput. | 2013-01-31 |
20130031257 | Secure XDM Communication Between IMS Networks - Requests between first and second IMS network domains are communicated by receiving an XDM request in the first domain. The XDM request relates to an XML document that can be accessed via the XDM request from a location in the second domain. A SIP request is created that includes information identifying it as a request that relates to an XDM request. The SIP request is sent to the second domain so that the SIP request can be routed to the location in the second domain based on the identifying information in the SIP request. A connection for XDM requests between the first and second domains is established. | 2013-01-31 |
20130031258 | NETWORK CONNECTION DEVICE AND METHOD - A network connection device includes: one or more network devices; a network switching control section configured to determine a second network as a candidate network to which connection is subsequently switched from a first network to which a network device is currently connecting; a network relation state determination section configured to determine whether or not the network device is used for forming a PAN; a disconnection-caused disadvantage determination section configured to determine whether or not a disadvantage to a user will be caused by switching connection to the second network, based on the result of the determination by the network relation state determination section; and a switching acceptability determination section configured to prohibit switching connection to the second network when the result of the determination is that a disadvantage to the user will be caused. | 2013-01-31 |
20130031259 | Method of Discovering Operator-Provided Network Services Using IMS - A method, session managing node and arrangement for providing a network service address of at least one network service of a first operator IMS network to a third party service provider (3PSP) are disclosed, wherein the 3PSP has no business agreement with the first operator. A first application of a mobile station communicates with a second application via a first network, resulting in the setup of an IMS session between the mobile station and the 3PSP. A network service address of at least one network service, provided by the first operator IMS network, is inserted into a SIP message of the IMS session in a session managing node of the first operator IMS network and delivered to the 3PSP, where it is forwarded to the second application. If required, the second application may access the one or more network services via the first network, using the retrieved network address. | 2013-01-31 |
20130031260 | METHOD AND APPARATUS FOR ESTABLISHING AN AD-HOC BI-DIRECTIONAL NETWORK WITH AN OPTICAL IDENTIFIER - A method of managing the establishment of a bi-directional communication session between a first client device and at least a second client device comprising receiving data from the first and the second client devices. The reception of data from the first client device establishes a presence of the first client device on a network where the first client device has a first identification. The reception of data from the second client device establishes a presence of the second client device on a network where the second client device has a second identification. The embodiment also includes transmitting an optical identifier signal to the first client device, wherein the optical identifier signal enables the first client device to display an optical identifier comprising the first identification encoded therein. The optical identifier facilitates the establishment of the bi-directional communication session between the first and the second client devices. | 2013-01-31 |
20130031261 | PAIRING A DEVICE BASED ON A VISUAL CODE - A pairing with a computing device may be based on a visual code. A pairing, associated with application input, may be established between a first control module associated with a responding computing device and a second control module associated with the computing device. | 2013-01-31 |
20130031262 | METHODS FOR HANDLING MULTIPLE DEVICE MANAGEMENT (DM) SERVER ADDRESSES IN A DM ACCOUNT MANAGEMENT OBJECT (MO) - A method for handling multiple Device Management (DM) server addresses in a DM Account Management Object (MO) is provided. The method includes the steps of obtaining, by a DM client, a plurality of DM server addresses and a reference to one or more of the DM server addresses or an empty value from the DM Account MO, and building, by the DM client, a DM session according to the reference. | 2013-01-31 |
20130031263 | DYNAMIC RUNTIME CHOOSING OF PROCESSING COMMUNICATION METHODS - Techniques are described for assigning and changing communication protocols for a pair of processing elements. The communication protocol determines how the pair of processing elements transmits data in a stream application. The pair may be assigned a communication protocol (e.g., TCP/IP or a protocol that uses a relational database, shared file system, or shared memory) before the operator graph begins to stream data. This assignment may be based on a priority of the processing elements and/or a priority of the communication protocols. After the operator graph begins to stream data, the pair of processing elements may switch to a different communication protocol. The decision to switch the communication protocol may be based on whether the pair of processing elements or assigned communication protocol is meeting established performance standards for the stream application. | 2013-01-31 |
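The runtime protocol choice this abstract describes — assign a transport per processing-element pair by priority, then switch when performance standards are missed — can be sketched as below. The protocol ordering, the priority cutoff, and the fall-back direction are all assumptions for illustration; the application leaves the switching policy open.

```python
# Hypothetical transports, ordered from fastest to most robust.
PROTOCOLS = ["shared_memory", "tcp_ip", "relational_db"]

def initial_protocol(pair_priority):
    """Assign the fastest transport to high-priority PE pairs before the
    operator graph starts streaming (cutoff of 5 is assumed)."""
    return PROTOCOLS[0] if pair_priority >= 5 else PROTOCOLS[1]

def maybe_switch(current, observed_rate, required_rate):
    """After streaming starts, keep the protocol while the pair meets its
    performance standard; otherwise step to the next transport."""
    if observed_rate >= required_rate:
        return current
    idx = PROTOCOLS.index(current)
    return PROTOCOLS[min(idx + 1, len(PROTOCOLS) - 1)]
```

A pair that falls below its required tuple rate on `tcp_ip` would be moved to `relational_db` on the next evaluation.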
20130031264 | Hardware Bus Redirection Switching - Example embodiments relate to hardware bus redirection switching. In example embodiments, a computing device receives a selection of a new remote desktop protocol to be used for communication with a remote server. The computing device may then selectively enable hardware bus redirection for the new remote desktop protocol based on whether the new protocol supports hardware bus redirection. | 2013-01-31 |
20130031265 | SYSTEM AND METHOD FOR HEURISTIC DETERMINATION OF NETWORK PROTOCOLS - A system, method and computer program product are provided for heuristically identifying protocols during network analysis utilizing a network analyzer. First provided is a sequencing and reassembly (SAR) engine module for sequencing and/or re-assembling network communications. Coupled to the engine module is a plurality of protocol interpreter modules for interpreting protocols associated with the network communications. At least one of the protocol interpreter modules is adapted for heuristically identifying protocols associated with the network communications. | 2013-01-31 |
20130031266 | VARIABLE SPEED PLAYBACK - Provided are methods and systems for variable speed playback. In one aspect the disclosure provides for receiving content having a first playback speed, determining a second playback speed for at least a portion of the content based on a playback factor, associating the second playback speed with the portion of the content, and providing at least the portion of the content at the second playback speed to a display device. | 2013-01-31 |
20130031267 | Process for communication between a device running a mobile device platform and a server over the air, as well as related system - Process of communication via HTTP or HTTPS between a device running Java ME® and a server over the air, said server receiving and transmitting SOAP (Simple Object Access Protocol) messages from/to an operator on a host over a network and being capable of exchanging SOAP messages under Application Protocol Data Unit (APDU) data form with the device, characterized in that the SOAP messages are translated from/to binary messages according to a protocol in the server, said binary messages being exchanged with the device, the binary messages being binary request messages or binary response messages. | 2013-01-31 |
20130031268 | REDUCING NETWORK LATENCY - A method of transmitting data for use at a data processing system and network interface device, the data processing system being coupled to a network by the network interface device, the method comprising: forming a message template in accordance with a predetermined set of network protocols, the message template including at least in part one or more protocol headers; forming an application layer message in one or more parts; updating the message template with the parts of the application layer message; processing the message template in accordance with the predetermined set of network protocols so as to complete the protocol headers; and causing the network interface device to transmit the completed message over the network. | 2013-01-31 |
20130031269 | Handling Perceived Packet Loops With Transparent Network Services - Techniques are provided to detect and correct for packet loops associated with network traffic that passes through a wide-area application services (WAAS) device in a data center network environment. The WAAS device receives a packet from a device in a first data center. The WAAS device determines the directionality of the packet relative to a destination device of the packet. The WAAS device also determines whether the packet has an indicator that associates the packet with the WAAS device. Based on whether the packet has an indicator that associates the packet with the wide area application services device, the WAAS device inserts an indicator within the packet when the directionality of the packet indicates that the packet is to be transmitted across a wide area network (WAN), wherein the indicator comprises information that associates the packet with the WAAS device. The WAAS device forwards the packet to a network based on its directionality. | 2013-01-31 |
20130031270 | Automatically Routing Super-Compute Interconnects - A mechanism is provided for automatically routing network interconnects in a data processing system. A processor in a node of a plurality of nodes receives network topology from neighboring nodes in the plurality of nodes within the data processing system. The processor constructs a system node map that identifies a physical connectivity between the node and the neighboring nodes. The processor programs a switch in the node with a connectivity map that indicates a set of point-to-point connections with the neighboring nodes. The set of point-to-point connections comprise locally-connected connections and pass-through connections. | 2013-01-31 |
20130031271 | VIRTUAL PRIVATE NETWORKING WITH MOBILE COMMUNICATION CONTINUITY - In general, a mobile virtual private network (VPN) is described in which service provider networks cooperate to dynamically extend a virtual routing area of a home service provider network to the edge of a visited service provider network and thereby enable IP address continuity for a roaming wireless device. In one example, a home service provider network allocates an IP address to a wireless device and establishes a mobile VPN. The home service provider network dynamically provisions a visited service provider network with the mobile VPN, when the wireless device attaches to an access network served by the visited service provider network, to enable the wireless device to exchange network traffic with the visited service provider network using the IP address allocated by the home service provider network. | 2013-01-31 |
20130031272 | PROVIDING SYNC NOTIFICATIONS TO CLIENT DEVICES - Providing synchronization notifications to a client device. In response to a server receiving notification that an event of interest has been received, a state of the client device is determined. The state indicates whether or not the client device has any outstanding sync notifications. In an embodiment, the state is determined based on a first parameter and a second parameter. When the state of the client device indicates that the client device has no outstanding sync notifications prior to receipt of the received notification, the first parameter is set equal to the second parameter, and the second parameter is updated after each successful device synchronization of the client device. A filter is applied prior to sending out the sync notification to the client device. | 2013-01-31 |
20130031273 | SCALABLE SYNCHRONIZATION OF EVENTS AMONG SERVER AND CLIENTS WITH VARYING LAG-TIMES - The invention relates generally to synchronizing functions on handheld devices and more particularly to precisely synchronizing a function among a large number of devices having multiple different platforms. The invention provides the ability to cause a large number of handheld devices to perform certain functions simultaneously, within seconds or fractions of a second of each other. In certain aspects, the invention provides an apparatus for synchronizing a function among devices, including one or more processors in communication with a memory and configured to, for each of the devices, send an event to the device, receive a timepacket, and send a return timepacket, thereby causing the device to receive the event and invoke the function after a delay. | 2013-01-31 |
20130031274 | MATCHING CLIENT DEVICE TO APPROPRIATE DATA PACKAGE - One or more techniques and/or systems are disclosed for matching a client device with an appropriate network service provider data package. A device ID for the client device can be decomposed to one or more device ID ranges in a device decomposition set. One or more ranges of client ID can be assigned to a network service provider data package, which can be decomposed into a set of package decomposition ranges in a package decompositions set. The device decomposition set can be compared to the package decomposition set, and if an intersection is identified between the sets, the network service provider data package can be provided to the client device. | 2013-01-31 |
20130031275 | PERIPHERAL DEVICE IDENTIFICATION FOR PAIRING - In one implementation, a pairing device provides an identify instruction to a peripheral device during a pairing process. The peripheral device generates an identification output in response to the identify instruction. | 2013-01-31 |
20130031276 | Programmable Waveform Technology for Interfacing to Disparate Devices - Various embodiments of a system, method, and memory-medium provide for configuration of a programmable waveform that allows for communication with one of a plurality of different target devices. The programmable waveform comprises one or more waveform parameters and one or more waveform lines. The waveform lines may comprise control lines and/or data lines. One or more of the waveform parameters may be set in response to user input, and corresponding signals based on the waveform lines may be generated in order to communicate with a target device selected from a variety of different possible target devices. Waveform parameters may include one or more of: setup time, hold time, lead time, trail time, idle time, clock frequency, clock duty cycle, number of data bits per transmission, number of data lines, pulse width, polarity, and phase. | 2013-01-31 |
20130031277 | METHOD FOR IDENTIFYING VERSION TYPE OF WINDOWS OPERATING SYSTEM ON A HOST BY A USB DEVICE - The invention provides a method for identifying the version type of a Windows operating system on a host by a USB device, relating to the operating system field and including steps: A, the USB device is powered on and initialized; B, the USB device performs USB enumeration and determines whether a first predetermined instruction is received in the process of USB enumeration; if yes, it determines the operating system is a first operating system and goes to Step D; if no, it goes to Step C; C, the USB device determines the device type returned in the process of USB enumeration; if it is a CCID device, it determines whether the received instruction includes a second predetermined instruction; if yes, it determines the operating system is a second operating system; if no, it determines the operating system is a third operating system; when the device is an SCSI device, the USB device determines whether the second received SCSI instruction is a third predetermined instruction or a fourth predetermined instruction; if it is the third predetermined instruction, it determines that the operating system is a second operating system; if it is the fourth predetermined instruction, it determines that the operating system is a third operating system; D, the USB device establishes communication with the host, waits for an instruction sent by the host and returns related information to the host according to the determined type of the host operating system. | 2013-01-31 |
20130031278 | DATA STORAGE SYSTEM AND OPERATING METHOD THEREOF - A data storage system includes a sensor unit, a storage unit, and a data exchange unit. The data exchange unit connects to the sensor unit and the storage unit, and transmits a data message received from the sensor unit to the storage unit, wherein the data exchange unit need not know the addresses of the sensor unit and the storage unit ahead of time to be able to successfully transmit the data message to the storage unit requesting the data message. | 2013-01-31 |
20130031279 | DEFERRED TRANSFER OF CONTENT TO OPTIMIZE BANDWIDTH USAGE - In one embodiment, a method includes determining a request for a transfer of content where the request is associated with a user device. It is determined if a deferred transfer should be performed. The deferred transfer defers the transfer of the content with a completion by a completion time. The request is stored in a queue where the request is associated with the completion time. The method processes the request from the queue to transfer the content at a start time. The content is transferred by the completion time. The method then adjusts, for a user associated with the user device, a charging parameter for the transfer due to the transfer being deferred. | 2013-01-31 |
20130031280 | DETECTION DEVICE - A detection device to detect a power serving time of a super capacitor for a power-disconnected storage card and an amount of the data packets capable of being stored during the detected serving time is provided. The power-disconnected storage card includes a memory. The detection device includes a power supply unit, the super capacitor, a controller, a storage unit, and a detection unit. The storage unit stores the data packets. The detection unit includes a charge notification module, a data notification module and a time module. The charge notification module generates a first notification signal to the time module. The data notification module generates a second notification signal to the time module when the storage unit transmits the data packet to the memory. The time module records the time when the memory completely stores the data packet according to the first notification signal and the second notification signal. | 2013-01-31 |
20130031281 | USING A DMA ENGINE TO AUTOMATICALLY VALIDATE DMA DATA PATHS - The disclosed embodiments provide a system that uses a DMA engine to automatically validate DMA data paths for a computing device. During operation, the system configures the DMA engine to perform a programmable DMA operation that generates a sequence of memory accesses which validate the memory subsystem and DMA paths of the computing device. For instance, the operation may include a sequence of reads and/or writes that generate sufficient data traffic to exercise the computing device's I/O controller interface and DMA data paths to memory to a specified level. The system initiates this programmable DMA operation, and then checks outputs for the operation to confirm that the operation executed successfully. | 2013-01-31 |
20130031282 | DYNAMIC STABILIZATION FOR A STREAM PROCESSING SYSTEM - Disclosed are a method and a computer program storage product for dynamically stabilizing a stream processing system. The method includes receiving at least one computing resource allocation target. A plurality of downstream processing elements and an upstream processing element are associated with at least one input buffer. Each of the downstream processing elements consumes data packets produced by the upstream processing element received on an output stream associated with the upstream processing element. A fastest input rate among each downstream processing element in the plurality of downstream processing elements is identified. An output rate of the upstream processing element is set to the fastest input rate that has been determined for the plurality of downstream processing elements. | 2013-01-31 |
20130031283 | DATA TRANSFER APPARATUS, IMAGE PROJECTION APPARATUS, AND DATA TRANSFER METHOD - A data transfer apparatus includes a serial interface controller configured to perform data transfer between the data transfer apparatus and a destination device via a serial transmission line; and a transfer controller configured to control the data transfer, issue a read request for data to the destination device, and resume issue of the read request after elapse of a given retransmission time when a positive acknowledgement in response to the read request is not received from the destination device under a given condition. | 2013-01-31 |
20130031284 | BUS SYSTEM IN SOC AND METHOD OF GATING ROOT CLOCKS THEREFOR - A system-on-chip bus system includes a bus configured to connect function blocks of a system-on-chip to each other, and a clock gating unit connected to an interface unit of the bus and configured to selectively gate a clock used in the operation of a bus bridge device mounted on the bus according to a state of a transaction detection signal. | 2013-01-31 |
20130031285 | APPARATUS FOR DETERMINING AND/OR MONITORING A CHEMICAL OR PHYSICAL PROCESS VARIABLE IN AUTOMATION TECHNOLOGY - An apparatus for determining and/or monitoring a chemical or physical process variable in automation technology, comprising: a superordinated control unit; and a transmitter electronics having a first interface, a second interface and a third interface. The transmitter electronics communicates with the superordinated control unit by means of the first interface via a bus protocol. The transmitter electronics can be connected with a service unit via the second interface; and the third interface has a plurality of data channels for corresponding data source components; and wherein individual data channels are addressable and tunable via the service unit as a function of the connected data source components, so that data selected from the data for the connected data source components can be transmitted at the same time in at least one telegram to the superordinated control unit. | 2013-01-31 |
20130031286 | ACTIVE INFORMATION SHARING SYSTEM AND DEVICE THEREOF - An active information sharing system having a master device and a slave device is disclosed. When the master device is connected to a first host of an administrator, the master device automatically links to a server through a first network module of the first host, for sharing at least one piece of information from the administrator. The master device correspondingly sets a first parameter to the slave device. When the slave device is connected to a second host of an invited client, the slave device automatically links to the server through a second network module of the second host according to the first parameter, for acquiring the information shared by the administrator. | 2013-01-31 |
20130031287 | INTERRUPT CONTROL APPARATUS AND INTERRUPT CONTROL METHOD - An interrupt control apparatus and interrupt control method reduce situations in which the output of interrupt information is suspended and thus reduce stress caused in a user, without missing the appropriate output timing for interrupt information having a high priority level. A priority level setting unit raises the value of a priority level for an interrupt voice message during a period in which the interrupt voice message is being outputted, and a voice output control unit, when interrupts from two or more overlapping interrupt voice messages occur, carries out control in accordance with priority levels set for each of the two or more interrupt voice messages so that the interrupt voice message having the higher priority level value is preferentially outputted. | 2013-01-31 |
20130031288 | PCI-E SYSTEM HAVING RECONFIGURABLE LINK ARCHITECTURE - A peripheral component interconnect express (PCI-E) system has a reconfigurable link architecture. The system comprises a system slot adapted to receive a PCI-E compatible system controller, a plurality of peripheral slots adapted to receive a plurality of peripheral modules, and a reconfigurable switch fabric configured to create a variable number of PCI-E links between the system slot and the plurality of peripheral slots. | 2013-01-31 |
20130031289 | CONNECTING STRUCTURE FOR DETACHABLE ASSEMBLED ELECTRONIC DEVICE AND DETACHABLE ASSEMBLED ELECTRONIC DEVICE HAVING THE SAME - A detachable assembled electronic device includes a portable device having a display interface and a docking station. The docking station includes a docking station main body and a connecting structure. The connecting structure is assembled to the docking station main body to support the portable device, and includes a flexible element and a first section. The first section is wrapped by the flexible element. When the first section is attached to the portable device, the first section and the flexible element are adapted to support the portable device on the docking station main body, and the portable device is adapted to rotate relative to the docking station main body by the bending of the flexible element to form an operation angle. | 2013-01-31 |
20130031290 | System and Method for Implementing a Secure Processor Data Bus - System and method for implementing a secure processor data bus are described. One embodiment is a circuit comprising a processor disposed in a processor partition, the circuit further comprising a first set of peripherals disposed in a first peripheral partition; a second set of peripherals disposed in a second peripheral partition physically isolated from the first peripheral partition; a first state control register for controlling access to the first set of peripherals by the processor; and a second state control register for controlling access to the second set of peripherals by the processor. When the first and second state control registers are in a first mode of operation, the processor has read and write access to the first set of peripherals and write only access to the second set of peripherals. When the first and second state control registers are in a second mode of operation, the processor has read and write access to the second set of peripherals and read only access to the first set of peripherals. | 2013-01-31 |
20130031291 | SYSTEM AND METHOD FOR VIRTUAL PARTITION MONITORING - A method is provided in one example embodiment that includes rebasing a module in a virtual partition to load at a fixed address and storing a hash of a page of memory associated with the fixed address. An external handler may receive a notification associated with an event affecting the page. An internal agent within the virtual partition can execute a task and return results based on the task to the external handler, and a policy action may be taken based on the results returned by the internal agent. In some embodiments, a code portion and a data portion of the page can be identified and only a hash of the code portion is stored. | 2013-01-31 |
20130031292 | SYSTEM AND METHOD FOR MANAGING MEMORY PAGES BASED ON FREE PAGE HINTS - A host selects a memory page that has been allocated to a guest for eviction. The host may be a host machine that hosts a plurality of virtual machines. The host accesses a bitmap maintained by the guest to determine a state of a bit in the bitmap associated with the memory page. The host determines whether content of the memory page is to be preserved based on the state of the bit. In response to determining that the content of the memory page is not to be preserved, the host discards the content of the memory page. | 2013-01-31 |
20130031293 | SYSTEM AND METHOD FOR FREE PAGE HINTING - A processing device executing an operating system such as a guest operating system generates a bitmap wherein bits of the bitmap represent statuses of memory pages that are available to the operating system. The processing device frees a memory page. The processing device then sets a bit in the bitmap to indicate that the memory page is unused after the memory page is freed. | 2013-01-31 |
20130031294 | NETWORK FILTERING IN A VIRTUALIZED ENVIRONMENT - A physical host executes a hypervisor or virtual machine monitor (VMM) that instantiates at least one virtual machine (VM) and a virtual input/output server (VIOS). The VIOS determines by reference to a policy data structure a disposition of a packet of network communication with the VM, where the disposition includes one of dropping the packet and forwarding the packet. Thereafter, the determined disposition is applied to a subsequent packet in a same packet flow as the packet. | 2013-01-31 |
20130031295 | ADAPTIVE RECORD CACHING FOR SOLID STATE DISKS - A storage controller receives a request that corresponds to an access of a track. A determination is made as to whether the track corresponds to data stored in a solid state disk. Record staging to a cache from the solid state disk is performed, in response to determining that the track corresponds to data stored in the solid state disk, wherein each track is comprised of a plurality of records. | 2013-01-31 |
20130031296 | SYSTEM AND METHOD FOR MANAGING ADDRESS MAPPING INFORMATION DUE TO ABNORMAL POWER EVENTS - A method and apparatus for managing address map information are disclosed. In one embodiment, an apparatus may comprise a processor configured to store address map changes to a first data storage medium, save the address map changes to a nonvolatile data storage medium when an abnormal power state is detected, and, when the power state is no longer abnormal, retrieve the last saved address map information and address map changes and update the address map information using the address map changes. The apparatus may be configured to retrieve the instructions for the processor operation over a network connection. | 2013-01-31 |
20130031297 | ADAPTIVE RECORD CACHING FOR SOLID STATE DISKS - A storage controller receives a request that corresponds to an access of a track. A determination is made as to whether the track corresponds to data stored in a solid state disk. Record staging to a cache from the solid state disk is performed, in response to determining that the track corresponds to data stored in the solid state disk, wherein each track is comprised of a plurality of records. | 2013-01-31 |
20130031298 | INCLUDING PERFORMANCE-RELATED HINTS IN REQUESTS TO COMPOSITE MEMORY - A composite memory device that includes different types of non-volatile memory devices, which have different performance characteristics, is described. This composite memory device may receive requests, a given one of which includes a command, a logical address for at least a block of data associated with the command, and a hint associated with the command. For the given request, the composite memory device executes the command on the block of data at the logical address in at least one of the types of non-volatile memory devices. Furthermore, the composite memory device conditionally executes the hint based on one or more criteria, such as: available memory in the types of non-volatile memory devices, traffic through an interface circuit in the composite memory device, operational states of the types of non-volatile memory devices, a target performance characteristic of the composite memory device, and an environmental condition of the composite memory device. | 2013-01-31 |
20130031299 | DISK INPUT/OUTPUT (I/O) LAYER ARCHITECTURE HAVING BLOCK LEVEL DEVICE DRIVER - In general, embodiments of the present invention provide a disk I/O layer architecture having a customized block-level device driver. In a typical embodiment, the architecture described herein comprises a file system layer being configured to handle user data; a buffer cache layer, adjacent the file system layer, the buffer cache layer being configured to handle page data; a block device driver layer adjacent the buffer cache layer, the block device driver layer being configured to handle block data, and the block device driver layer comprising an I/O scheduler layer and a device driver layer; and a storage unit layer adjacent the block device driver layer, the storage unit layer being configured to handle command data. Moreover, the storage unit layer can comprise a set (e.g., at least one) of semiconductor storage device (SSD) memory units, and the I/O scheduler layer can be configured to handle memory-based devices (e.g. a flash SSD memory device, a dynamic random access memory (DRAM) SSD memory device, etc.). | 2013-01-31 |
20130031300 | NON-VOLATILE MEMORY DEVICE, METHOD OF OPERATING THE SAME, AND MEMORY SYSTEM HAVING THE NON-VOLATILE MEMORY DEVICE - According to an aspect of the inventive concepts, there is provided a non-volatile memory device including a memory array with at least one stripe. The at least one stripe includes at least one parity page and at least one data page. The non-volatile memory device further includes a chip controller. The chip controller includes an operation module configured to perform an operation on data input from the outside of the memory device, to store a result of the performing, and to program the result of the performing into the at least one parity page. The chip controller further includes a data buffer configured to store the input data and to program the input data into the at least one data page. | 2013-01-31 |
20130031301 | BACKEND ORGANIZATION OF STORED DATA - Data units received from a host system are divided and/or redistributed among a plurality of data payloads, wherein boundaries of the data units are not aligned with boundaries of the data payloads. The plurality of data payloads are encoded into a respective plurality of codewords, and the plurality of codewords are stored in the flash memory. Boundaries of the codewords are aligned with boundaries of the pages in the flash memory. | 2013-01-31 |
20130031302 | SYSTEMS AND METHODS FOR DETERMINING THE STATUS OF MEMORY LOCATIONS IN A NON-VOLATILE MEMORY - Systems and methods are provided for storing data in a portion of a non-volatile memory (“NVM”) such that the status of the NVM portion can be determined with high probability on a subsequent read. An NVM interface, which may receive write commands to store user data in the NVM, can store a fixed predetermined sequence (“FPS”) with the user data. The FPS may ensure that a successful read operation on a NVM portion is not misinterpreted as a failed read operation or as an erased NVM portion. For example, if the NVM returns an all-zero vector when a read request fails, the FPS can include at least one “1” or one “0”, as appropriate, to differentiate between successful and unsuccessful read operations. In some embodiments, the FPS may also be used to differentiate between disturbed data, which passes an error correction check, and correct data. | 2013-01-31 |
20130031303 | STACKED MEMORY DEVICES, SYSTEMS, AND METHODS - Memory requests for information from a processor are received in an interface device, and the interface device is coupled to a stack including two or more memory devices. The interface device is operated to select a memory device from a number of memory devices including the stack, and to retrieve some or all of the information from the selected memory device for the processor. Additional apparatus, systems and methods are disclosed. | 2013-01-31 |
20130031304 | DATA STORAGE IN NONVOLATILE MEMORY - A method for data storage in a nonvolatile memory device includes compressing current data. The compressed current data is written to a space of the nonvolatile memory device that does not include the most recently written data. If the compressed current data is successfully written, identification data is stored on the nonvolatile memory device. The identification data identifies the written compressed current data as a currently valid version. | 2013-01-31 |
20130031305 | INFORMATION PROCESSING SYSTEM INCLUDING SEMICONDUCTOR DEVICE HAVING SELF-REFRESH MODE - Disclosed herein is an information processing system having first and second devices. The second device alternately issues a self-refresh command and a self-refresh exit command to the first device. The first device performs a refresh operation once in response to the self-refresh command and updates a state of a DLL circuit in response to the self-refresh exit command. | 2013-01-31 |
20130031306 | APPARATUS AND METHOD FOR PREFETCHING DATA - Apparatuses and methods for prefetching data are disclosed. A method may include receiving a read request at a data storage device, determining a meta key in an address map that includes a logical block address (LBA) of the read request, wherein the meta key includes a beginning LBA and a size field corresponding to a number of consecutive sequential LBAs stored on the data storage device, calculating a prefetch operation to prefetch data based on addresses included in the meta key, and reading data corresponding to the prefetch operation and the read request. An apparatus may include a processor configured to receive a read request, determine a first meta key and a second meta key in an address map, calculate a prefetch operation based on addresses included in the first meta key and the second meta key, and read data corresponding to the prefetch operation and the read request. | 2013-01-31 |
20130031307 | STORAGE APPARATUS, METHOD THEREOF AND SYSTEM - A storage apparatus includes a memory that stores job management information that registers a write job corresponding to a write command upon receiving the write command from another apparatus, a cache memory that stores data designated as target data by the write command, a storage drive that records the data stored in the cache memory to a storage medium based on the write job registered in the job management information, and a controller that controls a timing to output to the other apparatus a completion report of the write command based on a load condition of the storage drive related to an accumulation count of write jobs acquired from the job management information. | 2013-01-31 |
20130031308 | DEVICE DRIVER FOR USE IN A DATA STORAGE SYSTEM - A device driver includes an aggregator aggregating data blocks into one or more container objects suited for storage in an object store; and a logger for maintaining, in at least one log file, for each data block an identification of the container object in which the data block is stored, together with an identification of the location of the data block in the container object. | 2013-01-31 |
20130031309 | SEGMENTED CACHE MEMORY - A cache memory associated with a main memory and a processor capable of executing a dataflow processing task, includes a plurality of disjoint storage segments, each associated with a distinct data category. A first segment is dedicated to input data originating from a dataflow consumed by the processing task. A second segment is dedicated to output data originating from a dataflow produced by the processing task. A third segment is dedicated to global constants, corresponding to data available in a single memory location to multiple instances of the processing task. | 2013-01-31 |
20130031310 | COMPUTER SYSTEM - A computer system includes: a main storage unit; a processing executing unit sequentially executing processing to be executed on virtual processors; a level-1 cache memory shared among the virtual processors; a level-2 cache memory including storage areas partitioned based on the number of the virtual processors, the storage areas each (i) corresponding to one of the virtual processors and (ii) holding the data to be used by the corresponding one of the virtual processors; a context memory holding a context item corresponding to each virtual processor; a virtual processor control unit saving and restoring a context item of one of the virtual processors; a level-1 cache control unit; and a level-2 cache control unit. | 2013-01-31 |
20130031311 | INTERFACE APPARATUS, CALCULATION PROCESSING APPARATUS, INTERFACE GENERATION APPARATUS, AND CIRCUIT GENERATION APPARATUS - There is provided an interface apparatus including: a stream converter receiving write-addresses and write-data, storing the received data in a buffer, and sorting the stored write-data in the order of the write-addresses to output the write-data as stream-data; a cache memory storing received stream-data if a load-signal indicates that the stream-data need to be loaded and outputting data stored in a storage device corresponding to an input cache-address as cache-data; a controller determining whether or not data allocated with a read-address have already been loaded, outputting the load-signal instructing the loading on the cache memory if not loaded, and outputting a load-address indicating a load-completed address of the cache memory; and at least one address converter calculating which one of the storage devices the allocated data are stored in by using the load-address, outputting the calculated value as the cache-address to the cache memory, and outputting the cache-data as read-data. | 2013-01-31 |
20130031312 | CACHE MEMORY CONTROLLER - A cache memory controller including: a pre-fetch requester configured to issue pre-fetch requests, each pre-fetch request having one of a plurality of different qualities of service. | 2013-01-31 |
20130031313 | CACHE ARRANGEMENT - A first cache arrangement including an input configured to receive a memory request from a second cache arrangement; a first cache memory for storing data; an output configured to provide a response to the memory request for the second cache arrangement; and a first cache controller; the first cache controller configured such that for the response to the memory request output by the output, the first cache memory includes no allocation for data associated with the memory request. | 2013-01-31 |
20130031314 | Support for Multiple Coherence Domains - A number of coherence domains are maintained among the multitude of processing cores disposed in a microprocessor. A cache coherency manager defines the coherency relationships such that coherence traffic flows only among the processing cores that are defined as having a coherency relationship. The data defining the coherency relationships between the processing cores is optionally stored in a programmable register. For each source of a coherent request, the processing core targets of the request are identified in the programmable register. In response to a coherent request, an intervention message is forwarded only to the cores that are defined to be in the same coherence domain as the requesting core. If a cache hit occurs in response to a coherent read request and the coherence state of the cache line resulting in the hit satisfies a condition, the requested data is made available to the requesting core from that cache line. | 2013-01-31 |
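The per-source coherence domains held in a programmable register, as described above, can be modeled as a table of target masks. This toy Python model is an illustration only; the bit-mask encoding and all names are assumptions, not the application's own design.

```python
# Toy model: one programmable mask per source core defines which cores
# belong to its coherence domain and therefore receive intervention messages.

class CoherencyManager:
    def __init__(self, num_cores):
        # domain_reg[src]: bit i set => core i is in src's coherence domain.
        self.domain_reg = [0] * num_cores

    def set_domain(self, source_core, target_cores):
        """Program the coherency relationship for one request source."""
        mask = 0
        for core in target_cores:
            mask |= 1 << core
        self.domain_reg[source_core] = mask

    def intervention_targets(self, source_core):
        """Cores forwarded an intervention for a coherent request from
        source_core; cores outside the domain see no coherence traffic."""
        mask = self.domain_reg[source_core]
        return [c for c in range(len(self.domain_reg))
                if (mask >> c) & 1 and c != source_core]
```

A requesting core never appears in its own intervention list, and a core whose mask was never programmed generates no coherence traffic at all.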
20130031315 | MULTI-DEVICE MEMORY SERIAL ARCHITECTURE - Subject matter disclosed herein relates to memory devices comprising a memory array, a first port to interface with a memory controller directly or indirectly via another memory device, a second port to interface with yet another memory device, and a switch to selectively electrically connect the memory controller to a circuit path leading to the second port or to the memory array, wherein the switch may be responsive to a signal generated by the memory controller. | 2013-01-31 |
20130031316 | SYSTEM AND METHOD FOR PROVIDING MORE LOGICAL MEMORY PORTS THAN PHYSICAL MEMORY PORTS - Some embodiments provide for a method of mapping a user design to a configurable integrated circuit (IC). The method is for a configurable IC that implements a user design with an associated user design clock cycle. The IC operates on a sub-cycle clock that has multiple sub-cycle periods within a user period of the user design clock cycle. The method identifies multiple port accesses to a first multi-port memory defined in the user design. The accesses are in a single user design clock cycle. The method maps the multiple port accesses to the first multi-port memory to multiple physical-port memory accesses to a second physical-port memory in the configurable IC during multiple sub-cycles associated with a single user design clock cycle. | 2013-01-31 |
20130031317 | METHOD AND APPARATUS FOR REDIRECTING DATA WRITES - Apparatuses and methods for redirecting data writes are disclosed. In one embodiment a controller may be configured to receive a command including write data and address data identifying a target zone of a data storage medium; determine whether the target zone contains sufficient available data sectors to store the write data; and record the write data to a common area of a different zone when the target zone does not contain sufficient available data sectors, the common area available to store data when a target zone lacks sufficient available data sectors. In another embodiment, a method may comprise receiving a write command identifying a target zone of a data storage medium; determining whether the target zone contains sufficient available data sectors to store the write data; and recording the write data to a common area of a different zone when the target zone does not contain sufficient available data sectors. | 2013-01-31 |
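The overflow-redirection flow above is simple enough to sketch. The zone sizes, class name, and return values in this Python fragment are invented for illustration and are not taken from the application.

```python
# Minimal sketch: a write that does not fit its target zone is redirected
# to a shared common area, tagged with the zone it was meant for.

class ZonedStore:
    def __init__(self, zone_sectors, num_zones, common_sectors):
        self.free = {z: zone_sectors for z in range(num_zones)}
        self.common_free = common_sectors
        self.redirected = []   # (target_zone, data) pairs landed in the common area

    def write(self, target_zone, data_sectors, data):
        if self.free[target_zone] >= data_sectors:
            self.free[target_zone] -= data_sectors
            return "zone"                     # enough room: write in place
        if self.common_free >= data_sectors:
            self.common_free -= data_sectors
            self.redirected.append((target_zone, data))
            return "common"                   # zone full: redirect to common area
        raise IOError("no space in target zone or common area")
```

Tracking the intended target zone alongside each redirected write lets the data be moved back (or looked up) once the zone has space again.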
20130031318 | APPARATUS, METHOD AND ARTICLE FOR PROVIDING VEHICLE DIAGNOSTIC DATA - A network of collection, charging and distribution machines collects, charges and distributes portable electrical energy storage devices (e.g., batteries, supercapacitors or ultracapacitors). Vehicle diagnostic data of a vehicle using the portable electrical energy storage device is stored on a diagnostic data storage system of the portable electrical energy storage device during use of a respective portable electrical energy storage device by a respective vehicle. Once the user places the portable electrical energy storage device in the collection, charging and distribution machine, or comes within wireless communications range of a collection, charging and distribution machine, a connection is established between the collection, charging and distribution machine and the portable electrical energy storage device. The collection, charging and distribution machine then reads vehicle diagnostic data stored on the diagnostic data storage system of the portable electrical energy storage device and provides information regarding the diagnostic data. | 2013-01-31 |
20130031319 | INTERLEAVING OF MEMORY REPAIR DATA COMPRESSION AND FUSE PROGRAMMING OPERATIONS IN SINGLE FUSEBAY ARCHITECTURE - An approach for interleaving memory repair data compression and fuse programming operations in a single fusebay architecture is described. In one embodiment, the single fusebay architecture includes a multiple of pages that are used with a partitioning and interleaving approach to handling memory repair data compression and fuse programming operations. In particular, for each page in the single fusebay architecture, a memory repair data compression operation is performed on memory repair data followed by a fuse programming operation performed on the compressed memory repair data. | 2013-01-31 |
20130031320 | CONTROL DEVICE, CONTROL METHOD AND STORAGE APPARATUS - A control device includes a receiver that receives an instruction to update first data stored in a first volume to second data, and a copy processor that starts copying the first data into a second volume in response to the reception of the update instruction by the receiver and restricts the start of copying the first data from the second volume into a third volume until the data stored in the first volume has been completely copied into the second volume. | 2013-01-31 |
20130031321 | CONTROL APPARATUS, CONTROL METHOD, AND STORAGE APPARATUS - A control apparatus includes a processor. The processor determines, upon detecting a read error on a first volume of a storage under a non-equivalent state, a first storage area in which the read error has occurred. The first storage area is included in the first volume. The processor determines whether a write process has been conducted on the first storage area under the non-equivalent state. The processor determines whether a write process has been conducted on a second storage area under the non-equivalent state. The second storage area is included in a second volume of a storage and corresponds to the first storage area. The processor copies data stored in the second storage area to the first storage area when no write process has been conducted on the first storage area and the second storage area under the non-equivalent state. | 2013-01-31 |
20130031322 | Performing Redundant Memory Hopping - In one embodiment, the present invention includes a method for receiving an indication of a loss of redundancy with respect to a pair of mirrored memory regions of a partially redundant memory system, determining new mirrored memory regions, and dynamically migrating information stored in the original mirrored memory regions to the new mirrored memory regions. Other embodiments are described and claimed. | 2013-01-31 |
20130031323 | MEMORY DEVICE SHARING SYSTEM, MANAGING APPARATUS, ACCESS CONTROL APPARATUS, METHODS THEREFOR, AND RECORDING MEDIUM - A memory device sharing system includes M (M represents an integer of 2 or greater) access control apparatuses for sharing N (N represents an integer of 2 or greater) memory devices which store data, and a managing apparatus for managing access to the memory devices via the access control apparatuses. The managing apparatus checks data stored in the N memory devices, generates data position information representing the storage positions of data stored in any one of the N memory devices, and sends the data position information to the M access control apparatuses. Each of the M access control apparatuses receives the data position information sent from the managing apparatus and, upon receiving a request from an access request source to access the data, accesses the storage position indicated by the data position information. | 2013-01-31 |
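The manager-built position map described above can be sketched in a few lines. The data structures and names in this Python fragment are illustrative assumptions, not the application's own.

```python
# Sketch: the managing apparatus scans which device holds which keys and
# distributes a key -> device map to every access control apparatus.

def build_position_info(devices):
    """devices: {device_id: iterable_of_keys}. Returns {key: device_id}."""
    info = {}
    for device_id, keys in devices.items():
        for key in keys:
            info[key] = device_id
    return info

class AccessController:
    def __init__(self):
        self.position_info = {}

    def receive_position_info(self, info):
        """Install the map sent by the managing apparatus."""
        self.position_info = dict(info)

    def access(self, key):
        """Resolve an access request to the device holding the data."""
        device = self.position_info.get(key)
        if device is None:
            raise KeyError(f"no device holds {key!r}")
        return device
```

Because every controller holds the same map, any of the M controllers can route a request to the right one of the N devices without consulting the manager on each access.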
20130031324 | PROTECTING AND MIGRATING MEMORY LINES - A data protection method is provided that includes determining a compressibility score of one or more lines of data stored in a memory. The memory includes a first area characterized by a first reliability level and a second area characterized by a second reliability level. Lines of data with a first compressibility score are migrated to the first area of the memory. Lines of data with a second compressibility score are migrated to the second area of the memory. | 2013-01-31 |
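The compressibility-based placement above can be illustrated with a short sketch. The abstract does not say which score maps to which reliability level; this fragment assumes highly compressible lines go to the less reliable area (their compressed form leaves spare bits for stronger error correction), and the scoring rule and threshold are likewise assumptions.

```python
# Hedged sketch: score each memory line by how well it compresses, then
# partition lines between two areas of differing reliability.

import zlib

def compressibility_score(line: bytes) -> float:
    """Fraction of bytes saved: 0.0 = incompressible, near 1.0 = highly compressible."""
    compressed = zlib.compress(line)
    return max(0.0, 1.0 - len(compressed) / len(line))

def place_lines(lines, threshold=0.5):
    """Assumed policy: compressible lines tolerate the less reliable area."""
    high_reliability, low_reliability = [], []
    for line in lines:
        if compressibility_score(line) >= threshold:
            low_reliability.append(line)   # spare bits can hold extra ECC
        else:
            high_reliability.append(line)
    return high_reliability, low_reliability
```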
20130031325 | System for Updating an Associative Memory - A system includes an associative memory, a first table, a second table, a comparator, and an updater. The associative memory may include data and associations among data and may be built from the first table. The first table may include a record with a first and second field. The associative memory may be configured to ingest the first field and avoid ingesting the second field. The second table may include a record with a third field storing information indicating whether the first field has been ingested by the associative memory or has been forgotten by the associative memory. The comparator may be configured to compare the first and second table to identify one of whether the first field should be forgotten or ingested by the associative memory. The updater may be configured to update the associative memory by performing one of ingesting or forgetting the first field. | 2013-01-31 |
20130031326 | DEVICES, METHODS, AND SYSTEMS SUPPORTING ON UNIT TERMINATION - The present disclosure includes devices, methods, and systems supporting on unit termination. A number of embodiments include a number of memory units, wherein one memory unit includes termination circuitry and another memory unit does not include termination circuitry. | 2013-01-31 |
20130031327 | SYSTEM AND METHOD FOR ALLOCATING CACHE MEMORY - Different processor elements in a multi-task/multi-core system on chip may have different memory requirements at runtime. The method for adaptively allocating cache memory re-allocates the cache resource by updating a bank assignment table. Under the associativity-based partitioning scheme, centralized memory is separated into several differently numbered groups of SRAM banks. These groups are assigned to different processor elements as L2 caches. The bank assignment information is recorded in the bank assignment table and is updated by a system profiling engine. By changing the information in the bank assignment table, cache resources are re-allocated among the processor elements. | 2013-01-31 |
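The bank assignment table at the heart of the scheme above can be sketched as a simple ownership map. The class and method names in this Python toy model are assumptions for illustration, not the application's.

```python
# Toy bank assignment table: owner[i] names the processor element currently
# using SRAM bank i as part of its L2 cache. A profiling engine re-allocates
# capacity at runtime simply by reassigning bank ownership.

class BankAssignmentTable:
    def __init__(self, num_banks):
        self.owner = [None] * num_banks

    def assign(self, processor, banks):
        """Hand the given banks to one processor element (overwriting
        any previous owner, which shrinks that owner's partition)."""
        for b in banks:
            self.owner[b] = processor

    def banks_of(self, processor):
        """Banks currently forming this processor element's L2 cache."""
        return [i for i, p in enumerate(self.owner) if p == processor]
```

Re-allocation is just a table update: assigning bank 3 from PE0 to PE1 grows PE1's cache and shrinks PE0's without moving any hardware resources.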
20130031328 | TECHNIQUES FOR BALANCING ACCESSES TO MEMORY HAVING DIFFERENT MEMORY TYPES - Embodiments of the present technology are directed toward techniques for balancing memory accesses to different memory types. | 2013-01-31 |