22nd week of 2015 patent application highlights part 61 |
Patent application number | Title | Published |
20150149613 | OPTIMIZED FRAMEWORK FOR NETWORK ANALYTICS - A system may receive raw information associated with a network and may prepare the raw information to create optimized information. The optimized information may include the raw information that has been sorted. The system may correlate the optimized information to create a set of correlated information. The system may aggregate at least two sets of correlated information to create aggregated information. The system may determine that network analytics are to be performed using the set of correlated information or the aggregated information. The system may determine information associated with performing the network analytics, including the set of correlated information or the aggregated information. The system may perform the network analytics based on the information associated with performing the network analytics. The system may provide a result associated with performing the network analytics. The result may indicate a manner in which to improve a performance of the network. | 2015-05-28 |
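The prepare/correlate/aggregate pipeline described in 20150149613 can be sketched as below. This is a minimal illustration only: the record fields (`ts`, `flow`, `bytes`), the sort key, and the correlation key are all assumptions, not details taken from the filing.

```python
# Hypothetical sketch of the sort -> correlate -> aggregate pipeline.
def prepare(raw_records):
    """Create 'optimized information' by sorting raw records by timestamp."""
    return sorted(raw_records, key=lambda r: r["ts"])

def correlate(records):
    """Group optimized records that share a flow identifier."""
    groups = {}
    for r in records:
        groups.setdefault(r["flow"], []).append(r)
    return groups

def aggregate(correlated_sets):
    """Combine at least two sets of correlated records into per-flow byte totals."""
    totals = {}
    for groups in correlated_sets:
        for flow, recs in groups.items():
            totals[flow] = totals.get(flow, 0) + sum(r["bytes"] for r in recs)
    return totals

raw = [{"ts": 2, "flow": "a", "bytes": 10},
       {"ts": 1, "flow": "b", "bytes": 5},
       {"ts": 3, "flow": "a", "bytes": 7}]
agg = aggregate([correlate(prepare(raw))])
```

A result like `agg` could then feed the analytics step that decides how to improve network performance.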
20150149614 | ASCERTAIN TETHERING OF DEVICE - Systems, methods and procedures are described for ascertaining tethering of a device in a communication network. In one implementation, a wireless device provides Internet connectivity to a computing device using a wireline or wireless transmission medium. In one arrangement, the wireless device, in the tethered arrangement with the computing device, may provide data-receiving and data-sending capability to the computing device. In one implementation, the communication entity hosting the network may ascertain that tethering is occurring by analyzing communications generated by or passing through the computing device. | 2015-05-28 |
20150149615 | PROCESS CAGE PROVIDING ATTRACTION TO DISTRIBUTED STORAGE - A computer-implemented method may include running the process on a first processing node. The process running on the first processing node initially operates on first data. The method may include monitoring the process to identify a first data node that provides the first data to the process. In addition, the method may include determining whether performance would likely be improved by transferring the process to a second processing node. The method may include transferring the process to the second processing node. Further, the method may include transferring a result of the process operating on the first data from the second processing node to the first processing node. | 2015-05-28 |
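The migration decision in 20150149615 — move the process toward its data when that likely improves performance — can be sketched with a toy cost model. The byte-cost comparison below is an illustrative assumption; the application does not specify how the improvement determination is made.

```python
def should_transfer(data_size, result_size, cost_per_byte):
    """Migrating the process to the data node pays off when shipping the
    (usually much smaller) result back is cheaper than pulling the full
    input data across the network.  Toy cost model, not from the filing."""
    remote_cost = data_size * cost_per_byte      # leave process in place, pull data over
    migrate_cost = result_size * cost_per_byte   # move process to data, ship result back
    return migrate_cost < remote_cost

# A 1 GB input producing a 10 KB result strongly favours migration.
decision = should_transfer(data_size=10**9, result_size=10**4, cost_per_byte=1e-8)
```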
20150149616 | SERVER AND SHARE LINK MANAGEMENT METHOD THEREOF - A server and a share link management method are provided. The server generates a first share link in response to a request, and generates a first workflow according to the first share link. The server transmits the first share link to at least one electronic device via a network according to the first workflow. The server monitors at least one first status which is generated by the at least one electronic device in response to the first share link, and adjusts the first workflow according to the at least one first status. The share link management method is applied to the server to implement the aforesaid operations. | 2015-05-28 |
20150149617 | CLOUD-BASED MONITORING APPARATUS - A cloud-based monitoring apparatus includes a cloud-based network system, a cloud-based database connected with the cloud-based network system, at least one gateway control unit connected with the cloud-based network system, at least one mobile apparatus connected signally with the cloud-based database, at least one basic detection module connected signally with the gateway control unit, at least one serial bus module connected signally with the gateway control unit, at least one analog I/O module connected signally with the gateway control unit, at least one digital I/O module connected signally with the gateway control unit, and an ISP connected signally with the cloud-based database. The gateway control unit, the basic detection module, the serial bus module, the analog I/O module, and the digital I/O module are all mounted in a monitored environment and have at least individual monitor parameters. | 2015-05-28 |
20150149618 | INFORMATION TECHNOLOGY RESOURCE MANAGEMENT - Embodiments relate to information technology resource management and scaling. According to one aspect, an upcoming event impacting an application operating on one or more resources is identified. A workload on the application is predicted based on the upcoming event identified and historical data about a previous event having similarity with the upcoming event. The prediction is performed using a predefined rule. A number of resources required to process the predicted workload is ascertained using a past scaling history of the application. The resources are scaled based on the ascertained number of resources, determined before the event occurs. | 2015-05-28 |
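The predict-then-scale flow of 20150149618 can be sketched as follows. The nearest-neighbour lookup standing in for the "predefined rule", and the fixed per-resource capacity standing in for the "past scaling history", are both simplifying assumptions for illustration.

```python
def predict_workload(upcoming_hint, history):
    """Predict workload from the most similar past event (a simple
    nearest-neighbour rule standing in for the 'predefined rule')."""
    nearest = min(history, key=lambda e: abs(e["hint"] - upcoming_hint))
    return nearest["workload"]

def resources_needed(workload, capacity_per_resource):
    """Past scaling history is reduced here to one fixed per-resource
    capacity; real history would give a richer model."""
    return -(-workload // capacity_per_resource)  # ceiling division

history = [{"hint": 100, "workload": 900},
           {"hint": 500, "workload": 4200}]
w = predict_workload(450, history)                # most similar past event
n = resources_needed(w, capacity_per_resource=1000)  # scale out before the event
```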
20150149619 | USER STATE CONFIRMATION SYSTEM, USER STATE CONFIRMATION METHOD, SERVER DEVICE, COMMUNICATION TERMINAL DEVICE, SERVER COMPUTER PROGRAM, AND TERMINAL COMPUTER PROGRAM - In order to remotely confirm a state of monitored person without using monitoring sensors, a user state confirmation system ( | 2015-05-28 |
20150149620 | CALCULATING THE EFFECT OF AN ACTION IN A NETWORK - A method for calculating the effect of an action on a network includes creating a mapping of a plurality of devices of a networked computing environment. In one embodiment, the mapping describes a relationship between a primary device and at least one device of the plurality of devices. In another embodiment, the method includes determining a plurality of potential actions to be performed on the primary device. In a further embodiment, the method includes calculating an effect of a potential action of the plurality of potential actions on the plurality of devices in response to simulating performing the potential action on the primary device. In yet another embodiment, the method includes performing an optimization action in response to calculating the effect of the potential action. In certain embodiments, the optimization action maximizes availability of the networked computing environment. | 2015-05-28 |
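The effect calculation in 20150149620 amounts to walking the device mapping from the primary device and seeing what a simulated action touches. The sketch below assumes the mapping is a simple dependency graph and the "effect" is downtime propagated to dependents; both are illustrative choices, not details from the filing.

```python
def simulate_effect(mapping, primary, action_downtime):
    """Return the devices that lose availability if `primary` goes down
    for `action_downtime`, following dependency edges in the mapping."""
    affected, stack = set(), [primary]
    while stack:
        dev = stack.pop()
        if dev in affected:
            continue
        affected.add(dev)
        stack.extend(mapping.get(dev, []))   # devices depending on this one
    return {dev: action_downtime for dev in affected}

# A router feeds two switches; switch1 in turn feeds a host.
mapping = {"router": ["switch1", "switch2"], "switch1": ["host1"]}
effect = simulate_effect(mapping, "router", action_downtime=30)
```

An optimization step could then pick the candidate action whose simulated effect leaves the most devices available.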
20150149621 | METHOD AND SURVEY SERVER FOR GENERATING PERFORMANCE METRICS OF URLS OF A WEBSITE - A method and survey server for generating performance metrics of URLs of a website. The survey server collects website visit data from a plurality of user devices. The website visit data comprise URLs of webpages of the website displayed on each specific user device during a visiting of the website by a user of the specific user device. The survey server also collects participation data from some of the plurality of user devices. The survey participation data correspond to survey information received from the users of the user devices in relation to the visit of the website, and comprise an indication of the users having either or not fulfilled a purpose of visit. The survey server further analyzes the website visit data and the survey participation data to generate performance metrics of the URLs of the website with respect to fulfilling the purpose of visit of the website. | 2015-05-28 |
20150149622 | Scheduling Requests for Data Transfers in a Multi-Device Storage System - Apparatus and method for scheduling requests for data transfers in a multi-device storage system. In some embodiments, a system includes at least one server coupled to a pool of storage devices to transfer data from the storage devices to client devices responsive to requests. A request scheduler is adapted to receive into a memory a plurality of requests each having a service identifier (ID) and a payload size, to set a deadline for each request responsive to the service ID and the payload size, to forward the requests to the server for processing in an order based on service ID and, responsive to the deadline being reached for a selected request, to advance the selected request for immediate processing by the server. | 2015-05-28 |
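The scheduler of 20150149622 orders requests by service ID but promotes any request whose deadline has arrived. A minimal sketch, in which the deadline formula (per-service latency budget plus payload transfer time) is an assumed model, not one given in the abstract:

```python
def set_deadline(arrival, service_id, payload_size, budgets, rate):
    """Deadline = arrival + per-service latency budget + transfer time.
    `budgets` and `rate` are assumed tuning inputs."""
    return arrival + budgets[service_id] + payload_size / rate

def schedule(requests, now, budgets, rate):
    """Process in service-ID order, but any request whose deadline has
    been reached jumps the queue for immediate processing."""
    urgent = [r for r in requests
              if set_deadline(r["t"], r["sid"], r["size"], budgets, rate) <= now]
    rest = [r for r in requests if r not in urgent]
    return urgent + sorted(rest, key=lambda r: r["sid"])

budgets = {1: 0.5, 2: 5.0}   # tight budget for service 1, loose for service 2
reqs = [{"t": 0.0, "sid": 2, "size": 100},
        {"t": 0.0, "sid": 1, "size": 100}]
order = schedule(reqs, now=2.0, budgets=budgets, rate=1000)
```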
20150149623 | MANAGEMENT SYSTEM AND METHOD FOR CONTROLLING THE SAME - A management system configured to collect operational information of network devices provided on a plurality of local networks includes a first management unit configured to separately manage the collected operational information in a plurality of tenants to which different access rights are set, a second management unit configured to tally and manage in group units a part of the operational information of the plurality of tenants managed separately by the first management unit, and a providing unit configured to provide the tallied operational information managed by the second management unit, wherein the second management unit manages the tallied operational information so as to hide or exclude a part of the tallied operational information. | 2015-05-28 |
20150149624 | Fast Detection and Remediation of Unmanaged Assets - In one aspect, methods, system, and computer-readable media for monitoring unmanaged assets in a network having a plurality of managed machines include: at a first managed machine of the plurality of managed machines, wherein the plurality of managed machine are arranged in a linear communication orbit and have respective identifiers, and each managed machine is coupled to at least one respective neighbor by a corresponding local segment of the linear communication orbit: responding to a detection instruction for detecting unmanaged assets currently present in the network, by: scanning for live unmanaged machines within a selected portion of the network that is associated with a range of identifiers that includes identifiers between the respective identifiers of the first managed machine and a respective neighbor of the first managed machine; and generating a local report identifying one or more unmanaged machines that have been detected within the selected portion of the network. | 2015-05-28 |
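The per-segment scan in 20150149624 checks only the identifier range between a managed machine and its orbit neighbor. A minimal sketch, treating identifiers as integers for illustration:

```python
def scan_segment(self_id, neighbor_id, live_machines, managed_ids):
    """Scan the identifier range between a managed machine and its
    neighbor on the linear orbit; report live machines in that range
    that are not managed."""
    lo, hi = sorted((self_id, neighbor_id))
    return [m for m in live_machines if lo < m < hi and m not in managed_ids]

managed = {10, 50}           # identifiers of managed machines
live = [10, 23, 37, 50]      # everything answering on the network
report = scan_segment(10, 50, live, managed)   # local report for this segment
```

Because every managed machine scans only its own segment, the whole network is covered without any single node scanning everything.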
20150149625 | METHOD AND SYSTEM FOR LOW-OVERHEAD LATENCY PROFILING - The present disclosure provides a method, non-transitory computer-readable storage medium, and computer system that implement a latency monitoring and reporting service configured to collect and report latency of service transactions. In one embodiment, a chronicler object is generated and transmitted to a charging engine, where the chronicler object is configured to collect a set of time points as the chronicler object travels through one or more components of the charging engine. Upon return of the chronicler object, the set of time points is extracted from the chronicler object and added to one of a plurality of accumulator objects. Each accumulator object includes a plurality of sets of time points from a plurality of chronicler objects that are received during a reporting window. The plurality of sets of time points of each accumulator object is used to calculate the latency of service transactions. | 2015-05-28 |
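The chronicler/accumulator split in 20150149625 can be sketched in a few lines. Class and method names here are invented for illustration; the mean-of-spans latency metric is likewise an assumption.

```python
class Chronicler:
    """Travels with one transaction, collecting labelled time points."""
    def __init__(self):
        self.points = []
    def mark(self, label, t):
        self.points.append((label, t))

class Accumulator:
    """Collects the time-point sets of many chroniclers in one reporting window."""
    def __init__(self):
        self.sets = []
    def add(self, chronicler):
        self.sets.append(chronicler.points)
    def mean_latency(self):
        spans = [pts[-1][1] - pts[0][1] for pts in self.sets]
        return sum(spans) / len(spans)

acc = Accumulator()
for start, end in [(0.0, 0.3), (1.0, 1.1)]:   # two transactions in the window
    c = Chronicler()
    c.mark("enter", start)
    c.mark("exit", end)
    acc.add(c)                                # extract points on return
latency = acc.mean_latency()
```

The overhead stays low because components only append timestamps; all arithmetic happens once per window, in the accumulator.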
20150149626 | SYSTEM OF REDUNDANTLY CLUSTERED MACHINES TO PROVIDE FAILOVER MECHANISMS FOR MOBILE TRAFFIC MANAGEMENT AND NETWORK RESOURCE CONSERVATION - A method for providing fault tolerance in mobile traffic management services is provided. The method includes detecting, at a mobile device, that one component of multiple components for providing mobile traffic management services is non-operational, at capacity, or near capacity, identifying the mobile device serviced by the one component, retrieving information for the mobile device serviced by the one component, from a repository coupled to the one component and the multiple components, re-assigning the mobile device originally serviced by the one component to another one of the multiple components for servicing, and communicating with the other component for servicing communication requests of the mobile device. | 2015-05-28 |
20150149627 | METHOD AND APPARATUS FOR COORDINATING NETWORK - Embodiments of the present invention relate to a method and an apparatus for coordinating a network. The method includes: receiving or monitoring network information; determining, according to the network information, whether a network operation needs to be coordinated; and coordinating the network operation if determining, according to the network information, that the network operation needs to be coordinated. The apparatus provided in embodiments of the present invention includes: a network information acquiring unit, a coordination determining unit and a coordinating unit. The method and the apparatus for coordinating the network provided in embodiments of the present invention can reduce the probability of various network problems that arise because a network operation is fixed or preset by an operator, so that the network operation can achieve an expected network objective, thereby reducing the operator's maintenance cost. | 2015-05-28 |
20150149628 | SYSTEM AND METHOD FOR A SERVICE METERING FRAMEWORK IN A NETWORK ENVIRONMENT - A method is provided in one example embodiment executed at a service metering framework (SMF) engine including a processor, and includes interfacing, by an event listener at the SMF engine, with an application being executed in a cloud by a remote client device, detecting a metering event associated with the application during execution of the application, receiving a value of at least one metering attribute associated with the metering event, and storing the at least one metering attribute and the value as a formatted metered record in a SMF database searchable according to the metering attribute. In a specific embodiment, the event listener exposes an application programming interface (API) of the SMF engine to the application to facilitate definitions of the metering event and the at least one metering attribute in the application. | 2015-05-28 |
20150149629 | USER ONLINE STATE QUERYING METHOD AND APPARATUS - A user state querying method is provided. The method includes: saving an association between a user identity of a user in a first service system and a user identity of the user in a second service system, where the first service system and the second service system are different service systems; obtaining an activation state of the user in the first service system and the second service system; receiving a state query request from the first service system; and sending the activation state of the user in the second service system to the first service system according to the association and a user identity included in the state query request. By using the present invention, a query of a user state between different systems can be implemented. | 2015-05-28 |
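The core of 20150149629 is a stored association between a user's identities in two service systems, consulted when one system queries the other's state. A minimal sketch; the class, identity formats, and state values are all illustrative assumptions.

```python
class StateQueryServer:
    """Maps a user's identity in service system A to their identity in
    service system B and answers state queries across the two."""
    def __init__(self):
        self.assoc = {}    # (system, user_id) -> user_id in the other system
        self.state = {}    # (system, user_id) -> activation state
    def associate(self, id_a, id_b):
        self.assoc[("A", id_a)] = id_b
        self.assoc[("B", id_b)] = id_a
    def set_state(self, system, user_id, value):
        self.state[(system, user_id)] = value
    def query_from_a(self, id_a):
        """System A asks for the associated user's state in system B."""
        id_b = self.assoc[("A", id_a)]
        return self.state.get(("B", id_b), "offline")

srv = StateQueryServer()
srv.associate("alice@A", "al1ce@B")      # save the cross-system association
srv.set_state("B", "al1ce@B", "online")  # activation state in system B
result = srv.query_from_a("alice@A")     # query arriving from system A
```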
20150149630 | EVENT MANAGEMENT IN A DISTRIBUTED PROCESSING SYSTEM - Methods, systems, and computer program products for event management in a distributed processing system are provided. Embodiments include receiving, by the incident analyzer, one or more events from one or more resources, each event identifying a location of the resource producing the event; identifying, by the incident analyzer, an action in dependence upon the one or more events and the location of the one or more resources producing the one or more events; identifying, by the incident analyzer, a location scope for the action in dependence upon the one or more events; and executing, by the incident analyzer, the identified action. | 2015-05-28 |
20150149631 | CUSTOMER-DIRECTED NETWORKING LIMITS IN DISTRIBUTED SYSTEMS - Methods and apparatus for supporting customer-directed networking limits in distributed systems are disclosed. A client request is received via a programmatic interface, indicating a particular lower resource usage limit to be imposed on at least one category of network traffic at a particular instance of a network-accessible service. Resource usage metrics for one or more categories of network traffic at the particular instance are obtained. In response to a determination that resource usage at the particular instance has reached a threshold level, one or more responsive actions are initiated. | 2015-05-28 |
20150149632 | MINIMIZING SERVICE RESTART BY OPTIMALLY RESIZING SERVICE POOLS - A method, computer program product, and system for optimizing service pools supporting resource sharing and enforcing SLAs, to minimize service restart. A computer processor determines a first resource to be idle, wherein a service instance continues to occupy the first resource that is idle. The processor adds the first resource to a resource pool, wherein the service instance continues to occupy the first resource as a global standby service instance on the first resource. The processor receives a request for a resource, wherein the request for the resource includes a global name associated with a service that corresponds to the global standby service instance, and the processor allocates, from the resource pool, the first resource having the global standby service instance, based on the request for the resource that includes the global name associated with the service corresponding to the global standby service instance. | 2015-05-28 |
20150149633 | Method and Apparatus for Identifying Application Instances Within a Machine-to-Machine Network Domain - In one aspect of the teachings herein, a Services Capability Layer, SCL, within a Machine-to-Machine, M2M, network generates unique identifiers, for use in identifying individual application instances within the M2M domain. According to such operation, an SCL receives or otherwise obtains an application identifier for an application instance registering at the SCL, and generates a globally unique identifier for the application instance using the application identifier or an alias corresponding to it. As an example, the SCL appends to the application identifier or alias its own identifier, which is unique to that SCL, along with a random value. The resultant identifier is guaranteed to be unique for the individual application instance and the SCL uses the resultant identifier for identifying the application instance to other entities within the M2M domain. | 2015-05-28 |
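The identifier construction in 20150149633 is concrete enough to sketch directly: the SCL appends its own unique identifier and a random value to the application identifier (or alias). The separator and random-suffix width below are assumptions.

```python
import secrets

def make_instance_id(app_id, scl_id):
    """Globally unique application-instance ID: the application
    identifier (or its alias), the SCL's own unique identifier, and a
    random suffix.  Format details are illustrative, not from the spec."""
    return f"{app_id}/{scl_id}/{secrets.token_hex(4)}"

# Two registrations of the same application at the same SCL still get
# distinct instance identifiers thanks to the random component.
id1 = make_instance_id("temp-sensor-app", "scl-eu-01")
id2 = make_instance_id("temp-sensor-app", "scl-eu-01")
```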
20150149634 | Cloud Delivery Platform - Concepts and technologies disclosed herein are directed to a cloud delivery platform. The cloud delivery platform can publish a cloud deployable offering. The cloud delivery platform can order, from a cloud orchestrator, one or more resources to be utilized by the cloud deployable offering. The cloud delivery platform can provision the cloud deployable offering on the resource(s). The cloud delivery platform can manage the cloud deployable offering to ensure that the cloud deployable offering meets a level of service. The cloud delivery platform can monitor one or more components of the cloud delivery platform to determine whether an event has occurred, and in response to determining that an event has occurred, the cloud delivery platform can broadcast the event. | 2015-05-28 |
20150149635 | METHOD AND SYSTEM FOR DISTRIBUTED LOAD BALANCING - Load balancing includes receiving, from a client, a connection request to establish a connection with a server; determining load balancing state information based at least in part on the connection request; synchronizing the determined load balancing state information across a plurality of service engines, including to invoke an atomic read-miss-create (RMC) function on a distributed data store service; and distributing the connection to a selected server among a plurality of servers according to a result of the RMC function. | 2015-05-28 |
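The atomic read-miss-create (RMC) primitive in 20150149635 is what lets many service engines agree on one backend for a connection: whichever engine asks first creates the mapping, and every later engine reads the same value. A single-process stand-in (a lock replaces the distributed store's atomicity; all names are invented):

```python
import threading

class DistributedStoreStub:
    """Single-process stand-in for a distributed data store offering an
    atomic read-miss-create (RMC) operation."""
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}
    def read_miss_create(self, key, create):
        """Atomically return the existing value for key, or create,
        store, and return a new one on a miss."""
        with self._lock:
            if key not in self._data:
                self._data[key] = create()
            return self._data[key]

store = DistributedStoreStub()
servers = ["s1", "s2", "s3"]
# Every service engine asking about the same client gets the same server.
pick = store.read_miss_create("client-42", lambda: servers[hash("client-42") % 3])
again = store.read_miss_create("client-42", lambda: "anything-else")
```

Because the second call hits, its `create` callable is never invoked — the state synchronizes without any engine-to-engine coordination.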
20150149636 | CROSS-PLATFORM WORKLOAD PROCESSING - According to one aspect of the present disclosure, a method and technique for workload processing is disclosed. The method includes: receiving a request to process a workload by a scheduler executing on a processor unit; accessing historical processing data by the scheduler to determine execution statistics associated with previous processing requests; determining whether the data of the workload is available for processing; in response to determining that the data is available for processing, determining whether a process for the workload is available; in response to determining that the process is available, determining resource availability on a computing platform for processing the workload; determining whether excess capacity is available on the computing platform based on the resource availability and the execution statistics; and in response to determining that excess capacity exists on the computing platform, initiating processing of the workload on the computing platform. | 2015-05-28 |
20150149637 | MINIMIZING SERVICE RESTART BY OPTIMALLY RESIZING SERVICE POOLS - A method, computer program product, and system for optimizing service pools supporting resource sharing and enforcing SLAs, to minimize service restart. A computer processor determines a first resource to be idle, wherein a service instance continues to occupy the first resource that is idle. The processor adds the first resource to a resource pool, wherein the service instance continues to occupy the first resource as a global standby service instance on the first resource. The processor receives a request for a resource, wherein the request for the resource includes a global name associated with a service that corresponds to the global standby service instance, and the processor allocates, from the resource pool, the first resource having the global standby service instance, based on the request for the resource that includes the global name associated with the service corresponding to the global standby service instance. | 2015-05-28 |
20150149638 | Resource Allocation - There is disclosed a resource allocation module configured to: allocate a first set of communication event resources for receiving communication event data at the computer device; allocate a second set of communication event resources for transmitting communication event data from the computer device; and reallocate resources from one of said sets to the other of said sets in dependence on an indication of the relative importance of the received communication event data compared to the transmitted communication event data. There is also provided a method and a computer program product. | 2015-05-28 |
20150149639 | BANDWIDTH ALLOCATION IN A NETWORKED ENVIRONMENT - Techniques for allocating bandwidth in a networked environment are described herein. A hub may include logic, at least partially comprising hardware logic. The logic is configured to receive data flow including packets at the hub. The logic is further configured to assign a weight to the packets based on the speed of the data flow, and allocate bandwidth of an upstream link based on the weight assigned to the packets. | 2015-05-28 |
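The hub logic in 20150149639 — weight packets by flow speed, then split upstream capacity by weight — reduces to a proportional-share calculation. In this sketch the weight is simply the flow speed itself, which is one possible reading of the abstract, not a detail it specifies.

```python
def allocate_bandwidth(flows, upstream_capacity):
    """Assign each flow a weight derived from its speed (here, the speed
    itself), then split the upstream link's capacity proportionally."""
    weights = {name: speed for name, speed in flows.items()}
    total = sum(weights.values())
    return {name: upstream_capacity * w / total for name, w in weights.items()}

# A fast video flow and a slow sensor flow share a 100 Mbps uplink.
alloc = allocate_bandwidth({"video": 30.0, "sensor": 10.0}, upstream_capacity=100.0)
```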
20150149640 | FAST PROVISIONING VIRTUALIZATION NETWORK SERVICE FOR CLOUD COMPUTING - A cloud-based system and method for provisioning IT infrastructure systems is disclosed. The system and method construct an infrastructure generally comprised of a processing component supplying the computational capacity for a platform element, comprising one or more processing elements, memory and I/O subsystems, a storage component utilizing commodity disk drives and comprised of one or more physical storage devices, and a network component providing a high speed connection among processing elements and the processing component to storage components. In addition, the system and method provide all features required for a complete, immediately usable infrastructure system including registration of IP addresses and domain names so that the user may have the system completely up and running without the aid of an administrator. | 2015-05-28 |
20150149641 | METHODS, SYSTEMS, AND COMPUTER PROGRAM PRODUCTS FOR MODIFYING BANDWIDTH AND/OR QUALITY OF SERVICE FOR A USER SESSION IN A NETWORK - Bandwidth and/or Quality of Service (QoS) for a user session may be modified in a network that includes a Regional/Access Network (RAN) that facilitates differentiated end-to-end data transport between a Network Service Provider (NSP) and/or an Application Service Provider (ASP) and a Customer Premises Network (CPN) that includes a Customer Premises Equipment (CPE) by receiving a request at the NSP and/or the ASP to change the bandwidth and/or QoS associated with the user's session. An Application Programming Interface (API) is used at the NSP and/or the ASP to communicate with the RAN to modify the bandwidth and/or QoS associated with the user's session. | 2015-05-28 |
20150149642 | DETERMINING COMPUTING-RELATED RESOURCES TO USE BASED ON CLIENT-SPECIFIED CONSTRAINTS - Techniques are described for facilitating a client's control over use of computing-related resources on the client's behalf. In some situations, a client's control is based on specifying a group of one or more resource usage constraints with a client resource constraint manager service, which provides information about the client-specified constraints to one or more other remote network services with which the client interacts. Those remote services then use that constraint information to determine whether and how to use computing-related resources on the client's behalf. For example, the resource usage constraints specified by a client may relate to one or more particular geographical areas and/or to one or more measures of relative proximity between computing-related resources (e.g., between multiple instances of a single type of computing-related resource provided by a single service, or between multiple distinct types of computing-related resources provided by multiple unaffiliated services). | 2015-05-28 |
20150149643 | DYNAMIC POLICY BASED DATA SESSION MIGRATION MECHANISM IN A COMMUNICATION NETWORK - Mitigating service interruptions within a mobile core network by dynamically managing communication sessions using a policy based network mechanism is presented herein. A method can include receiving policy information associated with redirection of an active communication session from a first device to a second device; receiving status information representing a characteristic of the active communication session; and in response to determining, based on the status information, that the characteristic satisfies a defined condition of the policy information, redirecting the active communication session from the first device to the second device. In an example, the method can further include redirecting the established communication session from the source device to the destination device in response to determining, based on the data session migration policy, that the established communication session is not associated with a dedicated bearer communication channel. | 2015-05-28 |
20150149644 | METHOD, STORAGE MEDIUM, AND APPARATUS FOR PERFORMING PEER TO PEER SERVICE BY USING CONTACTS INFORMATION - A method of performing a Peer to Peer (P2P) service with at least one second terminal by a first terminal is provided. The method includes transmitting contact information of the first terminal to the at least one second terminal; receiving contact information of the at least one second terminal from the at least one second terminal; receiving information on an application supporting the P2P service from the at least one second terminal; displaying identification information of the at least one second terminal and first identification information of the application by using the contact information of the at least one second terminal and the information on the application; and performing the P2P service with the at least one second terminal through the application. | 2015-05-28 |
20150149645 | Integrating Co-Browsing with Other Forms of Information Sharing - A co-browse service uses JavaScript to allow a web page shown in a user's browser to be viewed remotely. Updates to the web page are rendered into HTML and forwarded on the co-browse session. Aspects of the web page that should not be visible are specified in a list of masked elements which prevents the JavaScript from transmitting the content of those elements on the co-browse session. A person viewing the web page at the remote location can select objects to have those objects highlighted within the user's browser. Likewise the person viewing the web page may manipulate the objects by selecting objects and entering information into the objects. Updates to the web page are collected and aggregated such that only the most recent updates are forwarded on the co-browse session. Updates that don't affect the DOM, such as hover state, are also transmitted on the session. | 2015-05-28 |
20150149646 | DEVICE AND METHOD FOR MAINTAINING A COMMUNICATION SESSION DURING A NETWORK TRANSITION - Provided are a device and method for maintaining a communication session during a network transition. In one example, the method includes monitoring, by a client, a connection with a first network to determine whether a signal strength of the connection falls below a threshold value. The client establishes a connection with a second network if the signal strength of the connection with the first network falls below the threshold value. Establishing the connection with the second network includes obtaining an address and port assignment corresponding to the client from the second network. The client uses the obtained address and port assignment to maintain a communication session during the changeover from the first network to the second network. | 2015-05-28 |
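The monitor-and-switch behaviour of 20150149646 can be sketched as a single decision step. The client dictionary, the dBm-style threshold, and the address/port values obtained from the second network are all assumed for illustration.

```python
def maintain_session(client, threshold):
    """Fall back to the second network when the first network's signal
    drops below the threshold, keeping the same session alive by taking
    a fresh address/port assignment from the second network."""
    if client["signal_net1"] < threshold:
        # assumed values standing in for the second network's assignment
        client["address"], client["port"] = "10.0.0.7", 5060
        client["network"] = "net2"
    return client

client = {"signal_net1": -85, "network": "net1", "session": "call-1"}
client = maintain_session(client, threshold=-80)   # -85 dBm is below -80 dBm
```

The key property is that `client["session"]` is untouched: only the transport underneath it changes during the transition.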
20150149647 | Systems and Methods for Providing Context to SIP Messages - Systems, methods, and computer program products are provided for providing context through a scripting-type programming language to data included in a SIP message. The method includes defining one or more contexts through a scripting-type computer programming language. The one or more contexts reference a particular pre-defined portion of a SIP message and are provided by the scripting-type computer programming language. A series of SIP messages may then be received, where each SIP message in the series belongs to the same SIP message flow. After a particular SIP message in the series is received, the message is parsed to identify whether it includes any portion of data that can be referenced via one or more contexts. Any particular portion of data that can be referenced via a context is associated with a respective context such that the respective portion of data can be referenced by the context. | 2015-05-28 |
20150149648 | Systems and Methods for Processing SIP Message Flows - Systems, methods, and computer program products are provided for modifying Session Initiation Protocol (SIP) messages. The method includes providing a scripting-type computer programming language that includes contexts that reference pre-defined portions of data of a SIP message and variables that store data associated with a SIP message flow. An interface for configuring rules to be executed when processing SIP messages is provided. Each rule includes an action that describes a modification to be made to a particular SIP message. When a SIP message is received, it is parsed to determine at least a context of a portion of the message. The parsing includes associating the portion of the message with a particular context. It is then determined whether a rule should be applied to the data referenced by the contexts, and if so, the SIP message is modified based on the actions associated with the rule. | 2015-05-28 |
20150149649 | Varied Wi-Fi Service Levels - In one embodiment, a method includes receiving a request from a client computing device of a user to access a communication network; and identifying a particular tier for the client computing device from among a number of tiers of service based at least in part on social-graph information of the user. Each tier of service includes one or more session settings of the communication network. The method also includes configuring a session of the communication network for the client computing device based at least in part on one or more of the session settings of the identified tier of service; and establishing the configured session between the client computing device and the communication network. | 2015-05-28 |
20150149650 | METHOD AND DEVICE FOR POSITIONING SESSION INITIATION PROTOCOL DIALOG - The disclosure provides a Session Initiation Protocol (SIP) dialog positioning method and a SIP dialog positioning device. The method includes: a network element receives an in-dialog SIP message carrying index information, wherein the index information indicates a position of a dialog at a calling party or a called party; and the network element positions the dialog according to the index information. Because the index information indicating the position of the dialog at the calling party or the called party is carried by the in-dialog SIP message, and the dialog is positioned according to that index information, the implementation is simple, positioning is fast, and the performance of the network element is improved. | 2015-05-28 |
20150149651 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR PROTOCOL ADAPTATION - A method in a protocol adaptor system for enabling a newly connected device to communicate with a generic application in a communications network, where the generic application uses a generic protocol. The method includes detecting that the device is unsupported by a specific protocol and that the device uses a variant of the specific protocol in the system. The method further includes determining a new fragment required for communication adaptation of the specific protocol, and retrieving the new fragment needed for adaptation of the specific protocol to said variant. The method further includes installing the fragment in the specific protocol, to enable communication between the generic application and the newly connected device. | 2015-05-28 |
20150149652 | METHOD AND APPARATUS FOR NETWORK STREAMING - A method of data streaming in a streaming system | 2015-05-28 |
20150149653 | METHOD AND APPARATUS FOR DISTRIBUTING MEDIA CONTENT - A system that incorporates teachings of the present disclosure may include, for example, initializing a boundary estimate for an optimization of a linear programming model describing a network of media servers for servicing requests for media content items from subscriber devices, where the boundary estimate is an estimate of an infeasible solution of the linear programming model, and calculating iteratively, using an exponential potential function, additional boundary estimates for the linear programming model, wherein the calculating resolves to an improved boundary estimate that corresponds to placement of copies of the media content items at the media servers subject to a set of constraints on storage capacity of media servers and on bandwidth for communication links in the network. Other embodiments are disclosed. | 2015-05-28 |
20150149654 | Modular Analog Frontend - A system may include a first stage comprising first signaling components for a first protocol, and a second stage comprising second signaling components for the first protocol and a second protocol. The system may further include logic configured to receive an incoming data stream, and determine a stream protocol for the data stream. The logic may be further configured to, responsive to the determination, activate at least a portion of the first stage when the stream protocol is compliant with the first protocol, and deactivate the first stage when the stream protocol is compliant with the second protocol. | 2015-05-28 |
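The stage-activation logic described in this abstract could be sketched as follows. This is a minimal illustration, not the patented implementation; the class name, protocol labels, and boolean flags are assumptions made for the example.

```python
class AnalogFrontend:
    """Sketch of a two-stage frontend: the first stage supports only
    protocol "A", the second stage supports both "A" and "B"."""

    def __init__(self):
        self.first_stage_active = False
        self.second_stage_active = False

    def route_stream(self, stream_protocol):
        """Activate or deactivate stages based on the detected protocol."""
        self.second_stage_active = True  # second stage handles both protocols
        if stream_protocol == "A":
            self.first_stage_active = True   # first stage needed for protocol A
        elif stream_protocol == "B":
            self.first_stage_active = False  # first stage bypassed for protocol B
        else:
            raise ValueError(f"unsupported protocol: {stream_protocol}")
```

For a protocol-A stream both stages are active; for a protocol-B stream only the second stage remains active.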
20150149655 | PROGRESSIVE DOWNLOAD PLAYBACK - The present invention provides methods and systems for enabling content streaming on mobile devices. The methods and systems may include encoding a content stream; providing the encoded content stream to a splitter embodied in computer executable code, which splits the encoded content stream into at least two channels, with each channel having data of a characteristic chunk size; downloading at least one data chunk into a playback queue, wherein a download algorithm determines the at least one chunk to be downloaded; and providing the at least one downloaded chunk to a media player. | 2015-05-28 |
20150149656 | METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR DIAMETER ROUTING USING SOFTWARE DEFINED NETWORK (SDN) FUNCTIONALITY - Methods, systems, and computer readable media for routing a Diameter message are disclosed. According to one method, the method occurs at a Diameter routing node. The method includes receiving, from a Diameter routing controller (DRC) via a software defined network (SDN) related interface, Diameter routing information, wherein the Diameter routing information is determined using application layer information. The method also includes routing a Diameter message using the Diameter routing information. | 2015-05-28 |
20150149657 | Path Optimization for Adaptive Streaming - In one implementation, downloading of streaming content using a security as a service (SecaaS) system is more efficient because portions of the streaming content may not be inspected by the SecaaS. A first request to download content from a content provider is received, and a connection is initiated with a security provider, which inspects the first chunk of the content and generates a routing instruction based on the inspection of the first chunk of content. Based on the routing instructions and the inspection of the first chunk, a request for a second chunk of the streaming content is addressed to the content provider. The second chunk of the streaming content circumvents the SecaaS system. | 2015-05-28 |
20150149658 | SOFTWARE UPGRADE OF ROUTERS - According to an example, a router includes a control plane CPU, a data plane CPU, a first memory area, and a second memory area independent from the first memory area. When the router upgrades its software, the control plane CPU is reset and clears the first memory area. After being reset, the control plane CPU loads a new version control plane program into the first memory area and runs the new version control plane program in the first memory area. | 2015-05-28 |
20150149659 | SYSTEMS AND COMPUTER IMPLEMENTED METHODS FOR SEMANTIC DATA COMPRESSION - Computer implemented methods and systems directed to a technological improvement in electronic data compression and transmission between two computer systems using semantic analysis are disclosed. The method includes the step of compressing, at a first computer, a plurality of queued artifacts based on one or more network decision variables. The compression includes prioritizing the queued artifacts. The compression further includes determining a first set of artifacts in the set of queued artifacts to transmit in full and a second set of artifacts for which only links are to be sent. The compression further includes replacing unnecessary content in the set of queued artifacts with one or more identifiers. The method further includes the step of transmitting, from the first computer, one or more batches of the compressed data over a network to a second computer. | 2015-05-28 |
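The prioritize-then-split compression pass this abstract describes could look roughly like the sketch below. The artifact fields, the greedy bandwidth-budget rule, and the `ref://` link format are illustrative assumptions, not details from the application.

```python
def compress_queue(artifacts, bandwidth_budget):
    """Split queued artifacts into a full-transmission set and a links-only set.

    Each artifact is a dict with 'id', 'priority', 'size', and 'content'.
    Higher-priority artifacts are transmitted in full while the budget lasts;
    the rest are replaced by short link identifiers.
    """
    ordered = sorted(artifacts, key=lambda a: a["priority"], reverse=True)
    full, links = [], []
    used = 0
    for art in ordered:
        if used + art["size"] <= bandwidth_budget:
            # transmit the artifact itself (content referenced by identifier)
            full.append({"id": art["id"], "content": art["content"]})
            used += art["size"]
        else:
            # only a link is sent; the receiver can fetch content later
            links.append({"id": art["id"], "link": f"ref://{art['id']}"})
    return full, links
```

The actual patent bases the split on "network decision variables"; a fixed byte budget stands in for those here.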
20150149660 | SERVER AND IDENTIFIER SYNCHRONIZATION METHOD - A server and an identifier synchronization method are provided, and the server includes a network card, hardware peripherals and a basic input output system. The network card stores at least one identifier. The basic input output system starts operating to acquire the at least one identifier of the network card and write the at least one identifier into each hardware peripheral after the server is booted. | 2015-05-28 |
20150149661 | SHARING SINGLE ROOT IO VIRTUALIZATION PERIPHERAL COMPONENT INTERCONNECT EXPRESS DEVICES - Systems and methods for sharing a single root I/O virtualization (SR-IOV) device | 2015-05-28 |
20150149662 | I/O MODULE AND PROCESS CONTROL SYSTEM - An I/O module according to one embodiment of the present invention includes a receiver electrically connectable to a field device and configured to receive first information autonomously transmitted from the field device, a storage storing the first information received by the receiver, and a comparator configured to compare the first information stored in the storage and second information newly received by the receiver and to rewrite the first information stored in the storage with the second information newly received by the receiver when the first information stored in the storage is different from the second information newly received by the receiver. | 2015-05-28 |
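The compare-and-rewrite behavior of the comparator in this abstract could be sketched as below; the class name, the return-value convention, and the dict payloads are assumptions made for illustration.

```python
class IOModule:
    """Sketch of the I/O module: stores the first report from a field
    device and rewrites storage only when newly received data differs."""

    def __init__(self):
        self._stored = None  # storage for the most recent field-device report

    def receive(self, info):
        """Compare newly received info with stored info; rewrite on change.

        Returns True when storage was rewritten, False when unchanged.
        """
        if self._stored is None or self._stored != info:
            self._stored = info  # rewrite storage with the new information
            return True
        return False
```

A repeated, identical report leaves storage untouched, which is the point of the comparator: only genuine changes cost a write.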
20150149663 | System and Method for Providing Performance Sampling in a Computing System - A method performed by a computer system, the method including maintaining a plurality of work-based counters, each of the work-based counters being associated with a respective functional entity of a plurality of functional entities, in response to determining that a first one of the work-based counters has reached a threshold, sampling a performance data of a first functional entity associated with the first one of the work-based counters, and presenting the sampled performance data to an analysis tool separate from an operating system of the computer system. | 2015-05-28 |
20150149664 | AN ELECTRONIC DEVICE HAVING A PLURALITY OF CPUs AND A METHOD - An electronic device includes a first CPU, a second CPU, an auxiliary storage unit, and a controller. The auxiliary storage unit includes a first starting program for the first CPU and a second starting program for the second CPU. The first CPU loads the first starting program via the controller, and causes the controller to load the second starting program in DMA transfer. The controller, if the controller is caused by the first CPU to transfer part of the first starting program while the controller is loading the second starting program, stops loading the second starting program. When completing the transfer of the part of the first starting program, the controller restarts loading the second starting program. | 2015-05-28 |
20150149665 | POLLING METHOD OF COMMUNICATION SYSTEM - The present disclosure relates to a polling method of a communication system configured to reduce polling time by receiving responses from a plurality of auxiliary devices using a single response request signal sent by the main device during polling, the method including requesting, by a main device, transmission of response request signals from a plurality of auxiliary devices connected to the main device (request step) | 2015-05-28 |
20150149666 | Event System and Methods for Using Same - Event systems and methods are provided through which applications can manage input/output operations ("I/O") and inter-processor communications. An event system in conjunction with fast I/O is operable to discover, handle and distribute events. The system and method disclosed can be applied to combinations that include event-driven models and event-polling models. In some embodiments, I/O sources and application sources direct events and messages to the same destination queue. In some embodiments, the system and methods include configurable event distribution and event filtering mechanisms operable to effect and direct event distribution for multiple event types using multiple methods. In some embodiments, the system disclosed includes enhanced event handler APIs. Some embodiments include a multicast API operable to allow applications to perform multicasting in a single API call. In addition, various mechanisms of the disclosed event system can be combined with traditional operating systems. | 2015-05-28 |
20150149667 | INTERRUPT REDUCTION BY DYNAMIC APPLICATION BUFFERING - Systems and methods are disclosed for processing a queue associated with a request. An example system includes an input/output (I/O) interface that receives a request associated with a channel. The example system also includes an association module that determines whether a condition is satisfied. When the condition is determined to not be satisfied, the association module, after a hardware device completes processing the request, decrements an in-flight counter that represents a first amount of data in the channel. When the condition is determined to be satisfied, the association module, before the hardware device completes processing the request, decrements the in-flight counter. | 2015-05-28 |
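The early-versus-late decrement of the in-flight counter that this abstract describes could be sketched as follows. The `Channel` class, the callback-based completion, and the condition flag are illustrative assumptions.

```python
class Channel:
    """Sketch of the association module's in-flight accounting: the counter
    is decremented before or after hardware completion depending on a
    condition, which is what lets interrupts be coalesced."""

    def __init__(self):
        self.in_flight = 0  # amount of outstanding work in the channel

    def submit(self, request, condition_satisfied, complete):
        self.in_flight += 1
        if condition_satisfied:
            # decrement BEFORE the hardware completes the request
            self.in_flight -= 1
            complete(request)
        else:
            # decrement only AFTER the hardware completes the request
            complete(request)
            self.in_flight -= 1
```

A completion callback that samples `in_flight` can observe the difference: it sees 0 when the early decrement applied, and 1 when the decrement was deferred.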
20150149668 | COMBINATION COMPUTING DEVICE AND GAME CONTROLLER WITH FLEXIBLE BRIDGE SECTION - The present disclosure is generally directed to a combination computing device and game controller. The computing device provides a plurality of sides, in which each of the sides is disposed between an electronic display screen and a back of the computing device. The game controller provides a communication port interacting with the computing device, the communication port providing a communication link and a pair of confinement structures, the pair of confinement structures adjacent to and confining the computing device on at least two opposing sides of the plurality of sides of the computing device, and an input device attached to and in electronic communication with the communication port. The input device is a separate and distinct structure from the communication port, forming no structural portion of the communication port. | 2015-05-28 |
20150149669 | Dynamic Enhancement of Media Experience - The present disclosure relates to a method for enhancement of media experience that comprises transmitting, by a first computing device, a data stream stored in a first storage region of the first computing device to an output device connected to the first computing device, providing, by a trigger module, a trigger that is linked to the data stream, detecting, by a detection module, the trigger while the data stream is being transmitted, and providing, by a content module, additional data in response to detecting the trigger. Furthermore, a system for enhancement of media experience is described. | 2015-05-28 |
20150149670 | SYSTEM AND METHOD FOR CONTROLLING BUS-NETWORKED DEVICES VIA AN OPEN FIELD BUS - A system for controlling bus-networked devices includes a gateway including a memory unit and having an interface to an open field bus. A power supply unit supplies primary power for the gateway and bus subscribers. An auxiliary power supply unit supplies auxiliary power for the bus subscribers independent of bus functionality. A pluggable connection cable electrically connects the gateway to the bus subscribers and transmits the primary and the auxiliary power and control and/or status information between the gateway and the bus subscribers. An application bus networks the bus subscribers to each other and is operable by the connection cable. A bus controller writes a target bus configuration of the application bus and stores the target bus configuration in a non-volatile manner in the memory unit. The bus controller is also configured to overwrite the target bus configuration with a present, actual bus configuration. | 2015-05-28 |
20150149671 | SYSTEMS AND METHODS FOR BIASING A BUS - A bi-directional differential bus interface that includes a differential transmitter having a non-inverting terminal and an inverting terminal, a differential receiver having a non-inverting terminal and an inverting terminal, and a biasing circuit that is electrically coupled to the non-inverting terminal of the differential transmitter and the inverting terminal of the differential transmitter. The biasing circuit is configured to generate a voltage between the non-inverting terminal of the differential transmitter and the inverting terminal of the differential transmitter that is approximately 200 mV or more in response to assertion of a control signal received on a control input of the biasing circuit. | 2015-05-28 |
20150149672 | CLOCKLESS VIRTUAL GPIO - A virtual GPIO architecture for an integrated circuit is provided that both serializes virtual GPIO signals and deserializes virtual GPIO signals without the need for an external clock. | 2015-05-28 |
20150149673 | FENCE MANAGEMENT OVER MULTIPLE BUSSES - Embodiments of a bridge unit and system are disclosed that may allow for processing fence commands sent to multiple bridge units. Each bridge unit may process a respective portion of a plurality of transactions generated by a master unit. The master unit may be configured to send a fence command to each bridge unit, which may stall the processing of the command. Each bridge unit may be configured to determine whether all transactions included in its respective portion of the plurality of transactions have completed. Once each bridge unit has determined that all other bridge units have received the fence command and have completed their respective portions of the plurality of transactions received prior to the fence command, all bridge units may execute the fence command. | 2015-05-28 |
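The execute-when-all-ready rule in this abstract reduces to a simple predicate over the bridge units. The dict representation of a bridge's state is an assumption made for this sketch.

```python
def fence_may_execute(bridges):
    """A fence may execute only when every bridge unit has received the
    fence command AND drained all transactions issued before it.

    Each bridge is modeled as a dict with 'fence_received' (bool) and
    'pending_before_fence' (count of pre-fence transactions not yet done).
    """
    return all(
        b["fence_received"] and b["pending_before_fence"] == 0
        for b in bridges
    )
```

Until the last bridge drains its pre-fence transactions, the predicate stays false and every bridge keeps the fence stalled.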
20150149674 | EMBEDDED STORAGE DEVICE - An embedded storage device for use with a computer device is provided. The embedded storage device includes a microprocessor, a master storage unit, a slave storage unit, and a relay bus. The microprocessor provides a clock signal and creates data transmission link to the computer device. The master storage unit has a master clock pin, at least a master data pin, and a master control pin. The master control pin receives a command signal from the microprocessor. The slave storage unit has a slave clock pin and at least a slave data pin. The relay bus is coupled to the master storage unit and the slave storage unit to enable communication between the master storage unit and the slave storage unit, such that the command signal from the microprocessor is sent from the master storage unit to the slave storage unit via the relay bus. | 2015-05-28 |
20150149675 | MEMORY CONTROLLER, INFORMATION PROCESSING APPARATUS, AND METHOD OF CONTROLLING MEMORY CONTROLLER - A memory controller has a request holding unit holding a write request and a read request; a transmission unit transmitting any one of the write request and the read request to a memory through a transmission bus; a reception unit receiving read data corresponding to the read request through a reception bus; and a request arbitration unit performing: a first processing of transmitting the write request before the read request, when a first reception time is not later than a second reception time, and a second processing of transmitting the read request before the write request, when the first reception time is later than the second reception time. The first reception time is when reception of the read data is started when the write request is transmitted first, and the second reception time is when the reception of the read data is started when the read request is transmitted first. | 2015-05-28 |
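The two-way arbitration rule in this abstract (write-first unless sending the read first gets the read data back sooner) could be sketched as below; the function signature and return convention are assumptions for illustration.

```python
def arbitrate(write_req, read_req, first_reception_time, second_reception_time):
    """Order a pending write and read per the reception-time comparison.

    first_reception_time: when read-data reception would start if the
    write request is transmitted first.
    second_reception_time: when read-data reception would start if the
    read request is transmitted first.
    Returns the requests in transmission order.
    """
    if first_reception_time <= second_reception_time:
        # sending the write first does not delay the read data
        return [write_req, read_req]
    # sending the read first gets its data back sooner
    return [read_req, write_req]
```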
20150149676 | SYSTEM FOR FORMULATING TEMPORAL BASES FOR OPERATION OF PROCESSES FOR PROCESS COORDINATION - A novel approach to coordinate processes in a process environment includes establishing a coherent temporal and resource framework for operation of selected processes in order to formulate a basis for coordination. A key aspect of the present innovation includes the novel techniques for coordinating processes including transmission of electromagnetism and transmission of electromagnetic radiation in a process environment by effecting periodic interruptions, based upon the abovementioned coherent temporal and resource framework, while maintaining the required operational and safety procedures. | 2015-05-28 |
20150149677 | HOT PLUGGING SYSTEM AND METHOD - A hot plugging system and method are disclosed, in which a pluggable device transmits a request to a host before the pluggable device is pulled out, so that the host may select whether to preserve the device status. When preserving is selected, the host preserves the device status and then shuts down the power of the pluggable device; when preserving is not selected, the host shuts down the power of the pluggable device directly. After the pluggable device is inserted into the host again, it is either recovered or initialized according to whether a preserved device status exists, thereby improving hot-plug recovery. | 2015-05-28 |
20150149678 | APPARATUS OF HIGH SPEED INTERFACE SYSTEM AND HIGH SPEED INTERFACE SYSTEM - Disclosed are an apparatus (equalizer module or receiving apparatus) of a high speed interface system and a high speed interface system, in which the resistance value of a termination resistor in a circuit for high speed interface is adjusted to follow that of a termination resistor of a sink circuit unit, thereby implementing efficient equalization and high speed interface, and a command bus (CBUS) is not built in an equalizer integrated circuit (IC), so that it is possible to simplify the configuration of the high speed interface system and improve the performance and efficiency of the high speed interface system. | 2015-05-28 |
20150149679 | METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR MANAGING CONCURRENT CONNECTIONS BETWEEN WIRELESS DOCKEE DEVICES IN A WIRELESS DOCKING ENVIRONMENT - Method, apparatus, and computer program product embodiments enable a wireless docking center device to manage one or more wireless and/or wired peripheral devices on behalf of multiple wireless dockee devices. An example embodiment of the invention includes receiving, by a wireless docking center device, at least two request messages for peripheral functions from at least two wireless dockee devices, including a first request message for a peripheral function from a first wireless dockee device, and a second request message for a peripheral function from a second wireless dockee device; and allocating, by the wireless docking center device, the requested peripheral function to the first wireless dockee device, based on determining at least a characteristic of the first request message indicates that the first wireless dockee device is entitled to the peripheral function. | 2015-05-28 |
20150149680 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - An information processing apparatus having first and second buses includes: a read/write command unit transmitting a read command or a write command to the first bus; a read command unit receiving a read command from the second bus; a write command unit receiving a write command from the second bus; and a command unit transmitting the read command and the write command to the read/write command unit based on the read and write commands received by the read command unit and the write command unit. | 2015-05-28 |
20150149681 | METHODS FOR SHARING BANDWIDTH ACROSS A PACKETIZED BUS AND SYSTEMS THEREOF - A system, method, and computer readable medium for sharing bandwidth among executing application programs across a packetized bus for packets from multiple DMA channels includes receiving at a network traffic management device first and second network packets from respective first and second DMA channels. The received packets are segmented into respective one or more constituent CPU bus packets. The segmented constituent CPU bus packets are interleaved for transmission across a packetized CPU bus. | 2015-05-28 |
20150149682 | IN-VEHICLE SENSOR, IN-VEHICLE SENSOR SYSTEM, AND METHOD OF SETTING IDENTIFIERS OF IN-VEHICLE SENSORS IN IN-VEHICLE SENSOR SYSTEM - An in-vehicle sensor | 2015-05-28 |
20150149683 | PCI EXPRESS TRANSACTION DESCRIPTOR - A method and apparatus for enhancing/extending a serial point-to-point interconnect architecture, such as Peripheral Component Interconnect Express (PCIe) is herein described. Temporal and locality caching hints and prefetching hints are provided to improve system wide caching and prefetching. Message codes for atomic operations to arbitrate ownership between system devices/resources are included to allow efficient access/ownership of shared data. Loose transaction ordering provided for while maintaining corresponding transaction priority to memory locations to ensure data integrity and efficient memory access. Active power sub-states and setting thereof is included to allow for more efficient power management. And, caching of device local memory in a host address space, as well as caching of system memory in a device local memory address space is provided for to improve bandwidth and latency for memory accesses. | 2015-05-28 |
20150149684 | HANDLING TWO SES SIDEBANDS USING ONE SMBUS CONTROLLER ON A BACKPLANE CONTROLLER - The present disclosure relates to a computer-implemented method for handling two SES sidebands using one SMBUS controller. The method includes one or more of the following operations: (a) establishing communication between a backplane controller and a host computer through an HBA, (b) receiving control commands and control data from the host computer for monitoring and controlling at least one drive of a first and second group of drives, (c) determining the address and device number of the drive to which received control commands and control data are directed, (d) forwarding control commands and control data to a first or second SMBUS sideband handler based on the address received, (e) controlling the blinking of the LEDs of the drive by the first or second SMBUS sideband handler, (f) generating responses by the first or second SMBUS sideband handler, (g) receiving responses by the SMBUS controller, and (h) sending the responses back to the host computer within a predetermined time period. | 2015-05-28 |
20150149685 | PCI-E STANDARD SELECTION SETTING SYSTEM AND MICROSERVER - A peripheral component interface-express (PCI-E) standard selection setting system and microserver are disclosed, in which a selection controller selects an arrangement setting in storage elements to arrange the PCI-E control chip, whereby each of the second PCI-E standard ports is or is not arranged as an upstream PCI-E standard port, so that a single PCI-E standard control chip may arrange one of the multitude of PCI-E standard ports as an upstream PCI-E standard port, so that the upstream PCI-E standard port may have a data transmission with one of the multitude of system on chips (SOCs) connected with the PCI-E standard control chip. | 2015-05-28 |
20150149686 | ADAPTER CARD WITH A COMPUTER MODULE FORM FACTOR - A system includes a circuit board with a Peripheral Component Interconnect Express (“PCIe”) backplane. The backplane is configured to receive processing power from a computer module. An adapter card having a computer module form factor is coupled to the PCIe backplane instead of the computer module. The adapter card includes a switch that aggregates one or more PCIe lanes and a transceiver. A communication link couples the transceiver to a remote processor device, which provides processing power to the circuit board. | 2015-05-28 |
20150149687 | ORDERED MEMORY PAGES TRANSMISSION IN VIRTUAL MACHINE LIVE MIGRATION - Systems and methods for virtual machine live migration. An example method may comprise: identifying, by a first computer system executing a virtual machine undergoing live migration to a second computer system, a plurality of stable memory pages comprised by an execution state of the virtual machine, wherein the plurality of stable memory pages comprises memory pages that have not been modified within a defined period of time; transmitting the plurality of stable memory pages to the second computer system; determining that an amount of memory comprised by a plurality of unstable memory pages is below a threshold value, wherein the plurality of unstable memory pages comprises memory pages that have been modified within the defined period of time; and transmitting the plurality of unstable memory pages to the second computer system. | 2015-05-28 |
20150149688 | ELECTRONIC DEVICE AND METHOD OF MANAGING MEMORY OF ELECTRONIC DEVICE - A method of managing a memory by an electronic device is provided. The method includes configuring a swap data amount per unit time, identifying an actual use amount of swap data, and comparing the identified actual use amount of the swap data with the configured swap data amount per unit time. | 2015-05-28 |
20150149689 | SYSTEMS AND METHODS FOR REVISING PERMANENT ROM-BASED PROGRAMMING - An application program stored in a ROM includes a function lookup data structure in which functions called by the application program have identifiers and memory addresses at which the function is located and can be executed. Upon startup, the function lookup data structure is copied to a RAM as a revised lookup data structure and is compared to a revision lookup data structure also written to that RAM or elsewhere. If the revision lookup data structure contains replacement functions having the same function identifiers but new memory addresses, these new memory addresses are written over the existing addresses in the revised lookup data structure for those replacement functions. The application program refers to the revised lookup data structure to find and execute the functions; thus the original application program on the ROM can continue to be used with revised functions. | 2015-05-28 |
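The lookup-table patching scheme in this abstract maps directly onto a dictionary merge. The sketch below models the ROM and revision tables as dicts from function identifier to memory address; that representation is an assumption for illustration.

```python
def build_revised_lookup(rom_lookup, revision_lookup):
    """Copy the ROM function-lookup table into RAM, then overwrite the
    addresses of any functions that the revision table replaces.

    rom_lookup / revision_lookup: dict of function id -> memory address.
    Returns the revised lookup table the application would consult.
    """
    revised = dict(rom_lookup)  # step 1: copy the ROM table into RAM
    for func_id, new_addr in revision_lookup.items():
        if func_id in revised:
            revised[func_id] = new_addr  # step 2: patch replaced functions
    return revised
```

The application keeps calling functions by identifier; only the address it resolves to changes, so the unmodified ROM program picks up revised functions transparently.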
20150149690 | RECORDING DEVICE, ACCESS DEVICE, RECORDING SYSTEM, AND RECORDING METHOD - A recording device operates in accordance with an instruction from an access device. The recording device comprising a nonvolatile memory that stores data, a communication unit that receives an instruction issued by the access device, and a memory controller that controls the nonvolatile memory. When a recording instruction for recording data into the nonvolatile memory is received from the access device, the memory controller starts recording of data into the nonvolatile memory. When the memory controller receives from the access device a suspension instruction for suspending the recording of data, the memory controller stores suspension information into the nonvolatile memory, the suspension information indicating a suspended position as a position in a recording area of the nonvolatile memory at which the data is being recorded upon reception of the suspension instruction. | 2015-05-28 |
20150149691 | Directly Coupled Computing, Storage and Network Elements With Local Intelligence - An apparatus that collapses computing, storage and networking elements into a tightly coupled, deeply vertically integrated highly scalable system that additionally provides augmented intelligence within each of the computing, storage and networking elements. A method to collapse computing, storage and networking elements while augmenting each of their intelligence. A system consisting of one or more scalable apparatus that collapse computing, storage and networking elements with augmented intelligence. | 2015-05-28 |
20150149692 | EFFICIENT REUSE OF SEGMENTS IN NONOVERWRITE STORAGE SYSTEMS - A non-overwrite storage system, such as a log-structured file system, that includes a non-volatile storage having multiple storage segments, a volatile storage having an unsafe free segments list (UFSL), and a controller for managing storage resources of the non-volatile storage. The controller can be configured to copy page data from used segment(s) of the non-volatile storage, write the copied page data to free segment(s) of the non-volatile storage, index the UFSL with indications of the used segment(s), and thereafter prevent reuse of the used segment(s) while the indications of the used segment(s) remain indexed in the UFSL. In some implementations, the non-overwrite storage system may be associated with a flash storage system, and a flash controller can be configured to perform a flush track cache operation to clear the indications of the used segment(s) from the UFSL, to enable reuse of segment(s) that were previously indexed to the UFSL. | 2015-05-28 |
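The UFSL mechanism in this abstract could be sketched with two sets: segments that are free, and segments that are free but unsafe until a flush. The class name and integer segment ids are assumptions for the sketch.

```python
class SegmentManager:
    """Sketch of UFSL-based segment reuse control: a relocated-from
    segment stays blocked until the track cache is flushed."""

    def __init__(self, free_segments):
        self.free = set(free_segments)
        self.ufsl = set()  # unsafe free segments list (volatile storage)

    def relocate(self, used_seg, free_seg):
        """Copy page data out of used_seg into free_seg, then index
        used_seg in the UFSL so it cannot be reused yet."""
        assert free_seg in self.free
        self.free.discard(free_seg)   # free segment now holds the data
        self.ufsl.add(used_seg)       # block reuse of the old segment

    def flush_track_cache(self):
        """Flush clears the UFSL; the old segments become safely free."""
        self.free |= self.ufsl
        self.ufsl.clear()

    def allocate(self):
        """Hand out a free segment, never one still on the unsafe list."""
        candidates = self.free - self.ufsl
        if not candidates:
            return None
        seg = candidates.pop()
        self.free.discard(seg)
        return seg
```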
20150149693 | Targeted Copy of Data Relocation - In a nonvolatile memory array that has a binary cache formed of SLC blocks and a main memory formed of MLC blocks, corrupted data along an MLC word line is corrected and relocated, along with any other data along the MLC word line, to the binary cache, before it becomes uncorrectable. Subsequent reads of the relocated data are directed to the binary cache. | 2015-05-28 |
20150149694 | Adaptive Context Disbursement for Improved Performance in Non-Volatile Memory Systems - A controller circuit for a non-volatile memory of one or more memory circuits is described. The controller is connectable by a port with the memory circuits through a bus structure and can operate the memory circuits according to one or more threads. The controller includes a command processing section to issue high level commands for execution in the memory circuits and a memory circuit interface module to issue in sequence by the port to the memory circuits a series of instructions derived from the high level commands. A queue manager on the controller derives the series of instructions from the high level commands. When deriving a series of instructions from a set of high level data access commands, the queue manager can modify the timing for the issuance to the memory circuit interface module of memory circuit check status instructions based upon feedback from the memory circuit interface module and the state of earlier instructions in the series. | 2015-05-28 |
20150149695 | SYSTEM AND METHOD FOR COMPUTING MESSAGE DIGESTS - A data de-duplication approach leverages acceleration hardware in SSDs for performing the digest computations used in de-duplication operations on behalf of an attached host, thereby relieving the host from the computing burden of the digest computation in de-duplication (de-dupe) processing. De-dupe processing typically involves computation and comparison of message digests (MD) and/or hash functions. Such MD functions are often also employed for cryptographic operations such as encryption and authentication. Often, SSDs include onboard hardware accelerators for MD functions associated with security features of the SSDs. However, the hardware accelerators may also be invoked for computing a message digest result and returning the result to the host, effectively offloading the burden of MD computation from the host, similar to an external hardware accelerator, but without redirecting the data since the digest computation is performed on a data stream passing through the SSD for storage. | 2015-05-28 |
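As a rough analogy for the offload described above, the digest can be computed inline on the write path so no second pass over the data is needed; Python's `hashlib` stands in for the SSD's onboard accelerator, and the dict-backed store is invented for illustration:

```python
# Illustrative sketch: compute the message digest on the data stream as it
# is stored, so the caller gets the digest back without re-reading the data.
# hashlib is a stand-in for the SSD's hardware MD accelerator.

import hashlib

def write_with_digest(storage, key, data):
    """Store the data and return its SHA-256 digest, computed in-line."""
    storage[key] = data
    return hashlib.sha256(data).hexdigest()
```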
20150149696 | Auto Resume of Irregular Erase Stoppage of a Memory Sector - Disclosed herein are system, method and/or computer program product embodiments for automatically resuming an irregular erasure stoppage in a sector of a memory system. An embodiment includes storing information related to any completed sub-stage of a multi-stage erasure process and the corresponding memory sector address in a dedicated memory. After an irregular erasure stoppage occurs, an embodiment reads the information from the dedicated memory and resumes the erasure process of the memory sector from the last sub-stage completed. | 2015-05-28 |
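The resume flow above can be sketched as follows; the sub-stage names and the dict standing in for the dedicated memory are invented for illustration:

```python
# Sketch of auto-resume: each completed sub-stage of a multi-stage erase is
# recorded (keyed by sector) in dedicated memory, and after an irregular
# stoppage the erase restarts from the last completed sub-stage.

STAGES = ["pre_program", "erase_pulse", "verify"]  # assumed sub-stages

def run_erase(dedicated_mem, sector, fail_at=None):
    """Run remaining sub-stages, persisting progress after each one."""
    start = dedicated_mem.get(sector, -1) + 1   # resume past last completed
    for i in range(start, len(STAGES)):
        if STAGES[i] == fail_at:
            return False                        # irregular erasure stoppage
        dedicated_mem[sector] = i               # record completed sub-stage
    return True
```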
20150149697 | SYSTEM AND METHOD FOR SUPPORTING ATOMIC WRITES IN A FLASH TRANSLATION LAYER - A method of maintaining and updating a logical-to-physical (LtoP) table in a storage device including a processor, a volatile memory, and a non-volatile memory, the storage device being in communication with a host utilizing atomic writes, the method including receiving, by the processor, data for storing at a plurality of physical addresses in the non-volatile memory, the data being associated with a plurality of logical addresses of the host, storing, by the processor, the plurality of physical addresses in an atomic segment in the volatile memory, storing, by the processor, one or more of zones of the LtoP table in the non-volatile memory, the one or more zones of the LtoP table corresponding in size to the atomic segment, and updating the one or more zones of the LtoP table with the plurality of physical addresses in the atomic segment. | 2015-05-28 |
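One minimal reading of the atomic-segment mechanism above, with all names (`AtomicLtoP`, `stage`, `commit`) being assumptions rather than the patent's API:

```python
# Illustrative sketch: physical addresses for an atomic write are staged in
# an "atomic segment" and applied to the logical-to-physical (LtoP) table
# only when the whole write completes, so a crash mid-write never leaves a
# partially updated mapping.

class AtomicLtoP:
    def __init__(self):
        self.ltop = {}            # logical address -> physical address
        self.atomic_segment = {}  # staged mappings for an in-flight write

    def stage(self, logical, physical):
        self.atomic_segment[logical] = physical

    def commit(self):
        """Apply every staged mapping at once, then clear the segment."""
        self.ltop.update(self.atomic_segment)
        self.atomic_segment.clear()

    def abort(self):
        """Drop staged mappings; the LtoP table is untouched."""
        self.atomic_segment.clear()
```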
20150149698 | ELIMINATING OR REDUCING PROGRAMMING ERRORS WHEN PROGRAMMING FLASH MEMORY CELLS - Mis-programming of MSB data in flash memory is avoided by maintaining a copy of LSB page data that has been written to flash memory and using the copy rather than the LSB page data read out of the flash cells in conjunction with the MSB values to determine the proper reference voltage ranges to be programmed into the corresponding flash cells. Because the copy is free of errors, using the copy in conjunction with the MSB values to determine the proper reference voltage ranges for the flash cells ensures that mis-programming of the reference voltage ranges will not occur. | 2015-05-28 |
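The safeguard can be illustrated with a common (though here assumed) 2-bit-per-cell state mapping: the error-free LSB copy, not the LSB bits read back from the cells, is combined with the MSB page to pick target states:

```python
# Sketch of the safeguard: keep a RAM copy of the LSB page and combine it
# with the MSB page to choose programmed states, instead of re-reading LSB
# bits (which may contain errors) from the flash cells. The Gray-coded
# (lsb, msb) -> state mapping below is a common convention, assumed here.

STATE_MAP = {(1, 1): "erased", (0, 1): "A", (0, 0): "B", (1, 0): "C"}

def program_msb(lsb_copy, msb_page):
    """Use the error-free LSB copy, not cell readout, to choose states."""
    return [STATE_MAP[(l, m)] for l, m in zip(lsb_copy, msb_page)]
```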
20150149699 | Adaptive Erase of a Storage Device - The various implementations described herein include systems, methods and/or devices used to enable adaptive erasure in a storage device. The method includes performing a plurality of memory operations including read operations and respective erase operations on portions of one or more non-volatile memory devices specified by the read operations and respective erase operations, where the respective erase operations are performed using a first set of erase parameters that has been established as a current set of erase parameters prior to performing the respective erase operations. The method includes, in accordance with each erase operation of at least a subset of the respective erase operations, updating one or more erase statistics that correspond to performance of multiple erase operations. The method includes, in accordance with a comparison of the erase statistics with an erasure performance threshold, establishing a second set of erase parameters as the current set of erase parameters. | 2015-05-28 |
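One minimal reading of this feedback loop, with invented parameter names and mean erase latency standing in for the erase statistics:

```python
# Sketch of adaptive erase: operations run with a current parameter set,
# statistics accumulate, and when they cross a performance threshold a new
# parameter set is established. "voltage_step" is an invented parameter.

def adapt_erase_params(current, erase_times_ms, threshold_ms):
    """Return a new parameter set if mean erase time exceeds the threshold."""
    mean = sum(erase_times_ms) / len(erase_times_ms)
    if mean > threshold_ms:
        # e.g. raise the erase voltage step to finish in fewer pulses
        return {**current, "voltage_step": current["voltage_step"] + 1}
    return current
```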
20150149700 | DIMM Device Controller Supervisor - The various implementations described herein include systems, methods and/or devices used to enable performing supervisory functions for a dual in-line memory module (DIMM), at a controller in the DIMM. The method includes upon power-up, determining a power supply voltage provided to the DIMM. In accordance with a determination that power supply criteria are satisfied, the method includes: (1) performing one or more power-up operations, including initiating a usage counter, (2) monitoring a temperature of the DIMM, (3) monitoring the DIMM for occurrence of one or more of a set of predetermined trigger events, and (4) in response to detecting one of the set of predetermined trigger events, logging information corresponding to the detected predetermined event. | 2015-05-28 |
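The supervisory flow can be sketched as follows; the voltage window, event names, and class layout are illustrative assumptions, not the patent's specification:

```python
# Sketch of the DIMM supervisor: check the supply voltage on power-up, and
# only if it is in range initiate the usage counter and begin logging
# trigger events. The 1.2 V +/- 5% window is an assumed example.

class DimmSupervisor:
    VMIN, VMAX = 1.14, 1.26   # assumed acceptable supply range

    def __init__(self):
        self.usage_counter = None
        self.event_log = []

    def power_up(self, supply_voltage):
        if not (self.VMIN <= supply_voltage <= self.VMAX):
            return False              # power supply criteria not satisfied
        self.usage_counter = 0        # power-up operation: init usage counter
        return True

    def on_trigger(self, event, info):
        """Log information corresponding to a detected trigger event."""
        self.event_log.append((event, info))
```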
20150149701 | TIME ESTIMATING METHOD, MEMORY STORAGE DEVICE, AND MEMORY CONTROLLING CIRCUIT UNIT - A time estimating method, a memory storage device, and a memory controlling circuit unit are provided for a rewritable non-volatile memory module having memory cells. The method includes: writing first data into first memory cells of the memory cells; reading the first memory cells according to a reading voltage, so as to determine whether each of the first memory cells belongs to a first state or a second state; and calculating a quantity of the first memory cells belonging to the first state, and obtaining a time information of the rewritable non-volatile memory module according to the quantity. | 2015-05-28 |
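The counting step can be sketched roughly as follows; the threshold classification and the count-to-time table are invented stand-ins for whatever mapping the actual method uses:

```python
# Sketch: read cells against a reference voltage, count how many fall in the
# first state, and map that count to a time estimate via a lookup table.
# The retention_table entries are purely illustrative.

def estimate_time(cell_voltages, read_voltage, retention_table):
    """Count cells in the first state and look up a time estimate (hours)."""
    first_state = sum(1 for v in cell_voltages if v < read_voltage)
    # pick the smallest table entry whose cell count covers the measurement
    for count, hours in retention_table:
        if first_state <= count:
            return hours
    return retention_table[-1][1]
```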
20150149702 | METHOD FOR DATA MANAGEMENT AND MEMORY STORAGE DEVICE AND MEMORY CONTROL CIRCUIT UNIT - A method for data management and a memory storage device and a memory control circuit unit thereof. The method includes: configuring a NVRAM and a VRAM; storing first data which includes writing data from a host system in the NVRAM; storing second data read from a rewritable non-volatile memory module in the VRAM; when the memory storage device is re-powered on after power failure, reading the first data from the NVRAM, so as to write the writing data into the rewritable non-volatile memory module. | 2015-05-28 |
20150149703 | APPARATUSES FOR SECURING PROGRAM CODE STORED IN A NON-VOLATILE MEMORY - An embodiment of an apparatus for securing program code stored in a non-volatile memory is introduced. A non-volatile memory contains a first region and a second region. Two NVMMCS (non-volatile memory management controllers respectively coupled to the two regions. A programming command-and-address decoder is coupled to the NVMMCS. The programming command-and-address decoder instructs the first NVMMC to erase data from the first region when receiving a command to erase the first region via a programming interface, and instructs the second NVMMC to erase data from the second region when receiving a command to erase the second region via the programming interface. | 2015-05-28 |
20150149704 | Transaction Private Log Buffering for High Performance of Transaction Processing - For each data change occurring in a transaction created as part of a write operation initiated for one or more tables in a main-memory-based DBMS, a transaction log entry can be written to a private log buffer corresponding to the transaction. All transaction log entries in the private log buffer can be flushed to a global log buffer upon completion of the transaction to which the private log buffer corresponds. | 2015-05-28 |
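A minimal sketch of the private-buffer scheme, assuming (as the abstract implies but does not state) that the benefit is taking the global buffer's lock once per transaction instead of once per log entry; all names are illustrative:

```python
# Sketch of transaction-private log buffering: each transaction appends
# entries to its own buffer with no locking, and only commit takes the
# global buffer's lock, once, to flush everything.

import threading

class LogManager:
    def __init__(self):
        self.global_log = []
        self.lock = threading.Lock()

    def new_transaction(self):
        return []                    # the private log buffer

    def log(self, private_buf, entry):
        private_buf.append(entry)    # no contention on the global lock

    def commit(self, private_buf):
        with self.lock:              # one lock acquisition per transaction
            self.global_log.extend(private_buf)
        private_buf.clear()
```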
20150149705 | INFORMATION-PROCESSING SYSTEM - An information-processing system includes a first information-processing unit and a second information-processing unit. When the concept of wear leveling is applied to the distribution of workloads to the respective information-processing units, the lives of the nonvolatile memories of the first information-processing unit and the second information-processing unit end at almost exactly the same time. The system comprises a first counter that counts the number of times of writing in the first memory device and a second counter that counts the number of times of writing in the second memory device, and assignment of workloads to the first information-processing unit and the second information-processing unit is performed based on a replacement time of the first memory device, a replacement time of the second memory device, the output of the first counter, and the output of the second counter. | 2015-05-28 |
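The balancing rule implied by this abstract can be sketched as routing each workload to the unit whose memory has the larger fraction of write endurance remaining, so both memories wear out together; the endurance figures are illustrative:

```python
# Sketch of wear leveling across processing units: compare each unit's write
# counter against its endurance limit and assign the workload to whichever
# unit has the most remaining life fraction.

def pick_unit(write_counts, endurance_limits):
    """Return the index of the unit with the most remaining life fraction."""
    remaining = [1 - c / e for c, e in zip(write_counts, endurance_limits)]
    return remaining.index(max(remaining))
```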
20150149706 | SYSTEM AND METHOD FOR EFFICIENT FLASH TRANSLATION LAYER - A method of maintaining and updating a logical-to-physical (LtoP) table in a storage device including a processor, a volatile memory, and a non-volatile memory, the storage device being in communication with a host, the method including receiving, by the processor, data for storing at a physical address in the non-volatile memory, the data being associated with a logical address of the host, storing, by the processor, the physical address in a first LtoP zone of a plurality of LtoP zones of the LtoP table, the LtoP table being stored in the volatile memory, adding, by the processor, the first LtoP zone to a list of modified zones, and storing, by the processor, a second LtoP zone of the plurality of LtoP zones in the non-volatile memory when a size of the list of modified zones exceeds a threshold. | 2015-05-28 |
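A hypothetical sketch of this zone-based flush policy; the zone size, the oldest-first eviction order, and the dict standing in for non-volatile memory are all assumptions:

```python
# Sketch: updates mark their LtoP zone dirty, and once the dirty list
# exceeds a threshold one zone is written back to non-volatile memory.

ZONE_SIZE = 4  # logical addresses per LtoP zone (illustrative)

class ZonedLtoP:
    def __init__(self, threshold):
        self.ltop = {}            # volatile copy: logical -> physical
        self.modified_zones = []  # ordered list of dirty zone ids
        self.persisted = {}       # stand-in for zones in non-volatile memory
        self.threshold = threshold

    def update(self, logical, physical):
        self.ltop[logical] = physical
        zone = logical // ZONE_SIZE
        if zone not in self.modified_zones:
            self.modified_zones.append(zone)
        if len(self.modified_zones) > self.threshold:
            self.flush_one()

    def flush_one(self):
        zone = self.modified_zones.pop(0)   # oldest dirty zone
        self.persisted[zone] = {l: p for l, p in self.ltop.items()
                                if l // ZONE_SIZE == zone}
```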
20150149707 | MICROCONTROLLER WITH INTEGRATED INTERFACE ENABLING READING DATA RANDOMLY FROM SERIAL FLASH MEMORY - A microcontroller includes a microprocessor, a serial flash memory interface, and input/output (I/O) terminals for coupling the serial flash memory interface to external serial flash memory. The microprocessor is operable to generate instruction frames that trigger respective commands to read data from specified addresses in the external serial flash memory. The serial flash memory interface receives and processes the instruction frames, obtains the data contained in the specified addresses in the external serial flash memory regardless of whether the specified addresses are sequential or non-sequential, and provides the data for use by the microprocessor. | 2015-05-28 |
20150149708 | B-FILE ABSTRACTION FOR EFFICIENTLY ARCHIVING SELF-EXPIRING DATA - Systems and methods are provided for data processing and storage management. In an illustrative implementation, an exemplary computing environment comprises at least one data store, a data processing and storage management engine (B-File engine) and at least one instruction set to instruct the B-File engine to process and/or store data according to a selected data processing and storage management paradigm. In an illustrative operation, the illustrative B-File engine can generate a B-File comprising multiple buckets and store sample items in a random bucket according to a selected distribution. When the size of the B-File grows to reach a selected threshold (e.g., maximum available space), the B-File engine can shrink the B-File by discarding the largest bucket. Additionally, the B-File engine can append data to existing buckets and explicitly cluster data when erasing data such that data can be deleted together into the same flash block. | 2015-05-28 |
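The shrink rule can be sketched as follows; bucket selection is simplified here to an explicit argument rather than the patent's sampling distribution, and all names are illustrative:

```python
# Sketch of the B-File shrink rule: items land in buckets, and when total
# size exceeds the space bound the largest bucket is discarded wholesale,
# which matches flash-friendly bulk deletion of a whole block.

class BFile:
    def __init__(self, max_items):
        self.buckets = {}
        self.max_items = max_items

    def append(self, bucket_id, item):
        self.buckets.setdefault(bucket_id, []).append(item)
        if sum(len(b) for b in self.buckets.values()) > self.max_items:
            self.discard_largest()

    def discard_largest(self):
        largest = max(self.buckets, key=lambda k: len(self.buckets[k]))
        del self.buckets[largest]   # whole bucket erased together
```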
20150149709 | HYBRID STORAGE - Example control methods of hybrid storage are provided, which are applied to each HDD-type storage device and each SSD-type storage device in a storage system having one or more HDD-type storage devices and one or more SSD-type storage devices. Each HDD-type storage device in the storage system is connected to the SSD-type storage device. Each HDD-type storage device and each SSD-type storage device stores one or more data blocks respectively. Access information of each data block stored in a storage device is periodically acquired. A storage location of each data block in the storage system is adjusted according to the acquired access information of each data block. By using the technical solution of the present disclosure, the storage location of the data block is dynamically configured according to an access frequency so that advantages of different storage devices are fully utilized. | 2015-05-28 |
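The periodic adjustment described above can be sketched as a simple placement rule; the cutoff and the per-block access counts are illustrative stand-ins for the acquired access information:

```python
# Sketch of hybrid placement: blocks accessed at least as often as a cutoff
# migrate to SSD, the rest to HDD, so each device type serves the access
# pattern it handles best.

def place_blocks(access_counts, hot_cutoff):
    """Return {block_id: 'ssd' | 'hdd'} based on access frequency."""
    return {blk: ("ssd" if n >= hot_cutoff else "hdd")
            for blk, n in access_counts.items()}
```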
20150149710 | NONVOLATILE MEMORY DEVICE AND SUB-BLOCK MANAGING METHOD THEREOF - A nonvolatile memory device includes a memory block, a row decoder, a voltage generator and control logic. The memory block includes memory cells stacked in a direction intersecting a substrate, the memory block being divided into sub-blocks configured to be erased independently. The row decoder is configured to select the memory block by a sub-block unit. The voltage generator is configured to generate an erase word line voltage to be provided to a first word line of a selected sub-block of the sub-blocks and a cut-off voltage, higher than the erase word line voltage, to be provided to a second word line of the selected sub-block during an erase operation. The control logic is configured to control the row decoder and the voltage generator to perform an erase operation on the selected sub-block. | 2015-05-28 |
20150149711 | CACHE DEVICE AND MEMORY SYSTEM - A virtual memory management apparatus of an embodiment is embedded in a computing machine. | 2015-05-28 |
20150149712 | TRANSLATION LAYER IN A SOLID STATE STORAGE DEVICE - Solid state storage devices and methods for flash translation layers are disclosed. In one such translation layer, a sector indication is translated to a memory location by a parallel unit look-up table that is populated by memory device enumeration at initialization. Each table entry comprises the communication channel, chip enable, logical unit, and plane for each operating memory device found. When the sector indication is received, a modulo function operates on entries of the look-up table in order to determine the memory location associated with the sector indication. | 2015-05-28 |
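The translation step reduces to a modulo over the enumerated table; the table contents below are invented for illustration:

```python
# Sketch of the translation layer: device enumeration fills a table of
# (channel, chip-enable, logical unit, plane) tuples, and a modulo over the
# table length maps any sector number onto one of the discovered units.

def translate(sector, parallel_units):
    """Map a sector indication to (channel, ce, lun, plane) via modulo."""
    return parallel_units[sector % len(parallel_units)]
```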