41st week of 2019 patent application highlights part 39
Patent application number | Title | Published |
20190310871 | ZERO-LOSS WORKLOAD MOBILITY WITH SEGMENT ROUTING FOR VIRTUAL MACHINES - Techniques for zero-loss workload mobility with segment routing for virtual machines are presented. The techniques include receiving, by a virtual router, an electronic message destined for a first virtual machine running on a first physical machine and checking a first virtual machine state for the first virtual machine. In response to determining that the first virtual machine state is associated with a running state indicating the first physical machine, a segment routing header is inserted that includes an indication of the source virtual machine, the first physical machine, and the first virtual machine. In response to determining that the first virtual machine state is associated with a migration state, the virtual router inserts a segment routing header indicating the source virtual machine, an END.S for the first physical machine, the first virtual machine, and an END.SBUF for a second physical machine. The message is then routed based at least in part on the inserted segment routing header. | 2019-10-10 |
20190310872 | VIRTUAL MACHINE TO CONTAINER CONVERSION AND OPTIMIZATION - Technology for analyzing a target machine (e.g., virtual machine or physical machine) and converting the services of the target machine to one or more container images that can be run using operating system level virtualization. An example method may include: receiving, by a processing device, data of a virtual machine, the data indicating a configuration of the virtual machine and a set of processes executed by the virtual machine; identifying, by the processing device, computer code of a first process of the set of processes executed by the virtual machine; analyzing the computer code to detect a link between the first process and a second process of the set of processes; and building a container image in view of the data of the virtual machine and the identified link, wherein the container image comprises the computer code of the first process and computer code of the second process. | 2019-10-10 |
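The conversion flow in the abstract above — inventory the VM's processes, detect links between them, and build a container image from the linked group — can be sketched roughly as below. This is an illustrative toy, not the patented method: the "shared port" link heuristic and all field names are assumptions.

```python
# Illustrative sketch only: group a VM's processes into a container image
# spec by detecting links between processes. The shared-port heuristic and
# the dict field names here are invented for the example.

def detect_links(processes):
    """Return name pairs of processes that appear to communicate
    (approximated here as sharing a port)."""
    links = []
    for i, a in enumerate(processes):
        for b in processes[i + 1:]:
            if set(a.get("ports", [])) & set(b.get("ports", [])):
                links.append((a["name"], b["name"]))
    return links

def build_image_spec(vm_config, processes):
    """Build a container image description from VM data and detected links."""
    links = detect_links(processes)
    linked = sorted({name for pair in links for name in pair})
    return {
        "base": vm_config["os"],
        "processes": linked or [p["name"] for p in processes],
        "links": links,
    }
```

A real converter would inspect binaries, open files, and sockets rather than a static port list, but the grouping step has the same shape.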
20190310873 | ATTACHING STORAGE RESOURCES TO VIRTUAL MACHINE INSTANCES - A data processing system includes one or more computer systems, each executing at least one hypervisor. Host bus adapters on the computer system are connectable to storage resources in at least one storage area network. The at least one hypervisor provides virtual instances of the host bus adapters as virtual host bus adapters, and a world-wide unique port number and a logical unit number are used to access a storage volume. A globally unique identifier is used to identify the storage volume. The system includes a management server comprising a management instance for evaluating a possibility of attaching storage resources to virtual machine instances generated by the hypervisor. | 2019-10-10 |
20190310874 | DRIVER MANAGEMENT METHOD AND HOST - Embodiments of the present disclosure disclose a driver management method and a host. The method includes: allocating a first hardware device to a target virtual machine on the host; obtaining a target driver package of the first hardware device from N pre-stored driver packages, where the N driver packages are driver packages of N types of hardware devices, a type of the first hardware device is one of the N types of hardware devices, and N is a positive integer greater than or equal to 1; adding the target driver package into the target virtual machine to enable the target virtual machine to read the target driver package; and installing the target driver package, where a driver obtained by installing the target driver package is used by the target virtual machine to invoke the first hardware device in a hardware pass-through manner. | 2019-10-10 |
20190310875 | NONDISRUPTIVE UPDATES IN A NETWORKED COMPUTING ENVIRONMENT - As indicated above, aspects of the present invention provide an approach for facilitating nondisruptive virtual machine (VM) maintenance in a networked computing environment. In an embodiment, a request for an update to an active VM is received, and a copy of the active VM is taken to create a snapshot VM. An update is installed on the snapshot VM. While the snapshot VM is being updated, all changes made to the active VM are saved. Once the update is installed on the snapshot VM, the saved changes are applied to the snapshot VM. A switch is made over to the snapshot VM in real time so that the snapshot VM becomes the active VM. The process allows a user to work continuously with the software as a service (SaaS) VM without disruption. | 2019-10-10 |
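The snapshot/update/replay/switch flow above can be sketched in a few lines. This is a toy model under stated assumptions — VM state as a dict, live changes as a log of key writes — not the patented SaaS implementation.

```python
# Toy sketch of nondisruptive update: copy the active VM, install the update
# on the copy while work continues, replay saved changes, then switch over.

import copy

class ActiveVM:
    def __init__(self, state):
        self.state = dict(state)
        self.change_log = []

    def write(self, key, value):
        """A change made to the active VM while the update is in progress."""
        self.state[key] = value
        self.change_log.append((key, value))

def nondisruptive_update(vm, update, changes_during_update=()):
    snapshot = copy.deepcopy(vm.state)        # copy of the active VM
    for key, value in changes_during_update:  # work continues on the active VM
        vm.write(key, value)
    snapshot.update(update)                   # install update on the snapshot
    for key, value in vm.change_log:          # apply saved changes to snapshot
        snapshot[key] = value
    vm.state, vm.change_log = snapshot, []    # switch: snapshot becomes active
    return vm
```

The replay step is what lets the user keep working: changes made during the update are not lost when the switchover happens.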
20190310876 | COMBINED NETWORK AND PHYSICAL SECURITY APPLIANCE - The present disclosure describes a combined network and physical security appliance. The appliance may be wired to or communicate with automation systems, IoT devices, physical sensors, computing devices and servers on an internal or local network, and other computing devices on an external network. By combining network security and physical security into a single device, a combination security appliance may correlate physical sensor signals with packet inspection results, providing enhanced protection against network threats to physical security systems, and physical protection against network threats. | 2019-10-10 |
20190310877 | Managing Shared Resources in a Distributed Computing System - A distributed computing system includes several partitions that each contain a separate copy of shared resources that receive modifications via behaviors and transactions specified by a user. A transaction manager performs the requested behavior or transaction in parallel on each copy of the shared resources as indicated by a resource ID. This allows the distributed computing system to operate in parallel without competing for the same shared resource, avoiding deadlocks and race conditions. If a behavior or transaction fails while modifying a copy of a shared resource, the transaction manager prevents the behavior or transaction from modifying the remaining copies and preempts results from the failed behavior or transaction. The transaction manager reestablishes a consistent state across shared resources by rolling back the failed behavior or transaction, reverting each copy of the shared resources to its state prior to executing the behavior or transaction. | 2019-10-10 |
20190310878 | SERVICE PROCESSING METHOD AND APPARATUS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for transaction processing are provided. One of the methods includes: receiving a transaction request for a target transaction; loading a transaction template matching a transaction type of the target transaction; processing the target transaction according to the transaction template to obtain transaction information; and writing the transaction information into a blockchain. | 2019-10-10 |
20190310879 | SYSTEMS AND METHODS FOR AUTOMATICALLY MANAGING SCRIPTS FOR EXECUTION IN DISTRIBUTED COMPUTING ENVIRONMENTS - Aspects of the present disclosure involve automatically generating a script for, e.g., capturing configuration information associated with software services and related computing components accessible throughout a network (e.g., a cloud). The script may be executed to capture such data from the software deployed within the network. | 2019-10-10 |
20190310880 | MANAGED ORCHESTRATION OF VIRTUAL MACHINE INSTANCE MIGRATION - A virtual machine running on a source host is determined to be migrated away from the source host. The virtual machine is migrated away from the source host at least by a target host being selected for the virtual machine and a state of the virtual machine being copied from the source host to the target host while the virtual machine continues to run on the source host. The virtual machine is further migrated from the source host by a change to the state of the virtual machine running on the source host that resulted during the copying being propagated to the target host. The virtual machine is run on the target host such that the virtual machine running on the target host includes the change to the state. | 2019-10-10 |
20190310881 | MANAGED ORCHESTRATION OF VIRTUAL MACHINE INSTANCE MIGRATION - A migration of a virtual machine running on a source host is determined to be performed for the virtual machine from the source host to a target host. The virtual machine is migrated from the source host to the target host at least by copying a state of the virtual machine from the source host to the target host while the virtual machine is running on the source host. Packets are caused to be forwarded to the virtual machine running on the target host. | 2019-10-10 |
20190310882 | MULTIPLE APPLICATION INSTANCES IN OPERATING SYSTEMS THAT UTILIZE A SINGLE PROCESS FOR APPLICATION EXECUTION - Various examples for providing multiple instances of a client application in operating systems that limit execution of the client application to a single process are disclosed. A client device can include an operating system natively configured to generate a single process for an execution of the client application on the client device. A client application can be configured to, in the single process, generate sub-processes for execution of separate instances of the client application. The client application can include at least one user interface that permits creation of, termination of, or toggling between various instances of the client application. | 2019-10-10 |
20190310883 | METHOD AND SYSTEM FOR KERNEL ROUTINE CALLBACKS - Systems and methods are provided for kernel routine callbacks. Such methods may include hooking a pre-callback handler and a post-callback handler to a pre-existing operating system of a computing device. According to the pre-callback handler, a kernel routine request for a kernel routine to be performed in a kernel mode of the operating system is obtained, whether to allow the kernel routine to be performed is determined, and the kernel routine is caused to be performed in the kernel mode to generate kernel routine results. According to the post-callback handler, whether to allow the kernel routine results of the kernel routine to be returned is determined, and the kernel routine results are caused to be returned to an application that is executed in a non-kernel mode of the operating system. | 2019-10-10 |
20190310884 | BUSINESS OPERATION METHOD AND APPARATUS, AND CLOUD COMPUTING SYSTEM - Various embodiments provide an operation method and apparatus, and a cloud computing system. Under the method, an operation target can be received; and an operation task to be executed for implementing the operation target of the business can be determined based on the operation target and current running data. The operation target can indicate a target topology and/or target software of the business, and the current running data can include a current topology of the business and currently running software. If there are a plurality of operation tasks, dependencies between the operation tasks can be determined, and the operation tasks can be executed based on the dependencies between the operation tasks. The method implements automatic execution of a maintenance operation and greatly improves efficiency of the maintenance operation in cloud computing. | 2019-10-10 |
20190310885 | COMPUTING ON TRANSIENT RESOURCES - Aspects of the technology described herein can facilitate computing on transient resources. An exemplary computing device may use a task scheduler to access information of a computational task and instability information of a transient resource. Moreover, the task scheduler can schedule the computational task to use the transient resource based at least in part on the rate of data size reduction of the computational task. Further, a checkpointing scheduler in the exemplary computing device can determine a checkpointing plan for the computational task based at least in part on a recomputation cost associated with the instability information of the transient resource. Resultantly, the overall utilization rate of computing resources is improved by effectively utilizing transient resources. | 2019-10-10 |
20190310886 | Scheduling of Operations for Actor Instances - There is provided mechanisms for scheduling operation of instances of actors on a runtime environment during a time period. A method is performed by a scheduler. The method comprises obtaining a total amount of available resource units for each of the instances to use during the time period. The method comprises obtaining an estimated usage of resource units per instance for the time period. The method comprises scheduling operation of the instances during the time period such that the estimated usage of resource units per instance is within each respective total amount of available resource units. | 2019-10-10 |
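The budget check in the scheduling abstract above — grant each instance its estimated usage only within its available resource units — can be sketched minimally. This assumes a simple model (one budget and one estimate per instance, in abstract "resource units"); a real scheduler would be far richer.

```python
# Minimal sketch, under assumptions: each actor instance gets its estimated
# resource usage for the period when it fits the budget, otherwise it is
# throttled to the available amount.

def schedule_instances(budgets, estimates):
    """budgets/estimates: instance -> resource units for the time period."""
    plan = {}
    for instance, budget in budgets.items():
        plan[instance] = min(estimates.get(instance, 0), budget)
    return plan
```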
20190310887 | OPTIMIZING TIMEOUTS AND POLLING INTERVALS - An approach is provided for managing a timeout and polling interval of an operation of an application. A recommendation specifying the timeout and polling interval is selected. The timeout and polling interval are applied to a deployed image. Based on polling intervals, numbers of polls for operations, identifications of the operations, and environments of the operations, a minimum number of polls of the operation in an environment before a successful completion of the operation is determined and an old polling interval used between the polls of the operation is determined. If the minimum number of polls is greater than one, the polling interval specified in the recommendation is determined as the minimum number of polls multiplied by the old polling interval. If the minimum number of polls equals one, the polling interval specified in the recommendation is determined by decreasing the old polling interval by a configurable factor. | 2019-10-10 |
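The two recommendation rules stated above — multiply when more than one poll was needed, shrink when one sufficed — amount to a short function. The shrink factor below is a stand-in for the abstract's "configurable factor".

```python
# Hedged sketch of the polling-interval rules from the abstract; 0.9 is an
# invented default for the configurable shrink factor.

def recommend_polling_interval(min_polls, old_interval, shrink_factor=0.9):
    if min_polls > 1:
        # several polls were needed: spread them into one longer interval
        return min_polls * old_interval
    # a single poll sufficed: probe a slightly tighter interval
    return old_interval * shrink_factor
```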
20190310888 | Allocating Resources in Response to Estimated Completion Times for Requests - Methods and systems for processing a request sent to a virtual assistant management system and for allocating resources, e.g., computing resources, in response to such requests. One of the methods includes: receiving a request; determining a category of the request; determining actions associated with completing the request; using a machine learning model to estimate an amount of time to complete the request based on the category and the actions associated with completing the request, wherein the machine learning model was trained using features characterizing a previous request that was completed by an agent and the time it took the agent to complete the previous request; and rating an agent's performance based in part on a comparison of an amount of time it takes an agent to complete the request and the estimated amount of time to complete the request generated by the machine learning model. | 2019-10-10 |
20190310889 | MANAGING A VIRTUALIZED APPLICATION WORKSPACE ON A MANAGED COMPUTING DEVICE - Methods and systems for providing load balancing are provided. Example embodiments provide an Application Workspace System ("AWS") which enables users to access remote server-based applications using the same interface that they use to access local applications, without needing to know where the application is being accessed. In one embodiment, a load balancing message bus is provided that performs load balancing and resource discovery within the AWS. For example, the AWS may use a broadcast message-bus based load balancing to determine which servers to use to launch remote application access requests or to perform session management. | 2019-10-10 |
20190310890 | DYNAMIC MICRO-SERVICES RELATED JOB ASSIGNMENT - A device may receive a set of heartbeat messages. The set of heartbeat messages may be related to determining a respective priority of a set of computing nodes for processing a set of jobs. The device may identify a heartbeat message, of the set of heartbeat messages, associated with a lowest offset relative to offsets associated with other heartbeat messages of the set of heartbeat messages. The device may determine the respective priority of the set of computing nodes based on one or more factors related to the set of computing nodes or the set of heartbeat messages. The device may determine whether to perform a subset of the set of jobs based on the respective priority of the set of computing nodes. The device may perform a set of actions after determining whether to perform the subset of the set of jobs. | 2019-10-10 |
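The priority rule above — the node whose heartbeat carries the lowest offset takes the jobs — reduces to a one-line comparison. This is a toy model under the assumption that each node's latest heartbeat offset is available as a number; names are illustrative.

```python
# Toy sketch of "lowest heartbeat offset wins" job-assignment priority.

def pick_priority_node(offsets):
    """offsets: node -> heartbeat offset; the lowest offset gets priority."""
    return min(offsets, key=offsets.get)

def should_run_jobs(node, offsets):
    """A node performs the subset of jobs only if it holds priority."""
    return pick_priority_node(offsets) == node
```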
20190310891 | Methods and Systems for Automated Monitoring and Control of Adherence Parameters - Exemplary embodiments relate to systems for building a model of changes to data items when information the data items is limited or not directly observed. Exemplary embodiments allow properties of the data items to be inferred using a single data structure and creates a highly granular log of changes to the data item. Using this data structure, the time-varying nature of changes to the data item can be determined. The data structure may be used to identify characteristics associated with a regularly-performed action, to examine how adherence to the action affects a system, and to identify outcomes of non-adherence. Fungible data items may be mapped to a remediable condition or remedy class. This may be accomplished by automatically deriving conditions and remedial information from available information, matching the conditions to remedial classes or types via a customizable mapping, and then calculating adherence for the condition on the available information. | 2019-10-10 |
20190310892 | Determination of Workload Distribution across Processors in a Memory System - A memory system having a set of media, a set of resources, and a controller configured via firmware to use the set of resources in processing requests from a host system to store data in the media or retrieve data from the media. The memory system has a workload manager that analyzes activity records in an execution log for a time period where each of the activity records can indicate whether a processor of the controller is in an idle state during a time slot in the time period. The workload manager identifies idle time slots within the time period during which time slots one or more lightly-loaded processors in the plurality of processors are in the idle state, and adjusts a configuration of the controller to direct tasks from one or more heavily-loaded processors to the one or more lightly-loaded processors. | 2019-10-10 |
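The idle-slot analysis above can be sketched under simplifying assumptions: the execution log is reduced to, per processor, a list of booleans marking idle time slots, and the "lightly loaded" threshold below is invented for the example.

```python
# Sketch under assumptions: classify processors by idle-slot ratio, then map
# each heavily-loaded processor to the lightly-loaded targets it may use.

def lightly_loaded(exec_log, idle_ratio=0.5):
    """Processors idle in more than idle_ratio of their time slots."""
    return {p for p, slots in exec_log.items()
            if sum(slots) / len(slots) > idle_ratio}

def redirect_plan(exec_log, idle_ratio=0.5):
    """Map each heavily-loaded processor to candidate offload targets."""
    light = lightly_loaded(exec_log, idle_ratio)
    return {p: sorted(light) for p in exec_log if p not in light}
```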
20190310893 | WORKLOAD MANAGEMENT WITH DATA ACCESS AWARENESS IN A COMPUTING CLUSTER - Embodiments for workload management with data access awareness in a computing cluster. In response to receiving an input workload for scheduling by a workload manager, a set of inputs is retrieved from a storage system by a data requirements evaluator module. The data requirements evaluator module generates a list of cluster hosts ranked for performing the input workload according to data access considerations and provides the ranked list of cluster hosts to a scheduler module. The scheduler module generates a scheduling of the input workload to certain hosts within the computing cluster where the generated scheduling is optimized with the data access considerations. | 2019-10-10 |
20190310894 | WORKLOAD MANAGEMENT WITH DATA ACCESS AWARENESS USING AN ORDERED LIST OF HOSTS IN A COMPUTING CLUSTER - Embodiments for workload management with data access awareness by ordering hosts for scheduling workloads in a computing cluster. In response to receiving an input workload for scheduling by a workload manager, a set of inputs is retrieved from a storage system by a data requirements evaluator module. The data requirements evaluator module generates a list of cluster hosts ranked for performing the input workload according to data access considerations. | 2019-10-10 |
20190310895 | WORKLOAD MANAGEMENT WITH DATA ACCESS AWARENESS BY AGGREGATING FILE LOCALITY INFORMATION IN A COMPUTING CLUSTER - Embodiments for workload management by aggregating locality information for a set of files in a cluster of hosts, from a file level to a level of the set of files in a cluster of hosts. To facilitate workload scheduling in the cluster, a subset of the set of files is selected. A set of storage size counters, each assigned to a host in the cluster, is reset. An overall storage size counter is reset, and the files in the subset of the set of files are scanned. For each scanned file, locality information of the file is retrieved and added to the storage size counters of the hosts, and a total size of the file is added to the overall storage size counter. An output proportion of the storage size counter of each host is then computed from the overall storage size counter. | 2019-10-10 |
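The counter scheme above — per-host storage size counters, an overall counter, then per-host proportions — is directly expressible in code. The file and locality structures below are assumptions for illustration, not taken from the patent.

```python
# Minimal sketch of the locality aggregation: scan files, add each file's
# size to the counters of the hosts that store it and to the overall
# counter, then output each host's proportion.

from collections import defaultdict

def aggregate_locality(files, locality):
    """files: iterable of (name, size); locality: name -> hosts storing it.
    Returns each host's share of the total scanned bytes."""
    host_bytes = defaultdict(int)   # per-host storage size counters
    total = 0                       # overall storage size counter
    for name, size in files:
        total += size
        for host in locality.get(name, ()):
            host_bytes[host] += size
    return {h: b / total for h, b in host_bytes.items()} if total else {}
```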
20190310896 | THERMAL AND POWER MEMORY ACTIONS - Embodiments of the present disclosure relate to managing volatile and non-volatile memory. A set of volatile memory sensor data may be obtained. A set of non-volatile memory sensor data may be obtained. The set of volatile memory sensor data and the set of non-volatile memory sensor data may be analyzed. A memory condition may be determined to exist based on the analysis. In response to determining that the memory condition exists, one or more memory actions may be issued. | 2019-10-10 |
20190310897 | MEASURING UTILIZATION OF RESOURCES IN DATACENTERS - For measuring component utilization in a computing system, a server energy utilization reading of a statistically significant number of servers out of the total number of servers located in the datacenter is obtained by measuring, at predetermined intervals, the collective energy consumed by all processing components within each server. The collective energy is measured by virtual probing, thereby monitoring the energy consumption of each individual processing component to collect an individual energy utilization reading, where the individual energy utilization reading is aggregated over a predetermined time period to collect an energy consumption pattern associated with the server utilization reading. | 2019-10-10 |
20190310898 | SYSTEMS AND METHODS FOR IMPLEMENTING AN INTELLIGENT APPLICATION PROGRAM INTERFACE FOR AN INTELLIGENT OPTIMIZATION PLATFORM - Systems and methods for implementing an application programming interface (API) that controls operations of a machine learning tuning service for tuning a machine learning model for improved accuracy and computational performance include an API that is in control communication with the tuning service and that: executes a first API call function that includes an optimization work request that sets tuning parameters for tuning hyperparameters of a machine learning model, and initializes an operation of distinct tuning worker instances of the service that each execute distinct tuning tasks for tuning the hyperparameters; executes a second API call function that identifies raw values for the hyperparameters, and generates suggestions comprising proposed hyperparameter values selected from the plurality of raw values for each of the hyperparameters; and executes a third API call function that returns performance metrics relating to a real-world performance of the subscriber machine learning model executed with the proposed hyperparameter values. | 2019-10-10 |
20190310899 | DYNAMIC ROUTING OF EVENTS TO DESTINATIONS - A method for dynamically routing of events to destinations based on mapping metadata is described. The method includes detecting, by a capture service of an application server, an event associated with values for one or more attributes that describe the event; mapping, by a metadata service of the application server, the event to a set of destinations based on the mapping metadata received by the application server at runtime, the values of the one or more attributes of the event, and permissions associated with a tenant; and storing, by a buffer of the application server, the event and the set of destinations. | 2019-10-10 |
20190310900 | SYSTEM, METHOD, AND COMPUTER-READABLE MEDIUM FOR ALLOCATING DIGITAL DATA PROCESSING SYSTEM RESOURCES - A computer-implemented method for allocating resources includes identifying event conditions matched by events identified based on a monitoring of events associated with an account. Modifiers associated with the matched event conditions are retrieved. An account modifier is computed based on the modifiers. A selected quantity of resources is determined by modifying a base resource level based on the account modifier and allocated for use in association with the account. The allocating includes initiating a blockchain transaction to update a distributed ledger to indicate the quantity of resources in association with an address associated with the account. The events are associated with consumption of computing resources. The modifiers associated with particular event conditions are updateable based on a comparison of computing resources consumed by events matching the particular event conditions with a resource allocation for those events. Related systems and computer-readable media are also disclosed. | 2019-10-10 |
20190310901 | IN-LINE EVENT HANDLERS ACROSS DOMAINS - Event handler records, for different event handlers in different domains, are stored in an event handler orchestrator service. The event handler records identify event handlers (in various domains) that are to handle events raised in separate domains. When an event is raised, the event handler records are filtered to identify an event handler that has indicated an interest in the raised event, and an end point corresponding to the identified event handler is provided back to the calling process. The calling process then invokes the event handler for which the end point is returned. | 2019-10-10 |
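The register/filter/resolve flow above can be modeled with a tiny registry. This is a toy sketch: the record fields, matching rule, and endpoint strings are assumptions, not the patented orchestrator service.

```python
# Toy orchestrator sketch: handlers in various domains register interest in
# events; raising an event filters the records and returns an endpoint for
# the calling process to invoke.

class HandlerOrchestrator:
    def __init__(self):
        self.records = []    # each record: (handler_domain, event, endpoint)

    def register(self, handler_domain, event, endpoint):
        self.records.append((handler_domain, event, endpoint))

    def resolve(self, event):
        """Filter records for a handler interested in the raised event and
        return its endpoint, or None if nobody registered interest."""
        for domain, ev, endpoint in self.records:
            if ev == event:
                return endpoint
        return None
```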
20190310902 | METHOD AND DEVICE FOR ALERTING THAT AN EVENT HAS OCCURRED - A method is proposed for alerting that an event has occurred. The method comprises: receiving a user request; interpreting the user request using a semantic engine and determining a request to subscribe to an event contained in the request; determining an event server on the basis of the event; sending to the event server a request to subscribe to the event; receiving a first message associated with an occurrence of the event; sending a second message informing of the occurrence of the event. | 2019-10-10 |
20190310903 | Selective Application Instance Activation - In one embodiment, a computer system stores entries for one or more instances of an application with keys generated for the instances in storage. The instances of the application are instantiated on the computer system. The computer system receives a request from the application with a current key for a current instance and parses the storage to determine if the current key is stored in the keys associated with the application. The computer system returns a response to the application with an indication whether the current key is stored as an entry in the one or more entries. The application uses the response to determine a redirection action to one of the one or more instances of the application when the current key is associated with an instance other than the current instance. | 2019-10-10 |
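The key lookup and redirection decision above can be sketched compactly. The storage layout here (a plain dict of key to owning instance) is an assumption made for the example.

```python
# Simplified sketch: store keys per application instance, check whether a
# request's key is known, and decide whether to redirect to the owner.

class InstanceRegistry:
    def __init__(self):
        self.keys = {}                     # key -> owning instance id

    def store(self, key, instance_id):
        self.keys[key] = instance_id

    def check(self, current_key, current_instance):
        """Return (key_is_stored, redirect_target); redirect_target is the
        owning instance when the key belongs to a different instance."""
        owner = self.keys.get(current_key)
        redirect = owner if owner is not None and owner != current_instance else None
        return current_key in self.keys, redirect
```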
20190310904 | DEVICE DETECTION METHOD AND SYSTEM, ELECTRONIC DEVICE, CLOUD ROBOT SYSTEM, AND COMPUTER PROGRAM PRODUCT - The present application provides a device detection method and system, an electronic device, and a cloud robot system. The method includes: in a first operating system, when a device detection instruction sent by a device detection program is detected, determining a driving program operation instruction corresponding to the device detection instruction, and transmitting the driving program operation instruction to a second operating system; in the second operating system, operating a corresponding driving program according to the driving program operation instruction, and feeding back an operation result to the first operating system; and in the first operating system, returning the operation result to the device detection program. | 2019-10-10 |
20190310905 | MEMORY SYSTEMS AND OPERATING METHODS OF MEMORY SYSTEMS - A memory system includes a processor that includes cores and a memory controller, and a first semiconductor memory module that communicates with the memory controller. The cores receive a call to perform a first exception handling in response to detection of a first error when the memory controller reads first data from the first semiconductor memory module. A first monarchy core of the cores performs the first exception handling and the remaining cores of the cores return to remaining operations previously performed. | 2019-10-10 |
20190310906 | SYSTEMS AND METHODS FOR REAL TIME COMPUTER FAULT EVALUATION - A method of evaluating real-time computer faults using a fault evaluation (FE) platform is provided. The method includes ingesting log data associated with a computer system, the log data including a plurality of fault events and a fault severity identifier associated with at least one fault event of the plurality of fault events. The method also includes selecting, from the plurality of fault events, a fault event set which corresponds to a time window and includes the at least one fault event. The method further includes generating a fault score for the at least one fault event and an aggregate fault score. The method also includes determining that the aggregate fault score exceeds a predefined threshold, and providing, to a configuration management platform, instructions to initiate a hardware component remediation process. | 2019-10-10 |
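The score/aggregate/threshold flow above reduces to a small amount of code. The severity weights and threshold below are invented to show the flow, not taken from the patent.

```python
# Sketch only: score fault events by severity, aggregate those inside a
# time window, and compare against a remediation threshold.

SEVERITY_SCORES = {"info": 1, "warning": 3, "critical": 10}  # assumed weights

def aggregate_fault_score(events, start, end):
    """Sum per-event scores for fault events inside the time window."""
    return sum(SEVERITY_SCORES.get(e["severity"], 0)
               for e in events if start <= e["time"] <= end)

def needs_remediation(events, start, end, threshold):
    """True when the windowed aggregate exceeds the predefined threshold."""
    return aggregate_fault_score(events, start, end) > threshold
```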
20190310907 | METHOD AND DEVICE FOR ERROR HANDLING IN A COMMUNICATION BETWEEN DISTRIBUTED SOFTWARE COMPONENTS - For error handling of data communications, in a transmission interval, between first and second tasks for which first and second time intervals are respectively predefined, (1) execution of the first task is omitted in a pending instance of the second time interval responsive to where the transmission interval immediately prior to the pending instance of the second time interval began in, and continued past an end point of, a most recent instance of the first time interval, which was during an immediately preceding instance of the second time interval; or (2) execution of the second task is omitted in the pending instance of the second time interval responsive to where a most recent prior execution of the second task began in, and continued past an end point in time of, a most recent instance of the second time interval immediately prior to the pending instance of the second time interval. | 2019-10-10 |
20190310908 | METHOD FOR CONTROLLING CORRECTABLE ERROR REPORTING FUNCTION FOR SERVER DEVICE - A method for controlling a correctable error reporting function and applicable to a server device is provided, including: receiving, by control unit, a plurality of first error messages sent by a first hardware component in which a plurality of correctable errors occurs in a plurality of hardware components; determining, by the control unit, according to the first error messages, error types of the errors occurring in the first hardware component; determining, by the control unit, whether the number of occurrences of the errors of the error types that occur in the first hardware component within first preset duration reaches a preset number of times; and if the determining result is yes, controlling, by the control unit, the first hardware component to stop performing an error reporting function corresponding to the first error type. | 2019-10-10 |
20190310909 | METHOD AND DEVICE FOR ERROR HANDLING IN A COMMUNICATION BETWEEN DISTRIBUTED SOFTWARE COMPONENTS - For error handling of data communications between first and second tasks in a data transmission interval, where first time intervals and second time intervals are predefined for the first and second tasks, respectively, the data transmission interval is omitted in one of the second time intervals when (1) execution of the first task immediately prior to the current second time interval, which began in a first time interval, during an immediately preceding second interval, continues past an end point of that first time interval, and an execution of the second task of the current second interval has begun, or (2) execution of the second task, which began in an immediately preceding one of the second intervals, continued past an end point of the preceding second interval and an execution of the first task of the current second interval has already begun. | 2019-10-10 |
20190310910 | MEMORY SYSTEM AND OPERATING METHOD OF THE MEMORY SYSTEM - A memory system comprises: a memory cell array suitable for storing first data and a first parity, which is used to correct an error of the first data; and an error correcting circuit suitable for generating second data and a second parity, which includes bits obtained by correcting an error of the first parity and a bit obtained by correcting an error of a second sub-parity; wherein the error correcting circuit includes: a single error correction and double error detection (SECDED) parity generator suitable for generating a second pre-parity, which includes a first sub-parity and the second sub-parity; a syndrome decoder suitable for generating a first parity error flag and a first data error flag by decoding a syndrome; a SEC parity corrector suitable for correcting an error of the first parity based on the first parity error flag; a DED parity error detector suitable for generating a second sub-parity error flag based on error information of the first data used to generate the second sub-parity; and a DED parity corrector suitable for correcting any error of the second sub-parity based on the second sub-parity error flag. | 2019-10-10 |
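The single-error-correct / double-error-detect behavior at the heart of the abstract above can be illustrated with a minimal SECDED(8,4) extended Hamming code. This is the textbook construction, not the patent's circuit; the bit layout (parity at positions 1, 2, 4 plus an overall parity bit at index 0) is a standard convention chosen for illustration.

```python
def hamming_encode(d):
    """Encode 4 data bits into an 8-bit extended Hamming (SECDED) codeword."""
    code = [0] * 8  # index 0 = overall parity, 1..7 = Hamming(7,4) positions
    code[3], code[5], code[6], code[7] = d
    code[1] = code[3] ^ code[5] ^ code[7]  # covers positions 1,3,5,7
    code[2] = code[3] ^ code[6] ^ code[7]  # covers positions 2,3,6,7
    code[4] = code[5] ^ code[6] ^ code[7]  # covers positions 4,5,6,7
    code[0] = code[1] ^ code[2] ^ code[3] ^ code[4] ^ code[5] ^ code[6] ^ code[7]
    return code

def secded_decode(code):
    """Return (data, status); corrects single-bit errors, flags double errors."""
    syndrome = 0
    for i in range(1, 8):
        if code[i]:
            syndrome ^= i
    overall = 0
    for bit in code:
        overall ^= bit
    if syndrome == 0 and overall == 0:
        status = "ok"
    elif overall == 1:
        # Single-bit error at `syndrome` (0 means the overall parity bit itself).
        code[syndrome] ^= 1
        status = "corrected"
    else:
        status = "double-error"  # nonzero syndrome but even overall parity
    return [code[3], code[5], code[6], code[7]], status
```

Flipping one codeword bit is corrected; flipping two is detected but not corrected, which is exactly the SEC/DED split the abstract's parity generator, syndrome decoder, and correctors implement in hardware.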
20190310911 | TECHNOLOGIES FOR PROVIDING ECC PRE-PROVISIONING AND HANDLING FOR CROSS-POINT MEMORY AND COMPUTE OPERATIONS - Technologies for provisioning error-corrected data for use in in-memory compute operations include a memory that includes a memory media having multiple memory partitions and media access circuitry coupled to the memory media. The media access circuitry is to receive a request to perform an in-memory compute operation on data from the memory media. The request specifies a memory partition of the memory media in which the data is located. The media access circuitry reads the data from the memory partition. The media access circuitry performs error correction on the read data to produce error-corrected read data and stores the error-corrected read data in a temporary buffer for access by one or more in-memory compute operations, in addition to the requested in-memory compute operation. | 2019-10-10 |
20190310912 | HIGH THROUGHPUT BIT CORRECTION OF DATA INSIDE A WORD BUFFER FOR A PRODUCT CODE DECODER - A product code decoder to implement a method of bit correction in a codeword buffer to support error correcting code (ECC). The method loads a location entry from a correction queue, where the location entry includes a data word address and bit location information. The method performs a fast path data word address comparison to determine whether data from the data word address is being processed by a previous entry from the correction queue. The method further combines a correction of the data at the data word address specified by the location entry with a correction of a copy of the data being processed based on a previous location entry, in response to a fast path data word address comparison match, and stores the combined data in the codeword buffer. | 2019-10-10 |
20190310913 | SYSTEM LEVEL DATA-LOSS PROTECTION USING STORAGE DEVICE LOCAL BUFFERS - A computing system comprises a host system, a first storage device, a second storage device, a third storage device, a fabric interconnect device and a controller separate from the host system. The first, second, and third storage devices comprise a first, second, and third local memory buffer. The fabric interconnect device is configured to connect the first, second, and third storage devices over a fabric network to the host system. In response to receiving a write operation from the host system, a controller (e.g., on the first storage device or the fabric interconnect device) is configured to calculate error-correction data (e.g., parity data) by using data-protection operations (e.g., XOR operation(s)) directly on data stored on the first, second, and third local memory buffer, without having to rely on computing resources of the host system. | 2019-10-10 |
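The data-protection operation the abstract above names (XOR across the devices' local memory buffers) reduces to bytewise XOR parity, as in RAID. A minimal sketch with hypothetical helper names:

```python
def xor_parity(blocks):
    """RAID-style parity: the bytewise XOR of equal-length data blocks."""
    assert len({len(b) for b in blocks}) == 1, "blocks must be equal length"
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def reconstruct(surviving_blocks, parity):
    """Recover the single missing block by XORing parity with the survivors."""
    return xor_parity(list(surviving_blocks) + [parity])
```

The point of the claimed system is where this runs: the controller on a storage device or fabric interconnect computes the parity directly over the devices' local buffers, so no host CPU cycles or host memory bandwidth are spent.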
20190310914 | BIT INTERLEAVER FOR LOW-DENSITY PARITY CHECK CODEWORD HAVING LENGTH OF 16200 AND CODE RATE OF 10/15 AND 256-SYMBOL MAPPING, AND BIT INTERLEAVING METHOD USING SAME - A bit interleaver, a bit-interleaved coded modulation (BICM) device and a bit interleaving method are disclosed herein. The bit interleaver includes a first memory, a processor, and a second memory. The first memory stores a low-density parity check (LDPC) codeword having a length of 16200 and a code rate of 10/15. The processor generates an interleaved codeword by interleaving the LDPC codeword on a bit group basis. The size of the bit group corresponds to a parallel factor of the LDPC codeword. The second memory provides the interleaved codeword to a modulator for 256-symbol mapping. | 2019-10-10 |
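Interleaving "on a bit group basis" means permuting fixed-size groups of bits rather than individual bits. For the 16200-bit DVB-family LDPC codes the parallel factor is 360, giving 45 groups; the generic sketch below uses a toy permutation, not the table defined in the patent.

```python
def interleave_by_groups(codeword, group_size, permutation):
    """Split the codeword into contiguous groups of `group_size` bits and
    reorder the groups according to `permutation` (a permutation of group
    indices). Group-level permutation keeps the decoder's parallel lanes
    aligned, which is why the group size matches the parallel factor."""
    assert len(codeword) % group_size == 0
    n_groups = len(codeword) // group_size
    groups = [codeword[i * group_size:(i + 1) * group_size] for i in range(n_groups)]
    return [bit for g in permutation for bit in groups[g]]
```

With a 16200-bit codeword one would call this as `interleave_by_groups(cw, 360, perm)` where `perm` is the rate-specific 45-entry order.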
20190310915 | LOG-STRUCTURED ARRAY (LSA) PARTIAL PARITY EVICTION AND REASSEMBLY - Embodiments for optimizing resource consumption through partial parity information eviction in a storage system of a data storage environment. One or more cooperative Redundant Array of Independent Disks (RAID) parity computations are performed by evicting partial parity data from a RAID controller memory to a storage entity prior to a full stripes worth of data being monotonically written to the storage entity. The storage entity assembles the partial parity data from the one or more cooperative RAID parity computations into a single parity computation valid for the full stripes worth of data, thereby offloading parity computation to the storage entity to more efficiently utilize the RAID controller memory resources. | 2019-10-10 |
20190310916 | DYNAMICALLY MERGING PARITY DATA FOR MULTIPLE DATA STRIPES - Methods that can dynamically merge parity data for multiple data stripes are provided. One method includes detecting, by a processor, a disk failure in a redundant array of independent disks (RAID) configuration and, in response to detecting the disk failure, merging parity data stored in a plurality of sets of segments in a stripe of the RAID configuration to free space in a set of parity segments of the plurality of sets of segments. Systems and computer program products for performing the method are also provided. | 2019-10-10 |
20190310917 | MEMORY SYSTEM AND METHOD OF CONTROLLING MEMORY SYSTEM - According to one embodiment, a memory system includes a nonvolatile memory having a first writing area and a second writing area, and a controller. When the controller receives a write command, it confirms whether processing to preserve data written before a shutdown that did not follow a predetermined shutdown procedure is being executed in the nonvolatile memory; the controller causes the nonvolatile memory to write data to the first writing area if the processing is not being executed, and to the second writing area if the processing is being executed. | 2019-10-10 |
20190310918 | Meta Data Protection against Unexpected Power Loss in a Memory System - A memory system having a set of non-volatile media, a volatile memory, a buffer memory, and a controller configured to process requests from a host system to store data in the non-volatile media or retrieve data from the non-volatile media. The buffer memory is capable of holding data for at least a predetermined period of time after the volatile memory loses data during an event of power outage in the memory system. A power manager monitors a power supply of the memory system to detect an onset of power outage and, in response to the onset of power outage, causes the controller to copy meta data in the volatile memory to the buffer memory. | 2019-10-10 |
20190310919 | DATA MANAGEMENT AND BACKUP FOR IMAGE AND VIDEO MEDIA - The management and backup of image and video media is automatically performed by evaluating a media file to characterize the content; transforming the media file, as by compressing it, based upon its evaluation and established policies and requirements; and storing the transformed media file with a high resolution in a storage tier having first access characteristics for a first retention period. Following the first retention period, the stored file is re-evaluated and further transformed and stored with lesser resolution in another tier having different access characteristics for a second retention period. Subsequently, the further transformed media file may be transformed again and stored in archive storage. | 2019-10-10 |
20190310920 | Pre-Fetching and Staging of Restore Data on Faster Tiered Storage - Pre-fetching and staging restore data is provided. A set of data corresponding to a client device is collected from each respective data source in a plurality of data sources. A score is determined for each set of data collected. A probability of receiving a request to restore backup data on the client device is predicted based on analysis of the set of data from each respective data source and the score for each set of data. It is determined whether the predicted probability of receiving a request to restore the backup data on the client device is greater than a threshold. In response to determining that the predicted probability of receiving a request to restore the backup data on the client device is greater than the threshold, the backup data of the client device is preemptively moved to a fastest data storage tier in a multi-tiered backup data storage system. | 2019-10-10 |
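The predict-and-compare step above can be sketched as a weighted combination of the per-source scores checked against a threshold; the combining rule, weights, and threshold value are assumptions for illustration, not the patent's predictive model.

```python
def restore_probability(scores, weights):
    """Combine per-data-source scores into a single probability estimate
    via a weighted average (an illustrative stand-in for the analysis step)."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def should_prefetch(scores, weights, threshold=0.7):
    """True when the predicted restore probability exceeds the threshold,
    i.e. when the backup data would be staged onto the fastest tier."""
    return restore_probability(scores, weights) > threshold
```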
20190310921 | DATA STORAGE DEVICE AND OPERATION METHOD OPTIMIZED FOR RECOVERY PERFORMANCE, AND STORAGE SYSTEM HAVING THE SAME - A data storage device may include: a storage unit comprising a storage comprising a storage area divided into a plurality of blocks, and a controller configured to control a data input/output operation on the storage according to a request of a host device, collect information on a block, of the plurality of blocks, involved in a background operation which is performed while power is supplied, store the collected information as hint information, and resume a background operation started before a sudden power-off, based on the hint information, when power is resupplied after the sudden power-off. | 2019-10-10 |
20190310922 | MULTI-REPLICA DATA RESTORATION METHOD AND APPARATUS - Embodiments of this application provide a method and an apparatus for multi-replica data restoration. The method is applied to a distributed database and includes: when a first page in replica data of a first node has a fault, obtaining N latest LSNs in data log information of a second node that corresponds to a first page identifier. The first page is any page that is in the replica data of the first node and that has a fault. The first node further determines that a node corresponding to the largest LSN among the N latest LSNs in the data log information of the second node that corresponds to the first page identifier is a third node. Then the first node performs data restoration on the first page in the replica data of the first node according to replica data of the third node. | 2019-10-10 |
20190310923 | DATA STORAGE DEVICE AND OPERATING METHOD THEREOF - A data storage device includes a nonvolatile memory device and a controller which controls the nonvolatile memory device. When the data storage device is powered on after a sudden power off (SPO), the controller detects an erased page by scanning, without decoding, a first system data block of a nonvolatile memory device, performs simple decoding for first system data of first system pages before the erased page, and, if the simple decoding is a fail, recovers the first system data for which the simple decoding failed, by reading out second system data from corresponding second system pages of a second system data block as a duplicate block of the first system data block. | 2019-10-10 |
20190310924 | METHOD AND APPARATUS FOR FAILOVER PROCESSING - Embodiments of the present disclosure provide a method and apparatus for failover. In an embodiment is provided a method implemented at a first node in a cluster comprising a plurality of heterogeneous nodes. The method comprises: determining whether an application at a second node in the cluster is failed; and in response to determining that the application is failed, causing migration of data and services associated with the application from the second node to a third node in the cluster, the migration involving at least one node heterogeneous to the second node in the cluster. The present disclosure further provides a method implemented at the third node in the cluster and corresponding devices and computer program products. | 2019-10-10 |
20190310925 | INFORMATION PROCESSING SYSTEM AND PATH MANAGEMENT METHOD - A redundancy group includes a control unit disposed in a storage node and set in an active mode for processing requests from a compute node, and a control unit disposed in another storage node and set in a passive mode for taking over the processing when a failure occurs in the active control unit. The configuration of the redundancy group is obtained by inquiry to the storage node, and a plurality of paths from the compute node to a volume correlated with the redundancy group are set on the basis of the inquiry result. The highest priority is set on the path connected to the storage node provided with the active-mode control unit, while the second highest priority is set on the path connected to the storage node provided with the passive-mode control unit. | 2019-10-10 |
20190310926 | SERVER SYSTEM AND METHOD OF SWITCHING SERVER - A server system includes a primary server, at least one synchronous backup server, and at least one asynchronous backup server. The primary server includes a first processor. Each synchronous backup server includes a second processor configured to back up data of the primary server in a synchronous manner. Each asynchronous backup server includes a third processor configured to back up data of the primary server in an asynchronous manner. The first processor is configured to control one or more of the at least one asynchronous backup server to operate as a synchronous backup server when the number of synchronous backup servers decreases due to a failure in at least one server included in the server system. | 2019-10-10 |
20190310927 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - An information processing apparatus includes a data acquisition unit to acquire input data that is time-series data; a sampling error upper limit calculation unit to calculate, using data taken from the input data, a sampling error upper limit to be applied when similar learning subsequences selected from among a plurality of learning subsequences extracted from learning data that is time-series data are integrated to generate a sample subsequence, the sampling error upper limit being an upper limit on dissimilarity between the learning subsequences to be integrated; and a sample subsequence generation unit to generate the sample subsequence from the learning data using the sampling error upper limit. | 2019-10-10 |
20190310928 | INTEGRATION OF DIAGNOSTIC INSTRUMENTATION WITH MACHINE PROTECTION SYSTEM - In one embodiment, a portable monitoring system can include a secondary bus and a first monitoring circuit detachably coupled to the secondary bus. The first monitoring circuit can be configured to receive, from a first bus via a node comprising one or more gates, a first beacon packet of a monitoring system of an industrial machine. The first beacon packet can include a first system frame schedule indicative of a plurality of time slices during which a plurality of data packets can be configured to be broadcasted on the first bus of the monitoring system. The first monitoring circuit can also be configured to determine a first set of time slices of the plurality of time slices during which a first set of data packets including data characterizing one or more predetermined operating parameters are broadcasted on the first bus. The first monitoring circuit can be further configured to transfer the first set of data packets from the first bus to the first monitoring circuit by activating the one or more gates in the node during the first set of time slices of the plurality of time slices. The one or more gates are configured to prevent transfer of an outgoing data packet to the first bus. | 2019-10-10 |
20190310929 | RISK-BASED SOFTWARE VALIDATION AND CHANGE CONTROL - Embodiments are directed to performing risk-based software validation and to applying change control when upgrading a software application. In one scenario, a computer system calculates a risk score for features in a software application. This risk score indicates a relative level of risk for installing and using the software application. The computer system performs internal usage testing to determine how the software application is recommended for use, and conducts use tests to determine how a specified client uses the features of the software application as compared to the determined recommended use. Then, based on the calculated risk and the determined use of the features, the computer system provides a recommendation for the specified client indicating which portions of the software application are to undergo client-specific validation. In another scenario, a computer system applies change control when upgrading a software application from a first version to a second version. | 2019-10-10 |
20190310930 | TESTING AND REPRODUCTION OF CONCURRENCY ISSUES - A method and system for testing a server code in a server concurrently handling multiple client requests create a job-specific breakpoint in the server code using a library application programming interface (API) that allows the job-specific breakpoint in the server code to be enabled or disabled based on a job identifier. The library API controls the job-specific breakpoint in the server code via a plurality of readymade functions that execute, in a desired sequence, various synchronous and asynchronous program paths associated with the multiple client requests. By using the library API, the method and system are capable of establishing a new server connection with the server and retrieving the job identifier from the server associated with the established new server connection, pausing execution of a client job based on enabling the job-specific breakpoint, and resuming execution of the client job based on disabling the job-specific breakpoint. | 2019-10-10 |
20190310931 | SOFTWARE PERFORMANCE REGRESSION ANALYSIS USING EXECUTION TIMELINES - In one embodiment, a method receives execution timelines that include nodes representing function calls and execution times from executing a first version of an application and a second version of the application. The method selects pairs of nodes from a first set of nodes from a first execution timeline and a second set of nodes from a second execution timeline. The method then iterates through the pairs of nodes to determine (1) whether a node in the second execution timeline is not included in the first execution timeline and has an execution time slower than a set difference; or (2) whether a first node in the second execution timeline has an execution time slower than a second node in the first execution timeline by the set difference. A critical graph is generated that defines a path of nodes that lead to a node that has been identified as a performance regression cause. | 2019-10-10 |
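The pairwise comparison the abstract above iterates through can be sketched by mapping each timeline to function-name to execution-time pairs and applying the two stated tests; the flat dictionary shape and names here are illustrative simplifications of the patent's timeline nodes.

```python
def find_regressions(old_nodes, new_nodes, set_difference):
    """Flag suspected regression-cause functions in the new run.

    old_nodes / new_nodes: dict mapping function name -> execution time (s).
    A new-run node is flagged when (1) it is absent from the old run and its
    time exceeds `set_difference`, or (2) it is slower than its old-run
    counterpart by more than `set_difference`.
    """
    causes = []
    for name, new_time in new_nodes.items():
        old_time = old_nodes.get(name)
        if old_time is None:
            if new_time > set_difference:
                causes.append(name)
        elif new_time - old_time > set_difference:
            causes.append(name)
    return causes
```

The patent goes further and builds a critical graph, i.e. the call path leading to each flagged node; the sketch stops at identifying the candidate causes.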
20190310932 | Test Case Reduction for Code Regression Testing - In at least one embodiment, a system performs regression testing of software using selected test cases. In at least one embodiment, the system selects the test case for regression testing based on whether the test case correlates with modified code. In at least one embodiment, a test case correlates with the modified code if the test case tests all or a proper subset of the modified code. In at least one embodiment, if a test case does not test any of the modified code, then the test case is not used in the regression testing of the modified code. | 2019-10-10 |
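The selection rule above (keep a test case only if it exercises some of the modified code) can be sketched as a set-intersection over line coverage; the coverage-map representation is an assumption for illustration.

```python
def select_test_cases(coverage, modified_lines):
    """Pick test cases whose covered lines intersect the modified lines.

    coverage: dict mapping test-case name -> set of covered line numbers.
    Tests covering none of the modified code are dropped from the
    regression run, which is the reduction the abstract describes.
    """
    return [test for test, lines in coverage.items() if lines & modified_lines]
```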
20190310933 | EXTERNALIZED DEFINITION AND REUSE OF MOCKED TRANSACTIONS - Aspects of the embodiments include a system, method, and computer program operations, including compiling unit test code by a compiler run on a hardware computing system; identifying in the unit test code a call for a mocked transaction based on a name of the mocked transaction within the unit test code; identifying a location of the mocked transaction in a mocked transaction repository, the mocked transaction repository comprising mocked transaction code associated with the mocked transaction; executing the mocked transaction code associated with the mocked transaction; and outputting a response to the mocked transaction based at least in part on the unit test code. | 2019-10-10 |
20190310934 | ADDRESS TRANSLATION FOR STORAGE DEVICE - Techniques are described for accessing data from a storage device. In one example, the storage device may include a storage medium comprising non-volatile memory, a network connection, and one or more processing entities. The one or more processing entities may be configured to receive a request from the network connection at the non-volatile memory storage device for accessing data associated with a file system object, the request comprising a virtual address offset, a file object identifier and a size of the data access, perform, at a flash translation layer of a storage device software stack executing on the one or more processing entities of the storage device, a translation from the virtual address offset to a physical address for the data stored on the non-volatile memory, using the virtual address offset and the file object identifier, and access the data from the physical address from the storage medium. | 2019-10-10 |
20190310935 | INTELLIGENT GARBAGE COLLECTOR FOR CONTAINERS - Methods, systems, and computer program products are included for the intelligent garbage collection of containers. An example method includes providing a garbage collection data structure, the garbage collection data structure including metadata and one or more resource consumption parameters corresponding to the container. The one or more resource consumption parameters are analyzed by a machine-learning function. Based on the analyzing, the container is classified into one or more classes, the one or more classes including at least one of a suspicious container class, a malicious container class, or a normal container class. Based on the classifying, one or more garbage collection actions are performed on the container, including at least one of generating an alert corresponding to the container or reducing the resource consumption of the container. | 2019-10-10 |
20190310936 | GARBAGE COLLECTION STRATEGY FOR MEMORY SYSTEM AND METHOD OF EXECUTING SUCH GARBAGE COLLECTION - Memory systems and components thereof execute an improved garbage collection (GC) strategy in the case of multiple sudden power offs (SPOs). Such a memory system comprises a memory device including single-level cell (SLC) memory blocks grouped into super blocks (SLC SBs) and multi-level cell (MLC) memory blocks grouped into SBs (MLC SBs); and a memory controller to execute a flash translation layer (FTL) to perform a garbage collection (GC) operation. The memory controller executes the GC operation after a sudden power off (SPO) by determining each MLC SB with user data opened before the SPO to be an unsafe super block (UB), copying data from pages in a select one of the UBs to pages in the SLC SBs, and copying data from the pages in the SLC SBs to pages in a select MLC SB not determined to be a UB. | 2019-10-10 |
20190310937 | TECHNIQUES TO FACILITATE A HARDWARE BASED TABLE LOOKUP - Techniques to facilitate a hardware-based lookup of a table maintained in one or more types of memories or memory domains include examples of receiving a search request forwarded from a queue management device. Examples also include implementing table lookups to obtain a result and sending the result to an output queue of the queue management device for the queue management device to forward the result to a requestor of the search request. | 2019-10-10 |
20190310938 | TAPE DATA ACCESS WITH RANDOM ACCESS FEATURES - Retrieval of files containing audiovisual information from tape may be accelerated by storing non-sequentially read information in non-tape memory, and subsequently reading the file from tape, with reads of the non-sequentially read information fulfilled from the non-tape memory. In some embodiments a random access database is created when the file is opened or written to tape, and utilized to determine locations in the file of non-sequentially read information, or to determine the non-sequentially read information. | 2019-10-10 |
20190310939 | SELECTING RESOURCES TO MAKE AVAILABLE IN LOCAL QUEUES FOR PROCESSORS TO USE - Provided are a computer program product, system, and method for selecting resources to make available in local queues for processors to use. Each processor of a plurality of processors maintains a queue of resources for the processor to use when needed for processor operations. One of processors is selected. The selected processor accesses at least one available resource and includes the accessed at least one resource in the queue of the selected processor. | 2019-10-10 |
20190310940 | SELECTING RESOURCES TO MAKE AVAILABLE IN LOCAL QUEUES FOR PROCESSORS TO USE - Provided are a computer program product, system, and method for selecting resources to make available in local queues for processors to use. Each processor of a plurality of processors maintains a queue of resources for the processor to use when needed for processor operations. One of processors is selected. The selected processor accesses at least one available resource and includes the accessed at least one resource in the queue of the selected processor. | 2019-10-10 |
20190310941 | SECURE SPECULATIVE INSTRUCTION EXECUTION IN A DATA PROCESSING SYSTEM - A data processing system includes a processor, a cache memory, a speculative cache memory, and a control circuit. The processor is for executing instructions. The cache memory is coupled to the processor and is for storing the instructions and related data. A speculative cache is coupled to the processor and is for storing only speculative instructions and related data. The control circuit is coupled to the processor, to the cache memory, and to the speculative cache. The control circuit is for causing speculative instructions to be stored in the speculative cache in response to receiving an indication from the processor. Also, a method is provided for speculative execution in the data processing system. | 2019-10-10 |
20190310942 | COPYING FRAGMENTED FILES BETWEEN SEQUENTIAL STORAGE MEDIUMS - A computer-implemented method, according to one embodiment, includes: sending one or more instructions to calculate a combined size of fragments included in the fragmented files, sending one or more instructions to designate a portion of cache which corresponds to at least the combined size of the fragments, sending one or more instructions to send a copy of each non-fragmented file from a first drive directly to a second drive in which the second sequential storage medium is loaded, sending one or more instructions to use the designated portion of the cache to accumulate the fragments included in the fragmented files, and sending one or more instructions to send a copy of each of the fragments corresponding to a given fragmented file from the cache to the second drive in response to determining that all of the fragments corresponding to the given fragmented file have been accumulated in the cache. | 2019-10-10 |
20190310943 | Cache Partitioning to Accelerate Concurrent Workloads - Disclosed herein are system, method, and computer program product embodiments for cache partitioning to accelerate concurrent workload performance of in-memory databases. An embodiment operates by storing a first bitmask, associating the first bitmask with a first processor core, setting a subset of the bits of the first bitmask, wherein the subset of the bits of the first bitmask represents a first portion of shared last-level cache, and wherein any part of the first bitmask excluding the subset of the bits of the first bitmask represents a second portion of the lowest-level cache, and disallowing eviction of any cache line in the second portion of the lowest-level cache by the first processor core. | 2019-10-10 |
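The bitmask mechanics above are reminiscent of way-partitioning schemes such as Intel's Cache Allocation Technology, where each bit of a capacity mask corresponds to a slice of the shared last-level cache. A minimal sketch of the mask arithmetic (the helper names and contiguous-way layout are illustrative assumptions):

```python
def make_way_mask(start_way, num_ways):
    """Bitmask selecting a contiguous run of cache ways for one core."""
    return ((1 << num_ways) - 1) << start_way

def may_evict(core_mask, way):
    """A core may evict a line only from ways covered by its own bitmask;
    eviction from the remaining (cleared-bit) portion is disallowed, which
    is how one core's partition is protected from another's workload."""
    return bool(core_mask & (1 << way))
```

Giving two cores disjoint masks (e.g. ways 0-3 and ways 4-7) keeps their concurrent workloads from evicting each other's hot cache lines.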
20190310944 | MEMORY SYSTEM, COMPUTING SYSTEM, AND METHODS THEREOF - According to various aspects, a memory system may include: a memory having a memory address space associated therewith to access the memory; a cache memory assigned to the memory; one or more processors configured to generate a dummy address space in addition to the memory address space, each address of the dummy address space being distinct from any address of the memory address space, and generate one or more invalid cache entries in the cache memory, the one or more invalid cache entries referencing one or more dummy addresses of the dummy address space. | 2019-10-10 |
20190310945 | Trusted out-of-band memory acquisition for IOMMU-based computer systems - An apparatus includes an interface and memory acquisition circuitry. The interface is configured to communicate over a bus operating in accordance with a bus protocol, which supports address-translation transactions that translate between bus addresses in an address space of the bus and physical memory addresses in an address space of a memory. The memory acquisition circuitry is configured to read data from the memory by issuing over the bus, using the bus protocol, one or more requests that (i) specify addresses to be read in terms of the physical memory addresses, and (ii) indicate that the physical memory addresses in the requests have been translated from corresponding bus addresses even though the addresses were not obtained by any address-translation transaction over the bus. | 2019-10-10 |
20190310946 | SEMICONDUCTOR MEMORY DEVICE FOR CONTROLLING AN ADDRESS FOR TEMPERATURE MANAGEMENT - A semiconductor memory device includes a cell circuit including a plurality of cell dies arranged in a cell die stack. The semiconductor device also includes a control circuit configured to control the cell circuit, wherein the control circuit includes an address decoder and an address conversion circuit. The address decoder is configured to decode an address signal provided by a host and to output address information including a first address which identifies a first cell die, of the plurality of cell dies, requested by the host. The address conversion circuit is configured to convert the first address to a second address using the address information and to provide the second address to the cell circuit, wherein the second address is used to identify a second cell die of the plurality of cell dies different from the first cell die. | 2019-10-10 |
20190310947 | MEMORY ACCESS BASED I/O OPERATIONS - The invention relates to a method for transferring data between a computer program executed by a processor and an input/output device using a memory accessible by the computer program and the input/output device. An operating system provides a trigger address range in a virtual address space assigned to the computer program. A page fault is caused by accessing the trigger address by the computer program. A page fault handler handling the page fault acquires information for identifying the data to be transferred using the trigger address. The acquired information is provided to the input/output device and the identified data is transferred between the memory and the input/output device. | 2019-10-10 |
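A minimal sketch of the trigger-address mechanism (all names, the address layout, and the device model are hypothetical): touching an address in an unmapped trigger range faults, and the fault handler decodes the faulting address itself into a transfer request for the I/O device.

```python
TRIGGER_BASE = 0x40000000   # assumed start of the trigger address range
BLOCK_SIZE = 4096

memory = {}                                   # stands in for the shared buffer
device_blocks = {0: b"hello", 1: b"world"}    # data held by the I/O device

def page_fault_handler(fault_addr):
    """Identify the data to transfer from the trigger address itself."""
    block = (fault_addr - TRIGGER_BASE) // BLOCK_SIZE
    memory[block] = device_blocks[block]      # device-to-memory transfer

def access(addr):
    if addr >= TRIGGER_BASE:                  # unmapped trigger range -> fault
        page_fault_handler(addr)

access(TRIGGER_BASE + 1 * BLOCK_SIZE)         # touch block 1's trigger address
assert memory[1] == b"world"
```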
20190310948 | APPARATUS AND METHOD FOR ACCESSING AN ADDRESS TRANSLATION CACHE - An apparatus and method are provided for accessing an address translation cache. The address translation cache has a plurality of entries, where each entry is used to store address translation data used when converting a virtual address into a corresponding physical address of a memory system. The virtual address is generated from a plurality of source values. Allocation circuitry is responsive to received address translation data, to allocate an entry within the address translation cache to store the received address translation data. A hash value indication is associated with the allocated entry, where the hash value indication is computed from the plurality of source values used to generate a virtual address associated with the received address translation data. Lookup circuitry is responsive to an access request associated with a target virtual address, to perform a lookup process employing a target hash value computed from the plurality of source values used to generate the target virtual address, in order to identify any candidate matching entry in the address translation cache. When there is at least one candidate matching entry, a virtual address check process is then performed in order to determine whether any candidate matching entry is an actual matching entry whose address translation data enables the target virtual address to be translated to a corresponding target physical address. Such an approach can significantly improve the performance of accesses to the address translation cache, and can also give rise to power consumption savings. | 2019-10-10 |
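The two-phase lookup above can be sketched as follows (the source values, hash function, and address-generation formula are illustrative assumptions): a cheap hash over the source values filters candidates before the full virtual-address comparison.

```python
def make_va(base, index, offset):
    """Assumed address generation from the plurality of source values."""
    return base + index * 8 + offset

def cheap_hash(base, index, offset):
    """Small hash computed from the same source values."""
    return (base ^ index ^ offset) & 0xF

class TranslationCache:
    def __init__(self):
        self.entries = []  # (hash_indication, virtual_addr, physical_addr)

    def allocate(self, sources, phys):
        self.entries.append((cheap_hash(*sources), make_va(*sources), phys))

    def lookup(self, sources):
        h, va = cheap_hash(*sources), make_va(*sources)
        for eh, eva, phys in self.entries:
            # phase 1: hash filter; phase 2: full virtual-address check
            if eh == h and eva == va:
                return phys
        return None

tlb = TranslationCache()
tlb.allocate((0x1000, 2, 4), phys=0x9014)
assert tlb.lookup((0x1000, 2, 4)) == 0x9014
assert tlb.lookup((0x1000, 3, 4)) is None
```

The performance point in the abstract corresponds to the phase-1 filter: most non-matching entries are rejected by the small hash compare without computing or comparing the full virtual address.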
20190310949 | Supporting Concurrent Remove Operations and Add-To-Front Operations on a Least Recently Used (LRU) Queue - A remove operation and an add-to-front operation may be concurrently performed with respect to nodes in a Least Recently Used (LRU) queue. A remove operation for a node may proceed if a lock can be obtained on the node to be removed and a predecessor node. During the remove operation, an add-to-front operation may proceed if a lock can be obtained on a dummy node that precedes the current front node of the LRU queue. | 2019-10-10 |
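The locking discipline above can be sketched in a few lines (a simplified illustration, not the patented implementation — real concurrent code would also need deadlock-avoidance or retry on lock acquisition): add-to-front locks only the dummy head, while remove locks the node and its predecessor, so the two operations can proceed concurrently when they touch different parts of the queue.

```python
import threading

class Node:
    def __init__(self, value=None):
        self.value, self.prev, self.next = value, None, None
        self.lock = threading.Lock()

class LRUQueue:
    def __init__(self):
        self.head = Node()  # dummy node preceding the front of the queue
        self.tail = Node()
        self.head.next, self.tail.prev = self.tail, self.head

    def add_to_front(self, node):
        with self.head.lock:             # only the dummy head is locked
            node.next, node.prev = self.head.next, self.head
            self.head.next.prev = node
            self.head.next = node

    def remove(self, node):
        with node.prev.lock, node.lock:  # predecessor lock + node lock
            node.prev.next = node.next
            node.next.prev = node.prev

q = LRUQueue()
a, b = Node("a"), Node("b")
q.add_to_front(a)
q.add_to_front(b)        # queue front-to-back: b, a
q.remove(a)
assert q.head.next is b and b.next is q.tail
```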
20190310950 | DESTAGING PINNED RETRYABLE DATA IN CACHE - Provided are techniques for destaging pinned retryable data in cache. A ranks scan structure is created with an indicator for each rank of multiple ranks that indicates whether pinned retryable data in a cache for that rank is destageable. A cache directory is partitioned into chunks, wherein each of the chunks includes one or more tracks from the cache. A number of tasks is determined for the scan of the cache. The tasks are executed to scan the cache and destage pinned retryable data that is indicated as ready to be destaged by the ranks scan structure, wherein each of the tasks selects an unprocessed chunk of the cache directory for processing until the chunks of the cache directory have been processed. | 2019-10-10 |
20190310951 | SYSTEMS AND METHODS FOR PROVIDING ADAPTABLE VIRTUAL BACKPLANE SUPPORT FOR PROCESSOR-ATTACHED STORAGE RESOURCES - A method may include, in an information handling system comprising a processor and a management controller communicatively coupled to the processor and configured to provide management of the information handling system, executing by the management controller a management application for management of one or more storage resources of the information handling system, determining by the management controller whether one or more processor-attached storage resources are present in the information handling system, wherein the one or more processor-attached storage resources are coupled to the processor by other than a backplane of the information handling system, and responsive to determining that one or more processor-attached storage resources are present, executing by the management controller an adaptable virtual backplane that emulates a physical backplane to the management application as if the physical backplane were interfaced between the management application and the processor-attached storage resources. | 2019-10-10 |
20190310952 | Optimized Locking For Replication Solutions - A method for improving latency in storage systems comprises receiving one or more write commands from a host and determining that one or more bits are not set for a grain associated with a write. Responsive to determining that the one or more bits are not set for the grain, a message is sent to node M requesting that node M set the one or more bits for the grain, and the write data is requested and transferred from the host. The write data is submitted to a local storage and replicated to a remote system, the write is completed to the host, and node M is notified to clear the one or more bits for the write after a predetermined delay. | 2019-10-10 |
20190310953 | METHOD FOR SUPPORTING ERASURE CODE DATA PROTECTION WITH EMBEDDED PCIE SWITCH INSIDE FPGA+SSD - A topology is disclosed. The topology may include at least one Non-Volatile Memory Express (NVMe) Solid State Drive (SSD) and a Peripheral Component Interconnect Express (PCIe) switch. The PCIe switch may include an external connector to enable the PCIe switch to communicate with a processor, at least one connector to enable the PCIe switch to communicate with the NVMe SSD, a Power Processing Unit (PPU) to configure the PCIe switch, and an Erasure Coding controller including circuitry to apply an Erasure Coding scheme to data stored on the NVMe SSD. | 2019-10-10 |
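A minimal erasure-coding illustration (simple XOR parity — an assumption, since the abstract does not specify the scheme): the controller computes parity over the data stripes so any single lost stripe can be rebuilt from the survivors.

```python
def xor_parity(stripes):
    """Compute the XOR of equal-length byte stripes."""
    parity = bytearray(len(stripes[0]))
    for s in stripes:
        for i, byte in enumerate(s):
            parity[i] ^= byte
    return bytes(parity)

def rebuild(surviving, parity):
    """Recover the one missing stripe from the survivors plus parity."""
    return xor_parity(surviving + [parity])

data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
p = xor_parity(data)
# lose stripe 1 and rebuild it
assert rebuild([data[0], data[2]], p) == data[1]
```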
20190310954 | Data Access Method and Related Device - A data access method and a related device to resolve a disadvantage encountered when a first device accesses data of a second device. The method is applied to the first device, and the first device is coupled to the second device using a Universal Serial Bus (USB) interface. The method includes displaying, by the first device, an interface to which the second device is mapped, accessing data of the second device using the interface, receiving, by the first device, an instruction entered for the interface, displaying the data of the second device, receiving, by the first device, an operation instruction entered for the data, and processing, by the first device, the data according to the operation instruction. | 2019-10-10 |
20190310955 | METHODS AND DEVICES THAT UTILIZE HARDWARE TO MOVE BLOCKS OF OPERATING PARAMETER DATA FROM MEMORY TO A REGISTER SET - A hardware based block moving controller of an active device such as an implantable medical device that provides electrical stimulation reads a parameter data from a block of memory and then writes the parameter data to a designated register set of a component that performs an active function. The block of memory may include data that specifies a size of the block of memory to be moved to the register set. The block of memory may also include data that indicates a number of triggers to skip before moving a next block of memory to the register set. A trigger that causes the block moving controller to move the data from the block of memory to the register set may be generated in various ways such as through operation of the component having the register set or by a separate timer. | 2019-10-10 |
20190310956 | METHOD FOR SUPPORTING ERASURE CODE DATA PROTECTION WITH EMBEDDED PCIE SWITCH INSIDE FPGA+SSD - A Peripheral Component Interconnect Express (PCIe) switch with Erasure Coding logic is disclosed. The PCIe switch may include an external connector to enable the PCIe switch to communicate with a processor and at least one connector to enable the PCIe switch to communicate with at least one storage device. The PCIe switch may include a Power Processing Unit (PPU) to handle configuration of the PCIe switch. The Erasure Coding logic may include an Erasure Coding Controller with circuitry to apply an Erasure Coding scheme to data stored on the storage device, and a snooping logic including circuitry to intercept a data transmission received at the PCIe switch and modify the data transmission responsive to the Erasure Coding scheme. | 2019-10-10 |
20190310957 | METHOD FOR SUPPORTING ERASURE CODE DATA PROTECTION WITH EMBEDDED PCIE SWITCH INSIDE FPGA+SSD - A topology is disclosed. The topology may include at least one Non-Volatile Memory Express (NVMe) Solid State Drive (SSD), a Field Programmable Gate Array (FPGA) to implement one or more functions supporting the NVMe SSD, such as data acceleration, data deduplication, data integrity, data encryption, and data compression, and a Peripheral Component Interconnect Express (PCIe) switch. The PCIe switch may communicate with both the FPGA and the NVMe SSD. | 2019-10-10 |
20190310958 | MECHANISM TO IDENTIFY FPGA AND SSD PAIRING IN A MULTI-DEVICE ENVIRONMENT - A system is disclosed. The system may include a Solid State Drive (SSD) and a co-processor. The SSD may include storage for data, storage for a unique SSD identifier (ID), and storage for a unique co-processor ID. The co-processor includes storage for the unique SSD ID and storage for the unique co-processor ID. A hardware interface may permit communication between the SSD and the co-processor. | 2019-10-10 |
20190310959 | BIMODAL PHY FOR LOW LATENCY IN HIGH SPEED INTERCONNECTS - Systems, methods, and apparatuses including a Physical layer (PHY) block coupled to a Media Access Control layer (MAC) block via a PHY/MAC interface. Each of the PHY and MAC blocks include a plurality of Physical Interface for PCI Express (PIPE) registers. The PHY/MAC interface includes a low pin count PIPE interface comprising a small set of wires coupled between the PHY block and the MAC block. The MAC block is configured to multiplex command, address, and data over the low pin count PIPE interface to access the plurality of PHY PIPE registers, and the PHY block is configured to multiplex command, address, and data over the low pin count PIPE interface to access the plurality of MAC PIPE registers. The PHY block may also be selectively configurable to implement a PIPE architecture to operate in a PIPE mode and a serialization and deserialization (SERDES) architecture to operate in a SERDES mode. | 2019-10-10 |
20190310960 | STANDARDIZED HOT-PLUGGABLE TRANSCEIVING UNIT, HOSTING UNIT AND METHOD FOR APPLYING DELAYS BASED ON PORT POSITIONS - Transceiving and hosting units applying delays based on port positions. The transceiving unit is adapted for insertion into one port among a plurality of ports of the hosting unit. The transceiving unit receives IP packets and applies a delay to the IP packets. The delay is based on a position of the one port into which the transceiving unit is inserted among the plurality of ports of the hosting unit. The transceiving unit transmits the delayed IP packets to the hosting unit. Alternatively, the hosting unit comprising the plurality of ports (including ports adapted for receiving transceiving units) applies a delay to IP packets received via one port among the plurality of ports. The delay is based on a position of the one port among the plurality of ports. Furthermore, an orchestration method implemented by an orchestration server may be used for determining the delays based on the positions. | 2019-10-10 |
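The port-position delay can be sketched as follows (the linear delay policy and constants are assumptions — the abstract only says the delay is based on the port's position): incoming IP packets are held for a delay derived from the port index before being passed on to the hosting unit.

```python
import time

DELAY_PER_PORT_MS = 5   # assumed policy: delay grows linearly with port index

def port_delay_seconds(port_index):
    """Delay applied to packets, based on the position of the port."""
    return port_index * DELAY_PER_PORT_MS / 1000.0

def forward(packet, port_index, sleep=time.sleep):
    sleep(port_delay_seconds(port_index))  # apply the position-based delay
    return packet                          # then transmit to the hosting unit

assert port_delay_seconds(0) == 0.0
assert abs(port_delay_seconds(4) - 0.02) < 1e-12
assert forward(b"pkt", 2, sleep=lambda s: None) == b"pkt"
```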
20190310961 | AUTHENTICATION AND INFORMATION SYSTEM FOR REUSABLE SURGICAL INSTRUMENTS - An authentication and information system for use in a surgical stapling system includes a microprocessor configured to demultiplex data from a plurality of components in the surgical system. The authentication and information system can include one wire chips and a coupling assembly with a communication connection. | 2019-10-10 |
20190310962 | METHOD OF PERFORMING AUTOMATIC COMMISSIONING OF A NETWORK - The invention describes a method of performing automatic commissioning of a network (N) comprising a plurality of network devices. | 2019-10-10 |
20190310963 | DIRECTION INDICATOR - An indication of a direction of transmission over the switching fabric is inserted into a data packet that is transmitted from a tile. The indication of direction may indicate directions from the transmitting tile in which intended recipient tiles are present. The switching fabric prevents (e.g. by blocking the data packet at one of a series of latches) the transmission in a direction not indicated in the data packet. Hence, power saving may be achieved, by preventing the unnecessary transmission of data packets over parts of the switching fabric. | 2019-10-10 |
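An illustrative sketch of the direction indicator (the encoding and fabric model are assumptions): the packet carries a bitmask of the directions in which recipient tiles exist, and the fabric blocks propagation in any direction not indicated, saving power.

```python
NORTH, EAST, SOUTH, WEST = 1, 2, 4, 8   # assumed direction bit encoding

def make_packet(payload, directions):
    """The transmitting tile inserts the direction indication into the packet."""
    return {"dirs": directions, "payload": payload}

def fabric_forward(packet, direction):
    """Latch stage: block transmission in directions not indicated."""
    if packet["dirs"] & direction:
        return packet["payload"]   # propagate toward recipients
    return None                    # blocked -> no unnecessary transmission

pkt = make_packet(b"data", NORTH | EAST)
assert fabric_forward(pkt, NORTH) == b"data"
assert fabric_forward(pkt, SOUTH) is None
```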
20190310964 | SPECULATIVE READ MECHANISM FOR DISTRIBUTED STORAGE SYSTEM - Provided is an apparatus directed to a speculative read mechanism for a distributed storage system. The apparatus includes a remote direct memory access (RDMA) network interface card (RNIC). | 2019-10-10 |
20190310965 | MASSIVELY PARALLEL HIERARCHICAL CONTROL SYSTEM AND METHOD - An electronic control system is disclosed for controlling individually controllable elements of an external component. In one embodiment the system may include a state translator subsystem for receiving a state command from an external subsystem. The state translator subsystem may have at least one module for processing the state command and generating operational commands for controlling the elements to achieve a desired state or condition. A programmable calibration command translation layer (PCCTL) subsystem may be included which receives and uses the operational commands to generate granular level commands for controlling the elements. A feedback control layer subsystem may be included which applies the granular level commands to the elements, and further modifies the granular level commands as needed to control the elements in closed loop fashion. | 2019-10-10 |
20190310966 | METHOD AND APPARATUS FOR FAULT-TOLERANT MEMORY MANAGEMENT - A device and method for providing a fault-tolerant file system. The fault-tolerant file system attempts to minimize the number of writes used when updating file system data structures. In one embodiment, file system data, including file system metadata, is stored in a fault-tolerant tree including a working state and a transacted state. In one embodiment, a change list is used to track blocks that have been updated, instead of cascading updates to leaf nodes up the tree, and a delta block is used to further minimize block updates when adding or removing nodes from the tree. In one embodiment, a Q-Block is used to prevent cycles when adding and removing free blocks from an allocation tree. Metadata values are stored in the tree in a way that allows certain metadata values to be inferred when not present in the tree, thus conserving space and lowering query time. | 2019-10-10 |
20190310967 | DATA TAGGING - A method for characterizing data elements in an enterprise including ascertaining at least one of an access metric and a data identifier for each of a plurality of data elements and employing the at least one of an access metric and a data identifier to automatically apply a metatag to ones of the plurality of data elements. | 2019-10-10 |
20190310968 | MANAGING DELETIONS FROM A DEDUPLICATION DATABASE - An information management system can manage the removal of data block entries in a deduplicated data store using working copies of the data block entries residing in a local data store of a secondary storage computing device. The system can use the working copies to identify data blocks for removal. Once the deduplication database is updated with the changes to the working copies (e.g., using a transaction based update scheme), the system can query the deduplication database for the database entries identified for removal. Once identified, the system can remove the database entries identified for pruning and/or the corresponding deduplication data blocks from secondary storage. | 2019-10-10 |
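A hedged sketch of the pruning flow (data structures are illustrative — the abstract's transaction-based update is reduced to a single pass): working copies of the block entries are examined locally, removal candidates are identified, and the matching database entries and data blocks are then removed from secondary storage.

```python
dedup_db = {"b1": 2, "b2": 0, "b3": 1}        # block id -> reference count
secondary_store = {"b1": b"..", "b2": b"..", "b3": b".."}

def prune(db, store):
    """Identify unreferenced blocks via working copies, then remove them."""
    working = dict(db)                         # working copies in local store
    to_remove = [b for b, refs in working.items() if refs == 0]
    for b in to_remove:
        del db[b]                              # remove the database entry
        del store[b]                           # and the deduplicated data block
    return to_remove

removed = prune(dedup_db, secondary_store)
assert removed == ["b2"]
assert "b2" not in dedup_db and "b2" not in secondary_store
```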
20190310969 | System and Methods for Implementing a Server-Based Hierarchical Mass Storage System - Setting up and supporting the computer infrastructure for a remote satellite office is a difficult task for any information technology department. To simplify the task, an integrated server system with a hierarchical storage system is proposed. The hierarchical storage system includes the ability to store data at an off-site cloud storage service. The server system is remotely configurable and thus allows the server to be configured and populated with data from a remote location. | 2019-10-10 |
20190310970 | DETECTING QUASI-IDENTIFIERS IN DATASETS - Quasi-identifiers (QIDs) are detected in a dataset using a set of computing tasks. The dataset has a plurality of records and a set of attributes. An index is generated for the dataset. The index has an indicator for each attribute value of each record in the dataset. Each indicator specifies all the records in the dataset having the same value for the attribute. Each task is assigned an attribute combination and a subset of the plurality of records in the dataset and is passed to a thread for execution on computing resources. The executing task inspects the set of records specified by the index indicator for each attribute value in the attribute combination to produce a result. The result of at least one task identifies a unique record for the associated attribute combination. The attribute combination producing the unique record is a QID. | 2019-10-10 |
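The index-driven QID scan can be sketched as follows (dataset, attribute names, and the single-threaded loop standing in for the task/thread machinery are all illustrative): the index maps each (attribute, value) pair to the set of record ids holding it, and an attribute combination whose intersected record sets isolate a unique record is a quasi-identifier.

```python
from itertools import combinations

records = [
    {"zip": "12345", "age": 30, "sex": "F"},
    {"zip": "12345", "age": 30, "sex": "M"},
    {"zip": "12345", "age": 41, "sex": "M"},
]

def build_index(recs):
    """Index: (attribute, value) -> set of record ids with that value."""
    index = {}
    for rid, rec in enumerate(recs):
        for attr, val in rec.items():
            index.setdefault((attr, val), set()).add(rid)
    return index

def find_qids(recs, index, k):
    """Return attribute combinations of size k that isolate some record."""
    attrs = list(recs[0])
    qids = []
    for combo in combinations(attrs, k):   # one "task" per combination
        for rec in recs:
            matching = set.intersection(
                *(index[(a, rec[a])] for a in combo))
            if len(matching) == 1:         # a unique record -> QID found
                qids.append(combo)
                break
    return qids

idx = build_index(records)
assert ("zip",) not in find_qids(records, idx, 1)   # zip alone isolates nobody
assert ("age", "sex") in find_qids(records, idx, 2)
```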