52nd week of 2019 patent application highlights part 43 |
Patent application number | Title | Published |
20190391863 | Artificial Creation Of Dominant Sequences That Are Representative Of Logged Events - Dominant sequences that are representative of logged events can be artificially created. Initially, a graph comprising multiple nodes and edges between pairs of nodes is generated from logged information. The weights, or values, associated with edges are incremented as the log data reveals a temporal relationship between two nodes. Subsequently, a set of candidate trajectories, with each candidate trajectory representing a sequence of events, are generated by repeatedly traversing the generated graph in a random manner by commencing at randomly selected nodes and then proceeding in a random manner to subsequent nodes in accordance with the edge values, for a random quantity of steps. The candidate trajectories are filtered to eliminate those that are impossible or improbable based on a comparison between individual candidate trajectories and the quantity of occurrences within the logs. Scoring is based both on a quantity of occurrences as well as a quantity of steps. | 2019-12-26 |
20190391864 | EARLY DETECTION OF EXECUTION ERRORS - Certain aspects of the present disclosure provide apparatus and techniques for communicating error information during memory operations. For example, certain aspects of the present disclosure may provide a method for memory operations. The method generally including receiving a command from a host device, performing memory operations corresponding to the command received from the host device, detecting an error during the memory operations, and communicating the error based on the detection, wherein the error is communicated before receiving another command from the host device. | 2019-12-26 |
20190391865 | MEMORY SUB-SYSTEM WITH DYNAMIC CALIBRATION USING COMPONENT-BASED FUNCTION(S) - A system includes memory circuitry configured to receive a command, and in response to the command: generate a first read result based on reading a set of memory cells using a first read voltage; and generate a second read result based on reading the set of memory cells using a second read voltage, wherein: the first read voltage and the second read voltage are separately associated with a read level voltage initially assigned to read the set of memory cells, and the first read result and the second read result are for calibrating the read level voltage. | 2019-12-26 |
20190391866 | DYNAMIC CLOUD DEPLOYMENT AND CALIBRATION TOOL - Systems, apparatus and methods for intelligent deployment(s) of application objects are provided. The systems, apparatus and methods may include one or more dynamic parameters retrieved from metadata table(s). The parameter(s) may be used to calibrate the deployment(s). The parameter(s) may be associated with previous failed deployment(s). Calibration may be automatic. Calibration may include email sending and/or email previewing components. A testing environment may be used prior to actual deployment. | 2019-12-26 |
20190391867 | DATA RECOVERY AFTER STORAGE FAILURE IN A MEMORY SYSTEM - Exemplary methods, apparatuses, and systems include a memory controller receiving a first physical address corresponding to a logical address and data and initiating storage of the data at the first physical address. The memory controller sends a message indicating that the data has been successfully stored at the first physical address before determining if the data was successfully stored at the first physical address. Upon determining that the data failed to store at the first physical address, the memory controller retrieves the data from a volatile memory associated with the first physical address. The memory controller sends a request and receives a second physical address for the retrieved data. The memory controller initiates storage of the data at the second physical address. | 2019-12-26 |
20190391868 | POWER ERROR MONITORING AND REPORTING WITHIN A SYSTEM ON CHIP FOR FUNCTIONAL SAFETY - Methods, systems and apparatuses may provide for technology that includes a system on chip (SoC) having an integrated voltage regulator and a power management controller, and a first communication path coupled to the power management controller, wherein the first communication path is to carry power error information to the power management controller. The technology may also include a second communication path coupled to an error pin of the SoC, wherein the second communication path is to carry the power error information to the error pin, and wherein the power error information is associated with the integrated voltage regulator. | 2019-12-26 |
20190391869 | SUPPORTING RANDOM ACCESS OF COMPRESSED DATA - A processing device comprising compression circuitry to: determine a compression configuration to compress source data; generate a checksum of the source data in an uncompressed state; compress the source data into at least one block based on the compression configuration, wherein the at least one block comprises: a plurality of sub-blocks, wherein each of the plurality of sub-blocks has a predetermined size; a block header corresponding to the plurality of sub-blocks; and decompression circuitry coupled to the compression circuitry, wherein the decompression circuitry is to: while not outputting a decompressed data stream of the source data: generate index information corresponding to the plurality of sub-blocks; in response to generating the index information, generate a checksum of the compressed source data associated with the plurality of sub-blocks; and determine whether the checksum of the source data in the uncompressed state matches the checksum of the compressed source data. | 2019-12-26 |
20190391870 | METHOD AND APPARATUS FOR IMPROVED DATA RECOVERY IN DATA STORAGE SYSTEMS - A method and apparatus for improved data recovery in data storage systems is described. When errors occur while retrieving a plurality of codewords from a plurality of storage devices, a long vector may be formed from the plurality of codewords and decoded by a special, long parity check matrix to re-create data stored on the plurality of storage devices when normal decoding efforts fail. | 2019-12-26 |
20190391871 | BYPASSING ERROR CORRECTION CODE (ECC) PROCESSING BASED ON SOFTWARE HINT - Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to receive metadata from an application, wherein the metadata indicates one or more processing operations which can accommodate a predetermined level of bit errors in read operations from memory, determine, from the metadata, pixel data for which error correction code bypass is acceptable, generate one or more error correction code bypass hints for subsequent cache access to the pixel data for which error correction code bypass is acceptable, and transmit the one or more error correction code bypass hints to a graphics processing pipeline. Other embodiments are also disclosed and claimed. | 2019-12-26 |
20190391872 | DYNAMIC DATA VERIFICATION AND RECOVERY IN A STORAGE SYSTEM - In one implementation, a method comprises storing verification data and erasure codes separately in a plurality of storage devices. The method further comprises determining, by a processing device, whether data to be written to the plurality of storage devices is lost or corrupted using the verification data and the erasure codes. | 2019-12-26 |
20190391873 | OVERWRITING DATA OBJECTS IN A DISPERSED STORAGE NETWORK - A method for execution by a dispersed storage and task (DST) processing unit includes determining to overwrite an original data object stored in a plurality of storage units with an updated data object. Validation level data can be determined, where the validation level data indicates a data object overwrite level, a data region overwrite level, or a data segment overwrite level. Checksum metadata associated with the original data object can be retrieved in response to determining to overwrite an original data object. Overwriting of a subset of data regions or data segments of the original data object can be foregone in response to generating validation data that indicates their checksums in the checksum metadata compare favorably to corresponding overwrite checksum values. | 2019-12-26 |
20190391874 | MEMORY STORAGE APPARATUS WITH DYNAMIC DATA REPAIR MECHANISM AND METHOD OF DYNAMIC DATA REPAIR THEREOF - The disclosure is directed to a memory storage apparatus having a dynamic data repair mechanism. The memory storage apparatus includes a connection interface; a memory array; and a memory control circuit configured at least to: receive, from the connection interface, a write command which includes a user data and an address of the user data; encode the user data as a codeword which includes the user data and parity bits; write the codeword, in a first memory location of the memory array, as a written codeword; perform a read procedure of the written codeword to determine whether the written codeword is erroneously written; and store a redundant codeword of the user data in a second memory location in response to having determined that the written codeword is erroneously written. | 2019-12-26 |
20190391875 | OVERWRITING DATA OBJECTS IN A DISPERSED STORAGE NETWORK - A method for execution by a dispersed storage and task (DST) processing unit includes determining to overwrite an original data object stored in a plurality of storage units with an updated data object. Validation level data can be determined, where the validation level data indicates a data object overwrite level, a data region overwrite level, or a data segment overwrite level. Checksum metadata associated with the original data object can be retrieved in response to determining to overwrite an original data object. Overwriting of a subset of data regions or data segments of the original data object can be foregone in response to generating validation data that indicates their checksums in the checksum metadata compare favorably to corresponding overwrite checksum values. | 2019-12-26 |
20190391876 | Method and Apparatus for Non-Volatile Memory Array Improvement Using a Command Aggregation Circuit - A queue-based non-volatile memory (NVM) hardware assist card, information handling system, and method are disclosed herein. An embodiment of the queue-based NVM hardware assist card includes a plurality of downstream ports configured to be connected to a corresponding plurality of actual queue-based NVM storage devices, a plurality of upstream ports configured to appear as a plurality of apparent queue-based NVM storage devices, and a distinct upstream port of a different type than the plurality of upstream ports, the distinct upstream port for interacting with a host processor to receive a consolidated processing NVM command from the host processor and to return a consolidated processing NVM command completion indication, the queue-based NVM hardware assist card configured to aggregate multiple of the NVM command completion messages received via respective ones of the plurality of downstream ports from respective ones of the plurality of actual queue-based NVM storage devices and to generate the consolidated processing NVM command completion indication. | 2019-12-26 |
20190391877 | DECENTRALIZED RAID SCHEME HAVING DISTRIBUTED PARITY COMPUTATION AND RECOVERY - A computer-implemented method, according to one embodiment, includes: receiving a write request at a storage system which includes more than one storage device, determining a storage location for data included in the write request, and determining a storage location for parity information corresponding to the data included in the write request. A first copy of the data included in the write request is sent to a first storage device which corresponds to the storage location for the data included in the write request. Moreover, a second copy of the data included in the write request is sent to a second storage device which corresponds to the storage location for the parity information. One or more instructions to compute the parity information via a decentralized communication link with the remaining storage devices are sent to the second storage device. The first storage device is different than the second storage device. | 2019-12-26 |
20190391878 | Source Volume Backup with Adaptive Finalization Apparatuses, Methods and Systems - The Source Volume Backup with Adaptive Finalization Apparatuses, Methods and Systems (“SVBAF”) transforms backup request inputs via SVBAF components into backup response outputs. A set of blocks to be copied from a source volume to a target volume is designated and copied while an operating system is configured to write to the source volume. Blocks of the source volume that were written to by the operating system while the operating system was configured to write to the source volume are identified. Finalization settings are analyzed to determine whether to enter a CoW mode. If the CoW mode should not be entered, the designated set of blocks is changed to include at least one of the identified blocks and a pass is repeated. Otherwise, the operating system is instructed to enter the CoW mode and bring the target volume into a state consistent with a state of the source volume. | 2019-12-26 |
20190391879 | ON-DEMAND MULTITENANT DATA BACKUP AND RESTORE - Methods, systems, and computer program products are provided. Tenant data of a multitenant relational database system is backed up by adding a value of a current version identifier for the tenant data to previous valid version identifiers for the tenant data, and changing the value of the current version identifier for the tenant data to a next previously-unused value. The tenant data is restored by changing the value of the current version identifier to a value of one of the previous valid version identifiers, and deleting, from the previous valid version identifiers, previous valid version identifiers that are not less recent than the changed value of the current version identifier. The tenant is provided with a view of the tenant data included in only a latest valid version of each respective record from among all valid versions of the each respective record. | 2019-12-26 |
20190391880 | APPLICATION BACKUP AND MANAGEMENT - A data management and storage (DMS) cluster of peer DMS nodes manages data of an application distributed across a set of machines of a compute infrastructure. A DMS node associates a set of machines with the application, and generates data fetch jobs for the set of machines for execution by multiple peer DMS nodes. The DMS node determines whether each of the data fetch jobs for the set of machines is ready for execution by the peer DMS nodes. In response to determining that each of the data fetch jobs is ready for execution, the peer DMS nodes execute the data fetch jobs to generate snapshots of the set of machines. The snapshots may be full or incremental snapshots, and collectively form a snapshot of the application. | 2019-12-26 |
20190391881 | NON-BLOCKING SECONDARY READS - Described herein are embodiments of a database system. The database system receives a read command to read at least some stored data. The database system may generate a plurality of snapshots of data stored in a first data storage node of a plurality of data storage nodes. The database system may determine whether data is being written to the first data storage node. Responsive to determining that data is not being written to the first data storage node, the database system may process the read command at least in part by reading at least some data stored on the first data storage node. Responsive to determining that data is being written to the first data storage node, the database system may process the read command at least in part by reading at least some data from a snapshot of the plurality of snapshots. | 2019-12-26 |
20190391882 | DATA BACKUP SYSTEM AND METHOD - A backup system for backing up data on a computer system, comprising a plurality of storage devices, which can be of any type known in the industry, such as USB, SATA, or SD. Storage devices may be built into the device or may be external devices. The same system may have storage devices that are all of the same type (all internal or all external) or a mixture (some internal, some external). One or more connectors connect the plurality of storage devices to the computer system. The storage devices may each have a unique connector (wired or wireless) to the computer system or, alternatively, one connector can be connected each time to another storage device. The system also comprises a control module for controlling the connection between the plurality of storage devices and said computer system such that, at any given time, at least one but not all storage devices are connected to the computer system. The control module selects the storage device or devices to be connected to the computer system according to a predetermined schedule. | 2019-12-26 |
20190391883 | APPLICATION MIGRATION BETWEEN ENVIRONMENTS - A data management and storage (DMS) cluster of peer DMS nodes manages migration of an application between a primary compute infrastructure and a secondary compute infrastructure. The secondary compute infrastructure may be a failover environment for the primary compute infrastructure. Primary snapshots of virtual machines of the application in the primary compute infrastructure are generated, and provided to the secondary compute infrastructure. During a failover, the primary snapshots are deployed in the secondary compute infrastructure as virtual machines. Secondary snapshots of the virtual machines are generated, where the secondary snapshots are incremental snapshots of the primary snapshots. In failback, the secondary snapshots are provided to the primary compute infrastructure, where they are combined with the primary snapshots to construct a current state of the application, and the application is deployed in the current state by deploying virtual machines on the primary compute infrastructure. | 2019-12-26 |
20190391884 | NON-BLOCKING BACKUP IN A LOG REPLAY NODE FOR TERTIARY INITIALIZATION - Disclosed herein are system, method, and computer program product embodiments for non-blocking backup for tertiary initialization in a log replay only node. An embodiment operates by performing a standard log replay on a secondary server and briefly suspending the standard log replay in response to tertiary initialization. Further, the secondary server may determine backup block information and perform a page-aligned backup process from the secondary server to a tertiary server. Additionally, the secondary server may determine log replay block information, and perform a modified log replay concurrently with the backup process based on the backup block information. | 2019-12-26 |
20190391885 | ZERO DATA LOSS TRANSFER PROTOCOL - A method for reliable data synchronization within a network is disclosed. The producer system stores data in a persistent data store and produces one or more data updates. The producer system simultaneously transmits the data updates to a consumer system and initiates storage of the data updates at the producer system. When storage of the data updates at the producer system is complete, the producer system transmits a first acknowledgment to the consumer system. The producer system determines whether a second acknowledgment has been received from the consumer system, wherein the second acknowledgment indicates that the consumer system has successfully stored the data updates at the consumer system. In accordance with a determination that the second acknowledgment has been received from the consumer system, the producer system changes the temporary status of the data updates stored at the producer system to a permanent status. | 2019-12-26 |
20190391886 | TECHNOLOGIES FOR LIMITING PERFORMANCE VARIATION IN A STORAGE DEVICE - Systems and methods for limiting performance variation in a storage device are described. Storage devices receive work requests to perform one or more operations from other computing devices, such as a host computing device. Completing the work requests may take a response time. In some embodiments, if the response time of executing the work request exceeds a threshold, the storage device may assign additional computing resources to complete the work request. | 2019-12-26 |
20190391887 | TRANSFORMATION DRIFT DETECTION AND REMEDIATION - In various example embodiments, a system, computer-readable medium and method to detect and dynamically correct a transformation drift in a data pipeline, the method comprising detecting a change in a transformation performed by an upstream subsystem of the data pipeline on a data field of an output dataset of the upstream subsystem; classifying the data field as an impacted data field; identifying, based on the topology information, a downstream subsystem of the data pipeline downstream of the upstream subsystem; identifying an input dataset of the downstream subsystem including the impacted data field; and performing a corrective transformation on the impacted data field of the input dataset of the downstream subsystem. | 2019-12-26 |
20190391888 | METHODS AND APPARATUS FOR ANOMALY RESPONSE - Examples of the present disclosure relate to a method for anomaly response in a system on chip. The method comprises measuring a magnitude of a transient anomaly event in an operating condition of the system on chip. Based on the magnitude, an indication of susceptibility to an anomaly event of the measured magnitude is determined for each of a plurality of components of the system on chip. Based on the determined indications of susceptibility for each of the plurality of components, an anomaly response action is determined. The method then comprises performing the anomaly response action. | 2019-12-26 |
20190391889 | ALLOCATING PART OF A RAID STRIPE TO REPAIR A SECOND RAID STRIPE - Managing a redundant array of independent disks (RAID) storage array involves assigning first and second stripes to span respective first and second sets of disks. A subset of drives in the first set fails such that the first stripe is in a first state wherein a failure of another drive in the first set will lead to data loss in the first stripe. It is determined that the second stripe is in a fault-tolerant state such that the second stripe can have failures of two drives in the second set before the second stripe is in the first state. Part of an operational disk of the second set used by the second stripe is allocated to the first stripe to replace at least part of the subset of failed drives. | 2019-12-26 |
20190391890 | SINGLE PORT DATA STORAGE DEVICE WITH MULTI-PORT VIRTUALIZATION - Multi-port data storage device capabilities can be provided by a remote host connected to a diskgroup that has a first single port data storage device and a second single port data storage device. Initialization of a first logical volume and a second logical volume in each single port data storage device allows a data access request to be serviced from the remote host with the first logical volume of the first single port data storage device. Data virtualized from the first logical volume of the first single port data storage device to the second volume of the second single port data storage device allows accessing the second volume of the second single port data storage device in response to a data request to the first volume of the first single port data storage device. | 2019-12-26 |
20190391891 | NON-INTRUSIVE, LIGHTWEIGHT MEMORY ANOMALY DETECTOR - A lightweight, non-intrusive memory anomaly detector has been designed that focuses on time sub-windows in the time-series data for selected memory related metrics that can efficiently be collected by probes or agents without being intrusive with the virtual machines (VMs) being monitored. In addition, the memory anomaly detector extracts features from those sub-windows of correlated features to present a smaller input vector to two classifiers: a fuzzy rule-based classifier and an artificial neural network. This allows the memory anomaly detector to be “lightweight” because it is less computationally expensive to run a smaller artificial neural network. The fuzzy rule-based classifier applies fuzzy rules to the input vector and provides classification labels, which are used to train an artificial neural network (ANN). After being trained, the trained ANN is refined with supervised feedback and presents its output of classification probabilities for application performance analysis. | 2019-12-26 |
20190391892 | SYSTEM AND METHOD FOR ASSISTING USER TO RESOLVE A HARDWARE ISSUE AND A SOFTWARE ISSUE - The present disclosure relates to system(s) and method(s) for assisting a user to resolve a hardware issue and a software issue. The system identifies a target cluster, associated with a new ticket received from the user, from the set of clusters. Further, the system recommends one or more runbook scripts, from a runbook repository, associated with the new ticket. The system further identifies a new runbook script, corresponding to the new ticket, from a set of external repositories. Further, the system executes at least one of the one or more runbook scripts or the new runbook script, associated with the new ticket. The system further generates a document based on the execution of the one or more runbook scripts or the new runbook script, thereby assisting the user to resolve a target issue. | 2019-12-26 |
20190391893 | RECOGNITION OF OPERATIONAL ELEMENTS BY FINGERPRINT IN AN APPLICATION PERFORMANCE MANAGEMENT SYSTEM - An application performance management system is disclosed. Operational elements are dynamically discovered and extended when changes occur. Programmatic knowledge is captured. Particular instances of operational elements are recognized after changes have been made using a fingerprint/signature process. Metrics and metadata associated with a monitored operational element are sent in a compressed form to a backend for analysis. Metrics and metadata from multiple similar systems may be used to adjust/create expert rules to be used in the analysis of the state of an operational element. A 3-D user interface with both physical and logical representations may be used to display the results of the performance management system. | 2019-12-26 |
20190391894 | Method for Error Management in Bus Communication and Bus Communication System - A method for error management in bus communication is disclosed. A first bus subscriber generates a first bus message and writes a bus error code into a bus data area of a first bus message. The second bus subscriber identifies the error by evaluating the bus error code. The first bus subscriber stores an error identification of the error, generates a first bus message and writes the bus error code into the bus data area of the first bus message. A second bus message with a request for transmission of the error identification is generated by the second bus subscriber. A third bus message is generated by the first bus subscriber and the stored error identification is written into the bus data area of the third bus message. The second bus subscriber identifies the errors by evaluating the bus error code and the error identification. | 2019-12-26 |
20190391895 | DETERMINATION OF SUBJECT MATTER EXPERTS BASED ON ACTIVITIES PERFORMED BY USERS - A method for determining subject matter experts includes monitoring, by a computer, an activity performed by a user during a period of time, the activity including a sequence of operations, based on the sequence of operations, determining a topic of the activity performed by the user, recognizing, by the computer, a hesitation pattern of the user associated with the topic during the period of time, based on the recognized hesitation pattern, calculating a confidence indicator associated with the activity, the confidence indicator specifies a confidence of the user in performing the activity, based on the confidence indicator being lower than a confidence threshold, identifying one or more subject matter experts for the determined topic, and prompting the user to submit a support request to at least one of the identified subject matter experts. | 2019-12-26 |
20190391896 | METHODS FOR GENERATING A BRANCH HEALTH INDEX - An embodiment of the present invention is directed to generating a branch health index. The innovative method is directed to generating a Branch Health Index (BHI) designed to provide a comprehensive measurement for ATM performance. According to an embodiment of the present invention, BHI takes into account ATM availability, service response/repair times and customer impacts to score every ATM on a predetermined scale. The method applies a weighted scoring algorithm designed to take into account a multitude of attributes. The score may then be used to determine graphical status, such as a red/amber/green status of the ATM. | 2019-12-26 |
20190391897 | ADAPTIVE THRESHOLDS FOR CONTAINERS - A method includes identifying container metrics for containers running in a container environment, collecting container data for the containers, and generating an adaptive threshold for a given one of the identified container metrics. The adaptive threshold specifies one or more values for the given container metric for a designated time period. The adaptive threshold is generated utilizing a scoring algorithm that determines a range of accepted container behavior for the designated time period by analyzing the collected container data using one or more machine learning algorithms. The method also includes monitoring behavior of the containers during the designated time period utilizing the adaptive threshold, and generating an alert responsive to detecting that the monitored behavior of a given one of the containers is outside the range of accepted container behavior for the designated time period specified by the given adaptive threshold for the given container metric. | 2019-12-26 |
20190391898 | PROVIDING RECOMMENDATION(S) FOR A COMPUTING DEVICE EXECUTING A GAMING APPLICATION - In some examples, a server may receive, from a computing device, a device profile identifying a gaming application and metrics associated with execution of a gaming application. The server may compare the device profile with other device profiles associated with other computing devices, determine a similarity index of the device profile with the other device profiles, and select a subset of the other device profiles based at least in part on the similarity index. The server may determine configuration differences between the device profile of the computing device and individual device profiles of the subset of the other device profiles and send the configuration recommendations to the computing device. The recommendations may include at least one of (1) modifying settings of an operating system of the computing device, (2) modifying settings of the gaming application, (3) changing a hardware component or peripheral device associated with the computing device. | 2019-12-26 |
20190391899 | COMPUTER PROGRAM STORED IN COMPUTER READABLE MEDIUM, DATABASE SERVER AND AUDIT PERFORMING SERVER - A computer program stored in a computer readable storage medium according to an exemplary embodiment of the present disclosure includes: commands for making a computer perform operations, in which the operations include: receiving query performance details generated while performing a query from a database server; storing the received query performance details in a storage unit; generating an audit log based on the query performance details and audit setting information stored in the storage unit; and storing the audit log in an audit log storage unit. | 2019-12-26 |
20190391900 | COSTING OF RAW-DEVICE MAPPING (RDM) DISKS - Disclosed are various embodiments for costing Raw-Device Mapping (RDM) disks. A pseudo-datastore is created. The pseudo-datastore represents the RDM disk. The RDM disk includes a mapping file that exposes direct access to a disk identified by a logical unit number (LUN). A unit rate is assigned to the pseudo-datastore, the unit rate representing a cost per unit of storage provided by the RDM disk. Usage of the RDM disk is monitored. A cost is calculated for the usage of the RDM disk for a period of time based on the unit rate assigned to the pseudo-datastore. | 2019-12-26 |
20190391901 | ADAPTIVE BASELINING AND FILTERING FOR ANOMALY ANALYSIS - To adapt anomaly detection to changing canonical behavior and reduce the chances of feeding in feature value combinations that appear to be outliers but correspond to canonical behavior, multi-variate non-parametric density estimation is employed. An adaptive canonical behavior filter builds a sample dataset from observed time-series values of memory related metrics and then performs kernel density estimation on the sample dataset. With the resulting probability density function, the adaptive canonical behavior filter filters out subsequently observed time-series values of the memory related metrics that fall within a canonical behavior range that is specified/configured. | 2019-12-26 |
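The non-parametric density estimation described in 20190391901 can be sketched with a hand-rolled 1-D Gaussian kernel density estimate: values whose estimated density clears a configured floor fall in the canonical behavior range and are filtered out, leaving only low-density candidates for anomaly analysis. This is an illustration under assumed names and a fixed bandwidth, not the filed method:

```python
import math

def kde_density(x, sample, bandwidth=1.0):
    """Gaussian kernel density estimate at x, built from observed samples."""
    coef = 1.0 / (len(sample) * bandwidth * math.sqrt(2 * math.pi))
    return coef * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in sample)

def canonical_filter(values, sample, density_floor=0.05):
    """Drop values inside the canonical (high-density) behavior range,
    keeping only low-density candidates for downstream anomaly analysis."""
    return [v for v in values if kde_density(v, sample) < density_floor]

baseline = [100, 101, 99, 100, 102, 98, 100, 101]   # e.g. resident memory, MB
candidates = canonical_filter([100, 150, 99], baseline)
```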
20190391902 | VISUALIZING A USER-CENTRIC STACK FRAME GRAPH - A computer-implemented method is presented for visualizing a stack frame graph of software resources on a user interface of a computing device. The computer-implemented method includes determining a priority of each stack frame by employing a call stack analysis technique, selecting a most important frame from stack traces of a targeted application, and displaying, on the user interface, call stacks representing each frame. | 2019-12-26 |
20190391903 | METHOD AND SYSTEM FOR REPOSITIONING OF A PLURALITY OF STATIC ANALYSIS ALARMS - This disclosure relates generally to a system and a method for repositioning a plurality of static analysis alarms. The proposed repositioning techniques move each static analysis alarm from the set of static analysis alarms up or down the application code from the program point of its original reporting, to reduce the number of static analysis alarms reported, to report them closer to their causes, or both. The repositioning techniques also ensure that repositioning the static analysis alarms does not affect the errors uncovered by them. Further, the disclosure proposes maintaining traceability links between a repositioned static analysis alarm and its corresponding static analysis alarm(s), and displaying the repositioned static analysis alarms to the user instead of the original set, reducing redundancy in reporting and in manual inspection. The traceability links are displayed only if a user requests them. | 2019-12-26 |
20190391904 | AUTOMATED BUG FIXING - Disclosed is a system for removing bugs present in a software code. A determination module determines a usage pattern of a software code by using an Artificial Neural Network (ANN) technique. A comparison module compares the usage pattern with a set of pre-stored usage patterns of software applications similar to the software code. An execution module executes a set of test suites, on the software code, associated to at least one software application of the software applications, when a usage pattern of the at least one software application is matched with the usage pattern of the software code. An identification module identifies a code snippet comprising the bug. A recommendation module recommends a code patch, corresponding to the code snippet, from a ranked list of code patches determined by a Deep RNN technique. Further, a replacement module replaces the code patch with the code snippet thereby removing the bug. | 2019-12-26 |
20190391905 | DEBUGGING SYSTEMS - A method of generating program analysis data for analysing the operation of a computer program. The method comprises executing an instrumented process of the computer program to define a reference execution of the program, intercepting a call to a library function by the instrumented process, executing the library function in an uninstrumented process, and, for the uninstrumented process, capturing in a log only the data generated by or modified through the execution of the library function that the instrumented process requires to continue execution of the program. The captured log is arranged to enable deterministically reproducing the effect of the library function call on the instrumented process upon re-running the reference execution, so as to generate the program analysis data. | 2019-12-26 |
20190391906 | REDUCING LIKELIHOOD OF CYCLES IN USER INTERFACE TESTING - A method for testing a user interface includes determining states and state transitions associated with the user interface. A first plurality of states and a first plurality of state transitions of the user interface may be explored. A subset of a second plurality of states and a second plurality of state transitions of the user interface may also be explored. Paths that lead to cycles within the subset of the second plurality of states and the second plurality of state transitions may be penalized. | 2019-12-26 |
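To illustrate the cycle-penalizing exploration in 20190391906, a toy walk over a UI state graph can lower the score of any transition that revisits an already-seen state, so later choices steer toward unexplored states. The graph, scoring, and greedy selection here are invented for the sketch:

```python
def explore(transitions, start, steps):
    """Greedy walk over a UI state graph that penalizes cycle-forming edges:
    an edge that revisits a state loses score, steering later choices away."""
    score = {(s, d): 0 for s, dsts in transitions.items() for d in dsts}
    visited, path, state = {start}, [start], start
    for _ in range(steps):
        candidates = [(state, d) for d in transitions.get(state, [])]
        if not candidates:
            break
        edge = max(candidates, key=lambda e: score[e])   # best-scoring edge
        nxt = edge[1]
        if nxt in visited:
            score[edge] -= 1        # penalize a path that leads to a cycle
        visited.add(nxt)
        path.append(nxt)
        state = nxt
    return path

ui = {"A": ["B", "A"], "B": ["A", "C"], "C": ["A"]}
path = explore(ui, "A", steps=4)
```

After the A↔B cycle is penalized once, the walk escapes to the unexplored state C.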
20190391907 | SYSTEM AND METHOD FOR AUTOMATING FUNCTIONAL TESTING - Various methods, apparatuses/systems, and media for implementing an automation suite module (ASM) for automated functional testing are provided. A receiver receives a request for testing an application. A repository stores a plurality of test classes related to the request. A processor accesses the repository; creates a plurality of packages, each package including one or more test classes among the plurality of test classes; causes, in response to receiving the request for testing the application, a graphical user interface (GUI) to display the plurality of packages with their respective test classes; generates, by utilizing the GUI, a plurality of test blocks by receiving selection of one or more test classes from one or more packages among the plurality of packages; generates, by utilizing the GUI, a custom test suite by receiving selection of one or more test blocks from the plurality of test blocks; and executes the test classes based on the custom test suite to test the application. | 2019-12-26 |
20190391908 | METHODS AND DEVICES FOR INTELLIGENT SELECTION OF CHANNEL INTERFACES - A method includes performing, by a processor, identifying a first interface associated with a first functionality of a plurality of functionalities of an application, where a portion of the first functionality has changed within the application, identifying a second functionality of the plurality of functionalities of the application that has not changed, where the second functionality is associated with a plurality of second interfaces, selecting a testing interface out of the plurality of second interfaces associated with the second functionality, and executing a test case for testing the application using the first interface associated with the first functionality and using the testing interface associated with the second functionality, but refraining from using remaining ones of the second interfaces associated with the second functionality for executing the test case. | 2019-12-26 |
20190391909 | METHOD FOR TESTING AIR TRAFFIC MANAGEMENT ELECTRONIC SYSTEM, ASSOCIATED ELECTRONIC DEVICE AND PLATFORM - The invention relates to a method for testing an air traffic management electronic system, including the steps of: | 2019-12-26 |
20190391910 | SOFTWARE-TESTING DEVICE, SOFTWARE-TESTING SYSTEM, SOFTWARE-TESTING METHOD, AND PROGRAM - A software-testing device includes a conversion unit configured to convert a PLC program for operating a programmable logic controller into a general-purpose language program described in a general-purpose programming language, and a test execution unit configured to perform a test on the general-purpose language program. | 2019-12-26 |
20190391911 | CONTAINER TESTING USING A DIRECTORY AND TEST ARTIFACTS AND/OR TEST DEPENDENCIES - A system for testing container applications includes a memory, a processor in communication with the memory, a test manager, and a test controller. The test manager runs on a host operating system and creates a test container image including test artifact(s) and/or test dependency(ies). Then, the test manager distributes a set of tests, which are accessible to a test container created from the test container image. The test manager distributes the tests by populating a directory with the set of tests and mounting the directory to the test container. Additionally, the test manager executes the test container image. The test controller is associated with the test container and executes the set of tests accessible to the test container using the test artifact(s) and/or the test dependency(ies). The test controller also monitors the set of tests executed by the test container. Feedback corresponding to the set of tests is provided. | 2019-12-26 |
20190391912 | CLUSTERED STORAGE SYSTEM WITH STATELESS INTER-MODULE COMMUNICATION FOR PROCESSING OF COUNT-KEY-DATA TRACKS - A storage system in one embodiment comprises multiple storage nodes each comprising at least one storage device. Each of the storage nodes further comprises a set of processing modules configured to communicate over one or more networks with corresponding sets of processing modules on other ones of the storage nodes. The sets of processing modules of the storage nodes each comprise at least one control module. The storage system is configured to assign portions of a logical address space of the storage system to respective ones of the control modules, to receive a plurality of tracks of data records in a count-key-data format, and to store the tracks in respective ones of the portions of the logical address space assigned to respective ones of the control modules. Each of the tracks is stored in its entirety in the portion of the logical address space assigned to a corresponding one of the control modules. | 2019-12-26 |
20190391913 | MEMORY MANAGEMENT METHOD, MEMORY STORAGE DEVICE AND MEMORY CONTROL CIRCUIT UNIT - A memory management method for a memory storage device including a rewritable non-volatile memory module is provided according to an exemplary embodiment of the disclosure. The method includes: receiving a first command and performing a first operation corresponding to the first command; transmitting a completion message to a host system corresponding to a completion of the first operation; detecting command processing information; determining a transmission mode of an interruption message according to the command processing information; and transmitting the interruption message to the host system according to the transmission mode. | 2019-12-26 |
20190391914 | MEMORY MANAGEMENT METHOD AND STORAGE CONTROLLER - A memory management method is provided. The method includes selecting a target physical programming unit; using a first read voltage corresponding to a first type physical page of the target physical programming unit to read a plurality of target memory cells of the target physical programming unit, so as to calculate a first bit value ratio; if the first bit value ratio is not smaller than a first preset threshold, using a second read voltage corresponding to the first type physical page of the target physical programming unit to read the plurality of target memory cells of the target physical programming unit, so as to calculate a second bit value ratio; and determining whether the first type physical page of the target physical programming unit is empty by comparing the first bit value ratio and the second bit value ratio. | 2019-12-26 |
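The two-voltage empty-page check in 20190391914 can be sketched as a simulation: an erased page reads (almost) all 1s at both read voltages, so its bit value ratio barely moves between the two reads, while a programmed page either fails the preset threshold outright or shifts between voltages. The threshold-voltage model and constants below are hypothetical:

```python
def bit_ratio(cells, read_voltage):
    """Fraction of cells that read as 1 under the given read voltage
    (a cell reads 1 when its threshold voltage is below the read voltage)."""
    return sum(1 for vt in cells if vt < read_voltage) / len(cells)

def page_is_empty(cells, v1, v2, preset=0.9, delta=0.02):
    """Two-step scheme: read at v1; if the ratio is not below the preset
    threshold, read again at v2 and compare the two ratios. A stable ratio
    across voltages indicates an empty (erased) page."""
    r1 = bit_ratio(cells, v1)
    if r1 < preset:
        return False                  # too many programmed cells: not empty
    r2 = bit_ratio(cells, v2)
    return abs(r1 - r2) <= delta      # stable ratio across voltages: empty

erased = [0.1] * 100                  # hypothetical threshold voltages (V)
programmed = [0.1] * 50 + [2.5] * 50
```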
20190391915 | MEMORY SYSTEM AND OPERATING METHOD THEREOF - An operating method of a memory system includes receiving a read request for sequential target user data; determining whether compressed target map data corresponding to the read request are retrieved from a map cache region within a memory; loading the compressed target map data and compressed candidate map data from a memory device of the memory system, when the compressed target map data are not retrieved from the map cache region, the compressed candidate map data being selected from among compressed map data in a compressed map table, according to a rule; and storing the loaded compressed target map data and compressed candidate map data in the map cache region. | 2019-12-26 |
20190391916 | METHOD FOR MANAGING FLASH MEMORY MODULE AND ASSOCIATED FLASH MEMORY CONTROLLER AND ELECTRONIC DEVICE - The present invention provides a method for managing a flash memory module, wherein the flash memory module includes a plurality of blocks, and the method includes the steps of: building a garbage collection look-up table, wherein the garbage collection look-up table records a plurality of sets of importance information respectively corresponding to the plurality of blocks, and each set of importance information is used to represent a priority of performing a garbage collection operation on a corresponding block; and when performing the garbage collection operation is required, referring to the garbage collection look-up table to select a specific block that has a top priority of the garbage collection operation, and performing the garbage collection operation starting from the specific block. | 2019-12-26 |
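The garbage-collection look-up table of 20190391916 can be sketched as a per-block priority map. The importance metric here (count of invalid pages, reclaim-first) is an assumption for illustration; the application only says importance represents GC priority:

```python
def build_gc_table(blocks):
    """Garbage-collection look-up table: per-block importance used as the
    priority for garbage collection. Stand-in importance: the number of
    invalid pages in the block (more invalid pages -> reclaim first)."""
    return {block_id: info["invalid_pages"] for block_id, info in blocks.items()}

def select_gc_block(gc_table):
    """Pick the block with the top garbage-collection priority."""
    return max(gc_table, key=gc_table.get)

blocks = {
    0: {"invalid_pages": 3},
    1: {"invalid_pages": 17},
    2: {"invalid_pages": 9},
}
chosen = select_gc_block(build_gc_table(blocks))
```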
20190391917 | SHALLOW CACHE FOR CONTENT REPLICATION - Embodiments relate to efficiently replicating data from a source storage space to a target storage space. The storage spaces share a common namespace of paths where content units are stored. A shallow cache is maintained for the target storage space. Each entry in the cache includes a hash of a content unit in the target storage space and associated hierarchy paths in the target storage space where the corresponding content unit is stored. When a set of content units in the source storage space is to be replicated at the target storage space, any content unit with a hash in the cache is replicated from one of the associated paths in the cache, thus avoiding having to replicate content from the source storage space. | 2019-12-26 |
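The shallow cache of 20190391917 can be sketched with a hash-to-paths map: when a content unit to be replicated already exists somewhere on the target (its hash is in the cache), it is copied from a local path instead of being transferred from the source. Data structures and names are invented for the sketch:

```python
import hashlib

def content_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def replicate(source, target, cache):
    """Replicate content units from source paths to target paths (shared
    namespace). cache maps hash -> paths already holding that content on
    the target, so a cached unit is copied locally, avoiding a transfer."""
    copied_locally = []
    for path, data in source.items():
        h = content_hash(data)
        if cache.get(h):
            local_path = cache[h][0]
            target[path] = target[local_path]   # local copy, no source pull
            copied_locally.append(path)
        else:
            target[path] = data                 # must come from the source
        cache.setdefault(h, []).append(path)
    return copied_locally

target = {"/a/readme": b"hello"}
cache = {content_hash(b"hello"): ["/a/readme"]}
local = replicate({"/b/readme": b"hello", "/b/new": b"world"}, target, cache)
```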
20190391918 | STREAMING ENGINE WITH FLEXIBLE STREAMING ENGINE TEMPLATE SUPPORTING DIFFERING NUMBER OF NESTED LOOPS WITH CORRESPONDING LOOP COUNTS AND LOOP OFFSETS - A streaming engine employed in a digital data processor specifies a fixed read only data stream defined by plural nested loops. An address generator produces addresses of data elements for the nested loops. A stream head register stores the data elements next to be supplied to functional units for use as operands. A stream template specifies the loop count and loop dimension for each nested loop. A format definition field in the stream template specifies the number of loops and the stream template bits devoted to the loop counts and loop dimensions. This permits the same bits of the stream template to be interpreted differently, enabling a trade-off between the number of loops supported and the size of the loop counts and loop dimensions. | 2019-12-26 |
20190391919 | Dynamically determining tracks to prestage from storage to cache using a machine learning module - Provided are a computer program product, system, and method for determining tracks to prestage into cache from a storage. Information is provided related to determining tracks to prestage from the storage to the cache in a stage group of sequential tracks including a trigger track comprising a track number in the stage group at which to start prestaging tracks and Input/Output (I/O) activity information to a machine learning module. A new trigger track in the stage group at which to start prestaging tracks is received from the machine learning module having processed the provided information. The trigger track is set to the new trigger track. Tracks are prestaged in response to processing an access request to the trigger track in the stage group. | 2019-12-26 |
20190391920 | DYNAMICALLY DETERMINING TRACKS TO PRESTAGE FROM STORAGE TO CACHE BY TRAINING A MACHINE LEARNING MODULE - Provided are a computer program product, system, and method for determining tracks to prestage into cache from a storage. Information is provided related to determining tracks to prestage from the storage to the cache in a stage group of sequential tracks including a trigger track comprising a track number in the stage group at which to start prestaging tracks and Input/Output (I/O) activity information to a machine learning module. A new trigger track in the stage group at which to start prestaging tracks is received from the machine learning module having processed the provided information. The trigger track is set to the new trigger track. Tracks are prestaged in response to processing an access request to the trigger track in the stage group. | 2019-12-26 |
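The trigger-track mechanism shared by 20190391919 and 20190391920 can be sketched as: when an access request hits the trigger track within a stage group of sequential tracks, prestaging of the remaining tracks of the group begins. The machine-learning module that picks the trigger track is out of scope here; the sketch just shows the prestage-on-trigger step with invented names:

```python
def maybe_prestage(stage_group, trigger_track, accessed_track, prestage):
    """When an access hits the trigger track (an index into a stage group
    of sequential tracks), prestage the remaining tracks of the group."""
    if accessed_track == stage_group[trigger_track]:
        for t in stage_group[trigger_track + 1:]:
            prestage(t)
        return True
    return False

staged = []
group = list(range(100, 128))            # 28 sequential track numbers
hit = maybe_prestage(group, trigger_track=20, accessed_track=120,
                     prestage=staged.append)
```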
20190391921 | TAGS AND DATA FOR CACHES - A device includes a memory controller and a cache memory coupled to the memory controller. The cache memory has a first set of cache lines associated with a first memory block and comprising a first plurality of cache storage locations, as well as a second set of cache lines associated with a second memory block and comprising a second plurality of cache storage locations. A first location of the second plurality of cache storage locations comprises cache tag data for both the first set of cache lines and the second set of cache lines. | 2019-12-26 |
20190391922 | TEMPORARILY SUPPRESSING PROCESSING OF A RESTRAINED STORAGE OPERAND REQUEST - Processing of a storage operand request identified as restrained is selectively, temporarily suppressed. The processing includes determining whether a storage operand request to a common storage location shared by multiple processing units of a computing environment is restrained, and, based on determining that the storage operand request is restrained, temporarily suppressing requesting access to the common storage location pursuant to the storage operand request. The processing unit performing the processing may proceed with processing of the restrained storage operand request, without performing the suppressing, where the processing can be accomplished using cache private to the processing unit. Otherwise the suppressing may continue until an instruction, or operation of an instruction, associated with the storage operand request is next to complete. | 2019-12-26 |
20190391923 | ALLOCATION OF CACHE STORAGE AMONG APPLICATIONS THAT INDICATE MINIMUM RETENTION TIME FOR TRACKS IN LEAST RECENTLY USED DEMOTING SCHEMES - A computational device receives an indication of a minimum retention time in a cache for a plurality of tracks of an application. In response to determining that tracks of the application that are stored in the cache exceed a predetermined threshold in the cache, the computational device demotes one or more tracks of the application from the cache even though a minimum retention time in cache has been indicated for the one or more tracks of the application, while performing least recently used (LRU) based replacement of tracks in the cache. | 2019-12-26 |
20190391924 | INFORMATION PROCESSING APPARATUS, ARITHMETIC PROCESSING DEVICE, AND METHOD FOR CONTROLLING INFORMATION PROCESSING APPARATUS - A tag match determination unit determines, in response to an acquisition request for predetermined data, whether predetermined data is present in a primary cache. When the predetermined data is not present in the primary cache, the move-in buffer outputs the acquisition request for the predetermined data to a secondary cache management unit or the storage device and holds determination purpose information based on state information on a predetermined area that stores therein the predetermined data. A storage processing unit determines, when an acquired response from the secondary cache management unit or the storage device is a predetermined type, based on the determination purpose information, whether or not to acquire the state information stored in the primary cache; invalidates the predetermined area when it is determined not to acquire the state information; and stores, in the predetermined area, the predetermined data included in the response. | 2019-12-26 |
20190391925 | CONTENT ADDRESSABLE STORAGE SYSTEM CONFIGURED FOR EFFICIENT STORAGE OF COUNT-KEY-DATA TRACKS - A storage system in one embodiment comprises a plurality of storage devices and a storage controller. The storage system is configured by the storage controller to receive a plurality of data records in a count-key-data format, to separate count and key portions of the data records from remaining portions of the data records, to store the count and key portions of the data records in at least one designated page of a set of pages of a logical storage volume of the storage system, and to store the remaining portions of the data records in one or more other pages of the set of pages of the logical storage volume of the storage system. The designated page of the set of pages of the logical storage volume may comprise a first page of the set of pages, and the one or more other pages of the set of pages may comprise respective ones of a sequence of consecutive pages following the first page. | 2019-12-26 |
20190391926 | APPARATUS AND METHOD AND COMPUTER PROGRAM PRODUCT FOR GENERATING A STORAGE MAPPING TABLE - The invention introduces an apparatus for generating a storage mapping table at least including a direct memory access controller for reading first physical location (PL) information corresponding to a logical location of the storage mapping table; an expanding circuit for obtaining the first PL information and expanding the first PL information into second PL information; and a controller for transmitting the second PL information to a host. | 2019-12-26 |
20190391927 | Secure userspace networking for guests - Secure userspace networking for guests is disclosed. For example, a memory is associated with a guest, which is associated with a virtual device. A hypervisor associated with the guest executes on a processor to map a queue associated with the virtual device to an address space identifier. A request associated with the queue is detected. A page table associated with the virtual device is located based on the address space identifier. The request is translated with the page table, yielding a memory address of a message. | 2019-12-26 |
20190391928 | METHOD AND APPARATUS FOR PERFORMING OPERATIONS TO NAMESPACES OF A FLASH MEMORY DEVICE - The invention introduces a method for performing operations to namespaces of a flash memory device, at least including the steps: receiving a namespace setting-update command from a host, requesting to update a namespace size of a namespace; determining whether the updated namespace size of the namespace can be supported; and when the updated namespace size of the namespace can be supported, updating a logical-physical mapping table of the namespace to enable the namespace to store user data of the updated namespace size. | 2019-12-26 |
20190391929 | HARDWARE-BASED VIRTUAL-TO-PHYSICAL ADDRESS TRANSLATION FOR PROGRAMMABLE LOGIC MASTERS IN A SYSTEM ON CHIP - An example programmable integrated circuit (IC) includes a processing system having a processor, a master circuit, and a system memory management unit (SMMU). The SMMU includes a first translation buffer unit (TBU) coupled to the master circuit, an address translation (AT) circuit, an AT interface coupled to the AT circuit, and a second TBU coupled to the AT circuit, and programmable logic coupled to the AT circuit in the SMMU through the AT interface. | 2019-12-26 |
20190391930 | INTEGRATION OF APPLICATION INDICATED MAXIMUM TIME TO CACHE TO LEAST RECENTLY USED TRACK DEMOTING SCHEMES IN A CACHE MANAGEMENT SYSTEM OF A STORAGE CONTROLLER - A computational device receives an indication that specifies a maximum retention time in cache for a first plurality of tracks, wherein no maximum retention time is specified for a second plurality of tracks. A plurality of insertions points are generated in a least recently used (LRU) list, wherein different insertion points in the LRU list correspond to different amounts of time that a track of the first plurality of tracks is expected to be retained in the cache, wherein the LRU list is configured to demote both tracks of the first plurality of tracks and the second plurality of tracks from the cache. | 2019-12-26 |
20190391931 | INTEGRATION OF APPLICATION INDICATED MINIMUM TIME TO CACHE TO LEAST RECENTLY USED TRACK DEMOTING SCHEMES IN A CACHE MANAGEMENT SYSTEM OF A STORAGE CONTROLLER - A minimum retention time in cache is indicated for a first plurality of tracks, where no minimum retention time is indicated for a second plurality of tracks. A cache management application demotes a track of the first plurality of tracks from the cache, in response to determining that the track is a least recently used (LRU) track in a LRU list of tracks in the cache and the track has been in the cache for a time that exceeds the minimum retention time. | 2019-12-26 |
20190391932 | INTEGRATION OF APPLICATION INDICATED MINIMUM TIME TO CACHE AND MAXIMUM TIME TO CACHE TO LEAST RECENTLY USED TRACK DEMOTING SCHEMES IN A CACHE MANAGEMENT SYSTEM OF A STORAGE CONTROLLER - A computational device receives indications of a minimum retention time and a maximum retention time in cache for a first plurality of tracks, wherein no indications of a minimum retention time or a maximum retention time in the cache are received for a second plurality of tracks. A cache management application demotes a track of the first plurality of tracks from the cache, in response to determining that the track is a least recently used (LRU) track in a LRU list of tracks in the cache and the track has been in the cache for a time that exceeds the minimum retention time. The cache management application demotes the track of the first plurality of tracks, in response to determining that the track has been in the cache for a time that exceeds the maximum retention time. | 2019-12-26 |
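The family of applications 20190391923 and 20190391930 through 20190391933 all integrate per-track minimum and/or maximum cache retention times into LRU demotion. A toy model of the combined scheme (20190391932): scanning from the LRU end, a track past its maximum retention is demoted unconditionally, while a track still under its minimum retention is kept. This is a sketch only; class and field names are invented, and a real storage controller tracks far more state:

```python
import time

class RetentionAwareLRU:
    """LRU track list with optional per-track minimum and maximum cache
    retention times (seconds), per the demotion scheme described above."""

    def __init__(self):
        self.tracks = []            # index 0 = LRU end

    def insert(self, track, min_s=None, max_s=None, now=None):
        now = time.monotonic() if now is None else now
        self.tracks.append({"id": track, "at": now, "min": min_s, "max": max_s})

    def demote(self, now=None):
        """Demote and return eligible track ids, scanning from the LRU end."""
        now = time.monotonic() if now is None else now
        demoted, kept = [], []
        for t in self.tracks:
            age = now - t["at"]
            over_max = t["max"] is not None and age > t["max"]
            under_min = t["min"] is not None and age < t["min"]
            # past maximum retention: demote regardless of minimum;
            # otherwise keep any track still under its minimum retention
            if over_max or not under_min:
                demoted.append(t["id"])
            else:
                kept.append(t)
        self.tracks = kept
        return demoted

cache = RetentionAwareLRU()
cache.insert("t1", min_s=60, now=0.0)            # must stay 60 s
cache.insert("t2", now=0.0)                      # no retention constraint
cache.insert("t3", min_s=60, max_s=10, now=0.0)  # maximum wins over minimum
```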
20190391933 | ALLOCATION OF CACHE STORAGE AMONG APPLICATIONS BASED ON APPLICATION PRIORITY AND MINIMUM RETENTION TIME FOR TRACKS IN LEAST RECENTLY USED DEMOTING SCHEMES - A computational device receives an indication of minimum retention times in a cache for a plurality of tracks for applications. In response to determining that a first type of application has not specified a maximum percentage of cache for allocation to the first type of application, the maximum percentage of cache for allocation to the first type of application is set to a default value. In response to determining that a second type of application has not specified a maximum percentage of cache for allocation to the second type of application, an entirety of the cache or a percentage of the cache that is greater than the default value is allocated for the second type of application. A least recently used based replacement of tracks is performed in the cache while attempting to satisfy the minimum retention times and the maximum percentage of cache that are allocated. | 2019-12-26 |
20190391934 | MAPPING ATTRIBUTES OF KEYED ENTITIES - One or more mappings each define a correspondence between one or more input attributes of an input entity and one or more output attributes of an output entity, where the input entity includes one or more key attributes identified as part of a unique key, and the output entity includes one or more key attributes identified as part of a unique key. Generating instances of the output entity includes: determining one or more mapped input attributes of the input entity that correspond to each of the key attributes of the output entity, based on the mappings; and comparing the mapped input attributes with the key attributes of the input entity to determine whether the mapped input attributes include: (1) all of the key attributes of the input entity, or (2) fewer than all of the key attributes of the input entity. | 2019-12-26 |
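The key-attribute check in 20190391934 reduces to a set comparison: collect the input attributes mapped to the output entity's key attributes, then decide whether they cover all of the input entity's key attributes (case 1) or fewer than all of them (case 2). The mapping representation and attribute names below are invented for the sketch:

```python
def classify_mapping(out_to_in, input_keys, output_keys):
    """Determine the input attributes mapped to the output entity's key
    attributes, then check whether they include (1) all of the input
    entity's key attributes, or (2) fewer than all of them."""
    mapped = {out_to_in[k] for k in output_keys if k in out_to_in}
    covers_all = set(input_keys) <= mapped
    return mapped, covers_all

# hypothetical mapping: output attribute -> corresponding input attribute
out_to_in = {"cust_key": "customer_id", "region": "region_code"}
mapped, covers_all = classify_mapping(out_to_in,
                                      input_keys={"customer_id"},
                                      output_keys={"cust_key"})
```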
20190391935 | SPI-Based Data Transmission Method and Device - A serial peripheral interface (SPI)-based data transmission method, including sending, by a first device, a first query request to a second device through a universal asynchronous receiver/transmitter (UART) interface, where the first query request queries the second device for an SPI mode supported by the second device, sending, by the first device, in response to the first device determining, according to a first query response returned by the second device, that the second device supports an SPI master mode, an SPI connection establishment request to the second device, where the SPI connection establishment request causes the second device to initiate establishment of an SPI connection to the first device, and performing, by the first device, through the SPI, and after the first device establishes the SPI connection to the second device, at least one of receiving data sent by the second device, or sending data to the second device. | 2019-12-26 |
20190391936 | PREDICTIVE PACKET HEADER COMPRESSION - Packets may be compressed based on predictive analyses. For example, in one embodiment, when it is determined that an explicit value for a particular header field can be inferred by the receiver agent, a packet header is constructed that either omits the header field or includes a differential value for the header field in lieu of the explicit value. The packet header may be decompressed upon receipt by deriving the explicit value for the particular header field. | 2019-12-26 |
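The omit-or-differential scheme of 20190391936 can be sketched over dict-shaped headers: a field whose value matches the receiver's prediction is omitted entirely, a predictable integer field is sent as a delta, and everything else is sent explicitly. Field names, the prediction source, and the wire encoding are all hypothetical:

```python
def compress_header(header, prediction):
    """Build a compressed header: omit fields the receiver can infer,
    send predictable integer fields as differential values."""
    out = {}
    for field, value in header.items():
        predicted = prediction.get(field)
        if predicted == value:
            continue                            # receiver infers it: omit
        if predicted is not None and isinstance(value, int):
            out[field] = ("delta", value - predicted)
        else:
            out[field] = ("explicit", value)
    return out

def decompress_header(compressed, prediction):
    """Recover explicit values from omitted and differential fields."""
    header = dict(prediction)                   # omitted fields = predictions
    for field, (kind, v) in compressed.items():
        header[field] = prediction[field] + v if kind == "delta" else v
    return header

prediction = {"seq": 1000, "stream_id": 7, "flags": 0}
header = {"seq": 1003, "stream_id": 7, "flags": 0}
wire = compress_header(header, prediction)
```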
20190391937 | APPARATUS AND METHOD FOR MEMORY MANAGEMENT IN A GRAPHICS PROCESSING ENVIRONMENT - An apparatus and method are described for implementing memory management in a graphics processing system. For example, one embodiment of an apparatus comprises: a first plurality of graphics processing resources to execute graphics commands and process graphics data; a first memory management unit (MMU) to communicatively couple the first plurality of graphics processing resources to a system-level MMU to access a system memory; a second plurality of graphics processing resources to execute graphics commands and process graphics data; a second MMU to communicatively couple the second plurality of graphics processing resources to the first MMU; wherein the first MMU is configured as a master MMU having a direct connection to the system-level MMU and the second MMU comprises a slave MMU configured to send memory transactions to the first MMU, the first MMU either servicing a memory transaction or sending the memory transaction to the system-level MMU on behalf of the second MMU. | 2019-12-26 |
20190391938 | SEMICONDUCTOR DEVICE AND SEMICONDUCTOR SYSTEM - Provided are a semiconductor device and a semiconductor system. A semiconductor device includes a non-volatile memory; a device interface circuit which receives an input/output (I/O) request from a host; and a device controller which executes a data access according to the I/O request on the non-volatile memory, and transmits an interrupt to the host a predetermined time before completion of the data access. | 2019-12-26 |
20190391939 | HIGH PERFORMANCE INTERCONNECT - A physical layer (PHY) is coupled to a serial, differential link that is to include a number of lanes. The PHY includes a transmitter and a receiver to be coupled to each lane of the number of lanes. The transmitter coupled to each lane is configured to embed a clock with data to be transmitted over the lane, and the PHY periodically issues a blocking link state (BLS) request to cause an agent to enter a BLS to hold off link layer flit transmission for a duration. The PHY utilizes the serial, differential link during the duration for a PHY-associated task selected from a group including an in-band reset, an entry into a low power state, and an entry into a partial width state. | 2019-12-26 |
20190391940 | TECHNOLOGIES FOR INTERRUPT DISASSOCIATED QUEUING FOR MULTI-QUEUE I/O DEVICES - Technologies for interrupt disassociated queuing for multi-queue input/output devices include determining whether a network packet has arrived in an interrupt-disassociated queue and delivering the network packet to an application managed by the compute node. The application is associated with an application thread, and the interrupt-disassociated queue may be in a polling mode. Subsequently, in response to a transition event, the interrupt-disassociated queue may be transitioned to an interrupt mode. | 2019-12-26 |
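The mode switching described in 20190391940 can be sketched as a toy model: a queue that the application thread polls directly, until a transition event moves it to interrupt mode. All class and method names here are illustrative, not from the application.

```python
class InterruptDisassociatedQueue:
    """Toy model of an interrupt-disassociated queue: packets arrive and
    are delivered to the application by polling, until a transition event
    switches the queue into interrupt mode."""

    def __init__(self):
        self.mode = "polling"
        self.packets = []

    def enqueue(self, pkt):
        # A network packet arrives in the queue.
        self.packets.append(pkt)

    def poll(self):
        # In polling mode, the application thread drains arrived packets.
        delivered, self.packets = self.packets, []
        return delivered

    def on_transition_event(self):
        # E.g. the application thread blocks; fall back to interrupts.
        self.mode = "interrupt"

q = InterruptDisassociatedQueue()
q.enqueue("pkt0")
delivered = q.poll()
q.on_transition_event()
```

The design point is that in polling mode no interrupt is raised per packet; the application thread pulls work on its own schedule.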
20190391941 | Apparatus and Mechanism to Bypass PCIe Address Translation By Using Alternative Routing - An address space field is used in conjunction with a normal address field to allow indication of an address space for the particular address value. In one instance, one address space value is used to indicate the bypassing of the address translation used between address spaces. A different address space value is designated for conventional operation, where address translations are performed. Other address space values are used to designate different transformations of the address values or the data. This technique provides a simplified format for handling address values and the like between different devices having different address spaces, simplifying overall computer system design and operation. | 2019-12-26 |
20190391942 | SEMICONDUCTOR DEVICE AND BUS GENERATOR - Even under various conditions, requests no longer stall on a bus, and memory efficiency can be improved. Each of a master A, a master B, and a master X issues an access request to a memory. A memory controller receives an access request through a bus. A central bus control unit controls output of an access request issued by a master to the memory controller by granting the master an access right to the memory. The central bus control unit manages the number of grantable rights, which indicates how many access rights can be granted, based on the access size of the access request issued by the master to which the access right is granted, and grants access rights within the range of the number of grantable rights. | 2019-12-26 |
20190391943 | SEMICONDUCTOR DEVICE AND BUS GENERATOR - A master issues an access request to the memory. The memory controller receives the access request via a bus. An access control unit controls an output of the access request issued by the master to the memory controller by granting an access right. The access control unit manages a number of grantable rights, indicating how many access rights can be granted, based on a weight of 0 or more and less than 1 according to the probability that the granted access right is used, and grants the access right within the range of the number of grantable rights. | 2019-12-26 |
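The weighted accounting in 20190391943 can be illustrated with a minimal sketch: each grant charges only its probability-of-use weight against the pool, so more rights can be outstanding than the raw pool size would allow. The class and its numbers are hypothetical, not taken from the application.

```python
class AccessController:
    """Minimal sketch of weighted grantable-rights accounting: a grant
    whose expected use has probability `weight` (0 <= weight < 1) charges
    only that weighted cost against the remaining pool."""

    def __init__(self, total_rights: float):
        self.remaining = total_rights  # rights still available to grant

    def try_grant(self, weight: float) -> bool:
        if not (0.0 <= weight < 1.0):
            raise ValueError("weight must be in [0, 1)")
        if self.remaining >= weight:
            self.remaining -= weight
            return True
        return False

    def release(self, weight: float) -> None:
        # Return the weighted cost when the granted right is retired.
        self.remaining += weight

ctrl = AccessController(total_rights=2.0)
granted = [ctrl.try_grant(0.9), ctrl.try_grant(0.9), ctrl.try_grant(0.9)]
```

With a pool of 2.0 and a weight of 0.9 per grant, the third request is refused because only 0.2 of the pool remains.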
20190391944 | ADAPTABLE CONNECTOR WITH EXTERNAL I/O PORT - An adaptable connector, a non-standard PCIe module, and a computer readable medium are disclosed. The adaptable connector for a PCIe interface allows for multiple standard PCIe modules and non-standard PCIe modules at different times. An external I/O port has a set of non-PCIe I/O signal lanes coupled to the adaptable connector in lieu of a set of root port host PCIe signal lanes when a non-standard PCIe module is mated to the adaptable connector. | 2019-12-26 |
20190391945 | HIGH PERFORMANCE INTERCONNECT PHYSICAL LAYER - A supersequence is generated that includes a sequence including an electrical ordered set (EOS) and a plurality of training sequences. The plurality of training sequences include a predefined number of training sequences corresponding to a respective one of a plurality of training states with which the supersequence is to be associated, each training sequence in the plurality of training sequences is to include a respective training sequence header and a training sequence payload, the training sequence payloads of the plurality of training sequences are to be sent scrambled and the training sequence headers of the plurality of training sequences are to be sent unscrambled. | 2019-12-26 |
20190391946 | DAS STORAGE CABLE IDENTIFICATION - A network system for identifying a cable connection is provided. The network system includes a management server, a server device, and a storage device. The management server includes system-management software. The server device is connected to the management server. The server device includes a BMC configured to communicate with the system-management software of the management server. The storage device includes at least one cable port configured to receive a storage cable that connects the storage device to the server device. The cable port includes a non-volatile memory, an indicator light, and a I | 2019-12-26 |
20190391947 | DEVICES AND METHODS FOR DECOUPLING OF PHYSICAL LAYER - A device with a physical layer (PHY) core component, a PHY I/O component, a decoupling I/O component, and a decoupling core component, where the PHY core component is adjacent to the PHY I/O component, the PHY I/O component is adjacent to the decoupling I/O component, the decoupling I/O component is adjacent to the decoupling core component and is positioned a first distance away from the PHY core component, and the decoupling core component is adjacent to an edge of the device and is positioned a second distance away from the PHY core component. | 2019-12-26 |
20190391948 | ELECTRONIC SYSTEM CAPABLE OF DETECTING NUMBER OF HOT PLUG INSERTION AND EXTRACTION CYCLES - An electronic system capable of detecting a number of hot-plug insertion/extraction cycles including a host device, at least one peripheral device, and at least one storage device is provided. The host device includes a controller and at least one connection socket. The controller has at least one detection pin. Each connection socket is coupled to a corresponding detection pin. The peripheral device includes at least one connector. The connector is hot-pluggably and electrically connected to the connection socket of the host device. The storage device stores the number of hot-plug insertion/extraction cycles of the connector in the peripheral device. When the connector of the peripheral device is connected to the connection socket of the host device, the controller reads the number of hot-plug insertion/extraction cycles from the storage device and increases the number of hot-plug insertion/extraction cycles of the connector in the peripheral device. | 2019-12-26 |
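The counting behavior in 20190391948 reduces to a small read-increment-write cycle against the peripheral's storage; a toy model (names assumed, not from the application) makes the flow concrete.

```python
class HotPlugCounter:
    """Toy model of the hot-plug cycle counter: the peripheral's storage
    device persists the count, and the host controller reads it and
    increments it on every connection event."""

    def __init__(self, stored_count: int = 0):
        self.stored_count = stored_count  # persisted on the peripheral

    def on_connect(self) -> int:
        count = self.stored_count       # controller reads the stored number
        self.stored_count = count + 1   # then increments and writes it back
        return self.stored_count

dev = HotPlugCounter(stored_count=41)
cycles = dev.on_connect()
```

Keeping the count on the peripheral rather than the host is what lets the wear of the connector itself be tracked across different host systems.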
20190391949 | METHOD FOR INTERFACE INITIALIZATION USING BUS TURNAROUND - An example method for initializing an interface includes driving a low voltage signal on data lanes and clock lanes. The method further includes performing a reset sequence and an initialization of a link configuration register. The method also includes driving a high voltage signal to the clock lanes and the data lanes. The method further includes driving a bus turn-around (BTA) sequence on the data lanes. The method also includes detecting that the BTA is acknowledged by a host controller. | 2019-12-26 |
20190391950 | USB TYPE-C SIDEBAND SIGNAL INTERFACE CIRCUIT - A USB-C controller, disposed on an integrated circuit (IC), comprises a first pair of terminals to communicate using a first communication protocol that is other than USB, a second pair of terminals to communicate using a second communication protocol that is other than USB, and a third pair of terminals, each of which is to be coupled to a corresponding SBU1 terminal or SBU2 terminal of a Type-C receptacle. The USB-C controller further includes: a multiplexer to selectively couple the first pair of terminals to the third pair of terminals and the second pair of terminals to the third pair of terminals; and logic to control the multiplexer according to a mode enabled within a configuration channel (CC) signal. | 2019-12-26 |
20190391951 | Storage Control Interposers In Data Storage Systems - Systems, methods, apparatuses, and architectures for storage interposers are provided herein. In one example, an apparatus includes a host connector configured to couple to one or more host systems over associated host Peripheral Component Interconnect Express (PCIe) interfaces, and PCIe switch circuitry configured to receive storage operations over the host connector that are issued by the one or more host systems. The PCIe switch circuitry is configured to monitor when ones of the storage operations correspond to an address range and responsively indicate the ones of the storage operations to a control module. The control module is configured to selectively direct delivery of the ones of the storage operations to corresponding storage areas among one or more storage devices based at least on addressing information monitored for the ones of the storage operations in the PCIe switch circuitry. | 2019-12-26 |
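The routing decision in 20190391951 comes down to an address-range check in the PCIe switch circuitry: operations in the monitored window go to the control module, everything else goes straight to storage. The window and names below are assumptions for illustration only.

```python
# Assumed monitored address window for the control module (illustrative).
MONITORED = range(0x1000, 0x2000)

def route(op_addr: int) -> str:
    """Indicate a storage operation to the control module when its address
    falls in the monitored range; otherwise pass it through to the
    storage device directly."""
    return "control_module" if op_addr in MONITORED else "storage_device"

hit = route(0x1800)    # inside the monitored window
miss = route(0x3000)   # outside the window
```

The design choice is that only the narrow monitored window pays the cost of control-module involvement; the common path stays in the switch fabric.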
20190391952 | Method and Apparatus for Device Identification Using a Serial Port - A method of detecting the presence of a specific medical device connected to a computer by an RS232 serial line. In one embodiment, the method includes the steps of providing an RS232 connector having a specified computer side output pin connected to device-specific computer side input pins; raising the voltage high on the specified computer side output pin; determining which computer side input pins are voltage high; and signaling which external medical device is in communication with the computer in response to which computer side input pins are voltage high. In another embodiment, the computer side output pin is selected from one of the data terminal ready (DTR) and ready to send (RTS) pins. In yet another embodiment, the computer side input pins are selected from one or more of data set ready (DSR), data carrier detect (CD) and clear to send (CTS). | 2019-12-26 |
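The identification scheme in 20190391952 amounts to a lookup: raise DTR/RTS, sample which of DSR/CD/CTS read high, and map that pin combination to a device. The wiring table below is entirely hypothetical — the abstract does not specify which device asserts which pins.

```python
# Hypothetical wiring table: which input pins a device's connector ties
# back to the raised output pin. Device names are invented for illustration.
DEVICE_TABLE = {
    frozenset({"DSR"}): "infusion_pump",
    frozenset({"DSR", "CTS"}): "ventilator",
    frozenset({"CD", "CTS"}): "patient_monitor",
}

def identify(high_pins: set) -> str:
    """Given the set of computer-side input pins reading voltage-high after
    DTR/RTS was raised, look up which device is connected."""
    return DEVICE_TABLE.get(frozenset(high_pins), "unknown")

device = identify({"DSR", "CTS"})
```

Since three input pins give up to eight combinations, this passive scheme can distinguish only a handful of devices, which fits the "specific medical device" framing of the claim.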
20190391953 | ASYNCHRONOUS TRANSCEIVER FOR ON-VEHICLE ELECTRONIC DEVICE - An on-vehicle system comprises a Clock Extension Peripheral Interface (CXPI) bus and a device coupled to the CXPI bus as a slave node. The device comprises a transceiver configured to: generate a first signal by delaying an inverted signal of a transmission data signal; generate a second signal based on the transmission data signal, where the second signal has a low slew rate; selectively output the first signal or the second signal as a third signal, in response to a selector signal; and generate a clock signal in response to the third signal, where the clock signal is at a high level when the third signal is at a low level, and where the clock signal is at the low level when the third signal is at the high level. | 2019-12-26 |
20190391954 | METHODS AND SYSTEMS TO ACHIEVE MULTI-TENANCY IN RDMA OVER CONVERGED ETHERNET - A method for providing multi-tenancy support for RDMA in a system that includes a plurality of physical hosts. Each physical host hosts a set of data compute nodes (DCNs). The method, at an RDMA protocol stack of the first host, receives a packet that includes a request from a first DCN hosted on a first host for RDMA data transfer from a second DCN hosted on a second host. The method sends a set of parameters of an overlay network that are associated with the first DCN to an RDMA physical network interface controller of the first host. The set of parameters are used by the RDMA physical NIC to encapsulate the packet with an RDMA data transfer header and an overlay network header by using the set of parameters of the overlay network to transfer the encapsulated packet to the second physical host using the overlay network. | 2019-12-26 |
20190391955 | CONFIGURING COMPUTE NODES IN A PARALLEL COMPUTER USING REMOTE DIRECT MEMORY ACCESS ('RDMA') - Configuring compute nodes in a parallel computer using remote direct memory access (‘RDMA’), the parallel computer comprising a plurality of compute nodes coupled for data communications via one or more data communications networks, including: initiating, by a source compute node of the parallel computer, an RDMA broadcast operation to broadcast binary configuration information to one or more target compute nodes in the parallel computer; preparing, by each target compute node, the target compute node for receipt of the binary configuration information from the source compute node; transmitting, by each target compute node, a ready message to the source compute node, the ready message indicating that the target compute node is ready to receive the binary configuration information from the source compute node; and performing, by the source compute node, an RDMA broadcast operation to write the binary configuration information into memory of each target compute node. | 2019-12-26 |
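The four-step flow in 20190391955 (initiate, prepare, signal ready, write) can be simulated in a few lines; the dictionaries stand in for target-node memory, and every name is illustrative rather than from the application.

```python
def rdma_broadcast(source_image: bytes, targets: dict) -> list:
    """Toy simulation of the RDMA configuration broadcast: each target
    prepares a receive buffer and signals readiness, then the source
    writes the binary configuration directly into each target's memory."""
    ready = []
    for name, node in targets.items():
        node["buffer"] = bytearray(len(source_image))  # prepare for receipt
        ready.append(name)                             # ready message back
    for name in ready:
        # RDMA write: source-side copy straight into target memory,
        # with no receive-side CPU involvement in a real system.
        targets[name]["buffer"][:] = source_image
    return ready

nodes = {"t0": {}, "t1": {}}
ready = rdma_broadcast(b"cfg-blob", nodes)
```

The ready handshake matters because an RDMA write lands in target memory without the target's software in the data path, so the buffer must exist and be registered before the source writes.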
20190391956 | Cloud Sharing and Selection of Machine Learning Models for Service Use - An approach is provided in which an information handling system performs multiple tests using a cognitive service and multiple trained machine learning models on user data corresponding to a user application. For each of the multiple tests, a different one of the trained machine learning models is utilized. The information handling system generates results from the tests and then selects at least one of the trained machine learning models based on the test results. In turn, the information handling system assigns the cognitive service and the selected trained machine learning models to the user application. | 2019-12-26 |
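The selection step in 20190391956 — test every trained model against the user data, then keep the best scorer — can be sketched as below. The `evaluate(model, data)` hook, the model names, and the scores are all assumptions for illustration.

```python
def select_model(models, user_data, evaluate):
    """Run each trained model on the user's data through an assumed
    evaluate(model, data) hook, record the results, and pick the
    best-scoring model for assignment to the user application."""
    results = {name: evaluate(model, user_data) for name, model in models.items()}
    best = max(results, key=results.get)
    return best, results

# Toy stand-ins: two "models" whose scores are hard-coded for illustration.
models = {"m_small": "small", "m_large": "large"}
fake_scores = {"small": 0.71, "large": 0.88}
best, results = select_model(models, user_data=None,
                             evaluate=lambda m, d: fake_scores[m])
```

A real service would also weigh latency and cost, not just a single test score, but the abstract only claims selection based on test results.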
20190391957 | AUTOMATIC ARCHIVING OF DATA STORE LOG DATA - Methods, systems, and computer-readable media for automatic archiving of data store log data are disclosed. One or more operation records in a log are selected for archival. The one or more operation records comprise data indicative of operations performed on one or more data objects of a data store. The one or more operation records are selected for archival prior to deletion from the log. The one or more operation records are replicated from the log to an archive. Based at least in part on the replicating, the one or more operation records in the log are marked as archived. Based at least in part on the marking as archived, the deletion of the one or more operation records from the log is permitted. | 2019-12-26 |
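The ordering constraint in 20190391957 — replicate first, mark archived, and only then permit deletion — is the whole point of the design, and a short sketch (function and field names assumed) makes that ordering explicit.

```python
def archive_and_trim(log: list, archive: list) -> None:
    """Sketch of the archival flow: replicate operation records to the
    archive, mark them archived in the log, and only then delete them
    from the log. Deletion is gated on the archived mark."""
    for rec in list(log):
        archive.append(dict(rec))  # replicate the record to the archive
        rec["archived"] = True     # mark as archived, based on replication
    # Deletion is now permitted for marked records.
    log[:] = [r for r in log if not r.get("archived")]

log = [{"op": "put", "key": "a"}, {"op": "del", "key": "b"}]
archive = []
archive_and_trim(log, archive)
```

Gating deletion on the archived mark guarantees no operation record is lost in the window between log trimming and archival.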
20190391958 | SYSTEMS AND METHODS FOR ELECTRONICALLY GENERATING SUBMITTAL REGISTERS - A system and method for generating a submittal register for various construction projects or other items is disclosed. Among other things, the system and method include inputting the construction project specifications in a file format, such as PDF, into a web application interface to convert the same to a text file, applying an algorithm to the text file, identifying all required submittals into a spreadsheet, running a quality control check of the generated spreadsheet, applying an analysis program to the spreadsheet, generating the final submittal register by the program, and delivering the final submittal register. | 2019-12-26 |
20190391959 | INTEROPERABILITY BETWEEN CONTENT MANAGEMENT SYSTEM AND COLLABORATIVE CONTENT SYSTEM - A content management system and a collaborative content system implement interoperability features that allow a user to perform certain interactions with a collaborative content item via the interface of the content management system. For instance, the collaborative content system can outsource access permissions for the collaborative content item to the content management system. When the collaborative content system receives a user's request to access the collaborative content item, the collaborative content system requests permissions data for the collaborative content item from the content management system and then determines based on the permissions data whether to grant access to the user. The content management system can also outsource the account storage capacity for the collaborative content item to the collaborative content system. As a result, a collaborative content item that is stored in association with a user account on the content management system is not counted against the user account's storage capacity. | 2019-12-26 |
20190391960 | Data Cluster Migration Using An Incremental Synchronization - In some embodiments, during synchronizing of files in a source data set to a destination data set, a method receives a set of events that occurred at the source data set after replicating an image of the source data set to the destination data set. The method analyzes the set of events to determine if an exception to a first set of rules for performing a set of operators on the destination data set for the set of events occurs. A second set of rules for the exception is selected based on analyzing the set of events. The method processes the set of operators for the set of events according to the second set of rules to synchronize data from the source data set to the destination data set based on the set of events. The processing of the set of operators uses the second set of rules. | 2019-12-26 |
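The rule-set switch in 20190391960 can be captured in a small dispatcher: analyze the post-snapshot events, and when an exception to the first rule set is detected, process the operators under the second rule set instead. Event types, rule contents, and the exception predicate below are all invented for illustration.

```python
def sync_events(events, default_rules, exception_rules, is_exception):
    """Select the rule set per the described flow: analyze the events, and
    if an exception to the first (default) rule set applies, process the
    operators for the events under the second rule set instead."""
    rules = exception_rules if is_exception(events) else default_rules
    return [rules[e["type"]](e) for e in events]

# Illustrative: renames are normally replayed as copy+delete, but an
# exception rule set replays them as atomic renames on the destination.
events = [{"type": "rename", "src": "a", "dst": "b"}]
default_rules = {"rename": lambda e: ("copy+delete", e["src"], e["dst"])}
exception_rules = {"rename": lambda e: ("atomic-rename", e["src"], e["dst"])}
ops = sync_events(events, default_rules, exception_rules,
                  is_exception=lambda evs: any(e["type"] == "rename"
                                               for e in evs))
```

Keeping the exception handling as a second declarative rule set, rather than ad-hoc branches, is what lets the incremental sync stay correct as new event patterns are discovered.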
20190391961 | Storing Data Files in a File System - A mechanism is provided for storing data files in a file system. The file system provides a plurality of reference data files, where each reference data file in the plurality of reference data files represents a group of similar data files. The mechanism creates a new data file and associates the new data file with one reference data file in the plurality of reference data files, thus defining an associated reference data file of the plurality of reference data files. The mechanism informs the file system about the association of the new data file with the associated reference data file. The mechanism compresses the new data file using the associated reference data file, thereby forming a compressed data file. The mechanism stores the compressed data file together with information about the association of the new data file with the associated reference data file. | 2019-12-26 |
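Compressing a new file against its associated reference file can be demonstrated with zlib's preset-dictionary support, where the reference data acts as the dictionary; this is a stand-in for whatever codec the application actually intends, and the sample data is invented.

```python
import zlib

def compress_with_reference(new_data: bytes, reference: bytes) -> bytes:
    """Compress a new file using its associated reference file as a preset
    dictionary, so content shared with the reference costs almost nothing."""
    c = zlib.compressobj(zdict=reference)
    return c.compress(new_data) + c.flush()

def decompress_with_reference(blob: bytes, reference: bytes) -> bytes:
    """Decompression needs the same reference file, which is why the
    association must be stored alongside the compressed data."""
    d = zlib.decompressobj(zdict=reference)
    return d.decompress(blob) + d.flush()

reference = b"the quick brown fox jumps over the lazy dog " * 20
new_file = reference[:500] + b" -- with a small edit"
blob = compress_with_reference(new_file, reference)
restored = decompress_with_reference(blob, reference)
```

The stored association metadata is load-bearing: lose the link to the reference file and the compressed data is undecodable.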
20190391962 | DETERMINING CHUNK BOUNDARIES FOR DEDUPLICATION OF STORAGE OBJECTS - Described are a method, system, and computer program product for deduplicating a storage object. A hash of a window of data of a storage object is determined and a determination is made as to whether the window of data of the storage object corresponds to a chunk boundary. A determination is made as to whether the hash of the object matches one of the pseudo fingerprints in a list of at least one pseudo fingerprint. A storage object chunk boundary based on the window of data is stored in response to the window of data corresponding to the chunk boundary or in response to determining that the hash of the object matches one of the pseudo fingerprints. A determination is made of a new window of data in the storage object following the window of data when the window of data is not an end of data of the storage object. | 2019-12-26 |
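The two boundary conditions in 20190391962 — the windowed hash hitting a boundary criterion, or matching a pseudo fingerprint from a list — can be sketched with content-defined chunking in miniature. The window size, hash, and boundary mask below are simplified assumptions, not the patented parameters.

```python
import hashlib

WINDOW = 16   # assumed sliding-window size
MASK = 0xFF   # boundary when the hash's low byte is zero (assumed policy)
PSEUDO_FINGERPRINTS = set()  # hashes that force a boundary, per the list

def find_boundaries(data: bytes) -> list:
    """Slide a window over the storage object and record a chunk boundary
    whenever the window's hash meets the boundary condition or matches a
    pseudo fingerprint (a simplified stand-in for the described method)."""
    cuts = []
    for i in range(WINDOW, len(data)):
        h = hashlib.blake2b(data[i - WINDOW:i], digest_size=8).digest()
        if (h[-1] & MASK) == 0 or h in PSEUDO_FINGERPRINTS:
            cuts.append(i)
    return cuts

data = bytes(range(256)) * 4
cuts = find_boundaries(data)
```

Because boundaries depend only on window content, an insertion early in the object shifts at most a few chunks before the cut points realign, which is what makes the resulting chunks deduplicable; pseudo fingerprints additionally let known-good boundaries be imposed even where the hash condition would not fire.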