30th week of 2020 patent application highlights part 45
Patent application number | Title | Published |
20200233752 | MOSTLY UNIQUE FILE SELECTION METHOD FOR DEDUPLICATION BACKUP SYSTEMS - Embodiments for a mostly unique file selection (MUFS) process for a deduplication backup system are described. The process assigns tags to files; a tag is the smallest unit of migration and serves as a hint about the similarity of files in the deduplication file system, and files from the same client machine are expected to be assigned the same tag. The MUFS process measures uniqueness using a u-index, which is a function of the total unique size of a tag relative to the total size of the tag. A load balancer then uses the u-index to select the most unique tags for migration, so that the tags with the highest u-index are migrated first and the maximum space is freed on the source node. | 2020-07-23 |
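The abstract defines the u-index only as the total unique size of a tag relative to its total size; a minimal sketch under that reading, with hypothetical tag records and byte counts:

```python
def u_index(unique_size, total_size):
    """Uniqueness of a tag: fraction of its bytes not shared with other tags."""
    return unique_size / total_size if total_size else 0.0

def select_tags_for_migration(tags, space_needed):
    """Pick the most-unique tags first until enough space would be freed."""
    ranked = sorted(tags, key=lambda t: u_index(t["unique"], t["total"]), reverse=True)
    selected, freed = [], 0
    for tag in ranked:
        if freed >= space_needed:
            break
        selected.append(tag["name"])
        freed += tag["unique"]  # only unique bytes are actually freed on the source
    return selected, freed

# Hypothetical tags with sizes in (say) GB.
tags = [
    {"name": "client-a", "unique": 90, "total": 100},  # u-index 0.90
    {"name": "client-b", "unique": 10, "total": 100},  # u-index 0.10
    {"name": "client-c", "unique": 50, "total": 100},  # u-index 0.50
]
print(select_tags_for_migration(tags, 120))
```

Here migrating the two highest u-index tags ("client-a", then "client-c") already frees more than the requested 120 units, so the low-uniqueness tag is left on the source node.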
20200233753 | RESTORATION OF A MESSAGING APPLICATION - A computer implemented method is provided for restoring a device from a backup copy. If the device has a messaging application installed on the device, then a list of contacts for the messaging application on the device is extracted. A request is transmitted to each contact in the list of contacts, the request comprising a user id for the messaging application and a timestamp for the backup copy. One or more replies are received back from one or more of the contacts, each reply comprising messaging content, and the received messaging content is combined with content present in the messaging application on the device. | 2020-07-23 |
20200233754 | DATA PROTECTION AUTOMATIC OPTIMIZATION SYSTEM AND METHOD - A system includes a memory and at least one processor to continually analyze at least one of metrics, events, and conditions in a computer network, under normal operating conditions in the computer network, obtain a first level of data from at least one hardware device in the computer network, detect that one of a condition and an event has occurred in the computer network, automatically transmit an instruction to modify the first level of data obtained from the at least one hardware device to a second level of data more robust than the first level of data when one of the condition and the event has occurred, collect the second level of data from the at least one hardware device, and store the second level of data obtained from the at least one hardware device. | 2020-07-23 |
20200233755 | SYSTEM AND METHOD FOR INTELLIGENT DATA-LOAD BALANCING FOR BACKUPS - A method for backing up databases includes generating a Database-Host Mapping (DHM) associated with a backup request in response to receiving the backup request, performing a database redistribution analysis based on the DHM and a preferred server order list (PSOL) to generate a Host-Database Mapping (HDM), and initiating a backup of a plurality of databases using the plurality of hosts specified in the HDM. | 2020-07-23 |
20200233756 | CREATING A CUSTOMIZED BOOTABLE IMAGE FOR A CLIENT COMPUTING DEVICE FROM AN EARLIER IMAGE SUCH AS A BACKUP COPY - According to certain aspects, a method of creating customized bootable images for client computing devices in an information management system can include: creating a backup copy of each of a plurality of client computing devices, including a first client computing device; subsequent to receiving a request to restore the first client computing device to the state at a first time, creating a customized bootable image that is configured to directly restore the first client computing device to the state at the first time, wherein the customized bootable image includes system state specific to the first client computing device at the first time and one or more drivers associated with hardware existing at time of restore on a computing device to be rebooted; and rebooting the computing device to the state of the first client computing device at the first time from the customized bootable image. | 2020-07-23 |
20200233757 | MEMORY DEVICES INCLUDING EXECUTION TRACE BUFFERS - A memory device includes a non-volatile memory to store data, an execution trace buffer, and a media controller. The media controller receives data-modifying commands and adds the data-modifying commands to the execution trace buffer. The media controller executes the data-modifying commands to modify the data stored in the non-volatile memory and detects errors in the data stored in the non-volatile memory. The media controller repeats execution of data-modifying commands from the execution trace buffer in response to detecting an error. | 2020-07-23 |
20200233758 | Integrated Circuit Chip with Cores Asymmetrically Oriented With Respect To Each Other - An integrated circuit (IC) chip can include a given core at a position in the IC chip that defines a given orientation, wherein the given core is designed to perform a particular function. The IC chip can include another core designed to perform the particular function. The other core can be flipped and rotated by 180 degrees relative to the given core such that the other core is asymmetrically oriented with respect to the given core. The IC chip can also include a compare unit configured to compare outputs of the given core and the other core to detect a fault in the IC chip. | 2020-07-23 |
20200233759 | MEMORY SYSTEM - A main memory includes unit memory regions, a redundancy memory region for replacing one or more of the unit memory regions, an address wrapper for generating an address increase/decrease control signal in first and second address wrapping modes, a column decoder for sequentially selecting memory cells in a faulty memory region where a fault has occurred, among the unit memory regions in the first address wrapping mode, and sequentially selecting redundancy memory cells in the redundancy memory region in the second address wrapping mode, based on a column address and the address increase/decrease control signal, and a data input/output circuit for outputting data read from the faulty memory region as backup data to a temporary memory in the first address wrapping mode, and outputting the backup data as restoration data to the redundancy memory region in the second address wrapping mode. | 2020-07-23 |
20200233760 | DECENTRALIZED DATA PROTECTION SYSTEM USING LOCAL METADATA - In a decentralized system of nodes configured to provide data protection functionality, wherein at least a subset of the nodes store and share data using content-addresses managed via a distributed hash table in each of the subset of nodes, a given one of the subset of nodes locally stores: a data protection policy to be implemented by the given node; data protected on the given node; and metadata comprising information indicating placement of a given data set on one or more other ones of the subset of nodes. The given node accesses the locally stored metadata to manage protection of the given data set on the one or more other ones of the subset of nodes. | 2020-07-23 |
20200233761 | DISTRIBUTED PROCESSING METHOD AND DISTRIBUTED PROCESSING SYSTEM - A distributed processing method in which a plurality of servers, each including a processor and a memory, receive and process replicated data. The method includes a first determination step, in which each server receives the replicated data and a first determination unit determines a degree of consistency of the received data, and an output step, in which each server receives the determination result from the first determination unit and, if the result includes data that guarantees consistency, outputs that data. A first number of servers that are to receive the data is set in advance based on a prescribed allowable number of failures, which defines the number of servers that can fail, and an allowable number of byzantine failures, which defines the number of servers that can have byzantine failures. | 2020-07-23 |
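The abstract does not state the exact consistency rule, but a common reading is that a value is guaranteed consistent when more servers report it than could possibly be lying. A sketch assuming a b+1 matching-replies threshold, where b is the allowable number of byzantine failures (so at least one honest server must have reported the value):

```python
from collections import Counter

def consistent_value(replies, max_byzantine):
    """Return the value guaranteed consistent, i.e. reported by at least
    max_byzantine + 1 servers, else None.

    The b+1 threshold is an illustrative choice, not the patent's exact rule.
    """
    counts = Counter(replies)
    value, n = counts.most_common(1)[0]
    return value if n >= max_byzantine + 1 else None

# Three replies, at most one byzantine server: "a" is reported twice,
# so at least one honest server sent it.
print(consistent_value(["a", "a", "b"], max_byzantine=1))
```

With three mutually disagreeing replies and one allowed byzantine failure, no value clears the threshold and the function returns `None`, i.e. consistency cannot be guaranteed.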
20200233762 | Method and Apparatus for Redundancy in Active-Active Cluster System - A method is applied to a system including a host cluster and at least one pair of storage arrays. The host cluster includes a quorum host, which includes a quorum unit. The quorum host is an application host having a quorum function. A pair of storage arrays includes a first storage array and a second storage array. The quorum host receives a quorum request, temporarily stops delivering a service to the first storage array and the second storage array, determines, from the first storage array and the second storage array, which is a quorum winning storage array and which is a quorum losing storage array according to logic judgment, stops the service with the quorum losing storage array, sends quorum winning information to the quorum winning storage array, and resumes the delivered service between the host cluster and the quorum winning storage array. | 2020-07-23 |
20200233763 | FACILITATING COMMUNICATION AMONG STORAGE CONTROLLERS - A method, system and computer program product for facilitating communication among storage controllers of a storage system. The method comprises detecting an event indicative of status change in a storage system having a plurality of storage controllers; determining that it is needed to communicate the event from a first storage controller to a second storage controller of the storage controllers; transmitting a message about the event from the first storage controller to a host in response to failure of a dedicated link between the first storage controller and the second storage controller; and forwarding the message from the host to the second storage controller. | 2020-07-23 |
20200233764 | REPLICATION OF DATA IN A GEOGRAPHICALLY DISTRIBUTED STORAGE ENVIRONMENT - Described herein is a system that facilitates replication of data in a geographically distributed storage environment. According to an embodiment, a system can comprise storing a first data chunk at a first site of a first region in a geographically diverse data storage system, determining a second region in the geographically diverse data storage system for storage of a first copy of the first data chunk, wherein the first copy is stored at a second site located within the second region, and determining a third region in the geographically diverse data storage system for storage of a second copy of the first data chunk, wherein the second copy is stored at a third site located within the third region. | 2020-07-23 |
20200233765 | PROACTIVE DISK RECOVERY OF STORAGE MEDIA FOR A DATA STORAGE SYSTEM - The described technology is generally directed towards proactive disk recovery that operates when a failing disk is detected in a data-protected cloud data storage system. A proactive recovery process evaluates the chunks of a failing disk one-by-one. If a system process is scheduled to handle that chunk, the chunk is skipped, with recovery delegated to the system process. For non-delegated chunks protected by mirroring, a chunk copy is read by the proactive disk recovery process from a good disk copy, and copied to a new location. For non-delegated chunks protected by erasure coding, the chunk fragment is read and validated. If a portion is consistent, the proactive recovery process stores the portion to a new location on a good disk. If a portion is inconsistent, the process initiates recovery of the portion, e.g., via a fragment recovery task, for copying to a new location on a good disk. | 2020-07-23 |
20200233766 | Information Handling System And Methods To Detect Power Rail Failures And Test Other Components Of A System Motherboard - Embodiments of information handling systems (IHSs) and methods are provided herein to automatically detect failure(s) on one or more power rails provided on a system motherboard of an IHS. One embodiment of such a method may include determining if a power rail test should be performed each time an information handling system (IHS) is powered on or rebooted. If a power rail test is performed, the method may perform a current measurement for each of the power rails separately to obtain actual current values for each power rail, compare the actual current values obtained for each power rail to expected current values stored for each power rail, and detect a failure on at least one of the power rails if the actual current value obtained for the at least one power rail differs from the expected current value stored for the at least one power rail by more than a predetermined percentage or amount. | 2020-07-23 |
20200233767 | DEBUG SYSTEM - A debug system is provided. The debug system includes a debug card and an electronic device. The debug card displays a debug result corresponding to a debug code. The debug card includes a first port. The first port has a first pin and a second pin. An identification signal having a first logic level is applied to the first pin. The electronic device includes a processor and a second port. The processor performs a debug operation to provide the debug code. The second port has a third pin and a fourth pin. When the second port is electrically connected to the first port, the third pin receives the identification signal and provides the debug code to the first port through the fourth pin according to the identification signal. The second pin receives the debug code. | 2020-07-23 |
20200233768 | Secure Method for Managing a Virtual Test Platform - The technology disclosed relates to implementing a virtual test platform (VTP) and running virtual test applications (VTAs) from an unsecured location. Using a phone home service, the VTP establishes a secure tunnel connection with a test controller. The VTP receives configuration information for a VTA from the test controller. If the VTA is not stored on the VTP, the VTP retrieves the VTA from a repository specified by the test controller. The configuration information from the test controller includes information needed for the VTP to set up a second secure tunnel. The VTP establishes the second secure tunnel and launches the VTA. The VTP relays information sent through the second tunnel to the VTA, and also relays messages from the VTA back to the test controller. | 2020-07-23 |
20200233769 | MEMORY SYSTEM - A memory system includes a first memory chip, and a controller that includes a first circuit, a second circuit, and a third circuit. The third circuit is configured to manage a first differential power consumption value that is a difference between first and second power consumption values. The first power consumption value corresponds to first power that the first memory chip consumes while executing a first operation. The second power consumption value corresponds to second power that the first memory chip consumes when suspending the first operation. The third circuit is configured to determine, based on the first differential power consumption value, whether causing the first memory chip to suspend the first operation in order to execute a second operation is possible. | 2020-07-23 |
20200233770 | MEMORY SYSTEM AND OPERATING METHOD THEREOF - A memory system includes a status information register configured to check threshold voltages of select transistors included in memory blocks, store status information on a check result, and output a code based on the status information; a status monitor configured to receive the code from the status information register, determine a number of select transistors whose threshold voltages have shifted according to the code, and output a status signal based on that number; and a central processing unit configured to output a setup command set for setting parameters of the memory blocks, output a re-program command set for re-programming the select transistors, or output a bad block address for processing the memory blocks as bad blocks in response to the status signal. | 2020-07-23 |
20200233771 | RECORDING PROCESSOR INSTRUCTION EXECUTION CYCLE AND NON-CYCLE COUNT TRACE EVENTS - A program is executed on a processor to produce execution events. The execution events are traced using a first trace mode during a first portion of the program execution, wherein a portion of trace information for the execution events is omitted from a trace report while tracing in the first trace mode. The mode of tracing is dynamically changed to a second trace mode in response to an event trigger, such that all execution events that occur during the change of mode are captured. Execution events are traced during a second portion of the program execution using the second trace mode, wherein additional trace information for the execution events is included in the trace report while tracing in the second trace mode. The trace mode may be dynamically switched between the two trace modes during execution of the program. | 2020-07-23 |
20200233772 | DEVICE, SYSTEM AND METHOD FOR IDENTIFYING A SOURCE OF LATENCY IN PIPELINE CIRCUITRY - Techniques and mechanisms for determining a latency event to be represented in performance monitoring information. In an embodiment, circuit blocks of a pipeline experience respective latency events at various times during tasks performed by the pipeline to service a workload. The circuit blocks send to an evaluation circuit of the pipeline respective event signals which each indicate whether a respective latency event has been detected. The event signals are communicated in parallel with at least a portion of the pipeline. In response to a trigger event in the pipeline, the evaluation circuit selects an event signal, based on relative priorities of the event signals, which provides a sample indicating a detected latency event. Based on the selected event signal, a representation of the indicated latency event is provided to a latency event count or other value in the performance monitoring information. In another embodiment, different time delays are applied to various event signals. | 2020-07-23 |
20200233773 | METHODS AND SYSTEMS FOR STATUS DETERMINATION - Methods and systems for status determination are disclosed. A computing device may determine a status of itself or of another computing device. One or more actions may be taken based on that status. | 2020-07-23 |
20200233774 | System and Method for Efficient Estimation of High Cardinality Time-Series Models - A system includes a metric data store configured to receive and store a time-series of values of a first metric, a seasonal trend identification module configured to determine a periodicity profile for the first metric, and a modeling module configured to generate an autoregressive moving average (ARMA) model. The modeling module includes a seasonal model module configured to generate a first model of the time-series of values, a non-seasonal model module configured to generate a second model of the time-series of values, and a combination module configured to generate a third model based on the first and second models. The modeling module is configured to, in response to determining that a first periodicity profile describes the time-series of values, output the third model as the ARMA model. The system includes an envelope determination module configured to determine a normal behavior of the first metric based on the ARMA model. | 2020-07-23 |
20200233775 | DYNAMICALLY MAINTAINING ALARM THRESHOLDS FOR SOFTWARE APPLICATION PERFORMANCE MANAGEMENT - Embodiments of the present disclosure relate to dynamically maintaining alarm thresholds for software application performance management. Other embodiments may be described and/or claimed. | 2020-07-23 |
20200233776 | ADAPTIVE PERFORMANCE CALIBRATION FOR CODE - Embodiments generally relate to performance testing of software code. In some embodiments, a method includes executing a software program, where the software program includes at least one target portion of code to be performance tested. The method further includes receiving a data stream, where the data stream includes a plurality of events, and where the at least one target portion of code processes the plurality of events based on an event rate. The method further includes monitoring for failures associated with the at least one target portion of code processing the plurality of events. The method further includes modifying the event rate if at least one failure is detected, where the event rate is modified until no failures are detected. The method further includes generating a performance report if no failures are detected during a target success time period. | 2020-07-23 |
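The back-off loop described above (reduce the event rate until no failures are detected) can be sketched as follows; the geometric halving step and the failure probe are illustrative assumptions, not the patent's exact rule:

```python
def calibrate_event_rate(initial_rate, fails_at_rate, min_rate=1, step=0.5):
    """Reduce the event rate until the target code stops failing.

    `fails_at_rate` is a stand-in for the real failure monitor: it returns
    True if processing events at the given rate produced at least one failure.
    """
    rate = initial_rate
    while rate > min_rate and fails_at_rate(rate):
        rate = max(min_rate, int(rate * step))  # back off geometrically
    return rate

# Hypothetical target code that starts failing above 200 events/sec.
threshold = 200
final = calibrate_event_rate(1000, lambda r: r > threshold)
print(final)  # first tested rate at or below the failure threshold
```

Starting at 1000 events/sec, the loop probes 500, 250, and then 125, where no failure is detected; in the patent's terms, the performance report would then be generated once 125 events/sec survives the target success time period.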
20200233777 | SCALABLE INCREMENTAL ANALYSIS USING CALLER AND CALLEE SUMMARIES - A method may include generating, by performing a full analysis of code and for each component of the code, summaries including: (i) a forward summary including a forward flow and (ii) a backward summary including a backward flow, obtaining a modification to a modified component, determining that one of the summaries for the modified component is invalid, and in response to determining that a summary for the modified component is invalid: obtaining the forward flow from the forward summary of the modified component, obtaining the backward flow from the backward summary of the modified component, generating a local flow by performing an incremental analysis of the modified component using the forward flow of the modified component and the backward flow of the modified component, and detecting a defect in the code using the forward flow of the modified component, the local flow, and the backward flow of the modified component. | 2020-07-23 |
20200233778 | CONTROL-PROGRAM-DEVELOPMENT SUPPORTING APPARATUS, CONTROL-PROGRAM-DEVELOPMENT SUPPORTING SYSTEM, CONTROL-PROGRAM-DEVELOPMENT SUPPORTING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - A control-program-development supporting apparatus | 2020-07-23 |
20200233779 | SOFTWARE TRACING IN A MULTITENANT ENVIRONMENT - Software tracing can be accomplished in a multitenant environment according to various examples of the present disclosure. In one example, a processing device can receive tracing information and a tenant identifier. The tracing information can indicate a sequence in which a group of microservices forming a software application executed in response to a request transmitted to the software application. The tenant identifier can correspond to a particular tenant among a group of tenants having access to an instance of the software application. The processing device can then select, based on the tenant identifier, a particular collector from among a group of collectors corresponding to the group of tenants. The processing device can forward the tracing information to the particular collector for causing the tracing information to be stored in a datastore corresponding to the particular tenant. | 2020-07-23 |
20200233780 | PERFORMANCE ENGINEERING PLATFORM AND METRIC MANAGEMENT - A flexible, adaptive performance test platform allows a test developer to customize performance tests to more realistically determine the impact of network behavior on a system under test (SUT). The test platform may be accessed through a Graphical User Interface (GUI) by all developers within an enterprise to generate and execute performance tests prior to release of new systems by the enterprise. In one aspect, the test platform enables developers to share performance tests, thereby leveraging existing work product to reduce overall system development time. In another aspect, the test platform enables developers to customize performance tests, providing the flexibility to easily specify a duration, scale, geography and/or resource for the test. In another aspect, the test platform enables developers to customize and monitor one or more metrics in accordance with the particular performance goals of the SUT, enabling a developer to more easily identify system issues. | 2020-07-23 |
20200233781 | SCALABLE EXECUTION TRACING FOR LARGE PROGRAM CODEBASES - Indications of a plurality of events whose occurrence is detected in a particular execution of a program are obtained. One or more partitions of a trace object corresponding to the execution are constructed, including a first partition corresponding to a first subset of the events. The first partition comprises a header portion which includes a compressed representation of one or more event chains, and a data portion comprising a compressed events record indicating an occurrence, during the execution, of a particular sequence of events indicated by an event chain. The trace object is stored. | 2020-07-23 |
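One plausible shape for a partition with a header of event chains and a compressed data portion, assuming fixed-length chains and index-based references (both simplifications of the trace object the abstract describes):

```python
from collections import OrderedDict

def build_partition(events, chain_len=3):
    """Split an event stream into fixed-length chains; store each distinct
    chain once in the header, and make the data portion a list of header
    indices so repeated sequences cost one index each."""
    chains = OrderedDict()
    data = []
    for i in range(0, len(events), chain_len):
        chain = tuple(events[i:i + chain_len])
        if chain not in chains:
            chains[chain] = len(chains)
        data.append(chains[chain])
    return {"header": list(chains), "data": data}

# Hypothetical event stream: a hot path executed three times, then a fault path.
trace = ["enter", "alloc", "exit"] * 3 + ["enter", "fault", "exit"]
part = build_partition(trace)
print(part["header"])
print(part["data"])
```

Twelve events compress to a two-entry header and four indices; real trace formats would additionally compress the header and data portions byte-wise, which this sketch omits.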
20200233782 | TEST CYCLE OPTIMIZATION USING CONTEXTUAL ASSOCIATION MAPPING - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for test cycle optimization using contextual association mapping. In one aspect, a method includes obtaining an artefact that includes a collection of reference items, where each reference item includes a sequence of words, generating candidate tags from each of the reference items based on the sequences of words in the reference items, selecting a subset of the candidate tags as context tags based on an amount that the candidate tags appear in the reference items, obtaining a sample item that includes a sequence of words, identifying a subset of the context tags in the sequence of words in the sample item, and classifying a subset of the reference items as contextually similar to the sample item based on the context tags that were identified. | 2020-07-23 |
20200233783 | TEST PATTERN GENERATION DEVICE - A test pattern generation device generates test input sequences for testing a sequence program. The device calculates all possible states and state changes for each of the input signals of the sequence program, generates test patterns in which each of the calculated state changes of an input signal is combined with the states or state changes of the other input signals, and generates the test input sequences on the basis of the generated test patterns. | 2020-07-23 |
20200233784 | Systems and Methods for Quality Control of an Enterprise IT Environment - A system for auditing an enterprise IT environment includes a multi-tier hierarchy generator configured to generate a multi-tier hierarchy, between and within each tier, maps the IT environment across a plurality of software applications of different types via which the IT environment is implemented. The system includes a test generation engine configured to generate test cases based on intake data about the IT environment. The system includes an auditing unit configured to test the IT environment based on the test cases and the multi-tier hierarchy. | 2020-07-23 |
20200233785 | Self-Curative Computer Process Automates - Systems, methods, and products are described herein for self-curative computer process automates. Execution of an automate for testing of an application is initiated. The application includes a plurality of user interface elements, each user interface element having a plurality of properties. A change to a user interface element of the plurality of user interface elements during the execution of the automate is identified based on a change to at least one property of the plurality of properties associated with the user interface element. A modification to the plurality of properties associated with the user interface element is generated based on a ranking of the plurality of user interface elements, the generated modification curing the change to the at least one property. The generated modification is displayed on a graphical user interface for further acceptance of the generated modification to the automate. | 2020-07-23 |
20200233786 | DEFAULT MOCK IMPLEMENTATIONS AT A SERVER - A system may include a mocking server and one or more tenants served by the mocking server. A tenant may test an application programming interface (API) using a mocking service. For example, the mocking server may run a mock implementation of an API based on an API specification, and the server may expose an endpoint of the mock implementation for API testing. In some cases, the API specification may use an additional service. The mocking server may need an implementation for this additional service in order to test the API. For improved efficiency and reliability of the mocking service, the mocking server may store pre-configured mock implementations for various service or complementary APIs, which can be publicly accessed and shared across multiple different tenants. The pre-configured mock implementations may enable a user to test an API without providing a mock implementation for each additional service indicated in the API specification. | 2020-07-23 |
20200233787 | API SPECIFICATION PARSING AT A MOCKING SERVER - A system may include a mocking server and one or more tenants served by the mocking server. A tenant may test an application programming interface (API) by creating a mock implementation of the API using a mocking service provided by the mocking server. The mocking server may generate a mock implementation of the API based on an API specification and expose an endpoint of the mock implementation for the user to perform testing. The user may provide an identifier for an API specification to the mocking server. The mocking server may retrieve the API specification from a source, parse the API specification in memory to create a mock model of the API, and generate a mock implementation for the API from the mock model. The mocking server includes an authentication mechanism to ensure that users accessing the API specification or running a mock implementation of the API are authorized. | 2020-07-23 |
20200233788 | SYSTEM AND METHOD FOR SCALABLE AUTOMATED USER INTERFACE TESTING - Systems and methods for providing automated testing of user interfaces are disclosed. The system is configured to communicate with one or more client devices that each include a common user interface of an application and receive at least one request for identifying errors associated with the common user interface. The system also receives at least one constraint associated with one or more portions of the common user interface. The system then generates navigational state information associated with the at least one constraint and identifies errors using the generated navigational state information associated with the common user interface. | 2020-07-23 |
20200233789 | USER DEFINED MOCKING SERVICE BEHAVIOR - A mocking service allows a mocking instance of an API specification to receive behavior parameters with requests for resources of the API specification. The mocking service may digest these parameters and generate a response according to the parameters and using the instance of the API specification. The dynamic responses allow a service to be configured for interacting with an API corresponding to the API specification and for interacting with different response scenarios of the API. The parameters may define response behaviors such as a fixed time until a response is received, a variable time until a response is received, error rate, error codes, validations, etc. In some cases, based on a behavior parameter indicating a request for random data for a requested resource, the mock implementation of the API may generate and return random data according to variables defined in the API specification. | 2020-07-23 |
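The behavior parameters named above (fixed delay, variable delay, error rate, error codes) can be digested roughly like this; all parameter names and defaults are hypothetical, not taken from the patent:

```python
import random

def mock_response(params, ok_body):
    """Generate a mock API response according to caller-supplied behavior
    parameters. Parameter names here are illustrative assumptions."""
    delay = params.get("delay_ms", 0)                    # fixed time until response
    jitter = params.get("jitter_ms", 0)                  # variable component
    wait = delay + (random.uniform(0, jitter) if jitter else 0)
    if random.random() < params.get("error_rate", 0.0):  # inject configured errors
        return {"status": params.get("error_code", 500), "wait_ms": wait, "body": None}
    return {"status": 200, "wait_ms": wait, "body": ok_body}

random.seed(0)  # deterministic for the example
resp = mock_response({"delay_ms": 100, "error_rate": 0.0}, {"id": 1})
print(resp)
```

A service under development could then be exercised against different response scenarios just by varying the parameter dictionary, e.g. `{"error_rate": 1.0, "error_code": 503}` to simulate a consistently unavailable dependency.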
20200233790 | DESIGNER DEFINED MOCKING SERVICE BEHAVIOR - A mocking service generates a mock implementation of an API based on an API specification. Request and response behavior of the mock implementation of the API may be controlled by a separate API behavior file. The API behavior file may be parsed by the mocking service to generate behavior logic. When an API request is transmitted to the mock implementation of the API, the behavior logic is invoked and may control execution of the mock implementation of the API and the details of a generated response. Behaviors defined in the API behavior file may be global or resource specific and may include, for example, time delays, error rates, error codes, conditions, response overrides, etc. | 2020-07-23 |
20200233791 | Anomaly Feedback Monitoring and Detection System - Disclosed herein are system, method, and computer program product embodiments for providing anomaly feedback monitoring and detection. An embodiment operates by determining a first set of data corresponding to an anomaly indicating an undesirable data state for a first application. A subset of data from a second set of data corresponding to the undesirable data state is identified, wherein the second set of data is associated with communications between the first application and a second application. A notification identifying the anomaly is provided. Feedback associated with the anomaly is received. Data corresponding to the anomaly is updated based on the feedback. | 2020-07-23 |
20200233792 | SCALABLE ENTERPRISE PLATFORM FOR AUTOMATED FUNCTIONAL AND INTEGRATION REGRESSION TESTING - A scalable enterprise platform for automated functional and integration regression testing is provided. Embodiments of the disclosed system facilitate the testing of any number of different software systems in development, even where the systems have unique dataset formats. Embodiments of the present invention provide a common method to generate logging and results reports across the platform, thereby providing simpler results analysis. Embodiments may also standardize the query set and facilitate the capability to analyze large results sets. Furthermore, embodiments of the disclosed system may combine the original data with the validated data to allow testers to analyze the testing results. In addition, embodiments of the present invention support secured separation of testing domains. In at least one embodiment, the system includes a centralized user interface system that provides users with different domains to securely access one or more testing domains. | 2020-07-23 |
20200233793 | BEHAVIOR DRIVEN DEVELOPMENT TEST FRAMEWORK FOR APPLICATION PROGRAMMING INTERFACES AND WEBSERVICES - Systems and methods for testing software such as webservices and APIs using behavior-driven development (BDD) language are disclosed. Software such as, for example, an Application Programming Interface (API) or webservice is tested using a BDD expression such as, for example, a Gherkin. The Gherkin may be converted into machine-executable code for the test. The machine-executable code may be executed if the software is available. A response output generated by the software may be validated based on validation information of input data. A report based on the validation may be generated. | 2020-07-23 |
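The Gherkin-to-executable-code step can be sketched as pattern-based step binding, roughly how BDD frameworks dispatch steps to handlers. The step patterns and the `registry` layout below are hypothetical illustrations, not the patent's conversion mechanism.

```python
import re

def parse_gherkin_step(step, registry):
    """Match a Gherkin step against registered step patterns and return
    a callable test action (a toy version of BDD step binding)."""
    for pattern, fn in registry:
        m = re.fullmatch(pattern, step)
        if m:
            return lambda: fn(*m.groups())
    raise ValueError(f"no step definition for: {step}")

# Hypothetical step definitions for an API test; actions are recorded
# in `calls` so the dispatch can be observed.
calls = []
registry = [
    (r'I GET "(.+)"', lambda path: calls.append(("GET", path))),
    (r'the response status is (\d+)',
     lambda code: calls.append(("ASSERT", int(code)))),
]
```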
20200233794 | ELECTRONIC PRODUCT TESTING SYSTEMS - An electronic product testing system includes: a testing device having a processing unit configured to provide a digital image that includes a feature of a product to be tested based at least in part on an instruction file; wherein the testing device is configured to test the product based at least in part on a set of digits by submitting the set of digits for processing by a repository. | 2020-07-23 |
20200233795 | DATA STORAGE SYSTEM AND PRECHARGE POLICY SETTING METHOD THEREFOR - A data storage system includes a memory device including a plurality of memory cells which are coupled to a plurality of row lines, and configured to communicate with a host device through at least one port; and a memory controller configured to select one of a first precharge policy and a second precharge policy according to a precharge control signal, and control the row lines based on access addresses for the row lines according to the selected precharge policy, wherein, under the first precharge policy, one of a first precharge scheme and a second precharge scheme is applied, and under the second precharge policy, both the first and second precharge schemes are applied at different times. | 2020-07-23 |
20200233796 | STORAGE DEVICE, COMPUTING SYSTEM INCLUDING STORAGE DEVICE, AND METHOD OF OPERATING THE SAME - A memory controller may control a memory device for storing logical to physical (L2P) mapping information, the memory controller comprising: a map data storage configured to store a plurality of L2P address segments included in the L2P mapping information; and a map data manager configured to: provide at least one L2P address segment of the plurality of L2P address segments to the host in response to a map data request of the host; and remove an L2P address segment from the map data storage, wherein the L2P address segment is selected, among the plurality of L2P address segments, based on a least recently used (LRU) frequency and whether the L2P address segment is provided to the host. | 2020-07-23 |
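The eviction rule, selecting among L2P segments by recency and by whether a segment was already provided to the host, can be modeled in a few lines. This is an illustrative policy under an assumed interpretation (prefer evicting segments the host already holds a copy of), not the controller's actual logic.

```python
from collections import OrderedDict

class L2PMapStore:
    """Toy L2P segment store: evicts the least-recently-used segment,
    preferring segments already provided to the host."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.segments = OrderedDict()   # segment_id -> mapping data
        self.sent_to_host = set()

    def access(self, seg_id, data):
        if seg_id in self.segments:
            self.segments.move_to_end(seg_id)   # mark most recently used
        else:
            if len(self.segments) >= self.capacity:
                self._evict()
            self.segments[seg_id] = data

    def provide_to_host(self, seg_id):
        self.sent_to_host.add(seg_id)
        return self.segments.get(seg_id)

    def _evict(self):
        # Prefer the LRU segment that was already sent to the host.
        for seg_id in list(self.segments):      # iterates LRU -> MRU
            if seg_id in self.sent_to_host:
                del self.segments[seg_id]
                return
        self.segments.popitem(last=False)       # fall back to plain LRU
```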
20200233797 | USING A RAW MIRROR TO INITIALIZE A STORAGE SYSTEM - A method of initializing a data storage system (DSS) is provided. The method includes (a) in response to the DSS booting, reading data from a first plurality of disks marked as part of a raw mirror which mirrors configuration data of the DSS between the first plurality of disks; (b) comparing sequence numbers from the data read and selecting data from a disk of the first plurality having a latest sequence number; (c) obtaining configuration data of the DSS from the selected data; (d) using the configuration data to construct a topology of the DSS which includes information describing a relationship between a second plurality of disks of the DSS, RAID groups of the DSS, and logical disks presented to users, the second plurality of disks being larger than and including the first plurality of disks; and (e) initializing the RAID groups and the logical disks described by the topology based on the information of the topology. | 2020-07-23 |
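Step (b), picking the mirror copy with the latest sequence number, reduces to a max over the per-disk copies. The tuple encoding below is an assumption made for illustration.

```python
def select_config(mirror_copies):
    """Return the (sequence_number, config) pair with the latest
    sequence number among the raw-mirror copies read from disk."""
    return max(mirror_copies, key=lambda copy: copy[0])
```

For example, given copies read from three mirror disks, the one written last (highest sequence number) supplies the configuration used to build the topology.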
20200233798 | COMPUTER MEMORY MAPPING AND INVALIDATION - Techniques are provided for computer memory mapping and allocation. In an example, a virtual memory address space is divided into an active half and a passive half. Processors make memory allocations to their respective portions of the active half until one processor has made a determined number of allocations. When that occurs, and when all memory in the passive half that has been allocated has been returned, then the active and passive halves are switched, and all processors are switched to making allocations in the newly-active half. | 2020-07-23 |
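A toy model of the active/passive half scheme: allocations go to the active half until some processor reaches an allocation budget and every passive-half allocation has been returned, at which point the halves swap. The class and its counters are illustrative, not the patented allocator.

```python
class HalfSpaceAllocator:
    """Toy model of an active/passive split of a virtual address space."""

    def __init__(self, budget):
        self.budget = budget          # allocations before a swap is requested
        self.active, self.passive = "A", "B"
        self.counts = {}              # processor -> allocations in active half
        self.outstanding_passive = 0  # unreturned allocations in passive half

    def allocate(self, proc):
        self.counts[proc] = self.counts.get(proc, 0) + 1
        half = self.active            # allocation lands in the active half
        self._maybe_swap()
        return half

    def free_passive(self):
        self.outstanding_passive -= 1
        self._maybe_swap()

    def _maybe_swap(self):
        # Swap only when a processor hit the budget AND the passive half
        # has been fully returned.
        if (any(c >= self.budget for c in self.counts.values())
                and self.outstanding_passive == 0):
            self.active, self.passive = self.passive, self.active
            self.outstanding_passive = sum(self.counts.values())
            self.counts = {}
```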
20200233799 | METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR MANAGING STORAGE SYSTEM - Storage system management is provided. Metadata in a first version at a first time point of the storage system is obtained, where the metadata in the first version describes reference relations between at least one data block in a chunk included in the storage system and at least one object stored in the storage system at the first time point. Metadata in a second version at a second time point of the storage system is obtained, the second time point being after the first time point. The chunk included in the storage system is managed based on a determined difference between the metadata in the first version and the metadata in the second version. By means of the technical solution of the present disclosure, chunks in the storage system may be managed more effectively, and the chunk reclaiming efficiency may be increased. | 2020-07-23 |
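The version diff driving chunk reclamation can be sketched by encoding each version's metadata as a map from block to the objects referencing it; a block with no remaining references in the second version is reclaimable. The dict-of-sets encoding is an assumption for illustration.

```python
def reclaimable_blocks(refs_v1, refs_v2):
    """Compare reference metadata between two time points and report
    the chunk blocks no longer referenced by any object.

    refs_vN: dict mapping block_id -> set of object ids referencing it.
    """
    reclaim = []
    for block in refs_v1:
        if not refs_v2.get(block):          # no remaining references
            reclaim.append(block)
    return sorted(reclaim)
```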
20200233800 | JUST-IN-TIME DATA PROVISION BASED ON PREDICTED CACHE POLICIES - Systems and methods are provided for predicting a cache policy based on application input data. Inputs provided to an application and corresponding to a usage pattern of the application can be received. The inputs can be used with a predictive model to determine a cache policy corresponding to a datastore. The cache policy can include output data to be provided in the datastore and subsequently provided to a computing device in a just-in-time manner. The predictive model can be trained to output the cache policy based on input data received from a usage point, a provider point, or a datastore configuration. | 2020-07-23 |
20200233801 | TRADING OFF CACHE SPACE AND WRITE AMPLIFICATION FOR B(epsilon)-TREES - Certain aspects provide systems and methods for performing an operation on a B(epsilon)-tree. | 2020-07-23 |
20200233802 | MULTIPLE CACHE FRAMEWORK FOR MANAGING DATA FOR SCENARIO PLANNING - The embodiments disclosed herein relate to computing a transportation plan for transporting goods from one place to another across a number of shipments that satisfy multiple shipment orders. The transportation plan may specify a transportation channel that includes one or more segments selected from service provider rate offerings that may include a means of transportation, starting location, destination location, and cost of the segment. An actionable transportation plan may be computed based on current transportation planning data. Alternative plans may be computed for a variety of scenarios in which hypothetical changes are introduced to the transportation planning data. Any combination of an actionable transportation plan and alternative plans may be computed concurrently with computations sharing a common cache of production data. | 2020-07-23 |
20200233803 | EFFICIENT HARDWARE ARCHITECTURE FOR ACCELERATING GROUPED CONVOLUTIONS - Hardware accelerators for accelerated grouped convolution operations. A first buffer of a hardware accelerator may receive a first row of an input feature map (IFM) from a memory. A first group comprising a plurality of tiles may receive a first row of the IFM. A plurality of processing elements of the first group may compute a portion of a first row of an output feature map (OFM) based on the first row of the IFM and a kernel. A second buffer of the accelerator may receive a third row of the IFM from the memory. A second group comprising a plurality of tiles may receive the third row of the IFM. A plurality of processing elements of the second group may compute a portion of a third row of the OFM based on the third row of the IFM and the kernel as part of a grouped convolution operation. | 2020-07-23 |
20200233804 | ACCELERATING REPLICATION OF PAGE TABLES FOR MULTI-SOCKET MACHINES - Described herein is a method for tracking changes made by an application. Embodiments include determining, by a processor, a write-back of a cache line from a hardware unit associated with a socket of a plurality of sockets to a page table entry of a page table in a memory location associated with the processor. Embodiments include adding, by the processor, the cache line to a list of dirty cache lines. Embodiments include, for each respective cache line in the list of dirty cache lines, identifying, by the processor, a memory location associated with a respective socket of the plurality of sockets corresponding to the respective cache line and updating, by the processor, an entry of a page table replica at the memory location based on the respective cache line. | 2020-07-23 |
20200233805 | METHOD AND APPARATUS FOR PERFORMING PIPELINE-BASED ACCESSING MANAGEMENT IN A STORAGE SERVER - A method for performing pipeline-based accessing management in a storage server and associated apparatus are provided. The method includes: in response to a request of writing user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and start processing an object write command corresponding to the request of writing the user data with a pipeline architecture of the storage server; utilizing the host device to select a fixed size buffer pool from a plurality of fixed size buffer pools; utilizing the host device to allocate a buffer from the fixed size buffer pool to be a pipeline module of at least one pipeline within the pipeline architecture, for performing buffering for the at least one pipeline; and utilizing the host device to write metadata corresponding to the user data into the allocated buffer. | 2020-07-23 |
20200233806 | APPARATUS, METHOD, AND SYSTEM FOR ENHANCED DATA PREFETCHING BASED ON NON-UNIFORM MEMORY ACCESS (NUMA) CHARACTERISTICS - Apparatus, method, and system for enhancing data prefetching based on non-uniform memory access (NUMA) characteristics are described herein. An apparatus embodiment includes a system memory, a cache, and a prefetcher. The system memory includes multiple memory regions, at least some of which are associated with different NUMA characteristics (access latency, bandwidth, etc.) than others. Each region is associated with its own set of prefetch parameters that are set in accordance with their respective NUMA characteristics. The prefetcher monitors data accesses to the cache and generates one or more prefetch requests to fetch data from the system memory to the cache based on the monitored data accesses and the set of prefetch parameters associated with the memory region from which data is to be fetched. The set of prefetcher parameters may include prefetch distance, training-to-stable threshold, and throttle threshold. | 2020-07-23 |
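The per-region prefetch parameters can be illustrated by a lookup that picks a prefetch distance from whichever region the demand address falls in; a region backed by slower (higher-latency) NUMA memory would be configured with a larger distance. The region-table layout below is hypothetical.

```python
def prefetch_addresses(addr, stride, regions):
    """Pick the prefetch distance from the parameters of the memory
    region containing `addr` and return the addresses to prefetch.

    regions: list of (base, limit, params) where params['distance'] is
    tuned to the region's NUMA latency (illustrative layout).
    """
    for base, limit, params in regions:
        if base <= addr < limit:
            d = params["distance"]
            return [addr + stride * i for i in range(1, d + 1)]
    return []   # address not covered by any region: no prefetch
```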
20200233807 | SECURE MEMORY REPARTITIONING TECHNOLOGIES - Secure memory repartitioning technologies are described. Embodiments of the disclosure may include a processing device including a processor core and a memory controller coupled between the processor core and a memory device. The memory device includes a memory range including a section of convertible pages that are convertible to secure pages or non-secure pages. The processor core is to receive a non-secure access request to a page in the memory device, responsive to a determination, based on one or more secure state bits in one or more secure state bit arrays, that the page is a secure page, insert an abort page address into a translation lookaside buffer, and responsive to a determination, based on the one or more secure state bits in the one or more secure state bit arrays, that the page is a non-secure page, insert the page into the translation lookaside buffer. | 2020-07-23 |
20200233808 | DATA PROCESSING SYSTEMS - A cache is disclosed in which a dedicated cache portion comprising one or more extra lines dedicated for data of a particular data type is provided alongside a shared cache portion. So long as there is a cache line available in the shared cache portion, data can be written into the shared cache portion. However, when the shared cache portion is fully locked such that no new data can be written into the shared cache portion, data can instead be written to its respective dedicated cache portion, effectively bypassing the fully locked shared cache portion. | 2020-07-23 |
20200233809 | MEASUREMENT SYSTEM AND METHOD FOR OPERATING A MEASUREMENT SYSTEM - Provided is a method for operating a measurement system including an evaluation module and several measuring elements. The evaluation module and the measuring elements are connected via a communication line. The method includes detecting measurement data via the several measuring elements. At least two of the measuring elements detect the measurement data at least partially at the same time. The method further includes: buffering the detected measurement data in the respective measuring element; and reading out the measurement data buffered in the measuring elements with the evaluation module via the communication line. | 2020-07-23 |
20200233810 | FACILITATION OF IMPACT NODE REBOOT MANAGEMENT IN A DISTRIBUTED SYSTEM - Node resets in a distributed environment can be disruptive due to the need to reset shared state. However, a central system can notify all other nodes asynchronously of a pending event, and then multiple nodes can use that notification to mitigate costs when it actually happens. For example, in anticipation of a first node leaving a group of nodes, a second node can reduce its cache to store the cache from the first node. Additionally, a client device can be directed to the second node so as not to interrupt a service provided to the client device by the first node. | 2020-07-23 |
20200233811 | SYSTEMS AND METHODS FOR REPLACING DATA RETRIEVED FROM MEMORY - An electronic system such as an imaging system may include processing circuitry and memory circuitry. Data replacement circuitry may be interposed between the processing circuitry and the memory circuitry. In some implementations, the memory circuitry may be a read-only memory, and data replacement circuitry may be used to selectively replace executable firmware instructions stored on the read-only memory. The selective replacement operations may be based on an address that processing circuitry provides to access the memory circuitry. The data replacement circuitry may be implemented separately from the processing circuitry and the memory circuitry and may include a comparator block, registers, and switching circuitry. | 2020-07-23 |
20200233812 | STORAGE DEVICE AND METHOD OF OPERATING THE SAME - A memory controller for controlling a memory device including a plurality of pages is provided. The memory controller comprises: an input data controller configured to receive data to be stored in a first page selected from among the plurality of pages; a sequence information generator configured to generate sequence information indicating a sequential order of a program operation of storing the data in the first page based on sequential orders of program operations performed before the program operation; and a write operation controller configured to control the memory device to store the data in a first area of the first page and to store history information in a second area of the first page, wherein the history information includes a physical address of the first page and the sequence information corresponding to the data. | 2020-07-23 |
20200233813 | HOST ADDRESS SPACE IDENTIFIER FOR NON-UNIFORM MEMORY ACCESS LOCALITY IN VIRTUAL MACHINES - Aspects of the disclosure provide for implementing host address space identifiers for non-uniform memory access (NUMA) locality in virtual machines. A method of the disclosure includes determining, by a virtual machine (VM) executed by a processing device and managed by a hypervisor, that a memory page of the guest is to be moved from a first virtual non-uniform memory access (NUMA) node of the VM to a second virtual NUMA node of the VM. The method further includes updating, by the VM in a guest page table, upper bits of a guest physical address (GPA) of the memory page to include a host address space identifier (HASID) of the second virtual NUMA node, and causing an execution control to be transferred from the VM to the hypervisor due to a page fault resulting from attempting to access the updated GPA. | 2020-07-23 |
20200233814 | PROGRAMMABLE ADDRESS RANGE ENGINE FOR LARGER REGION SIZES - Examples described herein relate to a computing system supporting custom page sized ranges for an application to map contiguous memory regions instead of many smaller sized pages. An application can request a custom range size. An operating system can allocate a contiguous physical memory region to a virtual address range by specifying custom range sizes that are larger or smaller than the normal general page sizes. Virtual-to-physical address translation can occur using address range circuitry and a translation lookaside buffer in parallel. The address range circuitry can determine if a custom entry is available to use to identify a physical address translation for the virtual address. Physical address translation can be performed by transforming the virtual address in some examples. | 2020-07-23 |
20200233815 | STORAGE IN A NON-VOLATILE MEMORY - A non-volatile memory is organized in pages and has a word writing granularity of one or more bytes and a block erasing granularity of one or more pages. Logical addresses are scrambled into physical addresses used to perform operations in the non-volatile memory. The scrambling includes scrambling logical data addresses based on a page structure of the non-volatile memory and scrambling logical code addresses based on a word structure of the non-volatile memory. | 2020-07-23 |
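The two granularities can be illustrated with a toy scrambler that permutes the unit index while preserving the offset within the unit, pages for data addresses and words for code addresses, so scrambled addresses still respect the memory's page and word structure. The XOR permutation is an illustrative stand-in for the real scrambling function.

```python
def scramble(logical_addr, key, granularity):
    """Scramble a logical address into a physical one at the given
    granularity: the unit index is permuted (toy XOR permutation)
    while the offset inside the unit is preserved."""
    unit = logical_addr // granularity
    offset = logical_addr % granularity
    return (unit ^ key) * granularity + offset
```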
20200233816 | MULTIPLE GUARD TAG SETTING INSTRUCTION - An apparatus has memory access circuitry to perform a tag-guarded memory access operation in response to a target address. The tag-guarded memory access operation comprises: comparing an address tag associated with the target address with a guard tag stored in a memory system in association with a block of one or more memory locations comprising an addressed location identified by the target address, and generating an indication of whether a match is detected between the guard tag and the address tag. An instruction decoder decodes a multiple guard tag setting instruction to control the memory access circuitry to trigger memory accesses to update the guard tags associated with at least two consecutive blocks of one or more memory locations. | 2020-07-23 |
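A sketch of the tag-guarded access check and the multiple-guard-tag set, with MTE-like field widths (4-bit tags carried in the pointer's top bits, 16-byte blocks) assumed for illustration:

```python
def tag_guarded_access(target_addr, memory_tags, block_size=16, tag_bits=4):
    """Compare the address tag (top bits of the pointer) with the guard
    tag stored for the addressed block; return whether they match."""
    addr_tag = (target_addr >> 56) & ((1 << tag_bits) - 1)
    location = target_addr & ((1 << 56) - 1)       # strip the tag bits
    guard_tag = memory_tags[location // block_size]
    return addr_tag == guard_tag

def set_guard_tags(memory_tags, start_block, count, tag):
    """Multiple-guard-tag setting: update the guard tags of `count`
    consecutive blocks in one operation, as the instruction does."""
    for b in range(start_block, start_block + count):
        memory_tags[b] = tag
```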
20200233817 | DRIVER-TO-DRIVER COMMUNICATION - An example system for driver-to-driver communication can include a first driver located on a first network device and including a transmit data mover (XDM) to send a preformatted message over a fabric interconnect to a second driver located on a second network device. The example system can also include the second driver located on the second network device and including a receive data mover (RDM) to receive the preformatted message, generate an interrupt responsive to receipt of the preformatted message, and route the interrupt to the second driver. The second driver can read the preformatted message responsive to receipt of the interrupt. | 2020-07-23 |
20200233818 | METHODS, FLASH MEMORY CONTROLLER, AND ELECTRONIC DEVICE FOR SD MEMORY CARD DEVICE - A method for controlling data transmission mode of an SD memory card device, which at least operates under an SD mode, includes: sending a first power signal from an electronic device to the SD memory card device via pin VDD | 2020-07-23 |
20200233819 | MEMORY RANK DESIGN FOR A MEMORY CHANNEL THAT IS OPTIMIZED FOR GRAPH APPLICATIONS - An apparatus is described. The apparatus includes a rank of memory chips to couple to a memory channel. The memory channel is characterized as having eight transfers of eight bits of raw data per burst access. The rank of memory chips has first, second and third X4 memory chips. The X4 memory chips conform to a JEDEC dual data rate (DDR) memory interface specification. The first and second X4 memory chips are to couple to an eight bit raw data portion of the memory channel's data bus. The third X4 memory chip is to couple to an error correction coding (ECC) information portion of the memory channel's data bus. | 2020-07-23 |
20200233820 | ENHANCING PROCESSING PERFORMANCE OF A DNN MODULE BY BANDWIDTH CONTROL OF FABRIC INTERFACE - An exemplary computing environment having a DNN module can maintain one or more bandwidth throttling mechanisms. Illustratively, a first throttling mechanism can specify the number of cycles to wait between transactions on a cooperating fabric component (e.g., data bus). Illustratively, a second throttling mechanism can be a transaction count limiter that operatively sets a threshold of a number of transactions to be processed during a given transaction sequence and limits the number of transactions in flight so that they do not exceed the set threshold. In an illustrative operation, in executing these two exemplary calculated throttling parameters, the average bandwidth usage and the peak bandwidth usage can be limited. Operatively, with this fabric bandwidth control, the processing units of the DNN are optimized to process data across each transaction cycle resulting in enhanced processing and lower power consumption. | 2020-07-23 |
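The two throttles, a minimum idle gap between issues and a cap on transactions in flight, can be modeled with a small scheduler. The cycle accounting below is an illustrative simplification, not the DNN module's fabric controller.

```python
def throttle(transactions, gap_cycles, max_in_flight):
    """Schedule bus transactions under two throttles: a minimum number
    of idle cycles between issues, and a cap on transactions in flight.

    transactions: list of (ready_cycle, duration) pairs; returns the
    cycle at which each transaction is issued (illustrative model).
    """
    issue_cycles, in_flight = [], []     # in_flight holds completion cycles
    cycle = 0
    for ready, duration in transactions:
        cycle = max(cycle, ready)
        in_flight = [c for c in in_flight if c > cycle]
        while len(in_flight) >= max_in_flight:
            cycle = min(in_flight)                  # wait for a completion
            in_flight = [c for c in in_flight if c > cycle]
        issue_cycles.append(cycle)
        in_flight.append(cycle + duration)
        cycle += gap_cycles                         # enforced idle gap
    return issue_cycles
```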
20200233821 | UNIDIRECTIONAL INFORMATION CHANNEL TO MONITOR BIDIRECTIONAL INFORMATION CHANNEL DRIFT - An N-bit bus includes (N−1) bidirectional interfaces to couple to (N−1) bidirectional signal lines to exchange (transmit and receive) signals between companion devices. The bus includes two unidirectional signal line interfaces. The first is a unidirectional receive interface to couple to a unidirectional signal line to receive signals from the companion device. The second is a unidirectional transmit interface to couple to a unidirectional signal line to transmit signals to the companion device. The bus provides N signal lines for the N-bit bus in each direction, with an additional “backwards facing” signal line. The backwards facing signal line can allow the devices to prepare for a switch in the direction of the N-bit bus. | 2020-07-23 |
20200233822 | DISPLAY APPARATUS AND CONTROL METHOD FOR HIGH DISPLAY BANDWIDTH THEREOF - A display apparatus is provided. The display apparatus includes a display panel and a display controller. The display controller is electrically connected to a USB Type-C interface of a host via a USB Type-C interface of the display apparatus. In response to the USB Type-C interface of the display apparatus being in a USB Type-C default pin-assignment mode, the display controller receives an image signal from the host via two USB SuperSpeed channels of the USB Type-C interface of the display apparatus. In response to a display mode of the display apparatus satisfying a specific condition, the display controller controls the USB Type-C interface of the display apparatus to enter a USB Type-C first pin assignment mode, so that the host utilizes the four USB SuperSpeed channels of the USB Type-C interface of the display apparatus to transmit the image signal to the display controller. | 2020-07-23 |
20200233823 | Connector, NVMe Storage Device, and Computer Device - A connector includes a first pin which is configured to indicate an in-service signal, a second pin which is configured to indicate a power supply signal, a third pin which is configured to indicate a clock signal, and a fourth pin which is configured to indicate a PCIe port signal; the first pin, the second pin, the third pin, and the fourth pin have an equal length; and the connector includes a first face and a second face, a limiting structure is arranged on the first face, the limiting structure is a boss or a groove, and the first pin is located in the middle of the first face. | 2020-07-23 |
20200233824 | INTERFACES SWITCHING CIRCUIT AND DEVICE - Provided is an interface switching circuit which is arranged on a first circuit board and a second circuit board. The first circuit board is provided with a Type-C interface, a protocol configuration chip, a HUB module, a video interface module, a USB interface module, and a network interface module. The second circuit board is provided with a power module. The Type-C interface is connected to the HUB module, the video interface module, and the protocol configuration chip respectively. The HUB module is connected to the USB interface module, the protocol configuration chip and the network interface module respectively, and the video interface module is connected to the protocol configuration chip. The first circuit board is electrically connected to the second circuit board, so that the power module is electrically connected to the Type-C interface, the protocol configuration chip, the HUB module and the network interface module. | 2020-07-23 |
20200233825 | MODULAR SYSTEM ARCHITECTURE FOR SUPPORTING MULTIPLE SOLID-STATE DRIVES - A rack-mountable data storage system includes: a chassis including one or more switchboards; a midplane interfacing with the one or more switchboards; and one or more data storage devices removably coupled to the midplane using a connector. At least one data storage device of the one or more data storage devices include a logic device to interface with the midplane. The logic device provides a device-specific interface of a corresponding data storage device with the midplane. The at least one data storage device is configured using the logic device according to a first protocol based on a signal on a pin of the connector, and the at least one data storage device is reconfigurable according to a second protocol based on a change of the signal on the pin of the connector using the logic device. | 2020-07-23 |
20200233826 | METHOD FOR MAINTAINING MEMORY SHARING IN A COMPUTER CLUSTER - A method includes: by an application executed by a first node, determining whether a non-transparent bridge (NTB) between the first node and a second node is in a disconnected state; sending a re-initialization request from the application to a driver executed by the first node when the NTB is in the disconnected state; re-initializing a memory of the first node upon the driver receiving the re-initialization request; transmitting a result message related to the re-initialization of the memory to the second node; and implementing a memory-sharing procedure upon completing the re-initialization of the memory and receiving, from the second node, another result message related to re-initialization of a memory of the second node. | 2020-07-23 |
20200233827 | BUS DECODE AND TRIGGERING ON DIGITAL DOWN CONVERTED DATA IN A TEST AND MEASUREMENT INSTRUMENT - A test and measurement instrument including a digital down converter configured to receive a bus signal and output in-phase and quadrature-phase baseband component waveform data, a trace generator configured to receive the in-phase and quadrature-phase baseband component waveform data and generate at least one radio frequency versus time trace, a decoder configured to receive the at least one radio frequency versus time trace and decode the bus signal based on the at least one radio frequency versus time trace and a wireless modulation scheme, and a trigger configured to capture at least a portion of the bus signal based on the decoded bus signal. | 2020-07-23 |
20200233828 | SEMICONDUCTOR LAYERED DEVICE WITH DATA BUS INVERSION - Apparatuses and methods of data transmission between semiconductor chips are described. An example apparatus includes: a data bus inversion (DBI) circuit that receives first, second and third input data in order, and further provides first, second and third output data, either with or without data bus inversion. The DBI circuit includes a first circuit that latches the first input data and the third input data; a second circuit that latches the second input data; a first DBI calculator circuit that performs first DBI calculation on the latched first input data and the latched second input data responsive to the first circuit latching the first input data and the second circuit latching the second input data, respectively; and a second DBI calculator circuit that performs second DBI calculation on the latched second data and the latched third input data responsive to the first circuit latching the third input data. | 2020-07-23 |
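The underlying DBI decision that such calculator circuits implement can be shown in software: invert a word whenever sending it unchanged would toggle more than half of the bus lines relative to the previously transmitted word (AC-DBI). The pipelined two-stage latching described in the abstract is not modeled here.

```python
def dbi_encode(words, width=8):
    """AC-DBI encoding: for each word, count the lines that would toggle
    versus the previously transmitted word and invert when more than
    half would toggle. Returns (transmitted_word, dbi_flag) pairs."""
    mask = (1 << width) - 1
    out, prev = [], 0
    for w in words:
        toggles = bin((w ^ prev) & mask).count("1")
        invert = toggles > width // 2
        sent = (~w & mask) if invert else w
        out.append((sent, invert))       # the DBI flag travels on its own line
        prev = sent
    return out
```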
20200233829 | MULTI-LANE SYSTEM POWER MANAGEMENT INTERFACE - Systems, methods, and apparatus related to the operation of a multilane serial bus communicate the configuration of lanes used to handle a transaction over the serial bus through signaling transmitted at the commencement of the transaction. The method includes asserting a multilane bus request by initiating a pulse on a secondary data lane of the serial bus while the clock lane is idle, participating in a first bus arbitration procedure executed using the secondary data lane after the pulse is terminated, providing initial signaling on the secondary data lane after winning the first bus arbitration procedure to indicate a set of data lanes to be used during a transaction, and executing a first transaction using the set of data lanes. The set of data lanes may include the primary data lane and the secondary data lane. The initial signaling may include a sequence start condition. | 2020-07-23 |
20200233830 | ADJUSTABLE POWER DELIVERY SCHEME FOR UNIVERSAL SERIAL BUS - Described is an apparatus which comprises: an adjustable power supply source to generate an adjustable power supply; a node to provide the adjustable power supply to a device; and a bus which is operable to: send a first message to the device indicating that the adjustable power supply source is capable of dynamically providing an adjustable power supply; and receive a request from the device, the request indicating a new voltage or current specification. | 2020-07-23 |
20200233831 | SELF-CONFIGURING SSD MULTI-PROTOCOL SUPPORT IN HOST-LESS ENVIRONMENT - A device that may configure itself is disclosed. The device may include an interface that may be used for communications with a chassis. The interface may support a plurality of transport protocols. The device may include a Vital Product Data (VPD) reading logic to read a VPD from the chassis and a built-in self-configuration logic to configure the interface to use one of the transport protocols and to disable alternative transport protocols, responsive to the VPD. | 2020-07-23 |
20200233832 | METHOD FOR TRAINING MULTICHANNEL DATA RECEIVER TIMING - An apparatus includes a first device having a clock signal and configured to communicate, via a data bus, with a second device configured to assert a data strobe signal and a plurality of data bit signals on the data bus. The first device may include a control circuit configured, during a training phase, to determine relative timing between the clock signal, the plurality of data bit signals, and the data strobe signal. The first device may determine, using a first set of sampling operations, a first timing relationship of the plurality of data bit signals relative to the data strobe signal, and determine, using a second set of sampling operations, a second timing relationship of the plurality of data bit signals and the data strobe signal relative to the clock signal. During an operational phase, the control circuit may be configured to use delays based on the first and second timing relationships to sample data from the second device on the data bus. | 2020-07-23 |
20200233833 | GROUP-BASED DATA REPLICATION IN MULTI-TENANT STORAGE SYSTEMS - Distributed storage systems, devices, and associated methods of data replication are disclosed herein. In one embodiment, a server in a distributed storage system is configured to write, with an RDMA enabled NIC, a block of data from a memory of the server to a memory at another server via an RDMA network. Upon completion of writing the block of data to the another server, the server can also send metadata representing a memory location and a data size of the written block of data in the memory of the another server via the RDMA network. The sent metadata is to be written into a memory location containing data representing a memory descriptor that is a part of a data structure representing a pre-posted work request configured to write a copy of the block of data from the another server to an additional server via the RDMA network. | 2020-07-23 |
20200233834 | COMPUTER-IMPLEMENTED APPARATUS AND METHOD FOR PROCESSING DATA - Provided is a computer-implemented apparatus for processing data, comprising a digital chip having at least one part that is reconfigurable by a number N of configuration descriptions (N≥1), a determined configuration description from the number N for reconfiguring the reconfigurable part, and a providing unit for providing an identifier specific to the determined configuration description by using a number A of derivation parameters (A≥1) comprising the determined configuration description, wherein the part reconfigured with the determined configuration description is set up to perform a cryptographic function on determined data by using the provided specific identifier to generate cryptographically processed data. This allows security-relevant functions to be implemented as configuration descriptions, which has the advantage of increasing security when processing data in digital chips. | 2020-07-23 |
20200233835 | SNAPSHOT ARCHIVE MANAGEMENT - At least a portion of data of a tree data structure is serialized into a first set of flat data. At least a portion of a structure of the tree data structure is serialized to reproduce at least the portion of the structure in a second set of flat data. It is determined to access a desired data of the tree data structure from an archive. The second set of flat data is read to reconstitute at least the portion of a structure of the tree data structure. The reconstituted structure of the tree data structure is used to determine a data offset associated with the first set of flat data, wherein the data offset corresponds to the desired data. The desired data is accessed using the data offset associated with the first set of flat data. | 2020-07-23 |
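The two flat data sets in 20200233835 can be sketched as a concatenated value blob (the first set) plus a serialized structure index of offsets and sizes (the second set), which is read back to locate a desired datum without deserializing the whole archive. The `{path: bytes}` tree shape, the JSON encoding, and the helper names are assumptions for illustration only:

```python
import json

def serialize(tree):
    """Flatten a {path: bytes} tree: values are packed into one blob
    (first flat set) and the structure with (offset, size) entries is
    serialized separately (second flat set)."""
    blob, index, off = b"", {}, 0
    for path, val in tree.items():
        index[path] = (off, len(val))  # data offset into the blob
        blob += val
        off += len(val)
    return blob, json.dumps(index).encode()

def read(blob, index_bytes, path):
    """Reconstitute the structure, look up the data offset for the
    desired path, and slice only that region of the flat data."""
    index = json.loads(index_bytes)
    off, size = index[path]
    return blob[off:off + size]

blob, idx = serialize({"/a": b"hello", "/b/c": b"world"})
print(read(blob, idx, "/b/c"))  # b'world'
```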
20200233836 | COMPUTER SYSTEM AND METHOD OF PRESENTING INFORMATION USEFUL FOR ACHIEVING PURPOSES RELATED TO OBJECT - Provided is a computer system to present information useful for achieving purposes related to an object by utilizing AI prediction. The computer system manages a prediction model for predicting an object event based on evaluation data and a feature profiling database that defines a change rule for each of a plurality of feature values included in the evaluation data, generates change policy data by changing the plurality of feature values included in the evaluation data based on the feature profiling database, calculates an evaluation value indicating effectiveness of the change policy data, and generates display data for presenting the change policy data and the evaluation value as information useful for achieving purposes related to the object. | 2020-07-23 |
20200233837 | INTELLIGENT METHOD TO INDEX STORAGE SYSTEM FILES ACCOUNTING FOR SNAPSHOTS - Indexing files to account for snapshots can include generating, based on a scan of the storage system, one or more file records. Each file record of the one or more file records can correspond to a file stored on the storage system at the time of the scan. The file records can be maintained based on one or more notifications received from the storage system. A snapshot list of the storage system can be maintained, the snapshot list having snapshot data corresponding to one or more snapshots stored on the storage system. A search result can be generated that satisfies a search parameter based at least on a) the one or more file records and/or b) the snapshot list. | 2020-07-23 |
20200233838 | INTELLIGENT METHOD TO GENERATE AND SYNC VIRTUAL MACHINE FILE METADATA FOR SEARCH - File metadata of a virtual machine can be generated when performing a backup of a virtual machine. A backup copy of the virtual machine and the file metadata can be stored in a backup storage system. The file metadata can be retrieved from the storage system in a manner that is decoupled from the performance of the backup of the virtual machine. The file metadata can be used for searching for files within the backup copy of the virtual machine. | 2020-07-23 |
20200233839 | DEFRAGMENTING METADATA OF A FILESYSTEM - A device implementing a system for defragmenting metadata of a filesystem includes a processor configured to, in response to receiving a trigger from a server remote from the device, obtain the metadata from a first data structure, the first data structure comprising a first set of one or more nodes and a second set of one or more nodes, and insert the metadata obtained from the first data structure into a third set of one or more nodes of a second data structure, wherein the third set of one or more nodes omits one or more entries from the second set of nodes. The at least one processor is further configured to, in accordance with a determination that the metadata was successfully inserted into the second data structure, provide the second data structure as a replacement of the first data structure for the filesystem. | 2020-07-23 |
20200233840 | EFFICIENT DATABASE MIGRATION USING AN INTERMEDIARY SECONDARY STORAGE SYSTEM - A portion of contents of a database is received from a first server. The received contents of the database are stored in a secondary storage system that tracks changes between different backup versions of contents of the database. A request to migrate the contents of the database to a second server is received. A version of contents of the database is provided to the second server using the secondary storage system. The secondary storage system is configured to determine an amount of changes to the database content from one of the versions of the database content provided to the second server, and the amount of changes is utilized in determining whether to quiesce the database hosted on the first server. | 2020-07-23 |
20200233841 | EFFICIENT DATABASE TABLE ROTATION AND UPGRADE - A database server may include a master table schema that defines a database table's configuration and an arrangement for corresponding shadow tables. The shadow tables contain data related to contiguous and non-overlapping time periods, and writing to the shadow tables occurs in a rotational fashion so that only one active table is written to at any point. The server may upgrade the master table schema. The server then may determine that a rotation event has occurred where a first shadow table is active and a second shadow table is associated with an oldest of the contiguous and non-overlapping time periods. In response, the server may delete data in the second table, determine that the master table schema has been upgraded since the second table was most recently active, upgrade the second table's schema to match the upgraded master table schema, and set the second table to active, enabling writing to the second table. | 2020-07-23 |
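The rotation-and-upgrade sequence in 20200233841 can be sketched with a hypothetical in-memory model. The class, the ring-order choice of "oldest" table, and the integer schema versions are illustrative assumptions, not the patent's implementation:

```python
class RotatingTable:
    """Sketch of shadow-table rotation: N shadow tables cover contiguous
    time periods; on rotation the oldest table is truncated, upgraded to
    the current master schema if stale, and made the new active table."""

    def __init__(self, n, schema_version=1):
        self.schema_version = schema_version  # master table schema version
        self.tables = [{"rows": [], "schema": schema_version} for _ in range(n)]
        self.active = 0

    def upgrade_schema(self, new_version):
        # Only the master schema changes here; shadow tables catch up lazily.
        self.schema_version = new_version

    def rotate(self):
        oldest = (self.active + 1) % len(self.tables)  # next in the ring
        t = self.tables[oldest]
        t["rows"].clear()                        # delete the old period's data
        if t["schema"] != self.schema_version:   # upgraded while inactive?
            t["schema"] = self.schema_version    # match the master schema
        self.active = oldest                     # enable writing to it

    def write(self, row):
        self.tables[self.active]["rows"].append(row)
```

Because only the inactive, just-truncated table is ever upgraded, schema changes never touch a table that is concurrently receiving writes.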
20200233842 | FILTER SUGGESTION FOR SELECTIVE DATA IMPORT - When tenants migrate data from on-premises archiving solutions to a hosted service, tenants should maintain just enough data for compliance purposes and dispose of data that is no longer needed to reduce overall liability and compliance risk exposure. Embodiments are directed to providing selective import of data to a hosted service through a security and compliance system associated with the hosted service to reduce overall liability and compliance risk exposure. Data, usage pattern and security/compliance policies associated with a tenant of the hosted service may be analyzed. A model for importing tenant data may be created based on the analysis. A suggestion may be presented to the tenant based on the model, where the suggestion includes a filter for importing tenant data. In response to receiving a confirmation to implement the suggestion, the filter may be applied to the tenant data as it is imported to the hosted service. | 2020-07-23 |
20200233843 | SYSTEMS, METHODS, AND DATA STRUCTURES FOR HIGH-SPEED SEARCHING OR FILTERING OF LARGE DATASETS - An inline tree data structure and one or more auxiliary data structure encode a multitude of data records of a dataset; data fields of the dataset define a tree hierarchy. The inline tree comprises one binary string for each data record that are all the same length, are arranged in an ordered sequence that corresponds to the tree hierarchy, and include an indicator string indicating position in the tree hierarchy of each data record relative to an immediately adjacent data record. A search program is guided through the dataset by interrogating each indicator string in the inline tree data structure so as to reduce unnecessary interrogation of data field values. | 2020-07-23 |
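The inline tree of 20200233843 can be sketched as fixed-length binary entries, each carrying an indicator of its depth in the tree hierarchy, which a search interrogates to skip subtrees that cannot match. The one-byte depth indicator, 8-byte value field, and depth-first layout are assumptions chosen only to illustrate the encoding:

```python
import struct

REC = struct.Struct(">B8s")  # 1-byte depth indicator + fixed 8-byte value

def encode(records):
    """records: list of (depth, value) pairs in depth-first tree order.
    Every entry has the same length, so position i is at offset i*REC.size."""
    return b"".join(REC.pack(d, v.ljust(8)[:8].encode()) for d, v in records)

def find(blob, path):
    """Scan fixed-size entries; the depth indicator guides descent and
    lets the scan skip subtrees without interrogating their values."""
    want = 0  # depth of the next path component we need to match
    for i in range(0, len(blob), REC.size):
        depth, raw = REC.unpack_from(blob, i)
        val = raw.rstrip(b" ").decode()
        if depth == want and val == path[want]:
            want += 1
            if want == len(path):
                return True
        elif depth < want:
            return False  # we have left the matched subtree
    return False

tree = encode([(0, "US"), (1, "CA"), (2, "SF"), (1, "NY"), (0, "UK"), (1, "LDN")])
print(find(tree, ["US", "NY"]))  # True
```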
20200233844 | EVOLUTION OF COMMUNITIES DERIVED FROM ACCESS PATTERNS - A system, method, and machine-readable storage medium for resolving a candidate community are provided. In some embodiments, a method includes obtaining a candidate community and a neighbor set for the candidate community, the neighbor set including zero or more stable communities. The method also includes resolving the candidate community as being a new stable community if the neighbor set is empty. The method further includes resolving the candidate community as being part of a matching stable community if a hash value of the candidate community matches a hash value of one or more stable communities included in the neighbor set. The method also includes resolving the candidate community as being a new stable community if an entropy value is greater than a threshold, the entropy value being based on the candidate community and the neighbor set. | 2020-07-23 |
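The three resolution rules in 20200233844 (empty neighbor set, hash match, entropy above threshold) can be sketched as follows. The set-based community representation, the particular entropy measure over how candidate members distribute across neighbors, and the threshold value are all illustrative assumptions:

```python
import math
from collections import Counter

def entropy(members, neighbor_sets):
    """Hypothetical entropy measure: Shannon entropy of how candidate
    members distribute across the neighboring stable communities."""
    counts = Counter()
    for m in members:
        for i, s in enumerate(neighbor_sets):
            if m in s:
                counts[i] += 1
                break
        else:
            counts["none"] += 1  # member belongs to no neighbor
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def resolve(candidate, neighbors, threshold=1.0):
    """candidate: frozenset of members; neighbors: list of frozensets."""
    if not neighbors:
        return "new"          # empty neighbor set -> new stable community
    # Equal frozensets hash equally, so a hash match flags a matching
    # stable community (a real system would also compare contents).
    if any(hash(candidate) == hash(n) for n in neighbors):
        return "matched"
    if entropy(candidate, neighbors) > threshold:
        return "new"          # members spread too widely across neighbors
    return "unresolved"
```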
20200233845 | FILE INDEXING FOR VIRTUAL MACHINE BACKUPS BASED ON USING LIVE BROWSE FEATURES - An illustrative file indexing approach enhances what was previously possible with hypervisor-free live browsing of virtual machine (VM) block-level backup copies. Capabilities are described for indexing files discovered in VM block-level backup copies, including indexing of directory structures and file content. The illustrative file indexing functionality activates a live-browse session to discover files present within VM block-level backup copies and indexes file names and directory structures as created by an original source VM, resulting in an illustrative file index. The illustrative file indexing functionality optionally indexes file contents within VM block-level backup copies, resulting in an illustrative content index. The file index and content index are retained in persistent data structure(s) stored apart from the VM block-level backup copies. The indexes are searchable without mounting or live-browsing the VM block-level backup copies. In some embodiments the file index and the content index are consolidated. | 2020-07-23 |
20200233846 | FILE INDEXING FOR VIRTUAL MACHINE BACKUPS IN A DATA STORAGE MANAGEMENT SYSTEM - An illustrative file indexing approach enhances what was previously possible with hypervisor-free live browsing of virtual machine (VM) block-level backup copies. Capabilities are described for indexing files discovered in VM block-level backup copies, including file content. The illustrative file indexing functionality activates a live-browse session to discover files present within VM block-level backup copies and indexes file names and directory structures as created by an original source VM, resulting in an illustrative file index. The illustrative file indexing functionality optionally indexes file contents within VM block-level backup copies, resulting in an illustrative content index. The file index and content index are retained in persistent data structure(s) stored apart from the VM block-level backup copies. The indexes are searchable without mounting or live-browsing the VM block-level backup copies. In some embodiments the file index and the content index are consolidated. An enhanced storage manager is also disclosed. | 2020-07-23 |
20200233847 | INCREMENTAL DYNAMIC DOCUMENT INDEX GENERATION - A contextual index compendium that includes contextual index item generation rules that define document index entry generation transforms usable to transform text of the documents into embedded document index entries of document indexes within the documents is obtained by a processor. Using the document index entry generation transforms defined within the contextual index item generation rules in association with a document that includes embedded document index entries that are both embedded at locations of associated text distributed throughout the document and added as part of a document index within the document, new text of the document is programmatically transformed into at least one new document index entry in response to determining that at least one portion of the new text includes candidate text that is not already indexed within the existing embedded document index entries and the document index within the document. | 2020-07-23 |
20200233848 | ELASTIC DATA PARTITIONING OF A DATABASE - A database entry may be stored in a container in a database table corresponding with a partition key. The partition key may be determined by applying one or more partition rules to one or more data values associated with the database entry. The database entry may be an instance of one of a plurality of data object definitions associated with database entries in the database. Each of the data object definitions may identify a respective one or more data fields included within an instance of the data object definition. | 2020-07-23 |
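The partition-rule mechanism in 20200233848 can be sketched as an ordered list of (predicate, key-function) rules applied to a database entry's data values; the first matching rule yields the partition key that names the container. The rule shapes, field names, and fallback key below are assumptions for illustration:

```python
def partition_key(entry, rules):
    """Apply ordered partition rules to an entry's data values; the
    first rule whose predicate matches yields the partition key."""
    for predicate, key_fn in rules:
        if predicate(entry):
            return key_fn(entry)
    return "default"  # fallback container for unmatched entries

# Hypothetical rules: EU entries partition by region+year, others by year.
rules = [
    (lambda e: e.get("region") == "eu", lambda e: f"eu-{e['year']}"),
    (lambda e: "year" in e,             lambda e: f"global-{e['year']}"),
]

print(partition_key({"region": "eu", "year": 2020}, rules))  # eu-2020
```

Because the rules are data rather than schema, they can be revised as the dataset grows, which is what makes the partitioning "elastic."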
20200233849 | Database Modification and Processing System - Disclosed herein are system, method, and computer program product embodiments for database modification and processing functionality. An embodiment operates by providing a batch of values stored in rows corresponding to a particular column responsive to a request to encrypt the values of the particular column. Encrypted values corresponding to the batch of values are received and stored in a hidden column. A status of the rows corresponding to the batch of values of the hidden column is updated to indicate in which rows of the hidden column the received encrypted values have been stored. Updated encrypted values are received and stored in the hidden column. The particular column is replaced with the hidden column. | 2020-07-23 |
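The batch-wise hidden-column workflow of 20200233849 can be sketched in memory: values are handed out in batches, encrypted values come back into a shadow column with per-row status tracking, and the original column is swapped out only once every row is marked done. The list-of-dicts table, batch size, and `encrypt` callable are illustrative assumptions:

```python
def encrypt_column(table, column, encrypt, batch_size=2):
    """Sketch: batch-encrypt one column via a hidden shadow column,
    tracking per-row status, then replace the column with the shadow."""
    hidden = [None] * len(table)   # the hidden column
    status = [False] * len(table)  # which rows hold encrypted values
    for start in range(0, len(table), batch_size):
        for i in range(start, min(start + batch_size, len(table))):
            hidden[i] = encrypt(table[i][column])  # receive encrypted value
            status[i] = True                       # mark the row as stored
    assert all(status)                 # every row migrated before the swap
    for i, row in enumerate(table):
        row[column] = hidden[i]        # replace column with hidden column
    return table

table = [{"v": "a"}, {"v": "b"}, {"v": "c"}]
encrypt_column(table, "v", str.upper)  # stand-in for real encryption
print([row["v"] for row in table])     # ['A', 'B', 'C']
```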
20200233850 | DATABASE SYSTEM FOR TRIGGERING EVENT NOTIFICATIONS BASED ON UPDATES TO DATABASE RECORDS - A data processing system is disclosed for accessing databases and updated data items and triggering event notifications. The data processing system may comprise a first database including a plurality of records, and a second database including a plurality of trigger indicators. The database system may further include a hardware processor configured to execute computer-executable instructions in order to: receive an update data item; identify a record corresponding to the update data item; cause an update to the record based on information included with the update data item; identify a trigger indicator corresponding to the update to the record; determine that a type of the trigger indicator matches a type of the update to the record; and generate an event notification including information included in the update. | 2020-07-23 |
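The update-then-notify flow of 20200233850 can be sketched as: identify the record for an update data item, apply it, then fire notifications for any trigger indicators whose type matches the update's type. The dict-based record store, trigger shape, and field names are assumptions for illustration:

```python
def apply_update(records, triggers, update):
    """Sketch: apply an update data item to its record, then generate
    event notifications for triggers whose type matches the update."""
    rec = records[update["record_id"]]      # identify the record
    rec.update(update["fields"])            # cause the update to the record
    events = []
    for trig in triggers.get(update["record_id"], []):
        if trig["type"] == update["type"]:  # trigger type must match update
            events.append({"trigger": trig["name"],
                           "info": update["fields"]})
    return events

records = {"r1": {"balance": 10}}
triggers = {"r1": [{"name": "low_bal", "type": "balance_change"}]}
events = apply_update(records, triggers,
                      {"record_id": "r1", "type": "balance_change",
                       "fields": {"balance": 5}})
print(events)  # [{'trigger': 'low_bal', 'info': {'balance': 5}}]
```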
20200233851 | RESOURCE EXPLOITATION MANAGEMENT SYSTEM, METHOD AND PROGRAM PRODUCT - A resource exploitation management system, method and a computer program product therefor. A description of new geological evidence for a geological resource is received, e.g., as one or more triples describing the evidence. Keywords in the description are matched against keywords in representations in a geological resource database. Geological relations are inferred from the descriptions and matched against predefined geological relations from the geological resource database. Consistent triple matches are merged with the geological resource database. The confidence level for merged matches is updated in the geological resource database. | 2020-07-23 |