3rd week of 2022 patent application highlights part 43 |
Patent application number | Title | Published |
20220019488 | Site Specific Notifications - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating site specific notifications for geographic sites. A set of sites and log requirements for the sites are associated. Each log requirement specifies a particular log item to be completed by a user by a completion time, and each site specifies a particular physical location. Sites and site users are also associated. Site users for a site are users specified as being responsible for the site. Each site user is associated with at least one log requirement for the site. Each site has an associated notified user. As site users complete logs for the site, or fail to complete logs, the notified user for the site is notified. | 2022-01-20 |
20220019489 | VARIABLE SELECTION OF DIFFERENT VERSIONS OF AN EVENT HANDLER - Embodiments of the present invention provide a method, system and computer program product for variable event handling in a multi-tenant environment. In an embodiment of the invention, a method for variable event handling in a multi-tenant environment includes receiving an event placed on an event bus in an event driven data processing system, the event corresponding to a multiplicity of different instances of a single event handler, with each instance having been adapted to process the event. The method additionally includes decoding the event to identify a version of a target application for the event and matching the version of the target application to an end point for a particular one of the different event handlers. Finally, the method includes routing the event to the matched end point. | 2022-01-20 |
20220019490 | BLOCKCHAIN EVENT PROCESSING METHOD AND APPARATUS - The present specification provides a blockchain event processing method and apparatus, applied to a control component of distributed event processing centers connected to a node device of a blockchain network. The distributed event processing centers obtain respective blockchain event streams from the node device, and deliver the obtained respective blockchain event streams to respective triggers included in the distributed event processing centers, so that when a blockchain event included in the blockchain event streams meets a corresponding trigger condition, a trigger of the triggers pushes the blockchain event to a service system connected to the trigger. The method includes: obtaining push speeds of a plurality of triggers included in an event processing center of the distributed event processing centers; and if a difference among the push speeds of the plurality of triggers is greater than a predetermined first threshold, establishing a new event processing center between the event processing center and a child event processing center of the event processing center, and moving the plurality of triggers to the new event processing center. | 2022-01-20 |
20220019491 | METHOD OF DETERMINING SPLIT SCHEME, DETERMINING DEVICE, AND COMPUTING SYSTEM - With respect to a method of determining a split scheme, the method includes calculating, by one or more processors, data related to data transfer time, for each combination of parallelization axes at respective layers of a hierarchical memory computer, based on data transfer methods, a size of a problem to be calculated, and communication bandwidths between the layers. The data transfer methods are determined by the parallelization axes, and the parallelization axes indicate how to split the problem. The method further includes determining, by the one or more processors, a combination of the parallelization axes based on the data related to the data transfer time calculated for each combination of the parallelization axes. | 2022-01-20 |
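The split-scheme abstract above describes scoring every combination of parallelization axes by the data-transfer time it implies, given per-layer bandwidths, and then choosing a combination. A minimal sketch of that search, under assumed toy numbers (the layer names, byte counts, and bandwidths here are illustrative, not from the patent):

```python
from itertools import product

# Hypothetical model: for each memory layer we choose a parallelization
# axis, which determines how many bytes cross that layer's link; the
# transfer time is bytes / bandwidth, summed across layers.
bandwidth = {"node": 100e9, "device": 1e9}   # bytes/sec per layer link
bytes_moved = {                              # bytes moved per axis choice
    "node":   {"x": 8e9, "y": 4e9},
    "device": {"x": 2e9, "y": 6e9},
}

def transfer_time(combo):
    """Total transfer time for one combination of parallelization axes."""
    return sum(bytes_moved[layer][axis] / bandwidth[layer]
               for layer, axis in combo.items())

# Enumerate every combination of axes across the layers and pick the
# combination with the smallest modeled transfer time.
combos = [dict(zip(bytes_moved, axes))
          for axes in product(["x", "y"], repeat=len(bytes_moved))]
best = min(combos, key=transfer_time)
```

With these numbers the slow device link dominates, so the search prefers the axis that moves fewer bytes across it, even at the cost of more traffic on the fast node link.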
20220019492 | SYSTEM AND METHOD FOR DISCOVERING INTERFACES IN A NETWORK - A system and method for discovering interfaces in a network is provided, wherein a remote system is configured to discover interfaces by connecting to a plurality of servers and intelligently stitching together interfaces that exist between different applications. The remote system is configured to identify these interfaces through message queue servers in the network and their queue managers. Further, stitching together the interfaces is based upon the hops that a message performs from one system to its target application using message queues. An interface name is created by reading the message header, the applications involved, and queue properties. The system's capabilities also include tracking the usage of interfaces based on the traffic that is flowing through them. The system provides a repeatable process to obtain an accurate repository of interfaces. | 2022-01-20 |
20220019493 | SYSTEMS AND METHODS FOR ON DEMAND SERVICE INTEGRATION - Systems and methods for on demand service integration. A system includes at least one processor and a storage medium storing instructions that, when executed by the at least one processor, cause the at least one processor to perform operations including receiving, from a customer system, a request to integrate a partner service with an integrator instance at the customer system and searching an integrator database for a partner service communication template based on the received request. The operations also include configuring the integrator instance to receive communications from a partner service instance based on the communication template and establishing a communication link between the integrator instance and the partner service instance. | 2022-01-20 |
20220019494 | FAILURE IMPACT ANALYSIS OF NETWORK EVENTS - Failure impact analysis (or "impact analysis") is a process that involves identifying effects that may or will result from a network event. In one example, this disclosure describes a method that includes generating, by a control system managing a resource group, a resource graph that models resource and event dependencies between a plurality of resources within the resource group; detecting, by the control system, a first event affecting a first resource of the plurality of resources, wherein the first event is a network event; and identifying, by the control system and based on the dependencies modeled by the resource graph, a second resource that is expected to be affected by the first event. | 2022-01-20 |
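The impact-analysis abstract above amounts to a reachability query over a dependency graph: given an event on one resource, find every resource downstream of it. A minimal sketch with a hypothetical resource graph (the resource names and edge direction convention are illustrative assumptions):

```python
from collections import deque

# Hypothetical resource graph: each edge points from a resource to the
# resources that depend on it.
dependents = {
    "switch1": ["router1", "router2"],
    "router1": ["vpn1"],
    "router2": [],
    "vpn1": [],
}

def impacted(resource):
    """Resources expected to be affected by an event on `resource`,
    found by breadth-first traversal of the dependents graph."""
    seen, queue = set(), deque(dependents.get(resource, []))
    while queue:
        r = queue.popleft()
        if r not in seen:
            seen.add(r)
            queue.extend(dependents.get(r, []))
    return sorted(seen)
```

For example, an event on `switch1` is expected to affect both routers and, transitively, the VPN behind `router1`.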
20220019495 | MACHINE LEARNING-BASED TECHNIQUES FOR PROVIDING FOCUS TO PROBLEMATIC COMPUTE RESOURCES REPRESENTED VIA A DEPENDENCY GRAPH - Methods, systems, apparatuses, and computer-readable storage mediums are described for machine learning-based techniques for reducing the visual complexity of a dependency graph that is representative of an application or service. For example, the dependency graph is generated that comprises a plurality of nodes and edges. Each node represents a compute resource (e.g., a microservice) of the application or service. Each edge represents a dependency between nodes coupled thereto. A machine learning-based classification model analyzes each of the nodes to determine a likelihood that each of the nodes is a problematic compute resource. For instance, the classification model may output a score indicative of the likelihood that a particular compute resource is problematic. The nodes and/or edges having a score that exceed a predetermined threshold are provided focus via the dependency graph. | 2022-01-20 |
20220019496 | ERROR DOCUMENTATION ASSISTANCE - An error documentation system including tools to collect and analyze application error data for individual development teams and tools to share documented defects and solutions across development teams during any stage of development cycle. The system may receive and analyze event logs for error events triggered by applications on end-user devices. The system may automatically generate defect tickets and/or ticket entries for defects identified in event logs. The system may train one or more machine learning (ML) models to correlate input with identified defects from a defects database. In response to identifying correlated identified defects, the system may generate ticket entries indicating the correlated identified defects and associated solutions for the defects. The system may provide an interface for users to query the data stored in the database. | 2022-01-20 |
20220019497 | SYSTEM AND METHOD FOR ALARM CORRELATION AND AGGREGATION IN IT MONITORING - A system for alarm correlation and aggregation. The system includes a computing device. The computing device has a processor and a storage device storing computer executable code. The computer executable code, when executed at the processor, is configured to: provide a plurality of alarms triggered by components of the system; provide aggregation patterns; perform iteratively until a criterion is met: generating itemsets from the alarms using the aggregation patterns, computing a new aggregation pattern from the generated itemsets using frequent itemset mining, and updating the aggregation patterns using the new aggregation pattern to obtain updated aggregation patterns; and aggregate the alarms using the updated aggregation patterns to obtain aggregated alarms. | 2022-01-20 |
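The core of the alarm-aggregation abstract above is frequent itemset mining: alarms that repeatedly co-occur in the same window form a candidate aggregation pattern. A minimal sketch counting frequent alarm pairs (the alarm names, windows, and support threshold are illustrative assumptions, and real frequent-itemset miners such as Apriori also consider larger itemsets):

```python
from itertools import combinations
from collections import Counter

# Hypothetical alarm bursts: one itemset "transaction" per time window.
windows = [
    {"cpu_high", "fan_fail", "temp_high"},
    {"cpu_high", "temp_high"},
    {"disk_full"},
    {"cpu_high", "temp_high", "net_drop"},
]

MIN_SUPPORT = 3  # a pattern must co-occur in at least 3 windows

# Count how often each alarm pair appears together in a window.
pair_counts = Counter()
for w in windows:
    for pair in combinations(sorted(w), 2):
        pair_counts[pair] += 1

# Pairs meeting the support threshold become aggregation patterns, so the
# monitoring system can collapse their member alarms into one aggregate.
patterns = [p for p, c in pair_counts.items() if c >= MIN_SUPPORT]
```

Here only `cpu_high`/`temp_high` co-occurs often enough to qualify, so those two alarms would be aggregated rather than raised separately.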
20220019498 | DYNAMICALLY CREATING A CONTACT ADDRESS TO CUSTOMER SUPPORT BASED ON INFORMATION ASSOCIATED WITH A COMPUTING DEVICE - In some examples, a computing device may determine that an issue (e.g., crash, restart etc.) occurred, gather context data (e.g., logs, device profile, etc.) associated with the issue, and generate a contact address to technical support based on the context data. The computing device may upload the context data to a location accessible to a server. After a user of the computing device initiates a communication to technical support using the contact address, the server may automatically route the call, based on the contact address, to a particular technician that has experience addressing the issue. The server may retrieve the context data and use machine learning to determine recommendations to address the issue. The machine learning may prioritize the recommendations and provide the context data and the prioritized recommendations to enable the particular technician to quickly resolve the issue. | 2022-01-20 |
20220019499 | MEMORY SYSTEM AND OPERATING METHOD THEREOF - A memory system includes: a memory device; and a controller suitable for controlling the memory device and including a buffer memory, wherein the controller performs error history logging into the buffer memory in response to a logging start command from a host, stops the error history logging in response to a logging stop command from the host, and provides the host with the logged error history in response to an output command from the host. | 2022-01-20 |
20220019500 | FUSE LOGIC TO PERFORM SELECTIVELY ENABLED ECC DECODING - Fuse logic is configured to selectively enable a certain group of fuses of a fuse array to support either column (or row) redundancy in one application or error correction code (ECC) operations in another application. For example, the fuse logic may decode the group of fuses to enable a replacement column (or row) of memory cells in one mode or application, and decode a subset of the group of fuses to retrieve ECC data corresponding to a second group of fuses that are encoded to enable a different replacement column or row of memory cells in a second mode or application. The fuse logic includes an ECC decode logic circuit that is selectively enabled to detect and correct errors in data encoded in the second group of fuses based on the ECC data encoded in the subset of fuses of the first group of fuses. | 2022-01-20 |
20220019501 | ADJUSTING READ THROUGHPUT LEVEL FOR A DATA RECOVERY OPERATION - An error associated with a read operation corresponding to a target memory die of a memory sub-system is detected. In response to detecting the error, a first read throughput level of the memory sub-system is identified. The first read throughput level is adjusted to a second read throughput level. A read retry operation associated with the target memory die is executed at the second read throughput level. | 2022-01-20 |
20220019502 | MEMORY DEVICE ACTIVITY-BASED COPYING DEFECT MANAGEMENT DATA - Various embodiments described herein provide for copying (e.g., to cache) a portion of defect management data for a block of a memory device, such as a non-volatile memory device of a memory sub-system, based on activity of the memory device. For instance, the portion of defect management data can be copied from a first-type memory device of the memory sub-system to a second-type memory device of the memory sub-system, where the first-type memory device stores defect management data for a working set of blocks of the non-volatile memory device being operated upon by the memory sub-system, where the second-type memory device is used to store defect management data for an active block of the working set of blocks, and where the second-type memory device has a faster access (e.g., read or write access) than the first-type memory device. | 2022-01-20 |
20220019503 | METHOD AND SYSTEM FOR DESYNCHRONIZATION RECOVERY FOR PERMISSIONED BLOCKCHAINS USING BLOOM FILTERS - A method for recovery of missing or extra data using a bloom filter includes: storing a plurality of transaction messages, each including a transaction value; generating a bloom filter of the transaction messages, the bloom filter being generated using a number of hash rounds and with a size at least double the number of transaction messages; generating a recover message including the number of transaction messages, the number of hash rounds, the size, and the generated bloom filter; transmitting the recover message to a consensus node; receiving a response message from the consensus node, the response message including at least one additional transaction message; and inserting the at least one additional transaction message into the plurality of transaction messages. | 2022-01-20 |
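The bloom-filter recovery abstract above relies on a standard property: a bloom filter never produces false negatives, so any local transaction the filter does not match is certainly missing from the sender's set. A minimal sketch, assuming SHA-256 as the hash function and the patent's "size at least double the message count" sizing (the class, transaction names, and salting scheme are illustrative):

```python
import hashlib

class BloomFilter:
    """Toy bloom filter: a bit array at least double the item count,
    probed with a configurable number of salted hash rounds."""
    def __init__(self, n_items, hash_rounds):
        self.size = 2 * n_items
        self.hash_rounds = hash_rounds
        self.bits = [False] * self.size

    def _indexes(self, item):
        # One salted SHA-256 digest per hash round, reduced mod size.
        for round_no in range(self.hash_rounds):
            digest = hashlib.sha256(f"{round_no}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for i in self._indexes(item):
            self.bits[i] = True

    def might_contain(self, item):
        return all(self.bits[i] for i in self._indexes(item))

# The consensus node rebuilds the filter from the recover message and
# checks its own transactions against it.
bf = BloomFilter(n_items=3, hash_rounds=4)
for tx in ["tx1", "tx2", "tx3"]:   # sender's transactions
    bf.add(tx)

local_txs = ["tx1", "tx2", "tx3", "tx4"]
# Non-matches are certainly missing from the sender's set (no false
# negatives); matches may still be false positives.
missing = [tx for tx in local_txs if not bf.might_contain(tx)]
```

The node can then return the `missing` transactions in its response message so the sender can insert them, which is the recovery step the abstract describes.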
20220019504 | RESTORATION OF A COMPUTING SESSION - According to an aspect, a method of restoring a computing session includes receiving, over a network, session data from a server computer, where the session data includes information about at least one session item that is active during a computing session of a first computing device. The at least one session item includes at least one of a web application or a native application. The method includes restoring the at least one session item of the computing session on a second computing device based on the session data, where the at least one session item is arranged on a user interface of the second computing device according to a display arrangement that corresponds to a display arrangement of the at least one session item on a user interface of the first computing device. | 2022-01-20 |
20220019505 | MESSAGE PERSISTENCE IN A ZONED SYSTEM - A plurality of storage controllers configured to initiate an action based on redundant copies of metadata, such that a source authority of one of the plurality of storage controllers receives a message, records the message redundantly throughout the plurality of storage controllers, and delivers the message to a destination authority of a further one of the storage controllers responsive to achieving a level of redundancy for the redundant copies of the metadata regarding the message is provided, wherein at least one of the plurality of storage controllers comprises a zoned storage drive. | 2022-01-20 |
20220019506 | High Availability For Persistent Memory - Techniques for implementing high availability for persistent memory are provided. In one embodiment, a first computer system can detect an alternating current (AC) power loss/cycle event and, in response to the event, can save data in a persistent memory of the first computer system to a memory or storage device that is remote from the first computer system and is accessible by a second computer system. The first computer system can then generate a signal for the second computer system subsequently to initiating or completing the save process, thereby allowing the second computer system to restore the saved data from the memory or storage device into its own persistent memory. | 2022-01-20 |
20220019507 | REACTIVE READ BASED ON METRICS TO SCREEN DEFECT PRONE MEMORY BLOCKS - A variety of applications can include apparatus and/or methods to preemptively detect defect prone memory blocks in a memory device and handle these memory blocks before they fail and trigger a data loss event. Metrics based on memory operations can be used to facilitate the examination of the memory blocks. One or more metrics associated with a memory operation on a block of memory can be tracked and a Z-score for each metric can be generated. In response to a comparison of a Z-score for a metric to a Z-score threshold for the metric, operations can be performed to control possible retirement of the memory block beginning with the comparison. Additional apparatus, systems, and methods are disclosed. | 2022-01-20 |
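The Z-score screening described above is a simple statistical outlier test: a block whose metric lies too many standard deviations above the population mean is flagged before it fails. A minimal sketch, assuming a hypothetical per-block erase-time metric (the block names, values, and threshold are illustrative):

```python
import statistics

# Hypothetical per-block erase-time metric (ms).
erase_time_ms = {"blk0": 10, "blk1": 10, "blk2": 10, "blk3": 10, "blk4": 30}

mean = statistics.mean(erase_time_ms.values())
stdev = statistics.pstdev(erase_time_ms.values())

# Blocks whose Z-score exceeds the threshold become candidates for
# retirement before they can trigger a data-loss event.
Z_THRESHOLD = 1.5
suspect = [blk for blk, v in erase_time_ms.items()
           if stdev and (v - mean) / stdev > Z_THRESHOLD]
```

Here `blk4` erases markedly slower than its peers (Z = 2.0 against a 1.5 threshold), so it alone would be screened for possible retirement.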
20220019508 | DISTRIBUTED DATA STORE FOR TESTING DATA CENTER SERVICES - Architectures and techniques are described that can enhance or improve a testing procedure that tests operation of services provided by a data center. Advantageously, the testing dataset can be distributed on test clients, allowing the testing procedure to scale to any suitable size, while providing integrity checking for a dataset that includes snapshots and scalability to millions of files, while supporting multiple readers/writers. | 2022-01-20 |
20220019509 | SYSTEM AND METHOD FOR CONTEXT-BASED PERFORMANCE OPTIMIZATION OF AN INFORMATION HANDLING SYSTEM - An information handling system includes a storage device configured to store contextual inputs obtained from components associated with the information handling system. A processor obtains telemetry data from one or more of the components, the telemetry data including contextual inputs according to a user context and a system context. The processor determines a recommendation that includes first applications to be preloaded, and second applications whose status is to be changed based on the contextual inputs according to the user context. The recommendation further includes one or more system settings to be adjusted based on the contextual inputs according to the system context. The processor determines a first ordered list of the first applications to be preloaded according to a first priority and a second ordered list of the second applications whose status is to be changed according to a second priority based on the recommendation, and preloads the first applications based on the first ordered list. The processor changes the status of one of the second applications based on the second ordered list according to the second priority, and adjusts the one or more system settings on the information handling system based on the recommendation. | 2022-01-20 |
20220019510 | DYNAMIC CONFIGURATION TRACE CAPTURE TECHNIQUE - A dynamic configuration trace capture technique enables software developers to monitor, diagnose and solve errors associated with application development and production. A client library of an investigative platform is loaded in a user application and interacts with an agent process to instrument executable code of the user application. A dynamic configuration specifies information, such as methods and associated arguments, variables and data (values), to instrument. The client library may re-load the dynamic configuration at the capture points, which may change the frequency of tracing a method and its associated information. The dynamic configuration may be defined per method, argument, variable, and/or data structure. The dynamic configuration may be initially deployed with default heuristics defined in the client library. The technique enables optional overrides, either by manual provision of adjustments by a user via a user interface infrastructure of the investigative platform, or as specified by the agent after retrieving a configuration file, an environment variable, etc. from a configuration service. | 2022-01-20 |
20220019511 | INSTRUMENTATION OVERHEAD REGULATION TECHNIQUE - An instrumentation overhead regulation technique regulates an amount of work performed by a client library of an investigative platform used to monitor, diagnose and solve errors associated with application development and production. The client library calculates processing resources utilized during its runtime activity to enable adjustment of the amount of work it performs based on the measured activity. An agent may determine the overhead activity impact to user application performance by monitoring processing resource metrics of the user application. The agent analyzes the calculated overhead and processing resource metrics to render decisions to automatically regulate the capture fidelity of the client library. Regulation of the capture fidelity may be implemented by modifying parameters of a dynamic configuration. If results of the analysis indicate a potential issue, the amount of work the client library performs may be trimmed to ensure that the calculated overhead of the client library and its impact on user application performance does not exceed a predetermined threshold. | 2022-01-20 |
20220019512 | TRACE ANOMALY GROUPING AND VISUALIZATION TECHNIQUE - A trace anomaly grouping and visualization technique logically groups traces with anomalies to cases to enable software developers to monitor, diagnose and visualize the anomalies, as well as to solve the anomalies during application development and production. A client library of an investigative platform collects signals from traces (trace signal information). The technique organizes (groups) related trace signals of methods with anomalies (e.g., exceptions, performance abnormalities such as slowness) into datasets (denominated as “cases”) based on common cause for an anomaly and correlates the signals to identify a case. The collected information may be used to differentiate between root causes of the anomalies using a comparative visualization of traces displayed on a standard user interface of the investigative platform. As such, the technique facilitates an understanding of differences among traces of executable code that resulted in the failure and traces without failure by providing the ability to comparatively examine views of those traces displayed on the standard UI. Signals of two or more traces may be selected and displayed side-by-side for comparison. The traces may be selected from a general notion of a healthy trace and a failed trace. | 2022-01-20 |
20220019513 | PROGRAM EXECUTION MONITORING USING DEEP MEMORY TRACING - A user-annotated reference implementation identifies variable values generated by the reference implementation during its execution. A software implementation under analysis is executed. Variable values in the running memory of the program code of the software implementation under analysis, during its execution, are identified and copied. The variable values traced from the running memory of the program code are compared against the annotated variable values generated by the reference implementation, to determine a similarity between the program code under analysis, and the reference implementation. An output is generated that is indicative of whether the traced variables from the program code under analysis are the same as the annotated variable values generated by the reference implementation. | 2022-01-20 |
20220019514 | SYSTEMS, METHODS, AND MEDIA FOR PROVING THE CORRECTNESS OF SOFTWARE ON RELAXED MEMORY HARDWARE - Mechanisms for proving the correctness of software on relaxed memory hardware are provided, the mechanisms comprising: receiving a specification, a hardware model, and an implementation for the software to be executed on the relaxed memory hardware; evaluating the software using a sequentially consistent hardware model; evaluating the software using a relaxed memory hardware model and at least one of the following conditions: a data-race-free (DRF)-kernel condition; a no-barrier-misuse condition; a memory-isolation condition; a transactional-page-table condition; a write-once-kernel-mapping condition; and a weak-memory-isolation condition; and outputting an indication of whether the software is correct based on the evaluating the software using the sequentially consistent hardware model and the evaluating the software using the relaxed memory hardware model. | 2022-01-20 |
20220019515 | PIPELINE FOR VALIDATION PROCESS AND TESTING - Techniques are described herein for implementing a testing and evaluation pipeline. The techniques include receiving testing specifications for validating an XR application executing on XR capable devices and mapping individual testing specifications to a corresponding XR capable device including the XR application. Upon mapping the individual testing specifications, testing configurations for an evaluation pipeline are determined. The evaluation pipeline may include one or more computing instances that execute one or more validation tests for the XR application executing on the corresponding XR capable device according to the individual testing specifications and the testing configurations. The one or more computing instances may operate in parallel to perform the one or more validation tests concurrently. Based at least on test results generated from the one or more computing instances and one or more evaluation criteria, the XR application executing on the corresponding XR capable device may be validated. | 2022-01-20 |
20220019516 | DETERMINING A RECOMMENDED SOFTWARE-STACK FOR A TARGET SOFTWARE ITEM - A recommended software-stack can be determined for a target software item. For example, a system can receive an input specifying a target software item and a characteristic of a computing environment in which the target software item is to be executed. The system can then generate software-stack candidates for the target software item, the software-stack candidates having unique configurations of software components. The system can determine a respective score for each software-stack candidate of the software-stack candidates based on the characteristic of the computing environment and a unique configuration of software components forming the software-stack candidate. The system can select a particular software-stack candidate from the software-stack candidates as a recommended software-stack, based on the respective score for the particular software-stack candidate having a predefined attribute. The system can then generate an output indicating the recommended software-stack to enable the recommended software-stack to be included in the computing environment. | 2022-01-20 |
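The software-stack recommendation abstract above reduces to scoring each candidate stack against the target environment and selecting the candidate whose score has the desired attribute (here, the maximum). A minimal sketch with a toy scoring heuristic (the stack names, component versions, and +1-per-match scoring rule are illustrative assumptions, not the patent's scoring method):

```python
# Hypothetical software-stack candidates with unique component configurations.
candidates = [
    {"name": "py3.9+numpy1.21", "components": {"python": "3.9", "numpy": "1.21"}},
    {"name": "py3.8+numpy1.19", "components": {"python": "3.8", "numpy": "1.19"}},
]

# Characteristic of the target computing environment.
environment = {"python": "3.9"}

def score(candidate, env):
    """Toy heuristic: +1 for each component version matching the environment."""
    return sum(1 for k, v in candidate["components"].items() if env.get(k) == v)

# The candidate whose score has the predefined attribute (maximum here)
# becomes the recommended software-stack.
recommended = max(candidates, key=lambda c: score(c, environment))
```

A real system would score against many environment characteristics at once; the selection step, however, stays this simple.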
20220019517 | DYNAMIC LIBRARY REPLACEMENT TECHNIQUE - A dynamic library replacement technique enables replacement of original functions or methods of application libraries based on analysis of traces captured by a client library of an investigative platform. Traces captured from the user application are analyzed to identify the original methods of the user application that may be replaced by the client library. The original methods may be identified based on estimated performance improvements determined from the analysis of the captured traces. The improved method replacements and estimated performance improvements may be graphically presented to a user via a user interface (UI) infrastructure of the investigative platform. Replacement of the improved methods may be defined in the dynamic configuration or interactively via the UI infrastructure and continued performance monitoring reported. The specific performance for any method may be monitored along with a fidelity of the monitored method. For pure functions (methods) without side-effects, the improved replacement method and original application method may be compared for the same data. | 2022-01-20 |
20220019518 | INVESTIGATIVE PLATFORM FOR SOFTWARE APPLICATION DEVELOPMENT AND PRODUCTION - An investigative platform enables software developers to monitor and diagnose anomalies associated with application development and production. A client library interacts with a separate agent to instrument executable code of a user application. The client library transfers executable code and trace information captured from the user application to the agent to isolate the capture from the executing user application. The agent buffers, examines, and performs further processing (such as compression) on the captured traces, and sends the information as substantially compressed traces to an analysis and persistent storage (APS) infrastructure. A consumer service loads the information into a durable message queue for processing by stages of an analysis pipeline of the APS infrastructure. Processing by the stages of the analysis pipeline results in findings, such as trace amalgamation into cases. A data service of the APS infrastructure provides the processed information to a user interface infrastructure for graphic and interactive presentation reporting to a user. | 2022-01-20 |
20220019519 | CONSERVATION OF NETWORK ADDRESSES FOR TESTING IN A VIRTUALIZED COMPUTING SYSTEM - An example method of testing a cluster network for an application management system having a cluster of virtual machines (VMs) is described. The VMs execute on a virtualization layer in a cluster of hosts connected to a physical network, and the application management system integrated with the virtualization layer. The method includes: receiving, at an edge node from an external network, a plurality of test applications; executing, at the edge node, the plurality of test applications, the edge node connected to the cluster network, the plurality of test applications communicating, through the cluster network, with a master server of the application management system, and with applications executing in the VMs managed by the master server; and returning, from the edge node, responses generated by the plurality of test applications to the external network. | 2022-01-20 |
20220019520 | METHOD AND SYSTEM FOR AUTOMATICALLY TESTING EVENT-DRIVEN MICROSERVICES - A method for facilitating automated testing of event-driven microservices is provided. The method includes receiving a scenario that includes a set of instructions to test a microservice; automatically generating, based on the scenario, a production event relating to an action to be performed and a consumption event relating to a record of the performed action; automatically generating a first test event using the production event; outputting the first test event to the microservice; automatically retrieving a first result relating to the execution of the first test event by the microservice by using the consumption event; and validating the first result based on the scenario. The method further includes displaying the first result and a notification on a graphical user interface based on an outcome of the validating. | 2022-01-20 |
20220019521 | Supporting Web Components In A Web Testing Environment - This document describes techniques and apparatuses for supporting web components associated with a document object model (DOM) corresponding to a data file in a web testing environment. A user interaction, relative to a web page or web application from which the DOM is rendered, is monitored in the web testing environment. The monitoring identifies a target element selected by the user that is referenced in a shadow DOM associated with the DOM. One or more parent shadow host elements of the DOM are identified relative to the target element. The one or more shadow host elements define a reduced path, with respect to a tree data structure representing the DOM and the shadow DOM, for linking a document object of the DOM to the target element. Indicia identifying the one or more shadow host elements as linking the document object of the DOM to the target element are recorded. The recorded indicia define the reduced path for identifying the target element relative to the document object in the tree data structure during a replay of a recorded session of the user interaction. A target element may also be identified and replaced with a different object by referencing the one or more shadow host elements. | 2022-01-20 |
20220019522 | AUTOMATED SEQUENCING OF SOFTWARE TESTS USING DEPENDENCY INFORMATION - Dependency information can be used for automatic sequencing of software tests. For example, a computing device can receive dependency information indicating dependency relationships among software tests usable to test a target software item. The computing device can determine assignments of the software tests to different testing phases in a sequence of testing phases based on the dependency information. This may involve the computing device assigning each software test to a particular testing phase based on whether the software test is a dependency of, or is dependent on, another software test, such that each testing phase in the sequence of testing phases is assigned a unique subset of software tests. The computing device can then generate an output indicating the assignments of the software tests to the different testing phases in the sequence of testing phases. | 2022-01-20 |
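The phase-assignment idea in this abstract can be sketched as a longest-path layering over the dependency graph: each test lands in the earliest phase that comes after all of its dependencies. A minimal illustration follows; the function name `assign_phases` and the input shape (a mapping from each test to the set of tests it depends on) are assumptions, not the patent's actual interface.

```python
def assign_phases(deps):
    """Assign each test to the earliest phase after all its dependencies.

    deps maps a test name to the set of tests it depends on.
    Returns a list of phases, each a sorted list of test names.
    """
    phase_of = {}

    def phase(test):
        # a test's phase is one past the deepest phase among its prerequisites
        if test not in phase_of:
            prereqs = deps.get(test, set())
            phase_of[test] = 1 + max((phase(p) for p in prereqs), default=-1)
        return phase_of[test]

    for t in deps:
        phase(t)

    n_phases = 1 + max(phase_of.values(), default=-1)
    phases = [[] for _ in range(n_phases)]
    for t, p in phase_of.items():
        phases[p].append(t)
    return [sorted(p) for p in phases]
```

With `{"login": set(), "cart": {"login"}, "checkout": {"cart", "login"}}`, the layering yields three phases, and each phase receives a unique subset of tests, matching the behavior the abstract describes.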
20220019523 | EXECUTING INTEGRATION SCENARIO REGRESSION TESTS IN CUSTOMER LANDSCAPES - The present disclosure involves systems, software, and computer implemented methods for executing integration scenario regression tests in customer landscapes. One example method includes identifying a request to create a test case for an integration scenario for a cloud platform customer. The test case is created for the scenario, including enabling the test case to run in an isolated customer environment specific to the customer. An update to the cloud platform is identified. The update is provisionally applied to the cloud platform for the customer. The test case is executed in the isolated customer environment, to test the scenario for the customer. A determination is made as to whether execution of the test case succeeded. In response to determining successful test case execution, the update to the cloud platform is finalized for the customer. In response to determining unsuccessful test case execution, the update is rolled back for the customer. | 2022-01-20 |
20220019524 | SYSTEM AND METHOD FOR VALIDATING CLOUD-NATIVE APPLICATIONS FOR A PRODUCTION-READY DEPLOYMENT - An information handling system includes a repository that stores a pre-production validation suite and a production validation suite. The pre-production validation suite includes first validation factors, and the production validation suite includes second validation factors. A processor may deploy an application in a pre-production environment, and validate the application in the pre-production environment using the pre-production validation suite. If the application passes the pre-production validation suite, then the processor may deploy the application in a production environment. The processor also may validate the application in the production environment using the production validation suite, assign a score associated with each one of the first validation factors and each one of the second validation factors, and generate a report based on the score associated with each one of the first validation factors and each one of the second validation factors. | 2022-01-20 |
20220019525 | METHOD AND APPARATUS FOR DATA READS IN HOST PERFORMANCE ACCELERATION MODE - The invention relates to methods and an apparatus for data reads in a host performance acceleration (HPA) mode. One method is performed by a host side to include: issuing a switch command to a flash controller to request the flash controller to activate an HPA function, and an acquisition function for a logical-block-address to physical-block-address (L2P) mapping table; issuing a write_multiple_block command to the flash controller to transfer a data block to the flash controller, where the data block includes a region number and a sub-region number; issuing a read_multiple_block command to the flash controller to obtain a plurality of L2P mapping entries corresponding to the region number and the sub-region number from the flash controller. The host side and the flash controller communicate with each other in an embedded multi-media card (eMMC) protocol. | 2022-01-20 |
20220019526 | METHOD AND APPARATUS FOR DATA READS IN HOST PERFORMANCE ACCELERATION MODE - The invention relates to methods and an apparatus for data reads in a host performance acceleration (HPA) mode. One method is performed by a host side to include: searching an HPA buffer in a system memory for a logical-block-address to physical-block-address (L2P) mapping entry corresponding to a logical block address (LBA); issuing a switch command to a flash controller to request the flash controller to activate an HPA function without activating an acquisition function for an L2P mapping table, where the host side and the flash controller communicate with each other in an embedded multi-media card (eMMC) protocol; issuing a write_multiple_block command to the flash controller to transfer a first data block to the flash controller, which includes the first L2P mapping entry; and issuing a read_multiple_block command to obtain data corresponding to the first L2P mapping entry from the flash controller. | 2022-01-20 |
20220019527 | Temperature-Based Data Storage Processing - A data storage device monitors a storage media temperature and adjusts data storage operations of the storage device based on the monitored temperature and/or a predicted future temperature of the storage media. In one approach, data is stored in a first mode (e.g., a TLC mode) in a non-volatile storage media. One or more temperatures associated with the non-volatile storage media are monitored using at least one sensor to collect sensor data. The manner of storage of the data in the storage device is adjusted based on the collected sensor data. The adjusting comprises compressing the data to provide compressed data, and storing the compressed data in a second mode (e.g., an SLC mode) in the non-volatile storage media. | 2022-01-20 |
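The mode-switch policy this abstract describes (store normally in a TLC-like mode, but compress and fall back to an SLC-like mode when sensors run hot) can be sketched in a few lines. The threshold value and the `store` function are illustrative assumptions; the patent does not specify either.

```python
import zlib

TEMP_LIMIT_C = 70  # hypothetical threshold, not from the patent


def store(data: bytes, sensor_temps_c):
    """Store data in the fast TLC-like mode normally; if any monitored
    temperature exceeds the threshold, compress the data and store it
    in an SLC-like mode instead."""
    if max(sensor_temps_c) > TEMP_LIMIT_C:
        return {"mode": "SLC", "payload": zlib.compress(data)}
    return {"mode": "TLC", "payload": data}
```

Compression offsets SLC's lower density, which is one plausible reading of why the abstract pairs the two adjustments.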
20220019528 | Upgrading On-Disk Format Without Service Interruption - A logical map represents fragments from separate versions of a data object. Migration of data from a first (old) version to the second (new) version happens gradually, where write operations go to the new version of the data object. The logical map initially points to the old data object, but is updated to point to the portions of the new data object as write operations are performed on the new data object. A background migration copies data from the old data object to the new data object. | 2022-01-20 |
20220019529 | Upgrading On-Disk Format Without Service Interruption - A logical map represents fragments from separate versions of a data object. Migration of data from a first (old) version to the second (new) version happens gradually, where write operations go to the new version of the data object. The logical map initially points to the old data object, but is updated to point to the portions of the new data object as write operations are performed on the new data object. A background migration copies data from the old data object to the new data object. | 2022-01-20 |
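The logical-map mechanism in the two abstracts above can be sketched as a per-fragment indirection table: reads follow the table, writes always land in the new object and flip the entry, and a background step migrates remaining old fragments. The class and method names below are illustrative assumptions.

```python
class LogicalMap:
    """Per-fragment map over old and new versions of a data object.

    Entries start pointing wholly at the old object and flip to the
    new object as writes and background migration proceed."""

    def __init__(self, num_fragments):
        self.entries = {i: "old" for i in range(num_fragments)}
        self.old = {}  # fragment index -> data in the old on-disk format
        self.new = {}  # fragment index -> data in the new on-disk format

    def write(self, frag, data):
        # write operations always go to the new version of the object
        self.new[frag] = data
        self.entries[frag] = "new"

    def read(self, frag):
        src = self.new if self.entries[frag] == "new" else self.old
        return src.get(frag)

    def migrate_step(self, frag):
        # background migration copies one still-old fragment forward
        if self.entries[frag] == "old":
            self.new[frag] = self.old.get(frag)
            self.entries[frag] = "new"
```

Because every read consults the map, the service stays available throughout: clients never need to know which version holds a given fragment.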
20220019530 | Adaptive Address Tracking - Described apparatuses and methods track access metadata pertaining to activity within respective address ranges. The access metadata can be used to inform prefetch operations within the respective address ranges. The prefetch operations may involve deriving access patterns from access metadata covering the respective ranges. Suitable address range sizes for accurate pattern detection, however, can vary significantly from region to region of the address space based on, inter alia, workloads produced by programs utilizing the regions. Advantageously, the described apparatuses and methods can adapt the address ranges covered by the access metadata for improved prefetch performance. A data structure may be used to manage the address ranges in which access metadata are tracked. The address ranges can be adapted to improve prefetch performance through low-overhead operations implemented within the data structure. The data structure can encode hierarchical relationships that ensure the resulting address ranges are distinct. | 2022-01-20 |
20220019531 | Allocating Variables to Computer Memory - A method of allocating variables to computer memory includes determining at compile time when each of a plurality of variables is live in a memory region, and allocating a memory region to each variable, wherein at least some variables are allocated at compile time to overlapping memory regions to be stored in those memory regions at runtime at non-overlapping times. | 2022-01-20 |
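The allocation strategy described above resembles interval-based region sharing: two variables may occupy the same region if their compile-time live ranges never overlap. A greedy sketch under that reading follows; the `allocate` function and the `[start, end]` interval representation are assumptions for illustration.

```python
def allocate(live_ranges):
    """Greedily share memory regions among variables whose live ranges
    (inclusive [start, end] compile-time intervals) never overlap.

    live_ranges maps a variable name to its (start, end) interval.
    Returns a mapping from variable name to region index."""
    regions = []  # regions[i] = end time of the latest interval placed there
    assignment = {}
    # process variables in order of live-range start
    for name, (start, end) in sorted(live_ranges.items(), key=lambda kv: kv[1][0]):
        for i, last_end in enumerate(regions):
            if last_end < start:  # region's last occupant is dead: reuse it
                regions[i] = end
                assignment[name] = i
                break
        else:  # no region is free during this interval: open a new one
            regions.append(end)
            assignment[name] = len(regions) - 1
    return assignment
```

In the test below, `a` (live 0..3) and `b` (live 4..6) share a region because their lifetimes never coincide, while `c` (live 2..5) overlaps both and gets its own.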
20220019532 | Memory Mapping for Hibernation - A computing system has a processing device (e.g., CPU, FPGA, or GPU) and memory regions (e.g., in a DRAM device) used by the processing device during normal operation. The computing system is configured to: monitor use of the memory regions in volatile memory; based on monitoring the use of the memory regions, identify at least one of the memory regions of the volatile memory; initiate a hibernation process; and during the hibernation process, copy data stored in the identified memory regions to non-volatile memory. | 2022-01-20 |
20220019533 | MANAGING PROCESSING OF MEMORY COMMANDS IN A MEMORY SUBSYSTEM WITH A HIGH LATENCY BACKING STORE - A method is described for managing the issuance and fulfillment of memory commands. The method includes receiving, by a cache controller of a memory subsystem, a first memory command corresponding to a set of memory devices. In response, the cache controller adds the first memory command to a cache controller command queue such that the cache controller command queue stores a first set of memory commands and sets a priority of the first memory command to either a high or low priority based on (1) whether the first memory command is of a first or second type and (2) an origin of the first memory command. | 2022-01-20 |
20220019534 | SPACE AND TIME CACHE COHERENCY - Various embodiments include methods and devices for virtual cache coherency. Embodiments may include receiving a snoop for a physical address from a coherent processing device, determining whether an entry for the physical address corresponding to a virtual address in a virtual cache exists in a snoop filter, and sending a cache coherency operation to the virtual cache in response to determining that the entry exists in the snoop filter. | 2022-01-20 |
20220019535 | PREFETCH BUFFER OF MEMORY SUB-SYSTEM - Various embodiments described herein provide for using a prefetch buffer with a cache of a memory sub-system to store prefetched data (e.g., data prefetched from the cache), which can increase read access or sequential read access of the memory sub-system over that of traditional memory sub-systems. | 2022-01-20 |
20220019536 | PREFETCH FOR DATA INTERFACE BRIDGE - Various embodiments described herein provide for using a prefetch buffer for a data interface bridge, which can be used with a memory sub-system to increase read access or sequential read access of data from a memory device coupled to the data interface bridge. | 2022-01-20 |
20220019537 | Adaptive Address Tracking - Described apparatuses and methods track access metadata pertaining to activity within respective address ranges. The access metadata can be used to inform prefetch operations within the respective address ranges. The prefetch operations may involve deriving access patterns from access metadata covering the respective ranges. Suitable address range sizes for accurate pattern detection, however, can vary significantly from region to region of the address space based on, inter alia, workloads produced by programs utilizing the regions. Advantageously, the described apparatuses and methods can adapt the address ranges covered by the access metadata for improved prefetch performance. A data structure may be used to manage the address ranges in which access metadata are tracked. The address ranges can be adapted to improve prefetch performance through low-overhead operations implemented within the data structure. The data structure can encode hierarchical relationships that ensure the resulting address ranges are distinct. | 2022-01-20 |
20220019538 | DEMAND DELAY AND DATA VALUE CORRELATED MEMORY PRE-FETCHING SYSTEMS AND METHODS - Systems, apparatuses, and methods for predictive memory access are described. Memory control circuitry instructs a memory array to read a data block from or write the data block to a location targeted by a memory access request, determines memory access information including a data value correlation parameter determined based on data bits used to indicate a raw data value in the data block and/or an inter-demand delay correlation parameter determined based on a demand time of the memory access request, predicts that read access to another location in the memory array will subsequently be demanded by another memory access request based on the data value correlation parameter and/or the inter-demand delay correlation parameter, and instructs the memory array to output another data block stored at the other location to a different memory level that provides faster data access speed before the other memory access request is received. | 2022-01-20 |
20220019539 | Elastic Columnar Cache for Cloud Databases - A method for providing elastic columnar cache includes receiving cache configuration information indicating a maximum size and an incremental size for a cache associated with a user. The cache is configured to store a portion of a table in a row-major format. The method includes caching, in a column-major format, a subset of the plurality of columns of the table in the cache and receiving a plurality of data requests requesting access to the table and associated with a corresponding access pattern requiring access to one or more of the columns. While executing one or more workloads, the method includes, for each column of the table, determining an access frequency indicating a number of times the corresponding column is accessed over a predetermined time period and dynamically adjusting the subset of columns based on the access patterns, the maximum size, and the incremental size. | 2022-01-20 |
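The dynamic column-subset adjustment described above can be approximated with a greedy pick of the most-accessed columns that fit within the configured maximum cache size. This is a simplified sketch (it ignores the incremental-size parameter and time windows); the function name and inputs are assumptions.

```python
def choose_cached_columns(access_counts, col_sizes, max_size):
    """Pick the most-accessed columns that fit within max_size,
    greedily by access frequency (ties broken by column name).

    access_counts maps column name -> accesses in the time window;
    col_sizes maps column name -> cached size of that column."""
    chosen, used = [], 0
    for col in sorted(access_counts, key=lambda c: (-access_counts[c], c)):
        if used + col_sizes[col] <= max_size:
            chosen.append(col)
            used += col_sizes[col]
    return chosen
```

Re-running this selection periodically against fresh access counts gives the "dynamically adjusting the subset of columns" behavior the abstract describes.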
20220019540 | In-Memory Distributed Cache - A method for an in-memory distributed cache includes receiving a write request from a client device to write a block of client data in random access memory (RAM) of a memory host and determining whether to allow the write request by determining whether the client device has permission to write the block of client data at the memory host, determining whether the block of client data is currently saved at the memory host, and determining whether a free block of RAM is available. When the client device has permission to write the block of client data at the memory host, the block of client data is not currently saved at the memory host, and a free block of RAM is available, the write request is allowed and the client is allowed to write the block of client data to the free block of RAM. | 2022-01-20 |
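The three-part admission check in the abstract above (permission, not-already-stored, free block available) reduces to a simple conjunction. A minimal sketch, with all names and data shapes assumed for illustration:

```python
def allow_write(client_id, block_id, permissions, stored_blocks, free_blocks):
    """Allow the write only if the client has permission to write this
    block, the block is not already saved at the memory host, and a
    free RAM block is available."""
    return (block_id in permissions.get(client_id, set())
            and block_id not in stored_blocks
            and len(free_blocks) > 0)
```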
20220019541 | MACHINE LEARNING BASED CACHE MANAGEMENT - Techniques are disclosed for dynamically managing a cache. Certain techniques include clustering I/O requests into a plurality of clusters by a machine-learning clustering algorithm that collects the I/O requests into clusters of similar I/O requests based on properties of the I/O requests. Further, certain techniques include identifying, for a received I/O request, a cluster stored in the cache. Certain techniques further include loading a set of blocks of the identified cluster into the cache. | 2022-01-20 |
20220019542 | HIERARCHICAL MEMORY SYSTEMS - Apparatuses, systems, and methods for hierarchical memory systems are described. A hierarchical memory system can leverage persistent memory to store data that is generally stored in a non-persistent memory, thereby increasing an amount of storage space allocated to a computing system at a lower cost than approaches that rely solely on non-persistent memory. An example method includes receiving a request to access data via an input/output (I/O) device, determining whether the data is stored in a non-persistent memory device or a persistent memory device, and redirecting the request to access the data to logic circuitry in response to determining that the data is stored in the persistent memory device. | 2022-01-20 |
20220019543 | DIGITAL SIGNAL PROCESSOR, DSP SYSTEM, AND METHOD FOR ACCESSING EXTERNAL MEMORY SPACE - A digital signal processor, a digital signal processing (DSP) system, and a method for accessing external memory space are disclosed. The digital signal processor may include: a digital signal processing (DSP) core; and a program port and a data port which are connected to the DSP core and configured to access an external memory, where the program port and the data port are respectively configured to communicate with a memory management unit configured for management of an access address. | 2022-01-20 |
20220019544 | METHOD AND SYSTEM FOR FACILITATING COMMUNICATION BETWEEN INTERCONNECT AND SYSTEM MEMORY ON SYSTEM-ON-CHIP - A memory management system for facilitating communication between an interconnect and a system memory of a system-on-chip includes a plurality of memory controllers coupled with the system memory, and processing circuitry coupled with the interconnect and the plurality of memory controllers. The processing circuitry is configured to receive a transaction request from the interconnect, and identify a memory controller of the plurality of memory controllers that is associated with the received transaction request. Further, the processing circuitry is configured to provide the transaction request to the identified memory controller for an execution of a transaction associated with the received transaction request. The processing circuitry is further configured to receive a transaction response to the provided transaction request from the memory controller, and provide the received transaction response to the interconnect after a previous transaction response associated with a previous transaction request is provided to the interconnect. | 2022-01-20 |
20220019545 | Hybrid On/Off-Chip Memory Architecture For Graph Analytics - The increased use of graph algorithms in diverse fields has highlighted their inefficiencies in current chip-multiprocessor (CMP) architectures, primarily due to their seemingly random-access patterns to off-chip memory. Here, a novel computer memory architecture is proposed that processes operations on vertex data in on-chip memory and off-chip memory. The hybrid computer memory architecture utilizes a vertex's degree as a proxy to determine whether to process related operations in on-chip memory or off-chip memory. The proposed computer memory architecture manages to provide up to 4.0× improvement in performance and 3.8× in energy benefits, compared to a baseline CMP, and up to a 2.0× performance boost over state-of-the-art specialized solutions. | 2022-01-20 |
20220019546 | SYSTEMS, METHODS, AND DEVICES FOR TIME SYNCHRONIZED STORAGE DELIVERY - A method includes receiving, at a first computing device, a first input/output (IO) command from a first artificial intelligence processing unit (AI PU), the first IO command associated with a first AI model training operation. The method further includes receiving, at the first computing device, a second IO command from a second AI PU, the second IO command associated with a second AI model training operation. The method further includes assigning a first timestamp to the first IO command based on a first bandwidth assigned to the first AI model training operation. The method further includes assigning a second timestamp to the second IO command based on a second bandwidth assigned to the second AI model training operation. | 2022-01-20 |
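One plausible reading of the bandwidth-based timestamping above is virtual-time fair queuing: each IO command for a training job is stamped with a virtual finish time that advances in proportion to the command size divided by the job's assigned bandwidth, so higher-bandwidth jobs accumulate timestamps more slowly and get served more often. This is an interpretive sketch, not the patent's stated algorithm; all names are assumptions.

```python
class TimestampAssigner:
    """Assign each IO command a virtual finish time that advances by
    size / bandwidth, in the style of fair-queuing schedulers."""

    def __init__(self):
        self.last = {}  # job id -> last assigned virtual timestamp

    def assign(self, job, size, bandwidth):
        t = self.last.get(job, 0.0) + size / bandwidth
        self.last[job] = t
        return t
```

Serving commands in timestamp order then delivers storage bandwidth to each training job in proportion to its assignment.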
20220019547 | METHOD AND APPARATUS FOR DATA READS IN HOST PERFORMANCE ACCELERATION MODE - The invention relates to methods and an apparatus for data reads in a host performance acceleration (HPA) mode. One method is performed in a host side to include: obtaining a value of an extended device-specific data (Ext_CSD) register in a flash controller from the flash controller, where the host side and the flash controller communicate with each other in an embedded multi-media card (eMMC) protocol; and allocating space in a system memory as an HPA buffer, and storing a plurality of first logical-block-address to physical-block-address (L2P) mapping entries obtained from the flash controller when the value of the Ext_CSD register comprises information indicating that an HPA function is supported, where each L2P mapping entry stores information indicating the physical address in a flash device at which user data of the corresponding logical address is physically stored. | 2022-01-20 |
20220019548 | NESTED COMMANDS FOR RADIO FREQUENCY FRONT END (RFFE) BUS - Nested commands for a radio frequency front end (RFFE) bus are provided. In particular, timing commands may be nested inside a normal data flow. On receipt of a nested timing command, a slave on the RFFE bus suspends or halts an active command and addresses the timing command. On completion of the timing command, the slave returns to the halted command. By allowing such nested commands, counters in the slave that would otherwise be used to track triggers may be eliminated or reduced and power may be conserved by placing a clock signal associated with the bus into a low power mode. | 2022-01-20 |
20220019549 | PROTECTING A SYSTEM FROM ATTACK VIA A DEVICE ATTACHED TO A USB PORT - A method for protecting a system from a malicious USB device. The method includes one or more computer processors interrupting a universal serial bus (USB) enumeration process corresponding to a first USB device operatively coupled to a system. The method further includes determining whether the first USB device is a human interface device (HID) based on a set of descriptor values corresponding to the first USB device. The method further includes responding to determining that the first USB device is a HID by generating a validation challenge. The method further includes presenting the validation challenge to a user of the system. The method further includes responding to determining that the user fulfils one or more actions of the validation challenge by resuming the USB enumeration process corresponding to the first USB device. | 2022-01-20 |
20220019550 | Host Connected Computer Network - A processor comprises a plurality of processing units on an integrated circuit interconnected by an exchange. The exchange has a group of exchange paths extending between first and second portions of the integrated circuit. Each group has a first exchange block in the first portion and a second exchange block in the second portion. The processor has a first external interface in the first portion, a second external interface in the second portion, and a routing bus which routes packets between the external interfaces and the exchange blocks. The first external interface exchanges packets between the integrated circuit and a host. The second interface exchanges packets between the integrated circuit and another integrated circuit. Errors may be trapped when packets are wrongly addressed. A network of such processors is also provided. | 2022-01-20 |
20220019551 | COMMUNICATION DEVICE, INFORMATION PROCESSING SYSTEM, AND COMMUNICATION METHOD - A communication device is mounted in each of a plurality of information processing devices connected to a fabric. The communication device comprises: a serial interface that transmits and receives a first packet compliant with a Peripheral Component Interconnect Express (PCIe) standard; a requester unit that acquires the first packet from the serial interface and converts the first packet that has been acquired into a second packet that is transmitted and received via the fabric among a plurality of the information processing devices sharing a memory space that is virtually extended by using a device identifier specific to each of the information processing devices; a fabric communication unit that transmits and receives the second packet via the fabric; and a completer unit that acquires the second packet from the fabric communication unit and generates a response packet to a request included in the second packet that has been acquired. | 2022-01-20 |
20220019552 | Routing in a Network of Processors - A processor in a network has a plurality of processing units arranged on a chip. An on-chip interconnect enables data to be exchanged between the processing units. A plurality of external interfaces are configured to communicate data off chip in the form of packets, each packet having a destination address identifying a destination of the packet. The external interfaces are connected to respective additional connected processors. A routing bus routes packets between the processing units and the external interfaces. A routing register defines a routing domain for the processor, the routing domain comprising one or more of the additional processors, and at least a subset of further additional processors of the network, wherein the additional processors of the subset are directly or indirectly connected to the processor. The routing domain can be modified by changing the contents of the routing register as a sliding window domain. | 2022-01-20 |
20220019553 | SYNCHRONIZING STORAGE POLICIES OF OBJECTS MIGRATED TO CLOUD STORAGE - One or more computer processors receive an object to store in a cloud storage environment, wherein the cloud storage environment includes a default storage policy. The one or more processors determine whether the object includes a foreign policy as an attribute of metadata associated with the object. The one or more processors, responsive to determining the object includes the foreign policy as an attribute of the metadata associated with the object, determine whether the foreign policy includes storage rules that differ from the default storage policy of the cloud storage environment, and the one or more processors, responsive to determining the storage rules included in the foreign policy of the metadata of the object differ from the default storage policy of the cloud storage environment, store the object based on the storage rules of the foreign policy, and ignore the default storage policy of the cloud storage environment. | 2022-01-20 |
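The policy-resolution rule above (honor a foreign policy carried in the object's metadata when its rules differ from the cloud default, otherwise apply the default) is easy to capture directly. The function name and the metadata key are illustrative assumptions.

```python
def effective_policy(obj_metadata, default_policy):
    """Return the object's foreign policy when one is present and its
    rules differ from the cloud default; otherwise return the default
    storage policy."""
    foreign = obj_metadata.get("foreign_policy")
    if foreign is not None and foreign != default_policy:
        return foreign
    return default_policy
```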
20220019554 | DATA MIGRATION MANAGEMENT AND MIGRATION METRIC PREDICTION - A query specifying a source repository and a target repository is received from a client device. A source index is generated that corresponds to the source repository and represents a snapshot of metadata associated with data contained in the source data repository. The source index is filtered based on filtering criteria specified by the query to obtain a filtered source index. Attributes of data corresponding to the filtered source index are determined as well as data retrieval type parameters. Without initiating a data migration of the data corresponding to the filtered source index from the source repository to the target repository, predicted data migration metrics associated with the data migration are determined and presented to an end user of the client device. The end user is provided with the capability to initiate or forego the data migration based on an evaluation of the predicted data migration metrics. | 2022-01-20 |
20220019555 | SNAPSHOT AND RESTORATION OF DISTRIBUTED FILE SYSTEM - In some examples, a data management system processes snapshots of a distributed file system, the distributed file system having files, each file comprising multiple data chunks. The data management system performs operations including storing file-to-chunk mapping in file system metadata; creating, for each chunk, a chunk generation ID by associating each chunk with a file system generation ID; in a next-generation snapshot of the distributed file system, incrementing, for all chunks in the next-generation snapshot, the respective chunk generation IDs; and taking a snapshot of the file system metadata and storing an updated file-to-chunk mapping associated with the next-generation snapshot. | 2022-01-20 |
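The chunk-generation bookkeeping above (stamp every chunk referenced by the next-generation snapshot with the incremented filesystem generation ID, then snapshot the metadata) can be sketched as follows. Names and data shapes are assumptions for illustration.

```python
def take_snapshot(file_to_chunks, chunk_gen, fs_gen):
    """Advance the filesystem generation, stamp every chunk referenced
    in the new snapshot with the new generation ID, and return the new
    generation plus the snapshotted file-to-chunk metadata.

    file_to_chunks: file name -> list of chunk IDs (the mapping stored
    in file system metadata); chunk_gen: chunk ID -> generation ID."""
    fs_gen += 1
    live = {c for chunks in file_to_chunks.values() for c in chunks}
    for c in live:
        chunk_gen[c] = fs_gen
    metadata_snapshot = {
        "generation": fs_gen,
        "mapping": {f: list(cs) for f, cs in file_to_chunks.items()},
    }
    return fs_gen, metadata_snapshot
```

A chunk whose generation ID stops advancing belongs only to older snapshots, which is one way such a scheme can tell current data from snapshot-only data.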
20220019556 | FACILITATING QUICK EVALUATION OF TRIGGER CONDITIONS FOR BUSINESS RULES THAT MODIFY CUSTOMER SUPPORT TICKETS - When a customer-support ticket is created or updated in an online customer-support system, the system applies a set of triggers, which modify the ticket based on business rules, to the ticket, wherein each trigger performs actions that modify the ticket when conditions for parameters associated with the ticket are satisfied. During this process, the system evaluates condition nodes in condition graphs for the set of triggers, wherein a condition graph for a trigger is a directed graph comprised of condition nodes that specify conditions on one or more parameters associated with the ticket. During this evaluation, if a valid path through a condition graph comprising satisfied condition nodes is discovered, the system fires a trigger associated with the condition graph. Also, while evaluating the condition nodes, the system performs one or more range-searching operations to quickly evaluate conditions for frequently occurring parameters in the condition graphs. | 2022-01-20 |
20220019557 | ATTACHABLE-AND-DETACHABLE DATABASE SESSIONS - In an embodiment, a database platform receives a request from a client for creation of an attachable-and-detachable database session, and responsively creates the requested attachable-and-detachable database session for the client. The database platform sets the attachable-and-detachable database session as a current database session for the client at the database platform. The database platform determines that the client has detached from the attachable-and-detachable database session, and thereafter continues to maintain the attachable-and-detachable database session in data storage at the database platform. | 2022-01-20 |
20220019558 | COMPUTER-IMPLEMENTED METHODS - A computer-implemented method of creating a database of characterising codes, each characterising code being indicative of character of a respective example of a physical system. The method comprises the steps of performing for each example: (a) receiving data including respective values of a plurality of parameters associated with the system; (b) identifying, from a plurality of data clusters, a data cluster for each value, said data clusters each defining a range of possible values of the respective parameter; (c) assigning each parameter to its identified data cluster; (d) generating, from the assigned data clusters, a characterising code for that example including a unique label for each of the identified data clusters; and (e) storing the characterising code in a database of characterising codes. | 2022-01-20 |
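Steps (b) through (d) of the abstract above (find the data cluster each parameter value falls in, then concatenate the cluster labels into a characterising code) can be sketched with clusters represented as ordered value ranges. The function name, the `(upper_bound, label)` representation, and the alphabetical parameter order are illustrative assumptions.

```python
def characterising_code(values, cluster_bounds):
    """Build a characterising code by mapping each parameter value to
    the label of the data cluster (value range) it falls in.

    values: parameter name -> value for one example of the system;
    cluster_bounds: parameter name -> ordered (upper_bound, label)
    pairs defining that parameter's clusters."""
    code = []
    for param, value in sorted(values.items()):  # fixed parameter order
        for upper, label in cluster_bounds[param]:
            if value <= upper:
                code.append(label)
                break
    return "".join(code)
```

Storing these codes in a database then lets examples with similar character be looked up by code equality, as step (e) suggests.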
20220019559 | Blockchain Services - A blockchain is generated as a cloud-based software service in a blockchain environment. The blockchain immutably archives particular usage of any device, perhaps as requested by a user. The user may thus peruse past or historical usage (such as message logs) and individually select historical messages that are desired for a blockchain recordation in the blockchain. Moreover, the usage may be publicly ledgered by still other blockchains, thus providing two-way ledgering for improved record keeping. | 2022-01-20 |
20220019560 | DYNAMICALLY MOVING VIRTUAL MACHINE (VM) DATA BASED UPON CONTEXT - Systems and methods for dynamically moving virtual machine (VM) data based upon context are described. In some embodiments, an Information Handling System (IHS) may include a processor and a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution, cause the IHS to: select a VM having a plurality of VM files; identify, among the plurality of VM files, a movable VM file; and transfer the movable VM file from a first storage tier to a second storage tier based upon a usage classification associated with the movable VM file. | 2022-01-20 |
20220019561 | EVENT-BASED GENERATION OF CONTEXT-AWARE TELEMETRY REPORTS - Systems and methods utilize telemetry data to provide administrators with metric information related to a detected IHS (Information Handling System) event, such as an error condition, where the provided metric information is particularized to the context of the event. A remote access controller (RAC) of the IHS stores metric reports received from metric sources. The RAC receives an indication of the event that specifies a first IHS component as a source of the event and specifies a first time associated with the event. The RAC identifies stored metric reports generated by the first component prior to the first time and identifies stored metric reports generated by components that are logically and/or physically related to the first component. The RAC generates an event report that includes the metric reports generated by the first component prior to the first time and the metric reports generated by components related to the first component. | 2022-01-20 |
20220019562 | DATA COMPRESSION BASED ON KEY-VALUE STORE - Methods, systems, and apparatus for data compression based on a key-value store. In one aspect, a method includes generating, at a server, a current dictionary based on a plurality of key-values stored in a storage system of the server; receiving a key-value pair transmitted by a client device; performing, at the server, data compression on a key-value in the key-value pair by using the current dictionary; and storing the key-value in the storage system of the server. | 2022-01-20 |
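The dictionary-based scheme the abstract above describes can be sketched with Python's zlib, which supports preset dictionaries; the sample values, the JSON-like value format, and the 32 KiB cap are illustrative assumptions, not details from the application:

```python
import zlib

# Hypothetical server-side sketch: build a shared dictionary from values
# already in the store, then compress new values against it.
stored_values = [b'{"user": "alice", "role": "admin"}',
                 b'{"user": "bob", "role": "viewer"}']
dictionary = b"".join(stored_values)[:32768]  # zlib preset dicts max 32 KiB

def compress_with_dict(value: bytes) -> bytes:
    c = zlib.compressobj(zdict=dictionary)
    return c.compress(value) + c.flush()

def decompress_with_dict(blob: bytes) -> bytes:
    d = zlib.decompressobj(zdict=dictionary)
    return d.decompress(blob) + d.flush()

new_value = b'{"user": "carol", "role": "viewer"}'
blob = compress_with_dict(new_value)
assert decompress_with_dict(blob) == new_value
```

Values that share structure with the dictionary compress smaller than with plain zlib; a server following this approach would presumably rebuild the dictionary periodically as the stored key-values evolve.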
20220019563 | SYSTEMS AND METHODS FOR LOGICALLY COPYING DATA FROM A SOURCE DATABASE TO ONE OR MORE TARGET DATABASES - A system and method are provided for logically copying data from a source database to a first target database and a second target database. Based on table partition information, the source database is queried to collect partition metadata information for a first set of partitions and a second set of partitions. A first set of the partition metadata information for the first set of partitions and a second set of the partition metadata information for the second set of partitions can be used to create at least one extent chunk for each partition of a table. The source database can be queried, based on a first set of extent chunks and a second set of extent chunks, for a first set of data to be written to the first target database and a second set of data from the source database to be written to the second target database. | 2022-01-20 |
20220019564 | MANAGING STORAGE DEVICE SCRUBBING - From among physical storage devices (PSDs) of a storage system, a set of two or more of the PSDs that are eligible for scrubbing may be determined; and from among the set, a relative eligibility of the PSDs may be determined. Conformance prediction analysis may be applied to determine the set and the relative eligibility of PSDs of the set. The conformance prediction analysis may determine a scrubbing eligibility classification (e.g., label), and a confidence value for the classification, which may serve as the relative eligibility of the PSD. The eligible PSDs may be ranked in an order according to determined confidence values, and may be further classified according to such order. The future workload of the storage system may be forecasted, and the scrubbing of PSDs may be scheduled based on the forecasted workload of the system and the relative eligibilities of the set of PSDs. | 2022-01-20 |
20220019565 | AUTONOMOUS DATABASE DEFRAGMENTATION - Techniques are disclosed relating to performing database defragmentation operations by autonomously rebuilding index objects stored in one or more tablespaces of a database. In various embodiments, the disclosed techniques include autonomously performing defragmentation operations for one or more tablespaces in a database in an online manner such that a server system may continue to service data access requests while performing the defragmentation operations. In one non-limiting embodiment, for example, the disclosed techniques include selecting a first tablespace to defragment based on its level of fragmentation (e.g., relative to the other tablespaces). The server system may then rebuild index objects, from the first tablespace, to a new tablespace such that the index objects are stored in the new tablespace without fragmentation. The server system may then reclaim at least a portion of the storage space used to store the first tablespace and repeat, as desired, to autonomously defragment multiple tablespaces. | 2022-01-20 |
20220019566 | SYSTEM AND METHOD FOR INTEGRATING SYSTEMS TO IMPLEMENT DATA QUALITY PROCESSING - System and method for integrating systems to implement data quality processing. A business rule creation module is configured to create a business rule associated with a business term. A data quality specification module is configured to create a data quality specification based on the business rule. The data quality specification comprises (1) an identity of a column of a table stored in a database comprising data to be tested; (2) a test to perform on the data to be tested; and (3) reference data required to perform the test on the data. A validation module is configured to receive the data quality specification; retrieve data associated with the column from the database; and test the retrieved data in accordance with the test using the reference data. A result publication module is configured to return a result of the test to the data quality specification module. | 2022-01-20 |
20220019567 | DATA STORAGE USING VECTORS OF VECTORS - The systems and methods described here can reduce the storage space required (memory and/or disk) to store certain types of data, provide efficient (fast) creation, modification and retrieval of such data, and support such data within the framework of a multi-version database. In some embodiments, the systems and methods can store each field of a set of records as a vector of values, e.g., a data vector. A set of records can be represented using a vector hash vector, or “vhash” vector, wherein each element of the vhash vector contains a unique identifier of a data vector, based on a cryptographic hash of the data vector. A header table can store associations between labels and “vhash” vectors that pertain to those labels. Identical data vectors can be re-used between different record sets or vhash vectors needing that vector, thus saving space. | 2022-01-20 |
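A minimal sketch of the vhash idea from the abstract above, assuming SHA-256 as the cryptographic hash and a plain Python dict standing in for the multi-version store (both are assumptions, not details from the application):

```python
import hashlib

def vhash(data_vector):
    """Identify a data vector by a cryptographic hash of its contents."""
    h = hashlib.sha256()
    for value in data_vector:
        h.update(repr(value).encode())
        h.update(b"\x00")  # separator so ("ab","c") != ("a","bc")
    return h.hexdigest()

# Column store keyed by hash; identical data vectors are stored once.
vector_store = {}

def intern_vector(data_vector):
    key = vhash(tuple(data_vector))
    vector_store.setdefault(key, tuple(data_vector))
    return key

# Two record sets with an identical "status" column share one data vector.
set_a = {"status": intern_vector(["ok", "ok", "err"])}
set_b = {"status": intern_vector(["ok", "ok", "err"])}
assert set_a["status"] == set_b["status"]
assert len(vector_store) == 1
```

The space saving comes from the `setdefault`: the second record set stores only a hash reference, not a second copy of the column.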
20220019568 | Cosharding and Randomized Cosharding - The technology relates to cosharding tables within a distributed storage system. A data table including one or more rows may be received. Each row in the data table may include an identifier key and pieces of data. Each piece of data in the data table may be indexed into individual rows of an index table, wherein each row in the index table includes data associated with the identifier key of the data table from which the piece of data in the respective row was indexed. The index table may be sharded into splits, wherein the sharding includes assigning each row of the index table into one of the splits based on the identifier key of the data table from which the piece of data in the respective row was indexed. The splits may be stored into two or more portions of the distributed storage system. | 2022-01-20 |
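The sharding-by-identifier-key step can be sketched as follows; the hash function, split count, and table contents are illustrative assumptions:

```python
import hashlib

NUM_SPLITS = 4

def split_for(identifier_key: str) -> int:
    # Assign a row to a split by hashing the data table's identifier key,
    # so every index row derived from that key lands in the same split.
    digest = hashlib.md5(identifier_key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SPLITS

# Hypothetical data table: (identifier key, piece of data to index).
data_table = [("user1", "alice@example.com"), ("user2", "bob@example.com")]

index_splits = {i: [] for i in range(NUM_SPLITS)}
for key, email in data_table:
    # Each index row carries the identifier key it was indexed from.
    index_splits[split_for(key)].append((email, key))

assert ("alice@example.com", "user1") in index_splits[split_for("user1")]
```

The point of cosharding is that an index row and the data row it references resolve to the same split, so a lookup or a transactional update touches a single portion of the distributed storage system.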
20220019569 | METHODS AND SYSTEM FOR CONCURRENT UPDATES OF A CUSTOMER ORDER - An order management system in electronic communication with a database may be configured to manage concurrent update requests for the order data stored in the database. In particular, the order management system may be configured to receive a first order modification request from a first user and a second order modification request from a second user, modify the order according to the first request to create a modified order, determine that the second order modification request includes one or more aspects that conflict with the modified order and one or more aspects that do not conflict with the modified order, send a notification to the second user identifying the one or more conflicting aspects, and modify the modified order according to the one or more non-conflicting aspects of the second request to create a second modified order. | 2022-01-20 |
20220019570 | TRANSACTIONAL PROCESSING OF CHANGE TRACKING DATA - Systems, methods, and devices for transactional processing of change tracking data for a database are discussed. A method includes generating a micro-partition based on execution of a transaction on a table of a database, the micro-partition reflecting changes made to the table by the transaction. A change tracking entry is generated in response to the execution of the transaction. The change tracking entry includes an indication of one or more modifications made to the table by the transaction and an indication of the micro-partition generated based on the execution of the transaction. The change tracking entry is stored in the micro-partition as metadata. At least one existing micro-partition is removed from the table, responsive to storing the change tracking entry. | 2022-01-20 |
20220019571 | AUTO DETECTION OF MATCHING FIELDS IN ENTITY RESOLUTION SYSTEMS - Methods, computer program products and/or systems are provided that perform the following operations: obtaining payload attribute fields; determining potential matching fields from the payload attribute fields; determining a matching function for each of the potential matching fields; determining an attribute score for each of the potential matching fields based on the matching function; obtaining a score list for a reference data set; determining a correlation of the attribute score for each of the potential matching fields with the reference data set score list; selecting new matching fields from the potential matching fields based at least in part on the correlation; determining an optimal weight for each of the selected new matching fields; selecting attribute fields for matching from the selected new matching fields based on a threshold rate for false positives and false negatives; and providing the attribute fields for matching and the associated optimal weight for the attribute fields. | 2022-01-20 |
20220019572 | REAL-TIME ANOMALY DETECTION - This disclosure provides systems, methods and apparatuses for detecting anomalous activity in an electronic system. In some implementations, a system generates a set of model parameters based on a number (n) of historical datapoints in a dataset, where each datapoint represents activity detected in the electronic system over a respective period of time. The system receives a first new datapoint for the dataset and generates a first test parameter based on a value of the first new datapoint and an average and a measure of spread of the n historical datapoints. The system further compares the first test parameter to the set of model parameters and determines whether the first new datapoint represents an anomaly based at least in part on the comparison. | 2022-01-20 |
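The test-parameter comparison in the abstract above resembles a classic z-score check; the sketch below assumes the mean and standard deviation as the "average and measure of spread" and a fixed threshold as the model parameter (both are assumptions, not details from the application):

```python
import statistics

def is_anomaly(history, new_point, threshold=3.0):
    """Flag new_point if its z-score against the n historical
    datapoints exceeds the threshold (an illustrative model parameter)."""
    mean = statistics.fmean(history)
    spread = statistics.stdev(history)
    if spread == 0:
        return new_point != mean
    z = abs(new_point - mean) / spread
    return z > threshold

history = [100, 102, 98, 101, 99, 100, 103, 97]  # mean 100, stdev 2
assert not is_anomaly(history, 104)  # z = 2.0, within threshold
assert is_anomaly(history, 150)      # z = 25.0, anomalous
```

A real-time system would additionally update the mean and spread incrementally as new datapoints arrive, rather than recomputing over all n points.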
20220019573 | CONCURRENT HASH MAP UPDATES - Approaches in accordance with various embodiments can perform spatial hash map updates while ensuring the atomicity of the updates for arbitrary data structures. A hash map can be generated for a dataset where entries in the hash map may correspond to multiple independent values, such as pixels of an image to be rendered. Update requests for independent values may be received on multiple concurrent threads, but change requests for independent values corresponding to a hash map entry can be aggregated from a buffer and processed iteratively in a single thread for a given hash map entry. In the case of multi-resolution spatial hashing where data can be stored at various discretization levels, this operation can be repeated to propagate changes from one level to another. | 2022-01-20 |
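The aggregate-then-apply pattern can be sketched sequentially; a plain list stands in for the buffer that concurrent threads would fill, and the per-cell sum/count payload is an illustrative assumption:

```python
from collections import defaultdict

# Buffered change requests, as might be produced by many concurrent
# threads; here a plain list stands in for the request buffer.
buffer = [("cell_17", 0.2), ("cell_17", 0.4), ("cell_03", 1.0)]

hash_map = defaultdict(lambda: {"sum": 0.0, "count": 0})

# Aggregate all requests targeting the same entry, then apply each
# group in one pass, so every entry is written by a single "thread"
# and the update stays atomic per entry.
grouped = defaultdict(list)
for key, delta in buffer:
    grouped[key].append(delta)

for key, deltas in grouped.items():
    entry = hash_map[key]
    entry["sum"] += sum(deltas)
    entry["count"] += len(deltas)

assert hash_map["cell_17"]["count"] == 2
```

For multi-resolution hashing, this grouped pass would be repeated per discretization level to propagate aggregated changes from finer to coarser levels.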
20220019574 | GENERATING AND UTILIZING PRE-ALLOCATED STORAGE SPACE - Systems and methods for pre-allocating and utilizing storage space in a relational database are provided. In embodiments a method includes: obtaining transaction data including data regarding record insertions in a relational database, wherein each record of the record insertions is associated with a key value; identifying a type of each of the record insertions as either a random insertion type or a key range insertion type based on the database transaction data, wherein the random insertion type comprises records associated with respective key values inserted in a random order, and the key range insert type comprises records associated with a range of key values inserted within a certain time period; predicting a new range of key values associated with future record insertions based on the type of each of the record insertions; and pre-allocating page space in one or more pages of the relational database for the future record insertions. | 2022-01-20 |
20220019575 | System And Method For Augmenting Database Applications With Blockchain Technology - A method for augmenting a database application with blockchain technology is disclosed. The method involves recording data modifications made by a database application into a corresponding database as well as on a blockchain for global consensus confirmation. This is done without changing the existing application architecture and with minimal code changes to the existing application. Records in the database requiring synchronization with the blockchain are subjected to consensus voting, and unauthorized database changes are rolled back, thereby granting immutability and non-repudiation characteristics to a traditional database application. Records in databases are thus made globally consistent. An existing database application can be deployed on a blockchain without significantly modifying the code. Multiple applications can synchronize data through a common blockchain, which greatly simplifies building blockchain applications. | 2022-01-20 |
20220019576 | USING STORED EXECUTION PLANS FOR EFFICIENT EXECUTION OF NATURAL LANGUAGE QUESTIONS - An analysis system connects to a set of data sources and answers natural language questions based on those data sources. The analysis system connects with the data sources and retrieves metadata describing the data assets stored in each data source. The analysis system generates an execution plan for the natural language question. The analysis system finds data assets that match the received question based on the metadata. The analysis system ranks the data assets and presents the ranked data assets to users, allowing users to modify the execution plan. The analysis system may use execution plans of previously stored questions for executing new questions. The analysis system supports selective preprocessing of data to improve data quality. | 2022-01-20 |
20220019577 | PREVIEW GENERATION OPERATIONS FOR FILES IN A NETWORK-ACCESSIBLE SYSTEM - An information management system is provided herein that combines data backup and data migration operations such that data appears available in a network-accessible folder when in fact the data is stored as a secondary copy in a secondary storage device. For example, a user can indicate that a first file should be added to the network-accessible folder. A client computing device can transmit the first file to a secondary storage computing device that performs a backup operation to store a backup copy of the first file in the secondary storage device. The secondary storage computing device can also generate an index of the first file, which includes a location of the backup copy of the first file, and transmit the index to a server that manages the network-accessible folder. Thus, the backup copy of the first file can be retrieved if the first file is selected via the network-accessible folder. | 2022-01-20 |
20220019578 | DETERMINING DATA STRUCTURES FOR SPATIAL DATA BASED ON SPATIAL DATA STATISTICS - Some embodiments provide a non-transitory machine-readable medium that stores a program. The program identifies a first data structure having a first type. The first data structure is configured to store a set of geometries. The program further identifies a second data structure associated with the first data structure. The second data structure is configured to store modifications to the set of geometries. The program also performs a merge operation on the first data structure and the second data structure to form a third data structure. | 2022-01-20 |
20220019579 | ENTERPRISE KNOWLEDGE GRAPHS USING MULTIPLE TOOLKITS - Examples described herein generally relate to a computer system including a knowledge graph storing a plurality of entities. A mining of a set of enterprise source documents within an enterprise intranet is performed, by a plurality of knowledge mining toolkits, to determine a plurality of entity names. The plurality of entity names are linked based on entity metadata by traversing various relationships between people, files, sites, and groups associated with entities. An entity record is generated within a knowledge graph for a mined entity name from the linked entity names based on an entity schema and ones of the set of enterprise source documents associated with the mined entity name. The entity record includes attributes aggregated from the ones of the set of enterprise source documents associated with the mined entity name. | 2022-01-20 |
20220019580 | METHOD AND SYSTEM FOR TEXT UNDERSTANDING IN AN ONTOLOGY DRIVEN PLATFORM - Embodiments of methods and systems for informatics systems are disclosed. Such informatics systems may utilize a unifying format to represent text to facilitate linking between data from the text and one or more ontologies, and the commensurate ability to mine such data. | 2022-01-20 |
20220019581 | DOCUMENT RETRIEVAL APPARATUS, DOCUMENT RETRIEVAL SYSTEM, DOCUMENT RETRIEVAL PROGRAM, AND DOCUMENT RETRIEVAL METHOD - A document retrieval apparatus includes: an input reception unit configured to receive an input of a keyword; a document acquisition unit configured to acquire an author's name and a document file from a digital document database, which stores document files of text data obtained by performing a character recognition process on document image data of handwritten documents, together with the names of the authors who wrote the handwritten documents; a keyword acquisition unit configured to reference an associating-keyword database, which stores information associating authors' names, keywords, and associating keywords, and to acquire an associating keyword for the input keyword from the input keyword received by the input reception unit and the author's name acquired by the document acquisition unit; a document search unit configured to search the document file acquired by the document acquisition unit using the input keyword and the acquired associating keyword; and a search result output unit configured to output a search result of the document search unit. | 2022-01-20 |
20220019582 | INFORMATION COMPUTING APPARATUS, INFORMATION COMPUTING METHOD, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM - An information processing device according to the present application includes an acquisition unit, a selection unit, and a search unit. The acquisition unit acquires a search query input by a user. The selection unit selects a type of target information to be searched on the basis of the search query acquired by the acquisition unit. The search unit searches for information corresponding to the search query from the type of target information to be searched selected by the selection unit. | 2022-01-20 |
20220019583 | SERVER AND INFORMATION PROCESSING METHOD - A server receives a first type of time-series vital data from a first system that collects the first type of time-series vital data from a first group of subjects, and receives a second type of time-series vital data from a second system that collects the second type of time-series vital data from a second group of subjects. Provided are a storage unit for storing a plurality of types of time-series vital data including the first type and the second type, a receiving unit for receiving a request for provision of the time-series vital data from an external device that specifies at least a part of the time-series vital data stored in a vital data database, and a transmitting unit for transmitting to the external device the time-series vital data selected in accordance with the request for provision from among the time-series vital data stored in the vital data database. | 2022-01-20 |
20220019584 | System and Method for Optimizing Execution of Rules Modifying Search Results - A method, system and computer-usable medium for optimizing search rules that modify search results. A rules service is initiated prior to executing a given search query from a shopper. A search rule evaluation is performed for the given search query, implementing a search rule that causes actions defined by the search rule to be applied to the given search query. A list of search rules implemented or fired for each given search query is stored. A tracking record is built based on the search rule evaluation that includes the list of implemented or fired rules and rule impact tracking (RIT) records. | 2022-01-20 |
20220019585 | TERM VECTOR MODELING OF DATABASE WORKLOADS - Techniques for managing database workloads using similarity measures based on queries executed are described. Classical techniques from information retrieval are applied to the domain of database workload management. Specifically, the technique of using document term vectors to compute similarity measures is applied using the conceptual mapping of SQL workloads as “documents” composed of SQL queries as “terms.” The techniques include generating two or more sets of workloads with each workload representing a set of queries executed on at least one database. Based on the sets of workloads, workload term vectors are calculated that represent the set of queries executed on the database. Then, based on the calculated workload vectors, a similarity score is generated between the two or more sets of workloads. | 2022-01-20 |
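The queries-as-terms mapping can be sketched directly; the whitespace/case normalization below is an illustrative stand-in for whatever query canonicalization the application intends:

```python
import math
from collections import Counter

def workload_vector(queries):
    # Treat each (normalized) SQL query as a "term" of the workload "document".
    return Counter(q.strip().lower() for q in queries)

def cosine_similarity(v1, v2):
    dot = sum(v1[t] * v2[t] for t in v1)
    n1 = math.sqrt(sum(c * c for c in v1.values()))
    n2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

w1 = workload_vector(["SELECT * FROM t", "SELECT * FROM t", "UPDATE t SET x=1"])
w2 = workload_vector(["select * from t", "DELETE FROM t"])
score = cosine_similarity(w1, w2)
assert 0.0 < score < 1.0  # partial overlap: one shared query "term"
```

Two workloads running the same mix of queries score near 1.0 regardless of volume, which is what makes the measure useful for grouping databases with similar usage patterns.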
20220019586 | PREDICTED PROPERTIES FOR DATABASE QUERY PLANNING - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using machine learned models to predict properties for database query planning. One of the methods includes receiving a query to be executed over one or more relations of a database. A query planner generates a candidate query plan comprising a plurality of operators to be executed to generate query results for the query. A predicted property of the portion of the query plan when executed on the database is computed for each of one or more portions of the query plan, including providing a respective representation of each portion of the query plan as input to a trained model configured to generate predicted properties of tuples generated by portions of query plans when executed on the database. A score for the candidate query plan is computed using the predicted property generated by the trained model. | 2022-01-20 |
20220019587 | ACCESS PATH OPTIMIZATION - A computer-implemented method for access path optimization is provided according to embodiments of the present disclosure. In the method, a plurality of real values of an access path factor may be collected during a specified time period. One of the real values may be generated when a query is executed on a first access path. Then, at least one second access path may be generated for the query based on the plurality of real values of the access path factor. Moreover, an optimal access path for the query may be identified from the first access path and the at least one second access path. | 2022-01-20 |