47th week of 2021 patent application highlights, part 46
Patent application number | Title | Published |
20210365352 | PROVIDING ADDITIONAL STACK TRACE INFORMATION FOR TIME-BASED SAMPLING IN ASYNCHRONOUS EXECUTION ENVIRONMENTS - The present disclosure describes methods, systems, and computer program products for providing additional stack trace information for time-based sampling (TBS) in asynchronous execution environments. One computer-implemented method includes determining whether time-based sampling is activated to capture a time-based sampling data during execution of a JavaScript function; in response to determining that the time-based sampling is activated to capture the time-based sampling data, determining whether a callback stack trace is active; in response to determining that the callback stack trace is active, loading the callback stack trace; retrieving a current stack trace of the JavaScript function; and saving the loaded callback stack trace and the current stack trace of the JavaScript function as the time-based sampling data. | 2021-11-25 |
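The core of 20210365352 is joining the stack trace saved when an async callback was scheduled with the stack trace at sampling time. A minimal sketch of that idea (all function and variable names are hypothetical, not from the application):

```python
# Illustrative sketch, not the patented implementation: when a time-based
# sampling (TBS) tick fires during an asynchronous JavaScript function, load
# the callback stack trace (if one is active) and prepend it to the current
# stack trace before saving the sample.

def take_tbs_sample(tbs_active, callback_stack, current_stack):
    """Return the sample to record for this TBS tick, or None if TBS is off."""
    if not tbs_active:
        return None
    sample = []
    if callback_stack:                 # callback stack trace is active: load it
        sample.extend(callback_stack)  # frames captured when the callback was scheduled
    sample.extend(current_stack)       # frames of the currently executing function
    return sample

# Example: an async callback whose scheduling context would otherwise be lost.
scheduled_at = ["main", "fetchData", "setTimeout"]
running_now = ["onTimeout", "parseResponse"]
print(take_tbs_sample(True, scheduled_at, running_now))
```

The point of the combined sample is that a profiler sees not only where the callback is executing but also where it was scheduled.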
20210365353 | Crash Simulator Device - A crash test simulator device for re-creating a software crash scenario within a virtual environment using artificial intelligence processes to consider a large group of variables that may be relevant to the crash incident. The crash test simulator device includes a production environment monitoring engine configured to monitor a user's interaction with an application implemented within a production environment, and generate information used to re-create a crash incident within a virtual environment. | 2021-11-25 |
20210365354 | ACCELERATING APPLICATION INTERROGATION AND INTERACTION MAPPING - Various embodiments of the present technology generally relate to the characterization and improvement of software applications. More specifically, some embodiments relate to systems and methods for modeling code behavior and generating characterizations of the code behavior and interrelations based on the code models. Some embodiments include user interfaces that provide views into the characterizations and interrelations of the application features and components that can give developers insights quickly and efficiently into applications which may be used for future developments and modifications to the application and to avoid introducing bugs into the application. | 2021-11-25 |
20210365355 | TEST CASE GENERATION APPARATUS, TEST CASE GENERATION METHOD, AND COMPUTER READABLE MEDIUM - If an (i−1)-th test case which is a test case for step 1 to step (i−1) is stored, a generation control unit ( | 2021-11-25 |
20210365356 | AUTOMATED DETERMINATION OF SOFTWARE TESTING RESOURCES - Systems and methods are disclosed that determine a duration and resources for testing software. In some implementations, the system performs operations including determining functions performed by applications of the software product, and determining categories based on the functions, the categories including a lowest-criticality category and a highest-criticality category. The operations also include determining degrees of change to the applications and test scripts corresponding to the degrees of change. The operations also include generating a data structure based on the categories and the degrees of change, the data structure including columns identifying the categories in an order from lowest to highest criticality. The operations also include determining weights corresponding to distances of the categories from the highest-criticality category. The operations also include determining a set of test scripts based on the weights, the test scripts, and the degree of change indicators, and determining the resources based on the set of test scripts. | 2021-11-25 |
20210365357 | CREATING AN INTELLIGENT TESTING QUEUE FOR IMPROVED QUALITY ASSURANCE TESTING OF MICROSERVICES - Described is a system for creating an intelligent testing queue for improved quality assurance (QA) testing of services (or microservices). The system may perform a graphical analysis of interactions between services to derive testing constraints. For example, the system may monitor services to identify interactions (e.g. API accesses) between the services, and store the interactions as a directed graph representation. The system may traverse the directed graph (e.g. via a breadth-first search) to determine service dependencies. Based on the probability of failure for the testing operations and the service dependencies, the system creates a specialized testing queue. By performing testing operations according to the specialized queue, the system may improve certain metrics associated with QA processes such as mean time to failure (MTTF) and mean time to diagnose (MTTD). | 2021-11-25 |
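The two moving parts of 20210365357 — a BFS over the interaction graph to find service dependencies, and a queue ordered by failure probability — can be sketched as follows. The graph, the probabilities, and the exact ordering rule are illustrative assumptions; the application does not specify them at this level:

```python
# Hedged sketch: derive service dependencies from a directed graph of
# observed API interactions via breadth-first search, then order the testing
# queue so the most failure-prone services are tested first.
from collections import deque

def dependencies(graph, service):
    """BFS over the interaction graph to collect transitive dependencies."""
    seen, queue = set(), deque([service])
    while queue:
        current = queue.popleft()
        for dep in graph.get(current, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

def build_test_queue(graph, failure_prob):
    """Order services by descending failure probability, then dependency count."""
    return sorted(graph, key=lambda s: (-failure_prob.get(s, 0.0),
                                        -len(dependencies(graph, s))))

graph = {"checkout": ["payments", "inventory"],
         "payments": ["ledger"],
         "inventory": [],
         "ledger": []}
probs = {"payments": 0.4, "checkout": 0.2, "inventory": 0.1, "ledger": 0.05}
print(build_test_queue(graph, probs))
```

Testing likely-to-fail services early is what improves MTTF/MTTD-style metrics: failures surface sooner in the QA run.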
20210365358 | DIAGNOSING ANOMALIES DETECTED BY BLACK-BOX MACHINE LEARNING MODELS - A computer-implemented method, a computer program product, and a computer system for diagnosing anomalies detected by a black-box machine learning model. A computer determines a local variance of a test sample in a test dataset, where the local variance represents uncertainty of a prediction by the black-box machine learning model. The computer initializes optimal compensations for the test sample, where the optimal compensations are optimal perturbations to test sample values of respective components of a multivariate input variable. The computer determines local gradients for the test sample. Based on the local variance and the local gradients, the computer updates the optimal compensations until convergences of the optimal compensations are reached. Using the optimal compensations, the computer diagnoses the anomalies detected by the black-box machine learning model. | 2021-11-25 |
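The "optimal compensations" in 20210365358 are small perturbations to each input component that bring the black-box anomaly score back toward normal; the components needing the largest compensation point to the anomaly's cause. A toy gradient-descent version of that loop (the model, step size, and convergence criterion here are invented for illustration):

```python
# Hedged sketch: numerically estimate local gradients of a black-box score
# with respect to per-component compensations, and descend until the
# compensations converge. The "black box" is a toy distance-from-normal score.

def diagnose(score, sample, lr=0.1, steps=200, eps=1e-4):
    comp = [0.0] * len(sample)                  # optimal compensations
    for _ in range(steps):
        x = [v + c for v, c in zip(sample, comp)]
        grads = []
        for i in range(len(sample)):            # local gradient per component
            x_hi = list(x)
            x_hi[i] += eps
            grads.append((score(x_hi) - score(x)) / eps)
        comp = [c - lr * g for c, g in zip(comp, grads)]
    return comp

# Toy black box: anomaly score is squared distance from the normal point (1, 2).
normal = [1.0, 2.0]
score = lambda x: sum((a - b) ** 2 for a, b in zip(x, normal))
comp = diagnose(score, [1.0, 5.0])              # second component is anomalous
print([round(c, 2) for c in comp])
```

Here the second component needs a compensation near -3 while the first needs almost none, which is exactly the diagnostic signal: the anomaly lives in component two.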
20210365359 | METHOD AND SYSTEM FOR IDENTIFICATION AND ELIMINATION OF FALSE POSITIVES FROM DELTA ALARMS - This disclosure relates generally to the field of elimination of false positives during static analysis of an application code, and, more particularly, to a method and a system for identification and elimination of false positives from delta alarms. Existing static analysis tools report/generate numerous static alarms for a version, and the same static alarms also get reported for subsequent versions; these are referred to as repeated alarms, while the static alarms remaining after suppression of the repeated alarms are called delta alarms. In an embodiment, the disclosed method and system post-process the delta alarms to identify a set of false positives using a version-aware static analysis technique based on a set of reuse computation techniques implementing a conservative or an aggressive approach based on a dynamic configuration input. | 2021-11-25 |
20210365360 | MAPPING A VIRTUAL ADDRESS USING A CONTENT ADDRESSABLE MEMORY (CAM) - Methods, apparatuses, and systems related to mapping a virtual address using a content addressable memory (CAM) are described. In a memory system including a memory and a content addressable memory (CAM), a select line of the CAM can be coupled to a corresponding select line of the memory, which allows the memory system to map a virtual address of a memory device directly to the corresponding select line of the memory. An example method can include receiving, from a host at a memory device comprising a memory array and a content addressable memory (CAM), a first virtual address to be searched among virtual addresses stored within the CAM, identifying, in response to receipt of the first virtual address, a select line of a plurality of select lines of the CAM associated with a second virtual address matching the first virtual address, and activating, in response to identifying the select line of the CAM, a corresponding select line of the memory coupled to the identified select line of the CAM. | 2021-11-25 |
20210365361 | METHOD AND APPARATUS FOR DESIGNING DUAL-MIRROR SHARED CONF PARTITION FILE - A method, an apparatus, a device and a computer readable storage medium for designing a dual-mirror shared conf partition file are provided. The method includes pre-configuring disk space occupation capacities for a first flash mirror file, a second flash mirror file, and a shared conf partition of the BMC, and generating a FW file of the BMC which does not include a shared conf partition file. The method further includes, in response to an instruction for starting one of the first and second flash mirror files, first mounting the partitions configured in that flash mirror file and then mounting the shared conf partition last. The shared conf partition stores a configuration file of the BMC. The shared conf partition and the configuration file are automatically generated when the BMC runs normally for the first time. | 2021-11-25 |
20210365362 | SYSTEM AND METHOD FOR FACILITATING MITIGATION OF READ/WRITE AMPLIFICATION IN DATA COMPRESSION - The system can receive data to be written to a non-volatile memory in the distributed storage system. The received data can include a plurality of input segments. The system can assign consecutive logical block addresses (LBAs) to the plurality of input segments. The system can then compress the plurality of input segments to generate a plurality of fixed-length compressed segments, with each fixed-length compressed segment aligned with a physical block address (PBA) in a set of PBAs. The system compresses the plurality of input segments to enable an efficient use of storage capacity in the non-volatile memory. Next, the system can write the plurality of fixed-length compressed segments to a corresponding set of PBAs in the non-volatile memory. The system can then create, in a data structure, a set of entries which map the LBAs of the input segments to the set of PBAs. This data structure can be used later by the system when processing a read request including a LBA. | 2021-11-25 |
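The data structure at the heart of 20210365362 is a table mapping the consecutive LBAs of the input segments to the PBAs of the fixed-length compressed segments. A minimal sketch (compression is stubbed out, and the one-segment-per-PBA layout is an assumption for illustration):

```python
# Hedged sketch of the LBA -> PBA mapping step: assign consecutive logical
# block addresses to incoming segments, align each fixed-length compressed
# segment with one physical block address, and record the mapping entries
# used later to serve read requests.

def write_segments(segments, next_lba=0, next_pba=0):
    """Assign consecutive LBAs and record LBA -> PBA entries."""
    mapping = {}
    for i, segment in enumerate(segments):
        lba = next_lba + i          # consecutive logical block addresses
        pba = next_pba + i          # each fixed-length segment fills one PBA
        mapping[lba] = pba
        # a real system would compress `segment` and write it to the PBA here
    return mapping

mapping = write_segments([b"seg0", b"seg1", b"seg2"], next_lba=100, next_pba=7)
print(mapping)  # a later read request for LBA 101 resolves to PBA 8
```

Because compressed segments are fixed-length and PBA-aligned, a read needs only one table lookup and one physical read, which is how the scheme mitigates read amplification.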
20210365363 | MAPPING A VIRTUAL ADDRESS USING A PHYSICAL ADDRESS - Methods, apparatuses, and systems related to mapping a virtual address using a physical address are described. In a memory system including a memory (e.g., cache) and a content addressable memory (CAM), the CAM can be configured to search data requested by a host from the memory based on multiple indicators stored in the CAM. For example, in the event that the data stored in the memory is not searchable based on a particular indicator such as a virtual address of a memory array (e.g., main memory), the CAM can be configured to search the data based on another indicator such as a physical address of the memory array. Searching the data based on multiple indicators can resolve a synonym problem. | 2021-11-25 |
20210365364 | HOST-BASED FLASH MEMORY MAINTENANCE TECHNIQUES - Devices and techniques are disclosed herein for allowing host-based maintenance of a flash memory device. In certain examples, memory write information can be encrypted at the memory device and provided to the host for updating and maintaining memory device maintenance statistics. | 2021-11-25 |
20210365365 | Using Data Mirroring Across Multiple Regions to Reduce the Likelihood of Losing Objects Maintained in Cloud Object Storage - Techniques for using data mirroring across regions to reduce the likelihood of losing objects in a cloud object storage platform are provided. In one set of embodiments, a computer system can upload first and second copies of a data object to first and second regions of the cloud object storage platform respectively, where the first and second copies are identical. The computer system can then attempt to read the first copy of the data object from the first region. If the read attempt fails, the computer system can retrieve the second copy of the data object from the second region. | 2021-11-25 |
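The read path in 20210365365 is simple: try the first region, and fall back to the mirror on failure. A stand-in sketch using a dict per region instead of a real cloud SDK (the region names and API are invented for illustration):

```python
# Hedged sketch of cross-region object mirroring: identical copies are
# uploaded to two regions; a failed read from the primary region falls back
# to the mirror, so a single-region loss does not lose the object.

class MirroredStore:
    def __init__(self):
        self.regions = {"us-east": {}, "us-west": {}}

    def put(self, key, data):
        # upload identical copies to both regions
        self.regions["us-east"][key] = data
        self.regions["us-west"][key] = data

    def get(self, key):
        try:
            return self.regions["us-east"][key]   # attempt the first region
        except KeyError:
            return self.regions["us-west"][key]   # fall back to the mirror

store = MirroredStore()
store.put("obj1", b"payload")
del store.regions["us-east"]["obj1"]              # simulate a lost object
print(store.get("obj1"))
```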
20210365366 | RESOURCE CACHING METHOD AND ELECTRONIC DEVICE SUPPORTING THE SAME - An electronic device is disclosed that includes a display, a communication circuitry, a first memory storing a native web application including a first resource, a second memory loaded with instructions included in the native web application, and a processor operatively connected with the display, the communication circuitry, the first memory, and the second memory. The processor is configured to transmit a request to identify whether the first resource is changed to an external electronic device, via the communication circuitry, when a specified condition is met, use the first resource in response to receiving a response that the first resource is not changed from the external electronic device or not responding to a connection to the external electronic device, receive a second resource replacing the first resource from the external electronic device, via the communication circuitry, in response to receiving a response that the first resource is changed from the external electronic device, and use the second resource. In addition, various embodiments recognized through the specification are possible. | 2021-11-25 |
20210365367 | STORAGE DEVICE AND OPERATING METHOD THEREOF - A storage device includes a nonvolatile memory including a plurality of first blocks having memory cells each configured to store one bit of data and a plurality of second blocks having memory cells each configured to store multiple bits of data; and a controller configured to determine whether or not a number of use-completed second blocks, each of which has a first threshold number or less of valid pages, among use-completed second blocks of the plurality of second blocks, is equal to or larger than a second threshold number and to select, according to a determination result, a victim block on which garbage collection is to be performed among use-completed first blocks of the plurality of first blocks or the use-completed second blocks each having the first threshold number or less of valid pages. | 2021-11-25 |
20210365368 | Flush Method for Mapping Table of SSD - A flush method of a solid-state drive (SSD) comprises mapping a group of data to multiple segments of a mapping table; generating a recording table for recording a total value and multiple count values corresponding to the multiple segments; when one of the data is written, increasing one of the multiple count values corresponding to one of the multiple segments of the one of the data by a number, and increasing the total value by another number; determining whether the one of the multiple count values is greater than one of multiple optimized thresholds corresponding to the one of the multiple count values; and executing a flush operation to write the one of the multiple segments into a memory and restoring the mapping table if the one of the multiple count values is greater than the one of the multiple optimized thresholds. | 2021-11-25 |
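The counting scheme in 20210365368 can be sketched in a few lines: each write bumps the count of its mapping-table segment plus a running total, and a segment whose count exceeds its threshold is flushed and its count reset. The segment size and threshold values below are assumptions, not figures from the application:

```python
# Hedged sketch of the per-segment flush bookkeeping: one counter per
# mapping-table segment, a running total, and a flush when a segment's
# counter crosses its optimized threshold.

SEGMENT_SIZE = 16  # LBAs covered by one mapping-table segment (assumed)

def record_write(lba, counts, thresholds, total):
    """Return (segment_to_flush_or_None, updated_total) for one write."""
    segment = lba // SEGMENT_SIZE
    counts[segment] = counts.get(segment, 0) + 1
    total += 1
    if counts[segment] > thresholds.get(segment, 4):
        counts[segment] = 0     # segment flushed to flash; restart its count
        return segment, total
    return None, total

counts, total = {}, 0
flushed = []
for lba in [3, 5, 7, 9, 11]:    # five writes all landing in segment 0
    seg, total = record_write(lba, counts, thresholds={0: 4}, total=total)
    if seg is not None:
        flushed.append(seg)
print(flushed, total)
```

Flushing only the segments that actually accumulated writes, rather than the whole mapping table, is what keeps flush cost proportional to write activity.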
20210365369 | STORAGE DEVICE AND OPERATING METHOD THEREOF - A memory controller controls an address such that the number of chips included in a memory device can be increased. The memory controller includes a flash translation layer configured to translate a logical block address received from a host into a physical block address, wherein the flash translation layer determines an addressing unit of at least one of a plurality of addresses in the physical block address based on a request received from the host and a command controller configured to generate a command representing the addressing unit based on the request. | 2021-11-25 |
20210365370 | MEMORY FOR STORING DATA BLOCKS - A mechanism for storing data blocks in memory space. The memory space is divided into a plurality of memory buffers of two or more different predetermined sizes. Thus, the size and location of memory buffers (within the memory space) are pre-allocated. An index is generated that identifies a size and availability of each memory buffer in the divided memory space. Each index entry of the index corresponds or maps to a different memory buffer. A data block can be stored in the memory space by processing the index to identify a suitable memory buffer for storing the data block. | 2021-11-25 |
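The pre-divided buffer pool of 20210365370 amounts to an index of fixed-size buffers with an availability flag per entry. A toy version (the buffer sizes and the best-fit selection policy are illustrative assumptions):

```python
# Hedged sketch: memory space is pre-allocated into buffers of a few fixed
# sizes; an index records each buffer's size and availability, and storing a
# data block means scanning the index for a suitable free buffer.

def make_index(sizes):
    # each index entry maps to one pre-allocated buffer: [size, available]
    return [[size, True] for size in sizes]

def allocate(index, block_size):
    """Best fit: the smallest available buffer that can hold the block."""
    candidates = [i for i, (size, free) in enumerate(index)
                  if free and size >= block_size]
    if not candidates:
        return None
    best = min(candidates, key=lambda i: index[i][0])
    index[best][1] = False          # mark the chosen buffer as in use
    return best

index = make_index([64, 256, 64, 1024])
print(allocate(index, 100))         # fits the 256-byte buffer (entry 1)
print(allocate(index, 50))          # fits the first 64-byte buffer (entry 0)
```

Since buffer locations and sizes are fixed up front, allocation is just an index scan with no runtime fragmentation of the memory space itself.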
20210365371 | A METHOD AND APPARATUS TO USE DRAM AS A CACHE FOR SLOW BYTE-ADDRESSIBLE MEMORY FOR EFFICIENT CLOUD APPLICATIONS - Various embodiments are generally directed to virtualized systems. A first guest memory page may be identified based at least in part on a number of accesses to a page table entry for the first guest memory page in a page table by an application executing in a virtual machine (VM) on the processor, the first guest memory page corresponding to a first byte-addressable memory. The execution of the VM and the application on the processor may be paused. The first guest memory page may be migrated to a target memory page in a second byte-addressable memory, the target memory page comprising one of a target host memory page and a target guest memory page, the second byte-addressable memory having an access speed faster than an access speed of the first byte-addressable memory. | 2021-11-25 |
20210365372 | MEMORY CONTROLLER AND METHOD OF OPERATING THE SAME - An electronic device includes a memory controller having an improved operation speed. The memory controller includes a main memory, a processor configured to generate commands for accessing data stored in the main memory, a scheduler configured to store the commands and output the commands according to a preset criterion, a cache memory configured to cache and store data accessed by the processor among the data stored in the main memory, and a hazard filter configured to store information on an address of the main memory corresponding to a write command among the commands, provide a pre-completion response for the write command to the scheduler upon receiving the write command, and provide the write command to the main memory. | 2021-11-25 |
20210365373 | Control of Cache Data - A machine-implemented method for controlling transfer of at least one data item from a data cache component, in communication with storage using at least one relatively higher-latency path and at least one relatively lower-latency path, comprises: receiving metadata defining at least a first characteristic of data selected for inspection; responsive to the metadata, seeking a match between said at least first characteristic and a second characteristic of at least one of a plurality of data items in the data cache component; selecting said at least one of the plurality of data items where the at least one of the plurality of data items has the second characteristic matching the first characteristic; and passing the selected one of the plurality of data items from the data cache component using the relatively lower-latency path. | 2021-11-25 |
20210365374 | ALIASED MODE FOR CACHE CONTROLLER - An apparatus includes first and second CPU cores, an L1 cache subsystem coupled to the first CPU core and comprising an L1 controller, and an L2 cache subsystem coupled to the L1 cache subsystem and to the second CPU core. The L2 cache subsystem includes an L2 memory and an L2 controller configured to operate in an aliased mode in response to a value in a memory map control register being asserted. In the aliased mode, the L2 controller receives a first request from the first CPU core directed to a virtual address in the L2 memory, receives a second request from the second CPU core directed to the virtual address in the L2 memory, directs the first request to a physical address A in the L2 memory, and directs the second request to a physical address B in the L2 memory. | 2021-11-25 |
20210365375 | APPARATUSES AND METHODS TO PERFORM CONTINUOUS READ OPERATIONS - Apparatuses, systems, and methods to perform continuous read operations are described. A system configured to perform such continuous read operations enables improved access to and processing of data for performance of associated functions. For instance, one apparatus described herein includes a memory device having an array that includes a plurality of pages of memory cells. The memory device includes a page buffer coupled to the array and a continuous read buffer. The continuous read buffer includes a first cache to receive a first segment of data values and a second cache to receive a second segment of the data values from the page buffer. The memory device is configured to perform a continuous read operation on the first and second segments of data from the first cache and the second cache of the continuous read buffer. | 2021-11-25 |
20210365376 | Adaptive Cache - Described apparatuses and methods form adaptive cache lines having a configurable capacity from hardware cache lines having a fixed capacity. The adaptive cache lines can be formed in accordance with a programmable cache-line parameter. The programmable cache-line parameter can specify a capacity for the adaptive cache lines. The adaptive cache lines may be formed by combining respective groups of fixed-capacity hardware cache lines. The quantity of fixed-capacity hardware cache lines included in respective adaptive cache lines may be based on the programmable cache-line parameter. The programmable cache-line parameter can be selected in accordance with characteristics of the cache workload. | 2021-11-25 |
20210365377 | SYSTEM, METHOD, AND APPARATUS FOR ENHANCED POINTER IDENTIFICATION AND PREFETCHING - System and method for prefetching pointer-referenced data. A method embodiment includes: tracking a plurality of load instructions which includes a first load instruction to access a first data that identifies a first memory location; detecting a second load instruction which accesses a second memory location for a second data, the second memory location matching the first memory location identified by the first data; responsive to the detecting, updating a list of pointer load instructions to include information identifying the first load instruction as a pointer load instruction; prefetching a third data for a third load instruction prior to executing the third load instruction; identifying the third load instruction as a pointer load instruction based on information from the list of pointer load instructions and responsively prefetching a fourth data from a fourth memory location, wherein the fourth memory location is identified by the third data. | 2021-11-25 |
20210365378 | METHOD OF CACHE PREFETCHING THAT INCREASES THE HIT RATE OF A NEXT FASTER CACHE - The size of a cache is modestly increased so that a short pointer to a predicted next memory address in the same cache is added to each cache line in the cache. In response to a cache hit, the predicted next memory address identified by the short pointer in the cache line of the hit along with an associated entry are pushed to a next faster cache when a valid short pointer to the predicted next memory address is present in the cache line of the hit. | 2021-11-25 |
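The mechanism in 20210365378 — each cache line carries a short pointer to a predicted next address in the same cache, and a hit pushes the pointed-to entry into the next faster cache — models cleanly in a few lines. The line layout and the push policy below are assumptions made for the sketch:

```python
# Hedged sketch: slow-cache entries store (data, short pointer to predicted
# next address). On a slow-cache hit with a valid pointer, the predicted next
# entry is promoted into the faster cache before the hit's data is returned,
# raising the faster cache's hit rate on the next access.

def lookup(slow_cache, fast_cache, addr):
    """Return the data for addr, promoting the predicted next line upward."""
    if addr in fast_cache:
        return fast_cache[addr]
    data, next_ptr = slow_cache[addr]
    if next_ptr is not None and next_ptr in slow_cache:
        # valid short pointer: push the predicted next entry to the fast cache
        fast_cache[next_ptr] = slow_cache[next_ptr][0]
    return data

# slow cache entries: addr -> (data, predicted-next-address pointer)
slow = {0x10: ("A", 0x20), 0x20: ("B", None)}
fast = {}
print(lookup(slow, fast, 0x10))     # hit on 0x10 prefetches 0x20 into `fast`
print(0x20 in fast)
```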
20210365379 | Method and Apparatus for Cache Slot Allocation Based on Data Origination Location or Final Data Destination Location - Operational information in a storage system is collected regarding storage media storage tiers, devices, drives, tracks on drives, and logical storage layers, to determine an estimated amount of time it will take to write data from cache to the intended drive when a new write operation arrives at the storage system. This information is then used to decide which type of cache is most optimal to store the data for the write operation, based on the estimated amount of time it will take to write data out from the cache. By allocating cache slots from a faster cache to write operations that are expected to quickly be written out to memory, and allocating cache slots from the slower cache to write operations that are expected to take more time to be written out to memory, it is possible to increase the availability of the cache slots in the faster cache. | 2021-11-25 |
20210365380 | Embedding Data in Address Streams - Techniques and devices are described for embedding data in an address stream on an interconnect, such as a memory bus. Addresses in an address stream indicate at least part of a location in memory (e.g., a memory page and offset), whereas data embedded in the address stream can indicate when metadata or other information is available to lend context to the addresses in the address stream. The indication of data in the address stream can be communicated using, for example, a mailbox, a preamble message in a messaging protocol, a checksum, repetitive transmission, or combinations thereof. The indication of data can be recorded from the address stream and may later be used to interpret memory traces recorded during a test or can be used to communicate with a memory device or other recipient of the data during testing or regular operations. | 2021-11-25 |
20210365381 | MICROPROCESSOR ARCHITECTURE HAVING ALTERNATIVE MEMORY ACCESS PATHS - The present invention is directed to a system and method which employ two memory access paths: 1) a cache-access path in which block data is fetched from main memory for loading to a cache, and 2) a direct-access path in which individually-addressed data is fetched from main memory. The system may comprise one or more processor cores that utilize the cache-access path for accessing data. The system may further comprise at least one heterogeneous functional unit that is operable to utilize the direct-access path for accessing data. In certain embodiments, the one or more processor cores, cache, and the at least one heterogeneous functional unit may be included on a common semiconductor die (e.g., as part of an integrated circuit). Embodiments of the present invention enable improved system performance by selectively employing the cache-access path for certain instructions while selectively employing the direct-access path for other instructions. | 2021-11-25 |
20210365382 | MEMORY SYSTEM, MEMORY CONTROLLER, AND OPERATION METHOD THEREOF - The memory system is provided to include a memory device, and a memory controller configured to control the memory device. The memory controller is configured to transmit, after the host completes a first initial setting operation for the memory system, mapping information between a logical address and a physical address to a host in order to load the mapping information into a host memory area located in the host, and to transmit to the host, before the host executes a second initial setting operation for the memory system, updated mapping information between the logical address and the physical address, based on a change made to the host memory area. | 2021-11-25 |
20210365383 | CONTENT ADDRESSABLE MEMORY (CAM) ADDRESSING - Apparatuses, systems, and methods for mapping a virtual address using a CAM are described. A parallel structure of a CAM can enable functions of a MMU to be integrated into a single operation performed using the CAM such that a virtual address of a memory array can be mapped directly to a row of a memory. An example method includes receiving an access command and address information for a memory array; identifying a virtual address and a physical address of the memory array based on the received address information; comparing, during a time period associated with the access command, the virtual address and the physical address to virtual addresses and physical addresses, respectively, of the memory array stored in a CAM; and accessing, during the time period, a row of the memory array coupled to a row of the CAM storing the virtual address and the physical address. | 2021-11-25 |
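The single-operation CAM lookup described in 20210365383 compares the search key against every stored row at once; in hardware this happens in parallel, which a software stand-in can only approximate with a scan. Row contents below are illustrative:

```python
# Hedged software stand-in for a CAM lookup: each stored row holds a
# (virtual address, physical address) pair, and a match on either field
# identifies the row — and hence the coupled memory row — to activate.

def cam_search(cam_rows, virtual=None, physical=None):
    """Return indices of CAM rows matching the virtual or physical address."""
    return [i for i, (va, pa) in enumerate(cam_rows)
            if (virtual is not None and va == virtual)
            or (physical is not None and pa == physical)]

cam = [(0x1000, 0x1), (0x2000, 0x2), (0x3000, 0x3)]
print(cam_search(cam, virtual=0x2000))    # match by virtual address
print(cam_search(cam, physical=0x3))      # match by physical address
```

Because the row index itself selects the coupled memory row, the translation and the row activation collapse into one step, which is the MMU-integration point the abstract describes.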
20210365384 | COMBINED PAGE FOOTER FOR PARALLEL METADATA STORAGE - Apparatus and method for managing metadata in a data storage device such as a solid-state drive (SSD). The metadata are stored in combined (combo) pages in a non-volatile memory (NVM) each having first and second level map entries. The second level map entries provide a logical-to-physical address translation layer for user data blocks stored to the NVM, and the first level map entries describe the second level map entries in the combo page. A global map structure is accessed to identify a selected combo page in the NVM associated with a pending access command. The first and second level map entries are retrieved from the combo page, and the second level map entries are used to identify a target location for the transfer of user data blocks to or from the NVM. | 2021-11-25 |
20210365385 | METHODS OF MEMORY ADDRESS VERIFICATION AND MEMORY DEVICES EMPLOYING THE SAME - A memory device and methods for operating the same are provided. The memory device includes an array of memory cells, a non-volatile memory, and a controller. The controller is configured to receive a read command to read a data word from an address of the array and decode the address to generate a decoded address. The controller is further configured to retrieve response data from the decoded address of the array, retrieve a location indicia corresponding to the decoded address from the non-volatile memory, and verify that the location indicia corresponds to the address. The controller can optionally be further configured to indicate an error if the location indicia does not correspond to the address. | 2021-11-25 |
20210365386 | HANDLING ADDRESS TRANSLATION REQUESTS - A memory management unit comprises an interface for receiving an address translation request from a device, the address translation request specifying a virtual address to be translated. Translation circuitry translates the virtual address into an intermediate address different from a physical address directly specifying a memory location. The interface provides an address translation response specifying the intermediate address to the device in response to the address translation request. This improves security by avoiding exposure of physical addresses to the device. | 2021-11-25 |
20210365387 | Pseudo-First In, First Out (FIFO) Tag Line Replacement - A method is provided that includes searching tags in a tag group comprised in a tagged memory system for an available tag line during a clock cycle, wherein the tagged memory system includes a plurality of tag lines having respective tags and wherein the tags are divided into a plurality of non-overlapping tag groups, and searching tags in a next tag group of the plurality of tag groups for an available tag line during a next clock cycle when the searching in the tag group does not find an available tag line. | 2021-11-25 |
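The grouped search of 20210365387 — one non-overlapping tag group examined per clock cycle, advancing to the next group only when no available tag line is found — can be sketched directly. The group size and the free/in-use encoding are assumptions:

```python
# Hedged sketch of pseudo-FIFO tag-line replacement: the tags are split into
# non-overlapping groups; each clock cycle searches one group, moving to the
# next group only if the current one has no available tag line.

def find_available(tag_groups, start_group=0):
    """Return (cycles_used, group, index) of the first free tag line, or None."""
    n = len(tag_groups)
    for cycle in range(n):                       # one group per clock cycle
        group = (start_group + cycle) % n
        for index, in_use in enumerate(tag_groups[group]):
            if not in_use:
                return cycle + 1, group, index
    return None                                  # every tag line is occupied

groups = [[True, True], [True, False], [False, True]]
print(find_available(groups))   # group 0 is full, so cycle 2 finds group 1
```

Bounding each cycle's search to one group keeps the per-cycle comparator cost fixed regardless of how many tag lines the memory system has in total.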
20210365388 | METHOD OF ENCRYPTING DATA IN NONVOLATILE MEMORY DEVICE, NONVOLATILE MEMORY DEVICE AND USER DEVICE - A method of encrypting data in a nonvolatile memory device (NVM) includes: programming data in selected memory cells, sensing the selected memory cells at a first time during a develop period to provide random data, sensing the selected memory cells at a second time during the develop period to provide main data, encrypting the main data using the random data to generate encrypted main data, and outputting the encrypted main data to an external circuit, wherein the randomness of the random data is based on a threshold voltage distribution of the selected memory cells. | 2021-11-25 |
20210365389 | EXECUTION SPACE AGNOSTIC DEVICE DRIVERS - Embodiments described herein provide techniques to manage drivers in a user space in a data processing system. One embodiment provides a data processing system configured to perform operations comprising discovering a hardware device communicatively coupled to the communication bus, launching a user space driver daemon, establishing an inter-process communication (IPC) link between a first proxy interface for the user space driver daemon and a second proxy interface for a server process in a kernel space, receiving, at the first proxy interface, an access right to enable access to a memory buffer in the kernel space, and relaying an access request for the memory buffer from the user space driver daemon via a third-party proxy interface to enable the user space driver daemon to access the memory buffer, the access request based on the access right. | 2021-11-25 |
20210365390 | SYSTEM AND METHOD FOR MANAGING RESOURCES OF A STORAGE DEVICE AND QUANTIFYING THE COST OF I/O REQUESTS - One embodiment facilitates measurement of a performance of a storage device. During operation, the system determines a normalized cost for an I/O request, wherein the normalized cost is independent of an access pattern and a type of the I/O request, wherein the normalized cost is indicated by a first number of virtual I/O operations consumed by the I/O request, and wherein a virtual I/O operation is used as a logical unit of cost associated with physical I/O operations. The system identifies a performance metric for the storage device by calculating a second number of virtual I/O operations per second which can be executed by the storage device. The system allocates incoming I/O requests to the storage device based on the performance metric, e.g., to satisfy a Quality of Service requirement, thereby causing an enhanced measurement of the performance of the storage device. | 2021-11-25 |
20210365391 | MEMORY SUB-SYSTEM INCLUDING AN IN PACKAGE SEQUENCER SEPARATE FROM A CONTROLLER - An instruction can be received at a sequencer from a controller. The sequencer can be in a package including the sequencer and one or more memory components. The sequencer is operatively coupled to a controller that is separate from the package. A processing device of the sequencer can perform an operation based on the instruction on at least one of the one or more memory components in the package. | 2021-11-25 |
20210365392 | System on Chip, Access Command Routing Method, and Terminal - A system on chip, an access command routing method, and a terminal are disclosed. The system on chip includes an IP core and a bus. The IP core is configured to: obtain, based on an access address corresponding to an access command, an address range configuration identifier corresponding to the access address; and transmit the access command and the address range configuration identifier to the bus, where the address range configuration identifier is used by the bus to route the access command. The bus is configured to route the access command to a system cache or an external memory based on the address range configuration identifier. | 2021-11-25 |
20210365393 | MEMORY CONTROLLER, MEMORY SYSTEM, AND CONTROL METHOD OF MEMORY SYSTEM - A memory controller includes a host interface circuit connectable to a host device by a bus conforming to a memory card system specification, a data buffer circuit including a buffer memory, a tag information generation circuit configured to generate tag information associated with a command received by the host interface circuit, and a first register in which the tag information generated by the tag information generation circuit is stored, and a second register into which the tag information stored in the first register is copied after the command is fetched from the host interface circuit for processing. When a read request is made from the host interface circuit to the data buffer circuit, the data buffer circuit returns read data stored in the buffer memory upon confirming that the tag information stored in the first register and the tag information stored in the second register match each other. | 2021-11-25 |
20210365394 | DATA PROCESSING METHOD AND DEVICE - A data processing method includes sending, by a network interface card of a first device, a request packet to a second device. The request packet is used to request to read data in a destination storage area of the second device. The network interface card receives a response packet that is sent by the second device in response to the request packet. The response packet includes the data. The network interface card initiates, based on the response packet, direct memory access to a storage address to write the data into a memory area to which the storage address points. The data does not need to be cached in a memory of the network interface card. Bandwidth resource usage and storage space usage of the memory of the network interface card can be reduced. | 2021-11-25 |
20210365395 | DIRECT MEMORY ACCESS (DMA) COMMANDS FOR NONCONTIGUOUS SOURCE AND DESTINATION MEMORY ADDRESSES - A processing device, operatively coupled with a plurality of memory devices, is configured to receive a DMA command for a plurality of data sectors to be moved from a source memory region to a destination memory region, the destination memory region comprises a plurality of noncontiguous memory addresses and the DMA command comprises a destination value referencing the plurality of noncontiguous memory addresses. The processing device further retrieves the plurality of noncontiguous memory addresses from a location identified by the destination value. The processing device then reads the plurality of data sectors from the source memory region. The processing device also performs, for each respective data sector of the plurality of data sectors associated with the DMA command, a write operation to write the respective data sector into a corresponding respective noncontiguous memory address from the plurality of noncontiguous memory addresses of the destination memory region. | 2021-11-25 |
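The scatter-style DMA command of 20210365396's neighbor above (20210365395) can be modeled in a few lines: the destination value identifies a table of noncontiguous addresses, one per sector. The memory layout, sector size, and function name below are illustrative assumptions, not the patented design.

```python
SECTOR = 4  # toy sector size in bytes

def dma_scatter(memory, src, n_sectors, dest_table_addr):
    """Move n_sectors from a contiguous source region to noncontiguous destinations."""
    # Retrieve the noncontiguous destination addresses from the location
    # identified by the destination value in the DMA command.
    dests = [memory["tables"][dest_table_addr][i] for i in range(n_sectors)]
    # Read each sector from the source region, then write it to its
    # corresponding noncontiguous destination address.
    for i, dst in enumerate(dests):
        sector = memory["ram"][src + i * SECTOR : src + (i + 1) * SECTOR]
        memory["ram"][dst : dst + SECTOR] = sector
```

A single command thus replaces one DMA descriptor per destination address, which is the efficiency the abstract is after.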
20210365396 | SEMICONDUCTOR DEVICE AND METHOD FOR PROTECTING BUS - A semiconductor device includes a master interface, a protocol conversion unit, and a slave interface. The master interface generates copy data by copying the first data, and generates an error detection code based on the copy data. The protocol conversion unit generates the second data by converting the first data from the first protocol to the second protocol. The slave interface detects errors in the copy data based on the error detection code. The slave interface also generates the first verification data by performing a conversion from one of the first protocol or the second protocol to the other for one of the second data or the copy data. In addition, the slave interface compares the second verification data with the first verification data, using the other of the second data or the copy data as the second verification data. | 2021-11-25 |
20210365397 | STORAGE APPARATUS - A storage apparatus in which a controller reads out a port status of a switch in a short period of time is disclosed. The storage apparatus includes a switch and a plurality of storage controllers configured to communicate with each other through the switch. The switch includes a switch processor, a plurality of data ports, a switch integrated circuit, a memory, and a management interface. One of the plurality of data ports and a management port of the management interface are connected to each other. The switch processor stores a status of the plurality of data ports acquired from the switch integrated circuit into the memory. The plurality of storage controllers access the management interface through the plurality of data ports and the management port and receive the statuses of the plurality of data ports stored in the memory from the management interface. | 2021-11-25 |
20210365398 | ADAPTER CARDS FOR DISCRETE GRAPHICS CARD SLOTS - In one example, an adapter card may include a circuit board having a male interface to be inserted into a discrete graphics card slot and a peripheral component interconnect express (PCIe) slot to communicatively couple a PCIe device. Further, the adapter card may include a voltage converter circuit disposed on the circuit board to convert a first voltage associated with the discrete graphics card slot to a second voltage corresponding to the PCIe device and a level shifter circuit disposed on the circuit board to modify a signal level in the discrete graphics card slot to a signal level in the PCIe device. | 2021-11-25 |
20210365399 | DATA LINK CHANGES BASED ON REQUESTS - An electronic device includes a transmit buffer, a receive buffer, a communication port, and a controller. The controller is to: communicate with a target device via a data link established via the communication port; determine a throughput ratio between the transmit buffer and the receive buffer; in response to a determination that the throughput ratio is above a threshold, transmit a request to the target device to change an aspect of the data link, where the request includes a payload size indicating an amount of data to be transmitted from the electronic device to the target device; and in response to receiving a grant message associated with the request, increase an amount of transmit lanes within the data link from the electronic device to the target device. | 2021-11-25 |
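The request/grant flow in 20210365399 reduces to a small decision rule: compare transmit and receive throughput, and request more transmit lanes when the ratio crosses a threshold. The threshold value, function names, and the callback-based request below are illustrative assumptions.

```python
THRESHOLD = 2.0  # illustrative throughput-ratio trigger

def maybe_request_more_lanes(tx_bytes, rx_bytes, tx_lanes, send_request):
    """Return the transmit-lane count after an optional request/grant exchange."""
    ratio = tx_bytes / max(rx_bytes, 1)   # transmit/receive throughput ratio
    if ratio > THRESHOLD:
        # The request includes a payload size indicating the amount of data
        # to be transmitted to the target device.
        granted = send_request(payload_size=tx_bytes)
        if granted:                        # grant message received
            return tx_lanes + 1            # increase transmit lanes
    return tx_lanes
```

The target device stays in control: lanes change only after it returns a grant message for the request.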
20210365400 | ADAPTOR DEVICE - An adaptor device including a first interface, a second interface, a negotiation circuit and a type C manager and controller is provided. The first interface is a universal serial bus (USB) 2.0 interface, and the second interface is a type C USB interface. When the first interface receives a first mode swap request, the type C manager and controller transmits a first mode swap signal in a type C format through the second interface according to the first mode swap request; when the second interface receives a second mode swap request, the negotiation circuit transmits a second mode swap signal in a USB 2.0 format through the first interface according to the second mode swap request. | 2021-11-25 |
20210365401 | Device Process Scheduling - A device for contactless communication with a terminal, comprising: an antenna for communication with the terminal; an embedded chip configured to communicate with the terminal in accordance with a contactless transmission protocol whereby a message sent by the terminal sets a specified initial waiting time for a response from the embedded chip to maintain a connection with the terminal, the embedded chip being configured to communicate requests to the terminal to extend the waiting time for response; and a module configured to perform processing formed of a plurality of discrete operations, the module being configured to, in response to completing a subset of one or more discrete operations within a waiting time interval set by the terminal, send a first type of command to the embedded chip if the processing is not complete; wherein the embedded chip is further configured to, in response to receiving the first type of command, communicate a request to the terminal to extend the waiting time for response. | 2021-11-25 |
20210365402 | COMPUTING EFFICIENT CROSS CHANNEL OPERATIONS IN PARALLEL COMPUTING MACHINES USING SYSTOLIC ARRAYS - An apparatus to facilitate computing efficient cross channel operations in parallel computing machines using systolic arrays is disclosed. The apparatus includes a plurality of registers and one or more processing elements communicably coupled to the plurality of registers. The one or more processing elements include a systolic array circuit to perform cross-channel operations on source data received from a single source register of the plurality of registers, the systolic array circuit modified to receive inputs from the single source register and route elements of the single source register to multiple channels in the systolic array circuit. | 2021-11-25 |
20210365403 | Event Messaging in a System Having a Self-Scheduling Processor and a Hybrid Threading Fabric - Representative apparatus, method, and system embodiments are disclosed for a self-scheduling processor which also provides additional functionality. Representative embodiments include a self-scheduling processor, comprising: a processor core adapted to execute a received instruction; and a core control circuit adapted to automatically schedule an instruction for execution by the processor core in response to a received work descriptor data packet. In another embodiment, the core control circuit is also adapted to schedule a fiber create instruction for execution by the processor core, to reserve a predetermined amount of memory space in a thread control memory to store return arguments, and to generate one or more work descriptor data packets to another processor or hybrid threading fabric circuit for execution of a corresponding plurality of execution threads. Event processing, data path management, system calls, memory requests, and other new instructions are also disclosed. | 2021-11-25 |
20210365404 | FILE SYSTEM CONTENT ARCHIVING BASED ON THIRD-PARTY APPLICATION ARCHIVING RULES AND METADATA - An information management system according to certain aspects for archiving file system content may include a third-party application archiving data agent configured to: access third-party application archiving rules for archiving data to one or more secondary storage devices, wherein the third-party application archiving rules are defined by a third-party application to archive files associated with the third-party application; access third-party metadata associated with a plurality of files in a file system, wherein the plurality of files is associated with the third-party application and the third-party metadata is defined by the third-party application; determine whether to archive one or more files of the plurality of files based at least in part on the third-party application archiving rules and the third-party metadata; and in response to determining that a first file of the plurality of files should be archived, archive the first file to the one or more secondary storage devices. | 2021-11-25 |
20210365405 | DATABASE-LEVEL CONTAINER GROUP MANAGEMENT - A container group is created using a database deployment infrastructure (DI) administrator (HA). API privileges for the container group are granted, using the HA, to a container group administrator (GA). API privileges for a container created in the container group using the GA are granted, using the GA, to a container administrator (CA). API privileges for the container are granted, using the CA, to a container developer (CD). Schema privileges for the container are granted, using the CA, to a container consumer (CC). API privileges for the container group are revoked, using the HA, from the GA. The container group is dropped using the HA. | 2021-11-25 |
20210365406 | METHOD AND APPARATUS FOR PROCESSING SNAPSHOT, DEVICE, MEDIUM AND PRODUCT - The present disclosure discloses a method and apparatus for processing a snapshot, a device, a medium and a product, and relates to the field of computers, particularly to the field of cloud computing technology, distributed storage technology or cloud storage technology. An implementation of the method may include: acquiring meta information of snapshots; acquiring, for each snapshot, a sub-data list of the snapshot based on the meta information of the snapshot; determining target sub-data in the acquired sub-data list according to a data determination mode corresponding to a snapshot level of the snapshot; and deleting the target sub-data. | 2021-11-25 |
20210365407 | INCREMENTALLY IMPROVING CLUSTERING OF CROSS PARTITION DATA IN A DISTRIBUTED DATA SYSTEM - Methods and systems are provided for improved access to rows of data in a distributed data system. Each data row is associated with a partition. Data rows are distributed in one or more files and an impure file includes data rows associated with multiple partitions. A clustering set is generated from a plurality of impure files by selecting a candidate impure file based on file access activity metrics and one or more neighbor impure files. Data rows of the impure files included in the clustering set are sorted according to their respective associated partitions. A set of disjoint partition range files are generated based on the sorted data rows of the impure files included in the clustering set. Each file of the set of disjoint partition range files is transferred to a respective target partition. | 2021-11-25 |
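The reclustering pass of 20210365407 can be sketched with an impure file modeled as a list of (partition, row) pairs. Selecting the most-accessed file as the candidate, treating all remaining impure files as its neighbors, and the function names are simplifying assumptions for illustration.

```python
from collections import defaultdict

def recluster(impure_files, activity):
    """Turn a clustering set of impure files into disjoint per-partition files."""
    # Select the candidate impure file by file access activity metric, then
    # form the clustering set with its neighbor impure files (here: all others).
    candidate = max(range(len(impure_files)), key=lambda i: activity[i])
    clustering_set = [impure_files[candidate]] + [
        f for i, f in enumerate(impure_files) if i != candidate]
    # Sort all data rows in the clustering set by their associated partition.
    rows = sorted((r for f in clustering_set for r in f), key=lambda r: r[0])
    # Emit one disjoint partition-range file per partition, ready to be
    # transferred to its target partition.
    out = defaultdict(list)
    for part, row in rows:
        out[part].append(row)
    return dict(out)
```

After the pass, each output file holds rows for exactly one partition, so later reads touch fewer files.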
20210365408 | METHODS AND SYSTEMS FOR DEPICTION OF PROJECT DATA VIA TRANSMOGRIFICATION USING FRACTAL-BASED STRUCTURES - In a system for efficiently organizing, storing, accessing, and analyzing project data and for visualizing project progress, for a specified project, a reference fractal-based structure is selected based on, at least in part, the type of the specified project and/or a mapping between project types and reference fractal-based structures. The project files are organized and stored in a file structure that corresponds to the selected reference fractal-based structure, so that the file structure can be transmogrified and displayed as a viewable fractal-based structure that can indicate progress of different tasks and subtasks of the project based, in part, on the status of the tasks and subtasks that is derived from the project files. | 2021-11-25 |
20210365409 | DELETING ORPHAN ARCHIVED FILES FROM STORAGE ARRAY USING A TIME-BASED DECISION ALGORITHM - Methods, apparatus, and processor-readable storage media for deleting orphan archived files from a storage array using a time-based decision algorithm are provided herein. An example computer-implemented method includes traversing a database of a local storage system to identify a record associated with a stub file, wherein the record is indicative of a time of a client operation, involving the stub file, on a file system of the local storage system; identifying a particular snapshot in a set of available snapshots of the file system; and providing an indication to a cloud storage platform to delete a cloud object corresponding to the stub file in response to determining that the time of the client operation occurred earlier than a snapshot time associated with the particular snapshot in the set. | 2021-11-25 |
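The time-based decision in 20210365409 boils down to one comparison. The sketch below assumes the "particular snapshot" is the oldest retained snapshot, which is one plausible reading of the abstract; the function name and time representation are illustrative.

```python
def should_delete_cloud_object(operation_time, snapshot_times):
    """Decide whether a stub file's cloud object is a deletable orphan."""
    if not snapshot_times:
        return False                       # no snapshots to compare against
    # Identify a particular snapshot in the set of available snapshots
    # (assumed here: the oldest one still retained).
    earliest_snapshot = min(snapshot_times)
    # Delete only if the client operation on the stub occurred earlier than
    # that snapshot's time, so no retained snapshot can still reference it.
    return operation_time < earliest_snapshot
```

The safety property is that an object is only deleted when every retained snapshot postdates the operation that made it an orphan.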
20210365410 | MULTIDIRECTIONAL SYNCHRONIZATION OF CLOUD APPLICATION DATA - Embodiments of the disclosure relate to systems and methods for multi-directional synchronization of application data between application platforms. In various embodiments, application data is received from multiple application platforms. Once received, an object mapping is used to determine a synchronization data object that is mapped to data objects from the application data. A function mapping is used to determine one or more functions to apply to the application data. Data objects from the application data that are mapped to the synchronization data object are compared to detect conflicts between fields. In response to detecting a conflict, the conflict is resolved based on configuration rules that indicate priority between fields of different data objects. Once the conflict is resolved, the data objects are merged into a modified synchronization data object that represents an updated version of the synchronization data object. Changes between the synchronization data object and the modified synchronization data object are identified and propagated to application platforms to perform updates to application data managed by the application platforms. | 2021-11-25 |
20210365411 | ASYNCHRONOUS HOST FILE SYSTEM BASED DATA REPLICATION - A write operation storing data in a first storage device is duplicated to a first replication file. A set of differences between a first version of the first replication file determined at a first time and a second version of the first replication file determined at a second time is determined, the set of differences comprising a set of results of duplicated write operations occurring between the first time and the second time. At a second file system, storage of the set of differences in a second storage device is caused, creating a duplicate in the second storage device of the data stored in the first storage device. | 2021-11-25 |
20210365412 | SUPPORTING MULTIPLE OPERATIONS IN TRANSACTION LOGGING FOR A CLOUD ENABLED FILE SYSTEM - Methods, apparatus, and processor-readable storage media for supporting multiple operations in transaction logging for a cloud enabled file system are provided herein. An example computer-implemented method includes obtaining a plurality of file system operations to be performed on a cloud enabled file system; executing the plurality of file system operations as a single file system transaction; and maintaining a transaction log for the single transaction, the transaction log comprising information for one or more sub-transactions that were completed in conjunction with said executing, wherein the one or more sub-transactions correspond to at least a portion of the plurality of file system operations. | 2021-11-25 |
20210365413 | SYSTEMS AND METHODS FOR SEARCHING DEDUPLICATED DATA - A search term is received at deduplicated storage storing data segmented into segments. Segment fingerprints are generated and metadata maintained to allow reconstruction of the segmented data. The metadata includes fingerprint listings indicating sequences according to which the segments should be reconstructed. The segments are read to determine whether there are any matches of the search term. Matches are recorded in a results table. A first fingerprint listing associated with a first object is read. The results table is queried for fingerprints in the first fingerprint listing to determine whether the first object references any matches in the results table. | 2021-11-25 |
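The search flow of 20210365413 has a useful property worth making concrete: each unique segment is scanned once, and per-object answers come from walking fingerprint listings against a results table. The helper names and the use of SHA-256 as the fingerprint are assumptions, not the patented scheme.

```python
import hashlib

def fingerprint(segment: bytes) -> str:
    """Content fingerprint of a deduplicated segment (SHA-256 assumed here)."""
    return hashlib.sha256(segment).hexdigest()

def build_results_table(segments, term: bytes):
    """Read each unique segment once; record fingerprints of matching segments."""
    return {fingerprint(s) for s in segments if term in s}

def object_matches(fingerprint_listing, results_table) -> bool:
    """An object matches if its reconstruction sequence references any hit."""
    return any(fp in results_table for fp in fingerprint_listing)
```

Because many objects share segments, querying the results table per listing is far cheaper than reconstructing and re-scanning every object.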
20210365414 | DATA MASKING IN A MICROSERVICE ARCHITECTURE - A method includes: retrieving, with a masker controller job, an object and an associated object ID from a masking bucket that is defined in storage; making a copy of the object; masking, with a masker worker microservice, the copy of the object to create a masked object; transmitting the masked object to an object access microservice; transmitting, with the object access microservice, the masked object to a deduplication microservice; deduplicating, with the deduplication microservice, the masked object; and storing the masked object in the storage. | 2021-11-25 |
20210365415 | SCALING OUT DATA PROTECTION INFRASTRUCTURE - Embodiments for optimizing data storage instances in a cloud environment in which metadata is stored and accessed separately from content data in multiple different instances of data storage units. A metadata and content data storage instance optimization process determines the status of different instances of virtual storage resources for both metadata and content data. Full instances are powered down when they are not needed, empty instances are deleted, and data of partially full instances is moved to other appropriate instances to create empty instances that can be deleted. The data storage instance optimization process is provided as part of a data protection search process that provides an execution environment and user interface to the optimization process. | 2021-11-25 |
20210365416 | MOUNT PARAMETER IN FILE SYSTEMS - Aspects pertaining to a distributed file system are described. In an example, a system call is made by a server computing device on receiving a request, from a client device operating on a first computing architecture, to access a file. A mount parameter is received in response to the system call, wherein the mount parameter is to prescribe a computing architecture of a computing device to access the file system. Attribute information pertaining to the file to be accessed may thus be modified on determining the mount parameter to prescribe a first computing architecture for the client device, wherein the modified attribute information is of a bit-size conforming to the first computing architecture. | 2021-11-25 |
20210365417 | FILE MANAGEMENT DEVICE AND NON-TRANSITORY COMPUTER READABLE MEDIUM - A file management device includes a processor configured to: add specific information to a file to be managed, the specific information specifying the file to be managed which is stored in a file system, the specific information being managed in association with a storage location on the file system; and when the file downloaded to an outside of the file system is re-uploaded from an external device to the file system, acquire the specific information added to the file, and specify the storage location on the file system based on the specific information. | 2021-11-25 |
20210365418 | VIRTUAL FILE ORGANIZER - A virtual file organization system, method and program product are disclosed. Included is a system that assigns classification tags to files stored within a storage system based on a natural language processing (NLP) context analysis of each file; and a virtual smart folder that is viewable within a user interface, wherein: opening the virtual smart folder causes a set of virtual subfolders to be displayed in which each virtual subfolder includes a category title; opening of a virtual subfolder causes a set of files residing at disparate locations in the storage system to be displayed; and the files displayed by opening the virtual subfolder each include an assigned classification tag that is associated with the category title of the virtual subfolder. | 2021-11-25 |
20210365419 | AUTOMATIC SCHEMA UPGRADE IN A BIG DATA PIPELINE - Disclosed are embodiments for providing batch performance using a stream processor. In one embodiment, a method is disclosed comprising receiving an event that includes a plurality of fields and extracting needed fields from the plurality of fields. The method then serializes the plurality of fields and generates a new event that includes the set of needed fields and a hidden field, the value of the hidden field comprising the serialized fields. The method then transmits the new event for processing using at least one processing stage of a stream processor. In response, the method reserializes a processed event generated by the stream processor and outputs the reserialized event to a downstream consumer. | 2021-11-25 |
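The hidden-field technique in 20210365419 is easy to model: keep only the fields a stream stage needs, carry the full record through as one serialized blob, and reserialize on the way out. JSON serialization, the field names, and the `_raw` key below are illustrative assumptions.

```python
import json

NEEDED = {"user_id", "event_type"}  # fields the processing stages actually use

def wrap(event: dict) -> dict:
    """Build the new event: needed fields plus a hidden serialized field."""
    new_event = {k: event[k] for k in NEEDED if k in event}
    new_event["_raw"] = json.dumps(event, sort_keys=True)  # hidden field
    return new_event

def unwrap(processed: dict) -> dict:
    """Reserialize: restore the full record for the downstream consumer."""
    full = json.loads(processed.pop("_raw"))
    full.update(processed)   # apply any fields the stream stage modified
    return full
```

Fields the pipeline never touches survive schema changes untouched inside the hidden field, which is what lets the schema evolve without redeploying every stage.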
20210365420 | DATA CORRECTNESS OPTIMIZATION - The present invention relates to providing a ground truth dataset and optimizing data correctness using the ground truth dataset. A plurality of datasets is received from different data sources. The datasets comprise a plurality of data elements. Each data element includes an identifier and at least one attribute value associated with the identifier. Data correctness values are determined for the attribute values. A data correctness value is associated with a probability that an attribute value is correct. A data element with a single data correctness value is added to the ground truth dataset for each attribute value for each identifier with which a respective attribute value is associated based on the determined data correctness values for the attribute values such that the data correctness values in the ground truth dataset define probability distributions of data correctness for the attribute values. A new dataset can be received from a data source with the new dataset comprising a plurality of data elements and each data element including an identifier and at least one attribute value associated with the identifier. Data correctness values for the attribute values of the data elements of the new dataset can be determined based on the data correctness values for the attribute values of the data elements with known identifiers of the ground truth dataset. A known identifier is an identifier which is included in the ground truth dataset and in the new dataset. | 2021-11-25 |
20210365421 | DATA ANALYSIS METHOD, COMPUTER DEVICE AND STORAGE MEDIUM - A data analysis method is provided. The data analysis method obtains to-be-analyzed data, determines whether there is abnormal data in the to-be-analyzed data according to a first preset rule, and searches for an operation instruction corresponding to the abnormal data in a first database in response to determining that there is abnormal data in the to-be-analyzed data. The operation instruction is executed, an execution result of the operation instruction is output, and an abnormal cause of the abnormal data is determined according to the execution result. By utilizing the data analysis method, data analysis is performed more intelligently, quickly and accurately, the efficiency of data analysis can be improved, and the analyzed abnormal data can be classified for storage. A data analysis device and a computer device applying the method are also provided. | 2021-11-25 |
20210365422 | RAPID AND ROBUST PREDICATE EVALUATION - Various approaches for accelerating data access to a computer memory and predicate evaluation includes storing, in the computer memory, (i) base data as multiple base columns, (ii) multiple sketched columns each corresponding to a base column in the base data and having smaller code values compared thereto, and (iii) a compression map for mapping one or more base columns to the corresponding sketched column; applying the compression map to a query having a predicate; determining data on the sketched column that satisfies the predicate; and evaluating the predicate based at least in part on the determined data on the sketched column without accessing the base column in the base data. | 2021-11-25 |
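A minimal sketch of predicate evaluation on a sketched column, as in 20210365422: an order-preserving compression map assigns small codes to base values, and a range predicate is rewritten onto the code domain so the base column is never read. The dictionary-style map and function names are assumptions; the patent does not publish this implementation.

```python
def build_sketch(base_column):
    """Build an order-preserving compression map and the sketched column."""
    codes = {v: i for i, v in enumerate(sorted(set(base_column)))}
    sketched = [codes[v] for v in base_column]
    return codes, sketched

def rows_less_than(codes, sketched, literal):
    """Evaluate 'col < literal' by scanning only the small sketched codes."""
    # Apply the compression map to the predicate: the code bound is the
    # number of distinct base values below the literal.
    bound = sum(1 for v in codes if v < literal)
    # The base column in the base data is never accessed.
    return [i for i, c in enumerate(sketched) if c < bound]
```

Because codes are smaller than base values, the scan touches far fewer bytes; the base column is only consulted for predicates the sketch cannot decide.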
20210365423 | METHOD AND SYSTEM FOR LEXICAL DATA PROCESSING - There is disclosed a method and system to operate a software application entirely based on a unitary lexicon data structure (LDS) record comprising a plurality of data field definition blocks stored in memory, with one LDS record for each lexicon term. The LDS is used to develop computerized lexicons and deploy them for use to operate a lexical application with all data displayed for viewing and input by the user on a single screen to which all desired data items come, rather than the user navigating to fields statically located on a multitude of screens. Each LDS record contains a whole set of data in memory, with data duplicated across LDS records in order to bypass the need for the application to interoperate with a database to input and display related data. A graphical icon, also of a unitary format, is provided, into which all data is input and displayed. Input data items are related to one another in the icon, regardless of whether a relational database is configured to interoperate with the system. | 2021-11-25 |
20210365424 | DATA STORAGE USING VECTORS OF VECTORS - The systems and methods described here can reduce the storage space required (memory and/or disk) to store certain types of data, provide efficient (fast) creation, modification and retrieval of such data, and support such data within the framework of a multi-version database. In some embodiments, the systems and methods can store each field of a set of records as a vector of values, e.g., a data vector. A set of records can be represented using a vector id vector, or “vid” vector, wherein each element of the vid vector contains a reference to the memory location of a data vector. A header table can store associations between labels and “vid” vectors that pertain to those labels. Identical data vectors can be re-used between different record sets or vid vectors needing that vector, thus saving space. | 2021-11-25 |
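The vid-vector layout of 20210365424 can be modeled in a few lines: each field of a record set is one data vector, a record set is a vector of references to data vectors, and identical data vectors are shared rather than duplicated. The class name and the tuple-keyed interning pool are illustrative assumptions.

```python
class VectorStore:
    """Toy store sharing identical data vectors between record sets."""

    def __init__(self):
        self._pool = {}  # canonical data vectors, keyed by their contents

    def intern(self, values):
        """Re-use an identical existing data vector instead of copying it."""
        key = tuple(values)
        return self._pool.setdefault(key, list(values))

    def make_record_set(self, columns):
        """Build a 'vid' vector: one reference per field's data vector."""
        return [self.intern(col) for col in columns]
```

Two record sets that happen to contain an identical column end up referencing the same data vector object, which is where the space saving comes from.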
20210365425 | Multiple Dimension Layers for an Analysis Data System and Method - A system and method are presented that analyze evaluation data concerning a subject using attributes that are logically arranged in a geometric structure such as a rectangular array. A plurality of dimension layers is laid on top of the logical arrangement of data. Each dimension layer assigns values to a plurality of dimensions based on the value of neighboring attribute groups. Each dimension layer can be associated with one or more reporting configurations that contain descriptors for the defined dimensions as well as formatting instructions for report-like output. | 2021-11-25 |
20210365426 | Generating Compact Data Structures for Monitoring Data Processing Performance Across High Scale Network Infrastructures - A compact data structure generation engine can be used to generate a compact data structure that represents performance data for high-scale networks. The compact data structure representing the performance data can be used to monitor the operation performed on or by a computer system to identify potentially anomalous conditions. In response, a corrective action can be taken to address the issue. This can be useful, for example, in improving the efficiency, effectiveness, and reliability of the computer system during operation. | 2021-11-25 |
20210365427 | ADAPTIVE DATABASE COMPACTION - Adaptive database compaction technology automatically identifies cost-effective opportunities for database compaction. The adaptive compaction technology may maintain a baseline sleeve representing a performance indicator's normal range, track the current performance indicator values, and initiate compaction of a database when a compaction trigger based on at least the performance indicator occurs. The performance indicator may be a ratio of logical size to physical size, and may be based on samples from a proper subset of the database. Kernel overhead may be recognized. A low-fragmentation secondary replica may be selected, compacted, and promoted to replace the prior primary replica. Secure cloud blob storage may be used. A compaction decision may allow, delay, or even prevent compaction after the trigger is met. An automatic balance between computational costs of compaction and costs of continued database use without compaction is kept, and may be tunable by an administrator. | 2021-11-25 |
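The trigger logic above can be sketched in a few lines, assuming (as the abstract suggests) that the performance indicator is the ratio of logical size to physical size and that the baseline sleeve gives the indicator's normal lower bound; the threshold value here is illustrative.

```python
def should_compact(logical_bytes, physical_bytes, sleeve_low=0.6):
    """Propose compaction when fragmentation pushes the indicator under the sleeve."""
    if physical_bytes == 0:
        return False          # nothing allocated, nothing to compact
    ratio = logical_bytes / physical_bytes
    return ratio < sleeve_low
```

A fragmented database with 40 logical bytes spread over 100 physical bytes (ratio 0.4) would trigger, while a healthy 0.9 ratio would not; the abstract's compaction decision step could still delay or veto the action after the trigger fires.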
20210365428 | INTEGRATED DATA ANALYSIS - Systems and methods are provided for integrated data analysis. At least one object that is responsive to a first search query is determined. The object is stored in an object model that is managed by a first computing platform, and the at least one object is associated with one or more properties. One or more data sets that are responsive to a second search query are determined. The data sets are managed by a second computing platform. The one or more data sets are determined related to the at least one object. The at least one object is updated to include at least one property that references at least one analysis that relies on the one or more data sets. | 2021-11-25 |
20210365429 | OBJECT SHARING BY ENTITIES USING A DATA STRUCTURE - In some examples, a system provides a data structure containing an entry to store information for an object for sharing by a plurality of entities. The system allocates the object to a first entity of the plurality of entities based on an atomic access of the entry, the atomic access to update, in one operation, multiple information elements in the entry relating to allocation of the object. The system returns, to the first entity, a handle to the object, the handle based on a value in the entry. | 2021-11-25 |
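The allocation step above can be sketched as follows. Python has no multi-word atomic instruction, so a lock stands in for the single atomic access that updates several information elements of the entry together; the handle is derived from a value stored in the entry. All names are illustrative.

```python
import threading

class SharedObjectTable:
    def __init__(self, objects):
        self._lock = threading.Lock()
        # Each entry holds multiple fields that must change together:
        # the object, its current owner, and a generation value.
        self._entries = [
            {"object": obj, "owner": None, "generation": 0} for obj in objects
        ]

    def allocate(self, slot, entity):
        """Atomically claim an entry; return a handle, or None if already taken."""
        with self._lock:  # stands in for one atomic multi-field update
            entry = self._entries[slot]
            if entry["owner"] is not None:
                return None
            entry["owner"] = entity
            entry["generation"] += 1
            return (slot, entry["generation"])  # handle based on a value in the entry


table = SharedObjectTable(["obj-a", "obj-b"])
handle = table.allocate(0, "entity-1")
second = table.allocate(0, "entity-2")   # loses the race for slot 0
```

Including a generation value in the handle means a stale handle from a previous owner cannot be confused with the current allocation.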
20210365430 | TECHNIQUE FOR REPLICATION OF DATABASE - Disclosed is a computer program stored in a computer readable storage medium, which includes encoded commands, wherein executing the computer program by one or more processors of a computer system allows the one or more processors to perform steps for change data capture (CDC) between a source database and a target database. The steps may include: a resource data acquisition step of acquiring resource data which is data obtained by monitoring a current resource of a source database server; a comparison information generation step of generating comparison information by comparing the resource data and a predetermined source database server load threshold value; a mode determination step of determining an operation mode of a changed data capture process which operates in the source database server based on the comparison information, the operation mode including a union mode and a division mode; and a mode changing step of changing the operation mode of the process to a mode determined in the mode determination step. | 2021-11-25 |
20210365431 | FALSE SUBMISSION FILTER DEVICE, FALSE SUBMISSION FILTER SYSTEM, FALSE SUBMISSION FILTER METHOD, AND COMPUTER READABLE MEDIUM - In an SNS server ( | 2021-11-25 |
20210365432 | PARALLEL AUDIT CYCLES BETWEEN PRIMARY AND SECONDARY EVENT FEEDS - Disclosed are embodiments for providing batch performance using a stream processor. In one embodiment, a method is disclosed comprising completing a first audit for a primary event type, the first audit generating a set of primary events and completing a second audit for a secondary event type, the second audit generating a draft set of secondary events and an auxiliary feed of un-joined secondary events. The method then performs a join audit check on the auxiliary feed of un-joined secondary events and a set of flags, each flag in the set of flags indicating that a respective un-joined secondary event was properly joined. Based on the results of the join audit check, the method replays a subset of the un-joined secondary events in the auxiliary feed upon determining that the join audit check failed. | 2021-11-25 |
20210365433 | METHOD AND APPARATUS FOR MANAGING DATA BASED ON BLOCKCHAIN - A blockchain based data management method performed by a computing device according to an embodiment of the present disclosure includes recording a deletion event for off-chain data and time information of the deletion event in a blockchain network, and selectively deleting the off-chain data based on a validity verification result calculated for the time information, wherein the validity verification result is calculated using a time-consensus algorithm of the blockchain network. | 2021-11-25 |
20210365434 | APPARATUS AND METHOD FOR PROVIDING SENSOR DATA BASED ON BLOCKCHAIN - Disclosed herein are an apparatus and method for providing sensor data in a sensor device based on a blockchain. A method for providing sensor data in a sensor device based on a blockchain may include creating a device record using encrypted device identification information, registering the device record in the blockchain, creating an event record using event information collected from a sensor, registering the header of the event record, including information about a link to the device record, in the blockchain, and distributing the body of the event record, the body being linked to the header of the event record. | 2021-11-25 |
20210365435 | ASYNCHRONOUS DATABASE SESSION UPDATE - Systems and methods for handling database transactions within a database session. A first client request to update a first data piece of a database session is received. A first response to the first client request indicates an update of the first data piece in accordance with the first request and publishes the update to enable further processing of the updated first data piece. An indication is received that the updated first data piece is to be further updated and/or a second data piece of the database session is to be updated. A second client request is received to update a third data piece of the database session and a second response to the second client request indicates an update of the third data piece in accordance with the second request and an update of the updated first data piece in accordance with the indication. | 2021-11-25 |
20210365436 | CLIENT WRITE EVENT SEQUENCING SYSTEM IN A FEDERATED DATABASE PLATFORM - Various embodiments are directed to a federated network and database platform that is configured to sequence client write events occurring among several autonomous software applications. The federated network and database platform includes a client event sequencing server that is configured to receive a migration corpus of client write events from at least one software application server and assign a back-date time stamp to each client write event of the migration corpus. Upon receiving a new client write event associated with the software application, the client event sequencing server is configured to assign a current time stamp to the new client write event and store the new client write event to a client event sequencing database in a manner that positions the new client write event relative to the back-dated migration corpus of client write events. | 2021-11-25 |
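The sequencing idea above can be sketched simply: each migrated client write event receives a back-dated timestamp that preserves the corpus's original order and falls before the migration cutoff, so a new event stamped with the current time sorts after the entire corpus. The timestamp arithmetic here is an illustrative assumption.

```python
def backdate_corpus(events, cutoff):
    """Assign back-dated timestamps that end just before the cutoff."""
    n = len(events)
    return [(cutoff - (n - i), event) for i, event in enumerate(events)]

def append_new_event(sequenced, event, now):
    """Stamp a new write event with the current time and keep order by timestamp."""
    sequenced.append((now, event))
    sequenced.sort(key=lambda pair: pair[0])
    return sequenced


cutoff = 1_000
seq = backdate_corpus(["w1", "w2", "w3"], cutoff)
seq = append_new_event(seq, "w4", now=1_005)
order = [event for _, event in seq]
```

Because every back-dated stamp is strictly below the cutoff, new events are positioned relative to the migrated corpus without ever interleaving into it.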
20210365437 | ACCOUNT-LEVEL NAMESPACES FOR DATABASE PLATFORMS - A database platform receives an object identifier from a client in association with a database session. The client is associated with a customer account of the database platform, and the database session is associated with the client. In response to receiving the object identifier, the database platform identifies a resolution namespace for the object identifier, where the resolution namespace for the object identifier is a namespace that is specified in the object identifier if the object identifier includes a specified namespace, and where the resolution namespace is otherwise a current account-level namespace of the database session. The database platform resolves the object identifier with reference to the identified resolution namespace for the object identifier, including identifying an object corresponding to the object identifier in the customer account. | 2021-11-25 |
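The resolution rule above reduces to a small function: an explicitly specified namespace in the object identifier wins; otherwise the session's current account-level namespace applies. The dotted `namespace.object` syntax is an assumption for illustration.

```python
def resolve(object_identifier, session_namespace):
    """Return (resolution_namespace, object_name) for an identifier."""
    if "." in object_identifier:
        # The identifier specifies its own namespace; use it.
        namespace, name = object_identifier.rsplit(".", 1)
    else:
        # No namespace specified; fall back to the session's current one.
        namespace, name = session_namespace, object_identifier
    return namespace, name
```

For example, `resolve("analytics.orders", "default")` resolves against the `analytics` namespace, while the bare identifier `orders` resolves against the session default.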
20210365438 | NAMESPACE-BASED SYSTEM-USER ACCESS OF DATABASE PLATFORMS - A database platform authenticates a system user for access via an application to a database that is associated with a customer account of the database platform. The system user is a first object in a first account-level namespace of the customer account, and the first account-level namespace is distinct from a default account-level namespace of the customer account. The database platform sends, as the system user, a query to the database via the application. The database platform receives, as the system user, results of the query from the database, and stores, as the system user, the results of the query in a first-namespace stage, which is a second object in the first account-level namespace. | 2021-11-25 |
20210365439 | DISTRIBUTED TRANSACTION EXECUTION IN DISTRIBUTED DATABASES - Client systems of a distributed database system execute transactions on data stored within the distributed database system. The client systems communicate directly with database nodes of the distributed database system in order to execute transactions. The client systems interact with the database nodes of the distributed database system via a client-side interface that performs various operations to execute transactions at the distributed database nodes, including retrieving records, staging mutations or insertions, committing mutations or insertions, or rolling back mutations or insertions on records stored on the distributed database nodes. Interactions between the client-side interface and the database nodes of the distributed database system are further configured to prevent conflicts between different transactions executed by one or more client systems at the database nodes. | 2021-11-25 |
20210365440 | DISTRIBUTED TRANSACTION EXECUTION MANAGEMENT IN DISTRIBUTED DATABASES - Client systems of a distributed database system manage execution of transactions on data stored within the distributed database system. The client systems communicate directly with database nodes of the distributed database system in order to manage transactions. The client systems interact with the database nodes of the distributed database system via a client-side interface that performs various operations to execute transactions at the distributed database nodes, including retrieving records, staging mutations or insertions, committing mutations or insertions, or rolling back mutations or insertions on records stored on the distributed database nodes. Interactions between the client-side interface and the database nodes of the distributed database system are further configured to prevent conflicts between different transactions executed by the same or different client systems at the database nodes. | 2021-11-25 |
20210365441 | GROWING DYNAMIC SHARED MEMORY HASH TABLE - A method and apparatus of a device that grows and/or shrinks a table that is shared between a writer and a plurality of readers is described. In an exemplary embodiment, a device receives an entry to be added to the shared table. In response to receiving the entry, the device remaps the shared table to add a new storage segment to the shared table. The device further adds the entry to the shared table, where the entry is stored in the new storage segment. In addition, the device updates a shared table characteristic to indicate that the shared table has changed. The device further shrinks the shared table by remapping the table to remove a segment of the table. | 2021-11-25 |
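The grow/shrink mechanism above can be sketched as a list of fixed-size storage segments: a new segment is appended ("remapped in") when the table fills, a version counter plays the role of the shared table characteristic that tells readers the layout changed, and shrinking removes the last segment. Segment size and names are illustrative.

```python
SEGMENT_SIZE = 4

class SegmentedTable:
    def __init__(self):
        self.segments = [[]]   # list of storage segments
        self.version = 0       # shared table characteristic seen by readers

    def add(self, entry):
        if len(self.segments[-1]) >= SEGMENT_SIZE:
            self.segments.append([])   # grow: map in a new storage segment
            self.version += 1
        self.segments[-1].append(entry)
        self.version += 1              # tell readers the table changed

    def shrink(self):
        """Remove the last storage segment, as in the shrink path above."""
        if len(self.segments) > 1:
            self.segments.pop()
            self.version += 1


table = SegmentedTable()
for i in range(6):
    table.add(i)
```

After six inserts the table holds two segments, and every structural change bumped the version so lock-free readers can detect the remapping.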
20210365442 | SYSTEMS AND METHODS FOR ELECTRONIC NOTIFICATION QUEUES - Systems and methods including one or more processors and one or more non-transitory storage devices storing computing instructions configured to run on the one or more processors and perform storing one or more notifications in a data store; receiving a new notification; determining a respective number of notifications in each respective segment of a plurality of approximately equal segments by subtracting a cumulative number of notifications in the plurality of approximately equal segments from a preceding number of notifications in a preceding segment of the plurality of approximately equal segments; using the respective number of notifications in each respective segment of the plurality of approximately equal segments to determine a number of the one or more notifications; when the number of the one or more notifications is equal to or greater than a maximum number of notifications, removing, from the data store, at least one notification of the one or more notifications; and before or after removing the at least one notification, storing the new notification in the data store. Other embodiments are disclosed herein. | 2021-11-25 |
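Stripped of the segment-counting optimization, the eviction behavior above is simple: when the store has reached its maximum size, at least one existing notification is removed before the new one is stored. The FIFO eviction choice in this sketch is an assumption.

```python
from collections import deque

def store_notification(store, notification, max_notifications):
    """Evict if at capacity, then store the new notification."""
    if len(store) >= max_notifications:
        store.popleft()            # remove at least one notification
    store.append(notification)     # then store the new one
    return store


store = deque()
for n in ["n1", "n2", "n3", "n4"]:
    store_notification(store, n, max_notifications=3)
```

With a maximum of three, the fourth insert evicts the oldest notification, leaving `n2`, `n3`, `n4` in the data store.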
20210365443 | SIMILARITY-BASED VALUE-TO-COLUMN CLASSIFICATION - Methods and systems for similarity-based value-to-column classification are disclosed. A method includes: receiving, by a computing device, a natural language search query; determining, by the computing device, a filtering phrase in the natural language search query using a natural language understanding model; encoding, by the computing device, the filtering phrase; retrieving, by the computing device, a plurality of encoded columns; for each of the plurality of encoded columns, the computing device determining a similarity score based on a similarity between the encoded filtering phrase and the encoded column; and outputting, by the computing device, a column corresponding to an encoded column of the plurality of encoded columns having a highest similarity score. | 2021-11-25 |
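The classification step above can be sketched end to end, with a toy bag-of-words encoder standing in for the real encoding model (an assumption; the abstract does not specify the encoder) and cosine similarity as the scoring function.

```python
def encode(text):
    """Toy encoder: a bag-of-words frequency dictionary."""
    vec = {}
    for token in text.lower().split():
        vec[token] = vec.get(token, 0) + 1
    return vec

def cosine(a, b):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    norm = lambda v: sum(x * x for x in v.values()) ** 0.5
    return dot / (norm(a) * norm(b)) if a and b else 0.0

def best_column(filtering_phrase, columns):
    """Return the column whose encoding is most similar to the phrase."""
    phrase_vec = encode(filtering_phrase)
    return max(columns, key=lambda c: cosine(phrase_vec, encode(c)))


column = best_column("ship date", ["order id", "ship date", "customer name"])
```

In practice the encodings would come from a trained model and the column encodings would be precomputed and retrieved, as the abstract describes; only the argmax-over-similarity logic is shown here.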
20210365444 | METHOD AND APPARATUS FOR PROCESSING DATASET - The present disclosure discloses a method and apparatus for processing a dataset. The method includes: obtaining a first text set meeting a preset similarity matching condition with a target text from multiple text blocks provided by a target user; obtaining a second text set from the first text set, in which each text in the second text set does not belong to a same text block as the target text; generating a negative sample set of the target text based on content of a candidate text block to which each text in the second text set belongs; generating a positive sample set of the target text based on content of a target text block to which the target text belongs; and generating a dataset of the target user based on the negative sample set and the positive sample set, and training a matching model based on the dataset. | 2021-11-25 |
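The sampling scheme above can be sketched as two filters: texts similar to the target form the first set, similar texts that do not come from the target's own block become negative samples, and the target's own block supplies positive samples. The word-overlap similarity condition here is a toy stand-in (an assumption).

```python
def similar(a, b):
    """Toy similarity-matching condition: any shared word."""
    return len(set(a.split()) & set(b.split())) > 0

def build_samples(target, target_block, blocks):
    # First text set: texts meeting the similarity condition with the target.
    first = [t for block in blocks for t in block if similar(t, target)]
    # Second text set: similar texts NOT belonging to the target's block.
    second = [t for t in first if t not in target_block]
    negatives = second                                   # negative sample set
    positives = [t for t in target_block if t != target] # positive sample set
    return positives, negatives


target = "blue widget price"
target_block = ["blue widget price", "widget shipping cost"]
other_block = ["red widget price", "unrelated note"]
pos, neg = build_samples(target, target_block, [target_block, other_block])
```

A matching model trained on this dataset learns to prefer same-block text over merely similar text from other blocks.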
20210365445 | TECHNOLOGIES FOR COLLECTING, MANAGING, AND PROVIDING CONTACT TRACING INFORMATION FOR INFECTIOUS DISEASE RESPONSE AND MITIGATION - Disclosed embodiments are related to technologies for the provision of contact tracing services (CTS) in an affordable and non-intrusive means for individuals to check in and check out of gathering places so that their contact information can be stored and made available to contact tracers. A gathering place operator scans a machine-readable element (MRE) of a contact tracing participant that enters or exits the gathering place. The MRE encodes a unique identifier (UID) generated by the CTS for the participant, and the scan captures the UID along with a location and a timestamp at entry or exit of the gathering place. The UID, location, and timestamp are provided to the CTS for storage in a contact tracing database, which is used for providing contact tracing information to contact tracers. Other embodiments may be described and/or claimed. | 2021-11-25 |
20210365446 | DYNAMICALLY UPDATED DATA SHEETS USING ROW LINKS - In response to a determination that a first logical table is to be created in a data sheet, a respective row identifier is generated and stored for individual rows of the first logical table. To indicate a relationship between a particular cell of another logical table and a particular row of the first logical table, the row identifier of the particular row is stored. After a modification of a value stored in the particular row, the row identifier is used to determine an updated value to be displayed in the particular cell of the other logical table. | 2021-11-25 |
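The row-link mechanism above can be sketched as follows: each row of the first logical table gets a generated row identifier, a cell in another table stores that identifier instead of a copied value, and a later read follows the identifier to the row's current value. Names are illustrative.

```python
import uuid

def create_table(rows):
    """Store each row of a new logical table under a generated row identifier."""
    return {str(uuid.uuid4()): row for row in rows}

def display_value(table, row_id, column):
    """Resolve a linked cell to the referenced row's current value."""
    return table[row_id][column]


prices = create_table([{"item": "widget", "price": 10}])
row_id = next(iter(prices))       # a cell in another table stores this id
before = display_value(prices, row_id, "price")
prices[row_id]["price"] = 12      # the linked row is modified
after = display_value(prices, row_id, "price")
```

Because the linking cell holds the row identifier rather than a snapshot of the value, the display automatically reflects the modification without any copy propagation.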
20210365447 | SYSTEM AND METHOD FOR COMPARING AND SELECTIVELY MERGING DATABASE RECORDS - Embodiments of the present invention allow a Source database and a Target database to be compared, with source-only, target-only, and difference objects presented on a graphical user interface in a manner that visually shows the characterization of each displayed object and each displayed object is user-selectable via the graphical user interface to obtain information about the object and differences between the source database and the target database with respect to the object. Some exemplary embodiments are discussed herein with reference to databases such as the Standard Database (SDB) for Intergraph Smart™ Reference Data product from Intergraph Corporation and are referred to generally as the “SDB Merge Tool,” although the disclosed concepts can be applied more generally to other types of databases. | 2021-11-25 |
20210365448 | METHOD FOR RECOMMENDING CHART, ELECTRONIC DEVICE, AND STORAGE MEDIUM - The disclosure provides a method and an apparatus for recommending a chart, an electronic device, and a storage medium. The method may include: generating an input vector of at least one input field relative to each chart based on the at least one input field obtained in advance; calculating a similarity of the input vector and a predetermined feature vector corresponding to each chart; obtaining a target chart corresponding to the at least one input field based on the similarity between the input vector and the predetermined feature vector corresponding to each chart; and sending the target chart to a terminal device. | 2021-11-25 |
20210365449 | COLLABORATIVE SYSTEM AND METHOD FOR VALIDATING EQUIPMENT FAILURE MODELS IN AN ANALYTICS CROWDSOURCING ENVIRONMENT - A method for dynamically creating and validating a predictive analytics model to transform data into actionable insights, the method comprising: identifying an event, selectively tagging, based on analytics expertise, at least one time series data area (data area) where the identified event occurred, comparing the data area where the identified event is tagged with the data area where the identified event is not tagged, building, based on analytics expertise, the predictive analytics model embodying the classification generated by selective tagging, displaying the visual indicia generated by executing the predictive analytics model, and validating the predictive model based on feedback from at least one domain expert. | 2021-11-25 |
20210365450 | SYSTEM AND METHOD FOR TRANSFORMATION OF UNSTRUCTURED DOCUMENT TABLES INTO STRUCTURED RELATIONAL DATA TABLES - Embodiments described herein transform a complex and usually unstructured table into a relational table based on the header pattern. Specifically, the original complex table is expanded into a single-dimensional relational database format, in which each cell corresponds to one or more corresponding categories or subcategories from the original header. The transformed one-dimensional relational table is then populated with the corresponding cell values from the original table. In this way, data from the original complex and unstructured data table can be stored in a relational database. | 2021-11-25 |
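The expansion described above can be sketched for the common two-level-header case: each column is identified by a (category, subcategory) pair from the original header, and the table is flattened into one-dimensional relational rows ready for insertion into a database. The example table is illustrative.

```python
def flatten_table(headers, rows):
    """headers: one (category, subcategory) pair per column of the original table.
    rows: (row_label, values) pairs. Returns single-dimensional relational rows."""
    relational = []
    for row_label, values in rows:
        for (category, subcategory), value in zip(headers, values):
            relational.append((row_label, category, subcategory, value))
    return relational


# A complex table with the hierarchical header  2020 -> {Q1, Q2}:
headers = [("2020", "Q1"), ("2020", "Q2")]
rows = [("revenue", [100, 120])]
flat = flatten_table(headers, rows)
```

Each original cell becomes one relational row carrying its full header lineage, which is what makes the result directly storable in a relational table.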
20210365451 | QUERY CONTENT-BASED DATA GENERATION - Query content-based data generation includes obtaining a query having an outer query and one or more subqueries, performing subquery transformation on each subquery, which converts predicates of the subqueries to be predicates of the outer query and thereby obtains a transformed query, generating from the transformed query one or more query blocks each having a list of predicates selected from the transformed query, processing each query block for column information, including column range information and column relationship information, and generating data and populating a dataset having table(s) and respective column(s) for each of the table(s). Generating the data uses the column range information and the column relationship information to select data for the dataset such that data records from the dataset are produced as results to executing the obtained query against the dataset. | 2021-11-25 |
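The final generation step above can be sketched under a simplifying assumption: predicate processing has already produced per-column range information, and rows are generated inside those ranges so that executing the query against the dataset returns results. Only simple range predicates are handled in this illustration.

```python
def generate_rows(column_ranges, count):
    """column_ranges: {column: (low, high)}; emit rows with in-range values."""
    rows = []
    for i in range(count):
        row = {}
        for column, (low, high) in column_ranges.items():
            row[column] = low + (i % (high - low + 1))  # cycle within the range
        rows.append(row)
    return rows


# Range information as if extracted from:  WHERE age BETWEEN 30 AND 32
rows = generate_rows({"age": (30, 32)}, count=5)
matches = [r for r in rows if 30 <= r["age"] <= 32]
```

Because every generated value respects the extracted range, all five rows satisfy the predicate; the abstract's column relationship information would further constrain values across columns and tables.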