2nd week of 2022 patent application highlights part 47 |
Patent application number | Title | Published |
20220012169 | SYSTEMS AND METHODS FOR TESTING SOFTWARE APPLICATIONS - Methods and systems are presented for testing software applications in a production-like environment that simulates real-world failures of production environments. A production environment has production applications and databases configured to process user requests from users for conducting transactions with a service provider. A testing system provides an intermediate interface that enables a software application operating in the test environment to access at least one of a production application or a production database. The intermediate interface can be configured based on different failure configurations to simulate production component failures in the production environment. Failure injection and randomized failure modes can be employed, including for network-related failures (latency, dropped packets, connections, etc.) that might occur in the production environment. | 2022-01-13 |
20220012170 | METHOD AND APPARATUS FOR DATA INTEGRATION FRAMEWORK - Various methods, apparatuses/systems, and media for integrating data are provided. A processor implements a data processing framework configured to run native on a big data platform and abstracts data processing constructs to a user friendly template, thereby eliminating necessity of user initiated tasks of instantiating language level objects. The processor also implements a core set of data pipeline configurations on the template configured to initiate a chain of user defined data transformations. A receiver operatively connected with the processor via a communication network receives input of the chain of the user defined data transformations. The processor tests each transformation independently of each other and outputs data integration solutions on the big data platform based on a positive test result. | 2022-01-13 |
20220012171 | METHODS, APPARATUSES, DEVICES, AND SYSTEMS FOR TESTING BIOMETRIC RECOGNITION DEVICE - Disclosed herein are methods, systems, and apparatus, including computer programs encoded on computer storage media, for testing performance of a biometric characteristic recognition device. One of the methods includes obtaining a target simulated component for testing performance of the biometric characteristic recognition device from a plurality of simulated components. A mechanical arm is controlled, based on a camera component arranged on the mechanical arm, to input biometric characteristic information of the target simulated component to the biometric characteristic recognition device. A recognition result is obtained from the biometric characteristic recognition device and a performance test result of the biometric characteristic recognition device is determined based on the recognition result. | 2022-01-13 |
20220012172 | FLASH SECURE ERASE - A system with storage memory and a processing device has a logical deletion to physical erasure time bound. The system dereferences data, responsive to a direction to delete the data. The system monitors physical blocks in storage memory for live data and the dereferenced data. The system cooperates garbage collection with monitoring the physical blocks, so that at least a physical block having the dereferenced data is garbage collected and erased within a logical deletion to physical erasure time bound. | 2022-01-13 |
20220012173 | FLEXIBLE CONFIGURATION OF MEMORY MODULE DATA WIDTH - A memory system has a configurable mapping of address space of a memory array to the address of a memory access command. In response to a memory access command, a memory device can apply a traditional mapping of the command address to the address space, or can apply an address remapping to remap the command address to a different address space. | 2022-01-13 |
20220012174 | STORAGE DEVICE AND METHOD OF OPERATING THE SAME - A storage device communicates with a host including a host memory. The storage device includes a semiconductor memory device and a device memory. The semiconductor memory device includes a plurality of non-volatile memory cells. The device memory stores validity information of host performance booster (HPB) sub-regions included in each of HPB regions cached in the host memory. The storage device determines to deactivate at least one HPB region among the HPB regions cached in the host memory based on the validity information included in the device memory, and transfers a message recommending to deactivate the determined HPB region to the host. | 2022-01-13 |
20220012175 | SEMICONDUCTOR DEVICE - A semiconductor device includes a core which includes a first cache and a second cache; and a third cache configured to connect to the core, wherein the core is configured to: hold a read instruction that is issued from the first cache to the second cache, hold a write-back instruction that is issued from the first cache to the second cache, process the read instruction and the write-back instruction, determine whether a target address of the read instruction is held by the first cache by using a cache tag that indicates a state of the first cache, and when data of the target address is held by the first cache, abort the read instruction until it is determined that data of the target address is not held by the first cache. | 2022-01-13 |
20220012176 | MEMORY CACHE MANAGEMENT BASED ON STORAGE CAPACITY FOR PARALLEL INDEPENDENT THREADS - A request to write a first data item associated with a first thread to a memory device is received. The memory device includes a first portion and a second portion. The first portion includes a cache that includes a first block to be utilized for data caching and a second block and a third block to be used for block compaction. The second block is associated with a high modification frequency and the third block is associated with a low modification frequency. In response to determining a first memory page in the first block is available for writing the first data item, the first data item is written to the first memory page. A determination is made that a memory page criterion associated with the first thread has been satisfied. In response to identifying each of a set of second memory pages associated with the first thread written to at least one of the second block or the third block, the data of the first memory page and each of the set of second memory pages is copied to the second portion of the memory device. The first memory page is marked as invalid on the first block and each of the set of second memory pages associated with the first thread is marked as invalid on at least one of the second block or the third block. | 2022-01-13 |
20220012177 | APPLICATION MAPPING ON HARDENED NETWORK-ON-CHIP (NOC) OF FIELD-PROGRAMMABLE GATE ARRAY (FPGA) - Methods and example implementations described herein are generally directed to the addition of networks-on-chip (NoC) to FPGAs to customize traffic and optimize performance. An aspect of the present application relates to a Field-Programmable Gate-Array (FPGA) system. The FPGA system can include an FPGA having one or more lookup tables (LUTs) and wires, and a Network-on-Chip (NoC) having a hardened network topology configured to provide connectivity at a higher frequency than the FPGA. The NoC is coupled to the FPGA to receive profile information associated with an application, retrieve at least one characteristic, selected from any or a combination of a bandwidth requirement, latency requirement, protocol requirement, and transactions, associated with the application from the profile information, generate at least one application traffic graph having mapping information based on the retrieved characteristic, and map the generated application traffic graph into the FPGA using the hardened NoC. | 2022-01-13 |
20220012178 | LAST-LEVEL COLLECTIVE HARDWARE PREFETCHING - A last-level collective hardware prefetcher (LLCHP) is described. The LLCHP is to detect a first off-chip memory access request by a first processor core of a plurality of processor cores. The LLCHP is further to determine, based on the first off-chip memory access request, that first data associated with the first off-chip memory access request is associated with second data of a second processor core of the plurality of processor cores. The LLCHP is further to prefetch the first data and the second data based on the determination. | 2022-01-13 |
20220012179 | CACHE CONTROL APPARATUS AND CACHE SYSTEM CONTROL METHOD - A cache control apparatus includes a data unit configured to store data on an index-specific basis, a tag unit configured to store, on the index-specific basis, a tag and a flag indicating whether the data has an uncorrectable error, and a control unit configured to refer to the flag, upon detecting a tag hit by performing a read access to the tag unit, to determine whether an uncorrectable error exists in the data corresponding to the tag hit, wherein the control unit performs process scheduling such that the read access to the tag unit and another access to the tag unit are performed simultaneously. | 2022-01-13 |
20220012180 | MEMORY SYSTEM FOR META DATA MANAGEMENT AND OPERATING METHOD OF MEMORY SYSTEM - A memory system includes a controller that generates meta data in accordance with normal data being stored in a non-volatile memory device, and a buffer memory that stores multiple meta slices constituting the meta data. The controller classifies an updated slice of the multiple meta slices as a first dirty slice, classifies a flushed slice of the first dirty slices as a second dirty slice, classifies a flushed slice of the second dirty slices as the meta slice, classifies an updated slice of the second dirty slices as a third dirty slice, classifies a flushed slice of the third dirty slices as the second dirty slice, and permits an update of each of the first to third dirty slices in a section in which a flush operation for each of the first to third dirty slices is performed. | 2022-01-13 |
20220012181 | MEMORY SYSTEM AND OPERATING METHOD THEREOF - An operating method of a memory system that includes a memory device including a plurality of blocks and a controller including a memory in which a first open block list and a second open block list are stored, the method comprising receiving a write request and a logical address from a host; converting the logical address into a first virtual address; converting the first virtual address into a physical address; performing a first error checking operation of checking a mapping relationship between the first virtual address and the physical address based on the first open block list; performing a second error checking operation of checking whether the physical address is included in the second open block list; and performing a write operation on an open block corresponding to the physical address when it is determined that the physical address is not allocated more than once. | 2022-01-13 |
20220012182 | REBUILDING LOGICAL-TO-PHYSICAL ADDRESS MAPPING WITH LIMITED MEMORY - Exemplary methods, apparatuses, and systems include reading logical-to-physical (L2P) table entries from non-volatile memory into volatile memory. Upon detection of a trigger to recover L2P data that was unmerged with the L2P table entries, a copy of an L2P journal is read from non-volatile memory. The L2P journal includes the L2P data that was unmerged with the L2P table entries. One or more of the L2P table entries are updated using the L2P data from the L2P journal. | 2022-01-13 |
20220012183 | METHODS AND SYSTEMS FOR TRANSLATING VIRTUAL ADDRESSES IN A VIRTUAL MEMORY BASED SYSTEM - An information handling system and method for translating virtual addresses to real addresses including a processor for processing data; memory devices for storing the data; and a memory controller configured to control accesses to the memory devices, where the processor is configured, in response to a request to translate a first virtual address to a second physical address, to send from the processor to the memory controller a page directory base and a plurality of memory offsets. The memory controller is configured to: read from the memory devices a first level page directory table using the page directory base and a first level memory offset; combine the first level page directory table with a second level memory offset; and read from the memory devices a second level page directory table using the first level page directory table and the second level memory offset. | 2022-01-13 |
20220012184 | ACCESSING COMPRESSED COMPUTER MEMORY - A method for accessing compressed computer memory residing in physical computer memory is disclosed. In the method, compressed memory blocks are represented as sectors, wherein all sectors contain a fixed number of compressed memory blocks, have a fixed logical size in the form of the fixed number of compressed memory blocks, and have varying physical sizes in the form of the total size of data stored in the respective compressed memory blocks. The method involves providing sector-based translation metadata to keep track of the sectors within a compressed memory page, receiving a physical memory access request comprising an address in the physical computer memory, using the address in the physical memory access request to derive a memory block index, using the memory block index and the fixed logical size of the sectors to determine a sector id, using the sector-based translation metadata to locate a sector having the sector id in the compressed memory page, and using the address of the physical memory access request to locate the requested data within said sector. | 2022-01-13 |
20220012185 | MEMORY SYSTEM AND OPERATING METHOD THEREOF - Embodiments of the present disclosure relate to a memory system and an operating method thereof. According to the embodiments of the present disclosure, the memory system may generate a nonce based on a physical address of a target area of a memory device using a cryptographic algorithm, and request the memory device to authenticate the nonce. When the authentication for the nonce succeeds, the memory controller may set an authority to perform a read, write or erase operation on the target area. Through this operation, the memory system can prevent data leakage or damage by a user who has no access authority. | 2022-01-13 |
20220012186 | DIVERSIFYING A BASE SYMMETRIC KEY BASED ON A PUBLIC KEY - A symmetric key that is stored at a device may be received. A public key from a remote entity may also be received at the device. Furthermore, a derived key may be generated based on a one way function between the symmetric key that is stored at the device and the public key that is received from the remote entity. The derived key may be encrypted with the public key and transmitted to the remote entity. The encryption of the derived key with the public key may provide secure transmission of the derived key to an authorized remote entity with a private key that may be used to decrypt the encrypted derived key. | 2022-01-13 |
20220012187 | METHOD AND APPARATUS TO AUTHENTICATE A MEMORY MODULE - A cryptographic hash based on content of a Serial Presence Detect (SPD) Hub and serial number identifiers for components on a memory module is provided. The cryptographic hash provides the ability to mitigate various supply chain attacks by binding the SPD Hub content to a memory module certificate that is used for authentication. Based on the cryptographic signatures, a certificate is trusted by the platform, so binding the SPD Hub content to the memory module certificate creates a secure way to ensure the components on the memory module have not been tampered with and that the reported attributes of the memory module are correct. | 2022-01-13 |
20220012188 | OBJECT AND CACHELINE GRANULARITY CRYPTOGRAPHIC MEMORY INTEGRITY - Technologies disclosed herein provide one example of a system that includes processor circuitry and integrity circuitry. The processor circuitry is to receive a first request associated with an application to perform a memory access operation for an address range in a memory allocation of memory circuitry. The integrity circuitry is to determine a location of a metadata region within a cacheline that includes at least some of the address range, identify a first portion of the cacheline based at least in part on a first data bounds value stored in the metadata region, generate a first integrity value based on the first portion of the cacheline, and prevent the memory access operation in response to determining that the first integrity value does not correspond to a second integrity value stored in the metadata region. | 2022-01-13 |
20220012189 | SHARING MEMORY AND I/O SERVICES BETWEEN NODES - A shared memory controller is to service load and store operations received, over data links, from a plurality of independent nodes to provide access to a shared memory resource. Each of the plurality of independent nodes is to be permitted to access a respective portion of the shared memory resource. Interconnect protocol data and memory access protocol data are sent on the data links and transitions between the interconnect protocol data and memory access protocol data can be defined and identified. | 2022-01-13 |
20220012190 | Methods and Apparatus for Improving SPI Continuous Read - Method and apparatus for improving continuous read operations with expanded serial interface are provided. In one aspect, a device comprises: a memory configured to store data; a buffer configured to receive data from outside of the device and transfer the received data to the memory; a plurality of input pins configured to be coupled to an expanded serial peripheral interface (xSPI); and a processor configured to: select a slave device, through the xSPI, from a plurality of slave devices, send instruction data to the slave device for data reading, receive data, through the xSPI, from the selected slave device, and receive a signal on a data strobe line of the xSPI and determine data reading operations based on the received signal. | 2022-01-13 |
20220012191 | REDUNDANCY RESOURCE COMPARATOR FOR A BUS ARCHITECTURE, BUS ARCHITECTURE FOR A MEMORY DEVICE IMPLEMENTING AN IMPROVED COMPARISON METHOD AND CORRESPONDING COMPARISON METHOD - Disclosed herein is a redundancy resource comparator for a bus architecture of a memory device for comparing an address signal being received from an address signal bus and a redundancy address being stored in a latch of the memory device. Disclosed is also a corresponding bus architecture and comparison method. | 2022-01-13 |
20220012192 | STORAGE DEVICE ADAPTIVELY SUPPORTING PLURALITY OF PROTOCOLS - A storage device includes a memory device and a controller. The controller includes a programmable logic device which is reconfigurable, based on requests which are received from an outside of the storage device, to adaptively support a plurality of protocols depending on the requests. As the programmable logic device is programmed to support a first protocol among the plurality of protocols based on a first request which is received from the outside of the storage device with regard to the first protocol, the programmable logic device processes the first request in compliance with the first protocol, and the controller communicates with the memory device based on the first request such that the memory device stores or outputs data corresponding to the first request. | 2022-01-13 |
20220012193 | BIDIRECTIONAL INTERFACE CONFIGURATION FOR MEMORY - Apparatuses and methods can be related to configuring interface protocols for memory. An interface protocol can define the commands received by a memory device utilizing transceivers, receivers, and/or transmitters of an interface of a memory device. An interface protocol used by a memory device can be implemented utilizing a decoder of signals provided via a plurality of transceivers of the memory device. The decoder utilized by a memory device can be selected by setting a mode register of the memory device. | 2022-01-13 |
20220012194 | APPARATUS AND METHOD FOR DATA TRANSMISSION AND READABLE STORAGE MEDIUM - The present application provides an apparatus, a method for data transmission and a readable storage medium, the apparatus includes a front-end processor, a transmission controller and a back-end processor. In the front-end processor, a DMA controller is respectively connected with the transmission controller, a memory controller, respective transmission buffers and a transmission scheduler. The DMA controller is configured to: receive a request for data transmission sent by the back-end processor, receive read data from the memory controller, and send it to the transmission buffers. The memory controller is configured to: control, according to a data reading instruction, the memory to read data, and send the read data to the DMA controller. The transmission scheduler is configured to: control multiple transmission buffers to write data sent by the DMA controller, and control the multiple transmission buffers to read data, and transmit, through the transmission controller, the data to the back-end processor. | 2022-01-13 |
20220012195 | ENABLING LOGIC FOR FLEXIBLE CONFIGURATION OF MEMORY MODULE DATA WIDTH - A memory system has a configurable mapping of address space of a memory array to the address of a memory access command. A controller provides command and enable information specific to a memory device. The command and enable information can cause the memory device to apply a traditional mapping of the command address to the address space, or can cause the memory device to apply an address remapping to remap the command address to a different address space. | 2022-01-13 |
20220012196 | LAYERED READY STATUS REPORTING STRUCTURE - A storage system includes a controller; a first storage device including a first ready/busy pin and a second storage device including a second ready/busy pin; a first data bus communicatively coupled between the controller, the first storage device, and the second storage device; and a first shared ready/busy signal channel communicatively coupled to the first ready/busy pin of the first storage device, the second ready/busy pin of the second storage device, and the controller according to a wire-sharing protocol, wherein the first storage device is configured to send the first device ID and status information associated with the first storage device to the controller via the first shared ready/busy signal channel and the second storage device is configured to send the second device ID and status information associated with the second storage device to the controller via the first shared ready/busy signal channel. | 2022-01-13 |
20220012197 | METHOD AND DEVICES FOR CONTROLLING OPERATIONS OF A CENTRAL PROCESSING UNIT - Control circuitry controls the operations of a central processing unit, CPU, which is associated with a nominal clock frequency. The CPU is further coupled to an I/O range and configured to deliver input to an application. The control circuitry controls the CPU to poll the I/O range for input to the application. The control circuitry also monitors whether or not each poll results in input to the application and adjusts a clock frequency at which the CPU operates to a clock frequency lower than the nominal clock frequency if a pre-defined number of polls resulting in no input is detected. | 2022-01-13 |
20220012198 | DIRECTED INTERRUPT FOR MULTILEVEL VIRTUALIZATION WITH INTERRUPT TABLE - An interrupt signal is provided to a first guest operating system. A bus attachment device receives an interrupt signal from a bus connected module with an interrupt target ID identifying a processor assigned for use by the guest operating system as a target processor for handling the interrupt signal. The bus attachment device translates the received interrupt target ID to a logical processor ID of the target processor using an interrupt table entry stored in a memory section assigned to a second guest operating system hosting the first operating system and forwards the interrupt signal to the target processor for handling. The logical processor ID of the target processor is used to address the target processor directly. | 2022-01-13 |
20220012199 | DIGITAL SIGNAL PROCESSING CIRCUIT AND CORRESPONDING METHOD OF OPERATION - An embodiment circuit comprises a plurality of processing units, a plurality of data memory banks configured to store data, and a plurality of coefficient memory banks configured to store twiddle factors for fast Fourier transform processing. The processing units are configured to fetch, at each of the FFT computation stages, input data from the data memory banks with a burst read memory transaction, fetch, at each of the FFT computation cycles, different twiddle factors in a respective set of the twiddle factors from different coefficient memory banks of the coefficient memory banks, process the input data and the set of twiddle factors to generate output data, and store, at each of the FFT computation stages, the output data into the data memory banks with a burst write memory transaction. | 2022-01-13 |
20220012200 | MANAGING IO PATH BANDWIDTH - Bandwidth consumption for IO paths between a storage system and host may be managed. It may be determined whether there is congestion on a front-end port (FEP) link. For example, the storage system may monitor for a notification from the switch in accordance with a Fibre Channel (FC) protocol. If a notification is received indicating congestion on an FEP link, the bandwidth thresholds (BWTs) for one or more IO paths between the storage system and one or more hosts that include the FEP link may be reduced. The host port BWTs may continue to be reduced until a congestion notification communication has not been received for a predetermined amount of time, in response to which the host port BWTs for one or more host port links on IO paths that include the FEP link may be increased. Similar techniques may be employed for an FEP link determined to be faulty. | 2022-01-13 |
20220012201 | Scatter and Gather Streaming Data through a Circular FIFO - Systems, apparatuses, and methods for performing scatter and gather direct memory access (DMA) streaming through a circular buffer are described. A system includes a circular buffer, producer DMA engine, and consumer DMA engine. After the producer DMA engine writes or skips over a given data chunk of a first frame to the buffer, the producer DMA engine sends an updated write pointer to the consumer DMA engine indicating that a data credit has been committed to the buffer and that the data credit is ready to be consumed. After the consumer DMA engine reads or skips over the given data chunk of the first frame from the buffer, the consumer DMA engine sends an updated read pointer to the producer DMA engine indicating that the data credit has been consumed and that space has been freed up in the buffer to be reused by the producer DMA engine. | 2022-01-13 |
20220012202 | DYNAMIC FUNCTIONAL INPUT/OUTPUT (IO) DEVICE INTERFACE SELECTION THROUGH POWER STATE TRANSITIONS - In one embodiment, an apparatus includes interconnect circuitry to implement one or more layers of a Universal Serial Bus (USB)-based protocol. The interconnect circuitry can implement a first USB-based interface and a second USB-based interface. The apparatus further includes telemetry circuitry to generate telemetry data, cause the telemetry data to be transmitted via the first USB-based interface, detect a power state transition in the apparatus, cause the telemetry data to be buffered based on detecting the power state transition, and cause the buffered telemetry data to be transmitted via the second USB-interface based on a set interface request indicating the second USB-interface. | 2022-01-13 |
20220012203 | FLEX BUS PROTOCOL NEGOTIATION AND ENABLING SEQUENCE - Systems, methods, and devices can involve a host device that includes a root complex, a link, and an interconnect protocol stack coupled to a bus link. The interconnect protocol stack can include multiplexing logic to select one of a Peripheral Component Interconnect Express (PCIe) upper layer mode, or an accelerator link protocol upper layer mode, the PCIe upper layer mode or the accelerator link protocol upper layer mode to communicate over the link, and physical layer logic to determine one or more low latency features associated with one or both of the PCIe upper layer mode or the accelerator link protocol upper layer mode. | 2022-01-13 |
20220012204 | UTILIZING INTEGRATED LIGHTING TO STREAMLINE SYSTEM SETUP AND DEBUGGING - A system setup data structure comprising cable couplings between a first plurality of ports of a first electrical component and a second plurality of ports of a second electrical component is received. A first illumination component associated with a first port of the first plurality of ports and a second illumination component associated with a second port of the second plurality of ports are activated, wherein the first port and the second port correspond to one of the cable couplings included in the system setup data structure. A determination is made as to whether a cable has been coupled to the first port and the second port. In response to determining that the cable has been coupled to the first port and the second port, a visual indication is provided that the cable has been correctly coupled at the first electrical component and the second electrical component. | 2022-01-13 |
20220012205 | PORT DESCRIPTOR CONFIGURED FOR TECHNOLOGICAL MODIFICATIONS - A port descriptor of a selected port descriptor version is obtained. The selected port descriptor version is one port descriptor version of a plurality of port descriptor versions available for selection. The port descriptor of the selected port descriptor version includes information relating to a port of the computing environment and is configured to include technology information indicating whether the port is part of a multiple lane connector packaging. A determination is made using the port descriptor of one or more operational attributes of the port. Action is taken based on the one or more operational attributes of the port. | 2022-01-13 |
20220012206 | VERSATILE ADAPTOR FOR HIGH COMMUNICATION LINK PACKING DENSITY - An adaptor is described. The adaptor includes a first interface. The first interface is designed to support traffic and command flows to multiple transceivers through a single instance of the first interface. The adaptor includes multiple interfaces on a transceiver side. The multiple interfaces are to mate to respective transceivers. The multiple interfaces are different than the first interface, wherein the first interface is a QSFP interface and the multiple interfaces are SFP interfaces. The adaptor includes a flex cable between the first interface and the multiple interfaces. The adaptor includes electronic circuitry to translate QSFP commands received at the first interface into SFP commands presented to the respective transceivers through the multiple interfaces. | 2022-01-13 |
20220012207 | TECHNIQUES TO TRANSFER DATA AMONG HARDWARE DEVICES - Apparatuses, systems, and techniques to route data transfers between hardware devices. In at least one embodiment, a path over which to transfer data from a first hardware component of a computer system to a second hardware component of a computer system is determined based, at least in part, on one or more characteristics of different paths usable to transfer the data. | 2022-01-13 |
20220012208 | CONFIGURING A FILE SERVER - For two or more processing nodes of a cluster of a file server, the IO modules associated with the nodes may be required to be part of a same sub-network. A cluster may be configured to ensure that, for each processing node of the cluster, at least one other processing node of the cluster is associated with an IO module on a same sub-network as the IO module associated with the processing node. The user may configure a file server to ensure that a primary node and one or more failover nodes of the file server are on a same sub-network. When configuring IO modules, physical ports having similar or same characteristics may be configured to be on a same sub-network. By doing so, and restricting nodes of a file server to being on a same sub-network, a relatively seamless failover between nodes of a file server may be achieved. | 2022-01-13 |
20220012209 | APPARATUS, SYSTEM AND METHOD TO SAMPLE PAGE TABLE ENTRY METADATA BETWEEN PAGE WALKS - An apparatus of a computing system, the computing system, a method to be performed at the apparatus, and a machine-readable storage medium. The apparatus includes control circuitry to: perform a page walk operation on a page table structure of a pooled memory; based on the page walk operation, determine page table entries (PTEs) corresponding to a workload to be executed by the computing system; and during a time interval not including a page walk operation by the control circuitry, perform a plurality of sampling operations, individual ones of the sampling operations including determining PTE metadata corresponding to at least some of the PTEs. | 2022-01-13 |
20220012210 | SPEEDUP BUILD CONTAINER DATA ACCESS VIA SYSTEM CALL FILTERING - A method includes receiving a system call from an application within a container executing on an operating system, the system call comprising a synchronization operation to synchronize memory of the application to storage. The method further includes determining, by the kernel, whether a system call filtering policy associated with the container indicates that the system call is to be prevented, and preventing, by the kernel, performance of the synchronization operation in view of the system call filtering policy. | 2022-01-13 |
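The filtering step this abstract describes can be sketched as a per-container policy check that decides whether a synchronization call is suppressed. This is a hedged illustration only; `ContainerPolicy`, `SYNC_CALLS`, and the return values are invented for the example and are not from the application.

```python
# Hypothetical set of synchronization-related system calls.
SYNC_CALLS = {"fsync", "fdatasync", "msync", "sync"}

class ContainerPolicy:
    """Illustrative per-container system-call filtering policy."""
    def __init__(self, blocked_calls):
        self.blocked_calls = set(blocked_calls)

    def should_prevent(self, syscall_name):
        """True if the policy says this call should not be performed."""
        return syscall_name in self.blocked_calls

def handle_syscall(policy, syscall_name):
    """Kernel-side check (sketched): suppress a filtered synchronization
    call, otherwise let it proceed normally."""
    if syscall_name in SYNC_CALLS and policy.should_prevent(syscall_name):
        return "prevented"
    return "executed"
```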
20220012211 | SYNCHRONOUS REPLICATION FOR SYNCHRONOUS MIRROR COPY GUARANTEE - Techniques are provided for synchronous replication for synchronous mirror copy guarantee. A file system dependent technique for synchronous mirror copy guarantee is provided by overriding default behavior of a persistent fence so that the persistent fence is activated to block operations targeting a storage object having a synchronous replication relationship based upon the synchronous replication relationship being out of sync. The default behavior of the persistent fence is overridden to allow operations to be executed upon the storage object based upon the synchronous replication relationship being in sync. A file system independent technique for synchronous mirror copy guarantee is provided by intercepting operations before the operations are received by a file system. The operations are selectively forwarded to the file system or not based upon a state of a synchronous replication relationship. | 2022-01-13 |
20220012212 | SPEEDUP CONTAINERS IN PRODUCTION BY IGNORING SYNC TO FILE SYSTEM - A method includes receiving an operation from a container to synchronize container data from memory to a file system mounted by the container, and determining whether the file system indicates that the operation is to be ignored. The method further includes, in response to determining that the file system indicates that the operation is to be ignored, preventing, by the operating system kernel executing on the processing device, performance of the operation. | 2022-01-13 |
20220012213 | SPATIAL-TEMPORAL STORAGE SYSTEM, METHOD, AND RECORDING MEDIUM - A spatial-temporal storage method, system, and non-transitory computer readable medium include dynamically managing a plurality of region servers for querying spatiotemporal data in noSQL databases. | 2022-01-13 |
20220012214 | Techniques and Architectures for Utilizing a Change Log to Support Incremental Data Changes - Techniques and mechanisms for incremental data ingestion are disclosed. Raw data is received from multiple disparate sources to be consumed in an environment for collecting unformatted raw data. The environment has at least a delta data table and a delta notification table. A write to an entry in the delta data table is attempted. Entries to the delta data table specify at least records indicating changes to objects in the environment. A write of a corresponding entry to the delta notification table is attempted in response to a successful write attempt to the delta data table. The delta notification table entry includes information about delta data table entries for a specified period. At least one data consumer is notified that the delta data table has been modified. | 2022-01-13 |
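The delta-data/delta-notification pairing in this abstract can be sketched as two in-memory tables where a notification entry is written only after a successful delta-data write, and registered consumers are then told about the change. The class name, dictionary layout, and callback mechanism are all assumptions made for illustration.

```python
class DeltaLog:
    """Sketch of a delta data table paired with a delta notification table."""

    def __init__(self):
        self.delta_data = {}            # period -> list of change records
        self.delta_notifications = []   # one entry per modified period
        self.consumers = []             # callables notified on each change

    def write_change(self, period, record):
        """Write to the delta data table; on success, write the corresponding
        notification entry and notify registered consumers."""
        self.delta_data.setdefault(period, []).append(record)
        entry = {"period": period, "count": len(self.delta_data[period])}
        self.delta_notifications.append(entry)
        for consumer in self.consumers:
            consumer(entry)
```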
20220012215 | System and Method for Early Tail-Release in a Log Structure Log Using Multi-Line PLB Structure Supporting Multiple Partial Transactions - A method, computer program product, and computer system for obtaining, by a computing device, one or more pages from a log to complete a write transaction. Parity of a line in a multi-line physical layer block may be calculated. The one or more pages may be written to the line in the multi-line physical layer block. The parity to the line may be written in the multi-line physical layer block. A bitmap associated with the physical layer block may be updated based upon, at least in part, writing the one or more pages and the parity to the line in the multi-line physical layer block. | 2022-01-13 |
20220012216 | MONITORING DATABASE MANAGEMENT SYSTEMS CONNECTED BY A COMPUTER NETWORK - A database management system (DBMS) tracking system tracks location and status of DBMSs running on servers, for example, servers of an organization connected by a network. The DBMS tracking system periodically receives information describing servers from a directory service. The DBMS tracking system maintains a DBMS tracking table. The DBMS tracking system installs database agents on DBMSs. The database agents send tracking messages and status messages to the DBMS tracking system. The DBMS tracking system updates the information stored in the DBMS tracking table based on the tracking messages and status messages. | 2022-01-13 |
20220012217 | ARCHIVING ACCELERATOR-ONLY DATABASE TABLES - A DBMS manages a high-performance accelerated database that is synchronized with a conventional client database. The accelerated database contains both “regular” accelerated tables, which each duplicate a table of the client database, and accelerator-only tables (AOTs) that are unique to the accelerated database and that may be used for analytical purposes. AOT rows are archived by moving the rows to a dedicated accelerator-only archive stored in the accelerated database. When a user query attempts to access accelerator-only data, the DBMS rewrites the query to adapt the requested operations to the accelerated database's partitioned archive/non-archive structure. The rewritten query specifies steps for accessing archived and non-archived accelerator-only data without forcing the DBMS front-end to generate a merged view of archived and non-archived accelerator-only data. If the accelerator-only archives are stored in a read-only format, the rewriting also adds predictive operations that prevent queries from altering the archives. | 2022-01-13 |
20220012218 | TECHNIQUES FOR EFFICIENT DATA DEDUPLICATION - Data deduplication techniques may use a fingerprint hash table and a backend location hash table in connection with performing operations including fingerprint insertion, fingerprint deletion and fingerprint lookup. Processing I/O operations may include: receiving a write operation that writes data to a target logical address; determining a fingerprint for the data; querying the fingerprint hash table using the fingerprint to determine a matching entry of the fingerprint hash table for the fingerprint; and responsive to determining that the fingerprint hash table does not have the matching entry that matches the fingerprint, performing processing including: inserting a first entry in the fingerprint hash table, wherein the first entry includes the fingerprint for the data and identifies a storage location at which the data is stored; and inserting a second entry in a backend location hash table, wherein the second entry references the first entry. | 2022-01-13 |
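The two-table lookup flow in this abstract (fingerprint hash table plus backend location table) can be sketched as follows. This is a simplified illustration under stated assumptions: SHA-256 stands in for whatever fingerprint function the application uses, and the table layouts are invented for the example.

```python
import hashlib

class DedupStore:
    """Sketch of a fingerprint hash table and a backend location table
    used together for write-path deduplication."""

    def __init__(self):
        self.fingerprints = {}   # fingerprint -> storage location
        self.backend = {}        # storage location -> fingerprint (back-reference)
        self.blocks = {}         # storage location -> stored data
        self.addresses = {}      # target logical address -> storage location
        self._next_loc = 0

    @staticmethod
    def fingerprint(data):
        return hashlib.sha256(data).hexdigest()

    def write(self, target_address, data):
        """Query the fingerprint table; on a miss, store the block and
        insert entries in both tables. Returns (location, deduplicated)."""
        fp = self.fingerprint(data)
        loc = self.fingerprints.get(fp)
        deduped = loc is not None
        if not deduped:
            loc = self._next_loc
            self._next_loc += 1
            self.blocks[loc] = data
            self.fingerprints[fp] = loc
            self.backend[loc] = fp
        self.addresses[target_address] = loc
        return loc, deduped
```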
20220012219 | ENTITY RESOLUTION OF MASTER DATA USING QUALIFIED RELATIONSHIP SCORE - A first score associated with matching between entity records of a plurality of entities of master data of an MDM system is received. A set of entity records with a first score above a lower threshold score and below an upper threshold score is identified as unresolved: neither confirmed as matched nor unmatched. A second score associated with relationships between entity records is generated. Overall scores for pairs of the set of entity records are determined by combining the first matching score with the second relationship score. The overall score of respective pairs of the set of entities is compared to the upper threshold, and if the upper threshold is exceeded, then the information of the pair of entity records of the set of entity records is combined into a single record, and redundant entity records are removed from the MDM system. | 2022-01-13 |
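The two-score resolution flow above can be sketched as a small scoring routine. The weighted combination, the weights, and the data layout are illustrative assumptions; the application does not specify how the scores are combined.

```python
def combine_scores(match_score, relationship_score, w_match=0.7, w_rel=0.3):
    """Weighted combination of the matching score and the qualified
    relationship score; the weights are illustrative assumptions."""
    return w_match * match_score + w_rel * relationship_score

def resolve(pairs, lower, upper):
    """Classify record pairs: pairs whose first score falls strictly between
    the thresholds are unresolved; they are re-scored with relationship
    evidence and merged when the combined score exceeds the upper threshold.
    Returns the list of pairs to merge into single records."""
    merged = []
    for pair in pairs:
        first = pair["match_score"]
        if first <= lower or first >= upper:
            continue  # already resolved as unmatched / matched
        overall = combine_scores(first, pair["relationship_score"])
        if overall > upper:
            merged.append((pair["a"], pair["b"]))
    return merged
```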
20220012220 | DATA ENLARGEMENT FOR BIG DATA ANALYTICS AND SYSTEM IDENTIFICATION - The present invention may include a computer that receives raw data. The computer converts the raw data into a dataset, where the dataset comprises independent variables and dependent variables. Then, the computer clusters the dataset to determine a corresponding target value to each of a plurality of clusters. The computer constructs a nonlinear programming problem based on a prior experience and generates an enlarged dataset by solving the nonlinear programming problem. | 2022-01-13 |
20220012221 | GENERATING A QUERY RESPONSE BY COMBINING PARTIAL RESULTS FROM SEPARATE PARTITIONS OF EVENT RECORDS - Embodiments are directed towards a method for generating a query response, which comprises creating two or more partitions of event records from raw data stored in a data store, wherein each event record in the two or more partitions of event records includes a portion of the raw data and is associated with a time stamp derived from the raw data. The method also comprises generating a summarization table for each partition of the two or more partitions that: (a) identifies a field value comprising a value that corresponds to an associated field extracted from a respective event record; and (b) for the field value, includes a posting value to the respective event record within a respective partition. The method further comprises generating partial results for a received query using summarization tables in the partitions and generating a response to the query by combining the partial results. | 2022-01-13 |
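The per-partition summarization tables and combined partial results can be sketched in a few lines. This is a minimal illustration: the table maps field/value pairs to posting lists of event indices, and a count query sums per-partition partial results. All names and the dictionary-based event shape are assumptions.

```python
from collections import defaultdict

def build_summarization_table(partition):
    """Map (field, value) -> posting list of event indices in this partition."""
    table = defaultdict(list)
    for i, event in enumerate(partition):
        for field, value in event.items():
            table[(field, value)].append(i)
    return table

def count_by_value(partitions, field, value):
    """Answer a count query by combining per-partition partial results."""
    partials = []
    for partition in partitions:
        table = build_summarization_table(partition)
        partials.append(len(table.get((field, value), [])))
    return sum(partials)
```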
20220012222 | Indexing Elements in a Source Array - A hardware-implemented method of indexing data elements in a source array is provided. The method comprises generating a number of shifted copy arrays; receiving indices for indexing the source array; and retrieving one or more data elements from the shifted copy arrays, according to the received indices. Also disclosed is a related processing system comprising a memory and hardware for indexing data elements in a source array in the memory. | 2022-01-13 |
20220012223 | SELECTING BACKING STORES BASED ON DATA REQUEST - Techniques for improving database searches are described herein. In an embodiment, a server computer system stores one or more first datasets in a first data repository and one or more second datasets in a second data repository. The server computer receives a request to perform an analysis on a particular dataset. The server computer determines that the particular dataset is stored in the first data repository and the second data repository. Based, at least in part, on an attribute of the request, the server computer selects the second data repository and responds to the request with data from the particular dataset stored in the second data repository. | 2022-01-13 |
20220012224 | SYSTEM AND METHODS FOR CREATING AND MANAGING DYNAMIC ELEMENTS - Embodiments of the present invention provide a system and method for inserting a dynamic element into electronic content using a client application. The dynamic element includes a key and a corresponding value, the key and the value of the dynamic element stored in a database of a dynamic element management system (DEMS). The method performed by the DEMS includes: receiving a dynamic element insertion request from the client application; identifying and forwarding one or more keys corresponding to one or more suggested dynamic elements to the client application for rendering the one or more keys on a user interface of a client device; receiving indication of selection of a key from the one or more keys; retrieving a value of the dynamic element corresponding to the selected key from the database; and communicating the value of the dynamic element to the client application for rendering in line with the electronic content. | 2022-01-13 |
20220012225 | CONCURRENCY CONTROL METHOD OF DATABASE TRANSACTION COMBINING OPTIMISTIC LOCK AND PESSIMISTIC LOCK - A concurrency control method for database transactions combining an optimistic lock and a pessimistic lock includes: integrating a lock into each fragment in a storage range, using a lock table globally, and recording the lock status of each fragment in the lock table; before reading a data object of a fragment in the storage range, first querying the lock table to determine whether the data object of the fragment is locked by other read-write transactions; if so, blocking and retrying the current reading operation; and if the current reading operation remains blocked beyond a given time limit, treating the pessimistic lock as invalid, having the optimistic lock intervene, and continuing to read the single-row data of the current fragment. | 2022-01-13 |
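The pessimistic-then-optimistic fallback described in this abstract can be sketched as a read path that waits on a global lock table and, past a time limit, switches to optimistic version validation. The class, the version-counter scheme, and the time limits are illustrative assumptions, not the application's design.

```python
import time

class FragmentLockTable:
    """Sketch of a global lock table recording per-fragment lock status,
    with an optimistic fallback after a blocking time limit."""

    def __init__(self, time_limit=0.05):
        self.locked = set()     # fragments held by read-write transactions
        self.versions = {}      # fragment -> version counter (optimistic path)
        self.time_limit = time_limit

    def read(self, fragment, data):
        deadline = time.monotonic() + self.time_limit
        # Pessimistic path: wait while the fragment is write-locked.
        while fragment in self.locked:
            if time.monotonic() >= deadline:
                # Time limit exceeded: fall back to optimistic validation.
                before = self.versions.get(fragment, 0)
                value = data[fragment]
                after = self.versions.get(fragment, 0)
                if before == after:
                    return value  # no concurrent write observed
                raise RuntimeError("optimistic validation failed; retry")
            time.sleep(0.001)
        return data[fragment]
```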
20220012226 | Technique for Concurrency Control - A technique for concurrency control of transactions in a system comprising a plurality of application instances accessing a database system is provided. A method implementation of the technique is performed by the database system and comprises receiving (S | 2022-01-13 |
20220012227 | ALERT FEED AND SUBSCRIPTION MANAGEMENT - Described herein are a system, apparatus, device, method, and/or computer program product embodiments and/or combinations and sub-combinations thereof for managing alerts and subscriptions in a cloud collaborative system. In one embodiment, a modification notice identifying a change to a field of a record is obtained, and the change is compared with a criterion specified in a subscription for a user. If the change satisfies the criterion, an alert is generated based on the modification notice. An alert GUI is transmitted to a user device to display alerts for the user. The user may access an expanded view of the record through the alert in the alert GUI. The user may also publish the alert to a chat session involving other users of the cloud collaborative system. | 2022-01-13 |
20220012228 | SECURE INFORMATION RETRIEVAL AND UPDATE - A secure storage module of a client device interacts with a set of secure storage servers to securely store data items of the client on the servers, such that no individual server has the data in readable (non-obfuscated) form. Additionally, the client secure storage module and the servers interact to allow the client device to read a given portion of the original data items from the servers, such that none of the servers can determine which portion of the original data is being requested. Similarly, the interactions of the client secure storage module and the servers allows the client device to update a given portion of the original data on the servers to a new value, such that none of the servers can determine which portion is being updated and that none of the servers can determine either the prior value or new value or the difference between the new value and the prior value. | 2022-01-13 |
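The property that no individual server holds readable data can be illustrated with simple XOR secret sharing, one standard way to obfuscate data across servers. This is a hedged sketch of that general technique, not the application's actual protocol (which also hides *which* portion is read or updated, a stronger property this example does not attempt).

```python
import os
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split_shares(data, n):
    """Split data into n XOR shares; any n-1 shares reveal nothing,
    so no single server stores the data in readable form."""
    shares = [os.urandom(len(data)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, data))
    return shares

def reconstruct(shares):
    """The client XORs all shares back together to recover the data."""
    return reduce(xor_bytes, shares)
```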
20220012229 | CONFIGURABLE FRAMEWORK FOR GENERATING AND VALIDATING MULTI-CHANNEL ELECTRONIC DATA REPORTS - Embodiments of the present invention provide a system for generating and validating multi-channel electronic data reports. The system is typically configured for generating a configurable framework, creating a package of the configurable framework, allowing a resource entity system of an entity to download the configurable framework, identifying initiation of download of the configurable framework, causing a user interface to prompt input of one or more configurable parameters associated with an application present in the resource entity system, receiving the one or more configurable parameters, and integrating the configurable framework with the application to: generate one or more electronic data reports associated with the application, transmit the one or more electronic data reports to one or more end users, and track delivery of the one or more electronic data reports to the one or more end users based on the one or more configurable parameters. | 2022-01-13 |
20220012230 | MANAGEMENT SYSTEM, ACQUISITION DEVICE, AND MANAGEMENT METHOD - A management system includes processing circuitry configured to: acquire, from a device configured to store shared common definition information (indicating a definition of a common list serving as a list of files and directories likely to be described in the collation information, and a definition of an element attribute serving as conditions for determining elements of a list of files and directories in the collation information in an individual file management device), first common definition information corresponding to identification information of the first common definition information input to the acquisition device; compare the first common definition information acquired with the file managed, to distinguish whether there is a file satisfying all the conditions of the first common definition information; and output, to the generation device, second common definition information corresponding to the file managed among the first common definition information acquired, based on a distinction result. | 2022-01-13 |
20220012231 | AUTOMATIC CONTENT-BASED APPEND DETECTION - Automatic append includes: identifying, based at least in part on contents of a first data set comprising a first plurality of columns and contents of a second data set comprising a second plurality of columns, a plurality of matching columns and a plurality of non-matching columns. The matching columns comprise one or more columns among the first plurality of columns; and corresponding one or more matching columns among the second plurality of columns. The non-matching columns comprise: one or more columns among the first plurality of columns that do not match with any columns among the second plurality of columns; and one or more columns among the second plurality of columns that do not match with any columns among the first plurality of columns. Automatic append further includes obtaining a user specification of a first one or more non-matching columns to be appended to a second one or more non-matching columns, the first one or more non-matching columns and the second one or more non-matching columns being selected among the plurality of non-matching columns; and appending the first data set and the second data set according to at least the identified plurality of matching columns and the user specification. | 2022-01-13 |
20220012232 | TECHNIQUES FOR CONCURRENT DATA VALUE COMMITS - The present disclosure relates to a system and techniques for preventing corruption of snapshot data by limiting the visibility of committed data. To do this, the system may maintain an index that indicates the highest transaction identifier value such that no future commits will have a transaction identifier less than or equal to the indexed transaction identifier value. In embodiments, if a read is performed, only transactions having a transaction identifier less than or equal to the index value can be read. Each time that a transaction is committed, the index value is updated to the transaction identifier for the transaction having the highest transaction identifier without any intermediary transactions. | 2022-01-13 |
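The visibility index this abstract describes, the highest transaction identifier such that no future commit will have an identifier at or below it, behaves like a contiguous-commit watermark. A minimal sketch of that idea, with invented names:

```python
class CommitWatermark:
    """Sketch of a visibility index: reads only see transactions whose
    identifier is at or below the highest contiguous committed id."""

    def __init__(self):
        self.committed = set()
        self.watermark = 0  # highest id with no uncommitted predecessors

    def commit(self, txn_id):
        self.committed.add(txn_id)
        # Advance the watermark across any now-contiguous run of commits.
        while self.watermark + 1 in self.committed:
            self.watermark += 1

    def visible(self, txn_id):
        """A read sees a transaction only once the watermark has passed it."""
        return txn_id <= self.watermark
```

Note how transaction 3, though committed, stays invisible until the intermediary transaction 2 commits; this is what prevents a snapshot from observing a gap.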
20220012233 | Creation of a Blockchain with Blocks Comprising an Adjustable Number of Transaction Blocks and Multiple Intermediate Blocks - A computer-implemented method for creating a blockchain with blocks that include an adjustable number of transaction blocks and multiple intermediate blocks, and a storage medium of a device participating in a distributed database system comprising such a blockchain, wherein a first intermediate block is provided and at least one second intermediate block is generated, where the second intermediate block references a block preceding the second intermediate block and at least the first intermediate block. | 2022-01-13 |
20220012234 | METHOD AND APPARATUS FOR GENERATION AND PROMOTION OF TYPE AHEAD RESULTS IN A MULTI-SOURCE AGRICULTURAL PARCEL SEARCH - An automated method for search and generation of relevant search results includes: receiving freeform search characters input by a user; simultaneously searching a first freeform entry source and a second freeform entry source to obtain corresponding first suggested freeform entries and second suggested freeform entries; ranking the first suggested freeform entries according to first rules of relevancy to the user, and generating first ranked suggested freeform entries; ranking the second suggested freeform entries according to second rules of relevancy to the user, and generating second ranked suggested freeform entries; combining the first and second ranked suggested freeform entries into a combined set of suggested freeform entries, and ranking the combined set of suggested freeform entries according to combined rules of relevancy to the user, and generating combined ranked suggested freeform entries; and transmitting the combined ranked suggested freeform entries to the user for selection of a desired type ahead entry. | 2022-01-13 |
20220012235 | SYSTEMS AND METHODS FOR TARGETED DATA DISCOVERY - Various embodiments provide methods, apparatus, systems, computing devices, computing entities, and/or the like for identifying targeted data for a data subject across a plurality of data objects in a data source. In accordance with one embodiment, a method is provided comprising: receiving a request to identify targeted data for a data subject; identifying a first data object using metadata for a data source that identifies the first data object as associated with a first targeted data type for a data portion from the request; identifying a first data field from a graph data structure of the first data object that identifies the first data field as used for storing data having the first targeted data type; and querying the first data object based on the first data field and the data for the first targeted data type to identify a first targeted data portion for the data subject. | 2022-01-13 |
20220012236 | PERFORMING INTELLIGENT AFFINITY-BASED FIELD UPDATES - Described herein is a method, system, and non-transitory computer readable medium for updating fields in records. Initially, fields are displayed according to how frequently the fields are updated. One of the fields is selected and then records of a record type including the selected field are displayed. One of the records is selected and a form is displayed that enables a user to update the value stored in the selected field of the selected record. | 2022-01-13 |
20220012237 | NORMALIZATION AND EXTRACTION OF LOG DATA - Extracting data from traffic logs using regular expressions. A traffic log is received from a network device. A characterization of an extraction of data from the traffic log is determined. The traffic log is parsed by applying a first regular expression to the traffic log according to the characterization of the extraction of data from the traffic log to generate parsed data. Data is extracted from the traffic log by applying a second regular expression to the parsed data according to the characterization of the extraction of data from the traffic log to generate extracted data. | 2022-01-13 |
20220012238 | DATACUBE ACCESS CONNECTORS - A multidimensional database query engine processes a query request by forming a logical plan of subqueries for retrieving and assembling the data called for by the query request. A multidimensional database connector is invoked to transform a logical plan that defines and orders each subquery into a physical plan for accessing the data repositories where the data satisfying the query is stored. The query engine is invoked or called by an application and receives a query plan indicative of data repositories interrogated by query instructions in the query plan. For each data repository of the plurality of data repositories that may be interrogated by the query plan, a connector is defined based on commands for accessing each data repository. The connector associates each query instruction from the query plan with a corresponding repository command for accessing the data repository. | 2022-01-13 |
20220012239 | SYSTEMS AND METHODS FOR MANAGEMENT OF MULTI-TENANCY DATA ANALYTICS PLATFORMS - A data analytics system configured to perform operations is disclosed. The operations can include creating, in response to instructions received from a user, a first pipeline. This pipeline can be configured to extract data from an append-only first data store, extract identifying characteristics from the extracted data, provide the identifying characteristics to an identity service, and receive a tenancy identifier from the identity service. The pipeline can further be configured to create a data object in a second data store using the extracted data; create a tenancy object in a metadata store, the tenancy object associated with the data object, the metadata store implementing a hierarchical data object ownership graph; and associate the tenancy object with a parent object in the hierarchical data object ownership graph. The data analytics system can then tear down the first pipeline. | 2022-01-13 |
20220012240 | TECHNIQUES FOR MAINTAINING STATISTICS IN A DATABASE SYSTEM - Techniques are provided for gathering statistics in a database system. The techniques involve gathering some statistics using an “on-the-fly” technique, some statistics through a “high-frequency” technique, and yet other statistics using a “prediction” technique. The technique used to gather each statistic is based, at least in part, on the overhead required to gather the statistic. For example, low-overhead statistics may be gathered “on-the-fly” using the same process that is performing the operation that affects the statistic, while statistics whose gathering incurs greater overhead may be gathered in the background, while the database is live, using the high-frequency technique. The prediction technique may be used for relatively-high overhead statistics that can be predicted based on historical data and the current value of predictor statistics. | 2022-01-13 |
20220012241 | PIPELINE SYSTEMS AND METHODS FOR USE IN DATA ANALYTICS PLATFORMS - A data analytics system including an append-only first data store accessible to multiple clients and a second data store is disclosed. The data analytics system can be configurable to, in response to receiving first instructions from a first target system of a first client, the first target system separate from the data analytics system, create a first pipeline between the append-only first data store and the second data store. The first pipeline can be configured according to the first instructions to generate a client-specific data object and store the client-specific data object in the second data store. The data analytics system can be configurable to tear down the first pipeline upon completion of storing the client-specific data object in the second data store. | 2022-01-13 |
20220012242 | HIERARCHICAL DATACUBE QUERY PLAN GENERATION - A multidimensional database query engine processes a query request by forming a logical plan of subqueries for retrieving and assembling the data called for by the query request. A multi-pass analysis identifies a granularity of facts needed to fulfill the query request. A recursive analysis parses the query request and identifies components comprising the full query request. The analysis derives a subquery from each component, and identifies dependencies on other subqueries. The subqueries are arranged in a tree structure based on the dependencies. The tree represents subqueries as nodes, with query operations denoted by parent nodes for the dependent subqueries. The result is a hierarchical tree of subqueries associated based on operations between the subqueries and dependent subqueries descending from their parent subqueries. | 2022-01-13 |
20220012243 | DYNAMIC ROUTING METHOD AND APPARATUS FOR QUERY ENGINE IN PRE-COMPUTING SYSTEM - The present application discloses a dynamic routing method and apparatus for a query engine in a pre-computing system. The method includes: pre-obtaining cube data under a preset dimensional combination in a pre-computing system; determining a degree of aggregation of the cube data selected as expected under the preset dimensional combination after a query request is received; executing query processing on the query request in a first distributed query engine when the degree of aggregation of the cube data under the preset dimensional combination is high; and switching to a second distributed query engine to execute query processing on the query request when the degree of aggregation of the cube data under the preset dimensional combination is low. The present application addresses the technical problem that the query response speed of pre-computing query systems is not ideal: sub-second high-performance query responses can be achieved, higher concurrency can be supported to meet business needs, and the stability of the query system is guaranteed. | 2022-01-13 |
20220012244 | ANTICIPATING QUERIES FOR INTERACTIVE METRICS BASED ON USAGE - A videogame metrics query system, and related method, has one or more databases and a speculative cache. The system stores videogame metrics and tracks queries relating to videogame metrics. The system generates multiple queries, based on a received query and tracked queries. The system generates a combined query that has greater computational efficiency of execution. From executing the combined query, the system extracts query results relevant to the received query, and caches remaining results in the speculative cache. | 2022-01-13 |
20220012245 | PARTITION KEY ADJUSTMENT BASED ON QUERY WORKLOAD - Disclosed is a computer-implemented method to adjust partition keys. The method includes identifying a target table that is a target of a query, the target table including a set of initial partitions. The method also includes determining a set of common queries, wherein each of the common queries are configured to retrieve data from the target table. The method further includes identifying a plurality of core ranges. The method includes merging the core ranges into a new set of partitions. The method further includes setting, in response to the merging, updated partition keys. Further aspects of the present disclosure are directed to systems and computer program products containing functionality consistent with the method described above. | 2022-01-13 |
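The merging step in this abstract, combining workload-derived core ranges into a new set of partitions, can be sketched as a standard interval merge. The merge rule (overlapping or adjacent integer ranges coalesce) and the choice of upper bounds as partition keys are illustrative assumptions.

```python
def merge_core_ranges(ranges):
    """Merge overlapping or adjacent core ranges, given as (start, end)
    integer tuples, into a new set of partitions."""
    merged = []
    for start, end in sorted(ranges):
        if merged and start <= merged[-1][1] + 1:
            # Overlaps or touches the previous partition: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

def partition_keys(partitions):
    """Use the upper bound of each merged partition as its partition key."""
    return [end for _, end in partitions]
```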
20220012246 | PREFIX N-GRAM INDEXING - A table organized into a set of batch units is accessed. A set of N-grams are generated for a data value in the source table. The set of N-grams include a first N-gram of a first length and a second N-gram of a second length where the first N-gram corresponds to a prefix of the second N-gram. A set of fingerprints are generated for the data value based on the set of N-grams. The set of fingerprints include a first fingerprint generated based on the first N-gram and a second fingerprint generated based on the second N-gram and the first fingerprint. A pruning index that indexes distinct values in each column of the source table is generated based on the set of fingerprints and stored in a database with an association with the source table. | 2022-01-13 |
20220012247 | PREFIX INDEXING - A table organized into a set of batch units is accessed. A set of N-grams are generated for a data value in the source table. The set of N-grams include a first N-gram of a first length and a second N-gram of a second length where the first N-gram corresponds to a prefix of the second N-gram. A set of fingerprints are generated for the data value based on the set of N-grams. The set of fingerprints include a first fingerprint generated based on the first N-gram and a second fingerprint generated based on the second N-gram and the first fingerprint. A pruning index that indexes distinct values in each column of the source table is generated based on the set of fingerprints and stored in a database with an association with the source table. | 2022-01-13 |
20220012248 | STREAMS RUNTIME PROCESSING RATE ADJUSTMENT - A stream of tuples is monitored. The stream of tuples is to be processed by a plurality of processing elements of a stream application that operate on one or more compute nodes, each processing element having one or more stream operators. A processing rate of a first stream operator of the stream application is calculated. The processing rate is based on the number of tuples that are processed by the first stream operator. It is determined that the processing rate of the first stream operator meets a predetermined tuple processing criterion. The processing rate of the first stream operator is adjusted based on the predetermined tuple processing criterion. | 2022-01-13 |
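A minimal sketch of the rate-adjustment loop described in this abstract, with the operator model, window, criterion, and back-off factor all invented for illustration:

```python
# Throttle a stream operator when its measured tuple rate meets a
# predetermined processing criterion.
class StreamOperator:
    def __init__(self, name, rate_limit):
        self.name = name
        self.rate_limit = rate_limit      # tuples/sec the operator may consume
        self.processed = 0

    def process(self, tuples):
        self.processed += len(tuples)

def adjust_rate(op, window_sec, criterion_tps, factor=0.5):
    """Measure tuples/sec over the window; back off if the criterion is met."""
    observed = op.processed / window_sec
    if observed >= criterion_tps:                 # criterion met: throttle
        op.rate_limit = max(1, int(op.rate_limit * factor))
    return observed

op = StreamOperator("filter", rate_limit=1000)
op.process(range(5000))                           # 5000 tuples in the window
rate = adjust_rate(op, window_sec=2.0, criterion_tps=2000.0)
```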
20220012249 | SYSTEM AND METHOD FOR OBJECT-ORIENTED PATTERN MATCHING IN ARBITRARY DATA OBJECT STREAMS - A system and method for applying extended regular expressions against arbitrary data objects, wherein a state machine maintains an internal state model for the system, an object analysis server receives data objects from a data source, and the object analysis server analyzes the structure and contents of the objects, compares them against a received search pattern, and directs the state machine to update the state model based on either or both of the analysis and comparison operations. | 2022-01-13 |
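A toy sketch of matching a pattern against a stream of arbitrary data objects: here the "pattern" is simplified to a sequence of predicates, and a small state machine advances its state model as objects satisfy each predicate in turn. All names are assumptions.

```python
# State machine that matches a predicate-sequence pattern over object streams.
class ObjectPatternMatcher:
    def __init__(self, pattern):
        self.pattern = pattern    # list of predicates over objects
        self.state = 0            # index of the next predicate to satisfy

    def feed(self, obj):
        """Update the state model with one object; return True on a full match."""
        if self.pattern[self.state](obj):
            self.state += 1
            if self.state == len(self.pattern):
                self.state = 0    # reset after a complete match
                return True
        return False

# Pattern: a "login" event eventually followed by an "error" event.
pattern = [lambda o: o.get("type") == "login",
           lambda o: o.get("type") == "error"]
m = ObjectPatternMatcher(pattern)
stream = [{"type": "login"}, {"type": "click"}, {"type": "error"}]
hits = [m.feed(obj) for obj in stream]
```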
20220012250 | DATA ANALYTICS PLATFORM USING CONFIGURABLE FLOW SPECIFICATIONS - A data analytics system is disclosed that is configured to perform operations comprising creating at least one data storage, creating a metadata store separate from the at least one data storage, creating a flow storage, and configuring a flow service using first received instructions. The flow service is configured to obtain a first flow from the flow storage, obtain metadata from the metadata storage, and execute the flow. The flow execution can include obtaining input data from the at least one data storage, generating output data at least in part by validating, transforming, and serializing the input data using the metadata, and generating additional metadata describing the output data. The flow execution can further include providing the output data for storage in the at least one data storage and providing the additional metadata for storage in the metadata storage. | 2022-01-13 |
20220012251 | MULTI-TENANCY DATA ANALYTICS PLATFORM - A data analytics system is disclosed that is configured to perform operations including receiving input data at a first storage location and configuring a flow service to execute a flow. The flow execution can include creating a pipeline using the flow and metadata associated with the flow, the pipeline configured to perform a data transformation specified in the flow. The flow execution can further include determining a tenancy associated with the input data using the flow. The flow execution can also include generating, using the pipeline, output data from the input data and storing, using the pipeline, the output data in a second storage location associated with the tenancy. | 2022-01-13 |
20220012252 | AUTOMATED DATASET CALCULATION USING GEOSPATIAL RELATIONSHIPS - Collecting data and augmenting a dataset based on geospatial relationships. A primary dataset is obtained. At a user interface, a list of dataset types for collecting data are provided. A secondary dataset is obtained based on a received dataset type selection. For each record of the primary dataset, a reduced secondary dataset is determined based on filter parameters selected at the user interface. Filter parameters include at least a geospatial relationship based on geospatial data from the primary and secondary datasets. For each record of the primary dataset, a corresponding value is derived from records of the reduced secondary dataset based on identified geospatial relationships to the corresponding record from the primary dataset. An augmented primary dataset is generated comprising data from the primary dataset and an additional data field comprising, for each record of the primary dataset, the respective corresponding value. | 2022-01-13 |
20220012253 | METHOD AND APPARATUS FOR RAPID SEARCH FOR AGRICULTURAL PARCELS AND GENERATION OF RELEVANT SEARCH RESULTS - An automated method for search and generation of relevant search results includes: receiving both discrete parameters and freeform search characters input by a user; simultaneously searching a first freeform entry source and a second freeform entry source to obtain corresponding first and second suggested freeform entries; ranking the first suggested freeform entries according to first rules of relevancy to the user, and generating first ranked suggested freeform entries; ranking the second suggested freeform entries according to second rules of relevancy to the user, and generating second ranked suggested freeform entries; combining the first and second ranked freeform entries into a combined set of suggested freeform entries, and ranking the combined set of suggested freeform entries according to combined rules of relevancy to the user, and generating combined ranked suggested freeform entries; and transmitting the combined ranked suggested freeform entries to the user for selection of a desired type ahead entry. | 2022-01-13 |
20220012254 | VIEWPORT LOCATION BASED METHOD AND APPARATUS FOR GENERATION AND PROMOTION OF TYPE AHEAD RESULTS IN A MULTI-SOURCE AGRICULTURAL PARCEL SEARCH - An automated method for search includes: receiving search characters and a viewport location on a displayed geographic area corresponding to an area of interest; simultaneously searching a first entry source and a second entry source to obtain corresponding first suggested entries and second suggested entries, where the first suggested entries correspond to geographic locations that are closer to the viewport location; ranking the first suggested entries according to first rules of relevancy, and generating first ranked suggested entries; ranking the second suggested entries according to second rules of relevancy, and generating second ranked suggested entries; combining the first and second ranked suggested entries into a combined set of suggested entries, and ranking the combined set of suggested entries according to combined rules of relevancy, and generating combined ranked suggested entries; and transmitting the combined ranked suggested entries to a user for selection of a desired type ahead entry. | 2022-01-13 |
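The rank-then-combine flow shared by the two type-ahead abstracts above can be sketched briefly. The relevancy rule (distance to the viewport) and all data are invented for illustration:

```python
# Rank two suggestion sources separately, then re-rank the combined set.
def rank(entries, score):
    return sorted(entries, key=score, reverse=True)

def combined_type_ahead(source_a, source_b, score_a, score_b, score_combined):
    ranked_a = rank(source_a, score_a)          # first ranked suggestions
    ranked_b = rank(source_b, score_b)          # second ranked suggestions
    return rank(ranked_a + ranked_b, score_combined)

# Toy relevancy: prefer suggestions nearer the viewport (smaller distance).
a = [{"name": "Field 12", "dist": 2.0}, {"name": "Field 7", "dist": 0.5}]
b = [{"name": "Parcel 9", "dist": 5.0}]
nearness = lambda e: -e["dist"]
suggestions = combined_type_ahead(a, b, nearness, nearness, nearness)
```

In practice the three scoring functions would differ (per-source rules of relevancy versus combined rules), which is why the sketch keeps them as separate parameters.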
20220012255 | ATHLETE DATA AGGREGATION SYSTEM - An athlete data aggregation system is a single point for collection, aggregation, visualization and selective distribution of quantitative and qualitative athlete related data. The system includes facilitating a plurality of athletes and a plurality of stake-holders to enter qualitative and quantitative athlete related information on a web-based platform, collecting the qualitative and quantitative athlete related information, analyzing the qualitative and quantitative athlete related information, aggregating and visualizing the qualitative and quantitative athlete related information, and selectively distributing the qualitative and quantitative athlete related information to the plurality of athletes and the plurality of stake-holders. The system also provides development tools for athletes, assessment tools for athletes and keeps the athletes engaged not only with each other but also with various stake-holders. | 2022-01-13 |
20220012256 | METHOD, SYSTEM AND PROGRAM PRODUCT FOR MONITORING EAS DEVICES - A method of monitoring Emergency Alert System (EAS) devices includes providing a system, the system including processor(s) in communication with memory(ies) storing instructions for execution by the processor(s), the instructions enabling monitoring of EAS devices; monitoring by the system the EAS devices for all changes to configuration settings and updates to software and firmware for the EAS devices (“changes”), the system further including database(s) automatically storing data regarding the changes, wherein data regarding changes to configuration settings comprises a copy of the configuration settings, wherein the copy is stored chronologically, and the monitoring includes avoiding use of a threshold. The system creates secondary instance(s) of the database(s), monitors for failures of the database(s), and automatically fails over to the secondary instance(s) when failure(s) occur, notifying designated receiver(s) of the changes and assisting with filtering and/or sorting of selected data from the database. | 2022-01-13 |
20220012257 | WORKFLOW SERVICE APPLICATION SEARCHING - Disclosed are various approaches for workflow service application searching. In some aspects, a search query is entered through a search element of a workflow application on a client device. A request is transmitted from a workflow application to a workflow service, to search within an application based on the search query. Application content corresponding to the search query and the application is received from the workflow service. A search result is provided based on the application content and without opening the application on the client device. | 2022-01-13 |
20220012258 | METHODS AND SYSTEMS OF A MATCHING PLATFORM FOR ENTITIES - In one aspect, a computerized method for implementing a matching platform for entities includes the step of, in a real-time data processing layer, implementing a real-time linking on an input event stream. The method includes storing an output of the real-time linking in a state change store. The method includes the step of, in a high-throughput layer, implementing a high-throughput linking of entities from a batch data source stream. The method includes storing an output of the high-throughput linking of entities in a state store to generate a unified and consistent view of the entities across a different representation of the entities. The method includes implementing an on-demand linking using the state change store and the state store. | 2022-01-13 |
20220012259 | Techniques and Architectures for Providing Atomic Transactions Across Multiple Data Sources - Techniques and mechanisms for ingesting data through an atomic transaction are disclosed. Raw data is received from multiple disparate sources to be consumed in an environment that does not support atomic write operations to data consumers. The environment has at least a data table and a notification table. A write to an entry in the data table having an associated version is attempted. The data table entry corresponds to the data to be consumed. A write to a corresponding entry to the notification table is attempted in response to a successful write attempt to the data table. The notification table entry includes information about the corresponding data table entry. The version associated with the data table is modified in response to successful writes of both the data table entry and the notification table entry. At least one data consumer is notified that the data table version has been modified. | 2022-01-13 |
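The write protocol in this abstract — data table entry, then notification table entry, then a version bump that consumers observe — can be sketched as follows. The in-memory structure and consumer callback are illustrative assumptions, not the patented mechanism.

```python
# Two-table "atomic" ingest: data entry, notification entry, version bump;
# consumers are notified only once the version has been modified.
class AtomicIngest:
    def __init__(self):
        self.data, self.notifications = {}, []
        self.version = 0
        self.consumers = []       # callbacks invoked with the new version

    def write(self, key, value):
        staged_data = dict(self.data)
        staged_data[key] = value                      # 1. data table entry
        note = {"entry": key, "version": self.version + 1}
        # 2. notification entry and 3. version bump are applied together,
        # so consumers never observe a data entry without its notification.
        self.data = staged_data
        self.notifications.append(note)
        self.version += 1
        for consumer in self.consumers:               # notify data consumers
            consumer(self.version)

ingest = AtomicIngest()
seen = []
ingest.consumers.append(seen.append)
ingest.write("orders/2022-01-13", {"rows": 42})
```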
20220012260 | MULTISTAGE DATA SNIFFER FOR DATA EXTRACTION - A multistage data sniffer instance can include a first stage that scans a given file for a set of data fields based on a configuration file for a selected format of the given file. The multistage data sniffer instance can also include a second stage that evaluates a value in each data field in the set of data fields for the selected format to determine a validity of values in the set of data fields. The multistage data sniffer instance can further include a third stage that extracts data within the plurality of fields of the given file, aggregates the data based on a predetermined set of rules defined in the configuration file, and outputs data characterizing the aggregated data to a data mart database. | 2022-01-13 |
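The three stages described above can be sketched as a small pipeline. The configuration format, field names, and aggregation rule are all invented for illustration:

```python
# Stage 1 scans for configured fields, stage 2 validates their values,
# stage 3 aggregates per the configuration's rules for the data mart.
config = {
    "fields": ["amount", "region"],
    "validators": {"amount": lambda v: isinstance(v, (int, float)),
                   "region": lambda v: isinstance(v, str)},
    "aggregate": lambda rows: {"total": sum(r["amount"] for r in rows)},
}

def stage1_scan(rows, cfg):
    """Keep only records containing every configured field."""
    return [{f: r[f] for f in cfg["fields"]}
            for r in rows if all(f in r for f in cfg["fields"])]

def stage2_validate(rows, cfg):
    """Drop records with any invalid field value."""
    return [r for r in rows if all(cfg["validators"][f](r[f]) for f in r)]

def stage3_aggregate(rows, cfg):
    """Aggregate the surviving records per the configuration."""
    return cfg["aggregate"](rows)

raw = [{"amount": 10, "region": "east", "junk": 1},
       {"amount": "bad", "region": "west"},
       {"amount": 5, "region": "west"}]
out = stage3_aggregate(stage2_validate(stage1_scan(raw, config), config), config)
```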
20220012261 | WAFER-LEVEL PACKAGE ASSEMBLY HANDLING - A chuck assembly includes an upper surface configured to support a wafer-level package assembly and a clamping mechanism securing the wafer-level package assembly to the upper surface. | 2022-01-13 |
20220012262 | MULTIDIMENSIONAL DATA VISUALIZATION APPARATUS, METHOD, AND PROGRAM - An embodiment of the present invention is provided with a projective transform model including a plurality of nodes and a projection table, the plurality of nodes each holding a reference vector having a dimension corresponding to the dimension of multi-dimensional data. The projection table indicates the correspondence relation between the number of each node and a coordinate in a two-dimensional space as a projection target of the reference vector held by the node. First, in a learning phase, multi-dimensional input data of a positive example and a negative example is acquired, the amplitude characteristic amounts thereof are calculated, and this amplitude characteristic amount data is learned as the reference vectors of the nodes for each sample. Subsequently, the Euclidean distance between coordinates when the nodes learned based on the amplitude characteristic amount data of the positive example and the nodes learned based on the amplitude characteristic amount data of the negative example are projected into the two-dimensional space in accordance with the projection table is calculated, and coordinates in the projection table are updated so that the calculated Euclidean distance becomes equal to or larger than a threshold value. | 2022-01-13 |
20220012263 | MONITORING DATABASE AGENTS RUNNING ON SERVERS CONNECTED BY A COMPUTER NETWORK - A system monitors database agents associated with DBMSs running on servers, for example, servers of an organization connected by a network. The system determines whether each database agent is running according to a schedule and whether the database agent is running the correct version of a script. The system may generate a report describing differences between database agents that are running on database instances and a master configuration of database agents representing the expected configuration of the database agent. If a database instance is executing a configuration of a database agent that is different from the master configuration of the database agent, the system updates the database agent executing on the database instance to ensure that the configuration matches the master configuration. | 2022-01-13 |
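The drift-detection-and-remediation loop in this abstract can be sketched briefly. The master-configuration shape and agent fields are hypothetical:

```python
# Compare running database agents against a master configuration; report
# drifted (instance, agent) pairs and update them to match the master.
master = {"backup_agent": {"version": "2.4", "schedule": "0 2 * * *"}}

running = {
    "db1": {"backup_agent": {"version": "2.4", "schedule": "0 2 * * *"}},
    "db2": {"backup_agent": {"version": "2.3", "schedule": "0 2 * * *"}},
}

def drift_report(running, master):
    """List (instance, agent) pairs whose config differs from the master."""
    return [(inst, agent)
            for inst, agents in sorted(running.items())
            for agent, cfg in agents.items()
            if cfg != master.get(agent)]

def remediate(running, master):
    """Update each drifted agent so its config matches the master."""
    for inst, agent in drift_report(running, master):
        running[inst][agent] = dict(master[agent])

report = drift_report(running, master)
remediate(running, master)
```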
20220012264 | Pipelining Paxos State Machines - Paxos transactions are pipelined in a distributed database formed by a plurality of replica servers. A leader server is selected by consensus of the replicas, and receives a lock on leadership for an epoch. The leader gets Paxos log numbers for the current epoch, which are greater than the numbers allocated in previous epochs. The leader receives database write requests, and assigns a Paxos number to each request. The leader constructs a proposed transaction for each request, which includes the assigned Paxos number and incorporates the request. The leader transmits the proposed transactions to the replicas. Two or more write requests that access distinct objects in the database can proceed simultaneously. The leader commits a proposed transaction to the database after receiving a plurality of confirmations for the proposed transaction from the replicas. After all the Paxos numbers have been assigned, inter-epoch tasks are performed before beginning a subsequent epoch. | 2022-01-13 |
20220012265 | PREDICTIVE AND ADAPTIVE QUEUE FLUSHING FOR REAL-TIME DATA RECONCILIATION BETWEEN LOCAL AND REMOTE DATABASES - Predictive queue flushing for real-time synchronization of data sets between two data stores, comprising a data synchronization software module that interfaces with each data store, and uses a queue monitor to record and store changes to data on each data store and calculate velocity and acceleration of event arrivals, and a policy manager to manage synchronization, and a query generator to incorporate policies from the policy manager and measurements from the queue monitor to direct the data synchronization software module, flushing the change queue in accordance with the established synchronization policy, yielding synchronized shared data sets. | 2022-01-13 |
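The queue monitor's velocity/acceleration measurements and a predictive flush decision can be sketched as follows; the prediction formula, window split, and policy numbers are assumptions for illustration:

```python
# Predict queue growth from event-arrival velocity and acceleration, and
# flush before the queue is predicted to exceed the policy's limit.
def velocity(timestamps):
    """Events per second over the observed window."""
    span = timestamps[-1] - timestamps[0]
    return (len(timestamps) - 1) / span if span else 0.0

def acceleration(timestamps):
    """Change in velocity between the first and second half of the window."""
    mid = len(timestamps) // 2
    return velocity(timestamps[mid:]) - velocity(timestamps[:mid + 1])

def should_flush(queue_len, timestamps, horizon_sec, limit):
    v = velocity(timestamps)
    a = acceleration(timestamps)
    # Kinematic-style projection of queue length over the horizon.
    predicted = queue_len + v * horizon_sec + 0.5 * a * horizon_sec ** 2
    return predicted >= limit

# Arrivals speeding up: roughly 1/sec early, 2/sec late.
arrivals = [0.0, 1.0, 2.0, 2.5, 3.0, 3.5]
flush_now = should_flush(queue_len=6, timestamps=arrivals,
                         horizon_sec=10.0, limit=20)
```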
20220012266 | SYSTEMS AND METHODS FOR SPECIFYING OLAP CUBE AT QUERY TIME - Systems, methods, and storage media for generating an online analytical processing (OLAP) cube are disclosed. Exemplary implementations may: receive a cube definition file; access a data-source; generate a data-source property configuration for the data-source using the cube definition file; determine whether each of the respective parsed data from the data-source is a key, attribute, or measure; and generate the OLAP cube by combining the cube definition file and the data-source property configuration for the determined parsed data from the data-source. | 2022-01-13 |
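A minimal sketch of the key/attribute/measure classification and cube assembly; the cube-definition format and field names are assumed, not taken from the application:

```python
# Classify parsed data-source fields via the cube definition, then combine
# the definition with the resulting property configuration into a "cube".
cube_definition = {
    "keys": ["order_id"],
    "attributes": ["region", "product"],
    "measures": ["revenue"],
}

def classify(field, definition):
    for role in ("keys", "attributes", "measures"):
        if field in definition[role]:
            return role[:-1]          # "keys" -> "key", etc.
    return "unknown"

def build_cube(definition, rows):
    """Combine the definition with the data-source property configuration."""
    fields = rows[0].keys()
    properties = {f: classify(f, definition) for f in fields}
    return {"definition": definition, "properties": properties, "rows": rows}

rows = [{"order_id": 1, "region": "east", "product": "kit", "revenue": 9.5}]
cube = build_cube(cube_definition, rows)
```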
20220012267 | SYSTEMS AND METHODS FOR MACHINE-AUTOMATED CLASSIFICATION OF WEBSITE INTERACTIONS - A system includes a processor and memory. The memory stores a model database including models and a classification database including classification scores corresponding to an input. The memory stores instructions for execution by the processor. The instructions include, in response to receiving a first input from a user device of a user, determining, for the first input, classification scores for classifications by applying the models to the first input. Each model determines one of the classification scores. The instructions include storing the classification scores as associated with the first input in the classification database and identifying the first input as within a first classification in response to a first classification score corresponding to the first classification exceeding a first threshold. The instructions include transmitting, for display on an analyst device, the first input based on the first classification to a first analyst queue associated with the first classification. | 2022-01-13 |
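The score-store-and-route flow in this abstract can be sketched with toy models; the models, thresholds, and queue names are all invented:

```python
# Apply one model per classification, store the scores for the input, and
# route the input to each analyst queue whose score exceeds its threshold.
models = {
    "fraud":    lambda text: 0.9 if "refund" in text else 0.1,
    "shipping": lambda text: 0.8 if "package" in text else 0.2,
}
thresholds = {"fraud": 0.5, "shipping": 0.5}
classification_db = {}                         # input -> per-class scores
analyst_queues = {"fraud": [], "shipping": []}

def classify_and_route(user_input):
    scores = {name: model(user_input) for name, model in models.items()}
    classification_db[user_input] = scores     # store scores per input
    for name, score in scores.items():
        if score > thresholds[name]:           # threshold exceeded
            analyst_queues[name].append(user_input)
    return scores

scores = classify_and_route("where is my refund")
```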
20220012268 | SYSTEM AND METHOD FOR SMART CATEGORIZATION OF CONTENT IN A CONTENT MANAGEMENT SYSTEM - In accordance with an embodiment, systems and methods described herein can be used, for example with a content management system, to provide recommendations to categorize/classify content into user-defined categories, which in turn provides an opportunity for content managers to place new content into accurate categories effortlessly, based on previously evaluated/categorized content. A recommendation system or tool can use artificial intelligence (AI) techniques to continuously learn from past data, and assist in placing content into a relevant category through automatic categorization/classification of newly created/edited content. The recommendation tool can be implemented and applied across diverse domains by generating feature vectors from contents, creating clusters in the feature space based on previously categorized content, and recommending a category for new content through feature space distance calculation from the clusters. | 2022-01-13 |
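The recommendation mechanism in this last abstract — clusters from previously categorized content, then a feature-space distance test for new content — can be sketched as follows. The feature vectors and categories are toy data:

```python
# Build a centroid per category from previously categorized content, then
# recommend the category whose centroid is nearest in feature space.
def centroid(vectors):
    return [sum(dim) / len(vectors) for dim in zip(*vectors)]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

categorized = {
    "sports":  [[1.0, 0.0], [0.9, 0.1]],
    "finance": [[0.0, 1.0], [0.1, 0.9]],
}
centroids = {cat: centroid(vs) for cat, vs in categorized.items()}

def recommend(feature_vector):
    """Recommend the category whose cluster centroid is closest."""
    return min(centroids,
               key=lambda cat: distance(centroids[cat], feature_vector))

suggestion = recommend([0.8, 0.2])
```

Newly categorized content would be folded back into `categorized` and the centroids recomputed, which is the "continuously learn from past data" loop the abstract describes.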