17th week of 2022 patent application highlights part 45 |
Patent application number | Title | Published |
20220129367 | METHODS, SYSTEMS, AND MEDIA FOR A MICROSERVICES ORCHESTRATION ENGINE - Systems, methods, and media are directed to a microservices orchestration engine, which includes an engine framework and an orchestrator. The engine framework combines an input request with framework specifications to form a job stream and define communication between the orchestrator and microservice provision components that execute microservices in a non-production environment during a test of computer-executable code. The orchestrator receives the formed job stream and sends a plurality of tasks based on the formed job stream. The tasks are executed by respective microservice provision components that execute microservices, enabling the test of computer-executable code in the non-production environment. | 2022-04-28 |
20220129368 | System, Method, and Computer Program Product for Operating Dynamic Shadow Testing Environments - Described are a system, method, and computer program product for operating dynamic shadow testing environments for machine-learning models. The method includes storing a testing policy including an identifier of a machine-learning model and an identifier of a transaction service. The method includes generating a shadow testing environment operating the transaction service using the machine-learning model. The method also includes receiving, at a transaction service provider system, a transaction authorization request including transaction data of a transaction associated with a payment device. The method further includes identifying the machine-learning model associated with the transaction based on a parameter of the transaction data. The method further includes determining, based on the identifier of the machine-learning model, the testing policy and the shadow testing environment. The method further includes replicating the transaction data in the shadow testing environment as input for testing the transaction service using the machine-learning model. | 2022-04-28 |
20220129369 | GENERATING TEST ACCOUNTS IN A CODE-TESTING ENVIRONMENT - Systems, media, and methods for automatically generating test accounts using a test account generator are disclosed. Responsive to an indication of a selection of a product of interest from among a list of products, fields of information corresponding to the selected product, as well as enabling the testing of the test account, are generated. Permission to access a plurality of servers containing data corresponding to the fields of information is requested. Responsive to permissions to access the plurality of servers being granted, a test data set including data corresponding to the fields of information is produced and transmitted to the test account generator. Upon receiving the test data set, the included data is populated into corresponding fields of information to generate the test account. | 2022-04-28 |
20220129370 | SYSTEMS AND METHOD FOR TESTING COMPUTING ENVIRONMENTS - Systems and methods are disclosed herein for improving data migration operations including testing and setup of computing environments. In one example, the method may include receiving data for one or more application programming interfaces (APIs). The method may further include generating one or more tests to test the one or more APIs in a first computing environment, testing the APIs, storing the results in a database, and performing a change data capture operation. The method may further include augmenting the one or more tests with the CDC data to generate an updated test. The method may further include testing, using the updated test, a second set of the one or more APIs and comparing the test results. The method may also include outputting a confidence score indicating a correlation between the first environment and the second environment. | 2022-04-28 |
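A minimal sketch of the environment-comparison idea in this abstract — augmenting tests with change-data-capture (CDC) rows and scoring how well results in two environments agree. All function and field names here are illustrative assumptions, not taken from the application:

```python
# Hedged sketch: compare API test results across two environments and emit a
# correlation-style confidence score; names and structures are assumptions.

def confidence_score(results_env1: dict, results_env2: dict) -> float:
    """Fraction of shared test cases whose results match across environments."""
    shared = set(results_env1) & set(results_env2)
    if not shared:
        return 0.0
    matches = sum(1 for test in shared if results_env1[test] == results_env2[test])
    return matches / len(shared)

def augment_tests(base_tests: list, cdc_records: list) -> list:
    """Extend the base tests with cases derived from change-data-capture rows."""
    return base_tests + [{"name": f"cdc_{i}", "input": row}
                         for i, row in enumerate(cdc_records)]
```

A score near 1.0 would indicate the second environment behaves like the first; the application's actual scoring method is not specified in the abstract.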
20220129371 | Initialization Sequences for Automatic Software Test Generation - A computer-implemented method comprising, during execution of a software program comprising a procedure, determining whether an execution of the procedure satisfies a predetermined coverage criterion. In accordance with a determination that the execution of the procedure satisfies the predetermined coverage criterion, recording information related to the execution of the procedure to a log, the information based on data received from instrumented code included in the software program, and automatically generating an arrange section of a unit test for the procedure based on an initialization sequence determined from the recorded log, the initialization sequence comprising a sequence of program instructions which when executed invoke the procedure. | 2022-04-28 |
20220129372 | A/B TESTING SAMPLE RATIO MISMATCH SOLVER - A method of executing an A/B test includes configuring the A/B test to comprise a first plurality of users in a control group and a second plurality of users in a test group, wherein the first plurality of users and the second plurality of users are to be provided two different versions of a webpage. The method further includes, while the A/B test is executing, determining, by a processing device, that a sample ratio mismatch corresponding to the second plurality of users has occurred, wherein the sample ratio mismatch is determined before the A/B test ends executing. The method further includes, in response to the determining, ending the executing of the A/B test before a previously scheduled end of the A/B test. | 2022-04-28 |
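The sample-ratio-mismatch (SRM) check described above is commonly done with a chi-squared goodness-of-fit test against the configured split. A minimal sketch, with thresholds and names as illustrative assumptions rather than details from the application:

```python
# Hypothetical sketch of in-flight SRM detection for an A/B test.

def srm_detected(control_count: int, test_count: int,
                 expected_ratio: float = 0.5,
                 critical_value: float = 3.841) -> bool:
    """Chi-squared goodness-of-fit test against the configured split.

    critical_value 3.841 corresponds to p < 0.05 with 1 degree of freedom.
    """
    total = control_count + test_count
    expected_control = total * expected_ratio
    expected_test = total * (1.0 - expected_ratio)
    chi2 = ((control_count - expected_control) ** 2 / expected_control +
            (test_count - expected_test) ** 2 / expected_test)
    return chi2 > critical_value

def run_ab_test(assignments) -> str:
    """End the test early as soon as an SRM is detected mid-run."""
    control = test = 0
    for group in assignments:
        if group == "control":
            control += 1
        else:
            test += 1
        # Only check once enough users have arrived for the test to be meaningful.
        if control + test >= 100 and srm_detected(control, test):
            return "ended_early"
    return "completed"
```

For example, a 60/40 split across 100 users gives a chi-squared statistic of 4.0, which exceeds the 3.841 critical value and would end the test early.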
20220129373 | MEMORY SYSTEM, DATA STORAGE DEVICE, USER DEVICE AND DATA MANAGEMENT METHOD THEREOF - A data management method of a data storage device having a data management unit different from a data management unit of a user device receives information regarding a storage area of a file to be deleted, from the user device, selects a storage area which matches with the data management unit of the data storage device, from among the storage area of the deleted file, and performs an erasing operation on the selected storage area which matches with the data management unit. | 2022-04-28 |
20220129374 | MEMORY SYSTEM, DATA STORAGE DEVICE, USER DEVICE AND DATA MANAGEMENT METHOD THEREOF - A data management method of a data storage device having a data management unit different from a data management unit of a user device receives information regarding a storage area of a file to be deleted, from the user device, selects a storage area which matches with the data management unit of the data storage device, from among the storage area of the deleted file, and performs an erasing operation on the selected storage area which matches with the data management unit. | 2022-04-28 |
20220129375 | MEMORY SYSTEM, DATA STORAGE DEVICE, USER DEVICE AND DATA MANAGEMENT METHOD THEREOF - A data management method of a data storage device having a data management unit different from a data management unit of a user device receives information regarding a storage area of a file to be deleted, from the user device, selects a storage area which matches with the data management unit of the data storage device, from among the storage area of the deleted file, and performs an erasing operation on the selected storage area which matches with the data management unit. | 2022-04-28 |
20220129376 | LOGICAL-TO-PHYSICAL MAPPING OF DATA GROUPS WITH DATA LOCALITY - A system includes integrated circuit (IC) dies having memory cells and a processing device, which is to perform operations including generating a number of zone map entries for zones of a logical block address (LBA) space that are sequentially mapped to physical address space of the plurality of IC dies, wherein each zone map entry corresponds to a respective data group that has been sequentially written to one or more IC dies; and generating a die identifier and a block identifier for each data block of multiple data blocks of the respective data group, wherein each data block corresponds to a media block of the plurality of IC dies. | 2022-04-28 |
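The zone-map structure above can be pictured as a table of zone entries, each holding a die identifier and block identifier per data block, written sequentially across dies. A sketch under assumed geometry — the field names and interleaving order are illustrative, not the application's layout:

```python
# Illustrative sketch of sequential zone-to-physical mapping across IC dies.
from dataclasses import dataclass

@dataclass
class BlockEntry:
    die_id: int
    block_id: int

@dataclass
class ZoneMapEntry:
    start_lba: int
    blocks: list  # one BlockEntry per data block of the zone's data group

def build_zone_map(num_zones: int, blocks_per_zone: int, num_dies: int):
    """Sequentially map each zone's data blocks across the dies."""
    zone_map, physical_block = [], 0
    for zone in range(num_zones):
        blocks = []
        for _ in range(blocks_per_zone):
            blocks.append(BlockEntry(die_id=physical_block % num_dies,
                                     block_id=physical_block // num_dies))
            physical_block += 1
        zone_map.append(ZoneMapEntry(start_lba=zone * blocks_per_zone,
                                     blocks=blocks))
    return zone_map
```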
20220129377 | Efficiently Purging Non-Active Blocks in NVM Regions Using Virtblock Arrays - Techniques for efficiently purging non-active blocks in an NVM region of an NVM device using virtblocks are provided. In one set of embodiments, a host system can maintain, in the NVM device, a pointer entry (i.e., virtblock entry) for each allocated data block of the NVM region, where page table entries of the NVM region that refer to the allocated data block include pointers to the pointer entry, and where the pointer entry includes a pointer to the allocated data block. The host system can further determine that a subset of the allocated data blocks of the NVM region are non-active blocks and can purge the non-active blocks from the NVM device to a mass storage device, where the purging comprises updating the pointer entry for each non-active block to point to a storage location of the non-active block on the mass storage device. | 2022-04-28 |
20220129378 | COALESCING READ COMMANDS BY LOCATION FROM A HOST QUEUE - Method and apparatus for managing data in a storage device, such as a solid-state drive (SSD). A non-volatile memory (NVM) is arranged into multiple garbage collection units (GCUs) each separately erasable and allocatable as a unit. Read circuitry applies read voltages to memory cells in the GCUs to sense a programmed state of the memory cells. Calibration circuitry groups different memory cells from different GCUs into calibration groups that share a selected set of read voltages. A read command queue accumulates pending read commands to transfer data from the NVM to a local read buffer. Read command coalescing circuitry coalesces selected read commands from the queue into a combined command for execution as a single batch command. The combined batch command may include read voltages for use in retrieval of the requested data. | 2022-04-28 |
20220129379 | CACHE MEMORY MANAGEMENT - One or more aspects of the present disclosure relate to cache memory management. In embodiments, a global memory of a storage array can be dynamically partitioned into one or more cache partitions based on an anticipated activity of one or more input/output (IO) service level (SL) workload volumes. | 2022-04-28 |
20220129380 | VOLUME TIERING IN STORAGE SYSTEMS - An apparatus comprises a processing device configured to receive a request to create a given storage volume in a storage system, the storage system providing a plurality of storage features. The processing device is also configured to select, for the given storage volume, one of a set of one or more volume tiers, each of the volume tiers specifying whether respective ones of the plurality of storage features provided by the storage system are enabled or disabled for storage volumes associated with that volume tier. The processing device is further configured to create the given storage volume in the storage system, and to associate the selected volume tier with the given storage volume, wherein associating the selected volume tier with the given storage volume comprises enabling or disabling respective ones of the plurality of storage features provided by the storage system as specified by the selected volume tier. | 2022-04-28 |
20220129381 | BLOCKCHAIN CACHE SYSTEM - The present disclosure provides systems, methods, and computer program products for obtaining data from a blockchain. An example system may comprise a cache engine comprising cache storage and a blockchain crawler. The blockchain crawler may be configured to obtain blockchain data from the blockchain and write a subset of the blockchain data to the cache storage. The subset of the blockchain data may satisfy a query generated by the cache engine. The system may further comprise a blockchain query service communicatively coupled to the cache engine. The blockchain query service may comprise state storage and a cache crawler. The cache crawler may be configured to obtain cache data from the cache storage and update a state of the state storage based at least on the cache data. | 2022-04-28 |
20220129382 | Memory Circuit and Cache Circuit Configuration - A memory circuit includes a stack of first dies including multiple sets of memory cells of a first type, a second die including multiple sets of memory cells of a second type, a third die, and an interposer carrying the first, second, and third dies. The second die includes a first set of input/output (I/O) terminals on a top surface of the second die and a second set of I/O terminals on a bottom surface of the second die. The stack of first dies is coupled to the second die through the first set of I/O terminals. The interposer is coupled to the second die through the second set of I/O terminals. The third die is positioned aside the second die and in communication with the second die through the interposer. | 2022-04-28 |
20220129383 | ELECTRONIC DEVICE, AUTOMOTIVE DEVICE, AND DATA CENTER - An electronic device includes a processor configured to control a system, a main memory including a first region configured to store normal data according to a normal operation and a second region configured to store monitoring data, a first cache configured to be activated in response to a monitoring enable signal of the processor and to access the second region, and a second cache configured to load the normal data according to the normal operation of the processor. | 2022-04-28 |
20220129384 | SERVER RECOVERY FROM A CHANGE IN STORAGE CONTROL CHIP - A method comprises configuring an address-to-SC unit (A2SU) of each of a plurality of CPU chips based on a number of valid SC chips in the computer system. Each of the plurality of CPU chips is coupled to each of the SC chips in a leaf-spine topology. The A2SU is configured to correlate each of a plurality of memory addresses with a respective one of the valid SC chips. The method further comprises, in response to detecting a change in the number of valid SC chips, pausing operation of the computer system including operation of a cache of each of the plurality of CPU chips; while operation of the computer system is paused, reconfiguring the A2SU in each of the plurality of CPU chips based on the change in the number of valid SC chips; and in response to reconfiguring the A2SU, resuming operation of the computer system. | 2022-04-28 |
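The pause/reconfigure/resume flow above can be sketched as a per-CPU address-to-SC lookup that interleaves addresses across whatever SC chips are currently valid. The interleaving scheme and class names are assumptions for illustration only:

```python
# Hedged sketch of an address-to-SC-unit (A2SU) table and its reconfiguration
# when the set of valid SC chips changes; the pause is modeled as a flag.

class A2SU:
    def __init__(self, valid_sc_chips):
        self.valid_sc_chips = list(valid_sc_chips)

    def sc_for_address(self, address: int) -> int:
        # Interleave memory addresses across the currently valid SC chips.
        return self.valid_sc_chips[address % len(self.valid_sc_chips)]

class ComputerSystem:
    def __init__(self, num_cpus: int, valid_sc_chips):
        self.paused = False
        self.a2sus = [A2SU(valid_sc_chips) for _ in range(num_cpus)]

    def on_sc_change(self, new_valid_sc_chips):
        self.paused = True              # pause CPUs, including their caches
        for a2su in self.a2sus:         # reconfigure every CPU's A2SU
            a2su.valid_sc_chips = list(new_valid_sc_chips)
        self.paused = False             # resume operation
```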
20220129385 | FAST CACHE TRACKING TO SUPPORT AGGRESSIVE PREFETCHING - A Bloom filter is used to track contents of a cache. A system checks the Bloom filter before deciding whether to prefetch an address (by hashing the address and checking a value of the Bloom filter at an index based on the hash). This allows the system to utilize more aggressive prefetching schemes by reducing the risk of wasteful redundant prefetch operations. | 2022-04-28 |
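A Bloom filter has no false negatives, so "not in the filter" safely means "not cached" and the prefetch can proceed. A minimal sketch of the idea — the sizes and hash scheme are illustrative assumptions, not taken from the application:

```python
# Minimal sketch of a Bloom filter used to skip redundant prefetches.
import hashlib

class BloomFilter:
    def __init__(self, num_bits: int = 1024, num_hashes: int = 3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0  # a plain int used as a bit array

    def _indexes(self, address: int):
        # Derive several indexes by salting the hash with a counter.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{address}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, address: int) -> None:
        for idx in self._indexes(address):
            self.bits |= 1 << idx

    def might_contain(self, address: int) -> bool:
        return all(self.bits >> idx & 1 for idx in self._indexes(address))

def should_prefetch(bloom: BloomFilter, address: int) -> bool:
    """Prefetch only when the line is definitely not cached."""
    return not bloom.might_contain(address)
```

The occasional false positive merely skips one useful prefetch; it never causes an incorrect result, which is what makes aggressive prefetch policies safe.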
20220129386 | MEMORY BALLOONING RELATED MEMORY ALLOCATION TECHNIQUES FOR VIRTUAL MACHINES - Systems and methods for memory-ballooning-related memory allocation for virtual machines. An example method may comprise maintaining, by a virtual machine running on a host computer system, a list of free memory pages, wherein each entry in the list references a set of memory pages that are contiguous in a guest address space; receiving, from a hypervisor of the host computer system, a request for guest memory to be made available to the hypervisor, wherein the request comprises a minimum size of guest memory requested and a maximum size of guest memory; and responsive to identifying, in the list of free memory pages, a set of contiguous guest memory pages that is greater than or equal to the minimum size of memory requested, and less than or equal to the maximum size of memory requested, releasing the set of contiguous guest memory pages to the hypervisor. | 2022-04-28 |
20220129387 | NAMESPACE MAPPING STRUCTURAL ADJUSTMENT IN NON-VOLATILE MEMORY DEVICES - A computer storage device having a host interface, a controller, non-volatile storage media, and firmware. The firmware instructs the controller to: allocate a named portion of the non-volatile storage device; generate, according to a first block size, first block-wise mapping data; translate, using the first block-wise mapping data, logical addresses defined in the named portion to logical addresses defined for the entire non-volatile storage media, which can then be further translated to physical addresses in a same way for all named portions; determine a second block size; generate, according to the second block size, second block-wise mapping data; translate, using the second block-wise mapping data, the logical addresses defined in the named portion to the logical addresses defined for the entire non-volatile storage media. | 2022-04-28 |
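The two-stage, block-wise translation described above — namespace-local LBA to device-wide LBA, with the block size adjustable by regenerating the mapping data — can be sketched as follows. The flat lookup table and method names are assumptions for illustration:

```python
# Minimal sketch of block-wise logical-to-logical address translation for a
# named portion (namespace) of a non-volatile storage device.

class Namespace:
    def __init__(self, block_size: int, block_map: list):
        self.block_size = block_size   # logical addresses per mapping block
        self.block_map = block_map     # namespace block index -> device block index

    def translate(self, ns_lba: int) -> int:
        """Translate a namespace-local LBA into a device-wide LBA."""
        block, offset = divmod(ns_lba, self.block_size)
        return self.block_map[block] * self.block_size + offset

    def adjust_block_size(self, new_block_size: int, new_block_map: list):
        """Adopt a second block size by swapping in regenerated mapping data."""
        self.block_size = new_block_size
        self.block_map = new_block_map
```

The device-wide LBA produced here would then be translated to a physical address in the same way for all namespaces, as the abstract notes.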
20220129388 | INTEGRATED NON-VOLATILE MEMORY ASSEMBLY WITH ADDRESS TRANSLATION - A non-volatile storage system includes a memory controller connected to an integrated memory assembly. The integrated memory assembly includes a memory die comprising non-volatile memory cells and a control die bonded to the memory die. The memory controller receives commands from a host, performs logical address to physical address translation (“address translation”) operations for the commands, and instructs the integrated memory assembly to perform one or more operations in support of the command. The control die also includes the ability to perform the address translation. When performing a command from the host, the memory controller can choose to perform the necessary address translation or instruct the control die to perform the address translation. When the control die performs the address translation, the resulting physical address is used by the control die to perform one or more operations in support of the command. | 2022-04-28 |
20220129389 | Online Security Services based on Security Features Implemented in Memory Devices - A security server to provide security services over a computer network based on security features of memory devices connected to host systems. For example, the security features of a memory device can include a unique device secret, a cryptographic engine, and an access controller to implement access privileges represented by cryptographic keys. After receiving identity data that is generated by the memory device and represented by a cryptographic key, the security server can determine authenticity of the memory device based on its copy of the unique device secret of the memory device. The security server can generate a verification code for a command and cause the command and the verification code to be communicated to the memory device, where the access controller of the memory device validates the verification code in determining whether to block execution of the command in the memory device. | 2022-04-28 |
20220129390 | Monitor Integrity of Endpoints having Secure Memory Devices for Identity Authentication - A security server to manage integrity of packages stored in an endpoint based on identity authentication implemented using security features of a memory device configured in the endpoint. For example, the security server validates identity data generated by the memory device based at least in part on a secret of the memory device. The server can extract, from the identity data, health information of a package stored in the endpoint and determined, based at least in part on the health information, whether or not to update or repair the package currently stored in the endpoint. | 2022-04-28 |
20220129391 | Track Activities of Endpoints having Secure Memory Devices for Security Operations during Identity Validation - A security server to implement security operations during validation of the identity of an endpoint based on activity data of the endpoint. For example, a server system stores data representative of preferences for the endpoint. After receiving, a validation request containing identity data generated by a memory device configured in the endpoint, the server system can validate the identity data based at least in part on a secret of the memory device. If the identity data is valid, the server system can further determine whether an activity, as identified by the identity data and/or the validation request, satisfies a condition specified for the endpoint. If so, the server system can perform a security operation associated with the condition in providing a validation response in responding to the validation request. | 2022-04-28 |
20220129392 | SEMICONDUCTOR DEVICE WITH SECURE ACCESS KEY AND ASSOCIATED METHODS AND SYSTEMS - Memory devices, systems including memory devices, and methods of operating memory devices are described, in which security measures may be implemented to control access to a fuse array (or other secure features) of the memory devices based on a secure access key. In some cases, a customer may define and store a user-defined access key in the fuse array. In other cases, a manufacturer of the memory device may define a manufacturer-defined access key (e.g., an access key based on fuse identification (FID), a secret access key), where a host device coupled with the memory device may obtain the manufacturer-defined access key according to certain protocols. The memory device may compare an access key included in a command directed to the memory device with either the user-defined access key or the manufacturer-defined access key to determine whether to permit or prohibit execution of the command based on the comparison. | 2022-04-28 |
20220129393 | Dynamically Managing Protection Groups - Dynamically managing protection groups, including: identifying a protection group of storage resources, the protection group associated with a protection group management schedule that identifies one or more protection group management operations to be performed; detecting a membership change in the protection group; and updating, in dependence upon the change in the protection group, the protection group management schedule. | 2022-04-28 |
20220129394 | MANAGED NAND FLASH MEMORY REGION CONTROL AGAINST ENDURANCE HACKING - The disclosed embodiments are directed to improving the lifespan of a memory device. In one embodiment, a system is disclosed comprising: a host processor and a memory device, wherein the host processor is configured to receive a write command from a virtual machine, identify a region identifier associated with the virtual machine, augment the write command with the region identifier, and issue the write command to the memory device, and the memory device is configured to receive the write command, identify a region comprising a subset of addresses writable by the memory device using a region configuration table, and write the data to an address in the subset of addresses. | 2022-04-28 |
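The region-tagged write path above — host augments the VM's write with a region identifier, device checks the address against a region configuration table — can be sketched like this. The table layout and API names are hypothetical:

```python
# Hypothetical sketch of region-tagged writes guarding flash endurance.

class MemoryDevice:
    def __init__(self, region_config_table):
        # region id -> subset of addresses writable for that region
        self.region_config_table = region_config_table
        self.storage = {}

    def write(self, region_id: int, address: int, data: bytes):
        allowed = self.region_config_table[region_id]
        if address not in allowed:
            raise PermissionError("address outside the region for this VM")
        self.storage[address] = data

class Host:
    def __init__(self, device: MemoryDevice, vm_to_region: dict):
        self.device = device
        self.vm_to_region = vm_to_region  # virtual machine id -> region id

    def handle_write(self, vm_id: str, address: int, data: bytes):
        # Augment the VM's write command with its region identifier.
        self.device.write(self.vm_to_region[vm_id], address, data)
```

Confining each VM to its own address subset prevents a hostile guest from concentrating writes on cells belonging to other tenants, which is the endurance-hacking concern the title names.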
20220129395 | System for Improving Input / Output Performance - In one embodiment, data communication apparatus includes a network interface including one or more ports for connection to a packet data network and configured to receive content transfer requests from at least one remote device over the network, a storage sub-system to be connected to local peripheral storage devices, and including at least one peripheral interface, and a memory sub-system including a cache and RAM, and processing circuitry to manage transfer of content between the remote device(s) and the local peripheral storage devices via the peripheral interface(s) and the cache, responsively to the content transfer requests, while pacing commencement of serving of respective ones of the content transfer requests responsively to a metric of the storage sub-system so that while ones of the content transfer requests are being served, other ones of the content transfer requests pending serving are queued in at least one pending queue. | 2022-04-28 |
20220129396 | MEMORY DEVICE INTERFACE COMMUNICATING WITH SET OF DATA BURSTS CORRESPONDING TO MEMORY DIES VIA DEDICATED PORTIONS FOR COMMAND PROCESSING - A set of memory commands associated with one or more memory dies of a memory device are communicated via a first portion of an interface to the memory device. Communication of a set of data bursts corresponding to the set of memory commands to the one or more memory dies via a second portion of the interface is caused, wherein one or more of the set of memory commands is communicated via the first interface concurrently with one or more of the set of data bursts. | 2022-04-28 |
20220129397 | STORAGE SYSTEM - The present application provides a storage system, including a plurality of storage chips, each storage chip including a data output unit, the data output units sharing a power supply and a ground terminal, and the data output unit including: a pull-up unit having a control terminal, a first terminal and a second terminal, a first input signal being inputted to the control terminal, the first terminal being electrically connected to the power supply, the second terminal being connected to an output terminal of the data output unit, and the pull-up unit being a first NMOS transistor; and a pull-down unit having a control terminal, a first terminal and a second terminal, a second input signal being inputted to the control terminal, the first terminal being electrically connected to the ground terminal, and the second terminal being connected to the output terminal of the data output unit. | 2022-04-28 |
20220129398 | TUNNELING OVER UNIVERSAL SERIAL BUS (USB) SIDEBAND CHANNEL - Tunneling over Universal Serial Bus (USB) sideband channel systems and methods provide a way to tunnel I2C transactions between a master and slaves over USB 4.0 sideband channels. More particularly, a slave address table lookup (SATL) circuit is added to a host circuit. Signals from an I2C bus are received at the host, and any address associated with a destination is translated by the SATL. The translated address is passed to a low-speed interface associated with a sideband channel in the host circuit. Signals received at the low-speed interface are likewise reverse translated in the SATL and then sent out through the I2C bus. In this fashion, low-speed I2C signals may be routed over the sideband channel through the low-speed sideband interface portion of the USB interface. | 2022-04-28 |
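The SATL's forward and reverse address translation can be sketched as a pair of lookup tables; the table contents and the trivial "echo" model of the sideband channel are illustrative assumptions:

```python
# Sketch of a slave address table lookup (SATL) translating I2C addresses for
# tunneling over a USB sideband channel.

class SATL:
    def __init__(self, table: dict):
        self.table = dict(table)                   # I2C addr -> sideband addr
        self.reverse = {v: k for k, v in self.table.items()}

    def to_sideband(self, i2c_address: int) -> int:
        return self.table[i2c_address]

    def to_i2c(self, sideband_address: int) -> int:
        return self.reverse[sideband_address]

def tunnel_transaction(satl: SATL, i2c_address: int, payload: bytes):
    """Translate, 'send' over the sideband, then reverse-translate the reply."""
    sideband_addr = satl.to_sideband(i2c_address)
    reply_addr = sideband_addr        # modeled as echoed back by the channel
    return satl.to_i2c(reply_addr), payload
```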
20220129399 | DIRECT MEMORY ACCESS TRACKING FOR PASS-THROUGH DEVICES IN VIRTUALIZED ENVIRONMENTS - Systems, apparatuses and methods may provide for a frontend driver that notifies a hypervisor of a map request from a guest driver of a device, wherein the device is passed through to and directly controlled by a virtual machine, and wherein the map request is associated with an attempt of the device to access a guest memory page in a virtualized execution environment. The frontend driver may also determine whether the guest memory page is pinned and send a map hypercall to the hypervisor if the guest memory page is not pinned. Additionally, the hypervisor may determine that the guest memory page is pinned, determine, based on a direct memory access (DMA) bitmap, that an unmap request from the guest driver has been issued, and unpin the guest memory page. | 2022-04-28 |
20220129400 | SYNCHRONIZING A LOW VOLTAGE DRIVE CIRCUIT TO A BUS WHEN COUPLING THERETO - A method for execution by a low voltage drive circuit (LVDC) operably coupled to a bus includes, when activated, setting data reception for a control channel of a plurality of channels on the bus, where the control channel is a sinusoidal signal having a known frequency. The method further includes receiving the control channel and capturing a cycle of the control channel when the control channel is void of a data communication. The method further includes comparing the cycle of the control channel with a cycle of a first receive clock signal of the LVDC and, when the cycle of the first receive clock signal compares unfavorably to the cycle of the control channel, adjusting phase and/or frequency of the cycle of the first receive clock signal to substantially match phase and/or frequency of the cycle of the control channel to produce an adjusted first receive clock signal. | 2022-04-28 |
20220129401 | CONFIGURABLE INPUT/OUTPUT DEVICE AND OPERATION METHOD THEREOF - A configurable input/output device includes a plurality of input/output terminals, a routing module, and a first universal input/output channel. The input/output terminals are connected to a plurality of field devices. The input/output terminals receive a plurality of input signals from the field devices, and output a plurality of output signals to the field devices. At least two of the input signals are different, at least two of the output signals are different, and at least two of the field devices are different. The routing module is connected to the input/output terminals. The first universal input/output channel is connected to the routing module. The routing module controls connections between the first universal input/output channel and the input/output terminals. The routing module also controls the transceiving sequence for the input signals and the output signals. | 2022-04-28 |
20220129402 | AUTOMATIC SWITCHING SYSTEM AND METHOD OF FRONT-END PROCESSOR - This application discloses an automatic switching system and method for a front-end processor (FEP). The system includes: at least one external device and a FEP assembly. The FEP assembly is connected to the at least one external device. The FEP assembly provides services upward by using a primary memory, a primary IO manager, a secondary memory, and a secondary IO manager, and is connected downward to the at least one external device by using at least one primary driver and at least one secondary driver. The FEP assembly is configured to use the at least one secondary driver as a new primary driver when there is a fault in a communication link between the at least one primary driver and the at least one external device, to transmit a control instruction to the at least one external device and acquire data from the at least one external device. | 2022-04-28 |
20220129403 | SUPERIMPOSING BUTTERFLY NETWORK CONTROLS FOR PATTERN COMBINATIONS - A multilayer butterfly network is shown that is operable to transform and align a plurality of fields from an input to an output data stream. Many transformations are possible with such a network, which may include separate control of each multiplexer. This invention supports a limited set of multiplexer control signals, which enables a similarly limited set of data transformations. This limited capability is offset by the reduced complexity of the multiplexer control circuits. This invention uses precalculated inputs and simple combinatorial logic to generate control signals for the butterfly network. Controls are independent for each layer and therefore are dependent only on the input and output patterns. Controls for the layers can be calculated in parallel. | 2022-04-28 |
20220129404 | PIN CONNECTION PROTOCOL UPDATING - A computing device is provided, including a processor having a plurality of pins that are electrically coupled to a connector via respective traces. The computing device may further include a memory device storing a state table that maps the plurality of pins to a respective plurality of connection protocols. The processor may be configured to implement control logic for the plurality of pins at least in part by receiving a selection of a pin of the plurality of pins. Implementing the control logic may further include receiving an updated connection protocol for the selected pin. Implementing the control logic may further include updating the state table such that the selected pin is mapped to the updated connection protocol. Implementing the control logic may further include, via the connector, establishing a connection to an external device using the updated connection protocol implemented at the selected pin. | 2022-04-28 |
20220129405 | ELECTRONIC DEVICE - An electronic device is provided. The electronic device includes a board, a first latch mechanism, and an expansion card. A controller is disposed on the board. The first latch mechanism is disposed on the board. The first latch mechanism is electrically connected to the controller. The expansion card is plugged in the first latch mechanism and disposed over the board. The expansion card is electrically connected to the controller through the first latch mechanism. The controller determines a connecting condition of the first latch mechanism according to a connecting signal provided by the expansion card. | 2022-04-28 |
20220129406 | ELECTRONIC DEVICE AND OPERATION METHOD THEREOF - An electronic device includes a first processor, a second processor, and a communication interface. The second processor transmits, via a hardware wire, state information indicating a state of the first processor to the communication interface and, based on the first processor entering a suspend mode from a normal mode, changes the state information from a first value to a second value; the communication interface, based on the state information being changed, disconnects the first processor from the communication interface by turning off power of a universal serial bus (USB) interface. Moreover, the second processor, based on the first processor entering the normal mode from the suspend mode, changes the state information from the second value to the first value, and the communication interface, based on the state information being changed, connects the first processor to the communication interface by turning on the power of the USB interface. | 2022-04-28 |
20220129407 | BOARD PORTAL SUBSIDIARY MANAGEMENT SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT - A board portal system provides the ability to manage multiple boards, where each of the boards may be a separate legal entity. The board portal may provide the ability to establish links between the multiple boards and create parent-child relationships with subsidiary boards. With the board portal, users can create content and make it viewable and accessible across multiple boards that are related through a parent-child relationship. At the same time, the board portal maintains a requisite level of separation between the related boards in the portal using encryption and/or other separation techniques. As a result, the board portal facilitates flexible workflow patterns and communication processes based on the proper hierarchical structure that exists between the parent organization and its subsidiaries. | 2022-04-28 |
20220129408 | DATA ACTOR AND DATA PROCESSING METHOD THEREOF - Provided is a data actor, which is in data communication with a direct upstream actor and/or a direct downstream actor. The data actor includes a message bin, a finite state machine, a processing component and an output data cache. The message bin is configured to receive a message from the upstream actor and/or the downstream actor; the finite state machine is configured to change a current state of the actor based on the received message in the message bin and an operation of the processing component; when a state of the finite state machine reaches a trigger condition, the processing component directly reads output data in a readable state in an output data cache of the upstream actor and executes a predetermined operation, and then stores result data subsequent to execution of the predetermined operation in an output data cache of the data actor. | 2022-04-28 |
20220129409 | ARITHMETIC LOGIC UNIT LAYOUT FOR A PROCESSOR - A processor has first, second and third ALUs. The first ALU has an input and an output on a first side. The second ALU has a first side facing the first side of the first ALU, an input and an output on the first side of the second ALU that are in a rotated orientation relative to the input and the output of the first side of the first ALU, and an output on a second side of the second ALU. The third ALU has a first side facing the second side of the second ALU, and an input and an output on the first side of the third ALU. The input of the first side of the first ALU is logically directly connected to the output of the first side of the second ALU. | 2022-04-28 |
20220129410 | SYSTOLIC ARRAY DEVICE - A systolic array device according to an embodiment includes a plurality of processing units arranged in a matrix form of M by N (M and N are natural numbers). Each of the processing units includes: a processing element configured to perform a predetermined processing based on data received from a processing unit arranged adjacent to one side of the corresponding processing unit to output a result thereof; and a transfer part configured to perform one of an operation of transferring the received data to another processing unit arranged adjacent to the other side of the corresponding processing unit and an operation of transferring the result. | 2022-04-28 |
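The per-element behavior of such an M-by-N array can be sketched with a matrix multiply, the canonical systolic workload. In this hedged sketch the hardware timing skew is collapsed into sequential steps, so only each processing element's multiply-accumulate pattern is shown; the output-stationary choice and the function name are illustrative assumptions, not details from the application.

```python
def systolic_matmul(A, B):
    """Simulate an output-stationary systolic array computing A @ B.

    A is M x K, B is K x N. PE (i, j) accumulates a[i][k] * b[k][j]
    as operands stream through it; step t below stands in for the
    clock cycle at which the k = t operands reach each PE.
    """
    M, K, N = len(A), len(A[0]), len(B[0])
    C = [[0] * N for _ in range(M)]
    for t in range(K):            # one "wavefront" of streamed operands
        for i in range(M):
            for j in range(N):
                # each PE both consumes its operands and (in hardware)
                # would forward them to its neighbors on the other side
                C[i][j] += A[i][t] * B[t][j]
    return C
```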
20220129411 | PARTITIONED TEMPLATE MATCHING AND SYMBOLIC PEEPHOLE OPTIMIZATION - Systems and techniques that facilitate partitioned template matching and/or symbolic peephole optimization are provided. In various embodiments, a system can comprise a template component, which can perform template matching on a Clifford circuit associated with a set of qubits. In various aspects, the system can comprise a partition component, which can partition, prior to the template matching, the Clifford circuit into a computation stage, a Pauli stage, and a SWAP stage. In various instances, the template matching can be performed on the computation stage. In various embodiments, the system can comprise a symbolic component, which can select a subset of qubits from the set of qubits, rewrite at least one entangling gate in the computation stage such that a target of the at least one entangling gate is in the subset of qubits, and replace the at least one rewritten entangling gate with a symbolic Pauli gate. In various cases, the symbolic Pauli gate can be a Pauli gate that is controlled by a symbolic variable. In various aspects, the system can comprise a peephole component, which can perform peephole optimization on the subset of qubits with the symbolic Pauli gate by implementing a dynamic programming algorithm. | 2022-04-28 |
20220129412 | FILE SYSTEMS WITH GLOBAL AND LOCAL NAMING - A method for data storage includes specifying a plurality of File Systems (FSs) for use by multiple clients, including assigning to the FSs both respective global identifiers and respective client-specific names. The plurality of FSs is managed using the global identifiers, and files are stored for the clients in the FSs using the client-specific names. | 2022-04-28 |
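The two-namespace scheme is essentially a pair of mappings: management keyed by global identifiers, client access keyed by (client, name). A minimal sketch, assuming names are unique within one client; the class and method names are invented for illustration.

```python
import uuid

class FsCatalog:
    """File systems tracked by global IDs, addressed per client
    by client-specific names."""

    def __init__(self):
        self._fs = {}       # global_id -> FS contents (here, path -> data)
        self._names = {}    # (client, client_specific_name) -> global_id

    def create_fs(self, client, name):
        gid = str(uuid.uuid4())        # global identifier used for management
        self._fs[gid] = {}
        self._names[(client, name)] = gid
        return gid

    def store_file(self, client, fs_name, path, data):
        gid = self._names[(client, fs_name)]   # clients use their own names
        self._fs[gid][path] = data

    def file(self, client, fs_name, path):
        return self._fs[self._names[(client, fs_name)]][path]
```

Two clients can reuse the same client-specific name for different file systems, while the manager sees only the unambiguous global identifiers.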
20220129413 | ELECTRONIC FILE MIGRATION SYSTEM AND METHODS OF PARTITIONING MIGRATED DATA BETWEEN STORAGE SYSTEMS - An electronic file migration system has a processor. A memory is coupled to the processor, the memory storing program instructions that when executed by the processor, causes the processor to: migrate files from a first storage system to a second storage system, wherein a first set of files are copied as completed files to the second storage system and a second set of files have symbolic links written on the second storage system directed to the second set of files stored on the first storage system. | 2022-04-28 |
20220129414 | FORMAT AGNOSTIC DATABASE MIGRATION - A staging engine of a staging server receives a request to change a production database from a client device. The staging engine of the staging server accesses one or more schemas corresponding to the production database and determines one or more migration commands based on the received request and the accessed one or more schemas. The one or more migration commands correspond to a difference between a current structure of the production database and a final structure of the production database after the production database is updated. The staging engine transmits the one or more migration commands to a migration engine, wherein the migration engine asynchronously applies changes to the production database according to the one or more migration commands. | 2022-04-28 |
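Deriving migration commands from a schema difference can be sketched as below, treating each schema as a column-name-to-type mapping; the SQL text and function signature are assumptions for illustration, since the abstract does not fix a command format.

```python
def migration_commands(current, target, table):
    """Compute commands covering the difference between the current
    structure of a table and its desired final structure.

    `current` and `target` map column name -> type; only the schema
    dicts matter, which is the format-agnostic part of the sketch.
    """
    cmds = []
    for col, typ in target.items():
        if col not in current:
            cmds.append(f"ALTER TABLE {table} ADD COLUMN {col} {typ}")
    for col in current:
        if col not in target:
            cmds.append(f"ALTER TABLE {table} DROP COLUMN {col}")
    return cmds
```

In the described system these commands would be handed to a migration engine that applies them asynchronously, rather than executed by the staging engine itself.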
20220129415 | View Filtering for a File Storage System - Systems and methods for view filtering for a file storage system are described. An illustrative method includes receiving a request to access content of a managed directory of a file system; filtering, without regard to how the content of the managed directory is being accessed, the content of the managed directory based on a visibility filter policy attached to the managed directory; and providing, responsive to the request, the filtered content of the managed directory. | 2022-04-28 |
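The key property claimed is that filtering is attached to the managed directory itself, so it applies regardless of how the content is accessed. A minimal sketch; representing the visibility filter policy as a predicate is an assumption, as are the names.

```python
class ManagedDirectory:
    """Directory whose attached visibility filter is applied on every
    listing, independent of the access path."""

    def __init__(self, entries, visibility_filter=None):
        self._entries = list(entries)
        # the policy travels with the directory, not with the caller
        self._filter = visibility_filter or (lambda entry, ctx: True)

    def list(self, ctx=None):
        """Return only the entries the attached policy makes visible."""
        return [e for e in self._entries if self._filter(e, ctx)]
```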
20220129416 | DYNAMIC STORAGE GROUP RESIZING DURING CLOUD SNAPSHOT SHIPPING - A cloud tethering subsystem is configured to ship snapshots of an application production storage group to a cloud repository. Dynamic storage group resizing operations are allowed on the application production storage group after creation of a snapshot and before transmission of the snapshot to the cloud, or while the snapshot is accessing data of the application production storage group in connection with shipping the snapshot to the cloud. Example dynamic storage group resizing operations include adding one or more volumes to the application production storage group, removing one or more volumes from the application production storage group, and resizing one or more of the volumes of the application production storage group. The cloud tethering subsystem maintains information about the size of the snapshot at the time of creation and uses the snapshot size to prevent dynamic storage group resizing operations from interfering with cloud snapshot shipping operations. | 2022-04-28 |
20220129417 | Code Similarity Search - A method for determining code similarity includes receiving a file, identifying executable portions of the file, dividing the executable portions of the file into code blocks, generating a hash to represent each code block, and storing the file in a database as a sequence of the hashes representing the code blocks. The method further includes receiving a query to identify whether a first file stored in the database is similar to any other file stored in the database. The method additionally includes determining whether any hash associated with the first file matches any of the hashes associated with each other file stored in the database. When one of the hashes associated with the first file matches one of the hashes associated with a second file stored in the database, the method also includes responding to the query that the second file is similar to the first file. | 2022-04-28 |
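The hash-per-block scheme can be sketched as follows. Dividing by fixed line counts is a simplification (a real system would divide executable portions along code-block boundaries), and the block size is an assumed parameter.

```python
import hashlib

def block_hashes(code, block_size=4):
    """Divide text into fixed-size line blocks and hash each block,
    yielding the sequence of hashes stored for a file."""
    lines = [l.strip() for l in code.splitlines() if l.strip()]
    blocks = ['\n'.join(lines[i:i + block_size])
              for i in range(0, len(lines), block_size)]
    return [hashlib.sha256(b.encode()).hexdigest() for b in blocks]

def similar(hashes_a, hashes_b):
    """Flag two files as similar if any block hash matches."""
    return bool(set(hashes_a) & set(hashes_b))
```

A query for a file then reduces to set intersections between its hash sequence and those stored for every other file in the database.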
20220129418 | METHOD FOR DETERMINING BLOOD RELATIONSHIP OF DATA, ELECTRONIC DEVICE AND STORAGE MEDIUM - A method for determining the blood relationship of data, an electronic device and a storage medium are provided. The specific implementation solution is: acquiring data to be processed and initial meta information corresponding thereto; matching the initial meta information with respective reference meta information sets, respectively, to determine a target meta information set that matches the initial meta information; and determining the blood relationship corresponding to the data according to the target meta information set. | 2022-04-28 |
20220129419 | METHOD AND APPARATUS FOR PROVIDING METADATA SHARING SERVICE - A method for providing metadata sharing service may include obtaining a sharing event for a predetermined range path based on a current location of a first target object, determining whether a second original name of a second target object previously registered with a name that duplicates a first original name of the first target object according to the sharing event exists in a sharing table, generating and registering a first unique name different from a second unique name for the second original name of the second target object in the sharing table in response to the existence of a second original name previously registered with a name that duplicates the first original name, and sharing a predetermined range path based on a current location of a first target object of the first unique name according to the sharing event through a virtual drive. | 2022-04-28 |
20220129420 | METHOD FOR FACILITATING RECOVERY FROM CRASH OF SOLID-STATE STORAGE DEVICE, METHOD OF DATA SYNCHRONIZATION, COMPUTER SYSTEM, AND SOLID-STATE STORAGE DEVICE - A method for facilitating recovery from a crash of a solid-state storage device (SSD) is adapted to be implemented by an SSD controller of the SSD that receives a write request. The method includes: assigning a write request identifier (WID) and a request size in a spare area of each written page of the SSD; counting a number of appearances of the WID in all written page(s) to result in a WID count; determining whether the WID count is equal to the request size; and determining that the write request is completed and is eligible for recovery after a crash of the SSD when it is determined that the WID count is equal to the request size. | 2022-04-28 |
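The completeness check reduces to counting pages that carry a write request's WID and comparing against the request size recorded in the spare area. A hedged sketch, modeling each written page's spare area as a small dict; the field names are assumptions.

```python
def write_completed(pages, wid):
    """A write request is eligible for post-crash recovery when the
    number of written pages carrying its WID equals the request size
    recorded in those pages' spare areas."""
    matching = [p for p in pages if p['wid'] == wid]
    if not matching:
        return False
    request_size = matching[0]['size']   # same size stored in every page
    return len(matching) == request_size
```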
20220129421 | SYSTEM AND METHODS FOR BANDWIDTH-EFFICIENT ENCODING OF GENOMIC DATA - A system and methods for bandwidth-efficient encoding of genome and bioinformatic sequence datasets comprising a sequence analyzer configured to: analyze a received sequence dataset to determine a sequence dataset file type, scan the sequence dataset to maintain a count of unique characters contained therein, identify positions where the unique character count increases by a power of two, deconstruct the sequence dataset into a plurality of sourceblocks at the identified positions, and encode the plurality of sourceblocks using a data deconstruction engine and library management module to assign each sourceblock a reference code. | 2022-04-28 |
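The deconstruction positions can be sketched as below. This reads the abstract as splitting where the running unique-character count reaches a power of two, which is one interpretation of "increases by a power of two"; the function name is invented.

```python
def split_points(seq):
    """Positions where the running count of unique characters reaches a
    power of two (1, 2, 4, 8, ...); the dataset would be deconstructed
    into sourceblocks at these positions."""
    seen, points = set(), []
    for i, ch in enumerate(seq):
        if ch not in seen:
            seen.add(ch)
            n = len(seen)
            if n & (n - 1) == 0:   # n is a power of two
                points.append(i)
    return points
```

Each resulting sourceblock would then be handed to the data deconstruction engine and assigned a reference code by the library management module.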
20220129422 | EFFICIENT METHOD TO IDENTIFY CHANGED OBJECTS FOR LARGE SCALE FILE SYSTEM BACKUP - One example method includes identifying changed objects in a filesystem. Entry lists of a previous backup and a current backup are processed at the same time. The comparison allows objects in the filesystem to be identified as unchanged, modified, new, or deleted relative to a previous backup. | 2022-04-28 |
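Processing both entry lists "at the same time" suggests a single-pass merge over name-sorted lists. A sketch under that assumption, with entries modeled as (name, mtime) pairs:

```python
def diff_entry_lists(prev, curr):
    """Single pass over two name-sorted entry lists (name, mtime),
    classifying each object relative to the previous backup."""
    i = j = 0
    out = {'new': [], 'deleted': [], 'modified': [], 'unchanged': []}
    while i < len(prev) and j < len(curr):
        (pn, pm), (cn, cm) = prev[i], curr[j]
        if pn == cn:                       # present in both backups
            out['modified' if pm != cm else 'unchanged'].append(pn)
            i += 1
            j += 1
        elif pn < cn:                      # only in the previous backup
            out['deleted'].append(pn)
            i += 1
        else:                              # only in the current backup
            out['new'].append(cn)
            j += 1
    out['deleted'] += [n for n, _ in prev[i:]]
    out['new'] += [n for n, _ in curr[j:]]
    return out
```

Because both lists are consumed in order, the comparison is linear in the number of entries and never needs the whole filesystem tree in memory at once.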
20220129423 | METHOD FOR ANNOTATING DATA, RELATED APPARATUS AND COMPUTER PROGRAM PRODUCT - The present disclosure provides a method and apparatus for annotating data, an electronic device, a computer readable storage medium and a computer program product, and relates to the field of artificial intelligence technology such as data annotation and deep learning. A specific implementation of the method comprises: acquiring an original annotation algorithm corresponding to to-be-annotated data, and then extracting, from the original annotation algorithm, an anchor point used to mark a modifiable part of a code corresponding to a preset function in a code segment of the original annotation algorithm; acquiring an annotation requirement corresponding to the to-be-annotated data, and determining a target anchor point corresponding to the annotation requirement; modifying an implementation parameter of the target anchor point based on the annotation requirement, to generate a target annotation algorithm; and finally processing the to-be-annotated data using the target annotation algorithm, to obtain an annotation result. | 2022-04-28 |
20220129424 | MAINTAINING FOREIGN KEY REFERENCES ACROSS DOMAINS - Disclosed herein are system, method, and computer program product embodiments for creating an enterprise data model that defines entities and relationships between the entities spanning multiple environments and for deploying and maintaining artifacts across the environments using metadata specified in the enterprise data model. By embedding metadata that describes foreign key references within an enterprise data model, a unifying enterprise data model may manage artifacts across multiple domains while implementing a physical, cross-domain, data architecture. Such an enterprise data model may provide an organization with a 360-degree view of the information harnessed across the organization's technical landscape and may allow the organization to easily rollout a comprehensive data warehousing solution. | 2022-04-28 |
20220129425 | DATA MIGRATION AND INTEGRATION SYSTEM - A data migration and integration system is disclosed. In various embodiments, the system includes a memory configured to store a mapping from a source schema to a target schema; and a processor coupled to the memory and configured to migrate to a target schema an instance of source data organized according to the source schema, including by using a chase engine to perform an ordered sequence of steps comprising adding a bounded layer of new elements to a current canonical chase state associated with migrating the source data to the target schema; adding coincidences associated with one or more of the target schema data integrity constraints and a mapping from the source schema to the target schema; and merging equal elements based on the coincidences; and repeat the preceding ordered sequence of steps iteratively until an end condition is met. | 2022-04-28 |
20220129426 | VERSATILE DATA REDUCTION FOR INTERNET OF THINGS - One example method includes collaborative deduplication. A deduplication engine implemented at the cloud level collaborates or coordinates with an extension of the deduplication engine at an edge node. This allows data ingested at a node to be collaboratively deduplicated prior to transfer to the cloud and after transfer to the cloud. | 2022-04-28 |
20220129427 | A KIND OF MONITORING METHOD FOR DRINKING BEHAVIOR OF LABORATORY MICE, AND ITS SYSTEM AND DEVICE - The embodiment of the present invention discloses a monitoring method for the drinking behavior of laboratory mice, together with its system and device. The method obtains raw data of the drinking behavior of the mice and stores the raw data; it then filters the raw data to remove false jitter data, i.e., data triggered by measurement-signal transients and by jitter caused by accidental touches of a mouse's body or by its exploratory actions; it then performs index statistics and analysis on the filtered raw data to form indicator data, which is also stored; the indicator data is then displayed, classified into real-time data, historical data and report data. The present invention uses a serial filtering-and-matching mechanism to reduce the amount of invalid trigger data without accumulating data-processing performance loss, processes experimental batch data efficiently and accurately, provides reliable and objective statistical indicators to laboratory personnel, and can monitor multiple laboratory mice at the same time, thereby greatly improving the efficiency of the experiment and the accuracy of the experimental results. | 2022-04-28 |
20220129428 | DATABASE KEY COMPRESSION - Techniques are disclosed relating to compressing database keys. A computer system may receive a request to write a database record to a storage medium. The database record may include a database key and a corresponding data value. The computer system may compress the database key by replacing a portion of the database key with particular data that identifies a location of a reference database key and an amount of similarity determined between the database key and the reference database key. The computer system may write the database record to the storage medium. The database record may include the compressed database key and the corresponding data value. | 2022-04-28 |
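The compression step can be sketched as prefix replacement: the portion of a key shared with a reference key is replaced by the reference's location plus a similarity count. The marker layout (a tuple) is an assumption; the abstract only says the replaced portion identifies the reference key's location and the amount of similarity.

```python
def compress_key(key, ref_key, ref_location):
    """Replace the prefix shared with `ref_key` by (location, prefix_len),
    keeping only the differing suffix."""
    n = 0
    while n < min(len(key), len(ref_key)) and key[n] == ref_key[n]:
        n += 1
    return (ref_location, n, key[n:])

def decompress_key(compressed, resolve_ref):
    """Rebuild the original key; `resolve_ref` fetches the reference key
    stored at the recorded location."""
    loc, n, suffix = compressed
    return resolve_ref(loc)[:n] + suffix
```

Usage: for keys with long shared prefixes (common in sorted storage), the record on the storage medium carries only the short tuple plus the suffix.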
20220129429 | METHOD AND SYSTEM FOR CLONING ENTERPRISE CONTENT MANAGEMENT SYSTEMS - Cloning enterprise content management systems is described. A first remote procedure call is executed to a source database management system associated with a source enterprise content management system to retrieve a source object type from the source enterprise content management system. A second remote procedure call is executed to a target database management system associated with a target enterprise content management system to create a target object type in the target enterprise content management system, wherein the target object type is based on the source object type. Source metadata tables associated with the source object type are retrieved from the source enterprise content management system. The source metadata tables are stored as target metadata tables in the target enterprise content management system. | 2022-04-28 |
20220129430 | OPTIMIZING STORAGE AND RETRIEVAL OF COMPRESSED DATA - In some examples, a computer system may receive a plurality of chunks of data of a data object. The system may compress the plurality of chunks of data to obtain a plurality of compressed chunks, and may determine whether the plurality of compressed chunks together are less than a threshold size. Based on determining that the plurality of compressed chunks together are less than the threshold size, the system may add, to respective entries in a map data structure, respective sizes of the plurality of compressed chunks. In addition, the system may compact the map data structure by combining values in at least two of the respective entries, and may store the plurality of compressed chunks and the compacted map data structure. | 2022-04-28 |
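The map compaction can be sketched as summing per-chunk sizes in groups: locating a group of chunks still takes one prefix sum over the compacted map, at the cost of decompressing from the group start to reach a chunk inside a group. The group size of 2 is an assumed parameter.

```python
def build_chunk_map(chunk_sizes, group=2):
    """Compact a per-chunk size map by combining values of `group`
    adjacent entries into one entry."""
    return [sum(chunk_sizes[i:i + group])
            for i in range(0, len(chunk_sizes), group)]

def group_offset(compacted, group_index):
    """Byte offset at which a group of compressed chunks begins."""
    return sum(compacted[:group_index])
```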
20220129431 | SYSTEM AND METHOD FOR GENERATING MULTI-CATEGORY SEARCHABLE TERNARY TREE DATA STRUCTURE - Systems, methods, and computer-readable media are disclosed herein that generate a ternary tree data structure that includes multiple categories (e.g., terminologies) using dynamic array modifications that facilitate sharing of one or more nodes across categories. A plurality of different categories may be added and stored within a single ternary tree data structure such that each category may be separately queried using the single ternary tree data structure. | 2022-04-28 |
20220129432 | METHOD AND SYSTEM FOR CREATING RAPID SEARCHABLE ALTERED DATA IN A DATABASE - A method comprises receiving, by a server computer, a request message comprising at least a credential from a client device. The server computer can hash the credential to form an altered value. The server computer can then determine whether or not the altered value matches one of the hashed values stored in the database. If the altered value matches a matched hashed value, the server computer can determine a range of a plurality of ranges. The range can be associated with the matched hashed value. The server computer can then determine a data item associated with the range. The server computer can provide the data item to the client device. | 2022-04-28 |
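The lookup path can be sketched as: hash the credential, test membership against the stored hashed values, then place the altered value into one of the sorted ranges to find its data item. Representing ranges by their sorted start values is an assumption about the range layout.

```python
import bisect
import hashlib

def hash_credential(cred):
    """Form the altered value from a credential."""
    return hashlib.sha256(cred.encode()).hexdigest()

def lookup(altered, hashed_values, range_starts, range_items):
    """If the altered value is among the stored hashed values, map it
    into one of the sorted ranges and return that range's data item."""
    if altered not in hashed_values:
        return None
    idx = bisect.bisect_right(range_starts, altered) - 1
    return range_items[idx]
```

The server would then provide the returned data item to the client device in the response message.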
20220129433 | Building of Tries Over Sorted Keys - Techniques are disclosed relating to building an in-memory multi-level data structure useable to determine presence or absence of key ranges in files consisting of database records. In various embodiments, a computer system operates a database, including maintaining a set of records having a set of corresponding keys that are accessible in key-sorted order and generates a multi-level data structure that facilitates key range lookups against the set of records. The generating may include accessing ones of the set of keys in key-sorted order and determining, for a particular accessed key that includes a set of characters, an intermediate level within the multi-level data structure and a subset of the characters of the particular accessed key for insertion. The computer system may insert, starting at the intermediate level, information that identifies the subset of characters, with the inserting being performed without traversing any levels before the intermediate level. | 2022-04-28 |
20220129434 | FAST CIRCULAR DATABASE - A data management system and associated data management method is disclosed herein. An exemplary method for managing data includes receiving data records timestamped with times spanned by a defined time interval; generating a data cube that includes data planes, wherein each data plane contains a set of data records timestamped with times spanned by the defined time interval; generating an index hypercube for the data cube, wherein dimensions of the index hypercube represent hash values of index keys defined for accessing the data cube; and generating an indexed data cube for storing in a database, wherein the indexed data cube includes the data cube and the index hypercube. The index hypercube includes index hypercube elements, where each index hypercube element represents a unique combination of hashed index key values that map to a data plane in the data cube. | 2022-04-28 |
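The index hypercube can be sketched as a mapping from hashed index-key combinations to data planes: each coordinate of an element is a bucketed hash of one index key. The bucket count and class names are assumptions for illustration.

```python
def hkey(values, buckets=4):
    """Hash each index-key value into a small bucket; the tuple of
    bucket ids is an element coordinate in the index hypercube."""
    return tuple(hash(v) % buckets for v in values)

class IndexedDataCube:
    """Data planes keyed by hypercube elements (hashed key combos)."""

    def __init__(self, index_keys, buckets=4):
        self.index_keys = index_keys
        self.buckets = buckets
        self.planes = {}   # hypercube element -> data plane (record list)

    def insert(self, record):
        coord = hkey([record[k] for k in self.index_keys], self.buckets)
        self.planes.setdefault(coord, []).append(record)

    def query(self, **keys):
        coord = hkey([keys[k] for k in self.index_keys], self.buckets)
        # re-check exact values to discard hash-bucket collisions
        return [r for r in self.planes.get(coord, [])
                if all(r[k] == v for k, v in keys.items())]
```

A query touches only the one data plane its hashed key combination maps to, which is the access-speed benefit the abstract claims for the hypercube index.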
20220129435 | QUERYING FOR CUSTOM DATA OBJECTS - A relational database system may receive from a client a query that is supported by the relational database system, the relational database system being configured to store a plurality of data objects such that each data object is associated with a respective data table of a plurality of data tables. The system may determine that the query is indicative of a data object type that is associated with data stored in a data system separate from the relational database system. The system may identify a schema of the data object type using a schema record maintained by the relational database system, transmit to the separate data system a request for data associated with the query, receive requested data, and return a query response including the requested data. | 2022-04-28 |
20220129436 | SYMBOLIC VALIDATION OF NEUROMORPHIC HARDWARE - Systems are provided that can produce symbolic and numeric representations of the neural network outputs, such that these outputs can be used to validate correctness of the implementation of the neural network. In various embodiments, a description of an artificial neural network containing no data-dependent branching is read. Based on the description of the artificial neural network, a symbolic representation is constructed of an output of the artificial neural network, the symbolic representation comprising at least one variable. The symbolic representation is compared to a ground truth symbolic representation, thereby validating the neural network system. | 2022-04-28 |
20220129437 | CONFIGURATION METADATA RECOVERY - Technology for configuration metadata recovery that detects a reliability failure regarding configuration metadata stored in non-volatile data storage of a data storage system. The configuration metadata indicates how a metadata database is stored in the non-volatile data storage of the data storage system. In response to detection of the reliability failure regarding the configuration metadata, the technology identifies valid generations of the configuration metadata that are currently stored in the non-volatile data storage of the data storage system, and determines a user-selected one of the valid generations of the configuration metadata. The metadata database is accessed based on the user-selected one of the valid generations of the configuration metadata. | 2022-04-28 |
20220129438 | METHODS AND APPARATUS FOR A DISTRIBUTED DATABASE THAT ENABLES DELETION OF EVENTS - In some embodiments, an apparatus includes a memory associated with an instance of a distributed database at a compute device configured to be included within a first group of compute devices. The apparatus is configured to determine an order for each event from the set of events based on different configurations of an event consensus protocol. The different configurations are logically related to different configurations of compute devices that implement the distributed database. The apparatus is configured to determine a current state of the instance of the distributed database based on the order determined for each event from the set of events and generate a signed state associated with the instance of the distributed database based on a hash value associated with the current state. The apparatus sends a signal to post into the instance of the distributed database an event that includes a transaction indicative of the signed state. | 2022-04-28 |
20220129439 | HIGH THROUGHPUT BLOCKCHAIN CONSENSUS SYSTEMS AND METHODS WITH LOW FINALIZATION TIME - The present invention is directed to blockchain systems and consensus protocols that adopt a pipelining technique. The systems and protocols involve a committee of consensus nodes that include proposer nodes and voter nodes. Each proposer node can send two or more unnotarized proposals to the voter nodes, and the voter nodes can vote on an unnotarized proposal when they have the same freshest notarized chain or block. A sequence number is provided to facilitate the operation of the systems and protocols. The sequence number can be used to determine the freshest notarized chain or block and the finalized chain, and to switch the proposer node. The systems and protocols also provide other features such as a chain syncer, a committee election scheme, and committee reconfiguration. The systems and protocols further provide a simple finalization process and thus have a low finalization time. | 2022-04-28 |
20220129440 | MUSS - Map User Submission States - The present disclosure provides systems and methods for an interactive user interface that allows for one or more submissions of update information related to a point of interest to be reviewed. The system may receive the submission from a computing device. The system may analyze the submission to determine the type of content, such as the name, address, website, photo, etc. related to the point of interest. The type of content may be classified using a machine learning model. The model may compare the content of the submission to a model of the type of content to determine whether the submission is approved for publishing or whether additional information is needed. The system may transmit one or more notifications to the computing device. The notifications may include updates on the workflow status of the submission. | 2022-04-28 |
20220129441 | DOUBLE SIGNING PROTECTION AND PREVENTION IN A BLOCKCHAIN COMPUTER SYSTEM - In an embodiment, a method comprises storing, in one or more digital data repositories, a body of code configured to execute one or more operations of a Proof of Stake (PoS) consensus algorithm on a blockchain node of a blockchain network, the one or more operations including a signing of a block for inclusion in the blockchain network; before executing the body of code on the blockchain node of a blockchain network, generating and transmitting a request to execute the body of code to a server computer, the request comprising a global identification value associated with the blockchain node of the blockchain network; in response to transmitting the request, receiving a response from the server computer, the response indicating whether the global identification value is locked; in response to determining, based on the response, that the global identification value is not locked, executing the body of code on the node of the blockchain network, wherein executing the body of code on the node includes performing the signing of the block for inclusion in the blockchain network; and transmitting the signed block to one or more nodes of the blockchain network. | 2022-04-28 |
20220129442 | DATABASE WRITEBACK USING AN INTERMEDIARY STATEMENT GENERATOR - Database writeback using an intermediary statement generator including receiving, by a statement generator, a table update request to update a table within a database on a cloud-based data warehouse, wherein the table update request comprises an update value and a selection of a row and a column from the table; verifying, by the statement generator, that the selection is updatable; generating, by the statement generator based on the selection and in response to the verification, an update database statement comprising a table identifier, a column identifier, a row identifier, and the update value; and sending, by the statement generator, the update database statement to the database on the cloud-based data warehouse, wherein the table of the database is updated in response to receiving the update database statement. | 2022-04-28 |
20220129443 | DOCUMENT MANAGEMENT SYSTEM AND RELATED METHOD - Systems, methods, devices and computer readable media for accessing a document are described herein. A virtual file system comprising one or more virtual files is provided at a computing device. A document authoring application obtains a blockchain reference from a virtual file. The virtual file corresponds to a document stored in a blockchain by a document management system. The blockchain reference is indicative of the blockchain having stored therein the document. The document authoring application transmits a document access request comprising the blockchain reference to the document management system. The document authoring application receives a temporary file corresponding to a latest version of the document from the document management system. The document authoring application outputs at least in part the contents of the document from the temporary file. | 2022-04-28 |
20220129444 | DATA ACCESS SYSTEM - A data access system includes: a data storage medium, a record medium, a first controller, and a second controller. The record medium includes a first record area and a second record area. The first controller includes a first metadata area. The second controller includes a second metadata area. The first controller is connected to the data storage medium and the record medium and corresponds to the first record area. The second controller is connected to the data storage medium and the record medium and corresponds to the second record area. The first controller receives first data, and writes the first data into the data storage medium in a log manner to update the first metadata area, and correspondingly generates a first record in the first record area. The second controller updates the second metadata area according to the first record in the first record area. | 2022-04-28 |
20220129445 | KEYSPACE REFERENCES - Techniques are disclosed relating to tracking record writes for keyspaces across a set of database nodes. A first database node of a database system may receive a request to perform a database transaction that includes writing a particular record for a key included in a keyspace. The first database node may access a keyspace reference catalog that stores a plurality of indications of when keyspaces were written to by database nodes of the database system. In response to determining that a second database node has written a record for the keyspace within a particular time frame, the first database node may send a request to the second database node for information indicating whether the second database node has written a record for the key. Based on a response that is received from the second database node, the first database node may determine whether to write the particular record. | 2022-04-28 |
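The keyspace reference catalog above can be sketched as a per-keyspace map of recent writers. The class, the time window, and node naming are assumptions for illustration; the patent's catalog is internal to the database system.

```python
import time

# Hypothetical sketch of a keyspace reference catalog: before writing a
# key, a node checks whether another node wrote to the same keyspace
# within a recent window and, if so, must ask that node about the key.

class KeyspaceCatalog:
    def __init__(self, window_seconds=60):
        self.window = window_seconds
        self.writes = {}  # keyspace -> {node_id: last_write_time}

    def record_write(self, keyspace, node_id, now=None):
        now = time.time() if now is None else now
        self.writes.setdefault(keyspace, {})[node_id] = now

    def recent_writers(self, keyspace, node_id, now=None):
        # Other nodes that wrote to this keyspace inside the time frame.
        now = time.time() if now is None else now
        return [n for n, t in self.writes.get(keyspace, {}).items()
                if n != node_id and now - t <= self.window]

cat = KeyspaceCatalog(window_seconds=60)
cat.record_write("orders", node_id="node-2", now=100.0)
# node-1 must coordinate with node-2 before writing into "orders".
conflicts = cat.recent_writers("orders", node_id="node-1", now=130.0)
```

Outside the window the catalog returns no writers, so the common case (no recent cross-node writes to the keyspace) proceeds without any node-to-node request.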
20220129446 | Distributed Ledger Management Method, Distributed Ledger System, And Node - A distributed ledger management method implemented by a predetermined management node configured to perform processing, includes: building a distributed ledger node for endorsement in restoration of a distributed ledger system or addition of a distributed ledger node. | 2022-04-28 |
20220129447 | METHOD, SYSTEM, DEVICE AND MEDIUM FOR QUERYING PRODUCT HISTORY - A method for querying a product history is disclosed. The method includes receiving a product query request including at least one product query parameter for a target product, the request being directed to a product graph database that stores a relational map constructed from the manufacturing process of the target product, the map describing the entities involved in the manufacturing process, including product entities and manufacturing entities, and the entity relations between them. The product graph database is queried according to the product query parameter to obtain product history data of the target product: a product entity corresponding to the target product is located in the relational map as the target product entity according to the parameter, the manufacturing entities associated with the target product entity are found by following the entity relations, and the product history data is assembled from those associated manufacturing entities. A notification message reporting the obtained product history data is then sent. | 2022-04-28 |
20220129448 | INTELLIGENT DIALOGUE METHOD AND APPARATUS, AND STORAGE MEDIUM - An intelligent dialogue method and apparatus and medium are provided. The method includes: obtaining a pre-matching result by pre-matching a query to be processed with a table content of a target table; extracting a character segment having a highest matching degree with the attribute value from the query based on the attribute value having the highest matching degree with the query; determining a target attribute value semantically associated with the character segment based on the attribute value of each column attribute; generating a structured query language (SQL) query statement corresponding to the query based on the query, the attribute name of each column attribute, the highest matching level of the attribute name, the highest matching level of the attribute value and the target attribute value; and generating a reply statement based on a result obtained by searching a database based on the SQL query statement. | 2022-04-28 |
20220129449 | Database Facet Search - A system and method are presented that utilize facet modifications to alter user input values to perform a facet search on a found set of data records. The facet modifications are associated with item records, such as by using type attributes. Facet modifications can be associated with query information that is utilized to create a query on a user interface. Input received from the query is then modified according to the facet modifier in order to create facet search parameters. The facet search parameters are used to perform a facet search to narrow the found set of item records. In some embodiments, facet modifications are stored in facet modifier records. Multiple facet modifications can be stored in a single facet modifier record. A single query input can be manipulated by multiple facet modifications to create separate facet search parameters. | 2022-04-28 |
20220129450 | SYSTEM AND METHOD FOR TRANSFERABLE NATURAL LANGUAGE INTERFACE - A computer system and method for answering a natural language question is provided. The system comprises at least one processor and a memory storing instructions which when executed by the processor configure the processor to perform the method. The method comprises receiving a natural language question, generating a SQL query based on the natural language question, generating an explanation regarding a solution to the natural language question as answered by the SQL query, and presenting the solution and the explanation. | 2022-04-28 |
20220129451 | EFFICIENT COMPILATION OF GRAPH QUERIES ON TOP OF SQL BASED RELATIONAL ENGINE - Techniques support graph pattern matching queries inside a relational database management system (RDBMS) that supports SQL execution. The techniques compile a graph pattern matching query into a SQL query that can then be executed by the relational engine. As a result, techniques enable execution of graph pattern matching queries on top of the relational engine by avoiding any change in the existing SQL engine. | 2022-04-28 |
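The idea of lowering a graph pattern to relational operators can be illustrated with a toy compiler. This is a sketch under assumed schemas (`vertices(id, ...)` and `edges(src, dst)`), not the compilation scheme claimed in the application.

```python
# Illustrative sketch: a fixed-length path pattern such as (a)-[e]->(b)
# can be compiled into a SQL self-join over assumed vertex/edge tables.

def compile_path(hops):
    select = ["v0.id AS v0"]
    from_ = ["vertices v0"]
    for i in range(hops):
        # Each hop adds one edge alias and one destination vertex alias.
        from_.append(f"edges e{i}")
        from_.append(f"vertices v{i + 1}")
        select.append(f"v{i + 1}.id AS v{i + 1}")
    where = []
    for i in range(hops):
        # Join predicates tie edge endpoints to adjacent vertices.
        where.append(f"e{i}.src = v{i}.id")
        where.append(f"e{i}.dst = v{i + 1}.id")
    return ("SELECT " + ", ".join(select) +
            " FROM " + ", ".join(from_) +
            " WHERE " + " AND ".join(where))

sql = compile_path(1)  # one-hop pattern (v0)-[e0]->(v1)
```

Because the output is plain SQL, it can be handed to the existing relational engine unchanged, which is the point the abstract makes: no modification of the SQL engine is needed.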
20220129452 | SYSTEM AND METHOD FOR COMPARING AND UTILIZING ACTIVITY INFORMATION AND CONFIGURATION INFORMATION FROM MULTIPLE DEVICE MANAGEMENT SYSTEMS - A method of aggregating and using medical device data from a plurality of remote institutions. The system and method electronically receives at a central computer system a plurality of established medical device data, each of the plurality of established medical device data being received from a respective medication delivery system, each of the respective medication delivery systems having a respective plurality of medical devices within the respective remote institution, such as medication delivery pumps, associated therewith and utilized therein. The system and method electronically combines and stores the plurality of established medical device data from each of the plurality of remote institutions within a memory, and electronically provides a remote client computer access to at least one of a central reporting application adapted for providing summary information to the remote client computer about the medical device data, and/or other applications. | 2022-04-28 |
20220129453 | DATABASE SYSTEM AND QUERY EXECUTION METHOD - A database system includes a plurality of DBMSs included in a plurality of nodes. Each DBMS is a first or a second DBMS. The first DBMS transfers a search query and does not execute data retrieval, and the second DBMS executes data retrieval. The plurality of nodes configure one or more node groups. Each node group includes a first node and one or more second nodes. In each node group, the first node is a logical computer that provides a first storage area and executes a first DBMS and the second node is a logical computer that provides a second storage area and executes a second DBMS, each node in the node group stores the same database therein, and data retrieval from the database in the node group is executed by one or more second DBMSs in the node group. | 2022-04-28 |
20220129454 | DYNAMIC PRESENTATION OF SEARCHABLE CONTEXTUAL ACTIONS AND DATA - Disclosed methods and systems allow a central server to monitor electronic units of work accessible to a group of computers and generate a nodal data structure representing the units of work. The server then uses various protocols, such as hashing algorithms and/or executing artificial intelligence and machine learning models, to identify similar and/or related units of work. The server then merges/links the nodes corresponding to the similar/related units of work. The server also monitors all user activities. When a user or a software system/service accesses electronic content on their electronic device, the server identifies a node corresponding to the accessed electronic content and associated unit(s) of work and presents searchable data and actions related to the identified node and any related/linked nodes. | 2022-04-28 |
20220129455 | TECHNIQUES FOR IN-MEMORY DATA SEARCHING - One embodiment of the invention is directed to a method for performing efficient data searches in a distributed computing system. The method may comprise receiving a search request including a key. The key may be provided to a block-based table manager via a programming interface external to a virtual machine executing on a computer system. The programming interface may provide a translation between a first programming framework of the virtual machine and a second programming framework of the block-based table manager. Providing the key may cause the block-based table manager to conduct a search for a value corresponding to the key. The value may be provided in response to the search request. Utilizing such block-based tables may enable a data search to be performed using on-board memory of a computing node operating within a distributed computing system. | 2022-04-28 |
20220129456 | INTEGRATION OF TIMESERIES DATA AND TIME DEPENDENT SEMANTIC DATA - Techniques for processing combinations of timeseries data and time-dependent semantic data are provided. The timeseries data can be data from one or more Internet of things (IOT) devices having one or more hardware sensors. The semantic data can be master data. Disclosed techniques allow for time dependent semantic data to be used with the timeseries data, so that semantic data appropriate for a time period associated with the timeseries data can be used. Changes to semantic data are tracked and recorded, where the changes can represent a new value to be used going forward in time or an update to a value for a prior time period. Timeseries data and semantic data can be stored with identifiers that facilitate their combination, such as date ranges, identifiers of analog world objects, or identifiers for discrete sets of semantic data values. | 2022-04-28 |
20220129457 | AUTOMATIC DATA-SCREENING FRAMEWORK AND PREPROCESSING PIPELINE TO SUPPORT ML-BASED PROGNOSTIC SURVEILLANCE - The disclosed embodiments relate to a system that automatically selects a prognostic-surveillance technique to analyze a set of time-series signals. During operation, the system receives the set of time-series signals obtained from sensors in a monitored system. Next, the system determines whether the set of time-series signals is univariate or multivariate. When the set of time-series signals is multivariate, the system determines if there exist cross-correlations among signals in the set of time-series signals. If so, the system performs subsequent prognostic-surveillance operations by analyzing the cross-correlations. Otherwise, if the set of time-series signals is univariate, the system performs subsequent prognostic-surveillance operations by analyzing serial correlations for the univariate time-series signal. | 2022-04-28 |
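The screening decision above, i.e. choose cross-correlation analysis for correlated multivariate signals and serial-correlation analysis otherwise, can be sketched in plain Python. The correlation threshold and the returned labels are assumptions for illustration.

```python
# Hypothetical sketch of the data-screening decision for selecting a
# prognostic-surveillance technique. Threshold and labels are assumed.

def pearson(x, y):
    # Pearson correlation coefficient between two equal-length series.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def select_technique(signals, corr_threshold=0.8):
    # signals: list of equal-length time series from the monitored system.
    if len(signals) == 1:
        return "serial-correlation (univariate)"
    # Multivariate: look for any strong pairwise cross-correlation.
    for i in range(len(signals)):
        for j in range(i + 1, len(signals)):
            if abs(pearson(signals[i], signals[j])) >= corr_threshold:
                return "cross-correlation (multivariate)"
    return "serial-correlation per signal"

a = [1, 2, 3, 4, 5]
b = [2, 4, 6, 8, 10]  # strongly correlated with a
choice = select_technique([a, b])
```

A real pipeline would use robust correlation estimates over windows of sensor data, but the branching logic, univariate versus multivariate, correlated versus uncorrelated, is the part the abstract describes.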
20220129458 | METHOD FOR GENERATING IDENTIFICATION ID, AND APPARATUS IMPLEMENTING THE SAME METHOD - A method performed by a computing device for generating an identification identifier (ID) according to an embodiment of the present disclosure includes obtaining an instance ID for identifying each of a plurality of service instances, and generating an identification ID for identifying a data item sequentially generated by the respective service instance. The identification ID may include the instance ID, a sequence number, and generation time information. | 2022-04-28 |
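An ID built from the three components named in the abstract, instance ID, generation time, and sequence number, can be sketched directly. The field order, separator, and zero-padded width are assumptions; the disclosure only states that the ID includes these parts.

```python
import itertools
import time

# Hypothetical sketch of an identification-ID generator combining an
# instance ID, generation time, and a per-instance sequence number.

class IdGenerator:
    def __init__(self, instance_id):
        self.instance_id = instance_id
        self._seq = itertools.count()  # monotonically increasing counter

    def next_id(self, now=None):
        # Generation time in milliseconds; injectable for testing.
        ts = int(time.time() * 1000) if now is None else now
        return f"{self.instance_id}-{ts}-{next(self._seq):06d}"

gen = IdGenerator("svc-01")
first = gen.next_id(now=1650000000000)
second = gen.next_id(now=1650000000000)
```

Because each service instance holds its own sequence counter, two instances can generate IDs concurrently without coordination, and IDs generated in the same millisecond by one instance still differ in the sequence field.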
20220129459 | BUILDING MANAGEMENT SYSTEM WITH DECLARATIVE VIEWS OF TIMESERIES DATA - A building management system (BMS) includes building equipment configured to provide raw data samples of one or more data points in the BMS. The BMS further includes a data collector configured to collect raw data samples from the building equipment and generate one or more raw data timeseries comprising a plurality of the raw data samples. The BMS also includes a timeseries processing engine. The timeseries processing engine is configured to identify one or more timeseries processing workflows that apply to the raw data timeseries, each of the workflows comprising a predefined sequence of timeseries processing operations. The timeseries processing engine is further configured to process the raw data timeseries using the identified timeseries processing workflows to generate one or more derived data timeseries. The BMS further includes a timeseries storage interface configured to store the raw data timeseries and the derived data timeseries in a timeseries database. | 2022-04-28 |
20220129460 | AUTO-SCALING A QUERY ENGINE FOR ENTERPRISE-LEVEL BIG DATA WORKLOADS - Aspects of the present invention disclose a method, computer program product, and system for auto-scaling a query engine. The method includes one or more processors monitoring query traffic at the query engine. The method further includes one or more processors classifying queries by a plurality of service classes based on a level of complexity of a query. The method further includes one or more processors comparing query traffic for each service class with a concurrency threshold of a maximum number of queries of the service class allowed to be concurrently processed. The method further includes one or more processors instructing auto-scaling of a cluster of worker nodes to change a number of worker nodes available in the cluster based on the comparison, over a defined period of time, of the query traffic relative to a defined upscaling threshold and a defined downscaling threshold. | 2022-04-28 |
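The per-service-class scaling decision can be reduced to a small function. The utilization thresholds and the returned action labels are assumptions for illustration; the claim only states that scaling is driven by comparing traffic against upscaling and downscaling thresholds.

```python
# Hypothetical sketch of the auto-scaling decision for one service class.
# Threshold values and labels are assumed, not taken from the patent.

def scaling_action(observed_concurrency, concurrency_limit,
                   upscale_at=0.9, downscale_at=0.3):
    # Compare observed query traffic with the class's concurrency limit
    # (the maximum number of queries allowed to run at once).
    utilization = observed_concurrency / concurrency_limit
    if utilization >= upscale_at:
        return "scale-up"     # add worker nodes to the cluster
    if utilization <= downscale_at:
        return "scale-down"   # remove worker nodes from the cluster
    return "hold"

action = scaling_action(observed_concurrency=19, concurrency_limit=20)
```

In the described system this comparison would be evaluated over a defined period of time per service class (e.g. separately for simple and complex queries), so a burst of cheap queries does not trigger the same scaling response as sustained heavy ones.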
20220129461 | EFFICIENT COMPILATION OF GRAPH QUERIES INCLUDING COMPLEX EXPRESSIONS ON TOP OF SQL BASED RELATIONAL ENGINE - Techniques support graph pattern matching queries inside a relational database management system (RDBMS) that supports SQL execution. The techniques compile a graph pattern matching query into a SQL query that can then be executed by the relational engine. As a result, techniques enable execution of graph pattern matching queries on top of the relational engine by avoiding any change in the existing SQL engine. | 2022-04-28 |
20220129462 | SYSTEM AND METHOD FOR EFFICIENT PROCESSING AND MANAGING OF REPORTS DATA AND METRICS - Systems and methods for data reporting using a data aggregator and a data retrieval tool such as a file intelligence service. The data aggregator stores two sets of data reporting tables and designates a first one of the sets of tables as an active set and the second one of the sets as a non-active set. The active set of tables stores data corresponding to the most recently successfully completed search. The non-active set stores data retrieved by the data retrieval tool from disparate data sources according to the current search. The data in the active set of tables is immediately available for use in requested reports. When the data aggregator completes the current search, it designates the non-active set of tables as the active set so that the data therein becomes available for use in requested reports. | 2022-04-28 |
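The active/non-active table swap above is a double-buffering pattern and can be sketched compactly. The class and method names are hypothetical; the point is that readers always see a complete, previously successful search while the other set is being refilled.

```python
# Hypothetical sketch of double-buffered reporting tables: reports read
# the active set; a refresh fills the non-active set and swaps the
# designation only after the current search completes successfully.

class ReportAggregator:
    def __init__(self):
        self.tables = {"A": [], "B": []}
        self.active = "A"

    @property
    def inactive(self):
        return "B" if self.active == "A" else "A"

    def read(self):
        # Immediately available data from the last successful search.
        return self.tables[self.active]

    def refresh(self, rows):
        # Load freshly retrieved data into the non-active set...
        self.tables[self.inactive] = rows
        # ...then swap designations so the new data becomes active.
        self.active = self.inactive

agg = ReportAggregator()
agg.refresh([{"metric": "files", "count": 10}])
current = agg.read()
```

If a search fails midway, the swap never happens, so requested reports continue to be served from the intact active set.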
20220129463 | QUERY EXECUTION VIA COMPUTING DEVICES WITH PARALLELIZED RESOURCES - A computing device includes a computing device controller hub and a plurality of parallelized nodes coupled to the computing device controller hub. Each node of the plurality of parallelized nodes includes a central processing module, a main memory, and at least one disk memory. A plurality of such computing devices is operable to collectively execute query requests against at least one database table stored by the computing devices, based on each node of each computing device performing corresponding operations independently from the other nodes of its plurality of parallelized nodes. | 2022-04-28 |
20220129464 | RE-ORDERED PROCESSING OF READ REQUESTS - A method includes determining, in accordance with a first ordering, a plurality of read requests for a memory device. The plurality of read requests are added to a memory device queue for the memory device in accordance with the first ordering. The plurality of read requests in the memory device queue are processed, in accordance with a second ordering that is different from the first ordering, to determine read data for each of the plurality of read requests. The read data for each of the plurality of read requests is added to one of a set of ordered positions, based on the first ordering, of a ring buffer as each of the plurality of read requests is processed. The read data of a subset of the plurality of read requests is submitted based on adding the read data to a first ordered position of the set of ordered positions of the ring buffer. | 2022-04-28 |
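The ring-buffer mechanism above can be demonstrated with a short simulation: reads complete in an arbitrary second ordering, but data is submitted only as a contiguous prefix of the first ordering. The data-payload format and function name are assumptions for illustration.

```python
# Hypothetical sketch of in-order submission over out-of-order reads,
# using a buffer with one ordered slot per request in the batch.

def complete_in_order(requests, completion_order):
    # requests: request IDs in the first (submission) ordering.
    # completion_order: the second ordering in which reads finish.
    slot = {rid: i for i, rid in enumerate(requests)}  # ring positions
    ring = [None] * len(requests)
    submitted = []
    head = 0  # first ordered position not yet submitted
    for rid in completion_order:
        ring[slot[rid]] = f"data-{rid}"  # place read data in its slot
        # Submit the contiguous filled prefix starting at the head slot.
        while head < len(ring) and ring[head] is not None:
            submitted.append(ring[head])
            head += 1
    return submitted

out = complete_in_order([1, 2, 3], completion_order=[2, 3, 1])
```

Request 2 and 3 finish first but sit in the buffer until request 1 fills the head slot, at which point all three are submitted in the original order.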
20220129465 | EFFICIENT COMPILATION OF GRAPH QUERIES INVOLVING LONG GRAPH QUERY PATTERNS ON TOP OF SQL BASED RELATIONAL ENGINE - Techniques support graph pattern matching queries inside a relational database management system (RDBMS) that supports SQL execution. The techniques compile a graph pattern matching query into a SQL query that can then be executed by the relational engine. As a result, techniques enable execution of graph pattern matching queries on top of the relational engine by avoiding any change in the existing SQL engine. | 2022-04-28 |
20220129466 | COMPRESSING DATA SETS FOR STORAGE IN A DATABASE SYSTEM - A method includes determining a data set for storage that includes a plurality of uncompressed data slabs in accordance with a serialized data slab ordering. A storage data set that includes a plurality of compressed data slabs is created based on the data set in accordance with the serialized data slab ordering. Each compressed data slab of the plurality of compressed data slabs is generated from at least one corresponding uncompressed data slab of the plurality of uncompressed data slabs that includes a plurality of values based on generating compressed data for each compressed data slab based on the at least one corresponding uncompressed data slab, and generating compression information for each compressed data slab. The storage data set is stored via a plurality of computing devices. | 2022-04-28 |
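The per-slab compression described above, compressed data plus compression information per slab, with the serialized ordering preserved, can be sketched with the standard `zlib` codec. The codec choice, record layout, and function names are assumptions for illustration.

```python
import zlib

# Hypothetical sketch: compress each data slab independently, keeping
# the serialized slab ordering and recording per-slab compression info.

def compress_data_set(slabs):
    storage = []
    for slab in slabs:  # serialized data slab ordering is preserved
        raw = slab.encode("utf-8")
        packed = zlib.compress(raw)
        storage.append({
            "data": packed,
            # Compression information stored alongside each slab lets a
            # reader decompress one slab without touching the others.
            "info": {"codec": "zlib", "raw_len": len(raw)},
        })
    return storage

def read_slab(storage, i):
    # Decompress a single slab by index, using its compression info.
    entry = storage[i]
    assert entry["info"]["codec"] == "zlib"
    return zlib.decompress(entry["data"]).decode("utf-8")

stored = compress_data_set(["alpha,beta", "gamma,delta"])
roundtrip = read_slab(stored, 1)
```

Compressing slabs independently is what allows the compressed data set to be distributed across a plurality of computing devices, since each device needs only the slabs it stores plus their compression information.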