52nd week of 2020 patent application highlights part 47 |
Patent application number | Title | Published |
20200401498 | METHODS, CIRCUITS, APPARATUS, SYSTEMS AND ASSOCIATED SOFTWARE MODULES FOR EVALUATING CODE BEHAVIOR - Disclosed are methods, circuits, apparatus, systems and associated software modules for dynamically evaluating code behavior in runtime. There is provided a code testing platform and/or framework which may include: (1) a code execution environment instancing module (CEEIM), (2) code execution resources, (3) executed code isolation logic, and (4) code call response logic. The CEEIM may instance, on a computing platform, a code execution environment (CEE) which is at least partially isolated from external resources functionally associated with the computing platform. The CEE may include code execution resources adapted to execute code whose behavior is to be evaluated, wherein a resource call generated from code execution may be analyzed by the code isolation logic and may under certain conditions be routed to the code call response logic. | 2020-12-24 |
20200401499 | MANAGING AND MAINTAINING MULTIPLE DEBUG CONTEXTS IN A DEBUG EXECUTION MODE FOR REAL-TIME PROCESSORS - A real-time debugger implementation maintains and manages multiple debug contexts allowing developers to interact with real-time applications without “breaking” the system in which the debug application is executing. The debugger allows multiple debug contexts to exist and allows break points in real-time and non-real-time code portions of one or more applications executing on a debug enabled core of a processor. A debug monitor function may be implemented as a hardware logic module on the same integrated circuit as the processor. Higher priority interrupt service requests may be serviced while otherwise maintaining a context for the debug session (e.g., stopped at a developer defined breakpoint). Accordingly, the application developer executing the debugger may not have to be concerned with processing occurring on the processor that may be unrelated to the current debug session. | 2020-12-24 |
20200401500 | MANAGING AND MAINTAINING MULTIPLE DEBUG CONTEXTS IN A DEBUG EXECUTION MODE FOR REAL-TIME PROCESSORS - A real-time debugger implementation maintains and manages multiple debug contexts allowing developers to interact with real-time applications without “breaking” the system in which the debug application is executing. The debugger allows multiple debug contexts to exist and allows break points in real-time and non-real-time code portions of one or more applications executing on a debug enabled core of a processor. A debug monitor function may be implemented as a hardware logic module on the same integrated circuit as the processor. Higher priority interrupt service requests may be serviced while otherwise maintaining a context for the debug session (e.g., stopped at a developer defined breakpoint). Accordingly, the application developer executing the debugger may not have to be concerned with processing occurring on the processor that may be unrelated to the current debug session. | 2020-12-24 |
20200401501 | MANAGING AND MAINTAINING MULTIPLE DEBUG CONTEXTS IN A DEBUG EXECUTION MODE FOR REAL-TIME PROCESSORS - A real-time debugger implementation maintains and manages multiple debug contexts allowing developers to interact with real-time applications without “breaking” the system in which the debug application is executing. The debugger allows multiple debug contexts to exist and allows break points in real-time and non-real-time code portions of one or more applications executing on a debug enabled core of a processor. A debug monitor function may be implemented as a hardware logic module on the same integrated circuit as the processor. Higher priority interrupt service requests may be serviced while otherwise maintaining a context for the debug session (e.g., stopped at a developer defined breakpoint). Accordingly, the application developer executing the debugger may not have to be concerned with processing occurring on the processor that may be unrelated to the current debug session. | 2020-12-24 |
20200401502 | Detecting Hard-Coded Strings In Source Code - Methods and systems for detecting hard-coded strings in source code are described herein. According to an aspect of an example method, a first list of strings may be generated via a processor. The first list of strings may include strings that are embedded in source code of an application. A second list of strings may be generated. The second list of strings may include strings that are rendered via a user interface of the application. Each string of the first list of strings may be compared against the strings of the second list of strings. Based on the comparison, a filtered list of strings may be generated by removing, from the first list of strings, at least one string that does not have a match in the second list of strings. By this method, the software development process, and especially updating, maintaining, and localizing code, may become more efficient and cost-effective. | 2020-12-24 |
20200401503 | System and Method for Testing Artificial Intelligence Systems - A method and a system are provided for testing AI applications/systems on a System for Testing Artificial Intelligence Systems (STAIS) connectable to under-test AI systems (UTAIS) via AI test connectors/interfaces through an AI test platform. The method includes test modeling, data preparation, testing script generation, testing automation, and quality assurance. The system automatically identifies, analyzes, and displays all quality issues of the UTAIS. | 2020-12-24 |
20200401504 | METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR CONFIGURING A TEST SYSTEM USING SOURCE CODE OF A DEVICE BEING TESTED - Methods, systems, and computer readable media for configuring a test system using source code of a device being tested are disclosed. According to one method, the method occurs at a network equipment test device. The method includes receiving one or more device source files associated with a device under test (DUT); analyzing the one or more device source files to determine configuration source code for configuring at least one test system resource in the network equipment test device, wherein analyzing the one or more device source files includes identifying functionality of the DUT based on device source code portions and determining, using the device source code portions, the configuration source code for testing the functionality of the DUT; configuring, using the configuration source code, the at least one test system resource; and testing the DUT using the at least one test system resource. | 2020-12-24 |
20200401505 | SYSTEM AND METHOD FOR AUTOMATED TESTING OF APPLICATION PROGRAM INTERFACE (API) - The present invention relates to a method for automated testing of an Application Program Interface (API). A test requirement data is received to test an API from a first database. Further, the test requirement data is translated into a first set of vectors. Furthermore, one or more test scripts from a plurality of test scripts stored in a second database is selected based on output of the trained artificial neural network. The output indicative of a probability of effectiveness associated with the one or more test scripts is generated using the first set of vectors as inputs to a trained artificial neural network. The one or more test scripts are executed to test and validate the API. | 2020-12-24 |
20200401506 | System and Method for Performing Automated API Tests - A framework and a method for ad-hoc batch testing of APIs are provided, where batches of API calls are dynamically generated directly through the framework according to inputs identifying the required tests and the sources of the test data, rather than through execution of prewritten test scripts that explicitly write out the test API calls in preset sequences. When performing the validation for an API test, a test payload is generated for the test, and an endpoint is called using the test payload to obtain the response used for validation. Generating the test payload includes determining an API reference corresponding to the test, obtaining relevant data from the test data according to a reference key in the test, generating input assignment operations for one or more input parameters in the API reference according to the relevant data, and generating an API call based on the API reference. | 2020-12-24 |
20200401507 | METHOD FOR AUTOMATICALLY VALIDATING COTS AND DEVICE FOR IMPLEMENTING THE METHOD - A method for testing a software component implemented in a host system on the basis of one or more test campaigns, a test campaign includes computer test cases and being associated with input test data. The method comprises the steps of: executing the computer test cases of each test campaign for an operating time of the software component, which provides output test data associated with each test campaign; determining a reference operating model and a data partition on the basis of the input and output test data associated with each test campaign; running the software component using input production run data, which provides output production run data; determining an operating characteristic of the software component on the basis of the reference operating models according to a comparison between the input and output production run data and the data from the data partitions associated with the one or more test campaigns. | 2020-12-24 |
20200401508 | METHOD AND COMPUTER PROGRAM PRODUCT AND APPARATUS FOR MULTI-NAMESPACE DATA ACCESS - The invention introduces a method for multi-namespace data access, performed by a controller, at least including: obtaining a host write command from a host, which includes user data and metadata associated with one or more Logical Block Addresses (LBAs); and programming the user data and the metadata into a user-data part and a metadata part of a segment of a Logical Unit Number (LUN), respectively, wherein a length of the metadata part is the maximum metadata length of a plurality of LBA formats that the controller supports. | 2020-12-24 |
20200401509 | APPARATUS AND METHOD AND COMPUTER PROGRAM PRODUCT FOR HANDLING FLASH PHYSICAL-RESOURCE SETS - The invention introduces an apparatus for handling flash physical-resource sets, at least including a random access memory (RAM), a processing unit and an address conversion circuit. The RAM includes multiple segments of temporary space and each segment thereof stores variables associated with a specific flash physical-resource set. The processing unit accesses user data of a flash physical-resource set when executing program code of a Flash Translation Layer (FTL). The address conversion circuit receives a memory address issued from the FTL, converts the memory address into a relative address of one segment of temporary space associated with the flash physical-resource set and outputs the relative address to the RAM for accessing a variable of the associated segment of temporary space. | 2020-12-24 |
20200401510 | DATA STORAGE DEVICE FOR MANAGING MEMORY RESOURCES BY USING FLASH TRANSLATION LAYER WITH CONDENSED MAPPING INFORMATION - There is provided a data storage device for managing memory resources by using a flash translation layer (FTL) for condensing mapping information. The FTL divides a total logical address space for input and output requests of a host into n virtual logical address streams, generates a preliminary cluster mapping table in accordance with stream attributes of the n virtual logical address streams, generates a condensed cluster mapping table by performing a k-mean clustering algorithm on the preliminary cluster mapping table, and generates a cache cluster mapping table configured as a part of a condensed cluster mapping table frequently referred to by using a DFTL method. The FTL extends a space of data buffers allotted to non-mapped physical address streams to a DFTL cache map in a data buffer of a volatile memory device by the condensed cluster mapping table. | 2020-12-24 |
20200401511 | ATTACHABLE PROTECTIVE DATA STORAGE DEVICE - A molded reinforced polymer case houses an integrated solid-state data storage drive and attaches to a laptop. The solid-state drive connects to the laptop through a connective data cable. | 2020-12-24 |
20200401512 | APPARATUS AND SYSTEM FOR OBJECT-BASED STORAGE SOLID-STATE DEVICE - An object-based storage system comprising a host system capable of executing applications for and with an object-based storage device (OSD). Exemplary configurations include a call interface, a physical layer interface, an object-based storage solid-state device (OSD-SSD), and are further characterized by the presence of a storage processor capable of processing object-based storage device algorithms interleaved with processing of physical storage device management. Embodiments include a storage controller capable of executing recognition, classification and tagging of application files, especially including image, music, and other media. Also disclosed are methods for initializing and configuring an OSD-SSD device. | 2020-12-24 |
20200401513 | GARBAGE COLLECTION ADAPTED TO HOST WRITE ACTIVITY - Systems and methods for adapting garbage collection (GC) operations in a memory device to a host write activity are described. A host write progress can be represented by an actual host write count relative to a target host write count. The host write activity may be estimated in a unit time such as per day, or accumulated over a specified time period. A memory controller can adjust an amount of memory space to be freed by a GC operation according to the host write progress. The memory controller can also dynamically reallocate a portion of the memory cells between a single level cell (SLC) cache and a multi-level cell (MLC) storage according to the host write progress. | 2020-12-24 |
20200401514 | GARBAGE COLLECTION ADAPTED TO MEMORY DEVICE LIFE EXPECTANCY - Systems and methods for adapting garbage collection (GC) operations in a memory device to an estimated device age are discussed. An exemplary memory device includes a memory controller to track an actual device age, determine a device wear metric using a physical write count and total writes over an expected lifetime of the memory device, estimate a wear-indicated device age, and adjust an amount of memory space to be freed by a GC operation according to the wear-indicated device age relative to the actual device age. The memory controller can also dynamically reallocate a portion of the memory cells between a single level cell (SLC) cache and a multi-level cell (MLC) storage according to the wear-indicated device age relative to the actual device age. | 2020-12-24 |
20200401515 | GARBAGE COLLECTION ADAPTED TO USER DEVICE ACCESS - Systems and methods for adapting garbage collection (GC) operations in a memory device to a pattern of host accessing the device are discussed. The host access pattern can be represented by how frequent the device is in idle states free of active host access. An exemplary memory device includes a memory controller to track a count of idle periods during a specified time window, and to adjust an amount of memory space to be freed by a GC operation in accordance with the count of idle periods. The memory controller can also dynamically reallocate a portion of the memory cells between a single level cell (SLC) cache and a multi-level cell (MLC) storage according to the count of idle periods during the specified time window. | 2020-12-24 |
20200401516 | Data Storage Devices and Data Processing Methods - A data storage device includes a memory device and a memory controller. The memory device includes multiple memory blocks. The memory controller determines whether execution of a garbage collection procedure is required according to a number of spare memory blocks. When the execution of the garbage collection procedure is required, the memory controller determines an execution period according to a latest editing status of a plurality of open memory blocks; starts the execution of the garbage collection procedure so as to perform at least a portion of the garbage collection procedure in the execution period; and suspends the execution of the garbage collection procedure when the execution period has expired but the garbage collection procedure is not finished. The memory controller further determines a time interval for continuing the execution of the garbage collection procedure later according to the latest editing status of the open memory blocks. | 2020-12-24 |
20200401517 | ARENA-BASED MEMORY MANAGEMENT - An arena-based memory management system is disclosed. In response to a call to reclaim memory storing a group of objects allocated in an arena, an object of the group that is not in use is collected. A live object of the group is copied from the arena to a heap. | 2020-12-24 |
20200401518 | MEMORY CONTROLLER AND MEMORY SYSTEM HAVING THE MEMORY CONTROLLER - There are provided a memory controller for performing a program operation and a memory system having the memory controller. The memory system includes a memory device including first and second planes each including a plurality of m-bit (m is a natural number of 2 or more) multi-level cell (MLC) blocks; and a memory controller for allocating a first address corresponding to a first MLC block of the m-bit MLC blocks in which first m-bit MLC data is to be programmed and a second address corresponding to a second MLC block of the m-bit MLC blocks in which second m-bit MLC data is to be programmed, and transmitting the allocated addresses and logical page data included in the m-bit MLC data to the memory device. The memory controller differently determines a transmission sequence of the logical page data according to whether the addresses correspond to the same plane among the planes. | 2020-12-24 |
20200401519 | REGION BASED SPLIT-DIRECTORY SCHEME TO ADAPT TO LARGE CACHE SIZES - Systems, apparatuses, and methods for maintaining region-based cache directories split between node and memory are disclosed. The system with multiple processing nodes includes cache directories split between the nodes and memory to help manage cache coherency among the nodes' cache subsystems. In order to reduce the number of entries in the cache directories, the cache directories track coherency on a region basis rather than on a cache line basis, wherein a region includes multiple cache lines. Each processing node includes a node-based cache directory to track regions which have at least one cache line cached in any cache subsystem in the node. The node-based cache directory includes a reference count field in each entry to track the aggregate number of cache lines that are cached per region. The memory-based cache directory includes entries for regions which have an entry stored in any node-based cache directory of the system. | 2020-12-24 |
20200401520 | MEMORY CACHE-LINE BOUNCE REDUCTION FOR SHARED I/O RING STRUCTURES - A system includes a memory, a producer processor and a consumer processor. The memory includes a shared ring buffer, which has a partially overlapping active ring and processed ring. The producer processor is in communication with the memory and is configured to receive a request associated with a memory entry, store the request in a first slot of the shared ring buffer at a first offset, receive another request associated with another memory entry, and store the other request in a second slot (in the overlapping region adjacent to the first slot) of the shared ring buffer. The consumer processor is in communication with the memory and is configured to process the request and write the processed request in a third slot (outside of the overlapping region at a second offset and in a different cache-line than the second slot) of the shared ring buffer. | 2020-12-24 |
20200401521 | VOLATILE MEMORY CACHE LINE DIRECTORY TAGS - An example memory system may include a central processing unit (CPU) comprising a CPU cache, a storage class memory, a volatile memory and a memory controller. The memory controller is to store, in the storage class memory, a first cache line including first data and a first directory tag corresponding to the first data. The memory controller is to further store, in the storage class memory, a second cache line including second data and a second directory tag corresponding to the second data. The memory controller is to store, in the volatile memory, a third cache line that comprises the first directory tag and the second directory tag, the third cache line excluding the first data and the second data. | 2020-12-24 |
20200401522 | SYSTEMS AND METHODS FOR PROVIDING CONTENT - Systems, methods, and non-transitory computer-readable media can determine that a user is interacting with a software application running on a computing device. One or more content items to be prefetched for the software application are identified based on one or more machine learning models. A request to prefetch the one or more content items for the software application is generated. | 2020-12-24 |
20200401523 | PREFETCHING IN A LOWER LEVEL EXCLUSIVE CACHE HIERARCHY - According to one general aspect, an apparatus may include a multi-tiered cache system that includes at least one upper cache tier relatively closer, hierarchically, to a processor and at least one lower cache tier relatively closer, hierarchically, to a system memory. The apparatus may include a memory interconnect circuit hierarchically between the multi-tiered cache system and the system memory. The apparatus may include a prefetcher circuit coupled with a lower cache tier of the multi-tiered cache system, and configured to issue a speculative prefetch request to the memory interconnect circuit for data to be placed into the lower cache tier. The memory interconnect circuit may be configured to cancel the speculative prefetch request if the data exists in an upper cache tier of the multi-tiered cache system. | 2020-12-24 |
20200401524 | HIGH-FREQUENCY AND LOW-POWER L1 CACHE AND ASSOCIATED ACCESS TECHNIQUE - A high-frequency and low-power L1 cache and associated access technique. The method may include inspecting a virtual address of an L1 data cache load instruction, and indexing into a row and a column of a way predictor table using metadata and a virtual address associated with the load instruction. The method may include matching information stored at the row and the column of the way predictor table to a location of a cache line. The method may include predicting the location of the cache line within the L1 data cache based on the information match. A hierarchy of way predictor tables may be used, with higher level way predictor tables refreshing smaller lower level way predictor tables. The way predictor tables may be trained to make better predictions over time. Only selected circuit macros need to be enabled based on the predictions, thereby saving power. | 2020-12-24 |
20200401525 | Storage System and Method for Enabling Host-Driven Regional Performance in Memory - A storage system and method for enabling host-driven regional performance in memory are provided. In one embodiment, a method is provided comprising receiving a directive from a host device as to a preferred logical region of a non-volatile memory in a storage system; and based on the directive, modifying a caching policy specifying which pages of a logical-to-physical address map stored in the non-volatile memory are to be cached in a volatile memory of the storage system. Other embodiments are provided, such as modifying a garbage collection policy of the storage system based on information from the host device regarding a preferred logical region of the memory. | 2020-12-24 |
20200401526 | STREAMING ENGINE WITH EARLY AND LATE ADDRESS AND LOOP COUNT REGISTERS TO TRACK ARCHITECTURAL STATE - A streaming engine employed in a digital data processor specifies a fixed read-only data stream defined by plural nested loops. An address generator produces addresses of data elements. A stream head register stores data elements next to be supplied to functional units for use as operands. The streaming engine stores an early address of next-to-be-fetched data elements and a late address of a data element in the stream head register for each of the nested loops. The streaming engine stores an early loop count of next-to-be-fetched data elements and a late loop count of a data element in the stream head register for each of the nested loops. | 2020-12-24 |
20200401527 | COHERENT MEMORY ACCESS - Apparatuses and methods related to providing coherent memory access. An apparatus for providing coherent memory access can include a memory array, a first processing resource, a first cache line and a second cache line coupled to the memory array, a first cache controller, and a second cache controller. The first cache controller coupled to the first processing resource and to the first cache line can be configured to provide coherent access to data stored in the second cache line and corresponding to a memory address. A second cache controller coupled through an interface to a second processing resource external to the apparatus and coupled to the second cache line can be configured to provide coherent access to the data stored in the first cache line and corresponding to the memory address. Coherent access can be provided using a first cache line address register of the first cache controller which stores the memory address and a second cache line address register of the second cache controller which also stores the memory address. | 2020-12-24 |
20200401528 | METHOD, SYSTEM, AND COMPUTER PROGRAM PRODUCT FOR MAINTAINING A CACHE - A method, system, and computer program product for maintaining a cache obtain request data associated with a plurality of previously processed requests for aggregated data; predict, based on the request data, (i) a subset of the aggregated data associated with a subsequent request and (ii) a first time period associated with the subsequent request; determine, based on the first time period and a second time period associated with a performance of a data aggregation operation that generates the aggregated data, a third time period associated with instructing a memory controller managing a cache to evict cached data stored in the cache and load the subset of the aggregated data into the cache; and provide an invalidation request to the memory controller managing the cache to evict the cached data stored in the cache and load the subset of the aggregated data into the cache during the third time period. | 2020-12-24 |
20200401529 | GPU CACHE MANAGEMENT BASED ON LOCALITY TYPE DETECTION - Wavefront loading in a processor is managed and includes monitoring a selected wavefront of a set of wavefronts. Reuse of memory access requests for the selected wavefront is counted. A cache hit rate in one or more caches of the processor is determined based on the counted reuse. Based on the cache hit rate, subsequent memory requests of other wavefronts of the set of wavefronts are modified by including a type of reuse of cache lines in requests to the caches. In the caches, storage of data in the caches is based on the type of reuse indicated by the subsequent memory access requests. Reused cache lines are protected by preventing cache line contents from being replaced by another cache line for a duration of processing the set of wavefronts. Caches are bypassed when streaming access requests are made. | 2020-12-24 |
20200401530 | FLATFLASH SYSTEM FOR BYTE GRANULARITY ACCESSIBILITY OF MEMORY IN A UNIFIED MEMORY-STORAGE HIERARCHY - Various embodiments are provided for providing byte granularity accessibility of memory in a unified memory-storage hierarchy in a computing system by a processor. A location of one or more secondary memory medium pages in a secondary memory medium may be mapped into an address space of a primary memory medium to extend a memory-storage hierarchy of the secondary memory medium. The one or more secondary memory medium pages may be promoted from the secondary memory medium to the primary memory medium. The primary memory medium functions as a cache to provide byte level accessibility to the one or more primary memory medium pages. A memory request for the secondary memory medium page may be redirected using a promotion look-aside buffer (“PLB”) in a host bridge associated with the primary memory medium and the secondary memory medium. | 2020-12-24 |
20200401531 | MEMORY ACCESS - A method for managing memory access for implementing at least one layer of a convolutional neural network is provided. The method comprises predicting an access procedure in relation to a portion of memory based on a characteristic of the convolutional neural network. In response to the prediction, the method comprises performing an operation to obtain and store a memory address translation, corresponding to the portion of memory, in storage in advance of the predicted access procedure. An apparatus is provided comprising at least one processor and storage. The apparatus is configured to predict an access procedure in relation to a portion of memory which is external to the processor. In response to the prediction, the apparatus is configured to obtain and store a memory address translation corresponding to the portion of memory in storage in advance of the predicted access procedure. | 2020-12-24 |
20200401532 | LOOKAHEAD PRIORITY COLLECTION TO SUPPORT PRIORITY ELEVATION - A queuing requester for access to a memory system is provided. Transaction requests, each including an associated priority value, are received from two or more requesters for access to the memory system. A request queue of the received transaction requests is formed in the queuing requester. A highest priority value of all pending transaction requests within the request queue is determined. An elevated priority value is selected when the highest priority value is higher than the priority value of the oldest transaction request in the request queue; otherwise the priority value of the oldest transaction request is selected. The oldest transaction request in the request queue is then provided to the memory system with the selected priority value. An arbitration contest with other requesters for access to the memory system is performed using the selected priority value. | 2020-12-24 |
20200401533 | MEMORY DEVICE WITH CRYPTOGRAPHIC KILL SWITCH - The disclosed embodiments describe devices and methods for preventing unauthorized access to memory devices. The disclosed embodiments utilize a one-time programmable (OTP) memory added to both a memory device and a processing device. The OTP memory stores encryption keys and the encryption and decryption of messages between the two devices are used as a heartbeat to determine that the memory device has not been separated from the processing device and, in some instances, connected to a malicious processing device. | 2020-12-24 |
20200401534 | STORAGE CLASS MEMORY WITH IN-MEMORY ONE-TIME PAD SECURITY - A memory device includes a memory module that encrypts and decrypts data with a key. To encrypt, the memory module performs a first modified XOR operation in which a ciphertext has a same logical value as a corresponding key when the data has a low logical value and the ciphertext has an inverse of the logical value of the corresponding key when the data is at a high logical value. To decrypt, the memory module performs a second modified XOR operation in which the logical value of the ciphertext forms the logical value of the data when the corresponding key is at the low logical value and the inverse of the logical value of the ciphertext forms the logical value of the corresponding data when the corresponding key is at the high logical value. | 2020-12-24 |
20200401535 | MEMORY MODULE DATA OBJECT PROCESSING SYSTEMS AND METHODS - The present disclosure provides methods, apparatus, and systems for implementing and operating a memory module that receive, using dedicated processing circuitry implemented in a memory module, a first data object and a second data object. The memory module performs pre-processing of the first data object and post-processing of the second data object. | 2020-12-24 |
20200401536 | WAVE PIPELINE INCLUDING SYNCHRONOUS STAGE - A wave pipeline includes a data path and a clock path. The data path includes a plurality of wave pipeline data stages and a synchronous data stage between a data input node and a data output node. The synchronous data stage includes a first data latch to latch the data from the synchronous data stage. The clock path includes a plurality of clock stages corresponding to the plurality of wave pipeline data stages between an input clock node and a return clock node. Each clock stage has a delay configured to be equal to a delay of the corresponding wave pipeline data stage. The wave pipeline includes a second data latch to latch the data on the data output node in response to a return clock signal on the return clock node. The first data latch latches the data from the synchronous data stage in response to a clock signal on the clock path. | 2020-12-24 |
20200401537 | QUALITY OF SERVICE POLICY SETS - Disclosed are systems, computer-readable mediums, and methods for managing input-output operations within a system including at least one client and a storage system. A processor receives information regarding allocated input-output operations (IOPS) associated with at least one client accessing a storage system storing client data. The information includes a number of allocated total IOPS, a number of allocated read IOPS, and a number of allocated write IOPS. The processor also receives a requested number of write IOPS associated with the at least one client's request to write to the storage system. The processor determines a target write IOPS based on the number of allocated total IOPS, the number of allocated write IOPS, and the requested number of write IOPS, and executes the determined target write IOPS within a first time period. | 2020-12-24
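The abstract names the inputs to the target-write-IOPS computation but not the formula. One plausible reading, offered purely as a hypothetical sketch (the parameter `used_read` and the min-capping policy are assumptions, not from the patent), caps the request by both the write allocation and the headroom left in the total allocation:

```python
def target_write_iops(alloc_total, alloc_write, used_read, requested_write):
    """Hypothetical QoS policy: grant the requested write IOPS, capped by
    the write allocation and by total-allocation headroom left after reads."""
    headroom = max(alloc_total - used_read, 0)
    return min(requested_write, alloc_write, headroom)

# 500 requested, but only 300 total-IOPS headroom remains after 700 reads.
print(target_write_iops(1000, 400, 700, 500))  # → 300
```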
20200401538 | INTEGRATED CIRCUITS FOR GENERATING INPUT/OUTPUT LATENCY PERFORMANCE METRICS USING REAL-TIME CLOCK (RTC) READ MEASUREMENT MODULE - An integrated circuit includes technology for generating input/output (I/O) latency metrics. The integrated circuit includes a real-time clock (RTC), a read measurement register, and a read latency measurement module. The read latency measurement module includes control logic to perform operations comprising (a) in response to receipt of read responses that complete read requests associated with an I/O device, automatically calculating read latencies for the completed read requests, based at least in part on time measurements from the RTC for initiation and completion of the read requests; (b) automatically calculating an average read latency for the completed read requests, based at least in part on the calculated read latencies for the completed read requests; and (c) automatically updating the read measurement register to record the average read latency for the completed read requests. Other embodiments are described and claimed. | 2020-12-24 |
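The three operations (a)-(c) amount to timestamping each read at initiation and completion and maintaining a running average. A software model of that logic, with the RTC abstracted as a caller-supplied timestamp (class and field names are invented for illustration):

```python
class ReadLatencyMeter:
    """Software model of the RTC-based read-latency module: record the RTC
    time at request initiation and completion, keep a running average."""
    def __init__(self):
        self.inflight = {}   # request id -> RTC time at initiation
        self.total = 0.0     # sum of completed-read latencies
        self.count = 0       # number of completed reads

    def start(self, req_id, rtc_now):
        self.inflight[req_id] = rtc_now

    def complete(self, req_id, rtc_now):
        self.total += rtc_now - self.inflight.pop(req_id)
        self.count += 1

    @property
    def average_latency(self):  # models the "read measurement register"
        return self.total / self.count if self.count else 0.0

m = ReadLatencyMeter()
m.start(1, 100); m.complete(1, 110)   # latency 10
m.start(2, 200); m.complete(2, 230)   # latency 30
print(m.average_latency)  # → 20.0
```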
20200401539 | APPLICATION PROCESSOR SUPPORTING INTERRUPT DURING AUDIO PLAYBACK, ELECTRONIC DEVICE INCLUDING THE SAME AND METHOD OF OPERATING THE SAME - An application processor includes a system bus, as well as a host processor, a voice trigger system, and an audio subsystem that are electrically connected to the system bus. The voice trigger system performs a voice trigger operation and issues a trigger event based on a trigger input signal that is provided through a trigger interface. The audio subsystem processes audio streams that are replayed or recorded through an audio interface, and receives an interrupt signal through the audio interface while an audio replay operation is performed through the audio interface. | 2020-12-24 |
20200401540 | DMA-Scatter and Gather Operations for Non-Contiguous Memory - A direct memory access (DMA) controller, includes circuitry configured to load a DMA transfer descriptor configured to define which memory elements within a contiguous block of n memory elements are to be included in a given DMA transfer. The circuitry is further configured to, based on the DMA transfer descriptor, determine whether each memory element within the contiguous block of n memory elements is to be included in the given DMA transfer, including a determination that two or more non-contiguous sub-blocks of memory elements within the contiguous block of n memory elements are to be transferred. The circuitry is further configured to, based on the determination of whether each memory element within the contiguous block of n memory elements is to be included in the given DMA transfer, perform the DMA transfer of memory elements determined to be included within the given DMA transfer. | 2020-12-24 |
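A descriptor that says "which memory elements within a contiguous block of n elements are included" is naturally modeled as a bitmask over the block. The sketch below is an illustrative software analogue of that descriptor-driven selection, not the controller circuitry:

```python
def dma_transfer(block, descriptor_mask):
    """Copy only the elements whose bit is set in the descriptor mask,
    covering non-contiguous sub-blocks within a contiguous n-element block."""
    return [elem for i, elem in enumerate(block)
            if descriptor_mask & (1 << i)]

block = [10, 11, 12, 13, 14, 15]
# Bits 0-1 and 4-5 set: two non-contiguous sub-blocks in one transfer.
print(dma_transfer(block, 0b110011))  # → [10, 11, 14, 15]
```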
20200401541 | UNIFIED CACHE FOR DIVERSE MEMORY TRAFFIC - A unified cache subsystem includes a data memory configured as both a shared memory and a local cache memory. The unified cache subsystem processes different types of memory transactions using different data pathways. To process memory transactions that target shared memory, the unified cache subsystem includes a direct pathway to the data memory. To process memory transactions that do not target shared memory, the unified cache subsystem includes a tag processing pipeline configured to identify cache hits and cache misses. When the tag processing pipeline identifies a cache hit for a given memory transaction, the transaction is rerouted to the direct pathway to data memory. When the tag processing pipeline identifies a cache miss for a given memory transaction, the transaction is pushed into a first-in first-out (FIFO) until miss data is returned from external memory. The tag processing pipeline is also configured to process texture-oriented memory transactions. | 2020-12-24 |
20200401542 | METHOD AND APPARATUS FOR IMPLEMENTING DATA TRANSMISSION, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM - This application discloses a method and an apparatus, an electronic device, and a computer-readable storage medium for implementing data transmission. The method is executed by an electronic device providing a computing service, and is applied to data transmission between two buses of different types, wherein one of the two buses is associated with an FPGA instance among multiple FPGA instances run by the computing service and the other corresponds to a device external to the electronic device. The method includes: obtaining an access instruction from an initiator through a first bus of the two buses for data read/write in a target, wherein the initiator and the target are associated with the first bus and a second bus of the two buses, and comprise one and the other of the FPGA instance and the external device, respectively; buffering the access instruction into an instruction storage area corresponding to the access instruction; and continuously transmitting the buffered access instruction from the instruction storage area to the target, suspending transmission once flow control is imposed. | 2020-12-24
20200401543 | AUTO-ADDRESSING WITH POSITION DETERMINATION OF BUS SUBSCRIBERS - A position-sensing method and device for sensing the installation location (F | 2020-12-24 |
20200401544 | System and Method for High Configurability High-Speed Interconnect - An information handling system includes first and second devices, a connectivity switch, and a baseboard management controller. The first and second devices are configured to communicate with first and second processors of the information handling system. The connectivity switch is connected between the first and second devices and the first and second processors. The connectivity switch operates in one of a plurality of configurations including a first configuration, a second configuration, and a third configuration. Each of the configurations provides a different connectivity between the first device, the second device, the first processor, and the second processor. The baseboard management controller determines a setup of the first and second devices, and provides a connectivity indication signal to the connectivity switch based on the setup of the first and second devices. The connectivity indication signal identifies one of the configurations for the connectivity switch. | 2020-12-24 |
20200401545 | BUS SYSTEM AND METHOD OF CONTROLLING THE SAME - A bus system comprises a master, a first slave, a second slave, and a bus. The master is configured to be able to issue a second request to the second slave after issuing a first request to the first slave and before receiving a response to the first request. The bus comprises: a determination unit configured to, upon receiving the second request, determine whether to permit a transfer of the second request to the second slave; and a suspending unit configured to suspend the transfer of the second request to the second slave while it is determined by the determination unit that the transfer is not permitted. The determination unit determines whether or not the transfer is permitted based on a notification from the first slave regarding processing of the first request. | 2020-12-24 |
20200401546 | LIGHTWEIGHT PROXY FOR HANDLING SCSI COMMANDS IN AN ACTIVE ARRAY-STANDBY ARRAY CONFIGURATION - An apparatus, system, and method are disclosed that service SCSI commands, including SCSI PGR commands, in the standby node of a storage system that operates in an Asymmetric Logical Unit Access (ALUA) mode. The apparatus, system, and method service SCSI PGR commands without maintaining peer/proxy port information. They service SCSI commands by forwarding/proxying commands between the active node and the standby node, in both directions, and use a modified command descriptor block (MCDB) message to conduct the communications between the nodes. | 2020-12-24
20200401547 | DAISY CHAIN CONNECTION SYSTEM AND SYSTEM CONTROL METHOD - A system having information equipment connected in a daisy chain, where power supply control of the daisy chain-connected information equipment is performed without having to add a dedicated power supply control device. In the daisy chain connection system, each second information equipment comprises a control unit and a power supply unit; the first information equipment and the control unit of the second information equipment include a communication circuit capable of wired communication, while the first information equipment and the power supply unit of the second information equipment include a wireless circuit capable of wireless communication. When turning OFF the power supply to any one of the second information equipment, the first information equipment requests the power supply unit to stop the power supply by using wireless communication, and the power supply unit performs control for stopping the power supply to the control unit according to the request. | 2020-12-24
20200401548 | EMBEDDED CONTROLLER, ELECTRONIC DEVICE, AND METHOD FOR FILTERING SPI BUS COMMAND IN RELATION TO WRITE PROTECTION - An embedded controller is connected with a main control module through a first interface module and with an SPI storage through a second interface module. The main control module outputs SPI bus commands to the SPI storage through the embedded controller. The embedded controller includes an EC FW block and an SPI bus command filter module. The EC FW block stores at least one limited SPI bus command. The SPI bus command filter module can switch between an enable mode and a disable mode. In the enable mode, the SPI bus command filter module matches SPI bus commands from the main control module against the at least one limited SPI bus command and blocks any matching command, thus providing write protection of the SPI storage. | 2020-12-24
20200401549 | MULTI-I/O SERIAL PERIPHERAL INTERFACE FOR PRECISION CONVERTERS - A Multi-I/O SPI for precision converters supports a Dual/Quad/Octal SPI to support the speed requirements for digital transmission and also includes a special mode that can be enabled by hardware and/or software to remove the bit scrambling requirement dictated by the JEDEC standard. The special mode removes the scramble requirement and associates each of the bidirectional data lines to a specific channel. The special mode provides backward compatibility that permits the precision converter to be used with controllers that do not natively support the JEDEC standard. Also, the Multi-I/O SPI includes registers divided into a primary region that is accessed only in default mode at power-up for write and/or read operations, and a secondary region that is accessed by any mode enabled in the control register. By restricting access to the “control” register area to a pre-defined mode in the converter at power-up, the access mode can be controlled. | 2020-12-24 |
20200401550 | AUTONOMOUS MEMORY ARCHITECTURE - An autonomous memory device in a distributed memory sub-system can receive a database downloaded from a host controller. The autonomous memory device can pass configuration routing information and initiate instructions to disperse portions of the database to neighboring die using an interface that handles inter-die communication. Information is then extracted from the pool of autonomous memory and passed through a host interface to the host controller. | 2020-12-24 |
20200401551 | METHODS AND SYSTEMS FOR ACCESSING HOST MEMORY THROUGH NON-VOLATILE MEMORY OVER FABRIC BRIDGING WITH DIRECT TARGET ACCESS - Embodiments described herein provide a method for accessing a host memory through non-volatile memory over fabric bridging with direct target access. A first memory access command encapsulated in a first network packet is received at a memory interface unit and from a remote direct memory access (RDMA) interface and via a network fabric. The first memory access command is compliant with a first non-volatile memory interface protocol and the first network packet is compliant with a second non-volatile memory interface protocol. The first network packet is unwrapped to obtain the first memory access command. The first memory access command is stored in a work queue using address bits of the work queue as a pre-set index of the first memory access command. The first memory access command is sent from the work queue based on the pre-set index to activate a first target storage device. | 2020-12-24 |
20200401552 | METHODS AND SYSTEM FOR AN INTEGRATED CIRCUIT - Various embodiments of the present technology may provide methods and system for an integrated circuit. The system may provide a plurality of integrated circuits (i.e., slave devices) connected to and configured to communicate with a host device. Each integrated circuit may comprise a register storing a common default address. Each integrated circuit may further comprise an interface circuit configured to overwrite the default address of one integrated circuit with a new address while preventing changes to the remaining integrated circuits. | 2020-12-24 |
20200401553 | DEVICES FOR TIME DIVISION MULTIPLEXING OF STATE MACHINE ENGINE SIGNALS - A device includes a plurality of blocks. Each block of the plurality of blocks includes a plurality of rows. Each row of the plurality of rows includes a plurality of configurable elements and a routing line, whereby each configurable element of the plurality of configurable elements includes a data analysis element comprising a plurality of memory cells, wherein the data analysis element is configured to analyze at least a portion of a data stream and to output a result of the analysis. Each configurable element of the plurality of configurable elements also includes a multiplexer configured to transmit the result to the routing line. | 2020-12-24 |
20200401554 | SELECTIVE DATA MIGRATION AND SHARING - Systems and methods for sharing information from a first user account to a second user account selectively and seamlessly. The systems and methods can be implemented by server(s) that analyze electronic transactions between the first user account and the second user account to determine appropriate queries for the accounts for sharing information. Such queries can include queries for file permissions. The server(s) can generate sharing instructions according to results of the queries and can select electronic content items for sharing according to the generated sharing instructions. Finally, the server(s) can direct storage of a copy of the selected electronic content items of the first user account into a data structure of the second user account, such that access by the second user account to the selected items from the first account is as seamless as accessing electronic content items originated by the second user account. | 2020-12-24
20200401555 | IDENTIFICATION AND RECOMMENDATION OF FILE CONTENT SEGMENTS - One disclosed method involves determining at least first and second segments of content represented by a first file, determining first data corresponding to occasions on which the first segment has been previously accessed, and determining second data corresponding to occasions on which the second segment has been previously accessed. Based at least in part on the first data and the second data, the first segment may be determined to be more likely relevant to a first user than the second segment. | 2020-12-24 |
20200401556 | METHODS, SYSTEMS, AND COMPUTER READABLE MEDIUMS FOR IMPLEMENTING A DATA PROTECTION POLICY FOR A TRANSFERRED ENTERPRISE APPLICATION - Methods, systems, and computer readable mediums for logically remediating infrastructure resource components are disclosed. According to one example, the method includes capturing metadata specifying both a data protection policy applied to an enterprise application supported by a host computing system and a location of backup file data associated with the enterprise application and transferring the enterprise application and the metadata from the host computing system to a target computing system. The method further includes utilizing the metadata to reconstruct the data protection policy for the transferred enterprise application on the target computing system, wherein the metadata specifies a data protection solution for each of a plurality of resource components supporting the transferred enterprise application on the target computing system. | 2020-12-24 |
20200401557 | METADATA COMPACTION IN A DISTRIBUTED STORAGE SYSTEM - Systems and methods for metadata compaction in a distributed storage system with a file system interface are described. A file system interface and an object storage system interface use a metadata index for mapping object identifiers from the object storage system to location identifiers for the file system. When the metadata index includes a number of entries for continuous data blocks with overlapping intervals, a defragmentation operation may generate a defragmented entry for a defragmentation interval overlapping the overlapping data blocks. | 2020-12-24 |
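The defragmentation operation described above is, at its core, interval merging: entries whose data-block intervals overlap are collapsed into a single entry covering the combined interval. An illustrative sketch of that step (the `(start, end)` tuple representation is an assumption, not the patent's on-disk metadata format):

```python
def defragment(entries):
    """Collapse metadata entries for continuous data blocks with overlapping
    (or abutting) intervals into single defragmented entries.
    Entries are (start, end) pairs with end exclusive."""
    merged = []
    for start, end in sorted(entries):
        if merged and start <= merged[-1][1]:   # overlaps or abuts previous entry
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

print(defragment([(0, 4), (2, 8), (8, 10), (20, 24)]))
# → [(0, 10), (20, 24)]
```

After defragmentation the metadata index needs one entry per defragmentation interval instead of one per overlapping block.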
20200401558 | Caching with Dynamic and Selective Compression of Content - Dynamic and selective compression for content caching is provided for improving content delivery over a digital network. The dynamic and selective compression increases effective server cache size, yielding higher cache-hit ratios that offset the delays associated with compressing and decompressing content. The dynamic compression involves switching between an available set of compression tools in order to compress different files with the compression tool that is optimal for each file. The selective compression involves compressing content or files with the optimal compression tool only when at least a threshold amount of space savings is obtained in an acceptable amount of time. Thus, the caching server caches compressed copies of a first set of files compressed with a first compression tool, compressed copies of a second set of files compressed with a different second compression tool, and an uncompressed third set of files. | 2020-12-24
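The dynamic-plus-selective policy can be sketched with stock codecs standing in for the patent's unspecified "compression tools". The 10% savings threshold and 0.5 s time budget below are invented parameters for illustration:

```python
import gzip, bz2, time

def cache_entry(data: bytes, min_savings=0.10, max_seconds=0.5):
    """Dynamic: try each available codec and keep the smallest result.
    Selective: cache it compressed only if it saves at least `min_savings`
    of the original size within the per-codec time budget."""
    best_name, best_blob = None, data
    for name, codec in (("gzip", gzip.compress), ("bz2", bz2.compress)):
        t0 = time.monotonic()
        blob = codec(data)
        if time.monotonic() - t0 > max_seconds:
            continue                      # too slow for this file: skip codec
        if len(blob) < len(best_blob):
            best_name, best_blob = name, blob
    if len(best_blob) > len(data) * (1 - min_savings):
        return None, data                 # not worth it: cache uncompressed
    return best_name, best_blob

name, blob = cache_entry(b"abc" * 1000)
assert name in ("gzip", "bz2") and len(blob) < 3000
```

A real caching server would also persist which codec was used so the entry can be decompressed on a hit.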
20200401559 | FILE TRANSFERRING USING ARTIFICIAL INTELLIGENCE - A file transfer system that includes a data source, a destination device, and a transfer server. The transfer server is configured to receive a file from the data source and determine a file size of the file. The transfer server is further configured to determine an available disk space for the destination device, to compare the available disk space to the file size of the file, and to determine that the available disk space is less than the file size of the file. In response to the determination, the transfer server is further configured to determine a file type for the file based on content within the file and to identify metadata linked with the determined file type. The transfer server is further configured to extract data from the file corresponding with the identified metadata and to send the data to the destination device. | 2020-12-24 |
20200401560 | EVALUATING PENDING OBJECT REPLICATION RULES - Techniques for replication rule evaluation are provided. A replication rule is received at a first node of a plurality of nodes, where the replication rule defines object replication among the plurality of nodes. The replication rule is labeled as pending, while a second replication rule on the first node is labeled as active. Upon receiving a request to predict an effect of the pending replication rule, a first object of a plurality of objects is identified based on the pending replication rule. Upon determining that the first object is present on the first node but is not present on a second node of the plurality of nodes, an indication of the first object is outputted. | 2020-12-24
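Predicting a pending rule's effect amounts to a dry run: select the objects the rule matches, then report those present on the first node but missing from the second. A toy sketch (the predicate-function rule and set-of-names node model are assumptions for illustration):

```python
def predict_pending_rule(rule, objects, node_a, node_b):
    """Dry-run a pending replication rule: report objects the rule selects
    that exist on the first node but are missing from the second."""
    selected = [o for o in objects if rule(o)]
    return [o for o in selected if o in node_a and o not in node_b]

node_a = {"img-1", "img-2", "doc-3"}
node_b = {"img-1"}
rule = lambda name: name.startswith("img-")     # hypothetical rule predicate
print(predict_pending_rule(rule, node_a | node_b, node_a, node_b))
# → ['img-2']
```

The active rule keeps running untouched; only the prediction output is produced for the pending one.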
20200401561 | METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR MANAGING DATA OBJECT - The present disclosure relates to a method, device and computer program product for managing a data object. In the method for managing a data object, a copy request for obtaining a copy of the data object is received from a requestor application system. At least one copy record associated with the data object is obtained from a group of copy records comprised in a copy blockchain associated with the data object. The number of copies of the data object is determined based on the at least one copy record. The copy request is handled based on the determined number of the copies. By means of immutable copy records comprised in the copy blockchain, the data object can be prevented from being illegally copied, and then higher security can be provided. Further, there is provided a device and computer program product for managing a data object. | 2020-12-24 |
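The copy-control logic reduces to counting an object's immutable copy records on the chain before honoring a new copy request. A minimal sketch with the blockchain modeled as an append-only list (the record shape and `max_copies` parameter are invented for the example):

```python
def handle_copy_request(copy_chain, object_id, max_copies):
    """Count immutable copy records for the object; grant the request and
    append a new record only while the copy limit is not exceeded."""
    copies = sum(1 for rec in copy_chain if rec["object"] == object_id)
    if copies >= max_copies:
        return False                      # request denied: limit reached
    copy_chain.append({"object": object_id, "copy_no": copies + 1})
    return True

chain = []
assert handle_copy_request(chain, "obj-A", 2)        # first copy granted
assert handle_copy_request(chain, "obj-A", 2)        # second copy granted
assert not handle_copy_request(chain, "obj-A", 2)    # third copy refused
```

In the patented scheme the append-only, tamper-evident nature of the blockchain is what makes the count trustworthy.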
20200401562 | PARALLEL PROCESSING OF FILTERED TRANSACTION LOGS - Example storage systems and methods provide data storage management using parallel processing of filtered transaction logs. Transaction logs comprise log entries corresponding to storage operations for at least one storage node. Sets of log entries are sequentially retrieved from the transaction log and filtered through multiple transaction log filters to generate multiple subsets of the log entries. Different metadata operations are executed in parallel using the different filtered subsets of log entries. | 2020-12-24
20200401563 | SUMMARIZING STATISTICAL DATA FOR DATABASE SYSTEMS AND/OR ENVIRONMENTS - Database values and their associated indicators can be arranged in multiple “buckets.” Adjacent buckets can be successively combined into a single bucket based on one or more criteria associated with the indicators, effectively reducing the number of buckets until a desired number is reached. | 2020-12-24
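One possible merge criterion is "combine the adjacent pair with the smallest combined count", a standard histogram-compaction heuristic; the patent leaves the criteria open, so the choice below is an assumption. Buckets are modeled as `(low, high, count)` triples:

```python
def summarize(buckets, target):
    """Repeatedly merge the adjacent bucket pair with the smallest combined
    count until `target` buckets remain. Buckets are (low, high, count)."""
    buckets = list(buckets)
    while len(buckets) > target:
        i = min(range(len(buckets) - 1),
                key=lambda j: buckets[j][2] + buckets[j + 1][2])
        lo, _, c1 = buckets[i]
        _, hi, c2 = buckets[i + 1]
        buckets[i:i + 2] = [(lo, hi, c1 + c2)]   # splice in the merged bucket
    return buckets

print(summarize([(0, 10, 5), (10, 20, 2), (20, 30, 1), (30, 40, 9)], 2))
# → [(0, 30, 8), (30, 40, 9)]
```

The sparsely populated middle buckets are absorbed first, preserving detail where the counts are largest.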
20200401564 | MANAGEMENT OF INSTRUMENTATION DATA USING A RESOURCE DATABASE - Methods, systems, and computer programs encoded on computer storage medium, for receiving a request for instrumentation data describing one or more devices in the computing environment, the request received from a resource management interface; retrieving the instrumentation data describing the one or more devices from a resource database; identifying a schema associated with the resource management interface; converting the instrumentation data describing the one or more devices based on the identified schema associated with the resource management interface; and transmitting the instrumentation data describing the one or more devices to the resource management interface. | 2020-12-24 |
20200401565 | AUTOMATICALLY RANK AND ROUTE DATA QUALITY REMEDIATION TASKS - In an approach for automatically ranking and routing data quality remediation tasks, a processor analyzes a data set ingested by a repository to produce a set of data quality problems. A processor computes a score for each data quality problem of the set of data quality problems. A processor identifies a route to send each data quality problem of the set of data quality problems. A processor exports each data quality problem according to the score and the route. | 2020-12-24 |
20200401566 | Hybrid Data Integration Platform - In general, embodiments of the present invention provide systems and computer readable media for implementing a single data integration platform that supports multiple data access interfaces to a single corpus of stored dynamic data collected from multiple data sources. In embodiments, the data integration platform includes a record tables layer that stores a group of data records and supports a CRUD interface for accessing the data records; a resolution mapping layer that stores a set of entities generated by a many-to-one mapping of data records to entities using entity resolution; and an entities layer that stores resolved entities which may be accessed via either a search interface based on search criteria or a hybrid search interface that supports “get via record id” queries. | 2020-12-24 |
20200401567 | Object Storage System with Versioned Meta Objects - Example object storage systems, meta object generators, and methods provide versioned meta objects for internal operational data that may be replicated between data object stores. A meta object may be generated that includes meta object data, such as internal operational data. A meta object identifier may be assigned to the meta object. A first version identifier may be associated with the meta object data and a second version identifier may be associated with a marker for the meta object, where the marker prevents exposure as a user data object. | 2020-12-24 |
20200401568 | DATABASE JOURNAL REDACTION - A database management system stores an entry in a journal. The journal, upon storage of the entry, comprises a leaf node with a hash value based at least in part on the entry, and a hierarchy of interior nodes based at least in part upon the leaf node. In response to a request to delete the entry, the entry is deleted but the hash value is retained. A cryptographic proof of a second entry stored in the journal is based at least in part on the retained hash value. | 2020-12-24 |
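The key idea, deleting an entry's content while retaining its hash so that proofs over sibling entries still verify, can be shown with a two-leaf Merkle-style journal. This is an illustrative reduction of the scheme, not the patent's actual journal layout:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Two-entry journal: leaf hashes plus one interior (root) hash.
entries = {1: b"entry-1", 2: b"entry-2"}
leaf = {i: h(v) for i, v in entries.items()}
root = h(leaf[1] + leaf[2])

# Redact entry 1: delete the content but retain its leaf hash.
del entries[1]

# A cryptographic proof for entry 2 still verifies against the root,
# because it only needs the retained hash of the redacted entry.
assert h(leaf[1] + h(entries[2])) == root
```

Without the retained hash the root could not be recomputed, so redaction would break every proof that shares an interior node with the deleted entry.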
20200401569 | SYSTEM AND METHOD FOR DATA RECONCILIATION - A system for data reconciliation is provided. The data reconciliation system includes a data processing subsystem. The data processing subsystem includes a computation module configured to generate hash values for a set of tables located in a source database by a hashing technique, and also configured to generate hash values for a set of tables located in a destination database by the same hashing technique. The data processing subsystem also includes an analysis module configured to analyse the hash values of the source database and the hash values of the destination database by a pre-determined rule, and a suggestion module configured to suggest output based on the analysis result. A data memory subsystem is configured to store the generated hash values for the source database and the destination database. The present invention provides safe migration or transfer of data. | 2020-12-24
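The hash-and-compare step can be sketched with an order-insensitive per-table digest; the XOR-of-row-hashes technique and the "flag tables whose hashes differ" rule are illustrative choices, since the patent specifies neither the hashing technique nor the pre-determined rule:

```python
import hashlib

def table_hash(rows):
    """Order-insensitive hash of a table: XOR of per-row SHA-256 digests."""
    acc = 0
    for row in rows:
        acc ^= int.from_bytes(hashlib.sha256(repr(row).encode()).digest(), "big")
    return acc

source = {"users": [(1, "ann"), (2, "bob")], "orders": [(9, 1)]}
dest   = {"users": [(1, "ann"), (2, "bob")], "orders": []}

# Reconciliation rule: flag tables whose hashes differ after migration.
mismatched = [t for t in source if table_hash(source[t]) != table_hash(dest[t])]
print(mismatched)  # → ['orders']
```

Comparing one digest per table avoids shipping full row sets between the databases just to check that a migration landed intact.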
20200401570 | DATABASE INDEXING IN PERFORMANCE MEASUREMENT SYSTEMS - A performance measurement indexing system indexes a data store containing data entries indicative of message processing by an application. The application includes a plurality of checkpoints, and the data store contains data logged upon each message traversing the checkpoints in the application. The performance measurement indexing system determines which data entries relate to messages that satisfy a delay condition, and limits queries run on the data store to those data entries, thereby increasing the speed and efficiency with which queries can be serviced. | 2020-12-24 |
20200401571 | Human Experiences Ontology Data Model and its Design Environment - Structuring data, content, video, texts, and other narratives for researching complex adaptive systems requires new approaches and flexible data modelling tools. These tools must allow conceptualizing contexts and other semantic structures of the information. The invention addresses the problem with a method for designing and transmitting semantic digital codes and semantic digital structures of human experiences. The method can be applied to seeking patterns in large volumes of information within professional areas such as Social Studies, Genomics, Mathematical Sociology, Digital Humanities, and other knowledge domains. The frameworks and semantic digital structures of human experiences provided with the Human Experiences Ontology Data Model can be integrated into metadata, systems of tags, and training datasets for Machine Learning. The semantic digital structures of human experiences created in one professional area or language can be transmitted to other knowledge domains and languages. | 2020-12-24
20200401572 | MANAGING COMPLIANCE AND E-DISCOVERY DATA USING A CASE MANAGER CLIENT - Complying with a legal discovery request or a compliance request often involves querying several different bodies of documents. In various embodiments, an administrator creates a case that is managed by a case managing computer program and stored as a case manager client in the information management system. For instant access to the data, the system obtains copies of the index items corresponding to the data and stores the index items on the case manager client. The index items include links to the corresponding data objects. In some embodiments, the system also copies over the case data to a separate data store associated with the case manager client. After the case data has been obtained by the case manager client, the system may re-index the copied case data and update the associated indexes. | 2020-12-24 |
20200401573 | BLOCKCHAIN AS A SERVICE METHOD, APPARATUS, AND SYSTEM - Methods, apparatus, and systems to provide access to a blockchain computer system as a service to a non-blockchain computer system through an application programming interface, wherein the application programming interface is configured through a portal computer and application programming interface calls are implemented by an application programming interface processing computer. | 2020-12-24
20200401574 | SYSTEMS AND METHODS TO FACILITATE RAPID DATA ENTRY FOR DOCUMENT REVIEW - A computer-implemented method that includes generating a graphical user interface including a coding interface and a document viewer interface. The coding interface displays a grid that includes a plurality of cells representing a plurality of documents and a plurality of fields. A selection of one of the cells is received. The cell represents a selected one of the documents, and a selected one of the fields. A rendering of the selected document is automatically displayed in the document viewer interface. A value entered into the selected cell is received and the value is transmitted to a database for storage thereby. | 2020-12-24 |
20200401575 | STREAMLINED DATABASE COMMIT FOR SYNCHRONIZED NODES - Techniques for streamlined commit procedures between synchronized nodes are provided. A request to commit a transaction is transmitted from a first node, where the request instructs a second node to retain any locks related to the transaction. A response is received, from the second node, indicating that the transaction was successfully committed. Upon receiving the response, the transaction is committed on the first node. Upon successfully committing the transaction on the first node, a first cleanup request is transmitted to the second node, where the cleanup request instructs the second node to release any locks related to the transaction. | 2020-12-24 |
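The commit-then-cleanup sequence this abstract describes can be sketched as follows. This is a minimal illustration of the protocol ordering only, not the patented implementation; the node classes and method names are invented for the example.

```python
class SecondaryNode:
    def __init__(self):
        self.locks = set()
        self.committed = []

    def commit_and_retain_locks(self, txn_id, locked_rows):
        # Commit the transaction but keep its locks, as instructed by
        # the first node's commit request.
        self.locks.update(locked_rows)
        self.committed.append(txn_id)
        return True  # success response

    def cleanup(self, txn_id, locked_rows):
        # The cleanup request: only now release the transaction's locks.
        self.locks.difference_update(locked_rows)


class PrimaryNode:
    def __init__(self, secondary):
        self.secondary = secondary
        self.committed = []

    def commit(self, txn_id, locked_rows):
        # 1. Ask the second node to commit, retaining its locks.
        if not self.secondary.commit_and_retain_locks(txn_id, locked_rows):
            return False
        # 2. Commit locally only after the second node reports success.
        self.committed.append(txn_id)
        # 3. Only after the local commit, tell the second node to release locks.
        self.secondary.cleanup(txn_id, locked_rows)
        return True
```

Retaining the locks until the first node has also committed keeps the second node from exposing intermediate state between the two commits.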
20200401576 | INTERACTING WITH REAL-WORLD ITEMS AND CORRESPONDING DATABASES THROUGH A VIRTUAL TWIN REALITY - A system comprises at least one cloud server of a cloud server computer system comprising at least one processor and memory storing a persistent virtual world system comprising one or more virtual objects including virtual data and models. The virtual objects comprise one or more of a virtual twin, a pure virtual object, or an application, wherein at least one of the virtual objects represents a store of real-world items connected to a periodically-updated database associated with the products of the at least one store. Users may access the store through the persistent virtual world system via a user device enabling interactions with and between elements within the store. | 2020-12-24 |
20200401577 | Block Verification Device, Block Verification Method, and Program - A consensus is formed for a private database shared within a group. A block verification device includes: a private database shared within a group; a communication unit receiving a list of transactions from an overall leader device; a transaction processing unit executing the transactions identified based on the list and outputting the execution results; and a block processing unit generating a proposal including the list, a digest of the private database after execution of the transactions, and a digest of the private dataset included in the transactions. The communication unit transmits the proposal to another block verification device belonging to the same group, and transmits the list, the digest of the private database, and the digest of the private dataset to all the other block verification devices when it is determined that a consensus for the proposal is formed. | 2020-12-24 |
20200401578 | SYSTEM AND METHOD FOR MANAGING A BLOCKCHAIN CLOUD SERVICE - In accordance with an embodiment, described herein is a system and method for implementing a distributed ledger as a blockchain cloud service. The blockchain cloud service can include nodes of the distributed ledger and a management console component. The management console component can include a web application running in a script runtime environment, a plurality of backend APIs for communicating with various nodes of the blockchain cloud service, and a plurality of client APIs configured to be invoked by a client application. The plurality of client APIs uses one or more of the plurality of backend APIs in provisioning the distributed ledger as a blockchain cloud service, and in managing the blockchain cloud service. | 2020-12-24 |
20200401579 | REVIEWER RECOMMENDATION - Recommending reviewers from which to request reviews is disclosed. A list of potential reviewers is received. A determination is made that at least one potential reviewer included in the list of potential reviewers should be targeted with a request to review an entity. The transmission of a review request to the potential reviewer is facilitated. | 2020-12-24 |
20200401580 | INTERACTION BETWEEN VISUALIZATIONS AND OTHER DATA CONTROLS IN AN INFORMATION SYSTEM BY MATCHING ATTRIBUTES IN DIFFERENT DATASETS - A computer-implemented method includes analyzing a first dataset to extract metadata that corresponds to a first visualization; analyzing a second dataset to extract metadata; comparing the metadata of the datasets; deriving based on the comparing, a level of correlation between attributes of the datasets; establishing a score for each of the levels of correlation; determining that a first attribute of the first dataset and a first attribute of the second dataset are a match in response to the establishing of a score for the level of correlation of the first attributes of the datasets; determining that the datasets are related in response to the determining that the first attributes of the datasets are a match; and directing the displaying of a second visualization, the second visualization being a visual representation that includes data from the second dataset. | 2020-12-24 |
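The compare-score-match flow in this abstract can be sketched in a few lines. This is an illustrative simplification, not the patented method: metadata is reduced to attribute names and value sets, and the correlation score is a simple Jaccard overlap with an assumed threshold.

```python
def extract_metadata(dataset):
    # Metadata here is simply each attribute's name and its set of values;
    # `dataset` is a mapping of column name -> list of values.
    return {col: set(values) for col, values in dataset.items()}

def correlation_score(values_a, values_b):
    # Jaccard overlap of observed values as a stand-in correlation measure.
    if not values_a or not values_b:
        return 0.0
    return len(values_a & values_b) / len(values_a | values_b)

def match_attributes(ds1, ds2, threshold=0.5):
    # Score every attribute pair across the two datasets and keep the
    # pairs whose correlation score clears the threshold.
    meta1, meta2 = extract_metadata(ds1), extract_metadata(ds2)
    matches = []
    for a, va in meta1.items():
        for b, vb in meta2.items():
            score = correlation_score(va, vb)
            if score >= threshold:
                matches.append((a, b, score))
    return matches
```

Once a pair such as (`country`, `nation`) matches, the datasets are considered related, and a visualization over the second dataset can be driven from a selection in the first.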
20200401581 | UTILIZING APPROPRIATE MEASURE AGGREGATION FOR GENERATING DATA VISUALIZATIONS OF MULTI-FACT DATASETS - A computer receives a visual specification, which specifies a data source, visual variables, and data fields from the data source. Each visual variable is associated with either data fields (e.g., dimension and/or measures) or filters. The computer obtains a data model encoding the data source as a tree of related logical tables. Each logical table includes logical fields, each of which corresponds to either a data field or a calculation that spans logical tables. The computer generates a dimension subquery for the dimensions and the filters. The computer also generates, for each measure, an aggregated measure subquery grouped by the dimensions. The computer forms a final query by joining the dimension subquery to each of the aggregated measure subqueries. The computer subsequently executes the final query and displays a data visualization according to the results of the final query. | 2020-12-24 |
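The query shape the abstract describes (one dimension subquery, one aggregated subquery per measure grouped by the dimensions, all joined into a final query) can be sketched as a small SQL generator. This is a hedged illustration of that structure, not Tableau's actual implementation; the table and field names are hypothetical and logical-table calculations are omitted.

```python
def build_final_query(table, dimensions, measures, filters=None):
    """Build the final query: dimension subquery joined to one aggregated
    measure subquery per measure. `measures` is a list of (agg, field)."""
    where = f" WHERE {' AND '.join(filters)}" if filters else ""
    dims = ", ".join(dimensions)
    # Dimension subquery: the distinct dimension tuples, after filtering.
    joins = [f"(SELECT DISTINCT {dims} FROM {table}{where}) d"]
    selects = [f"d.{c}" for c in dimensions]
    for i, (agg, field) in enumerate(measures):
        alias = f"m{i}"
        # One aggregated measure subquery per measure, grouped by the
        # dimensions so each measure is aggregated at the right grain.
        sq = (f"(SELECT {dims}, {agg}({field}) AS val{i} FROM {table}{where} "
              f"GROUP BY {dims}) {alias}")
        on = " AND ".join(f"d.{c} = {alias}.{c}" for c in dimensions)
        joins.append(f"JOIN {sq} ON {on}")
        selects.append(f"{alias}.val{i}")
    return f"SELECT {', '.join(selects)} FROM " + " ".join(joins)
```

Aggregating each measure in its own subquery before joining avoids the double-counting that a single flat join across multiple fact tables would produce.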
20200401582 | LOW-LATENCY TYPEAHEAD WITH IMPROVED RECALL - In an embodiment, the disclosed technologies include a method for generating a typeahead suggestion list for an input field of a search interface, including receiving, as digital input, a query string that has been extracted from the input field and context data that comprises one or more of: a search term that has been extracted from another input field of the search interface or a search criterion that has been extracted from a member profile that is associated with the query string via an online connection network; executing, on digital data extracted from the online connection network, one or more machine-readable queries that comprise one or more of the query string and the context data, to produce a set of candidate entities; outputting at least part of the set of candidate entities as a suggestion list that the search interface may display in association with the input field to facilitate query formulation via the input field. | 2020-12-24 |
20200401583 | METHOD AND COMPUTING DEVICE FOR GENERATING A SEARCH QUERY FOR A GRAPH DATABASE - In an embodiment, a method for generating a search query for a graph database includes displaying a list of vertex properties on a user interface; receiving, via the user interface, a selection of one or more of the displayed vertex properties; forming a graph database query based on selection; and displaying a report containing a result of the query. | 2020-12-24 |
20200401584 | UTILIZING PERSISTENT AND NON-PERSISTENT CONNECTIONS FOR GENERATING A JOB RESULT FOR A JOB - A method to assist with processing distributed jobs by retrieving and/or synchronizing supplemental job data. The method includes receiving a request pertaining to a job from a first virtualized execution environment using a non-persistent connection between the first virtualized execution environment and a second virtualized execution environment, transmitting, by the secondary machine using a persistent connection between the first virtualized execution environment and the second virtualized execution environment, a task request for supplemental information pertaining to the job, generating a job result for the job based on the supplemental information received from the first virtualized execution environment via the persistent connection, and transmitting, to the first virtualized execution environment, the job result for the job using the non-persistent connection. | 2020-12-24 |
20200401585 | SPATIAL JOINS IN MULTI-PROCESSING COMPUTING SYSTEMS INCLUDING MASSIVELY PARALLEL PROCESSING DATABASE SYSTEMS - Improved techniques for performing Spatial Joins in multi-processing computing systems and environments are disclosed. One or more intersections of the bounds (or limits) of data sets are determined as a join bounding space. The join bounding space is in a space (Global space or Global universe) where a spatial join between (or for) the data can be performed. The determined join bounding space can be partitioned into sub-partitions of the join bounding space. The sub-partitions of the join bounding space can be assigned respectively to multiple processing units for processing in parallel. In addition, distribution cost information associated with the cost of distribution of the datasets (and/or their components) to the processing units of a multi-processing system can be provided and/or used to effectively distribute and/or redistribute processing of the Spatial Join between the processing units of a multi-processing system. Join pairs (or tasks) can be distributed (or redistributed) based on a distribution strategy in an effort to avoid relatively high distribution costs that can occur in Spatial Joins and that can exceed the worst-case situations of simpler (non-spatial) join operations. | 2020-12-24 |
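The bounding-space idea above can be sketched with one-dimensional data to keep it short: intersect the two datasets' bounds, split that join bounding space into sub-partitions, and join only points that fall into the same sub-partition. This is an illustrative sketch, not the patented technique; the grid, the distance predicate, and the single dimension are all simplifying assumptions.

```python
def bounds(pts):
    return min(pts), max(pts)

def spatial_join(a, b, parts=4, eps=1.0):
    # The join bounding space is the intersection of the two datasets' bounds;
    # points outside it cannot participate in the join.
    lo = max(bounds(a)[0], bounds(b)[0])
    hi = min(bounds(a)[1], bounds(b)[1])
    if lo > hi:
        return []
    width = (hi - lo) / parts or 1.0
    cell = lambda x: min(int((x - lo) / width), parts - 1)
    # Partition each dataset over the sub-partitions of the join space;
    # each cell could be assigned to a separate processing unit.
    cells_a, cells_b = {}, {}
    for x in a:
        if lo <= x <= hi:
            cells_a.setdefault(cell(x), []).append(x)
    for y in b:
        if lo <= y <= hi:
            cells_b.setdefault(cell(y), []).append(y)
    # Join only within matching cells. (Pairs straddling a cell boundary
    # would need point replication into neighboring cells; omitted here.)
    out = []
    for c, xs in cells_a.items():
        for x in xs:
            for y in cells_b.get(c, []):
                if abs(x - y) <= eps:
                    out.append((x, y))
    return out
```

Restricting work to the join bounding space is what lets the distribution strategy skip entire regions where only one dataset has data.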
20200401586 | SYSTEMS AND METHODS FOR PERFORMING FUNNEL QUERIES ACROSS MULTIPLE DATA PARTITIONS - Data may be queried and analyzed in order to draw insights. One type of data query that may be performed is a funnel query. A funnel query is a query characterized by a sequence of events, e.g.: “In the last N days, how many unique users performed event A, then event B, and then event C”. Systems and methods for performing funnel queries are provided herein. In some embodiments, the speed at which a computer can answer a funnel query may be increased. In some embodiments, a bitmap is used to eliminate one or more sequences of events that would otherwise need to be traversed during the funnel query. In some embodiments, a sequence of events is stored across multiple data partitions, each data partition covering a different period of time. | 2020-12-24 |
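The bitmap elimination described here (and in the related application 20200401589 below) can be sketched compactly. This is an illustrative sketch under assumed data layouts, not the patented system: the "bitmaps" are modeled as sets of user ids, and the time-partitioned storage is collapsed into a single per-user event list.

```python
def build_bitmaps(events_by_user):
    # One "bitmap" (here a set of user ids) per event type, recording
    # which users performed that event at least once.
    bitmaps = {}
    for user, events in events_by_user.items():
        for e in events:
            bitmaps.setdefault(e, set()).add(user)
    return bitmaps

def funnel_count(events_by_user, steps):
    """Count users who performed `steps` in order, e.g. A then B then C."""
    bitmaps = build_bitmaps(events_by_user)
    # Intersecting the step bitmaps eliminates users whose sequences would
    # otherwise need to be traversed: anyone missing a step cannot qualify.
    candidates = set.intersection(*(bitmaps.get(s, set()) for s in steps))
    count = 0
    for user in candidates:
        it = iter(events_by_user[user])
        # Sharing one iterator across steps enforces in-order occurrence.
        if all(any(e == step for e in it) for step in steps):
            count += 1
    return count
```

The bitmap intersection is the speedup: sequence traversal, the expensive part, runs only for users who performed every funnel step at least once.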
20200401587 | METHOD AND A SYSTEM FOR FUZZY MATCHING OF ENTITIES IN A DATABASE SYSTEM BASED ON MACHINE LEARNING - A method and system of matching field values of a field type are described. Blurring operations are applied to first and second values to obtain blurred values. A first maximum score is determined from first scores for blurred values, where each one of the first scores is indicative of a confidence that a match of the first and the second values occurs with knowledge of a first blurred value. A second maximum score is determined from second scores for the blurred values, where each one of the second scores is indicative of a confidence that a non-match of the first and the second values occurs with knowledge of the first blurred value. Responsive to determining that the first maximum score is greater than the second maximum score, an indication that the first value matches the second value is output. | 2020-12-24 |
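The blur-and-score comparison can be sketched without the machine-learned confidences the abstract implies. In this hand-rolled stand-in, each blurring operation carries assumed weights: agreement under a precise blur is strong match evidence, while disagreement even under a coarse blur is strong non-match evidence. The blurs and weights are invented for illustration.

```python
# (name, blurring operation, match weight on agreement,
#  non-match weight on disagreement)
BLURS = [
    ("exact", lambda s: s, 1.0, 0.1),
    ("lower", str.lower, 0.9, 0.3),
    ("alnum", lambda s: "".join(c for c in s.lower() if c.isalnum()), 0.7, 0.9),
]

def fuzzy_match(a, b):
    match_scores, nonmatch_scores = [0.0], [0.0]
    for _name, blur, match_w, nonmatch_w in BLURS:
        if blur(a) == blur(b):
            # Agreement under this blur supports a match.
            match_scores.append(match_w)
        else:
            # Disagreement under this blur supports a non-match; the
            # coarser the blur, the stronger that evidence.
            nonmatch_scores.append(nonmatch_w)
    # Output a match when the best match evidence beats the best
    # non-match evidence, mirroring the two-maximum comparison.
    return max(match_scores) > max(nonmatch_scores)
```

So "Acme Corp." and "acme corp" disagree exactly but agree after the alphanumeric blur, and the 0.7 match score outweighs the weak 0.3 non-match score.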
20200401588 | Scaling By Dynamic Discovery For Data Processing - Aspects described herein may relate to methods, systems, and apparatuses that partition searchable content and distribute the partitions across a plurality of processing nodes, which in turn further sub-partition the partitions for processing by local search actors in order to increase the speed with which a search request from a user is processed. Processing nodes available to receive partitioned searchable content are registered with an external storage device. The external storage device also maintains a global results collector that compiles results from the partitions of searchable content. Respective local collector actors receive compiled results from local search actors for a processing node and the compiled results are sent to the global results collector for compiling for the plurality of processing nodes. Results of the user search request are then provided to the user. | 2020-12-24 |
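The partition/sub-partition/collector hierarchy can be sketched in miniature. This is a single-process illustration of the data flow only, with invented names; the real system's actor registration and external storage device are omitted.

```python
def local_search(sub_partition, term):
    # A local search actor scans one sub-partition of a node's slice.
    return [doc for doc in sub_partition if term in doc]

def node_search(partition, term, actors=2):
    # Sub-partition the node's slice, one chunk per local search actor,
    # and let the local collector actor compile their results.
    size = max(1, -(-len(partition) // actors))  # ceiling division
    chunks = [partition[i:i + size] for i in range(0, len(partition), size)]
    local_collector = []
    for chunk in chunks:
        local_collector.extend(local_search(chunk, term))
    return local_collector

def global_search(partitions, term):
    # The global results collector compiles the compiled results from
    # every processing node.
    results = []
    for partition in partitions:
        results.extend(node_search(partition, term))
    return results
```

In the described system each level runs concurrently on separate nodes and actors; the loops here just make the two collection tiers explicit.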
20200401589 | SYSTEMS AND METHODS FOR BITMAP FILTERING WHEN PERFORMING FUNNEL QUERIES - Data may be queried and analyzed in order to draw insights. One type of data query that may be performed is a funnel query. A funnel query is a query characterized by a sequence of events, e.g.: “In the last N days, how many unique users performed event A, then event B, and then event C”. Systems and methods for performing funnel queries are provided herein. In some embodiments, the speed at which a computer can answer a funnel query may be increased. In some embodiments, a bitmap is used to eliminate one or more sequences of events that would otherwise need to be traversed during the funnel query. In some embodiments, a sequence of events is stored across multiple data partitions, each data partition covering a different period of time. | 2020-12-24 |
20200401590 | TRANSLATING A NATURAL LANGUAGE QUERY INTO A FORMAL DATA QUERY - A computer-implemented method for generating ground-truth for natural language querying may include providing a knowledge graph as data model, receiving a natural language query from a user and translating the natural language query into a formal data query. The method can also include visualizing the formal data query to the user and receiving a feedback response from the user. The feedback response can include a verified and/or edited formal data query. The method can also include storing the natural language query and the corresponding feedback response as a ground-truth pair. A corresponding system and a related computer program product may be provided. | 2020-12-24 |
20200401591 | DISSIMILAR QUERY ENGINE - A dissimilar query engine is configured to translate a query expressed in a first query language to a second query language. One dissimilar query engine includes a configuration script handler configured to retrieve at least one pair of query language configuration scripts, a uniform query language compiler configured to translate the first query language to a uniform query language using the at least one pair of query language configuration scripts, and a uniform query language interpreter configured to translate the uniform query language to the second query language using the at least one pair of query language configuration scripts. | 2020-12-24 |
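The compile-to-uniform, interpret-from-uniform pipeline can be sketched with toy dialects. Everything here is invented for illustration: the configuration "scripts" are reduced to keyword maps, and real query languages would need a proper parser rather than token substitution.

```python
# Configuration script pair: source dialect keyword -> uniform keyword,
# and uniform keyword -> target dialect keyword. Both dialects are made up.
SOURCE_CONFIG = {"FETCH": "SELECT", "OUT_OF": "FROM", "IF": "WHERE"}
TARGET_CONFIG = {"SELECT": "GET", "FROM": "IN", "WHERE": "FILTER"}

def compile_to_uniform(query, config):
    # The uniform query language compiler: map source tokens into the
    # uniform form, passing unknown tokens (identifiers, literals) through.
    return " ".join(config.get(tok, tok) for tok in query.split())

def interpret_from_uniform(uniform_query, config):
    # The uniform query language interpreter: map uniform tokens into
    # the target dialect.
    return " ".join(config.get(tok, tok) for tok in uniform_query.split())

def translate(query):
    uniform = compile_to_uniform(query, SOURCE_CONFIG)
    return interpret_from_uniform(uniform, TARGET_CONFIG)
```

The payoff of the uniform intermediate form is that supporting N dialects needs N configuration script pairs rather than N×N direct translators.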
20200401592 | Dynamic Rebuilding of Query Execution Trees and Reselection of Query Execution Operators - A method dynamically selects query execution operators. A database engine receives a query, parses the query to form a query execution tree, and compiles the tree to form a first executable plan that includes in-memory operators. The database engine executes the first plan, including executing in-memory operators in parallel. While executing a first in-memory operator, insufficient memory is detected. In response, the database engine aborts the execution, and recompiles the query tree in two ways, forming a second executable plan that replaces the first in-memory operator with a first spooling operator. The first spooling operator executes within a fixed volatile memory budget and swaps to non-volatile memory according to the budget. A third executable plan retains the first in-memory operator, but schedules it to run serially. The database engine selects either the second plan or the third plan, and executes the selected plan to return results for the query. | 2020-12-24 |
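The abort-and-recompile fallback can be sketched with a sort operator. This is an illustrative sketch, not the patented engine: the memory grant is modeled as a maximum row count, only the spooling alternative is shown (the serial-execution alternative is omitted), and "non-volatile memory" is a temporary file.

```python
import heapq
import os
import pickle
import tempfile

def in_memory_sort(rows, budget):
    # The first plan: an in-memory operator that sorts everything at once,
    # failing if the data exceeds its memory grant.
    if len(rows) > budget:
        raise MemoryError("insufficient memory for in-memory operator")
    return sorted(rows)

def spooling_sort(rows, budget):
    # The recompiled plan: sort budget-sized runs, spool each run to
    # non-volatile storage, then merge the runs.
    paths = []
    for i in range(0, len(rows), budget):
        fd, path = tempfile.mkstemp()
        with os.fdopen(fd, "wb") as f:
            pickle.dump(sorted(rows[i:i + budget]), f)
        paths.append(path)
    runs = []
    for path in paths:
        with open(path, "rb") as f:
            runs.append(pickle.load(f))
        os.remove(path)
    return list(heapq.merge(*runs))

def execute(rows, budget):
    try:
        return in_memory_sort(rows, budget)
    except MemoryError:
        # Abort the in-memory operator and run the spooling plan, which
        # stays within the fixed volatile memory budget.
        return spooling_sort(rows, budget)
```

The engine in the abstract additionally builds a third plan that keeps the in-memory operator but schedules it serially, then picks whichever alternative it expects to be cheaper.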
20200401593 | Dynamic Phase Generation And Resource Load Reduction For A Query - Techniques are described for dynamic phase generation and load reduction for a query. A query, for instance, is based on user input of a query in a natural language (NL) form, e.g., an NL query. Generally, an NL query may include multiple terms and/or phrases that make up a complex query, such as a sentence in a human-readable language. Accordingly, to enable a query result to be generated, the NL query is parsed into multiple logical sections and query contexts are determined for the logical sections. A set of search phases is generated based on the logical sections and the query contexts. The search phases can then be executed in a specific execution order to generate a query result for the NL query. | 2020-12-24 |
20200401594 | RESCALING LAYER IN NEURAL NETWORK - In an example embodiment, a platform is provided that utilizes information available to a computer system to feed a neural network. The neural network is trained to determine both the probability that a searcher would select a given potential search result if it was presented to him or her and the probability that a subject of the potential search result would respond to a communication from the searcher. These probabilities are combined to produce a single score that can be used to determine whether to present the searcher with the potential search result and, if so, how high to rank the potential search result among other search results. During the training process, a rescaling transformation for each input feature is learned and applied to the values for the input features. | 2020-12-24 |
20200401595 | ESTIMATING THE NUMBER OF DISTINCT ENTITIES FROM A SET OF RECORDS OF A DATABASE SYSTEM - A method and system for estimating a number of distinct entities in a set of records are described. For each one of a subset of records, a set of match rule keys is generated based on a set of match rules. Each match rule from the set of match rules defines a match between records, and each match rule key from the set of match rule keys includes at least a key field value. A high order key for the record is determined based on the match rule keys, and a counter associated with the high order key is incremented. When each record from the subset of records has been processed by determining the match rule keys and incrementing the counter(s) of the high order keys, the number of counters that have a non-zero value is summed to estimate the number of distinct entities in the records. | 2020-12-24 |
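The counting scheme can be sketched as follows. This is a deliberately simplified illustration: the two match rules (normalized email, normalized name plus zip) are invented, and the high order key here just collapses records that agree on every rule, whereas a real system would need a richer rule for records that match on only some rules.

```python
from collections import Counter

# Each match rule maps a record to a match rule key containing a key
# field value; these two rules are hypothetical examples.
MATCH_RULES = [
    lambda r: ("email", r["email"].strip().lower()),
    lambda r: ("name_zip", r["name"].strip().lower() + "|" + r["zip"]),
]

def estimate_distinct(records):
    counters = Counter()
    for r in records:
        keys = [rule(r) for rule in MATCH_RULES]
        # High order key: a canonical combination of the match rule keys,
        # so records agreeing on every rule increment the same counter.
        high_order = tuple(sorted(keys))
        counters[high_order] += 1
    # The estimate is the number of counters holding a non-zero value.
    return sum(1 for v in counters.values() if v > 0)
```

Because only one counter per high order key is kept, the estimate can be computed in a single pass without materializing full record-to-record match pairs.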
20200401596 | TEST DATA INTEGRATION SYSTEM AND METHOD THEREOF - A test data integration system and a method thereof are provided. The method includes: collecting, by each of a plurality of client devices, a plurality of test information obtained from coupled automatic test equipment when performing a test operation, and transmitting the plurality of test information to a server; receiving, by the server, the plurality of test information, and generating a graphical user interface according to the plurality of test information and displaying an integration analysis result corresponding to the plurality of test information. | 2020-12-24 |
20200401597 | DEVICE, SYSTEM AND METHOD FOR INTEROPERABILITY BETWEEN DIGITAL EVIDENCE MANAGEMENT SYSTEMS - A device, system and method for interoperability between digital evidence management systems (DEMS) is provided. A DEMS proxy computing device receives, from a requesting device, a search string requesting digital evidence. The proxy provides, to a plurality of separate DEMS devices maintained by separate public safety agencies: corresponding search strings; and identification information identifying one or more of: a public safety role of a user of the requesting device, and a public safety agency membership of the user. The proxy receives, from at least a particular DEMS device, of the plurality of separate DEMS devices, a digital evidence record based on the search string, the digital evidence record describing a piece of digital evidence managed by the particular DEMS device, and including chain-of-custody information. The proxy provides, to the requesting device, the digital evidence record and the chain-of-custody information. | 2020-12-24 |