38th week of 2020 patent application highlights part 49 |
Patent application number | Title | Published |
20200293429 | Semiconductor Apparatus and Debug System - It is an object of the present invention to provide a debug system that accesses a semiconductor apparatus from the outside using a simple configuration with little overhead. | 2020-09-17 |
20200293430 | ENVIRONMENT MODIFICATION FOR SOFTWARE APPLICATION TESTING - Examples of techniques for environment modification for software application testing are described herein. An aspect includes, based on starting testing of an application under test using a test case in a testing environment, determining whether modification of the testing environment is enabled. Another aspect includes, based on determining that modification of the testing environment is enabled, modifying the testing environment. Another aspect includes running the testing of the application under test using the test case in the modified testing environment. | 2020-09-17 |
20200293431 | TEST SYSTEM - A test system comprises a display and a processor. The processor is configured to generate a graphical user interface that is displayed on the display. The graphical user interface generated provides a visual programming editor. The visual programming editor comprises at least one of a first visual programming member and a second visual programming member. The processor is configured to provide a semantic zoom function for the graphical user interface. The semantic zoom function is configured to provide a semantic zooming of at least one of the first visual programming member and the second visual programming member such that a subgroup assigned to the respective visual programming member is displayed on the display via the graphical user interface. | 2020-09-17 |
20200293432 | COMPUTER CODE TEST SCRIPT GENERATING TOOL USING VISUAL INPUTS - A tool includes an interface, a memory, a conversion engine, an identifier tool, and a script engine. The interface communicatively couples the tool to a server. The tool obtains a plurality of visual inputs from a computer program specification document. The memory stores the plurality of visual inputs and a set of known computer code elements. Each respective element of the set of known computer code elements includes predetermined testing criteria for testing computer code that includes the respective element. The conversion engine generates a plurality of textual objects from the plurality of visual inputs. The identifier determines whether each respective textual object matches a respective element of the set of known computer code elements. If a match is found, the identifier tool associates the predetermined testing criteria of the respective element to the respective textual object. The script engine generates a test script using the predetermined testing criteria. | 2020-09-17 |
20200293433 | Automating Identification of Test Cases for Library Suggestion Models - A method, system, and apparatus are disclosed for adding library models to a library knowledge base by defining a template for a library configuration file that conveys information about each library model, custom inputs and code snippets to facilitate library comparison operations, and education content for the library model, where the library configuration file template may be automatically filled by populating selected data fields in the template with information identifying the library model, scraping documentation pages to extract test cases, and then scraping test case code to extract the test case input parameters for input to an input/output matching engine to evaluate a repository of code snippets and identify a set of functionally similar code snippets for inclusion one or more data fields in the template. | 2020-09-17 |
20200293434 | TEST SELECTION DEVICE - An OS difference information acquisition unit | 2020-09-17 |
20200293435 | FUNCTION MODIFICATION FOR SOFTWARE APPLICATION TESTING - Examples of techniques for function modification for software application testing are described herein. An aspect includes, based on a function call to a function by an application under test that is being tested using a test case in a testing environment, determining whether modification of the function is enabled. Another aspect includes, based on determining that modification of the function is enabled, running the function in a modified mode. | 2020-09-17 |
20200293436 | VALIDATION OF MOBILE DEVICE WORKFLOWS - Methods, systems, and apparatus, including computer programs encoded on a computer-readable storage media, for validation of mobile device workflows. In some implementations, a mobile device application to be tested is identified. An installation of the application on each of a plurality of remote mobile devices, including mobile devices having different hardware configurations and different operating system configurations, is initiated. Usage of the application by instructing the remote mobile devices to perform a series of operations using the application is simulated. Performance of the respective mobile devices during the simulated usage is measured. A document indicating performance of the application across the different mobile device configurations is generated. | 2020-09-17 |
20200293437 | DEPENDENCY MAPPING BETWEEN PROGRAM CODE AND TESTS TO RAPIDLY IDENTIFY ERROR SOURCES - An example system includes (i) a software product having a plurality of code units that accesses a database, (ii) a processor, and (iii) a non-transitory computer readable storage medium having stored thereon software tests and instructions that cause the processor to: execute the software tests on a first version of the software product; determine a first mapping between each respective software test and one or more of the code units; determine a second mapping between each respective software test and one or more data units in the database; determine that, between a second version and the first version of the software product, a particular code and data unit have changed; select, from the first and the second mappings, a set of software tests with mappings to the particular code unit or data unit; and execute the set of software tests on the second version of the software product. | 2020-09-17 |
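The test-selection step this abstract describes can be sketched as a small set operation. The function name and data shapes below are illustrative assumptions, not the patent's actual implementation: tests are mapped to the code units and data units they touch, and only tests whose mapped units changed between versions are re-run.

```python
# Hypothetical sketch of mapping-based test selection (names assumed):
# given recorded mappings from tests to code units and data units,
# select only those tests whose mapped units changed between versions.

def select_tests(code_map, data_map, changed_code, changed_data):
    """Return the set of tests touching any changed code or data unit.

    code_map / data_map: dict mapping test name -> set of unit names.
    changed_code / changed_data: sets of unit names that differ
    between the first and second versions of the software product.
    """
    selected = set()
    for test, units in code_map.items():
        if units & changed_code:          # test exercises a changed code unit
            selected.add(test)
    for test, units in data_map.items():
        if units & changed_data:          # test touches a changed data unit
            selected.add(test)
    return selected
```

Running only the selected subset is what lets the scheme "rapidly identify error sources": the mappings prune tests with no path to the change.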
20200293438 | SYSTEMS AND METHODS FOR MEMORY SYSTEM MANAGEMENT - Methods of mapping memory regions to processes based on thermal data of memory regions are described. In some embodiments, a memory controller may receive a memory allocation request. The memory allocation request may include a logical memory address. The method may further include mapping the logical memory address to an address in a memory region of the memory system based on thermal data for memory regions of the memory system. Additional methods and systems are also described. | 2020-09-17 |
20200293439 | MEMORY SYSTEM - According to one embodiment, a memory system includes a NAND flash memory that has a first area, a second area, and a third area, and a controller that controls data transfer between a host device and the memory system. The controller writes data transmitted from the host device to the first area by a first method of storing 1-bit data per memory cell, and at a first timing, reads at least a part of data stored in the first area to generate one unit data, compresses the unit data, and writes the compressed unit data to the second area. At a second timing, the controller decompresses the read compressed unit data from the second area, and writes the decompressed unit data to the third area by a second method of storing a plurality of bits of data per memory cell. | 2020-09-17 |
20200293440 | FLASH MEMORY CONTROLLER, SD CARD DEVICE, METHOD USED IN FLASH MEMORY CONTROLLER, AND HOST DEVICE COUPLED TO SD CARD DEVICE - A flash memory controller includes a processing circuit which is arranged for receiving a first command and a first portion address parameter, receiving a second command and a second portion address parameter, obtaining a complete address parameter by combining the first portion address parameter with the second portion address parameter, and performing a corresponding operation upon a flash memory according to the complete address parameter and a command type of the second command. | 2020-09-17 |
20200293441 | DATA STORAGE DEVICES AND DATA PROCESSING METHODS - A data storage device includes a memory device and a memory controller. The memory controller is configured to configure a first predetermined memory block which is an SLC memory block and a second predetermined memory block which is an MLC memory block as buffers to receive data. The memory controller dynamically determines which scheme to use to receive data in a predetermined period according to an amount of valid data stored in the memory device. When the memory controller determines to use a first scheme, the memory controller uses the first predetermined memory block to receive data. When the memory controller determines to use a second scheme, the memory controller uses the first predetermined memory block and the second predetermined memory block to receive data. When the memory controller determines to use a third scheme, the memory controller uses the second predetermined memory block to receive data. | 2020-09-17 |
20200293442 | METHOD FOR MANAGING A MEMORY APPARATUS - A memory apparatus includes: a plurality of non-volatile (NV) memory elements each including a plurality of physical blocks; a volatile memory for storing a global page address linking table; a transmission interface, for receiving commands from a host; and a processing unit, for obtaining a first host address and first data from a first host command, and a second host address and second data from a second host command, linking the first host address to a first page of a physical block and storing the first data in the first page, and linking the second host address to a second page of the physical block and storing the second data in the second page to build a local page address linking table; wherein a difference value of the first host address and the second host address is greater than a number of pages of the physical block. | 2020-09-17 |
20200293443 | GARBAGE COLLECTION - AUTOMATIC DATA PLACEMENT - A Solid State Drive (SSD) is disclosed. The SSD may include flash memory to store data. An SSD controller may manage reading and writing data to the flash memory. The SSD may include an automatic stream detection logic to select a stream identifier responsive to attributes of data. A garbage collection logic may select an erase block and program valid data in the erase block into a second block responsive to a stream ID determined by the automatic stream detection logic. The stream ID may be determined after the garbage collection logic has selected the erase block for garbage collection. | 2020-09-17 |
20200293444 | SYSTEMS AND METHODS FOR IMPLEMENTING A FOUR-DIMENSIONAL SUPERBLOCK - A solid state drive (SSD) is presented herein that includes a plurality of memory dies communicatively arranged in a plurality of communication channels such that each respective memory die is associated with a respective one communication channel of the plurality of communication channels, each respective memory die comprises one or more die regions, and each of the one or more die regions comprises a plurality of physical blocks configured to store data. The SSD further includes a memory controller communicatively coupled to the plurality of memory dies. The memory controller is configured to, upon a first power up of the SSD, determine a parameter of the SSD and for each of the one or more die regions, associate, based on the parameter, a number of physical blocks of the plurality of physical blocks with a block region of a plurality of block regions. | 2020-09-17 |
20200293445 | ADAPTIVE CACHE RECONFIGURATION VIA CLUSTERING - A method of dynamic cache configuration includes determining, for a first clustering configuration, whether a current cache miss rate exceeds a miss rate threshold. The first clustering configuration includes a plurality of graphics processing unit (GPU) compute units clustered into a first plurality of compute unit clusters. The method further includes clustering, based on the current cache miss rate exceeding the miss rate threshold, the plurality of GPU compute units into a second clustering configuration having a second plurality of compute unit clusters fewer than the first plurality of compute unit clusters. | 2020-09-17 |
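The reconfiguration decision in this abstract reduces to a threshold test that regroups compute units into fewer clusters when misses climb. The function below is a minimal sketch under assumed names; the halving step is my simplification, since the abstract only requires the second configuration to have fewer clusters than the first.

```python
# Hypothetical sketch of miss-rate-driven cluster reconfiguration:
# when the miss rate for the current clustering exceeds a threshold,
# the GPU compute units are regrouped into fewer, larger clusters.
# Halving the count is an assumed policy, not the patent's.

def next_cluster_count(current_clusters, miss_rate, threshold):
    """Return the cluster count for the next clustering configuration."""
    if miss_rate > threshold and current_clusters > 1:
        return max(1, current_clusters // 2)  # fewer clusters -> more sharing
    return current_clusters                   # keep the current configuration
```

Fewer clusters mean each shared cache serves more compute units, which can raise hit rates at the cost of contention.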
20200293446 | IN-MEMORY NORMALIZATION OF CACHED OBJECTS TO REDUCE CACHE MEMORY FOOTPRINT - Database objects are retrieved from a database and parsed into normalized cached data objects. The database objects are stored in the normalized cached data objects in a cache store, and tenant data requests are serviced from the normalized cached data objects. The normalized cached data objects include references to shared objects in a shared object pool that can be shared across different rows of the normalized cached data objects and across different tenant cache systems. | 2020-09-17 |
20200293447 | DESTAGING METADATA TRACKS FROM CACHE - Provided are a computer program product, system, and method for destaging metadata tracks from cache. A counter for a metadata track is updated in response to modifying the metadata track in the cache, wherein there are counters for metadata tracks in the cache. The metadata track is destaged from the cache in response to the counter for the metadata track being less than a threshold value. The counter for the metadata track is decremented based on a number of modified metadata tracks in the cache. | 2020-09-17 |
20200293448 | COHERENCY MAINTENANCE VIA PHYSICAL CACHE COORDINATE COMPARISON - Utilizing physical cache address comparison for maintaining coherency. Operations are performed on data in lines of a cache of the computing system and virtual addresses are loaded into a cache controller. The virtual addresses correspond with lines associated with performing the operations. A physical address of a line is determined in response to having performed a first cache directory lookup of the line. The physical address from the first operation is compared with other physical addresses associated with other operations to determine whether the other operations utilize the same physical address as the first operation. In response to matching physical locations, determinations are made as to whether a conflict exists in the data at the physical addresses that match. Thus, the coherency maintenance is free from looking up virtual addresses to determine whether the line of the cache includes incoherent data. | 2020-09-17 |
20200293449 | EFFICIENT EARLY ORDERING MECHANISM - Data units are stored in private caches in nodes of a multiprocessor system, each node containing at least one processor (CPU), at least one cache private to the node and at least one cache location buffer (CLB) private to the node. In each CLB location information values are stored, each location information value indicating a location associated with a respective data unit, wherein each location information value stored in a given CLB indicates the location to be either a location within the private cache disposed in the same node as the given CLB, to be a location in one of the other nodes, or to be a location in a main memory. Coherence of values of the data units is maintained using a cache coherence protocol. The location information values stored in the CLBs are updated by the cache coherence protocol in accordance with movements of their respective data units. | 2020-09-17 |
20200293450 | DATA PREFETCHING FOR GRAPHICS DATA PROCESSING - Embodiments are generally directed to data prefetching for graphics data processing. An embodiment of an apparatus includes one or more processors including one or more graphics processing units (GPUs); and a plurality of caches to provide storage for the one or more GPUs, the plurality of caches including at least an L1 cache and an L3 cache, wherein the apparatus to provide intelligent prefetching of data by a prefetcher of a first GPU of the one or more GPUs including measuring a hit rate for the L1 cache; upon determining that the hit rate for the L1 cache is equal to or greater than a threshold value, limiting a prefetch of data to storage in the L3 cache, and upon determining that the hit rate for the L1 cache is less than a threshold value, allowing the prefetch of data to the L1 cache. | 2020-09-17 |
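The gating rule in this abstract is a single threshold comparison on the L1 hit rate. The sketch below assumes a 0.9 default threshold purely for illustration; the abstract leaves the threshold value open.

```python
# Hypothetical sketch of hit-rate-gated prefetching: if the measured L1
# hit rate is at or above a threshold, prefetched data is held back to
# the L3 cache; below the threshold, prefetch into L1 is allowed.
# The 0.9 default threshold is an assumption for illustration.

def prefetch_target(l1_hits, l1_accesses, threshold=0.9):
    """Return the cache level prefetched data should be placed in."""
    hit_rate = l1_hits / l1_accesses if l1_accesses else 0.0
    return "L3" if hit_rate >= threshold else "L1"
```

The intuition: when L1 is already serving most requests, prefetching into it risks evicting hot lines, so prefetches are parked in L3 instead.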
20200293451 | MEMORY SYSTEM FOR MEMORY SHARING AND DATA PROCESSING SYSTEM INCLUDING THE SAME - A data processing system includes a host processor, a processor suitable for processing a task instructed by the host processor, a memory, shared by the host processor and the processor, that is suitable for storing data processed by the host processor and the processor, respectively, and a memory controller suitable for checking whether stored data processed by the host processor and the processor are reused, and for sorting and managing the stored data as first data and second data based on the check result. | 2020-09-17 |
20200293452 | MEMORY DEVICE AND METHOD INCLUDING CIRCULAR INSTRUCTION MEMORY QUEUE - A memory device includes a memory bank including one or more bank arrays, a PIM circuit configured to perform an operation logic processing operation, and an instruction memory including first to m | 2020-09-17 |
20200293453 | COMPUTING SYSTEM AND METHOD USING BIT COUNTER - A computing system using a bit counter may include a host device; a cache configured to temporarily store data of the host device, and including a plurality of sets; a cache controller configured to receive a multi-bit cache address from the host device, perform computation on the cache address using a plurality of bit counters, and determine a hash function of the cache; a semiconductor device; and a memory controller configured to receive the cache address from the cache controller, and map the cache address to a semiconductor device address. | 2020-09-17 |
20200293454 | MEMORY SYSTEM - A memory system includes: a non-volatile first memory; a second memory which is a set-associative cache memory including a plurality of ways; and a memory controller. The first memory stores a plurality of pieces of first information each of which associates a logical address indicating a location in a logical address space of the memory system with a physical address indicating a location in the first memory. The plurality of pieces of first information includes second information and third information. The second information associates a logical address with a physical address in a first unit. The third information associates a logical address with a physical address in a second unit different from the first unit. The memory controller caches the second information only in a first way. The memory controller caches the third information only in a second way different from the first way. | 2020-09-17 |
20200293455 | CACHE ADDRESS MAPPING METHOD AND RELATED DEVICE - This application discloses a cache address mapping method and a related device. The method includes: obtaining a binary file, the binary file including a first hot section; obtaining alignment information of a second hot section, the second hot section is a hot section that has been loaded into a cache, and the alignment information includes a set index of a last cache set occupied by the second hot section; and performing an offset operation on the first hot section based on the alignment information. According to embodiments of the present invention, a problem of a conflict miss of a cache in an N-way set associative structure can be resolved without increasing physical hardware overheads, thereby improving a cache hit rate. | 2020-09-17 |
20200293456 | PREEMPTIVE PAGE FAULT HANDLING - Methods and apparatus relating to predictive page fault handling. In an example, an apparatus comprises a processor to receive a virtual address that triggered a page fault for a compute process, check a virtual memory space for a virtual memory allocation for the compute process that triggered the page fault and manage the page fault according to one of a first protocol in response to a determination that the virtual address that triggered the page fault is a last page in the virtual memory allocation for the compute process, or a second protocol in response to a determination that the virtual address that triggered the page fault is not a last page in the virtual memory allocation for the compute process. Other embodiments are also disclosed and claimed. | 2020-09-17 |
20200293457 | APPARATUS AND METHOD - Apparatus comprises two or more processing devices each having an associated translation lookaside buffer to store translation data defining address translations between virtual and physical memory addresses, each address translation being associated with a respective virtual address space; and control circuitry to control the transfer of at least a subset of the translation data from the translation lookaside buffer associated with a first processing device to the translation lookaside buffer associated with a second, different, processing device. | 2020-09-17 |
20200293458 | STORAGE DEVICE AND METHOD OF OPERATING THE SAME - Provided herein may be a storage device and a method of operating the same. The method of operating a storage device including a replay protected memory block (RPMB) may include receiving a write request for the RPMB from an external host, selectively storing data in the RPMB based on an authentication operation, receiving a read request from the external host, and providing result data to the external host in response to the read request, wherein the read request includes a message indicating that a read command to be subsequently received from the external host is a command related to the result data. | 2020-09-17 |
20200293459 | SYSTEMS AND METHODS FOR DETECTING EXPECTED USER INTERVENTION ACROSS MULTIPLE BLADES DURING A KEYBOARD, VIDEO, AND MOUSE (KVM) SESSION - Embodiments of systems and methods for detecting expected user intervention across multiple blades during a Keyboard, Video, and Mouse (KVM) session are discussed. In an embodiment, a chassis may include an Enclosure Controller (EC) coupled to a plurality of Information Handling Systems (IHSs) in a chassis, the EC comprising: a processor; and a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution, cause the EC to: select a first IHS to initiate a first KVM session; register for a notification from the first IHS while the first IHS performs one or more operations; switch to a second IHS to initiate a second KVM session; and during the second KVM session, receive the notification. | 2020-09-17 |
20200293460 | ELECTRONIC DEVICE FOR CONTROLLING EXTERNAL CONVERSION DEVICE - In one embodiment, an electronic device controls an external conversion device connected to an external device. The electronic device includes a connection terminal formed on a portion of an outer surface thereof, a first converter connected to the connection terminal and configured to convert signals, and a processor operatively connected to the first converter. The processor is configured, when the external conversion device is connected to the connection terminal, and the external device is connected to the external conversion device, to identify a type of the external device connected to the external conversion device by receiving an operation signal of the external device through the external conversion device, the connection terminal, and the first converter, and to input or output a signal corresponding to the external device through the external conversion device. | 2020-09-17 |
20200293461 | TRAINING AND OPERATIONS WITH A DOUBLE BUFFERED MEMORY TOPOLOGY - System and method for training and performing operations (e.g., read and write operations) on a double buffered memory topology. In some embodiments, eight DIMMs are coupled to a single channel. The training and operations schemes are configured with timing and signaling to allow training and operations with the double buffered memory topology. In some embodiments, the double buffered memory topology includes one or more buffers on a system board (e.g., motherboard). | 2020-09-17 |
20200293462 | APPARATUS AND METHODS FOR ACCELERATING TASKS DURING STORAGE CACHING/TIERING IN A COMPUTING ENVIRONMENT - An apparatus for accelerating tasks during storage caching and tiering includes a processor. First and second storage units are coupled to the processor. A memory unit is coupled to the processor. The memory unit is configured to receive a write data operation. An amount of dirty data in the first storage unit is determined based on the received write data operation. The dirty data includes data present in the first storage unit to be synced to the second storage unit. A sync rate associated with a read data operation from the first storage unit to the second storage unit is decelerated when the amount of dirty data is less than a first threshold value. A write rate associated with a write data operation to the first storage unit is accelerated when the amount of dirty data is less than the first threshold value. | 2020-09-17 |
20200293463 | ACCURATE CAN-BASED DISTRIBUTED CONTROL SYSTEM SIMULATIONS - Embodiments of the present invention are directed to a computer-implemented method for simulating a plurality of electronic control units (“ECU”s) in communication over a simulated bus. The method includes simulating an operation of a first ECU and an operation of a second ECU and performing arbitration at a packet-level granularity at a packet transmission start point with respect to a first packet sent to the simulated bus by the first ECU and a second packet sent to the simulated bus by the second ECU. The method identifies an initially winning ECU in the arbitration and a zone from the packet transmission start point to a bit where the initially winning ECU is determined to win based on the arbitration and continues the simulation of the operation of the first ECU and the operation of the second ECU to the end of the zone. | 2020-09-17 |
20200293464 | TERMINATION OF NON-VOLATILE MEMORY NETWORKING MESSAGES AT THE DRIVE LEVEL - A storage device, such as a solid-state drive, is configured to receive messages, such as non-volatile memory networking messages issued by a host processor or computer, that utilize NVM Express over Fabric (NVMe-oF) or NVMe/Transmission Control Protocol communication formatting for transmitting the command over a network fabric. The storage device is configured to terminate the NVMe-oF or NVMe/TCP formatted communication at the storage drive level. The storage device may further be configured to issue reply messages that include NVMe-oF or NVMe/TCP formatting for the communication formatting used to deliver the reply messages over a network fabric. | 2020-09-17 |
20200293465 | MULTI-PROTOCOL SUPPORT FOR TRANSACTIONS - Examples described herein relate to executing a poller to poll for received communications over multiple transport layer protocols from a connection to identify a received communication from one of the multiple transport layer protocols and identify a second received communication from a different one of the multiple transport layer protocols. A change to the different one of the multiple transport layer protocols occurs in response to failure of the one of the multiple transport layer protocols or slow transport rate using the one of the multiple transport layer protocols. In some examples, the poller is executed in user space and transport layer protocol processing of the received communication and second received communication occur in kernel space. | 2020-09-17 |
20200293466 | SYSTEM AND METHOD FOR SERIAL INTERFACE MEMORY USING SWITCHED ARCHITECTURE - A memory system for storing and retrieving data may include a controller, a first switch, a second switch connected to the first switch via an interconnecting bus, and a plurality of memory devices. The controller may have a first serial interface. The first switch may have one or more serial interfaces and one or more memory ports. The first serial interface of the controller may be communicatively connected to a first serial interface of the one or more serial interfaces of the first switch via a first serial bus. Each of the one or more memory ports of the first switch may be communicatively connected to a subset of the plurality of memory devices via a memory bus. The first switch may transfer data between the controller and the subsets of the plurality of memory devices via the one or more memory ports. | 2020-09-17 |
20200293467 | MEMORY CONTROLLER - The application discloses a memory controller coupled to a memory module for controlling access to the memory module, wherein the memory module comprises one or more memory groups each having a plurality of memory blocks, and the memory controller comprises: a registering clock driver coupled to the memory module for providing to the memory module a data access command so as to control access to the memory module; and one or more data buffers coupled to the registering clock driver, each data buffer coupled to a memory group via a memory group data interface; wherein at least one of the memory group data interfaces comprises a plurality of data buses each coupled to one or more memory blocks of the memory group that the memory group data interface is coupled to, such that the memory group can exchange data with the data buffer via the plurality of data buses under the control of the registering clock driver. | 2020-09-17 |
20200293468 | MEMORY SYSTEM DESIGN USING BUFFER(S) ON A MOTHER BOARD - A mother board topology including a processor operable to be coupled to one or more communication channels for communicating commands. The topology includes a first communication channel electrically coupling a first set of two or more dual in-line memory modules (DIMMs) and a first primary data buffer on a mother board. The topology includes a second communication channel electrically coupling a second set of two or more DIMMs and a second primary data buffer on the mother board. The topology includes a third channel electrically coupling the first primary data buffer, the second primary data buffer, and the processor. | 2020-09-17 |
20200293469 | ASYMMETRIC-CHANNEL MEMORY SYSTEM - An expandable memory system that enables a fixed signaling bandwidth to be configurably re-allocated among dedicated memory channels. Memory channels having progressively reduced widths are dedicated to respective memory sockets, thus enabling point-to-point signaling with respect to each memory socket without signal-compromising traversal of unloaded sockets or costly replication of a full-width memory channel for each socket. | 2020-09-17 |
20200293470 | EVALUATION APPARATUS, SEMICONDUCTOR APPARATUS, AND TRANSMISSION CONTROL METHOD - According to one embodiment, there is provided an evaluation apparatus including a first data bus and a transmission device. The transmission device is electrically connected to the first data bus at an output side thereof and configured to receive data and another signal different from the data. The transmission device is configured to supply the data to the first data bus in a first period during which a valid signal is at an active level, and supply the other signal to the first data bus in a second period during which the valid signal is at a non-active level. | 2020-09-17 |
20200293471 | Local Internal Discovery and Configuration of Individually Selected and Jointly Selected Devices - A memory controller interfaces with one or more memory devices having configurable width data buses and configurable connectivity between data pins of the memory devices and data pins of the memory controller. Upon initialization of the memory devices, the memory controller automatically discovers the connectivity configuration of the one or more memory devices, including both individually selected and jointly selected devices. After discovering connectivity of the connected devices, the memory controller configures the memory devices according to the discovered connectivity and assigns unique addresses to jointly selected devices. | 2020-09-17 |
20200293472 | METHOD FOR REMOTELY TRIGGERED RESET OF A BASEBOARD MANAGEMENT CONTROLLER OF A COMPUTER SYSTEM - A method for remotely triggered reset of a baseboard management controller (BMC) of a computer system is disclosed. The computer system includes a first computer node and a second computer node. The method includes: (A) receiving, by a first BMC of the first computer node, from a computer device and via a network, a reset command which indicates that reset of a second BMC of the second computer node should be triggered; and (B) transmitting, by the first BMC and to the second BMC via electrical connection between the first and second BMCs, a reset signal that corresponds to the reset command, so as to trigger reset of the second BMC. | 2020-09-17 |
20200293473 | ARITHMETIC PROCESSOR AND CONTROL METHOD FOR ARITHMETIC PROCESSOR - One aspect of the present disclosure relates to an arithmetic processor including a detection unit that detects instruction information, wherein an instruction including a processing instruction to be performed after completion of DMA (Direct Memory Access) in a DMA request instruction is described in the instruction information, and a data processing unit that uses data transferred by the DMA request instruction to execute an operation corresponding to the processing instruction based on the instruction information detected by the detection unit. | 2020-09-17 |
20200293474 | METHOD FOR MANAGING ACCESS TO A SHARED BUS AND CORRESPONDING ELECTRONIC DEVICE - In accordance with an embodiment, a method for managing access to a bus shared by interfaces includes: when access to the bus is granted to one of the interfaces, triggering a counting having a minimum counting period; and when at least one access request to the bus emanating from at least one other of the interfaces is received during the minimum counting period, releasing the access granted to the one of the interfaces, and creating an arbitration point at an end of the minimum counting period. | 2020-09-17 |
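The minimum-counting-period arbitration described in this abstract can be sketched as a small simulation. All class and interface names here are illustrative assumptions; the patent describes the scheme only abstractly.

```python
class SharedBusArbiter:
    """Toy model of arbitration with a minimum counting period: the bus
    owner keeps the bus for at least `min_counting_period` ticks, and an
    arbitration point is created at the end of that period only if another
    interface requested the bus in the meantime."""

    def __init__(self, min_counting_period):
        self.min_period = min_counting_period
        self.owner = None
        self.grant_time = None
        self.pending = []          # interfaces waiting for the bus

    def grant(self, iface, now):
        """Grant the bus and (re)start the minimum counting period."""
        self.owner = iface
        self.grant_time = now

    def request(self, iface, now):
        """An interface asks for the bus; queue it if the bus is owned."""
        if self.owner is None:
            self.grant(iface, now)
        else:
            self.pending.append(iface)

    def tick(self, now):
        """At the end of the minimum counting period, release the bus and
        re-arbitrate if a competing request arrived during the period."""
        if (self.owner is not None and self.pending
                and now - self.grant_time >= self.min_period):
            released = self.owner
            self.grant(self.pending.pop(0), now)
            return released        # interface that lost the bus
        return None

arb = SharedBusArbiter(min_counting_period=4)
arb.request("uart", now=0)        # bus is free -> granted immediately
arb.request("spi", now=1)         # arrives during the counting period
assert arb.tick(now=2) is None    # period not elapsed: no arbitration point
assert arb.tick(now=4) == "uart"  # period elapsed: bus released, re-arbitrated
assert arb.owner == "spi"
```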
20200293475 | SYSTEM AND INTERFACE CIRCUIT FOR DRIVING DATA TRANSMISSION LINE TO TERMINATION VOLTAGE - A system includes a data transmission unit, a termination resistor and a data reception unit. The data transmission unit may drive a data transmission line based on data, and drive the data transmission line to a voltage level corresponding to a termination voltage during a specified operation period. The termination resistor may be coupled between the data transmission line and a termination node. The data reception unit may receive a signal transmitted through the data transmission line. | 2020-09-17 |
20200293476 | HANDLING OPERATION COLLISIONS IN A NON-VOLATILE MEMORY - A first operation identifier is assigned to a current operation directed to a memory component, the first operation identifier having a first entry in a first data structure that associates the first operation identifier with a first buffer identifier. It is determined whether the current operation collides with a prior operation assigned a second operation identifier, the second operation identifier having a second entry in the first data structure that associates the second operation identifier with a second buffer identifier. A latest flag is updated to indicate that the first entry is a latest operation directed to an address (1) in response to determining that the current operation collides with the prior operation and that the current and prior operations are read operations, or (2) in response to determining that the current operation does not collide with a prior operation. | 2020-09-17 |
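The two latest-flag conditions in this abstract can be sketched with a plain dictionary standing in for the operation table; the entry layout and field names are illustrative assumptions.

```python
def assign_operation(table, op_id, buf_id, addr, kind, prior=None):
    """Create a table entry associating an operation identifier with a
    buffer identifier, and set its 'latest' flag per the abstract's two
    conditions: (1) a read/read collision, or (2) no collision at all."""
    entry = {"buffer": buf_id, "addr": addr, "kind": kind, "latest": False}
    collides = prior is not None and table[prior]["addr"] == addr
    if not collides:
        entry["latest"] = True                     # condition (2)
    elif kind == "read" and table[prior]["kind"] == "read":
        entry["latest"] = True                     # condition (1)
        table[prior]["latest"] = False             # prior entry is superseded
    table[op_id] = entry
    return collides

ops = {}
assert assign_operation(ops, 1, "bufA", 0x100, "read") is False
assert ops[1]["latest"]                            # no collision -> latest
assert assign_operation(ops, 2, "bufB", 0x100, "read", prior=1) is True
assert ops[2]["latest"] and not ops[1]["latest"]   # read/read collision
```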
20200293477 | REMOTELY CONTROLLED TECHNICIAN SURROGATE DEVICE - A remote technical support system includes an edge device that operates as a highly secured conduit for a technician to view, access, and control a target device via a secure protocol over a connection medium between the edge device and the target device. The edge device's architecture allows it to selectively present numerous peripheral devices to the target device. The architectural components of the edge device can be controlled by a technician through a secure connection with a trusted server, which allows authorized technicians to access the edge device. The edge device also relays technician commands to and obtains diagnostic information from the target device and communicates feedback to the technician over the secure connection. The commands may be relayed to the target device via the one or more selectively connected USB peripherals. | 2020-09-17 |
20200293478 | Embedding Rings on a Toroid Computer Network - A computer comprising a plurality of interconnected processing nodes arranged in a configuration with multiple layers, arranged along an axis, comprising first and second endmost layers and at least one intermediate layer between the first and second endmost layers is provided. Each layer comprises a plurality of processing nodes connected in a ring by an intralayer respective set of links between each pair of neighbouring processing nodes, the links adapted to operate simultaneously. Nodes in each layer are connected to respective corresponding nodes in each adjacent layer by an interlayer link. Each processing node in the first endmost layer is connected to a corresponding node in the second endmost layer. Data is transmitted around a plurality of embedded one-dimensional logical rings with an asymmetric bandwidth utilisation, each logical ring using all processing nodes of the computer in such a manner that the plurality of embedded one-dimensional logical rings operate simultaneously. | 2020-09-17 |
20200293479 | EXTENSION APPARATUS FOR UNIVERSAL SERIAL BUS INTERFACE - An extension apparatus for a universal serial bus (USB) interface includes a transmitting device, a receiving device and an electrical signal network cable. The transmitting device includes the following elements: a first packet-processing unit to receive a first interface packet and generate an original data accordingly, a first buffering unit to temporarily store the original data, and a first data-converting unit to generate and output a network packet signal based on the original data. The receiving device includes the following elements: a second data-converting unit to receive the network packet signal and generate the original data accordingly, a second buffering unit to temporarily store the original data, and a second packet-processing unit to receive the original data and generate the first interface packet. The electrical signal network cable is electrically coupled between the transmitting device and the receiving device to transmit the network packet signal. | 2020-09-17 |
20200293480 | BIMODAL PHY FOR LOW LATENCY IN HIGH SPEED INTERCONNECTS - Systems, methods, and apparatuses including a Physical layer (PHY) block coupled to a Media Access Control layer (MAC) block via a PHY/MAC interface. Each of the PHY and MAC blocks include a plurality of Physical Interface for PCI Express (PIPE) registers. The PHY/MAC interface includes a low pin count PIPE interface comprising a small set of wires coupled between the PHY block and the MAC block. The MAC block is configured to multiplex command, address, and data over the low pin count PIPE interface to access the plurality of PHY PIPE registers, and the PHY block is configured to multiplex command, address, and data over the low pin count PIPE interface to access the plurality of MAC PIPE registers. The PHY block may also be selectively configurable to implement a PIPE architecture to operate in a PIPE mode and a serialization and deserialization (SERDES) architecture to operate in a SERDES mode. | 2020-09-17 |
20200293481 | DATA COMMUNICATION CIRCUIT - In an embodiment, a method includes receiving in parallel first data and second data; and delivering in series the first and second data, where the first data comprises electric power delivery configuration data. In some embodiments, delivering in series the first and second data includes delivering the first and second data wirelessly. | 2020-09-17 |
20200293482 | LOW VOLTAGE DRIVE CIRCUIT WITH VARIABLE FREQUENCY CHARACTERISTICS AND METHODS FOR USE THEREWITH - A low voltage drive circuit includes a transmit digital to analog circuit that converts transmit digital data into analog outbound data by: generating a DC component; a first plurality of oscillations, wherein each oscillation of the first plurality of oscillations has first unique oscillation characteristics; selecting one of the first plurality of oscillations in accordance with a first portion of the transmit digital data to produce a first selected oscillation; generating a second plurality of oscillations, wherein each oscillation of the second plurality of oscillations has second unique oscillation characteristics; selecting one of the second plurality of oscillations in accordance with a second portion of the transmit digital data to produce a second selected oscillation, and outputting the first selected oscillation and the second selected oscillation on an n-bit-by-n-bit basis to produce an oscillating component, wherein the DC component is combined with the oscillating component to produce the analog outbound data. A drive sense circuit drives an analog transmit signal onto a bus, wherein the analog outbound data is represented within the analog transmit signal as variances in loading of the bus in a first frequency range and wherein analog inbound data is represented within an analog receive signal as variances in loading of the bus in a second frequency range. | 2020-09-17 |
20200293483 | TRANSFERRING DATA BETWEEN SOLID STATE DRIVES (SSDs) VIA A CONNECTION BETWEEN THE SSDs - A first solid state drive (SSD) includes a first built-in network interface device configured to communicate via a network fabric, and a second SSD includes a second built-in network interface device configured to communicate via the network fabric. A connection is opened between the first SSD and the second SSD over the network fabric. Based on a non-volatile memory over fabric (NVMe-oF) communication protocol, an NVMe command to transfer data between the first SSD and the second SSD over the connection is encapsulated in a capsule. The capsule is sent from the first SSD to the second SSD over the connection via the network fabric. The second SSD executes the NVMe command in the capsule to transfer the data between the first SSD and the second SSD over the connection. | 2020-09-17 |
20200293484 | SERIAL PERIPHERAL INTERFACE MASTER - A Serial Peripheral Interface (SPI) master | 2020-09-17 |
20200293485 | ON DEMAND MULTIPLE HETEROGENEOUS MULTICORE PROCESSORS - Embodiments of the present invention provide a processor design that provides on-demand access to multiple (heterogeneous) types of multicore components, resulting in both increased performance and power reduction in a system on a chip integrated circuit. In a typical embodiment, the integrated circuit (e.g., processor) includes a first set of multicore processing components, where the first set of multicore processing components includes a plurality of homogeneous first components. The integrated circuit also includes a second set of multicore processing components that includes a plurality of homogeneous second components that are functionally different (heterogeneous) from the first components. Coupled to the first set of components and the second set of components is a component controller that controls the first and second sets of components by selectively activating and deactivating the first and second components in response to changes in processing demand. | 2020-09-17 |
20200293486 | ANALOG PROCESSOR COMPRISING QUANTUM DEVICES - Analog processors for solving various computational problems are provided. Such analog processors comprise a plurality of quantum devices, arranged in a lattice, together with a plurality of coupling devices. The analog processors further comprise bias control systems each configured to apply a local effective bias on a corresponding quantum device. A set of coupling devices in the plurality of coupling devices is configured to couple nearest-neighbor quantum devices in the lattice. Another set of coupling devices is configured to couple next-nearest neighbor quantum devices. The analog processors further comprise a plurality of coupling control systems each configured to tune the coupling value of a corresponding coupling device in the plurality of coupling devices to a coupling value. The analog processors further comprise a set of readout devices each configured to measure the information from a corresponding quantum device in the plurality of quantum devices. | 2020-09-17 |
20200293488 | SCALAR CORE INTEGRATION - Methods and apparatus relating to scalar core integration in a graphics processor. In an example, an apparatus comprises a processor to receive a set of workload instructions for a graphics workload from a host complex, determine a first subset of operations in the set of operations that is suitable for execution by a scalar processor complex of the graphics processing device and a second subset of operations in the set of operations that is suitable for execution by a vector processor complex of the graphics processing device, assign the first subset of operations to the scalar processor complex for execution to generate a first set of outputs, assign the second subset of operations to the vector processor complex for execution to generate a second set of outputs. Other embodiments are also disclosed and claimed. | 2020-09-17 |
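The scalar/vector workload split this abstract describes can be sketched as a simple partitioning pass. The operand-width criterion used here is a stand-in assumption for whatever suitability heuristic the real scheduler applies.

```python
def partition_workload(instructions):
    """Split a workload into a subset suited to a scalar processor complex
    and a subset suited to a vector processor complex. Width > 1 marks an
    operation as vectorizable in this sketch."""
    scalar, vector = [], []
    for inst in instructions:
        (vector if inst["width"] > 1 else scalar).append(inst)
    return scalar, vector

workload = [
    {"op": "add", "width": 1},     # scalar arithmetic
    {"op": "fma", "width": 8},     # 8-wide SIMD candidate
    {"op": "cmp", "width": 1},
]
scalar_ops, vector_ops = partition_workload(workload)
assert [i["op"] for i in scalar_ops] == ["add", "cmp"]
assert [i["op"] for i in vector_ops] == ["fma"]
```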
20200293489 | DATA CAPTURING AND STRUCTURING METHOD AND SYSTEM - A method for data capturing and structuring includes determining at least one data capture mode for processing a non-electronic data record into an electronic data record and selecting a record owner having a plurality of existing data records to be associated with the electronic data record. The method also includes capturing the non-electronic data record into the electronic data record and collecting metadata from data associated with the record owner and the electronic data record and data generated during the capturing. Further, the method includes creating structured data records by combining the electronic data record and the metadata and exporting the structured data records. | 2020-09-17 |
20200293490 | FILE STORING METHOD, TERMINAL, AND COMPUTER-READABLE STORAGE MEDIUM - The present disclosure discloses a file storing method which comprises: when a file storing instruction is received, determining a file name corresponding to the file to be stored; performing feature calculation on the determined file name to obtain a feature value; performing a modulo operation on the feature value using the total number of file directories to obtain a modulus value, wherein the modulo operation is carried out by dividing the feature value by the total number of the file directories; determining a serial number corresponding to the file name based on the modulus value; and based on a mapping relationship between a preset serial number and a directory, determining the directory corresponding to the serial number, and storing the file in the determined directory. The present disclosure further discloses a terminal and a computer-readable storage medium. The flexibility of file storing is improved and the use of the directories is more balanced. | 2020-09-17 |
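The hash-then-modulo directory selection this abstract walks through can be sketched in a few lines. Using SHA-256 as the "feature calculation" is an assumption; the patent does not name a specific hash.

```python
import hashlib

def directory_for(filename, directories):
    """Pick a storage directory from a file name: feature value (here a
    SHA-256 hash), modulo the number of directories to get a modulus value,
    which serves as the serial number into the directory mapping."""
    feature = int(hashlib.sha256(filename.encode()).hexdigest(), 16)
    serial = feature % len(directories)   # modulus value -> serial number
    return directories[serial]

dirs = ["/data/d0", "/data/d1", "/data/d2", "/data/d3"]
target = directory_for("report.pdf", dirs)
assert target in dirs
# The same name always maps to the same directory:
assert directory_for("report.pdf", dirs) == target
```

Because the hash spreads names uniformly across the modulus range, directory usage stays balanced, which is the benefit the abstract claims.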
20200293491 | Intelligent File Recommendation Engine - Methods and systems for recommending files to users are described herein. Files may be recommended to a user within a file sharing service. A recommender system may intelligently recommend files to users according to their preferences through machine learning. In addition, a recommender system may recommend files based on what is popular within a group to which the user belongs. The recommendations may be adjusted based on user interaction with one or more recommended files. | 2020-09-17 |
20200293492 | INFORMATION PROCESSING APPARATUS AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING INFORMATION PROCESSING PROGRAM - An information processing apparatus includes a display control section that performs control of displaying plural files stored in a real storage area on a display area representing a virtual storage area; an association section that generates relevant data in which the plural files displayed in the display area are associated with each other; and a storage section that stores the relevant data in a referable location in a case where the display area is hidden. | 2020-09-17 |
20200293493 | Enabling User Interaction with Shared Content During a Virtual Meeting - A method of and system for enabling interactions with a document being presented during a virtual meeting is carried out by making a copy of the document available to meeting attendees for restricted use. The method may include receiving a request from a presenter client device to initiate presentation of the document during the virtual meeting, accessing a copy of the document, enabling display of the document at a meeting participant client device, enabling the meeting participant to interact with the document via the participant client device by moving to a first portion of the document different from a second portion of the document being currently presented by the presenter client device, receiving a request via the participant client device to synchronize with the presentation being presented by the presenter client device, providing a synchronization signal for synchronizing with the presentation, and enabling display of the second portion of the document at the meeting participant client device. | 2020-09-17 |
20200293494 | DATA SYNCHRONIZATION METHODS, APPARATUSES, AND DEVICES - An application is used to identify a data identifier associated with data to be queried. The application and the data identifier are used to identify data in a cache associated with the data identifier and a first version identifier corresponding to the data in the cache associated with the data identifier. In response to determining that the first version identifier does not match a second version identifier, the application is used to obtain data from a database associated with the data identifier, and the application is used to replace the data in the cache associated with the data identifier with the data from the database associated with the data identifier. | 2020-09-17 |
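The version-compare-and-refresh flow in this abstract can be sketched with plain dictionaries standing in for the cache and database; the (value, version) entry layout is an illustrative assumption.

```python
def read_through(cache, db, key):
    """Return the cached value if its version identifier matches the
    database's current version; otherwise replace the cache entry with
    the database's copy, as the abstract describes."""
    db_version = db[key]["version"]
    cached = cache.get(key)
    if cached is None or cached["version"] != db_version:
        cache[key] = dict(db[key])        # replace stale cache entry
    return cache[key]["value"]

db = {"user:42": {"value": "alice", "version": 2}}
cache = {"user:42": {"value": "al1ce", "version": 1}}   # stale copy
assert read_through(cache, db, "user:42") == "alice"
assert cache["user:42"]["version"] == 2                 # cache synchronized
```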
20200293495 | USING AND TRAINING A MACHINE LEARNING MODULE TO DETERMINE ACTIONS TO BE TAKEN IN RESPONSE TO FILE SYSTEM EVENTS IN A FILE SYSTEM - Provided are a computer program product, system, and method for using and training a machine learning module to determine actions to be taken in response to file system events in a file system. A file system event is detected. An action to be performed corresponding to the file system event is selected from an action list. A determination is made as to whether an outcome in the computing system resulting from the performed action satisfies an outcome threshold. A machine learning module is trained to increase a likelihood of selecting the performed action corresponding to the file system event when the outcome satisfies the outcome threshold. The machine learning module is trained to decrease a likelihood of selecting the performed action corresponding to the file system event when the outcome does not satisfy the outcome threshold. | 2020-09-17 |
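The outcome-threshold training loop this abstract describes can be sketched with a weight table standing in for the machine learning module; the event names, step size, and weight representation are all illustrative assumptions.

```python
def train_on_event(weights, event, action, outcome, threshold, step=0.1):
    """Nudge the likelihood of re-selecting `action` for `event` up when
    the observed outcome satisfies the outcome threshold, and down when
    it does not, mirroring the abstract's training rule."""
    delta = step if outcome >= threshold else -step
    weights[(event, action)] = weights.get((event, action), 0.0) + delta

w = {}
train_on_event(w, "file_close", "flush_cache", outcome=0.9, threshold=0.5)
train_on_event(w, "file_close", "evict_inode", outcome=0.2, threshold=0.5)
assert w[("file_close", "flush_cache")] > 0    # reinforced
assert w[("file_close", "evict_inode")] < 0    # discouraged
```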
20200293496 | INFORMATION PROCESSING APPARATUS AND NON-TRANSITORY COMPUTER READABLE MEDIUM - An information processing apparatus includes a recording unit, setting unit, and display controller. The recording unit records user operation information including an execution date and time of an operation performed by a user on a file and identification information of the user who performed the operation. The setting unit sets a degree of association to increase in a case where two pieces of user operation information are recorded in the memory in response to operations performed by an identical user on two different files within a certain period of time, the degree of association being set in degree-of-association information generated in association with a combination of types of operations performed on the two files, the degree of association indicating the degree of association between the two files. The display controller performs, in a case where a user selects a file as a target of operation, control to display the degree-of-association information corresponding to the file. | 2020-09-17 |
20200293497 | COMPRESSED SENSING USING NEURAL NETWORKS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for compressed sensing using neural networks. One of the methods includes receiving an input measurement of an input data item; for each of one or more optimization steps: processing a latent representation using a generator neural network to generate a candidate reconstructed data item, processing the candidate reconstructed data item using a measurement neural network to generate a measurement of the candidate reconstructed data item, and updating the latent representation to reduce an error between the measurement and the input measurement; and processing the latent representation after the one or more optimization steps using the generator neural network to generate a reconstruction of the input data item. | 2020-09-17 |
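The reconstruction loop in this abstract (optimize a latent so the measurement of its decoding matches the input measurement, then decode) can be sketched with fixed linear maps standing in for the generator and measurement neural networks; all dimensions and the gradient-descent step rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the two networks: a linear "generator" G mapping a 4-d
# latent to a 16-d signal, and a linear measurement operator A.
G = rng.normal(size=(16, 4))
A = rng.normal(size=(8, 16))
M = A @ G                                 # measurement of a decoded latent

def reconstruct(y, steps=5000):
    """Gradient-descend on the latent z to shrink the error between the
    measurement of G(z) and the input measurement y, then decode z."""
    lr = 1.0 / np.linalg.norm(M, 2) ** 2  # step size from the Lipschitz bound
    z = np.zeros(4)
    for _ in range(steps):
        residual = M @ z - y              # error against the input measurement
        z -= lr * (M.T @ residual)        # update the latent representation
    return G @ z                          # reconstruction of the data item

x_true = G @ rng.normal(size=4)           # a signal the generator can produce
y = A @ x_true                            # compressed input measurement
x_hat = reconstruct(y)
assert np.linalg.norm(A @ x_hat - y) <= 1e-3 * np.linalg.norm(y)
```

With a nonlinear generator the same loop applies, but the latent update would come from backpropagation through the two networks rather than a closed-form gradient.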
20200293498 | DYNAMICALLY-ADJUSTABLE DEDUPLICATION ORDER OF OPERATIONS - An information management system can dynamically adjust the order in which compression and deduplication operations are performed based on the type of file for which a secondary copy operation is requested. For example, the information management system can determine whether the file for which a secondary copy operation is requested is a first type of file that is text-based and/or that has data blocks that are highly compressible or a second type of file that may have data blocks that are not highly compressible. If the information management system determines that the file is of the second type, the information management system can perform a deduplication operation prior to performing a compression operation. Furthermore, the information management system can dynamically adjust the size of various deduplication blocks based on the type of file for which a secondary copy operation is requested. | 2020-09-17 |
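The order-of-operations decision in this abstract can be sketched as a small planner. Classifying files by extension is a stand-in assumption for the system's real file-type detection; the abstract only fixes the order for the second type (deduplicate before compress).

```python
import os

TEXT_LIKE = {".txt", ".csv", ".log", ".xml"}   # assumed highly compressible

def plan_secondary_copy(filename):
    """Choose the order of deduplication and compression from the file
    type: text-like files compress first, while files whose blocks may
    not compress well are deduplicated before compression."""
    ext = os.path.splitext(filename)[1].lower()
    if ext in TEXT_LIKE:                       # first type of file
        return ["compress", "deduplicate"]
    return ["deduplicate", "compress"]         # second type of file

assert plan_secondary_copy("server.log") == ["compress", "deduplicate"]
assert plan_secondary_copy("backup.vmdk") == ["deduplicate", "compress"]
```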
20200293499 | PROVIDING SCALABLE AND CONCURRENT FILE SYSTEMS - Techniques are disclosed for providing scalable and concurrent file systems. A backend storage system comprising an interface and a processing unit may be configured to perform the techniques. The interface may present the file system storing objects representative of data. The processing unit may receive, from a frontend host system coupled to the backend storage system, a plurality of client operations to perform with respect to the objects identified by the client operations, and identify an object type associated with each of the identified objects. The processing unit may select, based on the object types, one or more backend operators that implement the plurality of client operations, and apply the backend operators to the identified objects. | 2020-09-17 |
20200293500 | IMAGE BLOCKCHAIN SYSTEM AND METHOD - A method includes a full node storing a blockchain and being one of a plurality of full nodes forming a blockchain network receiving a message comprising update image data and a smart contract identifier from a first user device. The full node can then determine stored image data associated with the smart contract identifier. The full node can also determine whether or not image comparison data based on received image data and stored image data is consistent with a smart contract associated with the smart contract identifier and can then generate an entry for a block of the blockchain, comprising at least the smart contract identifier, the updated image data, and image comparison data. The full node can generate the block of the blockchain and transmit the block to the plurality of full nodes. The plurality of full nodes respectively verify the block. The full node can store the block on the blockchain and transmit an indication of whether or not the block was stored on the blockchain to the first user device. | 2020-09-17 |
20200293501 | DATA PLACEMENT FOR A DISTRIBUTED DATABASE - According to an aspect, a method for data placement in a distributed database includes obtaining access pattern information relating to end clients that requested access to data stored in a first regional quorum of replicas located within a first region, where the first regional quorum includes a first lead replica. The method includes identifying a placement algorithm from a configuration file associated with the distributed database, and executing the placement algorithm to generate a suggested placement for the data based on the obtained access pattern information, where the suggested placement includes a second regional quorum of replicas located in a second region different than the first region, and the second regional quorum includes a second lead replica. The method includes transmitting a migration request to the distributed database to transfer the data from the first regional quorum to the second regional quorum. | 2020-09-17 |
20200293502 | SYSTEMS AND METHODS FOR DATABASE MANAGEMENT SYSTEM (DBMS) DISCOVERY - The present disclosure is directed to a discovery process that enables discovery of database management systems (DBMSs) hosted by at least one client device of a client network. The disclosed discovery process involves a discovery server disposed on the client network accessing the client device hosting the DBMS to collect configuration item (CI) data on the configuration and operation of management, extraction, and replication processes of the DBMS. More specifically, this discovery process involves the discovery server requesting and receiving certain CI data from the management process of the DBMS, requesting and receiving certain CI data from an operating system (OS) of the client device, as well as parsing and retrieving certain CI data from configuration and report files of the DBMS. Additionally, the disclosed discovery process is designed to be performed without being granted special or additional privileges within the DBMS itself. | 2020-09-17 |
20200293503 | GENERIC AUTONOMOUS DATABASE TUNING AS A SERVICE FOR MANAGING BACKING SERVICES IN CLOUD - A system and method are disclosed to facilitate a database tuning as a service offered by a cloud platform as a service provider. A throttling detection engine, associated with a database service instance, may periodically determine if an automated database tuning process should be performed. When it is determined that the automated database tuning process should be performed, the throttling detection engine may transmit database performance metrics. A database tuner as a service, coupled to the throttling detection engine, may access aggregated database performance metrics of the database service instance and determine a set of tunable parameters associated with the database service instance. The database tuner as a service may then execute the automated database tuning process to recommend, using an intelligent algorithm, a new set of configurations for the set of tunable parameters to be applied to the database service instance. | 2020-09-17 |
20200293504 | CORRELATING AND REFERENCING BLOCKCHAINS - Systems and methods for correlating and referencing blockchains are described herein. An example method may include providing a database configured to store at least one grid. The grid comprises positions referenced by coordinates. The method may include acquiring, by a processor communicatively coupled to the database, a plurality of blockchains. The method may further include mapping, by the processor, the blockchains to the positions within the grid. The method may include acquiring, by the processor, a subset of coordinates ({P}) corresponding to a subset of the positions within the grid and a set of specifications ({S}). The specifications include an instruction for selection of blocks of one or more blockchains mapped to one or more positions of the subset of the positions. The method may include associating a function F({P}, {S}) with a further position within the grid, wherein the function F({P}, {S}) operates on contents of the selected blocks. | 2020-09-17 |
20200293505 | SENSOR DATA ANALYSIS SYSTEM AND METHOD - A data aggregation system is described, wherein the data aggregation system may include: a plurality of sensors distributed throughout an environment; a tile database comprising a memory for storing a hierarchy of tiled layers, wherein each layer in the hierarchy of tiled layers comprises a plurality of tiles; a tiling server, the tiling server configured to: receive sensor data from one or more sensors in the plurality of sensors; assign the sensor data to a base tile in a first layer in the hierarchy of tiled layers based on one or more properties of the one or more sensors; retrieve one or more aggregate tiles from the tile database based on an identity of the base tile in the first layer, the one or more aggregate tiles each taken from one or more further layers in the hierarchy of tiled layers; determine aggregate sensor data for each of the retrieved one or more aggregate tiles based on the sensor data stored on the base layer tile; assign the determined aggregate sensor data to the corresponding one or more aggregate tiles; and output the one or more aggregate tiles. | 2020-09-17 |
20200293506 | BULK-LOAD FOR B-TREES - Embodiments described herein are related to bulk loading data into a B-tree. Embodiments include generating a first leaf node of a B-tree by allocating a first page for the first leaf node from a leaf page queue comprising a first plurality of sequential pages; and writing one or more tuples to the first page allocated for the first leaf node. Embodiments further include generating a parent node for the first leaf node and a second leaf node of the B-tree by allocating a third page for the parent node from a parent page queue comprising a second plurality of sequential pages, the parent node comprising a first indication of the first leaf node and a second indication of the second leaf node, the first indication and the second indication stored in the third page allocated for the parent node. | 2020-09-17 |
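The queue-per-level page allocation this abstract describes can be sketched for a two-level tree. Page-number ranges, leaf capacity, and the node layout are all illustrative assumptions; the point is that leaves draw from one run of sequential pages and parents from another.

```python
from collections import deque

def bulk_load(tuples, leaf_capacity=2):
    """Bulk-load sorted tuples into a two-level B-tree, allocating leaf
    pages from a leaf page queue of sequential pages and the parent page
    from a separate parent page queue, as in the abstract."""
    leaf_pages = deque(range(0, 100))       # first plurality of sequential pages
    parent_pages = deque(range(100, 200))   # second plurality of sequential pages
    leaves = []
    for i in range(0, len(tuples), leaf_capacity):
        page = leaf_pages.popleft()         # allocate the next leaf page
        leaves.append({"page": page, "tuples": tuples[i:i + leaf_capacity]})
    parent = {"page": parent_pages.popleft(),
              "children": [leaf["page"] for leaf in leaves]}  # indications of leaves
    return parent, leaves

root, leaves = bulk_load([(1, "a"), (2, "b"), (3, "c"), (4, "d")])
assert [l["page"] for l in leaves] == [0, 1]   # leaf pages are sequential
assert root["page"] == 100                     # parent page from its own queue
assert root["children"] == [0, 1]
```

Keeping each level's pages sequential means the bulk load writes each level of the tree as one contiguous run, which is the benefit of drawing from per-level queues.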
20200293507 | Auto Unload - A system for unloading tables of a database is provided. In some aspects, the system performs operations including determining that a number of accesses to a table occurring within a time period has satisfied an access threshold. The operations may further include identifying, in response to the determining, a first timestamp indicating a most recent access to the table. The operations may further include determining whether a difference between a current timestamp and the first timestamp satisfies a first time threshold. The operations may further include comparing, in response to the difference satisfying the first time threshold, a ratio of the difference and a size of the table to a ratio threshold. The operations may further include unloading, in response to satisfying the ratio threshold, the table. The operations may further include adjusting, based on the feedback, the first time threshold and/or the ratio threshold. | 2020-09-17 |
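The three-stage unload decision in this abstract can be sketched as one predicate. The stats layout and the direction of the access-count check are assumptions; the abstract specifies the idle-time and idle-to-size-ratio checks explicitly.

```python
def should_unload(stats, now, access_threshold, time_threshold, ratio_threshold):
    """Decide whether to unload a table using the abstract's three checks:
    the access count within the period, the time since the most recent
    access, and the ratio of that idle time to the table's size."""
    if stats["accesses_in_period"] < access_threshold:
        return False
    idle = now - stats["last_access"]       # current minus most-recent timestamp
    if idle < time_threshold:
        return False
    return idle / stats["size_bytes"] >= ratio_threshold

stats = {"accesses_in_period": 50, "last_access": 1_000, "size_bytes": 10}
assert should_unload(stats, now=2_000,
                     access_threshold=10, time_threshold=500,
                     ratio_threshold=50.0)      # idle/size = 100 >= 50
assert not should_unload(stats, now=1_200,
                         access_threshold=10, time_threshold=500,
                         ratio_threshold=50.0)  # idle 200 < time threshold
```

The feedback-driven adjustment the abstract mentions would then tune `time_threshold` and `ratio_threshold` between calls.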
20200293508 | METHOD, APPARATUS AND SYSTEM FOR UPDATING GEOMAGNETIC INFORMATION - Embodiments of present disclosure disclose a method for updating geomagnetic information performed at a computing device, which belong to the field of electronic technologies. The method includes: receiving target geomagnetic information and target location assistance information that are transmitted by a terminal during positioning; determining a target geographical location corresponding to the target location assistance information in a case that geomagnetic information corresponding to a current geographical location of the terminal is invalid; and updating geomagnetic information corresponding to the target geographical location according to the target geomagnetic information in a prestored correspondence between a geographical location and geomagnetic information. According to the embodiments of the present disclosure, efficiency of updating geomagnetic information can be improved. | 2020-09-17 |
20200293509 | TECHNIQUE FOR LOG RECORDS MANAGEMENT IN DATABASE MANAGEMENT SYSTEM - Disclosed is a computer program stored in a computer-readable storage medium including encoded commands according to an exemplary embodiment of the present disclosure. When the computer program is executed by one or more processors, the computer program allows the one or more processors to perform a method for managing undo information in a database management system (DBMS). The method may include: forming undo information corresponding to an update request by a first transaction in response to the update request by the first transaction in the database management system (DBMS); determining an undo memory chunk to be allocated to the undo information from an undo memory pool on a memory, the undo memory chunk having a variable size; and maintaining the undo information on a space of the memory by using the determined undo memory chunk. | 2020-09-17 |
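A tiny sketch of the variable-size chunk allocation described in the abstract: pick, from a pool of free undo memory chunks, the smallest chunk that still fits the undo information. The best-fit policy and all names here are illustrative assumptions, not the patent's actual allocator.

```python
def allocate_chunk(pool, needed):
    """Best-fit: take the smallest free chunk from the undo memory pool that fits."""
    fits = [c for c in pool if c >= needed]
    if not fits:
        return None                 # pool exhausted; caller must grow the pool
    chunk = min(fits)
    pool.remove(chunk)              # chunk is now owned by the undo information
    return chunk
```

Variable-size chunks let small transactions avoid wasting a full fixed-size undo segment, at the cost of a slightly more involved allocation step.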
20200293510 | INFORMATION LINKAGE SYSTEM AND INFORMATION MANAGEMENT METHOD - It is provided an information linkage system, which is configured to allow a plurality of organizations to register and update data, and is formed of a computer including: a calculation device configured to execute predetermined calculation processing, to thereby implement the following functional modules; and a storage device accessible to the calculation device, the information linkage system comprising: an information linkage control module configured to receive a registration request for data, an update request for data, and an acquisition request for data from a plurality of external systems; an information linkage database in which data is allowed to be registered and updated; an information linkage database access module configured to access the information linkage database in response to a request received by the information linkage control module; and a reliability calculation module configured to calculate reliability information relating to the data stored in the information linkage database. | 2020-09-17 |
20200293511 | CONFIGURATION-FREE ALERT MONITORING - Systems and methods for generating an event-based data set using a computer implemented asset monitoring system are provided. An asset repository stores data related to one or more commissioned assets of an asset monitoring system. When event data is received from an asset, whether an asset maintenance record corresponding to the asset exists in the asset repository is determined based on comparing the data in the asset repository to the event data. When the asset maintenance record is determined to not exist in the asset repository, an asset identification record corresponding to the asset is rendered. The asset identification record comprises the event data and additional asset-related data collected from the asset. An event-based data set is generated based on the asset identification record. | 2020-09-17 |
20200293512 | DATA COMPARTMENTS FOR READ/WRITE ACTIVITY IN A STANDBY DATABASE - A method for creating a standby database with read/write access capability while also maintaining a data consistency with a primary database, is provided. The method includes syncing the primary database with a physical standby mirror existing on the standby database, creating a first data compartment and a second data compartment on the standby database, separate from the physical standby mirror, applying a change made to the first data object on the primary database to the corresponding first data object on the physical standby mirror; and determining whether the change should be applied to the corresponding first data object stored on the first data compartment in accordance with data merge rules associated with the first data compartment and the second data compartment. | 2020-09-17 |
20200293513 | CONCURRENT MULTIPLE HIERARCHICAL DATA STRUCTURES WITH CONSISTENT DATA - A method may include maintaining a first data structure with records organized in a first hierarchy, and maintaining a second data structure with records organized in a second hierarchy. The method may additionally include receiving a change request for the value stored in the first record. The change request may be received from a parent in the second data structure of the first record. The method may further include sending a notification to the parent in the first data structure that the parent in the second data structure is attempting to change the first record. | 2020-09-17 |
20200293514 | MANAGING ACCESS BY THIRD PARTIES TO DATA IN A NETWORK - Systems and methods for managing access to data in a network are provided. In embodiments, a method includes: receiving, by a computer device, a search request regarding data of a participant, the search request including participant parameters associated with the participant; generating, by the computer device, a record of data associated with the participant based on the search request; sending, by the computer device, a request for data to third party nodes of a blockchain system based on the search request and the record; receiving, by the computer device, results from the blockchain system, the results including at least one set of data from a first node of the third party nodes; and determining, by the computer device, that the set of data requires updating based on the results. | 2020-09-17 |
20200293515 | SERVICE PROCESSING SYSTEM AND METHOD BASED ON BLOCKCHAIN - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for blockchain-based file querying are provided. One of the methods includes: receiving a query request for a target file, the query request comprising identification information of a user and the target file; obtaining the target file based on the identification information of the user and the target file; providing a query page of the target file, the query page comprising interactive elements for selecting whether to upload the target file to a blockchain; receiving a user selection to upload the target file to the blockchain; hashing the target file to generate a digital digest; signing the digital digest according to an asymmetric encryption algorithm using a private key associated with a cryptographic key pair to obtain a digital signature; and uploading the target file, the digital signature, and a public key associated with the cryptographic key pair. | 2020-09-17 |
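The hash-then-sign step can be sketched as follows. The abstract calls for an asymmetric signature over the digest; since the Python standard library has no asymmetric primitives, this sketch substitutes an HMAC purely as a stand-in for the signing operation, and the function name is a hypothetical illustration.

```python
import hashlib
import hmac

def digest_and_sign(file_bytes, key):
    """Hash the target file into a digital digest, then sign the digest.

    hmac-sha256 stands in here for the asymmetric signature of the abstract;
    a real system would sign with the private key of a cryptographic key pair.
    """
    digest = hashlib.sha256(file_bytes).hexdigest()
    signature = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return digest, signature
```

The important property preserved by the sketch is that the signature covers the digest, not the raw file, so verifiers can check integrity without re-transmitting the file.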
20200293516 | SYSTEM AND METHOD FOR DELETING NODE IN BLOCKCHAIN NETWORK - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for deleting a node in a blockchain network are provided. One of the methods includes: obtaining, by a first consensus node of the blockchain network, a transaction comprising a request for deleting a second consensus node of the blockchain network; in response to that consensus verification of the transaction succeeds, executing the transaction and sequentially numbering in a node list a plurality of remaining consensus nodes of the blockchain network excluding the second consensus node; and performing view change for the remaining consensus nodes to participate in future consensus verification. | 2020-09-17 |
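The renumbering step in the abstract, i.e. excluding the deleted consensus node and sequentially numbering the remaining nodes in the node list, can be sketched in a few lines. The function and data shapes are illustrative assumptions.

```python
def delete_consensus_node(node_list, removed):
    """Exclude the deleted node, then sequentially renumber the remaining nodes."""
    remaining = [n for n in node_list if n != removed]
    return {i: n for i, n in enumerate(remaining)}
```

After the renumbered node list is in place, the view change the abstract mentions would switch the remaining nodes to the new membership so future consensus rounds ignore the deleted node.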
20200293517 | GOAL-DIRECTED SEMANTIC SEARCH - Embodiments of goal-directed semantic search allow a user to discover how one or more entities (organizations, people, events, places, etc.) are related without having to explicitly define multiple queries to discover those relationships. In those cases in which more than one structure could be used, separate hypotheses are generated, one for each such structure or relationship. Hypotheses are scored during the process. | 2020-09-17 |
20200293518 | USING A SINGLE-ENTRY ACCESS POINT TO ARCHIVE DATA IN AND OUT OF AN ELECTRONIC DOCUMENT REVIEW AND REPORTING SYSTEM - An approach is provided for using a single-entry access point to archive data in and out of an electronic document review and reporting system. In an embodiment, a method comprises receiving, by a data access system, a reporting data request for reporting data, and accessing the reporting data. Based on the reporting data, the data access system generates particular reporting data that includes one or more of: global trend reports, statistical reports, or executive summary reports. The data access system transmits the particular reporting data to a client device to cause the client device to generate a graphical user interface and display the particular reporting data using the graphical user interface. Upon receiving the particular reporting data, the client device uses the graphical user interface to generate one or more graphs based on the particular reporting data and causes displaying the graphs on a computer display of the client device. | 2020-09-17 |
20200293519 | SOLUTION FOR IMPLEMENTING COMPUTING SERVICE BASED ON STRUCTURED QUERY LANGUAGE STATEMENT - Syntax parsing on a SQL statement is performed to determine whether an extended syntax identifier exists in the SQL statement, where the extended syntax identifier indicates a target computing service for the SQL statement. It is determined that the extended syntax identifier exists in the SQL statement. A computing service description statement in a first statement format is generated based on the SQL statement, where the first statement format is a statement format that can be recognized by a target computing framework. The computing service description statement is submitted to the target computing framework. Data queried by the SQL statement is invoked, in the target computing framework based on the computing service description statement, to perform target computation, where the SQL statement includes a computing element needed by the target computing service. | 2020-09-17 |
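A sketch of detecting an extended syntax identifier during SQL parsing, as the abstract describes. The hint-comment marker `/*+ service: ... */` is entirely hypothetical; the patent does not specify what the identifier looks like.

```python
EXT_MARKER = "/*+ service:"  # hypothetical extended syntax identifier

def extract_target_service(sql):
    """Return the target computing service named by the extended syntax
    identifier, or None if the SQL statement carries no such identifier."""
    start = sql.find(EXT_MARKER)
    if start == -1:
        return None
    end = sql.index("*/", start)
    return sql[start + len(EXT_MARKER):end].strip()
```

If the identifier is found, the system would generate a description statement in the target framework's own format from the SQL and submit it there; otherwise the statement runs as ordinary SQL.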
20200293520 | METADATA-BASED GENERAL REQUEST TRANSLATOR FOR DISTRIBUTED COMPUTER SYSTEMS - In an embodiment, a method comprises storing metadata that maps a domain model to data stored in a plurality of data stores, each data store being associated with a particular query language, the domain model describing the data and relationships between the data. The method comprises receiving a request for data stored in a first data store of, the request being in a request query language based on the domain model. The method comprises generating an abstract syntax tree indicating a field selection, an entity path, and a condition based on the request; generating a structure model comprising one or more aggregation levels for one or more entities; and generating annotations comprising query language aliases for portions of the request and correlating the portions of the request with the metadata. The method comprises generating queries in a first query language associated with the first data store based on the annotations; and sending the queries to the first data store. | 2020-09-17 |
20200293521 | OPTIMIZED SEARCH SERVICE - A method for an optimized search service comprising a search engine, two search indexes and a search term suggestion service may be provided. The method comprises collecting search queries, search results and search term suggestions, determining an acceptance rate value for each search term suggestion for the two search indexes, determining a first search configuration of a first index having an acceptance rate value below a first threshold value, determining a second search configuration of a second index including parameters for controlling search term suggestions for at least one search index having an acceptance rate value above a second threshold value, and having a search configuration that is compatible according to a compatibility value to the first search configuration, wherein the first index and the second index have similar content, and copying a selected set of parameters of the configuration of the second search index into the first index. | 2020-09-17 |
20200293522 | METHOD AND APPARATUS FOR PROCESSING A QUERY ON A PLURALITY OF OBJECT INSTANCES - A computer-implemented method processes a query on instances of an object of an object-oriented environment. The object has a root object and member fields which are sub-objects of the root object. The root object and each sub-object correspond to an entity represented by and stored in form of a table. The method includes analyzing the query to identify those objects which are necessary to execute the query. For each of the primary key values of a table corresponding to a root object, the method includes: | 2020-09-17 |
20200293523 | METADATA-DRIVEN DATA MAINTENANCE - Techniques and solutions are provided for metadata-driven data maintenance. One or more data object queries are obtained from one or more data object frameworks. One or more sets of data objects are received based on the one or more data object queries. One or more data object nets are built based on the one or more sets of data objects and the one or more data object frameworks and respectively associated with one or more processes. The one or more data object nets and their associated processes are analyzed. Data object maintenance is performed on the data objects of the one or more data object nets based on the analysis of the one or more data object nets and their associated processes. | 2020-09-17 |
20200293524 | COMPUTER ARCHITECTURE FOR PERFORMING ERROR DETECTION AND CORRECTION IN A CORRELITHM OBJECT PROCESSING SYSTEM - A correlithm object processing system includes a reference table that stores a plurality of correlithm objects, and a first node communicatively coupled to a second node by a communication channel. The first node is configured to receive a particular one of the plurality of correlithm objects from the second node over the communication channel. The first node determines distances between the received correlithm object and each of the plurality of correlithm objects stored in the reference table. The first node further identifies one of the plurality of correlithm objects from the reference table with the shortest distance, and outputs the identified correlithm object. | 2020-09-17 |
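The error-correction step in the abstract, i.e. finding the stored correlithm object with the shortest distance to the received one, can be sketched with Hamming distance over bit strings. Representing correlithm objects as bit strings and the function names are assumptions for illustration.

```python
def hamming(a, b):
    """Hamming distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def correct(received, reference_table):
    """Return the reference-table correlithm object nearest to the received one."""
    return min(reference_table, key=lambda obj: hamming(obj, received))
```

Here a corrupted object (one flipped bit) still resolves to its stored original, which is the error-detection-and-correction behavior the abstract describes.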
20200293525 | Cognitive Process Lifecycle - A system, method, and computer-readable medium are disclosed for cognitive information processing. The cognitive information processing includes receiving data from a plurality of data sources; processing the data from the plurality of data sources via an augmented intelligence system, the augmented intelligence system executing on a hardware processor of an information processing system, the augmented intelligence system and the information processing system providing a cognitive computing function, the cognitive computing function comprising a cognitive process, the cognitive process being developed via a plurality of phases; and, promoting the cognitive process from one operational environment to another operational environment. | 2020-09-17 |
20200293526 | INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - It is an object of the present invention to provide an information processing system, an information processing method, and a storage medium capable of more precisely specifying an affinity between persons by using time-series data. | 2020-09-17 |
20200293527 | MULTIVARIATE TIME-SERIES DATA SEARCH - The example embodiments are directed to a system and method which can perform a text-based search for a temporal data pattern in time-series data. The text-based search process is significantly faster than a distance measurement-based search performed based on temporal pattern comparisons. In one example, the method may include storing previously recorded temporal patterns of time-series data, determining a set of optimal bin boundaries based on the previously recorded temporal patterns, where the set of optimal bin boundaries divide the observed range of time-series data into a plurality of discrete bins each labeled with a respective symbol, transforming the previously recorded temporal patterns of time-series data into symbol strings based on the set of optimal bin boundaries, where a symbol string is based on data points in the plurality of discrete bins, and storing the symbol strings within a symbol storage. | 2020-09-17 |
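The transformation in the abstract, i.e. dividing the observed value range into discrete bins labeled with symbols and mapping a temporal pattern to a symbol string, can be sketched as follows. The bin boundaries and symbol alphabet are illustrative assumptions, not the patent's optimal-boundary computation.

```python
import bisect

def to_symbol_string(series, boundaries, symbols="abcd"):
    """Map each data point to the symbol of its bin.

    `boundaries` is a sorted list of bin boundaries dividing the observed
    range into len(boundaries) + 1 discrete bins, one symbol per bin.
    """
    return "".join(symbols[bisect.bisect_right(boundaries, x)] for x in series)
```

Once every recorded pattern is a symbol string, searching for a temporal pattern reduces to plain substring search (e.g. `"adc" in stored_string`), which is what makes the text-based search faster than distance-measure comparisons.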
20200293528 | SYSTEMS AND METHODS FOR AUTOMATICALLY GENERATING STRUCTURED OUTPUT DOCUMENTS BASED ON STRUCTURAL RULES - Methods and systems for ingesting unstructured data and generating, based on structural rules, structured output reports that are easily digestible are provided. In embodiments, unstructured data is received from at least one source. At least a portion of the unstructured data is classified into an appropriate category. A citation is selected to be included in the at least one structured report, and at least one structural rule is applied to the selected citation to determine at least one field associated with the selected citation. The structural rule defines the at least one field. Information relevant to the at least one field is identified based on the classified unstructured data, and the at least one field is populated with the information identified as relevant. The at least one structured report is generated based at least in part on the populated information. | 2020-09-17 |
20200293529 | ANSWER FACTS FROM STRUCTURED CONTENT - In one aspect, a method includes receiving a query determined to be a question query that seeks an answer response and data identifying resources determined to be responsive to the query; identifying structured content set in a top-ranked subset of the resources, each structured content set being content arranged according to related attributes in one of the resources; for each identified structured content set, determining whether the query matches the structured content set based on terms of the query matching related attributes of the structured content set; selecting one of the structured content sets for which the query is determined to match; generating, from the selected structured content set, a structured fact set from the related attributes that matched the terms of the query; and providing the structured fact set with search results that identify the resources determined to be responsive to the query. | 2020-09-17 |
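The matching step in the abstract, i.e. determining whether the query matches a structured content set based on query terms matching its related attributes, can be sketched as a simple term-overlap score. The dictionary shape and function names are assumptions for illustration.

```python
def match_score(query_terms, attributes):
    """Count query terms that match the related attributes of a structured set."""
    attrs = {a.lower() for a in attributes}
    return sum(1 for t in query_terms if t.lower() in attrs)

def select_best(query_terms, content_sets):
    """Select the structured content set whose attributes best match the query."""
    return max(content_sets, key=lambda s: match_score(query_terms, s["attributes"]))
```

The selected set's matching attributes would then seed the structured fact set returned alongside the ordinary search results.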