26th week of 2021 patent application highlights part 49 |
Patent application number | Title | Published |
20210200633 | INCREASED MEMORY ACCESS PARALLELISM USING PARITY - Disclosed in some examples are memory devices that increase the parallelism of host operations. While a first block of data from a first stripe in a first memory die is being read, blocks of data belonging to a second stripe stored in memory dies other than the first memory die are concurrently read, including the parity value of the second stripe. The parity data, along with the blocks of data of the second stripe from dies other than the first die, are then used to determine the block of data of the second stripe stored in the first memory die without actually reading that block from the first memory die. This reconstruction may be done in parallel with additional read operations for other data performed on the first die. | 2021-07-01 |
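The reconstruction in 20210200633 relies on the usual XOR-parity identity: XORing every block of a stripe except one, together with the stripe's parity, yields the missing block. A minimal sketch, assuming simple RAID-style XOR parity (the function names `xor_blocks` and `reconstruct_block` are hypothetical, not from the patent):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together (RAID-style parity)."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def reconstruct_block(read_blocks, parity):
    """Recover the one block of a stripe that was not read: XOR the blocks
    read from the other dies with the stripe's parity value."""
    return xor_blocks(read_blocks + [parity])
```

Because the first die is never touched, it remains free to service an unrelated read in parallel, which is the parallelism gain the abstract describes.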
20210200634 | STORAGE DEVICE AND OPERATING METHOD OF STORAGE DEVICE - A storage device includes a nonvolatile memory device, and a controller that reads first data from the nonvolatile memory device. When a number of first errors of the first data is not smaller than a first threshold value, the controller determines whether the first errors include timing errors arising from a variation of signal transmission timings between the nonvolatile memory device and the controller and performs a retraining operation on the signal transmission timings when the first errors include the timing errors. | 2021-07-01 |
20210200635 | NAND DEVICE MIXED PARITY MANAGEMENT - Devices and techniques for NAND device mixed parity management are described herein. A first portion of data corresponding to a first data segment and a second portion of data corresponding to a second data segment—the segments respectively defined with respect to a structure of a NAND device—are received. A parity value is computed using the first portion of data and the second portion of data and then stored for error correction operations. | 2021-07-01 |
20210200636 | ERROR CODE CALCULATION ON SENSING CIRCUITRY - Examples of the present disclosure provide apparatuses and methods for error code calculation. The apparatus can include an array of memory cells that are coupled to sense lines. The apparatus can include a controller configured to control sensing circuitry, coupled to the sense lines, to perform a number of operations without transferring data via input/output (I/O) lines. The sensing circuitry can be controlled to calculate an error code for data stored in the array of memory cells and compare the error code with an initial error code for the data to determine whether the data has been modified. | 2021-07-01 |
20210200637 | MANAGING STORAGE OF MULTIPLE PLANE PARITY DATA IN A MEMORY SUB-SYSTEM - Host data to be written to a storage area including a set of multiple planes of a memory device is received. A first parity generation operation based on a portion of the set of multiple planes of the host data is executed to generate a set of multi-plane parity data. The set of multi-plane parity data is stored in a cache memory of a controller of a memory sub-system. A second parity generation operation based on the set of the multiple planes of the host data is executed to generate a set of multi-page parity data. The set of multi-page parity data is stored in the cache memory of the controller of the memory sub-system. A data recovery operation is performed based on the set of multi-plane parity data and the set of multi-page parity data. | 2021-07-01 |
20210200638 | STORAGE SYSTEM SPANNING MULTIPLE FAILURE DOMAINS - A plurality of failure domains are communicatively coupled to each other via a network, and each of the plurality of failure domains is coupled to one or more storage devices. A failure resilient stripe is distributed across the plurality of storage devices, such that two or more blocks of the failure resilient stripe are located in each failure domain. | 2021-07-01 |
20210200639 | STORAGE SYSTEM - This invention provides a storage system that enables proper rebuilding of a storage device involved in a failure. In the storage system, a controller repairs data for which an access request has been issued, returns a reply to the source of the access request, and stores the repaired data. As regards data for which access is not requested, the controller executes rebuilding of storage regions corresponding to rebuild management units in priority-based order and changes the priority for executing the rebuilding based on access frequencies for a first period and access frequencies for a second period that is shorter than the first period. | 2021-07-01 |
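The two-window priority scheme in 20210200639 can be sketched as a weighted ranking over rebuild-management units. This is a minimal illustration only: the patent does not disclose how the two frequencies are combined, so the weighted sum and the 0.7 weight below are assumptions.

```python
def rebuild_order(regions, long_freq, short_freq, weight=0.7):
    """Order rebuild-management units so that units with high recent
    (short-window) access frequency are rebuilt first, while the
    longer window still contributes to the priority."""
    def score(region):
        return weight * short_freq.get(region, 0) + (1 - weight) * long_freq.get(region, 0)
    # Highest combined score first: hot regions get rebuilt earliest.
    return sorted(regions, key=score, reverse=True)
```

The short window dominating the score models the abstract's idea that priority is *changed* as recent access patterns diverge from the long-term ones.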
20210200640 | BOOT DATA VALIDITY - Examples associated with boot data validity are described. One example includes determining whether an NVRAM boot data structure is valid. When the NVRAM boot data structure is valid, an NVRAM boot data structure validity flag is first set to indicate the boot data structure is invalid. The validity flag is then set to indicate the NVRAM boot data structure is valid once a point in a startup process is reached that indicates the startup process will complete successfully. When the NVRAM boot data structure is invalid, errors identified in the NVRAM boot data structure are repaired, and the startup process is restarted. | 2021-07-01 |
20210200641 | PARALLEL CHANGE FILE TRACKING IN A DISTRIBUTED FILE SERVER VIRTUAL MACHINE (FSVM) ARCHITECTURE - System and method for implementing parallel Change File Tracking (CFT) between a distributed File Server Virtual Machine (FSVM) system and a scale-out backup system using underlying snapshot technology. The FSVM system executes efficient assignments of backup streams between worker nodes of the backup system and computing nodes of the FSVM system based on the number of available worker nodes at the backup system. The assignment of backup streams allows for parallel processing of incremental backup data based on successive data file snapshots. The parallel incremental backup may, for example, be per node, per share, or per data size across nodes or shares. | 2021-07-01 |
20210200642 | DYNAMIC TRIGGERING OF BLOCK-LEVEL BACKUPS BASED ON BLOCK CHANGE THRESHOLDS AND CORRESPONDING FILE IDENTITIES - A data storage management approach is disclosed that performs backup operations flexibly, based on a dynamic scheme of monitoring block changes occurring in production data. The illustrative system monitors block changes based on certain block-change thresholds and triggers block-level backups of the changed blocks when a threshold is passed. Block changes may be monitored in reference to particular files based on a reverse lookup mechanism. The illustrative system also collects and stores historical information on block changes, which may be used for reporting and predictive analysis. | 2021-07-01 |
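The threshold-driven triggering in 20210200642 can be sketched with a small monitor that accumulates block-change notifications and signals when a backup should run. A minimal sketch under assumptions: the class name, the set-based change tracking, and the single global threshold are all hypothetical; the patent also describes per-file thresholds via reverse lookup, which is omitted here.

```python
class BlockChangeMonitor:
    """Accumulate changed-block IDs; signal a block-level backup when
    the number of changed blocks crosses a configured threshold."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.changed = set()

    def record_change(self, block_id):
        """Record one block change; return True when the threshold is passed."""
        self.changed.add(block_id)
        return len(self.changed) >= self.threshold

    def take_backup(self):
        """Return (and clear) the sorted set of blocks to back up."""
        blocks = sorted(self.changed)
        self.changed.clear()
        return blocks
```

Clearing the set after a backup restarts the count, so backups fire dynamically as write activity warrants rather than on a fixed schedule.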
20210200643 | AUTOMATED DISCOVERY OF DATABASES - In some examples, a networked computing system comprises a backup node cluster of a backup service in communication with a host database node cluster of a host, a host database at least initially undiscovered by the backup node cluster, one or more processors coupled with memory storing instructions that, when executed, perform operations comprising at least installing a backup agent on at least one node of the host database node cluster, registering the host at the backup service, based on the host registration, triggering a host database discovery process to discover the undiscovered database automatically, the discovery process including a discovery call, in response to the discovery call, receiving metadata relating to the discovered database, and communicating with the discovered database. | 2021-07-01 |
20210200644 | BACKUP AND TIERED POLICY COORDINATION IN TIME SERIES DATABASES - A data protection system configured to backup a time series database is provided. The data protection system may be integrated with or have access to consolidation policies of the time series database. The backup policy and backup retention policy are set by monitoring the consolidation policy and adjusting the backup policy to ensure that the data in the time series database is protected prior to being downscaled, discarded or otherwise consolidated. | 2021-07-01 |
20210200645 | AUTOMATED DISCOVERY OF DATABASES - In some examples, a networked computing system comprises a backup node cluster of a backup service in communication with a host database node cluster of a host, a host database at least initially undiscovered by the backup node cluster, one or more processors coupled with memory storing instructions that, when executed, perform operations comprising at least installing a backup agent on at least one node of the host database node cluster, registering the host at the backup service, based on the host registration, triggering a host database discovery process to discover the undiscovered database automatically, the discovery process including a discovery call, in response to the discovery call, receiving metadata relating to the discovered database, and communicating with the discovered database. | 2021-07-01 |
20210200646 | SYSTEM AND METHOD OF GENERATING AUTOMATIC CHECKPOINTS OF A DISTRIBUTED FILE SYSTEM - Disclosed herein are techniques for generating fractional checkpoints on a distributed file system by snapshotting subcomponents referred to as “file sets.” The techniques include capturing a present time; calculating from configured parameters a future wakeup time for a subsequent set of checkpoints from the present time; walking a database of meta file set objects to locate at least one meta file set object; calculating a retention period for a snapshot that is being created; and generating a global data-less snapshot for the meta file set object and remote data file set objects associated with the meta file set object, and then repeating the process for subsequent file set objects in the database. | 2021-07-01 |
20210200647 | STORAGE DEVICE AND METHOD OF OPERATING THE SAME - The present technology relates to an electronic device. A storage device having improved backup performance according to the present technology includes a memory device including a plurality of logical storage areas, and a memory controller. The memory controller controls the memory device to perform a memory operation on an original storage area of the plurality of logical storage areas according to a request of a host, and to perform a mirroring operation that copies the memory operation performed on the original storage area to a backup storage area of the plurality of logical storage areas based on whether the memory device is in an idle state. | 2021-07-01 |
20210200648 | DISTRIBUTED RECOVERY OF SERVER INFORMATION - In some examples, a first computing device may receive, from a server, an indication that the server has recovered data. For instance, the first computing device may store metadata including a mapping for one or more file systems accessed by one or more client devices. Furthermore, a second computing device may store a copy of the mapping stored on the first computing device. The first computing device may receive, from the server, a mapping of the one or more file systems determined by the server based on the recovered data. The first computing device may compare the mapping from the server with the mapping of the one or more file systems on the first computing device, and may send, to the server, information about changes determined between the two mappings to enable the server to update the mapping on the server based on the changes. | 2021-07-01 |
20210200649 | ERROR RECOVERY FOR NON-VOLATILE MEMORY MODULES - A memory controller includes a command queue, a memory interface queue, at least one storage queue, and a replay control circuit. The command queue has a first input for receiving memory access commands. The memory interface queue receives commands selected from the command queue and couples to a heterogeneous memory channel which is coupled to at least one non-volatile storage class memory (SCM) module. The at least one storage queue stores memory access commands that are placed in the memory interface queue. The replay control circuit detects that an error has occurred requiring a recovery sequence, and in response to the error, initiates the recovery sequence. In the recovery sequence, the replay control circuit transmits selected memory access commands from the at least one storage queue by grouping non-volatile read commands together separately from all pending volatile reads, volatile writes, and non-volatile writes. | 2021-07-01 |
20210200650 | METHOD AND SYSTEM FOR INDICATING BIOS POST STATUS FROM A CHASSIS IDENTIFY LED - A system and method for providing status information during a power-on self-test routine. The system includes a basic input output system operable to execute the power-on self-test routine and output the status of the power-on self-test routine. The system includes an externally visible indicator such as a server chassis identify LED. A controller is coupled to the basic input output system and the externally visible indicator. The controller is operable to receive the status from the basic input output system, and to control the externally visible indicator in response to the status received from the basic input output system. | 2021-07-01 |
20210200651 | METHOD AND APPARATUS FOR PERFORMING TEST FOR CPU, AND ELECTRONIC DEVICE - A method and an apparatus for performing a test for a CPU, and an electronic device. A delay command in a SETWP test and a command-executing duration corresponding to each command subsequent to the delay command can be automatically deployed, so that the SETWP test is correctly performed for the CPU to obtain a test result. It is not necessary to rely on manual adjustment of a delay parameter corresponding to each command. | 2021-07-01 |
20210200652 | METHOD AND SYSTEM FOR INDICATING BIOS POST STATUS FROM STORAGE DRIVE LED - A system and method for providing a status indicator during a power-on self-test routine. A basic input output system is operable to execute the power-on self-test routine and output a status of the power-on self-test routine. The system includes a plurality of storage devices that each have an externally visible indicator. A controller is coupled to the basic input output system and the plurality of storage devices. The controller is operable to receive the status from the basic input output system and control the externally visible indicator of each of the storage devices in response to the status received from the basic input output system. | 2021-07-01 |
20210200653 | METHOD AND CONTROL SYSTEM FOR CONTROLLING AND/OR MONITORING DEVICES - A simple way of managing complex control instruction chains in a blockchain for a specific device-control task is provided, which permits a prescribed validity to be assigned for a specific task of a blockchain-based device control, the validity being defined, for example, by the life cycle (e.g., the period of use) of a device. | 2021-07-01 |
20210200654 | APPARATUS WITH TEMPERATURE MITIGATION MECHANISM AND METHODS FOR OPERATING THE SAME - Methods, apparatuses, and systems related to a memory device are described. The memory device may include a non-volatile (NV) memory and a controller. The controller may be configured to predict a temperature of the NV memory based on a real-time temperature of the controller. Based on the predicted temperature of the NV memory, the controller may execute a remedial action to reduce an actual temperature of the NV memory for executing an upcoming operation. | 2021-07-01 |
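The prediction-then-remediation flow in 20210200654 can be sketched as two small steps. The linear model below is purely hypothetical: the patent does not disclose how the controller's real-time temperature maps to a predicted NV-memory temperature, so the gain, offset, and the 70 °C limit are assumptions for illustration.

```python
def predict_nand_temperature(controller_temp_c, gain=0.9, offset=-5.0):
    """Hypothetical linear model mapping the controller's measured
    temperature to a predicted NAND die temperature."""
    return gain * controller_temp_c + offset

def choose_remedial_action(predicted_temp_c, limit_c=70.0):
    """Throttle the upcoming operation when the predicted NV-memory
    temperature would meet or exceed the limit; otherwise proceed."""
    return "throttle" if predicted_temp_c >= limit_c else "proceed"
```

The point of predicting rather than measuring the NAND temperature directly is that the remedial action can be taken *before* the upcoming operation pushes the die over its limit.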
20210200655 | FILTERED QUERY-RETRY LOGGING IN A DATABASE ENVIRONMENT - Systems, methods, and devices for automatically retrying a query. A method includes receiving a query directed to database data and assigning execution of the query to one or more execution nodes of a database platform. The method includes determining that execution of the query was unsuccessful. The method includes assigning a first retry execution of the query on the first version of the database platform and assigning a second retry execution of the query on a second version of the database platform. | 2021-07-01 |
20210200656 | APPARATUS AND METHOD FOR ADAPTIVELY SCHEDULING WORK ON HETEROGENEOUS PROCESSING RESOURCES - An apparatus and method for intelligently scheduling threads across a plurality of logical processors. For example, one embodiment of a processor comprises: a plurality of logical processors comprising one or more of a first logical processor type and a second logical processor type, the first logical processor type associated with a first core type and the second logical processor type associated with a second core type; a scheduler to schedule a plurality of threads for execution on the plurality of logical processors in accordance with performance data associated with the plurality of threads; wherein if the performance data indicates that a new thread should be executed on a logical processor of the first logical processor type, but all logical processors of the first logical processor type are busy, the scheduler is to determine whether to migrate a second thread from the logical processors of the first logical processor type to a logical processor of the second logical processor type based on an evaluation of first and second performance values associated with execution of the new thread on the first or second logical processor types, respectively, and further based on an evaluation of third and fourth performance values associated with execution of the second thread on the first or second logical processor types, respectively. | 2021-07-01 |
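The four-value comparison in 20210200656 reduces to weighing two placements of the two threads. A minimal sketch, assuming the performance values are additive throughput scores (the function name and the simple sum are assumptions; the patent only requires an evaluation of the four values):

```python
def should_migrate(new_on_big, new_on_little, cur_on_big, cur_on_little):
    """Compare the two possible placements of the new thread and the
    currently running thread across a big (first-type) and little
    (second-type) logical processor, using predicted performance values.

    Returns True when migrating the current thread to the little core
    (so the new thread can use the big core) improves total performance."""
    keep = cur_on_big + new_on_little   # current thread stays on the big core
    swap = new_on_big + cur_on_little   # current thread migrates to a little core
    return swap > keep
```

With scores (10, 4, 6, 5), swapping wins (15 vs 10); with (5, 4, 9, 2) the current thread stays put (7 vs 13), matching the intuition that a thread which benefits greatly from the big core should not be evicted for one that barely does.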
20210200657 | Controlling Screen Time Based on Context - A computer-implemented technique controls consumption of applications by a supervisee (e.g., a child). The technique detects when a supervisee attempts to interact with an application. In response, the technique receives context input signals that describe a current context affecting the supervisee. The technique then generates an output result based on the current context information and a set of rules expressed by rule logic. The technique then controls interaction by the supervisee with the application based on the output result. In one implementation, the technique automatically generates the rule logic, which may correspond to a set of discrete rules and/or a machine-trained model that implicitly expresses the rules. At least some of the rules specify amounts of time allocated to the supervisee for interaction with the plural applications in plural contexts. According to another illustrative aspect, the technique uses a machine-trained model to automatically classify a new application. | 2021-07-01 |
20210200658 | METHOD AND APPARATUS FOR DETECTING A MONITORING GAP FOR AN INFORMATION HANDLING SYSTEM - An information handling system includes an analysis block configured to obtain monitoring results from a monitoring data repository, to analyze the monitoring results to identify at least one monitoring gap, and to provide a monitoring gap result identifying the at least one monitoring gap. A machine learning recommender produces a recommendation to reduce the monitoring gap, and a user interface displays the recommendation. | 2021-07-01 |
20210200659 | Determining Capacity in Storage Systems Using Machine Learning Techniques - Methods, apparatus, and processor-readable storage media for determining capacity in storage systems using machine learning techniques are provided herein. An example computer-implemented method includes obtaining capacity-related data from a storage system; forecasting, for a given temporal period, capacity of one or more storage objects of the storage system by applying machine learning techniques to at least a portion of the capacity-related data; aggregating the forecasted capacity for at least portions of the one or more storage objects; determining, based on the aggregated forecasted capacity of the storage objects, whether at least a portion of the storage system will run out of capacity in connection with the given temporal period; and performing one or more automated actions based at least in part on the determination as to whether the at least a portion of the at least one storage system will run out of capacity. | 2021-07-01 |
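The forecasting step in 20210200659 can be illustrated with the simplest possible learner. This is a stand-in only: the patent's "machine learning techniques" are unspecified, so the ordinary least-squares line fit below (and the function name) are assumptions chosen for brevity.

```python
def forecast_days_until_full(samples, capacity):
    """Fit a least-squares line through (day, bytes_used) samples and
    return the projected day index at which usage reaches capacity,
    or None if usage is flat or shrinking."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    denom = sum((x - mean_x) ** 2 for x, _ in samples)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in samples) / denom
    if slope <= 0:
        return None  # capacity never runs out under this trend
    intercept = mean_y - slope * mean_x
    return (capacity - intercept) / slope
```

Per the abstract, such forecasts would be computed per storage object, aggregated, and compared against the system's capacity to decide whether an automated action (e.g., an alert or expansion) is needed.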
20210200660 | Method And System For Automatic Real-Time Causality Analysis Of End User Impacting System Anomalies Using Causality Rules And Topological Understanding Of The System To Effectively Filter Relevant Monitoring Data - A system and method are disclosed for the automated identification of causal relationships between a selected set of trigger events and observed abnormal conditions in a monitored computer system. On detection of a trigger event, a focused, recursive search of recorded abnormalities in reported measurement data, topological changes, or transaction load is started to identify operating conditions that explain the trigger event. The system also receives topology data from deployed agents, which is used to create and maintain a topological model of the monitored system. The topological model is used to restrict the search for causal explanations of the trigger event to elements that have a connection to or interact with the element on which the trigger event occurred. This ensures that only monitoring data from elements that are potentially involved in the causal chain of events leading to the trigger event is considered. | 2021-07-01 |
20210200661 | DEBUG SYSTEMS FOR DETERMINISTIC VALIDATION OF DATA STORAGE DEVICES - Systems and methods are disclosed for deterministically validating an SSD device based on the occurrence of a triggering firmware event. In some implementations, a method is provided. The method comprises receiving an ID of a triggering firmware event from a computing device and receiving data of a cross feature event from the computing device. A storage device may execute a plurality of NVMe commands as part of a test to generate a plurality of firmware events. An ID of each of the plurality of firmware events is compared to the ID of the triggering firmware event and, in response to an ID of one of the plurality of firmware events matching the ID of the triggering firmware event, the one of the plurality of firmware events may be identified as the triggering firmware event and an indication of the match may be generated. At least part of the data of the cross feature event is transmitted to the storage device to cause execution of the cross feature event during execution of the triggering firmware event. | 2021-07-01 |
20210200662 | SYSTEM AND METHOD TO USE PAST COMPUTER EXECUTABLE INSTRUCTIONS TO EVALUATE PROPOSED COMPUTER EXECUTABLE INSTRUCTIONS - Computer executable instructions including code sections are received and compared to previously analyzed computer executable instructions. The code sections are then analyzed and assigned a risk score. If the risk score is over a threshold, an alarm may be communicated or the system may substitute computer executable instructions that may have been created according to a standard or have been previously approved. | 2021-07-01 |
20210200663 | AUTOMATED UNIT TESTING IN A MAINFRAME CICS ENVIRONMENT - An automated system is presented for unit testing an application in a mainframe execution environment. A plurality of stub objects reside in the mainframe execution environment, such that each stub object in the plurality of stub objects represents a different stub type. A command translator table is configured with an entry for each command available for an online transaction processor. Each entry in the command translator table specifies a stub type for the command and includes a listing of possible arguments associated with the given command, such that each possible argument in the listing of possible arguments has a specified category type. A test configurator executes in the mainframe execution environment and is configured to receive and parse a test input file. A setup routine interacts with the test configurator to receive records from the test input file. | 2021-07-01 |
20210200664 | SYSTEMS AND METHODS FOR REMOTE MOBILE DEVELOPMENT AND TEST FEEDBACK - Systems and methods for remote mobile development and test feedback are disclosed. According to one embodiment, in an electronic device testing apparatus comprising at least one computer processor, a method for remote mobile development and test feedback may include: (1) receiving a test request comprising one or more tests to conduct on at least one electronic device in a device farm; (2) parsing the one or more test requests to identify the features to be tested; (3) identifying one or more test scripts that encompasses the features to be tested; (4) identifying a required software configuration on the at least one electronic device to conduct the one or more test; (5) installing the required software configuration on the at least one electronic device; (6) executing the test features; and (7) storing results of the test features. | 2021-07-01 |
20210200665 | RULE TESTING FRAMEWORK FOR EXECUTABLE RULES OF A SERVICE PROVIDER SYSTEM - There are provided systems and methods for a rule testing framework for executable rules of a service provider system. During rule implementation and/or testing of rules currently implemented in production systems, different values for the variables and attributes of a rule may need to be tested to ensure proper rule functioning. In order to test the rule, the expression of the rule is determined, and each variable is considered in turn. The expression is evaluated so that the selected variable becomes the output of the expression. Thus, the values of the other variables may then be determined so that the selected variable is the output of the expression. The rule may then be tested for positive and negative values of the selected variable so that the rule's functioning for the selected variable is tested. | 2021-07-01 |
20210200666 | MOVABLE PLATFORM AND ACTUATING ATTACHMENT - Disclosed herein is a movable platform (MP) for moving freight during cross-dock operations. The MP comprises a mechanical actuation assembly used to deploy a plurality of roller assemblies used for moving the MP. Also disclosed is an actuating attachment used to deploy the mechanical actuation assembly of the MP. The actuating attachment can be attached to a conveyance vehicle, such as a forklift, or built in to an automated guided vehicle. | 2021-07-01 |
20210200667 | MEMORY THIN PROVISIONING USING MEMORY POOLS - Examples described herein relate to memory thin provisioning in a memory pool of one or more dual in-line memory modules or memory devices. At any instance, any central processing unit (CPU) can request and receive a full virtual allocation of memory in an amount that exceeds the physical memory attached to the CPU (near memory). A remote pool of additional memory can be dynamically utilized to fill the gap between allocated memory and near memory. This remote pool is shared between multiple CPUs, with dynamic assignment and address re-mapping provided for the remote pool. To improve performance, the near memory can be operated as a cache of the pool memory. Inclusive or exclusive content storage configurations can be applied. An inclusive cache configuration can include an entry in a near memory cache also being stored in a memory pool whereas an exclusive cache configuration can provide an entry in either a near memory cache or in a memory pool but not both. Near memory cache management includes current data location tracking, access counting and other caching heuristics, eviction of data from near memory cache to pool memory and movement of data from pool memory to memory cache. | 2021-07-01 |
20210200668 | RESERVED MEMORY IN MEMORY MANAGEMENT SYSTEM - A memory management system, such as a virtual memory manager that manages a virtual memory space including volatile memory (e.g., DRAM) and non-volatile memory (e.g., flash memory), creates, in one embodiment, a reserved portion of memory in the volatile memory for at least one user application; that reserved portion can also store content that is restricted to read-only permission within the non-volatile memory. | 2021-07-01 |
20210200669 | SEPARATE CORES FOR MEDIA MANAGEMENT OF A MEMORY SUB-SYSTEM - Methods, systems, and devices for separate cores for media management of a memory sub-system are described. A controller of a memory sub-system can include a first processing core and a second processing core for a garbage collection procedure. The first processing core can perform a first set of one or more operations associated with a read process of a first stage of a garbage collection procedure for a plurality of transfer units of the memory sub-system. The second processing core can perform a second set of one or more operations associated with a write process of the first stage of the garbage collection procedure, where the second set of one or more operations are concurrent with the first set of one or more operations. | 2021-07-01 |
20210200670 | ASYNCHRONOUS POWER LOSS RECOVERY FOR MEMORY DEVICES - An example memory sub-system includes a memory device and a processing device, operatively coupled to the memory device. The processing device is configured to maintain a logical-to-physical (L2P) table, wherein a region of the L2P table is cached in a volatile memory; maintain a write count reflecting a number of bytes written to the memory device; maintain a cache miss count reflecting a number of cache misses with respect to a cache of the L2P table; responsive to determining that a value of a predetermined function of the write count and the cache miss count exceeds a threshold value, copy the region of the L2P table to a non-volatile memory. | 2021-07-01 |
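The flush condition in 20210200670 depends on "a predetermined function of the write count and the cache miss count." A minimal sketch under stated assumptions: the weighted-sum function, the class name, and the 4096-bytes-per-miss weight below are all hypothetical, since the patent does not specify the combining function.

```python
class L2PRegionCache:
    """Track bytes written and L2P-cache misses for a cached region of
    the logical-to-physical table; signal when the region should be
    copied to non-volatile memory to bound power-loss recovery work."""
    def __init__(self, threshold, miss_weight=4096):
        self.threshold = threshold
        self.miss_weight = miss_weight
        self.bytes_written = 0
        self.cache_misses = 0

    def record_write(self, nbytes):
        self.bytes_written += nbytes

    def record_cache_miss(self):
        self.cache_misses += 1

    def should_flush(self):
        """Flush when the combined metric crosses the threshold."""
        metric = self.bytes_written + self.miss_weight * self.cache_misses
        return metric >= self.threshold
```

Weighting misses alongside raw write volume captures the abstract's intuition that both kinds of activity make the cached L2P region more expensive to reconstruct after an asynchronous power loss.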
20210200671 | ONE-TIME PROGRAMMABLE MEMORY DEVICE AND FAULT TOLERANCE METHOD THEREOF - A one-time programmable memory device is provided in the invention. The one-time programmable memory device includes a one-time programmable memory and a memory controller. The one-time programmable memory includes a first block, a second block and a third block. The first block includes a plurality of initial-address-unit groups and each initial-address-unit group includes a plurality of initial address units and each initial address unit corresponds to a variable to record the storage address of its corresponding variable. The second block includes a plurality of initial address control units and each initial address control unit corresponds to one of the variables to record the corresponding initial-address-unit group of each variable. The third block includes a plurality of storage units and each storage unit has a corresponding storage address. The memory controller is configured to assign the storage addresses to the variables. | 2021-07-01 |
20210200672 | CORRUPTED STORAGE PORTION RECOVERY IN A MEMORY DEVICE - Devices and techniques for corrupted storage portion recovery in a memory device are described herein. A failure event can be detected during a garbage collection operation on a collection of storage portions (e.g., pages) in a memory array. Here, members of the collection of storage portions are being moved from a former physical location to a new physical location by the garbage collection operation. A reference to a former physical location of a possibly corrupt storage portion in the collection of storage portions can be retrieved in response to the failure event. Here, the possibly corrupt storage portion has already been written to a new physical location as part of the garbage collection operation. The possibly corrupt storage portion can then be rewritten at the new physical location using data from the former physical location. | 2021-07-01 |
20210200673 | MEMORY MANAGEMENT APPARATUS AND METHOD FOR COMPARTMENTALIZATION USING LINEAR ADDRESS METADATA - An apparatus and method for memory management using compartmentalization. For example, one embodiment of a processor comprises: execution circuitry to execute instructions and process data, at least one instruction to generate a system memory access request using a first linear address; and address translation circuitry to perform a first walk operation through a set of one or more address translation tables to translate the first linear address to a first physical address, the address translation circuitry to concurrently perform a second walk operation through a set of one or more linear address metadata tables to identify metadata associated with the linear address, and to use one or more portions of the metadata to validate access by the at least one instruction to the first physical address. | 2021-07-01 |
20210200674 | METHODS AND APPARATUS FOR PERSISTENT DATA STRUCTURES - A method may include storing at least a portion of a metadata buffer of a persistent data structure in volatile memory, and storing at least a portion of a data buffer of the persistent data structure in persistent memory. A system may include a processor, a volatile memory coupled to the processor, and a persistent memory coupled to the processor. The processor may be configured to execute procedures including storing at least a portion of a metadata buffer of a persistent data structure in volatile memory, and storing at least a portion of a data buffer of the persistent data structure in persistent memory. A method may include storing at least a portion of a transient part of a persistent data structure in volatile memory, and storing at least a portion of a persistent part of the persistent data structure in persistent memory. | 2021-07-01 |
20210200675 | SHARED READ - USING A REQUEST TRACKER AS A TEMPORARY READ CACHE - Disclosed embodiments relate to a shared read request (SRR) using a common request tracker (CRT) as a temporary cache. In one example, a multi-core system includes a memory and a memory controller to receive a SRR from a core when a Leader core is not yet identified, allocate a CRT entry and store the SRR therein, mark it as a Leader, send a read request to a memory address indicated by the SRR, and when read data returns from the memory, store the read data in the CRT entry, send the read data to the Leader core, and await receipt, unless already received, of another SRR from a Follower core, the other SRR having a same address as the SRR, then, send the read data to the Follower core, and deallocate the CRT entry. | 2021-07-01 |
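The Leader/Follower flow above — one real memory read buffered in a common request tracker (CRT) entry, handed to a later Follower with the same address, then deallocated — can be sketched as a small model. The class and method names are ours; the real CRT is a hardware structure.

```python
# Illustrative model of a CRT entry used as a temporary read cache.
class CommonRequestTracker:
    def __init__(self, read_memory):
        self.read_memory = read_memory   # the real memory access, done once
        self.entries = {}                # address -> buffered read data
        self.memory_reads = 0

    def shared_read(self, core_id, address):
        if address in self.entries:               # Follower with the same address:
            return self.entries.pop(address)      # serve from the entry, deallocate it
        self.memory_reads += 1                    # Leader: one real memory read
        data = self.read_memory(address)
        self.entries[address] = data              # buffer for the expected Follower
        return data
```

With one Leader and one Follower, both cores see the same data while memory is read only once.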
20210200676 | Apparatus and Method of Read Leveling for Storage Class Memory - A method and apparatus for read wearing control for storage class memory (SCM) are disclosed. The read data control apparatus, located between a host and the SCM subsystem, comprises a read data cache, an address cache and an SCM controller. The address cache stores pointers pointing to data stored in logging area(s) located in the SCM. For a read request, the read wearing control determines whether the read request is a read data cache hit, an address cache hit or neither (i.e., read data cache miss and address cache miss). For the read data cache hit, the requested data is returned from the read data cache. For the address cache hit, the requested data is returned from the logging area(s) and the read data becomes a candidate to be placed in the read data cache. For read data cache and address cache misses, the requested data is returned from SCM. | 2021-07-01 |
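The three-way lookup described above (read data cache hit, address cache hit, or a miss of both) can be expressed directly. The dict-based caches and the promotion-on-address-hit rule are simplifications for illustration.

```python
# Sketch of the three-way read path: data cache, then address cache
# (pointer into a logging area), then the SCM itself.
def serve_read(addr, data_cache, addr_cache, logging_area, scm):
    """Return (data, source) for a read request."""
    if addr in data_cache:                      # read data cache hit
        return data_cache[addr], "data_cache"
    if addr in addr_cache:                      # address cache hit
        data = logging_area[addr_cache[addr]]   # follow the pointer into the log
        data_cache[addr] = data                 # candidate for the read data cache
        return data, "logging_area"
    return scm[addr], "scm"                     # both caches missed
```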
20210200677 | ANONYMIZED NETWORK ADDRESSING IN CONTENT DELIVERY NETWORKS - Systems, methods, apparatuses, and software for a content delivery network that caches content for delivery to end user devices are presented. In one example, a content delivery network (CDN) is presented having a plurality of cache nodes that cache content for delivery to end user devices. The CDN includes an anonymization node configured to establish anonymized network addresses for transfer of content to cache nodes from one or more origin servers that store the content before caching by the CDN. The anonymization node is configured to provide indications of relationships between the anonymized network addresses and the cache nodes to a routing node of the CDN. The routing node is configured to route the content transferred by the one or more origin servers responsive to content requests of the cache nodes based on the indications of the relationships between the anonymized network addresses and the cache nodes. | 2021-07-01 |
20210200678 | REDUNDANT CACHE-COHERENT MEMORY FABRIC - A processor, including a core; and a cache-coherent memory fabric coupled to the core and having a primary cache agent (PCA) configured to provide a primary access path; and a secondary cache agent (SCA) configured to provide a secondary access path that is redundant to the primary access path, wherein the PCA has a coherency controller configured to maintain data in the secondary access path coherent with data in the primary access path. | 2021-07-01 |
20210200679 | SYSTEM AND METHOD FOR MIXED TILE-AWARE AND TILE-UNAWARE TRAFFIC THROUGH A TILE-BASED ADDRESS APERTURE - In one aspect, space in a tile-unaware cache associated with an address aperture may be managed in different ways depending on whether a processing component initiating an access request through the aperture to a tile-based memory is tile-unaware or tile-aware. Upon a full-tile read by a tile-aware process, data may be evicted from the cache, or space may not be allocated. Upon a full-tile write by a tile-aware process, data may be evicted from the cache. In another aspect, a tile-unaware process may be supplemented with tile-aware features by generating a full tile of addresses in response to a partial-tile access. Upon a partial-tile read by the tile-unaware process, the generated addresses may be used to pre-fetch data. Upon a partial-tile write, the addresses may be used to evict data. Upon a bit block transfer, the addresses may be used in dividing the bit block transfer into units of tiles. | 2021-07-01 |
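The "generate a full tile of addresses from a partial-tile access" step above can be sketched for a simple linear-in-tile layout. The tile geometry and pitch below are illustrative values, not taken from the application.

```python
# Hypothetical sketch: expand one address into the full tile containing it,
# e.g. to pre-fetch on a partial-tile read or evict on a partial-tile write.
def tile_addresses(addr, tile_w=4, tile_h=4, pitch=64):
    """Return every address in the tile containing addr, row-major order."""
    x, y = addr % pitch, addr // pitch            # 2-D coordinates of the access
    tx = (x // tile_w) * tile_w                   # tile origin, x
    ty = (y // tile_h) * tile_h                   # tile origin, y
    return [(ty + r) * pitch + (tx + c)
            for r in range(tile_h) for c in range(tile_w)]
```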
20210200680 | CACHE DYNAMIC RANDOM ACCESS MEMORY - Disclosed is a dynamic random access memory that has columns, data rows, tag rows and comparators. Each comparator compares address bits and tag information bits from the tag rows to determine a cache hit and generate address bits to access data information in the DRAM as a multiway set associative cache. | 2021-07-01 |
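The comparator behavior in this abstract — stored tags of a set compared against the address tag to detect a hit and select the way — corresponds to a standard set-associative lookup. The field widths below are illustrative, not from the application.

```python
# Toy model of the tag-row comparison for an n-way set associative cache.
def lookup(address, tag_rows, set_bits=4, offset_bits=6, ways=4):
    """tag_rows[set_index][way] holds the stored tag (or None)."""
    set_index = (address >> offset_bits) & ((1 << set_bits) - 1)
    tag = address >> (offset_bits + set_bits)
    for way in range(ways):
        if tag_rows[set_index][way] == tag:   # comparator match: cache hit
            return True, way
    return False, None                        # miss in every way
```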
20210200681 | DATA STORAGE METHOD AND APPARATUS, AND SERVER - This disclosure relates to a data storage method and apparatus, and a server. The method includes receiving, by a first server, a write instruction sent by a second server, storing target data in a cache of a controller, detecting a read instruction for the target data, and storing the target data in a storage medium of a non-volatile memory based on the read instruction. In other words, when the second server needs to write the target data to the first server, the target data is not only written to the cache of the first server, but also written to the storage medium of the first server. This can ensure that the data in the cache is written to the storage medium promptly. | 2021-07-01 |
20210200682 | FULL MULTI-PLANE OPERATION ENABLEMENT - Methods, systems, and devices for full multi-plane operation enablement are described. A flash controller can determine that a first plane of a set of planes of a memory die is an invalid plane. The flash controller can issue a single descriptor associated with a multi-plane operation for the set of planes of the memory die. The single descriptor can include a plurality of commands for the multi-plane operation in which the first command of the plurality of commands can be a duplicate of a second command of the plurality of commands based on the first plane being the invalid plane. In some cases, a negative-and (NAND) controller can receive the single descriptor associated with the multi-plane operation for the set of planes of a memory die. The NAND controller can issue a plurality of commands for the multi-plane operation based on receiving the single descriptor. | 2021-07-01 |
20210200683 | EVICTION OF A CACHE LINE BASED ON A MODIFICATION OF A SECTOR OF THE CACHE LINE - An indication to perform an eviction operation on a cache line in a cache can be received. A determination can be made as to whether at least one sector of the cache line is associated with invalid data. In response to determining that at least one sector of the cache line is associated with invalid data, a read operation can be performed to retrieve valid data associated with the at least one sector. The at least one sector of the cache line that is associated with the invalid data can be modified based on the valid data. Furthermore, the eviction operation can be performed on the cache line with the modified at least one sector. | 2021-07-01 |
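The read-modify-evict flow above can be sketched in a few lines: any sector flagged invalid is patched with valid data retrieved by a read operation before the line is evicted. The sector list and callback are our own simplification.

```python
# Sketch of evicting a cache line after repairing its invalid sectors.
def evict_line(sectors, valid_mask, fetch_valid):
    """sectors: per-sector payloads; valid_mask: one bool per sector.
    fetch_valid(i) is the read operation that retrieves valid data for
    sector i. Returns the line actually written back on eviction."""
    for i, ok in enumerate(valid_mask):
        if not ok:                        # sector holds invalid data
            sectors[i] = fetch_valid(i)   # modify it with the valid data
    return sectors                        # evict the modified line
```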
20210200684 | MEMORY TAGGING APPARATUS AND METHOD - An apparatus and method for tagged memory management. For example, one embodiment of a processor comprises: execution circuitry to execute instructions and process data, at least one instruction to generate a system memory access request having a first address pointer; and address translation circuitry to determine whether to translate the first address pointer with or without metadata processing, wherein if the first address pointer is to be translated with metadata processing, the address translation circuitry to: perform a lookup in a memory metadata table to identify a memory metadata value, determine a pointer metadata value associated with the first address pointer, and compare the memory metadata value with the pointer metadata value, the comparison to generate a validation of the memory access request or a fault condition, wherein if the comparison results in a validation of the memory access request, then accessing a set of one or more address translation tables to translate the first address pointer to a first physical address and to return the first physical address responsive to the memory access request. | 2021-07-01 |
20210200685 | MEMORY TAGGING METADATA MANIPULATION - An apparatus and method for tagged memory management, an embodiment including execution circuitry to generate a system memory access request having a first address pointer and address translation circuitry to determine whether to translate the first address pointer with metadata processing. The address translation circuitry is to access address translation tables to translate the first address pointer to a first physical address, perform a lookup in a memory metadata table to identify a memory metadata value associated with a physical address range including the first physical address, determine a pointer metadata value associated with the first address pointer, and compare the memory metadata value with the pointer metadata value; and when the comparison results in a validation of the memory access request, then return the first physical address. | 2021-07-01 |
20210200686 | MEMORY TAGGING APPARATUS AND METHOD - An apparatus and method for tagged memory management. For example, one embodiment of a processor comprises: execution circuitry to execute instructions and process data, at least one instruction to generate a system memory access request having a first address pointer; and address translation circuitry to determine whether to translate the first address pointer with or without metadata processing, wherein if the first address pointer is to be translated with metadata processing, the address translation circuitry to: perform a lookup in a memory metadata table to identify a memory metadata value, determine a pointer metadata value associated with the first address pointer, and compare the memory metadata value with the pointer metadata value, the comparison to generate a validation of the memory access request or a fault condition, wherein if the comparison results in a validation of the memory access request, then accessing a set of one or more address translation tables to translate the first address pointer to a first physical address and to return the first physical address responsive to the memory access request. | 2021-07-01 |
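The three memory-tagging applications above share one core check: metadata carried in the pointer must match the metadata recorded for the memory it addresses, or a fault is raised. A minimal sketch follows; the 4-bit tag in bits 56-59 and the 16-byte granule are an illustrative layout (loosely Arm-MTE-like), not the layout claimed here.

```python
# Hypothetical tag-check sketch: pointer metadata vs. memory metadata table.
TAG_SHIFT, TAG_MASK = 56, 0xF

def checked_access(pointer, memory_tags, granule=16):
    pointer_tag = (pointer >> TAG_SHIFT) & TAG_MASK
    address = pointer & ((1 << TAG_SHIFT) - 1)    # strip the metadata bits
    memory_tag = memory_tags[address // granule]  # memory metadata table lookup
    if pointer_tag != memory_tag:
        raise ValueError("tag mismatch: fault condition")
    return address                                # validated access proceeds
```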
20210200687 | APPARATUS AND METHOD FOR EFFICIENT PROCESS-BASED COMPARTMENTALIZATION - An apparatus and method for efficient process-based compartmentalization. For example, one embodiment of a processor comprises: execution circuitry to execute instructions and process data; memory management circuitry coupled to the execution circuitry, the memory management circuitry to manage access to a system memory by a plurality of related processes using one or more process-specific translation structures and one or more shared translation structures to be shared by the related processes; and one or more control registers to store a process-specific base address pointer associated with a first process of the plurality of related processes and to store a shared base address pointer to identify the shared translation structures; wherein the memory management circuitry is to use the process-specific base address pointer in combination with a first linear address provided by the first process to walk the process-specific translation structures to identify any permissions and/or physical address associated with the first linear address, wherein if permissions are identified, the memory management circuitry is to use the permissions in place of any permissions specified in the shared translation structures. | 2021-07-01 |
20210200688 | APPARATUS AND METHOD FOR IMPROVING INPUT AND OUTPUT THROUGHPUT OF MEMORY SYSTEM - A memory system includes a plurality of memory dies configured to store data; and a controller coupled with the plurality of memory dies through a plurality of channels, wherein the controller decides whether to perform a pairing operation by comparing the number of pieces of read data to be outputted to an external device, which are included in a first buffer, with an output count reference value, and wherein, in the case where the number of pieces of read data stored in the first buffer is greater than or equal to the output count reference value, the controller gathers other read requests and logical addresses corresponding thereto in a second buffer, and performs the pairing operation. | 2021-07-01 |
20210200689 | Storage System and Method for Secure Host Controller Memory Buffer Access - A storage system and method for secure host controller memory buffer access are provided. In one embodiment, a storage system is provided comprising a storage area configured to store a database comprising a submission queue and a completion queue dedicated for use by an authorized host, and a controller. The controller is configured to: receive a request to access the storage area; determine whether the request is from the authorized host or from an unauthorized host; in response to determining that the request is from the authorized host, grant the request; and in response to determining that the request is from an unauthorized host, deny the request. Other embodiments are provided. | 2021-07-01 |
20210200690 | FINE GRAINED MEMORY AND HEAP MANAGEMENT FOR SHARABLE ENTITIES ACROSS COORDINATING PARTICIPANTS IN DATABASE ENVIRONMENT - Many computer applications comprise multiple threads of execution. Some client application requests are fulfilled by multiple cooperating processes. Techniques are disclosed for creating and managing memory namespaces that may be shared among a group of cooperating processes in which the memory namespaces are not accessible to processes outside of the group. The processes sharing the memory each have a handle that references the namespace. A process having the handle may invite another process to share the memory by providing the handle. A process sharing the private memory may change the private memory or the processes sharing the private memory according to a set of access rights assigned to the process. The private shared memory may be further protected from non-sharing processes by tagging memory segments allocated to the shared memory with a protection key and/or an encryption key used to encrypt/decrypt data stored in the memory segments. | 2021-07-01 |
20210200691 | MANAGEMENT OF RESOURCES IN A MODULAR CONTROL SYSTEM - A device may include a memory storing instructions and a processor configured to execute the instructions to receive, from a configuration client device, a request to register a resource; and identify a domain object associated with the resource, wherein the domain object corresponds to a logical entity representing a device or port, or corresponds to a logical entity controlling another resource included in another domain object. The processor may be further configured to select a domain object handler for the identified domain object; register the identified domain object with the selected domain object handler; and use the selected domain object handler to process messages associated with the registered domain object. | 2021-07-01 |
20210200692 | SIGNAL COMBINER - Examples relate to a signal combiner. A system may include a signal combiner to receive a first signal from a first peripheral device. The signal combiner may similarly receive a second signal from a second peripheral device. Further, the signal combiner may combine the first signal and the second signal into a combined signal. The system may also include a controller coupled to the signal combiner. The controller may receive the combined signal. | 2021-07-01 |
20210200693 | SEQUENCER CHAINING CIRCUITRY - A system can include a plurality of sequencers each configured to provide a number of sequenced output signals responsive to assertion of a respective sequencer enable signal provided thereto. The system can include chaining circuitry coupled to the plurality of sequencers. The chaining circuitry can comprise logic to: responsive to assertion of a primary enable signal received thereby, assert respective sequencer enable signals provided to the plurality of sequencers in accordance with a first sequence; and responsive to deassertion of the primary enable signal, assert the respective sequencer enable signals provided to the plurality of sequencers in accordance with a second sequence. | 2021-07-01 |
20210200694 | STAGING BUFFER ARBITRATION - Staging buffer arbitration includes: storing a plurality of memory access requests in a staging buffer; selecting a memory access request of the plurality of memory access requests from the staging buffer based on one or more arbitration rules; and moving the memory access request from the staging buffer to a command queue. | 2021-07-01 |
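The select-and-move step above can be sketched with concrete arbitration rules. The two rules used here (prefer reads, then oldest first) are our illustrative choices; the application only says "one or more arbitration rules."

```python
# Illustrative staging-buffer arbitration: pick one request, remove it from
# the staging buffer, and hand it to the command queue.
def arbitrate(staging_buffer):
    """Requests are (seq, kind, addr) tuples; returns the chosen request."""
    reads = [r for r in staging_buffer if r[1] == "read"]
    pool = reads if reads else staging_buffer   # rule 1: prefer reads
    choice = min(pool, key=lambda r: r[0])      # rule 2: oldest first
    staging_buffer.remove(choice)               # moved out of the staging buffer
    return choice                               # append this to the command queue
```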
20210200695 | STAGING MEMORY ACCESS REQUESTS - Staging memory access requests includes receiving a memory access request directed to Dynamic Random Access Memory; storing the memory access request in a staging buffer; and moving the memory access request from the staging buffer to a command queue. | 2021-07-01 |
20210200696 | PIM DEVICE, COMPUTING SYSTEM INCLUDING THE PIM DEVICE, AND OPERATING METHOD OF THE PIM DEVICE - A processing in memory (PIM) device includes a memory configured to receive data through a first path from a host processor provided outside the PIM device, and an information gatherer configured to receive the data through a second path connected to the first path when the data is transferred to the memory via the first path, and to generate information by processing the data received through the second path. | 2021-07-01 |
20210200697 | DETERMINING WRITE COMMANDS FOR DELETION IN A HOST INTERFACE - An interface of a memory sub-system can determine that a particular write command received from a host has a same address as a subsequently received write command from the host. The interface can delete the particular write command if it is still in the interface or send a signal to delete the particular write command if the write command has already been provided from the interface. | 2021-07-01 |
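The still-in-the-interface case above amounts to write coalescing: a newly arrived write to the same address makes the earlier queued write deletable. A sketch, with our own queue representation (the already-forwarded case, which needs a downstream delete signal, is omitted):

```python
# Sketch of deleting a superseded write command still held in the interface.
def admit_write(queue, cmd):
    """queue: list of (addr, data) commands still in the interface;
    cmd: newly received write. Returns how many earlier writes it superseded."""
    addr, _ = cmd
    stale = [c for c in queue if c[0] == addr]
    for c in stale:
        queue.remove(c)     # delete the particular earlier write in place
    queue.append(cmd)       # only the newest write to this address survives
    return len(stale)
```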
20210200698 | PERFORMANCE OF MEMORY SYSTEM BACKGROUND OPERATIONS - Various examples are directed to devices and methods involving a host device and a memory system, the memory system comprising a memory controller and a plurality of memory locations. The memory system may send to the host device a first message describing background operations to be performed at the memory system. The memory system may receive from the host device a second message indicating permission to execute the background operations and may begin to execute at least one background operation. | 2021-07-01 |
20210200699 | NEUROMORPHIC MEMORY DEVICE AND METHOD - Apparatus and methods are disclosed, including memory devices and systems. Example memory devices, systems and methods include a stack of memory dies, a controller die, and a buffer. Example memory devices, systems and methods include one or more neuromorphic layers logically coupled between one or more dies in the stack of memory dies and a host interface of the controller die. | 2021-07-01 |
20210200700 | PERFORMANCE OF STORAGE SYSTEM BACKGROUND OPERATIONS - Methods for operating a memory device can include monitoring communications from a host device for a notification that a battery of the host device has entered a charging state and performing a background operation of the memory device responsive to receiving this notification. The notification can be an added functionality incorporated into a standardized interface. | 2021-07-01 |
20210200701 | VIRTUAL HEALTHCARE COMMUNICATION PLATFORM - A system comprising a pair of devices to enable communication between a first person and a second person; a body-suit to be worn by the first person; and a model replica of the body-suit configured to receive the tactile stimuli and/or the electrical stimuli from the second person and to convert the tactile stimuli and/or the electrical stimuli into the electrical signals which are conveyed to the body-suit over a network; wherein the body-suit is configured to replicate the tactile stimuli and/or the electrical stimuli of the model replica and convey the tactile stimuli and/or the electrical stimuli to the first person; and wherein the system allows a human to send a physical sensation of touch remotely to another human. | 2021-07-01 |
20210200702 | QUALITY OF SERVICE POLICY SETS - Disclosed are systems, computer-readable mediums, and methods for managing input/output operations within a system including at least one client and a storage system. A processor receives information regarding allocated input-output operations (IOPS) associated with a client accessing a storage system storing client data. The information includes a number of allocated total IOPS, a number of allocated read IOPS, and a number of allocated write IOPS. The processor also receives a requested number of write IOPS associated with the at least one client's request to write to the storage system. The processor determines a target write IOPS based on the number of allocated total IOPS, the number of allocated write IOPS and the requested number of write IOPS, and executes the determined target write IOPS within a first time period. | 2021-07-01 |
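One plausible reading of the target-write-IOPS determination above is a clamping rule: the request is capped by the write allocation and by whatever headroom the total allocation leaves. The abstract does not give the formula, so this is purely illustrative.

```python
# Hypothetical sketch of determining target write IOPS from QoS allocations.
def target_write_iops(total_alloc, write_alloc, read_used, requested_write):
    headroom = max(total_alloc - read_used, 0)       # unspent total budget
    return min(requested_write, write_alloc, headroom)
```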
20210200703 | QUALITY OF SERVICE CONTROL OF LOGICAL DEVICES FOR A MEMORY SUB-SYSTEM - A processing device in a memory sub-system iteratively processes input/output (I/O) operations corresponding to a plurality of logical devices associated with a memory device. For each of the plurality of logical devices, the processing includes identifying a current logical device, determining one or more I/O operations in queue for the current logical device, and determining a number of operation credits associated with the current logical device. The number of credits is based at least in part on a set of quality of service (QoS) parameters for the current logical device. The processing further includes, responsive to determining that the number of operation credits satisfies a threshold condition, performing the one or more I/O operations for the current logical device and identifying a subsequent logical device of the plurality of logical devices. | 2021-07-01 |
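The iteration above — visit each logical device, check its pending I/O and its QoS-derived credits, and perform work only when the credits satisfy a threshold — can be sketched as a credit-gated pass. The one-credit-per-operation cost is our assumption.

```python
# Sketch of one credit-gated pass over the logical devices.
def process_devices(devices, threshold=1):
    """devices: list of dicts with 'queue' (pending ops) and 'credits'
    (derived from the device's QoS parameters). Returns the ops performed."""
    performed = []
    for dev in devices:                       # identify the current device
        if not dev["queue"]:                  # no I/O operations in queue
            continue
        if dev["credits"] >= threshold:       # credits satisfy the condition
            performed.append(dev["queue"].pop(0))
            dev["credits"] -= 1               # assumed: one credit per operation
    return performed
```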
20210200704 | INPUT/OUTPUT COMMAND REBALANCING IN A VIRTUALIZED COMPUTER SYSTEM - The present disclosure provides new methods and systems for input/output command rebalancing in virtualized computer systems. For example, an I/O command may be received by a rebalancer from a virtual queue in a container. The container may be in a first virtual machine. A second I/O command may be received from a second virtual queue in a second container which may be located in a second virtual machine. The rebalancer may detect a priority of the first I/O command and a priority of the second I/O command. The rebalancer may then assign an updated priority to each I/O command based on a quantity of virtual queues in the virtual machine of origin and a quantity of I/O commands in the virtual queue of origin. The rebalancer may dispatch the I/O commands to a physical queue. | 2021-07-01 |
20210200705 | TERMINAL, TERMINAL PERIPHERAL, SIGNAL TRANSMISSION SYSTEM AND SIGNAL SENDING AND RECEIVING METHOD - Provided are a terminal, a terminal peripheral, a signal transmission system, and a signal sending and receiving method. The terminal includes: a first audio module, which is connected to the USB receptacle in the terminal through an I2S bus channel and is configured to send a signal to the USB receptacle; the USB receptacle is configured to provide a physical connection interface between the terminal and a terminal peripheral. | 2021-07-01 |
20210200706 | High Speed, Parallel Configuration of Multiple Field Programmable Gate Arrays - Representative embodiments are disclosed for a rapid and highly parallel configuration process for field programmable gate arrays (FPGAs). In a representative method embodiment, using a host processor, a first configuration bit image for an application is stored in a host memory; one or more FPGAs are configured with a communication functionality such as PCIe using a second configuration bit image stored in a nonvolatile memory; a message is transmitted by the host processor to the FPGAs, usually via PCIe lines, with the message comprising a memory address and also a file size of the first configuration bit image in the host memory; using a DMA engine, each FPGA obtains the first configuration bit image from the host memory and is then configured using the first configuration bit image. Primary FPGAs may further transmit the first configuration bit image to additional, secondary FPGAs, such as via JTAG lines, for their configuration. | 2021-07-01 |
20210200707 | END-TO-END ISOLATION OVER PCIE - In some examples, a method includes receiving a transaction at an inbound port, the transaction including a requester identification (ID), a traffic class, and a peripheral component interconnect express (PCIe) address. The method includes providing an attribute based at least in part on the traffic class. The method includes providing a context ID based on the attribute and the requester ID. The method includes accessing a region of memory responsive to the transaction, the region of memory corresponding to the context ID. | 2021-07-01 |
20210200708 | IMAGE DISPLAY DEVICE AND PORT ARRANGEMENT - A control device including a USB port of a first interface capable of transmitting an image signal serving as a basis of an image to be displayed by the control device and capable of receiving power, and a port of a second interface capable of receiving power, is provided. The control device is accommodated in a housing, the USB port is disposed at a bottom surface which, given a surface of the housing where the touch panel is disposed as a front surface, is a side surface positioned in a longitudinal direction of the front surface, and the port is disposed at a rear surface opposite the front surface. | 2021-07-01 |
20210200709 | CIRCUITRY APPLIED TO ELECTRONIC DEVICE HAVING USB TYPE-C CONNECTOR AND ASSOCIATED ELECTRONIC DEVICE - A circuitry applied to an electronic device having a Universal Serial Bus (USB) type-C connector is provided. The circuitry includes a transceiver circuit, a physical layer circuit and a processing circuit. In operations of the circuitry, the transceiver circuit is coupled to the USB type-C connector. The physical layer circuit is configured to directly utilize a plurality of first signals from the USB type-C connector as at least one portion of Ethernet signals, and process the first signals to generate a plurality of processed first signals. The processing circuit is configured to process the processed first signals to generate an output signal. | 2021-07-01 |
20210200710 | PROCESSOR FOR CONFIGURABLE PARALLEL COMPUTATIONS - A flexible processor includes numerous configurable processors interconnected by modular interconnection fabric circuits that are configurable to partition the configurable processors into one or more groups, for parallel execution, and to interconnect the configurable processors in any order for pipelined operations. Each configurable processor may include (i) a control circuit; (ii) numerous configurable arithmetic logic circuits; and (iii) configurable interconnection fabric circuits for interconnecting the configurable arithmetic logic circuits. | 2021-07-01 |
20210200711 | System and Method for Configurable Systolic Array with Partial Read/Write - A system is provided that includes a reconfigurable systolic array circuitry. The reconfigurable systolic array circuitry includes a first circuit block comprising one or more groups of processing elements and a second circuit block comprising one or more groups of processing elements. The reconfigurable systolic array circuitry further includes a first bias addition with accumulation circuitry configured to add a matrix bias to an accumulated value, to a multiplication product, or to a combination thereof. The reconfigurable systolic array circuitry additionally includes a first routing circuitry configured to route derivations from the first circuit block into the second circuit block, from the first circuit block into the first bias addition with accumulation circuitry, or into a combination thereof. | 2021-07-01 |
20210200712 | EFFECTIVE DEPLOYMENT OF SPREADSHEETS IN BROWSER ENVIRONMENTS - A file management system may include a file server that performs calculations of a spreadsheet file instance to generate a dataset that includes values in the spreadsheet file instance. The file management system also may include an application operating at a client device that is in communication with the file server via a network. The application may receive, via the network, a version of the dataset comprising the values generated by the calculations performed by the server. The application may visualize a spreadsheet at the user interface. The visualized spreadsheet may display at least a subset of the values. In one case, protected contents of one or more cells in the spreadsheet may be converted to other values when displayed at the user interface. | 2021-07-01 |
20210200713 | SYSTEMS AND METHODS FOR GENERATING A DATA STRUCTURE FROM MULTIPLE BIM FILES - A method includes receiving a first building information model (BIM) file and a second BIM file, the first and second BIM files both associated with a building comprising one or more assets, identifying a first set of BIM objects within the first BIM and a second set of BIM objects within the second BIM, the first set of BIM objects and the second set of BIM object each comprising one or more BIM objects associated with the one or more assets, identifying one or more relationships between objects of the first set of BIM objects and objects of the second set of BIM objects, applying a semantic description to the first set of BIM objects, the second set of BIM objects, and the one or more relationships, and generating a data structure comprising the first set of BIM objects, the second set of BIM objects, and the relationships. | 2021-07-01 |
20210200714 | Apparatus for Accessing Data from a Database as a File - An apparatus is provided for allowing data to be operated upon by external file-based programs that are designed to work on files in a file system. The invention provides for an apparatus comprising a processor and memory, configured to perform enrolling File I/O access to a data object in a database for a client application by retrieving from the database a set of data describing a data object, generating a filename using the retrieved data and a file extension supplied by the client application, and correlating the filename with the data object. Another apparatus is provided, comprising a processor and memory, configured to perform receiving, from a program, File I/O requests, specifying a filename, that perform File I/O actions on a file in a file system, translating the received File I/O requests into Data Operations that perform equivalent File I/O actions on data in a database, and executing the Data Operations on a data object in the database that has been enrolled and correlated with the filename. | 2021-07-01 |
20210200715 | REPLACING DATABASE TABLE JOIN KEYS WITH INDEX KEYS - Disclosed are embodiments for replacing database table join keys with index keys. In one embodiment, a method is disclosed comprising: receiving, by a processor, annotation data, the annotation data comprising a set of rows; retrieving, by the processor, a root dataset, the root dataset stored in one or more files; generating, by the processor, a row identifier for each row in the set of rows, the row identifier storing a plurality of fields enabling alignment of a respective row in the annotation data to a corresponding row in the root dataset; generating, by the processor, an annotation dataset, the annotation dataset comprising the set of rows and corresponding row identifiers; and writing, by the processor, the annotation dataset to at least one file, the at least one file separate from the one or more files. | 2021-07-01 |
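The row-identifier approach above can be illustrated with a minimal sketch: each root-dataset row gets an identifier recording which file it lives in and its offset within that file, and annotation rows are aligned positionally. The names (`RowId`, `make_row_ids`) and the two-field identifier layout are illustrative assumptions, not taken from the application.

```python
# Sketch: align annotation rows to a root dataset via row identifiers
# instead of join keys. RowId's fields are an assumed minimal layout.
from dataclasses import dataclass

@dataclass(frozen=True)
class RowId:
    file_index: int   # which root-dataset file the row lives in
    row_offset: int   # row position within that file

def make_row_ids(root_files):
    """Assign a RowId to every row of a root dataset stored across files."""
    ids = []
    for file_index, rows in enumerate(root_files):
        for row_offset in range(len(rows)):
            ids.append(RowId(file_index, row_offset))
    return ids

# Root dataset stored in two files; annotation rows align positionally.
root_files = [["alice", "bob"], ["carol"]]
annotations = [{"label": "A"}, {"label": "B"}, {"label": "C"}]
annotation_dataset = list(zip(make_row_ids(root_files), annotations))
```

The annotation dataset can then be written to its own file, separate from the root dataset's files, as the claim describes.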
20210200716 | PORTABLE SECURE DATA DELETION DEVICE AND METHOD FOR SECURE DATA DELETION - An apparatus and method for secure data deletion. The apparatus includes an enclosure, a connector for communicatively coupling to a computing device, a storage medium disposed within the enclosure and communicatively coupled to the connector, and self-contained software disposed on the storage medium. The software is configured to execute, on a processor, the steps of selecting for deletion files disposed on storage media accessible on the computing device, and performing a secure deletion action on the files. The software is further configured to execute the step of performing a deep scan for each file of the selected files. The deep scan includes scanning storage media accessible on the computing device for files similar to the file, wherein files similar to the file are reformatted, duplicate, edited, changed, modified or similar files. | 2021-07-01 |
20210200717 | GENERATING FULL METADATA FROM PARTIAL DISTRIBUTED METADATA - Disclosed are embodiments for generating a dataset metadata file based on partial metadata files. In one embodiment, a method is disclosed comprising receiving data to write to disk, the data comprising a subset of a dataset; writing a first portion of the data to disk; detecting a split boundary after writing the first portion; recording metadata describing the split boundary; continuing to write a remaining portion of the data to disk; and after completing the writing of the data to disk: generating a partial metadata file for the data, the partial metadata file including the split boundary, and transmitting the partial metadata file to a partial metadata collector. | 2021-07-01 |
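The collection step can be sketched as follows: each writer emits a partial metadata record with its row count and split boundaries, and a collector merges the partials into full dataset metadata by offsetting each writer's boundaries. The field names and boundary scheme are assumptions for illustration only.

```python
# Hedged sketch: writers record split boundaries while writing their
# portion; a collector merges partial metadata into full metadata.
def write_portion(rows, split_every):
    """Write rows, recording a split boundary every `split_every` rows."""
    boundaries = list(range(split_every, len(rows) + 1, split_every))
    return {"row_count": len(rows), "split_boundaries": boundaries}

def collect(partials):
    """Merge partial metadata files, offsetting each writer's boundaries."""
    full = {"row_count": 0, "split_boundaries": []}
    for part in partials:
        offset = full["row_count"]
        full["split_boundaries"] += [offset + b for b in part["split_boundaries"]]
        full["row_count"] += part["row_count"]
    return full

partials = [write_portion(list(range(5)), 2), write_portion(list(range(4)), 2)]
full_meta = collect(partials)
```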
20210200718 | METHOD AND SYSTEM TO PREFETCH DATA IN DATABASES - The present disclosure provides systems and methods for prefetching data in databases. One method for prefetching data in a database comprises receiving a database query on the database, determining one or more sets of adjacent columns accessed by the database query, and for each set of adjacent columns of the one or more determined sets, prefetching data in the adjacent columns. | 2021-07-01 |
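The "sets of adjacent columns" step can be sketched as grouping the columns a query touches into maximal runs that are adjacent in the table's physical column order, so each run can be prefetched in one pass. The function name and storage-order model are illustrative assumptions.

```python
# Sketch: group accessed columns into runs adjacent in storage order,
# so each run can be prefetched as a single contiguous read.
def adjacent_column_sets(table_columns, accessed):
    """Return maximal runs of accessed columns adjacent in `table_columns`."""
    positions = sorted(table_columns.index(c) for c in accessed)
    runs, current = [], [positions[0]]
    for p in positions[1:]:
        if p == current[-1] + 1:
            current.append(p)       # extend the current adjacent run
        else:
            runs.append(current)    # gap: start a new run
            current = [p]
    runs.append(current)
    return [[table_columns[p] for p in run] for run in runs]

columns = ["id", "name", "email", "age", "city"]
runs = adjacent_column_sets(columns, {"name", "email", "city"})
```

Here a query touching `name`, `email`, and `city` yields two prefetch sets, since `name` and `email` are adjacent but `city` is separated by `age`.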
20210200719 | COLLABORATIVE DOCUMENT ACCESS RECORDING AND MANAGEMENT - A method of providing user access history for a collaborative document includes receiving, by a server, a first request for the collaborative document from a client device of a user of a plurality of users that have permission to access the collaborative document; providing the collaborative document to the client device for presentation to the user in a user interface on the client device; determining whether a collaborator type of the user matches a predefined collaborator type; responsive to determining that the collaborator type of the user matches the predefined collaborator type: creating a first user access history for the collaborative document based on accesses of the collaborative document by one or more of the plurality of users, and providing the first user access history for the collaborative document to the client device for display within a consolidated view of the user interface presenting the collaborative document. | 2021-07-01 |
20210200720 | BINDING LOCAL DEVICE FOLDERS TO A CONTENT MANAGEMENT SYSTEM FOR SYNCHRONIZATION - The present technology can move operating system folders into a sync folder of a cross platform content management system, and redirect the operating system to look for the OS folders in the sync folder. The present technology also provides an invariant checker to make sure that another application has not moved the OS folders after they have been placed in the sync folder, and provides solutions when the OS folders are moved out of the sync folder of the content management system. Additionally, when OS folders for multiple client devices are in the sync folder on the content management system, the present technology can provide a mechanism to make the content items in an OS folder on a first client device also sync into an OS folder on a second client device. | 2021-07-01 |
20210200721 | LOCK MANAGEMENT ASSOCIATED WITH A KEY-VALUE DATABASE SYSTEM - A first data structure lock to access a first data structure of a first set of data structures to perform an operation associated with a transaction is acquired. The operation associated with the transaction is executed, wherein the operation is one of inserting the transaction into the first data structure or removing the transaction from the first data structure. An oldest active transaction of the first data structure is identified. A globally oldest active transaction of the first set of data structures is determined in view of the oldest active transaction. A second set of data structures is accessed, the second set of data structures including information associated with completed transactions to identify a set of data locks associated with completed transactions each having a transaction completion identifier that satisfies a condition when compared to a transaction start identifier associated with the globally oldest active transaction. The set of data locks is released. | 2021-07-01 |
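The release condition above can be sketched in a few lines: a completed transaction's data locks become releasable once its completion identifier precedes the start identifier of the globally oldest active transaction, since no active transaction can still observe its effects. The identifier scheme and function name are assumptions for illustration.

```python
# Illustrative sketch of the lock-release condition. Identifiers are
# assumed to be monotonically increasing integers.
def releasable_locks(active_start_ids, completed):
    """completed: list of (completion_id, locks). A completed transaction's
    locks are releasable once its completion id precedes the start id of
    the globally oldest active transaction."""
    oldest_active = min(active_start_ids) if active_start_ids else float("inf")
    released = []
    for completion_id, locks in completed:
        if completion_id < oldest_active:
            released.extend(locks)
    return released

active = [120, 95, 140]                          # start ids of active transactions
completed = [(90, ["k1", "k2"]), (130, ["k3"])]  # completion id, held locks
locks = releasable_locks(active, completed)
```

The transaction completed at 90 predates the globally oldest active transaction (start id 95), so its locks are released; the one completed at 130 must keep its locks.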
20210200722 | FACILITATING OUTLIER OBJECT DETECTION IN TIERED STORAGE SYSTEMS - Facilitating outlier object detection in tiered storage systems is provided herein. A system can comprise a processor and a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations. The operations can comprise determining respective parameters associated with objects of a group of objects of a tiered storage system. The respective parameters can comprise at least one of a size, an access percentage, or a cost. The operations also can comprise using the respective parameters associated with the objects of the group of objects as inputs and performing data clustering on the group of objects, resulting in at least one data cluster. Further, the operations can comprise selecting at least one object from the group of objects as at least one outlier object within the tiered storage system based on the at least one data cluster. | 2021-07-01 |
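A toy version of the clustering step might treat each object's (size, access percentage, cost) triple as a point and flag objects far from the bulk of the data. For brevity this sketch uses a single centroid and a distance threshold as a stand-in for the data-clustering step; all names and thresholds are illustrative assumptions.

```python
# Toy sketch: flag objects whose parameter vector lies far from the
# centroid of the group, as a stand-in for full data clustering.
def find_outliers(objects, threshold=1.5):
    """objects: {name: (size, access_pct, cost)}. Flag points whose distance
    from the centroid exceeds `threshold` times the mean distance."""
    points = list(objects.values())
    dims = len(points[0])
    centroid = [sum(p[d] for p in points) / len(points) for d in range(dims)]
    def dist(p):
        return sum((p[d] - centroid[d]) ** 2 for d in range(dims)) ** 0.5
    mean_dist = sum(dist(p) for p in points) / len(points)
    return [name for name, p in objects.items() if dist(p) > threshold * mean_dist]

objects = {
    "obj_a": (10, 0.5, 1.0),
    "obj_b": (12, 0.4, 1.1),
    "obj_c": (11, 0.6, 0.9),
    "obj_d": (500, 0.1, 9.0),   # far larger and costlier than the rest
}
outliers = find_outliers(objects)
```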
20210200723 | ACCESSING OBJECTS IN HOSTED STORAGE - A hosted storage system receives a storage request that includes a single object and conforms to an API implemented by the hosted storage system. The API is designed to only support a single object in a storage request. The hosted storage system, in response to determining that the single object is an archive file, extracts each of the bundled files from the archive file and stores each of the extracted files in the hosted storage system such that each of the extracted files is separately accessible by the client system over the network. | 2021-07-01 |
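The server-side behavior can be sketched with the standard-library `zipfile` module: a single uploaded object that turns out to be an archive is unbundled so each member becomes a separately addressable stored object. The `store` dict and `handle_upload` function are stand-ins for the hosted storage system, not part of the application.

```python
# Sketch: an uploaded object that is an archive is unbundled so each
# member file is stored, and addressable, separately.
import io
import zipfile

def handle_upload(store, name, payload):
    """Store `payload`; if it is a zip archive, store each member separately."""
    if zipfile.is_zipfile(io.BytesIO(payload)):
        with zipfile.ZipFile(io.BytesIO(payload)) as archive:
            for member in archive.namelist():
                store[member] = archive.read(member)
    else:
        store[name] = payload

# Build an in-memory zip with two files and upload it as one object.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("a.txt", b"alpha")
    z.writestr("b.txt", b"beta")

store = {}
handle_upload(store, "bundle.zip", buf.getvalue())
```

After the upload, `a.txt` and `b.txt` are each retrievable on their own, even though the client's request conformed to a single-object API.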
20210200724 | METHOD AND CONTROL SYSTEM FOR CONTROLLING AND/OR MONITORING DEVICES - Complex control instruction chains in a blockchain are provided for a specific task of controlling devices to be managed in a simple manner, which permit a prescribed validity to be assigned for a specific task of a blockchain-based device control, the validity being defined by the life cycle (e.g. the period of use) of a device, for example. | 2021-07-01 |
20210200725 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR DATABASE CHANGE MANAGEMENT - Database servers may maintain a database according to a database schema. A database change management system can include a profile service configured to collect database profile information and a simulation service configured to receive a set of changes to be simulated for the database and simulate an application of the set of changes to the database. A forecast service can be configured to receive a result of a simulation from the simulation service and database profile information and generate a report indicative of a prediction of a failure or success of an implementation of the set of changes. | 2021-07-01 |
20210200726 | SYSTEM AND METHOD FOR PARALLEL SUPPORT OF MULTIDIMENSIONAL SLICES WITH A MULTIDIMENSIONAL DATABASE - A system and method are described for use with a multidimensional database computing environment to provide support for parallel calculation of multidimensional slices. Users are able to specify a set of slices and a number of parallel threads to employ. The multidimensional database environment generates tasks that include calculations and/or aggregations, which are able to be executed by the system in parallel. Also described herein are mechanisms of utilizing variables within the calculations performed by respective threads, and combining thread variables after execution. | 2021-07-01 |
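The thread-variable mechanism can be sketched minimally: each thread aggregates its own slice into a per-thread variable, and the per-thread results are combined only after all threads join. The slices here are flattened lists and the aggregation is a plain sum; both are simplifying assumptions for illustration.

```python
# Sketch: per-thread variables hold each slice's aggregation; the
# variables are combined after all threads finish.
import threading

def aggregate_slices(slices):
    results = [None] * len(slices)          # one thread variable per thread
    def worker(i, cells):
        results[i] = sum(cells)             # per-slice aggregation
    threads = [threading.Thread(target=worker, args=(i, s))
               for i, s in enumerate(slices)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)                     # combine thread variables

slices = [[1, 2, 3], [4, 5], [6]]           # three slices, flattened
total = aggregate_slices(slices)
```

Keeping one variable per thread avoids any locking during the calculation phase; contention only arises, trivially, at the final combine step.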
20210200727 | METHOD AND SYSTEM FOR EXECUTING WORKLOAD ORCHESTRATION ACROSS DATA CENTERS - Methods, computer program products, computer systems, and the like providing for executing orchestration operations across data center infrastructures are disclosed. In one embodiment, the method includes analyzing a property graph to determine whether a node representing at least one entity in a first data center infrastructure has a contact point with a node representing one or more entities representing one or more core physical or hardware-based resources in a second data center infrastructure. If a contact point exists between nodes associated with the first and second data centers, the orchestration operation is executed on the at least one entity in the first data center and a corresponding orchestration operation is executed on at least another entity in the second data center infrastructure represented at a contact point in the dependency relationships of the property graph. | 2021-07-01 |
20210200728 | DATABASE-DOCUMENTATION PROPAGATION VIA TEMPORAL LOG BACKTRACKING - Aspects of the present disclosure provide techniques for database documentation propagation. Embodiments include scanning a log comprising a plurality of database queries to identify one or more database queries of the plurality of database queries, the one or more database queries being associated with generating a new table of a database based on information in an existing table of the database. Embodiments include generating, based on the one or more database queries identified during the scanning, a directed acyclic graph (DAG) comprising: a first vertex representing the existing table; a second vertex representing the new table; and a directed edge connecting the first vertex to the second vertex. Embodiments include obtaining documentation associated with the existing table. Embodiments include propagating, based on the DAG, at least a subset of the documentation associated with the existing table to the new table. | 2021-07-01 |
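The propagation step can be sketched with a dict-based DAG: edges map an existing table to tables derived from it (as recovered from the query log), and documentation attached to a source table is copied forward along directed edges, transitively. The data layout and function name are illustrative assumptions.

```python
# Sketch: propagate table documentation forward along a DAG recovered
# from logged CREATE-TABLE-AS-style queries.
def propagate_docs(dag, docs):
    """dag: {source_table: [derived_tables]}; docs: {table: documentation}.
    Returns docs extended so derived tables inherit their source's docs."""
    result = dict(docs)
    pending = [t for t in dag if t in result]
    while pending:
        source = pending.pop()
        for derived in dag.get(source, []):
            if derived not in result:
                result[derived] = result[source]   # inherit along the edge
                pending.append(derived)
    return result

dag = {"orders": ["orders_2021"], "orders_2021": ["orders_2021_eu"]}
docs = {"orders": "One row per customer order."}
all_docs = propagate_docs(dag, docs)
```

Documentation written once for `orders` reaches `orders_2021_eu` two edges away, which is the payoff of backtracking the log into a graph rather than documenting each derived table by hand.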
20210200729 | EVENTUAL CONSISTENCY IN A DEDUPLICATED CLOUD STORAGE SYSTEM - One example method includes receiving a write request that includes a data structure version to be written, wherein the data structure version is associated with a unique identifier, storing the data structure version in association with the unique identifier, receiving a read request for a most recent version of the data structure and, when the stored data structure version is not the most recent version of the data structure, examining respective unique identifiers of each of a group of other stored data structure versions to determine which stored data structure version is the most recent. Finally, the example method includes returning the most recent data structure version, notwithstanding that one or more other data structure versions existed at the time that the read request was received. | 2021-07-01 |
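The read path above can be sketched under one simplifying assumption: that the unique identifiers are monotonically increasing, so a reader confronted with several stored versions simply returns the one with the highest identifier. That identifier scheme is an assumption for illustration; the application only requires identifiers whose comparison reveals recency.

```python
# Sketch: under eventual consistency, pick the most recent of several
# stored versions by comparing their unique identifiers (assumed here
# to be monotonically increasing integers).
def read_latest(stored_versions):
    """stored_versions: {unique_id: data}. Return the most recent version."""
    latest_id = max(stored_versions)
    return stored_versions[latest_id]

stored_versions = {1: "v1", 3: "v3", 2: "v2"}   # replicas may hold stale copies
latest = read_latest(stored_versions)
```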
20210200730 | DATABASES TO STORE DEVICE HISTORY DATA - An example apparatus includes a network interface to receive enrollment data from a device, wherein the device is identified by a device identifier. The apparatus further includes a memory storage unit connected to the network interface. The memory storage unit receives device history data associated with the device identifier while the device is enrolled. The memory storage unit is also to maintain a database to store the device history data associated with the device identifier. The apparatus also includes a processor to receive a de-enrollment command associated with the device via the network interface. The de-enrollment command causes the processor to modify the device identifier stored in the database. | 2021-07-01 |
20210200731 | HORIZONTAL SKIMMING OF COMPOSITE DATASETS - Disclosed are embodiments for horizontally skimming composite datasets. In one embodiment, a method is disclosed comprising receiving a script, the script including commands to access a composite dataset; pre-processing the script to identify a set of columns; loading a metadata file associated with the composite dataset file; parsing the metadata file to identify one or more datasets that include a column in the set of columns; loading data from the one or more datasets; and executing the script on the one or more datasets. | 2021-07-01 |
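The skimming flow can be sketched in two steps: crudely detect which columns a script touches, then consult the composite dataset's metadata to load only the member datasets holding those columns. The script format, metadata layout, and function names are all assumptions for illustration.

```python
# Sketch: pre-process a script for column names, then select only the
# member datasets of a composite dataset that hold those columns.
import re

def columns_in_script(script, known_columns):
    """Crude column detection: any known column name appearing in the script."""
    tokens = set(re.findall(r"[A-Za-z_]\w*", script))
    return tokens & set(known_columns)

def datasets_to_load(metadata, needed_columns):
    """metadata: {dataset_name: [columns]}. Keep datasets with needed columns."""
    return sorted(name for name, cols in metadata.items()
                  if set(cols) & needed_columns)

metadata = {"ds_users": ["user_id", "name"], "ds_clicks": ["user_id", "url"],
            "ds_billing": ["user_id", "amount"]}
script = "result = select(name, url)"
needed = columns_in_script(script, {c for cols in metadata.values() for c in cols})
loaded = datasets_to_load(metadata, needed)
```

Since the script never touches `amount`, `ds_billing` is skipped entirely, which is the point of skimming horizontally instead of loading the whole composite dataset.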
20210200732 | TREE-LIKE METADATA STRUCTURE FOR COMPOSITE DATASETS - Disclosed are embodiments for generating metadata files for composite datasets. In one embodiment, a method is disclosed comprising generating a tree representing a plurality of datasets; parsing the tree into an algebraic representation of the tree; identifying a plurality of terms in the algebraic representation, each term in the terms comprising at least two factors, each of the two factors associated with a dataset in the plurality of datasets; generating a metadata object of the plurality of terms; serializing the metadata object to generate serialized terms; and storing the serialized terms in a metadata file associated with the plurality of datasets. | 2021-07-01 |