31st week of 2019 patent application highlights part 52 |
Patent application number | Title | Published |
20190235958 | UTILIZING STORAGE UNIT LATENCY DATA IN A DISPERSED STORAGE NETWORK - A method for execution by a dispersed storage and task (DST) processing unit includes generating a first access request for transmission via a network to a first one of a plurality of storage units in a dispersed storage network (DSN). A first access response is received via the network from the first one of the plurality of storage units that includes a first access time duration. Access duration data is updated to include the first access time duration received from the first one of the plurality of storage units. A subset of storage units is selected from the plurality of storage units based on comparing a plurality of access time durations corresponding to the plurality of storage units included in the access duration data to perform a second data access. At least one second access request is generated for transmission via the network to the subset of storage units. | 2019-08-01 |
20190235959 | Proactive Node Preference Changing in a Storage Controller - Disclosed is a computer-implemented method in a storage controller of changing a preferred node from a first node to a second node, comprising: receiving a notification of a request to remove the first node; reporting ports on the first node as non-preferred instead of reporting them as preferred; reporting ports on the second node as preferred instead of reporting them as non-preferred; compiling a target port groups report for each of the first node and the second node; and raising an asymmetric access state changed unit attention notification. | 2019-08-01 |
20190235960 | MOTOR DRIVING DEVICE AND DETERMINATION METHOD - A motor driving device includes: a rectifier circuit for rectifying an AC input voltage supplied from an AC power supply to a DC voltage; a smoothing capacitor for smoothing the rectified DC voltage; a relay that outputs a contact signal when the input voltage is input to the rectifier circuit from the AC power supply; an input voltage detector for detecting the input voltage; a capacitor voltage detector for detecting the capacitor voltage; a volatile first storage; a nonvolatile second storage; and a backup start determiner for determining whether or not to start a backup operation of transferring the information stored in the first storage to the second storage, based on at least one of the contact signal, the input voltage and the capacitor voltage. | 2019-08-01 |
20190235961 | IDENTIFYING REDUNDANT NODES IN A KNOWLEDGE GRAPH DATA STRUCTURE - A method, computer system, and computer program product for eliminating a redundant node from a knowledge graph is provided. A structural analysis of a knowledge graph is performed by determining that two nodes have a similar structure. An empirical analysis is performed by determining a search result correlation of potentially redundant nodes, said search result correlation comprising a correlation of search result nodes generated from different search queries to said knowledge graph or a correlation of search results due to selected search result nodes in subtrees of said potentially redundant nodes. Results of said structural analysis and said empirical analysis are combined to generate a redundancy confidence level value for two said nodes. One of said two nodes is determined as redundant. One of said two redundant nodes is removed from the knowledge graph. | 2019-08-01 |
20190235962 | Management of changed-block bitmaps - An apparatus includes an interface and a processor. The interface is configured to communicate with a computing system in which one or more workloads issue storage commands for execution in a storage volume comprising multiple storage blocks. The computing system continually updates a data structure that tracks which of the storage blocks of the storage volume have changed due to the storage commands. The processor is configured, in response to a request to create a copy of the storage volume, to instruct the computing system to (i) create a copy of the data structure and reset the data structure, (ii) create the copy of the storage volume, while continuing to execute the storage commands and update the data structure, and (iii) after the copy of the storage volume is completed, merge the data structure into the copy of the data structure. | 2019-08-01 |
20190235963 | FILE CORRUPTION RECOVERY IN CONCURRENT DATA PROTECTION - An incremental backup system that performs the following (not necessarily in the following order): (i) making a plurality of time-ordered journal entries; (ii) determining that a corruption condition exists; (iii) responsive to a corruption condition, constructing a first incremental mirror data set that reflects a backup data set and all journal entries up to a first corrupted journal entry which is the earliest in time journal entry, of the plurality of journal entries, that is a corrupted journal entry; (iv) responsive to a corruption condition, constructing a second incremental mirror data set that reflects the backup data set and all journal entries up to the first corrupted journal entry; and (v) checking for corruption in the first and second incremental mirror data sets to determine the latest uncorrupted version of the data set. | 2019-08-01 |
20190235964 | RECOVERING A FAILED CLUSTERED SYSTEM USING CONFIGURATION DATA FRAGMENTS - A computer-implemented method according to one embodiment includes identifying one or more accessible server nodes within a plurality of nodes of a failed clustered system, retrieving a plurality of fragments of configuration data from the one or more accessible server nodes, and constructing a backup state for the failed clustered system, utilizing the plurality of fragments of the configuration data. | 2019-08-01 |
20190235965 | MEMORY DATA PRESERVATION SOLUTION - Systems and methods are provided for preserving data in memory modules of a computer system. An exemplary method can detect that a software preservation process is needed for a computer system, and thereafter performs the software preservation process. The software preservation process can begin by detecting the initiation of a reduced power mode in a computer system. A syncing process of data contents can then be initiated in a processing unit of the computer system. Next, the computer system can automatically save data contents of a memory module. The software preservation process is completed by turning off a power supply unit of the computer system. | 2019-08-01 |
20190235966 | Methods and Systems for Energy Efficient Data Backup - Methods and systems of awakening one or more clients for performance of data backup are disclosed. According to some embodiments, the method selects one or more clients for data backup. The method initiates a wake-up call for each of the selected clients. The method determines whether one or more of the selected clients are awake. In response to a determination that the one or more of the selected clients are awake, the method instructs the awakened selected clients to provide data for backup. | 2019-08-01 |
20190235967 | Effective Data Change Based Rule to Enable Backup for Specific VMware Virtual Machine - One embodiment is related to a method for backing up virtual machines, comprising: determining whether virtual machines comprised in a backup policy group are to be backed up based on a present time and a backup schedule associated with the backup policy group; in response to a determination that the virtual machines comprised in the backup policy group are to be backed up, determining a data change ratio since a previous backup for each virtual machine comprised in the backup policy group; and backing up each virtual machine comprised in the backup policy group that has a data change ratio since the previous backup that meets a data change threshold associated with the backup policy group. | 2019-08-01 |
20190235968 | ARCHIVING NAS SERVERS TO THE CLOUD - A technique for archiving NAS (network attached storage) servers includes replicating multiple locally-backed volumes, which support respective file systems of a NAS server, to respective cloud-backed volumes backed by a cloud-based data store. After replication has updated the cloud-backed volumes with contents from the locally-backed volumes, the technique further includes performing a group snapshot operation on the cloud-backed volumes. The group snapshot operation creates a point-in-time version of the cloud-backed volumes, which provides a replica of the NAS server archived in the cloud. | 2019-08-01 |
20190235969 | Systems and Method to Make Application Consistent Virtual Machine Backup Work in Private Network - One embodiment is related to a method for backing up application transaction data through a virtual backup proxy node, comprising: mounting an application transaction data disk image at the virtual backup proxy node, the application transaction data disk image comprising application transaction data generated by an application running on a virtual machine; and copying the transaction data disk image to a storage device for backup protection through a first network. | 2019-08-01 |
20190235970 | DISASTER RECOVERY REHEARSAL OF A WORKLOAD - Examples disclosed herein relate to performing a disaster recovery rehearsal of a workload in a three-datacenter topology. A workload may be selected on a computing system at a first datacenter location of a three-datacenter topology, for performing a disaster recovery rehearsal. The three-datacenter topology may comprise a first datacenter location, a second datacenter location and a third datacenter location. At least one of the second datacenter location or the third datacenter location may be selected for performing the disaster recovery rehearsal. A configuration of the workload may be cloned to generate a cloned workload. A resource may be identified in a selected datacenter location for performing the disaster recovery rehearsal. The cloned workload may be applied to the resource in the selected datacenter location, and a result of running the cloned workload on the resource may be generated. The computing system may receive the result from the selected datacenter location. | 2019-08-01 |
20190235971 | Creation of Virtual Machine Packages Using Incremental State Updates - Described herein are systems and methods that manage machine backups, including the creation of virtual machine packages sufficient to instantiate virtual machines corresponding to the backups. In one aspect, a compute infrastructure includes many machines, which may be either physical or virtual. From time to time, snapshots of the states of these target machines are pulled and saved. Virtual machine packages corresponding to these snapshots are also created. A virtual machine package can be used to instantiate a virtual machine (VM) emulating the target machine with the saved state on a destination virtual machine platform. At some point, the initial VM package for a target machine is created by converting the snapshot to a VM package. However, this may take a long time. Later VM packages can instead be created by updating a prior VM package according to differences between the corresponding snapshots, rather than performing the full conversion process. | 2019-08-01 |
20190235972 | RESTORING NAS SERVERS FROM THE CLOUD - A technique for restoring NAS (network attached storage) servers that have been archived to the cloud includes querying, by a local data storage system, a cloud-based data store to identify a set of cloud-backed volumes that belong to an archived NAS server to be restored. The technique further includes rendering the identified cloud-backed volumes as respective writeable LUNs (Logical UNits), accessing the writeable LUNs by the local data storage system, and processing data on the writeable LUNs to operate file systems of the NAS server that are stored in the writeable LUNs. | 2019-08-01 |
20190235973 | AUTOMATED RANSOMWARE IDENTIFICATION AND RECOVERY - A method for automated ransomware identification includes receiving a first series of data items for backup from a host system, identifying, using a heuristic, a first characteristic of the first series of data items, receiving a second series of data items for backup from the host system, identifying, using the heuristic, a second characteristic of the second series of data items, detecting that the second characteristic differs from the first characteristic in a manner consistent with a ransomware infection, and invoking a recovery procedure responsive to the detecting. | 2019-08-01 |
20190235974 | Transaction processing system, recovery subsystem and method for operating a recovery subsystem - A transaction processing system comprises a transaction processing (TP) subsystem ( | 2019-08-01 |
20190235975 | REPAIRING PARTIALLY COMPLETED TRANSACTIONS IN FAST CONSENSUS PROTOCOL - In an approach, a processor detects a transmission control protocol disconnection of a first distributed storage unit from a distributed storage network, wherein the distributed storage network comprises a set of distributed storage units. A processor identifies a transaction, wherein: the transaction is not in a final state, the transaction is a first proposal, from the first distributed storage unit, for the set of distributed storage units to store a dataset with a first revision number within the distributed storage network, and the dataset is broken into one or more data pieces to be written on the set of distributed storage units of the distributed storage network that approve the proposal. A processor identifies a timestamp of the transaction. A processor determines a stage the transaction has reached. A processor places the transaction in a final state based on the determined stage the transaction has reached. | 2019-08-01 |
20190235976 | MEMORY SYSTEM AND METHOD OF OPERATING THE SAME - Provided herein may be a memory system and a method of operating the memory system. The memory system may include: a memory device comprising a plurality of semiconductor devices each including a plurality of memory blocks; and a controller configured to generate at least one or more descriptors in response to a request from a host, and control internal operations of the plurality of semiconductor devices based on the respective at least one or more descriptors. The controller may generate and manage at least one or more descriptor indexes respectively corresponding to the at least one or more descriptors. When a failure occurs during the internal operations of the plurality of semiconductor devices, at least one descriptor corresponding to a memory block in which the failure has occurred is searched for using the at least one or more descriptor indexes. | 2019-08-01 |
20190235977 | MEMORY SYSTEM, A METHOD OF DETERMINING AN ERROR OF THE MEMORY SYSTEM AND AN ELECTRONIC APPARATUS HAVING THE MEMORY SYSTEM - A memory system including: a memory apparatus including a buffer die, core dies disposed on the buffer die, channels and a through silicon via configured to transmit a signal between the buffer die and at least one of the core dies; a memory controller configured to output a command signal and an address signal to the memory apparatus, to output a data signal to the memory apparatus and to receive the data signal from the memory apparatus; and an interposer including channel paths for connecting the memory controller and the channels, wherein the memory apparatus further includes a path selector for changing a connection state between the channels and channel paths, and when an error is detected in a first connection state between the channels and the channel paths, the path selector changes the first connection state to a second connection state. | 2019-08-01 |
20190235978 | Data Protection Cluster System Supporting Multiple Data Tiers - A hierarchical multi-level heterogeneous cluster data system having processing nodes at each of a plurality of cluster levels configured for different data tiers having different availability, accessibility and protection requirements. Each cluster level comprises groups of processing nodes arranged into a plurality of failover domains of interconnected nodes that exchange heartbeat signals to indicate that the nodes are alive and functioning. A master node of each failover domain is connected to a master node of a parent failover domain for exchanging heartbeat signals to detect failures of nodes at lower cluster levels. Upon a network partition, the nodes of the failover domain may be merged into another failover domain at the same or a higher cluster level to continue providing data services. The cluster has a global namespace across all cluster levels, so that nodes that are moved to different failover domains can be accessed using the same pathname. | 2019-08-01 |
20190235979 | SYSTEMS AND METHODS FOR PERFORMING COMPUTING CLUSTER NODE SWITCHOVER - The disclosed computer-implemented method for performing computing cluster node switchover may include (i) detecting an indication to switch an assignment of a transaction task away from a first network node in a computing cluster, (ii) executing, in response to detecting the indication, by each network node in a set of multiple network nodes within the computing cluster, a switchover algorithm to select a second network node, (iii) switching over the assignment of the transaction task from the first network node to the second network node, and (iv) performing, by the second network node, at least part of a remainder of the transaction task in response to switching over the assignment of the transaction task from the first network node to the second network node. Various other methods, systems, and computer-readable media are also disclosed. | 2019-08-01 |
20190235980 | CREATING DISTRIBUTED STORAGE DURING PARTITIONS - A system and method are provided for processing to create a distributed volume in a distributed storage system during a failure that has partitioned the distributed volume (e.g. an array failure, a site failure and/or an inter-site network failure). In an embodiment, the system described herein may provide for continuing distributed storage processing in response to I/O requests from a source by creating the local parts of the distributed storage during the failure, and, when the remote site or inter-site network returns to availability, the remaining part of the distributed volume is automatically created. The system may include an automatic rebuild to make sure that all parts of the distributed volume are consistent again. The processing may be transparent to the source of the I/O requests. | 2019-08-01 |
20190235981 | METHOD AND SYSTEM FOR FUNCTION-SPECIFIC TIME-CONFIGURABLE REPLICATION OF DATA MANIPULATING FUNCTIONS - The system ( | 2019-08-01 |
20190235982 | SYSTEMS AND METHODS FOR DETECTING AND REMOVING ACCUMULATED DEBRIS FROM A COOLING AIR PATH WITHIN AN INFORMATION HANDLING SYSTEM CHASSIS ENCLOSURE - Systems and methods are provided that may be implemented to detect impaired flow of cooling air within a chassis enclosure of an information handling system during system operation, and to implement a diagnostic or system boot mode to reverse direction of cooling air flow through the chassis enclosure after such detection of impeded cooling air flow so as to remove any dust or other accumulated debris that is causing the impeded cooling air flow. | 2019-08-01 |
20190235983 | EXPOSING AN INDEPENDENT HARDWARE MANAGEMENT AND MONITORING (IHMM) DEVICE OF A HOST SYSTEM TO GUESTS THEREON - The technology disclosed herein enables a guest executing in a host of a host computing system to access an IHMM device of the host computing system. In a particular embodiment, a method provides, in the host, providing a virtualized IHMM device to a guest IHMM device driver in the guest and exchanging IHMM information between the guest IHMM device driver and the virtualized IHMM device. The method further provides translating the IHMM information between the virtualized IHMM device and a host IHMM device driver on the host. The host IHMM device driver interacts with the host IHMM device based on the IHMM information. | 2019-08-01 |
20190235984 | SYSTEMS AND METHODS FOR PROVIDING PREDICTIVE PERFORMANCE FORECASTING FOR COMPONENT-DRIVEN, MULTI-TENANT APPLICATIONS - A method for providing performance data is provided. The method detects a current composition and layout of a graphical user interface (GUI), by a processor configured to present the GUI via a display device communicatively coupled to the processor, wherein the current composition and layout comprises reusable software components; identifies, by the processor, performance characteristics associated with historical activity of a user and with each of the reusable software components of the current composition and layout; creates a statistical forecasting model, by the processor, based on the performance characteristics; generates a performance score based on the statistical forecasting model, by the processor, wherein the performance score indicates a loading time of the GUI; and presents the performance score, by the display device communicatively coupled to the processor. | 2019-08-01 |
20190235985 | HYBRID INSTRUMENTATION FRAMEWORK FOR MULTICORE LOW POWER PROCESSORS - Techniques are provided for redundant execution by a better processor for intensive dynamic profiling after initial execution by a constrained processor. In an embodiment, a system of computer(s) receives a request to profile particular runtime aspects of an original binary executable. Based on the particular runtime aspects and without accessing source logic, the system statically rewrites the original binary executable into a rewritten binary executable that invokes telemetry instrumentation that makes observations of the particular runtime aspects and emits traces of those observations. A first processing core having low power (capacity) performs a first execution of the rewritten binary executable to make first observations and emit first traces of the first observations. Afterwards, a second processing core performs a second (redundant) execution of the original binary executable based on the first traces. The second execution generates a detailed dynamic performance profile based on the second execution. | 2019-08-01 |
20190235986 | MANAGEMENT COMPUTER, DATA PROCESSING SYSTEM, AND DATA PROCESSING PROGRAM - A means is required for switching among a plurality of different data processing methods or for changing the type of sensor data to be collected. A management computer includes a control unit which stores, in a memory, monitoring means for monitoring a plurality of operation processes to be monitored and management means connected to a network to manage management information about a plurality of different types of external devices for processing a plurality of types of sensor information via the network, the control unit executing the monitoring means and management means in a CPU, in which the control unit determines whether the plurality of operation processes to be monitored is changed, and instructs, when determining that the operation processes are changed, the plurality of different types of external devices to change the processing of the sensor information required to execute the operation processes before changing. | 2019-08-01 |
20190235987 | DUPLICATE BUG REPORT DETECTION USING MACHINE LEARNING ALGORITHMS AND AUTOMATED FEEDBACK INCORPORATION - Duplicate bug report detection using machine learning algorithms and automated feedback incorporation is disclosed. For each set of bug reports, a user-classification of the set of bug reports as including duplicate bug reports or non-duplicate bug reports is identified. Also for each set of bug reports, correlation values corresponding to a respective feature, of a plurality of features, between bug reports in the set of bug reports is identified. Based on the user-classifications and the correlation values, a model is generated to identify any set of bug reports as including duplicate bug reports or non-duplicate bug reports. The model is applied to classify a particular bug report and a candidate bug report as duplicate bug reports or non-duplicate bug reports. | 2019-08-01 |
20190235988 | PARALLEL EXECUTION OF CONTINUOUS DELIVERY PIPELINE SEGMENT MODELS - A method comprising storing a plurality of segment models. A first user input may be received. At least a first segment model of the plurality of segment models and a second segment model of the plurality of segment models are selected, the selection based on the first user input. A parallel execution dependency of the first and second segment models may be defined. A continuous delivery pipeline model comprising the first segment model, the second segment model, and the parallel execution dependency definition may be generated. A trigger event may be received. The continuous delivery pipeline model may be executed in response to the trigger event, the executing including the first segment model and the second segment model at least temporarily executing in parallel with each other. | 2019-08-01 |
20190235989 | QUEUE-LESS ANALYSIS OF APPLICATION CODE - A method for analysis of software programs or applications is disclosed. The method may include an execution broker application executing on a compute resource included in a computer network, receiving application code and analysis information and generating an execution environment on a different compute resource included in the computer network. The execution broker application may initiate performance of an analysis of the application code using the execution environment and storage of a result of the analysis on another compute resource included in the computer network. In response to determining the analysis has completed, the execution broker may deactivate the execution environment. | 2019-08-01 |
20190235990 | COMPUTER-IMPLEMENTED METHODS AND SYSTEMS FOR DETERMINING APPLICATION MATCHING STATUS - A server includes one or more processors configured to determine relationships between one or more executable files, one or more library files, and one or more application programming interfaces (APIs) of a first application, and compare the determined relationships of the first application to determined relationships between one or more executable files, one or more library files, and one or more APIs of a second application. An indication of a matching status between the first and second applications is provided based on the compare. | 2019-08-01 |
20190235991 | FACILITATING RECORDING A TRACE FILE OF CODE EXECUTION USING INDEX BITS IN A PROCESSOR CACHE - Facilitating recording a trace of code execution using a processor cache. A method includes identifying an operation by a processing unit on a line of the cache. Based on identifying the operation, accounting bits for the cache line are set. Setting the accounting bits includes (i) setting the accounting bits to a reserved value when the operation is a write and tracing is disabled, (ii) setting the accounting bits to an index of the processing unit when the operation is a write and the accounting bits for the cache line are set to a value other than the index of the processing unit, or (iii) setting the accounting bits to the index of the processing unit when the operation is a read that is consumed by the processing unit and the accounting bits for the cache line are set to a value other than the index of the processing unit. | 2019-08-01 |
20190235992 | Tenant Code Management in Multi-Tenant Systems - Systems and methods for managing tenant code for a multi-tenant system. Instrumentation code may be added to the tenant code and track its performance. The tenant code may be disabled when it is determined based on information from the instrumentation code that the tenant code is misbehaving. An approximate clock may be used to determine if the running time of the tenant code exceeds a threshold. | 2019-08-01 |
20190235993 | GENERATING AN INNER CLOUD ENVIRONMENT WITHIN AN OUTER CLOUD ENVIRONMENT FOR TESTING A MICROSERVICE APPLICATION - A microservice application can be tested inside an inner cloud environment that is within an outer cloud environment. For example, a software application can generate an inner cloud environment within an outer cloud environment in response to an event associated with a microservice application. The software application can then deploy another version of the microservice application in the inner cloud environment. The software application can perform at least one test on the other version of the microservice application in the inner cloud environment to determine a compatibility of the other version of the microservice application with the inner cloud environment. | 2019-08-01 |
20190235994 | TESTING EMBEDDED SYSTEMS AND APPLICATION USING HARDWARE-IN-THE-LOOP AS A SERVICE (HILAAS) - Embodiments for testing embedded systems and their applications in an Internet of Things (IoT) environment by a processor, denoted as a Hardware-in-the-Loop as a Service (HiLaaS). In a simulated environment, one or more simulated entities and one or more real entities in a networked system may be tested in real-time according to received control parameters, for a price. The price is estimated by the system, based on other parameters, and offered to the user to accept or reject. Alternatively, the user may specify the price, the system estimates control parameters, and the user can accept or reject the control parameters. One or more properties of the one or more entities, the network system, or combination thereof may be estimated based on the testing of the one or more simulated entities, when the price and control parameters are accepted. | 2019-08-01 |
20190235995 | NON-TRANSITORY COMPUTER-READABLE MEDIUM, INFORMATION PROCESSING APPARATUS, DEBUGGING SYSTEM AND DEBUGGING METHOD - A debugging apparatus sequentially receives, over time, logs generated by the operation of a program to be debugged, sets the logs in a predetermined range as processing batch data to be batch-processed, and groups the processing batch data. If a set of grouped logs does not satisfy a condition, the group is determined to be in an incomplete state and is recorded, and when common groups exist between new processing batch data and the incomplete group, the log data of the incomplete group is added to the new processing batch data. | 2019-08-01 |
20190235996 | PARALLELIZABLE DATA-DRIVEN TESTING - Disclosed herein are methods, systems, and processes to generate and perform parallelizable data-driven single instruction multiple data (SIMD) tests. A base abstract class that defines shared testing parameters for tests to be performed on an application is designated. Inheriting classes of the base abstract class are defined and a data source key is derived from the inheriting classes. A test data source is accessed to perform the tests on the application and a result of the tests is generated based on the data source key. | 2019-08-01 |
20190235997 | CONTEXT-BASED DEVICE TESTING - Software applications are tested in different contexts, such as on different devices and under different conditions. During initial testing of an application, conditions of contexts are selected randomly, and the application is tested in each resulting context. After obtaining results from a sufficient number of contexts, the results are analyzed to create a predictive model indicating, for any postulated context, whether testing of the application is most likely to fail or to otherwise produce negative test results. The model is then analyzed to identify contexts that are most likely to produce negative results or failures, and those contexts are emphasized in subsequent application testing. | 2019-08-01 |
20190235998 | END-TO-END USER INTERFACE COMPONENT TESTING - Disclosed are examples of systems, apparatus, methods and computer program products for end-to-end user interface component testing in a database system. More specifically, techniques for efficient automation of end-to-end user interface component testing are described. | 2019-08-01 |
20190235999 | Automated Unit Testing In A Mainframe Environment - An automated system is presented for unit testing an application in a mainframe execution environment. The system includes a test configurator, a stub setup routine and an interceptor routine. The test configurator is configured to receive and parse a test input file, where the test input file includes a record for a particular file accessed by the application using a given type of file system. Upon reading the record, the test configurator calls the stub setup routine. The stub setup routine is associated with the given type of file system and creates an object for the particular file in the mainframe execution environment, such that the object is instantiated from a class representing the given type of file system. The interceptor routine is accessible by the application. In response to a given command issued by the application for the given type of file system, the interceptor routine operates to interact with methods provided by the object. | 2019-08-01 |
20190236000 | DISTRIBUTED SYSTEM TEST DEVICE - Aspects capture test coverage in a distributed system, wherein a processor instigates execution of a unique hypertext transfer request protocol test case within a distributed system of different, networked servers. The header of the unique test case includes a unique name for the unique test case, and the distributed system servers are each configured to, in response to processing a test case, generate a time-stamped log entry that includes header data for the processed test case and a uniform resource locator address of the processing server. The processor thus maps the unique test case to a subset of the distributed system servers as endpoint servers of the unique test case, in response to determining that the uniform resource locator addresses of each of the subset endpoint servers are listed within generated log entries of the endpoint servers in association with the unique test case name. | 2019-08-01 |
20190236001 | SHARED FABRIC ATTACHED MEMORY ALLOCATOR - An example system comprises one or more processing nodes to execute one or more processes; a switching fabric coupled to the one or more processing nodes; a fabric-attached memory (FAM) coupled with the switching fabric; and a memory allocator to allocate and release memory in the FAM in response to memory allocation requests and memory release requests from the one or more processes. The memory allocator is to partition the FAM into a memory shelf comprising a plurality of memory books of equal size. The memory allocator is to map a shelf into a virtual memory zone, the zone aligned with the boundaries of one or more books. The memory allocator is to maintain an indexed free-memory list where each index level is an entry point to a list of free memory blocks of a particular size in the zone, and the memory allocator is to maintain a bitmap of the zone to identify if a memory block of a particular size is allocated. | 2019-08-01 |
20190236002 | Inline Coalescing of File System Free Space - An in-line (or foreground) approach to obtaining contiguous ranges of free space in a file system of a data storage system that can select windows having blocks suitable for relocation at a time when one or more blocks within the respective windows are freed or de-allocated. By providing the in-line or foreground approach to obtaining contiguous ranges of free space in a file system, a more efficient determination of windows having blocks suitable for relocation can be achieved, thereby conserving processing resources of the data storage system. | 2019-08-01 |
20190236003 | STORAGE DEVICE THAT MAINTAINS A PLURALITY OF LAYERS OF ADDRESS MAPPING - A storage device includes a nonvolatile memory, a cache memory, and a processor configured to load, from the nonvolatile memory into the cache memory, a fragment of each layer of an address mapping corresponding to a target logical address, and access the nonvolatile memory at a physical address mapped from the target logical address, by referring to the fragments of the layers of the address mapping loaded into the cache memory. The layers are arranged in a hierarchy and each layer of the address mapping except for the lowermost layer indicates correspondence between each of segmented logical address ranges mapped in the layer and a physical location of an immediately-lower layer in which said each segmented logical address range is further mapped in a narrower range. The lowermost layer indicates correspondence between each logical address mapped therein and a physical location of the nonvolatile memory associated therewith. | 2019-08-01 |
20190236004 | DATA REBUILD WHEN CHANGING ERASE BLOCK SIZES DURING DRIVE REPLACEMENT - A method for memory management in a storage system is provided. The method includes defining a required set of pages for writes to solid-state memory and defining multiple levels of indirection for writing data to the solid-state memory, comprising data stripes, each having a plurality of allocation units and each of the allocation units having a plurality of data units. The method includes assigning portions of an allocation unit to a plurality of data units such that one portion of the allocation unit fills an instance of the required set of pages that straddles a boundary between a first data unit and a second data unit, and writing the plurality of data units to the solid-state memory, with the plurality of data units satisfying the required set of pages for writes to solid-state memory. | 2019-08-01 |
20190236005 | MEMORY SYSTEM AND OPERATING METHOD THEREOF - A memory system includes a memory device, and a controller suitable for selecting at least one common operation necessary to be performed in first and second tasks, selecting the first or second task, and selectively performing one or more of a valid data scan operation, a valid data read operation, a valid data write operation, and a valid data map update operation based on selected information, wherein the first task is a garbage collection operation performed on a host data block, a system data block and a map data block, wherein the second task is a recovery operation performed after a sudden power-off (SPO) that occurs during the valid data map update operation. | 2019-08-01 |
20190236006 | TILE BASED INTERLEAVING AND DE-INTERLEAVING FOR DIGITAL SIGNAL PROCESSING - Tile based interleaving and de-interleaving of row-column interleaved data is described. In one example, the de-interleaving is divided into two memory transfer stages, the first from an on-chip memory to a DRAM and the second from the DRAM to an on-chip memory. Each stage operates on part of a row-column interleaved block of data and re-orders the data items, such that the output of the second stage comprises de-interleaved data. In the first stage, data items are read from the on-chip memory according to a non-linear sequence of memory read addresses and written to the DRAM. In the second stage, data items are read from the DRAM according to bursts of linear address sequences which make efficient use of the DRAM interface and written back to on-chip memory according to a non-linear sequence of memory write addresses. | 2019-08-01 |
20190236007 | FLASH RECOVERY MODE - The disclosed technology is generally directed to data security. In one example of the technology, data is stored in a memory. The memory includes a plurality of memory banks including a first memory bank and a second memory bank. At least a portion of the data is interleaved amongst at least two of the plurality of memory banks. Access is caused to be prevented to at least one of the plurality of memory banks while a debug mode or recovery mode is occurring. Also, access is caused to be prevented to the at least one of the plurality of memory banks starting with initial boot until a verification by a security complex is successful. The verification by the security complex includes the security complex verifying a signature. | 2019-08-01 |
20190236008 | SERVER-BASED PERSISTENCE MANAGEMENT IN USER SPACE - A persistence management system performs, at a server, operations associated with a number of applications. At the server, a persistence manager can intercept a file system call from one of the applications, wherein the file system call specifies a file located on a remote persistent storage device separate from the server. The persistence manager can determine that data belonging to the file requested by the file system call is stored on a local persistent storage device at the server, retrieve the data from the local persistent storage, and respond to the file system call from the application with the data. | 2019-08-01 |
20190236009 | COUPLING WIDE MEMORY INTERFACE TO WIDE WRITE BACK PATHS - Systems and methods are disclosed for performing wide memory operations for a wide data cache line. In some examples of the disclosed technology, a processor having two or more execution lanes includes a data cache coupled to memory, a wide memory load circuit that concurrently loads two or more words from a cache line of the data cache, and a writeback circuit situated to send a respective word of the concurrently-loaded words to a selected execution lane of the processor, either into an operand buffer or bypassing the operand buffer. In some examples, a sharding circuit is provided that allows bitwise, byte-wise, and/or word-wise manipulation of memory operation data. In some examples, wide cache loads allows for concurrent execution of plural execution lanes of the processor. | 2019-08-01 |
20190236010 | DATA CACHING - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for caching data not frequently accessed. One of the methods includes receiving a request for data from a component of a device, determining that the data satisfies an infrequency condition, in response to determining that the data satisfies the infrequency condition: determining a target cache level which defines a cache level within a cache level hierarchy of a particular cache at which to store infrequently accessed data, the target cache level being lower than a highest cache level in the cache level hierarchy, requesting and receiving the data from a memory that is not a cache of the device, and storing the data in a level of the particular cache that is at or below the target cache level in the cache level hierarchy, and providing the data to the component. | 2019-08-01 |
20190236011 | MEMORY STRUCTURE BASED COHERENCY DIRECTORY CACHE - In some examples, with respect to memory structure based coherency directory cache implementation, a hardware sequencer may include hardware to identify, for a coherency directory cache that includes information related to a plurality of cache lines, adjacent cache lines. A state associated with each of the adjacent cache lines may be determined. Based on a determination that the state associated with one of the adjacent cache lines is identical to the state associated with remaining active adjacent cache lines, the adjacent cache lines may be grouped. The hardware sequencer may utilize, for the coherency directory cache, an entry in a memory structure to identify the grouped cache lines. Data associated with the entry in the memory structure may include greater than two possible memory states. | 2019-08-01 |
20190236012 | INTERPROCESSOR MEMORY STATUS COMMUNICATION - In a transactional memory environment including a first processor and one or more additional processors, a computer-implemented method includes identifying a memory location and sending a probe request from the first processor to the additional processors. The probe request includes the memory location. The computer implemented method further includes generating, by each additional processor, an indication including whether the memory location is in use for a transaction by the additional processor. The computer-implemented method further includes sending the indication from each additional processor to the first processor and proceeding, by the first processor, based on the indication. | 2019-08-01 |
20190236013 | DYNAMIC HOME TILE MAPPING - Technologies for migration of dynamic home tile mapping are described. An apparatus includes means for receiving coherence messages from other processor cores on the die, means for recording locations from which the coherence messages originate and means for determining distances between the requested home tiles and the locations from which the coherence messages originate. The apparatus includes means for determining whether an average distance between a particular home tile, whose identifier is stored in the home tile table, exceeds a threshold. When the average distance exceeds the defined threshold, the apparatus includes means for migrating the particular home tile to another location. | 2019-08-01 |
20190236014 | PRESERVATION OF MODIFIED CACHE DATA IN LOCAL NON-VOLATILE STORAGE FOLLOWING A FAILOVER - A dual-server based storage system maintains a first cache and a first non-volatile storage (NVS) in a first server, and a second cache and a second NVS in a second server, where data in the first cache is also written in the second NVS and data in the second cache is also written in the first NVS. In response to a failure of the first server, a determination is made as to whether space exists in the second NVS to accommodate the data stored in the second cache. In response to determining that space exists in the second NVS to accommodate the data stored in the second cache, the data is transferred from the second cache to the second NVS. | 2019-08-01 |
20190236015 | EXTRACT TARGET CACHE ATTRIBUTE FACILITY AND INSTRUCTION THEREFOR - A facility and cache machine instruction of a computer architecture for specifying a cache-level of a target cache and a target cache attribute of interest for obtaining a cache attribute of one or more target caches. The requested cache attribute of the target cache(s) is saved in a register. | 2019-08-01 |
20190236016 | MANAGEMENT OF CACHING OPERATIONS IN A UNIFIED CACHE - An exemplary embodiment herein is a method including comparing a cache hit rate ratio of a unified cache to a first pre-determined threshold, incrementing an alert counter in response to the cache hit rate ratio being lower than the first pre-determined threshold, comparing the alert counter to a pre-determined limit, preventing a first receipt of a type of data by the unified cache in response to the alert counter being equal to the pre-determined limit, causing a second receipt of metadata by the unified cache, comparing the cache hit rate ratio to a second pre-determined threshold, and allowing the first receipt of the type of data by the unified cache in response to the cache hit rate ratio being greater than the second pre-determined threshold. | 2019-08-01 |
20190236017 | METHOD AND SYSTEM FOR EFFICIENT COMMUNICATION AND COMMAND SYSTEM FOR DEFERRED OPERATION - A method and system for efficiently executing a delegate of a program by a processor coupled to an external memory. A payload including state data or command data is bound with a program delegate. The payload is mapped with the delegate via the payload identifier. The payload is pushed to a repository buffer in the external memory. The payload is flushed by reading the payload identifier and loading the payload from the repository buffer. The delegate is executed using the loaded payload. | 2019-08-01 |
20190236018 | Memory System Cache and Compiler - A memory system implements any combination of zero or more cache eviction policies, zero or more cache prefetch policies, and zero or more virtual address modification policies. A memory allocation technique implements parameter receiving and processing in accordance with the cache eviction policies, the cache prefetch policies, and the virtual address modification policies. A compiler system optionally processes any combination of zero or more indicators of extended data types usable to indicate one or more of the cache eviction policies, the cache prefetch policies, and/or the virtual address modification policies to associate with a variable, an array of variables, and/or a section of memory. The indicators comprise any combination of zero or more compiler flags, zero or more compiler switches, and/or zero or more pseudo-keywords in source code. | 2019-08-01 |
20190236019 | ADDRESS SPACE RESIZING TABLE FOR SIMULATION OF PROCESSING OF TARGET PROGRAM CODE ON A TARGET DATA PROCESSING APPARATUS - A method is provided for controlling processing of target program code on a host data processing apparatus to simulate processing of the target program code on a target data processing apparatus. In response to a target memory access instruction of the target program code specifying a target address within a simulated address space having a larger size than a host address space supported by a memory management unit of the host data processing apparatus, an address space resizing table is looked up to map the target address to a transformed address within said host address space, and information is generated for triggering a memory access based on translation of the transformed address by the memory management unit of the host data processing apparatus. | 2019-08-01 |
20190236020 | MEMORY SYSTEM AND OPERATING METHOD THEREOF - A memory system includes a nonvolatile memory device including a plurality of memory blocks; and a controller configured to generate an address mapping table based on a first mapping information on a first logical address set corresponding to host data, wherein the controller generates a second logical address set corresponding to metadata, and generates the address mapping table which includes a second mapping information on the second logical address set and the first mapping information. | 2019-08-01 |
20190236021 | MEMORY ACCESS METHOD - A memory access method for selectively creating a simplified mapping table includes the steps of: selecting one of a plurality of partitions of an original mapping table so as to use one physical page address in a selected partition as a start physical page address; scanning each entry of the selected partition so as to search a randomly mapped entry in the selected partition; determining whether a memory space required for creating the simplified mapping table is smaller than a memory space required for the selected partition; and selectively storing the start physical page address, the number of the randomly mapped entries, and a logical page address and a physical page address recorded on each randomly mapped entry according to the determination result of the determining step so as to create a simplified mapping table. | 2019-08-01 |
20190236022 | ULTRA-SECURE ACCELERATORS - Methods and apparatus for ultra-secure accelerators. New ISA enqueue (ENQ) instructions with a wrapping key (WK) are provided to facilitate secure access to on-chip and off-chip accelerators in computer platforms and systems. The ISA ENQ with WK instructions include a dest operand having an address of an accelerator portal and a src operand having the address of a request descriptor in system memory defining a job to be performed by an accelerator and including a wrapped key. Execution of the instruction writes a record including the src and a WK to the portal, and the record is enqueued in an accelerator queue if a slot is available. The accelerator reads the enqueued request descriptor and uses the WK to unwrap the wrapped key, which is then used to decrypt encrypted data read from one or more buffers in memory. The accelerator then performs one or more functions on the decrypted data as defined by the job and writes the output of the processing back to memory with optional encryption. | 2019-08-01 |
20190236023 | VIRTUAL ADDRESS TABLE - The present disclosure includes apparatuses and methods related to virtual address tables. An example method comprises generating an object file that comprises: an instruction comprising a number of arguments; and an address table comprising a number of indexed address elements. Each one of the number of indexed address elements can correspond to a virtual address of a respective one of the number of arguments, wherein the address table can serve as a target for the number of arguments. The method can include storing the object file in a memory. | 2019-08-01 |
20190236024 | MULTI-ENGINE ADDRESS TRANSLATION FACILITY - An address translation facility is provided for multiple virtualization levels, where a guest virtual address may be translated to a guest non-virtual address, the guest non-virtual address corresponding without translation to a host virtual address, and the host virtual address may be translated to a host non-virtual address, where translation within a virtualization level may be specified as a sequence of accesses to address translation tables. The address translation facility may include a first translation engine and a second translation engine, where the first and second translation engines each have capacity to perform address translation within a single virtualization level of the multiple virtualization levels. In operation, based on the first translation engine performing a guest level translation, the second translation engine may perform a host level translation of a resulting guest non-virtual address to a host non-virtual address based on the guest level translation by the first translation engine. | 2019-08-01 |
20190236025 | MULTI-ENGINE ADDRESS TRANSLATION FACILITY - An address translation facility is provided for multiple virtualization levels, where a guest virtual address may be translated to a guest non-virtual address, the guest non-virtual address corresponding without translation to a host virtual address, and the host virtual address may be translated to a host non-virtual address, where translation within a virtualization level may be specified as a sequence of accesses to address translation tables. The address translation facility may include a first translation engine and a second translation engine, where the first and second translation engines each have capacity to perform address translation within a single virtualization level of the multiple virtualization levels. In operation, based on the first translation engine performing a guest level translation, the second translation engine may perform a host level translation of a resulting guest non-virtual address to a host non-virtual address based on the guest level translation by the first translation engine. | 2019-08-01 |
20190236026 | MEMORY ACCESS COMPRESSION USING CLEAR CODE FOR TILE PIXELS - One embodiment provides for a graphics processor comprising a translation lookaside buffer (TLB) to cache a first page table entry for a virtual to physical address mapping for use by the graphics processor, the first page table entry to indicate that a first virtual page is cleared to a clear color and a graphics pipeline to bypass a memory access for the first virtual page based on the first page table entry, wherein the graphics pipeline is to read a field in the first page table entry to determine a value of the clear color. | 2019-08-01 |
20190236027 | ADAPTIVE TABLEWALK TRANSLATION STORAGE BUFFER PREDICTOR - A system for generating predictions for a hardware table walk to find a map of a given virtual address to a corresponding physical address is disclosed. The system includes a plurality of memories, which each includes respective plurality of entries, each of which includes a prediction of a particular one of a plurality of buffers which includes a portion of a virtual to physical address translation map. A first circuit may generate a plurality of hash values to retrieve a plurality of predictions from the plurality of memories, where each hash value depends on a respective address and information associated with a respective thread. A second circuit may select a particular prediction of the retrieved predictions to use based on a history of previous predictions. | 2019-08-01 |
20190236028 | CUCKOO CACHING - A cuckoo cache has plural buckets of plural cells each. The cells within a bucket are ranked to approximate relative usage recency. New items can be inserted into empty cells; when a bucket is full, room for a new item can be made by laterally transferring an older item to an alternative bucket. When empty cells and lateral transfers are unavailable, an item is selected for eviction based on the usage recency rank of the containing cell. When a match is found, depending on the embodiment, the hit item can be promoted within its bucket, to its alternative bucket, or to a separate tier of the cuckoo cache. The items can be key-value pairs. No metadata is required to track usage recency so that the cuckoo cache can be a very space efficient tool for finding cached values by their keys. | 2019-08-01 |
20190236029 | SYSTEMS AND METHODS FOR LOAD-BALANCING CACHE FLUSHES TO NON-VOLATILE MEMORY - An information handling system may include a processor, a memory communicatively coupled to the processor and comprising a plurality of non-volatile memories, and a memory controller. The memory controller may be configured to monitor memory input/output traffic to each of the plurality of non-volatile memories, determine a quality of service associated with each of the plurality of non-volatile memories based on such monitoring, and based on such monitoring and the qualities of service associated with the plurality of non-volatile memories, reroute input/output data associated with a first non-volatile memory of the plurality of non-volatile memories to a second non-volatile memory of the plurality of non-volatile memories. | 2019-08-01 |
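A rough Python sketch of the load-balancing idea: outstanding I/O per non-volatile memory is the monitored traffic, and a simple queue-depth-based quality-of-service proxy decides when to reroute a write. The QoS metric and threshold are illustrative assumptions, not the memory controller's actual policy.

```python
class LoadBalancingController:
    def __init__(self, devices, max_outstanding=8):
        self.devices = devices                      # names of NVM devices
        self.outstanding = {d: 0 for d in devices}  # monitored I/O traffic
        self.max_outstanding = max_outstanding

    def quality_of_service(self, device):
        # Simple QoS proxy: fewer outstanding operations == better service.
        return self.max_outstanding - self.outstanding[device]

    def route_write(self, target, data):
        if self.quality_of_service(target) <= 0:
            # Reroute to the device currently offering the best QoS.
            target = max(self.devices, key=self.quality_of_service)
        self.outstanding[target] += 1
        return target   # in hardware this would issue the write to `target`

    def complete(self, device):
        self.outstanding[device] -= 1

ctrl = LoadBalancingController(["nvm0", "nvm1"])
for _ in range(8):
    ctrl.route_write("nvm0", b"flush")
print(ctrl.route_write("nvm0", b"flush"))   # nvm1: nvm0 is saturated
```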
20190236030 | MEMORY MODULE, OPERATION METHOD THEREOF, AND OPERATION METHOD OF HOST - A memory module includes a random access memory (RAM) device that includes a first storage region and a second storage region, a nonvolatile memory device, and a controller that controls the RAM device or the nonvolatile memory device under control of a host. The controller includes a data buffer that temporarily stores first data received from the host, and a buffer returning unit that transmits first release information to the host when the first data are moved from the data buffer to the first storage region or the second storage region of the RAM device and transmits second release information to the host when the first data are moved from the second storage region to the nonvolatile memory device. | 2019-08-01 |
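A small Python sketch of the two-stage release protocol described above; the `host_notify` callback and the single RAM map stand in for the first/second storage regions and the release-information messages, so the structure is illustrative rather than the module's actual interface.

```python
class ModuleController:
    def __init__(self, host_notify):
        self.data_buffer = {}      # write id -> data held in the controller buffer
        self.ram = {}              # storage regions of the RAM device (simplified)
        self.nvm = {}              # nonvolatile memory device
        self.host_notify = host_notify

    def write(self, wid, data):
        self.data_buffer[wid] = data

    def flush_to_ram(self, wid):
        self.ram[wid] = self.data_buffer.pop(wid)
        self.host_notify(("first_release", wid))    # buffer slot is free again

    def flush_to_nvm(self, wid):
        self.nvm[wid] = self.ram.pop(wid)
        self.host_notify(("second_release", wid))   # data is now persistent

events = []
ctrl = ModuleController(events.append)
ctrl.write(1, b"payload")
ctrl.flush_to_ram(1)
ctrl.flush_to_nvm(1)
print(events)   # [('first_release', 1), ('second_release', 1)]
```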
20190236031 | STORAGE DEVICE AND METHOD OF OPERATING THE SAME - Provided herein may be a storage device and a method of operating the same. The method of operating a storage device including a replay protected memory block (RPMB) may include receiving a write request for the RPMB from an external host, selectively storing data in the RPMB based on an authentication operation, receiving a read request from the external host, and providing result data to the external host in response to the read request, wherein the read request includes a message indicating that a read command to be subsequently received from the external host is a command related to the result data. | 2019-08-01 |
20190236032 | DATA STORAGE APPARATUS, DATA PROCESSING SYSTEM, AND DATA PROCESSING METHOD - According to one embodiment, a data storage apparatus includes a controller with a data protection function. The controller manages first and second personal identification data. The first personal identification data only includes authority to request inactivation of the data protection function. The second personal identification data includes authority to request inactivation of the data protection function and activation of the data protection function. The controller permits setting of the first personal identification data, when the second personal identification data is used for successful authentication and the first personal identification data is an initial value, or when the data protection function is in an inactive state. | 2019-08-01 |
20190236033 | SECURING STREAM BUFFERS - Described are examples for securing stream data received from a stream source. A secure mode can be enabled, based on a request from an application, for storing the stream data captured from the stream source in a secured buffer. The secured buffer can be allocated in a secure memory based at least in part on enabling the secure mode. A secured buffer identifier of the secured buffer can be provided to a driver of a device providing the stream source for storing the stream data captured from the stream source in the secured buffer. The secured buffer identifier of the secured buffer can also be provided to the application for accessing the stream data stored in the secured buffer. | 2019-08-01 |
20190236034 | ACCESS OF VIRTUAL MACHINES TO STORAGE AREA NETWORKS - A method for managing access of virtual machines executed by a host computer system to storage area networks, the storage area networks connecting the host computer system with storage systems via switches, where the host computer system includes one or more host ports, each connecting to a switch, and where one or more port names are assigned to each virtual machine. The method includes, for each storage area network connected with the host computer system, sending the port names of a virtual machine and a target port name as part of a validate access command to the respective switch; and, when the switch receives the validate access command, the switch returning success information to the sending host computer system in case all received virtual machine port names have access to a target port assigned to the received target port name, and otherwise returning fail information. | 2019-08-01 |
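The switch-side check can be illustrated with a short Python sketch, assuming a zoning table maps each initiator port name to the target port names it may reach; the data structures are illustrative, not the fabric switch's actual implementation.

```python
class FabricSwitch:
    def __init__(self, zoning):
        # zoning: initiator port name -> set of target port names it may reach
        self.zoning = zoning

    def validate_access(self, vm_port_names, target_port_name):
        # Success only if every VM port name has access to the target port.
        ok = all(target_port_name in self.zoning.get(p, set())
                 for p in vm_port_names)
        return "success" if ok else "fail"

switch = FabricSwitch({
    "vm1-port-a": {"storage-tgt-1"},
    "vm1-port-b": {"storage-tgt-1", "storage-tgt-2"},
})
print(switch.validate_access(["vm1-port-a", "vm1-port-b"], "storage-tgt-1"))  # success
print(switch.validate_access(["vm1-port-a", "vm1-port-b"], "storage-tgt-2"))  # fail
```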
20190236035 | STANDARDIZED DEVICE DRIVER HAVING A COMMON INTERFACE - A system for camera device command and control detects a camera device connected to the system and accesses a set of interfaces defined by a standardized driver. The set of interfaces are a common set of interfaces associated with a device object and the device object is compatible with camera devices from different camera device manufacturers. The system queries the detected camera device for descriptors defining capabilities of the camera device using a packetized command generated based at least on the accessed set of interfaces and provides, based on the descriptors, one or more interfaces of the accessed set of interfaces to one or more applications to make the camera device available to the one or more applications, thereby simplifying connection of camera devices from different manufacturers and improving the user experience. The system then allows the camera device to be controlled. | 2019-08-01 |
20190236036 | FLASH MEMORY CONTROL DEVICE CAPABLE OF DETECTING TYPE OF INTERFACE AND METHOD THEREOF - The present invention discloses a solid state drive (SSD) control device including: a multi-interface compatible physical layer circuit operable to generate a physical layer output signal according to a serializer/deserializer (SerDes) reception signal; and a processing circuit operable to make the solid state drive control device adapt to one of several interface types in accordance with the physical layer output signal. | 2019-08-01 |
20190236037 | LOW PIN-COUNT ARCHITECTURE WITH PRIORITIZED MESSAGE ARBITRATION AND DELIVERY - Methods and apparatus for implementing a low-pin count architecture with priority message arbitration and delivery. The architecture includes a hardware-based message arbitration unit (MAU) including a plurality of priority queues, each having a respective priority level, implemented on a first component, such as a processor and/or System on a Chip (SoC). The first component is communicatively coupled to a second component via a low-pin count link such as an I2C bus. The MAU receives prioritized messages from clients and enqueues the messages in priority queues based on their priority levels. An arbiter selects messages to transmit over the low-pin count link from the priority queues. The MAU further may abort transmission of a message in favor of transmission of a higher-priority message to guarantee a transmit latency. Under one implementation, the components are a processor and a PMIC configured to communicate with a USB Type-C power source and meet timing requirements defined by a USB Power Delivery Specification. | 2019-08-01 |
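A Python sketch of the arbitration behavior described above, assuming three priority levels and an abort that simply requeues the in-flight message; the real MAU is hardware and also handles the low-pin-count (e.g. I2C) framing and PMIC timing, which are omitted here.

```python
from collections import deque

class MessageArbitrationUnit:
    def __init__(self, num_priorities=3):
        self.queues = [deque() for _ in range(num_priorities)]  # 0 = highest priority
        self.in_flight = None      # (priority, message) currently on the link

    def enqueue(self, priority, message):
        self.queues[priority].append(message)
        if self.in_flight and priority < self.in_flight[0]:
            # Abort the lower-priority transfer to guarantee transmit latency.
            aborted_prio, aborted_msg = self.in_flight
            self.queues[aborted_prio].appendleft(aborted_msg)
            self.in_flight = None

    def arbitrate(self):
        if self.in_flight is None:
            for prio, q in enumerate(self.queues):
                if q:
                    self.in_flight = (prio, q.popleft())
                    break
        return self.in_flight

    def transmit_complete(self):
        done, self.in_flight = self.in_flight, None
        return done

mau = MessageArbitrationUnit()
mau.enqueue(2, "telemetry")
print(mau.arbitrate())            # (2, 'telemetry') starts transmitting
mau.enqueue(0, "over-current!")   # aborts the telemetry transfer
print(mau.arbitrate())            # (0, 'over-current!')
```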
20190236038 | BUFFERED INTERCONNECT FOR HIGHLY SCALABLE ON-DIE FABRIC - Buffered interconnects for highly scalable on-die fabric and associated methods and apparatus. A plurality of nodes on a die are interconnected via an on-die fabric. The nodes and fabric are configured to implement forwarding of credited messages from source nodes to destination nodes using forwarding paths partitioned into a plurality of segments, wherein separate credit loops are implemented for each segment. Under one fabric configuration implementing an approach called multi-level crediting, the nodes are configured in a two-dimensional grid and messages are forwarded using vertical and horizontal segments, wherein a first segment is between a source node and a turn node in the same row or column and the second segment is between the turn node and a destination node. Under another approach called buffered mesh, buffering and credit management facilities are provided at each node and adjacent nodes are configured to implement credit loops for forwarding messages between the nodes. The fabrics may comprise various topologies, including 2D mesh topologies and ring interconnect structures. Moreover, multi-level crediting and buffered mesh may be used for forwarding messages across dies. | 2019-08-01 |
20190236039 | AUTOMATIC MASTER-SLAVE SYSTEM AND APPROACH - An automatic master-slave system and approach for coordinated control of a parameter, for example a heating, ventilation and air conditioning condition, in an area of multiple spaces controlled by room controllers. The layout of a zone or area in a building may change, such as by moving, adding or removing a door, enlarging or splitting a room through movable walls, permanently removing partitions, or converting offices into a conference room or vice versa. The size of a room may be altered within minutes according to customer demand; for instance, several rooms may be converted into a single room by removing partitions. The controllers that previously controlled the temperatures of the rooms independently may convert automatically into a master-slave configuration and work together to control the larger room. If the large room is split back into multiple rooms, the controllers may automatically revert to their previous configuration. | 2019-08-01 |
20190236040 | USB ADAPTER AND CABLE - Disclosed is an adapter. The adapter may include a first end, a second end, a housing, and a logic circuit. The first end may be operative to connect to a terminal device. The second end may be operative to connect to a peripheral device. The housing may connect the first end to the second end. The logic circuit may be located within the housing and electrically couple the first end to the second end. The logic circuit may be operative to perform a handshake operation between the terminal device and the peripheral device to determine compatibility between the terminal device and the peripheral device. | 2019-08-01 |
20190236041 | MULTIMODE AUDIO ACCESSORY CONNECTOR - Methods and devices for connecting an accessory device to a connector port of a mobile communication device and automatically detecting an operational mode of the connector port are provided. The method includes implementing a USB Type-C device detection at an electronic processor of the mobile communication device and monitoring a first and second pin of the connector port for pull-up and pull-down signals from a connected accessory. The method also includes interrupting the USB Type-C device detection when a pull-down signal is detected and determining whether an accessory signal is detected at a third pin of the connector port. The method also includes implementing an LMR accessory detection when the accessory signal is detected and completing the USB Type-C device detection when the accessory signal is not detected. | 2019-08-01 |
20190236042 | EFFICIENT TECHNIQUE FOR COMMUNICATING BETWEEN DEVICES OVER A MULTI-DROP BUS - In a system comprising a serial bus and a plurality of devices, register/address mappings and/or unique group identifiers are used to convey additional information in messages/datagrams over the serial bus without explicitly sending such information in the message/datagram. Such register/address mappings may be done beforehand, and in conjunction with group-specific identifiers, may reduce transmission latency by keeping the size of the messages/datagrams small. Since all devices on the serial bus have prior knowledge of such register/address mappings and/or group-specific identifiers, recipient devices are able to infer information from the group-specific identifiers and/or register/address sent in each message/datagram that is not explicitly sent within such message/datagram. | 2019-08-01 |
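A small Python sketch of the idea: the register map and group membership are shared ahead of time, so each datagram carries only a group id, a register index, and a value, and recipients expand the full register address locally. The field layout and tables are illustrative assumptions, not a real bus protocol.

```python
# Shared ahead of time by all devices (not transmitted per message):
GROUP_MEMBERS = {0x1: ["codec", "sensor"], 0x2: ["codec"]}
REGISTER_MAP = {0: 0x2000, 1: 0x2004, 2: 0x2010}   # index -> full register address

def build_datagram(group_id, reg_index, value):
    # Only three small fields travel on the serial bus.
    return (group_id, reg_index, value)

class BusDevice:
    def __init__(self, name):
        self.name = name
        self.registers = {}

    def receive(self, datagram):
        group_id, reg_index, value = datagram
        if self.name not in GROUP_MEMBERS[group_id]:
            return                               # not addressed to this device
        full_address = REGISTER_MAP[reg_index]   # inferred, never transmitted
        self.registers[full_address] = value

devices = [BusDevice("codec"), BusDevice("sensor"), BusDevice("radio")]
msg = build_datagram(group_id=0x1, reg_index=2, value=0xAB)
for d in devices:
    d.receive(msg)
print([d.registers for d in devices])   # codec and sensor wrote 0x2010; radio ignored it
```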
20190236043 | ASYMMETRIC POWER STATES ON A COMMUNICATION LINK - Asymmetric power states on a communication link are disclosed. In one aspect, the communication link is a Peripheral Component Interconnect (PCI) express (PCIe) link. PCIe is a point-to-point communication link between two termini. Exemplary aspects of the present disclosure allow the two termini to be in different power states. By allowing the two termini to be in the different power states, an individual terminus may be put into a low-power state even though the other terminus is maintained at a higher-power state. The different power states are enabled by providing switches between a reference clock and respective termini such that the reference clock may selectively be provided to only one terminus of the communication link, allowing that terminus to remain in the higher-power state while the other terminus enters a low-power state that does not require the reference clock. | 2019-08-01 |
20190236044 | Enhanced SSD Storage Device Form Factors - Enhanced data storage devices in various form factors are discussed herein. In one example, a storage drive includes a 2.5-inch form factor chassis that structurally supports elements of the storage drive, and at least one host connector. The storage drive also includes a plurality of M.2 storage device connectors, and a Peripheral Component Interconnect Express (PCIe) switch circuit configured to receive storage operations over the at least one host connector and transfer the storage operations for delivery to ones of the plurality of M.2 storage device connectors over associated device PCIe interfaces. The storage drive also includes power circuitry configured to provide holdup power to ones of the plurality of M.2 storage device connectors after loss of input power over the at least one host connector. | 2019-08-01 |
20190236045 | VIRTUAL COMPORT BRIDGE SUPPORTING HOST-TO-HOST USB TRANSMISSION - A USB bridge including a first USB port, a second USB port, a microcontroller, and a host-to-host function circuit is provided. The first USB port is coupled to the first USB host. The second USB port is coupled to the second USB host. The microcontroller is coupled to the first and the second USB ports. The microcontroller communicates with the first and the second USB hosts via the first and the second USB ports, such that the first and the second USB hosts respectively simulate their USB ports as virtual COM ports. The host-to-host function circuit is coupled to the microcontroller and configured to perform a host-to-host transmission function through the USB ports simulated as virtual COM ports. | 2019-08-01 |
20190236046 | MODULAR AND SCALABLE PCIE CONTROLLER ARCHITECTURE - The present disclosure generally relates to a Modular PCIe Unit (MPU), which is a single-lane PCI Express endpoint that can act either as a stand-alone single-lane endpoint or as one lane in a Multilane Endpoint Unit composed of cascaded MPUs. The MPU includes a PCIe link layer, a PCIe transaction layer, SoC-specific logic, and a PCIe PHY that are all unique to the individual MPU. The MPUs are scalable in that a single MPU may be used or, if higher performance is desired, additional MPUs, each of which can be unique, may be added to create the Multilane Endpoint Unit. | 2019-08-01 |
20190236047 | METHOD AND APPARATUS FOR PROVIDING INTERFACE - An electronic device and method of operating the electronic device are provided. The electronic device includes a housing, a first connector configured to be exposed to outside of the housing and include a first number of pins, a second connector configured to be exposed to the outside of the housing and include a second number of pins, and a circuit configured to provide an electrical connection between the first number of pins and the second number of pins, wherein the first number is different from the second number, and wherein, when the first connector is connected with a first external electronic device and the second connector is connected with a second external electronic device, the circuit is configured to receive analog identification (ID) information through at least one pin among the first number of pins, and generate digital ID information at least partially based on the analog ID information so as to provide the digital ID information to at least one of the second number of pins. | 2019-08-01 |
20190236048 | CONTROL APPARATUS AND CONTROL METHOD - According to the disclosure, comparison can be performed with high accuracy even if a deviation along the time axis occurs between the target signal and the comparison condition. A control apparatus includes an acquisition part that acquires a time-series signal output from a device; a comparison condition storage part that stores information indicating a temporal change of a predetermined comparison condition; an area determination part that determines a target area in the acquired signal, the target area being an area satisfying a predetermined condition indicating that the change of a value is stable; and a comparison part that performs comparison with the comparison condition using the signal of the target area determined by the area determination part. | 2019-08-01 |
20190236049 | Performing concurrent operations in a processing element - A processing element (PE) of a systolic array can perform neural networks computations in parallel on two or more sequential data elements of an input data set using the same weight. Thus, two or more output data elements corresponding to an output data set may be generated in parallel. Based on the size of the input data set and an input data type, the systolic array can process a single data element or multiple data elements in parallel. | 2019-08-01 |
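A toy Python sketch of a processing element that applies one weight to two sequential input elements per step, producing two output elements together; a real systolic array does this concurrently in hardware, so the sequential Python only illustrates the dataflow. Names and the 1x1 "convolution" wrapper are illustrative.

```python
class ProcessingElement:
    def __init__(self, weight):
        self.weight = weight
        self.partial_sums = [0, 0]     # two output elements computed together

    def step(self, x_even, x_odd):
        # The same weight is applied to both sequential data elements.
        self.partial_sums[0] += self.weight * x_even
        self.partial_sums[1] += self.weight * x_odd
        return self.partial_sums

def conv1d_pairwise(inputs, weight):
    """Tiny 1x1 'convolution' over an input row, two outputs at a time."""
    pe = ProcessingElement(weight)
    outputs = []
    for i in range(0, len(inputs) - 1, 2):
        pe.partial_sums = [0, 0]
        outputs.extend(pe.step(inputs[i], inputs[i + 1]))
    return outputs

print(conv1d_pairwise([1, 2, 3, 4], weight=3))   # [3, 6, 9, 12]
```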
20190236050 | MANAGING STORAGE SYSTEM METADATA DURING DATA MIGRATION - Data is migrated from a source storage device to a destination storage device using tape media. Both the source storage device and the destination storage device utilize disk drives to store data. A portion of data is detected migrating to the tape media. Metadata of the portion of data is changed to identify the portion of data as residing on the tape media. A prefetch command for the portion of data is detected. It is determined that the portion of data is stored on the tape media. In response to determining that the portion of data is stored on the tape media, the prefetch command is executed without recalling the portion of data to the disk drives. Instead, the portion of data is read directly from the tape media. | 2019-08-01 |
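A brief Python sketch of the metadata-driven prefetch path: migration flips a per-extent location flag, and the prefetch consults it so data residing on tape is read directly from the tape media rather than recalled to the disk drives first. The names and flat dict "media" are illustrative assumptions.

```python
class MigratingStore:
    def __init__(self):
        self.disk = {}
        self.tape = {}
        self.metadata = {}      # extent id -> "disk" or "tape"

    def write(self, extent, data):
        self.disk[extent] = data
        self.metadata[extent] = "disk"

    def migrate_to_tape(self, extent):
        self.tape[extent] = self.disk.pop(extent)
        self.metadata[extent] = "tape"    # metadata changed during migration

    def prefetch(self, extent):
        if self.metadata[extent] == "tape":
            # No recall to the disk drives: serve the read from tape media.
            return self.tape[extent]
        return self.disk[extent]

store = MigratingStore()
store.write("ext-7", b"cold data")
store.migrate_to_tape("ext-7")
print(store.prefetch("ext-7"), store.metadata["ext-7"])   # b'cold data' tape
```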
20190236051 | CLOUD-AWARE SNAPSHOT DIFFERENCE DETERMINATION - Modifications made to files (e.g., stub files) within a distributed file storage system over a defined time period are determined. Moreover, the distributed file storage system employs a tiered cloud storage architecture. In one aspect, snapshots of a stub file can be generated at different instances of time. Further, metadata of the stub file within the different snapshots can be compared to determine whether the stub file has been modified. As an example, the metadata can include cache metadata that describes the content within the cache of the stub file and/or mapping metadata that describes the content within cloud storage that is referenced by the stub file. | 2019-08-01 |
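A minimal Python sketch of the metadata comparison, assuming the stub metadata consists of a set of cached block ranges plus an ordered tuple of referenced cloud objects; the actual snapshot and stub formats are not specified in the abstract, so these fields are illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StubMetadata:
    cache_blocks: frozenset    # block ranges currently cached locally
    cloud_objects: tuple       # ordered cloud object keys the stub maps to

def stub_modified(snapshot_a, snapshot_b):
    """True if the stub's cache or cloud-mapping metadata differs between snapshots."""
    return (snapshot_a.cache_blocks != snapshot_b.cache_blocks or
            snapshot_a.cloud_objects != snapshot_b.cloud_objects)

t0 = StubMetadata(frozenset({(0, 4096)}), ("s3://bucket/obj-1",))
t1 = StubMetadata(frozenset({(0, 4096)}), ("s3://bucket/obj-1",))
t2 = StubMetadata(frozenset({(0, 8192)}), ("s3://bucket/obj-1", "s3://bucket/obj-2"))

print(stub_modified(t0, t1))   # False: nothing changed between snapshots
print(stub_modified(t1, t2))   # True: cache and cloud mapping both changed
```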
20190236052 | GENERATING AND SHARING METADATA FOR INDEXING SYNCHRONIZED CONTENT ITEMS - Generating and sharing metadata for indexing synchronized content items. A server generates metadata for indexing synchronized content items and manages sharing of the metadata with client devices in accordance with user preferences that may be embodied in metadata generation and sharing management rules. For example, a content item stored at the server has been designated to be synchronized across at least a first client. The server generates metadata for indexing the content item and sends the metadata to at least a second client. | 2019-08-01 |
20190236053 | DYNAMIC MANAGEMENT OF EXPANDABLE CACHE STORAGE FOR MULTIPLE NETWORK SHARES CONFIGURED IN A FILE SERVER - Expandable cache management dynamically manages cache storage for multiple network shares configured in a file server. Once a file is written to a directory or folder on a specially designated network share, such as one that is configured for “infinite backup,” an intermediary pre-backup copy of the file is created in an expandable cache in the file server that hosts the network share. On write operations, cache storage space can be dynamically expanded or freed up by pruning previously backed up data. This advantageously creates flexible storage caches in the file server for each network share, each cache managed independently of other like caches for other network shares on the same file server. On read operations, intermediary file storage in the expandable cache gives client computing devices speedy access to data targeted for backup, which is generally quicker than restoring files from backed up secondary copies. | 2019-08-01 |
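A Python sketch of per-share expandable caches with pruning of already-backed-up entries on write; the byte budget, backed-up flag, and per-share cache map are illustrative assumptions rather than the product's actual cache manager.

```python
class ShareCache:
    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.entries = {}          # path -> (data, backed_up flag)

    def used(self):
        return sum(len(d) for d, _ in self.entries.values())

    def write(self, path, data):
        self._make_room(len(data))
        self.entries[path] = (data, False)     # intermediary pre-backup copy

    def mark_backed_up(self, path):
        data, _ = self.entries[path]
        self.entries[path] = (data, True)

    def _make_room(self, needed):
        for path in [p for p, (_, done) in self.entries.items() if done]:
            if self.used() + needed <= self.budget:
                break
            del self.entries[path]             # prune previously backed up data

class FileServer:
    """Each network share gets its own independently managed cache."""
    def __init__(self):
        self.caches = {}

    def share(self, name, budget=1 << 20):
        return self.caches.setdefault(name, ShareCache(budget))

fs = FileServer()
cache = fs.share("backup-share", budget=10)
cache.write("a.txt", b"12345678")
cache.mark_backed_up("a.txt")
cache.write("b.txt", b"abcdefgh")          # pruning evicts a.txt to make room
print(list(cache.entries))                 # ['b.txt']
```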
20190236054 | SYSTEM AND METHOD FOR EFFICIENTLY MEASURING PHYSICAL SPACE FOR AN AD-HOC SUBSET OF FILES IN PROTECTION STORAGE FILESYSTEM WITH STREAM SEGMENTATION AND DATA DEDUPLICATION - In one example, a method includes measuring an amount of physical storage space used, or expected to be used, by a portion of a dataset S of segments, and measuring the amount of physical storage space includes receiving information that identifies an ad-hoc group of size ‘n’ of files F | 2019-08-01 |
20190236055 | INFORMATION PROCESSING DEVICE, METHOD, AND PROGRAM RECORDING MEDIUM - An information processing device includes acquisition means for acquiring a plurality of tree structure data in which information groups are represented by tree structures, and merge means for generating merged tree structure data by merging the plurality of tree structure data. For information satisfying a condition indicating that a certain distance from other information in the tree structure data is maintained, the merge means merges a subtree at a level below that information into the merged tree structure data, wherein the information satisfying the condition is included in a node selected from the plurality of tree structure data; and, for information satisfying a condition indicating that the distance between pieces of information is short, the merge means reorganizes a merged subtree by merging the subtrees below that information into the information. | 2019-08-01 |
20190236056 | COMPUTER SYSTEM - [Problem to be Solved] | 2019-08-01 |
20190236057 | Suppression and Deduplication of Place-Entities on Online Social Networks - In one embodiment, a method includes receiving from a client system a search query, identifying a first place-entity based on the search query, accessing a place-entities graph comprising a plurality of place-entity nodes, each place-entity node representing a particular place-entity associated with a particular geographic location, wherein the first place-entity is represented by a first place-entity node, accessing a redirection graph comprising the plurality of place-entity nodes and a plurality of place-entity clusters, each place-entity node in a place-entity cluster having a redirection edge connecting the place-entity node to a canonical place-entity node of the respective place-entity cluster, and sending a response to the search query, wherein if the first place-entity node is connected to a canonical place-entity node by a redirection edge within the redirection graph, the response comprises a reference to the canonical place-entity node, else the response comprises a reference to the first place-entity node. | 2019-08-01 |
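A short Python sketch of resolving a place-entity through a redirection graph: duplicate place-entity nodes point at the canonical node of their cluster, and the search response substitutes the canonical node whenever such a redirection edge exists. The example entities and the dict-based graph are illustrative, not the social network's data model.

```python
class RedirectionGraph:
    def __init__(self):
        self.canonical_of = {}     # place-entity node -> canonical node of its cluster

    def add_cluster(self, canonical, duplicates):
        for node in duplicates:
            self.canonical_of[node] = canonical   # redirection edge

    def resolve(self, place_entity_node):
        # If a redirection edge exists, answer with the canonical node;
        # otherwise the node itself is referenced in the response.
        return self.canonical_of.get(place_entity_node, place_entity_node)

graph = RedirectionGraph()
graph.add_cluster("Eiffel Tower (canonical)",
                  ["eiffel tower paris", "Tour Eiffel", "The Eiffel Tower"])

def respond_to_search(query_entity):
    return {"query": query_entity, "result": graph.resolve(query_entity)}

print(respond_to_search("Tour Eiffel"))     # redirected to the canonical node
print(respond_to_search("Louvre Museum"))   # no redirection edge, node unchanged
```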