38th week of 2020 patent application highlights part 48 |
Patent application number | Title | Published |
20200293329 | PIPELINE INCLUDING SEPARATE HARDWARE DATA PATHS FOR DIFFERENT INSTRUCTION TYPES - A processing element is implemented in a stage of a pipeline and configured to execute an instruction. A first array of multiplexers is to provide information associated with the instruction to the processing element in response to the instruction being in a first set of instructions. A second array of multiplexers is to provide information associated with the instruction to the first processing element in response to the instruction being in a second set of instructions. A control unit is to gate at least one of power or a clock signal provided to the first array of multiplexers in response to the instruction being in the second set. | 2020-09-17 |
20200293330 | Execution Unit - An execution unit comprising a processing pipeline configured to perform calculations to evaluate a plurality of mathematical functions. The processing pipeline comprises a plurality of stages through which each calculation for evaluating a mathematical function progresses to an end result. Each of a plurality of processing circuits in the pipeline is configured to perform an operation on input values during at least one stage of the plurality of stages. The plurality of processing circuits include multiplier circuits. A first multiplier circuit and a second multiplier circuit are configured to operate in parallel, such that at the same stage in the processing pipeline, the first multiplier circuit and the second multiplier circuit perform their processing. A third multiplier circuit is arranged in series with the first multiplier circuit and the second multiplier circuit and processes outputs from the first multiplier circuit and the second multiplier circuit. | 2020-09-17 |
20200293331 | SYSTEMS AND METHODS FOR SIMULATION OF DYNAMIC SYSTEMS - A highly parallelized parallel tempering technique for simulating dynamic systems, such as quantum processors, is provided. Replica exchange is facilitated by synchronizing grid-level memory. Particular implementations for simulating quantum processors by representing cells of qubits and couplers in grid-, block-, and thread-level memory are discussed. Parallel tempering of such dynamic systems can be assisted by modifying replicas based on isoenergetic cluster moves (ICMs). ICMs are generated via secondary replicas which are maintained alongside primary replicas and exchanged between blocks and/or generated dynamically by blocks without necessarily being exchanged. Certain refinements, such as exchanging energies and temperatures through grid-level memory, are also discussed. | 2020-09-17 |
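The replica-exchange step at the heart of parallel tempering can be sketched as follows; the `beta`/`state`/`energy` field names and the Metropolis swap criterion are illustrative assumptions, since the abstract does not spell out the exchange rule:

```python
import math
import random

def maybe_exchange(replicas, rng=random.random):
    """One sweep of replica exchange: attempt to swap each adjacent pair
    of replicas across neighboring temperatures. `replicas` is a list of
    dicts with keys 'beta' (inverse temperature), 'state', and 'energy';
    the field names are illustrative, not taken from the patent."""
    for i in range(len(replicas) - 1):
        a, b = replicas[i], replicas[i + 1]
        # Metropolis criterion: accept the swap with probability
        # min(1, exp((beta_a - beta_b) * (E_a - E_b))).
        delta = (a["beta"] - b["beta"]) * (a["energy"] - b["energy"])
        if delta >= 0 or rng() < math.exp(delta):
            a["state"], b["state"] = b["state"], a["state"]
            a["energy"], b["energy"] = b["energy"], a["energy"]
    return replicas
```

Passing a deterministic `rng` makes the swap decision reproducible, which is how a sketch like this can be exercised without real sampling.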
20200293332 | METHOD FOR VECTORIZING HEAPSORT USING HORIZONTAL AGGREGATION SIMD INSTRUCTIONS - Techniques are provided for vectorizing Heapsort. A K-heap is used as the underlying data structure for indexing values being sorted. The K-heap is vectorized by storing values in a contiguous memory array containing a beginning-most side and end-most side. The vectorized Heapsort utilizes horizontal aggregation SIMD instructions for comparisons, shuffling, and moving data. Thus, the number of comparisons required in order to find the maximum or minimum key value within a single node of the K-heap is reduced resulting in faster retrieval operations. | 2020-09-17 |
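A scalar sketch of the K-heap layout the abstract describes, with Python's `min()` standing in for the horizontal-aggregation SIMD reduction over one node's keys (the node width `k=4` is an illustrative assumption):

```python
def sift_down(heap, i, k=4):
    """Restore the heap property below slot i of a K-ary min-heap stored
    in one contiguous list (children of slot i live at k*i+1 .. k*i+k)."""
    n = len(heap)
    while True:
        first = k * i + 1
        if first >= n:
            return
        children = range(first, min(first + k, n))
        # SIMD analogue: find the index of the minimum key in the node
        # with one horizontal reduction instead of k-1 scalar compares.
        c = min(children, key=heap.__getitem__)
        if heap[c] >= heap[i]:
            return
        heap[i], heap[c] = heap[c], heap[i]
        i = c

def heapsort(values, k=4):
    """Sort ascending by heapifying into a K-ary min-heap, then
    repeatedly popping the root."""
    heap = list(values)
    for i in range(len(heap) // k, -1, -1):   # heapify bottom-up
        sift_down(heap, i, k)
    out = []
    while heap:
        out.append(heap[0])                   # pop the minimum
        heap[0] = heap[-1]
        heap.pop()
        sift_down(heap, 0, k)
    return out
```

The contiguous array is what makes the node-local reduction vectorizable: all k child keys sit side by side in memory, so one SIMD load plus a horizontal min covers the whole node.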
20200293333 | ELECTRONIC DEVICE, APPLICATION EXECUTION SYSTEM, AND CONTROL METHOD THEREFOR - Disclosed are an electronic apparatus, an application executing system, and control methods thereof, in which a process of parsing a source code of an application and processes of generating execution data of the application and implementing a follow-up measure about execution of the generated execution data are separately performed when the application is executed, thereby executing an application without using parts of a high specification and/or a java virtual machine (JVM). The electronic apparatus includes a communicator configured to communicate with a plurality of external electronic apparatuses; and a processor configured to receive first parsing data, which corresponds to an event that occurs in a first electronic apparatus, of parsing data of an application stored in the first electronic apparatus from the first electronic apparatus among the plurality of external electronic apparatuses, generate first execution data corresponding to the received first parsing data, and perform a follow-up measure about execution of the generated first execution data. | 2020-09-17 |
20200293334 | CONFIGURABLE OPTION ROM - An example apparatus can include a host device and an apparatus including a memory device and a controller coupled to the memory device, wherein the host device is configured to send a command to read an image to configure the host to boot from the memory device to the controller and wherein a base address register is configured to receive the command, indicate the size of the image, and redirect the command to a first image in memory using a first register that indicates a size of the first image and a second register that indicates a location of the first image. | 2020-09-17 |
20200293335 | CONTAINER-BASED LANGUAGE RUNTIME LOADING AN ISOLATED METHOD - Embodiments include a code loader method for loading attributes corresponding to an isolated method by a container-based language runtime. The attributes are received by the container-based language runtime without any specified container for storage of the isolated method attributes. The attributes are received as parameters of the code loader method and include instructions, live objects, and parameter types corresponding to the isolated method. The container-based language runtime selects a first-order container for storing the attributes of the isolated method. | 2020-09-17 |
20200293336 | SERVER AND METHOD OF REPLACING A SERVER IN A NETWORK - A method of replacing an original server | 2020-09-17 |
20200293337 | SELF-SERVICE ORCHESTRATION PLATFORM - Techniques for self-service orchestration are disclosed. A system deploys instances of a self-service orchestration agent to tenant-specific software-as-a-service (SaaS) environments operating in a multi-tenant SaaS environment, without reconfiguring existing software in the tenant-specific SaaS environments. Each self-service orchestration agent includes functionality to configure one or more components. Each tenant-specific SaaS environment includes a dedicated set of software operating on a dedicated logical partition of hardware infrastructure. The system receives, via a self-service orchestration interface, a request to configure a component across the tenant-specific SaaS environments. The system: transmits the request to a self-service orchestration module that is operating in the multi-tenant SaaS environment and configured to communicate with each instance of the self-service orchestration agent; dispatches, by the self-service orchestration module, the request to each instance of the self-service orchestration agent; configures, by each of the self-service orchestration agents in a corresponding tenant-specific SaaS environment, the component. | 2020-09-17 |
20200293338 | SHARED DISK DRIVE COMPONENT SYSTEM - A server box embodiment is disclosed that generally comprises an array of dummy HDDs that share a common set of universal disk drive components in a master components module, or power module. Each dummy HDD is constructed without expensive onboard chipsets that control the normal functionality of a standard HDD. By sharing expensive chipsets in a master components module (power module), money can be saved in building and selling the dummy HDD server. Embodiments envision a power module possessing the needed chipset functionality that is missing in a dummy HDD. The power module can be made to move from dummy HDD to dummy HDD, supplying the necessary chipset in a shared manner when data is being stored or retrieved for a client or end-user. | 2020-09-17 |
20200293339 | NETWORK ADDRESS MANAGEMENT SYSTEMS AND METHODS - Systems and methods provide for a network address management system for generating consistent network addresses to computing resources and for developing applications that are resilient to changes in the network addresses to those resources. In an embodiment, a consumer application executing on a computing system can receive a network address schema for a provider application via a library. The library may include a function for constructing a network address to the provider application. The consumer application can invoke the function to begin building the network address. The computing system/library extracts context information at the time the consumer application invokes the build function, augments the context information using a selected application namespace (e.g., network address patterns and rules), and generates the network address using the augmented context information, patterns, and rules. | 2020-09-17 |
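The build step can be sketched as filling a provider-published pattern with caller context merged over namespace defaults; the `{placeholder}` pattern syntax and the field names are assumptions for illustration, not the patent's actual schema format:

```python
def build_address(pattern, context, defaults=None):
    """Construct a network address by filling a provider-published
    pattern with caller context. Defaults come from the selected
    application namespace; explicit context overrides them."""
    merged = dict(defaults or {})
    merged.update(context)       # caller context wins over defaults
    return pattern.format(**merged)
```

Because consumers hold only the pattern, the provider can move hosts or ports by updating the namespace defaults without touching consumer code, which is the resilience property the abstract claims.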
20200293340 | METHOD AND SYSTEM FOR DECLARATIVE CONFIGURATION OF USER SELF-REGISTRATION PAGES AND PROCESSES FOR A SERVICE PROVIDER AND AUTOMATIC DEPLOYMENT OF THE SAME - Methods and systems are provided for declaratively configuring a user self-registration process and user self-registration pages for a particular service provider. A graphical user interface is displayed that includes a plurality of options for declaratively configuring different user self-registration processes and corresponding user self-registration pages for the particular service provider. One of the options can be selected, and a type of identifier and a type of verification process can be specified from different types for each. The type of identifier is associated with a user to be verified as part of the user self-registration process, and can be specified to define how the user is identified and looked up during the user self-registration process. The type of verification process can define how the user will be verified as part of the user self-registration process. | 2020-09-17 |
20200293341 | INTEGRATING RELATED THIRD-PARTY SERVICES FOR USER INTERACTION - Disclosed are various approaches for connecting third-party services for user interaction. An integration service can receive from a client device a content query including a selection of content by a user interacting with a user interface on the client device. The integration service can compare the content query with predefined connector data to identify a connector associated with the content query. The integration service can send the content query and an authentication token of the user to the connector to access information from a third-party service. In response to receiving the information from the third-party service, the integration service can provide the information to the client device. | 2020-09-17 |
20200293342 | GENERATING CONTENT OBJECTS USING AN INTEGRATED DEVELOPMENT ENVIRONMENT - Disclosed are examples of systems, apparatus, methods, and computer program products for generating content objects using an integrated development environment. In some implementations, an integrated development environment is displayed. A request to generate or update an educational content object can be processed, and a presentation including metadata selection options can be provided. After a selection of one of the metadata selection options is received, the presentation can be updated. A different request is received from a rich text editor provided in a new presentation, and the new presentation can be updated. | 2020-09-17 |
20200293343 | COMPUTER ARCHITECTURE FOR EMULATING CODING IN A CORRELITHM OBJECT PROCESSING SYSTEM - A device configured to emulate a correlithm object processing system includes sensors coupled to a node. A first sensor receives a first sample text string comprising a plurality of characters and assigns correlithm objects to corresponding subsets of the plurality of characters of the first sample text string. A second sensor receives a second sample text string comprising a plurality of characters and assigns correlithm objects to corresponding subsets of the plurality of characters of the second sample text string. A third sensor receives a test text string comprising a plurality of characters and assigns correlithm objects to corresponding subsets of the plurality of characters of the test text string. The node determines which of the first and second sample text strings is the closest match to the test text string by determining which is closer to the test text string in n-dimensional space using the correlithm objects. | 2020-09-17 |
20200293344 | METHOD FOR RECIPROCALLY INTEGRATING APPLICATIONS, IN PARTICULAR WEB APPLICATIONS - A method for operating at least a container application and a component application. The container application is an application hosting the component application. The method includes the following: providing a library of code files, a bootloader and further executable code files, wherein the component application sends a probing message to the container application, and the container application creates a response message containing references to code files to be loaded by the component application. | 2020-09-17 |
20200293345 | ACCELERATION MANAGEMENT NODE, ACCELERATION NODE, CLIENT, AND METHOD - Embodiments of the present application provide an acceleration management node. The acceleration management node separately receives acceleration device information of all acceleration devices. The acceleration device information includes an acceleration type and an algorithm type. The acceleration management node obtains an invocation request from a client. The invocation request is used to invoke an acceleration device to accelerate a service of the client, and the invocation request includes a target acceleration type and a target algorithm type. The acceleration management node queries the acceleration device information to determine, from all the acceleration devices of the at least one acceleration node, a target acceleration device matching the invocation request. The acceleration management node further instructs a target acceleration node to respond to the invocation request. | 2020-09-17 |
20200293346 | SYSTEM AND METHOD FOR EXECUTING DIFFERENT TYPES OF BLOCKCHAIN CONTRACTS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for executing blockchain contracts are provided. One of the methods includes: obtaining a bytecode of a blockchain contract, wherein the bytecode comprises one or more indicators, and the one or more indicators comprise a first indicator indicating a virtual machine type for executing the blockchain contract; and executing the blockchain contract using a virtual machine of the virtual machine type associated with the first indicator. | 2020-09-17 |
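The indicator-driven dispatch can be sketched as follows, assuming (purely for illustration) that the first byte of the bytecode names the virtual machine type and the remainder is the contract body:

```python
def dispatch_contract(bytecode, vms):
    """Execute a contract on the VM selected by an indicator in its
    bytecode. The one-byte indicator encoding and the dict-of-callables
    VM registry are illustrative assumptions, not the patent's format."""
    vm_type, body = bytecode[0], bytecode[1:]
    return vms[vm_type](body)    # hand the contract body to that VM
```

Keeping the indicator inside the bytecode lets a single node host heterogeneous contract runtimes and route each contract without out-of-band metadata.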
20200293347 | USER PERSISTENCE DATA MOVED BETWEEN INDIVIDUAL COMPUTE ENVIRONMENTS AND SESSION HOST ENVIRONMENTS - A virtual server includes one or more processors to determine a user layer from a user's personalization container, with the user layer associated with a source operating system computing environment and configured to store modifications to file system objects and registry objects made by the user within the source operating system computing environment. A snapshot of differences between a source operating system layer within the source operating system computing environment and a target operating system layer within a target operating system computing environment is determined. The user layer is modified based on a migration policy so that the file system objects and registry objects are compatible with the target operating system computing environment. | 2020-09-17 |
20200293348 | CROSS-CUSTOMER WEB APP ANALYTICS - A computing system includes virtualization servers running virtual machine sessions, and client computing devices grouped by respective enterprises. Each client computing device is operated by an end-user to access an application via a virtual desktop during one of the virtual machine sessions. An analytics server is coupled to the virtualization servers, and collects application usage parameters provided for each client computing device accessing the application during one of the virtual machine sessions, and analyzes the application usage parameters to determine application performance of the application across the client computing devices for each respective enterprise. Client computing devices having slower application performances as compared to application performances of other client computing devices are identified by the analytics server. One of the virtualization servers is instructed to re-provision the virtual hardware for the client computing devices having slower application performances so that application performances are increased. | 2020-09-17 |
20200293349 | OPEN INTERFACE MANAGEMENT OF VIRTUAL AGENT NODES - Cognitive software and/or machine learning software are monitored in a cognitive computing environment. Open interface management of virtual agent nodes is performed in the cognitive computing environment. | 2020-09-17 |
20200293350 | APPARATUS FOR FORWARDING A MEDIATED REQUEST TO PROCESSING CIRCUITRY IN RESPONSE TO A CONFIGURATION REQUEST - An apparatus, method and computer program are described, the apparatus comprising processing circuitry configured to execute software, and an interface configured to receive, from the processing circuitry, a configuration request from first software requesting configuration of a virtualised device. In response to the configuration request, the interface is configured to forward a mediated request to the processing circuitry, and the mediated request comprises a request that second software having a higher privilege level than the first software determines a response to the configuration request received from the first software. | 2020-09-17 |
20200293351 | REMOTE VIRTUAL MACHINE CONSOLE ACCESS WITH PERSISTENT AND SEAMLESS CLIENT CONNECTIONS DURING VIRTUAL MACHINE MIGRATION - A system of computers in network communication where: (i) an end user's computer accesses an instantiation of a virtual machine using remote console software; (ii) the access to the VM is performed through a proxy server; and (iii) by using the proxy server, when the VM instantiation is moved from one physical host computer to another physical host computer, there is no substantial interruption in the access of the VM by the end user through the remote console software. | 2020-09-17 |
20200293352 | REMOTE VIRTUAL MACHINE CONSOLE ACCESS WITH PERSISTENT AND SEAMLESS CLIENT CONNECTIONS DURING VIRTUAL MACHINE MIGRATION - A system of computers in network communication where: (i) an end user's computer accesses an instantiation of a virtual machine using remote console software; (ii) the access to the VM is performed through a proxy server; and (iii) by using the proxy server, when the VM instantiation is moved from one physical host computer to another physical host computer, there is no substantial interruption in the access of the VM by the end user through the remote console software. | 2020-09-17 |
20200293353 | PROCESSING APPARATUS, PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT - According to an embodiment, a processing apparatus includes a memory and a processor coupled to the memory. The processor is configured to: execute data access that is at least one of data writing to the memory and data reading from the memory; receive access control information for controlling timing of the data access to be executed; and control the timing of the data access based on the received access control information. | 2020-09-17 |
20200293354 | CONTAINER DOCKERFILE AND CONTAINER MIRROR IMAGE QUICK GENERATION METHODS AND SYSTEMS - The invention discloses quick generation methods and systems for a container Dockerfile and a container mirror image. The container Dockerfile quick generation method includes the steps of: for a to-be-packaged target application, running the target application under tracking execution and recording the operating system dependencies of the target application in the running process; organizing and constructing a file list required for packaging the target application into a container mirror image; and, according to the file list, generating a Dockerfile and a container mirror image file creation directory used for packaging the target application into the container mirror image. Any target application can be automatically packaged by the invention into a container; the construction of an executable minimal environmental closure of the target application is achieved; and the packaged container is smaller than a manually made container. | 2020-09-17 |
20200293355 | PLATFORM INDEPENDENT GPU PROFILES FOR MORE EFFICIENT UTILIZATION OF GPU RESOURCES - Disclosed are various examples for platform independent graphics processing unit (GPU) profiles for more efficient utilization of GPU resources. A virtual machine configuration can be identified to include a platform independent graphics computing requirement. Hosts can be identified as available in a computing environment based on the platform independent graphics computing requirement. The virtual machine can be placed on a host based on a consideration of host priority. | 2020-09-17 |
20200293356 | METHOD AND A SYSTEM FOR OPTIMISING VIRTUAL MACHINE CLUSTERS OF A CLOUD COMPUTING PLATFORM - A method and a system are provided for optimising Virtual Machine (VM) instances. | 2020-09-17 |
20200293357 | Vehicle System, Vehicle And Method For Operating Such A Vehicle System - A vehicle system having: a hardware level, a first operating system, and a virtual machine integrated on the hardware level having a second operating system. A hypervisor operates the virtual machine such that the first and the second operating systems are operated in parallel on the hardware. A first application is executed on the first operating system and a second application is executed on the second operating system. The first application has a higher safety standard than the second application. The second operating system is configured to be operated in suspend-to-RAM mode while the first operating system is switched off. | 2020-09-17 |
20200293358 | COMPUTING NODE IDENTIFIER-BASED REQUEST ALLOCATION - Computing node identifiers can be used to encode information regarding the distance between requesting and available computing nodes. Computing node identifiers can be computed based on proximity values for respective computing nodes. Requests can be directed from one computing node to an available computing node based on information encoded by both the computing node identifiers of the requesting node and the receiving node. Using these computing node identifiers to direct request traffic among VMs can more efficiently leverage network resources. | 2020-09-17 |
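One way such identifiers could encode distance is as hierarchical coordinate tuples, where a longer shared prefix means topologically closer nodes; the tuple encoding is an illustrative assumption, not the patent's actual scheme:

```python
def shared_prefix(a, b):
    """Length of the common prefix of two hierarchical node identifiers,
    e.g. (datacenter, rack, host). More shared levels = closer nodes."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def pick_target(requester_id, available_ids):
    """Direct a request to the available node whose identifier shares
    the longest prefix with the requesting node's identifier."""
    return max(available_ids, key=lambda nid: shared_prefix(requester_id, nid))
```

The appeal of this style of encoding is that routing needs only the two identifiers, no separate topology lookup.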
20200293359 | System and Method for Dynamic Virtualized Network Function Descriptor Management - A Virtual Network Function Descriptor (VNFD) parameter may include subfields that allow a management entity to determine whether the VNFD parameter can be updated. The subfields may include a write-ability subfield that indicates whether the VNFD parameter is a dynamic/configurable VNFD parameter or a fixed/static VNFD parameter. The VNFD parameter may also include an access permission subfield that indicates which entities are authorized to modify/update the VNFD parameter. The VNFD parameter may also include an administrative priority subfield that indicates a priority of an entity that set an attribute of the VNFD parameter. The VNFD parameter may also include a constraints subfield that indicates one or more conditions that are required to occur in order for the VNFD parameter to be updated. | 2020-09-17 |
20200293360 | TECHNIQUES TO MANAGE VIRTUAL CLASSES FOR STATISTICAL TESTS - Techniques to manage virtual classes for statistical tests are described. An apparatus may comprise a simulated data component to generate simulated data for a statistical test, statistics of the statistical test based on parameter vectors to follow a probability distribution, a statistic simulator component to simulate statistics for the parameter vectors from the simulated data with a distributed computing system comprising multiple nodes each having one or more processors capable of executing multiple threads, the simulation to occur by distribution of portions of the simulated data across the multiple nodes of the distributed computing system, and a distributed control engine to control task execution on the distributed portions of the simulated data on each node of the distributed computing system with a virtual software class arranged to coordinate task and sub-task operations across the nodes of the distributed computing system. Other embodiments are described and claimed. | 2020-09-17 |
20200293361 | METHOD AND DISTRIBUTED DATABASE SYSTEM FOR COMPUTER-AIDED EXECUTION OF A PROGRAM CODE - A method by means of which it is possible, for example, to react to an incorrectly programmed smart contract of a block chain and to cancel it if required, is provided. Furthermore, conventional operating systems and cloud-based operating systems can also be enhanced to improve the execution of a smart contract/(first) program code. As a result, processes, in particular block chain smart contracts, operating system processes, and cloud applications, are significantly better carried out and controlled. | 2020-09-17 |
20200293362 | DROPPING AN INDEX WITHOUT BLOCKING LOCKS - Techniques for processing “destructive” database statements are provided. Destructive database statements, when processed, cause metadata of a database object to be changed. Examples of such database statements include ones that delete an index, that set a column as unused, and that drop a constraint on a column. When such a statement is received, a change is made to metadata of a database object. Such a metadata change may involve setting an index as unusable, disabling a constraint, or invalidating a cursor. After the metadata change, a first time is determined. Then, it is determined when one or more database transactions that were pending at the first time have committed. After those database transaction(s) have committed, one or more operations are performed, such as dropping an index or dropping a constraint. | 2020-09-17 |
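The two-phase sequence in the abstract (metadata change first, physical drop only after the transactions that were in flight at that moment have committed) can be sketched as below; `catalog` and `txn_manager` are hypothetical interfaces invented for the sketch, not a real database API:

```python
import time

def drop_index_without_blocking(catalog, txn_manager, index_name):
    """Sketch of a non-blocking drop: flip the metadata so new
    transactions stop using the index, then physically drop it only
    after every transaction pending at that point has committed."""
    catalog.set_index_unusable(index_name)   # metadata-only change
    first_time = time.monotonic()            # the "first time" in the abstract
    pending = txn_manager.transactions_pending_at(first_time)
    txn_manager.wait_for_commit(pending)     # wait without holding locks
    catalog.drop_index(index_name)           # safe: no one can still use it
```

The point of the ordering is that the session never takes a lock that would block concurrent DML; it merely waits for a known, finite set of transactions to drain.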
20200293363 | HARDWARE CO-ORDINATION OF RESOURCE MANAGEMENT IN DISTRIBUTED SYSTEMS - Systems and methods are directed to methods and apparatus for transferring ownership of common resources from a source entity, which owns a resource, to a destination entity, which will own the resource, in a distributed system. The method includes the source entity receiving a command to change ownership (the MOVE command), and then marking the source entity as no longer owning the common resource. The source entity then sends a MOVE command to the destination entity, which will then update its common resource ownership table to reflect that the ownership of the common resource has been transferred from the source entity to the destination entity. It is advantageous that the updating of ownership of the common resource in the source entity occur simultaneously with the dispatching of the MOVE command to the destination entity. | 2020-09-17 |
20200293364 | Management of Unmanaged User Accounts and Tasks in a Multi-Account Mobile Application - Methods, systems, computer-readable media, and apparatuses for providing mobile application management (MAM) functionalities are presented. In some embodiments, a mobile device may initialize a partially managed application associated with a first managed user account and an unmanaged user account. The mobile device may execute first managed tasks associated with the first managed user account in accordance with a first set of MAM policies provided by a first MAM service provider. The mobile device may execute unmanaged tasks associated with the unmanaged account independent of the first set of MAM policies. In some embodiments, the mobile device may initialize the multi-account managed application associated with a second managed user account. | 2020-09-17 |
20200293365 | TRANSACTIONAL PAGE FAULT HANDLING - Methods and apparatus relating to transactional page fault handling. In an example, an apparatus comprises a processor to divide an execution thread of a graphics workload into a set of transactions which are to be executed atomically, initiate the execution of the thread, and manage the execution of the thread according to one of a first protocol in response to a determination that a page fault occurred in the execution of a transaction, or a second protocol in response to a determination that a page fault did not occur in the execution of a transaction. Other embodiments are also disclosed and claimed. | 2020-09-17 |
20200293366 | PROCESSOR ZERO OVERHEAD TASK SCHEDULING - A method for scheduling tasks on a processor includes detecting, in a task selection device communicatively coupled to the processor, a condition of each of a plurality of components of a computer system comprising the processor, determining a plurality of tasks that can be next executed on the processor based on the condition of each of the plurality of components, transmitting a signal to an arbiter of the task selection device that the plurality of tasks can be executed, determining, at the arbiter, a next task to be executed on the processor, storing, by the task selection device, the entry point address of the next task to be executed on the processor, and transferring, by the processor, execution to the stored entry point address of the next task to be executed. | 2020-09-17 |
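The selection logic can be sketched as a readiness filter over component conditions followed by a fixed-priority arbiter; the task fields (`ready`, `priority`, `entry_point`) and the priority rule are illustrative assumptions:

```python
def select_next_task(tasks, system_state):
    """Sketch of the task-selection device: each task carries a
    readiness predicate over component conditions and an entry-point
    address; the arbiter picks the highest-priority ready task
    (lower number = higher priority) and returns its entry point."""
    ready = [t for t in tasks if t["ready"](system_state)]
    if not ready:
        return None
    winner = min(ready, key=lambda t: t["priority"])
    return winner["entry_point"]
```

Because the device stores the winner's entry-point address ahead of time, the processor can jump straight to it when the current task yields, which is where the "zero overhead" in the title comes from.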
20200293367 | LOCAL MEMORY SHARING BETWEEN KERNELS - One embodiment provides for a general-purpose graphics processing unit comprising a set of processing elements to execute one or more thread groups of a second kernel to be executed by the general-purpose graphics processor, an on-chip memory coupled to the set of processing elements, and a scheduler coupled with the set of processing elements, the scheduler to schedule the thread groups of the kernel to the set of processing elements, wherein the scheduler is to schedule a thread group of the second kernel to execute subsequent to a thread group of a first kernel, the thread group of the second kernel configured to access a region of the on-chip memory that contains data written by the thread group of the first kernel in response to a determination that the second kernel is dependent upon the first kernel. | 2020-09-17 |
20200293368 | SYSTEMS AND METHODS FOR SYNCHRONIZATION OF MULTI-THREAD LANES - Apparatuses to synchronize lanes that diverge or threads that drift are disclosed. In one embodiment, a graphics multiprocessor includes a queue having an initial state of groups with a first group having threads of first and second instruction types and a second group having threads of the first and second instruction types. A regroup engine (or regroup circuitry) regroups threads into a third group having threads of the first instruction type and a fourth group having threads of the second instruction type. | 2020-09-17 |
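The regrouping can be sketched as bucketing threads by instruction type, so each resulting group executes a single instruction type without divergence; representing a thread as a `(thread_id, instruction_type)` pair is an illustrative assumption:

```python
from collections import defaultdict

def regroup(groups):
    """Turn mixed groups (e.g. the abstract's first and second groups,
    each holding threads of two instruction types) into uniform groups
    (the third and fourth groups, one instruction type each)."""
    by_type = defaultdict(list)
    for group in groups:
        for thread in group:
            by_type[thread[1]].append(thread)   # bucket by instruction type
    # One lane-convergent group per instruction type.
    return [by_type[t] for t in sorted(by_type)]
```

After regrouping, every lane in a group issues the same instruction type, so no lanes sit masked off waiting out the other type's path.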
20200293369 | GRAPHICS SYSTEMS AND METHODS FOR ACCELERATING SYNCHRONIZATION USING FINE GRAIN DEPENDENCY CHECK AND SCHEDULING OPTIMIZATIONS BASED ON AVAILABLE SHARED MEMORY SPACE - Accelerated synchronization operations using fine grain dependency check are disclosed. A graphics multiprocessor includes a plurality of execution units and synchronization circuitry that is configured to determine availability of at least one execution unit. The synchronization circuitry to perform a fine grain dependency check of availability of dependent data or operands in shared local memory or cache when at least one execution unit is available. | 2020-09-17 |
20200293370 | USECASE SPECIFICATION AND RUNTIME EXECUTION TO SERVE ON-DEMAND QUERIES AND DYNAMICALLY SCALE RESOURCES - A computer-implemented method includes obtaining a usecase specification and a usecase runtime specification corresponding to the usecase. The usecase includes a plurality of applications each being associated with a micro-service providing a corresponding functionality within the usecase for performing a task. The method further includes managing execution of the usecase within a runtime system based on the usecase and usecase runtime specifications to perform the task by serving an on-demand query and dynamically scaling resources based on the on-demand query, including using a batch helper server to employ the usecase specification to load dynamic application instances and connect the dynamic application instances to existing instances, and employ a batch helper configuration to load nodes/machines for execution of the on-demand query. | 2020-09-17 |
20200293371 | USECASE SPECIFICATION AND RUNTIME EXECUTION - A computer-implemented method includes obtaining a usecase specification and a usecase runtime specification corresponding to the usecase. The usecase includes a plurality of applications each being associated with a micro-service providing a corresponding functionality within the usecase for performing a task. The method further includes determining that at least one instance of the at least one of the plurality of applications can be reused during execution of the usecase based on the usecase specification and the usecase runtime specification, and reusing the at least one instance during execution of the usecase. | 2020-09-17 |
20200293372 | EFFICIENT RESOURCE ALLOCATION FOR CONCURRENT GRAPH WORKLOADS - Techniques are described herein for allocating and rebalancing computing resources for executing graph workloads in a manner that increases system throughput. According to one embodiment, a method includes receiving a request to execute a graph processing workload on a dataset, identifying a plurality of graph operators that constitute the graph processing workload, and determining whether execution of each graph operator is processor intensive or memory intensive. The method also includes assigning a task weight for each graph operator of the plurality of graph operators, and performing, based on the assigned task weights, a first allocation of computing resources to execute the plurality of graph operators. Further, the method includes causing, according to the first allocation, execution of the plurality of graph operators by the computing resources, and monitoring computing resource usage of graph operators executed by the computing resources according to the first allocation. In addition, the method includes performing, responsive to monitoring computing resource usage, a second allocation of computing resources to execute the plurality of graph operators, and causing, according to the second allocation instead of according to the first allocation, execution of the plurality of graph operators by the computing resources. | 2020-09-17 |
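The weight-based first allocation in 20200293372 amounts to dividing a resource pool proportionally to each operator's task weight. A minimal Python sketch, assuming made-up operator names and weights (the patent does not specify any):

```python
def allocate(operators, total_threads):
    """Assign threads proportionally to each operator's task weight,
    guaranteeing at least one thread per operator."""
    total_weight = sum(w for _, w in operators)
    return {
        name: max(1, round(total_threads * weight / total_weight))
        for name, weight in operators
    }

# Hypothetical operators with task weights reflecting how heavy each one is.
ops = [("pagerank", 4), ("bfs", 1), ("triangle_count", 3)]
result = allocate(ops, 16)
# {'pagerank': 8, 'bfs': 2, 'triangle_count': 6}
```

The abstract's second allocation would simply re-run this with weights adjusted from monitored usage.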
20200293373 | COMPUTER-READABLE RECORDING MEDIUM STORING TRANSFER PROGRAM, TRANSFER METHOD, AND TRANSFERRING DEVICE - A transfer method is performed by an information processing apparatus. The method includes: selecting, based on a load status of the information processing apparatus, candidate transfer data that is among the received data and to be transferred to one or more other information processing apparatuses; selecting, based on load statuses of multiple other information processing apparatuses, one or more candidate transfer destination apparatuses among the multiple other information processing apparatuses as candidate transfer destinations of the data; determining, based on throughput between the information processing apparatus and the candidate transfer destination apparatuses, data to be transferred among the candidate transfer data, transfer destination apparatuses of the data to be transferred among the candidate transfer destination apparatuses, and the sizes of data groups including the data to be transferred; and transferring, to the transfer destination apparatuses determined for the determined data groups, the determined data to be transferred. | 2020-09-17 |
20200293374 | METHOD AND SYSTEM FOR PRIVACY ENABLED TASK ALLOCATION - Data is an asset to any organization, and any breach to the data during task allocation to agents may lead to serious damage to organizations, including loss of consumer confidence, trust, reputation, financial penalties and the like. Conventional methods mainly focus on allocating tasks to agents based on user satisfaction, overall throughput, and revenue maximization, with less focus given to data privacy. The present subject matter overcomes the limitations of conventional task allocation methods by utilizing a dynamic data exposure analysis method, which enables seamless upgrading of the data access policy and/or control. Here, data exposure is monitored based on a data exposure score, dynamic identification of conflicting tasks, and a dynamic privacy budget. The data exposure score is calculated at two execution points. Finally, all the values are updated in the system for use in further privacy-enabled task allocation. | 2020-09-17 |
20200293375 | DATA STORAGE RESOURCE MANAGEMENT - A resource management system in a data center includes one or more data storage resource providers and a transaction server. The transaction server is configured to receive, from a client, a request for read and/or write access for a data storage resource, the request comprising one or more specifications, to provide, to the one or more data storage resource providers, at least a portion of the request, and to receive, from the one or more data storage resource providers, respective responses to the request, the responses respectively comprising one or more allocation options. The transaction server is further configured to select one of the one or more allocation options for registration, and register the selected allocation option with a data manager. At least one of the one or more data storage providers is configured to provide the data storage resource in accordance with the registered allocation option. | 2020-09-17 |
20200293376 | DATABASE PROCESS CATEGORIZATION - Described is a system, method, and computer program product to perform monitoring for process-based OS resource utilization by individual database instances in a multi-database environment. This approach may be used to resolve numerous resource allocation and monitoring problems, such as the noisy neighbor problem. | 2020-09-17 |
20200293377 | VERIFYING THE CORRECTNESS OF A DEFLATE COMPRESSION ACCELERATOR - Embodiments of the invention are directed to a DEFLATE compression accelerator and to a method for verifying the correctness of the DEFLATE compression accelerator. The accelerator includes an input buffer and a Lempel-Ziv 77 (LZ77) compressor communicatively coupled to an output of the input buffer. A switch is communicatively coupled to the output of the input buffer and to the output of the LZ77 compressor. The switch is configured to bypass the LZ77 compressor during a compression test. The accelerator further includes a deflate Huffman encoder communicatively coupled to an output of the switch and an output buffer communicatively coupled to the deflate Huffman encoder. When the switch is not bypassed, the compressor can be modified to produce repeatable results. | 2020-09-17 |
20200293378 | DATA TRANSFORMATION CACHING IN AN ARTIFICIAL INTELLIGENCE INFRASTRUCTURE - Data transformation caching in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: identifying, in dependence upon one or more machine learning models to be executed on the GPU servers, one or more transformations to apply to a dataset; generating, in dependence upon the one or more transformations, a transformed dataset; storing, within one or more of the storage systems, the transformed dataset; receiving a plurality of requests to transmit the transformed dataset to one or more of the GPU servers; and responsive to each request, transmitting, from the one or more storage systems to the one or more GPU servers without re-performing the one or more transformations on the dataset, the transformed dataset. | 2020-09-17 |
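The caching flow in 20200293378 — transform once, store the result, serve every later GPU-server request from storage — can be sketched in a few lines of Python. The in-memory dict stands in for the storage systems, and the `normalize` transform is an invented example; neither comes from the filing.

```python
transform_runs = 0   # counts how many times the transformation actually ran
_store = {}          # stand-in for the storage system holding transformed data

def get_transformed(dataset_id, data, transform):
    """Return the transformed dataset, running the transform only on a miss."""
    global transform_runs
    key = (dataset_id, transform.__name__)
    if key not in _store:
        transform_runs += 1
        _store[key] = transform(data)
    return _store[key]

def normalize(xs):
    """Illustrative transformation: min-max scale to [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

a = get_transformed("train", [2, 4, 6], normalize)
b = get_transformed("train", [2, 4, 6], normalize)  # served from cache
```

The second call transmits the stored result without re-performing the transformation, which is the bandwidth-saving property the abstract claims.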
20200293379 | CONVOLUTIONAL COMPUTING ACCELERATOR, CONVOLUTIONAL COMPUTING METHOD, AND COMPUTER-READABLE STORAGE MEDIUM - Embodiments of this application relate to a convolutional computing accelerator, a convolutional computing method, and a convolutional computing device, which belong to the technical field of electronic circuits. The convolutional computing accelerator includes: a controller, a computing matrix, and a first cache. The computing matrix comprises at least one row of computing units, each row comprising at least two adjacent, connected computing units. The controller is configured to control input data of each row of computing units to be loaded into the first cache, and to control the input data loaded into the first cache to be inputted into the two adjacent computing units in a corresponding row. Each of the computing units in the corresponding row is configured to perform, in a first clock cycle, a convolutional computation based on received input data and a pre-stored convolutional kernel. | 2020-09-17 |
20200293380 | THREAD GROUP SCHEDULING FOR GRAPHICS PROCESSING - Embodiments are generally directed to thread group scheduling for graphics processing. An embodiment of an apparatus includes a plurality of processors including a plurality of graphics processors to process data; a memory; and one or more caches for storage of data for the plurality of graphics processors, wherein the one or more processors are to schedule a plurality of groups of threads for processing by the plurality of graphics processors, the scheduling of the plurality of groups of threads including the plurality of processors to apply a bias for scheduling the plurality of groups of threads according to a cache locality for the one or more caches. | 2020-09-17 |
20200293381 | VIRTUAL GRAPH NODES - The described technology is directed towards returning less data than is available for a data item in response to a request to a data service. A virtual graph node is returned in response to client requests, in which the virtual node comprises a relatively lightweight set of information relative to the full set of information for the data item, e.g., maintained in a main (graph) node. A requesting client indicates that a virtual node is desired, and receives a response comprising the virtual node, generally processed from the main node's data into a reduced subset of the main node. The main node may be cached at the data service, and returned if and when requested. | 2020-09-17 |
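A virtual node, as described in 20200293381, is just a lightweight projection of the full node's data. A minimal Python sketch follows; the field names (`id`, `title`, `type`, etc.) are illustrative assumptions, not fields named in the filing.

```python
# Fields a virtual node carries; everything else stays in the main node.
VIRTUAL_FIELDS = ("id", "title", "type")

def to_virtual(main_node):
    """Project a full (main) node down to the small virtual-node field set."""
    return {k: main_node[k] for k in VIRTUAL_FIELDS if k in main_node}

main = {
    "id": 7, "title": "Movie", "type": "video",
    "cast": ["..."], "ratings": [5, 4], "description": "long text",
}
virtual = to_virtual(main)
# {'id': 7, 'title': 'Movie', 'type': 'video'} — enough for listings,
# while the heavier main node is fetched only if and when requested.
```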
20200293382 | DYNAMIC DISTRIBUTED WORK ALLOCATION - Dynamic distributed work allocation is disclosed. For example, a first work server (WS) stores a first plurality of tasks and a second WS stores a second plurality of tasks. A work client (WC) is configured to send a first lock request (LR) with a first priority value (PV) to the first WS and a second LR with a second PV to the second WS. The WC receives a first lock notice (LN) and a first task from the first WS, and a second LN and a second task from the second WS. Prior to a first lock duration (LD) expiring and completing processing of the first task, the WC sends a third LR to the first WS that extends the first LD. After completing the second task, the WC sends a lock release notice and a fourth LR to the second WS. | 2020-09-17 |
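The lock lifecycle in 20200293382 — request with a priority, receive a task under a lock duration, extend before expiry, release when done — can be modeled with a small Python class. This is a deliberately simplified sketch: time is an integer counter, priority is accepted but not used for ordering, and all names are assumptions.

```python
class WorkServer:
    """Hands out tasks under time-limited locks (simplified model)."""

    def __init__(self, tasks, lock_duration=5):
        self.tasks = list(tasks)
        self.lock_duration = lock_duration
        self.locks = {}  # client -> (task, expires_at)

    def request_lock(self, client, priority, now):
        # Priority-based arbitration between clients is omitted in this sketch.
        if not self.tasks:
            return None
        task = self.tasks.pop(0)
        self.locks[client] = (task, now + self.lock_duration)
        return task

    def extend_lock(self, client, now):
        """A further lock request before expiry extends the lock duration."""
        task, _ = self.locks[client]
        self.locks[client] = (task, now + self.lock_duration)

    def release(self, client):
        self.locks.pop(client, None)

ws = WorkServer(["task-a", "task-b"])
t = ws.request_lock("client-1", priority=1, now=0)  # granted 'task-a'
ws.extend_lock("client-1", now=4)                   # lock now expires at 9
ws.release("client-1")                              # done; lock released
```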
20200293383 | System and Method for Developing Modularized Application - A system and method for developing modularized applications are disclosed. In one preferred embodiment, a modularized application | 2020-09-17 |
20200293384 | SYSTEMS AND METHODS FOR MANAGING APPLICATION PROGRAMMING INTERFACE INFORMATION - Computerized systems and methods for managing API information. An exemplary method includes receiving an input from a user device associated with a first computer system, the input not including identity of a second computer system. The method includes determining a target API based on the input, the target API being the second computer system's API. The method also includes determining whether a user of the user device has access to the target API. The method includes retrieving documentation of the target API from an API database if it is determined that the user has access to the target API. The method includes providing the user device with the retrieved documentation. | 2020-09-17 |
20200293385 | INPUT OPERATION PROCESSING METHOD AND PROCESSING APPARATUS AND COMPUTER-READABLE STORAGE MEDIUM - The present invention provides an input operation processing method, a processing apparatus and a computer-readable storage medium. The input operation processing method is used for processing input operations received by a smart terminal and comprises the following steps: presetting mapping relationships between at least two input operations and input events in an application program; detecting whether an input event interface of the application program receives an optional input event or not; when the input event interface receives the optional input event, recognizing, by the application program, the input operation corresponding to the input event according to the mapping relationship; converting the recognized input event into an input event coexisting with other types of input events; and reporting the input event coexisting with the other types of input events. After the implementation of the technical solutions, the smart terminal can be operated by means of an external input device or by combining the external input device with a touch screen, such that the flexibility of the input operations can be improved and the user experience is enhanced. | 2020-09-17 |
20200293386 | MESSAGING ABSTRACTION LAYER FOR INTEGRATION WITH MESSAGE ORIENTED MIDDLEWARE PLATFORMS - An apparatus in one embodiment comprises at least one processing platform comprising a plurality of processing devices. The at least one processing platform is configured to provide a plurality of applications with centralized access to a plurality of message oriented middleware (MOM) servers via a connectivity layer, to establish a connection between a given one of the plurality of applications and a given one of the plurality of MOM servers via the connectivity layer, and to exchange data between the given one of the plurality of applications and the given one of the plurality of MOM servers via the connectivity layer. | 2020-09-17 |
20200293387 | METHOD AND APPARATUS FOR PEER-TO-PEER MESSAGING IN HETEROGENEOUS MACHINE CLUSTERS - Various computing network messaging techniques and apparatus are disclosed. In one aspect, a method of computing is provided that includes executing a first thread and a second thread. A message is sent from the first thread to the second thread. The message includes a domain descriptor that identifies a first location of the first thread and a second location of the second thread. | 2020-09-17 |
20200293388 | APPLICATION PROGRAMMING INTERFACE FOR WEB PAGE AND VISUALIZATION GENERATION - A method of hosting a single page application includes hosting, at an application programming interface (API) module of a server, the single page application as a first API operation by providing code to a client device to enable rendering of a page at the client device as a user interface presentation. | 2020-09-17 |
20200293389 | DEVICE APPLICATION SUPPORT - Various example embodiments for providing device application support are presented. In at least some example embodiments, device application support may be configured to support device programmability. In at least some example embodiments, device application support may be configured to support device programmability for enabling a customer that operates a device to develop a customer application for the device and to run the customer application on the device. In at least some example embodiments, device application support may be provided in a manner for enabling a customer to develop and run a customer application for a device without a need for the customer to use a software development kit (SDK) to develop the customer application. | 2020-09-17 |
20200293390 | SYSTEM INFORMATION TRANSMITTING METHOD AND APPARATUS, AND COMPUTER-READABLE STORAGE MEDIUM - A system information transmitting method and apparatus, and a computer-readable storage medium are provided. The method includes receiving, at a terminal, a system call instruction sent by a browser, acquiring, at the terminal, target system information according to the system call instruction, dividing, at the terminal, the target system information into at least one data segment, obtaining, at the terminal, encoded pseudo-touch event information by encoding the at least one data segment in the target system information according to a positional relationship of respective touch points in a simulated pseudo-touch event in a touch plane, and transmitting, from the terminal, as an input parameter of the target script, the encoded pseudo-touch event information to the target script through the browser. | 2020-09-17 |
20200293391 | CROSS-CORRELATION OF METRICS FOR ANOMALY ROOT CAUSE IDENTIFICATION - Technologies are disclosed herein for cross-correlating metrics for anomaly root cause detection. Primary and secondary metrics associated with an anomaly are cross-correlated by first using the derivative of an interpolant of data points of the primary metric to identify a time window for analysis. Impact scores for the secondary metrics can be then be generated by computing the standard deviation of a derivative of data points of the secondary metrics during the identified time window. The impact scores can be utilized to collect data relating to the secondary metrics most likely to have caused the anomaly. Remedial action can then be taken based upon the collected data in order to address the root cause of the anomaly. | 2020-09-17 |
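The impact score in 20200293391 — the standard deviation of a secondary metric's derivative inside the anomaly window — has a direct numeric reading: metrics that swing hard during the window score high, flat metrics score zero. A pure-Python sketch, assuming the analysis window is already chosen and using a first difference as the discrete derivative:

```python
def impact_score(values):
    """Standard deviation of the first difference (discrete derivative)
    of a metric's data points within the anomaly window."""
    d = [b - a for a, b in zip(values, values[1:])]
    mean = sum(d) / len(d)
    return (sum((x - mean) ** 2 for x in d) / len(d)) ** 0.5

# Illustrative secondary metrics sampled inside the anomaly window.
flat = [10, 10, 10, 10, 10]   # unrelated to the anomaly
spiky = [10, 30, 5, 40, 10]   # moves violently with the anomaly
scores = {"flat": impact_score(flat), "spiky": impact_score(spiky)}
```

Ranking by these scores picks out the secondary metrics most likely tied to the root cause, which is where the abstract says data collection should focus.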
20200293392 | MAINTENANCE INTERVENTION PREDICTING - Examples include a non-transitory machine-readable storage medium having stored thereon machine-readable instructions executable to cause a processing resource to monitor sensory inputs related to a device, monitor a first maintenance intervention related to the device, store data relating to the monitored sensory inputs and the first maintenance intervention in a centralized database, and predict a second maintenance intervention based on the data stored in the centralized database. | 2020-09-17 |
20200293393 | OUTPUT METHOD AND INFORMATION PROCESSING APPARATUS - A non-transitory computer-readable recording medium has stored therein a program that causes a computer to execute a process including creating a frequent message group based on an appearance frequency of each message included in a message group that is generated in the past, in response to a generation of an error message; extracting, as an error periphery log, a message group within a predetermined time period before and after the error message from an accumulated message group; determining whether or not the error message is included in the frequent message group based on a degree of relation between the error periphery log and the frequent message group; and outputting a message that is not included in the frequent message group in the error periphery log as a related message associated with the error message based on a result of the determining. | 2020-09-17 |
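The frequent-message-group idea in 20200293393 is essentially a frequency threshold over historical logs, then a set difference against the error-periphery log. A small Python sketch, with invented messages and an assumed threshold:

```python
from collections import Counter

def frequent_group(history, min_count=3):
    """Messages appearing at least min_count times form the frequent group."""
    counts = Counter(history)
    return {msg for msg, n in counts.items() if n >= min_count}

# Illustrative historical message stream.
history = ["heartbeat"] * 5 + ["gc pause"] * 3 + ["disk failure"]
group = frequent_group(history)          # {'heartbeat', 'gc pause'}

# Messages near the error that are NOT routine become related-message output.
periphery = ["heartbeat", "disk failure", "gc pause"]
related = [m for m in periphery if m not in group]   # ['disk failure']
```

Filtering out the routine chatter leaves `disk failure` as the message worth surfacing alongside the error, matching the abstract's output step.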
20200293394 | INTERACTIVE TROUBLESHOOTING ASSISTANT - An interactive troubleshooting assistant and method for troubleshooting a system in real time to repair (fix) one or more problems in a system is disclosed. The interactive troubleshooting assistant and method may include receiving multimodal inputs from sensors, wearable devices, a person, etc. that may be input into a feature extractor including attention layers and pre-processing units of a cloud computing system hosted by one or more servers, such as a private cloud system. A pre-processing unit converts the raw multimodal input into a structured form so that an attention layer can give weights to features provided by the pre-processing unit according to their importance. The weighted extracted features may be provided to an actions predictor. The actions predictor generates the most suitable action based on the weighted extracted features generated by the feature extractor from the multimodal inputs. After the most suitable action is performed, the interactive troubleshooting assistant considers new information from multimodal inputs so that it can provide the next recommended action. The interactive troubleshooting assistant may repeat these operations until the repair is completed. | 2020-09-17 |
20200293395 | TECHNIQUES FOR STORING DATA TO ENHANCE RECOVERY AND DETECTION OF DATA CORRUPTION ERRORS - Often there are errors when reading data from computer memory. To detect and correct these errors, there are multiple types of error correction codes. Disclosed is an error correction architecture that creates a codeword having a data portion and an error correction code portion. Swizzling rearranges the order of bits and distributes the bits among different codewords. Because the data is redistributed, a potential memory error of up to N contiguous bits, where N for example equals 2 times the number of codewords swizzled together, only affects up to, at most, two bits per swizzled codeword. This keeps the error within the error detecting capabilities of the error correction architecture. Furthermore, this can allow improved error correction and detection without requiring a change to error correcting code generators and checkers. | 2020-09-17 |
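The swizzling in 20200293395 spreads each codeword's bits across the physical layout so a contiguous burst error touches few bits of any single codeword. A toy Python sketch with two 4-bit codewords (sizes are illustrative, not from the filing):

```python
def swizzle(codewords):
    """Round-robin interleave equal-length codewords into one bit stream."""
    return [cw[i] for i in range(len(codewords[0])) for cw in codewords]

def unswizzle(stream, n_codewords):
    """Invert the interleave, recovering the original codewords."""
    return [stream[i::n_codewords] for i in range(n_codewords)]

cw0 = [0, 1, 0, 1]
cw1 = [1, 1, 0, 0]
stream = swizzle([cw0, cw1])      # [0, 1, 1, 1, 0, 0, 1, 0]
# A burst corrupting 2 contiguous stream bits hits one bit of each codeword,
# which per-codeword single-bit-correcting ECC can still repair.
restored = unswizzle(stream, 2)
```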
20200293396 | DEFERRED ERROR CODE CORRECTION WITH IMPROVED EFFECTIVE DATA BANDWIDTH PERFORMANCE - A deferred error correction code (ECC) scheme for memory devices is disclosed. In one embodiment, a method is disclosed comprising starting a deferred period of operation of a memory system in response to detecting the satisfaction of a condition; receiving an operation during the deferred period, the operation comprising a read or write operation access one or more memory banks of the memory system; deferring ECC operations for the operation; executing the operation; detecting an end of the deferred period of operation; and executing the ECC operations after the end of the deferred period. | 2020-09-17 |
20200293397 | SYSTEMS AND METHODS FOR AN ECC ARCHITECTURE WITH MEMORY MAPPING - Systems, apparatus and methods are provided for providing an error correction code (ECC) architecture with flexible memory mapping. An apparatus may comprise an error correction code (ECC) engine, a multi-channel interface for one or more non-volatile storage devices, a memory comprising a plurality of memory units, a storage containing a plurality of mapping entries to indicate allocation status of the plurality of memory units and a memory mapping manager. The plurality of memory units may be coupled to the ECC engine and the multi-channel interface. The memory mapping manager may be configured to control allocation of the plurality of memory units and set allocation status in the plurality of mapping entries. | 2020-09-17 |
20200293398 | NON VOLATILE MEMORY CONTROLLER DEVICE AND METHOD FOR ADJUSTMENT - There is provided a method of providing adjusted LLR values of a plurality of bits in a codeword to an LDPC decoder, the plurality of bits representing a plurality of charge states of a plurality of memory cells of a non-volatile memory. The method comprises storing, in a non-volatile memory controller associated with the non-volatile memory, LLR values of the plurality of bits. The controller then determines a plurality of levels of the charge states represented by the plurality of bits. The controller then generates, by a distribution processor, distributions of a population of the plurality of bits in the codeword at each of the plurality of levels at a first time and at a second time after the first time. The controller then generates the adjusted LLR values based on a comparison between the first and second distributions, and then decodes the codeword according to the adjusted LLR values. | 2020-09-17 |
20200293399 | DECODING SCHEME FOR ERROR CORRECTION CODE STRUCTURE - Various implementations described herein relate to systems and methods for performing error correction in a flash memory device by determining suggested corrections by decoding a codeword. In addition, whether a first set of the suggested corrections obtained based on a first component code of the plurality of component codes agree with a second set of the suggested corrections obtained based on a second component code of the plurality of component codes is determined. One of accepting the first set of the suggested corrections or rejecting the first set of the suggested corrections is selected based on whether the first set of the suggested corrections and the second set of the suggested corrections agree. | 2020-09-17 |
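The accept/reject decision in 20200293399 hinges on whether two component codes' suggested corrections agree on the bits both codes cover. A compact Python sketch; the bit indices and correction sets are invented for illustration.

```python
def accept_corrections(set_a, set_b, shared_bits):
    """Accept set_a only if it agrees with set_b on all jointly covered bits;
    otherwise reject (return None)."""
    if set_a & shared_bits == set_b & shared_bits:
        return set_a
    return None

shared = {3, 7, 11}   # bit positions covered by both component codes

# Both codes suggest flipping bit 3 (their private suggestions may differ).
agree = accept_corrections({3, 20}, {3, 42}, shared)      # accepted: {3, 20}

# One code wants bit 3 flipped, the other bit 7: the suggestions conflict.
conflict = accept_corrections({3, 20}, {7, 42}, shared)   # rejected: None
```

Rejecting on disagreement keeps a single mis-decoding component code from injecting false corrections into the codeword.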
20200293400 | ERROR CORRECTION CODE STRUCTURE - Various implementations described herein relate to systems and methods for encoding data having input bits to be stored in a non-volatile storage device, including mapping the input bits to a plurality of component codes of an error correction code (ECC) and encoding the input bits as the plurality of component codes, wherein first input bits of the input bits encoded by any of the plurality of component codes are encoded by every other component code of the plurality of component codes in a non-overlapping manner. | 2020-09-17 |
20200293401 | STORAGE DEVICE AND METHOD FOR OPERATING STORAGE DEVICE - A storage device and a method for operating the storage device are provided. A storage device includes processing circuitry configured to write multi-stream data on a non-volatile memory; generate parity data of the multi-stream data and/or intermediate parity data upon which the parity data is based; store the parity data and/or the intermediate parity data in a first memory; and perform a data swap between the first memory and a second memory, wherein a number of slots of a plurality of slots in the first memory is based on a number of execution units of program buffering of the non-volatile memory. | 2020-09-17 |
20200293402 | CANDIDATE BIT DETECTION AND UTILIZATION FOR ERROR CORRECTION - A determination is made that error-correcting code functionality detected a first number of erroneous bits within a memory device. Bits within the memory device are evaluated to identify a subset of the bits as candidate bits. The candidate bits are evaluated to determine whether the error-correcting code functionality returns a non-error state, where no error correction is performed, based upon one or more combinations of candidate bits being inverted. Responsive to the error-correcting code functionality returning the non-error state for only one combination of the one or more combinations of candidate bits being inverted, the one combination of candidate bits is corrected. | 2020-09-17 |
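The search in 20200293402 — invert combinations of candidate bits and act only when exactly one combination makes the checker report a non-error state — can be sketched directly. A real device would use its ECC check; even parity below is a deliberately simple stand-in, and the word and candidate list are invented.

```python
from itertools import combinations

def passes_check(bits):
    """Stand-in 'non-error' condition: even parity."""
    return sum(bits) % 2 == 0

def repair(bits, candidates):
    """Try inverting every combination of candidate bits; correct only if
    exactly one combination yields the non-error state."""
    fixes = []
    for r in range(1, len(candidates) + 1):
        for combo in combinations(candidates, r):
            trial = list(bits)
            for i in combo:
                trial[i] ^= 1
            if passes_check(trial):
                fixes.append(trial)
    return fixes[0] if len(fixes) == 1 else None

word = [1, 0, 0, 0]                    # odd parity: something flipped in storage
fixed = repair(word, candidates=[0])   # flipping bit 0 restores even parity
```

Requiring a *unique* passing combination mirrors the abstract's "only one combination" condition and avoids guessing between ambiguous fixes.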
20200293403 | ERROR CORRECTION CIRCUIT AND MEMORY SYSTEM - An error correction circuit includes a syndrome calculator to calculate syndrome information of input data, an error position calculator to calculate error position information of the input data, a holder to hold the syndrome information or the error position information at a predetermined timing, an input switch to select one of error-corrected data of the input data, and the input data, and to input the selected data to the syndrome calculator, an error detection determiner to determine whether an error of the input data has been correctly detected, and an error corrector to correct the error of the input data based on information held by the holder and to output error-corrected input data when it is determined by the error detection determiner that the error has been correctly detected, whereas to output the input data with no error correction when it is determined by the error detection determiner that the error has not been correctly detected. | 2020-09-17 |
20200293404 | SYSTEMS AND METHODS TO REDUCE APPLICATION DOWNTIME DURING A RESTORE OPERATION USING A PSEUDO-STORAGE DEVICE - The disclosed systems and methods enable an application to start operating and servicing users soon after and during the course of its backup data being restored, no matter how long the restore may take. This is referred to as “instant application recovery” in view of the fact that the application may be put back in service soon after the restore operation begins. Any primary data generated by the application during “instant application recovery” is not only retained, but is efficiently updated into restored data. An enhanced data agent and an associated pseudo-storage-device driver, which execute on the same client computing device as the application, enable the application to operate substantially concurrently with a full restore of backed up data. According to the illustrative embodiment, the pseudo-storage-device driver presents a pseudo-volume to the file system associated with the application, such that the pseudo-volume may be used as a store for primary data during the period of “instant application recovery.” | 2020-09-17 |
20200293405 | System and Method of Utilizing a Recovery Operating System - In one or more embodiments, one or more methods, processes, and/or systems may modify a configuration of an information handling system (IHS) to prevent access of a first non-volatile memory medium, associated with the IHS, that stores a recovery operating system; may boot the information handling system from a second non-volatile memory medium of the IHS; may determine that at least one issue associated with a boot sequence has occurred; may modify the configuration of the IHS to provide access of the first non-volatile memory medium; may modify the configuration of the IHS to boot the information handling system from the first non-volatile memory medium; may restart the IHS; and may boot the recovery operating system from the first non-volatile memory medium. | 2020-09-17 |
20200293406 | RESET DEVICE AND DISPLAY DEVICE - The disclosure provides a reset device and a display device. The reset device comprises a processor, a reset circuit and a button. The reset circuit electrically connects to the processor and the button. When the button is not pressed, the processor acquires a first level signal from the reset circuit; when the button is pressed, if the processor cannot recognize the second level signal while acquiring the second level signal from the reset circuit, the display device is restarted; and during or after restart operation for the display device, if the reset circuit detects that the first level signal and the second level signal which are output by the reset circuit before and after the button is pressed are different, software fault recovery operation is performed on the display device. | 2020-09-17 |
20200293407 | SYSTEM AND METHOD FOR REPLICATING DATA IN DISTRIBUTED DATABASE SYSTEMS - A method includes receiving an indication of a change to a page of a database and adding a new log record corresponding to the page to a common log comprising log records, the new log record describing the change made to the page and assigned a distinct version number. The method further includes synchronously writing the new log record to each log store replica in a set of log store replicas, and asynchronously writing the new log record to all page store replicas for the page to update the page that is stored on each of the page store replicas, where each store replica for the page serves reads for the page. In response to receiving, from a predetermined number of the page store replicas, an acknowledgement of the writing of the log record, discarding the new log record from the common log. | 2020-09-17 |
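The flow in 20200293407 — append a versioned log record, write it synchronously to every log-store replica, apply it asynchronously to page stores, and discard it from the common log once a quorum of page stores acknowledges — can be modeled minimally in Python. All classes and the immediate-ack page stores are stand-ins; a real system would apply page writes asynchronously.

```python
class Primary:
    """Simplified primary that replicates page changes via a common log."""

    def __init__(self, log_replicas, page_replicas, ack_quorum):
        self.version = 0
        self.common_log = []
        self.log_replicas = log_replicas
        self.page_replicas = page_replicas
        self.ack_quorum = ack_quorum

    def change_page(self, page, data):
        self.version += 1                       # distinct version number
        record = (self.version, page, data)
        self.common_log.append(record)
        for replica in self.log_replicas:       # synchronous log writes
            replica.append(record)
        acks = 0
        for store in self.page_replicas:        # modeled as immediate acks
            store[page] = data
            acks += 1
            if acks >= self.ack_quorum:         # quorum reached:
                self.common_log.remove(record)  # discard from common log
                break

logs = [[], [], []]
pages = [{}, {}, {}]
db = Primary(logs, pages, ack_quorum=2)
db.change_page("p1", "hello")
# The record sits on every log replica; the common log is empty again.
```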
20200293408 | MANAGING STRUCTURED DATA IN A DATA STORAGE SYSTEM - According to certain aspects, a secondary computing system can be configured to perform a full backup on database data, generate incremental backups comprising log files associated with modifications to the database data, and create a differential full backup copy of the database data using the full backup copy and changed blocks identified using the log files from the incremental backups. | 2020-09-17 |
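The differential-full-backup idea in 20200293408 can be sketched as: take one full copy, let subsequent log files identify which blocks changed, then synthesize a new full copy from the old full plus only the changed blocks. The block/log structures below are simplifying assumptions.

```python
def full_backup(database):
    """Copy every block (the initial full backup)."""
    return dict(database)

def changed_blocks(log_files):
    """Extract the set of block ids touched by the logged modifications."""
    return {entry["block"] for log in log_files for entry in log}

def differential_full(previous_full, database, log_files):
    """Synthesize a full copy by overlaying only the changed blocks."""
    copy = dict(previous_full)
    for block_id in changed_blocks(log_files):
        copy[block_id] = database[block_id]
    return copy

db = {0: "a", 1: "b", 2: "c"}
base = full_backup(db)
db[1] = "B"                                   # change captured incrementally
logs = [[{"block": 1, "op": "update"}]]       # log file recording the change
print(differential_full(base, db, logs))      # {0: 'a', 1: 'B', 2: 'c'}
```

Only block 1 is re-read from the source, which is the point: a fresh full copy without re-copying unchanged data.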
20200293409 | SHARING OF SECONDARY STORAGE DATA - An information management system according to certain aspects allows users to share a portion of a file (e.g., a document) stored in secondary storage. The user may specify a portion of a secondary storage file to share and send a link to the portion to another user. The other user can access the shared portion from the link, and just the shared portion may be restored from secondary storage. The system according to certain aspects provides a native view of secondary storage data on a client computing device. The index data and/or metadata relating to secondary storage data may be stored in native application format for access via the native source application. | 2020-09-17 |
20200293410 | SYNCHRONIZING SELECTED PORTIONS OF DATA IN A STORAGE MANAGEMENT SYSTEM - Disclosed methods and systems leverage resources in a storage management system to partially synchronize primary data files based on synchronizing selected portions thereof without regard to changes that may be occurring in other non-synchronized portions. Accordingly, a number of primary data files may be partially synchronized by synchronizing designated portions thereof via auto-restore operations from backup data. This approach relies on storage management resources to designate portions of source data that is to be kept synchronized across any number of targets; detect changes to the designated portions; back up changes to secondary storage; and distribute the changes from secondary storage to the associated targets, with minimal impact to the primary data environment. The approach may be mutually applied, so that changes in any one of an associated group of source data files may be likewise detected, backed up, and distributed to the other members of the group. | 2020-09-17 |
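The selective synchronization in 20200293410 can be sketched as watching only a designated byte range of a source file: when that range changes, the portion is backed up and restored into the same range on each target, leaving the rest of each target untouched. The range boundaries and string-based file model are assumptions for illustration.

```python
def sync_portion(source, targets, start, end, backup):
    """Keep only source[start:end] synchronized across the targets."""
    portion = source[start:end]
    if backup.get((start, end)) != portion:     # change detected
        backup[(start, end)] = portion          # back up to secondary storage
        for i, t in enumerate(targets):         # auto-restore into each target
            targets[i] = t[:start] + portion + t[end:]
    return targets

src = "AAAA-SYNCED-BBBB"
tgts = ["XXXX-stale!-YYYY"]
print(sync_portion(src, tgts, 5, 11, backup={}))   # ['XXXX-SYNCED-YYYY']
```

Note that the non-designated portions (`XXXX`/`YYYY`) of the target are left alone, matching the abstract's point that non-synchronized portions may change freely.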
20200293411 | DIRECT ACCESS TO BACKUP COPY - Techniques to provide direct access to backup data are disclosed. An indication is received to provide access to backup data backed up previously to a target device. The backup data as stored on the target device is used to spawn on the target device a logical volume corresponding to the backup data. Access to the logical volume as stored on the target device is provided to a production host. | 2020-09-17 |
20200293412 | Log Management Method, Server, and Database System - In a log management method performed by a server, the server receives a transaction and generates a command log of the transaction. When detecting the transaction is a multi-partition transaction or a non-deterministic transaction, the server generates a data log of the transaction. When the server is faulty, the server recovers data according to the command log or the data log. | 2020-09-17 |
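The logging policy in 20200293412 amounts to a simple rule: every transaction produces a command log entry, and a data log entry is produced additionally only when the transaction is multi-partition or non-deterministic (since such transactions cannot be replayed safely from the command alone). A minimal sketch, with the transaction fields as assumptions:

```python
def log_transaction(txn, command_log, data_log):
    """Always log the command; add a data log only when replay is unsafe."""
    command_log.append(txn["command"])
    if txn["multi_partition"] or txn["non_deterministic"]:
        data_log.append(txn["resulting_data"])

cmd_log, dat_log = [], []
# rand() makes replay of the command non-deterministic, so the resulting
# data is also logged for recovery.
log_transaction({"command": "UPDATE t SET x=rand()", "multi_partition": False,
                 "non_deterministic": True, "resulting_data": {"x": 0.42}},
                cmd_log, dat_log)
print(len(cmd_log), len(dat_log))  # 1 1
```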
20200293413 | DYNAMIC DATA STORAGE - A method for dynamically storing files/data, comprising: a) acquiring the file/data by an initial random Virtual Machine (rVM); b) shredding the file/data into a plurality of segments; c) wrapping, in a standalone state, each of the remaining segments with a unique code comprising at least one or more destination storage locations, a pointer to the following segment in the file/data, and a timer; d) autonomously and independently roaming each segment to the destination storage location appearing in its unique code; and e) periodically, according to the timer, continuously roaming segments between storage locations until receiving a request for retrieval of the dynamically stored file/data. | 2020-09-17 |
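The shred-and-roam scheme in 20200293413 can be sketched as splitting a file into segments, wrapping each with destinations, a next-segment pointer, and a timer, then periodically moving segments between locations until retrieval follows the pointer chain to reassemble the file. The segment size, wrapper fields, and location names are all assumptions.

```python
import random

SEGMENT_SIZE = 4

def shred(data, locations):
    """Split data into wrapped segments forming a linked chain."""
    chunks = [data[i:i + SEGMENT_SIZE] for i in range(0, len(data), SEGMENT_SIZE)]
    return [{
        "payload": chunk,
        "destinations": random.sample(locations, 2),
        "next_segment": i + 1 if i + 1 < len(chunks) else None,
        "timer": 10,   # seconds until the next roam (illustrative)
    } for i, chunk in enumerate(chunks)]

def roam(segment, locations):
    """Move a segment to fresh random destinations when its timer fires."""
    segment["destinations"] = random.sample(locations, 2)
    return segment

def retrieve(segments):
    """Follow next-segment pointers to reassemble the file."""
    data, idx = "", 0
    while idx is not None:
        data += segments[idx]["payload"]
        idx = segments[idx]["next_segment"]
    return data

locs = ["us-east", "eu-west", "ap-south"]
parts = shred("confidential-data", locs)
parts = [roam(s, locs) for s in parts]
print(retrieve(parts))   # confidential-data
```

The pointer chain is what makes retrieval possible even though the segments never sit still: reassembly depends only on segment order, not on where each segment currently lives.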
20200293414 | DYNAMICALLY FORMING A FAILURE DOMAIN IN A STORAGE SYSTEM THAT INCLUDES A PLURALITY OF BLADES - Dynamically forming a failure domain in a storage system that includes a plurality of blades, each blade mounted within one of a plurality of chassis, including: identifying, in dependence upon a failure domain formation policy, an available configuration for a failure domain that includes a first blade mounted within a first chassis and a second blade mounted within a second chassis, wherein each chassis is configured to support multiple types of blades; and creating the failure domain in accordance with the available configuration. | 2020-09-17 |
20200293415 | MEMORY TRAINING - Certain aspects of the present disclosure generally relate to memory training. An example method generally includes assigning each of a plurality of data channels of a memory device to at least one processor, performing memory tests, in parallel, on the plurality of data channels by at least in part performing read and write operations on at least two or more of the plurality of data channels in parallel using the at least one processor, and determining a setting for one or more memory interface parameters associated with the memory device relative to a data eye for each of the plurality of data channels determined based on the memory tests. | 2020-09-17 |
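The parallel training in 20200293415 can be sketched as testing each data channel concurrently, measuring the passing window of a tuning parameter (the "data eye") per channel, and placing the setting at the eye's centre. The pass/fail model and delay range below are stand-in assumptions, not real hardware behaviour.

```python
from concurrent.futures import ThreadPoolExecutor

DELAY_STEPS = range(0, 32)

def passes(channel, delay):
    """Stand-in for a read/write test at a given delay setting."""
    eye_centre = 10 + channel          # pretend each channel's eye differs
    return abs(delay - eye_centre) <= 4

def train_channel(channel):
    """Find the data eye for one channel and return its centre setting."""
    window = [d for d in DELAY_STEPS if passes(channel, d)]
    return channel, (window[0] + window[-1]) // 2

# Channels are trained in parallel, mirroring the per-channel processor
# assignment in the abstract.
with ThreadPoolExecutor() as pool:
    settings = dict(pool.map(train_channel, range(4)))
print(settings)   # {0: 10, 1: 11, 2: 12, 3: 13}
```

Centring the setting in the eye maximizes margin against voltage and temperature drift, which is why training searches for the window edges rather than stopping at the first passing value.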
20200293416 | Executing Test Scripts with Respect to a Server Stack - A functional test execution engine (“FTEE”) may be configured to execute test scripts with respect to a server stack. The FTEE may be communicatively coupled to a test script storage device, which may store the test scripts. The FTEE may select one or more test scripts for execution with respect to the server stack. The one or more test scripts may carry out maintenance or diagnostic functions for the server stack. The FTEE may determine the processing resources of the server stack and, based on those processing resources, select a first set of test scripts from the one or more test scripts to execute. The FTEE may cause the first set of test scripts selected to execute with respect to the server stack in order to generate test script results. The FTEE may store the test script results for subsequent analysis and use during execution of subsequent test scripts. | 2020-09-17 |
20200293417 | SCAN SYNCHRONOUS-WRITE-THROUGH TESTING ARCHITECTURES FOR A MEMORY DEVICE - An exemplary testing environment can operate in a testing mode of operation to test whether a memory device or other electronic devices communicatively coupled to the memory device operate as expected or unexpectedly as a result of one or more manufacturing faults. The testing mode of operation includes a shift mode of operation, a capture mode of operation, and/or a scan mode of operation. In the shift mode of operation and the scan mode of operation, the exemplary testing environment delivers a serial input sequence of data to the memory device. In the capture mode of operation, the exemplary testing environment delivers a parallel input sequence of data to the memory device. The memory device thereafter passes through the serial input sequence of data or the parallel input sequence of data to provide an output sequence of data in the shift mode of operation or the capture mode of operation or passes through the serial input sequence of data to provide a serial output sequence of scan data in the scan mode of operation. | 2020-09-17 |
20200293418 | NETWORK NODE, MONITORING NODE AND METHODS PERFORMED THEREIN - Embodiments herein relate to a method performed by a network node. | 2020-09-17 |
20200293419 | SYSTEMS AND METHODS FOR PREVENTION OF DATA LOSS IN A POWER-COMPROMISED PERSISTENT MEMORY EQUIPPED HOST INFORMATION HANDLING SYSTEM DURING A POWER LOSS EVENT - A method may include, in a host information handling system configured to be inserted into a chassis providing a common hardware infrastructure to a plurality of modular information handling systems including the information handling system: (i) determining a runtime health status of a persistent memory subsystem of the host information handling system; and (ii) communicating a health status indicator indicative of the runtime health status to a management module configured to manage the common hardware infrastructure. | 2020-09-17 |
20200293420 | DYNAMIC DEVICE DETECTION AND ENHANCED DEVICE MANAGEMENT - Embodiments provide for supporting management of an unrecognized device operating as a component of an IHS (Information Handling System). An unrecognized device operating on the IHS is detected. The unrecognized device is probed to determine its properties. The unrecognized device is monitored for characteristic communications that are indicative of a type of device. A signature of the unrecognized device is generated based on the probed properties and the monitored communications of the unrecognized device. Based on the generated signature, a device pack is created that supports management of the unrecognized device. A device pack may include instructions used by a remote access controller of the IHS for management of the unrecognized device. The created device pack may be generated based on monitoring keys supported by firmware of the unrecognized device. The remote access controller may probe and monitor the unrecognized device. | 2020-09-17 |
20200293421 | SYSTEMS AND METHODS FOR IDENTIFYING AND MONITORING SOLUTION STACKS - Systems and methods for identifying and managing solution stacks integrated within a computer environment include one or more computing devices receiving information identifying one or more first assets as belonging to a solution stack integrated within a computer environment. The computing devices can iteratively identify additional assets of the computer environment related to, but not part of, the assets already identified as belonging to the solution stack, and determine, based on a comparison of attributes of the additional assets to attributes of the assets already identified as belonging to the solution stack, whether any of the additional assets belongs to the solution stack. The one or more computing devices can repeat these steps until no additional asset is identified as belonging to the solution stack. The computing devices can generate a current state of the solution stack defining at least a complete set of assets forming the solution stack. | 2020-09-17 |
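The iterative discovery in 20200293421 is essentially a fixed-point expansion: starting from seed assets, repeatedly examine related-but-unclassified assets, admit those whose attributes match the stack, and stop when a pass admits nothing new. A minimal sketch, with the relation graph, attributes, and match rule as assumptions:

```python
def discover_stack(seed, related, attributes, match):
    """Grow the stack from seed assets until no new asset qualifies."""
    stack = set(seed)
    while True:
        # Assets related to the stack but not yet part of it.
        candidates = {r for a in stack for r in related.get(a, ())} - stack
        admitted = {c for c in candidates
                    if match(attributes[c], {attributes[a] for a in stack})}
        if not admitted:
            return stack           # fixed point: current state of the stack
        stack |= admitted

related = {"web": ["db", "cache"], "db": ["backup"], "cache": []}
attributes = {"web": "prod", "db": "prod", "cache": "dev", "backup": "prod"}
stack = discover_stack({"web"}, related, attributes,
                       lambda attr, stack_attrs: attr in stack_attrs)
print(sorted(stack))   # ['backup', 'db', 'web']
```

Here `cache` is related to the stack but excluded by its attribute, while `backup` is admitted only on the second pass, once `db` has joined, illustrating why the iteration must continue until nothing new qualifies.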
20200293422 | DEVICE TELEMETRY CONTROL - Various example embodiments for supporting device telemetry control are presented. A device exposes device data based on device telemetry control information of the device, and a customer monitoring the device via device telemetry may access that data. Various example embodiments provide the customer with additional control over access to the device data by enabling the customer to insert customer device telemetry control information into the device telemetry control information that controls device telemetry on the device. | 2020-09-17 |
20200293423 | ASSISTED SMART DEVICE CONTEXT PERFORMANCE INFORMATION RETRIEVAL - In an approach to determine performance information of a target item operating under a particular set of context information, a method, in response to receiving a request for performance information of a target item, and operating with a first computing device, identifies context information of the first computing device. The method determines whether a knowledge base includes a response that correlates to the request for performance information of the target item operating within context information similar to the first computing device. The method, in response to determining that the knowledge base includes the response that correlates to the request for performance information of the target item, sends the performance information to the first computing device, and initiates a communication channel between the first computing device and a second computing device operating the target item and having context information similar to that of the first computing device. | 2020-09-17 |
20200293424 | SYSTEMS AND METHODS FOR EVALUATING PERFORMANCE OF MODELS - A system and method for evaluating performance of different models. The method may include: obtaining, by at least one computer, a first sample set and a second sample set; dividing, by the at least one computer, the first sample set into a plurality of first sample subsets, each first sample subset providing an average first sample subset characteristic value; dividing, by the at least one computer, the second sample set into a plurality of second sample subsets, each second sample subset providing an average second sample subset characteristic value; determining, by the at least one computer, a final model between the first model and the second model based on an average difference, a significance level, and a confidence interval between the first model and the second model. | 2020-09-17 |
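The selection step in 20200293424 can be sketched as a paired comparison: compute per-subset score differences between the two models, build a confidence interval for the mean difference at the chosen significance level, and declare a winner only if the interval excludes zero. Using a normal approximation via `statistics.NormalDist` is a simplifying assumption; the patent does not specify the interval construction.

```python
from statistics import mean, stdev, NormalDist

def choose_model(model_a_scores, model_b_scores, alpha=0.05):
    """Pick a model from paired per-subset scores; None means no winner."""
    diffs = [a - b for a, b in zip(model_a_scores, model_b_scores)]
    n = len(diffs)
    avg, sd = mean(diffs), stdev(diffs)
    # Two-sided (1 - alpha) confidence interval for the mean difference.
    z = NormalDist().inv_cdf(1 - alpha / 2)
    half_width = z * sd / n ** 0.5
    lo, hi = avg - half_width, avg + half_width
    if lo > 0:
        return "model_a"        # significantly better at this level
    if hi < 0:
        return "model_b"
    return None                 # difference not significant at this level

# Average characteristic values per sample subset for each model.
a = [0.81, 0.83, 0.80, 0.84, 0.82]
b = [0.71, 0.72, 0.70, 0.73, 0.69]
print(choose_model(a, b))   # model_a
```

Requiring the interval to exclude zero guards against declaring a winner from noise when the per-subset scores overlap heavily.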
20200293425 | DETERMINING DIAGNOSTIC COVERAGE FOR MEMORY USING REDUNDANT EXECUTION - Memory, used by a computer to store data, is generally prone to faults, including permanent faults (i.e. relating to a lifetime of the memory hardware), and also transient faults (i.e. relating to some external cause) which are otherwise known as soft errors. Since soft errors can change the state of the data in the memory and thus cause errors in applications reading and processing the data, there is a desire to characterize the degree of vulnerability of the memory to soft errors. In particular, once the vulnerability for a particular memory to soft errors has been characterized, cost/reliability trade-offs can be determined, or soft error detection mechanisms (e.g. parity) may be selectively employed for the memory. In some cases, memory faults can be diagnosed by redundant execution and a diagnostic coverage may be determined. | 2020-09-17 |
20200293426 | MEASURING MOBILE APPLICATION PROGRAM RELIABILITY CAUSED BY RUNTIME ERRORS - A quality score for a computer application release is determined using a first number of unique users who have launched the computer application release on user devices and a second number of unique users who have encountered at least once an abnormal termination with the computer application release on user devices. Additionally or optionally, an application quality score can be computed for a computer application based on quality scores of computer application releases that represent different versions of the computer application. Additionally or optionally, a weighted application quality score can be computed for a computer application by further taking into consideration the average application quality score and popularity of a plurality of computer applications. | 2020-09-17 |
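The release quality score in 20200293426 is built from the two counts named in the abstract: unique users who launched a release and unique users who hit at least one abnormal termination. A minimal sketch, where the user-count weighting across releases is an assumption:

```python
def release_quality(users_launched, users_crashed):
    """Share of unique users who never saw a crash in this release."""
    if users_launched == 0:
        return 1.0
    return 1.0 - users_crashed / users_launched

def app_quality(releases):
    """Average per-release scores, weighted by user count (assumption)."""
    total = sum(u for u, _ in releases)
    return sum(release_quality(u, c) * u for u, c in releases) / total

# (unique users who launched, unique users who crashed at least once)
print(release_quality(10_000, 250))                       # 0.975
print(round(app_quality([(10_000, 250), (2_000, 100)]), 4))
```

Counting unique users rather than raw crash events keeps one pathological device from dominating the score.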
20200293427 | SYSTEMS AND METHODS TO IMPROVE DATA CLUSTERING USING A META-CLUSTERING MODEL - Systems and methods for clustering data are disclosed. For example, a system may include one or more memory units storing instructions and one or more processors configured to execute the instructions to perform operations. The operations may include receiving data from a client device and generating preliminary clustered data based on the received data, using a plurality of embedding network layers. The operations may include generating a data map based on the preliminary clustered data using a meta-clustering model. The operations may include determining a number of clusters based on the data map using the meta-clustering model and generating final clustered data based on the number of clusters using the meta-clustering model. The operations may include transmitting the final clustered data to the client device. | 2020-09-17 |
20200293428 | IMMERSIVE WEB-BASED SIMULATOR FOR DIGITAL ASSISTANT-BASED APPLICATIONS - Immersive web-based simulator for digital assistant-based applications is provided. A system can provide, for display in a web browser, an inner iframe configured to load, in a secure, access restricted computing environment, an application configured to integrate with a digital assistant. The application can be provided by a third-party developer device. The system can provide, for display in a web browser, an outer iframe configured with a two-way communication protocol to communicate with the inner iframe. The system can provide a state machine to identify a current state of the application loaded in the inner iframe, and load a next state of the application responsive to a control input. | 2020-09-17 |