24th week of 2022 patent application highlights part 55
Patent application number | Title | Published |
20220188608 | TECHNIQUES FOR OPTIMIZING NEURAL NETWORKS - Apparatuses, systems, and techniques to cache and reuse data for a neural network. In at least one embodiment, data generated by one or more layers of a neural network is cached and reused by the neural network. | 2022-06-16 |
20220188609 | RESOURCE AWARE NEURAL NETWORK MODEL DYNAMIC UPDATING - Resources of an embedded system, such as RAM utilization and available processor cycles or bandwidth, are monitored. Neural network models of varying size and computational load for given neural networks are utilized in conjunction with this resource monitoring. The neural network model used for a particular neural network is dynamically varied based on the resource monitoring. In one example, neural network models of varying precision are stored and the best model for the available RAM and processor cycles is loaded. In one example, neural network model weight values are quantized before being loaded for use, the level of quantization being based on the available RAM and processor cycles. This dynamic adaptation of the neural network models allows other processes in the embedded system to operate normally and yet allows the neural network to operate at the maximum capability allowed for a given period. | 2022-06-16 |
20220188610 | METHOD FOR MEMORY ALLOCATION DURING EXECUTION OF A NEURAL NETWORK - According to an aspect, a method is proposed for defining placements, in a volatile memory, of temporary scratch buffers used during an execution of an artificial neural network, the method comprising: determining an execution order of layers of the neural network, defining placements, in a heap memory zone of the volatile memory, of intermediate result buffers generated by each layer, according to the execution order of the layers, determining at least one free area of the heap memory zone over the execution of the layers, defining placements of temporary scratch buffers in the at least one free area of the heap memory zone according to the execution order of the layers. | 2022-06-16 |
20220188611 | NEURAL NETWORKS PROCESSING UNITS PERFORMANCE OPTIMIZATION - In an example embodiment, a scalable deep neural networks (DNN) accelerator (sDNA) is provided that includes multiple neural networks processing units (NPUs) that are interconnected to provide a flexible DNN that is programmable and scalable. Each NPU includes one or more pruned weights memories and one or more compressed activation memories. Each NPU may include multiple A_LUT memories and a W_LUT memory that are used together as a multiplier accelerator to reduce the number of DNN multiplications. The sDNA may include one or more W map memories and A map memories that provide the sDNA algorithm with input data that enables skipping of zero weights and zero activations. The sDNA architecture can be generalized into an sDNA parallel mode to achieve higher memory bandwidth and throughput. In an embodiment, the sDNA architecture is power-efficient, silicon size-efficient and cost-efficient. | 2022-06-16 |
20220188612 | NPU DEVICE PERFORMING CONVOLUTION OPERATION BASED ON THE NUMBER OF CHANNELS AND OPERATING METHOD THEREOF - A method of generating an output feature map based on an input feature map, the method including: generating an input feature map vector for a plurality of input feature map blocks when the number of channels of the input feature map is less than a certain number of reference channels; performing a convolution operation on the input feature map based on a target weight map and an additional weight map that has a weight identical to that of the target weight map, when the number of target weight maps is less than a reference number; and generating an output feature map based on the performed convolution operation. | 2022-06-16 |
20220188613 | SGCNAX: A SCALABLE GRAPH CONVOLUTIONAL NEURAL NETWORK ACCELERATOR WITH WORKLOAD BALANCING - We introduce SGCNAX, a scalable GCN accelerator architecture for the high-performance and energy-efficient acceleration of GCNs. Unlike prior GCN accelerators that either employ limited loop optimization techniques, or determine the design variables based on random sampling, we systematically explore the loop optimization techniques for GCN acceleration and provide a flexible GCN dataflow that adapts to different GCN configurations to achieve optimal efficiency. We further provide two hardware-based techniques to address the workload imbalance problem caused by the unbalanced distribution of zeros in GCNs. Specifically, SGCNAX exploits an outer-product-based computation architecture that mitigates the intra-PE (Processing Elements) workload imbalance, and employs a group-and-shuffle approach to mitigate the inter-PE workload imbalance. | 2022-06-16 |
20220188614 | FRACTAL CALCULATING DEVICE AND METHOD, INTEGRATED CIRCUIT AND BOARD CARD - A fractal computing device according to an embodiment of the present application may be included in an integrated circuit device. The integrated circuit device includes a universal interconnect interface and other processing devices. The computing device interacts with the other processing devices to jointly complete a user-specified calculation operation. The integrated circuit device may also include a storage device. The storage device is respectively connected with the computing device and the other processing devices and is used for data storage of the computing device and the other processing devices. | 2022-06-16 |
20220188615 | NEUROMORPHIC PROCESSING SYSTEM AND METHOD OF OPERATING THE SAME - A neuromorphic processing system | 2022-06-16 |
20220188616 | DATA PROCESSING APPARATUS - In a data processing apparatus, an M×M data processing unit performs M×M convolution processing using data from an input buffer unit. An N×N data processing unit performs N×N convolution processing using the data from the input buffer unit. A first output buffer unit stores one of results of processing by the M×M data processing unit and the N×N data processing unit, and outputs the same to the input buffer unit. A second output buffer unit stores the other of the results of processing by the M×M data processing unit and the N×N data processing unit. The second output buffer unit transfers the result of processing to the external memory. | 2022-06-16 |
20220188617 | RESERVOIR COMPUTER - A reservoir computer includes a reservoir unit including a plurality of neuron circuits and an output layer. Each of the neuron circuits includes a plurality of inputs, an analog output, and a digital output. Each of the plurality of inputs is supplied with the analog output of any one of other neuron circuits, the analog output of the neuron circuit itself, or an analog input signal from the outside. The neuron circuit includes a capacitor circuit, an amplifier, a capacitor memory circuit, a buffer circuit, and an analog-to-digital conversion circuit. The capacitor circuit includes a plurality of capacitors between the plurality of inputs and a single output, performs a product-sum calculation on analog signals supplied to the plurality of inputs together with the amplifier, and performs a non-linear calculation on a result of the product-sum calculation by using saturation characteristics of the amplifier. | 2022-06-16 |
20220188618 | NEUROMORPHIC DEVICE - This neuromorphic device including: a first element group; and a second element group, in which each of the first element group and the second element group includes a plurality of magnetic domain wall movement elements, each of the plurality of magnetic domain wall movement elements includes a magnetic domain wall movement layer, a ferromagnetic layer, and a non-magnetic layer interposed between the magnetic domain wall movement layer and the ferromagnetic layer, a length of the magnetic domain wall movement layer of each of the magnetic domain wall movement elements belonging to the first element group in a longitudinal direction is shorter than a length of the magnetic domain wall movement layer of each of the magnetic domain wall movement elements belonging to the second element group in the longitudinal direction, and a resistance changing rate when a predetermined pulse is input is higher for each of the magnetic domain wall movement elements belonging to the first element group than for each of the magnetic domain wall movement elements belonging to the second element group. | 2022-06-16 |
20220188619 | Microcontroller Interface for Audio Signal Processing - Disclosed is a neuromorphic-processing system including, in some embodiments, a special-purpose host processor operable as a stand-alone host processor; a neuromorphic co-processor including an artificial neural network; and a communications interface between the host processor and the co-processor configured to transmit information therebetween. The co-processor is configured to enhance special-purpose processing of the host processor with the artificial neural network. Also disclosed is a method of a neuromorphic-processing system having the special-purpose host processor and the neuromorphic co-processor including, in some embodiments, enhancing the special-purpose processing of the host processor with the artificial neural network of the co-processor. In some embodiments, the host processor is a hearing-aid processor. | 2022-06-16 |
20220188620 | TIME ESTIMATOR FOR DEEP LEARNING ARCHITECTURE - A method for optimizing a neural network architecture by estimating an inference time for each operator in the neural network architecture is provided. The method may include determining a benchmark time for at least one single-path architecture out of a plurality of single-path architectures associated with the neural network by sampling the at least one single-path architecture from the neural network, wherein the at least one single-path architecture comprises one or more operators. The method may further include, based on the benchmark time for the at least one single-path architecture, determining an estimated inference time for an operator, wherein determining the estimated inference time for the operator comprises applying an operator function, wherein the operator function comprises a function based on a difference between the benchmark time associated with the at least one single-path architecture and the estimated latency of the neural network. | 2022-06-16 |
20220188621 | GENERATIVE DOMAIN ADAPTATION IN A NEURAL NETWORK - A system comprises a computer including a processor and a memory. The memory stores instructions executable by the processor to cause the processor to generate a low-level representation of the input source domain data; generate an embedding of the input source domain data; generate a high-level feature representation of features of the input source domain data; generate output target domain data in the target domain that includes semantics corresponding to the input source domain data by processing the high-level feature representation of the features of the input source domain data using a domain low-level decoder neural network layer that generates data in the target domain; and modify a loss function such that latent attributes corresponding to the embedding are selected from a same probability distribution. | 2022-06-16 |
20220188622 | ALTERNATIVE SOFT LABEL GENERATION - An approach to identifying alternate soft labels for training a student model may be provided. A teaching model may generate a soft label for labeled training data. The training data can be an acoustic file for speech or a spoken natural language. A pool of soft labels previously generated by teacher models can be searched at the label level to identify soft labels that are similar to the generated soft label. The similar soft labels can have a similar length or sequence at the word, phoneme, and/or state level. The identified similar soft labels can be used in conjunction with the generated soft label to train a student model. | 2022-06-16 |
20220188623 | EXPLAINABLE DEEP REINFORCEMENT LEARNING USING A FACTORIZED FUNCTION - A policy based on a compound reward function is learned through a reinforcement learning algorithm at a learning network. The policy is used to choose an action of a plurality of possible actions. A state-action value network is established for each of the two or more reward terms. The state-action value networks are separated from the learning network. A human-understandable output is produced to explain why the action was taken based on each of the state-action value networks. | 2022-06-16 |
20220188624 | System and Method for Monitoring Driver Performance - A system for monitoring driver performance includes a server system that generates a performance model based on driver performance related data received from a plurality of vehicles. The performance model mathematically characterizes the crowd wisdom for a driving behavior under a set of driving circumstances. The system also includes the plurality of vehicles, each having an on-vehicle system that compares driver performance related data characterizing the driving behavior performed by the driver of the vehicle under the set of driving circumstances to the performance model so as to determine a deviation score therebetween. The deviation scores for the driving behavior are accumulated over a predetermined period of time, and a divergence indicator is triggered in response to an accumulated deviation score value exceeding one or more predetermined thresholds. | 2022-06-16 |
20220188625 | METHOD AND COMPUTER IMPLEMENTED SYSTEM FOR GENERATING LAYOUT PLAN USING NEURAL NETWORK - A computer-implemented system for generating a layout plan includes a memory and a processor coupled to the memory. The processor is configured to obtain an object or a map, input the obtained object or the obtained map to a pre-trained deep neural network (DNN) model, and output the layout plan as an action suggestion based on an output result of the pre-trained DNN model, where the output layout plan is an optimal layout plan based on the weight of the DNN model. | 2022-06-16 |
20220188626 | AUTOMATIC HYBRID QUANTIZATION FOR DEEP NEURAL NETWORK - Methods, computer program products, and/or systems are provided that perform the following operations: obtaining a target neural network structure and constraints for a target neural network; generating a meta learning network having an associated quantization function based, at least in part, on the target neural network structure; training the meta learning network based, at least in part, on providing a hybrid quantization vector as input to the meta learning network and providing a training dataset to the target neural network; obtaining a plurality of hybrid quantization vectors; determining a new hybrid quantization vector from the plurality of hybrid quantization vectors; and retraining the trained meta learning network based, at least in part, on providing the new hybrid quantization vector as input to the trained meta learning network. | 2022-06-16 |
20220188627 | REINFORCEMENT LEARNING FOR TESTING SUITE GENERATION - Aspects of the invention include mutating each neural network of a portion of a first array of neural networks, wherein each neural network of the first array of neural networks is configured to select a respective sequence of test cases for testing a computing infrastructure. Causing each neural network of a second array of neural networks to select a respective sequence of test cases for testing the computing infrastructure. Generating a child neural network by performing a crossover operation between a mutated neural network of the portion of the first array and a neural network of the second array of neural networks, the child neural network generating a new sequence of test cases for testing the computing infrastructure. | 2022-06-16 |
20220188628 | DYNAMIC CONFIGURATION OF READOUT CIRCUITRY FOR DIFFERENT OPERATIONS IN ANALOG RESISTIVE CROSSBAR ARRAY - A device which comprises an array of resistive processing unit (RPU) cells, first control lines extending in a first direction across the array of RPU cells, and second control lines extending in a second direction across the array of RPU cells. Peripheral circuitry comprising readout circuitry is coupled to the first and second control lines. A control system generates control signals to control the peripheral circuitry to perform a first operation and a second operation on the array of RPU cells. The control signals include a first configuration control signal to configure the readout circuitry to have a first hardware configuration when the first operation is performed on the array of RPU cells, and a second configuration control signal to configure the readout circuitry to have a second hardware configuration, which is different from the first hardware configuration, when the second operation is performed on the array of RPU cells. | 2022-06-16 |
20220188629 | SUPERRESOLUTION AND CONSISTENCY CONSTRAINTS TO SCALE UP DEEP LEARNING MODELS - Techniques of facilitating deep learning model rescaling by computing devices. In one example, a system can comprise a processor that executes computer executable components stored in memory. The computer executable components can comprise: a rescaling component; and a forecasting component. The rescaling component can determine a scaling ratio that maps low mesh resolution predictive data output by a partial differential equation (PDE)-based model for a sub-domain to high-resolution observational or ground-truth data for a domain comprising the sub-domain. The forecasting component can generate high mesh resolution predictive data for the domain with a machine-learning model using input data of the PDE-based model and the scaling ratio. | 2022-06-16 |
20220188630 | MODEL IMPROVEMENT USING FEDERATED LEARNING AND CANONICAL FEATURE MAPPING - The method provides for receiving a plurality of trained models from a corresponding plurality of clients, wherein a respective trained model predicts a condition of an asset and is based on a data set associated with the asset of a respective client. The trained model is based on a seed model that includes a canonical set of features. The trained model includes a component that converts the data at a site to the canonical set of features used by the seed model. The plurality of trained models from the corresponding plurality of clients is assigned to two or more groupings, wherein a grouping includes trained models providing similar analysis. The one or more processors generate an improved model for a client with a limited amount of training data, obtaining the improvement by using multiple models that belong to the same grouping as the first client's model. | 2022-06-16 |
20220188631 | ARTIFICIAL NEURAL NETWORK IMPLEMENTATION - A method of implementing an artificial neural network (ANN) | 2022-06-16 |
20220188632 | Evolutionary Imitation Learning - Systems, devices, and methods of evolutionary imitation learning are described. For example, a computing system trains an artificial neural network (ANN) using a supervised machine learning technique according to first example data representative of a behavior to be imitated by the ANN in performing a task. The ANN is used to generate first sample data representative of a behavior of the ANN in performing the task. The computing system modifies the first sample data using a technique of evolutionary algorithm to generate second sample data according to a criterion configured to select mutations of the behavior of the ANN. The computing system further trains the ANN according to the second sample data using the supervised machine learning technique. | 2022-06-16 |
20220188633 | LOW DISPLACEMENT RANK BASED DEEP NEURAL NETWORK COMPRESSION - A method and an apparatus for performing deep neural network compression use an approximation training set along with information, such as in matrices representing weights, biases and non-linearities, to iteratively compress a pre-trained deep neural network by low displacement rank based approximation of the network layer weight matrices. The low displacement rank approximation allows for replacement of the original layer weight matrices of the pre-trained deep neural network with the sum of a small number of structured matrices, allowing compression and low inference complexity. | 2022-06-16 |
20220188634 | Artificial Intelligence with Cyber Security - A cyber security system that uses artificial intelligence, such as neural networks, to monitor the security of a computer network and take automated remedial action based on the monitoring. The security system autonomically learns behavior profiles, attack profiles and circumvention techniques used to target the network. The remedial action taken by the system includes isolating any misuse that has been identified, surveilling the misuse in the isolated environment, analyzing its behavior profile and reconfiguring the network to enhance security. | 2022-06-16 |
20220188635 | System and Method For Detecting Misclassification Errors in Neural Networks Classifiers - An error detection framework, RED (Residual-based Error Detection), produces reliable confidence scores for detecting misclassification errors. RED calibrates the classifier's inherent confidence indicators and estimates uncertainty of the calibrated confidence scores using Gaussian Processes. | 2022-06-16 |
20220188636 | META PSEUDO-LABELS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network using meta pseudo-labels. One of the methods includes training a student neural network using pseudo-labels generated by a teacher neural network that is being trained jointly with the student neural network. | 2022-06-16 |
20220188637 | METHOD FOR TRAINING ADVERSARIAL NETWORK MODEL, METHOD FOR BUILDING CHARACTER LIBRARY, ELECTRONIC DEVICE, AND STORAGE MEDIUM - There are provided a method for training an adversarial network model, a method for building a character library, an electronic device and a storage medium, which relate to a field of artificial intelligence technology, in particular to a field of computer vision and deep learning technologies. The method includes: generating a generated character based on a content character sample having a base font and a style character sample having a style font and generating a reconstructed character based on the content character sample, by using a generation model; calculating a basic loss of the generation model based on the generated character and the reconstructed character, by using a discrimination model; calculating a character loss of the generation model through classifying the generated character by using a trained character classification model; and adjusting a parameter of the generation model based on the basic loss and the character loss. | 2022-06-16 |
20220188638 | DATA REUSE IN DEEP LEARNING - An apparatus for convolution operations is provided. The apparatus includes a PE array, a datastore, writing modules, reading modules, and a controlling module. The PE array performs MAC operations. The datastore includes databanks, each of which stores data to be used by a column of the PE array. The writing modules transfer data from a memory to the datastore. The reading modules transfer data from the datastore to the PE array. Each reading module may transfer data to a particular column of the PE array. The controlling module can determine the rounds of a convolution operation. Each round includes MAC operations based on a weight. The controlling module controls the writing modules and reading modules so that the same data in a databank can be reused in multiple rounds. For different rounds, the controlling module can provide a reading module access to different databanks. | 2022-06-16 |
20220188639 | SEMI-SUPERVISED LEARNING OF TRAINING GRADIENTS VIA TASK GENERATION - In an approach for augmenting a neural network with a self-supervised mechanism, a processor trains a first neural network using labeled data, the first neural network configured for a main task. A processor trains a second neural network using the labeled data and unlabeled data, the second neural network being an additional component to the first neural network. A processor computes a gradient using a second loss of the second neural network based on the unlabeled data. | 2022-06-16 |
20220188640 | COMPUTING DEVICE AND METHOD USING A NEURAL NETWORK TO BYPASS CALIBRATION DATA OF AN INFRARED SENSOR - Method and computing device using a neural network to bypass calibration data of an infrared sensor. A predictive model generated by a neural network training engine is stored by the computing device. The computing device determines a two-dimensional (2D) matrix of raw sensor data. Each raw sensor datum is representative of heat energy collected by the infrared sensor. The computing device executes a neural network inference engine. The neural network inference engine implements the neural network using the predictive model for generating outputs based on inputs. The inputs comprise the 2D matrix of raw sensor data. The outputs comprise a 2D matrix of inferred temperatures. A method for training a neural network to bypass calibration data of an infrared sensor is also provided. | 2022-06-16 |
20220188641 | REINFORCEMENT TESTING OF A NEURAL NETWORK - Aspects of the invention include creating a neural network including neurons to which actions are assigned in representation of test cases. Tests of various instantiations of the neural network are executed for each test case and a state of the neural network after each test is evaluated to determine a fitness score of a corresponding instantiation. Instantiations having fitness scores that exceed a predefined level are identified, and those instantiations are selected for adjustments. The executing, the evaluating, the identifying and the selecting are iteratively repeated in order to obtain desired fitness scores. | 2022-06-16 |
20220188642 | Robust Adversarial Immune-Inspired Learning System - The lack of robustness of Deep Neural Networks (DNNs) against different types of attacks is problematic in adversarial environments. The long-standing and arguably most powerful natural defense system is the mammalian immune system, which has successfully defended the species against attacks by novel pathogens for millions of years. This disclosure proposes a Robust Adversarial Immune-inspired Learning System (RAILS) inspired by the mammalian immune system. The RAILS approach is demonstrated using adaptive immune system emulation to harden Deep k-Nearest Neighbor (DkNN) architectures against evasion attacks. Using evolutionary programming to simulate new B-cell generation that occurs in natural immune systems, e.g., B-cell flocking, clonal expansion, and affinity maturation, it is shown that the RAILS learning curve exhibits similar learning behavior as observed in in-vitro experiments on B-cell affinity maturation. The life-long learning mechanism allows RAILS to evolve and defend against diverse attacks. | 2022-06-16 |
20220188643 | MIXUP DATA AUGMENTATION FOR KNOWLEDGE DISTILLATION FRAMEWORK - A method of training a student neural network is provided. The method includes feeding a data set including a plurality of input vectors into a teacher neural network to generate a plurality of output values, and converting two of the plurality of output values from the teacher neural network for two corresponding input vectors into two corresponding soft labels. The method further includes combining the two corresponding input vectors to form a synthesized data vector, and forming a masked soft label vector from the two corresponding soft labels. The method further includes feeding the synthesized data vector into the student neural network, using the masked soft label vector to determine an error for modifying weights of the student neural network, and modifying the weights of the student neural network. | 2022-06-16 |
20220188644 | LATENT-SPACE MISALIGNMENT MEASURE OF RESPONSIBLE AI FOR MACHINE LEARNING MODELS - Computer-implemented machines, systems and methods for providing insights about misalignment in a latent space of a machine learning model. A method includes initializing a second weight matrix of a second artificial neural network based on a first weight matrix from a first artificial neural network. The method further includes applying transfer learning between the first artificial neural network and the second artificial neural network. The method further includes comparing the first latent space with the second latent space. The method further includes determining, responsive to the comparing, a first score indicating alignment of the first latent space and the second latent space. The method further includes determining, responsive to the first score satisfying a threshold, an appropriateness of the machine learning model. | 2022-06-16 |
20220188645 | USING GENERATIVE ADVERSARIAL NETWORKS TO CONSTRUCT REALISTIC COUNTERFACTUAL EXPLANATIONS FOR MACHINE LEARNING MODELS - Herein are counterfactual explanations of machine learning (ML) inferencing provided by generative adversarial networks (GANs) that ensure realistic counterfactuals and use latent spaces to optimize perturbations. In an embodiment, a first computer trains a generator model in a GAN. A same or second computer hosts a classifier model that inferences an original label for original feature values respectively for many features. Runtime ML explainability (MLX) occurs on the first or second or a third computer as follows. The generator model from the GAN generates a sequence of revised feature values that are based on noise. The noise is iteratively optimized based on a distance between the original feature values and current revised feature values in the sequence of revised feature values. The classifier model inferences a current label respectively for each counterfactual in the sequence of revised feature values. Satisfactory discovered counterfactuals are promoted as explanations of behavior of the classifier model. | 2022-06-16 |
20220188646 | CLASSIFIER WITH OUTLIER DETECTION ALGORITHM - A classifier is executed including an unsupervised artificial intelligence model and a supervised artificial intelligence model. The classifier is configured to receive run-time input data, and process the run-time input data using the unsupervised artificial intelligence model and an outlier detection algorithm to determine whether the run-time input data is an outlier as compared to training input data. Responsive to determining that the run-time input data is not an outlier, the classifier determines a predicted response label for the run-time input based on the run-time input data processed using the supervised artificial intelligence model. Responsive to determining that the run-time input data is an outlier, the classifier refrains from determining the predicted response label for the run-time input based on the run-time input data processed using the supervised artificial intelligence model, and instead outputs a prompt for user input of a user-curated response label for the run-time input. | 2022-06-16 |
20220188647 | MODEL LEARNING APPARATUS, DATA ANALYSIS APPARATUS, MODEL LEARNING METHOD AND PROGRAM - A model learning apparatus includes: a learning unit configured to train an unsupervised deep learning model using training data; a calculation unit configured to calculate a correlation between input dimensions in the deep learning model; and a division model learning unit configured to train an analysis model using the training data for each set of dimensions having a correlation. | 2022-06-16 |
20220188648 | METHOD FOR PREDICTING AND OPTIMIZING PENETRATION RATE IN OIL AND GAS DRILLING BASED ON CART ALGORITHM - The present invention relates to a method for predicting and optimizing a penetration rate in oil and gas drilling based on a CART algorithm. The method includes the following steps: Step | 2022-06-16 |
20220188649 | DECISION TREE-ORIENTED VERTICAL FEDERATED LEARNING METHOD - Provided is a decision tree-oriented vertical federated learning method, which mainly comprises the following steps: 1) all participants sorting local samples for each local feature, and then dividing the sorted samples into different blocks in sequence, each block being called a bucket; 2) for a group of samples corresponding to each feature, a bucket number of each sample under this feature having a certain probability to become other bucket numbers, and selecting an appropriate probability to make this encryption method meet the definition of differential privacy; 3) each participant sending serial numbers of buckets assigned to different samples under different features to the participant who holds a label, and this participant being called a coordinator; 4) the coordinator training a decision tree model according to these samples, and no other participants being needed in the training process. | 2022-06-16 |
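The bucket perturbation in step 2) of 20220188649 resembles classic randomized response: keep the true bucket number with some probability, otherwise report a uniformly random bucket, with the keep probability tuned to meet a differential-privacy definition. A minimal sketch of that idea (bucket counts, probabilities, and all names are illustrative, not from the patent):

```python
import random

def bucketize(values, n_buckets):
    """Sort samples by feature value and split them into equal-size buckets."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    size = (len(values) + n_buckets - 1) // n_buckets
    return {idx: rank // size for rank, idx in enumerate(order)}

def perturb_bucket(bucket_id, n_buckets, keep_prob, rng):
    """Randomized response: keep the true bucket with prob keep_prob,
    otherwise report a bucket drawn uniformly at random."""
    if rng.random() < keep_prob:
        return bucket_id
    return rng.randrange(n_buckets)

rng = random.Random(0)
feature = [5.2, 1.1, 3.3, 9.9, 2.2, 7.7, 0.5, 4.4]
true_buckets = bucketize(feature, n_buckets=4)
reported = {i: perturb_bucket(b, 4, keep_prob=0.8, rng=rng)
            for i, b in true_buckets.items()}
```

Only the `reported` bucket numbers would be sent to the coordinator; the raw feature values never leave the participant.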
20220188650 | Device and Method for Configuring a Technical System - Device and method for configuring a technical system are disclosed, wherein the method includes generating a configuration model from configuration criteria for the technical system and the configuration model represents the technical system as an information model, where generating the configuration model includes validating the configuration criteria based on constraints associated with the technical system and identifying a maximum satisfiable rule set for the validated configuration criteria, the maximum satisfiable rule set being identified by determining a minimum number of conflicting rules to be removed to resolve conflicts with the validated configuration criteria, where the minimum number of conflicting rules is determined for rules ranked below a threshold severity, and by removing at least one of the minimum number of conflicting rules to generate the maximum satisfiable rule set. | 2022-06-16 |
20220188651 | SYSTEMS AND METHODS FOR EXTRACTING SPECIFIC DATA FROM DOCUMENTS USING MACHINE LEARNING - A method includes generating, by one or more processors, a first graphical interface. The first graphical interface includes a card-based view with each card in the card-based view corresponding to a field of analysis from a plurality of fields of analysis. The method also includes transmitting, to a client device, the representation of the first graphical interface; receiving, from the client device, a selection of a particular card of the card-based view; and, based on the received selection, generating a representation of a second graphical interface that includes a detailed view of output data associated with a field of analysis that corresponds to the particular card. The method further includes transmitting, to the client device, the representation of the second graphical interface. | 2022-06-16 |
20220188652 | SYSTEM AND METHOD FOR DE NOVO DRUG DISCOVERY - A system and method for de novo drug discovery using machine learning algorithms. In a preferred embodiment, de novo drug discovery is performed via data enrichment and interpolation/perturbation of molecule models within the latent space, wherein molecules with certain characteristics can be generated and tested in relation to one or more targeted receptors. Filtering methods may be used to determine active novel molecules by filtering out non-active molecules and contain activity predictors to better navigate the molecule-receptor domain. The system may comprise neural networks trained to reconstruct known ligand-receptor pairs and, from the reconstruction model, interpolate and perturb it such that novel and unique molecules are discovered. A second preferred embodiment trains a variational autoencoder coupled with a bioactivity model to predict molecules exhibiting a range of desired properties. | 2022-06-16 |
20220188653 | DATA DRIVEN RANKING OF COMPETING ENTITIES IN A MARKETPLACE - A method, computer system, and a computer program product for competitive analysis is provided. The present invention may include identifying one or more potential competitors by searching a knowledge corpus using one or more seed terms. The present invention may include determining one or more competitors by eliminating at least one potential competitor. The present invention may include generating a competitive analysis report. | 2022-06-16 |
20220188654 | SYSTEM AND METHOD FOR CLINICAL TRIAL ANALYSIS AND PREDICTIONS USING MACHINE LEARNING AND EDGE COMPUTING - A system and method for improving the efficiency of information flow of and during clinical trials and also using edge-based and cloud-based machine learning for analyzing clinical trial data from inception to completion, subsequently protecting investments, assets, and human life. The system comprises a pharmaceutical research system that receives, pushes, and facilitates data packets containing clinical trial information across multiple sites and across multiple trial personnel while also using machine learning for a variety of tasks. A mobile application on edge devices uses edge-based machine learning to identify biomarkers and provides sponsors and clinicians with an expedient and secure communication means. The edge devices and the cloud-based machine learning communicate full-duplex and share information and machine learning models, leading to an improvement in early adverse effects detection. Biomarkers predicting severe adverse effects trigger the system to send alerts and reports identifying potential victims to medical personnel for immediate intervention. | 2022-06-16 |
20220188655 | Toxic Substructure Extraction Using Clustering and Scaffold Extraction - A system and method that takes in a data set comprising molecular structure data and properties of interest, e.g., ADMET, EC50, IC50, etc., and determines the substructures that cause or do not cause the property of interest. The substructures may then be used to filter out potentially harmful new proposed/generated molecules or create a new data set of known active/inactive substructures of a property of interest that may fulfill other obligations. The system comprises a substructure extraction module which further comprises a scaffold extraction module and a comparison module. The scaffold extraction module clusters, searches, and extracts substructures in question, while the comparison module compares the bioactivity of each molecule with and without each substructure in question to determine the substructure's effect on the property of interest. | 2022-06-16 |
20220188656 | A COMPUTER CONTROLLED METHOD OF OPERATING A TRAINING TOOL FOR CLASSIFYING ANNOTATED EVENTS IN CONTENT OF DATA STREAM - Accurate real time automatic detection of events in content of a data stream, such as a transition to a commercial block in the content of a broadcast audio/video data stream, relies on a trainable event classifier that operates on a well-balanced training set input to the classifier. The present disclosure provides a computer controlled method of operating a training tool for classifying events annotated in the content of a data stream. The training tool presents training samples comprising separators and corresponding descriptors that relate to trigger features obtained from variations in parameters of the annotated data stream, and derived features restoring relationships between various separators and corresponding descriptors. | 2022-06-16 |
20220188657 | SYSTEM AND METHOD FOR AUTOMATED RETROSYNTHESIS - A system and method for automated retrosynthesis which can reliably identify valid and practical precursors and reaction pathways. The methodology involves a k-beam recursive process wherein at each stage of recursion, retrosynthesis is performed using a library of molecule disconnection rules to identify possible precursor sets, validation of the top k precursor sets is performed using a transformer-based forward reaction prediction scoring system, the best candidate of the top k precursor sets is selected, and a database is searched to determine whether the precursors are commercially available. The recursion process is repeated until a valid chain of chemical reactions is found wherein all precursors necessary to synthesize the target molecule are found to be commercially available. | 2022-06-16 |
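The k-beam recursion in 20220188657 can be illustrated on a toy problem where molecules are strings, the disconnection library is a dict, and a trivial scorer stands in for the transformer-based forward-reaction model. Everything here (rules, catalogue, scoring) is a hypothetical stand-in, not the patent's data:

```python
# Toy k-beam retrosynthesis: expand the target with disconnection rules,
# keep the top-k precursor sets by score, and recurse until every leaf
# precursor is "commercially available".
RULES = {          # hypothetical disconnection library
    "D": [("B", "C"), ("A", "C")],
    "C": [("A", "B")],
}
AVAILABLE = {"A", "B"}   # hypothetical commercial catalogue

def score(precursors):
    # Stand-in for forward-reaction prediction: prefer available, small sets.
    return sum(1 for p in precursors if p in AVAILABLE) - 0.1 * len(precursors)

def retrosynthesize(target, k=2, depth=5):
    """Return a list of (precursors, product) steps, or None if no route."""
    if target in AVAILABLE:
        return []
    if depth == 0 or target not in RULES:
        return None
    candidates = sorted(RULES[target], key=score, reverse=True)[:k]  # top-k beam
    for precursors in candidates:
        sub_routes = [retrosynthesize(p, k, depth - 1) for p in precursors]
        if all(r is not None for r in sub_routes):
            steps = [step for r in sub_routes for step in r]
            return steps + [(precursors, target)]
    return None

route = retrosynthesize("D")
```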
20220188658 | METHOD FOR AUTOMATICALLY COMPRESSING MULTITASK-ORIENTED PRE-TRAINED LANGUAGE MODEL AND PLATFORM THEREOF - Disclosed is a method for automatically compressing a multi-task-oriented pre-trained language model, and a platform therefor. According to the method, a meta-network of a structure generator is designed, a knowledge distillation coding vector is constructed based on a knowledge distillation method of Transformer layer sampling, and a distillation structure model corresponding to a currently input coding vector is generated by using the structure generator; at the same time, a Bernoulli distribution sampling method is provided for training the structure generator; in each iteration, each encoder unit is transferred by Bernoulli distribution sampling to form a corresponding coding vector; by changing the coding vector input to the structure generator and a small batch of training data, the structure generator and the corresponding distillation structure are jointly trained, and a structure generator capable of generating weights for different distillation structures can be acquired. | 2022-06-16 |
20220188659 | SYSTEM AND METHOD FOR PATIENT HEALTH DATA PREDICTION USING KNOWLEDGE GRAPH ANALYSIS - A system and method for patient health data prediction and analysis which utilizes an automated text mining tool to automatically format ingested electronic health record data to be added to a knowledge graph, which enriches the edges between nodes of the knowledge graph with fully interactive edge data, which can extract a subgraph of interest from the knowledge graph, and which analyzes the subgraph of interest to generate a set of variables that define the subgraph of interest. The system utilizes a knowledge graph and data analysis engine capabilities of the data platform to extract deeper insights based upon the enriched edge data. | 2022-06-16 |
20220188660 | SYSTEMS AND METHODS FOR PROCESSING DATA FOR STORING IN A FEATURE STORE AND FOR USE IN MACHINE LEARNING - Systems and methods for processing data for use in machine learning models, including receiving a request to generate a pipeline including two or more tasks, the request defining which features to ingest and output and instructions for processing the features; generating the pipeline including based on the request; for one or more producer tasks: retrieving the ingestible features from one or more databases, processing the features, and outputting curated features; for one or more consumer tasks: retrieving the ingestible features from a previous task in the pipeline upon which the consumer task depends, processing the features, and outputting curated features; storing the curated features associated with one or more tasks of the two or more tasks in a feature store; and providing the stored curated features associated with the one or more tasks of the two or more tasks to a machine learning model for ingestion. | 2022-06-16 |
20220188661 | Stateful, Real-Time, Interactive, and Predictive Knowledge Pattern Machine - This disclosure describes a knowledge pattern machine that is distinct from and goes beyond a traditional artificially intelligent predictive knowledge system employing simple domain-specific numerical regression models. Rather than generating purely quantitative projections within a static set of parameters and data, the disclosed pattern machine uses various layers of artificial intelligence to recognize and derive dynamically evolving predictive patterns and correlations among quantitative and/or qualitative information pertaining to one or multiple domains. The pattern machine extracts knowledge items, including various signals, events, properties, and correlations therebetween, and predicts future trends and evolvements of relating knowledge items to automatically and intelligently answer user queries. The generated predictive answers are rendered as reports updated in real-time without user interference as the underlying data sources evolve over time and are sharable among different users at various levels. The various knowledge items are timestamped and used to further yield a stateful pattern machine. | 2022-06-16 |
20220188662 | CRITICALITY DETECTION FOR AUTOMATION RISK MITIGATION - A planned action is evaluated to determine a risk of failure. Context information of a target application of the planned action is also evaluated to determine a context risk. Based on the context risk and the failure risk, an overall risk level is determined for the planned action. This overall risk level is compared to a threshold; if the risk level is higher than the threshold, a user may be prompted to approve the planned action. | 2022-06-16 |
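The gating in 20220188662 — combine a failure risk and a context risk into an overall level, then prompt for approval only above a threshold — can be sketched as below. The weighted average is just one illustrative combination; the abstract does not prescribe a formula, and the weights and threshold are assumptions:

```python
def overall_risk(failure_risk, context_risk, w_failure=0.6, w_context=0.4):
    """Combine per-action failure risk with target-application context risk.
    A weighted average is one possible combination; not from the patent."""
    return w_failure * failure_risk + w_context * context_risk

def evaluate_action(failure_risk, context_risk, threshold=0.5):
    risk = overall_risk(failure_risk, context_risk)
    if risk > threshold:
        return "prompt_user_for_approval"
    return "execute_automatically"

low = evaluate_action(failure_risk=0.2, context_risk=0.1)   # 0.16 -> execute
high = evaluate_action(failure_risk=0.9, context_risk=0.8)  # 0.86 -> prompt
```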
20220188663 | AUTOMATED MACHINE LEARNING MODEL SELECTION - An approach to identifying architectures of machine learning models meeting a user defined constraint. The approach can receive input associated with evaluating machine learning models from a user. The approach can determine acceptable architectural templates to evaluate the machine learning models based on the input and determine a list of architectures and metrics based on a calculation of maximum neural network sizes of the acceptable architectural templates not exceeding the constraint. The approach can send the list of architectures and metrics to the user for selection. | 2022-06-16 |
20220188664 | MACHINE LEARNING FRAMEWORKS UTILIZING INFERRED LIFECYCLES FOR PREDICTIVE EVENTS - There is a need for more accurate and more efficient predictive data analysis steps/operations. This need can be addressed by, for example, techniques for efficient predictive data analysis steps/operations. In one example, a method includes mapping a primary event having a primary event code to a related subset of a plurality of candidate secondary events by at least processing one or more lifecycle-related attributes for the primary event code using a lifecycle inference machine learning model to detect an inferred lifecycle for the primary event. | 2022-06-16 |
20220188665 | GAS-OIL SEPARATION PLANT VIRTUAL WATER CUT PREDICTOR BASED ON SUPERVISED LEARNING FRAMEWORK ON TIME SERIES DATA - The present disclosure describes systems and methods for accessing data from a gas oil separation plant (GOSP) facility, wherein the data includes measurements at various locations inside the GOSP facility and measurements of water cut of the GOSP facility; selecting, based on feature engineering, a subset of features corresponding to the measurements at various locations inside the GOSP facility, wherein the subset of features are more likely to impact the water cut of the GOSP facility than unselected features; and based on the subset of features, training a predictive model capable of predicting the water cut of the GOSP facility based on the measurements of water cut of the GOSP facility, wherein the training is based on, at least in part, (i) a subset of the measurements at various locations inside the GOSP facility and (ii) a subset of the measurements of water cut of the GOSP facility. | 2022-06-16 |
20220188666 | PATH-SUFFICIENT EXPLANATIONS FOR MODEL UNDERSTANDING - An approach to generate a path of minimally sufficient explanations for improving model understanding. Data received from a user is iteratively processed to generate minimally sufficient explanations, and the input of each subsequent explanation determination is constrained to the output of the prior determination. | 2022-06-16 |
20220188667 | VEHICLE PREDICTION MODULE EVALUATION AND TRAINING - A method includes accessing perception data generated based on first sensor data and generating first prediction data using a prediction module based on the perception data. The method includes capturing second sensor data while the vehicle is operating according to a first planned trajectory based on the first prediction data and a path planning parameter. The method includes generating a simulation for evaluating the prediction module, including generating second prediction data based on the second sensor data and generating a second planned trajectory. The method includes, subsequent to determining that a difference between the first planned trajectory and the second planned trajectory fails to satisfy predetermined prediction criteria, identifying an object for which the prediction module underperforms relative to the predetermined prediction criteria and updating the prediction module based on the identified object and data associated with the second prediction data. | 2022-06-16 |
20220188668 | SYSTEM AND METHOD FOR DETERMINING VARIATIONS IN ESTIMATED VEHICLE RANGE - An alternative range estimating system for a vehicle includes one or more processors and a memory communicably coupled to the one or more processors. The memory stores a removable object selection module including computer-readable instructions that when executed by the one or more processors cause the one or more processors to control operation of a vehicle input/output system to display a selectable representation of each of one or more removable objects carried by the vehicle. The module may acquire a user selection of at least one removable object from the removable objects displayed. The module may acquire an estimated alternative vehicle range determined using an estimated weight of the at least one removable object. The module may then control operation of the input/output system to display the estimated alternative vehicle range. | 2022-06-16 |
20220188669 | PREDICTION METHOD FOR SYSTEM ERRORS - The present invention discloses a prediction method for system errors, applied in a prediction system that predicts errors of a monitored system. The method comprises the steps of: pre-processing training data formed from data points at time slots to generate features corresponding to the data points of each time slot, and extracting a frequency-based feature for each time slot according to the distribution of clustering, grouping or classification of the corresponding features in the time slot preceding the current one; and using a machine learning algorithm, taking model-building data derived from the corresponding features and the frequency-based feature as input, to build a prediction model for predicting and alerting on future errors of the monitored system. | 2022-06-16 |
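One plausible reading of the frequency-based feature in 20220188669 is a normalized histogram of how the previous time slot's points distributed over clusters. The sketch below follows that reading; the cluster labels and slot layout are invented for illustration:

```python
from collections import Counter

def frequency_feature(prev_slot_clusters, n_clusters):
    """Normalized histogram of cluster assignments from the previous time slot.
    One illustrative interpretation of the abstract's 'frequency-based feature'."""
    counts = Counter(prev_slot_clusters)
    total = len(prev_slot_clusters) or 1
    return [counts.get(c, 0) / total for c in range(n_clusters)]

# Slots of cluster labels produced by some upstream clustering step.
slots = [[0, 0, 1, 2], [1, 1, 1, 0], [2, 2, 0, 1]]
features = [frequency_feature(slots[t - 1], n_clusters=3)
            for t in range(1, len(slots))]
```

Each slot from the second onward then carries a feature vector summarizing the preceding slot, which can feed the model-building step.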
20220188670 | MAINTENANCE COMPUTING SYSTEM AND METHOD FOR AIRCRAFT WITH PREDICTIVE CLASSIFIER - A computing system includes a processor and a non-volatile memory storing executable instructions that, in response to execution by the processor, cause the processor to execute an inspection classifier including at least a first artificial intelligence model, the inspection classifier being configured to receive run-time event input data from a plurality of data sources associated with an aircraft, the data sources including structural health monitoring sensors instrumented on the aircraft; extract features of the run-time event input data; determine a predicted inspection classification based upon the extracted features, the predicted inspection classification being one of a plurality of candidate inspection classifications; and output the predicted inspection classification. | 2022-06-16 |
20220188671 | SYSTEMS AND METHODS FOR USING MACHINE LEARNING TO IMPROVE PROCESSES FOR ACHIEVING READINESS - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for using machine learning to improve processes for achieving readiness. In some implementations, a database is accessed to obtain status data that indicates activities or attributes of a subject. One or more readiness scores indicating a level of capability of the subject to satisfy one or more readiness criteria are generated. Data is accessed indicating multiple candidate actions for improving capability of the subject to satisfy one or more readiness criteria. A subset of the candidate actions for the subject are selected based on the one or more readiness scores generated using the one or more models. Output is provided to cause one or more of the actions in the selected subset to be performed or cause an indication one or more of the actions in the selected subset to be presented on a user interface. | 2022-06-16 |
20220188672 | SYSTEMS AND METHODS FOR AUTOMATIC EVENT OUTCOME PREDICTION, CONFIRMATION, AND VALIDATION USING MACHINE LEARNING - Systems and methods for event outcome validation are provided. The system receives a user input indicative of an event and at least one anticipated outcome of the event to be wagered on by the user. The system receives confirmation data associated with an outcome of the event from at least one confirmation data source confirming the outcome of the event and classifies the confirmation data utilizing at least one machine learning algorithm. The system determines a threshold of confirmation data sources to validate the outcome of the event and utilizes the at least one machine learning algorithm to determine a reduced threshold of confirmation data sources to validate the outcome of the event based on at least one of the classified confirmation data and a confirmation rating of the at least one confirmation data source. The system validates the outcome of the event based on the reduced threshold. | 2022-06-16 |
20220188673 | MIXED-PRECISION AI PROCESSOR AND OPERATING METHOD THEREOF - A mixed-precision artificial intelligence (AI) processor and an operating method thereof are provided. The AI processor includes a first calculation module, a second calculation module and a control module. The first calculation module is configured to perform calculation based on data with a first format. The second calculation module is configured to perform calculation based on data with a second format different from the first format. The control module is coupled to the first calculation module and the second calculation module to select either the first calculation module or the second calculation module to perform calculation based on input data according to a calculation strategy. | 2022-06-16 |
20220188674 | MACHINE LEARNING CLASSIFIERS PREDICTION CONFIDENCE AND EXPLANATION - A method, a computer system, and a computer program product for generating explanations for different confidence levels of machine learning classifiers is provided. Embodiments of the present invention may include obtaining a dataset. Embodiments of the present invention may include training a first classifier using the dataset to generate probabilities. Embodiments of the present invention may include generating confidence scores using the first classifier. Embodiments of the present invention may include defining targeted confidence zones by transforming the generated probabilities into the confidence scores. Embodiments of the present invention may include training a second classifier to derive explanations. Embodiments of the present invention may include providing the explanations as an output. | 2022-06-16 |
20220188675 | DATA PREPROCESSING SYSTEM MODULE USED TO IMPROVE PREDICTIVE ENGINE ACCURACY - An apparatus used to provide preprocessed variables to a predictive engine. The predictive engine generates predictive results, based on the variables, to automate well site operations. The apparatus comprises an analysis module, a pattern recognition module, and a library module. The analysis module identifies a well site operation by examining a well site operation variable, determines categories and standard operating procedures associated with the categories using the well site operation and a-priori information, and searches a library of historical information using the categories. The historical information comprises classified procedures and recommendations of historic well site operations. The pattern recognition module identifies a pattern using a statistics-based algorithm; the algorithm uses the standard operating procedures, the categories, and the classified procedures and recommendations, and the pattern indicates a deviation from the standard operating procedure. The library module classifies the well site operation variables and stores the classified variables. | 2022-06-16 |
20220188676 | INFERENCE MODEL OPTIMIZATION - An approach to optimize performance for large scale inference models. Data in the form of images is received from sensors such as cameras. The data is processed to generate data tags associated with the context of the image and portion the images. Model tags are generated based on data characteristics or user input. The tags and their associated data are added to a time-based queue for delivery to the appropriate inference models. Based on the embedded delivery time and frequency, the portioned images are delivered to the appropriate inference models. | 2022-06-16 |
20220188677 | SYSTEMS AND METHODS IMPLEMENTING AN INTELLIGENT MACHINE LEARNING TUNING SYSTEM PROVIDING MULTIPLE TUNED HYPERPARAMETER SOLUTIONS - Disclosed examples include after a first tuning of hyperparameters in a hyperparameter space, selecting first hyperparameter values for respective ones of the hyperparameters; generating a polygonal shaped failure region in the hyperparameter space based on the first hyperparameter values; setting the first hyperparameter values to failure before a second tuning of the hyperparameters; and selecting second hyperparameter values for the respective ones of the hyperparameters in a second tuning region after the second tuning of the hyperparameters in the second tuning region, the second tuning region separate from the polygonal shaped failure region. | 2022-06-16 |
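The core of 20220188677 — rejecting candidate hyperparameter values that fall inside a polygonal failure region — can be illustrated with a standard ray-casting point-in-polygon test. The hyperparameter names and region coordinates below are invented for illustration:

```python
def in_polygon(point, polygon):
    """Ray-casting point-in-polygon test for a 2-D hyperparameter point."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                  # edge straddles the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical failure region built around previously failed (lr, momentum) pairs.
failure_region = [(0.1, 0.8), (0.3, 0.8), (0.3, 0.95), (0.1, 0.95)]
candidates = [(0.2, 0.9), (0.5, 0.5), (0.05, 0.99)]
viable = [c for c in candidates if not in_polygon(c, failure_region)]
```

A subsequent tuning round would only sample from regions that survive this filter, keeping the second tuning region separate from the failure polygon.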
20220188678 | COMPUTER-READABLE RECORDING MEDIUM STORING OPTIMIZATION PROGRAM, OPTIMIZATION METHOD, AND INFORMATION PROCESSING APPARATUS - A recording medium stores an optimization program for causing a computer to execute a process including: acquiring information on an evaluation function obtained by converting a problem; determining values of temperature parameters used for solution processing of an optimal solution by a replica exchange method; setting each of values of the temperature parameters to any one of replicas; and executing the solution processing by performing, for each of the replicas, update processing of repeating update of any value of state variables included in the evaluation function in accordance with a first transition probability, and repeating exchange processing of exchanging, between the replicas, any values of the temperature parameters set for each of the replicas or values of the state variables in each of the replicas, in accordance with an exchange probability that satisfies an invariant distribution condition of probability distribution obtained by the first transition probability. | 2022-06-16 |
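The replica exchange method in 20220188678 is classical parallel tempering: each replica runs Metropolis updates at its own temperature, and neighbouring replicas swap states with the standard acceptance probability min(1, exp(Δβ·ΔE)), which preserves the invariant distribution. A minimal sketch on a toy 1-D evaluation function (the energy, temperatures, and step counts are illustrative):

```python
import math
import random

def energy(x):
    return (x - 2.0) ** 2          # toy evaluation function to minimize

def replica_exchange(betas, steps, rng):
    """Parallel tempering: per-replica Metropolis updates plus neighbour swaps."""
    states = [rng.uniform(-10, 10) for _ in betas]
    for _ in range(steps):
        # Metropolis update in each replica at its own inverse temperature.
        for i, beta in enumerate(betas):
            proposal = states[i] + rng.gauss(0, 1)
            delta = energy(proposal) - energy(states[i])
            if delta <= 0 or rng.random() < math.exp(-beta * delta):
                states[i] = proposal
        # Attempt swaps between neighbouring temperatures; acceptance
        # min(1, exp(d_beta * d_e)) satisfies detailed balance.
        for i in range(len(betas) - 1):
            d_beta = betas[i + 1] - betas[i]
            d_e = energy(states[i + 1]) - energy(states[i])
            if rng.random() < min(1.0, math.exp(d_beta * d_e)):
                states[i], states[i + 1] = states[i + 1], states[i]
    return states

rng = random.Random(1)
final = replica_exchange(betas=[0.1, 1.0, 10.0], steps=2000, rng=rng)
best = min(final, key=energy)
```

Hot replicas (small beta) explore widely while cold replicas refine; the swaps let good states migrate toward the coldest replica.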
20220188679 | QUANTUM RESOURCE ESTIMATION USING A RE-PARAMETERIZATION METHOD - Systems, computer-implemented methods, and computer program products to facilitate estimation of quantum resources to calculate an expectation value of a stochastic process using a re-parameterization method are provided. According to an embodiment, a system can comprise a processor that executes computer executable components stored in memory. The computer executable components can comprise a re-parameterization component that applies a quantum fault-tolerant operation to a variationally prepared quantum state corresponding to a probability distribution to produce a quantum state corresponding to a target probability distribution. The computer executable components can further comprise an estimation component that estimates at least one defined criterion of a quantum computer to be used to compute an expectation value of a stochastic process associated with the target probability distribution. | 2022-06-16 |
20220188680 | QUANTUM CIRCUIT OPTIMIZATION ROUTINE EVALUATION AND KNOWLEDGE BASE GENERATION - Systems, computer-implemented methods, and computer program products to facilitate evaluation of quantum circuit optimization routines and knowledge base generation are provided. According to an embodiment, a system can comprise a processor that executes computer executable components stored in memory. The computer executable components can comprise a compilation component that concurrently executes different quantum circuit optimization sequences on multiple copies of a quantum circuit. The computer executable components can further comprise an identification component that identifies at least one of the different quantum circuit optimization sequences that generates an output quantum circuit comprising defined criteria. | 2022-06-16 |
20220188681 | COMPILATION OF A QUANTUM PROGRAM - Embodiments are provided for compilation of a quantum program. In some embodiments, a system can include a processor that executes computer-executable components stored in memory. The computer-executable components can include an identification component that selects a subgraph that is common among a first commutation directed acyclic graph (DAG) and a second commutation DAG. The subgraph has an upper-bound size that is greater than a threshold size. The first commutation DAG represents a first quantum circuit of a set of quantum circuits and the second commutation DAG represents a second quantum circuit of the set of quantum circuits. The computer-executable components also include a compilation component that compiles a quantum subcircuit corresponding to the subgraph. The computer-executable components further include a configuration component that replaces the quantum subcircuit in the first quantum circuit with the compiled quantum subcircuit. | 2022-06-16 |
20220188682 | READOUT-ERROR MITIGATION FOR QUANTUM EXPECTATION - Techniques for mitigating readout error for quantum expectation are presented. Calibration component applies first random Pauli gates to qubits at first output of first circuit prior to first readout measurements of the qubits. Estimation component applies second random Pauli gates to qubits at second output of second circuit prior to second readout measurements of the qubits, and generates an error-mitigated readout determination based on first random Pauli gates applied to qubits at first circuit output and second random Pauli gates applied to qubits at second circuit output. Calibration component determines calibration data based on first readout measurements. Estimation component determines estimation data based on second readout measurements. Estimation component determines normalization scalar value based on the calibration data, determines estimation scalar value based on the estimation data, and determines the error-mitigated readout determination associated with a circuit of interest based on the normalization scalar value and estimation scalar value. | 2022-06-16 |
20220188683 | VECTOR SIGNAL GENERATOR OPERATING ON MICROWAVE FREQUENCIES, AND METHOD FOR GENERATING TIME-CONTROLLED VECTOR SIGNALS ON MICROWAVE FREQUENCIES - A vector signal generator is capable of operating on microwave frequencies. It comprises a microwave resonator, an output for coupling microwave photons out of said microwave resonator, and a Josephson junction or junction array coupled to the microwave resonator for emitting microwave signals into the microwave resonator. A biasing circuit is provided for applying a bias to the Josephson junction or junction array. A tunable attenuator is coupled to said microwave resonator. | 2022-06-16 |
20220188684 | Quantum Parallel Event Counter - A method, apparatus, and system for counting events. A set of quantum registers is reset. A Hadamard operator is applied to the set of quantum registers. A quantum instruction is executed on the set of quantum registers, wherein the quantum instruction incorporates a function for an event vector comprising events identified by bits. The Hadamard operator is applied to the set of quantum registers after executing the quantum instruction. The set of quantum registers is measured to form a measurement of the set of quantum registers. An approximate count of the events is determined using the measurement of the set of quantum registers. | 2022-06-16 |
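The Hadamard-oracle-Hadamard flow in this abstract can be illustrated with a small state-vector simulation. This is a hedged sketch only: it assumes the quantum instruction acts as a phase oracle marking event bits, which yields a one-query approximate count from the probability of measuring the all-zeros state; the patent's actual circuit may differ.

```python
import numpy as np

def approximate_count(event_bits):
    """Simulate H -> phase oracle -> H on n qubits and estimate the number
    of set bits in the event vector (illustrative sketch of the abstract)."""
    N = len(event_bits)                      # N = 2**n basis states
    n = int(np.log2(N))
    assert 2 ** n == N, "event vector length must be a power of two"
    # Uniform superposition after the first Hadamard layer.
    state = np.full(N, 1.0 / np.sqrt(N))
    # Phase oracle: flip the sign of amplitudes where an event occurred.
    state = state * np.where(np.asarray(event_bits) == 1, -1.0, 1.0)
    # Second Hadamard layer: the |0...0> amplitude is the mean amplitude.
    amp0 = state.sum() / np.sqrt(N)
    # Probability of measuring |0...0> is ((N - 2k)/N)**2 for k events.
    p0 = amp0 ** 2
    k = N * (1.0 - np.sqrt(p0)) / 2.0        # assumes k <= N/2
    return round(k)

print(approximate_count([0, 1, 0, 0, 1, 0, 1, 0]))  # prints 3
```

A single oracle query suffices here because the measured probability encodes the event count directly, up to the sign ambiguity noted in the comment.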
20220188685 | HYBRID PHOTONICS-SOLID STATE QUANTUM COMPUTER - There is described herein a quantum computing system, quantum processor, and method of operating a quantum computing system. The quantum computing system comprises a quantum control system configured for at least one of delivery and receipt of multiplexed optical signals. At least one optical fiber is coupled to the quantum control system for carrying the multiplexed optical signals, and a quantum processor is disposed inside a cryogenics apparatus and coupled to the at least one optical fiber. The quantum processor comprises: at least one converter configured for converting between the multiplexed optical signals and microwave signals at different frequencies; and a plurality of solid-state quantum circuit elements coupled to the at least one converter and addressable by respective ones of the microwave signals at different frequencies. | 2022-06-16 |
20220188686 | SERVICE FOR MANAGING QUANTUM COMPUTING RESOURCES - Methods, systems, and computer-readable media for a service for managing quantum computing resources are disclosed. A task management service receives a description of a task specified by a client. From a pool of computing resources of a provider network, the service selects a quantum computing resource for implementation of the task. The quantum computing resource comprises a plurality of quantum bits. The service causes the quantum computing resource to run a quantum algorithm associated with the task. The service receives one or more results of the quantum algorithm from the quantum computing resource. | 2022-06-16 |
20220188687 | QUANTUM DEVICE WITH MULTIPLEXED ELECTROMETER OUTPUT SIGNALS - A quantum device including: | 2022-06-16 |
20220188688 | METHOD AND SYSTEM FOR DISTRIBUTED TRAINING USING SYNTHETIC GRADIENTS - A training node may include a first processor coupled to a first memory, and a second processor coupled to a second memory. The training node may further include a synthetic gradient processing unit (SGPU) coupled to a third memory, the first processor and the second processor. A portion of an electronic model may be disposed in the first memory, the second memory, and the third memory. The SGPU may generate a synthetic gradient signal based on an error data signal from the first processor and the portion of the electronic model. The synthetic gradient signal may update the electronic model during a training operation for the electronic model. | 2022-06-16 |
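The decoupling idea in this abstract can be sketched in a few lines: a synthetic gradient module predicts the gradient of the loss with respect to a hidden activation from that activation alone, so the first stage can update without waiting for full backpropagation. Everything below is an assumption for illustration (a linear SGPU, a toy two-stage linear model); the patent does not specify these forms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-stage model: h = W1 x, y = W2 h, loss = 0.5*(y - t)^2.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(1, 4))
# SGPU stand-in: a linear map predicting dLoss/dh from h alone.
M = np.zeros((4, 4))

def train_step(x, t, lr=0.01):
    global W1, W2, M
    h = W1 @ x
    y = W2 @ h
    err = y - t
    true_grad_h = W2.T @ err            # exact gradient, used to train the SGPU
    syn_grad_h = M @ h                  # synthetic gradient from activations
    # First stage updates immediately using the synthetic gradient signal.
    W1 -= lr * np.outer(syn_grad_h, x)
    # Second stage updates with the exact error signal.
    W2 -= lr * np.outer(err, h)
    # The SGPU regresses its prediction toward the true gradient.
    M -= lr * np.outer(syn_grad_h - true_grad_h, h)
    return float(0.5 * err @ err)

x, t = np.array([1.0, 0.5, -0.2]), np.array([1.0])
losses = [train_step(x, t) for _ in range(200)]
print(losses[0] > losses[-1])  # loss decreases over training
```

In a distributed training node the three parameter blocks could live in separate memories, as the abstract describes, with only activations and synthetic gradients crossing between them.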
20220188689 | GRAPH-TO-SIGNAL DOMAIN BASED DATA INTERCONNECTION CLASSIFICATION SYSTEM AND METHOD - A system and method for performing a projected graph based prediction is provided. The method includes obtaining data from a plurality of servers, determining data entities and dataflows between the data entities based on the obtained data, and generating a first graph including the data entities as nodes and the dataflows between the nodes. The method further includes identifying data concepts based on the obtained data and modifying the first graph by inserting the identified data concepts to provide a second graph. The second graph is further projected to generate a sub-graph, which is then utilized for a prediction algorithm to determine a predicted dataflow between at least two nodes connected to a data concept in the sub-graph. | 2022-06-16 |
20220188690 | MACHINE LEARNING SECURITY THREAT DETECTION USING A META-LEARNING MODEL - A computer-implemented method includes receiving at a threat detection system monitoring data in real-time from online activity in a network, the threat detection system including a machine learning model, and analyzing the monitoring data via the machine learning model to identify one or more anomalies in the monitoring data associated with a security threat to the network, the machine learning model trained to have one or more learning parameters. The method also includes receiving a subset of the monitoring data at a meta-learning module, storing the subset as time-based historical data, inputting the historical data at a meta-learning model, calculating an update policy prescribing a change to the one or more learning parameters based on the historical data, and applying the update policy to the machine learning model. | 2022-06-16 |
20220188691 | Machine Learning Pipeline Generation - The present disclosure includes a computer implemented method, system, and computer program product for automated generation of trained machine learning models and a machine learning model created using the method. The method may comprise receiving a space of possible automatically generated trained machine learning model pipelines, the space defined by a context-free grammar, generating, by a processor, a planning model from the context-free grammar, and automatically generating, by the processor, a plurality of candidate trained machine learning pipelines based upon the planning model. | 2022-06-16 |
20220188692 | PRIVATE COMPUTATION OF AN AGENT DATA ATTRIBUTION SCORE IN COLLABORATED TASK - A computer-implemented method of determining an agent data attribution and selection to perform a collaborative data-related task includes computing an agent data attribution score for each agent of the plurality of agents associated with the collaborative data-related task. A subset of the plurality of agents that participate in the collaborative data-related task is selected based on the agent data attribution score. An instruction is transmitted to the selected subset of the plurality of agents for each agent to conduct a respective portion of the collaborative data-related task. | 2022-06-16 |
20220188693 | SELF-IMPROVING BAYESIAN NETWORK LEARNING - A method, a computer system, and a computer program product for creating multiple models asynchronously is provided. Embodiments of the present invention may include receiving input data, wherein input data includes a full training dataset. Embodiments of the present invention may include building, asynchronously, one or more Bayesian network models using one or more portions of the input data on a first pipeline and building a free learning model using the full training dataset on a second pipeline. Embodiments of the present invention may include retrieving the one or more Bayesian network models from the first pipeline. Embodiments of the present invention may include retrieving the free learning model from the second pipeline. | 2022-06-16 |
20220188694 | AUTOMATICALLY CHANGE ANOMALY DETECTION THRESHOLD BASED ON PROBABILISTIC DISTRIBUTION OF ANOMALY SCORES - Approaches herein relate to model decay of an anomaly detector due to concept drift. Herein are machine learning techniques for dynamically self-tuning an anomaly score threshold. In an embodiment in a production environment, a computer receives an item in a stream of items. A machine learning (ML) model hosted by the computer infers by calculation an anomaly score for the item. Whether the item is anomalous or not is decided based on the anomaly score and an adaptive anomaly threshold that dynamically fluctuates. A moving standard deviation of anomaly scores is adjusted based on a moving average of anomaly scores. The moving average of anomaly scores is then adjusted based on the anomaly score. The adaptive anomaly threshold is then adjusted based on the moving average of anomaly scores and the moving standard deviation of anomaly scores. | 2022-06-16 |
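The self-tuning threshold described in this abstract can be sketched with exponentially weighted moving statistics. The EWMA decay, the k multiplier, and the warmup count below are assumptions for illustration; only the update order (variance from the current moving average, then the moving average from the new score, then the threshold from both) follows the abstract.

```python
class AdaptiveAnomalyDetector:
    """Sketch of a self-tuning anomaly cutoff: a moving average and a moving
    standard deviation of anomaly scores drive the adaptive threshold."""

    def __init__(self, alpha=0.1, k=3.0, warmup=5):
        self.alpha = alpha      # EWMA decay for the moving statistics
        self.k = k              # threshold = mean + k * std
        self.warmup = warmup    # observations before flagging begins
        self.n = 0
        self.mean = 0.0
        self.var = 0.0

    def observe(self, score):
        is_anomaly = False
        if self.n >= self.warmup:
            threshold = self.mean + self.k * self.var ** 0.5
            is_anomaly = score > threshold
        # Per the abstract's order: adjust the moving variance using the
        # current moving average, then adjust the moving average.
        self.var = (1 - self.alpha) * self.var + self.alpha * (score - self.mean) ** 2
        self.mean = (1 - self.alpha) * self.mean + self.alpha * score
        self.n += 1
        return is_anomaly

det = AdaptiveAnomalyDetector()
stream = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 5.0]   # last score drifts sharply
flags = [det.observe(s) for s in stream]
print(flags)  # only the final score is flagged
```

Because the threshold tracks the score distribution rather than a fixed constant, gradual concept drift raises or lowers the cutoff automatically while sudden outliers are still flagged.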
20220188695 | AUTONOMOUS VEHICLE SYSTEM FOR INTELLIGENT ON-BOARD SELECTION OF DATA FOR TRAINING A REMOTE MACHINE LEARNING MODEL - Systems and methods for on-board selection of data logs for training a machine learning model. The methods include, by an autonomous vehicle, receiving sensor data logs corresponding to surroundings of the autonomous vehicle from a plurality of sensors, identifying one or more events within each sensor data log. The methods also include, for each sensor data log: analyzing features of the identified one or more events within that sensor data log for determining whether that sensor data log satisfies one or more usefulness criteria for training a machine learning model, and transmitting that sensor data log to a remote computing device for training the machine learning model if that sensor data log satisfies one or more usefulness criteria for training the machine learning model. The features can include spatial features, temporal features, bounding box inconsistencies, or map-based features. | 2022-06-16 |
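The on-board selection step can be sketched as a predicate over each sensor data log: transmit only if at least one usefulness criterion holds. The criterion names and log structure below are hypothetical; the abstract names the feature families (spatial, temporal, bounding box inconsistencies, map-based) without fixing a schema.

```python
def should_transmit(log, criteria):
    """Return True if the sensor data log satisfies at least one
    usefulness criterion for training (criterion names are illustrative)."""
    events = log["events"]
    # Criterion 1: the log captured enough distinct events.
    if criteria.get("min_events") is not None and len(events) >= criteria["min_events"]:
        return True
    # Criterion 2: any event shows a bounding-box inconsistency.
    if criteria.get("bounding_box_inconsistency") and any(
        e.get("bbox_inconsistent") for e in events
    ):
        return True
    return False

logs = [
    {"id": 1, "events": [{"bbox_inconsistent": True}]},
    {"id": 2, "events": []},
    {"id": 3, "events": [{}, {}, {}]},
]
criteria = {"min_events": 3, "bounding_box_inconsistency": True}
selected = [log["id"] for log in logs if should_transmit(log, criteria)]
print(selected)  # prints [1, 3]
```

Filtering on the vehicle before transmission keeps bandwidth proportional to the number of useful logs rather than to raw sensor volume.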
20220188696 | APPARATUS FOR DETERMINING 3-DIMENSIONAL ATOMIC LEVEL STRUCTURE AND METHOD THEREOF - A data generating method includes: an atomic model generating step of generating one or more three-dimensional atomic models corresponding to a nanomaterial to be measured; a three-dimensional data generating step of generating three-dimensional atomic level structure volume data corresponding to the nanomaterial to be measured based on the one or more three-dimensional atomic models; a tilt series generating step of generating a tilt series by simulating three-dimensional tomography for a plurality of different angles in a predetermined angle range for at least some of the three-dimensional atomic level structure volume data; and a three-dimensional atomic structure tomogram volume data generating step of generating a three-dimensional atomic structure tomogram volume data set by performing three-dimensional reconstruction on at least some of the three-dimensional atomic level structure volume data based on the tilt series. | 2022-06-16 |
20220188697 | SUPPLEMENTING ARTIFICIAL INTELLIGENCE (AI) / MACHINE LEARNING (ML) MODELS VIA ACTION CENTER, AI/ML MODEL RETRAINING HARDWARE CONTROL, AND AI/ML MODEL SETTINGS MANAGEMENT - Supplementing artificial intelligence (AI)/machine learning (ML) models via an action center, providing AI/ML model retraining hardware control, and providing AI/ML model settings management are disclosed. AI/ML models may be deployed on hosting infrastructure where the AI/ML models can be called by robotic process automation (RPA) robots. When the performance of an AI/ML model falls below a threshold, the result of the AI/ML model prediction and other data is sent to an action center where a human reviews the data using a suitable application and approves the prediction or provides a correction if the prediction is wrong. This action center-approved result is then sent to the RPA robot to be used instead of the prediction from the AI/ML model. | 2022-06-16 |
20220188698 | MACHINE LEARNING TECHNIQUES FOR WEB RESOURCE INTEREST DETECTION - Disclosed embodiments include an event processor that identifies events generated by an entity from various resources. The event processor generates a resource cluster interest score based on the events indicating an interest level of the entity in multiple hostname resources belonging to a first party. The event processor identifies a topic cluster including multiple topics and generates a topic cluster interest score indicating an interest level of the entity in the topics. The event processor generates a weighted intent score based on the resource cluster interest score and the topic cluster interest score. The weighted intent score provides an indication of when the entity is interested in consuming resources, or interested in products/services, provided by the first party. Other embodiments may be described and/or claimed. | 2022-06-16 |
20220188699 | MACHINE LEARNING TECHNIQUES FOR WEB RESOURCE FINGERPRINTING - Disclosed embodiments include a resource classification system (RCS) that identifies one or more features in information objects (InObs) and uses the features to classify the InObs. The features may be based on structural semantics of the InObs, content semantics of InObs, content interaction behavior with the InObs, types of users accessing the InObs, and/or the like. The RCS may generate vectors that represent the different features. The vectors may be used to train a machine learning model to predict resource classifications of the InObs. The predicted resource classifications provide more accurate intent, consumption, and surge score predictions than existing solutions. Other embodiments may be described and/or claimed. | 2022-06-16 |
20220188700 | DISTRIBUTED MACHINE LEARNING HYPERPARAMETER OPTIMIZATION - Disclosed embodiments include a distributed hyperparameter (HP) tuning system, which includes a manager and a plurality of trainers. The manager continuously estimates HP sets for a machine learning (ML) model and distributes each HP set to respective trainers. Each trainer obtains a respective HP set and trains a local version of the ML model using the respective HP set. Each trainer determines a performance value for the HP set used to train its local version of the ML model, and sends the performance value and the HP set to the manager. The manager estimates a new HP set from the HP set received from each trainer. The HP set estimation continues until convergence takes place. Other embodiments may be described and/or claimed. | 2022-06-16 |
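The manager/trainer loop from this abstract can be sketched as follows. The estimator here is a simple random-neighborhood search around the best HP set so far, and the trainer is a stand-in scoring function; both are assumptions, since the abstract does not specify the estimation method or the model.

```python
import random

def train_and_score(hp):
    """Stand-in trainer: pretend validation accuracy peaks at lr = 0.1.
    A real trainer would fit a local copy of the ML model with this HP set."""
    return 1.0 - abs(hp["lr"] - 0.1)

def manager(rounds=40, seed=0):
    """Propose HP sets, 'distribute' them to trainers, collect
    (performance, hp) pairs, and estimate new sets near the best one."""
    rng = random.Random(seed)
    best_hp = {"lr": rng.uniform(0.0, 1.0)}
    best_score = train_and_score(best_hp)
    for _ in range(rounds):
        # Three candidate HP sets, one per trainer.
        candidates = [
            {"lr": min(1.0, max(0.0, best_hp["lr"] + rng.gauss(0.0, 0.1)))}
            for _ in range(3)
        ]
        # Trainers reply with (performance value, HP set) pairs.
        results = [(train_and_score(hp), hp) for hp in candidates]
        score, hp = max(results, key=lambda r: r[0])
        if score > best_score:
            best_score, best_hp = score, hp
    return best_hp, best_score

hp, score = manager()
print(round(hp["lr"], 3), round(score, 3))
```

In the real system the three scoring calls would run concurrently on separate trainer nodes; only the scalar performance value and the HP set travel back to the manager.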
20220188701 | INTERPRETATION OF MACHINE LEARNING CLASSIFICATIONS IN CLINICAL DIAGNOSTICS USING SHAPELY VALUES AND USES THEREOF - Shapley values (SVs) have become an important tool to further the goal of explainability of machine learning (ML) models. However, the computational load of exact SV calculations increases exponentially with the number of attributes. Hence, the calculation of SVs for models incorporating large numbers of interpretable attributes is problematic. Molecular diagnostic tests typically seek to leverage information from hundreds or thousands of attributes, often using training sets with fewer instances. Methods are described for evaluating SVs using Monte Carlo sampling or exact calculation in polynomial time (i.e., reasonably quickly and efficiently) using the architecture of a ML model designed for robust molecular test generation, and without requiring classifier retraining. | 2022-06-16 |
20220188702 | SYSTEM AND METHOD FOR ITERATIVE DATA CLUSTERING USING MACHINE LEARNING - Systems, methods, and non-transitory computer-readable storage media which train a machine learning algorithm using a training set of Requests for Proposals (RFPs), then cluster a second set of RFPs using the trained machine learning algorithm. Distinct clusters are then compared to historical data, and an outlier is identified. An alert regarding that outlier is then transmitted across a network to an entity associated with the outlier. | 2022-06-16 |
20220188703 | DATA GENERATION APPARATUS AND METHOD THEREOF - A data generation apparatus includes a data input unit that inputs an initial dataset, a data preprocessing unit that normalizes the initial dataset and splits the normalized initial dataset into an initial training dataset and an initial test dataset, a learning model generation unit that trains a first machine learning model using the initial training dataset and optimizes a hyperparameter of the first machine learning model using a predetermined number of cross-validations, thereby generating a first learning model, a validation unit that validates the generated first learning model using the initial test dataset, a semi-synthetic data generation unit that selects and generates a new parameter within a boundary space defined by the initial dataset and performs data prediction using the first learning model based on the new parameter to generate a semi-synthesized dataset, and a database that stores the initial dataset and the semi-synthesized dataset. | 2022-06-16 |
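The semi-synthetic generation step in this abstract can be sketched end to end: fit a first model on the initial data, sample new parameters inside the boundary space of that data, label them with model predictions, and store both datasets. Ordinary least squares stands in for the cross-validated first learning model; that substitution and the toy data are assumptions to keep the sketch runnable.

```python
import numpy as np

rng = np.random.default_rng(42)

# Initial dataset: y = 2*x0 + 3*x1 with a little noise, 40 samples.
X = rng.uniform(-1.0, 1.0, size=(40, 2))
y = X @ np.array([2.0, 3.0]) + rng.normal(0.0, 0.01, size=40)

# "First learning model": least squares stands in for the abstract's
# cross-validated model.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Semi-synthetic step: sample new parameters inside the boundary space
# of the initial dataset, then predict labels for them with the model.
lo, hi = X.min(axis=0), X.max(axis=0)
X_new = rng.uniform(lo, hi, size=(10, 2))
y_new = X_new @ coef                      # model-predicted labels

# "Database": initial rows plus the semi-synthesized rows.
X_all = np.vstack([X, X_new])
y_all = np.concatenate([y, y_new])
print(X_all.shape, y_all.shape)  # prints (50, 2) (50,)
```

Constraining new parameters to the boundary space keeps the synthetic points inside the region where the first model interpolates rather than extrapolates.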
20220188704 | Managing a Machine Learning Environment - A method and computing device are disclosed herein for managing a machine learning (ML) environment, the method comprising receiving, by a ML controller, ML model information from a ML application, the ML model information comprising a ML model definition and ML model metadata comprising information specifying a ML runtime to execute a ML model; and generating, by the ML controller, a model runner instance in an abstraction layer at the ML controller using the received ML model information, the model runner instance being configured to interact with the specified ML runtime hosted by a target ML platform to cause the ML runtime to execute the ML model. | 2022-06-16 |
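The abstraction-layer idea in this abstract can be sketched as a controller that resolves the runtime named in the model metadata and returns a runner bound to it. The runtime name, registry API, and toy model definition below are illustrative assumptions, not the patent's interfaces.

```python
from typing import Any, Callable, Dict

class MLController:
    """Sketch of an abstraction layer: model metadata names a ML runtime,
    and the controller builds a model runner instance bound to it."""

    def __init__(self):
        self._runtimes: Dict[str, Callable[[Any, Any], Any]] = {}

    def register_runtime(self, name: str, execute: Callable[[Any, Any], Any]):
        self._runtimes[name] = execute

    def create_model_runner(self, model_info: Dict[str, Any]):
        # Resolve the runtime specified in the ML model metadata.
        runtime_name = model_info["metadata"]["runtime"]
        execute = self._runtimes[runtime_name]
        model = model_info["definition"]
        # The model runner instance closes over the resolved runtime.
        return lambda inputs: execute(model, inputs)

controller = MLController()
# A toy "runtime" that applies a linear model definition to its input.
controller.register_runtime("toy-linear", lambda model, x: model["w"] * x + model["b"])

runner = controller.create_model_runner({
    "definition": {"w": 2.0, "b": 1.0},
    "metadata": {"runtime": "toy-linear"},
})
print(runner(3.0))  # prints 7.0
```

Because the application only hands over a definition plus metadata, swapping the target ML platform means registering a different runtime, with no change to the calling code.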
20220188705 | INTERACTIVE DIGITAL DASHBOARDS FOR TRAINED MACHINE LEARNING OR ARTIFICIAL INTELLIGENCE PROCESSES - The disclosed embodiments include computer-implemented processes that generate and maintain interactive digital dashboards for machine learning or artificial intelligence processes. For example, an apparatus may obtain process data associated with an execution of a plurality of machine learning or artificial intelligence processes. Based on the process data, the apparatus may determine, for each of the plurality of machine learning or artificial intelligence processes, a value of one or more metrics characterizing a status of one or more operations that support the execution of the corresponding machine learning or artificial intelligence process. Further, the apparatus may transmit status data that includes the one or more metric values and corresponding process identifiers to a device, which presents, for each of the machine learning or artificial intelligence processes, a graphical representation of at least one of the determined one or more metric values within a digital interface. | 2022-06-16 |
20220188706 | SYSTEMS AND METHODS FOR GENERATING AND APPLYING A SECURE STATISTICAL CLASSIFIER - There is provided a system for computing a secure statistical classifier, comprising: at least one hardware processor executing a code for: accessing code instructions of an untrained statistical classifier, accessing a training dataset, accessing a plurality of cryptographic keys, creating a plurality of instances of the untrained statistical classifier, creating a plurality of trained sub-classifiers by training each of the plurality of instances of the untrained statistical classifier by iteratively adjusting adjustable classification parameters of the respective instance of the untrained statistical classifier according to a portion of the training data serving as input and a corresponding ground truth label, and at least one unique cryptographic key of the plurality of cryptographic keys, wherein the adjustable classification parameters of each trained sub-classifier have unique values computed according to corresponding at least one unique cryptographic key, and providing the statistical classifier, wherein the statistical classifier includes the plurality of trained sub-classifiers. | 2022-06-16 |
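The key-dependent training in this abstract can be sketched by deriving a deterministic seed from each cryptographic key, so every sub-classifier's adjustable parameters take unique, key-bound values. The toy 1-D threshold classifier and the exact binding of key to parameters are assumptions for illustration.

```python
import hashlib
import random

def key_to_seed(key: bytes) -> int:
    """Derive a deterministic training seed from a cryptographic key."""
    return int.from_bytes(hashlib.sha256(key).digest()[:8], "big")

def train_subclassifier(portion, key, epochs=50):
    """Toy 1-D threshold 'classifier': the key-seeded RNG sets the starting
    threshold, which is then iteratively adjusted against the data portion."""
    rng = random.Random(key_to_seed(key))
    threshold = rng.uniform(-1.0, 1.0)              # key-dependent initialization
    for _ in range(epochs):
        for x, label in portion:
            if (x > threshold) != label:
                threshold += 0.1 * (x - threshold)  # nudge toward the sample
    return threshold

data = [(-0.9, False), (-0.5, False), (0.4, True), (0.8, True)]
keys = [b"key-one", b"key-two", b"key-three"]
subs = [train_subclassifier(data, k) for k in keys]

def classify(x):
    votes = sum(x > t for t in subs)                # vote over sub-classifiers
    return votes * 2 > len(subs)

print(classify(0.7), classify(-0.7))  # prints True False
```

Each sub-classifier converges to a valid decision boundary, yet the exact parameter values remain a function of its unique key, which is the property the abstract exploits.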
20220188707 | DETECTION METHOD, COMPUTER-READABLE RECORDING MEDIUM, AND COMPUTING SYSTEM - A computing system trains an inspector model to learn a decision boundary that divides a feature space of data into two application areas based on an output result of the operation model, the inspector model being configured to calculate a distance from the decision boundary to input data. The computing system calculates a first distance from the decision boundary to training data by inputting the training data to the inspector model. | 2022-06-16 |