21st week of 2021 patent application highlights part 48 |
Patent application number | Title | Published |
20210158129 | METHOD AND SYSTEM FOR GENERATING SYNTHETIC DATA USING A REGRESSION MODEL WHILE PRESERVING STATISTICAL PROPERTIES OF UNDERLYING DATA - A method for generating a synthetic dataset involves generating discretized synthetic data based on driving a model of a cumulative distribution function (CDF) with random numbers. The CDF is based on a source dataset. The method further includes generating the synthetic dataset from the discretized synthetic data by selecting, for inclusion into the synthetic dataset, values from a multitude of entries of the source dataset, based on the discretized synthetic data, and providing the synthetic dataset to a downstream application that is configured to operate on the source dataset. | 2021-05-27 |
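The abstract above describes driving an inverse of a CDF model with random numbers and resolving the discretized draws against actual source entries. A minimal sketch of that general idea (inverse-transform sampling from an empirical CDF; the function name `synthesize` and the uniform-driving scheme are illustrative assumptions, not the patented method):

```python
import numpy as np

def synthesize(source, n_samples, rng=None):
    """Draw synthetic values that follow the empirical CDF of `source`.

    Illustrative inverse-transform sampling: uniform random numbers drive
    the inverse of the empirical CDF, and each discretized draw is
    resolved by selecting an actual entry of the source dataset.
    """
    rng = np.random.default_rng(rng)
    ordered = np.sort(np.asarray(source))
    u = rng.random(n_samples)                     # uniform driving numbers
    idx = np.floor(u * len(ordered)).astype(int)  # discretized synthetic data
    return ordered[idx]                           # values taken from source

data = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
synthetic = synthesize(data, 1000, rng=0)
# every synthetic value is an actual source entry, so marginal
# statistics of the source are approximately preserved
```

Because each output is selected from the source dataset itself, a downstream application built for the source data can consume the synthetic dataset unchanged.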
20210158130 | DATA FORMAT TRANSFORM METHOD TO IMPROVE AI ENGINE MAC UTILIZATION - A data format converter rearranges data of an input image for input to a systolic array of multiply and accumulate processing elements. The image has a pixel height and a pixel width in a number of channels equal to a number of colors per pixel. The data format converter rearranges the data to a second, greater number of channels and inputs the second number of channels to one side of the systolic array. The second number of channels is less than or equal to the number of MAC PEs on the one side of the systolic array, and results in greater MAC PE utilization in the systolic array. | 2021-05-27 |
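Folding spatial blocks into channels (a space-to-depth rearrangement) is one way to raise a 3-channel image to a larger channel count that better fills one side of a systolic MAC array. This sketch assumes a simple 2x2 block fold; the patent's exact transform may differ:

```python
import numpy as np

def space_to_depth(img, block=2):
    """Rearrange an H x W x C image into (H/block) x (W/block) x (C*block^2).

    Illustrative sketch: folding spatial blocks into channels raises the
    channel count fed to one side of a systolic MAC array, improving
    utilization when the native channel count (e.g. 3 colors) is small.
    """
    h, w, c = img.shape
    img = img.reshape(h // block, block, w // block, block, c)
    img = img.transpose(0, 2, 1, 3, 4)          # group each block's pixels
    return img.reshape(h // block, w // block, c * block * block)

img = np.arange(4 * 4 * 3).reshape(4, 4, 3)
out = space_to_depth(img)                       # 3 channels become 12
```

With a 2x2 fold, 3 input channels become 12, so up to 12 rows of the array receive work instead of 3.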
20210158131 | HIERARCHICAL PARTITIONING OF OPERATORS - Methods and apparatuses for hierarchical partitioning of operators of a neural network for execution on an acceleration engine are provided. Neural networks are built in machine learning frameworks using neural network operators. The neural network operators are compiled into executable code for the acceleration engine. Development of new framework-level operators can outpace the compiler's capability to map the newly developed operators onto the acceleration engine. To enable neural networks to be executed on an acceleration engine, hierarchical partitioning can be used to partition the operators of the neural network. The hierarchical partitioning can identify operators that are supported by a compiler for execution on the acceleration engine, operators to be compiled for execution on a host processor, and operators to be executed on the machine learning framework. | 2021-05-27 |
20210158132 | EFFICIENT UTILIZATION OF PROCESSING ELEMENT ARRAY - A computer-implemented method includes receiving a neural network model for implementation using a processing element array, where the neural network model includes a convolution operation on a set of input feature maps and a set of filters. The method also includes determining, based on the neural network model, that the convolution operation utilizes less than a threshold number of rows in the processing element array for applying a set of filter elements to the set of input feature maps, where the set of filter elements includes one filter element in each filter of the set of filters. The method further includes generating, for the convolution operation and based on the neural network model, a first instruction and a second instruction for execution by respective rows in the processing element array, where the first instruction and the second instruction use different filter elements of a filter in the set of filters. | 2021-05-27 |
20210158133 | DEEP NEURAL NETWORK ACCELERATOR USING HETEROGENEOUS MULTIPLY-ACCUMULATE UNIT - A deep neural network accelerator includes a unit array including a first sub-array including a first operational unit and a second sub-array including a second operational unit. The first and second operational units have different sizes from each other, and the size of each operational unit is proportional to a cumulative importance value accumulated in that unit while performing a deep neural network operation. Each cumulative importance value is obtained by accumulating an importance for each weight mapped to the corresponding operational unit of the unit array. | 2021-05-27 |
20210158134 | ARITHMETIC DEVICES FOR NEURAL NETWORK - An arithmetic device includes an activation function (AF) control circuit and a data storage circuit. The AF control circuit is configured to generate an activation period signal, an activation active signal, and an activation read signal based on an activation control signal. The data storage circuit includes at least one memory bank that is activated based on a bank active signal that is generated based on the activation active signal. The data storage circuit is configured to output data stored in a memory cell array, which is selected by a row address and a column address, as activation data based on the activation read signal. | 2021-05-27 |
20210158135 | REDUCTION MODE OF PLANAR ENGINE IN NEURAL PROCESSOR - Embodiments relate to a neural processor that includes one or more neural engine circuits and planar engine circuits. The neural engine circuits can perform convolution operations of input data with one or more kernels to generate outputs. The planar engine circuit is coupled to the plurality of neural engine circuits. A planar engine circuit can be configured to operate in multiple modes. In a reduction mode, the planar engine circuit may process values arranged in one or more dimensions of input to generate a reduced value. The reduced values across multiple input data may be accumulated. The planar engine circuit may program a filter circuit as a reduction tree to gradually reduce the data into a reduced value. The reduction operation reduces the size of one or more dimensions of a tensor. | 2021-05-27 |
20210158136 | CALCULATION SCHEME DECISION SYSTEM, CALCULATION SCHEME DECISION DEVICE, CALCULATION SCHEME DECISION METHOD, AND STORAGE MEDIUM - A calculation scheme decision system includes a pre-calculation unit performing, in an execution environment in which calculation is performed, calculation for each of respective layers of the network structure using at least one of calculation schemes prepared in advance for the respective layers, a cost acquisition unit acquiring a calculation cost of at least one calculation scheme for each layer based on a result of the calculation by the pre-calculation unit, a decision unit selecting one calculation scheme for each layer based on the calculation cost from among at least one of the calculation schemes prepared in advance for the respective layers to associate the layer with the selected one calculation scheme, and a calculation unit performing the calculation for each of the respective layers of the network structure on input data in the execution environment using the calculation scheme associated with each layer. | 2021-05-27 |
20210158137 | NEW LEARNING DATASET GENERATION METHOD, NEW LEARNING DATASET GENERATION DEVICE AND LEARNING METHOD USING GENERATED LEARNING DATASET - Even if an existing learning dataset is limited, a new learning dataset with sufficient variation is generated. Therefore, for each of a plurality of learning data subsets, new input signals are generated from input signals of a plurality of pieces of learning data, and a plurality of pieces of new learning data that are respectively combinations of the new input signals and output signals of the corresponding learning data subset are generated. The input signals of the plurality of pieces of the learning data included in the corresponding learning data subset are divided into a first signal group and a second signal group, and the new input signals are generated by a learning device that is generated by performing learning by the first signal group set as an input signal set and the second signal group set as an output signal set. | 2021-05-27 |
20210158138 | GESTURE FEEDBACK IN DISTRIBUTED NEURAL NETWORK SYSTEM - A method for operating a distributed neural network having a plurality of intelligent devices and a server includes: generating, by a first intelligent device of the plurality of intelligent devices, a first output using a first neural network model running on the first intelligent device and using a first input vector to the first neural network model; outputting, by the first intelligent device, the first output; receiving, by the first intelligent device, a gesture feedback on the first output from a user; determining, by the first intelligent device, a user rating of the first output from the gesture feedback; labeling, by the first intelligent device, the first input vector with a first label in accordance with the user rating; and training, by the first intelligent device, the first neural network model using the first input vector and the first label. | 2021-05-27 |
20210158139 | METHODS AND SYSTEMS FOR GEOMETRY-AWARE IMAGE CONTRAST ADJUSTMENTS VIA IMAGE-BASED AMBIENT OCCLUSION ESTIMATION - Embodiments of the present invention provide systems, methods, and non-transitory computer storage media for generating an ambient occlusion (AO) map for a 2D image that can be combined with the 2D image to adjust the contrast of the 2D image based on the geometric information in the 2D image. In embodiments, using a trained neural network, an AO map for a 2D image is automatically generated without any predefined 3D scene information. Optimizing the neural network to generate an estimated AO map for a 2D image requires training, testing, and validating the neural network using a synthetic dataset comprised of pairs of images and ground truth AO maps rendered from 3D scenes. By using an estimated AO map to adjust the contrast of a 2D image, the contrast of the image can be adjusted to make the image appear lifelike by modifying the shadows and shading in the image based on the ambient lighting present in the image. | 2021-05-27 |
20210158140 | CUSTOMIZED MACHINE LEARNING DEMONSTRATIONS - A target description is received. Based on the target description, a set of artificial data is generated. A machine learning zero model is trained using the set of artificial data. The machine learning zero model is deployed as a service. A set of demonstration data is processed, using the service, and a user is notified of the results. | 2021-05-27 |
20210158141 | CONTROL INPUT SCHEME FOR MACHINE LEARNING IN MOTION CONTROL AND PHYSICS BASED ANIMATION - A method, system, and non-transitory instructions for control input, comprising: taking an integral of an output value from a Motion Decision Neural Network for a movable joint to generate an integrated output value; generating a subsequent output value using a machine learning algorithm that includes a sensor value and the integrated output value as inputs to the Motion Decision Neural Network; and imparting movement with the movable joint according to an integral of the subsequent output value. | 2021-05-27 |
20210158142 | MULTI-TASK FUSION NEURAL NETWORK ARCHITECTURE - A method includes identifying, by at least one processor, multiple features of input data using a common feature extractor. The method also includes processing, by the at least one processor, at least some identified features using each of multiple pre-processing branches. Each pre-processing branch includes a first set of neural network layers and generates initial outputs associated with a different one of multiple data processing tasks. The method further includes combining, by the at least one processor, at least two initial outputs from at least two pre-processing branches to produce combined initial outputs. In addition, the method includes processing, by the at least one processor, at least some initial outputs or at least some combined initial outputs using each of multiple post-processing branches. Each post-processing branch includes a second set of neural network layers and generates final outputs associated with a different one of the multiple data processing tasks. | 2021-05-27 |
20210158143 | MODELING ENVIRONMENT NOISE FOR TRAINING NEURAL NETWORKS - An approach for altering the training data and training process associated with a neural network to emulate environmental noise and operational instrument error by using the concept of shots to sample within a squeezed space model, wherein an uncertainty index is computed as the average of all shots from a sampling, is disclosed. The approach leverages the squeeze theorem to create a squeezed space model based on the regression of the upper and lower bounds associated with the environmental noise and instrument error. The approach calculates an average noise index based on the squeezed space model, wherein the index is used to alter the training data and process. | 2021-05-27 |
20210158144 | COMBINING STATISTICAL METHODS WITH A KNOWLEDGE GRAPH - Certain aspects of the present disclosure provide techniques for node matching with accuracy by combining statistical methods with a knowledge graph to assist in responding (e.g., providing content) to a user query in a user support system. In order to provide content, a keyword matching algorithm, statistical method (e.g., a trained BERT model), and data retrieval are each implemented to identify node(s) in a knowledge graph with encoded content relevant to the user's query. The implementation of the keyword matching algorithm, statistical method, and data retrieval results in a matching metric score, semantic score, and graph metric data, respectively. Each score associated with a node is combined to generate an overall score that can be used to rank nodes. Once the nodes are ranked, the top ranking nodes are displayed to the user for selection. Based on the selection, content encoded in the node is displayed to the user. | 2021-05-27 |
20210158145 | ENERGY EFFICIENT MACHINE LEARNING MODELS - Aspects described herein provide a method including: receiving input data at a machine learning model, comprising: a plurality of processing layers; a plurality of gate logics; a plurality of gates; and a fully connected layer; determining based on a plurality of gate parameters associated with the plurality of gate logics, a subset of the plurality of processing layers with which to process the input data; processing the input data with the subset of the plurality of processing layers and the fully connected layer to generate an inference; determining a prediction loss based on the inference and a training label associated with the input data; determining an energy loss based on the subset of the plurality of processing layers used to process the input data; and optimizing the machine learning model based on: the prediction loss; the energy loss; and a prior probability associated with the training label. | 2021-05-27 |
20210158146 | METHOD AND SYSTEM FOR GENERATING A DYNAMIC SEQUENCE OF ACTIONS - A device may receive historical data and real-time data associated with a troubleshooting service, identify, using a machine learning model, an optimal resolution based on the historical data and the real-time data, and identify, using a graph analytics model, an optimal path of actions based on the optimal resolution. The machine learning model may be trained to identify one of the set of historical issues associated with the unresolved issue, and identify the optimal resolution based on one of the set of historical resolutions associated with the one of the set of historical issues. The graph analytics model may be trained to generate a set of paths of actions based on the historical data, and identify the optimal path based on respective numbers of actions associated with the set of paths. The device may identify an optimal action based on the optimal path and a prior action. | 2021-05-27 |
20210158147 | TRAINING APPROACH DETERMINATION FOR LARGE DEEP LEARNING MODELS - An approach determines an optimal training approach for a large deep learning model based on model characteristics and system characteristics. One or more computer processors identify one or more model characteristics associated with a deep learning model. The one or more computer processors identify one or more system configurations associated with a system training the deep learning model. The one or more computer processors determine a training approach for the deep learning model utilizing a trained large model predictor fed with the one or more identified model characteristics and the one or more identified system configurations. The one or more computer processors train the deep learning model utilizing the determined training approach. | 2021-05-27 |
20210158148 | METHODS AND APPARATUS FOR AUDIO EQUALIZATION BASED ON VARIANT SELECTION - Methods, apparatus, systems, and articles of manufacture are disclosed for audio equalization based on variant selection. An example apparatus includes a processor to obtain training data, the training data including a plurality of reference audio signals each associated with a variant of music, and organize the training data into a plurality of entries based on the plurality of reference audio signals, a training model executor to execute a neural network model using the training data, and a model trainer to train the neural network model by updating at least one weight corresponding to one of the entries in the training data when the neural network model does not satisfy a training threshold. | 2021-05-27 |
20210158149 | BAYESIAN GRAPH CONVOLUTIONAL NEURAL NETWORKS - Method and system for predicting labels for nodes in an observed graph, including deriving a plurality of random graph realizations of the observed graph; learning a predictive function using the random graph realizations; predicting label probabilities for nodes of the random graph realizations using the learned predictive function; and averaging the predicted label probabilities to predict labels for the nodes of the observed graph. | 2021-05-27 |
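The Bayesian step above — sample random realizations of the observed graph, predict on each, and average the label probabilities — can be sketched with a toy propagation step standing in for the learned predictive function. The edge-dropout sampling and the function names here are illustrative assumptions, not the patented construction:

```python
import numpy as np

def predict_fn(adj, feats, w):
    """Toy predictive function: one step of graph propagation + softmax."""
    a_hat = adj + np.eye(len(adj))                # add self-loops
    deg = a_hat.sum(1, keepdims=True)
    logits = (a_hat / deg) @ feats @ w
    e = np.exp(logits - logits.max(1, keepdims=True))
    return e / e.sum(1, keepdims=True)

def bayesian_predict(adj, feats, w, n_realizations=50, keep=0.8, rng=0):
    """Average label probabilities over random realizations of the graph.

    Each realization keeps every observed edge independently with
    probability `keep` (a simple stand-in for sampling graphs from a
    posterior); predicted probabilities are averaged across realizations.
    """
    rng = np.random.default_rng(rng)
    probs = np.zeros((len(adj), w.shape[1]))
    for _ in range(n_realizations):
        mask = np.triu(rng.random(adj.shape) < keep, 1)
        realization = adj * (mask | mask.T)       # symmetric edge dropout
        probs += predict_fn(realization, feats, w)
    return probs / n_realizations

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
feats = np.eye(3)
w = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
probs = bayesian_predict(adj, feats, w)           # averaged label probabilities
```

Averaging over realizations marginalizes out uncertainty about which edges truly belong in the graph.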
20210158150 | Non-Intrusive Load Monitoring Using Machine Learning - Embodiments implement non-intrusive load monitoring using machine learning. A trained convolutional neural network (CNN) can be stored, where the CNN includes a plurality of layers, and the CNN is trained to predict disaggregated target device energy usage data from within source location energy usage data based on training data including labeled energy usage data from a plurality of source locations. Input data can be received including energy usage data at a source location over a period of time. Disaggregated target device energy usage can be predicted, using the trained CNN, based on the input data. | 2021-05-27 |
20210158151 | Machine-Learning Architectures for Broadcast and Multicast Communications - Techniques and apparatuses are described for machine-learning architectures for broadcast and multicast communications. In implementations, a network entity determines a configuration of a deep neural network (DNN) for processing broadcast or multicast communications transmitted over a wireless communication system, where the communications are directed to a targeted group of user equipments (UEs). The network entity forms a network-entity DNN based on the determined configuration of the DNN and processes the broadcast or multicast communications using the network-entity DNN. In implementations, the network entity forms a common DNN to process and/or propagate the broadcast or multicast communications to the targeted group of UEs. | 2021-05-27 |
20210158152 | SIMULATION SYSTEM FOR SEMICONDUCTOR PROCESS AND SIMULATION METHOD THEREOF - Provided is a simulation method performed by a process simulator, implemented with a recurrent neural network (RNN) including a plurality of process emulation cells, which are arranged in time series and configured to train and predict, based on a final target profile, a profile of each process step included in a semiconductor manufacturing process. The simulation method includes: receiving, at a first process emulation cell, a previous output profile provided at a previous process step, a target profile and process condition information of a current process step; and generating, at the first process emulation cell, a current output profile corresponding to the current process step, based on the target profile, the process condition information, and prior knowledge information, the prior knowledge information defining a time series causal relationship between the previous process step and the current process step. | 2021-05-27 |
20210158153 | METHOD AND SYSTEM FOR PROCESSING FMCW RADAR SIGNAL USING LIGHTWEIGHT DEEP LEARNING NETWORK - A method and a system for processing an FMCW radar signal by using a lightweight deep learning network are provided. The data processing method using an AI model includes: converting n-dimensional data into a plurality of pieces of 2D data; inputting the plurality of pieces of 2D data into the AI model through different channels; and analyzing and processing the plurality of pieces of 2D data input to the AI model. Accordingly, the amount of computation and memory usage can be reduced, and characteristics of an object can be learned and inferred by the lightweight deep learning network. | 2021-05-27 |
20210158154 | APPARATUS AND METHOD FOR DISTINGUISHING NEURAL WAVEFORMS - A neural waveform distinguishment apparatus includes: a neural waveform obtainment unit that obtains multiple neural waveforms in a pre-designated manner from neural signals sensed by way of at least one electrode; a preprocessing unit that obtains multiple gradient waveforms by calculating pointwise slopes in each of the neural waveforms; a feature extraction unit comprising an encoder ensemble composed of multiple encoders, which have a pattern estimation method learned beforehand and include different numbers of hidden layers, where the feature extraction unit obtains multiple codes as multiple features extracted by the encoders respectively from the gradient waveforms and concatenates the codes extracted by the encoders respectively to extract a feature ensemble for each of the gradient waveforms; and a clustering unit that distinguishes the neural waveforms corresponding respectively to the gradient waveforms by clustering the feature ensembles extracted respectively in correspondence to the gradient waveforms according to a pre-designated clustering technique. | 2021-05-27 |
20210158155 | AVERAGE POWER ESTIMATION USING GRAPH NEURAL NETWORKS - A graph neural network for average power estimation of netlists is trained with register toggle rates over a power window from an RTL simulation and gate level netlists as input features. Combinational gate toggle rates are applied as labels. The trained graph neural network is then applied to infer combinational gate toggle rates over a different power window of interest and/or different netlist. | 2021-05-27 |
20210158156 | Distilling from Ensembles to Improve Reproducibility of Neural Networks - Systems and methods can improve the reproducibility of neural networks by distilling from ensembles. In particular, aspects of the present disclosure are directed to a training scheme that utilizes a combination of an ensemble of neural networks and a single, “wide” neural network that is more powerful (e.g., exhibits a greater accuracy) than the ensemble. Specifically, the output of the ensemble can be distilled into the single neural network during training of the single neural network. After training, the single neural network can be deployed to generate inferences. In such fashion, the single neural model can provide a superior prediction accuracy while, during training, the ensemble can serve to influence the single neural network to be more reproducible. In addition, an additional single wide tower can be added to generate another output, that can be distilled to the single neural network, to further improve its accuracy. | 2021-05-27 |
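Distilling an ensemble into a single network typically means blending a hard-label loss with a soft-target loss against the ensemble's averaged, temperature-scaled predictions. The sketch below shows that standard knowledge-distillation objective as an assumed stand-in; the patent's exact formulation (including the extra wide tower) is not reproduced:

```python
import numpy as np

def softmax(z, t=1.0):
    z = np.asarray(z, float) / t
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def distill_targets(ensemble_logits, temperature=2.0):
    """Soft targets: mean of the ensemble members' tempered softmaxes."""
    return np.mean([softmax(l, temperature) for l in ensemble_logits], axis=0)

def distillation_loss(student_logits, ensemble_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend of hard-label cross-entropy and a soft-target cross-entropy
    against the ensemble average (the usual distillation objective;
    assumed here, not the patent's exact loss)."""
    p = softmax(student_logits)
    hard = -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
    q = distill_targets(ensemble_logits, temperature)
    ps = softmax(student_logits, temperature)
    soft = -np.mean(np.sum(q * np.log(ps + 1e-12), axis=-1))
    return alpha * hard + (1 - alpha) * soft

student = np.array([[2.0, 0.5, 0.1]])
ensemble = [np.array([[1.5, 0.2, 0.0]]), np.array([[2.5, 0.1, 0.3]])]
loss = distillation_loss(student, ensemble, labels=np.array([0]))
```

Training the single network against the ensemble's smoothed targets is what transfers the ensemble's reproducibility while the network retains its own capacity.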
20210158157 | ARTIFICIAL NEURAL NETWORK LEARNING METHOD AND DEVICE FOR AIRCRAFT LANDING ASSISTANCE - A neural network learning method for aircraft landing assistance includes: receiving a set of labeled learning data comprising sensor data associated with a ground truth representing at least a landing runway and an approach light bar; running an artificial neural network deep learning algorithm on the learning data set, the deep learning algorithm using a cost function called runway threshold trapezium, parameterized for the recognition of a runway threshold and of approach light bars; and generating a trained artificial intelligence model for landing runway recognition. | 2021-05-27 |
20210158158 | METHOD AND DEVICE FOR PROCESSING SENSOR DATA - A method for processing sensor data. The method includes receiving input sensor data, determining, starting from the input sensor data as initial state, a plurality of end states, including determining, for each end state, a sequence of states, wherein determining the sequence of states comprises, for each state of the sequence beginning with the initial state until the end state, a first Bayesian neural network determining a sample of a drift term in response to inputting the respective state, a second Bayesian neural network determining a sample of a diffusion term in response to inputting the respective state and determining a subsequent state by sampling a stochastic differential equation including the sample of the drift term as drift term and the sample of the diffusion term as diffusion term. An end state probability distribution is determined, and a processing result is determined from the end state probability distribution. | 2021-05-27 |
20210158159 | ELECTRONIC CIRCUIT, NEURAL NETWORK, AND NEURAL NETWORK LEARNING METHOD - An electronic circuit quickly finds an optimal parameter for a neural network. The electronic circuit includes a quantum dot, a capacitance portion, a current portion, and a current adjustment portion. In this circuit, the quantum dot includes a first electrode, a second electrode, and a third electrode. The first electrode is connected to a first potential. The second electrode is connected to a first current source. The third electrode is connected to a second current source. The current portion discharges current from the second electrode or supplies current to the second electrode. The current adjustment portion adjusts a current of the current portion and outputs a parameter to adjust the current. | 2021-05-27 |
20210158160 | OPERATION METHOD FOR ARTIFICIAL NEURAL NETWORK - An operation method of an artificial neural network is provided. The operation method includes: dividing input information into a plurality of sub-input information, and expanding kernel information to generate expanded kernel information; performing a Fast Fourier Transform (FFT) on the sub-input information and the expanded kernel information to respectively generate a plurality of frequency domain sub-input information and frequency domain expanded kernel information; respectively performing a multiplying operation on the frequency domain expanded kernel information and the frequency domain sub-input information to respectively generate a plurality of sub-feature maps; and performing an inverse FFT on the sub-feature maps to provide a plurality of converted sub-feature maps for executing a feature extraction operation of the artificial neural network. | 2021-05-27 |
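The steps above follow the convolution theorem: zero-pad (expand) the kernel, transform both signals, multiply pointwise in the frequency domain, and inverse-transform to get the feature map. A minimal 1-D sketch (the function name is illustrative; the patent operates on partitioned sub-inputs):

```python
import numpy as np

def fft_conv1d(x, k):
    """Linear convolution via the convolution theorem.

    The kernel is zero-padded (expanded) to the full output length, both
    signals are transformed, multiplied pointwise in the frequency
    domain, and inverse-transformed, matching np.convolve(x, k).
    """
    n = len(x) + len(k) - 1          # length that avoids circular wrap
    X = np.fft.rfft(x, n)
    K = np.fft.rfft(k, n)            # expanded (zero-padded) kernel
    return np.fft.irfft(X * K, n)

x = np.array([1.0, 2.0, 3.0, 4.0])
k = np.array([1.0, 0.0, -1.0])
out = fft_conv1d(x, k)               # matches np.convolve(x, k)
```

Splitting the input into sub-inputs, as the abstract describes, lets each piece be transformed and multiplied independently before the results are combined.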
20210158161 | Methods and Systems for Detecting Spurious Data Patterns - Disclosed are implementations that include a method for detecting anomalous data, including converting a set of data values representative of a multi-dimensional item into a nodes-and-edges graph representation of the item, applying a graph convolution process to the graph representation to generate a transformed graph representation for the item comprising a resultant transformed configuration of the nodes and edges representing the item, and determining, based on the transformed configuration, a probability that the item is anomalous. Another example method includes receiving input data at a neural network circuit comprising a plurality of node layers, with each of the plurality of node layers comprising respective one or more nodes, with the neural network circuit further comprising adjustable weighted connections connecting at least some nodes in different layers of the plurality of node layers. The method further includes removing one or more of the weighted connections at one or more time instances. | 2021-05-27 |
20210158162 | TRAINING REINFORCEMENT LEARNING AGENTS TO LEARN FARSIGHTED BEHAVIORS BY PREDICTING IN LATENT SPACE - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection policy neural network used to select an action to be performed by an agent interacting with an environment. In one aspect, a method includes: receiving a latent representation characterizing a current state of the environment; generating a trajectory of latent representations that starts with the received latent representation; for each latent representation in the trajectory: determining a predicted reward; and processing the state latent representation using a value neural network to generate a predicted state value; determining a corresponding target state value for each latent representation in the trajectory; determining, based on the target state values, an update to the current values of the policy neural network parameters; and determining an update to the current values of the value neural network parameters. | 2021-05-27 |
20210158163 | METHODS AND SYSTEMS FOR POWER MANAGEMENT IN A PATTERN RECOGNITION PROCESSING SYSTEM - A device includes a state machine. The state machine includes a plurality of blocks, where each of the blocks includes a plurality of rows. Each of these rows includes a plurality of programmable elements. Furthermore, each of the programmable elements are configured to analyze at least a portion of a data stream and to selectively output a result of the analysis. Each of the plurality of blocks also has corresponding block activation logic configured to dynamically power-up the block. | 2021-05-27 |
20210158164 | FINDING K EXTREME VALUES IN CONSTANT PROCESSING TIME - A method includes determining a set of k extreme values of a dataset of elements in a constant time irrespective of the size of the dataset. The determining includes reviewing the values bit-by-bit, starting from the most significant bit, where bit n from each element of the dataset is reviewed at the same time. | 2021-05-27 |
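The MSB-first scan above can be sketched in software: at each bit position, the current bit of every candidate is inspected "at once", and candidates with the bit clear are discarded whenever at least k candidates have it set. The pass count equals the bit width, not the dataset size. This is an illustrative reconstruction, not the patented hardware:

```python
def top_k_by_bits(values, k, nbits=8):
    """Find the k largest non-negative integers by scanning bits MSB-first.

    Each pass inspects one bit position across all remaining candidates;
    ties can leave more than k candidates, any k of which are extreme.
    The number of passes is nbits, independent of the dataset size.
    """
    candidates = list(values)
    for bit in range(nbits - 1, -1, -1):
        ones = [v for v in candidates if v & (1 << bit)]
        if len(ones) >= k:
            candidates = ones            # bit-clear values cannot be top-k
    return candidates

vals = [13, 7, 255, 128, 3, 200, 77]
ext = top_k_by_bits(vals, 2)             # the two largest values
```

In hardware, the per-bit inspection across all elements happens in parallel, which is what makes the overall time constant in the dataset size.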
20210158165 | SEPARATE DEPLOYMENT OF MACHINE LEARNING MODEL AND ASSOCIATED EMBEDDING - Implementations of the present specification provide a model-based prediction method and apparatus. The method includes: a model running environment receives an input tensor of a machine learning model; the model running environment sends a table query request to an embedding running environment, the table query request including the input tensor, to request low-dimensional conversion of the input tensor; the model running environment receives a table query result returned by the embedding running environment, the table query result being obtained by the embedding running environment by performing embedding query and processing based on the input tensor; and the model running environment inputs the table query result into the machine learning model, and runs the machine learning model to complete model-based prediction. | 2021-05-27 |
20210158166 | SEMI-STRUCTURED LEARNED THRESHOLD PRUNING FOR DEEP NEURAL NETWORKS - A method for pruning weights of an artificial neural network based on a learned threshold includes designating a group of pre-trained weights of an artificial neural network to be evaluated for pruning. The method also includes determining a norm of the group of pre-trained weights, and performing a process based on the norm to determine whether to prune the entire group of pre-trained weights. | 2021-05-27 |
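The group-norm test above — compute a norm over a fixed-size group of weights and prune the whole group when it falls below a threshold — can be sketched as follows. The group size, L2 norm, and fixed threshold are illustrative assumptions (the patent learns the threshold):

```python
import numpy as np

def prune_groups(weights, threshold, group_size=4):
    """Zero out whole groups of weights whose norm falls below a threshold.

    Semi-structured sketch: weights are partitioned into fixed-size
    groups, the L2 norm of each group is computed, and every group whose
    norm is below the (possibly learned) threshold is pruned entirely.
    """
    w = np.array(weights, float).reshape(-1, group_size)
    norms = np.linalg.norm(w, axis=1)        # one norm per group
    w[norms < threshold] = 0.0               # prune whole groups at once
    return w.reshape(-1)

w = np.array([0.1, -0.1, 0.05, 0.02,         # small-norm group -> pruned
              1.0, -2.0, 0.5, 0.8])          # large-norm group -> kept
pruned = prune_groups(w, threshold=0.5)
```

Pruning whole groups rather than individual weights is what makes the sparsity pattern semi-structured and hardware-friendly.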
20210158167 | SYSTEMS AND METHODS FOR ENHANCING A DISTRIBUTED MEDICAL NETWORK - Methods and systems for enhancing a distributed medical network. For example, a computer-implemented method includes inputting training data corresponding to each local computer into their corresponding machine learning model; generating a plurality of local losses including generating a local loss for each machine learning model based at least in part on the corresponding training data; generating a plurality of local parameter gradients including generating a local parameter gradient for each machine learning model based at least in part on the corresponding local loss; generating a global parameter update based at least in part on the plurality of local parameter gradients; and updating each machine learning model hosted at each local computer of the plurality of local computers by at least updating their corresponding active parameter set based at least in part on the global parameter update. | 2021-05-27 |
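The aggregation step above resembles federated averaging; a minimal sketch, assuming the global update is an unweighted average of the local parameter gradients scaled by a learning rate (neither assumption is stated in the abstract, and weighted schemes are an obvious variant):

```python
import numpy as np

def global_update(local_gradients, learning_rate=0.1):
    """Aggregate per-hospital gradients into one global parameter update.

    Assumes a simple unweighted mean of the local gradients; the update
    is the negative scaled mean, as in plain gradient descent.
    """
    avg = np.mean(local_gradients, axis=0)
    return -learning_rate * avg

def apply_update(local_param_sets, update):
    """Each local computer applies the same global update to its active set."""
    return [params + update for params in local_param_sets]
```

Because every site receives the identical global update, the models stay synchronized without the raw training data ever leaving a local computer.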
20210158168 | Performing Inference and Training Using Sparse Neural Network - An inference system trains and performs inference using a sparse neural network. The sparse neural network may include one or more layers, and each layer may be associated with a set of sparse weights that represent sparse connections between nodes of a layer and nodes of a previous layer. A layer output may be generated by applying the set of sparse weights associated with the layer to the layer output of a previous layer. Moreover, the one or more layers of the sparse neural network may generate sparse layer outputs. By using sparse representations of weights and layer outputs, robustness and stability of the neural network can be significantly improved, while maintaining competitive accuracy. | 2021-05-27 |
20210158169 | ELECTRONIC DEVICE AND METHOD OF OPERATING THE SAME - Devices for using a neural network to choose an optimal error correction algorithm are disclosed. An example device includes a decoding controller inputting at least one of the number of primary unsatisfied check nodes (UCNs), the number of UCNs respectively corresponding to at least one iteration, and the number of correction bits respectively corresponding to the at least one iteration to a trained artificial neural network, and selecting any one of a first error correction decoding algorithm and a second error correction decoding algorithm based on an output of the trained artificial neural network corresponding to the input, and an error correction decoder performing error correction decoding on a read vector using the selected error correction decoding algorithm. The output of the trained artificial neural network may include a first predicted value indicating a possibility that a first error correction decoding using the first error correction decoding algorithm is successful. | 2021-05-27 |
20210158170 | FEATURE MAP SPARSIFICATION WITH SMOOTHNESS REGULARIZATION - A method includes receiving an image by a deep neural network (DNN) and obtaining a first feature map based on the image while the DNN is in a trained state, wherein the DNN is configured to perform a task based on the image, and is trained with a training image by using a feature sparsification with smoothness regularization process and a back propagation and weight update process that updates the DNN based on an output of the feature sparsification with smoothness regularization process. | 2021-05-27 |
20210158171 | INTELLIGENT DATA CURATION - An apparatus includes processor(s) to: receive a request for a data catalog; in response to the request specifying a structural feature, analyze metadata of multiple data sets for an indication of including it, and to retrieve an indicated degree of certainty of detecting it for data sets including it; in response to the request specifying a contextual aspect, analyze context data of the multiple data sets for an indication of being subject to it, and to retrieve an indicated degree of certainty concerning it for data sets subject to it; selectively include each data set in the data catalog based on the request specifying a structural feature and/or a contextual aspect, and whether each data set meets what is specified; for each data set in the data catalog, generate a score indicative of the likelihood of meeting what is specified; and transmit the data catalog to the requesting device. | 2021-05-27 |
20210158172 | Artificially Intelligent Interaction Agent - A system includes a memory having instructions therein and at least one processor configured to execute the instructions to: begin control of a user-interaction session; determine a first user state; use a reinforcement learning agent to select a first motivational action; communicate the first motivational action to a user device; determine a second user state; generate a reward based at least in part on a tiered reinforcement learning reward categorization of the second user state; communicate the reward and the second user state to the reinforcement learning agent; update the reinforcement learning agent; and determine, based at least in part on whether the second user state corresponds to a goal of the user-interaction session, to wind up control of the user-interaction session. | 2021-05-27 |
20210158173 | NON-TRANSITORY COMPUTER-READABLE MEDIUM STORING PROGRAM CODE GENERATING WAFER MAP BASED ON GENERATIVE ADVERSARIAL NETWORKS AND COMPUTING DEVICE INCLUDING THE SAME - A non-transitory computer-readable medium storing a program code including an image generation model, which when executed, causes a processor to input input data including sampling data of some of a plurality of semiconductor dies of a wafer to a generator network of the image generation model and output a wafer map indicating the plurality of semiconductor dies, and to input the wafer map output from the generator network to a discriminator network of the image generation model and discriminate the wafer map. | 2021-05-27 |
20210158174 | EQUIPMENT MAINTENANCE ASSISTANT TRAINING BASED ON DIGITAL TWIN RESOURCES - A method, computer system, and a computer program product for triggering a training of a knowledge base based on a change to a physical asset is provided. The present invention may include receiving the change to one or more digital twins associated with the physical asset. The present invention may then include modifying one or more selected digital twin resources associated with the one or more digital twins associated with the physical asset based on the received change, wherein the one or more selected digital twin resources are included in the knowledge base. The present invention may also include training the knowledge base based on the modified one or more selected digital twin resources. | 2021-05-27 |
20210158175 | ASSET ADDITION SCHEDULING FOR A KNOWLEDGE BASE - For a first query classification, a query time series is constructed, the query time series comprising a set of natural language queries classified into the first query classification received per unit of time. For a first asset classification, a topic time series is constructed, the topic time series comprising a set of knowledge assets classified into the first asset classification added to a set of knowledge assets per unit of time. From the query time series and the topic time series, a decision tree is generated. By navigating the decision tree, a schedule is generated, the schedule forecasting a time at which a future knowledge asset should be added to the set of knowledge assets in time to answer a future natural language query relative to the knowledge asset. | 2021-05-27 |
20210158176 | MACHINE LEARNING BASED DATABASE SEARCH AND KNOWLEDGE MINING - Disclosed herein are embodiments of systems, methods, and products comprising a server for database search and knowledge mining. The server may learn different tables' semantics, relationships, and usage by parsing historical query logs and analyzing tables' metadata (e.g., table descriptions). The analytic server may generate a graph database based on the table relationships obtained from the parsing. The graph database may be a relationship graph where tables are the nodes and edges represent the relationships among tables. When the server receives a query, the server extracts the semantics of the query and returns a set of tables that are semantically similar to the query. The set of tables may be a list of tables whose semantic similarities with the query satisfy a threshold. The analytic server may further generate a graph including the list of tables to show the relationships of these tables. | 2021-05-27 |
20210158177 | METHOD AND SYSTEM FOR RECOMMENDING DIGITAL CONTENT - A method for recommending digital content includes: determining user preferences and a time horizon of a given user; determining a group for the given user based on the determined user preferences; determining a number of users of the determined group and a similarity of the users; applying information including the number of users, the similarity, and the time horizon to a model selection classifier to select one of a personalized model of the user and a group model of the determined group; and running the selected model to determine digital content to recommend. | 2021-05-27 |
20210158178 | MULTI-DIMENSIONAL RECORD CORRELATIONS - A method, system, and computer program product for correlation detection between artificial intelligence (AI) transactions. The method stores a set of transaction records associated with an AI decision engine. Each transaction record has a set of record characteristics. The method assigns the set of transaction records to a set of batches based on the set of record characteristics. A set of batch characteristics is determined for a batch of the set of batches. The method determines one or more correlations among the set of batch characteristics. The one or more correlations are compared with one or more threshold batches. The method determines, from the one or more correlations and the comparing, an impact of one or more recommendations of the AI decision engine. The one or more recommendations are defined by the set of transaction records. | 2021-05-27 |
20210158179 | DYNAMIC RECOMMENDATION SYSTEM FOR CORRELATED METRICS AND KEY PERFORMANCE INDICATORS - A method, apparatus, system, and computer program product for generating a human readable recommendation. The method determines, by a computer system, a key performance value for a key performance indicator from a collection of data. A metric value for a metric is determined by the computer system from the collection of data. A correlation coefficient indicating a correlation between the key performance indicator and the metric is identified by the computer system. A human readable recommendation is generated by the computer system using a recommendation pattern when the correlation coefficient indicates that the correlation between the key performance indicator and the metric is sufficiently significant. | 2021-05-27 |
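The flow above can be sketched in a few lines. The recommendation pattern (`TEMPLATE`) and the 0.7 significance threshold below are illustrative assumptions, not details from the application:

```python
import numpy as np

# A hypothetical recommendation pattern; the patent's actual patterns are unspecified.
TEMPLATE = ("{metric} is strongly {direction} correlated with {kpi} "
            "(r = {r:.2f}); consider acting on {metric} to move {kpi}.")

def recommend(kpi_name, kpi_values, metric_name, metric_values, threshold=0.7):
    """Emit a human-readable recommendation when |r| clears the threshold."""
    r = np.corrcoef(kpi_values, metric_values)[0, 1]  # Pearson correlation
    if abs(r) < threshold:
        return None  # correlation not significant enough to report
    direction = "positively" if r > 0 else "negatively"
    return TEMPLATE.format(metric=metric_name, kpi=kpi_name,
                           direction=direction, r=r)
```

Returning `None` for sub-threshold correlations keeps weakly related metrics from generating noise in the recommendations.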
20210158180 | INTELLIGENT DESIGN PLATFORM USING INDUSTRIALIZED EXPERIENCE IN PRODUCT DESIGNS - Implementations include providing one or more product designs using an intelligent design platform by receiving a product indicator that indicates a product to be designed, transmitting a request to a contextual requirements system, the request including the product identifier and requesting one or more contextual requirements, determining a set of context models based on the product identifier, each context model in the set of context models being generated based on one or more scenes represented in a digital video, each scene depicting a contextual use of the product, providing a set of contextual requirements to the design generation system based on one or more context models, and inputting a set of aggregate requirements to a generative design tool that generates the one or more product designs based on the set of aggregate requirements, the set of aggregate requirements including at least one contextual requirement. | 2021-05-27 |
20210158181 | IDENTIFYING COMPARABLE ENTITIES USING MACHINE LEARNING AND PROFILE GRAPHS - Techniques for identifying similar companies based on profile data sets of the companies are provided. In one embodiment, a method comprises using a processing device to obtain a benchmark profile data set for a benchmark company and obtain a plurality of profile data sets, each of the plurality of profile data sets corresponding to a candidate company. The processing device may utilize a machine learning algorithm to determine the distance between each of the plurality of profile data sets and the benchmark profile data set and build profile graphs indicating the distance. The processing device may determine one or more of the plurality of profile data sets that are most similar to the benchmark profile data set based on the determined distance and identify the one or more candidate companies corresponding to the one or more profile data sets as companies most similar to the benchmark company. | 2021-05-27 |
20210158182 | ENHANCED SIMILARITY DETECTION BETWEEN DATA SETS WITH UNKNOWN PRIOR FEATURES USING MACHINE-LEARNING - The present disclosure relates to systems and methods for using machine-learning techniques to detect similar features between data sets. More particularly, the present disclosure relates to systems and methods that learn feature patterns within at least two data sets using machine-learning techniques to determine similarities between clusters of users in a scalable and computationally efficient manner. | 2021-05-27 |
20210158183 | TRUSTWORTHINESS OF ARTIFICIAL INTELLIGENCE MODELS IN PRESENCE OF ANOMALOUS DATA - Methods, systems, and computer program products for improving trustworthiness of artificial intelligence models in presence of anomalous data are provided herein. A method includes obtaining a machine learning model and a set of training data; determining one or more anomalous data points in said set of training data; for a given one of said anomalous data points, identifying attributes that decrease confidence with respect to at least one output of said machine learning model; determining that a root cause of said decreased confidence corresponds to one of: a class imbalance issue related to said at least one attribute, a confused class issue related to said at least one attribute, a low density issue related to said at least one attribute, and an adversarial issue related to said at least one attribute; and performing step(s) to improve said confidence based at least in part on said determined root cause. | 2021-05-27 |
20210158184 | Inferring Cognitive Capabilities Across Multiple Cognitive Analytics Applied to Literature - A mechanism is provided to implement an analytic inference engine for inferring cognitive capabilities across multiple cognitive analytics applied to literature. The analytic inference engine receives cognitive analytic output generated by multiple cognitive analytics applied to a portion of content. Responsive to the analytic inference engine finding a first offset in a first cognitive analytic output matching a second offset in a second cognitive analytic output, the analytic inference engine identifies unique features in the first cognitive analytic output and the second cognitive analytic output with respect to the matching offset. The analytic inference engine generates a composite analytic output comprising the unique features with respect to the matching offset. | 2021-05-27 |
20210158185 | VEHICLE RECOMMENDATION SYSTEM AND METHOD - Systems and methods are disclosed. The system is configured to determine a weight distribution of a vehicle and determine a trajectory associated with the vehicle. The system is further configured to generate a vehicle recommendation based on the weight distribution of the vehicle and the trajectory associated with the vehicle. | 2021-05-27 |
20210158186 | Non-Intrusive Load Monitoring Using Machine Learning and Processed Training Data - Embodiments implement non-intrusive load monitoring using a novel learning scheme. A trained machine learning model configured to disaggregate device energy usage from household energy usage can be stored, where the machine learning model is trained to predict energy usage for a target device from household energy usage. Household energy usage over a period of time can be received, where the household energy usage includes energy consumed by the target device and energy consumed by a plurality of other devices. Using the trained machine learning model, energy usage for the target device over the period of time can be predicted based on the received household energy usage. | 2021-05-27 |
20210158187 | SYSTEM AND METHOD FOR DETECTING FRICTION IN WEBSITES - System and method of detecting friction in a website comprising a plurality of webpages and links includes a database server, an application executed by a processor, and a management dashboard. The application extracts text data and web usage data from the website, segments the website into three funnel stages, identifies an anomaly in the web usage data, quantifies the impacts of the webpages and links, identifies the friction and the underlying root cause, and displays the friction in the management dashboard. | 2021-05-27 |
20210158188 | RECOMMENDING NEWS IN CONVERSATION - The present disclosure discloses a technique of recommending news in conversation. The technique may recommend news when a user appears to be interested in a news event during a conversation, so that the user's interest in reading news may be detected during the conversation, and may provide the user with related news information in the form of a chat reply at an appropriate time. | 2021-05-27 |
20210158189 | ELECTRONIC DEVICE AND A METHOD FOR CONTROLLING A CONFIGURATION PARAMETER FOR AN ELECTRONIC DEVICE - An electronic device includes a memory circuitry, an interface circuitry, a processor circuitry having a controller circuitry, and an inference circuitry configured to operate according to a first inference model of a plurality of inference models. The processor circuitry is configured to obtain first primary detection data from a detection circuitry. The processor circuitry is configured to obtain one or more criteria. The processor circuitry is configured to obtain, from the inference circuitry, a first primary probability associated with the first primary detection data, based on the first inference model. The processor circuitry is configured to generate a first set, based on the one or more criteria, by determining whether the first primary probability associated with the first primary detection data satisfies at least one of the one or more criteria. | 2021-05-27 |
20210158190 | MICROELECTROMECHANICAL SYSTEM AND CORRESPONDING METHOD FOR WEATHER PATTERN RECOGNITION - A microelectromechanical weather pattern recognition system includes: at least one movement sensor, of a MEMS type, which generates a movement signal, in the presence and as a function of at least one weather pattern to be recognized; and a recognition circuitry, which is coupled to the movement sensor and which receives the movement signal; extracts given features of the movement signal; and performs processing operations, based on the given features of the movement signal, in order to recognize the weather pattern by executing at least one appropriately trained machine-learning algorithm. | 2021-05-27 |
20210158191 | Method for Performing a Cognitive Learning Lifecycle Operation - A cognitive learning method comprising: receiving data from a plurality of data sources; processing the data from the plurality of data sources to perform a cognitive learning operation, the processing being performed via a cognitive inference and learning system, the cognitive learning operation comprising a plurality of cognitive learning operation lifecycle phases, the cognitive learning operation applying a cognitive learning technique to generate a cognitive learning result; and, updating a destination based upon the cognitive learning result. | 2021-05-27 |
20210158192 | NORMALIZING WEIGHTS OF WEIGHTED TREE LEAF NODES - Nodes of a weighted tree each have their own weight. A normalized weight of a node, relative to other nodes in the tree, is determined based on a proportional weight of the node and a lesser unique sum of the node, as well as those of the node's parents and grandparents, up to a root of the tree. The proportional weight and lesser unique sum of a given node depend only on the unique weights of the sibling group including the given node. Thus, if a weight is modified, the normalized weight can be updated without necessarily recalculating the entire tree. | 2021-05-27 |
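The application's specific "proportional weight" and "lesser unique sum" constructions are not fully specified in the abstract; as a generic illustration of the underlying idea (a node's normalized weight depending only on sibling-group weights along its path to the root), here is a plain path-product normalization sketch, which should not be read as the patent's exact scheme:

```python
def normalized_weights(tree, weights):
    """Normalize node weights by the product of sibling-group shares.

    A generic sketch: each node's share is its weight divided by the
    total weight of its sibling group, and its normalized weight is the
    product of shares along the path from the root. `tree` maps a node
    to its list of children; `weights` maps a node to its own weight.
    """
    norm = {}

    def walk(node, acc):
        norm[node] = acc
        children = tree.get(node, [])
        total = sum(weights[c] for c in children)
        for c in children:
            walk(c, acc * weights[c] / total if total else 0.0)

    # The root is the only interior node that appears in no child list.
    root = next(n for n in tree if all(n not in kids for kids in tree.values()))
    walk(root, 1.0)
    return norm
```

Because a node's share depends only on its sibling group, changing one weight only requires recomputing that sibling group and its descendants, which matches the abstract's point that the entire tree need not be recalculated.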
20210158193 | Interpretable Supervised Anomaly Detection for Determining Reasons for Unsupervised Anomaly Decision - Techniques are provided for determining reasons for unsupervised anomaly decisions. One method comprises obtaining values of predefined features associated with a remote user device; applying the predefined feature values to an unsupervised anomaly detection model that generates an unsupervised anomaly decision; applying the predefined feature values to a supervised anomaly detection model that generates a supervised anomaly decision; determining a third anomaly decision using the unsupervised anomaly decision; and determining reasons for the third anomaly decision by analyzing the supervised anomaly decision. The supervised anomaly detection model can be trained using the unsupervised anomaly decision and/or anomalous training data based on known anomalies. The third anomaly decision can be based on the supervised anomaly decision and the unsupervised anomaly decision using ensemble techniques. | 2021-05-27 |
20210158194 | GRAPH STRUCTURE ANALYSIS APPARATUS, GRAPH STRUCTURE ANALYSIS METHOD, AND COMPUTER-READABLE RECORDING MEDIUM - A graph structure analysis apparatus | 2021-05-27 |
20210158195 | DATA LABEL VERIFICATION - Aspects of the present invention disclose a method for verifying labels of records of a dataset. The records comprise sample data and a related label out of a plurality of labels. The method includes one or more processors dividing the dataset into a training dataset comprising records relating to a selected label and an inference dataset comprising records with sample data relating to the selected label and all other labels out of the plurality of labels. The method further includes dividing the training dataset into a plurality of learner training datasets that comprise at least one sample relating to the selected label. The method further includes training a plurality of label-specific few-shot learners with one of the learner training datasets. The method further includes performing inference by the plurality of trained label-specific few-shot learners on the inference dataset to generate a plurality of sets of predicted label output values. | 2021-05-27 |
20210158196 | NON-STATIONARY DELAYED BANDITS WITH INTERMEDIATE SIGNALS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, of selecting actions from a set of actions to be performed in an environment. One of the methods includes, at each time step: maintaining count data; determining, for each action, a respective current transition probability distribution that includes a respective current transition probability for each of the intermediate signals that represents an estimate of a current likelihood that the intermediate signal will be observed if the action is performed; determining, for each intermediate signal, a respective reward estimate that is an estimate of a reward that will be received as a result of the intermediate signal being observed; determining, from the respective current transition probability distributions and the respective reward estimates, a respective action score for each action; and selecting an action to be performed based on the respective action scores. | 2021-05-27 |
20210158197 | BIOLOGY EXPERIMENT DESIGNS - Disclosed herein are systems, devices, and methods for training and using a probabilistic predictive ensemble model for recommending experiment designs for a biology (e.g., synthetic biology) experiment. Also disclosed herein are methods for performing a biology (e.g., synthetic biology) experiment using a probabilistic predictive ensemble model for recommending experiment designs for biology. | 2021-05-27 |
20210158198 | FAST MULTI-STEP OPTIMIZATION TECHNIQUE TO DETERMINE HIGH PERFORMANCE CLUSTER - A method of machine learning includes performing dimensionality reduction on a parameter space by performing initial tests to determine scores for a plurality of parameter values in the parameter space, determining aggregate scores for a plurality of parameter value combinations, determining a ranking of the plurality of parameter value combinations based on the aggregate scores, and performing cluster analysis on the plurality of parameter value combinations to determine a set having highest aggregate scores. The method further includes performing additional tests, wherein each additional test is for a parameter value combination in the set. For each such parameter value combination, a probability of achieving a key performance indicator (KPI) is computed. Cluster analysis is then performed to determine a first subset of the set having highest probabilities of achieving the KPI. An operation is then performed on the first subset. | 2021-05-27 |
20210158199 | QUANTUM COMPUTING SERVICE SUPPORTING LOCAL EXECUTION OF HYBRID ALGORITHMS - A quantum computing service includes connections to one or more quantum hardware providers that are configured to execute quantum circuits using quantum computers based on one or more quantum technologies. The quantum computing service also includes at least one edge computing device located adjacent to a quantum computer at one of the quantum hardware provider facilities. The edge computing device is configured to execute classical computing portions of a hybrid algorithm in coordination with the quantum computer, which executes quantum computing portions of the hybrid algorithm. Results of the execution of the hybrid algorithm are automatically stored to a data storage service accessible to the customer. | 2021-05-27 |
20210158200 | QUANTUM NETWORK NODE AND PROTOCOLS WITH MULTIPLE QUBIT SPECIES - The disclosure describes aspects of using multiple species in trapped-ion nodes for quantum networking. In an aspect, a quantum networking node is described that includes multiple memory qubits, each memory qubit being based on a | 2021-05-27 |
20210158201 | DYNAMICALLY PREDICT OPTIMAL PARALLEL APPLY ALGORITHMS - A method, system, and computer program product to analyze data patterns in source workloads and predict the optimal parallel apply algorithms, where the method may include receiving source workload data and replication environment data, where the source workload data includes at least a stream of changes to a target DBMS. The method may also include analyzing characteristics of the source workload data and the replication environment data. The method may also include inputting, as input variables, the characteristics of the source workload data and the replication environment data into a machine learning algorithm. The method may also include obtaining, from the machine learning algorithm, an optimal parallel apply algorithm from a plurality of parallel apply algorithms. The method may also include applying the optimal parallel apply algorithm to the target database management system. | 2021-05-27 |
20210158202 | PROGNOSTIC-SURVEILLANCE TECHNIQUE THAT DYNAMICALLY ADAPTS TO EVOLVING CHARACTERISTICS OF A MONITORED ASSET - We describe a system that performs prognostic-surveillance operations based on an inferential model that dynamically adapts to evolving operational characteristics of a monitored asset. During a surveillance mode, the system receives a set of time-series signals gathered from sensors in the monitored asset. Next, the system uses an inferential model to generate estimated values for the set of time-series signals, and then performs a pairwise differencing operation between actual values and the estimated values for the set of time-series signals to produce residuals. Next, the system performs a sequential probability ratio test (SPRT) on the residuals to produce SPRT alarms. When a tripping frequency of the SPRT alarms exceeds a threshold value, which is indicative of an incipient anomaly in the monitored asset, the system triggers an alert. While the prognostic-surveillance system is operating in the surveillance mode, the system incrementally updates the inferential model based on the time-series signals. | 2021-05-27 |
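The surveillance loop above (residual generation followed by a sequential probability ratio test) can be sketched compactly. This is a simplified one-sided Gaussian mean-shift SPRT; the `sigma`, `shift`, `alpha`, and `beta` values, and the restart-after-alarm behavior, are illustrative assumptions rather than details from the application:

```python
import math

def sprt_alarms(actual, estimated, sigma=1.0, shift=1.0, alpha=0.01, beta=0.01):
    """Run a simplified one-sided SPRT over prognostic residuals.

    Residuals are the pairwise differences between actual and
    model-estimated signal values; the SPRT accumulates a log-likelihood
    ratio testing a mean shift of `shift` against zero mean.
    """
    A = math.log((1 - beta) / alpha)   # upper (alarm) threshold
    B = math.log(beta / (1 - alpha))   # lower (accept-nominal) threshold
    llr, alarms = 0.0, []
    for a, e in zip(actual, estimated):
        r = a - e                                        # residual
        llr += (shift / sigma**2) * (r - shift / 2)      # Gaussian mean-shift LLR
        if llr >= A:
            alarms.append(True)
            llr = 0.0                  # restart the test after an alarm
        else:
            alarms.append(False)
            llr = max(llr, B)          # clamp so old evidence can't mask a fault
    return alarms
```

The tripping frequency mentioned in the abstract would then be the rate of `True` entries over a sliding window, with an alert raised when that rate exceeds a threshold.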
20210158203 | REINFORCEMENT LEARNING FOR CHATBOTS - A computer-implemented method for generating and deploying a reinforced learning model to train a chatbot. The method includes selecting a plurality of conversations, wherein each conversation includes an agent and a user. The method includes identifying, in each of the conversations, a set of turns and one or more topics. The method further includes associating one or more topics to each turn of the set of turns. The method includes generating a conversation flow for each conversation, wherein the conversation flow identifies a sequence of the topics. The method includes applying an outcome score to each conversation. The method includes creating a reinforced learning (RL) model, wherein the RL model includes a Markov model based on the conversation flow of each conversation and the outcome score of each conversation. The method includes deploying the RL model, wherein the deploying includes sending the RL model to a chatbot. | 2021-05-27 |
20210158204 | ENHANCING FAIRNESS IN TRANSFER LEARNING FOR MACHINE LEARNING MODELS WITH MISSING PROTECTED ATTRIBUTES IN SOURCE OR TARGET DOMAINS - A method of utilizing a computing device to correct source data used in machine learning includes receiving, by the computing device, first data. The computing device corrects the source data via an application of a covariate shift to the source data based upon the first data where the covariate shift re-weighs the source data. | 2021-05-27 |
20210158205 | LABELING A DATASET - A method, system and computer program product, the method comprising: obtaining a first model trained upon cases and labels, the first model providing a prediction in response to an input case; obtaining a second model trained using the cases and indications of whether the predictions of the first model are correct, the second model providing a correctness prediction for the first model; determining a case for which the second model predicts that the first model provides an incorrect prediction; further training the first model also on a first corpus including the case and a label, thereby improving performance of the first model; providing the case to the first model to obtain a first prediction; and further training the second model also on a second corpus including the case and a correctness label, the correctness label being “correct” if the first prediction is equal to the label, thereby improving performance of the second model. | 2021-05-27 |
20210158206 | ATTENTION MECHANISM FOR NATURAL LANGUAGE PROCESSING - A method may include applying a machine learning model, such as a bidirectional encoder representations from transformers model, trained to generate a representation of a word sequence including a reference word, a first candidate noun, and a second candidate noun. The representation may include a first attention map and a second attention map. The first attention map may include attention values indicative of a strength of various linguistic relationships between the reference word and the first candidate noun. The second attention map may include attention values indicative of a strength of various linguistic relationships between the reference word and the second candidate noun. A natural language processing task, such as determining whether the reference word refers to the first candidate noun or the second candidate noun, may be performed based on the first attention map and the second attention map. Related methods and articles of manufacture are also disclosed. | 2021-05-27 |
20210158207 | ARTIFICIAL INTELLIGENCE SYSTEM AND METHOD FOR SITE SAFETY AND TRACKING - A machine-learning ecosystem includes a correlation module for building at least one prediction model based on at least one data input including at least one input parameter and at least one output parameter, the prediction model relating the output parameter to the input parameter. The correlation module performs at least one threshold check on the prediction model to assess the robustness of the prediction model. The ecosystem further includes a decision module communicatively coupled to the correlation module and receiving the prediction model from the correlation module. Based on a verification check at the decision module, a confirmation, a deferral, or a rejection of the prediction model is sent from the decision module to the correlation module. | 2021-05-27 |
20210158208 | ENGAGEMENT PREDICTION OF IMAGE ASSETS ON SOCIAL MEDIA - The present technology can receive a collection of candidate images that are candidates for posting on a social media platform, and then determine, using an artificial intelligence model, a prediction of expected engagement on the social media platform for each of the candidate images. | 2021-05-27 |
20210158209 | SYSTEMS, APPARATUSES, AND METHODS OF ACTIVE LEARNING FOR DOCUMENT QUERYING MACHINE LEARNING MODELS - Techniques for active learning for document querying machine learning (ML) models as a service are described. A service may perform a search of data of a user, using a machine learning model, for a search query to generate a result, generate a confidence score for the result of the search, select a proper subset of the data to be provided to the user based on the confidence score, display the proper subset of the data to the user, receive an indication from the user of one or more sections of the proper subset of the data for use in a next training iteration of the machine learning model, and perform the next training iteration of the machine learning model with the one or more sections of the proper subset of the data. | 2021-05-27 |
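The confidence-gated retrieval loop above can be sketched as follows. This is a minimal stand-in, not the service's API: the "model" is plain keyword overlap, and the subset-widening rule, threshold, and feedback format are all illustrative assumptions.

```python
import numpy as np

# Hypothetical corpus: documents scored against a query by a stand-in "model"
# (keyword overlap here; the service would use a learned ranker).
docs = ["neural network training", "gradient descent tips",
        "cooking pasta", "transformer attention", "garden care"]
query = "neural network attention"

def score(doc, q):
    d, qs = set(doc.split()), set(q.split())
    return len(d & qs) / len(qs)

scores = np.array([score(d, query) for d in docs])
confidence = scores.max()  # confidence score for the search result

# Select the proper subset of documents to display: low confidence widens
# the subset so the user has more material to correct the model with.
k = 1 if confidence >= 0.5 else 3
subset = [docs[i] for i in np.argsort(scores)[::-1][:k]]

# The user marks sections of the subset; those feed the next training iteration.
user_selected = subset[:1]                       # stand-in for user feedback
next_training_batch = [(query, s, 1.0) for s in user_selected]
```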
20210158210 | HYBRID IN-DOMAIN AND OUT-OF-DOMAIN DOCUMENT PROCESSING FOR NON-VOCABULARY TOKENS OF ELECTRONIC DOCUMENTS - Techniques are described herein for training and evaluating machine learning (ML) models for document processing computing applications based on in-domain and out-of-domain characteristics. In some embodiments, an ML system is configured to form feature vectors by mapping unknown tokens to known tokens within a domain based, at least in part, on out-of-domain characteristics. In other embodiments, the ML system is configured to map the unknown tokens to an aggregate vector representation based on the out-of-domain characteristics. The ML system may use the feature vectors to train ML models and/or estimate unknown labels for the new documents. | 2021-05-27 |
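The "map unknown tokens to an aggregate vector" idea above admits a very small sketch. The vocabulary, embeddings, and the choice of the mean as the aggregate are illustrative assumptions, not details from the application.

```python
import numpy as np

# Hypothetical in-domain vocabulary with 2-d embeddings.
vocab = {"invoice": np.array([1.0, 0.0]),
         "receipt": np.array([0.9, 0.1]),
         "contract": np.array([0.0, 1.0])}

# Aggregate vector: mean of the known in-domain embeddings, used as the
# fallback representation for unknown (out-of-vocabulary) tokens.
aggregate = np.mean(list(vocab.values()), axis=0)

def embed(token):
    """Map a token to its in-domain vector, or to the aggregate if unknown."""
    return vocab.get(token, aggregate)

# Feature vector for a document containing one known and one unknown token.
features = np.stack([embed(t) for t in ["invoice", "blorptext"]])
```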
20210158211 | LINEAR TIME ALGORITHMS FOR PRIVACY PRESERVING CONVEX OPTIMIZATION - Methods, systems, and apparatus, including computer programs encoded on computer storage media for training a machine learning model. The method includes obtaining a training data set comprising a plurality of training examples; determining i) a stochastic gradient descent step size schedule, ii) a stochastic gradient descent noise schedule, and iii) a stochastic gradient descent batch size schedule, wherein the stochastic gradient descent batch size schedule comprises a sequence of varying batch sizes; and training a machine learning model on the training data set, comprising performing stochastic gradient descent according to the i) stochastic gradient descent step size schedule, ii) stochastic gradient descent noise schedule, and iii) stochastic gradient descent batch size schedule to adjust a machine learning model loss function. | 2021-05-27 |
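The three schedules named above (step size, noise, varying batch size) can be wired into a noisy SGD loop like this. The objective, the concrete schedule values, and the noise scale are illustrative assumptions; only the structure — three coordinated schedules driving one SGD run — follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy least-squares problem: recover w* = (1, -2, 0.5).
X = rng.normal(size=(256, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=256)

step_sizes  = [0.1, 0.05, 0.01] * 10      # step size schedule
noise_stds  = [0.05, 0.02, 0.01] * 10     # gradient noise schedule
batch_sizes = [16, 32, 64] * 10           # varying batch size schedule

w = np.zeros(3)
for eta, sigma, b in zip(step_sizes, noise_stds, batch_sizes):
    idx = rng.choice(len(X), size=b, replace=False)
    grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / b
    grad += sigma * rng.normal(size=w.shape)   # privacy-style noise injection
    w -= eta * grad
```

Even with injected noise, the schedule drives `w` close to the true coefficients on this toy problem.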
20210158212 | LEARNING METHOD AND LEARNING APPARATUS - A first learning rate is set for a first block including a first parameter, and a second learning rate, smaller than the first, is set for a second block including a second parameter; both blocks are included in a model. Learning processing is started in which, iteratively, the first parameter is updated based on a prediction error of the model, calculated using training data, and the first learning rate, and the second parameter is updated based on the prediction error and the second learning rate. The update frequency of the second parameter is made lower than that of the first parameter by intermittently omitting the update of the second parameter during the learning processing, based on the relationship between the first and second learning rates. | 2021-05-27 |
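A minimal sketch of the scheme above, assuming (as one plausible policy, not the patented one) that the small-learning-rate block is updated once per `lr1/lr2` iterations:

```python
# Two one-parameter "blocks", both minimizing f(w) = w**2.
lr1, lr2 = 0.1, 0.01
skip_every = int(lr1 / lr2)    # update block 2 once per 10 iterations

w1, w2 = 5.0, 5.0
updates2 = 0
for step in range(100):
    g1, g2 = 2 * w1, 2 * w2    # prediction-error gradients
    w1 -= lr1 * g1             # block 1: updated every iteration
    if step % skip_every == 0: # block 2: update intermittently omitted
        w2 -= lr2 * g2
        updates2 += 1
```

After the loop, block 1 has nearly converged while block 2 has moved only slightly, which is the intended effect of lowering the second block's update frequency.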
20210158213 | LEARNING MODEL MANAGEMENT SYSTEM, LEARNING MODEL MANAGEMENT METHOD, AND RECORDING MEDIUM - A learning model management system executes a provisional evaluation when the amount of feedback data is at or below a threshold for a definite evaluation but above a threshold for a provisional evaluation. In this provisional evaluation, the prediction accuracy of the learning model in operation is evaluated a plurality of times, and whether that accuracy is in a deterioration trend is determined based on the change tendency of the evaluation results. If the prediction accuracy is determined to be deteriorating, the system notifies a manager of the deterioration trend, prompting the manager to increase the amount of feedback data to a level that enables the definite evaluation. | 2021-05-27 |
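The two-threshold gating and trend check above can be sketched as follows. The threshold values and the least-squares-slope trend test are illustrative assumptions.

```python
# Hypothetical thresholds on the amount of feedback data.
DEFINITE_THRESHOLD = 100
PROVISIONAL_THRESHOLD = 20

def evaluation_mode(n_feedback):
    if n_feedback > DEFINITE_THRESHOLD:
        return "definite"
    if n_feedback > PROVISIONAL_THRESHOLD:
        return "provisional"
    return "none"

def deterioration_trend(accuracies):
    """Least-squares slope over repeated evaluations; negative => deteriorating."""
    n = len(accuracies)
    mx, my = (n - 1) / 2, sum(accuracies) / n
    slope = sum((x - mx) * (a - my) for x, a in enumerate(accuracies)) / \
            sum((x - mx) ** 2 for x in range(n))
    return slope < 0

mode = evaluation_mode(50)                         # in the provisional range
deteriorating = deterioration_trend([0.92, 0.90, 0.89, 0.86])
```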
20210158214 | METHOD OF PERFORMING A PROCESS USING ARTIFICIAL INTELLIGENCE - The invention relates to a method of performing a process using artificial intelligence. The method comprises running, by a computing device, an application configured to perform a process which uses an artificial intelligence model for processing signals and defining at least one parameter set for performing the process; running, by the computing device, a user-driven workflow engine which comprises multiple modules including at least a first and second module; defining, by the first module, a context of the process and generating corresponding context information, providing the artificial intelligence model based on the generated context information of the process; and using, by the second module, the artificial intelligence model in a user-driven workflow within the application while executing the process. Advantageously, the combination of these modules describes an end-to-end connection which is self-learning and self-improving and is targeted at users with no expertise in the AI domain. | 2021-05-27 |
20210158215 | METHOD AND DEVICE FOR EVALUATING A STATISTICALLY DISTRIBUTED MEASURED VALUE IN THE EXAMINATION OF AN ELEMENT OF A PHOTOLITHOGRAPHY PROCESS - The present invention relates to a method for evaluating a statistically distributed measured value in the examination of an element for a photolithography process, comprising the following steps: (a) using a plurality of parameters in a trained machine learning model, wherein the parameters characterize a state of a measurement environment in a time period assigned to a measurement of the measured value; and (b) executing the trained machine learning model in order to evaluate the measured value. | 2021-05-27 |
20210158216 | METHOD AND SYSTEM FOR FEDERATED LEARNING - Methods, systems, and apparatuses, including computer programs encoded on computer storage media, for federated learning with differentially private (DP) intrinsic quantization are disclosed. One exemplary method may include obtaining a parameter vector of a local model; updating the parameter vector of the local model by adding a plurality of noise vectors to the parameter vector of the local model; performing quantization on the updated parameter vector to obtain a quantized parameter vector, wherein the quantization maps coordinates in the updated parameter vector to a set of discrete finite values; and transmitting, to a server, the quantized parameter vector and at least one of the plurality of noise vectors for the server to update a global model. | 2021-05-27 |
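The client-side steps listed above (noise addition, then quantization to a discrete finite value set) can be sketched like this. The grid size, noise scale, and nearest-level quantizer are illustrative assumptions, not the application's specific construction.

```python
import numpy as np

rng = np.random.default_rng(7)

theta = rng.normal(size=8)                     # local model parameter vector

# Add a plurality of noise vectors (DP-style intrinsic noise).
noise_vectors = [rng.normal(scale=0.1, size=theta.shape) for _ in range(3)]
theta_noisy = theta + sum(noise_vectors)

# Quantize each coordinate to the nearest value in a discrete finite set.
levels = np.linspace(-3, 3, 17)
theta_quantized = levels[
    np.abs(theta_noisy[:, None] - levels[None, :]).argmin(axis=1)
]

# The client transmits the quantized vector plus at least one noise vector
# so the server can account for the noise when updating the global model.
payload = (theta_quantized, noise_vectors[0])
```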
20210158218 | MEDICAL INFORMATION PROCESSING APPARATUS, MEDICAL INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - A medical information processing apparatus comprises: an obtaining unit that obtains medical information; a learning unit that performs machine learning on a function of the medical information processing apparatus using the medical information; an evaluation data holding unit that holds evaluation data for evaluating a learning result of the learning unit; and an evaluating unit that evaluates a learning result obtained through machine learning, based on the evaluation data. | 2021-05-27 |
20210158219 | METHOD AND SYSTEM FOR AN END-TO-END ARTIFICIAL INTELLIGENCE WORKFLOW - In general, certain embodiments of the present disclosure provide methods and systems for enabling reproducible processing of machine learning models and scalable deployment on a distributed network. The method comprises building a machine learning model; training the machine learning model to produce a plurality of versions of the machine learning model; tracking the plurality of versions of the machine learning model to produce a change facilitator tool; sharing the change facilitator tool with one or more devices such that each device can reproduce the plurality of versions of the machine learning model; and generating a deployable version of the machine learning model through repeated training. | 2021-05-27 |
20210158220 | OPTIMIZING ACCURACY OF MACHINE LEARNING ALGORITHMS FOR MONITORING INDUSTRIAL MACHINE OPERATION - A system and method for optimizing machine learning algorithms for monitoring industrial machine operation, including: monitoring at least one industrial machine behavioral model of at least one industrial machine; identifying at least a first ambiguous segment of the behavioral model having a first set of characteristics, and identifying a corrective solution recommendation associated with the first ambiguous segment; identifying at least a second ambiguous segment of the behavioral model having a second set of characteristics; determining whether a similarity between the first set of characteristics and the second set of characteristics exceeds a predetermined threshold; and updating a machine learning algorithm of the behavioral model to associate the corrective solution recommendation with the second ambiguous segment when it is determined that the similarity has exceeded the predetermined threshold. | 2021-05-27 |
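The similarity-threshold propagation step above can be sketched compactly. Representing the characteristic sets as feature vectors and comparing them by cosine similarity is an illustrative assumption; the application does not specify the similarity measure.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.9  # hypothetical predetermined threshold

first_segment  = np.array([0.8, 0.1, 0.5])    # characteristics, segment 1
second_segment = np.array([0.82, 0.12, 0.48]) # characteristics, segment 2
recommendations = {"segment1": "recalibrate vibration sensor"}  # hypothetical

# If the two characteristic sets are similar enough, associate the known
# corrective solution recommendation with the new ambiguous segment.
if cosine(first_segment, second_segment) > THRESHOLD:
    recommendations["segment2"] = recommendations["segment1"]
```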
20210158221 | METHODS AND SYSTEMS FOR FACILITATING ANALYSIS OF A MODEL - Disclosed herein is a method for facilitating analysis of a model. Accordingly, the method may include receiving, using a communication device, a model data associated with a model from a user device, assessing, using a processing device, the model data, identifying, using the processing device, a field associated with the model based on the assessing, analyzing, using the processing device, the field based on the identifying of the field, identifying, using the processing device, a related field associated with the field based on the analyzing of the field, analyzing, using the processing device, the related field based on the model, generating, using the processing device, a notification based on the analyzing of the related field, transmitting, using the communication device, the notification to the user device, and storing, using a storage device, the model data and the model. | 2021-05-27 |
20210158222 | ARTIFICIAL NEURAL NETWORK EMULATION OF HOTSPOTS - Methods, devices, and systems for emulating a compute kernel with an ANN. The compute kernel is executed on a processor, and it is determined whether the compute kernel is a hotspot kernel. If the compute kernel is a hotspot kernel, the compute kernel is emulated with an ANN, and the ANN is substituted for the compute kernel. | 2021-05-27 |
20210158223 | Finding Semiconductor Defects Using Convolutional Context Attributes - Context attributes for optical imaging of a patterned layer of a semiconductor die are calculated. Calculating the context attributes includes calculating convolutions of a pattern of the patterned layer with respective kernels of a plurality of kernels, wherein the plurality of kernels is orthogonal. Defects on the semiconductor die are found in accordance with the context attributes. | 2021-05-27 |
20210158224 | LEARNING DEVICE, AND LEARNING METHOD - A learning device is configured to perform learning by gradient boosting, and includes: data memory units configured to store learning data including a type of feature amount and corresponding gradient information; gradient output units each configured to receive an input of the feature amount and the corresponding gradient information from a corresponding one of the data memory units, and output the gradient information through an output port corresponding to each value of the input feature amount; an addition unit configured to add up one or more pieces of the gradient information corresponding to the same value of the feature amount, and output an added value of the gradient information corresponding to each value of the feature amount; and a histogram memory unit configured to store a histogram obtained by integrating added values of the gradient information corresponding to each value of the feature amount as a bin. | 2021-05-27 |
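The hardware pipeline above (gradient output ports, addition unit, histogram memory) amounts to histogram-based gradient accumulation, which in software reduces to a few lines. The sample data is illustrative; the abstract describes dedicated memory and adder units rather than a dictionary.

```python
from collections import defaultdict

# (feature value acting as bin index, gradient) pairs of learning data.
samples = [(2, 0.5), (0, -1.0), (2, 0.25), (1, 0.1), (0, 0.3)]

# Route each gradient to the bin for its feature value and add it up,
# playing the role of the "addition unit"; the dict is the histogram memory.
histogram = defaultdict(float)
for feature_value, gradient in samples:
    histogram[feature_value] += gradient
```

The resulting histogram maps each feature value to its summed gradient, which is what histogram-based split finding in gradient boosting consumes.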
20210158225 | Non-Intrusive Load Monitoring Using Ensemble Machine Learning Techniques - Embodiments implement non-intrusive load monitoring using ensemble machine learning techniques. A first trained machine learning model configured to disaggregate target device energy usage from source location energy usage and a second trained machine learning model configured to detect device energy usage from source location energy usage can be stored, where the first trained machine learning model is trained to predict an amount of energy usage for the target device and the second trained machine learning model is trained to predict when a target device has used energy. Source location energy usage over a period of time can be received, where the source location energy usage includes energy consumed by the target device. An amount of disaggregated target device energy usage over the period of time can be predicted, using the first and second trained machine learning models, based on the received source location energy usage. | 2021-05-27 |
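The two-model combination above can be sketched as follows. Both "models" here are stand-in functions over total mains readings (a constant-draw regressor and a threshold detector); the real system would use trained models, and all numbers are illustrative.

```python
import numpy as np

mains = np.array([0.2, 1.5, 1.6, 0.3, 1.4, 0.2])  # kWh per interval

def amount_model(x):
    # regression stand-in: assume the device draws ~1.2 kWh when contributing
    return np.full_like(x, 1.2)

def on_off_model(x):
    # detection stand-in: device deemed on when the mains reading exceeds 1.0
    return (x > 1.0).astype(float)

# Ensemble disaggregation: predicted amount gated by predicted on/off state.
disaggregated = amount_model(mains) * on_off_model(mains)
total_device_energy = disaggregated.sum()
```

Gating the amount prediction by the detection prediction keeps the disaggregated estimate at zero in intervals where the device was off, which is the point of combining the two models.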
20210158226 | MACHINE LEARNING SYSTEM, MACHINE LEARNING METHOD, AND PROGRAM - Machine learning techniques are provided that allow learning to be performed even when the cost function is not convex. A machine learning system includes a plurality of node portions that learn a mapping using one common primal variable by machine learning based on their respective input data, while sending and receiving information to and from each other. The machine learning minimizes, instead of the originally corresponding non-convex cost function, a proxy convex function serving as an upper bound on the cost function. The proxy convex function is represented by a formula of a first-order gradient of the cost function with respect to the primal variable, or by formulas of first-order and second-order gradients of the cost function with respect to the primal variable. | 2021-05-27 |
20210158227 | SYSTEMS AND METHODS FOR GENERATING MODEL OUTPUT EXPLANATION INFORMATION - Systems and methods for explaining models. | 2021-05-27 |
20210158228 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, INFORMATION PROCESSING SYSTEM, DISPLAY DEVICE, AND RESERVATION SYSTEM - Provided is an information processing device including a reaction information use unit configured to use reaction information indicating a reaction of a user to presented information in a case where use of the reaction information has been permitted. | 2021-05-27 |
20210158229 | OPTION-BASED DISTRIBUTED RESERVATION SYSTEM - An example operation may include one or more of recording an option associated with an event in a block of a blockchain, receiving a request to consume the option from a client based on a future outcome with respect to the event, receiving, via chaincode, outcome data that is associated with the event, and in response to detecting that the future outcome has occurred based on the outcome data, activating the request to consume the option. | 2021-05-27 |