13th week of 2022 patent application highlights part 56 |
Patent application number | Title | Published |
20220101031 | UNIFIED READING SOLUTION FOR VEHICLES - A method, system and computer program product for progressively updating at least one matrix of license plate identification values is disclosed. At a vehicle presence time, an image is captured within which is shown a uniquely identifiable license plate of a vehicle. Analytics is carried out on the image to obtain, in relation to the uniquely identifiable license plate, at least four values in relation to both a license plate number and at least one additional plate-identifying information. The matrix of license plate identification values is populated with the at least four values and stored in a database. | 2022-03-31 |
20220101032 | METHOD AND SYSTEM FOR PRODUCT SEARCH BASED ON DEEP-LEARNING - A method and system for performing a deep learning based product search obtain an input image including a target product to be searched; transform a model pose included in the input image; obtain a standard input image having the transformed pose of the model; obtain a main product image having an area including the target product by performing deep learning based on the standard input image; extract a feature vector from the main product image; perform a product search for a product similar to the target product based on the feature vector; and output a result of the product search. | 2022-03-31 |
20220101033 | VISUAL MATCHING WITH A MESSAGING APPLICATION - Aspects of the present disclosure involve a system and a method for performing operations comprising: identifying a plurality of features for an image received by a messaging application server; assigning a first of the plurality of features and a second of the plurality of features respectively to a first nearest visual codebook cluster and a second nearest visual codebook cluster; applying the first and second nearest visual codebook clusters to a visual search database to identify a plurality of candidate matching images; selecting a given matching image based on a geometric verification of the plurality of matching images and the received image; and accessing an augmented reality experience corresponding to the given matching image. | 2022-03-31 |
20220101034 | METHOD AND SYSTEM FOR SEGMENTING INTERVENTIONAL DEVICE IN IMAGE - A method is provided for segmenting an image of a region of interest of a subject. The method includes receiving training images, at least some of which include a training interventional device; constructing ground truth data, including at least a ground truth image, for each training image; training a segmentation model using the training images based on intensity and vesselness features in the training images and the associated ground truth images; acquiring a new test image showing the region of interest; extracting vesselness values of pixels or voxels from the new test image; dividing the new test image into multiple patches; and performing segmentation of the new test image using the trained segmentation model to generate a segmentation image, corresponding to the new test image, with values indicating for each pixel or voxel in the segmentation image a presence or an absence of an interventional device. | 2022-03-31 |
20220101035 | DIAGNOSTIC TOOL FOR DEEP LEARNING SIMILARITY MODELS - A diagnostic tool for deep learning similarity models and image classifiers provides valuable insight into neural network decision-making. A disclosed solution generates a saliency map by: receiving a baseline image and a test image; determining, with a convolutional neural network (CNN), a first similarity between the baseline image and the test image; based on at least determining the first similarity, determining, for the test image, a first activation map for at least one CNN layer; based on at least determining the first similarity, determining, for the test image, a first gradient map for the at least one CNN layer; and generating a first saliency map as an element-wise function of the first activation map and the first gradient map. Some examples further determine a region of interest (ROI) in the first saliency map, cropping the test image to an area corresponding to the ROI, and determine a refined similarity score. | 2022-03-31 |
20220101036 | SYSTEMS AND METHODS FOR ENFORCING CONSTRAINTS IN CHARACTER RECOGNITION - There is disclosed a method of and a system for predicting text in an image using one or more constraints. The image is input to a machine learning algorithm (MLA). The MLA outputs a probability distribution. The probability distribution comprises a predicted probability for each of a plurality of pairs, where each pair comprises a class and a next state of the MLA. The states of the probability distribution are added to a set of states to be searched. States that are end states or that fail to satisfy at least one of the constraints are removed from the set of states to be searched. States of the set of states to be searched are input to the MLA. The search is repeated with new states output by the MLA. End states output by the MLA are output as output states that each comprise a sequence of characters. | 2022-03-31 |
20220101037 | System and Method for License Plate Recognition - A license plate recognition system includes an image capturing module, a license plate detection module, a segment extraction module, a character classification module, and a character recognition module. The image capturing module is for capturing an image. The license plate detection module is for receiving the image and identifying a license plate in the image. The segment extraction module is for extracting a sequence of character segments on the license plate. The character classification module is for computing a probability of each possible character in each character segment. The character recognition module is for identifying permissible characters for each character segment according to a syntax of the sequence of character segments, and for identifying a character having a highest probability among the permissible characters as a selected character for each character segment. | 2022-03-31 |
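The selection step described in application 20220101037 above — restricting each segment's candidates to syntax-permissible characters and then taking the most probable one — can be sketched as follows. The function name and data shapes are illustrative assumptions, not the patented implementation:

```python
def select_characters(segment_probs, syntax):
    """Pick the most probable permissible character per segment.

    `segment_probs`: one dict per character segment, mapping each
    candidate character to its classifier probability.
    `syntax`: one set of permissible characters per position
    (e.g. letters first, then digits). Illustrative sketch only.
    """
    result = []
    for probs, allowed in zip(segment_probs, syntax):
        # Restrict to characters permitted by the plate syntax, then
        # take the one with the highest classifier probability.
        permissible = {c: p for c, p in probs.items() if c in allowed}
        result.append(max(permissible, key=permissible.get))
    return "".join(result)
```

For example, with a syntax of one letter followed by one digit, a segment where the classifier slightly prefers "8" over "A" still resolves to "A" in the letter position.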
20220101038 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - An information processing device is configured to identify a background area of a first image registered from a first account and a background area of a second image registered from a second account, and to output identicalness information indicating whether a user owning the first account and a user owning the second account are identical to each other based on the background area of the first image and the background area of the second image. | 2022-03-31 |
20220101039 | SINGLE-PASS PRIMARY ANALYSIS - Methods and systems for image analysis are provided, and in particular for identifying a set of base-calling locations in a flow cell for DNA sequencing. These include capturing flow cell images after each sequencing step performed on the flow cell, and identifying candidate cluster centers in at least one of the flow cell images. Intensities are determined for each candidate cluster center in a set of flow cell images. Purities are determined for each candidate cluster center based on the intensities. Each candidate cluster center with a purity greater than the purity of the surrounding candidate cluster centers within a distance threshold is added to a template set of base-calling locations. | 2022-03-31 |
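The template-building step in application 20220101039 above amounts to keeping candidate cluster centers that are local purity maxima within a distance threshold. A minimal sketch with illustrative names and a brute-force neighbor search; the patent's intensity-to-purity computation is omitted:

```python
def select_template_centers(centers, purity, positions, dist_thresh):
    """Keep candidates whose purity exceeds that of every other
    candidate within `dist_thresh`.

    `centers`: candidate ids; `purity`: id -> purity value;
    `positions`: id -> (x, y) coordinates. Assumed data shapes.
    """
    template = []
    for c in centers:
        x, y = positions[c]
        is_local_max = True
        for other in centers:
            if other == c:
                continue
            ox, oy = positions[other]
            # Squared-distance comparison avoids a sqrt call.
            if (ox - x) ** 2 + (oy - y) ** 2 <= dist_thresh ** 2:
                if purity[other] >= purity[c]:
                    is_local_max = False
                    break
        if is_local_max:
            template.append(c)
    return template
```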
20220101040 | DEVICE AND METHOD FOR CLASSIFICATION USING CLASSIFICATION MODEL AND COMPUTER READABLE STORAGE MEDIUM - A device and a method for classification using a pre-trained classification model and a computer readable storage medium are provided. The device is configured to extract, for each of multiple images in a target image group to be classified, a feature of the image using a feature extraction layer of the pre-trained classification model; calculate, for each of the multiple images, a contribution of the image to a classification result of the target image group using a contribution calculation layer of the pre-trained classification model; aggregate extracted features of the multiple images based on calculated contributions of the multiple images, to obtain an aggregated feature as a feature of the target image group; and classify the target image group based on the feature of the target image group. | 2022-03-31 |
20220101041 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM RECORDED WITH INFORMATION PROCESSING PROGRAM - An information processing device acquires an image captured by a transmission electron microscope. The information processing device, for each partial region in the image, calculates a variation in pixel values of pixels included in the partial region. The information processing device, for each partial region in the image, determines a degree of crystallinity of the partial region based on the calculated variation in the pixel values of the partial region. | 2022-03-31 |
20220101042 | Cluster Interlayer Safety Mechanism In An Artificial Neural Network Processor - Novel and useful system and methods of several functional safety mechanisms for use in an artificial neural network (ANN) processor. The mechanisms can be deployed individually or in combination to provide a desired level of safety in neural networks. Multiple strategies are applied involving redundancy by design, redundancy through spatial mapping as well as self-tuning procedures that modify static (weights) and monitor dynamic (activations) behavior. The various mechanisms of the present invention address ANN system level safety in situ, as a system level strategy that is tightly coupled with the processor architecture. The NN processor incorporates several functional safety concepts which reduce its risk of failure that occurs during operation from going unnoticed. The mechanisms function to detect and promptly flag and report the occurrence of an error with some mechanisms capable of correction as well. The safety mechanisms cover data stream fault detection, software defined redundant allocation, cluster interlayer safety, cluster intralayer safety, layer control unit (LCU) instruction addressing, weights storage safety, and neural network intermediate results storage safety. | 2022-03-31 |
20220101043 | Cluster Intralayer Safety Mechanism In An Artificial Neural Network Processor - Novel and useful system and methods of several functional safety mechanisms for use in an artificial neural network (ANN) processor. The mechanisms can be deployed individually or in combination to provide a desired level of safety in neural networks. Multiple strategies are applied involving redundancy by design, redundancy through spatial mapping as well as self-tuning procedures that modify static (weights) and monitor dynamic (activations) behavior. The various mechanisms of the present invention address ANN system level safety in situ, as a system level strategy that is tightly coupled with the processor architecture. The NN processor incorporates several functional safety concepts which reduce its risk of failure that occurs during operation from going unnoticed. The mechanisms function to detect and promptly flag and report the occurrence of an error with some mechanisms capable of correction as well. The safety mechanisms cover data stream fault detection, software defined redundant allocation, cluster interlayer safety, cluster intralayer safety, layer control unit (LCU) instruction addressing, weights storage safety, and neural network intermediate results storage safety. | 2022-03-31 |
20220101044 | ARTIFICIAL INTELLIGENCE MODEL GENERATION USING DATA WITH DESIRED DIAGNOSTIC CONTENT - A computer receives a general predictive model and training data. The computer builds a clustering feature tree model to condense the training data into data groups. The computer applies a leave-one-out evaluation method to determine an impact value for each data group with regard to said general predictive model. The computer identifies a diagnostic category for each data group selected from a list of categories including model-harmful data, model-neutral data, and model-helping data, in accordance with said impact value. The computer removes data in groups labelled as model-harmful from the training data and builds a modified general predictive model based on data in groups labelled as model-neutral or model-helping. | 2022-03-31 |
20220101045 | TRAFFIC LIGHT DETECTION AUTO-LABELING AND FEDERATED LEARNING BASED ON VEHICLE-TO-INFRASTRUCTURE COMMUNICATIONS - A method for traffic light auto-labeling includes aggregating vehicle-to-infrastructure (V2I) traffic light signals at an intersection to determine transition states of each driving lane at the intersection during operation of an ego vehicle. The method also includes automatically labeling image training data to form auto-labeled image training data for a traffic light recognition model within the ego vehicle according to the determined transition states of each driving lane at the intersection. The method further includes planning a trajectory of the ego vehicle to comply with a right-of-way according to the determined transition states of each driving lane at the intersection according to a trained traffic light detection model. A federated learning module may train the traffic light recognition model using the auto-labeled image training data during the operation of the ego vehicle. | 2022-03-31 |
20220101046 | Systems and Methods With Robust Classifiers That Defend Against Patch Attacks - A system and method relate to providing machine learning predictions with defenses against patch attacks. The system and method include obtaining a digital image and generating a set of location data via a random process. The set of location data include randomly selected locations on the digital image that provide feasible bases for creating regions for cropping. A set of random crops is generated based on the set of location data. Each crop includes a different region of the digital image as defined in relation to its corresponding location data. The machine learning system is configured to provide a prediction for each crop of the set of random crops and output a set of predictions. The set of predictions is evaluated collectively to determine a majority prediction from among the set of predictions. An output label is generated for the digital image based on the majority prediction. The output label includes the majority prediction as an identifier for the digital image. | 2022-03-31 |
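The randomized-cropping defense in application 20220101046 above can be sketched as a majority vote over classifier predictions on random crops. All names and parameters below are illustrative assumptions, and the classifier is treated as a black box:

```python
import random
from collections import Counter

def majority_vote_defense(image, classifier, num_crops=20, crop_size=64, seed=None):
    """Classify an image by majority vote over random crops.

    `image`: 2D list (rows of pixels); `classifier`: callable mapping
    a crop to a label. Sketch only, not the patented implementation.
    """
    rng = random.Random(seed)
    height, width = len(image), len(image[0])
    votes = []
    for _ in range(num_crops):
        # Randomly select a feasible top-left corner for the crop.
        top = rng.randint(0, height - crop_size)
        left = rng.randint(0, width - crop_size)
        crop = [row[left:left + crop_size] for row in image[top:top + crop_size]]
        votes.append(classifier(crop))
    # The majority prediction becomes the output label.
    label, _ = Counter(votes).most_common(1)[0]
    return label
```

The intuition is that an adversarial patch covers only part of the image, so most random crops miss it and the majority vote suppresses its influence.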
20220101047 | DATA AUGMENTATION INCLUDING BACKGROUND MODIFICATION FOR ROBUST PREDICTION USING NEURAL NETWORKS - In various examples, a background of an object may be modified to generate a training image. A segmentation mask may be generated and used to generate an object image that includes image data representing the object. The object image may be integrated into a different background and used for data augmentation in training a neural network. Data augmentation may also be performed using hue adjustment (e.g., of the object image) and/or rendering three-dimensional capture data that corresponds to the object from selected views. Inference scores may be analyzed to select a background for an image to be included in a training dataset. Backgrounds may be selected and training images may be added to a training dataset iteratively during training (e.g., between epochs). Additionally, early or late fusion may be employed that uses object mask data to improve inferencing performed by a neural network trained using object mask data. | 2022-03-31 |
20220101048 | MULTIMODALITY IMAGE PROCESSING TECHNIQUES FOR TRAINING IMAGE DATA GENERATION AND USAGE THEREOF FOR DEVELOPING MONO-MODALITY IMAGE INFERENCING MODELS - Techniques are described for generating mono-modality training image data from multi-modality image data and using the mono-modality training image data to train and develop mono-modality image inferencing models. A method embodiment comprises generating, by a system comprising a processor, a synthetic 2D image from a 3D image of a first capture modality, wherein the synthetic 2D image corresponds to a 2D version of the 3D image in a second capture modality, and wherein the 3D image and the synthetic 2D image depict a same anatomical region of a same patient. The method further comprises transferring, by the system, ground truth data for the 3D image to the synthetic 2D image. In some embodiments, the method further comprises employing the synthetic 2D image to facilitate transfer of the ground truth data to a native 2D image captured of the same anatomical region of the same patient using the second capture modality. | 2022-03-31 |
20220101049 | ENHANCED TRAINING METHOD AND APPARATUS FOR IMAGE RECOGNITION MODEL - Implementations of the present specification provide an enhanced training method for an image recognition model. A predetermined quantity or predetermined proportion of samples are randomly selected from a first sample set as a seed sample for extension to obtain several extended samples. The extended sample is obtained by adding disturbance to an original image without changing an annotation result. In a sample extension process, disturbance values are arranged towards neighborhood in predetermined distribution with a reference pixel as a reference, so that real disturbance can be well simulated. Because the annotation result of the extended sample remains unchanged after the disturbance is added, an image recognition model trained by using the extended sample can well recognize a target recognition result of an original image, thereby improving robustness of the image recognition model. | 2022-03-31 |
20220101050 | IMAGE GENERATION MODEL BASED ON LOG-LIKELIHOOD - A computer-implemented method of training an image generation model. The image generation model comprises an argmax transformation configured to compute a discrete index feature indicating an index of a feature of the continuous feature vector with an extreme value. The image generation model is trained using a log-likelihood optimization. This involves obtaining a value of the index feature for the training image, sampling values of the continuous feature vector given the value of the index feature according to a stochastic inverse transformation of the argmax transformation, and determining a likelihood contribution of the argmax transformation for the log-likelihood based on a probability that the stochastic inverse transformation generates the values of the continuous feature vector given the value of the index feature. | 2022-03-31 |
20220101051 | DATA-BASED UPDATING OF THE TRAINING OF CLASSIFIER NETWORKS - A method for training a neural network. The method includes providing learning input images and associated learning output data; providing auxiliary input images; generating modifications of these auxiliary input images by introducing at least one predefined change into them; supplying the modifications to the neural network; ascertaining predictions for the predefined change, using output data onto which the neural network maps the modifications; assessing deviations of the predictions from the predefined change, using an auxiliary cost function; optimizing parameters of the neural network to improve the assessment by the auxiliary cost function during further processing of auxiliary input images; supplying the learning input images to the neural network; assessing deviations of the output data, thus obtained, from the learning output data, using a main cost function; optimizing parameters of the neural network to improve the assessment by the main cost function during further processing of learning input images. | 2022-03-31 |
20220101052 | ANSWERING QUESTIONS WITH ARTIFICIAL INTELLIGENCE USING TABULAR DATA - A computer answers a question using a data table. The computer receives a user question and a target table containing a target cell corresponding to a target answer for the user question, with the target cell corresponding to a target column and a target row. The computer generates, a first classifier to provide column correlation values reflecting the probability that a given column is the target column. The computer generates a second classifier that provides row correlation values reflecting the probability that a given row is the target row. The computer applies the first classifier to the target table to determine a column correlation value for each column. The computer applies the second classifier to the target table to determine a row correlation value for each row. The computer suggests, as the target cell, a cell having elevated column and row correlation values relative to other target table cells. | 2022-03-31 |
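Given the outputs of the two classifiers described in application 20220101052 above, the final cell suggestion reduces to picking the row and column with the highest correlation values. A minimal sketch under the assumption that the scores arrive as plain lists:

```python
def suggest_target_cell(column_scores, row_scores):
    """Suggest the target cell as the (row, column) pair with the
    highest per-row and per-column correlation values.

    Sketch of the selection step only; training the row and column
    classifiers is out of scope here.
    """
    best_col = max(range(len(column_scores)), key=lambda j: column_scores[j])
    best_row = max(range(len(row_scores)), key=lambda i: row_scores[i])
    return best_row, best_col
```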
20220101053 | NEURAL NETWORK IMAGE PROCESSING - A computer, including a processor and a memory, the memory including instructions to be executed by the processor to determine a second convolutional neural network (CNN) training dataset by determining an underrepresented object configuration and an underrepresented noise factor corresponding to an object in a first CNN training dataset, generate one or more simulated images including the object corresponding to the underrepresented object configuration in the first CNN training dataset by inputting ground truth data corresponding to the object into a photorealistic rendering engine and generate one or more synthetic images including the object corresponding to the underrepresented noise factor in the first CNN training dataset by processing the simulated images with a generative adversarial network (GAN) to determine a second CNN training dataset. The instructions can include further instructions to train a CNN using the first and the second CNN training datasets and input an image acquired by a sensor to the trained CNN and output an object label and an object location corresponding to the underrepresented object configuration and underrepresented object noise factor. | 2022-03-31 |
20220101054 | SPLITTING NEURAL NETWORKS ON MULTIPLE EDGE DEVICES TO TRAIN ON VERTICALLY DISTRIBUTED DATA - One example method includes a pipeline for a distributed neural network. The pipeline includes a first phase that identifies intersecting identifiers across datasets of multiple clients in a privacy preserving manner. The second phase includes a distributed neural network that includes a data receiving portion at each of the clients and an orchestrator portion at an orchestrator. The data receiving portions and the orchestrator portions communicate forward and backward passes to perform training without revealing the raw training data. | 2022-03-31 |
20220101055 | METHOD AND SYSTEM FOR CONSTRUCTING DIGITAL ROCK - The present disclosure provides a method and system for constructing a digital rock, and relates to the technical field of digital rocks. According to the method, a three-dimensional (3D) digital rock image that can reflect real rock information is obtained using an image scanning technology, and the image is preprocessed to obtain a digital rock training image for training a generative adversarial network (GAN). The trained GAN is stored to obtain a digital rock construction model. The stored digital rock construction model can be directly used to quickly construct a target digital rock image. This not only greatly reduces computational costs, but also reduces costs and time consumption for obtaining high-resolution sample images. In addition, the constructed target digital rock image can also reflect real rock information. | 2022-03-31 |
20220101056 | GENERATOR NETWORKS FOR GENERATING IMAGES WITH PREDETERMINED COUNTS OF OBJECTS - A method for training a generator network that is configured to generate images with multiple objects. The method includes providing a set of training images, a generator network, and a discriminator network; drawing noise samples and target counts of objects; mapping, by the generator network, the noise samples and target counts of objects to generated images; randomly drawing images from a pool comprising generated images and training images; supplying the randomly drawn images to the discriminator network, thereby mapping them to a combination of: a decision whether the respective image is a training image or a generated image, and a predicted count of objects in the respective image; optimizing discriminator parameters, optimizing generator parameters, and optimizing both the generator parameters and the discriminator parameters with the goal of improving the match between the predicted count of objects on the one hand, and the actual or target count of objects on the other hand. | 2022-03-31 |
20220101057 | SYSTEMS AND METHODS FOR TAGGING DATASETS USING MODELS ARRANGED IN A SERIES OF NODES - Systems and methods for managing indexing and tagging datasets using a plurality of nodes are disclosed. For example, the system may include one or more memory units storing instructions and one or more processors configured to execute the instructions to perform operations. The operations may include receiving a dataset comprising a plurality of columns and applying a series of nodes to the dataset. Applying the series of nodes may comprise applying a first node comprising a machine learning model to generate a first probability, appending a first tag based on the first probability, and selecting second nodes subsequent in the series based on the first probability. Applying the series may include iteratively applying the selected second nodes to generate second probabilities and second tags. The operations may include generating a data structure comprising the first and second probabilities and first and second tags. The operations may include outputting metadata. | 2022-03-31 |
20220101058 | LOCALIZATION FOR MOBILE DEVICES - Systems and methods for localization for mobile devices are described. Some implementations may include accessing motion data captured using one or more motion sensors; determining, based on the motion data, a coarse localization, wherein the coarse localization includes a first estimate of position; obtaining one or more feature point maps, wherein the feature point maps are associated with a position of the coarse localization; accessing images captured using one or more image sensors; determining, based on the images, a fine localization pose by localizing into a feature point map of the one or more feature point maps, wherein the fine localization pose includes a second estimate of position and an estimate of orientation; generating, based on the fine localization pose, a virtual object image including a view of a virtual object; and displaying the virtual object image. | 2022-03-31 |
20220101059 | Learning system, learning device, learning method, learning program, teacher data creation device, teacher data creation method, teacher data creation program, terminal device, and threshold value changing device - A training system comprises a training device, and a training data creation device for the training device. The training device trains a neural network by means of a backpropagation algorithm. The training data creation device acquires any one of a positive evaluation indicating that content of the input data coincides with the label, a negative evaluation indicating that content of the input data does not coincide with the label, and an ignorable evaluation indicating exclusion from a training target label, for each label regarding the input data to create training data. In training the neural network for training, the training system adjusts the weight coefficient for the intermediate layer such that the recognition score of the label with the positive evaluation or the negative evaluation comes closer to the ground-truth score of the positive evaluation or the negative evaluation, and makes the recognition score of the label with the ignorable evaluation not affect the adjustment of the weight coefficient for the intermediate layer. | 2022-03-31 |
20220101060 | TEXT PARTITIONING METHOD, TEXT CLASSIFYING METHOD, APPARATUS, DEVICE AND STORAGE MEDIUM - A text partitioning method, a text classifying method, an apparatus, a device and a storage medium, wherein the method includes: parsing a content image, to obtain a target text in a text format; according to a line break in the target text, partitioning the target text into a plurality of text sections; and according to a first data-volume threshold, partitioning sequentially the plurality of text sections into a plurality of text-to-be-predicted sets, wherein a data volume of a last one text section in each of the text-to-be-predicted sets is greater than a second data-volume threshold. | 2022-03-31 |
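The partitioning described in application 20220101060 above can be sketched as a two-stage split: break the target text at line breaks into sections, then group the sections sequentially under a data-volume budget. This is a hedged reading of the abstract; the second-threshold condition on the last section of each set is noted in a comment but not enforced:

```python
def partition_text(target_text, first_threshold):
    """Partition text into sections at line breaks, then group sections
    sequentially into text-to-be-predicted sets whose total character
    count stays within `first_threshold`.

    Per the abstract, the last section of each set should exceed a
    second data-volume threshold; that check is omitted here.
    """
    sections = [s for s in target_text.split("\n") if s]
    sets_, current, size = [], [], 0
    for sec in sections:
        # Start a new set when adding this section would exceed the budget.
        if current and size + len(sec) > first_threshold:
            sets_.append(current)
            current, size = [], 0
        current.append(sec)
        size += len(sec)
    if current:
        sets_.append(current)
    return sets_
```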
20220101061 | AUTOMATICALLY IDENTIFYING AND GENERATING MACHINE LEARNING PREDICTION MODELS FOR DATA INPUT FIELDS - An indication to enable machine learning prediction for a form that includes a plurality of data input fields is received and behavior associated with the form is monitored. One or more of the plurality of data input fields are automatically selected based on an analysis of the monitored behavior. For at least a portion of the selected one or more of the plurality of data input fields, one or more machine learning prediction models are automatically generated. At least a portion of the generated machine learning prediction models are allowed for use in providing one or more prediction results for one or more of the plurality of data input fields. | 2022-03-31 |
20220101062 | System and a Method for Bias Estimation in Artificial Intelligence (AI) Models Using Deep Neural Network - A system for bias estimation in Artificial Intelligence (AI) models using a pre-trained unsupervised deep neural network, comprising a bias vector generator implemented by at least one processor that executes an unsupervised DNN with a predetermined loss function. The bias vector generator is adapted to store a given ML model to be examined, with predetermined features; store a test-set of one or more test data samples being input data samples; receive a feature vector consisting of one or more input samples; output a bias vector indicating the degree of bias for each feature, according to said one or more input samples. The system also comprises a post-processor which is adapted to receive a set of bias vectors generated by said bias vector generator; process said bias vectors; calculate a bias estimation for every feature of said ML model, based on predictions of said ML model; provide a final bias estimation for each examined feature. | 2022-03-31 |
20220101063 | METHOD AND APPARATUS FOR ANALYZING NEURAL NETWORK PERFORMANCE - A method of predicting performance of a hardware arrangement or a neural network model includes: obtaining one or more of a first hardware arrangement or a first neural network model, obtaining a first graphical model comprising a first plurality of nodes corresponding to the obtained first hardware arrangement or the obtained first neural network model, wherein each node of the first plurality of nodes corresponds to a respective component or device of the first plurality of interconnected components or devices or a respective operation of the first plurality of operations; extracting, based on the first graphical model, a first graphical representation of the obtained first hardware arrangement or the obtained first neural network model; predicting, based on the first graphical representation, performance of the obtained first hardware arrangement or the obtained first neural network model; and outputting the predicted performance. | 2022-03-31 |
20220101064 | TASK PRIORITIZED EXPERIENCE REPLAY ALGORITHM FOR REINFORCEMENT LEARNING - A task prioritized experience replay (TaPER) algorithm enables simultaneous learning of multiple RL tasks off policy. The algorithm can prioritize samples that were part of fixed length episodes that led to the achievement of tasks. This enables the agent to quickly learn task policies by bootstrapping over its early successes. Finally, TaPER can improve performance on all tasks simultaneously, which is a desirable characteristic for multi-task RL. Unlike conventional ER algorithms that are applied to single RL task learning settings or that require rewards to be binary or abundant, or are provided as a parameterized specification of goals, TaPER poses no such restrictions and supports arbitrary reward and task specifications. | 2022-03-31 |
20220101065 | AUTOMATIC DOCUMENT SEPARATION - In an approach for an automatic document separation, a processor extracts one or more features from a document containing a plurality of pages. A processor generates a data frame based on the feature extraction. In response to analyzing a similarity between the plurality of pages, a processor determines whether the similarity exceeds a predetermined threshold. In response to determining that the similarity does not exceed the predetermined threshold, a processor transforms text into vectors forming float arrays. In response to benchmarking a set of predetermined clustering algorithms, a processor identifies a clustering algorithm using a predetermined criterion. A processor clusters the plurality of pages, using the clustering algorithm, to create a group of pages. A processor validates the clustered group of pages. In response to passing validation, a processor generates a set of final separated files based on the clustered group of pages. | 2022-03-31 |
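The page-similarity step above can be illustrated with a toy stand-in: term-frequency vectors and a cosine-similarity threshold deciding where one document ends and the next begins. The patent benchmarks several clustering algorithms; the fixed threshold and sequential grouping here are simplifying assumptions:

```python
from collections import Counter
import math

def text_vector(text):
    """Bag-of-words term-frequency vector for one page."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def split_pages(pages, threshold=0.3):
    """Start a new document whenever a page's similarity to the
    previous page falls below the threshold."""
    groups = [[pages[0]]]
    for prev, cur in zip(pages, pages[1:]):
        if cosine(text_vector(prev), text_vector(cur)) >= threshold:
            groups[-1].append(cur)
        else:
            groups.append([cur])
    return groups

docs = split_pages([
    "invoice total amount due payment",
    "invoice payment due account total",
    "resume education experience skills",
])
```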
20220101066 | REDUCING FALSE DETECTIONS FOR NIGHT VISION CAMERAS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for reducing camera false detections. One of the methods includes providing, to a neural network of an image classifier that is trained to detect objects of two or more classification types, a feature vector for a respective training image; receiving, from the neural network, an output vector that indicates, for each of the two or more classification types, a likelihood that the respective training image depicts an object of the corresponding classification type; accessing, from two or more ground truth vectors each for one of the two or more classification types, a ground truth vector for the classification type of an object depicted in the training image; and adjusting one or more weights in the neural network using the output vector and the ground truth vector; and storing, in a memory, the image classifier. | 2022-03-31 |
20220101067 | SYSTEM AND METHOD FOR 3D BLOB CLASSIFICATION AND TRANSMISSION - Embodiments described herein provide an apparatus comprising a processor to project and accumulate three-dimensional (3D) point data from a blob onto a plane; construct a histogram of the 3D point data; identify a center of mass of the blob based on histogram data; surround peaks in coordinates for data in the blob with a shape defined by a diameter of the blob based on the center of mass; obtain height data for the 3D point data; and calculate dimensions for a bounding box to surround the blob based on the shape and the height data. Other embodiments may be described and claimed. | 2022-03-31 |
20220101068 | OUTLIER DETECTION IN A DEEP NEURAL NETWORK USING T-WAY FEATURE COMBINATIONS - Outlier detection using a Deep Neural Network (DNN) includes running a trained DNN model on a received input item. A first feature vector is extracted from the input item and quantized to discrete values. A first number of special t-way feature combinations is computed in the input item and compared against a computed threshold. Based on the comparison, the input item is flagged as an outlier and an alert is generated notifying of the flagged input item. | 2022-03-31 |
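A minimal sketch of the quantize-and-count idea above, assuming "special" means a t-way (feature, bin) combination never observed in training; the quantization step and the fixed threshold are illustrative, not the claimed computation:

```python
from itertools import combinations

def quantize(vec, step=0.5):
    """Quantize continuous features to discrete bins (assumed scheme)."""
    return tuple(int(v // step) for v in vec)

def t_way_combos(qvec, t=2):
    """All t-way (feature-index, bin) combinations of a quantized vector."""
    return set(combinations(list(enumerate(qvec)), t))

def count_unseen(qvec, seen_combos, t=2):
    """Number of t-way combinations in qvec never observed in training."""
    return sum(1 for c in t_way_combos(qvec, t) if c not in seen_combos)

# Build the set of combinations seen in (toy) training data.
train = [(0.1, 0.2, 0.3), (0.2, 0.1, 0.4)]
seen = set()
for x in train:
    seen |= t_way_combos(quantize(x))

inlier_score = count_unseen(quantize((0.15, 0.2, 0.3)), seen)
outlier_score = count_unseen(quantize((5.0, 9.0, 7.0)), seen)
is_outlier = outlier_score > 1   # threshold would be computed, not fixed
```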
20220101069 | MACHINE LEARNING OUTLIER DETECTION USING WEIGHTED HISTOGRAM-BASED OUTLIER SCORING (W-HBOS) - Different automatic tasks are facilitated via outlier detection in datasets using Weighted Histogram-based Outlier Scoring (W-HBOS). An initial set of features is extracted from a processed dataset. The initial set of features is further filtered by applying robust statistics for size reduction. A second round of automatic feature selection is implemented based on maximum-entropy estimation, so that a set of features that can give the maximum possible information from different dimensions toward detecting anomalies is selected. The selected set of features is transformed to generate principal components that are provided to the W-HBOS-based model for outlier detection. A subset of outliers in one of the directions can be selected, and reason codes are identified using back transformation for the execution of a desired automatic task. | 2022-03-31 |
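The core of histogram-based outlier scoring can be sketched in a few lines: a sample scores high when it lands in low-density histogram bins. The per-feature weights appear here as plain multipliers; the bin count, the weight scheme, and the upstream feature selection and PCA steps are assumptions, not the patented pipeline:

```python
import math

def build_histogram(values, n_bins=5):
    """Equal-width histogram for one feature, returned as (lo, width, density)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0
    counts = [0] * n_bins
    for v in values:
        counts[min(int((v - lo) / width), n_bins - 1)] += 1
    return lo, width, [c / len(values) for c in counts]

def hbos_score(sample, histograms, weights):
    """Weighted HBOS: sum over features of w_f * log(1 / density_f(sample_f))."""
    score = 0.0
    for f, (lo, width, density) in enumerate(histograms):
        idx = min(max(int((sample[f] - lo) / width), 0), len(density) - 1)
        p = density[idx] or 1e-9     # empty bin -> very large contribution
        score += weights[f] * math.log(1.0 / p)
    return score

data = [[0.1, 1.0], [0.2, 1.1], [0.15, 0.9], [0.25, 1.0], [9.8, 1.0]]
hists = [build_histogram([row[f] for row in data]) for f in range(2)]
weights = [1.0, 1.0]
normal_score = hbos_score([0.2, 1.0], hists, weights)
anomaly_score = hbos_score([9.5, 0.9], hists, weights)
```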
20220101070 | METHOD AND DATA PROCESSING SYSTEM FOR PROVIDING RADIOMICS-RELATED INFORMATION - A computer-implemented method is for providing radiomics-related information. In an embodiment, the computer-implemented method includes receiving radiomics-related data; determining, based on the radiomics-related data and an assistance algorithm, a function for processing the radiomics-related data; calculating, based on the radiomics-related data and the function for processing the radiomics-related data, the radiomics-related information; and providing the radiomics-related information. | 2022-03-31 |
20220101071 | Using Rasterization to Identify Traffic Signal Devices - Systems and methods are provided for identifying and representing a traffic signal device. The method includes determining a location and orientation of the vehicle and receiving a real world image. The method further includes analyzing information about the vehicle's location and environment and using this information and the vehicle's orientation to generate a raster image illustrating an approximation of a view of the real world image, including one or more traffic signal devices. Additionally, the method includes providing the real world image and the raster image as inputs to a neural network to classify a traffic signal device in the real world image as the primary traffic signal device and determine a set of coordinates indicating a location of the primary traffic signal device, generating a classified real world image which includes a bounding box indicating the set of coordinates, and receiving the classified real world image. | 2022-03-31 |
20220101072 | Detecting a User's Outlier Days Using Data Sensed by the User's Electronic Devices - A method for detecting a user's outlier days uses data corresponding to features of the user acquired over multiple days by sensors on the user's electronic device. The data acquired for each day and feature is labeled as regular or irregular by applying N labeling approaches. One of the N labeling approaches compares the data for each feature with how values of previously acquired data for corresponding features are distributed. N labels are generated for the data for each feature and day. A machine learning classification model is trained using one of the N labels for each of the N labeling approaches. An optimal labeling approach is selected from among the N labeling approaches for each feature using the machine learning classification model. For each feature, the method determines whether each of the days is an outlier day for the user using the labels obtained with the optimal labeling approach. | 2022-03-31 |
20220101073 | METHOD AND SYSTEM FOR PERFORMING CLASSIFICATION OF REAL-TIME INPUT SAMPLE USING COMPRESSED CLASSIFICATION MODEL - The present disclosure relates to method and system for performing classification of real-time input sample using compressed classification model. Classification system receives classification model configured to classify training input sample. Relevant neurons are identified from neurons of the classification model. Classification error is identified for each class. Reward value is determined for the relevant neurons based on relevance score of each neuron and the classification error. Optimal image is generated for each class based on the reward value of the relevant neurons. The optimal image is provided to the classification model for generating classification error vector for each class. The classification error vector is used for identifying pure neurons from the relevant neurons. A compressed classification model comprising the pure neurons is generated. The generated compressed classification model is used for performing the classification of real-time input sample. | 2022-03-31 |
20220101074 | DEVICE AND METHOD FOR TRAINING A NORMALIZING FLOW USING SELF-NORMALIZED GRADIENTS - A computer-implemented method for training a normalizing flow. The normalizing flow is configured to determine a first output signal characterizing a likelihood or a log-likelihood of an input signal. The normalizing flow includes at least one first layer which includes trainable parameters. A layer input to the first layer is based on the input signal and the first output signal is based on a layer output of the first layer. The training includes: determining at least one training input signal; determining a training output signal for each training input signal using the normalizing flow; determining a first loss value which is based on a likelihood or a log-likelihood of the at least one determined training output signal with respect to a predefined probability distribution; determining an approximation of a gradient of the trainable parameters; updating the trainable parameters of the first layer based on the approximation of the gradient. | 2022-03-31 |
20220101075 | PRINTED MATTER PROCESSING APPARATUS AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PROGRAM - A printed matter processing apparatus includes an acquisition unit that acquires image data of an environment in which a printed matter is set, a display unit, and a processor configured to, by executing a program, input the image data acquired by the acquisition unit, display how the printed matter is seen in a case where the printed matter is set in the environment on the display unit, by a composited image in which the printed matter is composited in the image data in accordance with a printing condition of the printed matter, and decide the printing condition of the printed matter by changing and displaying the printing condition of the printed matter in accordance with a user operation, and output the printing condition of the printed matter without outputting the image data. | 2022-03-31 |
20220101076 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING SYSTEM, AND METHOD TO PREVENT DUPLICATE ORDER FOR SUPPLIES - An image processing apparatus includes a printer to perform printing using an expendable supply for at least one of a plurality of different colors, and a controller to, when receiving, via a user interface, an ordering operation to order a particular type of supplies selected on an order screen, place an order for the particular type of supplies to an information processing device via a communication interface, and store history information on the particular type of supplies into a memory, the history information indicating an order history of the particular type of supplies and containing information generated based on a most recent order date of the particular type of supplies, and in response to a display trigger, read out the history information on the particular type of supplies from the memory, and display the read history information on a display such that the particular type of supplies is identified. | 2022-03-31 |
20220101077 | SYSTEM THAT ASSOCIATES OBJECT WITH N-DIMENSIONAL SYMBOL - A system includes: an acquiring unit that acquires an image of an n-dimensional symbol; a first image capturing unit that captures an image of a random pattern on a surface of an object; a storing unit that stores the image of the n-dimensional symbol and a first image that is the image of the random pattern captured by the first image capturing unit in a manner that the images are associated with each other; a second image capturing unit that captures an image of a random pattern on a surface of an object; a matching unit that performs matching of the image captured by the second image capturing unit against the first image stored by the storing unit; and a displaying unit that displays the image of the n-dimensional symbol associated with the first image stored by the storing unit based on the result of the matching. | 2022-03-31 |
20220101078 | RFID TAGS WITH SHIELDING STRUCTURE FOR INCORPORATION INTO MICROWAVABLE FOOD PACKAGING - RFID tags are provided for incorporation into the packaging of a microwavable food item, with the RFID tag being configured to be safely microwaved. The RFID tag includes an antenna defining a gap and configured to operate at a first frequency. An RFID chip is electrically coupled to the antenna across the gap. A shielding structure is electrically coupled to the antenna across the gap and overlays the RFID chip. The shielding structure includes a shield conductor and a shield dielectric at least partially positioned between the shield conductor and the RFID chip. The shielding structure is configured to limit the voltage across the gap when the antenna is exposed to a second frequency that is greater than the first frequency. In additional embodiments, RFID tags are provided for incorporation into the packaging of a microwavable food item, with the RFID tag being configured to be safely microwaved. The RFID tag includes an RFID chip and an antenna electrically coupled to the RFID chip. The antenna may have a sheet resistance in the range of approximately 100 ohms to approximately 230 ohms, optionally with an optical density in the range of approximately 0.18 to approximately 0.29. Alternatively, or additionally, the antenna may be configured to fracture into multiple pieces upon being subjected to heating in a microwave oven. Alternatively, or additionally, the RFID tag may be incorporated in an RFID label that is secured to the package by a joinder material with a greater resistance than that of the antenna, such as a sheet resistance in the range of approximately 100 ohms to approximately 230 ohms. | 2022-03-31 |
20220101079 | SECURE CONTACTLESS PAYMENT METHOD AND DEVICE WITH MOVEMENT-ACTIVATED ELECTRONIC CIRCUITRY - A contactless payment device including a wireless communication device; a power source; a processor coupled to the power source; an accelerometer communicatively coupled to the processor and the power source; and an actuator communicatively coupled to the wireless communication device and the processor. The actuator is configured to activate the wireless communication device when the actuator is set in a closed state, and deactivate the wireless communication device when the actuator is set in an open state. The processor is configured to receive an incoming signal from the accelerometer; determine whether the incoming signal corresponds to a pre-programmed signal corresponding to an enabling gesture; and set the actuator in the closed state for a time interval when the incoming signal corresponds to the enabling gesture. | 2022-03-31 |
20220101080 | METAL, CERAMIC, OR CERAMIC-COATED TRANSACTION CARD WITH WINDOW OR WINDOW PATTERN AND OPTIONAL BACKLIGHTING - A transaction card includes at least one metal layer having one or more apertures therein. A light guide is disposed beneath the metal layer. The light guide has a light output and a light input. The light output is positioned to transmit light through at least the one or more apertures of the metal layer. At least one LED is positioned to transmit light into the light guide light input. | 2022-03-31 |
20220101081 | HIERARCHICAL COMBINATION OF DISTRIBUTED STATISTICS IN A MONITORING NETWORK - Methods, systems, and computer program products for creating a monitoring network are described. A server associates a master wireless node with a package in a first set of associated packages. The server and the master wireless node communicate with one another over first type of wireless communications interface. The server also associates a peripheral wireless node with another package in the set and with the master node. The peripheral wireless node and the master wireless node communicate with one another over a second type of wireless communications interface. The peripheral node includes a sensor operative to generate sensor data by sensing an environmental condition. The master wireless node processes the sensor data to generate one or more package-level statistics of the processed sensor data. | 2022-03-31 |
20220101082 | GENERATING REPRESENTATIONS OF INPUT SEQUENCES USING NEURAL NETWORKS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating representations of input sequences. One of the methods includes obtaining an input sequence, the input sequence comprising a plurality of inputs arranged according to an input order; processing the input sequence using a first long short-term memory (LSTM) neural network to convert the input sequence into an alternative representation for the input sequence; and processing the alternative representation for the input sequence using a second LSTM neural network to generate a target sequence for the input sequence, the target sequence comprising a plurality of outputs arranged according to an output order. | 2022-03-31 |
20220101083 | METHODS AND APPARATUS FOR MATRIX PROCESSING IN A CONVOLUTIONAL NEURAL NETWORK - Described examples include an integrated circuit including a vector multiply unit including a plurality of multiply/accumulate nodes, in which the vector multiply unit is operable to provide an output from the multiply/accumulate nodes, a first data feeder operable to provide first data to the vector multiply unit in vector format, and a second data feeder operable to provide second data to the vector multiply unit in vector format. | 2022-03-31 |
20220101084 | PIPELINING FOR ANALOG-MEMORY-BASED NEURAL NETWORKS WITH ALL-LOCAL STORAGE - Pipelining for analog-memory-based neural networks with all-local storage is provided. In various embodiments, an array of inputs is received by a first synaptic array in a hidden layer from a prior layer during a feed forward operation. The array of inputs is stored by the first synaptic array during the feed forward operation. The array of inputs is received by a second synaptic array in the hidden layer during the feed forward operation. The second synaptic array computes outputs from array of inputs based on weights of the second synaptic array during the feed forward operation. The stored array of inputs is provided from the first synaptic array to the second synaptic array during a back propagation operation. Correction values are received by the second synaptic array during the back propagation operation. Based on the correction values and the stored array of inputs, the weights of the second synaptic array are updated. | 2022-03-31 |
20220101085 | Non-Volatile Memory Accelerator for Artificial Neural Networks - A non-volatile memory (NVM) crossbar for an artificial neural network (ANN) accelerator is provided. The NVM crossbar includes row signal lines configured to receive input analog voltage signals, multiply-and-accumulate (MAC) column signal lines, a correction column signal line, a MAC cell disposed at each row signal line and MAC column signal line intersection, and a correction cell disposed at each row signal line and correction column signal line intersection. Each MAC cell includes one or more programmable NVM elements programmed to an ANN unipolar weight, and each correction cell includes one or more programmable NVM elements. Each MAC column signal line generates a MAC signal based on the input analog voltage signals and the respective MAC cells, and the correction column signal line generates a correction signal based on the input analog voltage signals and the correction cells. Each MAC signal is corrected based on the correction signal. | 2022-03-31 |
20220101086 | RECONFIGURABLE HARDWARE BUFFER IN A NEURAL NETWORKS ACCELERATOR FRAMEWORK - A convolutional accelerator framework (CAF) has a plurality of processing circuits including one or more convolution accelerators, a reconfigurable hardware buffer configurable to store data of a variable number of input data channels, and a stream switch coupled to the plurality of processing circuits. The reconfigurable hardware buffer has a memory and control circuitry. A number of the variable number of input data channels is associated with an execution epoch. The stream switch streams data of the variable number of input data channels between processing circuits of the plurality of processing circuits and the reconfigurable hardware buffer during processing of the execution epoch. The control circuitry of the reconfigurable hardware buffer configures the memory to store data of the variable number of input data channels, the configuring including allocating a portion of the memory to each of the variable number of input data channels. | 2022-03-31 |
20220101087 | MULTI-MODAL REPRESENTATION BASED EVENT LOCALIZATION - A method performed by an artificial neural network (ANN) includes determining, at a first stage of a multi-stage cross-attention model of the ANN, a first cross-correlation between a first representation of each modality of a number of modalities associated with a sequence of inputs. The method still further includes determining, at each second stage of one or more second stages of the multi-stage cross-attention model, a second cross-correlation between first attended representations of each modality. The method also includes generating a concatenated feature representation associated with a final second stage of the one or more second stages based on the second cross-correlation associated with the final second stage, the first attended representation of each modality, and the first representation of each modality. The method further includes determining a probability distribution between a set of background actions and a set of foreground actions from the concatenated feature representation. The method still further includes localizing an action in the sequence of inputs based on the probability distribution. | 2022-03-31 |
20220101088 | TRAINING NEURAL NETWORKS FOR AN EFFICIENT IMPLEMENTATION ON HARDWARE - A method for training an artificial neural network (ANN), which has a multitude of neurons. In the method, a measure of the quality that the ANN has achieved overall within a time period in the past is ascertained; one or more neurons are evaluated based on a measure of their respective quantitative contributions to the ascertained quality; and measures by which the evaluated neurons are trained in the further course of the training, and/or significance values of these neurons in the ANN, are specified based on the evaluations of the neurons. A method is also described in which an arithmetic unit is selected which has hardware resources for a predefined number of neurons, layers of neurons and/or connections between neurons, and a model of the ANN is selected whose number of neurons, layers of neurons and/or connections between neurons exceeds the predefined number. | 2022-03-31 |
20220101089 | METHOD AND APPARATUS FOR NEURAL ARCHITECTURE SEARCH - The disclosure relates to methods, apparatuses and systems for improving a neural architecture search (NAS). For example, a computer-implemented method using a searching algorithm to design a neural network architecture is provided, the method including: obtaining a plurality of neural network models; selecting a first subset of the plurality of neural network models; applying the searching algorithm to the selected subset of models; and identifying an optimal neural network architecture by repeating the selecting and applying for a fixed number of iterations; wherein at least one score indicative of validation loss for each model is used in or alongside at least one of the selecting and applying. | 2022-03-31 |
20220101090 | Neural Architecture Search with Factorized Hierarchical Search Space - The present disclosure is directed to an automated neural architecture search approach for designing new neural network architectures such as, for example, resource-constrained mobile CNN models. In particular, the present disclosure provides systems and methods to perform neural architecture search using a novel factorized hierarchical search space that permits layer diversity throughout the network, thereby striking the right balance between flexibility and search space size. The resulting neural architectures are able to be run relatively faster and using relatively fewer computing resources (e.g., less processing power, less memory usage, less power consumption, etc.), all while remaining competitive with or even exceeding the performance (e.g., accuracy) of current state-of-the-art mobile-optimized models. | 2022-03-31 |
20220101091 | NEAR MEMORY SPARSE MATRIX COMPUTATION IN DEEP NEURAL NETWORK - A DNN accelerator includes a multiplication controller controlling whether to perform matrix computation based on weight values. The multiplication controller reads a weight matrix from a WRAM in the DNN accelerator and determines a row value for a row in the weight matrix. In an embodiment where the row value is one, a first switch sends a read request to the WRAM to read weights in the row and a second switch forms a data transmission path from an IRAM in the DNN accelerator to a PE in the DNN accelerator. The PE receives the weights and input data stored in the IRAM and performs MAC operations. In an embodiment where the row value is zero, the first and second switches are not triggered. No read request is sent to the WRAM and the data transmission path is not formed. The PE will not perform any MAC operations. | 2022-03-31 |
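The zero-row skipping described above (row value 0: no WRAM read request, no MAC operations in the PE) can be mirrored in a software sketch. The function below is an analogue of the data flow, not the hardware design; the row-value test is shown inline for illustration:

```python
def sparse_matvec(weight_rows, inputs):
    """Matrix-vector multiply that skips MAC work for all-zero weight rows,
    analogous to the multiplication controller gating WRAM reads and PEs.
    Returns the output vector and the number of MAC operations performed."""
    outputs = []
    macs = 0
    for row in weight_rows:
        row_value = 1 if any(w != 0 for w in row) else 0
        if row_value == 0:
            outputs.append(0.0)      # no read request, no MAC operations
            continue
        acc = 0.0
        for w, x in zip(row, inputs):
            acc += w * x
            macs += 1
        outputs.append(acc)
    return outputs, macs

outputs, macs = sparse_matvec([[1, 2], [0, 0], [3, 0]], [1, 1])
```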
20220101092 | NEURAL NETWORK DEVICE, NEURAL NETWORK SYSTEM, PROCESSING METHOD, AND RECORDING MEDIUM - A neural network device includes: a neuron model unit configured as a non-leaky integrate-and-fire spiking neuron and a spiking neuron with which a postsynaptic current is represented using a step function, the neuron model unit being fired once at most in one process of a neural network to indicate an output of the neuron model unit itself at firing timing; and a transfer processing unit that transfers information between neuron model units. | 2022-03-31 |
20220101093 | PLATFORM FOR SELECTION OF ITEMS USED FOR THE CONFIGURATION OF AN INDUSTRIAL SYSTEM - Provided is a computer-implemented method and platform for context aware sorting of items available for configuration of a system during a selection session, the method including the steps of providing a numerical input vector, V, representing items selected in a current selection session as context; calculating a compressed vector, V | 2022-03-31 |
20220101094 | SYSTEM AND METHOD OF CONFIGURING A SMART ASSISTANT FOR DOMAIN SPECIFIC SELF-SERVICE SMART FAQ AGENTS - A computer aided method of configuring a smart assistant for domain specific self-service, comprising: receiving, through a user interface, a predefined domain specific dataset; generating, by a training engine, a set of positive and negative samples in equal ratio; validating the predefined dataset to remove a set of ambiguous data entries; indicating a numerical representation and a query representation for the predefined dataset; creating a domain representation using multiple connected layers and using it to process FAQs to determine the best contextual representation; and identifying an accuracy of the configured model by matching with ground truth labels and assigning a confidence score to each entry in the predefined domain specific dataset. | 2022-03-31 |
20220101095 | CONVOLUTIONAL NEURAL NETWORK-BASED FILTER FOR VIDEO CODING - Methods, systems, apparatus for media processing are described. One example method of digital media processing includes performing a conversion between visual media data and a bitstream of the visual media data, wherein the performing of the conversion includes selectively applying a convolutional neural network filter during the conversion based on a rule, and wherein the rule specifies whether and/or how the convolutional neural network filter is applied. | 2022-03-31 |
20220101096 | METHODS AND APPARATUS FOR A KNOWLEDGE-BASED DEEP LEARNING REFACTORING MODEL WITH TIGHTLY INTEGRATED FUNCTIONAL NONPARAMETRIC MEMORY - Methods and apparatus for a knowledge-based deep learning refactoring model with tightly integrated functional nonparametric memory are disclosed. An example non-transitory computer readable medium comprises instructions that, when executed, cause a machine to at least estimate a first information extraction cost corresponding to retrieval of information from a local knowledge base, estimate a second information extraction cost corresponding to retrieval of information from a remote knowledge base, select an information source based on the first and second estimated information extraction costs, query the selected information source, in response to determining that the selected information source was an external information source, store the queried information in the local knowledge base, organize the stored information in the local knowledge base, and return the queried information. | 2022-03-31 |
20220101097 | METHOD AND DEVICE FOR CLUSTERING FORECASTING OF ELECTRIC VEHICLE CHARGING LOAD - The present disclosure relates to a method for clustering forecasting of the electric vehicle charging load, comprising the following steps: collecting electric vehicle charging load data on a historical date and weather information data related to that historical date; preprocessing and then normalizing the collected data to obtain a new data set; performing fuzzy C-means clustering on the normalized data, and taking an actual load measurement point as a fuzzy clustering index to construct a similar daily load set of the date to be forecast; according to the similar daily load set, constructing and training a least-square SVM (support vector machine) forecasting model; inputting load values at the same time in three days ahead of the date to be forecast and the weather information data related to the three days into the trained least-square SVM forecasting model, and outputting a forecast load. | 2022-03-31 |
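The fuzzy C-means clustering step above can be sketched for one-dimensional load values. This is a minimal textbook FCM (fuzziness exponent m, alternating membership and center updates), not the patent's full pipeline, which additionally builds a similar-day set and trains a least-squares SVM forecaster:

```python
def fuzzy_cmeans(xs, c=2, m=2.0, iters=100):
    """Minimal 1-D fuzzy C-means; returns the sorted cluster centers."""
    lo, hi = min(xs), max(xs)
    # Spread initial centers across the data range.
    centers = [lo + (hi - lo) * (k + 1) / (c + 1) for k in range(c)]
    exp = 2.0 / (m - 1.0)
    for _ in range(iters):
        # Membership of each sample in each cluster.
        u = []
        for x in xs:
            d = [abs(x - v) + 1e-12 for v in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** exp for j in range(c))
                      for i in range(c)])
        # Update centers as membership-weighted means.
        centers = [sum((u[k][i] ** m) * xs[k] for k in range(len(xs))) /
                   sum(u[k][i] ** m for k in range(len(xs)))
                   for i in range(c)]
    return sorted(centers)

# Two well-separated groups of daily charging-load values (toy data).
centers = fuzzy_cmeans([0.1, 0.2, 0.0, 9.9, 10.1, 10.0])
```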
20220101098 | Dynamically Selecting Neural Networks for Detecting Predetermined Features - In one embodiment, a method includes receiving an input for a machine-learning model configured to detect a plurality of predetermined features, the machine-learning model including at least a first neural network configured to detect a first subset of the plurality of predetermined features and a second neural network configured to detect a second subset of the plurality of predetermined features, generating a detection result by processing the input using the first neural network, determining that the input includes a feature in the first subset of the plurality of predetermined features based on the detection result and one or more detection criteria, and outputting the detection result as an output of the machine-learning model without using the second neural network to process the input in response to the determination. | 2022-03-31 |
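The early-exit behavior described above (skip the second network when the first already decides) can be sketched as follows; the confidence-threshold criterion and the dict-based interfaces are assumptions standing in for the unspecified detection criteria:

```python
def detect(input_feats, first_net, second_net, confidence=0.9):
    """Run the cheaper first network; invoke the second network only when
    the first does not detect any feature with sufficient confidence."""
    result = first_net(input_feats)
    if max(result.values()) >= confidence:
        return result                 # second network never runs
    merged = dict(result)
    merged.update(second_net(input_feats))
    return merged

# Toy stand-ins for the two sub-networks.
first = lambda x: {"face": 0.95}
second = lambda x: {"text": 0.4}
out = detect([1.0], first, second)
```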
20220101099 | INFORMATION PROCESSING SYSTEM AND INFORMATION PROCESSING METHOD - According to one embodiment, provided is an information processing system including a parent device and a plurality of child devices. The child device constitutes at least a portion of at least one device selected from a function approximator and an annealing machine, each of the parent devices and the plurality of child devices include a communication interface, and the communication interface is at least one selected from a wireless communication interface and a wired communication interface including an analog circuit. Data to be processed by the child device is transmitted from the parent device to at least one of the plurality of child devices, and an output of at least one node of the child device is transmitted to at least one of the parent device and the other child devices. | 2022-03-31 |
20220101100 | LOAD DISTRIBUTION FOR A DISTRIBUTED NEURAL NETWORK - A method for dynamic load distribution for a distributed neural network is disclosed. The method comprises estimating, in a device of the neural network, an energy usage for processing at least one non-processed layer in the device, and estimating, in the device of the neural network, an energy usage for transmitting layer output of at least one processed layer to a cloud service of the neural network for processing. The method further comprises comparing, in the device of the neural network, the estimated energy usage for processing the at least one non-processed layer in the device with the estimated energy usage for transmitting the layer output of the at least one processed layer to the cloud service. The method furthermore comprises determining to process the at least one non-processed layer in the device when the estimated energy usage for transmitting the layer output of the at least one processed layer to the cloud service is equal to or greater than the estimated energy usage for processing the at least one non-processed layer, and determining to transmit the layer output of the at least one processed layer to the cloud service for processing subsequent layers when the estimated energy usage for transmitting the layer output of the at least one processed layer to the cloud service is less than the estimated energy usage for processing the at least one non-processed layer in the device. A corresponding computer program product, apparatus, cloud service assembly, and system are also disclosed. | 2022-03-31 |
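The decision rule above reduces to a single energy comparison per split point. A minimal sketch, assuming the non-processed work is summarized as per-layer energy estimates (`choose_processing` and the example figures are assumptions, not the patent's estimator):

```python
def choose_processing(remaining_layer_costs, transmit_cost):
    """Apply the described rule: keep computing on-device when transmitting
    the current layer output would cost at least as much energy as processing
    the remaining non-processed layers locally; otherwise offload."""
    local_cost = sum(remaining_layer_costs)  # estimated on-device energy
    return "process locally" if transmit_cost >= local_cost else "offload to cloud"
```

With a costly radio link the device keeps computing; once the remaining local compute outweighs one transmission, the layer output goes to the cloud service.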
20220101101 | DOMAIN ADAPTATION - This specification describes an apparatus relating to domain adaptation. The apparatus may comprise a means for providing a source dataset comprising a plurality of source data items associated with a source domain and a target dataset comprising a plurality of target data items associated with a target domain. The apparatus may also comprise means for providing a first computational model ( | 2022-03-31 |
20220101102 | HARDWARE IMPLEMENTATION OF WINDOWED OPERATIONS IN THREE OR MORE DIMENSIONS - A data processing system and method are disclosed, for implementing a windowed operation in at least three traversed dimensions. The data processing system maps the windowed operation in at least three traversed dimensions to a plurality of constituent windowed operations in two traversed dimensions. This plurality of 2-D windowed operations is implemented as such in at least one hardware accelerator. The data processing system assembles the results of the constituent 2-D windowed operations to produce the result of the windowed operation in at least three traversed dimensions. | 2022-03-31 |
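For reductions such as max pooling, the 3-D-to-2-D mapping described above can be written out directly: pool every depth slice with a 2-D windowed operation, then assemble each output depth by reducing across `kd` pooled slices (for convolution the assembly step would be a sum rather than a max). `maxpool3d_via_2d` is an illustrative software sketch, not the hardware mapping itself:

```python
def maxpool2d(plane, kh, kw):
    """Valid-mode 2-D max pooling with stride 1 over a list-of-lists plane."""
    H, W = len(plane), len(plane[0])
    return [[max(plane[y + dy][x + dx] for dy in range(kh) for dx in range(kw))
             for x in range(W - kw + 1)] for y in range(H - kh + 1)]

def maxpool3d_via_2d(vol, kd, kh, kw):
    """3-D max pooling assembled from constituent 2-D pools: pool each depth
    slice in 2-D, then reduce across kd pooled slices per output depth."""
    pooled = [maxpool2d(s, kh, kw) for s in vol]  # the constituent 2-D ops
    return [[[max(pooled[z + dz][y][x] for dz in range(kd))
              for x in range(len(pooled[0][0]))]
             for y in range(len(pooled[0]))]
            for z in range(len(vol) - kd + 1)]
```

The decomposition is exact here because max over a 3-D window equals the max over depth of per-slice 2-D maxes.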
20220101103 | SYSTEM AND METHOD FOR STRUCTURE LEARNING FOR GRAPH NEURAL NETWORKS - A graph structure having nodes and edges is represented as an adjacency matrix, and nodes of the graph structure have node features. A computer-implemented method and system for generating a graph structure are provided, the method comprising: generating an adjacency matrix based on a plurality of node features; generating a plurality of noisy node features based on the plurality of node features; generating a plurality of denoised node features using a neural network based on the plurality of noisy node features and the adjacency matrix; and updating the adjacency matrix based on the plurality of denoised node features. | 2022-03-31 |
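The first step above, generating an adjacency matrix from node features, has many possible instantiations; one simple one connects nodes whose feature vectors are sufficiently similar. `adjacency_from_features`, the cosine measure, and the threshold are assumptions for illustration only:

```python
import math

def cosine(u, v):
    """Cosine similarity of two feature vectors (0.0 for a zero vector)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def adjacency_from_features(features, threshold=0.9):
    """Build a binary adjacency matrix by linking node pairs whose feature
    similarity meets the threshold (no self-loops)."""
    n = len(features)
    return [[1 if i != j and cosine(features[i], features[j]) >= threshold else 0
             for j in range(n)] for i in range(n)]
```

In the method described above, this construction would be re-run on the denoised node features to update the adjacency matrix.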
20220101104 | VIDEO SYNTHESIS WITHIN A MESSAGING SYSTEM - Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for video synthesis. The program and method provide for accessing a primary generative adversarial network (GAN) comprising a pre-trained image generator, a motion generator comprising a plurality of neural networks, and a video discriminator; generating an updated GAN based on the primary GAN, by performing operations comprising identifying input data of the updated GAN, the input data comprising an initial latent code and a motion domain dataset, training the motion generator based on the input data, and adjusting weights of the plurality of neural networks of the primary GAN based on an output of the video discriminator; and generating a synthesized video based on the primary GAN and the input data. | 2022-03-31 |
20220101105 | DEEP-LEARNING GENERATIVE MODEL - A computer-implemented method for training a deep-learning generative model configured to output 3D modeled objects each representing a mechanical part or an assembly of mechanical parts. The method comprises obtaining a dataset of 3D modeled objects and training the deep-learning generative model based on the dataset. The training includes minimization of a loss. The loss includes a term that penalizes, for each output respective 3D modeled object, one or more functional scores of the respective 3D modeled object. Each functional score measures an extent of non-respect of a respective functional descriptor among one or more functional descriptors, by the mechanical part or the assembly of mechanical parts. This forms an improved solution with respect to outputting 3D modeled objects each representing a mechanical part or an assembly of mechanical parts. | 2022-03-31 |
20220101106 | COMPUTATIONAL IMPLEMENTATION OF GAUSSIAN PROCESS MODELS - A computer-implemented method of processing training data comprising a plurality of training data items to determine parameters of a Gaussian process (GP) model comprising a variational Gaussian process (VGP) corresponding to a GP prior conditioned and marginalized with respect to a set of randomly-distributed inducing variables includes initializing first parameters of the VGP including a positive-definite matrix-valued slack parameter, and iteratively modifying the first parameters to increase or decrease an objective function comprising an expected log-likelihood for each training data item under a respective Gaussian distribution with a predictive variance depending on the slack parameter. At each iteration, modifying the first parameters comprises, for each training data item, determining a respective gradient estimator for the expected log-likelihood using a respective one of a plurality of processor cores, and modifying the first parameters in dependence on the determined gradient estimators. At an optimal value of the slack parameter, the slack parameter is equal to an inverse of a covariance matrix for the set of inducing variables, and the objective function corresponds to a variational lower bound of a marginal log-likelihood for a posterior distribution corresponding to the GP prior conditioned on the training data. | 2022-03-31 |
20220101107 | ARTIFICIAL NEUROMORPHIC CIRCUIT AND OPERATION METHOD - An artificial neuromorphic circuit includes a synapse circuit and a post-neuron circuit. The synapse circuit includes a phase change element, a first switch, and a second switch. The first switch is coupled to the phase change element and is configured to receive a first pulse signal. The second switch is coupled to the phase change element. An input terminal of the post-neuron circuit is coupled to a switch circuit, and the input terminal is coupled to the phase change element. The input terminal charges a capacitor through the switch circuit in response to the first pulse signal. The post-neuron circuit is configured to generate a firing signal based on a voltage level at the input terminal and a threshold voltage, and is further configured to generate a first control signal and a second control signal based on the firing signal. The post-neuron circuit turns off the switch circuit according to the first control signal. The second control signal is configured to cooperate with a second pulse signal to control the second switch so as to control a state of the phase change element. | 2022-03-31 |
20220101108 | MEMORY-MAPPED NEURAL NETWORK ACCELERATOR FOR DEPLOYABLE INFERENCE SYSTEMS - A neural network processor system is provided comprising at least one neural network processing core, an activation memory, an instruction memory, and at least one control register, the neural network processing core adapted to implement neural network computation, control and communication primitives. A memory map is included which comprises regions corresponding to each of the activation memory, instruction memory, and at least one control register. Additionally, an interface operatively connected to the neural network processor system is included, with the interface being adapted to communicate with a host and to expose the memory map. | 2022-03-31 |
20220101109 | DEEP LEARNING-BASED CHANNEL BUFFER COMPRESSION - A method and system are provided. The method includes performing channel estimation on a reference signal (RS), compressing, with a neural network, the channel estimation of the RS, decompressing, with the neural network, the compressed channel estimation, and interpolating the decompressed channel estimation. | 2022-03-31 |
20220101110 | PERSISTENT WEIGHTS IN TRAINING - Techniques are disclosed for performing machine learning operations. The techniques include fetching weights for a first layer in a first format; performing matrix multiplication of the weights fetched in the first format with values provided by a prior layer in a forwards training pass; fetching the weights for the first layer in a second format different from the first format; and performing matrix multiplication for a backwards pass, the matrix multiplication including multiplication of the weights fetched in the second format with values corresponding to values provided as the result of the forwards training pass for the first layer. | 2022-03-31 |
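The dual-format idea above can be mocked up in plain Python: the same layer weights held twice, once laid out for the forward multiply and once transposed for the backward multiply. In hardware the two fetches would differ in memory layout rather than in a Python attribute; `WeightStore` and the helpers are assumptions for illustration:

```python
def matmul(A, B):
    """Plain nested-list matrix multiply."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(W):
    return [list(col) for col in zip(*W)]

class WeightStore:
    """Hypothetical store keeping one layer's weights in two formats."""
    def __init__(self, W):
        self.forward_format = W               # used as x @ W in the forward pass
        self.backward_format = transpose(W)   # used as g @ W^T in the backward pass
```

Each pass then fetches the format that matches its multiplication instead of transposing on the fly.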
20220101111 | MITIGATING DELUSIONAL BIAS IN DEEP Q-LEARNING FOR ROBOTIC AND/OR OTHER AGENTS - Delusional bias can occur in function approximation Q-learning. Techniques for training and/or using a value network to mitigate delusional bias are disclosed herein, where the value network can be used to generate action(s) for an agent (e.g., a robot agent, a software agent, etc.). In various implementations, delusional bias can be mitigated by using a soft-consistency penalty. Additionally or alternatively, delusional bias can be mitigated by using a search framework over multiple Q-functions. | 2022-03-31 |
20220101112 | NEURAL NETWORK TRAINING USING ROBUST TEMPORAL ENSEMBLING - Apparatuses, systems, and techniques to use one or more neural networks to generate data labels. In at least one embodiment, one or more neural networks is trained based, at least in part on, one or more labels, pseudo-labels, training data, and modified training data. | 2022-03-31 |
20220101113 | KNOWLEDGE DISCOVERY USING A NEURAL NETWORK - Apparatuses, systems, and techniques to identify one or more relationships among one or more words using one or more transformer-based language neural networks trained using domain-specific data. | 2022-03-31 |
20220101114 | INTERPRETABLE DEEP LEARNING-BASED DEFECT DETECTION AND CLASSIFICATION - An explanation of a detection/classification algorithm that uses a deep learning neural network clarifies the results that are produced and helps a user identify the root cause of defect detection/classification model performance issues. A relevance map is determined based on a layer-wise relevance propagation algorithm. A mean intersection over union score between the relevance map and a ground truth is determined. A part of one of the semiconductor images that contributed to the classification by the deep learning model is determined based on the relevance map and the mean intersection over union score. | 2022-03-31 |
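The mean intersection-over-union step is a standard computation; a sketch that thresholds the relevance map into a binary mask and averages IoU over the defect and background classes (`mean_iou` and its threshold are assumptions, not the patent's exact scoring):

```python
def mean_iou(relevance, ground_truth, threshold=0.5):
    """Mean IoU between a thresholded relevance map and a binary ground-truth
    mask, averaged over the two classes (background = 0, defect = 1)."""
    ious = []
    for cls in (0, 1):
        inter = union = 0
        for r_row, g_row in zip(relevance, ground_truth):
            for r, g in zip(r_row, g_row):
                p = 1 if r >= threshold else 0   # binarize relevance
                inter += (p == cls and g == cls)
                union += (p == cls or g == cls)
        ious.append(inter / union if union else 1.0)
    return sum(ious) / len(ious)
```

A high score suggests the relevance map concentrates on the annotated defect region, which is how the explanation is tied back to model performance.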
20220101115 | AUTOMATICALLY CONVERTING ERROR LOGS HAVING DIFFERENT FORMAT TYPES INTO A STANDARDIZED AND LABELED FORMAT HAVING RELEVANT NATURAL LANGUAGE INFORMATION - Embodiments of the invention are directed to computer-implemented methods of labeling unlabeled electronic information. In a non-limiting embodiment of the invention, the computer-implemented method includes receiving, using a processor system, an unlabeled error log (EL) having an EL format. A set of unlabeled EL keywords are extracted from the unlabeled EL. For each unlabeled EL keyword, the processor system uses the unlabeled EL keyword to extract an electronic document (ED) sentence from an ED based on a determination that the ED sentence is relevant to the unlabeled EL keyword. One or more ED keywords are extracted from the ED sentence. A deep neural network of the processor system is used to predict an ED sentence label for the ED sentence, an ED keyword label for the ED keyword, and an EL keyword label for the unlabeled EL keyword. | 2022-03-31 |
20220101116 | METHOD AND SYSTEM FOR PROBABLY ROBUST CLASSIFICATION WITH DETECTION OF ADVERSARIAL EXAMPLES - A computer-implemented method for training a machine-learning network includes: receiving input data from a sensor, wherein the input data includes a perturbation and is indicative of image, radar, sonar, or sound information; obtaining a worst-case bound on the classification error and loss for perturbed versions of the input data, utilizing at least bounding of one or more hidden-layer values in response to the input data; training a classifier having a plurality of classes, including an additional abstain class, wherein the abstain class is determined in response to at least bounding the input data; outputting a classification in response to the input data; and outputting a trained classifier configured to detect the additional abstain class in response to the input data. | 2022-03-31 |
20220101117 | NEURAL NETWORK SYSTEMS FOR ABSTRACT REASONING - A computer-implemented method, system, and computer program product to solve a cognitive task that includes learning abstract properties. One embodiment may comprise accessing datasets that characterize the abstract properties. The accessed datasets may then be inputted into a first neural network to generate first embeddings. Pairs of the first embeddings generated may be formed, which correspond to pairs of the datasets. Data corresponding to the pairs formed may then be inputted into a second neural network, which may be executed to generate second embeddings. The latter may capture relational properties of the pairs of the datasets. A third neural network may be subsequently executed, based on the second embeddings generated, to obtain output values. One or more abstract properties of the datasets are learned based on the output values obtained, in order to solve the cognitive task. | 2022-03-31 |
20220101118 | BANK-BALANCED-SPARSE ACTIVATION FEATURE MAPS FOR NEURAL NETWORK MODELS - Embodiments disclose bank-balanced-sparse activation neural network models and methods to generate the bank-balanced-sparse activation neural network models. According to one embodiment, a neural network sparsification engine determines a first deep neural network (DNN) model having two or more hidden layers. The engine determines a bank size, a bank layout, and a target sparsity. The engine segments the model's activation feature maps into a plurality of banks based on the bank size and the bank layout. The engine generates a second DNN model by increasing the sparsity of each bank of the activation feature maps based on the target sparsity, wherein the second DNN model is used for inferencing. | 2022-03-31 |
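The bank-balanced constraint above means every bank ends up with the same sparsity, which keeps the zeros evenly spread for hardware. A minimal sketch on a flat activation vector, keeping the largest-magnitude entries per bank (`bank_balanced_sparsify` and the magnitude criterion are assumptions about how the sparsity is increased):

```python
def bank_balanced_sparsify(activations, bank_size, target_sparsity):
    """Split a flat activation map into equal banks and zero the smallest-
    magnitude entries in each bank so every bank has the same sparsity."""
    assert len(activations) % bank_size == 0
    keep = bank_size - int(bank_size * target_sparsity)  # survivors per bank
    out = []
    for i in range(0, len(activations), bank_size):
        bank = activations[i:i + bank_size]
        kept = sorted(range(bank_size), key=lambda j: -abs(bank[j]))[:keep]
        out.extend(v if j in kept else 0.0 for j, v in enumerate(bank))
    return out
```

Unlike global top-k pruning, no bank can end up denser than another, so per-bank storage and compute stay uniform at inference time.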
20220101119 | ARTIFICIAL NEURAL NETWORK TRAINING IN MEMORY - Apparatuses and methods can be related to implementing age-based network training. An artificial neural network (ANN) can be trained by introducing errors into the ANN. The errors and the quantity of errors introduced into the ANN can be based on age-based characteristics of the memory device. | 2022-03-31 |
20220101120 | INTERPRETABLE VISUALIZATION SYSTEM FOR GRAPH NEURAL NETWORK - Use a computerized trained graph neural network model to classify an input instance with a predicted label. With a computerized graph neural network interpretation module, compute a gradient-based saliency matrix based on the input instance and the predicted label, by taking a partial derivative of class prediction with respect to an adjacency matrix of the model. With a computerized user interface, obtain user input responsive to the gradient-based saliency matrix. Optionally, modify the trained graph neural network model based on the user input; and re-classify the input instance with a new predicted label based on the modified trained graph neural network model. | 2022-03-31 |
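The saliency matrix above is the partial derivative of the class prediction with respect to each adjacency-matrix entry. A finite-difference stand-in (a real system would use automatic differentiation) shows the shape of the operation; `saliency_wrt_adjacency` and the toy score function are assumptions for illustration:

```python
def saliency_wrt_adjacency(score_fn, A, eps=1e-6):
    """Approximate d(score)/d(A[i][j]) for every entry of the adjacency
    matrix by forward finite differences."""
    n = len(A)
    base = score_fn(A)
    S = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            A[i][j] += eps                      # perturb one edge weight
            S[i][j] = (score_fn(A) - base) / eps
            A[i][j] -= eps                      # restore it
    return S
```

High-magnitude entries of `S` point at the edges most responsible for the predicted label, which is what the interpretation module surfaces to the user.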
20220101121 | LATENT-VARIABLE GENERATIVE MODEL WITH A NOISE CONTRASTIVE PRIOR - One embodiment of the present invention sets forth a technique for generating images (or other generative output). The technique includes determining one or more first values for a set of visual attributes included in a plurality of training images, wherein the set of visual attributes is encoded via a prior network. The technique also includes applying a reweighting factor to the first value(s) to generate one or more second values for the set of visual attributes, wherein the second value(s) represent the first value(s) shifted towards one or more third values for the set of visual attributes, wherein the one or more third values have been generated via an encoder network. The technique further includes performing one or more decoding operations on the second value(s) via a decoder network to generate a new image that is not included in the plurality of training images. | 2022-03-31 |
20220101122 | ENERGY-BASED VARIATIONAL AUTOENCODERS - One embodiment of the present invention sets forth a technique for generating data using a generative model. The technique includes sampling from one or more distributions of one or more variables to generate a first set of values for the one or more variables, where the one or more distributions are used during operation of one or more portions of the generative model. The technique also includes applying one or more energy values generated via an energy-based model to the first set of values to produce a second set of values for the one or more variables. The technique further includes either outputting the second set of values as output data or performing one or more operations based on the second set of values to generate output data. | 2022-03-31 |
20220101123 | VIDEO QUALITY ASSESSMENT METHOD AND APPARATUS - An operating method of a computing apparatus is provided. The operating method includes: obtaining a reference image; obtaining a distorted image generated from the reference image; obtaining, by using the reference image and the distorted image, an objective quality assessment score of the distorted image that is indicative of the quality of the distorted image as assessed by an algorithm; obtaining a subjective quality assessment score corresponding to the objective quality assessment score; and training a neural network by using the distorted image and the subjective quality assessment score as a training data set. | 2022-03-31 |
20220101124 | NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM, INFORMATION PROCESSING DEVICE, AND INFORMATION PROCESSING METHOD - A non-transitory computer-readable storage medium storing an information processing program that causes at least one computer to execute a process, the process including: acquiring a first machine learning model trained by using a training data set including first data, and a second machine learning model not trained with the first data; and retraining the first machine learning model so that an output of the first machine learning model and an output of the second machine learning model, when second data corresponding to the first data is input, get close to each other. | 2022-03-31 |
20220101125 | METHODS FOR BUILDING A DEEP LATENT FEATURE EXTRACTOR FOR INDUSTRIAL SENSOR DATA - The present disclosure relates to a method of and a system for building a latent feature extractor as well as a neural network including a latent feature extractor built by the method and/or with the system. The method includes providing non-uniform training data for a multitude of tasks and optimizing parameters of a neural network of the latent feature extractor based on the multitude of tasks. | 2022-03-31 |
20220101126 | APPLYING DIRECTIONALITY TO AUDIO - The present disclosure describes techniques for adding a perception of directionality to audio. The method includes receiving a set of head related transfer functions (HRTFs). The method also includes training an artificial neural network based on the HRTFs to generate a trained artificial neural network, wherein the trained artificial neural network represents a subspace reconstruction model for generating interpolated HRTFs. The trained artificial neural network is generated using Bayesian optimization to determine a number of layers and a number of neurons per layer of the trained artificial neural network. The method also includes storing the trained artificial neural network, wherein the trained artificial neural network is used to reconstruct a new head related transfer function for a specified direction. The new head related transfer function is used to process an audio signal to produce a perception of directionality. | 2022-03-31 |
20220101127 | AUTOMATIC OPTIMIZATION OF MACHINE LEARNING ALGORITHMS IN THE PRESENCE OF TARGET DATASETS - Methods, systems, and computer program products for transferring knowledge using machine learning techniques by automatically generating training datasets are provided. New training datasets based on target datasets are automatically generated and used in machine learning techniques to perform tasks on images. A key benefit is the ability to transfer knowledge learned in one domain to another domain in which extracting data or labeling images would be costly or simply infeasible. The methods and systems also provide image training sets based on image target sets, which augment data more efficiently and improve both the content of the training set and the predictions of the machine learning techniques. | 2022-03-31 |
20220101128 | DEVICE AND METHOD FOR TRAINING A CLASSIFIER USING AN INVERTIBLE FACTORIZATION MODEL - A computer-implemented method for training a classifier. The classifier is configured to determine an output signal characterizing a classification of an input signal. The training of the classifier includes: determining a first training input signal; determining, by means of an invertible factorization model, a first latent representation comprising a plurality of factors based on the first training input signal; determining a second latent representation by adapting at least one factor of the first latent representation; determining a second training input signal based on the second latent representation by means of the invertible factorization model; and training the classifier based on the second training input signal. | 2022-03-31 |
20220101129 | DEVICE AND METHOD FOR CLASSIFYING AN INPUT SIGNAL USING AN INVERTIBLE FACTORIZATION MODEL - A computer-implemented method for determining an output signal for an input signal using a classifier. The output signal characterizes a classification of the input signal. The method includes: determining a latent representation based on the input signal using an invertible factorization model comprised in the classifier, the latent representation comprises a plurality of factors; determining the output signal based on the latent representation using an internal classifier comprised in the classifier. | 2022-03-31 |
20220101130 | QUANTIZED FEEDBACK IN FEDERATED LEARNING WITH RANDOMIZATION - Various aspects of the present disclosure generally relate to wireless communication. In some aspects, a client device may determine a feedback associated with a machine learning component based at least in part on applying the machine learning component. Accordingly, the client device may transmit a quantized value based at least in part on the feedback. The quantized value is determined using randomization with probabilities based at least in part on respective distances between one or more values of the feedback and a plurality of quantized digits. Numerous other aspects are provided. | 2022-03-31 |
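The randomized quantization described above picks between the two nearest quantized digits with probabilities based on the distances to each, which makes the quantized feedback unbiased in expectation. `stochastic_quantize` is an illustrative sketch of that rule, not the 3GPP/patent signaling itself:

```python
import random

def stochastic_quantize(value, levels, rng=None):
    """Map value to one of the two bracketing quantized digits, choosing the
    nearer digit with higher probability so E[output] == value."""
    rng = rng or random.Random()
    levels = sorted(levels)
    if value <= levels[0]:
        return levels[0]
    if value >= levels[-1]:
        return levels[-1]
    for lo, hi in zip(levels, levels[1:]):
        if lo <= value <= hi:
            p_hi = (value - lo) / (hi - lo)   # distance-based probability
            return hi if rng.random() < p_hi else lo
```

Averaged over many reports, the server recovers the true feedback value even though each individual transmission carries only a coarse quantized digit.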