14th week of 2022 patent application highlights part 38 |
Patent application number | Title | Published |
20220108138 | DIAGNOSING SOURCES OF NOISE IN AN EVALUATION - Provided are processes of balancing between exploration and optimization with knowledge discovery processes applied to unstructured data with tight interrogation budgets. A process may include determining a relevance probability distribution of responses and scores as an explanatory diagnostic. A distribution curve may be determined based on a probabilistic graphical network and a result may be audited relative to the distribution curve to determine noise measurements. The distribution curve may be determined based on a distribution of posterior predictions of entities to score ranking entity bias and noisiness of ranking entity feedback. | 2022-04-07 |
20220108139 | FAST ACQUISITION OF LABELED VEHICULAR DATA FROM MULTIPLE DATA SOURCES - Approaches, techniques, and mechanisms are disclosed for generating assisted driving test and evaluation data. According to one embodiment, video and non-video data are collected asynchronously through video and non-video data interfaces from an image acquisition device and a non-video data source collocated with the vehicle. Timing information received with the data is used to synchronize the video and non-video data into synchronized vehicle data. Specific labels indicating ground truths are identified to be applied to the synchronized vehicle data. Labeled vehicular data is generated from the synchronized vehicle data and the specific labels. At least a portion of the labeled vehicular data is to be used to test and evaluate assisted driving functionality. | 2022-04-07 |
20220108140 | NON-TRANSITORY COMPUTER-READABLE MEDIUM STORING PRETREATMENT INFORMATION GENERATION PROGRAM, PRETREATMENT INFORMATION GENERATION METHOD, AND PRETREATMENT INFORMATION CREATION DEVICE - A non-transitory computer-readable medium stores computer-readable instructions. The computer-readable instructions are executed by a processor of a pretreatment information generation device that generates pretreatment information to be used in pretreatment on a recording medium by a pretreatment device. The computer-readable instructions, when executed by the processor, perform processes including receiving image data to be printed on the recording medium, and identifying the pretreatment information to perform the pretreatment with respect to the received image data. | 2022-04-07 |
20220108141 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM - The image processing apparatus according to the present disclosure is an image processing apparatus converting input image data into halftone image data that a plurality of arranged nozzles ejecting ink can print and including: one or more hardware processors; and one or more memories storing one or more programs configured to be executed by the one or more hardware processors, the one or more programs including instructions for: correcting a pixel value of each pixel in the input image data in accordance with a characteristic of the nozzle; and generating first halftone image data by performing halftone processing for a second frequency component whose frequency is lower than that of a first frequency component of frequency components corresponding to the corrected input image data. Here, the first halftone image data is data that is generated to be printed with first dots whose dot size causes adjacent dots to overlap. | 2022-04-07 |
20220108142 | METHOD FOR ASSOCIATING A MARKING WITH AN OBJECT - Disclosed is a method for associating a marking with an object, including the following steps: identifying the position of at least two different elements of the marking in relation to the marking and/or the object; and measuring a relative distance between at least two identified elements; then recording in a database the position of at least two identified elements, and the relative distance between the identified elements, so that the position of two identified elements is correlated with the measurement relating to their distance. | 2022-04-07 |
20220108143 | Method for Fast Replacement of Wireless IoT Product and System Thereof - A method and system for replacing a first wireless node in an IoT system includes determining, by the first wireless node, that the first wireless node needs imminent replacement. The determination may be made, for example, based on a battery of the first wireless node being below a threshold level. The first wireless node initiates a discovery protocol for a second wireless node installed in proximity to the first wireless node, the second wireless node installed to replace the first wireless node. Upon discovery of the second wireless node, the first wireless node transmits a configuration file to the second wireless node, which the second wireless node copies to a storage of the second wireless node. The second wireless node configures itself to operate as a replacement for the first wireless node in the IoT system, based at least in part on the copied configuration file. | 2022-04-07 |
20220108144 | WIRELESS IC TAG-ATTACHED METAL MEDICAL INSTRUMENT - A wireless IC tag-attached metal medical instrument includes a metal medical instrument including a metal portion. The metal medical instrument is configured such that electric, magnetic, or electromagnetic field coupling is established between a resonant circuit and the metal portion when a wireless IC tag is fixed to the metal portion that includes the resonant circuit having an inductor with a spiral or helix shape that turns around a central axis more than one turn. In this configuration, the metal portion either emits a transmission signal with a frequency equal to a resonant frequency of an electromagnetic wave supplied from the resonant circuit or receives a reception signal having a frequency equal to the resonant frequency, and supplies the reception signal to the resonant circuit. | 2022-04-07 |
20220108145 | RFID Antenna - A radio-frequency identification (RFID) antenna is provided for a vending machine that includes a controller and is configured to selectively provide an item stored therein to a user. The RFID antenna includes a pair of short-circuited emitters, an antenna divider disposed between said pair of short-circuited emitters, a pair of cables each comprising a pair of low signal attenuation and feed points and each connecting a respective one of said pair of short-circuited emitters to said antenna divider; and a connector coupled to said antenna divider for connecting said antenna divider to the controller of the vending machine. | 2022-04-07 |
20220108146 | INTERROGATION DEVICE AND/OR SYSTEM HAVING ALIGNMENT FEATURE(S) FOR WIRELESS TRANSPONDER TAGGED SPECIMEN CONTAINERS AND/OR CARRIERS - An interrogation device and/or system includes a body and an antenna, the body has an aperture or elongated receiver with an opening and an internal perimeter or inner wall sized and/or shaped to receive a portion of a container therein, either with or without a cap of the container. The container may, for example, be used to store biological specimens at cryogenic temperatures. One or more alignment features of the body align wireless transponders (e.g., RFID transponders) of tagged specimen containers and/or carriers with the antenna to enhance interrogation. Alignment may be along a longitudinal or Z-axis, and/or in an XY plane perpendicular to the Z-axis. Shielding may reduce or even eliminate cross-talk with neighboring wireless tagged specimen containers and/or carriers. | 2022-04-07 |
20220108147 | PREDICTIVE MICROSERVICES ACTIVATION USING MACHINE LEARNING - Described are techniques for predictive microservice activation. The techniques include training a machine learning model using a plurality of sequences of coordinates, where the plurality of sequences of coordinates are respectively based upon a corresponding plurality of series of vectors generated from historical usage data for an application and its associated microservices. The techniques further include inputting a new sequence of coordinates representing a series of application operations to the machine learning model. The techniques further include identifying a predicted microservice for future utilization based on an output vector generated by the machine learning model. The techniques further include activating the predicted microservice prior to the predicted microservice being called by the application. | 2022-04-07 |
20220108148 | SYSTEM AND ARCHITECTURE NEURAL NETWORK ACCELERATOR INCLUDING FILTER CIRCUIT - A system and an accelerator circuit includes an internal memory to store data received from a memory associated with a processor and a filter circuit block comprising a plurality of circuit stripes, each circuit stripe including a filter processor, a plurality of filter circuits, and a slice of the internal memory assigned to the plurality of filter circuits, where the filter processor is to execute a filter instruction to read data values from the internal memory based on a first memory address, for each of the plurality of circuit stripes: load the data values in weight registers and input registers associated with the plurality of filter circuits of the circuit stripe to generate a plurality of filter results, and write a result generated using the plurality of filter circuits in the internal memory at a second memory address. | 2022-04-07 |
20220108149 | NEURAL NETWORKS WITH PRE-NORMALIZED LAYERS OR REGULARIZATION NORMALIZATION LAYERS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing inputs using a neural network system that includes one or more pre-normalized layers or one or more regularization normalization layers. | 2022-04-07 |
20220108150 | METHOD AND APPARATUS FOR PROCESSING DATA, AND RELATED PRODUCTS - Embodiments of the present disclosure relate to a method and an apparatus for processing data, and related products. The embodiments of the present disclosure provide a board card including a storage component, an interface device, a control component, and an artificial intelligence chip. The artificial intelligence chip is connected to the storage component, the control component, and the interface device, respectively; the storage component is configured to store data; the interface device is configured to implement data transfer between the artificial intelligence chip and external equipment; and the control component is configured to monitor a state of the artificial intelligence chip. The board card is configured to perform artificial intelligence operations. | 2022-04-07 |
20220108151 | PHYSICS AUGMENTED NEURAL NETWORKS CONFIGURED FOR OPERATING IN ENVIRONMENTS THAT MIX ORDER AND CHAOS - Methods, systems, and computer readable media for utilizing an augmented neural network are disclosed. In one embodiment, the method includes utilizing a neural network (NN) pre-processor to convert generic coordinates associated with a dynamical system to canonical coordinates, concatenating a Hamiltonian neural network (HNN) to the NN pre-processor to create a generalized HNN, and training the generalized HNN to learn nonlinear dynamics present in the dynamical system from generic training data. The method also includes utilizing the trained generalized HNN to forecast the nonlinear dynamics, and quantifying chaotic behavior from the forecasted nonlinear dynamics to discover and map one or more transitions between orderly states and chaotic states exhibited by the dynamical system. | 2022-04-07 |
20220108152 | METHOD FOR ASCERTAINING AN OUTPUT SIGNAL WITH THE AID OF A MACHINE LEARNING SYSTEM - A computer-implemented method for ascertaining, using a machine learning system, a first output signal characterizing a classification and/or a regression of a first input signal, and the output signal includes a first representation which characterizes an expected value of the classification or the regression, and a second representation, which characterizes a variance of the classification or regression. The method includes: ascertaining, using an encoder, latent representations each ascertained based on a second input signal and a second output signal that corresponds to the second input signal, and the second input signal and the second output signal characterize a context, the latent representation includes a first representation characterizing an expected value and a second representation characterizing a variance; ascertaining a third representation characterizing an accumulation of the first representations; ascertaining a fourth representation characterizing an accumulation of the second representations; ascertaining the first output signal using a decoder. | 2022-04-07 |
20220108153 | BAYESIAN CONTEXT AGGREGATION FOR NEURAL PROCESSES - A method for generating a computer-implemented machine learning system. The method includes receiving a training data set, which corresponds to a dynamic response of a device, and computing an aggregation of at least one latent variable of the machine learning system, using Bayesian inference, and in view of the training data set. An information item contained in the training data set is transferred directly into a statistical description of the plurality of latent variables. The method further includes generating an a-posteriori predictive distribution for predicting the dynamic response of the device, using the calculated aggregation and conditioned on the training data set. | 2022-04-07 |
20220108154 | QUANTUM DEFORMED BINARY NEURAL NETWORKS - Certain aspects of the present disclosure provide techniques for processing data in a quantum deformed binary neural network, including: determining an input state for a layer of the quantum deformed binary neural network; computing a mean and variance for one or more observables in the layer; and returning an output activation probability based on the mean and variance for the one or more observables in the layer. | 2022-04-07 |
20220108155 | MAPPABLE FILTER FOR NEURAL PROCESSOR CIRCUIT - Embodiments relate to a neural processor circuit that may include a fetch circuit that fetches coefficient data of a machine learning model from a memory source. The neural processor circuit may also include one or more neural engine circuits that are coupled to the fetch circuit. A neural engine circuit may include a buffer circuit that stores the coefficient data. The neural engine circuit may also include a coefficient organizing circuit that generates at least a first mapping and a second mapping of the stored coefficient data according to one or more control signals. The neural engine may also include a computation circuit that receives and processes at least a portion of input data with the coefficient data as mapped according to the first mapping or processes at least the portion of the input data with the coefficient data as mapped according to the second mapping. | 2022-04-07 |
20220108156 | HARDWARE ARCHITECTURE FOR PROCESSING DATA IN SPARSE NEURAL NETWORK - A hardware accelerator that is efficient at performing computations related to a sparse neural network. The sparse neural network may be associated with a plurality of nodes. One of the nodes includes one or more sparse tensors. The accelerator may compress the sparse tensor to a dense tensor. The sparse tensor may also be structured so that the dense locations in the tensor are blocked or partitioned. The accelerator may transpose the weight tensor and align the partitions of the tensor with the hardware architecture. The structured tensor has a balanced number of active values so that the active values can be processed by an efficient number of operating cycles of the accelerator. The accelerator may also perform a bitwise AND operation to determine the location of dense pairs in two sparse tensors to reduce the number of computations. | 2022-04-07 |
20220108157 | HARDWARE ARCHITECTURE FOR INTRODUCING ACTIVATION SPARSITY IN NEURAL NETWORK - A hardware accelerator that is efficient at performing computations related to a sparse neural network. The sparse neural network may be associated with a plurality of nodes. An artificial intelligence (AI) accelerator stores, at a memory circuit, a weight tensor and an input activation tensor that corresponds to a node of the neural network. The AI accelerator performs a computation such as convolution between the weight tensor and the input activation tensor to generate an output activation tensor. The AI accelerator introduces sparsity to the output activation tensor by reducing the number of active values in the output activation tensor. The sparsity activation may be a K-winner approach, which selects the K-largest values in the output activation tensor and sets the remaining values to zero. | 2022-04-07 |
20220108158 | ULTRALOW POWER INFERENCE ENGINE WITH EXTERNAL MAGNETIC FIELD PROGRAMMING ASSISTANCE - An MRAM-based vector multiplication device, such as can be used for inferencing in a neural network, is presented that is ultralow power, low cost, and does not require special on-chip programming. A crosspoint array has an MRAM cell at each crosspoint junction and periphery array circuitry capable of supplying independent input voltages to each word line and reading current on each bit line. Vector multiplication is performed as an in-array multiplication of a vector of input voltages with matrix weight values encoded by the MRAM cell states. The MRAM cells can be individually programmed using a combination of input voltages and an external magnetic field. The external magnetic field is chosen so that a write voltage of one polarity reduces the anisotropy sufficiently to align the cell state with the external field, but is insufficient to align the cell if only half of the write voltage is applied. | 2022-04-07 |
20220108159 | CROSSBAR ARRAY APPARATUSES BASED ON COMPRESSED-TRUNCATED SINGULAR VALUE DECOMPOSITION (C-TSVD) AND ANALOG MULTIPLY-ACCUMULATE (MAC) OPERATION METHODS USING THE SAME - A compressed-truncated singular value decomposition (C-TSVD) based crossbar array apparatus is provided. The C-TSVD based crossbar array apparatus may include an original crossbar array in an m×n matrix having row input lines and column output lines and including cells of a resistance memory device, or two partial crossbar arrays obtained by decomposing the original crossbar array based on C-TSVD, an analog to digital converter (ADC) that converts output values of column output lines of sub-arrays obtained through array partitioning, an adder that sums up results of the ADC to correspond to the column output lines, and a controller that controls application of the original crossbar array or the two partial crossbar arrays. Input values are input to the row input lines, a weight is multiplied by the input values, and accumulated results are output as output values of the column output lines. | 2022-04-07 |
20220108160 | METHOD AND APPARATUS FOR POWER LINE COMMUNICATION NETWORK - A reliable method and apparatus for communications over AC power lines that may have substantial interference is disclosed. A controller can be plugged into an AC outlet and communicate with a device plugged into any other AC outlet over the power lines within the facility. The controller may perform an analysis of the interference that is present on the power lines that run throughout the facility. In some cases, the particular path for power line signals can be selected to reduce the potential for interference. In some cases, the controller has a front end that comprises a Fast Fourier Transform (FFT) module and a neural network. In addition, devices under the control of the controller may have neural networks that can be used in combination to form a collaborative neural network. | 2022-04-07 |
20220108161 | Three Dimensional Circuit Implementing Machine Trained Network - Some embodiments provide a three-dimensional (3D) circuit structure that has two or more vertically stacked bonded layers with a machine-trained network on at least one bonded layer. As described above, each bonded layer can be an IC die or an IC wafer in some embodiments with different embodiments encompassing different combinations of wafers and dies for the different bonded layers. The machine-trained network in some embodiments includes several stages of machine-trained processing nodes with routing fabric that supplies the outputs of earlier stage nodes to drive the inputs of later stage nodes. In some embodiments, the machine-trained network is a neural network and the processing nodes are neurons of the neural network. In some embodiments, one or more parameters associated with each processing node (e.g., each neuron) is defined through machine-trained processes that define the values of these parameters in order to allow the machine-trained network (e.g., neural network) to perform particular operations (e.g., face recognition, voice recognition, etc.). For example, in some embodiments, the machine-trained parameters are weight values that are used to aggregate (e.g., to sum) several output values of several earlier stage processing nodes to produce an input value for a later stage processing node. | 2022-04-07 |
20220108162 | DECIMATING HIDDEN LAYERS FOR TRAINING TRANSFORMER MODELS - Embodiments of the present disclosure include systems and methods for decimating hidden layers for training transformer models. In some embodiments, input data for training a transformer model is received at a transformer layer included in the transformer model. The transformer layer comprises a hidden layer. The hidden layer comprises a set of neurons configured to process training data. A subset of the set of neurons of the hidden layer is selected. Only the subset of the set of neurons of the hidden layer are used to train the transformer model with the input data. | 2022-04-07 |
20220108163 | CONTINUOUS TRAINING METHODS FOR SYSTEMS IDENTIFYING ANOMALIES IN AN IMAGE OF AN OBJECT - A system identifying anomalies in an image of an object is first trained using first sets of images corresponding to first anomaly types for the object. A model of the object is formed in a latent space. A label for each anomalous image is used to calculate vectors containing means and standard deviations for each first anomaly type. The means and standard deviations are used to calculate a log-likelihood loss for each first anomaly type. The system is retrained using second sets of images corresponding to second anomaly types for the object. The vectors are supplemented using labels for each second anomaly type. A statistically sufficient sample of information in the means and standard deviations vectors is supplied to the latent space. A log-likelihood loss for each of the first and second anomaly types is calculated based on their respective mean and standard deviation. | 2022-04-07 |
20220108164 | SYSTEMS AND METHODS FOR GENERATING AUTOMATED NATURAL LANGUAGE RESPONSES BASED ON IDENTIFIED GOALS AND SUB-GOALS FROM AN UTTERANCE - The disclosed technology involves autonomously identifying goals and sub-goals from a user utterance and generating responses to the user based on the goals and sub-goals. | 2022-04-07 |
20220108165 | QUANTIFYING REWARD AND RESOURCE ALLOCATION FOR CONCURRENT PARTIAL DEEP LEARNING WORKLOADS IN MULTI CORE ENVIRONMENTS - A method for operating an artificial neural network (ANN) includes quantifying a reward for executing ANN tasks in a system having multiple processing cores. A set of processing cores of the multiple processing cores is allocated to execute each of the tasks based on the reward. The ANN tasks are executed concurrently according to the processing core allocation to operate the ANN. | 2022-04-07 |
20220108166 | METHODS AND SYSTEMS FOR SLOT LINKING THROUGH MACHINE LEARNING - A system for slot linking through machine learning includes a computing device configured to generate a slot profile by retrieving a plurality of elemental profiles, each elemental profile corresponding to an element of the slot and generating the slot profile as a function of the plurality of elemental profiles, to receive biological extraction data of an entry, to generate an entry tendency profile associated with the entry, wherein generating the tendency profile further includes receiving a plurality of training examples correlating biological extraction data to tendency profiles, training a tendency profile model as a function of the plurality of training examples, and generating the tendency profile as a function of the biological extraction and the tendency profile model, to determine an alignment quantifier as a function of the tendency profile and the slot profile, and link the entry to the slot as a function of the alignment quantifier. | 2022-04-07 |
20220108167 | ARTIFICIAL INTELLIGENCE-BASED INFORMATION MANAGEMENT SYSTEM PERFORMANCE METRIC PREDICTION - An information management system is disclosed herein that can use artificial intelligence to identify situations in which a performance metric may not be satisfied. For example, a storage manager of the information management system can maintain data related to historical, current, and/or future execution of secondary copy operations by secondary storage computing device(s) in the information management system. Using some or all of this data, the storage manager can train an artificial intelligence model (e.g., a neural network) to classify whether a current or future secondary copy operation job is likely to succeed or fail. Similarly, the storage manager can use some or all of this data to train another artificial intelligence model (e.g., a machine learning model) to predict the length of time for a current or future secondary copy operation job to complete. The trained models can be used to predict whether a performance metric will be satisfied. | 2022-04-07 |
20220108168 | FACTORIZED NEURAL NETWORK - Aspects of the present disclosure relate to factorized neural network techniques. In examples, a layer of a machine learning model is factorized and initialized using spectral initialization. For example, an initial layer parameterized using an initial matrix is processed such that it is instead parameterized by the product of two or more matrices, thereby resulting in a factorized machine learning model. An optimizer associated with the machine learning model may also be processed to adapt a regularizer accordingly. For example, a regularizer using a weight decay function may be adapted to instead use a Frobenius decay function with respect to the factorized model layer. The factorized machine learning model may be trained using the processed optimizer and subsequently used to generate inferences. | 2022-04-07 |
20220108169 | SYSTEMS AND METHODS FOR NUMERICAL REASONING BY A PARTIALLY SUPERVISED NUMERIC REASONING MODULE NETWORK - Embodiments described herein provide systems and methods for a partially supervised training model for question answering tasks. Specifically, the partially supervised training model may include two modules: a query parsing module and a program execution module. The query parsing module parses queries into a program, and the program execution module executes the program to reach an answer through explicit reasoning and partial supervision. In this way, the partially supervised training model can be trained with answers as supervision, obviating the need for supervision by gold program operations and gold query-span attention at each step of the program. | 2022-04-07 |
20220108170 | METHOD FOR GENERATING TRAINING MODEL, IMAGE PROCESSING METHOD, IMAGE PROCESSING SYSTEM, AND WELDING SYSTEM - A method for generating a training model includes: acquiring training data, the training data including a plurality of training input images, and a training feature extraction image in which a feature is extracted from one of the plurality of training input images; and training a training model by using the training data, the training model outputting an extraction image of the feature estimated from a plurality of input images, the training model including an input layer that performs a convolution, positions of the feature in the plurality of training input images being different from each other, a change amount of the position of the feature in the plurality of training input images being less than a kernel size of a filter of the input layer. | 2022-04-07 |
20220108171 | TRAINING NEURAL NETWORKS USING TRANSFER LEARNING - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training neural networks using transfer learning. One of the methods includes training a neural network to perform a first prediction task, including: obtaining trained model parameters for each of a plurality of candidate neural networks, wherein each candidate neural network has been pre-trained to perform a respective second prediction task that is different from the first prediction task; obtaining a plurality of training examples corresponding to the first prediction task; selecting a proper subset of the plurality of candidate neural networks using the plurality of training examples; generating, for each candidate neural network, one or more fine-tuned neural networks, wherein each fine-tuned neural network is generated by updating the model parameters of the candidate neural network using the plurality of training examples; and determining model parameters for the neural network using the respective fine-tuned neural networks. | 2022-04-07 |
20220108172 | GENERATING A SIMPLIFIED MODEL FOR XIL SYSTEMS - Generating a simplified model for an XiL system includes determining a stipulated parameter characterizing model complexity, for a starting model; generating starting model input and output data; training a neural network to generate a simplified model having a lower complexity than the starting model and where a stipulated lower threshold value for a parameter characterizing model reliability is exceeded; generating a simplified model using the trained neural network; determining a parameter characterizing the complexity for the simplified model; if the determined complexity of the generated simplified model is lower than that of the starting model, testing the generated simplified model using a test set of the generated starting model input and output data, which differs from the training set, and determining a parameter of the generated simplified model characterizing reliability; if the determined reliability of the simplified model exceeds the stipulated threshold value, outputting the simplified model. | 2022-04-07 |
20220108173 | PROBABILISTIC NUMERIC CONVOLUTIONAL NEURAL NETWORKS - Certain aspects of the present disclosure provide techniques for performing operations with probabilistic numeric convolutional neural network, including: defining a Gaussian Process based on a mean and a covariance of input data; applying a linear operator to the Gaussian Process to generate pre-activation data; applying a nonlinear operation to the pre-activation data to form activation data; and applying a pooling operation to the activation data to generate an inference. | 2022-04-07 |
20220108174 | TRAINING NEURAL NETWORKS USING AUXILIARY TASK UPDATE DECOMPOSITION - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network having a plurality of model parameters to perform a main task. In one aspect, a method comprises: determining an auxiliary task update to the model parameters of the neural network that, if applied to the model parameters, is predicted to increase a performance of the neural network on an auxiliary task; determining a decomposition of the auxiliary task update into multiple constituent updates that, if applied to the model parameters, are each predicted to have a different impact on a performance of the neural network on the main task; determining a new auxiliary task update to the model parameters of the neural network as a function of the plurality of constituent updates; and applying the new auxiliary task update to the model parameters of the neural network. | 2022-04-07 |
20220108175 | System and Method for Recommending Semantically Relevant Content - A property vector derived from extractable measurable properties of a data file is mapped to semantic properties for that data file. The property vector is an output from a trained artificial neural network that, following pairwise training of the ANN using pairs of files that map pairwise similarity/dissimilarity in property space towards corresponding pairwise semantic similarity/dissimilarity in semantic space, both preserves and is representative of semantic properties of the data file. The system and method assesses, based on comparisons between generated property vectors, ranks and then recommends and/or filters semantically close or semantically disparate candidate files in a database from a query from a user that includes the data file. Applications of the categorization and recommendation system and method apply to media or search tools and social media platforms, including media in the form of music, video, image data and/or text files. | 2022-04-07 |
20220108176 | MEASURING THE FREE ENERGY IN AN EVALUATION - Provided are processes of balancing between exploration and optimization with knowledge discovery processes applied to unstructured data with tight interrogation budgets. A process may determine an alignment score of each entity participating in an evaluation of a feature in a knowledge discovery process based on feedback received from the respective entities for the feature. Feedback of an entity that is mapped in a probabilistic graphical network (PGN) may be processed to determine an alignment score of the entity for the feature, e.g., based on how the entity scored a feature. A plurality of different distributions indicative of alignment scores may be processed for display to visually indicate to a user the alignment of participating entities in their evaluations of the features. | 2022-04-07 |
20220108177 | CONCEPTS FOR FEDERATED LEARNING, CLIENT CLASSIFICATION AND TRAINING DATA SIMILARITY MEASUREMENT - A concept for Federated Learning which is more efficient and/or robust is presented. Beyond this, concepts for specifying clients and/or measuring training data similarities in a manner more suitable for being applied in Federated Learning environments, are described. | 2022-04-07 |
20220108178 | NEURAL NETWORK METHOD AND APPARATUS - Provided are a neural network method and an apparatus, the method including obtaining a set of floating point data processed in a layer included in a neural network, determining a weighted entropy based on data values included in the set of floating point data, adjusting quantization levels assigned to the data values based on the weighted entropy, and quantizing the data values included in the set of floating point data in accordance with the adjusted quantization levels. | 2022-04-07 |
20220108179 | METHOD FOR INJECTING HUMAN KNOWLEDGE INTO AI MODELS - Human knowledge may be injected in an explainable AI system in order to improve the model's generalization error, model accuracy, interpretability of the model, avoid or eliminate bias, while providing a path towards the integration of connectionist systems with symbolic and causal logic in a combined AI system. Human knowledge injection may be implemented by harnessing the white-box nature of explainable/interpretable models. In one exemplary embodiment, a user applies intuition to model-specific cases or exceptions. In another embodiment, an explainable model may be embedded in workflow systems which enable users to apply pre-hoc and post-hoc operations. A third exemplary embodiment implements human-assisted focusing. An exemplary embodiment also presents a method to train and refine explainable or interpretable models without losing the injected knowledge defined by humans when applying gradient descent techniques. The white-box nature of explainable models allows for precise source attribution and traceability of knowledge incorporated into the model. | 2022-04-07 |
20220108180 | METHOD AND APPARATUS FOR COMPRESSING ARTIFICIAL NEURAL NETWORK - A method and apparatus for compressing an artificial neural network may acquire weights corresponding to an artificial neural network trained in advance, wherein the artificial neural network includes a plurality of layers, and may use a processor configured to generate data for acquiring a change of behavior of the artificial neural network due to pruning of the artificial neural network based on the weights, determine a pruning threshold for pruning of the artificial neural network based on the change of the behavior of the artificial neural network, and compress the neural network based on the pruning threshold. | 2022-04-07 |
20220108181 | ANOMALY DETECTION ON SEQUENTIAL LOG DATA USING A RESIDUAL NEURAL NETWORK - A multilayer perceptron herein contains an already-trained combined sequence of residual blocks that contains a semantic sequence of residual blocks and a contextual sequence of residual blocks. The semantic sequence of residual blocks contains a semantic sequence of layers of an autoencoder. The contextual sequence of residual blocks contains a contextual sequence of layers of a recurrent neural network. Each residual block of the combined sequence of residual blocks is used based on a respective survival probability. By the autoencoder and based on the using each residual block of the semantic sequence, a previous entry of a log is semantically encoded. By the recurrent neural network and based on the using each residual block of the contextual sequence, a next entry of the log is predicted. In an embodiment during training, survival probabilities are hyperparameters that are learned and used to probabilistically skip residual blocks such that the multilayer perceptron has stochastic depth. | 2022-04-07 |
20220108182 | METHODS AND APPARATUS TO TRAIN MODELS FOR PROGRAM SYNTHESIS - Methods and apparatus to train models for program synthesis are disclosed. A disclosed example apparatus includes at least one memory, instructions, and processor circuitry. The processor circuitry is to execute the instructions to sample pairs of programs, the pairs of programs including first programs and second programs, the first programs including natural language descriptions and second programs, calculate program similarity scores corresponding to the pairs of programs, and train a model based on entries corresponding to ones of the pairs of programs, at least one of the entries including a corresponding one of the natural language descriptions with a paired one of the second programs, and a corresponding one of the program similarity scores. | 2022-04-07 |
20220108183 | MOMENTUM CONTRASTIVE AUTOENCODER - The embodiments are directed to training a momentum contrastive autoencoder using a contrastive learning framework. The contrastive learning framework learns a latent space distribution by matching latent representations of the momentum contrastive autoencoder to a pre-specified distribution, such as a distribution over a unit hyper-sphere. Once the latent space distribution is learned, samples for a new data set may be obtained from the latent space distribution. This results in a simple and scalable algorithm that avoids many of the optimization challenges of existing generative models, while retaining the advantage of efficient sampling. | 2022-04-07 |
20220108184 | METHOD AND DEVICE FOR TRAINING A MACHINE LEARNING SYSTEM - A computer-implemented method for training a machine learning system in which the machine learning system is configured to ascertain, based on at least a first input signal and a multiplicity of second input signals and second output signals corresponding to the second input signals, a first output signal corresponding to the first input signal, the first output signal characterizing a classification encumbered with an uncertainty and/or a regression encumbered with an uncertainty. | 2022-04-07 |
20220108185 | INVERSE AND FORWARD MODELING MACHINE LEARNING-BASED GENERATIVE DESIGN - Machine-learned networks provide generative design. Rather than emulate the typical human design process, an inverse model is machine trained to generate a design from requirements. A simulation model is machine trained to recover performance relative to the requirements for generated designs. These two machine-trained models are used in an optimization that creates further designs from the inverse model output design and tests those designs with the simulation model. The use of machine-trained models in this loop for exploring many different designs decreases the time to explore, so may result in a more optimal design or better starting designs for the design engineer. | 2022-04-07 |
20220108186 | Niche Ranking Method - This invention is a niching optimization algorithm that provides multiple extrapolated points of a function, the maximum or minimum outputs, over one optimization run. Yet, differently from existing niching algorithms, it locally ranks each local population, providing a multi-focus exploration with an equalized number of solutions inside each niche, and identifies the set of most efficient and distinct solutions, instead of the overall population of solutions. As an optimization algorithm, it has broad application in artificial intelligence, and in the design of engineering systems such as aeronautical structures, etc. It also can generate a mesh for any dataset or function domain, grouping inputs by regions of similitude. Thus, it can generate a mesh for a FEM, and segment the domain of expensive metamodels, like artificial neural networks, Kriging models (KR), among others. Experiments demonstrated that with a response surface mesh, the overall training time is substantially reduced. | 2022-04-07 |
20220108187 | Predictor Generation Genetic Algorithm - A method of generating predictor rules using a genetic algorithm for predicting at least one target event associated with a given entity, the entity having a combination of an entity type and one or more attributes. The method comprises partitioning records data into a model generation set and a model testing set. A first generation of predictor rules is determined using the records in the model generation set, and subsequent generations are constructed by iteratively identifying a first subset of predictor rules based on a precision measure of each predictor rule and identifying a second subset of predictor rules based on a recall measure of each predictor rule and generating the subsequent generation by OR combining the predictor rules of the first subset and by AND combining the predictor rules of the second subset. Construction of the predictor rule population is terminated in response to a termination criteria being met. | 2022-04-07 |
20220108188 | QUERYING KNOWLEDGE GRAPHS WITH SUB-GRAPH MATCHING NETWORKS - Techniques regarding identifying candidate knowledge graph subgraphs in a question answering over knowledge graph task are provided. For example, one or more embodiments described herein can comprise a system, which can comprise a memory that can store computer executable components. The system can also comprise a processor, operably coupled to the memory, and that can execute the computer executable components stored in the memory. The computer executable components can comprise a question answering over knowledge graph component that encodes graph structure information of a knowledge graph subgraph and a question graph into neural network embeddings. | 2022-04-07 |
20220108189 | GRAPH SUMMARIZATION APPARATUS, GRAPH SUMMARIZATION METHOD AND PROGRAM - A graph summarizing apparatus includes a computation unit configured to compute, when a graph changes, importance degrees based on factor degrees of nodes in the graph before the change, for the nodes of the graph after the change, each of the nodes having a factor degree indicating an extent of a factor on a state of the graph, the graph having edges each of which has a weight indicating a strength of a causal relationship between the nodes; a selection unit configured to select a first node having an importance degree less than or equal to a threshold as a candidate for deletion; and a deletion unit configured to delete the first node, and achieves graph summarization capable of suppressing a decrease in accuracy of factor estimation by a causal graph. | 2022-04-07 |
20220108190 | Internet Of Things (IOT) Big Data Artificial Intelligence Expert System Information Management And Control Systems And Methods - IoT Big Data information management and control systems and methods for distributed performance monitoring and critical network fault detection comprising a combination of capabilities including: IoT data collection sensor stations receiving sensor input signals and also connected to monitor units providing communication with other monitor units and/or cloud computing resources via IoT telecommunication links, and wherein a first data collection sensor station has expert predesignated other network elements comprising other data collection sensor stations, monitor units, and/or telecommunications equipment for performance and/or fault monitoring based on criticality to said first data collection sensor station operations, thereby extending monitoring and control operations to include distributed interdependent or critical operations being monitored and analyzed throughout the IoT network, and wherein performance and/or fault monitoring signals received by said first data collection sensor station are analyzed with artificial intelligence, hierarchical expert system algorithms for generation of warning and control signals. | 2022-04-07 |
20220108191 | MACHINE-LEARNED MODEL FOR DUPLICATE CRASH DUMP DETECTION - In an example embodiment, a machine learned model is utilized for identifying duplicate crash dumps. After a developer submits code, corresponding test cases are used to ensure the quality of the software delivery. Test failures can occur during this period, such as crashes, errors, and timeouts. Since it takes time for developers to resolve them, many duplicate failures can occur during this time period. In some embodiments, crash triaging is the most time-consuming task of development, and thus if duplicate crash failures can be automatically identified, the degree of automation will be significantly enhanced. To locate such duplicates, a training-based machine learned model uses component information of an in-memory database system to achieve better crash similarity comparison. | 2022-04-07 |
20220108192 | INFORMATION PROCESSING APPARATUS AND NON-TRANSITORY COMPUTER READABLE MEDIUM - An information processing apparatus includes: a processor configured to: search first action histories for second action histories similar to a specific action history of a second user to whom a recommendation is to be provided, in which each first action history represents objects which a first user corresponding to the first action history took a specific action for and each of which a degree of preference of the corresponding first user is given, and the specific action history represents objects which the second user took the specific action for; select an object to be recommended to the second user based on the degrees of preference for the respective objects which are given to the respective second action histories; and present the selected object. | 2022-04-07 |
20220108193 | INFORMATION PROCESSING APPARATUS AND NON-TRANSITORY COMPUTER READABLE MEDIUM - An information processing apparatus includes: a processor configured to: calculate, for each of plural objects, a recommendation effect that is a degree of influence of recommendation of the object on a user selecting the object, based on a recommendation history indicating an object that was recommended to the user among the plural objects, a non-recommendation history indicating an object that was not recommended, and an action history indicating the object that was selected by the user among the plural objects; and probabilistically determine an object to be recommended according to the recommendation effects calculated for the respective objects. | 2022-04-07 |
20220108194 | PRIVATE SPLIT CLIENT-SERVER INFERENCING - Certain aspects of the present disclosure provide techniques for inferencing with a split inference model, including: generating an initial feature vector based on a client-side split inference model component; generating a modified feature vector by modifying a null-space component of the initial feature vector; providing the modified feature vector to a server-side split inference model component on a remote server; and receiving an inference from the remote server. | 2022-04-07 |
20220108195 | INFINITELY SCALING A/B TESTING - Provided are processes of balancing between exploration and optimization with knowledge discovery processes applied to unstructured data with tight interrogation budgets. Traditional A/B testing protocols when scaled may present, at best, a computationally expensive process (and potentially infeasibly expensive process at larger scales, such as for a thousand or more options) for computing systems or existing data sets. Embodiments of a process may employ a probabilistic model to scale an A/B testing protocol for a set of options including tens, hundreds, thousands or a hundred thousand or more options. The probabilistic model may reduce, by orders of magnitude, the number of tests performed to determine a ranked order of the options based on ranked order among subsets of options selected by sampling techniques that balance exploration and optimization of a semantic space. | 2022-04-07 |
20220108196 | IMPROVED COMPUTER-IMPLEMENTED EVENT FORECASTING AND INFORMATION PROVISION - A computer-implemented method, a computer system and a computer program product for event forecasting and information provision are disclosed. For each entity of a group of entities one or more events are obtained, wherein each event is associated with a category of a set of categories. A model category subset of the set of categories, and a target category subset based on the model category subset, are determined. For entities for which for each category of the target category subset an event has been obtained, a sequence of categories of the model category subset is determined, and corresponding probabilities are calculated. For a target entity, a target sequence of categories of the model category subset is determined based on the events obtained for the target entity. A target category is determined based on the target sequence and the calculated probabilities. Information is provided based on the target entity and the determined target category. | 2022-04-07 |
20220108197 | SYSTEMS AND METHODS FOR ERROR DETECTION, REMEDIATION, AND INTEGRITY SCORING OF DIRECTED ACYCLIC GRAPHS - Systems and methods for error detection, remediation, and integrity scoring of directed acyclic graphs are disclosed. According to an aspect, a method includes providing, in the memory, a directed acyclic graph including a plurality of vertices and a plurality of edges connecting the vertices. The vertices represent tasks in a project. The edges represent dependencies among the tasks. The method also includes providing, for each vertex, at least one task property capable of affecting a property of one of the vertex's successor vertices via an edge. Further, the method includes analyzing one or more properties of the vertices and the edge dependencies to characterize the task properties. The method also includes presenting the analysis via a user interface. | 2022-04-07 |
20220108198 | APPARATUS AND METHOD FOR FORECASTED PERFORMANCE LEVEL ADJUSTMENT AND MODIFICATION - An apparatus, method, and computer program product are provided to adjust and modify input signals used in connection with predictive models by detecting events, such as changes in operating parameters of data objects and/or related systems and calculating adjusted decay rates to be applied to time-series data associated with times prior to an occurrence of an event. In some example implementations, an indication of an event associated with a given datastream is received, in a manner which indicates the change in an operating parameter and the time at which the change occurred. Based at least in part on the indication of the event associated with the datastream, a second decay rate associated with the set of time-series data is determined and applied to the set of time-series data, such that an updated future performance level can be calculated by a predictive model. | 2022-04-07 |
20220108199 | PIPELINING AND PARALLELISM FOR IMPLEMENTING A MIXTURE MODEL - One factor in limiting the speed of conventional implementations of mixture models is that the algorithm involves many decisions where different operations are fetched and performed depending on the outcome of the decisions. These decisions cause flushing of the pipeline, and thus prevent the realization of a highly parallel pipeline in a processor. Without parallelism, the throughput of the pipeline in the processor, i.e., the ability to process many samples of the digital input at a time, is limited. To alleviate this issue, implementation of the mixture model is reformulated, among other things, by embedding decisions into the process flow as multiplicative factors. The resulting implementation alleviates the need to use if-else statements for the decisions and reduces the number of times the pipeline has to be flushed. The implementation enables a pipeline with a higher degree of parallelism and thereby increases throughput and speed of the implementation. | 2022-04-07 |
20220108200 | JOSEPHSON DOUBLE BALANCED COUPLER - Techniques facilitating a quantum gate between qubits using a tunable coupler are provided. In one example, a quantum coupler device can comprise a Josephson ring modulator (JRM) that is operatively coupled to first and second qubits in a balanced bridge topology via respective first and second capacitive devices. The JRM provides tunable coupling between the first and second qubits. | 2022-04-07 |
20220108201 | ENHANCED QUANTUM CIRCUIT EXECUTION IN A QUANTUM SERVICE - Techniques for enhancing quantum circuit execution in a quantum service are presented. Database component stores compiled unitaries associated with quantum functions. Unitary management component (UMC) determines whether to compile a unitary associated with a quantum function for storage in the database component based on a composite quality score associated with the unitary and a threshold composite quality score associated with the quantum function, wherein the threshold score can be, or can be based on, a composite quality score of a compiled unitary that performs the same quantum function or a compiled unitary that performs a different quantum function. UMC determines the composite quality score based on a group of factors comprising frequency of utilizing the quantum function or equivalent quantum function or computation, age of the quantum function or computation, difficulty level of compiling a unitary, quantum circuit quality, or error associated with experimental execution of the quantum function. | 2022-04-07 |
20220108202 | DECREASED CROSSTALK ATOMIC OBJECT DETECTION - Various embodiments provide methods, apparatuses, systems, or computer program products for performing decreased crosstalk atomic object reading/detection. A controller is operatively connected to components of a system comprising a confinement apparatus comprising RF electrodes defining an RF null axis and a plurality of longitudinal electrodes. The components comprise voltage sources and manipulation sources. The controller is configured to cause an atomic object being read and neighboring atomic object(s) to be confined by the confinement apparatus; and cause the voltage sources to provide first control signals to longitudinal electrodes. The first control signals cause the longitudinal electrodes to generate a push field configured to cause one of the atomic object being read or the neighboring atomic object(s) to move off the RF null axis. The controller is further configured to cause a manipulation source to generate/provide a reading beam that is at least partly incident on the atomic object being read. | 2022-04-07 |
20220108203 | MACHINE LEARNING HARDWARE ACCELERATOR - In a memory device, a static random access memory (SRAM) circuit includes an array of SRAM cells arranged in rows and columns and configured to store data. The SRAM array is configured to: store a first set of information for a machine learning (ML) process in a lookup table in the SRAM array; and consecutively access, from the lookup table, information from a selected set of the SRAM cells along a row of the SRAM cells. A memory controller circuit is configured to select the set of the SRAM cells based on a second set of information for the ML process. | 2022-04-07 |
20220108204 | Scale-Permuted Machine Learning Architecture - A computer-implemented method of generating scale-permuted models can generate models having improved accuracy and reduced evaluation computational requirements. The method can include defining, by a computing system including one or more computing devices, a search space including a plurality of candidate permutations of a plurality of candidate feature blocks, each of the plurality of candidate feature blocks having a respective scale. The method can include performing, by the computing system, a plurality of search iterations by a search algorithm to select a scale-permuted model from the search space, the scale-permuted model based at least in part on a candidate permutation of the plurality of candidate permutations. | 2022-04-07 |
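As an illustration of the kind of search the abstract describes, the hedged sketch below runs a random search over permutations of candidate feature blocks, each carrying a scale. The `evaluate` function is only a placeholder for actually building and scoring a scale-permuted model, and all names and values are illustrative rather than taken from the application.

```python
import random

# Candidate feature blocks, each with a scale (e.g., a feature-map stride); names are illustrative.
CANDIDATE_BLOCKS = [("block_a", 1 / 4), ("block_b", 1 / 8), ("block_c", 1 / 16), ("block_d", 1 / 32)]

def evaluate(permutation):
    """Stand-in for building, training, and scoring a candidate scale-permuted model;
    here it just returns a deterministic pseudo-score for the permutation."""
    return random.Random(str(permutation)).random()

def search_scale_permuted(num_iterations=20, seed=0):
    rng = random.Random(seed)
    best_perm, best_score = None, float("-inf")
    for _ in range(num_iterations):
        perm = rng.sample(CANDIDATE_BLOCKS, len(CANDIDATE_BLOCKS))  # one candidate permutation
        score = evaluate(perm)
        if score > best_score:
            best_perm, best_score = perm, score
    return best_perm, best_score

perm, score = search_scale_permuted()
print([name for name, _ in perm], round(score, 3))
```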
20220108205 | GENERATIVE REASONING FOR SYMBOLIC DISCOVERY - Provide a background theory applicable to a scientific problem as input to a computerized generative reasoner, which in turn produces a plurality of provable conjectures applicable to the problem, based on the input. Provide the plurality of provable conjectures and a set of input training data to a computerized model inference engine, which fits the input training data to the plurality of provable conjectures to obtain at least one candidate symbolic model reflecting scientific laws associated with the problem. Reduce a search space of a computerized prediction module by providing to the computerized prediction module the at least one candidate symbolic model. Provide new data to the computerized prediction module, which searches in the reduced search space to make a prediction related to the problem based on the new data and the at least one candidate symbolic model. | 2022-04-07 |
20220108206 | DOCUMENT INITIATED INTELLIGENT WORKFLOW - In an example embodiment, a solution is provided that allows a user to submit a document. Information can be obtained from the document using optical character recognition (OCR) or other techniques. This information can then be used to identify one or more workflows that pertain to the document. The one or more workflows may be ranked using machine learning techniques and presented to the user. Once the user selects a desired workflow, the information obtained from the document can then be used to automatically complete at least a portion of the workflow, for example by prefilling one or more fields in a form. | 2022-04-07 |
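A minimal sketch of this document-to-workflow flow is shown below. The OCR step is replaced by a pre-extracted `doc_data` dictionary, and the machine-learned ranking is reduced to simple field-overlap scoring, so all names, fields, and scoring rules are assumptions for illustration only.

```python
def rank_workflows(doc_fields, workflows):
    """Score each workflow by how many of its form fields the document already provides.
    (A stand-in for the machine-learned ranking described in the abstract.)"""
    scored = [(len(doc_fields & set(wf["fields"])), wf) for wf in workflows]
    return [wf for score, wf in sorted(scored, key=lambda s: -s[0])]

def prefill(doc_data, workflow):
    """Prefill the selected workflow's form with values extracted from the document."""
    return {field: doc_data.get(field, "") for field in workflow["fields"]}

# doc_data stands in for fields pulled from the document by OCR.
doc_data = {"invoice_number": "INV-1042", "amount": "350.00", "vendor": "Acme"}
workflows = [
    {"name": "expense_reimbursement", "fields": ["amount", "vendor", "cost_center"]},
    {"name": "vacation_request", "fields": ["start_date", "end_date"]},
]
ranked = rank_workflows(set(doc_data), workflows)
print(ranked[0]["name"], prefill(doc_data, ranked[0]))
```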
20220108207 | AUTOMATION OF COMMUNICATION, NAVIGATION, SURVEILLANCE, SENSOR AND SURVIVABILITY SYSTEM CAPABILITIES IN PRIMARY, ALTERNATE, CONTINGENCY, AND EMERGENCY SCHEMES FOR FACILITATING SEAMLESS COMMAND, CONTROL, COMMUNICATION, COMPUTER, CYBER-DEFENSE, COMBAT, INTELLIGENCE, SURVEILLANCE, AND RECONNAISSANCE CAPABILITIES - A communication system is disclosed. The communication system includes a communication sub-system configured to transmit and receive signals from one or more waveforms of a plurality of waveforms. The communication sub-system further contains one or more processors and memory with instructions that include receiving an artificial intelligence input configured to instruct the one or more processors to prepare an instantiation of a selected waveform. The instructions further include preparing the communication sub-system to transmit, receive, and instantiate the selected waveform. The communication system further includes an artificial intelligence engine in communication with the one or more processors configured to: receive operational data from one or more operation systems; determine, based on the operational data, the selected waveform; prepare the artificial intelligence input based on the selected waveform; and send the artificial intelligence input to the one or more processors. A method for switching waveforms is also disclosed. | 2022-04-07 |
20220108208 | SYSTEMS AND METHODS PROVIDING CONTEXTUAL EXPLANATIONS FOR DOCUMENT UNDERSTANDING - Systems and methods for providing contextual information for computerized document understanding. The systems and methods can be used to assist users in filling out documents by providing contextual information based on anomalies identified in a provided document. The methods and systems may identify an anomaly in the document and automatically generate a query related to the anomaly. The query can be fed as an input to a question-answering (QA) model that can provide an answer as the contextual information. | 2022-04-07 |
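The query-generation step can be sketched in a few lines. In the hedged example below, the anomaly detector is simply a missing-field check and `toy_qa` is a keyword-lookup stand-in for a trained question-answering model; both are assumptions for illustration, not the application's components.

```python
def find_anomalies(document, required_fields):
    """Flag fields the document is missing; stands in for the anomaly-identification step."""
    return [f for f in required_fields if not document.get(f)]

def build_query(field):
    return f"What should be entered in the '{field}' field, and where can it be found?"

def explain_document(document, required_fields, qa_model, context):
    """For each anomaly, auto-generate a question and ask a QA model for contextual help."""
    return {f: qa_model(build_query(f), context) for f in find_anomalies(document, required_fields)}

def toy_qa(question, context):
    """Keyword-lookup stand-in for a trained question-answering model."""
    field = question.split("'")[1]                     # recover the field name from the question
    for line in context.splitlines():
        if field in line:
            return line.strip()
    return "No guidance found."

form = {"name": "A. Smith", "policy_number": "", "date_of_loss": ""}
guidance = "The policy_number appears on the insurance card.\nThe date_of_loss is the day the incident occurred."
print(explain_document(form, ["name", "policy_number", "date_of_loss"], toy_qa, guidance))
```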
20220108209 | SHARED MEMORY SPACES IN DATA AND MODEL PARALLELISM - Techniques for shared memory spaces in data and model parallelism are provided to improve memory efficiency and memory access speed. A shared memory space may be established at a host system or in a hardware memory agent. The shared memory may store training data or model parameters for an artificial intelligence model at a memory address in one or more memory circuits. Data for the artificial intelligence model may be processed across a plurality of artificial intelligence accelerators using the training data or the model parameters of the shared memory space. That is, multiple accelerators access one copy of the data from the shared memory space instead of accessing their own separate memory space. | 2022-04-07 |
20220108210 | METHOD FOR DEVELOPING MACHINE-LEARNING BASED TOOL - The present subject matter relates to a method for developing a machine-learning (ML) based tool. The method comprises initializing an input dataset for undergoing ML based processing. The input dataset is pre-processed by a first model to harmonize features across the dataset. Thereafter, the dataset is annotated by a second model to define a labelled dataset. A plurality of features are extracted with respect to the dataset through a feature extractor. A selection of at least one machine-learning classifier is received through an ML training module to operate upon the extracted features and classify the dataset with respect to one or more labels. A meta controller communicates with one or more of the first model, the second model, the feature extractor, and the selected classifier to assess a performance of at least one of the first model and the feature extractor, compare operation among the one or more selected classifiers, and diagnose unexpected operation with respect to one or more of the first model, the feature extractor, and the selected classifier. | 2022-04-07 |
20220108211 | METHOD AND SYSTEM FOR INTEGRATING FIELD PROGRAMMABLE ANALOG ARRAY WITH ARTIFICIAL INTELLIGENCE - A method and system for integrating a Field Programmable Analog Array (FPAA) with Artificial Intelligence (AI) is disclosed. In some embodiments, the method includes automatically creating, by an AI model, a function by auto-connecting a first set of computational elements from a plurality of computational elements in an FPAA, in response to receiving an input. The method further includes receiving feedback comprising a first accuracy level associated with the output. The method further includes automatically adjusting at least one of a plurality of control parameters to modify the function to generate an adjusted output corresponding to the input, based on the first accuracy level associated with the output. | 2022-04-07 |
20220108212 | ATTENTION FREE TRANSFORMER - Attention-free transformers are disclosed. Various implementations of attention-free transformers include a gating and pooling operation that allows the attention-free transformers to provide results comparable to or better than those of a standard attention-based transformer, with improved efficiency and reduced computational complexity with respect to space and time. | 2022-04-07 |
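The abstract does not spell out the exact gating and pooling operation, so the sketch below is only a simplified gating-and-pooling mixer in that spirit: a sigmoid gate on the query projection multiplies a softmax-style pooling over the value projections, so no T-by-T attention matrix is ever formed. All shapes and projections are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def aft_style_layer(x, wq, wk, wv):
    """Simplified gating-and-pooling mixer: a sigmoid gate on the queries multiplies a
    softmax-weighted pooling of the values, with no pairwise query-key attention matrix."""
    q, k, v = x @ wq, x @ wk, x @ wv                            # (T, d) each
    weights = np.exp(k - k.max(axis=0, keepdims=True))          # per-position pooling weights (stabilized)
    pooled = (weights * v).sum(axis=0) / weights.sum(axis=0)    # global pooling over positions, O(T*d)
    return sigmoid(q) * pooled                                  # element-wise gate applied per position

# Usage: 6 positions, 4-dimensional model; random projections just to exercise the shapes.
rng = np.random.default_rng(0)
x = rng.standard_normal((6, 4))
out = aft_style_layer(x, *(rng.standard_normal((4, 4)) for _ in range(3)))
print(out.shape)  # (6, 4)
```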
20220108213 | DIFFERENTIAL PRIVACY DATASET GENERATION USING GENERATIVE MODELS - Apparatuses, systems, and techniques to train a generative model based at least in part on a private dataset. In at least one embodiment, the generative model is trained based at least in part on a differentially private Sinkhorn algorithm, for example, using backpropagation with gradient descent to determine a gradient of a set of parameters of the generative model and modifying the set of parameters based at least in part on the gradient. | 2022-04-07 |
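The Sinkhorn-based objective itself is beyond a short example, but the differentially private gradient step the abstract alludes to can be sketched generically: clip each gradient and add calibrated noise before updating the parameters. The code below does this for a toy generator and a moment-matching surrogate loss; the generator, the loss, and the noise parameters are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(theta, z):
    """Toy linear generator; stands in for the generative model."""
    return z * theta[0] + theta[1]

def surrogate_loss_grad(theta, private_batch, z):
    """Gradient of a simple moment-matching surrogate (placeholder for the Sinkhorn divergence)."""
    diff = generator(theta, z).mean() - private_batch.mean()
    return np.array([2 * diff * z.mean(), 2 * diff])

def dp_train(private_data, steps=200, lr=0.05, clip=1.0, noise_mult=0.5):
    theta = np.zeros(2)
    for _ in range(steps):
        batch = rng.choice(private_data, size=32)
        z = rng.standard_normal(32)
        g = surrogate_loss_grad(theta, batch, z)
        g = g / max(1.0, np.linalg.norm(g) / clip)                   # clip the gradient norm
        g = g + rng.normal(0.0, noise_mult * clip, size=g.shape)     # add calibrated Gaussian noise
        theta -= lr * g
    return theta

private_data = rng.normal(3.0, 1.0, size=1000)
print(dp_train(private_data))  # theta[1] should drift toward the private mean (around 3) despite the noise
```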
20220108214 | MANAGEMENT METHOD OF MACHINE LEARNING MODEL FOR NETWORK DATA ANALYTICS FUNCTION DEVICE - A machine learning (ML) model management method for a network data analytics function (NWDAF) device is disclosed. The NWDAF device performs at least one of an analytics logical function (AnLF) for network data and an ML model training logical function (MTLF). | 2022-04-07 |
20220108215 | Robust and Data-Efficient Blackbox Optimization - The present disclosure provides iterative blackbox optimization techniques that estimate the gradient of a function. According to an aspect of the present disclosure, a plurality of perturbations used at each iteration can be sampled from a non-orthogonal sampling distribution. As one example, in some implementations, perturbations that have been previously evaluated in previous iterations can be re-used at the current iteration, thereby conserving computing resources because the re-used perturbations do not need to be re-evaluated at the current iteration. In another example, in addition to or as an alternative to the use of previously evaluated perturbations, the perturbations evaluated at the current iteration can be sampled from a non-orthogonal sampling distribution. | 2022-04-07 |
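One way to picture the reuse of previously evaluated perturbations is a regression-based gradient estimate built from an archive of (point, value) pairs: old evaluations enter the fit without being recomputed. The sketch below is a hedged illustration of that idea on a toy quadratic, not the specific estimator claimed in the application.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Blackbox objective (a simple quadratic, just for illustration)."""
    return float(np.sum((x - 1.0) ** 2))

def estimate_gradient(x, archive, sigma=0.3, new_samples=4):
    """Local-linear-regression gradient estimate that mixes a few freshly evaluated
    perturbations with previously evaluated points kept in an archive (no re-evaluation)."""
    for _ in range(new_samples):
        p = x + sigma * rng.standard_normal(x.shape)
        archive.append((p, f(p)))
    pts = np.array([p for p, _ in archive[-20:]])        # reuse up to the 20 most recent evaluations
    vals = np.array([v for _, v in archive[-20:]])
    A = np.hstack([pts - x, np.ones((len(pts), 1))])     # local linear model: f(p) ~ g.(p - x) + c
    coef, *_ = np.linalg.lstsq(A, vals, rcond=None)
    return coef[:-1]

x, archive = np.zeros(3), []
for _ in range(100):
    x -= 0.1 * estimate_gradient(x, archive)
print(np.round(x, 2))  # moves toward the optimum at [1, 1, 1]
```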
20220108216 | MACHINE LEARNING APPARATUS, METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PROGRAM - A machine learning apparatus ( | 2022-04-07 |
20220108217 | MODEL LEARNING APPARATUS, LABEL ESTIMATION APPARATUS, METHOD AND PROGRAM THEREOF - A model capable of estimating a label with high accuracy is learned even when training data involving a small number of raters per data item is used. Learning processing is performed in which a plurality of data items and label expectation values that are indicators representing degrees of correctness of individual labels on the data items are used in pairs as training data, and a model that estimates a label on an input data item is obtained. | 2022-04-07 |
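The training criterion can be illustrated with a soft-target classifier: the label expectation values act as per-class probabilities in a cross-entropy loss. The sketch below fits a linear softmax model this way, with two simulated noisy raters providing the expectations; all of it is illustrative rather than the application's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_on_label_expectations(X, label_expectations, num_classes, epochs=300, lr=0.5):
    """Fit a linear softmax classifier against soft targets: each row of
    `label_expectations` holds the per-class probabilities that the label is correct."""
    W = np.zeros((X.shape[1], num_classes))
    for _ in range(epochs):
        P = softmax(X @ W)
        grad = X.T @ (P - label_expectations) / len(X)   # cross-entropy gradient w.r.t. soft targets
        W -= lr * grad
    return W

# Two noisy raters each label 100 items; expectations are the per-item average of their votes.
X = rng.standard_normal((100, 5))
true_y = (X[:, 0] > 0).astype(int)
votes = np.stack([(true_y + (rng.random(100) < 0.2)) % 2 for _ in range(2)], axis=1)
expectations = np.stack([1 - votes.mean(axis=1), votes.mean(axis=1)], axis=1)
W = train_on_label_expectations(X, expectations, num_classes=2)
print((softmax(X @ W).argmax(axis=1) == true_y).mean())  # accuracy on the training items
```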
20220108218 | QUANTUM-ASSISTED MACHINE LEARNING WITH TENSOR NETWORKS - A method for quantum-assisted machine learning includes encoding, by processing circuitry, classical data into a plurality of quantum states by applying the classical data to an encoding map, and training a quantum model based on the plurality of quantum states. The quantum model may have a tensor network structure. The method may also include compiling, by the processing circuitry, the quantum model into a quantum circuit by mapping virtual qubits onto hardware qubits of a quantum hardware device, the quantum circuit including a sequence of operations tailored for operation on the quantum hardware device. | 2022-04-07 |
20220108219 | Approximate Bayesian Logistic Regression For Sparse Online Learning - Systems and methods leverage low complexity (e.g., linear overall, fixed per example) analytical approximations to solve machine learning problems such as, for example, the sparse online logistic regression problem. Unlike variational inference and other methods, the proposed systems and methods lead to analytical closed forms, lowering the practical number of computations. Further, unlike techniques used for dense feature sets, such as Gaussian Mixtures, the proposed systems and methods allow for sparse problems with huge feature sets without increasing complexity. With the analytical closed forms, there is also no need for applying stochastic gradient methods on surrogate losses, or for tuning and balancing the learning and regularization parameters of such methods. | 2022-04-07 |
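A rough feel for the setting can be given with a diagonal-Gaussian online logistic regression that touches only the features present in each sparse example, so the per-example cost stays proportional to its nonzeros. The update below is a generic variance-scaled approximation, explicitly not the analytical closed forms of the application.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SparseOnlineBayesLogReg:
    """Diagonal-Gaussian online logistic regression: only the features present in a sparse
    example are updated, keeping the per-example cost linear in its nonzeros.
    (An illustrative approximation, not the specific closed forms of the application.)"""

    def __init__(self, prior_var=1.0):
        self.mean, self.var = {}, {}
        self.prior_var = prior_var

    def predict(self, features):                     # features: {feature_id: value}
        z = sum(self.mean.get(f, 0.0) * v for f, v in features.items())
        return sigmoid(z)

    def update(self, features, label):               # label in {0, 1}
        p = self.predict(features)
        for f, v in features.items():
            m = self.mean.get(f, 0.0)
            s2 = self.var.get(f, self.prior_var)
            grad = (p - label) * v                    # per-coordinate gradient of the log-loss
            hess = p * (1.0 - p) * v * v              # per-coordinate curvature of the log-loss
            self.var[f] = 1.0 / (1.0 / s2 + hess)     # posterior-variance style shrinkage
            self.mean[f] = m - self.var[f] * grad     # variance-scaled mean update

# Usage: feature "a" drives the label, "b" is mostly noise.
rng = np.random.default_rng(0)
model = SparseOnlineBayesLogReg()
for _ in range(2000):
    x = {"a": 1.0} if rng.random() < 0.5 else {"b": 1.0}
    y = 1 if ("a" in x) else int(rng.random() < 0.3)
    model.update(x, y)
print(round(model.predict({"a": 1.0}), 2), round(model.predict({"b": 1.0}), 2))  # high for "a", near 0.3 for "b"
```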
20220108220 | Systems And Methods For Performing Automatic Label Smoothing Of Augmented Training Data - Example aspects of the present disclosure are directed to systems and methods for performing automatic label smoothing of augmented training data. In particular, some example implementations of the present disclosure, which in some instances can be referred to as “AutoLabel”, can automatically learn the labels for augmented data based on the distance between the clean distribution and the augmented distribution. AutoLabel is built on label smoothing and is guided by the calibration performance over a hold-out validation set. AutoLabel is a generic framework that can be easily applied to existing data augmentation methods, including AugMix, mixup, and adversarial training, among others. AutoLabel can further improve clean accuracy, as well as the accuracy and calibration over corrupted datasets. Additionally, AutoLabel can help adversarial training by bridging the gap between clean accuracy and adversarial robustness. | 2022-04-07 |
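The distance-driven smoothing idea can be sketched as follows: measure how far the augmented feature distribution has drifted from the clean one and map that distance to a smoothing amount. The mapping and the feature statistics below are assumptions for illustration; the calibration-guided tuning on a hold-out set described in the abstract is omitted.

```python
import numpy as np

def smoothing_from_distance(clean_feats, aug_feats, max_smooth=0.3):
    """Map the distance between clean and augmented feature distributions to a label-smoothing
    amount: the further augmentation drifts, the softer the labels (illustrative mapping only)."""
    d = np.linalg.norm(clean_feats.mean(axis=0) - aug_feats.mean(axis=0))
    scale = np.linalg.norm(clean_feats.std(axis=0)) + 1e-8
    return max_smooth * min(1.0, d / scale)

def smooth_labels(one_hot, eps):
    k = one_hot.shape[1]
    return one_hot * (1.0 - eps) + eps / k

rng = np.random.default_rng(0)
clean = rng.standard_normal((128, 16))
augmented = clean + rng.normal(0.5, 0.2, size=clean.shape)    # stand-in for AugMix/mixup outputs
eps = smoothing_from_distance(clean, augmented)
labels = np.eye(4)[rng.integers(0, 4, size=128)]
print(round(eps, 3), smooth_labels(labels, eps)[0])
```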
20220108221 | Systems And Methods For Parameter Sharing To Reduce Computational Costs Of Training Machine-Learned Models - Systems and methods of the present disclosure are directed to a computer-implemented method. The method can include obtaining a machine-learned model comprising a plurality of model units, wherein each model unit comprises a plurality of parameters that are tied to a shared plurality of parameters. The method can include performing a first plurality of training iterations with the machine-learned model to adjust parameters of the shared plurality of parameters. The method can include detecting, based on the first plurality of training iterations, an occurrence of an untying condition. The method can include untying the parameters of one or more model units from the shared plurality of parameters. The method can include performing a second plurality of training iterations with the machine-learned model to adjust parameters of the one or more model units independent of the shared plurality of parameters. | 2022-04-07 |
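A toy version of the tying-then-untying schedule is easy to write down: several units share one weight vector until an untying condition fires, after which each unit trains its own copy. In the sketch below the "condition" is just a fixed step count and the per-unit objectives are synthetic, so everything beyond the tie/untie mechanics is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

class TiedUnits:
    """Several model units start out tied to one shared weight vector; after an untying
    condition fires, each unit trains its own private copy."""

    def __init__(self, num_units, dim):
        self.shared = rng.standard_normal(dim) * 0.1
        self.private = [None] * num_units           # None -> still tied to self.shared

    def weights(self, unit):
        return self.shared if self.private[unit] is None else self.private[unit]

    def apply_grad(self, unit, grad, lr=0.1):
        if self.private[unit] is None:
            self.shared -= lr * grad                 # tied phase: every unit updates the same weights
        else:
            self.private[unit] -= lr * grad          # untied phase: updates stay local to the unit

    def untie(self):
        self.private = [self.shared.copy() for _ in self.private]

units = TiedUnits(num_units=3, dim=4)
for step in range(200):
    for u in range(3):
        target = np.full(4, float(u))                # each unit would prefer a different solution
        units.apply_grad(u, units.weights(u) - target)
    if step == 99:                                   # stand-in for the untying condition (e.g., a plateau)
        units.untie()
print([np.round(units.weights(u), 1)[0] for u in range(3)])  # after untying, units diverge toward 0, 1, 2
```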
20220108222 | SYSTEMS AND METHODS FOR DETECTING PREJUDICE BIAS IN MACHINE-LEARNING MODELS - Aspects of the present invention provide methods, apparatuses, systems, computing devices, computing entities, and/or the like for detecting prejudice bias in machine-learning models and/or data sets used in training, testing, and/or validating the models. In accordance with various aspects, a method is provided comprising: receiving a data set, comprising data instances, used for training, testing, and/or validating a model; generating, using a classification model, a prediction of applicability for each sub-category of a plurality of sub-categories for each bias category of a plurality of bias categories for each data instance; determining that a particular sub-category for a particular bias category is applicable to a proportion of the data set, wherein the predictions of applicability for the particular sub-category generated for the proportion of the data set satisfy a threshold; and determining, based on the proportion, that the data set has a prejudice bias with respect to the particular bias category. | 2022-04-07 |
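The proportion-and-threshold logic can be sketched directly. In the example below, `predict_applicability` stands in for the classification model and both thresholds are arbitrary illustrative values; a sub-category is flagged when it is predicted applicable to a disproportionate share of the instances.

```python
def detect_bias(instances, predict_applicability, bias_categories,
                applicability_threshold=0.5, proportion_threshold=0.8):
    """Flag bias categories whose sub-category applies to a disproportionate share of the data set.
    `predict_applicability(instance, category, sub)` returns a probability from a stand-in model."""
    flagged = []
    for category, subcategories in bias_categories.items():
        for sub in subcategories:
            applicable = [
                inst for inst in instances
                if predict_applicability(inst, category, sub) >= applicability_threshold
            ]
            proportion = len(applicable) / max(1, len(instances))
            if proportion >= proportion_threshold:
                flagged.append((category, sub, round(proportion, 2)))
    return flagged

# Toy usage: a "gender" category where almost every instance is predicted as "male".
data = [{"text": f"sample {i}", "mentions_male": i % 10 != 0} for i in range(100)]
categories = {"gender": ["male", "female"]}
model = lambda inst, cat, sub: 0.9 if (sub == "male") == inst["mentions_male"] else 0.1
print(detect_bias(data, model, categories))   # [('gender', 'male', 0.9)]
```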
20220108223 | SYSTEM AND METHOD FOR HETEROGENEOUS MODEL COMPOSITION - A method for standardized model interaction can include: determining a model composition, receiving an input, converting the input into a standard object, converting the standard input object into a model-specific input (MSI) object, executing the model using the MSI object, converting the output from the model-specific output (MSO) object to a standard object, repeating previous steps for each successive model within the model composition, and providing a final model output. | 2022-04-07 |
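The standard-object pipeline is straightforward to mock up. The sketch below defines a per-model step with three conversions (standard to model-specific input, model execution, model-specific output back to standard) and chains the steps; the dataclasses and the toy models are assumptions, not the claimed system.

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class StandardObject:
    """The shared interchange format passed between models in a composition."""
    payload: Any

@dataclass
class ModelStep:
    to_msi: Callable[[StandardObject], Any]        # standard -> model-specific input (MSI)
    run: Callable[[Any], Any]                      # execute the model on its own format
    from_mso: Callable[[Any], StandardObject]      # model-specific output (MSO) -> standard

def run_composition(raw_input: Any, steps: List[ModelStep]) -> StandardObject:
    obj = StandardObject(payload=raw_input)        # convert the raw input into a standard object
    for step in steps:
        msi = step.to_msi(obj)                     # standard -> MSI
        mso = step.run(msi)                        # model execution
        obj = step.from_mso(mso)                   # MSO -> standard, ready for the next model
    return obj

# Toy composition: a "tokenizer" model expecting a string, then a "scorer" expecting a list.
steps = [
    ModelStep(lambda o: str(o.payload), lambda s: s.split(), lambda out: StandardObject(out)),
    ModelStep(lambda o: list(o.payload), lambda toks: len(toks), lambda out: StandardObject(out)),
]
print(run_composition("model composition in three words", steps).payload)   # 5
```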
20220108224 | TECHNOLOGIES FOR PLATFORM-TARGETED MACHINE LEARNING - Technologies for platform-targeted machine learning include a computing device to generate a machine learning algorithm model indicative of a plurality of classes between which a user input is to be classified and translate the machine learning algorithm model into hardware code for execution on the target platform. Example instructions cause a processor to obtain dataset features indicative of a plurality of characteristics of an input dataset, rank, using multiple ranking algorithms, the dataset features, identify feature subsets for respective ones of the ranked dataset features, predict performance metrics based on the feature subsets, and select a final subset based on the predicted performance metrics. | 2022-04-07 |
20220108225 | DISTRIBUTED MODEL GENERATION VIA INDIRECT PRIVATE DATA ACCESS - A computing system remotely trains a public ensemble model of an artificial intelligence model management system. The system receives, by the model management system, an encrypted representation of a private data value from a client system. The encrypted representation includes annotation information provided by the client system. The system determines, using the encrypted representation and the annotation information, a data value cluster that corresponds to the private data value. Data value clusters are generated using encrypted representations of private data values provided by client systems. The system obtains, based on the assigned data value cluster, an encrypted representation of a model. The model is trained remotely by the client system using the private data value. The system adds the encrypted representation of the model to the public ensemble model. The public ensemble model is generated using a plurality of encrypted representations of models remotely trained by the client systems. | 2022-04-07 |
20220108226 | VOTING-BASED APPROACH FOR DIFFERENTIALLY PRIVATE FEDERATED LEARNING - A method for employing a general label space voting-based differentially private federated learning (DPFL) framework is presented. The method includes labeling a first subset of unlabeled data from a first global server, to generate first pseudo-labeled data, by employing a first voting-based DPFL computation where each agent trains a local agent model by using private local data associated with the agent, labeling a second subset of unlabeled data from a second global server, to generate second pseudo-labeled data, by employing a second voting-based DPFL computation where each agent maintains a data-independent feature extractor, and training a global model by using the first and second pseudo-labeled data to provide provable differential privacy (DP) guarantees for both instance-level and agent-level privacy regimes. | 2022-04-07 |
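A common way to picture voting-based private labeling is a noisy argmax over agent votes; the sketch below adds Laplace noise to the vote histogram before selecting a pseudo-label. This is a generic illustration and makes no attempt to reproduce the instance-level and agent-level guarantees of the application.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_vote_label(unlabeled_x, agent_models, num_classes, noise_scale=1.0):
    """Noisy-argmax over agent votes: each agent labels the example with its local model,
    Laplace noise is added to the vote counts, and the noisy winner becomes the pseudo-label."""
    votes = np.zeros(num_classes)
    for predict in agent_models:
        votes[predict(unlabeled_x)] += 1
    noisy = votes + rng.laplace(0.0, noise_scale, size=num_classes)
    return int(np.argmax(noisy))

# Toy agents: most agree that positive inputs are class 1; two contrarian agents disagree.
agents = [lambda x, flip=i: int(x > 0) if flip % 5 else 1 - int(x > 0) for i in range(10)]
pseudo = [dp_vote_label(x, agents, num_classes=2) for x in [-2.0, -0.5, 0.5, 2.0]]
print(pseudo)   # typically [0, 0, 1, 1], with occasional noise-induced flips
```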
20220108227 | METHOD AND SYSTEM FOR PREDICTING CARPOOL MATCHING PROBABILITY IN RIDESHARING - Methods, systems, and apparatus, including computer programs encoded on computer storage media for carpool dual-pricing in ridesharing are provided. An exemplary method comprises: determining a first expected trip count based on a plurality of carpool requests in a pricing unit and a pair of price adjustment multipliers applied to the pricing unit; for each of the plurality of carpool requests, generating a carpool matching probability of the carpool request by a second machine learning model based on the first expected trip count; constructing one or more Key Performance Indicator (KPI) models based on the plurality of carpool matching probabilities and the pair of price adjustment multipliers; and determining optimal values of the pair of price adjustment multipliers based on an optimization model maximizing an aggregated value of the one or more KPI models. | 2022-04-07 |
20220108228 | Ridehail Seat Reservation Enforcement Systems And Methods - Ridehail seat reservation enforcement and user direction systems and methods are disclosed herein. An example method can include receiving a ridehail request that includes a seat selection of a first seat of a ridehail vehicle, the first seat being accessed through a first door of the ridehail vehicle, activating an external notification feature of the vehicle prior to a user entering the ridehail vehicle when the user attempts to enter the ridehail vehicle on a side of the vehicle associated with the first seat, and activating an internal notification feature of the ridehail vehicle when the user attempts to sit in a second seat that is not the first seat. | 2022-04-07 |
20220108229 | INFORMATION PROCESSING APPARATUS AND METHOD, AND PROGRAM - An information processing apparatus includes an entry and leaving management unit configured to analyze a captured image and detect vehicle information included in the captured image, and in a case where there is reservation information of parking corresponding to the detected vehicle information, open a gate of a parking lot corresponding to the reservation information. | 2022-04-07 |
20220108230 | APPARATUS FOR PROVIDING OF ARCHITECTURE VIDEO - An apparatus for providing an architecture video according to an embodiment of the present invention includes a video information management unit configured to manage video information for an architecture, a map management unit configured to map and manage location information of the architecture on an electronic map, a payment processing unit configured to activate the video information for which payment processing has been completed among a plurality of pieces of video information located within a preset range based on location information of a user, and an additional information management unit configured to output additional information including at least one of an advertisement, a game, a quiz, or a questionnaire associated with the video information according to the location information of the user. | 2022-04-07 |
20220108231 | MOBILITY SERVICE SYSTEM AND MOBILITY SERVICE DELIVERY METHOD - A mobility service system delivers a mobility service utilizing an electric vertical takeoff and landing aircraft (eVTOL). The mobility service system includes one or more processors configured to secure an itinerary including movement from a first takeoff and landing site to a second takeoff and landing site in response to a reservation request from a user. The one or more processors are further configured to secure both a standard itinerary and a backup itinerary when the user is a first type member. The standard itinerary includes a flight from the first takeoff and landing site to the second takeoff and landing site by an eVTOL. The backup itinerary includes ground travel that uses an automobile to travel in at least a partial section between the first takeoff and landing site and the second takeoff and landing site. | 2022-04-07 |
20220108232 | System for the accommodation industry for managing reservations as ownable and tradeable digital assets - Improvements in a system and method for selling, transferring, and purchasing ticket-like hotel or room reservations, which, when purchased, become assets belonging to the owner that grant the owner the right to use the specified hotel room in the specified hotel on the specified night. According to the process, the hotel tokenizes some or all of its inventory for a set period of time, where a token represents one room for one night. The hotel conducts an initial room offering to investors or consumers at discounted prices. Revenues are collected and passed to the hotel owner. Token owners can view their portfolios and can offer to buy, sell, or swap tokens. A token owner can redeem a token at the hotel on the specified night to gain entry to the room. | 2022-04-07 |
20220108233 | BUILDING SPACE RESERVATION - Methods, devices, and systems for building space reservation are described herein. In some examples, one or more embodiments include a display, a memory, and a processor to execute executable instructions stored in the memory to receive a reservation request including building space details for a building space in a building via the display, determine a building space in the building satisfying the building space details based on a location of a mobile device in the building, and reserve the determined building space. | 2022-04-07 |
20220108234 | RENTAL SPACE - A rental space includes a host connected to the Internet and a terminal device that receives information input from the user and provides it to the host, and the host includes a reservation manager that manages reservation information on use reservation received from the terminal device, an equipment manager that manages equipment information on equipment stored in a warehouse, and a management controller that controls the reservation manager and the equipment manager. The management controller generates a transportation command for transporting the equipment to a reserved room on a reservation date and time on the basis of the reservation information and the equipment information, and a transportation control unit receives the transportation command from the management controller via the Internet, and causes a transportation unit to transport the equipment to the reserved room by the reservation date and time based on the transportation command. | 2022-04-07 |
20220108235 | Systems and Methods for Accounting for Uncertainty in Ride-Sharing Transportation Services - A computing system is provided. The computing system is configured to obtain an initial candidate multi-modal transportation itinerary for a user. The computing system is configured to determine an uncertainty associated with a first leg of the initial candidate multi-modal transportation itinerary. The computing system is configured to determine one or more modifications to the initial candidate multi-modal transportation itinerary based, at least in part, on the uncertainty associated with the first leg. The computing system is configured to generate an updated candidate multi-modal transportation itinerary for the user based, at least in part, on the one or more modifications to the initial candidate multi-modal transportation itinerary. The computing system is configured to communicate data associated with the updated candidate multi-modal transportation itinerary to a user device. | 2022-04-07 |
20220108236 | INDOOR POSITIONING AND RECORDING SYSTEM - An indoor recording and positioning system for use in construction projects, as well as in a host of related industries and governmental activities, which system provides for immediate and complete retrieval of construction documents, such as floorplans, blueprints, and other specifications and requirements, keyed to and calibrated by the position of the user at the construction site, and allows for the efficient and timely completion of punch lists, reports, and the like. | 2022-04-07 |
20220108237 | SYSTEM AND METHOD FOR PREDICTIVE CORRUPTION RISK ASSESSMENT - The method for predictive corruption risk assessment includes identifying a target for assessing corruption risk. A set of misconduct data requests are provided. Misconduct input associated with each misconduct data request in the set of misconduct data requests is received. A set of predictive factors data requests are provided. Predictive factors input associated with each predictive factors data request in the set of predictive factors data requests is received. The misconduct input and the predictive factors input are aggregated. The set of predictive factors data requests statistically correlate with the set of misconduct data requests. Each predictive factors data request of the set of predictive factors data requests is from one or more categories. The aggregated misconduct and predictive factors inputs are analyzed. A report based on the analysis is generated, and the report reflects the aggregated misconduct and predictive factors inputs. | 2022-04-07 |