10th week of 2022 patent application highlights part 51 |
Patent application number | Title | Published |
20220076091 | Wireless Vibration Monitoring Sensor Device with Flexible Form Factor - A flexible sensor device includes a device layer including at least a first sensor configured to measure sensor data relevant to an object of interest, a flexible electronics layer including flexible circuit connected to the first sensor, a flexible substrate located between the flexible electronics layer and a first adhesive layer, a flexible tape cover that is on the device layer opposite the flexible substrate, the flexible tape cover covering the device layer, and a coupling element, located in a first aperture, the coupling element coupling the first sensor to the object of interest when the sensor device is attached to the object of interest. The first aperture is in one of the flexible substrate and the first adhesive layer, overlapping the first sensor and exposing the coupling element to the object of interest. | 2022-03-10 |
20220076092 | SMART IC SUBSTRATE, SMART IC MODULE, AND IC CARD INCLUDING THE SAME - A smart IC substrate according to an embodiment includes: a substrate including one surface and the other surface; a circuit pattern and a connection circuit pattern disposed on the one surface; and a coil pattern disposed on the other surface, wherein a chip mounting region is formed on the other surface, the coil pattern is electrically connected to a first terminal and a second terminal, the substrate includes a first region disposed inside the coil pattern and a second region disposed outside the coil pattern, the first terminal is disposed in the first region, the second terminal is disposed in the second region, a first via is formed in the first region corresponding to the circuit pattern of the substrate, a second via is formed in the second region of the substrate, a third via is formed in the first region corresponding to the connection circuit pattern of the substrate, and a connection member is disposed inside the second via. | 2022-03-10 |
20220076093 | RFID LABEL AND RFID TAG - An RFID label includes: a substrate; a dipole antenna formed of a metal foil so as to have a predetermined antenna length and a predetermined antenna width, the dipole antenna being arranged on a surface of the substrate; an IC chip connected to the dipole antenna; and a separator temporarily adhered to an adhesive agent overlaid on the surface of the substrate on which the dipole antenna is arranged, wherein a tear off line cuts through the substrate and the dipole antenna, in at least a part of the dipole antenna, so as to extend along an antenna length direction and so as to be superimposed with the part of the dipole antenna. | 2022-03-10 |
20220076094 | DYNAMIC REGION BASED APPLICATION OPERATIONS - Techniques are disclosed for a hybrid undo/redo for text editing, where non-linear undo and redo operations are performed across dynamic regions in a document and linear undo and redo operations are performed within the dynamic regions in the document. In an example, the hybrid undo/redo may be achieved by maintaining respective region offset values for the dynamic regions created in a document by the edits made to the document. In operation, the respective region offset values associated with the dynamic regions can be used to negate or otherwise counteract the effect of edits made in the dynamic regions. | 2022-03-10 |
20220076095 | MULTI-LEVEL SPARSE NEURAL NETWORKS WITH DYNAMIC REROUTING - Systems and methods for providing a neural network with multiple sparsity levels include sparsifying a matrix associated with the neural network to form a first sparse matrix; training the neural network using the first sparse matrix to form a second sparse matrix by fixing values and locations of non-zero elements of the first sparse matrix and updating a zero-value element of the first sparse matrix to be a non-zero value, wherein non-zero elements of the second sparse matrix includes the non-zero elements of the first sparse matrix; and outputting the second sparse matrix for executing the neural network. | 2022-03-10 |
20220076096 | DEVICE AND METHOD FOR TRAINING A SCALE-EQUIVARIANT CONVOLUTIONAL NEURAL NETWORK - A computer-implemented method for training a scale-equivariant convolutional neural network. The scale-equivariant convolutional neural network is configured to determine an output signal characterizing a classification of an input image of the scale-equivariant convolutional neural network. The scale-equivariant convolutional neural network includes a convolutional layer. The convolutional layer is configured to provide a convolution output based on a plurality of steerable filters of the convolutional layer and a convolution input. The convolution input is based on the input image and the steerable filters are determined based on a plurality of basis filters. The method for training includes training the plurality of basis filters. | 2022-03-10 |
20220076097 | NEURAL NETWORK COMPUTATION METHOD, DEVICE, READABLE STORAGE MEDIA AND ELECTRONIC EQUIPMENT - The present application discloses a neural network computation method that includes determining the size of the first feature map obtained when the processor computes the present layer of the neural network before performing convolution computation on the next layer of the neural network; determining a convolution computation order of the next layer according to the size of the first feature map and the size of the second feature map for a convolution supported by the next layer; and performing convolution computation instructions from the next layer based on the convolution computation order. Exemplary embodiments in the present disclosure decrease the interlayer feature map data access overhead and reduce the idle time of a computation unit by leaving out the storage of the first feature map and the loading process of the second feature map. | 2022-03-10 |
20220076098 | SYSTEMS AND METHODS FOR CONSTRUCTING AND APPLYING SYNAPTIC NETWORKS - In selected embodiments a recommendation generator builds a network of interrelationships between venues, reviewers and users based on attributes and reviewer and user reviews of the venues. Each interrelationship or link may be positive or negative and may accumulate with other links (or anti-links) to provide nodal links, the strength of which is based on commonality of attributes among the linked nodes and/or common preferences that one node, such as a reviewer, expresses for other nodes, such as venues. The links may be first order (based on a direct relationship between, for instance, a reviewer and a venue) or higher order (based on, for instance, the fact that two venues are both liked by a given reviewer). The recommendation engine in certain embodiments determines recommended venues based on user attributes and venue preferences by aggregating the link matrices and determining the venues which are most strongly coupled to the user. | 2022-03-10 |
20220076099 | CONTROLLING AGENTS USING LATENT PLANS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for controlling an agent. One of the methods includes controlling the agent using a policy neural network that processes a policy input that includes (i) a current observation, (ii) a goal observation, and (iii) a selected latent plan to generate a current action output that defines an action to be performed in response to the current observation. | 2022-03-10 |
20220076100 | Multi-Dimensional Deep Neural Network - An artificial intelligence (AI) system is disclosed. The AI system comprises an input interface to accept input data; a memory storing a multi-dimensional neural network having a sequence of deep neural networks (DNNs) with an inner DNN and an outer DNN; a processor configured to submit the input data to the multi-dimensional neural network to produce an output of the outer DNN and an output interface to render at least a function of the output. Each DNN processes the input data sequentially by a sequence of layers along a first dimension of data propagation. The DNNs are arranged along a second dimension of data propagation from the inner DNN to the outer DNN. Further, the DNNs are connected such that an output of at least one layer of a DNN is combined with an input to at least one layer of subsequent DNN in the sequence of DNNs. | 2022-03-10 |
20220076101 | OBJECT FEATURE INFORMATION ACQUISITION, CLASSIFICATION, AND INFORMATION PUSHING METHODS AND APPARATUSES - Object feature information acquisition methods, systems, and devices, including computer programs encoded on computer storage media are provided. One of the methods includes: obtaining N relation networks of N time instances, wherein each of the relation networks comprises a plurality of nodes and connection relationships between the nodes, and each of the relation networks comprises a first node representing a first user; determining (i) a spatial aggregation feature of the first node at a first time instance and (ii) a node feature of the first node; inputting N spatial aggregation features of the N time instances into a sequential neural network; determining, based on an output result of the sequential neural network, N spatio-temporal expressions of the first node at the N time instances; and aggregating the N spatio-temporal expressions to obtain a spatio-temporal aggregation feature of the first node as feature information of the first user. | 2022-03-10 |
20220076102 | METHOD AND APPARATUS FOR MANAGING NEURAL NETWORK MODELS - A method of managing deep neural network (DNN) models on a device is provided. The method includes extracting information associated with each of a plurality of DNN models, identifying, from the information, common information which is common across the plurality of DNN models, separating and storing the common information into a designated location in the device, and controlling at least one DNN model among the plurality of DNN models to access the common information. | 2022-03-10 |
20220076103 | Data Processing Processor, Corresponding Method and Computer Program - A data processing processor includes at least one processing memory and a computation unit. The computation unit includes a set of configurable computation units called configurable neurons, each of which includes a module configured to compute combination functions and a module configured to compute activation functions. Each module for computing activation functions includes a register for receiving a configuration command so that the command determines an activation function to be executed from at least two activation functions that can be executed by the module for computing activation functions. | 2022-03-10 |
20220076104 | LOW POWER HARDWARE ARCHITECTURE FOR A CONVOLUTIONAL NEURAL NETWORK - Dynamic data quantization may be applied to minimize the power consumption of a system that implements a convolutional neural network (CNN). Under such a quantization scheme, a quantized representation of a 3×3 array of m-bit activation values may include 9 n-bit mantissa values and one exponent shared between the n-bit mantissa values (n<m). | 2022-03-10 |
20220076105 | METHOD AND APPARATUS FOR TRIMMING SENSOR OUTPUT USING A NEURAL NETWORK ENGINE - A sensor is provided, comprising: a first sensing element that is arranged to generate, at least in part, a first signal; a second sensing element that is arranged to generate, at least in part, a second signal; and a neural network circuit that is configured to output an adjusted signal based on the first signal and the second signal. | 2022-03-10 |
20220076106 | APPARATUS WITH NEURAL NETWORK OPERATION METHOD - A neural network operation method includes storing a matrix on which an operation of a neural network is to be performed, shuffling a portion of elements of the matrix, and performing a replacement operation for the operation based on the shuffled matrix. | 2022-03-10 |
20220076107 | NEURAL NETWORK PROCESSING DEVICE, DATA PROCESSING METHOD AND DEVICE - A neural network processing device includes first and second operators. The first operator performs a specific calculation on input data to generate first output data. The second operator performs a function calculation on the first output data. The second operator includes a front-end processing circuit, a lookup table circuit, an interpolator circuit, and a back-end processing circuit. The front-end processing circuit performs a first data processing on the first output data to generate processed data. The lookup table circuit searches a first lookup table according to the processed data to obtain lookup data. The first lookup table includes mapping information between first independent variables and first dependent variables corresponding to the function calculation. The interpolator circuit performs an interpolation on the lookup data to obtain interpolated data. The back-end processing circuit performs a second data processing on the interpolated data to generate second output data. | 2022-03-10 |
20220076108 | NEURON AND NEUROMORPHIC SYSTEM INCLUDING THE SAME - The present invention discloses a neuron and a neuromorphic system including the same. The neuron according to an embodiment of the present invention includes a two-terminal spin device for performing integration and fire, and the two-terminal spin device is formed to have a negative differential resistance (NDR) region in which current decreases as voltage increases. | 2022-03-10 |
20220076109 | SYSTEM FOR CONTEXTUAL AND POSITIONAL PARAMETERIZED RECORD BUILDING - Provided is a system for contextual and positional parameterized record building. The system provides a mechanism to track any data record, information element or its parts to extract graphical/logical and contextual location in source documents and construct a data tree representation using a data representation module. The information extraction is enabled by a learning approach that incorporates graphical positional features into the model building process. A prediction architecture includes a gate network having a plurality of gates/neurons. Each gate/neuron is associated with an activation function based on a pre-built logic to perform a specific operation and operates on signals received at the gate/neuron. A record building module constructs one or more records based on candidate data values derived from the data tree representation using the prediction architecture. The candidate data values are wrapped in signals and fed to gates/neurons to construct the one or more records. | 2022-03-10 |
20220076110 | Efficient Neural Network Accelerator Dataflows - A distributed deep neural net (DNN) utilizing a distributed, tile-based architecture includes multiple chips, each with a central processing element, a global memory buffer, and a plurality of additional processing elements. Each additional processing element includes a weight buffer, an activation buffer, and vector multiply-accumulate units to combine, in parallel, the weight values and the activation values using stationary data flows. | 2022-03-10 |
20220076111 | Neural Network Approach for Identifying a Radar Signal in the Presence of Noise - A self-supervised machine-learning system identifies whether an intermittent signal is present. The system includes a receiver, an encoding neural network, a decoding neural network, and a gating neural network. The receiver detects radiation and from the detected radiation generates a sampled sequence including sampled values describing the intermittent signal and noise. The encoding neural network is trained to compress each window over the sampled sequence into a respective context vector having a fixed dimension less than an incoming dimension of the window. The decoding neural network is trained to decompress the respective context vector for each window into an interim sequence describing the intermittent signal while suppressing the noise. The gating neural network is trained to produce a confidence sequence from a sigmoidal output based on the interim sequence. Despite the noise, the confidence sequence identifies whether the intermittent signal is present in each sampled value in the sampled sequence. | 2022-03-10 |
20220076112 | COMPRESSING WEIGHTS FOR DISTRIBUTED NEURAL NETWORKS - Embodiments of the present disclosure include systems and methods for compressing weights for distributed neural networks. In some embodiments, a first network comprising a first set of weights is trained using a set of training data. A second network comprising a second set of weights is trained using the set of training data. A number of weights in the first set of weights is greater than a number of weights in the second set of weights. The first set of weights are adjusted based on a first loss determined by the first network and a second loss determined by the second network. The second set of weights are adjusted based on the first loss determined by the first network and the second loss determined by the second network. Values of the second set of weights are sent to a computing system. | 2022-03-10 |
20220076113 | WEIGHT MATRIX PREDICTION - Embodiments of the present disclosure relate to weight matrix prediction. In an embodiment, a computer-implemented method is disclosed. The method comprises sending a candidate weight matrix of a neural network to one of a plurality of computing nodes comprised in a computing system to perform a testing iteration. The method further comprises receiving a testing loss value from the one of the plurality of computing nodes based on the testing iteration. The method further comprises evaluating whether the testing loss value is applicable. The method further comprises determining that the candidate weight matrix is available to be employed in a new formal iteration in response to the testing loss value being applicable. In other embodiments, a system and a computer program product are disclosed. | 2022-03-10 |
20220076114 | MODULAR-RELATED METHODS FOR MACHINE LEARNING ALGORITHMS INCLUDING CONTINUAL LEARNING ALGORITHMS - A method for modular-based techniques for continual learning applications includes training a neural network based on learning a plurality of parameters associated with the neural network using input data associated with a current task. The neural network comprises a plurality of layers. A first layer, of the plurality of layers, comprises a plurality of nodes. Modularization of the neural network is performed to group the plurality of nodes of the first layer into at least two separate groups. | 2022-03-10 |
20220076115 | DATA PROCESSING BASED ON NEURAL NETWORK - Devices and methods for improving the performance of a data processing system that receives an input data comprising a training data for a neural network are described. An example system includes a plurality of accelerators, each of which is configured to perform a plurality of epoch segment processes, share, after performing at least one of the plurality of epoch segment processes, gradient data associated with a loss function with other accelerators, and update a weight of the neural network based on the gradient data. In some embodiments, each of the plurality of accelerators are further configured to adjust a precision of the gradient data based on at least one of a variance of the gradient data for the input data and a total number of the plurality of epoch segment processes, and transmit precision-adjusted gradient data to the other accelerators. | 2022-03-10 |
20220076116 | LEARNING APPARATUS, METHOD AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - According to one embodiment, a learning apparatus includes processing circuitry. The processing circuitry acquires a plurality of learning samples to be learned and a plurality of target labels associated with the respective learning samples, iteratively learns a learning model so that a learning error between output data corresponding to the learning sample and the target label is small with respect to the learning model to which the output data is output by inputting the learning sample, and displays a layout image in which at least some of the learning samples are arranged based on a learning progress regarding the iterative learning of the learning model and a plurality of the learning errors. | 2022-03-10 |
20220076117 | METHODS FOR GENERATING A DEEP NEURAL NET AND FOR LOCALISING AN OBJECT IN AN INPUT IMAGE, DEEP NEURAL NET, COMPUTER PROGRAM PRODUCT, AND COMPUTER-READABLE STORAGE MEDIUM - Methods for generating a deep neural net and for localizing an object in an input image, the deep neural net, a corresponding computer program product, and a corresponding computer-readable storage medium are provided. A discriminative counting model is trained to classify images according to a number of objects of a predetermined type depicted in each of the images, and a segmentation model is trained to segment images by classifying each pixel according to what image part the pixel belongs to. Parts and/or features of both models are combined to form the deep neural net. The deep neural net is adapted to generate, in a single forward pass, a map indicating locations of any objects for each input image. | 2022-03-10 |
20220076118 | REAL TIME CONTEXT DEPENDENT DEEP LEARNING - In an example, an apparatus comprises a plurality of execution units and logic, at least partially including hardware logic, to receive a plurality of data inputs for training a neural network, wherein the data inputs comprise training data and weights inputs; represent the data inputs in a first form; and represent the weight inputs in a second form. Other embodiments are also disclosed and claimed. | 2022-03-10 |
20220076119 | DEVICE AND METHOD OF TRAINING A GENERATIVE NEURAL NETWORK - A device and a method of training a generative neural network. The method includes: generating an edge image using an edge detection applied to a digital image, the edge image comprising a plurality of edge pixels determined as representing edges of one or more digital objects in the digital image; selecting edge-pixels from the plurality of edge pixels; providing a segmentation image using the digital image, the segmentation image comprising a plurality of first pixels, the positions of the first pixels corresponding to the positions of the selected edge-pixels; selecting one or more second pixels for each first pixel in the segmentation image; generating a distorted segmentation image using a two-dimensional distortion applied to the segmentation image; and training the generative neural network using the distorted segmentation image as input image to estimate the digital image. | 2022-03-10 |
20220076120 | FINE TUNING OF TRAINED ARTIFICIAL NEURAL NETWORK - Systems and methods for fine tuning a trained artificial neural network (ANN) are provided. An example method may include receiving a description of the neurons, a first set of first parameters for the neurons and a second set of second parameters for the neurons; acquiring a plurality of inputs to the neurons, the inputs including first inputs associated with the first set of first parameters and second inputs associated with the second set of second parameters; obtaining first values correlating the first inputs and the second inputs; obtaining second values correlating the first inputs and the second inputs being weighted partially by the first parameters or the second parameters; and determining, based on the first values and the second values, a third set of third parameters to minimize a distance between neurons outputs determined based on the first parameters and neurons outputs determined based on the third parameters. | 2022-03-10 |
20220076121 | METHOD AND APPARATUS WITH NEURAL ARCHITECTURE SEARCH BASED ON HARDWARE PERFORMANCE - A processor-implemented neural architecture search method includes: acquiring performance of neural network blocks included in a pre-trained neural network; selecting at least one target block for performance improvement from the neural network blocks; training weights and architecture parameters of candidate blocks corresponding to the target block based on arbitrary input data and output data of the target block generated based on the input data; and updating the pre-trained neural network by replacing the target block in the pre-trained neural network with one of the candidate blocks based on the trained architecture parameters. | 2022-03-10 |
20220076122 | ARITHMETIC APPARATUS AND ARITHMETIC METHOD - According to one embodiment, an arithmetic apparatus includes a non-volatile first memory, a second memory, and a controller. The first memory stores a model to be trained. The second memory has a smaller storage capacity than the first memory. The controller executes learning processing that updates a first parameter of the model based on a loss value obtained by inputting training data into the model stored in the first memory, and stores cumulative update information indicating a difference of the first parameter before and after the update in the second memory. In addition, the controller executes the learning processing using a second parameter in which the cumulative update information stored in the second memory is reflected in the first parameter read from the model stored in the first memory, and stores a difference between a third parameter obtained by updating the second parameter and the first parameter, in the second memory as the cumulative update information. | 2022-03-10 |
20220076123 | NEURAL NETWORK OPTIMIZATION METHOD, ELECTRONIC DEVICE AND PROCESSOR - The present invention discloses a neural network optimization method. An operator to be replaced is selected from multiple operators in a network layer according to a predetermined condition, and the operator to be replaced is replaced by multiple equivalent operators according to a calculation function corresponding to the operator to be replaced, wherein the multiple equivalent operators include a target operator. Pre-calculation is performed for a first operator among the multiple equivalent operators, and the calculation result is inputted into the target operator. A second operator is identified according to data change conditions of the multiple equivalent operators, and the second operator is combined with the target operator to complete optimization of a neural network model. The present invention can further perform lossless conversion of the operators in the neural network, further improving calculation performance on the basis of a simplified network structure. | 2022-03-10 |
20220076124 | METHOD AND DEVICE FOR COMPRESSING A NEURAL NETWORK - A method for compressing a neural network. The method includes: defining a maximum complexity of the neural network; ascertaining a first cost function; ascertaining a second cost function, which characterizes a deviation of a current complexity of the neural network in relation to the defined complexity; training the neural network in such a way that a sum of a first and a second cost function is optimized as a function of parameters of the neural network; and removing those weightings whose assigned scaling factor is smaller than a predefined threshold value. | 2022-03-10 |
20220076125 | NEURAL NETWORK LEARNING DEVICE, METHOD, AND PROGRAM - The learning unit | 2022-03-10 |
20220076126 | Modelling Request Sequences in Online-Connected Video Games - This specification describes a computer-implemented method for testing the performance of a video game server. The method comprises initializing a recurrent neural network. The recurrent neural network is trained based on requests sent from one or more client devices to the video game server. The initializing comprises inputting a start token into the recurrent neural network. An output distribution for a first time step is generated, as an output of the recurrent neural network. The output distribution comprises a probability of generating each of a set of one or more requests to the video game server, in addition to a probability of generating a stop token. A first request from the set of one or more requests is selected based on the output distribution. The method comprises, for one or more further time steps until a stop token has been selected from the output of the recurrent neural network: inputting, into the recurrent neural network, a request selected in the previous time step; generating, as an output of the recurrent neural network, an output distribution for the time step; and selecting, based on the output distribution, a request. A generated sequence of requests is stored. The generated sequence of requests comprises one or more of the requests selected at each respective time step. The generated sequence of requests is inputted into a test generator. A performance test for testing the performance of the video game server is generated by the test generator. | 2022-03-10 |
20220076127 | FORCING WEIGHTS OF TRANSFORMER MODEL LAYERS - Embodiments of the present disclosure include systems and methods for forcing weights of transformer model layers when training a transformer model. In some embodiments, input data is received at a first layer included in a transformer model. The input data is processed through the first layer of the transformer model to produce a first output data. The first output data is processed through the first layer of the transformer model to produce a second output data. The first output data is processed through a second layer included in the transformer model to produce a third output data. A difference is calculated between the second output data and the third output data. Weights included in the first layer of the transformer model are adjusted based on the calculated difference. | 2022-03-10 |
20220076128 | LEARNING AND PROPAGATING VISUAL ATTRIBUTES - One embodiment of the present invention sets forth a technique for performing spatial propagation. The technique includes generating a first directed acyclic graph (DAG) by connecting spatially adjacent points included in a set of unstructured points via directed edges along a first direction. The technique also includes applying a first set of neural network layers to one or more images associated with the set of unstructured points to generate (i) a set of features for the set of unstructured points and (ii) a set of pairwise affinities between the spatially adjacent points connected by the directed edges. The technique further includes generating a set of labels for the set of unstructured points by propagating the set of features across the first DAG based on the set of pairwise affinities. | 2022-03-10 |
20220076129 | METHOD OF TRAINING A DEEP NEURAL NETWORK TO CLASSIFY DATA - A computer-implemented method of training a deep neural network to classify data comprises: for a batch of N training data X | 2022-03-10 |
20220076130 | DEEP SURROGATE LANGEVIN SAMPLING FOR MULTI-OBJECTIVE CONSTRAINT BLACK BOX OPTIMIZATION WITH APPLICATIONS TO OPTIMAL INVERSE DESIGN PROBLEMS - Run a computerized numerical partial differential equation solver on at least one partial differential equation representing at least one physical constraint of a physical system, to generate a training data set. A true potential corresponds to an exact solution to the at least one partial differential equation. Using a computerized machine learning system, learn, from the training data set, a surrogate of a gradient of the true potential. Using the computerized machine learning system, apply Langevin sampling to the learned surrogate of the gradient, to obtain a plurality of samples corresponding to candidate designs for the physical system. Make the plurality of samples available to a fabrication entity. | 2022-03-10 |
20220076131 | DISCRETE VARIATIONAL AUTO-ENCODER SYSTEMS AND METHODS FOR MACHINE LEARNING USING ADIABATIC QUANTUM COMPUTERS - A computational system can include digital circuitry and analog circuitry, for instance a digital processor and a quantum processor. The quantum processor can operate as a sample generator providing samples. Samples can be employed by the digital processor in implementing various machine learning techniques. For example, the computational system can perform unsupervised learning over an input space, for example via a discrete variational auto-encoder, attempting to maximize the log-likelihood of an observed dataset. Maximizing the log-likelihood of the observed dataset can include generating a hierarchical approximating posterior. | 2022-03-10 |
20220076132 | USING META-INFORMATION IN NEURAL MACHINE TRANSLATION - Systems and methods for neural machine translation are provided. In one example, a neural machine translation system translates text and comprises processors and a memory storing instructions that, when executed by at least one processor among the processors, cause the system to perform operations comprising, at least, obtaining a text as an input to a neural network system, supplementing the input text with meta information as an extra input to the neural network system, and delivering an output of the neural network system to a user as a translation of the input text, leveraging the meta information for translation. | 2022-03-10 |
20220076133 | GLOBAL FEDERATED TRAINING FOR NEURAL NETWORKS - Apparatuses, systems, and techniques to facilitate global semi-supervised training of neural networks to perform image segmentation related to diagnosis and management of emerging diseases, such as COVID-19. In at least one embodiment, distributed client training frameworks train one or more client neural networks to perform image segmentation according to a local training data set as well as global neural network data aggregated, by one or more central servers, from each of one or more globally distributed client neural networks. | 2022-03-10 |
20220076134 | TWO-STAGE DEEP LEARNING BASED SECURE PRECODER FOR INFORMATION AND ARTIFICIAL NOISE SIGNAL IN NON-ORTHOGONAL MULTIPLE ACCESS SYSTEM - A learning method for a two-stage deep learning based secure precoder for information and an artificial noise signal in a non-orthogonal multiple access (NOMA) system is provided. The learning method for designing the two-stage deep learning based secure precoder for the information and the artificial noise signal in the NOMA system may include performing pre-training for downlink NOMA before information transmission to maximize a sum secrecy rate while ensuring secrecy rates of respective legitimate users, each having a single antenna (secrecy fairness), and performing post-training by fine tuning a neural network learned by the pre-training using unsupervised learning. | 2022-03-10 |
20220076135 | META-LEARNING SYSTEM AND METHOD FOR DISENTANGLED DOMAIN REPRESENTATION LEARNING - A method for employing meta-learning based feature disentanglement to extract transferrable knowledge in an unsupervised setting is presented. The method includes identifying how to transfer prior knowledge data from a plurality of source domains to one or more target domains, extracting domain dependence features and domain agnostic features from the prior knowledge data, via a disentangle meta-controller, by discovering factors of variation within the prior knowledge data received from a data stream, and obtaining an evaluation for a downstream task, via a child network, to obtain an optimal child model and a feature disentangle strategy. | 2022-03-10 |
20220076136 | METHOD AND SYSTEM FOR TRAINING A NEURAL NETWORK MODEL USING KNOWLEDGE DISTILLATION - An agnostic combinatorial knowledge distillation (CKD) method for transferring trained knowledge of neural model from a complex model (teacher) to a less complex model (student) is described. In addition to training the student to generate a final output that approximates both the teacher's final output and a ground truth of a training input, the method further maximizes knowledge transfer by training hidden layers of the student to generate outputs that approximate a representation of the subset of teacher hidden layers mapped to each of the student hidden layers for a given training input. | 2022-03-10 |
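For context, the final-output part of a teacher-to-student transfer like the one above is commonly a soft-target distillation loss. The sketch below shows that generic loss only (temperature `T` and mixing weight `alpha` are assumed hyperparameters) and deliberately omits the patent's hidden-layer mapping term.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(l / T) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, true_idx, T=2.0, alpha=0.5):
    """Generic soft-target distillation loss: a hard cross-entropy term
    against the ground-truth label plus a soft cross-entropy term between
    temperature-softened teacher and student distributions."""
    # Hard-label term: cross-entropy against the ground truth.
    p_student = softmax(student_logits)
    hard = -math.log(p_student[true_idx])
    # Soft term: cross-entropy between softened teacher and student outputs.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft = -sum(pt * math.log(ps) for pt, ps in zip(p_t, p_s))
    # T*T rescales gradients of the softened term, as is conventional.
    return alpha * hard + (1 - alpha) * (T * T) * soft
```

A student whose logits agree with both the teacher and the label incurs a lower loss than one that disagrees, which is the property a distillation trainer minimizes.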
20220076137 | QUERY-BASED MOLECULE OPTIMIZATION AND APPLICATIONS TO FUNCTIONAL MOLECULE DISCOVERY - A query-based generic end-to-end molecular optimization (“QMO”) system framework, method and computer program product for optimizing molecules, such as for accelerating drug discovery. The QMO framework decouples representation learning and guided search and applies to any plug-in encoder-decoder with continuous latent representations. The QMO framework directly incorporates evaluations based on chemical modeling, analysis packages, and pre-trained machine-learned prediction models for efficient molecule optimization using a query-based guided search method based on zeroth order optimization. The QMO features efficient guided search with molecular property evaluations and constraints obtained using the predictive models and chemical modeling and analysis packages. QMO tasks include optimizing drug-likeness and penalized log P scores with similarity constraints and improving the target binding affinity of existing drugs to pathogens such as the SARS-CoV-2 main protease protein while preserving the desired drug properties. QMO tasks further include optimizing antimicrobial peptides toward lower toxicity. | 2022-03-10 |
20220076138 | COMPUTATION REDUCTION USING A DECISION TREE CLASSIFIER FOR FASTER NEURAL TRANSITION-BASED PARSING - A fast neural transition-based parser. The fast neural transition-based parser includes a decision tree-based classifier and a state vector control loss function. The decision tree-based classifier is dynamically used to replace a multilayer perceptron in the fast neural transition-based parser, and the decision tree-based classifier increases speed of neural transition-based parsing. The state vector control loss function trains the fast neural transition-based parser, the state vector control loss function builds a vector space favorable for building a decision tree that is used for the decision tree-based classifier in the neural transition-based parser, and the state vector control loss function maintains accuracy of neural transition-based parsing while the decision tree-based classifier is used to increase the speed of the neural transition-based parsing. | 2022-03-10 |
20220076139 | MULTI-MODEL ANALYTICS ENGINE FOR ANALYZING REPORTS - Systems and methods for an automated software solution to evaluate expense reports and provide analytic results. For example, some embodiments combine different analytic models, which, when applied together, provide a comprehensive analysis of aggregated expense report data. In some embodiments, a multi-model approach may determine whether a target expense report varies from predicted values and whether the user who submitted the target report is an outlier with respect to other users who previously submitted expense reports. | 2022-03-10 |
20220076140 | PRIORITIZATION OF ELECTRONIC COMMUNICATIONS - Methods, systems, and apparatus for prioritizing communications are described. Metadata that characterizes an electronic communication is obtained and a machine learning algorithm is applied to the metadata to generate a scoring model. A score for the electronic communication is generated based on the scoring model. | 2022-03-10 |
20220076141 | INSECT ATTACK RISK PREDICTION SYSTEM AND METHOD - An insect attack prediction system is described, comprising: at least one processor provided with a plurality of software modules comprising: an insect identification module configured to process at least one insect digital image (IM) to provide a presence value (IPD), representing the presence of insects in an area of interest for insect attack; a data collecting module configured to acquire insect behavioral data associated with said area and comprising at least one of the following data groups: meteorological data; environmental data; historical data of insect presence. The system further comprises a prediction module configured to process the presence value (IPD) and the insect behavioral data according to a mathematical prediction algorithm to estimate a risk of attack (PRB) to the area of interest. | 2022-03-10 |
20220076142 | SYSTEM AND METHOD FOR SELECTING UNLABELED DATA FOR BUILDING LEARNING MACHINES - Systems and methods for selecting unlabeled data for building and improving the performance of a learning machine are disclosed. In an aspect, such a system may include a reference learning machine, a set of labeled data, and a learning machine analyzer. The learning machine analyzer is configured to receive the reference learning machine and the set of labeled data as inputs and analyze the inner working of the reference learning machine to produce a selected set of unlabeled data. In an aspect, the learning machine analyzer identifies and measures a relation between different input data samples and finds all pairwise relations to construct a relational graph. In an aspect, the relational graph visualizes how similar the different input data samples are to each other in higher dimensions inside the reference learning machine. | 2022-03-10 |
20220076143 | AUTOMATICALLY RECOMMENDING AN EXISTING MACHINE LEARNING PROJECT AS ADAPTABLE FOR USE IN A NEW MACHINE LEARNING PROJECT - According to one or more embodiments, operations may include extracting first features from existing machine learning (ML) projects and storing the first features in a corpus. In addition, the operations may include performing a first search on the corpus based on a first search query to generate a first ranked set of the existing ML projects. Moreover, the operations may include generating second features based on the first features of the first ranked set of the existing ML projects. Further, the operations may include performing a second search on the corpus based on a second search query to generate a second ranked set of the existing ML projects. In addition, the operations may include recommending a highest ranked existing ML project in the second ranked set of the existing ML projects as adaptable for use in a second ML project. | 2022-03-10 |
20220076144 | MACHINE LEARNING WITH MULTIPLE CONSTRAINTS - The exemplary embodiments disclose a method, a computer program product, and a computer system for determining that one or more model pipelines satisfy one or more constraints. The exemplary embodiments may include detecting a user uploading data, one or more constraints, and one or more model pipelines, collecting the data, the one or more constraints, and the one or more model pipelines, and determining that one or more of the model pipelines satisfies all of the one or more constraints based on applying one or more algorithms to the collected data, constraints, and model pipelines. | 2022-03-10 |
20220076145 | METHODS AND APPARATUS FOR REAL-TIME INFERENCE OF MACHINE LEARNING MODELS - This application relates to apparatus and methods for providing recommended items to advertise. In some examples, a computing device determines a plurality of first values for a corresponding plurality of first items based on the user's engagement with each of the first items. The computing device may then determine a subset of the plurality of first items based on the first values. The computing device may receive a search request and determine a plurality of second values for a plurality of second items based on the search request. The computing device may determine a plurality of third values for the subset of items based on the plurality of second values for the plurality of second items and the user's engagement with each of the subset of items. The computing device may determine the recommended items based on the plurality of second values and the plurality of third values. | 2022-03-10 |
20220076146 | YIELD RATE PREDICTION METHOD, YIELD RATE PREDICTION SYSTEM AND MODEL TRAINING DEVICE OF SEMICONDUCTOR MANUFACTURING PROCESS - A yield rate prediction method, a yield rate prediction system, and a model training device of a semiconductor manufacturing process are provided. The yield rate prediction method of a semiconductor manufacturing process includes the following steps. A correspondence relation between a circuit path of a netlist and an integrated circuit layout is established. Several defective points on several stacking layers are obtained. A recognition model is trained to recognize a fault occurring on the circuit path according to the defective points. A probability of the fault occurring on the circuit path of a semiconductor semi-final product is recognized according to the recognition model. A yield rate of the semiconductor semi-final product is predicted according to the probability. | 2022-03-10 |
20220076147 | PROCESS TREE DISCOVERY USING A PROBABILISTIC INDUCTIVE MINER - Systems and methods for splitting an event log into sub-event logs are provided. The event log of a process is received. An activity relation score for a parallel relationship operator is calculated for each respective pair of activities of a plurality of pairs of activities in the event log based on 1) a frequency of occurrence of a first activity of the respective pair of activities between occurrences of a second activity of the respective pair of activities and 2) a frequency of occurrence of the second activity between occurrences of the first activity. A cut location in the event log is determined based on the activity relation scores. The event log is split into the sub-event logs based on the cut location. | 2022-03-10 |
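The abstract above scores a pair of activities as parallel when each occurs interleaved with the other. A simplified, directly-follows variant of such an activity relation score can be sketched as follows; the exact scoring in the application differs, and all names here are illustrative.

```python
def parallel_score(trace_log, a, b):
    """Score how 'parallel' activities a and b look in an event log:
    a high score requires that both directly-follows directions
    (a then b, and b then a) occur with similar frequency."""
    ab = ba = 0
    for trace in trace_log:
        for x, y in zip(trace, trace[1:]):
            if (x, y) == (a, b):
                ab += 1
            elif (x, y) == (b, a):
                ba += 1
    if ab + ba == 0:
        return 0.0
    # Symmetric combination: 1.0 when both directions are equally frequent,
    # 0.0 when only one direction ever occurs (a sequential relation).
    return 2 * min(ab, ba) / (ab + ba)

# Toy log: b and c appear in both orders (parallel-looking),
# while a always precedes b (sequential-looking).
log = [["a", "b", "c"], ["a", "c", "b"], ["a", "b", "c"]]
```

In a process-tree miner, scores like this would be compared across candidate cut locations to decide where to split the event log into sub-event logs.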
20220076148 | INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD - An information processing device has an inputter configured to input analysis target data including a plurality of explanatory variables, a screening processor configured to generate intermediate data with the number of the explanatory variables included in the analysis target data reduced by using a part of the plurality of explanatory variables as objective variables, a first feature amount extractor configured to extract a first feature amount from the intermediate data based on the objective variables, and a similar feature amount extractor configured to extract a similar feature amount from the intermediate data based on a degree of similarity between the explanatory variables included in the intermediate data and the first feature amount. | 2022-03-10 |
20220076149 | COMPUTER-BASED SYSTEMS CONFIGURED FOR ENTITY RESOLUTION AND INDEXING OF ENTITY ACTIVITY - In order to facilitate the entity resolution and entity activity tracking and indexing, systems and methods include receiving first source records from a first database and second source records from a record database. A candidate set of second source records is determined by a heuristic search in the set of second source records. A candidate pair feature vector associated with each candidate pair of first and second source records is generated. An entity matching machine learning model predicts matching first source records for each candidate second source record based on the respective candidate pair feature vector. An aggregate quantity associated with the matching first source records is aggregated from a quantity associated with each first source record, and a quantity index for each candidate second source record is determined based on the aggregate quantities. Each quantity index is displayed to a user. | 2022-03-10 |
20220076150 | METHOD, APPARATUS AND SYSTEM FOR ESTIMATING CAUSALITY AMONG OBSERVED VARIABLES - Method, apparatus and system for estimating causality among observed variables are provided. In response to receiving observed data of mixed observed variables, a mixed causality objective function, being suitable for continuous observed variables and discrete observed variables, is determined, wherein the mixed causality objective function includes a causality objective function for continuous observed variables and a causality objective function for discrete observed variables and the fitting inconsistency is adjusted based on weighted factors of the observed variables. Then, the mixed causality objective function is optimally solved by means of a mixed sparse causal inference, which is suitable for both continuous observed variables and discrete observed variables, using the mixed observed data under a constraint of directed acyclic graph, to estimate causality among the observed variables. | 2022-03-10 |
20220076151 | COMPUTER-IMPLEMENTED SYSTEM AND METHOD HAVING A DIGITAL TWIN AND A GRAPH-BASED STRUCTURE - A computer-implemented system includes at least one first interface configured to receive and send data from a physical object, and a graph-based structure. The graph-based structure includes a conceptual model including a plurality of concepts, each concept mapping a physical object, the concepts being provided with attributes and their respective relations among one another being defined, and a plurality of data instances that have data points of physical objects and are assigned to the respective concepts in the conceptual model. The graph-based structure receives data from the interface and integrates received data into the conceptual model and/or into the data instances. A user interface provides a query and/or definition to the graph-based structure based on an input of a user and outputs a corresponding response. The computer-implemented system includes at least one digital twin that draws data from the graph-based structure and/or provides data to the graph-based structure. | 2022-03-10 |
20220076152 | METHODS AND APPARATUS TO DETERMINE A CONDITIONAL PROBABILITY BASED ON AUDIENCE MEMBER PROBABILITY DISTRIBUTIONS FOR MEDIA AUDIENCE MEASUREMENT - Methods, apparatus, systems to determine a conditional probability based on audience member probability distributions for media audience measurement are disclosed. Disclosed example methods for media audience measurement include determining a first audience probability distribution for a first member of a household and determining a second audience probability distribution for a second member of the household. Disclosed example methods also include calculating probabilities for audience combinations of the first member and the second member of the household based on the first audience probability distribution and the second audience probability distribution. Disclosed example methods further include determining a household audience characteristic probability based on the calculated probabilities of the audience combinations of the household. The household audience characteristic indicates likelihoods of different possible audience compositions of the household for a media event. | 2022-03-10 |
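Under an independence assumption between the two household members (an assumption of this sketch, not a claim about the patented method), combining per-member audience probabilities into probabilities for each possible audience composition is a direct product:

```python
def household_audience_probs(p_member1, p_member2):
    """Combine two members' individual audience probabilities for a media
    event into probabilities for each audience composition of the household,
    assuming the members' viewing is independent."""
    p1, p2 = p_member1, p_member2
    return {
        "both":    p1 * p2,                  # both members in the audience
        "only1":   p1 * (1 - p2),            # only member 1
        "only2":   (1 - p1) * p2,            # only member 2
        "neither": (1 - p1) * (1 - p2),      # empty audience
    }

# Example: member 1 watches with probability 0.6, member 2 with 0.3.
combo = household_audience_probs(0.6, 0.3)
```

The four composition probabilities always sum to 1, and a household audience characteristic (e.g. "at least one adult watching") can be read off by summing the relevant entries.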
20220076153 | SYSTEM AND METHOD FOR OPTIMAL SENSOR PLACEMENT - A controller includes a memory that stores instructions and a processor that executes the instructions. The instructions cause the controller to execute a process that includes receiving sensor data from a first sensor and a second sensor. The sensor data includes a time-series observation representing a first activity and a second activity. The controller generates models for each activity involving progressions through states indicated by the sensor data from each sensor. The controller receives from each sensor additional sensor data including a time-series observation representing the first activity and the second activity. The controller determines likelihoods that the models generated a portion of the additional sensor data and calculates a pair-wise distance between each sensor-specific determined likelihood to obtain calculated distances. The calculated distances for each sensor are grouped, and a relevance of each sensor to each activity is determined by executing a regression model using the grouped calculated distances. | 2022-03-10 |
20220076154 | CONTROL PULSE GENERATION METHOD, SYSTEM, DEVICE AND STORAGE MEDIUM - A control pulse generation method, a system, a device and a storage medium are provided, which are related to the field of quantum computing. The method includes: acquiring a system Hamiltonian; acquiring an initial control pulse of a quantum logic gate included in a parameterized quantum circuit to obtain an initial pulse sequence for a gate sequence formed for all the quantum logic gates in the parameterized quantum circuit, which is obtained through simulation based on the system Hamiltonian; acquiring system state information of the quantum system obtained after applying the initial pulse sequence to the target quantum hardware device; adjusting a parameter of the parameterized quantum circuit based on a relationship between the system state information and target state information needed to be achieved by the target quantum control task, to adjust a pulse parameter of the initial pulse sequence to obtain a target pulse sequence. | 2022-03-10 |
20220076155 | SYSTEMS AND METHODS FOR EFFICIENT PHOTONIC HERALDED QUANTUM COMPUTING SYSTEMS - Embodiments of systems and methods for efficient photonic based quantum circuitry in a heralded system are disclosed. Embodiments may employ filters with a photon pair source to route photons to quantum circuit blocks in a quantum system. | 2022-03-10 |
20220076156 | MACHINE-LEARNED DATABASE RECOMMENDATION MODEL - A central database system trains a machine-learned model based on training data identifying characteristics of account holder entities, characteristics of account provider entities, and relationships between the account holder entities and account provider entities. For a target entity, the central database system then identifies a target set of account provider entities, and applies the trained machine-learned model to identify a subset of the target set of account provider entities. The identified subset of account provider entities are entities that, if recommended to the target entity, are most likely to result in an established relationship with the target entity. A recommendation is then generated for display to the target entity, the recommendation identifying the subset of account provider entities and including interface elements that, if selected by the target entity, cause a notification identifying the target entity to be sent to a corresponding account provider entity. | 2022-03-10 |
20220076157 | DATA ANALYSIS SYSTEM USING ARTIFICIAL INTELLIGENCE - A data analysis system utilizing custom unsupervised machine learning processes over a communications network is disclosed, the system including a repository of data, a web application deployed on a web server, the web application including a data collection interface, wherein the web application is configured for providing a graphical user interface for modifying threshold parameters of a clustering algorithm for clustering the data, executing the clustering algorithm with the threshold parameters that were modified, thereby producing a set of results, providing a graphical user interface for reviewing the set of results of the clustering algorithm and re-executing previous steps if the set of results are not useful and, executing a deep learning algorithm in a deep learning software framework on the set of results, thereby establishing relationships between the data, and providing generalizations of the data. | 2022-03-10 |
20220076158 | SYSTEM AND METHOD FOR A SMART ASSET RECOVERY MANAGEMENT FRAMEWORK - An information handling system receives historical data that includes configuration information and recovery values of recycled assets, and builds a training dataset from a subset of the historical data. The information handling system also builds a validation dataset from another subset of the historical data, and trains a machine learning model on the training dataset to learn the recovery values of the recycled assets. The system also validates the machine learning model based on the validation dataset, tunes a hyperparameter of the machine learning model, and predicts a recovery value of a recyclable asset using the machine learning model utilizing an extreme gradient boosting algorithm. | 2022-03-10 |
20220076159 | ENTITY MODIFICATION OF MODELS - A method, apparatus, system, and computer program product for generating information for modifying a model. Training data comprising entities, operations for the entities, and a number of mesh quality metrics for meshes generated from modified entities resulting from the operations being performed on the entities is selected by a computer system. A set of machine learning models is trained by the computer system using the training data. The set of machine learning models trained with the training data identifies a number of mesh quality metrics for a set of input entities in a model input into the set of machine learning models. | 2022-03-10 |
20220076160 | Embedded Machine Learning for Storage Devices - Methods are provided for tactically deploying machine learning operations within existing storage devices without additional capital investment. Machine learning operations can be processed within a SoC of a storage device as embedded software. Storage devices designed to utilize machine learning methods within existing configurations can include a non-volatile memory for storing data and executable instructions and a processor to conduct a variety of steps. The steps can include executing a plurality of applications stored in the non-volatile memory, and receiving a request for data, including measurements, from at least one of the plurality of applications. The steps can further include determining whether the requested data is suitable for substitution by an inference and subsequently selecting at least one machine learning model for generating a suitable inference. | 2022-03-10 |
20220076161 | COMPUTER SYSTEM AND INFORMATION PROCESSING METHOD - The prediction accuracy of prediction models generated by ensemble learning is enhanced. A computer system configured to generate a prediction model for predicting an event includes: a storage unit configured to store a plurality of training data including a plurality of sample data including values of a plurality of feature variables and a prediction correct value of the event; and a prediction model generating unit configured to generate a plurality of prediction models using the plurality of training data, to thereby generate a prediction model for calculating an ultimate predicted value on the basis of predicted values of the plurality of prediction models. Prediction models generated by applying the same machine learning algorithm to the plurality of training data are different from each other in features of the event that are reflected in the prediction models. | 2022-03-10 |
20220076162 | STORAGE MEDIUM, DATA PRESENTATION METHOD, AND INFORMATION PROCESSING DEVICE - A non-transitory computer-readable storage medium storing a data presentation program that causes at least one computer to execute a process, the process includes acquiring certain data from an estimation target data set that uses an estimation model, based on an estimation result for the estimation target data set; and presenting data obtained by changing the certain data in a direction orthogonal to a direction in which loss of the estimation model fluctuates, in a feature space that relates to feature amounts obtained from the estimation target data set. | 2022-03-10 |
20220076163 | MODEL PARAMETER LEARNING METHOD AND MOVEMENT MODE DETERMINATION METHOD - A learning device | 2022-03-10 |
20220076164 | AUTOMATED FEATURE ENGINEERING FOR MACHINE LEARNING MODELS - Training computer models by generating time-aware training datasets is provided. A system receives a secondary dataset to be combined with a primary dataset for generation of a training dataset. The primary dataset includes a plurality of data records where at least one data record corresponds to a time-of-prediction value corresponding to a timestamp at which at least one data record was used to generate a prediction. The secondary dataset includes a plurality of features where at least one feature corresponds to a timestamp value. The system selects a feature within the secondary dataset with a timestamp that precedes or matches a time-of-prediction value for a corresponding data record within the primary dataset. The system generates the training dataset that includes the primary dataset and the selected feature. The system trains a model using the generated training dataset. | 2022-03-10 |
20220076165 | SYSTEMS AND METHODS FOR AUTOMATING DATA SCIENCE MACHINE LEARNING ANALYTICAL WORKFLOWS - Systems and methods for automating data science machine learning using analytical workflows are disclosed that provide for user interaction and iterative analysis including automated suggestions based on at least one analysis of a dataset. | 2022-03-10 |
20220076166 | SYSTEMS AND METHODS FOR STORING AND RETRIEVING DATA SETS BASED ON TEMPORAL INFORMATION - Described herein are systems and methods for providing data sets from a constantly changing database to a streaming machine learning component. In one embodiment, a data streaming sub-system receives multiple incoming streams of data sets, in which each stream is generated in real-time by one of multiple data sources. The streaming sub-system sends data sets, on-the-fly as they are received, to storage in the memory of a database, in which there is a linkage between the storage and the time of arrival or the time of storage, of the data sets. The database receives, from a machine learning component, a request to receive data sets according to a particular time or time period. In response to such request, the database identifies such data sets according to the particular time or time period and sends them to the machine learning component. | 2022-03-10 |
20220076167 | METHOD FOR MODEL DEPLOYMENT, TERMINAL DEVICE, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - A method for model deployment, a terminal device, and a non-transitory computer-readable storage medium are provided. The method includes the following. A to-be-deployed model and an input/output description file of the to-be-deployed model are obtained. Output verification is performed on the to-be-deployed model based on the input/output description file. If the output verification of the to-be-deployed model passes, an inference service resource is determined from multiple running environments and the inference service resource is allocated to the to-be-deployed model. An inference parameter value of executing an inference service by the to-be-deployed model based on the inference service resource is determined. A resource configuration file and an inference service interface of the to-be-deployed model are generated according to the inference service resource, if the inference parameter value is greater than or equal to a preset inference parameter threshold. | 2022-03-10 |
20220076168 | METHOD FOR RECOGNIZING FOG CONCENTRATION OF HAZY IMAGE - A method for recognizing a fog concentration of a hazy image includes inputting a target hazy image into a pre-trained directed acyclic graph (DAG) support vector machine to acquire a fog concentration of the target hazy image. The fog concentration of the target hazy image is represented based on a prebuilt multi-feature model, and the feature vector in the multi-feature model includes at least one of a color feature, a dark channel feature, an information quantity feature and a contrast feature. | 2022-03-10 |
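A DAG support vector machine like the one named above classifies by walking a decision DAG over one-vs-one binary classifiers, eliminating one candidate class at each node. The sketch below shows that traversal with a toy nearest-center rule standing in for trained binary SVMs on fog features; all names and values are illustrative.

```python
def dagsvm_predict(x, classes, pairwise_predict):
    """Decision-DAG evaluation over one-vs-one classifiers: keep a candidate
    list, and at each step let the classifier for the first and last remaining
    classes eliminate the loser, until a single class is left."""
    remaining = list(classes)
    while len(remaining) > 1:
        a, b = remaining[0], remaining[-1]
        winner = pairwise_predict(a, b, x)   # returns a or b
        remaining.remove(b if winner == a else a)
    return remaining[0]

# Toy stand-in for trained binary SVMs: pick whichever class center is
# nearer to the scalar fog-concentration feature x.
centers = {"light": 0.2, "medium": 0.5, "dense": 0.8}
nearer = lambda a, b, x: a if abs(x - centers[a]) <= abs(x - centers[b]) else b

label = dagsvm_predict(0.75, ["light", "medium", "dense"], nearer)
```

For `k` classes the DAG evaluates only `k - 1` binary classifiers per image, which is why the DAG arrangement is popular for multi-class SVM decisions such as fog-concentration levels.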
20220076169 | FEDERATED MACHINE LEARNING USING LOCALITY SENSITIVE HASHING - Using locality sensitive hashing in federated machine learning. A server receives from clients locality sensitive hash (LSH) vectors. In one embodiment, the server groups the clients into clusters, based on the LSH vectors; the server selects a subset of the clients, by choosing at least one client from each of the clusters. In another embodiment, the server finds a subset of the clients by minimizing the gradient divergence for the subset of the clients. The server receives from selected clients LSH vectors computed based on parameter vectors of updated models, and based on the LSH vectors the server determines whether the updated models are sufficiently different from a model being trained; in response to determining that the updated models are sufficiently different from the model, the server requests the selected clients to send the parameter vectors to the server. | 2022-03-10 |
20220076170 | UTILIZING PROVIDER DEVICE EFFICIENCY METRICS TO SELECT A PROVIDER DEVICE FOR A FUTURE TIME WINDOW - The present disclosure relates to systems, non-transitory computer readable media, and methods that provide graphical user interfaces comprising future transportation options with varying time windows at different transportation values and dynamically analyze the time windows to identify provider devices to fulfill transportation requests based on provider device efficiency metrics. For instance, the disclosed systems can delay selection of a provider device within a future time window utilizing a dynamic threshold provider device efficiency metric. In particular, the disclosed systems can analyze historical distributions of provider devices to generate a transition probability matrix that is utilized to analyze current provider devices and determine a threshold provider device efficiency metric that reflects the likelihood of identifying more efficient matches in the future. The disclosed systems can compare the determined threshold to anticipated efficiency metrics for individual provider devices to generate matches for digital transportation requests. | 2022-03-10 |
20220076171 | SERVER, NON-TRANSITORY COMPUTER READABLE MEDIUM, AND TERMINAL APPARATUS - A server includes a communication interface and a controller. The controller receives a reservation request for a vehicle from a first user via the communication interface, determines an importance level of the reservation request, and generates first reservation information for a vehicle to be used by the first user, the first reservation information including the importance level. In a case in which there is no vehicle to assign to the first reservation information, the controller compares the importance level of the reservation request with importance levels of other reservation information that already exists, and assigns, to the first reservation information, a vehicle that has been assigned to second reservation information which is from among the other reservation information and includes an importance level lower than the importance level of the reservation request. | 2022-03-10 |
20220076172 | FITTING ROOM RESERVATION SYSTEM FOR CONTACT MINIMIZATION IN RETAIL CONTEXT - Methods and systems for implementing a reduced contact fitting room reservation system are described. In example implementations, a customer may digitally select one or more apparel items to be assessed at a physical retail location. The customer may also select the specific location, as well as a notice (e.g., a date and time range) for the time at which the customer is likely to arrive at the retail location. Upon receipt of the selection of apparel items and the notice at the physical retail location, an employee may sanitize a particular fitting room, collect the selected items, and place those items in the fitting room for assessment by the customer. | 2022-03-10 |
20220076173 | METHODS AND SYSTEMS FOR ITINERARY CREATION - Disclosed are methods, systems, and computer-readable medium to perform operations including: receiving a request from a user to create an itinerary; in response, determining a current location of the user, a search radius for the itinerary, interests of the user, and whether the user is interested in meeting other users; generating the itinerary for the user, the itinerary comprising a plurality of categorized activities; generating a graphical user interface (GUI) that displays an activity of the plurality of categorized activities, the activity associated with an activity category; displaying the GUI on a display device of a computing device associated with the user; in response to receiving an input indicating that the user has accepted the activity, displaying a graphical feature that enables the user to upload media associated with the activity; and in response to receiving uploaded media, associating the uploaded media with the activity. | 2022-03-10 |
20220076174 | INTEGRATED MATERIALS AND SERVICES FORECASTING PROCESS AND MODEL FOR ENERGY COMPANY - Systems and methods include a computer-implemented method for determining a future cost of procurement. A first estimate of prices of contract activities of existing contracts of a company is determined. A second estimate of prices of new contracts for new facilities, projects, and plants is determined. A third estimate of the forecasted volume and types of contract activities for other company operations, functions, and organizations is determined. A fourth estimate of the total cost of contracts and contract-related activities for the company is determined using the first, second, and third estimates. A future cost of procurement is determined based on a total forecasted cost of materials and the fourth estimate. | 2022-03-10 |
20220076175 | Run efficiency measuring system, a vehicle and a certificate - A run efficiency measuring system measures both the investment in a vehicle run and the actual run performance of the vehicle. The investment comprises energy cost, tolls, and budgeted time, while the actual run performance comprises travel distance and saved time. Run efficiency is calculated from the measurements separately by driver, by toll run versus free run, and by unit price of energy. The measurement is summarized whenever one of these differences occurs, the day changes, or a predetermined travel distance has been run. The vehicle is in wireless communication with an IC-card-type certificate integrated with a credit card, which is inserted into a card slot of the vehicle to identify the driver for individual run efficiency calculation, authentication as a qualified driver, and ETC payment. Run efficiency data obtained in one vehicle can be carried over to the next vehicle for accumulation by way of the memory of the certificate. | 2022-03-10 |
20220076176 | STANDBY POSITION DETERMINATION DEVICE AND STANDBY POSITION DETERMINATION METHOD - In a standby position determination method, work frequency information is calculated, including a plurality of work frequencies at which a worker works at each of a plurality of production facilities. In addition, floor arrangement information including facility layouts of the plurality of production facilities is acquired. The standby position where the worker waits for work on the floor is determined based on the work frequency information and the floor arrangement information. | 2022-03-10 |
20220076177 | MICROBIAL STRAIN DESIGN SYSTEM AND METHODS FOR IMPROVED LARGE-SCALE PRODUCTION OF ENGINEERED NUCLEOTIDE SEQUENCES - The generation of a factory order includes receiving an expression indicating an operation on a first sequence operand and a second sequence operand. The first sequence operand represents multiple biological sequence parts, and the second sequence operand represents at least one biological sequence part. The expression is evaluated to a sequence specification, which represents modifications to at least one biological sequence, and comprises a data structure representing (a) the first and second sequence operands, (b) one or more first-level operations to be performed on one or more first-level sequence operands, and (c) one or more second-level operations, the execution of at least one of which resolves values of at least one of the first-level sequence operands. A factory order is generated based upon execution of at least one first-level operation and at least one second-level operation. | 2022-03-10 |
20220076178 | Master Network Techniques for a Digital Duplicate - Disclosed herein are techniques and tools for verifying data for semantic correctness and/or verifying data for network correctness. In one respect, a method includes receiving an input defining at least two master nodes and at least one master link, each master node having at least one or more respective data properties populated with master node data and the master link having at least one or more master link data, the master nodes and master link defining a master semantic network, importing source data into a second semantic network, comparing the source data to the master node data and making a first determination that the source data reflects a data relationship defined by the master node data, and based on the first determination, populating the source data into the second semantic network, wherein the source data populated within the second semantic network reflects the data relationship defined by the master node data and the master link data. | 2022-03-10 |
20220076179 | CONTROL DEVICE, SYSTEM, VEHICLE, AND SERVICE SUPPORT METHOD - A control device includes a control unit configured to acquire status data indicating status of a first facility and to determine according to the status indicated by the acquired status data whether to send resources possessed by a second facility different from the first facility from the second facility to the first facility as resources to be used to provide a service at the first facility. | 2022-03-10 |
20220076180 | RESOURCE MANAGEMENT APPARATUS, RESOURCE MANAGEMENT SYSTEM, RESOURCE MANAGEMENT PROGRAM - A resource management server determines whether the reservation of a reserved resource can be extended beyond the reservation end date and time of the resource based on its reservation status and, in a response conveying the result, transmits to an information processing terminal information for displaying "EXTENSION" when the reservation extension is "OK" or information for displaying "MOVEMENT" when the reservation extension is "N/A". | 2022-03-10 |
20220076181 | ESTIMATION METHOD, ESTIMATION DEVICE, AND ESTIMATION PROGRAM - A deriving unit ( | 2022-03-10 |
20220076182 | DATABASE SYSTEM AND METHOD FOR CARBON FOOTPRINT OPTIMIZATION - Various examples are directed to using a web-based analytics system to determine a carbon footprint metric for a product. The web-based analytics system may import indirect carbon footprint data from an environmental compliance system and quantity structure data describing the product from an accounting system table. The web-based analytics system may identify, using the quantity structure data, a first constituent component for the product and import first constituent component carbon footprint data for the first constituent component. The web-based analytics system may identify an activity for the product and import activity carbon footprint data describing a carbon footprint for the activity. The web-based analytics system may determine the carbon footprint metric for the product using the first constituent component carbon footprint data, the activity carbon footprint data, and the indirect carbon footprint data. | 2022-03-10 |
20220076183 | FACILITATING DECISION MAKING IN A BUSINESS PROCESS - Embodiments of the present disclosure relate to facilitating decision making in a business process. In an embodiment, process execution data associated with execution of at least one instance of a business process are obtained. At least one first target attribute available at a first target point is determined based on the process execution data. The first target point is subsequent to a first decision point of a plurality of decision points in the business process, the at least one first target attribute has a contribution in deriving a first expected outcome at the first decision point, and the first target point is a first activity point or a first decision point. A suggestion is provided which suggests incorporating the at least one first target attribute in decision making at the first decision point executed in a further instance of the business process. | 2022-03-10 |
20220076184 | EFFICIENT ONGOING EVALUATION OF HUMAN INTELLIGENCE TASK CONTRIBUTORS - A facility for assessing a crowdsourcing platform contributor is described. By combining a first probability value for the contributor with a source of randomness, the facility determines that a gold HIT should be presented to the contributor. In response, the facility presents a gold HIT to the contributor, and receives a response. Where the response is correct, the facility reduces the first probability value to obtain a second probability value; otherwise the facility increases the first probability value to obtain the second probability value. The facility then determines whether a gold HIT or a regular HIT should next be presented to the contributor by combining the second probability value with a source of randomness. | 2022-03-10 |
20220076185 | PROVIDING IMPROVEMENT RECOMMENDATIONS FOR PREPARING A PRODUCT - A method for providing improvement recommendations to a first group of a plurality of groups for preparing a product. The method comprising receiving data including: preparation data associated with tasks for preparing the product for each of the plurality of groups, result data associated with results for preparation of the product for each of the plurality of groups, and standards data associated with standards for the results for preparation and tasks for preparing the product. The method further comprising analyzing, for the first group, the preparation data and the result data relative to the corresponding standards data; identifying a first result that is deficient relative to the standards; determining one or more of the tasks associated with the first result that are deficient relative to the standards; and sending, to an electronic device of the first group, at least one improvement recommendation for improving the first result. | 2022-03-10 |
20220076186 | EQUIPMENT MANAGEMENT METHOD AND SYSTEM BASED ON RADIO FREQUENCY IDENTIFICATION - An equipment management method based on radio frequency identification comprises binding a first electronic tag and second electronic tags, reading the first electronic tag in a search mode, obtaining an abnormality list of one or more abnormal tags in the second electronic tags according to the first electronic tag, reading one of the second electronic tags in the search mode, and outputting an error signal when the read second electronic tag matches an entry in the abnormality list. The first electronic tag is set on a test machine, the second electronic tags are respectively set on test elements, and the test elements are disposed in the test machine. | 2022-03-10 |
20220076187 | SYSTEMS AND METHODS FOR SKILLS INFERENCE USING A DATASTORE AND MODELS - A system comprising: a skills data store; an employee action data store; at least one hardware processor; and one or more software modules that are configured to, when executed by the at least one hardware processor, retrieve skills data and employee action data from the skills data store and employee action data store, train a classification model, wherein training the classification model comprises performing feature preprocessing, generating an LDA topic vector and TF/IDF Word2Vec similarity scoring, and using AutoML to train ML models, and infer employee skills and levels based on the classification model and employee action data. | 2022-03-10 |
20220076188 | ADAPTIVE TASK COMMUNICATION BASED ON AUTOMATED LEARNING AND CONTEXTUAL ANALYSIS OF USER ACTIVITY - The techniques disclosed herein improve existing systems by automatically identifying tasks from a number of different types of user activity and providing suggestions for the tasks to one or more selected delivery mechanisms. A system compiles the tasks and pushes each task to a personalized task list of a user. The delivery of each task may be based on any suitable user activity, which may include communication between one or more users or a user's interaction with a particular file or a system. The system can identify timelines, performance parameters, and other related contextual data associated with the task. The system can identify a delivery schedule for the task to optimize the effectiveness of the delivery of the task. The system can also provide smart notifications. When a task conflicts with a person's calendar, the system can resolve scheduling conflicts based on priorities of a calendar event. | 2022-03-10 |
20220076189 | AN INFORMATION EXCHANGE AND SYNCHRONIZATION METHOD AND APPARATUS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for exchange and synchronization of status information of a vehicle. One of the methods includes obtaining, by a terminal, status information of the vehicle from equipment on the vehicle and connected to the terminal, sending, by the terminal, the status information to a server, and receiving, by the terminal and from the server, an updated status identifier associated with the terminal, wherein the updated status identifier is determined based on the status information. | 2022-03-10 |
20220076190 | SMART TOTE - The present disclosure is directed to systems and methods that monitor the quality or content included in cannabis products as those products are shipped from a source to a destination. Cannabis products consistent with the present disclosure include cannabis plant biomass, cannabis extracts, or products that contain cannabinoids. A controller at a shipping container may collect sensor data before, during, and after shipment of the cannabinoid containing product. The controller may analyze the sensed data, or the sensed data may be sent to another computer for analysis. This sensor data may be used to identify the quality or content of a cannabis product to see whether the quality or content of that product changed during shipment. The sensor data may also be compared to historical data when identifying preferred extraction processes or preferred settings or parameters to apply when an extraction process is performed. | 2022-03-10 |
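As an illustration of one mechanism in the listing above, the gold-HIT scheduling of application 20220076184 combines a per-contributor probability value with a source of randomness and nudges that probability down after a correct gold-HIT answer and up otherwise. A minimal sketch follows; the multiplicative decay/growth factors and the clamping bounds are assumptions for illustration only, since the abstract specifies neither the update rule nor its parameters.

```python
import random


def next_hit_is_gold(p_gold: float, rng=random.random) -> bool:
    """Combine the contributor's gold probability with a source of randomness."""
    return rng() < p_gold


def update_gold_probability(p_gold: float, answered_correctly: bool,
                            decay: float = 0.5, growth: float = 2.0,
                            p_min: float = 0.05, p_max: float = 1.0) -> float:
    """Lower the gold probability after a correct answer, raise it otherwise.

    The multiplicative factors and [p_min, p_max] clamp are hypothetical
    parameters, not taken from the application.
    """
    p = p_gold * (decay if answered_correctly else growth)
    return max(p_min, min(p_max, p))
```

Under these assumed parameters, a contributor who keeps answering gold HITs correctly sees exponentially fewer of them, while a single incorrect answer quickly restores frequent checking, which matches the abstract's goal of efficient ongoing evaluation.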