Patent application number | Description | Published |
--- | --- | --- |
20140242560 | FACIAL EXPRESSION TRAINING USING FEEDBACK FROM AUTOMATIC FACIAL EXPRESSION RECOGNITION - A machine learning classifier is trained to compute a quality measure of a facial expression with respect to a predetermined emotion, affective state, or situation. The expression may be of a person or an animated character. The quality measure may be provided to a person. The quality measure may also be used to tune the appearance parameters of the animated character, including texture parameters. People may be trained to improve their expressiveness based on the feedback of the quality measure provided by the machine learning classifier, for example, to improve the quality of customer interactions, and to mitigate the symptoms of various affective and neurological disorders. The classifier may be built into a variety of mobile devices, including wearable devices such as Google Glass and smart watches. | 08-28-2014 |
20140310208 | Facilitating Operation of a Machine Learning Environment - Machine learning systems are represented as directed acyclic graphs, where the nodes represent functional modules in the system and edges represent input/output relations between the functional modules. A machine learning environment can then be created to facilitate the training and operation of these machine learning systems. | 10-16-2014 |
20140314284 | DATA ACQUISITION FOR MACHINE PERCEPTION SYSTEMS - Apparatus, methods, and articles of manufacture for obtaining examples that break a visual expression classifier at user devices such as tablets, smartphones, personal computers, and cameras. The examples are sent from the user devices to a server. The server may use the examples to update the classifier, and then distribute the updated classifier code and/or updated classifier parameters to the user devices. The users of the devices may be incentivized to provide the examples that break the classifier, for example, by monetary reward, access to updated versions of the classifier, public ranking or recognition of the user, or a self-rewarding game. The examples may be evaluated using a pipeline of untrained crowdsourcing providers and trained experts. The examples may contain user images and/or depersonalized information extracted from the user images. | 10-23-2014 |
20140314310 | AUTOMATIC ANALYSIS OF RAPPORT - In selected embodiments, one or more wearable mobile devices provide videos and other sensor data of one or more participants in an interaction, such as a customer service or a sales interaction between a company employee and a customer. A computerized system uses machine learning expression classifiers, temporal filters, and a machine learning function approximator to estimate the quality of the interaction. The computerized system may include a recommendation selector configured to select suggestions for improving the current interaction and/or future interactions, based on the quality estimates and the weights of the machine learning approximator. | 10-23-2014 |
20140315168 | FACIAL EXPRESSION MEASUREMENT FOR ASSESSMENT, MONITORING, AND TREATMENT EVALUATION OF AFFECTIVE AND NEUROLOGICAL DISORDERS - Apparatus, methods, and articles of manufacture facilitate diagnosis of affective mental and neurological disorders. Extended facial expression responses to various stimuli are evoked or spontaneously collected, and automatically evaluated using machine learning techniques and automatic facial expression measurement (AFEM) techniques. The stimuli may include pictures, videos, and tasks of various emotion-eliciting paradigms, such as a reward-punishment paradigm, an anger eliciting paradigm, a fear eliciting paradigm, and a structured interview paradigm. The extended facial expression responses, which may include facial expression responses as well as head pose responses and gesture responses, are analyzed using machine learning techniques to diagnose the subject, to estimate the likelihood that the subject suffers from a specific disorder, and/or to evaluate treatment efficacy. | 10-23-2014 |
20140316881 | ESTIMATION OF AFFECTIVE VALENCE AND AROUSAL WITH AUTOMATIC FACIAL EXPRESSION MEASUREMENT - Apparatus, methods, and articles of manufacture facilitate analysis of a person's affective valence and arousal. A machine learning classifier is trained using training data created by (1) exposing individuals to eliciting stimuli, (2) recording extended facial expression appearances of the individuals when the individuals are exposed to the eliciting stimuli, and (3) obtaining ground truth of valence and arousal evoked from the individuals by the eliciting stimuli. The classifier is thus trained to analyze images with extended facial expressions (such as facial expressions, head poses, and/or gestures) evoked by various stimuli or spontaneously obtained, to estimate the valence and arousal of the persons in the images. The classifier may be deployed in sales kiosks, online through mobile and other devices, and in other settings. | 10-23-2014 |
20140321737 | COLLECTION OF MACHINE LEARNING TRAINING DATA FOR EXPRESSION RECOGNITION - Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by the same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by expert(s). The vetted images are then used as positive or negative examples in training machine learning expression classifiers. | 10-30-2014 |
20140328536 | Automatic Analysis of Individual Preferences For Attractiveness - A method facilitates selection of candidate matches for an individual from a database of potential applicants. A filter is calculated for the individual by processing images of people in conjunction with the individual's preferences with respect to those images. Feature sets are calculated for the potential applicants by processing images of the potential applicants. The filter is then applied to the feature sets to select candidate matches for the individual. | 11-06-2014 |
20140328547 | ANONYMIZATION OF FACIAL EXPRESSIONS - A method facilitates training of an automatic facial expression recognition system through distributed anonymization of facial images, thereby allowing people to submit their own facial images without divulging their identities. Original facial images are accessed and perturbed to generate synthesized facial images. Personal identities contained in the original facial images are no longer discernible from the synthesized facial images. At the same time, each synthesized facial image preserves at least part of the emotional expression contained in the corresponding original facial image. | 11-06-2014 |
20150023603 | HEAD-POSE INVARIANT RECOGNITION OF FACIAL EXPRESSIONS - A system facilitates automatic recognition of facial expressions. The system includes a data access module and an expression engine. The expression engine further includes a set of specialized expression engines, a pose detection module, and a combiner module. The data access module accesses a facial image of a head. The set of specialized expression engines generates a set of specialized expression metrics, where each specialized expression metric is an indication of a facial expression of the facial image assuming a specific orientation of the head. The pose detection module determines the orientation of the head from the facial image. Based on the determined orientation of the head and the assumed orientations of each of the specialized expression metrics, the combiner module combines the set of specialized expression metrics to determine a facial expression metric for the facial image that is substantially invariant to the head orientation. | 01-22-2015 |
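The pose-weighted combination described in application 20150023603 (specialized per-pose expression engines, a pose detector, and a combiner) can be sketched as follows. This is a minimal illustration, not the patented implementation: the discrete yaw bins, the softmax-style pose posterior, and the temperature parameter are all assumptions chosen for clarity.

```python
import math

# Assumed discrete head-yaw bins (degrees), one per specialized engine.
# The actual patent does not specify bins; these are illustrative.
POSE_BINS = [-30.0, 0.0, 30.0]

def pose_posterior(yaw, bins=POSE_BINS, temperature=10.0):
    """Softly assign the detected head pose to the discrete pose bins."""
    logits = [-abs(b - yaw) / temperature for b in bins]
    m = max(logits)                      # stabilize the exponentials
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def combine(specialized_metrics, yaw):
    """Blend per-pose expression metrics into a pose-invariant metric.

    specialized_metrics: one expression score per specialized engine,
    each score computed under that engine's assumed head orientation.
    """
    weights = pose_posterior(yaw)
    return sum(w * m for w, m in zip(weights, specialized_metrics))

# Example: smile scores from three specialized engines, head at ~25° yaw;
# the engine assuming 30° yaw dominates the blended result.
print(combine([0.2, 0.5, 0.9], 25.0))
```

Because the weights interpolate smoothly between the nearest pose bins, the combined metric varies gradually as the head turns, which is what makes the output approximately invariant to head orientation.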
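The per-individual preference filter of application 20140328536 can likewise be sketched under a simplifying assumption: that the "filter" is a linear model fit to the individual's binary like/dislike ratings over image feature vectors, applied to candidates by scoring their feature sets. The logistic form, learning rate, and helper names below are illustrative assumptions, not the claimed method.

```python
import math

def train_filter(feature_sets, likes, epochs=500, lr=0.1):
    """Fit a logistic 'preference filter' (w, b) from an individual's ratings.

    feature_sets: feature vectors of the rated images.
    likes: 1 if the individual liked the image, 0 otherwise.
    """
    dim = len(feature_sets[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(feature_sets, likes):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted like-probability
            g = p - y                        # logistic-loss gradient
            b -= lr * g
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w, b

def apply_filter(wb, candidate_features, top_k=3):
    """Rank candidate feature sets by the filter and keep the best matches."""
    w, b = wb
    def score(x):
        z = b + sum(wi * xi for wi, xi in zip(w, x))
        return 1.0 / (1.0 + math.exp(-z))
    return sorted(candidate_features, key=score, reverse=True)[:top_k]
```

Applying the trained filter to the potential applicants' feature sets then reduces to one scoring pass followed by a sort, matching the two-stage structure the abstract describes (calculate the filter, then apply it to candidate feature sets).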