Patent application number | Description | Published |
20100292174 | CASPASE INHIBITORS IN THE TREATMENT OF INFECTION-ASSOCIATED PRETERM DELIVERY - Apoptotic processes induced by infection of, or injury to, fetal and placental tissues have been implicated in preterm delivery. Thus, modulation of apoptosis constitutes a strategy for improving pregnancy outcome in women with intrauterine infections. Caspase inhibitors, including the pancaspase inhibitor Z-VAD-FMK, can be used to prevent apoptosis and, thus, prevent preterm delivery. Accordingly, compositions and methods comprising caspase inhibitors for prevention of preterm delivery are provided. | 11-18-2010 |
20120195875 | DETECTION OF IMMUNE MODULATION RESULTING FROM REDUCED PROTEIN PHOSPHATASE 2A ACTIVITY - The inventors demonstrate herein that measuring the level of protein phosphatase 2A activity (PP2A) is useful for assessing immune modulation and susceptibility to infection in an individual. This invention is especially useful when applied to septic individuals, and individuals with chronic infections. The invention also teaches a method of prevention and treatment of secondary infections, as well as prevention and treatment of cancerous conditions. | 08-02-2012 |
20120238469 | DIAGNOSTIC BIOMARKER TO IDENTIFY WOMEN AT RISK FOR PRETERM DELIVERY - The invention relates to biomarkers associated with preterm delivery. More specifically, the invention provides methods of measuring biomarkers found in women that are at risk for preterm delivery. | 09-20-2012 |
20140051598 | DIAGNOSTIC BIOMARKER TO PREDICT WOMEN AT RISK FOR PRETERM DELIVERY - The invention relates to biomarkers associated with preterm delivery. More specifically, the invention provides methods of measuring biomarkers including but not limited to cytokines, cytokine receptors, cytokine receptor antagonists, chemokines, chemokine receptors, and/or chemokine receptor antagonists found in women that are at risk for preterm delivery. The diagnostic methods may be performed on whole blood. | 02-20-2014 |
Patent application number | Description | Published |
20120216302 | METHODS AND ASSAYS FOR TREATING SUBJECTS WITH SHANK3 DELETION, MUTATION OR REDUCED EXPRESSION - Methods and assays are disclosed for treating subjects with 22q13 deletion syndrome or SHANK3 deletion or duplication, mutation or reduced expression, where the methods comprise administering to the subject insulin-like growth factor 1 (IGF-1), IGF-1-derived peptide or analog, growth hormone, an AMPAkine, a compound that directly or indirectly enhances glutamate neurotransmission, including by inhibiting inhibitory (most typically GABA) transmission, or an agent that activates the growth hormone receptor or the insulin-like growth factor 1 (IGF-1) receptor, or a downstream signaling pathway thereof. | 08-23-2012 |
20140178307 | METHODS AND ASSAYS FOR TREATING SUBJECTS WITH SHANK3 DELETION, MUTATION OR REDUCED EXPRESSION - Methods and assays are disclosed for treating subjects with 22q13 deletion syndrome or SHANK3 deletion or duplication, mutation or reduced expression, where the methods comprise administering to the subject insulin-like growth factor 1 (IGF-1), IGF-1-derived peptide or analog, growth hormone, an AMPAkine, a compound that directly or indirectly enhances glutamate neurotransmission, including by inhibiting inhibitory (most typically GABA) transmission, or an agent that activates the growth hormone receptor or the insulin-like growth factor 1 (IGF-1) receptor, or a downstream signaling pathway thereof. | 06-26-2014 |
Patent application number | Description | Published |
20120116756 | METHOD FOR TONE/INTONATION RECOGNITION USING AUDITORY ATTENTION CUES - In a spoken language processing method for tone/intonation recognition, an auditory spectrum may be determined for an input window of sound and one or more multi-scale features may be extracted from the auditory spectrum. Each multi-scale feature can be extracted using a separate two-dimensional spectro-temporal receptive filter. One or more feature maps corresponding to the one or more multi-scale features can be generated and an auditory gist vector can be extracted from each of the one or more feature maps. A cumulative gist vector may be obtained through augmentation of each auditory gist vector extracted from the one or more feature maps. One or more tonal characteristics corresponding to the input window of sound can be determined by mapping the cumulative gist vector to one or more tonal characteristics using a machine learning algorithm. | 05-10-2012 |
20120146891 | ADAPTIVE DISPLAYS USING GAZE TRACKING - Methods and systems for adapting a display screen output based on a display user's attention. Gaze direction tracking is employed to determine a sub-region of a display screen area to which a user is attending. Display of the attended sub-region is modified relative to the remainder of the display screen, for example, by changing the quantity of data representing an object displayed within the attended sub-region relative to an object displayed in an unattended sub-region of the display screen. | 06-14-2012 |
20120253812 | SPEECH SYLLABLE/VOWEL/PHONE BOUNDARY DETECTION USING AUDITORY ATTENTION CUES - In syllable or vowel or phone boundary detection during speech, an auditory spectrum may be determined for an input window of sound and one or more multi-scale features may be extracted from the auditory spectrum. Each multi-scale feature can be extracted using a separate two-dimensional spectro-temporal receptive filter. One or more feature maps corresponding to the one or more multi-scale features can be generated and an auditory gist vector can be extracted from each of the one or more feature maps. A cumulative gist vector may be obtained through augmentation of each auditory gist vector extracted from the one or more feature maps. One or more syllable or vowel or phone boundaries in the input window of sound can be detected by mapping the cumulative gist vector to one or more syllable or vowel or phone boundary characteristics using a machine learning algorithm. | 10-04-2012 |
20120259554 | TONGUE TRACKING INTERFACE APPARATUS AND METHOD FOR CONTROLLING A COMPUTER PROGRAM - A tongue tracking interface apparatus for control of a computer program may include a mouthpiece configured to be worn over one or more teeth of a user of the computer program. The mouthpiece can include one or more sensors configured to determine one or more tongue orientation characteristics of the user. Other sensors such as microphones, pressure sensors, etc. located around the head, face, and neck, can also be used for determining tongue orientation characteristics. | 10-11-2012 |
20120259638 | APPARATUS AND METHOD FOR DETERMINING RELEVANCE OF INPUT SPEECH - Audio or visual orientation cues can be used to determine the relevance of input speech. The presence of a user's face may be identified during speech during an interval of time. One or more facial orientation characteristics associated with the user's face during the interval of time may be determined. In some cases, orientation characteristics for input sound can be determined. A relevance of the user's speech during the interval of time may be characterized based on the one or more orientation characteristics. | 10-11-2012 |
20120268359 | CONTROL OF ELECTRONIC DEVICE USING NERVE ANALYSIS - An electronic device may be controlled using nerve analysis by measuring a nerve activity level for one or more body parts of a user of the device using one or more nerve sensors associated with the electronic device. A relationship can be determined between the user's one or more body parts and an intended interaction by the user with one or more components of the electronic device using each nerve activity level determined. A control input or reduced set of likely actions can be established for the electronic device based on the relationship determined. | 10-25-2012 |
20120281181 | INTERFACE USING EYE TRACKING CONTACT LENSES - Methods of eye gaze tracking are provided using magnetized contact lenses tracked by magnetic sensors and/or reflecting contact lenses tracked by video-based sensors. Tracking information of contact lenses from magnetic sensors and video-based sensors may be used to improve eye tracking and/or combined with other sensor data to improve accuracy. Furthermore, reflective contact lenses improve blink detection while eye gaze tracking is otherwise unimpeded by magnetized contact lenses. Additionally, contact lenses may be adapted for viewing 3D information. | 11-08-2012 |
20140198382 | INTERFACE USING EYE TRACKING CONTACT LENSES - Methods of eye gaze tracking are provided using magnetized contact lenses tracked by magnetic sensors and/or reflecting contact lenses tracked by video-based sensors. Tracking information of contact lenses from magnetic sensors and video-based sensors may be used to improve eye tracking and/or combined with other sensor data to improve accuracy. Furthermore, reflective contact lenses improve blink detection while eye gaze tracking is otherwise unimpeded by magnetized contact lenses. Additionally, contact lenses may be adapted for viewing 3D information. | 07-17-2014 |
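Several of the applications above (e.g., 20120116756 and 20120253812) describe the same auditory-attention pipeline: an auditory spectrum is filtered with multi-scale two-dimensional spectro-temporal receptive filters, each resulting feature map is reduced to an auditory gist vector, and the gist vectors are augmented into a cumulative gist vector that a machine learning algorithm maps to tonal or boundary characteristics. A minimal sketch of that pipeline follows; the oriented filter design, 4x5 pooling grid, and filter orientations are illustrative assumptions, not details taken from the patents.

```python
import numpy as np

def spectro_temporal_filter(angle_deg, size=9):
    """Oriented, Gaussian-windowed gradient filter used here as a
    stand-in for one spectro-temporal receptive filter."""
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    theta = np.deg2rad(angle_deg)
    u = x * np.cos(theta) + y * np.sin(theta)
    return u * np.exp(-(x**2 + y**2) / (2.0 * (size / 4.0) ** 2))

def feature_map(spectrum, kernel):
    """'Valid'-mode 2D correlation of the auditory spectrum with a filter."""
    kh, kw = kernel.shape
    sh, sw = spectrum.shape
    out = np.empty((sh - kh + 1, sw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(spectrum[i:i + kh, j:j + kw] * kernel)
    return out

def gist_vector(fmap, grid=(4, 5)):
    """Average-pool one feature map into a coarse grid and flatten it."""
    rows = np.array_split(np.arange(fmap.shape[0]), grid[0])
    cols = np.array_split(np.arange(fmap.shape[1]), grid[1])
    return np.array([fmap[np.ix_(r, c)].mean() for r in rows for c in cols])

def cumulative_gist(spectrum, angles=(0, 45, 90, 135)):
    """Concatenate ('augment') the gist vectors of all feature maps."""
    return np.concatenate(
        [gist_vector(feature_map(spectrum, spectro_temporal_filter(a)))
         for a in angles])

# Toy input: a 64-band x 100-frame window standing in for the auditory
# spectrum of one input window of sound.
spectrum = np.random.default_rng(0).random((64, 100))
g = cumulative_gist(spectrum)
print(g.shape)  # 4 filters x (4*5) grid cells -> (80,)
```

In the claimed methods the cumulative gist vector would then be mapped to tone, intonation, or boundary characteristics by a trained classifier; any classifier taking a fixed-length vector could be substituted at that stage.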
Patent application number | Description | Published |
20140112556 | MULTI-MODAL SENSOR BASED EMOTION RECOGNITION AND EMOTIONAL INTERFACE - Features, including one or more acoustic features, visual features, linguistic features, and physical features may be extracted from signals obtained by one or more sensors with a processor. The acoustic, visual, linguistic, and physical features may be analyzed with one or more machine learning algorithms and an emotional state of a user may be extracted from analysis of the features. | 04-24-2014 |
20140114655 | EMOTION RECOGNITION USING AUDITORY ATTENTION CUES EXTRACTED FROM USER'S VOICE - Emotion recognition may be implemented on an input window of sound. One or more auditory attention features may be extracted from an auditory spectrum for the window using one or more two-dimensional spectro-temporal receptive filters. One or more feature maps corresponding to the one or more auditory attention features may be generated. Auditory gist features may be extracted from feature maps, and the auditory gist features may be analyzed to determine one or more emotion classes corresponding to the input window of sound. In addition, a bottom-up auditory attention model may be used to select emotionally salient parts of speech and execute emotion recognition only on the salient parts of speech while ignoring the rest of the speech signal. | 04-24-2014 |
20140149112 | COMBINING AUDITORY ATTENTION CUES WITH PHONEME POSTERIOR SCORES FOR PHONE/VOWEL/SYLLABLE BOUNDARY DETECTION - Phoneme boundaries may be determined from a signal corresponding to recorded audio by extracting auditory attention features from the signal and extracting phoneme posteriors from the signal. The auditory attention features and phoneme posteriors may then be combined to detect boundaries in the signal. | 05-29-2014 |
20150073794 | SPEECH SYLLABLE/VOWEL/PHONE BOUNDARY DETECTION USING AUDITORY ATTENTION CUES - In syllable or vowel or phone boundary detection during speech, an auditory spectrum may be determined for an input window of sound and one or more multi-scale features may be extracted from the auditory spectrum. Each multi-scale feature can be extracted using a separate two-dimensional spectro-temporal receptive filter. One or more feature maps corresponding to the one or more multi-scale features can be generated and an auditory gist vector can be extracted from each of the one or more feature maps. A cumulative gist vector may be obtained through augmentation of each auditory gist vector extracted from the one or more feature maps. One or more syllable or vowel or phone boundaries in the input window of sound can be detected by mapping the cumulative gist vector to one or more syllable or vowel or phone boundary characteristics using a machine learning algorithm. | 03-12-2015 |
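The fusion described in 20140149112, combining per-frame auditory attention cues with phoneme posteriors to detect phone/vowel/syllable boundaries, can be sketched as a weighted combination of the two scores followed by peak picking. The symmetric-KL posterior-change measure, the weight `w`, and the threshold below are assumptions chosen for illustration, not details from the application.

```python
import numpy as np

def posterior_change(posteriors, eps=1e-8):
    """Symmetric KL divergence between consecutive posterior frames;
    large values suggest the phoneme identity just changed."""
    p = np.clip(posteriors[:-1], eps, 1.0)
    q = np.clip(posteriors[1:], eps, 1.0)
    kl = np.sum(p * np.log(p / q) + q * np.log(q / p), axis=1)
    return np.concatenate([[0.0], kl])  # align change score to later frame

def detect_boundaries(attention, posteriors, w=0.5, thresh=0.6):
    """Fuse the two cues and return frame indices of local peaks."""
    change = posterior_change(posteriors)
    change = change / (change.max() + 1e-8)   # normalize to [0, 1]
    fused = w * attention + (1.0 - w) * change
    return [t for t in range(1, len(fused) - 1)
            if fused[t] > thresh
            and fused[t] >= fused[t - 1]
            and fused[t] >= fused[t + 1]]

# Toy example: 10 frames over 2 phoneme classes. The attention score
# spikes at frame 5, and the posteriors flip from class 0 to class 1
# between frames 4 and 5, so both cues agree on a boundary at frame 5.
attention = np.array([0.1, 0.1, 0.2, 0.1, 0.2, 0.9, 0.2, 0.1, 0.1, 0.1])
posteriors = np.vstack([np.tile([0.9, 0.1], (5, 1)),
                        np.tile([0.1, 0.9], (5, 1))])
print(detect_boundaries(attention, posteriors))  # -> [5]
```

Because the two cues are normalized and combined linearly, either one can dominate via `w`; a learned combiner (as the abstract's "combined to detect boundaries" permits) could replace the fixed weighting without changing the surrounding pipeline.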