25th week of 2019 patent application highlights part 59
Patent application number | Title | Published |
20190188850 | MEDICAL IMAGE EXAM NAVIGATION USING SIMULATED ANATOMICAL PHOTOGRAPHS - Methods and systems for generating and displaying a simulated anatomical photograph. One system includes an electronic processor. The electronic processor is configured to receive a first selection from a user, the first selection designating a body part of a patient, receive a second selection from the user, the second selection designating a time period, automatically access imaging information for the patient associated with the first selection and the second selection, automatically generate the simulated anatomical photograph for the body part for the time period based on the imaging information, and display the simulated anatomical photograph to the user within a graphical user interface. | 2019-06-20 |
20190188851 | Methods for Screening and Diagnosing a Skin Condition - Provided herein are digitally implemented methods for performing simultaneous analyses on an object on the skin of an animal body, for example, a human, to classify the object as a skin cancer, an ulcer or neither. The analyses are performed simultaneously on a hand-held imaging device. | 2019-06-20 |
20190188852 | USER INTERFACE FOR DISPLAYING SIMULATED ANATOMICAL PHOTOGRAPHS - Methods and systems for generating and displaying a simulated anatomical photograph based on a medical image generated by an imaging modality. The system comprises an electronic processor configured to receive the medical image, determine an anatomical structure in the medical image, and automatically generate the simulated anatomical photograph based on the anatomical structure, wherein the pixels of the simulated anatomical photograph represent a simulated cross-sectional anatomical photograph of the anatomical structure. The electronic processor is also configured to determine a degree of confidence of a portion of the simulated anatomical photograph, compare the degree of confidence to a threshold, and, in response to the degree of confidence of the portion of the simulated anatomical photograph failing to satisfy the threshold, display the portion of the simulated anatomical photograph differently from another portion of the simulated anatomical photograph. | 2019-06-20 |
20190188853 | CHANGE DETECTION IN MEDICAL IMAGES - A system and method are provided for change detection in medical images. A difference image representing intensity differences between a first medical image and a second medical image is generated. A mixture model is fitted to an intensity distribution of the difference image to identify a plurality of probability distributions which collectively model the intensity distribution. A plurality of intensity ranges is determined as a function of the plurality of probability distributions. Image data of the difference image is labeled by determining into which of the plurality of intensity ranges said labeled image data falls. Accordingly, more accurate change detection is obtained than with known systems and methods. | 2019-06-20 |
20190188854 | METHOD AND DEVICE FOR DETECTING REGION OF INTEREST BASED ON IMAGES - A method and device for detecting region of interest based on images is disclosed. The method includes creating, by a region detection device, at least one histogram associated with at least one storage area within an image captured for a predetermined location. The method further includes identifying, by the region detection device, a plurality of boundaries within the at least one storage area based on the at least one histogram. The method includes detecting, by the region detecting device, at least one region of interest based on the plurality of boundaries. | 2019-06-20 |
20190188855 | PROCESSING DIGITAL IMAGE TO REMOVE UNWANTED PORTION - An image processing method samples the image to generate patches. Feature vectors are extracted from the patches, and the extracted feature vectors are partitioned into clusters, where feature vectors in the same cluster share a common characteristic. A portion of interest in the image is segmented. An aggregate bounding region creation process is carried out by finding the largest segment and creating a bounding box around it; determining which cluster contains the most patches within the bounding box of the segment; and adding the patches of the determined cluster to an aggregate bounding region for the portion of interest. The aggregate bounding region creation process is repeated for each other segment in order of size. The resulting aggregate bounding region contains all the patches associated with the portion of interest. The patches which fall outside the resulting aggregate bounding region are then removed from the image. | 2019-06-20 |
20190188856 | SYSTEMS AND METHODS FOR BLOCK BASED EDGEL DETECTION WITH FALSE EDGE ELIMINATION - Methods and systems which provide object edge image representation generation using block based edgel techniques implementing post edgel detection processing to eliminate false edges are described. Embodiments subdivide image data (e.g., image point clouds) to facilitate separate edgel detection processing of a plurality of sub-blocks of the image data. A false edge elimination algorithm of embodiments is applied in recombining the object edge image representation sub-blocks resulting from the sub-block edgel detection processing to eliminate false edge artifacts associated with use of block based edgel detection. | 2019-06-20 |
20190188857 | SYSTEM, METHOD, AND COMPUTER PROGRAM FOR ADJUSTING IMAGE CONTRAST USING PARAMETERIZED CUMULATIVE DISTRIBUTION FUNCTIONS - A system and method are provided for optimizing histogram cumulative distribution function curves. In use, a first image is received and divided into two or more pixel regions. For at least one of the two or more pixel regions, a first histogram is computed, and based on the first histogram, at least one cumulative distribution function is computed for the at least one of the two or more pixel regions. Next, based on the at least one cumulative distribution function, two or more curve fit coefficients are extracted and interpolated. Further, an interpolated cumulative distribution function is created based on the interpolation and the interpolated cumulative distribution function is applied to the at least one of the two or more pixel regions. | 2019-06-20 |
20190188858 | IMAGE PROCESSING DEVICE AND METHOD THEREOF - Volume data is used for extracting a contour of a measurement object, and measurement information describing anatomical structure useful for diagnosis is acquired from the contour. | 2019-06-20 |
20190188859 | METHOD FOR PERFORMING SEGMENTATION IN AN ORDERED SEQUENCE OF DIGITAL DATA - A method for performing segmentation in an ordered sequence of digital data. | 2019-06-20 |
20190188860 | DETECTION SYSTEM - The present disclosure provides a detection system, which includes an image sensor, a lens device, and a processor. The image sensor is configured to take a first picture of a foreground object and a background object. The lens device is attached to the image sensor and configured to allow the foreground object to form a clear image on the first picture and the background object to form a blurred image on the first picture. The processor is configured to determine the image of the foreground object by analyzing the sharpness of the images in the first picture. | 2019-06-20 |
20190188861 | METHOD AND APPARATUS FOR DETECTING MOTION DEVIATION IN A VIDEO SEQUENCE - Detection of motion deviation in a video sequence is provided. Change grids each comprise elements generated by storing in each element of the change grid an indication of whether there is change between corresponding elements of at least two images. A current direction grid is generated from a pair of change grids by searching for movement of a corresponding segment identified in each change grid, the movement occurring between the locations of the segment in each of the pair of change grids and, storing in elements of the current direction grid a vector corresponding to the movement of the segment. A vector stored in an element of the current direction grid is compared with a reference vector. It is determined whether there is motion deviation in the video sequence in accordance with the comparison. | 2019-06-20 |
20190188862 | A perception device for obstacle detection and tracking and a perception method for obstacle detection and tracking - A perception device, including at least one image sensor configured to detect a plurality of images; an information estimator configured to estimate from each image of the plurality of images a depth estimate, a velocity estimate, an object classification estimate and an odometry estimate; a particle generator configured to generate a plurality of particles, wherein each particle of the plurality of particles comprises a position value determined from the depth estimate, a velocity value determined from the velocity estimate and a classification value determined from the classification estimate; an occupancy hypothesis determiner configured to determine an occupancy hypothesis of a predetermined region, wherein each particle of the plurality of particles contributes to the determination of the occupancy hypothesis. | 2019-06-20 |
20190188863 | Unsupervised Video Segmentation - In one embodiment, a method includes a computing system accessing a first training data comprising a first image and a second image and an associated optical flow estimation. The system may input (1) the first image into a first machine-learning model configured to generate a first output and (2) the optical flow estimation into a second machine-learning model configured to generate a second output. The first output of the first machine-learning model is associated with first image segments of a predetermined number, and the second output of the second machine-learning model is associated with transformations of the predetermined number. The first output, the transformations, and the first image are configured to generate an estimated image. The system trains the first machine-learning model and the second machine-learning model based on at least a comparison of the estimated image and the second image. | 2019-06-20 |
20190188864 | METHOD AND APPARATUS FOR DETECTING DEVIATION FROM A MOTION PATTERN IN A VIDEO - A current motion grid comprising a plurality of elements is generated by storing in each element of the current motion grid an indication of whether there is a change between corresponding elements of at least two images captured from a video sequence. A current motion pattern grid comprising a plurality of elements is generated by firstly searching for a segment consisting of grid elements in which a change has been indicated in the current motion grid and which are neighbouring to one another and, secondly, storing in each element of the segment a value corresponding to a size of the segment. A value of an element of the current motion pattern grid is compared with a threshold value. It is then determined, based on the result of the comparison, whether there is deviation from the motion pattern. | 2019-06-20 |
20190188865 | HAND DETECTION AND TRACKING METHOD AND DEVICE - For each frame of a video, a determination is made whether an image of a hand exists in the frame. When at least one frame of the video includes the image of the hand, locations of the hand in the frames of the video are tracked to obtain a tracking result. A verification is performed to determine whether the tracking result is valid in a current frame of the frames of the video. When the tracking result is valid in the current frame of the video, a location of the hand is tracked in a next frame. When the tracking result is not valid in the current frame, localized hand image detection is performed on the current frame. | 2019-06-20 |
20190188866 | SYSTEM AND METHOD FOR DETECTING INTERACTION - A system and method of detecting an interaction between a plurality of objects. The method comprises receiving tracking information for the plurality of objects in a scene; generating a plurality of frames, each of the plurality of frames comprising an activation for each of the plurality of objects and representing a relative spatial relationship between the plurality of objects in the scene determined from the received tracking information, the frames encoding properties of the objects using properties of the corresponding activations; determining, using a trained neural network, features associated with the plurality of objects from the generated plurality of frames using the activations and the relative spatial relationship between the objects, the features representing changes in the relative spatial relationship between the objects over time relating to the interaction; and detecting time localization of the interaction in the plurality of frames using the determined features. | 2019-06-20 |
20190188867 | MOTION-BASED FACIAL RECOGNITION - An apparatus, system and method of motion-based facial recognition is provided through the establishment of facial motion profiles. The system uses a database, a camera and a processing unit. The database stores a predetermined profile of a default facial motion made by a user having at least one facial landmark. The camera tracks the user's facial motions and captures a facial motion over a duration. The processing unit is connected to the database and the camera. The processing unit establishes the predetermined profile and a comparison profile of the facial motion through a profile establishment process. The processing unit further compares the comparison profile with the predetermined profile to verify the facial motion. | 2019-06-20 |
20190188868 | Method for Displaying Off-Screen Target Indicators in Motion Video - A method for displaying off-screen target indicators in motion video comprising the steps of receiving motion video containing a series of individual video frames, selecting a target object within a selected video frame by choosing selected target object pixel space coordinates, and determining whether the selected target object pixel space coordinates are within the selected video frame. Upon determining that the selected target object pixel space coordinates are within the selected video frame, the method updates a dynamical system model with the target object geographical coordinates, longitudinal target object speed, and latitudinal target object speed. Upon determining that the selected target object pixel space coordinates are not within the selected video frame, the method calculates estimated target object geographical coordinates at time t using the dynamical system model. The method then calculates final values in the video field of view at which to draw a target indicator. | 2019-06-20 |
20190188869 | METHOD AND APPARATUS FOR DETECTING MOTION DEVIATION IN A VIDEO - A current motion grid comprising a plurality of elements is generated by storing in each element of the current motion grid an indication of whether there is a change between corresponding elements of at least two images captured from the video. A motion model comprising a plurality of elements is provided by accumulating information from motion grids obtained from the video. At least one element of the current motion grid is compared to at least one corresponding element of the motion model. It is determined whether there is motion deviation in accordance with the result of the comparison. | 2019-06-20 |
20190188870 | MEDICAL IMAGE REGISTRATION GUIDED BY TARGET LESION - Machine logic (for example, software) for registering multiple medical images, each showing a common lesion, with each other. In performing this registration, registration points are chosen to be both: (i) outside of image portion that is potentially compromised by the lesion (in any of the multiple images); and (ii) as close to the lesion as possible. However, in at least one of the images the extent of the lesion is not known—so, in order to accommodate this uncertainty about the lesion boundaries, lesion predicting machine logic rules are used to predict the size, shape and/or location of the lesion. Machine learning is used to intermittently adjust and improve the lesion predicting machine logic rules. | 2019-06-20 |
20190188871 | ALIGNMENT OF CAPTURED IMAGES BY FUSING COLOUR AND GEOMETRICAL INFORMATION - A method of combining object data captured from an object, the method comprising: receiving first object data and second object data, the first and second object data comprising intensity image data and three-dimensional geometry data of the object; synthesising a first fused image of the object and a second fused image of the object by fusing the respective intensity image data and the respective three-dimensional geometry data of the object illuminated by a directional lighting arrangement produced by a directional light source, the directional lighting arrangement produced by the directional light source being different to a lighting arrangement used to capture at least one of the first object data and the second object data; aligning the first fused image and the second fused image; and combining the first object data and the second object data. | 2019-06-20 |
20190188872 | IMAGE PROCESSING WITH ITERATIVE CLOSEST POINT (ICP) TECHNIQUE - In various embodiments of an image processing method and apparatus, first and second point clouds representing respective images of a scene/object from different viewpoints are obtained. Extracted features points from the first point cloud are matched with extracted feature points from the second point cloud, using depth based weighting, as part of an ICP initiation process. The first and second point clouds are then further ICP processed using results of the initiation process to generate at least one coordinate-transformed point cloud. | 2019-06-20 |
20190188873 | Automatic Correction Method and Device for Structured-light 3D Depth Camera - The present disclosure provides an automatic correction method and device for a structured-light 3D depth camera. When the optical axis of a laser encoded pattern projector and the optical axis of an image reception sensor change, an offset of an input encoded image relative to an image block in a reference encoded image is acquired, and then the position of the reference encoded image is oppositely adjusted upwards or downwards according to an offset change to form a self-feedback regulation closed-loop system between the center of the input encoded image and the center of the reference encoded image, so that the optimal matching relation can always be figured out when the optical axes of the input encoded image and the reference encoded image change drastically. Furthermore, depth calculation can be carried out according to the corrected offset. | 2019-06-20 |
20190188874 | Self-correction Method and Device for Structured Light Depth Camera of Smart Phone - Disclosed are a self-correction method and device for a structured light depth camera of a smart phone. The self-correction device consists of an infrared laser speckle projector, an image receiving sensor, a self-correction module, a depth calculating module, and a mobile phone application processor (AP). The projector projects a speckle pattern, a feature block is set in a reference speckle image, and an input speckle image is acquired by the image receiving sensor. An optimal matching block corresponding to the feature block is searched for in the input speckle image using a similarity criterion to obtain an offset between the feature block and the matching block. Once the optical axis of the projector and the optical axis of the image sensor change relative to each other, the offset changes accordingly; an optimal offset is solved according to a certain rule and the reference speckle image is adjusted in the opposite direction. Thus, the center of the input speckle image and the center of the reference speckle image form a self-feedback adjusting closed-loop system, and an optimal matching relation between the input speckle image and the corrected reference speckle image can always be found even when the optical axes vary widely. | 2019-06-20 |
20190188875 | ADVANCED LENSLESS LIGHT-FIELD IMAGING SYSTEMS FOR ENABLING A WIDE RANGE OF ENTIRELY NEW APPLICATIONS - Continuing a sequence of lensless light-field imaging camera patents beginning in 1999, the present invention adds light-use efficiency, predictive-model design, distance-parameterized interpolation, computational efficiency, arbitrary shaped surface-of-focus, angular diversity/redundancy, distributed image sensing, plasmon surface propagation, and other fundamentally enabling features. Embodiments can be fabricated entirely by printing, transparent/semi-transparent, layered, of arbitrary size/curvature, flexible/bendable, emit light, focus and self-illuminate at zero-separation distance between (planar or curved) sensing and observed surfaces, robust against damage/occultation, implement color sensing without use of filters or diffraction, overlay on provided surfaces, provide color and enhanced multi-wavelength color sensing, wavelength-selective imaging of near-infrared/near-ultraviolet, and comprise many other fundamentally enabling features. Embodiments can be thinner, larger/smaller, more light-use efficient, and higher-performance than recently-popularized coded aperture imaging cameras. Vast ranges of diverse previously-impossible applications are enabled: credit-card cameras/phones, in-body monitoring of healing/disease, advanced biomarker analysis systems, perfect eye-contact video conferencing, seeing fabrics/skin/housings, and manufacturing-monitoring, wear-monitoring, and machine vision capabilities. | 2019-06-20 |
20190188876 | User Pose and Item Correlation - In aspects of user pose and item correlation, a mobile device includes a correlation module that receives an indication that a user of the mobile device has positioned a hand proximate an item based on a pose of the user. The mobile device includes wireless radio systems to scan for wireless-enabled devices corresponding to items tagged with the wireless-enabled devices. The correlation module of the mobile device is implemented to receive the indication that the hand of the user of the mobile device is positioned proximate the item, and initiate a scan from the mobile device for the wireless-enabled devices proximate the user based on the indication of the hand of the user positioned proximate the item. The correlation module receives identifying data of the item that is proximate the hand of the user responsive to the scan of the wireless-enabled devices, and can correlate the item with the user. | 2019-06-20 |
20190188877 | OPTICAL TRACKING SYSTEM AND OPTICAL TRACKING METHOD - The present disclosure provides an optical tracking system for tracking a location and a posture of a marker. The marker is attachable to a target and configured so that a pattern surface formed inside the marker is visible through an optical system formed in an aperture. The system includes a processor configured to determine the posture of the marker based on a first image in which a part of the pattern surface viewed through the aperture is captured at an infinite focal length, and to determine the location of the marker based on a second image and a third image in which outgoing lights emitted through the aperture in different directions are captured at a focal length shorter than the infinite focal length. | 2019-06-20 |
20190188878 | FACE POSITION DETECTING DEVICE - A face position detecting device includes: an image analyzer configured to analyze an image of a face imaged by an imaging unit and to extract a positional relationship between at least two characteristic points in the face; and a face position calculator configured to calculate a position of the face. While an operation screen is being operated, the face position calculator calculates the position of the face in a space according to the positional relationships among the imaged face, the imaging unit, and an operation unit. Otherwise, it calculates the position according to the positional relationship between the at least two characteristic points that the image analyzer extracts from the imaged face, together with the positional relationships between a plurality of face positions and the corresponding characteristic points obtained during the respective operations. | 2019-06-20 |
20190188879 | METHOD AND SYSTEM FOR DETECTING OBSTRUCTIVE OBJECT AT PROJECTED LOCATIONS WITHIN IMAGES - A method for supporting image processing for a movable object includes acquiring one or more images captured by an imaging device borne by the movable object. The imaging device is at least partially blocked by an obstructive object attached to the movable object. The method further includes applying a template to the one or more images to obtain one or more projected locations of the obstructive object within the one or more images and detecting at least a portion of the obstructive object at the one or more projected locations within the one or more images. | 2019-06-20 |
20190188880 | POSITION AND ORIENTATION ESTIMATION APPARATUS, POSITION AND ORIENTATION ESTIMATION METHOD, AND PROGRAM - A three-dimensional detailed position/orientation estimation apparatus includes a first position/orientation estimation unit and a second position/orientation estimation unit that are configured to estimate three-dimensional position and orientation. The first position/orientation estimation unit optimizes six parameters (translations x, y, and z, and rotations φ, γ, and θ) using 3D data, and the second position/orientation estimation unit optimizes only three parameters (translations x and y, and rotation θ) that can be estimated with high accuracy using a 2D image, based on the result of the three-dimensional position/orientation estimation performed by the first position/orientation estimation unit using the 3D data. | 2019-06-20 |
20190188881 | REPRESENTATION OF A COMPONENT USING CROSS-SECTIONAL IMAGES - A method includes receiving a cross-sectional image of a component, including a plurality of pixels representing the component, at a perspective, determining a threshold color value based on color values associated with the plurality of pixels, and setting an updated color value for each pixel of the plurality of pixels based on the threshold color value. The method includes analyzing a set of adjacent pixels of the cross-sectional image that is selected based on a geometric parameter based on an expected geometry of a physical domain of the component, identifying a plurality of adjacent pixels from the set of adjacent pixels that is likely to be associated with the physical domain of the component based on the updated color values associated with the plurality of adjacent pixels, and outputting a representation of the component including the plurality of adjacent pixels that are likely to be associated with the physical domain. | 2019-06-20 |
20190188882 | METHOD AND APPARATUS FOR PROCESSING IMAGE INTERACTION - A method and apparatus for processing an image interaction are provided. The apparatus extracts, using an encoder, an input feature from an input image, converts the input feature to a second feature based on an interaction for an application to the input image, and generates, using a decoder, a result image from the second feature. | 2019-06-20 |
20190188883 | IMAGE COLOR CONVERSION APPARATUS, NON-TRANSITORY COMPUTER-READABLE MEDIUM STORING COMPUTER PROGRAM, AND IMAGE COLOR CONVERSION METHOD - At least one processor configured as hardware generates gamma correction processing information based on an achromatic color input value and an output value acquired by referencing a three-dimensional LUT based on the achromatic color input value. The processor references the three-dimensional LUT based on a representative input value, as a chromatic color input value, to acquire a representative output value; sets color difference signal processing information such that the representative output value is obtained when the representative input value is subjected to gamma correction processing and color difference signal processing; and subjects an image signal to gamma correction processing and color difference signal processing. | 2019-06-20 |
20190188884 | SYSTEM AND METHOD FOR SEGMENTING MEDICAL IMAGE - A method for segmenting a medical image is disclosed. The method includes acquiring an MR image and PET data during a scan of the object, and identifying an air/bone ambiguous region in the MR image, the air/bone ambiguous region including air voxels and bone voxels undistinguished from each other. The method also includes assigning attenuation coefficients to the voxels of the plurality of regions and generating an attenuation map. The method further includes iteratively reconstructing the PET data and the attenuation map to generate a PET image and an estimated attenuation map. The method further includes reassigning attenuation coefficients to the voxels of the air/bone ambiguous region based on the estimated attenuation map, and distinguishing the bone voxels and air voxels in the air/bone ambiguous region. | 2019-06-20 |
20190188885 | MODEL REGULARIZED MOTION COMPENSATED MEDICAL IMAGE RECONSTRUCTION - A medical imaging system for model-regularized, motion-compensated medical image reconstruction. | 2019-06-20 |
20190188886 | Selective Editing of Brushstrokes in a Digital Graphical Image Based on Direction - An image editing application selectively edits a brushstroke in an image, based on a direction of the brushstroke. In some cases, the brushstroke is selectively edited based on a similarity between the direction of the brushstroke and a direction of an editing tool. Additionally or alternatively, directional data for each pixel of the brushstroke is compared to directional data for each position of the editing tool. Data structures capable of storing directional information for one or more of a pixel, a brushstroke, or a motion of an editing tool are disclosed. | 2019-06-20 |
20190188887 | TEXT BORDER TOOL AND ENHANCED CORNER OPTIONS FOR BACKGROUND SHADING - Disclosed herein are various techniques for more precisely and reliably (a) positioning top and bottom border edges relative to textual content, (b) positioning left and right border edges relative to textual content, (c) positioning mixed edge borders relative to textual content, (d) positioning boundaries of a region of background shading that fall within borders of textual content, (e) positioning borders relative to textual content that spans columns, (f) positioning respective borders relative to discrete portions of textual content, (g) positioning collective borders relative to discrete, abutting portions of textual content, (h) applying stylized corner boundaries to a region of background shading, and (i) applying stylized corners to borders. | 2019-06-20 |
20190188888 | Imaging Using a Biological-Lifeform Visual Perspective - During an imaging technique, an electronic device (such as a cellular telephone) may acquire a first image of a biological lifeform using a first imaging sensor in the electronic device. Then, the electronic device may identify the biological lifeform in the first image. For example, the biological lifeform may be an animal or an insect. Moreover, the electronic device may acquire a second image of an object using one of the first imaging sensor or a second imaging sensor in the electronic device. Next, the electronic device may generate a modified second image of the object based on one or more visual effects associated with the identified biological lifeform. In particular, the one or more visual effects may be based on an approximate or an estimated visual perspective of the biological lifeform. | 2019-06-20 |
20190188889 | USING LAYER BLOCKS TO APPLY EFFECTS TO IMAGE CONTENT - Applying an image effect within an image processing application. An image processing application receives a selection of an image effect to be applied to an image. The image includes image layers, each of which has a layer property and is created based on an application of a first effect. The application selects a template from a set of predefined templates. The selection is based on the image effect and the template. The template includes template layers. The application matches each of the image layers to a corresponding template layer having a template property corresponding to the layer property. The application determines from the matching that no conflicts exist between the image layers and the template. The application merges the image layers with the template layers and applies the image effect. | 2019-06-20 |
20190188890 | MITIGATION OF BIAS IN DIGITAL REALITY SESSIONS - Embodiments of the present invention disclose a method, computer program product, and system for identifying biases of one or more users during a shared augmented reality session and modifying the display. A first login to an augmented reality session is detected. A first set of biases associated with the first user is generated. Visual data is received from a device associated with the first user profile. A masked overlay is generated and displayed to the first user based on the received visual data and the first set of biases. A second login to the augmented reality session is received. A second set of biases associated with the second login is generated. In response to analyzing a generated heat map, a positivity score is calculated. The first set of biases and the second set of biases are monitored for changes. The masked overlay is displayed. | 2019-06-20 |
20190188891 | VIRTUAL VEHICLE SKIN - A system and method of enabling an augmented reality/virtual reality (AR/VR) device to augment image or video data using a virtual vehicle skin, wherein the method is carried out by vehicle electronics included within a vehicle, the method including: establishing a connection to the AR/VR device using a wireless communications device included in the vehicle electronics of the vehicle; and sending a virtual vehicle skin response to the AR/VR device via the established connection, wherein the AR/VR device is configured to obtain background video and to display the virtual vehicle skin over the obtained background video so that the virtual vehicle skin overlays a portion of the vehicle, and wherein the virtual vehicle skin response includes the virtual vehicle skin or virtual vehicle skin access information that can be used by the AR/VR device to derive or obtain the virtual vehicle skin. | 2019-06-20 |
20190188892 | SYSTEM AND METHOD FOR CREATING CUSTOMIZED CHARACTERS AND SELECTIVELY DISPLAYING THEM IN AN AUGMENTED OR VIRTUAL REALITY DISPLAY - A system and method of creating customized characters and selectively displaying them in an electronic display, such as an augmented reality or virtual reality display is provided. A digital character may be provided by a character provider for customization by others using the system. Such customizations may be instantiated in user devices that provide electronic displays. Instantiation of the custom digital character may be conditioned on one or more trigger conditions, which may be specified by the character customizer. For example, a digital character customized using the system may be conditioned on triggering events in the real-world or in a virtual world. When a relevant triggering condition is satisfied at a user device, the custom character (i.e., information for instantiating the custom character) may be transmitted to that user device. In this manner, the system may push custom characters to user devices that satisfy the triggering condition. | 2019-06-20 |
20190188893 | SIMULATED REALITY DATA REPRESENTATION SYSTEM AND METHOD - A data representation system is provided. The data representation system includes a display device and a non-transitory memory containing computer-readable instructions operable to create a simulated reality. The data representation system also includes a processor configured to process the instructions for carrying out steps for creating the simulated reality. The system accesses source data having four or more attributes. The system converts a portion of the source attributes to representative attributes. The system accesses the representative attributes and forms an agglomerated asset that is based on a default asset. Each of the representative attributes forms a distinct characteristic of the asset. | 2019-06-20 |
20190188894 | MULTI-USER AND MULTI-SURROGATE VIRTUAL ENCOUNTERS - A virtual reality encounter system is described. The system includes a first surrogate supporting at least one first camera that captures first image data from a first physical location and a second surrogate supporting at least one second camera that captures second image data from the first physical location. In aliasing substitution processing, a computing system including a processor receives the first image data, detects an image of the second surrogate in the first image data, and replaces the image data of the second surrogate in the first physical location with image data of a user in the first physical location to form a transformed image that substitutes the image data of the user for the image data of the second surrogate. | 2019-06-20 |
20190188895 | CONTEXTUAL-BASED RENDERING OF VIRTUAL AVATARS - Examples of systems and methods for rendering an avatar in a mixed reality environment are disclosed. The systems and methods may be configured to automatically scale an avatar or to render an avatar based on a determined intention of a user, an interesting impulse, environmental stimuli, or user saccade points. The disclosed systems and methods may apply discomfort curves when rendering an avatar. The disclosed systems and methods may provide a more realistic interaction between a human user and an avatar. | 2019-06-20 |
20190188896 | GRAPHICS PROCESSING - A graphics processing system can divide a render output into plural larger patches, with each larger patch encompassing plural smaller patches. A rasteriser of the system tests a larger patch against a primitive to be processed to determine if the primitive covers the larger patch. When it is determined that the primitive only partially covers the larger patch, the larger patch is sub-divided into plural smaller patches and at least one of the smaller patches is re-tested against the primitive. Conversely, when it is determined that the primitive completely covers the larger patch, the larger patch is output from the rasteriser in respect of the primitive for processing by a subsequent stage of the graphics processing system. The system can provide efficient, hierarchical processing of primitives, whilst helping to prevent the output of the rasteriser from becoming blocked. | 2019-06-20 |
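The subdivision scheme this abstract describes can be sketched in a few lines. This is an illustrative approximation, not the patented implementation: coverage is tested with a simple axis-aligned box overlap, and a partially covered patch is split into four smaller patches and re-tested; the function names, the quadtree-style split, and the box test are all assumptions.

```python
# Hedged sketch of hierarchical patch rasterization: fully covered patches are
# emitted whole; partially covered patches are subdivided and re-tested.
def classify(patch, prim):
    """Return 'outside', 'full', or 'partial' coverage of patch by prim.
    Both are axis-aligned boxes (x0, y0, x1, y1); geometry is an assumption."""
    px0, py0, px1, py1 = patch
    qx0, qy0, qx1, qy1 = prim
    if px1 <= qx0 or qx1 <= px0 or py1 <= qy0 or qy1 <= py0:
        return "outside"
    if qx0 <= px0 and qy0 <= py0 and px1 <= qx1 and py1 <= qy1:
        return "full"
    return "partial"

def rasterise(patch, prim, min_size=1):
    """Yield covered patches, subdividing partially covered ones recursively."""
    kind = classify(patch, prim)
    if kind == "outside":
        return
    if kind == "full" or (patch[2] - patch[0]) <= min_size:
        yield patch          # output the patch for the subsequent stage
        return
    x0, y0, x1, y1 = patch
    mx, my = (x0 + x1) // 2, (y0 + y1) // 2
    for sub in ((x0, y0, mx, my), (mx, y0, x1, my),
                (x0, my, mx, y1), (mx, my, x1, y1)):
        yield from rasterise(sub, prim, min_size)
```

For example, a primitive covering the whole 8×8 patch yields the patch unsplit, while one covering only a quadrant triggers a single level of subdivision.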
20190188897 | Method for Rendering an Augmented Object - The present disclosure describes a new method for rendering ray traced reflections, applied to augmented reality and virtual reality. The intersections between secondary rays and scene geometry are computed in large groups of rays, gaining high speed and lowering the computational complexity. Its reduced power consumption makes it suitable for the consumer class of computing devices. | 2019-06-20 |
20190188898 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING SYSTEM - The disclosure proposes an image processing apparatus for rendering a maximum intensity projection image by extracting, as objects to be rendered, only voxels having a high brightness value in three-dimensional volume data and using the brightness values of these voxels for the corresponding pixels. | 2019-06-20 |
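The projection described in this abstract, restricting the rendering to high-brightness voxels, can be sketched as follows. This is a minimal illustration under assumed names, not the apparatus claimed: a fixed brightness threshold masks the volume, and each output pixel takes the maximum remaining value along the projection axis.

```python
import numpy as np

# Hedged sketch of a thresholded maximum intensity projection: only voxels at
# or above `threshold` are treated as objects to be rendered; each pixel then
# takes the brightest surviving voxel along the chosen axis.
def max_intensity_projection(volume, threshold=0.0, axis=0):
    """Project the brightest above-threshold voxel along `axis` to each pixel."""
    vol = np.asarray(volume, dtype=float)
    masked = np.where(vol >= threshold, vol, 0.0)  # drop low-brightness voxels
    return masked.max(axis=axis)
```

With a 2×2×2 volume containing a single bright voxel, the projection places that brightness at the corresponding pixel; raising the threshold above it blanks the image.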
20190188899 | DYNAMIC CULLING OF MATRIX OPERATIONS - An output of a first one of a plurality of layers within a neural network is identified. A bitmap is determined from the output, the bitmap including a binary matrix. A particular subset of operations for a second one of the plurality of layers is determined to be skipped based on the bitmap. Operations are performed for the second layer other than the particular subset of operations, while the particular subset of operations are skipped. | 2019-06-20 |
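The culling idea in this abstract admits a compact sketch. This is a hedged illustration, assuming ReLU-style activations: zeros in one layer's output form the binary bitmap, and the matching multiplications in the next layer's matrix product contribute nothing and are skipped. The function name and the column-wise skipping strategy are assumptions, not the claimed design.

```python
import numpy as np

# Hedged sketch of bitmap-driven culling: a binary matrix derived from the
# previous layer's output marks which columns of the next layer's weight
# matrix can be skipped entirely.
def culled_matmul(weights, activations):
    """Compute weights @ activations while skipping bitmap-culled columns."""
    bitmap = activations != 0              # binary matrix from the layer output
    cols = np.flatnonzero(bitmap)          # operations to keep
    out = np.zeros(weights.shape[0])
    for j in cols:                         # perform only non-culled products
        out += weights[:, j] * activations[j]
    return out
```

The result matches the dense product exactly whenever the skipped activations are zero, which is the premise of the technique.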
20190188900 | DATA ACQUISITION AND ENCODING PROCESS FOR MANUFACTURING, INSPECTION, MAINTENANCE AND REPAIR OF A STRUCTURAL PRODUCT - A method is provided that includes generating a report template usable to produce a report to convey information about a structural product or one or more of a plurality of parts thereof, and rendering for display a model of the structural product observed from a home viewpoint. Input is received to navigate the model to a part selected from the plurality of parts, the model at the navigated viewpoint including information for the part selected from the plurality of parts. A command string is generated that includes information specifying the navigated viewpoint. The command string is output to a recorder configured to record the command string on at least one of a physical medium or to an electronic document in which the command string is thereby included, the command string capable of being machine-read to automatically restore the model at the navigated viewpoint. | 2019-06-20 |
20190188901 | INTER-VEHICLE COOPERATION FOR VEHICLE SELF IMAGING - Method and apparatus are disclosed for inter-vehicle cooperation for vehicle self imaging. An example vehicle includes an inter-vehicle communication module and an infotainment head unit. The infotainment head unit determines a pose of the vehicle and, in response to receiving an input to generate a composite image, broadcasts a request for images of the vehicle. The request message includes the pose. The infotainment head unit also generates the composite image of the vehicle based on the images and displays the composite image on a display. | 2019-06-20 |
20190188902 | DETERMINING PIXEL VALUES USING REFERENCE IMAGES - An apparatus includes: an object data storage section that stores polygon identification data for polygons of an object to be displayed; a reference image data storage section that stores data of reference images each representing an image when a space including the object to be displayed is viewed from one of a plurality of prescribed reference viewing points, and further stores polygon identification data corresponding to each reference image; a viewing point information acquisition section that acquires information relating to a viewing point; a projection section that represents on a plane of a display image the position and shape of an image of the object when the space is viewed from the viewing point; a pixel value determination section that determines the values of pixels constituting the image of the object in the display image, using the values of the pixels representing the same image in one or more of the plurality of reference images; and an output section that outputs the data of the display image; wherein for a subject pixel, the pixel value determination section is arranged to determine the position on a reference image corresponding to the position of the subject pixel on the object, obtain the polygon identification corresponding to the determined position on the reference image, compare the obtained polygon identification with the polygon identification of the polygon corresponding to the position of the subject pixel on the object; and select the reference image if the compared polygon identifications match. | 2019-06-20 |
20190188903 | METHOD AND APPARATUS FOR PROVIDING VIRTUAL COMPANION TO A USER - The present disclosure provides a method and an apparatus for providing a virtual companion to a user by using mixed reality technology. The method includes: receiving from the user a summon indication for summoning a character; summoning the character in response to the summon indication, wherein the character is a virtualized object of a real person; controlling the summoned character to imitate an action or an expression of the real person; receiving from the user an interaction indication with regard to the character; matching the interaction indication against a database to acquire corresponding reaction data of the character; and updating a presentation of the character based on the reaction data. The implementation of the present disclosure may realize interaction between humans and the virtual world, which may improve the efficiency and effectiveness of interactivity. | 2019-06-20 |
20190188904 | POINT CLOUD DATA HIERARCHY - One embodiment is directed to a system for presenting views of a very large point data set, comprising: a storage system comprising data representing a point cloud comprising a very large number of associated points; a controller operatively coupled to the storage cluster and configured to automatically and deterministically organize the point data into an octree hierarchy of data sectors, each of which is representative of one or more of the points at a given octree mesh resolution; and a user interface through which a user may select a viewing perspective origin and vector, which may be utilized to command the controller to assemble an image based at least in part upon the selected origin and vector, the image comprising a plurality of data sectors pulled from the octree hierarchy. | 2019-06-20 |
20190188905 | POINT CLOUD DATA HIERARCHY - A system comprises a storage system comprising data representing a point cloud comprising a very large number of associated points; a controller operatively coupled to the storage cluster and configured to organize the data into an octree hierarchy of data sectors, each of which is representative of one or more of the points at a given octree mesh resolution; and a user interface through which a user may select a viewing perspective origin and vector, which may be utilized to command the controller to assemble an image based at least in part upon the selected origin and vector, the image comprising a plurality of data sectors pulled from the octree hierarchy, the plurality of data sectors being assembled such that sectors representative of points closer to the selected viewing origin have a higher octree mesh resolution than that of sectors representative of points farther away from the selected viewing origin. | 2019-06-20 |
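The distance-dependent resolution assignment in this abstract can be sketched briefly. This is an illustrative approximation under assumed parameters, not the claimed system: the octree level (finest at `max_level`) drops by one each time a sector's distance from the selected viewing origin doubles, so nearer sectors get a higher mesh resolution than farther ones.

```python
import math

# Hedged sketch of octree level-of-detail selection: sectors closer to the
# viewing origin are assigned a finer (higher) octree mesh resolution.
def sector_resolution(sector_center, view_origin, max_level=8):
    """Map a sector's distance from the viewing origin to an octree level;
    max_level is the finest resolution, 0 the coarsest. Thresholds are
    illustrative assumptions."""
    d = math.dist(sector_center, view_origin)
    # lose one level of resolution each time the distance doubles past 1 unit
    level = max_level - max(0, int(math.log2(max(d, 1.0))))
    return max(level, 0)
```

A sector one unit away resolves at the finest level, one four units away two levels coarser, and very distant sectors bottom out at the coarsest level.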
20190188906 | Search And Rescue Unmanned Aerial System - The subject matter of this specification can be embodied in, among other things, a computer-implemented method for creating three-dimensional models that includes capturing, at a first location, a two-dimensional first image of a three-dimensional scene, capturing, at a second location, a two-dimensional second image of the three-dimensional scene, measuring a range distance from at least one of the first location and the second location to a closest object in the scene, determining a depth map based on differences between the first image and the second image, determining a three-dimensional point cloud based on the range distance, the depth map, and at least one of the first image and the second image, and providing the three-dimensional point cloud as a three-dimensional model of the three-dimensional scene. | 2019-06-20 |
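One step in this abstract, combining the measured range distance with the image-derived depth map, can be sketched as follows. This is a hedged illustration under assumed names, not the claimed method: the relative depth map recovered from the two images is anchored to metric units by scaling its nearest point to the measured range to the closest object.

```python
import numpy as np

# Hedged sketch of anchoring a relative depth map to a measured range: the
# map's nearest point is scaled to match the range distance to the closest
# object, giving metric depths usable for the point cloud.
def scale_depth_map(relative_depth, range_to_closest):
    """Scale a relative (unitless) depth map so its minimum equals the
    measured range distance to the closest object in the scene."""
    rel = np.asarray(relative_depth, dtype=float)
    nearest = rel.min()
    if nearest <= 0:
        raise ValueError("relative depths must be positive")
    return rel * (range_to_closest / nearest)
```

For instance, if the nearest pixel of the relative map has value 1 and the measured range is 3 m, every depth is multiplied by 3.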
20190188907 | Assembling Primitive Data into Multi-view Primitive Blocks in a Graphics Processing System - Methods and apparatus for generating a data structure for storing primitive data for a number of primitives and vertex data for a plurality of vertices, wherein each primitive is defined with reference to one or more of the plurality of vertices. The vertex data comprises data for more than one view, such as a left view and a right view, with vertex parameter values for a first group of vertex parameters being stored separately for each view and vertex parameter values for a second, non-overlapping group of vertex parameters being stored only once and used when rendering either or both views. | 2019-06-20 |
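The storage split this abstract describes, per-view values for one group of vertex parameters and a single shared value for the other, can be sketched as a small data structure. The field names and lookup method are assumptions for illustration, not the claimed format.

```python
from dataclasses import dataclass, field

# Hedged sketch of multi-view vertex storage: view-dependent parameters keep a
# value per view, while view-independent parameters are stored once and used
# when rendering either or both views.
@dataclass
class MultiViewVertex:
    per_view: dict = field(default_factory=dict)   # param -> {view: value}
    shared: dict = field(default_factory=dict)     # param -> value, stored once

    def value(self, param, view):
        """Resolve a parameter for a given view, falling back to shared data."""
        if param in self.per_view:
            return self.per_view[param][view]
        return self.shared[param]
```

A vertex whose position differs between the left and right views but whose texture coordinate does not would store the position twice and the texture coordinate once.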
20190188908 | METHOD AND SYSTEM FOR CONVERTING CONTENT FROM TWO-DIMENSIONS TO THREE-DIMENSIONS - Disclosed are a method and a system to convert two-dimensional ("2D") content into three-dimensional ("3D") content. The method includes receiving the 2D content and analyzing the 2D content to obtain a first set of data related to the 2D content. Further, the method includes determining a 2D-to-3D content conversion logic based on a result of the content analysis. Further, the method includes generating the 3D content by applying the determined logic to the received 2D content. Further, the method includes providing the generated 3D content. The 2D content includes at least one of an image, a Computer Aided Design (CAD) drawing, and a web image content. The first set of data includes vector parameters of one or more objects present in the content, text related to the one or more objects present in the content, and dimensional data of the one or more objects present in the content. | 2019-06-20 |
20190188909 | MARKING A COMPUTERIZED MODEL OF A CARDIAC SURFACE - Described embodiments include a system that includes an electrical interface and a processor. The processor is configured to receive, via the electrical interface, an electrocardiographic signal from an electrode within a heart of a subject, to ascertain a location of the electrode in a coordinate system of a computerized model of a surface of the heart, to select portions of the model responsively to the ascertained location, such that the selected portions are interspersed with other, unselected portions of the model, and to display the model such that the selected portions, but not the unselected portions, are marked to indicate a property of the signal. Other embodiments are also described. | 2019-06-20 |
20190188910 | SYSTEMS AND METHODS FOR TELEPORTING A VIRTUAL POSITION OF A USER IN A VIRTUAL ENVIRONMENT TO A TELEPORTABLE POINT OF A VIRTUAL OBJECT - Teleporting a virtual position of a user in a virtual environment to a teleportable point of a virtual object shown in the virtual environment. Particular systems and methods identify a virtual object, determine teleportable points of the virtual object, detect intent by the user to relocate a virtual position of the user to a first teleportable point of the teleportable points, and relocate the virtual position of the user to the first teleportable point. | 2019-06-20 |
20190188911 | PRESENTING AN AUGMENTED REALITY INTERFACE - One or more computing devices, systems, and/or methods for presenting augmented reality (AR) interfaces are provided. For example, a first object corresponding to a representation of content in an AR interface may be presented. Responsive to receiving a selection of the first object, a first graphical object corresponding to the content may be presented. An AR interface comprising a real time view of a camera of the device may be presented. A first graphical representation of the first graphical object may be presented overlaid on the real time view of the camera of the device. A second graphical representation of the first graphical object comprising the graphical modification and a second graphical object associated with information corresponding to the content may be presented overlaid on the real time view of the camera of the device. Responsive to receiving a selection of the second graphical object, the information may be presented. | 2019-06-20 |
20190188912 | AUGMENTED REALITY VEHICLE USER INTERFACE - A system and method of operating a vehicle using virtual vehicle controls, wherein the method includes: capturing image or video data from an area within an interior of the vehicle; sending a virtual vehicle control graphics request to the vehicle; receiving a virtual vehicle control graphics response from the vehicle, wherein the virtual vehicle control graphics response includes virtual vehicle control graphics; and presenting the captured image or video data and the virtual vehicle control graphics on a display of the AR/VR device such that the virtual vehicle control graphics are presented over the captured image or video data, wherein the virtual vehicle control graphics include one or more vehicle-user interface components. | 2019-06-20 |
20190188913 | Lighting And Internet Of Things Design Using Augmented Reality - An augmented reality-based lighting design method includes displaying, by an augmented reality device, a real-time image of a target physical area on a display screen. The method further includes displaying, by the augmented reality device, a lighting fixture 3-D model on the display screen in response to a user input, where the lighting fixture 3-D model is overlaid on the real-time image of the target physical area. The method also includes displaying, by the augmented reality device, a lighting pattern on the display screen overlaid on the real-time image of the target physical area, wherein the lighting pattern is generated based on at least photometric data associated with the lighting fixture 3-D model. | 2019-06-20 |
20190188914 | TERMINAL DEVICE, SYSTEM, PROGRAM, AND METHOD - A terminal device includes a memory configured to store computer-readable instructions and a processor configured to perform the computer-readable instructions. The processor is configured to: cause a real space camera in a real space to capture a real space image including a real player; cause a virtual space camera in a virtual space to capture a virtual space image including a virtual object, the real player performing an instruction input to the virtual object; create a composite image that is formed by composing part of the virtual space image stored in the memory and a player image in the real space image stored in the memory; and output the composite image to a display so that the display is configured to display the composite image. | 2019-06-20 |
20190188915 | Method and apparatus for representing a virtual object in a real environment - The invention relates to a method for representing a virtual object in a real environment, having the following steps: generating a two-dimensional image of a real environment by means of a recording device, ascertaining a position of the recording device relative to at least one component of the real environment, segmenting at least one area of the real environment in the two-dimensional image on the basis of non-manually generated 3D information for identifying at least one segment of the real environment in distinction to a remaining part of the real environment while supplying corresponding segmentation data, and merging the two-dimensional image of the real environment with the virtual object or, by means of an optical, semitransparent element, directly with reality with consideration of the segmentation data. The invention permits collisions of virtual objects with real objects that occur upon merging with a real environment to be represented in a manner largely true to reality. | 2019-06-20 |
20190188916 | METHOD AND APPARATUS FOR AUGMENTING REALITY - A method and apparatus for augmenting reality are disclosed. A specific embodiment of the method includes: recognizing an object in an image collected by a camera of a user terminal, and determining a space occupied by the recognized object in a world coordinate system; determining a position in the world coordinate system corresponding to an augmented reality tag of the recognized object, and determining a superimposition position in the collected image corresponding to the position in the world coordinate system corresponding to the augmented reality tag of the recognized object, the position in the world coordinate system corresponding to the augmented reality tag being associated with the space occupied by the recognized object in the world coordinate system; and superimposing, in an augmented reality, the augmented reality tag at the superimposition position. A direct association between the superimposed information and the object in the real world is thereby established. | 2019-06-20 |
20190188917 | Lighting And Internet Of Things Design Using Augmented Reality - An augmented reality-based lighting design method includes displaying, by an augmented reality device, a real-time image of a target physical area on a display screen. The method further includes displaying, by the augmented reality device, a lighting fixture 3-D model on the display screen in response to a user input, wherein the lighting fixture 3-D model is overlaid on the real-time image of the target physical area. The method also includes determining, by the augmented reality device, illuminance values for locations in the target physical area, where the illuminance values indicate illuminance levels of a light to be provided by a lighting fixture represented by the lighting fixture 3-D model. The method further includes displaying, by the augmented reality device, the illuminance values on the display screen overlaid on the real-time image of the target physical area. | 2019-06-20 |
20190188918 | SYSTEMS AND METHODS FOR USER SELECTION OF VIRTUAL CONTENT FOR PRESENTATION TO ANOTHER USER - Systems, methods, and computer-readable media for user selection of virtual content for presentation in a virtual environment via a first user device and a second user device are provided. The method can include storing information associated with sensitivities of the first user device and sensitivities of the second user device. The sensitivities can indicate one or more conditions at one or more of the first user device and the second user device that affect presentation of portions of the virtual environment. The method can include detecting a selection of content at the first user device for presentation via the second user device, determining whether the content can be presented at the second user device based on the sensitivities of the second user device, and generating a first version of the content that complies with the sensitivities of the second user device. | 2019-06-20 |
20190188919 | LAYERED 3-D IMAGES FOR AUGMENTED REALITY PROCESSING - A method for creating and storing a captured image and associated spatial data and augmented reality (AR) data in a file that allows subsequent manipulation and processing of AR objects is disclosed. In embodiments, one or more frames are extracted from a video stream, along with spatial information about the camera capturing the video stream. The one or more frames are analyzed in conjunction with the spatial information to calculate a point cloud of depth data. The one or more frames are stored in a file in a first layer, and the point cloud is stored in the file in a second layer. In some embodiments, one or more AR objects are stored in a third layer. | 2019-06-20 |
20190188920 | SURFACE AWARE LENS - Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program, and a method for rendering three-dimensional virtual objects within real-world environments. Virtual rendering of a three-dimensional virtual object can be altered appropriately as a user moves around the object in the real-world through utilization of a redundant tracking system comprising multiple tracking sub-systems. Virtual object rendering can be performed with respect to a reference surface in a real-world three-dimensional space depicted in a camera view of a mobile computing device. | 2019-06-20 |
20190188921 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND IMAGE DISPLAY SYSTEM - A wide angle image to be displayed on a screen fixed on the head or the face of a user is processed. When a display view angle reaches a boundary of an original wide angle image | 2019-06-20 |
20190188922 | DEVICE FOR MEASURING PASSING TIME OF RUNNER - To provide a device for measuring a time of passage that is capable of sensing passage of the torso more accurately than a photoelectric cell while maintaining a photoelectric cell's ease of use, passage of a runner is sensed by broadly illuminating the upper portion of the runner's body with infrared light, visible light, and/or other such electromagnetic waves, and by detecting light reflected from large part(s) of the runner's body. | 2019-06-20 |
20190188923 | DUAL-STAGE, SEPARATED GAS/FLUID SHOCK STRUT SERVICING MONITORING SYSTEM USING ONE PRESSURE/TEMPERATURE SENSOR - A dual-stage, separated gas/fluid shock strut arrangement includes a dual-stage, separated gas/fluid shock strut, a pressure/temperature sensor mounted to the primary gas chamber, a stroke sensor, and a monitoring system, comprising a recorder configured to receive a plurality of sensor readings from at least one of the pressure/temperature sensor and the stroke sensor, a landing detector configured to detect a landing event based upon a stroke sensor reading received from the stroke sensor, and a health monitor configured to determine a volume of oil in the oil chamber, a volume of gas in the primary gas chamber, and a volume of gas in the secondary gas chamber. | 2019-06-20 |
20190188924 | Monitoring and Diagnostics System for a Machine with Rotating Components - A monitoring and diagnostics system for a machine having a plurality of rotating components includes a powertrain with a plurality of rotating components and a vibration sensor. The vibration sensor includes a vibration sensor element and a sensor controller. The vibration sensor is disposed adjacent one of the plurality of rotating components. The vibration sensor element is configured to generate raw vibration data indicative of vibrations of the vibration sensor element. The sensor controller is configured to access a vibration threshold, access a time threshold, receive the raw vibration data from the vibration sensor element, generate condition indicators based upon the raw vibration data, compare the condition indicators to the vibration threshold, and, if the condition indicators exceed the vibration threshold for a time exceeding the time threshold, transmit a predetermined amount of raw vibration data to a remote system remote from the machine. | 2019-06-20 |
20190188925 | Low-Power Wireless for Vehicle Diagnostics and Reporting - Novel tools and techniques for low-power wireless for vehicle diagnostics and reporting are provided. A system includes a wireless charging station, a diagnostic server, a network device, a low-power wireless device, and a low-power wireless transceiver. The network device may be in communication with the diagnostic server. The low-power wireless device may include a processor, and non-transitory computer readable media including instructions executable by the processor to establish a low-power wireless connection, obtain on-board information from the vehicle via the low-power wireless connection, transmit the on-board information to the diagnostic server, and receive a report from the diagnostic server based on the on-board information. | 2019-06-20 |
20190188926 | SERVICE MANAGEMENT SYSTEM, NON-TRANSITORY COMPUTER-READABLE MEDIUM STORING SERVICE MANAGEMENT PROGRAM, AND SERVICE MANAGEMENT METHOD - A service management system includes a storage unit and a service management unit. The storage unit stores service information associated with a vehicle ID assigned to each vehicle and a user ID assigned to each user of the vehicle. The service information includes content of a service to be provided to the user of the vehicle. The service management unit reads from the storage unit the service information associated with both the vehicle ID assigned to the vehicle and the user ID assigned to the user and manages the content of the service to be provided to the user of the vehicle when the user of the vehicle uses the service. | 2019-06-20 |
20190188927 | ABNORMALITY DIAGNOSTIC METHOD AND ABNORMALITY DIAGNOSTIC DEVICE FOR DRIVING FORCE CONTROL SYSTEM - An abnormality diagnostic method is provided for a driving force control system in which an automatic transmission is interposed between an engine and a drive wheel, and a target driving force that is transmitted to the drive wheel is calculated based on a driver's output request. The automatic transmission and the engine are controlled based on the target driving force. The abnormality diagnostic method includes calculating a target engine torque based on the target driving force, detecting an actual engine torque of the engine, and detecting an intake temperature of the engine. Upon determining that the automatic transmission has been operating normally, the abnormality diagnostic method determines that an abnormality of the driving force control system caused by the engine exists upon determining that a difference between the target engine torque and the actual engine torque has exceeded a predetermined threshold value. | 2019-06-20 |
20190188928 | METHOD AND SYSTEM FOR VALIDATING STATES OF COMPONENTS OF VEHICLE - A system and a method for validating states of one or more components of a vehicle are provided. The system includes circuitry that receives an event and determines an associated priority level based on a first mapping. The circuitry identifies the one or more components associated with the priority level based on a second mapping. The circuitry identifies one or more parameters associated with each of the one or more components, and generates a query message. The query message is a function of either the one or more components or the one or more parameters. The circuitry transmits the query message to the vehicle, and receives the values of the one or more parameters from the vehicle. The circuitry validates the state of the one or more components by matching the values of the one or more parameters to corresponding stored values of the one or more parameters. | 2019-06-20 |
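The two-mapping lookup and the value-matching validation described above could be sketched as follows (the event names, components, and parameter values are invented for illustration):

```python
# First mapping: event -> priority level.
PRIORITY_MAP = {"collision_warning": "high"}
# Second mapping: priority level -> components to validate.
COMPONENT_MAP = {"high": ["brakes", "airbag"]}
# Stored reference values for each component's parameters.
STORED = {"brakes": {"pad_wear": "ok"}, "airbag": {"armed": True}}

def validate(event: str, reported: dict) -> dict:
    """Validate component states by matching reported parameter
    values against the corresponding stored values."""
    priority = PRIORITY_MAP[event]
    results = {}
    for component in COMPONENT_MAP[priority]:
        results[component] = reported.get(component) == STORED[component]
    return results
```

In the filing the reported values arrive from the vehicle in response to a query message; here they are simply passed in as a dictionary.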
20190188929 | METHOD AND APPARATUS FOR PROCESSING ALARM SIGNALS - A method for processing alarm signals is disclosed in which a multiplicity of selected alarm signals are first compared with a predefined alarm pattern. The multiplicity of the selected alarm signals is determined from the alarm signals. At least one response signal is then transmitted if the selected alarm signals match the predefined alarm pattern. | 2019-06-20 |
20190188930 | DRIVE RECORDER - A drive recorder includes a video recording unit that records a video captured by a camera mounted on a vehicle in association with the time of day, an abnormal event detection unit that detects an abnormal event, a time period determination unit that determines a time period that includes the time of day when the abnormal event is detected and that has a length determined based on a traveling condition other than the speed of the vehicle, a video extraction unit that extracts a video of the determined time period from the video recording unit, and a data saving unit that records, or transmits to an external device, a file including the extracted video as an erasure prohibited object. | 2019-06-20 |
20190188931 | Mobile Device Attendance Verification - Provided are systems and methods of attendance verification using mobile electronic devices (or “mobile devices”). In some embodiments, a person's biometric data is acquired and verified locally by a mobile device associated with the person while the mobile device is located in a geographic region associated with an event, and attendance data, including an indication of the verification of the biometric data along with a unique identifier of the mobile device, such as an international mobile equipment identity (IMEI) of the mobile device, is transmitted to an attendance server that makes a record of the person's attendance of the event based on the attendance data. | 2019-06-20 |
20190188932 | Mobile Device Attendance Verification - Provided are systems and methods of attendance verification using mobile electronic devices (or “mobile devices”). In some embodiments, a person's biometric data is acquired and verified locally by a mobile device associated with the person while the mobile device is located in a geographic region associated with an event, and attendance data, including an indication of the verification of the biometric data along with a unique identifier of the mobile device, such as an international mobile equipment identity (IMEI) of the mobile device, is transmitted to an attendance server that makes a record of the person's attendance of the event based on the attendance data. | 2019-06-20 |
20190188933 | Mobile Device Attendance Verification - Provided are systems and methods of attendance verification using mobile electronic devices (or “mobile devices”). In some embodiments, a person's biometric data is acquired and verified locally by a mobile device associated with the person while the mobile device is located in a geographic region associated with an event, and attendance data, including an indication of the verification of the biometric data along with a unique identifier of the mobile device, such as an international mobile equipment identity (IMEI) of the mobile device, is transmitted to an attendance server that makes a record of the person's attendance of the event based on the attendance data. | 2019-06-20 |
20190188934 | Low-Power Wireless for Access Control - Novel tools and techniques for low-power wireless access control are provided. A system includes an access control server, network device, and a low-power wireless device. The low-power wireless device may include a low-power wireless transceiver configured to communicate with a mobile device, a processor, and non-transitory computer readable media comprising instructions executable by the processor to establish a low-power wireless connection with the mobile device, obtain authorization information from the mobile device, transmit the authorization information to the access control server, receive an access determination from the access control server, and perform a secure function based on the access determination. | 2019-06-20 |
20190188935 | CLOUD-BASED WIRELESS COMMUNICATION SYSTEM AND METHOD - A server may receive information from a computer, store the information in a database at the server, determine a reader device that is configured to receive and/or process credential information and/or a unique identifier and to receive the information based on an analysis of the information, select one or more mobile devices to deliver the information to the reader, and transmit data to the one or more mobile devices where the data includes at least a portion of the information. | 2019-06-20 |
20190188936 | METHODS AND APPARATUS TO WIRELESSLY INTERLOCK DOORS - Methods and apparatus to wirelessly interlock doors are disclosed. A door system includes a user interface to receive interlock configuration data input from a user, the interlock configuration data to define an interlock condition to be satisfied before a first door is to undergo an operation, the interlock condition associated with a current state of a second door. The door system includes a first wireless transceiver to receive a signal from a second wireless transceiver associated with the second door. The door system also includes a door operation controller to at least one of (1) implement the operation of the first door in response to a request when the current state of the second door satisfies the interlock condition, (2) ignore the request, or (3) not execute the operation of the first door in response to the request when the current state of the second door does not satisfy the interlock condition. | 2019-06-20 |
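The controller behavior in this abstract is a simple conditional gate: execute the first door's operation only when the second door's reported state satisfies the configured interlock condition, otherwise ignore the request. A minimal sketch (the state names and function signature are illustrative):

```python
def handle_request(first_door_op, second_door_state: str,
                   required_state: str = "closed") -> str:
    """Execute the first door's operation only when the second door's
    current state satisfies the interlock condition; otherwise ignore
    the request."""
    if second_door_state == required_state:
        first_door_op()
        return "executed"
    return "ignored"
```

In the filing the second door's state would arrive over the wireless transceiver link rather than as a function argument.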
20190188937 | SYSTEMS AND METHODS TO CONTROL LOCKING AND UNLOCKING OF DOORS USING POWERLINE AND RADIO FREQUENCY COMMUNICATIONS - An electronic door lock system automatically controls locking and unlocking of a door. A door lock controller interfaces with an electronic door lock, sends messages including door lock data to a local receiver, and receives messages including door lock commands from the local receiver. In turn, the local receiver interfaces with a hub device through a mesh network. The hub receives the door lock data, applies a rule set to make lock operation decisions, and sends messages, which may comprise commands to operate the door lock, through the mesh network to the local receiver. The local receiver decodes the messages and passes the commands to the door lock controller to automatically control the electronic door lock. | 2019-06-20 |
20190188938 | DUAL MODE, PASSCODE STORAGE, WIRELESS SECURE LOCK - A dual mode, passcode storage, wireless secure lock is disclosed. In one embodiment, a key is provided that includes a key coil, a first key data processing device (DPD), a second key DPD, and a key radio transceiver. The first key DPD is configured to receive a first authentication code (AC) from a lock via the key coil. The first key DPD is configured to compare the first AC with data in memory of the first key DPD. The first key DPD is configured to activate the second key DPD in response to determining the first AC compares equally to data in memory of the first key DPD. The second key DPD is configured to transmit a second AC to the lock via the key radio transceiver after the second key DPD is activated. | 2019-06-20 |
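The two-stage handshake described above, in which the coil-side processor gates activation of the radio-side processor, might be modeled as follows (class and method names are illustrative, not from the filing):

```python
class Key:
    """Sketch of the dual-mode key: the first DPD compares the lock's
    first authentication code (AC) against stored data and, only on a
    match, activates the second DPD, which can then send the second AC
    over the radio transceiver."""

    def __init__(self, stored_ac1: bytes, ac2: bytes):
        self.stored_ac1 = stored_ac1   # data in the first DPD's memory
        self.ac2 = ac2                 # second AC held by the second DPD
        self.radio_active = False      # second DPD starts inactive

    def receive_via_coil(self, ac1: bytes) -> None:
        """First DPD: compare AC1; activate the second DPD on a match."""
        if ac1 == self.stored_ac1:
            self.radio_active = True

    def transmit_via_radio(self):
        """Second DPD: transmit AC2 only after activation."""
        return self.ac2 if self.radio_active else None
```

Keeping the radio stage dormant until the coil stage succeeds is what makes the lock "dual mode": the low-power coil exchange acts as a gate on the radio exchange.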
20190188939 | VEHICLE MANAGEMENT SYSTEM, VEHICLE MANAGEMENT METHOD, COMPUTER-READABLE NON-TRANSITORY STORAGE MEDIUM - A vehicle management system includes an acquisition unit configured to acquire a manipulation list including one or a plurality of vehicle manipulations set when electronic key data of a vehicle is issued to a user device, a detection unit configured to detect that the vehicle manipulation included in the manipulation list has been performed when the vehicle is being used using the electronic key data of the user device, and a notification unit configured to notify an owner terminal related to an owner of the vehicle that the vehicle manipulation included in the manipulation list has been performed, after the detection of the detection unit. | 2019-06-20 |
20190188940 | System And Method For Preventing Pilferage And Tampering Of A Lock From A Vehicle - An embodiment herein provides a system for preventing pilferage and tampering of a lock from a vehicle. | 2019-06-20 |
20190188941 | ARCHITECTURE FOR ACCESS MANAGEMENT - Disclosed are techniques that use devices with corresponding identity wallet applications that execute on an electronic processor device of the devices; the identity wallets store identity information and encrypt the stored identity information. A distributed ledger system, and a broker system that interfaces to the wallet and the distributed ledger, are used for various information exchange cases pertaining to access to facilities. In particular, disclosed is a registration process to register an identity wallet with a facility. | 2019-06-20 |
20190188942 | REFRIGERATION SYSTEM - A refrigeration system having a housing which houses at least one dispenser for storing and dispensing packaged beverages and a fluid disperser. The dispenser includes a loading end, a dispensing end, and a travel path extending from the loading end to the dispensing end, wherein the dispensing end is positioned lower than the loading end. The fluid disperser disperses a fluid over at least a portion of the travel path of the dispenser. | 2019-06-20 |
20190188943 | EXTENSION GAMING AND SERVICES FOR MOBILE DEVICES IN A GAMING ENVIRONMENT - A player is enabled to initiate on a mobile device a multiplayer game via a community gaming panel for multiplayer game play. Additional players are enabled to access the multiplayer game on additional mobile devices to enroll in the multiplayer game. The multiplayer game is funded by wagers from the player, additional players, a gaming venue, or a third party funding source. An outcome is determined for the multiplayer game based on one or more plays of the multiplayer game by the player and/or the additional players. The mobile device and the additional mobile devices are remotely connected to a server associated with multiple gaming machines. The player and the additional authenticated players are each associated with a respective gaming machine. | 2019-06-20 |
20190188944 | Smart Bin Lottery Ticket Dispenser with Integrated Controller - A lottery ticket dispenser array includes a frame and a plurality of separate bins contained within the frame. Each bin includes a housing having a front side that faces a purchaser in operational use of the dispenser array, an opposite back side, and an internal space for receipt of a supply of interconnected lottery tickets, wherein each lottery ticket contains a code printed thereon. Each bin has an electronic drive mechanism that dispenses the lottery tickets therefrom. A controller is in communication with each of the drive mechanisms to initiate a dispense sequence upon receipt of a ticket dispense command from the controller. The controller is configured on the frame and is variably positionable relative to the frame between different operational positions. | 2019-06-20 |
20190188945 | Lottery Method - A method for conducting lotteries includes distributing the payout to a larger group of participants and creating more interest by directing payouts to smaller geographical areas. For example, a current payout of $1,000,000 may be converted to, for example, 10 payouts of $100,000 or 100 payouts of $10,000 to produce more winners. The history of lottery winners has shown that many winners of very large payouts ultimately receive little long-term benefit, so a smaller payout to a larger number of participants is likely to show much greater positive results and generate great enthusiasm for purchasing lottery tickets. Further, the zip code lottery may be reduced to one or a small number of zip codes to produce local winners and motivate participant enthusiasm. The method may be incorporated into the PowerBall® or Mega Millions® lottery games or as a new lottery game, e.g. Guaranteed 100K, or Lottery Guaranteed $100K. | 2019-06-20 |
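The payout arithmetic in the abstract ($1,000,000 becoming 10 payouts of $100,000 or 100 payouts of $10,000) is a simple even split, which could be expressed as:

```python
def split_payout(total: int, winners: int) -> int:
    """Divide a single jackpot evenly across a larger number of
    winners, as in the abstract's examples."""
    if total % winners:
        raise ValueError("payout does not divide evenly among winners")
    return total // winners
```

The function name and the even-division requirement are illustrative; the filing does not specify how non-divisible totals are handled.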
20190188946 | SYSTEMS, APPARATUS, AND METHODS FOR A GAME UTILIZING A WHEEL WITH DYNAMICALLY RESIZABLE GAME SPACES - A bingo game system provides for new features and functionality for a bingo game platform, including a dynamically resizable wheel segment for a wheel-based game. | 2019-06-20 |
20190188947 | METHODS FOR SELLING PRE-PRINTED ONLINE LOTTERY TICKETS - A system and method of selling pre-printed lottery tickets for random draw lotteries through the retailer's POS without the use of additional lottery hardware. Pre-printed lottery tickets allow a consumer to purchase a lottery ticket for a subsequently occurring draw by including that ticket in their shopping basket. The pre-printed lottery ticket may be purchased as any other common product through the point of sale terminal. No specialized hardware such as a lottery terminal, printer, or dispensing device is necessary. | 2019-06-20 |
20190188948 | GAMING SYSTEM AND METHOD - Gaming systems and methods for online gaming are disclosed. A method includes providing and streaming a video of the gaming machine, which is to be remotely played by a user; displaying the video at a client station of the user; receiving from the client station input data including data indicative of the user's interaction with controls of the gaming machine that appear in the video; and activating the gaming machine based on the input data. Activation of the gaming machine based on the input data can be based on mapping data associating regions of the video with controls of the machine that appear at those regions of the video, and includes processing the input data by using the mapping data to thereby map the one or more regions of the video with which the user had interacted to respective controls of the gaming machine that appear in those regions. | 2019-06-20 |
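The mapping-data lookup described above, associating video regions with the machine controls that appear in them, could be sketched as a rectangle hit test (the region coordinates and control names are invented for illustration):

```python
# Illustrative mapping data: video region (x, y, width, height) -> control.
MAPPING = [
    ((10, 400, 80, 40), "spin"),
    ((100, 400, 80, 40), "bet_up"),
]

def control_at(x: int, y: int):
    """Map a user's click inside the streamed video to the gaming-machine
    control appearing in that region, or None if no control is there."""
    for (rx, ry, rw, rh), control in MAPPING:
        if rx <= x < rx + rw and ry <= y < ry + rh:
            return control
    return None
```

The resolved control name would then drive activation of the remote gaming machine; that server-side step is outside this sketch.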
20190188949 | SNAP-AND-CLICK DISPLAY - Examples disclosed herein relate to an enhanced installation for a video display, such as a video display of an electronic gaming device. The enhanced installation may allow for easier installation and/or removal of the video display via one or more biased connectors, such as snap-and-click connectors. | 2019-06-20 |