Patent application number | Description | Published |
--- | --- | --- |
20100251101 | Capture and Display of Digital Images Based on Related Metadata - Methods and apparatuses receive a plurality of images and metadata associated with each respective image, determine a viewpoint of one of the images of the plurality, the viewpoint representing the location and orientation of the image capture device when the image was captured, and create a view including the plurality of images, wherein placement of the images is based on each image's respective metadata and the determined viewpoint. | 09-30-2010 |
20100309225 | IMAGE MATCHING FOR MOBILE AUGMENTED REALITY - Embodiments of a system and method for mobile augmented reality are provided. In certain embodiments, a first image is acquired at a device. Information corresponding to at least one second image matched with the first image is obtained from a server. A displayed image on the device is augmented with the obtained information. | 12-09-2010 |
20110081090 | METHODS AND APPARATUS FOR RETRIEVING IMAGES FROM A LARGE COLLECTION OF IMAGES - An image retrieval program (IRP) may be used to query a collection of digital images. The IRP may include a mining module to use local and global feature descriptors to automatically rank the digital images in the collection with respect to similarity to a user-selected positive example. Each local feature descriptor may represent a portion of an image based on a division of that image into multiple portions. Each global feature descriptor may represent an image as a whole. A user interface module of the IRP may receive input that identifies an image as the positive example. The user interface module may also present images from the collection in a user interface in a ranked order with respect to similarity to the positive example, based on results of the mining module. Query concepts may be saved and reused. Other embodiments are described and claimed. | 04-07-2011 |
20110304694 | SYSTEM AND METHOD FOR 3D VIDEO STABILIZATION BY FUSING ORIENTATION SENSOR READINGS AND IMAGE ALIGNMENT ESTIMATES - Methods and systems for generating high-accuracy estimates of the 3D orientation of a camera within a global frame of reference. Orientation estimates may be produced from an image-based alignment method. Other orientation estimates may be taken from a camera-mounted orientation sensor. The alignment-derived estimates may be input to a high pass filter. The orientation estimates from the orientation sensor may be processed and input to a low pass filter. The outputs of the high pass and low pass filters are fused, producing a stabilized video sequence. | 12-15-2011 |
20120075342 | AUGMENTING IMAGE DATA BASED ON RELATED 3D POINT CLOUD DATA - Embodiments of the invention describe processing a first image data and 3D point cloud data to extract a first planar segment from the 3D point cloud data. This first planar segment is associated with an object included in the first image data. A second image data is received, the second image data including the object captured in the first image data. A second planar segment related to the object is generated, where the second planar segment is geometrically consistent with the object as captured in the second image data. This planar segment is generated based, at least in part, on the second image data, the first image data and the first planar segment. | 03-29-2012 |
20120173549 | METHODS AND APPARATUS FOR RETRIEVING IMAGES FROM A LARGE COLLECTION OF IMAGES - A processing system may receive an example image for use in querying a collection of digital images. The processing system may use local and global feature descriptors to perform a content-based image comparison of the digital images with the example image, to automatically rank the digital images with respect to similarity to the example image. A local feature descriptor may represent a portion of the contents of a digital image. A global feature descriptor may represent substantially all of the contents of that digital image. The global feature descriptor may be content based, not keyword based. Intermediate and final classifiers may be used to perform the automatic ranking. Different intermediate classifiers may generate intermediate relevance metrics with respect to different modalities. The final classifier may use results from the intermediate classifiers to produce a final relevance metric for the digital images. Other embodiments are described and claimed. | 07-05-2012 |
20130002649 | MOBILE AUGMENTED REALITY SYSTEM - Embodiments of the invention relate to systems, apparatuses and methods to provide image data, augmented with related data, to be displayed on a mobile computing device. Embodiments of the invention display a live view augmented with information identifying an object amongst other objects. Embodiments of the invention may utilize other related data, such as 3D point cloud data, image data and location data related to the object, to obtain a specific location of an object within the live view. Embodiments of the invention may further display a live view with augmented data three-dimensionally consistent with the position and orientation of the image sensor of the mobile computing device. | 01-03-2013 |
20140028794 | VIDEO COMMUNICATION WITH THREE DIMENSIONAL PERCEPTION - Generally, this disclosure provides methods and systems for real-time video communication with three dimensional perception image rendering through generated parallax effects based on identification, segmentation and tracking of foreground and background layers of an image. The system may include an image segmentation module configured to segment a current local video frame into a local foreground layer and a local background layer and to generate a local foreground mask based on an estimated boundary between the local foreground layer and the local background layer; a face tracking module configured to track a position of a local user's face; a background layer estimation module configured to estimate a remote background layer; and an image rendering module configured to render a 3D perception image based on the estimated remote background layer, the current remote video frame and the remote foreground mask. | 01-30-2014 |
20140233847 | NETWORKED CAPTURE AND 3D DISPLAY OF LOCALIZED, SEGMENTED IMAGES - Systems, devices and methods are described including receiving a source image having a foreground portion and a background portion, where the background portion includes image content of a three-dimensional (3D) environment. A camera pose of the source image may be determined by comparing features of the source image to image features of target images of the 3D environment, and the camera pose may be used to segment the foreground portion from the background portion, generating a segmented source image. The resulting segmented source image and the associated camera pose may be stored in a networked database. The camera pose and segmented source image may be used to provide a simulation of the foreground portion in a virtual 3D environment. | 08-21-2014 |
20140247325 | GUIDED IMAGE CAPTURE - Examples are disclosed for determining a suggested camera pose or suggested camera settings for a user to capture one or more images. In some examples, the suggested camera pose or suggested camera settings may be based on an indication of the user's interest and gathered information associated with the user's interests. The user may be guided to adjust an actual camera pose or actual camera settings to match the suggested camera pose or suggested camera settings. | 09-04-2014 |
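The fusion described in application 20110304694 above (high-pass filtering the image-alignment orientation estimates, low-pass filtering the orientation-sensor estimates, and summing the two) is the classic complementary-filter pattern. A minimal sketch of that idea, assuming single-axis orientation signals, first-order filters, and an illustrative blend constant `alpha`; the function name and all parameters are assumptions, not taken from the patent:

```python
# Hypothetical sketch of the complementary-filter fusion in 20110304694:
# high-pass the image-alignment stream, low-pass the sensor stream, sum.
# `alpha` and the names below are illustrative, not from the application.

def fuse_orientations(alignment_est, sensor_est, alpha=0.9):
    """Fuse two per-frame orientation streams (one axis, radians).

    alignment_est: orientations derived from image-based alignment
    sensor_est:    orientations from the camera-mounted sensor
    alpha:         filter constant; higher passes more high-frequency
                   alignment detail and more low-frequency sensor trend
    """
    fused = []
    hp_prev_in = alignment_est[0]
    hp_prev_out = 0.0
    lp_prev_out = sensor_est[0]
    for a, s in zip(alignment_est, sensor_est):
        # First-order high-pass on the alignment stream
        hp_out = alpha * (hp_prev_out + a - hp_prev_in)
        hp_prev_in, hp_prev_out = a, hp_out
        # First-order low-pass on the sensor stream
        lp_out = alpha * lp_prev_out + (1.0 - alpha) * s
        lp_prev_out = lp_out
        # Complementary fusion: sum the two filtered streams
        fused.append(hp_out + lp_out)
    return fused

angles = fuse_orientations([0.0, 0.1, 0.2], [0.0, 0.1, 0.2])
print(angles)
```

Because the two filters share the same `alpha`, they are complementary: when both streams carry the same signal, the fused output reproduces it, while in practice the high-pass side suppresses slow drift in the alignment estimates and the low-pass side suppresses sensor noise.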