Class / Patent application number | Description | Number of patent applications / Date published |
348/14.16 | User positioning (e.g., parallax) | 73 |
20080291265 | SMART CROPPING OF VIDEO IMAGES IN A VIDEOCONFERENCING SESSION - Systems and methods are disclosed for controlling cropping areas of video images to match the allocated area associated with the image in a video conferencing layout. The disclosed methods can protect regions of interest from being cropped. A region of interest within a video image is identified to adjust the cropping in such a way that the region of interest is preserved within the cropped image. The region of interest may be identified based on motion detection or flesh tone detection, for example. | 11-27-2008 |
20080297589 | EYE GAZING IMAGING FOR VIDEO COMMUNICATIONS - Video communication systems and methods for communicating between an individual in a local environment and a remote viewer in a remote environment are provided. The system has an image display device; at least one image capture device which acquires video images for fields of view of a local environment, and any individuals therein; an audio system having an audio emission device and an audio capture device; a computer, which includes a contextual interface having a gaze adapting process, and an image processor; and a communication controller which transmits and receives video images of the local environment and the remote environment, and data regarding video scene characteristics thereof, across a network between the local environment and the remote environment; wherein the gaze adapting process identifies video scene characteristics of the local environment indicative of eye gaze image capture and alters the video images when such characteristics are present. | 12-04-2008 |
20090079816 | METHOD AND SYSTEM FOR MODIFYING NON-VERBAL BEHAVIOR FOR SOCIAL APPROPRIATENESS IN VIDEO CONFERENCING AND OTHER COMPUTER MEDIATED COMMUNICATIONS - A method is described for modifying behavior for social appropriateness in computer mediated communications. Data can be obtained representing the natural non-verbal behavior of a video conference participant. The cultural appropriateness of the behavior is calculated based on a cultural model and previous behavior of the session. Upon detecting that the behavior of the user is culturally inappropriate, the system can calculate an alternative behavior based on the cultural model. Based on this alternative behavior, the video output stream can be modified to be more appropriate by altering gaze and gesture of the conference participants. The output stream can be modified by using previously recorded images of the participant, by digitally synthesizing a virtual avatar display or by switching the view displayed to the remote participant. Once the user's behavior changes to be once again culturally appropriate, the modified video stream can be returned to unmodified state. | 03-26-2009 |
20090278913 | GAZE ACCURATE VIDEO CONFERENCING - A gaze accurate video conferencing system includes a screen that alternates between a light-scattering state and a substantially transparent state. A camera is positioned behind the screen and is configured to capture images of a user positioned in front of the screen when the screen is in its substantially transparent state. When the screen is in its substantially light-scattering state, a projector projects a display image on the screen. | 11-12-2009 |
20100073458 | SYSTEMS AND METHODS FOR PROVIDING PERSONAL VIDEO SERVICES - Systems and methods for processing video are provided. Video compression schemes are provided to reduce the number of bits required to store and transmit digital media in video conferencing or videoblogging applications. A photorealistic avatar representation of a video conference participant is created. The avatar representation can be based on portions of a video stream that depict the conference participant. A face detector is used to identify, track and classify the face. Object models including density, structure, deformation, appearance and illumination models are created based on the detected face. An object based video compression algorithm, which uses machine learning face detection techniques, creates the photorealistic avatar representation from parameters derived from the density, structure, deformation, appearance and illumination models. | 03-25-2010 |
20100079576 | DISPLAY SYSTEM AND METHOD - A display system is disclosed as including first and second visual display units (VDU's), each for displaying visual images for viewing; a first digital video camera for capturing images of a first individual viewing images displayed by the first VDU; at least second and third digital video cameras for capturing, each from a different angle, images of a second individual viewing the second VDU; in which the first digital video camera is connectable with the second VDU for transmitting the captured images to the second VDU for display; and the first VDU is connectable with either of the second and third digital video cameras for display of images captured by either of the second and third digital video cameras; means for identifying the position of the centre point between the eyes of the captured images of the first individual against a capture window of the first digital video camera; and means for selectively connecting the first VDU with the second digital video camera or the third digital video camera in accordance with the identified position of the centre point between the eyes of the first individual. A visual display apparatus is also disclosed as including a visual display unit supported by a table, the table including a closable opening; a reflector movable relative to the table between a first position in which the reflector closes the opening and a second position in which the opening is open and images displayed by the visual display unit are reflectable by the reflector for viewing; and an end of the reflector is slidably and swivellably movable relative to the table for movement between the first and second positions. | 04-01-2010 |
20100091088 | DEVICE FOR A VIDEOCONFERENCE COMMUNICATION AND ASSOCIATED COMMUNICATION METHOD - The invention relates to a device for a videoconference communication and an associated communication method. | 04-15-2010 |
20100149310 | VISUAL FEEDBACK FOR NATURAL HEAD POSITIONING - A videoconferencing conferee may be provided with feedback on his or her location relative a local video camera by altering how remote videoconference video is displayed on a local videoconference display viewed by the conferee. The conferee's location may be tracked and the displayed remote video may be altered in accordance to the changing location of the conferee. The remote video may appear to move in directions mirroring movement of the conferee. This effect may be achieved by modeling the remote video as offset and behind a virtual portal corresponding to the display. The remote video may be displayed according to a view of the remote video through the virtual portal. As the conferee's position changes, the view through the portal changes, and the remote video changes accordingly. | 06-17-2010 |
20100171808 | System and Method for Enhancing Eye Gaze in a Telepresence System - A system for enhancing eye gaze in a telepresence system includes a plurality of local cameras coupled to at least one local display. Each local camera is directed to at least one respective local user section and operable to generate an image of the respective local user section. The system also includes a plurality of remote displays. Each remote display is operable to reproduce the local video image of the local user section. Within the system, the plurality of remote displays and the plurality of local cameras are aligned such that when a first local user within a local user section looks at a target, at least one remote display is operable to reproduce the local video image of the first user section comprising the first local user such that the eye gaze of the reproduced image of the first local user is directed approximately at a corresponding target. | 07-08-2010 |
20100177159 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus includes a photographing unit configured to generate a plurality of images by photographing a plurality of times a range which can be photographed, an object detection unit configured to detect a specified object from each of the plurality of images generated by the photographing unit, a position determination unit configured to determine an existing position of the specified object based on a detection result of the specified object in the plurality of images, and a range determination unit configured to determine a photographing range to be photographed within the range which can be photographed based on the existing position of the specified object determined by the position determination unit. | 07-15-2010 |
20100188478 | METHODS AND SYSTEMS FOR PERFORMING VISUAL COLLABORATION BETWEEN REMOTELY SITUATED PARTICIPANTS - Embodiments of the present invention are directed to visual-collaborative systems and methods enabling geographically distributed groups to engage in face-to-face, interactive collaborative video conferences. In one aspect, a method for establishing a collaborative interaction between a local participant and one or more remote participants includes capturing images of the local participant in front of a display screen. The method includes collecting depth information of the local participant located in front of the display screen and transmitting the images and depth information of the local participant to the one or more remote participants. The method also includes receiving images and depth information of the one or more remote participants and projecting the images of the one or more remote participants on the display screen based on the depth information of the remote participants. | 07-29-2010 |
20100194849 | METHOD AND A DEVICE FOR CONTROLLING THE MOVEMENT OF A LINE OF SIGHT, A VIDEOCONFERENCING SYSTEM, A TERMINAL AND A PROGRAM FOR IMPLEMENTING SAID METHOD - A method of controlling movement of a line of sight of a video camera mounted on a mobile videoconferencing terminal is provided. | 08-05-2010 |
20100238265 | Telepresence Systems and Methods Therefore - A telepresence system enhances the perception of presence of a remote person involved in a video conference. The system preferably has a two-way mirror, which is between the observer and the display device, positioned at an angle to reflect a backdrop surface. The backdrop surface may or may not appear superimposed in a position behind the image of a person from the remote location. The system preferably minimizes image distortion via an optical path for the camera line of sight that is substantially longer than the physical distance between the user and the camera. The system may be asymmetrical, in that one camera is on axis with the user's line of sight while the other camera is off axis with the user's line of sight. | 09-23-2010 |
20100253761 | Reflected Backdrop for Communications Systems - A video conferencing system has a two-way mirror, which is between the observer and the display device, positioned at an angle to reflect a backdrop surface. The backdrop surface, which is further away from the two-way mirror than the image plane of the image display device, appears superimposed in a position behind the image of a person from the remote location. A camera is located in the backdrop at a position along the line of sight of the transmitted image so that a perceived eye contact is achieved. A system is disclosed wherein telepresence systems that are compatible with a pre-defined standard are connected via a network connecting, either directly or via a telepresence operations center, to provide interaction with substantially life-sized images of the remote participant displayed with three-dimensional depth cues in the room setting. | 10-07-2010 |
20100283830 | METHOD AND APPARATUS MAINTAINING EYE CONTACT IN VIDEO DELIVERY SYSTEMS USING VIEW MORPHING - A view morphing algorithm is applied to synchronous collections of video images from at least two video imaging devices and, by interpolating between the images, creates a composite image view of the local participant. This composite image approximates what might be seen from a point between the video imaging devices, and is presented to the other video session participants. | 11-11-2010 |
20100309285 | Technique For Maintaining Eye Contact In A Videoconference Using A Display Device - In a videoconferencing terminal, a flat panel display has thereon display elements for displaying an image of a remote object during a videoconference. The display elements are arranged on the flat panel display such that light-transmissive regions are interspersed among the display elements. A camera in the terminal is used to receive light through the light-transmissive regions to capture an image of an object in front of the flat panel display to realize the videoconference. | 12-09-2010 |
20100328423 | Method and apparatus for improved matching of auditory space to visual space in video teleconferencing applications using window-based displays - A method and apparatus for enabling an improved experience by better matching of the auditory space to the visual space in video viewing applications such as those that may be used in video teleconferencing systems using window-based displays. In particular, in accordance with certain illustrative embodiments of the present invention, one or more desired sound source locations are determined based on a location of a window in a video teleconference display device (which may, for example, comprise the image of a teleconference participant within the given window), and a plurality of audio signals which accurately locate the sound sources at the desired sound source locations (based on the location of the given window in the display) are advantageously generated. | 12-30-2010 |
20110018963 | VIDEO COLLABORATION - A video collaboration method includes examining a video image to locate therein a strip segment containing desired facial features of a second collaborator. The method also includes causing a display of the strip segment in a second frame positioned above a first frame for communicating shared collaboration content on a display device positioned for a first collaborator. | 01-27-2011 |
20110090303 | Facial Pose Improvement with Perspective Distortion Correction - Methods, systems, and apparatus are presented for reducing distortion in an image, such as a video image. A video image can be captured by an image capture device, e.g. during a video conferencing session. Distortion correction processing, such as the application of one or more warping techniques, can be applied to the captured image to produce a distortion corrected image, which can be transmitted to one or more participants. The warping techniques can be performed in accordance with one or more warp parameters specifying a transformation of the captured image. Further, the warp parameters can be generated in accordance with an orientation of the image capture device, which can be determined based on sensor data or can be a fixed value. Additionally or alternatively, the warp parameters can be determined in accordance with a reference image or model to which the captured image should be warped. | 04-21-2011 |
20110096140 | Analysis Of Video Composition Of Participants In A Video Conference - A method of determining whether a video frame meets the design composition requirements associated with a video conference, said method comprising steps performed by a processor of: providing design composition requirements for the video frame, wherein the design composition requirements are available at runtime; analyzing captured video content from a video conference, to determine whether a participant of interest is present in a video frame of the video content; and analyzing the video frame to determine if it meets the design composition requirements for the video conference. | 04-28-2011 |
20110096141 | VIDEOCONFERENCING ENVIRONMENT - A method of videoconferencing includes arranging a portable environment to include a first wall generally parallel to and spaced from a second wall and facing a videocamera on the first wall toward the second wall. An on-camera subject zone is implemented between the first wall and the second wall via: (1) setting a field-of-view of the videocamera to include the second wall while excluding a peripheral edge of the second wall; and (2) positioning a subject station between the first wall and the second wall and setting the on-camera subject zone within a boundary defined by the field-of-view of the videocamera extending from the subject station to the second wall. | 04-28-2011 |
20110157300 | METHOD AND SYSTEM FOR DETERMINING A DIRECTION BETWEEN A DETECTION POINT AND AN ACOUSTIC SOURCE - A method including: receiving acoustic signals originating from an acoustic source at a first pair of microphone elements, arranged symmetrically about a detection point; calculating, with a processor device, a cross correlation of signals provided by the first pair of microphone elements, resulting in a first cross correlation signal; receiving the acoustic signals originating from the acoustic source at a second pair of microphone elements, arranged symmetrically about the detection point; calculating, with the processor device, a cross correlation of signals provided by the second pair of microphone elements, resulting in a second cross correlation signal; and calculating, with the processor device, a direction between the detection point and the acoustic source based on a convolution of the first cross correlation signal by the second cross correlation signal. | 06-30-2011 |
20110254914 | IMMERSIVE VIEWER, A METHOD OF PROVIDING SCENES ON A DISPLAY AND AN IMMERSIVE VIEWING SYSTEM - An apparatus, a method of providing scenes on a display and an immersive viewing system is disclosed. In one embodiment, the apparatus includes: (1) a movement detector configured to monitor movement of a user with respect to a display, wherein the movement detector is disengaged from the user and includes a distance sensor configured to detect a distance of the user from the display and (2) an active screen displayer configured to navigate an active scene on the display based on changes in the distance and dynamically provide an updated view of the active scene on the display based thereon. | 10-20-2011 |
20110267422 | MULTI-PARTICIPANT AUDIO/VIDEO COMMUNICATION SYSTEM WITH PARTICIPANT ROLE INDICATOR - An audio/video communication system displays the status of participants in a video chat session. The system includes multiple video chat capable (VCC) information handling systems (IHSs) that display video images of the participants. In this manner, each user may see the user's own video image as well as the video images of other users in the video chat session. When a user speaks, that user's VCC IHS detects audio, thus designating a speaker participant. This user's VCC IHS includes a gaze direction detector that determines at which particular user video image the user gazes, thus determining a target participant. The VCC IHS sends speaker participant ID information and target participant ID information to other VCC IHSs in the video chat session. In response, the other VCC IHSs display an indicator that designates one user video image as the speaker participant and another user video image as the target participant. | 11-03-2011 |
20110285809 | Automatic Camera Framing for Videoconferencing - A videoconferencing apparatus automatically tracks speakers in a room and dynamically switches between a controlled, people-view camera and a fixed, room-view camera. When no one is speaking, the apparatus shows the room view to the far-end. When there is a dominant speaker in the room, the apparatus directs the people-view camera at the dominant speaker and switches from the room-view camera to the people-view camera. When there is a new speaker in the room, the apparatus switches to the room-view camera first, directs the people-view camera at the new speaker, and then switches to the people-view camera directed at the new speaker. When there are two near-end speakers engaged in a conversation, the apparatus tracks and zooms in the people-view camera so that both speakers are in view. | 11-24-2011 |
20110298887 | Apparatus Using an Accelerometer to Capture Photographic Images - Methods and apparatuses for operating an electronic device based on an accelerometer to capture photographic images with a camera integrated into the display screen are described. | 12-08-2011 |
20110316966 | METHODS AND SYSTEMS FOR CLOSE PROXIMITY SPATIAL AUDIO RENDERING - Disclosed herein are multimedia-conferencing systems and methods enabling local participants to hear remote participants from the direction the remote participants are rendered on a display. In one aspect, a method includes a computing device receives a remote participant's image and sound information collected at a remote site. The remote participant's image is rendered on a display at a local site. When the local participant is in close proximity to the display, sounds generated by the remote participant are played over stereo loudspeakers so that the local participant perceives the sounds as emanating from the remote participant's location rendered on the display. | 12-29-2011 |
20110316967 | FACILITATING COMMUNICATIONS USING A PORTABLE COMMUNICATION DEVICE AND DIRECTED SOUND OUTPUT - An exemplary method of facilitating communication includes determining a position of a portable communication device that generates a video output. A sound output control is provided to an audio device that is distinct from the portable communication device for directing a sound output from the audio device based on the determined position of the portable communication device. | 12-29-2011 |
20120033031 | METHOD AND SYSTEM FOR LOCATING AN INDIVIDUAL - A method of locating a first user comprises receiving at a server, via a communication network, video data from a sensor at a predetermined location that is remote from the server. Using a process in execution on a processor of the server, at least one of video analytics, image analytics and audio analytics is performed for determining a presence of the first user at the predetermined location. When a result of the video analytics is indicative of the first user being present at the predetermined location, an indication that the first user is at the predetermined location is retrievably stored within a memory element of the server. | 02-09-2012 |
20120038741 | APPARATUS FOR CORRECTING GAZE, A METHOD OF VIDEOCONFERENCING AND A SYSTEM THEREFOR - An apparatus, a method of videoconferencing and a videoconferencing system are disclosed herein. In one embodiment, the apparatus includes: (1) a monitor configured to switch between a display mode and a reflecting mode and (2) a camera located in front of the monitor, the camera positioned to face the monitor and synchronized with the monitor to capture a local image reflected therefrom during the reflecting mode. | 02-16-2012 |
20120038742 | System And Method For Enabling Collaboration In A Video Conferencing System - The present invention is a video conferencing system that includes: a first display area for displaying content shared between at least a first participant at a first location and at least a second participant at a second location in a video conference; and a second display area for displaying the video captured of the at least first participant, wherein the video captured of the at least first participant is spatially consistent with the video captured of the at least second participant. | 02-16-2012 |
20120069139 | DIGITALLY-GENERATED LIGHTING FOR VIDEO CONFERENCING APPLICATIONS - A method of improving the lighting conditions of a real scene or video sequence. Digitally generated light is added to a scene for video conferencing over telecommunication networks. A virtual illumination equation takes into account light attenuation, Lambertian reflection and specular reflection. An image of an object is captured, and a virtual light source illuminates the object within the image. In addition, the object can be the head of the user. The position of the head of the user is dynamically tracked so that a three-dimensional model is generated which is representative of the head of the user. Synthetic light is applied to a position on the model to form an illuminated model. | 03-22-2012 |
20120092445 | AUTOMATICALLY TRACKING USER MOVEMENT IN A VIDEO CHAT APPLICATION - A system for automatically tracking movement of a user participating in a video chat application executing in a computing device is disclosed. A capture device connected to the computing device captures a user in a field of view of the capture device and identifies a sub-frame of pixels identifying a position of the head, neck and shoulders of the user in a capture frame of a capture area. The sub-frame of pixels is displayed to a remote user at a remote computing system who is participating in the video chat application with the user. The capture device automatically tracks the position of the head, neck and shoulders of the user as the user moves to a next location within the capture area. A next sub-frame of pixels identifying a position of the head, neck and shoulders of the user in the next location is identified and displayed to the remote user at the remote computing device. | 04-19-2012 |
20120120185 | VIDEO COMMUNICATION METHOD, APPARATUS, AND SYSTEM - The present invention relates to the communications field and discloses a video communication method, apparatus, and system, which are intended to solve the problem that the prior art does not achieve consistent eye-to-eye video communication in a horizontal direction. The technical solutions of the present invention include: obtaining video images of a participant from more than two different horizontal shooting angles, where a range of viewing angles of the participant is between the more than two different horizontal shooting angles; and sending the video images of the participant to a video communication remote end. The embodiments of the present invention may be applied in the video communication field. | 05-17-2012 |
20120140023 | Eye Gaze Reduction - Eye gaze reduction may be provided. First, a location of a near-end camera relative to a near-end screen may be determined. Next, based upon the determined location of the near-end camera, a location may be determined for a video window on the near-end screen. The determined location for the video window may be configured to reduce an eye gaze error in a near-end image transmitted to a far-end device from the near-end camera. Then video data from a far-end camera corresponding to the far-end device may be received and rendered in the video window at the determined location for the video window on the near-end screen. | 06-07-2012 |
20120147131 | VIDEO COMMUNICATING APPARATUS HAVING EYE-TO-EYE COMMUNICATION FUNCTION AND METHOD THEREOF - Provided are a video communication apparatus having an eye-to-eye communication function that enables video communication users to communicate with each other in an eye-to-eye state, and a method thereof. The apparatus includes a display unit configured to display a video of a video communication partner, the display unit including a plurality of cameras activated according to a selection control signal; a video processor configured to encode a captured video from the activated camera, and to decode a video from the video communication partner and identify a position of the video communication partner's eyes from the video; a camera selector configured to activate a camera installed in a position corresponding to the position among the plurality of cameras; and a transceiver configured to transmit the encoded video from the video processor to the video communication partner and provide the video from the video communication partner to the video processor. | 06-14-2012 |
20120154517 | METHOD AND DEVICE FOR ADJUSTING DEPTH PERCEPTION, TERMINAL INCLUDING FUNCTION FOR ADJUSTING DEPTH PERCEPTION AND METHOD FOR OPERATING THE TERMINAL - A method and a device for adjusting depth perception, a terminal including a function for adjusting depth perception and a method for operating the terminal are provided. The method for adjusting depth perception includes: obtaining color and depth videos of a user; detecting a user's position based on the obtained depth video of the user; calculating a range of maximum and minimum depths in a 3-dimensional (3D) video according to the detected user's position; and adjusting a left and right stereo video generating interval of the 3D video to be rendered so as to satisfy the calculated range of the maximum and minimum depths. Therefore, during a 3D or multi-view video call, the 3D video having a three-dimensional effect optimized according to the user's position may be provided. | 06-21-2012 |
20120162356 | IMAGE PROCESSING SYSTEM - The present invention relates to a method for an image processing system. | 06-28-2012 |
20120169838 | THREE-DIMENSIONAL VIDEO CONFERENCING SYSTEM WITH EYE CONTACT - Methods, devices, and non-transitory computer-readable storage media are disclosed for allowing video conferencing participants to maintain eye contact with each other. A display is disposed between a first video capture device and a second video capture device. The video capture devices capture images of a subject video conferencing participant. Images from the first capture device are associated with images from the second video capture device for transmission over a network to a video conferencing agent of a peer video conferencing participant. Images of the peer video conferencing participant are received over the network and displayed on the display that is disposed between the video capture devices. The video capture devices may be disposed at a height that is approximately even with a focal point of the subject video conferencing participant such that the subject video conferencing participant appears, to the peer video conferencing participant, to be making eye contact with the peer video conferencing participant when he is looking at the images of the peer video conferencing participant on the display. | 07-05-2012 |
20120249724 | VIDEO CONFERENCING DISPLAY DEVICE - A video conferencing display device includes a display panel, at least one imaging device and processing structure. The at least one imaging device has a field of view aimed at an inner surface of the display panel and captures images through the display panel such that when a user is positioned adjacent an outer surface of the display panel, the user appears in the captured images. The processing structure communicates with the at least one imaging device and processes the captured images to create a direct eye image for transmission to a remote device over a network. | 10-04-2012 |
20120257004 | Direct Eye-Contact Enhancing Videoconferencing Unit - A desktop videoconferencing endpoint for enhancing direct eye-contact between participants can include a transparent display device and a camera placed behind the display device to capture images of a near end participant located in front of the display device. The display device can alternate between display states and non-display states. The camera can be operated to capture images of the near end participant only when the display device is in the non-display state. The camera can be placed behind the display device at a location where an image of the eyes of the far end participant is displayed. Images captured by the camera, when displayed to the far end participant, can give the perceived impression that the near end participant is making direct eye-contact with the far end participant. | 10-11-2012 |
20120274734 | SYSTEM AND METHOD FOR PROVIDING ENHANCED EYE GAZE IN A VIDEO CONFERENCING ENVIRONMENT - An apparatus is provided in one example and includes first and second cameras configured to capture image data associated with an end user involved in a video session. The apparatus can further include a display configured to interface with the cameras, and a shaft coupled to a rotor. The cameras are secured to the shaft, and the shaft receives a rotational force such that during rotation of the shaft, the cameras pass over the display in order to capture particular image data associated with the end user's face in such a way as to improve eye gaze alignment. | 11-01-2012 |
20120274735 | SYSTEM AND METHOD FOR PROVIDING ENHANCED EYE GAZE IN A VIDEO CONFERENCING ENVIRONMENT - An apparatus is provided in one example and includes an eye gaze module configured to interact with a processor and a memory element such that the apparatus is configured for: receiving first video data from a first camera; receiving second video data from a second camera; determining which of the first and second video data captured by the cameras is to be selected for transmission to a counterparty engaged in a video session; determining a tilt characteristic associated with at least one of the cameras; and modifying image data based on the tilt characteristic in order to orient objects for rendering on a display associated with the video session. | 11-01-2012 |
20120274736 | METHODS AND SYSTEMS FOR COMMUNICATING FOCUS OF ATTENTION IN A VIDEO CONFERENCE - Methods and systems for communicating each participant's focus of attention in a video conference are described. In one aspect, a method for communicating where each participant's attention is focused in a video conference includes receiving each remote participant's video and audio streams and focus of attention data, based on the remote participant's head location. Each remote participant's video stream is presented in a separate viewing area of the local participant's display. The viewing areas presenting the remote participants are modified to indicate to the local participant each remote participant's focus of attention, based on the focus of attention data. | 11-01-2012 |
20120281063 | Systems And Methods For Providing Personal Video Services - Systems and methods for processing video are provided. Video compression schemes are provided to reduce the number of bits required to store and transmit digital media in video conferencing or videoblogging applications. A photorealistic avatar representation of a video conference participant is created. The avatar representation can be based on portions of a video stream that depict the conference participant. A face detector is used to identify, track and classify the face. Object models including density, structure, deformation, appearance and illumination models are created based on the detected face. An object based video compression algorithm, which uses machine learning face detection techniques, creates the photorealistic avatar representation from parameters derived from the density, structure, deformation, appearance and illumination models. | 11-08-2012 |
20120293606 | TECHNIQUES AND SYSTEM FOR AUTOMATIC VIDEO CONFERENCE CAMERA FEED SELECTION BASED ON ROOM EVENTS - Techniques for automatically selecting a video camera feed based on room events in a video teleconference are described. An embodiment may receive video information from multiple cameras in a conference room. An event of interest may be detected from the video information. Events of interest may be detected, for example, by detecting faces, detecting an eye gaze or head direction, and detecting motion. When an event of interest is detected, the video camera having the optimal view of the event may be selected, and the feed from the selected video camera may be transmitted to remote participants. Other embodiments are described and claimed. | 11-22-2012 |
20120320147 | METHOD AND APPARATUS FOR HANDS-FREE CONTROL OF A FAR END CAMERA - One embodiment of the present invention sets forth a method for intuitively controlling a far-end camera via physical movements. The method includes the steps of receiving an image captured by a first camera and including a digital representation of at least a portion of a user of the first camera, analyzing the digital representation to identify a position of the user relative to the first camera, computing a value associated with a first property of a second camera based on the position of the user, and transmitting the value to the second camera, wherein, in response to receiving the value, a perspective of the second camera is modified based on the value. | 12-20-2012 |
20130070046 | METHOD AND SYSTEM FOR CORRECTING GAZE OFFSET - A method of correcting gaze offset in an image of at least one individual having eyes is disclosed. The method comprises: processing the image to extract location of at least one eye over the image, processing the image to replace imagery data associated with each location of each eye with replacement data thereby providing a corrected image, and transmitting the corrected image to a display device. The replacement data are preferably previously-recorded imagery data which respectively correspond to the same eye but a different gaze. | 03-21-2013 |
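The replacement step this abstract describes can be sketched in a few lines: given detected eye locations, the pixels at each eye are overwritten with previously recorded imagery of the same eye at a different (direct) gaze. This is a minimal illustrative sketch; the frame representation, `eye_locations` input, and patch format are assumptions, not details from the application.

```python
def correct_gaze(frame, eye_locations, recorded_eyes):
    """Replace imagery at each detected eye location with a
    previously recorded direct-gaze patch of the same eye (sketch)."""
    corrected = [row[:] for row in frame]  # copy so the input frame survives
    for (y, x), patch in zip(eye_locations, recorded_eyes):
        for dy, patch_row in enumerate(patch):
            for dx, value in enumerate(patch_row):
                corrected[y + dy][x + dx] = value
    return corrected

# Toy example: an 8x8 grayscale "frame" of zeros and one 2x2 bright "eye" patch.
frame = [[0] * 8 for _ in range(8)]
patch = [[255, 255], [255, 255]]
out = correct_gaze(frame, [(3, 3)], [patch])
```

A real system would also blend patch borders and match illumination, but the core idea, locate, then substitute pre-recorded imagery, is as above.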
20130093838 | METHODS AND SYSTEMS FOR ESTABLISHING EYE CONTACT AND ACCURATE GAZE IN REMOTE COLLABORATION - Embodiments of the present invention are directed to video-conferencing systems that create eye contact and accurate gaze awareness between video-conferencing participants. In one aspect, a method includes capturing images of a first participant through a display using a camera. | 04-18-2013 |
20130147907 | Display Device Having Image Pickup Function and Two-Way Communication System - A compact and lightweight display device having an image pickup function, and a two-way communication system, which can shoot an image of a user as an object and display an image at the same time, without the image-quality degradation caused by disposing a semi-transmitting mirror or the like that blocks the image on the display screen (display plane). The display device having the image pickup function includes a display panel that transmits at least visible light and arranges display elements controllable by voltage or current, and an image pickup device disposed around the display panel. The image pickup device receives image data of the user or the like via a reflector, or is equipped with a fiberscope bundling optical fibers. | 06-13-2013 |
20130182064 | METHOD FOR OPERATING A CONFERENCE SYSTEM AND DEVICE FOR A CONFERENCE SYSTEM - A device for a conference system and method for operation thereof is provided. The device is configured to receive a first audio signal and a first identifier associated with a first participant. The device is further configured to receive a second audio signal and a second identifier associated with a second participant. The device includes a filter configured to filter the received first audio signal and the received second audio signal and to output a filtered signal to a number of electroacoustic transducers. The device includes a control unit connected to the filter. The control unit is configured to control one or more first filter coefficients based on the first identifier and to control one or more second filter coefficients based on the second identifier. Preferably, the device comprises a head-tracker function for changing the first and second filter coefficients depending on tracking of the head's position. | 07-18-2013 |
20130293669 | SYSTEM AND METHOD FOR EYE ALIGNMENT IN VIDEO - A system for image manipulation enables an improved video conferencing experience. The system includes a camera; a display screen adjacent to the camera; a processor coupled to the camera and the display screen; and a memory coupled to the processor. Instructions executable by the processor enable receiving a source image from the camera and generating a synthetic image based upon the source image. The synthetic image corresponds to a view of a virtual camera located at the display screen. | 11-07-2013 |
20130321566 | AUDIO SOURCE POSITIONING USING A CAMERA - Audio source positioning technique embodiments are presented that are employed in a video teleconference or telepresence session between a local site and one or more remote sites. Each of these sites has one participant, and a virtual scene is constructed and displayed at each site that depicts each of the participants from the other sites in the constructed scene. However, rather than simply playing audio captured at the other site or sites in the viewing participant's site, audio source positioning is used to make it seem to a participant viewing a rendering of the virtual scene that the voice of another participant is emanating from a location on the display device where the remote participant is depicted. | 12-05-2013 |
20140002586 | GAZE DIRECTION ADJUSTMENT FOR VIDEO CALLS AND MEETINGS | 01-02-2014 |
20140015918 | FEEDBACK-SYSTEM FOR MANAGING VIDEO CONFERENCING WITH A PORTABLE MULTIMEDIA DEVICE COMPRISING A FRONTAL CAMERA - A feedback-system for managing the position of a frontal camera provided on a portable multimedia device during video communication. The feedback-system comprises a camera image analyzer coupled to the frontal camera and adapted to detect and analyze the pose of a user facing the frontal camera, an optimal video estimator coupled to the camera image analyzer and adapted to calculate a 6-dimensional error-vector of the current position of the frontal camera with respect to an optimal position of the frontal camera, and an intuitive feedback manager coupled to the optimal video estimator and adapted to generate a transformation matrix translating the error-vector into an error message for the image displayed on a screen of the portable multimedia device, the intuitive feedback manager being further coupled to the portable multimedia device, which is adapted to use the error message for modifying the image displayed on the screen. | 01-16-2014 |
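The feedback pipeline in the abstract above can be sketched as: compute a 6-dimensional error vector (three translation and three rotation components) between the current and optimal camera poses, then translate the dominant component into a coarse on-screen hint. All names, hint strings, and thresholds here are illustrative assumptions, not details from the application.

```python
def pose_error(current, optimal):
    """6-dimensional error vector: (dx, dy, dz, droll, dpitch, dyaw)."""
    return tuple(o - c for c, o in zip(current, optimal))

def feedback_hint(error, threshold=0.1):
    """Map the dominant error component to a coarse user hint (sketch)."""
    hints = ["move right", "move up", "move closer",
             "level the device", "tilt up", "turn right"]
    idx = max(range(6), key=lambda i: abs(error[i]))
    if abs(error[idx]) < threshold:
        return "hold position"  # close enough to the optimal pose
    return hints[idx]

# Camera is 0.4 units too far from the optimal position along z.
err = pose_error((0.0,) * 6, (0.0, 0.0, 0.4, 0.0, 0.0, 0.0))
```

The claimed transformation matrix would generalize this lookup into a continuous mapping from error vector to image modification.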
20140043432 | METHOD AND APPARATUS FOR TRACKING ACTIVE SUBJECT IN VIDEO CALL SERVICE - A method for tracking an active subject in a video call service includes establishing a peer-to-peer connection between a videophone input apparatus and a peer over a wireless connection; receiving information of a first resolution display of an A/V output apparatus of the peer; and generating a local video stream in the videophone input apparatus based on a video signal provided by a second resolution camera of the videophone input apparatus, the second resolution being greater than the first resolution. The method further includes generating a local audio stream in the videophone input apparatus based on an audio signal provided by a microphone of the videophone input apparatus; determining active subject information of the video call using at least one of the local video stream and the local audio stream; targeting the local video stream towards the active subject by selecting a first resolution view area from the second resolution video signal based on the determined active subject information; and transmitting the first resolution view area for displaying on the first resolution display. | 02-13-2014 |
20140055556 | Telepresence Systems and Methods Therefore - A telepresence system enhances the perception of presence of a remote person involved in a video conference. The system preferably has a two-way mirror, which is between the observer and the display device, positioned at an angle to reflect a backdrop surface. The backdrop surface may or may not appear superimposed in a position behind the image of a person from the remote location. The system preferably minimizes image distortion via an optical path for the camera line of sight that is substantially longer than the physical distance between the user and the camera. The system may be asymmetrical, in that one camera is on axis with the user's line of sight while the other camera is off axis with the user's line of sight. | 02-27-2014 |
20140098183 | CONTROLLED THREE-DIMENSIONAL COMMUNICATION ENDPOINT - A controlled three-dimensional (3D) communication endpoint system and method for simulating an in-person communication between participants in an online meeting or conference and providing easy scaling of a virtual environment when additional participants join. This gives the participants the illusion that the other participants are in the same room and sitting around the same table with the viewer. The controlled communication endpoint includes a plurality of camera pods that capture video of a participant from 360 degrees around the participant. The controlled communication endpoint also includes a display device configuration containing display devices placed at least 180 degrees around the participant and display the virtual environment containing geometric proxies of the other participants. Placing the participants at a round virtual table and increasing the diameter of the virtual table as additional participants are added easily achieves scalability. | 04-10-2014 |
20140098184 | IMAGING DEVICE AND METHOD - A display is disclosed that comprises an array of display pixels, in which light sensing pixels are interspersed with the display pixels substantially across the area of the display. At least one colour display sub-pixel is arranged to be switched off when the corresponding colour light sensor pixel closest to that display sub-pixel is detecting light to generate an image. A portable electronic device is disclosed which comprises the display. The display is then operable to capture an image from the light sensing pixels, so that for example it can then operate as one or more of a digital mirror, scanner, biometric lock or touch panel. When a user looks at the display for a video call, the captured image of the user appears to look directly at the other party. | 04-10-2014 |
20140139620 | TELEPRESENCE COMMUNICATIONS SYSTEM AND METHOD - Sharing an image of a local participant in a group with at least one other participant in that group commences by establishing, for that local participant, orientation information indicative of the orientation of the local participant relative to a display screen and a video camera. The local participant sends his or her image along with the orientation information to at least one other participant. The orientation information enables display of the image of the local participant with a common orientation recognized by the at least one other participant. | 05-22-2014 |
20140184736 | SYSTEMS AND METHODS FOR CAUSING A USER TO LOOK INTO A CAMERA - Systems and methods for causing a user to look into a camera are described. In some aspects, a position of a camera coupled with a computing device is determined. A computer-generated element is displayed proximate to the determined position of the camera. The computer-generated element is presented for causing a user of the computing device to reposition a face of the user toward the determined position of the camera. | 07-03-2014 |
20140218467 | 3D Video-Teleconferencing Apparatus Capable of Eye Contact and Method Using the Same - Disclosed herein is a 3D teleconferencing apparatus and method enabling eye contact. The 3D teleconferencing apparatus enabling eye contact according to the present invention includes an image acquisition unit for acquiring depth images and color images by manipulating cameras in real time in consideration of images obtained by capturing a subject that is a teleconference participant and images received over a network and corresponding to a counterpart involved in the teleconference; a full face generation unit for generating a final depth image and a final color image corresponding to a full face of the participant for eye contact using the depth images and the color images; and a 3D image generation unit for generating a 3D image corresponding to the counterpart and displaying the 3D image on a display device. | 08-07-2014 |
20140267584 | VIEW RENDERING FOR THE PROVISION OF VIRTUAL EYE CONTACT USING SPECIAL GEOMETRIC CONSTRAINTS IN COMBINATION WITH EYE-TRACKING - A virtual camera pose determiner is configured to determine a position and an orientation of a virtual camera. The position of the virtual camera is determined on the basis of a display position of a displayed representation of a remote participant on a display. The orientation of the virtual camera is determined on the basis of a geometrical relation between the display position of the remote participant on the display, and a position of a local participant. The virtual camera is configured to transmit an image or a sequence of images to the remote participant, so that an image provided by the virtual camera has the view on the local participant as if viewed from the display position. Further embodiments provide a video communication system having a virtual camera pose determiner for providing a virtual camera pose on the basis of the display position and the position of the local participant. | 09-18-2014 |
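The geometric relation the abstract describes can be written down directly: the virtual camera sits at the remote participant's display position and is oriented along the unit vector toward the local participant. The coordinate conventions and function name below are assumptions for illustration only.

```python
import math

def virtual_camera_pose(display_pos, participant_pos):
    """Place the virtual camera at the remote participant's display
    position, oriented toward the local participant (sketch)."""
    dx = participant_pos[0] - display_pos[0]
    dy = participant_pos[1] - display_pos[1]
    dz = participant_pos[2] - display_pos[2]
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    direction = (dx / norm, dy / norm, dz / norm)  # unit view vector
    return {"position": display_pos, "orientation": direction}

# Display at eye height on the screen plane; participant 1.5 units in front.
pose = virtual_camera_pose((0.0, 1.2, 0.0), (0.0, 1.2, 1.5))
```

Rendering the scene from this pose yields the "as if viewed from the display position" image the abstract claims.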
20150049165 | APPARATUS FOR EYE CONTACT VIDEO CALL - The present disclosure relates to an apparatus for an eye contact video call capable of a realistic call in eye contact with the other party. A transparent display is formed at a central part of the opaque display and a camera is formed at a rear side thereof, allowing a realistic call in eye contact with the other party while also improving readability in usual display use (mode). Further, the manufacturing cost of a terminal may be reduced, and interference from the transparent display's output video that appears in the camera's captured video may be effectively eliminated. | 02-19-2015 |
20150296178 | Use of Face and Motion Detection for Best View Framing in Video Conference Endpoint - A video conference endpoint detects faces at associated face positions in video frames capturing a scene. The endpoint frames the video frames to a view of the scene encompassing all of the detected faces. The endpoint detects that a previously detected face is no longer detected. In response, a timeout period is started and independently of detecting faces, motion is detected across the view. It is determined if any detected motion (i) coincides with the face position of the previously detected face that is no longer detected, and (ii) occurs before the timeout period expires. If conditions (i) and (ii) are met, the endpoint restarts the timeout period and repeats the independently detecting motion and the determining. Otherwise, the endpoint reframes the view to encompass the remaining detected faces. | 10-15-2015 |
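The reframing logic above amounts to a small state machine: when a face disappears, hold the current framing for a timeout, keep restarting the timeout as long as motion coincides with the lost face's position, and reframe once the timeout expires without such motion. The sketch below uses simulated clock ticks; the names, tick-based timing, and interfaces are illustrative assumptions, not from the application.

```python
TIMEOUT_TICKS = 5  # illustrative timeout length

def should_reframe(motion_events, lost_face_pos):
    """Return the tick at which the view is reframed.

    motion_events: dict mapping tick -> detected motion position.
    Motion coinciding with the lost face's position restarts the timeout.
    """
    deadline = TIMEOUT_TICKS
    tick = 0
    while tick <= max(motion_events, default=0) + TIMEOUT_TICKS:
        if motion_events.get(tick) == lost_face_pos:
            deadline = tick + TIMEOUT_TICKS  # restart the timeout
        if tick >= deadline:
            return tick  # timeout expired: reframe to remaining faces
        tick += 1
    return tick
```

For example, with no motion the view is reframed at tick 5, while motion at the lost face's position at tick 3 pushes reframing back to tick 8.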
20150334344 | Virtual Window - Novel tools and techniques are provided for displaying video. In some embodiments, novel tools and techniques might be provided for sensing the presence and/or position of a user in a room, and/or for customizing displayed content (including video call content, media content, and/or the like) based on the sensed presence and/or position of the user. In particular, in some aspects, a user device (which might include, without limitation, a video calling device, an image capture device, a gaming console, etc.) might determine a position of a user relative to a display device in communication with the user device. The user device and/or a control server (in communication with the user device over a network) might adjust an apparent view of video or image(s) displayed on the display device, based at least in part on the determined position of the user relative to the display device. | 11-19-2015 |
20150341545 | VOICE TRACKING APPARATUS AND CONTROL METHOD THEREFOR - A method for controlling a voice tracking apparatus according to an embodiment of the present invention includes the steps of: tracking a sound source of a voice signal generated from the outside; turning an image capturing unit of the voice tracking apparatus toward the location of the tracked sound source; and beamforming the voice signal of the sound source through a voice input unit mounted on the image capturing unit. | 11-26-2015 |
20150358579 | MOBILE TERMINAL AND METHOD FOR OPERATING SAME - The present invention relates to a mobile terminal, and a method for operating the same. According to an embodiment of the present invention, a method for operating a mobile terminal includes the steps of: forming an audio beam based on at least one of a photographed image from a camera and motion information from a motion sensor; receiving an audio signal from a speaker through a plurality of microphones; and processing the received audio signal based on the formed audio beam. Thus, convenience of use is improved. | 12-10-2015 |
20160028991 | PERSPECTIVE-CORRECT COMMUNICATION WINDOW WITH MOTION PARALLAX - A perspective-correct communication window system and method for communicating between participants in an online meeting, where the participants are not in the same physical locations. Embodiments of the system and method provide an in-person communications experience by changing virtual viewpoint for the participants when they are viewing the online meeting. The participant sees a different perspective displayed on a monitor based on the location of the participant's eyes. Embodiments of the system and method include a capture and creation component that is used to capture visual data about each participant and create a realistic geometric proxy from the data. A scene geometry component is used to create a virtual scene geometry that mimics the arrangement of an in-person meeting. A virtual viewpoint component displays the changing virtual viewpoint to the viewer and can add perceived depth using motion parallax. | 01-28-2016 |
20160049052 | SYSTEMS AND METHODS FOR POSITIONING A USER OF A HANDS-FREE INTERCOMMUNICATION SYSTEM - A hands-free intercom may include a user-tracking sensor, a directional microphone, a directional sound emitter, a display device, and/or a communication interface. The user-tracking sensor may determine a location of a user so the directional microphone can measure vocal emissions by the user and the directional sound emitter can deliver audio to the user. The hands-free intercom may induce the user to move to a desired location and/or to stay within a connectivity area. The hands-free intercom may also or instead induce the user to face in a desired orientation. The directional sound emitter and/or the display device may induce the user by explicitly indicating the desired location, by adjusting an apparent source of the audio or video, by changing quality of delivered audio or video based on user position, by producing irritating audio or video, and/or the like. | 02-18-2016 |
20160057385 | Automatic Switching Between Different Cameras at a Video Conference Endpoint Based on Audio - A video conference endpoint includes predefined main and side audio search regions angularly-separated from each other at a microphone array configured to transduce audio received from the search regions. The endpoint includes one or more cameras to capture video in a main field of view (FOV) that encompasses the main audio search region. The endpoint determines if audio originates from any of the main and side audio search regions based on the transduced audio and predetermined audio search criteria. If it is determined that audio originates from the side audio search region, the endpoint automatically switches from capturing video in the main FOV to one or more cameras to capture video in a side FOV that encompasses the side audio search region. | 02-25-2016 |
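The switching rule in this abstract reduces to comparing the estimated audio arrival angle against the predefined, angularly-separated search regions. The angular bounds, angle convention, and function names below are assumptions for illustration only.

```python
# Illustrative angular bounds (degrees) for the two predefined search regions.
MAIN_REGION = (-30.0, 30.0)
SIDE_REGION = (45.0, 90.0)

def select_fov(audio_angle, current_fov="main"):
    """Switch to the side field of view when transduced audio
    originates within the side search region (sketch)."""
    lo, hi = SIDE_REGION
    if lo <= audio_angle <= hi:
        return "side"
    lo, hi = MAIN_REGION
    if lo <= audio_angle <= hi:
        return "main"
    return current_fov  # audio outside both regions: keep current view
```

A deployed endpoint would add the "predetermined audio search criteria" (energy thresholds, dwell times) before switching, but the region test is the core of the decision.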
20160182856 | DIGITAL ZOOM CONFERENCING | 06-23-2016 |
20160205352 | METHOD FOR PROVIDING TELEPRESENCE USING AVATARS, AND SYSTEM AND COMPUTER-READABLE RECORDING MEDIUM USING THE SAME | 07-14-2016 |