23rd week of 2018 patent application highlights part 70 |
Patent application number | Title | Published |
20180160010 | DATA COLLECTION SERVER, DEVICE, AND DATA COLLECTION AND TRANSMISSION SYSTEM - Quickly transmitting and acquiring device information is enabled. A data collection server has a receiver that receives device information related to device states; an evaluator configured to determine whether the received device information is first device information that is not an object of priority transmission, or second device information that is an object of priority transmission; and a transmitter configured to send the received device information to a management server that manages the device information. The transmitter transmits the second device information to the management server with priority over the first device information. | 2018-06-07 |
20180160011 | METHODS AND SYSTEMS FOR PROCESSING DOCUMENTS USING MULTIPLE HALFTONE SCREENING TECHNIQUES - A method of processing a document for printing includes receiving a document having a plurality of regions; for at least one of the regions, determining a drop size for that region and determining whether the drop size for printing the region meets a threshold criterion and, if so, processing the region using a tile-based screen for halftoning and, if not, processing the region using an error diffusion screen for halftoning; and printing the processed regions. Another method additionally considers the coverage level to determine whether to use a tile-based screen or an error diffusion screen. Yet another method determines whether the region is within a threshold range of a drop step change. | 2018-06-07 |
20180160012 | PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - An image processing apparatus according to the present invention includes an acquisition unit configured to acquire an output condition when an image forming apparatus forms an image on a recording medium based on image data, and a processing unit configured to perform image processing on the image data for improving sharpness of the image by using a parameter based on the output condition. The parameter to be referred to by the processing unit represents such a characteristic that the image formed by the image forming apparatus has a luminous characteristic in relation to spatial frequency that remains constant or decreases continuously without causing any inflection point or any discontinuous point in a frequency band from a predetermined frequency to a limit frequency of the image forming apparatus. | 2018-06-07 |
20180160013 | SYSTEM AND METHOD FOR ADAPTIVELY COMPRESSING DATA HAVING NOISY IMAGES USING LOSSLESS COMPRESSION - A system and method adaptively apply lossy compression to image data by receiving a pixel of image data and meta data indicating a type of object that generated the pixel, the pixel of image data including a first byte having the most significant bits of image data and a second byte having the least significant bits of image data; electronically determining whether the meta data associated with the pixel of image data is associated with a noisy image or indicates that the pixel of image data was generated by a specific type of object; and, when it is so determined, electronically modifying the pixel of image data by setting a predetermined number of low bits of the pixel of image data to zero. | 2018-06-07 |
20180160014 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - In the case where both thickening processing and UCR processing are performed for an object, appropriate effects are obtained for both types of processing. An image processing apparatus includes an image processing unit configured to perform thickening processing for an object included in an input image and to perform saturation suppression processing for an edge portion of the object in the input image for which the thickening processing has been performed. | 2018-06-07 |
20180160015 | CORRECTION TABLE GENERATION METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM FOR STORING PROGRAM - A correction table generation method includes acquiring a first color value of an apparatus non-dependent color system corresponding to an input value of a multi-dimensional apparatus dependent color system in an input lattice point of the color conversion table and generating a multi-dimensional conversion table in which a first apparatus dependent value of the apparatus dependent color system having color conversion table color value characteristics is correlated with a second apparatus dependent value having a color value of the apparatus non-dependent color system and approximate linearity over multiple dimensions, and the generating of the multi-dimensional conversion table includes determining a target color value for multiple dimensions having approximate linearity over the multiple dimensions with respect to the input value, and determining a first apparatus dependent value corresponding to a second apparatus dependent value using the input value, the target color value for the multiple dimensions, and the first color value. | 2018-06-07 |
20180160016 | CAMERA COVER AND IMAGING SYSTEM INCLUDING THE SAME - A camera cover configured to accommodate a camera lens is provided, which includes a case having an opening that opens upwardly, a lid body attached to the case, a lid body moving mechanism configured to move the lid body between an open position at which the opening is opened and a closed position at which the opening is closed, an air blow-off part provided in the case and configured to blow off air upwardly when the lid body is at the open position, and an air channel configured to lead air from an air supply source disposed outside the case to the air blow-off part. | 2018-06-07 |
20180160017 | CAMERA MODULE - A camera module includes lens modules; and image sensors corresponding to lens modules, wherein f-numbers of the lens modules corresponding to numerical values indicating amounts of light passing through the lens modules, respectively, are different from each other, and an image sensor corresponding to a lens module having a relatively greater f-number is a color (RGB) sensor and an image sensor corresponding to a lens module having a relatively smaller f-number is a black and white (BW) sensor. | 2018-06-07 |
20180160018 | CAMERA MODULE - A camera module includes a camera, a light emitting unit, and a circuit board. The camera is mounted on the circuit board and has an optical axis. The light emitting unit is disposed on the circuit board and emits a light beam forming a batwing-shaped luminous intensity distribution. The batwing-shaped luminous intensity distribution has at least two peaks of maximum luminous intensity. The optical axis of the camera is arranged at a position between the at least two peaks of the batwing-shaped luminous intensity distribution. The camera module emits uniform light, enhancing the authenticity of the image and increasing the reliability of the recognition system. | 2018-06-07 |
20180160019 | VEHICLE PHOTOGRAPHIC CHAMBER - A circular dome for photographing vehicles includes curved frames that support a skin. Contoured walls are offset from the curved frames to define a recess adapted to receive cameras and lights for photographing vehicles around a perimeter of the dome. A door matching the contour of the curved frames completes the dome and is sized to receive a vehicle, to collectively implement subtractive lighting along the perimeter of the dome. A system and method are provided for automatically photographing vehicles in an enclosable circular domed structure, where an automated process captures a series of vehicle images and uploads the captured images to a web template for display and recordation. | 2018-06-07 |
20180160020 | PRIVACY DEVICE AND METHOD FOR USE WITH NETWORK ENABLED CAMERAS - The invention disclosed herein concerns a privacy enhancing device for use with IP cameras and related methods. The privacy device includes an adjustable light filter and is configured to be placed over the lens of an IP camera such that the image captured by the IP camera passes through the filter. The transparency of the light filter is controlled using a control module in response to user inputs received using an on-board user interface so as to provide varying levels of privacy ranging from an opaque state to a transparent state. Inputs that serve to facilitate and enhance operation of the device can also be received from other input sources such as connected computing devices. For security, the control path defined by the control module and the on-board user input device can be isolated from other more sophisticated control devices that can be prone to hacking and remote control. | 2018-06-07 |
20180160021 | Method and Device for Photographing Dynamic Picture - A method and a device for photographing a dynamic picture. The method includes: entering a dynamic picture photographing mode; continuously photographing a plurality of images at different shutter speeds, wherein the images include static images photographed at a first shutter speed and dynamic images photographed at a second shutter speed, the first shutter speed being greater than the second shutter speed; and automatically composing the static images and dynamic images to obtain a dynamic picture. According to the solution provided by the embodiment of the present invention, by photographing the static images and the dynamic images at different shutter speeds and automatically composing them to obtain the dynamic picture, a user can obtain a relatively ideal dynamic picture through a terminal; no photographing skill or picture processing technique is required of the user, and the entertainment value and experience of the terminal user are increased. | 2018-06-07 |
20180160022 | INFORMATION COMMUNICATION APPARATUS, METHOD, AND RECORDING MEDIUM USING SWITCHABLE NORMAL MODE AND VISIBLE LIGHT COMMUNICATION MODE - An apparatus is provided that includes a display, a processor, and a recording medium having a program that causes the processor to execute operations, including obtaining a first image by image capture with a first exposure time by starting exposure for a plurality of exposure lines in an image sensor. The operations also include obtaining a second image, including a plurality of bright lines, by capturing a subject changing in luminance by the image sensor with a second exposure time by starting exposure for the plurality of exposure lines in the image sensor. The operations further include obtaining information by demodulating data specified by a pattern of the plurality of bright lines included in the obtained second image, and displaying a first image on the display during a period of obtaining of the second image using the image sensor. | 2018-06-07 |
20180160023 | DETERMINATION OF EXPOSURE TIME FOR AN IMAGE FRAME - An apparatus for adjusting an exposure time for an image frame is presented. The apparatus comprises at least one processing unit and at least one memory. The at least one memory stores program instructions that, when executed by the at least one processing unit, cause the apparatus to process at least one image frame, select at least one region of interest from the at least one image frame, process at least two consecutive image frames to determine a motion field, segment the motion field into at least one motion cluster, select, based on the at least one region of interest and the at least one motion cluster, the most relevant motion cluster, and adjust the exposure time based on motion information of the selected most relevant motion cluster. | 2018-06-07 |
20180160024 | DATA MANAGEMENT DEVICE AND METHOD FOR MANAGING DATA - A data management device and a method of managing data are provided. The data management device includes a communication interface configured to receive image data from cameras, and a data manager configured to sort predetermined cameras among the cameras into a group, and record, among the received image data, image frames that are received from the predetermined cameras sorted into the group, in a single group file. | 2018-06-07 |
20180160025 | AUTOMATIC CAMERA CONTROL SYSTEM FOR TENNIS AND SPORTS WITH MULTIPLE AREAS OF INTEREST - A single operator, automatic camera control system is disclosed for providing action images of players, during a sporting event. A LiDAR scanner obtains images from a field of play and is configured for generating multiple sequential LiDAR data of each player on the field. At least one fixed video camera is focused on a designated area of the field for generating video images that supplement the LiDAR data. A control computer is connected to the LiDAR scanner and the at least one video camera and is configured to combine the LiDAR data and the video images to create a composite target image representative of each player, and to update the composite target image during the sporting event. | 2018-06-07 |
20180160026 | MOBILE TERMINAL AND CONTROLLING METHOD THEREOF - A mobile terminal and controlling method thereof are disclosed, by which a flying object equipped with a camera can be remotely controlled. The present disclosure includes a wireless communication unit configured to perform a communication with a flying object, a touchscreen configured to output a preview image received from the flying object, and a controller outputting a shot mode list on the preview image, the controller, if at least one shot mode is selected from the shot mode list, remotely controlling a flight location of the flying object in accordance with the selected at least one shot mode. | 2018-06-07 |
20180160027 | Image Sensor with In-Pixel Depth Sensing - An imaging area in an image sensor includes a plurality of photo detectors. A light shield is disposed over a portion of two photo detectors to partially block light incident on the two photo detectors. The two photo detectors and the light shield combine to form an asymmetrical pixel pair. The two photo detectors in the asymmetrical pixel pair can be two adjacent photo detectors. The light shield can be disposed over contiguous portions of the two adjacent photo detectors. A color filter array can be disposed over the plurality of photo detectors. The filter elements disposed over the two photo detectors can filter light representing the same color or different colors. | 2018-06-07 |
20180160028 | FOCUS DETECTION APPARATUS, CONTROL METHOD THEREFOR, AND STORAGE MEDIUM FOR USE THEREWITH - Focus detection apparatuses, control methods, and storage mediums for use therewith are provided herein. In a focus detection apparatus, a difference amplification unit performs processing for amplifying a difference between an A-image signal and a B-image signal, and a CPU determines whether or not a focus state is a false in-focus state on the basis of a result of a correlation calculation performed by a correlation calculation unit on the A-image signal and the B-image signal. The A-image signal and the B-image signal are output from an imaging sensor that receives a pair of luminous fluxes passing through different pupil areas of an imaging optical system. | 2018-06-07 |
20180160029 | CONTROL APPARATUS, IMAGE PICKUP APPARATUS, IMAGE PICKUP SYSTEM, LENS APPARATUS, CONTROL METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - A control apparatus includes a focus detector configured to perform focus detection based on an image signal obtained via a first optical system, the first optical system having a shallowest depth of field in a plurality of optical systems having focal lengths different from each other, and a controller configured to perform focus control of the plurality of optical systems based on an output signal from the focus detector. | 2018-06-07 |
20180160030 | SYSTEMS AND METHODS FOR VARYING FIELD OF VIEW OF OUTSIDE REAR VIEW CAMERA - A side rear view camera is located at a driver or passenger side of the vehicle, captures video beside and behind the vehicle, and has a predetermined field of view (FOV). The predetermined FOV is defined by a first predetermined horizontal angle of view (AOV) and a first predetermined vertical AOV. A display is located and visible within a passenger cabin of the vehicle. A display module, on the display: displays a first portion of the video from within a first predetermined portion of the predetermined FOV, the first predetermined portion of the predetermined FOV being defined by a second predetermined horizontal AOV and a second predetermined vertical AOV; and selectively displays a second portion of the video from within a second predetermined portion of the predetermined FOV, the second predetermined portion of the predetermined FOV being defined by a third predetermined horizontal AOV and a third predetermined vertical AOV. | 2018-06-07 |
20180160031 | IMAGING DEVICE, IMAGING METHOD, AND STORAGE MEDIUM - An imaging device includes the following. A temporary storage cyclically stores images captured in succession, for a predetermined duration or a predetermined number of images. A record controller records temporarily stored images for a first duration or a first number of images before detection of the capturing instruction, and records images for a second duration or a second number of images after detection of the capturing instruction. A setter preliminarily determines a total duration of the first duration and the second duration or a total number of the first number and the second number, and determines a ratio of the first duration to the second duration or a ratio of the first number to the second number in response to one type of operation of an operational input unit, while retaining the preliminarily determined total duration or the preliminarily determined total number. | 2018-06-07 |
20180160032 | DIRECTIONALITY CONTROL SYSTEM, CALIBRATION METHOD, HORIZONTAL DEVIATION ANGLE COMPUTATION METHOD, AND DIRECTIONALITY CONTROL METHOD - A directionality control system includes: a camera; a microphone provided as a separate body from the camera; a display that displays video data captured by the camera; and a processor that computes a sound collection direction, which is directed from the microphone toward a sound position corresponding to a designated position in the video data. The processor computes the sound collection direction by using parameters including: a first height of the camera from a reference surface, a second height of the microphone from the reference surface, a third height of a computation reference point from the reference surface, the computation reference point being positioned in the sound collection direction at a position different from the sound position, a direction which is directed from the camera toward the sound position, and a fourth height of the sound position from the reference surface. | 2018-06-07 |
20180160033 | MOBILE TERMINAL AND METHOD OF PERFORMING MULTI-FOCUSING AND PHOTOGRAPHING IMAGE INCLUDING PLURALITY OF OBJECTS USING THE SAME - The present invention provides a mobile terminal and a method of capturing an image using the same. The mobile terminal controls a camera conveniently and efficiently to capture an image and performs focusing in various manners to capture an image. Accordingly, a user can obtain a desired image easily and conveniently. | 2018-06-07 |
20180160034 | Dynamic tracking device - A dynamic tracking device provided in the present disclosure comprises a video recording module, a controller and a rotating module. The controller, which is connected with the video recording module to receive captured images for a target, analyzes pixels distributed on the captured images to detect the target's location and enable the rotating module to be turned. Accordingly, the dynamic tracking device effectuates dynamic surveillance by the controller which analyzes differences of pixels among images shot by the video recording module and enables the rotating module to be turned for tracking a target moving or staying at a location. | 2018-06-07 |
20180160035 | Robot System for Controlling a Robot in a Tele-Operation - A robot system controls a robot in a tele-operation. The system includes a robot and a remote gaze-controlled camera (GCC) system. The robot includes one or more manipulators and a vision system that provides a remote operator a close up view of a work area of the robot from the robot's perspective. The robot and the one or more manipulators are remotely controlled by the hands of the operator. The GCC system remotely controls the vision system of the robot. The GCC system relieves the operator of additional manual control of the video camera and the cognitive workload of manual control of the video camera. The GCC system includes a video display for remotely displaying video from the vision system to the operator, an eyetracker for determining an eye/head activity variable of the operator's eye, and a processor that remotely controls the video camera based on the eye/head activity variable. | 2018-06-07 |
20180160036 | AUTOMATED IMAGE CAPTURE BASED ON IMAGE CONTEXT - According to an embodiment of the present invention, a system dynamically captures and stores an image based on the context of the image being captured. Initially, an image capture device receives and analyzes an image to determine a first set of one or more attributes associated with the image. A processor compares the first set of attributes associated with the image with a second set of one or more pre-defined attributes associated with an image context indicating preferences for image capture, and, based on the results of the comparing, instructs the image capture device to store the image. Embodiments of the present invention further include a method and computer program product for capturing an image based on the context of the image in substantially the same manner described above. | 2018-06-07 |
20180160037 | ELECTRONIC APPARATUS, METHOD, AND PROGRAM - According to one embodiment, an electronic apparatus configured to control a camera includes communication circuitry and processor circuitry. The communication circuitry is capable of communicating with a first electronic terminal of a first user or a second electronic terminal of a second user when a plurality of electronic terminals including the first electronic terminal and the second electronic terminal are located within a first communication distance from the electronic apparatus. The processor circuitry is configured to start capturing with the camera when the first electronic terminal is located outside of the first communication distance, and stop the capturing when at least the first electronic terminal is located within the first communication distance. | 2018-06-07 |
20180160038 | ADVANCED RAW CONVERSION TO PRODUCE HIGH DYNAMIC RANGE, WIDE COLOR GAMUT OUTPUT - Described are examples for generating high dynamic range (HDR)/wide color gamut (WCG) output from an image sensor. A raw red, green, blue (RGB) image obtained by the image sensor can be received. A plurality of color transform operations can be applied to the raw RGB image to generate a HDR/WCG image. The HDR/WCG image can be stored in a memory, displayed on a display, transmitted to another device, etc. | 2018-06-07 |
20180160039 | IMAGE PROCESSING APPARATUS - A histogram is detected from an acquired image, and a ratio of a pixel distribution for a range from a preliminarily set low-brightness-side determination point to a preliminarily set high-brightness-side determination point to the entire histogram is calculated for the detected histogram. In a case where the ratio exceeds a preliminarily set threshold, it is estimated that there is fog or mist and a subject looks hazy. | 2018-06-07 |
20180160040 | HIGH RESOLUTION THIN MULTI-APERTURE IMAGING SYSTEMS - A multi-aperture imaging system comprising a first camera with a first sensor that captures a first image and a second camera with a second sensor that captures a second image, the two cameras having either identical or different FOVs. The first sensor may have a standard color filter array (CFA) covering one sensor section and a non-standard color CFA covering another. The second sensor may have either Clear or standard CFA covered sections. Either image may be chosen to be a primary or an auxiliary image, based on a zoom factor. An output image with a point of view determined by the primary image is obtained by registering the auxiliary image to the primary image. | 2018-06-07 |
20180160041 | PASSIVE AND ACTIVE STEREO VISION 3D SENSORS WITH VARIABLE FOCAL LENGTH LENSES - A stereoscopic | 2018-06-07 |
20180160042 | APPARATUS AND METHOD FOR CONTROLLING CAMERA ARRAY - Apparatus and methods for controlling a camera array are disclosed. According to certain embodiments, a camera system may include a plurality of cameras and a controller coupled to the cameras. The controller may be configured to: activate the cameras; determine at least one of hardware or software conditions of the cameras; when the conditions of the cameras are normal, synchronize an operation mode and operation parameters used in the operation mode to the cameras; when determining that at least one of the operation mode or the operation parameters is not set successfully, generate an alert message; instruct the cameras to initiate an operation; after the operation is initiated, monitor operation status of the cameras; and when an abnormal operation status is detected, instruct the cameras to stop the operation. | 2018-06-07 |
20180160043 | ELECTRONIC DEVICE, CONTROL METHOD, AND STORAGE MEDIUM - A power control unit of an electronic device executes auto power-OFF that causes the electronic device to automatically switch from a first operation mode to a second operation mode. A setting unit sets a set period. In a case where the auto power-OFF is executed when a period equal to or shorter than a first period is set as the set period, a status control unit performs control so that a first operation status at the time of the execution of the auto power-OFF is taken over when the electronic device is placed in the first operation mode next. In a case where the auto power-OFF is executed when a period longer than the first period is set as the set period, the status control unit performs control so that the first operation status is not taken over. | 2018-06-07 |
20180160044 | IMAGE PROCESSING DEVICE AND SYSTEM - An image processing device includes a first image distortion correction circuit configured to receive a first image and first calibration data from a first camera module, and perform correction on the first image based on the first calibration data; a second image distortion correction circuit configured to receive a second image and second calibration data from a second camera module, receive a first tilt range from the first camera module, and perform correction on the second image based on the second calibration data and the first tilt range; and an image processing unit (IPU) configured to receive corrected first and second images from the first and second image distortion correction circuits, and perform image processing on the corrected first and second images, wherein the first image is obtained by correcting an image acquired from an image sensor of the first camera module based on the first tilt range. | 2018-06-07 |
20180160045 | METHOD AND DEVICE OF IMAGE PROCESSING AND CAMERA - An image processing method includes obtaining an acquired image captured by an image acquisition device and obtaining vibration information associated with the acquired image and generated while the image acquisition device captures the acquired image. The vibration information includes an angle of vibration. The method further includes performing a distortion correction to the acquired image to obtain a distortion-corrected image based upon the vibration information and a preset distortion correction parameter, and determining a target image from the distortion-corrected image based upon the vibration information. | 2018-06-07 |
20180160046 | DEPTH-BASED ZOOM FUNCTION USING MULTIPLE CAMERAS - A method for displaying preview images is disclosed. In one aspect, the method includes: receiving first images captured by a first camera having a first field-of-view (FOV), receiving second images captured by a second camera having a second FOV that is different than the first FOV, and displaying preview images generated based on the first and second images. The method may further include determining a spatial transform based on depth information associated with individual pixels in the first and second images, and upon receiving instructions to zoom in or out beyond a camera switching threshold, modifying the second image using the spatial transform and displaying the first image and the modified second image consecutively. | 2018-06-07 |
20180160048 | IMAGING SYSTEM AND METHOD OF PRODUCING IMAGES FOR DISPLAY APPARATUS - An imaging system and a method of producing images for a display apparatus via the imaging system. The imaging system includes at least one focusable camera for capturing at least one image of a given real-world scene; means for generating a depth map or a voxel map of the given real-world scene; and a processor coupled to the focusable camera and the aforesaid means. The processor is communicably coupled with the display apparatus. The processor is configured to receive information of the gaze direction of the user; map the gaze direction to the depth map or the voxel map to determine an optical depth of a region of interest in the given real-world scene; and control the focusable camera to employ a focal length that is substantially similar to the determined optical depth of the region of interest when capturing the at least one image. | 2018-06-07 |
20180160049 | INFORMATION PROCESSING APPARATUS, CONTROL METHOD THEREFOR, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - An information processing apparatus receives input operation information by an operation unit configured to successively perform a moving operation for a position and/or orientation of a virtual viewpoint for generating a virtual-viewpoint image, switches the virtual viewpoint to another virtual viewpoint located at a position spatially separated from a position of the virtual viewpoint and having an image-capturing space common to the virtual viewpoint, and determines, if the virtual viewpoint is switched, a motion of the other virtual viewpoint after the switching in accordance with a motion of the virtual viewpoint based on the operation information before the switching. | 2018-06-07 |
20180160050 | AUTOMATED LOCAL POSITIONING SYSTEM CALIBRATION USING OPTICALLY READABLE MARKERS - An improved mechanism for calibrating a local positioning system through the use of passive or retro-reflective markers is described herein. A plurality of imaging targets with the passive or retro-reflective markers may be attached or affixed on a surface of an object. The local positioning system may then capture a first image of the imaging targets in a non-illuminated state and further capture a second image of the imaging targets in an illuminated state. A difference image between the first and second captured images may be computed and then segmented. The local positioning system may then identify the plurality of imaging targets based on the segmented difference image and position itself to extract information. The extracted information may then be used to help calibrate the local positioning system. | 2018-06-07 |
20180160051 | SYSTEM AND METHOD FOR PROVIDING IMAGES AND VIDEO HAVING HIGH DYNAMIC RANGE - Apparatuses, computer readable media, and methods are disclosed for providing composite images and/or videos having a high dynamic range. The apparatus includes a video capture module for capturing video images of a scene at differing exposure levels. The apparatus further includes a region identification module for identifying regions in the captured video images of the scene that may benefit from being individually optimally-exposed. The apparatus further includes a region adjustment module for updating the positions of the various identified regions within the scene and a region exposure adjustment module for determining optimal exposure settings for the various identified regions of the scene. A video subsystem composites and encodes the various optimally-exposed regions of the scene onto a video image of a static portion of the scene having a high dynamic range that was, e.g., captured at a different moment in time than the various optimally-exposed regions of the scene. | 2018-06-07 |
20180160052 | IMAGING SYSTEM, AND MOBILE SYSTEM - An imaging system includes: a first imaging apparatus including a first imaging element and a first optical system; and a generator that generates an image based on image data acquired from the first imaging element. The first optical system includes a first free-form surface lens that has a shape that allows an image to be formed on the first imaging element such that a resolution is different between a portion of a predetermined region and another portion of the predetermined region, the resolution being defined as a total number of imaging pixels that capture an image within a unit field angle on a horizontal plane. | 2018-06-07 |
20180160053 | IMAGE PROCESSING DEVICE THAT CHANGES EXTENT OF IMAGE ALTERING BY FIRST AND SECOND IMAGE PROCESSING - An image processing device includes: an image processing unit that alters an image by executing first image processing and second image processing on the image; and a processing change unit that changes an extent to which the image is altered through the first image processing and an extent to which the image is altered through the second image processing. The processing change unit raises the extent to which the image is altered through the second image processing when the extent to which the image is altered through the first image processing is lowered. | 2018-06-07 |
20180160054 | SYSTEM AND METHOD FOR AUTOMATICALLY GENERATING SPLIT SCREEN FOR A VIDEO OF A DYNAMIC SCENE - A system and method for automatically generating split screen for a video of a dynamic scene is disclosed. The system generates the split screen by (a) obtaining the video of the dynamic scene, (b) detecting one or more objects in the video, (c) selecting one or more shot specifications for the one or more objects, (d) automatically cropping the dynamic scene of the one or more objects based on selected shot specifications, (e) automatically selecting a layout configuration for the split screen based on the cropped dynamic scenes to partition a display screen of a computing unit, (f) optimizing the split screen, (g) automatically generating the split screen by depicting (i) an overview of an original shot of the video in a top half and (ii) the cropped dynamic scenes in the one or more partition views of a bottom half and (h) displaying the split screen to a user. | 2018-06-07 |
20180160055 | MEDIA EFFECT APPLICATION - Exemplary embodiments relate to the application of media effects, such as visual overlays, sound effects, etc. to a video conversation. A media effect may be applied as a reaction to an occurrence in the conversation, such as in response to an emotional reaction detected by emotion analysis of information associated with the video. Effect application may be controlled through gestures, such as applying different effects with different gestures, or canceling automatic effect application using a gesture. Effects may also be applied in group settings, and may affect multiple users. A real-time data channel may synchronize effect application across multiple participants. When broadcasting a video stream that includes effects, the three channels may be sent to an intermediate server, which stitches the three channels together into a single video stream; the single video stream may then be sent to a broadcast server for distribution to the broadcast recipients. | 2018-06-07 |
20180160056 | INFRARED DETECTOR WITH INCREASED IMAGE RESOLUTION - An apparatus for increasing the resolution of at least one portion of a cryogenically cooled and vacuum-sealed infrared imaging detector including a two-dimensional detector array having a fill factor value, the portion of the array successively exposed to an image scene to acquire multiple infrared imaging samples of the scene, a masking filter having a single pattern, disposed between the array and the image scene and operative to reduce the fill factor value of the portion of the array by a fill factor reduction amount (FFRA), an optical element disposed between the masking filter and the image scene, a directional angle shifter for shifting an angle at which the imaging samples are directed onto the portion of the array thereby successively shifting the image scene relative to the portion of the array between each of the imaging samples by a shifting increment corresponding to the FFRA and a processor. | 2018-06-07 |
20180160057 | LEASE OBSERVATION AND EVENT RECORDING - A system and method for observing and reporting key vehicle events based upon information collected by a wide array of sensors already included in modern motor vehicles is provided. The system and method may be particularly valuable for electric vehicle applications. | 2018-06-07 |
20180160058 | SOLID-STATE IMAGING DEVICE AND ELECTRONIC APPARATUS - Image quality can be improved. In a case where light is focused on the center of an opening, sensitivity can be decreased by extending any side of a light-shielding film to the center. Namely, when the sensitivity is to be decreased, there is no need to change all the sides simultaneously; the adjustment may be achieved by moving only a specific side. Further, since each side is regarded as an individual adjustment parameter, a complicated inside-angle-of-view distribution can be corrected. The present disclosure can be applied to, for example, a CMOS solid-state imaging device to be used in an imaging device such as a camera. | 2018-06-07 |
20180160059 | SYSTEM AND METHOD FOR USING FILTERING AND PIXEL CORRELATION TO INCREASE SENSITIVITY IN IMAGE SENSORS - An oversampled image sensor in which the pixel size is small enough to provide spatial oversampling of the minimum blur size of the sensor optics is disclosed. Image processing to detect targets below the typical limit of 6× the temporal noise floor is also disclosed. The apparatus is useful in detecting dimmer targets and targets at a longer range from the sensors. The invention exploits signal processing that allows spatial-temporal filtering of the superpixel image in such a manner that the Noise Equivalent Power is reduced by means of Superpixel Filtering and Pooling, which increases the sensitivity far beyond a non-oversampled imager. Overall visual acuity is improved due to the ability to detect dimmer targets, provide better resolution of low-intensity targets, and improve false alarm rejection. | 2018-06-07 |
20180160060 | IMAGE PICKUP DEVICE AND METHOD ENABLING CONTROL OF SPECTRAL SENSITIVITY AND EXPOSURE TIME - The present technique relates to an image pickup device, an image pickup method, and a program that enable pixels having 4 types of spectral sensitivities to be controlled while changing exposure times. | 2018-06-07 |
20180160061 | Single Image Sensor for Capturing Mixed Structured-light Images and Regular Images - An integrated image sensor for capturing a mixed structured-light image and a regular image is disclosed. The integrated image sensor comprises a pixel array, one or more output circuits, one or more analog-to-digital converters, and one or more timing and control circuits. The timing and control circuits are arranged to perform a set of actions including capturing a regular image and a structured-light image. According to the present invention, the structured-light image captured before or after the regular image is used to derive depth or shape information for the regular image. An endoscope based on the above integrated image sensor is also disclosed. The endoscope may comprise a capsule housing adapted to be swallowed, in which the integrated image sensor, a structured light source, and a non-structured light source are enclosed and sealed. | 2018-06-07 |
20180160062 | IMAGE CAPTURING APPARATUS AND DRIVING METHOD OF IMAGE SENSOR - An image capturing apparatus is provided. The apparatus comprises an image sensor comprising pixels and a controller. The pixels include a photoelectric converter, a holding unit, an output unit, and first and second transfer units. The controller causes the sensor to repeatedly perform a holding operation, in which the first transfer unit transfers charge from the photoelectric converter to the holding unit, and a transfer operation, in which the second transfer unit transfers the charge from the holding unit to the output unit. As the holding operation, the controller causes the sensor to perform holding operations including a first holding operation, in which the first transfer unit transfers charge generated in a first exposure time, and a second holding operation, in which the first transfer unit transfers charge generated in a second exposure time longer than the first exposure time. | 2018-06-07 |
20180160063 | SAMPLE-AND-HOLD CIRCUIT WITH RTS NOISE REDUCTION BY PERIODIC MODE SWITCHING - A sample-and-hold circuit includes an amplifier transistor, a resistor connected between a source terminal of the amplifier transistor and a voltage, a first switch connected in parallel with the resistor, and a second switch connected between a gate terminal of the amplifier transistor and the voltage. When the first switch is closed and the second switch is open, the amplifier transistor is in an inversion mode; and when the first switch is open and the second switch is closed, the amplifier transistor is in an accumulation mode. | 2018-06-07 |
20180160064 | SOLID-STATE IMAGING DEVICE AND ELECTRONIC APPARATUS - The present disclosure relates to a solid-state imaging device and an electronic apparatus that are capable of validating data at the time of vertical addition even when special pixels exist. In case A, when, for example, a special pixel S in the second row and an imaging pixel GR in the fourth row are to be added and the special pixel S is selected, the addition of the imaging pixel GR is masked so that information on the special pixel S can be output. In case B, when, for example, a special pixel S in the second row and an imaging pixel GR in the fourth row are to be added and the imaging pixel GR is selected, the addition of the special pixel S is masked so that information on the imaging pixel GR can be output. The present disclosure is applicable, for example, to a CMOS solid-state imaging device that is used as an image pickup device such as a camera. | 2018-06-07 |
20180160065 | IMAGE SENSOR, ELECTRONIC DEVICE, CONTROL DEVICE, CONTROL METHOD, AND PROGRAM - The present technology relates to an image sensor capable of achieving both higher S/N and higher frame rate, an electronic device, a control device, a control method, and a program. | 2018-06-07 |
20180160066 | A/D CONVERTER AND SENSOR DEVICE USING THE SAME - An A/D converter includes an analog input terminal, a successive approximation A/D converter connected to the analog input terminal, the successive approximation A/D converter for generating an upper conversion result at an upper conversion result terminal, the successive approximation A/D converter having an internal D/A converter generating an internal reference voltage at an internal reference voltage terminal, and a delta-sigma A/D converter connected to the analog input terminal and the internal reference voltage terminal, the delta-sigma A/D converter for generating a lower conversion result at a lower conversion result terminal. | 2018-06-07 |
20180160067 | SOLID-STATE IMAGING DEVICE, SIGNAL PROCESSING METHOD THEREFOR, AND ELECTRONIC APPARATUS - The present disclosure relates to a solid-state imaging device, a signal processing method therefor, and an electronic apparatus enabling sensitivity correction in which a sensitivity difference between solid-state imaging devices is suppressed. | 2018-06-07 |
20180160068 | DIGITAL READOUT METHOD AND APPARATUS - Autonomously operating analog-to-digital converters are formed into a two-dimensional array. The array may incorporate digital signal processing functionality. Such an array is particularly well-suited for operation as a readout integrated circuit and, in combination with a sensor array, forms a digital focal plane array. | 2018-06-07 |
20180160069 | METHOD AND SYSTEM TO TEMPORARILY DISPLAY CLOSED CAPTION TEXT FOR RECENTLY SPOKEN DIALOGUE - A method, system and computer readable media temporarily display closed captions. An equipment device (e.g., set top box, smart TV or other display device) receives a source signal that includes at least a video signal and a plurality of closed caption entries. Each closed caption entry is associated with a portion of the video signal. The equipment device stores in a memory the plurality of closed caption entries and displays, on a display device interfaced with the equipment device, the video signal. The equipment device also displays, on the display device, at least one prior closed caption entry of the plurality of closed caption entries stored in the memory whose associated portion of the video signal was previously displayed on the display device, wherein the prior closed caption entry is displayed without interrupting display of the video signal. | 2018-06-07 |
20180160070 | BODY CAMERA - Described herein is a body camera, a method of operating a body camera, a system for a body camera, and methods and systems for configuring a body camera. The body camera may include a digital camera configured to capture video. The body camera may also include one or more sensors for sensing data about the environment experienced by the body camera. The body camera may also include a global positioning system (GPS) receiver. The body camera may also include a transceiver for streaming video and sensor data. | 2018-06-07 |
20180160071 | Feature Detection In Compressive Imaging - The present disclosure provides systems and methods that are configured for feature extraction or object recognition using compressive measurements that represent a compressed image of a scene. In various aspects, a compressive sensing matrix is constructed and used to acquire the compressive measurements, such that in the extraction phase, the compressive measurements can be processed to detect feature points and determine their feature vectors in the scene without using a pixel representation of the scene. The determined feature vectors are used to detect objects based on comparison with one or more predetermined feature vectors. | 2018-06-07 |
20180160072 | ESTABLISHING A VIDEO CONFERENCE DURING A PHONE CALL - Some embodiments provide a method for initiating a video conference using a first mobile device. The method presents, during an audio call through a wireless communication network with a second device, a selectable user-interface (UI) item on the first mobile device for switching from the audio call to the video conference. The method receives a selection of the selectable UI item. The method initiates the video conference without terminating the audio call. The method terminates the audio call before allowing the first and second devices to present audio and video data exchanged through the video conference. | 2018-06-07 |
20180160073 | REMOTE MONITORING OF TELEMEDICINE DEVICE - Methods and systems for monitoring usage of a telemedicine system are described. A monitoring system at a hospital or other central monitoring location provides for communication between personnel at a monitoring location (e.g., a medical care provider) and a patient and/or caregiver at a patient location (e.g., the patient's home) via a telemedicine system. The telemedicine system may provide for audiovisual or other communication between the monitoring location and the patient location, which may be in combination with medical monitoring or treatment provided with one or more associated articles of medical equipment. The medical support monitoring system tracks the amount and type of usage of the telepresence system and/or associated medical equipment. Tracked information regarding system usage may be used for various purposes, including billing, quality assurance, and data analytics such as population studies of usage patterns. Usage information may be linked to the identity of the patient, caregiver, or equipment used, or anonymized, depending upon the intended use. | 2018-06-07 |
20180160074 | TRANSITIONING A TELEPHONE NETWORK PHONE CALL TO A VIDEO CALL - The present disclosure relates to systems, methods, and devices for transitioning phone calls to video calls. Specifically, one or more embodiments allow users to transition from an active phone call over a telephone network to a video call. One or more embodiments determine a first user identifier for a first user and a second user identifier for a second user. Additionally, the systems and methods identify user client devices actively performing the phone call based on the user identifiers and provide an option to the identified client devices to switch the phone call to the video call. Transitioning the phone call to a video call involves generating a null connection prior to selection of the option to transition to the video call and then streaming media between the client devices using the generated null connection in response to selection of the option. | 2018-06-07 |
20180160075 | Automatic Camera Selection - Various embodiments enable a video messaging experience which permits the exchange of short video messages in an asynchronous manner. The video messaging experience preserves the video intimacy and experience of synchronous-type video communications, while at the same time provides the convenience of SMS-type message exchange. | 2018-06-07 |
20180160076 | COMMUNICATION TERMINAL, COMMUNICATION SYSTEM, MOVING-IMAGE OUTPUTTING METHOD, AND RECORDING MEDIUM STORING PROGRAM - A communication terminal comprising circuitry, a method of outputting moving images, and a computer-readable non-transitory recording medium storing a program for causing a computer to execute the method. The communication terminal and the method include receiving an event related to an initiation of communication with a counterpart communication terminal, inputting moving images output by an external device, and outputting the input moving images to a display. After the event related to the initiation of communication is received, the outputting includes reducing a frame rate of the moving images to be output to the display. | 2018-06-07 |
20180160077 | System, Method and Software for Producing Virtual Three Dimensional Avatars that Actively Respond to Audio Signals While Appearing to Project Forward of or Above an Electronic Display - A system and method of providing a virtual avatar to accompany audio signals being broadcast from an electronic device that has a display screen. A virtual avatar model is created. The virtual avatar model is altered in real time in response to audio signals being broadcast from the electronic device. A 3D stereoscopic or auto-stereoscopic video file is created using the virtual avatar model while the virtual avatar model is responding to the audio signals. The 3D video file is played on the display screen of the electronic device. When viewed, the 3D video file shows an avatar that appears, at least in part, to a viewer to be three-dimensional. Furthermore, the avatar appears to extend out from the display screen. The result is a three-dimensional avatar that appears to extend out of a display screen, wherein movements of the avatar are synchronized to audio signals that are being broadcast. | 2018-06-07 |
20180160078 | System and Method for Producing Three-Dimensional Images from a Live Video Production that Appear to Project Forward of or Vertically Above an Electronic Display - A system and method for communicating between a first person at a first location and one or more people at remote locations. A production set is established at the first location where the first person is imaged with stereoscopic video cameras. The stereoscopic footage is digitally enhanced with 3D effects to create a production video file. The production video file is transmitted to an electronic device at one or more remote locations. The production video file is played and creates images of the first person at the first location. On a display screen at the remote locations, the images appear three dimensional to the remote viewers. Furthermore, the images appear to extend in front of, or above, the display screen of the electronic device. | 2018-06-07 |
20180160079 | PUPIL DETECTION DEVICE - A pupil detection device includes an active light source, an image sensor and a processing unit. The active light source emits light toward an eyeball. The image sensor captures at least one image frame of the eyeball to serve as an image to be identified. The processing unit is configured to calculate a minimum gray value in the image to be identified and to identify, as a pupil area, a plurality of pixels surrounding the pixel with the minimum gray value and having gray values within a gray value range. | 2018-06-07 |
20180160080 | WIDE AREA INTERMITTENT VIDEO USING NON-ORTHORECTIFIED FEATURE MATCHING IN LONG PERIOD AERIAL IMAGE CAPTURE WITH PIXEL-BASED GEOREFERENCING - This application relates to techniques for obtaining wide area intermittent video (WAIV). Some embodiments disclosed herein include a method of obtaining WAIV. The method can include, for example, capturing images at a series of sensor stations having pre-determined locations along a flightline. The flightline can be repeated one or more times, where images are captured at the same sensor stations with each pass of the flightline. The captured images from the same sensor station may have replicated view geometry and may be co-registered and precisely aligned with pixel-level precision. The captured images from multiple sensor stations through time may also be displayed together based upon absolute or relative sensor station locations to create a temporal sequence of wide area intermittent video. The approach provides efficient methods for creating wide area video with reduced temporal imaging frame rates. Systems and devices for forming wide area intermittent video are also disclosed. | 2018-06-07 |
20180160081 | INFORMATION PROCESSING METHOD, ELECTRONIC DEVICE AND COMPUTER STORAGE MEDIUM - The present invention discloses an information processing method, an electronic device and a computer storage medium. The method is applied to a mobile electronic device provided with an image collection unit capable of image collection, and includes: acquiring a first instruction indicating that image collection based on target tracking is to be carried out; responding to the first instruction and acquiring position information between a target object and the electronic device; acquiring a first parameter according to the position information and controlling a movement state of the electronic device according to the first parameter, so that the electronic device tracks the target object; and acquiring a second parameter according to the position information and controlling an image collection posture of the image collection unit according to the second parameter, so that the image collection unit can acquire an image satisfying a first condition while the electronic device tracks the target object. | 2018-06-07 |
20180160082 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus comprises a correction unit configured to generate a corrected image from an input image using a correction value; an image processing unit configured to generate a first image, in which a first pattern for measuring an ambient light distribution is incorporated into the input image or the corrected image, and a second image that is different from the first image; an acquisition unit configured to acquire an ambient light distribution based on captured images of a region in which the first image and the second image are displayed while the first image and the second image are switched therebetween in the same frame; and a correction value calculation unit configured to calculate the correction value based on the ambient light distribution acquired by the acquisition unit. | 2018-06-07 |
20180160083 | IMAGE GENERATION DEVICE AND IMAGE PROJECTION APPARATUS - An image generation device includes an image generator, a stationary unit, a movable unit, a driver, and a wiring board. The image generator receives light and generates an image. The movable unit includes a movable plate movably supported by a stationary plate of the stationary unit, the image generator mounted on the movable plate, and a diffusion heat radiator connected to the movable plate to cool the image generator. The driver moves the movable unit relative to the stationary unit. The driver includes a driving coil disposed in the radiator and a driving magnet opposed to the coil. The wiring board is connected to the movable unit to pass a current through at least the coil. A movable area of the wiring board, in which the wiring board moves with movement of the movable unit, does not overlap either the coil or the magnet in plan view. | 2018-06-07 |
20180160084 | Image Generation Device and Method for Producing an Array of Image Forming Elements - An image generation device has at least one light source, a field lens array, an array of image-forming elements, and a projection lens array. The array of image-forming elements is formed by superimposing a first pattern array and a second pattern array, which is formed identically to the first pattern array and is rotated relative to the first pattern array. A method is provided for producing the array of image-forming elements. | 2018-06-07 |
20180160085 | CONNECTED LAMP AND LAMP CONTROL METHOD - A portable connected lamp makes it possible to project a pattern. The lamp includes lighting to transmit light. The lighting includes a transmitter to transmit at least one light beam and a projector arranged to project at least one light pattern. The lamp also includes a receiver to receive control instructions via a wireless connection, the control instructions including data to be projected, and a controller to control the projector so that it projects the data received by the receiver. | 2018-06-07 |
20180160086 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - There is provided an information processing apparatus, an information processing method, and a program for reducing any drop in the accuracy of color correction, the information processing apparatus using a captured image captured by a calibrated imaging section imaging a projected image projected to a projection plane by a calibrated projection section in order to set color correction information as the information for correcting a color of each pixel of the projection section. This technology may be applied to electronic equipment having the functions of both a projector and a camera, or to a computer for controlling such electronic equipment. | 2018-06-07 |
20180160087 | HUD IMAGE OPTIMIZATION THROUGH SOFTWARE CONTROL OF DISPLAYED GRAPHICS - A head up display calibration arrangement includes a substrate associated with a windshield. The substrate has a test pattern. A head up display module projects a test display in association with the windshield. The test display at least partially overlaps the test pattern. The test display is projected dependent upon at least one projection parameter value. A camera captures an image of the test display and the test pattern, and transmits an image signal dependent upon the captured image. An electronic processor is communicatively coupled to each of the camera and the head up display module. The processor receives the image signal and determines from the image signal a positional relationship between the test display and the test pattern within the captured image. The processor modifies the projection parameter value dependent upon the determined positional relationship between the test display and the test pattern within the captured image. | 2018-06-07 |
20180160088 | METHOD FOR COLOR MAPPING A VIDEO SIGNAL AND METHOD OF ENCODING A VIDEO SIGNAL AND CORRESPONDING DEVICES - A method for color mapping a video signal into a mapped video signal responsive to at least one color mapping function is disclosed. To this aim, at least one black point offset is determined responsive to a color encoding system of the video signal and to a color encoding system of the mapped video signal. The at least one black point offset and the at least one color mapping function are then applied on the video signal to obtain the mapped video signal. | 2018-06-07 |
20180160089 | TRANSFORMATION OF DYNAMIC METADATA TO SUPPORT ALTERNATE TONE RENDERING - An existing metadata set that is specific to a color volume transformation model is transformed to a metadata set that is specific to a distinctly different color volume transformation model. For example, source content metadata for a first color volume transformation model is received. This source metadata determines a specific color volume transformation, such as a sigmoidal tone map curve. The specific color volume transformation is mapped to a color volume transformation of a second color volume transformation model, e.g., a Bézier tone map curve. Mapping can be a best fit curve, or a reasonable approximation. Mapping results in metadata values used for the second color volume transformation model (e.g., one or more Bézier curve knee points and anchors). Thus, devices configured for the second color volume transformation model can reasonably render source content according to received source content metadata of the first color volume transformation model. | 2018-06-07 |
20180160090 | TRANSMITTING DEVICE, TRANSMITTING METHOD, RECEIVING DEVICE, AND RECEIVING METHOD - Display with an appropriate luminance dynamic range is realizable on a receiving side. A gamma curve is applied to input video data having a level range from 0% to 100%*N (N: a number larger than 1) to obtain transmission video data. This transmission video data is transmitted together with auxiliary information used for converting a high-luminance level on the receiving side. A high-level side level range of the transmission video data is converted on the receiving side such that a maximum level becomes a predetermined level based on the auxiliary information received together with the transmission video data. | 2018-06-07 |
20180160091 | WHITE BALANCE METHOD OF FOUR-COLOR PIXEL SYSTEM - A white balance method of a four-color pixel system is disclosed. The method includes the following steps: lighting up a plurality of sub-pixel units according to two different combinations, respectively, so as to display a white color; and regulating a gray-scale value of each sub-pixel unit, and taking, as the output four-color gray-scale values, the gray-scale values of the sub-pixel units that enable the white color displayed by the two different combinations to meet respective preset conditions. The display effect of the four-color pixel system can be improved through the method, which is simple, efficient, and easy to perform. | 2018-06-07 |
20180160092 | SYSTEMS AND METHODS FOR GENERATING A DIGITAL IMAGE - A system, method, and computer program product for generating a digital image are disclosed. In use, a first image set is captured by a first image sensor, the first image set including two or more first source images and a plurality of chrominance values, and a second image set is captured by a second image sensor, the second image set including two or more second source images and a plurality of luminance values. Next, a first image of the first source images and a second image of the first source images are combined to form a first pair of source images, and a first image of the second source images and a second image of the second source images are combined to form a second pair of source images. Additionally, a first resulting image is generated by combining the first pair of source images with the second pair of source images. Additional systems, methods, and computer program products are also presented. | 2018-06-07 |
20180160093 | PORTABLE DEVICE AND OPERATION METHOD THEREOF - A portable device includes a display module, an image capture module, and an eye-tracking module. The image capture module is used to capture images steadily. The eye-tracking module is used to track the user's viewpoint position relative to the portable device. A viewport within the images captured by the image capture module is adjusted in accordance with the user's viewpoint position to instantaneously generate modified images displayed on the display module. The modified images shown on the display module can fit the background scene seen by the user, so the augmented reality experience on the portable device may be improved accordingly. | 2018-06-07 |
20180160094 | COLOR NOISE REDUCTION IN 3D DEPTH MAP - A camera-based depth mapping system. Depth information is color-coded to make a laser-generated 3D depth map easier to interpret. When the laser illumination is insufficient, the depth information can have a low signal-to-noise ratio, i.e., the depth map can be noisy. Color noise reduction techniques are used to alleviate this problem. | 2018-06-07 |
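The idea in 20180160094 can be illustrated with a minimal sketch: map depth to color, and suppress noise spikes with a median filter. The near-red/far-blue colormap and the 1-D median filter below are assumptions chosen for brevity, not the patent's specific technique.

```python
# Sketch: color-code depth, then knock down isolated noisy samples.

def depth_to_color(depth, d_min=0.0, d_max=10.0):
    """Map a depth value to an (R, G, B) tuple: near = red, far = blue."""
    t = max(0.0, min(1.0, (depth - d_min) / (d_max - d_min)))
    return (int(255 * (1 - t)), 0, int(255 * t))

def median_filter_1d(values, radius=1):
    """Simple 1-D median filter; a real depth map would use a 2-D window."""
    out = []
    for i in range(len(values)):
        window = values[max(0, i - radius): i + radius + 1]
        out.append(sorted(window)[len(window) // 2])
    return out

noisy_depths = [2.0, 2.1, 9.5, 2.0, 2.2]   # 9.5 is a noise spike
clean_depths = median_filter_1d(noisy_depths)
colored = [depth_to_color(d) for d in clean_depths]
```

The median filter removes the spike while preserving the surrounding depth values, which is why median-type filters are a common first choice for depth-map denoising.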
20180160095 | LIGHTING DEVICE AND VEHICLE LAMP INCLUDING SAME - Provided are a lighting apparatus that is capable of implementing various stereoscopic effects and a lamp for a vehicle including the lighting apparatus. A volumetric appearance, a depth effect, and a stereoscopic effect of the optical patterns, emitted after an interaction of excited light from emission layers disposed in adjacent light source modules, can be implemented so that various three-dimensional (3D) light designs can be realized. | 2018-06-07 |
20180160096 | DIGITAL SIGNAGE SYSTEM AND DATA PROCESSING METHOD IN THE SAME - A digital signage system including a first display device; a second display device arranged at a first spatial interval from the first display device and having a display screen that at least partially overlaps a display screen on the first display device; and a controller configured to control the first display device to display single image data at a first time, and control the second display device to display the single image data at a second time based on the first spatial interval between the first display device and the second display device. Further, at least one of the first display device and the second display device is a transparent display. | 2018-06-07 |
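The timing rule in 20180160098 — the second display shows the same frame at a later time derived from the spatial interval — reduces to a one-line calculation. The motion-speed model and all names below are assumptions for illustration only.

```python
# Sketch: derive the second display time from the gap between screens,
# assuming a hypothetical apparent content speed across the gap.

def second_display_time(first_time_s, interval_m, apparent_speed_mps=0.5):
    """Delay the rear (transparent) display so overlapping content
    appears to travel the physical interval between the two screens."""
    return first_time_s + interval_m / apparent_speed_mps

# Screens 0.25 m apart: the rear display shows the frame 0.5 s later.
t2 = second_display_time(first_time_s=0.0, interval_m=0.25)
```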
20180160097 | Instantaneous 180-degree 3D Recording and Playback Systems - The term instantaneous in this invention refers to the roughly 180° horizontal visual field of view that a human senses in real time. The major novelties of the instantaneous 180° (i180°) 3D technology include (a) a combination of multiple binocular and monocular fields of view for image acquisition, (b) a combination of binocular and monocular fields of view in content playback, (c) a multi-resolution scheme for sensing, processing, transmission, and playback, (d) a realization of physical consistency of the line of sight with minimal distortion of all projection lines between imaging and display, and (e) a method for two-way compatibility with conventional binocular 3D and monocular 2D systems. In addition to applications in consumer electronics, the invention has potential applications in professional fields, such as the film industry, theaters, museums, advertisements, surgery, rehabilitation, and assistance to the handicapped and elderly. | 2018-06-07 |
20180160098 | System, Method and Software for Producing Live Video Containing Three-Dimensional Images that Appear to Project Forward of or Vertically Above a Display - A system, method and software for producing 3D effects in a video of a physical scene. The 3D effects can be observed when the video is viewed, either during a live stream or later when viewing the recorded video. A reference plane with peripheral boundaries is defined. A live event is viewed with stereoscopic video cameras. Stereoscopic camera viewpoints that enable the event to be recorded within the peripheral boundaries of the reference plane are calculated. The footage from the stereoscopic video cameras is digitally altered prior to being imaged. The altering of the footage includes bending, tapering, stretching and/or tilting a portion of the footage in real time. Once the footage is altered, a common set of boundaries is set for the superimposed footage to create a final video production. | 2018-06-07 |
20180160099 | SUB-DIFFRACTION LIMIT IMAGE RESOLUTION IN THREE DIMENSIONS - The present invention generally relates to sub-diffraction limit image resolution and other imaging techniques, including imaging in three dimensions. In one aspect, the invention is directed to determining and/or imaging light from two or more entities separated by a distance less than the diffraction limit of the incident light. For example, the entities may be separated by a distance of less than about 1000 nm, or less than about 300 nm for visible light. In some cases, the position of the entities can be determined in all three spatial dimensions (i.e., in the x, y, and z directions), and in certain cases, the positions in all three dimensions can be determined to an accuracy of less than about 1000 nm. In one set of embodiments, the entities may be selectively activatable, i.e., one entity can be activated to produce light, without activating other entities. A first entity may be activated and determined (e.g., by determining light emitted by the entity), then a second entity may be activated and determined. The emitted light may be used to determine the x and y positions of the first and second entities, for example, by determining the positions of the images of these entities, and in some cases, with sub-diffraction limit resolution. In some cases, the z positions may be determined using one of a variety of techniques that uses intensity information or focal information (e.g., a lack of focus) to determine the z position. Non-limiting examples of such techniques include astigmatism imaging, off-focus imaging, or multi-focal-plane imaging. Other aspects of the invention relate to systems for sub-diffraction limit image resolution, computer programs and techniques for sub-diffraction limit image resolution, methods for promoting sub-diffraction limit image resolution, and the like. | 2018-06-07 |
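The astigmatism-imaging route to z mentioned in 20180160099 can be sketched with a toy model: a cylindrical lens makes a point emitter's image elliptical, with x- and y-widths that cross at the focal plane, so the width difference encodes z. The linear calibration, slope value, and function names below are illustrative assumptions, not a real microscope calibration.

```python
# Toy astigmatism model: PSF width grows along x above focus and along y
# below focus; inverting the width difference recovers the z position.

def psf_widths(z_nm, w0_nm=300.0, slope=0.2):
    """Return (x-width, y-width) of the PSF for an emitter at z (nm)."""
    return (w0_nm + slope * max(z_nm, 0.0),
            w0_nm + slope * max(-z_nm, 0.0))

def z_from_widths(wx_nm, wy_nm, slope=0.2):
    """Invert the toy model: signed width difference maps back to z."""
    return (wx_nm - wy_nm) / slope

wx, wy = psf_widths(400.0)      # emitter 400 nm above the focal plane
z_est = z_from_widths(wx, wy)   # recovers ~400 nm
```

In practice the width-versus-z curves are measured on calibration beads and are nonlinear, so the inversion is done by lookup or curve fitting rather than this closed form.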
20180160100 | IMAGING SYSTEM AND METHOD OF PRODUCING CONTEXT AND FOCUS IMAGES - Disclosed is an imaging system and a method of producing a context image and a focus image for a display apparatus, via the imaging system. The imaging system includes at least one imaging sensor per eye of a user, and a processor coupled thereto. The processor is configured to control the imaging sensors to capture at least one image of a real world environment, and is arranged to be communicably coupled with the display apparatus. Furthermore, the processor is configured to: receive, from the display apparatus, information indicative of a gaze direction of the user; determine a region of visual accuracy of the at least one image, based upon the gaze direction of the user; process the at least one image to generate the context image and the focus image; and communicate the generated context image and the generated focus image to the display apparatus. | 2018-06-07 |
20180160101 | VARIABLE FOCAL LENGTH LENSES AND ILLUMINATORS ON TIME OF FLIGHT 3D SENSING SYSTEMS - A time-of-flight 3D imaging system includes a time-of-flight measurement device, an illuminator, and an imaging sensor. The illuminator and the imaging sensor have adjustable optics to vary the field of illumination of the illuminator and the field of view of the imaging sensor. | 2018-06-07 |
20180160102 | METHOD FOR 3D RECONSTRUCTION OF AN ENVIRONMENT OF A MOBILE DEVICE, CORRESPONDING COMPUTER PROGRAM PRODUCT AND DEVICE - A method is proposed for 3D reconstruction of an environment of a mobile device comprising a camera. The method includes calculating a coarse 3D reconstruction of at least one area of the environment by a first reconstruction method that takes into account first pictures of the at least one area captured by the camera, determining if at least one target part exists in the environment based on a detection of at least one object attribute taking into account at least one of the first pictures, calculating a refined 3D reconstruction of the at least one target part by a second reconstruction method that takes into account second pictures of the at least one target part captured by the camera, and aggregating the calculated reconstructions for providing the 3D reconstruction of the environment. | 2018-06-07 |
20180160103 | THREE-DIMENSIONAL DEPTH SENSOR - A three-dimensional (3D) depth sensor may include: a plurality of light sources configured to irradiate light to an object, the light having different center wavelengths; an optical shutter configured to allow reflected light reflected from the object to pass through; and an image sensor configured to filter the reflected light having passed through the optical shutter and detect the filtered light. | 2018-06-07 |
20180160104 | Head Mounted Display - A head mounted display includes first and second light sources, a light turning prism, a field lens group, an image output module, and first and second eyepiece modules. The first and second light sources are respectively configured to emit first and second lights. The image output module is configured to receive the first light and the second light, and to respectively generate a first image light and a second image light with corresponding image information. The light turning prism is optically coupled between the first light source (or the second light source) and the field lens group. The light turning prism is configured to vary a propagating direction of the first light (or the second light) from the first light source (or the second light source) to the image output module. The first and second eyepiece modules are configured to image the second and first image lights to first and second target positions, respectively. | 2018-06-07 |
20180160105 | REPRESENTATIONS OF EVENT NOTIFICATIONS IN VIRTUAL REALITY - According to an example implementation, a method may include receiving, from a non-virtual reality application, a non-virtual reality event notification, and providing, in a virtual environment based on the non-virtual reality event notification, a non-textual indication of a status of the non-virtual reality application, wherein a characteristic of the non-textual indication is adjusted to indicate the status of the non-virtual reality application. | 2018-06-07 |
20180160106 | OMNISTEREO CAPTURE AND RENDER OF PANORAMIC VIRTUAL REALITY CONTENT - Systems and methods are described that include defining, at a computing device, a set of images based on captured images, projecting, at the computing device, a portion of the set of images from a planar perspective image plane onto a spherical image plane by recasting a plurality of viewing rays associated with the portion of the set of images from a plurality of viewpoints arranged around a curved path to a viewpoint, determining, at the computing device, a periphery boundary corresponding to the viewpoint and generating updated images by removing pixels that are outside of the periphery boundary, and providing, for display, the updated images within the bounds of the periphery boundary. | 2018-06-07 |
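The planar-to-spherical recast in 20180160106 can be sketched as re-expressing a viewing ray through a planar image point as spherical angles, then dropping points outside a periphery boundary. The pinhole model, the 60° boundary, and the function names are assumptions for illustration.

```python
# Sketch: recast planar image points onto a sphere and cull anything
# outside a periphery boundary around the viewpoint.
import math

def plane_to_sphere(x, y, focal=1.0):
    """Map a planar image point to (yaw, pitch) on the unit sphere,
    assuming a pinhole camera with the given focal length."""
    yaw = math.atan2(x, focal)
    pitch = math.atan2(y, math.hypot(x, focal))
    return yaw, pitch

def within_periphery(yaw, pitch, limit_rad=math.radians(60)):
    """Keep only rays inside the assumed periphery boundary."""
    return abs(yaw) <= limit_rad and abs(pitch) <= limit_rad

points = [(-0.5, 0.0), (0.0, 0.0), (3.0, 0.0)]
kept = [p for p in points if within_periphery(*plane_to_sphere(*p))]
# (3.0, 0.0) maps to a yaw of ~71.6 degrees and is culled.
```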
20180160107 | BACK LIGHT APPARATUS, DISPLAY APPARATUS HAVING THE BACK LIGHT APPARATUS, AND CONTROL METHOD FOR THE DISPLAY APPARATUS - A back light apparatus includes a plurality of light sources configured to generate light; and a light guide part, wherein the light guide part includes a light guide plate configured to change a path of the light; a first pattern part disposed on a first surface of the light guide plate and configured to emit the light in a first direction; and a second pattern part disposed on a second surface of the light guide plate and configured to emit the light in a second direction. | 2018-06-07 |
20180160108 | ELECTRONIC DEVICE AND METHOD FOR DISPLAYING IMAGE - An electronic device and method are disclosed herein. The electronic device includes a display, a memory and a processor. The processor implements the method, including displaying a first image through the display, storing screen information associated with the first image in the memory, detecting whether the electronic device is mounted on a wearable device, generating a second image corresponding to a view point of the first image based on the stored screen information, and displaying the generated second image through the display. | 2018-06-07 |
20180160109 | IMAGE SENSOR FAILURE DETECTION - A novel image sensor includes error detection circuitry for detecting sequencing errors. In a particular embodiment, a pattern is inserted into a captured image and an image processor detects sequencing errors by determining a location of the pattern. In a more particular embodiment, the image sensor includes a pixel array arranged in columns and rows. A row select signal is encoded as a bitwise signal, and the bitwise signal is decoded by a multi-input AND gate associated with a particular column of the image sensor, based on a relationship between rows and columns of the pixel array. The relationship determines the pattern inserted into the captured image. | 2018-06-07 |
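The AND-gate decoding in 20180160109 can be modeled in software: each row index is broadcast as a bitwise signal, and a per-column gate fires only when the row bits match that column's expected bits, inserting a known pattern whose position reveals sequencing errors. The diagonal row/column relationship and all names below are assumptions for this sketch.

```python
# Model: a per-column AND gate asserts a pattern pixel when the row-select
# bits match the column's bits; a skipped or repeated row is detectable.

N = 8  # toy 8x8 pixel array; row/column indices fit in 3 bits

def pattern_pixel(row, col):
    """AND-gate model: asserted only when every row bit equals the
    corresponding column bit, i.e. on the diagonal row == col."""
    return all(((row >> b) & 1) == ((col >> b) & 1) for b in range(3))

def check_sequencing(rows_driven):
    """Pass only if the pattern lands where the expected row order
    predicts; skipped or repeated rows break the expected sequence."""
    return (all(pattern_pixel(r, r) for r in rows_driven)
            and rows_driven == sorted(set(rows_driven)))

ok = check_sequencing(list(range(N)))       # normal readout order
bad = check_sequencing([0, 1, 3, 3, 4])     # row 2 skipped, row 3 repeated
```

The hardware version computes this with a multi-input AND gate per column instead of software comparison, but the detection principle — the pattern's location encodes the row sequence — is the same.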
20180160110 | METHOD FOR CALIBRATING A CAMERA AND CALIBRATION SYSTEM - A method for calibrating a camera. The method includes a reading-in step and an ascertaining step. In the reading-in step, an imaging trajectory and a reference trajectory of a moving calibration object detected by using the camera are read in, the imaging trajectory representing a trajectory imaged in image coordinates of the camera and the reference trajectory representing the trajectory in world coordinates. In the ascertaining step, at least one calibration parameter for the camera is ascertained by using the imaging trajectory and the reference trajectory. | 2018-06-07 |
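The trajectory-based calibration in 20180160110 can be sketched as a fit between the two trajectories: given the reference trajectory in world coordinates and the imaged trajectory in pixel coordinates, estimate calibration parameters by least squares. A real calibration would estimate a full projection model; the per-axis scale-plus-offset fit below is a simplifying assumption, and all names are illustrative.

```python
# Sketch: fit image = scale * world + offset for one axis by least
# squares, using corresponding samples from the two trajectories.

def fit_axis(world, image):
    """Return (scale, offset) minimizing the squared residuals."""
    n = len(world)
    mw, mi = sum(world) / n, sum(image) / n
    cov = sum((w - mw) * (i - mi) for w, i in zip(world, image))
    var = sum((w - mw) ** 2 for w in world)
    scale = cov / var
    return scale, mi - scale * mw

# Calibration object moves 0..4 m; camera images it at 100 px/m + 50 px.
world_x = [0.0, 1.0, 2.0, 3.0, 4.0]
image_x = [50.0, 150.0, 250.0, 350.0, 450.0]
scale, offset = fit_axis(world_x, image_x)
```

With noisy real trajectories the same normal-equation fit still applies; the residuals then indicate how well the assumed camera model explains the observed imaging trajectory.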