20th week of 2022 patent application highlights part 59 |
Patent application number | Title | Published |
20220156940 | DATA SEGMENTATION USING MASKS - A vehicle can include various sensors to detect objects in an environment. Sensor data can be captured by a perception system in a vehicle and represented in a voxel space. Operations may include analyzing the data from a top-down perspective. From this perspective, techniques can associate and generate masks that represent objects in the voxel space. Through manipulation of the regions of the masks, the sensor data and/or voxels associated with the masks can be clustered or otherwise grouped to segment data associated with the objects. | 2022-05-19 |
20220156941 | A NEURAL-NETWORK-DRIVEN TOPOLOGY FOR OPTICAL COHERENCE TOMOGRAPHY SEGMENTATION - A device receives a two-dimensional (2-D) image that depicts a cross-sectional view of a macula comprised of layers and boundaries to segment the layers, and determines spatial coordinates of the 2-D image that include x-coordinates and y-coordinates. The device uses a data model, that has been trained using a deep learning technique, to process the 2-D image and the spatial coordinates to generate boundary maps that indicate likelihoods of voxels of the 2-D image being in positions that are part of particular boundaries. The device determines, by analyzing the boundary maps, an initial set of boundary positions, and determines a final set of boundary positions by using a topological order identification technique to refine the initial set of boundary positions. The device determines the thickness levels of the layers of the macula based on the final set of boundary positions, and performs one or more actions based on the thickness levels. | 2022-05-19 |
20220156942 | CLOSED SURFACE FITTING FOR SEGMENTATION OF ORTHOPEDIC MEDICAL IMAGE DATA - Techniques are described for closed surface fitting (CSF). Processing circuitry may determine a plurality of points on a shape, determine a contour, used to determine a shape of an anatomical object, from image information of one or more images of a patient, and determine corresponding points on the contour that correspond to the plurality of points on the shape based on at least one of respective normal vectors projected from points on the shape and normal vectors projected from points on the contour. The processing circuitry may generate a plurality of intermediate points between the points on the shape and the corresponding points on the contour, generate an intermediate shape based on the plurality of intermediate points, and generate a mask used to determine the shape of the anatomical object based on the intermediate shape. | 2022-05-19 |
20220156943 | CONSISTENCY MEASURE FOR IMAGE SEGMENTATION PROCESSES - Techniques are provided for determining consistency measures for image segmentation. For instance, a system can determine a first segmentation feature associated with a first segmentation mask of a first image frame. The system can determine a second segmentation feature associated with a second segmentation mask of a second image frame. The second segmentation feature corresponds to the first segmentation feature. The system can determine a first image feature of the first image frame that corresponds to the first segmentation feature and a second image feature of the second image frame that corresponds to the second segmentation feature. The system can determine a first similarity measurement between the first image feature and the second image feature. The system can further determine a temporal consistency measurement associated with the first image frame and the second image frame based at least in part on the first similarity measurement. | 2022-05-19 |
20220156944 | APPARATUS AND METHOD WITH VIDEO PROCESSING - A processor-implemented method with video processing includes: determining a first image feature of a first image of video data and a second image feature of a second image that is previous to the first image; determining a time-domain information fusion processing result by performing time-domain information fusion processing on the first image feature and the second image feature; and determining a panoptic segmentation result of the first image based on the time-domain information fusion processing result. | 2022-05-19 |
20220156945 | Method for motion estimation based on motion blur - A method for motion estimation based on perceived blur due to motion between a first image frame and a second image frame of a digital video sequence comprises estimating a motion vector between the frames for each of a plurality of pixel blocks in the first and second image frames. A patch motion vector is then estimated for each of a plurality of patches of the motion vectors based on one of the motion vectors in each patch and motion vectors in proximate patches. Each pixel in the first frame is allocated the patch motion vector of its corresponding patch. A system and computer program for motion estimation based on perceived blur is also provided. | 2022-05-19 |
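The abstract above rests on classic block-based motion estimation. As an illustration only (this is not the applicant's disclosed method; the block size, search range, and SAD matching criterion are all assumptions), an exhaustive-search per-block motion vector can be sketched in NumPy:

```python
import numpy as np

def block_motion_vector(prev, curr, top, left, block=8, search=4):
    """Exhaustive-search motion vector for one block, by minimum SAD.

    Returns (dy, dx) such that the block of `curr` at (top, left)
    best matches the block of `prev` at (top + dy, left + dx).
    Block size and search radius are illustrative choices.
    """
    ref = curr[top:top + block, left:left + block]
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            # Skip candidate blocks that fall outside the previous frame.
            if y < 0 or x < 0 or y + block > prev.shape[0] or x + block > prev.shape[1]:
                continue
            cand = prev[y:y + block, x:x + block]
            sad = np.abs(ref.astype(int) - cand.astype(int)).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv
```

A real implementation would then pool such per-block vectors into the per-patch vectors the abstract describes, e.g. by taking a median over each patch.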
20220156946 | SUPERVISED LEARNING AND OCCLUSION MASKING FOR OPTICAL FLOW ESTIMATION - Systems and techniques are described for performing supervised learning (e.g., semi-supervised learning, self-supervised learning, and/or mixed supervision learning) for optical flow estimation. For example, a method can include obtaining an image associated with a sequence of images and generating an occluded image. The occluded image can include at least one of the image with an occlusion applied to the image and a different image of the sequence of images with the occlusion applied. The method can include determining a matching map based at least on matching areas of the image and the occluded image and, based on the matching map, determining a loss term associated with an optical flow loss prediction associated with the image and the occluded image. The loss term may include a matched loss and/or other loss. Based on the loss term, the method can include training a network configured to determine an optical flow between images. | 2022-05-19 |
20220156947 | Trajectory Calculation Device, Trajectory Calculating Method, and Trajectory Calculating Program - In the athletic competition involving jumping such as figure skating, it is desired to measure jumping height and flight distance by calculating a trajectory of a moving object as a target from moving images captured by a monocular camera. Provided is a device, a method and a program to calculate the trajectory of the moving object jumping in three dimensions from information of a plurality of image frames captured by the monocular camera by detecting a specific point of the moving object, calculating an amount of change with respect to three-dimensional positions of the specific point in the consecutive image frames, and calculating the trajectory of the specific point from a positional relation between straight lines having the positions of the specific point and a curved line capable of expressing a parabolic motion passing through a take-off point and a landing point. | 2022-05-19 |
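The parabolic-motion assumption in the trajectory abstract above can be illustrated with a plain least-squares fit. This is a generic sketch, not the claimed method: the ground-level convention (take-off and landing at height zero) and the sample data are assumptions for illustration.

```python
import numpy as np

def fit_jump_parabola(t, y):
    """Fit y(t) = a*t^2 + b*t + c to tracked heights and derive
    peak height and flight time, assuming take-off and landing
    occur at height y = 0 (illustrative convention).
    """
    a, b, c = np.polyfit(t, y, 2)
    t_apex = -b / (2 * a)
    peak = a * t_apex ** 2 + b * t_apex + c
    # Flight time = distance between the two roots of y(t) = 0.
    disc = np.sqrt(b * b - 4 * a * c)
    flight_time = abs(disc / a)
    return peak, flight_time
```

For a noiseless sample of y = -5t² + 5t (apex 1.25 at t = 0.5, roots at t = 0 and t = 1) the fit recovers both quantities exactly up to floating-point error.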
20220156948 | Resilient Dynamic Projection Mapping System and Methods - Systems and methods for dynamically tracking objects, and projecting rendered 3D content onto said objects in real-time. The methods described herein further include image data capture performed by various image-capturing devices, wherein said data is segmented into various components to identify one or more projectors for rendering and projecting 3D content components onto one or more objects. | 2022-05-19 |
20220156949 | INFORMATION PROCESSING METHOD AND SYSTEM - The present disclosure is related to systems and methods for noise reduction. The method includes obtaining a current frame including a plurality of first pixels. The method includes determining an interframe difference between each first pixel in the current frame and a corresponding pixel in a previous frame obtained prior to the current frame. The method includes generating a denoised frame by performing a first noise reduction operation on the current frame. The method includes determining an intraframe difference for each second pixel in the denoised frame. The method includes generating a target frame by performing a second noise reduction operation on the denoised frame. | 2022-05-19 |
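The two-pass structure described above (an interframe-difference test followed by a second, intraframe pass) can be sketched as follows. The blending rule, thresholds, and the 3×3 box filter are illustrative assumptions, not the operations disclosed in the application:

```python
import numpy as np

def denoise_frame(curr, prev, diff_thresh=10.0, blend=0.5):
    """Two-stage denoise sketch: temporal blending where the
    interframe difference is small (likely static pixels), then a
    3x3 box filter as a second, spatial pass."""
    curr = curr.astype(float)
    prev = prev.astype(float)
    # First pass: blend with the previous frame only where the pixel is static.
    static = np.abs(curr - prev) < diff_thresh
    temporal = np.where(static, blend * curr + (1 - blend) * prev, curr)
    # Second pass: 3x3 box filter via edge padding and shifted sums.
    padded = np.pad(temporal, 1, mode='edge')
    h, w = temporal.shape
    out = sum(padded[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return out
```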
20220156950 | SCANNING DEVICE FOR SCANNING AN ENVIRONMENT AND ENABLING AN IDENTIFICATION OF SCANNED MOVING OBJECTS - The present disclosure relates to a scanning device being configured to enable efficient identification of scanned measurement points being associated with moving objects in an otherwise static environment. The scanning device is built as total station or laser scanner being typically used for scanning an environment and enabling, based on the scanning, the generation of a three dimensional (3D) point cloud representing the scanned environment. | 2022-05-19 |
20220156951 | METHOD, SYSTEM, AND COMPUTER-READABLE MEDIUM FOR USING FACE ALIGNMENT MODEL BASED ON MULTI-TASK CONVOLUTIONAL NEURAL NETWORK-OBTAINED DATA - A method includes receiving a facial image; obtaining, using a multi-task convolutional neural network, a detected face location and a facial characteristic category set of a plurality of first facial characteristic categories; selecting a first face alignment model from a plurality of face alignment models based on the facial characteristic category set; and obtaining, using the first face alignment model, a plurality of facial landmarks. The first facial characteristic categories are arranged hierarchically. A hierarchy of the first facial characteristic categories includes a plurality of first levels corresponding to a plurality of corresponding facial characteristics. The facial characteristic category set includes the first facial characteristic categories of a path of the hierarchy of the first facial characteristic categories. | 2022-05-19 |
20220156952 | DENTAL IMAGE REGISTRATION DEVICE AND METHOD - Provided is a dental image registration device comprising: an outermost boundary detection unit for detecting, from first teeth image data, a first outermost boundary region which is the outermost boundary region of dentition, and detecting, from second teeth image data, a second outermost boundary region which is the outermost boundary region of the dentition; and an image registration unit which registers the first and second teeth image data on the basis of a first inscribed circle inscribed within the first outermost boundary region and a second inscribed circle inscribed within the second outermost boundary region, or registers the first and second teeth image data on the basis of a first central point of the first outermost boundary region and a second central point of the second outermost boundary region. | 2022-05-19 |
20220156953 | METHOD FOR REGISTERING VIRTUAL MODELS OF THE DENTAL ARCHES OF AN INDIVIDUAL WITH A DIGITAL MODEL OF THE FACE OF SAID INDIVIDUAL - The invention relates to a method for registering non-radiographic virtual models of a mandibular arch and a maxillary arch of an individual with a non-radiographic digital model of the face of said individual, comprising:
| 2022-05-19 |
20220156954 | STEREO MATCHING METHOD, IMAGE PROCESSING CHIP AND MOBILE VEHICLE - Embodiments of the present invention relate to a stereo matching method, an image processing chip and a mobile vehicle. The stereo matching method includes: calculating aggregate cost values between all reference pixels and a target pixel in a preset search region, the reference pixel being a pixel in a reference image, and the target pixel being a pixel in a target image; determining a texture property of the reference image in the search region according to a distribution of the aggregate cost values; and using a method for calculating a disparity value corresponding to the texture property, to obtain a disparity value between the reference image and the target image at a position of the target pixel. In the method, advanced information is mined from an image, and a manner or policy of calculating a disparity is adaptively adjusted according to a distribution of aggregate cost values in a search region, to exclude the influence of a repeated texture region or a texture-less region as much as possible, so that the robustness of calculating the disparity is significantly improved. | 2022-05-19 |
20220156955 | DEPTH MAP PROCESSING - For multi-view video content represented in the MVD (Multi-view+Depth) format, the depth maps may be processed to improve the coherency therebetween. In one implementation, to process a target view based on an input view, pixels of the input view are first projected into the world coordinate system, then into the target view to form a projected view. The texture of the projected view and the texture of the target view are compared. If the difference at a pixel is small, then the depth of the target view at that pixel is adjusted, for example, replaced by the corresponding depth of the projected view. When the multi-view video content is encoded and decoded in a system, depth map processing may be applied in the pre-processing and post-processing modules to improve video compression efficiency and the rendering quality. | 2022-05-19 |
20220156956 | ACTIVE IMAGE DEPTH PREDICTION - An active depth detection system can generate a depth map from an image and user interaction data, such as a pair of clicks. The active depth detection system can be implemented as a recurrent neural network that can receive the user interaction data as runtime inputs after training. The active depth detection system can store the generated depth map for further processing, such as image manipulation or real-world object detection. | 2022-05-19 |
20220156957 | Image Depth Estimation Method and Device, Readable Storage Medium and Electronic Equipment - Disclosed are an image depth estimation method and device, a computer-readable storage medium and electronic equipment. The method includes: obtaining a first image frame and second image frame collected in a movement process of an electronic apparatus; determining a first feature map corresponding to the first image frame and a second feature map corresponding to the second image frame; determining a scaled inter-frame geometrical relationship between the first and second image frames; determining a reconstruction error between the first and second feature maps based on the inter-frame geometrical relationship; and determining a depth map corresponding to the first image frame based on the reconstruction error. According to embodiments of the disclosure, the reconstruction error between the first and second feature maps is determined by utilizing the inter-frame geometrical relationship, and explicit geometrical constraints are added for depth estimation, thereby improving the generalization of the depth estimation. | 2022-05-19 |
20220156958 | MOVING IMAGE DISTANCE CALCULATOR AND COMPUTER-READABLE STORAGE MEDIUM STORING MOVING IMAGE DISTANCE CALCULATION PROGRAM - A moving image distance calculator ( | 2022-05-19 |
20220156959 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND RECORDING MEDIUM IN WHICH PROGRAM IS STORED - An image processing device is provided with: a movement detection unit for detecting, from time-series images, a first feature relating to the first movement of a head portion of a person and a second feature relating to the second movement of a body portion, which is a part of the person other than the head portion; and an index value calculation unit for calculating an index value which indicates the degree of consistency between the first feature relating to the first movement of the head portion of the person and the second feature relating to the second movement of the body portion of the person. | 2022-05-19 |
20220156960 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - The present technology relates to an information processing apparatus, an information processing method, and a program that enable an operation to be performed using a wearable device that is less resistant to being worn all the time. | 2022-05-19 |
20220156961 | ELECTROMAGNETIC ENVIRONMENT ANALYSIS SYSTEM, ELECTROMAGNETIC ENVIRONMENT ANALYSIS METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM - In an electromagnetic environment analysis system | 2022-05-19 |
20220156962 | SYSTEM AND METHOD FOR GENERATING BASIC INFORMATION FOR POSITIONING AND SELF-POSITIONING DETERMINATION DEVICE - A system and method for generating basic information for positioning and a self-positioning determination device are disclosed. The system includes an information supplying device and a computing device. The information supplying device recognizes an object position and an object category of each of a plurality of reference objects and accordingly generates a reference unique feature value for each reference object, and the computing device generates basic information for positioning according to the reference unique feature values. The self-positioning determination device recognizes an object position and an object category of a current object and accordingly generates a current unique feature value for the current object, and determines a position of itself according to the current unique feature value, the basic information for positioning, and a distance and an angle between the self-positioning determination device and the current object. | 2022-05-19 |
20220156963 | COORDINATE SYSTEM CONVERSION PARAMETER ESTIMATING APPARATUS, METHOD AND PROGRAM - A technology for obtaining a coordinate system conversion parameter more easily than in the related art is provided. A coordinate system conversion parameter estimation device includes: a camera coordinate system correspondence point estimation unit | 2022-05-19 |
20220156964 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - According to one embodiment, an information processing device according to an embodiment includes an acquisition unit, a detection unit, and a calculation unit. The acquisition unit acquires image data in which an image of a space where a plurality of specific objects exist is captured. The detection unit detects a position and an orientation of each of the plurality of specific objects included in the image data in the space. The calculation unit calculates, for any evaluation target object of the plurality of specific objects, a close-contact evaluation value indicating a degree of close contact between the evaluation target object and one or a plurality of other specific objects other than the evaluation target object among the plurality of specific objects, based on the position and the orientation of each of the plurality of specific objects. | 2022-05-19 |
20220156965 | MULTI-MODAL 3-D POSE ESTIMATION - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for estimating a 3-D pose of an object of interest from image and point cloud data. In one aspect, a method includes obtaining an image of an environment; obtaining a point cloud of a three-dimensional region of the environment; generating a fused representation of the image and the point cloud; and processing the fused representation using a pose estimation neural network and in accordance with current values of a plurality of pose estimation network parameters to generate a pose estimation network output that specifies, for each of multiple keypoints, a respective estimated position in the three-dimensional region of the environment. | 2022-05-19 |
20220156966 | ERROR COMPENSATION FOR A THREE-DIMENSIONAL TRACKING SYSTEM - A tracking system for tracking one or more reflective markers includes at least two optical sensors configured to obtain image data of an environment that includes at least one marker. The tracking system obtains the image data from the at least two optical sensors. The tracking system is configured for extracting, from the image data, optical signatures representing reflections of the optical signal from at least one marker, determining optical centroids of the optical signatures of the at least one marker, estimating an initial pose for at least one marker, determining offset error vectors from the optical centroids of the at least one marker based on the initial pose, determining corrected optical centroids based on the offset error vectors and the optical centroids, and determining a corrected three dimensional position of the marker in the environment based on the corrected optical centroids of the marker. | 2022-05-19 |
20220156967 | Device and method for detection and localization of vehicles - A method for determining a location of a moving vehicle, the method comprising processing image data to determine a direction between a camera capturing an image and the moving vehicle; processing additional data comprising at least one of map data and velocity sensor data; and combining information based on the image data and the additional data to arrive at a location of the moving vehicle. The present invention also relates to a corresponding robot configured to carry out such a method. | 2022-05-19 |
20220156968 | VISUAL FEATURE DATABASE CONSTRUCTION METHOD, VISUAL POSITIONING METHOD AND APPARATUS, AND STORAGE MEDIUM - This application provides a visual feature database construction method, including: obtaining a database creation image; performing feature extraction on the database creation image to obtain a feature point of the database creation image and a descriptor of the feature point of the database creation image; intersecting a ray corresponding to the feature point of the database creation image with a 3D model, to determine the 3D position of the intersection point at which the ray intersects with the 3D model as the 3D position of the feature point of the database creation image; and writing the descriptor of the feature point of the database creation image and the database creation image into a visual feature database, to construct the visual feature database. | 2022-05-19 |
20220156969 | VISUAL LOCALIZATION METHOD, TERMINAL, AND SERVER - The disclosure provides example visual localization methods. One method includes that a terminal device obtains an image of a building. The terminal device generates a descriptor based on the image. The descriptor includes information about a horizontal viewing angle between a first vertical feature line and a second vertical feature line in the image. The first vertical feature line indicates a first facade intersection line of the building, and the second vertical feature line indicates a second facade intersection line of the building. The terminal device performs matching in a preset descriptor database based on the descriptor, to obtain localization information of a photographing place of the image. | 2022-05-19 |
20220156970 | SYSTEMS AND METHODS FOR IMAGE-BASED COMPONENT DETECTION - A method includes detecting, for each of a plurality of images, a plurality of key points, where each of the plurality of images represents an object of an assembly system. The method includes generating, for each of the plurality of images, a correspondence between the plurality of key points, and generating, for each of the plurality of images, a reference region based on the correspondence between the plurality of key points. The method includes identifying, for each of the plurality of images, a reference key point among the plurality of key points based on the reference region, and determining a pose of the object based on the reference key point of each of the plurality of images and a reference pose of the object. | 2022-05-19 |
20220156971 | SYSTEMS AND METHODS FOR TRAINING A MACHINE-LEARNING-BASED MONOCULAR DEPTH ESTIMATOR - Systems and methods described herein relate to training a machine-learning-based monocular depth estimator. One embodiment selects a virtual image in a virtual dataset, the virtual image having an associated ground-truth depth map; generates a set of ground-truth surface-normal vectors for pixels in the virtual image based on the ground-truth depth map; processes the virtual image using the machine-learning-based monocular depth estimator to generate a predicted depth map; generates a set of calculated surface-normal vectors for the pixels in the virtual image based on the predicted depth map; and supervises training of the machine-learning-based monocular depth estimator by computing a surface-normal loss between the set of calculated surface-normal vectors and the set of ground-truth surface-normal vectors, wherein the surface-normal loss regularizes depth predictions produced by the machine-learning-based monocular depth estimator to improve accuracy of the depth predictions as the machine-learning-based monocular depth estimator is trained. | 2022-05-19 |
20220156972 | LONG RANGE DISTANCE ESTIMATION USING REFERENCE OBJECTS - Methods, computer systems, and apparatus, including computer programs encoded on computer storage media, for generating a distance estimate for a target object that is depicted in an image of a scene in an environment. The system obtains data specifying (i) a target portion of the image that depicts the target object detected in the image, and (ii) one or more reference portions of the image that each depict a respective reference object detected in the image. The system further obtains, for each of the one or more reference objects, a respective distance measurement for the reference object that is a measurement of a distance from the reference object to a specified location in the environment. The system processes the obtained data to generate a distance estimate for the target object that is an estimate of a distance from the target object to the specified location in the environment. | 2022-05-19 |
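The idea of anchoring a long-range estimate to measured reference objects can be illustrated with a toy pinhole-model calculation. The similar-real-world-height assumption below is purely mine for the sake of a closed-form example; the application itself describes a learned processing of image portions, not this formula:

```python
def estimate_distance(target_px_height, refs):
    """Average per-reference distance estimates for a target object.

    refs: list of (pixel_height, measured_distance) pairs for
    reference objects with known distances. Under a pinhole camera
    and comparable real-world sizes (an illustrative assumption),
    apparent height scales as 1/distance, so each reference implies
    d_target ~= d_ref * h_ref / h_target.
    """
    estimates = [d * h / target_px_height for h, d in refs]
    return sum(estimates) / len(estimates)
```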
20220156973 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - To estimate a location and an attitude of a target object in real space in a more preferred aspect. Provided is an information processing apparatus including: an estimating unit configured to estimate at least one of a location or an attitude of a predetermined chassis in real space on the basis of a first image captured by a first image capturing unit among a plurality of image capturing units held in the chassis; and a verifying unit configured to verify a likelihood of the estimation result on the basis of a second image captured by a second image capturing unit having an optical axis different from an optical axis of the first image capturing unit among the plurality of image capturing units. | 2022-05-19 |
20220156974 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, RECORDING MEDIUM, AND IMAGE CAPTURING APPARATUS - Provided is an information processing device that includes a movement estimation unit that estimates position and posture movement by using corresponding-point information obtained based on image capturing, and a corresponding-point information selection unit that selects the corresponding-point information to be used by the movement estimation unit from among a plurality of pieces of corresponding-point information. The plurality of pieces of corresponding-point information includes first corresponding-point information obtained based on image capturing by a first camera set including a first camera and a second camera, and second corresponding-point information obtained based on image capturing by a second camera set including a third camera and a fourth camera. The first camera set has a wider view angle than that of the second camera set. The first camera set and the second camera set have image capturing ranges that are at least partially identical to each other. | 2022-05-19 |
20220156975 | SYSTEMS AND METHODS FOR POSE DETECTION AND MEASUREMENT - A method for estimating a pose of an object includes: receiving a plurality of images of the object captured from multiple viewpoints with respect to the object; initializing a current pose of the object based on computing an initial estimated pose of the object from at least one of the plurality of images; predicting a plurality of 2-D keypoints associated with the object from each of the plurality of images; and computing an updated pose that minimizes a cost function based on a plurality of differences between the 2-D keypoints and a plurality of 3-D keypoints associated with a 3-D model of the object as arranged in accordance with the current pose, and as projected to each of the viewpoints. | 2022-05-19 |
20220156976 | LOCALIZING AN AUGMENTED REALITY DEVICE - Determining the position and orientation (or “pose”) of an augmented reality device includes capturing an image of a scene having a number of features and extracting descriptors of features of the scene represented in the image. The descriptors are matched to landmarks in a 3D model of the scene to generate sets of matches between the descriptors and the landmarks. Estimated poses are determined from at least some of the sets of matches between the descriptors and the landmarks. Estimated poses having deviations from an observed location measurement that are greater than a threshold value may be eliminated. Features used in the determination of estimated poses may also be weighted by the inverse of the distance between the feature and the device, so that closer features are accorded more weight. | 2022-05-19 |
20220156977 | CALIBRATION APPARATUS, CALIBRATION METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PROGRAM - In a calibration apparatus ( | 2022-05-19 |
20220156978 | AUTOMATIC LOW CONTRAST DETECTION - A method includes generating a delicate area map by performing a morphological function on a portion of a received first image and identifying a plurality of edges in the first image, the plurality of edges comprising a plurality of pixels. The method also includes verifying a first contrast metric for a first subset of pixels that are in the plurality of pixels but not in the delicate area map, verifying a second contrast metric for a second subset of pixels that are in the plurality of pixels and in the delicate area map, and generating a validation result based on the verifying of the first contrast metric and the verifying of the second contrast metric. | 2022-05-19 |
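The split the abstract above makes, verifying one contrast metric for edge pixels outside a delicate-area map and another for edge pixels inside it, can be sketched as below. The gradient-based edge detector, the mean-gradient metric, and all thresholds are illustrative assumptions; in practice the mask would come from a morphological function as the abstract describes:

```python
import numpy as np

def contrast_check(img, mask, grad_thresh=30.0, contrast_min=20.0):
    """Verify local contrast at edge pixels, outside and inside a
    'delicate area' mask. Returns (ok_outside, ok_inside)."""
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)
    edges = grad > grad_thresh
    outside = edges & ~mask
    inside = edges & mask

    def metric(sel):
        # An empty subset passes vacuously.
        return grad[sel].mean() if sel.any() else np.inf

    return metric(outside) >= contrast_min, metric(inside) >= contrast_min
```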
20220156979 | METHOD, SYSTEM, AND DEVICE FOR COLOR MEASUREMENT OF A SURFACE - Methods and systems for determining a surface color of a target surface under an environment with an environmental light source. A plurality of images of the target surface are captured as the target surface is illuminated with a variable intensity, constant color light source and a constant intensity, constant color environmental light source, wherein the intensity of the light source on the target surface is varied by a known amount between the capturing of the images. A color feature tensor, independent of the environmental light source, is extracted from the image data, and used to infer a surface color of the target surface. | 2022-05-19 |
20220156980 | METHOD AND APPARATUS FOR POINT CLOUD CODING - Aspects of the disclosure provide methods and apparatuses for point cloud compression and decompression. In some examples, an apparatus for point cloud compression/decompression includes processing circuitry. In some examples, the processing circuitry receives a bitstream carrying compressed data for a point cloud. The processing circuitry determines that a current node in an octree structure is eligible for an isolated mode. The octree structure corresponds to three dimensional (3D) partitions of a space of the point cloud. Then the processing circuitry determines, based on information of one or more other nodes, a single isolated point flag for the current node that indicates whether the current node is coded with a single isolated point. | 2022-05-19 |
20220156981 | SYSTEMS AND METHOD FOR LOW BANDWIDTH VIDEO-CHAT COMPRESSION - In one embodiment, a first device may receive, from a second device, a reference landmark map identifying locations of facial features of a user of the second device depicted in a reference image and a feature map, generated based on the reference image, representing an identity of the user. The first device may receive, from the second device, a current compressed landmark map based on a current image of the user and decompress the current compressed landmark map to generate a current landmark map. The first device may update the feature map based on a motion field generated using the reference landmark map and the current landmark map. The first device may generate scaling factors based on a normalization facial mask of pre-determined facial features of the user. The first device may generate an output image of the user by decoding the updated feature map using the scaling factors. | 2022-05-19 |
20220156982 | CALCULATING DATA COMPRESSION PARAMETERS - Apparatuses, systems, and techniques for calculating data compression parameters using codebook entry values. In at least one embodiment, one or more circuits is to calculate one or more data compression parameters based, at least in part, on one or more values of the data to be compressed in relation to at least two codebook entry values. | 2022-05-19 |
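One way to relate a data value to two codebook entries is a clamped interpolation weight; the sketch below is only an assumed instance of "a value in relation to at least two codebook entry values", since the abstract does not state the actual relation.

```python
def compression_parameter(value, entry_lo, entry_hi):
    """Hedged sketch: derive a compression parameter from where `value`
    falls between two codebook entries (a clamped linear weight)."""
    if entry_hi == entry_lo:
        return 0.0  # degenerate codebook pair: no usable range
    w = (value - entry_lo) / (entry_hi - entry_lo)
    return min(1.0, max(0.0, w))
```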
20220156983 | GENERATING VISUAL CONTENT CONSISTENT WITH ASPECTS OF A VISUAL LANGUAGE - Systems, methods and non-transitory computer readable media for generating visual content consistent with aspects of a visual brand language are provided. An indication of at least one aspect of a visual brand language may be received. Further, an indication of a desired visual content may be received. A new visual content consistent with the visual brand language and corresponding to the desired visual content may be generated based on the indication of the at least one aspect of the visual brand language and the indication of the desired visual content. The new visual content may be provided in a format ready for presentation. | 2022-05-19 |
20220156984 | Augmented Reality Effect Resource Sharing - An augmented reality (AR) effect system can improve application of AR effects by sharing resources between AR effects. The AR effect system can employ manifests for AR effects that define which resources are required to render each AR effect. The AR effect system can organize rendering operations used by selected AR effects into a pipeline and can use the manifests of the AR effects to determine when each resource will be needed. Based on this pipeline, the AR effect system can create a cache order defining a resource schedule which specifies, when a resource is freed, conditions for whether to save the resource to a local cache or unload the resource. As rendering of the video with the AR effects progresses, the resource schedule can control whether resources not currently being used to render an AR effect should be unloaded or cached for fast access for future render operations. | 2022-05-19 |
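The cache-order idea above (cache a freed resource only if a later pipeline step still needs it) can be sketched as below. The pipeline representation, a list of per-step resource-name sets derived from the manifests, is an assumption for illustration.

```python
def build_cache_schedule(pipeline):
    """Sketch of the resource schedule: for each rendering step, decide for
    every resource it uses whether to cache it (some later step's manifest
    needs it) or unload it (no later step does)."""
    schedule = []
    for i, resources in enumerate(pipeline):
        # Resources required by any step after this one.
        needed_later = set().union(*pipeline[i + 1:])
        schedule.append({r: ("cache" if r in needed_later else "unload")
                         for r in resources})
    return schedule
```

A real implementation would bound the cache size and evict by next-use distance; this sketch only captures the cache-vs-unload decision driven by the manifests.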
20220156985 | PERIPHERAL VIDEO GENERATION DEVICE, PERIPHERAL VIDEO GENERATION METHOD, AND STORAGE MEDIUM STORING PROGRAM - A peripheral video generation device includes: a video input unit that inputs peripheral video data captured by a plurality of cameras; a video composition unit that composites the peripheral video data to generate a composite video as viewed from a predetermined viewpoint; a three-dimensional shape estimation unit that estimates a three-dimensional shape of a peripheral object based on the peripheral video data; a shielded area estimation unit that uses an estimation result of the three-dimensional shape to estimate a shielded area not visible from the predetermined viewpoint in the composite video; an inference unit that infers a video of the shielded area using deep learning; and a video superimposition unit that superimposes the video inferred by the inference unit on the shielded area in the composite video. | 2022-05-19 |
20220156986 | SCENE INTERACTION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND COMPUTER STORAGE MEDIUM - A method for scene interaction includes identifying a first real scene interacting with a virtual scene and obtaining media information of the first real scene. The method also includes determining a scene feature associated with the first real scene based on a feature extraction on the media information, and mapping the scene feature associated with the first real scene to the virtual scene according to a correspondence between the virtual scene and the first real scene. Apparatus and non-transitory computer-readable storage medium counterpart embodiments are also contemplated. | 2022-05-19 |
20220156987 | ADAPTIVE CONVOLUTIONS IN NEURAL NETWORKS - A technique for performing style transfer between a content sample and a style sample is disclosed. The technique includes applying one or more neural network layers to a first latent representation of the style sample to generate one or more convolutional kernels. The technique also includes generating convolutional output by convolving a second latent representation of the content sample with the one or more convolutional kernels. The technique further includes applying one or more decoder layers to the convolutional output to produce a style transfer result that comprises one or more content-based attributes of the content sample and one or more style-based attributes of the style sample. | 2022-05-19 |
20220156988 | METHOD AND APPARATUS FOR CONFIGURING COLOR, DEVICE, MEDIUM AND PRODUCT - Embodiments of the present disclosure provide a method, an apparatus, a device, a medium and a product for configuring a color, relating to the field of computer technology, and particularly to data visualization technology. A specific implementation comprises: acquiring a set of chart entities in a chart; determining target color information corresponding to the chart entities in the set of the chart entities based on a preset target function and a constraint condition; and configuring colors corresponding to the chart entities in the set of the chart entities based on the target color information. | 2022-05-19 |
20220156989 | METHOD AND SYSTEM FOR OUTPUTTING CHARACTER, ELECTRONIC DEVICE, AND STORAGE MEDIUM - Provided is a method for outputting a character. The method includes: acquiring a character data stream; acquiring a character picture by drawing character data in the character data stream based on a size of a target picture and a number of target characters contained in the target picture; generating, according to a target display frequency, a character video based on the character picture; and outputting the character video. | 2022-05-19 |
20220156990 | Risk Monitoring Approach for a Micro Service - A computer-implemented system and method includes a visualization in a graphical user interface providing a circular hierarchical modeling view for monitoring the health of a model at each circular hierarchy level of a plurality of circular hierarchy levels. The visualization includes a first presentation of health rules of the model at each hierarchy level of the circular hierarchy, the health rules comprising a measure of a category, an analysis type, and an analysis model, and a second presentation of one or more health indicators at each hierarchy level. The health indicators comprise one or more colors in the visualization representing an indication of a health goal or a health status of the model. | 2022-05-19 |
20220156991 | VISUAL CONTENT OPTIMIZATION - Systems, methods and non-transitory computer readable media for optimizing visual contents are provided. A particular mathematical object corresponding to a particular visual content in a mathematical space including a plurality of mathematical objects corresponding to visual contents may be determined. The mathematical space and the particular mathematical object may be used to obtain first and second mathematical objects of the plurality of mathematical objects. The visual content corresponding to the first mathematical object may be used in a communication with a first user and the visual content corresponding to the second mathematical object may be used in a communication with a second user. Indications of the reactions of the first and second users to the communications may be received. A third visual content may be obtained based on the reactions. The third visual content may be used in a communication with a third user. | 2022-05-19 |
20220156992 | IMAGE SEGMENTATION USING TEXT EMBEDDING - A non-transitory computer-readable medium includes program code that is stored thereon. The program code is executable by one or more processing devices for performing operations including generating, by a model that includes trainable components, a learned image representation of a target image. The operations further include generating, by a text embedding model, a text embedding of a text query. The text embedding and the learned image representation of the target image are in a same embedding space. Additionally, the operations include generating a class activation map of the target image by, at least, convolving the learned image representation of the target image with the text embedding of the text query. Moreover, the operations include generating an object-segmented image using the class activation map of the target image. | 2022-05-19 |
20220156993 | NEURAL NETWORK-BASED IMAGE COLORIZATION ON IMAGE/VIDEO EDITING APPLICATIONS - A computing system and method for neural network-based image colorization is provided. The computing system obtains a reference color image by selective application of a color effect on a region of interest of an input image and controls a display device to display a first node graph on a Graphical User Interface of an image/video editing application. The first node graph includes a colorization node representing a first workflow for colorization of at least a first object in grayscale images of a first image feed. The computing system selects the reference color image based on a user input and executes the first workflow associated with the colorization node by feeding the reference color image and the first image feed as an input to a neural network-based colorization model. The computing system receives a second image feed comprising colorized images as output of the neural network-based colorization model for the input. | 2022-05-19 |
20220156994 | SYNTHETIC VISUAL CONTENT CREATION AND MODIFICATION USING TEXTUAL INPUT - Systems, methods and non-transitory computer readable media for generating and modifying synthetic visual content using textual input are provided. One or more keywords may be received from a user. The one or more keywords may be used to generate a plurality of textual descriptions. Each generated textual description may correspond to a possible visual content. The generated plurality of textual descriptions may be presented to the user through a user interface that enables the user to modify the presented textual descriptions. A modification to at least one of the plurality of textual descriptions may be received from the user, therefore obtaining a modified plurality of textual descriptions. A selection of one textual description of the modified plurality of textual descriptions may be received from the user. A plurality of visual contents corresponding to the selected textual description may be presented to the user. | 2022-05-19 |
20220156995 | AUGMENTED REALITY SYSTEMS AND METHODS UTILIZING REFLECTIONS - A display system comprises a wearable display device for displaying augmented reality content. The display device comprises a display area comprising light redirecting features that are configured to direct light to a user. The display area is at least partially transparent and is configured to provide a view of an ambient environment through the display area. The display device is configured to determine that a reflection of the user is within the user's field of view through the display area. After making this determination, augmented reality content is displayed in the display area with the augmented reality content augmenting the user's view of the reflection. In some embodiments, the augmented reality content may overlie the user's view of the reflection, thereby allowing all or portions of the reflection to appear to be modified to provide a realistic view of the user with various modifications made to their appearance. | 2022-05-19 |
20220156996 | DISPLAY OF A LIVE SCENE AND AUXILIARY OBJECT - A mobile device comprises one or more processors, a display, and a camera configured to capture an image of a live scene. The one or more processors are configured to determine a location of the mobile device and display an augmented image based on the captured image. The augmented image includes at least a portion of the image of the live scene and a map including an indication of the determined location of the mobile device. The one or more processors are also configured to display the at least a portion of the image of the live scene in a first portion of the display and displaying the map in a second portion of the display. The augmented image is updated as the mobile device is moved, and the map is docked to the second portion of the display as the augmented image is updated. | 2022-05-19 |
20220156997 | SYSTEMS AND METHODS FOR MODIFYING A SAFETY BOUNDARY FOR VIRTUAL REALITY SYSTEMS - The disclosed computer-implemented method may include establishing a virtual boundary for a virtual-world environment in reference to a real-world environment, and determining whether the virtual boundary requires a correction. The method may include providing, in response to determining that the virtual boundary requires the correction, an alert and, in response to the alert, receiving a request from a user to modify the virtual boundary. The method may further include monitoring, in response to the request from the user, an orientation of a direction indicator to generate orientation data, and modifying the virtual boundary based on the orientation data. Various other methods, systems, and computer-readable media are also disclosed. | 2022-05-19 |
20220156998 | MULTIPLE DEVICE SENSOR INPUT BASED AVATAR - Examples are disclosed that relate to utilizing image sensor inputs from different devices having different perspectives in physical space to construct an avatar of a first user in a video stream. The avatar comprises a three-dimensional representation of at least a portion of a face of the first user texture mapped onto a three-dimensional body simulation that follows actual physical movement of the first user. The three-dimensional body simulation of the first user is generated based on image data received from an imaging device and image sensor data received from a head-mounted display device both associated with the first user. The three-dimensional representation of the face of the first user is generated based on the image data received from the imaging device. The resulting video stream is sent, via a communication network, to a display device associated with a second user. | 2022-05-19 |
20220156999 | PERSONALIZED AVATAR REAL-TIME MOTION CAPTURE - Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program, and a method for performing operations comprising: capturing a video that depicts a person; identifying a set of skeletal joints of the person depicted in the video; storing a movement vector representing previously captured three-dimensional (3D) movement of the set of skeletal joints of the person depicted in the video; receiving input that selects a 3D avatar; and animating, based on the movement vector, the 3D avatar to mimic the previously captured 3D movement of the set of skeletal joints of the person depicted in the video. | 2022-05-19 |
20220157000 | BODY ANIMATION SHARING AND REMIXING - Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program, and a method for performing operations comprising: receiving, by a client device associated with a first user, a communication from a second user; retrieving, from the communication, a movement vector representing three-dimensional (3D) movement of a set of skeletal joints of the second user; receiving, by the client device associated with the first user, input that selects a 3D avatar; and animating, based on the movement vector, the 3D avatar to mimic the 3D movement of the set of skeletal joints of the second user. | 2022-05-19 |
20220157001 | Method for Computation of Local Densities for Virtual Fibers - An image generator generates images of a set of virtual fibers and effects thereon by processing representations of the set of fibers and computing representation of a virtual surface for a fiber clump in the set of virtual fibers from an artist parameter representing a desired artist effect, computing correlations of the vertices from a set of vertices based on associations of the vertices corresponding to the artist parameter, computing a set of relevant vertices using the correlations of the vertices, computing orthogonal gradients to produce a plurality of gradients using a selected vertex and the set of relevant vertices for the fiber clump, and computing the virtual surface of the fiber clump from the plurality of gradients. | 2022-05-19 |
20220157002 | SYSTEM AND METHOD FOR IMMERSIVE TELECOMMUNICATIONS - A system and method for immersive telecommunication by tracking the movement of objects and/or persons with sensors. Movement tracking is then used to animate an avatar that represents the person or object. Movement may be tracked in real time, which at least reduces latency of communication. The sensors comprise any type of movement sensor which may be attached to a person and/or object for tracking motion, including but not limited to, an IMU (Inertial Measurement Unit), an accelerometer, a gyroscope or other such sensors. | 2022-05-19 |
20220157003 | SYSTEMS AND METHODS FOR CROSS-APPLICATION AUTHORING, TRANSFER, AND EVALUATION OF RIGGING CONTROL SYSTEMS FOR VIRTUAL CHARACTERS - Various examples of cross-application systems and methods for authoring, transferring, and evaluating rigging control systems for virtual characters are disclosed. A first application, which implements a first rigging control protocol, can provide an input associated with a request for a behavior from the rig for the virtual character. The input can be converted to be compatible with a second rigging control protocol that is different from the first rigging control protocol. One or more control systems can be evaluated based on the input to determine an output to provide the requested behavior from the virtual character rig. The one or more control systems can be defined according to the second rigging control protocol. The output can be converted to be compatible with the first rigging control protocol and provided to the first application to manipulate the virtual character according to the requested behavior. | 2022-05-19 |
20220157004 | GENERATING TEXTURED POLYGON STRIP HAIR FROM STRAND-BASED HAIR FOR A VIRTUAL CHARACTER - Computer generated (CG) hair for a virtual character can include strand-based (instanced) hair in which many thousands of digital strands represent real human hair strands. Strand-based hair can appear highly realistic, but rendering strand-based hair in real-time presents challenges. Techniques for generating textured polygon strip (poly strip) hair for a virtual character can use as an input previously-generated strand-based hair for the virtual character. Poly strips can be generated for a sampled set of strands in the strand-based hair. Additional poly strips may be generated near hairlines or part lines. Hair textures from a hair texture library can be matched to the poly strips. The matched textures can be scaled and packed into a region of texture space (e.g., a square region), which provides improved computer access, efficiency, and speed. A rendering engine can use the poly strips and the packed hair textures to render the character's hair in real-time. | 2022-05-19 |
20220157005 | METHOD AND APPARATUS FOR VIEWPORT SHIFTING OF NON-REAL TIME 3D APPLICATIONS - Systems and methods for super sampling and viewport shifting of non-real time 3D applications are disclosed. | 2022-05-19 |
20220157006 | SYSTEMS AND METHODS FOR AUGMENTED REALITY - Disclosed is a method of localizing a user operating a plurality of sensing components, preferably in an augmented or mixed reality environment, the method comprising transmitting pose data from a fixed control and processing module and receiving the pose data at a first sensing component; the pose data is then transformed into a first component relative pose in a coordinate frame based on the control and processing module. A display unit in communication with the first sensing component is updated with the transformed first component relative pose to render virtual content with improved environmental awareness. | 2022-05-19 |
20220157007 | Controlling Rendering Operations by Shader Buffer Identification - Methods of rendering a scene in a graphics system identify a draw call within a current render and analyse the last shader in the series of shaders used by the draw call to identify any buffers that are sampled by the last shader and that are to be written by a previous render that has not yet been sent for execution on the GPU. If any such buffers are identified, further analysis is performed to determine whether the last shader samples from the identified buffers using screen space coordinates that correspond to a current fragment location and if this determination is positive, the draw call is added to data relating to the previous render and the last shader is recompiled to replace an instruction that reads data from an identified buffer with an instruction that reads data from an on-chip register. | 2022-05-19 |
20220157008 | CONTENT SOFTENING OPTIMIZATION - A computer-implemented method comprising: receiving, as input, a plurality of images, each associated with a specified content category; generating, from each of the plurality of images, a set of transformed images by applying a series of non-photorealistic transformations having escalating transformation degrees, wherein each of the transformed images is labeled with a label indicating (i) the transformation degree applied thereto, and (ii) a content category associated therewith; obtaining, with respect to each of the set of transformed images, classification results assigned by a human annotator, wherein the classification results assign each of the transformed images in the set into one of a plurality of content categories; and calculating, for the human annotator, a classification score in each of the plurality of content categories, based, at least in part, on all of the classification results. | 2022-05-19 |
20220157009 | APPARATUS AND METHOD FOR EFFICIENTLY STORING RAY TRAVERSAL DATA - Apparatus and method for preventing re-traversal of a prior path on a restart. For example, one embodiment of an apparatus comprises: a ray generator to generate a plurality of rays in a graphics scene; a bounding volume hierarchy (BVH) generator to construct a BVH comprising a plurality of hierarchically arranged nodes, wherein the BVH comprises a specified number of child nodes at a current BVH level beneath a parent node in the hierarchy; circuitry to traverse one or more of the rays through the BVH to form a current traversal path and intersect the one or more rays with primitives contained within the nodes, wherein the circuitry is to process entries from the top of a first data structure comprising entries each associated with a child node at the current BVH level, the entries being ordered from top to bottom based on a sorted distance of each respective child node. | 2022-05-19 |
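The distance-ordered child structure described above can be sketched in a few lines. The node layout (a list of `(centre, payload)` pairs) and the use of squared distance are assumptions; the hardware data structure in the application is more elaborate.

```python
def ordered_child_entries(ray_origin, children):
    """Sketch of the sorted-distance traversal structure: child entries are
    ordered so the nearest child is taken from the top first. `children` is
    a hypothetical list of (centre, payload) pairs for one BVH node."""
    def dist(entry):
        centre, _ = entry
        # Squared distance is enough for ordering and avoids a sqrt.
        return sum((c - o) ** 2 for c, o in zip(centre, ray_origin))
    return [payload for _, payload in sorted(children, key=dist)]
```

Processing entries in this order lets a traversal restart resume from the top without re-visiting nearer children it has already completed, which is the efficiency the abstract targets.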
20220157010 | APPARATUS AND METHOD FOR EFFICIENTLY MERGING BOUNDING VOLUME HIERARCHY DATA - An apparatus and method for efficiently reconstructing a BVH. For example, one embodiment of a method comprises: constructing an object bounding volume hierarchy (BVH) for each object in a scene, each object BVH including a root node and one or more child nodes based on primitives included in each object; constructing a top-level BVH using the root nodes of the individual object BVHs; performing an analysis of the top-level BVH to determine whether the top-level BVH comprises a sufficiently efficient arrangement of nodes within its hierarchy; and reconstructing at least a portion of the top-level BVH if a more efficient arrangement of nodes exists, wherein reconstructing comprises rebuilding the portion of the top-level BVH until one or more stopping criteria have been met, the stopping criteria defined to prevent an entire rebuilding of the top-level BVH. | 2022-05-19 |
20220157011 | SYNTHESIZING AN IMAGE FROM A VIRTUAL PERSPECTIVE USING PIXELS FROM A PHYSICAL IMAGER ARRAY WEIGHTED BASED ON DEPTH ERROR SENSITIVITY - A method assigns weights to physical imager pixels in order to generate photorealistic images for virtual perspectives in real-time. The imagers are arranged in three-dimensional space such that they sparsely sample the light field within a scene of interest. This scene is defined by the overlapping fields of view of all the imagers or for subsets of imagers. The weights assigned to imager pixels are calculated based on the relative poses of the virtual perspective and physical imagers, properties of the scene geometry, and error associated with the measurement of geometry. This method is particularly useful for accurately rendering numerous synthesized perspectives within a digitized scene in real-time in order to create immersive, three-dimensional experiences for applications such as performing surgery, infrastructure inspection, or remote collaboration. | 2022-05-19 |
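A per-pixel weight combining pose agreement with depth-error sensitivity, as the abstract describes, might look like the sketch below. The weighting form (dot-product view agreement times a Gaussian error attenuation) is purely illustrative, not the patented formula.

```python
import math

def imager_pixel_weight(virtual_dir, imager_dir, depth_sigma, k=1.0):
    """Hedged sketch: weight a physical imager's pixel by how closely its
    view direction agrees with the virtual perspective, attenuated by the
    depth-measurement error for that pixel."""
    dot = sum(a * b for a, b in zip(virtual_dir, imager_dir))
    angular = max(0.0, dot)            # favour imagers looking the same way
    return angular * math.exp(-k * depth_sigma ** 2)
```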
20220157012 | Relighting Images and Video Using Learned Lighting and Geometry - Novel machine learning (ML) models are introduced for image reconstruction training and inference workflows, which are able to estimate intrinsic components of single view images, including albedo, normal, and lighting components. According to some embodiments, such models may be trained on a mix of real and synthetic image datasets. For training on real datasets, both reconstruction and cross-relighting consistency terms may be imposed. The use of a cross-relighting consistency term allows for the use of multiple images of the same scene—although lit under different lighting conditions—to be used during training. At inference time, the model is able to operate on single or multiple images. According to other embodiments, adversarial training (e.g., in the form of a generative adversarial network (GAN)) may optionally be incorporated into the training workflow, e.g., in order to better refine the re-rendered images from the individual lighting and geometric components estimated by the model. | 2022-05-19 |
20220157013 | A METHOD FOR PROCESSING A 3D SCENE, AND CORRESPONDING DEVICE, SYSTEM AND COMPUTER PROGRAM - A method for processing a 3D scene, and corresponding device, system and computer program are disclosed. In an example embodiment, the disclosed method includes: obtaining an image comprising at least a nadir view of a 3D scene, captured by at least one camera; detecting, in the image, at least one shadow cast by at least one object of the 3D scene acting as a support for the at least one camera; and determining a direction of at least one real light source from the at least one detected shadow and at least information representative of the object. | 2022-05-19 |
20220157014 | METHOD FOR RENDERING RELIGHTED 3D PORTRAIT OF PERSON AND COMPUTING DEVICE FOR THE SAME - The disclosure provides a method for generating relightable 3D portrait using a deep neural network and a computing device implementing the method. A possibility of obtaining, in real time and on computing devices having limited processing resources, realistically relighted 3D portraits having quality higher or at least comparable to quality achieved by prior art solutions, but without utilizing complex and costly equipment is provided. A method for rendering a relighted 3D portrait of a person, the method including: receiving an input defining a camera viewpoint and lighting conditions, rasterizing latent descriptors of a 3D point cloud at different resolutions based on the camera viewpoint to obtain rasterized images, wherein the 3D point cloud is generated based on a sequence of images captured by a camera with a blinking flash while moving the camera at least partly around an upper body, the sequence of images comprising a set of flash images and a set of no-flash images, processing the rasterized images with a deep neural network to predict albedo, normals, environmental shadow maps, and segmentation mask for the received camera viewpoint, and fusing the predicted albedo, normals, environmental shadow maps, and segmentation mask into the relighted 3D portrait based on the lighting conditions. | 2022-05-19 |
20220157015 | GHOST POINT FILTERING - Among other things, techniques are described for obtaining a range image related to a depth sensor of a vehicle operating in an environment. A first data point is identified in the range image with an intensity at or below a first intensity threshold. A first number of data points are determined in the range image that have an intensity at or above a second intensity threshold in a first region of the range image. Then, it is determined whether the first number of data points is at or above a region number threshold. The first data point is removed from the range image if the first number of data points is at or above the region number threshold. Operation of the vehicle is then facilitated in the environment based at least in part on the range image. Other embodiments may be described or claimed. | 2022-05-19 |
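The threshold-and-region test above can be sketched directly. Window size and all thresholds here are illustrative placeholders; the abstract does not specify the region shape.

```python
import numpy as np

def filter_ghost_points(range_img, intensity, low_t, high_t, region_count_t, r=2):
    """Sketch of the described filter: a low-intensity return is treated as
    a ghost point and dropped when at least `region_count_t` high-intensity
    returns surround it in a (2r+1)x(2r+1) region of the range image."""
    keep = np.ones(range_img.shape, dtype=bool)
    h, w = intensity.shape
    for y, x in zip(*np.where(intensity <= low_t)):
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        x0, x1 = max(0, x - r), min(w, x + r + 1)
        n_strong = np.count_nonzero(intensity[y0:y1, x0:x1] >= high_t)
        if n_strong >= region_count_t:
            keep[y, x] = False  # likely a ghost return near strong ones
    return keep
```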
20220157016 | SYSTEM AND METHOD FOR AUTOMATICALLY RECONSTRUCTING 3D MODEL OF AN OBJECT USING MACHINE LEARNING MODEL - A system and method of automatically reconstructing a three-dimensional (3D) model of an object using a machine learning model is provided. The method includes (i) obtaining, using an image capturing device, a color image of an object, (ii) generating, using an encoder, a feature map by converting the color image, represented as a 3D array, into an n-dimensional array, (iii) generating, using the machine learning model, a set of peeled depth maps and a set of RGB maps from the feature map, (iv) determining one or more 3D surface points of the object by back projecting the set of peeled depth maps and the set of RGB maps to 3D space, (v) reconstructing, using the machine learning model, a 3D model of the object by performing surface reconstruction using the one or more 3D surface points of the object. | 2022-05-19 |
20220157017 | METHOD AND APPARATUS FOR RECONSTRUCTING 3D MODEL FROM 2D IMAGE, DEVICE AND STORAGE MEDIUM - Disclosed are a method, an apparatus, a device and a storage medium for reconstructing a 3D model from 2D images, comprising: obtaining two-dimensional images respectively corresponding to at least two viewing angles of a three-dimensional object; and inputting the two-dimensional images respectively corresponding to the at least two viewing angles into a set neural network for information fusion and 3D model reconstruction so as to obtain a 3D model of the three-dimensional object. | 2022-05-19 |
20220157018 | TESSELLATION METHOD USING VERTEX TESSELLATION FACTORS - A tessellation method uses vertex tessellation factors. For a quad patch, the method involves comparing the vertex tessellation factors for each vertex of the quad patch to a threshold value and if none exceed the threshold, the quad is sub-divided into two or four triangles. If at least one of the four vertex tessellation factors exceeds the threshold, a recursive or iterative method is used which considers each vertex of the quad patch and determines how to further tessellate the patch dependent upon the value of the vertex tessellation factor of the selected vertex or dependent upon values of the vertex tessellation factors of the selected vertex and a neighbor vertex. A similar method is described for a triangle patch. | 2022-05-19 |
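The top-level decision for a quad patch can be sketched as below. The 2-versus-4 triangle choice and the recursion trigger are invented placeholders; only the overall shape (compare each vertex tessellation factor to a threshold, subdivide simply if none exceeds it, otherwise recurse per vertex) follows the abstract.

```python
def tessellate_quad(vertex_factors, threshold):
    """Hedged sketch of the quad-patch decision: if no vertex tessellation
    factor exceeds the threshold, the quad becomes 2 or 4 triangles;
    otherwise per-vertex recursive subdivision is needed (elided here)."""
    if all(f <= threshold for f in vertex_factors):
        # Illustrative split rule: finer factors get the 4-triangle fan.
        return 4 if max(vertex_factors) > threshold / 2 else 2
    return "recurse"  # consider each vertex and its neighbour's factor
```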
20220157019 | RENDERING IN COMPUTER GRAPHICS SYSTEMS - A graphics system has a rendering space divided into a plurality of rectangular areas, each being sub-divided into a plurality of smaller rectangular areas of a plurality of pixels. Data is received representing a tiled set of polygons to be rendered in a selected one of the rectangular areas. For each polygon, a determination is made whether that polygon is located at least partially inside a selected one of the smaller rectangular areas in the selected rectangular area. If so, the pixels of the plurality of pixels in the selected smaller rectangular area that are inside the polygon are identified. Otherwise, if that polygon is not located at least partially inside the selected smaller rectangular area, no further processing of the polygon is performed at one or more of the plurality of pixels in the smaller rectangular area. | 2022-05-19 |
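The per-pixel inside test in 20220157019 can be sketched with standard edge functions, as commonly used in tile-based rasterizers. This is an assumed, simplified formulation (triangles only, pixel-centre sampling), not the patent's actual test.

```python
def edge(ax, ay, bx, by, px, py):
    # Signed area: positive when P lies to the left of edge A->B.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def pixels_inside_triangle(tri, tile_origin, tile_size):
    """Return the pixels of a small rectangular tile whose centres lie
    inside the triangle. An empty result means the tile needs no further
    per-pixel processing for this polygon."""
    (ax, ay), (bx, by), (cx, cy) = tri
    ox, oy = tile_origin
    hits = []
    for y in range(oy, oy + tile_size):
        for x in range(ox, ox + tile_size):
            px, py = x + 0.5, y + 0.5  # sample at the pixel centre
            w0 = edge(bx, by, cx, cy, px, py)
            w1 = edge(cx, cy, ax, ay, px, py)
            w2 = edge(ax, ay, bx, by, px, py)
            # Inside when all edge functions agree in sign (either winding).
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                hits.append((x, y))
    return hits
```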
20220157020 | METHOD AND APPARATUS FOR ONLINE FITTING - An online fitting method and apparatus receive, from a user, a body size of the user and a target size of clothes desired by the user for fitting, obtain barycentric coordinate information corresponding to a result of fitting the clothes of the target size to a reference avatar selected based on the body size of the user, generate a target avatar having the same mesh topology as the reference avatar and corresponding to the body size of the user, fit the clothes to the target avatar by applying the barycentric coordinate information to the target avatar, and display a result of fitting the clothes to the target avatar. | 2022-05-19 |
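Because the target avatar in 20220157020 shares the reference avatar's mesh topology, a garment point stored as barycentric weights on a reference triangle can simply be re-evaluated on the corresponding target triangle. A minimal sketch, with illustrative names:

```python
def transfer_point(bary, target_tri):
    """Re-evaluate a garment point, stored as barycentric weights
    (w0, w1, w2) on a reference-avatar triangle, on the corresponding
    target-avatar triangle. Triangle indices carry over because the two
    avatars share one mesh topology."""
    w0, w1, w2 = bary
    v0, v1, v2 = target_tri
    # Weighted sum of the three vertex positions, coordinate by coordinate.
    return tuple(w0 * a + w1 * b + w2 * c for a, b, c in zip(v0, v1, v2))
```

Applying this to every stored garment point re-drapes the clothes of the target size onto the target avatar without re-running the fit.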
20220157021 | PARK MONITORING METHODS, PARK MONITORING SYSTEMS AND COMPUTER-READABLE STORAGE MEDIA - Park monitoring methods, park monitoring systems, and computer-readable storage media are provided. A method includes: determining a first path from a current position of a first object to a first position; generating a 3D virtual image of a surrounding environment when the first object is moving along the first path based on a perspective of the first object; and when the first object is located at the first position, displaying a real image of the first position. | 2022-05-19 |
20220157022 | METHOD AND APPARATUS FOR VIRTUAL TRAINING BASED ON TANGIBLE INTERACTION - A method and an apparatus for virtual training based on tangible interaction are provided. The apparatus acquires data for virtual training, and acquires a three-dimensional position of a real object based on a depth image and a color image of the real object and infrared (IR) data included in the obtained data. Then, virtualization of an overall appearance of a user is performed by extracting a depth from depth information on a user image included in the obtained data and matching the extracted depth with the color information, and the depth data and color data for the user obtained through virtualization of the user are visualized in virtual training content. In addition, the apparatus performs correction on joint information using the joint information and the depth information included in the obtained data, estimates a posture of the user using the corrected joint information, and estimates a posture of a training tool using the depth information and IR data included in the obtained data. | 2022-05-19 |
20220157023 | TRIGGERING A COLLABORATIVE AUGMENTED REALITY ENVIRONMENT USING AN ULTRASOUND SIGNAL - According to an aspect, a method for sharing a collaborative augmented reality (AR) environment including obtaining, by a sensor system of a first computing system, visual data representing a physical space of an AR environment, where the visual data is used to create a three-dimensional ( | 2022-05-19 |
20220157024 | Systems for Augmented Reality Authoring of Remote Environments - In implementations of systems for augmented reality authoring of remote environments, a computing device implements an augmented reality authoring system to display a three-dimensional representation of a remote physical environment on a display device based on orientations of an image capture device. The three-dimensional representation of the remote physical environment is generated from a three-dimensional mesh representing a geometry of the remote physical environment and digital video frames depicting portions of the remote physical environment. The augmented reality authoring system receives input data describing a request to display a digital video frame of the digital video frames. A particular digital video frame of the digital video frames is determined based on an orientation of the image capture device relative to the three-dimensional mesh. The augmented reality authoring system displays the particular digital video frame on the display device. | 2022-05-19 |
20220157025 | REAL-TIME MOTION TRANSFER FOR PROSTHETIC LIMBS - Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program, and a method for performing operations comprising: receiving a video that depicts a person; identifying a set of skeletal joints corresponding to limbs of the person; tracking 3D movement of the set of skeletal joints corresponding to the limbs of the person in the video; causing display of a 3D virtual object that has a plurality of limbs including one or more extra limbs than the limbs of the person in the video; and moving the one or more extra limbs of the 3D virtual object based on the movement of the set of skeletal joints corresponding to the limbs of the person in the video. | 2022-05-19 |
20220157026 | AUTHORING AND PRESENTING 3D PRESENTATIONS IN AUGMENTED REALITY - Various methods and systems are provided for authoring and presenting 3D presentations. Generally, an augmented or virtual reality device for each author, presenter and audience member includes 3D presentation software. During authoring mode, one or more authors can use 3D and/or 2D interfaces to generate a 3D presentation that choreographs behaviors of 3D assets into scenes and beats. During presentation mode, the 3D presentation is loaded in each user device, and 3D images of the 3D assets and corresponding asset behaviors are rendered among the user devices in a coordinated manner. As such, one or more presenters can navigate the scenes and beats of the 3D presentation to deliver the 3D presentation to one or more audience members wearing augmented reality headsets. | 2022-05-19 |
20220157027 | Artificial Reality Environment with Glints Displayed by An Extra Reality Device - The present embodiments relate to display of glints associated with real-world objects in an environment displayed on an extra reality (XR) device. The glint can include a virtual object associated with a real-world object, such as an indication of a social interaction associated with a real-world object, a content item tagged to an object, etc. The system as described herein can present glints on a display of an XR device based on a distance between the XR device and a location associated with the glint. Responsive to selection of a glint in the environment, additional information can be presented relating to the glint or another action can be taken, such as to open an application. In some instances, a glint can include a series of search results relating to a corresponding real-world object to provide additional information relating to the real-world object. | 2022-05-19 |
20220157028 | MOBILE DEVICE IMAGE ITEM REPLACEMENTS - A system for replacing physical items in images is discussed. A depicted item can be selected and removed from an image via image mask data and pixel merging techniques. Virtual light source positions can be generated based on real-world light source data from the image. A rendered simulation of a virtual item can then be integrated into the image to create a modified image for display. | 2022-05-19 |
20220157029 | STORAGE MEDIUM STORING INFORMATION PROCESSING PROGRAM, INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING METHOD - A virtual reference plane and a virtual camera are updated based on detection of a characteristic portion in a captured image. A virtual object and the virtual camera are updated based on a shooting state. An overlay image in which an image of the virtual object is overlaid on the captured image is generated. The virtual camera and the virtual object are controlled such that the virtual object is in a field-of-view range, before the detection of the characteristic portion. The virtual object, the virtual reference plane, and the virtual camera are updated such that the virtual object is along the virtual reference plane, based on the shooting state, after the detection of the characteristic portion, and such that an appearance of the virtual object is in association with the shooting state, no matter whether or not the characteristic portion has been detected, when a position fixation condition is satisfied. | 2022-05-19 |
20220157030 | High Quality AR Cosmetics Simulation via Image Filtering Techniques - The disclosure is directed to embodiments for producing high-quality images simulating the application of a material (e.g., virtual cosmetics) to a person's body. Example implementations can generate an augmented image displaying a virtual cosmetic layer (e.g., lipstick) on a person's face. For instance, a computing system can obtain an image depicting the face and identify a region for applying the cosmetic. The system can use augmented reality and/or image filtering to process the image into datasets that can be combined with material data related to the virtual cosmetic to generate augmented image(s) simulating application of the material. | 2022-05-19 |
20220157031 | SYSTEMS AND METHODS FOR ENHANCING AND DEVELOPING ACCIDENT SCENE VISUALIZATIONS - Systems and methods are disclosed for enhancing and developing a damage scene virtual reality (VR) visualization. Annotated immersive multimedia image(s) may be received from a first user, where the annotated immersive multimedia image(s) can be associated with a damage scene. A VR visualization of the annotated immersive multimedia image(s) may be rendered using a VR device associated with a second user. The VR visualization may be used to determine a damage amount, where the damage amount is determined from one or more damaged items identifiable in the annotated immersive multimedia image(s). | 2022-05-19 |
20220157032 | MULTI-MODALITY LOCALIZATION OF USERS - Systems and methods providing for determining physical location of a device of a user of an augmented reality environment corresponding to a physical space. The systems and methods involve requesting and receiving a list of participating users having a GPS location within a predetermined radius of a first device; sending advertising and scanning beacons, via a first wireless network, to generate a second list of devices present in the physical space; performing simultaneous localization and mapping (SLAM) using the participating devices of the second list; generating a third list based at least partly on a Bluetooth connection between the one or more participating devices of the second list; and identifying the participating devices of the third list. | 2022-05-19 |
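The successive list-narrowing in 20220157032 can be sketched as plain set filtering. This sketch compresses the method to its filtering skeleton: the real system also performs SLAM between steps, and the names here are illustrative.

```python
def narrow_candidates(gps_list, beacon_devices, bluetooth_peers):
    """Narrow the set of co-located users in three passes: start from the
    GPS-radius candidates (first list), keep those also observed via
    advertising/scanning beacons on the wireless network (second list),
    then keep those with a Bluetooth connection (third list)."""
    second = [u for u in gps_list if u in beacon_devices]
    third = [u for u in second if u in bluetooth_peers]
    return third
```

Each modality is cheap to check but noisy on its own; intersecting them yields a list of devices that are very likely in the same physical space.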
20220157033 | SYSTEM AND METHOD TO GENERATE MODELS USING INK AND AUGMENTED REALITY - This application relates to systems, methods, devices, and other techniques that use cameras, specialized ink spreads, and augmented reality technology to generate models within an auto-checkout system in a retail environment. | 2022-05-19 |
20220157034 | Intra-oral scanning device with active delete of unwanted scanned items - An intra-oral scanning device and system are augmented to provide an active delete (or filter) tool. The tool automatically detects and deletes unwanted items and artifacts captured as the scan is being done. The technique provides a better user experience by automatically removing items whose deletion may otherwise be time-consuming, frustrating, or impossible for the user to accomplish any other way. | 2022-05-19 |
20220157035 | SYSTEMS AND METHODS FOR PROVIDING AUGMENTED MEDIA - The present disclosure relates, in part, to spatially aware media that includes three-dimensional (3D) spatial information pertaining to a real-world space. The spatially aware media may map this 3D spatial information to media such as an image, for example, to provide 3D spatial context for the media. This may allow users to more flexibly and efficiently interact with virtual content in real-world spaces that are relevant to them. According to one embodiment, spatially aware media is augmented to provide an image of a real-world space overlaid with a render of a 3D model defined relative to the 3D spatial features of the real-world space. Before augmenting the spatially aware media, a recommended position for the 3D model relative to the 3D spatial features of the real-world space may be determined based on the 3D model and/or on the spatially aware media. | 2022-05-19 |
20220157036 | METHOD FOR GENERATING VIRTUAL CHARACTER, ELECTRONIC DEVICE, AND STORAGE MEDIUM - The present disclosure discloses a method for generating a virtual character, an electronic device, and a storage medium, relating to a field of virtual reality, in particular to fields of artificial intelligence, Internet of Things, voice technology, cloud computing, etc. An implementation includes: acquiring a language description generated by a user for a target virtual character; extracting a respective semantic feature based on the language description; and generating the target virtual character based on the semantic feature. | 2022-05-19 |
20220157037 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - An image display apparatus according to an embodiment of the present technology includes a display control unit and a processing executing unit. The display control unit controls display of a designation image capable of designating a region with respect to a target image. The processing executing unit executes processing associated with the designation image on a designated region designated by the designation image. The processing executing unit executes, on an overlap region in which a first designated region designated by a first designation image and a second designated region designated by a second designation image overlap with each other, first processing associated with the first designation image and second processing associated with the second designation image. The display control unit moves the second designation image in conjunction with movement of the first designation image when the overlap region exists. | 2022-05-19 |
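For axis-aligned rectangular designation regions, the overlap region in 20220157037 reduces to a standard rectangle intersection. This is a sketch under that assumption; the patent does not restrict designated regions to rectangles.

```python
def overlap(r1, r2):
    """Intersect two designated regions given as (x, y, width, height).
    Returns the overlap region in the same form, or None when the first
    and second designated regions do not overlap."""
    x = max(r1[0], r2[0])
    y = max(r1[1], r2[1])
    x2 = min(r1[0] + r1[2], r2[0] + r2[2])
    y2 = min(r1[1] + r1[3], r2[1] + r2[3])
    if x2 <= x or y2 <= y:
        return None  # no overlap region: each processing runs separately
    # Both the first and the second processing are executed on this region.
    return (x, y, x2 - x, y2 - y)
```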
20220157038 | AUTOMATED AUTHENTICATION REGION LOCALIZATION AND CAPTURE - A device includes a processor, a machine-readable memory, and an optical capture device coupled to the processor. The machine-readable memory, which is accessible to the processor, stores processor-executable instructions and data. The processor is configured to perform certain operations responsive to execution of the processor-executable instructions. The certain operations include capturing image data of a first region of a physical object using the optical capture device. The captured image data includes an anchor region. The certain operations also include determining whether the captured image data at least meets a predetermined level of image quality, and in a case that it does, locating an authentication region in the captured image data based on the anchor region. The certain operations also include authenticating the object based on the captured image data in the authentication region, generating a result of the authenticating, and outputting the result to a user interface of the device. | 2022-05-19 |
20220157039 | Object Location Determination - Object parts ( | 2022-05-19 |