42nd week of 2021 patent application highlights part 56 |
Patent application number | Title | Published |
20210327060 | DISCERNING DEVICE, CELL MASS DISCERNING METHOD, AND COMPUTER PROGRAM - A discerning device that discerns a cell mass includes: a storage unit that stores a trained model that has been subjected to machine learning on the basis of training data in which an index associated with a first cell mass out of a predetermined index including at least one index indicating a feature of a cell mass is correlated with information indicating whether a state of the first cell mass is a first state or a second state that is different from the first state; an image-analyzing unit that acquires an index associated with a second cell mass out of the predetermined index; and a discerning-processing unit that discerns whether a state of the second cell mass is the first state or the second state on the basis of the index associated with the second cell mass and the trained model. | 2021-10-21 |
20210327061 | METHOD FOR OBJECT DETECTION USING HIERARCHICAL DEEP LEARNING - A hierarchical deep-learning object detection framework provides a method for identifying objects of interest in high-resolution, high pixel count images, wherein the objects of interest comprise a relatively small pixel count when compared to the overall image. The method uses a first deep-learning model to analyze the high pixel count images, in whole or as a patchwork, at a lower resolution to identify objects, and a second deep-learning model to analyze the objects at a higher resolution to classify the objects. | 2021-10-21 |
20210327062 | METHOD AND DEVICE FOR ASSISTING HEART DISEASE DIAGNOSIS - The present invention relates to a method of assisting in diagnosis of a target heart disease using a retinal image, the method including: obtaining a target retinal image which is obtained by imaging a retina of a testee; on the basis of the target retinal image, obtaining heart disease diagnosis assistance information of the testee according to the target retinal image, via a heart disease diagnosis assistance neural network model which obtains diagnosis assistance information that is used for diagnosis of the target heart disease according to the retinal image; and outputting the heart disease diagnosis assistance information of the testee. | 2021-10-21 |
20210327063 | SYSTEMS AND METHODS FOR WHOLE-BODY SPINE LABELING - Methods and systems are provided for whole-body spine labeling. In one embodiment, a method comprises acquiring a non-functional image volume of a spine, acquiring a functional image volume of the spine, determining at least one spine label seed point on a non-functional image volume, automatically labeling the non-functional image volume with a plurality of spine labels based on the at least one spine label seed point, automatically correcting the geometric misalignments and registering the functional image volume, adjusting the plurality of spine labels and propagating the adjusted spine labels to the functional image volume. In this way, the anatomical details of non-functional imaging volumes may be leveraged to improve clinical diagnoses based on functional imaging, such as diffusion weighted imaging (DWI). | 2021-10-21 |
20210327064 | SYSTEM AND METHOD FOR CALCULATING FOCUS VARIATION FOR A DIGITAL MICROSCOPE - Apparatus and methods are described for use with a digital microscope unit that includes a digital microscope. A biological cell sample that is disposed within a sample carrier is received into the digital microscope unit. It is determined that there is a variation in the focal depth of the biological sample with respect to the microscope due to curvature in the sample carrier and/or due to tolerance in setup of the microscope. In response to determining that there is the variation in the focal depth of the biological sample with respect to the microscope, the variation in the focal depth of the biological sample with respect to the microscope is accounted for. Other applications are also described. | 2021-10-21 |
20210327065 | PROSTHESIS SCANNING AND IDENTIFICATION SYSTEM AND METHOD - The present application is directed to a Prosthesis Scanning and Identification System and Method used to positively identify an implanted prosthesis or any other implanted orthopedic device. The Prosthesis Scanning and Identification System and Method enables the positive identification of an implanted prosthesis or device by providing the steps of: (1) obtaining an initial conventional X-ray radiograph, or other suitable imaging of the affected area, and especially procuring the profile of an implanted prosthesis; (2) photographing the resulting X-ray radiograph then scanning said photograph using a smartphone and configured smartphone application; (3) searching a configured prosthesis identification database stored on a central server, accessible using a smartphone application, for profiles similar to the scanned prosthesis image; and (4) obtaining a list of probable prosthesis models based on the scanned images and profile comparisons. | 2021-10-21 |
20210327066 | APPARATUS AND METHOD FOR DETERMINING MUSCULOSKELETAL DISEASE - An apparatus for determining a musculoskeletal disease is disclosed. The apparatus may include a motion protocol learning unit configured to generate a first motion protocol used to determine a musculoskeletal disease in advance through learning, a motion protocol recognition model unit configured to generate a motion protocol recognition model for determining a musculoskeletal disease by using information of the first motion protocol, a body pose estimator configured to receive a user image to be recognized and estimate a body pose from the user image, and a disease classification and prediction unit configured to determine a musculoskeletal disease by matching the body pose and the motion protocol recognition model. | 2021-10-21 |
20210327067 | ENDOSCOPIC IMAGE PROCESSING APPARATUS, ENDOSCOPIC IMAGE PROCESSING METHOD, AND RECORDING MEDIUM - An endoscopic image processing apparatus includes a processor. The processor performs processing for acquiring lesion information including information indicating a position of a lesion region included in an endoscopic image, determines whether the lesion region is included in the endoscopic image, and performs processing for generating a display image for displaying one or more marks in at least one of N mark display regions set as regions as many as a maximum display number of the one or more marks, and generating the one or more marks indicating in which reference region among a predetermined plurality of reference regions set in the endoscopic image a present position of the lesion region or a position of the lesion region immediately before the detection is interrupted is included and displaying the one or more marks in at least one of the N mark display regions. | 2021-10-21 |
20210327068 | Methods for Automated Lesion Analysis in Longitudinal Volumetric Medical Image Studies - Described herein is a computer implemented method that includes receiving at a data processor two or more digital data files representing medical images of a same modality; performing group-wise 3D registration of the digital data files representing medical images of a same modality; and parallel lesion detection and analysis on the digital data files representing the medical images. | 2021-10-21 |
20210327069 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM - It is possible to inhibit deterioration of extraction precision of a subject and reliably extract the subject even when colors of the subject and a background are the same or similar. An image processing device | 2021-10-21 |
20210327070 | SYSTEMS AND METHODS FOR IMAGE SEGMENTATION - The present disclosure relates to systems and methods for image segmentation. The system may include at least one processor that is directed to obtain an image; divide the image into a plurality of image blocks; determine, for each of the plurality of image blocks, a compressed representation of one or more features of the image block; group the plurality of image blocks into at least two different categories based on the plurality of compressed representations; and extract a region of interest from the image based on at least one of the at least two different categories of the plurality of image blocks. | 2021-10-21 |
20210327071 | Automated Cropping of Images Using a Machine Learning Predictor - Example systems and methods for selection of video frames using a machine learning (ML) predictor program are disclosed. The ML predictor program may generate predicted cropping boundaries for any given input image. Training raw images, associated with respective sets of training master images indicative of cropping characteristics for each training raw image, may be input to the ML predictor, and the ML predictor program trained to predict cropping boundaries for a raw image based on expected cropping boundaries associated with the training master images. At runtime, the trained ML predictor program may be applied to runtime raw images in order to generate respective sets of runtime cropping boundaries corresponding to different cropped versions of the runtime raw image. The runtime raw images may be stored with information indicative of the respective sets of runtime boundaries. | 2021-10-21 |
20210327072 | METHODS AND SYSTEMS FOR IMAGE PROCESSING - Methods and systems for image processing are provided. A target image may be acquired, wherein the target image may include a plurality of elements, an element of which may correspond to a pixel or a voxel. The target image may be decomposed into at least one layer, wherein the at least one layer may include a low frequency sub-image and a high frequency sub-image. The at least one layer may be transformed. The transformed layer may be reconstructed into a composite image. | 2021-10-21 |
20210327073 | METHOD AND PIXEL ARRAY FOR DETECTING MOTION INFORMATION - A method for detecting motion information includes the following steps. First, a pixel array is provided for detecting an image of a measured object located in a first distance range or in a second distance range, and the pixel array includes a plurality of invisible image sensing pixels and a plurality of visible image sensing pixels. Then, image detection is conducted within the first distance range by using the invisible image sensing pixels to output a plurality of invisible images. Next, the image detection is conducted within the second distance range by using the visible image sensing pixels to output a plurality of visible images. Then, the plurality of invisible images and the plurality of visible images are analyzed by using a processing unit, so as to obtain motion information of the measured object. A pixel array for detecting motion information and an image sensor are also provided. | 2021-10-21 |
20210327074 | IMAGE PROCESSING METHOD AND APPARATUS - An image processing method includes defining relations between entities of a target of which a motion is to be predicted from an image of a first time point based on a feature vector of the entities, estimating a dynamic interaction between the entities at the first time point based on the defined relations between the entities, predicting a motion of the entities changing at a second time point based on the estimated dynamic interaction, and outputting a result to which the motion predicted at the second time point is applied. | 2021-10-21 |
20210327075 | APPARATUS AND METHOD FOR MEASURING FLOW VELOCITY OF STREAM USING OPTICAL FLOW IMAGE PROCESSING - Disclosed is a river flow velocity measurement device using optical flow image processing, including: an image photographing unit configured to acquire consecutive images of a flow velocity measurement site of a river; an image conversion analysis unit configured to dynamically extract frames of the consecutive images in order to normalize image data of the image photographing unit, image-convert the extracted frames, and perform homography calculation; an analysis region extracting unit configured to extract an analysis region of an analysis point; a pixel flow velocity calculating unit configured to calculate a pixel flow velocity using an image in the analysis region of the analysis point extracted by the analysis region extracting unit; and an actual flow velocity calculating unit configured to convert the pixel flow velocity calculated by the pixel flow velocity calculating unit into an actual flow velocity. | 2021-10-21 |
20210327076 | TARGET TRACKING METHOD AND APPARATUS, STORAGE MEDIUM, AND ELECTRONIC DEVICE - This application provides a target tracking method, including: obtaining a plurality of consecutive picture frames of a target video, and setting a tracked target region of an n | 2021-10-21 |
20210327077 | DETERMINING A GEOLOGICAL CORRELATION WITH A WELLBORE POSITION - This disclosure presents a process to determine an alignment parameter for geosteering a wellbore undergoing drilling operations. The process can receive one or more azimuthal image log data sets, one or more geology logs, and other input parameters. The image log data sets can be transformed to better approximate the geology logs, such as transforming a 3D representation to a 2D representation and flattening out curves represented in the original image log data. The geology logs or transformed image log data can then be moved to create an approximate alignment between the other log data. The movement, which can be a sliding movement, a linear movement, a tilting movement, an angling movement, or a rotating movement, can be used to determine the determined alignment parameter or final alignment parameter. The alignment parameter can be used as input into a geosteering system for the wellbore. | 2021-10-21 |
20210327078 | THREE-DIMENSIONAL CAMERA SYSTEM - A camera system. In some embodiments, the camera system includes a first laser, a camera, and a processing circuit connected to the first laser and to the camera. The first laser may be steerable, and the camera may include a pixel including a photodetector and a pixel circuit, the pixel circuit including a first time-measuring circuit. | 2021-10-21 |
20210327079 | DEPTH SENSING APPARATUS AND OPERATING METHOD OF DEPTH SENSING APPARATUS - A synchronization method of a first depth sensing apparatus includes: transmitting a first optical signal to measure a distance to an object; receiving the first optical signal reflected by the object; when recognition of the received first optical signal fails, stopping transmission of the first optical signal and generating first synchronization information for synchronization with at least one second depth sensing apparatus; receiving a third optical signal for synchronization with the first depth sensing apparatus, which is transmitted by the at least one second depth sensing apparatus, and decoding the received third optical signal to extract at least one piece of second synchronization information; determining a time point at which and a cycle in which to re-transmit the first optical signal, based on the first synchronization information and the at least one piece of second synchronization information; and re-transmitting the first optical signal at the determined time point and cycle. | 2021-10-21 |
20210327080 | Image Processing and Segmentation of Sets of Z-Stacked Images of Three-Dimensional Biological Samples - Methods are provided to project depth-spanning stacks of limited depth-of-field images of a sample into a single image of the sample that can provide in-focus image information about three-dimensional contents of the image. These methods include applying filters to the stacks of images in order to identify pixels within each image that have been captured in focus. These in-focus pixels are then combined to provide the single image of the sample. Filtering of such image stacks can also allow for the determination of depth maps or other geometric information about contents of the sample. Such depth information can also be used to inform segmentation of images of the sample, e.g., by further dividing identified regions that correspond to the contents of the sample at multiple different depths. | 2021-10-21 |
20210327081 | METHODS FOR GENERATING MOIRE-PRODUCING PATTERN, APPARATUSES FOR GENERATING MOIRE-PRODUCING PATTERN, AND SYSTEMS FOR GENERATING MOIRE-PRODUCING PATTERN - A feature value such as a grayscale value is extracted from a design pattern on which a moiré image is based. Then, an aperture/non-aperture ratio of a moiré pattern is set according to the feature value, taking into account the specification of layers desired for a moiré image, setting of a basic pattern, and information regarding a moiré display. In addition to the aperture/non-aperture ratio, a phase shift amount and a pitch ratio can be further added to efficiently produce a moiré image having a beautiful appearance and excellent design. Moreover, such a method for generating a moiré pattern can be incorporated in a system to further improve production efficiency. | 2021-10-21 |
20210327082 | METHOD AND PROCESSING SYSTEM FOR UPDATING A FIRST IMAGE GENERATED BY A FIRST CAMERA BASED ON A SECOND IMAGE GENERATED BY A SECOND CAMERA - A method and system for processing camera images is presented. The system receives a first depth map generated based on information sensed by a first type of depth-sensing camera, and receives a second depth map generated based on information sensed by a second type of depth-sensing camera. The first depth map includes a first set of pixels that indicate a first set of respective depth values. The second depth map includes a second set of pixels that indicate a second set of respective depth values. The system identifies a third set of pixels of the first depth map that correspond to the second set of pixels of the second depth map, identifies one or more empty pixels from the third set of pixels, and updates the first depth map by assigning to each empty pixel a respective depth value based on the second depth map. | 2021-10-21 |
20210327083 | SYSTEMS AND METHODS OF MEASURING AN OBJECT IN A SCENE OF A CAPTURED IMAGE - Systems and methods are provided that include a plurality of sensors, communicatively coupled to one another, to periodically transmit positional location information. A digital image capture device, communicatively coupled to the plurality of sensors, may capture an image of a scene which includes at least one of the plurality of sensors. A processor, communicatively coupled to the digital image capture device, may determine a measurement of at least one object in the captured image of the scene, where the measurement of the at least one object is based at least in part on the positional location information received by the digital image capture device at the time that the image of the scene is captured. A display device, communicatively coupled to the processor, may display the determined measurements of the at least one object. | 2021-10-21 |
20210327084 | VISUAL LOCALIZATION USING A THREE-DIMENSIONAL MODEL AND IMAGE SEGMENTATION - An apparatus receives a first image captured by an image capture device of a mobile apparatus. The first image corresponds to surroundings of the mobile apparatus. Each artificial image of a first plurality of artificial images respectively is a two-dimensional projection of a three-dimensional model from a perspective of an image position and image pose. The apparatus determines one or more first image attributes respectively for one or more sections of the first image; identifies at least one artificial image of the first plurality of artificial images that has one or more artificial image attributes that substantially match the one or more first image attributes for corresponding sections of the at least one artificial image and the first image; and determines a location and/or pose of the mobile apparatus based at least in part on the image position and/or the image pose associated with the at least one artificial image. | 2021-10-21 |
20210327085 | PERSONALIZED NEURAL NETWORK FOR EYE TRACKING - Disclosed herein is a wearable display system for capturing retraining eye images of an eye of a user for retraining a neural network for eye tracking. The system captures retraining eye images using an image capture device when user interface (UI) events occur with respect to UI devices displayed at display locations of a display. The system can generate a retraining set comprising the retraining eye images and eye poses of the eye of the user in the retraining eye images (e.g., related to the display locations of the UI devices) and obtain a retrained neural network that is retrained using the retraining set. | 2021-10-21 |
20210327086 | DETECTION METHOD FOR PEDESTRIAN EVENTS, ELECTRONIC DEVICE, AND STORAGE MEDIUM - The present disclosure relates to a pedestrian event detection method and device, an electronic apparatus, and a storage medium. The method comprises: acquiring coordinates of a target pedestrian in multi-frame to-be-processed images; acquiring coordinates of a preset space; and determining a pedestrian event occurring to the target pedestrian in the preset space according to the coordinates of the target pedestrian in the multi-frame to-be-processed images and the coordinates of the preset space. The embodiments of the present disclosure can improve the accuracy of detecting pedestrian events. | 2021-10-21 |
20210327087 | SYSTEMS AND METHODS FOR LOCATING OBJECTS - In one embodiment, a method includes receiving an image generated by a camera associated with a vehicle. The image includes a point of interest (POI) associated with a physical object. The method also includes determining a number of pixels from the POI of the image to an edge of the image. The edge of the image represents a location of the camera. The method further includes determining an offset distance from the POI to a Global Positioning System (GPS) unit associated with the vehicle using the number of pixels. | 2021-10-21 |
20210327088 | EYE TRACKING METHOD AND SYSTEM - A method for determining a series of gaze positions of at least one eye over time is provided. The method comprises capturing a video of a user's face simultaneously with displaying a stimulus video on a screen and extracting at least one color component for each one of a plurality of images obtained from the video of the user's face. Based on the at least one color component for each one of the plurality of images, the series of gaze positions of the user's face over the time of the video is determined. A system for determining a series of gaze positions of at least one eye over time is also provided. | 2021-10-21 |
20210327089 | Method for Measuring Positions - A method of measuring a target's position is provided. The method comprises: providing a marker for the target and a tracking assembly. The marker has a convex measuring surface, configured to be part or whole of a sphere, such that the center of the convex measuring surface substantially corresponds to the position of the target. The tracking assembly comprises a measuring piece, which has a tracking tool fixedly attached to it. One type of measuring piece has a concave measuring surface substantially fitting the convex measuring surface of the marker; another type of measuring piece comprises a vision measuring system configured to measure the position of a center of the marker with respect to a designated coordinate system of the vision measuring system. The method to obtain the calibration relationship between the designated coordinate system of the vision measuring system and the tracking tool is also described. The disclosed method is more convenient and is able to improve the accuracy of measuring a target's position. | 2021-10-21 |
20210327090 | SENSOR CALIBRATION SYSTEM, DISPLAY CONTROL APPARATUS, PROGRAM, AND SENSOR CALIBRATION METHOD - A sensor calibration system is provided that includes a sensor apparatus including an event-driven vision sensor including a sensor array configured with sensors that generate event signals upon detection of a change in incident light intensity, and a display apparatus including a display section configured to change luminance of a planar region instantaneously with a predetermined spatial resolution as per a calibration pattern of the sensors. | 2021-10-21 |
20210327091 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing method of an image processing apparatus according to an embodiment of the present invention comprises the steps of: obtaining an RGB image from an RGB camera; obtaining a ToF image from a ToF camera; extracting a first RGB feature point from the RGB image; extracting a first ToF feature point from the ToF image; matching the first RGB feature point and the first ToF feature point and extracting a second RGB feature point and a second ToF feature point such that a correlation between the first RGB feature point and the first ToF feature point is equal to or greater than a predetermined value; calculating an error value between the second RGB feature point and the second ToF feature point; updating pre-stored calibration data when the error value is greater than a threshold value, and calibrating the RGB image and the ToF image by using the updated calibration data; and synthesizing the calibrated RGB and ToF images. | 2021-10-21 |
20210327092 | NON-RIGID STEREO VISION CAMERA SYSTEM - A long-baseline and long depth-range stereo vision system is provided that is suitable for use in non-rigid assemblies where relative motion between two or more cameras of the system does not degrade estimates of a depth map. The stereo vision system may include a processor that tracks camera parameters as a function of time to rectify images from the cameras even during fast and slow perturbations to camera positions. Factory calibration of the system is not needed, and manual calibration during regular operation is not needed, thus simplifying manufacturing of the system. | 2021-10-21 |
20210327093 | IMAGE PROCESSING METHOD OF VIRTUAL REALITY AND APPARATUS THEREOF - The present disclosure is related to an image processing method of virtual reality. The image processing method may include obtaining position information of a gaze point; performing a compression process on an original image to obtain a compressed image based on the position information of the gaze point; performing a compression process on a distorted image of the original image in a lens to obtain a compressed distorted image based on the position information of the gaze point; and performing an anti-distortion process on the compressed image to obtain an anti-distortion image based on a relationship between the compressed image and the compressed distorted image. The position information of the gaze point may be position information of a gaze point of a user's eye on an original image. | 2021-10-21 |
20210327094 | EDGE ENHANCEMENT FILTER - A method, computer program, and computer system is provided for coding video data. Video data is received, and an edge present within a sample of the received video data is detected. A gradient value corresponding to a direction associated with the detected edge is calculated. The video data is decoded based on the calculated gradient. | 2021-10-21 |
20210327095 | ANGULAR MODE SIMPLIFICATION FOR GEOMETRY-BASED POINT CLOUD COMPRESSION - A method of decoding point cloud data comprises obtaining a bitstream that includes an arithmetically encoded syntax element indicating a vertical plane position of a planar mode of a node; and decoding the vertical plane position of the planar mode in the node, wherein decoding the vertical plane position of the planar mode comprises: determining a laser index of a laser candidate in a set of laser candidates, wherein the determined laser index indicates a laser beam that intersects the node; determining a context index based on whether the laser beam is above a first distance threshold, between the first distance threshold and a second distance threshold, between the second distance threshold and a third distance threshold, or below the third distance threshold; and arithmetically decoding the vertical plane position of the planar mode using a context indicated by the determined context index. | 2021-10-21 |
20210327096 | SECONDARY COMPONENT ATTRIBUTE CODING FOR GEOMETRY-BASED POINT CLOUD COMPRESSION (G-PCC) - In some examples, a method of decoding a point cloud includes decoding an initial QP value from an attribute parameter set. The method also includes determining a first QP value for a first component of an attribute of point cloud data from the initial QP value. The method further includes determining a QP offset value for a second component of the attribute of the point cloud data and determining a second QP value for the second component of the attribute from the first QP value and from the QP offset value. The method includes decoding the point cloud data based on the first QP value and further based on the second QP value. | 2021-10-21 |
20210327097 | GLOBAL SCALING FOR POINT CLOUD DATA - An example device for decoding point cloud data includes: a memory configured to store point cloud data; and one or more processors implemented in circuitry and configured to: decode a frame of the point cloud data including a plurality of points, each of the points being associated with position values defining a respective position of the point; determine a global scaling factor for the frame; and scale the position values of each of the points by the global scaling factor. The scaling may be clipped to prevent the points exceeding the boundaries of a corresponding bounding box including respective points. | 2021-10-21 |
20210327098 | CODING OF LASER ANGLES FOR ANGULAR AND AZIMUTHAL MODES IN GEOMETRY-BASED POINT CLOUD COMPRESSION - A device comprises one or more processors configured to: obtain a value for a first laser, the value for the first laser indicating a number of probes in an azimuth direction of the first laser; decode a syntax element for a second laser, wherein the syntax element for the second laser indicates a difference between the value for the first laser and a value for the second laser, the value for the second laser indicating a number of probes in the azimuth direction of the second laser; determine the value for the second laser indicating the number of probes in the azimuth direction of the second laser based on the first value and the indication of the difference between the value for the first laser and the value for the second laser; and decode a point based on the number of probes in the azimuth direction of the second laser. | 2021-10-21 |
20210327099 | ANGULAR MODE SIMPLIFICATION FOR GEOMETRY-BASED POINT CLOUD COMPRESSION - A method of decoding point cloud data comprises obtaining a bitstream that includes an arithmetically encoded syntax element indicating a vertical point position offset within a node of a tree that represents 3-dimensional positions of points in a point cloud represented by the point cloud data; and decoding the vertical point position offset, wherein decoding the vertical point position offset comprises: determining a laser index of a laser candidate in a set of laser candidates, wherein the determined laser index indicates a laser beam that intersects the node; determining a context index based on whether the laser beam is above a first distance threshold, between the first distance threshold and a second distance threshold, between the second distance threshold and a third distance threshold, or below the third distance threshold; and arithmetically decoding a bin of the vertical point position offset using a context indicated by the determined context index. | 2021-10-21 |
20210327100 | THREE-DIMENSIONAL DATA ENCODING METHOD, THREE-DIMENSIONAL DATA DECODING METHOD, THREE-DIMENSIONAL DATA ENCODING DEVICE, AND THREE-DIMENSIONAL DATA DECODING DEVICE - A three-dimensional data encoding method includes: extracting, from first three-dimensional data, second three-dimensional data having an amount of a feature greater than or equal to a threshold; and encoding the second three-dimensional data to generate first encoded three-dimensional data. For example, the three-dimensional data encoding method may further include encoding the first three-dimensional data to generate the second encoded three-dimensional data. | 2021-10-21 |
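The extraction step of 20210327100 — keeping only the points whose feature amount meets a threshold — reduces to a filter. How the "amount of a feature" is computed per point is not specified in the abstract, so it is taken here as a precomputed per-point score:

```python
def extract_feature_points(points, feature_amounts, threshold):
    """Extract the second (feature-rich) three-dimensional data set from
    the first, as in 20210327100: keep points whose feature amount is
    greater than or equal to the threshold.  The per-point feature
    score is assumed to be precomputed."""
    return [p for p, f in zip(points, feature_amounts) if f >= threshold]
```

The feature-rich subset can then be encoded separately (e.g., at higher fidelity) from the full point set.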
20210327101 | INFORMATION PROCESSING DEVICE, DRAWING CONTROL METHOD, AND RECORDING MEDIUM RECORDING PROGRAM OF THE SAME - Deterioration in usability due to a delay is reduced. An information processing device includes a control unit. | 2021-10-21 |
20210327102 | CROSS-DEVICE SUPERVISORY COMPUTER VISION SYSTEM - A supervisory computer vision (CV) system may include a secondary CV system running in parallel with a native CV system on a mobile device. The secondary CV system is configured to run less frequently than the native CV system. CV algorithms are then run on these less-frequent sample images, generating information for localizing the device to a reference point cloud (e.g., provided over a network) and for transforming between a local point cloud of the native CV system and the reference point cloud. AR content may then be consistently positioned relative to the convergent CV system's coordinate space and visualized on a display of the mobile device. Various related algorithms facilitate the efficient operation of this system. | 2021-10-21 |
20210327103 | METHOD FOR PREVENTING DISPLAY BURN-IN IN ELECTRONIC DEVICE, AND ELECTRONIC DEVICE - Various embodiments relate to an electronic device and, according to one embodiment, the electronic device comprises a display and a processor, wherein the processor can be configured to detect at least one outline corresponding to at least one graphic object included in a first image to be displayed through the display, generate a second image in which the at least one outline has been adjusted to a first designated color and areas excluding the at least one outline in a first image have been adjusted to a second designated color, and display the second image by using the display. Other additional embodiments are possible. | 2021-10-21 |
20210327104 | IMAGE PROCESSING METHOD, IMAGE PROCESSING SYSTEM, AND PROGRAM - Provided is an image processing method wherein a computer: generates an estimation model for estimating the colorized image from the line-drawing image for each element through machine learning using the learning data of that element; identifies the element corresponding to the subject line-drawing image; generates the colorized image that is to be paired with the subject line-drawing image, on the basis of the estimation model corresponding to the identified element and the subject line-drawing image; generates a colorization layer of an image file including a line-drawing layer and the colorization layer by using the subject colorized image; extracts the modified colorization layer and the corresponding line-drawing layer as the image pair for learning; and stores a pair of the line-drawing image of the extracted line-drawing layer and the colorized image of the extracted colorization layer, as the learning data, in association with the element corresponding to the estimation model. | 2021-10-21 |
20210327105 | SYSTEMS AND METHODS TO SEMI-AUTOMATICALLY SEGMENT A 3D MEDICAL IMAGE USING A REAL-TIME EDGE-AWARE BRUSH - Apparatus, systems, and methods to generate an edge aware brush for navigation and segmentation of images via a user interface are disclosed. An example processor is to at least: construct a brush for segmentation of image data; provide an interactive representation of the brush with respect to the image data via a user interface, the interactive representation to be displayed and made available for interaction in each of a plurality of viewports provided for display of views of the image data in the user interface; enable update of the viewports based on manipulation of the representation; facilitate display of a preview of a segmentation of the image data corresponding to a location of the representation; and, when the segmentation is confirmed, facilitate generation of an output based on the segmentation. | 2021-10-21 |
20210327106 | METHOD AND APPARATUS FOR USING A PARAMETERIZED CELL BASED CIRCULAR SORTING ALGORITHM - A method of grouping detection events in an imaging apparatus is described herein. The detection events can include primary detection events and secondary scattered events; such events are frequently discarded due to the secondary scattered events, reducing the sensitivity of the dataset for eventual image reconstruction. The method uses cell modules cascaded from identical parameterized cells in a pipeline fashion, with the last cell in the chain circling back to the first cell. A rotating data pointer indicates the location of the first entry in the cell pipeline. The described method enables the grouping of multiple samples of detector data in real time with no loss of information, based on a time and location of the detected event. The method can be implemented in an FPGA as a hardware-based real-time process. | 2021-10-21 |
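The circular cell pipeline of 20210327106 can be pictured as a ring buffer with a rotating pointer, where events are grouped by time and location. The grouping criteria and the cell contents below are illustrative assumptions; the patent targets an FPGA pipeline, not software:

```python
class CellRing:
    """Sketch of the parameterized-cell circular pipeline of 20210327106:
    identical cells arranged in a ring, with a rotating data pointer
    marking the oldest (first) entry.  Cell contents are assumed to be
    (time, location) pairs; grouping criteria are illustrative."""

    def __init__(self, n_cells, time_window):
        self.cells = [None] * n_cells  # fixed chain of identical cells
        self.head = 0                  # rotating pointer to first entry
        self.window = time_window

    def push(self, event):
        # overwrite the oldest cell, then advance the rotating pointer;
        # the last cell circles back to the first
        self.cells[self.head] = event
        self.head = (self.head + 1) % len(self.cells)

    def group(self, event):
        t, loc = event
        # collect stored events close to this one in time and location
        return [e for e in self.cells
                if e is not None
                and abs(e[0] - t) <= self.window
                and e[1] == loc]
```

Because cells are identical and only the pointer rotates, the structure maps naturally onto fixed hardware stages, which is why it suits real-time FPGA implementation.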
20210327107 | Systems and Methods for 3D Reconstruction of Anatomical Organs and Inclusions Using Short-Wave Infrared (SWIR) Projection Tomography - Presented herein are systems and methods for tomographic imaging of a region of interest in a subject using short-wave infrared light to provide for accurate reconstruction of absorption maps within the region of interest. The reconstructed absorption maps are representations of the spatial variation in tissue absorption within the region of interest. The reconstructed absorption maps can themselves be used to analyze anatomical properties and biological processes within the region of interest, and/or be used as input information about anatomical properties in order to facilitate data processing used to obtain images of the region of interest via other imaging modalities. For example, the reconstructed absorption maps may be incorporated into forward models that are used in tomographic reconstruction processing in fluorescence and other contrast-based tomographic imaging modalities. Incorporating reconstructed absorption maps into other tomographic reconstruction processing algorithms in this manner improves the accuracy of the resultant reconstructions. | 2021-10-21 |
20210327108 | GENERATING A HIGH-DIMENSIONAL NETWORK GRAPH FOR DATA VISUALIZATION UTILIZING LANDMARK DATA POINTS AND MODULARITY-BASED MANIFOLD TEARING - The present disclosure relates to systems, methods, and non-transitory computer-readable media that generate interactive visual shape representations of digital datasets. For example, the disclosed systems can generate an augmented nearest neighbor network graph from a sampled subset of digital data points using a nearest neighbor model and a witness complex model. The disclosed system can further generate a landmark network graph based on the augmented nearest neighbor network graph utilizing a plurality of random walks. The disclosed systems can also generate a loop-augmented spanning network graph based on a partition of the landmark network graph by adding community edges between communities of landmark groups based on modularity and to complete community loops. Based on the loop-augmented spanning network graph, the disclosed systems can generate an interactive visual shape representation for display on a client device. | 2021-10-21 |
20210327109 | Computer-Implemented System and Method for Generating Radial Hierarchical Data Visualizations - The computer-implemented tool generates radial organization charts by ingesting hierarchical structured data, with associated performance attributes, and populating a virtual reporting tree that stores tree structure and radial structure information. The graphing server populates the virtual reporting tree while adding ghost nodes to ensure symmetry. The graphing server calculates and assigns radial and angular positional information to each node and uses that positional information to generate the radial organization chart, applying coloring information to selected nodes and graphically represented radial relationship lines based on the structure and associated performance attributes from the ingested data. | 2021-10-21 |
20210327110 | COMPUTERIZED SYSTEMS AND METHODS FOR GRAPH DATA MODELING - Systems, methods, and computer-readable media are provided for graph data modeling. In accordance with one implementation, a method is provided that includes operations performed by at least one processor. The operations of the method include receiving raw data and determining a model for the raw data, wherein the model defines the graph structure for the raw data. The method also includes converting the raw data to fit the model, and generating at least a portion of a graph based on the raw data and the model, wherein the graph produces modeled data. The method also includes archiving the graph. | 2021-10-21 |
20210327111 | Systems and methods for applying effects to design elements - Described herein is a computer implemented method. The method comprises detecting user input activating a text effect selection control. In response to the first user input the method further comprises: automatically generating and displaying a first shadow for a selected design element, the first shadow having a first colour, a first offset value, and a first direction; and automatically generating and displaying a second shadow for the selected design element, the second shadow having a second colour, the first offset value, and a second direction, the second direction being opposite the first direction. | 2021-10-21 |
20210327112 | METHOD AND SYSTEM FOR POPULATING A DIGITAL ENVIRONMENT USING A SEMANTIC MAP - A method of populating a digital environment with digital content is disclosed. Environment data describing the digital environment is accessed. Populator data describing a populator digital object is accessed. The populator data includes semantic data describing the populator digital object. The populator digital object is placed within the digital environment. A semantic map representation of the populator digital object is generated. The semantic map representation is divided into a plurality of cells. A target cell of the plurality of cells is selected as a placeholder in the digital environment for a digital object that is optionally subsequently instantiated. The selecting of the target cell is based on an analysis of the environment data, the populator data, and the semantic map representation. Placeholder data is recorded in the semantic map representation. The placeholder data includes properties corresponding to the digital object that is optionally subsequently instantiated. | 2021-10-21 |
20210327113 | METHOD AND ARRANGEMENT FOR PRODUCING A SURROUNDINGS MAP OF A VEHICLE, TEXTURED WITH IMAGE INFORMATION, AND VEHICLE COMPRISING SUCH AN ARRANGEMENT - A method for producing a surroundings map of a transportation vehicle textured with image information, including detecting at least first and second image information from vehicle surroundings using at least one transportation vehicle-mounted camera device; producing a surroundings map of the transportation vehicle; determining at least one area of the surroundings map in which both the first and second sets of image information are present; and selecting either the first or the second set of image information for texturing the area of the surroundings map according to at least one of the following criteria: selecting the image information with the highest resolution and selecting the latest image information. Also disclosed is an arrangement for producing a surroundings map of a transportation vehicle textured with image information and a transportation vehicle having the arrangement. | 2021-10-21 |
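The selection step in 20210327113 — choosing between two sets of image information for an overlapping map area — applies one of two explicit criteria: highest resolution or latest capture. A sketch with an assumed (resolution, timestamp, image) candidate layout:

```python
def select_texture(candidates, prefer="resolution"):
    """Pick the image information used to texture an overlapping area of
    the surroundings map, following the two criteria named in
    20210327113.  Each candidate is an assumed (resolution, timestamp,
    image) tuple; the tuple layout is illustrative."""
    if prefer == "resolution":
        # criterion 1: select the image information with highest resolution
        return max(candidates, key=lambda c: c[0])
    # criterion 2: select the latest image information
    return max(candidates, key=lambda c: c[1])
```

In practice a vehicle would apply this per map cell wherever both camera passes cover the same ground area.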
20210327114 | IMAGE DISPLAY SYSTEM, NON-TRANSITORY STORAGE MEDIUM HAVING STORED THEREIN IMAGE DISPLAY PROGRAM, DISPLAY CONTROL APPARATUS, AND IMAGE DISPLAY METHOD - An example of an image display system includes a goggle apparatus having a display section. A virtual camera and a user interface are placed in a virtual space. The orientation of the virtual camera in the virtual space is controlled in accordance with the orientation of the goggle apparatus. When the goggle apparatus rotates by an angle greater than or equal to a predetermined angle in a pitch direction, the user interface is moved to the front of the virtual camera in a yaw direction. | 2021-10-21 |
20210327115 | EFFICIENT THREE-DIMENSIONAL, INTERACTIVE IMAGE RENDERING - Discussed herein are devices, systems, and methods for software-based animation. A method can include receiving data indicating a first object name, a corresponding file path of a model, a camera location, a reference point, and an animation of the first object, the animation comprising a stack of atomic animation functions that affect the view of one or more of the first object or the camera, storing the object name, the file path, the camera location, the reference point, and atomic animation functions in a memory, in response to receiving data indicating the first object was selected, automatically retrieving the model based on the file path, providing, by the display, a view of the model of the first object consistent with the camera location and the reference point, and executing the stored atomic animation functions to animate the model of the first object. | 2021-10-21 |
20210327116 | METHOD FOR GENERATING ANIMATED EXPRESSION AND ELECTRONIC DEVICE - This application provides a method for generating an animated expression and an electronic device. The method for generating an animated expression includes: obtaining an initial three-dimensional mesh, where a vertex in the initial three-dimensional mesh is used to represent an expression feature of a face; transforming the initial three-dimensional mesh to obtain a target three-dimensional mesh, where a topological structure of the target three-dimensional mesh is the same as a topological structure of a basic blendshape; determining a personalized blendshape fitting the face based on the basic blendshape; determining personalized blendshape coefficients based on the target three-dimensional mesh and the personalized blendshape; and generating the animated expression based on the personalized blendshape coefficients. | 2021-10-21 |
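The final step of 20210327116 — driving an expression from blendshape coefficients — follows the standard linear blendshape model: each vertex is the neutral position plus a weighted sum of per-shape deltas. The data layout below is an assumption; the patent's fitting procedure is not reproduced here:

```python
def apply_blendshapes(neutral, blendshapes, coefficients):
    """Standard linear blendshape combination, sketching how personalized
    blendshape coefficients (20210327116) would drive an animated
    expression.  'neutral' is a list of (x, y, z) vertices; each
    blendshape is a list of per-vertex delta vectors (assumed layout)."""
    out = []
    for v_idx, (x, y, z) in enumerate(neutral):
        for shape, w in zip(blendshapes, coefficients):
            dx, dy, dz = shape[v_idx]
            # accumulate each shape's delta scaled by its coefficient
            x += w * dx
            y += w * dy
            z += w * dz
        out.append((x, y, z))
    return out
```

The requirement that the target mesh share the basic blendshape's topology is what makes this per-vertex correspondence well defined.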
20210327117 | PHOTOREALISTIC REAL-TIME PORTRAIT ANIMATION - Provided are systems and methods for photorealistic real-time portrait animation. An example method includes receiving a scenario video with at least one input frame. The input frame includes a first face of a first person. The method further includes receiving a target image with a second face of a second person. The method further includes determining, based on the at least one input frame and the target image, two-dimensional (2D) deformations of the second face and a background in the target image. The 2D deformations, when applied to the second face, modify the second face to imitate at least a facial expression and a head orientation of the first face. The method further includes applying the 2D deformations to the target image to obtain at least one output frame of an output video. | 2021-10-21 |
20210327118 | METHOD FOR RAY INTERSECTION SORTING - A system and a method are disclosed for ray tracing in a pipeline of a graphics processing unit (GPU). It is determined whether a ray bounce of a first ray intersects a first primitive that is the closest primitive intersected by the ray bounce. The first ray is part of a first group of rays being processed by a first single-instruction-multiple-data (SIMD) process. The first ray is assigned by a sorting or binning unit to a second group of rays based on the intersection with the first primitive. The second group of rays is processed by a second SIMD process. The first ray is assigned to the second group of rays based on a material identification of the first primitive, an identification of the first primitive intersected by the ray bounce of the first ray, a pixel location, and a bounce number of the ray bounce intersecting the first primitive. | 2021-10-21 |
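The abstract of 20210327118 enumerates the four quantities that drive the regrouping: material ID, primitive ID, pixel location, and bounce number. A software sketch of binning on that key (a real GPU sorter would pack these into bits of one integer; the tuple encoding and dict grouping here are illustrative):

```python
def ray_sort_key(material_id, primitive_id, pixel, bounce_number):
    """Binning key from the four quantities listed in 20210327118.
    Tuple encoding is an assumption for illustration."""
    return (material_id, primitive_id, pixel, bounce_number)

def bin_rays(rays):
    """Regroup rays so that rays hitting the same material/primitive at
    the same bounce land in the same SIMD group.  Each ray is an
    assumed {'id': ..., 'key': (mat, prim, pixel, bounce)} dict."""
    bins = {}
    for ray in rays:
        bins.setdefault(ray_sort_key(*ray["key"]), []).append(ray["id"])
    return bins
```

Grouping by shared shading state keeps SIMD lanes coherent at the next bounce, which is the point of the sorting unit.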
20210327119 | System for Generating a Three-Dimensional Scene Reconstruction - A system configured to generate a three-dimensional scene reconstruction of a physical environment. In some cases, the system may store the three-dimensional scene reconstruction as two or more meshes and/or as one or more ray bundles including a plurality of depth values from a center point of the bundle. | 2021-10-21 |
20210327120 | REAL TIME RAY TRACING (RTRT)-BASED ADAPTIVE MULTI-FREQUENCY SHADING (AMFS) - Real-time ray tracing-based adaptive multi-frequency shading is described. For example, one embodiment of an apparatus comprises: rasterization hardware logic to process input data for an image in a deferred rendering pass and to responsively update one or more graphics buffers with first data to be used in a subsequent rendering pass; ray tracing hardware logic to perform ray tracing operations using the first data to generate reflection ray data and to store the reflection ray data in a reflection buffer; and image rendering circuitry to perform texture sampling in a texture buffer based on the reflection ray data in the reflection buffer to render an output image. | 2021-10-21 |
20210327121 | DISPLAY BASED MIXED-REALITY DEVICE - A display based mixed-reality device for a viewer to view an adjustable holographic image of an object, comprises a first computer having a display, a first camera, and a processor having a data set used for displaying the adjustable image on the display. A tracker of the viewer tracks a position of the viewer to create position data corresponding to a face of the viewer, wherein the position data is compared to reference data from a facial database to obtain the viewer position. The adjustable image of the object is continuously adjusted in response to a change in the viewer position. | 2021-10-21 |
20210327122 | METHODS AND APPARATUS FOR EFFICIENT MULTI-VIEW RASTERIZATION - The present disclosure relates to methods and apparatus for graphics processing. Aspects of the present disclosure can determine at least one scene including one or more viewpoints. Also, aspects of the present disclosure can divide the at least one scene into a plurality of zones based on each of the one or more viewpoints. Further, aspects of the present disclosure can determine whether a zone based on one viewpoint of the one or more viewpoints is substantially similar to a zone based on another viewpoint of the one or more viewpoints. Aspects of the present disclosure can also generate a geometry buffer for each of the plurality of zones based on the one or more viewpoints. Moreover, aspects of the present disclosure can combine the geometry buffers for each of the plurality of zones based on the one or more viewpoints. | 2021-10-21 |
20210327123 | IMAGE OCCLUSION PROCESSING METHOD, DEVICE, APPARATUS AND COMPUTER STORAGE MEDIUM - This disclosure provides a method and apparatus for processing occlusion in an image, a device, and a computer storage medium. The method includes: determining a current viewpoint parameter used for drawing a current image frame; obtaining a predicted depth map matching the current viewpoint parameter as a target depth map of the current image frame; and determining an occlusion culling result of an object in the current image frame according to the target depth map. | 2021-10-21 |
20210327124 | USING TILING DEPTH INFORMATION IN HIDDEN SURFACE REMOVAL IN A GRAPHICS PROCESSING SYSTEM - A graphics processing system includes a tiling unit for performing tiling calculations and a hidden surface removal (HSR) unit for performing HSR on fragments of the primitives. Primitive depth information is calculated in the tiling unit and forwarded for use by the HSR unit in performing HSR on the fragments. This takes advantage of the tiling unit having access to the primitive data before the HSR unit performs the HSR on the primitives, to determine some depth information which can simplify the HSR performed by the HSR unit. Therefore, the final values of a depth buffer determined in the tiling unit can be used in the HSR unit to determine that a particular fragment will subsequently be hidden by a fragment of a primitive which is yet to be processed in the HSR unit, such that the particular fragment can be culled. | 2021-10-21 |
20210327125 | ILLUMINATION RENDERING METHOD AND APPARATUS, STORAGE MEDIUM, AND ELECTRONIC DEVICE - An illumination rendering method and apparatus include obtaining a first picture at a target viewing angle from a virtual three-dimensional (3D) scene. The first picture includes a virtual object to be subject to illumination rendering in the virtual 3D scene at the target viewing angle. A target virtual light source point set is determined that performs illumination rendering on the virtual object in the first picture, and illumination rendering is performed on the virtual object in the first picture by using the target virtual light source point set. This improves the efficiency of rendering the virtual object in the virtual 3D scene. | 2021-10-21 |
20210327126 | 3D Object Reconstruction Method, Computer Apparatus and Storage Medium - A three-dimensional (3D) object reconstruction method, a device, an apparatus and a storage medium. By acquiring a scanning image sequence of a target object, with the scanning image sequence including at least one frame of scanning image including depth information, and adopting a neural network algorithm, a terminal predicts the scanning images in the scanning image sequence and acquires a predicted semantic label for each scanning image. A 3D model of the target object is then reconstructed according to the predicted semantic labels and the scanning images in the scanning image sequence. Because the neural network algorithm supplies the predicted semantic labels used in the reconstruction, the accuracy of the reconstructed 3D model is improved. | 2021-10-21 |
20210327127 | GENERATING SYNTHETIC IMAGES AND/OR TRAINING MACHINE LEARNING MODEL(S) BASED ON THE SYNTHETIC IMAGES - Particular techniques are provided for generating synthetic images and/or for training machine learning model(s) based on the generated synthetic images. For example, a machine learning model can be trained on training instances that each include a generated synthetic image and ground truth label(s) for the generated synthetic image. After training of the machine learning model is complete, the trained machine learning model can be deployed on one or more robots and/or one or more computing devices. | 2021-10-21 |
20210327128 | A POINT CLOUDS GHOSTING EFFECTS DETECTION SYSTEM FOR AUTONOMOUS DRIVING VEHICLES - In one embodiment, a system generates an occupancy grid map based on an initial frame of point clouds. The system receives one or more subsequent frames of the point clouds. For each of the subsequent frames, the system updates the occupancy grid map based on the subsequent frame, identifies one or more problematic voxels based on the update, and determines whether the problematic voxels belong to a wall object; in response to determining that the problematic voxels belong to a wall object, the system flags the problematic voxels as ghost effect voxels for the subsequent frame. | 2021-10-21 |
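The per-frame update in 20210327128 can be sketched with set-based voxels. The abstract does not define "problematic"; the assumption below is that a problematic voxel is one whose occupancy flips during the update, and ghost voxels are the problematic ones coinciding with a known wall object:

```python
def flag_ghost_voxels(grid, frame_voxels, wall_voxels):
    """Sketch of one update step of 20210327128.  'grid' and
    'frame_voxels' are sets of occupied voxel coordinates; 'wall_voxels'
    are voxels known to belong to a wall object.  The definition of a
    'problematic' voxel as an occupancy flip is an assumption."""
    # voxels whose occupancy changed between the map and the new frame
    problematic = grid.symmetric_difference(frame_voxels)
    # only flips on a wall object are flagged as ghosting effects
    ghosts = problematic & wall_voxels
    return frame_voxels, ghosts
```

Walls should be stably occupied across frames, so occupancy flips there are a plausible signature of LiDAR ghosting rather than real motion.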
20210327129 | METHOD FOR A SENSOR-BASED AND MEMORY-BASED REPRESENTATION OF A SURROUNDINGS, DISPLAY DEVICE AND VEHICLE HAVING THE DISPLAY DEVICE - A method for a sensor-based and memory-based representation of a surroundings of a vehicle. The vehicle includes an imaging sensor for detecting the surroundings. The method includes: detecting a sequence of images; determining distance data on the basis of the detected images and/or of a distance sensor of the vehicle, the distance data comprising distances between the vehicle and objects in the surroundings of the vehicle; generating a three-dimensional structure of a surroundings model on the basis of the distance data; recognizing at least one object in the surroundings of the vehicle on the basis of the detected images, in particular by a neural network; loading a synthetic object model on the basis of the recognized object; adapting the generated three-dimensional structure of the surroundings model on the basis of the synthetic object model and on the basis of the distance data; and displaying the adapted surroundings model. | 2021-10-21 |
20210327130 | METHOD AND DEVICE FOR DETERMINING AN AREA MAP - Method for determining an environment map, comprising: server-side receiving of motion data of a mobile device; server-side receiving of orientation data of a camera of the mobile device; server-side receiving of the respective image of the camera associated with the received motion data and orientation data; and server-side evaluation of the received image together with the motion data and the orientation data to create a server-side point cloud, the server-side point cloud forming, at least in parts, the environment map. | 2021-10-21 |
20210327131 | SYSTEMS AND METHODS FOR TERRAIN MODIFICATION AT RUNTIME USING DISPLACEMENT MAPPING - Systems, methods, devices, and non-transitory media of the various embodiments may include encoding localized terrain modifications into one or more heightmaps, which are used to modify the vertices of the world-wide terrain map at runtime using a Graphics Processing Unit (GPU). Various embodiments apply displacement to dynamic terrain surfaces, such as time dynamic surfaces, animated surfaces, Hierarchical Level-of-Detail (HLOD) surfaces, and surfaces suitable for interactive user editing, at a global scale. | 2021-10-21 |
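The displacement step of 20210327131 runs per-vertex on the GPU; a CPU sketch of the same operation is below. The nearest-texel sampling and the (x, y, z) layout with y as the vertical axis are simplifying assumptions, not the patent's implementation:

```python
def displace_vertices(vertices, heightmap, world_to_uv):
    """CPU sketch of the GPU displacement mapping of 20210327131: each
    terrain vertex samples a localized heightmap encoding terrain
    modifications and is offset vertically.  In practice this runs in a
    vertex shader; sampling here is nearest-texel for simplicity."""
    out = []
    for (x, y, z) in vertices:
        u, v = world_to_uv(x, y)
        # sample the encoded localized modification at this vertex
        h = heightmap[int(v)][int(u)]
        out.append((x, y + h, z))
    return out
```

Keeping the modifications in a heightmap texture means the base terrain mesh never changes on the CPU; edits take effect at runtime simply by updating the texture the shader samples.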
20210327132 | METHOD OF IDENTIFYING AND DISPLAYING AREAS OF LODGED CROPS - A three-dimensional surface of the upper cover of the crops is recognized and recorded by contactless scanning from above the field; a referential height h of the crop stem is determined, and the three-dimensional surface of the field from which the crops grow is determined. A reached height x of the vegetation is computed for individual points by comparing the three-dimensional surface of the upper cover of the crops with the three-dimensional surface of the field; where the reached height x of the vegetation is smaller than the referential height h of the crop stem, this difference determines, at a given point of the field, the angle β of lodging pursuant to a goniometric function. The grains are classified into classes depending on the angle α of the slope of the lodged grain in the interval 0° to 90°, which results in computation of the heights (h) of the grain spikes pursuant to relation x | 2021-10-21 |
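The abstract of 20210327132 names a goniometric relation between reached height x and stem height h but the relation itself is truncated in this digest. The sketch below assumes a simple right-triangle model, sin(α) = x / h, with α the slope angle from the ground (upright stem: α = 90°); both the relation and the number of classes are assumptions:

```python
import math

def lodging_angle(x, h):
    """Estimate the slope angle alpha (degrees) of a lodged stem from
    reached vegetation height x and referential stem height h.  The
    right-triangle relation sin(alpha) = x / h is an assumption; the
    actual goniometric function of 20210327132 is not in the abstract."""
    if x >= h:
        return 90.0  # stem fully upright, not lodged
    return math.degrees(math.asin(x / h))

def lodging_class(angle, n_classes=3):
    """Bin the slope angle (0-90 degrees) into equal-width classes;
    the class count is an illustrative choice."""
    width = 90.0 / n_classes
    return min(int(angle // width), n_classes - 1)
```

Mapping each field point to a class this way yields the per-area lodging display the method describes.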
20210327133 | System and Method of Virtual Plant Field Modelling - A technique for generating virtual models of plants in a field is described. Generally, this includes recording images of plants in-situ; generating point clouds from the images; generating skeleton segments from the point cloud; classifying a subset of skeleton segments as unique plant features using the images; and growing plant skeletons from skeleton segments classified as unique plant feature. The technique may be used to generate a virtual model of a single, real plant, a portion of a real plant field, and/or the entirety of the real plant field. The virtual model can be analyzed to determine or estimate a variety of individual plant or plant population parameters, which in turn can be used to identify potential treatments or thinning practices, or predict future values for yield, plant uniformity, or any other parameter can be determined from the projected results based on the virtual model. | 2021-10-21 |
20210327134 | AUTOMATED COMPONENT DESIGN EXTRACTION - A method for modeling a physical component is provided, the method comprising scanning the component to generate a point cloud, generating definitions of surfaces of the component from the point cloud, identifying vertices of edges at which the surfaces of the component meet, and converting the point cloud to a corresponding parameterized digital model of the component. In some example implementations, generating the definitions for a surface comprises: selecting data points of the point cloud that are within a determined spatial range from a frame of reference, identifying a planar surface formed by those data points that conform to a collinear pattern within the determined spatial range from the frame of reference or a cylindrical surface formed by those of the plurality of data points that do not conform to a collinear pattern and generating the definition of the surface from the planar or cylindrical surface as identified. | 2021-10-21 |
20210327135 | SYSTEMS AND METHODS FOR GENERATING A MODEL OF A CHARACTER FROM ONE OR MORE IMAGES - A method, computer-readable storage medium, and device for generating a character model. The method comprises: receiving an input image of a reference subject; processing the input image to generate a normalized image; identifying a set of features present in the normalized image, wherein each feature in the set of features corresponds to a portion of a head or body of the reference subject; for each feature in the set of features, processing at least a portion of the normalized image including the feature by a neural network model corresponding to the feature to generate a parameter vector corresponding to the feature; and combining the parameter vectors output by respective neural network models corresponding to respective features in the set of features to generate a parameterized character model corresponding to reference subject in the input image. | 2021-10-21 |
20210327136 | SYSTEM AND METHOD FOR EFFICIENT 3D RECONSTRUCTION OF OBJECTS WITH TELECENTRIC LINE-SCAN CAMERAS - In this invention, systems and methods are described that allow performing stereo reconstruction with line-scan cameras in general configurations, making the cumbersome exact alignment of the sensor lines superfluous. The proposed method requires that telecentric lenses, instead of perspective lenses, be mounted on the line-scan cameras. In this case, the images can be accurately rectified with stereo rectification, and the rectified images can be used to perform efficient stereo matching. The method comprises a camera model and a calibration procedure that make it possible to precisely model the imaging process, including the modelling of lens distortions even if the sensor lines are not exactly mounted behind the principal points. This ensures a high accuracy of the resulting 3D reconstruction. | 2021-10-21 |
20210327137 | TECHNOLOGIES FOR 3D PLACEMENT OF VIRTUAL OBJECTS FROM A 2D LAYOUT - Technologies for 3D virtual environment placement of 3D models based on 2D images are disclosed. At least an outline of a 3D virtual environment may be generated. A 2D image of one or more 2D images may be identified. A first product from the first 2D image may be identified. At least one 3D model of one or more 3D models based, at least, on the first product may be determined. A first location for placement of the first product in the 3D virtual environment may be identified. The at least one 3D model may be added within the 3D virtual environment based, at least, on the first location. The 3D virtual environment may be rendered into a visually interpretable form. A second product may be identified from the first 2D image, forming a first grouping of products. A starting element for the first grouping of products may be determined. | 2021-10-21 |
20210327138 | Interactive System and Method Providing Real-Time Virtual Reality Visualization of Simulation Data - A method for providing an immersive VR experience comprises defining, in computer memory, a three-dimensional model. The method further comprises producing field data based upon a simulation of the three-dimensional model. Additionally, the method comprises storing the field data within a data structure. The method also comprises extracting, for display, a surface of the three-dimensional model from a simulation model. The method additionally comprises creating a surface texture for the surface of the three-dimensional model from the field data. Further, the method comprises creating a query-optimized grid from the calculated field data. Further still, the method comprises displaying a visualization of the calculated field data by means of the surface and the query-optimized grid. | 2021-10-21 |
20210327139 | MULTIZONE QUADRILATERAL MESH GENERATOR FOR HIGH MESH QUALITY AND ISOTROPY - Methods for CAD operations and corresponding systems. | 2021-10-21 |
20210327140 | TECHNIQUES FOR PARTICIPATION IN A SHARED SETTING - In accordance with some embodiments, an exemplary process for initializing members of a shared enhanced reality setting is described. In accordance with some embodiments, an exemplary process for forming a private sub-space and example features of the private sub-space are described. | 2021-10-21 |
20210327141 | METHOD FOR MAKING A CONTENT SENSITIVE VIDEO - A method is described for recording a video. The method comprises receiving one or more items of user content and then providing a 3D virtual world and a virtual camera having one or more parameters. An optimal 3D flight path of the virtual camera is then determined based on the user content, and the virtual camera is then allowed to travel along the optimal 3D flight path and to record the video. | 2021-10-21 |
20210327142 | DIRECTIONAL INSTRUCTIONS IN AN HYBRID-REALITY SYSTEM - A computer system enhances guidance information on a display, allowing a desired user position and orientation in 3D space to be achieved. In some embodiments, the guidance is visual, for example the blurring of display objects not at the desired position. In some embodiments, the guidance is aural, for example a sound that moves toward the desired position. In some embodiments, the guidance is tactile, for example using haptic pads attached to a head-mounted display to push the user in a specified direction. The system may be used to guide the user spatially for the performing of a task, to help the user avoid sensitive components, or to guide the user to position a sensor, such as a camera, in an optimal direction for taking measurements. The system includes head positioning, hand positioning and gaze positioning techniques. | 2021-10-21 |
20210327143 | FACILITATION OF AUGMENTED REALITY-BASED SPACE ASSESSMENT - A viewer can be presented with an augmented reality (AR) view of a space. The AR view can be augmented with imagery to indicate to the viewer environmental conditions that may not otherwise be known to the viewer. The viewer can also initiate alterations to the environment based on the information and recommendations presented in the AR view. Current conditions, past trends, and forecasted future trends can be included in the creation of the AR displays. | 2021-10-21 |
20210327144 | SYSTEMS AND METHODS FOR RENDERING IMMERSIVE ENVIRONMENTS - Disclosed herein are systems for rendering an immersive environment, the systems comprising at least one electronic device configured to be coupled to a body part of a user, the at least one electronic device comprising a sensor, an actuator, or both; a processor capable of being communicatively coupled to the at least one electronic device; and a rendering device capable of being communicatively coupled to the processor. The processor is configured to execute machine-executable instructions that, when executed by the processor, cause the processor to obtain data from or provide data to the at least one electronic device. The rendering device is configured to receive rendering information from the processor, and render the immersive environment based at least in part on the rendering information from the processor. | 2021-10-21 |
20210327145 | EXTENDED REALITY RECORDER - Implementations of the subject technology provide systems and methods for recording an extended reality experience in a way that allows the experience to be played back at a later time from a different viewpoint or perspective. This allows computer-generated content that was rendered for display to a user during the recording to be re-rendered during playback at the correct time and location in the recording, but from a different perspective. In order to facilitate this type of viewer-centric playback, the recording includes a computer-generated content track that references resources for re-rendering the computer-generated content at each point in time in the recording. | 2021-10-21 |
20210327146 | VIRTUAL ANCHORING SYSTEMS AND METHODS FOR EXTENDED REALITY - Implementations of the subject technology provide virtual anchoring for extended reality (XR) display devices. A device may generate an XR environment that includes computer-generated (CG) content for display relative to various physical objects in a physical environment. In order to position the CG content, an XR application may request a physical anchor object to which the CG content can be anchored. In circumstances in which the physical anchor object is not available in the physical environment, a virtual anchor and/or a virtual anchor object corresponding to the physical anchor object can be provided to which the CG content can be anchored. | 2021-10-21 |
20210327147 | AUGMENTED REALITY SYSTEM FOR AN AMUSEMENT RIDE - An amusement ride system includes a ride vehicle configured to carry a passenger, one or more sensors configured to detect a face and a body of the passenger while the passenger is in the ride vehicle, and a display assembly configured to be viewable by the passenger while the passenger is in the ride vehicle. The amusement ride system also includes a controller configured to generate an animation based on signals received from the one or more sensors and to instruct display of the animation on the display assembly. The signals are indicative of movement of the face and the body of the passenger, and the animation mimics the movement of the face and the body of the passenger. | 2021-10-21 |
20210327148 | VIRTUAL TRY-ON SYSTEMS FOR SPECTACLES USING REFERENCE FRAMES - A method for virtual try-on of user-wearable items is provided. The method includes capturing, in a client device, a first image of a user, the first image including a reference token for a user-wearable item and displaying, in the client device, images of multiple user-wearable items for the user, receiving an input from the user, the input indicative of a selected user-wearable item from the user-wearable items on display. The method also includes segmenting the first image to separate the reference token from a background comprising a portion of a physiognomy of the user, replacing a segment of the reference token in the first image with an image of the selected user-wearable item in a second image of the user, and displaying, in the client device, the second image of the user. | 2021-10-21 |
20210327149 | System and Method for Emotion-Based Real-Time Personalization of Augmented Reality Environments - Systems and methods for emotion-based real-time personalization of augmented reality environments are disclosed. Exemplary implementations may: receive a series of original images of a user's face; detect, in the series of images, at least one emotion of the user; modify the series of images in response to the detected emotion(s); and display the modified images on a screen of the personal electronic device. In some embodiments, images may be modified based on one or more detected demographic characteristics of a user. In various implementations, processing may be done locally on a personal electronic device and the modified images displayed substantially in real-time. | 2021-10-21 |
20210327150 | DATA SERIALIZATION FOR POST-CAPTURE EDITING OF ARTIFICIAL REALITY EFFECTS - In one embodiment, the system may receive a serialized data stream generated by serializing data chunks including data from a video stream and contextual data streams associated with the video stream. The contextual data streams may include a first computed data stream and a sensor data stream. The system may extract the video data stream and one or more contextual data streams from the serialized data stream. The system may generate a second computed data stream based on the sensor data stream in the extracted contextual data streams. The system may compare the second computed data stream to the first computed data stream extracted from the serialized data stream to select a computed data stream based on one or more pre-determined criteria. The system may render an artificial reality effect for display with the extracted video data stream based at least in part on the selected computed data stream. | 2021-10-21 |
20210327151 | VISUAL DISPLAY SYSTEMS AND METHOD FOR MANIPULATING IMAGES OF A REAL SCENE USING AUGMENTED REALITY - The present disclosure relates to a visual display system for manipulating images of a real scene using augmented reality. In one implementation, the system may include at least one processor in communication with a first mobile device; and a storage medium storing instructions that, when executed, configure the at least one processor to perform operations. The operations may include receiving a request from a mobile device to access an account of a user, receiving a first image depicting a real scene from an image sensor of the mobile device, receiving a selection of a virtual object, receiving an augmented reality image comprising the virtual object overlaid on the first image, comparing the augmented reality image to one or more stored augmented reality images, authenticating the user based on the comparison, and authorizing access to the user account based on the authentication. | 2021-10-21 |
20210327152 | MULTI-FEED CONTEXT ENHANCED IMAGING - A system or method includes a platform to allow users to coordinate images captured by a separate camera with images captured by a first camera, displaying such images in real time to a user through a user interface on the first camera or a user interface on a separate device. The separate camera is contemplated to provide a separate image feed to provide one or more augmentations to an image captured by the first camera. | 2021-10-21 |
20210327153 | DEPTH MAP RE-PROJECTION ON USER ELECTRONIC DEVICES - A method includes rendering, on displays of an extended reality (XR) display device, a first sequence of image frames based on image data received from an external electronic device associated with the XR display device. The method further includes detecting an interruption to the image data received from the external electronic device, and accessing a plurality of feature points from a depth map corresponding to the first sequence of image frames. The plurality of feature points includes movement and position information of one or more objects within the first sequence of image frames. The method further includes performing a re-warping to at least partially re-render the one or more objects based at least in part on the plurality of feature points and spatiotemporal data, and rendering a second sequence of image frames corresponding to the partial re-rendering of the one or more objects. | 2021-10-21 |
20210327154 | SYSTEMS AND METHODS FOR SCENE-INDEPENDENT AUGMENTED REALITY INTERFACES - Some embodiments include a method comprising using a first computing device to perform: obtaining an image of a scene captured using a camera coupled to the first computing device; obtaining camera setting values used to capture the image; determining, using the image, surface attribute values characterizing at least one surface shown in the image of the scene; generating an augmented reality (AR) interface at least in part by using the camera setting values and the surface attribute values to create a first composite image by overlaying a selected virtual furniture object onto the image so that the virtual furniture object is displayed in the AR interface as being on a first surface of the at least one surface shown in the image; and transmitting, to a second computing device and via at least one communication network, the first composite image, the camera setting values and the surface attribute values. | 2021-10-21 |
20210327155 | SYSTEM AND METHOD FOR DENSE, LARGE SCALE SCENE RECONSTRUCTION - A system configured to improve the operations associated with generating virtual representations on limited resources of a mobile device. In some cases, the system may utilize viewpoint bundles that include a collection of image data with an associated pose in relative physical proximity to each other to render a virtual scene. In other cases, the system may utilize 2.5D manifolds including 2D image data and a weighted depth value to render the 3D environment. | 2021-10-21 |
20210327156 | PERFORMING OPERATIONS USING A MIRROR IN AN ARTIFICIAL REALITY ENVIRONMENT - This disclosure describes an artificial reality system that presents artificial reality content in the context of a physical environment that includes a mirror or other reflective surface. In one example, this disclosure describes a method that includes capturing capture data representative of a physical environment, wherein the physical environment includes a reflective surface and a plurality of objects, determining a pose of the HMD, determining a map of the physical environment, wherein the map includes position information about the reflective surface and position information about each of the plurality of physical objects in the physical environment, identifying a visible object from among the plurality of physical objects, and generating artificial reality content associated with the visible object. | 2021-10-21 |
20210327157 | Systems and Methods for Interactions with Remote Entities - In the disclosed systems and methods for competitive scene completion, in conjunction with a scene completion challenge, an image of an initial scene and a plurality of markers are displayed. For each user marker selection, virtual furnishing units corresponding to the unit type are displayed. User unit selection results in display of a three-dimensional graphic of the selected virtual furnishing unit at the corresponding coordinates within the scene, thereby creating an augmented scene that comprises the initial scene with three-dimensional graphics of selected virtual furnishing units. The augmented scene is submitted to a remote server. The user is provided with a reward that consists of credits. Responsive to user selection to access the store, a user interface for the store is displayed within the application. Visual representations of tangible products are displayed. The credits are configured for use towards purchase of the tangible products. | 2021-10-21 |
20210327158 | METHOD, APPARATUS AND STORAGE MEDIUM FOR DISPLAYING THREE-DIMENSIONAL SPACE VIEW - The present disclosure provides a method for displaying a three-dimensional space view. The three-dimensional space view includes a first three-dimensional space view and a second three-dimensional space view. The method includes presenting the first three-dimensional space view on a first user interface; presenting the second three-dimensional space view on a second user interface; changing the first three-dimensional space view according to a user input; and changing the second three-dimensional space view according to a change in the first three-dimensional space view. | 2021-10-21 |
20210327159 | OVERLAY AND MANIPULATION OF MEDICAL IMAGES IN A VIRTUAL ENVIRONMENT - Methods, apparatus, systems and articles of manufacture are disclosed to enable medical image visualization and interaction in a virtual environment. An example apparatus includes at least one processor and at least one memory including instructions. The instructions, when executed, cause the at least one processor to at least: generate a virtual environment for display of image content via a virtual reality display device; enable interaction with the image content in the virtual environment via an avatar; adjust the image content in the virtual environment based on the interaction; and generate an output of image content from the virtual environment. | 2021-10-21 |