32nd week of 2022 patent application highlights part 55 |
Patent application number | Title | Published |
20220254003 | DATA PROCESSING SYSTEM AND DATA PROCESSING METHOD | 2022-08-11 |
20220254004 | MONITORING STANDING WATER AND DRAINAGE PROBLEMS - Systems and methods for detecting and monitoring areas of water damage or water management problems in a property are described. Monitoring devices can be deployed at different locations of a property to obtain sensor data and image data regarding the environmental conditions in potentially problematic areas of the property. An image obtained by the monitoring devices can be processed and compared with reference images or other image data that is filtered in a different manner than the image to identify portions of the image in which water damage or water management problems exist. | 2022-08-11 |
20220254005 | YARN QUALITY CONTROL - A textile package production system includes an imager, a transporter, a sorter, and a controller. The imager is configured to generate an optical image for a textile package. The imager has at least one optical detector and an optical emitter. The imager has an inspection region. The transporter has a test subject carrier configured for relative movement between the carrier and the inspection region. The sorter is coupled to the transporter and is configured to make a selection between a first classification and a second classification. The controller has a processor and a memory. The controller is coupled to the imager, the transporter, and the sorter. The controller is configured to implement an artificial intelligence classifier in which the sorter is controlled based on the optical image and based on instructions and training data in the memory. | 2022-08-11 |
20220254006 | ARTIFICIAL INTELLIGENCE SERVER - An artificial intelligence server is disclosed. The artificial intelligence server, according to one embodiment of the present invention, comprises: a communication unit for communicating with a terminal of a user; and a processor for receiving, from the terminal, a video capturing a home appliance, acquiring a first characteristics vector by inputting image data separated from the video into an image categorization model, acquiring a second characteristics vector by inputting sound data separated from the video into a voice categorization model, acquiring a result value by inputting, into an abnormality categorization model, a data set derived by combining the first characteristics vector with the second characteristics vector, and transmitting, to the terminal, a malfunction type acquired on the basis of the result value. | 2022-08-11 |
20220254007 | MULTI-VIEW INTERACTIVE DIGITAL MEDIA REPRESENTATION VIEWER - Damage to an object such as a vehicle may be detected and presented based at least in part on image data. In some configurations, image data may be detected by causing the object to pass through a gate or portal on which cameras are located. Alternatively, or additionally, image data may be selected by a user operating a camera and moving around the object. The cameras may capture image data, which may be combined and analyzed to detect damage. Some or all of the image data and/or analysis of the image data may be presented in a viewer, which may allow a user to perform actions such as navigating around the object in a virtual environment, identifying and viewing areas of the object where damage has been detected, and accessing the results of the analysis. | 2022-08-11 |
20220254008 | MULTI-VIEW INTERACTIVE DIGITAL MEDIA REPRESENTATION CAPTURE - Images of an object may be captured by cameras located at fixed locations in space as the object travels through the cameras' fields of view. A three-dimensional model of the object may be determined using the images. A portion of the object that has been damaged may be identified based on the three-dimensional model and the images. A damage map of the object illustrating the portion of the object that has been damaged may be generated. | 2022-08-11 |
20220254009 | PROCESS CONDITION ESTIMATING APPARATUS, METHOD, AND PROGRAM - A technique is presented for estimating a process condition without limiting the forms (such as shapes) of an object. A process condition estimating apparatus for estimating a process condition in which an object is processed includes an input unit configured to input measurement data acquired at a predetermined position of the object, and an estimation unit configured to estimate the process condition from the measurement data based on a process-condition-estimating function that takes the measurement data as input and outputs an estimation value of the process condition. | 2022-08-11 |
20220254010 | INSPECTION APPARATUS AND INSPECTION METHOD FOR COKE OVEN CONSTRUCTION, AND COKE OVEN CONSTRUCTION METHOD - An inspection apparatus for coke oven construction that is configured to check accuracy after refractories are laid in oven construction work for updating or newly creating a coke oven that produces coke. The inspection apparatus includes an image capturing device configured to acquire an image of a work area where oven construction work is in progress, measurement region determining means configured to identify a work-completed area where laying work has been completed on the basis of the image of the work area acquired by the image capturing device, and determine the identified work-completed area as a measurement region, and a refractory position measuring device configured to check laying accuracy by measuring positions of laid refractories in the measurement region determined by the measurement region determining means. | 2022-08-11 |
20220254011 | IMAGE ANALYSIS - Provided herein is technology relating to analysis of images and particularly, but not exclusively, to methods and systems for determining the area and/or volume of a region of interest using optical coherence tomography data. Some embodiments provide for determining the area and/or volume of a lesion in retinal tissue using three-dimensional optical coherence tomography data and a two-dimensional optical coherence tomography fundus image. | 2022-08-11 |
20220254012 | METHODS, DEVICES, AND SYSTEMS FOR DETERMINING PRESENCE OF APPENDICITIS - Methods, devices, and systems for determining a presence of appendicitis are provided. In one aspect, a method includes receiving a medical image associated with a patient. Further, the method includes determining, using at least one trained machine learning model, an anatomical position of the appendix in the medical image. Additionally, the method includes determining, using the at least one trained machine learning model, a dimension associated with the appendix in the medical image. The method also includes identifying if the dimension associated with the appendix is above a pre-defined threshold. Furthermore, the method includes generating a notification on an output unit if the dimension associated with the appendix is above the pre-defined threshold, wherein the dimension associated with the appendix being above the pre-defined threshold indicates the presence of appendicitis in the patient. | 2022-08-11 |
20220254013 | IMAGE PROCESSING DEVICE AND STORAGE MEDIUM - Provided is an image processing device that includes a hardware processor. The hardware processor calculates a feature amount relevant to a breast shape from a mammography image. The hardware processor selects a schema image corresponding to the breast shape of the mammography image from a plurality of types of predetermined schema images based on the feature amount relevant to the breast shape calculated by the hardware processor. | 2022-08-11 |
20220254014 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, PROGRAM, AND SENSING SYSTEM - There is provided an information processing apparatus including a macro-measurement analysis calculating section that performs calculation of detection data from a macro-measurement section that performs sensing of a first measurement area of a measurement target at first spatial resolution, a micro-measurement analysis calculating section that performs calculation of detection data from a micro-measurement section that performs sensing of a second measurement area included in the first measurement area of the measurement target at second spatial resolution which is resolution higher than the first spatial resolution, and a complementary analysis calculating section that performs complementary analysis calculation by using a result of calculation by the macro-measurement analysis calculating section, and a result of calculation by the micro-measurement analysis calculating section, and generates complementary analysis information. | 2022-08-11 |
20220254015 | IDENTIFYING NEUTROPHIL EXTRACELLULAR TRAPS IN PERIPHERAL BLOOD SMEARS - The present technology is directed to identifying neutrophil extracellular traps (NETs) in blood. For example, the present technology provides artificial intelligence systems, architectures, and/or programs that can rapidly and/or automatically identify and/or enumerate NETs in peripheral blood smears, CBC scattergrams, and the like. The artificial intelligence architectures can be integrated into current automated imaging and/or analysis systems (e.g., automated imaging systems for performing cell blood counts (CBC)). The artificial intelligence architectures can also be integrated into another computing device, such as a mobile device. | 2022-08-11 |
20220254016 | Regional Pulmonary V/Q via image registration and Multi-Energy CT - A method for imaging a lung of a patient is provided. The method includes acquiring a full inspiration computed tomography (CT) scan of the lung to provide a total lung capacity (TLC) image and acquiring a functional residual capacity contrast enhanced multi-energy CT scan of the lung. The method further includes processing the functional residual capacity contrast enhanced multi-energy CT scan of the lung to generate a perfused blood volume (PBV) image and a virtual non-contrast (VNC) image. The method further includes registering the TLC image to at least one of the PBV and VNC images so as to provide a map of regional ventilation and to co-register local ventilation with blood perfusion, generating a lung performance metric using the co-registered images, and outputting the lung performance metric at a user interface of a computing device. | 2022-08-11 |
20220254017 | SYSTEMS AND METHODS FOR VIDEO-BASED POSITIONING AND NAVIGATION IN GASTROENTEROLOGICAL PROCEDURES - The present disclosure provides systems and methods for improving detection and location determination accuracy of abnormalities during a gastroenterological procedure. One example method includes obtaining a video data stream generated by an endoscopic device during a gastroenterological procedure for a patient. The method includes generating a three-dimensional model of at least a portion of an anatomical structure viewed by the endoscopic device based at least in part on the video data stream. The method includes obtaining location data associated with one or more detected abnormalities based on localization data generated from the video data stream of the endoscopic device. The method includes generating a visual presentation of the three-dimensional model and the location data associated with the one or more detected abnormalities; and providing the visual presentation of the three-dimensional model and the location data associated with the one or more detected abnormalities for use in diagnosis of the patient. | 2022-08-11 |
20220254018 | DEVICE, PROCESS AND SYSTEM FOR DIAGNOSING AND TRACKING OF THE DEVELOPMENT OF THE SPINAL ALIGNMENT OF A PERSON - A process operable using a computerized system for providing one or more output images of the spinal region of a subject, in which anatomical landmarks applicable for clinical assessment are labeled by a pre-trained neural network. | 2022-08-11 |
20220254019 | Healthy-Selfie(TM): Methods for Remote Medical Imaging Using a Conventional Smart Phone or Augmented Reality Eyewear - A method for remote medical imaging involves displaying augmented reality (AR) images on a remote patient's smart phone or smart eyewear. The augmented reality images guide the patient concerning how to move their mobile and/or wearable device relative to a wound, injury, lesion, or abnormality on their body in order to capture images of the wound, injury, lesion, and/or abnormality from different distances and angles. These images are then integrated into a 3D digital image for medical diagnosis or treatment. | 2022-08-11 |
20220254020 | Deep Learning Method For Predicting Patient Response To A Therapy - A method for indicating how a cancer patient will respond to a predetermined therapy relies on spatial statistical analysis of classes of cell centers in a digital image of tissue of the cancer patient. The cell centers are detected in the image of stained tissue of the cancer patient. For each cell center, an image patch that includes the cell center is extracted from the image. A feature vector is generated based on each image patch using a convolutional neural network. A class is assigned to each cell center based on the feature vector associated with each cell center. A score is computed for the image of tissue by performing spatial statistical analysis based on classes of the cell centers. The score indicates how the cancer patient will respond to the predetermined therapy. The predetermined therapy is recommended to the patient if the score is larger than a predetermined threshold. | 2022-08-11 |
20220254021 | MEDICAL IMAGE DIAGNOSTIC APPARATUS - According to one embodiment, a medical image diagnostic apparatus includes processing circuitry. The processing circuitry inputs, to a learned model trained to generate a denoised image, a medical image or an intermediate image together with a noise correlation map, the medical image being generated based on data collected with respect to a subject, the intermediate image being at a front stage for generating the medical image, and the noise correlation map being correlated with the noise included in the medical image or the intermediate image. The processing circuitry then generates the denoised image, in which the noise of the medical image or of the intermediate image is reduced. | 2022-08-11 |
20220254022 | METHOD AND SYSTEM FOR AUTOMATIC MULTIPLE LESION ANNOTATION OF MEDICAL IMAGES - A method includes receiving, from a patient, a first image having a visible lesion; modifying the image to appear as if the lesion were not present, thereby forming a second image; generating a delineation of the lesion using a difference between the first and second images; and tagging the segmented lesions. | 2022-08-11 |
20220254023 | System and Method for Interpretation of Multiple Medical Images Using Deep Learning - A method is disclosed of processing a set of images. Each image in the set has an associated counterpart image. One or more regions of interest (ROIs) are identified in one or more of the images in the set of images. For each ROI identified, a reference region is identified in the associated counterpart image. ROIs and associated reference regions are cropped out, thereby forming cropped pairs of images 1 . . . n | 2022-08-11 |
20220254024 | SYSTEMS, DEVICES AND METHODS FOR NON-INVASIVE HEMATOLOGICAL MEASUREMENTS - A system for non-invasive hematological measurements includes a platform to receive a body portion of a user and an imaging device to acquire a set of images of a capillary bed in the body portion. For each image, a controller detects one or more capillaries in the body portion to identify a first set of capillaries by estimating one or more attributes of each capillary (e.g., structural attributes, flow attributes, imaging attributes, or combinations thereof), wherein at least one attribute of each capillary meets a predetermined criterion. The controller also identifies a second set of capillaries from the first set of capillaries such that each capillary of the second set of capillaries is visible in a predetermined number of images of the set of images. | 2022-08-11 |
20220254025 | Evaluating Quality of Segmentation of an Image into Different Types of Tissue for Planning Treatment Using Tumor Treating Fields (TTFields) - To plan tumor treating fields (TTFields) therapy, a model of a patient's head is often used to determine where to position the transducer arrays during treatment, and the accuracy of this model depends in large part on an accurate segmentation of MRI images. The quality of a segmentation can be improved by presenting the segmentation to a previously-trained machine learning system. The machine learning system generates a quality score for the segmentation. Revisions to the segmentation are accepted, and the machine learning system scores the revised segmentation. The quality scores are used to determine which segmentation provides better results, optionally by running simulations for models that correspond to each segmentation for a plurality of different transducer array layouts. | 2022-08-11 |
20220254026 | Deep Learning Architecture For Analyzing Medical Images For Body Region Recognition And Delineation - Provided are systems and methods for analyzing medical images to localize body regions using deep learning techniques. A combination of convolutional neural network (CNN) and a recurrent neural network (RNN) can be applied to medical images, identifying axial slices of a body region. In accordance with embodiments, boundaries, e.g., superior and inferior boundaries of various body regions in computed tomography images can be automatically demarcated. | 2022-08-11 |
20220254027 | DETECTING METHOD - A detecting method adapted to detect a detecting cassette is provided. A detecting cassette is placed into a device main body to be located at a detecting region inside the device main body. At least one image of the detecting region is captured by an image capturing unit. Whether a function of the image capturing unit is normal is determined by a determining unit according to a grayscale value of the at least one image. If the function of the image capturing unit is normal, a detection result is determined by the determining unit according to a portion of the at least one image corresponding to the detecting cassette. | 2022-08-11 |
20220254028 | METHOD AND APPARATUS FOR ADJUSTING BLOOD FLOW VELOCITY IN MAXIMUM HYPEREMIA STATE BASED ON INDEX FOR MICROCIRCULATORY RESISTANCE - Provided are a method and apparatus for adjusting blood flow velocity in a maximum hyperemia state based on an index for microcirculatory resistance. The method comprises acquiring an index for microcirculatory resistance iFMR during a diastolic phase according to a blood flow velocity v, an aortic pressure waveform, and a physiological parameter. | 2022-08-11 |
20220254029 | IMAGE SEGMENTATION USING A NEURAL NETWORK TRANSLATION MODEL - The neural network includes an encoder, a common decoder, and a residual decoder. The encoder encodes input images into a latent space. The latent space disentangles unique features from other common features. The common decoder decodes common features resident in the latent space to generate translated images which lack the unique features. The residual decoder decodes unique features resident in the latent space to generate image deltas corresponding to the unique features. The neural network combines the translated images with the image deltas to generate combined images that may include both common features and unique features. The combined images can be used to drive autoencoding. Once training is complete, the residual decoder can be modified to generate segmentation masks that indicate any regions of a given input image where a unique feature resides. | 2022-08-11 |
20220254030 | Computer-Implemented Method of Analyzing an Image to Segment Article of Interest Therein - A computer-implemented method of analyzing an image to segment an article of interest in the image comprises (i) receiving the image having a width of n | 2022-08-11 |
20220254031 | EDGE-GUIDED HUMAN EYE IMAGE ANALYZING METHOD - The embodiments of the present disclosure disclose an edge-guided human eye image analyzing method. A specific implementation of this method comprises: collecting a human eye image as an image to be detected; obtaining a human eye detection contour map; obtaining a semantic segmentation detection map and an initial human eye image detection fitting parameter; performing an iterative search on the initial human eye image detection fitting parameter to determine a target human eye image detection fitting parameter; and sending the semantic segmentation detection map and the target human eye image detection fitting parameter as image analyzing results to a display terminal for display. This implementation improves the accuracy at the boundary dividing the pupil-iris area and increases the structural integrity of the ellipse resulting from dividing the pupil-iris area. In addition, the iterative search can achieve a more accurate ellipse parameter fitting result. | 2022-08-11 |
20220254032 | IMAGE DATA PROCESSING METHOD AND APPARATUS - A medical image processing apparatus including processing circuitry configured to: obtain, from medical imaging measurements, observations of one or more vector- or tensor-valued fields as projected from one or more 2D acquisition planes; use an optimisation procedure to determine from the observations a superset of 3D fields (which may be scalar, vector, or tensor) via a solution ansatz constrained by a system of partial differential equations; and output the plurality of these fields. | 2022-08-11 |
20220254033 | MOVEMENT HISTORY CHANGE METHOD, STORAGE MEDIUM, AND MOVEMENT HISTORY CHANGE DEVICE - A movement history change method for a computer to execute a process includes acquiring identification information of a first object detected at a first location by a sensor and a detection timing of the first object from object detection history information that indicates time-series of detection; identifying identification information of a second object present at the first location at a first timing corresponding to the detection timing in the object movement history information generated from a video; and changing the identification information of the second object to the identification information of the first object in the object movement history information. | 2022-08-11 |
20220254034 | OBJECT TRACKING DEVICE, OBJECT TRACKING METHOD, AND RECORDING MEDIUM - An object tracking device includes a location information acquisition means configured to acquire location information of an object detected by a sensor, a sensor speed acquisition means configured to acquire speed information of the sensor, a parameter control means configured to generate parameter control information including information for controlling a parameter for use in a tracking process of the object on the basis of the speed information acquired by the sensor speed acquisition means, and an object tracking means configured to perform the tracking process using the parameter control information generated by the parameter control means and the location information acquired by the location information acquisition means. | 2022-08-11 |
20220254035 | IMAGE PROCESSING APPARATUS, HEAD-MOUNTED DISPLAY, AND METHOD FOR ACQUIRING SPACE INFORMATION - In an image processing apparatus, an image acquisition section acquires captured images from a stereo camera of a head-mounted display. An image correction section performs correction on a partial image-wise basis, a partial image being smaller than one frame, while referring to a displacement vector map representing displacement vectors of pixels required for the correction. A feature point extraction section extracts feature points from partial images, and supplies the feature points sequentially to a feature point comparison section. The feature point comparison section associates feature points in a plurality of images with one another. A space information acquisition section acquires information as to a real space on the basis of correspondence information as to the feature points. | 2022-08-11 |
20220254036 | Interactive Formation Analysis in Sports Utilizing Semi-Supervised Methods - A computing system identifies player tracking data and event data corresponding to a match. The match includes a first team and a second team. The player tracking data includes coordinate positions of each player during the event. The event data defines events that occur during the match. The computing system divides the player tracking data into a plurality of segments based on the event information. For each segment of the plurality of segments, the computing system learns a first formation associated with a respective team in possession. For each segment of the plurality of segments, the computing system learns a second formation associated with a respective team not in possession. The computing system maps each first formation to a first class of known formation clusters. The computing system maps each second formation to a second class of known formation clusters. | 2022-08-11 |
20220254037 | AUGMENTING TRAINING SAMPLES FOR MOTION DETECTION SYSTEMS USING IMAGE ADDITIVE OPERATIONS - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an event detector. The methods, systems, and apparatus include actions of identifying a portion of a first interframe difference image that represents motion of an OI, determining that a second interframe difference image represents motion by a non-OI, combining the portion of the first interframe difference image and the second interframe difference image as a third interframe difference image labeled as motion of both an OI and a non-OI, and training an event detector with the third interframe difference image. | 2022-08-11 |
20220254038 | IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD - An image processing device includes: a reception unit that receives position information of a capturing target and a captured image of the capturing target captured by at least one camera, a prediction unit that predicts a position of the capturing target within a capturing range of the camera based on the position information of the capturing target; a detection unit that detects the capturing target by reading a captured image of a limitation range that is a part of the capturing range from the captured image of the capturing range based on a predicted position of the capturing target; a measurement unit that measures a position of a detected capturing target; and an output unit that outputs a difference between a measured position of the capturing target and the predicted position. | 2022-08-11 |
20220254039 | METHOD AND APPARATUS FOR ACCELERATING HYPERSPECTRAL VIDEO RECONSTRUCTION - A method for accelerating hyperspectral video reconstruction includes steps of: acquiring, according to a spectral video and an RGB video captured by a hyperspectral video camera, a calibration matrix of the spectral video and the RGB video; sorting the calibration matrix to generate an ordered calibration matrix; converting, according to the ordered calibration matrix, the spectral video and the RGB video into a data matrix in a parallel manner; acquiring all related calibration points of a reconstruction region according to the ordered calibration matrix; and reconstructing a hyperspectral video in a parallel manner according to the related calibration points and the data matrix. The related calibration points are acquired by sorting the calibration matrix, such that the number of times the calibration matrix is traversed is reduced, and the computation amount of hyperspectral video reconstruction is decreased. | 2022-08-11 |
20220254040 | SYSTEMS AND METHODS FOR REGISTRATION BETWEEN PATIENT SPACE AND IMAGE SPACE USING REGISTRATION FRAME APERTURES - Systems and methods for performing registration between a patient space and an image space are disclosed herein. Systems using such methods may identify a pose of a registration frame having one or more apertures within the patient space using a tracking system. Image data corresponding to the image space having an image of the registration frame is taken. A plurality of aperture locations within the image data (corresponding to the apertures of the registration frame) are identified in the image data. Aperture representations from a model of the registration frame are matched to these aperture locations to determine a pose of the registration frame within the image space. A transform between the patient space and the image space is generated based on the pose of the registration frame within the patient space and the pose of the registration frame within the image space. Optimization methods for this transform are discussed. | 2022-08-11 |
20220254041 | IMAGE PROCESSING DEVICE, STORAGE MEDIUM, AND IMAGE PROCESSING METHOD - To provide an image processing device capable of obtaining support information for optimizing a matching model when the matching model is generated, the image processing device includes: an image acquisition unit; a matching model acquisition unit configured to acquire an image processing matching model based on an image acquired by the image acquisition unit; an image transformation unit configured to perform predetermined transformation on the image to acquire a transformed image; a comparison unit configured to compare the matching model acquired by the matching model acquisition unit with the transformed image acquired by the image transformation unit; and a display unit configured to display support information for optimizing the matching model based on a result of the comparison unit. | 2022-08-11 |
20220254042 | METHODS AND SYSTEMS FOR SENSOR UNCERTAINTY COMPUTATIONS - Systems and method are provided for controlling a sensor of a vehicle. In one embodiment, a method includes: receiving depth image data from the sensor of the vehicle; computing, by a processor, an aleatoric variance value based on the depth image data; dividing, by the processor, the depth image data into grid cells; computing, by the processor, a confidence bound value for each grid cell based on the depth image data; computing, by the processor, an uncertainty value for each grid cell based on the confidence bound value of the grid cell and the aleatoric variance value; and controlling, by the processor, the sensor based on the uncertainty values. | 2022-08-11 |
20220254043 | A compression area identification platform and method thereof using content analysis - A compression area identification platform and method using content analysis. The platform includes a compression identification device that acquires the imaging depth-of-field value of each skin imaging pixel point in a skin imaging area and calculates the mean of those values to obtain a reference depth of field. Within the skin imaging area, the compression identification device issues a compression area identification signal if the number of skin imaging pixel points whose imaging depth-of-field value exceeds the reference depth of field is greater than a preset threshold; otherwise, it issues a compression-area-unidentified signal. The present invention can quickly identify the skin compression area at a surgical puncture position and raise an on-site signal alarm. | 2022-08-11 |
20220254044 | RANGING DEVICE AND RANGING METHOD - A ranging device and a ranging method are provided. The ranging device includes a light source, an image sensor, and a processor. The light source projects a plurality of projection patterns onto a surface of an object to be measured at different times. The image sensor senses the surface of the object to be measured in synchronization with projection times of the projection patterns to obtain a plurality of sensing images respectively corresponding to the projection patterns. The processor analyzes the sensing images to determine depth information of the object to be measured. The processor performs trigonometric calculations to obtain the depth information. | 2022-08-11 |
20220254045 | Determining Object Structure Using Physically Mounted Devices With Only Partial View Of Object - Techniques are described for automated analysis and use of data acquired about an object of interest, such as from a physically mounted camera or other sensing device with only partial coverage of the object exterior, such as to automatically generate a computer model of the object from visual data in images and to use the computer model to automatically estimate values for one or more object attributes. For example, the described techniques may be used to measure the volume of a pile of material significantly larger than a human using images acquired by one or more fixed-location cameras that provide visual coverage of only a subset of the pile's exterior. The images from such physically mounted devices may be acquired at various times (e.g., when triggered by conditions in the environment of the object, dynamically upon request, etc.), and may be used to monitor changes in the object. | 2022-08-11 |
20220254046 | METHOD FOR PERFORMING SIMULTANEOUS LOCALIZATION AND MAPPING AND DEVICE USING SAME - Provided is an accelerator provided in an electronic device and configured to perform simultaneous localization and mapping (SLAM), the accelerator including a factor graph database, a memory, and a back-end processor, wherein the back-end processor is configured to receive a first piece of data corresponding to map points and camera positions from the factor graph database, convert the received first piece of data into a matrix for the map points and a matrix for the camera positions, store, in the memory, results obtained by performing an optimization calculation on the matrix for the map points and a matrix for at least one camera position, among the camera positions, corresponding to the map points, and obtain a second piece of data optimized with respect to the first piece of data based on the results stored in the memory. | 2022-08-11 |
20220254047 | SYSTEM AND METHOD FOR DYNAMIC STEREOSCOPIC CALIBRATION - Methods for stereo calibration of a dual-camera that includes a first camera and a second camera, and systems for performing such methods. In some embodiments, a method comprises obtaining optimized extrinsic and intrinsic parameters using initial intrinsic parameters and, optionally, initial extrinsic parameters of the cameras, estimating an infinity offset e using the optimized extrinsic and intrinsic parameters, and estimating a scaling factor s using the optimized extrinsic and intrinsic parameters and infinity offset parameter e, wherein the optimized extrinsic and intrinsic parameters, infinity offset e and scaling factor s are used together to provide stereo calibration that leads to improved depth estimation. | 2022-08-11 |
20220254048 | CAMERA SYSTEMS USING FILTERS AND EXPOSURE TIMES TO DETECT FLICKERING ILLUMINATED OBJECTS - The technology relates to camera systems for vehicles having an autonomous driving mode. An example system includes a first camera mounted on a vehicle in order to capture images of the vehicle's environment. The first camera has a first exposure time and no ND filter. The system also includes a second camera mounted on the vehicle in order to capture images of the vehicle's environment, the second camera having an ND filter and a second exposure time. The system also includes one or more processors configured to capture images using the first camera and the first exposure time, capture images using the second camera and the second exposure time, use the images captured using the second camera to identify illuminated objects, use the images captured using the first camera to identify the locations of objects, and use the identified illuminated objects and identified locations of objects to control the vehicle in an autonomous driving mode. | 2022-08-11 |
20220254049 | PORTABLE DIMENSIONAL REFERENCE FOR 3-D TOWER MODELING - A portable dimensional reference (PDR) that is transportable to a tower site and deployable on site at ground level near a tower. The PDR includes a pair of target pads and multiple connecting segments, which may be attached end-to-end between the target pads. The target pads are marked with respective targets that enable photogrammetry software to identify the targets, their locations, and the distance between them with high precision, enabling the software to apply the known distance as a scale constraint for accurately scaling dimensions of imaged components of the tower. | 2022-08-11 |
20220254050 | NOISE REDUCTION CIRCUIT FOR DUAL-MODE IMAGE FUSION ARCHITECTURE - Embodiments relate to an image processing circuit comprising a noise reduction circuit configurable to perform bilateral filtering on demosaiced and resampled image data, or on raw image data, based on the operating mode of the image processing circuit. The noise reduction circuit filters received image data based upon directional taps, by selecting, for each pixel, a set of neighbor pixels, and comparing values of the set of neighbor pixels to determine whether the pixel lies on a directional edge. For raw images, the noise reduction circuit selects the set of neighbor pixels to include a plurality of pixels of the same color channel as the pixel, and one or more additional pixels of a different color channel, where color values for the one or more additional pixels are determined by interpolating color values of two or more adjacent pixels of the same color channel as the pixel. | 2022-08-11 |
20220254051 | TESTING SYSTEM AND TESTING METHOD FOR IMAGE PROCESSING ALGORITHM - A testing system for an image processing algorithm including a control unit, an image processing device, image processing hardware, and a testing device is disclosed. The control unit provides an original image and parameters. The image processing device obtains the original image and the parameters, and drives the image processing hardware to perform a first image processing procedure on the original image based on the parameters to generate a hardware-processed image. The testing device obtains the original image, the parameters, and the hardware-processed image, and performs, through a simulation software, a second image processing procedure on the original image based on the parameters to generate a software-processed image. The testing device compares the hardware-processed image with the software-processed image through a testing software to generate a comparison result that shows the pixel difference between the hardware-processed image and the software-processed image. | 2022-08-11 |
20220254052 | THREE-DIMENSIONAL POSE ESTIMATION METHOD, PROGRAM, RECORDING MEDIUM AND THREE-DIMENSIONAL POSE ESTIMATION DEVICE - A method for estimating a three-dimensional pose of an object from control points of an image of the object includes: selecting, for each pair consisting of first and second control points that define a skeleton of the object, relative positions of the first control point with respect to the second control point on the image; estimating, for each of the selected relative positions, a relative depth of the first control point with respect to the second control point over the entire image based on an assumption that the second control point exists at each position on the image; detecting two-dimensional positions of the control points on the image for the object using the image; obtaining, based on the relative depth estimated for each of the selected relative positions and the two-dimensional positions of the control points, relative three-dimensional positions of the control points; and estimating the three-dimensional pose for the object. | 2022-08-11 |
20220254053 | MAP REPRESENTATION DATA PROCESSING DEVICE, CORRESPONDENCE INFORMATION PRODUCTION METHOD, AND PROGRAM - It is possible to automatically acquire coordinate information and position information regarding a location name or the like on map representation data, in association with each other, with a map representation data processing device including: a map representation data acceptance unit that accepts map representation data; a character string acquisition unit that acquires a character string from the map representation data; a coordinate information acquisition unit that acquires coordinate information corresponding to the acquired character string; a position information acquisition unit that acquires pieces of position information corresponding to pieces of location information that are character strings, using a location dictionary that contains one or more pieces of location position information that associate the pieces of location information and the pieces of position information with each other; and a correspondence output unit that outputs the coordinate information and the position information in association with each other. | 2022-08-11 |
20220254054 | CONSTRUCTION MACHINE WORK INFORMATION GENERATION SYSTEM AND WORK INFORMATION GENERATION METHOD - A work information generation system includes: a data acquisition unit that acquires time-series work data for determining work content of work performed by a construction machine and attendant situation data representing an attendant situation including at least one of a situation of the construction machine and a situation around the construction machine when the work is performed; a work content determination unit that determines the work content based on the time-series work data; and a reflection information generation unit that generates, based on the work content and the attendant situation data, reflection information, which is information in which the attendant situation is reflected on the work content. | 2022-08-11 |
20220254055 | SYSTEMS AND METHODS FOR IMAGE-BASED ELECTRICAL CONNECTOR ASSEMBLY DETECTION - A method includes defining, in an image, bounding boxes about an electrical connector assembly, identifying an edge of each of the bounding boxes, and determining one or more metrics based on the edge of each of the bounding boxes, where the one or more metrics indicate a positional relationship between the edges of the bounding boxes. The method includes determining a state of the electrical connector assembly based on the one or more metrics and transmitting a notification based on the state. | 2022-08-11 |
20220254056 | DISTANCE CALCULATION APPARATUS AND VEHICLE POSITION ESTIMATION APPARATUS - A distance calculation apparatus includes an in-vehicle detection unit configured to detect a situation around a subject vehicle, and a microprocessor and a memory connected to the microprocessor. The microprocessor is configured to perform recognizing an object on the basis of detection data detected by the in-vehicle detection unit, extracting feature points included in the detection data, recognizing a representative position of a predetermined object on the basis of a distribution of feature points corresponding to the predetermined object among the feature points extracted in the extracting when the predetermined object is recognized in the recognizing, and calculating a distance from the subject vehicle to the predetermined object on the basis of the representative position recognized in the recognizing. | 2022-08-11 |
20220254057 | POINT CLOUD ATTRIBUTE PREDICTION METHOD AND DEVICE BASED ON FILTER - Provided are a point cloud attribute prediction method and prediction device based on a filter. The method comprises a coding method and a decoding method, and the device comprises a coding device and a decoding device. The method comprises: determining K nearest neighbor points of the current point; determining a filter matrix; and determining an attribute prediction value of the current point according to the filter matrix. Therefore, the compression performance of a point cloud attribute can be improved by means of selecting an appropriate filter. | 2022-08-11 |
20220254058 | METHOD FOR DETERMINING LINE-OF-SIGHT, METHOD FOR PROCESSING VIDEO, DEVICE, AND STORAGE MEDIUM - Provided is a method for detecting line-of-sight. The method for detecting line-of-sight includes: determining, based on a key feature point in a face image, a face posture and an eye pupil rotational displacement corresponding to the face image, wherein the eye pupil rotational displacement is a displacement of a pupil center relative to an eyeball center in the face image; and acquiring a line-of-sight direction of an actual face by back-projecting, based on a preset projection function and the face posture, the eye pupil rotational displacement to a three-dimensional space where the actual face is located. | 2022-08-11 |
20220254059 | Data Processing Method and Related Device - The present disclosure relates to a data processing method and a related device. The method comprises the following steps: acquiring a point cloud to be processed which comprises at least one object to be located; determining at least two target areas in the point cloud to be processed, and adjusting normal vectors of points in the target areas to significant normal vectors according to initial normal vectors of the points in the target areas, any two of the at least two target areas being different; dividing the point cloud to be processed according to the significant normal vectors of the target areas to acquire at least one divided area; and acquiring a three-dimensional position of a reference point of the object to be located according to three-dimensional positions of the points in the at least one divided area. | 2022-08-11 |
20220254060 | 3D FIDUCIAL FOR PRECISION 3D NAND CHANNEL TILT/SHIFT ANALYSIS - Systems and methods for generating precise structure reconstructions using slice and view images are disclosed. An example method comprises obtaining slice and view images of a sample that depict a 3D fiducial and cross-sections of a structure in the sample. The 3D fiducial is configured such that when a layer of material having a uniform thickness is removed from a surface of the sample that includes the 3D fiducial, the cross-sectional shape of the 3D fiducial in the new surface is consistent. Relative positions are determined between the 3D fiducial and the cross-sections of the structure in individual images. Positional relationships are then determined between the cross-sections of the structure in different images in a common reference frame based on the relative positions. | 2022-08-11 |
20220254061 | Optical Flow Odometry Based on Optical Mouse Sensor Technology - An optical flow odometer to determine the position of an object movable relative to a surface is provided that includes a cluster of digital image sensors intended to be arranged on the movable object at respective reference positions and an electronic processing and control unit electrically connected with the digital image sensors and configured to: operate the digital image sensors in subsequent time instants during movement of the movable object so as to carry out a sequence of multiple digital image capture operations, in each of which the digital image sensors are operated to simultaneously capture respective digital images; receive from the digital image sensors and process the digital images captured in the multiple digital image capture operations to compute either the positions of the individual digital image sensors or the position of the cluster of digital image sensors at one or more of the multiple digital image capture operations; and compute the position of the movable object at one or more of the multiple digital image capture operations based on either the positions of one or more individual digital image sensors or the position of the cluster of digital image sensors computed at one or more of the multiple digital image capture operations. | 2022-08-11 |
20220254062 | METHOD, DEVICE AND STORAGE MEDIUM FOR ROAD SLOPE PREDICTING - A road slope predicting method, a device, and a storage medium are disclosed. The road slope predicting method includes: obtaining a road image of a road by a camera; detecting a first image lane line and a second image lane line of the road from the road image; setting a number of road image segmentation points along a road image center line between the first image lane line and the second image lane line; determining, at each road space segmentation point corresponding to the road image segmentation points, the pitch angle of the camera with respect to the road plane; and, based on the internal and external parameters of the camera and the pitch angle, calculating a space coordinate of each road space segmentation point in a recursive manner. A road model of the road is constructed based on the space coordinates. | 2022-08-11 |
20220254063 | GAZE POINT ESTIMATION PROCESSING APPARATUS, GAZE POINT ESTIMATION MODEL GENERATION APPARATUS, GAZE POINT ESTIMATION PROCESSING SYSTEM, AND GAZE POINT ESTIMATION PROCESSING METHOD - A gaze point estimation processing apparatus in an embodiment includes a storage configured to store a neural network as a gaze point estimation model and one or more processors. The storage stores a gaze point estimation model generated through learning based on an image for learning and information relating to a first gaze point for the image for learning. The one or more processors estimate information relating to a second gaze point with respect to an image for estimation from the image for estimation using the gaze point estimation model. | 2022-08-11 |
20220254064 | EXTERNAL PARAMETER CALIBRATION METHOD, DEVICE AND SYSTEM FOR IMAGE ACQUISITION APPARATUS - An external parameter calibration method for an image acquisition apparatus is disclosed. The method includes acquiring images captured by the image acquisition apparatus. The images contain reference objects acquired by the image acquisition apparatus during the driving of the vehicle. The reference objects in the images are divided into a number of sections along a road direction in which the vehicle is located, and reference objects in each of the sections are fitted into straight lines. Pitch angles and yaw angles of the image acquisition apparatus are determined based on vanishing points of a straight line in each of the sections. The sequences of the determined pitch and yaw angles are filtered. Straight portions of the road are obtained from the filtered sequences of pitch and yaw angles. Data of the pitch angles and yaw angles corresponding to the straight portions are stored in a data stack. | 2022-08-11 |
20220254065 | CAMERA CALIBRATION METHOD AND APPARATUS AND ELECTRONIC DEVICE - A camera calibration method, an apparatus, and an electronic device are provided. The method includes: obtaining a calibration board image, where the calibration board image includes a plurality of annular patterns; obtaining an inner edge and an outer edge of each annular pattern in the calibration board image; determining image coordinates of a center point of each annular pattern according to the inner edge and the outer edge of each annular pattern; and determining internal and external parameters of a camera according to the image coordinates and corresponding world coordinates of the center point of each annular pattern. The accuracy of camera calibration is improved. | 2022-08-11 |
20220254066 | Method for Calibrating the Position and Orientation of a Camera Relative to a Calibration Pattern - A method for calibrating the position and orientation of a camera, in particular a vehicle-mounted camera, relative to a calibration pattern includes the steps of: A] acquiring an image of the calibration pattern by the camera; B] determining a parameter of the image or of the calibration pattern; C] transforming the image based on the parameter; D] identifying characteristic points or possible characteristic points of the calibration pattern within the transformed image; E] deriving the position or orientation of the camera relative to the calibration pattern from the identified characteristic points or possible characteristic points; F] in dependence of a confidence value of the derived position or orientation of the camera or in dependence of the number of iterations of steps B to F so far, repeating steps B to F; and G] outputting the position or orientation of the camera derived in the last iteration of step E. | 2022-08-11 |
20220254067 | OPTICAL INFORMATION DETECTION METHOD, DEVICE AND EQUIPMENT - An optical information detection method includes: acquiring a first image captured by a first imaging device, wherein the first image comprises a target full-field projection pattern; extracting features from the first image to obtain first feature information; acquiring second feature information and first graphic information of a reference full-field projection pattern, wherein the first graphic information comprises zero-order information and/or secondary information; calculating a first mapping relationship between the first feature information and the second feature information; mapping the first graphic information to the target full-field projection pattern according to the first mapping relationship, to obtain second graphic information corresponding to the target full-field projection pattern; and calculating target optical information according to the second graphic information. With the above-mentioned method, optical information can be accurately detected regardless of whether the zero-order speckle pattern is globally unique. | 2022-08-11 |
20220254068 | A METHOD AND APPARATUS FOR DECODING THREE-DIMENSIONAL SCENES - Generating an image from a source image can involve encoding a projection of a part of a three-dimensional scene. Pixels of a source image comprise a depth and a color attribute. Pixels of a source image are de-projected as a colored point cloud. A de-projected point in a 3D space has the color attribute of the pixel that it has been de-projected from. Also, a score is attributed to the generated point according to a local depth gradient and/or a local color gradient of the pixel it comes from: the lower the gradient, the higher the score. The generated point cloud is captured by a virtual camera for rendering on a display device. The point cloud is projected onto the viewport image by blending colors of points projected on a same pixel, the blending being weighted by the scores of these points. | 2022-08-11 |
20220254069 | IMAGE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM - This application discloses an image processing method performed by an electronic device. The method includes: receiving padding frame information transmitted by a server and an image frame set of a to-be-processed image sequence; inputting the image frame set to a cache queue of a decoder, and determining the image frame currently located at the first position of the cache queue as a target image frame; inserting padding frames between the target image frame and the next image frame subsequent to the target image frame based on the padding frame information, and repeating this step until all image frames in the image frame set are decoded; and processing the decoded image frames and displaying a processing result. The solution reduces the time delay caused by decoding an image sequence. | 2022-08-11 |
20220254070 | METHODS AND APPARATUS FOR LOSSLESS COMPRESSION OF GPU DATA - The present disclosure relates to methods and devices for data or graphics processing including an apparatus, e.g., a GPU. The apparatus may receive at least one bitstream including a plurality of bits, each of the bits corresponding to a position in the at least one bitstream, and each of the bits being associated with color data. The apparatus may also arrange an order of the plurality of bits in the at least one bitstream, such that at least one of the bits corresponds to an updated position in the at least one bitstream. Additionally, the apparatus may convert, upon arranging the order of the bits, the color data associated with each of the plurality of bits in the at least one bitstream. The apparatus may also compress, upon converting the color data associated with each of the bits, the plurality of bits in the at least one bitstream. | 2022-08-11 |
20220254071 | FEW-SHOT DIGITAL IMAGE GENERATION USING GAN-TO-GAN TRANSLATION - The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and efficiently modifying a generative adversarial neural network using few-shot adaptation to generate digital images corresponding to a target domain while maintaining diversity of a source domain and realism of the target domain. In particular, the disclosed systems utilize a generative adversarial neural network with parameters learned from a large source domain. The disclosed systems preserve relative similarities and differences between digital images in the source domain using a cross-domain distance consistency loss. In addition, the disclosed systems utilize an anchor-based strategy to encourage different levels or measures of realism over digital images generated from latent vectors in different regions of a latent space. | 2022-08-11 |
20220254072 | PROVIDING CONTEXT FOR SOFTWARE DEPLOYMENTS USING AUGMENTED REALITY - Systems and methods are described for providing context for software deployment using augmented reality. In an example method, an augmented reality (AR) device having one or more processors may receive a set of compatibility requirements for deployment of a computer executable program (e.g., a software). A camera may acquire image data of a first video showing one or more computing devices. A respective device identifier corresponding to each computing device may be determined. Based on each device identifier, a respective device specification may be received for each computing device. The set of compatibility requirements may be compared with each of the device specifications. The AR device may generate one or more annotation labels indicating a respective compatibility value for each computing device. Furthermore, the AR device may generate, in real time, an augmented video by mapping the annotation labels to the computing devices. | 2022-08-11 |
20220254073 | METHODS FOR GENERATING SOIL MAPS AND APPLICATION PRESCRIPTIONS - Methods are provided for generating a prescription map for the application of crop inputs. In one method, the user draws a boundary on a map within a user interface and the system identifies relevant soil data and generates a soil map overlay and legend for changing the application prescription for various soils and soil conditions. In another method, the user instead drives a field boundary which is recorded on a planter monitor using a global positioning receiver, and the system generates a soil map and legend for changing the application prescription. | 2022-08-11 |
20220254074 | SHARED EXTENDED REALITY COORDINATE SYSTEM GENERATED ON-THE-FLY - Consistent with disclosed embodiments, systems, methods, and computer readable media are provided that include instructions for wearable extended reality appliances to share virtual content. Embodiments may include a processor to generate a visual code reflecting a first physical position of a mobile device. The visual code may be presented on a display of the mobile device in order for the plurality of wearable extended reality appliances to detect the visual code. The plurality of wearable extended reality appliances may share content in a common coordinate system upon detection of the visual code. The processor may detect movement of the mobile device to a second physical position different from the first physical position. The processor may alter the presentation of the visual code upon the detected movement of the mobile device, making the visual code unavailable for use in content sharing. | 2022-08-11 |
20220254075 | Systems and Methods for Training an Image Colorization Model - A method for training an image colorization model may include inputting a training input image into a colorization model and receiving a predicted color map as an output of the colorization model. A first color distance may be calculated between a first pixel of the predicted color map and a second pixel of the predicted color map. A second color distance may be calculated between a third pixel included in a ground truth color map and a fourth pixel included in the ground truth color map. The third pixel and fourth pixel included in the ground truth color map may spatially correspond, respectively, with the first pixel and second pixel included in the predicted color map. The method may include adjusting parameters associated with the colorization model based on a neighborhood color loss function that evaluates a difference between the first color distance and the second color distance. | 2022-08-11 |
20220254076 | Automated Digital Tool Identification from a Rasterized Image - A visual lens system is described that identifies, automatically and without user intervention, digital tool parameters for achieving a visual appearance of an image region in raster image data. To do so, the visual lens system processes raster image data using a tool region detection network trained to output a mask indicating whether the digital tool is useable to achieve a visual appearance of each pixel in the raster image data. The mask is then processed by a tool parameter estimation network trained to generate a probability distribution indicating an estimation of discrete parameter configurations applicable to the digital tool to achieve the visual appearance. The visual lens system generates an image tool description for the parameter configuration and incorporates the image tool description into an interactive image for the raster image data. The image tool description enables transfer of the digital tool parameter configuration to different image data. | 2022-08-11 |
20220254077 | CONSERVATIVE RASTERIZATION - Conservative rasterization hardware comprises hardware logic arranged to perform an edge test calculation for each edge of a primitive and for each corner of each pixel in a microtile. Outer coverage results are determined, for a particular pixel and edge, by combining the edge test results for the four corners of the pixel and the particular edge in an OR gate. Inner coverage results are determined, for a particular pixel and edge, by combining the edge test results for the four corners of the pixel and the particular edge in an AND gate. An overall outer coverage result for the pixel and the primitive is calculated by combining the outer coverage results for the pixel and each of the edges of the primitive in an AND gate. The overall inner coverage result for the pixel is calculated in a similar manner. | 2022-08-11 |
20220254078 | EDITING DIGITAL IMAGES UTILIZING EDGE AWARE TRANSFORMATIONS BASED ON VECTOR SPLINES AND MESHES FOR MULTIPLE IMAGE REGIONS - The present disclosure relates to systems, methods, and non-transitory computer-readable media that utilize simultaneous, multi-mesh deformation to implement edge aware transformations of digital images. In particular, in one or more embodiments, the disclosed systems generate a transformation handle that targets an edge portrayed in a digital image. In some cases, the disclosed systems provide the transformation handle for display over the digital image. Additionally, in one or more embodiments, the disclosed systems generate vector splines and meshes for the edge and one or more influenced regions adjacent to the edge. In response to detecting a user interaction with the transformation handle, the disclosed systems can modify the edge and the at least one influenced region by modifying the corresponding vector splines and meshes. | 2022-08-11 |
20220254079 | IMAGE GENERATION DEVICE, IMAGE GENERATION METHOD, AND PROGRAM - Provided are an image generation device, an image generation method, and a program for a computer which can smoothly generate an edited image based on editing operations of a plurality of users. | 2022-08-11 |
20220254080 | VIRTUAL HAIR EXTENSION SYSTEM - A virtual hair extension system is provided. The system includes a memory device having a user image of a user having hair, a display device, and a computer that is operably coupled to the memory device and the display device. The computer has a hair segmentation module and a hair extension blending module. The hair segmentation module generates a binary hair mask based on the user image. The hair extension blending module generates a final modified user image having the hair of the user with a selected hair extension thereon utilizing the user image, the binary hair mask, and a reference image of the selected hair extension. The computer displays the final modified user image having the hair of the user with the selected hair extension thereon on the display device. | 2022-08-11 |
20220254081 | SYSTEMS AND METHODS FOR IMPROVING THE READABILITY OF CONTENT - Systems and methods are disclosed for improving readability of content wherein content is organized into segments that are each displayed on a respective row on a display device. Each segment is made up of characters such as letters and punctuation. The text in a particular row, the reading row, on the display device is displayed differently than the text in the other rows. Each row is consecutively displayed in the reading row until the user has seen each segment of the content presented in the reading row. Users are able to configure the display of the reading row text and text in other rows independently. | 2022-08-11 |
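The reading-row behavior in 20220254081 amounts to styling one row differently and advancing it through the segments. A toy sketch, where uppercasing stands in for whatever display style the user configures; nothing here is from the application itself:

```python
def render_rows(segments, reading_index):
    """Return the rows to display: the segment at reading_index is
    rendered in the emphasized reading-row style (uppercase here),
    the rest in the normal style."""
    return [seg.upper() if i == reading_index else seg
            for i, seg in enumerate(segments)]

def reading_sequence(segments):
    """Advance the reading row consecutively until the user has seen
    every segment in the emphasized style."""
    return [render_rows(segments, i) for i in range(len(segments))]
```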
20220254082 | METHOD OF CHARACTER ANIMATION BASED ON EXTRACTION OF TRIGGERS FROM AN AV STREAM - Digital events to dynamically establish avatar emotion may include particular dynamic metadata happening on live TV, Internet streamed video, live computer gameplay, movie scene, particular voice trigger from users or spectators, the dynamic position/state of an input device (such as a game controller resting on a table), etc. The system dynamically changes the state of a user avatar to various emotional and rig transformation states through this smart system. This brings more life to the existing static chat/video chat conversation, which is active and user-driven, while this system is dynamically trigger-driven and autonomous in nature. The avatar world is thus rendered to be more life-like and responsive to the environmental and digital happenings of the user. | 2022-08-11 |
20220254083 | Machine-learning Models for Tagging Video Frames - According to a first aspect of this specification, there is described a computer-implemented method of tagging video frames. The method comprises generating, using a frame tagging model, a tag for each of a plurality of frames of an animation sequence. The frame tagging model comprises: a first neural network portion configured to process, for each frame of the plurality of frames, a plurality of features associated with the frame and generate an encoded representation for the frame. The frame tagging model further comprises a second neural network portion configured to receive input comprising the encoded representations of each frame and generate output indicative of a tag for each of the plurality of frames. | 2022-08-11 |
20220254084 | AVATAR STYLE TRANSFORMATION USING NEURAL NETWORKS - Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and a method for transforming a motion style of an avatar from a first style to a second style. The program and method include: retrieving, by a processor from a storage device, an avatar depicting motion in a first style; receiving user input selecting a second style; obtaining, based on the user input, a trained machine learning model that performs a non-linear transformation of motion from the first style to the second style; and applying the obtained trained machine learning model to the retrieved avatar to transform the avatar from depicting motion in the first style to depicting motion in the second style. | 2022-08-11 |
20220254085 | METHOD FOR PLAYING AN ANIMATION, DEVICE AND STORAGE MEDIUM - The disclosure relates to a method for playing an animation. The method includes: obtaining a target animation based on a setting operation for switching between a camera application and an image storage program. The target animation is an animation of a target image displayed when switching between the camera application and the image storage program. The target animation is played in a transparent window. The transparent window covers an application interface of the camera application as an upper layer of the application interface. | 2022-08-11 |
20220254086 | ANIMATED FACES USING TEXTURE MANIPULATION - A method and system are provided to create animated faces using texture manipulation. A face template is provided to enable a user to define features of the face. A composite face is created from multiple layers that include a bottom layer, an animation layer, and a static layer. The composite face is animated by selectively animating one or more of the layers. | 2022-08-11 |
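The layered compositing in 20220254086 can be sketched as dictionary overlays: only the animation layer changes per frame while the bottom and static layers stay fixed. Layer contents and names below are illustrative assumptions:

```python
def composite_face(bottom, animation, static):
    """Composite a face from three layers, bottom to top. Each layer
    maps (x, y) -> pixel value; entries in upper layers override the
    layers beneath them."""
    face = dict(bottom)
    face.update(animation)
    face.update(static)
    return face

def animate_face(bottom, animation_frames, static):
    """Animate selectively: re-composite with a new animation layer
    each frame while the other two layers are reused unchanged."""
    return [composite_face(bottom, frame, static) for frame in animation_frames]

# Illustrative layers: skin base, blinking-eye animation, fixed nose.
bottom = {(0, 0): "skin", (1, 0): "skin"}
frames = [{(0, 0): "eye_open"}, {(0, 0): "eye_shut"}]
static = {(1, 0): "nose"}
```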
20220254087 | METHOD AND SYSTEM FOR SAFETY CRITICAL RENDERING OF A FRAME - A method and system for performing safety-critical rendering of a frame in a tile based graphics processing system. Geometry data for the frame is received, including data defining a plurality of primitives representing a plurality of objects in the frame. A definition of a region in the frame is received, the region being associated with one or more primitives among the plurality of primitives. Verification data is received that associates one or more primitives with the region in the frame. The frame is rendered using the geometry data and the rendering of the frame is controlled using the verification data, so that the rendering excludes, from the frame outside the region, the primitives identified by the verification data. | 2022-08-11 |
20220254088 | METHOD AND SYSTEM FOR SAFETY CRITICAL RENDERING OF A FRAME - A method and system for performing safety-critical rendering of a frame in a tile based graphics processing system. Geometry data for the frame is received, including data defining a plurality of primitives representing a plurality of objects in the frame. A definition of a region in the frame is received, the region being associated with one or more primitives among the plurality of primitives. Verification data is received that associates one or more primitives with the region in the frame. The frame is rendered using the geometry data and the rendering of the frame is controlled using the verification data, so that the rendering excludes, from the frame outside the region, the primitives identified by the verification data. | 2022-08-11 |
20220254089 | SHADER AUTO-SIMPLIFYING METHOD AND SYSTEM BASED ON RENDERING INSTRUCTION FLOW - The present invention discloses a shader auto-simplifying method and system based on a rendering instruction flow, the method including: obtaining a rendering instruction flow, extracting a target shader from the rendering instruction flow, and creating a simplifying shader differing from the target shader in code only; intercepting a current frame of a rendering instruction comprising a rendering initiating instruction of the target shader as a particular frame; obtaining time consumed by the simplifying shader by measuring time needed for rendering the particular frame with the simplifying shader; obtaining error(s) of the simplifying shader by measuring a pixel difference value between a rendering frame drawn by the simplifying shader and the particular frame when a rendering instruction corresponding to the particular frame is executed; and screening an optimal simplifying shader according to the time consumed by the simplifying shader and the error of the simplifying shader. Thus, the time consumed by the simplifying shader and the error of the simplifying shader can be accurately measured. Meanwhile, without depending on the original graphical program, favorable simplifying effects and high practicality can be achieved. | 2022-08-11 |
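The final screening step of 20220254089 trades the measured time against the measured error. One plausible policy (not necessarily the one claimed) thresholds the error and then minimizes time; all names and numbers below are made up for illustration:

```python
def screen_candidates(candidates, max_error):
    """candidates: list of (name, render_time_ms, pixel_error) tuples
    measured per simplifying shader. Return the fastest candidate whose
    pixel error stays within max_error, or None if none qualifies."""
    feasible = [c for c in candidates if c[2] <= max_error]
    return min(feasible, key=lambda c: c[1]) if feasible else None

# Hypothetical measurements for one target shader.
candidates = [
    ("full",      5.0, 0.000),  # original shader: slow, exact
    ("half-prec", 3.2, 0.004),
    ("no-spec",   2.1, 0.012),
    ("flat",      1.0, 0.090),  # cheap but visibly wrong
]
```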
20220254090 | CLUSTER OF SCALAR ENGINES TO ACCELERATE INTERSECTION IN LEAF NODE - Cluster of acceleration engines to accelerate intersections. For example, one embodiment of an apparatus comprises: a set of graphics cores to execute a first set of instructions of a primary graphics thread; a scalar cluster comprising a plurality of scalar execution engines; and a communication fabric interconnecting the set of graphics cores and the scalar cluster; the set of graphics cores to offload execution of a second set of instructions associated with ray traversal and/or intersection operations to the scalar cluster; the scalar cluster comprising a plurality of local memories, each local memory associated with one of the scalar execution engines, wherein each local memory is to store a portion of a hierarchical acceleration data structure required by an associated scalar execution engine to execute one or more of the second set of instructions; the plurality of scalar execution engines to store results of the execution of the second set of instructions in a memory accessible by the set of graphics cores; wherein the set of graphics cores are to process the results within the primary graphics thread. | 2022-08-11 |
20220254091 | APPARATUS AND METHOD FOR OPTIMIZED RAY TRACING - An apparatus and method for efficient ray tracing. For example, one embodiment of an apparatus comprises: a general purpose processor to generate a plurality of ray streams; a first hardware queue to receive the ray streams generated by the general purpose processor; a graphics processing unit (GPU) comprising a plurality of execution units (EUs) to process the ray streams from the first hardware queue; a second hardware queue to store graphics processing jobs submitted by the GPU; the general purpose processor to process the jobs submitted by the GPU and share results with the GPU. | 2022-08-11 |
20220254092 | Intersection Testing for Ray Tracing - A system and method for performing intersection testing of rays in a ray tracing system. The ray tracing system uses a hierarchical acceleration structure comprising a plurality of nodes, each identifying one or more elements for intersection testing. The system defines and updates progress information that identifies, for a ray, leaf nodes of the hierarchical acceleration structure which identify elements for which it is not yet known whether or not the ray intersects. | 2022-08-11 |
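The progress information of 20220254092 boils down to per-ray bookkeeping of which leaf nodes still hold untested elements. A minimal sketch; the class and method names are assumptions:

```python
class RayProgress:
    """Tracks, for one ray, the leaf nodes of the acceleration
    structure whose elements have not yet been intersection-tested."""

    def __init__(self, leaf_ids):
        self.pending = set(leaf_ids)

    def mark_tested(self, leaf_id):
        """Record that the ray has been tested against this leaf."""
        self.pending.discard(leaf_id)

    def done(self):
        """True once no leaf with untested elements remains."""
        return not self.pending
```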
20220254093 | A WEB-SIDE REAL-TIME HYBRID RENDERING METHOD, DEVICE AND COMPUTER EQUIPMENT COMBINED WITH RAY TRACING - The present invention discloses a Web-side real-time hybrid rendering method, device and computer equipment combined with ray tracing. The method includes acquiring three-dimensional scene data and the textures transformed according to the three-dimensional scene data; for parts whose rendering results converge slowly and are low-frequency, employing rasterization rendering according to the three-dimensional scene data; for parts whose rendering results converge quickly and are high-frequency, employing ray tracing rendering according to the textures; and mixing the rendering results of the current frame and the historical frame according to the rasterization rendering result and/or the ray tracing rendering result. In this way, the problem of low rendering realism on the Web side is solved, and high-quality global illumination effects can be achieved on the Web side at a relatively low cost, which enhances the realism of rendering on the Web side. | 2022-08-11 |
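Mixing current and historical frames, as the last step of 20220254093 describes, is commonly done with an exponential moving average; the sketch below assumes that variant (the alpha value and pixel data are illustrative):

```python
def temporal_blend(current, history, alpha=0.25):
    """Blend the current frame's pixels into the accumulated history
    frame: alpha weights the new frame, 1 - alpha the history."""
    return [alpha * c + (1 - alpha) * h for c, h in zip(current, history)]
```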
20220254094 | IMAGE RENDERING APPARATUS AND METHOD - A medical image processing apparatus comprises processing circuitry configured to: | 2022-08-11 |
20220254095 | APPARATUS AND METHOD FOR SEARCHING FOR GLOBAL MINIMUM OF POINT CLOUD REGISTRATION ERROR - Disclosed herein are an apparatus and method for searching for a global minimum of a point cloud registration error. The apparatus includes memory in which at least one program is recorded and a processor for executing the program. The program collects multiple registration results in which the registration error between a source point cloud and a target point cloud is a local minimum as candidates, and selects, from among the candidates, the registration result in which the registration error between the source point cloud and the target point cloud is a global minimum. Collecting the multiple registration results may comprise repeatedly randomly initializing the source point cloud and the target point cloud and registering the initialized source point cloud to the initialized target point cloud, thereby searching for a registration result in which the registration error therebetween is a local minimum. | 2022-08-11 |
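The random-restart search of 20220254095 can be illustrated on a toy one-dimensional error with two local minima; `error`, the descent parameters, and the search range are all stand-ins for the real point-cloud registration error and initialization:

```python
import random

def error(t):
    """Toy registration error over one alignment parameter t: global
    minimum at t = 0, a second (worse) local minimum near t = 2.8."""
    return t ** 2 * (t - 3) ** 2 + 0.5 * t ** 2

def local_minimize(f, t0, lr=0.01, steps=2000, h=1e-5):
    """Plain gradient descent with a numeric derivative: finds the
    local minimum whose basin contains the initialization t0."""
    t = t0
    for _ in range(steps):
        grad = (f(t + h) - f(t - h)) / (2 * h)
        t -= lr * grad
    return t

def global_search(f, restarts=20, lo=-1.0, hi=4.0, seed=0):
    """Repeatedly randomize the initialization, collect the resulting
    local minima as candidates, and keep the one with least error."""
    rng = random.Random(seed)
    candidates = [local_minimize(f, rng.uniform(lo, hi)) for _ in range(restarts)]
    return min(candidates, key=f)
```

A single descent started in the wrong basin stalls at the worse local minimum; the restart loop escapes it by sampling enough initializations to land in the global basin.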
20220254096 | VIRTUAL DISPLAY CHANGES BASED ON POSITIONS OF VIEWERS - Systems, methods, and non-transitory computer readable media configured for enabling content sharing between users of wearable extended reality appliances are provided. In one implementation, the computer readable medium may be configured to contain instructions to cause at least one processor to establish a link between a first wearable extended reality appliance and a second wearable extended reality appliance. The first wearable extended reality appliance may display first virtual content. The second wearable extended reality appliance may obtain a command to display first virtual content via the second wearable extended reality appliance, and in response, this content may be transmitted and displayed via the second extended reality appliance. Additionally, the first wearable extended reality appliance may receive second virtual content from the second wearable extended reality appliance, and display said second virtual content via the first wearable extended reality appliance. | 2022-08-11 |
20220254097 | Digital Image Editing using a Depth-Aware System - Digital image editing techniques are described as implemented by a depth-aware system of a computing device. The depth-aware system employs a depth-aware grid that defines constraints for depth-aware editing of digital images involving perspective. These techniques support automated editing in which changes to directional spacing, perspective arrangement, perspective movement, object redistribution, and so on are implemented in real time, which is not possible in conventional techniques involving object distortion. As such, these techniques improve operation of the computing device that implements these techniques as well as user efficiency in interacting with the computing device to perform digital image editing that involves perspective. | 2022-08-11 |
20220254098 | TRANSPARENT, SEMI-TRANSPARENT, AND OPAQUE DYNAMIC 3D OBJECTS IN DESIGN SOFTWARE - A computing system, having stored thereon a design software, is configured to generate a design file representing a three-dimensional space that embodies a design. The computing system is further configured to render a three-dimensional view of the three-dimensional space in a graphical user interface. In response to receiving a user input of placing a three-dimensional object in the three-dimensional space, the computing system then renders the three-dimensional object in the three-dimensional view of the three-dimensional space in the graphical user interface as a silhouette. | 2022-08-11 |
20220254099 | FRACTIONAL VISIBILITY ESTIMATION USING PARTICLE DENSITY FOR LIGHT TRANSPORT SIMULATION - In various examples, transmittance may be computed using a power-series expansion of an exponential integral of a density function. A term of the power-series expansion may be evaluated as a combination of values of the term for different orderings of samples in the power-series expansion. A sample may be computed from a combination of values at spaced intervals along the function and a discontinuity may be compensated for based at least on determining a version of the function that includes an alignment of a first point with a second point of the function. Rather than arbitrarily or manually selecting a pivot used to expand the power-series, the pivot may be computed as an average of values of the function. The transmittance estimation may be computed from the power-series expansion using a value used to compute the pivot (for a biased estimate) or using all different values (for an unbiased estimate). | 2022-08-11 |
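The pivot idea in 20220254099 can be illustrated with a deterministic truncation: expanding exp(-x) around a pivot taken as the average of the optical depths keeps |x - pivot| small, so few terms suffice. This is a simplification of the stochastic estimator the abstract describes; all numbers are illustrative:

```python
import math

def exp_neg_series(x, pivot, n_terms):
    """Truncated power series for exp(-x) expanded around a pivot:
    exp(-x) = exp(-pivot) * sum_{k<n} (-(x - pivot))**k / k!."""
    acc, term = 0.0, 1.0
    for k in range(n_terms):
        acc += term
        term *= -(x - pivot) / (k + 1)
    return math.exp(-pivot) * acc

# Optical depths for a batch of rays; the pivot is their average, so
# the series converges quickly for every value in the batch.
taus = [0.8, 1.1, 1.3, 0.9]
pivot = sum(taus) / len(taus)
```

With the pivot at the batch average, eight terms already reproduce exp(-tau) far more accurately than the same truncation expanded around zero would for these values.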
20220254100 | METHOD AND APPARATUS FOR GENERATING 3D ENTITY SHAPE DATA USING IMAGE RECOGNITION - A method for generating 3D entity shape data using image recognition which is performed by a computing device, the method includes the steps of: recognizing a grid matching part having four edge vertices of a quadrangle displayed in an image captured by a camera; generating a cube-shaped 3D space grid of a specific distance unit applied to the image by using the grid matching part; and generating shape data for an external object using the 3D space grid. | 2022-08-11 |
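Once the marker fixes the scale, building the cube-shaped 3D space grid of 20220254100 reduces to enumerating lattice points at the chosen distance unit. The axis-aligned simplification below is an assumption; the real grid would be oriented by the recognized quadrangle:

```python
def space_grid(origin, unit, n):
    """Corner points of an n x n x n cube grid with cell size `unit`,
    anchored at `origin` = (x, y, z)."""
    ox, oy, oz = origin
    return [(ox + i * unit, oy + j * unit, oz + k * unit)
            for i in range(n + 1)
            for j in range(n + 1)
            for k in range(n + 1)]
```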
20220254101 | FACE MODEL PARAMETER ESTIMATION DEVICE, FACE MODEL PARAMETER ESTIMATION METHOD, AND FACE MODEL PARAMETER ESTIMATION PROGRAM - A face model parameter estimation device includes: an image coordinate system coordinate value derivation unit detecting x-coordinate and y-coordinate values in an image coordinate system at a feature point of an organ of a face in an image and estimating a z-coordinate value to derive three-dimensional coordinate values in the image coordinate system; a camera coordinate system coordinate value derivation unit deriving three-dimensional coordinate values in a camera coordinate system from the three-dimensional coordinate values in the image coordinate system; a parameter derivation unit applying the three-dimensional coordinate values in the camera coordinate system to a three-dimensional face shape model to derive a position and posture parameter of the three-dimensional face shape model in the camera coordinate system; and an error estimation unit estimating a position and posture error between the position and posture parameter and a true parameter and a shape deformation parameter. | 2022-08-11 |
20220254102 | SEMANTIC LABELING OF POINT CLOUD CLUSTERS - In one implementation, a method of semantically labeling a point cloud cluster is performed at a device including one or more processors and non-transitory memory. The method includes obtaining a point cloud of a physical environment including a plurality of points, each of the plurality of points associated with coordinates in a three-dimensional space. The method includes spatially disambiguating portions of the plurality of points into a plurality of clusters. The method includes determining a semantic label based on a volumetric arrangement of the points of a particular cluster of the plurality of clusters. The method includes generating a characterization vector of a particular point of the points of the particular cluster, wherein the characterization vector includes the coordinates of the particular point, a cluster identifier of the particular cluster, and the semantic label. | 2022-08-11 |
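The pipeline of 20220254102 (cluster, label, then pack a per-point characterization vector) can be sketched with a greedy distance-threshold clustering standing in for whatever spatial disambiguation the application actually uses; the radius and the vector's field order are assumptions:

```python
def cluster_points(points, radius=1.0):
    """Greedy spatial disambiguation: a point joins the first cluster
    whose seed point lies within `radius`, else it seeds a new cluster.
    Returns one cluster id per input point."""
    seeds, ids = [], []
    for p in points:
        for cid, s in enumerate(seeds):
            if sum((a - b) ** 2 for a, b in zip(p, s)) <= radius ** 2:
                ids.append(cid)
                break
        else:
            seeds.append(p)
            ids.append(len(seeds) - 1)
    return ids

def characterization_vector(point, cluster_id, label):
    """Pack coordinates, cluster identifier, and semantic label into
    one characterization vector."""
    x, y, z = point
    return (x, y, z, cluster_id, label)
```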