24th week of 2022 patent application highlights part 59 |
Patent application number | Title | Published |
20220189008 | METHOD FOR DETECTING DATA DEFECTS AND COMPUTING DEVICE UTILIZING METHOD - A method for detecting data defects and a computing device applying the method obtains a test image for analysis. A field to which the test image relates is determined. Based on the field, a target convolutional layer is determined from a convolutional neural network. The target convolutional layer is used to extract features of the test image. A target score of the test image and a score threshold corresponding to the field are determined. If the target score is less than the score threshold, it is determined that the test image reveals defects, thereby improving an accuracy of defect detection. | 2022-06-16 |
20220189009 | MEDICAL IMAGE ANALYSIS METHOD AND DEVICE - A medical image analysis method includes: reading an original medical image; performing image classification and object detection on the original medical image to generate a first classification result and a plurality of object detection results by a plurality of complementary artificial intelligence (AI) models; performing object feature integration and transformation on a first detection result and a second detection result among the object detection results to generate a transformation result by a features integration and transformation module; and performing machine learning on the first classification result and the transformation result to generate an image interpretation result by a machine learning module and display the image interpretation result. | 2022-06-16 |
20220189010 | A SYSTEM AND A METHOD FOR ALERTING ON VISION IMPAIRMENT - The present invention discloses a technique for alerting on vision impairment. The system comprises a processing unit configured and operable for receiving scene data being indicative of a scene of at least one consumer in an environment, identifying in the scene data a certain consumer, identifying an event being indicative of a behavioral compensation for vision impairment, and, upon identification of such an event, sending a notification relating to the vision impairment. | 2022-06-16 |
20220189011 | END-TO-END TRAINING FOR A THREE-DIMENSIONAL TOMOGRAPHY RECONSTRUCTION PIPELINE - A three-dimensional (3D) density volume of an object is constructed from tomography images (e.g., x-ray images) of the object. The tomography images are projection images that capture all structures of an object (e.g., human body) between a beam source and imaging sensor. The beam effectively integrates along a path through the object producing a tomography image at the imaging sensor, where each pixel represents attenuation. A 3D reconstruction pipeline includes a first neural network model, a fixed function backprojection unit, and a second neural network model. Given information for the capture environment, the tomography images are processed by the reconstruction pipeline to produce a reconstructed 3D density volume of the object. In contrast with a set of 2D slices, the entire 3D density volume is reconstructed, so two-dimensional (2D) density images may be produced by slicing through any portion of the 3D density volume at any angle. | 2022-06-16 |
20220189012 | DEEP LEARNING ARCHITECTURE SYSTEM FOR AUTOMATIC FUNDUS IMAGE READING AND AUTOMATIC FUNDUS IMAGE READING METHOD USING DEEP LEARNING ARCHITECTURE SYSTEM - Disclosed are an algorithm for automatic fundus image reading, and a deep learning architecture for automatic fundus image reading, which are capable of minimizing the amount of data required for learning by training and reading artificial intelligence in a manner similar to that of an ophthalmologist who acquires medical knowledge. | 2022-06-16 |
20220189013 | PROVISION OF CORRECTED MEDICAL IMAGE DATA - A method includes receiving image data of an examination object. A first temporary data record is created by applying a first correction to the image data. A further temporary data record is created by applying a further correction to the image data. The further correction at least partially corresponds to the first correction. A trained function is applied to input data that is based on the first temporary data record and the further temporary data record. A parameter of the trained function is based on an image quality metric. It is determined whether the first temporary data record has a higher image quality compared with the further temporary data record. When a result is positive, the first temporary data record is provided as the corrected medical image data. When the result is negative, the further temporary data record is provided as the image data, and part of the method is repeated. | 2022-06-16 |
20220189014 | LEARNING CLASSIFIER FOR BRAIN IMAGING MODALITY RECOGNITION - Systems and methods for training a model for identifying an imaging modality. The systems and methods can be performed by a computer system having one or more processors and memory. A plurality of image vectors can be generated from first image data using a convolutional neural network. A loss function can be applied to each of the plurality of image vectors to produce an intermediate dataset. The intermediate dataset can be projected into a space having lower dimensionality than the intermediate dataset. A plurality of clusters can be identified from the intermediate dataset in the space using a clustering technique. Each of the plurality of clusters can be classified into one of a plurality of imaging modalities. | 2022-06-16 |
20220189015 | DEEP LEARNING BASED AUXILIARY DIAGNOSIS SYSTEM FOR EARLY GASTROINTESTINAL CANCER AND INSPECTION DEVICE - A deep learning-based examination and diagnosis assistance system and apparatus for early digestive tract cancer comprising a feature extraction network, an image classification model, an endoscope classifier, and an early cancer recognition model. The feature extraction network is used for performing initial feature extraction on endoscope images based on a neural network model; the image classification model is used for performing extraction on the initial features to acquire image classification features; the endoscope classifier is used for performing feature extraction on the initial features to acquire endoscope classification features and classify gastroscope/colonoscope images; the early cancer recognition model is used for splicing the initial features, the endoscope classification features, and the image classification features to acquire the probability of early cancer lesions in white light images, electronic dye images or chemical dye images of a corresponding site or acquire a flushing prompt or position recognition prompt for the corresponding site. | 2022-06-16 |
20220189016 | ASSESSING RISK OF BREAST CANCER RECURRENCE - The subject disclosure presents systems and computer-implemented methods for assessing a risk of cancer recurrence in a patient based on a holistic integration of large amounts of prognostic information for said patient into a single comparative prognostic dataset. A risk classification system may be trained using the large amounts of information from a cohort of training slides from several patients, along with survival data for said patients. For example, a machine-learning-based binary classifier in the risk classification system may be trained using a set of granular image features computed from a plurality of slides corresponding to several cancer patients whose survival information is known and input into the system. The trained classifier may be used to classify image features from one or more test patients into a low-risk or high-risk group. | 2022-06-16 |
20220189017 | MEDICAL IMAGE PROCESSING METHOD AND APPARATUS, IMAGE PROCESSING METHOD AND APPARATUS, TERMINAL AND STORAGE MEDIUM - A medical image processing method and apparatus, and an image processing method and apparatus, terminal and storage medium that obtains a to-be-processed medical image; generates a difference image according to the first image data, the second image data, and the third image data included in the to-be-processed medical image; and performs binarization processing on the difference image to obtain a binarized image, a foreground region of the binarized image corresponding to a pathological tissue region of the to-be-processed medical image. A difference image is generated based on color information of different channels before binarization processing is performed on an image, thereby effectively using the color information in the image. The pathological tissue region extracted based on the difference image is more accurate and facilitates subsequent image analysis. | 2022-06-16 |
20220189018 | HUMAN EMBRYO EVALUATION USING AI/ML ANALYSIS OF REAL-TIME VIDEO FOR PREDICTING MALE-SEX OFFSPRING - A computer-implemented method for predicting the likelihood an embryo will produce a human male offspring by processing video image data derived from video of a target embryo. The method includes receiving image data derived from video of a target embryo taken at substantially real-time frame speed during an embryo observation period of time. The video contains recorded morphokinetic movement of the target embryo occurring during the embryo observation period of time. The movement is represented in the received image data and the received image data is processed using a model generated utilizing machine learning and correlated embryo outcome data to predict the likelihood the target embryo will produce a human male offspring. | 2022-06-16 |
20220189019 | HUMAN EMBRYO EVALUATION USING AI/ML ANALYSIS OF REAL-TIME VIDEO FOR PREDICTING FEMALE-SEX OFFSPRING - A computer-implemented method for predicting the likelihood an embryo will produce a human female offspring by processing video image data derived from video of a target embryo. The method includes receiving image data derived from video of a target embryo taken at substantially real-time frame speed during an embryo observation period of time. The video contains recorded morphokinetic movement of the target embryo occurring during the embryo observation period of time. The movement is represented in the received image data and the received image data is processed using a model generated utilizing machine learning and correlated embryo outcome data to predict the likelihood the target embryo will produce a human female offspring. | 2022-06-16 |
20220189020 | BOVINE EMBRYO EVALUATION USING AI/ML ANALYSIS OF REAL-TIME VIDEO FOR PREDICTING MALE-SEX OFFSPRING - A computer-implemented method for predicting the likelihood an embryo will produce a bovine male offspring by processing video image data derived from video of a target embryo. The method includes receiving image data derived from video of a target embryo taken at substantially real-time frame speed during an embryo observation period of time. The video contains recorded morphokinetic movement of the target embryo occurring during the embryo observation period of time. The movement is represented in the received image data and the received image data is processed using a model generated utilizing machine learning and correlated embryo outcome data to predict the likelihood the target embryo will produce a bovine male offspring. | 2022-06-16 |
20220189021 | BOVINE EMBRYO EVALUATION USING AI/ML ANALYSIS OF REAL-TIME VIDEO FOR PREDICTING FEMALE-SEX OFFSPRING - A computer-implemented method for predicting the likelihood an embryo will produce a bovine female offspring by processing video image data derived from video of a target embryo. The method includes receiving image data derived from video of a target embryo taken at substantially real-time frame speed during an embryo observation period of time. The video contains recorded morphokinetic movement of the target embryo occurring during the embryo observation period of time. The movement is represented in the received image data and the received image data is processed using a model generated utilizing machine learning and correlated embryo outcome data to predict the likelihood the target embryo will produce a bovine female offspring. | 2022-06-16 |
20220189022 | EMBRYO VIABILITY EVALUATION USING AI/ML ANALYSIS OF REAL-TIME VIDEO - A computer-implemented method for predicting an embryo outcome by processing video image data of the embryo. The method includes receiving image data derived from video of a target embryo taken at substantially real-time frame speed during an embryo observation period of time. The video contains recorded morphokinetic movement of the target embryo occurring during the embryo observation period of time. The movement is represented in the received image data and the received image data is processed using a model generated utilizing machine learning and correlated embryo outcome data. | 2022-06-16 |
20220189023 | SYSTEMS, METHODS AND COMPUTER PROGRAM PRODUCTS FOR DETECTING INDIVIDUALS WITH FEVER IN VIEW OF ACUTE INFECTION OF THE INDIVIDUAL - A method for detecting a plurality of persons with fever including acquiring a first image by a camera, the first image including a first image data of at least a first person, acquiring a second image by a thermal imaging camera, wherein the second image includes a second image data of the first person and a temperature reference apparatus in an infrared spectrum, the second image, at least in sections, has an identical image area as the first image, superimposing the first image and the second image to form a third image, wherein the third image includes a third image data of the first image and the second image, determining a first measurement point within the first image data of the first image, and determining a first temperature for the first measurement point based on the third image and the second image data of the temperature reference apparatus. | 2022-06-16 |
20220189024 | DEVICE AND METHOD FOR EVALUATING DARK FIELD IMAGES - The present invention relates to the use of dark field X-ray images in an ablation treatment of a tumour. By acquiring dark field X-ray images displaying the region of interest targeted in the ablation treatment, information can be derived which allows taking a decision on terminating the ablation treatment. A set of dark field X-ray images is received. | 2022-06-16 |
20220189025 | CROP YIELD PREDICTION PROGRAM AND CULTIVATION ENVIRONMENT ASSESSMENT PROGRAM - A crop yield prediction program includes: a degree-of-association acquisition step of acquiring in advance a degree of association between a combination of reference image information which is a captured image of a growing crop and reference soil information about a soil in which the crop is planted and a yield of the growing crop as harvested for the combination, the degree of association being represented in three or more levels; an information acquisition step of, when making a new prediction of the yield of the crop, capturing an image of a new growing crop to acquire image information and to acquire soil information about a soil in which the crop is planted; and a prediction step of predicting a yield of the new growing crop with reference to the degree of association acquired at the degree-of-association acquisition step and based on the image information and the soil information. | 2022-06-16 |
20220189026 | METHOD FOR DETECTING A CELL EVENT - Method for detecting or predicting the occurrence of a cell event, selected from cell division and cell death, in a sample. | 2022-06-16 |
20220189027 | Panorama Rendering Method, Electronic Device and Storage Medium - A panorama rendering method, an electronic device and a storage medium are provided, relating to the field of panoramic technology. The method includes: segmenting a panoramic picture to obtain segmented panoramic tile pictures; determining a target tile picture under a current screen and a target spherical patch in a panoramic picture sphere model, according to the panoramic picture sphere model and the segmented panoramic tile pictures; and performing drawing on the target spherical patch by using the target tile picture, to obtain a target scene. In this way, time consumption of panorama loading can be reduced and loading performance can be improved. | 2022-06-16 |
20220189028 | AUTOMATIC DETECTION OF LESIONS IN MEDICAL IMAGES USING 2D AND 3D DEEP LEARNING NETWORKS - Systems and methods for automatic segmentation of lesions from a 3D input medical image are provided. A 3D input medical image depicting one or more lesions is received. The one or more lesions are segmented from one or more 2D slices extracted from the 3D input medical image using a trained 2D segmentation network. 2D features are extracted from results of the segmentation of the one or more lesions from the one or more 2D slices. The one or more lesions are segmented from a 3D patch extracted from the 3D input medical image using a trained 3D segmentation network. 3D features are extracted from results of the segmentation of the one or more lesions from the 3D patch. The extracted 2D features and the extracted 3D features are fused to generate final segmentation results. The final segmentation results are output. | 2022-06-16 |
20220189029 | SEMANTIC REFINEMENT OF IMAGE REGIONS - Examples are described of segmenting an image into image regions based on depicted categories of objects, and for refining the image regions semantically. For example, a system can determine that a first image region in an image depicts a first category of object. The system can generate a color distance map of the first image region that maps color distance values to each pixel in the first image region. A color distance value quantifies a difference between a color value of a pixel in the first image region and a color value of a sample pixel in a second image region in the image. The system can process the image based on a refined variant of the first image region that is refined based on the color distance map, for instance by removing pixels from the first image region whose color distances fall below a color distance threshold. | 2022-06-16 |
20220189030 | METHOD AND SYSTEM FOR DEFECT DETECTION IN IMAGE DATA OF A TARGET COATING - Described herein is a computer-implemented method, including: | 2022-06-16 |
20220189031 | METHOD AND APPARATUS WITH OPTIMIZATION AND PREDICTION FOR IMAGE SEGMENTATION - A processor-implemented method includes: determining a probability that a pixel of an input image belongs to each of a plurality of preset categories; and determining a category of the pixel to be a category corresponding to either one or both of a plurality of category areas and a category determined based on the probability that the pixel belongs to each of the preset categories, based on a result of comparing, to a preset threshold value, a probability that the pixel belongs to the category corresponding to the category areas. | 2022-06-16 |
20220189032 | CEREBRAL STROKE EARLY ASSESSMENT METHOD AND SYSTEM, AND BRAIN REGION SEGMENTATION METHOD - A cerebral stroke early assessment system for cerebral stroke early assessment, comprising a preprocessing module, configured to preprocess an acquired brain medical image set; a brain partitioning module, configured to perform brain region segmentation on the preprocessed brain medical image set, the brain partitioning module comprising an image segmentation neural network and the image segmentation neural network being trained with the aid of an auto-encoder; and a scoring module, configured to perform scoring on the basis of a brain partition image obtained by the brain partitioning module. The present disclosure can improve the segmentation accuracy of brain partition images and the accuracy of cerebral stroke early assessment. | 2022-06-16 |
20220189033 | BOUNDARY DETECTION DEVICE AND METHOD THEREOF - A boundary detection device is provided in the present invention. The boundary detection device includes a camera drone and an image processing unit. The camera drone shoots a region to obtain aerial image data. The image processing unit is configured to convert the aerial image data from an RGB color space to an XYZ color space, then convert the aerial image data from the XYZ color space to a Lab color space to obtain Lab color image data, and then compute brightness feature data and color feature data according to the Lab color image data. The image processing unit picks first to eighth circular masks, each of the circular masks having a boundary line dividing the mask region into left and right semicircles with different colors. | 2022-06-16 |
20220189034 | INTENSITY-BASED IMAGE MODIFICATION FOR COMPUTER VISION - A computer vision method and computer vision system can be used to process a time-based series of images. For a subject image of the time-based series, a light intensity value is identified for each pixel of a set of pixels of the subject image. A light intensity threshold is defined for the subject image based on a size of a bounding region for an object detected within a previous image of the time-based series captured before the subject image. A modified image is generated for the subject image by one or both of: reducing the light intensity value of each pixel of a lower intensity subset of pixels of the subject image that is less than the light intensity threshold, and increasing the light intensity value of each pixel of a higher intensity subset of pixels of the subject image that is greater than the light intensity threshold. | 2022-06-16 |
20220189035 | COMPUTER-READABLE STORAGE MEDIUM, IMAGE PROCESSING APPARATUS, AND METHOD FOR IMAGE PROCESSING - A non-transitory computer readable storage medium storing computer readable instructions executable by a computer is provided. The computer readable instructions cause the computer to obtain subject image data composing a subject image, set larger regions and smaller regions in each of the larger regions in the subject image, calculate a first feature amount of each of the smaller regions with use of values of pixels in each of the smaller regions and a second feature amount of each of the larger regions with use of values of pixels in each of the larger regions, determine whether each of the smaller regions is an edge region including an edge based on a comparison between the first feature amount and the second feature amount, and generate edge image data indicating edges in the subject image with use of results of the determination whether each of the smaller regions is an edge region. | 2022-06-16 |
20220189036 | CONTOUR-BASED PRIVACY MASKING APPARATUS, CONTOUR-BASED PRIVACY UNMASKING APPARATUS, AND METHOD FOR SHARING PRIVACY MASKING REGION DESCRIPTOR - Disclosed herein are a contour-based privacy region masking apparatus and method. The contour-based privacy region masking apparatus includes a memory for storing at least one program, and a processor for executing the program, wherein the program may perform detecting a contour of an object that is a privacy protection target in an original image, setting a masking region based on the detected contour, and de-identifying the masking region in the original image. | 2022-06-16 |
20220189037 | Method for Identifying Still Objects from Video - The present invention generally relates to a technology for detecting a stationary object in a static video, i.e., a video generated by a camera with a fixed view, such as a CCTV video. More specifically, the present invention relates to a technology for identifying a stationary object, by use of inference information (tensor, activation map) of a deep learning object detector, among objects which have been detected in a static video such as a CCTV video. According to the present invention, it is possible to lower the priority of stationary objects (e.g., parked vehicles) in processing a static video, so that computing resources of the video analysis system can be utilized effectively in the processing after object detection and the performance of image analysis is enhanced. | 2022-06-16 |
20220189038 | OBJECT TRACKING APPARATUS, CONTROL METHOD, AND PROGRAM - An object tracking apparatus. | 2022-06-16 |
20220189039 | SYSTEM AND METHOD FOR CAMERA-BASED DISTRIBUTED OBJECT DETECTION, CLASSIFICATION AND TRACKING - A camera-based system and method for detecting, classifying and tracking distributed objects moving along surface terrain and through multiple zones. The system acquires images from an image sensor mounted in each section or zone, classifies objects in the zone, detects pixel coordinates of the object, transforms the pixel coordinates into a position in real space, and generates a path of each object through the zone. The system further predicts a path of an object from a first cell for matching of criteria to objects in a second cell, whereby objects may be associated across cells based on predicted paths and without the need for storage and transmission of personally identifiable information. | 2022-06-16 |
20220189040 | METHOD OF DETERMINING AN ORIENTATION OF AN OBJECT AND A METHOD AND APPARATUS FOR TRACKING AN OBJECT - An object orientation determination method includes generating pixel data on each of a plurality of unit pixels included in a region of interest of a point cloud acquired from an object, generating a plurality of candidate boxes using the generated pixel data, and determining, as a heading angle of an oriented bounding box, an inclination of a candidate box having a smallest cost among costs calculated on the plurality of candidate boxes. A cost of each of the plurality of candidate boxes is calculated based on positions of respective sides of a corresponding one of the plurality of candidate boxes and the pixel data. | 2022-06-16 |
20220189041 | RETROSPECTIVE MOTION CORRECTION USING A COMBINED NEURAL NETWORK AND MODEL-BASED IMAGE RECONSTRUCTION OF MAGNETIC RESONANCE DATA - A combined physics-based and machine learning framework is used for reconstructing images from k-space data, in which motion artifacts are significantly reduced in the reconstructed images. In general, model-based retrospective motion correction techniques are accelerated using fast machine learning (“ML”) steps, which may be implemented using a trained neural network such as a convolutional neural network. In this way, the confidence of a classical physics-based reconstruction is obtained with the computational benefits of an ML-based network. | 2022-06-16 |
20220189042 | EVALUATION METHOD, STORAGE MEDIUM, AND INFORMATION PROCESSING APPARATUS - An evaluation method for a computer to execute a process includes, acquiring a plurality of pieces of skeleton information in time series based on position information of joints of an object that executes a plurality of motions; specifying a transition period between a first motion and a second motion that follows the first motion, which are included in the plurality of motions based on the plurality of pieces of skeleton information; determining whether the transition period is related to a certain combination of motions by inputting skeleton information among the plurality of pieces of skeleton information that corresponds to the transition period into an evaluation model trained to evaluate a transition period between motions based on a plurality of pieces of skeleton information in time series; and outputting an evaluation result of the transition period by the evaluation model. | 2022-06-16 |
20220189043 | CORRECTING LINE BIAS IN AN IMAGE - Examples are described that relate to correcting line bias in images. One example provides a method comprising receiving, from an imaging device, a plurality of images each comprising a plurality of lines of pixels. The method further comprises, for each image of the plurality of images, for each line of pixels of the plurality of lines of pixels, based at least on one or more pixel values of one or more pixels in the line of pixels, determining a line bias correction for the line, and applying the line bias correction to each pixel in the line, the line bias correction comprising an offset applied to each pixel value in the line of pixels. | 2022-06-16 |
20220189044 | METHOD AND SYSTEM FOR VISUAL BASED INSPECTION OF ROTATING OBJECTS - This disclosure relates to method and system for visual inspection of rotating components. The method includes representing rotation cycles of a rotating component as spatial features based on video or image frames, ascertaining and/or evolving Hidden Markov Model (HMM) chains for the cycles, ascertaining a count of the rotating component in the frames and/or labelling the frames with ascertained states of the HMM chains. | 2022-06-16 |
20220189045 | DEFORMITY-WEIGHTED REGISTRATION OF MEDICAL IMAGES - Disclosed is a computer-implemented method of determining a spatial relationship between planning image data and current surface data, which leads to improved surface registration accuracy by considering the elasticity and deformability of the tissue. The knowledge about the tissue can be estimated based on the type of tissue and atlas information. For the process of generating surface registration, points on specific anatomical regions, e.g. the face or forehead, are acquired with a classical navigated pointer or laser pointer. It is also possible to acquire points with surface scanners. Confidence values defining a probability for certain parts of the surface registration points being deformed in comparison to a planning image are read from atlas data and used to compensate for the deformation in the registration between the surface registration points and the planning image in order to render the registration valid. | 2022-06-16 |
20220189046 | COMPARISON APPARATUS, CONTROL METHOD, AND PROGRAM - A comparison apparatus. | 2022-06-16 |
20220189047 | REGISTRATION OF TIME-SEPARATED X-RAY IMAGES - A method according to one embodiment of the present disclosure comprises receiving a first image of a patient's anatomy, the first image generated at a first time and depicting a plurality of rigid elements; receiving a second image of the patient's anatomy, the second image generated at a second time after the first time and depicting the plurality of rigid elements; determining a transformation from the first image to the second image for each one of the plurality of rigid elements to yield a set of transformations; calculating a homography for each transformation in the set of transformations to yield a set of homographies; and identifying, using the set of homographies, a common portion of each transformation attributable to a change in camera pose, and an individual portion of each transformation attributable to a change in rigid element pose. | 2022-06-16 |
20220189048 | GENERATING COMPOSITE IMAGE FROM MULTIPLE IMAGES CAPTURED FOR SUBJECT - A method of generating a composite image from multiple images captured for a subject is disclosed. In some embodiments, the method may include receiving, via an image capturing device, a plurality of sets of images of at least a portion of a subject. The images within a set of images may be captured at a plurality of vertical positions with respect to an associated fixed section of a horizontal plane. The method may further include generating a plurality of focus-stacked images corresponding to the plurality of sets of images, for example, by combining the images in the associated set of images. The method may further include aligning the plurality of focus-stacked images in the horizontal plane based on a horizontal coordinate transformation model to generate a composite image representing the subject. | 2022-06-16 |
20220189049 | Self-Supervised Multi-Frame Monocular Depth Estimation Model - A multi-frame depth estimation model is disclosed. The model is trained and configured to receive an input image and an additional image. The model outputs a depth map for the input image based on the input image and the additional image. The model may extract a feature map for the input image and an additional feature map for the additional image. For each of a plurality of depth planes, the model warps the feature map to the depth plane based on relative pose between the input image and the additional image, the depth plane, and camera intrinsics. The model builds a cost volume from the warped feature maps for the plurality of depth planes. A decoder of the model inputs the cost volume and the input image to output the depth map. | 2022-06-16 |
20220189050 | SYNTHESIZING 3D HAND POSE BASED ON MULTI-MODAL GUIDED GENERATIVE NETWORKS - Systems and methods for obtaining hand images are provided. A method, performed by at least one processor that implements at least one network, includes obtaining a single source image, a three-dimensional (3D) hand pose of a first hand in the single source image, and a 3D target hand pose; and generating an image of a second hand, that has an appearance of the first hand and a pose of the 3D target hand pose, based on the single source image, the 3D hand pose, and the 3D target hand pose. | 2022-06-16 |
20220189051 | Learnable Cost Volume for Determining Pixel Correspondence - A method includes obtaining a first plurality of feature vectors associated with a first image and a second plurality of feature vectors associated with a second image. The method also includes generating a plurality of transformed feature vectors by transforming each respective feature vector of the first plurality of feature vectors by a kernel matrix trained to define an elliptical inner product space. The method additionally includes generating a cost volume by determining, for each respective transformed feature vector of the plurality of transformed feature vectors, a plurality of inner products, wherein each respective inner product of the plurality of inner products is between the respective transformed feature vector and a corresponding candidate feature vector of a corresponding subset of the second plurality of feature vectors. The method further includes determining, based on the cost volume, a pixel correspondence between the first image and the second image. | 2022-06-16 |
20220189052 | VEHICULAR TRAILERING ASSIST SYSTEM WITH TRAILER BEAM LENGTH ESTIMATION - A vehicular trailer beam estimation system includes a camera disposed at a rear portion of a vehicle equipped with a trailer hitch. With a trailer hitched at the trailer hitch of the vehicle, the camera views at least a portion of the trailer hitched to the vehicle. The system, responsive at least in part to processing by an image processor of image data captured by the camera, determines a trailer angle of the trailer relative to the vehicle. During a driving maneuver, the system tracks trailering parameters, which include (i) the trailer angle, (ii) steering angle of the vehicle and (iii) speed of the vehicle. The system determines beam length of the trailer based at least in part on (i) the trailering parameters, (ii) the vehicle's wheelbase and (iii) hitch length of the trailer hitch of the vehicle. | 2022-06-16 |
20220189053 | METHOD OF EXTRACTING NUMBER OF STEM AND TILLERING FOR WHEAT UNDER FIELD CONDITION - A field wheat stem tillering number extraction method, including: acquiring field wheat point clouds by means of a LiDAR, and extracting any row of wheat point clouds in a research area; projecting along the Y axis onto a plane, retaining the X and Z axes; applying adaptive layering to obtain the number of clusters in the wheat row; applying hierarchical clustering analysis to obtain the tillering number of each wheat cluster; and further obtaining the stem tillering number of the whole wheat row, so as to extract the field wheat stem tillering number. The feasibility of the algorithm is verified by comparing the wheat stem tillering number extracted by means of the method with an actually measured field stem tillering number. The method realizes rapid, accurate and nondestructive extraction of a large-field crop stem tillering number and provides a theoretical basis and technical support for extraction of the field wheat stem tillering number. | 2022-06-16 |
20220189054 | LOCALIZATION BASED ON SEMANTIC OBJECTS - Techniques for determining a location of a vehicle in an environment using sensors and determining calibration information associated with the sensors are discussed herein. A vehicle can use map data to traverse an environment. The map data can include semantic map objects such as traffic lights, lane markings, etc. The vehicle can use a sensor, such as an image sensor, to capture sensor data. Semantic map objects can be projected into the sensor data and matched with object(s) in the sensor data. Such semantic objects can be represented as a center point and covariance data. A distance or likelihood associated with the projected semantic map object and the sensed object can be optimized to determine a location of the vehicle. Sensed objects can be determined to be the same based on matching with the semantic map object. Epipolar geometry can be used to determine if sensors are capturing consistent data. | 2022-06-16 |
20220189055 | ITEM DETECTION DEVICE, ITEM DETECTION METHOD, AND INDUSTRIAL VEHICLE - There is provided an item detection device that detects an item to be loaded and unloaded and includes an image acquisition unit acquiring a surrounding image obtained by capturing surroundings of the item detection device, an information image creation unit creating an information image, in which information related to a part to be loaded and unloaded in the item has been converted into an easily recognizable state, on the basis of the surrounding image, and a computing unit computing at least one of a position and a posture of the part to be loaded and unloaded on the basis of the information image. | 2022-06-16 |
20220189056 | TECHNOLOGY TO AUTOMATICALLY IDENTIFY THE FRONTAL BODY ORIENTATION OF INDIVIDUALS IN REAL-TIME MULTI-CAMERA VIDEO FEEDS - Methods, systems and apparatuses may provide for technology that detects an individual in a real-time multi-camera video feed and generates three-dimensional (3D) skeletal data based on the real-time multi-camera video feed. The technology may also automatically identify a frontal body orientation of an individual based on the 3D skeletal data and one or more anthropometric constraints. | 2022-06-16 |
20220189057 | DIFFERENCE-GUIDED VIDEO ANALYSIS - A method can include obtaining, from a video having a first resolution, a set of frames having a second resolution. The first resolution can be higher than the second resolution. The set of frames can include a first frame and a second frame. The method can include generating a difference feature map. The method can include obtaining a third frame having the first resolution. The method can include detecting, based on the difference feature map, a first location of a first object in the third frame. The method can include cropping, from the third frame, a first cropped area. The first cropped area can be smaller than a third frame area. The method can include generating, based on a feature map and the difference feature map, a spatial attention layer. The method can include detecting, by the spatial attention layer, the first object in the first cropped area. | 2022-06-16 |
20220189058 | CONTEXT-AWARE REAL-TIME SPATIAL INTELLIGENCE PROVISION SYSTEM AND METHOD USING CONVERTED THREE-DIMENSIONAL OBJECTS COORDINATES FROM A SINGLE VIDEO SOURCE OF A SURVEILLANCE CAMERA - Disclosed is a context-aware real-time spatial intelligence provision system that estimates the locations of persons or things captured in a video by extracting objects representative of the persons and the things from the video captured by viewing a real space and placing the extracted objects in a virtual space to which a digital twin technique is applied. The disclosed context-aware real-time spatial intelligence provision system allows people to keep a safe distance from each other indoors, thereby preventing the spread of an infectious disease such as COVID-19. | 2022-06-16 |
20220189059 | Image-Based Pose Determination Method and Apparatus, Storage Medium, and Electronic Device - Embodiments of the present disclosure disclose an image-based pose determination method and apparatus, a computer readable storage medium, and an electronic device. The method includes: acquiring a current image frame captured by a camera disposed on a moving object and a historical image frame captured before the current image frame; determining a first relative camera pose between the current image frame and the historical image frame; determining a virtual binocular image based on the first relative camera pose; and determining current pose information of the camera based on preset visual odometry and the virtual binocular image. According to the embodiments of the present disclosure, the virtual binocular image may be generated based on a monocular image, thus achieving effects of binocular visual odometry without using a binocular camera, thereby reducing costs. Moreover, monocular odometry may be enabled to obtain the real physical scale of a space where the moving object is located, thereby improving accuracy of the monocular odometry in determining a position and a pose of the camera. | 2022-06-16 |
20220189060 | Visual Camera Re-Localization using Graph Neural Networks and Relative Pose Supervision - The present disclosure describes approaches to camera re-localization using a graph neural network (GNN). A re-localization model includes encoding an input image into a feature map. The model retrieves reference images from an image database of a previously scanned environment based on the feature map of the image. The model builds a graph based on the image and the reference images, wherein nodes represent the image and the reference images, and edges are defined between the nodes. The model may iteratively refine the graph through auto-regressive edge-updating and message passing between nodes. With the graph built, the model predicts a pose of the image based on the edges of the graph. The pose may be a relative pose in relation to the reference images, or an absolute pose. | 2022-06-16 |
20220189061 | METHODS AND DEVICES FOR GUIDING A PATIENT - Methods and systems for guiding a patient for a medical examination using a medical apparatus. For example, a computer-implemented method for guiding a patient for a medical examination using a medical apparatus includes: receiving an examination protocol for the medical apparatus; determining a reference position based at least in part on the examination protocol; acquiring a patient position; determining a deviation metric based at least in part on comparing the patient position and the reference position; determining whether the deviation metric is greater than a pre-determined deviation threshold; and if the deviation metric is greater than a pre-determined deviation threshold: generating a positioning guidance based at least in part on the determined deviation metric, the positioning guidance including guidance for positioning the patient relative to the medical apparatus. | 2022-06-16 |
20220189062 | MULTI-VIEW CAMERA-BASED ITERATIVE CALIBRATION METHOD FOR GENERATION OF 3D VOLUME MODEL - Proposed is a multi-view camera-based iterative calibration method for generation of a 3D volumetric model that performs calibration between cameras adjacent in a vertical direction for a plurality of frames, performs calibration while rotating through the results of viewpoints adjacent in the horizontal direction, and creates a virtual viewpoint between each camera pair to repeat calibration. Thus, images of various viewpoints are obtained using a plurality of low-cost commercial color-depth (RGB-D) cameras. By acquiring these images at various viewpoints and performing calibration on them, the accuracy of calibration can be increased, and through this a high-quality, real-life graphics volumetric model can be generated. | 2022-06-16 |
20220189063 | CALIBRATING CROP ROW COMPUTER VISION SYSTEM - System and techniques for calibrating a crop row computer vision system are described herein. An image set that includes crop rows and furrows is obtained. Models of the field are searched to find a model that best fits the field. A calibration parameter is extracted from the model and communicated to a receiver. | 2022-06-16 |
20220189064 | CAMERA ANGLE DETECTION METHOD AND RELATED SURVEILLANCE APPARATUS - A camera angle detection method is applied to a surveillance apparatus and includes detecting a plurality of straight lines within a surveillance image acquired by the surveillance apparatus, selecting at least one first candidate parallel line and at least one second candidate parallel line from the plurality of straight lines according to a directional trend, stretching the first candidate parallel line and the second candidate parallel line to acquire an intersection point, and computing parameter difference between the intersection point and a reference point of the surveillance image so as to transform the parameter difference to acquire a tilted angle of an optical axis of a lens of the surveillance apparatus relative to a supporting plane where the surveillance apparatus is located. | 2022-06-16 |
20220189065 | CALIBRATION DEVICE AND CALIBRATION METHOD - High precision calibration of external parameters is possible even when the surrounding road surface is configured from a plurality of flat surfaces. Specifically, provided is a calibration device comprising a camera to be mounted on a vehicle traveling on a road surface; a characteristic point extraction unit which extracts characteristic points from a captured image obtained by being captured with the camera; a tracking unit which tracks the characteristic points extracted with the characteristic point extraction unit; a storage unit which stores a trajectory of the characteristic points obtained by tracking the characteristic points; a road surface estimation unit which estimates a flat surface on which the vehicle is traveling by using the trajectory of the characteristic points; a calculation unit which calculates a calibration trajectory, which is a trajectory of the characteristic points, for use in calibration based on the estimated flat surface; and an external parameter estimation unit which executes calibration of an external parameter of the camera by using the calibration trajectory. | 2022-06-16 |
20220189066 | CAMERA CALIBRATION APPARATUS, CAMERA CALIBRATION METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PROGRAM - In a camera calibration apparatus ( | 2022-06-16 |
20220189067 | SYSTEM AND METHOD OF OPTICAL SCANNING OF A VEHICLE FOR MEASURING AND/OR CONTROLLING THE VEHICLE AND/OR PARTS THEREOF - An optical scanning system for measuring and/or controlling a vehicle and/or parts of a vehicle, wherein the vehicle is arranged on a support surface. The optical scanning system comprises two optical reader apparatuses, which are arranged on said support surface, on opposite sides of the vehicle, and are provided with respective optical image capturing devices configured to provide respective data/signals encoding one or more images of the vehicle; and an electronic system, which is designed to process the data/signals in order to construct one or more three-dimensional images of the vehicle. The optical image reader apparatuses each comprise a calibration target, which lies on an approximately horizontal support surface and is arranged immediately adjacent to the optical image capturing device of the relative optical image reader apparatus, at a predetermined distance from it. | 2022-06-16 |
20220189068 | Image Compression And Decompression - Embodiments include methods for image compression and decompression. A sending computing device may determine a type of packing used for a chunk of image data, generate metadata describing the type of packing used for the chunk of image data, pack the chunk of image data according to the determined type of packing, and send the packed chunk of image data and the metadata to a second computing device. A receiving computing device may decode the metadata describing the type of packing used for the chunk of image data, determine the type of packing used for the chunk of image data based on the decoded metadata, and unpack the chunk of image data according to the determined type of packing used for the chunk of image data. | 2022-06-16 |
20220189069 | CONTENT-ADAPTIVE TILING SOLUTION VIA IMAGE SIMILARITY FOR EFFICIENT IMAGE COMPRESSION - Techniques are provided herein for more efficiently storing images that have a common subject, such as product images that share the same product in the image. Each image undergoes an adaptive tiling procedure to split the image into a plurality of tiles, with each tile identifying a region of the image having pixels with the same content. The tiles across multiple images can then be clustered together and those tiles having identical content are removed. Once all duplicate tiles have been removed from the set of all tiles across the images, the tiles are once again clustered based on their encoding scheme and certain encoding parameters. Tiles within each cluster are compressed using the best compression technique for the tiles in each corresponding cluster. By removing duplicative tile content between numerous images of the same subject, the total amount of data that needs to be stored is reduced. | 2022-06-16 |
20220189070 | MACHINE-LEARNING FOR 3D OBJECT DETECTION - A computer-implemented method of machine-learning for learning a neural network that encodes a super-point of a 3D point cloud into a latent vector. The method includes obtaining a dataset of super-points. Each super-point is a set of points of a 3D point cloud. The set of points represents at least a part of an object. The method further includes learning the neural network based on the dataset of super-points. The learning includes minimizing a loss. The loss penalizes a disparity between two super-points. This constitutes improved machine-learning for 3D object detection. | 2022-06-16 |
20220189071 | POINT CLOUD PLAYBACK MECHANISM - An apparatus to facilitate real-time playback of point cloud sequence data is disclosed. The apparatus comprises one or more processors to receive point cloud data of a captured scene, decompose the point cloud data into a plurality of point cloud patches, wherein each point cloud patch is associated with an object in the scene and includes contextual information regarding the point cloud patch, encode each of the point cloud patches via a deep-learning based algorithm to generate encoded point cloud patches, receive a viewpoint selection from a client, assign a priority to data chunks within each encoded point cloud patch based on the viewpoint selection and the contextual information and transmit the data chunks to the client based on the assigned priority. | 2022-06-16 |
20220189072 | CONTAMINATION DETECTION AND NOTIFICATION SYSTEMS - A notification system includes a memory, an output device and activity, localization, and tracking modules. The memory stores an activity history log associated with a supporting structure. The activity module: receives signals from sensors or electrical devices; and tracks activities at least one of in or within a set distance of the supporting structure to generate the activity history log. The localization module relates the activities to aspects of the supporting structure and generates corresponding localization data. The tracking module tracks states of the aspects of the supporting structure contacted at least one of directly or indirectly by one or more animate objects and determines at least one of contamination levels or sanitization levels of the aspects based on the localization data and the activity history log. The output device indicates the at least one of the contamination levels or the sanitization levels. | 2022-06-16 |
20220189073 | INFORMATION PROCESSING APPARATUS AND NON-TRANSITORY COMPUTER READABLE MEDIUM - An information processing apparatus includes: a processor configured to: if a part of a designated wiring or a designated component is arranged in an invisible wiring layer of a printed wiring board other than a currently visible wiring layer of the printed wiring board when superimposing and displaying, on a captured image of the printed wiring board, information related to the component or wiring which has been designated from among plural components and plural wirings arranged on the printed wiring board, display information related to the part of the designated wiring or the designated component on the currently visible wiring layer. | 2022-06-16 |
20220189074 | SYSTEMS, METHODS, COMPUTING PLATFORMS, AND STORAGE MEDIA FOR AUTOMATICALLY DISPLAYING A VISUALIZATION OF A DESIRED VOLUME OF MATERIAL - Systems, methods, computing platforms, and storage media for automatically displaying a visualization of a desired volume of material are disclosed. Exemplary implementations may: receive a first measurement of a subject for which a desired material is to be applied; receive a name of the desired material; receive a desired concentration of the desired material; receive a use case scenario for an application of the desired material to the subject; calculate, based on the first measurement of the subject, the name of the desired material, the desired concentration, and the use case scenario, a correct volume of the desired material for the application; retrieve, from a database, at least one image associated with the correct volume of the desired material for the application; and display, on an interface, the at least one image associated with the correct volume of the desired material. | 2022-06-16 |
20220189075 | Augmented Reality Display Of Commercial And Residential Features During In-Person Real Estate Showings/Open Houses and Vacation Rental Stays - A system and method of preparing an augmented reality (AR) composite view, configured to create, edit, store, and display information, highlight property features, provide visualizations, and display location information during in-person real estate showings/open houses and during vacation rental property stays. The system includes an AR Editor such that a user may edit and input AR element information into an electronically stored database and an AR Portal for a user to upload and assign linked meta information corresponding to the AR elements in the stored database. The system also includes an AR Viewer, where the relevant information from the electronically stored database is determined based on each user's location and view direction. The AR Viewer displays a composite image incorporating AR elements over the real-time field of view of the user, indicating the availability of useful information, including property features. Selecting an AR element (icon, 3D shape, or 3D avatar) in turn displays the text information, linked documents, pictures, videos, 3D avatar videos, or audio files as linked to that object in the database. | 2022-06-16 |
20220189076 | METHOD FOR GENERATING A MULTIMEDIA ELEMENT WHEN READING A MEDIUM, TERMINAL, AND SYSTEM - A method for generating a first multimedia element includes reading a first medium; acquiring at least one image of at least one face of a user; detecting a plurality of characteristic points of the face of the user; generating at least one physiological parameter from at least one processing of at least one characteristic point detected; generating a first multimedia element superimposed on the first medium being played on the display, the first multimedia element being determined according to at least the value of the physiological parameter; and emitting, simultaneously with the generating, a piece of digital data deduced from the value of the physiological parameter, the piece of digital data further including a time marker of the first medium. | 2022-06-16 |
20220189077 | VEHICLE DISPLAY CONTROL DEVICE AND VEHICLE DISPLAY CONTROL METHOD - A vehicle display control device includes: a locus determination unit determining a predicted locus of a wheel of a subject vehicle according to a steering angle of the subject vehicle; and a display processing unit displaying an image of a field of view of the subject vehicle including a vehicle body of the subject vehicle on a display device, and the display processing unit superimposes a guide line indicating the predicted locus determined by the locus determination unit on the image of the field of view regardless of the steering angle of the subject vehicle, while semi-transparently displaying a portion of the guide line which overlaps with the vehicle body, thereby allowing visibility of the vehicle body. | 2022-06-16 |
20220189078 | IMAGE PROCESSING APPARATUS, METHOD FOR CONTROLLING IMAGE PROCESSING APPARATUS, AND STORAGE MEDIUM - An image processing apparatus obtains one or a plurality of images based on capturing by one or a plurality of image capturing apparatuses, obtains information related to a virtual object, and generates a two-dimensional image including the virtual object, based on the one or a plurality of obtained images and the obtained information related to the virtual object. The image processing apparatus generates the two-dimensional image by determining color information of the virtual object based on color information of a real object included in the one or a plurality of images. | 2022-06-16 |
20220189079 | METHOD FOR CARRYING OUT A SHADING CORRECTION AND OPTICAL OBSERVATION DEVICE SYSTEM - A method for correcting a shading in a digital image of a three-dimensional observation object obtained by at least one image sensor of an optical observation device is provided. The three-dimensional observation object is illuminated by illumination light having an intensity distribution, and an inhomogeneity in image brightness is present in the digital image of the three-dimensional observation object. The method includes ascertaining a topography of the three-dimensional observation object, and correcting the inhomogeneity in the image brightness of the digital image based on the topography of the three-dimensional observation object and the intensity distribution of the illumination light. In addition, an optical observation system is provided to perform the method. | 2022-06-16 |
20220189080 | Determination of Dynamic DRRs - A computer implemented method for determining a two dimensional DRR referred to as dynamic DRR based on a 4D-CT, the 4D-CT describing a sequence of three dimensional medical computer tomographic images of an anatomical body part of a patient, the images being referred to as sequence CTs, the 4D-CT representing the anatomical body part at different points in time, the anatomical body part comprising at least one primary anatomical element and secondary anatomical elements, the computer implemented method comprising the following steps: acquiring the 4D-CT; acquiring a planning CT, the planning CT being a three dimensional image used for planning of a treatment of the patient, the planning CT being acquired based on at least one of the sequence CTs or independently from the 4D-CT; acquiring a three dimensional image, referred to as undynamic CT, from the 4D-CT, the undynamic CT comprising at least one first image element representing the at least one primary anatomical element and second image elements representing the secondary anatomical elements; acquiring at least one trajectory, referred to as primary trajectory, based on the 4D-CT, the at least one primary trajectory describing a path of the at least one first image element as a function of time; acquiring trajectories of the second image elements, referred to as secondary trajectories, based on the 4D-CT; for the image elements of the undynamic CT, determining trajectory similarity values based on the at least one primary trajectory and the secondary trajectories, the trajectory similarity values respectively describing a measure of similarity between a respective one of the secondary trajectories and the at least one primary trajectory; and determining the dynamic DRR by using the determined trajectory similarity values, and, in case the planning CT is acquired independently from the 4D-CT, further using a transformation referred to as planning transformation from the undynamic CT to the planning CT, at least a part of image values of image elements of the dynamic DRR being determined by using the trajectory similarity values. | 2022-06-16 |
20220189081 | B0 FIELD INHOMOGENEITY ESTIMATION USING INTERNAL PHASE MAPS FROM LONG SINGLE ECHO TIME MRI ACQUISITION - A magnetic resonance (MR) image may be created from MR data by receiving the MR data, applying a transform to the MR data, where a result of the applying is an image space representation of the MR data, determining a wrapped phase map of the image space representation of the MR data, obtaining an unwrapped phase map based on the wrapped phase map, scaling the unwrapped phase map into a B0 field map, reconstructing the MR image based on the MR data, correcting the MR image based on the B0 field map, and outputting the MR image. The scaling may be free of accounting for effects on the MR data by artifact sources secondary to B0 field inhomogeneities. | 2022-06-16 |
20220189082 | SYSTEM AND METHOD FOR IDENTIFYING, MARKING AND NAVIGATING TO A TARGET USING REAL TIME TWO DIMENSIONAL FLUOROSCOPIC DATA - A system for facilitating identification and marking of a target in a fluoroscopic image of a body region of a patient, the system comprising: one or more storage devices having stored thereon instructions for receiving a CT scan and a fluoroscopic 3D reconstruction of the body region of the patient, wherein the CT scan includes a marking of the target, and generating at least one virtual fluoroscopy image based on the CT scan of the patient, wherein the virtual fluoroscopy image includes the target and the marking of the target; at least one hardware processor configured to execute these instructions; and a display configured to display to a user the virtual fluoroscopy image and the fluoroscopic 3D reconstruction. | 2022-06-16 |
20220189083 | TRAINING METHOD FOR CHARACTER GENERATION MODEL, CHARACTER GENERATION METHOD, APPARATUS, AND MEDIUM - Provided is a training method for a character generation model, and a character generation method, apparatus and device, which relates to the technical field of artificial intelligence, particularly the technical fields of computer vision and deep learning. The specific implementation schemes are: a source domain sample word and a target domain style word are input into the character generation model to obtain a target domain generation word; the target domain generation word and a target domain sample word are input into a pre-trained character classification model to calculate a feature loss of the character generation model; and a parameter of the character generation model is adjusted according to the feature loss. | 2022-06-16 |
20220189084 | METHODS AND SYSTEMS FOR VISUALIZING SOUND AND HEARING ABILITY - A computer-implemented method of visualizing hearing ability, comprising acquiring, by one or more processors, audio data, generating, by the one or more processors, a hearing ability visualization with the audio data for display, wherein the hearing ability visualization includes a graphical element, a horizontal axis representing volume, and a vertical axis representing frequency, the graphical element being positioned relative to the horizontal axis such that volume is louder closer to the graphical element and quieter further from the graphical element. | 2022-06-16 |
20220189085 | APPARATUS AND METHODS FOR GENERATING DATA STRUCTURES TO REPRESENT AND COMPRESS DATA PROFILES - Embodiments described herein relate generally to apparatuses and methods for structuring and processing data. In some embodiments, a method includes receiving stimulus-response data, via a processor, the stimulus-response data including a digital representation of a stimulus and a digital representation of a response. The processor calculates a weight associated with the stimulus-response data, based on a rule, and identifies: (1) a distribution type, based on the digital representation of the stimulus; and (2) a range of inclination values of the distribution type, based on the digital representation of the response. The processor compiles a compressed multidimensional data profile associated with an object of the stimulus-response data and based on the weight, the digital representation of the distribution type, and the digital representation of the range of inclination values. | 2022-06-16 |
20220189086 | INTEGRATED MEDICAMENT DELIVERY DEVICE FOR USE WITH CONTINUOUS ANALYTE SENSOR - An integrated system for monitoring and treating diabetes is provided, including an integrated receiver/hand-held medicament injection pen, including electronics, for use with a continuous glucose sensor. In some embodiments, the receiver is configured to receive continuous glucose sensor data, to calculate a medicament therapy (e.g., via the integrated system electronics) and to automatically set a bolus dose of the integrated hand-held medicament injection pen, whereby the user can manually inject the bolus dose of medicament into the host. In some embodiments, the integrated receiver and hand-held medicament injection pen are integrally formed, while in other embodiments they are detachably connected and communicate via mutually engaging electrical contacts and/or via wireless communication. | 2022-06-16 |
20220189087 | VIRTUAL CLOTHING TRY-ON - A messaging system performs virtual clothing try-on. A method of virtual clothing try-on may include accessing a target garment image and a person image of a person wearing a source garment and processing the person image to generate a source garment mask and a person mask. The method may further include processing the source garment mask, the person mask, the target garment image, and a target garment mask to generate a warping, the warping indicating a warping to apply to the target garment image. The method may further include processing the target garment to warp the target garment in accordance with the warping to generate a warped target garment image, processing the warped target garment image to blend with the person image to generate a person with a blended target garment image, and processing the person with blended target garment image to fill in holes to generate an output image. | 2022-06-16 |
20220189088 | METHOD AND SYSTEM FOR REMOVING SCENE TEXT FROM IMAGES - Methods of removing text from digital video or still images are disclosed. An image processing system receives an input image set defining a region of interest (ROI) that contains text. The system determines an input background color for the ROI. The system applies a text infilling function to remove text from the ROI to yield a preliminary output image set. The system may determine a residual corrective signal that corresponds to a measurement of background color error between the input set and the preliminary output set. The system may apply the residual corrective signal to the ROI in the preliminary output set to yield a final output set that does not contain the text. Alternatively, the system may remove background from the ROI of the input set before text infilling, then return background to the ROI after the text infilling. | 2022-06-16 |
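The residual-correction branch of the text-removal method above can be sketched with NumPy. The encoding is assumed: ROIs as HxWx3 float images in [0, 1], a boolean text mask (the patent does not say how the mask is obtained), and the "residual corrective signal" taken as the per-channel mean background colour error between input and preliminary output.

```python
import numpy as np

def correct_background_shift(input_roi, preliminary_roi, text_mask):
    """Estimate the background colour error that a text-infilling step
    introduced in the ROI and add it back as a residual corrective signal.
    input_roi, preliminary_roi: HxWx3 floats in [0, 1]; text_mask: HxW bool,
    True on text pixels (an assumed representation)."""
    background = ~text_mask
    # Mean background colour before and after infilling.
    in_bg = input_roi[background].mean(axis=0)
    out_bg = preliminary_roi[background].mean(axis=0)
    residual = in_bg - out_bg                      # per-channel colour error
    # Apply the residual over the ROI and clip back to the valid range.
    return np.clip(preliminary_roi + residual, 0.0, 1.0)

# Toy example: a grey ROI whose infilled version came back slightly darker.
inp = np.full((4, 4, 3), 0.5)
prelim = np.full((4, 4, 3), 0.45)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                              # pretend these were text pixels
corrected = correct_background_shift(inp, prelim, mask)
print(np.allclose(corrected, 0.5))  # background colour error removed
```

This illustrates only the first alternative in the abstract (measure error, then correct); the second alternative (remove background before infilling, restore it after) is a different code path.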
20220189089 | INFORMATION PROCESSING APPARATUS, DISPLAY CONTROL METHOD, STORAGE MEDIUM, SUBSTRATE PROCESSING APPARATUS, AND MANUFACTURING METHOD OF ARTICLE - An information processing apparatus configured to control display on a user interface includes an acquisition unit configured to acquire recipe data regarding a processing condition of a substrate processing apparatus configured to process a substrate and layout data regarding a layout of a processed portion of the substrate processed by the substrate processing apparatus, and a display control unit configured to perform control to display, on the user interface, information regarding the layout data and information regarding the recipe data related to the layout data, the layout data and the recipe data being acquired by the acquisition unit, in association with each other. | 2022-06-16 |
20220189090 | Systems and methods for applying effects to design elements - Described herein is a computer implemented method. The method includes detecting user input activating a text effect selection control. In response to the user input the method further includes: automatically generating a first shadow for a selected design element, the first shadow having a first colour, a first direction, a first offset, and a first opacity; automatically generating a second shadow for the selected design element, the second shadow having the first colour, the first direction, a second offset greater than the first offset, and a second opacity less than the first opacity; displaying the first shadow behind the selected design element; and displaying the second shadow behind the first shadow. | 2022-06-16 |
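The two-shadow relationship claimed above is concrete enough to sketch directly. The doubling and halving factors are assumptions; the claims only require the second shadow to share colour and direction with the first while having a *greater* offset and a *lesser* opacity.

```python
from dataclasses import dataclass, replace

@dataclass
class Shadow:
    colour: str       # e.g. "#000000"
    direction: float  # degrees
    offset: float     # px
    opacity: float    # 0..1

def make_layered_shadows(colour="#000000", direction=45.0,
                         offset=4.0, opacity=0.5):
    """Sketch of the claimed text effect: a second shadow sharing the
    first's colour and direction, with a larger offset and a lower
    opacity. (The x2 and /2 factors are assumed, not claimed.)"""
    first = Shadow(colour, direction, offset, opacity)
    second = replace(first, offset=offset * 2, opacity=opacity / 2)
    # Render order per the claims: second (furthest) behind first,
    # first behind the selected design element itself.
    return [second, first]

shadows = make_layered_shadows()
assert shadows[0].offset > shadows[1].offset    # second shadow sits further out
assert shadows[0].opacity < shadows[1].opacity  # and is fainter
```

Stacking a sharp near shadow over a faint far one is a common way to fake soft, physically plausible shadows with cheap 2D compositing, which is presumably why the claims pin down exactly this ordering.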
20220189091 | POST-SPLIT CONTRALATERAL COMBINATIONS - A method for animation. The method includes generating a first sub-shape of a first shape that corresponds to a first expression of a face, wherein the first sub-shape corresponds to a first portion of the face. The method includes generating a second sub-shape of a second shape that corresponds to a second expression of the face, wherein the second sub-shape corresponds to a second portion of the face. The method includes modifying the first sub-shape. The method includes generating a combined shape by combining the first sub-shape that is modified and the second sub-shape using an editing application. The method includes applying a correction to the combined shape to align relative portions of the combined shape with the first shape or the second shape. | 2022-06-16 |
20220189092 | WEIGHT MAPS TO GENERATE OFF-CENTER SPLIT MAPS OF A SHAPE - A method for generating split-shapes. The method includes generating a three dimensional (3D) wire frame of a shape that corresponds to an expression of a face. The method includes generating a UV map that corresponds to the 3D wire frame. The method includes identifying an isolated area of the shape using vertex weights or the UV map, wherein the isolated area corresponds to a portion of the face. The method includes generating a weight map based on the vertex weights or the UV map, wherein the weight map identifies the isolated area. The method includes generating a sub-shape of the shape based on the weight map, wherein the sub-shape is editable using an editing application. | 2022-06-16 |
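The final step of the split-shape method above, producing a sub-shape from a weight map, reduces to a per-vertex multiply in the common blendshape formulation. This sketch assumes that representation (shapes stored as per-vertex offsets from a neutral face, weights in [0, 1]); the patent itself does not fix the data layout.

```python
import numpy as np

def split_shape(offsets, weight_map):
    """Sketch: apply a weight map (per-vertex weights in [0, 1] that
    identify an isolated facial area) to a full shape's vertex offsets,
    yielding an editable sub-shape covering only that area."""
    return offsets * weight_map[:, None]

# Four vertices; the weight map isolates the first two (say, one brow),
# with a soft falloff on the second vertex.
offsets = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0],
                    [1.0, 1.0, 0.0]])
weights = np.array([1.0, 0.5, 0.0, 0.0])
sub = split_shape(offsets, weights)
print(sub[2].tolist(), sub[3].tolist())  # → [0.0, 0.0, 0.0] [0.0, 0.0, 0.0]
```

Vertices with zero weight keep the neutral pose, which is what makes the resulting sub-shape safely combinable with sub-shapes split from other regions (as in the post-split combination entry above).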
20220189093 | INTERACTION BASED ON IN-VEHICLE DIGITAL PERSONS - Methods, systems, apparatuses, and computer-readable storage media for interactions based on in-vehicle digital persons are provided. In one aspect, a method includes: acquiring a video stream of a person in a vehicle captured by a vehicle-mounted camera, processing at least one frame of image included in the video stream to obtain one or more task processing results based on at least one predetermined task, and performing, according to the one or more task processing results, at least one of displaying a digital person on a vehicle-mounted display device or controlling a digital person displayed on a vehicle-mounted display device to output interaction feedback information. | 2022-06-16 |
20220189094 | ANIMATION VIDEO PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM - An animation video processing method and apparatus, an electronic device, and a storage medium are provided. The method includes: determining an original animation video matching a target object; preprocessing the original animation video to obtain a key video frame in the original animation video and motion data corresponding to the key video frame; determining a motion data set matching the target object; determining a displacement parameter of the target object; and obtaining an animation video matching a real-time motion state of the target object based on the motion data set matching the target object and the displacement parameter of the target object. The present disclosure can accurately and efficiently obtain an animation video matching a real-time motion state of a target object in an original animation video. | 2022-06-16 |
20220189095 | METHOD AND COMPUTER PROGRAM PRODUCT FOR PRODUCING 3 DIMENSIONAL MODEL DATA OF A GARMENT - In a method for producing three-dimensional model data of a garment having a target garment size, intermediate model data of the garment are stored, which are associated with a base garment size and include intermediate geometry data and at least one intermediate texture map associated with the intermediate geometry data. Through semantic segmentation of the intermediate texture map a label map is generated that associates each element of the intermediate texture map to a respective one of a set of segments of the garment and associated resizing rules. Geometry data for the model data are generated by resizing the intermediate geometry data based on the label map, on the resizing rules, on a body model and on a set of scaling factors between the base garment size and the target garment size. A texture map is generated based on the intermediate texture map, on the label map, on the resizing rules and on the set of scaling factors. | 2022-06-16 |
20220189096 | OPACITY TEXTURE-DRIVEN TRIANGLE SPLITTING - Techniques for performing ray tracing operations are provided. The techniques include dividing a primitive of a scene to generate primitive portions; identifying, from the primitive portions, and based on an opacity texture, one or more opaque primitive portions and one or more invisible primitive portions; generating box nodes for a bounding volume hierarchy corresponding to the opaque primitive portions, but not the invisible primitive portions; and inserting the generated box nodes into the bounding volume hierarchy. | 2022-06-16 |
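The culling idea in the triangle-splitting entry above can be shown with a much-simplified stand-in: instead of splitting a triangle, split a primitive's UV space into a grid of portions and classify each against the opacity texture. Only the grid simplification and the tile encoding are mine; the cull rule (no BVH box nodes for fully transparent portions) is the abstract's.

```python
import numpy as np

def classify_portions(opacity_tex, n_splits):
    """Split a primitive's UV space into an n_splits x n_splits grid and
    classify each portion against the opacity texture. Portions with at
    least one visible texel would receive box nodes in the bounding volume
    hierarchy; fully transparent ('invisible') portions are culled.
    (Grid splitting is an assumed, simplified stand-in for the patent's
    triangle splitting.)"""
    h, w = opacity_tex.shape
    keep = []
    for i in range(n_splits):
        for j in range(n_splits):
            tile = opacity_tex[i * h // n_splits:(i + 1) * h // n_splits,
                               j * w // n_splits:(j + 1) * w // n_splits]
            if tile.max() > 0.0:        # any visible texel -> portion kept
                keep.append((i, j))
    return keep                          # portions that get box nodes

# An opacity texture that is opaque only in its left half.
tex = np.zeros((8, 8))
tex[:, :4] = 1.0
print(classify_portions(tex, 2))  # → [(0, 0), (1, 0)]
```

The payoff is that rays traversing the BVH never even reach the transparent half of the primitive, so no any-hit/alpha-test work is spent on geometry that cannot produce a hit.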
20220189097 | INTERSECTION TESTING IN A RAY TRACING SYSTEM USING MULTIPLE RAY BUNDLE INTERSECTION TESTS - Ray tracing systems and computer-implemented methods are described for performing intersection testing on a bundle of rays with respect to a box. Silhouette edges of the box are identified from the perspective of the bundle of rays. For each of the identified silhouette edges, components of a vector providing a bound to the bundle of rays are obtained and it is determined whether the vector passes inside or outside of the silhouette edge. Results of determining, for each of the identified silhouette edges, whether the vector passes inside or outside of the silhouette edge, are used to determine an intersection testing result for the bundle of rays with respect to the box. | 2022-06-16 |
20220189098 | Rendering of Soft Shadows - Systems can identify visible surfaces for pixels in an image (portion) to be rendered. A sampling pattern of ray directions is applied to the pixels, so that the sampling pattern of ray directions repeats, and with respect to any pixel, the same ray direction can be found in the same relative position, with respect to that pixel, as for other pixels. Rays are emitted from visible surfaces in the respective ray direction supplied from the sampling pattern. Ray intersections can cause shaders to execute and contribute results to a sample buffer. With respect to shading of a given pixel, ray results from a selected subset of the pixels are used; the subset is selected by identifying a set of pixels, collectively from which rays were traced for the ray directions in the pattern, and requiring that surfaces from which rays were traced for those pixels satisfy a similarity criterion. | 2022-06-16 |
20220189099 | TECHNIQUES FOR TRAVERSING DATA EMPLOYED IN RAY TRACING - Ray tracing hardware accelerators supporting multiple specifiers for controlling the traversal of a ray tracing acceleration data structure are disclosed. For example, traversal efficiency and complex ray tracing effects can be achieved by specifying traversals through such data structures using both programmable ray operations and explicit node masking. The explicit node masking utilizes dedicated fields in the ray and in nodes of the acceleration data structure to control traversals. Ray operations, however, are programmable per ray using opcodes and additional parameters to control traversals. Traversal efficiency is improved by enabling more aggressive culling of parts of the data structure based on the combination of explicit node masking and programmable ray operations. More complex ray tracing effects are enabled by providing for dynamic selection of nodes based on individual ray characteristics. | 2022-06-16 |
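The two traversal controls in the abstract above, explicit node masks and programmable per-ray operations, compose into a simple visit rule: a node is entered only when its mask shares a bit with the ray's mask *and* the ray op accepts it. This sketch assumes a toy node layout (dicts with `mask`, `children`/`leaf`) and a Python callable in place of the hardware's opcode-plus-parameters ray op.

```python
def traverse(node, ray_mask, ray_op=lambda n: True):
    """Sketch of acceleration-structure traversal culled by explicit node
    masks plus a programmable per-ray operation. (Node layout and the
    ray-op signature are assumptions, not the hardware's encoding.)"""
    if node is None or (node["mask"] & ray_mask) == 0 or not ray_op(node):
        return []                        # culled: subtree never visited
    if "leaf" in node:
        return [node["leaf"]]
    hits = []
    for child in node["children"]:
        hits += traverse(child, ray_mask, ray_op)
    return hits

bvh = {"mask": 0b11, "children": [
    {"mask": 0b01, "leaf": "A"},         # e.g. geometry relevant to shadow rays
    {"mask": 0b10, "leaf": "B"},         # e.g. geometry relevant to camera rays
]}
print(traverse(bvh, ray_mask=0b01))  # → ['A']  (the node mask culls leaf B)
```

The abstract's efficiency claim falls out of the early `return []`: a failed mask test or ray op prunes an entire subtree before any box or triangle intersection work is done, and because the ray op is per-ray, two rays can take different cuts through the same tree.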
20220189100 | THREE-DIMENSIONAL TOMOGRAPHY RECONSTRUCTION PIPELINE - A three-dimensional (3D) density volume of an object is constructed from tomography images (e.g., x-ray images) of the object. The tomography images are projection images that capture all structures of an object (e.g., human body) between a beam source and imaging sensor. The beam effectively integrates along a path through the object producing a tomography image at the imaging sensor, where each pixel represents attenuation. A 3D reconstruction pipeline includes a first neural network model, a fixed function backprojection unit, and a second neural network model. Given information for the capture environment, the tomography images are processed by the reconstruction pipeline to produce a reconstructed 3D density volume of the object. In contrast with a set of 2D slices, the entire 3D density volume is reconstructed, so two-dimensional (2D) density images may be produced by slicing through any portion of the 3D density volume at any angle. | 2022-06-16 |
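The pipeline shape described above, neural network, fixed-function backprojection, neural network, can be sketched in 2D with identity functions standing in for both networks (an assumption; the patent's models are trained). The fixed-function middle stage is real, though unfiltered and deliberately minimal: each 1D projection is smeared back across the grid along its view direction.

```python
import numpy as np

def backproject(projections, angles, size):
    """Fixed-function (unfiltered) backprojection: smear each 1D parallel
    projection back across a 2D grid along its view direction. The patent's
    3D pipeline wraps a unit like this between two neural networks."""
    ys, xs = np.mgrid[0:size, 0:size]
    xs = xs - size / 2 + 0.5
    ys = ys - size / 2 + 0.5
    volume = np.zeros((size, size))
    for proj, theta in zip(projections, angles):
        # Detector coordinate of every grid point for this view angle.
        t = xs * np.cos(theta) + ys * np.sin(theta)
        idx = np.clip(np.round(t + size / 2 - 0.5).astype(int), 0, size - 1)
        volume += proj[idx]              # accumulate attenuation along rays
    return volume / len(angles)

pre_net = lambda x: x       # stand-in for the first neural network model
post_net = lambda x: x      # stand-in for the second neural network model

angles = np.linspace(0, np.pi, 16, endpoint=False)
# Fake sinogram of a centred dot: every view sees a peak at the detector centre.
sino = np.zeros((16, 32))
sino[:, 16] = 1.0
recon = post_net(backproject(pre_net(sino), angles, 32))
print(recon.shape)  # a full 32x32 density grid, sliceable at any angle
```

Even this crude version shows why the abstract stresses reconstructing the whole density volume rather than fixed 2D slices: once `recon` exists as a grid, any cut through it is just indexing.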
20220189101 | THREE-DIMENSIONAL OBJECT MARKING - Examples of methods for three-dimensional object marking are described herein. In some examples, a method may include determining a set of volumes based on a one-dimensional (1D) barcode. In some examples, the method may include overlapping the set of volumes with a voxel representation of a three-dimensional (3D) object. In some examples, the method may include marking voxels of the 3D object that are within the set of volumes. | 2022-06-16 |
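The three claimed steps above (volumes from a 1D barcode, overlap with the voxel representation, mark the intersection) map cleanly onto boolean grid operations. The barcode encoding here, a list of `(start, width, is_bar)` runs in voxel units along one axis, is an assumption; the patent does not specify how bars become volumes.

```python
import numpy as np

def mark_barcode(voxels, bars, axis=0):
    """Sketch: turn a 1D barcode (runs of (start, width, is_bar), an
    assumed encoding) into slab volumes along one axis, overlap them with
    the object's voxel grid, and mark the object voxels inside bar slabs.
    Voxel values: 0 = empty, 1 = object, 2 = marked object."""
    marked = voxels.copy()
    for start, width, is_bar in bars:
        if is_bar:
            sl = [slice(None)] * voxels.ndim
            sl[axis] = slice(start, start + width)
            # Mark only where the slab overlaps actual object voxels.
            region = voxels[tuple(sl)]
            marked[tuple(sl)] = np.where(region == 1, 2, region)
    return marked

obj = np.ones((10, 4, 4), dtype=int)                 # solid 10x4x4 object
bars = [(1, 2, True), (3, 1, False), (4, 3, True)]   # bar, gap, bar
out = mark_barcode(obj, bars)
print(sorted({int(out[x, 0, 0]) for x in range(10)}))  # → [1, 2]
```

In an additive-manufacturing setting, the marked voxels would then be printed with a different property (colour, density, surface texture), embedding the machine-readable code in the physical part.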
20220189102 | HIGH-DEFINITION CITY MAPPING - A vehicle generates a city-scale map. The vehicle includes one or more Lidar sensors configured to obtain point clouds at different positions, orientations, and times, one or more processors, and a memory storing instructions that, when executed by the one or more processors, cause the system to perform registering, in pairs, a subset of the point clouds based on respective surface normals of each of the point clouds; determining loop closures based on the registered subset of point clouds; determining a position and an orientation of each of the subset of the point clouds based on constraints associated with the determined loop closures; and generating a map based on the determined position and the orientation of each of the subset of the point clouds. | 2022-06-16 |
20220189103 | GENERATION APPARATUS, GENERATION METHOD, AND STORAGE MEDIUM - A generation apparatus includes an acquisition unit configured to acquire shape data representing a shape of a subject based on a plurality of captured images obtained by a plurality of imaging apparatuses capturing the subject in an imaging region, a first determination unit configured to determine a subject position which is a position of the subject in the imaging region, a second determination unit configured to determine a reference position serving as a reference of the position of the subject, and a generation unit configured to generate, based on the shape data acquired by the acquisition unit, a virtual viewpoint image in accordance with a deviation between the subject position determined by the first determination unit and the reference position determined by the second determination unit. | 2022-06-16 |
20220189104 | Methods and Systems for Rendering View-Dependent Images Using 2D Images - Disclosed herein are methods and systems for providing different views to a viewer. One particular embodiment includes a method including providing, to a neural network, a plurality of 2D images of a 3D object. The neural network may include a signed distance function based sinusoidal representation network. The method may further include obtaining a neural model of a shape of the object by obtaining a zero-level set of the signed distance function; and modeling an appearance of the object using a spatially varying emission function. In some embodiments, the neural model may be converted into a triangular mesh representing the object which may be used to render multiple view-dependent images representative of the 3D object. | 2022-06-16 |
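The "zero-level set of the signed distance function" step above is where shape extraction happens, and it can be illustrated without any network: an analytic SDF stands in for the trained sinusoidal representation network (an assumption, shown in 2D for brevity), and the zero-level set is located by sign changes between neighbouring samples.

```python
import numpy as np

# Analytic circle SDF standing in for the trained neural signed distance
# function (assumption: the patent's model is a SIREN-style network over
# a 3D object; a 2D circle keeps the sketch short).
xs = np.linspace(-1.0, 1.0, 21)
gx, gy = np.meshgrid(xs, xs, indexing="ij")
vals = np.hypot(gx, gy) - 0.5        # signed distance to a radius-0.5 circle

# The zero-level set is where the SDF changes sign between neighbouring
# samples; a mesher (e.g. marching cubes in 3D) would place the surface
# triangles for the output mesh in exactly these cells.
inside = vals <= 0.0
crossings = ((inside[:-1, :-1] != inside[1:, :-1]) |
             (inside[:-1, :-1] != inside[:-1, 1:]))
print(int(crossings.sum()), "grid cells straddle the surface")
```

In the patented pipeline the mesh extracted from these crossings carries the geometry, while the spatially varying emission function supplies the view-dependent appearance at render time.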
20220189105 | 3d Conversations in An Artificial Reality Environment - A 3D conversation system can facilitate 3D conversations in an augmented reality environment, allowing conversation participants to appear as if they are face-to-face. The 3D conversation system can accomplish this with a pipeline of data processing stages, which can include calibrate, capture, tag and filter, compress, decompress, reconstruct, render, and display stages. Generally, the pipeline can capture images of the sending user, create intermediate representations, transform the representations to convert from the orientation the images were taken from to a viewpoint of the receiving user, and output images of the sending user, from the viewpoint of the receiving user, in synchronization with audio captured from the sending user. Such a 3D conversation can take place between two or more sender/receiving systems and, in some implementations can be mediated by one or more server systems. In various configurations, stages of the pipeline can be customized based on a conversation context. | 2022-06-16 |
20220189106 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM - An image processing apparatus includes an acquisition unit configured to acquire a three-dimensional shape data of an object based on images captured by a plurality of cameras, a generation unit configured to generate information based on a relationship between the three-dimensional shape data acquired by the acquisition unit and positions of the plurality of cameras, and a correction unit configured to correct the three-dimensional shape data based on the information generated by the generation unit. | 2022-06-16 |
20220189107 | DISTRIBUTED RENDERING AND DISPLAY SYSTEM - A rendering system comprises a host device disposed in communication with one or more rendering pipelines. Each rendering pipeline comprises a rendering device and a display device. Each display device enables one or more users to view a scene rendered on the host device. Each rendering pipeline provides the user with independent control of their perspective of the scene. The host device receives a CG camera definition from each rendering pipeline and uses it to perform geometry culling and creates a z-buffer for each rendering pipeline. For each rendering pipeline, the rendering device receives a z-buffer and renders a frame buffer for the display device. This architecture reduces the rendering power requirements of the rendering device for each rendering pipeline as compared to performing all rendering on the rendering device, and is particularly useful when multiple users are viewing a complex scene such as a high fidelity simulation environment. | 2022-06-16 |