39th week of 2021 patent application highlights part 57 |
Patent application number | Title | Published |
20210303823 | PROGRAM CREATION DEVICE, OBJECT DETECTION SYSTEM, ANCHOR SETTING METHOD, AND ANCHOR SETTING PROGRAM - A program creation device configured to create an object detection program for detecting whether an object is included in an image includes training data including a plurality of image data including area information of the object, a setting unit configured to set an anchor, that is, information of a frame specifying a region for each cell for detecting the presence or absence of the object in the image, and a learning unit configured to execute machine learning on the training data based on the information of the setting unit and to create a learned program for extracting the object from the image. The setting unit acquires information on target regions of the training data and on aspect ratios of the anchor, calculates, for each aspect ratio while changing the size of the anchor, a degree of matching between the anchor and each target region, calculates an adoption rate as the proportion of target regions whose degree of matching is no less than a threshold, and determines, based on the result of the calculation, the size of the anchor used in the learned program. | 2021-09-30 |
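The anchor-sizing procedure described in this abstract can be sketched roughly as follows. All names, the center-aligned IoU matching criterion, and the size sweep are assumptions for illustration, not the patent's exact formulation:

```python
# Sketch: sweep anchor sizes per aspect ratio, score each size by the
# fraction of ground-truth boxes ("adoption rate") whose IoU with the
# anchor meets a threshold, and keep the best-scoring size.

def iou_wh(w1, h1, w2, h2):
    """IoU of two boxes assumed to share a common center (width/height only)."""
    inter = min(w1, w2) * min(h1, h2)
    union = w1 * h1 + w2 * h2 - inter
    return inter / union

def best_anchor_size(gt_boxes, aspect_ratios, sizes, iou_threshold=0.5):
    """gt_boxes: list of (width, height) of target regions.
    Returns {aspect_ratio: (best_size, adoption_rate)}."""
    best = {}
    for ar in aspect_ratios:
        best_rate, best_size = -1.0, None
        for s in sizes:
            # anchor with area s**2 and width/height ratio ar
            w, h = s * ar ** 0.5, s / ar ** 0.5
            matched = sum(iou_wh(w, h, gw, gh) >= iou_threshold
                          for gw, gh in gt_boxes)
            rate = matched / len(gt_boxes)  # "adoption rate"
            if rate > best_rate:
                best_rate, best_size = rate, s
        best[ar] = (best_size, best_rate)
    return best
```

With square target regions of sides 10 and 12, a candidate size of 10 matches both at IoU ≥ 0.5, while a size of 30 matches neither, so the smaller anchor wins.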
20210303824 | FACE DETECTION IN SPHERICAL IMAGES USING OVERCAPTURE - Face detection in a spherical image is performed using overcapture. Multiple views of a spherical image are separately processed using face detection, and results of the face detection for those views are projected to a format for further processing of the spherical image. | 2021-09-30 |
20210303825 | DIRECTIONAL ASSISTANCE FOR CENTERING A FACE IN A CAMERA FIELD OF VIEW - Methods and systems are provided for providing directional assistance to guide a user to position a camera for centering a person's face within the camera's field of view. A neural network system is trained to determine the position of the user's face relative to the center of the field of view as captured by an input image. The neural network system is trained using training input images that are generated by cropping different regions of initial training images. Each initial image is used to create a plurality of different training input images, and directional assistance labels used to train the network may be assigned to each training input image based on how the image is cropped. Once trained, the neural network system determines a position of the user's face, and automatically provides a non-visual prompt indicating how to center the face within the field of view. | 2021-09-30 |
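The crop-and-label scheme for generating training input images described above can be sketched like this. The label names, the normalized-offset rule, and the tolerance are illustrative assumptions:

```python
# Sketch: given the known face center in an initial image, assign each
# cropped training region a directional-assistance label based on where
# the face center falls relative to the crop center.

def direction_label(face_xy, crop_box, tol=0.15):
    """crop_box: (x0, y0, x1, y1). Returns a directional prompt label."""
    fx, fy = face_xy
    x0, y0, x1, y1 = crop_box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    dx = (fx - cx) / (x1 - x0)  # normalized horizontal offset
    dy = (fy - cy) / (y1 - y0)  # normalized vertical offset
    if abs(dx) <= tol and abs(dy) <= tol:
        return "centered"
    if abs(dx) >= abs(dy):
        return "move right" if dx > 0 else "move left"
    return "move down" if dy > 0 else "move up"
```

Each initial image yields many (crop, label) pairs by varying `crop_box`, which is how one image expands into a plurality of training inputs.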
20210303826 | GLOBAL CONFIGURATION INTERFACE FOR DEFAULT SELF-IMAGES - A method for operating a messaging system adapted to send and receive modifiable videos is provided. The method includes receiving, by a computing device, a first authorization from a user to use a self-image of the user in a personalized video. The method also includes receiving, by the computing device, a second authorization from the user to enable use of another self-image of another user in the personalized video. The method further includes sending, by the computing device, after the first and second authorizations have been received, the personalized video including at least part of the self-image of the user, at least part of the other self-image of the other user, and at least part of a stock video. A system and a non-transitory processor-readable medium for operating a messaging system adapted to send and receive modifiable videos are also provided. | 2021-09-30 |
20210303827 | FACE FEATURE POINT DETECTION METHOD AND DEVICE, EQUIPMENT AND STORAGE MEDIUM - Provided are a face feature point detection method, applied to an image processing device, where the image processing device stores a feature area detection model and a feature point detection model. The method includes: preprocessing a face image to be detected to obtain a preprocessed target face image; performing feature point extraction on the target face image according to the feature area detection model and the feature point detection model to obtain a target feature point coordinate located within a face feature area in the target face image; and performing coordinate transformation on the target feature point coordinate to obtain a face feature point coordinate corresponding to the face image to be detected. Further provided are a face feature point detection device, an equipment and a storage medium. | 2021-09-30 |
20210303828 | Systems, Methods, and Platform for Facial Identification within Photographs - In an illustrative embodiment, systems and methods for assisting users in identifying unknown individuals in photographs first apply facial recognition to obtain a first likelihood of match between a target face and other faces in a corpus of images provided by users of a genealogy platform, and then adjust the first likelihood of match according to similarities and dissimilarities in attributes supplied by users regarding the individuals represented by each face. Resultant likelihoods drive presentation of potential matches for consideration by a requesting user. | 2021-09-30 |
20210303829 | FACE LIVENESS DETECTION USING BACKGROUND/FOREGROUND MOTION ANALYSIS - Face recognition systems are vulnerable to the presentation of spoofed faces, which may be presented to face recognition systems, for example, by an unauthorized user seeking to gain access to a protected resource. A face liveness detection method that addresses this vulnerability utilizes motion analysis to compare the relative movement among three regions of interest in a facial image, and based upon that comparison to make a face liveness determination. | 2021-09-30 |
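The three-region motion comparison described above can be sketched minimally as follows. The heuristic (a live face moves largely independently of the background, while a replayed photo or video moves face and surroundings together), the region layout, and the ratio threshold are assumptions, not the patent's exact algorithm:

```python
import numpy as np

def motion_energy(prev, curr, box):
    """Mean absolute frame difference inside box = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    return float(np.mean(np.abs(curr[y0:y1, x0:x1].astype(float)
                                - prev[y0:y1, x0:x1].astype(float))))

def is_live(prev, curr, face_box, border_box, background_box, ratio=2.0):
    """Compare motion in three regions of interest between two frames."""
    face = motion_energy(prev, curr, face_box)
    border = motion_energy(prev, curr, border_box)
    bg = motion_energy(prev, curr, background_box)
    # Live if face motion clearly exceeds background motion and is not
    # simply part of a uniformly moving scene (e.g., a waved photograph).
    return face > ratio * max(bg, 1e-6) and face > border
```

In practice this would run over a sliding window of frames rather than a single pair, but the pairwise comparison conveys the idea.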
20210303830 | SYSTEMS AND METHODS FOR AUTOMATED TRACKING USING A CLIENT DEVICE - Systems and methods are disclosed for determining which of the multitude of objects within a frame of a camera to track. Specifically, objects within a frame of a camera are detected and compared with objects in visual content items captured by the user's device (e.g., pictures/videos captured by the smart phone or the electronic tablet). If a match is found between an object within the frame (e.g., a person) and an object within visual content items captured on the user's device (e.g., the same person), the system will proceed to track the identified object. | 2021-09-30 |
20210303831 | MANAGING CONTENT ON IN-FLIGHT ENTERTAINMENT PLATFORMS - Introduced here are technologies for examining an image of an airline passenger, while the passenger is onboard. The image may be examined for abnormalities, the passenger's mood, and/or the passenger's expression. The image analysis may be supplemented by data from a suite of sensors such as heart rate monitors, accelerometers, gyroscopes, and the like. The image and the sensor data are then analyzed to detect abnormalities, identify related recommendations, and suggest products and services offered onboard or at the destination that are related to the abnormalities and recommendations. The general purpose is to improve an airline passenger's travel experience while on board the flight. | 2021-09-30 |
20210303832 | Learned Feature Motion Detection - A data processing device for detecting motion in a sequence of frames each comprising one or more blocks of pixels, includes a sampling unit configured to determine image characteristics at a set of sample points of a block, a feature generation unit configured to form a current feature for the block, the current feature having a plurality of values derived from the sample points, and motion detection logic configured to generate a motion output for a block by comparing the current feature for the block to a learned feature representing historical feature values for the block. | 2021-09-30 |
20210303833 | OBJECT ATTRIBUTE INFERENCE METHOD, STORAGE MEDIUM AND ELECTRONIC DEVICE - An object attribute inference method, a storage medium and an electronic device are provided. The method includes training a preset neural network using a training sample comprising an action sequence and a target object attribute tag, where the action sequence is an interactive action sequence of a human body and an object. The object attribute inference model obtained through training can thus recognize object attributes from the interaction of the human body with the object; the object attributes include, but are not limited to, the weight, shape, and hardness of the object, so the trained inference model is universal in object attribute inference and wide in application range. | 2021-09-30 |
20210303834 | OBJECT DETECTION DEVICE AND OBJECT DETECTION METHOD FOR CONSTRUCTION MACHINE - An object detection device ( | 2021-09-30 |
20210303835 | TRANSFORMATION OF HAND-DRAWN SKETCHES TO DIGITAL IMAGES - Techniques are disclosed for generating a vector image from a raster image, where the raster image is, for instance, a photographed or scanned version of a hand-drawn sketch. While drawing a sketch, an artist may perform multiple strokes to draw a line, and the resultant raster image may have adjacent or partially overlapping salient and non-salient lines, where the salient lines are representative of the artist's intent, and the non-salient (or auxiliary) lines are formed due to the redundant strokes or otherwise as artefacts of the creation process. The raster image may also include other auxiliary features, such as blemishes, non-white background (e.g., reflecting the canvas on which the hand-sketch was made), and/or uneven lighting. In an example, the vector image is generated to include the salient lines, but not the non-salient lines or other auxiliary features. Thus, the generated vector image is a cleaner version of the raster image. | 2021-09-30 |
20210303836 | HANDWRITING INPUT DISPLAY APPARATUS, HANDWRITING INPUT DISPLAY METHOD AND RECORDING MEDIUM STORING PROGRAM - A handwriting input display apparatus causes display means to display a stroke generated by an input made by using input means to a screen as a handwritten object. The apparatus includes display control means for causing the display means to display character string candidates including a handwriting recognition candidate when the handwritten object does not change for a predetermined time. When the handwriting recognition candidate is selected, the display control means causes the display means to erase a display of the character string candidates and a display of the handwritten object, and causes the display means to display a character string object at a position where the erased handwritten object was displayed. When selection of the handwriting recognition candidate is not performed for a predetermined time and the display of the character string candidates is erased, the display control means causes the handwritten object to be kept displayed. | 2021-09-30 |
20210303837 | IMAGE PROCESSING DEVICE - An image processing device includes a document image extraction unit, a business card array image detection unit, and a business card image separation unit. The document image extraction unit extracts one document image in a read image obtained by scanning one or a plurality of documents placed on a platen glass of an image reading device. When the document is a plurality of business cards arranged without gaps, the business card array image detection unit detects the document image as a business card array image in which a plurality of business card images are arranged without gaps, based on at least one of the distribution of character string objects and the distribution of character sizes in the document image. The business card image separation unit separates the detected business card array image into the plurality of business card images. | 2021-09-30 |
20210303838 | IMAGE CLASSIFICATION USING COLOR PROFILES - A device may receive a target document. The device may segment the target document into multiple segments. The device may determine, for each segment of the multiple segments, a set of color parameters for a corresponding set of pixels included in that segment. The device may determine, for each segment of the multiple segments, an average color parameter for that segment based on the set of color parameters for the corresponding set of pixels included in that segment. The device may generate a target color profile for the target document based on determining the average color parameter for each segment. The device may compare the target color profile and a model color profile associated with classifying the target document. The device may classify the target document based on comparing the target color profile and the model color profile. | 2021-09-30 |
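The segment-average-compare flow described above can be sketched as follows. The grid size, the Euclidean distance metric, and the nearest-profile decision rule are assumptions; the abstract specifies only segmentation, per-segment average color parameters, and a profile comparison:

```python
import numpy as np

def color_profile(image, grid=(4, 4)):
    """image: H x W x 3 array. Returns one average color per grid segment."""
    h, w, _ = image.shape
    gh, gw = h // grid[0], w // grid[1]
    profile = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            seg = image[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw]
            profile.append(seg.reshape(-1, 3).mean(axis=0))
    return np.array(profile)

def classify(target, model_profiles, grid=(4, 4)):
    """model_profiles: {label: profile}. Returns the nearest label."""
    tp = color_profile(target, grid)
    return min(model_profiles,
               key=lambda lbl: np.linalg.norm(tp - model_profiles[lbl]))
```

A document that is predominantly red in every segment lands nearer a red model profile than a blue one, regardless of fine-grained content.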
20210303839 | METHOD AND APPARATUS TO ESTIMATE IMAGE TRANSLATION AND SCALE FOR ALIGNMENT OF FORMS - Method and apparatus to match bounding boxes around text to align forms. The approach is less computationally intensive, and less prone to error than text recognition. For purposes of achieving alignment, information per se is not as important as information location. Information within the bounding boxes is not as critical as is the location of the area which the bounding boxes occupy. Scanning artifacts, missing characters, or noise generally do not affect bounding boxes themselves so much as they do the contents of the bounding boxes. Thus, for purposes of form alignment, the bounding boxes themselves are sufficient. Using bounding boxes also avoids misalignment issues that can result from stray marks on a page, for example, from holes punched in a sheet, or from handwritten notations. | 2021-09-30 |
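The idea of aligning forms from bounding-box geometry alone can be sketched with a closed-form least-squares fit of scale and translation over matched box centers. Known correspondences and the no-rotation similarity model are assumptions; the abstract only states that box locations, not contents, drive alignment:

```python
import numpy as np

def estimate_translation_scale(ref_centers, scan_centers):
    """Each input: N x 2 array of matched bounding-box centers.
    Returns (scale, t) such that ref ≈ scale * scan + t."""
    ref = np.asarray(ref_centers, float)
    scan = np.asarray(scan_centers, float)
    ref_mu, scan_mu = ref.mean(0), scan.mean(0)
    ref_c, scan_c = ref - ref_mu, scan - scan_mu
    # least-squares isotropic scale between the centered point sets
    scale = (ref_c * scan_c).sum() / (scan_c ** 2).sum()
    t = ref_mu - scale * scan_mu
    return scale, t
```

Because only box centers enter the fit, stray marks, punched holes, or OCR errors inside the boxes do not perturb the estimate, which is the robustness argument the abstract makes.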
20210303840 | SYSTEM AND METHOD FOR RECONSTRUCTING AN IMAGE - This disclosure relates generally to image processing, and more particularly to method and system for reconstructing an image. In one embodiment, the method includes pre-processing an input image to generate character images corresponding to characters in the input image, determining a local character thickness threshold value for each character image, determining a global character thickness threshold value for the input image based on the local character thickness threshold values for the character images, and reconstructing each character image based on the local character thickness threshold value for each character image and the global character thickness threshold value to generate reconstructed character images. The local character thickness threshold value in a character image may be based on a set of character pixel values in a pre-determined number of segments in the character image. The method further includes re-constructing the input image based on the reconstructed character images. | 2021-09-30 |
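The local/global thickness-threshold step described above can be sketched as follows. Estimating thickness from ink-run lengths in a fixed number of horizontal segments, and taking the global value as the median of the locals, are illustrative assumptions:

```python
import numpy as np

def local_thickness(char_img, segments=4, ink_threshold=128):
    """char_img: 2-D grayscale array, dark ink on light background.
    Returns a per-character thickness estimate."""
    ink = char_img < ink_threshold
    widths = []
    for seg in np.array_split(ink, segments, axis=0):
        for row in seg:
            run = 0
            for px in row:  # lengths of consecutive ink runs per scanline
                if px:
                    run += 1
                elif run:
                    widths.append(run)
                    run = 0
            if run:
                widths.append(run)
    return float(np.median(widths)) if widths else 0.0

def global_thickness(char_images, **kw):
    """Page-level threshold derived from all per-character values."""
    return float(np.median([local_thickness(c, **kw) for c in char_images]))
```

Each character would then be reconstructed (e.g., thinned or thickened) toward a width consistent with both its local value and the page-wide global value.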
20210303841 | INFORMATION PROCESSING APPARATUS, IMAGE READING APPARATUS, AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PROGRAM - An information processing apparatus includes a processor configured to acquire an image, receive designation of a type of the image, present another type specified from the image in a case where the type does not satisfy a criterion corresponding to the image, and receive re-designation of a type of the image. | 2021-09-30 |
20210303842 | INFORMATION PROCESSING DEVICE AND NON-TRANSITORY COMPUTER READABLE MEDIUM - An information processing device is provided with a processor configured to acquire a document image illustrating a document, and extract target information with respect to a target character string from a region set with reference to a position of a specific type of impression included in the document image. | 2021-09-30 |
20210303843 | INFORMATION PROCESSING APPARATUS - An information processing apparatus includes a processor configured to acquire an image showing a document of a closed contract, recognize characters from the acquired image, calculate positions of the recognized characters in the image, determine, based on the calculated positions, whether any other characters are present in a region anterior or posterior to a date represented by the recognized characters, and output the date as an execution date of the contract if determining that no other characters are present in the anterior region and the posterior region. | 2021-09-30 |
20210303844 | VERIFICATION APPARATUS, CONTROL METHOD THEREFOR, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - A verification apparatus comprises setting a feature point of a reference image as a verification target image, and a reference region of a predetermined pattern formed in advance on a recording medium on which the verification target image is to be formed; extracting the set feature point from a read image obtained by reading a formed image; specifying, based on a positional relationship between the set feature point and the reference region, from an image position of the extracted feature point, a first region on the read image, which indicates a region where the predetermined pattern should have been formed, and a second region other than the first region of the read image; and performing verification for the first region by a first algorithm, and verification for the second region by a second algorithm. | 2021-09-30 |
20210303845 | FUNGAL IDENTIFICATION BY PATTERN RECOGNITION - A system includes a memory and a processor configured to execute computer instructions stored in the memory that when executed cause the system to perform operations. The computer instructions include a learning component that includes one or more trained models to learn pathogenic features of a given fungal species based on learning from a plurality of stored fungal species images. A fungal identifier component employs the trained models to determine pathogenic parameters of an unidentified fungal species image based on the learned pathogenic features. The fungal identifier component generates an output file that classifies the unidentified fungal species image according to the determined pathogenic parameters from the unidentified fungal species image. | 2021-09-30 |
20210303846 | IMAGING DEVICE AND TRACKING METHOD - An imaging device comprising an image sensor that exposes a subject image and repeatedly outputs, at a fixed period, image signals resulting from photoelectric conversion of the subject image; a subject detection circuit that inputs the image signals to a neural network circuit whose operation parameters for detecting a specified subject have been learned by deep learning, and that detects the specified subject; and a subject association determination circuit that forms associations based on a positional relationship between a whole subject detected by the subject detection circuit and parts of that subject, wherein the specified subject is a whole subject and the parts that have been associated with it. | 2021-09-30 |
20210303847 | SPACE RECOGNITION METHOD, ELECTRONIC DEVICE AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - The embodiments of the disclosure provide a space recognition method, an electronic device and a non-transitory computer-readable storage medium. The method includes the following steps. Sensor data for detecting obstacle positions is obtained from a sensor associated with an electronic device. A plurality of coordinates respectively corresponding to the obstacle positions are generated based on the sensor data. Boundary line information of a space surrounding the electronic device is updated according to the coordinates until an optimization condition is met for each boundary line. A spatial range of the space surrounding the electronic device is identified based on the boundary line information. The spatial range is used to guide a movement of the electronic device. | 2021-09-30 |
20210303848 | IMAGE PROCESSING OF AERIAL IMAGERY FOR ENERGY INFRASTRUCTURE ANALYSIS USING JOINT IMAGE IDENTIFICATION - A computer-implemented method for processing images to identify Energy Infrastructure (EI) features within aerial images of global terrain is provided. The image processing method identifies information about EI features by applying an EI feature recognition model to aerial images of global terrain. The EI feature recognition model identifies the EI feature information according to image content of the aerial image. The method further provides updates to the identification of the EI feature information according to relationships between identified EI features. | 2021-09-30 |
20210303849 | Manual Curation Tool for Map Data Using Aggregated Overhead Views - Examples disclosed herein may involve (i) obtaining a first layer of map data associated with sensor data capturing a geographical area, the first layer of map data comprising an aggregated overhead-view image of the geographical area, where the aggregated overhead-view image is generated from aggregated pixel values from a plurality of images associated with the geographical area, (ii) obtaining a second layer of map data, the second layer of map data comprising label data for the geographical area derived from the aggregated overhead-view image of the geographical area, and (iii) causing the first layer of map data and the second layer of map data to be presented to a user for curation of the label data. | 2021-09-30 |
20210303850 | OBJECT DETECTION DEVICE AND OBJECT DETECTION METHOD BASED ON NEURAL NETWORK - An object detection device and an object detection method based on a neural network are provided. The object detection method includes: receiving an input image and identifying an object in the input image according to an improved YOLO-V2 neural network. The improved YOLO-V2 neural network includes a residual block, a third convolution layer, and a fourth convolution layer. A first input of the residual block is connected to a first convolution layer of the improved YOLO-V2 neural network, and an output of the residual block is connected to a second convolution layer of the improved YOLO-V2 neural network. Here, the residual block is configured to transmit, to the second convolution layer, a summation result corresponding to the first convolution layer. The third convolution layer and the fourth convolution layer are generated by decomposing a convolution layer of an original YOLO-V2 neural network. | 2021-09-30 |
20210303851 | Optical Systems with Authentication and Privacy Capabilities - A head-mounted electronic device may include a display with an optical combiner. The combiner may include a waveguide with first and second output couplers. The first output coupler may couple a first portion of image light at visible wavelengths out of the waveguide and towards an eye box. The second output coupler may couple a second portion of the image light at near-infrared wavelengths out of the waveguide and towards the surrounding environment. The second portion of the image light may include an authentication code that is used by a secondary device to authenticate the head-mounted device and/or may include a pattern that serves to prevent camera equipment in the surrounding environment from capturing accurate facial recognition information from a user while wearing the head-mounted device. | 2021-09-30 |
20210303852 | SYSTEM AND METHOD OF AUTOMATICALLY SELECTING ONE OR MORE STORAGE LOCKERS BASED ON DYNAMICALLY ACQUIRED DELIVERED ITEM VOLUME AND LOCKER VOLUME COMPATIBILITY AND AVAILABILITY - A system comprises a memory maintaining storage volume data and availability of each locker of a plurality of lockers; one or more cameras; and a device in communication with the one or more cameras, the device having a processing unit communicating with the one or more cameras that perform an imaging scan of the object to be placed into one of the plurality of lockers, to collect image data of one or more objects detected in the imaging scan, and to measure a volume of the one or more objects based on the collected image data; and a processor configured to select, based on the measured volume of the one or more objects and on the storage volume and availability of each locker, one or more lockers of the plurality of lockers with enough storage volume to enclose the one or more objects. | 2021-09-30 |
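The locker-selection rule above can be sketched with a best-fit policy: pick the smallest available locker whose storage volume covers the measured object volume. The abstract only requires "enough storage volume"; choosing the smallest sufficient locker is an added assumption, as are all names here:

```python
def select_locker(object_volumes, lockers):
    """object_volumes: volumes measured from the imaging scan.
    lockers: {locker_id: (storage_volume, available)}.
    Returns the id of the smallest sufficient available locker, or None."""
    needed = sum(object_volumes)
    candidates = [(vol, lid) for lid, (vol, available) in lockers.items()
                  if available and vol >= needed]
    # tuples sort by volume first, so min() is the tightest fit
    return min(candidates)[1] if candidates else None
```

If no available locker is large enough, `None` signals that the delivery cannot be placed and some fallback (e.g., multiple lockers) would be needed.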
20210303853 | SYSTEMS AND METHODS FOR AUTOMATED TRACKING ON A HANDHELD DEVICE USING A REMOTE CAMERA - Systems and methods are disclosed for determining which of the multitude of objects within a feed being received from a remote camera to track. Specifically, objects within an image feed received from a remote camera are detected and compared with objects in visual content items captured by the user's device (e.g., pictures/videos captured by the smart phone or the electronic tablet). If a match is found between an object within the feed of the video (e.g., a person) and an object within visual content items captured on the user's device (e.g., the same person), the system will proceed to track the identified object. | 2021-09-30 |
20210303854 | Imaging System and Method for Producing Images with Virtually-Superimposed Functional Elements - An imaging system and a method for producing extended-reality images for a display apparatus. The imaging system includes a camera and a processor. The processor is configured to: control the camera to capture an image of a real-world environment; analyse the captured image to identify a first image segment representing an input device and to determine a location of at least one actionable area of the input device in the first image segment; determine at least one functional element to be presented for the actionable area, the functional element being indicative of at least one of: a functionality of the at least one actionable area, a status of the at least one actionable area; and process the captured image to generate an extended-reality image in which the functional element is virtually superimposed over the actionable area of the input device or a virtual representation of the actionable area. | 2021-09-30 |
20210303855 | AUGMENTED REALITY ITEM COLLECTIONS - Systems and methods are provided for performing operations including: receiving, via a messaging application, input that selects a collection of augmented reality items; obtaining an identifier of the collection of the augmented reality items; searching, based on the identifier, a plurality of augmented reality items to identify a subset of augmented reality items associated with the identifier; causing the messaging application to present the subset of augmented reality items; and causing the messaging application to modify an image based on a first augmented reality item in the subset. | 2021-09-30 |
20210303856 | SIMULATION-BASED LEARNING OF DRIVER INTERACTIONS THROUGH A VEHICLE WINDOW - A model can be trained to detect interactions of other drivers through a window of their vehicle. A human driver behind a window (e.g., front windshield) of a vehicle can be detected in a real-world driving data. The human driver can be tracked over time through the window. The real-world driving data can be augmented by replacing at least a portion of the human driver with at least a portion of a virtual driver performing a target driver interaction to generate an augmented real-world driving dataset. The target driver interaction can be a gesture or a gaze. Using the augmented real-world driving data set, a machine learning model can be trained to detect the target driver interactions. Thus, simulation can be leveraged to provide a large set of useful training data without having to acquire real-world data of drivers performing target driver interactions as viewed from outside the vehicle. | 2021-09-30 |
20210303857 | INGREDIENT INQUIRY SYSTEM, INGREDIENT INQUIRY METHOD, AND INGREDIENT INQUIRY PROGRAM - According to an embodiment, an ingredient inquiry system includes a personal-information storing unit, an image input unit, an ingredient discriminating unit, an ingredient specifying unit, and a display. The personal-information storing unit stores, for each of a plurality of individuals, personal information imposing restrictions on the intake of ingredients used in food and drink. The image input unit inputs an image obtained by photographing the food and drink. The ingredient discriminating unit identifies the food and drink based on the image and discriminates the ingredients used in them. The ingredient specifying unit collates the personal information of each individual with the ingredients discriminated by the ingredient discriminating unit and specifies the ingredients relevant to each individual. The display displays a list of the ingredients specified for each individual by the ingredient specifying unit. | 2021-09-30 |
20210303858 | PHOTOGRAPHING APPARATUS AND PHOTOGRAPHING METHOD - A photographing apparatus includes a carriage, a light source, and a camera. The carriage moves in a travel direction. The light source is mounted on the carriage. The camera is attached to the carriage and configured to perform photography. A normal direction is defined orthogonal to the travel direction and a vertical axis. The camera faces a camera direction that is rotated about the vertical axis with respect to the normal direction. | 2021-09-30 |
20210303859 | SHARED AUGMENTED REALITY SYSTEM - An augmented reality system to perform operations that include: accessing image data at a client device; determining a position of a user of the client device based on the image data; causing display of a projection that extends from the position of the user upon a presentation of the image data at the client device; detecting an intersection of the projection and a surface of an object; generating a request that includes an identification of the portion of the surface of the object at the client device; and presenting the portion of the surface of the object based on the graphical property of the projection at the client device in response to the request that includes the identification of the portion of the surface of the object. | 2021-09-30 |
20210303860 | DISPLAYING OBJECT NAMES IN ASSOCIATION WITH AUGMENTED REALITY CONTENT - Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for displaying object names in association with augmented reality content. The program and method provide for receiving, by a messaging application running on a device, a first request to identify plural objects based on an image captured by a camera of the device; identifying, in response to receiving the first request, the plural objects based on the image; for each of the plural objects, determining at least one attribute of the object, and calculating a number of augmented reality content items, from plural augmented reality content items, corresponding to the at least one attribute of the object; selecting, from the plural objects, an object with a largest calculated number of corresponding augmented reality content items; and displaying a name for each of the plural objects based on the selecting. | 2021-09-30 |
20210303861 | SYSTEMS AND METHODS FOR OPTICAL CHARACTER RECOGNITION OF TEXT AND INFORMATION ON A CURVED SURFACE - A method for optical character recognition of text and information on a curved surface, comprising: activating an image capture device; scanning the surface using the image capture device to acquire a plurality of scans of sections of the surface; performing OCR on the plurality of scans; separating the OCRed content into layers for each of the plurality of scans; merging the separated layers into single layers; and merging the single layers into an image. | 2021-09-30 |
20210303862 | OBJECT DETECTION IN AN IMAGE - Embodiments of the present disclosure relate to object detection in an image. In an embodiment, a computer-implemented method is disclosed. According to the method, image data representing a scene is obtained and sound distribution information related to the scene is obtained. A detection strategy to be applied in object detection is determined based on the sound distribution information. The object detection is performed on the image data by applying the detection strategy. In other embodiments, a system and a computer program product are disclosed. | 2021-09-30 |
20210303863 | Machine Learning in Video Classification - Described herein are systems and methods that search videos and other media content to identify items, objects, faces, or other entities within the media content. Detectors identify objects within media content by, for instance, detecting a predetermined set of visual features corresponding to the objects. Detectors configured to identify an object can be trained using a machine learned model (e.g., a convolutional neural network) as applied to a set of example media content items that include the object. The systems provide user interfaces that allow users to review search results, pinpoint relevant portions of media content items where the identified objects are determined to be present, review detector performance and retrain detectors, provide search result feedback, and/or review video monitoring results and analytics. | 2021-09-30 |
20210303864 | METHOD AND APPARATUS FOR PROCESSING VIDEO, ELECTRONIC DEVICE, MEDIUM AND PRODUCT - A method, an apparatus, an electronic device, a medium, and a product for processing a video are presented. An implementation of the method includes: acquiring a target video; selecting, from at least one preset model, a preset model as a target model; determining output data of the target model based on the target video and the target model; reselecting, in response to determining that the output data does not meet a condition corresponding to the target model, another preset model as the target model from the at least one preset model until the output data of the target model meets the condition corresponding to the target model; and determining, based on the output data, a dynamic cover from the target video. | 2021-09-30 |
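The model-reselection loop in 20210303864 amounts to trying preset models in turn until one's output meets its condition. A minimal sketch, with the pairing of each model to its condition assumed:

```python
def pick_dynamic_cover(video, models_and_conditions):
    """Try each (model, condition) pair in order; return the first output
    that satisfies that model's condition, to be used as the dynamic cover.

    models_and_conditions: iterable of (callable, predicate) pairs.
    """
    for model, condition in models_and_conditions:
        output = model(video)
        if condition(output):
            return output
    return None  # no preset model produced an acceptable cover
```

In practice the output would identify a frame or clip of the target video; here plain values stand in for that.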
20210303865 | DERIVED COMPETITION - Video feeds may be analyzed to identify matchups between participants. The matchups may be used to derive a competition, context, statistics, and the like. A result of the derived competition may depend, at least in part, on an ordering and outcomes of a set of matchup events. The derived competition may be presented as a collection of video clips corresponding to the set of matchup events, related commentary, and/or related statistics. | 2021-09-30 |
20210303866 | METHOD, SYSTEM AND ELECTRONIC DEVICE FOR PROCESSING AUDIO-VISUAL DATA - A method, a system, and an electronic device for processing audio-visual data are provided. In the method, a first dataset is obtained, where the first dataset includes several data pairs, and each of the data pairs in the first dataset includes a video frame and an audio clip that match each other. A multi-channel feature extraction network model is established to extract the visual features of each video frame and the auditory features of each audio clip in the first dataset. A contrastive loss function model is established using the extracted visual features and the auditory features to train the multi-channel feature extraction network. A classifier is established to determine whether an input audio-visual data pair is matched. | 2021-09-30 |
20210303867 | SYSTEMS AND METHODS FOR MODELING AND CONTROLLING PHYSICAL DYNAMICAL SYSTEMS USING ARTIFICIAL INTELLIGENCE - The present disclosure provides systems, methods, and computer program products for controlling an object. An example method can comprise (a) obtaining video data of the object and (b) performing motion analysis on the video data to generate modified video data. The method can further comprise (c) using artificial intelligence (AI) to identify a set of features in the modified video data. The set of features may be indicative of a predicted state of the object. The AI may have been trained offline on historical training data. The method can further comprise (d) using the predicted state to determine a control signal and (e) transmitting, in real-time, the control signal to the object to adjust or maintain a state of the object in relation to the predicted state. Operations (a) to (d) can be performed without contacting the object. | 2021-09-30 |
20210303868 | ELECTRONIC APPARATUS AND METHOD FOR CONTROLLING ELECTRONIC APPARATUS - Provided is an electronic apparatus including a body for providing a process room where clothes are placed, a heater for supplying at least one of hot air and steam into the process room, at least one camera for photographing an inside of the process room, a display, a processor, and a memory, wherein the memory stores instructions to be executed by the processor to control the camera to generate a first clothing image by photographing first clothing introduced into the process room, obtain information about the first clothing by using the first clothing image, obtain information about second clothing matching the first clothing by using the information about the first clothing, control the display to display the obtained information about the second clothing, and obtain and display information about second clothing different from the displayed second clothing in response to a user input for changing a second clothing recommendation condition. When the information about the second clothing is estimated, the electronic apparatus may use a rule-based or artificial intelligence (AI) algorithm. When the information about the second clothing is estimated using the AI algorithm, the electronic apparatus may use a machine learning, neural network, or deep learning algorithm. | 2021-09-30 |
20210303869 | CONGESTION CONFIRMATION SYSTEM - A user can confirm the congestion degree of a congestion confirmation area at the present moment by a congestion confirmation system which includes mobile communication devices and a congestion confirmation server. The mobile communication device periodically detects and transmits the current location information to the congestion confirmation server. Receiving the current location information, the congestion confirmation server stores the current location information in a database in association with a current time. The mobile communication device sends a congestion confirmation request designating one of the congestion confirmation areas. In response to this request, the congestion confirmation server refers to the database and transmits the location information of mobile communication devices currently located in the congestion confirmation area as designated. Receiving the location information of the mobile communication devices, the mobile communication device which has sent the congestion confirmation request displays a map including the mobile communication devices. | 2021-09-30 |
20210303870 | VIDEO ANALYTIC SYSTEM FOR CROWD CHARACTERIZATION - A computer-implemented method for characterizing a crowd that includes recording a video stream of individuals at a location having at least one reference point for viewing; and extracting the individuals from frames of the video streams. The method may further include assigning tracking identification values to the individuals that have been extracted from the video streams; and measuring at least one type classification from the individuals having the tracking identification values. The method may further include generating a crowd designation further characterizing the individuals having the tracking identification values in the location, the crowd designation comprising at least one measurement of probability that the individuals having the tracking identification values in the location view the at least one reference point for viewing. | 2021-09-30 |
20210303871 | REAL-TIME SCENE MAPPING TO GPS COORDINATES IN TRAFFIC SENSING OR MONITORING SYSTEMS AND METHODS - Systems and methods for tracking objects through a traffic control system include an image sensor configured to capture a stream of images of a scene from an associated real-world position, an object tracker configured to identify an object in the captured images and define an associated object location in the captured images, a three-dimensional stage model system configured to transform the associated object location in the image to three-dimensional coordinates associated with the image sensor, and a three-dimensional world model configured to transform identified objects to real-world coordinates. Embodiments use lens aberration, sensor mounting height and location, accelerometer, gyro-compass and/or global position satellite information to generate a situational map. | 2021-09-30 |
20210303872 | CONTEXT DEPENDENT TRANSFER LEARNING ADAPTATION TO ACHIEVE FAST PERFORMANCE IN INFERENCE AND UPDATE - Autonomous vehicles may utilize neural networks for image classification in order to navigate infrastructures and foreign environments, using context dependent transfer learning adaptation. Techniques include receiving a transferable output layer from the infrastructure, which is a model suitable for the infrastructure and the local environment. Sensor data from the autonomous vehicle may then be passed through the neural network and classified. The classified data can map to an output of the transferable output layer, allowing the autonomous vehicle to obtain particular outputs for particular context dependent inputs, without requiring further parameters within the neural network. | 2021-09-30 |
20210303873 | LANE LINK GENERATION DEVICE AND COMPUTER READABLE MEDIUM - A lane link generation device includes an acquisition unit, a lane link generation unit, and a connection unit. The acquisition unit acquires first and second sections representing vehicle traveling areas. The lane link generation unit generates lane links in the first section and the second section. When the lane link of the first section and the lane link of the second section have endpoints, the connection unit determines a connection destination endpoint which is an endpoint of a connection destination to be connected to an endpoint and which is an endpoint belonging to a different section, based on determination rule information. The connection unit generates a lane link that connects the endpoint to the determined connection destination endpoint. | 2021-09-30 |
20210303874 | A POINT CLOUD-BASED LOW-HEIGHT OBSTACLE DETECTION SYSTEM - A method, apparatus, and system for determining a low-height obstacle based on outputs of a LIDAR device in an autonomous vehicle is disclosed. A point cloud comprising a plurality of points is generated based on outputs of a LIDAR device. For each point within a first number of lowest rings of points, a neighboring point in the same ring in a first direction is determined, and first and second coordinate-value differences are determined. First, second, third, and fourth quantities are determined based on the first and second differences. In response to determining that the first, second, third, and fourth quantities satisfy a predetermined condition, a low-height obstacle is determined based on the points within the first number of lowest rings of points. Operations of an autonomous vehicle are controlled based at least in part on the determined low-height obstacle. | 2021-09-30 |
20210303875 | DETECTING DEBRIS IN A VEHICLE PATH - In some examples, one or more processors may receive at least one image of a road, and may determine at least one candidate group of pixels in the image as potentially corresponding to debris on the road. The one or more processors may determine at least two height-based features for the candidate group of pixels. For instance, the at least two height-based features may include a maximum height associated with the candidate group of pixels relative to a surface of the road, and an average height associated with the candidate group of pixels relative to the surface of the road. In addition, the one or more processors may determine at least one weighting factor based on comparing the at least two height-based features to respective thresholds, and may determine whether the group of pixels corresponds to debris based at least on the comparing. | 2021-09-30 |
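A minimal reading of the two height-based features in 20210303875: compare the candidate group's maximum and average heights above the road surface to their own thresholds, and combine the comparisons into a weighting. The threshold values and combination rule here are illustrative assumptions:

```python
def is_debris(heights, max_thresh=0.10, avg_thresh=0.05):
    """Classify a candidate pixel group as debris when both its maximum
    and its average height above the road surface (in meters, say) clear
    their respective thresholds. Thresholds are hypothetical values.
    """
    h_max = max(heights)
    h_avg = sum(heights) / len(heights)
    weight = (1.0 if h_max > max_thresh else 0.0) + \
             (1.0 if h_avg > avg_thresh else 0.0)
    return weight >= 2.0
```

A real system would derive per-pixel heights from stereo or LIDAR data; a flat list stands in for that here.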
20210303876 | METHOD FOR TRAINING A DRIVING RELATED OBJECT DETECTOR - A method for driving-related object detection, the method may include receiving an input image by an input of an object detector; and detecting, by an object detector, objects that appear in the input image. The detecting includes searching for (i) a first object having a first size that is within a first size range and belongs to a four wheel vehicle class, (ii) a second object having a second size that is within a second size range and belongs to a subclass out of multiple four wheel vehicle subclasses, (iii) a pedestrian, and (iv) a two wheel vehicle; wherein a maximum of the first size range does not substantially exceed a minimum of the second size range. | 2021-09-30 |
20210303877 | SYSTEMS AND METHODS FOR AUGMENTING PERCEPTION DATA WITH SUPPLEMENTAL INFORMATION - Examples disclosed herein may involve a computing system that is configured to (i) obtain previously-derived perception data for a collection of sensor data including a sequence of frames observed by a vehicle within one or more scenes, where the previously-derived perception data includes a respective set of object-level information for each of a plurality of objects detected within the sequence of frames, (ii) derive supplemental object-level information for at least one object detected within the sequence of frames that adds to the previously-derived object-level information for the at least one object, (iii) augment the previously-derived perception data to include the supplemental object-level information for the at least one object, and (iv) store the augmented perception data in an arrangement that encodes a hierarchical relationship between the plurality of objects, the sequence of frames, and the one or more scenes. | 2021-09-30 |
20210303878 | OBSTACLE DETECTION APPARATUS, OBSTACLE DETECTION METHOD, AND PROGRAM - An obstacle detection apparatus includes an operation control portion configured to control operation of a movable portion, a first obtaining portion configured to obtain captured image data from an imager provided at the movable portion, the captured image data including first captured image data when the movable portion is in a first state and second captured image data when the movable portion is in a second state, a second obtaining portion configured to obtain moving amount information of the movable portion, an imager position calculation portion configured to calculate imager position information including a position of the imager when the movable portion is in the first state and a position of the imager when the movable portion is in the second state, and an obstacle position calculation portion configured to calculate a three-dimensional position of an obstacle included in the first captured image data and the second captured image data. | 2021-09-30 |
20210303879 | METHOD FOR EVALUATING SENSOR DATA, INCLUDING EXPANDED OBJECT RECOGNITION - A method for evaluating sensor data. The sensor data are ascertained by scanning a surrounding area, using at least one sensor. On the basis of the sensor data, object detection is carried out for determining objects from the sensor data. Object filtering is carried out. Surface characteristics of at least one object are identified, and/or the surface characteristics of at least one object are ascertained with the aid of access to a database. A control unit is also described. | 2021-09-30 |
20210303880 | DYNAMIC SENSOR OPERATION AND DATA PROCESSING BASED ON MOTION INFORMATION - Methods and apparatuses are disclosed for determining a characteristic of a device's object detection sensor oriented in a first direction. An example device may include one or more processors. The device may further include a memory coupled to the one or more processors, the memory including one or more instructions that when executed by the one or more processors cause the device to determine a direction of travel for the device, compare the direction of travel to the first direction to determine a magnitude of difference, and determine a characteristic of the object detection sensor based on the magnitude of difference. | 2021-09-30 |
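The magnitude-of-difference comparison in 20210303880 reduces to the smallest angle between the direction of travel and the sensor's axis. The rate-scaling policy below is an assumed example of the "characteristic" being adjusted:

```python
def angle_difference(travel_deg, sensor_deg):
    """Smallest absolute angle, in degrees, between the direction of
    travel and the sensor's pointing direction."""
    diff = abs(travel_deg - sensor_deg) % 360.0
    return min(diff, 360.0 - diff)

def sensor_rate_hz(travel_deg, sensor_deg, base_hz=30.0):
    """Hypothetical policy: reduce the sensor's sampling rate as it
    points further away from the direction of travel, with a floor."""
    diff = angle_difference(travel_deg, sensor_deg)
    return base_hz * max(0.1, 1.0 - diff / 180.0)
```

A forward-facing sensor on a forward-moving device keeps the full rate; a rear-facing one is throttled.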
20210303881 | PARKING SPACE RECOGNITION SYSTEM AND PARKING ASSIST SYSTEM INCLUDING THE SAME - A parking space recognition system includes an external environment information acquiring device and a parking space candidate detecting device. The parking space candidate detecting device includes: a virtual line calculating unit configured to calculate a virtual line connecting road side ends of parking space lines adjacent to each other; an angle calculating unit configured to calculate an angle between the virtual line and one of the parking space lines; a parking type determining unit configured to determine a parking type; and a parking space candidate setting unit configured to set at least one provisional parking space in an area between the parking space lines based on positions of the parking space lines, and to set an available parking area among the provisional parking spaces as the parking space candidate. | 2021-09-30 |
20210303882 | SYSTEM AND METHOD FOR INTERSECTION MANAGEMENT BY AN AUTONOMOUS VEHICLE - Systems and methods for navigating intersections autonomously or semi-autonomously can include, but are not limited to including, accessing data related to the geography and traffic management features of the intersection, executing autonomous actions to navigate the intersection, and coordinating with one or more processors and/or operators executing remote actions, if necessary. Traffic management features can be identified by using various types of images such as oblique images. | 2021-09-30 |
20210303883 | DETERIORATION DIAGNOSIS DEVICE, DETERIORATION DIAGNOSIS SYSTEM, DETERIORATION DIAGNOSIS METHOD, AND STORAGE MEDIUM FOR STORING PROGRAM - A deterioration diagnosis device including an acquisition unit that acquires sensing information including at least a captured image captured by an image capture device mounted on a moving body, driving condition information indicating driving details of the moving body, and position information corresponding to the captured image and the driving condition information; a deterioration degree analysis unit that analyzes a deterioration degree of an inspection target appearing in the captured image; and a priority ranking computation unit that computes a priority ranking of the inspection target based on deterioration degrees of the same inspection target appearing in multiple captured images identified by the position information, and the driving condition information corresponding to the identified inspection target. | 2021-09-30 |
20210303884 | AUTOMATIC POSITIONING OF 2D IMAGE SIGN SIGHTINGS IN 3D SPACE - An apparatus for sign detection includes a point cloud analysis module, an image analysis module, a frustum comparison module, and a sign detector. The point cloud analysis module is configured to receive point cloud data associated with a geographic region and classify at least one point neighborhood in the point cloud data as planar and a sign position candidate. The image analysis module is configured to receive image data associated with the geographic region and calculate a sighting frustum from the image data. The frustum comparison module is configured to perform a comparison of the sighting frustum to the sign position candidate having at least one point neighborhood classified as planar. The sign detector is configured to provide a location for the sign detection in response to the comparison of the sighting frustum to the sign position candidate. | 2021-09-30 |
20210303885 | GENERATIVE ADVERSARIAL NETWORK MODELS FOR SMALL ROADWAY OBJECT DETECTION - Systems, methods, and non-transitory computer-readable media for detecting small objects in a roadway scene. A camera is coupled to a vehicle and configured to capture a roadway scene image. An electronic controller is coupled to the camera and configured to receive the roadway scene image from the camera. The electronic controller is also configured to generate a Generative Adversarial Network (GAN) model using the roadway scene image. The electronic controller is further configured to determine a distribution indicating how likely each location in the roadway scene image can contain a roadway object using the GAN model. The electronic controller is also configured to determine a plurality of locations in the roadway scene image by sampling the distribution. The electronic controller is further configured to detect the roadway object at one of the plurality of locations in the roadway scene image. | 2021-09-30 |
20210303886 | SEMANTICALLY-CONSISTENT AUGMENTED TRAINING DATA FOR TRAFFIC LIGHT DETECTION - Methods, systems, and non-transitory computer-readable media for generating augmented data to train a deep neural network to detect traffic lights in image data. The method includes receiving a plurality of real roadway scene images and selecting a subset of the plurality of real roadway scene images. The method also includes selecting an image from the subset and determining a distribution indicating how likely each location in the selected image can contain a traffic light. The method further includes selecting a location in the selected image by sampling the distribution and superimposing a traffic light image onto the selected image at the selected location to generate an augmented roadway scene image. The method also includes processing each image in the subset to generate a plurality of augmented roadway scene images. The method further includes training a deep neural network model using the pluralities of real and augmented roadway scene images. | 2021-09-30 |
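The sample-then-superimpose step shared by 20210303885 and 20210303886 can be sketched on a toy grid image: draw a location from the per-location likelihood distribution, then paste a patch there. The distribution format and the single-value patch marker are assumptions for brevity:

```python
import random

def augment_scene(image, location_probs, patch=1, rng=random):
    """Sample an (x, y) location from a per-location distribution and
    stamp a marker there, standing in for superimposing a traffic-light
    image patch. image is a list of rows; location_probs maps (x, y)
    tuples to sampling weights.
    """
    locations = list(location_probs)
    weights = [location_probs[loc] for loc in locations]
    (x, y) = rng.choices(locations, weights=weights, k=1)[0]
    augmented = [row[:] for row in image]  # leave the input untouched
    augmented[y][x] = patch
    return (x, y), augmented
```

Sampling the distribution, rather than placing patches uniformly, is what keeps the augmentation semantically consistent: lights land where lights plausibly appear.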
20210303887 | VEHICLE CARGO CAMERAS FOR SENSING VEHICLE CHARACTERISTICS - Described herein are systems, methods, and computer readable media for capturing image data of one or more regions of a vehicle (e.g., a cargo area of an autonomous vehicle) at various particular times and assessing the image data to determine whether a past vehicle occupant has left behind one or more belongings of value in the vehicle. If it is determined that a former vehicle occupant has left behind an article of value, an audible message may be outputted from a speaker of the vehicle to inform the former occupant of the presence of the article in the vehicle or a notification may be sent to a mobile device of the former occupant. The audible message may be outputted, for example, while the former occupant is beyond a predetermined distance from the vehicle, but still within range of hearing the message. | 2021-09-30 |
20210303888 | Methods and System for Predicting Driver Awareness of a Feature in a Scene - An embodiment takes the form of a training server that presents a video comprising a plurality of frames, each comprising a respective scene representation of a scene at a respective time. The scene representations comprise respective representations of a feature in the scene. The training server presents a respective gaze representation of a driver gaze for each frame. The gaze representations comprise respective representations of driver gaze locations at the times of the respective scene representations. The training server generates an awareness prediction via a neural network based on the driver gaze locations, the awareness prediction reflecting a predicted driver awareness of the feature. The training server receives an awareness indication associated with the video and the gaze representations, and trains the neural network based on a comparison of the awareness prediction with the awareness indication. | 2021-09-30 |
20210303889 | EYE OPENING DEGREE CALCULATION DEVICE - An eye opening degree calculation device includes: a processor configured to: calculate a degree of eye opening of a crew, based on an image in which a face of the crew appears; calculate a face direction angle of the crew with respect to a predetermined reference direction, based on the image; calculate a line-of-sight angle of the crew with respect to the predetermined reference direction, based on the image; correct the degree of eye opening or a threshold value which will be compared with the degree of eye opening, when a difference between the face direction angle and the line-of-sight angle is equal to or greater than a predetermined value; and determine an eye opening state of the crew, based on the corrected degree of eye opening or a comparison result of the degree of eye opening with the corrected threshold value. | 2021-09-30 |
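The threshold correction in 20210303889 can be sketched as follows: when face direction and line-of-sight diverge beyond a limit, the comparison threshold is lowered before judging the eye-opening state. The numeric threshold, angle limit, and correction amount are illustrative assumptions:

```python
def eye_open_state(openness, face_deg, gaze_deg,
                   threshold=0.3, angle_limit=20.0, correction=0.1):
    """Return True if the eyes are judged open. When the face direction
    and gaze angle differ by at least angle_limit degrees (e.g., a
    downward gaze that narrows the measured opening), lower the
    threshold rather than penalize the measured degree of opening.
    """
    if abs(face_deg - gaze_deg) >= angle_limit:
        threshold -= correction
    return openness >= threshold
```

The abstract allows correcting either the measured degree or the threshold; this sketch corrects the threshold.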
20210303890 | METHOD AND APPARATUS FOR FOREGROUND GEOMETRY AND TOPOLOGY BASED FACE ANTI-SPOOFING - A method and system to detect visual spoofing of a process of authenticating a person's identity employs computer vision techniques to define characteristics of different kinds of spoofing. Embodiments identify a foreground object within an image and by examining positions and/or orientations of that foreground object within the image, determine whether the presentation of the foreground object is an attempt to spoof the authentication process. | 2021-09-30 |
20210303891 | LIVING BODY DETECTION METHOD, APPARATUS AND DEVICE - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for liveness detection are provided. One of the methods includes: displaying an image to a user, and capturing a face image of the user while displaying the image to the user; determining an eye image of an eye of the user based on the face image; extracting a to-be-verified image from the eye image of the user, wherein the to-be-verified image is reflection of the displayed image in the eye of the user; comparing the displayed image with the to-be-verified image to determine whether the to-be-verified image matches the displayed image; and performing liveness detection on the user based on a result of comparison. | 2021-09-30 |
20210303892 | METHOD AND SYSTEM TO SECURELY REGISTER A USER IN A BIOMETRIC SYSTEM - A method and system for securely registering a user in a biometric system. The method includes: receiving, on a computer processor of the biometric system, an identifier of a biometric device and user information; sending, with the computer processor, a request that the biometric device be sent to the user of the biometric device upon the receipt of the identifier of the biometric device and user information; receiving, on the computer processor, the identifier of the biometric device and one or more authenticators from the user; initiating, with the computer processor, a registration of the user based on the receipt of the identifier of the biometric device and the one or more authenticators from the user; and receiving, on the computer processor, biometric data of the user from the biometric device to complete a registration of the user in the biometric system. | 2021-09-30 |
20210303893 | BIOMETRIC INFORMATION AUTHENTICATION DEVICE - A biometric information authentication device includes a control unit, wherein, when a first biometric authentication is successfully completed based on biometric information acquired from an operator and registered biometric information preliminarily registered by a registered person and an operation for registering new registered biometric information is subsequently performed, the control unit permits registration upon a successful second biometric authentication based on biometric information acquired again and the registered biometric information preliminarily registered by the registered person. | 2021-09-30 |
20210303894 | IMAGE PROCESSING APPARATUS, IMAGE RECOGNITION SYSTEM, AND RECORDING MEDIUM - An image processing apparatus includes a memory; and a processor coupled to the memory and configured to: identify a first recognition error, the first recognition error being an error between ground truth data and a first recognition result obtained by inputting a first feature of image data into a first image recognition model, generate a second feature obtained by adding noise to the first feature of the image data, identify a second recognition error, the second recognition error being an error between the first recognition result and a recognition result obtained by inputting the second feature into a second image recognition model, and execute training of the first image recognition model and the second image recognition model based on the first recognition error and the second recognition error. | 2021-09-30 |
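The two-error training signal in 20210303894 pairs a ground-truth error with a consistency error measured on a noise-perturbed feature. A scalar sketch, with the models as plain functions and the errors summed into one loss (the combination rule is an assumption):

```python
def combined_loss(first_model, second_model, feature, noise, truth):
    """Sum of (a) the first model's error against ground truth and
    (b) the second model's deviation, on a noise-perturbed feature,
    from the first model's result. Scalars stand in for feature maps.
    """
    r1 = first_model(feature)
    r2 = second_model(feature + noise)
    first_error = abs(truth - r1)
    second_error = abs(r1 - r2)
    return first_error + second_error
```

Minimizing the second term pushes the second model to agree with the first even under feature noise, a form of consistency regularization.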
20210303895 | INFORMATION PROCESSING APPARATUS FOR OBTAINING CHARACTER STRING - Correction content is made learnable based on a correction operation performed by a user on an attribute setting screen in setting attribute information, such as a filename, based on a character string obtained by character recognition processing on a scan image. | 2021-09-30 |
20210303896 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - An information processing apparatus includes a map creation unit configured to create a defocus map corresponding to a captured image of a subject, an object setting unit configured to set a recognition target, and a determination unit configured to determine, based on the defocus map, whether the recognition target is recognizable in the image. | 2021-09-30 |
20210303897 | GAMING ENVIRONMENT TRACKING OPTIMIZATION - A gaming system that receives a frame of image data captured by a camera at a gaming table, generates a set of images from portions of the frame of image data, and determines whether the set of images meets an input requirement of a neural network model. If the set of images does not meet the input requirement, the gaming system modifies, by an incremental amount, an image property of a subset from the set of images until the set of images meets the input requirement. When the set of images meets the input requirement, the gaming system transmits the set of images as a unit (e.g., as a composite of the set of images) to the neural network model for concurrent analysis. | 2021-09-30 |
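The incremental adjustment loop in 20210303897 can be sketched with brightness as the image property being nudged. Representing each image by a single brightness value, and brightening the dimmest image first, are assumptions for brevity:

```python
def normalize_images(brightness, meets_requirement, step=0.1, max_iters=100):
    """Nudge an image property (here, per-image brightness) by a fixed
    increment until the whole set meets the model's input requirement.

    brightness: list of per-image brightness values.
    meets_requirement: predicate over the full list.
    """
    values = list(brightness)
    for _ in range(max_iters):
        if meets_requirement(values):
            return values
        i = min(range(len(values)), key=lambda k: values[k])
        values[i] += step  # brighten the dimmest image by one increment
    raise ValueError("input requirement not met within max_iters")
```

Only the failing subset is modified, matching the abstract's "subset from the set of images."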
20210303898 | COMBINED SENSOR CALIBRATION TARGET FOR CALIBRATING MULTIPLE TYPES OF SENSORS, INCLUDING INFRARED CAMERAS AND VISIBLE LIGHT CAMERAS - A sensor calibration target includes dark markings on a light substrate. The dark markings absorb more heat from a heat source than the light substrate, and are visually distinguishable from the light substrate. Thus, the dark markings are discernible by both a camera and an infrared sensor of a vehicle during vehicle sensor calibration. The sensor calibration target may otherwise be non-metallic but include a metallic feature for calibrating a radio detection and ranging (RADAR) sensor. The sensor calibration target may also include one or more apertures in the substrate for calibrating a light detection and ranging (LIDAR) sensor. | 2021-09-30 |
20210303899 | SYSTEMS AND METHODS FOR AUTOMATIC RECOGNITION OF VEHICLE INFORMATION - Disclosed systems and methods provide automatic recognition of information from stationary and/or moving vehicles. A disclosed system includes an image capture device that captures an image of a vehicle surface and thereby generates image data. A processor circuit receives the image data from the image capture device and may process the image data to determine a Department of Transportation (DOT) number. The processor circuit may control the image capture device to capture a plurality of images, to detect and recognize text characters in each of the plurality of images, and to compare probabilities of likely DOT numbers determined from each of the plurality of images. The processor circuit may be further configured to determine DOT numbers from captured images by processing image data using a machine learning algorithm. The system may be configured to be portable and to perform real-time analysis using an application specific integrated circuit (ASIC). | 2021-09-30 |
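Comparing DOT-number probabilities across multiple captures, as in 20210303899, is essentially a vote over per-frame OCR candidates. The sketch below sums probabilities per candidate number, one plausible combination rule rather than necessarily the filing's:

```python
from collections import defaultdict

def best_dot_number(per_frame_candidates):
    """Combine per-frame OCR candidates by summing their probabilities
    and returning the most likely DOT number across all frames.

    per_frame_candidates: list of dicts, each mapping a candidate
    number string to that frame's probability for it.
    """
    totals = defaultdict(float)
    for candidates in per_frame_candidates:
        for number, prob in candidates.items():
            totals[number] += prob
    return max(totals, key=totals.get)
```

Pooling across frames lets a number misread in one capture be recovered from the others, which is the point of capturing a plurality of images.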
20210303900 | Method and Apparatus for Measuring Endolymphatic Hydrops Ratio of Inner Ear Organ Using Artificial Neural Network - Provided are a method and an apparatus for measuring an endolymphatic hydrops ratio of inner ear organs using an artificial neural network. The method of measuring an endolymphatic hydrops ratio includes obtaining a plurality of frame images obtained by capturing inner ear organs, obtaining a plurality of pieces of mask data corresponding to each of the plurality of frame images by inputting the plurality of frame images into a neural network, clustering the plurality of pieces of mask data according to the inner ear organs and obtaining representative images according to the inner ear organs according to certain conditions, and overlapping a target image synthesized by using the plurality of frame images and the representative images according to the inner ear organs so as to measure an endolymphatic hydrops ratio. | 2021-09-30 |
20210303901 | TEXT LOCATION METHOD AND APPARATUS - Aspects of the present invention provide a new text location technique, which can be applied to general handwriting detection at a variety of levels, including characters, words, and sentences. The inventive technique is efficient in training deep learning systems to locate text. The technique works for different languages, for text in different orientations, and for overlapping text. In one aspect, the technique's ability to separate overlapping text also makes the technique useful in application to overlapping objects. Embodiments take advantage of a so-called skyline appearance that text tends to have. Recognizing a skyline appearance for text can facilitate the proper identification of bounding boxes for the text. Even in the case of overlapping text, discernment of a skyline appearance for words can help with the proper identification of bounding boxes for each of the overlapping text words/phrases, thereby facilitating the separation of the text for purposes of recognition. | 2021-09-30 |
20210303902 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM - An image processing device includes: at least one memory configured to store instructions; and at least one processor configured to execute the instructions to: calculate a feature amount of each of feature amount calculation ranges of a plurality of subdivisions set in a processing target range of a captured image obtained by capturing an image of a recognition object; and specify a plurality of feature amount calculation ranges capable of constituting vertices of a polygon among the feature amount calculation ranges as feature ranges based on the feature amount. | 2021-09-30 |
20210303903 | OBJECT RECOGNITION DEVICE, OBJECT RECOGNITION LEARNING DEVICE, METHOD, AND PROGRAM - An object included in a low-resolution image can be recognized with a high degree of precision. An acquisition unit acquires, from a query image, an increased-resolution image, which is acquired by increasing the resolution of the query image, by performing pre-learned acquisition processing for increasing the resolution of an image. A feature extraction unit, using the increased-resolution image as input, extracts a feature vector of the increased-resolution image by performing pre-learned extraction processing for extracting a feature vector of an image. A recognition unit recognizes an object captured on the increased-resolution image on the basis of the feature vector of the increased-resolution image and outputs the recognized object as the object captured on the query image. | 2021-09-30 |
20210303904 | METHODS FOR DETERMINING UNIT LOAD DEVICE (ULD) CONTAINER TYPE USING TEMPLATE MATCHING - Methods for determining a unit load device (ULD) container type are disclosed herein. An example method includes capturing a set of image data featuring the ULD and aligning the set of image data with a template. The method further includes converting the set of image data and the template to down-sampled grids including a plurality of rows and columns. The method further includes removing portions of the image data grid that do not exceed a density threshold. The method further includes identifying a ULD border and a template border by extracting leftmost, rightmost, and topmost grid values from the respective grids. The method further includes calculating a match score corresponding to the template by determining a shortest respective distance between grid values in the ULD border and the template border, and determining a ULD container type corresponding to the ULD based on the match score. | 2021-09-30 |
20210303905 | METHOD, SYSTEM, AND NON-TRANSITORY COMPUTER READABLE RECORD MEDIUM FOR EXTRACTING AND PROVIDING TEXT COLOR AND BACKGROUND COLOR IN IMAGE - A method for extracting and providing a text color and background color in an image, includes detecting a first area that includes a text in a given image; extracting, from the first area, a representative text color that represents the text and a representative background color that represents a background of the first area; and overlaying a second area that includes a translation result of the text on the given image and applying the representative text color and the representative background color to a text color and a background color of the second area. | 2021-09-30 |
20210303906 | IMAGE PROCESSING METHOD, APPARATUS AND COMPUTER PROGRAM - The present invention provides a novel algorithm for salience detection based on a dual rail antagonistic structure to predict where people look in images in a free-viewing condition. Furthermore, the proposed algorithm can be effectively applied to both still and moving images in visual media without any parameter tuning in real-time. | 2021-09-30 |
20210303907 | SYSTEM AND METHOD OF IDENTIFYING VEHICLE BRAND AND MODEL - The present invention relates to a method of identifying a brand and a model of a vehicle. The method comprises obtaining one or more video frames depicting at least one vehicle. Further, the method comprises detecting one of presence or absence of text in the at least one vehicle. Upon detecting the presence of the text, the method comprises determining at least one of the brand and the model of the at least one vehicle by accumulating the text recognized in the one or more video frames, and identifying at least one of the brand and the model of the at least one vehicle based on the accumulated text. Upon detecting the absence of the text, the method comprises determining the brand of the at least one vehicle based on a logo associated with the vehicle. | 2021-09-30 |
20210303908 | STRONG LASER SCANNER MATCHING METHOD AND APPARATUS CONSIDERING MOVEMENT OF GROUND ROBOT - A robust laser scanner matching method and device considering the movement of a ground robot are disclosed. A scan matching method according to an embodiment of the inventive concept includes receiving two point clouds, sequentially inspecting height values until a point is within a specific height range to search for correspondence points between the two point clouds, and performing scan matching between the two point clouds based on the search result of the correspondence points. | 2021-09-30 |
20210303909 | FLEXIBLE ACCELERATOR FOR SPARSE TENSORS IN CONVOLUTIONAL NEURAL NETWORKS - A system with a multiplication circuit having a plurality of multipliers is disclosed. Each of the plurality of multipliers is configured to receive a data value and a weight value to generate a product value in a convolution operation of a machine learning application. The system also includes an accumulator configured to receive the product value from each of the plurality of multipliers and a register bank configured to store an output of the convolution operation. The accumulator is further configured to receive a portion of values stored in the register bank and combine the received portion of values with the product values to generate combined values. The register bank is further configured to replace the portion of values with the combined values. | 2021-09-30 |
20210303910 | SYSTEMS AND METHODS FOR MODEL-BASED IMAGE ANALYSIS - A system for categorizing images is provided. The system is programmed to store a first training set of images. Each image of the first training set of images is associated with an image category of a plurality of image categories. The system is further programmed to analyze each image of the first training set of images to determine one or more features associated with each of the plurality of image categories and receive a second training set of images. The second training set of images includes one or more errors. The system is also programmed to analyze each image of the second training set of images to determine one or more features associated with an error category and generate a model to identify each of the image categories based on the analysis such that the model includes the error category in the plurality of image categories. | 2021-09-30 |
20210303911 | METHOD OF SEGMENTING PEDESTRIANS IN ROADSIDE IMAGE BY USING CONVOLUTIONAL NETWORK FUSING FEATURES AT DIFFERENT SCALES - The present invention discloses a roadside image pedestrian segmentation method based on a variable-scale multi-feature fusion convolutional network. For scenes in which the pedestrian scale changes significantly in intelligent roadside terminal images, the method designs two parallel convolutional neural networks to extract the local and global features of pedestrians at different scales in the image. The local and global features extracted by the first network are fused with the local and global features extracted by the second network at the same level, and the fused local features and global features are then fused a second time to obtain a variable-scale multi-feature fusion convolutional neural network. The network is then trained, and roadside pedestrian images are input to it to realize pedestrian segmentation. The present invention effectively addresses the fuzzy segmentation boundaries and missed segmentations to which most current pedestrian segmentation methods based on a single network structure are prone. | 2021-09-30 |
20210303912 | POINT CLOUD BASED 3D SEMANTIC SEGMENTATION - System and techniques are provided for three-dimension (3D) semantic segmentation. A device for 3D semantic segmentation includes: an interface, to obtain a point cloud data set for a time-ordered sequence of 3D frames, the 3D frames including a current 3D frame and one or more historical 3D frames previous to the current 3D frame; and processing circuitry, to: invoke a first artificial neural network (ANN) to estimate a 3D scene flow field for each of the one or more historical 3D frames by taking the current 3D frame as a reference frame; and invoke a second ANN to: produce an aggregated feature map, based on the reference frame and the estimated 3D scene flow field for each of the one or more historical 3D frames; and perform the 3D semantic segmentation based on the aggregated feature map. | 2021-09-30 |
20210303913 | AUTOMATIC INTRUSION DETECTION METHOD AND APPARATUS - Disclosed are systems and methods for improving interactions with and between computers in distributional similarity identification using randomized observations. In connection with an intrusion detection system monitoring a computing system, a pair of perturbed sample sets is generated using a pair of real sample sets (of real observations) and a pair of random sample sets (of randomly selected observations), and a similarity measure representing a level of consistency in user behavior is determined. The systems improve the quality and accuracy of the similarity determination for use in intrusion detection. | 2021-09-30 |
20210303914 | CLOTHING COLLOCATION - A method includes: acquiring an image of a first piece of clothing to be collocated; determining information of one or more second pieces of clothing for collocation with the first piece of clothing; determining clothing collocation images containing the information of the one or more second pieces of clothing in a collocation image library; and selecting clothing collocation images matched with the image of the first piece of clothing from the determined clothing collocation images. The information of the one or more second pieces of clothing is pre-marked clothing category information of clothing collocation images in the collocation image library. | 2021-09-30 |
20210303915 | INTEGRATED CLUSTERING AND OUTLIER DETECTION USING OPTIMIZATION SOLVER MACHINE - According to an aspect of an embodiment, operations include receiving a set of datapoints for integrated clustering and outlier detection. The operations further include receiving, as a first input, a clustering constraint comprising a number of outlier datapoints to be detected from the set of datapoints and a second input including a distance metric. The operations further include formulating an objective function based on the first and second inputs and transforming the objective function into an unconstrained binary optimization formulation. The operations further include providing such formulation as input to an optimization solver machine and generating a clustering result and an outlier detection result based on output of the optimization solver machine for the input. The clustering result includes a set of datapoint clusters, and the outlier detection result includes a set of outlier datapoints. The clustering result and the outlier detection result are published on a publisher system. | 2021-09-30 |
20210303916 | SYSTEMS AND METHODS FOR CLUSTERING USING A SMART GRID - Systems, methods, and other embodiments described herein relate to improving clustering of points within a point cloud. In one embodiment, a method includes grouping the points into cells of a grid. The grid divides an observed region of a surrounding environment associated with the point cloud into the cells. The method includes computing feature vectors for the cells that use cell features to characterize the points in the cells and relationships between the cells. The method includes analyzing the feature vectors according to a clustering model to identify clusters for the cells. The clustering model evaluates the cells to identify which of the cells belong to common entities. The method includes providing the clusters as assignments of the points to the entities depicted in the point cloud. | 2021-09-30 |
20210303917 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM, AND ULTRASONIC DIAGNOSIS APPARATUS - An information processing apparatus includes an acquisition unit configured to acquire ultrasonic image data, an inference unit configured to infer a body mark corresponding to the ultrasonic image data acquired by the acquisition unit, and a display control unit configured to display the body mark inferred by the inference unit together with the ultrasonic image data. | 2021-09-30 |
20210303918 | BEHAVIOR CONTROL SYSTEM - A behavior control system includes: a behavior control device for changing a behavior of a moving body; an imaging device for acquiring images ahead of the moving body; and a control device for controlling the behavior of the moving body based on the images acquired by the imaging device. The control device includes: a feature point extraction unit configured to extract feature points from the images acquired by the imaging device; a straight moving state storing unit configured to store, as a reference feature point state, a feature point state when the moving body is in a straight moving state, the feature point state being obtained based on a temporal change of the feature points; and a moving body control unit configured to control the behavior control device when the feature point state obtained based on the temporal change of the feature points differs from the reference feature point state. | 2021-09-30 |
20210303919 | IMAGE PROCESSING METHOD AND APPARATUS FOR TARGET RECOGNITION - The embodiment of the application provides an image processing method and apparatus for target recognition. The method comprises: inputting N frames of images into a quality evaluation network model; determining, with the quality evaluation network model, a feature vector of each of the N frames of images according to an attention weight of a preset sub-region image and M quality evaluation parameters; determining quality evaluation values of the N frames of images according to the feature vectors of the N frames of images; and determining a target image or a target vector for target recognition according to the quality evaluation values of the N frames of images. Because each frame of image is evaluated according to the attention weight of the sub-region image and the M quality evaluation parameters, the accuracy of image quality evaluation for each frame of image is improved, and obtaining the quality evaluation value from the feature vector of the image in turn improves the imaging quality of the target image. | 2021-09-30 |
20210303920 | METHOD AND SYSTEM FOR DETECTING AND TRACKING OBJECTS IN A ROTATIONAL ENVIRONMENT - This disclosure relates to method and system for detecting and tracking at least one object in a rotational environment. The method includes receiving a set of first features based on first data and a set of second features based on second data, detecting at least one object based on the set of first features using a Convolutional Neural Network (CNN) based predictive model, determining a set of first parameters for the at least one object, detecting the at least one object based on the set of second features using the CNN based predictive model, determining a set of second parameters for the at least one object, and tracking the at least one object based on the set of first parameters and the set of second parameters. It should be noted that the first data and the second data sequentially belong to an input dataset that includes images or video frames. | 2021-09-30 |
20210303921 | CROSS-MODALITY PROCESSING METHOD AND APPARATUS, AND COMPUTER STORAGE MEDIUM - A cross-modality processing method is related to the field of natural language processing technologies. The method includes: obtaining a sample set, wherein the sample set includes a plurality of corpora and a plurality of images; generating a plurality of training samples according to the sample set, in which each of the plurality of training samples is a combination of at least one of the plurality of corpora and at least one of the plurality of images corresponding thereto; and adopting the plurality of training samples to train a semantic model, so that the semantic model learns semantic vectors containing combinations of the corpora and the images. | 2021-09-30 |
20210303922 | Systems and Methods for Training Object Detection Models Using Adversarial Examples - Systems and methods for training object detection models using adversarial examples are provided. A method includes obtaining a training scene and identifying a target object within the training scene. The method includes obtaining an adversarial object and generating a modified training scene based on the adversarial object, the target object, and the training scene. The modified training scene includes the training scene modified to include the adversarial object placed on the target object. The modified training scene is input to a machine-learned model configured to detect the target object. A detection score is determined based on whether the target object is detected, and the machine-learned model and the parameters of the adversarial object are trained based on the detection score. The machine-learned model is trained to maximize the detection score. The parameters of the adversarial object are trained to minimize the detection score. | 2021-09-30 |
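Several abstracts above describe procedures concrete enough to sketch in code. The incremental-adjustment loop of 20210303897 (nudging an image property of the failing subset until the whole set meets the model's input requirement) might look like the following minimal sketch. This is illustrative only, not the claimed implementation; `ok` and `adjust` are hypothetical stand-ins for the input-requirement check and the image-property modification, and images are modeled as bare brightness values for brevity:

```python
def normalize_for_model(images, ok, adjust, step=0.05, max_steps=100):
    """Sketch of the loop in 20210303897: repeatedly find the subset of
    images failing the model's input requirement and modify an image
    property of each by a fixed increment, until the whole set passes."""
    for _ in range(max_steps):
        failing = [i for i, im in enumerate(images) if not ok(im)]
        if not failing:
            return images  # the set is ready to send as one composite unit
        for i in failing:
            images[i] = adjust(images[i], step)
    raise RuntimeError("input requirement not met within the step budget")


# Toy usage: brightness floats, requirement "mean brightness >= 0.5".
result = normalize_for_model([0.3, 0.6],
                             ok=lambda b: b >= 0.5,
                             adjust=lambda b, s: b + s)
```

Only the failing images are touched, mirroring the abstract's point that passing images in the set are left unmodified.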
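The template-matching pipeline of 20210303904 (down-sample to a grid, drop low-density cells, extract leftmost/rightmost/topmost border cells, score by shortest distances) can likewise be sketched. This is a hedged toy reconstruction from the abstract alone, with the mask-to-grid representation and the 0.5 default threshold chosen for illustration:

```python
import math

def downsample_to_grid(mask, grid_rows, grid_cols):
    """Down-sample a binary occupancy mask (list of lists of 0/1) into a
    coarse grid whose cells hold the fraction of occupied pixels."""
    h, w = len(mask), len(mask[0])
    grid = [[0.0] * grid_cols for _ in range(grid_rows)]
    for i in range(grid_rows):
        for j in range(grid_cols):
            rows = range(i * h // grid_rows, (i + 1) * h // grid_rows)
            cols = range(j * w // grid_cols, (j + 1) * w // grid_cols)
            vals = [mask[r][c] for r in rows for c in cols]
            grid[i][j] = sum(vals) / len(vals)
    return grid

def border_cells(grid, density_threshold=0.5):
    """Drop cells below the density threshold, then keep the leftmost and
    rightmost occupied cell of each row and the topmost of each column."""
    occupied = [(i, j) for i, row in enumerate(grid)
                for j, v in enumerate(row) if v >= density_threshold]
    border = set()
    for r in {i for i, _ in occupied}:
        cols = [j for i, j in occupied if i == r]
        border.add((r, min(cols)))
        border.add((r, max(cols)))
    for c in {j for _, j in occupied}:
        rows = [i for i, j in occupied if j == c]
        border.add((min(rows), c))
    return sorted(border)

def match_score(uld_border, template_border):
    """Mean shortest distance from each ULD border cell to the template
    border; the template with the lowest score is the best match."""
    total = sum(min(math.dist(p, q) for q in template_border)
                for p in uld_border)
    return total / len(uld_border)
```

Scoring every candidate template's border against the captured ULD's border and taking the minimum would then yield the container type, per the abstract.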
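Finally, the grid-based point-cloud clustering of 20210303916 (group points into cells, then decide which cells belong to common entities) can be sketched with a simple flood fill over occupied cells standing in for the learned clustering model. Again this is an assumption-laden illustration, not the patented method; the 8-connected adjacency rule and `cell_size` parameter are choices made here for the sketch:

```python
from collections import defaultdict, deque

def grid_cluster(points, cell_size=1.0):
    """Group 2D points into grid cells, then merge occupied cells that are
    grid-adjacent (a naive stand-in for the clustering model that decides
    which cells belong to common entities)."""
    cells = defaultdict(list)
    for x, y in points:
        cells[(int(x // cell_size), int(y // cell_size))].append((x, y))
    # Flood fill over occupied cells: 8-connected neighbours share an entity.
    unvisited = set(cells)
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, members = deque([seed]), []
        while queue:
            ci, cj = queue.popleft()
            members.extend(cells[(ci, cj)])
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    nb = (ci + di, cj + dj)
                    if nb in unvisited:
                        unvisited.remove(nb)
                        queue.append(nb)
        clusters.append(members)
    return clusters
```

In the patent's framing the per-cell feature vectors and the clustering model replace the hard adjacency rule used here, but the output shape is the same: each input point is assigned to one cluster/entity.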