25th week of 2019 patent application highlights part 55 |
Patent application number | Title | Published |
20190188450 | Systems, Methods and Apparatuses for Deployment of Virtual Objects Based on Content Segment Consumed in a Target Environment - Systems, methods and apparatuses for deployment of virtual objects based on a content segment consumed in a target environment. In one aspect, embodiments of the present disclosure include a method, which may be implemented on a system, for capturing contextual information for the target environment. The method can further include detecting an indication that a content segment being consumed in the target environment has virtual content associated with it and/or presenting the virtual object for consumption in the target environment. | 2019-06-20 |
20190188451 | Lightweight 3D Vision Camera with Intelligent Segmentation Engine for Machine Vision and Auto Identification - Various embodiments of the invention are implemented as an entry-level, compact, and lightweight single-package apparatus combining a conventional high-resolution two-dimensional (2D) camera with a low-resolution three-dimensional (3D) depth image camera, capable of learning, through depth information, how to improve the performance of a set of 2D identification and machine vision algorithms in terms of speed-up (e.g. through regions of interest (ROIs)) and raw discriminative power. The cameras simultaneously capture images that are processed by an Intelligent Segmentation Engine in the system to facilitate object recognition. | 2019-06-20 |
20190188452 | IMAGE PROCESSING METHOD AND APPARATUS, AND ELECTRONIC DEVICE - The present disclosure relates to an image processing method and apparatus, and an electronic device. The method includes: acquiring a first face coordinate of a first image; acquiring a second face coordinate of a second image, in which, the first image has a different resolution from the second image, and an image size of the first image is greater than an image size of the second image; calculating a magnification ratio according to the image size of the first image and the image size of the second image, and calculating a second target face coordinate according to the magnification ratio and the second face coordinate; comparing the first face coordinate with the second target face coordinate to obtain a comparing result; and performing face clustering according to the comparing result. | 2019-06-20 |
20190188453 | TERMINAL AND SERVER FOR PROVIDING VIDEO CALL SERVICE - There is provided an application stored in a computer-readable storage medium for a first terminal to perform a method of providing a video call service, the method including: receiving a first video stream of a first user of the first terminal when the application that provides the video call service is executed; extracting facial feature points of the first user from the first video stream; predicting whether the first user is a bad user by applying distribution information of the facial feature points of the first user to a learning model for bad user identification based on facial feature points of a plurality of users; and controlling display of a component on an execution screen of the application based on a result of the predicting. | 2019-06-20 |
20190188454 | FACIAL IMAGE PROCESSING METHOD, TERMINAL, AND DATA STORAGE MEDIUM - The present disclosure provides technical solutions for improving facial image capturing, recognition, and authentication, including: collecting a face image in response to a facial scan instruction (e.g., for facial recognition) using a camera of a mobile terminal; calculating a measure of image brightness (e.g., a luminance value) of the collected face image; enhancing, when a value of the measure of image brightness of the collected face image is less than a first preset threshold, luminance of light that is emitted from a display of the mobile terminal to a target luminance value, and re-collecting a face image using the camera of the mobile terminal and calculating a corresponding value of the measure of image brightness for the re-collected face image; and performing, when the value of the measure of image brightness of the re-collected face image falls within a preset value range, facial recognition based on the re-collected face image. | 2019-06-20 |
20190188455 | CAPTURING AND USING FACIAL METRICS FOR MASK CUSTOMIZATION - An apparatus for use with an electronic device in capturing facial metrics of a user of the electronic device, wherein the electronic device has a front facing camera facing the user and a rear facing camera facing away from the user. The apparatus includes a frame structured to be coupled to the electronic device and a number of reflective surfaces mounted to the frame. The number of reflective surfaces are positioned and structured such that the front facing camera may capture a first image of the user and the rear facing camera may capture a second image of the user simultaneously with the first image. | 2019-06-20 |
20190188456 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT - An image processing device includes one or more processors. The processors detect two or more first partial areas corresponding to each of two or more portions among a plurality of portions that are included in an object and that are set in advance from a first image, and detect two or more second partial areas corresponding to each of two or more portions among the portions from a second image. The processors extract two or more first feature vectors from two or more of the first partial areas, and extract two or more second feature vectors from two or more of the second partial areas. The processors determine whether an object included in the first image and an object included in the second image are the same, by using the first feature vectors and the second feature vectors. | 2019-06-20 |
20190188457 | METHOD AND SYSTEM FOR FACIAL FEATURES ANALYSIS AND DELIVERY OF PERSONALIZED ADVICE - Disclosed is a method for analyzing facial features of a person, including the steps of: acquiring a picture of the face of the person; delimiting, on the picture, at least two zones of the face of the person; processing the picture to determine contrast values of each of the at least two zones; and based on the determined contrast values, determining a cluster to which the person pertains among a plurality of pre-established clusters, with the pre-established clusters being elaborated based on a set of contrast values determined for the same zones of the faces of a reference population in respective pictures of the faces; and providing the person with personalized information, wherein the personalized information depends on the cluster to which the person pertains. | 2019-06-20 |
20190188458 | METHOD AND DEVICE FOR RECOGNIZING FACIAL EXPRESSIONS - A method for recognizing face expressions is provided. The method includes: recognizing expression categories of expressions in a plurality of face images and obtaining recognition results between each expression category and another expression category; obtaining similarities between each expression category and another expression category according to the recognition results; classifying the expression categories into a plurality of expression groups according to the similarities; training a first recognition model to classify the expressions in the face images into the expression groups; and training a second recognition model for each of the expression groups to classify the face images in each of the expression group into one of the expression categories. | 2019-06-20 |
20190188459 | TERMINAL AND SERVER FOR PROVIDING VIDEO CALL SERVICE - There is provided an application stored in a computer-readable storage medium for a first terminal to perform a method of providing a video call service, the method including: establishing a video call session between the first terminal of a first user and a second terminal of a second user; preparing facial expression information of the second user accumulated in at least one video call session executed by the second terminal before the video call session; determining emotion information corresponding to the facial expression information of the second user based on the facial expression information of the second user; and providing the determined emotion information through an execution screen of the first terminal. | 2019-06-20 |
20190188460 | METHOD AND DEVICE FOR USE IN HAND GESTURE RECOGNITION - A method and device for use in hand gesture recognition is applicable to image processing. The method includes: acquiring a depth map of a hand in a current image; estimating first positions of joints of the hand according to the depth map of the hand; creating a 3D point cloud of the hand according to the depth map of the hand; matching the first positions of the joints of the hand and a stored 3D hand model to the 3D point cloud of the hand to obtain second positions of the joints and first degree-of-freedom parameters of the joints; and recognizing the hand's gestures according to the second positions of the joints and the first degree-of-freedom parameters of the joints. The method provides a practical hand gesture recognition technique that recognizes hand gestures accurately. | 2019-06-20 |
20190188461 | METHOD FOR IDENTIFYING RT POSITIONING IMAGE, COMPUTER PROGRAM, AND COMPUTER STORAGE - Embodiments of the present invention provide a method for processing a radiotherapy CT positioning image, comprising: defining a pixel having a CT value greater than or equal to a first threshold in an original CT image as a human body pixel; counting a number of human body pixels of each pixel row in the original CT image in an order from top to bottom; and determining a boundary of the human body according to the counting result. | 2019-06-20 |
20190188462 | FORM TYPE LEARNING SYSTEM AND IMAGE PROCESSING APPARATUS - To accurately classify a form without using form layout information, the image processing apparatus utilizes a classifier that accepts, as an input, a filled-in form whose image has been reduced to a specific size, and specifies the form type of the filled-in form. Machine learning is performed on the classifier by a form type learning system. The form type learning system reduces an image of a filled-in form as an original document image, adds noise to the original document image, which has not been reduced or has been reduced, to generate multiple images for machine learning, associates the form type of the original document image with the multiple images for machine learning as a label, and performs machine learning of the classifier using the multiple images for machine learning and the label as training data. | 2019-06-20 |
20190188463 | USING DEEP LEARNING TECHNIQUES TO DETERMINE THE CONTEXTUAL READING ORDER IN A FORM DOCUMENT - Techniques for determining reading order in a document. A current labeled text run (R1), a RIGHT text run (R2), and a DOWN text run (R3) are generated. The R1 labeled text run is processed by a first LSTM, the R2 labeled text run is processed by a second LSTM, and the R3 labeled text run is processed by a third LSTM, wherein each of the LSTMs generates a respective internal representation (R1′, R2′ and R3′). Deep learning tools other than LSTMs can be used, as will be appreciated. The respective internal representations R1′, R2′ and R3′ are concatenated or otherwise combined into a vector or tensor representation and provided to a classifier network that generates a predicted label for a next text run as RIGHT, DOWN or EOS in the reading order of the document. | 2019-06-20 |
20190188464 | SYSTEMS AND METHODS FOR ENROLLMENT AND IDENTITY MANAGEMENT USING MOBILE IMAGING - Systems and methods for automatic enrollment and identity verification based upon processing a captured image of a document are disclosed herein. Various embodiments enable, for example, a user to enroll in a particular service by taking a photograph of a particular document (e.g., his driver license) with a mobile device. One or more algorithms can then extract relevant data from the captured image. The extracted data (e.g., the person's name, gender, date of birth, height, weight, etc.) can then be used to automatically populate various fields of an enrollment application, thereby reducing the amount of information that the user has to manually input into his mobile device in order to complete the enrollment process. In some embodiments, a set of internal and/or external checks can be run against the data to ensure that the data is valid, has been read correctly, and is consistent with other data. | 2019-06-20 |
20190188465 | RECOGNIZING TEXT IN IMAGE DATA - A device may receive image data representing a document, the document including: text, and edges. Based on the edges, the device may identify, a segment of interest within the image data and crop the segment of interest to obtain a portion of the image data. In addition, the device may perform optical character recognition on the portion of the image data, the optical character recognition producing recognized text. The device may obtain, based on the recognized text, validation data that includes verification text, and determine whether the recognized text is verified based on the verification text. Based on a result of the determination, the device may perform an action. | 2019-06-20 |
20190188466 | METHOD, SYSTEM AND APPARATUS FOR PROCESSING A PAGE OF A DOCUMENT - A method of processing a page of a document. A plurality of commands describing graphical objects of the page configured to be reproduced in a first presentation mode are received, the plurality of commands indicating a type of each of the graphical objects, an enclosing region and a drawing order associated with each of the graphical objects. An object significance score is determined for each of the graphical objects using the respective object type and object depth. A significance profile for the page is determined by combining the determined object significance scores according to a page layout, the page layout being determined using the enclosing regions associated with the graphical objects. Logical structure elements of the page are determined using the significance profile. The plurality of commands are processed according to the determined logical structure elements for generating the page of the document in a second presentation mode. | 2019-06-20 |
20190188467 | METHOD FOR RECOGNIZING THE DRIVING STYLE OF A DRIVER OF A LAND VEHICLE, AND CORRESPONDING APPARATUS - A method for recognizing the driving style of a driver of a land vehicle, of the type that envisages acquiring information on the dynamics of the vehicle from sensors and calculating, as a function of said information on the dynamics of the vehicle, a class of membership of the driving style of the driver. The method comprises the steps of analysing information on the dynamics of the vehicle to start an event-recognition procedure that comprises: reconstructing a manoeuvre performed by the driver; identifying the manoeuvre performed, by comparing the corresponding displacement time series with models of time series corresponding to pre-determined manoeuvres stored in a database; defining regions, in particular manifolds, in a cartesian plane having as axes a lateral acceleration and a longitudinal acceleration; computing cost functionals for the three driving styles; and recognising the driving style on the basis of said cost functionals. | 2019-06-20 |
20190188468 | SIGNAL PROCESSING DEVICE, SIGNAL PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM - A signal processing device includes: a basis storage that stores an acoustic event basis group; a model storage that stores an identification model that uses, as a feature amount, a combination of activation levels of spectral bases; an identification signal analysis unit that, upon input of a spectrogram of an acoustic signal for identification, performs sound source separation on the spectrogram by using a spectral basis set that is obtained by appending, to the acoustic event basis group, spectral bases corresponding to an unknown acoustic event that is an acoustic event other than the acoustic event specified as a detection target, and causing only unknown spectral bases within the spectral basis set to be learned, thereby calculating activation levels of spectral bases of the acoustic events in the spectrogram of the acoustic signal for identification; and a signal identification unit that identifies an acoustic event included in the acoustic signal for identification. | 2019-06-20 |
20190188469 | TERMINAL WITH LINE-OF-SIGHT TRACKING FUNCTION, AND METHOD AND APPARATUS FOR DETERMINING POINT OF GAZE OF USER - A terminal with a line-of-sight tracking function is disclosed. The terminal with a line-of-sight tracking function includes a body, a camera, and at least two light emitting diodes. The camera and the at least two light emitting diodes are mounted on the body, so that the terminal can emit a ray by using the at least two light emitting diodes, to ensure that the emitted ray can be shone on an eye of the user when the user is at different angles. After the ray is reflected by the eye of the user, the terminal can collect the reflected ray by using the camera, obtain an eye image of the user, and track a line of sight of the user based on the eye image, thereby increasing a success rate of line-of-sight tracking. | 2019-06-20 |
20190188470 | IRIS CAPTURE APPARATUS, IRIS CAPTURE METHOD, AND STORAGE MEDIUM - The present invention provides a technology that acquires a high resolution iris image more quickly than before. An iris capture apparatus according to one example embodiment of the present invention includes a rotatable movable mirror; a control unit that controls rotation of the movable mirror; a capture unit that captures different regions of a face of a user via the movable mirror and outputs a group of images every time the control unit rotates the movable mirror by a predetermined angle; and an iris image acquisition unit that acquires an image of an iris of the user from the group of images. | 2019-06-20 |
20190188471 | METHOD AND APPARATUS FOR BIOMETRIC DATA CAPTURE - A method and apparatus for biometric data capture are provided. The apparatus includes an interactive head-mounted eyepiece worn by a user that includes an optical assembly through which the user views a surrounding environment and displayed content. The optical assembly comprises a corrective element that corrects the user's view of the surrounding environment and an integrated processor for handling content for display to the user. An integrated optical sensor captures biometric data when the eyepiece is positioned so that a nearby individual is proximate to the eyepiece. Biometric data is captured using the eyepiece and is transmitted to a remote processing facility for interpretation. The remote processing facility interprets the captured biometric data and generates display content based on the interpretation. This display content is delivered to the eyepiece and displayed to the user. | 2019-06-20 |
20190188472 | SCHEMES FOR RETRIEVING AND ASSOCIATING CONTENT ITEMS WITH REAL-WORLD OBJECTS USING AUGMENTED REALITY AND OBJECT RECOGNITION - A method includes identifying a real-world object in a scene viewed by a camera of a user device, matching the real-world object with a tagged object based at least in part on image recognition and a sharing setting of the tagged object, the tagged object having been tagged with a content item, providing a notification to a user of the user device that the content item is associated with the real-world object, receiving a request from the user for the content item, and providing the content item to the user. A computer readable storage medium stores one or more computer programs, and an apparatus includes a processor-based device. | 2019-06-20 |
20190188473 | SEMANTIC PLACE RECOGNITION AND LOCALIZATION - Methods, systems, and apparatus for receiving data that represents a portion of a property that was obtained by a robot, identifying, based at least on the data, objects that the data indicates as being located within the portion of the property, determining, based on the objects, a semantic zone type corresponding to the portion of the property, accessing a mapping hierarchy for the property, wherein the mapping hierarchy for the property specifies semantic zones of the property that have corresponding semantic zone types and are associated with locations at the property, and specifies characteristics of the semantic zones, and selecting, from among the semantic zones and based at least on the semantic zone type and the data, a particular semantic zone, and setting, as a current location of the robot at the property, a particular location at the property associated with the particular semantic zone. | 2019-06-20 |
20190188474 | ENHANCED POSE DETERMINATION FOR DISPLAY DEVICE - To determine the head pose of a user, a head-mounted display system having an imaging device can obtain a current image of a real-world environment, with points corresponding to salient points that will be used to determine the head pose. The salient points are patch-based and include: a first salient point projected onto the current image from a previous image, and a second salient point extracted from the current image itself. Each salient point is subsequently matched with real-world points based on descriptor-based map information indicating locations of salient points in the real-world environment. The orientation of the imaging device is determined based on the matching and based on the relative positions of the salient points in the view captured in the current image. The orientation may be used to extrapolate the head pose of the wearer of the head-mounted display system. | 2019-06-20 |
20190188475 | SOCIAL MEDIA SYSTEMS AND METHODS - A method and system for providing augmented reality experiences unique to a user's interface format is provided. The method includes receiving from a mobile device identification of an interface format from which a camera view is opened in the mobile device. The method further includes identifying a trigger within the camera view for launching one version of an augmented reality experience; identifying, from the plurality of versions of the augmented reality experience, the one version that uniquely corresponds to the interface format; and providing to the mobile device for display the one version. | 2019-06-20 |
20190188476 | USE OF CAMERA METADATA FOR RECOMMENDATIONS - In various example embodiments, a system and method for using camera metadata for making recommendations are presented. At least one image file having camera metadata is received. The camera metadata of the at least one image file is analyzed to determine improvements to image capture aspects associated with the at least one image file. Feedback related to the improvements to the image capture aspects associated with the at least one image file is generated. In some embodiments, the feedback may be used to generate camera and other product upgrade recommendations. | 2019-06-20 |
20190188477 | SEMANTIC ZONE SEPARATION FOR MAP GENERATION - Methods, systems, and apparatus for receiving a mapping of a property that includes a three-dimensional representation of the property, receiving observations of the property that each depict a portion of the property, providing the mapping and the observations to an object mapping engine, receiving an object mapping of the property, wherein the object mapping includes a plurality of object labels that each identify an object that was recognized from the observations and a location of the object within the three-dimensional representation that corresponds to a physical location of the object in the property, and obtaining a semantic mapping of the property that identifies semantic zones of the property with respect to the three-dimensional representation, wherein the semantic mapping is generated based on an output that results from a semantic mapping model processing the object mapping. | 2019-06-20 |
20190188478 | METHOD AND APPARATUS FOR OBTAINING VIDEO PUBLIC OPINIONS, COMPUTER DEVICE AND STORAGE MEDIUM - The present disclosure provides a method and apparatus for obtaining video public opinions, a computer device and a storage medium, wherein the method comprises: obtaining an information source and a monitored entity; obtaining real-time stream data from the information source; for each video in the real-time stream data, performing predetermined content recognition on the video to obtain a recognition result; determining whether the video matches the monitored entity according to the recognition result; and generating and storing public opinion information corresponding to the video if the video matches the monitored entity. The solution of the present disclosure can be employed to obtain video-based public opinion information. | 2019-06-20 |
20190188479 | GENERATING SYNTHESIS VIDEOS - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating synthesis videos. In one aspect, a method comprises identifying one or more topics for generation of a synthesis video. Videos are identified that are determined to be relevant to one or more of the identified topics. Video segments are extracted from one or more of the identified videos. For each of the video segments, a segment level score and a video level score are determined. A composite score for the video segment is determined by combining the segment level score and the video level score for the video segment. Video segments are selected for inclusion in the synthesis video based on the composite scores for the video segments. A synthesis video is generated by combining the selected video segments. | 2019-06-20 |
20190188480 | AUTOMATED VIDEO CORRECTION - Automated video correction techniques are disclosed. In some examples, an example method may include identifying features in each video frame of the multiple video frames in a video, and identifying one or more major scenes in the video based on a matching of the features in each video frame. The method may also include, for each identified major scene, identifying a key reference frame based on the features in each video frame, identifying one or more bad video frames based on a comparison with the key reference frame, and identifying one or more sequences of bad video frames based on the identified one or more bad video frames. The video may then be corrected by removing the identified one or more sequences of bad video frames from the video. | 2019-06-20 |
20190188481 | MOTION PICTURE DISTRIBUTION SYSTEM - A motion picture distribution system is provided with: an extractor configured to perform an extraction operation of extracting one or a plurality of scenes associated with a particular action of a target person from a motion picture taken by an imager; a generator configured to edit the extracted one or plurality of scenes and to generate a digest motion picture; and a distributor configured to distribute the generated digest motion picture. The extractor is configured to perform machine learning associated with the particular action, by using at least a part of a motion picture including a person as input data, in order to improve the extraction operation. | 2019-06-20 |
20190188482 | SPATIO-TEMPORAL FEATURES FOR VIDEO ANALYSIS - A method of determining a spatio-temporal feature value for frames of a sequence of video. A first frame and second frame from the sequence of video are received. Spatial feature values in each of the first and second frames are determined according to a plurality of spatial feature functions. For each of the spatial feature functions, a change in the spatial feature values between the first and second frames is determined. The spatio-temporal feature value is determined by combining the determined change in spatial feature values for each of the spatial feature functions. | 2019-06-20 |
20190188483 | SYSTEMS AND METHODS FOR SHARING CONTENT - Systems, methods, and non-transitory computer-readable media can determine a video being posted through a social networking system; determine one or more portions of the video to be compressed; and compress the one or more portions of the video, wherein, upon being compressed, at least one frame corresponding to at least one of the portions is deleted. | 2019-06-20 |
20190188484 | CAPTURING SERIES OF EVENTS IN MONITORING SYSTEMS - Implementations are directed to receiving a multi-dimensional data set including, for each device in a set of devices of a monitoring system, a feature set over a respective time period and over devices in the set of devices, processing multi-dimensional data to identify sets of features recorded by respective devices in the set of devices of the monitoring system, comparing a feature set of a first device relative to a feature set of a second device in a location dimension to determine that a first feature in the feature set of the first device corresponds to a second feature in the feature set of the second device, and providing a sequence of feature sets by selecting appropriate feature sets from the multi-dimensional data set based on the comparison, the sequence providing an order of progress of an object between the feature sets of the set of devices. | 2019-06-20 |
20190188485 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM - To efficiently search for an object associated with a sensed event, an information processing apparatus includes a sensor that analyzes a captured video and senses whether a predetermined event has occurred, a determining unit that determines a type of an object to be used as query information based on a type of the event in response to sensing of the event occurrence, and a generator that detects the object of the determined type from the video and generates the query information based on the detected object. | 2019-06-20 |
20190188486 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM - To efficiently search for an object associated with a sensed event, an information processing apparatus includes a sensor that analyzes a captured video and senses whether a predetermined event has occurred, a determining unit that determines a type of an object to be used as query information based on a type of the event in response to sensing of the event occurrence, and a generator that detects the object of the determined type from the video and generates the query information based on the detected object. | 2019-06-20 |
20190188487 | METHOD, DEVICE AND SYSTEM FOR DETECTING A LOITERING EVENT - The present invention relates to monitoring applications. In particular, the present invention relates to a method, device and system for detecting a loitering event in which the loitering time of objects with different object IDs, which subsequent to each other spends time within an area of interest within a monitored scene, will be combined. | 2019-06-20 |
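The core idea of 20190188487 — combining the loitering time of successive tracks with different object IDs (for example, when a tracker loses and re-assigns an object) — can be sketched as follows. The observation format `(id, enter_time, exit_time)` and the handover-gap heuristic are illustrative assumptions not specified in the abstract.

```python
def combined_loitering_time(observations, handover_gap=2.0):
    """Combine the time that objects with different object IDs spend,
    one after another, within an area of interest in a monitored scene.

    observations: iterable of (object_id, enter_time, exit_time) tuples
    for the area of interest.
    handover_gap: assumed maximum gap (seconds) between one track leaving
    and the next appearing for them to count as the same loitering event.
    """
    observations = sorted(observations, key=lambda o: o[1])
    total, last_exit = 0.0, None
    for _object_id, enter, exit_ in observations:
        if last_exit is not None and enter - last_exit > handover_gap:
            total = 0.0   # gap too long: start a new loitering event
        total += exit_ - enter   # accumulate time inside the area
        last_exit = exit_
    return total
```

A loitering event would then be raised when the combined time exceeds a configured threshold.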
20190188488 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND PROGRAM RECORDING MEDIUM - Provided are an image processing device and the like which implement personal privacy protection while suppressing a reduction in visibility for an image. The image processing device is provided with: a memory storing instructions; and one or more processors configured to execute the instructions to: detect a person region that is a region where a person appears in an image captured by a camera device; and perform, on the person region, privacy processing a strength of which differs according to a depth associated with coordinates of the person region or a predetermined index related to the depth. | 2019-06-20 |
20190188489 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND PROGRAM RECORDING MEDIUM - Provided are an image processing device and the like which implement personal privacy protection while suppressing a reduction in visibility for an image. The image processing device is provided with: a memory storing instructions; and one or more processors configured to execute the instructions to: detect a person region that is a region where a person appears in an image captured by a camera device; and perform, on the person region, privacy processing a strength of which differs according to a depth associated with coordinates of the person region or a predetermined index related to the depth. | 2019-06-20 |
20190188490 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND PROGRAM RECORDING MEDIUM - Provided are an image processing device and the like which implement personal privacy protection while suppressing a reduction in visibility for an image. The image processing device is provided with: a memory storing instructions; and one or more processors configured to execute the instructions to: detect a person region that is a region where a person appears in an image captured by a camera device; and perform, on the person region, privacy processing a strength of which differs according to a depth associated with coordinates of the person region or a predetermined index related to the depth. | 2019-06-20 |
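The three entries above all describe privacy processing whose strength varies with the depth of the person region. One plausible mapping, sketched below, applies a stronger blur to nearer (larger, more identifiable) persons and a weaker one to distant persons to preserve overall visibility; the specific formula and constants are illustrative, since the applications only state that the strength differs according to depth or a depth-related index:

```python
# Sketch: choose a privacy-processing strength (here, a box-blur radius)
# per detected person region from the depth associated with that region.
# The mapping and constants below are illustrative assumptions.

def privacy_strength(depth_m, max_radius=15, min_radius=1):
    """Nearer persons get a stronger blur; distant persons, who are
    already small in the image, keep more visibility."""
    if depth_m <= 1.0:
        return max_radius
    radius = int(round(max_radius / depth_m))
    return max(min_radius, radius)

# Blur radii for persons at 0.5 m, 3 m and 15 m from the camera:
strengths = {d: privacy_strength(d) for d in (0.5, 3.0, 15.0)}
```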
20190188491 | SYSTEMS AND METHODS FOR IDENTIFYING HIERARCHICAL STRUCTURES OF MEMBERS OF A CROWD - A method and system for identifying hierarchical structures of members of a crowd. | 2019-06-20 |
20190188492 | DETECTION OF THE PRESENCE OF STATIC OBJECTS - The presence of a stationary object is to be reliably recognized. To this end, a presence detection device for detecting a presence of an object in its environment is provided, which comprises a movement detection unit for detecting an initial movement of the object in the environment of the presence detection device and for outputting a movement signal depending on the detection as well as a control unit for generating an activation signal depending on the movement signal. Moreover, the presence detection device comprises a camera, which can be activated by the activation signal, for obtaining a video signal of the environment of the presence detection device and an evaluation unit for generating a presence signal relating to the presence of the object by evaluating the video signal. | 2019-06-20 |
20190188493 | Providing Autonomous Vehicle Assistance - Systems and methods for providing autonomous vehicle assistance are disclosed. In one embodiment, a method is disclosed comprising recording an image of a scene surrounding an autonomous vehicle; classifying the image using a machine learning system, the classifying comprising identifying whether the image includes a danger; determining whether the autonomous vehicle is able to respond to the danger in response to identifying that the image includes the danger; and executing one or more security maneuvers, the security maneuvers manipulating the operation of the autonomous vehicle in response to the danger. | 2019-06-20 |
20190188494 | VEHICLE DRIVING ASSIST DEVICE, INSTRUCTION DISPLAY DEVICE AND VEHICLE LAMP - A vehicle driving assist device which is mounted on a vehicle, includes an instruction recognition device which is configured to recognize operation instruction information for a vehicle displayed on a road surface. The vehicle on which the vehicle driving assist device is mounted is operated based on the operation instruction information. An instruction display device is mounted on a vehicle and is configured to display operation instruction information for another vehicle on a road surface with near-infrared light. | 2019-06-20 |
20190188495 | METHOD AND APPARATUS FOR EVALUATING A VEHICLE TRAVEL SURFACE - A vehicle includes a plurality of on-vehicle cameras, and a controller executes a method to evaluate a travel surface by capturing images for fields of view of the respective cameras. Corresponding regions of interest for the images are identified, wherein each of the regions of interest is associated with the portion of the field of view of the respective camera that includes the travel surface. Portions of the images are extracted, wherein each extracted portion is associated with the region of interest in the portion of the field of view of the respective camera that includes the travel surface and wherein one extracted portion of the respective image includes the sky. The extracted portions of the images are compiled into a composite image datafile, and an image analysis of the composite image datafile is executed to determine a travel surface state. The travel surface state is communicated to another controller. | 2019-06-20 |
20190188496 | LOW-DIMENSIONAL ASCERTAINING OF DELIMITED REGIONS AND MOTION PATHS - An apparatus for ascertaining, from at least one image, a delimited region and/or a motion path of at least one object includes at least one artificial neural network (ANN) made up of several successive layers. The first layer of the ANN receives as input the at least one image or a part thereof. The second layer supplies as output a boundary line of the delimited region, a linear course of the motion path, or a portion of the boundary line or motion path. The dimensionality of the second layer is lower than the dimensionality of the first layer. | 2019-06-20 |
20190188497 | APPARATUS FOR IDENTIFYING LINE MARKING ON ROAD SURFACE - An apparatus for identifying a line marking on a road surface. In the apparatus, an extractor extracts a paint candidate that is a candidate for road surface paint used to identify a line marking in an image captured by a camera mounted on a vehicle to capture an image of an area including a road surface ahead of the vehicle. A determiner determines whether or not the paint candidate has at least one predefined flare feature. A line marking identifier identifies a line marking using the paint candidate which meets an identification condition used to identify a line marking. The line marking identifier sets the identification condition to be more stringent for a flare paint candidate that is the paint candidate determined by the determiner to have the at least one predefined flare feature than for the paint candidate determined by the determiner to have no predefined flare feature. | 2019-06-20 |
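The core of the flare-handling logic above is that a candidate flagged as having a flare feature must satisfy a more stringent identification condition than an ordinary candidate. A minimal sketch, with purely illustrative score thresholds standing in for the unspecified identification condition:

```python
# Sketch: a paint candidate with a flare feature must clear a stricter
# threshold before being used to identify a line marking.
# Both threshold values are illustrative, not from the application.

BASE_THRESHOLD = 0.5
FLARE_THRESHOLD = 0.8  # more stringent condition for flare candidates

def accept_candidate(score, has_flare_feature):
    threshold = FLARE_THRESHOLD if has_flare_feature else BASE_THRESHOLD
    return score >= threshold

# The same score passes for an ordinary candidate but not for one
# suspected of being a flare artifact:
ordinary = accept_candidate(0.6, has_flare_feature=False)
flare = accept_candidate(0.6, has_flare_feature=True)
```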
20190188498 | Image Processing Method For Recognizing Ground Marking And System For Detecting Ground Marking - The present invention relates to an image processing method for recognizing ground marking, comprising a step of receiving at least one image of the ground at the front and/or at the rear of the vehicle, characterised in that it comprises a step of calculating a digital image corresponding to a confidence map consisting of assigning, to each pixel of the acquired image, a value corresponding to the degree of confidence that said pixel belongs to an area of marking, then performing a marking detection step by minimising a function f in which: F is the regression function; x_i and y_i are the x and y coordinates of the i-th pixel crossed by the agent; w_i is the grey value V_i of the i-th pixel crossed by the agent; B designates a function space; and λ designates the smoothing parameter, which is a function of the type of road. | 2019-06-20 |
20190188499 | PROVING HYPOTHESES FOR A VEHICLE USING OPTIMAL EXPERIMENT DESIGN - Embodiments described herein disclose methods and systems for object recognition using optimal experimental design. Using detection information from the sensor systems, a detection hypothesis can be generated for the detected object. The detection hypothesis can include 3D models, which have distinctive locations. The distinctive locations can be compared to identified distinctive locations using location estimators. Distinctive locations allow for rejection of a hypothesis, should any distinctive location not have an identified distinctive location on the detected object or within the detection information. In this way, recognition of objects can be performed more quickly and efficiently. | 2019-06-20 |
20190188500 | APPARATUS FOR MONITORING OBJECT IN LOW LIGHT ENVIRONMENT AND MONITORING METHOD THEREOF - An apparatus for monitoring an object in a low light environment includes an image capturing unit, a vehicle speed unit, an image recognizing unit, and an object determining unit. The image capturing unit continuously captures and outputs time-sliced images. The vehicle speed unit detects and outputs current vehicle speed information. The image recognizing unit recognizes a region having pixel brightness higher than a threshold in each of the time-sliced images and marks the region as a high brightness block. The object determining unit selects at least two successive time-sliced images having high brightness blocks in a continuous corresponding variation relationship from the time-sliced images, generates and outputs estimated speed information, and when the estimated speed information is different from the current vehicle speed information, determines that the high brightness block is a moving object block and monitors the moving object block. | 2019-06-20 |
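The moving-object decision above can be sketched as: estimate the apparent speed of a high-brightness block from its positions in successive time-sliced images, then treat the block as a moving object when that estimate differs from the current vehicle speed. The pixel-to-metre scale, frame rate, and tolerance below are illustrative assumptions:

```python
# Sketch: a static light source in the scene appears to move at the
# vehicle's own speed in the image stream; a block whose estimated speed
# deviates from the vehicle speed is classified as a moving object.

def is_moving_object(block_positions_px, frame_dt_s, metres_per_px,
                     vehicle_speed_mps, tolerance_mps=1.0):
    if len(block_positions_px) < 2:
        return False
    # average displacement per frame, converted to metres per second
    total_px = block_positions_px[-1] - block_positions_px[0]
    estimated_mps = abs(total_px) * metres_per_px / (
        (len(block_positions_px) - 1) * frame_dt_s)
    return abs(estimated_mps - vehicle_speed_mps) > tolerance_mps

# A block drifting 40 px/frame at 0.05 m/px and 10 fps reads as 20 m/s;
# against a 20 m/s vehicle speed it is static scenery, while a slower
# drift indicates an independently moving object:
static = is_moving_object([0, 40, 80], 0.1, 0.05, 20.0)
moving = is_moving_object([0, 10, 20], 0.1, 0.05, 20.0)
```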
20190188501 | ARTIFICIAL INTELLIGENCE SYSTEM FOR PROVIDING ROAD SURFACE RISK INFORMATION AND METHOD THEREOF - Provided is an artificial intelligence system for providing road risk information and a method thereof. The system for providing road risk information includes: an information collection unit for receiving various road state information acquired from a vehicle device; an information processing unit for performing image processing on the collected road state information, and converting a result of the image processing into a predefined grayscale image; an information learning unit for learning the converted predefined grayscale image on the basis of a predetermined learning model based on deep learning, and recognizing the road risk information on the basis of a result of the learning; and an information classification unit for classifying the road risk information from the road state information on the basis of a result of the recognition, and detecting road surface defects on the basis of a result of the classification. | 2019-06-20 |
20190188502 | Methods and Systems for Controlling Extent of Light Encountered by an Image Capture Device of a Self-Driving Vehicle - Example implementations may relate to use of a light-control feature to control extent of light encountered by an image capture device of a self-driving vehicle. In particular, a computing system of the vehicle may make a determination that quality of image data generated by an image capture device is or is expected to be lower than a threshold quality due to external light encountered or expected to be encountered by the image capture device. In response to the determination, the computing system may make an adjustment to the light-control feature to control the extent of external light encountered or expected to be encountered by the image capture device. This adjustment may ultimately help improve quality of image data generated by the image capture device. As such, the computing system may operate the vehicle based at least on image data generated by the image capture device. | 2019-06-20 |
20190188503 | VEHICLE MONITORING OF INFRASTRUCTURE LIGHTING - A method and apparatus for vehicle monitoring of infrastructure lighting, an example of which includes a vehicle having a camera, an inter-vehicle communication module and a controller. The controller is to identify a stationary infrastructure object within an image captured by the camera and identify, in response to determining that the stationary infrastructure object includes a lamp, whether the lamp is inoperable. The controller also is to send, via vehicle-to-infrastructure communication utilizing the inter-vehicle communication module, an alert to an infrastructure communication node indicating that the lamp is inoperable. | 2019-06-20 |
20190188504 | IMAGING DEVICE AND IMAGING SYSTEM - An object of the present disclosure is to increase resolution of a part required for sensing while achieving a wide view angle. An imaging device according to the present disclosure includes an image sensor and an optical system. An imaging surface of the image sensor includes a first region and a second region different from the first region. The optical system forms a subject image on the imaging surface so as to cause resolution of an image in the first region to be higher than resolution of an image in the second region. The first region is arranged so as to include a region where an image of a person's face is formed on the imaging surface. | 2019-06-20 |
20190188505 | DISTRACTED DRIVER DETECTION - Distracted driver detection is provided. In various embodiments, a video frame is captured. The video frame is provided to a trained classifier. The presence of a predetermined action by a motor vehicle operator depicted therein is determined from the trained classifier. An alert is sent via a network indicating the presence of the predetermined action and at least one identifier associated with the motor vehicle operator. | 2019-06-20 |
20190188506 | A VEHICLE-MOUNTED DISPLAY SYSTEM AND METHOD FOR PREVENTING VEHICULAR ACCIDENTS - A vehicle-mounted display system for enhancing a driver's forward viewability, which comprises a forwardly directed camera mounted within an interior of the vehicle, for imaging a forwardly directed expected field of view (EFOV) of the driver; an image generator; and a processing unit in data communication with both the forwardly directed camera and the image generator. The processing unit is operable to monitor a parameter indicative of a driver's forward viewability through a front windshield of a road over which the vehicle advances during a transportation operation and to command the image generator to generate when a value of the monitored parameter deviates from a predetermined parameter range, within the vehicle interior, an EFOV-related image which is visible to the driver and is based on image data received from the forwardly directed camera, to ensure the driver's forward viewability during the course of a transportation operation. | 2019-06-20 |
20190188507 | Altering Biometric Data Found in Visual Media Data - An approach is disclosed that receives a set of visual media data that corresponds to a person. The process detects that a portion of the visual media data is biometric data that corresponds to the person. Responsively, the process alters the biometric data so that the altered biometric data fails to identify the person. | 2019-06-20 |
20190188508 | DIFFERENT LEVELS OF ACCESS TO AIRCRAFT BASED ON BIOMETRIC INPUT DATA - In some examples, this disclosure describes a system for verifying identities of a first user and a second user. In some examples, the system includes processing circuitry and a memory device configured to store biometric verification data associated with the first user and the second user. In some examples, the system also includes an input device configured to receive biometric input data and transmit the biometric input data to the processing circuitry. In some examples, the processing circuitry is configured to determine whether the biometric input data matches biometric verification data for the first user or the second user, unlock the aircraft in response to determining that the biometric input data matches biometric verification data for the first user or the second user, and activate the aircraft for operation in response to determining that the biometric input data matches biometric verification data for the first user. | 2019-06-20 |
20190188509 | USER IDENTITY VERIFICATION METHOD, APPARATUS AND SYSTEM - This specification discloses a user identity verification method, apparatus, and system, relating to the field of information technology. The method comprises: receiving a facial image and one or more eye-print pair images corresponding to an identity verification object from a client, wherein a number of the one or more eye-print pair images corresponds to a number of eye-print collection steps, comparing the facial image to a preset facial image and comparing the one or more eye-print pair images to preset eye-print templates, and sending successful identity verification information to a client when comparison results for the facial image and the one or more eye-print pair images meet preset conditions. | 2019-06-20 |
20190188510 | OBJECT RECOGNITION METHOD AND APPARATUS - An object recognition apparatus and method are provided. The apparatus includes a processor configured to verify a target image using an object model and based on reference intermediate data extracted by a partial layer of the object model as used in an object recognition of an input image, in response to a failure of a verification of the input image after a success of the object recognition of the input image, and perform an additional verification of the target image in response to the target image being verified in the verifying of the target image. | 2019-06-20 |
20190188511 | METHOD AND SYSTEM FOR OPTICAL CHARACTER RECOGNITION OF SERIES OF IMAGES - Systems and methods for performing OCR of a series of images depicting text symbols. An example method comprises: receiving, by a processing device, a current image of a series of images of an original document, wherein the current image at least partially overlaps with a previous image of the series of images; performing optical symbol recognition (OCR) of the current image to produce an OCR text and a corresponding text layout; associating, using a coordinate transformation, at least part of the OCR text with a first cluster of a plurality of clusters of symbol sequences, wherein the OCR text is produced by processing the current image and wherein the symbol sequences are produced by processing one or more previously received images of the series of images; identifying a first median string representing the first cluster of symbol sequences based on a first subset of images of the series of images; identifying a first template field of a document template corresponding to the first cluster based on the first median string representing the first cluster and the text layout of the current image; analyzing the symbol sequences from the first cluster to identify suitable symbol sequences, wherein the suitable symbol sequences satisfy first parameters of the first template field; identifying, for the first cluster, a second-level median string representing the cluster of symbol sequences based on a plurality of the suitable symbol sequences; producing, using the second-level median string, a resulting OCR text representing at least a portion of the first template field of the original document. | 2019-06-20 |
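A median string over a cluster of symbol sequences, as used above, can be illustrated with a per-position majority vote across repeated recognitions of the same field. This is a deliberately simplified sketch: the real method must align sequences of differing length and weight them by template constraints, whereas this version assumes the sequences are already aligned:

```python
# Sketch: a per-position majority vote as a stand-in for the median
# string of a cluster of OCR'd symbol sequences. Assumes pre-aligned,
# equal-length sequences (a simplification of the described method).

from collections import Counter

def median_string(sequences):
    length = min(len(s) for s in sequences)
    voted = []
    for i in range(length):
        symbol, _ = Counter(s[i] for s in sequences).most_common(1)[0]
        voted.append(symbol)
    return "".join(voted)

# Three noisy recognitions of the same field from overlapping images:
reading = median_string(["1NVOICE", "INVOICE", "INVO1CE"])
```

Each image misreads a different character, so the vote recovers the field even though no single recognition is fully correct.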
20190188512 | METHOD AND IMAGE PROCESSING ENTITY FOR APPLYING A CONVOLUTIONAL NEURAL NETWORK TO AN IMAGE - A method and an image processing entity for applying a convolutional neural network to an image are disclosed. The image processing entity processes the image while using the convolutional kernel to render a feature map, whereby a second feature map size of the feature map is greater than a first feature map size of the feature maps with which the feature kernel was trained. Furthermore, the image processing entity repeatedly applies the feature kernel to the feature map in a stepwise manner, wherein the feature kernel was trained to identify the feature based on feature maps of the first feature map size, and wherein the feature kernel has the first feature map size. | 2019-06-20 |
20190188513 | SYSTEMS AND METHODS FOR OBJECT DESKEWING USING STEREOVISION OR STRUCTURED LIGHT - A system and method of deskewing an image of an object to be identified is disclosed. In a first embodiment, a first image and a second image are captured using a stereoscopic camera, and features are extracted from each of the first and second images. The extracted features may be matched and depths for each of the matched features may be calculated. Alternatively, a structured light pattern may be projected to a scene and reflections of the light pattern may be sensed. Depth information of the sensed light pattern may be calculated. In both embodiments, a region-of-interest inclusive of the object may be selected and skew of the region-of-interest may be calculated using depth information for the sensed light pattern and/or correlated points within the region. The region-of-interest may be deskewed based on the calculated skew. Visual pattern matching may be performed to identify the object in the deskewed region-of-interest. | 2019-06-20 |
20190188514 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, CONTROL METHOD, AND PROGRAM - A first analysis unit. | 2019-06-20 |
20190188515 | INFORMATION PROCESSING DEVICE AND RECOGNITION SUPPORT METHOD - In order to acquire recognition environment information impacting the recognition accuracy of a recognition engine, an information processing device is provided. | 2019-06-20 |
20190188516 | Computer Vision Systems and Methods for Geospatial Property Feature Detection and Extraction from Digital Images - Systems and methods for property feature detection and extraction using digital images. The image sources could include aerial imagery, satellite imagery, ground-based imagery, imagery taken from unmanned aerial vehicles (UAVs), mobile device imagery, etc. The detected geometric property features could include tree canopy, pools and other bodies of water, concrete flatwork, landscaping classifications (gravel, grass, concrete, asphalt, etc.), trampolines, property structural features (structures, buildings, pergolas, gazebos, terraces, retaining walls, and fences), and sports courts. The system can automatically extract these features from images and can then project them into world coordinates relative to a known surface in world coordinates (e.g., from a digital terrain model). | 2019-06-20 |
20190188517 | IMAGE PARAMETER CALCULATING METHOD, OBJECT TRACKING METHOD, AND IMAGE PARAMETER CALCULATING SYSTEM - An image parameter calculating method comprising: (a) transforming a spatial domain target image to a frequency domain target image; (b) multiplying the frequency domain target image with a frequency domain reference image to acquire a frequency domain multiplying result; and (c) calculating at least one peak location of the spatial domain target image according to the frequency domain multiplying result. | 2019-06-20 |
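Steps (a) through (c) above describe correlation in the frequency domain: transform the target, multiply by the (conjugated) spectrum of the reference, and the inverse transform peaks at the shift between the two. A one-dimensional sketch follows; the naive O(n²) DFT keeps it dependency-free, and a real system would use an FFT and two-dimensional images:

```python
# Sketch: frequency-domain peak search in 1-D. Multiplying the target
# spectrum by the conjugate of the reference spectrum and inverse
# transforming yields a correlation whose peak sits at the shift.

import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

def peak_shift(target, reference):
    ft, fr = dft(target), dft(reference)
    product = [a * b.conjugate() for a, b in zip(ft, fr)]
    corr = idft(product)
    return max(range(len(corr)), key=lambda i: corr[i].real)

reference = [1.0, 5.0, 2.0, 8.0, 3.0, 0.0, 4.0, 6.0]
target = reference[-3:] + reference[:-3]  # reference circularly shifted by 3
shift = peak_shift(target, reference)
```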
20190188518 | IMAGE PROCESSING CIRCUIT AND ASSOCIATED IMAGE PROCESSING METHOD - An image processing circuit includes a determining circuit, a converting circuit and a color removing circuit. The determining circuit determines a specific color that is different from all colors in a palette table. The converting circuit, coupled to the determining circuit, converts a first image in a palette mode to a second image having a color space according to the palette table, wherein a pixel having a specific index in the first image is converted to a pixel having the specific color. The color removing circuit, coupled to the converting circuit, removes the specific color from the second image. | 2019-06-20 |
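The three-stage pipeline above (determine a colour absent from the palette, convert using it for the special index, then remove it) can be sketched as follows. The choice of marker index, candidate colours, and replacement colour are all illustrative assumptions, not details from the application:

```python
# Sketch of the palette-mode pipeline: pick a "specific color" absent
# from the palette, substitute it when converting the special index,
# then strip it from the converted image.

SPECIAL_INDEX = -1  # illustrative marker index in the palette-mode image

def pick_specific_color(palette):
    """Return an RGB triple guaranteed not to appear in the palette."""
    used = set(palette)
    for r in range(256):
        candidate = (r, 0, 255)
        if candidate not in used:
            return candidate
    raise ValueError("no free colour found")

def convert(indexed_image, palette):
    specific = pick_specific_color(palette)
    rgb = [[palette[p] if p != SPECIAL_INDEX else specific for p in row]
           for row in indexed_image]
    return rgb, specific

def remove_specific(rgb_image, specific, replacement=(0, 0, 0)):
    return [[replacement if px == specific else px for px in row]
            for row in rgb_image]

palette = [(255, 255, 255), (255, 0, 0)]
image = [[0, 1], [SPECIAL_INDEX, 0]]
converted, specific = convert(image, palette)
cleaned = remove_specific(converted, specific)
```

Because the specific colour is chosen to be outside the palette, the removal step cannot accidentally erase a legitimate pixel.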
20190188519 | VECTOR ENGINE AND METHODOLOGIES USING DIGITAL NEUROMORPHIC (NM) DATA - A system and methodologies for neuromorphic vision simulate conventional analog NM system functionality and generate digital NM image data that facilitate improved object detection, classification, and tracking. | 2019-06-20 |
20190188520 | SYSTEMS AND METHODS FOR REDUCING DATA DENSITY IN LARGE DATASETS - Techniques and systems are provided for identifying unknown content. For example, a number of vectors out of a plurality of vectors projected from an origin point can be determined that are between a reference data point and an unknown data point. The number of vectors can be used to estimate an angle between a first vector (from the origin point to a reference data point) and a second vector (from the origin point to an unknown data point). A distance between the reference data point and the unknown data point can then be determined. Using the determined distance, candidate data points can be determined from a set of reference data points. The candidate data points can be analyzed to identify the unknown data point. | 2019-06-20 |
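The angle-from-vector-counts idea above can be illustrated in two dimensions: draw many unit vectors from the origin, count those whose direction falls between the reference and unknown points, read the angle off that fraction, and recover the distance from the law of cosines. The sampling scheme and counts below are illustrative, and this sketch ignores sectors that wrap past ±π:

```python
# Sketch: estimate the angle between two points (seen from the origin)
# by counting random directions that fall between them, then derive
# their distance via the law of cosines.

import math
import random

def estimate_angle(ref, unknown, n_vectors=200_000, seed=42):
    lo, hi = sorted((math.atan2(ref[1], ref[0]),
                     math.atan2(unknown[1], unknown[0])))
    rng = random.Random(seed)
    between = sum(1 for _ in range(n_vectors)
                  if lo <= rng.uniform(-math.pi, math.pi) <= hi)
    sector = 2 * math.pi * between / n_vectors
    return min(sector, 2 * math.pi - sector)

def estimate_distance(ref, unknown, angle):
    r1 = math.hypot(*ref)
    r2 = math.hypot(*unknown)
    return math.sqrt(r1 * r1 + r2 * r2 - 2 * r1 * r2 * math.cos(angle))

angle = estimate_angle((1.0, 0.0), (0.0, 1.0))           # true angle: pi/2
dist = estimate_distance((1.0, 0.0), (0.0, 1.0), angle)  # true: sqrt(2)
```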
20190188521 | IDENTIFYING TEMPORAL CHANGES OF INDUSTRIAL OBJECTS BY MATCHING IMAGES - Technology for matching images (for example, video images, still images) of an identical infrastructure object (for example, a tower component of a tower supporting power lines) for purposes of comparing the infrastructure object to itself at different points in time to detect a potential anomaly and the potential need for maintenance of the infrastructure object. In some embodiments, this matching of images is done using creation of a three-dimensional (3D) computer model of the infrastructure object and by tagging captured images with location on the 3D model across multiple videos taken at different points in time. | 2019-06-20 |
20190188522 | ADDING NEW CONNECTIONS USING IMAGE RECOGNITION - A method can include comparing a first feature vector detailing features of an image of a newsfeed of a user of users of a social network to a subset of second feature vectors detailing features of newsfeeds presented to the users of the social network; and in response to determining the first feature vector matches a second feature vector of the subset of second feature vectors, providing a name, profile data, and profile picture of a user associated with the newsfeed. | 2019-06-20 |
20190188523 | Process and System for Computing the Cost of Usable and Consumable Materials for Painting of Motor Vehicles, From Analysis of Deformations in Motor Vehicles - A system and a process are described for computing the cost of usable and consumable materials for painting motor vehicles, from the analysis of deformations in the motor vehicles, the process comprising the following steps: loading image data relevant for a three-dimensional image of a damaged vehicle in a vehicle images memory; in the images memory of the damaged vehicles, recalling the image data of at least one three-dimensional image of a sample vehicle from a database of images of sample vehicles; automatically comparing the three-dimensional image of the damaged vehicle with the corresponding three-dimensional image of the sample vehicle, identifying the position of the damage or deformation and detecting the distorted regions; delimiting or pointing out, through graphical tools, the damaged or distorted regions identified by the automatic comparison on at least one of the two images; computing perimeter, area and/or volume of the damaged or distorted region or regions; computing a deformation severity degree and assigning the deformation severity degree to every damaged or distorted region; computing labor times and costs for repairing the damaged or distorted area; and producing a virtual image of the sample vehicle. | 2019-06-20 |
20190188524 | METHOD AND SYSTEM FOR CLASSIFYING AN OBJECT-OF-INTEREST USING AN ARTIFICIAL NEURAL NETWORK - Methods, systems, and techniques for classifying an object-of-interest using an artificial neural network, such as a convolutional neural network. An artificial neural network receives a sample image including the object-of-interest overlaying a background and a sample background image excluding the object-of-interest and corresponding to the background overlaid by the object-of-interest. The object-of-interest is classified using the artificial neural network. The artificial neural network classifies the object-of-interest using the sample background and sample images. Prior to receiving the sample background and sample images the artificial neural network has been trained to classify the object-of-interest using training image pairs. Each of at least some of the training image pairs includes a first training image that includes a training object-of-interest overlaying a training background and a training background image excluding the training object-of-interest and corresponding to the training background. | 2019-06-20 |
20190188525 | METHOD AND APPARATUS FOR RECOGNIZING IMAGE - An image recognition method using a region-based convolutional neural network (R-CNN) includes generating a feature map from an input image, detecting one or more regions of interest (ROIs) in the feature map, classifying the ROIs into groups based on setting information, performing pooling on the ROIs classified into the groups independently for each of the groups, and performing a regression operation on a result of the pooling and applying an image classifier to a result of the regression operation. | 2019-06-20 |
20190188526 | FUSING SPARSE KERNELS TO APPROXIMATE A FULL KERNEL OF A CONVOLUTIONAL NEURAL NETWORK - Techniques facilitating generation of a fused kernel that can approximate a full kernel of a convolutional neural network are provided. In one example, a computer-implemented method comprises determining a first pattern of samples of a first sample matrix and a second pattern of samples of a second sample matrix. The first sample matrix can be representative of a sparse kernel, and the second sample matrix can be representative of a complementary kernel. The first pattern and second pattern can be complementary to one another. The computer-implemented method also comprises generating a fused kernel based on a combination of features of the sparse kernel and features of the complementary kernel that are combined according to a fusing approach and training the fused kernel. | 2019-06-20 |
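The fusing step above combines two kernels sampled on complementary patterns: where the sparse kernel's pattern is set, its weight is taken; elsewhere, the complementary kernel's weight fills the gap. A minimal sketch using a checkerboard as the illustrative complementary pattern (the application does not specify the pattern shape):

```python
# Sketch: fuse a sparse kernel and a complementary kernel whose sample
# patterns are complementary, producing a full (dense) kernel.
# The checkerboard pattern below is an illustrative choice.

def complementary_masks(size):
    pattern = [[(i + j) % 2 == 0 for j in range(size)] for i in range(size)]
    complement = [[not v for v in row] for row in pattern]
    return pattern, complement

def fuse(sparse, comp, size=3):
    pattern, _ = complementary_masks(size)
    return [[sparse[i][j] if pattern[i][j] else comp[i][j]
             for j in range(size)] for i in range(size)]

sparse = [[1, 0, 1], [0, 1, 0], [1, 0, 1]]   # weights on the even cells
comp = [[0, 2, 0], [2, 0, 2], [0, 2, 0]]     # weights on the odd cells
full = fuse(sparse, comp)
```

Because the two patterns cover disjoint positions and their union is the full grid, every cell of the fused kernel receives exactly one weight.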
20190188527 | METHODS AND APPARATUS TO DETERMINE THE DIMENSIONS OF A REGION OF INTEREST OF A TARGET OBJECT FROM AN IMAGE USING TARGET OBJECT LANDMARKS - Methods and apparatus to determine the dimensions of a region of interest of a target object and a class of the target object from an image using target object landmarks are disclosed herein. An example method includes identifying a landmark of a target object in an image based on a match between the landmark and a template landmark; classifying a target object based on the identified landmark; projecting dimensions of the template landmark based on a location of the landmark in the image; and determining a region of interest based on the projected dimensions, the region of interest corresponding to text printed on the target object. | 2019-06-20 |
20190188528 | TEXT DETECTION METHOD AND APPARATUS, AND STORAGE MEDIUM - Embodiments of the present disclosure provide a text detection method and apparatus, and a storage medium. The method includes: obtaining edge information of a to-be-detected image; and determining candidate text pixels in the to-be-detected image according to the edge information of the to-be-detected image by using a preset candidate text pixel determining strategy. The method also includes performing projection based segmentation on the candidate text pixels to obtain a projection based segmentation result. The method also includes determining one or more text regions in the to-be-detected image according to the projection based segmentation result. | 2019-06-20 |
20190188529 | SYSTEM, METHOD AND RECORDING MEDIUM FOR USER INTERFACE (UI)-LEVEL CLONE DETECTION - A user interface (UI)-level clone detection method, system, and computer program product, include running applications from an application database to obtain a screenshot of each of the applications, comparing a first object of a first screenshot of a first application with a second object from a second screenshot of a second application to determine a similarity between the first object and the second object, and analyzing a code for each of the first object and the second object when the similarity is greater than a predetermined threshold value to identify a same-functionality code. | 2019-06-20 |
20190188530 | METHOD AND APPARATUS FOR PROCESSING IMAGE - Embodiments of the present disclosure disclose a method and an apparatus for processing an image. A specific embodiment of the method includes: acquiring a target image including a polygon image; inputting the target image into a pre-trained convolutional neural network to obtain a characteristic vector of the target image, the convolutional neural network being used to represent a correspondence relationship between an image and a characteristic vector, and the characteristic vector being a vector including a category feature, a position feature, and a keypoint feature of the polygon image; and recognizing an image area of the polygon image based on the category feature, the position feature, and the keypoint feature. This embodiment improves the accuracy of polygon image recognition. | 2019-06-20 |
20190188531 | FEATURE SELECTION IMPACT ANALYSIS FOR STATISTICAL MODELS - The disclosed embodiments provide a system for processing data. During operation, the system obtains a set of feature additions and an evaluation metric for assessing the performance of a statistical model. Next, the system automatically builds treatment versions of the statistical model using a set of baseline features for the statistical model and feature combinations generated using the feature additions. The system then uses a hypothesis test and a fixed set of feature values to compare a baseline value of the evaluation metric for a baseline version of the statistical model that is built using the set of baseline features with additional values of the evaluation metric for the treatment versions. Finally, the system outputs a result of the hypothesis test for use in assessing an impact of the feature combinations on a performance of the statistical model. | 2019-06-20 |
20190188532 | METHOD, APPARATUS, AND PROGRAM FOR INFORMATION PRESENTATION - An information presentation method causes a computer to perform: extracting, from among a plurality of evaluation items for each of a plurality of decision making entities, one or more evaluation items that indicate features of a target decision making entity specified by a user for information presentation; for each of the other decision making entities, calculating a feature similarity degree that corresponds to the extracted evaluation items and that becomes higher as the evaluation values of those items indicate the features of the target decision making entity more strongly in that other entity; and outputting, to an output device, information on each of the other decision making entities whose calculated feature similarity degree is equal to or higher than a predetermined value. | 2019-06-20 |
20190188533 | POSE ESTIMATION - A method for pose recognition includes storing parameters for configuration of an automated pose recognition system for detection of a pose of a subject represented in a radio frequency input signal. The parameters are determined by a first process that includes accepting training data, comprising a number of images including poses of subjects and a corresponding number of radio frequency signals, and executing a parameter training procedure to determine the parameters. The parameter training procedure includes receiving features characterizing the poses in each of the images, and determining the parameters that configure the automated pose recognition system to match those features from the corresponding radio frequency signals. | 2019-06-20 |
20190188534 | METHODS AND SYSTEMS FOR CONVERTING A LINE DRAWING TO A RENDERED IMAGE - The system includes a memory that stores instructions for executing processes that convert line drawings to rendered images. The system also includes a processor configured to execute the instructions. The instructions cause the processor to: train a neural network to account for irregularities in the line drawings by introducing noise data into training data of the neural network; receive a first line drawing from an input device; generate a first rendered image based on features identified in the first line drawing; and display the first rendered image on an output device. | 2019-06-20 |
20190188535 | Machine-Learning Based Technique for Fast Image Enhancement - Systems and methods described herein may relate to image transformation utilizing a plurality of deep neural networks. An example method includes receiving, at a mobile device, a plurality of image processing parameters. The method also includes causing an image sensor of the mobile device to capture an initial image and receiving, at a coefficient prediction neural network at the mobile device, an input image based on the initial image. The method further includes determining, using the coefficient prediction neural network, an image transformation model based on the input image and at least a portion of the plurality of image processing parameters. The method additionally includes receiving, at a rendering neural network at the mobile device, the initial image and the image transformation model. Yet further, the method includes generating, by the rendering neural network, a rendered image based on the initial image, according to the image transformation model. | 2019-06-20 |
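In the two-network split described above, the coefficient-prediction network outputs a compact image transformation model and the rendering network applies it to the full-resolution frame. A minimal sketch of the "apply the model" half, assuming (purely for illustration) that the predicted model is a single 3x4 affine color transform applied per pixel:

```python
def apply_affine_color_model(image, coeffs):
    """Render an output image by applying a predicted affine color
    transform to each pixel.

    image:  list of (r, g, b) tuples with channels in [0, 1]
    coeffs: 3 rows of 4 values; out_channel = a*r + b*g + c*b + d.
    A real system would predict spatially varying coefficients; a
    single global matrix keeps the sketch short.
    """
    out = []
    for r, g, b in image:
        pixel = tuple(
            a * r + bg * g + bb * b + d for a, bg, bb, d in coeffs
        )
        out.append(pixel)
    return out
```

An identity matrix leaves the image unchanged; a learned matrix can implement white balance, contrast, or tone adjustments cheaply enough to run on a mobile device, which is the point of predicting coefficients instead of pixels.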
20190188536 | DYNAMIC FEATURE SELECTION FOR MODEL GENERATION - Embodiments generate a model of demand for a product that includes an optimized feature set. Embodiments receive sales history for the product and receive a set of relevant features for the product and designate a subset of the relevant features as mandatory features. From the sales history, embodiments form a training dataset and a validation dataset and randomly select from the set of relevant features one or more optional features. Embodiments include the selected optional features with the mandatory features to create a feature test set. Embodiments train an algorithm using the training dataset and the feature test set to generate a trained algorithm and calculate an early stopping metric using the trained algorithm and the validation dataset. When the early stopping metric is below a predefined threshold, the feature test set becomes the optimized feature set. | 2019-06-20 |
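The selection loop above (mandatory features plus a random draw of optional features, repeated until the early-stopping metric clears a threshold) can be sketched as follows. The `train_eval` callback stands in for the patent's train-then-validate step and is hypothetical, as are the feature names in the usage note:

```python
import random

def select_features(mandatory, optional, train_eval, threshold,
                    max_rounds=50, seed=0):
    """Randomly combine optional features with the mandatory ones and
    stop as soon as the early-stopping metric falls below the
    threshold.

    train_eval: callable taking a feature list and returning the
                early-stopping metric from training on the training
                set and scoring on the validation set (a caller-
                supplied stand-in here).
    Returns the optimized feature set, or None if no draw qualified.
    """
    rng = random.Random(seed)
    for _ in range(max_rounds):
        k = rng.randint(1, len(optional))
        test_set = sorted(mandatory) + sorted(rng.sample(optional, k))
        if train_eval(test_set) < threshold:
            return test_set          # optimized feature set found
    return None                      # no feature test set qualified
```

For a demand model one might call this with mandatory features like price and optional ones like promotion or weather flags; the loop stops at the first combination whose validation metric is good enough rather than exhaustively searching all subsets.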
20190188537 | EFFECTIVE BUILDING BLOCK DESIGN FOR DEEP CONVOLUTIONAL NEURAL NETWORKS USING SEARCH - A search framework for finding effective architectural building blocks for deep convolutional neural networks is disclosed. The search framework described herein utilizes a building block which incorporates branch and skip connections. At least some operations of the architecture of the building block are undefined and treated as hyperparameters which can be automatically selected and optimized for a particular task. The search framework uses random search over the reduced search space to generate a building block and repeats the building block multiple times to create a deep convolutional neural network. | 2019-06-20 |
20190188538 | METHOD, APPARATUS, AND SYSTEM FOR PROVIDING SKIP AREAS FOR MACHINE LEARNING - An approach is provided for using one or more skip areas to label, train, and/or evaluate a machine learning model. The approach, for example, involves specifying the one or more skip areas with respect to an image. By way of example, a non-skip area of the image is a portion of the image that is not in the one or more skip areas. The approach also involves initiating a labeling of one or more features in the non-skip area of the image while excluding the one or more skip areas from the labeling to create a partially labeled image. The partially labeled image is then included in a training dataset for training a machine learning model. | 2019-06-20 |
20190188539 | ELECTRONIC APPARATUS AND CONTROL METHOD THEREOF - An electronic apparatus is provided. The electronic apparatus includes: a storage configured to store a plurality of filters each corresponding to a plurality of image patterns; and a processor configured to classify an image block including a target pixel and a plurality of surrounding pixels into one of the plurality of image patterns based on a relationship between pixels within the image block and to obtain a final image block in which the target pixel is image-processed by applying at least one filter corresponding to the classified image pattern from among the plurality of filters to the image block, wherein the plurality of filters are obtained by learning, through an artificial intelligence algorithm, a relationship between a plurality of first sample image blocks and a plurality of second sample image blocks corresponding to the plurality of first sample image blocks based on each of the plurality of image patterns. | 2019-06-20 |
20190188540 | Method and Apparatus for Employing Specialist Belief Propagation Networks - A method and apparatus for processing image data is provided. The method includes the steps of employing a main processing network for classifying one or more features of the image data, employing a monitor processing network for determining one or more confusing classifications of the image data, and spawning a specialist processing network to process image data associated with the one or more confusing classifications. | 2019-06-20 |
20190188541 | JOINT 3D OBJECT DETECTION AND ORIENTATION ESTIMATION VIA MULTIMODAL FUSION - The present disclosure generally relates to methods and systems for identifying objects from a 3D point cloud and a 2D image. The method may include determining a first set of 3D proposals using Euclidean clustering on the 3D point cloud and determining a second set of 3D proposals from the 3D point cloud based on a 3D convolutional neural network. The method may include pooling the first and second sets of 3D proposals to determine a set of 3D candidates. The method may include projecting the first set of 3D proposals onto the 2D image and determining a first set of 2D proposals using a 2D convolutional neural network. The method may include pooling the projected first set of 3D proposals and the first set of 2D proposals to determine a set of 2D candidates, and then pooling the set of 3D candidates and the set of 2D candidates. | 2019-06-20 |
20190188542 | Using Deep Video Frame Prediction For Training A Controller Of An Autonomous Vehicle - An image predictor is trained to produce a predicted image based on N preceding images captured by a vehicle camera and the vehicle controls. A discriminator is trained to distinguish between an image that follows P preceding images in an image stream and one that is not a subsequent image. A control generator generates estimated controls based on a set of N images, and the estimated controls and the set of N images are input to the image predictor. The resulting predicted image and the set of N images are input to the discriminator, which outputs a value indicating whether the predicted image is accurate. A loss function based on this value and on the difference between the vehicle controls and the estimated controls for the set of N images is used as feedback for training the control generator. | 2019-06-20 |
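The training signal described above combines two terms: how realistic the discriminator finds the predicted frame, and how far the estimated controls are from the recorded vehicle controls. A minimal sketch of such a combined loss (the exact functional form and the balancing weight are assumptions, not the patent's formula):

```python
def generator_loss(disc_score, true_controls, est_controls, weight=1.0):
    """Combined training loss for the control generator.

    disc_score:    discriminator output in [0, 1]; 1.0 means the
                   predicted frame looks like a true next frame.
    true_controls: recorded vehicle controls (e.g. steering, throttle)
    est_controls:  controls produced by the control generator
    weight:        hypothetical term balancing realism vs. control error.
    """
    realism = 1.0 - disc_score            # penalize unconvincing frames
    control_err = sum(                    # mean squared control error
        (t - e) ** 2 for t, e in zip(true_controls, est_controls)
    ) / len(true_controls)
    return realism + weight * control_err
```

A perfect round (discriminator fully convinced, controls matched exactly) gives zero loss; either an implausible predicted frame or a control mismatch pushes the loss up, steering the generator toward controls that both match the driver and produce believable futures.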
20190188543 | DETECTION SYSTEM, INFORMATION PROCESSING APPARATUS, EVALUATION METHOD, AND PROGRAM - A detection system includes an image processing apparatus configured to discriminate, using a discriminator, whether or not a detection target is contained in an object, and an information processing apparatus configured to provide the discriminator to the image processing apparatus. The information processing apparatus includes an evaluation unit configured to evaluate, when the discriminator is additionally trained using training data, the discrimination precisions of the discriminator before and after the additional training for each attribute, using evaluation data associated with each of a plurality of attributes of an object, and an output unit configured to output the discrimination precisions for each attribute. | 2019-06-20 |
20190188544 | RETURN MAIL SERVICES - In an embodiment, an apparatus comprises one or more processors and one or more memories communicatively coupled to the one or more processors and storing instructions which, when processed by the one or more processors, cause: receiving a digital image of undeliverable mail and storing the digital image in a first database; causing data to be extracted from the digital image using Optical Character Recognition (OCR) or by processing encoded data; causing additional data to be requested from a second database based on the data extracted from the digital image; automatically generating one or more options for the undeliverable mail based on the data from the first database and the additional data from the second database; and causing the digital image, the data, the additional data, and the one or more options for the undeliverable mail to be displayed using a graphical user interface. | 2019-06-20 |
20190188545 | RADIO-FREQUENCY IDENTIFICATION-BASED SHELF LEVEL INVENTORY COUNTING - A radio frequency identification tag is detected at a plurality of detection locations and an expected location of the radio frequency identification tag is determined. A confidence level for each of the detection locations that detected the radio frequency identification tag is determined based on the relative location of the corresponding detection location as compared to the expected location. The determined confidence levels of the detection locations are analyzed to select at least one of the detection locations, and an action based on the at least one selected detection location is performed. | 2019-06-20 |
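The confidence step above (score each detection location by its closeness to the expected shelf location, then pick the best) can be sketched as follows; the `1 / (1 + distance)` confidence model is a simple stand-in for whatever scoring the patent actually uses:

```python
def best_detection(detections, expected):
    """Select the detection location with the highest confidence.

    detections: list of (x, y) locations where the tag was read
    expected:   (x, y) expected location from the inventory system
    Confidence is 1 / (1 + euclidean distance), so a detection at
    exactly the expected location scores 1.0 (an illustrative
    model, not the patent's).
    """
    def confidence(loc):
        d = ((loc[0] - expected[0]) ** 2
             + (loc[1] - expected[1]) ** 2) ** 0.5
        return 1.0 / (1.0 + d)
    return max(detections, key=confidence)
```

The selected location could then drive an action such as updating the shelf-level inventory count or flagging a misplaced item.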
20190188546 | NANO-ELECTRO-MECHANICAL LABELS AND ENCODER - Data is encoded for identification and labeling using a multitude of nano-electro-mechanical structures formed on a substrate. The number of such structures, their shapes, choice of materials, the spacing therebetween and the overall distribution of the structures result in a vibrational pattern or an acoustic signature that uniquely corresponds to the encoded data. A first group of the structures is formed in conformity with the design rules of a fabrication process used to manufacture the device that includes the structures. A second group of the structures is formed so as not to conform to the design rules and thereby to undergo variability as a result of the statistical variations that are inherent in the fabrication process. | 2019-06-20 |
20190188547 | AUXILIARY ANTENNA, RFID SYSTEM, AND METHOD FOR READING RFID TAG - An auxiliary antenna is provided that enables communication between the small antenna of an RFID tag and the antenna of a reader device without requiring the reader device itself to use a small antenna. The auxiliary antenna is configured to expand the communication range of the RFID tag's antenna so that the small antenna included in the RFID tag can communicate with the antenna included in the reader device. The auxiliary antenna includes a resonance loop group in which a plurality of resonance loops having a resonance frequency corresponding to the communication frequency are arranged to be coupled through a magnetic field. Moreover, the resonance loop group has an antenna area larger than that of the RFID tag's antenna and equivalent to or larger than that of the reader device's antenna. | 2019-06-20 |
20190188548 | RFID TRANSPONDER-BASED MODULE FOR COMMUNICATING INFORMATION TO A READING DEVICE - A transponder-based module is placed on a mobile object so as to transmit information when in proximity to a reading device. The transponder-based module includes at least one energy source for powering the module, at least one sensor for performing measurements of a physical parameter, and a microcontroller linked to the measurement sensor for processing the sensor's measurements. The module further includes a memory unit for storing the measurement data of the measurements performed by the sensor, a receiver for an interrogation signal from a reading device in proximity, and a transmitter for transmitting the stored measurement data at ultra-high frequency and at a very high bitrate upon reception of an interrogation signal from the reading device in proximity. | 2019-06-20 |
20190188549 | TAG MANAGEMENT DEVICE, TAG MANAGEMENT METHOD, AND PROGRAM - A tag management device includes a signal detecting unit configured to detect signals which are emitted from an old RFID tag and a new RFID tag, a comparison unit configured to compare the signals emitted from the old RFID tag and the new RFID tag, and a comparison result output unit configured to output a result of the comparison. | 2019-06-20 |