Patent application number | Description | Published |
--- | --- | --- |
20100026700 | GPU SCENE COMPOSITION AND ANIMATION - Architecture that expresses scene composition and animation in a form that can run entirely on the graphics processing unit (GPU). The architecture stores retained graph information (e.g., scene graph and animation information) as texture information, and uses shaders (e.g., vertex and pixel) to evaluate time information, evaluate animation, evaluate transforms, and rasterize paths. Additionally, the architecture provides the ability to compute animation positions and redraw entirely on the GPU without per-primitive CPU intervention. | 02-04-2010 |
20130063459 | PRIMITIVE COMPOSITION - Performing primitive composition within a user interface thread, enhancing the ability to scale a user interface framework to computing devices having limited resources. In one or more embodiments, a user interface thread walks a user interface hierarchy that describes elements of a program's user interface and directly generates static Graphics Processing Unit (GPU) data structures representing at least a portion of the user interface hierarchy. The user interface thread passes the static GPU data structures to a composition thread, which uses them during generation of a plurality of video frames: based on the static GPU data structures, the composition thread sends GPU data and GPU commands for the plurality of video frames to a GPU for rendering. | 03-14-2013 |
20130063464 | PRIMITIVE RENDERING USING A SINGLE PRIMITIVE TYPE - Rendering different types of graphical content using a single primitive type. Embodiments enable graphical elements of different content types representing a scene to be rendered as a batch based on the single primitive type, thereby reducing data transfer and improving processing performance. For example, each graphical element in a batch of graphical elements can be rendered based on modifications to instances of a template shape, which represents a single primitive type usable for rendering different types of graphical content. The modifications to each instance can include modifying the instance according to transformation data, clip data, and/or width and height data to position the instance in a scene, and filling the modified instance according to one or more of shape or brush data corresponding to the graphical element. | 03-14-2013 |
20130106885 | ALIASING OF LIVE ELEMENTS IN A USER INTERFACE | 05-02-2013 |
20150035744 | NEAR-EYE OPTIC POSITIONING IN DISPLAY DEVICES - Embodiments are disclosed for adjusting alignment of a near-eye optic of a see-through head-mounted display system. In one embodiment, a method of detecting eye location for a head-mounted display system includes directing positioning light to an eye of a user and detecting the positioning light reflected from the eye of the user. The method further includes determining a distance between the eye and a near-eye optic of the head-mounted display system based on attributes of the detected positioning light, and providing feedback for adjusting the distance between the eye and the near-eye optic. | 02-05-2015 |
20150193920 | MAPPING GLINTS TO LIGHT SOURCES - The technology disclosed herein provides various embodiments for mapping glints that reflect off an object to the light sources responsible for the glints. Embodiments disclosed herein are able to correctly map glints to light sources by capturing just a few images with a camera. Each image is captured while illuminating the object with a different pattern of light sources. A glint-free image, one in which the glints have been removed by image processing techniques, may also be determined. | 07-09-2015 |
20150268821 | SELECTION USING EYE GAZE EVALUATION OVER TIME - Various embodiments relating to selection of a user interface object displayed on a graphical user interface based on eye gaze are disclosed. In one embodiment, a selection input may be received. A plurality of eye gaze samples at different times within a time window may be evaluated. The time window may be selected based on a time at which the selection input is detected. A user interface object may be selected based on the plurality of eye gaze samples. | 09-24-2015 |
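
The abstract for application 20100026700 describes storing retained animation data in texture-like form and evaluating it for a given time without per-primitive CPU work. The sketch below mimics that evaluation step on the CPU for illustration only: the keyframe layout and function names are assumptions, not the patented design.

```python
def sample_track(track, t):
    """Linearly interpolate a value from (time, value) keyframes,
    much as a shader might after fetching them from a texture."""
    if t <= track[0][0]:
        return track[0][1]
    if t >= track[-1][0]:
        return track[-1][1]
    for (t0, v0), (t1, v1) in zip(track, track[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return v0 + a * (v1 - v0)

def evaluate_scene(tracks, t):
    """Evaluate every animated property for time t from the retained
    data alone, with no per-primitive intervention by the caller."""
    return {name: sample_track(track, t) for name, track in tracks.items()}
```

In the patented architecture this evaluation would run inside vertex/pixel shaders over texture-resident data; the Python version only shows the data flow.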
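
Application 20130063464 describes batching varied content by instancing a single template shape, with per-instance transform, width/height, and clip data. A minimal sketch of that instancing model, assuming a unit quad as the template and axis-aligned rectangles for clipping (both illustrative choices, not taken from the filing):

```python
# Template shape: a unit quad, the single primitive type.
UNIT_QUAD = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def instance_quad(width, height, translate, clip=None):
    """Scale the template to width x height, position it via the
    translation, and optionally clip it to an axis-aligned rect."""
    tx, ty = translate
    verts = [(x * width + tx, y * height + ty) for x, y in UNIT_QUAD]
    if clip is not None:
        cx0, cy0, cx1, cy1 = clip
        verts = [(clamp(x, cx0, cx1), clamp(y, cy0, cy1)) for x, y in verts]
    return verts

def batch(elements):
    """Emit one vertex list for many elements of different content
    types, all derived from the single template primitive."""
    out = []
    for e in elements:
        out.extend(instance_quad(**e))
    return out
```

Because every element reduces to modified instances of one primitive, a renderer can submit the whole batch in a single draw, which is the data-transfer saving the abstract claims.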
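
Application 20150193920 maps glints to light sources from just a few captures, each lit by a different pattern of sources. The abstract does not disclose the method, so the following is only a plausible sketch: it assumes each source is given a unique on/off signature across the captures, so a glint's brightness pattern identifies its source, and it builds the glint-free image as a per-pixel minimum (an assumed technique).

```python
def glint_codes(images, glint_positions, threshold=128):
    """For each glint position, record whether the glint is bright in
    each capture, yielding a binary signature per glint."""
    codes = {}
    for (x, y) in glint_positions:
        codes[(x, y)] = tuple(int(img[y][x] >= threshold) for img in images)
    return codes

def map_glints(codes, source_patterns):
    """Match each glint's on/off signature to the light source whose
    illumination pattern across the captures produced it."""
    by_pattern = {pattern: src for src, pattern in source_patterns.items()}
    return {pos: by_pattern.get(code) for pos, code in codes.items()}

def glint_free(images):
    """Per-pixel minimum over the captures: a glint appears only in
    frames where its source is lit, so the minimum suppresses it
    (assuming every source is off in at least one capture)."""
    h, w = len(images[0]), len(images[0][0])
    return [[min(img[y][x] for img in images) for x in range(w)]
            for y in range(h)]
```

With n captures, up to 2**n - 1 distinct source patterns can be distinguished, which is consistent with the abstract's claim of needing "just a few images".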
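
Application 20150268821 selects a UI object by evaluating multiple gaze samples within a time window anchored to the selection input. A minimal sketch of that flow; the window policy (ending at the selection input) and the vote-counting rule are assumptions, since the abstract only says the window is selected based on the selection time:

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float  # timestamp in seconds
    x: float  # gaze position on screen
    y: float

def select_object(samples, objects, selection_time, window=0.3):
    """Pick the object whose bounds contain the most gaze samples
    inside a time window ending at the selection input (assumed
    policy: gaze typically precedes the confirming action)."""
    in_window = [s for s in samples
                 if selection_time - window <= s.t <= selection_time]
    if not in_window:
        return None
    scores = {}
    for name, (x0, y0, x1, y1) in objects.items():
        scores[name] = sum(1 for s in in_window
                           if x0 <= s.x <= x1 and y0 <= s.y <= y1)
    return max(scores, key=scores.get)
```

Aggregating over a window rather than using the single most recent sample makes the selection robust to saccades and tracker noise, which is the motivation the abstract implies.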