Patent application number | Description | Published |
20080310723 | TEXT PREDICTION WITH PARTIAL SELECTION IN A VARIETY OF DOMAINS - A computing system may predict a word based on received user input that selects a part of the word (e.g., the first characters, the first root, etc.). Specifically, a program, when run on the computing system, may perform a method including creating a candidate list of words based on received user input. These words may then be organized into a hierarchy, or tree structure, in which each word is associated with a parent and each parent is a partial match for its associated words. The top-tier partial matches may be presented, and user input corresponding to a selected partial match may be received. A set of candidates related to the selected partial match may then be presented for user selection. | 12-18-2008 |
20090159342 | INCORPORATED HANDWRITING INPUT EXPERIENCE FOR TEXTBOXES - Textboxes are provided to support both standard textbox operations and handwriting input. A textbox may be displayed as a standard textbox, receive input from a keyboard, a pointing device (e.g., a mouse), and/or a handheld writing device (e.g., an electronic pen or stylus), and interpret the input to perform standard textbox operations. Based on various user actions, the textbox is displayed as an enlarged textbox that provides a writing surface for receiving input from the handheld writing device that is interpreted as handwriting input. Text is recognized from the handwriting input, and the text from the enlarged textbox is synchronized with the textbox. | 06-25-2009 |
20090161958 | INLINE HANDWRITING RECOGNITION AND CORRECTION - As a user writes using a handheld writing device, such as an electronic pen or stylus, handwriting input is received and initially displayed as digital ink. The display of the digital ink is converted to recognized text inline with additional digital ink as the user continues to write. A user may edit a word of recognized text inline with other text by selecting the word. An enlarged version of the word is displayed in a character correction user interface that allows a user to make corrections on an individual character basis and also provides other correction options for the word. | 06-25-2009 |
20090161959 | HANDWRITING TEMPLATES - Apparatuses, methods, and computer-storage media provide character string templates to facilitate receiving non-prose handwriting input from a user and converting that input to text to create character strings capable of being provided to an application and/or displayed to the user. Templates may be provided manually or automatically, and may or may not be associated with an application text box. A template generally contains pre-populated segments and open segments for receiving handwriting. | 06-25-2009 |
20090292989 | PANNING CONTENT UTILIZING A DRAG OPERATION - Computer-readable media, computerized methods, and computer systems for intuitively invoking a panning action (e.g., moving content within a content region of a display area) by applying a user-initiated input at the content region rendered at a touchscreen interface are provided. Initially, aspects of the user-initiated input include a location of actuation (e.g., touch point on the touchscreen interface) and a gesture. Upon ascertaining that the actuation location occurred within the content region and that the gesture is a drag operation, based on a distance of uninterrupted tactile contact with the touchscreen interface, a panning mode may be initiated. When in the panning mode, and if the application rendering the content at the display area supports scrolling functionality, the gesture will control movement of the content within the content region. In particular, the drag operation of the gesture will pan the content within the display area when surfaced at the touchscreen interface. | 11-26-2009 |
20090319894 | RENDERING TEACHING ANIMATIONS ON A USER-INTERFACE DISPLAY - Computer-readable media, computerized methods, and computer systems for intuitively surfacing a teaching animation that demonstrates a manual gesture recognized by a writing-pad tool are provided. Initially, the writing-pad tool is interrogated to determine a context of a computing environment associated with a touchscreen interface. Generally, determining includes recognizing a current condition of the writing-pad tool based on whether text is provided within a content-entry area generated thereby, ascertaining whether a focus of a cursor tool resides within the content-entry area based on whether a portion of the text is selected, and ascertaining which actions are available for invocation at the writing-pad tool based on the current condition and the focus of the cursor. The context of the computing environment is utilized to identify which teaching animations to promote to an active state. Typically the promoted teaching animations are associated with the actions ascertained as available for invocation. | 12-24-2009 |
20100081476 | GLOW TOUCH FEEDBACK FOR VIRTUAL INPUT DEVICES - The claimed subject matter is directed to providing feedback in a touch screen device in response to an actuation of a virtual unit in a virtual input device. Specifically, the claimed subject matter provides a method and system for providing visual feedback in response to an actuation of a virtual key in a virtual keyboard. One embodiment of the claimed subject matter is implemented as a method for providing luminescent feedback in response to an actuation of a virtual key in a virtual keyboard. User input in a virtual keyboard corresponding to a virtual key is received. The corresponding virtual key is actuated and registered in response to the user input, and luminescent feedback is displayed to the user as confirmation of the actuation of the virtual key. | 04-01-2010 |
20100141590 | Soft Keyboard Control - This document describes tools associated with soft keyboard control functions. In some implementations, the tools recognize a keyboard launch gesture on a touch sensitive screen and present a preview of a keyboard on the touch sensitive screen responsive to the launch gesture. The tools can also display the keyboard on the touch sensitive screen responsive to cessation of the launch gesture. | 06-10-2010 |
20100235733 | DIRECT MANIPULATION OF CONTENT - Various embodiments provide techniques for direct manipulation of content. The direct manipulation of content can provide an intuitive way for a user to access and interact with content. In at least some embodiments, content manipulation is “direct” in that content displayed in a user interface (e.g., one or more Web pages in a Web browser interface) can be moved in and/or out of the user interface in a direction that corresponds to user-initiated physical movements, such as the user dragging or flicking the content with the user's finger or some other type of input device. | 09-16-2010 |
20120121181 | INLINE HANDWRITING RECOGNITION AND CORRECTION - As a user writes using a handheld writing device, such as an electronic pen or stylus, handwriting input is received and initially displayed as digital ink. The display of the digital ink is converted to recognized text inline with additional digital ink as the user continues to write. A user may edit a word of recognized text inline with other text by selecting the word. An enlarged version of the word is displayed in a character correction user interface that allows a user to make corrections on an individual character basis and also provides other correction options for the word. | 05-17-2012 |
20120299933 | Collection Rearrangement Animation - Collection rearrangement animation techniques are described herein, which can be employed to represent changes made by a rearrangement in a manner that reduces or eliminates visual confusion. A collection of items arranged at initial positions can be displayed. Various interactions can initiate a rearrangement of the collection of items, such as to sort the items, add or remove an item, or reposition an item. An animation of the rearrangement is depicted that omits at least a portion of the spatial travel along pathways from the initial positions to destination positions in the rearranged collection. In one approach, items can be animated to disappear from the initial positions and reappear at destination positions. This can occur by applying visual transitions that are bound to dimensional footprints of the items in the collection. Additionally or alternatively, intermediate and overlapping positions can be omitted by the animation. | 11-29-2012 |
20120304061 | Target Disambiguation and Correction - Various embodiments enable target disambiguation and correction. In one or more embodiments, target disambiguation includes an entry mode in which attempts are made to disambiguate one or more targets that have been selected by a user, and an exit mode which exits target disambiguation. Entry mode can be triggered in a number of different ways including, by way of example and not limitation, acquisition of multiple targets, selection latency, a combination of multiple target acquisition and selection latency, and the like. Exit mode can be triggered in a number of different ways including, by way of example and not limitation, movement of a target selection mechanism outside of a defined geometry, speed of movement of the target selection mechanism, and the like. | 11-29-2012 |
20120304113 | GESTURE-BASED CONTENT-OBJECT ZOOMING - This document describes techniques and apparatuses for gesture-based content-object zooming. In some embodiments, the techniques receive a gesture made to a user interface displaying multiple content objects, determine which content object to zoom, determine an appropriate size for the content object based on bounds of the object and the size of the user interface, and zoom the object to the appropriate size. | 11-29-2012 |
20130033525 | Cross-slide Gesture to Select and Rearrange - Cross slide gestures for touch displays are described. In at least some embodiments, cross slide gestures can be used on content that pans or scrolls in one direction, to enable additional actions, such as content selection, drag and drop operations, and the like. In one or more embodiments, a cross slide gesture can be performed by dragging an item or object in a direction that is different from a scrolling direction. The different-direction drag can be mapped to additional actions or functionality. In one or more embodiments, one or more thresholds can be utilized, such as a distance threshold, in combination with the different-direction drag, to map to additional actions or functionality. | 02-07-2013 |
20130044141 | Cross-slide Gesture to Select and Rearrange - Cross slide gestures for touch displays are described. In at least some embodiments, cross slide gestures can be used on content that pans or scrolls in one direction, to enable additional actions, such as content selection, drag and drop operations, and the like. In one or more embodiments, a cross slide gesture can be performed by dragging an item or object in a direction that is different from a scrolling direction. The different-direction drag can be mapped to additional actions or functionality. In one or more embodiments, one or more thresholds can be utilized, such as a distance threshold, in combination with the different-direction drag, to map to additional actions or functionality. | 02-21-2013 |
20130063492 | SCALE FACTORS FOR VISUAL PRESENTATIONS - A device may display a presentation of elements (e.g., icons) on a display component. However, display components have a pixel density that affects aesthetic and practical aspects of the presentation (e.g., rendering the presentation at a variable and inconsistent size); yet, many presentations are not generated in view of the pixel density of the display component of the device. Presented herein are techniques for generating and displaying a presentation of elements in view of the pixel density of the display component, using a scale factor set of scale factors that specify a pixel density range and a scale factor value (e.g., 120%) to be applied to the elements of the presentation. The scale factor set may be kept small to reduce the administrative burden on the designer of the element, while also achieving approximately consistent sizing of the presentation on display components having variable pixel densities. | 03-14-2013 |
20130067373 | EXPLICIT TOUCH SELECTION AND CURSOR PLACEMENT - A system and method for implementing an efficient and easy-to-use interface for a touch screen device. A cursor may be placed by a user using simple inputs. The device first places the cursor coarsely, then refines the cursor placement upon further input from the user. Text may be selected using a gripper associated with the cursor. The user interface allows text selection without occluding the text being selected with the user's finger or the gripper. For selecting text in a multi-line block of text, a dynamic safety zone is implemented to simplify text selection for the user. | 03-14-2013 |
20130067391 | Semantic Zoom Animations - Semantic zoom techniques are described. In one or more implementations, techniques are described that may be utilized by a user to navigate to content of interest. These techniques may also include a variety of different features, such as to support semantic swaps and zooming “in” and “out.” These techniques may also include a variety of different input features, such as to support gestures, cursor-control device, and keyboard inputs. A variety of other features are also supported as further described in the detailed description and figures. | 03-14-2013 |
20130067392 | Multi-Input Rearrange - Multi-input rearrange techniques are described in which multiple inputs are used to rearrange items within navigable content of a computing device. Objects can be selected by a first input, which causes the objects to remain visually available within a viewing pane as content is navigated through the viewing pane. In other words, objects are “picked-up” and held within the visible region of a user interface as long as the first input continues. Additional input to navigate content can be used to rearrange selected objects, such as by moving the objects to a different file folder, attaching the objects to a message, and so forth. In one approach, one hand can be used for a first gesture to pick-up an object and another hand can be used for gestures/input to navigate content while the picked-up object is being “held” by continued application of the first gesture. | 03-14-2013 |
20130067398 | Semantic Zoom - Semantic zoom techniques are described. In one or more implementations, techniques are described that may be utilized by a user to navigate to content of interest. These techniques may also include a variety of different features, such as to support semantic swaps and zooming “in” and “out.” These techniques may also include a variety of different input features, such as to support gestures, cursor-control device, and keyboard inputs. A variety of other features are also supported as further described in the detailed description and figures. | 03-14-2013 |
20130067408 | CONTEXTUALLY APPLICABLE COMMANDS - A graphical user interface includes a collection of selectable content items, and a command surface for selectively displaying command selectors relating to the collection of selectable content items. Responsive to user selection of a first content item from the collection, the command surface is updated to include a first set of one or more command selectors applicable to the first content item. Responsive to user selection of a second content item, the command surface is updated to include a second set of one or more command selectors applicable to both the first content item and the second content item. Each command selector in the second set is selectable to execute a contextually applicable command related to both the first content item and the second content item. | 03-14-2013 |
20130067414 | SELECTING AND EXECUTING OBJECTS WITH A SINGLE ACTIVATION - Techniques are described for handling input from a pointing device within a computing system. The techniques include, under control of one or more processors configured with executable instructions, receiving from the pointing device a first signal while the pointing device is pointing at an object related to an executable application. The origin of the first signal is determined and, if the first signal originated based upon a single activation of a first user input on the pointing device, the object is selected. If the first signal originated based upon a single activation of a second user input on the pointing device, the object is executed. | 03-14-2013 |
20130067420 | Semantic Zoom Gestures - Semantic zoom techniques are described. In one or more implementations, techniques are described that may be utilized by a user to navigate to content of interest. These techniques may also include a variety of different features, such as to support semantic swaps and zooming “in” and “out.” These techniques may also include a variety of different input features, such as to support gestures, cursor-control device, and keyboard inputs. A variety of other features are also supported as further described in the detailed description and figures. | 03-14-2013 |
20130080979 | EXPLICIT TOUCH SELECTION AND CURSOR PLACEMENT - A system and method for implementing an efficient and easy-to-use interface for a touch screen device. A cursor may be placed by a user using simple inputs. The device first places the cursor coarsely, then refines the cursor placement upon further input from the user. Text may be selected using a gripper associated with the cursor. The user interface allows text selection without occluding the text being selected with the user's finger or the gripper. For selecting text in a multi-line block of text, a dynamic safety zone is implemented to simplify text selection for the user. | 03-28-2013 |
20130100045 | PRESSURE-BASED INTERACTION FOR INDIRECT TOUCH INPUT DEVICES - In an indirect interaction input device, z-information can be considered in defining a user interaction model for the device. Any measurement of displacement in a z-direction can be used, if such information is available from the input device. The pressure data can be used to define states of interaction, with transitions between these states determined by various thresholds. The device can provide raw or normalized data, and can provide state information or data defining its thresholds that specify state transitions. This information can be provided as an attribute of a contact point provided by the device. Data can be normalized across various similar devices. Applications can use the raw pressure data from the device for their own purposes, or rely on the device itself or host operating system software to interpret the pressure data according to thresholds and states. | 04-25-2013 |
20130113716 | INTERACTION MODELS FOR INDIRECT INTERACTION DEVICES - One or more techniques and/or systems are provided for utilizing input data received from an indirect interaction device (e.g., mouse, touchpad, etc.) as if the data had been received from a direct interaction device (e.g., touchscreen). Interaction models are described for handling input data received from an indirect interaction device. For example, the interaction models may provide for the presentation of two or more targets (e.g., cursors) on a display when two or more contacts (e.g., fingers) are detected by the indirect interaction device. Moreover, based upon a number of contacts detected and/or a pressure applied by respective contacts, the presented target(s) may be respectively transitioned between a hover visualization and an engage visualization. Targets in an engage visualization may manipulate a size of an object presented in a user interface, pan the object, drag the object, rotate the object, and/or otherwise engage the object, for example. | 05-09-2013 |
20130147749 | PANNING CONTENT UTILIZING A DRAG OPERATION - Computer-readable media, computerized methods, and computer systems for intuitively invoking a panning action (e.g., moving content within a content region of a display area) by applying a user-initiated input at the content region rendered at a touchscreen interface are provided. Initially, aspects of the user-initiated input include a location of actuation (e.g., touch point on the touchscreen interface) and a gesture. Upon ascertaining that the actuation location occurred within the content region and that the gesture is a drag operation, based on a distance of uninterrupted tactile contact with the touchscreen interface, a panning mode may be initiated. When in the panning mode, and if the application rendering the content at the display area supports scrolling functionality, the gesture will control movement of the content within the content region. In particular, the drag operation of the gesture will pan the content within the display area when surfaced at the touchscreen interface. | 06-13-2013 |
20130167058 | CLOSING APPLICATIONS - Application closing techniques are described. In one or more implementations, a computing device recognizes an input as involving selection of an application displayed in a display environment by the computing device and subsequent movement of a point of the selection toward an edge of the display environment. Responsive to the recognizing of the input, the selected application is closed by the computing device. | 06-27-2013 |
20130171607 | Self-revealing Gesture - Various embodiments provide self-revealing gestures that are designed to provide an indication of how to perform one or more different gestures. In at least one embodiment, an initiation gesture is received, relative to an object. The initiation gesture is configured to cause presentation of a visualization designed to provide an indication of how to perform a different gesture. Responsive to receiving the initiation gesture, the visualization is presented without causing performance of an operation associated with the different gesture. | 07-04-2013 |
20140237413 | GLOW TOUCH FEEDBACK FOR VIRTUAL INPUT DEVICES - The claimed subject matter is directed to providing feedback in a touch screen device in response to an actuation of a virtual unit in a virtual input device. Specifically, the claimed subject matter provides a method and system for providing visual feedback in response to an actuation of a virtual key in a virtual keyboard. One embodiment of the claimed subject matter is implemented as a method for providing luminescent feedback in response to an actuation of a virtual key in a virtual keyboard. User input in a virtual keyboard corresponding to a virtual key is received. The corresponding virtual key is actuated and registered in response to the user input, and a luminescent feedback is displayed to the user as confirmation of the actuation of the virtual key. | 08-21-2014 |
20140372923 | High Performance Touch Drag and Drop - High performance touch drag and drop are described. In embodiments, a multi-threaded architecture is implemented to include at least a manipulation thread and an independent hit test thread. The manipulation thread is configured to receive one or more messages associated with an input and send data associated with the messages to the independent hit test thread. The independent hit test thread is configured to perform an independent hit test to determine whether the input hit an element that is eligible for a particular action, and identify an interaction model associated with the input. The independent hit test thread also sends an indication of the interaction model to the manipulation thread to enable the manipulation thread to detect whether the particular action is triggered. | 12-18-2014 |
20140372934 | TETHERED SELECTION HANDLE - Technologies are generally described for providing a tethered selection handle for direct selection of content on a touch or gesture interface. Touch or gesture input on a computing device may be detected to begin content selection, and a start handle may be displayed near the initial input location. An end handle may be displayed to indicate an end of the selection. After the selection, the end handle, a portion of the end handle, or a separate indicator may be displayed at a location of the user's current interaction point to indicate to the user that the computing device is aware of the movement of the user's interaction point away from the end handle, but the content selection has not changed. The newly displayed indicator may be tethered to the end handle to further indicate the connection between the end of the selected content and the user's current interaction point. | 12-18-2014 |
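The first entry in the listing (20080310723) describes grouping candidate words under shared partial matches so the user can pick a prefix tier before picking a word. A minimal sketch of that grouping idea, assuming fixed-length prefixes stand in for the application's "partial matches" (the actual patent is not limited to prefixes of a fixed length, and all function names here are illustrative):

```python
# Illustrative sketch only: candidate words are grouped under shared
# prefixes (the "top-tier partial matches"), and selecting a prefix
# reveals the candidate words filed under it, as in application 20080310723.
from collections import defaultdict


def build_partial_matches(candidates, prefix_len):
    """Group candidate words under their shared prefix of the given length."""
    tiers = defaultdict(list)
    for word in candidates:
        tiers[word[:prefix_len]].append(word)
    return dict(tiers)


def candidates_for(tiers, selected_prefix):
    """Return the candidate words filed under a selected partial match."""
    return tiers.get(selected_prefix, [])


candidates = ["interact", "interface", "interior", "internal", "interpret"]
tiers = build_partial_matches(candidates, prefix_len=6)
# Top-tier partial matches presented first for user selection:
print(sorted(tiers))
# Candidates revealed once a partial match is selected:
print(candidates_for(tiers, "interf"))
```

A real predictor would build these tiers from an input-driven candidate list and could nest them into the deeper tree structure the abstract describes; this sketch shows only the single parent-to-children step.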