Patent application number | Description | Published |
20130321257 | Methods and Apparatus for Cartographically Aware Gestures - Methods and apparatus for a map tool on a mobile device for implementing cartographically aware gestures directed to a map view of a map region. The map tool may derive a cartographically aware gesture from an actual gesture input directed to a map view together with map data for the map region, which may include metadata corresponding to elements within the map region. The map tool may then determine, based on one or more elements of the map data, a modification to be applied to an implementation of the gesture. Given the modification to the gesture implementation, the map tool may then render an updated map view based on the modified gesture, instead of an updated map view based solely on the user gesture. | 12-05-2013 |
20130321402 | ROTATION OPERATIONS IN A MAPPING APPLICATION - A mapping program for execution by at least one processing unit of a device is described. The device includes a touch-sensitive screen and a multi-touch input interface. The program renders and displays a presentation of a map from a particular view of the map. The program generates an instruction to rotate the displayed map in response to a multi-touch input from the multi-touch input interface. In order to generate a rotating presentation of the map, the program changes the particular view while receiving the multi-touch input and for a duration of time after the multi-touch input has terminated in order to provide a degree of inertia motion for the rotating presentation of the map. | 12-05-2013 |
20130322634 | CONTEXT-AWARE VOICE GUIDANCE - A context-aware voice guidance method is provided that interacts with other voice services of a user device. The voice guidance does not provide audible guidance while the user is making a verbal request to any of the voice-activated services. Instead, the voice guidance transcribes its output on the screen while the verbal requests from the user are received. In some embodiments, the voice guidance only provides a short warning sound to get the user's attention while the user is speaking on a phone call or another voice-activated service is providing an audible response to the user's inquiries. The voice guidance in some embodiments distinguishes between music that can be ducked and spoken words, for example from an audiobook, that the user wants paused rather than skipped. The voice guidance ducks music but pauses spoken words of an audiobook in order to provide voice guidance to the user. | 12-05-2013 |
20130322665 | CONTEXT-AWARE VOICE GUIDANCE - A context-aware voice guidance method is provided that interacts with other voice services of a user device. The voice guidance does not provide audible guidance while the user is making a verbal request to any of the voice-activated services. Instead, the voice guidance transcribes its output on the screen while the verbal requests from the user are received. In some embodiments, the voice guidance only provides a short warning sound to get the user's attention while the user is speaking on a phone call or another voice-activated service is providing an audible response to the user's inquiries. The voice guidance in some embodiments distinguishes between music that can be ducked and spoken words, for example from an audiobook, that the user wants paused rather than skipped. The voice guidance ducks music but pauses spoken words of an audiobook in order to provide voice guidance to the user. | 12-05-2013 |
20130325317 | SMART LOADING OF MAP TILES - Systems and methods are provided for displaying a portion of a map on a mobile device of a user while the user is traveling along a route. The mobile device can use a selected route and a current location of the device to load map tiles for parts of the map that are upcoming along the route. In this manner, the user can have quick access to the portions of the map that the user likely will want to view. For example, the map tiles can be loaded for the next 50 km, and then when the stored tiles reach only 25 km ahead, another 25 km of tiles can be retrieved. The number of tiles loaded (e.g., minimum and maximum amounts) can vary based on a variety of factors, such as network state, distance traveled along the route, and whether the mobile device is charging. | 12-05-2013 |
20130325319 | INTEGRATED MAPPING AND NAVIGATION APPLICATION - An integrated map and navigation program is described. The program provides a first operational mode for browsing and searching a map. The program provides a second operational mode for providing a navigation presentation that provides a set of navigation directions along a navigated route by reference to the map. | 12-05-2013 |
20130325340 | ROUTING APPLICATIONS FOR NAVIGATION - Some embodiments provide a mapping application that provides routing information to third-party applications on a device. The mapping application receives route data that includes first and second locations. Based on the route data, the mapping application provides a set of routing applications that provide navigation information. The mapping application receives a selection of a routing application in the set of routing applications. The mapping application passes the route data to the selected routing application in order for the routing application to provide navigation information. | 12-05-2013 |
20130325341 | ROUTE DISPLAY AND REVIEW - For a device running a mapping application that includes a display area for displaying a map and a set of graphical user interface (GUI) items, a method for providing routes is described. The method computes a route between a starting location and a destination location. The route includes a sequence of maneuvering instructions for guiding a user through the route. The method provides a movable GUI item for showing each maneuvering instruction in the sequence in order to allow a user to navigate the route by moving the GUI items in and out of the display area. | 12-05-2013 |
20130325342 | NAVIGATION APPLICATION WITH ADAPTIVE INSTRUCTION TEXT - Some embodiments provide a navigation application. The navigation application includes an interface for receiving data describing junctures along a route from a first location on a map to a second location on the map. The data for each juncture includes a set of angles at which roads leave the juncture. The navigation application includes a juncture decoder for synthesizing, from the juncture data, instruction elements for each juncture that describe different aspects of a maneuver to be performed at the juncture. The navigation application includes an instruction generator for generating at least two different instruction sets for a maneuver by combining one or more of the instruction elements for the juncture at which the maneuver is to be performed. The navigation application includes an instruction retriever for selecting one of the different instruction sets for the maneuver according to a context in which the instruction set will be displayed. | 12-05-2013 |
20130325343 | MAPPING APPLICATION WITH NOVEL SEARCH FIELD - For a device that runs a mapping application, a method of displaying search completions in a display area of the mapping application that includes a search field for receiving inputs is described. The method identifies a set of search completions that include any recent search completions used to search locations on a map. Upon receiving a non-text input through the search field when the search field is empty, the method displays the set of search completions in the display area. | 12-05-2013 |
20130325481 | VOICE INSTRUCTIONS DURING NAVIGATION - A method of providing navigation on an electronic device when the display screen is locked. The method receives a verbal request to start navigation while the display is locked. The method identifies a route from a current location to a destination based on the received verbal request. While the display screen is locked, the method provides navigational directions on the electronic device from the current location of the electronic device to the destination. Some embodiments provide a method for processing a verbal search request. The method receives a navigation-related verbal search request and prepares a sequential list of the search results based on the received request. The method then provides audible information to present a search result from the sequential list. The method presents the search results in a batch form until the user selects a search result, the user terminates the search, or the search items are exhausted. | 12-05-2013 |
20130326384 | DISPLAYING LOCATION PREVIEW - A mapping application that provides a graphical user interface (GUI) for displaying information about a location is described. The GUI includes a first display area for displaying different types of media for a selected location on a map. The GUI includes a second display area for displaying different types of information of the selected location. The GUI includes a set of selectable user interface (UI) items, each of which for causing the second display area to display a particular type of information when selected. | 12-05-2013 |
20130326407 | Problem Reporting in Maps - For a mapping application, a method for reporting a problem related to a map displayed by the mapping application is described. The method identifies a mode in which the mapping application is operating. The method identifies a set of types of problems to report based on the identified mode. The method displays, in a display area of the mapping application, a graphical user interface (GUI) page that includes a set of selectable user interface (UI) items that represent the identified set of types of problems. | 12-05-2013 |
20130326425 | MAPPING APPLICATION WITH 3D PRESENTATION - A device that includes at least one processing unit and stores a multi-mode mapping program for execution by the at least one processing unit is described. The program includes a user interface (UI). The UI includes a display area for displaying a two-dimensional (2D) presentation of a map or a three-dimensional (3D) presentation of the map. The UI includes a selectable 3D control for directing the program to transition between the 2D and 3D presentations. | 12-05-2013 |
20130345959 | NAVIGATION APPLICATION - Some embodiments provide a navigation application that presents a novel navigation presentation on a device. The application identifies a location of the device, and identifies a style of road signs associated with the identified location of the device. The application then generates navigation instructions in the form of road signs that match the identified style. To generate the road sign, the application in some embodiments identifies a road sign template image for the identified style, and generates the road sign by compositing the identified road sign template with at least one of a text instruction and a graphical instruction. In some embodiments, the road sign is generated as a composite textured image that has a texture and a look associated with the road signs at the identified location. | 12-26-2013 |
20130345962 | 3D NAVIGATION - Some embodiments provide a device that stores a novel navigation application. The application in some embodiments includes a user interface (UI) that has a display area for displaying a two-dimensional (2D) navigation presentation or a three-dimensional (3D) navigation presentation. The UI includes a selectable 3D control for directing the program to transition between the 2D and 3D presentations. | 12-26-2013 |
20130345975 | NAVIGATION APPLICATION WITH ADAPTIVE DISPLAY OF GRAPHICAL DIRECTIONAL INDICATORS - Some embodiments provide a navigation application. The navigation application includes an interface for receiving data describing junctures along a route from a first location to a second location. The data for each juncture comprises a set of angles at which roads leave the juncture. The navigation application includes a juncture simplifier for simplifying the angles for the received junctures. The navigation application includes an arrow generator for generating at least two different representations of the simplified juncture. The representations are for use in displaying navigation information describing a maneuver to perform at the juncture during the route. The navigation application includes an arrow selector for selecting one of the different representations of the simplified juncture for display according to a context in which the representation will be displayed. | 12-26-2013 |
20130345980 | PROVIDING NAVIGATION INSTRUCTIONS WHILE OPERATING NAVIGATION APPLICATION IN BACKGROUND - A method of displaying navigational instructions when a navigation application is running in a background mode of an electronic device is provided. The method displays a non-navigation application in the foreground on a display screen of the electronic device. The method displays a navigation bar without a navigation instruction when the device is not near a navigation point. The method displays the navigation bar with a navigation instruction when the device is near a navigation point. In some embodiments, the method receives a command to switch from running the navigation application in the foreground to running another screen view in the foreground. The method then runs the other screen view in the foreground while displaying a navigation status display on an electronic display of the device. | 12-26-2013 |
20130345981 | PROVIDING NAVIGATION INSTRUCTIONS WHILE DEVICE IS IN LOCKED MODE - A method of providing navigation instructions in a locked mode of a device is disclosed. The method, while the display screen of the device is turned off, determines that the device is near a navigation point. The method turns on the display screen and provides navigation instructions. In some embodiments, the method identifies the ambient light level around the device and turns on the display at a brightness level determined by the identified ambient light level. The method turns off the display after the navigation point is passed. | 12-26-2013 |
20140019917 | DISAMBIGUATION OF MULTITOUCH GESTURE RECOGNITION FOR 3D INTERACTION - A multitouch device can interpret and disambiguate different gestures related to manipulating a displayed image of a 3D object, scene, or region. Examples of manipulations include pan, zoom, rotation, and tilt. The device can define a number of manipulation modes, including one or more single-control modes such as a pan mode, a zoom mode, a rotate mode, and/or a tilt mode. The manipulation modes can also include one or more multi-control modes, such as a pan/zoom/rotate mode that allows multiple parameters to be modified simultaneously. | 01-16-2014 |
20140218392 | INTELLIGENT ADJUSTMENT OF MAP VIEWPORTS AT LAUNCH - A method of selectively displaying maps on a mobile device. The method sends a map application, which had been displaying a first map, to the background. The method then returns the map application to the foreground. The method then determines whether to redisplay the previous map or display a map surrounding the then-current location of the device. The determination is based on various factors including user interaction, time that the map application has been in the background, and distance traveled while the map application is in the background. | 08-07-2014 |
20140232569 | AUTOMATIC IDENTIFICATION OF VEHICLE LOCATION - A mobile computing device can be used to locate a vehicle parking location. In particular, the mobile device can automatically identify when a vehicle in which the mobile device is located has entered into a parked state. The mobile device can determine that the vehicle is in a parked state by analyzing one or more parameters that indicate a parked state or a transit state. The location of the mobile device at a time corresponding to when the vehicle is identified as being parked can be associated with an identifier for the current parking location. | 08-21-2014 |
20140232570 | VEHICLE LOCATION IN WEAK LOCATION SIGNAL SCENARIOS - A mobile computing device can be used to locate a vehicle parking location in weak location signal scenarios (e.g., weak, unreliable, or unavailable GPS or other location technology). In particular, the mobile device can determine when a vehicle in which the mobile device is located has entered into a parked state. GPS or other primary location technology may be unavailable at the time the mobile device entered into a parked state (e.g., inside a parking structure). The location of the mobile device at a time corresponding to when the vehicle is identified as being parked can be determined using the first location technology as supplemented with sensor data of the mobile device. After the location of the mobile device at a time corresponding to when the vehicle is identified as being parked is determined, the determined location can be associated with an identifier for the current parking location. | 08-21-2014 |
20140278051 | Prediction Engine - Some embodiments of the invention provide a novel prediction engine that (1) can formulate predictions about current or future destinations and/or routes to such destinations for a user, and (2) can relay information to the user about these predictions. In some embodiments, this engine includes a machine-learning engine that facilitates the formulation of predicted future destinations and/or future routes to destinations based on stored, user-specific data. The user-specific data is different in different embodiments. In some embodiments, the stored, user-specific data includes data about any combination of the following: (1) previous destinations traveled to by the user, (2) previous routes taken by the user, (3) locations of calendared events in the user's calendar, (4) locations of events for which the user has electronic tickets, and (5) addresses parsed from recent e-mails and/or messages sent to the user. In some embodiments, the prediction engine only relies on user-specific data stored on the device on which this engine executes. Alternatively, in other embodiments, it relies only on user-specific data stored outside of the device by external devices/servers. In still other embodiments, the prediction engine relies on user-specific data stored both by the device and by other devices/servers. | 09-18-2014 |
20140278070 | Warning for Frequently Traveled Trips Based on Traffic - Some embodiments of the invention provide a novel prediction engine that (1) can formulate predictions about current or future destinations and/or routes to such destinations for a user, and (2) can relay information to the user about these predictions. In some embodiments, this engine includes a machine-learning engine that facilitates the formulation of predicted future destinations and/or future routes to destinations based on stored, user-specific data. The user-specific data is different in different embodiments. In some embodiments, the stored, user-specific data includes data about any combination of the following: (1) previous destinations traveled to by the user, (2) previous routes taken by the user, (3) locations of calendared events in the user's calendar, (4) locations of events for which the user has electronic tickets, and (5) addresses parsed from recent e-mails and/or messages sent to the user. In some embodiments, the prediction engine only relies on user-specific data stored on the device on which this engine executes. Alternatively, in other embodiments, it relies only on user-specific data stored outside of the device by external devices/servers. In still other embodiments, the prediction engine relies on user-specific data stored both by the device and by other devices/servers. | 09-18-2014 |
20140279723 | MOBILE DEVICE WITH PREDICTIVE ROUTING ENGINE - Some embodiments of the invention provide a mobile device with a novel route prediction engine that (1) can formulate predictions about current or future destinations and/or routes to such destinations for the device's user, and (2) can relay information to the user about these predictions. In some embodiments, this engine includes a machine-learning engine that facilitates the formulation of predicted future destinations and/or future routes to destinations based on stored, user-specific data. The user-specific data is different in different embodiments. In some embodiments, the stored, user-specific data includes data about any combination of the following: (1) previous destinations traveled to by the user, (2) previous routes taken by the user, (3) locations of calendared events in the user's calendar, (4) locations of events for which the user has electronic tickets, and (5) addresses parsed from recent e-mails and/or messages sent to the user. The device's prediction engine only relies on user-specific data stored on the device in some embodiments, relies only on user-specific data stored outside of the device by external devices/servers in other embodiments, and relies on user-specific data stored both by the device and by other devices/servers in other embodiments. | 09-18-2014 |
20140313525 | Systems and Methods for Printing Maps and Directions - This is directed to systems, methods, and computer-readable media for printing maps and directions. In response to receiving an instruction to print directions, a device can define a layout optimized to show the route to travel, along with distinct steps that correspond to the route. The layout can include a map overview showing the entire route, with callouts identifying each step on the route. The layout can also include listings of individual steps, where each listing includes a reference number referring back to a callout and a description of the step. Each listing can also include a map tile showing a detailed view of the step corresponding to the listing. The map overview and the listings can be disposed, for example, in different columns of a landscape view. | 10-23-2014 |
20140365113 | Navigation Application with Several Navigation Modes - A method for providing navigation instructions on a device is described. As the device traverses a navigated route according to a first mode of transportation, the method displays a first turn-by-turn navigation presentation defined for the first mode. Based on data gathered by the device, the method determines that the device is navigating the route according to a second mode of transportation. The method automatically displays a second, different turn-by-turn navigation presentation defined for the second mode. | 12-11-2014 |
20140365118 | Direction List - A mobile device that displays a list of traveling maneuvers or driving directions according to a route from a start location to a destination location is provided. The displayed list of driving directions includes a series of graphical items that each corresponds to a maneuver in the route. The displayed list of driving directions is updated dynamically according to the current position of the mobile device. Each maneuver actually taken or traveled causes the mobile device to display the item that corresponds to the taken maneuver differently. After a number of maneuvers have been taken, the graphical items that correspond to the taken maneuvers are removed from display and new maneuvers are brought into view. | 12-11-2014 |
20140365120 | Mapping Application with Several User Interfaces - Some embodiments provide a method, for a mobile device, for controlling an interactive communication system of a vehicle that includes a display screen. If the display screen is touch-sensitive, the method provides a first user interface display that includes a first map the presentation of which is modifiable with touch input received through the touch-sensitive screen. If the display screen is not touch-sensitive, the method provides a second user interface display that includes a second map the presentation of which is modifiable through physical controls mounted in the vehicle. In some embodiments, if the touchscreen meets a particular set of characteristics, the first map presentation is directly modifiable with gestural input received through the display screen. If the display screen does not meet the particular set of characteristics, the first map presentation is modifiable with different, non-gestural touchscreen input received through the display screen. | 12-11-2014 |
20140365122 | Navigation Peek Ahead and Behind in a Navigation Application - A method of providing a sequence of turn-by-turn navigation instructions on a device traversing a route is provided. Each turn-by-turn navigation instruction is associated with a location on the route. As the device traverses along the route, the method displays a turn-by-turn navigation instruction associated with a current location of the device. The method receives a touch input through a touch input interface of the device while displaying a first turn-by-turn navigation instruction and a first map region that displays the current location and a first location associated with the first turn-by-turn navigation instruction. In response to receiving the touch input, the method displays a second turn-by-turn navigation instruction and a second map region that displays a second location associated with the second turn-by-turn navigation instruction. Without receiving additional input, the method automatically returns to the display of the first turn-by-turn navigation instruction and the first map region. | 12-11-2014 |
20140365124 | Mapping Application Search Function - Some embodiments provide a method for a mobile device connected to a vehicle's interactive communication system that includes a display screen. The method identifies at least one search term from audio data received through the vehicle's interactive communication system. At the mobile device, the method performs a search to identify at least one location related to the identified search term. The method generates a map, that displays the identified location, for output on the display screen of the interactive communication system of the vehicle. | 12-11-2014 |
20140365125 | User Interface for Displaying Predicted Destinations - Some embodiments provide a method for an application that operates on a mobile device. The method predicts several likely destinations for a vehicle to which the mobile device is connected based on data from a several different sources. The method generates, for a display screen of the vehicle, a display that includes the several likely destinations. In some embodiments, the method receives a first type of input through a control of the vehicle to select one of the likely destinations, and enters a turn-by-turn navigation mode to the selected destination in response to the received input. In some embodiments, the display is for a first destination of the several likely destinations. The method receives a second type of input through a control of the vehicle to step through the set of likely destinations, and generates a display for a second destination in response to the input. | 12-11-2014 |
20140365126 | Mapping Application with Turn-by-Turn Navigation Mode for Output to Vehicle Display - Some embodiments provide a method for an application executing on a mobile device. The method renders an animated navigation presentation for output to an external display screen not part of the mobile device. The navigation presentation includes an animated map showing at least a portion of a route to a destination. The method simultaneously displays information regarding a maneuver along the route on a display screen of the mobile device without displaying a same animated map on the mobile device. In some embodiments, the displayed information regarding the maneuver comprises a graphical instruction and a text instruction for a next maneuver along the route. | 12-11-2014 |
20140365934 | Mapping Application with Interactive Dynamic Scale and Smart Zoom - Some embodiments provide a mapping application that includes a novel dynamic scale that can be used to perform different zoom operations. In some embodiments, the scale also serves as a distance measurement indicator for a corresponding zoom level. The application continuously adjusts several different attributes of the scale, including the scale size, the number of segments on the scale, and the representative distance of a segment on the scale. In some embodiments, the mapping application provides a smart zoom feature that guides a user during a zoom to a location. In particular, the smart zoom detects that a location of a zoom is near a pin on the map, and if so, zooms to the pin on the map. Otherwise, if the location is near a cloud of pins, the application zooms to the cloud of pins. Otherwise, the zoom is directed towards the user's selected location. | 12-11-2014 |
20140365944 | Location-Based Application Recommendations - A method to share map information between an electronic device and other nearby devices using peer-to-peer communication is provided. The method receives identification of different map items such as a route, points of interest, search results, a current map view and sends to a selected nearby device. The method provides different options to select the map items to share. The particular map information to share in some embodiments depends on what is currently displayed and/or selected on the map. When there are several items that can be shared and there is not a clear indication for what the user intends to share, an action list is shown to allow the user to select the information to share. Once a map item to share is selected, the method displays a share list to display a list of nearby devices. The method sends the shared information to selected devices. | 12-11-2014 |
20150046867 | CONTEXT SENSITIVE ACTIONS - Techniques for performing context-sensitive actions in response to touch input are provided. A user interface of an application can be displayed. Touch input can be received in a region of the displayed user interface, and a context can be determined. A first action may be performed if the context is a first context and a second action may instead be performed if the context is a second context different from the first context. In some embodiments, an action may be performed if the context is a first context and the touch input is a first touch input, and may also be performed if the context is a second context and the touch input is a second touch input. | 02-12-2015 |
20150046884 | CONTEXT SENSITIVE ACTIONS IN RESPONSE TO TOUCH INPUT - Techniques for performing context-sensitive actions in response to touch input are provided. A user interface of an application can be displayed. Touch input can be received in a region of the displayed user interface, and a context can be determined. A first action may be performed if the context is a first context and a second action may instead be performed if the context is a second context different from the first context. In some embodiments, an action may be performed if the context is a first context and the touch input is a first touch input, and may also be performed if the context is a second context and the touch input is a second touch input. | 02-12-2015 |
20150247736 | Map Application with Improved Navigation Tools - Some embodiments provide a mapping application with novel navigation and/or search tools. In some embodiments, the mapping application formulates predictions about future destinations of a device that executes the mapping application, and provides dynamic notifications regarding these predicted destinations. For instance, when a particular destination is a likely destination (e.g., most likely destination) of the device, the mapping application in some embodiments presents a notification regarding the particular destination (e.g., plays an animation that presents the notification). This notification in some embodiments provides some information about (1) the predicted destination (e.g., a name and/or address for the predicted destination) and (2) a route to this predicted destination (e.g., an estimated time of arrival, distance, and/or amount of ETD for the predicted destination). In some embodiments, the notification is dynamic not only because it is presented dynamically as the device travels, but also because the information that the notification displays about the destination and/or route to the destination is dynamically updated by the mapping application as the device travels. | 09-03-2015 |
20150253148 | Map Application with Improved Search Tools - Some embodiments of the invention provide a mapping application with novel tools for examining potential destinations and routes to the potential destinations. In some embodiments, the mapping application includes a route preview page that provides a novel combination of user element tools that allow a user (1) to explore alternative routes to a selected destination, and (2) to explore routes to other destinations in a list of destinations. In some embodiments, the route preview page also provides a modal zoom tool that allows the map on this page to zoom to the destination/search result or zoom out to an overview of the entire route to the destination/search result. These three tools (i.e., the tool for exploring alternative routes to one location, the tool for exploring routes to other locations, and the tool for providing modal zoom operations) are highly beneficial in allowing a user to navigate to a location because they allow the user to quickly explore the two-dimensional solution space of possible locations and possible routes to the locations. | 09-10-2015 |
20150300832 | Hierarchy of Tools for Navigation - Some embodiments provide a mapping application that provides a variety of UI elements for allowing a user to specify a location (e.g., for viewing or serving as route destinations). In some embodiments, these location-input UI elements appear in succession on a sequence of pages, according to a hierarchy that has the UI elements that require less user interaction appear on earlier pages in the sequence than the UI elements that require more user interaction. In some embodiments, the location-input UI elements that successively appear in the mapping application include (1) selectable predicted-destination notifications, (2) a list of selectable predicted destinations, (3) a selectable voice-based search affordance, and (4) a keyboard. In some of these embodiments, these UI elements appear successively on the following sequence of pages: (1) a default page for presenting the predicted-destination notifications, (2) a destination page for presenting the list of predicted destinations, (3) a search page for receiving voice-based search requests, and (4) a keyboard page for receiving character input. | 10-22-2015 |
20150300833 | PROVIDING NAVIGATION INSTRUCTIONS WHILE DEVICE IS IN LOCKED MODE - A method of providing navigation instructions in a locked mode of a device is disclosed. The method, while the display screen of the device is turned off, determines that the device is near a navigation point. The method turns on the display screen and provides navigation instructions. In some embodiments, the method identifies the ambient light level around the device and turns on the display at a brightness level determined by the identified ambient light level. The method turns off the display after the navigation point is passed. | 10-22-2015 |
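The brightness-from-ambient-light step in the abstract above amounts to mapping a sensor reading to a display level. A minimal sketch, assuming a simple clamped linear ramp; the function name, lux breakpoints, and brightness floor are hypothetical, not values from the filing.

```python
def brightness_for_ambient(lux: float) -> float:
    """Pick a display brightness in [0, 1] from an ambient light reading.

    Hypothetical ramp: at or below `low` lux the display wakes dim,
    at or above `high` lux it wakes at full brightness, and readings
    in between are interpolated linearly.
    """
    low, high = 10.0, 400.0   # illustrative breakpoints, in lux
    if lux <= low:
        return 0.2            # dim floor so the screen is readable at night
    if lux >= high:
        return 1.0
    return 0.2 + 0.8 * (lux - low) / (high - low)
```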
20150323340 | NAVIGATION APPLICATION WITH SEVERAL NAVIGATION MODES - A method for providing navigation instructions on a device is described. As the device traverses a navigated route according to a first mode of transportation, the method displays a first turn-by-turn navigation presentation defined for the first mode. Based on data gathered by the device, the method determines that the device is navigating the route according to a second mode of transportation. The method automatically displays a second, different turn-by-turn navigation presentation defined for the second mode. | 11-12-2015 |
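The mode switch described above needs two pieces: inferring the transport mode from gathered data, and choosing the presentation defined for that mode. A hypothetical sketch using only speed as the gathered data; the thresholds and presentation names are illustrative assumptions, not from the filing.

```python
def infer_transport_mode(speed_mps: float) -> str:
    """Guess the mode of transportation from device speed (m/s).

    Hypothetical thresholds: walking below ~3 m/s, driving above ~6 m/s,
    with an ambiguous band in between where no switch is made.
    """
    if speed_mps < 3.0:
        return "walking"
    if speed_mps > 6.0:
        return "driving"
    return "unknown"

def presentation_for(mode: str) -> str:
    """Select the turn-by-turn presentation defined for a mode."""
    return {
        "walking": "pedestrian turn-by-turn",
        "driving": "vehicle turn-by-turn",
    }.get(mode, "keep current")
```

A real implementation would smooth over many samples before switching, so that a brief stop at a traffic light does not flip a driving presentation to a walking one.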
20150323342 | ROUTING APPLICATIONS FOR NAVIGATION - Some embodiments provide a mapping application that provides routing information to third-party applications on a device. The mapping application receives route data that includes first and second locations. Based on the route data, the mapping application provides a set of routing applications that provide navigation information. The mapping application receives a selection of a routing application in the set of routing applications. The mapping application passes the route data to the selected routing application in order for the routing application to provide navigation information. | 11-12-2015 |
20160084668 | VOICE INSTRUCTIONS DURING NAVIGATION - A method of providing navigation on an electronic device when the display screen is locked. The method receives a verbal request to start navigation while the display is locked. The method identifies a route from a current location to a destination based on the received verbal request. While the display screen is locked, the method provides navigational directions on the electronic device from the current location of the electronic device to the destination. Some embodiments provide a method for processing a verbal search request. The method receives a navigation-related verbal search request and prepares a sequential list of the search results based on the received request. The method then provides audible information to present a search result from the sequential list. The method presents the search results in a batch form until the user selects a search result, the user terminates the search, or the search items are exhausted. | 03-24-2016 |
20130321400 | 3D Map Views for 3D Maps - Some embodiments provide a non-transitory machine-readable medium that stores a program which when executed on a device by at least one processing unit provides different viewing modes for viewing a three-dimensional (3D) map. The program renders a first view of the 3D map for display in a first viewing mode based on a first set of map data. The program receives input to adjust the view of the 3D map. In response to the input, the program renders a second view of the 3D map for display in a second viewing mode based on a second set of map data different from the first set of map data. | 12-05-2013 |
20130321401 | Virtual Camera for 3D Maps - Some embodiments provide a non-transitory machine-readable medium that stores a mapping application which when executed on a device by at least one processing unit provides automated animation of a three-dimensional (3D) map along a navigation route. The mapping application identifies a first set of attributes for determining a first position of a virtual camera in the 3D map at a first instance in time. Based on the identified first set of attributes, the mapping application determines the position of the virtual camera in the 3D map at the first instance in time. The mapping application identifies a second set of attributes for determining a second position of the virtual camera in the 3D map at a second instance in time. Based on the identified second set of attributes, the mapping application determines the position of the virtual camera in the 3D map at the second instance in time. The mapping application renders an animated 3D map view of the 3D map from the first instance in time to the second instance in time based on the first and second positions of the virtual camera in the 3D map. | 12-05-2013 |
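The two-instant camera positioning above is essentially keyframing: determine a position at each instant, then animate between them. A minimal sketch assuming plain linear interpolation between the two determined positions; the function name and tuple layout are hypothetical, and the filing's attribute-driven positioning is reduced here to fixed keyframes.

```python
def camera_at(t, t0, pos0, t1, pos1):
    """Linearly interpolate a virtual-camera position between two
    keyframe instants (t0, pos0) and (t1, pos1).

    Positions are (x, y, altitude) tuples; `t` is assumed to lie
    within [t0, t1].
    """
    a = (t - t0) / (t1 - t0)
    return tuple(p0 + a * (p1 - p0) for p0, p1 in zip(pos0, pos1))
```

Rendering the animated 3D map view then reduces to sampling `camera_at` once per frame as `t` advances from `t0` to `t1`.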
20130322702 | Rendering Maps - Some embodiments provide a mapping application for rendering map portions. The mapping application includes a map receiver for receiving map tiles from a mapping service in response to a request for the map tiles needed for a particular map view. Each map tile includes vector data describing a map region. The mapping application includes a set of mesh building modules. Each mesh building module is for using the vector data in at least one map tile to build a mesh for a particular layer of the particular map view. The mapping application includes a mesh aggregation module for combining layers from several mesh builders into a renderable tile for the particular map view. The mapping application includes a rendering engine for rendering the particular map view. | 12-05-2013 |
20130328924 | Constructing Road Geometry - Some embodiments provide a method for a mapping service. The method generates an initial set of geometries for a road graph defined for a map region. The road graph includes several road segments and junctions aggregated into roads. The method identifies an overlap between a first geometry of a first road segment and a second geometry of a second road segment. The first road segment and the second road segment are parts of different roads and do not meet at a junction. The method automatically modifies at least one of the first and second geometries in order to prevent the first geometry and second geometry from overlapping. In some embodiments each geometry is defined by a set of vertices that specify its boundaries. The method of some embodiments automatically modifies the vertices of at least one of the first and second geometries. | 12-12-2013 |
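The overlap check and automatic modification described above can be illustrated in one dimension down from the real problem: the filing works on vertex-defined geometries, while this hypothetical sketch uses axis-aligned bounding boxes `(x0, y0, x1, y1)` to keep the detection and the trim short. All names and the trim strategy are illustrative assumptions.

```python
def overlaps(a, b):
    """True if two axis-aligned boxes (x0, y0, x1, y1) intersect."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def resolve_overlap(a, b):
    """Return a copy of `a` trimmed on the x-axis so it no longer
    overlaps `b`; `b` is left unchanged.

    Assumes `b` lies to the right of `a`'s left edge, so pulling
    `a`'s right edge back to `b`'s left edge removes the overlap.
    """
    if not overlaps(a, b):
        return a
    ax0, ay0, ax1, ay1 = a
    return (ax0, ay0, min(ax1, b[0]), ay1)
```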
20130332057 | Representing Traffic Along a Route - Some embodiments provide a mapping application that has a novel way of displaying traffic congestion along roads in the map. The mapping application in some embodiments defines a traffic congestion representation to run parallel to its corresponding road portion when the map is viewed at a particular zoom level, and defines a traffic congestion representation to be placed over its corresponding road portion when the map is viewed at another zoom level. The mapping application in some embodiments differentiates the appearance of the traffic congestion representation that signifies heavy traffic congestion from the appearance of the traffic congestion representation that signifies moderate traffic congestion. In some of these embodiments, the mapping application does not generate a traffic congestion representation for areas along a road that are not congested. | 12-12-2013 |
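The rendering rules in the abstract above — placement keyed to zoom level, appearance keyed to severity, nothing drawn for free-flowing traffic — fit a small decision function. This is a hypothetical sketch; the zoom cutoff and color choices are illustrative, not from the filing.

```python
def traffic_representation(zoom: int, congestion: str):
    """Decide how to draw a traffic-congestion band for a road portion.

    Returns (placement, color), or None when the road is not congested
    (the abstract notes no representation is generated in that case).
    """
    if congestion == "none":
        return None
    # Hypothetical rule: at low zoom the band runs parallel to the road,
    # at high zoom it is placed over the road itself.
    placement = "parallel" if zoom < 14 else "overlay"
    # Heavy congestion is visually differentiated from moderate.
    color = {"moderate": "orange", "heavy": "red"}[congestion]
    return placement, color
```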
20140365113 | Navigation Application with Several Navigation Modes - A method for providing navigation instructions on a device is described. As the device traverses a navigated route according to a first mode of transportation, the method displays a first turn-by-turn navigation presentation defined for the first mode. Based on data gathered by the device, the method determines that the device is navigating the route according to a second mode of transportation. The method automatically displays a second, different turn-by-turn navigation presentation defined for the second mode. | 12-11-2014 |
20140365122 | Navigation Peek Ahead and Behind in a Navigation Application - A method of providing a sequence of turn-by-turn navigation instructions on a device traversing a route is provided. Each turn-by-turn navigation instruction is associated with a location on the route. As the device traverses along the route, the method displays a turn-by-turn navigation instruction associated with a current location of the device. The method receives a touch input through a touch input interface of the device while displaying a first turn-by-turn navigation instruction and a first map region that displays the current location and a first location associated with the first turn-by-turn navigation instruction. In response to receiving the touch input, the method displays a second turn-by-turn navigation instruction and a second map region that displays a second location associated with the second turn-by-turn navigation instruction. Without receiving additional input, the method automatically returns to the display of the first turn-by-turn navigation instruction and the first map region. | 12-11-2014 |
20140365154 | COMPASS CALIBRATION - A method that performs a series of interactive operations to calibrate a compass in a mobile device. The method requires a user to move the device to a variety of different orientations. In order to ensure that the device moves to a sufficient number and variety of orientations, the method instructs the user to rotate the device in a series of interactive operations. The interactive operations provide feedback to inform the user how well the user is performing the interactive operations. In some embodiments, the feedback is tactile (e.g., a vibration). In some embodiments, the feedback is audible (e.g., a beep or buzz). In some embodiments, the feedback is visual (e.g., an image or images on a video display of the device). The feedback in some embodiments is continuous (e.g., a changing visual display) and in some embodiments is discrete (e.g., the device beeps after taking a good reading). | 12-11-2014 |
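The continuous-feedback idea above needs a measure of how much of the orientation space the user has covered so far. A minimal sketch, assuming headings are bucketed into equal bins and progress is the fraction of bins seen; the function name and bin count are hypothetical.

```python
def calibration_progress(samples, bins=8):
    """Fraction (0..1) of heading bins covered by collected readings.

    `samples` are headings in degrees. A continuous feedback UI
    (e.g., a filling progress ring) could be driven directly by
    this value as new readings arrive.
    """
    covered = {int(h % 360) * bins // 360 for h in samples}
    return len(covered) / bins
```

Discrete feedback maps onto the same structure: emit a beep or vibration whenever a reading lands in a previously empty bin.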
20140365965 | NIGHT MODE - A device that provides a map and/or navigation application that displays items on the map and/or navigation instructions differently in different modes. The applications of some embodiments provide a day mode and a night mode. In some embodiments the application uses the day mode as a default and activates the night mode when the time is after sunset at the location of the device. Some embodiments activate night mode when multiple conditions are satisfied (for example, when (1) the time is after sunset at the location of the device and (2) the ambient light level is below a threshold brightness). | 12-11-2014 |
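The multi-condition activation in the abstract above is a plain conjunction, sketched here in Python. The parameter names and the 50-lux default are illustrative assumptions; the two conditions themselves are the ones the abstract names.

```python
def night_mode_active(local_time_hr: float, sunset_hr: float,
                      ambient_lux: float, lux_threshold: float = 50.0) -> bool:
    """Activate night mode only when both conditions hold:
    (1) the local time is after sunset, and
    (2) the ambient light level is below a threshold brightness.
    """
    return local_time_hr >= sunset_hr and ambient_lux < lux_threshold
```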
20150046867 | CONTEXT SENSITIVE ACTIONS - Techniques for performing context-sensitive actions in response to touch input are provided. A user interface of an application can be displayed. Touch input can be received in a region of the displayed user interface, and a context can be determined. A first action may be performed if the context is a first context and a second action may instead be performed if the context is a second context different from the first context. In some embodiments, an action may be performed if the context is a first context and the touch input is a first touch input, and may also be performed if the context is a second context and the touch input is a second touch input. | 02-12-2015 |
20150345976 | NAVIGATION PEEK AHEAD AND BEHIND - Some embodiments of the invention provide a navigation application that allows a user to peek ahead or behind during a turn-by-turn navigation presentation that the application provides while tracking a device (e.g., a mobile device, a vehicle, etc.) as it traverses a physical route. As the device traverses along the physical route, the navigation application generates a navigation presentation that shows a representation of the device on a map traversing along a virtual route that represents the physical route on the map. While providing the navigation presentation, the navigation application can receive user input to look ahead or behind along the virtual route. Based on the user input, the navigation application moves the navigation presentation to show locations on the virtual route that are ahead or behind the displayed current location of the device on the virtual route. This movement can cause the device representation to no longer be visible in the navigation presentation. Also, the virtual route often includes several turns, and the peek ahead or behind movement of the navigation presentation passes the presentation through one or more of these turns. In some embodiments, the map can be presented as a two-dimensional (2D) or a three-dimensional (3D) scene. | 12-03-2015 |
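The core of the peek gesture above is indexing ahead of or behind the current position along the route while staying on it. A minimal sketch, assuming the virtual route is an ordered list of points; the function name and list representation are hypothetical.

```python
def peek(route_points, current_idx, offset):
    """Return the route point `offset` steps ahead (positive) or
    behind (negative) the current position, clamped to the route ends.

    The camera can pass through intermediate turns because the peek
    simply moves along the same ordered route the device is traversing.
    """
    i = max(0, min(len(route_points) - 1, current_idx + offset))
    return route_points[i]
```

The abstract's automatic return behavior would then be a timer that, after the touch input ends, snaps the presentation back to `route_points[current_idx]`.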