Google thinks it's time to introduce an all-new input interface for mobile devices running Android. The new interface is designed to detect motions consisting of single and double taps, hovering and swiping performed over the camera that's built right into each and every Android device. The interface also allows for combinations of gestures so that tasks can get done a little quicker, and the new system appears to let a user map out a combination of gestures to perform tasks associated with web browsing, music and more. Like always, some will welcome these newly advanced gestures for Android while others will bemoan yet more changes being added to their already busy lives. Unfortunately, progress and revolutions have a way of steamrolling over the latter. It's just the way it is.
The Current Interfaces for Modern Phones & Tablets
Portable electronic devices such as mobile phones have different input interfaces that allow users to control the devices. These input interfaces may include a touchscreen, microphone, keypad, buttons and switches. Thus, a user may control various functions of the phone and user applications through touching motions and spoken words or phrases.
Portable devices often have various peripheral components as well, such as a camera, which captures subjects and environments surrounding the device. Some portable devices have a built-in compass or an accelerometer that can detect the device's orientation or motion.
Google Looks to Advance Modern Interfaces by Introducing Camera Motions and Timing
Aspects of Google's invention provide systems and methods for controlling a mobile device by detecting and recognizing an object's motions from images taken by the camera on an Android device, as well as recognizing the timing information related to those motions.
Ten Points about the New Gesture Methodology
1. In one example, the type of the detected motions comprises single tapping, double tapping, hovering, holding and swiping.
2. In another example, detecting motions comprises receiving images from the image capturing device. Each of the received images is associated with a motion of the object. Detecting motions also comprises determining an illumination level for each of the received images, comparing each of the determined illumination levels to a predetermined threshold, and determining, for each received image, whether the associated motion is a touch motion, where the object touches the image capturing device, or a release motion, where the object releases or stays away from the image capturing device, based on the comparison.
3. In a further example, determining a type of the detected motions comprises determining time from a touch motion to a release motion.
4. In one alternative, determining a type of the detected motions comprises determining time from a release motion to a touch motion.
5. In another alternative, detecting motions comprises receiving images from the image capturing device, and detecting edges for each of the received images. Each of the received images is associated with a motion of the object.
6. In one example, the type of the detected motions comprises single tapping, double tapping, hovering, holding and swiping.
7. In another example, the instructions for detecting motions comprise instructions for receiving images from the image capturing device. Each of the received images is associated with a motion of the object. The instructions for detecting motions also comprise instructions for determining an illumination level for each of the received images, comparing each of the determined illumination levels to a predetermined threshold, and determining, for each received image, whether the associated motion is a touch motion, where the object touches the image capturing device, or a release motion, where the object releases or stays away from the image capturing device, based on the comparison.
8. In a further example, the instructions for determining a type of the detected motions comprise instructions for determining time from a touch motion to a release motion.
9. In one alternative, instructions for determining a type of the detected motions comprise instructions for determining time from a release motion to a touch motion.
10. In a further alternative, the instructions further comprise instructions for detecting, for images with edges, direction of a swiping motion.
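The illumination check described in points 2 and 7 can be sketched in a few lines. This is a minimal illustration, not code from the patent: the function names, the frame format (a grid of grayscale pixel values) and the threshold value are all assumptions.

```python
# Hypothetical illustration of the patent's illumination-threshold idea:
# a finger covering the camera lens produces a very dark frame, while an
# uncovered lens produces a brighter one.

ILLUMINATION_THRESHOLD = 30  # assumed cutoff on a 0-255 brightness scale


def average_illumination(image):
    """Mean pixel brightness of a grayscale frame (list of rows of 0-255 values)."""
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)


def classify_frame(image):
    """Label a frame as a 'touch' (finger covering the camera) or a 'release'."""
    if average_illumination(image) < ILLUMINATION_THRESHOLD:
        return "touch"
    return "release"
```

A real implementation would read frames from the camera pipeline, but the core comparison-to-threshold step would look much like this.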
Future Android Device Cameras to Detect & Recognize Finger Motion Patterns
Google's patent FIG. 1 shown below is example system 100, which may be a smartphone such as the Google Nexus that's capable of communicating with other mobile devices through network 90. The portable electronic device may also be a music player, or any type of multi-function portable user device that has a built-in camera.
The system and method may detect and recognize a pattern of finger motions 106 within the field of view of the camera, and generate commands based on the pattern of the detected motions to control the user device accordingly. For example, the system and method may detect and determine a particular motion event over the camera (e.g., a touch on or a release from the camera) based on the illumination changes of the images captured by the camera. It may further derive a pattern from a series of motion events (such as a tap, a double tap, a swipe or a hover on or across the camera) using timing information related to the motion events.
What will these "Motion Patterns" Control?
Google states that the motion pattern alone, or a combination of the motion pattern and input events from other input devices, may be mapped to one of the predetermined input gestures that are stored in the device. Each predetermined input gesture may correlate with one or more control commands for controlling various functions of the device.
These functions may include call or voice message functions, music playing functions, web browsing functions, or document retrieval functions. For instance, a double tap on the camera while pressing on an address book entry displayed on a touch screen may be an input gesture to initiate a phone call to that particular contact.
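The patent's idea of a stored gesture library can be sketched as a simple lookup table. Everything here is an illustrative assumption: the dictionary entries, the command names and the `command_for` helper are invented to show the mapping concept, including the article's double-tap-plus-contact-press example.

```python
# Hypothetical gesture library: maps a recognized motion pattern, optionally
# combined with a concurrent event from another input device, to a control
# command. All keys and command names are made up for illustration.

GESTURE_LIBRARY = {
    ("double_tap", None): "play_pause_music",
    ("swipe_left", None): "browser_back",
    ("double_tap", "press_contact"): "call_contact",
}


def command_for(pattern, concurrent_event=None):
    """Look up the control command for a gesture, falling back to a no-op."""
    return GESTURE_LIBRARY.get((pattern, concurrent_event), "no_op")
```

For instance, `command_for("double_tap", "press_contact")` would return the hypothetical `"call_contact"` command, matching the scenario described above.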
Google's Double Tap Scenario
Google's patent FIG. 7 shown below illustrates a double tap scenario. Here, by collecting images during a time period from t0 to t5, calculating the illumination levels for the collected images and motion event durations, the system is able to determine the pattern of the series of motion events during the time period.
Specifically, based on the changes in average illumination levels, the system may determine that the finger motion has switched from a release state to a touch state at t1 and back to a release state at t2. The system may also determine that, at time t4, the finger motion state has switched again to a touch state and back to a release state at t5.
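The state-switch detection just described can be sketched as a scan over timestamped illumination samples. The sample format and threshold are assumptions for illustration; only the idea of reporting the times where the touch/release state flips comes from the patent.

```python
# Hypothetical sketch: given (timestamp, average_illumination) samples, emit
# the points in time where the finger state switches between touch (dark
# frame) and release (bright frame).

THRESHOLD = 30  # assumed brightness cutoff separating touch from release


def state_transitions(samples):
    """samples: list of (timestamp, avg_illumination) in time order.
    Returns (timestamp, state) pairs at each state change."""
    transitions = []
    prev = None
    for t, level in samples:
        state = "touch" if level < THRESHOLD else "release"
        if state != prev:
            transitions.append((t, state))
            prev = state
    return transitions
```

Run on samples mirroring FIG. 7, this would report switches to a touch state at t1 and t4 and back to a release state at t2 and t5.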
The system then calculates the time taken from release to touch (i.e., the time span from t0 to t1). The system may further calculate the time taken for the finger motion state to switch from touch to release (i.e., the time span from t1 to t2), and if that time is below a predetermined maximum value for a touch event (e.g., 5 milliseconds), the system may map the motion pattern before time t2 to a single tap.
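The timing logic from FIG. 7 can be sketched as follows. The 5 ms maximum tap duration comes from the article's example; the event format and the maximum gap between the two taps of a double tap are assumptions added for illustration.

```python
# Hypothetical tap classifier: takes timestamped touch/release transitions
# and decides whether they form a single tap or a double tap.

MAX_TAP_DURATION = 0.005  # seconds; the 5 ms example from the patent summary
MAX_TAP_GAP = 0.3         # assumed maximum gap between taps in a double tap


def classify_taps(events):
    """events: list of (timestamp_seconds, 'touch' or 'release') transitions."""
    tap_times = []
    touch_start = None
    for t, state in events:
        if state == "touch":
            touch_start = t
        elif state == "release" and touch_start is not None:
            # A touch short enough to count as a tap, per the duration check.
            if t - touch_start <= MAX_TAP_DURATION:
                tap_times.append(touch_start)
            touch_start = None
    if len(tap_times) >= 2 and tap_times[1] - tap_times[0] <= MAX_TAP_GAP:
        return "double_tap"
    if len(tap_times) == 1:
        return "single_tap"
    return "unknown"
```

Two short touches in quick succession, like the t1-t2 and t4-t5 spans in FIG. 7, would classify as a double tap.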
The patent reviews several other scenarios relating to this system, including a single tap scenario, a long pause scenario and various image models of swiping motions. Based on these comparisons, a pattern of swiping motion may be derived: vertical downward, vertical upward, horizontal right, horizontal left, or swiping at other angles and in other directions.
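One way to derive those swipe directions is to compare where the detected edge sits in consecutive frames. The patent only says that edge detection and comparison are used; the centroid-displacement approach below, and the use of image coordinates where y increases downward, are assumptions for illustration.

```python
# Hypothetical swipe classifier: compare the (x, y) centroid of the detected
# edge in an early frame against a later frame and pick the dominant axis.
# Image convention assumed: x grows rightward, y grows downward.

def swipe_direction(start_edge, end_edge):
    """start_edge/end_edge: (x, y) edge centroids from two frames."""
    dx = end_edge[0] - start_edge[0]
    dy = end_edge[1] - start_edge[1]
    if abs(dx) >= abs(dy):
        return "horizontal_right" if dx > 0 else "horizontal_left"
    return "vertical_downward" if dy > 0 else "vertical_upward"
```

A finer-grained implementation could bucket the angle of the (dx, dy) vector to cover the "other angles and directions" the patent mentions.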
One of Google's Overview Flowcharts
Google's patent FIG. 3B shown below illustrates a functional diagram. User device 302 may be configured to detect multiple input events 306 and 320 from different input devices and translate the combination of concurrent input events into a control command based on the rules in the input gesture library.
One little factoid worth noting is that Google prominently mentions that the processors used in this system would come from Intel and/or AMD rather than the traditional mobile CPU suppliers such as Qualcomm or ARM.
At the end of the day, it's another very interesting patent from Google and more so because the Apple-Google rivalry is at play here too. Apple filed for a similar patent six months prior to Google's filing. Sometimes I think that Jobs and Schmidt created this rivalry just to keep Microsoft out of the conversation. But that's a story for another day.
Notice: Patent Bolt presents a detailed summary of patent applications with associated graphics for journalistic news purposes as each such patent application is revealed by the U.S. Patent & Trade Office. Readers are cautioned that the full text of any patent application should be read in its entirety for full and accurate details. Revelations found in patent applications shouldn't be interpreted as rumor or fast-tracked according to rumor timetables. About Comments: Patent Bolt reserves the right to post, dismiss or edit comments.
A Few Great Community Sites Covering our Report
MacSurfer, Geeky Gadgets, GottaBeMobile, Droid Life, Phandroid, phoneArena, MobiFrance, AccessOWeb, UnWire Hong Kong, Clubic France, Mobilsiden Denmark, Inside Handy Germany, Android App Tests Germany, and more.