The Limitations of Single Input Gesturing Modes
Handheld computing devices commonly provide a touch input mechanism or a pen input mechanism for receiving commands and other information from users. A touch input mechanism provides touch input events when a user touches a display surface of the computing device with a finger (or multiple fingers). A pen input mechanism provides pen input events when a user touches the display surface with a pen device, also known as a stylus. Some devices allow a user to enter either touch input events or pen input events on the same device.
Computing devices also permit a user to perform gestures by using one or more fingers or a pen device. For example, a gesture may correspond to a telltale mark that a user traces on the display surface with a finger and/or pen device. The computing device correlates this gesture with an associated command and then executes that command. Such execution can occur in the course of the user's input action (as in direct-manipulation drag actions), or after the user finishes the input action.
To provide a rich interface, a developer may attempt to increase the number of gestures recognized by the computing device. For instance, the developer may increase the number of touch gestures that the computing device is able to recognize. While this may increase the expressiveness of the human-to-device interface, it may also have shortcomings. First, it may be difficult for a user to understand and/or memorize a large number of touch or pen gestures. Second, an increase in the number of possible gestures makes it more likely that a user will make mistakes in entering them. That is, the user may intend to enter a particular gesture, but the computing device may mistakenly interpret it as another, similar gesture. This may understandably frustrate the user if it becomes a frequent occurrence, or, even if uncommon, if it causes significant disruption in the task the user is performing. Generally, the user may come to perceive the computing device as too susceptible to accidental input actions.
Microsoft Introduces the Cooperative Gesture: Touch + Pen
A computing device is described which allows a user to convey gestures via a cooperative use of at least two input mechanisms. For example, a user may convey a gesture through the joint use of a touch input mechanism and a pen input mechanism. In other cases, the user may convey a gesture through two applications of a touch input mechanism, or two applications of a pen input mechanism, etc. Still other cooperative uses of input mechanisms are possible.
In one implementation, a user uses a touch input mechanism to define content on a display surface of the computing device. For example, in one case, the user may use a finger and a thumb to span the desired content on the display surface. The user may then use a pen input mechanism to enter pen gestures to the content demarcated by the user's touch. The computing device interprets the user's touch as setting a context in which subsequent pen gestures applied by the user are to be interpreted.
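The patent doesn't give an algorithm for this, but the idea of a touch span setting the context for subsequent pen input can be pictured as a simple scoping check. This is my own minimal sketch, and the function and parameter names are illustrative, not from the filing:

```python
def apply_pen_gesture(gesture, pen_xy, touch_span):
    """Apply a pen gesture only to content demarcated by the touch span.

    touch_span is the (left, right) x-range pinned by finger and thumb;
    outside that range, the pen behaves as ordinary ink.
    """
    x, _y = pen_xy
    left, right = min(touch_span), max(touch_span)
    if left <= x <= right:
        return ("apply", gesture)   # gesture interpreted within the touch context
    return ("ink", pen_xy)          # no context set here; treat as plain inking
```

In practice the demarcated region would be two-dimensional and tied to document structure, but the principle is the same: touch defines the scope, pen supplies the command.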
To cite merely a few illustrative examples, the user can cooperatively apply two input mechanisms to copy information (e.g., text or other objects), to highlight information, to move information, to reorder information, to insert information, and so on.
More generally summarized, the computing device can act in three modes: a touch-only mode, a pen-only mode, and a joint-use mode, as noted in patent FIG. 2 below.
IBSM: The Interpretation and Behavior Selection Module
Microsoft's patent FIG. 1 shown below illustrates an example of a computing device (100) that can accommodate the use of two or more input mechanisms in cooperative conjunction.
Microsoft states that an interpretation and behavior selection module (IBSM) 110 receives input events from the input mechanisms 104. As the name suggests, the IBSM performs the task of interpreting the input events, e.g., by mapping the input events to corresponding gestures. It performs this operation by determining which of three modes has been invoked by the user.
In a first mode, the IBSM determines that a touch input mechanism is being used by itself, e.g., without a pen input mechanism. In a second mode, the IBSM determines that a pen input mechanism is being used by itself, e.g., without a touch input mechanism. In a third mode, also referred to herein as a joint use mode, the IBSM determines that both a touch input mechanism and a pen input mechanism are being used in cooperative conjunction. As noted above, the computing device can accommodate the pairing of "other input mechanisms" besides touch and pen input. In a report posted by Patently Apple last week, one of the "other input mechanisms" that Microsoft is seriously considering relates to user interface input via "eye control."
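The filing doesn't disclose the IBSM's internals, but the mode-selection step it describes amounts to a few lines of logic. The `Mode` and `InputState` names below are my own sketch, not Microsoft's:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Mode(Enum):
    TOUCH_ONLY = auto()
    PEN_ONLY = auto()
    JOINT_USE = auto()

@dataclass
class InputState:
    """Snapshot of which input mechanisms currently report contact."""
    touch_active: bool
    pen_active: bool

def select_mode(state: InputState):
    """Map the active input mechanisms to one of the three IBSM modes."""
    if state.touch_active and state.pen_active:
        return Mode.JOINT_USE   # touch and pen used in cooperative conjunction
    if state.pen_active:
        return Mode.PEN_ONLY
    if state.touch_active:
        return Mode.TOUCH_ONLY
    return None                 # no input in progress
```

Pairing "other input mechanisms," such as the eye-control input mentioned above, would simply add further flags and branches to the same selection step.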
Microsoft further states that after performing its interpretation role, the IBSM performs appropriate behavior. For example, if the user has added a conventional mark on a document using a pen device, the IBSM can store this annotation in an annotation file associated with the document. If the user has entered a gesture, then the IBSM can execute appropriate commands associated with that gesture. More specifically, in a first case, the IBSM executes a behavior at the completion of a gesture. In a second case, the IBSM executes a behavior over the course of the gesture.
The New Joint Mode Gesturing Behavior Applies to a Full Range of Devices
Microsoft's patent FIG. 3 shown below provides us with an overview of some of the devices that the new joint mode behavior will apply to. They include any type of handheld device, such as a PDA, smartphone, tablet, book reader or handheld game device, as well as a laptop, a personal computer, a workstation, a game console, a set-top box, a wall-type display device and, finally, a future Surface-related tabletop display.
First Prime Example of Joint Mode Gesturing in Use
In our first prime example of joint gesturing in use, we turn to patent FIG. 8. In this scenario, the user points a single finger to demarcate content (804), such as a paragraph within a multi-paragraph document on the display. The IBSM interprets this touch command as a request to select the entire paragraph. This behavior can be modified in various ways. For example, the user can tap once on a sentence to designate that individual sentence. The user can tap twice in quick succession to designate the entire paragraph. The user can tap three times in quick succession to designate a yet larger unit of text.
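That tap-count rule is easy to picture as a small lookup. The sketch below is my own reading of the example, with "section" standing in for the patent's unnamed "yet larger unit of text":

```python
def selection_unit(tap_count: int) -> str:
    """Map the number of quick taps to the unit of text selected.

    One tap selects a sentence, two a paragraph, and three or more the
    next larger unit (called "section" here as a placeholder).
    """
    if tap_count >= 3:
        return "section"
    return "paragraph" if tap_count == 2 else "sentence"
```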
Further note that, as a result of the user's selection via the left hand 802, the IBSM presents a visual cue 806 in the top right-hand corner of the content, or in another application-specific location. In one case, the user can activate the menu by hovering over the visual cue 806 with a pen device or finger.
The IBSM can display the menu in a region that doesn't interfere with (e.g., overlap) the selected content. In the particular illustrative example depicted in FIG. 8 above, the IBSM presents a radial menu 812, also known as a marking menu. A user can make a mark in one of the radial directions identified by the menu to invoke a corresponding command.
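A marking menu of this kind works by binning the direction of the user's stroke into one of N radial sectors. Here is a minimal sketch; the four command names are placeholders, and I'm assuming conventional math axes (y pointing up) with the first slice centered on "east":

```python
import math

def radial_command(dx: float, dy: float, commands: list) -> str:
    """Pick the command whose menu sector contains the stroke direction.

    commands are ordered counterclockwise starting from the positive x axis;
    y is assumed to point up (flip the sign of dy for screen coordinates).
    """
    n = len(commands)
    angle = math.atan2(dy, dx) % (2 * math.pi)        # stroke direction in [0, 2*pi)
    sector = int(round(angle / (2 * math.pi / n))) % n  # nearest slice center
    return commands[sector]
```

With four slices such as `["copy", "highlight", "move", "delete"]`, a stroke to the right selects "copy" and a stroke straight up selects "highlight".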
Second Prime Example of Joint Mode Gesturing in Use
Microsoft's patent FIG. 11 shows us another scenario in which the user applies two input mechanisms in the joint use mode of operation. In this case, the user executes a gesture that includes two parts or phases. In the first phase, the user applies their left hand to frame particular content on the display surface. The user then uses the pen device, operated by their right hand, to identify a portion of the content, for example by adding crop marks around it; crop mark 1108 is one such mark added by the user in this example. In the second phase, the user uses the pen device to move the portion identified by the crop marks to another location.
In other cases, the IBSM can seamlessly transition from one gesture to another based on the flow of input events that are received. For example, the user may begin by making handwritten notes on the display surface using the pen device, without any touch contact applied to the display surface. Then the user can apply a framing-type action with their hand. In response, the IBSM can henceforth interpret the pen strokes as invoking particular commands within the context established by the framing action.
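That seamless transition is essentially a small state machine: pen strokes are routed as ink until a touch frame arrives, and as commands afterwards. A minimal sketch, with event names that are assumptions of mine rather than terms from the patent:

```python
def route_events(events):
    """Route pen strokes to inking or command handling based on an active frame.

    events is a sequence of (kind, payload) pairs, where kind is one of
    "touch_frame", "touch_release", or "pen".
    """
    frame = None
    routed = []
    for kind, payload in events:
        if kind == "touch_frame":
            frame = payload                 # hand begins framing content
        elif kind == "touch_release":
            frame = None                    # back to plain inking
        elif kind == "pen":
            if frame is None:
                routed.append(("ink", payload))
            else:
                routed.append(("command", payload, frame))
    return routed
```

Feeding it a pen stroke, then a framing touch, then a second pen stroke would route the first stroke as ink and the second as a command scoped to the framed content.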
What's not covered in this document is how using combination inputs will avoid clashing with Microsoft's new "Palm Block" technology introduced at the Surface Event. The "Palm Block" feature automatically locks the screen position when the tablet's digitizer recognizes that a stylus, and not a finger, is touching its surface. That way you could write without the display rolling beneath your hand and inhibiting your ability to write. I'm sure that a simple menu adjustment could deal with this conflict, but for now it's a conflict worth noting.
Microsoft's patent application was originally filed in Q4 2010 and published by the US Patent and Trademark Office in Q2 2012.
The Patent Bolt blog presents a detailed summary of patent applications with associated graphics for journalistic news purposes as each such patent application is revealed by the U.S. Patent & Trademark Office. Readers are cautioned that the full text of any patent application should be read in its entirety for full and accurate details. Revelations found in patent applications shouldn't be interpreted as rumor or fast-tracked according to rumor timetables.