Abstract
Gaze has the potential to complement multi-touch for interaction on the same surface. We present gaze-touch, a technique that combines the two modalities based on the principle of "gaze selects, touch manipulates". Gaze is used to select a target, and is coupled with multi-touch gestures that the user can perform anywhere on the surface. Gaze-touch enables users to manipulate any target from the same touch position, for whole-surface reachability and rapid context switching. Conversely, gaze-touch enables manipulation of the same target from any touch position on the surface, for example to avoid occlusion. Gaze-touch is designed to complement direct-touch as the default interaction on multi-touch surfaces. We provide a design space analysis of the properties of gaze-touch versus direct-touch, and present four applications that explore how gaze-touch can be used alongside direct-touch. The applications demonstrate use cases for interchangeable, complementary and alternative use of the two modes of interaction, and introduce novel techniques arising from the combination of gaze-touch and conventional multi-touch.
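The core principle can be made concrete with a small event-handling sketch. This is an illustrative reading of "gaze selects, touch manipulates" only, not code from the paper: names such as `GazeTracker`, `hitTest`, and `Target` are hypothetical stand-ins for whatever eye-tracker API and scene graph a given implementation would use.

```typescript
// Hypothetical types: a surface-coordinate point, a manipulable object,
// and a gaze source. None of these come from the paper.
interface Point { x: number; y: number; }

interface Target {
  // Apply a relative translation to the selected object.
  translate(dx: number, dy: number): void;
}

interface GazeTracker {
  // Latest gaze position in surface coordinates.
  currentGaze(): Point;
}

// "Gaze selects, touch manipulates": on touch-down, the object under the
// gaze point is bound to the touch; subsequent touch movement manipulates
// that object regardless of where on the surface the finger is placed.
class GazeTouchController {
  private boundTarget: Target | null = null;
  private lastTouch: Point | null = null;

  constructor(
    private gaze: GazeTracker,
    private hitTest: (p: Point) => Target | null,
  ) {}

  onTouchDown(touch: Point): void {
    // Selection comes from gaze, not from the touch position.
    this.boundTarget = this.hitTest(this.gaze.currentGaze());
    this.lastTouch = touch;
  }

  onTouchMove(touch: Point): void {
    if (!this.boundTarget || !this.lastTouch) return;
    // Manipulation is relative, so the gesture can be performed anywhere.
    this.boundTarget.translate(touch.x - this.lastTouch.x, touch.y - this.lastTouch.y);
    this.lastTouch = touch;
  }

  onTouchUp(): void {
    // Releasing the touch ends the binding; gaze can then select a new target.
    this.boundTarget = null;
    this.lastTouch = null;
  }
}
```

The sketch covers only single-finger translation; the paper's multi-touch gestures (e.g., pinch and rotate) would extend `onTouchMove` to compute scale and rotation from multiple touch points bound to the same gaze-selected target.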
| Field | Value |
|---|---|
| Original language | English |
| Title of host publication | UIST '14: Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology |
| Publisher | Association for Computing Machinery |
| Pages | 509-518 |
| Number of pages | 10 |
| ISBN (Print) | 9781450330695 |
| DOIs | |
| Publication status | Published - 5 Oct 2014 |
Keywords
- Gaze input
- multi-touch
- multimodal UI
- interactive surface