Gaze-touch: combining gaze with multi-touch for interaction on the same surface

Ken Pfeuffer, Jason Alexander, Ming Ki Chong, Hans Gellersen

Research output: Chapter in a published conference proceeding

87 Citations (SciVal)
103 Downloads (Pure)


Gaze has the potential to complement multi-touch for interaction on the same surface. We present gaze-touch, a technique that combines the two modalities based on the principle of "gaze selects, touch manipulates". Gaze is used to select a target, and is coupled with multi-touch gestures that the user can perform anywhere on the surface. Gaze-touch enables users to manipulate any target from the same touch position, for whole-surface reachability and rapid context switching. Conversely, gaze-touch enables manipulation of the same target from any touch position on the surface, for example to avoid occlusion. Gaze-touch is designed to complement direct-touch as the default interaction on multi-touch surfaces. We provide a design space analysis of the properties of gaze-touch versus direct-touch, and present four applications that explore how gaze-touch can be used alongside direct-touch. The applications demonstrate use cases for interchangeable, complementary and alternative use of the two modes of interaction, and introduce novel techniques arising from the combination of gaze-touch and conventional multi-touch.
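The "gaze selects, touch manipulates" principle can be sketched as a simple input dispatcher: when a touch begins, the manipulation is bound to whatever target the gaze currently rests on, and subsequent touch movement is applied to that target regardless of where the finger actually is. This is a minimal illustrative sketch, not the authors' implementation; all class and method names (`GazeTouchDispatcher`, `Target`, the `on_*` handlers) are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Target:
    """A hypothetical on-screen object that can be dragged."""
    name: str
    x: float = 0.0
    y: float = 0.0


class GazeTouchDispatcher:
    """Sketch of gaze-touch routing: gaze selects, touch manipulates."""

    def __init__(self):
        self.gaze_target = None   # target currently under the user's gaze
        self.active = None        # target bound when the touch began

    def on_gaze(self, target):
        # Eye tracker reports which target the user is looking at.
        self.gaze_target = target

    def on_touch_down(self, x, y):
        # "Gaze selects": bind the gesture to the gazed-at target,
        # regardless of where on the surface the finger lands.
        self.active = self.gaze_target

    def on_touch_move(self, dx, dy):
        # "Touch manipulates": apply the drag to the bound target,
        # enabling whole-surface reach and occlusion-free manipulation.
        if self.active is not None:
            self.active.x += dx
            self.active.y += dy

    def on_touch_up(self):
        self.active = None
```

In this sketch the touch-down position is ignored for selection, which is what distinguishes gaze-touch from direct-touch: the user can look at a distant or occluded target and manipulate it from any comfortable touch position.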
Original language: English
Title of host publication: UIST '14 Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology
Publisher: Association for Computing Machinery
Number of pages: 10
ISBN (Print): 9781450330695
Publication status: Published - 5 Oct 2014


Keywords:
  • Gaze input
  • multi-touch
  • multimodal UI
  • interactive surface
