Abstract
The design of remote gesturing technologies is an area of growing interest. Current technologies take differing approaches to representing remote gesture, and it is not clear which approach offers the greatest benefit to task performance. This study therefore compared performance on a collaborative physical (assembly) task using remote gesture systems constructed from combinations of three gesture formats (unmediated hands only, hands and sketch, and digital sketch only) and two gesture output locations (direct projection into the worker's task space or display on an external monitor). Results indicated that gesturing with an unmediated representation of the hands leads to faster performance with no loss of accuracy. Comparison of gesture output locations found no significant difference between projecting gestures and presenting them on external monitors. These results are discussed in relation to theories of conversational grounding and the design of technologies from a 'mixed ecologies' perspective.
| Original language | English |
| --- | --- |
| Pages | 1191-1200 |
| Number of pages | 10 |
| Publication status | Published - 2006 |
| Event | SIGCHI Conference on Human Factors in Computing Systems (CHI 2006) - Montréal, Québec, Canada. Duration: 22 Apr 2006 → 27 Apr 2006 |
Conference
| Conference | SIGCHI Conference on Human Factors in Computing Systems (CHI 2006) |
| --- | --- |
| Country/Territory | Canada |
| City | Montréal, Québec |
| Period | 22/04/06 → 27/04/06 |