Abstract
In this paper, we present a technique for recognising hand gestures using data obtained from wearable sensors. A neural network takes three input streams (accelerometer data, EMG data, and joint angle data) from sensors affixed to the user's arm and estimates the gesture the user is performing. Our proposed method processes each of the three input streams separately through multiple convolution and LSTM layers before concatenating them and passing the result through three fully connected layers. The model was trained on a subset of DB2 Exercise 3 from the Ninapro dataset, comprising data collected from 40 participants performing 23 gestures, and achieved a classification accuracy of 82.11%. As this falls below the classification accuracy reported in established literature on similar data, we aim to improve performance by training and validating the model with a larger quantity of training data.
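To make the described architecture concrete, below is a minimal PyTorch sketch of a three-branch convolution-plus-LSTM network whose branch features are concatenated and classified by three fully connected layers. The channel counts, layer widths, kernel sizes, and the names `StreamBranch` and `GestureNet` are illustrative assumptions, since the abstract does not give the paper's hyperparameters.

```python
import torch
import torch.nn as nn

class StreamBranch(nn.Module):
    """One per-sensor branch: stacked 1-D convolutions followed by an LSTM.
    Layer sizes are assumptions; the abstract does not specify them."""
    def __init__(self, in_channels, conv_channels=64, lstm_hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, conv_channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(conv_channels, conv_channels, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(conv_channels, lstm_hidden, batch_first=True)

    def forward(self, x):           # x: (batch, channels, time)
        x = self.conv(x)            # (batch, conv_channels, time)
        x = x.transpose(1, 2)       # (batch, time, conv_channels) for the LSTM
        _, (h_n, _) = self.lstm(x)  # keep only the final hidden state
        return h_n[-1]              # (batch, lstm_hidden)

class GestureNet(nn.Module):
    """Three parallel branches (accelerometer, EMG, joint angles) whose
    features are concatenated and passed through three fully connected
    layers, ending in a 23-way gesture classifier."""
    def __init__(self, acc_ch=3, emg_ch=12, angle_ch=22, n_classes=23):
        super().__init__()
        self.acc = StreamBranch(acc_ch)
        self.emg = StreamBranch(emg_ch)
        self.angle = StreamBranch(angle_ch)
        self.head = nn.Sequential(
            nn.Linear(3 * 128, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, acc, emg, angle):
        feats = torch.cat(
            [self.acc(acc), self.emg(emg), self.angle(angle)], dim=1
        )
        return self.head(feats)     # logits over the 23 gestures

# Example forward pass on random windows of 200 time steps.
model = GestureNet()
logits = model(torch.randn(8, 3, 200),
               torch.randn(8, 12, 200),
               torch.randn(8, 22, 200))
print(logits.shape)  # torch.Size([8, 23])
```

Processing each modality in its own branch lets the convolutions learn sensor-specific local features before the LSTM models temporal structure; only the fused feature vector is shared across modalities.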
Original language | English |
---|---|
Title of host publication | IEEE RO-MAN |
Publication status | Acceptance date - 9 Jun 2025 |
Event | IEEE International Conference on Robot and Human Interactive Communication, Eindhoven University of Technology, Eindhoven, Netherlands
Duration | 25 Aug 2025 → 29 Aug 2025
Conference number | 34
Conference
Conference | IEEE International Conference on Robot and Human Interactive Communication |
---|---|
Abbreviated title | IEEE RO-MAN 2025 |
Country/Territory | Netherlands |
City | Eindhoven |
Period | 25/08/25 → 29/08/25 |