Recognition of Hand Gestures using Wearable Sensors

  • Joel Hodgson

Student thesis: Master's Thesis (MEng)


Approximately 5,200 amputations are reported annually in the UK. Prosthetic limbs have been used throughout history in an attempt to improve amputees' quality of life. A recent area of development is myoelectric-controlled prosthetic limbs, which use electromyography to record the user's muscle activity and control the limb. Mechanical advances have given these limbs an increasing number of degrees of freedom and greater functionality, yet current myoelectric prosthetic limbs are still limited to basic opening and closing functions. There is a clear need for a myoelectric-controlled prosthetic arm capable of a range of dexterous gestures.

This project developed three machine learning models for the classification of hand gestures from electromyography (EMG) signals: a Support Vector Machine (SVM), a Convolutional Neural Network (CNN), and a CNN integrated with a Naive Bayes network. The models were trained and tested on a combination of EMG data from an open-source database and data recorded from the real-time user. The performance of each model was evaluated on sets of 5 and 10 gestures, both offline and in real time. The CNN with Naive Bayes network achieved the highest offline accuracies: 96.5% for 5 gestures and 94.1% for 10.
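As a rough illustration of this kind of offline pipeline, the sketch below trains an SVM on windowed, time-domain EMG features. The feature choice (mean absolute value and RMS per channel), window length, channel count, and the synthetic stand-in data are all assumptions for illustration, not details taken from the thesis.

```python
# Hypothetical sketch: gesture classification from windowed EMG with an SVM.
# Real data would come from an EMG armband; here a synthetic stand-in is used.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def extract_features(window):
    """Mean absolute value and RMS per channel -- common time-domain EMG features."""
    mav = np.mean(np.abs(window), axis=0)
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    return np.concatenate([mav, rms])

# Synthetic data: 5 gestures, 8 channels, 200-sample windows; each gesture
# gets a distinct per-channel amplitude profile (an assumption, for illustration).
n_gestures, n_channels, win_len = 5, 8, 200
X, y = [], []
for g in range(n_gestures):
    profile = 0.5 + rng.random(n_channels) * g  # gesture-specific amplitudes
    for _ in range(40):
        window = rng.normal(0.0, 1.0, (win_len, n_channels)) * profile
        X.append(extract_features(window))
        y.append(g)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = SVC(kernel="rbf", C=10.0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

The same windowing and feature-extraction step would feed a CNN as well; only the classifier on top changes.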

In real time, the standard CNN performed best for 5 gestures, with an overall accuracy of 98.5% and 2 of the 5 gestures classified with 100% accuracy. Only 8 of the 10 gestures could be classified in real time; the CNN with Naive Bayes model performed best on these 8, with a mean accuracy of 88%. Although the CNN with Naive Bayes model performed best when classifying 10 gestures, it occasionally became stuck and was unable to reach a decision. The standard CNN was therefore judged the most effective real-time model, with a constant decision rate of 5 decisions per second and 98.5% accuracy across 5 real-time gestures.
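The "stuck" behaviour can be illustrated with a toy version of sequential evidence accumulation: per-window class probabilities are combined in a naive-Bayes fashion, and a decision is emitted only once one class exceeds a confidence threshold. The function name, threshold, and frame budget below are all hypothetical; the thesis's actual decision rule may differ.

```python
# Hypothetical sketch: accumulate per-window class probabilities until one
# class is confident enough, or give up (the "stuck" case) after max_frames.
import numpy as np

def accumulate(frames, threshold=0.9, max_frames=20):
    """Combine successive probability vectors by multiplying likelihoods
    (naive-Bayes style, in log space) and renormalising. Returns the winning
    class index, or None if no class reaches the threshold in time."""
    log_post = np.zeros(frames[0].shape[0])
    for i, p in enumerate(frames):
        log_post += np.log(np.clip(p, 1e-12, None))
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        if post.max() >= threshold:
            return int(post.argmax())
        if i + 1 >= max_frames:
            break
    return None

# Frames that mildly favour class 2 out of 5: evidence builds over windows.
frames = [np.array([0.15, 0.15, 0.4, 0.15, 0.15])] * 6
decision = accumulate(frames)
```

When the per-window probabilities stay near uniform, no class ever reaches the threshold and the function returns None, mirroring the indecision reported for the 10-gesture case.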

Training the models with EMG data from the real-time user was found to be a necessity. When the CNN was trained using only data from the real-time user, 5 gestures were classified with 99.5% accuracy and all 10 gestures could be classified in real time. When the model was trained with data from other users, accuracy dropped and the model was unable to make reliable real-time decisions. Future work should focus on developing a graphical user interface, optimisation algorithms, and the investigation of new features and CNN architectures. In the longer term, the gesture classifications should be integrated into the control system of a myoelectric prosthetic arm.

The development of these machine learning models aims to bring increased dexterity and control to such devices.
Date of Award: 2019
Original language: English
Supervisor: Uriel Martinez Hernandez
