Optimizing Input Layers Improves CNN Generalization and Transfer Learning for Imagined Speech Decoding from EEG

Ciaran Cooney, Raffaella Folli, Damien Coyle

Research output: Contribution to conference › Paper › peer-review

Abstract

A brain-computer interface (BCI) that employs imagined speech as the mode of determining user intent requires strong generalizability for a feasible system to be realized. Research in this field has typically trained algorithms on a within-subject basis. However, even within-subject training and test data are not always of the same feature space and distribution. Such scenarios can contribute to poor BCI performance, and real-world applications for imagined speech-based BCIs cannot assume homogeneity in user data. Transfer Learning (TL) is a common approach used to improve generalizability in machine learning models through transfer of knowledge from a source domain to a target task. In this study, two distinct TL methodologies are employed to classify EEG data corresponding to imagined speech production of vowels, using a deep convolutional neural network (CNN). Both TL approaches involved conditional training of the CNN on all subjects, excluding the target subject. A subset of the target subject data was then used to fine-tune either the input or output layers of the CNN. Results were compared with a standard benchmark using a within-subject approach. Both TL methods significantly outperformed the baseline, and fine-tuning of the input layers resulted in the highest overall accuracy (35.68%; chance: 20%).
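The transfer protocol described in the abstract — train on all subjects except the target, then fine-tune only the input (or output) layers on a subset of the target subject's data — can be sketched as below. This is a minimal illustration, not the authors' implementation: the function names, the 20% fine-tuning fraction, and the toy layer list are all assumptions introduced for clarity.

```python
import numpy as np


def loso_transfer_split(data, target, finetune_frac=0.2, seed=0):
    """Leave-one-subject-out split for transfer learning.

    Source domain = trials from every subject except the target.
    A small random subset of the target subject's trials is reserved
    for fine-tuning; the remainder is held out for evaluation.
    (Illustrative sketch; the fine-tuning fraction is an assumption.)
    """
    rng = np.random.default_rng(seed)
    source = {s: trials for s, trials in data.items() if s != target}
    target_trials = np.array(data[target])
    idx = rng.permutation(len(target_trials))
    n_ft = int(len(target_trials) * finetune_frac)
    finetune = target_trials[idx[:n_ft]]
    evaluation = target_trials[idx[n_ft:]]
    return source, finetune, evaluation


def layers_to_finetune(layers, variant="input", n_tune=1):
    """Select which CNN layers remain trainable during fine-tuning.

    variant="input" keeps the first n_tune layers trainable (the
    better-performing variant reported in the paper); variant="output"
    keeps the last n_tune layers. All other weights stay frozen.
    """
    return layers[:n_tune] if variant == "input" else layers[-n_tune:]
```

For example, with five subjects and `"S1"` as the target, `loso_transfer_split` yields a four-subject source set plus target fine-tuning/evaluation splits, and `layers_to_finetune(["conv1", "conv2", "fc"], "input")` selects only the first convolutional block for adaptation.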
Original language: English
Publication status: Published - 2019

Bibliographical note

IEEE International Conference on Systems, Man, and Cybernetics, 2019: Industry 4.0 (IEEE SMC 2019); Conference date: 06-10-2019 through 09-10-2019

Keywords

  • Electroencephalogram (EEG)
  • Imagined Speech
  • Convolutional Neural Network
  • Deep Learning
  • Transfer Learning
  • Brain-Computer Interface
