Abstract
This paper presents a feature extraction procedure (FEP) for a brain-computer interface (BCI) application where features are extracted from the electroencephalogram (EEG) recorded from subjects performing right and left motor imagery. Two neural networks (NNs) are trained to perform one-step-ahead predictions for the EEG time-series data, where one NN is trained on right motor imagery and the other on left motor imagery. Features are derived from the power (mean squared value) of the prediction error or the power of the predicted signals. All features are calculated from a window through which all predicted signals pass. Separability of the features arises from the morphological differences of the EEG signals and each NN's specialization to the type of data on which it is trained. Linear discriminant analysis (LDA) is used for classification. This FEP is tested on three subjects off-line, and classification accuracy (CA) rates range between 88% and 98%. The approach compares favorably to a well-known adaptive autoregressive (AAR) FEP and also to a linear AAR-model-based prediction approach.
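
A minimal sketch of the prediction-error-power feature extraction described in the abstract, assuming NumPy/scikit-learn. `MLPRegressor` is used here as a stand-in for the paper's one-step-ahead NN predictors, and the embedding order, window length, and network size are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

EMBED = 6     # past samples used to predict the next sample (assumed)
WINDOW = 128  # window over which the error power is averaged (assumed)

def embed(signal, order=EMBED):
    """Build (past-samples, next-sample) pairs for one-step-ahead prediction."""
    X = np.array([signal[i:i + order] for i in range(len(signal) - order)])
    y = signal[order:]
    return X, y

def train_predictor(trials):
    """Fit one NN predictor on all trials of a single motor-imagery class."""
    X = np.vstack([embed(t)[0] for t in trials])
    y = np.concatenate([embed(t)[1] for t in trials])
    return MLPRegressor(hidden_layer_sizes=(10,), max_iter=500).fit(X, y)

def error_power_features(trial, net_left, net_right):
    """Mean squared prediction error of each class-specific NN over the last window."""
    X, y = embed(trial)
    feats = []
    for net in (net_left, net_right):
        err = y[-WINDOW:] - net.predict(X)[-WINDOW:]
        feats.append(np.mean(err ** 2))   # power of the prediction error
    return feats

# Usage sketch: left_trials / right_trials are lists of 1-D EEG arrays.
# net_l = train_predictor(left_trials)
# net_r = train_predictor(right_trials)
# features = [error_power_features(t, net_l, net_r) for t in left_trials + right_trials]
# labels = [0] * len(left_trials) + [1] * len(right_trials)
# clf = LinearDiscriminantAnalysis().fit(features, labels)
```

The intuition mirrors the abstract: the NN trained on left motor imagery should predict left-imagery EEG with lower error power than the NN trained on right motor imagery, and vice versa, so the pair of error powers forms a separable two-dimensional feature for the LDA classifier.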
| Original language | English |
|---|---|
| Pages (from-to) | 461-467 |
| Number of pages | 7 |
| Journal | IEEE Transactions on Neural Systems and Rehabilitation Engineering |
| Volume | 13 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - 1 Dec 2005 |