TY - GEN
T1 - Decoding Neural Activity for Part-of-Speech Tagging (POS)
AU - Ahmed, Salman
AU - Singh, Muskaan
AU - Bhattacharyya, Saugat
AU - Coyle, Damien
PY - 2023/10/4
Y1 - 2023/10/4
N2 - Decoding Part-of-Speech (POS) tagging directly from electroencephalography (EEG) signals whilst users overtly spoke (voiced speech) sentences could improve direct speech brain-computer interfaces (BCIs) using imagined or inner speech. To the best of our knowledge, earlier work used a machine learning approach on 74,953 sentences/tokens recorded in 75 EEG sessions. The tokens appear in 4,479 phrases consisting of terms from the English Online treebank, which contains records of weblogs, newsgroups, reviews, and Yahoo Answers. The results demonstrated the feasibility of POS decoding from EEG based on word class, word frequency, and word length, with accuracies of 71%, 86%, and 89%, respectively. We believe there is significant room for improvement with more advanced artificial intelligence. In this paper, we extend the existing work with end-to-end transformers. Our results show that the transformer model outperforms the benchmark traditional ML results, with gains of +20% for word length, +13% for open vs. closed class, and +12% for word frequency. In our empirical analysis, we find that decoding performance was better when using multi-electrode recordings compared to single-electrode recordings.
AB - Decoding Part-of-Speech (POS) tagging directly from electroencephalography (EEG) signals whilst users overtly spoke (voiced speech) sentences could improve direct speech brain-computer interfaces (BCIs) using imagined or inner speech. To the best of our knowledge, earlier work used a machine learning approach on 74,953 sentences/tokens recorded in 75 EEG sessions. The tokens appear in 4,479 phrases consisting of terms from the English Online treebank, which contains records of weblogs, newsgroups, reviews, and Yahoo Answers. The results demonstrated the feasibility of POS decoding from EEG based on word class, word frequency, and word length, with accuracies of 71%, 86%, and 89%, respectively. We believe there is significant room for improvement with more advanced artificial intelligence. In this paper, we extend the existing work with end-to-end transformers. Our results show that the transformer model outperforms the benchmark traditional ML results, with gains of +20% for word length, +13% for open vs. closed class, and +12% for word frequency. In our empirical analysis, we find that decoding performance was better when using multi-electrode recordings compared to single-electrode recordings.
UR - http://www.scopus.com/inward/record.url?scp=85187306146&partnerID=8YFLogxK
U2 - 10.1109/SMC53992.2023.10394253
DO - 10.1109/SMC53992.2023.10394253
M3 - Chapter in a published conference proceeding
AN - SCOPUS:85187306146
SN - 9798350337037
T3 - Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics
SP - 3079
EP - 3084
BT - 2023 IEEE International Conference on Systems, Man, and Cybernetics
PB - IEEE
CY - U. S. A.
T2 - 2023 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2023
Y2 - 1 October 2023 through 4 October 2023
ER -