New Horizons 21 - Decoding Speech using Invasive Brain-Computer Interfaces based on Intracranial Brain Signals (dSPEECH)

Project: Research council

Project Details

Description

Some patients cannot speak because of impairment or degeneration of the neural pathways recruited in speech production or movement, as in motor neurone disease (MND) and amyotrophic lateral sclerosis (ALS). However, brain-computer interfaces (BCIs) may benefit these patients if the brain structures responsible for language and cognition are intact, as BCIs have the potential to bypass damaged neural pathways by decoding brain signals directly into speech.

BCIs may use invasive or non-invasive methods to record brain signals. In general, the brain signals recorded by non-invasive BCIs such as electroencephalography (EEG) are of poor quality, with a low signal-to-noise ratio, so non-invasive BCIs cannot at present decode speech with acceptable performance. In contrast, invasive BCIs such as electrocorticography (ECoG) and stereo-electroencephalography (SEEG) can collect high-quality intracranial brain signals with adequate spatial and temporal resolution, making them a promising route to speech decoding.

Although overt speech decoding using invasive BCIs (ECoG and SEEG) has developed rapidly and produced many excellent results in recent years, covert (imagined) speech decoding remains challenging. There are several reasons for this. The first and major reason is that the associated neural signals are weak and variable compared with overt speech, so covert speech is very difficult to decode with machine learning algorithms. The second reason is the scarcity of invasively recorded imagined-speech data. This scarcity cannot be relieved by an animal model, because animals use a system of communication that is believed to be limited to the expression of a finite number of utterances, largely determined genetically. In addition, recordings on humans are generally restricted to non-invasive techniques. Intracranial data can only be obtained in a clinical environment from patients with drug-resistant epilepsy or other neurological conditions, and inclusion criteria for speech studies, such as normal cognition and the ability to articulate, further reduce the number of potential subjects. This project (dSPEECH) aims to make breakthroughs on the above factors through a novel study.
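To make the decoding problem concrete, the sketch below shows the general shape of a speech-decoding pipeline: per-channel spectral features are extracted from multi-channel intracranial recordings and fed to a classifier. The 70-170 Hz high-gamma band, the channel layout, and the nearest-centroid classifier are all illustrative assumptions for this sketch, not the methods dSPEECH will actually use.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Mean spectral power of a 1-D signal within [low, high] Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= low) & (freqs <= high)
    return power[band].mean()

def extract_features(trial, fs=1000.0):
    """One high-gamma (70-170 Hz) power value per channel.

    `trial` has shape (channels, samples); band edges and sampling
    rate are illustrative assumptions.
    """
    return np.array([band_power(ch, fs, 70.0, 170.0) for ch in trial])

class NearestCentroidDecoder:
    """Toy decoder: assign each trial to the class whose mean
    feature vector (centroid) is nearest in Euclidean distance."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack(
            [X[y == c].mean(axis=0) for c in self.classes_]
        )
        return self

    def predict(self, X):
        # Distance from every trial to every class centroid.
        dists = np.linalg.norm(
            X[:, None, :] - self.centroids_[None, :, :], axis=2
        )
        return self.classes_[dists.argmin(axis=1)]
```

In practice, hand-crafted band-power features and a centroid classifier would be replaced by the advanced machine learning and deep learning models the project proposes; the sketch only illustrates the feature-extraction-then-decoding structure that any such pipeline shares.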

In dSPEECH, we aim to decode covert speech using invasive BCIs (ECoG and SEEG). We will establish new paradigms for a new generation of brain-to-text BCIs, develop advanced machine learning and deep learning algorithms for decoding covert speech, and construct the world's first ECoG/SEEG dataset for covert speech. We will also address the associated ethical and data-management issues. With such an ECoG/SEEG dataset and suitable decoding methods available, we are confident of making significant progress in covert speech decoding research.

dSPEECH is a joint project that brings together multidisciplinary members, including researchers in neural engineering and neurosurgeons from clinical medicine. We also have strong support from partners, including leading BCI companies and experienced overseas neurosurgeons. Based on this close and solid collaboration, we believe dSPEECH can generate world-leading results.
Status: Active
Effective start/end date: 1/01/23 to 31/12/24

Collaborative partners

Funding

  • Engineering and Physical Sciences Research Council

RCUK Research Areas

  • Information and communication technologies
  • Human Communication in ICT
  • Human-Computer Interactions
