Summary: Researchers have developed a non-invasive method to identify hand gestures through brain imaging.
The technique utilizes magnetoencephalography (MEG) and could contribute to the development of brain-computer interfaces. Such interfaces could help individuals with paralysis, amputated limbs or other physical challenges control assistive devices with their minds.
This work represents the most successful non-invasive single-hand gesture differentiation achieved to date.
- The researchers successfully used non-invasive MEG to distinguish different hand gestures with over 85% accuracy.
- The technique, which is as safe as taking a patient’s temperature, has potential applications for those with physical challenges.
- MEG measurements from only half of the brain regions sampled yielded nearly comparable results, suggesting future MEG helmets might require fewer sensors.
Researchers at the University of California San Diego have found a way to distinguish among the hand gestures people are making using only data from noninvasive brain imaging, without any information from the hands themselves.
The results are an early step in developing a non-invasive brain-computer interface that may one day allow patients with paralysis, amputated limbs or other physical challenges to use their mind to control a device that assists with everyday tasks.
The research, recently published online ahead of print in the journal Cerebral Cortex, represents the best results thus far in distinguishing single-hand gestures using a completely noninvasive technique, in this case, magnetoencephalography (MEG).
“Our goal was to bypass invasive components,” said the paper’s senior author Mingxiong Huang, Ph.D., co-director of the MEG Center at the Qualcomm Institute at UC San Diego. Huang is also affiliated with the Department of Electrical and Computer Engineering at the UC San Diego Jacobs School of Engineering and the Department of Radiology at UC San Diego School of Medicine, as well as the Veterans Affairs (VA) San Diego Healthcare System.
“MEG provides a safe and accurate option for developing a brain-computer interface that could ultimately help patients.”
The researchers underscored the advantages of MEG, which uses a helmet with an embedded 306-sensor array to detect the magnetic fields produced by electric currents flowing between neurons in the brain.
Alternative brain-computer interface techniques include electrocorticography (ECoG), which requires surgical implantation of electrodes on the brain surface, and scalp electroencephalography (EEG), which localizes brain activity less precisely.
“With MEG, I can see the brain thinking without taking off the skull and putting electrodes on the brain itself,” said study co-author Roland Lee, MD, director of the MEG Center at the UC San Diego Qualcomm Institute, emeritus professor of radiology at UC San Diego School of Medicine, and physician with VA San Diego Healthcare System.
“I just have to put the MEG helmet on their head. There are no electrodes that could break while implanted inside the head; no expensive, delicate brain surgery; no possible brain infections.”
Lee likens the safety of MEG to taking a patient’s temperature. “MEG measures the magnetic energy your brain is putting out, like a thermometer measures the heat your body puts out. That makes it completely noninvasive and safe.”
The current study evaluated the ability to use MEG to distinguish between hand gestures made by 12 volunteer subjects. The volunteers were fitted with the MEG helmet and instructed, in random order, to make one of the gestures used in the game Rock Paper Scissors (as in previous studies of this kind). MEG functional information was superimposed on MRI images, which provided structural information about the brain.
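For a concrete picture of what such single-trial data look like, here is a minimal sketch, assuming the open-source MNE-Python library, of how gesture-cued MEG epochs might be extracted. The file name, trigger channel, event codes, and epoch window are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch, assuming MNE-Python, of extracting gesture-cued
# single-trial MEG epochs. File name, trigger channel, event codes,
# and the epoch window are illustrative assumptions.
import mne

raw = mne.io.read_raw_fif("subject01_rps_raw.fif", preload=True)  # hypothetical recording
raw.filter(l_freq=1.0, h_freq=100.0)  # assumed band-pass range

# Hypothetical trigger codes for the three cued gestures
event_id = {"rock": 1, "paper": 2, "scissors": 3}
events = mne.find_events(raw, stim_channel="STI 014")

# One labeled epoch per gesture cue; the window is an assumption
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.2, tmax=1.0, baseline=(None, 0), preload=True)

X = epochs.get_data()        # shape: (n_trials, n_sensors, n_times)
y = epochs.events[:, 2] - 1  # gesture labels 0/1/2
```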
To interpret the data generated, Yifeng (“Troy”) Bu, an electrical and computer engineering Ph.D. student in the UC San Diego Jacobs School of Engineering and first author of the paper, wrote a high-performing deep learning model called MEG-RPSnet.
“The special feature of this network is that it combines spatial and temporal features simultaneously,” said Bu. “That’s the main reason it works better than previous models.”
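Bu's description suggests an architecture along these lines: one convolutional stage mixes signals across the 306 sensors (spatial), a second convolves along the time axis (temporal), and a small classifier maps the result onto the three gestures. The PyTorch sketch below illustrates that general idea only; the layer sizes, kernel widths, and input length are assumptions, not the published MEG-RPSnet.

```python
# Minimal PyTorch sketch of a network combining spatial features (mixing
# the 306 MEG sensors) and temporal features (convolving along time), in
# the spirit of what Bu describes. Sizes are assumptions, not MEG-RPSnet.
import torch
import torch.nn as nn

class SpatioTemporalMEGNet(nn.Module):
    def __init__(self, n_sensors=306, n_classes=3):
        super().__init__()
        # Spatial stage: a kernel of width 1 mixes all sensors at each
        # time point into a smaller set of virtual channels.
        self.spatial = nn.Conv1d(n_sensors, 64, kernel_size=1)
        # Temporal stage: convolution along the time axis, then pooling.
        self.temporal = nn.Sequential(
            nn.Conv1d(64, 64, kernel_size=7, padding=3),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                 # x: (batch, n_sensors, n_times)
        x = self.spatial(x)               # -> (batch, 64, n_times)
        x = self.temporal(x).squeeze(-1)  # -> (batch, 64)
        return self.classifier(x)         # logits for rock/paper/scissors

model = SpatioTemporalMEGNet()
logits = model(torch.randn(8, 306, 250))  # 8 trials, 250 time samples each
```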
Once the results of the study were in, the researchers found that their technique could distinguish among hand gestures with more than 85% accuracy. These results were comparable to those of earlier studies using the invasive ECoG brain-computer interface with much smaller sample sizes.
The team also found that MEG measurements from only half of the brain regions sampled could generate results with only a small (2-3%) loss of accuracy, indicating that future MEG helmets might require fewer sensors.
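As a rough illustration of how such a sensor-reduction comparison can be run, the sketch below trains the same classifier on the full 306-channel array and on a channel subset, then compares test accuracy. The random placeholder data, the every-other-channel subset, and the simple linear classifier are all stand-ins; the study grouped real MEG sensors by brain region and used MEG-RPSnet.

```python
# Sketch of a sensor-reduction comparison: train the same classifier on
# the full 306-channel array and on a channel subset, then compare test
# accuracy. Placeholder data and a linear classifier stand in for the
# study's regional sensor groupings and MEG-RPSnet.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def gesture_accuracy(X, y):
    """Train/test a linear classifier on flattened trials; return accuracy."""
    Xf = X.reshape(X.shape[0], -1)
    Xtr, Xte, ytr, yte = train_test_split(Xf, y, test_size=0.2, random_state=0)
    return LogisticRegression(max_iter=1000).fit(Xtr, ytr).score(Xte, yte)

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 306, 20))  # placeholder: trials x sensors x time
y = rng.integers(0, 3, size=120)         # placeholder rock/paper/scissors labels

half = np.arange(0, 306, 2)              # stand-in for one regional half of the sensors
print("full array:", gesture_accuracy(X, y))
print("half array:", gesture_accuracy(X[:, half, :], y))
```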
Looking ahead, Bu noted, “This work builds a foundation for future MEG-based brain-computer interface development.”
In addition to Huang, Lee and Bu, the article, “Magnetoencephalogram-based brain–computer interface for hand-gesture decoding using deep learning,” was authored by Deborah L. Harrington, Qian Shen and Annemarie Angeles-Quinto of VA San Diego Healthcare System and UC San Diego School of Medicine; Hayden Hansen of VA San Diego Healthcare System; Zhengwei Ji, Jaqueline Hernandez-Lucas, Jared Baumgartner, Tao Song and Sharon Nichols of UC San Diego School of Medicine; Dewleen Baker of VA Center of Excellence for Stress and Mental Health and UC San Diego School of Medicine; Imanuel Lerman of UC San Diego, its School of Medicine and VA Center of Excellence for Stress and Mental Health; and Ramesh Rao (director of Qualcomm Institute), Tuo Lin and Xin Ming Tu of UC San Diego.
Magnetoencephalogram-based brain–computer interface for hand-gesture decoding using deep learning
Advancements in deep learning algorithms over the past decade have led to extensive developments in brain–computer interfaces (BCI). A promising imaging modality for BCI is magnetoencephalography (MEG), which is a non-invasive functional imaging technique.
The present study developed a MEG sensor-based BCI neural network to decode Rock-Paper-Scissors gestures (MEG-RPSnet). Unique preprocessing pipelines in tandem with convolutional neural network deep-learning models accurately classified gestures.
On a single-trial basis, we found an average of 85.56% classification accuracy in 12 subjects. Our MEG-RPSnet model outperformed two state-of-the-art neural network architectures for electroencephalogram-based BCI as well as a traditional machine learning method, and demonstrated equivalent and/or better performance than machine learning methods that have employed invasive, electrocorticography-based BCI using the same task.
In addition, MEG-RPSnet classification performance using an intra-subject approach outperformed a model that used a cross-subject approach.
Remarkably, we also found that when using only central-parietal-occipital regional sensors or occipitotemporal regional sensors, the deep learning model achieved classification performances that were similar to the whole-brain sensor model. The MEG-RPSnet model also distinguished neuronal features of individual hand gestures with very good accuracy.
Altogether, these results show that noninvasive MEG-based BCI applications hold promise for future BCI developments in hand-gesture decoding.
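The intra- versus cross-subject comparison in the abstract corresponds to two standard evaluation splits, sketched below with scikit-learn on placeholder data. The classifier and feature sizes are assumptions; only the 12-subject structure mirrors the study.

```python
# Sketch of the two evaluation schemes compared above: intra-subject
# (train and test within one person's trials) versus cross-subject
# (train on some subjects, test on a held-out subject). The data are
# random placeholders; only the 12-subject structure mirrors the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_subjects, trials_per_subject = 12, 30
X = rng.standard_normal((n_subjects * trials_per_subject, 200))  # flattened placeholder features
y = rng.integers(0, 3, size=len(X))                              # placeholder gesture labels
subjects = np.repeat(np.arange(n_subjects), trials_per_subject)

clf = LogisticRegression(max_iter=1000)

# Intra-subject: cross-validate separately within each subject's trials.
intra = [cross_val_score(clf, X[subjects == s], y[subjects == s], cv=5).mean()
         for s in range(n_subjects)]

# Cross-subject: leave one whole subject out of training on each fold.
cross = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())

print(f"intra-subject mean accuracy: {np.mean(intra):.1%}")
print(f"cross-subject mean accuracy: {cross.mean():.1%}")
```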