Undoubtedly one of the worst neurological syndromes we confront is the locked-in syndrome (LIS). This can result from a stroke or other brain damage in which the thinking part of the brain is intact and functioning – the patient is awake – but the brainstem is damaged so that their brain cannot communicate with the rest of their body. They are essentially paralyzed below the eyes. In LIS some eye movements and blinking are still retained. This may not sound like much but it enables them to communicate yes-and-no answers, which provides some connection to the outside world.

There is also a syndrome known as complete locked-in syndrome (CLIS), in which even eye movements are lacking. This can result from end-stage amyotrophic lateral sclerosis (ALS), in which all muscle control is lost. Eventually even the eye muscles fail and the patient has no motor function at all. There is currently no way to communicate with people in CLIS, but a new study hopes to change that.

Brain-computer interface

The best hope for paralyzed patients is probably advances in brain-computer interface (BCI) technology. There have been some impressive advances in the technology for allowing brains to communicate with computers, and vice-versa. I wrote about such advances here in 2012, and progress has continued since.

Researchers, for example, have been able to train a monkey to operate a robotic arm with its thoughts alone. This advance has been replicated in humans, giving a patient with tetraplegia (complete paralysis of all four limbs) the ability to operate a prosthetic limb. People have also been trained to use the electrical activity of their brains to control a cursor on a computer screen, enabling them to type out full messages.

Computer processing power and software technology are more than up to the task, and will only improve. The real limiting factor technologically is producing a stable physical connection between the computer and the brain. Computer chips tend to get hot and need to be powered. Implanted wires tend to provoke scar tissue and make infections more likely.

Researchers are experimenting with intravascular electrodes, which may solve some of the problems. Scalp surface electrodes are safer and more convenient, but lose a lot of information resolution because of the skull and other tissue between the electrodes and the brain.

At this point you are probably thinking – OK, if a monkey can control a robotic arm using its brain activity, and a person can control a computer cursor, then conveying a simple yes-no should be a piece of cake. Surprisingly, however, researchers so far have been unable to use BCI to convey such abstract information in subjects who are completely paralyzed.

The possible reasons for this are interesting. It seems the learning process requires some motor or sensory feedback. In patients with CLIS there is no motor feedback – their motor neurons are completely gone and their muscles have wasted away. Without this feedback they cannot learn to operate a BCI.

This is disappointing, but it provides useful clues as to how current BCIs function. The brain has plasticity – it can adapt to new inputs and outputs – but it apparently needs feedback to map its activity onto new functions. It is much more difficult to adapt to a computer interface using pure thought, without that feedback.

The new study

A study published in PLOS Biology looked at four patients with CLIS, or entering CLIS, due to ALS. The researchers used functional near-infrared spectroscopy (fNIRS) to measure the oxygenation of blood in the frontal lobes of the brain. When the brain is active the neurons draw more oxygen from the blood, decreasing the oxygen concentration in the blood, which fNIRS can measure.

They trained the subjects using yes-no questions with a known answer, like the name of their spouse. The subjects were told to think “yes, yes, yes” or “no, no, no” for 15 seconds. Computer software then learned to distinguish their fNIRS activity in the two states. They also simultaneously recorded scalp-surface EEG. Once calibrated, they tested the accuracy of the system with further yes-no questions with known answers. The authors report that they were able to achieve >70% accuracy with fNIRS overall, which was statistically significant.
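To make the calibrate-then-test procedure concrete, here is a minimal sketch of the general approach: learn a decision rule from known-answer trials, then score it on fresh trials. This is not the study's actual algorithm – the signal means, noise level, trial counts, and the simple threshold classifier are all illustrative assumptions.

```python
import random
import statistics

random.seed(42)

# Hypothetical model of a single trial: the average frontal oxygenation
# change while the patient thinks "yes, yes, yes" vs "no, no, no".
# Means and noise are arbitrary, chosen only to give modest separability.
def simulate_trial(state):
    mean = 0.6 if state == "yes" else 0.4
    return random.gauss(mean, 0.15)

# Calibration phase: questions with known answers.
calibration = [("yes", simulate_trial("yes")) for _ in range(30)]
calibration += [("no", simulate_trial("no")) for _ in range(30)]
yes_mean = statistics.mean(v for s, v in calibration if s == "yes")
no_mean = statistics.mean(v for s, v in calibration if s == "no")
threshold = (yes_mean + no_mean) / 2  # simplest possible decision rule

def classify(value):
    return "yes" if value > threshold else "no"

# Test phase: fresh known-answer questions, scored for accuracy.
test = [("yes", simulate_trial("yes")) for _ in range(50)]
test += [("no", simulate_trial("no")) for _ in range(50)]
accuracy = sum(classify(v) == s for s, v in test) / len(test)
print(f"test accuracy = {accuracy:.2f}")
```

Even in this toy version, overlapping noisy signals put accuracy well below 100%, which is the regime the study is operating in.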

This is the first time statistically significant results were found in CLIS subjects with a BCI. However, I think the results should be considered preliminary. There were only four subjects, and the results are only modestly above chance. Outcomes varied from 60-70% and there did not appear to be a significant learning curve.

If these results are valid, they can still be useful. You can ask the patient a question 10 times, for example, and if they give one answer 8 out of those 10 times that is probably the answer they intend.

The future

Of course I would like to see these experiments replicated, both exactly and with variations to see if the results can be improved. I would be less suspicious of the outcome if they were able to achieve >90% accuracy. Also, based upon prior studies, it seems that asking the patient to think other things might be more useful. For example, in one study using fMRI scans researchers were able to distinguish when an apparently comatose patient was asked to imagine themselves playing tennis vs walking around their home. But again, asking the patient to imagine a motor task may be problematic if they lack all motor feedback. This needs to be sorted out.

Also, researchers may have better luck with higher resolution imaging, such as brain surface or intravascular electrodes.

Part of the difficulty researchers are having is that they have no way of knowing what is actually happening inside the minds of these patients. Are they paying attention? Are they distracted by some pain or sensation? Maybe they have an itch or an earworm. It is impossible to know where patients go mentally when they are completely paralyzed.

This is important research, however, and I am eager to see how far and how quickly it will advance. We need better physical interfaces to make stable, high-resolution connections to the brain. We have also been picking the low-hanging fruit, such as motor function or interpreting activity in the primary visual cortex. We may be able to use this to great effect, but what we really want is to be able to decode the abstract thinking of the brain. I don’t think we are anywhere near this ability. Interpreting abstract thoughts, even yes-no, will probably take much higher resolution and much more sophisticated software and calibration algorithms. We are not entering The Matrix yet.

One possibility, at least with ALS patients, is to begin teaching patients to use such an interface when they are diagnosed, and not waiting until they are near CLIS. They may have a year or two to train their brains, and to train the software, to communicate while they still have motor feedback. By the time they are completely paralyzed they can slide seamlessly into the virtual reality of their BCI.

It’s both exciting and frustrating to be at the dawn of this new technology. There are exciting advances, and we can see the potential, but the real benefits are still out of reach.

Posted by Steven Novella

Founder and currently Executive Editor of Science-Based Medicine Steven Novella, MD is an academic clinical neurologist at the Yale University School of Medicine. He is also the president and co-founder of the New England Skeptical Society, the host and producer of the popular weekly science podcast, The Skeptics’ Guide to the Universe, and the author of the NeuroLogicaBlog, a daily blog that covers news and issues in neuroscience, but also general science, scientific skepticism, philosophy of science, critical thinking, and the intersection of science with the media and society. Dr. Novella also contributes every Sunday to The Rogues Gallery, the official blog of the SGU.