Devices to help people who are physically unable to speak aren't new.
Professor Stephen Hawking was a famous example of someone who used one. His device depended on the operator using tiny muscular movements to make a selection from an index of letters, words, and phrases. It works, but slowly.
Recent research has suggested that each word in a person's vocabulary is stored in a very particular part of the brain, raising hopes of a direct brain-to-speech-machine system; but the trouble with that is that each individual brain is quite likely to be wired up differently - and each language is very likely to work differently, too.
That's why some new research, announced in the Journal of Neuroscience, is so exciting. It takes a completely new and very elegant approach.
What lead researcher Professor Marc Slutzky and his team at Northwestern University did was to record brain activity while brain-surgery patients were making the various sounds (only forty-four in total) that make up the English language. As a result, Professor Slutzky's team have now been able to develop a machine/brain interface that can decode the sounds the brain is instructing the lips, tongue and vocal cords to make.
It turns out that these instructions work in much the same way as the brain's instructions for arm and leg movements.
The next step is to design a machine that can produce the decoded sounds.
This is so promising, so elegant and so exciting. Think of it: a machine that can automatically turn thoughts into speech...
...well, I just hope they make sure the thing has an off-switch, that's all.
Nuts and Bolts: brain. The word brain comes, via Old English, from the Greek brekhmos, which means forehead.