
Scientists find how brain encodes speech, moving closer to a brain-machine speech interface

Source: Xinhua | 2018-09-27 02:34:47 | Editor: yan

WASHINGTON, Sept. 26 (Xinhua) -- American researchers have unlocked new information about how the brain encodes speech, moving closer to developing a brain-machine speech interface that can decode the commands the brain sends to the tongue, palate, lips and larynx.

The study published on Wednesday in the Journal of Neuroscience revealed that the brain controls speech production in a similar manner to how it controls the production of arm and hand movements.

Northwestern University researchers recorded signals from two parts of the brain and decoded what these signals represented.

They found that the brain represented both the goals of what we are trying to say (speech sounds like "pa" and "ba") and the individual movements that we use to achieve those goals (how we move our lips, palate, tongue and larynx). The different representations occur in two different parts of the brain.

The discovery could help people who, like the late Stephen Hawking, depend on a brain-machine interface (BMI) to communicate, making such communication more intuitive, and could also benefit people with speech disorders such as apraxia of speech.

"This can help us build better speech decoders for BMIs, which will move us closer to our goal of helping people who are locked in speak again," said the paper's lead author Marc Slutzky, associate professor of neurology and of physiology at Northwestern.

Speech is composed of individual sounds, called phonemes, which are produced by coordinated movements of the lips, tongue, palate and larynx. However, scientists didn't know exactly how these movements, called articulatory gestures, are planned by the brain.

Slutzky and his colleagues found speech motor areas of the brain had a similar organization to arm motor areas of the brain.

The researchers recorded brain signals from the cortical surface using electrodes placed in patients undergoing surgery to remove brain tumors. The patients remained awake during the operation and were asked to read words aloud.

After the surgery, scientists marked the times when the patients produced phonemes and gestures. Then they used the recorded brain signals from each cortical area to decode which phonemes and gestures had been produced, and measured the decoding accuracy.
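The decoding-and-accuracy step described above can be sketched in miniature. The following is a minimal illustration only, using simulated signals and a simple nearest-centroid classifier; the trial counts, channel counts, and class labels are invented and do not reflect the study's actual recordings or decoding model.

```python
# Hedged sketch (not the study's pipeline): decode which of two phonemes
# ("pa" vs "ba") was produced from simulated multi-channel cortical signals,
# then measure decoding accuracy on held-out trials.
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 100 trials x 16 electrode channels per phoneme class.
# Real data would come from electrodes on the cortical surface.
n_trials, n_channels = 100, 16
pa = rng.normal(loc=0.5, scale=1.0, size=(n_trials, n_channels))   # "pa" trials
ba = rng.normal(loc=-0.5, scale=1.0, size=(n_trials, n_channels))  # "ba" trials

signals = np.vstack([pa, ba])
labels = np.array([0] * n_trials + [1] * n_trials)  # 0 = "pa", 1 = "ba"

# Random split into training and test halves.
idx = rng.permutation(len(labels))
train, test = idx[:100], idx[100:]

# Nearest-centroid decoding: assign each test trial to the closer class mean.
centroids = np.stack(
    [signals[train][labels[train] == c].mean(axis=0) for c in (0, 1)]
)
dists = np.linalg.norm(signals[test][:, None, :] - centroids[None, :, :], axis=2)
predicted = dists.argmin(axis=1)

# Decoding accuracy: fraction of held-out trials labeled correctly.
accuracy = (predicted == labels[test]).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```

Comparing this accuracy across signals taken from different cortical areas, as the study did for phonemes versus gestures, is what reveals which area carries which representation.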

The brain signals in the precentral cortex were more accurate at decoding gestures than phonemes, while those in the inferior frontal cortex, a higher level speech area, were equally good at decoding both phonemes and gestures, according to the study.

This finding helps support linguistic models of speech production and can guide engineers in designing brain-machine interfaces that decode speech from these brain areas.

The next step for the research is to develop an algorithm for brain machine interfaces that would not only decode gestures but also combine those decoded gestures to form words.
