
Neuroscientists Translate Brain Waves Into Recognizable Speech



Using brain-recording technology, artificial intelligence, and speech synthesis, scientists have converted brain patterns into intelligible speech, an advance that could eventually give voice to people who have none.

It's a shame Stephen Hawking isn't alive to see this, as he might have gotten a real kick out of it. The new speech system, developed by researchers at the Neural Acoustic Processing Lab at Columbia University in New York, is something the late physicist could have benefited from.

Hawking had amyotrophic lateral sclerosis (ALS), a motor neuron disease that took away his ability to speak, but he continued to communicate using a computer and a speech synthesizer. A sensor mounted on his glasses detected movements of his cheek, which Hawking used to pre-select words on a computer that the synthesizer then read aloud. It was a bit tedious, but it allowed Hawking to produce around a dozen words per minute.

But imagine if Hawking hadn't had to select and trigger the words manually. Indeed, some individuals, whether they have ALS, locked-in syndrome, or are recovering from a stroke, may lack the motor skills required to control a computer, even with just a twitch of the cheek. Ideally, an artificial voice system would capture an individual's thoughts directly to produce speech, eliminating the need to control a computer at all.

New research published in Scientific Reports takes us an important step closer to that goal, though instead of capturing an individual's inner thoughts, it uses the brain patterns produced while listening to speech.

To develop such a speech neuroprosthesis, neuroscientist Nima Mesgarani and his colleagues combined recent advances in deep learning with speech synthesis technologies. Their resulting brain-computer interface, though still rudimentary, captured brain patterns directly from the auditory cortex, which were then decoded by an AI-powered vocoder, or speech synthesizer, to produce intelligible speech. The speech sounded very robotic, but nearly three in four listeners were able to discern the content. It's an exciting advance, one that could eventually help people who have lost the ability to speak.
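
For readers curious how such a pipeline fits together, here is a minimal sketch in Python. It is not the authors' code: the array shapes, network size, and the Griffin-Lim resynthesis step are illustrative assumptions standing in for the study's actual neural features, deep-network architecture, and vocoder.

# Illustrative sketch only: map hypothetical auditory-cortex features to
# speech-spectrogram frames with a small network, then resynthesize audio.
import numpy as np
import torch
import torch.nn as nn
import librosa

n_frames, n_electrodes, n_freq_bins = 5000, 128, 257

# Stand-in training data: neural features per time frame, aligned with the
# magnitude spectrogram of the speech the patient was listening to.
X = torch.randn(n_frames, n_electrodes)
Y = torch.rand(n_frames, n_freq_bins)

model = nn.Sequential(
    nn.Linear(n_electrodes, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, n_freq_bins),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(20):  # toy regression loop
    optimizer.zero_grad()
    loss = loss_fn(model(X), Y)
    loss.backward()
    optimizer.step()

# Decode neural frames into a spectrogram and invert it to a waveform.
# The study used a dedicated vocoder; Griffin-Lim is only a stand-in here.
with torch.no_grad():
    S_hat = np.maximum(model(X[:500]).numpy().T, 0.0)  # (freq, time)
waveform = librosa.griffinlim(S_hat, n_iter=32, hop_length=128)

The point of the sketch is the overall shape of the approach, regressing from recorded brain activity to a representation of sound and then turning that representation back into audio, not any specific choice of model.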

To be clear, Mesgarani's neuroprosthetic device does not translate an individual's covert speech, that is, the words merely imagined in one's head, directly into spoken language. Unfortunately, the science isn't there yet. Instead, the system captured each individual's distinct cortical responses as they listened to recordings of people speaking. A deep neural network was then able to decode, or translate, these patterns, allowing the system to reconstruct the speech.

"This study applies deep learning techniques following the latest trend in decoding neuronal signals", Andrew Jackson was a professor at Neuronal Interface at Newcastle University that did not participate in the new study. Gizmodo. "In this case, epilepsy signs of brain neurons in humans have been recorded, and participants listen to different words and phrases that the actors read, and they have been trained to learn the relationship between the trained signal and the soundtrack to the neuron networks, which is based solely on brain signs words / phrases can be played. "

Epilepsy patients were chosen for the study because they often have to undergo brain surgery. Mesgarani, in collaboration with Ashesh Dinesh Mehta, a neurosurgeon at the Northwell Health Physician Partners Neuroscience Institute and a co-author of the new study, recruited five volunteers for the experiment. The team used invasive electrocorticography (ECoG) to measure neural activity as the patients listened to continuous speech sounds. The patients listened, for example, to speakers reciting digits from zero to nine. Their brain patterns were then fed into the AI-enabled vocoder, producing synthesized speech.
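
As a rough illustration of the kind of signal processing involved, the sketch below extracts a high-gamma envelope per electrode, one common way of turning raw ECoG into features a decoder can use. The sampling rate, frequency band, and filter settings are assumptions for illustration, not the study's published preprocessing.

# Illustrative only: a high-gamma (~70-150 Hz) envelope feature per electrode.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                                     # assumed ECoG sampling rate, Hz
raw = np.random.randn(128, 30 * int(fs))        # stand-in (electrodes, samples)

b, a = butter(4, [70.0 / (fs / 2), 150.0 / (fs / 2)], btype="band")
bandpassed = filtfilt(b, a, raw, axis=1)        # zero-phase band-pass filter
envelope = np.abs(hilbert(bandpassed, axis=1))  # analytic-signal amplitude

# Downsample to a frame rate (here 100 Hz) so each neural frame can be
# paired with one frame of the audio the patient heard.
hop = int(fs / 100)
features = envelope[:, ::hop].T                 # (frames, electrodes)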

The results sounded quite robotic, but were reasonably intelligible. In tests, listeners were able to correctly identify the spoken digits around 75 percent of the time. They could even tell whether the speaker was male or female. Not bad at all, and a result that came as a "surprise" even to Mesgarani, as he told Gizmodo in an email.

Recordings of the speech synthesizer can be found here (the researchers tested several techniques, but the best result came from combining deep neural networks with the vocoder).

The use of a vocoder in this context, rather than a system that matches and plays back pre-recorded words, was important to Mesgarani. As he explained to Gizmodo, there's more to speech than simply putting the right words together.

"The objective of this work is to restore speech communication when speaking of the ability to speak, we intend to learn cartoon from a brain signal, the same sound in the speech," he said Gizmodo. "Phonetics can also be decoded [distinct units of sound] or words, however, the speaker has much more information than content, for example, the speaker [with their distinct voice and style], intonation, emotional tone, etc. That is why our goal has been to recover the sound in a special role. "

Looking ahead, Mesgarani wants to synthesize more complicated words and sentences, and to collect brain signals from people who are simply thinking or imagining the act of speaking.

Jackson was impressed by the new study, but he said it's still not clear whether this approach will apply directly to brain-computer interfaces.

"On paper, decoder signals reflect the real words that are heard in the brain. It would be useful if the communication device should decode the user's words," said Jackson Gizmodo. "Although they often overlap some brain areas involved in hearing, speech, and imagination, we do not know exactly how brain-related signals are."

William Tatum, a neurologist at the Mayo Clinic who also did not participate in the new study, said the research is important in that it is the first to use artificial intelligence to reconstruct speech from the brain waves evoked by known acoustic stimuli. Its significance, he told Gizmodo, lies in advancing the application of deep learning toward the next generation of better-designed speech-producing systems. That said, he felt the sample of participants was small, and that relying on data extracted directly from the human brain during surgery is not ideal.

Another limitation of the study is that the neural networks would need to be trained on much more than just the digits zero through nine if they are to reproduce a richer vocabulary, and they required a large amount of brain-signal data from each individual participant. The system is also patient-specific, as we each produce different brain patterns when we listen to speech.
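
To make the patient-specific point concrete, here is a small sketch in which a separate decoder is fit and evaluated for each participant using only that participant's own data. The subject count matches the study, but the ridge regressor, array shapes, and random data are illustrative assumptions, not the study's models.

# Illustrative only: one decoder per participant, trained and tested on
# that participant's own (here randomly generated) data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
subjects = {
    f"patient_{i}": (rng.normal(size=(2000, 128)),  # neural feature frames
                     rng.random(size=(2000, 257)))  # target spectrogram frames
    for i in range(1, 6)                            # five participants
}

for name, (X, Y) in subjects.items():
    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
    decoder = Ridge(alpha=1.0).fit(X_tr, Y_tr)      # patient-specific model
    print(f"{name}: held-out R^2 = {decoder.score(X_te, Y_te):.3f}")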

"It will be interesting in the future to see a person decode them to other people," Jackson said. "It is as little as the user's voice recognition recognition system, as compared to today's technology, such as Siri and Alexa, the sense of one's own voice, using the neural networks again. It can be said that only one time can be done by the same signals of cleanliness ".

There's no doubt plenty of work still to be done. But the new paper is an encouraging step toward an implantable speech neuroprosthetic.

[Scientific Reports]
