In a study funded by Facebook, neuroscientists at the University of California, San Francisco have found a new way to decode speech directly from the human brain. The study was carried out with three epilepsy patients (who could speak normally) who had electrodes placed on the surface of their brains. The electrodes recorded brain activity while the patients listened to nine pre-set questions and read aloud from a set of 24 possible responses. The neuroscientists then used the recorded brain signals to build computer models that learned to match specific patterns of brain activity to the questions the patients heard and the answers they gave. Software trained on these models was able to identify, from brain signals alone, which question a patient had heard and which response he or she gave, with 76% and 61% accuracy, respectively.

The software is currently limited to the stock sentences it was trained on, but the scientists see it as an important step toward a more complex system that can translate brain signals into more varied speech. For that to happen, the algorithms will need to be trained on large amounts of spoken language and the corresponding brain signals, which may differ from person to person.
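At its core, the described setup is a pair of classification problems: one model picks which of the nine questions was heard, another picks which of the 24 responses was spoken, both from features extracted from the neural recordings. The sketch below is a minimal illustration of that idea only, under assumptions not taken from the study: it uses simulated "neural" feature vectors and an off-the-shelf logistic-regression classifier as a stand-in for the researchers' actual models.

```python
# Minimal sketch of decoding pre-set questions and answers from neural features.
# Assumptions (not from the study): brain recordings have already been reduced to
# fixed-length feature vectors, and a plain logistic-regression classifier stands
# in for the study's actual models. All data here is simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

N_TRIALS = 900        # hypothetical number of recorded trials
N_FEATURES = 128      # hypothetical length of each neural feature vector
N_QUESTIONS = 9       # the nine pre-set questions
N_ANSWERS = 24        # the 24 possible responses

def simulate_trials(n_classes):
    """Generate toy 'neural' features: one noisy template per sentence class."""
    templates = rng.normal(size=(n_classes, N_FEATURES))
    labels = rng.integers(0, n_classes, size=N_TRIALS)
    features = templates[labels] + rng.normal(scale=2.0, size=(N_TRIALS, N_FEATURES))
    return features, labels

def decode_accuracy(features, labels):
    """Train a classifier on part of the trials and score it on held-out trials."""
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.25, random_state=0
    )
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return clf.score(X_test, y_test)

# One decoder for "which question was heard", one for "which answer was spoken".
q_features, q_labels = simulate_trials(N_QUESTIONS)
a_features, a_labels = simulate_trials(N_ANSWERS)

print(f"question decoding accuracy: {decode_accuracy(q_features, q_labels):.2f}")
print(f"answer decoding accuracy:   {decode_accuracy(a_features, a_labels):.2f}")
```

Note that a decoder of this kind can only choose among the fixed sentences it was trained on, which is exactly the limitation the researchers point out when describing the next steps toward more varied speech.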
