A new artificial intelligence system called a semantic decoder can translate a person's brain activity into a continuous stream of text while the person listens to a story or silently imagines telling one.
The system was developed by researchers at the University of Texas at Austin, who say it might help people who are mentally conscious yet unable to physically speak, such as those debilitated by strokes, communicate intelligibly again.
The work was published in the journal Nature Neuroscience and relies in part on a transformer model, similar to the ones that power OpenAI's ChatGPT and Google's Bard.
Brain activity is measured using a functional MRI (fMRI) scanner after extensive training of the decoder, during which the individual listens to hours of podcasts in the scanner.
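To give a flavor of how such a decoder could work, here is a minimal sketch, not the authors' actual implementation: it assumes a language model proposes candidate next words and a separately trained "encoding model" predicts the fMRI response each candidate would evoke, so that candidates whose predicted responses best match the recorded scan are kept. The function names `propose_continuations` and `predict_fmri` are hypothetical stand-ins.

```python
import numpy as np

def decode_story(observed_fmri, propose_continuations, predict_fmri,
                 beam_width=5, n_steps=50):
    """Sketch of beam-search decoding: keep the word sequences whose
    predicted brain responses correlate best with the observed fMRI scan.

    observed_fmri         : np.ndarray of shape (time, voxels), the recorded scan
    propose_continuations : hypothetical fn(text) -> list[str], candidate next
                            words from a language model
    predict_fmri          : hypothetical fn(text) -> np.ndarray (time, voxels),
                            the trained encoding model's predicted response
    """
    beams = [("", float("-inf"))]  # (decoded text so far, score)
    for _ in range(n_steps):
        candidates = []
        for text, _ in beams:
            for word in propose_continuations(text):
                extended = (text + " " + word).strip()
                predicted = predict_fmri(extended)
                # Score a candidate by how well its predicted responses
                # correlate with the observed responses so far.
                t = min(len(predicted), len(observed_fmri))
                score = np.corrcoef(predicted[:t].ravel(),
                                    observed_fmri[:t].ravel())[0, 1]
                candidates.append((extended, score))
        # Keep only the highest-scoring sequences for the next step.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]
```

The key design idea this sketch illustrates is that the decoder does not read out words directly from the scan; it searches over plausible text and checks which text best explains the measured brain activity.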