AI Model for Mind Reading

It is 2023, and the world is swiftly moving away from conventional approaches to interpreting the mind. With the emergence of artificial intelligence, novel methods of deciphering human thought have come into play. In March, Japanese scientists made news by using Stable Diffusion to recreate high-resolution images from brain-activity scans. Now another groundbreaking development appears to be on the horizon.
A group of researchers from the University of Texas at Austin has created an AI model capable of reading a person's thoughts. Known as the semantic decoder, this noninvasive AI system converts brain activity into a continuous stream of text. The work was described in a peer-reviewed study published in the journal Nature Neuroscience.
Leading the research were Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin. Their study draws inspiration from the transformer model, similar to the one powering Google Bard and OpenAI’s ChatGPT.
Scientists anticipate that this innovation could greatly assist individuals with paralysis or other disabilities. The technology is an AI-based decoder that translates brain activity into a textual representation, meaning AI can now read a person's thoughts without invasive procedures. It marks an unprecedented milestone for neuroscience and medical science as a whole.
During the study, three participants underwent fMRI scans while listening to stories. The scientists reported extracting text corresponding to the participants' thoughts without the need for any brain implants. Notably, the mind-reading technology captured the gist of their thoughts rather than reproducing them word for word.
“For a noninvasive method, this represents a substantial leap forward compared to previous endeavors, which typically focused on single words or short sentences. Our model can decode continuous language over extended periods, encompassing complex ideas,” stated Huth in a report featured on the UT Austin website.
According to the researchers, once the AI system is fully trained, it can generate a stream of text while a participant listens to or imagines a story. In essence, the researchers employed technology similar to ChatGPT to interpret individuals' thoughts while they watched silent films or imagined narrating a story. However, the study has also sparked concerns about mental privacy.
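To give a flavor of the idea described above, here is a deliberately toy sketch of decoding as "scoring candidate text against brain activity": an encoding model predicts what brain response a piece of text should produce, and the decoder picks the candidate whose predicted response best matches what was actually observed. This is not the authors' code; the embedding, the linear encoding model, and all names and numbers below are invented stand-ins for illustration only.

```python
def embed(text):
    # Toy stand-in for a language-model embedding: a bag of letter counts.
    vec = [0.0] * 26
    for ch in text.lower():
        if 'a' <= ch <= 'z':
            vec[ord(ch) - ord('a')] += 1.0
    return vec

def predict_activity(text_vec, weights):
    # Toy linear "encoding model": predicted voxel responses = W @ features.
    return [sum(w * x for w, x in zip(row, text_vec)) for row in weights]

def score(observed, predicted):
    # Negative squared error: higher means a closer match to observed activity.
    return -sum((o - p) ** 2 for o, p in zip(observed, predicted))

def decode(observed, candidates, weights):
    # Pick the candidate sentence whose predicted brain response best matches
    # the observed one (a one-step stand-in for searching over continuations
    # proposed by a language model, as in the approach the article describes).
    return max(candidates,
               key=lambda c: score(observed, predict_activity(embed(c), weights)))
```

Because the decoder selects the best-matching candidate rather than transcribing activity directly, it naturally recovers the gist of a thought rather than its exact wording, which is consistent with the study's reported behavior.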
In addition to Tang and Huth, the study's co-authors include Amanda LeBel, a former research assistant in the Huth Lab, and Shailee Jain, a computer science graduate student at UT Austin.