The world is moving fast, drifting away from traditional methods of dream interpretation.
AI has reshaped our lives. Essentially a pattern-recognition technology that enables machines to learn and make their own decisions, AI has now entered the realm of the human psyche, as newer methods of reading the human mind come into play.
In March, Japanese scientists reported recreating high-resolution images from scans of brain activity using Stable Diffusion. Now there is a similar breakthrough along the same lines.
A team of scientists from the University of Texas at Austin has developed an AI model that can potentially read the human mind.
How Does This AI Model Work?
The noninvasive AI system, known as a semantic decoder, translates brain activity into a stream of text, according to a peer-reviewed study published in the journal Nature Neuroscience.
The research was led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin.
The study conducted by Tang and Huth is based partly on a transformer model, similar to the one that powers Google Bard and OpenAI’s ChatGPT.
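To make the idea concrete, here is a minimal toy sketch, not the authors' code, of how a decoder of this kind can work: a language model proposes candidate word sequences, an "encoding model" predicts the brain response each candidate would evoke, and a beam search keeps the candidates whose predicted responses best match the recorded scan. All function names, the fixed vocabulary, and the random "brain responses" below are illustrative assumptions.

```python
import numpy as np

N_VOXELS = 8  # toy stand-in for the number of recorded brain voxels


def encoding_model(text: str) -> np.ndarray:
    """Toy stand-in: deterministically map a word sequence to a
    predicted 'brain response' vector. A real encoding model would be
    trained on hours of a participant's fMRI data."""
    seed = sum(ord(c) for c in text)
    return np.random.default_rng(seed).standard_normal(N_VOXELS)


def propose_continuations(prefix: list[str]) -> list[list[str]]:
    """Toy stand-in for a language model suggesting next words
    from a tiny fixed vocabulary."""
    return [prefix + [w] for w in ("the", "dog", "ran", "home")]


def decode(recorded: np.ndarray, steps: int = 3, beam: int = 2) -> list[str]:
    """Beam search: keep the word sequences whose predicted brain
    response lies closest to the recorded activity."""
    candidates = [[]]
    for _ in range(steps):
        expanded = [c for cand in candidates for c in propose_continuations(cand)]
        expanded.sort(key=lambda seq: np.linalg.norm(
            encoding_model(" ".join(seq)) - recorded))
        candidates = expanded[:beam]  # prune to the best few candidates
    return candidates[0]


recorded_scan = np.random.default_rng(0).standard_normal(N_VOXELS)
print(" ".join(decode(recorded_scan)))
```

Because the search compares predicted against recorded responses rather than reading out words directly, a decoder like this tends to recover the gist of what was heard rather than a verbatim transcript.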
For a scientific invention to be taken seriously and brought to commercial scale, its applications and benefits matter greatly.
So where can this technology be used? It can be of great assistance to people with paralysis or other forms of disability.
This newly developed AI-based decoder can translate brain activity into a stream of text. This means AI can now allow a person’s thoughts to be read non-invasively, at a length and level of detail not achieved before in neuroscience.
How Was the Study Conducted?
Three participants were placed in MRI machines and asked to listen to stories. Listening generated thoughts, and corresponding patterns of brain activity, that the scanner recorded.
In what can be called a major breakthrough, the scientists claim they produced the text of the participants’ thoughts without the help of any brain implant.
Notably, the AI-based decoder could capture only the gist of a thought, not its exact wording or its entirety.
Huth was quoted as saying, in a report published on the UT Austin website: “For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences. We’re getting the model to decode continuous language for extended periods of time with complicated ideas.”
According to the scientists, the AI system could generate a stream of text while a participant listened to or imagined a story.
The researchers note that such a feat can be achieved only once the AI system is fully trained.
The technology, which draws on a GPT-style language model like the one behind ChatGPT, also interpreted participants’ thoughts while they watched silent films or imagined themselves telling a story.
Since everything that has a scope for use also has one for misuse, the study has raised concerns about mental privacy.
Apart from Tang and Huth, Amanda LeBel, a former research assistant at the Huth Lab, and Shailee Jain, a computer science graduate student at UT Austin, are co-authors of the study.