Google is developing an AI system called ‘Brain2Music’ that uses brain imaging data to generate music. The researchers report that the model can produce music closely resembling parts of the songs a person was listening to while their brain was being scanned. A research paper authored jointly by Google and Osaka University, available on arXiv, describes how the model reconstructs music from brain activity recorded with functional magnetic resonance imaging (fMRI).
fMRI Analysis and Neural Network Training for Brain-Generated Music
For those unfamiliar with the technology, fMRI tracks the flow of oxygen-rich blood through the brain, revealing which regions are most active. In the study, the researchers examined fMRI data from five participants, each of whom listened to the same 15-second music clips spanning genres such as classical, blues, disco, hip-hop, jazz, metal, pop, reggae, and rock.
The participants’ brain activity data was then used to train a deep neural network that learned to associate patterns of brain activity with musical attributes such as rhythm and emotion. The researchers also labeled the mood of each clip with descriptors including tender, sad, exciting, angry, scary, and happy.
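As a rough sketch of this kind of brain-to-music mapping, the toy example below fits a linear decoder from simulated fMRI voxel responses to a simulated music-attribute embedding. The array shapes, the use of ridge regression, and the evaluation metric are illustrative assumptions, not the paper’s actual deep-network setup.

```python
# Illustrative sketch only: a simple linear decoder from fMRI voxel
# responses to a music-embedding target, standing in for the deep
# network described in the paper. All shapes and data are synthetic.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_clips, n_voxels, embed_dim = 480, 6000, 128      # hypothetical sizes
X = rng.standard_normal((n_clips, n_voxels))        # fMRI response per 15 s clip
Y = rng.standard_normal((n_clips, embed_dim))       # target music embedding per clip

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)

# Ridge regression with cross-validated regularization strength,
# fit jointly over all embedding dimensions (multi-output).
decoder = RidgeCV(alphas=np.logspace(-1, 4, 12))
decoder.fit(X_train, Y_train)

Y_pred = decoder.predict(X_test)
# Score with the mean correlation between predicted and true embedding
# dimensions, a common metric in decoding studies.
corrs = [np.corrcoef(Y_pred[:, d], Y_test[:, d])[0, 1] for d in range(embed_dim)]
print(f"mean embedding correlation: {np.mean(corrs):.3f}")
```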
Brain2Music was personalized for each participant and converted their brain data into original music clips. The clips themselves were generated with Google’s MusicLM, an AI model designed to produce music from text descriptions.
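To make the two-stage, per-participant pipeline concrete, here is a minimal sketch under stated assumptions: the decoder and the `generate_music_from_embedding` helper are hypothetical placeholders (MusicLM is not publicly available as a library), so the generation step is only a stub.

```python
# Sketch of a per-participant decode-then-generate pipeline.
# `SubjectPipeline` and `generate_music_from_embedding` are hypothetical
# names for illustration, not a real MusicLM API.
from dataclasses import dataclass
import numpy as np
from sklearn.linear_model import Ridge

def generate_music_from_embedding(embedding: np.ndarray) -> np.ndarray:
    # Placeholder: in the paper, a text-to-music model (MusicLM) conditioned
    # on the predicted representation produces the audio. Here we return
    # 15 seconds of silence as a stand-in waveform.
    sample_rate, seconds = 16_000, 15
    return np.zeros(sample_rate * seconds)

@dataclass
class SubjectPipeline:
    decoder: Ridge  # trained separately for each participant

    @classmethod
    def fit(cls, fmri: np.ndarray, embeddings: np.ndarray) -> "SubjectPipeline":
        """Fit a subject-specific mapping from fMRI responses to music embeddings."""
        return cls(Ridge(alpha=100.0).fit(fmri, embeddings))

    def reconstruct(self, fmri_clip: np.ndarray) -> np.ndarray:
        """Predict a music embedding, then hand it to the generative model."""
        embedding = self.decoder.predict(fmri_clip[None, :])[0]
        return generate_music_from_embedding(embedding)
```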
Insights and Challenges from fMRI and AI Research
The study also found that when a person and MusicLM processed the same music, MusicLM’s internal representations correlated with activity in specific brain regions. Yu Takagi, a professor of computational neuroscience and AI at Osaka University and a co-author of the paper, told LiveScience that the project’s main goal was to better understand how the human brain interprets and processes music.
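This kind of correspondence between a model’s internals and brain activity can be probed with a simple cross-validated encoding analysis, sketched below on synthetic data. The feature and voxel dimensions and the ridge encoding model are assumptions for illustration, not the authors’ exact procedure.

```python
# Illustrative voxel-wise comparison between a model's internal features
# and fMRI responses to the same clips. All data here is synthetic; the
# point is the shape of the analysis, not the paper's exact method.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
n_clips, n_features, n_voxels = 480, 128, 6000
model_feats = rng.standard_normal((n_clips, n_features))  # e.g. features from one model layer
voxels = rng.standard_normal((n_clips, n_voxels))          # fMRI responses to the same clips

# Fit a linear encoding model (model features -> voxel responses) and
# score each voxel by held-out prediction correlation.
scores = np.zeros(n_voxels)
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in kfold.split(model_feats):
    enc = Ridge(alpha=100.0).fit(model_feats[train_idx], voxels[train_idx])
    pred = enc.predict(model_feats[test_idx])
    true = voxels[test_idx]
    # Per-voxel Pearson correlation within this fold.
    pred_c = pred - pred.mean(axis=0)
    true_c = true - true.mean(axis=0)
    scores += (pred_c * true_c).sum(axis=0) / (
        np.linalg.norm(pred_c, axis=0) * np.linalg.norm(true_c, axis=0) + 1e-12
    )
scores /= kfold.get_n_splits()
print("voxels with highest model-brain correspondence:", np.argsort(scores)[-5:])
```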
However, the paper notes that because each person’s brain is wired differently, a model trained on one individual is unlikely to transfer directly to another.
The technology is also unlikely to be practical in the near term, since recording the necessary signals requires users to spend long sessions inside an fMRI scanner. Nevertheless, future studies may explore whether AI can reconstruct music that people merely imagine in their minds.