New-age technologies like machine learning and artificial intelligence are taking over the world, promising to make human lives easier.
In the most recent development, US tech giant Google has unveiled an artificial intelligence system called MusicLM, which can turn words or text into music or a song.
To do so, the AI generation engine was trained on 2.8 lakh (280,000) hours of music.
The AI converts your text instructions into a song by analysing the text and deciphering the appropriate scale for it. It is capable of handling complex compositions and arrangements with high fidelity, and can mix and match different elements if required.
Besides deciphering the scale and handling complex compositions, MusicLM is able to capture subtle nuances of mood, melody and instrumentation.
In an academic paper published by Google's AI team, the researchers said that the text descriptions for conversion to music can be specific, like this one –
“The main soundtrack of an arcade game. It is fast-paced and upbeat, with a catchy electric guitar riff. The music is repetitive and easy to remember, but with unexpected sounds, like cymbal crashes or drum rolls”.
Or the description could be pretty vague, like this statement –
“The composition shows a strongly idealised view of the real crossing that Napoleon and his army made across the Alps through the Great St Bernard Pass in May 1800”.
The icing on the cake, however, is that MusicLM can take several instructions together and fluidly combine them into a single composition. Besides, it is capable of turning not just fully formed text descriptions into songs but also shorter captions, such as those attached to pictures.
While MusicLM seems impressive, it has one major drawback: the text-to-music generation system has not been made available to the public.
Indeed, Google does not plan to release it to everyone anytime soon, citing copyright concerns.
“We acknowledge the risk of potential misappropriation of creative content associated to the use case. We strongly emphasise the need for more future work in tackling these risks associated to music generation,” said the co-authors of the academic paper.