To begin with, artificial intelligence is genuinely good at making music. For example, Google DeepMind's Lyria lets you create tracks without being able to sing or read sheet music: a whole stack of technologies does the work instead of a human. With Lyria you can not only compose your own music but also mimic popular stars. The AI can write an entire song on demand in the style of "lyrical pop ballad about cats," though the result can still trigger an uncanny-valley feeling.
More abstract music, however, the AI handles with no problem. For example, the Endel project, known for its collaboration with the singer Grimes, creates a personalized sound space: a stream of ambient music for focused work or meditation. Endel already has millions of active users around the world. The app features a special mode, developed together with neurophysiologists, that generates music for sleep, including children's sleep.
The AI music market is projected to reach $1.2 billion by 2025, and more than 60% of popular artists already use AI services to create or polish lyrics and melodies.
Computer vision, perhaps surprisingly, is also related to music, even though no cameras are involved. Audio tracks (songs or voice recordings, for example) are represented as spectrograms, and the algorithm then works with those spectrograms as with images: it can analyze sound, recommend similar tracks, and even generate new spectrograms from which new audio is reconstructed.
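As a minimal sketch of this idea (all parameters here are illustrative, not tied to any specific product), the snippet below turns a synthetic audio signal into a log-magnitude spectrogram: a 2D array that vision models can treat exactly like a grayscale image.

```python
import numpy as np

# Synthesize 2 seconds of a 440 Hz tone at 16 kHz as a stand-in for a real track.
sr = 16000
t = np.arange(2 * sr) / sr
audio = np.sin(2 * np.pi * 440 * t)

# Short-time Fourier transform: slice the signal into overlapping windowed frames.
n_fft, hop = 512, 256
window = np.hanning(n_fft)
frames = [audio[i:i + n_fft] * window
          for i in range(0, len(audio) - n_fft + 1, hop)]

# Magnitude spectrum of each frame; rows = frequency bins, columns = time steps.
S = np.abs(np.fft.rfft(frames, axis=1)).T

# Log scaling, as is common before feeding spectrograms to image models.
log_S = 10 * np.log10(S + 1e-10)

print(log_S.shape)  # (257, 124): a frequency-by-time "image" of the sound
```

From here, the same convolutional networks that classify photos can classify genres, find similar tracks, or (in generative setups) produce new spectrograms that are inverted back into audio.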
Pop culture already has plenty of fully AI-generated musicians: neural networks create not only the tracks but also the image of the performer. For example, the virtual singer Noonoouri has hundreds of thousands of fans and successful collaborations with global brands, and her tracks have passed 1 million listens.
Even more popular is the virtual performer Hatsune Miku. Beyond her online fame, the 3D singer even appears on stage: her image is projected onto semi-transparent screens, creating the illusion of a live performer. Hatsune has an army of fans and has even been invited to perform at Coachella, one of the most popular music festivals in the world.
And vice versa: human artists are not averse to entering the 3D world either. Popular musicians like Twenty One Pilots hold large-scale concerts in Fortnite that can be attended in virtual reality. Rapper Travis Scott's Fortnite concert drew 12.3 million people, even though it lasted only 15 minutes.
AR and VR are coming to concerts as well. This year, Coachella visitors could attend the first AR stage in the festival's history: by pointing their smartphones at the stage, the audience saw mashed-up visualizations all around them. Gorillaz performed in a hybrid format: the human musicians played live, while their virtual counterparts could be seen on phone screens.
The band Nine Inch Nails is also famous for its love of modern technology. It even has an in-house technical team that develops complex light-and-music shows with AR and VR elements, using cameras installed on stage to broadcast distorted imagery onto moving screens behind the musicians.
Digital technology is also giving a second life to bands that can no longer perform on stage. In 2022, ABBA returned to the stage as three-dimensional avatars depicting the now-older musicians in their prime. To create them, the stars were digitized with a rig of 160 cameras, and AI was then used to make the images move and sing. The virtual artists perform a total of 22 songs over the roughly two-hour concert.
In addition to the 3D projections of the musicians, the audience enjoys "the best sound in the history of concerts": digital performers do not have to fight the noise and shortcomings of physical instruments and microphones.
We at OpenCV.ai are amateurs in music and shows, but professionals in computer vision and artificial intelligence. We will be happy to energize your audiovisual shows with our technologies.