A few weeks ago, some of us at work were discussing the lawyer who used ChatGPT to write a brief for him. Unfortunately, the artificial intelligence program cited cases it fabricated, landing the lawyer and his firm in hot water.
One of my friends remarked that AI is getting out of hand. Then he played a song on his phone – “I’m a Barbie Girl” sung by Johnny Cash. It sounded so right, but at the same time, so wrong. Cash may have sung about being “A Boy Named Sue,” but this was something else entirely.
Apparently, AI-produced music using the voices of popular music artists is the latest social media trend. You can make Frank Sinatra rap a Kanye West song or have the rapper sing a country song. The technology is so good that some people believe they’re hearing a new song, such as “Heart on My Sleeve,” which was fabricated to sound like it was performed by hip-hop artists the Weeknd and Drake.
The song racked up millions of streams before platforms like Spotify, Apple Music, YouTube and TikTok removed it. After learning of the song, Drake proclaimed, “This is the last straw.”
In response, Universal Music Group, the major record label that represents the Weeknd and Drake, along with Taylor Swift and Ariana Grande, sent a letter to streaming services asking them to block AI platforms from using their services to train on the lyrics and melodies of copyrighted songs. The company’s attorney, Jeffrey Harleston, said during a Senate hearing in July that “an artist’s voice is often the most valuable part of their livelihood and public persona.”
Some artists are cashing in on the fad, though. According to Rolling Stone, the singer Grimes announced that anyone can use AI to create songs using her voice “without penalty,” so long as she receives 50% of any royalties. Likewise, in June, Paul McCartney announced that a new Beatles song would be released later this year that used AI to “extricate” John Lennon’s voice from an old demo tape to create “the final Beatles record.”
So, the biggest question of the day is whether it is legal to use AI to create songs sounding like they are performed by popular artists. While the ditties may be original, to create the singing, AI is trained on copyrighted material. There are currently no regulations in place to dictate what AI can and can’t be trained on.
In March, the U.S. Copyright Office released new guidance on how to register music, art and writing created with AI in response to recent “striking advances in generative AI technologies.” The guidelines require registrants to disclose the inclusion of AI-generated content within work submitted for registration. The U.S. Copyright Office will then “consider whether the AI contributions are the result of ‘mechanical reproduction’” or instead are an author’s “own original mental conception.” It will only copyright works “created by a human being” and will not “register works produced by a machine.”
While originally opposed to the concept, Universal Music Group is now exploring whether to license artists’ melodies and vocals for AI-generated music. The Financial Times recently reported that UMG is working with Google to develop a tool for users to create AI-generated music using an artist’s voice, lyrics or sounds. Users would pay a fee to access the tool, and a portion of that fee would pay the copyright holders. Artists would have the option to lend their art to the program or not.
All I know is that listening to Johnny Cash sing “I’m a Barbie Girl” gave me the Folsom Prison Blues.