Artificial Intelligence transforms the creation and consumption of music


Emily Wong

The applications of generative artificial intelligence to music composition have the potential to revolutionize the industry. However, these technologies also raise questions about their limited capacity for creativity, as well as the possibility of copyright infringement and bias.

Jennifer Sheng and Allan Zhang

In late January, Google announced the development of MusicLM, a novel artificial intelligence model that can translate text-based prompts into music. The rapid growth of such generative artificial intelligence in recent years has the potential to transform the music industry, revolutionizing the composition and consumption of music.

The development of machine-generated music was pioneered by British computer scientist Alan Turing, who generated the first recording of a computer-generated piece in 1951. Since then, the role of machines in the music industry has grown significantly. In 1997, a program called Experiments in Musical Intelligence demonstrated rudimentary abilities to produce pieces mimicking the style of classical composers like Bach, Chopin and Beethoven, though the music was found to be generally lacking in depth and richness. In the two decades since, an explosion of innovation has dramatically expanded the capabilities of such technology, and these tools have proliferated online with the development of software like OpenAI’s Jukebox, Sony’s Flow Machines and the more recently unveiled MusicLM.

One of the core benefits of computer-generated melodies is that they can save composers time and provide a starting point for more advanced compositions.

“My experience with AI is that it’s very good at doing a lot of things that humans do, and composition-wise, it’s a very good way to get started. For example, you can make a chord progression, a melody or a randomly generated sequence of notes, which you can develop off of,” DVHS senior Boqian Liu, an avid musician and composer, explained. “Maybe it can’t replace you, but it gives you a ton of ideas.”

Some have hailed the development of such technology as a source of fresh innovation in the music industry. As Forbes notes, because these algorithms create music by synthesizing patterns and characteristics found in other pieces, they can effectively imitate the style of multiple genres of music. Thus, by blending components of different genres, such models can compose creative combinations of musical styles.
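The pattern-synthesis described above can be illustrated with a deliberately simplified sketch: a first-order Markov chain that learns which note tends to follow which in an existing melody, then generates new sequences that reuse those transition patterns. The note names, toy training melody and function names here are invented for illustration; real systems like MusicLM rely on far larger neural models.

```python
import random
from collections import defaultdict

def train(melody):
    """Record, for each note, which notes follow it in the source melody."""
    transitions = defaultdict(list)
    for current, following in zip(melody, melody[1:]):
        transitions[current].append(following)
    return transitions

def generate(transitions, start, length, seed=None):
    """Walk the learned transitions to produce a new melody 'in the style of' the source."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = transitions.get(out[-1])
        if not options:  # dead end: no note ever followed this one
            break
        out.append(rng.choice(options))
    return out

# A toy "training" melody, invented for this example.
source = ["C", "E", "G", "E", "C", "D", "E", "F", "E", "D", "C"]
model = train(source)
print(generate(model, "C", 8, seed=1))
```

Because every generated note was observed following its predecessor in the source, the output echoes the source's contours without copying it wholesale, which is the intuition behind style imitation.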

“Music is a set of rules punctuated by creativity, and AI can do that in surprising ways, because it can learn from existing systems about musical rules, and then it can use random generation based on that,” Liu said. “A very important thing to know about AI is that even though it doesn’t have creativity, it can build off the creativity of others and therefore itself be like a creative medium.”
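The idea of music as rules punctuated by randomness can be sketched in a few lines: fixed constraints (a scale and a chord progression) shape a stream of otherwise random note choices. The scale, progression and function below are illustrative assumptions, not the API of any real composition tool.

```python
import random

# Rules: stay in the C major scale, and anchor each bar to a chord
# from a common I-vi-IV-V progression. Randomness fills in the rest.
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]
PROGRESSION = [["C", "E", "G"], ["A", "C", "E"], ["F", "A", "C"], ["G", "B", "D"]]

def generate_melody(bars=PROGRESSION, notes_per_bar=4, seed=None):
    rng = random.Random(seed)
    melody = []
    for chord in bars:
        melody.append(rng.choice(chord))        # rule: start each bar on a chord tone
        for _ in range(notes_per_bar - 1):
            melody.append(rng.choice(C_MAJOR))  # randomness: any scale tone
    return melody

print(generate_melody(seed=42))
```

Even this toy version shows the division of labor: the rules guarantee the result sounds vaguely tonal, while the random choices supply the variation a human would then develop further.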

These characteristics of AI could facilitate the development of innovative music genres, allowing more people to enter the music composition scene.

“There’s a time and a place for it. Especially with electronic music and people creating their own beats, I think that is a great avenue for AI,” DVHS Choir and AP Music Theory teacher Diana Walker said. “There is a lot of potential in certain genres of music, like party music and great experimental music. And maybe aleatoric music, based on the philosophical question of, ‘Is any sound music?’ I think that’s an appropriate use of AI.”

Yet some believe that artificial intelligence and machines lack the personal experiences and cultural backgrounds that can be conveyed through the meaning of a song. Therefore, although AI can analyze the data and patterns in music, it may struggle to understand the deeper emotions and experiences that drive a musician’s creative process.

“You can program AI to create different sounds, but AI will never have those experiences and cultures, which is really important to making new music,” Liu noted. “A professional composer or arranger will still be used because they have their own distinctive styles, and even though AI can train itself as a model, it can never fully imitate that.”

Some of the most popular songs of all time, such as those by Miles Davis and Aretha Franklin, are arguably built on personal feelings of pain, joy or other emotions that inspire musicians to create. Furthermore, many contend that musicians have unique styles, full of nuance and subtlety, that make their music stand out, and it is this personal touch that AI may have difficulty replicating.

“In terms of creating studio music, movie music or even choir music, I just don’t think that anything can replace the human touch to that. There is something very soulful about the creation of music, and I don’t think AI has the capacity to really convey the depth of the human experience. Because as much as we can teach a machine and as much as it can make adjustments and train, it’s not going to have that same sense of humanity that touches music,” Walker said. “I think it would be a loss for us to replace that human touch with a machine.”

Not only does AI pose concerns regarding a lack of creativity and humanity, but the widespread availability of such technology also raises questions about copyright and intellectual property. According to The Verge, the practice of training artificial intelligence models on possibly copyrighted data raises ethical and legal questions of copyright infringement.

“Many AI models are trained on existing works created by other people, which raises concerns about ownership and control over these works. It’s important to respect the original creators and give credit where credit is due,” DVHS senior Veer Jain explained.

Another potential ethical issue lies in the inherent biases of artificial intelligence, which result from training such models on pre-existing data. Machine learning models, which form the basis of most AI applications, are trained on large datasets, often scraped from the Internet or compiled from other sources. Some argue that these datasets reflect the biases and preferences of the people who created them, which can lead to skewed results.
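The kind of skew described above can be shown with a toy example: a trivial "model" that simply recommends the most common genre in its training data will reproduce whatever imbalance that data contains. The genre counts below are invented purely for illustration.

```python
from collections import Counter

# An invented, deliberately imbalanced training set: 80% pop, 15% jazz, 5% folk.
training_data = ["pop"] * 80 + ["jazz"] * 15 + ["folk"] * 5

def most_common_genre(data):
    """A trivial 'recommender' that just echoes the majority of its training data."""
    return Counter(data).most_common(1)[0][0]

# The model's output is determined entirely by the skew in the data it was given.
print(most_common_genre(training_data))
```

Real music models are vastly more sophisticated, but the principle scales: if a training corpus over-represents certain styles, beats or voices, the model's outputs will tend to favor them as well.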

“Trained bias is what can cause AI to create music that favors one style over another. This is when the creator of AI trains it to think and favor certain aspects, like a beat over the tone of a singer,” Jain said. “These are basically created by societal values that have been unintentionally applied to the AI, because the creator thought it was the right thing to do.”

Despite these concerns, and though AI may not completely replace human musicians and composers, many still see hope for its future in the music industry.

“I think there’s going to be a lot of positive impacts on music. Composers now actually already use AI software to help them with their practice and give them inspiration. So I feel like in the future, there’s going to be a lot of progressions and tools with AI,” Liu said. “Like with ChatGPT right now, with literature and writing, it’s completely dominating, getting millions of users in three months. And I feel like the same thing will happen to AI music.”