After years of writing about it, I've watched the AI-music debate finally enter the mainstream consciousness. Conversations center on the increasing role of technology in the arts, the sensitivity of leaky IP rights, and the threat AI inevitably presents to creatives.

Most technologists seem to think AI could one day replace musicians. I'm here to tell you why that will never happen.

Busy? Try the speed read.

The scoop
Tech companies can use AI to compose new songs using existing datasets of music. This poses a serious threat to musicians and artists. In the spirit of Grimes, let's not ignore it. Let's talk about it freely and transparently.

About the tech

AIVA Technologies, based in Luxembourg, is one of the older players in the space. They created an AI tool that composes music for movies, commercials, games, and TV shows.

OpenAI’s MuseNet allows users to generate genre-specific music. You can look up an artist and select a genre; theoretically, the tool fuses that artist's style with the selected genre.

Hologram tours are becoming increasingly popular. Eventually, using AI composition tools and hologram tech, deceased artists will be able to tour new music... and it will be hard to tell the difference from a standard pop concert.

VOCALOID is voice-synthesis software that lets users create 'virtual pop stars', which are already hugely popular in Asia.

Other voice-synthesis tools let you imitate famous voices and generate whatever output you'd like. Copyright law hasn't caught up to this deepfake dystopian reality, so feel free to go make Jay-Z say whatever you want.

Soundful is an easy-to-use generative AI tool that allows you to create royalty-free music in seconds.

Humans > robots ... for now
At least for the foreseeable future, AI is incapable of creating music without mimicking an existing data set that originated from human innovation. Similar to the way AIVA pitched their product, Artificial Intelligence can be used to help the artist speed up and maximize the composition process. It should be treated as a tool, not a replacement.

Zoom out
There will always be a place for bipedal fleshbags in the arts. With or without AI in music. Why? Because the consumers of creation are also fleshbags, and we want to be wowed and wooed by the hairy, smelly creatures that feel and squeal just like we do.

Computers making music with AI

If you didn’t already know, computers know how to make music. In fact, computers knew how to sing 50 years ago.

Today, AI can write full compositions. Even as a musician, I admit it's hard to tell the difference between your favorite composer and a well-funded software tool. And I’m not talking about an EDM instrumental track.

We’re talking about a brand new, original Beethoven symphony or another Beatles album. These machines use historical data sets and neural networks to recognize patterns and produce novel compositions.

A new Beatles album?

If we input thousands of hours' worth of audio tapes from Lennon, Harrison, McCartney, and Starr into a computer, the AI could shoot out its very own Beatles album like it was 1967 again. That includes lyrics and song titles, which are their own data set of words and replicable patterns, just like music notes.
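To make that concrete, here is a toy sketch in Python. It is my own illustration, not how MuseNet, Jukebox, or AIVA actually work: a first-order Markov chain that counts which word follows which in a tiny, made-up lyric corpus, then samples new lines from those counts. Real systems use neural networks over vastly more data, but the mimic-the-patterns principle is the same.

```python
import random
from collections import defaultdict

def train_markov(lines):
    """Count which word follows which across a corpus of lyric lines."""
    transitions = defaultdict(list)
    for line in lines:
        tokens = line.lower().split()
        for current, nxt in zip(tokens, tokens[1:]):
            transitions[current].append(nxt)
    return transitions

def generate(transitions, seed, length=8):
    """Walk the chain: each word is drawn from what followed it in training."""
    out = [seed]
    for _ in range(length - 1):
        followers = transitions.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# A hypothetical miniature "data set" (a stand-in for those thousands of hours of tape).
corpus = [
    "love me do you know i love you",
    "all you need is love love is all you need",
    "i want to hold your hand",
]

model = train_markov(corpus)
print(generate(model, seed="love"))  # e.g. "love love is all you need is love"
```

Swap the words for note or chord symbols and the same loop sketches a melody generator. Either way, everything the machine produces hinges on patterns already present in the training data.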

Every song, every composition, no matter how innovative or genius it is, has a recognizable pattern. What do they all have in common? A rhythm, a melody, an accompaniment, vocals, instruments. In fact, most modern songs even share a common structure:

Intro, verse, pre-chorus, chorus (or refrain), verse, pre-chorus, chorus, bridge (“middle eight”), verse, chorus, and outro.

Pick your favorite pop or rock song off your shuffle playlist, and it is more than likely that your song follows that very structure.
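As a toy illustration (Python again; the section labels, the helper, and the example song are all hypothetical), that template is regular enough to write down as a plain list and check a song against:

```python
# A common pop/rock template, encoded as a plain sequence of section labels.
POP_TEMPLATE = [
    "intro", "verse", "pre-chorus", "chorus",
    "verse", "pre-chorus", "chorus",
    "bridge", "verse", "chorus", "outro",
]

def matches_template(sections, template=POP_TEMPLATE):
    """True if a song's section list follows the template exactly."""
    return [s.lower() for s in sections] == template

# A hypothetical track off your shuffle playlist:
my_song = ["Intro", "Verse", "Pre-Chorus", "Chorus",
           "Verse", "Pre-Chorus", "Chorus",
           "Bridge", "Verse", "Chorus", "Outro"]

print(matches_template(my_song))  # True
```

The point is not that the check is clever; it is that once a structure is this regular, a machine can represent it, count it, and reproduce it.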

Similarly, classical and jazz music have their own unique patterns and structures. A computer can process and form outputs for those genres too.

Let’s look at some examples.

AIVA Technologies: AI music composition

AIVA Technologies, based in Luxembourg, created an AI that composes music for movies, commercials, games, and TV shows.

OK, now there is this other technology that uses speech synthesis based on audio inputs (just like music) to mimic someone’s voice. And pretty much any Average Joe with enough time on their hands and a laptop can do it.

So, if we input voice data from The Notorious B.I.G. and Tupac, we can make a new song using their voices.

You just give the machine some lyrics to read from, and we could make Jay-Z rap the Bible. Or we could combine our favorite White Stripes lyrics with legendary rappers' voices, as the YouTube channel Vocal Synthesis has done.

The tech moves too fast

Copyright law hasn’t caught up to this new technology.

Back in April, YouTube removed a Jay-Z video from Vocal Synthesis after a copyright-infringement claim put forth by Roc Nation LLC. But Vocal Synthesis struck back, arguing that it wasn’t a copyright violation at all.

Shockingly, YouTube reversed the decision and let the video stay up. Go ahead and make Jay-Z’s deep-fake voice say whatever you want. For now, there is nothing he can do about it.

Let’s use our imagination here.

Using holograms, voice synthesis, and AI music-composition tools, we could drop a new Beatles album, tour it, and put up holograms that look like the original young stars. It would be damn hard for even the biggest Beatles fan to tell the difference from an old-man $250 Rolling Stones concert.

This is where folks start suggesting that robots will take over music and we measly human musicians are doomed forever. I’m here to tell you why that won’t happen.

AI mimics, humans imagine

At least for the foreseeable future, AI is incapable of creating music without mimicking an existing data set that originated from human innovation.

Perhaps, at some point, you'll be able to input tens of thousands of hours' worth of music and get something entirely original. In other words, Artificial Intelligence would create something entirely groundbreaking in the arts, independent of human creativity.

But as the technology exists today, AI in music is limited by the genres, keys, and time signatures that define its rules. Sure, OpenAI's Jukebox can bust out a Luke Bryan hit single in 30 seconds, but that hit single is based on the previous works of Luke Bryan. Artists don’t produce new music like that.

Finding inspiration in experiences

Consider the Beatles after India, the Grateful Dead after Egypt, or even the dark influence of heroin over John Coltrane. Those human experiences instigated new definitions and new perspectives for their music. Humans — musicians — artists are not linear thinkers. We derive inspiration from current and past events, abstract ideas, and unexplainable moments where words of poetry and musical phrases scream at our minds.

You can’t input “traditional Indian folk music” and “Revolver” (the album that preceded the Beatles’ trip to India) and expect a “Within You Without You” Beatles masterpiece.

Pink Floyd wrote “Shine On You Crazy Diamond” after founding member Syd Barrett lost his mind to LSD and schizophrenia in the 1960s. Bob Dylan wrote his brilliant “Masters of War” in response to the Cold War arms build-up, and “Blowin’ in the Wind” in response to the civil rights movement, with the powerful line, “How many roads must a man walk down, before you call him a man?” The list goes on and on.

In contrast, new music produced with AI would use “Blowin’ in the Wind” as its source of inspiration, rather than using a life experience to spark new ideas.

Those experiences could be something as deep as losing a parent or as simple as walking past a cute girl. Those are the moments that manifest history’s greatest ballads.

Improvisation sweats and breathes

No one is looking to pay money, as far as I know, to watch a live 26-minute robot rendition of “Scarlet Begonias / Fire on the Mountain.” But 43 years later, 3.5 million people still want to listen to this one.

There is something magical and timeless about improvisation in music. Even when mistakes are made, it somehow amplifies the fan-musician relationship into a deeper, more intimate understanding.

Jerry Garcia was well known for forgetting lyrics, cracking his voice, or missing notes.

Some of the Dead’s most famous shows came on nights when Garcia was too sick to sing, so he had to jam extra well to make up for his lost voice. Those moments are inherently human. Computer music could never be as authentic and clumsy as its flawed anthropomorphic alternative.

Come to think of it, just last week my guitarist broke a string during an outdoor gig. We laughed, and the crowd cheered. Isn’t that the beauty of it all?

A deeper meaning for music

Creativity stems from inspiration; it is not programmed or confined by the boundaries of linear thinking.

Live musical performances, for example, are shaped by the present moment in which the music is produced. The creative expressions put forth by a collection of human-manipulated instruments depend on the bad food we ate that morning, the nasty fight we got into the night before, the painful sunburn we got that afternoon, the family tragedy we faced that year.

All of those loves, losses, break-evens, and wins are scrambled together into one medley outburst of raw emotional creativity. No code could match the unpredictable interface between an artist’s heart, mind, and soul.

Let’s look at jazz: the archetype of human improvisation, a genre built entirely on impromptu creativity. John Coltrane, the legendary saxophonist who battled drug addiction for years, performed “My Favorite Things” differently every time.

But any musician who has performed in a band understands that the varying directions of his saxophone solos were based on the drummer’s fills, the bassist’s grooves, or the pianist’s chosen melodies.

It all sort of happens in one simultaneous telepathic “this is how we feel right now” that constantly changes with each player’s tap, hit, blow or pluck.

Using AI to amplify music

Similar to the way AIVA pitched their product, Artificial Intelligence can be used to help the artist speed up and maximize the composition process.

If I want to write a folk-electronic song (think Mumford & Sons + Avicii), I could plug in what that blend might sound like, and the machine would give me a great foundation and source of inspiration that simply didn't exist before this technology. Apply the same philosophy to writing lyrics.

In a commercial sense, business execs can rely on a machine to generate a catchy jingle. Yet, do decision-makers know what jingles work best for their business?

A professional musician can then push an AI-generated song a step further, adding that much-needed hot sauce to an otherwise dry burger.

There will always be a place for bipedal fleshbags in the arts. With or without AI in music. Why? Because the consumers of creation are also fleshbags, and we want to be wowed and wooed by the hairy, smelly creatures that feel and squeal just like we do.