Recently, there has been a lot of buzz in the media about AI and its influence on the way we create content.
As a content creator, I am keenly aware of how the rise of AI will alter both the immediate and the long-term future, not just of the job that I do but of the entire world.
ChatGPT is the hottest thing right now. With Google releasing Bard, the AI it has been working on for a long time, on top of all of the other AI writing tools, I started to wonder what AI will do for musicians and the creation of music in the future.
AI For Writing
Just to give some context, before I go into my theories and hypotheses about AI for music creation, I want to highlight the two areas that are hitting the media right now.
The first is AI in writing.
ChatGPT has done a great job of presenting itself as an intelligent and, on the whole, accurate AI engine that can create content capable of fooling most people into believing it was written by a human.
ChatGPT takes all of the data and information in the world that it can process and assimilate, and then teaches itself, with the help of crafty engineers, to output writing that is as close to human output as possible while still being unique.
It’s almost like it paraphrases everything in the world to create unique content for whatever you ask it.
I recently asked it to create a story for my kids about a crocodile and a tiger that play football and have a great time doing it. And ChatGPT spat back a short story, which was beyond impressive.
The story included sentiment, a happy ending, a challenging game, and a good outcome overall, with even a small moral that made a positive impression on my children.
AI and Art Creation
It’s now also possible to have AI create art for you if you give it some guidelines and parameters.
It can create unique art that is an amalgamation of everything it has learned from around the world, and at times it can create some really good-looking stuff.
The story of the father who created a picture book using ChatGPT and art AI received a lot of criticism. But I think the people banging their pots, upset about AI plagiarizing their work, won't be able to stand up to the wave of adoption that AI is going to experience moving forward.
Loads of AI bots will ultimately be used in pretty much every profession, from accounting to law to teaching to programming and more.
We already use AI; it's already built into a lot of the tools we rely on.
It's just that we're now getting to the point where AI doesn't only improve what we do, it's beginning to create. That's a level of AI we haven't seen before, where a relatively autonomous computer can produce art, writing, design, and more.
One day, I expect I'll ask a computer to design my ultimate four-bedroom home with a sleepout and pool. I'll sit there telling it what I want the design to look like and keep altering the dimensions of the house until I'm happy.
I expect that this house will be designed to follow local government guidelines and be safe to inhabit.
I can imagine a computer spitting out building plans, which I’ll give to my builder, who’ll probably tell his robots one day to build for me. It’s incredible to think how this is going to change the world.
But enough about AI in general; lately I've been thinking about how music will be changed by AI.
Music Creation With AI
Here’s something that I thought of recently.
For content creators, and human beings in general, to stand apart from AI, we have to look at the arts to understand where we differ from something that isn't sentient and, at this stage, can't create anything beyond what it has already learned.
For example, playing guitar with feel and style is something that I think will still be very difficult for AI to achieve.
But I think that creating tracks with music will one day be as simple as saying to a computer in the studio, “Hey computer, start me a four-on-the-floor beat for 12 bars with a one-bar fill going into four beats of halftime, using a snare, kick drum, splash cymbal, and hi-hats, and in the one-bar fill, I want a tom roll.”
This may be a very primitive and rather clunky example, but I do think that if the AI understands the basic language of music, then we'll be able to ask it verbally to create things.
If you're a musician who reads music, then perhaps you could describe to the computer the kind of music you want, and it would either notate that or lay it out as a MIDI pattern.
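To make that slightly less abstract, here's a minimal sketch, in Python with the mido library, of what a machine might generate once it had parsed a request like that into bars and drum hits. It skips the halftime section, the note numbers follow the General MIDI percussion map, and the tempo, velocities, and shape of the tom fill are purely illustrative guesses; this isn't a real product, just a picture of how simple the underlying MIDI representation could be.

```python
# Toy sketch: render "a four-on-the-floor beat for 12 bars with a one-bar tom
# fill" as a MIDI file. Assumes the spoken request has already been parsed
# into this structure; mido (pip install mido) is the only dependency.
from mido import Message, MetaMessage, MidiFile, MidiTrack, bpm2tempo

TICKS = 480                                  # ticks per quarter note
KICK, SNARE, HIHAT, SPLASH = 36, 38, 42, 55  # General MIDI percussion notes
TOMS = [50, 48, 47, 45]                      # high -> low toms for the fill

events = []  # (absolute_tick, midi_note, velocity) -- one entry per drum hit

for bar in range(12):
    bar_start = bar * 4 * TICKS
    if bar < 11:                             # bars 1-11: four on the floor
        for beat in range(4):
            t = bar_start + beat * TICKS
            events.append((t, KICK, 100))    # kick on every beat
            events.append((t, HIHAT, 80))    # hi-hat on every beat
            if beat in (1, 3):
                events.append((t, SNARE, 105))  # backbeat snare on 2 and 4
    else:                                    # bar 12: a simple descending tom roll
        for i in range(8):                   # eighth-note toms
            t = bar_start + i * (TICKS // 2)
            events.append((t, TOMS[i % 4], 110))
events.append((12 * 4 * TICKS, SPLASH, 120)) # splash cymbal to cap the fill

# Convert the absolute hit times into note_on/note_off messages with delta times.
mid = MidiFile(ticks_per_beat=TICKS)
track = MidiTrack()
mid.tracks.append(track)
track.append(MetaMessage("set_tempo", tempo=bpm2tempo(120)))

GATE = 60                                    # how long each hit is held, in ticks
msgs = []
for t, note, vel in events:
    msgs.append((t, Message("note_on", channel=9, note=note, velocity=vel, time=0)))
    msgs.append((t + GATE, Message("note_off", channel=9, note=note, velocity=0, time=0)))
msgs.sort(key=lambda pair: pair[0])

now = 0
for t, msg in msgs:
    msg.time = t - now                       # mido expects delta ticks between events
    now = t
    track.append(msg)

mid.save("four_on_the_floor_sketch.mid")
```

The hard part, of course, isn't writing the MIDI file; it's getting from the spoken sentence to that structure, and that's exactly the bit I expect AI to take over.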
Perhaps after the initial recording, you can do things like ask for an additional track with a ride cymbal being hit at quarter notes on bars one and three.
Perhaps you could say, “Add a crescendo to all tracks in the recording at bar five, and add a reverse snare to the rest in bar 16.”
I guess what I'm trying to illustrate here is that, just by imagining the music you want to create, you could probably talk to a computer that has enough AI capability to fill in the gaps in what you're asking and still come up with what you need.
Then you’d be able to verbally edit the music as you’d like without having to touch anything.
This means that after a couple of hours of sitting in front of a computer and talking to it, you could probably come up with a complete band track using any instrument that you want, without even having to play it.
You could lay out your entire song, and then ask for an anacrusis or drop-in in the first bar. You could say, “Drop all instruments in bar 16, and do a deep beat drop on the fourth beat of that bar, add a light keys pad underneath bar 24.”
The possibilities for editing like this are unlimited if we get AI and music to that point.
For me, it's not too different from the way we produce music now. So much of it is already done in a studio on a keyboard, perhaps with a drum pad for triggering different elements of the tracks, and even with AI assisting in mastering.
So, let’s imagine that we’ve just created a cool song. I don’t know how long it’ll be before AI can sing, but let’s assume that’s a little bit further down the road because playing any instrument with style and originality will be a lot harder for AI to recreate.
But if we can make deepfake videos and voice audio, then perhaps, in time, there will also be deepfake singing and vocal tracks. Then the next step is mixing and mastering.
So then we could be playing back a track, whether we created it or not; perhaps it was even made by acoustic and analog musicians recording their individual parts into a song. Let's say we end up with about 12 to 16 tracks that we now need to mix and master.
For mixing and mastering, it could be that AI gets to the point with voice recognition where we can sit in the studio and say, “Play the track,” and then simply sit there and say, “Add a little bit more 6 kHz to the vocal track.”
Or it could be something as vague as, “Add a little bit more impact to the kick drum, or make the guitar a little bit edgier.”
These are very subjective statements but it’s not to say that AI won’t eventually be able to do this in the studio.
You're still using your ears to mix and master, but the computer is doing all the heavy lifting, using what it has learned to interpret these vague statements and produce an outcome that's close to what you want.
“Bring the rhythm guitar up in the mix a little bit between 2:35 and 2:58. Increase the lead guitar in the mix during the solo, and then bring it back down to its previous dB level.”
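A couple of those spoken requests actually map onto well-known signal-processing recipes once something has interpreted them. Here's a rough sketch in Python (numpy and scipy assumed, with noise standing in for the real tracks) of what “a little more 6k on the vocal” and “bring the rhythm guitar up between 2:35 and 2:58” might translate to. The +3 dB, +1.5 dB, and Q values are just guesses at what “a little bit” means, which is exactly the interpretive gap the AI would have to fill.

```python
# Toy sketch of what the machine might do with two of those spoken requests,
# once parsed: a gentle +3 dB peaking EQ at 6 kHz on the vocal, and a +1.5 dB
# ride on the rhythm guitar between 2:35 and 2:58. Tracks and amounts are
# illustrative guesses; numpy and scipy are the only dependencies.
import numpy as np
from scipy.signal import lfilter

def peaking_eq(samples, fs, f0, gain_db, q=1.0):
    """RBJ 'Audio EQ Cookbook' peaking filter: boost/cut a bell around f0."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = [1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a]
    d = [1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a]
    b = np.array(b) / d[0]                   # normalise by a0
    d = np.array(d) / d[0]
    return lfilter(b, d, samples)

def ride_gain(samples, fs, start_s, end_s, gain_db, fade_s=0.05):
    """Raise a section by gain_db, with short fades so the change doesn't click."""
    gain = np.ones(len(samples))
    lift = 10 ** (gain_db / 20)
    i0, i1 = int(start_s * fs), int(end_s * fs)
    ramp = int(fade_s * fs)
    gain[i0:i1] = lift
    gain[i0 - ramp:i0] = np.linspace(1.0, lift, ramp)   # fade the boost in
    gain[i1:i1 + ramp] = np.linspace(lift, 1.0, ramp)   # and back out
    return samples * gain

fs = 48_000
vocal = np.random.randn(fs * 185) * 0.1          # stand-ins for real track audio
rhythm_guitar = np.random.randn(fs * 185) * 0.1

vocal = peaking_eq(vocal, fs, f0=6_000, gain_db=3.0)              # "more 6k"
rhythm_guitar = ride_gain(rhythm_guitar, fs,                      # "bring it up
                          start_s=2 * 60 + 35, end_s=2 * 60 + 58, #  between 2:35
                          gain_db=1.5)                            #  and 2:58"
```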
In terms of accessibility, these kinds of functions and features would allow people with disabilities to create whatever they want without needing to be on the instruments or even in the studio.
To have an incredible musical ear as a deaf person and be able to sit in a studio and almost dictate to the music system how you'd like the music to be mixed and mastered would be an incredible thing. And that's just one example.
Or it could all be a disaster.
Maybe we'll never end up with the ability to make the incredibly fine, micro-nuanced changes we can make in the studio today with just the lightest bump of a fader.
But I think it's not only possible but probable that this is the direction we're heading in, even if it's just adding voice controls to mixing desks so we can say, “Increase the guitar fader by 0.2,” or similar.
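Even that modest version is mostly a parsing problem. Here's a toy sketch of what a voice command like that might reduce to behind the desk, with the phrase format and channel names entirely made up for the example:

```python
# Toy illustration of "voice controls on a mixing desk": turn a recognised
# phrase into a structured fader move a console could execute.
import re
from dataclasses import dataclass

@dataclass
class FaderMove:
    channel: str
    delta_db: float          # positive = up, negative = down

PATTERN = re.compile(r"(increase|decrease) the (\w+) fader by ([\d.]+)", re.I)

def parse_command(text: str) -> FaderMove:
    m = PATTERN.search(text)
    if not m:
        raise ValueError(f"Didn't understand: {text!r}")
    sign = 1.0 if m.group(1).lower() == "increase" else -1.0
    return FaderMove(channel=m.group(2).lower(), delta_db=sign * float(m.group(3)))

print(parse_command("Increase the guitar fader by 0.2"))
# FaderMove(channel='guitar', delta_db=0.2)
```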
So if AI really is taking over the world and it’s happening faster than we realize, then perhaps it’s only a matter of time before things like music creation, mixing, and mastering are handed over to the robots, even if all they’re doing is interpreting our commands.
If you crawled through Spotify, Apple Music, Amazon Music, Deezer, and TIDAL for long enough, I think you'd find that their algorithms have already established the most popular frequency responses and mastering styles that people like listening to.
It's not just the hook, the line, the crazy track, or the deep drop in the beats; it's also how it sounds when it leaves the mastering desk.
Those are my thoughts. I’d be interested to hear yours below.
Endless hours of experimentation, professional work, and personal investment in Home Theatre, Hi-Fi, Smart Home Automation and Headphones have come to this.
Former owner of Headphones Canada, a high-end headphone specialty retailer.