AI is yet to find its place in music production — but it’s only a matter of time...
Miquela Sousa, who goes by the nom d’artiste Lil Miquela, has everything a record label might want in a young recording artist. At 20 or so (her Wikipedia page has her born in 1997), the Brazilian is a model who lives in Los Angeles, has tracks trending on Spotify, and boasts over 600,000 Instagram followers. She’s also not human — not that you could tell from the extreme vocal tuning on her lyrically clever mid-tempo ballad ‘Not Mine’, which would barely raise an eyebrow among most listeners now. But otherwise she’s about as real as anyone else with those specs. And she might be your next client.
AI enables Lil Miquela to respond to posts on her social media feeds, and she’s as happy as Dua Lipa, Rita Ora or Grimes to sell you merch online. And while it’s clear that Lil Miquela is a figment of CGI at the moment, she also presages another era in music production: one in which AI is a fully fledged collaborative partner, or actually becomes the process itself, with little or no human input into the mechanics of creativity.
Lil Miquela is the logical outcome of a process that’s already under way. It began in labs, with projects like IBM’s Watson Beat, but quickly migrated to desktops and laptops in tools such as Google’s open-source Magenta — built on the same deep-learning research that produced DeepMind’s AlphaGo — which used samples of classical music as raw material for new compositions and productions. Popgun, ALICE, JukeDeck and a small horde of other AI-based interactive compositional programs are already at work writing songs. LANDR, CloudBounce and other AI programs are mixing and mastering them.
Amper Music last year released an LP entitled I AM AI, which they called the first album entirely composed and produced by an artificial intelligence. (Although they generously added that “the process of releasing AI music has involved humans making significant manual changes — including alteration to chords and melodies — to the AI notation.”) Spotify, Pandora and other streaming distribution platforms are using AI and its older cousin, collaborative filtering, to determine what gets added to the playlists that now dominate what gets heard. In what might become a viable operational model for collaboration between humans and AI, Spotify last year hired the founder of Flow Records, a label created to produce music composed with AI and home to ‘artist’ Skygge — actually a combination of French producer Benoit Carré and the AI program called Flow-Machines — to advise the streaming company on AI’s place in music.
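For readers curious what that “older cousin” actually does, here is a toy sketch of item-based collaborative filtering — listeners who played track A also played track B, so recommend B. All user names, track names and play counts below are invented for illustration; this is the textbook technique, not Spotify’s actual system.

```python
from math import sqrt

# Toy listening data: user -> {track: play count}. Entirely invented.
plays = {
    "ana":  {"trackA": 5, "trackB": 3},
    "ben":  {"trackA": 4, "trackB": 4, "trackC": 1},
    "cara": {"trackB": 2, "trackC": 5},
}

def cosine(a, b):
    """Cosine similarity between two sparse {key: value} vectors."""
    shared = set(a) & set(b)
    num = sum(a[k] * b[k] for k in shared)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def recommend(user):
    """Rank tracks the user hasn't played by similarity to ones they have."""
    # Invert to track -> {user: plays} so tracks can be compared directly.
    by_track = {}
    for u, tracks in plays.items():
        for t, n in tracks.items():
            by_track.setdefault(t, {})[u] = n
    heard = plays[user]
    scores = {}
    for t in by_track:
        if t in heard:
            continue
        # Score each unheard track by its similarity to the user's history.
        scores[t] = sum(cosine(by_track[t], by_track[h]) for h in heard)
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("ana"))  # -> ['trackC']: ana hasn't played it, but ben has
```

No neural network required — which is exactly why playlisting was automated years before composition was.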
Algorithmic audio mixing isn’t difficult. In closely controlled live-sound environments, like corporate boardroom meetings or panels at conferences and conventions, automated mixers like the Dan Dugan Speech System can sense when a mic is not being used and mute it, and can dynamically balance several microphones at once when a number of participants are speaking simultaneously. But it’s far from creating anything memorable yet. More sophisticated AI uses a neural network that can learn tasks by identifying patterns in large pools of data; the same techniques are now being trained to automatically recognise faces and objects, identify commands spoken into smartphones, and translate from one language to another.
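The gain-sharing idea behind such automixers fits in a few lines: each channel’s gain is its share of the total input level, so the summed gain stays constant, idle mics fall away, and simultaneous talkers split the difference. This is a deliberate simplification — the real Dugan Speech System adds envelope smoothing, thresholds and per-channel weighting — but the core principle looks like this:

```python
# Minimal sketch of Dugan-style gain sharing (simplified; the shipping
# product adds attack/release smoothing and per-channel weights).

def automix_gains(levels, floor=1e-9):
    """Given per-mic envelope levels (linear, >= 0), return per-mic
    linear gains proportional to each mic's share of the total.
    The gains always sum to 1.0, so the overall level stays constant
    no matter how many mics are open."""
    total = sum(levels)
    if total <= 0:
        total = floor
    return [lvl / total for lvl in levels]

# One person speaking into mic 1, three mics picking up room noise:
# the active mic gets nearly all the gain, the rest are ducked.
print(automix_gains([0.8, 0.01, 0.01, 0.01]))

# Two people talking over each other at similar levels share the
# gain roughly equally instead of both being pushed to full.
print(automix_gains([0.5, 0.5, 0.01, 0.01]))
```

Run per audio block against smoothed input envelopes, that one division per channel is the whole trick — which is why this kind of automation arrived decades before anything resembling a creative mix decision.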
Much of what’s happening in the algorithmically propelled music production space is focused on what’s known as production music — near-generic soundtrack-type stuff used to score the millions of home-made YouTube videos out there by auteurs aware enough not to simply appropriate copyrighted tracks. It already reaches a standard that music industry consultant Mark Mulligan regards as ‘sufficient’, in terms of sonic quality if not artistic excellence. It’s not quite self-awareness, but it can’t be long before it creates (and I stopped and thought for a moment after typing that particular verb) a compelling production of an aesthetically pleasing song. Given that this generation of corporations is programmed to eliminate human cost and frailty at every opportunity, and that Roy Orbison, Elvis, Billie Holiday and Ronnie James Dio are currently on tour (their holograms, anyway), be afraid, be very afraid. Think about that next time Alexa or Siri decides what you listen to next. This is not the shuffle function on your CD player.
Elon Musk has said AI is more dangerous than North Korea and that it is “a fundamental risk to the existence of human civilisation”. Stephen Hawking stated ominously that “AI could be the worst event in the history of our civilisation”. Those observations may be a bit overwrought, but if you’re an Uber or Lyft driver, you’re probably already anxious about your economic future. You now have way more reason to be nervous, since on-demand rides will start becoming driverless in the next few years. AI is at the root of that and scores of other disruptions to come, and music production will not be an exception.
The Silicon Valley voices behind everything from self-driving cars to autonomous delivery drones like to coo that the disruption they bring won’t hurt the workers already in those sectors. That’s right up there with “the cheque is in the mail”. Popgun’s startup team, mainly composed of software engineers, contends that the goal of their work is to foster collaboration and “not replace human abilities”. That’s what they told elevator operators once, too.
The beauty of the last 60 years of record production was that it created music that people didn’t know they wanted. What an AI-fuelled future of music production will likely do is give people exactly what they want, before they know it, just as the data that Amazon derive from their online portals lets them give their shoppers just what they’re looking for. I’m not sure which scenario will make people happier. Serendipitous discovery, in music or in love, may be a luxury affordable when people feel good about the world around them. When you’re wondering if nuclear war is once again an actual possibility, you may want to go with the sure thing. Who could blame you?