Last November, at the Stockholm University of the Arts, a human and an AI made music together. The performance began with musician David Dolan playing a grand piano into a microphone. As he played, a computer system, designed and overseen by composer and Kingston University researcher Oded Ben-Tal, “listened” to the piece, extracting data on pitch, rhythm, and timbre. Then, it added its own accompaniment, improvising just like a person would. Some sounds were transformations of Dolan’s piano; some were new sounds synthesized on the fly. The performance was icy and ambient, eerie and textural.
This scene, of a machine and human peacefully collaborating, seems irreconcilable with the current artists-versus-machines discourse. You may have heard that AI is replacing journalists, churning out error-riddled SEO copy. Or that AI is stealing from illustrators, who are suing Stability AI, DeviantArt, and Midjourney for copyright infringement. Or that computers are rapping, or at least trying to: the “robot rapper” FN Meka was dropped by Capitol Records following criticism that the character was “an amalgamation of gross stereotypes.” In the most recent intervention, none other than Noam Chomsky claimed that ChatGPT exhibits the “banality of evil.”
These anxieties slot neatly into longstanding concerns about automation: that machines will displace people, or, rather, that the people in control of these machines will use them to displace everyone else. Yet some artists, musicians prominent among them, are quietly interested in how these models might supplement human creativity, and not just in a “hey, this AI plays Nirvana” way. They are exploring how AI and humans might collaborate rather than compete.
“Creativity is not a unified thing,” says Ben-Tal, speaking over Zoom. “It includes a lot of different aspects. It includes inspiration and innovation and craft and technique and graft. And there is no reason why computers cannot be involved in that situation in a way that is helpful.”
Speculation that computers might compose music has been around as long as the idea of the computer itself. Mathematician and writer Ada Lovelace theorized that Charles Babbage’s steam-powered Analytical Engine, widely hailed as the first design for a general-purpose computer, could be used for something other than numbers. In her mind, if the “science of harmony and of musical composition” could be adapted for use with Babbage’s machine, “the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.”
The first book on the subject, Experimental Music: Composition with an Electronic Computer, written by American composer and professor Lejaren Hiller Jr. and mathematician Leonard Isaacson, appeared in 1959. In popular music, artists like Ash Koosha, Arca, and, most prominently, Holly Herndon have drawn on AI to enrich their work. When Herndon spoke to WIRED last year about her free-to-use, “AI-powered vocal clone,” Holly+, she explained the tension between tech and music succinctly. “There’s a narrative around a lot of this stuff, that it’s scary dystopian,” she said. “I’m trying to present another side: This is an opportunity.”