2016 has been a year full of applying artificial intelligence to all kinds of problems. A lot of DJs and music producers are starting to wonder how these technologies could be implemented in their fields. In this article, DJTT's Steven Maude takes a deep dive into current AI music projects, and how they could change the process of music creation in the very near future.
Artificial Intelligence in 2016
From language translation and self-driving cars to beating humans at traditional games and learning to play classic video games, artificial intelligence (AI) is a big deal in computer science right now. Thanks to the large data stores that, for better or worse, technology giants are collecting, and to powerful graphics cards accelerating the math required, we're in a time of rapid progress across diverse fields.
The natural question for DJs and producers: what are the possible implications for AI in music?
Current AI Projects In Music
It’s still early days for music-related AI projects. But big technology names have looked at applying artificial intelligence techniques to music creation. The past year or so has seen several notable announcements:
IBM revealed their IBM Watson Beat project, which they're hoping to give the public access to soon. More recently, Watson Beat collaborated on a chart single, helping to write the lyrics:
Google's DeepMind team, notable for developing AlphaGo, recently showcased a demonstration: they trained a model on samples of classical music to construct new samples, which do, incredibly, sound somewhat musical. Another Google team is making progress on Magenta, a free, open-source tool capable of generating music based on the input data it's fed.
FlowComposer is a tool under development at the Sony Computer Science Laboratory (CSL) Paris, as part of their Flow Machines research project, led by François Pachet. It recently made the press for assisting in the creation of pop songs in collaboration with the composer Benoît Carré. Here's one it wrote in the style of the Beatles:
To learn more about it, I spoke with Dr. Fiammetta Ghedini, press officer for Sony CSL.
How FlowComposer Works: A Mark Of Distinction
How does FlowComposer work? It’s seen as a collaborative tool, where the computer doesn’t replace humans, but takes on the role of a creative partner or assistant.
That’s clear in its present form. It’s a web application, with a score editing workflow demonstrated in this video:
The composer chooses initial options, including a musical catalogue for inspiration, generates some melodic and harmonic sequences, edits them, and then iterates, letting the system fill in the blanks, until a result finally catches their ear.
Behind the scenes, FlowComposer relies on Markov chains. A Markov chain describes a system in terms of states and the probabilities of moving between them. A simple example is the game of Chutes (or Snakes) and Ladders: your next position, or state, depends only on your current position and your next dice roll, whose outcomes have well-defined probabilities.
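To make that concrete, here's a minimal sketch in Python of a first-order Markov chain applied to notes. It's my own toy illustration, not Flow Machines code: it counts which notes follow which in a short melody, then takes a random walk through those probabilities to generate a new phrase.

```python
import random
from collections import defaultdict

# Toy training melody (note names); a real system would learn from a whole
# catalogue of lead sheets rather than a single ten-note phrase.
melody = ["C", "D", "E", "C", "E", "F", "G", "E", "D", "C"]

# Count how often each note follows each other note.
transitions = defaultdict(lambda: defaultdict(int))
for current, following in zip(melody, melody[1:]):
    transitions[current][following] += 1

def next_note(current):
    """Sample the next note with probability proportional to observed counts."""
    counts = transitions[current]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Generate a new phrase: each step depends only on the current note.
note = "C"
phrase = [note]
for _ in range(8):
    note = next_note(note)
    phrase.append(note)

print(" ".join(phrase))
```

Trained on one tiny melody, the output is trivial, but feed the same walk a large catalogue and it starts producing phrases that are statistically plausible for that catalogue's style.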
The Continuator: Musician Mimicry
Markov models can also be extended to incorporate memory of previous states. This was enough for project leader Pachet to build an earlier, successful project: the Continuator:
The Continuator, Ghedini told me, “could mimic style of the musician”. Namely, it records MIDI input, builds a model of the player, and continues to play based on this model, even after the human player stops.
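To illustrate what "memory of previous states" buys you, here's a hypothetical second-order sketch (again my own, not the Continuator's actual implementation): the state is the last two notes played, and the continuation is seeded with the tail of the recorded phrase, so the output picks up where the player left off.

```python
import random
from collections import defaultdict

# Imagine these are MIDI pitches captured from a player's input.
recorded = [60, 62, 64, 60, 64, 65, 67, 64, 62, 60, 62, 64]

ORDER = 2  # state = the last two notes, giving the model short-term memory

model = defaultdict(list)
for i in range(len(recorded) - ORDER):
    state = tuple(recorded[i:i + ORDER])
    model[state].append(recorded[i + ORDER])

# "Continue" the phrase: seed with the player's final two notes, then walk.
state = tuple(recorded[-ORDER:])
continuation = []
for _ in range(8):
    options = model.get(state)
    if not options:
        break  # unseen state; a real system would back off to a lower order
    note = random.choice(options)
    continuation.append(note)
    state = state[1:] + (note,)

print(continuation)
```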
The Continuator is a fascinating and playful jam tool, but as Dr. Ghedini continued, “it was not able to create, for instance, a song with beginning, an end, a bridge”. In fact, music also incorporates a lot of other structures, which systems like the Continuator can’t reproduce.
Adding Rules + Constraints To AI Composition Tools
As project leader François Pachet explains in the talk below (jump to 16:42), music often involves constraints, rules we wish to impose. For instance, we might expect music in a particular key to both start and end on the key note.
A less obvious example is avoiding repetition. A transition between two notes might be particularly favourable according to the model, so perfectly valid output could bounce back and forth between those two notes, but that's unlikely to interest listeners.
Extending Markov models indefinitely to try to accommodate these requirements ultimately becomes problematic. What the team has done is reformulate the problem as one of generating musical compositions that satisfy certain constraints. The Markov model itself becomes just another constraint that generated sequences must satisfy, so it can be combined with all the other desirable requirements.
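The crudest way to combine a Markov model with hard constraints is rejection sampling: generate freely from the model and discard anything that breaks a rule. That becomes hopelessly wasteful as constraints pile up, which is precisely what the Flow Machines reformulation avoids; the toy sketch below (my own example, not theirs) only illustrates how the two kinds of requirement fit together.

```python
import random

# A first-order transition table like the one learned earlier.
transitions = {
    "C": ["D", "E"], "D": ["C", "E"], "E": ["C", "D", "F", "G"],
    "F": ["G"], "G": ["E", "C"],
}

def generate(length):
    seq = ["C"]
    for _ in range(length - 1):
        seq.append(random.choice(transitions[seq[-1]]))
    return seq

def satisfies_constraints(seq):
    # Rule 1: start and end on the key note.
    if seq[0] != "C" or seq[-1] != "C":
        return False
    # Rule 2: avoid tedious repetition, e.g. A-B-A-B bouncing.
    for a, b, c, d in zip(seq, seq[1:], seq[2:], seq[3:]):
        if a == c and b == d:
            return False
    return True

# Rejection sampling: keep generating until a candidate passes every rule.
# Flow Machines instead folds the Markov model into a constraint solver,
# so no samples are wasted.
candidate = generate(8)
while not satisfies_constraints(candidate):
    candidate = generate(8)

print(" ".join(candidate))
```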
Another change from the Continuator, as Ghedini explained further, is that musical style is not applied to a live performer: "it's not in real-time, but it's a style based on a huge database that we are building, so you can choose really a lot of different styles".
Going beyond lead sheets alone, they have a database of recorded audio that can be used to add accompaniment. Using another technology, Rechord, they can intelligently apply existing recorded audio to a lead sheet, using a technique known as concatenative synthesis. One example Ghedini gave: applying sounds from Daft Punk to a lead sheet composed in the style of the Beatles. On the Flow Machines site there's demo audio of music composed in Miles Davis' style combined with guitar from "Get Lucky".
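At its simplest, concatenative synthesis means slicing existing recordings into short units, analysing each one, and then picking, for every note in the target lead sheet, the unit that matches it best. Here's a toy sketch of that selection step, with made-up pitch values standing in for real audio analysis and hypothetical unit names:

```python
# Each unit would really be an audio slice with analysed features
# (pitch, timbre, loudness); here we fake units as (name, pitch) pairs.
units = [
    ("guitar_slice_01", 60), ("guitar_slice_02", 62),
    ("guitar_slice_03", 64), ("guitar_slice_04", 67),
]

target_melody = [60, 64, 65, 67]  # pitches taken from the lead sheet

def best_unit(pitch):
    # Choose the unit whose analysed pitch is closest to the target note.
    return min(units, key=lambda unit: abs(unit[1] - pitch))

# "Render" the lead sheet by concatenating the best-matching slices;
# a real system would also pitch-shift and smooth the joins.
rendered = [best_unit(pitch)[0] for pitch in target_melody]
print(rendered)
```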
New Questions About Music Creation
Having an artificial intelligence program that gets good at writing music opens up a number of questions surrounding creativity:
How Do You Teach AI To Avoid Music Plagiarism?
Since the source musical lead sheets are good examples of the desired output, how do you know the computer isn't just plagiarizing existing works? This is one aspect the FlowComposer team has considered. In the tool, the size of chunks reproduced from the existing data is kept to a minimum: large enough to allow creative expression, but without long sections being reproduced verbatim.
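One mechanical way to enforce such a limit (a hypothetical sketch, not necessarily the team's actual method) is to find the longest chunk of the generated sequence that also appears verbatim in the training corpus, and reject output that exceeds a threshold:

```python
def longest_copied_run(generated, corpus):
    """Longest contiguous chunk of `generated` found verbatim in `corpus`."""
    longest = 0
    for start in range(len(generated)):
        # Only test chunks longer than the best found so far.
        for end in range(start + longest + 1, len(generated) + 1):
            chunk = generated[start:end]
            if any(corpus[i:i + len(chunk)] == chunk
                   for i in range(len(corpus) - len(chunk) + 1)):
                longest = end - start
            else:
                break
    return longest

corpus = ["C", "D", "E", "C", "E", "F", "G", "E", "D", "C"]
generated = ["C", "D", "E", "F", "G", "E", "C"]

MAX_CHUNK = 4  # allow short idiomatic figures, reject long verbatim quotes
if longest_copied_run(generated, corpus) > MAX_CHUNK:
    print("Too close to the source material; regenerate.")
else:
    print("Acceptable level of reuse.")
```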
Given that there are copyright lawyers quoted as saying "copyright law is by far the most metaphysical", it's likely a tough problem even to define what musical plagiarism is.
Avoiding it may also be tricky. Where's the boundary between a novel work (a distinctive hook or melody) and the shared vocabulary of a musical genre (cadences or common chord progressions, which might be of similar length)? Getting an AI system to obey such unspecified rules might be difficult.
Who Gets Credit For Writing AI Music?
Creative attribution is another issue. Although FlowComposer aims to augment, not replace, the human composer, the examples from DeepMind show that in the future you might not need much musical talent at all: just select a dataset and generate music.
When the ideas are built on a dataset of others' work and fed through software coded by someone else, does the end user even have a real creative role?
Using Existing Music To Generate New Music
There are questions over the reuse of existing music as input. Is this fair use? In a way, this reuse of music is a kind of sampling, albeit one where you're bottling the essence of compositions rather than directly reusing audio. In large music databases, individual pieces are like drops of water, almost insignificant, and their traces in the output may be almost undetectable.
It's the combination of all those works that fills the well the output is drawn from; without those constituents, you wouldn't get the same final result.
A harbinger of the storms that may come in this field is the case of book authors who had made their work freely available, but objected to Google using it in an artificial intelligence project.
There is one application where worries over ownership largely evaporate: when composers load in their own work and use the system to compose in their own style. It's an idea, Dr. Ghedini told me, that several artists have tried out of curiosity, and they have often been impressed. If you're suffering creative block and have an existing body of work, you can use your own compositions to kickstart a new one. Or maybe try melding your ideas with your selection of renowned musicians…
What might this mean for producers and DJs?
The Flow Machines team has focused on working with traditional songwriters and composers, not dance music producers. Since dance music is usually simpler and more structured than, say, classical or jazz, I felt this kind of technology would be directly applicable.
Dr. Ghedini quickly observed something missing in my argument: production is arguably a much more important part of dance music than of other genres. (Try looking for a MIDI version of your favorite dance track and you'll probably agree.) That production information only exists in hardware presets and programs, or in digital audio workstation project files, not in lead sheets.
Even as it stands, FlowComposer, and tools like it, might be very useful in a traditional songwriting process, inspiring melodies or chord progressions before a producer steps in to flesh out the production. With tools like DeepMind's WaveNet, it might even be possible to skip lead sheets altogether and generate entirely new audio from existing recordings.
Could AI Help Arrange DJ Setlists?
Playlisting DJ sets is another area that might see AI assistance in the future. This isn't something the Sony team has tackled, but organising a DJ playlist is again a musical structuring problem, albeit one at a higher level: not notes and chords, but full musical pieces.
A University of Texas at Austin group used online mixtape listings as a data source, modeled track selection with a Markov approach (this time, a Markov decision process), and was able to adapt selections based on listener feedback. It's not much of a stretch to suggest that DJ set lists could be constructed with a similar approach.
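Here's a toy sketch of that idea (my own illustration, not the Austin group's code): treat the current track as the state, candidate next tracks as actions, and nudge transition scores up or down as crowd feedback arrives.

```python
import random
from collections import defaultdict

tracks = ["opener", "builder", "peak", "cooldown"]

# How well track B follows track A. Scores start neutral and are adjusted
# by listener feedback, a crude stand-in for the reward signal in a
# Markov decision process.
score = defaultdict(lambda: 1.0)

def pick_next(current, played):
    candidates = [t for t in tracks if t not in played]
    weights = [score[(current, t)] for t in candidates]
    return random.choices(candidates, weights=weights)[0]

def feedback(previous, chosen, liked):
    # Reinforce transitions the crowd enjoyed; dampen the ones they didn't.
    score[(previous, chosen)] *= 1.5 if liked else 0.5

# Build a short set, pretending the crowd rewards rising energy.
energy = {"opener": 1, "builder": 2, "peak": 3, "cooldown": 1}
setlist = ["opener"]
while len(setlist) < len(tracks):
    nxt = pick_next(setlist[-1], setlist)
    feedback(setlist[-1], nxt, liked=energy[nxt] >= energy[setlist[-1]])
    setlist.append(nxt)

print(" -> ".join(setlist))
```

Run this over many nights of feedback and the scores drift toward the transitions the crowd consistently rewards.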
2017 Could Be The Year Of AI In Music Production
There’s clearly a trend here. If these tools are packaged up in the right way, maybe as a plugin or integrated into a digital audio workstation, it seems an intriguing, and realistic, possibility that they could find their way into the hands of producers everywhere. This could enable musical ideas to be conceived in a very different fashion. Certainly, the Sony team have had positive feedback in this regard: musicians often tell them that the Flow Machines tools helped them to break free of the limits and styles they’d often set for themselves.
In 2016, a year that witnessed the passing of many hugely influential musicians, there's perhaps some comfort in the idea that some essence of their style might live on for others to use as direct inspiration in their own work.