Artificial Intelligence In Music Production: What Does It Mean For Artists?

2016 has been a year full of applying artificial intelligence to all kinds of problems, and many DJs and music producers are starting to wonder how these technologies could be put to work in their fields. In this article, DJTT’s Steven Maude takes a deep dive into current AI music projects and how they could change the process of music creation in the very near future.

Artificial Intelligence in 2016

From language translation and self-driving cars to beating humans at traditional games and learning to play classic video games, artificial intelligence (AI) is a big deal in computer science right now. Thanks to the large data stores that, for better or worse, technology giants are collecting, and to powerful graphics cards accelerating the math required, we’re in a time of rapid progress across diverse fields.

The natural question for DJs and producers: what are the possible implications of AI for music?

Current AI Projects In Music

It’s still early days for music-related AI projects. But big technology names have looked at applying artificial intelligence techniques to music creation. The past year or so has seen several notable announcements:

IBM revealed their IBM Watson Beat project, which they’re hoping to give the public access to soon. More recently, Watson Beat collaborated on a chart single – helping to write the lyrics:

Google’s DeepMind team, notable for developing AlphaGo, recently showcased a demonstration in which they trained a model on samples of classical music to construct new ones, which do, incredibly, sound somewhat musical. Another Google team is making progress on Magenta, a free and open-source tool capable of generating music based on the input data it’s fed.

FlowComposer is a tool currently being developed at the Sony Computer Science Laboratory (CSL) Paris, as part of their Flow Machines research project led by François Pachet. It recently made the press for assisting in the creation of pop songs in collaboration with the composer Benoît Carré. Here’s one it wrote in the style of the Beatles:

To learn more about it, I spoke with Dr. Fiammetta Ghedini, press officer for Sony CSL.

How FlowComposer Works: A Mark Of Distinction

How does FlowComposer work? It’s seen as a collaborative tool, where the computer doesn’t replace humans, but takes on the role of a creative partner or assistant.

That’s clear in its present form. It’s a web application, with a score editing workflow demonstrated in this video:

The composer chooses initial options, including a musical catalogue for inspiration, generates some melodic and harmonic sequences, edits them, and then iterates on this, letting the system fill in the blanks, until he finally gets a result that catches his ear.

Behind the scenes, FlowComposer relies on Markov chains. A Markov chain describes a system in terms of states and the probabilities of moving between them. A simple example is the game of Chutes (or Snakes) and Ladders: your next position, or state, is governed only by your current position and your next dice roll, whose outcomes have well-defined probabilities.
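
To make that concrete, here’s a minimal Python sketch of a first-order Markov chain over note names. It’s purely illustrative (nothing to do with Sony’s actual code): a probability table, a rule for picking the next note from the current one, and nothing else.

```python
import random

# A toy first-order Markov chain over note names (illustrative only, not
# FlowComposer's actual model). Each current note maps to the possible next
# notes and their probabilities.
transitions = {
    "C": {"D": 0.4, "E": 0.3, "G": 0.3},
    "D": {"E": 0.5, "C": 0.5},
    "E": {"G": 0.6, "C": 0.4},
    "G": {"C": 0.7, "E": 0.3},
}

def next_note(current):
    """Choose the next note using only the current state -- the Markov property."""
    options = transitions[current]
    return random.choices(list(options), weights=list(options.values()))[0]

def generate(start="C", length=8):
    melody = [start]
    while len(melody) < length:
        melody.append(next_note(melody[-1]))
    return melody

print(generate())  # e.g. ['C', 'E', 'G', 'C', 'D', 'E', 'C', 'D']
```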

The Continuator: Musician Mimicry

Markov models can also be extended to incorporate memory of previous states. This was enough for project leader Pachet to develop a successful earlier project: the Continuator:

The Continuator, Ghedini told me, “could mimic style of the musician”. Namely, it records MIDI input, builds a model of the player, and continues to play based on this model, even after the human player stops.
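
As a rough picture of what “building a model of the player” and “memory of previous states” could look like, here’s a toy order-2 Markov continuation in Python. It’s a hedged sketch only: the real Continuator works on live MIDI input and is far more sophisticated, but the shape of the idea is the same — learn transitions from what was just played, then keep sampling from them after the player stops.

```python
import random
from collections import defaultdict

# Illustrative only: learn an order-2 Markov model from a phrase that was
# "played in", then keep generating notes from it once the player stops.
def train(notes, order=2):
    model = defaultdict(list)
    for i in range(len(notes) - order):
        context = tuple(notes[i:i + order])    # the last `order` notes heard
        model[context].append(notes[i + order])
    return model

def continue_phrase(model, seed, length=8, order=2):
    phrase = list(seed)
    for _ in range(length):
        candidates = model.get(tuple(phrase[-order:]))
        if not candidates:                     # unseen context: stop here
            break
        phrase.append(random.choice(candidates))
    return phrase

played = ["C", "E", "G", "E", "C", "E", "G", "A", "G", "E", "C"]
model = train(played)
print(continue_phrase(model, seed=played[-2:]))
```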

The Continuator is a fascinating and playful jam tool, but as Dr. Ghedini continued, “it was not able to create, for instance, a song with beginning, an end, a bridge”. In fact, music also incorporates a lot of other structures, which systems like the Continuator can’t reproduce.

Adding Rules + Constraints To AI Composition Tools

As project leader François Pachet explains in the talk below (jump to 16:42), music often involves constraints, rules we wish to impose. For instance, we might expect music in a particular key to both start and end on the key note.

A less obvious example is avoiding repetition. A transition between two particular notes might be strongly favoured by a model, so perfectly valid output could bounce between those notes over and over, which is unlikely to interest listeners.

Extending Markov models indefinitely to try to accommodate these requirements ultimately becomes problematic. Instead, the team reformulated the problem as one of generating musical sequences that satisfy a set of constraints. The Markov model itself is added as just another constraint the generated sequence should satisfy, so it can be combined with the other desirable requirements.
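
A naive way to picture “a Markov model plus constraints” is rejection sampling: generate from the model, then throw away anything that breaks the rules. The Flow Machines research does this far more cleverly, compiling the model and constraints together so that only valid sequences are produced, but this toy Python sketch (an assumption-laden stand-in, not their method) shows the two example constraints above — start and end on the tonic, and no ping-ponging between two notes — being enforced on Markov output.

```python
import random

# Naive "generate, then check" picture of a Markov model combined with
# constraints. Rejection sampling like this only scales to short sequences;
# the real system uses proper constraint satisfaction.
transitions = {
    "C": {"D": 0.4, "E": 0.3, "G": 0.3},
    "D": {"E": 0.5, "C": 0.5},
    "E": {"G": 0.6, "C": 0.4},
    "G": {"C": 0.7, "E": 0.3},
}

def sample(length=8, start="C"):
    melody = [start]
    while len(melody) < length:
        opts = transitions[melody[-1]]
        melody.append(random.choices(list(opts), weights=list(opts.values()))[0])
    return melody

def satisfies_constraints(melody, tonic="C"):
    on_tonic = melody[0] == tonic and melody[-1] == tonic
    # Reject A-B-A-B patterns: bouncing between the same two notes.
    no_ping_pong = all(
        not (melody[i] == melody[i + 2] and melody[i + 1] == melody[i + 3])
        for i in range(len(melody) - 3)
    )
    return on_tonic and no_ping_pong

melody = sample()
while not satisfies_constraints(melody):   # keep sampling until the rules hold
    melody = sample()
print(melody)
```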

Another change with FlowComposer over the Continuator, as Ghedini explained further, is that musical style is not applied to a live performer: “it’s not in real-time, but it’s a style based on a huge database that we are building, so you can choose really a lot of different styles”.

Going beyond lead sheets alone, they have a database of recorded audio that can be used to add accompaniment. Using another technology, Rechord, they can apply existing recorded audio intelligently to a lead sheet, using a technique known as concatenative synthesis. An example Ghedini gave is maybe applying sounds from Daft Punk to a lead sheet composed in the style of the Beatles. On the Flow Machines site there’s demo audio of music composed in Miles Davis’ style combined with guitar from “Get Lucky”.
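
For readers unfamiliar with concatenative synthesis, the rough idea — sketched below in Python with made-up unit data, and with no claim about how Rechord actually works — is to slice existing recordings into short units, describe each unit with audio features, then pick the best-matching unit for each chord of the lead sheet and join the chosen audio end to end.

```python
import numpy as np

# Rough sketch of concatenative synthesis (not Rechord's actual method):
# units of audio are described by feature vectors (here a 12-bin chroma
# placeholder); for each target chord we pick the closest unit and join them.
sample_rate = 22050
units = [
    {"audio": np.zeros(sample_rate), "chroma": np.eye(12)[0]},   # "C"-ish unit
    {"audio": np.zeros(sample_rate), "chroma": np.eye(12)[4]},   # "E"-ish unit
    {"audio": np.zeros(sample_rate), "chroma": np.eye(12)[7]},   # "G"-ish unit
]

def best_unit(target_chroma):
    # Nearest neighbour in feature space stands in for a real cost function.
    return min(units, key=lambda u: np.linalg.norm(u["chroma"] - target_chroma))

def render(lead_sheet_chromas):
    chosen = [best_unit(c)["audio"] for c in lead_sheet_chromas]
    return np.concatenate(chosen)   # naive butt-joins; real systems crossfade

output = render([np.eye(12)[0], np.eye(12)[7], np.eye(12)[0]])
print(output.shape)   # three one-second units joined together
```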

New Questions About Music Creation

Having an artificial intelligence program that gets good at writing music opens up a number of questions surrounding creativity:

How Do You Teach AI To Avoid Music Plagiarism?

Since the source lead sheets are themselves good examples of the desired output, how do you know the computer isn’t just plagiarizing existing works in what it produces? This is one aspect the FlowComposer team has considered: in the tool, the size of the chunks reproduced from existing data is kept small enough to allow creative expression without long sections being reproduced verbatim.

Given that copyright lawyers have been quoted as saying “copyright law is by far the most metaphysical”, even defining what musical plagiarism is will likely be a tough problem.

Avoiding it may also be tricky. Where’s the boundary between a novel work (a distinctive hook or melody) and the shared vocabulary of a musical genre (cadences or common chord progressions, which might be of similar length)? Building obedience to such unspecified rules into an AI system might be difficult.

Who Gets Credit For Writing AI Music?

Creative attribution is another issue. Although FlowComposer aims to augment, not replace, the human composer, the examples from DeepMind show that in the future you might not need much musical talent at all: just select a dataset and generate music.

When taking ideas built on a dataset of others’ work and feeding them through software coded by someone else, does the end user even have a real creative role?

Using Existing Music To Generate New Music

There are questions over the reuse of existing music as input. Is this fair use? In a way, this reuse is a kind of sampling, albeit one where you’re bottling the essence of compositions rather than directly reusing audio. When large music databases are used, individual pieces are like drops of water, almost insignificant, and traces of them may be nearly undetectable in the output.

It is the combination of those works that fills the well the output is drawn from. But without those constituents, you wouldn’t get the same final result.

A bellwether of the storms which may come in this field is the case of book authors who had made their work freely available but objected to Google’s use in an artificial intelligence project.

There is one application where worries of ownership largely evaporate: when a composer loads in their own work and uses the system to compose in their own style. It’s an idea, Dr. Ghedini told me, that several artists have tried out of curiosity and often been impressed by. If you’re suffering creative block and have an existing body of work, you can use your own compositions to kickstart a new one. Or maybe try melding your ideas with those of your selection of renowned musicians…

What might this mean for producers and DJs?

The Flow Machines team has focused on working with traditional songwriters and composers, not dance music producers. Since dance music is usually simpler and more structured than, say, classical or jazz, I felt this kind of technology would be directly applicable.

Dr. Ghedini quickly spotted something missing in my argument: production is arguably a much more important part of dance music than of other genres. (Try looking for a MIDI version of your favorite dance track and you’ll probably agree.) That production information exists only in hardware presets and programs or in digital audio workstation project files, not in lead sheets.

Even as it stands, FlowComposer and tools like it might be very useful in a traditional songwriting process, inspiring melodies or chord progressions before a producer steps in to flesh out the production. With tools like DeepMind’s WaveNet, it might even be possible to skip lead sheets altogether and generate new audio directly from existing recordings.

Could AI Help Arrange DJ Setlists?

Playlisting DJ sets is another area that might see AI assistance in future. This isn’t something the Sony team has tackled, but organising a DJ playlist is another musical structuring problem, albeit one operating not at the level of notes and chords, but at the level of full musical pieces.

A University of Texas at Austin group used online mixtape listings as a data source, modeled track selection with a Markov approach (this time, a Markov decision process), and was able to adapt based on listener feedback. It’s not much of a stretch to suggest that DJ set lists could be constructed with a similar approach.
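
To give a flavour of what “adapting track selection based on listener feedback” might look like, here’s a deliberately simplified Python sketch (my own illustration, not the Texas group’s system): score transitions between tracks, pick the next track greedily most of the time, and nudge the scores up or down as listeners respond.

```python
import random
from collections import defaultdict

# Simplified illustration: learn a score for each "current track -> next track"
# transition and adjust it with listener feedback, choosing greedily most of
# the time and exploring occasionally.
value = defaultdict(float)           # (current_track, next_track) -> score
EPSILON, LEARNING_RATE = 0.2, 0.1

def pick_next(current, candidates):
    if random.random() < EPSILON:    # explore now and then
        return random.choice(candidates)
    return max(candidates, key=lambda t: value[(current, t)])

def update(current, chosen, feedback):
    # feedback: +1 if the crowd responded well, -1 if the floor emptied
    value[(current, chosen)] += LEARNING_RATE * (feedback - value[(current, chosen)])

library = ["track_a", "track_b", "track_c", "track_d"]
current = "track_a"
next_track = pick_next(current, [t for t in library if t != current])
update(current, next_track, feedback=+1)
print(next_track, value[(current, next_track)])
```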

2017 Could Be The Year Of AI In Music Production

There’s clearly a trend here. If these tools are packaged up in the right way, maybe as a plugin or integrated into a digital audio workstation, it seems an intriguing, and realistic, possibility that they could find their way into the hands of producers everywhere, letting musical ideas be conceived in a very different fashion. Certainly, the Sony team have had positive feedback in this regard: musicians often tell them that the Flow Machines tools helped them break free of the limits and styles they’d set for themselves.

In 2016, a year that has witnessed the passing away of many hugely influential musicians, perhaps the idea that some essence of their style might live on for others to utilise as direct inspiration in their work is some comfort.

Up Next: Read what Ean Golden predicted might be the future of nightclubs and DJs back in 2013

Comments (28)
  • Tomas Morey

    Is a very objective point of view…

  • TheQuakerOatsGuy

    For further insight on how far Watson has actually gotten, you should check out the Watson soundcloud account. If it can make it through, I honestly believe that there will be a top ten hit within the next 5 years that is produced by Watson Beat and there will also be a huge uproar by creatives, particularly producers and possibly some writers if Watson writes the lyrics as well. The mainstream is ready for it. They could give two craps about who’s producing the song. When people were singing “Uptown Funk” they weren’t going on about Mark Ronson.

  • Momo Slai

    I think it’s worth mentioning iZotope Neutron, which has some sort of AI assistant that helps you mix your tracks in a DAW

  • Ralf S

    Did you ever try an AI translator: translate a simple sentence into another language, then translate this into a second language, and then translate this back into your own language? Funny results or not? The same goes for AI in self-driving cars. What are these cars doing? They are killing or hurting other people. The human spirit and emotions cannot be replaced by AI as fast as some other people think. Music is emotions and this should stay; meanwhile we are using machines for production and playing, but it’s up to us to show creativity.

    • Spacecamp

      I think it’s important to understand that translators are simple tools, not AI. AI usually is an active system, not just a tool that has a basic single input and single output.

      • Ezmyrelda Andrade

        It’s also important to note that an AI has recently just created a new language in order to ease translating from one language to another.

      • Dan Rosenstark

        Ummmm… what? Natural-language translation is up there amongst the hardest AI problems that exist (in spite of the impression you might’ve gleaned in High School French).

        But to @disqus_zgeVriS3Tc:disqus’s point: yes, this stuff will take a long time before you can replace your significant other with a robot. On the other hand, Uber just picked up its first passengers with self-driving cars, so we’re apparently moving forward quite quickly.

  • Ezmyrelda Andrade

    That’s easy.. beyond a little help in the form of apps or workflows.. It doesn’t mean shit.

  • Enufbsalreasy

    I wonder how many people working as programmers would invent the idea of computing if it did not exist.
    Are those writing code truly creators, or are they simply translators between human and machine languages? If the latter, should we expect much “creation” from them? Translating a book doesn’t make one an author. Captioning a film doesn’t make one an editor.
    It seems that we’ve given too much credence to these translators, and are too accepting of anything they get a computer to do as “innovation”.
    I find it difficult to imagine someone who can’t change a doorknob as being able to envision and create great things.

    • Ztronical

      You would probably be surprised how many people can’t or at least have never changed a doorknob, plus some doorknobs are actually very difficult to change.

      • Ztronical

        Not everyone that works at NASA can build a rocket. Not everyone can write a blueprint, architects don’t build shelves or sink basins.
        Most of what we have and create is done by a collective, so if someone uses a machine, is it cheating and thoughtless? Or are they now just selecting a choice of results?

    • Ezmyrelda Andrade

      Yes, programmers are creators. The machine language didn’t spring from the void. The “machine” language exists because someone decided to use binary instead of trinary. It exists because someone decided that specific sequences of numbers need to logically mean very specific things. I don’t think you have a solid idea of what it is that devs do or how it is that they do it. If you think that programmers aren’t creatives I suggest you look into the history of the demoscene.. If you think that a person who knows how to program couldn’t change a doorknob, I think you are overestimating the difficulty of changing a doorknob.

      • Iknowltoowell

        The rainman was great at math.
        I just don’t need his ideas on how to make my life better.

        • Ezmyrelda Andrade

          I have no idea what you are trying to imply but neither do I care.

          • Clearenough

            I’m not implying anything.
            I’m saying, explicitly, that an ability to do math easily does not necessarily make one intelligent. I’m saying, directly, that I work with people every day, with degrees in computer science, who are, for all intents and purposes, idiots. They have little understanding of anything beyond theory, and have a skewed sense of their own technical prowess and worth.

    • Marco Hooghuis

      By that analogy a 3D modeller isn’t a creator either. Try writing a program or watch a tutorial. I think you’d be surprised how much creativity goes into programming.

  • Ztronical

    They should focus more on AI technology for training or as a virtual assistant.
    Everyone eventually wants the control and knowledge for making music, as well if technology could give everyone the opportunity for a strong base education of music creation that would be amazing.

  • zendoo

    If my AI independently, without any actual Nintendo source input, recreates the theme from Mario Bros. 2, is that copyright infringement? My AI has no knowledge of Mario as prior art. Or is it an original work?

    • zendoo

      Record companies already have algorithmic hit making software. Hey Ya was famously designed by algorithms, and designed to be massively appealing. People HATED it. So radio and club DJs were paid to intersperse Hey Ya with other, popular tracks. Eventually, people started to like it, without really knowing why. Sounds crazy, right? You can read about it in The Power of Habit; https://www.amazon.com/Power-Habit-Change-Charles-Paperback/dp/B00IIB5G6Q/ref=sr_1_8?ie=UTF8&qid=1481780008&sr=8-8&keywords=the+power+of+habit

    • Ztronical

      HAL would own it.
      Why would anyone try to sell or monetize an exact duplication of any music somehow accidentally created?
      I think what might be of interest in a phenomenon like that would be the video of the epic event.
      I would be more interested in watching and being amazed by technology like this, or in its use as a tool to train or assist.

      I guess anyone trying to make money would still want original sounding music and my guess is they would still create and sample or shape the result as their own.

      A resulting track that infringed on copyrights would in my opinion be an unwanted result. But a novelty that could be YouTube worthy.

    • Ezmyrelda Andrade

      If it was provable that the source code did not infringe upon the original, the source code would be safe; however, the franchise would still be infringed upon if the sprites ultimately looked the same.