
Generative music using Klang

Plan8 has a long history of creating interactive music systems. Our bespoke audio library Klang is built to provide everything one needs to play audio in the browser, from sequencing, sound cueing, asset loading, and scene management to spatial audio and a sophisticated effects bus system. The only thing it has been missing until now is a set of more complex generative music tools.

Every developer at Plan8 is also a musician in their own right, and we all have a keen interest in creative computation both in and out of work. So we decided to build on our collective experience and add more sophisticated generative music tools and utilities to Klang.

Many people think of generative music as having a particular stylistic sound. However, there are many examples that throw that perception to the wind: Lejaren Hiller’s “Illiac Suite,” which made use of Markov chains, generative grammars, and algorithmic cantus firmi; Laurie Spiegel’s invention of the “Music Mouse,” which allows a composer to work collaboratively with the computer; and Arvo Pärt’s tintinnabuli system, which helped create some of his most gorgeous works. And it’s hard not to mention the many algorithmic musical systems employed by composers in the medieval and classical eras (Mozart being the most famous example).

We wanted to make sure our system was flexible enough to allow exploration of the creative possibilities of generative music beyond any single stylistic definition.

Why generative tools?

Variety

Besides our general interest in and experience with generative music, we’ve been busy doing a lot of spatial sound and music for various multi-user 3D worlds lately (some would call them “metaverses”). A key requirement in all of them is that the music needs to be varied and evolving rather than static and repetitive. This is hard to achieve with traditional methods, where a typical approach is simply to load large sound files that play on a loop, perhaps layering several loops of different lengths on top of each other to push the audible loop point even further out. This, however, is ultimately bad for the end-user: it increases the amount of data that has to be downloaded, doesn’t scale well in terms of variety for multi-day events, and hurts performance by loading large files into memory on the end-user’s machine.

An “on the fly” generative approach partially solves this. The user only downloads the component parts needed to play back an infinitely long, constantly unfolding composition that can be unique every time they hear it. It also cuts memory consumption, since much smaller files, such as a single piano note, can be used to play an entire melody.
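As a rough illustration of that idea (this is not Klang’s actual API, and the sample URL is hypothetical), a single recorded note can be repitched to play a whole melody with the Web Audio API:

```typescript
// Minimal sketch (not Klang's API; the sample URL is hypothetical):
// play a whole melody from one recorded piano note by repitching it
// with the Web Audio API.
const ctx = new AudioContext();

async function loadSample(url: string): Promise<AudioBuffer> {
  const response = await fetch(url);
  const data = await response.arrayBuffer();
  return ctx.decodeAudioData(data);
}

// Play the sample transposed by `semitones`, starting at time `when` (seconds).
function playNote(buffer: AudioBuffer, semitones: number, when: number): void {
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.playbackRate.value = Math.pow(2, semitones / 12); // repitch the sample
  source.connect(ctx.destination);
  source.start(when);
}

// A short melody expressed as semitone offsets from the recorded note.
const melody = [0, 2, 4, 7, 4, 2, 0];

loadSample("piano-c4.mp3").then((buffer) => {
  const start = ctx.currentTime + 0.1;
  melody.forEach((offset, i) => playNote(buffer, offset, start + i * 0.5));
});
```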

Sound design

Using a generative approach also gives us creative benefits that would be near impossible with traditional approaches. We can adapt the music instantly to what is happening on screen by changing certain parameters in the system. For instance, when an avatar starts running, the tempo picks up and becomes more regular; when it stops, the music slows down and the rhythms become more asymmetric and sparse.
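A minimal sketch of what that mapping could look like (the parameter names and ranges below are illustrative assumptions, not Klang’s API):

```typescript
// Sketch of the idea (parameter names and ranges are illustrative
// assumptions, not Klang's API): map avatar speed to tempo and to how
// "regular" the generated rhythm should be.
interface MusicParams {
  bpm: number;        // tempo of the generative sequencer
  regularity: number; // 0 = sparse, asymmetric rhythms; 1 = steady grid
}

function paramsForSpeed(speed: number, maxSpeed: number): MusicParams {
  const t = Math.min(Math.max(speed / maxSpeed, 0), 1); // normalize to 0..1
  return {
    bpm: 90 + t * 50,          // idle ~90 BPM, full sprint ~140 BPM
    regularity: 0.3 + t * 0.7,
  };
}

// Called from the game loop whenever the avatar's speed changes.
function onAvatarSpeedChanged(speed: number): void {
  const params = paramsForSpeed(speed, 8); // assume 8 m/s is a full sprint
  // In practice these values would be fed into the generative sequencer.
  console.log(`bpm=${params.bpm.toFixed(0)} regularity=${params.regularity.toFixed(2)}`);
}

onAvatarSpeedChanged(4); // halfway speed -> ~115 BPM, regularity ~0.65
```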

We can also use musical cues as sound effects. When you discover an object in the world the music can respond by playing the next few notes in a slightly different fashion to bring attention to it. We can connect the music and sound design to be completely in sync and in harmony.
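Sketched below in the same hedged, illustrative style (none of these names come from Klang), a discovery event could simply mark the next few notes to be played differently:

```typescript
// Illustrative sketch (none of these names come from Klang): when an
// object is discovered, mark the next few notes so the sequencer plays
// them differently, e.g. an octave up and a little louder.
let accentedNotesLeft = 0;

function onObjectDiscovered(): void {
  accentedNotesLeft = 3; // accent the next three melody notes
}

function nextNote(baseMidiNote: number): { midi: number; velocity: number } {
  if (accentedNotesLeft > 0) {
    accentedNotesLeft -= 1;
    return { midi: baseMidiNote + 12, velocity: 0.9 }; // octave up, louder
  }
  return { midi: baseMidiNote, velocity: 0.6 };
}

onObjectDiscovered();
console.log(nextNote(60), nextNote(62), nextNote(64), nextNote(65));
```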

This is an excerpt of a piece we did for a virtual world. The chord progression and melodies are generated using our new Klang toolset.

A toolset for creative ideas

Beyond being a purely practical tool for generating more variety over time, the toolset we’ve built is itself an instrument that suggests new ideas and options for discovering creative pathways.

We have long been fans of the SuperCollider programming language (“a platform for audio synthesis and algorithmic composition…”) and took a lot of inspiration from its “pattern” system, a very modular framework for algorithmic composition. You can see and hear a lot of SuperCollider’s pattern system in the algorave scene today. Building on some of those ideas, we created a set of tools that let us combine fixed sequences with rule sets, as well as more stochastic rule sets such as Markov chains.

A code excerpt showing a set of modules (Pseq, Prand, and Pmarkov) that are given note values and can be combined with each other to produce endless streams.
This is an excerpt of the melody generated with the above code.
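The code excerpt referenced above is embedded as an image in the original post. Purely as an illustration of the general shape of such pattern combinators (Pseq, Prand, and Pmarkov below are stand-ins, not Klang’s actual classes), a sketch could look like this:

```typescript
// Illustrative stand-ins only (not Klang's actual classes): the general
// shape of SuperCollider-style pattern combinators. Each pattern is an
// endless generator of note values, and patterns can be nested freely.
type Pattern = Generator<number, void, unknown>;

// Pseq: cycle through a fixed sequence of values.
function* Pseq(values: number[]): Pattern {
  for (let i = 0; ; i = (i + 1) % values.length) yield values[i];
}

// Prand: pick a random value from the list each time.
function* Prand(values: number[]): Pattern {
  while (true) yield values[Math.floor(Math.random() * values.length)];
}

// Pmarkov: walk a first-order Markov chain over the given states, where
// transitions[i][j] is the probability of moving from state i to state j.
function* Pmarkov(states: number[], transitions: number[][]): Pattern {
  let current = 0;
  while (true) {
    yield states[current];
    const r = Math.random();
    let cumulative = 0;
    const next = transitions[current].findIndex((p) => (cumulative += p) >= r);
    current = next >= 0 ? next : 0;
  }
}

// Combine them: a fixed motif, a random passing tone, and a Markov-chosen
// continuation, expressed as MIDI note numbers.
const motif = Pseq([60, 62, 64]);
const passing = Prand([65, 67]);
const continuation = Pmarkov(
  [69, 71, 72],
  [
    [0.2, 0.5, 0.3],
    [0.4, 0.2, 0.4],
    [0.3, 0.3, 0.4],
  ]
);

for (let i = 0; i < 4; i++) {
  console.log(motif.next().value, passing.next().value, continuation.next().value);
}
```

Because every pattern is just a stream of values, the combinators can be nested inside one another to produce endless streams, which is where most of the expressive power comes from.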

These new tools give us the ability to create highly dynamic music with just a few lines of code while still keeping it rooted in something pre-composed within a digital audio workstation such as Reaper. We have the option to take pre-composed music and expand it into a generative system, or to directly implement a generative composition that is idiomatic to the project. In other words, our toolset does not build generic music that can be used in any situation; rather, it puts the composer in full control so that the music is completely tailored to the project from top to bottom. The tools are there to help us along the way.

Rather than using a system that creates generic music infinitely, we are building systems that allow us to create very specifically tailored compositions in a variety of styles for each project and firmly place the composer at the creative helm.

Written by Plan8, a design agency for music and sound.