Automata Study: Guessing Game

This is another exploration of automated musical processes that create musical data in real time: rule-based systems composed of independent components that can employ simple random processes and interact with each other. In other words, it is a little musical automata system.

In this experiment I have set up a generative system in which two different musical voices interact with each other by playing a guessing game. Here is an example of the process running:

Basic Overview

What is going on in this example? Ableton Live sends timing info to a little JavaScript program, which then responds with MIDI note data for two voices. In this case there are two tracks:

  1. Leader: a software synth that plays a rhythmically stable melody (track 1),
  2. Follower (accompaniment): a software synth that builds an accompanying melody in real time in coordination with the leader (track 2).

On the left side of the screen recording is a command line terminal logging events as they occur. The little program that plays the two FM synths in Live is JavaScript code that runs at the command line using Node.js. The program was started by typing node main.js followed by the return key at the terminal prompt.

NOTE: since the video was made, the repository has updated how this program is run. It should now be run at the command line from the Automata Studies repository's root directory using: node main.js guessing-game.

When the program starts, the lead voice loads its initial state and the accompaniment voice prepares its first melodic note guess. The program then waits for 16th note clock ticks/pulses from Live. Once Live's transport is started, a Max for Live device sends those clock ticks to the JavaScript program, which responds by sending MIDI note data for specific steps of the 1 bar, 16th note sequence. The JavaScript program stops sending note data when the transport is stopped, but it does not shut down; it only stops running when Ctrl+C is typed in the terminal.

Musical Description

This program uses the following time scales in order from smallest to largest:

  1. Step: a single sequencer step, one 16th note at Live's current BPM;
  2. Bar: one 4/4 bar of exactly 16 steps, the cycle in which the follower makes individual guesses;
  3. Single Guess Cycle: the variable number of Bars it takes for the follower voice to correctly guess one step;
  4. Sequence Guess Cycle: the variable amount of time it takes for the follower to correctly guess all steps the leader makes available for accompaniment.
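
As a rough sketch of how these time scales relate in code (the constant and function names here are illustrative, not necessarily those used in the repository), the bookkeeping could look like this:

// A rough sketch of the time scale bookkeeping; the names are illustrative only.
const STEPS_PER_BAR = 16;

let currentBar = 0;

function onStep(stepIndex) {
  // Step: a single 16th note, arriving as an index 0-15 from Live.
  if (stepIndex === 0) {
    // Bar: one full pass through the 16 steps; the follower makes one guess per Bar.
    currentBar += 1;
  }
  // Single Guess Cycle: some number of Bars until one step is guessed correctly.
  // Sequence Guess Cycle: repeats until every available step has been guessed.
}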

The Bar and Cycle events produce the logged entries in the video above. At the level of a Bar there will always be one log entry for each follower guess and one for checking that guess against what the leader voice is seeking. When the leader voice logs a message that it is evolving the sequence, a Single Guess Cycle has completed. At approximately the 1:03 and 1:52 marks in the video, a complete Sequence Guess Cycle finishes. The video runs through two complete Sequence Guess Cycles and then stops shortly after the third one begins.

Over the course of a complete Sequence Guess Cycle this program builds up its complexity by adding notes to the follower's sequence. Once it has made correct guesses for all sequence steps, the complexity collapses and returns to its simplest starting state.

Code Description

The main script is responsible for receiving clock ticks/pulses from Ableton Live. This is implemented using Open Sound Control (OSC) style messages over the internal localhost network of the computer. A Max for Live device uses the Live API to observe Live's transport timings and sends pulses at the 16th note divisions of the bar as the integers 0-15 using the Max [udpsend] object. Once the main script receives a sequencer step number, it forwards it on to the leader voice.
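
As a minimal sketch of this receiving step, assuming the osc npm package is used to parse the incoming UDP messages and an OSC address like /step (both assumptions; the repository may use a different library, port, and address), the main script could look something like this:

const osc = require("osc");

// Placeholder standing in for the leader voice class sketched below.
const leader = { step: (stepIndex) => console.log(`step ${stepIndex}`) };

// Listen for the 16th note pulses the Max for Live device sends with [udpsend].
// The port number and OSC address are assumptions for illustration.
const udpPort = new osc.UDPPort({ localAddress: "127.0.0.1", localPort: 33333 });

udpPort.on("message", (oscMessage) => {
  // The Max device sends the current 16th note as an integer 0-15.
  if (oscMessage.address === "/step") {
    leader.step(oscMessage.args[0]);  // forward the step index on to the leader voice
  }
});

udpPort.open();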

The voices are implemented as JavaScript classes that respond to individual clock ticks and communicate with each other via function calls. The relationship between the leader and follower is loosely based on the Observer Pattern, in which the follower "subscribes" to the leader. Modeling the classes this way should make it relatively easy to modify this example for multiple follower voices, in which case multiple voices could make guessing attempts.
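
A loose sketch of that subscription relationship (the class and method names are illustrative rather than the repository's actual ones) might look like the following:

// A loose Observer Pattern sketch: followers subscribe to the leader and the
// leader notifies each of them on every clock tick. All names are illustrative.
class Leader {
  constructor() {
    this.followers = [];
  }

  subscribe(follower) {
    this.followers.push(follower);
  }

  step(stepIndex) {
    // ...play the leader's own sequence for this step, then notify subscribers...
    this.followers.forEach(follower => follower.step(stepIndex));
  }
}

class Follower {
  step(stepIndex) {
    // ...play this follower's sequence and, when relevant, its current guess...
  }
}

const leader = new Leader();
leader.subscribe(new Follower());
// Subscribing more Follower instances here would yield multiple guessing voices.

Because the leader only ever iterates over its list of subscribers, adding a second follower would not require changing the leader class itself.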

Coordinating timing alongside the sequence state management is the biggest technical concern. The data structure for managing each voice's sequence is a simple array of numbers. The leader's sequence array is initialized based on a STARTING_STEPS array:

[ 1, -1, 0, 0,  1, -1, 0, -1,  0, 0, 1, -1,  1, -1, 0, 0 ]

This initialization happens when the program first starts and after a full Sequence Guess Cycle has completed. During initialization, the STARTING_STEPS values are interpreted in the following way:

  1. a value of 1 becomes a played step set to the scale's root MIDI note number,
  2. a value of 0 is a silent step that is available for the follower to guess,
  3. a value of -1 is a silent step that is not available for guessing.

For the STARTING_STEPS array above and a scale of C pentatonic minor with the root set to MIDI note 60, the leader's sequence array will be:

[ 60, -1, 0, 0,  60, -1, 0, -1,  0, 0, 60, -1,  60, -1, 0, 0 ]

During initialization, the follower's sequence array starts out as all zeros:

[ 0, 0, 0, 0,  0, 0, 0, 0,  0, 0, 0, 0,  0, 0, 0, 0 ]
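
As a sketch of the initialization just described (the scale representation and function name are assumptions for illustration), the two arrays could be produced like this:

// A minimal sketch of sequence initialization; names are illustrative.
const STARTING_STEPS = [1, -1, 0, 0, 1, -1, 0, -1, 0, 0, 1, -1, 1, -1, 0, 0];
const SCALE_ROOT = 60;  // the root of C pentatonic minor as a MIDI note number

function initializeSequences() {
  // 1 becomes the scale root note; 0 (guessable) and -1 (rest) pass through unchanged.
  const leaderSequence = STARTING_STEPS.map(step => (step === 1 ? SCALE_ROOT : step));
  // The follower starts with silence at every step.
  const followerSequence = new Array(STARTING_STEPS.length).fill(0);
  return { leaderSequence, followerSequence };
}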

While Live's transport is running, the main script passes the current 16th note index to the leader voice, and the leader forwards the step index to the follower voice. Both voices then check whether the value at the current step index in their own internal sequences is greater than 0. If so, the JavaScript program sends a MIDI note message on the voice's corresponding MIDI channel. In addition to its internal sequence steps, the follower voice also checks whether the current step matches its current guess step index; if it does, the follower plays its guessed note as well.
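
A sketch of that per-step check, with a hypothetical playNote() helper standing in for the actual MIDI output code, might look like this:

// A minimal sketch of the per-step playback check; the voice object shape and
// the playNote() helper are assumptions for illustration.
function playNote(midiChannel, noteNumber) {
  // Placeholder: the real program would send a MIDI note on message here.
  console.log(`channel ${midiChannel}: note ${noteNumber}`);
}

function playStep(voice, stepIndex) {
  // Any value greater than 0 is a MIDI note number that should sound on this step.
  if (voice.sequence[stepIndex] > 0) {
    playNote(voice.midiChannel, voice.sequence[stepIndex]);
  }

  // The follower additionally sounds its current guess when its guessed step arrives.
  if (voice.isFollower && stepIndex === voice.guessStepIndex) {
    playNote(voice.midiChannel, voice.guessNote);
  }
}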

When the guessing game starts, the leader voice picks one of its 0 valued sequence slots and chooses a random scale note for it. The follower then asks the leader for a list of available sequence index slots to guess, and the leader provides an array of the indices at which its sequence still has a 0 value. The Single Guess Cycle then proceeds with the follower making one guess per bar. When the correct guess has been made, the leader places a value of -1 at the corresponding index of its own sequence and the follower adds the correctly guessed MIDI note number at the correctly guessed sequence index to its own sequence. When a Single Guess Cycle repeats, the leader also selects random scale notes instead of the scale root/tonic it uses on the first cycle.
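
Sketched as plain functions, and assuming a guess consists of both a step index and a note as the description above suggests (all names are illustrative, not necessarily the repository's), one round of that exchange could look like the following:

// A minimal sketch of one round of the guessing game; all names are illustrative.
function startSingleGuessCycle(leader, scaleNotes) {
  // The leader picks one still-open (0 valued) slot and a random scale note for it.
  const open = availableIndices(leader.sequence);
  leader.targetIndex = open[Math.floor(Math.random() * open.length)];
  leader.targetNote = scaleNotes[Math.floor(Math.random() * scaleNotes.length)];
}

function availableIndices(sequence) {
  // The list the follower asks the leader for: every index still holding a 0.
  return sequence.flatMap((value, index) => (value === 0 ? [index] : []));
}

function checkGuess(leader, follower, guessIndex, guessNote) {
  if (guessIndex !== leader.targetIndex || guessNote !== leader.targetNote) return false;
  leader.sequence[guessIndex] = -1;           // closed until the Sequence Guess Cycle resets
  follower.sequence[guessIndex] = guessNote;  // the follower keeps its correctly guessed note
  return true;
}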

Continuing with the example arrays above, if the leader selected index 3 and MIDI note number 67 for the first Single Guess Cycle, after the correct guess was made and both voices had updated their internal sequence arrays, the leader's sequence would look like the following:

[ 60, -1, 0, -1,  60, -1, 0, -1,  0, 0, 60, -1,  60, -1, 0, 0 ]

where the previous value of 0 at array index 3 has been replaced with -1, so that slot is no longer available for guessing until the full Sequence Guess Cycle has completed. The follower sequence would then look like the following:

[ 0, 0, 0, 67,  0, 0, 0, 0,  0, 0, 0, 0,  0, 0, 0, 0 ]

where array index 3 has had its previous value of 0 replaced with the MIDI note number 67.

At this point the process repeats the Single Guess Cycle until there are no more 0 values in the leader sequence. When that happens, the leader sequence is reinitialized from the STARTING_STEPS array as described above and the follower sequence is reinitialized to all zeros, thereby restarting the macro time loop for the entire Sequence Guess Cycle.
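
A sketch of that reset check, reusing the hypothetical initializeSequences() helper from the initialization sketch above, could be as simple as this:

// A minimal sketch of the Sequence Guess Cycle reset; initializeSequences() is the
// hypothetical helper from the initialization sketch above.
function maybeRestartSequenceGuessCycle(leader, follower) {
  if (!leader.sequence.includes(0)) {
    // No open slots remain: restart the macro time loop from the initial state.
    const { leaderSequence, followerSequence } = initializeSequences();
    leader.sequence = leaderSequence;
    follower.sequence = followerSequence;
  }
}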