Tables & Waves
Automata Study: Improvising Voices
- Published: November 26, 2025
- Keywords: automata, MIDI, JavaScript, generative music, sketches
- Code: Improvisers sub-project from the Automata Studies repository.
This exploration of automated musical processes is another little musical automata system that creates musical data in real time. As with all experiments posted in this series, the musical results are not intended to be a finished piece, but rather simple sketches with a high degree of interpretability at the process level. As such, it includes a minimal amount of sound design and composition in the hope that this keeps the algorithm and/or code comprehensible.
In this experiment I have set up a generative system in which two musical voices improvise together by taking turns in leader and follower modes. One voice generates melodic lines and the other generates chords. The process starts either with the chord voice leading, accompanied by a melodic line, or with the melody voice leading, accompanied by a chord. Here is an example of the process running:
The intention for this automata study is that the voices behave like rudimentary improvisers. One voice leads while the other listens and reacts.
Basic Overview
Ableton Live (the application on the right side) sends timing info to a little JavaScript program, which then responds with MIDI note data for the two voices. Live is using three tracks:
- Pad Chords Voice: a software synth that plays chords (track 1)
- Keys Melodic Voice: a software synth that plays melodies (track 2)
- Max for Live Timing Track: a kind of "dummy" track that hosts a Max for Live device that communicates transport timing information to a JavaScript program (track 3)
On the left side of the screen recording is a terminal logging macro events as they occur. The little program that plays the two synths in Live is JavaScript code that runs at the command line using Node.js. The program started when I typed `node main.js improvisers` into the terminal prompt and pressed return.
When the program starts, the pad (chords) voice is randomly selected to be the first leader, as indicated by the message printed to the terminal window. When the Live transport starts, a Max for Live device begins sending 16th note clock messages to the JavaScript program. Chords built from the notes of the current key (C minor pentatonic) are randomly generated and played. For each chord a random duration is chosen, measured in 16th notes; the fixed set of duration options is 4, 8, 12, 16, 24, or 32 16th notes.
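The chord generation step above could be sketched roughly like this. This is a minimal illustration rather than the project's actual code; the function and constant names are hypothetical:

```javascript
// Hypothetical sketch of the pad voice's chord generation.
// C minor pentatonic as MIDI note numbers (C4 = 60).
const C_MINOR_PENTATONIC = [60, 63, 65, 67, 70];

// The fixed set of duration options, measured in 16th notes.
const DURATIONS = [4, 8, 12, 16, 24, 32];

function randomChoice(list) {
  return list[Math.floor(Math.random() * list.length)];
}

// Build a chord from distinct random notes in the key, paired
// with a randomly chosen duration.
function generateChord(noteCount = 3) {
  const pool = [...C_MINOR_PENTATONIC];
  const notes = [];
  while (notes.length < noteCount && pool.length > 0) {
    const index = Math.floor(Math.random() * pool.length);
    notes.push(pool.splice(index, 1)[0]);
  }
  return { notes, duration: randomChoice(DURATIONS) };
}

console.log(generateChord()); // e.g. { notes: [63, 70, 60], duration: 16 }
```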
When a new chord begins, its note list and duration are sent as a message from the pad voice to the keys voice. The keys voice then provides a melodic accompaniment by arpeggiating the current chord. The keys voice has a fixed set of rhythmic patterns that correspond to each of the possible chord durations.
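The follower behavior might look something like the sketch below, assuming the real implementation differs in detail; the pattern table values here are purely illustrative:

```javascript
// Hypothetical sketch of the keys voice as follower. One fixed rhythmic
// pattern per possible chord duration; each entry is the 16th-note step
// on which a melody note sounds (illustrative values only).
const RHYTHM_PATTERNS = {
  4:  [0, 2],
  8:  [0, 2, 4, 6],
  12: [0, 3, 6, 9],
  16: [0, 2, 4, 8, 12],
  24: [0, 4, 8, 12, 16, 20],
  32: [0, 4, 8, 12, 16, 20, 24, 28],
};

// Map each step in the rhythmic pattern to a chord tone, cycling through
// the chord's notes in order (a simple arpeggio).
function arpeggiate(chordNotes, duration) {
  const steps = RHYTHM_PATTERNS[duration];
  return steps.map((step, i) => ({
    step,
    note: chordNotes[i % chordNotes.length],
  }));
}

console.log(arpeggiate([60, 63, 67], 8));
// [ { step: 0, note: 60 }, { step: 2, note: 63 },
//   { step: 4, note: 67 }, { step: 6, note: 60 } ]
```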
In this simple implementation, the pad voice will play eight chords and then a sequencer object will rechoose which voice should be the leader and which should be the follower. If the same voice, keys or pad, is chosen as the leader three times in a row, the sequencer will switch to the other voice to be the next leader.
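The "no more than three turns in a row" rule could be sketched as follows. This is an assumed implementation for illustration, not the repository's code:

```javascript
// Hypothetical sketch of the sequencer's leader reselection: choose a
// leader at random, but never let the same voice lead more than three
// times in a row.
function makeLeaderSelector(voices) {
  let current = null;
  let streak = 0;
  return function selectLeader() {
    let next = voices[Math.floor(Math.random() * voices.length)];
    if (next === current && streak >= 3) {
      // Already led three times in a row: force the other voice.
      next = voices.find(v => v !== current);
    }
    streak = next === current ? streak + 1 : 1;
    current = next;
    return current;
  };
}

const selectLeader = makeLeaderSelector(['pad', 'keys']);
console.log(selectLeader()); // 'pad' or 'keys'
```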
In the video above, at the first opportunity to select the next leader, the keys voice was chosen. When the keys voice plays, it will first make a random selection from a predefined list of note sequences. Once the note sequence has been selected, it will then generate a random rhythm for the melody. When the melody begins playing, it will notify the pad voice of the notes in the selected melody. The pad voice will then choose four random notes from the melody and play a single chord for the duration of the melodic line.
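The keys-as-leader exchange might be sketched like this, with hypothetical names and made-up melody data standing in for the predefined sequences:

```javascript
// Hypothetical sketch of the keys voice as leader. The note sequences
// below are placeholders for the program's predefined list.
const MELODY_SEQUENCES = [
  [60, 63, 65, 67, 70, 67, 65, 63],
  [70, 67, 65, 63, 60, 63, 65, 67],
];

// Leader step: randomly select one of the predefined note sequences.
function chooseMelody() {
  return MELODY_SEQUENCES[Math.floor(Math.random() * MELODY_SEQUENCES.length)];
}

// Follower step: the pad voice builds a single chord from four randomly
// chosen notes of the leader's melody.
function chordFromMelody(melodyNotes, size = 4) {
  const pool = [...melodyNotes];
  const chord = [];
  while (chord.length < size && pool.length > 0) {
    const index = Math.floor(Math.random() * pool.length);
    chord.push(pool.splice(index, 1)[0]);
  }
  return chord;
}

const melody = chooseMelody();
console.log(chordFromMelody(melody)); // four notes drawn from the melody
```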
Similar to how the pad voice functions when it is the leader, the keys voice will repeat melodies a few times before letting the sequencer reselect the leader: it plays four melodies before the sequencer chooses again. In the video, the keys voice was selected twice in a row before the pad voice took its next turn as the leader.
Musical Description
This is a very basic example of voices fulfilling two different kinds of roles. First, a voice has either a keys/melodic role or a pad/chords role; these roles are fixed for the life of the program. Each voice also has the role of leader or follower, and these roles change over time. The leader selects what to play and the follower is responsible for accompaniment. In this example, the leaders do not generate particularly musical data: chords are selected at random within a key, and melodies have fixed note sequences with random rhythms.
Code Description
In this automata study, the voices both communicate directly with each other and also with the coordinating sequencer.
A coordinating sequencer object is responsible for managing the two voices. It holds onto the voice objects and passes timing ticks/steps to them; each voice manages its own timing based on its own rhythmic data. The voices also communicate with each other in the direction from the leader to the follower. In this way, the follower voice listens to the leader voice and reacts to it accordingly.
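The tick fan-out could be as simple as the sketch below. The class and method names are assumptions for illustration, not the project's actual API:

```javascript
// Hypothetical sketch: the sequencer receives each 16th-note clock message
// from the Max for Live device and forwards it to both voices, each of
// which tracks its own rhythmic position.
class Sequencer {
  constructor(voices) {
    this.voices = voices;
  }

  // Called once per incoming 16th-note clock message.
  tick(step) {
    for (const voice of this.voices) voice.onTick(step);
  }
}
```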
Additionally, the leader voice will report to the coordinating sequencer each time an iteration begins. When the current leader begins a new melody or chord, it will increment a count of the number of iterations that it has played; this count is stored with the sequencer object. When that count reaches a certain threshold, 8 for chords or 4 for melodies, the sequencer will reselect the leader and follower roles.
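The iteration counting described above can be sketched as follows, again with assumed names; only the thresholds (8 for chords, 4 for melodies) come from the text:

```javascript
// Hypothetical sketch of the sequencer's iteration counting. The leader
// reports each new chord or melody; once the count reaches the threshold
// for that voice type, the roles are reselected.
const ITERATION_THRESHOLDS = { pad: 8, keys: 4 };

function makeIterationTracker(onThresholdReached) {
  let count = 0;
  return {
    // Called by the leader each time a new chord/melody begins.
    reportIteration(voiceType) {
      count += 1;
      if (count >= ITERATION_THRESHOLDS[voiceType]) {
        count = 0;
        onThresholdReached(); // e.g. reselect leader/follower roles
      }
    },
  };
}
```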