Tables & Waves
Automata Study: Chord / Melody Interaction
- Published: September 4, 2025
- Keywords: automata, MIDI, JavaScript, generative music, sketches
- Code: Chord / Melody Interaction sub-project from the Automata Studies repository.
I am interested in generative music and have been exploring automated musical processes that create musical data in real time. Lately I have been thinking about rule-based systems composed of independent components that can employ simple random processes and also interact with each other. In other words, I am interested in musical automatons.
In this experiment I have set up a generative system in which two different musical voices interact with each other by passing messages back and forth to influence what the other one does. Here is an example of the process running:
Basic Overview
What is going on in this example? There is a passive instance of Ableton Live running. I am calling it passive simply because it is not instigating any sound generation: it is only responsible for receiving and responding to MIDI note data, and its transport is not even running. In this case there are three tracks:
- Chords/Pad: a software synth that plays chords (track 1),
- Melody/Lead: a software synth that plays individual notes (track 2),
- Rhythm/Drum Kit: a software drum machine that plays percussive sounds (track 3);
plus Live's default reverb and delay sends.
In the upper right portion of the screen recording is a command line terminal. The little program that plays Live is JavaScript code that runs at the command line using Node.js. The program started when I typed node main.js and pressed the return key at the terminal prompt. It immediately prints its musical configuration details and then, after waiting 2 seconds for Live to recognize the virtual MIDI port it will use for communication, it starts sending note data to the three tracks. With each batch of data sent to Live it logs an iteration event to the terminal so you can follow what is happening. After 100 iterations, it stops and the command line prompt returns.
NOTE: since the video was made, the repository has updated how this program is run. It should now be run at the command line from the Automata Studies repository root directory using: node main.js chord-melody-interaction.
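To make that startup flow concrete, here is a minimal sketch of the sequence described above, assuming the easymidi npm package for the virtual MIDI port; the port name, logged configuration, notes, and timings are placeholders rather than the repository's actual main.js.

```javascript
// Minimal sketch of the startup sequence, not the repository's actual main.js.
// Assumes the easymidi package; the port name and notes are placeholders.
const easymidi = require("easymidi");

const ITERATIONS = 100;
const output = new easymidi.Output("automata-studies", true); // true = create a virtual MIDI port

console.log("Musical configuration: ..."); // print configuration details first

// Wait 2 seconds so Live can recognize the virtual MIDI port before notes arrive.
setTimeout(() => {
  let i = 0;
  const step = () => {
    // ...generate and send this iteration's chord/melody/drum notes here...
    output.send("noteon", { note: 60, velocity: 100, channel: 0 });
    setTimeout(() => output.send("noteoff", { note: 60, velocity: 100, channel: 0 }), 250);

    console.log(`Iteration ${i + 1} of ${ITERATIONS}`);
    if (++i < ITERATIONS) {
      setTimeout(step, 500);
    } else {
      setTimeout(() => output.close(), 500); // close the port so the process can exit
    }
  };
  step();
}, 2000);
```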
Musical Description
Embedded in this little program is another JavaScript library I wrote called tblswvs.js. This library is used for managing musical keys and scales. The key data can be changed by supplying an alternative to its default configuration file, which is a YAML data file within the code base (see the codebase README for details).
This program generates MIDI data for three voices: a chord voice, a melodic voice and a drum voice. The chord notes are used to constrain which melodic note is selected per iteration/event, and subsequently the next chord root note is based on the previous melodic accompaniment note that was played. If no melodic note was played during a given iteration event, a random scale note will be selected as the chord root. The rhythm is a simple combination of a kick drum that accompanies each chord and, optionally, another snare or hihat/perc hit within the current iteration duration if that duration is long enough.
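As a rough illustration of that feedback loop, here is a minimal sketch of one iteration's decision logic; the scale, chord voicing, probabilities, and duration threshold are assumptions made for the example, not the sub-project's actual rules.

```javascript
// A minimal sketch of one iteration's decision logic, not the repository's actual rules.
const scale = [60, 62, 63, 65, 67, 68, 70]; // illustrative C minor scale as MIDI note numbers
const choose = (list) => list[Math.floor(Math.random() * list.length)];

let previousMelodyNote = null;

function nextIteration(durationMs) {
  // Next chord root: the previous melody note (folded into register), or a random scale note if none was played.
  const root = previousMelodyNote === null ? choose(scale) : (previousMelodyNote % 12) + 60;
  const chord = [root, root + 3, root + 7]; // simple minor triad on that root

  // The chord constrains the melody: optionally pick one of its tones, an octave up.
  const melodyNote = Math.random() < 0.7 ? choose(chord) + 12 : null;
  previousMelodyNote = melodyNote;

  // Rhythm: a kick with every chord, plus an optional snare or hat if the duration allows it.
  const drums = ["kick"];
  if (durationMs >= 500 && Math.random() < 0.5) drums.push(choose(["snare", "hihat"]));

  return { chord, melodyNote, drums, durationMs };
}

console.log(nextIteration(500));
```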
This sketch is intended to work out a minimal amount of interaction between two automaton voices. It is a trivially simple harmonic/melodic interaction in which each of the corresponding voices influences the other from one iteration to the next.
Notes on Code
These interactive automatons are implemented using the MessageChannel system from Node.js's node:worker_threads module. Each voice passes a message to the other one to share data about what it did, as a way of influencing the other voice's subsequent action. By creating an instance of MessageChannel and attaching "message" event listeners to its ports, the Node process is held open and the script does not terminate until the ports' close() method is called.
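Here is a minimal sketch of that pattern with two voices linked by a single MessageChannel; the message shapes, note arithmetic, and iteration limit are illustrative assumptions rather than the sub-project's code.

```javascript
// Two "voices" exchange messages over a MessageChannel; the attached "message"
// listeners keep the event loop alive until the ports are closed.
const { MessageChannel } = require("node:worker_threads");

const { port1: chordPort, port2: melodyPort } = new MessageChannel();
let iterations = 0;
const MAX_ITERATIONS = 10;

// The chord voice reacts to the note the melody voice reports having played.
chordPort.on("message", (msg) => {
  console.log(`Chord voice: basing next root on melody note ${msg.note}`);
  if (++iterations >= MAX_ITERATIONS) {
    // Closing the ports releases the event loop and lets the script terminate.
    chordPort.close();
    melodyPort.close();
  } else {
    chordPort.postMessage({ root: msg.note % 12 });
  }
});

// The melody voice reacts to the chord root and reports its own note back.
melodyPort.on("message", (msg) => {
  const note = 60 + msg.root + 7; // placeholder note choice
  console.log(`Melody voice: playing ${note} over root ${msg.root}`);
  melodyPort.postMessage({ note });
});

// The chord voice kicks off the exchange with an initial root.
chordPort.postMessage({ root: 0 });
```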
Musical event sequencing is accomplished simply by using the JavaScript setTimeout() function. The musical configuration data file described above provides a list of musical timings to choose from for each iteration. The default configuration uses simple millisecond durations for note lengths at 120 BPM. For example, a quarter note at 120 BPM is 500 milliseconds.