On C0NVT (From MUSIC V)

In Max Mathews’ “The Technology of Computer Music” (the manual for MUSIC V, for those who haven’t encountered it before), there are two wonderful quotes:

“Scores for the instruments thus far discussed contain many affronts to a lazy composer, and all composers should be as lazy as possible when writing scores.” – pg. 62

“Furthermore, the note duration is already written in P4; it is an indignity to have to write P6 (= .02555/duration).” – pg. 63

I think these quotes show that even in 1969, a schism had been identified between what one writes, the notation relevant to the composer and their work, and the underlying representation necessary for the operation of the musical system. It has been interesting to think about what that schism signifies and how it presents itself even today.
The above quotes come from the section describing C0NVT, the routine one defined to transform what the composer wrote into the instructions that would work within the MUSIC V base system. While C0NVT would later see a descendant in Csound’s SCOT system, I think the general idea was superseded by custom tools and/or language changes that allowed users to customize things in other ways. Thinking more broadly outside of just the world of MUSIC N systems, today’s computer music systems generally involve many layers between what the user works with (e.g., the GUI of a DAW, a custom score language, programming in a general-purpose language) and the representation used for performance (text score events, MIDI, OSC).
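To make the idea concrete, here is a minimal sketch of the kind of pass C0NVT performed, written in Python rather than Mathews’ FORTRAN and using made-up pfield data; the only detail taken from the book is the P6 = .02555/duration relationship quoted above:

    # Hypothetical sketch of a C0NVT-style transformation (not Mathews'
    # actual code). The composer writes only what matters to them; the
    # convert pass fills in the parameter the base system requires.

    def convert(note):
        """Expand a composer-friendly note into the full pfield list.

        note is (start, duration, amplitude); the derived value is the
        envelope increment the quote complains about having to write
        by hand: P6 = .02555 / duration.
        """
        start, duration, amplitude = note
        p6 = 0.02555 / duration
        return [start, duration, amplitude, p6]

    score = [(0.0, 2.0, 1000), (2.0, 0.5, 800)]
    for note in score:
        print(convert(note))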
These layers of notation (what the composer writes, the various processes of transformation, and what is finally used for performance) are not unique to the digital music world. For example, Elliott Carter’s metric modulation notation may represent well what performers need to perform material that changes tempo in synchrony with other performers in other tempi, but it obscures the musical character of the material, which is often much simpler rhythmically when viewed within its native tempo. Such a score might be seen as one that has already been processed by C0NVT, leaving us to analyze and reconstruct the pre-processed material. (I do wonder, then, whether Carter might have seen the modulations as a kind of indignity to have to write…)
Returning to the digital world, it is somewhat painful to see composers jump through many hoops to realize their goals with technologies that were not designed for those goals. Here I am thinking of the microtonal composer using MIDI, who might resort to tuning tables and pitch bends to arrive at their desired tuning. When microtonal composers use tools that allow them to write in one notation and automatically transform the material into one suitable for the target system, it is quite a delight to see the notation, hear the results, and clearly see the mapping between the two. But for those who must work within tunings with many divisions of the octave, seeing a MIDI piano roll coerced to fulfill their notation needs is quite painful.
On a personal note, these musings touch on recent issues in my musical practice: finding the right balance between how I would like to write and notate ideas and what I need to do to realize them in sound. Sometimes text/code works so well and is so precise and clear, yet other times visual interfaces yield more overall information and provide intuitive handling and development of material. Both have their appeals and drawbacks, and I suppose Blue represents a kind of hybrid system that allows one not only to explore either end of the spectrum but also the area in between. I have thought about this off and on for a long time, but not so much recently. Perhaps some quiet time to meditate upon it all and experiment with different designs is in order.

Wuji

Completed: 2017.03.28
Duration: 5:32
Ensemble: Electronic (Csound)

MP3: Click Here
OGG: Click Here
FLAC: Click Here
Project Files: Click Here (.csd)

DESCRIPTION

“Wuji” was written for the Eastman Mobile Acousmonium (EMA), “an ensemble of individual custom-built loudspeakers” developed out of the Eastman Audio Research Studio (EARS), led by Oliver Schneller. The piece is designed for any number of available speakers spatially distributed in a room. It is made up of multiple renderings of a primary single-channel source, with one rendering mapped to each speaker.

In writing for EMA, one of the sonic experiences that came to mind were memories of performing in instrumental ensembles and sitting on stage within a field of sound. It has been many years now since I last performed in an ensemble, but I remember that sound world as a unique and wonderful experience, one not easily reproduced by typical speaker setups that surround the periphery of the listener. It is my hope that the listener experiences this work not only surrounded by the sound but also within it.

Many thanks to Oliver Schneller and the members of EARS for the opportunity to compose for the EMA speaker ensemble.

TECHNICAL NOTES

“Wuji” was originally designed as a realtime work. The primary Csound CSD project uses limited amounts of indeterminacy so that every render yields a unique performance. The concept was first designed to be performed by multiple computers and/or mobile devices, each connected to speakers spatially distributed in a room. However, as the nature of the hardware ensemble changed, it became easier to pre-generate multiple renders of the core project and provide them as a single 24-channel audio file. The 24-channel file would then be played back with each channel mapped to an available speaker.

To simulate the experience of realtime rendering, 24 single-channel renders were created, each unique due to the indeterminacy in the CSD. A second 24chanmix CSD was developed to take each single-channel file and map it to one of the 24 channels. The channels were played in groups roughly 2 seconds apart, with slight randomness used to offset the start times of the channels within each group. To cover about 8 seconds of group offsets, the channels were batched into four 6-channel groups. This matches the intended realtime performance, in which groups of machines would begin rendering about every two seconds, and simulates the slight imprecision of trying to start separate machines at the same time.
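The 24chanmix CSD itself is not reproduced here, but the start-time scheme it implements might be sketched as follows in Python (the jitter range is an assumption; the exact values used in the project are not given in this post):

    import random

    CHANNELS = 24
    GROUP_SIZE = 6
    GROUP_SPACING = 2.0  # seconds between group starts
    JITTER = 0.1         # assumed range of per-channel randomness

    # Four 6-channel groups, roughly 2 seconds apart, with slight
    # random offsets within each group to mimic the imprecision of
    # starting separate machines by hand.
    start_times = {}
    for channel in range(CHANNELS):
        group = channel // GROUP_SIZE
        start_times[channel] = group * GROUP_SPACING + random.uniform(0.0, JITTER)

    for channel, start in sorted(start_times.items()):
        print(f"channel {channel:2d} starts at {start:.3f}s")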

For the 2-channel mix, the same 24 single-channel renders were used with a second 2chanmix CSD. The 2chanmix CSD uses the same offset-time algorithm as the 24-channel mix (four 6-channel groups, roughly 2 seconds apart). Each channel is randomly panned within the stereo field and given a random gain. The 2-channel mix is, at best, a rough approximation of the intended experience. The effect of listening to a heterogeneous group of speakers (each with its own filtering characteristics, frequency response, and directionality) sounding in and interacting with a room, together with the listener’s freedom to walk around that room, cannot be adequately captured in a 2-channel mix. A better 2-channel mix that accounts for some of these factors could certainly be done and may be revisited in the future.
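Again as a sketch rather than the actual 2chanmix code, the per-channel pan and gain step might look like this, assuming an equal-power pan law and a made-up gain range:

    import math
    import random

    # Assign each mono render a random stereo position and gain.
    # Equal-power panning and the 0.5-1.0 gain range are assumptions;
    # the original CSD's pan law and ranges are not documented here.
    def random_pan_gain():
        pan = random.random()            # 0.0 = hard left, 1.0 = hard right
        gain = random.uniform(0.5, 1.0)  # assumed gain range
        left = gain * math.cos(pan * math.pi / 2)
        right = gain * math.sin(pan * math.pi / 2)
        return left, right

    # One (left, right) weight pair per single-channel render.
    weights = [random_pan_gain() for _ in range(24)]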

A Makefile is provided that will, by default, render the 24 single-channel renders, render the 2-channel mix to a WAV file, then process the WAV to create MP3, OGG, and FLAC versions. To build the 24-channel performance version, a separate Makefile target (make syi_wuji_24chanmix.wav) must be run manually. A “repl” target is also provided that will start the project using Csound with --port=10000, suitable for use with the Vim csound-repl plugin. It will also define the REPL macro so that running the project CSD will not execute the main score-generating command. This allows one to load the project and then go about experimenting with live coding.

COMPOSITION NOTES

“Wuji” is made up of three sound groups: a justly-tuned major chord, an ascending and descending pattern of dyads in Bohlen-Pierce tuning (equal-tempered version), and a multi-LFO-modulated “chaotic” sound. The first two groups use the same simple instrument, a sawtooth oscillator filtered by the moogladder 24 dB/oct lowpass filter. The chaotic sound was designed around a triangle-wave oscillator filtered by the moogladder filter. The two tunings were chosen for their particular sonic qualities, both for their respective materials alone and for their rich interactions.
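For those unfamiliar with the tunings: a just major chord uses the frequency ratios 4:5:6, and the equal-tempered Bohlen-Pierce scale divides the 3:1 “tritave” into 13 equal steps. The following Python sketch shows the pitch math only; the 220 Hz fundamental is an arbitrary choice for illustration, not the reference pitch used in the piece:

    # Pitch math for the two tunings used in "Wuji".
    FUNDAMENTAL = 220.0  # arbitrary fundamental for illustration

    # Justly-tuned major chord: frequency ratios 4:5:6.
    just_major = [FUNDAMENTAL * r / 4 for r in (4, 5, 6)]

    # Equal-tempered Bohlen-Pierce: 13 equal divisions of the
    # "tritave" (3:1), so each scale step is a ratio of 3 ** (1/13).
    bp_step = 3 ** (1 / 13)
    bp_scale = [FUNDAMENTAL * bp_step ** i for i in range(14)]

    print([round(f, 2) for f in just_major])
    print([round(f, 2) for f in bp_scale])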