In Max Mathews’ “The Technology of Computer Music” (the manual for Music V, for those who haven’t encountered it before), there are two wonderful quotes:
“Scores for the instruments thus far discussed contain many affronts to a lazy composer, and all composers should be as lazy as possible when writing scores.” – pg. 62
“Furthermore, the note duration is already written in P4; it is an indignity to have to write P6 (= .02555/duration).” – pg. 63
I think these quotes show that even in 1969, a schism was recognized between what one writes, which is relevant to the composer and their work, and the underlying representation necessary for the operation of the musical system. It has been interesting to think about what this signifies and how it presents itself even today.
The above quotes come from the section describing C0NVT, the routine one defined to transform what the composer wrote into the instructions that would work within the MUSIC V base system. While C0NVT would later see a descendant in Csound’s SCOT system, I think the general idea was superseded by custom tools and/or language changes that allowed users to customize things in other ways. Thinking more broadly, outside of just the world of MUSIC N systems, today’s computer music systems generally involve many layers between what the user works with (e.g., the GUI of a DAW, a custom score language, programming in a general-purpose language) and the representation used for performance (text score events, MIDI, OSC).
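To make the idea concrete, here is a loose sketch (in Python, not the original Fortran) of what a C0NVT-style pass might do: take the note the composer writes and fill in the derived, system-level fields automatically. The field layout and names here are illustrative assumptions; only the P6 = .02555/duration relation comes from Mathews’ text quoted above.

```python
def convt(note):
    """Expand a composer-friendly note into full pfields.

    note: {"start": beats, "dur": beats, "pitch": Hz, "amp": 0..1}
    (This dict layout is a made-up stand-in for a MUSIC V note card.)
    """
    p4 = note["dur"]
    return {
        "p2": note["start"],
        "p3": note["amp"],
        "p4": p4,
        "p5": note["pitch"],
        # The "indignity" from the quote: a value derived from the
        # duration that the composer should never have to write by hand.
        "p6": 0.02555 / p4,
    }

# The composer writes only what matters to them...
score = [{"start": 0.0, "dur": 0.5, "pitch": 440.0, "amp": 0.8}]
# ...and the convert pass supplies the rest.
events = [convt(n) for n in score]
```

The point is not the particular arithmetic but the separation of concerns: the lazy composer writes the left-hand dict, and the machine-facing right-hand dict is generated.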
These layers of notation (what the composer writes, the various processes of transformation, and what finally arrives for performance) are not unique to the digital music world. For example, Carter’s metric modulation notation may represent well what the performer needs in order to play material that changes tempi in synchrony with other performers in other tempi, but it obscures the musical character of the material, which is often much simpler rhythmically when viewed within its native tempo. Such a score might be seen as one that has already been processed by C0NVT, leaving us to analyze and reconstruct the pre-processed material. (I do wonder, then, whether Carter might have seen the modulations as a kind of indignity to have to write…)
Returning to the digital world, it is somewhat painful to see composers jump through many hoops to realize their goals with technologies that were not designed for them. Here I am thinking of the microtonal composer using MIDI, who might resort to tuning tables and pitch bends to arrive at their desired tuning. When microtonal composers use tools that allow them to write in one notation and automatically transform the material into one suitable for the target system, it is a delight to see the notation, hear the results, and clearly see the mapping between the two. But for those who must work in tunings with many divisions of the octave, watching a MIDI piano roll coerced into fulfilling their notation needs is quite painful.
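For readers who haven’t run into this workaround, here is a minimal sketch of the pitch-bend trick mentioned above: mapping an arbitrary target frequency onto the nearest MIDI note plus a 14-bit pitch-bend value, assuming the common default bend range of plus or minus two semitones. The function and constant names are mine, not from any particular tool.

```python
import math

BEND_RANGE = 2.0    # semitones covered by full bend deflection (an assumption)
BEND_CENTER = 8192  # 14-bit pitch-bend center value, i.e. no bend

def freq_to_midi_bend(freq_hz, a4=440.0):
    """Approximate freq_hz as (midi_note, bend_value).

    Each microtonal note needs its own bend, which is why this approach
    typically costs one MIDI channel per simultaneous voice.
    """
    semitones = 69 + 12 * math.log2(freq_hz / a4)  # 69 = MIDI note for A4
    note = round(semitones)
    fraction = semitones - note  # remainder in fractional semitones
    bend = round(BEND_CENTER + (fraction / BEND_RANGE) * 8192)
    return note, max(0, min(16383, bend))

# A quarter-tone above A4 (50 cents sharp):
note, bend = freq_to_midi_bend(440.0 * 2 ** (0.5 / 12))
```

This is exactly the indirection the composer never wanted to think about: the notation says “quarter-tone sharp,” while the wire format says “some note, bent by some 14-bit amount, on its own channel.”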
On a personal note, these musings touch on recent issues in my musical practice: finding the right balance between how I would like to write and notate ideas and what I need to do to realize them in sound. Sometimes text/code works well, precise and clear; other times visual interfaces convey more information at a glance and offer intuitive handling and development of material. Both have their appeals and drawbacks, and I suppose Blue represents a kind of hybrid system that allows one to explore not only either end of the spectrum but also the area in between. I have thought about this off and on for a long time, though not so much recently. Perhaps some quiet time to meditate upon it all and experiment with different designs is in order.