On C0NVT (From MUSIC V)

In Max Mathews’ “The Technology of Computer Music” (the manual for MUSIC V, for those who haven’t encountered it before), there are two wonderful quotes:

“Scores for the instruments thus far discussed contain many affronts to a lazy composer, and all composers should be as lazy as possible when writing scores.” – pg. 62

“Furthermore, the note duration is already written in P4; it is an indignity to have to write P6 (= .02555/duration).” – pg. 63

I think these quotes show that, even in 1969, a schism had been identified between what one writes, the notation relevant to the composer and their work, and the underlying representation necessary for the operation of the musical system. It has been interesting to think about what this schism signifies and how it presents itself even today.
The above quotes come from the section describing C0NVT, the routine one defined to transform what the composer wrote into the instructions that would work within the MUSIC V base system. While C0NVT would later see a descendant in Csound’s SCOT system, I think the general idea was ultimately superseded by custom tools and/or language changes that let users customize things in other ways. Thinking more broadly, outside of just the world of MUSIC N systems, today’s computer music systems generally involve many layers between what the user works with (i.e., the GUI of a DAW, a custom score language, programming in a general-purpose language) and the representation used for performance (text score events, MIDI, OSC).
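As a toy illustration of the idea (in Clojure, not anything resembling actual MUSIC V code), a C0NVT-style routine lets the composer write only what matters to them, here the duration in P4, and fills in the derived parameter automatically. The constant .02555 is presumably the stored function length divided by the sampling rate (511/20000), so that the envelope function is read exactly once over the note:

    ;; toy C0NVT: the composer writes duration in P4; derive the increment P6
    (defn convt [note]
      (assoc note :p6 (/ 0.02555 (:p4 note))))

    (convt {:p4 2.0 :p5 440})
    ;; => {:p4 2.0, :p5 440, :p6 0.012775}
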
These layers of notation, from what the composer writes, through various processes of transformation, to what is finally used in performance, are not unique to the digital music world. For example, Carter’s metric modulation notation may represent well what the performer needs to perform material that changes tempi in synchrony with other performers in other tempi, but it obscures the musical character of the material, which is often much simpler rhythmically when viewed within its native tempo. This might be seen, then, as a score that has already been processed by C0NVT, leaving us to analyze and make out the pre-processed material. (I do wonder, then, whether Carter might have seen the modulations as a kind of indignity to have to write…)
Returning to the digital world, it is somewhat painful to see composers jump through many hoops to realize their goals with technologies that were not designed for them. Here I am thinking of the microtonal composer using MIDI, who might resort to tuning tables and pitch bends to arrive at their desired tuning. When microtonal composers use tools that allow them to write in one notation and automatically transform the material into one suitable for the target system, it is quite a delight to see the notation, hear the results, and clearly see the mapping between the two. But for those who must work within tunings with many divisions of the octave, watching a MIDI piano roll coerced to fulfill their notation needs is hard to take.
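To make the pitch-bend workaround concrete, here is a small Clojure sketch of the arithmetic involved, assuming the common default bend range of plus or minus 2 semitones (the helper name is mine, not any particular library’s):

    ;; cents offset -> 14-bit MIDI pitch-bend value, assuming a bend range
    ;; of +/- 2 semitones (200 cents); 8192 means "no bend"
    (defn cents->bend [cents]
      (+ 8192 (Math/round (* 8192 (/ cents 200.0)))))

    (cents->bend 50)  ;; => 10240 (a quarter-tone up)
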
On a personal note, these musings touch on a recent issue in my own musical practice: finding the right balance between how I would like to write and notate ideas and what I need to do to realize them in sound. Sometimes text/code works well, precise and clear; other times visual interfaces convey more information at a glance and provide intuitive handling and development of material. Both have their appeals and drawbacks, and I suppose Blue represents a kind of hybrid system that allows one to explore not only either end of the spectrum but also the area in between. I have thought about this off and on for a long time, though not so much recently. Perhaps some quiet time to meditate upon it all and experiment with different designs is in order.

Wuji

Completed: 2017.03.28
Duration: 5:32
Ensemble: Electronic (Csound)

MP3: Click Here
OGG: Click Here
FLAC: Click Here
Project Files: Click Here (.csd)

DESCRIPTION

“Wuji” was written for the Eastman Mobile Acousmonium (EMA), “an ensemble of individual custom-built loudspeakers” developed out of the Eastman Audio Research Studio (EARS), led by Oliver Schneller. The piece is designed for any number of available speakers spatially distributed in a room. It is made up of multiple renderings of a primary single-channel source, with one rendering mapped to each speaker.

In writing for EMA, one of the sonic experiences that came to mind was the memory of performing in instrumental ensembles and sitting on stage within a field of sound. It has been many years now since I last performed in an ensemble, but I remember that sound world as a unique and wonderful experience, one not easily reproduced by typical speaker setups that surround the periphery of the listener. It is my hope that the listener experiences this work by not only being surrounded by the sound but also being within it.

Many thanks to Oliver Schneller and the members of EARS for the opportunity to compose for the EMA speaker ensemble.

TECHNICAL NOTES

“Wuji” was originally designed as a realtime work. The primary Csound CSD project uses limited amounts of indeterminacy to provide a unique performance with every render. The concept was first designed to be performed by multiple computers and/or mobile devices connected to various speakers spatially distributed in a room. However, as the nature of the hardware ensemble changed, it became easier to pre-generate multiple renders of the core project and provide them as a single 24-channel audio file. The 24-channel file would then be played back with each channel mapped to an available speaker.

To simulate the experience of the realtime rendering, 24 single-channel renders were created, each unique due to the indeterminacy in the CSD. A second CSD, 24chanmix, was developed to take each single-channel file and map it to one of the 24 channels. The channels were played in groups roughly 2 seconds apart, with slight randomness used to offset the start times of the channels within each group. To cover about 8 seconds of group offsets, the channels were batched into four 6-channel groups. This matches the intended realtime performance, in which groups of machines would start rendering about every two seconds, and simulates the slight imprecision of trying to start separate machines at the same time.
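
As a rough sketch of the offset scheme (the instrument names and jitter amount here are assumptions; the project CSD is the authoritative version):

    ; four 6-channel groups, groups ~2 seconds apart, slight jitter per channel
    instr ScheduleChannels
      ichn = 0
      while (ichn < 24) do
        igroup = int(ichn / 6)
        istart = igroup * 2 + random:i(0, 0.25)    ; jitter amount assumed
        ; -1 = indefinite; the playback instrument is assumed to turn
        ; itself off when its source file ends
        schedule("PlayChan", istart, -1, ichn + 1) ; "PlayChan" is hypothetical
        ichn += 1
      od
    endin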

For the 2-channel mix, the same 24 single-channel renders were used with a second CSD, 2chanmix. The 2chanmix CSD uses the same offset-time algorithm as the 24-channel mix (four 6-channel groups, roughly 2 seconds apart). Each channel is randomly panned within the stereo field, and a random gain is applied. The 2-channel mix is, at best, a rough approximation of the intended experience. The effect of listening to a heterogeneous group of speakers, each with its own filtering characteristics, frequency response, and directionality, sounding within and interacting with a room, together with the freedom to walk around the room as a listener, cannot be adequately captured in a 2-channel mix. A better 2-channel mix that accounts for some of these factors could certainly be done and may be revisited in the future.

A Makefile is provided that will, by default, render the 24 single-channel renders, render the 2-channel mix to a WAV file, then process the WAV to create the MP3, OGG, and FLAC versions. To build the 24-channel performance version, a separate Makefile target (make syi_wuji_24chanmix.wav) must be run manually. A “repl” target is also provided that will start the project using Csound with --port=10000, suitable for use with the csound-repl plugin for Vim. It also defines the REPL macro so that running the project CSD will not execute the main score-generating command. This allows one to load the project and then go about experimenting with live coding.
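
The guard in the CSD might look something like the following sketch (the instrument name is hypothetical; the actual project code may differ):

    ; skip the main score generation when the REPL macro is defined
    #ifndef REPL
      schedule("MainScore", 0, 1)   ; "MainScore" is a hypothetical name
    #end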

COMPOSITION NOTES

“Wuji” is made up of three sound groups: a justly tuned major chord, an ascending and descending pattern of dyads in Bohlen-Pierce tuning (the equal-tempered version), and a multi-LFO-modulated “chaotic” sound. The first two groups use the same simple instrument, a sawtooth oscillator filtered by the moogladder 24 dB/oct lowpass filter. The chaotic sound was designed around a triangle-wave oscillator filtered by the same moogladder filter. The two tunings were chosen for their particular sonic qualities, both for their respective materials alone and for their rich interactions.
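
As a minimal sketch of that instrument design (parameter values here are illustrative, not the piece’s actual settings):

    instr SawVoice
      iamp  = p4
      ifreq = p5
      kenv  = madsr(0.05, 0.1, 0.8, 0.5)    ; simple envelope (values assumed)
      asig  = vco2(iamp, ifreq)             ; sawtooth oscillator
      asig  = moogladder(asig, 2000, 0.4)   ; 24 dB/oct lowpass (cutoff/res assumed)
      outs asig * kenv, asig * kenv
    endin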

Extensible Computer Music Systems – PhD Thesis Available Online

My PhD thesis, “Extensible Computer Music Systems,” is now freely available online as a PDF at:

http://eprints.maynoothuniversity.ie/7554/

It captures my thoughts on the importance of extensibility in computer music software and on different ways of approaching it, for both developers and users.  The thesis discusses various extensibility strategies implemented in Csound, Blue, Pink, and Score from 2011 to 2016.

Looking back at the thesis, I’m proud of the work I was able to do.  I am sure my thoughts will continue to evolve over time, but I think the core ideas have been represented well within the thesis.  I hope those who take a look may find something of interest.

In addition to my acknowledgements in the thesis, I would also like to thank Ryan Molloy and Stephen Travis Pope for their close readings of my thesis as part of the Viva process.  I will be forever grateful for their comments and insights.


Transit

Completed: 2016.08.09
Duration: 2:44
Ensemble: Electronic (blue, Csound)

MP3: Click Here
OGG: Click Here
FLAC: Click Here
Project Files: Click Here (.blue, .csd)

“Transit” was inspired by listening to the music of Terry Riley; I wanted to create a work built around long feedback delay lines. I began with a mental image of performers in a space working with electronics and worked to develop a virtual system to mimic what I had in mind. Once the setup was developed, I experimented with improvising material live and notating what felt right. I then continued this cycle of improvisation and notation to extend and develop the work.

Pink 0.3.0

Hi All,

I’d like to announce the release of Pink 0.3.0:

https://clojars.org/kunstmusik/pink
[kunstmusik/pink “0.3.0”]

Pink is an audio engine library for making music systems and compositions.

The ChangeLog is available at:

https://github.com/kunstmusik/pink/blob/master/CHANGELOG.md

This release introduces Pink processes. It reuses the state machine macro system from core.async to allow writing event-generation code using loops and waits.  The state machine execution is wrapped in a Pink control function and run synchronously with the engine.  A wait may be upon a given time value in seconds, a Pink signal (i.e., cues or latches), or a predicate function.  The use of signals and predicates provides a means for interprocess communication, enabling things like Lutoslawski-style aleatory (i.e., ad libitum writing) where performers and conductors signal one another.  Further information is available in the documentation for processes [1], and example code is shown in [2]. (For those unfamiliar with Lutoslawski’s writing, the performance instructions given at the bottom of pages 1 and 2 of his 3rd Symphony [3] may shed some light.)
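
As a rough sketch of what this style of writing looks like (the names used here, process, wait, cue, and signal-cue, are assumptions based on the description above rather than a verbatim copy of the Pink API; see the processes documentation [1] for the real thing):

    ;; assumed namespace and names; see [1] for the actual API
    (require '[pink.processes :refer [process wait cue signal-cue]])

    (defn play-note [midi-key]
      ;; stand-in for user event-generation code
      (println "note:" midi-key))

    (def start-cue (cue))

    (def performer
      (process
        (wait start-cue)          ;; wait upon a Pink signal
        (loop [i 0]
          (when (< i 4)
            (play-note (+ 60 i))
            (wait 0.5)            ;; wait upon a time value in seconds
            (recur (inc i))))))

    (def conductor
      (process
        (wait 2.0)
        (signal-cue start-cue)))  ;; cue the performer, ad libitum style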

This release also provides a translation of Scott Van Duyne’s piano
model from Common Lisp Music, biquad-based filters, and a number of
other audio, control, and utility functions.

Many thanks to Timothy Baldridge and other core.async contributors for
making the core.async ioc_macros easily extensible and reusable, and
again to Tim for the wonderfully clear videos on YouTube explaining
the design of the macros.

Thanks!
steven

[1] – https://github.com/kunstmusik/pink/blob/master/doc/processes.md
[2] – https://github.com/kunstmusik/music-examples/blob/master/src/music_examples/processes.clj
[3] – https://issuu.com/scoresondemand/docs/symphony_no3_7711

Reflections after a Thesis Submission

After a very long and tiring week, I managed to finish (with great support from my advisor, Victor, and my wife, Lisa) and submit my thesis for the PhD this past Friday. I flew back home on Saturday and have been focusing on getting myself organized and resting.  I am now waiting for the Viva Voce (thesis defense), which should be in November or December, depending upon the availability of the examiners. If that all goes well, I’ll make revisions to the thesis, submit the final hardcopy, and be done.

I haven’t had much time to write on this site for a very long while.  I’m happy to have a free moment now to sit and breathe and reflect. More than anything, after spending a long time developing and creating music systems, I have been extremely happy these past few days to spend time using those programs for composing.  I imagine it will take some time to integrate music making back into my daily life, to make it a real practice, but so far it is going well and I am excited just to continue on and see where it all goes.

I have a number of projects lined up for the short term and should be busy through the rest of the year.  With the weight of writing the thesis now lifted, all the rest of the work seems much more manageable.  If all goes well with the Viva, I will certainly enjoy this coming December, and I am looking forward to it already.



Pink 0.2.0, Score 0.3.0

Hi All,

I’d like to announce the release of Pink 0.2.0 and Score 0.3.0:

[kunstmusik/pink “0.2.0”]
[kunstmusik/score “0.3.0”]

Pink is an audio engine library, and Score is a library for
higher-level music representations (e.g. notes, phrases, parts,
scores).

ChangeLogs are available at:

https://github.com/kunstmusik/pink/blob/master/CHANGELOG.md
https://github.com/kunstmusik/score/blob/master/CHANGELOG.md

The quick version: Pink now has some new effects (ringmod, freeverb, chorus) as well as some new filters and delay-based audio functions.  Score has a new sieves namespace for Xenakis-style sieves. (An example using sieves and freeverb is available in the music-examples project [1].)
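
For those unfamiliar with sieves, the idea can be illustrated in a few lines of standalone Clojure (this is a conceptual sketch of a sieve as a union of residue classes, not the score.sieves API; see [1] for actual usage):

    ;; select integers x in [0, n) where x mod m = r for some (m, r) pair
    (defn sieve-union [pairs n]
      (filter (fn [x] (some (fn [[m r]] (= (mod x m) r)) pairs))
              (range n)))

    (sieve-union [[3 0] [4 1]] 16)
    ;; => (0 1 3 5 6 9 12 13 15)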

For any questions, please feel free to email me or post on the pink-users list.  For issues and PRs, please use the facilities on GitHub for each of the projects.

Thanks!
steven
[1] – https://github.com/kunstmusik/music-examples/blob/master/src/music_examples/sieves.clj

Pink 0.1.0, Score 0.2.0

Hi All,

I’d like to announce the release of Pink 0.1.0 and Score 0.2.0:

[kunstmusik/pink “0.1.0”]
[kunstmusik/score “0.2.0”]

Pink is an audio engine library, and Score is a library for higher-level music representations (e.g. notes, phrases, parts, scores).

For more information, please see the projects’ docs at:

http://github.com/kunstmusik/pink
http://github.com/kunstmusik/score

and examples of using both at:

http://github.com/kunstmusik/music-examples

To note, this is the first stable version of Pink.  I got into a bit of “let me add just one more feature…” but decided it was time to issue a release. Score’s changes since 0.1.0 are primarily a better organization of files, as well as support for organizing music into measured and/or timed scores.  See [1] for an example of measured score use.

Next steps planned for Pink are some more unit generators (e.g., comb filter, convolution) and effects (e.g., reverbs). Next steps planned for Score are currently just adding Xenakis-style sieves.  I’ve also been using plotting code enabled as a separate Leiningen profile in Pink, which I am planning to move to an additional library (tentatively called pink-viz).  The plan for pink-viz is to collect useful functions for helping to write unit generators (e.g., oscilloscope, Bode plot, FFT spectrogram).

Thanks!
steven

[1] – https://github.com/kunstmusik/music-examples/blob/master/src/music_examples/track1.clj#L263

Csound: adsr140

I’ve put together a UDO called adsr140 that is based on the Doepfer A-140 envelope generator [1].  It uses code by Nigel Redmon for its ADSR [2], ported to Csound ORC code, with the added ability to take in a retrigger signal.

To note: adsr140 treats positive values of the gate and retrigger signals as gate-on.  (The examples use the lfo opcode with the default sine as both gate and retrigger signals.)

The first example sound uses instr 1, which uses only the gate signal to trigger the ADSR.  The second example, which comes in at 18 seconds, uses instr 2, which uses both gate and retrigger.

Also to note, adsr140 uses a-rate signals for gate and retrigger, and it requires Csound 6.04 or later, as it uses a while loop.
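
A hypothetical usage sketch follows (the actual argument list of adsr140 may differ; see the posted examples for the real code):

    ; gate-only triggering, in the spirit of the instr 1 example
    instr 1
      agate   lfo 1, 0.5        ; sine LFO as gate: positive half-cycle = on
      aretrig init 0            ; no retriggering in this sketch
      ; argument order assumed: gate, retrig, attack, decay, sustain, release
      aenv    adsr140 agate, aretrig, 0.01, 0.1, 0.7, 0.3
      asig    = vco2(0.3, 220) * aenv
      outs asig, asig
    endin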

Enjoy!
steven

[1] – http://www.doepfer.de/a100_man/A140_man.pdf
[2] – http://www.earlevel.com/main/2013/06/02/envelope-generators-adsr-part-2/

p.s. – Life’s been busy lately, but I plan to add this to Pink as soon as I have a chance.


Developing Music Systems on the JVM with Pink and Score

My talk at Clojure/Conj 2014, entitled “Developing Music Systems on the JVM with Pink and Score” is now available online at:

I was a bit dismayed afterwards that I had mismanaged my time on stage and that my final example did not run (it ended up being a small bug introduced while practicing the presentation earlier that day, now fixed in the code repository). However, I think overall I was able to cover enough of the systems. I also got some good feedback from people, both compliments and great notes and questions that I look forward to incorporating back into the work.

I’m happy now to be back home and look forward to collecting my thoughts and figuring out next steps for everything. I am extremely grateful to have had the opportunity to present my work at the conference; many thanks to Cognitect for making it possible and for their incredible support.  I’m also blown away by the other speakers at the conference, as well as all the people I met there.  It’s a wonderful community, one which I hope continues to grow and remains as positive a group as it is today.