Live coding session using csound-live-code and https://live.csound.com. Code uses a single-oscillator synthesizer and feedback delay effect.
This past weekend I was happy to participate in the Algosix celebration of Algorave with a live code performance. (The first few minutes of the test sound were me trying to check sound on the stream and failing to realize it was working…).
The video shows a little bit of vim, csound, and csound-live-code. In particular, it demonstrates the hex beats work in the live code project, as well as using phasors and non-interpolating oscillator functions for pitch values. Drum sounds are from Iain McCurdy’s TR808 code and synth sounds were ones I have been working on in the live code project.
The event was a lot of fun with lots of different approaches, aesthetics, tools, etc. Lots of appreciation for the community and organizers of the event! (And many thanks for the opportunity to perform!)
I’d like to announce the release of Pink 0.4.0 and Score 0.4.0:
Pink is an audio engine library, and Score is a library for higher-level music representations (e.g. notes, phrases, parts, scores).
Change logs are available at:
Short version is that Pink has a number of new filters and effects, updates to minimize object allocations (e.g., Disruptor-style message ring buffers), and utility code for building streaming disk-based caches for pre-rendered ("frozen") parts. Score has new functions geared towards live coding (e.g., euclidean and hexadecimal beats), a new mini-language for notating musical lines, and a number of other updates.
Codox-generated documentation/site is now published at:
For any questions, please feel free to email me or post on the pink-users list. For issues and PRs, please use the facilities on GitHub for each of the projects.
I have long avoided FM (Frequency Modulation) synthesis in my own musical practice as I never felt connected with the results I was able to get myself. However, I recently had the great pleasure to attend a talk about the 50th anniversary of FM synthesis, given by its creator, John Chowning, and I was very inspired to explore FM once again. In so doing, I came across the Yamaha reface DX synthesizer and became fascinated with reproducing its feedback system to morph an operator’s output from a Sine to either Saw or Square waveform.
Now, I do not own a reface DX, so most of my research into it was done by reading manuals and watching video demonstrations on YouTube to get an idea of how it might be done. I knew from going through the literature on FM and PM (Phase Modulation) that using PM with feedback could move an operator's signal from a Sine to a Sawtooth wave, depending upon the amount of feedback. I was quickly able to set up a PM instrument in Csound and test this out, and it sounded much like what I had heard from the reface DX.
;; feedback PM - feedback moves towards saw
instr PMFBSaw
  ifreq = p4
  iamp = p5
  kfb = linseg(0, p3 * .5, 0.3, p3 * .5, 0)
  aphs = phasor(ifreq)

  ; init for feedback
  acar init 0
  acar = tablei(aphs+(acar*kfb), 1, 1, 0, 1)
  acar *= linen:a(1, 0.1, p3, 0.1) * iamp
  outc(acar, acar)
endin
In the code above, one can see that the acar output from tablei is also used as an input to the same opcode. The code runs in a single-sample context (in Csound parlance, with ksmps set to 1), so each output sample feeds back into the next lookup with a one-sample delay.
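As a rough illustration of what the instrument does, here is a Python sketch of the per-sample feedback loop. It assumes table 1 holds a single cycle of a sine wave (modeled here with math.sin); the function name and defaults are my own, not from the project.

```python
import math

def pm_feedback_saw(freq, fb, sr=44100, n=256):
    """Per-sample phase-modulation feedback: the previous output sample
    modulates the phase of a sine-table lookup, as in the Csound
    PMFBSaw instrument. The sine table is modeled with math.sin."""
    out = []
    phase = 0.0
    prev = 0.0  # corresponds to: acar init 0
    for _ in range(n):
        # tablei(aphs + acar*kfb, ...): read the sine at the modulated phase
        prev = math.sin(2 * math.pi * (phase + prev * fb))
        out.append(prev)
        phase = (phase + freq / sr) % 1.0  # corresponds to phasor(ifreq)
    return out
```

With fb set to 0 this reduces to a plain sine oscillator; raising fb bends the waveform toward a sawtooth.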
Now, the part I could not find anywhere in literature or discussion online was how to use operator feedback to morph from Sine to Square. (This is done by using 0 to -127 range for feedback on the reface DX.) After a couple days of research and exploration, I stumbled upon a calculation that sounded to my ears very much like what I had heard on the reface DX videos.
The code below shows the entire instrument:
;; feedback PM - feedback moves towards square
instr PMFBSquare
  ifreq = p4
  iamp = p5
  kfb = linseg(0, p3 * .5, 0.3, p3 * .5, 0)
  aphs = phasor(ifreq)

  ; init for feedback
  acar init 0
  acar = tablei(aphs+(acar*acar*kfb), 1, 1, 0, 1)
  acar *= linen:a(1, 0.1, p3, 0.1) * iamp
  outc(acar, acar)
endin
This instrument is virtually the same as the first, with one additional calculation: the acar feedback is multiplied by itself (the acar*acar term). Adding this one extra multiplication makes the signal move from Sine to Square.
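The squared-feedback variant can be sketched the same way in Python, again assuming the table holds one cycle of a sine wave (modeled with math.sin); the function name and defaults are mine, not from the project.

```python
import math

def pm_feedback_square(freq, fb, sr=44100, n=256):
    """Per-sample feedback loop where the feedback term is squared
    (acar*acar*kfb in the Csound code), which pushes the waveform
    toward a square rather than a saw."""
    out = []
    phase = 0.0
    prev = 0.0  # corresponds to: acar init 0
    for _ in range(n):
        # note prev * prev: the squared feedback term
        prev = math.sin(2 * math.pi * (phase + prev * prev * fb))
        out.append(prev)
        phase = (phase + freq / sr) % 1.0
    return out
```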
I posted this to the Csound User list and Iain McCurdy gave great feedback that the waveform could be morphed between Saw and Square by interpolating between acar and 1. This made a lot of sense, as when one of the acar factors becomes 1, the expression reduces back down to the normal feedback term that produces a Saw sound. After some further emails, I did some experiments using a cosine-based mapping for the interpolation that resulted in a nice transition.
;; feedback PM - feedback moves from square to saw
;; Based on Iain McCurdy's comments on Csound User List
instr PMFBSquareSaw
  ifreq = p4
  iamp = p5
  kfb = 0.25
  ;;kfb = linseg(0, p3 * .5, 0.5, p3 * .5, 0)
  kwaveshape = linseg(0, p3 * .5, 1, p3 * .5, 0) ;; range 0-1 for saw->square
  kwaveshape *= kwaveshape                       ;; adjust curve
  kwaveshape = $M_PI * (kwaveshape + 1)          ;; adjust from PI->2PI
  kwaveshape = (cos(kwaveshape) * 0.5) + 0.5     ;; adjust to 0-1

  aphs = phasor(ifreq)

  ; init for feedback
  acar init 0
  acar = tablei(aphs+(ntrpol(acar, a(1), kwaveshape)*acar*kfb), 1, 1, 0, 1)
  acar *= linen:a(1, 0.1, p3, 0.1) * iamp
  outc(acar, acar)
endin
I do not know whether these calculations match what the reface DX uses, but regardless, the sine->square morph sounded good to my ear and felt usable for the kind of sound work I was interested in doing. For now, I have posted the Csound CSD project file here. The audio example at the top of this post is an MP3 version of the output rendered from this project.
Hexadecimal (base 16) has been used in various forms of computer music for a very long time, generally as a condensed way to notate values within a power-of-two range. For example, rather than write out "15" as a decimal value (base 10), one can use "F", and rather than write out "255", one can use "FF". Hexadecimal numbers, in general, take up less horizontal space on the screen than their base-10 counterparts.
The difference in screen real estate is even more pronounced when comparing binary values (base 2) to their decimal and hex equivalents. Let's compare some values here:
Binary: 1101       Decimal: 13    Hex: D
Binary: 11001111   Decimal: 207   Hex: CF
A chart showing the binary, decimal, and hex values for numbers 0-255 is available here.
Now, one of the interesting challenges for me in live coding pattern-oriented music has been finding a very condensed notation for expressing beats (onsets). One way I've seen is to notate values in binary form within a string, such as "1000100010101000", which means "play notes where there are 1's, but don't play notes where there are 0's". In this case, on beats 1, 5, 9, 11, and 13.
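The binary-string scheme is easy to sketch in Python (the helper name here is hypothetical, just for illustration):

```python
def onsets_from_binary(pattern):
    """Return the 1-based beat numbers for each '1' in a binary beat string."""
    return [i + 1 for i, ch in enumerate(pattern) if ch == "1"]

# onsets_from_binary("1000100010101000") -> [1, 5, 9, 11, 13]
```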
Binary values in a string, on the one hand, quite clearly notate when an instrument should play. On the other hand, I've found they take up quite a bit of visual space and can be a bit slow to parse mentally.
One thing I’ve found rather useful is to notate onset patterns using hexadecimal strings. I first explored this in my Clojure systems Pink and Score, but recently translated the function I was using to Csound code. The Csound code turned out to be quite simple:
opcode hexbeat, i, Si
  Spat, ibeat xin

  ;; 4 bits/beats per hex value
  ipatlen = strlen(Spat) * 4

  ;; get beat within pattern length
  ibeat = ibeat % ipatlen

  ;; figure out which hex value to use from string
  ipatidx = int(ibeat / 4)

  ;; figure out which bit from hex to use
  ibitidx = ibeat % 4

  ;; convert individual hex from string to decimal/binary
  ibeatPat = strtol(strcat("0x", strsub(Spat, ipatidx, ipatidx + 1)))

  ;; bit shift/mask to check onset from hex's bits
  xout (ibeatPat >> (3 - ibitidx)) & 1
endop
And an example of its use is shown here:
if(hexbeat("f0d0d0f0", ibeat % 32) == 1) then
  schedule("Synth1", 0, p3, inScale(48, 0))
endif
The above is saying: "within the hexadecimal beat string f0d0d0f0, and given the current beat value between 0 and 31, check if that beat is an onset and, if so, perform Synth1".
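For reference, the same bit logic can be translated to Python (my own sketch, not part of the library):

```python
def hexbeat(pat, beat):
    """Return 1 if the given beat is an onset in the hex pattern string,
    else 0. Mirrors the Csound hexbeat UDO's bit shifting/masking."""
    patlen = len(pat) * 4            # 4 bits/beats per hex digit
    beat %= patlen                   # wrap beat into pattern length
    digit = int(pat[beat // 4], 16)  # which hex digit to read
    bit = beat % 4                   # which bit within that digit
    return (digit >> (3 - bit)) & 1
```

For example, "f" is binary 1111, so the first four beats of "f0d0d0f0" are all onsets, while "0" contributes four rests.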
The code above may be a little tricky to grok at first glance. I’ve started a Github repository for this code and made an online web app for live coding with Csound and this library. The live web site is available at:
and the source code is available at:
In working with the hex beat patterns, I found it took a little practice, but the meaning of various hex values became intuitive over time. Hexadecimal works really well, in my opinion, for notating pattern onsets, as each hex digit maps to 4 bits, which fits 4 16th-notes perfectly. With this, 4 hex digits can notate a single measure of 16 16th-notes, 8 hex digits two measures, and so on.
Ensemble: Electronic (Csound)
Project Files – Click here (.csd)
Reflections is a three-movement study that arose out of exploration into randomly generated symmetric odd/even signal wave tables for synthesis. The piece uses a number of different "reflected" table generation methods, with each movement developed intuitively according to the sound qualities of its table methods. Each movement has some indeterminate qualities in both form and sound, so each rendering of the piece is its own unique performance.
In Max Mathews’ “The Technology of Computer Music,” (the manual to Music V, for those who haven’t encountered it before), are two wonderful quotes:
“Scores for the instruments thus far discussed contain many affronts to a lazy composer, and all composers should be as lazy as possible when writing scores.” – pg. 62
“Furthermore, the note duration is already written in P4; it is an indignity to have to write P6 (= .02555/duration).” – pg. 63
Ensemble: Electronic (Csound)
“Wuji” was written for the Eastman Mobile Acousmonium (EMA), “an ensemble of individual custom-built loudspeakers” that is a project developed out of the Eastman Audio Research Studio, led by Oliver Schneller. The piece is designed for any number of available speakers spatially distributed in a room. It is made up of multiple renderings of the primary single-channel source, mapped one rendering per speaker.
In writing for EMA, one of the sonic experiences that came to mind was the memory of performing in instrumental ensembles and sitting on stage within a field of sound. It has been many years now since I last performed in an ensemble, but I remember that sound world as a unique and wonderful experience, one not easily reproduced by typical speaker setups that surround only the periphery of the listener. It is my hope that the listener experiences this work by not only being surrounded by the sound but also being within it.
Many thanks to Oliver Schneller and the members of EARS for the opportunity to compose for the EMA speaker ensemble.
“Wuji” was originally designed as a realtime work. The primary Csound CSD project uses limited amounts of indeterminacy to provide unique performances every render. The concept was first designed to be performed by multiple computers and/or mobile devices that would then be connected to various speakers spatially distributed in a room. However, as the nature of the hardware ensemble changed, it became easier to pre-generate multiple renders of the core project and provide them as a single 24-channel audio file. The 24-channel file would then be played back with each channel mapped to an available speaker.
To simulate the experience of the realtime rendering, 24 single-channel renders were created, each unique by the nature of the indeterminacy in the CSD. A second 24chanmix CSD was developed to take each single-channel file and map it to one of the 24 channels. The channels were played in groups roughly 2 seconds apart from each other, with slight randomness used to offset the start times of the channels within each group. To cover about 8 seconds of group offsets, the channels were batched into four 6-channel groups. This matches the intended realtime performance, in which groups of machines would start rendering about every two seconds, and simulates the slight imprecision of trying to start separate machines at the same time.
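The offset scheme described above can be sketched in Python. The group sizes and spacing follow the text (four 6-channel groups, roughly 2 seconds apart); the jitter amount and function name are my own assumptions, not taken from the project CSD.

```python
import random

def channel_start_times(n_channels=24, group_size=6, group_gap=2.0,
                        jitter=0.2, seed=None):
    """Batch channels into groups started group_gap seconds apart,
    with a small random jitter per channel to mimic the imprecision
    of hand-started machines."""
    rng = random.Random(seed)
    times = []
    for ch in range(n_channels):
        group = ch // group_size          # which 6-channel group
        times.append(group * group_gap + rng.uniform(0, jitter))
    return times
```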
For the 2-channel mix, the same 24 single-channel renders were used with a second 2chanmix CSD. The 2chanmix CSD uses the same offset time algorithm as the 24-channel mix (four 6-channel groups, roughly 2 seconds apart). Each channel is randomly panned within the stereo field and a random gain is applied. The 2-channel mix is, at best, a rough approximation of the intended experience. The effect of listening to a heterogeneous group of speakers--each with its own filtering characteristics, frequency response, and directionality--performing sound and interacting with a room, and the freedom to walk around the room as a listener, cannot be adequately captured in a 2-channel mix. A better 2-channel mix that accounts for some of these factors could certainly be done and may be revisited in the future.
A Makefile is provided that will, by default, render the 24 single-channel renders, render the 2-channel mix to a WAV file, then process the WAV to create MP3, OGG, and FLAC versions. To build the 24-channel performance version, a separate Makefile target (make syi_wuji_24chanmix.wav) must be run manually. A "repl" target is also provided that will start the project using Csound with --port=10000, suitable for use with the csound-repl plugin for Vim. It will also define the REPL macro so that running the project CSD will not execute the main score-generating command. This allows one to load the project and then go about experimenting with live coding.
“Wuji” is made up of three sound groups: a justly-tuned major chord, an ascending and descending pattern of dyads in Bohlen-Pierce tuning (equal-tempered version), and a multi-LFO-modulated “chaotic” sound. The first two groups use the same simple instrument, made up of a sawtooth oscillator filtered by the moogladder 24 dB/oct lowpass filter. The chaotic sound was designed around a triangle-wave oscillator filtered by the same moogladder filter. The two tunings were chosen for their particular sonic qualities, both for their respective materials alone and for their rich interactions.
My PhD thesis, “Extensible Computer Music Systems” is now freely available online as a PDF at:
It captures my thoughts on the importance of extensibility in computer music software and different ways of approaching it for both developers and users. The thesis discusses various extensibility strategies implemented in Csound, Blue, Pink and Score, from 2011-2016.
Looking back at the thesis, I’m proud of the work I was able to do. I am sure my thoughts will continue to evolve over time, but I think the core ideas have been represented well within the thesis. I hope those who take a look may find something of interest.
In addition to my acknowledgements in the thesis, I would also like to thank Ryan Molloy and Stephen Travis Pope for their close readings of my thesis as part of the Viva process. I will be forever grateful for their comments and insights.
Ensemble: Electronic (blue, Csound)
“Transit” was inspired by listening to the music of Terry Riley and by the idea of creating a work built around long feedback delay lines. I began with a mental image of performers in a space working with electronics and developed a virtual system to mimic what I had in mind. Once the setup was in place, I experimented with improvising material live and notating what felt right. I then continued this cycle of improvisation and notation to extend and develop the work.