How Csound Works – Presentation from the 2nd International Csound Conference

I recently gave a talk at the 2nd International Csound Conference, held at the Berklee College of Music in Boston, entitled “How Csound Works”.  In the talk, I went through high-level design, key data structures, the Orchestra compiler, the event and runtime system, and some other features.  I have placed a copy of the presentation slides as a zip file here:

Download Slides

Additionally, the slides can be viewed online here.

Regarding the slides, I used Hakim El Hattab’s wonderful JavaScript slide framework, Reveal.js.

Note: I believe the presentation was recorded.  When that is made available, I will update this entry with a link to the video.

2nd International Csound Conference 2013

The 2nd International Csound Conference took place this past weekend at the Berklee College of Music in Boston.  I had a fantastic time there getting to see Jean-Claude Risset, John Chowning, and Barry Vercoe all give keynotes.  There was a really nice tribute session for Max Mathews, with people like Tom Oberheim, David Zicarelli, and Max’s family there sharing some beautiful stories about Max.

Beyond that, it was great to see all the latest going on in the Csound community. I thoroughly enjoyed Rory Walsh’s presentation on Cabbage, as well as Andres Cabrera’s presentation on CsoundQt. They’ve both made some great developments in their software!  I also really enjoyed Oeyvind Brandtsegg’s presentation “Sonification with Csound - Quasar Correlations”, discussing an upcoming installation work.

There were certainly many presentations given on the second day, and as they were in parallel tracks, one simply couldn’t attend everything.  I ended up giving one presentation on the first day and two on the second, one of which spanned two parts.  Because of that, I certainly felt like I missed out on some presentations, but I believe everything was being filmed, so I am looking forward to watching those when they are released.

Regarding the presentations I gave, I think I did just a so-so job on the Blue presentation, but was happy with how the “How Csound Works” and “How to Use the Csound API” talks went. Given that there was a lot to prepare, I was happy in the end with how it all turned out.

The concerts were a very nice variety of pieces in different aesthetics. I was happy to be listening to music on very nice speakers, and especially enjoyed being in the company of many friends to do so.  I also enjoyed meeting a number of new people, and also finally putting faces to names I had long known from the mailing list.

Overall, I had a great time in Boston. I think Dr. Boulanger and the Berklee College of Music did a wonderful job in organizing and creating a very special and memorable event. I’m already looking forward to the next Csound Conference!

Julian Parker Ring Modulator

I wanted to share an implementation of Julian Parker’s digital model of a ring modulator. His paper from DAFx 2011 [1][2] was also used by the BBC Radiophonic Workshop in “Recreating the sounds of the BBC Radiophonic Workshop using the Web Audio API” [3].

I’ve implemented the ring modulator as a UDO, available here:

Blue Project and CSD

To run the CSD on the commandline, you can use:

csound -i adc -o dac -b 128 -B 512 ringmod.csd

The Blue project has knobs you can use to adjust the carrier’s amp and frequency. In the generated CSD, you can adjust gk_blue_auto0 for amp and gk_blue_auto1 for frequency, or just modify the poscil line in instr 1.

Note: the paper suggests using a high amount of oversampling (32x, with 8x or 16x being reasonable when using a sinusoidal carrier). This implementation does not do oversampling, which I believe is also the case for the BBC version.

As far as I’ve checked, the implementation matches the BBC one, with the exception of using a limiter instead of a compressor. Also, I made one optimization to the wavetable generation, extracting out a constant in the part where v > vl, but this is a minimal optimization as the wavetable is generated only once.
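To make the wavetable discussion concrete, the diode transfer curve from Parker’s paper can be sketched in Python. The breakpoint values vb and vl and the scale h below are assumptions on my part (roughly what I recall from the Web Audio version), not values taken from this CSD, and c is the constant factored out of the v > vl branch mentioned above:

```python
def diode(v, vb=0.2, vl=0.4, h=1.0):
    """Piecewise diode transfer curve from Parker's DAFx-11 model.

    vb, vl, and h are assumed values, not ones taken from the CSD;
    adjust to taste.
    """
    if v <= vb:
        return 0.0
    if v <= vl:
        # Quadratic knee between the breakpoints vb and vl.
        return h * (v - vb) ** 2 / (2 * vl - 2 * vb)
    # Above vl the curve is linear; c is the constant that can be
    # computed once outside the table-filling loop.
    c = h * (vl - vb) ** 2 / (2 * vl - 2 * vb)
    return h * (v - vl) + c

def diode_table(size=8192, lo=-1.0, hi=1.0):
    """Fill a wavetable with the diode curve sampled over [lo, hi]."""
    step = (hi - lo) / (size - 1)
    return [diode(lo + i * step) for i in range(size)]
```

Note that the curve is continuous at v = vl (both branches evaluate to h*(vl - vb)/2 there), which is easy to verify numerically.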

  • [1] –
  • [2] –
  • [3] –

UPDATE (2015-04-07):  I found a coding bug in the table generation in the original version I posted.  The zip file above has been updated with the corrected code.



TimeSphere

Completed: 2013.07.27
Duration: 7:03
Ensemble: Electronic (blue, Csound)

MP3: Click Here
OGG: Click Here
FLAC: Click Here
Project Files – Click here (.blue, .csd)

“TimeSphere” is inspired by the idea that time is not infinite but bounded, like a sphere, and that there are infinite possible projections through time within this sphere. (I don’t remember the exact origin of this thought, but I believe I may have derived it from Stephen Hawking’s idea of a closed universe in “A Brief History of Time”.)

I’ve always found the world to be filled with many strata of time. Things move together, alone, and somewhere in between, moving from one time flow to another. The idea of a sphere of time in which the world moves was an inspiration for this work, and was not interpreted literally. While composing this piece, I was very aware of the interplay between rational development and the exploration of where intuition guided me.

waveseq – Wave Sequencing User-Defined Opcode for Csound

Lately I’ve been interested in a number of hardware synthesizers that came out during the late ’80s and early ’90s, as I’ve found their synthesis methods rather curious and inventive.  One of them, the Korg Wavestation, has a very interesting synthesis system, combining Vector Synthesis and Wave Sequencing. Vector Synthesis is easy enough to implement by cross-fading between different oscillators or sound generators, but I was curious about implementing Wave Sequencing in Csound code.

To implement this, I used information obtained online, information in the manuals, time experimenting on a hardware Korg Wavestation, as well as time with the Korg Legacy Wavestation Software (I ended up purchasing the whole Legacy Collection). Here is an example of the waveseq User-Defined Opcode (UDO) using f-tables generated by GEN10:

Example 1:

As well as f-tables using sampled drum sounds:

Example 2:

The UDO is implemented such that it takes in an f-table that describes the entire wave sequence.  Therefore, most of the work in using this opcode is in creating the set of f-tables to sequence through.  I did implement the following features:

  • Tempo: a duration value of 24 equals a quarter note; if tempo is non-zero, it sets the duration of the quarter note; if tempo is 0, it defaults to about 105 BPM
  • WaveSequence: start wave, looping type (0 = forwards, 1 = forwards and backwards), start wave for loop, end wave
  • Wave Tables: single-cycle wave/single-shot wave/looped wave (determined on whether sample rate given in the waveseq table is 0, positive, or negative), amplitude adjustment, cross-fade time, duration of table to play
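To restate the tempo rule above as code (the helper name is mine, not part of the UDO):

```python
def wave_dur_seconds(dur, tempo=0):
    """Convert a waveseq duration value to seconds.

    A duration value of 24 equals one quarter note; a tempo of 0
    falls back to the default of roughly 105 BPM described above.
    """
    bpm = tempo if tempo > 0 else 105.0  # default tempo is approximate
    quarter = 60.0 / bpm                 # quarter-note length in seconds
    return (dur / 24.0) * quarter
```

For example, at 120 BPM a duration value of 6 comes out to 0.125 seconds, i.e. a 16th note, matching the drum example below.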

About the design: a wave sequence table holds information about how many tables are in the sequence and how to play them. For example, in Example 2, the wave sequence table used is:
itab_bass ftgenonce 0, 0, -9512, 1, "BDRUM11.WAV", 0, 0, 0
itab_tom ftgenonce 0, 0, -17600, 1, "TOM5.WAV", 0, 0, 0
itab_snare ftgenonce 0, 0, -10947, 1, "SNARE11.WAV", 0, 0, 0

iwaveseqtab ftgenonce 0, 0, -32, -2, 3, 1, 0, 0, 2,  
	itab_bass, ftsr(itab_bass), 1, 1, ixfade, iwavedur,  
	itab_tom, ftsr(itab_tom), 2, 1, ixfade, iwavedur,  
	itab_snare, ftsr(itab_snare), 2, 1, ixfade, iwavedur

The iwaveseqtab has a size of 32 (it just needs to be big enough to hold the information for the other tables), and its first line describes:
  • 3 tables are in this wave sequence
  • 1 is used to denote backwards and forwards playing through the sequence
  • 0 is the index of the start wave
  • 0 is the index of the loop start
  • 2 is the index of the loop end

After that come the tables to be used.  For example, the values following itab_bass specify:
  • sample rate of the table (positive here, so play as single-shot)
  • amplitude adjustment of 1 (amplitude is multiplied by this factor)
  • pitch adjustment of 1 (not currently implemented)
  • crossfade of 0 (ixfade = 0 earlier in the code, not listed above)
  • duration of 6 (iwavedur = 6 earlier in the code, not listed above); this is equivalent to a 16th note
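To make the layout concrete, here is a rough Python sketch that unpacks a flat list of values laid out like iwaveseqtab above (a five-value header, then six values per wave). The field names are my own illustrative labels, not identifiers from the UDO:

```python
def parse_waveseq(values):
    """Unpack a wave sequence table: 5 header values, then 6 per wave.

    The layout follows the description above; field names are
    illustrative labels, not part of the actual UDO.
    """
    header = {
        "num_waves": int(values[0]),
        "loop_type": int(values[1]),   # 0 = forwards, 1 = forwards and backwards
        "start_wave": int(values[2]),
        "loop_start": int(values[3]),
        "loop_end": int(values[4]),
    }
    waves = []
    for i in range(header["num_waves"]):
        base = 5 + i * 6
        tbl, sr, amp, pitch, xfade, dur = values[base:base + 6]
        waves.append({"table": tbl, "sr": sr, "amp": amp,
                      "pitch": pitch, "xfade": xfade, "dur": dur})
    return header, waves
```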

The waveseq UDO uses the tablexkt opcode, manually increments phasor variables, uses linear amplitude adjustments when cross-fading, and contains a fair amount of code for reading from the wave sequence table and configuring things. The code still requires some cleanup work, but I wanted to go ahead and make this initial, mostly-complete implementation available.  I plan to implement some further features for the waveseq opcode, then create either a full Blue instrument plugin or a BlueSynthBuilder version of this instrument that will allow easier creation and organization of f-tables into wave sequences. I am also thinking about adding Vector Synthesis (using four waveseq instances).

Overall, it was quite an enjoyable experience to study the Wavestation and learn to implement wave sequencing in Csound code.  In the end, I’m still looking at where I might use this opcode in my own work, but it’s nice to know it’s available should I find a use for it.

Download the examples and MP3s here: waveseq – example CSDs and MP3s


NoteParse – code for shorthand Csound score creation

noteParse – 2012.10.27

As part of my composition work lately I’ve been developing some new
scripts. Some are custom to the piece I’m working on, while others
have been a bit more generic.  I thought I’d share this NoteParse
Python code as I thought others might find it useful.

One of the things I’ve wanted is a shorthand for creating scores.  I’m
aware of a number of different approaches for this (microcsound,
lilypond, abc, mck/mml), but wanted something that worked more closely
to my own personal style.  I found I was able to write something
fairly quickly that is simple but flexible enough.

Attached are two python scripts.  The first works with standard
python, while the other requires being used within blue as it depends
on my python orchestra library that comes with blue.  Both use the
basic syntax I created, while the blue version allows score modifiers.
An example of the syntax with modifiers:

def stoccato(n):
    n.duration = n.duration * .5

modifiers = {"stoccato": stoccato }

a = "m:stoccato 8.00d.25a-12 4 m:clear 3d.5 2 1 0d1a-8"
notes = parseOrchScore(a, modifiers)

generates the following:

i x    0.0     0.125   8.00    8.00    -12.0   0   0
i x    0.25    0.125   8.04    8.04    -12.0   0   0
i x    0.5     0.5     8.03    8.03    -12.0   0   0
i x    1.0     0.5     8.02    8.02    -12.0   0   0
i x    1.5     0.5     8.01    8.01    -12.0   0   0
i x    2.0     1.0     8.00    8.00    -8.0    0   0

(Disregard that x is used for p1; these notes get further processed by
performers and performerGroup objects in my composition library.)

The things I’d like to point out:

1. The score line is:

a = "m:stoccato 8.00d.25a-12 4 m:clear 3d.5 2 1 0d1a-8"

Disregarding the m: statements, the line looks like:

a = "8.00d.25a-12 4 3d.5 2 1 0d1a-8"

How the library works is that the first note becomes a template note
that carries over values to the next note.  So, for the first note, it
uses pch 8.00, has a duration of .25, and an amplitude of -12.  The
next note, which is just a 4, means scale degree four in the same
octave as previously given, so the next generated note has a pch of
8.04, a duration of .25, and an amplitude of -12.  The third note uses
scale degree 3, but now has a duration of .5, and carries over the
amplitude of -12.
2. The use of modifiers is completely customizable.  To use modifiers,
supply a map that has the name of the modifier together with a
single-arg function that will process the note.  When an m: statement
is encountered, it will look up that modifier and set it as the
current modifier.  If an m:clear is found, it will set the modifier
function to None.  What I like about this is that a standard set of
modifiers could be created, but it’d also be easy to create a custom
modifier, very specific to a new piece, while working on it.
The non-blue version attached to this post currently just returns a
list of lists of values that one can then process to generate notes
that work with an individual instrument’s p-field signature (i.e.,
your instrument may have 5 p-fields, or 10, etc., and you’d just
apply a transform to the list to generate the notes you want).
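For example, a transform from those value lists to score lines for a hypothetical instrument with a (start, dur, pch, amp) p-field signature might look like:

```python
def to_sco(rows, instr=1):
    """Render rows of (start, dur, pch, amp) values as Csound i-statements.

    The p-field layout here is hypothetical; adapt it to your own
    instrument's signature.
    """
    lines = []
    for start, dur, pch, amp in rows:
        lines.append("i%d %g %g %s %g" % (instr, start, dur, pch, amp))
    return "\n".join(lines)
```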

I’m still debating other features to add, and my current plan is to
add this to my orchestra composition library that comes with blue.  My
thought is that the code is fairly small and useful to my own
composition method, but might be easy for others to take and modify
for their own use.

On Isaac Asimov’s Robot, Galactic Empire, and Foundation Series

Earlier this year I had been reading a lot of non-fiction and felt a need to balance out my reading with some fiction. I had noted in my list of books to read Asimov’s Foundation, and downloaded it for my Kindle. Very quickly I was consumed by the world created in this story: a rich and fascinating vision of a possible future history of mankind. After reading through the first book, I looked online and found that Asimov had connected up three different series of books: the Robot series, the Galactic Empire series, and the Foundation series.

Having read the first of the original Foundation Trilogy, I continued my way through the trilogy, then through the sequels, then the prequels. Afterwards, I began with I, Robot and moved through the Robot series and ended with the Galactic Empire series. I think if I were to do it all over again, I would start from the Robot Series and go chronologically through the Foundation series.

I loved the I, Robot short stories and the crime/mystery character of the Robot series. The short stories were very thought-provoking; I enjoyed the small twists and turns that came up as the ethical and moral issues of robots/technology and culture were explored. The books that featured R. Daneel Olivaw and Elijah Bailey were exciting and fun, and I found myself very much attached to the characters by the end.

Of the fifteen books, I found the Galactic Empire series to be the least compelling (though they were still enjoyable reads). They had less cohesion, being separate and mostly unrelated stories, and I felt the stories were a bit more predictable and not quite as polished.

Of the Foundation series, I found the original trilogy to be extremely solid. I enjoyed how the events unfolded, and how the vision of a galactic empire in decay and a new Foundation rising developed. I found the prequels exploring Hari Seldon to be fun but perhaps not as tightly written, and I thought the sequels were good, though I felt a bit disappointed with the ending of the final book. (Somehow, it felt like it didn’t quite answer the questions it raised.)

Overall, it was interesting that Asimov spent time to connect these series together. I think he was mostly successful in doing so, and imagine I will spend time to read through all of the books at some later point in my life. I do think a lot of the ideas he explored are as relevant today as when he wrote them, and would gladly recommend these books to others. In the end, they were a joy to read and inspired many thoughts.

Being Mostly Offline

Since we moved to Ireland, our primary way of going online while at home has been our cellphones. Originally we had planned to get either a cable modem or a cellular WiFi hotspot, but since we were traveling we did not get around to it during the first month here. Since then, we had been using our cellphones for tethering once in a while in the mornings and evenings, and have decided to try not having internet at home.

So far, things have worked out very well. We are no longer going online first thing in the morning, allowing ourselves to get up and enjoy our tai chi practices with a more peaceful mind. Our evenings have been very serene, and we have been getting either more work or a lot more reading done.

Granted, getting used to less internet did take a little time, but once we did it has been fantastic. Since the amount of internet we get on our phones is limited, we are much more conscious of using the internet purposefully. We are able to get most of the things involving larger amounts of data done while at our office spaces on campus or at WiFi hotspots in coffee shops. I think, though, that even when we do have internet access we are using it less and more purposefully.

In some ways, our current relationship to the internet and being connected reminds me of the time before the internet was pervasive. I have been enjoying this setup very much and I think it has been a great boost for general productivity, focus, and peace of mind. I am curious how things will develop over time, but I expect that we will continue to enjoy being mostly offline at home.

In Response to Computer Music Journal, Summer 2012 review of “The Audio Programming Book”

I was catching up with Computer Music Journal issues and noticed a review by Jeffrey Trevino and Drew Allen of The Audio Programming Book, for which I was a contributing author. The review makes some valid points regarding the organization of the book and what it covers. However, I found one section of the review problematic, regarding my chapter on modeling orchestral composition:

But why, you might ask, does an introduction to audio programming culminate in a computer model of the orchestra? The chapter should be reframed as an introduction to Csound, to avoid the current sense of aesthetic blindness. Rather than acknowledge the presence of the numerous higher-level electronic music programming systems that invite alternatives to the conservative score-orchestra model built into Csound, the chapter precludes the discussion of other approaches by failing to mention them. Instead of treating the score-orchestra metaphor flexibly, as is the case in most uses of Csound, the author cements its literal interpretation with a table entitled, “Comparing the Steps of Composing for Live Orchestra and Composing with [Csound’s] Orchestra Library.” (Emphasis Yi’s.) Lastly, he deals aesthetic alternatives their death blows in the form of an all-encompassing first-person plural: “By reusing an existing music software system, we can leverage the solutions already available and focus more closely on our compositional interests.”

I found this section of the review rather problematic, as it takes what I wrote out of context to serve the reviewers’ narrative that “the remainder of the printed text presents an arbitrary collection of special topic essays on possible relationships between Csound and programs written in C.”

There are a couple of points I wanted to discuss.  First, the text is specifically about modeling orchestral composition, and proposes a library design to do so. The focus of the article was on the composition aspect, that is, how the composer writes for an orchestra, and how to model that aspect of the relationship between composer and orchestra. It was not specifically about the orchestra itself, though it requires a discussion of the orchestra to understand the process of composing for it.

In that regard, I feel that the criticism that I did not discuss the “score-orchestra metaphor flexibly” is, in a way, criticism that I did not write a different text altogether. It seems that the reviewers read the text as being about a Music-N style score/orchestra design, or, specifically, about Csound’s score/orchestra design. The reviewers’ misunderstanding is notable in their addition of [Csound’s] here:

“Comparing the Steps of Composing for Live Orchestra and Composing with [Csound’s] Orchestra Library.” (Emphasis Yi’s.)

Note that the original title of the table did not include “Csound’s,” as the article was not about Csound’s score/orchestra at all.  The library that is the focus of the chapter uses Csound as a backend, but is in itself a generic design for orchestral composition that could be developed to target other backends as well (e.g., PD, Max, SuperCollider, ChucK, etc.).

Note: When I wrote the chapter, I did have some reservations that there might be confusion.  Csound and Music-N languages have a long-held design of a score and orchestra, and I did worry that it might get confused with the concepts I was trying to model.  For me, the Music-N score/orchestra was not the model of what I was doing, but just the tool I used to support the software model of the orchestral composition library. I tried to be clear about this distinction, but it seems to have still led to some confusion.

This leads to the second issue I had with the reviewers’ comments, regarding aesthetics.  In the text, I wrote a brief section on wanting to reuse an existing synthesis system.  This was so that the article could focus on orchestral composition and less on synthesis details, which I believe are covered in other chapters of the book.  I discussed why MIDI was not sufficient for the library’s design goals, and why I chose Csound.  However, I would have thought that readers familiar with SuperCollider, PD, Common Lisp Music, or other synthesis systems would understand the chapter to be a generic design that could easily be implemented in other languages and systems.  I did have to discuss Csound’s ORC/SCO model to explain how the orchestral composition library would ultimately map its results to another system’s software model to get audible results.

To me, this is not even an aesthetic choice, but an implementation detail.  The aesthetic choice here is that I wrote from the perspective of a composer looking to achieve compositional practices commonly found in orchestral composition and to reproduce them in a computer music context. In that regard, that is the focus of the chapter, and it is even the title of the text. I do not think, then, that not mentioning other aesthetic goals is a valid criticism.  It is as if one criticized an article about Impressionist technique for leaving out mention of Dada or Post-Modernism.

As for other tools and ways of working, I’d like to state that I have supported and continue to support other tools and software synthesis systems.  I have often said that everyone should use the tools that best suit them; ultimately, the important thing to me is the musical result, not the tools used.  I think it is the reviewers’ reading of the article as being about Csound that has led to misplaced criticism.

To summarize, I appreciate the work of the reviewers in reviewing The Audio Programming Book.  However, in regards to the section on my own chapter, I feel they misread the text.  In the end, I still find the chapter and the design of the library interesting, and hope it will be as useful to others, whatever system they use, as it has been for my own compositional work. Hopefully I have explained my own perspective clearly, and I would invite any further discussion.