How Csound Works – Presentation from the 2nd International Csound Conference

I recently gave a talk at the 2nd International Csound Conference, held at the Berklee College of Music in Boston, entitled “How Csound Works”.  In the talk, I went through high-level design, key data structures, the Orchestra compiler, the event and runtime system, and some other features.  I have placed a copy of the presentation slides as a zip file here:

Download Slides

Additionally, the slides can be viewed online here.

Regarding the slides, I used Hakim El Hattab’s wonderful JavaScript presentation framework, reveal.js.

Note: I believe the presentation was recorded.  When that is made available, I will update this entry with a link to the video.

2nd International Csound Conference 2013

The 2nd International Csound Conference took place this past weekend at the Berklee College of Music in Boston.  I had a fantastic time there, getting to see Jean-Claude Risset, John Chowning, and Barry Vercoe all give keynotes.  There was a really nice tribute session for Max Mathews, with people like Tom Oberheim, David Zicarelli, and Max’s family there sharing some beautiful stories about Max.

Beyond that, it was great to see all the latest going on in the Csound community. I thoroughly enjoyed Rory Walsh’s presentation on Cabbage, as well as Andres Cabrera’s presentation on CsoundQt. They’ve both made some great developments in their software!  I also really enjoyed Oeyvind Brandtsegg’s presentation “Sonification with Csound – Quasar Correlations”, discussing an upcoming installation work.

Many presentations were given on the second day, and since they ran in parallel tracks one simply couldn’t attend everything.  I ended up giving one presentation on the first day and two on the second, one of which spanned two parts.  Because of that I felt like I missed out on a number of presentations, but I believe everything was being filmed, so I am looking forward to watching those when they are made available.

Regarding the presentations I gave, I think I did just a so-so job on the Blue presentation, but was happy with how the “How Csound Works” and “How to use the Csound API” talks went. Given that there was a lot to prepare, I was pleased in the end with how it all turned out.

The concerts offered a very nice variety of pieces in different aesthetics. I was happy to be listening to music on very nice speakers, and especially enjoyed doing so in the company of many friends.  I also enjoyed meeting a number of new people and finally putting faces to names I had long known from the mailing list.

Overall, I had a great time in Boston. I think Dr. Boulanger and the Berklee College of Music did a wonderful job in organizing and creating a very special and memorable event. I’m already looking forward to the next Csound Conference!

Julian Parker Ring Modulator

I wanted to share an implementation of Julian Parker’s digital model of a ring modulator. His DAFx 2011 paper [1][2] was also used by the BBC in “Recreating the sounds of the BBC Radiophonic Workshop using the Web Audio API” [3].

I’ve implemented the ring modulator as a UDO, available here:

Blue Project and CSD (ringmod.zip) 

To run the CSD on the commandline, you can use:

csound -i adc -o dac -b 128 -B 512 ringmod.csd

The Blue project has knobs you can use to adjust the carrier’s amp and frequency. In the generated CSD, you can adjust gk_blue_auto0 for amp and gk_blue_auto1 for frequency, or just modify the poscil line in instr 1.

Note: the paper suggests using a high amount of oversampling (32x; 8x or 16x being reasonable when using a sinusoidal carrier). This implementation does not do oversampling, which I believe the BBC version does not do either.

As far as I’ve checked, the implementation matches the BBC one, with the exception of using a limiter instead of a compressor. Also, I made one optimization to the wavetable generation to extract out a constant in the part where v > vl, but this is a minimal optimization, as the wavetable generation is only done once anyway.
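
For anyone curious about that part: below is a rough Python sketch of the piecewise diode waveshaping function from Parker’s paper that the wavetable is built from. The breakpoint values vb and vl, the slope h, the table size, and the input range here are illustrative assumptions, not values taken from the posted CSD.

import numpy as np

def diode(v, vb=0.2, vl=0.4, h=1.0):
    # Piecewise diode transfer function, applied elementwise to array v:
    #   0                              for v <= vb
    #   h*(v - vb)^2 / (2*vl - 2*vb)   for vb < v <= vl
    #   h*v + k                        for v > vl
    v = np.asarray(v, dtype=float)
    out = np.zeros_like(v)
    mid = (v > vb) & (v <= vl)
    out[mid] = h * (v[mid] - vb) ** 2 / (2 * vl - 2 * vb)
    # Above vl the curve is linear; this additive term is a constant that
    # can be precomputed, which is the small optimization mentioned above.
    k = h * (vl - vb) ** 2 / (2 * vl - 2 * vb) - h * vl
    hi = v > vl
    out[hi] = h * v[hi] + k
    return out

# Sample the function into a table (size and range assumed for illustration):
table = diode(np.linspace(-1.0, 1.0, 8192))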

  • [1] – http://www.acoustics.hut.fi/publications/papers/dafx11-ringmod/
  • [2] – http://recherche.ircam.fr/pub/dafx11/Papers/66_e.pdf
  • [3] – http://webaudio.prototyping.bbc.co.uk/ring-modulator/

UPDATE (2015-04-07):  I found a coding bug in the table generation in the original version I posted.  The zip file above has been updated with the corrected code.


waveseq – Wave Sequencing User-Defined Opcode for Csound

Lately I’ve been interested in a number of hardware synthesizers that came out during the late ’80s and early ’90s, as I’ve found their synthesis methods rather curious and inventive.  One of them, the Korg Wavestation, has a very interesting synthesis system, using a combination of Vector Synthesis and Wave Sequencing. Vector Synthesis is easy enough to implement by cross-fading between different oscillators or sound generators, but I was curious to see about implementing Wave Sequencing in Csound code.

To implement this, I used information found online and in the manuals, time spent experimenting on a hardware Korg Wavestation, as well as time with the Korg Legacy Wavestation software (I ended up purchasing the whole Legacy Collection). Here is an example of the waveseq User-Defined Opcode (UDO) using f-tables generated by GEN10:

Example 1:

As well as f-tables using sampled drum sounds:

Example 2:

The UDO is implemented such that it takes in an f-table that describes the entire wave sequence.  Therefore, most of the work in using this opcode lies in creating the set of f-tables to sequence through.  I did implement the following features:

  • Tempo: a duration value of 24 corresponds to a quarter note; if tempo is non-zero, it is used to set the duration of the quarter note, and if 0, the tempo defaults to about 105 BPM (see the small sketch just after this list)
  • Wave Sequence: start wave, looping type (0 = forwards, 1 = forwards and backwards), start wave for loop, end wave
  • Wave Tables: single-cycle wave/single-shot wave/looped wave (determined by whether the sample rate given in the waveseq table is 0, positive, or negative), amplitude adjustment, cross-fade time, duration of table to play
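
As a small illustration of the duration convention above (a Python sketch, not the UDO code):

def waveseq_dur_seconds(dur, tempo=0):
    # 24 duration units make one quarter note; a tempo of 0 falls back
    # to roughly 105 BPM, as described above.
    bpm = tempo if tempo != 0 else 105
    quarter = 60.0 / bpm              # seconds per quarter note
    return dur * quarter / 24.0

print(waveseq_dur_seconds(24))            # one quarter note at ~105 BPM, ~0.571 s
print(waveseq_dur_seconds(6, tempo=120))  # a 16th note at 120 BPM, 0.125 s
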
Regarding the design, a wave sequence table holds information about how many tables are in the sequence and how to play them. For example, the wave sequence table used in Example 2 is:
itab_bass ftgenonce 0, 0, -9512, 1, "BDRUM11.WAV", 0, 0, 0
itab_tom ftgenonce 0, 0, -17600, 1, "TOM5.WAV", 0, 0, 0
itab_snare ftgenonce 0, 0, -10947, 1, "SNARE11.WAV", 0, 0, 0

iwaveseqtab ftgenonce 0, 0, -32, -2, 3, 1, 0, 0, 2,  
	itab_bass, ftsr(itab_bass), 1, 1, ixfade, iwavedur,  
	itab_tom, ftsr(itab_tom), 2, 1, ixfade, iwavedur,  
	itab_snare, ftsr(itab_snare), 2, 1, ixfade, iwavedur
The iwaveseqtab has a size of 32 (it just needs to be big enough to hold the information for the other tables), and in its first line it describes:
  • 3 tables are in this wave sequence
  • 1 is used to denote backwards and forwards playing through the sequence
  • 0 is the index of the start wave
  • 0 is the index of the loop start
  • 2 is the index of the loop end
After that come the entries for the tables to be used.  For example, the entry that starts with itab_bass specifies:
  • sample rate of the table (positive here, so play as single-shot)
  • amplitude adjustment of 1 (amplitude is multiplied by this factor)
  • pitch adjustment of 1 (not currently implemented)
  • crossfade of 0 (ixfade = 0 earlier in the code, not listed above)
  • duration of 6 (iwavedur = 6 earlier in the code, not listed above), which is equivalent to a 16th note
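
To make the table layout concrete, here is a small Python illustration (not part of the UDO) of how the flat list of values can be assembled: a 5-value header followed by 6 values per wave. The table numbers and sample rates below are made up; in the CSD they come from ftgenonce and ftsr().

def build_waveseq_values(waves, loop_type=0, start_wave=0,
                         loop_start=0, loop_end=None):
    # waves: list of (table_num, sample_rate, amp, pitch, xfade, dur) tuples
    if loop_end is None:
        loop_end = len(waves) - 1
    values = [len(waves), loop_type, start_wave, loop_start, loop_end]
    for wave in waves:
        values.extend(wave)
    return values

# Roughly the drum example above (ixfade = 0, iwavedur = 6), with
# hypothetical table numbers 1-3 and 44.1 kHz sample rates:
print(build_waveseq_values([
    (1, 44100, 1, 1, 0, 6),   # itab_bass (positive sr: single-shot)
    (2, 44100, 2, 1, 0, 6),   # itab_tom
    (3, 44100, 2, 1, 0, 6),   # itab_snare
], loop_type=1))
# -> [3, 1, 0, 0, 2, 1, 44100, 1, 1, 0, 6, 2, 44100, 2, 1, 0, 6, 3, 44100, 2, 1, 0, 6]
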
The waveseq UDO uses the tablexkt opcode, does manual incrementing of phasor variables, uses linear amplitude adjustments when cross-fading, and has a fair amount of code for reading from the wave sequence table and configuring things. The code still requires some cleanup work, but I wanted to go ahead and make this initial, mostly-complete implementation available.

I plan to implement some further features for the waveseq opcode, then create either a full Blue instrument plugin or a BlueSynthBuilder version of this instrument that will allow easier creation and organization of f-tables into wave sequences. I am also thinking about adding Vector Synthesis (using four waveseq instances).

Overall, it was quite an enjoyable experience to study the Wavestation and learn to implement wave sequencing in Csound code.  In the end, I’m still looking at where I might use this opcode in my own work, but it’s nice to know it’s available should I find a use for it.

Download the examples and MP3s here: waveseq – example CSDs and MP3s


NoteParse – code for shorthand Csound score creation

noteParse – 2012.10.27

As part of my composition work lately I’ve been developing some new
scripts. Some are custom to the piece I’m working on, while others
have been a bit more generic.  I thought I’d share this NoteParse
Python code, as others might find it useful.

One of the things I’ve wanted is a shorthand for creating scores.  I’m
aware of a number of different approaches for this (microcsound,
lilypond, abc, mck/mml), but wanted something that worked more closely
to my own personal style.  I found I was able to write something
fairly quickly that is simple but flexible enough.

Attached are two Python scripts.  The first works with standard
Python, while the other must be used within blue, as it depends
on my Python orchestra library that comes with blue.  Both use the
basic syntax I created, while the blue version also allows score
modifiers.  An example of the syntax with modifiers:

def stoccato(n):
    n.duration = n.duration * .5

modifiers = {"stoccato": stoccato }

a = "m:stoccato 8.00d.25a-12 4 m:clear 3d.5 2 1 0d1a-8"
notes = parseOrchScore(a, modifiers)

generates the following:

i x    0.0     0.125   8.00    8.00    -12.0   0       0
i x    0.25    0.125   8.04    8.04    -12.0   0       0
i x    0.5     0.5     8.03    8.03    -12.0   0       0
i x    1.0     0.5     8.02    8.02    -12.0   0       0
i x    1.5     0.5     8.01    8.01    -12.0   0       0
i x    2.0     1.0     8.00    8.00    -8.0    0       0

(Disregard that x is used for p1; these notes get further processed by
performers and performerGroup objects in my composition library.)

The things I’d like to point out:

1. The score line is:

a = "m:stoccato 8.00d.25a-12 4 m:clear 3d.5 2 1 0d1a-8"

Disregarding the m: statements, the line looks like:

a = "8.00d.25a-12 4 3d.5 2 1 0d1a-8"

The way the library works is that the first note becomes a template
note whose values carry over to the next note.  So the first note
uses pch 8.00, has a duration of .25, and an amplitude of -12.  The
next note, which is just a 4, means scale degree four in the same
octave as previously given, so the next generated note has a pch of
8.04, a duration of .25, and an amplitude of -12.  The third note
uses scale degree 3, but now has a duration of .5, and carries over
the amplitude.
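
To make the carry-over idea concrete, here is a small standalone
sketch (not the actual NoteParse code); modifier statements are simply
skipped, and it returns (start, duration, pch, amplitude) tuples:

import re

TOKEN_RE = re.compile(r"^(?:(\d+)\.(\d+)|(\d+))(?:d([\d.]+))?(?:a(-?[\d.]+))?$")

def parse_score(line):
    notes = []
    octave, degree, dur, amp = 8, 0, 0.25, 0.0
    start = 0.0
    for token in line.split():
        if token.startswith("m:"):
            continue                    # modifiers ignored in this sketch
        m = TOKEN_RE.match(token)
        if not m:
            continue
        if m.group(1) is not None:      # full pch such as 8.00
            octave, degree = int(m.group(1)), int(m.group(2))
        else:                           # bare scale degree, same octave
            degree = int(m.group(3))
        if m.group(4):
            dur = float(m.group(4))     # d... sets a new duration
        if m.group(5):
            amp = float(m.group(5))     # a... sets a new amplitude
        notes.append((start, dur, "%d.%02d" % (octave, degree), amp))
        start += dur
    return notes

print(parse_score("8.00d.25a-12 4 3d.5 2 1 0d1a-8"))
# [(0.0, 0.25, '8.00', -12.0), (0.25, 0.25, '8.04', -12.0),
#  (0.5, 0.5, '8.03', -12.0), (1.0, 0.5, '8.02', -12.0),
#  (1.5, 0.5, '8.01', -12.0), (2.0, 1.0, '8.00', -8.0)]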

2. The use of modifiers is completely customizable.  To use modifiers,
supply a map from the name of each modifier to a single-arg function
that will process the note.  When an m: statement is encountered, it
will look up that modifier and set it as the current modifier.  If an
m:clear is found, it will set the modifier function to None.  What I
like about this is that a standard set of modifiers could be created,
but it would also be easy to create a custom modifier, very specific
to the piece being worked on.

The non-blue version attached here currently just returns a list of
lists of values that one can then process to generate notes matching
an individual instrument’s p-field signature (i.e. your instrument may
have 5 p-fields, or 10, etc., and you would just apply a transform to
the list to generate the notes you want).
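
A hypothetical transform along those lines (the names, the p-field
layout, and the [start, dur, pch, amp] value order are invented for
illustration):

parsed = [
    [0.0, 0.25, "8.00", -12.0],
    [0.25, 0.25, "8.04", -12.0],
]

def to_sco(values, instr=1):
    # map each entry onto an instrument expecting p4 = pch, p5 = amplitude
    return "\n".join(
        "i %d %g %g %s %g" % (instr, start, dur, pch, amp)
        for start, dur, pch, amp in values
    )

print(to_sco(parsed))
# i 1 0 0.25 8.00 -12
# i 1 0.25 0.25 8.04 -12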

I’m still debating other features to add, and my current plan is to
add this to my orchestra composition library that comes with blue.  My
thought is that the code is fairly small and tailored to my own
composition method, but should be easy for others to take and modify
for their own use.