Clojure and Blue/Csound Example

I have started work on a new composition and wanted to move from Python to Clojure as the scripting language for my music.  I was experimenting yesterday and was pleased to be able to write a score generation function that is fairly flexible, allowing hardcoded values as well as arbitrary sequences to be used for filling in the p-fields of the generated Csound score.

From my work session I came up with some fairly condensed code I was happy with:

(require '[clojure.string :refer [join]])

(defn pch-add [bpch interval]
  (let [scale-degrees 12
        new-val (+ (* scale-degrees (first bpch))
                   (second bpch)
                   interval)]
    [(quot new-val scale-degrees)
     (rem new-val scale-degrees)]))

(defn pch->sco [[a b]]
  (format "%d.%02d" a b))

(defn pch-interval-seq [pch & intervals]
  (reduce (fn [a b] (conj a (pch-add (last a) b))) [pch] intervals))

(defn pch-interval-sco [pch & intervals]
  (map pch->sco (apply pch-interval-seq pch intervals)))

(defn score-arg [a]
  (if (number? a)
    (repeat a)
    a))

(defn gen-score [& fields]
  (let [pfields (map score-arg fields)]
    (join "\n"
          (apply map (fn [& args] (join " " args)) (repeat "i") pfields))))

;; EXAMPLE CODE

(def score
  (gen-score 1 0 1
             (pch-interval-sco [6 0] 12 8 6 2 6)
             (pch-interval-sco [6 0] 12 8 6 2 6)
             (range -10 -100 -1) 0 1))

(print score)

The output from the print statement is:

i1    0.0    1    6.00    6.00    -10    0    1
i1    0.0    1    7.00    7.00    -11    0    1
i1    0.0    1    7.08    7.08    -12    0    1
i1    0.0    1    8.02    8.02    -13    0    1
i1    0.0    1    8.04    8.04    -14    0    1
i1    0.0    1    8.10    8.10    -15    0    1

The key part is the gen-score function.  Each of its arguments can be either a number or a sequence.  If a number is given, it is repeated for every note; sequences can be finite or infinite. The only requirement when using gen-score is that at least one of the arguments is a finite sequence, since that is what bounds the number of notes generated.
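For example, here is a small usage sketch (with made-up values) where the finite vector of pitches bounds the otherwise infinite repeated numbers:

(print (gen-score 1 0 1 [8.00 8.02 8.03] -12))
;; i 1 0 1 8.0 -12
;; i 1 0 1 8.02 -12
;; i 1 0 1 8.03 -12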

This is fairly similar to SuperCollider’s Pattern system, though it uses the standard abstractions found in Clojure. To me, it is a bit simpler to think in sequences for generating events than to think in the Pattern library’s object-oriented abstractions, but that is just my own preference.  Also, the Pattern system in SC is designed for real-time scheduling and has an option for a delta-time generator.  I think the delta-time aspect could be added fairly easily using an optional keyword argument to gen-score.
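As a rough sketch of the delta-time idea (the function name here is hypothetical, not part of the code above), a running sum over a sequence of deltas yields the p2 start times:

(defn deltas->starts [deltas]
  (reductions + 0 deltas))

;; (take 4 (deltas->starts (repeat 0.25))) => (0 0.25 0.5 0.75)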

As for making this work in real time, gen-score would have to be rewritten to return lists of p-field values instead of formatted strings. A priority queue could then be used to order notes by their start times, pausing until the scheduler’s clock demands that new notes be generated.
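A minimal sketch of that direction (hypothetical names, not tested against a real scheduler): return vectors of p-fields rather than strings, then order them on p2:

(defn gen-notes [& fields]
  (apply map vector (map score-arg fields)))

;; (gen-notes 1 0 1 [8.00 8.02]) => ([1 0 1 8.0] [1 0 1 8.02])

;; a java.util.PriorityQueue ordered on p2 (start time) could then
;; drive the scheduling loop:
(def note-queue
  (java.util.PriorityQueue. 64 (comparator #(< (second %1) (second %2)))))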

Ultimately, I was very pleased with the short amount of time required to write this code, as well as its succinctness.  One thing that has been on my mind is whether to use a CMask/PMask kind of approach for the p-field sequence generators.  In those systems, the counterpart to these sequence generators (I think they are just called Fields there) actually takes in a time argument.  That gives the generators some context about what to generate next and allows values to be masked over time.  I am fairly certain I will need to update gen-score, or create an alternative to it, to allow generator functions.  I’ll have to consider whether to use a Clojure protocol, but I may be able to get away with just testing whether the argument to gen-score is a function, a sequence, or a number and acting appropriately.  (Something to look at next work session. :) )
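A first sketch of that last idea (my own convention, using the note index as a stand-in for CMask’s time argument): extend score-arg with an fn? test so a generator function is called once per note:

(defn score-arg [a]
  (cond
    (fn? a)     (map a (range)) ; generator: called with note index 0, 1, 2, ...
    (number? a) (repeat a)
    :else       a))

;; e.g. (gen-score 1 0 1 #(- (* % 0.5) 12) [8.00 8.02 8.03])
;; would fill one p-field from the generator, one note at a time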

How Csound Works – Presentation from the 2nd International Csound Conference

I recently gave a talk entitled “How Csound Works” at the 2nd International Csound Conference, held at the Berklee College of Music in Boston.  In the talk, I went through Csound’s high-level design, key data structures, the Orchestra compiler, the event and runtime system, and some other features.  I have placed a copy of the presentation slides as a zip file here:

Download Slides

Additionally, the slides can be viewed online here.

Regarding the slides, I used Hakim El Hattab’s wonderful JavaScript slide framework, Reveal.js.

Note: I believe the presentation was recorded.  When that is made available, I will update this entry with a link to the video.

2nd International Csound Conference 2013

The 2nd International Csound Conference took place this past weekend at the Berklee College of Music in Boston.  I had a fantastic time there getting to see Jean-Claude Risset, John Chowning, and Barry Vercoe all give keynotes.  There was a really nice tribute session for Max Mathews, with people like Tom Oberheim, David Zicarelli, and Max’s family there sharing some beautiful stories about Max.

Beyond that, it was great to see all the latest going on in the Csound community. I thoroughly enjoyed Rory Walsh’s presentation on Cabbage, as well as Andres Cabrera’s presentation on CsoundQt. They’ve both made some great developments in their software!  I also really enjoyed Oeyvind Brandtsegg’s presentation “Sonification with Csound – Quasar Correlations”, discussing an upcoming installation work.

Many presentations were given on the second day, and as they ran in parallel tracks one simply couldn’t attend everything.  I ended up giving one presentation on the first day and two on the second, one of which spanned two parts.  Because of that I certainly felt like I missed out on some presentations, but I believe everything was being filmed, so I am looking forward to watching the recordings when they are out.

Regarding the presentations I gave, I think I did just a so-so job on the Blue presentation, and was happy with how the “How Csound Works” and “How to use the Csound API” talks went. Given that there was a lot to prepare, I was happy in the end with how it all turned out.

The concerts were a very nice variety of pieces in different aesthetics. I was happy to be listening to music on very nice speakers, and especially enjoyed being in the company of many friends to do so.  I also enjoyed meeting a number of new people, and also finally putting faces to names I had long known from the mailing list.

Overall, I had a great time in Boston. I think Dr. Boulanger and the Berklee College of Music did a wonderful job in organizing and creating a very special and memorable event. I’m already looking forward to the next Csound Conference!

Julian Parker Ring Modulator

I wanted to share an implementation of Julian Parker’s digital model of a ring modulator. The paper he wrote for DAFx 2011 [1][2] was also used in the BBC project “Recreating the sounds of the BBC Radiophonic Workshop using the Web Audio API” [3].

I’ve implemented the ring modulator as a UDO, available here:

Blue Project and CSD (ringmod.zip) 

To run the CSD on the command line, you can use:

csound -i adc -o dac -b 128 -B 512 ringmod.csd

The Blue project has knobs you can use to adjust the carrier’s amp and frequency. In the generated CSD, you can adjust gk_blue_auto0 for amp and gk_blue_auto1 for frequency, or just modify the poscil line in instr 1.

Note: the paper suggests using a high amount of oversampling (32x, with 8x or 16x being reasonable when using a sinusoidal carrier). This implementation does not do oversampling, which I believe is also the case for the BBC version.

As far as I’ve checked, the implementation matches the BBC one, with the exception of using a limiter instead of a compressor. Also, I made one optimization to the wavetable generation, extracting out a constant in the branch where v > vl, though this is a minor optimization since the wavetable generation is only done once.
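For reference, here is a rough Clojure sketch of the diode characteristic as I understand it from the paper (vb, vl, and h are the paper’s parameters; check [1][2] for the exact form), with the v > vl branch written as h·v plus a precomputed constant, which is the extraction mentioned above:

(defn diode [v vb vl h]
  ;; NOTE: piecewise form reconstructed from my reading of the paper;
  ;; verify against [1][2].  c is the constant part of the v > vl branch.
  (let [c (* h (- (* 0.5 (- vl vb)) vl))]
    (cond
      (<= v vb) 0.0
      (<= v vl) (/ (* h (- v vb) (- v vb))
                   (* 2.0 (- vl vb)))
      :else     (+ (* h v) c))))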

  • [1] – http://www.acoustics.hut.fi/publications/papers/dafx11-ringmod/
  • [2] – http://recherche.ircam.fr/pub/dafx11/Papers/66_e.pdf
  • [3] – http://webaudio.prototyping.bbc.co.uk/ring-modulator/

TimeSphere

Completed: 2013.07.27
Duration: 7:03
Ensemble: Electronic (blue, Csound)

MP3: Click Here
OGG: Click Here
FLAC: Click Here
Project Files: Click Here (.blue, .csd)

“TimeSphere” is inspired by the idea that time is not infinite but bounded, like a sphere, and that there are infinite possible projections through time within this sphere. (I don’t remember the exact origin of this thought, but I believe I may have derived it from Stephen Hawking’s idea of a closed universe in “A Brief History of Time”.)

I’ve always found the world to be filled with many strata of time. Things move together, alone, and somewhere in between, moving from one time flow to another. The idea of a sphere of time in which the world moves was an inspiration for this work, and not interpreted literally. While composing this piece, I was very aware of the interplay between rational development and the exploration of where intuition guided me.

waveseq – Wave Sequencing User-Defined Opcode for Csound

Lately I’ve been interested in a number of hardware synthesizers that came out during the late ’80s and early ’90s, as I’ve found their synthesis methods rather curious and inventive.  One of them, the Korg Wavestation, has a very interesting synthesis system, using a combination of Vector Synthesis and Wave Sequencing. Vector Synthesis is easy enough to implement by cross-fading between different oscillators or sound generators, but I was curious to see about implementing the Wave Sequencing in Csound code.

To implement this, I used information obtained online, information in the manuals, time experimenting on a hardware Korg Wavestation, as well as time with the Korg Legacy Wavestation Software (I ended up purchasing the whole Legacy Collection). Here is an example of the waveseq User-Defined Opcode (UDO) using f-tables generated by GEN10:

Example 1:

As well as f-tables using sampled drum sounds:

Example 2:

The UDO is implemented such that it takes in an f-table that describes the entire wave sequence.  Therefore, most of the work in using this opcode goes into creating the set of f-tables to sequence through.  I did implement the following features:

  • Tempo: a duration value of 24 equals a quarter note; if the tempo argument is non-zero, it sets the duration of the quarter note, and if it is 0, the tempo defaults to about 105 BPM
  • Wave Sequence: start wave, looping type (0 = forwards, 1 = forwards and backwards), start wave for the loop, end wave
  • Wave Tables: single-cycle, single-shot, or looped wave (determined by whether the sample rate given in the waveseq table is 0, positive, or negative), amplitude adjustment, cross-fade time, and duration of the table to play

Regarding the design, a wave sequence table holds information about how many tables are in the sequence and how to play them. For example, in Example 2, the wave sequence table used is:
itab_bass ftgenonce 0, 0, -9512, 1, "BDRUM11.WAV", 0, 0, 0
itab_tom ftgenonce 0, 0, -17600, 1, "TOM5.WAV", 0, 0, 0
itab_snare ftgenonce 0, 0, -10947, 1, "SNARE11.WAV", 0, 0, 0

iwaveseqtab ftgenonce 0, 0, -32, -2, 3, 1, 0, 0, 2,  
	itab_bass, ftsr(itab_bass), 1, 1, ixfade, iwavedur,  
	itab_tom, ftsr(itab_tom), 2, 1, ixfade, iwavedur,  
	itab_snare, ftsr(itab_snare), 2, 1, ixfade, iwavedur

The iwaveseqtab has a size of 32 (it just needs to be big enough to hold the information for the other tables), and its first line describes:

  • 3 tables are in this wave sequence
  • 1 denotes playing backwards and forwards through the sequence
  • 0 is the index of the start wave
  • 0 is the index of the loop start
  • 2 is the index of the loop end

After that come the entries for the tables to be used.  For example, the entry that starts with itab_bass gives, as illustrated in the sketch below:

  • the f-table to play (itab_bass)
  • the sample rate of the table (positive here, so it plays as a single shot)
  • an amplitude adjustment of 1 (the amplitude is multiplied by this factor)
  • a pitch adjustment of 1 (not currently implemented)
  • a crossfade of 0 (ixfade = 0 earlier in the code, not listed above)
  • a duration of 6 (iwavedur = 6 earlier in the code, not listed above), which is equivalent to a 16th note
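As a quick illustration of the layout just described (a hypothetical Clojure sketch with made-up names, not code from this project), the header and per-wave entries flatten into a single argument list:

(defn waveseq-table-args
  "Flatten a wave-sequence description into the flat list described
  above: a header of count, loop type, start wave, loop start, and
  loop end, followed by one [table sr amp pitch xfade dur] entry per wave."
  [loop-type start-wave loop-start loop-end entries]
  (concat [(count entries) loop-type start-wave loop-start loop-end]
          (apply concat entries)))

;; with three entries, the result begins (3 1 0 0 2 ...), matching the
;; first line of iwaveseqtab above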
The waveseq UDO uses the tablexkt opcode, does manual incrementing of phasor variables, uses linear amplitude adjustments when cross-fading, and contains a fair amount of code for reading from the wave sequence table and configuring things. The code still requires some cleanup work, but I wanted to go ahead and make this initial, mostly-complete implementation available.  I plan to implement some further features for the waveseq opcode, then create either a full Blue instrument plugin or a BlueSynthBuilder version of this instrument that will allow easier creation and organization of f-tables into wave sequences. I am also thinking about adding Vector Synthesis as well (which would then use four waveseq instances).

Overall, it was quite an enjoyable experience to study the Wavestation and learn to implement wave sequencing in Csound code.  In the end, I’m still looking at where I might use this opcode in my own work, but it’s nice to know it’s available should I find a use for it.

Download the examples and MP3s here: waveseq – example CSDs and MP3s


NoteParse – code for shorthand Csound score creation

noteParse – 2012.10.27

As part of my composition work lately I’ve been developing some new scripts. Some are custom to the piece I’m working on, while others have been a bit more generic.  I thought I’d share this NoteParse Python code as I thought others might find it useful.

One of the things I’ve wanted is a shorthand for creating scores.  I’m aware of a number of different approaches for this (microcsound, lilypond, abc, mck/mml), but wanted something that worked more closely to my own personal style.  I found I was able to write something fairly quickly that is simple but flexible enough.

Attached are two Python scripts.  The first works with standard Python, while the other requires being used within blue, as it depends on my Python orchestra library that comes with blue.  Both use the basic syntax I created, while the blue version also allows score modifiers. An example of the syntax with modifiers:

def stoccato(n):
    n.duration = n.duration * .5

modifiers = {"stoccato": stoccato }

a = "m:stoccato 8.00d.25a-12 4 m:clear 3d.5 2 1 0d1a-8"
notes = parseOrchScore(a, modifiers)

generates the following:

i x    0.0     0.125   8.00    8.00    -12.0   0       0
i x    0.25    0.125   8.04    8.04    -12.0   0       0
i x    0.5     0.5     8.03    8.03    -12.0   0       0
i x    1.0     0.5     8.02    8.02    -12.0   0       0
i x    1.5     0.5     8.01    8.01    -12.0   0       0
i x    2.0     1.0     8.00    8.00    -8.0    0       0

(Disregard that x is used for p1; these notes get further processed by performers and performerGroup objects in my composition library.)

The things I’d like to point out:

1. The score line is:

a = "m:stoccato 8.00d.25a-12 4 m:clear 3d.5 2 1 0d1a-8"

Disregarding the m: statements, the line looks like:

a = "8.00d.25a-12 4 3d.5 2 1 0d1a-8"

How the library works is that the first note becomes a template note that carries over values to the next note.  So, for the first note, it uses pch 8.00, has a duration of .25, and an amplitude of -12.  The next note, which is just a 4, means scale degree four of the same octave as previously given, so the next generated note has a pch of 8.04, a duration of .25, and an amplitude of -12.  The third note uses scale degree 3, but now has a duration of .5, and carries over the amplitude.
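For what it’s worth, the same carry-over idea is compact to express in Clojure as well (a hypothetical sketch with made-up keys, not the actual Python implementation): each parsed note map is merged over the previous one, so unspecified fields carry forward.

(defn carry-over [notes]
  (rest (reductions merge {} notes)))

;; (carry-over [{:pch 8.00 :dur 0.25 :amp -12} {:pch 8.04} {:pch 8.03 :dur 0.5}])
;; => ({:pch 8.0 :dur 0.25 :amp -12}
;;     {:pch 8.04 :dur 0.25 :amp -12}
;;     {:pch 8.03 :dur 0.5 :amp -12})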

2. The use of modifiers is completely customizable.  To use modifiers, supply a map that has the name of the modifier together with a single-arg function that will process the note.  When an m: statement is encountered, it will look up that modifier and set it as the current modifier.  If an m:clear is found, it will set the modifier function to None.  What I like about this is that a standard set of modifiers could be created, but it’d be easy to create a custom one while working on a new piece that might be very specific to that piece.

The non-blue version attached to this post currently just returns a list of lists of values that one can then process to generate notes that work with an individual instrument’s p-field signature (i.e. your instrument may have 5 p-fields, or 10, etc., and you’d just apply a transform to the list to generate the notes you want).

I’m still debating what other features to add, and my current plan is to add this to my orchestra composition library that comes with blue.  My thought is that the code is fairly small and useful to my own composition method, but should be easy for others to take and modify for their own use.

On Isaac Asimov’s Robot, Galactic Empire, and Foundation Series

Earlier this year I had been reading a lot of non-fiction and felt a need to balance out my reading with some fiction. I had noted Asimov’s Foundation in my list of books to read, and downloaded it for my Kindle. Very quickly I was consumed by the world created in this story: a rich and fascinating vision of a possible future history of mankind. After reading through the first book, I looked online and found that Asimov had connected up three different series of books: the Robot series, the Galactic Empire series, and the Foundation series.

Having read the first of the original Foundation Trilogy, I continued my way through the trilogy, then through the sequels, then the prequels. Afterwards, I began with I, Robot and moved through the Robot series and ended with the Galactic Empire series. I think if I were to do it all over again, I would start from the Robot Series and go chronologically through the Foundation series.

I loved the I, Robot short stories and the crime/mystery character of the Robot series. The short stories were very thought provoking; I enjoyed the small twists and turns that came up as the ethical and moral issues of robots/technology and culture were explored. The books that featured R. Daneel Olivaw and Elijah Bailey were exciting and fun, and I found myself very much attached to the characters by the end.

Of the fifteen books, I found the Galactic Empire series to be the least compelling (though still enjoyable reads). Those books had less cohesion, being separate and mostly unrelated stories, and I felt they were a bit more predictable and not quite as polished.

Of the Foundation series, I found the original trilogy to be extremely solid. I enjoyed how the events unfolded and how the vision of a galactic empire in decay and a new Foundation rising developed. I found the prequels exploring Hari Seldon to be fun but perhaps not as tightly written, and I thought the sequels were good, though I felt a bit disappointed with the ending of the final book. (Somehow, it felt like it didn’t quite answer the questions it raised.)

Overall, it was interesting that Asimov spent time connecting these series together. I think he was mostly successful in doing so, and I imagine I will spend time reading through all of the books again at some later point in my life. I do think a lot of the ideas he explored are as relevant today as when he wrote them, and I would gladly recommend these books to others. In the end, they were a joy to read and inspired many thoughts.

Being Mostly Offline

Since we moved to Ireland, our primary way of going online while at home has been through our cellphones. Originally we had planned to get either a cable modem or a cellular WiFi hotspot, but since we were traveling we did not get around to it during the first month here. Since then, we have been tethering to our cellphones once in a while in the mornings and evenings, and we have decided to try not having internet at home.

So far, things have worked out very well. We are no longer going online first thing in the morning, allowing ourselves to get up and enjoy our tai chi practices with a more peaceful mind. Our evenings have been very serene, and we have been getting either more work or a lot more reading done.

Granted, having less internet took a little getting used to, but once we adjusted it has been fantastic. Since the amount of internet we get on our phones is limited, we are much more conscious of using the internet purposefully. We are able to get most of the things involving larger amounts of data done while at our office spaces on campus or at WiFi hotspots in coffee shops. I think that even when we do have internet access we are using it less and more purposefully.

In some ways, our current relationship to the internet and being connected reminds me of the time before the internet was pervasive. I have been enjoying this setup very much, and I think it has been a great boost for general productivity, focus, and peace of mind. I am curious how things will develop over time, but I expect that we will continue to enjoy being mostly offline at home.