Excursions with Hydra

I’ve been practicing visuals with Olivia Jack’s wonderful system Hydra the past couple of days, and I have been enjoying it very, very much. It’s been a blast to have a higher-level abstraction layer to work with over coding GLSL shaders directly. I suppose a big factor in my joy is that I tend to spend a lot more time with JS than I do with GLSL. 😉

I think knowing some shader programming and practices certainly made learning Hydra a lot quicker than it would have been otherwise. Still, lots to learn and practice. 🙂

Hidden Laws

Completed: 2019.07.11
Duration: 7:55
Ensemble: Electronic (Blue, Csound)

MP3: Click Here
OGG: Click Here
FLAC: Click Here
Project Files: Click Here (.blue, .csd)

A quiet meditation developed using four processes made up of bit-shifting and bit-masking operations. The rules, or “laws”, of each process are not complex on their own but together create an intricate texture and rhythm.
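As a rough illustration of the idea only (this is a hypothetical Python sketch, not the actual processes from the Blue/Csound project linked above), each “law” below just shifts and masks a shared step counter to produce a 1/0 gate, and the four gates run side by side:

```python
# Hypothetical sketch: four simple bit-shifting/bit-masking "laws" reading
# different bits of the same step counter. Each returns a 1/0 gate value.
def bit_process(shift, mask):
    return lambda step: 1 if (step >> shift) & mask else 0

laws = [bit_process(0, 0b1), bit_process(1, 0b11),
        bit_process(3, 0b1), bit_process(4, 0b101)]

for step in range(16):
    print(step, [law(step) for law in laws])
```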

Lockhart Wavefolder

I’d like to share a Csound user-defined opcode port of the “Virtual Analog Model of the Lockhart Wavefolder” by Fabián Esqueda, Henri Pöntynen, Julian D. Parker and Stefan Bilbao that was presented at SMC 2017. The paper reference is:

F. Esqueda, H. Pöntynen, J. D. Parker, and S. Bilbao, “Virtual Analog Model of the Lockhart Wavefolder”, in Proceedings of the 14th International Sound and Music Computing Conference (SMC-17), Espoo, Finland, July 5–8, 2017, pp. 336–342.

The authors have placed a website with the original paper, errata, and a Max/MSP/gen~ implementation at:

I have converted the gen~ code to a Csound user-defined opcode and placed the code and test file here:

The test instrument follows the recommendations of the paper, using a cascaded wavefolder structure with input gain and DC offset parameters. (See Figure 9 in the paper.) The sampling rate is 88.2 kHz, as mentioned in the paper, to deal with aliasing without having to use oversampling.
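For intuition about the signal flow only, here is a rough Python sketch of that cascaded structure with input gain and DC offset. The per-stage sine folder below is a stand-in I chose for illustration; the actual Lockhart model in the paper and in the UDO is derived from the circuit and solved with the Lambert W function, so treat this purely as a picture of the structure:

```python
import math

def fold_stage(y):
    # Simple sine-shaped folding stage used as a stand-in for one folder;
    # NOT the circuit-derived Lockhart model from the paper.
    return math.sin(math.pi * y)

def cascaded_folder(x, gain, offset, stages=4):
    # Figure-9-style signal flow: apply input gain and DC offset, then run
    # the signal through several folding stages in series.
    y = gain * x + offset
    for _ in range(stages):
        y = fold_stage(y)
    return y

# Fold a 110 Hz sine while ramping the drive and slowly modulating the offset.
sr = 88200  # high sample rate, as in the paper, to keep aliasing down
out = [cascaded_folder(math.sin(2 * math.pi * 110 * n / sr),
                       gain=1.0 + 3.0 * n / sr,
                       offset=0.25 * math.sin(2 * math.pi * 0.5 * n / sr))
       for n in range(sr)]
```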

An example render of the test CSD file is below, modulating both the gain and DC offset parameters:

I’m very excited by all of the wonderful sonic possibilities! Bravo to the original authors for their work!

Additive Pitch Rhythms Using Hexbeat

Practice session today using additive pitch hexbeat rhythms to generate melodic contours.

Each hexbeat() generates a sequence of 1’s and 0’s, which is then multiplied by a value to alternate between, say, 7 and 0. If I add another pattern that alternates between 2 and 0, I get 9, 7, 2, and 0 as possibilities; with a third between 4 and 0, I get additional combinations. With patterns of different lengths (I’ve been using mostly prime-number lengths), this generates a nice, long overall pitch pattern, which is then masked by the rhythmic hexplay() pattern. I then add a choose() to say “play 70% of the time.” All of that together is quick to write and generates good variety, yet has an underlying structure that is stable. (It’s been on my mind how to mix randomness and stability in interesting ways, and I’ve found these explorations have been leading to some interesting pattern generation.)

This Desmos graph visualizes an example of a 3-part hex pitch rhythm added together:

(Click on the “Edit on Desmos” link in the graph to turn on/off visualization of the various individual hex pitch rhythms.)
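For the mechanics, here is a rough Python sketch of the approach. It assumes hexbeat() expands each hex digit into four 1/0 beats; the hex patterns, base pitch, the plain random test standing in for choose(), and the extra hex gate standing in for hexplay() are all made up for illustration, not taken from the practice session:

```python
import random

def hexbeat(pattern, beat):
    # Rough model (an assumption on my part): each hex digit expands into
    # four 1/0 beats, most significant bit first.
    bits = []
    for digit in pattern:
        value = int(digit, 16)
        bits += [(value >> shift) & 1 for shift in (3, 2, 1, 0)]
    return bits[beat % len(bits)]

def pitch_offset(beat):
    # Add hex patterns of different lengths, each scaled to its pitch value;
    # the sums give combinations like 0, 2, 7, 9, 11, and 13.
    return (7 * hexbeat("a9d", beat)
            + 2 * hexbeat("f08a2", beat)
            + 4 * hexbeat("6db4e21", beat))

base = 60  # hypothetical base pitch (MIDI middle C)
for beat in range(32):
    # Rhythmic hex mask, plus a "play 70% of the time" test.
    if hexbeat("a2f9", beat) and random.random() < 0.7:
        print(beat, base + pitch_offset(beat))
```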

Cellular Automata Streams

Download CSD Project Here

An implementation of one-dimensional (elementary) cellular automata as a stream, using feedback and a circular-buffer delay line. The stream generates 1 (live) and 0 (dead) values according to the initial state and rule.

The initial state may be an array of any length. Different array lengths affect the rate of mutation, compared with classical cellular automata implementations that use a fixed array of values between CA processing steps.

Rule numbers are implemented using Wolfram-style encoding, where the number is interpreted as bits; this lets the user work with standard Wolfram rule numbers. For example, Rule 30 gives the bit value 00011110.
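As a quick reference for the encoding, here is a small Python sketch (not the Csound code from the project) that decodes a Wolfram rule number into a lookup table and applies one classical, synchronous update of a fixed array:

```python
def rule_table(rule):
    # Wolfram encoding: bit n of the rule number is the next state for the
    # neighborhood whose three cells, read as a binary number, equal n.
    # Rule 30 -> 0b00011110.
    return {(n >> 2 & 1, n >> 1 & 1, n & 1): (rule >> n) & 1 for n in range(8)}

def ca_step(cells, rule):
    # One classical synchronous update of a fixed array, wrapping at the edges.
    table = rule_table(rule)
    size = len(cells)
    return [table[(cells[(i - 1) % size], cells[i], cells[(i + 1) % size])]
            for i in range(size)]

print(ca_step([0, 0, 0, 1, 0, 0, 0], 30))  # -> [0, 0, 1, 1, 1, 0, 0]
```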

For this project, the CA stream values are used to turn a held note of a specific frequency and amplitude on and off. Actions occur only when the stream value transitions from 0 to 1 or vice versa. The project runs indefinitely but was capped at ten minutes here.
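Here is a rough Python sketch of how I would describe the streaming form: a circular buffer one cell longer than the initial state acts as the delay line, the taps at the loop point supply the left/center/right neighbors, and each new cell is fed back in. Because cells update one at a time, the wrap points mix adjacent generations slightly, which is part of why this is comparable to, rather than identical to, the classical update. The transition check driving the hypothetical note on/off is at the bottom; the actual ca_stream UDO in the CSD may differ in its details:

```python
from collections import deque

def rule_table(rule):
    # Same Wolfram-style decoding as above.
    return {(n >> 2 & 1, n >> 1 & 1, n & 1): (rule >> n) & 1 for n in range(8)}

def ca_stream(initial, rule):
    # A delay line one cell longer than the initial state circulates the
    # previous generation; the three taps at the loop point act as the
    # left/center/right neighbors, and each new cell is fed back in.
    table = rule_table(rule)
    buf = deque([initial[-1]] + list(initial), maxlen=len(initial) + 1)
    while True:
        new = table[(buf[0], buf[1], buf[2])]
        buf.append(new)  # feedback into the circular buffer
        yield new

# Turn a hypothetical held note on/off only when the stream value changes.
stream = ca_stream([0, 0, 0, 1, 0, 0, 0], rule=30)
previous = 0
for step in range(64):
    value = next(stream)
    if value != previous:
        print(step, "note on" if value else "note off")
    previous = value
```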

Using a feedback+delay-based approach could be interesting for allowing user input. The ca_stream user-defined opcode here could be modified to take input and bitwise-OR it with the generated value before writing it back into the stream. This would allow users to “play” the stream, and could make the Class 1 rules that evolve to all 0’s interesting for generating a limited amount of output in time with the user input. (I am curious whether this could be useful and plan to investigate soon by implementing an interactive web application.)
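As a rough sketch of that modification (again hypothetical, reusing the streaming structure above), the only change is OR-ing an external 0/1 input into each new cell before it is written back into the delay line; get_input is a made-up callable returning 1 while a key or pad is held, else 0:

```python
from collections import deque

def ca_stream_with_input(initial, rule, get_input):
    # Same streaming structure as the sketch above, but the player's 0/1
    # input is OR'd into each new cell before it re-enters the delay line.
    table = {(n >> 2 & 1, n >> 1 & 1, n & 1): (rule >> n) & 1 for n in range(8)}
    buf = deque([initial[-1]] + list(initial), maxlen=len(initial) + 1)
    while True:
        new = table[(buf[0], buf[1], buf[2])] | get_input()
        buf.append(new)
        yield new
```

With a Class 1 rule, the cells die back to 0 once the input stops, so bursts of activity follow the player’s input and then fade out.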
