Practice session today using additive pitch hexbeat rhythms to generate melodic contours.
Each hexbeat() generates a sequence of 1’s and 0’s, which is then multiplied by a pitch value to alternate between, say, 7 and 0. If I add one that alternates between 2 and 0, I get 9, 7, 2, and 0 as possibilities; with another that alternates between 4 and 0, I get additional combinations. With patterns of different lengths (I’ve been using mostly prime-number lengths), this generates a nice long overall pitch pattern, which is then masked by the rhythmic hexplay() pattern. I then add a choose() to say “play 70% of the time.” I find all of that together is quick to write and generates good variety, but has an underlying structure that is stable. (It’s been on my mind how to mix randomness and stability in interesting ways, and these explorations have been leading to some interesting pattern generation.)
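For anyone curious how the additive patterns combine, here is a minimal sketch of the idea in Python. This is illustrative only: hexbeat(), hexplay(), and choose() are csound-live-code functions, and everything below is my own approximation of the behavior.

```python
# Sketch of additive hex pitch rhythms (illustrative, not the
# csound-live-code implementation).
import random

def hex_to_bits(pattern):
    """Expand a hex string into 1's and 0's, 4 bits per hex digit."""
    return [int(b) for digit in pattern for b in format(int(digit, 16), "04b")]

def hexbeat(pattern, beat):
    """Return 1 or 0 for the given beat, cycling through the pattern."""
    bits = hex_to_bits(pattern)
    return bits[beat % len(bits)]

def pitch_offset(beat):
    """Add three hex pitch rhythms, each gating a different interval.
    The patterns are 4, 12, and 20 beats long, so the summed pitch
    line only repeats every 60 beats."""
    return (hexbeat("a", beat) * 7 +        # alternates 7 and 0
            hexbeat("92e", beat) * 2 +      # contributes 2 or 0
            hexbeat("8c2b7", beat) * 4)     # contributes 4 or 0

# Mask the pitch line with a rhythmic pattern, then "play 70% of the
# time" in the spirit of choose().
for beat in range(16):
    if hexbeat("96", beat) and random.random() < 0.7:
        print("beat", beat, "note", 60 + pitch_offset(beat))
```

Here the three gated intervals (7, 2, 4) can sum to any of eight pitch offsets, which is where the variety comes from, while the fixed patterns supply the underlying stability.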
This Desmos graph visualizes an example of three hex pitch rhythms added together:
(Click on the “Edit on Desmos” link in the graph to turn on/off visualization of the various individual hex pitch rhythms.)
Live code session using csound-live-code and https://live.csound.com.
The initial coding takes about 2m40s before the sound begins.
For those interested in the code, the session uses:
1. start UDO for working with the different always-on instruments
2. vco2 square wave for enveloping (has a nicer quality to it than using lfo with type 3, IMO)
3. portk for frequency glide
4. chnset for immediate setting of a channel value as part of performance
5. chnset within an always-on instrument (“Mod”) together with k-rate randh to show how to approach using continuous values with channels
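Of the opcodes above, portk’s behavior is perhaps the least obvious if you haven’t used it: it smooths a control signal toward its target so that pitch jumps become glides. Here is a minimal Python sketch of that idea (the half-time behavior is my approximation of Csound’s portk, and all names below are my own):

```python
# One-pole lag sketch of portk-style frequency glide (illustrative).
import math

def make_glide(half_time, sample_rate):
    """Return a smoother whose output covers half the remaining
    distance to the target every `half_time` seconds."""
    coeff = math.exp(math.log(0.5) / (half_time * sample_rate))
    state = [0.0]
    def step(target):
        # Move the held value a fixed fraction closer to the target.
        state[0] = target + (state[0] - target) * coeff
        return state[0]
    return step

glide = make_glide(half_time=0.05, sample_rate=1000)
# Jump the target from 0 to 440 Hz; the output approaches it gradually
# instead of stepping there instantly.
values = [glide(440.0) for _ in range(200)]
```

With a 0.05 s half-time at a 1000 Hz control rate, the output is halfway to the target (220) after 50 steps, which is the kind of smooth slide portk gives a frequency channel.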
This past weekend I was happy to participate in the Algosix celebration of Algorave with a live code performance. (The first few minutes of the test sound were me trying to check sound on the stream and failing to realize it was working…).
The video shows a little bit of vim, csound, and csound-live-code. In particular, it demonstrates the hex beats work in the live code project, as well as using phasors and non-interpolating oscillator functions for pitch values. Drum sounds are from Iain McCurdy’s TR808 code and synth sounds were ones I have been working on in the live code project.
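The phasor-plus-non-interpolating-lookup approach to pitch values can be sketched outside Csound as well. In this hedged Python illustration (the function and table names are my own, not the Csound opcodes), a repeating 0..1 ramp indexes a pitch table with truncation, so each entry is held for a stretch of time rather than glided between:

```python
# Sketch of using a phasor to drive a non-interpolating table read
# for stepped pitch values (illustrative only).

def phasor_steps(freq, sample_rate, num_samples):
    """Generate a repeating 0..1 ramp, one value per sample."""
    phase = 0.0
    for _ in range(num_samples):
        yield phase
        phase = (phase + freq / sample_rate) % 1.0

pitches = [60, 63, 67, 70]  # a pitch table (MIDI note numbers)

# Truncating the index (int(...) rather than interpolating) holds each
# pitch until the phasor crosses into the next table slot.
notes = [pitches[int(p * len(pitches))] for p in phasor_steps(1.0, 8, 8)]
```

One cycle of the phasor walks through the whole table, so the phasor frequency directly sets how fast the pitch sequence repeats.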
The event was a lot of fun with lots of different approaches, aesthetics, tools, etc. Lots of appreciation for the community and organizers of the event! (And many thanks for the opportunity to perform!)
The thesis captures my thoughts on the importance of extensibility in computer music software and different ways of approaching it, for both developers and users. It discusses various extensibility strategies implemented in Csound, Blue, Pink, and Score from 2011 to 2016.
Looking back at the thesis, I’m proud of the work I was able to do. I am sure my thoughts will continue to evolve over time, but I think the core ideas have been represented well within the thesis. I hope those who take a look may find something of interest.
In addition to my acknowledgements in the thesis, I would also like to thank Ryan Molloy and Stephen Travis Pope for their close readings of my thesis as part of the Viva process. I will be forever grateful for their comments and insights.
After a very long and tiring week, I managed to finish (with great support from my advisor, Victor, and my wife, Lisa) and submit my thesis for the PhD this past Friday. I flew back home on Saturday and have been focusing on getting myself organized and resting. I am waiting now for the Viva Voce (thesis defense), which should be in November or December, depending upon the availability of the examiners. If that all goes well, I’ll have to do revisions to the thesis, then can submit the final hardcopy and will be done.
I haven’t had much time to write on this site for a very long while. I’m happy to have a free moment now to sit and breathe and reflect. I think more than anything, after spending a long time developing and creating music systems, I have been extremely happy the past few days to spend time using those programs for composing. I imagine it will take some time to integrate music making back into my daily life, to make it a real practice, but so far it is going well and I am excited just to continue on and see where it all goes.
I have a number of projects for the short term, and should be busy through the rest of the year. The weight of writing the thesis is now absent, and all the rest of the work seems much more manageable now. If all goes well with the Viva, I will certainly enjoy this coming December, and I am looking forward to it already.
I was a bit dismayed afterwards that I had mismanaged my time on stage and that my final example did not run (it turned out to be a small bug introduced while practicing the presentation earlier that day, now fixed in the code repository). However, I think I was able to cover enough of the systems overall. I also got some good feedback from people, both compliments and great notes and questions that I look forward to incorporating back into the work.
I’m happy now to be back home and look forward to collecting my thoughts and figuring out next steps for everything. I am extremely grateful to have had the opportunity to present my work at the conference; many thanks to Cognitect for the opportunity and their incredible support. I’m also blown away by the other speakers at the conference, as well as all the people I met there. It’s a wonderful community, one which I hope continues to grow and keeps on being as positive a group as it is today.
Last week at the International Computer Music Conference 2014/Sound and Music Computing 2014 joint conference in Athens, Greece, I gave a paper entitled “Extending Aura with Csound Opcodes”. The paper discusses work to allow using Csound opcodes by themselves, outside of the normal Csound engine, within Roger Dannenberg’s Aura interactive music framework. I’ve placed a copy of the paper here:
The paper may be of interest to those involved with music systems design, particularly around unit generators, and to those wanting to know more about Csound’s internals. In the presentation, I focused on everything involved with unit generators besides the processing algorithm itself, and gave a demonstration using an Aura-based application, written in the Serpent programming language, that created a short generative example using temporal recursion and dynamic allocation of Csound and Aura unit generators working together.
While the conference was exhausting, and certainly had technical issues due to the scope of what was scheduled, I had a great time at the conference. I saw many old friends and made a few new ones. 🙂 I’ll certainly be attending next year’s SMC 2015 in Maynooth, and hope to attend the ICMC 2015 in Denton, Texas too!
I’m very honored that my friend, the artist Matthew Felix Sun, has chosen two pieces of mine for his yearly video reviews of his work. I’ve always admired his work, and that admiration grows with each new piece. Below are two videos that I hope you’ll check out.