Since we moved to Ireland, our primary way of going online at home has been through our cellphones. Originally we had planned to get either a cable modem or a cellular WiFi hotspot, but since we were traveling we did not get around to it during the first month here. Since then, we have been using our cellphones for tethering once in a while in the mornings and evenings, and have decided to try not having internet at home.
So far, things have worked out very well. We are no longer going online first thing in the morning, allowing ourselves to get up and enjoy our tai chi practices with a more peaceful mind. Our evenings have been very serene, and we have been getting either more work or a lot more reading done.
Granted, less internet took a little getting used to, but once we adjusted it has been fantastic. Since the amount of internet we get on our phones is limited, we are much more conscious of using it purposefully. We are able to get most things involving larger amounts of data done at our office spaces on campus or at WiFi hotspots in coffee shops. I think, though, that even when we do have internet access we are using it less and more purposefully.
In some ways, our current relationship to the internet and being connected reminds me of the time before the internet was pervasive. I have been enjoying this setup very much, and I think it has been a great boost for general productivity, focus, and peace of mind. I am curious how things will develop over time, but I expect that we will continue to enjoy being mostly offline at home.
I was catching up with Computer Music Journal issues and noticed a review by Jeffrey Trevino and Drew Allen of The Audio Programming Book, to which I was a contributing author. The review makes some valid points regarding the organization of the book and what it covers. However, I did find one section of the review problematic, the one regarding my chapter on Modeling Orchestral Composition:
But why, you might ask, does an introduction to audio programming culminate in a computer model of the orchestra? The chapter should be reframed as an introduction to Csound, to avoid the current sense of aesthetic blindness. Rather than acknowledge the presence of the numerous higher-level electronic music programming systems that invite alternatives to the conservative score-orchestra model built into Csound, the chapter precludes the discussion of other approaches by failing to mention them. Instead of treating the score-orchestra metaphor flexibly, as is the case in most uses of Csound, the author cements its literal interpretation with a table entitled, “Comparing the Steps of Composing for Live Orchestra and Composing with [Csound’s] Orchestra Library.” (Emphasis Yi’s.) Lastly, he deals aesthetic alternatives their death blows in the form of an all-encompassing first-person plural: “By reusing an existing music software system, we can leverage the solutions already available and focus more closely on our compositional interests.”
I found this section of the review rather problematic: it takes what I wrote out of context to serve the reviewers’ narrative that “the remainder of the printed text presents an arbitrary collection of special topic essays on possible relationships between Csound and programs written in C.”
There are a couple of points I wanted to discuss. The first is that the text is specifically about modeling orchestral composition, and it proposes a library design to do so. The focus of the article was on the composition aspect: how the composer writes for an orchestra, and how to model that aspect of the relationship between composer and orchestra. It was not specifically about the orchestra itself, though it requires a discussion of the orchestra to understand the process of composing for it.
In that regard, I feel that the criticism that I did not discuss a “score-orchestra metaphor flexibly” is, in a way, criticism that I did not write a different text altogether. It seems that the reviewers read the text as being about a Music-N style score/orchestra design, or specifically, about Csound’s score/orchestra design. The reviewers’ misunderstanding is notable in their addition of [Csound’s] here:
“Comparing the Steps of Composing for Live Orchestra and Composing with [Csound’s] Orchestra Library.” (Emphasis Yi’s.)
Note that the original title of the table did not include “Csound’s,” as the article was not about Csound’s score/orchestra at all. The library that is the focus of the chapter uses Csound as a backend, but it is in itself a generic design for orchestral composition that could be developed to target other backends as well (e.g., Pd, Max, SuperCollider, ChucK).
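To make the distinction concrete, here is a minimal sketch of what such a backend-agnostic design looks like. This is purely illustrative and is not the chapter’s actual code; all class and method names are my own hypothetical choices for this post.

```python
# Hypothetical sketch: the composer-facing model is generic, and only
# the backend knows any engine-specific notation (here, Csound SCO).

class Backend:
    """Interface that any synthesis backend must implement."""
    def render_note(self, instrument, start, duration, amplitude, pitch):
        raise NotImplementedError

class CsoundBackend(Backend):
    """Maps generic note events onto Csound score 'i' statements."""
    def __init__(self):
        self.score_lines = []

    def render_note(self, instrument, start, duration, amplitude, pitch):
        self.score_lines.append(
            f"i {instrument} {start} {duration} {amplitude} {pitch}")

class Orchestra:
    """Composer-facing model: notes and parts, not engine details.
    A Pd or SuperCollider backend could be swapped in unchanged."""
    def __init__(self, backend):
        self.backend = backend
        self.notes = []

    def add_note(self, instrument, start, duration, amplitude, pitch):
        self.notes.append((instrument, start, duration, amplitude, pitch))

    def render(self):
        for note in self.notes:
            self.backend.render_note(*note)
```

The point of the sketch is only that the score/orchestra notation lives entirely inside one backend class; the orchestral-composition model itself never mentions it.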
Note: when I wrote the chapter, I did have some reservations that there might be confusion. Csound and the Music-N languages have a long-held design of a score and an orchestra, and I worried that it might get confused with the concepts I was trying to model. For me, the Music-N score/orchestra was not the model of what I was doing, but just the tool I used to support the software model of the orchestral composition library. I tried to be clear about this distinction, but it seems to have still led to some confusion.
This leads to the second issue I had with the reviewers’ comments, regarding aesthetics. In the text, I wrote a brief section on wanting to reuse an existing synthesis system. This was so that the article could focus on orchestral composition and less on synthesis details, which I believe are covered in other chapters of the book. I discussed why MIDI was not sufficient for the library’s design goals, and why I chose Csound. I would have thought that readers familiar with SuperCollider, Pd, Common Lisp Music, or other synthesis systems would understand the chapter to present a generic design that could easily be implemented in other languages and systems. I did have to discuss Csound’s ORC/SCO model to explain how the orchestral composition library would ultimately map its results onto another system’s software model to produce audible results.
To me, this is not even an aesthetic choice, but an implementation detail. The aesthetic choice here is that I wrote from the perspective of a composer looking to achieve compositional practices commonly found in orchestral composition and to reproduce them in a computer music context. That is the focus of the chapter, and it is even the title of the text. I do not think, then, that not mentioning other aesthetic goals is a valid criticism. It is as if I criticized an article about Impressionist technique for leaving out mention of Dada or Post-Modernism.
As for other tools and ways of working, I would like to state that I have supported and continue to support other tools and software synthesis systems. I have often said that everyone should use the tools that best suit them; ultimately the important thing to me is the musical result, not the tools used. I think it is the reviewers’ reading of the article as being about Csound that has led to misplaced criticism.
To summarize, I appreciate the work of the reviewers in reviewing The Audio Programming Book. However, in regard to the section on my own chapter, I feel they misread the text. In the end, I still find the chapter and the design of the library interesting, and I hope it will be as useful to others, using whatever system they choose, as it has been for my own compositional work. Hopefully I have explained my own perspective clearly, and I would invite any further discussion.
This past Saturday, Lisa, some friends, and I went to Toronto to attend a performance of Philip Glass’s Einstein on the Beach, presented as part of the Luminato Festival 6. I had long wanted to see a performance of this work, ever since my early exposure to and interest in Glass some 18 years ago, and was very happy to finally satisfy this long-held curiosity. The performance was done without intermissions, lasted about 4.5 hours, and used the original staging. It was… astounding. I was mesmerized from start to finish, sitting in my seat throughout. Glass’s music felt as fresh as ever (I find his early music has “it,” that quality that permeates a timeless work). Robert Wilson’s staging was equally present, and Lucinda Childs’ choreography was a joy to watch.
The highlights for me were the first Dance scene and the Spaceship scene, the first for the dance and the latter for the music. The rest of the work was fantastic, but those two scenes were particularly engaging for me. The violinist did a wonderful job, as did the rest of the performers.
If I have any criticism of the work, two things stuck out. The first is the Building scene: the solo saxophone didn’t really work for me. Musically and dramatically it felt like a departure from the rest of the work. While the opera as a whole had a wonderfully surreal quality, I felt the saxophone solo broke from it. The visuals and the rest of what was happening on stage may have continued the surreal, dream-like world of the other scenes, but the music was enough to break my focus and throw me off a bit.
The second is the performer doing the narrations involving “Mr. Bojangles.” At times I had some difficulty understanding what was being said, especially compared to the performer doing the narrations for “I was in this prematurely air-conditioned supermarket…”. It felt as though the first performer got a bit tongue-tied at times while keeping up with the rapid pace of the dialogue.
Overall, I thought the performance was fantastic, and I was more than ecstatic to satisfy this long-held curiosity to see the work. I hope that later in my life there will continue to be performances of this work to attend. I was also very inspired by this performance, and I look forward to getting back to my own composing work.
I have been doing brain training games on lumosity.com for a while now. Doing the daily training has been fun, and I feel that they do have a positive impact. Of all the things I have taken away from the games, the one lesson that has most been on my mind lately has been that “slower and more correct is better than fast and being wrong.”
In certain games, one gets a higher score for more correct answers in a row rather than for total correct answers. I have always had a tendency to do things quickly, at the expense of not necessarily doing them completely correctly. I can see this manifest in a number of areas of my life. The earliest was in middle and high school: taking exams, I would often finish early and turn them in, making a few mistakes that would have been easy to correct if I had taken my time. In my writing, I often find awkward phrases while editing that I imagine could have been done better the first time around. (Though, with writing, sometimes it’s better to just get the thoughts out and revise later.)
I have been thinking about this idea a fair amount during my tai chi practices recently. I have been focusing on practicing with more awareness and more precision, trying to do things correctly. Consequently, my practices have been slower and longer, but I find the work has been very rewarding.
I think with programming, too, there are many conversations about incurring technical debt by introducing quick fixes and hacks that will later need to be redone properly. This too can be a case where doing things more correctly is better.
The flip side of this is trying to do things absolutely perfectly, which can make working on a project paralyzing. It is a spectrum of correctness and speed: perfect but slow on one side, fast but incorrect on the other. I think I have tended to be on the faster side, but I am now working to bring myself more into balance. I hope that the patience I have been developing in Lumosity and in my tai chi practice will continue to manifest in my music and programming work.
Lisa and I just attended the opening performance by the Canadian Opera Company of Kaija Saariaho’s hauntingly beautiful opera L’amour de loin (Love from Afar). This was the second opera we had seen there and I have to say I absolutely love the venue: intimate, stylish, classy. My experience there was as enjoyable if not more so than our last, and I think now it is my favorite opera venue.
The production was fantastic. The use of doubles was tasteful, as was the use of acrobats. The latter could easily be a simple spectacle, but here I felt it was well done, very much adding to the experience of the work. The production accomplished such a wonderful surrealism: I found myself enchanted by, and lost in the work throughout.
The three main singers performed admirably (Russell Braun as Jaufré Rudel, Erin Wall as Clémence, and Krisztina Szabó as The Pilgrim). Braun’s voice really opened up in Acts IV and V, while the other two were spot on throughout. The music was generally performed excellently, though I did feel the chorus could have been a little better: adequate, but not exceptional.
Overall, I found the work incredibly gripping, the love so painful, so beautiful. Powerful and subtle. Mesmerizing. I have long enjoyed Saariaho’s work, and although I have owned a copy of the DVD for quite some time, I had not watched it before, something I will certainly do when we return home from Toronto. (I am very curious to see another production…) Just from my experience today, though, I would easily call it a major work, and I would be surprised if it is not a common part of the repertoire many years from now.
We’re at an interesting point in mobile technology where device hardware has, in my opinion, attained a level of power that enables the creation of very powerful music composition tools. To give a bit of background, for four years before coming to do my Ph.D. I developed software for mobile devices. When I started in May 2007, the landscape consisted mostly of cellphones with severe constraints on processing power and memory. Software at that time could only deal with very basic abstractions of music, and sound synthesis was really limited to using the built-in MIDI synthesizer. There were other non-cellphone devices available, such as PDAs and Windows Mobile smartphones, and although they were more powerful than their “feature phone” counterparts, there was still a real limitation in what could be done.
When the iPhone came out, the landscape of mobile devices and software began to change. With each generation of iPhone, more and more music software arrived as the processing power and memory made things like realtime digital signal processing and large file storage/retrieval feasible. Android devices arrived not long after the iPhone, and manufacturers on both sides continued to push the hardware with each new device.
With the advent of tablet computing (glossing over the Windows-based tablets of the past that never really took off), devices such as the iPad and now many Android tablets have arrived with impressive hardware. With the current generation of phones and tablets continuing to push what processing power is available (most devices have at least dual-core CPUs, and the Asus Transformer Prime is now arriving with a quad-core), we’re at a point where music software that can be developed on a desktop could just as well be developed on tablets and phones.
Granted, there are still differences in CPU power, but I believe the current generation of devices has finally broken a barrier in speed and power that makes the kind of music software that can be developed very interesting. We’re now seeing things like GarageBand by Apple and the soon-to-arrive FruityLoops, both full composition environments, arriving in slightly limited forms on tablets and phones. At this point, I think things are powerful enough to consider how we might extend our traditional computer music composition workflow with these devices.
Regarding mobile devices, they certainly have different interfaces and form factors to deal with. What kinds of tools could we build that leverage those differences? Rather than just make a 1:1 translation of software meant for the desktop for these devices, what might be the best practices in building mobile software? Also, how might the software on our mobile devices interact with our desktop music composition workflow? Is it just a limited form of our desktop system, or can it be a real extension to what we do?
These thoughts have come to mind as I have been contemplating what I might develop in a mobile version of my software blue. At this point, I have been considering use cases for mobile software: capturing ideas, working with instruments, recording audio, perhaps even live performance. I have also been considering extensions to the desktop system, where the mobile software may act as a control interface while working with blue on the desktop. I have also been considering how synchronization between a mobile blue and desktop blue system might work, and that has caused me to consider the general issue of system synchronization, whether it is between a mobile and desktop device, or desktop to desktop, or mobile to mobile.
Certainly there is a lot to work out, but I find fascinating the possibility of working with mobile music tools that are not just self-contained software but integral parts of a larger composition workflow.
I have been working on some iOS audio programs lately and the topic of generative music came up for me. Some 10 or 11 years ago, I remember thinking about the issue that generative music–or music that was not fixed in performance–did not have a means of content delivery. At that time, I imagined some form of meta-music player that would have plugins that could read Csound, PD, and other kinds of projects, yet could be queued up in a playlist much like you would find in something like WinAMP or VLC Player. The idea was that composers could then distribute their works in a standardized format, and consumers could then queue it up to listen to in the same manner they might load up an MP3 or CD.
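To sketch what I mean by a meta-music player, here is a minimal illustration of the plugin-dispatch idea. Everything here is hypothetical: the plugin classes, file extensions, and return strings are stand-ins, and a real player would launch actual rendering engines rather than return messages.

```python
# Illustrative sketch of a meta-music player: a single playlist whose
# items are dispatched to format-specific plugins, so generative works
# can be queued up alongside each other like tracks in WinAMP or VLC.

class PlayerPlugin:
    def can_play(self, filename):
        raise NotImplementedError
    def play(self, filename):
        raise NotImplementedError

class CsoundPlugin(PlayerPlugin):
    def can_play(self, filename):
        return filename.endswith(".csd")
    def play(self, filename):
        return f"rendering {filename} with Csound"

class PdPlugin(PlayerPlugin):
    def can_play(self, filename):
        return filename.endswith(".pd")
    def play(self, filename):
        return f"opening {filename} in Pure Data"

class MetaPlayer:
    """Each queued item may be a generative piece, performed anew by
    its plugin on every playback rather than replayed from fixed audio."""
    def __init__(self, plugins):
        self.plugins = plugins
        self.playlist = []

    def enqueue(self, filename):
        self.playlist.append(filename)

    def play_all(self):
        results = []
        for work in self.playlist:
            plugin = next((p for p in self.plugins if p.can_play(work)), None)
            results.append(plugin.play(work) if plugin
                           else f"no plugin for {work}")
        return results
```

The appeal of this design is that the playlist itself is format-agnostic: supporting a new kind of generative work means adding a plugin, not a new player.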
Back in the mid-90s, Brian Eno did some generative music work with the Koan Music Player, releasing an album that would be performed differently each time by the player. The Koan Player, though, never really caught on, and since then the format and program have died off. What a shame it is to me that a musical work can be tied to the life of a commercial program.
These days, things like Björk’s Biophilia for iOS have started to renew interest in generative music. However, the distribution scheme here is a custom application for a closed platform (iOS). I wonder whether the kind of thing that happened with Eno’s album and Koan might happen once again with custom formats and distribution means.
What I would love to see is a generic means of distribution for both fixed and non-fixed works. Such a system would allow meta-data to describe what plugins would be required, as well as what hardware requirements would be necessary. So, for example, if your work required an 8-speaker octagonal cube or a video camera feed, that could be marked up in the distribution meta-data. That way, the distribution format would then be usable not only for consumers at home, but also as a means for concert delivery or generic installation setups.
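As a rough illustration of the kind of metadata I have in mind, here is a sketch of a work manifest plus a check of whether a given host can perform the work. Every field name here is a hypothetical suggestion of mine, not an existing standard or format.

```python
# Hypothetical distribution metadata for a generative work: which
# plugin/engine it needs, and what hardware the venue must provide.

work_manifest = {
    "title": "Example Generative Piece",
    "composer": "A. Composer",
    "engine": {"plugin": "csound", "min_version": "5.17"},
    "hardware": {
        "audio_channels": 8,         # e.g. an 8-speaker installation
        "inputs": ["video_camera"],  # live camera feed required
    },
}

def playable_on(manifest, host):
    """Check whether a host (home player, concert rig, installation)
    meets the work's declared engine and hardware requirements."""
    hw = manifest["hardware"]
    return (manifest["engine"]["plugin"] in host["plugins"]
            and host["audio_channels"] >= hw["audio_channels"]
            and all(i in host["inputs"] for i in hw["inputs"]))

# Two hypothetical hosts: a home stereo setup and a concert hall.
home_stereo = {"plugins": ["csound"], "audio_channels": 2, "inputs": []}
concert_hall = {"plugins": ["csound", "pd"], "audio_channels": 8,
                "inputs": ["video_camera"]}
```

With metadata like this, the same package could be screened automatically: a home player would know to decline (or down-mix) the eight-channel piece, while a concert or installation setup would know it can perform it as intended.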
Thinking about this all again, I think the idea still has merit and that developing a standardized distribution package could expand the audience for such work as well as create a platform that promotes longevity of work.
A beautiful blue sky and sunny day, I took a walk to the village to take care of errands. Listening to John Luther Adams’ “In the White Silence” (Such an exquisitely beautiful piece of music…), it was the first time I really had a sense of Autumn here: the smell of the fresh air, a slight chill on the skin, the spectrum of colors on the trees. Walking slowly, breathing in the fullness of things, I felt amazed by it all.
Back at the apartment, I am awake and inspired. With a cup of warm peppermint tea, I return now back to my work…