Extending Composition Workflow with Mobile Computing

We’re at an interesting point in mobile technology where, in my opinion, device hardware has become powerful enough to enable the creation of serious music composition tools. For a bit of background: for four years before coming to do my Ph.D., I developed software for mobile devices. When I started in May 2007, the landscape of mobile devices consisted mostly of cellphones with severe constraints on processing power and memory. Software at that time could only deal with very basic abstractions of music, and sound synthesis was essentially limited to the built-in MIDI synthesizer. There were other non-cellphone devices available, such as PDAs and Windows Mobile smartphones, and although they were more powerful than their “feature phone” counterparts, there were still real limits on what could be done.

When the iPhone came out, the landscape of mobile devices and software began to change. With each generation of iPhone, more and more music software arrived as increases in processing power and memory made things like realtime digital signal processing and large-file storage and retrieval feasible. Android devices arrived not long after the iPhone, and manufacturers on both platforms have continued to push the hardware forward with each new device.

With the advent of tablet computing (glossing over the Windows-based tablets of the past that never really took off), devices such as the iPad and now many Android tablets have arrived with impressive hardware. The current generation of phones and tablets continues to push available processing power: most devices have at least dual-core CPUs, and the Asus Transformer Prime is now arriving with a quad-core chip. We’re at a point where music software that could be developed on a desktop could just as well be developed for tablets and phones.

Granted, there are still differences in CPU power, but I believe the current generation of devices has finally broken a barrier in speed and power that makes the kinds of music software that can be developed very interesting. We’re now seeing things like GarageBand by Apple and the soon-to-arrive FruityLoops, both full composition environments, arriving in slightly limited forms on tablets and phones. At this point, I think things are powerful enough to consider how we might extend our traditional computer music composition workflow with these devices.

Mobile devices certainly present different interfaces and form factors to design for. What kinds of tools could we build that leverage those differences? Rather than just making a 1:1 translation of desktop software for these devices, what might be the best practices for building mobile software? Also, how might the software on our mobile devices interact with our desktop music composition workflow? Is it just a limited form of our desktop system, or can it be a real extension of what we do?

These thoughts have come to mind as I contemplate what I might develop in a mobile version of my software blue. So far I have been considering use cases for the mobile software itself: capturing ideas, working with instruments, recording audio, perhaps even live performance. I have also been considering extensions to the desktop system, where the mobile software might act as a control interface while working with blue on the desktop (a rough sketch of that idea follows below). Finally, thinking about how synchronization between a mobile blue and a desktop blue might work has led me to the more general issue of system synchronization, whether between a mobile device and a desktop, desktop to desktop, or mobile to mobile.
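
As a concrete illustration of the control-interface idea, here is a minimal Java sketch that sends a single OSC (Open Sound Control) message over UDP, OSC being a common transport for this kind of remote control in computer music. To be clear, this is purely hypothetical: the host address, port, and the /blue/mixer/master/volume path are invented for illustration, and blue does not currently define any such OSC address space.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;

    public class BlueRemoteSketch {

        // OSC strings are NUL-terminated and padded to a 4-byte boundary.
        static byte[] oscString(String s) {
            byte[] raw = s.getBytes(StandardCharsets.US_ASCII);
            byte[] out = new byte[(raw.length / 4 + 1) * 4];
            System.arraycopy(raw, 0, out, 0, raw.length);
            return out;
        }

        // Build an OSC message carrying one float: address, type tag ",f",
        // then the value as a big-endian 32-bit float.
        static byte[] oscFloatMessage(String address, float value) {
            byte[] addr = oscString(address);
            byte[] tags = oscString(",f");
            ByteBuffer buf = ByteBuffer.allocate(addr.length + tags.length + 4);
            buf.put(addr).put(tags).putFloat(value); // ByteBuffer defaults to big-endian
            return buf.array();
        }

        public static void main(String[] args) throws Exception {
            // Hypothetical: a desktop running blue at this address, listening
            // on this port, with this OSC path; none of these exist in blue today.
            InetAddress desktop = InetAddress.getByName("192.168.1.10");
            int port = 8000;
            byte[] msg = oscFloatMessage("/blue/mixer/master/volume", 0.8f);

            try (DatagramSocket socket = new DatagramSocket()) {
                socket.send(new DatagramPacket(msg, msg.length, desktop, port));
            }
        }
    }

On the desktop side, blue (or a small bridge process) would listen on that port and map incoming paths to mixer and instrument parameters; the same kind of messaging could also be a starting point for thinking about the synchronization questions above.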

Certainly there is a lot to work out, but I find the possibility of working with mobile music tools that are not just self-contained software, but rather are integral parts of a larger composition workflow fascinating.
