Free Time, I once knew thee…
I made some important discoveries of previous work done in the realm of collaborative composition. Apparently, this was a popular area about 6-7 years ago. The most interesting application was called Res Rocket DRGN (Dragon). This application allowed real-time collaboration using MIDI, and made the pages of WIRED magazine a couple of times, but it apparently has fizzled out. I haven’t been able to reach anyone from the original team, and the website looks like it has been taken over by a domain squatter. I’m feeling a mix of emotions over why a project with such great potential as DRGN would eventually fold. I don’t know if the tech crash had something to do with it, or if maybe there was some underlying problem or lack of interest that kept it from taking off…
There are other projects that had similar aims for collaborative composition over the net. Some of them, such as F@UST 3.0, were built for real-time wave synthesis control. They ran into the problem of synchronization, and the range of music the application was capable of producing was somewhat limited to synthesized “droning,” much like Gregorian chant. Sound waves created in this fashion weren’t affected by minor discrepancies in timing due to network latency. However, I can see how such a system wouldn’t be attractive to the highly structured, rhythm-focused electronic community that thrives today.
So… I’m worried that I’m out here alone in this domain, surrounded by the crumbling remains of organized projects and platforms created by experts in this field. Where have all the researchers and musicians gone? Are there other projects they have moved on to? Were the interpersonal, rights-management, and network-reliability problems too much to be handled adequately by their older computing technology? Is it the right time to try again?
I hate ending a post on a question, so I’ll give my take on where I stand. First of all, electronic musicians are “sound hackers,” but not necessarily “computer hackers.” They tend to have a small collection of tools that they are familiar and comfortable with. In this case, I will use Propellerhead’s “Reason” as an example. Reason is a full-fledged synthesis engine, sampler, sequencer, effects processor, and mixing environment. It can handle nearly any aspect of electronic music composition that a novice or professional could need. It’s also relatively inexpensive, and its GUI is extremely well designed. Most electronic musicians will understand a tool such as this, and focus solely on how to operate it.
In other words, electronic musicians have learned an “instrument,” and feel comfortable using that system to express themselves. I think the best way to enable collaborative composition is not to create an all-inclusive collaborative song production tool, but rather to leverage the incredible wealth of existing software-based systems and devise a way to connect them together.
I believe a combination of MIDI and simple TCP/IP may be all that’s necessary to do this. However, the MIDI signals would not be used in the sense of a conventional General MIDI tone generator, but as control parameters transmitted from a centralized collaborative project environment… say, some sort of specialized web server. The client-side application (Reason) would be used to interpret the MIDI signals and generate the actual waveforms, allowing for a richer palette of sound and control. The two collaborators would be required to have exactly the same configuration in order to be sure that they are hearing the same performance of the song. Using Reason, this is immediately possible. I’m not sure about other applications out there, but in theory it could work.
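To make the idea concrete, here is a minimal Python sketch of what the transport layer might look like: a standard 3-byte MIDI channel message (here a control change) packed into bytes and shipped over a socket, with a local socket pair standing in for the real server/client TCP link. Routing the decoded message into a synth like Reason is left out; the message values (CC 74, often mapped to filter cutoff) are just illustrative.

```python
import socket
import struct

def encode_midi_message(status: int, data1: int, data2: int) -> bytes:
    """Pack a 3-byte MIDI channel message (e.g. note-on, control change)."""
    return struct.pack("BBB", status, data1, data2)

def decode_midi_message(payload: bytes) -> tuple:
    """Unpack a 3-byte MIDI message back into (status, data1, data2)."""
    return struct.unpack("BBB", payload)

# A local socket pair stands in for the server <-> client TCP connection.
server_sock, client_sock = socket.socketpair()

# "Server" side: send a control change on channel 1 (status 0xB0),
# controller 74 (commonly filter cutoff/brightness), value 100.
server_sock.sendall(encode_midi_message(0xB0, 74, 100))

# "Client" side: receive the message and hand it to the local synth.
status, controller, value = decode_midi_message(client_sock.recv(3))
```

Because the clients only exchange tiny control messages rather than audio, each collaborator’s machine does the actual waveform generation locally, which is exactly why identical configurations matter.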
I don’t want to come up with a solution before I adequately frame the problem, but this seems to be a promising lead. Hopefully this will pan out for me.