This will be the first post in a series discussing the audio tech we implemented for FRACT OSC. We’ve put a lot of effort into making it work (and making it work well!), and I’d like to write about the process we went through to get where we are. Along the way I’ll talk about different decisions we made and the technology we’ve built, in a good amount of detail!
When the three of us started working together we knew we wanted FRACT (which didn’t yet have the OSC suffix) to encourage musical expression on the part of the player. What we weren’t sure of at the time was what approach to take, or how far we could go. The first step was building a few different prototypes with the built-in Unity sound engine. Somewhat in parallel, we investigated some real-time synthesis options.
Working within the constraints of Unity 3.3, in theory we could get a wide range of synth sounds just by baking a big library of samples, one for each combination of modulation and pitch. This would constrain the sort of control the player would have over the sound, but could more or less work with few changes to the engine technology. We could even layer effects by fading between multiple samples. But even this limited approach depended on one thing: being able to play sounds on a consistent beat.
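To give a rough sense of the fading idea (a sketch, not FRACT's actual code), here's an equal-power crossfade between two pre-baked sample buffers. An equal-power curve keeps the perceived loudness roughly constant through the blend, which a plain linear fade doesn't:

```python
import math

def crossfade(sample_a, sample_b, mix):
    """Equal-power crossfade between two equal-length sample buffers.

    mix = 0.0 -> all sample_a, mix = 1.0 -> all sample_b.
    """
    # cos/sin gains sum to unity power, so the midpoint doesn't dip in volume.
    gain_a = math.cos(mix * math.pi / 2)
    gain_b = math.sin(mix * math.pi / 2)
    return [a * gain_a + b * gain_b for a, b in zip(sample_a, sample_b)]

# Hypothetical use: blend a plain baked note with a heavily modulated
# version of the same note to fake a continuous modulation control.
dry = [0.5, 0.25, -0.5]
wet = [0.1, -0.3, 0.2]
blended = crossfade(dry, wet, 0.5)
```

In practice you'd need one such pair (or grid) of baked samples per pitch, which is exactly why the library gets big so quickly.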
Our brains are exceptionally good at associating what we hear with what we see, so even if a sound doesn’t match an on-screen event precisely, we still perceive the two as one event. As a result, most games have a lot of leeway in audio timing; it really doesn’t need to be that precise. The moment you try to synchronize sounds to anything resembling a regular beat, though, you can hear even small discrepancies in timing. With the audio functionality exposed in Unity 3.4, we had a lot of difficulty getting this timing to be reliable, and even then the precision degraded badly under fluctuating frame rates (after all, we were running this alongside a full game!).
After running into this obstacle and a few others, it became clear that we should at least begin investigating alternatives. I had some experience using Max/MSP and Pure Data from previous projects, and knew they’d both allow us to quickly prototype sounds. I had never tightly integrated them into another program, though, so there was some unknown territory to explore before we knew whether we’d be able to go forward with it. If we handled all the sound generation outside of Unity, we could bypass the Unity sound engine completely by making our own connection to the sound card.
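Making our own connection to the sound card means a pull model: the audio driver invokes a callback on its own thread whenever it needs more samples, independent of the game's frame loop, so timing no longer depends on frame rate. Here's a minimal Python sketch of that shape; the `SineSynth` class is a made-up stand-in for something like a Pd patch, which processes audio in 64-sample blocks:

```python
import math

SAMPLE_RATE = 44100
BLOCK_SIZE = 64  # Pure Data's default audio block size

class SineSynth:
    """Stand-in for an external synth the audio callback pulls from."""
    def __init__(self, freq):
        self.phase = 0.0
        self.step = 2 * math.pi * freq / SAMPLE_RATE

    def process(self, out):
        # Fill one block of output samples; in a real engine this runs
        # on the audio thread, not the game's frame loop.
        for i in range(len(out)):
            out[i] = math.sin(self.phase)
            self.phase = (self.phase + self.step) % (2 * math.pi)

def audio_callback(synth, n_blocks):
    """Emulate the driver asking for several blocks of audio in a row."""
    buffers = []
    for _ in range(n_blocks):
        block = [0.0] * BLOCK_SIZE
        synth.process(block)
        buffers.append(block)
    return buffers
```

Because the callback is driven by the sound card's own clock, events scheduled in samples land exactly where you put them, which is what a musical game needs.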
Next week I’ll go into how we started integrating Pure Data with Unity, and start discussing some of the technical issues that needed to be solved to make it work in the context of FRACT OSC.