In last week’s post, I discussed how we decided to integrate Pure Data through a native Unity plugin. This week I’ll discuss what the plugin actually does, and what sort of role it plays in bridging Unity and Pure Data.
Pure Data works with “patches”: sets of signal-processing or message-handling objects connected to each other with patch cords. There are three main ways we need to interact with a patch: sending messages to it, receiving messages back from it, and actually processing and retrieving audio samples for playback. All of this has to happen efficiently, without blocking on either side.
In order to understand how this all works, we need to know a bit about how message processing interacts with audio processing in Pure Data. Pure Data processes audio in 64-sample “ticks”: whenever audio is processed, it is done in multiples of 64 samples. Once per tick, all pending incoming messages are handled and any outgoing messages are sent out.
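Concretely, that means a host asking for an arbitrary number of samples has to round up to whole ticks. A minimal sketch of the arithmetic (the helper name is my own, not part of Pure Data):

```c
#include <stddef.h>

/* Pure Data's block ("tick") size: DSP always runs in multiples of this. */
#define PD_TICK_SIZE 64

/* Hypothetical helper: how many whole ticks cover a request
   for `frames` samples from the host audio callback. */
size_t ticks_for_frames(size_t frames) {
    return (frames + PD_TICK_SIZE - 1) / PD_TICK_SIZE;  /* round up */
}
```

A typical 1024-sample callback buffer, for example, covers 16 ticks, which also means 16 points per buffer at which queued messages get handled.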
This poses a bit of a problem for us. In Unity, as with virtually all other game engines, audio needs to be processed in a different thread from the game logic. This is because the audio always needs to be output at a fixed rate of 44100 (or sometimes 48000) samples per second, regardless of game logic or framerate. If so much as a single sample is missed, it’s clearly audible as an unpleasant “click” in the audio. The problem comes from the fact that we want the Pure Data audio processing to happen in our game’s audio thread, but we also want to send and receive messages from the main game update loop.
To deal with this, we need a couple of queues between the API and Pure Data, one to buffer incoming messages and one to buffer outgoing messages. (I used a lockless queue implementation from PortAudio.)
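The core idea behind such a lockless queue can be sketched as a single-producer/single-consumer ring buffer. This is an illustration of the concept only, not PortAudio's actual implementation, and all the names are mine:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* Minimal single-producer/single-consumer lock-free ring buffer.
   Capacity must be a power of two; one slot is kept empty so that
   "full" and "empty" are distinguishable. */
#define QUEUE_CAPACITY 256

typedef struct {
    int            items[QUEUE_CAPACITY]; /* real code would store message structs */
    _Atomic size_t head;                  /* advanced only by the consumer */
    _Atomic size_t tail;                  /* advanced only by the producer */
} spsc_queue;

/* Called from one thread only (e.g. the game loop). Returns false if full. */
bool queue_push(spsc_queue *q, int item) {
    size_t tail = atomic_load_explicit(&q->tail, memory_order_relaxed);
    size_t next = (tail + 1) & (QUEUE_CAPACITY - 1);
    if (next == atomic_load_explicit(&q->head, memory_order_acquire))
        return false;  /* full: drop or retry later, but never block */
    q->items[tail] = item;
    atomic_store_explicit(&q->tail, next, memory_order_release);
    return true;
}

/* Called from the other thread only (e.g. the audio callback). */
bool queue_pop(spsc_queue *q, int *out) {
    size_t head = atomic_load_explicit(&q->head, memory_order_relaxed);
    if (head == atomic_load_explicit(&q->tail, memory_order_acquire))
        return false;  /* empty */
    *out = q->items[head];
    atomic_store_explicit(&q->head, (head + 1) & (QUEUE_CAPACITY - 1),
                          memory_order_release);
    return true;
}
```

The key property is that neither side ever waits on a lock: the producer only writes `tail`, the consumer only writes `head`, and the acquire/release ordering makes the item contents visible before the index update. A full queue fails fast instead of stalling the audio thread.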
Whenever FRACTOSC sends messages to Pure Data, they get put in the incoming queue. As soon as the audio thread requests samples (in this case from Unity’s MonoBehaviour.OnAudioFilterRead callback), audio is processed and the incoming messages are handled, often resulting in new outgoing messages which are added to the outgoing queue. When the game loop runs its next update, it pulls the messages out and handles them however it wants.
This is the complete high-level flow! Next time I’ll take a look at exactly what a message is, and how I went about dispatching messages from/to the right Unity scripts and Pure Data sub-patches.
Sorry we didn’t shout this great news from the rooftops yesterday, we had a baby with a fever on our hands and very little sleep between us.
That said, we’re thrilled to announce that FRACTOSC has been Greenlit on Steam! That means, as I’m sure you can imagine, that the game will be made available on Valve’s service: great news for gamers, and great news for us.
We’ll be poking around with the SDK this week, kicking the tires and evaluating features, and will keep everyone posted as we know more.
Here is part 2 of the series discussing the audio tech we implemented for FRACTOSC; check out part 1 here.
So once we’d decided to go with Pure Data for audio synthesis, the question was, how could we integrate that with the rest of FRACT’s engine?
In a live performance or gallery setting, the most common way to use Pure Data in tandem with other tools is simply to run them side by side as separate programs. The programs then communicate with each other in some way, often over the network using MIDI or Open Sound Control (a different OSC from the OSC in FRACT’s title). This is a flexible way to use any combination of programs together, but it has a few downsides for a case like ours, where we need to put together a robust package that “just works.” There would be work involved in starting up Pure Data as a separate process, monitoring it, and closing it properly when the game exits, and since this sort of functionality is usually platform-dependent, it would all have to be rewritten for each platform. In addition, since Windows and OSX still see the two programs as separate entities, they might decide to force-close one (for whatever reason) without closing the other, so you might end up with a soundless FRACTOSC (yuck) or an abandoned audio engine blaring forever (even more yuck).
Fortunately, there was another option. A fork of Pure Data called libpd wraps most of the core functionality of Pure Data into a library that can be embedded directly into native programs. In our case, we compiled it into a native plugin that we can use from Unity. (Native plugins are only available in pro or mobile versions of Unity.) This way Pure Data runs as a built-in part of our game.
There were problems with this approach too, though. Most notably, since the Unity editor runs in the same process as your game, it’s also affected by errors your game runs into. Normally these errors are in managed C# code, and are caught by Unity and logged. Unfortunately, errors in native code can’t be caught, so if I make a mistake coding the plugin, not only does it crash, it brings down the entire Unity editor with it! This wasn’t uncommon near the beginning of development, so we formed a consistent habit of saving all changes before hitting “Play,” to avoid losing unsaved work.
Next time I’ll talk about how the plugin works, and how we communicate through it to Pure Data!
This will be the first post in a series discussing the audio tech we implemented for FRACTOSC. We’ve put a lot of effort into making it work (and making it work well!), and I’d like to write about the process we went through to get where we are. Along the way I’ll talk about different decisions we made and the technology we’ve built, in a good amount of detail!
When the three of us started working together we knew we wanted FRACT (which didn’t yet have the OSC suffix) to encourage musical expression on the part of the player. What we weren’t sure of at the time was what approach to take, or how far we could go. The first step was doing a few different prototypes of what we could do with the built-in Unity sound engine. Somewhat in parallel, we investigated some real synthesis options.
Working within the constraints of Unity 3.3, in theory we could get a wide range of different synth sounds just by baking a big library of samples for different combinations of modulation and pitch. This would constrain the sort of control the player would have over the sound, but could more or less work with few changes to the engine technology. We could even layer effects by fading between multiple samples. But even this limited capability depended on one thing: being able to play sounds on a consistent beat.
Our brains are exceptionally good at associating what we hear with what we see, so even if a sound doesn’t match an on-screen event precisely, we still perceive the two as one event. As a result, for most games there’s a lot of leeway in audio timing; it really doesn’t need to be that precise. The moment you try to synchronize sounds to anything resembling a regular beat, though, you can hear even small discrepancies in timing. With the audio functionality exposed in Unity 3.4, we had a lot of difficulty getting this timing to be reliable, and even then the precision went way down with fluctuating frame-rate (after all, we are running this alongside a full game!).
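To put rough numbers on it: at 120 BPM a beat falls every half second, i.e. every 22050 samples at 44.1 kHz, so scheduling by sample position is exact, while triggering sounds from a 60 fps game loop quantizes onsets to roughly 16.7 ms frame boundaries, and worse when the frame-rate fluctuates. A small sketch of the sample-accurate version (my own helper, for illustration):

```c
/* Convert a beat index to its exact sample position for a given
   tempo and sample rate, rounded to the nearest whole sample. */
long beat_to_sample(int beat, double bpm, double sample_rate) {
    return (long)(beat * (60.0 / bpm) * sample_rate + 0.5);
}
```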
After running into this obstacle and a few others, it became clear that we should at least begin investigating alternatives. I had some experience using Max/MSP and Pure Data from previous projects, and knew they’d both allow us to quickly prototype sounds. I had never tightly integrated them into another program, though, so there was some unknown territory to explore before we knew whether we’d be able to go forward with it. If we handled all the sound generation outside of Unity, we could bypass the Unity sound engine completely by making our own connection to the sound card.
Next week I’ll go into how we started integrating Pure Data with Unity, and start discussing some of the technical issues that needed to be solved to make it work in the context of FRACTOSC.
Sorry for the delay in updates! We had a busy and tiring GDC this year, with two presentations, a ton of hangouts, and a party or two.
Henk’s talk on how we do our synthesizer magic went splendidly, and I think my talk about just what the hell we’ve been doing for the past 2 years went OK too. We met with friends, made new ones, and got inspired by what other indies are up to!
I also put together a new little teaser trailer for our GDC presentations, which we published last week and got a ton of great feedback on. This also helped out with our Greenlight traffic, thanks everyone!
The presentations and the trailer also brought a lot of new and renewed attention to FRACT, which is awesome! We’ve also been getting a lot of requests for the old IGF award-winning prototype from 2011, and a few questions as to why it’s no longer available.
Basically, we’re in the final stretch of finishing FRACTOSC, which is completely new from the ground up and shares nothing with the old prototype. And while we’re very proud of the old FRACT prototype, it doesn’t paint an accurate picture of where we’re going with FRACTOSC. If you’re still really eager to try it, send us a message via the contact form and we can get it to you.
Otherwise, we’re working hard to finish the game – stay tuned for more updates!
Another quick sample of our new Core (‘curated’ sounds pretentious) Synths. I actually stepped away from my computer mid puzzle test/tweak, and came back into the office to this lovely wash of sound. The beats, as always, are temporary :)
It’s crazy to think how far our synths have come (Henk’s talk at the GDC will cover just how far). The current synths in the game are pretty much feature complete (we’re considering adding multiple LFO shapes, LFO sync and a few other little options, but we’ll see). This week I’ve been curating the synths towards their respective narrative tone, and am having a blast in the process:
Here is where the Pad Synth is heading: big, long envelopes, with some interesting metallic flavour at one end of the ‘stability’ knob:
The Bass Synth is sounding good and bassy, emulating the classic bass synths that inspired its design. It still needs more work, but it’s getting there: