Hey all. Long-time listener, first-time caller. This is a project I’ve been working on for the past few years and have finally brought all the way to a commercial release.
For those who are unfamiliar: in music production, there are host applications (such as Logic Pro, Ableton Live, or REAPER) in which most of the work (such as recording or sequencing MIDI data) is done. These applications generally have some built-in tools for the actual generation and processing of audio, but there is also a massive ecosystem of third-party plug-ins for doing this. Cadmium is such a plug-in – it takes MIDI notes in and generates sound.
I basically built Cadmium from the ground up, based on my own framework for plugin abstraction (for which I only have one backend, but that’s not important right now), my own UI toolkit (rutabaga, a cross-platform OpenGL 3.2 scenegraph), and finally the synthesizer engine and plumbing on top of all of that. The whole thing is written in C, with some Python (using waf as my build system) for compile-time codegen and some GLSL for pieces of the UI. Runs on Mac, Windows, and Linux.
Realtime audio programming is a pretty challenging field, all things considered. Because it’s realtime, a lot of mundane bits of programming are strictly verboten – no allocation, no file I/O, no locking/mutexes – but only on the audio thread. So there are a lot of ring buffers, lock-free queues, things of that nature, and then there’s the actual UI programming and the math/EE for the DSP on top.
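To make the “ring buffers and lock-free queues” part concrete, here’s a minimal sketch of a single-producer/single-consumer lock-free ring buffer using C11 atomics – the kind of structure used to pass values between the UI thread and the audio thread without ever blocking either one. All names and the layout here are illustrative, not Cadmium’s actual code:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define RB_CAPACITY 256  /* must be a power of two for the index masking below */

typedef struct {
    float buf[RB_CAPACITY];
    _Atomic size_t write_idx;  /* only ever advanced by the producer */
    _Atomic size_t read_idx;   /* only ever advanced by the consumer */
} rb_t;

static void rb_init(rb_t *rb) {
    atomic_init(&rb->write_idx, 0);
    atomic_init(&rb->read_idx, 0);
}

/* Producer side (e.g. the UI thread). Returns false if the buffer is
 * full. Never blocks, never allocates. */
static bool rb_push(rb_t *rb, float value) {
    size_t w = atomic_load_explicit(&rb->write_idx, memory_order_relaxed);
    size_t r = atomic_load_explicit(&rb->read_idx, memory_order_acquire);
    if (w - r == RB_CAPACITY)
        return false;  /* full */
    rb->buf[w & (RB_CAPACITY - 1)] = value;
    /* release: the data write above becomes visible before the index bump */
    atomic_store_explicit(&rb->write_idx, w + 1, memory_order_release);
    return true;
}

/* Consumer side (e.g. the audio thread). Returns false if empty. */
static bool rb_pop(rb_t *rb, float *out) {
    size_t r = atomic_load_explicit(&rb->read_idx, memory_order_relaxed);
    size_t w = atomic_load_explicit(&rb->write_idx, memory_order_acquire);
    if (r == w)
        return false;  /* empty */
    *out = rb->buf[r & (RB_CAPACITY - 1)];
    atomic_store_explicit(&rb->read_idx, r + 1, memory_order_release);
    return true;
}
```

The key property is that each index is written by exactly one thread, so the acquire/release pairing is all the synchronization you need – no mutex for the audio thread to ever get stuck on.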
I get that Lobsters isn’t a super audio-focused community, and I get that not a lot of folks here are necessarily interested in this (or in commercial products in general), but even though I’m a long-time lurker I still wanted to share. And, hey, if you’ve ever been curious about something involved in the development, consider this “ask an audio developer anything”.