1. 40

Hey all. Long-time listener, first-time caller. This is a project I’ve been working on for the past few years and have finally brought all the way to a commercial release.

For those who are unfamiliar, in music production, there are host applications (such as Logic Pro, Ableton Live, or REAPER) in which most of the work (such as recording or sequencing MIDI data) is done. These applications generally have some built-in tools for the actual generation and processing of audio data, but there is a massive ecosystem of third-party plug-ins for doing this as well. Cadmium is one such plug-in – it takes MIDI notes in and generates sound.

I basically built Cadmium from the ground up, based on my own framework for plugin abstraction (for which I only have one backend, but that’s not important right now), my own UI toolkit (rutabaga, a cross-platform OpenGL 3.2 scenegraph), and finally the synthesizer engine and plumbing on top of all of that. The whole thing is written in C, with some Python (using waf as my build system) for compile-time codegen and some GLSL for pieces of the UI. Runs on Mac, Windows, and Linux.

Realtime audio programming is a pretty challenging field, all things considered. Since it’s realtime, there are a lot of mundane bits of programming that are strictly verboten – no allocation, no file I/O, no locking/mutexes – but only in the audio thread. So there are a lot of ring buffers, lock-free queues, and things of that nature, and then there’s the actual UI programming and the math/EE for the DSP on top.
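
To give a flavor of the plumbing involved: the sketch below is a minimal lock-free single-producer/single-consumer ring buffer of the kind you’d use to get parameter changes from the UI thread into the audio thread without blocking. It’s illustrative only – the names and sizes are made up, and it is not Cadmium’s actual code:

```c
/* Minimal lock-free single-producer/single-consumer ring buffer sketch
 * (C11 atomics). Illustrative only -- not Cadmium's actual implementation.
 * The UI thread pushes values, the audio thread pops them, and neither
 * side ever blocks. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define RB_SIZE 256 /* must be a power of two */

typedef struct {
    float buf[RB_SIZE];
    _Atomic size_t write_pos; /* only advanced by the producer */
    _Atomic size_t read_pos;  /* only advanced by the consumer */
} ringbuf_t;

/* producer side (e.g. the UI thread): returns false if the buffer is full */
static bool rb_push(ringbuf_t *rb, float value)
{
    size_t w = atomic_load_explicit(&rb->write_pos, memory_order_relaxed);
    size_t r = atomic_load_explicit(&rb->read_pos, memory_order_acquire);

    if (w - r == RB_SIZE)
        return false; /* full: drop or retry later, but never block */

    rb->buf[w & (RB_SIZE - 1)] = value;
    atomic_store_explicit(&rb->write_pos, w + 1, memory_order_release);
    return true;
}

/* consumer side (the audio thread): returns false if there's nothing to read */
static bool rb_pop(ringbuf_t *rb, float *out)
{
    size_t r = atomic_load_explicit(&rb->read_pos, memory_order_relaxed);
    size_t w = atomic_load_explicit(&rb->write_pos, memory_order_acquire);

    if (r == w)
        return false; /* empty */

    *out = rb->buf[r & (RB_SIZE - 1)];
    atomic_store_explicit(&rb->read_pos, r + 1, memory_order_release);
    return true;
}
```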

I get that Lobsters isn’t a super audio-focused community, and I get that not a lot of folks here are necessarily interested in this (or in commercial products in general), but even though I’m a long-time lurker I still wanted to share. And, hey, if you’ve ever been curious about something involved in the development, consider this “ask an audio developer anything”.

-w

  1. 6

    First of all - congratulations on the release! This looks cool and I’ll definitely try it out.

    So to ask an audio developer anything: How did you (and how can I) get into DSP/audio programming? I’m thinking mostly resources to learn both the concepts and math of DSP, as well as the tricks of the trade in writing fast DSP code. It seems like if you want to learn ML, or compilers, or OS design, etc., there are piles of good books, tutorials, and videos available – but I’m having trouble finding good resources to learn audio stuff. Do you have any tips?

    1. 11

      I had my introduction to signal processing through a course at university. I can at least recommend some books for you:

      PS: I think lobste.rs needs a dsp tag.

      Edit: typos.

      1. 1

        There are at least two tags we need that would cover lots of articles people might filter out if we get too specific on application area or on arcane internals that users of black-box tools don’t need to know. Relevant here is “parallel”: techniques for parallel programming. It’s already a huge field that HPC draws on. It can cover DSP, SIMD, multicore, NUMA, ways of stringing hardware together, parallel languages, parallelizing protocols, macros/libraries that parallelize, and so on. I plan to ask the community about the other one after I collect relevant data. Mentioning this one here since you already brought it up.

      2. 4

        Hey, thanks! Do feel free to reach out and let me know what you think.

        With regards to DSP literature – klingtnet has provided some great resources already, so I’ll just talk a little about my path. My background has always just been in development, and my math has always been weak. Hence, the best resources for me were studying other people’s code (for which GitHub is a particularly great resource) and figuring out enough math to implement research papers in code.

        Audio DSP has this weird thing going on still where companies in the space are generally incredibly guarded about their algorithms and approaches, but there are a few places where they’ll talk a little more openly. For me, those have been the music-dsp mailing list and the KVR audio DSP forum. The KVR forum in particular has some deep knowledge corralled away – I always search through there when I start implementing something to see how others have done it.

        And, one final little tidbit about DSP: in real-time, determinism is key. An algorithm that is brutally fast on average but occasionally very slow can be less useful than a slower one with more consistent performance. Always assume you’re going to hit the pessimal case right when it’s the most damaging, and in this industry those moments are when a user is playing to a crowd of tens of thousands.

        That being said, I’d encourage just jumping in! Having a good ear and taste in sound will get you further than a perfect math background will.

        1. 4

          https://jackschaedler.github.io/circles-sines-signals/index.html is a really well done interactive intro to the basics (note that the top part is the table of contents, you’ll have to click there to navigate).

          1. 1

            Thanks a lot for this, I just finished it and feel like I finally got some basic things that eluded me in the past. Good intro!

        2. 4

          You’ve got at least one audio synthesis nerd in your audience here! Looks nice, sounds nice, and I’m glad to see work like this. I’m going over somewhat similar ground, but doing the work in Rust.

          Do you do the DSP computations using SIMD? That’s one of the areas I’m focused on right now. My latest explorations get sine wave generation down to under half a nanosecond. That’s without modulation, though the algorithm is designed to be phase-modulable.
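
          To give a flavor of what I mean, the overall shape is a phase accumulator driving a short polynomial in place of libm’s sinf(). The scalar sketch below is illustrative only – it uses the classic parabolic approximation rather than my actual kernel, and the real speed comes from evaluating the polynomial across SIMD lanes for many voices/samples at once:

          ```c
          /* Phase-accumulator sine oscillator using a cheap polynomial in place
           * of sinf() (the well-known "parabolic sine" approximation).
           * Illustrative sketch only -- not the algorithm mentioned above. */
          #include <math.h>

          #define PI_F 3.14159265f

          typedef struct {
              float phase;     /* radians, kept in [-pi, pi) */
              float phase_inc; /* 2 * pi * freq / sample_rate */
          } sine_osc_t;

          static float fast_sin(float x)
          {
              /* parabolic approximation, valid for x in [-pi, pi) */
              float y = (4.0f / PI_F) * x - (4.0f / (PI_F * PI_F)) * x * fabsf(x);
              /* one cheap refinement pass tightens the error considerably */
              return 0.775f * y + 0.225f * y * fabsf(y);
          }

          /* phase_mod is a phase-modulation input, assumed to be within (-pi, pi) */
          static float sine_osc_next(sine_osc_t *o, float phase_mod)
          {
              float p = o->phase + phase_mod;
              if (p >= PI_F) p -= 2.0f * PI_F;
              if (p < -PI_F) p += 2.0f * PI_F;

              o->phase += o->phase_inc;
              if (o->phase >= PI_F)
                  o->phase -= 2.0f * PI_F;

              return fast_sin(p);
          }
          ```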

          Regarding other resources, I pointed to three books when a similar question came up. One of those is available online. Interesting how little overlap there was!

          1. 4

            Raph, it’s a pleasure to see you here! Long-term I would like to gradually move to Rust as well, but shipping 1.0 has naturally been the priority up until now.

            I’m doing a fair amount in SIMD, basically all xmmintrin.h. I am in fact using a heavily modified version of your state-space ladder filter, with nonlinearities, and extended to support pole mixing (which was no easy feat). It’s still not as efficient as I’d like, so my next steps are to further unroll the matrix construction (I already assemble the matrices from unrolled forms of your bilinear version). I’m very interested in seeing your accelerated sine work – I’m using somebody’s SSE2 version which is accurate but currently not particularly fast. Oh, and I have an SSE2 implementation of your tanh approximation if you’d like it. :)
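
            For anyone curious what this kind of thing looks like in practice, here’s a generic sketch – a cheap tanh() saturator built from a (3,2) Padé approximant with xmmintrin intrinsics, processing four samples at a time. To be clear, this is illustrative only and not the specific approximation we’re talking about:

            ```c
            /* Generic SSE sketch of a cheap tanh() saturator: the (3,2) Pade
             * approximant x*(27 + x^2) / (27 + 9*x^2) with the input clamped
             * to [-3, 3]. Illustrative only -- not the approximation being
             * discussed above. */
            #include <xmmintrin.h>

            static __m128 tanh_ps(__m128 x)
            {
                const __m128 lo  = _mm_set1_ps(-3.0f);
                const __m128 hi  = _mm_set1_ps(3.0f);
                const __m128 c27 = _mm_set1_ps(27.0f);
                const __m128 c9  = _mm_set1_ps(9.0f);

                /* clamp; at |x| = 3 the approximation hits exactly +/-1 */
                x = _mm_min_ps(_mm_max_ps(x, lo), hi);

                __m128 x2  = _mm_mul_ps(x, x);
                __m128 num = _mm_mul_ps(x, _mm_add_ps(c27, x2));
                __m128 den = _mm_add_ps(c27, _mm_mul_ps(c9, x2));

                return _mm_div_ps(num, den);
            }
            ```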

            1. 3

              Hah, excellent. Glad to see you adapted and extended my stuff, that’s very much what I was hoping for. I can see that pole mixing would be hard with the nonlinearities.

              Not to be too smug, but I think I’ve got SIMD tanh covered. I was planning on targeting SSE4.2 as a minimum, but I’ll probably have some SSE2 fallbacks in there, as you can absolutely count on it for x86_64 and going all the way to scalar would be quite a hit.

              1. 2

                Not smug at all – and, coming from you, I believe it! Very interested to see what your approach is.

              2. 2

                > I would like to gradually move to Rust as well

                My colleague will be giving a talk about this at ADC this year that you may be interested in (scroll down to “An introduction to Rust for audio developers” on https://juce.com/adc/programme). IIUC this will be live-streamed, but I know ADC puts up past talks online as well.

                1. 4

                  Yeah, I saw that! For context, I’ve actually been using Rust professionally for a few years now, mostly for back-end network services, but I also gave a talk at the first Rustfest back in late 2016 about reverse-engineering USB HIDs, using the NI Maschine as my specimen.

                  I wanted to start with my UI layer, but I make heavy use of inheritance and sub-classing in Rutabaga, and that’s not going to be easy to port. I could probably find other ways of implementing the kind of toolkit I want, but that’s R&D I just haven’t spent time on yet. Soon, soon (probably).

                  I’ll check the livestream. Can’t make it out there in an official capacity this year, but perhaps next year. :)

                  1. 2

                    Very interesting, I did not know about that. It’s a week after a talk I will give at the SF Rust meetup with somewhat similar goals (“Fearless low-latency audio synthesis”). I’d be more than happy to chat with him about what I’m doing.

              3. 2

                As someone who has no experience with any of this, what should I be listening for in the audio samples?

                1. 3

                  By and large, it’s subjective. In the sound demos on the site, all of the tonal (i.e. not drums) sounds were made with Cadmium, so it serves to give producers and musicians a brief taste of what Cadmium could sound like in their own work.

                2. 2

                  I love classic FM synthesis, as well as analog synthesis in the west coast tradition of Buchla and Serge.

                  One thing they have in common is that they don’t rely on filters to control harmonics, like in classic analog subtractive synthesis. It’s a fun and different way to think about sound sculpting.

                  Your instrument is really cool, and the demos sound great! They also lean pretty heavily on the analog-style filter (at least that’s what it sounds like), and I’d really like to hear some demos that show off this other style of expression.

                  I’m playing around with the demo as I write this, trying to get a feel for it. :)

                  1. 3

                    Yeah, I can see how the filter can seem like it isn’t necessary – to be honest, I’ve been using Cadmium in my own tracks since long before I got the filter in. For me, the filter serves the same purpose as the filter in the Mutable Instruments Shruthi-1, where the oscillators are 8-bit and come from an Arduino but there’s an analog filter board attached: it helps smooth the sound and make it more versatile.

                    Cadmium’s oscillators are unabashedly digital, and VPS is hard to anti-alias. I did a pretty admirable job (IMHO) but things can definitely still get out of control. For me, the filter adds versatility and character, especially in those instances.

                    Still, you’re right – I could see about adding some “raw VPS” presets that have the filter turned off.

                    1. 1

                      > For me, the filter serves the same purpose as the filter in the Mutable Instruments Shruthi-1

                      Absolutely. I think it’s a great feature; I love the combination of crude, digital oscillators and analog filter, like in the SID chip. I was just interested in hearing other aspects of the instrument, too.

                      > VPS is hard to anti-alias

                      I can imagine! I made a simple FM synthesizer, and I read a lot of papers about it. I’m not a DSP person, so I ended up just oversampling a lot, and constraining the parameter ranges a bit. Sometimes aliasing and quantisation noise can sound nice though, for that gritty, retro sound.
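
                      For what it’s worth, the oversampling wrapper was nothing fancy – roughly the shape of the sketch below, where render() is a hypothetical stand-in for the FM voice. The one-pole smoother is just for illustration; a real decimator would use a proper halfband filter:

                      ```c
                      /* Naive 2x-oversampling wrapper sketch: render the voice at
                       * twice the sample rate, low-pass, keep every other sample.
                       * Illustrative only; a real decimator would use a proper
                       * halfband filter instead of this crude one-pole. */
                      typedef float (*render_fn)(void *voice); /* hypothetical voice callback */

                      typedef struct {
                          float z1; /* one-pole low-pass state */
                      } decimator_t;

                      static float render_2x(decimator_t *d, render_fn render, void *voice)
                      {
                          for (int i = 0; i < 2; i++) {
                              float x = render(voice);     /* one sample at the 2x rate */
                              d->z1 += 0.5f * (x - d->z1); /* crude low-pass */
                          }
                          return d->z1;                    /* decimate: keep 1 of every 2 */
                      }
                      ```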

                      1. 3

                        Actually, it occurred to me that a few of my favourite patches are almost exclusively VPS – it’s the three “WRL hollow/hollower/hollowest lead” ones in the factory bank. The filter is there for a bit of character but you can turn it off to get a better feel for the sources.

                  2. 2

                    I’d just like to say how happy I am to see a VST, and other VST authors, here on my front page of lobste.rs :)

                    1. 1

                      Hey, happy to be here! :D

                    2. 1

                      Wow, didn’t think lobste.rs would be the place to discover a VST! Gonna give this a shot. Really digging your OpenGL UI lib too – the code looks neat. Great work! :)