1. 62

  2. 20

    Apropos of nothing other than it is cool, the line from the article:

    In the real world, the analog world, sound is made up of waves, which are air pressures that can be arbitrarily large.

    reminded me that while you can have arbitrarily high pressure (not sure if there is a limit there? perhaps if you get the pressure high enough to drop the boiling point of the gas so it liquefies?) there is a lower limit (vacuum).

    The practical upshot of this is that some rocket launches are loud enough that they have “clipping” artefacts, caused not by hitting the dynamic range limits of sound recording hardware, but by hitting the dynamic range limits of the atmosphere (on the bottom end).

    tl;dr - if you want to hear louder things without artefacts, you need more atmospheric pressure.
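
    To make the analogy concrete, here is a tiny Python sketch of what clipping does to an overdriven signal: samples outside the representable range are simply clamped, producing the characteristic flat-topped waveform. (An illustration of the concept only; in the rocket case the “clamping” happens in the air itself, not in software.)

    ```python
    import math

    def clip(sample, lo=-1.0, hi=1.0):
        """Clamp a sample to the representable range, the way a
        fixed-point format or an overdriven input stage would."""
        return max(lo, min(hi, sample))

    # A sine wave with amplitude 1.5 overdrives a [-1.0, 1.0] range:
    # every sample beyond the limits is flattened, distorting the
    # waveform -- the digital analogue of running out of atmosphere.
    overdriven = [1.5 * math.sin(2 * math.pi * i / 16) for i in range(16)]
    clipped = [clip(s) for s in overdriven]
    ```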

    1. 1

      That’s fascinating! I only included these to quickly introduce the idea of conversion from the analog world to the digital world. I’m neither a physicist nor audio engineer so I’m not very knowledgeable on that aspect. I’m amazed at all the domains of expertise that exist.

      Anecdote: I do know first hand what large shockwaves did to buildings in my city.

    2. 4

      This is an excellent overview of Unix audio, and I’ll be recommending it to others. Thank you for shedding light on a murky subject.

      1. 2

        Is the Unix audio stack implemented well? The author points out that it is very complicated, but is this unavoidable? What could be improved, and what had to be deprecated along the way? I’m interested in hearing some opinions :)

        1. 3

          Only one way to find out: Read the article. 😁

          1. 2

            I read it, and I wonder what you think would be the next steps to improve the current state – keep developing things as-is, remove some layers, replace some things, etc.? What would be the biggest bang for the buck?

            1. 2

              I’m not sure I really have an “opinion”. I think it depends on the needs. A pro-audio user won’t be looking for the same thing as a desktop user.

              On the desktop I think the next big thing is use-case scenarios: things like grouping notifications as a single audio stream, videos as a group, music, voice, etc. But that’s hard to do without the participation of application developers, who would have to add a media class/category to their streams.

              Otherwise, you are left with something like PulseAudio’s restoration db, which remembers streams and moves them to the sink you want as soon as it sees them. Currently we don’t have that: we need different sinks/sources for different use cases, not one sink per device port. What could be done is to automatically create virtual sinks such as “notification”, “voice”, and “media” by default, with the routing to the device port. This can be done with module-device-manager, or with the recommendation initially made for Phonon in KDE of adding virtual sinks for each use case. Take a look at this Phonon screenshot for an idea of what I mean. I think KDE audio is going in a good direction.
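
              As a sketch of the idea (with purely hypothetical names, not a real PulseAudio or PipeWire API): routing by media class boils down to a table from a stream’s declared role to a per-use-case virtual sink, with a fallback for applications that declare nothing.

              ```python
              from typing import Optional

              # Hypothetical sketch only: the sink names and roles below are
              # illustrative, not a real PulseAudio/PipeWire API.
              DEFAULT_SINK = "alsa_output.analog-stereo"  # made-up device sink

              # One virtual sink per use case, each routed to a device port.
              ROUTING = {
                  "notification": "virtual_sink.notification",
                  "voice": "virtual_sink.voice",
                  "media": "virtual_sink.media",
              }

              def route_stream(media_role: Optional[str]) -> str:
                  """Pick a sink from the stream's declared media role; fall
                  back to the default device when the application declared
                  none (today's common case)."""
                  if media_role is None:
                      return DEFAULT_SINK
                  return ROUTING.get(media_role, DEFAULT_SINK)
              ```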

        2. 1

          For example, on Windows the audio APIs being ASIO, DirectSound and WASAPI.

          Windows just has WASAPI and ASIO. DirectSound, like XAudio2, is implemented on top of WASAPI and adds only overhead.

          1. 2

            You could then say that Linux only has ALSA and that PulseAudio/JACK/PipeWire/sndio are implemented on top of it and “adds only overhead”. That doesn’t clarify or explain anything.

            1. 2

              Do you then also include SDL and JUCE and SoLoud and FMOD and Wwise and XAudio2 and XACT? How about apulse?

              There’s a meaningful difference from a user’s perspective between running straight ALSA and running one of PulseAudio, JACK, PipeWire, and sndio. They’re daemons; you have to run them, and they have some common effect on all the applications that run on them. For example, with PulseAudio you can dynamically change which sink an application’s audio is routed to, whereas with JACK you cannot. No such property applies to DirectSound. It’s simply a library implemented on top of WASAPI; an implementation detail, if you will, of a sound-producing application.
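
              For reference, that dynamic re-routing is a one-liner with PulseAudio’s pactl (`pactl move-sink-input <index> <sink>`). A minimal sketch that only builds the command; the stream index and sink name below are placeholders:

              ```python
              def move_sink_input_cmd(stream_index, sink):
                  """Build the pactl invocation that re-routes an
                  already-playing stream to another sink; run it with
                  subprocess on a live system."""
                  return ["pactl", "move-sink-input", str(stream_index), sink]

              # e.g. move stream #42 to a USB headset (placeholder values):
              cmd = move_sink_input_cmd(42, "alsa_output.usb-headset.analog-stereo")
              ```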

              1. 1

                You’re right, it all comes down to where you choose to put the complexity: hardware, kernel, user-space, etc. That’s why this little diagram is important: https://venam.nixers.net/blog/assets/audio_unix/device_functionality.jpg

                For example, with PulseAudio you can dynamically change which sink an application’s audio is routed to, whereas with JACK you cannot.

                Yes, unfortunately.