1. 52
  1.  

  2. 16

    I think its speed is one of the things that makes apk (and therefore Alpine) so well suited to containers.

    It used to be that the slowness of apt wasn’t a huge issue. You would let apt spin in the background for a few minutes while upgrading your system, and, once in a blue moon, when you needed a new package right now, the longer-than-necessary wait was tolerable. But these days, people spin up new containers left, right, and center. As a frequent user of Ubuntu-based containers, I feel that apt’s single-threaded, phase-based design is frequently a large time cost. It’s also one of the things that makes CI builds excruciatingly slow.
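    A typical container build makes the cost visible right in the Dockerfile; the base tags and package names below are just illustrative, not a benchmark:

    ```dockerfile
    # Alpine: apk fetches and unpacks in one quick pass
    FROM alpine:3.18
    RUN apk add --no-cache git build-base

    # Ubuntu: apt needs an index refresh first, then its multi-phase
    # install, plus cleanup so the index doesn't bloat the layer
    FROM ubuntu:22.04
    RUN apt-get update && \
        apt-get install -y --no-install-recommends git build-essential && \
        rm -rf /var/lib/apt/lists/*
    ```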

    1. 4

      distri really can’t happen fast enough… the current state of package management really feels stuck in time.

      1. 1

        I feel like speed could be a non-issue if the repository state were “reified” somehow. Then you could cache installation as a function, like

        f(image_state, repo_state, installation_query) -> new_image_state
        

        This seems obvious but doesn’t seem like the state of the art. (I know Nix and guix do better here, but I also need Python/JS/R packages, etc.)

        The number of times packages are installed in a container build seems bizarre to me. And it’s not just container builds: right now, every time I push to CI on Travis or Sourcehut, it installs packages all over again. It seems very inefficient and obviously redundant. I guess all the CI services run a package cache for apt and so forth, but I don’t think that’s a great solution, and I also use some less common package managers, like CRAN.
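        A minimal sketch of that f(...) idea, assuming the repo state is pinned to some snapshot identifier; all names here are hypothetical, not any real tool’s interface:

        ```shell
        # With the repository state pinned, "install these packages" becomes
        # a pure function of its inputs, so the resulting layer can be cached
        # under a content-addressed key and reused across CI runs.
        cache_key() {
          # args: image_state repo_snapshot package...
          image_state=$1; repo_snapshot=$2; shift 2
          # sort the query so the key is order-insensitive
          query=$(printf '%s\n' "$@" | sort | tr '\n' ' ')
          printf '%s|%s|%s' "$image_state" "$repo_snapshot" "$query" \
            | sha256sum | cut -d ' ' -f 1
        }

        cache_key img-abc repo-snapshot-2021 gcc make
        ```

        Identical inputs always yield the identical key; change the repo snapshot and the key (and thus the cached layer) is invalidated.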

        1. 2

          Part of it is no doubt that hosted CI platforms don’t do a great job of keeping a consistent container build cache around. You usually have to manually manage saving and restoring the cache to some kind of hosted artifact repository, and copying it around can add up to a nontrivial chunk of your build time.

          At my previous job, that was a big part of our motivation for switching to self-hosted build servers: with no extra fussing, the build servers’ local Docker build caches would quickly get populated with all the infrequently-changing layers of our various container builds.
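          On hosted runners, one partial mitigation is exporting the BuildKit layer cache to a registry so ephemeral runners can restore it; a sketch, assuming BuildKit and a registry you control (the registry name is made up):

          ```shell
          # Store the build cache alongside the image so the next CI run,
          # on a fresh runner, can reuse the infrequently-changing layers.
          docker buildx build \
            --cache-from type=registry,ref=registry.example.com/app:buildcache \
            --cache-to type=registry,ref=registry.example.com/app:buildcache,mode=max \
            -t registry.example.com/app:latest .
          ```

          The copy to and from the registry is itself not free, which is the “nontrivial chunk of your build time” mentioned above.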

        2. 1

          This sounds reasonable, until you realise it means that containers are constantly being rebuilt rather than just persisted and loaded when needed.

          1. 3

            Yeah, but they are. Look at any popular CI - TravisCI, CircleCI, builds.sr.ht, and probably many, many others. They all expect you to specify some base image (usually Debian, Ubuntu, or Alpine), a set of packages you need installed on top of the base, and some commands to run once the packages are installed. Here’s an example of the kind of thing that happens for every commit to Sway: https://builds.sr.ht/~emersion/job/496138 - spin up an Alpine image, install 164 packages, then finally start doing useful work.

            I’m not saying it’s good, but it’s the way people are doing it, and it means that slow package managers slow things down unreasonably.
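            For the unfamiliar, a builds.sr.ht manifest looks roughly like this (the package list is abbreviated; the real Sway one installs far more):

            ```yaml
            image: alpine/edge
            packages:
              - meson
              - wayland-dev
            sources:
              - https://github.com/swaywm/sway
            tasks:
              - build: |
                  cd sway
                  meson build
                  ninja -C build
            ```

            Every one of those packages is installed from scratch on every single commit, which is why package manager speed matters so much here.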

            1. 2

              If you’re rebuilding your OS every time you want to test or compile your application, it’s not the package manager making it slow, no matter what said package manager does.

            2. 1

              Persistence can be your enemy in testing environments.

              1. 2

                Sure re-deploy your app, but rebuild the OS? I understand everybody does it all the time (I work in the CI/CD space), but that doesn’t mean it’s a good idea.

          2. 4

            APK has definitely been nice to use. It’s lightning fast and just kinda “does what I mean”. On the other hand, I’ve had a lot of trouble with APT: packages being kept back without my knowing why, strange errors, and a lot of having to run apt install --fix-broken. Then again, there are also a lot more packages on APT, so it might not be a fair comparison; I tend to use APK in less risky situations (more on the server side). Always interesting to read about these things, though!
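            For what it’s worth, a hedged sketch of the usual diagnosis steps when apt keeps packages back (the package name is illustrative, and “kept back” usually means the upgrade would need to install or remove extra packages, which plain `upgrade` refuses to do):

            ```shell
            apt-mark showhold                  # packages explicitly held by the admin
            apt-get install --dry-run somepkg  # spells out why a package is kept back
            apt-get dist-upgrade               # unlike `upgrade`, may add/remove packages
            apt-get install --fix-broken       # repair a half-configured state
            ```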

            1. 7

              I get irrationally angry every time apt tells me “you have held broken packages”. If the machine was sentient, I would yell back, “no, you have held broken packages! I haven’t touched your repositories! You messed up here, apt, not me!”

              1. 1

                I feel that every kind of software should have a “no, I didn’t screw up, you screwed up” mode.

                1. 1

                  other than admitting guilt, what else would that mode do? would you really trust the thing that screwed you up to unscrew up the situation?

                  1. 3

                    > would you really trust the thing that screwed you up to unscrew up the situation?

                    I mean, with humans who can proactively admit their mistake, that’s usually exactly what I do. Typically works fine.

                    Somewhat less so with machines though.

                    1. 1

                      They are talking about software (so, machines), not humans.

                    2. 1

                      I’d expect it to provide some kind of detailed trace that lists the assumptions that were made and explains how it ended up in the error case, such that I can mark the faulty assumption and let the application create an error report for me that I can submit to a bug tracker if I want to.

              2. 4

                For those curious about interesting package manager designs since we’re on the topic, I find Haiku’s pretty interesting. Instead of extracting files to your filesystem, it instead merges the contents of each package into its own union filesystem. It’s somewhat declarative (the files in a package directory correspond to what’s installed), encourages direct manipulation (drag and drop), and makes things like rollbacks easy.

                1. 5

                  > For example, we recently had a bug in Alpine where pipewire-pulse was preferred over pulseaudio due to having a simpler dependency graph.

                  As a pipewire user, this sounds like a feature to me, not a bug. Pipewire works, pulseaudio never did.

                  1. 3

                    I was directly involved in the bug referenced in the blog post. Pipewire does not work for some use cases, e.g. where echo cancellation is needed (phones). So it’s not a real alternative to Pulseaudio yet, not until it can replace all of its functionality.

                    1. 2

                      In my case, I welcome it, because it does take care of what I actually consider important. It allows pro audio - literally JACK pipelines running on pipewire as-is - while still working for e.g. music players, video games, videoconferencing, and so on.

                      I used to have a convoluted setup with a fake alsa device feeding into jack. Now I just use pulseaudio.

                      It doesn’t allow me as low a latency as jack did (I have to run it at ~10ms, when I could do jack at 5ms), but this is fortunately tolerable for me at this time.

                      1. 3

                        I’m looking forward to replacing pulseaudio, but there’s no shortage of people clamoring for the immediate end to pulseaudio just because their specific use case can be satisfied by pipewire, with no consideration for all of the things that pulseaudio provides that pipewire currently does not.

                        1. 2

                          My perspective is that if you can do pro audio (and with low latency) as pipewire is able to, general purpose use is, at least, a possibility.

                          If you fundamentally cannot (as is the case with pulseaudio), then it will never be able to target general-purpose use. It will never be more than a toy.

                          A rewrite could change this, but a rewrite is what pipewire already is.

                    2. 2

                      Unless you need multiple users to play audio at once using the same audio devices, in which case pipewire doesn’t seem to have anything to address that at the moment. I tried to switch after a reinstall but I couldn’t get that to work nor find anyone actively working on the problem.

                      1. 1

                        Interesting use case. I hope that’s at least a bug in some bugtracker. Lots of families out there do have a single computer they share among the family members.

                    3. 3

                      Clarification as an APT user: yes, apt install/remove manipulates packages directly, but apt-mark manipulates desired state. I exclusively use apt-mark these days.
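                      The distinction can be sketched with apt-mark’s own verbs (package names are illustrative); these record desired state rather than acting on packages immediately:

                      ```shell
                      apt-mark hold linux-image-generic   # desired state: never upgrade this
                      apt-mark showhold                   # inspect the recorded holds
                      apt-mark auto libfoo1               # treat as auto-installed (autoremove candidate)
                      apt-mark manual libfoo1             # treat as explicitly wanted
                      ```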

                      1. 1

                        The same goes for aptitude --schedule-only, or for changing package states in Aptitude’s TUI and then just exiting (e.g. by pressing q) without pressing g (twice by default) at the end.

                        Everyone who’s using Aptitude’s TUI regularly is familiar with this concept.

                        And regarding changing states by editing files: editing /var/lib/aptitude/pkgstates works as well.