1. 4

    I use those Sony headphones too. The way I swap them doesn’t require explicitly disconnecting them on a computer:

    1. Start with the headphones off
    2. Hold down the power button
    3. Keep holding it after they say “power on”
    4. Release it after they say “Bluetooth pairing”
    5. Pick the headphones from the “I know this device” list on your computer or phone or whatever
    1. 3

      Thanks for letting me know! I’ll give it a go.

    1. 8

      No pizza?!

      1. 9

        No, the pizza goes onto a different table! :)

        1. 5

          Can you say something about the chair? Is there a reason for this company, and exactly this chair? I’m also looking for a work chair in the future and have zero idea what to look for.

          1. 3

            I picked the Vitra ID Mesh because I was using it at work for a few years (before moving offices), and I remember it being the most comfortable chair I have used thus far.

            Here are a few points to look for:

            • height-adjustable arm rests. Some move forwards/backwards (“2D arm rests”), others also move left/right (“3D arm rests”). I prefer 2D over 3D here, because I find the 3D arm rests easy to move accidentally.

            • adjustment of the seat cushion, so that you can sit on it with 90-degree angles of legs and arms

            • comfortable lumbar support. You’ll need to test-sit in the chair to figure out what works for you.

            Maybe other people can add what they look for? Bottom line, I think you need to try things out in a physical store :)

            1. 2

              I also have them at work, so I can attest to their comfort. :-)

              I’m just a bit unsure whether it is worth buying one privately, or whether some cheaper vendor is okay too (minus style/look/whatever).

              1. 1

                Probably, but buying a chair is such a hassle that I’d rather skip any experimentation process here :)

      1. 4

        Window Manager: i3

        As an Emacs user, what’s the benefit of using i3 over EXWM?

        This makes it not a great fit for trivial editing tasks, such as commenting out a line of configuration on a server via SSH.

        Are you aware of TRAMP in Emacs? It makes Emacs perfect for this task.

        1. 6

          They’re the creator of i3, so I would guess they prefer its use case.

          Additionally, does EXWM still run into the single-threaded model of Emacs?

          1. 1

            Yes, but it has not been much of an issue for me.

            Between i3, StumpWM and EXWM, I’d say they’re all perfectly usable and it comes down to personal preference, although i3 is the most polished and requires the least configuration.

          2. 2

            I haven’t used EXWM, so I can’t say. In general, I find Emacs Lisp code not very easy to debug, which may just be because I write very little of it (lack of practice), so I’m a bit hesitant to use it for anything as load-bearing to my workflow as window management.

            I am aware of TRAMP and use it. However, in some scenarios it is significant work to get the environment set up (think weird SSH tunneling setups, or bare-bones installations), so bottom line, it’s faster to just use another editor than to get TRAMP going.

            1. 3

              I find Emacs Lisp code not very easy to debug, which may just be because I write very little of it (lack of practice),

              Edebug makes Elisp debugging super comfortable. Basically, you evaluate a function or top-level expression using “C-u C-M-x”, and then every time it’s invoked you step through the evaluation visually.

              But regarding EXWM, it’s afaik still considered experimental, and if you already have a window manager you’re familiar with (coincidentally), then I don’t think there’s much to gain.

              What I would rather wonder is if you use i3 instead of the “built-in” Emacs window manager via packages like https://github.com/davidshepherd7/frames-only-mode or similar concepts?

              1. 3

                Thanks for the explanation.

                I know there is e.g. https://github.com/vava/i3-emacs, but I’m not using it: for me, Emacs is just one window among others.

          1. 3

            I was wondering if you could go a bit more in depth about your network storage.

            1. 5

              Sure, which parts are you interested in?

              I think https://michael.stapelberg.ch/posts/2016-11-21-gigabit-nas-coreos/ should give a good introduction, if you haven’t read that yet.

            1. 2

              Dell introduced the UP3218K in January 2017. It is the world’s first available 8K monitor, meaning it has a resolution of 7680x4320 pixels at a refresh rate of 60 Hz. The display’s dimensions are 698.1mm by 392.7mm (80cm diagonal, or 31.5 inches), meaning the display shows 280 dpi.

              I run it in 300% scaling mode (Xft.dpi: 288), resulting in incredibly crisp text.
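
              For anyone wanting to reproduce this on a plain Xorg setup (no desktop environment): the Xft.dpi value is typically set via ~/.Xresources and loaded with xrdb. A minimal sketch, assuming the stock X11 tools are installed:

              ```shell
              # Tell Xft-aware applications to render at 288 dpi (3x the 96 dpi baseline).
              echo "Xft.dpi: 288" >> ~/.Xresources

              # Merge the setting into the running X server's resource database.
              xrdb -merge ~/.Xresources

              # Verify that the value took effect.
              xrdb -query | grep Xft.dpi
              ```

              Applications generally need to be restarted to pick up the new value.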

              Somewhat incredible that Xorg is still unable (or perhaps unwilling?) to do these configurations automatically.

              1. 6

                Note that modern desktop environments (e.g. GNOME) do set the Xft.dpi value automatically. Xorg itself doesn’t, which is in line with its “mechanism, not policy” approach.

                Another data point as to why this is not trivial: when introducing hi-DPI support in i3 in 2013, we tried deriving the dpi from the screen resolution and screen dimensions, but found that the monitor-reported dimensions were often incorrect, or resulted in undesirable scale factors for the user.

              1. 2

                I’d be curious to know what you use for email nowadays, since you write that you haven’t used notmuch in a long time.

                1. 5

                  Gmail :)

                  With a full time job, removing mailserver maintenance from my setup allows me to enjoy other things instead.

                  1. 4

                    I keep hearing this and I have no idea what you guys are talking about. An email setup for a small group of people takes as much time to maintain as you want to spend. Mine takes, on average, zero hours a month. I’d be curious to hear about email setups that constantly break on their own and need operator attention. Maybe a list of the mistakes that lead to that could help future operators.

                    In terms of precision, a trackball will not be as good as a mouse can be.

                    Depends on the size of the ball. ;) Big-ball trackballs like the Kensington Expert Mouse are more precise than most mice. It may also depend on thumb vs. three-finger operation. The big-ball ones can also be used with either hand.

                    1. 8

                      I’m glad to hear your setup works well for you.

                      Here are some of the issues I have faced:

                      • Went on vacation to another country and on day 1 of the week-long trip, my self-hosted mail server had a hardware failure. Made for a very stressful trip.

                      • Was sometimes unable to receive emails from various senders for various reasons, all of which required getting in touch via a separate channel and then debugging.

                      • Was unable to send emails and only noticed days later.

                      • Spam filter was significantly worse than Gmail’s, even after years of diligent training.

                      Using a major hosted email provider takes care of all of these issues, and reduces my work load when migrating/updating servers.

                      1. 3

                        While it might take zero hours a month most of the time, I at least would be slightly more stressed simply because that thing exists and I have to take care of it. I also try to outsource as much of the technical side of my life as possible to others (with a few exceptions for entertainment).

                  1. 4

                    Glad to see my project here!

                    I’ll be working on router7 live on stream with Matt Layher in less than an hour: https://twitter.com/zekjur/status/1264198945263280128

                    …just in case you want to drop by and ask questions :)

                    I’ll also have an eye on the questions here, of course.

                    1. -2

                      Tag suggestion: “show” instead of “programming”. Also seems like a low-effort post.

                      1. 19

                        Added show, but the site won’t let me remove programming.

                        I certainly spent many hours on this post, so not sure why you think it’s low-effort? :)

                        1. 5

                          We have annual battlestations threads. You just missed the last one. You can, of course, still comment there.

                          And I suggest your tags match them: practices

                          1. 10

                            Thanks! I’ll keep an eye on the 2021 version then, I guess.

                            Added a comment to the 2020 thread, too, and updated the tags. Thanks for the hint!

                          2. 1

                            Everyone has a desk and can write about it, not everyone has your knowledge of Objective-C or can write about that.

                            1. 18

                              But I found it a thoughtful post that not only listed hardware, but also explained why it’s beneficial. It’s a good read.

                              1. 8

                                True, everyone has a desk, but the post also has:

                                Window manager: i3

                                It won’t be a surprise that I am using the i3 tiling window manager, which I created in 2009 and still maintain.

                                And the projects written in Go, including router7 and Debian Code Search.

                                I would say that the title does the content of the post a slight disservice.

                                1. 2

                                  Being the writer of i3 makes the point even stronger. Imagine the interesting posts he could write about i3 and window managers. Instead we get a post about what he has on his desk.

                                  How does writing i3 make his desk any more interesting? He didn’t write i3 because his desk was set up a particular way.

                                  1. 1

                                    But even then the “content” is just a list of products or programs the author has bought, used or developed. There’s nothing to really be learned or gained from such a list.

                                    1. 8

                                      I learned a few things from the post, though:

                                      1. Multi-project management through Emacs Org mode
                                      2. Serializing thoughts by writing them down in a physical notebook
                                      3. The author’s reasons for choosing:
                                      • Go
                                      • a single monitor
                                      1. 3

                                        You needed the developer of i3 to tell you that you can write things down in a notebook?

                                        1. 7

                                          I mean, I like learning about people’s workflows. For example, I learned about bullet journaling from a lobsters thread in 2016 and it was a game changer for me.

                                        2. 2

                                          Let’s be real, you’ve heard all this stuff before and know it well: this isn’t the first time you’ve seen “serialise thoughts by writing them down” or “Go is great because of x”.

                                          1. 12

                                            If you recognized this as a “one of many” posts then feel free to skip it, but it is certainly not low effort and apparently did interest quite a few of us.

                                1. 2

                                  I like the focus on debug symbols! That was always a big mystery to me in Debian.

                                  And I learned that debug symbols are “optional” for Debian packagers. That shouldn’t surprise me, but I don’t see any reason why a good package shouldn’t have them. Although it is sort of awkward that debug symbol packages “pollute” the regular package namespace.

                                  Are there any distros that do better than Debian in that area? Fedora? Arch?

                                  1. 3

                                    Yeah, it has been a pain point for a long time, pretty much across the board :(

                                    I’m currently on Arch Linux, and I lack symbols all the time. In my experience, it’s worse than it was on Debian.

                                    In comparison, it’s easier to develop on distri, at least for me.

                                    Fedora has better tooling than Debian: their package manager has a subcommand to install the required debug packages (which seem to be broadly available), and (after confirmation) it can automatically configure the required repositories for you. I haven’t used Fedora in a while, though, so correct me if I’m wrong.

                                    distri goes one step further, and makes debug infos and sources available by default, without any extra steps. Behind the scenes, this is done by automatically fetching the required SquashFS images from the repository (a static file HTTP server).

                                    distri’s overlay directories are available (without requiring the images themselves) to the debugfs and srcfs services via the repository metadata.

                                    Metadata is transferred in bandwidth-efficient gzip-compressed binary protobuf, as opposed to the XML and text based formats of other distributions. More importantly, metadata in distri is targeted to what really needs to be there, whereas other distributions often just have one type of metadata, an ever-growing grab bag of things.

                                    Targeted metadata and a wire-efficient representation are two low-hanging fruit for many distributions. A lazy-loading read-only FUSE file system for debug and source packages should be a reasonable project to implement.

                                    Hopefully the other distributions pick up some of these goals :)

                                    Edit: forgot to mention: https://developers.redhat.com/blog/2019/10/14/introducing-debuginfod-the-elfutils-debuginfo-server/ also seems pretty cool

                                    1. 1

                                      So the debug info lives at a well-known path, but is lazily fetched? If so that makes a lot of sense.

                                      Does distri do any differential compression between versions of the same package? One downside of SquashFS is that you may lose some structure that could be useful for that.

                                      For example these pairs of images should all be very similar (I would guess 90-99%):

                                      • sources for Python 3.8 vs 3.9
                                      • binaries for Python 3.8 vs. 3.9
                                      • debug symbols for Python 3.8 vs 3.9

                                      and even more so for 3.9.0 vs 3.9.1.

                                      I forget if I mentioned that I tried (and failed) to write a binary-centric / hermetic package manager around 2014… And one thing that was important for my use case was package updates that are much more rapid than Debian’s. To prevent disk space from exploding, and to save on network time, I felt that differential compression was important.

                                      It’s a long story, but an important use case was running R packages, which move extremely quickly – much faster than distros.


                                      Actually I could just copy from a conversation I had with @ac last year, who at the time was also working on a binary-centric package manager like Nix.

                                      Here are some more concrete examples of the problem I would describe as “apps are pyramids with big shared bases”.

                                      • I was dealing with 30 or so R apps, R packages, and R itself. The R code is 500 lines, but the whole app bundle is 500 MB x 30 apps.
                                      • tiny scripts depending on Python, Pandas, NumPy, which are large. Another recent huge dependency is TensorFlow
                                      • Compilers using LLVM: Clang, Rust, Julia, etc.
                                      • Apps using Electron: VSCode, Atom, etc. (Slack too, I think)
                                      • All dynamic web stacks: Python and Django, Ruby and Rails, etc.
                                      • GUI apps and associated frameworks. Actually I believe this is why dynamic linking was invented in Unix.

                                      So another way to think of it is that I think you should be able to install like 10 versions of Clang and Rust and Julia on the same machine, and not have 30x the space of LLVM. It would probably be north of 30 GiB, and you would pay that as disk and network space.

                                      So anyway, I’m not sure if this is in your design goals for distri, but it’s a problem I have had in the back of my mind. I think fine-grained versions are useful for developing and deploying software quickly, but it gets expensive.

                                      Oil was definitely motivated by distros, e.g. the relatively bad mish-mash of languages and macro-processing that distros use to express their package configuration:

                                      http://www.oilshell.org/blog/tags.html?tag=linux-distro#linux-distro

                                      1. 1

                                        So the debug info lives at a well-known path, but is lazily fetched? If so that makes a lot of sense.

                                        That’s correct!

                                        Does distri do any differential compression between versions of the same package? One downside of SquashFS is that you may lose some structure that could be useful for that.

                                        Not right now, and it’s not something that’s on my list either. I wanted to give http://zsync.moria.org.uk/ a shot for differential compression for the download step, but haven’t tried it out yet.

                                        In practice, the large disks we’re used to nowadays, and the fact that most packages are present in one version only (with a few exceptions), make this largely a non-issue in my day-to-day.

                                        I appreciate your description, though, and it sounds like in your environment differential compression is a lot more useful!

                                        1. 1

                                          OK, interesting, I didn’t know about zsync. It seems interesting, so I just downloaded it and built it. It passes a couple of tests, but it doesn’t seem like a mature project otherwise (i.e. does anyone use it in production?).

                                          Although you need to generate a .zsync delta for each pair of versions, I think it could mostly work if you assume that most people will be at the latest versions. Package versions could be kept forever, while you would need, say, 3*N zsync deltas for N old versions and 3 new versions. So it would scale linearly rather than quadratically.

                                          I like that it works with a plain HTTP server.
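
                                          A sketch of what this could look like in practice, based on my reading of the zsync documentation (the filenames and repository URL are made up):

                                          ```shell
                                          # Server side: pre-compute a .zsync control file next to the new package image.
                                          zsyncmake -o python-3.9.1.squashfs.zsync python-3.9.1.squashfs

                                          # Client side: fetch only the blocks that changed, seeding from the old version.
                                          zsync -i python-3.8.0.squashfs \
                                              http://repo.example.org/python-3.9.1.squashfs.zsync
                                          ```

                                          The server stays a plain static HTTP server; all the delta logic happens on the client.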


                                          I have a fairly concrete use case I could try it out with: Oil’s continuous builds which are currently on Travis, but eventually need to be ported to other platforms for non-Ubuntu builds.

                                          The dev dependencies are big and need to be sync’d every time.

                                          Someone contributed Nix support, which is not fast: https://github.com/oilshell/oil/issues/513

                                          But this doesn’t pass tests now, because the package versions are different than Ubuntu, which I develop on. And it doesn’t seem easy to create Nix packages. I already have shell scripts that build the right versions of all my dev dependencies, but that’s VERY far from a Nix package (while I don’t think it would be too far from an Arch package).

                                          Nix seems to require a lot of weird patches to get packages to work, and that compounds as you move “up”. I also have many Python dependencies, and the contributor didn’t have a real idea about how to tackle that in Nix.

                                          It’s mostly because of the /nix/store thing I believe. I think FUSE probably allows you to avoid too many upstream changes. I think mostly relying on --prefix is a good idea.
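
                                          To illustrate the --prefix approach (a hypothetical sketch; the package name and paths are made up): each dependency gets its own versioned prefix, so several versions can coexist without patching upstream paths.

                                          ```shell
                                          # Build a dependency into an isolated, versioned prefix instead of /usr.
                                          ./configure --prefix="$HOME/deps/readline-8.0"
                                          make && make install

                                          # Point dependent builds at that prefix explicitly.
                                          export CPPFLAGS="-I$HOME/deps/readline-8.0/include"
                                          export LDFLAGS="-L$HOME/deps/readline-8.0/lib"
                                          ```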

                                          So what I end up doing is using the cache: feature in .travis.yml to avoid 10 minutes of building dependencies before every continuous build.

                                          That mostly works fine, although I ran into a bug a couple of weeks ago. Builds were incorrectly failing, so I had to turn off the cache for a few builds and then turn it back on. In my experience, that’s not surprising with ad hoc cache mechanisms. It works OK because the dev dependencies don’t change very much, but when they do change, it’s a bigger hassle. You have to manually delete the cache with the travis command line tool.


                                          Anyway, long story short is that I think continuous builds are a good use case for a performance-oriented distro. The way Travis works is that all builds start from a clean slate, and I think they have a lot of one-off local caches in AWS of Debian, Python’s PIP, node.js to make it reasonably fast. I think it works fine because it’s centralized in an AWS data center, with fast networking. I imagine the bugs I ran into were probably some production issue about migrating clusters – i.e. the cache state wasn’t correct.

                                          But I think it would be nicer for a distro to support this use case out of the box – booting a known set of dependencies from scratch for a fast build. And it shouldn’t rely on running in the cloud close to caches.

                                          Since I already use the oilshell.org static web server to fetch tarball sources when the cache doesn’t exist, it seems like it could be easy to plug in zsync! So there is a path to optionally trying it out.

                                          And from there I could port off of Travis onto other platforms. Anyway I’m on the distri mailing list, so if I ever actually try this, I can report back some results :)

                                          (FWIW here is the site with builds: http://travis-ci.oilshell.org/jobs/ , it’s doing a lot of work now, which I’m happy with)


                                          edit: although it occurs to me now that FUSE is not a good dependency for a lot of continuous build platforms, because of kernel support… hm, I will have to think about this. Right now I only have one “layer” of dependencies, really. That is, I just avoid make install and run all my devtool binaries out of the source dir. But if there are transitive dependencies, then you need the equivalent of make install.

                                          Filed bug to keep track of it, not a very high priority: https://github.com/oilshell/oil/issues/756

                                  1. 3

                                    Thank you for posting this, Michael. Very interesting!

                                    I am glad to see people who commit time and development effort to trying to advance the state of the art, even for “plumbing” like package management.

                                    Do you have specific goals with distri? E.g. do you think it will be adopted by another distro, or influence the design of other package managers?

                                    1. 3

                                      Glad you like it!

                                      Yeah, ideally other distributions and package managers would pick up ideas and run with them. If they don’t think this is worth it or realistic, hopefully at least newly built package managers will consider the architectural observations I’m providing here :)

                                      So if you know anyone working in Linux distributions who’d be up for championing such a change, or people working on package managers specifically who might be interested, please share! :)

                                      1. 1

                                        Cool!

                                        I’m afraid I don’t have any personal connections within this area.

                                        However, as you might know there are some smaller distros out there which seem more “agile” and less tied up by legacy. The ones that come to mind are Void and Alpine.

                                        So it might be worth reaching out to them.

                                    1. 3

                                      meta: I think you should have either linked to the “about” page, or added the release tag.

                                      1. 3

                                        Added release tag now, thanks for the hint. Still new around here :)

                                        1. 3

                                          Welcome and nice to meet you! :)

                                      1. 1

                                        Interesting that shells were too slow for the wrapper. dash is the best shell for that, although maybe it’s still too slow?

                                        https://lobste.rs/s/dnfxpk/hello_world#c_zjxrhd


                                        Another option could be execline, which I’ve never used:

                                        https://skarnet.org/software/execline/grammar.html

                                        execline is the first script language to rely entirely on chain loading. An execline script is a single argv, made of a chain of programs designed to perform their action then exec() into the next one.

                                        I guess the C program is simple enough, but you could also write a single C program like execline and accomplish the same thing?


                                        Anyway, very informative post! I look forward to hearing more about distri.

                                        Could distri be used “on top” of another distro, like Nix? I think distros really should be split in half – all the stuff that depends on hardware, which is complex, and then all the portable stuff (shell, Python, Ruby, etc. and everything upward). I think there is too much coupling between these layers in most distros.

                                        Oil’s dev env (which should be in the portable upper half) was partly ported to Nix by a contributor, but it turned out to be hard to get the tests to pass. This is due to the versions of each package being different from Ubuntu’s, and also the surprising number of weird patches (probably in both Nix and Debian/Ubuntu, but more in Nix).

                                        https://github.com/oilshell/oil/blob/master/shell.nix

                                        So I would really like to find a hermetic “semi-distro” to put all the Oil dev tools in. So people can just run one command and download deps and build. Nix is pretty close to that, but there seem to be some problems in practice. (And yes now I have some first hand sympathy with the complaints about Nix’s expression language… )

                                        1. 1

                                          I guess the C program is simple enough, but you could also write a single C program like execline and accomplish the same thing?

                                          Probably, but then that single program still needs to be configured, which takes time. The advantage of compiling the program at package build time is that it can be even quicker.

                                          Anyway, very informative post! I look forward to hearing more about distri.

                                          Nice! Find a list of posts at https://michael.stapelberg.ch/posts/tags/distri/, and subscribe to https://www.freelists.org/list/distri if you want to reach out and discuss :)

                                          Could distri be used “on top” of another distro, like Nix? I think distros really should be split in half – all the stuff that depends on hardware, which is complex, and then all the portable stuff (shell, Python, Ruby, etc. and everything upward). I think there is too much coupling between these layers in most distros.

                                          To an extent. There are a couple of paths which certain packages treat as special. For example, glibc’s NSS mechanism loads plugins from /usr/lib. GCC will consider /usr/include as the system include dir. Using not just distri’s packages, but also its file system layout, helps in these cases.

                                          This is just one caveat that comes to mind. I have indeed used distri packages on Debian and Arch before.

                                          Please also take a look at https://michael.stapelberg.ch/posts/2019-08-17-introducing-distri/#project-outlook — I’m not looking to use distri productively (only for research).

                                          1. 2

                                            Ah OK the /usr/include is interesting. So are multiple versions of the same compiler/libc co-installable in distri? The FUSE indirection solves that problem?

                                            I ran into a related issue compiling code with a “nightly” build of Clang recently. The nightly build appears to use the system C++ standard library (GCC’s libstdc++), but there is an extra flag to compile with the libc++ that comes with Clang itself. I remember having a hard time figuring that out.

                                            I think the issue was some C++17 features like <optional>, which made it hard to compile a lot of software on my system, even with a non-system compiler that supported C++17.


                                            A long time ago, I tried to make hermetic packages with chroots, which sort of works. But I came to the conclusion that having the FUSE layer would probably make things more efficient. But at that time I didn’t want to depend on FUSE.

                                            I think distri overlaps with what I want, but it sounds like your goals are also a bit different. Have you looked into other hermetic distros, i.e. ones where library dependency versions are fixed? I don’t think there are that many. Off the top of my head, there’s only:

                                            1. Nix and Guix (as mentioned I have tried Nix, and it’s OK, but I’m still looking for something else)
                                            2. distri which as you say is a research project for now (and the issues you are uncovering and documenting are interesting)

                                            I can’t think of any others that don’t duplicate the entire dependency chain, which I don’t want …

                                            Basically I want a binary-centric distro and not a library-centric one. Debian seems to treat them as on “equal footing”.

                                            IMO binary stability (e.g. firefox, Clang compiler, VLC, Inkscape, Python interpreter, Python apps like hg) is more important than having exactly one version of a library on every machine.

                                            This is more of a shell-centric point of view, e.g. for Oil. The shell cares about binaries and not libraries. I think that is a more scalable and reliable way of composing software. That is, once you reach a certain point, you start using binaries and versionless protocols (either shell or IPC/RPC), not libraries with incompatible upgrades.

                                            1. 1

                                              So are multiple versions of the same compiler/libc co-installable in distri?

                                              Yes, and for building distri packages, there is no ambiguity because only one version will be visible in the build environment. For interactive builds (done by humans, outside of the distri build environment), FUSE will serve symlinks to the most recent version of each file in /usr/include, so if you need something more specific, it’s up to you to arrange that.
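
                                              The “most recent version wins” rule for the interactive view can be sketched in plain shell. This is a minimal sketch with made-up directory names mimicking the /ro package naming, not distri’s actual implementation:

                                              ```shell
                                              # Given several installed versions of a package, pick the newest by
                                              # version-sorting the directory names -- conceptually what the FUSE
                                              # layer does when serving the convenience symlinks.
                                              mkdir -p /tmp/demo/openssl-amd64-1.1.1f-3 /tmp/demo/openssl-amd64-1.1.1g-5
                                              ls /tmp/demo | sort -V | tail -n1   # → openssl-amd64-1.1.1g-5
                                              ```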

                                              Basically I want a binary-centric distro and not a library-centric one. Debian seems to treat them as on “equal footing”.

                                              binary-centric is a good term! I agree with your desired goal here :)

                                        1. 2

                                          I wonder if it would be possible to have multiple “distributions” that use different policies, e.g. provide the choice whether packages are statically or dynamically linked.

                                          I think it would be really interesting to learn some lessons there regarding development velocity vs. stability:

                                          E.g. how much easier would it be to change things around and experiment with different designs if it was guaranteed that all downstream users are statically linked against the exact library version, so incompatible updates would never break anyone.

                                          1. 1

                                            Package maintainers certainly can reduce the blast radius if they want.

                                            For example, as a user, you can install a package from a third-party repository (new development version of curl), and you can be sure that nothing else breaks on your system, even if curl pulls in a new OpenSSL version that is buggy.

                                            1. 1

                                              I was looking at it from the opposite direction:

                                              Having the option to be sure that there is only one, single, dynamically-linked instance of each security-critical library installed.

                                              Such that updating that library is enough to secure all applications using it in case of a security issue.

                                              1. 1

                                                Yes, that is the status quo in many systems (e.g. Debian). distri makes different trade-offs :)

                                                1. 1

                                                  I wonder if it’s possible to get distri’s benefits without having to forgo dynamic linking. :-)

                                          1. 3

                                            This is similar to what nixos, snap, flatpak, ostree, and guix(?) are currently doing. It’s an interesting concept, but I’m more curious how distri keeps track of the dependencies in the packages. How do you track the openssl version that was used for curl at build time? How is this recorded, and what is the tooling around this?

                                            1. 2

                                              Yes, there are similarities, because hermeticity is a desirable property to have :)

                                              OpenSSL is available in distri under /ro/openssl-amd64-1.1.1g-5. When building curl, curl’s build system will find OpenSSL under that path:

                                              […]
                                              checking for openssl options with pkg-config... found
                                              configure: pkg-config: SSL_LIBS: "-lssl -lcrypto"
                                              configure: pkg-config: SSL_LDFLAGS: "-L/ro/openssl-amd64-1.1.1g-5/out/lib"
                                              configure: pkg-config: SSL_CPPFLAGS: ""
                                              […]
                                              

                                              As the article outlines, we compile with the rpath set to a lib directory, and then create symlinks with the full paths we want to resolve each library to:

                                              % ls -l /ro/curl-amd64-7.69.1-8/lib/
                                              lrwxrwxrwx 1 root root 51 2020-05-07 00:11 libcrypto.so.1.1 -> /ro/openssl-amd64-1.1.1g-5/out/lib/libcrypto.so.1.1
                                              lrwxrwxrwx 1 root root 43 2020-05-07 00:11 libc.so.6 -> /ro/glibc-amd64-2.31-4/out/lib/libc-2.31.so
                                              lrwxrwxrwx 1 root root 48 2020-05-07 00:11 libcurl.so.4 -> /ro/curl-amd64-7.69.1-8/out/lib/libcurl.so.4.6.0
                                              lrwxrwxrwx 1 root root 44 2020-05-07 00:11 libdl.so.2 -> /ro/glibc-amd64-2.31-4/out/lib/libdl-2.31.so
                                              lrwxrwxrwx 1 root root 49 2020-05-07 00:11 libpthread.so.0 -> /ro/glibc-amd64-2.31-4/out/lib/libpthread-2.31.so
                                              lrwxrwxrwx 1 root root 48 2020-05-07 00:11 libssl.so.1.1 -> /ro/openssl-amd64-1.1.1g-5/out/lib/libssl.so.1.1
                                              lrwxrwxrwx 1 root root 46 2020-05-07 00:11 libz.so.1 -> /ro/zlib-amd64-1.2.11-4/out/lib/libz.so.1.2.11
                                              

                                              At runtime, when starting curl:

                                              1. OpenSSL will be searched in /ro/curl-amd64-7.69.1-8/lib/libcrypto.so.1.1
                                              2. …which resolves to /ro/openssl-amd64-1.1.1g-5/out/lib/libcrypto.so.1.1.

                                              Since package contents never change, this is always the same version.
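
                                              The symlink mechanism above can be reproduced with a tiny shell sketch (paths under /tmp/ro are made up for illustration; the real layout lives under /ro):

                                              ```shell
                                              # Simulate a distri-style per-package lib directory of symlinks.
                                              mkdir -p /tmp/ro/openssl-1.1.1g-5/out/lib /tmp/ro/curl-7.69.1-8/lib
                                              touch /tmp/ro/openssl-1.1.1g-5/out/lib/libcrypto.so.1.1
                                              ln -sf /tmp/ro/openssl-1.1.1g-5/out/lib/libcrypto.so.1.1 \
                                                     /tmp/ro/curl-7.69.1-8/lib/libcrypto.so.1.1
                                              # The dynamic loader, pointed at curl's lib dir via the rpath, follows
                                              # the symlink to the exact, pinned OpenSSL version:
                                              readlink /tmp/ro/curl-7.69.1-8/lib/libcrypto.so.1.1
                                              # → /tmp/ro/openssl-1.1.1g-5/out/lib/libcrypto.so.1.1
                                              ```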

                                              Hope that answers your question, let me know if anything is still unclear :)

                                              1. 2

                                                I think there is an important difference: do binaries share libraries when they’re at the same version? For example, say you have 3 scripts running Python 3.8, and 2 scripts running Python 3.9 in each of these systems.

                                                Then do you have:

                                                1. 5 Python interpreters – 3 copies of 3.8, and 2 copies of 3.9.
                                                2. 2 Python interpreters – 3.8 and 3.9 are shared among the respective apps.
                                                3. Error: the two interpreters are not co-installable. (I don’t think this is usually an issue for Python, since you can get it from a PPA (?) But it’s an issue for other similar software.)

                                                The answer is #1 for Nix and Guix.

                                                But what about snap and flatpak? I think they might duplicate the interpreters (#2), which doesn’t really scale IMO. It’s more like Docker, which also doesn’t “scale” in terms of having many binaries.

                                                I thought OSTree was solving a lower level problem, but I haven’t kept up …

                                                1. 1

                                                  The answer is #1 for Nix and Guix.

                                                  I am not sure I follow?

                                                  $ nix-store -qR $(nix-build '<nixpkgs>' -A magic-wormhole --no-out-link) | grep python3-
                                                  /nix/store/xnfcmgfhssgvkqq4vsnc89hwvfyfwcla-python3-3.7.7
                                                  $ nix-store -qR $(nix-build '<nixpkgs>' -A youtube-dl --no-out-link) | grep python3-
                                                  /nix/store/xnfcmgfhssgvkqq4vsnc89hwvfyfwcla-python3-3.7.7
                                                  
                                                  1. 1

                                                    Oops, I meant #2… typo. #1 is to duplicate the interpreters like snap / flatpak (I think), and #2 is to share them (like Nix and Guix).

                                                    1. 1

                                                      Thanks for the clarification! I guess in Flatpak the interpreter could be part of one of the (shared) runtimes, but I don’t know enough about Flatpak to know if there is actually a runtime with the Python interpreter.

                                                2. 1

                                                  I think it uses metadata files for this: https://repo.distr1.org/distri/master/pkg/

                                                1. 4

                                                  Awesome work! I hope some of the mainstream distributions pick up on this.

                                                  Was the go userland instrumental to making image generation this fast? At first glance this shouldn’t have much of an effect here, and instead possibly affect the execution speed. Did you make any time comparisons there?

                                                  Other than that, what would you say are the advantages of the go userland? Things like reimplementing file system IDs would seem to introduce opportunities for divergence.

                                                  1. 3

                                                    Awesome work! I hope some of the mainstream distributions pick up on this.

                                                    Thanks! I hope so, too :)

                                                    Was the go userland instrumental to making image generation this fast? At first glance this shouldn’t have much of an effect here, and instead possibly affect the execution speed. Did you make any time comparisons there?

                                                    I did not make any time comparisons, because then I would have needed to develop another userland :)

                                                    Other than that, what would you say are the advantages of the go userland?

                                                    One small advantage of the Go userland is its self-containedness: instead of including a number of different files (mostly busybox nowadays, though) and finding their shared library dependencies, we can just copy one file. That said, this probably doesn’t make a measurable dent in the initramfs generation time.

                                                    The big advantage of the Go userland is that it’s written in Go, just like the rest of the project, and therefore can easily be profiled, use concurrent code, etc.

                                                    Things like reimplementing file system IDs would seem to introduce opportunities for divergence.

                                                    Divergence is always a risk. However, given that these IDs are stored in literally millions of systems across the world, I think all contributors have an interest in keeping them stable :)

                                                  1. 5

                                                    It’s 2020; initramfs should have been a thing of the past for at least a decade.

                                                    You know you can just pass root=/dev/XXX on the kernel command line and it’ll boot whatever you pass in the “init” argument? You’d get rid of an additional component that you need to maintain on every kernel update, consider in secure/verified/measured boot scenarios, and inspect for alterations by someone trying to take control of your system before it boots completely. Just build distribution kernels with all major storage and file system modules for the target platform. In the perfect setup there would be a way to re-link the kernel with modules from /lib/modules/$(uname -r) without rebuilding.
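
                                                    A minimal command line for such a setup might look like this (device names and paths are illustrative; this is a kernel command line, not a shell command):

                                                    ```
                                                    root=/dev/sda2 rootfstype=ext4 rw init=/sbin/init
                                                    ```

                                                    With the root file system driver built into the kernel, the kernel mounts the root device itself and executes init directly, with no initramfs in between.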

                                                    And yes, you’ll probably have a couple of concerns about some exotic, complex, and overkill setups, but most likely that’s supported by the kernel directly now, like booting from NFS. The kernel-space IP stack can now even do DHCP during boot to mount the NFS share!

                                                    One thing which might be concerning is the encrypted volume mounting. That’s ridiculous to me, as dmcrypt, LUKS, LVM and all other crazy setups are already done in kernelspace. What it lacks is the key gathering method for unlocking it. If I recall correctly, you can point to a keyfile (for example on the ESP) and it’ll be picked up, but there’s no interactive password prompt in the kernel itself, while BSDs have been proud of that for two decades at least.

                                                    Most other edge cases are so marginal they can probably be fixed by simple kernel patches here and there without turning the tables so much. On the other side, people having a bootable rootfs on ZFS are sort of irresponsible…

                                                    1. 8

                                                      One thing which might be concerning is the encrypted volume mounting. That’s ridiculous for me, as dmcrypt, LUKS, LVM and all other crazy setups are already done in kernelspace. What it lacks is the key gathering method for unlocking it.

                                                      Right – and if you’re reading your keyfile straight off the ESP, without some interactive step, what’s protecting your disk again? Full disk encryption support is a must-have for most folks these days (even if it’s not for you), at which point you need something to do the interactive step. If only there were some pre-boot partition you could use to store this bootstrapping information…

                                                      (All of which is to say – you may not need initramfs, but many of us will continue to need it for the foreseeable future, and not just for some crazy exotic configuration.)

                                                      1. 7

                                                        but there’s no interactive password prompt in kernel itself, while BSDs are proud of that for two decades at least.

                                                        That’s a major problem for postmarketOS (and likely other mobile Linux distros), where we need to basically run a touchscreen-enabled on-screen keyboard for passphrase entry. There’s no way around it other than using initramfs, AFAIK.

                                                        1. 6

                                                          Totally agreed that the kernel could handle more of that. Until then, initramfs might as well be fast :)

                                                          1. 4

                                                            I strongly disagree. Things like LUKS and LVM should remain userspace frameworks on top of device mappers, and the kernel shouldn’t be taking on these framework conventions and integrating them deeply. Hell, I’m shocked the Linux kernel even has a VT terminal, which likely could be used to ingest a password from a user.

                                                            1. 3

                                                              I’m sure eventually systemd will offer a complete solution for this. 😈

                                                              Parts needed are already in there.

                                                              1. 1

                                                                I agree with this and have stopped using an initramfs for some time now. The entire concept always seemed weird to me. The kernel can also somewhat handle encrypted devices through dm-mod.create. [1]

                                                                [1] https://www.kernel.org/doc/html/latest/admin-guide/device-mapper/dm-init.html
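
                                                                For reference, dm-mod.create= takes a table in the same format dmsetup uses; the example from the linked kernel documentation creates a linear device named “lroot” from two segments (device numbers are illustrative) and then boots from it:

                                                                ```
                                                                dm-mod.create="lroot,,,rw, 0 4096 linear 98:16 0, 4096 4096 linear 98:32 0" root=/dev/dm-0
                                                                ```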

                                                                1. 1

                                                                  On the other side, people having bootable rootfs on ZFS are sort of irresponsible…

                                                                  Excuse me?

                                                                  What?

                                                                  You drop this in because it’s a feature the kernel can’t really handle today and it doesn’t agree with your point… It is, of course, a legitimate way to handle a root fs for those of us not running Nix, and then you have the gall to say it’s irresponsible?

                                                                  The main value of a ramdisk is to set up disks or devices that your system needs but can’t set up for some reason. If you want to do pre-boot checks on those files, that could also be done in the ramdisk.

                                                                  For some of us, it’s worth the slightly extra complexity.

                                                                1. 2

                                                                  You mention that when there are conflicts, the package with the highest distri revision will take precedence, but are there plans to allow globally selecting a particular package version as the ‘default’? e.g. some distros provide tools that basically symlink /usr/bin/gcc-7 -> /usr/bin/gcc to set the default gcc to 7 even if gcc-8 were installed.

                                                                  As an aside, I had no idea you wrote i3. Wow, thank you for that!!

                                                                  1. 2

                                                                    but are there plans to allow globally selecting a particular package version as the ‘default’? e.g. some distros provide tools that basically symlink /usr/bin/gcc-7 -> /usr/bin/gcc to set the default gcc to 7 even if gcc-8 were installed.

                                                                    That can be achieved using e.g.:

                                                                    % mkdir /tmp/gcc
                                                                    % ln -s /ro/gcc-amd64-8*/bin/gcc /tmp/gcc/
                                                                    % export PATH=/tmp/gcc:$PATH
                                                                    

                                                                    It’s a bit low-level, but you typically only need that temporarily while working on a project that doesn’t build with your preferred compiler version.

                                                                    For distri packages themselves, you would just depend on one version or the other, and not have a conflict there.

                                                                    As an aside, I had no idea you wrote i3. Wow, thank you for that!!

                                                                    Thanks, glad you like it!

                                                                    1. 1

                                                                      Or just PATH=/ro/gcc-amd64-8*/bin:$PATH make

                                                                      Or is there a reason why you’d want gcc-8 and g++-7?

                                                                      1. 1

                                                                        Or just PATH=/ro/gcc-amd64-8*/bin:$PATH make

                                                                        Sure, that’s even shorter. What I was hinting at with my approach is that you can designate a directory to hold and manage symlinks, which might be easier than to override $PATH ad-hoc when you need to.

                                                                        Or is there a reason why you’d want gcc-8 and g++-7?

                                                                        No, no specific reason other than I didn’t pay attention :)

                                                                        1. 2

                                                                          Ok. Btw, this looks really great. I’m convinced packaging/modules isn’t even close to a solved problem, so I’m really excited to see exploration in this space.

                                                                          1. 2

                                                                            Thanks! I appreciate the kind words.

                                                                  1. 5

                                                                    Thank you for sharing, stapelberg.

                                                                    Something I have not seen mentioned elsewhere: GoboLinux. It doesn’t treat packages as mounted images, but it does have a philosophy of “program goes in one folder” and then “farm out symlinks into traditional paths”. I think they also have a kernel module + tool to hide paths (like /usr) from showing up in an ls of /.

                                                                    https://gobolinux.org/at_a_glance.html

                                                                    Sidenote: I’ve read a lot of comments shouting ‘NIH’ and ‘suspiciously similar’ on other sites. Ignore them, do what you want to do.

                                                                    1. 3

                                                                      Thanks for the pointer! A few people have mentioned GoboLinux on Twitter, too. I had read about it many years ago, and there are certainly some similarities. I like it when projects reinforce each other like that.

                                                                      Thanks also for your note of support. I appreciate it!

                                                                    1. 14

                                                                      Welcome @stapelberg to lobste.rs! :-)

                                                                      Happy you finally released it publicly.

                                                                      There is also a German talk about package managers and distri. I submitted this two months ago (only accessible if you are logged in), but was asked to take it down again.

                                                                      I’ve just tried out the Docker image, and this whole project looks really promising (though I’d have called it distri add and not install ^^).

                                                                      New innovation in Linux package management is super needed. NixOS is cool, but what distri is addressing goes even further. I think “Can we do without hooks and triggers?” is a really important question to ask; this is a huge issue the Debian mindset (for example) still has: everything needs to somehow be glued together in various ways…

                                                                      Do I understand right that distri is mounting SquashFS via FUSE? Are there any security issues or guarantees missing in comparison to regular kernel-space file systems? My FUSE security know-how is limited…

                                                                      I’ve read that Linux namespaces are getting FUSE support; will this mean we could create distri-based Docker images?

                                                                      1. 6

                                                                        Thanks!

                                                                        I want to do an English talk at some point, too, and will definitely share the recording.

                                                                        Do I understand right that distri is mounting SquashFS via FUSE?

                                                                        Correct!

                                                                        Are there any security issues, guarantees missing in comparison to regular kernel-space file systems? My FUSE security know how is only limited…

                                                                        If anything, I would say there is less attack surface when running the SquashFS driver in user space: if a malicious image is used (e.g. from a third-party mirror that an attacker convinced you to use), at worst you’ll need to reboot your system when the FUSE daemon crashes (we should look into restarting it when crashing, but it crashes rarely thus far).

                                                                        I’ve read that Linux namespaces are getting FUSE support; will this mean we could create distri-based Docker images?

                                                                        Interesting! Can you share a link please? I haven’t heard of this yet.

                                                                        1. 4

                                                                          Seconded, welcome aboard! I hope you stick around

                                                                          1. 3

                                                                            Thanks! Thus far, I like the discussion; it seems very positive.

                                                                          2. 1

                                                                            Interesting! Can you share a link please? I haven’t heard of this yet.

                                                                            I found the GitHub issue again: https://github.com/docker/for-linux/issues/321#issuecomment-487955090

                                                                            Seems it is not there, but I guess a big foundation is “FUSE Gets User NS Support in Linux 4.18”.

                                                                            Torvalds: User-Space File-Systems, Toys, Misguided People

                                                                            Well I guess this is an entirely different story for read-only SquashFS images.

                                                                            I can’t find it anywhere, but I think it was about certain guarantees you don’t have with FUSE (like multiple users on one file system). I guess this exact use case doesn’t run into many of the issues people usually have with FUSE (higher latency?). I’m also wondering how it will behave in low-memory conditions.

                                                                            PS: This paper looks really interesting

                                                                            http://edoc.sub.uni-hamburg.de/informatik/volltexte/2015/210/pdf/bac_duwe.pdf

                                                                            1. 3

                                                                              Ah, yeah. distri currently uses unprivileged user namespaces (which need to be explicitly enabled on a number of distributions), so it already has permission to mount FUSE within the namespace. I don’t think we’ll gain anything from that change you reference.

                                                                              like multiple users on one file-system

                                                                              We’re using FUSE’s allow_other option so that all users can read from the file system. I think that’s what you mean.

                                                                              PS: This paper looks really interesting http://edoc.sub.uni-hamburg.de/informatik/volltexte/2015/210/pdf/bac_duwe.pdf

                                                                              Yeah, thanks for the link! There’s also https://www.usenix.org/system/files/atc19-bijlani.pdf (I wrote about it in https://twitter.com/zekjur/status/1149582433072771078)