1. 31

  2. 7

    There is something worth investigating here, although I don’t understand people well enough to articulate it myself.

    I have only seen Mailpile before in the context of NixOS. nixpkgs had carried an older version of Mailpile, but when a community contributor attempted to update it, the Mailpile author requested that no package distribution carry Mailpile until its official release. After official release, when another community member requested Mailpile be included in nixpkgs, the Mailpile author requested that NixOS only ship upstream-provided packages, and that Mailpile should have control over the package distribution channel. Finally, a third community member updated Mailpile without talking to the Mailpile author.

    I wonder whether the Mailpile author is trying too hard to exert control over the projects of other communities, and in the process, becoming burnt out by the failure of others to agree with their preferences. I also wonder about the wider context of package management; as systems like Nix become more prevalent, we must start to embrace a serious notion of package-capability systems, where the ambient authority of FHS is removed entirely in favor of explicit dependencies, and recursive calls to dynamically load new packages are clearly visible.

    1. 3

      Agree totally - the maintainer might not like the state of the software, but if others find it useful then he should accept that.

      A big part of one-developer (or in this case three-developer) projects is that sometimes you just have to accept that the software is not going to be in the state you want it to be, for a long time, if ever. Until that happens, if people find some use for your software, you shouldn't stand in the way of that.

      The flip side is that said users need to be keenly aware of when they are using software from understaffed projects and manage their expectations accordingly.

      1. 33

        It’s one developer - me - and there haven’t been three of us for over four years now.

        The version Nix was carrying was buggy to the point of being unusable; we were getting the bug reports, and their users weren’t getting updates. Their package was also not in line with how Mailpile should be installed: they’re installing it as if it were a multi-user daemon, not a single-user MUA. Imagine installing Thunderbird or Firefox or LibreOffice with the assumption that only one user could use them! That’s how Mailpile was integrated into Nix. This was not a situation that benefited anyone. So I asked them to stop.

        They didn’t; they’re still many releases behind even our glacially-moving RC branch. If I’m reading their repo right, whoever packaged it still persists in doing it wrong, so any documentation or support Mailpile itself provides WILL be wrong for Nix users.

        Of course, Nix won’t get the blame. Mailpile will; the project’s reputation will suffer because some random packager did things “his way”. But it’s still my fault. Ultimately, everything in this project is. Thus, burnout.

        1. 15

          This reminds me of the situation with xscreensaver. Eventually the author just included a warning that popped up to explain that users shouldn’t be using the packaged version: https://www.jwz.org/blog/2016/04/i-would-like-debian-to-stop-shipping-xscreensaver/

          1. 8

            That’s not the first case of people packaging software messing around with it and breaking it in subtle ways. Blaming the software vendor (“Mailpile author is trying too hard to exert control over the projects of other communities”) is not only counter-productive but also misses the point.

            Software vendors are interested in their users having the best experience with their software. That’s why they want to “exert control”: to ensure high quality. If the package manager for system X lowers the perceived quality of the software on system X, but the software works well on other systems, it’s system X’s packager’s fault.

            Tangential: package managers in Linux are a bizarre concept to me. It’s nice that the entire system updates, including installed software, but the fact that the software doesn’t come from the vendor has always surprised me (maybe because I’ve used Windows, macOS and Linux). Just compare the installation method for Windows and macOS, for example here, and then the Linux installation instructions for the same software.

            1. 4

              There are important social reasons why package maintainers exist, see http://kmkeen.com/maintainers-matter/

              1. 1

                With the caveat that this is written by an Arch package maintainer, and that packages are contrasted mainly with app stores, which is not exactly the ideal model I’d have in mind. I found a new keyword, “Linux Universal Packages”, though… interesting…

              2. 3

                I wonder how to best communicate the nature of Nix. I’ve been using the phrase “package-capability”, and the Nix documentation itself has an excellent overview, but neither of them highlights the most problematic aspect of Nix: almost all existing software will have some differences in behavior when built by Nix or running on NixOS, by design, in ways that software authors have historically disliked.

                There are two layers of giving up control here, and I feel that it is worthwhile to distinguish them. First, there is the inversion of control created by nixpkgs itself, because nixpkgs has the control structure of a ports tree, which means that we build a package for a target system by interpreting the tree’s description of that package. This is an inversion of control from a self-installing package delivered by a vendor. Once inverted, this change in control means that the package manager, not the vendor, controls build configuration and installation directories.

                In this way, NixOS is in the lineage of Gentoo, Arch, and other distros whose ports trees have enabled them to ingest large numbers of otherwise-misfit packages into a single coherent presentation. I recall that in my years of using Gentoo, there were folks who owned upstream packages which they felt were improperly packaged, but who also said that their preference would be for their packages to not be shipped at all. It is not in the nature of a ports tree to agree to such a self-defeating deal. Of course, when those upstreams then turned around and denied support and triage to those Gentoo users, they were wholly within their rights to do so.

                However, there is a second layer, as well. Packages often depend upon other packages. Nix requires that such dependencies be explicitly coded. This humble desire is unyielding, and so it is the case that many packages included in nixpkgs are only included under protest: Source must be patched, tests must be avoided, network traffic must be blocked, file metadata must be destroyed, and even ELF headers must be rewritten. The worst offenders are either emulated or placed in chroots.

                None of that is necessary for Mailpile. Its current Nix expression is quite modest, with few modifications and nothing atypical for Python applications. It honors both of upstream’s wishes: the package will not build locally without a modification either to nixpkgs or to local user configuration, and a warning is emitted in any case.

                1. 5

                  Dependency management is only one of the things a packager is supposed to do. Everything you describe is a prerequisite for getting the software to run at all, but none of it guarantees it will run well.

                  System integration also matters. Making sure packages play nice with each other. Firewalls, modular configurations, SELinux (or AppArmor, or whatever the flavour of the month is), desktop “start” menus, etc. etc.

                  And updates matter a great deal, especially for security-critical software. Which Mailpile is - the web UI aside, e-mail clients in general are a major vector for malware and one of the few “server type” apps on a desktop system where remote attackers can just push their attacks into your system and hope you get pwned.

                  My reservations about Nix pertain to integration and updates. In your discussion of “the nature of Nix”, you mention neither. I dunno if it’s a coincidence, or whether this emphasis accurately characterises Nix, but those areas are exactly where Nix is failing Mailpile, IMO. If they’re not even on the radar, then that doesn’t bode well.

                  1. 2

                    From my view, Nix is a bit of a wild west when it comes to package quality: any given package is only as good as the last person who used it and cared enough to make it work well. There seems to be little control over ensuring that maintainers are active and keeping things up to date.

                    On the flip side, it is very easy for someone to take control of how their own application is packaged in NixOS, simply by making a pull request on GitHub.

                    1. 5

                      Saying it’s “very easy” doesn’t make it so; there is a learning curve and a time investment involved, which means less time for doing other things. Fixing all of Mailpile’s bugs is also easy! Simply make a pull request on GitHub… :-P

                      1. 4

                        Ok, you are right - I think I should have said “relatively easy”, relative to some other Linux distributions.

                    2. 2

                      It is tempting to describe NixOS as that set of Nix expressions which provides an integrated system. NixOS modules are written in plain Nix but provide firewalls, modular configurations, a hardened system profile, XDG paths, etc. However, this is not the entire story.

                      An update of a Nix package is initiated by the user in control of the Nix daemon. As a result, packagers do not have direct control over which package versions are visible to users. Instead, users must decide for themselves when to update. Rather than considering certain packages to be important to security, we instead decide that all packages are important to security, and there is a contact page in case of emergency. Upstream package authors should use common reporting pathways like CVE instead; CVEs might be annoying, but they embody the reality of upstreams not knowing how to contact every downstream.

                      Without the experience of using Nix or being steeped in capability-aware security principles (POLA, confused deputies, etc.) it could be hard to grok why we use Nix.

                      Returning to Mailpile, and looking at its Repology page, it would seem that no distros are packaging the latest release candidate. Also, it would seem that every packaging distro is a ports tree. This is roughly as good as the versioning will ever get for Mailpile in the wild; compare and contrast with another Python package with a frustrated upstream, Quod Libet, whose Repology page contains many distros with good reputations and outdated versions.

                  2. 1

                    I’m not a primary Linux user. But this package manager awkwardness… is there any solution in sight?

                    1. 3

                      Linux package management works this way due to pervasive use of shared libraries, where 20 different packages may depend on a single shared library L. There’s only one copy of L installed, but in order to correctly run the latest version of all those packages, you might need up to 20 different versions of L installed. I won’t go into details, but the resulting mess is called “dependency hell”.

                      The solution is for each application to bundle its own preferred versions of each of its dependencies. There are three different Linux app distribution formats that support this: Flatpak, AppImage, and Snap.

                      I’ve never used Nix, but I’ve heard that Nix is supposed to fix dependency hell in a different way. However, this story is about a Nix packager breaking an app, so I guess it isn’t perfect. By contrast, Flatpak, AppImage and Snap allow the original app developer to build universal Linux binaries that run on any modern Linux system and distribute those binaries directly to customers, without a middleman repackaging and compiling their software and possibly breaking it.

                      1. 1

                        Wouldn’t static linking help with this? Or is that irrelevant/impractical for some reason?

                        Also, regarding the three package formats you mentioned: aren’t those geared towards apps? Is it universally true that the only package managers that let developers bundle their own versions are the ones geared toward applications? (I guess this makes sense now in hindsight…)

                        1. 3

                          The open source project I’m working on, Curv, cannot be statically linked, both because it uses OpenGL, and because it dynamically loads shared objects at runtime. There are lots of reasons why static linking can be impossible on Linux.

                          1. 1

                            There are lots of reasons why static linking can be impossible on Linux.

                            What are some other reasons?

                            1. 1

                              I mean that there are lots of dependencies that can make static linking impossible. All you need is one of those dependencies.

                              The fundamental issue is that if you call the dlopen function from the GNU C library, then static linking is impossible. Any dependency that calls this function will prevent you from creating an ELF statically linked executable.
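                              As a minimal illustration (not from the thread; the library name and build command are assumptions, using glibc’s libm as the loaded library), here is roughly what such a dlopen call looks like. Any binary containing one needs the runtime dynamic loader, so it can never be a fully statically linked ELF executable:

                              ```c
                              /* Minimal dlopen sketch: load libm at runtime and look up cos().
                               * Build with something like: cc demo.c -ldl
                               * Because dlopen() requires the runtime dynamic loader, a binary
                               * containing this call cannot be a fully statically linked ELF. */
                              #include <dlfcn.h>
                              #include <stdio.h>

                              int main(void) {
                                  /* Open glibc's math library by its usual soname. */
                                  void *lib = dlopen("libm.so.6", RTLD_NOW);
                                  if (!lib) {
                                      fprintf(stderr, "dlopen failed: %s\n", dlerror());
                                      return 1;
                                  }
                                  /* Resolve the cos symbol to a function pointer. */
                                  double (*cosine)(double) = (double (*)(double))dlsym(lib, "cos");
                                  if (!cosine) {
                                      fprintf(stderr, "dlsym failed: %s\n", dlerror());
                                      dlclose(lib);
                                      return 1;
                                  }
                                  printf("cos(0.0) = %f\n", cosine(0.0));  /* prints cos(0.0) = 1.000000 */
                                  dlclose(lib);
                                  return 0;
                              }
                              ```

                              Running `file` on the compiled result will report it as dynamically linked, with an interpreter line pointing at the loader, no matter how many other libraries were linked statically.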

                            2. 1

                              I’m not sure I understand enough about this. As far as I was aware, virtually anything can be treated as a compile-time constant and baked into the executable. Do you know of any good resources on the topic, or would you otherwise care to explain?

                              1. 1

                                If you call the dlopen function from the GNU C library, directly or via one of your dependencies, then static linking is impossible. If you use dlopen, you can still statically link static libraries into your executable, but you cannot create an ELF statically linked executable file.

                                Here is some output from my bash shell to illustrate the difference (curv is dynamically linked, curvc is statically linked):

                                $ file curv curvc
                                curv:  ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=bfd93bf852da5ac346c8dc4848446743d0f8e14c, not stripped
                                curvc: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, for GNU/Linux 2.6.32, BuildID[sha1]=5bd2494d8bd65078e53edab7ee04ab72e194c1cf, not stripped
                                
                      2. 1

                        The free software ecosystem is highly decentralized. Most of the software available in any Linux distribution is not even specific to Linux, but can also work on various BSDs, and probably Windows and MacOS too. But compiling and installing software works somewhat differently in all of these targets, especially so for things like persistent services, boot dependencies, default configurations, etc. So the upstream typically only provides a flexible, portable source code repository (the “tarball”, as it were), and then various communities take care of curating, vetting, configuring, and packaging the upstream source into artifacts more suitable for a particular system.

                        This arrangement is an old tradition and has never been perfect. But free software as a whole can’t just decide on one ultimate way of doing things. It’s always a sprawling decentralized ecosystem. That’s like a basic political reality. The whole thing is like a continuous experiment with no single organization in charge—and so no central decision about how to package software. Network effects, technical advantages, and capitalization give rise to some leaders, like Ubuntu, Red Hat, etc, but there’s always competition and alternatives.

                        Nix itself is supposed to be a significant development in software packaging and deployment. It started as a research project in the software technology group at Utrecht University. The introductory chapter of Eelco Dolstra’s 2006 Ph.D. thesis explains the context and motivation. The free software ecosystem is a good fit for research like this, and it’s no accident that Nix developed in that context rather than on Windows or macOS.

                        It seems likely that Nix’s novel design will influence the future of software deployment. I can easily imagine a similar system becoming the standard way to create containers, for example. It has already led to at least one new independent project with the same basic principles, namely Guix and GuixSD—which seem positioned to become the official GNU project standards.

                        The free software ecosystem is a vast commons with an enormous number of different people, organizations, intentions, etc. It’s not a single commercial enterprise, and so it can never have the unitary characteristics of Windows or macOS.

                        1. 1

                          But free software as a whole can’t just decide on one ultimate way of doing things. It’s always a sprawling decentralized ecosystem.

                          What do you think about standards organizations such as the IETF? I’m glad that there are documents such as RFC 7540, and that things such as HTTP can be implemented on any OS, in any language and framework, and best of all they all interconnect. I implemented an HTTP server years ago, and Firefox developers didn’t have to tweak my server to work with their browser, nor tweak their browser to connect to my server. It just worked, and I didn’t have to ask anyone for permission.

                      3. 1

                        I think you are being too hard on yourself.

                        You’ve obviously made something people like - the repo has thousands of stars - and you just have to decide for yourself where your limit is, which you’ve done. Kudos to you, as that can be difficult.

                        However, another side of that is learning to accept what you can’t control - and you can’t control whether dumb people package your software dumbly. The best you can do is put a FAQ or README somewhere prominent explaining the situation and point everyone to that; as long as you’ve documented your side of the story, smart people will read it and understand you aren’t at fault.

                        Dumb people won’t read anything and will blame you - but you can’t do anything about that except archive the repo, and I know you don’t want to do that.

                        1. 14

                          Thanks. One thing: I won’t go so far as to say they’re dumb or doing things dumbly… they are by and large volunteers, doing their best. I do think they made some incorrect assumptions, and I lacked the bandwidth/energy/time to correct that. It’s just hard.

                  3. 2

                    I’m jealous of you working for ISNIC, it sounds awesome to help run core internet infrastructure. I’m happy you’re feeling better and have hope again.