1. 5

    This article nicely illustrates how easy the git email flow is for developers/contributors, but how about for maintainers?

    For instance, how do I easily apply a patch sent to me, if I use the gmail/fastmail web interface? If web-mail is frowned upon, does it work if I use thunderbird/outlook/something-else? Or does it basically require me to use mutt?

    How about managing different patches from different sources at the same time? Github conveniently suggests you create a local copy of the PR’ed branch, meaning you can easily change between different PRs. How does this work with git email flow? Do I just create those branches myself, or is there an easier way to do it?

    What about very large patches? I recently applied a code formatter to a project I contribute to, resulting in a 4MB diff. Not all email clients/servers handle arbitrarily large files well.

    I’ve seen enough descriptions about how git-send-email works from a contributors perspective, but I would really appreciate it, if someone would write down how they then receive and review patches in this flow.

    1. 6

      As long as your e-mail client/interface doesn’t somehow mangle the original message, the simplest solution is probably to copy the message as a whole and then run something like xsel -ob | git am in the repository. I’d reckon this is much easier than setting up a more UNIX-like e-mail client.
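
      In case it helps, here is a minimal sketch of that flow, assuming an X11 clipboard and a local checkout of the repository:

          xsel -ob | git am        # paste the raw message straight into git am
          git am < saved.mbox      # or save the raw message to a file first
          git am --abort           # back out cleanly if the patch doesn't apply

      git am keeps the author, date, and commit message from the mail, which is the main advantage over piping the same text into patch.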

      1. 6

        Although his workflow is obviously not representative of most maintainers, Greg K-H’s writeup was nonetheless fascinating.

        1. 1

          That does indeed look fascinating, and like it addresses some of my questions. Will take a closer look later.

        2. 3

          What about very large patches?

          git request-pull can be used for this purpose; it generates an email with a URL from which the reviewer can pull the changes. It’s generally used by subsystem maintainers in large projects to merge their (independent) histories upstream, but it can also be used to handle large changes which would be unwieldy in patch form.
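
          Roughly, the flow looks like this (the repository URL, tag, and branch names are hypothetical):

              # Contributor: summarise everything since the v1.0 tag on a branch
              # that has been pushed somewhere the maintainer can reach.
              git request-pull v1.0 https://example.org/myrepo.git my-feature
              # The output (base commit, URL, branch name, diffstat) goes into an
              # email, and the maintainer then fetches the changes directly:
              git pull https://example.org/myrepo.git my-feature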

          1. 3

            For instance, how do I easily apply a patch sent to me, if I use the gmail/fastmail web interface? If web-mail is frowned upon, does it work if I use thunderbird/outlook/something-else? Or does it basically require me to use mutt?

            You can certainly use mutt or Gnus if you want. Most projects using git-by-email use mailing lists, and some are fancy with download buttons to get the patches. Most of the time you can pass the output of curl fetching the mailing list page directly to patch. Quoting the manual:

            patch tries to skip any leading garbage, apply the diff, and then skip any trailing garbage. Thus you could feed an article or message containing a diff listing to patch, and it should work.

            I’ve done this as recently as this week, when I found an interesting patch to u-boot.

            If a mailing list or other web view isn’t available, then you can either set one up (patchwork is the de facto standard) or find a client that doesn’t break plaintext emails. Last I checked, receiving patches via Gmail’s web interface was difficult (but sometimes possible if you hunt down the “raw message” link); I haven’t checked Fastmail. If you don’t want to use a TUI or GUI email client, you can configure getmail to only fetch emails from specific folders or labels, and that will create mbox files you can use with git am.
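
            Concretely, that curl-based route looks something like this (archive URLs are hypothetical; many list archives expose a raw or mbox view of each message):

                # If the archive offers a raw/mbox download, git am keeps the commit metadata:
                curl -s https://lists.example.org/patch-1234.mbox | git am
                # If all you have is the rendered page, patch skips the surrounding
                # noise and applies just the diff, as the manual quote above says:
                curl -s https://lists.example.org/patch-1234 | patch -p1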

            How about managing different patches from different sources at the same time? Github conveniently suggests you create a local copy of the PR’ed branch, meaning you can easily change between different PRs. How does this work with git email flow? Do I just create those branches myself, or is there an easier way to do it?

            Branches are lightweight; create them for however many things you want to work on at the same time. I usually don’t use branches at all, instead using stgit, and just import patches and push and pop them as needed.
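
            For reference, a rough stgit version of that juggling might look like this (patch and file names are made up):

                stg init                        # start tracking a patch stack on this branch
                stg import --mbox reviews.mbox  # import one or more emailed patches
                stg series                      # list the patches in the stack
                stg pop --all                   # set everything aside...
                stg push fix-parser             # ...and reapply just the patch you want to test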

            What about very large patches? I recently applied a code formatter to a project I contribute to, resulting in a 4MB diff. Not all email clients/servers handle arbitrarily large files well.

            You can break up that work into smaller chunks to avoid too much disruption to other patches that might be in flight. Nothing stops you from linking to an external patch, though I would probably prefer instructions on how to run the tool myself, and just forgo the patch in that case.

            1. 2
              1. 1

                I wondered about the same thing. After a very long, unfruitful search on the internet, @fkooman pointed me to git am - Apply a series of patches from a mailbox, which pretty much takes care of everything. It would have been helpful if git-send-email.io had a pointer to that, or maybe I missed it.

              1. 20

                I thought most of the point of CentOS was “RHEL, but basically free”. If it’s going to be “what RHEL will be in the future”, then a lot of the value proposition goes away. Why not Fedora at that point?

                1. 7

                  My interpretation is that this will be a new stepping stone for changes coming from Fedora before making their way into RHEL. Nevertheless that too doesn’t sound differentiated enough to be sustainable.

                  1. 6

                    While the Stream lifecycle is shorter than CentOS Linux’s, it is still 5 years. Stream will still keep the same kernel + RH patches for the full lifecycle, so it’s very different from the Fedora model. Stream will be a rolling preview of the next minor release of RHEL.

                    1. 2

                      Actually, CentOS patches will go into RHEL.

                    2. 5

                      According to the Stream page it’s intended to be “positioned as a midstream between Fedora Linux and RHEL”.

                      CentOS started out as an independent community project, but since 2014 it’s effectively been part of Red Hat, which owns the trademark and employs most of its developers. From Red Hat’s point of view all of this makes a lot of sense (Red Hat’s acquisition by IBM probably plays a part in this shift). But for people like you and me who want a “free RHEL” … yeah, it’s not a great change.

                      1. 2

                        Free doesn’t pay IBM anything. With this, they get a rolling beta release where they iron out bugs. The CentOS users get… well, they get nothing. Maybe Scientific Linux or FreeBSD, like other commenters suggested.

                        1. 6

                          FreeBSD is essentially a “rolling release” distro; it’s a fine system but not really a replacement for CentOS’ use case.

                          1. 6

                            That’s not quite true. FreeBSD is at either extreme, depending on what you’re looking at:

                            The base system, which includes the kernel, libc, and a bunch of core libraries and tools, is ABI-stable across an entire major release (supported for 4-5 years, I think). Anything written targeting these is guaranteed to keep working and get security updates for new versions. You can write a kernel module for FreeBSD X.0 and it will keep working for all FreeBSD X.y. Any device ioctl from the base system will keep working in the same way. Anything written using a control interface (e.g. the network configuration interfaces used by ifconfig and friends) has the same guarantees. Between major releases:

                            • All syscalls will keep working via COMPAT interfaces in the kernel (which may optionally be compiled out for small / legacy-free systems).
                            • Control interfaces and device ioctls may change in any way.
                            • Core base system libraries will usually have symbol versioning and so will support old versions. Where there’s a complete ABI break, there’s a userspace compat package that installs the old version, though this may not get security updates.

                            The ports system, which contains all third-party software, is rolling release. If you depend on something like ffmpeg or Qt and want to avoid new versions then you need to either maintain a separate install of the version that you depend on (which is quite easy to do with a fork of the ports tree and configuring poudriere with a different LOCALBASE for all of your fixed-version things), bundle it with your program, or persuade the port maintainer to support multiple versions (a few things do this anyway. I think there are typically 3-4 versions of LLVM in the tree because a bunch of things depend on older ones).

                            In my experience, it’s pretty rare for software to break across even FreeBSD major version upgrades, unless it uses some third-party shiny buzzwordy dependency from ports that doesn’t provide any backwards compatibility guarantees.

                            1. 7

                              The base system, which includes the kernel, libc, and a bunch of core libraries and tools, is ABI-stable across an entire major release (supported for 4-5 years, I think).

                              CentOS is supported for ~10 years, if I’ve understood everything correctly. You also get SELinux and a bunch of other features that are nice for different reasons.

                              FreeBSD is nice in many, many ways, but it is not a replacement for CentOS.

                              1. 3

                                If you need SELinux-like functionality on FreeBSD, you have the MAC (Mandatory Access Control) framework and also an SEBSD module.
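
                                As a small example of the MAC framework, here is roughly what enabling one of the stock policy modules can look like (the uid is made up):

                                    # Load the file-system firewall policy shipped with the base system:
                                    kldload mac_bsdextended
                                    # Deny uid 1001 any access to files owned by root (mode n = none):
                                    ugidfw add subject uid 1001 object uid 0 mode n
                                    # List the rules currently in effect:
                                    ugidfw list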

                                You also have other security mechanisms on FreeBSD like Capsicum.

                                1. 1

                                  Didn’t know about MAC, cool! Not sure how I’ve missed it :-)

                                  One nice thing with SELinux is that it’s included and enabled by default; there are no kernel patches etc. to apply.

                                2. 1

                                  Is that still the case?

                                3. 4

                                  FreeBSD can be rolling release when you track STABLE or CURRENT, and can also NOT be rolling release if you just use a RELEASE version.

                                  1. 1

                                    Yes, but ports/pkg is always rolling (or semi-rolling if you go with quarterly updates), which differs greatly from the CentOS way of doing it. I’m not saying it’s good or bad, it’s just different.

                                    1. 1

                                      With the CentOS/Red Hat approach you end up with very outdated packages very quickly.

                                      With the FreeBSD approach you always have up-to-date packages.

                                      You can also use Poudriere to create and maintain your own package versions: https://www.digitalocean.com/community/tutorials/how-to-set-up-a-poudriere-build-system-to-create-packages-for-your-freebsd-servers
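
                                      If it helps, the basic Poudriere workflow boils down to roughly this (jail name, ports tree name, and package list file are placeholders):

                                          poudriere jail -c -j builder12 -v 12.2-RELEASE   # create a clean build jail
                                          poudriere ports -c -p local                      # check out a ports tree to build from
                                          poudriere bulk -j builder12 -p local -f pkglist  # build every port listed in pkglist
                                          # Point pkg(8) at the resulting repository under
                                          # /usr/local/poudriere/data/packages/ and you decide when anything updates.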

                                      1. 2

                                        With the CentOS/Red Hat approach you end up with very outdated packages very quickly.

                                        Yes, agreed. But you get security updates for them as well.

                                        With the FreeBSD approach you always have up-to-date packages.

                                        Yes, and that can be a problem in itself. Imagine that you can’t upgrade to a newer version due to breaking changes, but a new security vulnerability pops up. What do you do?

                                        I work in jurassic operations; it’s terrible and everything we run on is far too old. But we are not a developing organisation, we barely know anything about anything right now. Organisations like mine will always choose CentOS or similar if we get support for it, and we are willing to pay stupid amounts of money.

                                        I used to work in software development as a tester. But not even in a team with virtually no technical backlog (like, really!) would we ever choose to use a rolling distribution. That is just wasted work, effort, and money. Imagine trying to reasonably test supported versions of your app if you’re using a rolling distribution.

                                  2. 1

                                    You can write a kernel module for FreeBSD X.0 and it will keep working for all FreeBSD X.y

                                    Only if you recompile. There is NO stable kernel ABI. Currently on 12.2 people must compile the GPU drivers locally, because the binary package is produced on 12.0 or 12.1 or whatever and it does not work.

                                    1. 3

                                      I believe the GPU drivers are something of a special case here: they depend on the LinuxKPI module, which does not have the same stability guarantees as the rest of the kernel, because it tracks Linux kernel interfaces that can change with every minor release of the Linux kernel. For the rest of the kernel, there are much stronger binary-compat guarantees. There’s a process before branching each major release of adding padding fields to a bunch of kernel structures so that anticipated functionality can be added without breaking the KBI. This is the reason that a lot of Adrian’s work on WiFi didn’t get MFC’d: it depended on adding extra fields to structures at various places in the WiFi stack, which would have been KBI-breaking changes and so were not allowed into -STABLE without rewriting.

                                  3. 1

                                    I know, I’m just saying that people who have been using CentOS because “it’s Red Hat but free” might want to move to something else. People who needed CentOS for ABI compatibility will have to work with IBM/Red Hat on this, because Red Hat doesn’t want to work for free, obviously.

                              1. 11

                                That looks very nice indeed.

                                It would meet my requirements if it had a clone address included in the display. Given that you’re already setting details of each repo in the config, this could be an easy fix?

                                1. 5

                                  D’oh, how did I miss that?

                                  1. 6

                                    Nice to see new work in this space. Congrats!

                                    I’m also working on something similar. I initially launched it as a web front-end for Git repositories, like cgit. It is written in Go.

                                    Dogfooding it. https://git.nirm.al/r/sorcia

                                    But now I’m working on a collaboration feature where people don’t have to have an account on an instance in order to contribute. This way, I think it will stay lightweight, without pull-request features like Gitea has.

                                    What I’m trying to do here is send patches instead. A brief outline:

                                    1. So, a contributor will generate a patch using git-format-patch.
                                    2. They upload it via the web interface or the CLI utility I’m going to build.
                                    3. For verification, they will have to confirm their email address.
                                    4. The contributor’s patch goes into a moderation queue, where the administrator or repo members with the right permissions can check it and move it to the review queue; from there anyone can review the patch and apply it via the web interface.

                                    I’ve written about this in detail here https://gist.github.com/mysticmode/e07802b949af5985964f25d2cffcae5f
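
                                    For the contributor side of step 1, I’m imagining the usual git-format-patch invocation (branch names here are just examples):

                                        # Turn every commit that isn't on origin/master into a 0001-*.patch file:
                                        git format-patch origin/master
                                        # Those files are what would get uploaded through the web interface
                                        # (or, later, the planned CLI utility).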

                                    1. 3

                                      Sorcia looks really nice. What do you think about forgefed?

                                      1. 1

                                        I’ve been looking at forgefed for a while, as well as ActivityPub and ActivityStreams. But it makes Sorcia a bit complicated. As I said, I need it to be lightweight.

                                        Maybe for discovery purposes, I might use AP. If I really wanted to make it federated or decentralized per se, I’d actually stick with IPFS.

                                        I got to know about IPFS through this article.

                                        It was written 5 years ago but it is still relevant and interesting to me.

                                      2. 1

                                        It would be nice if it worked without JS, like cgit typically does.

                                  1. 5

                                    Are shared libraries really needed these days? To save space? My laptop has 2 TB of it.

                                    1. 2

                                      I’d love for them to go away and us keep everything separate.

                                      While the ‘disk’ space issue is probably not a problem in many cases (I know I’d rather sacrifice space for duplicate code than have to deal with dependency hell) there are likely more issues to consider.

                                      I was going to say that it’s a pain to have to update an identical library in multiple packages when there’s a (security) fix, but it’s common that a fix breaks some packages and not others, so you’re left with the choice of some broken packages or an unpatched vulnerability that you may or may not feel actually affects you.

                                      Being able to update some packages (where a ‘fix’ doesn’t break them) and leave others until later, accepting the lack of fix, seems like a potentially desirable option.

                                      Are there other reasons for shared libraries to continue existing?

                                      1. 14

                                        Sharing library pages between applications? Preloads (mixed bag)? Less shaking the tree at link-time? Ecosystem stability beyond the syscall level?

                                        FWIW, Linux is an aberration in how much it makes static. Most systems have a hard requirement on dynamically linking system libraries, and unlike Linux, they either have extreme ABI stability (Windows, Solaris, etc.) or change the syscall ABI and require a rebuild anyways (OpenBSD).

                                        1. 9

                                          FWIW, Linux is an aberration in how much it makes static. Most systems have a hard requirement on dynamically linking system libraries, and unlike Linux, they either have extreme ABI stability (Windows, Solaris, etc.) or change the syscall ABI and require a rebuild anyways (OpenBSD).

                                          Or both. Solaris and Windows change(d) the syscall interface regularly – the stable boundary is in the system libraries.

                                          1. 5

                                            This would be a blog post I would love to read!

                                            1. 1

                                              I’d love to know how much shared library code is actually shared in RAM in desktop systems, servers, containers, etc. It would seem intuitive that a desktop would have plenty of shared code in some large libraries (e.g. those from Qt and KDE) but I suspect there may be less sharing than we might hope.

                                              LD_PRELOAD? Is it used for something important? I can imagine it might be but I just haven’t noticed it being used.

                                              Are you referring to compile-time or runtime linking? I seem to remember runtime linking being extremely slow for ‘large’ code under Linux, which meant we had to put hacks in place to make KDE apps appear to launch faster. It was something that only affected C++ code, not C, and I didn’t know how it could be improved. Would static linking make this worse?

                                              1. 4

                                                It’s pretty easy on a Linux system: just read /proc/<pid>/maps, extract the libraries and count. I just did that on my virtual server (which handles email, web, gopher, etc.). The most commonly used libraries are /lib/ld.so and /lib/tls/libc.so (every process). Out of 118 libraries used, 44 are shared 8 times or fewer, one 10 times, 3 are shared 11 times, and the rest are reused 21 times or more.
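
                                                A rough sketch of that count, assuming a Linux box where you can read every process’s maps (run it as root for full coverage):

                                                    # For each process, list the distinct .so files it maps, then count
                                                    # how many processes map each one:
                                                    for p in /proc/[0-9]*/maps; do
                                                      awk '$6 ~ /\.so/ { print $6 }' "$p" | sort -u
                                                    done | sort | uniq -c | sort -rn | head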

                                                Also, I use LD_PRELOAD to intercept certain C functions, but generally only on a development system.

                                              2. 1

                                                Well, talking about Windows, it sure has tons of DLLs… but bundled with each program, so there’s almost no deduplication involved. I’d rather directly get static binaries that don’t break so easily (looking at you, pacman; you should be fully static).

                                              3. 3

                                                Application launch time, memory overhead, etc. are the big ones.

                                                But when you say “no shared libraries” where does that end? Every application should have its own complete copy of the windowing and UI libraries? If every application has its own copy of a library that has a security bug, then every application has to be updated.

                                                Put aside that this means improvements to the OS bring no benefit to applications that have already been compiled, and that OS UI changes won’t be reflected in your app; the code bloat of essentially having a complete copy of the OS for every app obviously becomes insane.

                                                1. 1

                                                  The occurrence of security fixes in libraries with many consumers is much more frequent than ABI breakages.

                                                  And this says nothing of how difficult it can be to track packages which statically link said libraries (there is no way to easily examine the binaries for linkage; you have to look at the build recipe or do some heuristic check for the presence of a symbol).

                                                2. 1

                                                  First up, disk space isn’t the important bit for shared libraries these days; it’s in-memory cost and application launch time.

                                                  The second issue is common to OSs: if I have a library with a stable API, and two other libraries communicate with each other through that library, but each links in its own copy, then I need the memory layout of both copies to match, at which point you’re ABI-locked and may as well just have shared libraries.

                                                1. 13

                                                  I’ll note the other thing the announcement says is “On the other hand the level of interest for this architecture is going down, and with it the human resources available for porting is going down”, and the author of this post isn’t offering to step up and maintain it (either for Debian or the other two distros they mention).

                                                  I’d expect Debian would be fine keeping it if there were people willing to maintain it, but if there aren’t then it’s better it gets dropped rather than kept decaying further. Also, IIRC this has happened before for cases like this: if there are in fact lurking people willing to maintain MIPS, then this might get reversed if volunteers come to light as a result of this announcement.

                                                  1. 4

                                                    “Might” being the key word; a whole group of us got together to try to “save” ppc64 and Debian wasn’t interested, more than likely because we weren’t already Debian developers. It’d be nice if the “ports” system was more open to external contributions. But mips isn’t even going to ports, it’s being removed.

                                                    1. 3

                                                      From my experience, if you aren’t already a Debian developer, you aren’t going to become one. My experience trying to contribute to it was absolutely miserable. I’ve heard that changed somewhat, but I don’t feel like trying anymore.

                                                      1. 1

                                                        Can you speak more to this issue? I’m curious as to whether it was a technical or social problem for you, or both.

                                                        1. 3

                                                          More of a social problem. I wanted to package a certain library. I filed an “intent to package” bug, made a package, and uploaded it to the mentors server as per the procedure. It got autoremoved from there after a couple of months of being ignored by the people who were supposed to review those submissions. Six months later someone replied to the bug with a question about whether I was going to work on packaging it.

                                                          I don’t know if my experience is uniquely bad, but I suspect it’s not. Not long ago I needed to rebuild the ppp package from Buster and found that it doesn’t build from their git source. It turned out there had been a merge request against it sitting unmerged for months; someone probably pulled it, built an official package, and forgot about it in the same fashion.

                                                          Now three years later that package is in Debian, packaged by someone else.

                                                          1. 2

                                                            I don’t know if my experience is uniquely bad, but I suspect it’s not.

                                                            Seems like you’re right: https://news.ycombinator.com/item?id=19354001

                                                    2. 3

                                                      …and the author of this post isn’t offering to step up and maintain it (either for Debian or the other two distros they mention).

                                                      From the author’s github profile:

                                                      Project maintainer of the Adélie Linux distro.

                                                      1. 0

                                                        Hmm, maybe. I’d bet against it. If Debian is saying (reading between the lines) “maintaining modern software on this architecture is getting really hard”, then I’d bet against anyone else adding support. Maybe I’ll lose that bet, in which case I owe someone here several beers, but I’ll be very surprised!