1. 3

    Clarification as an APT user: yes, apt install/remove manipulates packages, but apt-mark manipulates the desired state. I exclusively use apt-mark these days.
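
    To illustrate (somepackage is a placeholder), the apt-mark subcommands I mean look like this:

        # Mark a package as manually installed so autoremove won't touch it:
        apt-mark manual somepackage
        # Mark it as automatically installed (a candidate for autoremove):
        apt-mark auto somepackage
        # Keep the installed version back from upgrades:
        apt-mark hold somepackage
        # List what's currently marked as manually installed:
        apt-mark showmanual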

    1. 1

      The same goes for aptitude --schedule-only, or for changing package states in Aptitude’s TUI and then just exiting (e.g. by pressing q) instead of pressing g (twice by default) at the end.

      Everyone who uses Aptitude’s TUI regularly is familiar with this concept.

      And regarding changing states by editing files: editing /var/lib/aptitude/pkgstates works as well.
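
      A minimal sketch of the schedule-then-commit workflow on the command line (somepackage is a placeholder):

          # Record the desired action without performing it; aptitude stores
          # the state in /var/lib/aptitude/pkgstates:
          aptitude --schedule-only install somepackage
          # Later, with no package arguments, carry out all pending actions
          # (the command-line counterpart of pressing g in the TUI):
          aptitude install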

      1. 2

        I’ve flagged the post as a broken link since I can’t edit it anymore.

        1. 2

          Are admins capable of fixing this? It seems it mostly gives negative karma to the thread.

      1. 3

        I’m running OpenBSD on a 2004 Centrino single-core laptop (haven’t updated in a year or so) and it’s… ok. You can work with it if you have patience, but using Chrome is a little tedious. This is a good data point; I think a dual-core CPU adds that little bit of power that lets you work in a relatively normal way (it’s not about the 400 MHz).

        1. 3

          I have FreeBSD 10 on a laptop from around 2001. It’s old enough to have a real serial port, and I leave it in my cellar for when I need to get a serial console onto my NAS to fix things. It’s actually the only laptop I own myself, and it got somewhat more use until not so long ago, when my employer was forced to provide me with one for home working during lockdown. It has no USB ports, and the CD drive seems to have broken now, so I fear any further upgrade attempts may break it beyond repair. I might have swapped it to NetBSD otherwise.

          1. 2

            I’ve used Debian on a Thinkpad 760ED and a 760XD, both from around 1997, until a few years ago.

            I used the 760ED for quite some years, from around 2000 until 2006 or 2007 or so, when I bought my first new Thinkpad, a T61 (which died several times, IIRC once under warranty, due to an overheating NVidia GPU).

            The 760XD was a “performance” at the Vintage Computing Festival Europe a few years ago: a fresh install of a then-current Debian release on a laptop from the 90s, starting with Debian 3.0 Woody, because it was the last release installable from floppies, and then dist-upgrading several times, until IIRC Debian 8 Jessie.

            Unfortunately, Debian dropped Pentium 1 support with the release of Debian 9 Stretch, so the last Debian I upgraded them to was Debian 8 Jessie. See also this Retro Computing Stack Exchange question of mine from back then.

            Those two laptops still exist, and I should probably apply security updates once before Jessie ELTS finally goes EoL. Probably a nice holiday side project.

            Oh, and a funny coincidence: I know Matto (the author of the linked article) personally. Well, actually not so big a coincidence; we’ve both been into retro computing for quite a while. :-)

            1. 2

              Yeah, Debian is my default, so I was deliberately looking to run something different to play around with on that machine. When I first did this a few years ago, a few distros had already stopped supporting 32-bit processors, and it’s only gotten worse since.

          1. 6

            *sigh* Still have two or three lying around. All but one broken; not sure if that last one would still run. :-/

            1. 22

              Or, in a “a picture’s worth a thousand words” format: https://xkcd.com/1179/

              1. 4

                Or recently on Twitter, referring to this Reddit posting (already deleted by moderators, though still accessible).

              1. 39

                As a (former) application author, I find it very hard to sympathize with distro packagers when their opinions, and the patches they derive from them, continue to be responsible for a good chunk of bug reports that cannot be reproduced outside of their distro. Why should I cater to the whims of multiple Linux distros? What do I get out of putting more work into the product I already provide for free? Imagine if the Apple App Store, on top of placing random restrictions on application submissions, added random patches to your application and wasn’t sufficiently careful about which of them broke the end-user experience. That is what upstream maintainers have to deal with, and they don’t even get paid for it.

                See also Linus on static linking and distro packaging.

                Keep in mind that 1) this is literally only a problem on Linux distros and other third parties repackaging applications and imposing their opinions on everybody, and 2) the actual context of this blog post is that the author was mad at Python packages using Rust dylibs; it seems his sense of entitlement has not significantly improved since then.

                1. 16

                  Ideally you don’t need to do anything except not make distro maintainers’ lives harder.

                  If you absolutely want to provide your own binaries directly to end users, as of 2020 there are things like Docker images and AppImage, so you can bundle what you need at that level.

                  So while we don’t have Linus’ dive tool in Void Linux yet, it looks easy to package, and once that is done, the maintainers will take care of providing the newest version to users, taking that work off your hands.

                  We also generally only try to patch build issues and security fixes that did not make it into a release yet. So users of our binary packages often get fixed versions quicker than upstream.

                  1. 6

                    Ideally you don’t need to do anything except not make distro maintainers’ lives harder.

                    I think the point of cognitive dissonance here is that what distro maintainers want often makes application developers’ lives harder. Dynamic linking doesn’t work well for many application developers, because libraries break even when they don’t change “major” versions: that’s just a fact of life. No software development process is perfect, and the application developer can’t reasonably test against every different patch that every different distribution applies to every different library. Being able to just drop a binary onto a machine and be confident it’ll work the same on that machine as it does on your own is a selling point of languages like Go and Rust.

                    And if you want to change the libraries used for these languages, it’s not exactly hard: just change the go.mod or Cargo.toml to point at the library you want it to use, rather than the one it’s currently using, and rebuild.
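
                    For instance (library name, version, and URL are made up for illustration):

                        # Go: point the module at a patched release via a replace directive:
                        go mod edit -replace example.com/libfoo=example.com/libfoo@v1.2.4
                        go build ./...
                        # Rust: pull in a distro's patched fork of a crate (cargo >= 1.62):
                        cargo add libfoo --git https://example.org/distro/libfoo --branch security-fix
                        cargo build --release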

                    If you absolutely want to provide your own binaries directly to end users, as of 2020 there are things like Docker images and AppImage, so you can bundle what you need at that level.

                    Docker and co are worse for security than static linking. Packaging as a Docker container incurs all of the downsides of static linking, and also all of the downsides of any outdated packages in the image. Static linking only distributes the libraries you need: containers distribute effectively an entire OS minus the kernel (and also the libraries you need).

                    Docker as a solution only makes sense if application developers want both dynamic linking and static linking; dynamic if you install it on the host, and effectively-static if you run it as a container. But the core issue is that many application developers do not want dynamic linking! If you do not want dynamic linking, static linking is better than using containers.

                    1. 4

                      I think the article confuses two separable things:

                      • Bundling in the shipped product.
                      • Provenance of inputs.

                      The former is a problem in terms of computational resources, but not much else. If a program statically links its dependencies (or uses C++ header-only libraries, or whatever), then you need to redo at least some of the build every time there’s an update (and generally you redo the whole build, because incremental builds after dependency updates are flaky). The FreeBSD project can rebuild the entire package collection (30,000+ packages) in under two days on a single machine, so in the era of cloud computing that’s a complete non-issue unless you’re running Gentoo on an old machine.
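
                      For a sense of scale, such a bulk rebuild boils down to a single (long-running) poudriere invocation; the jail name here is a placeholder:

                          # Build every port in the tree inside a clean build jail; -a = all:
                          poudriere bulk -j 13amd64 -a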

                      The second is a much bigger problem. If there’s a vulnerability in libFoo, a distro bumps the version of libFoo. Anything that has libFoo as a build-time dependency is rebuilt. Security issue fixed; we just burned some cycles doing the rebuild (though, in the case of static linking, possibly a lot fewer than we’d burn by doing dynamic linking on every machine that ran the program). If a program has vendored its dependency on libFoo, there’s no metadata conveniently available that tells the distribution it needs to be rebuilt against a newer libFoo. It’s up to the program author to issue a security advisory, bump the library version, and so on. The distro will keep shipping the same library for ages without any knowledge.
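
                      On a Debian-style system that build-time metadata is directly queryable, which is exactly what vendored dependencies lack (libfoo-dev is a placeholder):

                          # From devscripts: list source packages that declare a build-dependency
                          # on libfoo-dev, i.e. the set that needs rebuilding after a fix:
                          build-rdeps libfoo-dev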

                      Things like Docker make this worse, because they make it trivial to write custom build steps that grab source from random places and don’t record the provenance in an auditable structure. If I have an OCI image, I have absolutely no idea what versions of any libraries I’m running. They may be patched by the person who built the container to avoid a bug that caused problems for a specific program, and that patch may have introduced another vulnerability. They may be an old version from some repo. They may be the latest trunk version from when the container was released.

                    2. 5

                      Security-wise, Docker images are about as bad as static linking for the end user.

                      1. 3

                        Of course, but it’s easier on the entire supply chain in the 99.9% of cases where there is no security problem.

                        1. 9

                          99.9%? Do you mean 40%?

                          https://www.techrepublic.com/article/docker-containers-are-filled-with-vulnerabilities-heres-how-the-top-1000-fared/

                          “Over 60 percent of the top Docker files held a vulnerability that had a Kenna Risk Score above 330; and over 20 percent of the files contained at least one vulnerability that would be considered high risk under Kenna’s scoring model,” […] the average (mean) number of CVEs per container is 176, with the median at 37.

                        2. 3

                          Yes, and static linking has a known solution for security updates: the distro rebuilds from updated source.

                          1. 3

                            Yes, but this needs to be done so often and so broadly that Debian, at least, seems to just do regular rebuilds of nearly everything in unstable every few weeks or so, and declares that software written in Go has no proper security support in at least Debian 10 Buster: security updates will only be provided via the minor stable updates, approximately every two months or so. Still a PITA, and hence q.e.d.

                        3. 5

                          If you absolutely want to provide your own binaries directly to end users

                          You say this like it’s a method of last resort, but this is overwhelmingly how software authors prefer to package and distribute their applications. There are lots of good reasons for that, and it’s not going to change.

                          1. 1

                            Ideally you don’t need to do anything except not make distro maintainers’ lives harder.

                            I don’t even need to do that. Again, I am providing free work here.

                            If you absolutely want to provide your own binaries directly to end users, as of 2020 there are things like Docker images and AppImage, so you can bundle what you need at that level.

                            I am fairly sure that if people started doing that at scale, distro maintainers would complain just as they do about static linking.

                            So while we don’t have Linus’ dive tool in Void Linux yet, it looks easy to package, and once that is done, the maintainers will take care of providing the newest version to users, taking that work off your hands.

                            You’re wholly missing the point with this sentence. The fact that we’re in a position where we need to build applications per distro is unsustainable. There is very little work involved in building a static binary on any other platform.

                            We also generally only try to patch build issues and security fixes that did not make it into a release yet. So users of our binary packages often get fixed versions quicker than upstream.

                            Yes, and then users report regressions in a version that is not supposed to have the patch that introduced them. This is literally what I am complaining about.

                          2. 6

                              Keep in mind that 1) this is literally only a problem on Linux distros and other third parties repackaging applications and imposing their opinions on everybody, and 2) the actual context of this blog post is that the author was mad at Python packages using Rust dylibs; it seems his sense of entitlement has not significantly improved since then.

                            How is this relevant to static linking and the discussion about its security issues?

                            1. 3

                              Because it’s the reason this discussion continues to exist.

                              1. 3

                                So, in summary, people are still angry about cryptography and Rust, so they keep posting roundabout takes on it, and people get onto news aggregator sites to hawk their positions rather than work on a solution? I’m really not sure how that’s productive for anyone.

                                1. 1

                                  I publish static binaries for my applications. Now I have a third party who wants to redistribute my free work, but wants me to change the way I write software so that their use of my free work gets easier (for a debatable value of “easier”).

                                  Frankly I don’t see a problem I have to solve. My way works just fine on Windows.

                                  1. 1

                                    At this point it’s up to all the parties to coordinate. It’s obvious that the different parties have different perspectives, desires, and annoyances. If you put yourself in the shoes of any of the various parties (application developers, distro maintainers, application users, distro users), and there are plenty of each in this thread and the HN version of this link, then I think you can see the many angles of frustration. I don’t think getting angry on message boards is going to settle this debate for anyone, unless you’re just looking to vent, which I’d rather see in chatrooms than on lobste.rs.

                            2. 5

                              This is only a problem on Linux. The fact that anybody can create a Linux distribution means that there are a lot of systems that are largely similar and yet wholly incompatible with one another. Bazaar-style development has encouraged this pattern, and, as such, we have a fragmentation of Linux into systems with just the tiniest little differences, which make packaging an app in a universal fashion near impossible. Like it or not, cathedral-style systems do not suffer from this problem. You can count on the libc and loader existing in a well-known and understood location on FreeBSD, Windows, and macOS. Sure, there are going to be differences between major versions, but not so much as the difference between glibc and musl.

                              Having your own packaging system then frees you, the application developer, from having to wait on the over 9,000 different Linux distributions to update their packages so that you can use a shiny new version of a dependency in your app. Furthermore, there are plenty of commercial, proprietary software packages that don’t need to move at the same cadence as the Linux distribution they’re deployed on. An app might update its dependencies more frequently while in active development, or less frequently if the business can’t justify the cost of upgrading the source code.

                              I lay out that this situation is not unique to Linux, but rather that it exists because of Linux’s fragmentation… and secondarily as a result of the friction associated with walled-garden ecosystems like Apple’s.

                            1. 10

                              Thanks! Finally someone spells out what causes a lot of the pain in keeping software packages secure in Linux distributions, and, as it seems, not only in binary distributions. (Not to mention software distributed as Snap, AppImage, Flatpak, Docker images, etc., which all have similar issues and should be avoided if you care about being able to track security issues in your installed software.)

                              1. 3

                                Why would I use screen over tmux? Honestly curious, have no experience with screen.

                                1. 7

                                  It’s often preinstalled. Many users are more familiar with it than with tmux for that reason.

                                  1. 6

                                    There are many reasons, none of which is really general:

                                    • Being old-school and used to it. tmux is different, and even if you change Ctrl-B back to Ctrl-A (see the snippet below), it’s not a drop-in replacement.
                                    • Serial console support is missing in tmux, as are some other more exotic features (probably left out on purpose).
                                    • IMHO easier to configure (albeit definitely less powerful).
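
                                    On the first point, moving the prefix itself is easy, even if the rest doesn’t follow; a minimal sketch:

                                        # Make tmux answer to screen's Ctrl-A (append to ~/.tmux.conf):
                                        cat >> ~/.tmux.conf <<'EOF'
                                        set -g prefix C-a
                                        unbind C-b
                                        bind C-a send-prefix
                                        EOF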

                                    (Disclaimer: I’m the author of the linked blog post and the maintainer of Debian’s screen package, so I’m probably biased. ;-)

                                    1. 5

                                      Screen is good enough: I know the shortcut keys I need, and it does serial ports. There is nothing I need that it doesn’t do, so why change? Not all change is progress…
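
                                      For example, attaching to a serial console is a one-liner (device path and speed vary):

                                          # Open an interactive session on a serial device at 115200 baud:
                                          screen /dev/ttyUSB0 115200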

                                      1. 7

                                        Good summary, yes. :-)

                                        There are admittedly also some downsides: most of screen’s code is ancient, has only a few comments, and is not easy to understand. It’s older than the Linux kernel. And despite being a GNU project these days, it started out, IIRC, as the “BSD Screen Manager” or so on the BSDs.

                                        1. 1

                                          What’s wrong with old code?

                                          1. 3

                                            The rest of the sentence says:

                                            has only a few comments, and is not easy to understand

                                            So: harder to maintain, fix, and improve upon?

                                            1. 1

                                              That would be an issue, but I don’t see what that has to do with the age of the code.

                                              1. 2

                                                Common sense and coding style differ between now and back then.

                                                1. 2

                                                  Maybe, but two developers today may differ just as much in their common sense and coding style. It can be a pain to work on a code base written in a fancy IDE if the author leaned on syntax highlighting and auto-completion to compensate for clunky names. There are a lot of factors that could make old code better or worse than new code.

                                            2. 3

                                              Nothing in general, but it tends to accumulate issues over time:

                                              • Occasionally stops compiling with newer, stricter compilers.
                                              • Does not adhere to current coding standards, which usually focus on readability and avoiding common errors → harder to read, more error-prone.
                                                • This might also hinder attracting new contributors or maintainers.
                                              • The current maintainers might no longer know what some code was for if the original authors are no longer around.
                                              • At least Screen is known to have support for quite a few dead operating systems (think SunOS, etc.). These kinds of tweaks can cause issues on modern operating systems. The master branch in Screen’s git repo has some cleanup in that regard, but unfortunately it also kicked out some features which are still in use. No release has been made from that branch anyway. I suspect it will become version 5, if there is ever a release from that branch.
                                      2. 2

                                        In addition to the other answers… tmux feels generally more vim-like, while screen is more emacs-like. If you already have a preference in that game, that tends to color your perceptions of them.

                                        1. 2

                                          Any chance you could elaborate on that? I’ve never gone deep into configuring either of them, but by default both feel more emacsy in bindings. What is there beyond that?

                                          1. 2

                                            Interesting. That never occurred to me, but it does seem to fit, at least for me: I’m a GNU Emacs (and GNU Zile) guy. :-)

                                            Then again, I don’t see where Screen is particularly emacs-ish, so I’d also be interested in a more detailed explanation.

                                        1. 2

                                          Reminds me of the 100% handcoded HTML campaign.

                                          1. 2

                                            Oh my. I first read “FPGA NNTP Server”. %-)