Honestly, for me the big thing about Arch isn’t a lack of “stability”, it’s more the number of sharp edges to cut yourself on.
For example, the author mentioned that the longest they’ve gone without a system update is 9 months. Now, the standard way to update an Arch system is pacman -Syu, but this won’t work if you haven’t updated in 9 months – the package signing keys (?) would have changed and the servers would have stopped keeping old packages, so what you instead want to do is pacman -Sy archlinux-keyring && pacman -Su.
There’s a page on the ArchWiki telling you about this, but you wouldn’t find it until after you run a system update and it fails with some cryptic errors about gpg. It also doesn’t help that pacman -Sy <packagename> is said to be an unsupported operation on the wiki otherwise, so you wouldn’t think to do it yourself, and might even hesitate if someone on a forum tells you to do it. Any other package manager would just… take care of this.
It’s little things like this that make me not want to use Arch, and what I think gives it a reputation for instability - it seems to break all the time, but that’s not actually instability, that’s just The Arch Way, as you can clearly read halfway down this random wiki page.
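Spelled out with comments, that recovery sequence is just:

# refresh the package databases and bring the signing keys up to date first
pacman -Sy archlinux-keyring
# then complete the full system upgrade
pacman -Su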
As it happens, they recently added a systemd timer to update the keyring.
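Something like this should show whether it’s present (the unit name here is an assumption based on recent archlinux-keyring packages, so verify it on your own system):

# check for the keyring-refresh timer and its next scheduled run
systemctl list-timers 'archlinux-keyring*'
# inspect the unit directly, if it exists
systemctl status archlinux-keyring-wkd-sync.timer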
ahh, that’s a good start. Still doesn’t help my system that’s been sitting shut down in a basement for four months, but at least there’s something.
If you’re worried about sharp edges like that, then yeah, you probably don’t want to deal with Arch. To someone who uses the official install guide, though, it should be pretty clear that they exist. You hand-prepare the system, and then install every package you want from there. It’s quite a bit different from a distro that provides a simple out-of-the-box install. (I’m ignoring the projects that try to simplify the install for Arch here.)
It’s also a rolling release. Sure, you could go 9 months without updating, but for a rolling release that’s a long time. You’ll likely be upgrading most of your installed software in a single jump by doing so. That’s not bad, per se, but it would be mildly concerning to me. There’s no clean rollback mechanism in Arch, so this calls for an extra level of caution.
I think the perception of a lack of stability does come from the Arch way, but in my experience it’s usually down to changes in the upstream software being packaged, not anything the distro is adding. It seems obvious to me that if you’re constantly pulling in newer versions of software, you will have less stability by design. There’s real benefit in distros that take snapshots when it comes to predictability, and thus stability.
I use Arch on exactly one system, my laptop/workstation, and I’m quite happy with it there. I get easy access to updated packages, and through the AUR a wide variety of software not officially packaged. It’s perfect for what I want on this machine and lets me customize my desktop environment exactly how I want. Doing the same with Debian and Ubuntu was much more effort on my part.
I wouldn’t use Arch on a server, mostly because I don’t want all the package churn.
That’s actually why I stopped using Arch: I’ve got some devices that I won’t use for 6-12 months, but then I’ll start to use them daily again. And it turns out, if you do that, pacman breaks your whole system, and you’ll have to hunt down package archives and manually untangle that mess.
I wish there was a happy medium between Arch and Debian. Something relatively up to date but also with an eye for stability, and also minimalist when it comes to the default install.
Void?
I think that’s a bit of an exaggeration, when compared to other Linux distros where upgrades are always that scary thing.
Also, the keyring stuff… I’m not sure when that was introduced, so this might have been before that?
I’ve done pretty long jumps on an Arch Linux system on a netbook for my mother, who isn’t really good with computers and technology in general. Just a few buttons on the side in Xfce worked really well, until the web became too demanding for first-gen Intel Atoms. I updated it quite a bit later while looking for some files or something. I don’t remember that being a big issue - but I do remember being surprised that it wasn’t.
I actually had many more problems elsewhere - a huge amount of them with apt, for example.
Worse, of course, are package managers trying to be smart. If there’s one thing that I would never want to be smart, it’s a package manager. I haven’t seen an instance yet where that didn’t backfire.
Upgrades have been relatively fear-free for me on both Ubuntu and Fedora, though that may be a recent thing, and due to the fact that my systems don’t stray too far from a “stock” install.
One thing I will give Arch props for is that it’s super easy to build a lightweight system for low-end devices, as you mentioned. Currently my only Arch device is an Intel Core m5 laptop, because everything else chugs on it.
Have you tried Alpine? It’s really shaping up to be a decent general purpose distro, but provides snapshot stability. It’s also about as lightweight as you can get.
I haven’t for that particular device, but in my time using it I couldn’t get it to function right on another netbook I had. Probably user error on account of me being too used to systemd, but I’m not in a rush to try it again either.
Good to know, thanks. I’m on my first non-test Arch install at the moment and so far I’ve been surprised by the actual lack of anything being worse than on other distros. Everything worked out of the box.
I don’t get what’s so odd about this.
I have multiple 10+ year Arch installs. No issue.
There isn’t anything too odd about this, which is also roughly what I’m saying in my article. Things break less than many people expect.
I was thinking exactly this!
This matches my experience too. I love that there’s no big release upgrades, it’s just incrementally updated over time and just keeps working—no major releases that seem to be very disruptive and commonly break things in the so called “stable” distros.
As a long term user of Debian stable I can tell you that it indeed is very, very stable. Never had any problem upgrading to a new version. The name is very apt.
Conversely, in the years that I used Debian and Ubuntu based distros, I had tons of issues doing an upgrade whenever I was using third-party packages. In particular for Ubuntu, PPAs and proprietary software repos tend not to be prepared for a dist-upgrade, sometimes for months after the new release comes out.
Not hating on Debian or Ubuntu, just pointing out that both approaches are imperfect.
I’ve had lots of problems upgrading while I have third-party software installed, too. The official runbook for Debian upgrades advises removing non-Debian packages and sources as well.
Worth considering the container-thingys (snaps/flatpaks/appimages) for proprietary software. They can work better that way. I use slack, signal, heroku, etc that way and it’s mostly ok.
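If you haven’t tried it, that’s a one-liner per app once the remote is set up (the Slack app ID here is from memory, so verify it on Flathub):

# add the Flathub remote once, then install apps from it
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub com.slack.Slack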
Yep, as much hate as snaps/flatpaks get, at least they encapsulate all their dependencies, so a change in the underlying glibc won’t break them. There are still a bunch of things to fix (just look at how Ubuntu is still dealing with huge performance issues in their Firefox snap), but at least it’s a step in the right direction.
Yeah I also haven’t managed to do this for firefox. I tried it (with a snap?) and there were some font issues that I couldn’t be bothered to debug. I’m using the official (non-deb) binary from Mozilla.
If your glibc updates, won’t the rest of your system update at the same time to use the new glibc?
Agreed, if you wander off the beaten path into PPAs and proprietary repos, you are definitely well into “you get all the pieces” territory when it inevitably breaks.
However, if you stay within the Stable repos, the chances of breaking are low to near zero, even during upgrades.
If you have to play a lot in random package repos, you probably shouldn’t be playing in stable in the first place.
I died a little from that pun.
I’ll note that this isn’t the case for Debian testing, which I’ve experienced serious packaging issues with.
I’d love to see some more insights here.
You’re making the argument that the common wisdom of Arch being unstable is incorrect and you’re positing that running the same install over a couple of machines over the course of a decade proves that.
The thing is, there are people who can say the same thing with virtually any operating system, including the much maligned Windows! :) (There are definitely people out there who’ve been upgrading the same install since god knows when).
What makes your experience with Arch’s stability unique? How does Arch in particular lend itself to this kind of longevity and stability?
I really just meant to say that Arch Linux doesn’t break unusually much compared to other desktop operating systems I’ve used. At least that’s been my experience. The other operating systems I’ve used are mainly Ubuntu, Fedora, and Windows.
Try using arch without updating it for a year or two, then update it all at once. And then try this with Windows, Fedora, Ubuntu again. That’s honestly arch’s primary issue, that you can’t easily update after you’ve missed a few months of updates.
I don’t quite see how that is a primary issue when this is the nature of rolling release distros though. Comparing to point release Fedora and Ubuntu doesn’t fit well, and I’d somewhat expect the same to happen on the rolling/testing portion of Debian and Ubuntu, along with Fedora Rawhide or OpenSUSE Tumbleweed? Am I wrong here? Do they break less often if you don’t update for a year+?
Personally I keep around Arch installs from several years ago and I’ll happily update them without a lot of issues when I realize they exist. Usually everything just works with the cursed command invocation of
pacman -Sy archlinux-keyring && pacman --overwrite=* -Su
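(For what it’s worth: --overwrite '*' tells pacman to resolve file conflicts by overwriting what’s on disk instead of aborting, and quoting the glob keeps the shell from expanding it.)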
I don’t quite see how that is a primary issue when this is the nature of rolling release distros though
It’s not necessarily – a rolling release distro could also do something like releasing a daily manifest of all current package versions that are guaranteed to work together, and the package manager could then incrementally run through these snapshots to upgrade to any given point in time.
This would also easily allow rolling back to any point in time.
It’s actually a similar idea to how coreos used to do (and maybe still does?) it.
That would limit a package release to once a day? Otherwise you need a per-hour or per-minute manifest. This doesn’t scale when we are talking about not updating for years, though, as we can’t have mirrors storing that many packages. I also think this glosses over the challenge of making any guarantee that packages work together. This is hard, and few (if any?) distros are that rigorous even today.
It is interesting to note that storing transactions was an intended feature of pacman. But the current developer thinks this belongs in ostree or btrfs snapshot functionality.
I’m confused by this thread. Maybe I’m missing something?
It seems like it should be really simple to keep a database with a table containing a (updated_at, package_name, new_package_version) triple with an index on (updated_at, package_name) and allow arbitrary point in time queries to get a package manifest from that point in time.
No need to materialize manifests every hour/minute/whenever except when they’re queried. No need to make any new guarantees: the packages for any given point-in-time query should work together iff they worked together at that actual point in time in real life. No need to make the package manager transactional (that would be useful for rolling back to bespoke sets of packages someone installed on their system, but not for installing a set of packages that were in the repo at a given time).
Actually storing the contents of the packages would take up quite a bit of disk space, but it sounds like there is an archive already doing that? Other than making that store, it sounds like just a bit of grunt work to build the db and make the package manager capable of using it?
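A minimal sketch of that table and a point-in-time manifest query, using sqlite3 for illustration (all names here are invented for the example):

sqlite3 repo-history.db <<'SQL'
-- one row per package version that landed in the repo
CREATE TABLE IF NOT EXISTS package_history (
    updated_at          TEXT NOT NULL,  -- when this version entered the repo
    package_name        TEXT NOT NULL,
    new_package_version TEXT NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_history
    ON package_history (updated_at, package_name);

-- manifest at an arbitrary cutoff: the newest version of each package
-- at or before that point in time
SELECT package_name, new_package_version
FROM package_history AS h
WHERE updated_at = (
    SELECT MAX(updated_at)
    FROM package_history
    WHERE package_name = h.package_name
      AND updated_at <= '2021-03-01T00:00:00'
);
SQL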
I didn’t read it as some grandiose statement of how awesome Arch is, just one user’s experience.
My equally unscientific experience is that I can usually upgrade Ubuntu once, but after a second upgrade it’s usually easier and quicker to just start fresh because of all the cruft that has accumulated. I fully acknowledge that one could combat that more easily, but I also typically get new hardware after X years, for X lower than 10.
I do have an x230 from 2013 with a continually updated Debian install on it, so 10 more months and I also managed to get to 10 years.
Really though? While it’s true that you can upgrade Windows, I have yet to see someone who managed to pull that off on their primary system without things like drivers accumulating and biting them. This is especially true if you update too early and later realize an incompatibility with drivers or software, which really sucks if that’s your main system.
Upgrading usually works, because there are upgrade paths, but ending up with a usable system sadly is a less common theme.
And Linux distributions that aren’t rolling release tend to be way worse than Windows. And while I don’t know macOS well, at every company I’ve been at so far, a bigger update of macOS always means that everyone is expected to not have a functional system for at least a day, which is always a bit shocking in an IT company. I have to say I have really no clue what is going on during that time, so I’m not sure what’s really happening; I know from my limited exposure that the updates tend to simply be big downloads with long installation processes.
I probably only stuck with Arch all that time precisely because it never gave me an occasion to consider switching, so it’s really just laziness.
When I think about other OSs and distributions I’ve used, scary updates of the OS and software are a big chunk of why I don’t use them in one way or another. I used to be a fan of source-based distributions because they gave you something similar, back when I could spend the time. I should check whether Gentoo has something like FreeBSD’s poudriere to pre-build stuff. Does anyone happen to know?
I would have agreed with you from ca. 1995-2017, but I saw several Win 7 -> Win 10 -> Win 10 migrations that have at least been flawlessly running for 5+ years, if you accept upgrades between different Win 10 versions as valid.
I’ve had many qualms and bad things to say about Windows in my life, but I only had a single installation self-destruct since Win 7 launched, so I guess it’s now on par with Linux here for this criterion.
Changing the hardware and reusing the Windows installation was still hit or miss whenever I tried. With the same motherboard chipset I’ve never seen a problem; otherwise… not sure… I guess I only remember reinstalls.
I was doing tech support for a local business around the time the forced Windows 10 rollout happened. I had to reinstall quite a few machines because they just had funky issues. Things were working before, then suddenly they weren’t. I couldn’t tell you what the issues were at the time, but I just remember it being an utter pain in the ass.
Yeah I’m not claiming authority on it being flawless, but it has changed to “I would bet on this windows installation breaking down in X months” to “hey, it seems to work”, based on my few machines.
(Long-time Gentoo user here.) I’m not sure if this answers your question, but I often use Gentoo’s feature to use more powerful machines to compile stuff, then install the built binaries on weaker systems. You just have to generally match the architecture (Intel with Intel, AMD with AMD).
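From memory, that setup looks roughly like this; treat the paths and defaults as assumptions and check the Gentoo wiki before relying on them:

# on the fast machine, in /etc/portage/make.conf: also produce binary
# packages, built with flags the weakest target CPU can still run
FEATURES="buildpkg"
CFLAGS="-O2 -pipe -march=<oldest-target-arch>"  # placeholder, pick for your fleet
# built packages accumulate under the PKGDIR (/var/cache/binpkgs by default);
# copy or serve that directory to the weak machine, then install there with:
emerge --usepkg @world  # -k: prefer prebuilt packages from PKGDIR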
I will need to look into this. I mostly wrote that to keep it at the back of my head - I heard it was possible, but there was some reason that I didn’t end up trying it out. Maybe I should do it on my next vacation or something.
It isn’t stable in the sense that ‘things don’t change too much’, but it is stable (in my experience) in that ‘things generally work’. I recall maybe 10 years ago I would need to frequently check the website homepage in case of breaking upgrades requiring manual intervention, but it hasn’t been like that for a long time. In fact I can’t remember the last time I checked the homepage. Upgrades just work now, and if an upgrade doesn’t go through in the future, I know where to look. Past experience tells me it will likely just be a couple of manual commands.
On the other hand, what has given me far more grief lately is Ubuntu LTS (server). I was probably far too eager to upgrade to 22.04, and too trusting, but I never thought they’d upgrade to OpenSSL 3 with absolutely no fallback for software still reliant on OpenSSL 1.x… in an LTS release (isn’t this kind of major change what non-LTS releases are for?). So far, for me this has broken OpenSMTPD, Ruby 2.7, and MongoDB (a mistake from a very old project – never again). For now I’ve had to reinstall 20.04. I’ll be more cautious in future, but I’m still surprised by how this was handled.
Even Arch Linux plans to handle OpenSSL3 with more grace (with an openssl-1.1 package).
Some details regarding the times things did break that I mentioned in the article:
In September 2014, X broke, and I created an /etc/X11/Xwrapper.config file with the lines “allowed_users = anybody” and “needs_root_rights = yes” to get it to work again. I don’t remember and don’t have notes on why that helped. It sure does sound like a pretty terrible hack. I don’t have that Xwrapper.config file anymore, and I also don’t know when I deleted it.
In June 2017, audio stopped working, but all I had to do was add my user to the audio group.
In May 2018, X broke a second time. This time I downgraded the xorg-server-common and xorg-server packages. A few weeks later, I ran another system upgrade, and this one went fine.
How did you handle the binaries change to /usr/bin and filesystem layout updates?
Whenever something broke, I generally took a note about what I did to fix it. I don’t have anything about the /usr/bin change from June 2013, so all I can tell you is that I followed the instructions and (probably) didn’t encounter any problems doing so. My pacman.log shows that I dragged my feet until August before performing this system upgrade:
I ran Arch on a server once. It actually worked pretty well and I had a better administrative experience because I knew that box in and out, having configured it from scratch myself. I moved away from this setup because I couldn’t set security updates on a cronjob - I would have to manually apply updates every day or two, and worse, sometimes I simply did not have time to apply these updates. Apache 2.4 came out during this time and broke everything, and what I thought was going to be a quick upgrade immediately became an hour-long project to read about the changes and resolve the config diff. I couldn’t delay because I needed the security updates, and partial upgrades are not supported.
Maybe I’m being a downer (in fact I’m sure I am, and I’m sorry about that), but what I take away from this article is that the author only applies security updates once a month, and once went 9 months without patching security problems. I really hope you get your web browser from Flatpak and not pacman.
I’d argue that Arch is the wrong base for a server. It makes a lot of sense to have something ‘breaky’ like a desktop where you are constantly tweaking and fixing stuff. On a server you definitely want the most stable and boring base ever.
I mean, yeah. That’s why I don’t use it anymore.
Funny that you mention it, the web browser is one of a few packages I explicitly pin to an older version, because upgrading Firefox often unexpectedly breaks my userChrome. I’ll deal with that when I feel like it, not when they make me.
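For anyone curious, pacman’s knob for that is IgnorePkg in /etc/pacman.conf - note it holds a package at whatever version is currently installed rather than pinning a specific older one:

# in the [options] section of /etc/pacman.conf
IgnorePkg = firefox
# pacman -Syu will then skip firefox and just print a warning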
I’ve been running Arch on my no-SLA server for several years without any issues. Basically zero maintenance, other than the occasional reboot to a newer kernel.
If I wanted top security, I’d go back to OpenBSD, which runs slow as molasses, and may have issues with hardware compatibility or availability of ports, but is even simpler to work with than Arch. I just don’t think I have enough of an attack surface to care.
Your web browser has a massive attack surface that’s exposed to untrusted input all the time. Maybe you don’t need OpenBSD’s level of security but there’s a big difference between that and at least making sure that you’re not running software with known, public vulnerabilities. Do you not feel uncomfortable knowing that every website you visit could be exploiting some vulnerability that’s already been fixed in a Firefox version you haven’t installed yet?
I feel orders of magnitude more uncomfortable with a broken web browser.
If I examine the risk: running Firefox with an adblocker on Linux alone significantly reduces the odds of being a good target. Then, an attacker is most likely to either install a miner, or a botnet client (I’d notice), or double-encrypt my data (I have backups). I’m a nobody, and offer no promise of providing access to secrets, so targeted attacks are unlikely. What’s there to be anxious about again?
I think you give in to a false sense of security. The amount of random shit I more or less blindly install from AUR, or even distribution repositories, worries me much more.
the author only applies security updates once a month
This is accurate and includes my web browser. I could be wrong, but I never considered this update frequency a major security risk. I definitely aim not to go without a system upgrade for nine months again, though.
Your point about Arch Linux on servers is great. My first choice for a server OS these days is also something that allows getting security updates (and only security updates) for as long as possible.
I have a similar experience. My Arch Linux installation on my laptop is ~9 years old already. In the meantime, the Linux distro on my desktop has changed several times because of distro problems (like the release of a non-booting kernel without kernel backups).
Rolling releases on the desktop being less of a pain in the long run is also the reason why Google switched to a rolling distribution for their employees’ desktops.
My Arch Linux install on my main computer is a few years older than that. It survived a switch between laptops and then to a desktop, and a switch from a hard disk to an SSD. I never went that long between system updates, though - for me it has been updated about once a day, sometimes more often, sometimes with a lot at once when I was on vacation or so. I used Arch before that too, but back then a new computer usually meant a fresh install with files copied over.
The biggest problem in all that time was pulseaudio. The switch to systemd was easy (and I still don’t like it, but that’s a different story).
I’m a bit sad that it has kind of developed away from KISS in some areas, but that was already starting ten years ago. It’s of course still kind of KISS compared to other stuff, but it’s not as much of a priority anymore. I remember back in the day there was a bit of a discussion about how the Linux kernel and patch sets should be treated.
This has also generally been my experience. The most common breaks are:
The best tool I have found to combat this is automation around date-versioned repos like this for rollbacks: https://discovery.endeavouros.com/pacman/downgrade-packages-to-a-specific-date/2021/03/ (Hint: check the mtimes of files in /var/cache/pacman/pkg)
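The linked guide boils down to pointing pacman at a dated snapshot of the Arch Linux Archive, roughly like this (the date is an example; double-check the URL layout against archive.archlinux.org):

# /etc/pacman.d/mirrorlist: replace the server list with one dated snapshot
Server = https://archive.archlinux.org/repos/2021/03/01/$repo/os/$arch
# force-refresh the databases and allow downgrades to match that snapshot
pacman -Syyuu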
Outside of a very occasional hiccup it’s been easy breezy beautiful.
BTW, I use Arch.
Are you completing a meme?
Yes, sorry. 😇