After two years of using my company-issued MacBook for personal things, I’m setting up a personal laptop for personal things, which I hope to be able to do more of during lockdown.
Now faced with the choice of distro, I noticed I’ve never been very deliberate with this. My personal history:
pacman is a time saver)
dnf seemed like a good package manager, and I wanted things abstracted away from me)
I’m curious what other crustaceans use and care about. How relevant are these things these days, now that so much of the software we run is on the browser anyway?
I don’t really like it but I mainly use Arch. It’s a bit clunky and I have to configure way more stuff than I’d want to these days, but:
This is all for personal use – I don’t exercise any pickiness for work use. I use whatever distro is needed, for whatever purpose. Most of my customers are on Ubuntu so I have an Ubuntu laptop on the shelf next to my desk. It gets zero use on weekends. It works pretty well, mostly because I rarely update it. I… every time I touch an Ubuntu machine it breaks. I don’t know why. By now I’m convinced it’s not Ubuntu’s fault, I’m sure I’m not holding it right or whatever.
What I’d love in a distro and would definitely make me switch:
Arch covers 4 out of these 5 so… yeah.
Every 8-10 months or so I try various other distros that are supposed to be cool but there’s enough broken stuff that I end up going back to my clunky Arch install. Lots of systems try to be really smart, and it’s hard to do smart things with a system that’s developed by a hundred different teams in a hundred different places. Ubuntu, Fedora and even Debian fall flat on their faces way more often than I have patience with anymore. The only one that I really liked was Void Linux. I’ve got my eyes on that (and Slackware 15, of course :-) ).
I have another machine that I use for non-work stuff pretty often, a laptop running OpenBSD. I love it but there’s lots of Linux-only tech that I occasionally dabble in on my non-work machine, so I can’t really switch full-time, I guess (but I am keeping an eye on vmm. The moment I can compile a kernel in less than forever, and maybe run Wayland and Wine, I’m so out).
My Linux history was roughly as follows:
I also liked FreeBSD a lot back in the day, and ran it at home for a while, too, but I really can’t remember when anymore…
I don’t use Arch but I do use their documentation and wiki which is often one of the best sources of information on the oddities of whatever piece of software I happen to be dealing with.
Excellent comment and argumentation. I’d disagree on systemd though. Having migrated from GNOME to Sway on Arch, systemd user services are a real boon to me. I use them to run several services when Sway starts: sway starts a sway-session.target that pulls in all the user dependencies: mako (notifications), redshift, udiskie (disk mounting). I also have a restic backup as a template service (and timer!) so that backups go to S3 and Backblaze on schedule. I’ve got calendar sync there (vdirsyncer), mpd and some cleanup tasks. Everything is clean and readable, dependencies work, and the logs are there.
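A sketch of that kind of setup (the unit names and contents here are my guesses for illustration, not the poster’s actual files):

```shell
# User units live under ~/.config/systemd/user (the UNIT_DIR override is
# only here so the sketch can be pointed somewhere else).
unit_dir="${UNIT_DIR:-$HOME/.config/systemd/user}"
mkdir -p "$unit_dir"

# A target that groups the session services:
cat > "$unit_dir/sway-session.target" <<'EOF'
[Unit]
Description=Sway session
BindsTo=graphical-session.target
EOF

# One of the services pulled in by the target (mako as the example):
cat > "$unit_dir/mako.service" <<'EOF'
[Unit]
Description=mako notification daemon
PartOf=graphical-session.target

[Service]
ExecStart=/usr/bin/mako

[Install]
WantedBy=sway-session.target
EOF
```

Sway’s config then ends with something like `exec systemctl --user start sway-session.target`, and each service is hooked in once with `systemctl --user enable mako.service`.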
Is it possible to do it all without systemd? Yeah. But systemd hits the spot for me in terms of readability and flexibility.
Oh, like I said, I don’t avoid systemd religiously – in fact, in a professional setting (I mostly do embedded systems for a living), I don’t avoid it at all.
I’d rather avoid it on my machine purely due to personal preferences:
On systems where systemd’s features are relevant, I think the trade-off is usually worth it. Yes, it breaks more often than I’d want, but it still breaks way less often than a web of thirty 500-line init scripts written by developers with limited Linux experience. And it’s definitely easier to debug, too. And it makes a bunch of things related to power management a lot easier. And, of course, on systems that aren’t mine, how much I trust its development team is pretty much irrelevant.
(Edit: I also 100% understand why it’s the tool of choice for large distributions. Distributions like Arch, Debian or Ubuntu or Fedora package thousands of services. systemd absolutely makes it easier for hundreds of volunteers to make things that work better together, and makes it easier for people to package their software, too.)
On my personal machine, though, I haven’t found the trade-off to be worth making (basically, it doesn’t really “hit the spot” :-) ). However, that’s pretty much a product of my own preferences and of the way I use computers. I’m sure it’s a good trade-off for other people.
Thanks for your detailed explanation! I just wanted to highlight that I find systemd’s user services particularly useful. I don’t have much experience with debugging non-systemd services, but having written both unit files and init scripts (or Upstart files) for work, I’d rather maintain the unit file :)
I do agree that it could be simpler. Sometimes I feel like we’re sitting on a big pile of gunpowder with our primitive tools (monolithic kernels, unsafe languages), but such is life.
We’re definitely in agreement here, I find a lot of things about systemd useful, too, just not for most of the things that I use my home machine for, or at least not the way I use it. That’s likely a byproduct of the fact that many of my usage patterns are stuck in the 90s, I just didn’t find much of a reason to update them :).
In my experience: the baseline for debugging init scripts and upstart files is higher, but the peak line is lower. That is, I find that the most common problems that you encounter are way easier to debug on systemd than they are on other systems. The excellent debugging and profiling tools definitely help (but they’re also, IMHO, a byproduct of the sheer complexity of the system: upstart & friends can do without them because you can debug most common problems between vi and looking at log files). But more complex problems, especially when they’re related to bugs in the core system (and there are lots of them, not because of sloppy coding – the systemd codebase is pretty clean actually – but simply because there’s lots of code), are way harder to troubleshoot on systemd.
But the “breakage” line is also a lot higher on systemd. For example, synchronizing services and targets is something that it gets right 99% of the time and you rarely have to debug it. That was easier to get wrong on other systems (although maybe I just sucked at it more than I do today). On the other hand, that 1% is a nightmare to debug.
Anyways, the quirks of my machine aside, I think systemd is generally a step in the right direction. I think whatever will come after it is gonna be pretty damn great.
Void Linux, mainly because I’m a maintainer of it and can fix things quickly.
It’s the bomb. I haven’t looked back since I switched to it about two years ago. It really hits the spot for me in all the specific design choices that drive the distro. I discovered it when I was about to jump ship from Linux to some BSD; Void kept me from doing so.
Thanks for your contributions as a maintainer! I used Arch for many years (I even installed it on my MBP), but recently tried Void for a new setup (wayland/sway-based, I don’t need systemd) and really appreciated how easy it was to get running on my laptop and how smooth it is to keep updated. I’m eyeing a new Dell XPS 13 to install Void on and make it my main machine (it is currently running on a pretty old laptop).
I always come back to Debian Stable. It offers a breadth of packages and architectures, has a strong free-software ethos just shy of achieving GNU’s Respects Your Freedom certification, and generally stays out of my way. Being widely used, it is very easy to find documentation and examples for using software on a Debian system.
I like declarative package management, but Nix plays too fast and loose with licensing for my taste. Guix is better on that front, and I love the work they are putting into reproducible bootstrapping. That said, neither supports ppc64le and documentation/examples can be scarce sometimes.
Void Linux is very attractive to me in principle, but I don’t have time to debug packages and Debian has a better track record of just-working for me.
OpenBSD serves very well on my router. Like Debian it supports a breadth of architectures, has entirely free software in base, and generally stays out of my way. Fewer hits on stackoverflow but the man pages are comprehensive.
What are you using that for?
Around 2016, I decided rather offhand that I’d like to own at least one fully-FOSS device and to rebuild all its code from head to toe. I thought it would be an easy project that would mostly be a matter of tracking down all the parts and fiddling with build environments.
However, the further I looked the more I realized this “simple” project was a very hard problem. That same year, I learned about the powerful proprietary code in Intel’s Management Engine, AMD’s PSP, and ARM’s TrustZone. Stuxnet and Flame malware demonstrated that HDD firmware could be rootkitted. In the wake of the Mirai botnet Bruce Schneier proclaimed “the internet era of fun and games is over,” and I was inclined to agree with him.
Today I own two Thinkpad X200s, one Librebooted and one with the stock BIOS, which I am attempting to replace with a deblobbed Coreboot firmware.
For home servers, I have a KGPE-D16 for amd64, and a Talos II which is ppc64le. I am in the process of standing up selfhosted cloud services on the KGPE, following along with a duplicate setup on the Talos.
The KGPE is still running the stock BIOS because Coreboot locks the fans at 100%. There is extra work required to run OpenBMC in conjunction with Coreboot to control the fans. I believe the Talos is running completely free software, but I haven’t recompiled any of it yet.
Oddly enough, IoT is where I’ve had the least progress running totally free software. Raspberry Pis are so convenient yet so locked-down (proprietary VPU code boots the board, wtf?) and Raspbian has served my needs. I have an i.MX6 Hummingboard sitting unused.
Part of why I always come back to Debian is how easy it is for me to recompile official packages. It’s very easy to apt-src an individual package or apt-build an entire system. I’d like to set up my own Debian package build server and verify my packages reproduce the same hash as the official mirrors.
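The verification step can be sketched like this (file names are placeholders; the commented apt-src calls are the usual invocations):

```shell
# Rebuilding is roughly:
#   apt-src install openssh    # fetch the source package + build-deps
#   apt-src build openssh      # rebuild the .deb locally
# Comparing a locally rebuilt .deb against the mirror's, byte for byte:
verify_reproducible() {
  h1=$(sha256sum "$1" | cut -d' ' -f1)
  h2=$(sha256sum "$2" | cut -d' ' -f1)
  if [ "$h1" = "$h2" ]; then echo reproducible; else echo differs; fi
}

# Stand-in files for demonstration:
printf 'same bytes' > /tmp/ours.deb
printf 'same bytes' > /tmp/mirror.deb
verify_reproducible /tmp/ours.deb /tmp/mirror.deb   # prints "reproducible"
```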
I flirted with the distro Proxmox on my KGPE, since it’s basically Debian with a nice hypervisor web UI and fresh kernel, but I was frustrated by their lack of apt-src repos. In theory all the code is available at https://git.proxmox.com/, but it just rubs me the wrong way. They’re amd64-only anyway.
Side note, Ubuntu claims to be all about fixing papercuts and that may be true for end-users, but I am continually frustrated while trying to build packages for Ubuntu. For instance their x86 packages are located at http://archive.ubuntu.com/ubuntu whereas other arch packages are at http://ports.ubuntu.com/ubuntu-ports. Why use a different URL? Debian puts everything in the same place, which is far easier to script for. Debian is pretty much ready to go once you install build-essential, but I need to enable extra repos and packages before building packages in an Ubuntu debootstrap chroot. At least they offer source repos.
It was introduced to me as “a Linux distro which does everything right”, which turned out to be true from my point of view as well. Specifically, it gets package management right: (1) the whole system is built declaratively from a single config, (2) by default you track a stable, coherent release, and (3) individual packages can be opted into rolling-release or built from source.
In practical terms, (1) means that the system ages well. NixOS is the first system I’ve used where I don’t have to reinstall the OS once in a while because it somehow got into a bad state. If anything, the opposite is true: thanks to the ease of sharing the config, I feel like I’ve been running the same OS “instance” on several physical machines, which age faster than the install does.
(2) and (3) mean that by default you are running a stable, coherent set of software, but you can easily opt in to rolling-release or build-from-source for specific packages, without affecting the rest of the system and without leaving any kind of “garbage” behind.
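For example, opting a single package into rolling-release is a common configuration.nix pattern (the channel name and packages here are assumptions, not the poster’s setup):

```nix
{ config, pkgs, ... }:
let
  # assumes: nix-channel --add https://nixos.org/channels/nixos-unstable nixos-unstable
  unstable = import <nixos-unstable> { config = config.nixpkgs.config; };
in {
  environment.systemPackages = [
    pkgs.git          # pinned to the stable release
    unstable.neovim   # rolling-release, for this one package only
  ];
}
```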
The biggest downsides for me are that running precompiled binaries from the Internet is tricky, and that docs are rather thorough, but not as good as arch-wiki for solving specific problems.
I’ve been diving into NixOS lately and really like it and intend to use it. But I also really understand why people complain about the configuration language and documentation.
The documentation is dense and hard to navigate. I often ended up on a slow-to-load page with a long exhaustive list of options, before discovering that there’s actually an options browser on the homepage, with links to code: https://nixos.org/nixos/options.html
Still, I also felt like I had to depend on reading nixpkgs code a lot to figure things out. And the code is also incredibly dense and hard to navigate. Maybe this is a tooling thing, but I had difficulty with the many levels of abstraction going on, and figuring out just where a certain function is defined. (As far as functional programming goes, I have some experience with Elm.)
And the error messages can be super useless sometimes. :-/
I’m thinking maybe the issue is just that it’s a lot to take in. It’s a slightly alien language, a standard library with a somewhat unusual purpose, a slightly alien package manager, and distribution all rolled into one. Or maybe it’s just way ahead of its time, I don’t know.
But I’m still going to use it. Getting things right in NixOS is kind of a sweet ‘yessss’ moment, which makes learning fun. I also have a project where I need to have some public infra documented, which fits well.
Today, the docs issues are to some extent ameliorated by a really helpful Discourse instance. But I totally share your experience – it took me a while to figure out the minimal necessary set of syntax/commands, and, to this day, I must admit I don’t have a super clear big picture in my mind. But I do know enough to make stuff just work, and I no longer spend a notable amount of time on learning/tweaking the configuration.
This is my experience too. It’s improved over the last couple years, though.
All of these reasons are why I like Guix so much. It really comes down to a preference in scheme as a configuration language. I keep my various machines’ hardware specs in separate files while my “default” desktop is inherited.
I think another big difference is the approach to non-free software. In Guix it is not packaged in the main channel (but one can use an overlay), while in NixOS it is packaged in nixpkgs and the user needs to unlock it with the allowUnfree setting.
Not exactly. Guix doesn’t use overlays, instead a user can designate a repository as a channel that gets added alongside the main channel, just like with nix. The new channel can inherit the main channel (thus creating a channel that is a superset of the main one).
Also yes, guix does not package non-free software in their main repo but there is a fully functional icecat (based on the latest firefox esr), ungoogled-chromium, and Nonguix gitlab channel where one can compile a nonfree kernel derivation. Community repos are new but they’re maturing each day.
I currently use GNU Guix System and Trisquel as my daily drivers. They are both wholly free GNU/Linux distros that allow me to use my machines in freedom. Trisquel is the more familiar-feeling one (it’s an Ubuntu derivative), and Guix is the one with more radical ideas like purely functional package management (similar to Nix), fully reproducible builds, and lots of other exciting features you can read about on their website and manual. You write declarative package definitions, system configurations, and just about everything in between in GNU Guile scheme.
I also use GNU Guix on a 2015 chromebook pixel. Soon I want to start hacking on guile-wm and see where the maintainer left off because I’m super close to having a scheme machine. I’m even trying out scsh (and failing because I know fuck all about its interaction model).
I asked this question in 2000 in Australia, at a Linux Users Group, and was told FreeBSD. A month later I was running FreeBSD 4.0, and six months later I discovered OpenBSD, and I have been running OpenBSD as my laptop/desktop OS ever since. The documentation is excellent, and it is installed by default on OpenBSD, which is a big plus for me.
I don’t remember them doing CDs at that point! I installed Slackware on my machine in ’95 and had to make on the order of two dozen floppies to get it done. Most of the floppies I used were re-purposed AOL subscription offers.
That exercise cost me a monitor. Because the right kind of typo in an X11 modeline could let the smoke out of a monitor back then.
I made a Slackware CD back in those days which was distributed with a few magazines which I happened to be editor in chief and/or managing editor for. A year later I made another CD which worked just fine in Linux but, due to some oddities in the file names used, refused to work on Windows. Of course I only found out when the thing was already in production… I ended up solving this by distributing a floppy with an improved version of the Microsoft CD extensions (MSCDEX) which supported longer filenames.
My memory of a slackware CD that came with a book or magazine was that I still had to make floppies to use it. Of course, that could be because my only CD drive was SCSI and my linux box didn’t have that, so I couldn’t hook up a CD anyway. But I’m 90% sure the install process I used looked for floppy sets and wasn’t prepared to find them on a CD.
You needed to make a boot floppy due to the lack of bootable CD’s (‘El Torito’ came along at the end of 1995) but for the rest the install worked from the CD unless that CD was connected in some way which was not supported in Linux, e.g. through a non-standard interface on a sound card.
This is correct. Slackware 4.0 was the first release that came with a bootable CD-ROM.
I’ve been using Debian for nearly? roughly? 20 years whenever it makes sense. Sometimes I use other stuff (usually Ubuntu if I have to or for certain servers where a certain LTS version is best supported)
That means Debian on personal machines by default, Ubuntu at current employer.
The first distros I used were SuSE and Red Hat, mid ’98. Then I experimented with every distro I got my hands on, then settled on Debian.
Hardly any of the software that I deem important runs in a browser, so I can’t support that motion.
Debian stable or newer?
On laptops it’s usually “run stable as long a as it’s feasible”, e.g. just after a release, for the first year usually, maybe 2, then switch to testing. (This might not work with new hardware, e.g. Lenovo’s current year model, ymmv).
On servers, stable all the way.
Testing is pretty stable in my experience (hasn’t always been, but perfectly fine for the last 10 years). I am not a fan of Unstable for computers I want to get work done (even if it’s private stuff), but even that was usable most of the time, people told me a while ago, not sure how it’s looking in 2020.
I’m asking because that’s also approximately my approach. Stable just gets too outdated after a while, but in my experience (sadly) testing wasn’t stable enough for day to day work.
Depends on your hardware and installed packages I guess.
I ran testing at work for most of ~2012-2017. I had about one “breakage” per year, this means ~1-2h to fix it. Not a single full day lost.
Since early 2018 I’ve not used my two private debian laptops a lot, so can’t comment.
Don’t know about wink, but I track testing on my dev / play machines & run stable on the servers.
I currently use Void Linux, and am a maintainer for the same.
Unlike many others here, I use it because I don’t need to tinker with packages when I don’t want to, most of the time just getting a system that both works and is understandable, even to an idiot like me. If I want to know where things are I run xbps-install and continue with my day.
Yeah, Void is literally a “just werks” distro and it’s beautiful.
I am currently using Void Linux. I hate systemd; it’s a hassle to maintain. Void uses runit, and its package manager is very good.
How would you compare its package manager to pacman?
It is the same, except split into multiple binaries. Even the system upgrade process uses xbps-install (xbps-install -Su).
However, the package building system is much more complex IMO (though really powerful!). Building a package from scratch will spin up a basic Void chroot with your dependencies to make sure you didn’t forget anything, usually resulting in high-quality packages. This raises the barrier to entry though, and I say that as the maintainer of a lot of packages on Crux (and Arch, back in the day).
I do have Void installed on a notebook though, because it runs well without systemd and just works. If I need a package from the repos, it is easy to install. If it is not in the repos, I compile it manually and use my own package manager to install to /usr/local or $HOME/.local.
The UI is mildly superior, at least once you install xtools; the functionality is basically identical, at least from an end-user perspective. The lack of an AUR equivalent means installing things from other people’s forks is slightly more painful, but it’s not hard to build packages from source.
My recent(ish) Linux history, at least for my usecases (desktops, laptops, and a mix of ARM/x86 baremetal/virtual servers/containers that just need to act like appliances) has been something like:
How’s your experience been using NixOS? I’ve always wanted to try it out.
Totally changed the way I approach deploying Linux systems for the better. I’ve posted about it a few times, feel free to trawl through my post history or ask more questions. :-)
It’s definitely one of the most exciting developments in operating systems that I’ve ever seen.
Sure, I’ll definitely check out your posts about using NixOS. It’s one of the top three choices on my list. I really like the idea of using a config file to build the system you want instead of doing everything from scratch.
Some others beat me to it!
Thanks for sharing, I’ll check it out.
NixOS on my personal laptop and MacOS + Nix at work.
Ubuntu (and previously Linux Mint). It’s the closest I’ve ever got to “just works”, at least this decade.
I became sick of reinstalling my OS over and over again, and it was clear that even tools like Ansible would not cut it. So I installed NixOS three years ago and never looked back.
Yep, running the same NixOS installation since 2015, still as fresh as ever, can’t imagine wanting to reinstall.
NixOS, it helps me do the insane things I like to do. The fact that I can configure my entire system from a git repo and get a near identical replica in 20 minutes of work is amazing.
Have you ever been too inconvenienced by a lack of NixOS packages? This is my main concern about making the leap to NixOS/Guix System.
nixpkgs (which NixOS is part of) is probably one of the larger package sets out there.
Most of the time when I found something missing, I’ve created a package for it in my user packages repo. This includes things like my patchset for dwm and st that make it all look something like this: https://i.imgur.com/ydXoncu.png
Right now, I only have Linux on small servers running Debian. Google’s gLinux is based on Debian, so it’s most familiar at the moment. In the past I’ve used Linux Mint (standard and Debian Edition), Fedora, CentOS, Red Hat Developer Edition, FreeBSD, OpenBSD, Arch, and Void.
Overall I would recommend Debian, though if I were installing a new Linux laptop today I would install Linux Mint Debian Edition. These days I have less time / inclination to screw around with OS setup; I prefer a curated, out of the box experience.
As toy installs I highly recommend Void and OpenBSD. They’re both dead simple in their own ways, which I found delightful and enlightening to experience. They each reminded me that we don’t need to make everything so complicated. For example, I found XBPS incredibly accessible to non-maintainers. I easily fixed an issue with the ssh package without much hassle. On the other hand, I had to fix an issue with the ssh package. That said, both are solid once working. I ran each for several years without issue (Void at work on a personal build/test server, OpenBSD on an APU2 for my home router).
I wouldn’t recommend the Red Hat derivatives. They have obnoxious repo setups, with packages split across the core, EPEL, and miscellaneous third-party repos, depending on how much “enterprise” folks care about those packages. Red Hat proper is worse, with an additional layer of repo management and separation related to their paid support subscriptions.
I personally don’t like Arch, pacman, or the AUR. I don’t think Arch provides a coherent experience. Unlike Void or OpenBSD, different parts of the system were clearly written by different people with different preferences. I guess some people like that Arch has “personality” in that way, I don’t.
NixOS, because it takes the ideas of functional programming and applies them to systems. The benefits are declarative configuration, fearless upgrades and rollbacks, and it’s super easy to fork and improve.
I started in 2011 with Ubuntu to discover what Linux is, and switched to Arch in 2012 to go further into the details. When the distro switched to systemd as the default init, I was still eager to tinker with the system, so I searched for an alternative and finally settled on Crux, which has powered my personal computers since 2013!
As I learnt more about Linux, package management and writing C, Crux offered me the simplest package building system I could find, so even if it did not have as many packages in its repos as the big ones, it was still easy and quick to package them myself. I could grab the source for an unknown program and package it in five minutes, which was the killer feature for me, even if that means compiling everything (it doesn’t take that much time for a single package, and big updates can be done overnight).
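For flavour: a Crux Pkgfile is just a short shell fragment, which is what makes packaging so quick. A sketch for a made-up program (the name, URL and build steps are invented):

```shell
# Hypothetical Pkgfile for an imaginary "hello-tool" release tarball.
name=hello-tool
version=1.0
release=1
source=(https://example.org/$name-$version.tar.gz)

build() {
    cd $name-$version
    ./configure --prefix=/usr
    make
    make DESTDIR=$PKG install
}
```

pkgmk then turns this into a package that pkgadd installs.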
I am slowly moving toward OpenBSD (Crux is based upon its ideas), but I am not there yet. I still dual-boot Debian for Steam when I want to play games, and my company laptop runs Debian as well, because I need it to “just work”, not serve as a lab for my ideas :)
How would you compare Crux’s initial setup to something like Arch? I did try Crux before, but it was too “from scratch” for my expertise level at the time.
Also, I get the idea of “just works” for company computers. As soon as I got employed I set up something that let me do my work out of the box without having to spend my whole first day (prob week) at work tweaking my environment.
The install is fairly easy, though I agree it is manual. You have to generate your locales, configure network, install bootloader, … The handbook is really great for this all.
The hardest part is compiling your own kernel, though after you did this a couple times, it is fairly simple.
To me, the simplicity of the system outweighs the drawbacks of the manual process.
I have only used one Linux distribution and it’s Slackware Linux. The reason I’m using it is because it works very well and it doesn’t require maintenance. Once you install it, you know it will work for years.
What do you use your machine for? Are you able to get the software you need from slackware packages or do you wind up installing a lot of your own stuff outside the package system?
I use Slackware on local development and production servers. I never use packages. They modify the global system state, which leads to a disaster sooner or later. My Slackware installs are 300MB in size and I have build scripts for software that each server should run. The built software gets installed to ~user/installs/.
this is basically me too. I was still on a hand-me-down pentium 1 box in 2004 when I first tried Linux. Friend recommended Mandrake, but the experience on that computer was awful, so I went to Slackware out of necessity and just never looked back. Now that it is set it is all easy and I’ve customized it all pretty heavily over the years.
I see them putting Wayland in the packages now though… I finally did the PulseAudio thing last time and am not a fan. I might just never update again at the rate it’s going.
Are you both running the latest release or current? It seems that the latest release is almost four years old now?
I run the latest release only. Age is a good indicator of how good software is: the older the software, the better it is. Anything new is a red flag for me. I only run software that I know has worked for years and will work for years. This keeps my pager off at night, and I know there will be no hiccups in production when I deploy new servers, as it’s all been battle-tested for years.
Mine was current when I most recently reinstalled last September. Before that, it was the 14.2 release for a long time. I update pretty rarely, usually when I have hardware trouble of some sort and need to make a change anyway.
Does this keep you on an older web browser though? How do Slackers keep updated with new browsers?
You can update individual packages at any time very easily. The slackware package itself is the ESR version of firefox and periodically emails you saying a security update is available to it. You can just update that and forget about the rest. Slackware also rarely modifies anything so you can compile from upstream (the original software devs, not the distro) easily enough if you really want to.
For example, I also have a copy of a newer Firefox in my home directory that used to auto-update on its own: I downloaded the tar.xz from the Firefox website and uncompressed it in place, all independently of the rest of the system. It just worked out of the box with its built-in updater (except the latest 75.0 was so bad I rolled it back and disabled future update checks to ensure it never does that again).
I assume that’s because of the giant address bar in 75?
Personally I’d have turned it off in about:config and left updates on; security patches are particularly important for open-source apps (where attackers can read the patch to figure out how to attack old versions) and for anything that talks to untrusted network services.
Well, the giant address bar is part of it, but it was the on-click behavior that drove me so nuts I reverted. In my old version, single click places the cursor. That’s it. Double click selects all and sets the PRIMARY so i can middle-click paste it elsewhere. They changed all that in one swoop and the about:config thing only undoes the visual change, not the behavioral change.
I might just switch browsers entirely if Mozilla don’t come to their senses soon. They’ve done a lot of WTF things lately and while I’ve undone many with about:config or userChrome.css and such, the smaller frustrations are piling up anyway.
I feel ya. I just don’t have many good alternatives on linux that don’t involve an advertising company.
Edge on Windows / Safari on macOS are ideal on laptops where battery life is a consideration.
While not using Slackware (any more) I do use a non-packaged version of Firefox, installed in /opt and kept up to date using a script:
(for 64-bit replace
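The script itself didn’t survive the copy; a rough reconstruction of such an updater, assuming Mozilla’s download-redirect URL (the os= parameter is my guess at what the 32-/64-bit remark refers to):

```shell
# Hypothetical reconstruction of an /opt Firefox-nightly updater; not the
# poster's actual script. Nothing runs until you call update_firefox.
update_firefox() {
  url='https://download.mozilla.org/?product=firefox-nightly-latest&os=linux64&lang=en-US'
  tmp=$(mktemp -d)
  wget -O "$tmp/firefox.tar.xz" "$url"
  # Read the version out of the tarball's application.ini:
  version=$(tar -xOf "$tmp/firefox.tar.xz" firefox/application.ini \
            | sed -n 's/^Version=//p')
  mkdir -p "/opt/APPfirefox/firefox-$version"
  tar -xf "$tmp/firefox.tar.xz" -C "/opt/APPfirefox/firefox-$version"
  rm -rf "$tmp"
  echo "installed firefox-$version"
}
```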
This installs FF nightly in /opt/APPfirefox (keeping the last two versions) with a versioned name (current is firefox-77.0a1). It also installs a policy file (in /opt/APPfirefox/firefox/distribution/policies.json) which disables automatic updates, since a) I want to manage these myself and b) they don’t work anyway, because the FF installation directory is not (nor should it be) writeable by the current user. The contents of that file are:
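The file’s contents were lost from this page; per Mozilla’s policy-templates documentation, disabling updates takes a single key:

```json
{
  "policies": {
    "DisableAppUpdate": true
  }
}
```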
Ever since the dawn of time I’ve been adding a drop-in extension to my shell profile which scans /opt for certain directory names and adds these to MANPATH. The whole system is tailored to allow packages installed using this naming scheme (/opt/APPfirefox, /opt/SRVservice_name, etc.) to be used seamlessly. Here’s the drop-in:
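The drop-in itself was also lost in the copy; based on the description, a reconstruction could look like this (the function wrapper and its argument are mine, so the sketch can be pointed at a test directory):

```shell
# Reconstruction sketch: walk /opt, extend PATH/MANPATH for each package
# directory, and source any package-specific .pkgprofile files.
scan_opt() {
  _root="${1:-/opt}"
  MANPATH="${MANPATH-}"
  for _dir in "$_root"/*/; do
    _dir="${_dir%/}"
    if [ -d "$_dir/bin" ]; then PATH="$PATH:$_dir/bin"; fi
    if [ -d "$_dir/man" ]; then MANPATH="$MANPATH:$_dir/man"; fi
    if [ -r "$_dir/.pkgprofile" ]; then . "$_dir/.pkgprofile"; fi
  done
  export PATH MANPATH
}
scan_opt   # from /etc/profile.d this walks the real /opt at login
```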
It sources any .pkgprofile files it finds in /opt/dirname to enable package-specific configurations which are not covered by this script. In recent distributions this can be dropped into /etc/profile.d; otherwise add it to /etc/profile. I removed some ancient cruft from it (e.g. a reference to the non-existent X11R6 directory); there might be other parts where its age shows, but it still works after some 25 years.
I’m currently using Void Linux. I tried it because it wasn’t based on any other distro and it does not use systemd. It’s been more than 6 months and I have nothing to complain about; everything I need is available and working well. It’s the kind of distro you have to tailor to your needs, which I like.
The only Linux I touch on the regular is the NixOS machine in my basement. The declarative configuration is, I think, the only sensible model in a world of cattle-not-pets, and I would be interested in using it more broadly if I ever have the opportunity.
Of course, that’s on a server that doesn’t get interactive use, so. If I had to pick something to work on a laptop, and I couldn’t use a Mac, I guess I’d go the path of least resistance, and Ubuntu, because life is too short.
ElementaryOS. It has the same UI-polish vibe as macOS. It’s not just superficial copying of the skin, but the fact that the UI is coherent, and clearly every detail has received attention.
Debian. I picked it because it had the best package manager at the time, hands down. That was 20 years ago. I’ve seen no reason to switch.
I’ve been using Debian Sid (unstable) for two years now on my Lenovo laptop (not a Thinkpad). I wanted something Debian/Ubuntu because those are the distros I’m most comfortable with, and also because they have the best third-party support of all Linux distros. But I didn’t like Ubuntu because lots of applications aren’t updated after the release of a version.
Debian Sid (with GNOME 3/Wayland) is my choice because it gives me the comfort of the Debian family and it’s a rolling release, meaning packages are always updated. It’s unstable, but only compared to Debian stable, which is rock solid :). Debian Sid is quite stable, as I’ve managed to keep it running for two years across different GNOME versions, but it’s not trouble-free.
I had been running Ubuntu since 2005, but switched 3 years ago to Fedora as I thought that Ubuntu lost its way, investing in doomed projects like convergence and Mir and forgetting to keep up with innovations. At that time, Fedora had Wayland and not Ubuntu, plus I was having some pretty bad issues with memory leaks that only happened in Ubuntu. This pushed me to move to Fedora (Redhat was the first distro I used in 1997), and I’ve been very happy since. I’m glad there is none of that Snap nonsense (and the snap directory polluting my $HOME), Gnome is always up to date and Flatpaks work wonders. I just wished the installer was better, especially with custom partitioning.
Have been using Arch for a long time now on my personal PC. Probably more than eight years. OpenSUSE and Fedora before that. Honestly, the thing that hooked me was the rolling release cycle. I got sick of pretty much having to reinstall and re-tweak everything every few years just to keep everything up to date.
That said, Arch is good for pets, but not for cattle. I use Ubuntu LTS for servers and Kubuntu LTS for setting up PCs for my parents. Lots of Ansible YAML for the former.
I also use Dietpi along with Ansible for a few Raspberry Pi machines I have around the place (the exception being OSMC for the Kodi multimedia centre).
I’ve been a loooong time KDE afficionado, but have recently switched to the Awesome tiling window manager (still use Dolphin and Konsole though).
I think Nix looks interesting and may be tempted to give it a try in another decade or so. :-)
My history progression has been something like Slackware → Ubuntu → Mint → RHEL/CentOS (for remote hosting, not workstations) → Manjaro. I used Windows and Mac off and on as well. I would typically have Windows at work or for gaming, but kept coming back to Linux when I wanted to play with things or learn. I’d enjoy Linux until I needed a change for some reason or got annoyed with driver issues and such; then I’d jump to Windows or Mac for a bit, until I got fed up with the lack of customizability or stopped playing games. Lately I have found the balance with a Mac laptop for work and a Linux desktop for home. The gaming support is really good (I don’t really play the latest games, but enjoy Dota 2 and such) and dev work is way easier on Linux these days for what I do.
Manjaro has brought me everything that I want in linux without a lot of the rough setup and maintenance. A rolling release distro is awesome, and I don’t have to worry about adding apt repos. But it still all seems very personal preference or trial and error to me.
Debian. The OS/distro is not the correct layer for experimentation.
Fellow Debian user here. I think there is room for experimentation at the distro and even OS levels, but … not at my day job, and not on my personal machines until I retire.
I was a Debian user for many years, and I still think it’s among the best mutable-state distros. That’s not only because of its technical aspects, but because the project’s governance is very well thought-out and effective at steering it in the right way.
I’ve often used Knoppix as a recovery tool regardless of what else I was using at the time. It’s good for that, and doesn’t require any real commitment.
I experimented with CoreOS a few years ago, because I’m a fan of minimizing mutable state, both for security purposes and to make things more maintainable. I didn’t like it. It was too much work to do anything interesting with it, and the Docker command-line tools were kind of awkward to work with. Perhaps it’s gotten better.
Today I use NixOS. I think the hygienic build system and pure-functional configuration definitions are the future. I’ve been using it for a few years, been through some messes with it, and the path to recovery has always worked out in the way I’d hoped.
Migrating my setup to a new machine is infinitely easier than it ever was with Debian. I tend to customize things pretty heavily over time, and when the manner of the customization is editing a lot of disparate files in /etc and installing a lot of interdependent stuff in /usr/local, it winds up being impossible to really be sure what’s in there and why I did it that way. With Nix I can factor my config so all the stuff that I did for the same reason, is together in one file.
Similarly, if I decommission a machine, I don’t have to worry that I’m losing anything important. I just have to check for stuff in /home and /var, and make sure the config is checked into a repo. That can be done in minutes. With mutable-state distros it can take weeks to be sure I have everything, and I’ll never be able to re-create it in quite the same way.
I use devuan, because it’s the pinnacle of unix. The software on my machines generally falls into one of two classes:
Devuan gives me oldish versions of everything, and the people who man the ship are very… unixy. They are friends of the kind of unix I’ve used for >30 years. These are people to whom systemd is foreign and rather assuming, so devuan is the debian package repo, edited by people who really liked unix, for people who really like unix.
I personally use Pop!_OS (Ubuntu based) because I like the direction System76 have taken the UI/UX. There’s very little to do out of the box, except run my package install script.
If I wasn’t using Pop!_OS, it would be Ubuntu. I’m a “set it and forget it” kind of guy, so I like to get going and leave my OS alone. The fact that Ubuntu and/or Pop are supported by large companies, and therefore unlikely to go anywhere is a big bonus for me also. I try to stay away from the smaller indie distros.
Ubuntu is also the biggest distro out there, so getting support if things go wrong is trivial.
Yes, and the big userbase means that it is supported by third parties like nvidia for cuda!
1999: Introduced to Linux via Red Hat 6.0, not RHEL, but the version of Red Hat that came with Linux books/magazines in Barnes & Noble.
2000: FreeBSD 4.0
2004: NetBSD 2.0
2008: FreeBSD 7.0
I’ve been on BSD variants for so long that Linux feels confusing and uncomfortable, especially when working with CentOS at work and Ubuntu on my wife’s laptop. Making that brain switch can be tough at times.
I’ve leaned on KDE Neon. It’s been fast, and I get updates within hours, if not a day, after the source code is cut - which is amazing to me. Outside of that, the use of PackageKit to handle packages makes it easy to use both Flatpak and Debian stuff without having to run multiple commands (and Discover’s getting better - it needs a lot of work though).
I started out on SUSE around 2003 and switched to Slackware around 2004/2005. I stayed on that for a few years before moving to Debian, where I remained until very recently. The past month I’ve been running FreeBSD and it’s quickly becoming my favorite.
I’ve also used Ubuntu, RHEL6/7 and Centos 6/7 at work.
Out of the Linux distros, I prefer Debian. It’s very stable, widely supported, and just stays out of the way.
systemd kerfuffle, and YaST is a good enough tool to manage it while I learn the commands to maintain this installation.
What did you do to kill void?
If I knew (or if the message was something more useful than “an error occurred”), I’d just fix the problem instead of ditching it. Although using something that takes RPM packages does come in handy.
I’m running Manjaro on my Pinebook Pro because that’s the current default distro (and therefore the distro I expect to be most supported and up-to-date). I was previously running the previous default, which was a custom-hacked Debian Mate, but it was very difficult to install new software because it was a 32-bit (armhf) OS running on top of a 64-bit kernel, and they used brittle hacks to make that work. The Manjaro build feels like a real Linux install that I can install software on and update myself, and all of the hardware seems to work. I still have to compile a lot of stuff from source, though, because there aren’t as many pre-compiled binaries for ARM as one might hope.
I run Ubuntu on my virtual server on Linode, but I’m working on switching to Arch because it’s really annoying trying to get up-to-date packages on Ubuntu.
My first distro was Fedora Core (they’ve since dropped “Core” from the name). I don’t remember which version exactly. From there I moved to Gentoo because I wanted to be cool. Come to find out no one cared and it took way too much time to compile everything so I went back to Fedora for quite a while.
Finally, I took a job about 5 years ago now that used Ubuntu almost exclusively. At that point I switched to Ubuntu at home because it made things easier for me. Distributions like Fedora, SUSE, Ubuntu, etc. all look about the same on the outside, but when you start doing system admin work you suddenly run into a lot of little differences that really add up. I’ve since left that job, but I still use Ubuntu LTS.
I have tried to distro hop here and there. I tried Guix but it won’t work properly on my Intel NUC. I tried NixOS but it’s not compatible with my printer. I tried a BSD or two, but they all either had installation issues I didn’t want to deal with, or they were too different from Linux in ways that, frankly, I didn’t care enough to figure out. The BSDs are great I guess, and despite my mild interest in them, I have no desire to actually run them.
I plan on buying a new printer specifically to run NixOS (I hear Brother works well) so I might end up leaving Ubuntu. We’ll see how it goes.
Suse, OpenSuse, openSUSE Tumbleweed
Not going to attempt to remember what years I used what, but here we go.
I don’t think I’ll ever go back to using Linux at this point. I really have bought into the BSD style of having a full operating system out of the box. I’m so sick of removing bloat from linux distros only to have to install stuff that should be defaults but are not. Plus come on, who doesn’t want native zfs these days.
I used to use Arch and that was fun, because there was lots of stuff in the AUR and the rolling release worked well. But then I moved away for a year, came back, turned my machine on, and I couldn’t upgrade. Finding the right sequence of packages was an intractable problem. None of the package managers (I’d used pacman religiously) could do it.
The Ubuntu stuff worked mostly, though. So now I use Ubuntu.
Funny. I think I started with Red Hat around Red Hat 7. At least I distinctly remember correctly partitioning everything, then fucking the bootloader installation up so that you couldn’t get to Windows, then fucking the partitioning up while trying to fix that. Dad was mildly mad. No one was convinced that using the Linux desktop alone was a good idea. Remember being very excited for Red Hat 9 Shrike to come out :)
So RH > Fedora > Ubuntu > Debian > Ubuntu > Arch > Ubuntu and I’m honestly never going to try another distro. Only got Ubuntu because they shipped CDs across the world! Unbelievable. So hard to get modern software in India and I had the newest compiler for free! Also my first international package. Wish I’d had digital cameras back then. I was so very excited! Obviously being a child I also played around with Enlightenment DR17 and software composited desktops, making it utterly unusable by anyone who wasn’t in love with ＨΛＣＫΣＲ░ΛΣＳＴＨΣＴＩＣ （ほ園ラ）
Ubuntu since Dapper Drake (2006?). I get apt, and I have become agnostic to design/UI choices since they moved to GNOME by default (the custom Ubuntu thingy years were painful, what was it called? Unity?).
I used to customize heavily, used awesomewm for a while, but came to the realization that I hate being broken by updates and prefer spending my time on software development rather than on system fixing. So I started caring less, intentionally. I think it worked. :)
That’s a trend across many of these comments. We got started with customized setups, loved them, they became a huge headache, and we switched to “just works.”
That’s why I recommend “just works” as the default to young people these days. If they want to tinker, they’re better off doing it building improvements inside or on top of the “just works” types of software.
I use a mix of Open/Net/FreeBSD, Debian, Ubuntu and Arch for different tasks. At the moment my main personal desktop is an Amiga 4000 running Amiga OS 3.9, supported in part by a Raspberry Pi running DietPi. The Pi runs certain tasks that could run on the Amiga, but are faster when done by the Pi, such as Stunnel for SSL Tunnels (as TLS overhead is significant on a Motorola 68k CPU), and mounting drives (it mounts remote drives and exposes them via SMB and FTP so I don’t have to keep the mounts active on the Amiga).
So, I have to ask: how do you use your Amiga as a main, personal desktop? Do you just use basic applications or does it have a lot more than I’d guess?
I write mostly markdown using Ed, but Final Writer too if I want to do anything in RTF. I edit photos using Photogenics. Just last night I was doing some 3D modelling with Imagine 3D and used Photogenics to convert the IFF renders to JPEG. Earlier on I was trying to get an older 3D tool called Opticks working, so I used RNOPDF to keep the manual on the Workbench screen while I fiddled. I play a lot of tunes in EaglePlayer as it has full AHI support for my 16-bit Repulse Soundcard, but I also play MP3s in AmigAMP. I’m mostly using Final Calc to keep track of investments. I can export to OpenOffice, but going back doesn’t work very well, so I think I might look at alternatives. I had been working on a 44CON activity booklet with PageStream, but I’m not sure that’s going to happen now.
I read Reddit (and lobste.rs, among other sites) using AmRSS. This is plugged into an RSS proxy on a HTTP server so I don’t have the TLS overhead. I also use AmRSS to keep track of regulatory notifications for investments on my personal stock screener. I use AmIRC -> Stunnel -> ZNC -> Bitlbee for Mastodon, Twitter and IRC. There is a native Twitter client but I don’t really like it. AmTelnet lets me connect to the Pi to do onward SSH, but if I’m feeling paranoid I can use NComm over a USB-Serial link instead. I use SimpleMail for Email but I’m not a fan. I need to look into setting up Yam at some point but would like to try Usenet. The Pi and Amiga are on the same managed and monitored network switch, so I’m pretty confident things are good enough for my threat model.
I could use Amiga cloud-handler to access Dropbox or Google Drive, but instead I have Samba and FTP services running on the Pi pointing at mounts for SSHFS to several boxes, a Windows 10 box, Nextcloud and Google Drive. Even though I have nearly half a gigabyte of RAM in my Amiga I only have one 68060/060. I only want to run things like filesystem mounts when I need them.
My Amiga has modern Ethernet (although not at modern speeds, but it doesn’t need them), USB storage and 1080p resolution thanks to the ZZ9000 card. I use CompactFlash cards for Storage, with a 64Gb card as my main card. I have a 256Gb one, but that doesn’t like my FastATA card.
When I have everything stable I’m going to write it all up with links on how people can set it up themselves. I have other retro computers including C64s, Spectrums and more esoteric stuff so I want them all to be able to access as much of the same functionality as possible.
You should, because this was probably the most detailed picture of using Amigas today that I’ve seen so far. Thanks for the write-up. I’m sure some of your tricks will apply to anything else with weaker hardware, or to things, like secure kernels, that have high overhead due to the checks they do.
Void Linux. I destroyed my Gentoo install one day and didn’t want to do everything from scratch again. I found Void Linux and never looked back.
It’s the best distro I used. My distro history went like this: Mandrake > Slackware > Gentoo > Void
I’ve used nearly every distro out there imaginable from Suse to Redhat to Mandrake to Debian to Slackware to Gentoo to Fedora to FreeBSD. Ultimately in 2007 I switched to Ubuntu since I was leading an install party and wanted to be on the same system as everyone else and basically just stayed on it. I never had the system get itself into an unusable state and for any package not in the main repo or some PPA I just build from source. Most software these days I’m getting using npm, pip, or cargo anyway so it doesn’t matter all too much what distro I use.
My setup is so stable that I have pulled hard drives from old laptops and put them into new ones and had the system just continue working. I’m honestly pretty happy with it.
Fedora. It just works and for the things I use has up to date packages. I only ever really run Emacs and Firefox and maybe a terminal, and Gnome 3 is more than adequate to do that plus the occasional sundry while still managing to be aesthetically pleasing.
I’ve used Fedora and the early redhat versions (with a brief dalliance with both debian and SuSE in 2000 - 2001) at home since the 90s. For 15 years at work I basically developed a customized CentOS version that ran our appliance, so RPMS and the redhat way of doing things just makes sense to me.
I’ve used Fedora for a while mostly just because we were using CentOS at my previous job and it seemed like a reasonable way of getting accustomed to the RPM family of distros without going all-in on having ancient versions of everything.
I switched the DE from the default (Gnome/Wayland) to KDE/x11 because the Gnome/Wayland combination exhibits horrifying input bugs: mouse input behaves weirdly (couldn’t say why but it feels extremely off) and it randomly drops and duplicates keypresses.
I use Fedora for my personal laptop and workstation. It’s just the perfect storm for me. Most client work I do is on RHEL. So the tooling and config common ground works well for me there. Fedora is reasonably stable. Its package selection also tends to be reasonably current. So quickly trying something out on my local workstation before deciding to spin up a VM somewhere is usually easy to do. It also runs the commercial software I care about either without any trouble (Jetbrains stuff, 010editor, Steam) or with only a small amount of trouble that I understand very well (VMWare).
Since I’ve gotten the hang of using mock to build things and COPR to make them easily accessible to the package manager, I also find it very easy to keep my system clean.
I run enough software outside the browser to care quite a bit about how easy or hard that is on my workstation’s distribution. If I ever got to the point where I was only dealing with software that runs in a browser, I’d not bother to build a workstation or maintain a laptop. I’d just use a tablet or a chromebook for everything. My browser is still mostly for reading things, and sometimes for watching things. Not for making things, yet, for me.
My capsule history:
Somewhere in there, maybe around the late 1990s, I ran OpenBSD just for a change of pace.
I also got used to Window Maker as my GUI somewhere in the Slackware period, I think, and I’ve used that on Ubuntu for as long as I’ve used Ubuntu. I therefore don’t really understand why some people are put out by Ubuntu changing their default GUI.
I ordered some CDs in late 1998: Red Hat 5.2 and Debian 2.0 (hamm). As best I remember it, I had a hard time getting on our terrible dialup with RH and Debian worked.
I’ve experimented a lot since then, including some quality time with FreeBSD and OpenBSD, and long detours into Ubuntu on client machines, but Debian is what I’ve spent the most time with by far. It’s free software and serious about it. It’s not owned by a megacorp. The defaults are usually sensible. I know the environment and the tooling pretty well. The package repositories are an astonishingly vast and deep resource.
These days, for primary workstation/laptop machines, I mostly run stable with a few things out of backports and if I need anything newer I can usually build it locally or find packages. There are some frustrations that come with the length of the release cycle, but I’ve really come to value the rug-not-getting-yanked-out qualities of a stable distro.
I had been switching between different distributions and families of distributions when I first started using Linux, but it’s been a couple of years since I settled with only using Debian-based systems which seem to be the most widely supported package-wise. Arch (especially because of the community repositories) might actually be slightly better, but I have developed a certain (and not necessarily fair or logical) distrust against the entire distribution after a failed upgrade I experienced a couple of years ago.
I’m now running Debian and I’ve recently switched from testing to stable. If I were to migrate to a different distribution, I’d be most likely to consider Ubuntu Server, Ubuntu Desktop, Mint or KDE neon, if I decided to give KDE another shot (in that order).
Interesting to see how many people use Void linux though, I don’t know much about it. I should try it out.
been basic Ubuntu ever since high school
I break laptops pretty easily (sometimes through something dumb, like spilling coffee on it, sometimes through something baffling, like a hard drive getting corrupted after playing around with urbit) so it’s nice to have a go-to for something fairly universal on laptops (i move around too much for a desktop) with a simple install process; all i need really is i3, chromium, emacs, smplayer, cmus, haskell/stack, and some python shit–i don’t configure much so it’s not bad for things to be gone
i have used os x personally, and i do for work, but the lack of tiling is terrible for me: i should not need to use a mouse
I’m sticking with Gentoo. USE flags are just too handy, and I’ve had machines with the same install (kept up to date) for over a decade now. Gentoo is what I want. No more and no less.
Slackware, RedHat, Debian, Gentoo, Ubuntu, Gobo, (Net/Open/FreeBSD,) Debian, NixOS
I’ve used Debian for about 20 years. Keep coming back to it. But am currently migrating to NixOS, maybe a change that will see me abandon Debian.
This is my favorite thread. Does anyone have experience using the KISS distro?
I do! (I created KISS and use it daily)
I’m happy to answer any questions you may have. :)
Ahhh sure, so I am looking to get into KISS. I realized it’s very different compared to installing other distros. I am currently using Manjaro but exploring distros that are more minimal, which is how I found KISS. How’s the experience, is it similar to installing Arch?
It’s trickier and this is almost solely due to compiling your own kernel. This is where a lot of users trip up during the installation process. It’s a matter of creating a kernel .config which is suitable for your hardware. The best advice I have is to figure out beforehand the ins and outs of your hardware and which kernel modules, drivers and firmware it requires.
The easiest way is to then run ‘make defconfig’ which will generate a nice base configuration file to extend. This base contains pretty much everything needed (in the general sense), namespaces, the block layer, networking, etc, etc. It’s then just a matter of what your hardware needs.
I extended it for my hardware by enabling amdgpu, elan/i2c/hid stuff for my touchpad, nvme drivers for my SSD, Realtek drivers for my WiFi card, etc. It’s a trial and error process. Your priority should be getting the kernel to boot the system.
Once you’re booted it becomes a lot easier to compile a new kernel and reboot to test it. Remember, while this may seem a steep hill to climb from the get go, you only have to go through this process once. Keep a backup of your .config file when you’re finished and you’ll be able to simply copy it to the kernel sources and run ‘make’ each time you’d like to update your kernel.
It’s a “set and forget” kind of situation (if that makes sense).
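The cycle described above boils down to only a few commands; a sketch, meant to be run from the top of an unpacked kernel source tree (wrapped in a function here just for clarity, and the backup path is arbitrary):

```shell
# Sketch of the KISS kernel workflow described above.
build_kernel() {
    make defconfig              # sane base: block layer, networking, etc.
    make menuconfig             # enable GPU, input, NVMe, WiFi drivers, ...
    make -j"$(nproc)"           # compile kernel and modules
    cp .config "$HOME/kernel-config.bak"   # keep for future kernels
}
# On the next kernel update: copy the saved .config into the new source
# tree and just run 'make' -- no reconfiguration needed.
```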
The other tricky part (though far less tricky) is the partition layout. You should think about beforehand what kind of partition layout you’d like, are you going to use EFI or BIOS? Do I want a separate /home? Do I want to use encryption?
The installation guide is as general as it can be so your choices here (and in the kernel) may affect the existing instructions. This is why I tell users and those wanting to dive in to know what they’d like to do prior to the install. I guess what I’m trying to say is; “Know your stuff as steps may change based on your choices and some thought may be required”.
I hope this doesn’t dissuade from trying it out. It can seem a little daunting at first. You will no doubt learn from this experience if you choose to continue. This is part of the reason for the guide being as loose and manual as it is. The guide should teach you how to compile your own kernel (solely for your hardware), how to manually setup disks, how to operate a chroot, how to manage your KISS system, etc.
Anyway, apologies for the wall of text. I’m happy to answer any further questions you may have. :)
No problem man, I really appreciate your long comment explaining the details. Plus you are the creator of this distro, so I really appreciate your answer. By the way, I love your tool neofetch. How did you get so good at bash?
I’ve been on Linux for almost two years now, after using macOS for about 15 years prior to that. I love the experience in general, don’t miss anything, and plan to continue this way. It’s a new world with a crazy amount of different options.
I experimented with different desktop environments (Gnome, KDE, Xfce…) and concluded that Gnome fits my workflow the best. Picking the “right” distro is more of a challenge. I like the philosophy of Debian (community driven, free-software focused etc) the most but Debian Stable has an older Gnome version while Gnome releases big improvements every 6 months.
This has made me try several other distros that run the latest Gnome, with Fedora being my favorite among those. Fedora has a similar philosophy, but compared to Debian it has fewer packages, so some more obscure stuff I cannot find, and it’s a bit heavier/slower than Debian by default, as it has more things running in the background.
So lately I’ve been going between running Fedora and running Debian Sid which has latest Gnome but there’s theoretically a higher risk of breakage which is always on the back of my mind as I’m a relative newbie and unsure if I could fix it.
All in all, if there was an official Gnome OS distro run by the community and with a strong focus on free software philosophy I would most probably run that one :)
Elementary OS and KDE Neon sound close to what you desire, except for the wrong desktop environments!
maveonair pointed out Solus Linux which uses Budgie, a DE based on Gnome. Not sure if that’s something you’re interested in trying.
thanks for sharing!
There is a Solus Gnome flavor available which gets a lot of love from the Solus Team too: https://getsol.us/download/!
Please check it out and if you have any questions don’t hesitate to reach out to the community forum here: https://discuss.getsol.us/
thanks, I’ll take a look!
I have consistently been a macOS user since 2007. In 2018 I discovered Nix and NixOS and haven’t looked back. macOS is a local optimum UI-wise and has the best GUI applications. The macOS CLI became a lot more bearable with Nix (I basically replace it by a GNU userland and maintain everything with home-manager). NixOS is the best Linux distribution I have used so far. Declarative configurations, virtual environments, atomic upgrades/rollbacks, ZFS. If Linux had the apps (Microsoft Office, OmniGraffle, etc.) I wouldn’t need a Mac anymore. But since Linux doesn’t have those, I am happily using macOS + Nix and various NixOS machines.
On servers I use whatever my employer uses. For my own and my family’s servers I use NixOS. Our daughter’s desktop also runs NixOS. My wife has been a macOS user since 2010/2011 or so.
For many years I used different kind of Macbooks but switched a year ago to a Workstation / Portable Notebook Setup:
Workstation: Self-built PC because I get great performance for the money. On that machine I am running Solus (https://getsol.us/home/) which claims to be an operating system that is designed for home computing. Before that I used many different Linux distributions over the years such as Fedora, Ubuntu, Debian, Arch, Manjaro and once even Gentoo :).
Portable Notebook: I always come back to an Macbook, and I tried many times other alternatives. At this moment I have a Macbook Air 2019.
My daily driver is Solus (Budgie) - it’s “curated rolling” (updates weekly), fast and always “just works”. My older machines (mostly netbooks) run either MX Linux, Q4OS or EXE Linux (Devuan with Trinity DE), except for a fully libre BlackBook (Macbook 2,1) which runs Trisquel.
If I want to be sure that something’s going to work fine, I use one of the Ubuntus, usually Lubuntu.
I like OpenBSD and have used it as my primary OS quite a lot in the past, because it has exceptional documentation, a small footprint, and sensible defaults. Right now though I’ve been experimenting with KISS, because I enjoy compiling from source and it doesn’t use GNU tools. I don’t have a particular problem with GNU, but in the same way that people use Firefox because it isn’t a Chrome spinoff, I want to do my bit to ensure that there are other implementations of POSIX tools available. At some point I’ll probably switch to Alpine when I get bored of compiling from source.
1994-2000: Messing with Linux occasionally on loaned computers. I didn’t have the hardware to run it.
2000-2002 Debian unstable: Rolling releases before they were a thing. Lots of packages. Abandoned because it was really unstable, and slow.
2002-2015 Gentoo ~testing: Rolling release. Flexibility, targeting the CPU I have. High maintenance, but I learned a lot with it.
2015-now Arch. Rolling release. Excellent documentation wiki. Most tolerable pre-packaged distribution, as it seldom gets in the way. Extremely low maintenance.
Note: I don’t just use one distribution, but this lists what my main workstation’s Linux has been at a given period of time. Other operating systems I use (such as AmigaOS, FreeDOS, Haiku, NetBSD) have been excluded deliberately, as this story is tagged Linux.
I use Devuan with the unstable repos (which aren’t that unstable in my experience), which gives me the maturity of Debian without needing to run systemd. I know many apt and dpkg commands off the top of my head, and don’t need to mess around with yet another package manager with its own archaic commands.
Installs tend to just work, and I don’t have to constantly run pacman -Syu for fear of falling out of date. I can leave a machine alone for months and come back to it, run an apt update && apt dist-upgrade, and everything works. Debian isn’t flashy, but it’s stable. I can always run VMs or containers if I want to play with something more cutting edge.
Recently, I’ve begun experimenting with running Guix on top so I can install extra packages. I agree with a lot of Guix’s philosophy, but I run enough non-free software that it makes running GuixSD as my distro painful. I don’t want to maintain a custom Linux kernel config.
Historically, I’ve used a number of other distros.
On systemd: I know most people happily run distros with it, but for some reason, whenever I install it on my desktop machine I get weird boot/shutdown hangs. That, plus the general low quality of systemd code, its use of binary logs, and its automatic overwriting of common config files like /etc/resolv.conf, are all reasons for me to stay away.
Parabola. I used Trisquel for a couple of years, but the packages were getting old. Parabola is in the sweet spot of being entirely Free Software and also having recent packages, since it’s based on Arch. When packages are recent and it’s a rolling release there isn’t any incentive to distro hop.
I’m currently running Arch Linux and Void Linux. Future installs will probably be Void, basically via process of elimination:
A while back I was bitten hard by trying to upgrade non-rolling-release distros from one version to the next. I also like being on recent versions of software, so that disqualifies Ubuntu/Fedora/most versions of Debian.
Debian sid was never advertised as particularly good for end users, and issues like this will continue to keep me far away.
Gentoo sounds like too much work.
Arch Linux has systemd, which isn’t the end of the world but has caused me a few headaches.
So far the only bad things I have to say about Void are that it doesn’t package OpenSSL, only LibreSSL, and that the documentation is a big step down from Arch’s. On the plus side, the fact that it has its package repository on GitHub makes it incredibly easy to see what is happening and to contribute, compared to most distros.
I think I started in 2008 with Fedora. Went through /a lot/ of different ones… the last few before NixOS were Arch, Void and Fedora.
Now I’m on NixOS, the reason I chose it originally was just because they were a bit more mature than Guix (which I wanted to make my daily driver at the time). Now I’m stockholm-syndrome’d, I used to think the Nix language was the worst language design ever but now I think it’s merely got bad tooling.
Obviously the reason I wanted to use these systems is reproducibility. I was sick of losing my state or having trouble keeping track of my state or suddenly having an inconsistent state. Most operating systems are actually way more bonkers than NixOS, although I don’t think Nix is really bringing sanity back to computing it is at least on the right track.
The view I take now is that NixOS has the correct general idea, and that for the foreseeable future we will be switching out the pieces (like when systemd inevitably gets rewritten in Rust or something along those lines). The only thing that would make me leave this “ship of Theseus” is a working microkernel architecture: Redox, Genode or Fuchsia, and that’s only if I can run it on some cool RISC-V machine. Even then I’d still probably try to get Nix to work on that thing.
tl;dr: My distro needs are solved for the next decade, now I’m trying to figure out userspace optimizations, i.e. window management.
I feel like I’m in the same boat with Nix. It has radically improved system deployment for me even though the tools are full of sharp edges. nixpkgs and the tools are definitely improving, though.
1987-2005: Whatever my Dad ran, usually some flavor of RH or Debian.
2005-2006: A brief stint with Debian, before getting frustrated by the lack of recent versions of things I wanted to use and not enough understanding to build them myself.
2006-2010: Arch, I dove in the deep end with learning how things worked, I briefly tried Gentoo but landed on Arch because waiting hours for shit to compile on the awful laptop I had wasn’t worth it.
2010-2015: Back to Debian, Arch is lovely, but configuring a system from scratch was a pain, and Debian was roughly usable out of the box. I tried using Ubuntu somewhere in here, but didn’t like it (it’s basically Debian but worse).
2015-present: Manjaro does most of what I wanted with Debian (roughly usable out of the box), but has the upside of the AUR and Arch’s general “everything already in the repos and at recent versions”. I still use Debian in a few places, and I’m comfortable in most distros at this point because of work; mostly the things I care about are portable between systems, so it’s just “whatever package manager I have here.” I stuck a thing in my shell line at one point to remind me what system I’m on; now I just fumble through package managers I know till I find one that works. A lot of the Linux I run is in VMs, though: a Kali box for recreational pentesting, an ArchBang box because I wanted to test it out, a Manjaro laptop, a Debian server, some Debian on Pis around the house. Most VMs run on a Windows host so I have somewhere to play games.
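That “fumble through package managers till one works” step can be scripted. A minimal POSIX-sh sketch, hedged: the detection order and the exact upgrade commands below are just the common defaults for each family, not something prescribed in this thread, and nothing here runs as root (it only prints the command you would use):

```shell
#!/bin/sh
# Print the full-upgrade command for whichever package manager
# this machine has, instead of trying them one by one.
upgrade_cmd() {
    if command -v apt >/dev/null 2>&1; then
        echo "apt update && apt dist-upgrade"      # Debian/Ubuntu family
    elif command -v pacman >/dev/null 2>&1; then
        echo "pacman -Syu"                         # Arch family
    elif command -v xbps-install >/dev/null 2>&1; then
        echo "xbps-install -Su"                    # Void
    elif command -v dnf >/dev/null 2>&1; then
        echo "dnf upgrade"                         # Fedora/RHEL family
    else
        echo "no known package manager found" >&2
        return 1
    fi
}

upgrade_cmd || true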
2014-2017: Mixed Windows and Ubuntu, gradually moving to Ubuntu.
2017-present: mass migration to RHEL
I can’t go into a ton of specifics, but I’ve been pleasantly surprised with RHEL, I used to hate it, but I think I just never gave it a chance. Once you learn the SELinux bits and get in the right headspace, it’s a perfectly serviceable system.
2018 - Current: OpenSUSE Tumbleweed (Desktop and Laptop)
I’ve been through:
My setup is Windows dual boot and Ubuntu. I do not want to think about drivers and kernel options anymore. I need a browser, a terminal, ssh and something where VPN and teleconference software just works.
Just Ubuntu, it runs everything.
Gentoo on a laptop just for fun, and Debian on servers.
I’ve been using Fedora for ~7 years now as my main work driver. I’m in a mixed *nix/Windows environment for work, so it does the trick. I use a combo of Fedora and Windows w/ WSL enabled at work. It’s gone well for me so far. Fedora “just works” and hasn’t presented me with the issues I’ve run into with distros like Debian or Arch.
At home I mostly use QubesOS though. I’m in love with the compartmentalization and privacy options at hand when I’m running VMs on there. If I want to test something out, I can always run it in a disposable VM. Creating new VMs for things like multimedia or communications is a wonderful option. A recent example of a great use-case was a Zoom install, as I was being forced to use it for remote chat at times. With Qubes I was able to contain Zoom in an AppVM completely independent from any other applications and files on the system.
tl;dr - Fedora in the streets, QubesOS in the sheets.
First (1992) SLS on a DAT tape, which I had to dump to floppies to get it installed on a machine which did not do SCSI. This was followed by Slackware (1993) and Red Hat (1995), which I used in parallel until I got fed up with the lackluster upgrade experience with Red Hat and went full Debian in 1998. Apart from using niche distributions for specific purposes - DSL for minimal systems, Knoppix for live media, LTSP for when I had a flock of Netpliance iOpeners and Virgin Webplayers running odd services around the house, now LibreELEC/Kodi on a RPi to give the family something to watch while I do other things - I’ve been on Debian ever since. Where Red Hat-based systems generally took a lot of hand-holding on updates, most of my Debian systems have more or less made it unscathed through decades of upgrades.
I’m looking for a buttery-smooth, lightweight system which just works and has a lot of packages available. I always use the Xfce desktop environment, sometimes combined with the i3 window manager.
I’ve used Linux Mint and Arch Linux, and I currently use MX Linux. It’s good. I miss pacman though. Nix and Guix also seem interesting.
I’ve mostly used Arch these past few years without any issue. My desktop right now runs NixOS, though.
I use Ubuntu, because I’m emotionally attached to the Debian package management system (my first Linux install was Debian from floppies on a 386 with 4MB of RAM). Ubuntu strikes the right balance of usability and power for me.
I’m currently using Manjaro because I wanted to have access to a lot of recent packages. But lately I’m considering a move to NixOS or GuixSD for their declarative configuration, especially for setting up VPS machines. It just feels right, so I’m eager to try it out.
Ubuntu or Debian. Usually Debian on my desktops and Ubuntu LTS on my servers.
I’ve had good luck with pretty broad support for any one-off packages I need. I don’t have much time or energy for customising anything, so it’s a good default. CentOS is also fine, but I cut my teeth on Debian as my first distro, so I’m more comfortable with that toolchain.
I used Red Hat from 1998–2010 or so, Ubuntu from 2010 to 2011 or 2012, and Debian since (numbers are a bit fuzzy, and part of the Red Hat time was Fedora).
I like that Debian comes from a traditional Unix sysadmin background; I appreciate the stability of the system (I love computers, and I use them for fun as well as work, but I am not as interested in spending a weekend chasing down dependencies for a broken system as I was in my youth); and I also appreciate the Free Software focus. If you’re used to Ubuntu, Debian is Ubuntu without the flakiness; if you’re used to Red Hat, Debian is Red Hat with more-pleasant tooling (that’s a matter of opinion, of course).
I would love to take a look at Guix SD sometime. And maybe Arch someday — I love the Arch wiki. Seems like a great community. But with Debian I can get work done, and that’s what is important.
Debian, because it’s stable and as “vanilla” as it gets. It’s pretty mainstream as far as distros go.
I run i3wm + i3gaps, tint2 and kitty.
btw, i use arch.
But yeah, I started back in 2010 with Ubuntu, and since then I had been hopping distros till about 2015, when I settled with Fedora and started contributing to the local community, which was the only active one in my country at the time.
Time passed, and I started using Antergos on my personal computers and Fedora/Ubuntu for servers. I liked having bleeding-edge software without wasting too much time setting up my machine (as I had to a few years earlier), plus the documentation and the control it gave over packages were more flexible.
It was this way until I started using Ubuntu at work (for the sake of having something more stable). One or two months ago I (finally) noticed I had to do something about my laptop running a discontinued distro, so I cleaned my Antergos machine and ended up with an Arch look-alike.
This week a new SSD arrived for my desktop and I installed Arch. I spent a night setting it up, and now, with the packages I require to work and be comfortable, it has more than 1k fewer packages than the Antergos installation did. I’d like to keep testing more setups if I had a spare machine, but since I’m already familiar with these (Arch, Ubuntu, Fedora) I’d rather not change my daily drivers for testing purposes like I did with Crux in the middle of college finals.
Void Linux. I was on Fedora before, but the short lifecycle and offline upgrades were killing me, so I switched to Void to try something different. There are a few things which bug me, but I just can’t get myself to nuke-and-pave again, so I’m stuck with it for the time being.
Ubuntu 16/18. I’ve always used either Mac or Ubuntu and for the foreseeable future that’s how I’m probably going to live my life as well.
Mac for usability (and for getting me into programming in the first place); Ubuntu for server stuff (the majority of the time I’m sshed into 2-10 Ubuntu servers via my Mac).
1998-2000: Slackware. Started on Slackware 4.0, then totally busted up my system upgrading to Slackware 7.0 (yes, that was the next version, really). 2000-2005: Debian. Most of the people I respected at the time ran Debian, so I installed potato and found it quite enjoyable. 2005-present: Ubuntu (on workstations). I had bought a laptop that really needed Xorg to work properly, and if you searched for Xorg packages for Debian in 2005, you were on the road to installing Ubuntu. Sometimes they kind of screw up release engineering, but on the whole it works, and hasn’t given me any problems that have made me want to switch. I still run Debian on servers, though.
Regarding desktop environment / WM, I started out as a Blackbox/WindowMaker kind of guy, but ended up installing KDE back in the days when Mozilla required some fancy WM support to embed Netscape plugins correctly, and kwin was one of the few that could deliver. I figured out how to configure it up in a wmaker-ish fashion and never looked back. They have made me angry; KDE 4 and KDE 5 both added thousands of bugs while removing features, and didn’t really become stable until years after their respective “.0” releases. But all’s well today, and maybe they learned something.
I had a somewhat similar road: Knoppix -> Ubuntu -> Arch -> Fedora -> Arch (OS X used throughout mainly for work).
My main complaint with Arch Linux is the amount of configuration that must be done upfront. I actually enjoy using GNOME (themed appropriately with Arc), and so have used Fedora as well, but have found that Fedora upgrades are much more painful than just running Arch Linux. I’m an Arch Linux tester and am happy creating my own PKGBUILDs.
I’m interested in NixOS, additionally I’d like to try out Alpine Linux as a desktop OS.
Many years ago, I ended up with Gentoo after trying Debian, Red Hat and Slackware. The main thing I like about Gentoo that almost no other distro family offers is that I can opt in or out of features of a given package. I like the thought of not being saddled with parts of software that I won’t use. For the most part, with the mainstream distros, you install all the bloat with a package, or you don’t install the package at all; nothing in between.
Even though Gentoo’s popularity has really waned in the last several years, I still like it and use it, as it gets the job done for me.
2019 - Present: Pop!_OS. Terrible name, awesome experience. Was able to run it on a ThinkPad P52s with wifi, bluetooth and nvidia drivers right off the bat.
2017-2019: Lubuntu - Clunky, and stuff didn’t work right out of the box. Lots of little configuration details that got annoying. Was running this on a 2012 MacBook Pro, so perhaps that had something to do with it.
I’ve mostly used Debian since I became a full-time Linux user back in 2008 but recently switched to Fedora to get SELinux and packages closer to upstream. So far I’ve been very happy.
I’ve also tried Arch Linux, CentOS, Elementary, Slackware, and Ubuntu at various points, but none of those ever seemed like an overall improvement to me.
Pop!_OS. Damn, that’s hard to write. I love it. It’s Ubuntu with defaults that make sense to me as a developer.
I want a system that just works out of the box. I’ve used Manjaro and Arch, but got tired of all the time I had to put into just keeping my machine running.
I use manjaro because it can come pre-installed with a nice i3 setup, and because the AUR usually has what I need, close at hand.
My first 20 years of Linux history.
For the last ~15 years I’ve used Ubuntu; for the last ~7 years, the LTS version. I’m not a very demanding user. The software I need in more recent versions (e.g. Syncthing), I install through other means (deb download, snap).
Ubuntu:
- ease of use
- stability
- good support
- big community
- many commercial apps work better on Ubuntu
- gets better and better each version
- been using it since 2008 (8.04)
- snap package manager
There is only one thing out there that bothers me, and that is systemd, which almost all major Linux distros are based on nowadays.
Embraced Ubuntu in 2009 (after Mandrake, Slackware, RedHat) and then only upgraded on every LTS.
Over the years I’ve become increasingly worried about keeping my home and office systems up and running, and Ubuntu LTS has never failed me with its conservative approach (20.04 does not yet default to Wayland or Snap).
And yes, the latest time Ubuntu made me scratch my head was the infamous “OAFIID” fiasco ten years ago, when apps would only start if they were in the mood. But it was due to Gnome still being in its infancy.
??? I used GNOME in the ’90s. GNOME 2 had been around for a long time 10 years ago, and 2010 was pretty much peak GNOME 2. I don’t think Ubuntu had switched to Unity yet outside the netbook edition?
The infamous OAFIID bug (Object Activation Framework; it was a thing before D-Bus, and was deprecated by GNOME soon after) is still present in Ubuntu 10.04 LTS. Basically you had to ssh in and reboot 2-3 times before getting a complete and usable graphical desktop, at least on systems with less than 512 MB of RAM.
My first distro was Red Hat (7.1, I think; it came in the cover of a book about Linux).
I later experimented with a lot of distros. I remember I particularly liked Mandrake (later Mandriva) and preferred KDE, before I settled with GNOME 2 on Ubuntu from 2006 until they swapped to Unity.
After that there were a couple of years where I felt there were no really good options, until I found KDE Neon, which I have used for the last 4 years. I guess one of the installations (the one I work on now) has been with me since then, which is quite remarkable.
I think what distro one uses is important. For me it is incredibly important that it is fast and doesn’t get in the way. Otherwise I could have used one of the two mainstream options, but the micro-lagging on Windows drives me crazy, as does Cmd-Tab (and, back when I used it, the fact that Fn and Ctrl were swapped) on Mac.
Arch Linux since 2010, if I remember correctly; prior to that I used a mixture of Ubuntu and macOS. I quite like Arch Linux, as it gives me a nice balance between control and not having to configure every little thing (at least these days). The only nuisance is sometimes having to take some extra manual steps when upgrading packages, but fortunately that doesn’t happen often.
work: ubuntu, ever since debian was really stale in 2004-2005, then gentoo, then ubuntu. private: mostly void linux on pi, notebook, desktop. one old server with guix - definitely worth looking at.
Arch for my personal machine.
I like that it’s minimal and doesn’t push a default desktop, tools etc on me: but it has a good package base and decent defaults too. For me it’s a good balance between configurability and usability. I also really like the rolling release model.
I tried a few other things a while back, mostly to see if I could dodge systemd, but ended up back on Arch.
I use Xubuntu. My use of Linux started in 2016 when I started my current job and decided to use Linux on a Lenovo X1 Carbon. I picked Debian with XFCE as the desktop env which I really liked compared to GNOME because it used less resources and looked decent. Then I upgraded to a newer X1 Carbon and installed Xubuntu because I didn’t want to deal with Debian’s janky installer. I also use Xubuntu on my home desktop/gaming machine. Xubuntu has worked great for me and I’m always amazed at how easily it installs. Though I wish disk encryption were easier to set up from the get-go because it requires formatting the entire disk.
Ubuntu on my local, Debian on my servers for lower memory use. Someone has to think about distros but it’s not going to be me.
I’m super happy with Ubuntu right now. I thought it would be more painful coming from Arch, but actually I found a quite good out of box experience.
And been contributing to the distribution since 2016’ish. Quite happy with it over all.
Arch since 2005. I was using Gentoo earlier, and switched when drobbins went to work for Microsoft.
Before that, I had used Red Hat, and briefly Mandrake and Slackware.
I also spent almost a year using Minix3 as my main OS (around 2007), because I wanted to learn more about it. I stopped because it wasn’t very practical.
Why Arch? Because it’s mainstream enough to support all my hardware out of the box and to have almost all the software I need packaged, but at the same time it has modern software versions, mostly vanilla. Also, it’s very easy to make your own packages if needed. I never make install into the system; I always create a package instead.
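For anyone unfamiliar with Arch packaging: instead of make install, you describe the build in a small PKGBUILD and let makepkg produce a package that pacman installs and tracks. A minimal sketch; the project name, URL and source tarball below are placeholders, not a real package:

```shell
# PKGBUILD -- minimal sketch; pkgname, url and source are hypothetical.
pkgname=myhello
pkgver=1.0
pkgrel=1
pkgdesc="Example of packaging instead of 'make install'"
arch=('x86_64')
url="https://example.com/myhello"
license=('MIT')
source=("$url/$pkgname-$pkgver.tar.gz")
sha256sums=('SKIP')   # replace with the real checksum

build() {
    cd "$pkgname-$pkgver"
    make
}

package() {
    # DESTDIR install puts files under $pkgdir instead of /,
    # so pacman can track (and cleanly remove) every file.
    cd "$pkgname-$pkgver"
    make DESTDIR="$pkgdir" install
}
```

Then makepkg -si builds it and installs it through pacman, so the whole thing can later be removed cleanly with pacman -R myhello.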
Fedora on top of Qubes. For over three years now. Hardware can be a hassle, but the slightly upgraded Thinkpad relic still does the trick.
Arch for my personal computer, Debian for headless stuff, or CentOS/RHEL if it’s for work. NixOS is interesting but haven’t tried it yet.
To get here I’ve gone through Ubuntu, Lubuntu, Xubuntu, DSL, Puppy Linux, Crunchbang, Zenwalk, Mint, Slax…
I’m back to Ubuntu (on my main computer) for a few days after I couldn’t get a Bluetooth keyboard to work with Fedora Silverblue. Ubuntu (20.04) just works.
I’m currently testing FreeBSD on my laptop, which has been set up with Fedora since maybe 22 or 23, and my desktop is on Ubuntu LTS, still pending a change. Change to what depends on the outcome of the said test.
But I’ve dabbled with Slackware until Fedora Core 4, then switched to Ubuntu at 8.08, and it’s been mostly the two. I’ve had a year-long Mint stint, and maybe a year of CentOS in between. Didn’t like Arch the one time I tried it; I’d had enough of dabbling with my system that I wanted something hands-off (hence Ubuntu).
Why? Well, right now it’s mostly that I don’t care, as long as I have a usable shell, browser, IDE and software dev tools. Perhaps it’s time to finally switch to Mac and just live with it. Although, from having dealt with my wife’s 4 devices in the last fifteen or so years, I think I like Linux better. The reason? I’m not afraid to break it by installing everything all the time.
At work I use (and sometimes genuinely suffer) corporate Windows, luckily with admin privileges but still centrally managed.
Xubuntu since 2013. Ease of installation, configuration, etc.
Ubuntu, because they handed out CDs like candy in 2004, then inertia.
On the Linux side, my main is currently Fedora and likely to remain that way. Before that, it was Slackware, but that’s now my secondary Linux distro to tinker with even though it still holds a special place in my heart. Nevertheless, I’m at the point where I need a distro that “just works” and is well supported, so Fedora it is.
I have crossed over to the BSD side as well. My main there is OpenBSD with FreeBSD being second.
OpenBSD, or any BSD. Then Void or Alpine, or whatever is carrying on the Slackware spirit. If it has systemd, might as well use Windows, and then you can play games. I am correct.
I started with Red Hat 5.1, followed it through to Fedora, then switched to Slackware and am very happy with it. Slackware doesn’t modify packages more than strictly necessary and creates packages with minimal dependencies, where Red Hat/Fedora seemed to enable every possible binding between every package, resulting in over-functional builds that have huge dependencies. I used to spend a lot of time recompiling packages just to prune out optional build-time dependencies. Since Slackware is minimal, it’s just providing an OS that I can put my own things on top of, which works for me. It is also one of the few remaining Linux distros that doesn’t use systemd - actually it never really used SysV init anyway, it’s more of a BSD-style init system. Perhaps somewhat philosophically, it’s a Linux distribution that is really trying to create a UNIX-like system, not a Windows replacement.
After I became disillusioned with Debian, about 3-5 years ago, I tried a few different distros, large and small, and settled on Fedora LXDE. Back then, I was still transitioning from Mac, which means I was using Mac OS for things like watching videos and playing music, while all my development happened in a Debian, and then Fedora, VM in VirtualBox. This was a great system, because my dev environment was segregated completely from my desktop. It also meant that I could take regular snapshots of my dev “machine”, so if I screwed something up, I could go back to 10 days ago, or whenever I happened to make a snapshot, git pull from my GitHub repo, and continue working. I could have set up some kind of automated snapshotting, but I just never bothered. The painful part of this was that I had to shut down to take a snapshot (to avoid making a copy of RAM unnecessarily), and also that when I had to delete an older snapshot to save space, it would sometimes take 5-15 minutes. No biggie.
Recently, I upgraded to a bottom-of-the-line 5-year-old ThinkPad, and installed Fedora LXDE natively, after trying out Windows 10 for a week or two. I’ve read that it might be going unmaintained, which is sad for me, but I’ll ride this horse until it falls off the cliff.
I picked it because it seemed to just work. There’s a lack of polish in some areas, but there’s also a lack of extra shit that I don’t want, like animations, 3D effects, etc. I want my UI to look and work like Windows 95, which it misses the mark on, but it’s the closest I’ve found. I want the OS to not give me problems, and I want to be able to install most software I need through the package system, which this setup also satisfies. It would be nice to have hardware volume buttons and to be able to adjust brightness, but I don’t care about it enough to look for the solution. Fedora did not support the built-in wireless networking adapter out of the box, and I didn’t bother setting it up. I just used Ethernet whenever it was available, which was at most people’s networked homes, a few libraries, etc. In between, I saved up my commits and used my other device for Internet access. Eventually, when I volunteered at the FSF, I was gifted a ThinkPenguin.com USB adapter, which worked as soon as I plugged it in. I am very happy with this setup, because I can unplug my wireless network just like I can unplug Ethernet. It picks up about 90% of the networks my mobile device does; some it just won’t show.