1. 64
  1.  

    1. 25

      Honestly I sympathise a lot with hating the Debian packaging of your tool - this author is not the first and probably won’t be the last, and the way they package Rust is genuinely awful. Flaming is definitely counterproductive, but I wouldn’t want to support any of my code on Debian either, and would consider any bugs anyone runs into on Debian to be their own fault for doing that.

      1. 24

        the way they [Debian] package Rust is genuinely awful

        The Debian project’s goal is not to make Rust folks happy, rather it is to make Debian users happy. Perhaps the way they packaged Rust was the best that they could do under the constraints (which are enormous, given the existing infrastructure, user base, and cultural expectations)? I would personally show some humility when criticizing a project like Debian which stood the test of time like very few other open source projects.

        On a more constructive note, can anyone summarize the difference between Debian and Fedora when it comes to packaging Rust? I don’t hear any complaints about Rust in Fedora, so they must have gotten it right?

        1. 33

          Debian patches all Rust code to use common versions of libraries in order to keep the libraries in separate packages. The problem is that this usually means building against library versions that are older, buggier, and never tested by the original developer, and the developer is left puzzled when they start receiving bug reports that are impossible to reproduce against the original code, where such older libraries could never have been used.
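
          For illustration, this is roughly the kind of change such a distro patch makes to a crate’s Cargo.toml; the crate name and version numbers here are hypothetical:

          ```toml
          [dependencies]
          # Upstream, as tested by the developer:
          # some-dep = "1.4"   # i.e. ^1.4: any 1.x >= 1.4
          # After the distro patch, so the crate builds against the single
          # (often older) version of some-dep the distro ships:
          some-dep = "1.2"
          ```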

            1. 7

              Yes, that was probably a mistake. They should have rejected any Rust application that has unstable (buggy, fast-churning, etc.) dependencies as itself being too immature for inclusion in Debian. All of this is IMHO; I don’t have the complete picture.

              Of course, if they did that, they would have received a lot of flak for not including the latest hot stuff, which is what bcachefs was until recently.

              1. 4

                Yes, that was probably a mistake. They should have rejected any Rust application that has unstable

                Just because libraries receive bug fixes doesn’t mean that they are “unstable”. Though some libraries are permanently unstable, like bindgen. If you’re only allowing one version of bindgen, you’re never, ever going to be able to ship Rust software that actually works correctly.

                Like you say, the goal is to make Debian users happy, so I wonder, as more and more software gets written in Rust, will Debian users be happy to either have to live with Rust software that’s buggy and doesn’t work, or live without a larger and larger share of software?

            2. 17

              Debian’s packaging policies are often annoying. I gave up helping Debian users who insisted on using the packaged versions of GNUstep. Debian required everything to be built with the system compiler. GCC supported Objective-C (modulo occasionally deciding that 100% broken Objective-C codegen was not a release blocker), but it was an ancient dialect of the language. Supporting some of the new features required ABI changes and so, if you compiled GNUstep (implementation of the Foundation and AppKit core standard libraries for Objective-C) with GCC, a load of stuff would not work well even if you compiled things that used them with clang. The way that they would fail was known and you could, if you were very careful, work around them. But Debian would not let the GNUstep package maintainers simply compile with clang and have things work as users expected because GNUstep could build with GCC (it just wasn’t a recommended configuration and came with a bunch of warnings).

              1. 4

                Debian required everything to be built with the system compiler.

                I think we both would agree this policy is there for a good reason. For example, if Debian allowed building with either GCC or Clang at the maintainer’s discretion, sooner or later someone would want to build with libc++ instead of libstdc++. And now we have two sets of libraries that cannot be mixed in the same application. Funnily enough, I am trying to figure out how to deal with the exact same problem, but in Homebrew.

                So the two plausible solutions to this problem seem to be either to stick to this policy or to start handing out exceptions on a case-by-case basis after carefully analyzing each case for potential fallout. I would venture a guess that the vast majority of Debian users don’t care about GNUstep. So Debian deciding to stick to this policy looks like a pretty sensible choice to me. It’s a tradeoff. As with all tradeoffs, someone will think the wrong one was made.

                1. 15

                  I think we both would agree this policy is there for a good reason. For example, if Debian allowed building with either GCC or Clang at maintainer’s choosing

                  I think it’s a fine policy for C and C++. It’s not a good policy for the other languages that GCC kind-of supports. There was no mechanism to define the default compiler for other languages.

              2. 12

                It’s ok to struggle with solutions when you have constraints. That’s completely understandable. The issue with Debian is that those constraints are sometimes both self-inflicted and don’t actually make anyone’s life easier when taken to the extreme. They really could allow some exceptions.

                1. 4

                  It’s anecdotal, but this Debian user of more than two decades can tell you with certainty that “those constraints” do make his life easier.

                  I don’t want to have 50 versions of every Rust dependency installed on my machine nor do I want 50 copies statically linked into Rust applications that I may want to install. I am happy to leave 1G+ incremental updates to Windows and Mac OS users to enjoy.

                  1. 9

                    Give this a read:

                    https://wiki.alopex.li/LetsBeRealAboutDependencies#gotta-go-deeper

                    The TL;DR is that you get a lot of that with C and C++ too… it’s just that, without access to proper dependency management, it comes in the form of meant-to-be-vendored header-only libraries that the Debian maintainers can’t split out, and bespoke re-implementations of things that can’t be deduplicated.

                    See also https://blogs.gentoo.org/mgorny/2012/08/20/the-impact-of-cxx-templates-on-library-abi/

                    TL;DR: C++ libraries that use templates behave the same way Rust does… if you build something that’s all-templates as a dynamic library, you get an empty .so file.

                    Dynamic linking for generic/templated code without the kind of trade-offs that Swift incurs to achieve it is an unsolved problem.
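
                    The same thing is easy to demonstrate on the Rust side. A sketch of a hypothetical crate whose public API is entirely generic, built with crate-type = ["dylib"]: the resulting .so carries little beyond crate metadata, because every downstream crate monomorphizes its own copies of the code.

                    ```rust
                    // lib.rs of a hypothetical all-generic crate compiled as a dylib.
                    // Nothing here is instantiated with a concrete type, so essentially no
                    // machine code for `max_of` ends up in the shared object; each caller
                    // generates its own copy for every concrete T it uses.
                    pub fn max_of<T: PartialOrd>(a: T, b: T) -> T {
                        if a > b { a } else { b }
                    }
                    ```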

                    1. 6

                      I don’t want to have 50 versions of every Rust dependency installed on my machine nor do I want 50 copies statically linked into Rust applications that I may want to install. I am happy to leave 1G+ incremental updates to Windows and Mac OS users to enjoy.

                      Why?

                      1. 11

                        In particular, either way, Rust dependencies are statically linked and Debian packaging does not support incremental updates on the same package, much less across packages. So you’re paying almost the exact same bandwidth and storage cost regardless of the number of Rust dependency versions in play.

                2. 19

                  That’s where I’m at with Debian as well. I’ve been using it for a long time but I’m just fed up with these stories where Debian wants to do something weird and completely contrary to the developer’s intention (the recent KeePassXC thing for example). The attitude just rubs me the wrong way and feels very old fashioned. All power to those who like to do things that way, but I’m looking elsewhere these days.

                  1. 9

                    Agreed. The Debian approach was brilliant in the 90s, and for a long time after, but the world has just changed too much. There’s just too much damned software that’s changing too fast for the traditional approach, I think. I’m slowly moving over to NixOS, which definitely has its own problems and can be very wasteful of disk space, but I’m already finding it less aggravating in a lot of ways.

                    1. 2

                      Me too on all of that, but I imagine Debian is a lot better than NixOS for systems without lots of spare storage, which is still most computers if you count the ones in cars and whatever.

                      Having said that, I’m 100% sure that Nix or something like it is going to take over literally everything (modulo energy crisis etc.)

                    2. 3

                      where Debian wants to do something weird and completely contrary to the developer’s intention

                      Keep in mind the scale of what Debian has to do… they have to manage the intentions of tens of thousands of diverse developers, and they do nearly all of it on volunteer time.

                      Although I have Things To Say about some of the tools they use, I am overall quite happy that Debian is deliberately conservative with their policies in order to deliver the most stable OS they can.

                      1. 3

                        Debian, and similar traditional 90s-style distros, chose a model that doesn’t scale well and requires massive labor, O(all available software), to produce the next version of the system. Their model is also hostile to backwards compatibility (using software built last year on this year’s system) and to using multiple versions of the same software. These systems fit together in such a brittle manner that changing a subsystem or a few decisions requires standing up a whole separate distribution, which has led to the proliferation of slightly different but quite distinct Linux systems. The ecosystem evolved in a way that makes it so challenging to distribute software that the easiest thing to do is to package software with an entire distro and ship that around. It’s ironic that Debian’s fight against static linking and vendored dependencies means most developers targeting Linux prefer to statically link the entire operating system into their software.

                        To me, the amount of volunteer hours spent on projects like Debian is like trying to feed thirsty people by having each volunteer walk to the reservoir, scoop up water in a cup, and then walk that water to the thirsty person somewhere in town. Commendable effort, but the world would be much better if we built an aqueduct and plumbing with that volunteer time instead, completely eliminating the need for toil in perpetuity.

                        1. 3

                          From your last paragraph, it sounds like you are concerned with merely shipping the OS in the most efficient way possible… Debian, etc. are concerned with releasing a stable OS that works out of the box and doesn’t surprise their many millions of users. Yes, maintaining a versioned distro like Debian is a lot of work. But the results are worth it, otherwise thousands of people wouldn’t volunteer their time toward making it happen.

                          It’s not clear to me what you are proposing as an alternative.

                          If you are advocating for a rolling-release distro like Arch or Gentoo or NixOS, those have their issues too. Namely the sheer amount of constant package churn and never being quite sure that the exact combination of package versions that you have just installed are known to be compatible with each other. On Debian, I only have to worry about my workflow or applications breaking every two years. On a rolling-release distro, I have to worry about it every time I run the command to update. Often my worries turned out to be justified as I spent a few hours figuring out how to fix my system. I was a Gentoo/Arch user for a long time, so I am VERY familiar with this. Maybe this isn’t everyone’s experience, but it certainly was mine.

                          1. 2

                            There are versioned distros that update at a higher cadence, like Alpine.

                            Honestly, if you ever have to worry about your workflow breaking, something is wrong, regardless of whether it’s every week or once every two years.

                            Namely the sheer amount of constant package churn and never being quite sure that the exact combination of package versions that you have just installed are known to be compatible with each other.

                            Maybe if we applied the Rust model to all software, that wouldn’t be something to worry about. (And I mean the actual Rust model, not the way Debian does Rust)
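
                            To make “the actual Rust model” concrete: each application records the exact dependency versions it was built and tested with in its lockfile, so the combination that ships is precisely the one upstream verified. A hypothetical Cargo.lock excerpt (name, version, and checksum are made up placeholders):

                            ```toml
                            [[package]]
                            name = "some-dep"
                            version = "1.4.2"
                            source = "registry+https://github.com/rust-lang/crates.io-index"
                            checksum = "<sha256 of the exact crate the developers tested against>"
                            ```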

                    3. 14

                      Library developers and distribution packagers have different perspectives on user needs. When they disagree, developers are quick to blame the packagers (“my stuff works fine on all other distributions, don’t be annoying with your own rules!”) and packagers are quick to blame the developers (“all other packages work fine with these rules, don’t be annoying with your unruly development practices!”). Different users have different needs, and they sometimes align more with one side or the other.

                      My personal rule of thumb:

                      1. For development environments (I want to hack on stuff), use the language’s package managers and forget about the distribution package manager and its rules.
                      2. When I want to install an application as an end-user, prefer the distribution package manager.

                      Sometimes it is useful to make exceptions to preference (2) (e.g. maybe you want to manage your web browser yourself and let it auto-update; maybe jj is not packaged; etc.), but each exception comes with a convenience cost unless you are very disciplined.

                      This does not avoid all troubles, because packaging an end-user application (2) still requires packaging its libraries, which are written by people who prefer perspective (1), and this generates the sort of complaints we can read around the web. But mostly it’s fine, and getting my distribution to manage and update my applications and their dependencies brings a large convenience bonus. Snap, Flatpak, etc. are trying to replace package managers in a way that is closer to how upstream developers test and release their software. This is clearly convenient for proprietary applications, but I believe the jury is still out for other applications.

                      1. 6

                        Same, though I don’t hate Debian. It’s mostly the users of Debian Stable who report bugs to upstream that frustrate me. Like, no, your version is 1-2 years old, we do not support it anymore.

                        I wish Debian users reported the bugs to Debian, not upstream. Debian devs can make better calls whether to report bugs to upstream or patch them out themselves.

                        With LTS, the bugs are LTS too.

                        1. 8

                          I wish Debian users reported the bugs to Debian, not upstream.

                          FWIW, the Debian project always instructs its users to report bugs in software that it packages to Debian, never to upstream. I, for example, always do so.

                          1. 5

                            Maybe they need to communicate that better. (And they also need to communicate that if you like playing with experimental filesystems, Debian is probably not the right distro for you!)

                            I do appreciate that they’re trying to play nice with the rest of the ecosystem, though. Knowing that they’re willing to take on bug reports reframed things for me—Debian being weird and contrary luddites, versus Debian having a specific LTS goal that (quixotic or not) they’re making a good faith effort to pursue. It’s not my goal, personally, but I can root for them from the sidelines.

                            1. 2

                              Yeah, I know, sadly a lot of the users don’t. :/

                            2. 4

                              My plan is to put it in big bold letters in my bug-reporting template that you must reproduce bugs using the officially supported builds before reporting them, including a checkbox confirming that you have, and then to add some code to my --version display that appends something like -system to the end of the version string if the binary is installed under /bin, /usr/bin, /sbin, or /usr/sbin. (/usr/local/bin and /usr/local/sbin are fine.)

                              If a -system version turns up, then I’ll close the bug with a “Reopen with proof that it’s not a distro build” message and, if I catch a distro patching that out, then I’ll look into using either trademark law or a license change to force an Iceweasel situation.
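
                              The detection itself is only a few lines; a minimal sketch in Rust, with a hypothetical tool name, assuming the install path is a good-enough signal for “distro build”:

                              ```rust
                              use std::env;
                              use std::path::Path;

                              const VERSION: &str = env!("CARGO_PKG_VERSION");

                              /// Append "-system" when the binary lives in a directory owned by the
                              /// distro's package manager; /usr/local is deliberately left alone.
                              fn version_string() -> String {
                                  let distro_dirs = ["/bin", "/usr/bin", "/sbin", "/usr/sbin"];
                                  let is_distro_build = env::current_exe()
                                      .ok()
                                      .and_then(|exe| exe.parent().map(Path::to_path_buf))
                                      .map(|dir| distro_dirs.iter().any(|d| dir.as_path() == Path::new(d)))
                                      .unwrap_or(false);

                                  if is_distro_build {
                                      format!("{VERSION}-system")
                                  } else {
                                      VERSION.to_string()
                                  }
                              }

                              fn main() {
                                  // e.g. a distro build of the hypothetical "mytool" would print "mytool 1.2.3-system"
                                  println!("mytool {}", version_string());
                              }
                              ```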

                            3. 1

                              Can you share some links that provide context on how Debian packages Rust and what issues that causes specifically for Kent? I haven’t encountered this drama before.

                              1. 3

                                Here’s a critique on Debian’s approach from a long-time Debian developer: https://diziet.dreamwidth.org/10559.html

                                  1. 10

                                    Oh wow, this is some wild editorialization from Phoronix:

                                    It was simple at first when it was simple C code but since the Bcachefs tools transitioned to Rust, it’s become an unmaintainable mess for stable-minded distribution vendors

                                    Anyway, thanks for the context.

                                    It sounds weird to insist on having system packages for every dependency in a language where everything’s statically linked, doesn’t it? Does Debian hack Rust to do dynamic linking against system libraries?

                                    1. 4

                                      It sounds weird to insist on having system packages for every dependency in a language where everything’s statically linked, doesn’t it?

                                      It’s very weird and essentially puts a ton more work on package maintainers, since they either need to ship multiple versions of one dependency in separate packages or update packages to use a newer dependency.

                                      Does Debian hack Rust to do dynamic linking against system libraries?

                                      Debian (and Fedora) package Rust libraries by moving the code into a system folder, and most of the Debian Rust libs aren’t even marked for all architectures, so you get the same tarball for every supported architecture.

                                      1. 3

                                        Does Debian hack Rust to do dynamic linking against system libraries?

                                        I’m jaded on Debian, so I’ll put my troll hat on and say: no, of course not. The dependencies situation is nothing but a power play / bullying on the part of the developer team. They can write C-with-classes and mailing-list messages, not Rust. See also the Objective-C situation in the neighbouring comments.

                                1. 21

                                  I’ve been following the bcachefs saga with great interest, but sadly what’s shared here matches my observations as a spectator.

                                  Meanwhile, Btrfs has seen wider adoption and increased performance and stability. With it being the default on Fedora, I’ve used it problem-free with transparent compression on multiple machines for years now (though no fancy multi-disk configurations or the like). Maybe the day will yet come for bcachefs to wow us all, but it seems some of its biggest hurdles are not technical in nature.

                                  1. 9

                                    BTRFS kind of sucks if you don’t have identical disks though.

                                    • Reads pick copies based on the process ID.
                                    • Writes always go to the disk with the most free space.

                                    bcachefs would track latencies and so automatically prefer reading from an SSD over a hard drive, or similar. It also balances disks over time rather than strictly writing only to the disk with the most free space.

                                    bcachefs also had amazing per-directory storage options. My photos and videos can be stored on HDD, my database on SSD and my mirror of public data doesn’t need to be stored redundantly. All of this could be configured ad-hoc without needing to divide my storage pool or configure separate mount points (which adds complexity and breaks hard/ref links).

                                    1. 8

                                      Yeah, I do really want it to succeed, and I’m at least half on Kent’s side when it comes to some of the ‘drama’, but I’m beginning to worry about the long-term future now. I don’t know what the current situation is when it comes to funding, getting other devs in etc. I would have thought there was enough potential for someone to want to jump in and adopt it as a project.

                                  2. 5

                                    People are still using xfs? Is there a reason for that?

                                    1. 23

                                      It’s fast and it has a clear codebase with great authors. For a classic filesystem, XFS is still my choice over ext4.

                                      Although Fedora comes with btrfs, which has worked great. RAID1 with compression.

                                      I wish all the luck for Kent. I’ll be coming back to bcachefs in a few years.

                                      1. 16

                                        XFS is quite popular in the server space if I’m not mistaken. At least at GitLab I believe it was the filesystem we ran for everything, though perhaps that has changed since I left.

                                        1. 3

                                          Around 10 years ago, I chose XFS because it had features I needed that ext4 did not at the time. I don’t recall exactly what those were (64-bit inodes maybe?), but it also performed better with lots of small files and doesn’t require an fsck at pre-determined intervals. And it’s just been rock-solid. It’s like the Debian of filesystems.

                                        2. 13

                                          It’s solid, stable and fast. It’s boring, but in a good way.

                                          Disk checks that delay the boot are rarer than with ext4. It did not have the RAID issues btrfs had. And it’s not as experimental as bcachefs.

                                          1. 9

                                            I’ve started using it for NixOS because it seems to cope better with its high demand for inodes than ext4 does. It also seems to be faster than btrfs, particularly in VMs for some reason.

                                            1. 7

                                              My anecdotal evidence as an XFS user since XFSv4 (probably the last 6 years? I’ve lost count, to be honest).

                                              XFS used to be a filesystem recommended only for servers and other systems that had some kind of backup power to ensure a clean shutdown. I used it on a desktop for a few months, until a forced reboot left my system mounted read-only and xfs_repair completely corrupted the filesystem. But even before that I had lost a few files thanks to forced shutdowns. Well, I went back to ext4 and stayed there for a few years.

                                              After trying btrfs and getting frustrated with performance (this was before NVMe was common, so I was using a SATA SSD), I decided to go back to XFS, and this time not only did it solve my performance issues, I haven’t had problems with corruption or missing files since. The filesystem is simply rock solid. So I still use it by default, unless I want some specific feature (like compression) that is not supported in XFS.

                                              1. 4

                                                It’s a popular filesystem in some low-latency storage situations, such as with Seastar-based software like ScyllaDB and Redpanda.

                                                We use it at work for our Clickhouse disks. If we could start over I’d have probably gone with ext4 instead as that’s what Clickhouse is mainly tested on. There was some historic instability with XFS but it seems to have gotten better (partly with updates, partly with tuning on our end to minimise situations where the disk is under high load). Like most things XFS is a good choice if your software is explicitly tested against it.

                                                1. 4

                                                  It’s (was?) the default filesystem in at least RHEL/CentOS.

                                                  1. 4

                                                    At the time I chose XFS several years ago, I wanted to be able to use things like reflinks without needing to use btrfs (which is pretty stable these days, but I wasn’t very confident in it back then). I can certainly say that it’s been quite resilient, even with me overflowing my thin pool multiple times (I am very good at disk accounting /s) and throwing a bunch of hard shutoffs at it.

                                                    1. 17

                                                      If you often have problems filling up your disk, you are going to have a very, VERY bad time on btrfs. Low disk space handling is NOT stable in btrfs. In fact it is almost non-existent.

                                                      After 15 years, btrfs can still get into situations where you’re getting screwed by its chunk handling and have to plug in a USB drive to give it enough free space to deallocate some stuff, even though df / reports somewhere between 1 and 10 gigabytes of free storage. This blog post was 9 years old when I consulted it, and I still needed all the tips in it.

                                                      I find it unconscionable that Fedora made btrfs the default with this behavior still not fixed. I will never, ever be putting a new system on btrfs again.

                                                      1. 7

                                                        I find it unconscionable that Fedora made btrfs the default with this behavior still not fixed. I will never, ever be putting a new system on btrfs again.

                                                        100% this.

                                                        I have had openSUSE self-destruct 5 or 6 times in a few years because snapper filled the disks with snapshots and Btrfs self-destructed.

                                                        For me the killer misfeatures are this:

                                                        • Self-destructs if the volume fills up
                                                        • Volumes are easy to fill by accident because df does not give accurate or valid numbers on Btrfs
                                                        • There is no working equivalent of fsck, and the existing Btrfs repair tool routinely destroys damaged volumes.

                                                        Any one of those alone would be a deal-breaker. Two would rule it straight out. All 3 means it’s off the table.

                                                        Note: I have not yet even mentioned the multiple problems with multi-disk Btrfs volumes.

                                                        I have raised these issues internally and externally at SUSE; they were dismissed out of hand, without discussion.

                                                        1. 4

                                                          Thank you! Count me in the camp of “btrfs is great except for when you really need it to be”, low disk space being one of those times (high write load being my personal burn moment).

                                                          I wanted bcachefs to work but this and related articles are keeping me away from it too.

                                                          I force my Fedora installs to ext4 (sometimes atop lvm) and move on with my life :shrug:

                                                          1. 5

                                                            This is why I bite the out-of-tree bullet and just use ZFS. People tell me I’m crazy for running ZFS instead of Btrfs on single-disk systems like my laptop, but like, no! I cannot consider Btrfs reliable in any scenario.

                                                            1. 4

                                                              I’ve been using ZFSBootMenu on my Fedora single disk laptops for a while now and find it hard to imagine a different setup.

                                                              1. 3

                                                                100% agree. I have found DKMS ZFS to be more stable than in-tree btrfs. Other than one nasty deadlock problem years ago it’s been rock solid. (Just some memory accounting weirdness…)

                                                            2. 3

                                                              Yeah, I still get bitten by that one once or twice a year. I find btrfs useful for container pools, etc, but I still don’t use it for stuff I can’t easily regenerate.

                                                          2. 2

                                                            Count me among the XFS users, albeit only on one machine at this point. I think I set up my current home server (running Fedora) around the same time Red Hat made XFS the default for RHEL, and I wanted to be E N T E R P R I S E. I’ll likely use Btrfs for my next build, as I have for all my laptops and random desktop machines in recent years. Transparent compression is very nice to have.

                                                            EDIT: I believe Fedora Server also defaults to XFS, or at least it did at some point.

                                                            1. 1

                                                              Last time I mkfs’d (going back a few years now), XFS had dynamically sized xattr support, while ext4 set a fixed size at creation time. This was important for me at the time for preserving macOS metadata.

                                                            2. 4

                                                              Huh. I’ve made filesystems with a billion empty files, to compare generators and filesystems, and XFS performed poorly with its default settings. ext4, ext2, and btrfs finished within a day or two; XFS was on track to take weeks.

                                                              1. 12

                                                                Not to be rude, but… so what? I’ve seen some very degenerate filesystems, but none with anywhere near a billion files.

                                                                1. 4

                                                                  Yet!

                                                                  Surely this is a forum that rewards testing software to cursed limits!

                                                                  1. 4

                                                                    Not sure if the data point is interesting here, but on my 13T external drive that’s formatted with modern NTFS, 1_296_232_240 inodes are allocated (not used). I could see it working, and it’s not as outlandish a test as it seems at first glance.

                                                                    1. 2

                                                                      I checked my notes: it definitely bogged down within the first four million files, which doesn’t feel massive, but maybe that’s relative. I think it bogged down much sooner than that. After hitting ctrl-c, it took 20+ min to get a responsive shell. Unmounting took a few minutes.

                                                                      Sorry if it came off as a glib “your favorite filesystem sucks”.

                                                                      1. 1

                                                                        Nah, it felt more just like “it’s unreasonable that this thing fails under unreasonable circumstances”. I hope I wasn’t too glib in response; thanks for checking your notes. 4 million files is a much more reasonable circumstance, and bogging down earlier than that is definitely not a good thing. It does surprise me ’cause XFS was supposedly designed for big chonky systems and in my experience has been quite bulletproof. If only I had the energy to reproduce it and do a deep dive to find out what’s going on…

                                                                  2. 4

                                                                    My hard drive was (allegedly) incompatible with early bcachefs & I feel like I have dodged a bullet. The little while I did get it working, it crashed hard & I lost my data. I tried to gather what dmesg logs I could & when I asked for further clarification on the errors & how to read them, I was kicked from the IRC room after being flamed for using the drive the laptop manufacturer gave me (no brand listed at purchase, & I emailed someone to learn it was different per region). There is a lot of hostility brewing in the community too; the most vocal supporters have instead been a crowd of anti-CoC folks, which made the larger community look not so great for having defenders of such behavior while the other side of the community points out it is clearly unacceptable & against the guidelines. It is a real shame. Nix made it pretty easy to try, but I think I am out too & back to ZFS, which has been good, but its out-of-kernel state makes juggling support while kernels are deprecated a pain.

                                                                    1. 2

                                                                      I’ve converted my last bcachefs filesystem to XFS, and I don’t intend to look at it again in the near future.

                                                                      XFS does not have checksums for data - not a safe filesystem for long-term storage of anything valuable.

                                                                      1. 4

                                                                        Most filesystems don’t, nor does most memory or a lot of the rest of computing. Network traffic and compression are about the only places it’s usually around.

                                                                        1. 1

                                                                          I use another layer (backups and scheduled checksum checks) to get those features.

                                                                          1. 4

                                                                            I just make ZFS do it all for me. It’s also replaced FAT32 for portable drives for me, since many OSes have ZFS implementations (even Windows and macOS).

                                                                      2. 1

                                                                        I’ve converted my last bcachefs filesystem to XFS

                                                                        Why not btrfs? Btrfs provides nearly the same set of features that bcachefs provides; btrfs is just a little slower than bcachefs, as far as I understand.

                                                                        I use btrfs-raid-1 on my system for all my data (I do backups, of course). I have used this setup for years. I have never lost any data.

                                                                        btrfs is certainly better than XFS, because btrfs has filesystem-native RAID-1 and XFS doesn’t.

                                                                        1. 4

                                                                          btrfs also has an amazing record of crashing, imploding, and whatever else it decides to do to break itself, so it’s totally understandable that someone would choose a very stable fs such as XFS.