  2. 8

    Nice - I like seeing low-power hardware builds, particularly where the CPUs aren’t heavily underpowered. My only criticism is the use of XFS ahead of ZFS (yes, I know Eric Sandeen is an XFS developer - I guess he’d better be doing a bit of dogfooding!).

    I have a J1900 quad-core Celeron (10W) system and I’m very happy with it. Multi-core performance is not hugely off that of an E8400 3.0GHz Core 2 Duo - 1883 vs 2172 on PassMark’s CPU Mark, for example.

    1. 9

      My only criticism is the use of XFS ahead of ZFS (yes, I know Eric Sandeen is an XFS developer - I guess he’d better be doing a bit of dogfooding!).

      XFS has over two decades of field use under its belt, decent performance, journaling, and no patent- or API-suit-happy organization behind it. Those make a nice default for many of us. I still haven’t moved over to ZFS or BTRFS even though they’re mighty appealing. I will experiment with them in future servers. :) As far as collections of servers go, I prefer clustered filesystems, as they eliminate extra points of failure plus have performance benefits.

      “I have a J1900 quad-core Celeron (10W) system and I’m very happy with it.”

      Appreciate the tip. Didn’t realize the Celerons were doing so well these days.

      1. 7

        I still haven’t moved over to ZFS or BTRFS even though they’re mighty appealing

        As a ZFS advocate I’ll point out that ZFS has had about a decade of industrial usage and, AFAIK, is at this point more widely deployed than XFS. One point of pride of the ZFS team is that it has never, due to a bug in the code, lost customer data. BTRFS has had some bugs that will curdle your data. At this point, I use ZFS everywhere I can, even my laptop. No regrets.

        1. 1

          Good counter. Another one in its favor. :)

        2. 3

          I’ve been a fan of XFS for a long time - before ZFS, it was my go-to filesystem on Linux systems (and I still use it in places where ZFS is not appropriate). It’s still under heavy development (Red Hat employ a number of the key developers) and new features are added all the time, which is great, but it doesn’t (yet) have the key features that make ZFS so attractive.

          Unfortunately ZFS on Linux is still a bit of a pain at times - using it as a root filesystem can require a lot of hoop-jumping to get the kernel module built and installed, etc. That’s probably not going to get any easier unless you use Ubuntu, although the legal status of their ZFS usage is rather dubious.

          btrfs is not something I’d go near. Yes, it does work for a lot of people (and SUSE even uses it as the default filesystem, if I’m not mistaken), but just a few months back there was a serious data corruption issue.

          1. 1

            But Linux is not the only game in town.

            1. 1

              Indeed, and ZFS is far more mature on other operating systems (I think FreeBSD is probably the most accessible option for most people). But at least ZoL is a viable option now (and it’s available in a lot of distributions, albeit only in source form outside of Ubuntu).

              1. 1

                Personally I don’t trust source-only DKMS-style packages. The risk that your system won’t boot on upgrade is too high. I have been bitten by this multiple times.

                The binary packages provided by Ubuntu are better in this regard, but they mean I have to use their kernel. Sometimes I need a different kernel for various reasons. I’ve spent way too much time backporting upstream features to Ubuntu and Fedora/Red Hat kernels.

                Personally I just avoid ZFS on Linux, and I avoid Linux as much as it is practical. When I use Linux I usually boot it from iSCSI backed by ZFS.

                1. 1

                  Yes, I’m not a fan of DKMS either (although I appreciate its value). To prevent all sorts of nasty mismatches you really need to build on the system in question, which means having the kernel sources, a compiler, etc. Urgh. It’s possible to build module packages but then you need to make sure the kernel versions on the build and target systems match. A mismatch with something as important as a filesystem can be fatal.

          2. 1

            I’ve heard of data integrity issues with XFS.

            Also, these Celerons are different. Desktop-grade Atoms (RIP) get branded as Celeron and Pentium, along with their “big core” counterparts. Look at the model number to distinguish between them.

            1. 5

              I’ve heard of data integrity issues with XFS.

              I have a small slip of paper pinned to a board over my desk with this printed on it: https://pages.cs.wisc.edu/~zev/fs-theorem.pdf

              1. 2

                Haha. I think I get it but I don’t know the symbols. What’s the literal English way of saying this? And optionally a fun way?

                1. 4

                  Read aloud, something like: “for all F in filesystems, there exists a U in users such that U says ‘F ate my data’”. (In retrospect, I suppose perhaps a colon would be more appropriate than the pipe/vertical bar, but oh well…)
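
                  If anyone wants to typeset their own copy, my reconstruction from that description (going from memory, not copied out of the PDF) would be roughly this in LaTeX:

                      \forall F \in \mathrm{Filesystems}\;\; \exists U \in \mathrm{Users} \;\big|\; U \text{ says ``}F\text{ ate my data''}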

              2. 2

                There was one about zeros showing up in writes or something like that. They fixed that one. I’d be interested in a list of any severe ones left. It’s easier to fix a few bugs in a robust file system than to create a new one.

                1. 4

                  The issues I was thinking of were around power failures, and how XFS doesn’t have many protections against data loss in that case.

                  1. 1

                    Yep, it’s documented in the FAQ.

            2. 5

              I find that I rarely have the need for a higher-powered x86 device (the only use cases I can think of would be live transcoding of media content, e.g. 1080p mkv -> mp4 for mobile device playback, or FLAC -> ogg transcoding for remote streaming).

              I have a Raspberry Pi 3 connected to 2x 750GB platter drives via USB (yes, I understand the drives are heavily bottlenecked by the 480Mbps USB bus), which works for me as an IPFS node (to serve my personal files externally via an IPFS gateway), NAS and persistent torrent box all in one.

              According to the spec sheets, an RPi3 peaks at 4W draw (which would be very rare, since I’m not doing anything that would push the CPU normally).

              1. 2

                I’m using zfs-on-linux on a similar setup and am happy so far. Installation was hassle-free (/ is on ext4 though, which makes things a bit easier with Linux). It’s really nice that when you replace a (not already dead) disk, the old one isn’t removed until the new one is completely resilvered - I don’t know if other systems provide this.
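
                For anyone curious, the replacement flow is just something like the sketch below (pool and device names are made up, and I’m wrapping the zpool CLI from Python rather than pasting a shell session):

                    import subprocess

                    POOL = "tank"            # hypothetical pool name
                    OLD, NEW = "sdc", "sdd"  # hypothetical device names

                    # `zpool replace` starts resilvering onto the new disk while the old one
                    # stays attached; ZFS only detaches the old disk once the resilver finishes.
                    subprocess.run(["zpool", "replace", POOL, OLD, NEW], check=True)

                    # `zpool status` shows the resilver progress and, until it completes,
                    # lists both the old and the new disk under the affected vdev.
                    subprocess.run(["zpool", "status", POOL], check=True)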

                I’ve had nice experiences with XFS so far too. What made me switch to ZFS is that LVM and mdraid are more complicated to administer than what I want for a file dump at home.

                1. 2

                  mdraid and lvm do have a number of features that ZFS doesn’t (and probably never will), like very flexible resizing/growing options. But the data integrity features in ZFS are a huge win. I agree that it’s not for everybody - the restrictions on changing pool layout are a pain, as are the horrible interactions with the Linux VM subsystem (in the ZoL case), but I really do trust it with my data.

                  1. 1

                    i have never really missed changing layouts. the only thing i might want to do is shrinking a pool to have space for a new one, and that doesn’t work with xfs.

                    just that i don’t have to sweat bullets doing maintenance is a huge win for me. the zfs tools are so much better from a usability standpoint than the lvm/mdadm tools (at least imho).

                    1. 1

                      The one thing I miss is having, e.g., 5 disks of a certain size and being able to add another two or three to increase the array size. With mdraid/LVM it’s trivial, with ZFS not so much (well, not that adding disks is difficult with ZFS, just the vdev/pool balancing, etc. that you need to be aware of).
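
                      To make the contrast concrete, here’s a rough sketch of the two workflows (array, pool and device names are made up, and I’m wrapping the CLIs from Python rather than pasting a shell transcript):

                          import subprocess

                          def run(cmd):
                              # Thin wrapper so each step below reads as one line.
                              subprocess.run(cmd, check=True)

                          # mdraid: add the new disks, then reshape the existing array to use them.
                          # The array grows in place and data is restriped across all members.
                          run(["mdadm", "--add", "/dev/md0", "/dev/sdf", "/dev/sdg"])
                          run(["mdadm", "--grow", "/dev/md0", "--raid-devices=7"])

                          # ZFS (at least as things stand today): you can't widen an existing
                          # raidz vdev, so you add a whole new vdev to the pool instead, and
                          # existing data is not rebalanced onto it.
                          run(["zpool", "add", "tank", "raidz", "/dev/sdf", "/dev/sdg", "/dev/sdh"])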