1. 19
  1. 22

    “OpenBSD Gets”… in the sense that this tool runs on OpenBSD.

    It isn’t built into OpenBSD and there isn’t even a port of it yet.

    1. 4

      I’m probably misunderstanding this, but:

      A journal was not needed to address the write hole issue. muxfs writes to each directory in series, which guarantees that as long as there is more than one directory in the array there will always be at least one in a valid state. It is then simply a matter of running a muxfs sync to revert or update the interrupted directory to a valid state from one of the others.

      I don’t see how this differs from the RAID 1 write hole (taken from https://www.raid-recovery-guide.com/raid5-write-hole.aspx):

      the write hole effect can happen in a RAID1. Even if one disk is designated as “first” or “authoritative”, and the write operations are arranged so that data is always written to this disk first, ensuring that it contains the latest copy of data, two difficulties still remain:

      1. a hard disk can cache data itself. Caching may violate the arrangement done by the controller.
      2. if the disk that was designated as the first/authoritative fails, write holes may already be present on the second disk and it would be impossible to find them without the first disk’s data.

      If we can recover from the write hole with ‘muxfs sync’, couldn’t we do the same with RAID 1? (Find something on disk 2 which isn’t on disk 1.)

      Does muxfs make this more efficient than a full scan via some checkpointing?
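
      To make the quoted scheme concrete, here’s roughly how I picture the series write (a minimal sketch with made-up paths and names, not muxfs’s actual code):

      ```c
      /*
       * Hypothetical sketch of a series write across mirror directories.
       * Paths and names are assumptions, not muxfs's real layout.
       */
      #include <fcntl.h>
      #include <stdio.h>
      #include <unistd.h>

      #define NDIRS 2
      static const char *dirs[NDIRS] = { "/mirror0", "/mirror1" };

      static int
      write_replica(const char *dir, const char *name, const void *buf, size_t len)
      {
          char path[1024];
          int fd;

          snprintf(path, sizeof(path), "%s/%s", dir, name);
          if ((fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644)) == -1)
              return -1;
          if (write(fd, buf, len) != (ssize_t)len || fsync(fd) == -1) {
              close(fd);
              return -1;
          }
          /* A real implementation would also update the stored checksum here. */
          return close(fd);
      }

      int
      mirror_write(const char *name, const void *buf, size_t len)
      {
          /*
           * Strictly in series: if this loop is interrupted, directories
           * before i already hold the new data and directories after i
           * still hold the old; only dirs[i] can be torn, so at least
           * one replica is always in a valid state.
           */
          for (int i = 0; i < NDIRS; i++)
              if (write_replica(dirs[i], name, buf, len) == -1)
                  return -1;
          return 0;
      }
      ```

      Even so, this only bounds the damage to one directory per interrupted write; it doesn’t by itself say which surviving replica is the newest, which is what the write-hole comparison above is getting at.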

      1. 4

        I don’t think you’re misunderstanding anything. Either muxfs doesn’t handle this correctly, or the explanation is incomplete. It probably relies on the underlying filesystems providing a greater degree of atomicity than a raw block device, as well as its own checksums, but I don’t think the author adequately explains that mechanism here.

        In particular:

        caching may violate the arrangement done by the controller

        This only matters if the ordering is the full source of authority; muxfs also has checksums.

        it would be impossible to find [write holes] without the first disk data

        Unless you have checksums of all data and metadata.

        However, from the original post:

        muxfs writes to each directory in series

        as long as there is more than one directory in the array there will always be at least one in a valid state

        I chose to exclude timestamps from the checksummed metadata

        Unless I’m very much mistaken, this means muxfs may recover any valid state persisted on any of the underlying volumes, with no way to ensure the recovered write is actually the latest one. Either the timestamps could be wrong (not checksummed) or the volume ordering of two valid checksums can’t be used as an authority (caching). It would get you back to some valid state, possibly losing writes to do it.
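
        To make the ambiguity concrete, here’s a toy model (assumed record layout, not muxfs’s on-disk format): after an interrupted series write, each replica can be checked against its own checksum, both can pass, and nothing verified tells you which one is newer:

        ```c
        /* Toy model: checksums verify each replica, but cannot order them. */
        #include <stdint.h>
        #include <stdio.h>

        struct replica {
            char     data[16];
            uint32_t sum;    /* checksum over data only */
            int64_t  mtime;  /* NOT checksummed, so not trustworthy */
        };

        static uint32_t
        fnv1a(const void *buf, size_t len)
        {
            const uint8_t *p = buf;
            uint32_t h = 2166136261u;
            while (len--) { h ^= *p++; h *= 16777619u; }
            return h;
        }

        int
        main(void)
        {
            struct replica a = { "new contents", 0, 200 };
            struct replica b = { "old contents", 0, 100 };
            a.sum = fnv1a(a.data, sizeof(a.data));
            b.sum = fnv1a(b.data, sizeof(b.data));

            /* Both replicas verify against their own checksums... */
            printf("a valid: %d, b valid: %d\n",
                a.sum == fnv1a(a.data, sizeof(a.data)),
                b.sum == fnv1a(b.data, sizeof(b.data)));

            /*
             * ...but the mtimes sit outside the checksums and the write
             * ordering is defeated by disk caching, so a sync could just
             * as well "heal" a back to b's older contents.
             */
            return 0;
        }
        ```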

        For this reason, mtime should be checksummed. They say timestamps are too volatile, which I buy for atime. But mtime only changes when the underlying data changes anyway. And the creation time (birthtime) is constant.
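
        A sketch of what that fix could look like (again with a made-up layout, not muxfs’s format): fold mtime into the checksummed metadata, and a repair pass can prefer the newest replica that verifies:

        ```c
        /* Sketch of the proposed fix: checksum mtime along with the data. */
        #include <stddef.h>
        #include <stdint.h>

        struct replica {
            char     data[16];
            int64_t  mtime;
            uint32_t sum;    /* checksum over data AND mtime */
        };

        static uint32_t
        fnv1a_update(uint32_t h, const void *buf, size_t len)
        {
            const uint8_t *p = buf;
            while (len--) { h ^= *p++; h *= 16777619u; }
            return h;
        }

        static uint32_t
        replica_sum(const struct replica *r)
        {
            uint32_t h = 2166136261u;
            h = fnv1a_update(h, r->data, sizeof(r->data));
            h = fnv1a_update(h, &r->mtime, sizeof(r->mtime));
            return h;
        }

        /* Repair source: the newest replica whose checksum verifies. */
        const struct replica *
        pick_source(const struct replica *rs, int n)
        {
            const struct replica *best = NULL;
            for (int i = 0; i < n; i++)
                if (rs[i].sum == replica_sum(&rs[i]) &&
                    (best == NULL || rs[i].mtime > best->mtime))
                    best = &rs[i];
            return best;
        }
        ```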

      2. 4

        Is there a legit reason for this being posted again 2 months later with a different title?

        https://lobste.rs/s/xddtf5/introducing_muxfs_mirroring

        1. 2

          I believe submissions older than 30d can be submitted again.

        2. 2

          I’m not sure why you would want to run this on OpenBSD rather than an OS with a better storage layer (a proper LVM, ZFS, anything) that would give you much better performance, operations, and reliability guarantees.

          1. 4

            “Technology exists” is a bad reason to avoid developing more technology.

            1. 1

              Variation in a server fleet is not entirely free.

              For me, the history of privsep and now pledge() and unveil() is worth a lot. I don’t encounter performance, operations or reliability issues with my OpenBSD systems.
