Someone told me once that there’s no such thing as a “10x engineer”, unless one counts the effect an engineer might have on the rest of the team (e.g. if one person is able to multiply their team’s output by 1.1, then, for an appropriately sized team, that engineer might have had a 10x impact). I didn’t agree, and this submission shows an example of why: S.D. Adams reduced the problem to the 1/10th core that actually needs to be solved to meet the goal, and appears to have been 10x as productive as a result.
It doesn’t, though, solve the problem I have with data loss on OpenBSD: any OpenBSD machine can, on losing power, require human intervention to bring it back up (run fsck, hope it comes out OK).
Until I can bounce OpenBSD boxes like I can consumer routers or Linux boxes, I can’t install OpenBSD in all the places I want to. These weaknesses are a layer below muxfs’s area of concern.
The problem space was reduced, yes, but without touching the part of it I’m most interested in.
I have a bunch of QEMU boxes for testing/building things on different platforms, and very occasionally one doesn’t have a clean shutdown (while doing nothing), and OpenBSD is the only one that regularly gives me grief with “starting in single-user mode, run fsck manually”. I usually start these things without video connected, so it will just “hang” forever. Thus far I’ve never had this with NetBSD, FreeBSD, DragonFly, or anywhere else. I “solved” it by just taking a snapshot and reverting to that, but meh…
Most operating systems ‘solved’ this by introducing journalling filesystems. This guaranteed that the on-disk representation always had enough state to be able to either discard or complete in-flight transactions. FreeBSD provided an alternative called soft-updates, which enforced an ordering on the writes such that a failure would leave the disk in a well-defined state that fsck could recover and allowed fsck to run while the filesystem was mounted (an unclean shutdown would leak blocks, so fsck was needed to go and find unlinked ones and return them to the free pool). This was later combined with journalling, which eliminated the need for background fsck to scan all of the inodes. Copy-on-write filesystems such as ZFS are typically self-healing and so don’t ever enter a situation where the on-disk structure can be invalid except in the case of a drive failing.
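To make the journalling idea concrete, here’s a minimal sketch in C of the write-ahead rule (a made-up record layout, not how FreeBSD, ext4, or any real filesystem lays out its journal): the intent record must reach stable storage before the in-place write, so recovery always knows what was in flight.

    /* Write-ahead journalling in miniature: log the intent durably,
     * then apply the change in place, then mark it committed.  After
     * a crash, replay any logged-but-uncommitted entries. */
    #include <sys/types.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    struct journal_entry {
        long  txid;        /* transaction id */
        off_t offset;      /* where the data will land */
        char  payload[64]; /* the new contents */
        int   committed;   /* set only after the data write is durable */
    };

    static int journaled_write(int journal_fd, int data_fd,
                               struct journal_entry *e)
    {
        e->committed = 0;

        /* 1. Log the intent and force it to stable storage first. */
        if (write(journal_fd, e, sizeof *e) != sizeof *e) return -1;
        if (fsync(journal_fd) == -1) return -1;

        /* 2. Only now touch the real data; a crash here is recoverable
         *    because the journal says what was in flight. */
        if (pwrite(data_fd, e->payload, sizeof e->payload, e->offset) == -1)
            return -1;
        if (fsync(data_fd) == -1) return -1;

        /* 3. Mark the transaction complete so replay can skip it. */
        e->committed = 1;
        if (pwrite(journal_fd, e, sizeof *e,
                   lseek(journal_fd, 0, SEEK_CUR) - (off_t)sizeof *e) == -1)
            return -1;
        return fsync(journal_fd);
    }

    int main(void)
    {
        int jfd = open("journal.bin", O_RDWR | O_CREAT, 0644);
        int dfd = open("data.bin", O_RDWR | O_CREAT, 0644);
        struct journal_entry e = { .txid = 1, .offset = 0 };

        snprintf(e.payload, sizeof e.payload, "hello, durable world");
        if (jfd == -1 || dfd == -1 || journaled_write(jfd, dfd, &e) == -1)
            perror("journaled_write");
        close(jfd);
        close(dfd);
        return 0;
    }

Soft updates get similar crash consistency without the extra journal writes by ordering the metadata updates themselves, at the cost of the leaked-block scan described above.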
To my knowledge, OpenBSD never pulled in the soft-updates or journalling code from FreeBSD. NetBSD has an independent implementation of journalling for UFS (I vaguely remember that they added journalling before FreeBSD did, but I think the FreeBSD version was designed to compose with soft-updates, whereas NetBSD’s is independent).
OpenBSD does seem to support “soft dependencies” – which appear similar or identical to soft updates – via mount -o softdep, but they’re not enabled by default.
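For anyone wanting to try that: since it’s just a mount option, it can be made persistent in /etc/fstab. A hypothetical entry (device and mount point are illustrative, not from any real machine):

    # enable soft dependencies on an FFS filesystem
    /dev/sd0d /usr ffs rw,softdep,nodev 1 2

After editing, a reboot (or remounting with the new options via mount -u) should pick it up.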
I don’t remember FreeBSD being quite this bad; soft-updates were generally discouraged for the root fs and not enabled there by default, and I rarely hit “booting in single-user mode, run fsck manually” in many years of desktop usage. OpenBSD is much more fragile, whether due to the design of its FFS/UFS implementation, its fsck implementation, or its choices about when to drop to single-user mode.
That’s pretty cool. It being FUSE sounds like it might also work on other filesystems and operating systems. Or would there be anything preventing that?
Theoretically it could work anywhere FUSE is supported.
However, there is a limitation that muxfs introduces which in my opinion makes it less usable: it requires stable inode numbers, because it stores checksums keyed to those inode numbers. Thus you won’t be able to use it with any backing file-system that doesn’t provide stable inodes, which it seems includes NFS and other network-based file-systems, as well as other FUSE-based ones.
(Also, based on the HN discussion of the same article, it seems the author relies quite a bit on particularities of OpenBSD’s FUSE implementation – for example, that FUSE is single-threaded on OpenBSD – so a port might have to contend with a few issues like these.)
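To make the inode point concrete, here’s a sketch of the kind of bookkeeping involved (hypothetical, not muxfs’s actual on-disk format, and with a toy checksum over the path just to keep it short). Checksums are keyed by st_ino, so a backing filesystem that synthesizes a fresh inode number per mount or per lookup makes every lookup miss:

    /* Checksum bookkeeping keyed by inode number.  If the backing
     * filesystem (NFS, many FUSE filesystems) invents a new st_ino
     * per mount or per lookup, remember() and verify() stop agreeing
     * on which file is which. */
    #include <sys/types.h>
    #include <sys/stat.h>
    #include <stdio.h>

    #define MAX_FILES 128

    static struct { ino_t ino; unsigned long sum; } table[MAX_FILES];
    static int nrecords;

    /* toy stand-in for hashing the actual file contents */
    static unsigned long toy_sum(const char *path)
    {
        unsigned long h = 5381;
        for (const char *p = path; *p; p++)
            h = h * 33 + (unsigned char)*p;
        return h;
    }

    static void remember(const char *path, ino_t ino)
    {
        if (nrecords < MAX_FILES) {
            table[nrecords].ino = ino;
            table[nrecords].sum = toy_sum(path);
            nrecords++;
        }
    }

    static int verify(const char *path, ino_t ino)
    {
        for (int i = 0; i < nrecords; i++)
            if (table[i].ino == ino)       /* keyed by inode number */
                return table[i].sum == toy_sum(path);
        return -1; /* unknown inode: new file, or st_ino changed under us */
    }

    int main(int argc, char **argv)
    {
        struct stat st;

        if (argc < 2 || stat(argv[1], &st) == -1) {
            perror("stat");
            return 1;
        }
        remember(argv[1], st.st_ino);
        stat(argv[1], &st); /* on a stable filesystem, same st_ino again */
        printf("verify: %d\n", verify(argv[1], st.st_ino));
        return 0;
    }

On local FFS or ext4 the second stat returns the same st_ino and the lookup succeeds; over NFS or a stacked FUSE mount there’s no such guarantee, which is exactly the restriction above.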