What happens when apple decides they don’t like ZFS is a question I hadn’t considered. But indeed, owning your filesystem (and its future direction) is pretty important for an OS vendor. It is not so easily removed or replaced.
TIL: apparently, OS X must be the only UNIX that supports hard links to directories…
How does it even work? I recall having an interview question at one point about why hard links to directories are not possible in UNIX.
This answer on StackExchange has some useful info in regard to that question: What is the Unix command to create a hardlink to a directory in OS X?
Well in the most simplistic UNIX file system, directories are just files marked with the directory bit, the contents being lists of name->inode maps. So you definitely could create a hard link, no big deal. There are reasons not to, but why would it be impossible? ;)
Where would the .. entry point? If you say “it should point to the original parent”, well, what happens when you remove the hard-linked directory from that parent? Normally, unlinking something is just decrementing the reference count; it doesn’t involve modifying the content. Even if you do modify the content, if you’ve hard-linked the directory into multiple locations, which one becomes the new parent?
Also, how would you go about unlinking a directory? unlink() is supposed to fail with EPERM if you call it on a directory… perhaps it should work as long as the refcount is > 1, so whether it works depends on other things happening elsewhere in the filesystem? Alternatively, you might leave unlink() alone and modify rmdir()… but rmdir() normally requires its target to be empty, and if you remove the contents of a hard-linked directory before unlinking it, well, now you’ve deleted data from two locations instead of just one.

.. goes to the parent directory of the hard link. For unlink/rmdir, do both: make unlink() work on a directory if the refcount is > 1, and modify rmdir() to work too but keep its general contract (requires an empty dir). These aren’t terribly difficult choices.
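The EPERM behavior discussed above is easy to observe for yourself. A minimal Python sketch (assuming a Linux-style kernel, where link(2) on a directory is refused outright):

```python
import errno
import os
import tempfile

def try_dir_hardlink():
    """Attempt a hard link to a directory; return the errno name, or None if allowed."""
    with tempfile.TemporaryDirectory() as d:
        src = os.path.join(d, "subdir")
        os.mkdir(src)
        try:
            # Same call a hard link to a regular file would use.
            os.link(src, os.path.join(d, "alias"))
            return None  # the kernel allowed it (OS X-style special case)
        except OSError as e:
            # Linux and most BSDs refuse with EPERM, matching the POSIX text.
            return errno.errorcode.get(e.errno)

print(try_dir_hardlink())
```

On a stock Linux or BSD box this prints EPERM; OS X’s directory hard links go through a privileged special case rather than plain link().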
With FreeBSD of a certain vintage, with ln -F.

I find it very difficult to care about filesystems. It’s about as exciting to me as printer drivers. I currently use ext4 because it was a default and I had no reason to try anything else. Can someone explain what appreciable difference a filesystem would make on my everyday usage of computers?
I’m only really excited because Apple (might) get rid of the .DS_Store files Finder creates when viewing directories.

I was hoping the new FS would be case sensitive by default, just because it’s what I’m used to from Linux. But it won’t be.
https://developer.apple.com/library/prerelease/content/documentation/FileManagement/Conceptual/APFS_Guide/UsingtheAppleFileSystem/UsingtheAppleFileSystem.html#//apple_ref/doc/uid/TP40016999-CH4-SW1
APFS currently defaults to case sensitive. I’m not sure whether that’s going to stay the default going forward, but it seems like progress on that front.
It likely won’t stay the default.
Consider that there’s a lot of legacy software in the Apple ecosystem that isn’t very careful about case normalization because it doesn’t have to be. You can format a volume case-sensitive HFS+ today (and you could also do case-sensitive UFS in the past), but a ton of stuff is broken, including big-name apps like Steam. Apple has always stuck with case-insensitivity in the default install because doing anything else breaks too much.
Is a case-sensitive filesystem considered a good thing?
In other words, I’m curious what benefits there are to be had in the ability to have Foo.txt and foo.txt side-by-side.
For “normal” people, having a case-insensitive file system is nice, so if they fat-finger the caps-lock key they still get the file they want.
For programmers, having a case-sensitive file system is nice, so the file you create with a given name is always distinct from other files with logically different names.
Imagine writing some sort of “cache” files to disk that are named using some hash that produces a combination of upper and lower-case letters. On a case-insensitive file system, you’re eventually going to end up with collisions. That’s a bummer (and hours of debugging time lost) to have to worry about, especially when your program needs to work cross-platform.
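That collision scenario is easy to reproduce. A toy Python sketch, using deliberately short two-character names so a collision shows up quickly (real cache names would be longer, which just makes the same bug rarer and harder to debug):

```python
import base64
import hashlib

def cache_name(key: str, length: int = 2) -> str:
    """Hypothetical cache filename: a short, case-mixed base64 digest."""
    digest = hashlib.sha256(key.encode()).digest()
    return base64.urlsafe_b64encode(digest)[:length].decode()

# On a case-insensitive filesystem, names that differ only in case are
# the *same* file. Scan many cache entries for such a pair.
seen = {}        # case-folded name -> original name
collision = None
for i in range(10000):
    name = cache_name(f"item-{i}")
    folded = name.lower()
    if folded in seen and seen[folded] != name:
        collision = (seen[folded], name)
        break
    seen.setdefault(folded, name)

# Two distinct names that a case-insensitive FS silently merges:
print(collision)
```

Both names are valid and distinct to the program, but a case-insensitive filesystem hands back one file for both, which is exactly the cross-platform bug described above.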
Normal people never type the names of existing files.
What Windows does (as a compromise) is NTFS and the kernel are case sensitive, but the Win32 subsystem is not by default. Users get what they expect, and other subsystems can get the semantics they want. (Case sensitivity is toggleable for Win32 as well, but I wouldn’t recommend this.)
Sure:
Snapshots are particularly useful. Sort of like a git commit, you can always go back to that point, even if you delete files, etc. With HAMMER (and HAMMER2; DragonFly BSD only), ZFS, Btrfs, and FreeBSD FFS (maybe more…), you get snapshotting.
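To see why that model is attractive, here’s a toy sketch of snapshot/rollback semantics; purely illustrative, nothing like the actual copy-on-write internals of any of those filesystems:

```python
# A snapshot is a cheap reference to an immutable version of the tree,
# so later writes and deletions never touch it. Real COW filesystems
# share unchanged blocks; this toy just copies a dict.
class ToyFS:
    def __init__(self):
        self.files = {}
        self.snapshots = {}

    def write(self, path, data):
        self.files[path] = data

    def snapshot(self, name):
        self.snapshots[name] = dict(self.files)

    def rollback(self, name):
        self.files = dict(self.snapshots[name])

fs = ToyFS()
fs.write("/etc/rc.conf", "v1")
fs.snapshot("before-upgrade")
fs.write("/etc/rc.conf", "v2")
del fs.files["/etc/rc.conf"]      # oops, deleted it entirely
fs.rollback("before-upgrade")
print(fs.files["/etc/rc.conf"])   # back to v1
```

The git-commit analogy holds: the snapshot is the commit, and rollback is checking it out again.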
Hey @qbit, been a while!
I figure I’ll weigh in here too.
As already mentioned: snapshots. If you ever try SmartOS/SmartDataCenter (recently renamed to Triton) by Joyent, you might end up playing around with container snapshots (zones). It’s crazy that I can go into a zone, or a bunch of zones, completely destroy the filesystem, then roll back to a safe snapshot in a matter of seconds (data type/size would make “seconds” vary here, but it’s always fast).
I remember being absolutely blown away by the notion of VM “flavours” being available in a repo, just like packages; this was before the big hit of Docker becoming widely known and adopted. The fact I could go onto my SmartOS headnode, do an imgadm avail | fgrep redis, grab that “image” in no time from Joyent’s remote repo, then deploy straight to a zone was just baffling. Why am I harping on about this? Because this framework revolves around ZFS snapshots bundled up with some metadata compressed into a tarball. Pretty damn cool.

Which then leads me to some of the utilities ZFS has available, like zfs send and zfs receive: https://duckduckgo.com/?q=zfs+send+receive&ia=web

I won’t babble on about that, check out those search results.
There’s also the ability to implement file sharing protocols, like NFS/smb, at the filesystem level with some simple flags when making/modifying volumes. I use a SmartOS server at home with a few NFS shares set up directly when I made the volumes.
Another thing: on the fly expansion/shrinking of volumes. Really cool when you want to chuck some extra space at some zones, or bring them down.
All of this is largely from my own experience of administering servers with ZFS; I’ve never used it on a desktop/laptop. However, were I to go down that path (if it was presented to me in a simple, solid manner), I’d be using snapshots and send/receive to make backups all the damn time.
ZFS is fucking great - I’ll end on that.
From the perspective of an OSX user, would there be much of an improvement over the kind of snapshotting Time Machine does? I realize it’s not at the filesystem level, and not nearly as flexible if you’re managing big storage arrays and such, but for a desktop/laptop user it seems like a “good enough” solution.
It might be that TM does a good enough job, but other things, like the self-healing stuff, can take it a step further. Say you have a “raid” volume (quotes because ZFS has its own naming) and a file gets corrupted on one of the mirrors: ZFS will check the file’s checksum against the other mirrors and replace the broken copy with a known good one. TM in this example would just put the corrupt file into your TM backup.
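The self-healing mechanism described there is simple to sketch. This is an illustration of the idea (checksum every block, repair bad copies from a good mirror), not ZFS’s actual on-disk format:

```python
import hashlib

def sha(data):
    """Checksum for a block of bytes."""
    return hashlib.sha256(data).hexdigest()

# Two mirrors hold the same block; the expected checksum is stored
# separately, so a silently corrupted copy can be detected.
block = b"important bytes"
expected = sha(block)
mirrors = [bytearray(block), bytearray(block)]

mirrors[0][0] ^= 0xFF   # bit rot on mirror 0

# On read: verify each copy against the checksum, repair from a good one.
for i, m in enumerate(mirrors):
    if sha(bytes(m)) != expected:
        good = next(bytes(x) for x in mirrors if sha(bytes(x)) == expected)
        mirrors[i] = bytearray(good)

print(all(sha(bytes(m)) == expected for m in mirrors))  # True
```

A backup tool without checksums (the TM scenario above) can’t tell the corrupted copy from the good one, so it faithfully backs up the rot.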
All that said, my main FS is OpenBSD’s FFS, which has none of these features, and I have never had issues :P
[Comment removed by author]
iMacs come with this SSD/spinny disk fusion, which ZFS has pretty good support for out of the box (where the SSD becomes a cache rather than extra storage space). On top of that, for things like video editing (which Apple still has a strong presence in, I think?), something like ZFS gives you a lot of options to increase the throughput of storage. And, while ZFS supports a RAID setup well, it also has a lot of value for single-disk setups; I use it on my desktops without a hitch. The COW semantics of APFS, from what I can tell skimming the docs, are very similar to what ZFS does, so at the very least a subset of what ZFS offers is useful for Apple devices.
[Comment removed by author]
It can be done with a single drive as well (obviously this isn’t recommended :P). Also I am not arguing that people are or should be doing this (running any of these FS2.0 file systems). I am just expanding on my previous list of “distinguishing features”.
I use ZFS everywhere I can, am really pleased with it, and don’t think I’d feel comfortable going back to something else. The first of the two main values I get from ZFS is that it ensures data is valid via checksums; I have been stung by hardware or software corrupting my data, and ZFS has protections against that. The second is snapshots. Snapshots are cheap in ZFS, so on my workstations I take them at intervals of 5 min, 15 min, 1 hour, 1 day, 1 week, and 1 month, and retain them for various periods of time. Transferring snapshots around is easy, so I can back these up to other machines really painlessly.
With snapshots, you can do other really powerful things that you might not realize you want to do until you have them. The biggest one is boot environments. This makes it so you can snapshot your installation and switch between snapshots on boot. The use case for this is a big upgrade: you can roll it back if it breaks. The power that something like ZFS gives you is that you can ensure the packages and kernel are always in sync. While existing OSes like Ubuntu might keep multiple kernel versions lying around, you don’t have any guarantees that the rest of the system still makes sense if you roll back. You do have those guarantees with boot environments.
Then there are other nice things you can do. For example, if you have a lot of data and you want to experiment with it, you can clone it (cheap), play with it, and destroy it, without harming the original data. If you are using Solaris or FreeBSD, there are whole-system containers (zones and jails, respectively) that become much easier and more powerful with ZFS: creating new ones becomes fast and cheap, so you can make heavy use of them.
Then, if you’re administering any system, ZFS lets you do a lot of useful things, even delegate control of portions of the filesystem to users so they can do the work they want themselves. Running any serious storage box benefits from ZFS on basically every axis (performance, durability, operations).
So, of course, it depends. For myself, ZFS has given me the ability to do things I didn’t realize I wanted to do, as well as increased safety for my data. On top of that, it benefits me both as a person who admins some machines and as a regular-joe user. I used to rsync data to multiple USB drives as backup; now I can just transfer incremental snapshots around, which is much safer and significantly faster.
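The incremental-transfer idea behind that last point can be sketched in a few lines. This toy diffs two snapshots and ships only what changed; it illustrates why it beats re-copying everything rsync-style, and is in no way the actual zfs send stream format:

```python
def incremental(old, new):
    """Diff two snapshots (path -> data maps): what changed, what vanished."""
    changed = {p: d for p, d in new.items() if old.get(p) != d}
    removed = [p for p in old if p not in new]
    return changed, removed

def apply_incremental(dest, changed, removed):
    """Apply a diff to a destination that already holds the old snapshot."""
    dest.update(changed)
    for p in removed:
        dest.pop(p, None)
    return dest

snap1 = {"/a": "x", "/b": "y"}
snap2 = {"/a": "x", "/b": "y2", "/c": "z"}

backup = dict(snap1)                      # destination already has snap1
changed, removed = incremental(snap1, snap2)
apply_incremental(backup, changed, removed)
print(backup == snap2)                    # True, and /a never crossed the wire
```

Because the destination already holds the previous snapshot, only the delta moves, which is where the “significantly faster” comes from on a mostly unchanged dataset.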