#!/bin/sh -e
# Mirror each top-level package directory into $HOME with a leading dot:
# directories are recreated, files are symlinked back into the repo.
cd "$(dirname "$0")"
for dir in */; do
  (
    cd "$dir"
    find * \
      -type d -exec sh -c 'mkdir -p "$HOME/.$0"' {} \; -o \
      -type f -exec sh -c 'ln -sf "$PWD/$0" "$HOME/.$0"' {} \;
  )
done
I do something like this as well, with the added layer of crazy that my dotfiles are a nix package which is installed and then linked into place.
But really, all I want is unprivileged union mounts that work.
Plan 9, where art thou?
I seriously don’t understand why Unix hasn’t evolved to be more like Plan 9 over the last few decades. It’s so clearly the right direction!
Hey, do you have like a 10-second primer on Plan 9? I always see people talking about it but have no idea how it differs from Unix.
It’s sometimes said that in Unix, everything is a file. In Plan 9, everything is a filesystem, and they can be transparently or opaquely mounted on top of one another to customise the environment for each process.
For example, where Unix has a special API for making network connections and configuring them, Plan 9 has a particular filesystem path: write a description of the connection you want to make, read back a new path. Open that path, and the resulting file descriptor is your TCP socket. If you want to forward your connections via another computer, you don’t need a special port-forwarding API or a VPN; you can just mount the remote computer’s network-connection filesystem over the top of your own, and everything that makes a network connection from then on will be talking to the TCP stack on the remote computer.
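Concretely, the dance goes roughly like this (pseudocode, sketched from memory of Plan 9’s /net interface; not runnable outside Plan 9):

```
clone = open("/net/tcp/clone", ORDWR)   # allocates a fresh connection directory
n     = read(clone)                     # e.g. "4": the directory is /net/tcp/4
write(clone, "connect 192.0.2.1!80")    # the clone fd doubles as the ctl file
data  = open("/net/tcp/4/data", ORDWR)  # reads/writes on this fd are the TCP stream
```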
I think Redox’s “Everything is a URL” is a nice improvement on Plan 9’s idea.
Very interesting. Thanks!
That’s a great question!
I guess the big thing about Plan 9 is that it really tried to make everything a file. So using the network was just opening files, writing to the GUI was just writing to files &c. Really, the differences from Unix are mostly a result of that goal, e.g. in Plan 9 any user can create his own namespace of files & directories from other files & directories.
The longer version would go into detail about how that actually worked (short version: really well).
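For a flavour of that, per-process namespaces are built with ordinary commands; sketched from memory (exact flags may differ):

```
bind -b $home/bin /bin    # overlay my own bin directory in front of the system one
import gateway /net /net  # replace the local network stack with the gateway's
```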
Do you have your Nix/dotfiles code somewhere?
A more radical approach would be using rewritefs.
I use home-manager, and mine looks as simple as:
Me too! Some more stuff I do to manage simple scripts/aliases:
What I especially like about home-manager is that it allows me to try and gradually migrate stuff to Nix, but I can still do e.g. nix-env -iA nixpkgs.umlet for quick additions/tests, and still have an escape hatch of sudo apt-get install ... if something is not available (or broken for me) in nixpkgs.
You don’t need that script function, viz.:
I remember finding home-manager and was unsure if it worked or was testing on a not-NixOS system, so I kept using my Rube Goldberg setup :/
I can confirm that home-manager works on OSX. The home.nix file I linked to above is used both on my Thinkpad running NixOS and my Macbook running macOS (via nix-darwin).
I’m using it successfully on Ubuntu 16.04.
Oh, hey, I’ve been meaning to do that. Would you be willing to share your config?
Here’s my nix expression which, in true FP style, is completely inscrutable:
The src argument is a directory arranged like “programname/.dotfile”.
And this is called from a script I call nix-up:
That $(hostname).nix file has a list of packages and hooks the overlay that contains the above expression.
Thanks! I’m excited to set it up.
Xero also has a great and brief article about the same thing: https://blog.xero.nu/managing_dotfiles_with_gnu_stow
Makes me happy to see a post about stow. I use xstow[1] to manage bin and local in my homedir.
[1] More recently maintained fork here
Great! I’ve been sitting on it, nearly finished for months.
Every time I write something, I get 80% of the way there (often more), and then upon re-reading it sounds obvious and trite, and I talk myself out of publishing. I do this at least once a week. I’ve decided to try fighting the urge to do that, as a matter of personal improvement.
Thanks for this; however, I still can’t understand how and why GNU Stow (or other similar dotfile managers, Ansible playbooks, etc.) is better than a simple shell script. Precisely because I’m sharing my dotfiles across multiple devices, platforms, and operating systems, I want them to be as platform-agnostic and minimal as possible, and without any external dependencies. My script simply installs/symlinks everything, and later I use git pull to sync changes across machines.
Yep, I also use an install script (although mine is in Python), the reasons being:
I need to support 3 platforms (Ubuntu/Mint, macOS, Windows), with different things to install in different ways (or not at all) depending on the platform.
In some cases, I find it easier to procedurally generate a dotfile that will point to a resource located in your dotfiles repo, instead of symlinking everything into a fixed/hard-coded location.
A script can also manage your sub-repos (pulling/cloning), so that everything is done in one command.
A script can optionally do a subset of something, so that if you want to just update, say, your tmux config, you can just run that and it will just pull your tmux plugins’ repos, redo the symlinks, etc., and nothing else.
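One trick from this thread — generating a dotfile that points at the repo copy instead of symlinking it — can be sketched in a few lines of shell. The repo layout ($dotfiles/tmux/tmux.conf) and the default location are made up for illustration:

```shell
#!/bin/sh
# Sketch: write a tiny ~/.tmux.conf that sources the real config kept
# inside the dotfiles repo, rather than symlinking the file into place.
gen_tmux_conf() {
    dotfiles=${1:-$HOME/code/dotfiles}   # assumed repo checkout location
    printf 'source-file %s\n' "$dotfiles/tmux/tmux.conf" > "$HOME/.tmux.conf"
}
```

The generated file can then embed machine-specific paths without the repo ever needing to know them.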
It’s certainly not for everyone. I use it because its default behaviour matches my workflow perfectly, and there’s no need for a shell script. The only thing simpler than a simple shell script is no shell script at all!
Unless I’m missing something, you are creating an extra dependency before you can install (and manage) your dotfiles: Stow. I.e., you have to download, compile, and install it from source or via a package manager. What if you have to use systems you’re not the admin for and have no sudo rights? Or Perl is too old (Stow requires it)? Or, or, or…
I stole this from a Hacker News comment way way back, but it works great.
What’s the benefit of symlinking vs. copying? I guess being able to edit dotfiles in their usual places (vi ~/.vimrc) is cool, but I actually do make temporary local changes sometimes, and I don’t want them in the repo.
I just have “modules” (directories) with apply.sh scripts and a really simple install.sh to install these “modules” (also rinstall.sh to install over SSH to a machine where I don’t want to clone the git repo). So the repo works as kind of a “staging area” (like the git index).
I’m in agreement with you. I would rather have filenames that don’t begin with a dot, and an install shell script gives me exactly that. What does stow do that a shell script can’t?
I think the real benefit of having symlinks is if you are sharing the files across multiple devices.
There’s nothing stopping someone from re-implementing a copy-not-symlink version of stow, but then you are responsible for merging differences in the script (or bailing out with an error).
The beauty of having symlinks is you can use any external tool (e.g. git) to handle merging changes in the config files if you share them across multiple devices.
edit: I actually implemented a shell script to do exactly what OP described, but kept getting burned by managing conflicts.
Most of the things I want to make local customisations for have ways to include other files (sh, ssh, my editor, etc.), so I usually make my main config files include a .foo.local or .foo.d/* whenever possible.
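For shells, that include pattern is a couple of guarded lines at the end of the versioned file; the .profile.local name here is just a convention, not anyone’s actual setup:

```shell
#!/bin/sh
# Sketch: the tail end of a versioned ~/.profile — source an unversioned,
# machine-local override file if one exists, so temporary local tweaks
# never need to touch the repo.
load_local_overrides() {
    if [ -f "$HOME/.profile.local" ]; then
        . "$HOME/.profile.local"
    fi
}
```

ssh can do the same with an Include line in ~/.ssh/config, and most editors have an equivalent.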
Some of the things I stow with stow are directories, for precisely that same reason. I wrote an i3wm config manager that uses ~/.config/i3/config.d, and what I put into stow was literally that directory, so I can add new files to ~/.config/i3/config.d and they magically show up in my repo, where I can add them, since config.d is a symlink.
Isn’t managing conflicts exactly what a tool like git is supposed to help with? So on update a copy-not-symlink script would copy back into the repo and then do a merge.
(I’ve always used symlinks because it’s less work and thought up front)
Yea, that was kind of my (poorly worded) point. With copy-not-symlink, your script now has to be smart enough to recognize conflicts, not copy, and invoke git or whatever to help merge changes. With symlinks, you use one tool to symlink (stow) and another to resolve conflicts (git). Stow doesn’t care about conflicts, and it doesn’t have to. It is simpler and less work than creating your own script to copy-not-symlink.
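The extra smarts a copy-based installer needs might look like this — a sketch of the idea, not anyone’s actual implementation:

```shell
#!/bin/sh
# Copy a config file into place, but refuse to clobber local edits:
# if the destination exists and differs from the repo copy, bail out
# so the user can merge by hand (or with git) before re-running.
install_copy() {
    src=$1 dst=$2
    if [ -e "$dst" ] && ! cmp -s "$src" "$dst"; then
        echo "conflict: $dst differs from $src" >&2
        return 1
    fi
    cp "$src" "$dst"
}
```

With symlinks, none of this exists: the file in $HOME and the file in the repo are the same inode, so git sees every edit immediately.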
Stow is a really neat tool, and I use it for managing my ~/.local tree, which contains locally built software packages. However, for dotfiles I just check my whole home directory into a git repository and add ignore rules for files and directories that don’t need to be versioned. It’s pretty low complexity and has the added benefit that you can spot config drift and new dotfiles (which you may or may not want to version) very quickly. I’d very much recommend that over using stow to manage dotfiles.
I thought about this years ago, and I see one major problem. Using stow is a snapshot in time: in its usual use case, I install foobar-1.2.3 to /usr/local/stow/foobar-1.2.3 and then it symlinks the foobar binary to /usr/local/bin/foobar. I don’t see how this would apply to dotfiles, as this set of files is ever-changing. You start using new tools, stop using old ones. So every time, for it to be meaningful, you have to move them to the stow path and re-stow, or not?
I’m not saying my solution is better; it’s not so different. But when I start using a new machine, I clone my dotfiles to ~/code/dotfiles and symlink a select few config files to ~. I do know that it’s only a small subset, and then I might migrate some new ones into that set. So basically I have the same problem as I described above, but I also don’t have stow (just a shell script that does this job once) to think about. And yes, I do use stow regularly in /usr/local, but I think it doesn’t solve the problem in a meaningful way; maybe I’m missing something.
Very useful, thanks. I may revisit my workflow now based on this nice write-up. A few years ago I wrote wsup to solve the same problem. It works well for me, but needs updating.
Stow is cool, and I remember looking at it when I was trying to find something better than git submodules and symlinking for managing my dotfiles. I ended up going with vcsh and myrepos, and have so far been really happy with them. It’s a bit more of a minimal solution, a wrapper around git, but it does what I want.
I wrote a utility earlier this year (dotenv) that caters to a similar use case (managing dotfiles), but with the added requirement of having customizable and overridable dotfiles. I did that because I found that new hires never really configured the command-line tools we use on a daily basis, and I wanted to create a standard configuration that they could still customize. It’s quite new (a few months old), but I use it every day. If you’re interested, let me know what you think of it!
There is a bit of namespace pollution for dotenv: there are dotenv packages for Ruby, JavaScript, PHP, Elixir, Rust, and Python that manage values for .env files.
I’ve had good experience with yadm, which is a script wrapped around git. It’s been really convenient to use my familiar git workflow to manage dotfiles.
POSIX compliant:
As the last word said… “done”.