Instead of hosting your average dotfiles repository, I’ve decided to host mine as a repository of Salt states. I picked Salt because that’s what we’ve started using at work, and it seemed simple enough to convert my own system to use it as well. The states are pretty slim currently because I just did a fresh install of Ubuntu 18.04 and I’m writing the states as I get set up.
Comments and suggestions welcome!
I do something very similar with Ansible. I’ve found it a lot more effective than bash scripting or tools that symlink dotfiles into a home directory, since it not only manages configuration but also installs the applications that use that configuration - I think more of my setup is installing apps than configuring them now.
I also use peru to download applications I can’t install from a standard package repository and store them in my git repo, so that I’m relying on fewer external downloads at setup time, and can easily control when they update.
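For anyone curious, a peru manifest along these lines pins a third-party download inside the repo (the module name, path, and rev below are placeholders, not from my actual setup):

```yaml
# peru.yaml -- hypothetical example; the module, url, and rev are placeholders
imports:
    fzf: vendor/fzf        # where the fetched module gets synced to

git module fzf:
    url: https://github.com/junegunn/fzf.git
    rev: master            # pin a specific commit here to control when it updates
```

Running `peru sync` then fetches everything into the repo, so setup doesn’t depend on the upstream being reachable at that moment.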
Thanks for the hat tip to peru. I’ll have to check that out.
Nice! Gonna chime in here and tell you to check out my collection of Nix expressions for the fully declarative, functional, atomic configuration I use across my macOS systems:
Start by digging through `darwin-configuration.nix` if you’re curious; it’ll lead on to the other expressions.
Of course, this is all achievable on NixOS (and a lot more), and any system that supports Nix.
Hope someone finds this helpful/inspirational!
This is cool — I do something similar. My home directory is managed in git and I manage the software on the machine (mostly) with Salt. Restoring backups from restic and highstating gives me a pretty much identical machine in about 60-90 minutes, depending on the network connection.
Maybe @steveno or someone else can ELI5 why this is advantageous over traditional, platform-agnostic, dependency-less symlinking in a bash script? Cf. my dotfiles and the install script.
Salt’s declarative nature means that you’re mostly describing the end state of a system, not how to get there.
So instead of saying “copy this stuff to this directory and then chmod” you say “I want this other directory to look like this”. Instead of saying “install these packages” you say “I want this to be installed”. You also get dependency management so if you (say) just want to install your SSH setup on a machine you can say to do that (and ignore your window manager conf).
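As a rough sketch of what that looks like in a Salt state (the paths and names here are illustrative, not from the repo under discussion):

```yaml
# ssh.sls -- describe the desired end state; Salt works out what to do
openssh-client:
  pkg.installed: []

/home/steveno/.ssh/config:
  file.managed:
    - source: salt://ssh/config    # pulled from the state tree
    - mode: '0600'
    - require:
      - pkg: openssh-client        # dependency: package first, then the file
```

Running this twice is harmless: if the package is installed and the file already matches, Salt does nothing.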
If your files are grouped and organized well enough, you can apply targeted subsets of your setup to many machines based on what you want. “I want to use FF on this machine, so pull in that plus all the dependencies it needs.” “Install everything but leave out the driver conf I need for this one specific machine.”
This means that if you update these scripts, you can re-run salt and it will just run what needs to run to hit the target state! So you get recovery from partial setup, checking for divergences in setups, etc for free! There’s dry run capabilities too so you can easily see what would need to change.
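Concretely, with Salt that looks something like this (the `ssh` state name is hypothetical):

```
# Preview what would change, without touching anything
salt-call --local state.highstate test=True

# Apply just the ssh states and their dependencies
salt-call --local state.apply ssh
```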
This is a wonderful way of keeping machines in sync.
Looking at my repository right now, there isn’t any advantage. You could do everything I’ve done with a bash script. The beauty of this setup for me, and I really should have stated this initially, is that I can share this configuration across multiple machines really easily. For example, my plan is to buy a Raspberry Pi and set up an encrypted DNS server on it. All I need to do is install Salt on the Pi and it gets all of this set up, just like my NUC currently has. I can then use Salt to target specific machines and have it set up a lot of this for me.
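That per-machine targeting lives in the top file; a hypothetical sketch (the minion IDs and state names are made up):

```yaml
# top.sls -- map each minion to the states it should get
base:
  '*':          # every machine gets the common baseline
    - common
  'nuc':        # the desktop gets the full setup
    - desktop
  'pi':         # the Pi only gets the DNS server states
    - dns
```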
You can also do this with a shell script.
With shell scripts you don’t need to install anything.
As I previously stated, given what’s currently in this repository, there isn’t anything here that you couldn’t do with a shell script. That’s missing the point, though. Salt, or Ansible, or Chef, provides you with a way to manage complex setups on multiple systems. Salt specifically (because I’m not very familiar with Ansible or Chef) provides a lot of other convenient tools like salt-ssh or reactor as well.
I feel like your point is just that shell script is Turing complete. Ok. The interesting questions are about which approach is better/easier/faster/safer/more powerful.
If you’re targeting different distributions of linux or different operating systems entirely, the complexity of a bash script will start to ramp up pretty quickly.
I disagree. I use a shell script simply because I use a vast array of Unix operating systems, many of which don’t even support tools like Salt, or simply don’t have package management at all.
I have a POSIX sh script that I use to manage my dotfiles. Instead of having it try to actually install system packages for me, I have a `./configctl check` command that just checks whether certain binaries are available in the environment. I’ve found that this approach hits the sweet spot: I still get a consistent environment across machines, but I don’t need to do any hairy cross-distro stuff. And I get looped in to decide what’s right for the particular machine, since I’m the one actually going and installing stuff.
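A minimal sketch of that kind of check in POSIX sh (the function name and binary list below are made up for illustration, not the actual `configctl` code):

```shell
#!/bin/sh
# Report which required binaries are present, instead of installing anything.
check_bins() {
    missing=0
    for bin in "$@"; do
        if command -v "$bin" >/dev/null 2>&1; then
            printf 'ok: %s\n' "$bin"
        else
            printf 'missing: %s\n' "$bin"
            missing=1
        fi
    done
    return "$missing"
}

# Example: check for the tools this (hypothetical) setup expects
check_bins sh git
```

The nonzero exit status when something is missing makes it easy to wire into a larger setup script.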
Have to agree with @4ad on this one. I have to use remote machines where I don’t have sudo rights and/or which are often completely bare bones (e.g., not even `git` preinstalled). My goal, in essence, is a standardized, reproducible, platform-agnostic, dependency-less dotfile environment that I can install with as few commands as possible and use as fast as possible. I don’t see how adding such a dependency benefits me in this scenario. I’m not against Ansible-like dotfile systems, but, in my opinion, using such systems for this task seems like overkill. Happy to hear otherwise, though.
Until you start using Pillar and templates, it’ll remain an average dot files repository, I’m afraid.
Don’t get me wrong, I like Salt and use it daily, but without Pillar and templates, your repository is just another `dotfiles` repository*, albeit using Salt states :^)
* by that, I mean static and not very reusable
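To make the Pillar point concrete: per-machine data lives in Pillar, and a single state templates over it with Jinja, roughly like this (the keys and paths are invented for illustration):

```yaml
# pillar/user.sls -- per-machine data, kept out of the states themselves
user:
  name: steveno
  email: steveno@example.com
```

```yaml
# states/git/init.sls -- one reusable state, filled in from Pillar
/home/{{ pillar['user']['name'] }}/.gitconfig:
  file.managed:
    - source: salt://git/gitconfig.jinja
    - template: jinja
    - context:
        email: {{ pillar['user']['email'] }}
```

Point the same state at a machine with different Pillar data and it renders differently, which is what makes it reusable.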
Which I completely understand and I am okay with. The idea was to explore the possibilities and see if others were doing something similar.