I too do something similar; you can control the order of sourcing this way.
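A minimal sketch of the kind of snippet this could be (the ${BASHRC_D} name is borrowed from the thread; the digit-prefix glob and .sh suffix are assumptions inferred from the reply below):

  # Source only digit-prefixed files, in glob order.
  # Globs sort lexically, so zero-pad the numbers (05-foo sorts before 10-bar).
  for f in "${BASHRC_D}"/[0-9]*.sh; do
    [ -r "$f" ] && . "$f"
  done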
Your glob is nice because it limits valid names to a set that is easily distinguishable from ordinary helper scripts (to be sourced, the filename must start with a digit).
In my particular experience, I have never hit a case where I had to bother with the sourcing order but, since bash globs are sorted by default, I could use this very same approach without changing the bootstrap snippet.
Apparently the “Submit Story” form has eaten the .md extension from the link and I didn’t notice it. It should be readable in pretty-printed Markdown as well by re-adding it. Would some moderator edit the link, please? :)
Edit: link to the pretty-printed version: https://write.as/bpsylevc6lliaspe.md
That’s neat—didn’t know write.as had that feature.
Using fish has the benefit of being able to use a real programming language to define helper functions and having all of those neatly stored in .config/fish, so there is no random mess in my $HOME.
I have switched to fish this year and it has been a game changer for me. My configuration is now simple and I got a lot of features that I was missing out of the box.
Also, the features that fish has built-in are usually implemented in a better way than those added on via scripting in zsh/bash.
\o/ yay for fish
I particularly like being able to ‘save’ variables to my config within the shell.
Eshell also has this benefit. It’s great!
I just add comments; it’s simpler, and it seems like there are fewer things that can go wrong.
I beg to differ. One question would be: what is substantially different between bash and other scripting languages that makes it more prone to the single-file style, where different sections of the code are indicated only by comments?
I like my configuration to be “easily portable”; that is, copying a single file to a new machine is a lot less work than copying six files. And sure, there are myriad ways to deal with this; I even wrote an entire script for this myself. But sometimes it’s just convenient to be able to copy/scp (or even manually copy/paste) a single file and just be done with it.
I used to use the multi-file approach, but went back to the single-file one largely because of this.
I also somewhat prefer larger files in general (with both config files and programming), rather than the “one small thing per file”-approach. Both schools of thought are perfectly valid, just a matter of personal preference. I can open my zshrc or vimrc and search and I don’t have to think about which file I have to open. I never cared much for the Linux “/etc/foo.d”-approach either, and prefer a single “/etc/foo.conf”.
How I personally use it is that the non-portable snippets go to ${BASHRC_D} instead. Having worked as a developer in projects with very heterogeneous stacks, I got fed up with the constant changes to ~/.bashrc that would have to be cleaned up sooner or later.
My usual workflow when I am working on a new machine temporarily is to copy only ~/.bashrc. Any additional config is added to ${BASHRC_D} as needed.
copying a single file to a new machine is a lot less work than copying six files
Is it? I have all of my configs in a git repo, so it’s a single command for me to git clone to a new machine. Copying a single file is maybe simpler if that’s the only operation that you do, but copying and versioning a single file is no easier than copying and versioning a directory. The bash config part of my config repo has a separate file or directory per hostname, so I can have things in there that only make sense for a specific machine, but everything is versioned as a whole.
I never cared much for the Linux “/etc/foo.d”-approach either, and prefer a single “/etc/foo.conf”.
For files that are edited by hand, this is very much a matter of personal preference. The big difference is for files that need to be updated by tools. It’s fairly trivial to machine-edit FreeBSD’s rc.conf in theory, because it’s intended to be a simple key-value store, but it’s actually a shell script, so correctly editing it from a tool has a bunch of corner cases and isn’t really safe unless you have a full shell-script parser and symbolic execution environment (even for a simple case such as putting the line that enables the service in an if block: how should a tool that uninstalls that service and cleans up the config handle it?). Editing rc.conf.d by hand is a lot more faff (especially since most files there contain only one line), but dropping a new file in there or deleting it is a trivial operation for a package installer to do.
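To illustrate with a hypothetical service (all names invented): a hand-edited rc.conf can bury the knob inside logic, while rc.conf.d keeps it mechanically removable.

  # /etc/rc.conf is a shell script, so a human may legitimately write:
  if [ "$(hostname -s)" = "buildbox" ]; then
    nginx_enable="YES"
  fi
  # A package uninstaller cannot safely delete that line without
  # understanding the surrounding shell logic. By contrast, a file
  # /etc/rc.conf.d/nginx holds a single line it can simply remove:
  nginx_enable="YES"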
Same thing I’d say about Python: it’s an interpreted scripting language where multiple files are only loosely linked together and there’s no compilation or verification step. At least usually you have source files right next to each other, but in this case they’re associated using environment variables. Just feels like overengineering.
Still no difference from a single-file approach, so I’m afraid I fail to see how this is a relevant aspect in making such a choice.
At least usually you have source files right next to each other but in this case they’re associated using environment variables.
Environment variables like ${BASHRC_D} are nothing but a convenience. They could be replaced by local variables or sheer repetition with no downside. It is a matter of personal preference.
Just feels like overengineering.
There is no engineering involved in that at all, so calling it “overengineering” feels like overestimation :)
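For instance (file names hypothetical), the two forms below are interchangeable:

  # With the convenience variable:
  BASHRC_D="$HOME/.bashrc.d"
  . "${BASHRC_D}/aliases.sh"
  . "${BASHRC_D}/prompt.sh"

  # With sheer repetition:
  . "$HOME/.bashrc.d/aliases.sh"
  . "$HOME/.bashrc.d/prompt.sh"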
Similarly, I have a ~/.bash.local file which allows for machine-specific config (it’s .gitignored) and overrides by being sourced last: https://github.com/Pinjasaur/dotfiles/blob/193df781b46e1f7e7a556f386172b76f067adcd9/.bash_profile#L28-L32
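Sketched from the description rather than copied from the linked file, the pattern is roughly:

  # At the very end of the profile, so machine-specific overrides win:
  [ -f "$HOME/.bash.local" ] && . "$HOME/.bash.local"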
I have a slightly more complex setup. I put all of my bash config under ~/.config/bash, which is in a git repo. I have a helper function that sources files matching ${XDG_CONFIG_HOME}/bash/$1/$2 and ${XDG_CONFIG_HOME}/bash/$1/$2.d/*. I call this function with hosts and the output from hostname, and with systems and the output of uname. This lets me have either individual files or directories full of files inside ~/.config/bash/hosts for individual machines, and the same in systems for things that apply to every machine I have with a specific OS (e.g. FreeBSD, Linux, macOS, with a special case on Linux that tries again with Linux-WSL if it detects that it’s running in WSL).
This means all of my configs for all machines are in the same git repo and don’t get lost if anything happens to a particular machine.
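A sketch of such a helper under those assumptions (the function name source_group is mine, not the commenter’s):

  # Hypothetical helper: sources bash/<group>/<name> plus bash/<group>/<name>.d/*.
  source_group() {
    local base="${XDG_CONFIG_HOME:-$HOME/.config}/bash/$1/$2"
    [ -f "$base" ] && . "$base"
    for f in "$base".d/*; do
      [ -f "$f" ] && . "$f"
    done
  }
  source_group hosts "$(hostname)"
  source_group systems "$(uname)"   # FreeBSD, Linux, Darwin, ...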
I also have a .aliases-local alongside the local bash file, and check for the existence of both before sourcing them in my dotfiled bashrc.
Same!
I like how Fish’s config files work. In ~/.config/fish:
├── completions   # autocompletion scripts
├── conf.d        # all files from this dir are sourced automatically
├── functions     # sourced when a function is used
└── config.fish   # .bashrc equivalent
It’s easy to keep things organized with this layout.
Or, apply minimalism. My bashrc looks roughly like:
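The snippet itself did not survive; presumably a loop along these lines (directory name assumed), given the “. $file” mentioned in a reply below:

  # Assumed reconstruction: source every readable file in one config dir.
  for file in ~/.bashrc.d/*; do
    [ -r "$file" ] && . "$file"
  done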
Minimalism is a valid option. But, in this case, I can’t help but wonder how to enable or disable a behavior through an environment variable, as presented in the article. Should it be configured manually before execution? If so, how do you keep track of these in order not to forget them, or not to execute them more than once when it is critical?
Thank you! Deeply inspired.
And thanks for the “. $file”; that was quite a surprise! Usually I was using “source”.
I started with something like this, but it grew when I needed to understand what was taking so much time when I started a new shell. That turned into this benchmarking code path.
In my case, it is more an exercise in proper separation than in scale. I don’t usually keep more than half a dozen very short files in the configuration dir.
More than enabling the maintenance of several config files, as I mentioned in the title, it helps me ensure that my ~/.bashrc doesn’t become a mess. It also facilitates reuse across different machines.
This is a neat way to do this, but I’m not personally a fan. I don’t want the overhead of remembering what’s where; I’d rather have one big rc file for my shell. I haven’t ever had a need to disable these things either. But things like this are why I love scripting with Bash, and I always learn something.
I bet you’d like zsh scripting even more than bash. I read the zsh docs in detail, and they offer dozens of conveniences, if not more, that improve my sanity.
I’m sure you’re right, and I really want to learn zsh scripting more thoroughly. It’s my shell, but I haven’t really taken the time to do more than its bash interop.
Well, multiple files are a mess. I tend to source machine-specific scripts from a common shell config file. I have maybe 5 or 6 different setups where I use this common set of dotfiles. Makes it portable.