1. 2

I really liked the u option and went to check my scripts, only to find out that I’ve been using it for quite a while now in set -euo pipefail. E, on the other hand, I had no idea about, and it’s very interesting.
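
For anyone else meeting these flags for the first time, a quick sketch of what each one catches (the variable name below is made up for illustration):

```shell
#!/bin/bash
# -e: abort on any failing command; -u: treat unset variables as errors;
# -o pipefail: a pipeline fails if ANY stage fails, not just the last one.

# -u turns a typo'd variable into a hard error instead of a silent empty string:
if ( set -u; : "$SOME_UNSET_VAR" ) 2>/dev/null; then
  echo "expanded fine"
else
  echo "caught unset variable"
fi

# pipefail surfaces failures hidden earlier in a pipeline:
if ( set -o pipefail; false | true ); then
  echo "pipeline reported success"
else
  echo "pipefail caught the failure"
fi
```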

1. 3

Thanks for this; however, I still can’t understand how and why GNU Stow (or other similar dotfile managers, Ansible playbooks, etc.) is better than a simple shell script. Precisely because I’m sharing my dotfiles across multiple devices, platforms, and operating systems, I want them to be as platform-agnostic and minimal as possible, and without any external dependencies. My script simply installs/symlinks everything, and later I use git pull to sync changes across machines.
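
For reference, the whole of such a script can be tiny. This is a hypothetical sketch, not my actual script; the repo layout (a directory of dotfiles to link) is an assumption:

```shell
#!/bin/sh
# Link every file under the given directory into $HOME,
# backing up anything already there. POSIX sh, no dependencies.
install_dotfiles() {
  repo=$1
  for f in "$repo"/.*; do
    [ -f "$f" ] || continue                 # skips ., .., directories, unmatched globs
    name=$(basename "$f")
    [ -e "$HOME/$name" ] && mv "$HOME/$name" "$HOME/$name.bak"
    ln -s "$f" "$HOME/$name"
  done
}

install_dotfiles "$PWD/dotfiles"
```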

1. 2

Yep I also use an install script (although mine is in Python), the reasons being:

• I need to support 3 platforms (Ubuntu/Mint, macOS, Windows), with different things to install in different ways (or not at all) depending on the platform.
• In some cases, I find it easier to procedurally generate a dotfile that will point to a resource located in your dotfiles repo, instead of symlinking everything into a fixed/hard-coded location.
• A script can also manage your sub-repos (pulling/cloning), so that everything is done in one command.
• A script can optionally do a subset of something, so that if you want to just update, say, your tmux config, you can just run that and it will just pull your tmux plugins’ repos, redo the symlinks, etc., and nothing else.
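
The last point is easy to get with a plain argument dispatch; a hypothetical sketch (target names and the echoed actions are made up):

```shell
#!/bin/sh
# Run everything by default, or just the named subset: ./install.sh tmux
target=${1:-all}

want() { [ "$target" = all ] || [ "$target" = "$1" ]; }

if want tmux; then
  echo "pulling tmux plugin repos, redoing tmux symlinks..."
fi
if want vim; then
  echo "linking vimrc, updating vim plugins..."
fi
```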
1. 1

It’s certainly not for everyone. I use it because its default behaviour matches my workflow perfectly, and there’s no need for a shell script. The only thing simpler than a simple shell script is no shell script at all!

1. 1

Unless I’m missing something, you are creating an extra dependency before you can install (and manage) your dotfiles: Stow. I.e., you have to download, compile, and install it from source, or install it via a package manager. What if you have to use systems you’re not the admin of and have no sudo rights? Or Perl is too old (a Stow requirement)? Or, or, or…

1. 15

Between OMZ making the shell take 3s to start on an SSD, and asking whether it can check for updates, I’m not sure it lasted more than 3 days on my machine. :)

More to the point: rather than starting with OMZ and asking, “what can I get rid of?” I’d encourage the opposite view of asking “what do I need?” and adding that to a simple zsh setup. Take ownership of your workspace and add only what you fully understand. I really like a simple prompt with CWD, current git branch (if any), and dirty status. My shell needs are also very simple, and I’m still learning a lot about what plain zsh can do. Add aliases as you need them instead of using giant community-driven plugins.
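
That kind of prompt needs surprisingly little code. A minimal sketch in plain shell functions (the function names are mine, not a standard API); zsh users can get the same from the built-in vcs_info module:

```shell
#!/bin/bash
# Branch name, wrapped in parens, or nothing outside a repo:
git_branch() {
  b=$(git symbolic-ref --short HEAD 2>/dev/null) && printf ' (%s)' "$b"
}

# A single * when the working tree has uncommitted changes:
git_dirty() {
  [ -n "$(git status --porcelain 2>/dev/null)" ] && printf '*'
}

# bash rebuilds PS1 before every prompt via PROMPT_COMMAND:
PROMPT_COMMAND='PS1="\w$(git_branch)$(git_dirty) ❯ "'
```

With this, the prompt shows the branch inside a repo and grows a * as soon as there are uncommitted changes; outside a repo it is just the CWD.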

I really feel like our workspaces should be slightly idiosyncratic, if only to surface better workflows over time. Frameworks cut this impulse off at the knees.

The benefit of this approach:

   ❯ time zsh -i -c exit
0.05 real         0.04 user         0.01 sys

1. 4

I really like a simple prompt with CWD, current git branch (if any), and dirty status.

In that case perhaps you’ll like my prompt. It relies on and sources these two simple functions.

Time varies from 0m0.006s to 0m0.009s.

1. 1

Yeah, I’ve always felt OMZ (and similar things for emacs, etc.) kind of miss the point.

Dot files and tool configuration allow everybody to optimize their workflow so that it works best for their needs. Maybe using a collection of other people’s customizations really works best for some people, but it seems sub-optimal in general.

1. 3

Maybe @steveno or someone else can ELI5 to me why this is advantageous over traditional, platform-agnostic, and dependency-less symlinking in a bash script? Cf. my dotfiles and the install script.

1. 3

Salt’s declarative nature means that you’re mostly describing the end state of a system, not how to get there.

So instead of saying “copy this stuff to this directory and then chmod” you say “I want this other directory to look like this”. Instead of saying “install these packages” you say “I want this to be installed”. You also get dependency management so if you (say) just want to install your SSH setup on a machine you can say to do that (and ignore your window manager conf).
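
For the record, a hypothetical Salt state illustrating that shape (the package name and paths are illustrative, not from the repository being discussed):

```yaml
# dotfiles.sls -- "I want this installed and this file to look like this"
tmux:
  pkg.installed

/home/me/.tmux.conf:
  file.managed:
    - source: salt://dotfiles/tmux.conf
    - mode: '0644'
    - require:
      - pkg: tmux
```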

If your files are grouped and organized well enough, you can apply targeted subsets of your setup to many machines based on what you want. “I want to use FF on this machine, so pull in that plus all the dependencies it needs.” “Install everything but leave out the driver conf I need for this one specific machine.”

This means that if you update these scripts, you can re-run Salt and it will run only what needs to run to hit the target state! So you get recovery from partial setups, checking for divergence between setups, etc. for free! There are dry-run capabilities too, so you can easily see what would need to change.

This is a wonderful way of keeping machines in sync.

1. 2

Looking at my repository right now, there isn’t any advantage. You could do everything I’ve done with a bash script. The beauty of this setup for me, and I really should have stated this initially, is that I can have multiple machines all share this configuration really easily. For example, my plan is to buy a Raspberry Pi and set up an encrypted DNS server. All I need to do is install salt on the Pi and it gets all of this set up just like my NUC currently has. I can then use salt to target specific machines and have it set up a lot of this for me.

1. 2

The beauty of this setup for me, and I really should have stated this initially, is that I can have multiple machines all share this configuration really easily

You can also do this with a shell script.

All I need to do is install salt

With shell scripts you don’t need to install anything.

1. 3

As I previously stated, given what’s currently in this repository, there isn’t anything here that you couldn’t do with a shell script. That’s missing the point though. Salt, or ansible, or chef, provide you with a way to manage complex setups on multiple systems. Salt specifically (because I’m not very familiar with ansible or chef) provides a lot of other convenient tools like salt-ssh or reactor as well.

1. 2

I feel like your point is just that shell script is Turing-complete. OK. The interesting questions are about which approach is better/easier/faster/safer/more powerful.

1. 2

If you’re targeting different distributions of linux or different operating systems entirely, the complexity of a bash script will start to ramp up pretty quickly.
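
For what it’s worth, this is where that branching usually starts, and it only grows from here (the package managers shown are illustrative):

```shell
#!/bin/sh
# Per-platform dispatch: the first fork a cross-platform install script takes.
case "$(uname -s)" in
  Linux)        install_cmd="sudo apt-get install -y" ;;   # or dnf, pacman, apk...
  Darwin)       install_cmd="brew install" ;;
  MINGW*|MSYS*) install_cmd="pacman -S --noconfirm" ;;
  *)            echo "unsupported platform: $(uname -s)" >&2; exit 1 ;;
esac
echo "would run: $install_cmd tmux"
```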

1. 2

I disagree: I use a shell script simply because I use a vast array of Unix operating systems, many of which don’t even support tools like salt, or simply don’t have package management at all.

1. 1

I have a POSIX sh script that I use to manage my dotfiles. Instead of it trying to actually install system packages for me, I have a ./configctl check command that just checks if certain binaries are available in the environment. I’ve found that this approach hits the sweet spot since I still get a consistent environment across machines but I don’t need to do any hairy cross-distro stuff. And I get looped in to decide what’s right for the particular machine since I’m the one actually going and installing stuff.
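
Not my actual configctl, but the check idea is simple to sketch (the binary list is illustrative):

```shell
#!/bin/sh
# Report which required binaries are present, instead of trying to install them.
check() {
  missing=0
  for bin in git tmux curl; do
    if command -v "$bin" >/dev/null 2>&1; then
      printf 'ok      %s\n' "$bin"
    else
      printf 'MISSING %s\n' "$bin"
      missing=1
    fi
  done
  return "$missing"
}

if check; then
  echo "environment ready"
else
  echo "install the missing tools for this machine"
fi
```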

2. 1

The beauty of this setup for me, and I really should have stated this initially, is that I can have multiple machines all share this configuration really easily.

Have to agree with @4ad on this one. I have to use remote machines where I don’t have sudo rights and/or which are often completely bare bones (e.g., not even git preinstalled.) My goal, in essence, is a standardized, reproducible, platform-agnostic, dependency-less dotfile environment which I can install with as few commands as possible and use as fast as possible. I don’t see how adding such a dependency benefits me in this scenario. I’m not against Ansible-like dotfile systems, but, in my opinion, using such systems for this task seems like overkill. Happy to hear otherwise, though.

1. 2

Awesome link. I didn’t know xargs could do that.

1. 2

After reading this I looked up the Compression & ControlMaster options, only to find out that I didn’t know about ProxyCommand. I’d aliased my ssh remotes in my config and I just type:

    $ <nameofhost> <nameofhostbehind>   # while the first one loads

And I get into the second one immediately. My hosts are usually 3 or 4 chars long, so this whole process was OK. But now I can just type once. A minor time saver, but cool nonetheless. How did I not know of ProxyCommand in ssh config? 🤦🏻‍♂️

1. 24

I can sympathise with jgm’s desire for a simpler spec; modern Markdown is “more congealed than designed”, to misquote Douglas Adams. However, I’m pretty sure one of Markdown’s original design goals was to make good-looking, readable plain-text documents, ever-so-slightly constraining existing conventions so that they could make good-looking, readable rich-text documents too.

To dramatically reduce ambiguities, we can remove the doubled character delimiters for strong emphasis. Instead, use a single _ for regular emphasis, and a single * for strong emphasis.

The trouble is that in plain-text documents, people traditionally used * and _ for level-one emphasis (read as “bold” and “underlined” respectively), but typographic tradition is that level-one emphasis is italic text. So “single for level-one emphasis, double for level-two emphasis” is the most natural, semantic translation.

Shortcut references like [foo] would have to be disallowed (unless we were willing to force writers to escape all literal bracket characters).

I don’t know how I missed it, but until this year I didn’t realise that shortcut references were even possible. I started off with long-form [text](url) references, which looked ugly and broke up the text, and eventually twigged to [text][tag] references, which still look weird to people who don’t know Markdown (or who haven’t seen that syntax before). Just being able to write [text] in running prose marks that text as special without overly distracting the reader, and if the next paragraph (or something at the end of the document) says [text]: http://example.com, the association should hopefully be plain.

Since we have fenced code blocks, we don’t need indented code blocks.

Fenced code blocks are weird and ugly unless you’re already familiar with Markdown, while indenting is a clear visual delimiter.

Instead of passing through raw HTML, we should introduce a special syntax that allows passing through raw content of any format.

I can appreciate this from a technical standpoint (it’s a simple rule that solves a whole class of problems!), but even without raw HTML support, Markdown is pretty heavily tied to the HTML document model. Consider AsciiDoc, which is basically Markdown but for the DocBook XML document model instead. There are definite similarities to Markdown, but the differences run much, much deeper than just what kind of raw content pass-through is allowed.

We should require a blank line between paragraph text and a list. Always.

This also is an excellent technical opinion, but click through to the OP, look at the examples of the new syntax, and tell me whether either of them looks pleasing.

Introduce a syntax for an attribute specification.

This definitely makes Markdown more flexible, but doesn’t make it any prettier to read. Also, if anything, it ties Markdown even closer to the HTML document model.

Overall, these changes would move Markdown further from being a plain-text prettifier, and closer towards being a Huffman-optimized encoding of HTML. That’s not a bad thing, and certainly it seems to be what most people who use Markdown actually want, but it’s quite different from Markdown’s original goals.

When CommonMark first began (as “Standard Markdown”), they tried to get John Gruber involved, but as I recall he refused to take part and told them not to use the name “Markdown”. I felt he was being a jerk, but having thought about it, I wonder if maybe he felt a bit like Chuck Moore about ANSI Forth: that the real value was the idea of a hastily written script that took something human-friendly and made it computer-friendly, and that making a set-in-stone Standard with Conformance Tests would be the exact opposite of Gruber’s idea of Markdown, no matter how compatible it was with his script. I imagine something like ABC notation is much closer, despite being entirely unrelated.

1. 8

You raise valid points, and I can certainly understand where you’re coming from with “[o]verall, these changes would move Markdown further from being a plain-text prettifier” (with which I would mostly concur.) I have some disagreements about fenced code blocks and about a blank line between paragraph text and a list, but they boil down to personal visual preferences. Code indenting is, in my opinion, ambiguous, and I almost always insert a blank line between paragraph text and a list item. It’s great that original Markdown allows for both.

On the other hand, I feel we have to acknowledge that original, i.e., canonical Markdown has evolved from Gruber’s original implementation and scope (simple blog posts?) to the lingua franca of text input on websites—such as the prompt I’m currently writing in—and the de facto plain text format or LaTeX pre-processor. For instance, I do all my writing in plain text Markdown and use Pandoc to produce LaTeX typeset documents, letters, and papers with bibliography. Heck, I even wrote a static blog engine on top of my publishing workflow to guarantee that one text source file produces the same document regardless of output medium (PDF, web, &c.) (The OCD kicked in, lol.) I am a big believer in plain text and separating content from formatting, and Markdown is a great, modern plain text format.

Ultimately, I think we have to cater for and balance both realities: Gruber’s original simplicity of the format, which, notwithstanding its ambiguity, should be the guiding principle, and the evolved scope of present-day Markdown. And, boy, that’s a tough one.

1. 4

I definitely agree. Like all successful technologies, Markdown has grown well beyond its creator’s intended limits, and as much as I miss pretty plain text I much prefer Markdown to BBCode or the rich-text editing features browsers provide. reStructuredText is a lightweight markup explicitly designed to support multi-target output and the kind of extensibility people want for Markdown, but I find it awkward and pedantic and I’d much rather use Markdown any day.

I’m curious, though: you say “Code indenting is in my opinion ambiguous”, how so?

1. 2

First off, I forgot to mention that I always use [link][tag] link references and didn’t know about tagless ones. Neat.

Regarding your question: between tab- and space-based indentation (which may render the source text differently when cat-ing it or writing in $EDITOR), and cases like list-nested blocks (should I indent again or maintain the current indentation level?), I feel you have to worry about how different current and future Markdown interpreters will interpret the document. Instead, I find the fencing solution (with backticks, for the record) better and clearer: wherever I am in the document, I simply fence the code and am done with it—I don’t have to count or check the indentation.

1. 1

Follow-up: also what @myfreeweb and @Forty-Bot wrote below.

2. 4

[text][tag] references

Ha! I only knew about the shortcut [text] references, TIL on the [text][tag] one :D

indenting is a clear visual delimiter

Indenting can be a pain to work with though. Especially in web comment fields!

1. 4

Indenting can be a pain to work with though. Especially in web comment fields!

I end up having to paste text in and out of vim to format code correctly. Backticks are a much better solution imo.

1. 1

One of these days I’ll get around to writing a replacement for the “It’s All Text” extension that works with modern Firefox, then I can write all my comments in a real text editor!

1. 3

I’ve seen a few around, but haven’t found the time to test any of them to see how well they work.

1. 15

Thanks for the write-up and thanks to @alynpost for graciously hosting us.

1. 7

You’re welcome. I’m glad to help.

1. 1

Chapeau.

1. 7

This is like z, which I’ve been using for years. Great tool.

https://github.com/rupa/z.git

1. 2

+1. I like z better, as it’s implemented in pure shell: it works anywhere with no dependencies.

1. 1

+1 for exactly the same reasons. z is life.

1. 1

Have a look at z. This is my tool of choice across an array of *nix systems.

1. 2

Cool idea, OP. I automated your solution in a bash function.

encrypt() {
    file=$1
    filename=$(basename "$file")    # filename without path
    ext="${filename##*.}"           # file extension
    name="${filename%.*}"           # filename without extension
    directory=$(dirname "$file")    # directory path
    payload=$directory/$name-e.$ext

    if [[ $# -eq 1 ]]; then
        read -r -s -p "Encryption password: " filepasswd
        qpdf --encrypt "$filepasswd" ' ' 256 -- "$file" "$payload"
        echo -e "\nEncryption successful!"
    else
        echo "Missing parameter or wrong syntax. Needs one file name."
        echo "Usage: encrypt file"
    fi
}
1. 2

This seems very nice. In fact, I will adopt some aspects of Ivy into athena. I wonder, though, about two things. One, why the author relies on Python’s Markdown package (which is very minimal and quirky) and not on, say, Pandoc. Two, Ivy claims “it’s [. . .] suited to building project documentation” – I wonder how it’d go about building a blog (since its YAML values seem to support it.)

1. 2

tl;dr:

1. Watch as iPhone’s signal (as in worthiness) gatekeeper;
2. Silent iOS notifications (texts, calls, apps;)
3. No Lock Screen iOS notifications (but for a few exceptions;)
4. Extremely conservative with allowing apps to send notifications.
1. 1

Maybe Stallman was right about Javascript after all.

1. 2

If there were no JavaScript, most users would use other interactive formats with non-trivial code components: Flash, or the good old native app.

All these allow for exactly that.

1. 1

I’m curious about athena’s support for ET-styled PDF and LaTeX output. Can you give a PDF example of one of these essays on the demo site?

1. 1

athena doesn’t support PDF output; it only converts to HTML. My personal Pandoc Markdown to PDF via LaTeX script produces documents like this (.md source) when I export to an ET template.

1. 2

This reminds me of Pelican. What’s the main difference?

1. 2

As far as I’m aware, pelican doesn’t support Pandoc out of the box (the most crucial tool in my writing infrastructure outside of athena;) one would need to download and install a plugin. athena started as a pet project to scratch my own itch and in the process I thought of releasing it publicly as well since it’s a great playground to experiment with ET’s ideas and SSGs while incorporating my personal Pandoc (academic and casual) publishing workflow. My main goal was to create one workflow to write plain text docs and be able to publish to PDF via LaTeX (Tufte layout or not,) HTML (same,) slides, letters, &c without (or with minimal) changes in document structure. athena is responsible for the HTML and blog in my setup. Moreover, athena tries to be as minimal with as few dependencies and options as possible.

1. 1

Installation is a bit too complex for a minimal blog generator. Maybe create a Homebrew formula for it?

1. 3

While I’d argue it isn’t that complex, you’re generally right – I plan to automate the installation and customization process.

1. 2

nix / nixos could probably solve this problem quite nicely.

1. 5

I like this idea a lot and have been working on my own static site lately. I wonder if there will be a set of themes that cascade (theme file->user override) similar to other static site builders like Hugo. I think it’s really appealing for folks to take their content and try it out with several themes to decide on a starting point that nails down the basics for them. Plus you can build a gallery of themes, which is great marketing!

The demo pages do have some broken bits BTW. For example on this page I get a 404 for an image and the equation is not being rendered correctly.

1. 3

Author / OP here. That’s an interesting idea; I’ve discussed building such a gallery with a friend in the past, along with further automating the theme / layout selection. Re: image 404, permalinks – fixed.

1. 2

The MathML output looks broken on the demo site here. I’m on Chrome on macOS.

1. 5

Author / OP here. Chrome ditched MathML for MathJax a little while ago [link] and I have not found a way yet to support both. (Relevant.)

1. 2

That’s a damn shame!