1. 15
  1. 5

    My rule of thumb: if it’s not in a VCS, it doesn’t exist.

    1. 1

      Honestly, this is the big reason why I like Docker. I don’t want to have to remember how to configure anything. Let the computer figure it out.

    2. 4

      Documenting learning processes lessens required relearning. (This would make a good acronym)

      dole prole rere

      1. 3

        Quidquid Sus-Latine dictum, altum videtur (“whatever is said in Pig Latin sounds profound”)

      2. 4

        Having a Makefile or similar is a really good idea: it’s easier and less error-prone than having to remember a whole bunch of steps each time. I think the author would have benefitted from going a bit further: running make via a VCS hook. That consolidates two steps, and ties the necessity of running make with the nice-to-have of VCS (which we might otherwise forget or avoid). Note that this also requires breaking the habit of running make manually; if it’s deeply ingrained in our muscle memory we could try changing the target name, to break our “autopilot”.

        For a site that lives in a single directory on a single machine, it’s probably easiest to publish via a post-commit hook. My site has a few remotes which I push to as backups/mirrors, so I have one of those publish the site via a post-receive hook; this lets me commit early and often, without worrying about half-finished things being published (although I also have a separate directory for unfinished work, that I can git mv into place when I’m happy with it).
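        A post-commit hook is just an executable script in the repo’s hooks directory; a minimal sketch of the single-directory case (the `make publish` target is an assumption — substitute whatever builds your site):

        ```shell
        #!/bin/sh
        # .git/hooks/post-commit -- git runs this after every commit in the
        # working copy. Enable it with: chmod +x .git/hooks/post-commit
        # `make publish` is an assumed target; swap in your own build command.
        make publish
        ```
        
        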

        1. 2

          :O this is wonderful, I totally forgot about commit hooks. Thank you for this tip! I also have a similar setup, frontend for dev and public for build. I’m going to look into this further. https://githooks.com/ seems like a pretty good resource.

          Do you have a proprietary server running your site, and hence have control of server-side hooks?

          I feel that, to prevent publishing unpublishable things, I either have to come up with some protocol for determining whether a commit contains unpublishable things, so as not to publish when that commit is pushed, or continue doing it manually; especially since my original problem was not committing, not forgetting to publish.

          Since my site lives in S3 I could probably leverage GitHub webhooks/lambda to further automate, hmmmm

          1. 2

            I don’t use “server-side hooks”; I push changes from one place on my laptop to another ;)

            I make changes to my site via a working copy at ~/blog, which pushes to a bare clone at ~/Programming/repos/chriswarbo-net.git. That bare clone has a post-receive hook which publishes the site; it also propagates those new commits to a copy on my server and a mirror on github. I actually manage all of my git repos this way; although none of my other projects are Web sites so they don’t do the publishing step.
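            A sketch of that post-receive hook (the publish script and the remote names are assumptions; the paths are from above):

            ```shell
            #!/bin/sh
            # hooks/post-receive in the bare clone at
            # ~/Programming/repos/chriswarbo-net.git (chmod +x to enable).
            # Git runs this after every push from ~/blog.
            set -e
            ./publish-site.sh       # assumed: whatever renders and uploads the site
            git push server master  # assumed remote: the copy on the server
            git push github master  # assumed remote: the GitHub mirror
            ```
            
            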

            Regarding unpublishable things: I just stick them in a directory called /unfinished which isn’t linked to from other pages. When something’s finished I’ll move it to a location which is linked to (either /blog or /projects).

            1. 1

              ¡Nice, that’s clever, I think I’m going to adopt/steal that approach!

          2. 2

            Having a Makefile or similar is a really good idea:

            Make is an amazingly powerful tool as long as you don’t stray too far from its core competency of turning $this_file.a into $this_file.b and building a graph of the dependencies and processes for doing that. When your Makefile has more dummy targets than real ones, that’s a good sign you should have just written a shell script instead. (I’m looking at you, Pelican.)
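            That core competency is exactly what Make’s pattern rules express; a minimal sketch (the directory layout and the pandoc invocation are illustrative):

            ```make
            # Rebuild only the pages whose Markdown sources changed.
            SOURCES := $(wildcard pages/*.md)
            PAGES   := $(SOURCES:pages/%.md=site/%.html)

            .PHONY: all
            all: $(PAGES)

            site/%.html: pages/%.md
            	pandoc --standalone -o $@ $<
            ```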

            1. 1

              Oh sure, by “or similar” I just meant a single command, needing no arguments, to build+test+push+etc. which can be easily extended. A script will do, or a complex build system du jour will do; although Make is (probably) fine.

              I used to use Make for my site (which I render from Markdown using Pandoc), but ended up with two problems:

              • The Makefile became very complicated, as I tried to avoid repeating myself by calculating filenames, dependencies of index pages, etc. It seemed to work, but I had to learn a lot about (GNU) Make’s special variables, evaluation order, multiple-escaping, recursive invocation, etc.
              • The output directory would sometimes be stale, for reasons I couldn’t figure out; so I got into the habit of deleting it first, which loses the only real advantage of Make. This is especially bad since some pages take a long time to render, since they do a bunch of computation during the rendering.

              I now use Nix, since I was already using it for per-page dependencies, and its language is saner than Make’s.

            2. 2

              Another way to do this would be to just have the Makefile test whether the VCS is in a clean state. If it’s not, you can exit and fail to run with a message, “Hey you! commit first!”, or just a warning if you want :)

              a git example:

              git status | grep clean
              

              will exit 1 if clean is not found, and Make will then of course exit as well. I’m sure there is a better way, but the above works quite well in practice. (Obviously there are edge cases: like having a new file with the name clean that is not yet committed)
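              A slightly more robust variant of that check: `git status --porcelain` prints nothing at all when the tree is clean, so testing for empty output avoids grepping human-readable text (and the untracked-file-named-clean edge case):

              ```shell
              # Fail unless the working tree is clean.
              if [ -n "$(git status --porcelain)" ]; then
                echo "Hey you! commit first!" >&2
                exit 1
              fi
              ```
              
              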

            3. 4

              Downloading the live site works, but you really want a regular backup of your entire workstation (not just hopefully-versioned directories). Tarsnap works well for me, but there are a lot of other options (many open source).

              1. 2

                “regular” being the key word here. Tarsnap looks cool; perhaps, though, I could just have an encrypt-compress-and-push-to-S3 script on cron to save me a few picodollars. But then again, maybe being a cheapskate on secure redundancy isn’t such a good move.
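                For what it’s worth, a cron-able sketch of such a script (the bucket name and passphrase file are made up; assumes the aws CLI and gpg are installed):

                ```shell
                #!/bin/sh
                # Tar the home directory, encrypt it symmetrically, stream it to S3.
                set -e
                stamp=$(date +%F)
                tar -czf - "$HOME" \
                  | gpg --batch --symmetric --passphrase-file "$HOME/.backup-pass" \
                  | aws s3 cp - "s3://my-backup-bucket/home-$stamp.tar.gz.gpg"
                ```

                A crontab entry like `0 3 * * * /path/to/backup.sh` is what makes it “regular”.
                
                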