1. 36
    1. 40

      If all the author needed was a blog, maybe the problem is that his tech stack is way too big for his needs? A bunch of generated HTML files behind an Nginx server would not have required this amount of maintenance work.
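
      As a minimal sketch of what that could look like (the domain and paths here are made up), the entire server config fits in one block:

      ```nginx
      # Hypothetical minimal setup: serve a directory of generated HTML files.
      server {
          listen 80;
          server_name blog.example.com;   # assumed domain
          root /var/www/blog;             # the generated HTML lives here
          index index.html;

          location / {
              # Exact file, then directory index, else 404
              try_files $uri $uri/ =404;
          }
      }
      ```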

      Is the caching of images at the edge really necessary? So what if they take a little while to load. Just by not having to load a front-end framework and make 10 API calls before anything is displayed, the site will already load faster than many popular sites.

      If the whole point is to have fun and learn stuff, the busywork is the very point, of course. Yet all this seems to be the very definition of non-value-added work.

      1. 13

        At the end he says

        I know that I could put this burden down. I have a mentor making excellent and sober use of Squarespace for his professional domain - and the results look great. I read his blog posts myself and think that they look good! It doesn’t have to be like this. […]

        And that’s exactly why I do it. It’s one of the best projects I’ve ever created.

        So I think the whole point is to have fun and learn stuff.

        1. 7

          Inventing your own static site generator is also a lot of fun. And because all the hard work is done outside the serving path, there’s much less production maintenance to do.

          1. 14

            Different people find different things fun

          2. 1

            IMO if you do it right, inventing your own static site generator is only fun for about half a day tops. Because it only takes a couple hours. :)

            1. 2

              Not if you decide to write your own CommonMark compliant Markdown parser :]

              1. 2

                Pandoc is right there.

              2. 1

                I’ve been seriously considering dropping Markdown and just transforming HTML into HTML by defining custom tags. Or finally learning XSLT and using that, and exposing stuff like transforming LaTeX math into MathML via custom functions.

      2. 9
        • Node.js or package.json or Vue.js or Nuxt.js issues or Ubuntu C library issues
        • CVEs that force me to bump some obscure dependency past the last version that works in my current setup
        • Debugging and customizing pre-built CSS frameworks

        All of these can be done away with.

        I understand that the point may be to explore new tech with a purposefully over-engineered solution, but if the point is learning, surely the “lesson learned” should be that this kind of tech has real downsides, for the reasons the author points out and more. Dependencies, especially in the web ecosystem, are often expensive, much more so than you would think. Don’t use them unless you have to.

        Static html and simple CSS are not just the preference of grumpy devs set in their ways. They really are easier to maintain.

      3. 5

        There’s several schools of thought with regards to website optimization. One of them is that if images load quickly, you have a much lower bounce-rate (or people that run away screaming), meaning that you get more readers. Based on the stack the article describes, it does seem a little much, but he’s able to justify it. A lot of personal sites are really passion projects that won’t really work when scaled to normal production workloads, but that’s fine.

        I kinda treat my website and its supporting infrastructure the same way, a lot of it is really there to help me explore the problem spaces involved. I chose to use Rust for my website, and that seems to have a lot less ecosystem churn/toil than the frontend ecosystem does. I only really have to fix things when bumping packages about once per quarter, and that’s usually about when I’m going to be improving the site anyways.

        There is a happy medium to be found, but if they wanna do some dumb shit to see how things work in practice, more power to them.

      4. 4

        A bunch of generated HTML files behind an Nginx server would not have required this amount of maintenance work.

        Sometimes we need a tiny bit more flexibility than that. To this day I don’t know how to enable content negotiation with Nginx like I used to do with Apache. Say I have two files, my_article.fr.html, and my_article.en.html. I want to serve them under https://example.com/my_article, English by default, French if the user’s browser prefers it over English. How do I do that? Right now short of falling back to Apache I’m genuinely considering writing my own web server (though I don’t really want to, because of TLS).
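
        For what it’s worth, a crude approximation is possible in Nginx with a map on the Accept-Language header. This is a hypothetical sketch, nowhere near Apache’s mod_negotiation (no q-value parsing or quality ordering; it just checks whether “fr” appears at all):

        ```nginx
        # Hypothetical sketch, not real content negotiation: pick a language
        # suffix by regex-matching the raw Accept-Language header.
        map $http_accept_language $lang {
            default en;
            ~fr     fr;    # any mention of French wins; refine as needed
        }

        server {
            listen 80;
            server_name example.com;
            root /var/www/site;

            location / {
                # /my_article -> /my_article.fr.html or /my_article.en.html
                try_files $uri $uri.$lang.html $uri.en.html =404;
            }
        }
        ```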

        This is the only complication I would like to address; it seems pretty basic (surely there are lots of multilingual websites out there), and I would have guessed the original dev, not being American, would have thought of linguistic issues. Haven’t they, or did I miss something?

        1. 4

          Automatic content negotiation sucks though? It’s fine as a default first run behavior, but as someone who lived in Japan and often used the school computers, you really, really need there to also be a button on the site to explicitly pick your language instead of just assuming that the browser already knows your preference. At that point, you can probably just put some JS on a static page and have it store the language preference in localStorage or something.

          1. 1

            There’s a way to bypass it: in addition to

            https://example.com/my_article

            Also serve

            https://example.com/my_article.en
            https://example.com/my_article.fr

            And generate a bit of HTML boilerplate to let the user access the one they want. And perhaps remember their last choice in a cookie. (I would like to avoid JavaScript as much as possible.)

            1. 1

              If JS isn’t a deal breaker, you can make my_article a blank page that JS redirects to a language specific page. You can use <noscript> to have it reveal links to those pages for people with JS turned off.
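
              A hypothetical sketch of such a page (the paths mirror the my_article example upthread; the localStorage key is made up):

              ```html
              <!-- my_article: blank page that forwards to a language-specific one -->
              <script>
                // Use a previously saved choice if present, else the browser language
                var lang = localStorage.getItem("lang") ||
                           (navigator.language || "en").slice(0, 2);
                location.replace("/my_article." + (lang === "fr" ? "fr" : "en") + ".html");
              </script>
              <noscript>
                <!-- With JS off, reveal the links instead of redirecting -->
                <a href="/my_article.en.html">English</a> |
                <a href="/my_article.fr.html">Français</a>
              </noscript>
              ```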

          2. 1

            Browsers have had multiple user profiles with different settings available, for more than a decade now (in the case of Firefox I distinctly remember there being a profile chooser box on startup in 2001–2).

            1. 2

              Which is fine if you can actually make a profile to suit your needs. If you cannot make a profile, you are stuck with whatever settings the browser has, and you get gibberish in response as you might not understand the local language.

              1. 1

                Look, the browser is a user agent. It’s supposed to work for the user and be adaptable to their needs. If there are that many restrictions on it, then you don’t have a viable user agent in the first place and there’s nothing that web standards can do about that.

            2. 1

              The initial release of Firefox was 2004. Did you typo 2011 or mean one of its predecessor browsers?

              1. 2

                Yeah I’m probably thinking of Phoenix.

        2. 3

          There’s no easy way, AFAIK - you either run a Perl server to get redirects or add an extra module (although if you were doing that, I’d add the Lua module, which gives you much more freedom to do these kinds of shenanigans).

        3. 1

          Caddy allows you to match HTTP headers, and you can probably achieve what you want with a bunch of horrible rewrite rules.
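
          For example, a hypothetical Caddyfile along these lines (the matcher is deliberately crude: it only fires when the Accept-Language value starts with “fr”):

          ```
          example.com {
              root * /var/www/site

              # Crude negotiation: French file if the header leads with "fr"
              @fr {
                  path /my_article
                  header Accept-Language fr*
              }
              rewrite @fr /my_article.fr.html
              rewrite /my_article /my_article.en.html

              file_server
          }
          ```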

          You can always roll your own HTTP server and put it behind Caddy or whatever TLS-capable HTTP server.

        4. 1

          You could put Apache behind Nginx; I’ve done that before, and I might do it again.

          • I prefer nginx for high load; it’s great with static files.
          • apache config for some things - redirects, htaccess, I think? - feels easier.

          It’s been quite a while since I delved in on these.

    2. 9

      Remove “npm” from the equation, and most (all?) of the issues the article complains about will be gone?

    3. 8

      And that’s exactly why I do it. It’s one of the best projects I’ve ever created.

      Maybe it is just “busywork”. It can feel good. You are doing stuff! But not actually moving things forward.

      1. 24

        I usually roll my eyes at people bringing existential thoughts into play but “moving things forward” sounds like a burnout-inducing mentality for personal projects. Moving things forward… toward what? What’s wrong with cultivating a garden that will die with you, occasionally sharing the fruit it produces with others? You think any of this stuff we’re creating will be used even ten years from now?

        1. 9

          I’m all for people learning without being “productive”, but the things he’s describing learning about don’t feel meaningful in any way, unless the point is to develop cynicism towards contemporary frontend practices and a deeper understanding of the tremendous amount of time wasted by the assumption that everything you do must involve npm somehow.

          You think any of this stuff we’re creating will be used even ten years from now?

          Yeah, I started my blog in 2004, so I imagine it will still be updated in 2034. Why not? It’s just a bunch of HTML files served up by Apache.

          1. 2

            Yeah, I started my blog in 2004, so I imagine it will still be updated in 2034. Why not? It’s just a bunch of HTML files served up by Apache.

            Who’s running the server if you get hit by a bus?

            1. 2

              I’m not saying I have some immortal timeless wisdom on my blog that future generations must have access to.

              I’m saying ten years is not a long period of time for this unless you have the attention span of an npm user.

          2. 1

            It is quite rare that I read a blog post from 2013, relative to the number of blog posts published in 2013.

        2. 2

          I met a woman who wrote C for New Horizons and the idea of waiting almost a decade for my software to be used was kind of mind boggling to me as a web person. 😆

    4. 6

      It sucks to read too, with the garish background, web fonts which render worse than native fonts, and the whole page moving when you move your mouse toward the top of the window. Unfortunately for us, putting up with these headaches doesn’t develop marketable skills the way maintenance headaches do.

    5. 5

      Feels like all these issues aside from DNS would be avoided by using a static site compiler (that is, a native binary like Zola or Hugo)… the author did mention a “highly customised personal website”, so maybe it really does need a server-side component, but if not, I think there are simpler options.

    6. 5

      TL;DR when choosing to learn, plan for how long to let the product of that learning serve you so it does not burden your future learning.

      1. Fast.
      2. Cheap.
      3. High quality.

      We say “pick two,” but when you roll your own anything as a learning activity, you’re probably choosing none. And that’s OK. At least, it’s OK until the context in which you created the thing no longer exists: spare time.

      I won’t represent myself as having “done it all,” but I’ve done a lot and learned some hard lessons:

      1. self-hosted email when it was complicated, but before it was hard — the dedicated server’s hard drive died, and I found out the hard way that my backups hadn’t fired off for two years despite an email notification saying success.
      2. self-hosted websites for others on the first wave of virtual private servers, then took ownership of a dedicated server that was colocated in the same rack spot but whose hardware I had never seen — how do you tell friends whose website you’ve hosted for years about № 1 above, only to learn they’d always edited on the server directly and had no local copy of their site?
      3. rolled my own static site generator based on someone else’s bash kludge from 2000 and used it as an angsty, emo blog for four years of college
      4. hosted a WordPress blog that taught how Swiss-cheese WordPress used to be
      5. built awesome VPN and containers VMs for my homelab — a bug in QNAP’s SSD caching corrupted a RAID array, and something corrupted the VM disks leaving them unsalvageable, and I’ve still not rebuilt the containers VM 8 months later, even after I got the NAS back to normalcy after a one week RAID rebuild. Why? I don’t have time for this. *plays another round of Overwatch*
      6. Bought hundreds of dollars in homelab stuff to build a cool cluster of sorts — *takes on more side projects to help others with things*

      It’s not all bad. I’ve built desktops that are fine many years later. I’ve got a 16 TB NAS that is 12 years old with one working sysfan… but it might be the next to fall to lack of maintenance and take 16 years of accumulated crap with it… fortunately, I’ve got backups of the important stuff, and I verify those backups regularly. I learned!

      The context changed, and I didn’t make the time for maintenance and in some cases, execution beyond gathering materials. For some things, I’ve gotten pretty good at recovering from failure. I got the VPN VM back up in an evening. I’ve learned to plan for long-term maintenance of any infrastructure I build myself. That’s really what I’ve learned alongside a variety of development, sysadmin, and entrepreneur things that constitute my professional life.

    7. 5

      For what it’s worth, my personal site is no paragon of perfection, but it is relatively simple and requires virtually no maintenance. It loads reasonably fast. It has no caching.

      HTML is generated from Markdown with Hugo, and I serve that HTML with busybox httpd. Whenever I commit to the site’s GitHub repo, an Action runs the Hugo build and creates a Docker image from that build. The Docker image can run anywhere.
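
      A sketch of that kind of image (the builder image, paths, and port are assumptions, not necessarily the setup described):

      ```dockerfile
      # Stage 1: build the static site with Hugo
      # (hugomods/hugo is an assumed community builder image)
      FROM hugomods/hugo:latest AS build
      WORKDIR /src
      COPY . .
      RUN hugo --minify              # output lands in /src/public

      # Stage 2: serve the result with busybox httpd
      FROM busybox:stable
      COPY --from=build /src/public /www
      EXPOSE 80
      # -f keeps httpd in the foreground so the container stays alive
      CMD ["httpd", "-f", "-p", "80", "-h", "/www"]
      ```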

      Anyway, all this to say that its container has 128 MHz of CPU allocated to it and spent all of yesterday on the front page of Hacker News without a blip, running on a dirt-cheap Hetzner node. Simple things don’t require many resources.

      I say without a blip because there were no performance blips. However, I intentionally don’t make it Highly Available, so when I deploy a new container, visitors get a 404 for a couple seconds. That’s fine. It’s not a Fortune 500 personal site.

      1. 2

        I’ve been using Hugo for my personal blog since 2016, which in a shocking display of rudeness was 7 years ago. The JS is just a vanilla show/hide toggle, so there’s basically no maintenance to speak of.

    8. 5

      This blog post reads as Stockholm Syndrome to me. Most stacks don’t have anywhere near as much instability and dependency breakage as npm.

    9. 4

      My blog is a C program running via CGI on Apache, and has been since 1999, so late-90s tech (which is itself using 80s tech). I can update my blog through the web (used only a few times), from a file on the server (pretty easy), or via email (my preferred choice, made easier because I run my own email server). All of this was set up years ago, and it’s still running smoothly.

      The rest of my site is a static site based off XML files, and uses XSLT (via xsltproc) to generate the HTML. I have a script that picks up changed HTML files (based on hash, not timestamp, as all the HTML is “new”), then uses rsync to copy the updated files to the server. So, early-2000s tech here. It works for me, and I did this so I could have consistent navigation throughout the entire site, but had I known what I was getting myself into with XSLT, I might not have done so. There’s still just enough friction (having to edit XML) that I don’t update it as often as I should, but aside from that, it’s easy enough to use.
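
      The hash-not-timestamp check can be sketched with plain coreutils. This is a hypothetical reconstruction, not my actual script: since every build rewrites every HTML file, only a content-hash comparison against the previous build’s manifest reveals what really changed:

      ```shell
      #!/usr/bin/env bash
      # Demo setup: pretend these are the outputs of the previous build
      mkdir -p build
      printf '<p>one</p>\n' > build/a.html
      printf '<p>two</p>\n' > build/b.html
      sha256sum build/*.html > old.sha           # manifest from the previous build

      # "Rebuild": every file is rewritten, but only b.html actually changes
      printf '<p>one</p>\n' > build/a.html
      printf '<p>TWO</p>\n' > build/b.html

      # Files whose new hash is absent from the old manifest really changed;
      # those are the only ones worth passing to rsync.
      changed=$(sha256sum build/*.html \
                | grep -v -F -f <(cut -d' ' -f1 old.sha) \
                | awk '{print $2}')
      echo "$changed"                            # prints: build/b.html
      ```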

    10. 2

      I’ve been having fun writing my own static site generator for my blog: https://github.com/mk12/blog/tree/new. Some interesting parts:

      • Having make only rebuild pages that actually changed. This was tricky because posts link to the next/previous post, but I don’t want editing one post to force rebuilding all of them.
      • Wrote my own templating engine because I didn’t like any of the existing ones.
      • Wrote a Unix stream socket client/server so that some parts can be implemented in different languages without having to spawn a new process for every call.

      Like with the author’s blog, most of this is completely unnecessary to get words on the Internet. But I enjoy it.

    11. 2

      The Vue 2 to 3 transition somehow exactly repeated the error of Python’s 2 to 3 transition. :-( It seems to have snapped back faster than Python did, with only a two- or three-year loss of productivity, but it still sucks. The lessons from those transitions are really, really clear:

      • An automated tool to move some stuff from version N to N+1 isn’t going to be good enough.
      • You can break backwards compatibility, fine.
      • Do not under any circumstances break forward compatibility.

      Once forward compatibility is broken, you’ve just slammed the brakes on the whole ecosystem. Instead of upgrading to N+1, finding all the spots using the deprecated API, and then fixing them, you have to somehow do the upgrade and the fixes simultaneously. It doesn’t work.

      It also means all of your dependency roots need to upgrade first (and break all their dependents!) before you can upgrade.

      I can see why it was a tempting move for Vue, since breaking forwards compatibility had real advantages in file size, but it was ultimately a big mistake.

      1. 1

        Did it have the same reasons that made Python 2 to 3’s transition necessary, though?

        1. 1

          Not really. Evan You rebuilt the core of Vue and added React-hooks-style composition functions, but there were also some breaking changes that broke things purely to make the API nicer or more logical. One example off the top of my head: in Vue 2, if you have a ref named x on an HTML element in a loop, $refs.x is an array, but in Vue 3 you just aren’t allowed to do that. Instead there’s some more flexible system you can tap into and do other stuff with, but by default it won’t work.

          Again though it’s fine to break backwards compatibility as long as you have forward compatibility, so the solution should have been:

          • Figure out the new core and APIs
          • Once those are stabilized, backport them to work with Vue 2.
          • Encourage everyone to write code that is compatible with both Vue 2.X and Vue 3 during the transition period.
    12. [Comment removed by author]

    13. [Comment removed by author]

      1. 2

        I think the issue is just that the JS ecosystem is used to having breaking changes every six months for some reason, so if you write your blog in, say, Nuxt today, it will go through breaking changes every year or two. It’s a lot of churn if you’re just a hobbyist.

        1. [Comment removed by author]