1. 10
  1. 13

    If the operation of your blog depends on a monthly certificate renewal from Let’s Encrypt – even if the renewal is set up to be automatic – it is much less sustainable than serving your site over HTTP.

    The operation of my blog depends on me paying my hosting company to handle a truly mind-boggling array of things for me, including but by no means limited to:

    • paying the power company for electricity for the server
    • battery backup for when the power cuts out
    • taking care of the HVAC in the data center
    • fire suppression systems
    • keeping a redundant internet connection hooked up to the server
    • running DNS servers
    • keeping an SSH server running so I can update the files
    • patching the OS and HTTP server software with the latest security updates
    • and a lot more

    On top of this enormous stack of dependencies, complaining about the fact that my hosting company needs to make a Let’s Encrypt API call every few months seems very overblown.

    While it is easy to migrate the site to a new server, it is not effortless to update the contents of the site, because you can’t re-generate a Jekyll-driven site if you don’t have Jekyll installed on the computer.

    This is a strike against Jekyll (and Ruby programs in general) for being very tedious to install, not against the idea of generated sites. I use GNU M4 and Make, so I can regenerate my site on basically any machine.

    With that said, there are other things you can generally count on being available without extra setup. One such thing is PHP.

    Disabling PHP in the .htaccess file is always the first thing I do when I set up a new site. It’s just not worth the security risk. In the limited cases where I need dynamism (for instance, the signup form for a conference I ran last year), I have taken to using CGI scripts written in Lua that I individually whitelist. These can trivially be compiled to standalone executables, so there is no dependency on having Lua installed on the server. If there were a security flaw in one of these, the blast radius would be much more contained, as they are not used for every single page on the site.
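
    As a sketch, the PHP-off-by-default setup described above might look something like this in an Apache .htaccess file. Directive availability depends on the host’s configuration (php_flag only works under mod_php, and requires AllowOverride to permit it), and the signup.cgi name is hypothetical:

    ```apache
    # Disable the PHP engine for this site (mod_php only)
    php_flag engine off

    # Serve stray .php files as plain text rather than executing them
    RemoveHandler .php
    AddType text/plain .php

    # Whitelist one compiled CGI executable
    <Files "signup.cgi">
        Options +ExecCGI
        SetHandler cgi-script
    </Files>
    ```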

    That said, other than these objections I agreed with the bulk of the points made.

    1. 6

      On top of this enormous stack of dependencies, complaining about the fact that my hosting company needs to make a Let’s Encrypt API call every few months seems very overblown.

      Not to mention that “I’m just a static site, why would I need HTTPS” arguments miss out on all sorts of ISP content-injection nastiness that only work on unsecured HTTP. Turning on HTTPS protects your site’s users from that and is well worth the $0 and “just check this box” integration many hosting providers now have with Let’s Encrypt.

      1. 1

        Sure, some parts of my argument don’t apply if you have a hosting provider that handles certificate renewal for you. Though I personally also care about my site being accessible from older browsers that don’t support modern encryption. My compromise is that my site is accessible via HTTPS with a self-signed certificate (which I should update, come to think of it…), but it’s optional.

        1. 3

          Please don’t try to support old web browsers. Supporting new but less featureful web browsers, such as elinks, is a laudable goal, but pretty much any old web browser that doesn’t support modern TLS cipher suites also has remote arbitrary-code execution vulnerabilities. By catering to those users, you’re encouraging them to say ‘I don’t need to upgrade, it works for everything I need’, right up until they find drive-by ransomware installed on their machine. You’re not doing people any favours by encouraging them to use a vulnerable piece of software to connect to the Internet and parse untrusted data.

          1. 1

            I disagree completely, because I like using older computers (and operating systems) myself and enjoy browsing trusted sites on them. There is no reason that a text-only blog should be inaccessible from older browsers.

            And I don’t like the argument that supporting older browsers encourages people not to upgrade their operating systems. The vast majority of people update their operating system rather often, because they replace their computers rather often. They don’t need to be baby-sat by web developers.

            I think deliberately preventing older browsers from accessing your site never helps anyone, only hurts people.

            1. 2

              I disagree completely, because I like using older computers (and operating systems) myself and enjoy browsing trusted sites on them.

              There’s no such thing as a trusted HTTP site, because any node on the (definitely untrusted) Internet between you and the server can tamper with the content in a way that is not detectable and injects exploit code into the data that you’re parsing. Older versions of SSL / TLS also have sufficient known vulnerabilities that they’re not much better than plain HTTP from this perspective anymore.

              And I don’t like the argument that supporting older browsers encourages people to not upgrade their operating systems

              I didn’t say anything about operating systems. Supporting older browsers encourages people not to upgrade their browser. Any browser from even 2-3 years ago almost certainly has known arbitrary code execution vulnerabilities that any site it connects to can exploit. Even if you think you’re only going to trusted sites, one of those could be compromised, one of the network hops could be malicious, or one of those sites could include a resource (picture, ad, iframe) from a site that is compromised or malicious. Or you may be connecting to an entirely unexpected site (I’m guessing that those old browsers are also not validating DNSSEC, so who knows which IP address they actually connect to for a given hostname).

              I think deliberately preventing older browsers from accessing your site never helps anyone, only hurts people.

              Actively supporting people doing something that has the potential to injure them does not help them. It’s no different from encouraging people to use cars that are old enough not to have seat belts, airbags or roll cages.

              1. 1

                There’s no such thing as a trusted HTTP site, because any node on the (definitely untrusted) Internet between you and the server can tamper with the content in a way that is not detectable and injects exploit code into the data that you’re parsing.

                How many nodes are there between your browser and my web server? How often have you experienced such injections? I’ve never seen it. It’s funny how the web worked fine for years before everyone started enforcing HTTPS, yet suddenly, using HTTP is treated as though it’s the same as knowingly installing viruses on your computer.

                It’s no different from encouraging people to use cars that are old enough not to have seat belts, airbags or roll cages.

                If we go with this analogy, I have a car without an airbag, and I would not appreciate it if some road owners prevented my access to their roads for that reason alone, especially since my not using an airbag (not using HTTPS) doesn’t hurt the other drivers (site visitors).

                1. 2

                  How many nodes are there between your browser and my web server?

                  I don’t know where your web server is, but typically something on the order of 10-20.

                  How often have you experienced such injections?

                  I haven’t for ages, because pretty much everyone uses HTTPS these days. I have come across malicious wireless access points that do this and at least one national ISP in the US has been caught doing it.

                  I’ve never seen it. It’s funny how the web worked fine for years before everyone started enforcing HTTPS, yet suddenly, using HTTP is treated as though it’s the same as knowingly installing viruses on your computer.

                  The Internet is a much more hostile place than it was. Back in the ’90s, you could telnet into a server and not worry. In 2000, I had my password stolen by someone packet-sniffing and intercepting it when I typed it over telnet. Now everyone uses SSH instead of telnet. Over the last decade, I’ve had to help people who had their credentials stolen by someone sniffing their cookies in a coffee shop. MITM attacks on HTTP traffic have gone from things requiring custom software to built-in features in commodity middleware boxes from companies like Cisco.

                  If we go with this analogy, I have a car without an airbag, and I would not appreciate it if some road owners prevented my access to their roads for that reason alone, especially since my not using an airbag (not using HTTPS) doesn’t hurt the other drivers (site visitors).

                  If your computer has an Internet connection and is running malware, that does affect me.

                  1. 1

                    Over the last decade, I’ve had to help people who had their credentials stolen by someone sniffing their cookies in a coffee shop.

                    Yeah, I mean, I support HTTPS and think it’s a good thing, especially for things that require authentication, but I don’t think it’s necessary to block HTTP access to things like blogs, which don’t require authentication.

                    And I still think you are overestimating the dangers of the web. What you’re saying doesn’t line up with my experience. I don’t think browsing the web has become that much more dangerous in the last 10 or even 20 years. As far as I remember, viruses were more common back then than they are now – likely because of greater awareness of computer security and the higher prevalence of anti-virus software, sure – but I think this indicates that those factors are far more important than HTTP vs HTTPS, at least as far as viruses go.

                    The reason I’m pushing back is that I think this very security-conscious mindset is a bit merciless. Sure, security is good, but there are other considerations that may override it. Cars are dangerous, and airbags help, but airbag-less cars still aren’t outlawed, and there’s a reason for that. Mercy trumps the need for perfection.

        2. 1

          It appears that he’s also under the impression that he needs to put https:// in all of his links. If you start a URL with // in an href attribute, the browser uses whatever protocol it used to fetch the page. You can then use headers / redirects on the web server to enforce HTTPS if you want to.
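
          For instance, a protocol-relative link is written `<a href="//example.com/page/">` (example.com being a placeholder), and the server-side enforcement might look like this in an Apache .htaccess – a sketch, assuming mod_rewrite is enabled:

          ```apache
          RewriteEngine On
          # Any request that arrived over plain HTTP gets a permanent
          # redirect to the same path over HTTPS
          RewriteCond %{HTTPS} !=on
          RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
          ```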

          1. 1

            No, I know that I can do that. The problem is that other people rarely know that. They will almost always link to your site using https://.

        3. 2

          On top of this enormous stack of dependencies, complaining about the fact that my hosting company needs to make a Let’s Encrypt API call every few months seems very overblown.

          If your hosting company handles this for you, and you don’t have to do anything, then sure, it’s effortless. But if you ever want to run your own server, you’d have to deal with certificate renewal yourself, which obviously is manageable, but it does make your site a bit harder to maintain over time.

          This is a strike against Jekyll (and Ruby programs in general) being very tedious to install, not against the idea of generated sites. I use GNU M4 and Make, so I can regenerate my site on basically any machine.

          Well, I would say that does make it harder to edit your site on other computers. At my university, they only have Windows computers, onto which I can’t install M4 and Make.

          That said, other than these objections I agreed with the bulk of the points made.

          Thanks :-)

        4. 3

          Thank you for sharing this! As a lover of HTML for a long time, I would strongly agree with much of the sentiment!

          I fear, however, some of your argumentation is very similar to what I used when I was a touch younger (especially when it comes to arguing for Linux), and there may be slightly more shades of gray in your argument that you should acknowledge if you want meaningful discourse from the person you are trying to persuade. Consider me your target market: I use Jekyll to create a static site, and you are arguing that this is less sustainable than your solution. I would disagree.

          For example, two of the knocks against Jekyll seem to be made against Jekyll as you have (or have seen others) used it. Jekyll does not need to be installed on the web server: I author my content in Jekyll on my development box and, exactly as you do, copy a folder of HTML to the web server. I am also not concerned about breaking changes in future versions of Jekyll, because I can pin the version in my Gemfile, which is checked into my git repo, so every time I clone it, I will be using the same versions of Ruby and Jekyll.

          Having helped more than one friend recover pure-HTML websites in the past, I can say that, as you’ve acknowledged, there is a LOT more work involved if the repetitive aspects aren’t handled by something dynamic. In your case, that something is PHP, at page-load time. I prefer not to need anything beyond the web server, so in my case it is Jekyll, when the page is created. Both solve the problem; one does it once, the other on every page load.

          Because of that, I would suggest considering how much of your argument is based in preference (a hard lesson I had to learn, as I would frequently argue long and loud about why no one should use Windows, only Linux) and how much you would consider a well-reasoned standard many would accept. When you say something like “The more a web site resembles a folder of ungenerated, static HTML files, served over HTTP, the more sustainable it is” and then immediately explain why you needed PHP, you don’t convince me to drop Jekyll and a static site and move to PHP and a dynamic one – especially since you just said static HTML is foolproof and now you are introducing other packages.

          To flip your second argument on its head, I would argue that I have the power to completely redesign the structure of my blog simply by editing the _config.yml file and changing the permalink structure from (currently)

          permalink: /:year/:month/:day/:title/

          to

          permalink: /:year-:month-:day/:title/.

          That would produce the complex URL structure you laid out, and do so for every file, in 2.26 seconds (on my current build). To make that change in pure HTML would require moving everything around, yet your premise for this point revolves around the power to create the structure of the site.

          “How limited am I in my creation and maintenance of a) the structure of the entire site, and b) any given page on the site?”

          I say all this because I agree with a lot of what you are saying, but believe you can present a more solid argument. Having tried (both successfully and unsuccessfully) to migrate people to straight HTML, I think you may want to not attack how those HTML files get created as much as the fact that they are (i.e. if a user has a SSG they like, that’s awesome. If you like a WYSIWYG and I prefer CLI-based tools that work anywhere, it doesn’t matter).

          Two arguments I often find persuasive are those of security and cost. To the average person (not average on Lobste.rs – literally your average businessman who runs a random website that he needs work done on), much of the above is esoteric and unintelligible. But when you start talking about WordPress having a security flaw that gives anyone access, or the cost of hosting the site as HTML being cheaper than the corresponding “full” hosting, normal people start to listen. Show them the difference in page load time between garbage dynamic hosting and a static site, and they’ll realize how much more quickly their customers will see their pages. They go with WordPress or Wix simply for ease of use, but when other topics they deal with on a daily basis come up (security/risk, money, customer experience), now you have an audience willing to hear your pitch.

          Also, +1 on the flip phone. #ColdDeadHands.

          1. 2

            Thanks for your kind comments! I have a couple of answers:

            1. Yes, my arguments are rather subjective. It’s my own opinions and preferences, based on my own experience, which might align with other people’s experience, but might not. I don’t really want to persuade people who aren’t in the same situation as me and have other areas of expertise. I think that if you don’t care about the two criteria that I describe – especially sustainability – then you won’t be persuaded by my argument.

              As for my presentation being a bit black and white, I recognize this, but I found it hard to condense it without leaving out some of the nuances and alternative approaches.

            2. Maybe I should clarify that I wasn’t talking about the problem of needing Jekyll installed on the server, but rather on the computer you use to edit your web site. If you need a special piece of software on your computer to edit your web site, then you can’t easily edit it on computers that you don’t own, or ones that use another operating system, etc.

            3. I mentioned the /YYYY-MM-DD/post-title URL as an example of a non-complex URL. A more complex one would be /category/post-title. Even this is rather simple, but very difficult to implement cleanly with Jekyll (at least last I tried).

            1. 2

              I mentioned the /YYYY-MM-DD/post-title URL as an example of a non-complex URL. A more complex one would be /category/post-title. Even this is rather simple, but very difficult to implement cleanly with Jekyll (at least last I tried).

              This shouldn’t be too hard:

              permalink: /:categories/:title/
              

              Not sure what problems you ran into with this? Also see the documentation.

              Because posts can have multiple categories, this can result in links like jekyll/update/welcome-to-jekyll (categories “jekyll” and “update”); but it’s not too hard to add a plugin which adds :first_category so you have just jekyll/welcome-to-jekyll:

              # Plugin (conventionally placed in _plugins/) that exposes a
              # :first_category placeholder for use in permalink templates.
              class Jekyll::Drops::UrlDrop
                def first_category
                  @obj.data["categories"].first || ''
                end
              end
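
              The helper’s behaviour can be sketched standalone, outside Jekyll, in plain Ruby (with a nil-guard added for posts that have no categories key at all):

              ```ruby
              # Standalone sketch of the first_category logic: return the first
              # category, or an empty string when there are none.
              def first_category(data)
                (data["categories"] || []).first || ''
              end

              puts first_category({ "categories" => ["jekyll", "update"] }) # prints "jekyll"
              puts first_category({})                                       # prints an empty line
              ```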
              
              1. 1

                Sorry, I didn’t remember it entirely. What I was trying to do was to have my posts arranged in a directory structure and mirror that directory structure in the generated site. This is what turned out to be extremely difficult.

                For example, I’d have some posts stored in english/technology/prolog/ or whatever and would want that mirrored in the URL of the generated site (/english/technology/prolog/).

                So, to clarify, I meant /category/post-title, assuming that the post is stored in the directory category, which can be an arbitrary path.

                1. 1

                  You should be able to place the files in english/technology/prolog/post.html (or .markdown), with some optional frontmatter if you want to apply a template or some such. Like I mentioned in the other post: you don’t need to use the _posts directory (although you can probably do something similar from there too, but I’d have to check).
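
                  Concretely (file name, title, and layout name are hypothetical), a file at english/technology/prolog/intro.md with minimal frontmatter should be rendered to /english/technology/prolog/intro.html, mirroring the directory structure:

                  ```markdown
                  ---
                  layout: default
                  title: "Intro to Prolog"
                  ---

                  Post content goes here.
                  ```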

                  Lots of the documentation on the website is focused on writing a blog from _posts so it’s probably a bit confusing, but you can do a lot more with it.

                  At any rate, I’m not trying to convince you to use Jekyll or anything; if your current solution works for you then 👍 But feel free to let me know if you decide to try it again and run into problems.

              2. 1

                My pleasure! Those are fair answers and I hope you do persuade others to join the HTML side of things! Thanks again for sharing this article.

            2. 3

              I definitely agree with this perspective. I played around with a lot of the tools you mentioned, and came to the same conclusion. I think that if I was producing a lot more content, or was concerned about SEO, I would want a more sophisticated tool. But I think that for my case and for most of what we use the web for, plain HTML is a great solution. My version is at https://alexwennerberg.com/ – I don’t even have a navbar, I just keep all the links on the homepage and have pages link back home.

              https://neocities.org is a great example of this in practice, and I think it creates a much more pleasant web experience.

              1. 3

                They are almost always built for very simple purposes: all blog posts go in _posts, and any more complex URLs than example.com/YYYY-MM-DD/title-of-post/ are usually very difficult to implement

                Jekyll allows multiple collections; you can have separate _programming/[..] and _politics/[..] collections/directories if you want and not use _posts at all; that’s just the default. You also don’t need to use collections and can just write a bunch of .html or .markdown files which are copied in to some standard layout.

                I find that Jekyll is much more flexible than many people give it credit for, mostly because of the flexibility and power Ruby gives you. I don’t really know how it compares to other site generators as I don’t have much experience with them, but it’s certainly a lot more flexible than Hugo (written in Go, which lacks the flexibility Ruby gives you).

                there are other things you can generally count on being available without extra setup. One such thing is PHP.

                I don’t know; PHP is not that common, or at least not as common as it was ~15 years ago. Without any PHP, you can host your site pretty much $anywhere, with very few server-side dependencies. For example, I currently host it at Netlify which doesn’t have PHP (GitHub Pages is another popular solution), and for a while I hosted it with a little Go server I wrote which “packs” the entire site content in the binary, so my entire site, including all the contents, is available as a single portable static binary, which I think is kinda neat.

                One thing I did in the (distant) past was to write PHP files with some includes and other simple code, loop over them with a script, and run the PHP CLI on them, which would process all the includes and put the result in an output directory to be served. This is very similar to the solution you ended up using, except that it doesn’t require a PHP server. It’s also pretty similar to the “Jekyll without collections” approach mentioned above, so I ended up just switching to that, as it’s easier.

                1. 2

                  I don’t know; PHP is not that common, or at least not as common as it was ~15 years ago.

                  I think it’s still just as common on normal web servers, but in addition to traditional web hosts, web hosts specifically designed for static sites have appeared – like Netlify. These obviously don’t support PHP. But if you host your site on a web server not designed specifically for static sites, then PHP support is practically a given.

                  I replied to another comment of yours about Jekyll. In my personal experience, it is still rather inflexible.

                2. 1

                  While it is easy to migrate the site to a new server, it is not effortless to update the contents of the site, because you can’t re-generate a Jekyll-driven site if you don’t have Jekyll installed on the computer.

                  You can set up Jekyll to run in CI. That’s not exactly trivial, but once it’s done, all you really need is Git and SSH.
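
                  As a sketch, a GitHub Actions workflow for this might look like the following (workflow name is hypothetical, and the deploy step is left out since it depends on the host):

                  ```yaml
                  name: build-site
                  on: push
                  jobs:
                    build:
                      runs-on: ubuntu-latest
                      steps:
                        - uses: actions/checkout@v4
                        # Installs Ruby and runs bundle install, with caching
                        - uses: ruby/setup-ruby@v1
                          with:
                            ruby-version: "3.2"
                            bundler-cache: true
                        # Build the site into _site/
                        - run: bundle exec jekyll build
                  ```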