The intro part makes one glaring omission, which is surprisingly common when it comes to people claiming how difficult hosting stuff has become: A webhosting package at any of a large number of hosting providers. Nowadays with any non-crap choice it’s not FTP anymore but SFTP/SCP, and HTTPS isn’t an expensive add-on anymore, but other than that it’s the same as it has been for decades by now.
I totally get tinkering with self-hosting or cloud services, but you’re choosing more effort than necessary if all you want is some HTML hosted.
Yeah I posted this comment literally 5 minutes ago at Hacker News.
(And note the caveat in the other reply – I will admit I only got comfortable with shared hosting when I actually learned how to use a shell! But it’s a good reason to learn shell. The shell makes you feel like somebody else’s computer is your own :) You don’t have to manage it, but you can control it. )
copy of https://news.ycombinator.com/item?id=32731048
Uh what’s wrong with shared hosting? I use Dreamhost but there are dozens of others. It costs less than $10/month and I’ve used it since 2009.
I think the industry (or part of a generation of programmers) somehow collectively forgot that it exists. I don’t see any mention of it in the article – it solves the problem exactly. It easily serves and survives all the spikes from Hacker News that my site gets.
Shared hosting is a Linux box with a web server that someone ELSE manages. You get an SSH login, and you can use git and rsync.
It’s associated with PHP hosting, but it is perfect for static sites.
http://www.oilshell.org/blog/2021/12/backlog-assess.html#app…
Answer to the HN question: What’s the fastest way to get a page on the web?
https://news.ycombinator.com/item?id=29254006
FWIW I also have a Linux VPS with nginx and an HTTPS certificate, but it’s a pain. I much prefer using shared hosting for things I want to be reliable.
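Since git is mentioned above as a deployment option: one common way to wire it up on such a host (the paths, remote name and branch here are made up for the example) is a bare repository plus a post-receive hook that checks the pushed content out into the web root:

    # one-time setup on the shared host
    git init --bare ~/site.git
    printf '#!/bin/sh\nGIT_WORK_TREE="$HOME/public_html" git checkout -f main\n' \
      > ~/site.git/hooks/post-receive
    chmod +x ~/site.git/hooks/post-receive
    # then, locally:
    #   git remote add deploy user@host:site.git
    #   git push deploy main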
I used a (admittedly very cheap) shared hosting at the very beginning when I started my blog and had several issues. For example, when my site was posted to HN, the provider simply disabled my website (all you got was a white page) without even informing me. Other times, the DNS they provided was flaky and my site wouldn’t even resolve at random times. Another issue was that the site would at random times be inexplicably slow (probably due to “neighbour” websites eating up resources).
Also, messing about with SFTP isn’t that great – I prefer rsync over SSH to avoid unnecessary data transfers (and disruptions caused by truncation when uploading a file as it’s being retrieved). This provider didn’t offer SSH at the time (about 10 years ago).
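For reference, the kind of rsync-over-SSH invocation meant here, with a made-up host and path (add --dry-run first to preview what would change):

    rsync -avz --delete ./public/ user@example-host:~/public_html/
    # -a preserves permissions and timestamps, -z compresses in transit,
    # --delete removes remote files that no longer exist locally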
Back in the day I used FileZilla to do FTP and it definitely had an “upload changed files only” option. Man, that was a while ago.
I believe the Neocities free tier should survive the HN front page just fine, but for sustained heavy load you might want to become a supporter; and should you need more than 3 TB of data in a month, you might have to pay more than five dollars and consider alternative (non-hobby, commercial) hosting.
https://neocities.org/supporter
Yeah I think that is part of the reason for low awareness of / interest in shared hosting among a certain crowd.
10 or 20 years ago shared hosting was a dime a dozen; there was a race to the bottom in cost with bad/aggressive marketing, and you couldn’t tell who was good or not. There were many acquisitions in the industry, which probably led to bad service.
Also people might think it’s “just for PHP”, when in reality it’s perfect for static sites (which is a much easier subproblem).
I used 1and1 before that, and I remember it feeling very janky, and they also had a bad web interface. The web UI quality probably contributes to it feeling “old”.
So maybe Dreamhost is not really “commodity”, and they are actually good and more competent than others.
I wrote about that more here: https://news.ycombinator.com/item?id=32731207
I tried NearlyFreeSpeech as well a few years ago, but I stuck with Dreamhost.
Yeah also someone else told me that SSH used to be a “feature”. I have always had it with Dreamhost, and I remember 1and1 also had it, and it seemed worse. But it could also be because I was less familiar with SSH and the shell at the time (2003 or so).
Yeah, most of them are still using ye olde DirectAdmin, Plesk or cPanel, and at least DirectAdmin still looks exactly like it did 20 years ago.
For DreamHost in particular, they do offer SSH included with any of their shared hosting plans, and rsync works perfectly fine. I don’t know how new of a development that is, but I’ve been using it a lot lately.
I fully agree. It’s one of the simplest hosting solutions for a static site. My dad, who is by no means a techie, maintained his website for years by just editing his files locally with Dreamweaver and then WS-FTP’ing files to a shared host (I maintained the server, but to him it was just a shared webhost).
I have been using VPSes for 15 years or so, but quite some techie friends just upload their static sites to DreamHost (many others use Jekyll with GitHub pages).
Just a reminder that even the free/gratis tier of neocities comes with a CDN:
https://neocities.org/supporter
There are also really nice middle-ground choices like Netlify. They have a quite usable free tier that’s actually free.
It’s effectively web hosting but with a Git back-end and some minimal infra to ‘build’ your site.
Indeed, I didn’t mention anything about the shared webhosting solutions, just as I didn’t mention anything about S3 + CloudFront, or Backblaze B2 + a CDN in front, or Cloudflare + WebWorkers, or AWS Lambda, etc.
Although shared webhosting is part of our web history, and still a viable choice especially if you have something in PHP, I don’t think it is still a common choice today. It sits somewhere in between cloud-hosting and self-hosting: although you get an actual HTTP server (usually Apache or Nginx), you can’t configure it much because it’s managed by the host, thus it gives you roughly the same features as a proper cloud-hosted static site solution (such as Netlify); and, for the same reason, while that HTTP server is a full-blown one, it’s one you can’t fully control, thus it gives you fewer features than a self-managed VM in a cloud provider or a self-hosted machine. Thus unless you need PHP, or .htaccess, I think the other two alternatives make a better choice.
What do you need to do that isn’t supported by shared hosts? If you’re serving a static site, it’s almost certainly going to work.
You can configure it with .htaccess dropped in the individual dirs, which isn’t all that intuitive, but it certainly works. I think I’ve needed 3-5 lines of it in 13 years, so it’s not that bad.
The issue with “static sites”, due to the de-facto requirements imposed in 2022 by the internet “gatekeepers” (mainly search engines), is that they aren’t “just a bunch of files on disk that we can just serve with proper Content-Type, Last-Modified or ETag, and perhaps compressed”; we now need (in order to meet the latest hoops the gatekeepers want us to jump through) to also do a bunch of things that aren’t quite possible (or certainly not easy) with current web servers. For example:
minification (which I’ve cited in my article) – besides compression, one should also employ HTML / CSS / JS and other asset minification; none of the classical web servers support this; there is something like https://www.modpagespeed.com/, but it’s far from straightforward to deploy (let alone on a shared web-host);
when it comes to headers (be it the ones for CSP and other security related ones) or even Link headers for preloading, these aren’t easy to configure, especially if you need those Link headers only for some HTML pages and not all resources; in this regard I don’t know how many shared webhosts actually allow you to tinker with these;
The point I was trying to make is that if you want to deploy a professional (as in performant) static web site, just throwing some files in a folder and pointing Apache or Nginx at them isn’t enough. If the performance you are getting by default from such a setup is enough for you, then perfect! If not there is a lot of pain getting everything to work properly.
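As a side note, the quickest way to find out what a given (shared) host actually sends for a page is to ask it; something like the following, with example.com standing in for your own URL, shows the negotiated encoding plus the caching and security headers discussed above:

    # -D - dumps the response headers; the body itself is discarded
    curl -s -o /dev/null -D - -H 'Accept-Encoding: br, gzip' https://example.com/ |
      grep -iE '^(content-encoding|cache-control|etag|last-modified|content-security-policy|link):'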
I don’t follow? You can do minification as part of static site generation? Why do you need to do this on a server?
I think this only applies to a small subset of larger-traffic sites in a competitive market. I’ve run static sites for years and they are the first Google hit for the relevant queries. I don’t even minify assets, let alone use custom HTTP headers (though, admittedly, I use a small amount of hand-crafted CSS and not much JavaScript outside maybe KaTeX, which is minified). I think for the vast majority of people, just generating HTML with a static site generator and putting it on some shared host will work, as long as HTTPS is supported.
Indeed, most generators out there do output minified HTML / CSS / JS; some might even minify images.
However, for those other use-cases where I want to host HTML / CSS / JS obtained (or generated) by other means, say for example a downloaded documentation bundle, or hand-crafted HTML, where the original source is not already minified, I’m out of luck…
In my view the publishing workflow is composed of the following distinct (and independently implementable) phases:
authoring – write your Markdown or other lightweight markup language;
generation – take those files and spit out HTML / CSS / other resources;
optimization – take the files from the previous phase and apply minification, bundling, pruning unused CSS, image optimization, asset compression, etc.;
(other ancillary steps, such as sitemap.xml generation, and other checks, such as dead-link detection;)
deployment – bundle the optimized HTML and assets for serving;
serving – replying to HTTP requests;
edge caching – usually employing a CDN.
At the moment, the static site generators take care of all the steps up to deployment. This duplicates a lot of effort in implementing the generators, because all of them need to support minification, for example, and given that each is written in a different language we end up with half-baked implementations in a lot of languages.
On the other hand it would make more sense to have a different tool handling the optimization part, and let the static site generators focus on actually munching the markups.
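To make the idea of a separate optimization step concrete, here is a minimal sketch of what it could look like on the command line, assuming the standalone tdewolff/minify CLI is installed (treat the exact flags as illustrative and check your minifier’s docs):

    # take the generator's (or any other tool's) HTML/CSS/JS output from ./out
    # and write a minified copy to ./dist; image optimization, unused-CSS pruning,
    # etc. would slot into the same step
    minify -r -o dist/ out/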
That’s the original idea of mkws, just handle the generation! Throw in other tools of your preference for the rest of the tasks!
In fact I was thinking about mkws the other day, before writing the article, and I was pondering whether to contact you and ask if you are interested in collaborating on some small tutorial on how to get the two working in tandem, especially given the fact that each one is focused solely on a particular aspect of the publishing workflow. :)
Sure! Drop me an email! You have it!
I’ve always wondered: if you’re gzipping your responses (which essentially every server does) is there any measurable effect of explicitly minified assets?
Although I haven’t explicitly tested this, I would expect that yes, there will be some impact, because some minification processes don’t just remove whitespace and comments, but could also rewrite stuff to be more compact and still be specification compliant. (For example many HTML minification tools remove quotes, closing tags, etc.)
I wouldn’t expect that the impact is more than 5-10%, but that is still enough.
I would be surprised if that were true! Do you know of a good or canonical example of an asset that’s typically minified? I’d love to test it out.
You could take for example Bootstrap:
focus on bootstrap.css (~237K) and bootstrap.min.css (~194K); that’s a ~20% reduction;
compress both with brotli -k -v -q 11 ./bootstrap.css (and the same for .min.css) and you’ll get ~21K and ~19K respectively; still a ~9% reduction;
If you want a larger dataset, you could go wild with CDNJS from https://github.com/cdnjs/cdnjs/tree/master/ajax/libs (you should really make a shallow clone, and have a few tens of GiB free on disk for it). :)
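To reproduce that comparison locally, assuming the two Bootstrap stylesheets have been downloaded into the current directory, the following prints the raw and brotli-compressed sizes side by side:

    for f in bootstrap.css bootstrap.min.css; do
      brotli -k -q 11 "$f"   # writes "$f.br" and keeps the original
    done
    wc -c bootstrap.css bootstrap.min.css bootstrap.css.br bootstrap.min.css.br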
Well, one thing to make it easier is to just ignore the CDN part. It depends on the popularity and audience of the site of course, but in my experience you don’t need it in the majority of cases. Also, the part on security is over-emphasized, I think. It’s a static site, there is not much to hack.
I say this, apparently, as a graybeard, since my default choice is a VPS with Linux and Nginx, but I have tried to help other, less technical people get their site on the web, and what bothered me the most was that all the well-known hosting providers use a git-based solution. This in itself complicates things immensely. For starters, normal people will not understand and use version control. Markdown without a GUI editor is difficult too. But requiring one of those also assumes a generator and a deployment pipeline, and this kills the quick feedback loop of changing something and immediately seeing what the effects are. It is really hard to watch people struggle with this when you have your graybeard memories of just transferring a bunch of HTML to a server and seeing the results instantly.
And this is exactly why I’ve said that perhaps deploying a pre-generated set of HTTP responses could solve this issue. You generate the responses on your side (with all the headers and the final body) and thus you’ll know immediately (even before deployment) what you’ll get when a browser hits that URL. (You could even run a local web server to test things out.)
Indeed, the final workflow is somewhat more complex: author markup -> run static site generator -> run HTTP response generator -> deploy -> purge CDN caches, as compared to authoring HTML directly and uploading it via SCP. However, this is a spectrum, from the simplest solution up to the one that squeezes the most out of the whole HTTP-based ecosystem.
The solution I’m proposing leans towards the performance side of the spectrum.
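For the “run a local web server to test things out” part, something as small as Python’s built-in server is enough to eyeball the generated pages before deploying (it won’t reproduce any custom headers, though); this assumes the generated files live in ./public:

    python3 -m http.server 8000 --directory public
    # then browse http://localhost:8000/ or spot-check pages with curl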
Since you’re the author, could you add main.document :is(li, p) { max-width: 67ch; } (or similar)? The paragraph width is problematic for comfortable reading – books are as wide as they are because that’s what is comfortable. Also, “URL’s” should be “URLs”.
Content-wise: I don’t agree with always needing to concatenate JS/CSS anymore with HTTP/2 and onward, and it can even lead to worse performance. As such, in simple static scenarios I often find it easier to include each file independently, the way I break them out for organization. This can help with build caching too, as I don’t need to derive a concatenated file every time I edit one thing. Not mentioned is that many servers, especially self-hosted ones, can serve pre-compressed assets, so you could make Zopfli and Brotli compression part of your final build step; then the server doesn’t have to compress (or cache) on the fly, which is specifically useful when the hardware isn’t the most powerful.
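A minimal sketch of such a pre-compression build step, assuming brotli and zopfli are installed and the built site lives in ./public (a server configured for it can then pick up the .br/.gz variants instead of compressing on the fly):

    find public -type f \( -name '*.html' -o -name '*.css' -o -name '*.js' -o -name '*.svg' \) |
    while read -r f; do
      brotli -k -q 11 "$f"   # writes "$f.br", keeps the original
      zopfli "$f"            # writes "$f.gz", keeps the original
    done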
Thanks for the corrections (applied) and for the suggestion. As for the width, in a previous version of the CSS it was limited to 100ch, however I found that too narrow on desktop. Perhaps I’ll reconsider, thus thanks once again for this kind of feedback!
With regard to the JS/CSS concatenation: indeed, if you manage to slice and dice your CSS in such a puzzle that each HTML loads the minimum number of required puzzle chunks, then serving individual CSS chunks is perhaps better than one huge CSS of which each HTML uses only a small fraction. However, this isn’t quite simple to achieve. (And for some use-cases, such as content blogs where each page has a similar look-and-feel, most of the CSS will be used anyway.)
On the other hand, if you do have CSS it means you actually need it, and thus whether you reference one bundled CSS (of which you use a large portion) or a few smaller CSS chunks shouldn’t make a big difference, because on the client side you are limited by bandwidth; and, what’s more important, compressing one large file yields more savings than compressing multiple smaller ones.
As for serving pre-compressed assets, indeed some well-known web servers do support that, but I don’t think it’s enabled by default; meanwhile with others it’s not supported at all.
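That last claim is easy to check against your own stylesheets; assuming a handful of CSS chunks under ./css/, compare the size of one compressed bundle with the total of the individually compressed chunks:

    cat css/*.css > bundle.css
    gzip -9 -c bundle.css | wc -c                          # one compressed bundle
    for f in css/*.css; do gzip -9 -c "$f"; done | wc -c   # individually compressed chunks, total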
You could write a simple custom HTTP server like https://mkws.sh/src/https.go. I add additional processing in the s CGI script: stuff like Content-Security-Policy (just to look good on online web site “health checkers”), serving pre-compressed files, Cache-Control, and directory handling (e.g. serving dir/index.html when asking for dir/).
All this stuff is outside the scope of the static site generator; it’s more tied to static site serving.
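This is not the actual s script, but as an illustration of the kind of per-request header handling a tiny CGI wrapper can add in front of pre-generated files, its contents could look roughly like this (no path sanitisation, 404 handling or content negotiation, so strictly a sketch):

    #!/bin/sh
    # CGI convention: headers first, then a blank line, then the body
    printf 'Content-Type: text/html; charset=utf-8\r\n'
    printf 'Cache-Control: public, max-age=3600\r\n'
    printf "Content-Security-Policy: default-src 'self'\r\n"
    printf '\r\n'
    cat "./site${PATH_INFO:-/index.html}"   # PATH_INFO is set by the web server for CGI requests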
Also, I have written a small https tunnel:
You could deploy the binaries also and just restart the tunnel and the http server on deployment.
Anyway, in technical terms a static site is just a bunch of files on the file-system that are served with a simple HTTP server and which, when loaded via a browser, present the user with something one can read (or listen to, if accessibility is taken into consideration).
How is this different from what the ‘out-of-date’ wikipedia article says?
The Wikipedia article about “static web page” focuses mostly on the fact that the resulting HTML is unchanging, and less on the fact that it’s actually immutable and has zero runtime generation. For example, I’ll quote:
[…] and could even include pages formatted using a template and served through an application server, as long as the page served is unchanging and presented essentially as stored
The page I was really targeting with the “so old it’s funny” remark was in fact the section on “static site generators”, which not only doesn’t list any of the most popular SSGs, but actually talks about FrontPage, Flash and Dreamweaver:
FrontPage and Dreamweaver were once the most popular editors with template sub-systems. A Flash web template uses Macromedia Flash to create visually interactive sites.