1. 22
    1. 96

      Static or dynamic refers to whether the webserver serves requests by reading a static file off disk or by running some dynamic code (whether in process or not). While the word “dynamic” can apply broadly to any change, reusing a term with a well-understood definition in this context to refer to unrelated changes like SSL cert renewal and HTTP headers is really confusing. Late in the article it refers to “the filesystem API used to host static files”, so it’s clear the author knows the definition. It’s unfortunate that the article is written this way; misusing a clear and well-established term predictably just produces confusion. Maybe a better metaphor for the points it’s trying to make would be Stewart Brand’s concept of pace layering.

      1. 12

        Yeah I agree, I think the article is generally good, but the title is misleading.

        My summary is “We should try to make dynamic sites as easy to maintain as static sites”, using sqlite, nginx, whatever.

        The distinction obviously exists – in fact the article heavily relies on the distinction to make its point.

        I agree with the idea of moving them closer together (who wouldn’t want to make dynamic sites easier to maintain?), but I think there will be a difference no matter what.

        Mainly that’s because the sandboxing problem (which consists of namespace isolation and resource isolation) is hard on any kernel and on any hardware. When you have a static site, you don’t need to solve that problem at all.

        We will get better at solving that problem, but it will always be there. There are hardware issues like Spectre and Meltdown (which required patches to kernels and compilers!), but that’s arguably not even the hardest problem.


        I also think recognizing this distinction will lead to more robust architectures. Similar to how progressive enhancement says that your website should still work without JS, your website’s static part should still work if the dynamic parts are broken (the app servers are down). That’s just good engineering.

        1. 3

          Funnily enough, sqlite + nginx is what I use for most of my smaller dynamic websites, usually with a server process as well.

          EDIT: Reading further, yeah, almost all of my side projects use that setup, outside of some Phoenix stuff, and I’ve definitely noticed that those projects require very little maintenance.

          1. 7

            What’s also a bit funny is that sqlite and nginx are both extremely old school, state machine-heavy, plain C code.

            Yet we reach for them when we want something reliable. I recommend everyone look at the source code for both projects.

            This reminds me of these 2 old articles:

            https://tratt.net/laurie/blog/entries/how_can_c_programs_be_so_reliable.html

            http://damienkatz.net/2013/01/the_unreasonable_effectiveness_of_c.html

            (And I am not saying this is good; I certainly wouldn’t and can’t write such C code. It’s just funny)

            1. 1

              SQLite, at least, partially compensates with extensive testing and a slow, considered pace of work (or so I understand). It’s the antithesis of many web apps in that regard. And the authors come from a tradition that allows them to think outside the box much more than many devs, and do things like auto-generate the SQLite C header rather than trying to maintain it by hand.

              C and C++ can be used effectively, as demonstrated by nginx, sqlite, curl, ruby, python, tcl, lua and others, but it’s definitely a different headspace, as I understand it from dipping into such things just a bit.

          2. 6

            I did not know that nginx can talk to sqlite by itself. Can you share your setup?

            1. 1

              For me, nginx doesn’t talk directly to SQLite; I just use it as a reverse proxy. It makes it easy to set up a lot of websites behind one server, and using SQLite makes those sites easy to manage from a data storage standpoint.

              1. 1

                I see, yes that makes sense. I use it that way too.

      2. 11

        You articulated that without using expressions that would be inappropriate in the average office setting. I admire you for that.

        The whole act of reusing a common, well-understood content-related term to instead refer to TLS certs and HTTP headers left me ready to respond with coarse language and possibly question whether OP was trolling.

        The idea that maybe we’re comparing a fast layer to a slow layer is somewhat appealing, but I don’t think it quite fits either. I think OP is muddling content and presentation. Different presentations require differing levels of maintenance even for the same content. So if I publish a book, I might need to reprint it every few hundred years as natural conditions cause paper to degrade, etc. Whereas if I publish the same content on a website, I might need to alter the computer that hosts that content every X days as browsers’ expectations change.

        That content doesn’t change. And that’s what we commonly mean when we say “a static website.” The fact that the thing presenting the content needs to change in order to adequately serve the readers doesn’t, in my view, make the content dynamic. And I don’t think it moves it from a slow layer to a faster one either.

      3. 5

        This is a reasonable criticism, but I think it’s slightly more complicated than that — a collection of files in a directory isn’t enough to unambiguously know how to correctly serve a static site. For instance, different servers disagree on the file extension → mimetype mapping. So I think you have to accept that you can’t just “read a static file off disk”; in order to serve it, you also need other information, which is encoded in the webserver configuration. But nginx/apache/etc let you do surprisingly dynamic things (changing routing depending on cookies/auth status/etc, for instance). So what parts of the webserver configuration are you allowed to use while still classifying something as “static”?

        That’s what I’m trying to get at — a directory of files can’t be served as a static site without a configuration system of some sort and actual HTTP server software. But once you’re doing that sort of thing, how do you draw a principled line about what’s “static” and what isn’t?

        1. 7

          Putting a finer point on the mimetype thing, since I understand it could be seen as a purely academic issue: python2 -m SimpleHTTPServer and python3 -m http.server will serve foo.wasm with different mimetypes (application/octet-stream and application/wasm, respectively). Only the wasm bundle served by the python3 version will be executed by browsers, due to security constraints. Thus, what the website does will, in a very concrete way, depend not on the files but on the server software. That sounds like a property of a “dynamic” system to me — why isn’t it?
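
          To see it concretely, here is a sketch (foo.wasm is a placeholder module): serve the same directory with each command, open the page it serves, and run this. The streaming API is required by the spec to reject any Content-Type other than application/wasm.

          ```ts
          // Works when foo.wasm comes from `python3 -m http.server`, but fails
          // under `python2 -m SimpleHTTPServer`: instantiateStreaming checks the
          // response's Content-Type header before compiling anything.
          try {
            const { instance } = await WebAssembly.instantiateStreaming(fetch("foo.wasm"));
            console.log("loaded:", Object.keys(instance.exports));
          } catch (err) {
            console.error("refused (wrong MIME type?):", err);
          }
          ```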

          You could say, ok, so a static website needs a filesystem to serve from and a mapping of extensions to content types. But there are also other things you need — information about routing, for instance. What domain and port is the content supposed to be served on, and at what path? If you don’t get that correct, links on the site likely won’t work. This is typically configured out of band — on GitHub pages, for instance, this is configured with the name of the repo.

          So you need an extension-to-mimetype mapping, and routing information, and a filesystem. But you can have a static javascript file that then goes and talks to the server it was served from, and arbitrarily changes its behavior based on the HTTP headers that were returned. So really, if you want a robust definition of what a “static” website is, you need to pretty completely describe the mapping between HTTP requests and HTTP responses. But isn’t “a mapping between HTTP requests and HTTP responses” just an FP sort of way of describing a dynamic webserver?
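
          To put that last point in code, here is a sketch using the standard fetch-API Request/Response types: once you fold in the MIME map, routing, and headers, a fully specified static site collapses to exactly the signature you would write for a dynamic server.

          ```ts
          // The "mapping between HTTP requests and HTTP responses", as a type.
          // A fully specified static site is a (pure) function of this shape;
          // a dynamic webserver is the same type without the purity guarantee.
          type Handler = (req: Request) => Promise<Response>;
          ```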

          If you disagree with some part of this chain of logic, I’m curious which part.

          1. 6

            All the configuration parts and the “dynamic” nature of serving files in a static site are about exactly that: serving them, how the file gets to my computer. But at the end of the day, with a static site the content of the document I get is the same as the content on the filesystem on the server, and with a dynamic site it is not. That is the difference. It’s about what is served.

            All this talk about mime types and routing just confuses things. One can do the same kinds of tricks with a file system and local applications. For instance: changing the extension, setting default applications, etc. can all change the behavior you observe by opening a file. Does that mean my file system is dynamic too? Isn’t everything dynamic if you look at it that way?

          2. 6

            It seems very odd to be talking about whether or not WASM gets executed to make a point about static websites.

            When the average person talks about a static site, they are talking about a document-like site with some HTML, some CSS, maybe some images. Yes, there may be some scripting, but it’s likely to be non-essential to the functionality of the site. For these kinds of sites, in practice MIME types are basically never something you as the site author will have to worry about. Every reasonable server will serve HTML, CSS, etc. with reasonable MIME types.

            Sure, you can come up with some contrived example of an application-like site that relies on WASM to function and call it a static site. But that is not what the phrase means in common usage, so what point do you think you are proving by doing so?

          3. 4

            You can also misconfigure nginx to send HTML files as text/plain, if that is your point. python2 predates WASM; its default is simply wrong today.

            1. 3

              What about that is “misconfigured”? It’s just configuration; in some cases you might want all files to be served with a particular content type, regardless of path.

              My point is that just having a set of files doesn’t properly encode the information you need to serve that website. That, to me, seems to indicate that defining a static site as one that responds to requests by “reading static files off disk” is, at the very least, incomplete.

              1. 3

                I think this discussion is kind of pointless then.

                Ask 10 web developers and I bet 9 would tell you that they assume a “normal” or “randomly picked” not-shit webserver will serve html/png/jpeg/css files with the correct header so that clients can meaningfully interpret them. It’s not really a web standard, but it’s common knowledge/best practice/whatever you wanna call it. I simply think it’s disingenuous to call this “proper configuration” rather than “just assuming any webserver that works”.

          4. 2

            I found your point (about the false division of static and dynamic websites) intuitive from when you talked about isolation primitives in your post. (Is a webserver which serves a FUSE filesystem static or dynamic, for example? What if that filesystem is archivemount?)

            But this point about MIME headers is also quite persuasive and interesting, perhaps more so than the isolation point; you should include it in your post.

            Given this WASM mimetype requirement, what happens when you distribute WASM as part of a filesystem tree of HTML files and open it with file://? Is there an exception, or… is this just relying on the browser’s internal mimetype detection heuristics to be correct?

            1. 2

              Yeah, I probably should have included it in the post — I might write a follow-up post, or add a postscript.

              Loading WASM actually doesn’t work from file:// URLs at all! In general, file:// URLs are pretty special, and there’s a bunch of stuff that doesn’t work with them. (Similarly, there are a handful of browser features that don’t work on non-https origins.) If you’re doing local development with wasm files, you have to use an HTTP server of some sort.

              1. 3

                > Loading WASM actually doesn’t work from file:// URLs at all!

                Fascinating! That’s also good support for your post! It disproves the “static means you can distribute it as a tarball and open it and all the content is there” counter-argument.

                1. 1

                  This is for a good reason. Originally HTML pages were self-contained. Images were added, then styles and scripts. Systems were built that assumed pages wouldn’t be able to just request any old file, so when Javascript gained the ability to load any file, it was limited to loading files from the same Origin (protocol + hostname + port group) so as not to break the assumptions of existing services. But file:// URLs are special: they’re treated as unique origins, so random HTML pages on disk can’t exfiltrate all the data on your drive.

                  People still wanted to load data from other origins, so they figured out JSONP (basically letting 3rd-party servers run arbitrary JS on your site to tell you things, because JS files are special), and then browsers added CORS, which lets servers send headers to opt in to access from other origins.

                  WebAssembly isn’t special the way scripts are: you have to fetch it yourself, and that fetch is subject to CORS and the same-origin policy, so loading it from a file:// URL isn’t possible without disabling security restrictions (there are flags for this; using them is a bad idea). But you could inline the WebAssembly file as a data: URL. (You can basically always fetch those.)
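
                  A minimal sketch of the data: URL trick (the base64 payload here is just the 8-byte empty-module header, to show the mechanics):

                  ```ts
                  // An inlined module needs no server and no MIME configuration: the
                  // data: URL carries its own Content-Type, and data: URLs can be
                  // fetched even from a page opened via file://.
                  const url = "data:application/wasm;base64,AGFzbQEAAAA=";
                  const bytes = await (await fetch(url)).arrayBuffer();
                  const { instance } = await WebAssembly.instantiate(bytes);
                  console.log("instantiated:", instance);
                  ```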

          5. 2

            > What domain and port is the content supposed to be served on, and at what path? If you don’t get that correct, links on the site likely won’t work.

            These days when getting a subdomain is a non-issue, I can’t see why anyone would want to use absolute URLs inside pages, other than in a few very special cases like sub-sites generated by different tools (e.g. example.com/docs produced by a documentation generator).

            I also haven’t seen MIME type mapping become a serious problem in practice. If a client expects JS or WASM, it doesn’t look at the MIME type at all normally because the behavior for it is hardcoded and doesn’t depend on the MIME type reported by the server. Otherwise, for loading non-HTML files, whether the user agent displays it or offers to open it with an external program by default isn’t a big issue.

            1. 3

              MIME bites you where you least expect it, especially when serving files to external apps, or to anything that understands both XML and JSON and wants to know which one it got. My last surprise was app manifests for Windows ClickOnce updates, which have to have the particular content type the app expects.

            2. 2

              > If a client expects JS or WASM, it doesn’t look at the MIME type at all normally because the behavior for it is hardcoded and doesn’t depend on the MIME type reported by the server.

              This is incorrect. Browsers will not execute WASM from locations that do not have a correct mimetype. This is mandated by the spec: https://www.w3.org/TR/wasm-web-api-1/

              You might not have seen this be a problem in practice, but it does exist, and I and many other people have run into it.

              1. 1

                Thanks for the pointer, I didn’t know that the standard requires clients to reject WASM if the MIME type is not correct.

                However, I think the original point still stands. If the standard didn’t require rejecting WASM with other MIME types, but some clients did so on their own initiative, then I’d agree that web servers with different but equally acceptable behavior could make or break the website. But since it’s mandated, a web server that doesn’t have a correct mapping is incorrectly implemented or misconfigured.

                Since WASM is relatively new, it’s a more subtle issue, of course—some servers/configs used to be valid but no longer are. Still, they’re expected to conform with the standard now.

          6. 1

            You don’t need any specific mimetype for WASM: you can load the bytes however you want and pass them to WebAssembly.instantiate as an ArrayBuffer.
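
            For example (a sketch; foo.wasm is a placeholder), fetching the bytes yourself bypasses the streaming MIME check entirely:

            ```ts
            // fetch() itself doesn't care about Content-Type; only the streaming
            // WebAssembly APIs enforce application/wasm. Compiling from raw bytes
            // works no matter what the server sent.
            const resp = await fetch("foo.wasm");
            const { instance } = await WebAssembly.instantiate(await resp.arrayBuffer());
            ```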

      4. 3

        The other replies have explained this particular case in detail, but I think it’s worth isolating the logical fallacy you’re espousing. Suppose we believe that there are two distinct types of X, say PX and QX. But there exist X that are simultaneously PX and QX. Then those existing X are counterexamples, and we should question our assumption that PX and QX were distinct. If PX and QX are only defined in opposition to each other, then we should also question whether P and Q are meaningful.

    2. 12

      The fundamental difference is that static websites are collections of documents (or client-side applications embedded in documents) that can be served over the network… or they can not be! It’s possible to make a local copy of a static website that will be functionally identical to the original.

      It’s also possible to host the complete source data of a static website publicly without creating any security risk, and let other people deploy it if they need/want it. Website mirroring used to be common in the old days, made possible by the fully static nature of many project/documentation websites.

    3. 10

      Huh. Everybody is missing the simplest difference I know: statefulness.

      A static website does no computation on the content of what it chooses to serve, just selection of what best matches the request. No state is necessarily changed on the server, although almost everyone likes to retain logs and such.

      A dynamic website is one that computes something based on the requests it receives and alters the data that it returns dependent on the computation and the state of the server process(es). State is necessarily updated in general, although any given request might be essentially static.

      1. 2

        I think you’re conflating two things here: statelessness, and not doing any computations on the request.

        You can have something that’s stateless, but does computation on the request to generate a response — for instance, a web server that doesn’t talk to a database or filesystem, but does do some computation on the path or request parameters to generate a response.

        You can also have something that’s stateful, but where computation is not done on the content. A key-value store would be an example of this.

        Which of these are you suggesting is the definition of a static website? One, the other, or both?

        The comment about statefulness is why I included the bit about TLS certs: if your definition of a static website is one that is stateless, that means that sites served over HTTPS cannot be static, since the TLS cert is state (at least, in the case where you auto-renew it). If you disagree with that characterization, I’d be curious why — I don’t see a principled reason not to think of TLS certs as “state”.

        1. 11

          > if your definition of a static website is one that is stateless, that means that sites served over HTTPS cannot be static

          If you’re going to make that argument, why not just go all the way and point out that the hardware you run a site on is itself inherently stateful, and thus statelessness in software is a fool’s errand?

          In my mind there’s a reason we attempt (with varying levels of success) to build layers of abstraction which hide the details of lower layers where they aren’t relevant.

          1. 0

            I think there’s a significant difference, in that an HTTP site can easily be served from a server with a read-only disk, whereas an HTTPS site can’t, unless you’re fine with getting a new certificate every time you reboot your server. Clearly, everything about how computers operate is “stateful”, but there’s a difference between state that is kept in memory and state that is persisted to disk — that difference is usually what people mean when they call a web server “stateful”, so I assumed it’s what you meant as well.

            1. 1

              Assuming Let’s Encrypt certs, you only need to renew every ~12 weeks, so if read-only is a hard requirement, it’d be reasonable to burn new certs onto a CD or whatever from a different computer every 10ish weeks.

        2. 2

          If the key-value store is updated by requests, then it is the state; if it is never updated by a request, then it’s not state.

          If something in the request is used to compute which stored response to give, then it could be replaced by asking for the stored response directly, and is similar to not knowing how the filesystem is storing a file. Have the client compute the request fully and send a request for the key; unless the point of the request is actually to change state on the server, or the server is embedding a cryptographic secret.

          If the request is used to compute a new answer for the response and there is no state maintained on the server, in most cases this should be done client-side. I suppose the primary exception would be when the client wants to gain access to a secret – but if the secret is not updated, why not ship it in the client?

          So, no, I don’t think I’m conflating here: a static website doesn’t change its state as a result of a request, so repeating a request will get you the same response predictably. It could be replaced by a read-only filesystem sitting on the client, assuming infinite storage. A dynamic website changes state as a result of a request, so it could only be replaced by a read-write local filesystem if it only had one client in the universe.

          I think your TLS question is nearly irrelevant because, as you acknowledged, a TLS cert with a Not After timestamp in the year 9999 is feasible; and as I think you implied and I will state outright: whether you’re answering over HTTP, HTTPS, HTTP/2, HTTP/3, or an AX.25 radio network is not relevant to whether we think the web server is static or not.

    4. 5

      I agree with andyc’s summary, but part of the impression I get is that there are some strong correlations being pushed by the author without looking at the deeper “why?” behind them.

      A single-binary site using sqlite is probably going to be easier to deploy/maintain, yes. But why? Because there are most likely fewer moving parts, period. As we decrease the number of moving parts, we simplify maintainability and (probably) deployability, but we’re also sacrificing something: what can be changed live versus what needs a deploy, flexibility, and so on. We’re not just getting easier deploys and less maintenance for free by rebuilding software as single-binary deployables powered by sqlite.

    5. 3

      > the blurry space between static and dynamic

      Oh I’ve been living there…

      One of my previous blog engines was “both static and dynamic” – it would both generate pages on the fly and pre-save them all to disk. Scripting in the frontend server would reverse-proxy the dynamic version to me (with admin cookie) but serve the precached files to the public.

      The current website is “fully” static on S3… except CloudFront makes similar(ish) interesting decisions with JS at the “edge”. This time it directs me to an admin-augmented JS file instead of the public one.

      Anyway, in my experience

      • the heaviest-feeling burden is not the server application itself, no, it’s the unix box it runs on
      • servers generally can just run for long periods of time; the problems come when you want to update and change stuff
        • whether because there’s a vulnerability discovered in some of the stuff you use or just to make the proverbial logo bigger again
      • Amazon’s whole “API gateway” / “lambdas are not just for web requests” monstrosity is stupid; why couldn’t they just make HTTP the primary interface and turn the “other events” into webhooks >_<
        • still worth it. FaaS/PaaS is great when it’s free (or at least nearly free) and big enough that maintenance will go on for years. E.g. Google App Engine ran my png gradient service, last updated in 2012 (with an update to Go 1.x from 0.x!), until the retro platform was shut down in 2019 (I didn’t want to update to the new one because ehhhh). It was completely free, and I completely forgot about it the day linear-gradient landed in browsers.

    6. 3

      The author confuses the website itself with the stack used to serve it. A static website may be distributed on a USB drive if you so wish. But the second half of the post has some good points. There are some really good dynamic website applications that just get out of your way. 👍

    7. 2

      I’d point out that application servers may not be necessary at all, and if they are, they can easily be managed automatically too. Apache, for example, will start and stop FastCGI servers on demand, which is really nice, and deploying one to a URL location is as simple as uploading it with that name.

    8. 1

      Isn’t the difference that a static website sets headers instructing the browser to cache the documents, while a dynamic website must not be cached? I noticed the letters “cach” don’t appear anywhere in the article.

      1. 2

        I’ve understood it more to be that a static site’s computation is conceptually equivalent to indexing and serving, but not much else. In other words, it’ll do work to find the content it’s supposed to return, and whatever else it needs to do in order to actually return it, but it’s not producing that content in any meaningful way. A dynamic site, in contrast, I’ve thought of as doing some work to construct the content, e.g. inserting a username for a greeting or something. It’s doing something more than just finding a sequence of bytes that already exists somewhere and returning it.

        It’s a pretty arbitrary definition, because the difference between finding content and constructing it is an abstraction of its own. After all, you could imagine a server that just has every possible file precomputed and just determines which to return from a given request and any associated state; such a server would meet my definition of a static site but could behave indistinguishably from my definition of a dynamic site. But, nevertheless, that feels like the common meaning of the distinction.

        I’d agree that usually results in the cache behavior you describe, but I’d call that incidental. That said, I am not a web developer - I’m in security by trade. So I am very much not an expert or authority.

      2. 1

        That doesn’t seem right: a static website could be frequently updated, in which case you wouldn’t want to cache it for long periods of time; whereas a dynamic website could change what content it generates only infrequently, in which case you can easily cache it.

        To me, the difference comes down to whether you need some process in addition to the web server or not: a static website is just a web server + files + configuration, whereas a dynamic website has the web server communicate with some other process (which, for the sake of argument, could even be done so through a filesystem interface, making the two cases look the same to the web server process) which generates the response.

        1. 1

          > a static website could be frequently updated

          The HTTP caching system supports that: either the file’s modified date or a hash of its contents can be used as a cache key, so the browser only downloads the new version of the file when necessary.
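
          A minimal sketch of that revalidation flow (Node, assuming a single index.html and an arbitrary port):

          ```ts
          // Sketch: the file's mtime is the cache key. A frequently updated static
          // site stays cacheable because clients revalidate instead of re-downloading.
          // (Exact string comparison is a simplification of HTTP-date handling.)
          import { createServer } from "node:http";
          import { readFileSync, statSync } from "node:fs";

          createServer((req, res) => {
            const mtime = statSync("index.html").mtime.toUTCString();
            if (req.headers["if-modified-since"] === mtime) {
              res.writeHead(304); // client's copy is still current; send no body
              res.end();
              return;
            }
            res.writeHead(200, { "Last-Modified": mtime, "Content-Type": "text/html" });
            res.end(readFileSync("index.html"));
          }).listen(8000);
          ```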

          > a dynamic website could change what content it generates only infrequently, in which case you can easily cache it

          What I meant was that in my “headcanon” definition, the content on a “dynamic” website would never really be cached. For example, say the website uses WebRTC, and every time you request the HTML page it includes a dynamically generated script in the <head> containing SDPs for WebRTC peers. Let’s say, for the sake of argument, that one of those peers is the server, and the server must generate a new SDP for every request. In that case, the response bytes will always be different and should never be cached.

          Yes, it’s possible that the dynamic part could also be a separate API, but for various legacy support / latency reasons, the site may perform better (“boot” faster) if the javascript is injected up front.

          This is a sort of contrived example but there are tons of “real time” or “application” use cases on the web where caching simply does not make sense. Those are what I would describe as truly “dynamic”.

    9. 0

      hate the clickbait title