“Small web” as a hobby is fine. But this article, like nearly all others in this line of thought, makes a technocratic argument that people want the wrong thing. It’s all wrapped up in pseudo-moral arguments (selling your soul, destroying the planet), when the only ‘sin’ is making a tradeoff between size and development time, or size and user experience: indeed, people generally prefer sites with nice images to those without. Even most smartphone users don’t care if it adds a megabyte of download.
I do like ‘small websites’. I don’t use Facebook. I try to write code that is efficient. But this is a labor of love and takes time, and it’s far from obviously better.
If someone can produce a ‘small web’ site that people actually want to use and that doesn’t just contain blog posts about the ‘small web’, gemini, or running raspberry pis on solar power, I’d be more open to recommending it generally. As it is, it’s a hobby for tech enthusiasts at best and a fetish at worst.
I think it would be instructive for enthusiasts of the small web to understand why people aren’t using it or self-hosting, but using Facebook et al. A lot of these enthusiasts are basically in their own echo chamber.
Yeah, I definitely agree with that. I remember reading an article in the past year that said something similar about the Fediverse. Basically, the average would-be user of a Twitter clone doesn’t care about the technical details and likely doesn’t care about privacy too much. They care about the convenience, UX, and network of users.
I think Twitter (et al) users really are concerned about privacy, they just aren’t willing to sacrifice literally everything else in order to achieve it. People frame this as “not caring about privacy and lying to themselves”, but it’s not. It just doesn’t make sense to the common tech-enthusiast worldview, and is therefore dismissed as nonsense.
Thoughtful comment, thanks. However, I specifically wanted to avoid dwelling on the moral arguments, because I agree with you that big websites aren’t a “sin”. I mention “saving the planet” in light of power consumption, but it’s more of an aside. The privacy concern with “selling your soul to large tech companies” is related, but a slightly different issue – still, as I mention, the small web helps us resist that.
My “whys” in my introduction are different: it’s simpler and hence easier to develop and debug, it’s faster, it extends your phone’s battery life, and (I believe) it’s a compelling aesthetic. I’m making some technical arguments, but my main goal with the article was to preach the aesthetic: small is beautiful.
Sometimes big sites and software are about reducing development time (e.g., Electron), but often big websites are created simply because that’s how it’s done these days: big JS, big images, big architectures. But it doesn’t need to be that way, and it may actually be easier to develop in the small once we get used to it again.
To me, it appears as if this issue is usually only viewed from two sides. Either the viewpoint is “users want that” (i.e., your point of view), or it’s “modern web development is crap, go minimal for moral reasons” (apparently the OP’s viewpoint).
I think neither point of view is correct. “Users” is a generalisation that, like all generalisations, glosses over the individual users who think differently but are in the minority. On the other hand, many people don’t buy a moral impetus on web design either.
In my opinion, the web is large enough for all of us. Please stop bashing fans of minimal websites as not having an idea of what “users” want, and also please stop telling everyone else that minimalism is the one way to go. Just design your website with the goals you have in mind, and acknowledge that there will always be people who disagree with those very goals.
Note that “modern web development is crap, go minimal for moral reasons” isn’t my viewpoint. See my reply here. I appreciate your reply, though – I think your other points are valid.
I enjoy a minimal (if well designed) UI as much as the next person, but I feel that these posts tend not to present a persuasive argument likely to have a material impact on the UIs they perceive as bloated. And I think that this may be a result of our understanding of “software” or “the web” more broadly.
I think that programmers, especially hobbyists and those that frequent this type of forum, tend to see their work as a creative artifact and to that degree an expression of their ideals. If software is an ideal or an expression of the developer’s values, why would you want to build something bloated, ugly, or distracting?
But it seems unlikely that CNN dot com is a visual abomination because the designers and developers don’t appreciate minimalism and the practice of restraint. CNN isn’t a website but a business, and since they’re in the business of converting attention into profit, why would they design around principles that would otherwise help the user easily locate content and decide when they’ve had enough? I’m using CNN here as an example, but you can make a similar point about most internet businesses that profit by advertisement.
Every single example of “small web” design presented in the article is a tech website with a preexisting minimal brand for likeminded creative tech hobbyist types who presumably already appreciate that design in the first place. If sir dot hat were to use “excessive” javascript or too many colors or busy-up the UI, it would alienate its users and the business would plausibly suffer as a result.
I don’t really know what to think about these pieces anymore. Like, the web works better when it’s less complicated and more cleanly designed, and it’s frustrating in general when UI works poorly and distracts from the stated function of the tool. I’m just having trouble seeing how programmers are going to improve things by advocating for design ideals. The “small web” isn’t a question of aesthetics but of function.
I like the aesthetic, but it’s incorrect about Java. There have been Java AOT compilers for decades, and, more recently, free and open source ones. These give you the same end result as Go - a small native stand-alone executable with no runtime except a GC.
Java AOT compilers are either slow or not fully compliant. There are a few things that make Java difficult for AOT compilation:
The class loader is part of the spec (and used by the standard library for locales, so the first printf triggers it). A fully compliant implementation must be able to load new Java bytecode. You need to either embed a load-time compiler or not support this part of the spec. This has a knock-on effect on devirtualisation: even if you can statically prove that it’s safe to monomorphise a particular call site given whole-program analysis, your analysis may become wrong after something is loaded.
Reflection means you can’t actually do reachability analysis if the reflection APIs are used unless you edge a bit closer to symbolic execution and determine an exhaustive set of all of the strings that might be passed to the reflection APIs. This makes it very difficult to create a statically linked binary that doesn’t include the entire Java standard library.
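To make the reflection problem concrete, here’s a minimal sketch of my own (the class name and program are hypothetical, not from the thread): the class name below is ordinary runtime data, so no static reachability analysis can determine in advance which classes the binary must retain.

```java
// Sketch: reflection defeats closed-world reachability analysis.
// The class name is plain runtime data, so an AOT compiler cannot
// statically prove which classes must be compiled into the binary.
public class ReflectDemo {
    public static void main(String[] args) throws Exception {
        // In a real program this string might come from a config file,
        // a network request, or user input.
        String name = args.length > 0 ? args[0] : "java.util.ArrayList";
        Class<?> cls = Class.forName(name);
        Object obj = cls.getDeclaredConstructor().newInstance();
        System.out.println(obj.getClass().getName());
    }
}
```

An AOT compiler that strips “unreachable” classes would break this program for any class name it didn’t anticipate, which is why a conservative build ends up dragging in most of the standard library.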
Every method is virtual except in final classes (where it’s logically virtual, but at least has only a single implementation). Most Java JITs completely discard final because they have more accurate information from the loaded set of classes, so there’s been no incentive for a long time to add final qualifiers on classes for performance. This means that you need to do devirtualisation if you want to do any inlining (which is where most big perf wins come from). This is relatively easy in a JIT: even without a trace-based JIT, you can identify monomorphic and low-order polymorphic call sites and inline and then deoptimise if a call site becomes megamorphic later.
Most Java AOT compilers I’ve seen either completely punt on the class loader / reflection (and risk Oracle suing them, because they have a bunch of patents on bits of Java that are licensed only to 100% conforming implementations - they’ve probably all expired now though), require some special treatment (any class that you will load must be compiled to a .so), or hit really slow paths if you use them (e.g. including a simple interpreter that is used to run every dynamically loaded class, so you hit a factor of 10 or more slowdown when you use the class loader).
The AOT compiler I linked can work on a closed world assumption, or it can embed a runtime class loader (which itself is AOT compiled) - you control this at build time. Reflection targets must be determined at build time, but this can be automated with a test run, and you can always manually add entries.
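For context, GraalVM’s native-image (assuming that’s the compiler being discussed, per the link elsewhere in the thread) takes its build-time reflection targets from a JSON config; its tracing agent (java -agentlib:native-image-agent=config-output-dir=...) can record these automatically during a test run, and entries can always be added by hand. A minimal reflect-config.json sketch:

```json
[
  {
    "name": "java.util.ArrayList",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true
  }
]
```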
As I mentioned, I’ve tried this out on a complex real world app and it worked fine and performance was excellent.
GraalVM (which native-image is part of) also has other cool stuff, like being able to automatically derive a JIT-based runtime for languages like javascript, python, ruby, wasm, and llvm bitcode (c, fortran, etc.), given just an interpreter for that language, using their Truffle framework. The cool thing here is that it can also do cross-language optimisation, e.g. inlining a call from a ruby method into a C extension.
Interesting, thanks. I’d heard of that, but I didn’t think they were widely used. When I’ve used applications written in Java I’d always had to run them with java -jar foo.jar or similar … then again, I guess if it was natively compiled I might not have known it was written in Java. Do you know how widely this is done, i.e., Java applications being distributed in a native, compiled format?
Nice, thanks for the link. I’m glad “they” are focussing on this for Java these days. 32MB for an app like that isn’t bad at all nowadays (it’d probably be a similar size in Go).
For me, it all depends where the weight is going. Heavy JS is a nonstarter, but I love browsing Neocities and its plethora of big gifs and poorly optimized headers.
Speaking of hero images, you don’t need big irrelevant images at the top of your blog posts. They just add hundreds of kilobytes (even megabytes) to your page weight, and don’t provide value.
One issue is that all social media now assume every post has a “hero image” (who even invented that term? Give me an address to punch them in the face over TCP/IP). Another, more disturbing issue is that readers are significantly more likely to click posts that have an image (whether it means they like those images is debatable).
So, not adding them is a tradeoff, unlike many other size improvements, which are net benefits.
Yeah, this is concerning to me too. I’d just been reading about how Google Discover prioritizes content with images, presumably whether they’re relevant or not (they just have to be “compelling” and “high quality”). That’s rewarding the wrong thing.
I’ve, similarly, become interested in minimal bundles and binaries of late. One point I would add — I think a large amount of bloat is due to the fact that it’s so easy. When installing another node dependency is one npm i away then it’s trivial to add one more.
I wholly agree with you that most of the battle is just caring about size. If you change the mindset from “of course we need Google Analytics” to “what are the objectives GA is delivering and how can we best go about achieving them” you’ve already made significant progress. We keep adding and collecting without ever stopping to ask, why?
I just happen to be reading Schumacher’s Small is Beautiful at the moment and can also highly recommend it. The only downside to reading it is that you realise that the same economic and ecological issues we fight with today were just as clearly articulated in the ’70s…
If you look at the postJson function described in the article as an example of you-might-not-need-jquery, there is no error handler. How do you find out if there was an error?
The error handling I do have (for HTTP errors) is in the callback. I don’t have any error handling for network or lower-level errors here. For this use case it’s a somewhat throwaway or easily-repeatable action, and if nothing happens the user can simply click again. Not perfect, but I think reasonable for my use case here.
Anyway, here’s my https://0kb.club/
TIL: you can just not have any html content and some CSS is still loaded
Coincidentally, Amazon have just last week announced that their entire SDK supports this AOT compiler out of the box: https://aws.amazon.com/blogs/developer/graalvm-native-image-support-in-the-aws-sdk-for-java-2-x/
This is the most recent one I was referring to: https://www.graalvm.org/reference-manual/native-image/
I’ve used it myself on Peergos. Ended up with a 32 MiB executable (including 19 MiB of web assets, sqlite, postgres client, and FUSE bindings).
Thanks for the great article!
I have no words.
Regarding static site generators: Any recommendations for themes or plugins for Jekyll that are compatible with minimalism?
I’ve used Hyde before – it’s reasonably lightweight (“About” page transfers ~30KB), and looks good on desktop and mobile (it’s “responsive”).
You can set the onerror property to catch network level errors, e.g. https://github.com/Peergos/web-ui/blob/master/vendor/priors/gwt.js#L84