1. 23

    “Small web” as a hobby is fine. But this article, like nearly all others in this line of thought, makes a technocratic argument that people want the wrong thing. It’s all wrapped up in pseudo-moral arguments (selling your soul, destroying the planet), when the only ‘sin’ is making a tradeoff between size and development time, or size and user experience: indeed, people generally prefer sites with nice images to those without. Even most smartphone users don’t care if it adds a megabyte of download.

    I do like ‘small websites’. I don’t use Facebook. I try to write code that is efficient. But this is a labor of love and takes time, and it’s far from obviously better.

    If someone can produce a ‘small web’ site that people actually want to use and that doesn’t just contain blog posts about the ‘small web’, Gemini, or running Raspberry Pis on solar power, I’d be more open to recommending it generally. As it is, it’s a hobby for tech enthusiasts at best and a fetish at worst.

    Anyway, here’s my https://0kb.club/

    1. 13

      I think it would be instructive for enthusiasts of the small web to understand why people aren’t using it or self-hosting, but are using Facebook et al. A lot of these enthusiasts are basically in their own echo chamber.

      1. 2

        Yeah, I definitely agree with that. I remember reading an article in the past year that said something similar about the Fediverse. Basically, the average would-be user of a Twitter clone doesn’t care about the technical details and likely doesn’t care about privacy too much. They care about the convenience, UX, and network of users.

        1. 5

          I think Twitter (et al) users really are concerned about privacy; they just aren’t willing to sacrifice literally everything else in order to achieve it. People frame this as “not caring about privacy and lying to themselves”, but it’s not. It just doesn’t make sense to the common tech-enthusiast worldview, and is therefore dismissed as nonsense.

      2. 8

        Thoughtful comment, thanks. However, I specifically wanted to avoid dwelling on the moral arguments, because I agree with you that big websites aren’t a “sin”. I mention “saving the planet” in light of power consumption, but it’s more of an aside. The privacy concerns with “selling your soul to large tech companies” are related, but a slightly different issue – still, as I mention, the small web helps us resist that.

        My “whys” in my introduction are different: it’s simpler and hence easier to develop and debug, it’s faster, it extends your phone’s battery life, and (I believe) it’s a compelling aesthetic. I’m making some technical arguments, but my main goal with the article was to preach the aesthetic: small is beautiful.

        Sometimes big sites/software are about reducing development time (e.g., Electron), but often big websites are created simply because that’s how it’s done these days: big JS, big images, big architectures. But it doesn’t need to be that way, and it may actually be easier to develop in the small once we get used to it again.

        1. 3

          To me, it appears as if this issue is usually only viewed from two sides. Either the viewpoint is “users want that” (i.e., your point of view), or the viewpoint is “modern web development is crap, go minimal for moral reasons” (apparently the OP’s viewpoint).

          I think neither point of view is correct. “Users” is a generalisation that, like all generalisations, glosses over the individual users who think differently but are a minority. On the other hand, many people do not buy a moral imperative in web design either.

          In my opinion, the web is large enough for all of us. Please stop bashing fans of minimal websites as not having an idea of what “users” want, and also please stop telling everyone else that minimalism is the one way to go. Just design your website with the goals you have in mind, and acknowledge that there will always be people who disagree with those very goals.

          1. 4

            Note that “modern web development is crap, go minimal for moral reasons” isn’t my viewpoint. See my reply here. I appreciate your reply, though – I think your other points are valid.

          2. 1

            TIL: you can just not have any HTML content, and some CSS is still loaded

          1. 11

            There are basically 5 classes of programs that the author discusses:

            1. Fully dynamically linked programs. Only Python programs are of this form.
            2. Partially dynamically linked programs. This describes C and C++ using dynamic libraries. The contents of the .c files are dynamically linked, and the contents of the .h files are statically linked. We can assume that the .h files are picked up from the system that the artifact is built on and not pinned or bundled in any way. (See the sketch after this list.)
            3. Statically linked programs without dependency pinning. This describes Rust binaries that don’t check their Cargo.lock file into the repository, for instance.
            4. Statically linked programs with dependency pinning. This describes Rust binaries that do check their Cargo.lock file into the repository. (For simplicity’s sake, we can include bundled but easily replaceable dependencies in this category.)
            5. Programs with hard-to-replace bundled dependencies (statically or dynamically linked; for instance, they complain about rustc’s LLVM, which is dynamically linked).
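
            To make case 2 concrete, here’s a quick hedged sketch (liblob and every name in it are invented for illustration) of how a “partially dynamically linked” program splits: the function body lives in the .so, while anything in the header – struct layouts, macros, inline functions – is compiled into the consumer at its build time.

            ```c
            /* liblob.h -- hypothetical header installed by the system liblob */
            #ifndef LIBLOB_H
            #define LIBLOB_H

            struct lob_config {
                int retries;
                int timeout_ms;   /* adding/reordering fields changes the ABI */
            };

            /* inline: this body is compiled into every consumer binary,
             * i.e. "statically linked" even though liblob.so is dynamic */
            static inline int lob_default_timeout(void) { return 1000; }

            int lob_connect(const struct lob_config *cfg);  /* lives in liblob.so */

            #endif

            /* main.c -- a consumer program */
            #include "liblob.h"

            int main(void) {
                struct lob_config cfg = { .retries = 3,
                                          .timeout_ms = lob_default_timeout() };
                /* lob_connect() is resolved from liblob.so at load time, but the
                 * struct layout and inline body above were frozen at *our* build
                 * time; if a liblob upgrade changes the header, this binary and
                 * the new .so silently disagree. */
                return lob_connect(&cfg);
            }
            ```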

            I think it’s pretty clear that what the author is interested in isn’t actually the type of linking; they are interested in the ease of upgrading dependencies. This is why they don’t like Python programs despite the fact that those are the most dynamically linked. They happen to have tooling that works for the case of dynamically linked C/C++ programs (as long as the header files don’t change, and if they do, sucks to be the user), so they like them. They don’t have tooling that works for updating Python/Rust/Go/… dependencies, so they don’t like them.

            They do have a bit of a legitimate complaint here that it takes longer to relink all the statically linked dependencies than the dynamically linked ones, but this strikes me as very minor. Builds don’t take that long in the grand scheme of things (especially if you keep around intermediate artifacts from previous builds). The benefit that we don’t have the C/C++ problem, where the statically linked parts and the dynamically linked parts can come from different code bases and not line up, strikes me as more than worth it.

            They seem to be annoyed with case 3 because it requires they update their tooling, and maybe because it makes bugs resulting from the equivalent of header file changes more immediately their problem. As you can guess, I’m not sympathetic to this complaint.

            They seem to be annoyed with case 4 because it also shifts the responsibility for breaking changes in dependencies slightly from code authors to maintainers, and their tooling is even less likely to support it. This complaint mostly strikes me as entitled: the people who develop the code they are packaging are for the most part doing so for free (this is open source, after all) and haven’t made some commitment to support you updating their dependencies, so why should it be their problem? If you look at any popular C/C++ library on GitHub, you will find issues asking for support for exactly this sort of thing.

            Category 5 does have some interesting tradeoffs in both directions depending on the situation, but I don’t think this article does justice to either side… and I think getting into them here would detract from the main point.

            1. 5

              I was especially surprised to see this article on a Gentoo blog, given that, as I remember Gentoo (admittedly from like 10–15 years ago), it was all about recompiling everything from source code, mainly For Better Performance IIRC. And if you recompile everything from source anyway, I’d think that should solve this issue for “static linkage” too? But maybe Gentoo has changed its ways since?

              Looking at some other modern technologies, I believe Nix (and NixOS) actually also provide this feature of basically recompiling from source, and thus should make working with “static” vs. “dynamic” linking mostly the same? I’m quite sure arbitrary patches can be (and are) applied to apps distributed via Nix. And anytime I nix-channel --upgrade, I’m getting new versions of everything AFAIK, including statically linked stuff (obviously also risking occasional breakage :/)

              edit: Hm, Wikipedia does seem to also say Gentoo is about rebuilding from source, so I’m now honestly completely confused why this article is on Gentoo’s blog, of all the distros…

              Unlike a binary software distribution, the source code is compiled locally according to the user’s preferences and is often optimized for the specific type of computer. Precompiled binaries are available for some larger packages or those with no available source code.

              1. 11

                “Build from source” doesn’t really solve the case of vendored libraries or pinned dependencies. If my program ships with liblob-1.15 and it turns out that version has a security problem, then a recompile will just compile that version again.

                You need upstream to update it to liblob-1.16, which fixes the problem, or maybe even liblob-2.0. This is essentially the issue; to quote the opening sentence of this article: “One of the most important tasks of the distribution packager is to ensure that the software shipped to our users is free of security vulnerabilities”. They don’t want to be reliant on upstream for this, so they take care to patch this in their packages, but it all takes some effort. You also need to rebuild all packages that use liblob<=1.15.

                I don’t especially agree with this author, but no one can deny that recompiling only a system liblob is a lot easier.
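
                For contrast, here’s a hedged sketch (liblob is again hypothetical) of why the packager prefers the dynamic setup: one rebuild of the library fixes every consumer at once.

                ```c
                /* liblob.c -- the system library; imagine the vulnerability lives here */
                #include <stdio.h>

                void lob_greet(void) {
                    puts("liblob 1.15");   /* the 1.16 fix edits only this file */
                }

                /* app.c -- one of many consumers:
                 *
                 *   void lob_greet(void);
                 *   int main(void) { lob_greet(); return 0; }
                 *
                 * Build once:
                 *   cc -shared -fPIC -o liblob.so liblob.c
                 *   cc -o app app.c -L. -llob -Wl,-rpath,'$ORIGIN'
                 *
                 * Shipping the fix means rebuilding liblob.so alone; app and every
                 * other dynamically linked consumer pick up the patched code on
                 * their next launch. A program that vendored or pinned liblob 1.15
                 * needs its own patch and rebuild instead. */
                ```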

                1. 2

                  AIUI the crux of Gentoo is that it provides compile-time configuration - if you’re not using e.g. Firefox’s Kerberos support, then instead of compiling the Kerberos code into the binary and adding “use_kerberos=false” or whatever, you can just not compile that dead code in the first place. And on top of that, you can skip a dependency on libkerberos or whatever, which might break! And as a slight side-effect, the smaller binary might have performance improvements. Also, obviously, you don’t need libkerberos or whatever loaded in RAM, or even on disk.

                  These compile-time configuration choices have typically been the domain of distro packagers, but Gentoo gives the choice to users instead. So I think it makes a lot of sense for a Gentoo user to have strong opinions about how upstream packaging works.
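
                  A hedged sketch of the mechanism such a flag ultimately drives (the flag name and code here are invented): the toggle typically becomes a preprocessor define, so a disabled feature and its library dependency never enter the binary at all.

                  ```c
                  /* app.c -- compile-time feature gating, USE-flag style (names invented) */
                  #include <stdio.h>

                  #ifdef USE_KERBEROS
                  #include <krb5.h>   /* the dependency exists only in this build */
                  static const char *auth_method(void) { return "kerberos"; }
                  #else
                  static const char *auth_method(void) { return "none (compiled out)"; }
                  #endif

                  int main(void) {
                      printf("auth: %s\n", auth_method());
                      return 0;
                  }

                  /* cc -o app app.c                         # no Kerberos code, no libkrb5
                   * cc -DUSE_KERBEROS -o app app.c -lkrb5   # opted in at build time */
                  ```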

                  1. 2

                    But don’t they also advertise things like --with-sse2 etc., i.e. specific flags to tailor the packages to one’s specific hardware? Though I guess maybe hardware is uniform enough nowadays that a typical Gentoo user wants exactly the same flags as most others?
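
                    For reference, the way such tailoring usually works under the hood (a sketch; the flags are standard GCC/Clang ones) is that the compiler defines macros like __SSE2__ for the target, so -msse2 or -march=native selects a different code path at compile time:

                    ```c
                    /* add.c -- hardware-tailored build via compiler-defined macros */
                    #include <stdio.h>

                    #ifdef __SSE2__
                    #include <emmintrin.h>
                    /* SSE2 path: one instruction adds two doubles at once */
                    static void add2(const double *a, const double *b, double *out) {
                        _mm_storeu_pd(out, _mm_add_pd(_mm_loadu_pd(a), _mm_loadu_pd(b)));
                    }
                    #else
                    /* portable fallback for targets without SSE2 */
                    static void add2(const double *a, const double *b, double *out) {
                        out[0] = a[0] + b[0];
                        out[1] = a[1] + b[1];
                    }
                    #endif

                    int main(void) {
                        double a[2] = {1, 2}, b[2] = {3, 4}, out[2];
                        add2(a, b, out);
                        printf("%g %g\n", out[0], out[1]);
                        return 0;
                    }

                    /* cc -O2 -march=native -o add add.c   # tuned to the build machine */
                    ```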

                2. 4

                  This complaint mostly strikes me as entitled: the people who develop the code they are packaging are for the most part doing so for free (this is open source, after all) and haven’t made some commitment to support you updating their dependencies, so why should it be their problem?

                  Maybe I’m reading too much into the post, but the complaints about version pinning seem to imply that the application maintainers should be responsible for maintaining compatibility with any arbitrary version of any dependency the application pulls in. Of course application maintainers want to specify which versions they’re compatible with; it’s completely unrealistic to expect an application to put in the work to maintain compatibility with any old version that one distro or another might be stuck on. The alternative is a combinatorial explosion of headaches.

                  Am I misreading this? I’m trying to come up with a more charitable reading but it’s difficult.

                  1. 3

                    I’m not sure. When I wrote a Lua wrapper for libtls, I attempted to support older versions, but the authors of libtls didn’t do a good job of versioning macros. I eventually gave up on older versions when I switched to a different libtls. I am not happy about this.
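
                    For what it’s worth, when a library does version its macros well, a wrapper can paper over the differences at compile time. A rough sketch against libtls (LibreSSL’s tls.h defines a date-style TLS_API macro; the exact cutoff value below is illustrative, not authoritative):

                    ```c
                    /* version-gated compatibility shim in a libtls wrapper */
                    #include <tls.h>

                    int wrapper_set_protocols(struct tls_config *cfg) {
                    #if defined(TLS_API) && TLS_API >= 20180210   /* cutoff is illustrative */
                        /* newer libtls */
                        return tls_config_set_protocols(cfg, TLS_PROTOCOLS_ALL);
                    #else
                        /* older libtls: stick to its default protocol set */
                        return tls_config_set_protocols(cfg, TLS_PROTOCOLS_DEFAULT);
                    #endif
                    }
                    ```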

                  2. 3

                    Don’t JVM and CLR programs also do all their linking dynamically, or almost so?

                    1. 2

                      Er, when I said “Only Python programs are of this form”, I just meant of the languages mentioned in the article. Obviously various other languages, including most interpreted languages, are similar in nature.

                      I think the JVM code I’ve worked on packaged all its (Java) dependencies inside the JAR file - which seems roughly equivalent to static linking. I don’t know what’s typical in the open source world, though. I’ve never worked with CLR/.NET.

                      1. 3

                        It depends…

                        • Desktop or standalone Java programs usually consist of a collection of JAR files, and you can easily inspect them and replace/upgrade particular libraries if you wish.
                        • Many web applications that are deployed on a web container (e.g. Tomcat) or an application server (e.g. Payara) as WAR files have libraries bundled inside. This is a bit ugly and I do not like it much (you have to upload big files to servers on each deploy); however, you can still do the same as in the first case – you just need to unzip and re-zip the WAR file.
                        • Modular applications have only their own code inside, plus they declare dependencies in a machine-readable form. So you deploy small files, e.g. on an OSGi container like Karaf, and dependencies are resolved during the deploy (the metadata list the needed libraries and their supported version ranges). In this case you may have a library installed in many versions, and the proper one is linked to your application (other versions and other libraries are invisible despite being present in the runtime environment). The introspection is very nice: you can watch how the application is starting and whether it is waiting for some libraries or other resources; you can install or configure them, and then the starting process continues.

                        So it is far from static linking, and even if everything is bundled in a single JAR/WAR, you can easily replace or upgrade the libraries or do some other hacking or studying.

                  1. 3

                    Whenever “what package management ought to look like” comes up, I think my fundamental objection comes down to this:

                    How does the system handle theming?

                    It’s super common for there to be a theme manager that has a list of user-contributed themes on a website. It’s also possible for literal thousands of themes to exist for a single program.

                    So, a few questions:

                    • Should themes be handled by the system’s package manager, or by some sort of parallel package manager that’s specific to themes? After all, some programs’ themes are entirely declarative, trivially backward compatible and all-around extremely simple.
                    • If you’re running a program that breaks the theming system regularly and requires the theme be updated often, then what should handle the updating system if not the system’s package manager itself?
                    • If you’re using the system’s package manager, should there be some sort of “package subset” for extremely simple ‘packages’ that will never need to update, run code, etc?
                    • If you want to reinstall your OS on a different computer, how will installing your theming be handled? (saying “you manually track down and re-pick each theme for each program” is not a desirable response)
                    • Will the package manager stay responsive while dealing with a ton of potentially-less-than-one-kilobyte files?
                    • How do you handle user submissions when they’re actually submitting official packages? I mean, there’s zero reason to wait 2 years for a theme to be available in your distro’s LTS.

                    In practice, the answer to all of this is either 1) theming is a website you download from, which is annoyingly manual and not streamlined and won’t gracefully handle automated installations, or 2) a theme is a normal package and there are perhaps 3 themes, instead of hundreds, because submitting an actual package is a lot more overhead than submitting a webform (or even a “share this theme online” button in the program’s theme settings menu).

                    1. 3

                      I think one thing that is missing from modern OSes is a “community”-oriented aspect. Most of what people use computers for is to connect with other people. Unices are multi-user systems, but they’re rarely genuinely used that way anymore, and they’re not great when you do use them that way. My university offers a Unix shell account, but I don’t really use it much, because there’s not really anything there: there’s no chat, there’s no BBS, there’s no way to discover other users, etc. 9grid is pretty cool on that front: you can share pretty much anything with another user very quickly, share computing resources, all sorts of neat stuff.

                      The closest thing I can think of that does this sort of OS+social network idea is Urbit, but I have some ideological issues with the platform that I don’t think can be solved.

                      1. 2

                        I think there’s no community aspect because there’s no inherent community group for the OS to build around - forums are basically just random members of the public, after all, whereas the original university computers were explicitly built around university members AFAICT (the TUI for useradd explicitly, hardcodedly prompts for the user’s room number IIRC).

                        I don’t see how you could make a sane OS+social network system without improving the underlying social reality the OS is intended to reflect - if there are no ties binding the community together other than mutual interests, then there’s nothing preventing anyone from selling out/only joining in order to spam or push malware.

                      1. 4

                        I suspect that the “why” is more important than the “what” here, so I’ve tried to include some of both.

                        Things I’d want to see either implemented or explored in a new OS:

                        • A focus on seamless (less than 3 seconds), extremely fast app installation that doesn’t require admin permission:

                          …which is basically what browsers do, except we call them “webpages” instead. Browsers are very comparable to an OS and have gained quite a lot of market share off Windows/Linux/etc., and the desktop desperately needs to learn (some of) their lessons.

                          Other things that BrowserOS does really well:

                          • Trivial cross-device syncing and support
                          • Support for every device that matters
                          • Inherent network transparency
                          • ALWAYS up to date! (it kind of cheats with online-only requirements though, and I remember people constantly complaining about Facebook UI changes back when I still used it)

                          I think this is important for lowering barriers to entry - it’s generally much faster to check out a website than it is to install and run a program. And by extension, this makes it more likely that people will buy into the ecosystem for new reasons and stick around.

                        • Built-in payment system: So today, we pay for EVERYTHING online - either with tangibles like money, or with intangibles like ads and data. Why are intangibles preferred? Convenience, probably - intangibles are universally supported on every browser unless you install an adblock or something, and are literally zero-click and always have been. So intangible payment means you don’t need to require registration etc.

                          And more broadly, Free Software is fundamentally about tilting developers’ incentives toward helping the user. It does this by giving the user power, and “where the money comes from” is by far the biggest source of power around.

                          We’re currently relying heavily on corporate funding, but:

                          1) that means Free Software is still beholden to the donor corporation(s) power-wise;

                          2) corporate software doesn’t actually help individuals, and a corporation’s Freedom ultimately isn’t very important - employees don’t have power over their employer’s IT system anyway, so enterprise software being Free isn’t inherently very important for individuals (we mostly just care about the side-effects);

                          3) the stuff corporate software cares about is often poorly suited to general consumer use-cases - your home server does not benefit from easily spanning 3 continents, and the requirements of extremely scalable software often make the software much harder to maintain by 3 volunteers in their spare time. This isn’t new; here’s an xkcd from 2009 commenting on it: https://xkcd.com/619/

                          In other words, corporate funders are temporary allies, not friends.

                          So, we need to make it as convenient as possible for random member-of-public users to send money, and the best way to make it convenient IMO is to ensure it’s supported by the OS OOTB.

                          In particular, I think distros should natively support paid Free Software in their repos - plenty of Free Software devs sell GPL’d software, where they provide the source code gratis but sell the convenience of pre-compiled binaries. Distros constantly undercut that, which is legally allowed but at the cost of undercutting one of the few direct-from-user funding models that aren’t “DONATE MONEY PLS”.

                        • Native “faceted” ID system (more “explore” TBH):

                          People have different facets of their identity - e.g. you act differently in bed than you do with a child (if otherwise, please tell the cops) - and people deliberately avoid using the same account for e.g. LinkedIn and Pornhub, because they present different facets of their identity in those two scenarios, and those facets should be kept separate.

                          By “faceted” I mean having an identity tree or DAG, where your root identity has power over child-node identities and can prove ownership, but (most?) outsiders can’t prove any relation by default.

                          So ideally, what this means is that you don’t need to create a new account for anything - the program/website just perhaps asks for permission to grab an autogenerated pseudonymous account and you click okay, and off you go, zero barriers to entry.

                        • A first-principles re-design of the hardware input device:

                          Keyboard and mouse were chosen for historical reasons. Software is designed around using your existing keyboard and mouse, and the keyboard and mouse are used so as to be able to use the software - a chicken/egg problem.

                          KB+M have two main problems: they’re hard to use without a surface to rest them on, and the keyboard lacks discoverability for its shortcuts. The touch-screen is one step forward, one step back, as touch-screens don’t have tactile feedback (i.e. you can’t feel where your fingers are).

                        • A multi-device “operating system”:

                          So nowadays people might have several of:

                          • A desktop
                          • A laptop
                          • A tablet (iPad etc.)
                          • A phone
                          • An e-reader (maybe)
                          • A smartwatch
                          • A smart TV
                          • etc. etc. etc.

                          Yet most Linux distros don’t have any sort of over-arching system to handle them OOTB. There’s stuff like NextCloud, which is a massive pain to set up on all devices, and it really seems like there needs to be standard software infrastructure to handle connections between your set of trusted devices in a convenient fashion.

                          I think it would be useful to separate device-specific config from device-independent config (like “disable wifi because this specific laptop’s wifi driver leaks” vs. “disable wifi because I have a system that queues up network tasks and does them all in one go for better battery life”). IDK.

                          Honestly, I’d settle for someone coming up with a name that distinguishes a multi-device “personal nexus” from a single device’s operating system like FreeBSD. Using the term “operating system” for two different concepts sucks. If someone more educated than me has a Proper Name for what I’m describing, please go ahead and mention it, as long as it’s not “operating system”.

                        • A better migration system:

                          To be fair, this isn’t really new or anything academically sexy, it’s just something that’s typically mediocre. Apparently Apple does it quite well, but I can’t say personally.

                          The more your OS relies on people configuring it rather than having one-size-fits-all defaults, the more important an easy and seamless migration system is. Also, the more people tend to upgrade their hardware or buy hardware they expect to replace in an average of 2 years’ time because the battery et al are not replaceable, the more important an easy and seamless migration system is.

                        • The freebie: a better documentation system:

                          Based on the theory that there are four types of documentation (and man pages don’t consistently provide more than one of the four, and don’t cleanly separate them): https://documentation.divio.com/

                          I don’t really have much to add for this one.