1. 15
    1. 7

      Surely part of the simplicity afforded to SourceHut is that it’s currently operating at a vastly smaller scale than something like GitHub. The traffic, storage, etc. requirements of GitHub almost certainly require a more complex architecture, right?

      1. 4

        Yes, but having the advantage of being at a smaller scale does not change the facts: SourceHut is the fastest and most reliable service. For anyone making the decision today of what platform to use, that truth will be reflected in their practical experience with using each service.

        That being said, I gave a lot of reasons here which suggest that our design gets more scale and more reliability for the dollar than I think GitHub can. SourceHut is fundamentally slimmer and more fault tolerant than GitHub et al, on multiple levels, by orders of magnitude. If we were handling GitHub-level loads tomorrow, we’d be crippled. But, if we were handling GitHub-level loads in 5 years, then I think we’d still be the fastest and most reliable at a fraction of the cost.

    2. 4

      The simplicity of SourceHut is definitely something I’m drawn to. I like snappy, simple web pages with minimal JS. Most of the modern features aren’t making the web much of a better place than it was in the 90s.

      SourceHut has 10 dedicated servers and about 30 virtual machines, all of which were provisioned by hand

      Ehhhh… I like a lot of the decisions that Drew makes, but this one just seems like old-man-shouting-at-the-sky dislike for modern virtualization and deployment technologies. I have issues with Docker’s overengineered and complex user interface. Its underlying model isn’t that much more elegant, and is definitely a hodge-podge of different Linux sandboxing features slapped together. And don’t even get me started on Kubernetes. Kubernetes isn’t needed for the vast majority of applications (and certainly isn’t something SourceHut would benefit from). But Docker solves a lot of deployment problems, and forces you to build infrastructure in a way that you can nuke and pave, which is one of the most important properties a system can have.

      I even understand eschewing virtualization in favor of giving the kernel full access to the hardware to squeeze as much performance as possible (to avoid issues like noisy neighbors messing up IO perf and virtualization overhead). There are benefits to having control over the hardware directly. But there are tools that do automatic provisioning that don’t require virtualization.

      But I see no benefit to manual provisioning, and many tradeoffs. You can’t nuke and pave. You waste time configuring new servers, trying to guess the configuration of the snowflake server that you set up last. You lose configuration settings that should reside somewhere in version control. Treating servers like pets rather than cattle doesn’t make anything simpler. I’m just baffled…

      1. 8

        The key which makes my approach work is going all-in on Alpine Linux. The configuration of my servers consists of:

        1. Their IP address and network settings
        2. The contents of /etc/apk/world (list of installed packages, between 20 and 30 lines, of which 2 or 3 are unique to that server)
        3. 2 or 3 config files, which probably have secrets and cannot be publicly version controlled

        I build our applications into Alpine packages, rather than into a Dockerfile. This is very easy to reproduce, and makes every machine easy to store in one person’s head. And not by sweeping complexity under the rug - but by being less complex. I don’t just understand my application, but also all of the dependencies and everything each machine is doing. It takes less than 15 minutes to provision a new SourceHut machine.
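
        To make that concrete, the world file on one of these boxes looks something like this (the package names here are illustrative, not our actual list):

            alpine-base
            nginx
            postgresql
            py3-flask
            redis
            exampleapp
            exampleapp-openrc

        The last two entries stand in for the 2 or 3 that are unique to the machine; everything above them is stock Alpine. Provisioning a replacement is essentially installing Alpine, pointing apk at our package repository, running apk add with that list, and putting the handful of config files back in place.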

        This isn’t me being an old curmudgeon; I have extensive experience with the Docker et al approach as well and I have found the simpler, boring strategy to be much more reliable and straightforward.

        1. 2

          In your experience, what are some of the problems that you obviate by avoiding Docker?

          1. 9

            The introduction of additional complexity needs to be justified, not the other way around.

            But, to be specific, Docker encourages a bunch of behaviors which I dislike. For one, it encourages you to deploy your application like an alien in a foreign environment (the distro), rather than packaging up your application properly so it fits neatly into the environment around it. The distro tends to make pretty good decisions - they’ve packaged up thousands of applications, whereas you’ve probably just done the handful which matter to you - so following their lead is wise, and allows your application to integrate in a similar manner to the rest of the things you need to consider on the box. And you do need to consider them - Docker encourages you not to think about them, but your dependencies are your responsibility.
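
            To sketch what fitting into the environment looks like in practice, a distro package for a web service is just a short APKBUILD along these lines (an illustrative example, not one of our actual packages):

                # Illustrative APKBUILD sketch, not a real SourceHut package
                pkgname=exampleapp
                pkgver=1.0.0
                pkgrel=0
                pkgdesc="Example web service, packaged like any other distro package"
                url="https://example.org/exampleapp"
                arch="noarch"
                license="MIT"
                # runtime dependencies come from the distro, not from a bundled image
                depends="python3 py3-flask postgresql-client"
                source="$pkgname-$pkgver.tar.gz $pkgname.initd"

                package() {
                    mkdir -p "$pkgdir/usr/share/webapps/$pkgname"
                    cp -r "$builddir"/* "$pkgdir/usr/share/webapps/$pkgname/"
                    # ship an OpenRC service so it starts like everything else on the box
                    install -Dm755 "$srcdir/$pkgname.initd" "$pkgdir/etc/init.d/$pkgname"
                }

            The package manager then handles installing, upgrading, and removing your application exactly the way it already does for everything else on the machine.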

            Speaking of being responsible for your dependencies, Docker encourages you to pile them up by the hundreds, then forget about them, making them quickly atrophy. They fall out of the support lifecycles for upstream, accumulate vulnerabilities, then - inevitably - break. Because you’ve been neglecting your dependencies after Docker wrapped them up tight in a neat little black box for so long, fixing this breakage is likely to cause cascading failures and make the problem a lot worse.

            Docker also introduces another moving part, which brings its own set of possible failure modes on board. The daemon could be offline, Docker Hub could be down, it could have eaten all of your disk space, or messed with the host’s networking and locked you out. I have seen all of these examples happen before, some of them several times. Another thing which bugs me is that on some of my systems, dockerd does fuck-all for 20 minutes after a reboot, which is often another 20 minutes of downtime that has no reason to be there. The Docker implementation frankly sucks; it’s a giant Rube Goldberg machine that no one understands, is impossible to debug, and is constantly growing and changing and getting worse.

            That’s just Docker, and not the giant web of problems which the industry has built on top of it. It’s not a castle built on foundations of sand, it’s sand castles built on sand foundations on a sand planet, and your application is the little toy flag stuck on top.

            1. 2

              It’s not a castle built on foundations of sand, it’s sand castles built on sand foundations on a sand planet, and your application is the little toy flag stuck on top.

              Given that silicon, which we refine from sand, is the second most abundant element in the Earth’s crust, this hits the nail on the head.

        2. 2

          So application configuration is automated through Alpine packages (which I like as a deployment medium), but that’s still not server provisioning. Granted, SourceHut isn’t trying to solve the problem of autoscaling to thousands of servers under load [1], so you have the option to build new servers more manually.

          My main issue isn’t with stamping out new servers, but avoiding custom configuration that sneaks in. Of course, by being disciplined it’s possible to avoid this, but Docker forces configuration to be tracked since changes are ephemeral otherwise. How do you address that concern?

          This isn’t me being an old curmudgeon; I have extensive experience with the Docker et al approach as well and I have found the simpler, boring strategy to be much more reliable and straightforward.

          I don’t doubt your experience with Docker, but I’m still unclear on what parts of it are so complex that they lead to reliability concerns. If you’re using VMs already, containers seem like a logical pairing (for certain problems).

          Also I would have thought that SourceHut build servers used containers when they allow you to ssh onto the machine to debug the build. What are you using to sandbox the ssh sessions?

          [1] And I applaud this – the complexity of autoscaling far outweighs the benefits for most applications, and I’m sure some amount of SourceHut’s low-latency performance can be attributed to avoiding the overhead of a distributed system.

          1. 3

            My main issue isn’t with stamping out new servers, but avoiding custom configuration that sneaks in. Of course, by being disciplined it’s possible to avoid this, but Docker forces configuration to be tracked since changes are ephemeral otherwise. How do you address that concern?

            Discipline. There aren’t a whole bunch of people working on these servers; for the most part it’s just me. Don’t hand the keys to the kingdom over to just anyone; train your sysadmins and sysops to use discipline to do their jobs well.

            I don’t doubt your experience with Docker, but I’m still unclear on what parts of it are so complex that they lead to reliability concerns.

            Answered in detail here: https://lobste.rs/s/dijw0l/prioritizing_simplicity_improves#c_usfanv

            Also I would have thought that SourceHut build servers used containers when they allow you to ssh onto the machine to debug the build. What are you using to sandbox the ssh sessions?

            KVM, which provides more security than containers can guarantee, and more features.

            https://git.sr.ht/~sircmpwn/builds.sr.ht
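
            Roughly speaking, every build gets its own throwaway VM, along these lines (an illustrative invocation, not our exact command line):

                # Illustrative sketch, not the actual builds.sr.ht command line.
                # Each build boots a disposable image under KVM; a host port is
                # forwarded to the guest's sshd, so a debug session lands inside
                # the VM rather than on the host.
                qemu-system-x86_64 \
                    -enable-kvm \
                    -m 2048 -smp 2 \
                    -nographic \
                    -drive file=build.qcow2,format=qcow2,if=virtio \
                    -netdev user,id=net0,hostfwd=tcp::2222-:22 \
                    -device virtio-net-pci,netdev=net0

                # ssh -p 2222 build@localhost to poke at a failed build; the whole
                # image is thrown away afterwards.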

          2. 2

            Also I would have thought that SourceHut build servers used containers when they allow you to ssh onto the machine to debug the build. What are you using to sandbox the ssh sessions?

            As of now, builds run in QEMU inside an empty Docker image.

    3. 3

      Hey Drew, signed up for sr.ht a while back. Loved looking through the wonderfully blueprinted Flask code to see what was going on. What motivated you to put each of the separate functions on their own sub-domain as separate apps, rather than a monolith? Was it to distribute outage risk or something?

      1. 4

        Distributing risk is one advantage, but it has more to do with the Unix philosophy - each service has one job and does it well, then you compose them in whatever combination suits your problem.
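
        At the web layer that composition is as boring as it sounds: each sub-domain fronts its own small app, something like this (an illustrative sketch, not our actual configuration):

            # Illustrative nginx sketch, not SourceHut's actual config.
            # Each service is its own app behind its own sub-domain, so one can
            # be redeployed or fall over without touching the others.
            server {
                listen 80;
                server_name git.example.org;
                location / { proxy_pass http://127.0.0.1:5001; }
            }
            server {
                listen 80;
                server_name lists.example.org;
                location / { proxy_pass http://127.0.0.1:5002; }
            }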