1. 32
  1.  

    1. 12

      The author is a bit more forgiving than I’d be. Scripts that get killed, logs that don’t work, et cetera, are problems, not annoyances. When something that has very precisely defined functionality stops functioning as expected, whoever made the change gets the blame, no matter their reasoning.

      To be fair, I have nothing good to say about Dreamhost, anyway. They allow tons of malicious activity on their networks and take ages to take action when it’s reported. They don’t seem to care about anything but doing the bare minimum, but as someone who is only seeing the bots on their networks and who isn’t a customer, I can’t speak to the quality of anything but their responses to abuse complaints. But any server, network, NAT implementation, whatever, that times out ssh connections or has inexplicable flakiness is going to get a fist shaken at them, no matter what.

      I’ve hosted people on Unix systems (NetBSD) continuously since the late ‘90s and still have some of the people and their content from then. Granted, I stopped trying to make money by hosting and let Unix shells just be a benefit of having servers for other reasons, but I totally get how difficult it’d be to find something similar elsewhere. I also understand how many people these days don’t have a mental model of how it works, and I’ve had some rather funny exchanges watching the light bulb go on as the pieces start making sense to them.

      I’ve tried to discuss the cloud part with people - particularly how you make something that can just work regardless of the hosting platform - and, like asking Android developers how you install a toolchain on an Android device, I get a lot of responses that don’t make sense until I ask the same thing again, but more slowly and with added emphasis, after which the responses are typically some variation of “Why?” and/or “I really don’t know.”

      But, like anything else that’s good, works well, doesn’t need lots of maintenance and just runs, Unix shell hosting will exist forever, even if younger folks are never taught about it. It’s fun to teach them, though :)

      1. 3

        But, like anything else that’s good, works well, doesn’t need lots of maintenance and just runs, Unix shell hosting will exist forever,

        I am not so optimistic it will continue to exist as a business … It will continue to exist in some form, but it may not be attractive to younger entrepreneurs. The economics seem to be “squeezed” already.


        From one perspective, I agree that Dreamhost totally dropped the ball. The new server upgrade kinda broke everything, and they didn’t understand that

        From another perspective, I think most of the support people are in foreign countries, and it’s not going to accomplish anything to shake your fist at them … they have limited latitude in their jobs.

        My spidey senses tell me that what happened is similar to what happened to Heroku

        • There were competent people who created a good server config many years ago, maybe decades ago
        • Due to business reasons, or just employee turnover, the company lost its ability to improve this configuration
        • They switched to some kind of stock Ubuntu config
        • This is a project pushed by management, without real understanding of the technical details. Certainly they don’t understand FastCGI or care about it.
        • If they need to leave some customers by the wayside to achieve this corporate objective, so be it
        • And the PHP customers are more common and profitable, so who cares about Unix, etc.

        I’m pretty sure Heroku is gonna break a bunch of stuff when they move to Kubernetes. I mean I’m sure that is why they got rid of the free tier – because they don’t want to migrate the free tier to Kubernetes. That’s a lot of cost, and no money.


        So there is nothing you can do in that case … it’s a business decision. I’m glad it lasted 15 years!

        I still have the account, but I’m not going to use it for things that need to be reliable, like https://oils.pub/

        1. 2

          how you make something that can just work regardless of the hosting platform

          That’s Kubernetes’ biggest strength IMO, and something not a lot of people seem to see as a benefit. I personally don’t like using it much, but it’s the only real solution I know.
          You ask the cloud provider for a cluster (which is done in a cloud specific way), and then basically everything is done inside the cluster using k8s resources, thus portable to another cloud provider.
          It doesn’t cover all the cloud features out there, but you can manage everything needed for most use cases, especially with additional operators, and once you’re using k8s resources it’s the easy path to use them instead of reaching for cloud specific tech.
          So you essentially end up trading cloud-provider lock-in for Kubernetes lock-in, but nowadays k8s is a standard.

          For personal use I still like to manually provision a VPS and install whatever floats my boat (which has been NixOS for a while).

          1. 5

            I guess you’ve only been in environments where Kubernetes clusters were set up ahead of time for you.

            The reality is that clusters are easy to set up, easy for cloud providers to maintain, but very difficult for you and me to maintain, at least without downtime. This essentially guarantees that you’re on one of the big cloud providers, forever.

            You mentioned the lock-in, and I think that undersells it. It’s not the same between providers, it’s merely similar, which is much more insidious.

            1. 1

              I didn’t mention self managed because it’s a pain and not a solution unless you can dedicate a team to it, though k8s distros are likely much better than the last time I tried.

              It’s a different discussion but there are plenty of smaller clouds that offer managed Kubernetes.

              The cloud lock-in? There’s cloud independent k8s resources for basically everything you need, at least for what’s mentioned in the article and what’s most common in the industry. You don’t have to use the cloud specific parts.

        2. 12

          The author is right, there are not many Unix hosting options out there.

          However, I personally self-host everything for me, and for my community. We run a small hackerspace and what I ended up doing is something very simple: FreeBSD Jails.

          I see the author saying that

          I was surprised that it was based on FreeBSD! I use Linux on my local machines, and I prefer my remote machines to be the same.

          I don’t want to talk about OS diversity anymore, my doctor does not recommend that, so I’ll skip that part. However, most common Unix hosting providers in the 90s (I’ve been told) were running non-Linux systems, some of them still do, such as SDF.org, which provides free Unix shell for anyone.

          I understand why the author wants to use Linux everywhere, but trying another Unix-like system is never a bad idea.

          As to our hackerspace, my setup is pretty simple. Every member of the hackerspace logs in by doing `ssh jail@ourhackerspace` and they just drop into a Jail, as root, and they can do `pkg install postgresql-server` if they want to. If needed, they can even create Jails of their own.

          The config was very simple:

          # cat /home/jail/.ssh/authorized_keys
          command="/usr/local/bin/doas /usr/sbin/jexec -l antranigv login -f root",restrict,no-port-forwarding,no-X11-forwarding,no-agent-forwarding,pty,no-user-rc ssh-ed25519 <the_key_here> antranigv@zvartnots
          

          and in nginx I did something as simple as

              server {
                      listen 80;
                      server_name ~^(?<domain>.+)\.hackerspace.am$;
          
                      location / {
                              proxy_pass http://$domain.hackerspace.local;
                      }
              }
          

          There was also some DNS magic, which I don’t remember, but I think it was as simple as a glob match in BIND.
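
          If memory serves, a wildcard record along these lines would do it (a hypothetical zone-file fragment; the IP is a placeholder, not our actual config):

```
; Hypothetical BIND zone fragment: send every subdomain to the host
; running the nginx proxy above (placeholder IP).
*.hackerspace.am.    IN    A    192.0.2.10
```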

          I’m sure you can do the same with Linux and LXC, but most people who are interested in being Unix hosting providers are, by culture, not interested in doing Linux/LXC. The Linux/LXC people tend to be more interested in “cloud”-y things, like 2 more levels of abstraction on top of AWS. The BSD/jails/chroot people will be interested in Unix hosting.

          This is not a technical holdback, but a cultural one.

          Nice article!

          1. 3

            Yeah actually I am more interested in BSDs now, and I forgot to mention that I did look through this entire list:

            https://www.freebsd.org/commercial/isp/

            I am wondering why BSD shared hosting is not more common? It seems like they should be BETTER for shared hosting in some ways.

            I have not found any obvious alternatives to Dreamhost


            NearlyFreeSpeech seems to be the most prominent one. Personally I don’t like the usage-based pricing, and I mentioned that I had a problem with slow disks when I tried them:

            https://news.ycombinator.com/item?id=32731207

            i.e. Dreamhost hardware/networks seemed to be faster at the time, which was probably a long time ago now


            I would definitely be interested in recommendations for BSD hosting, since I do want to test my software on different operating systems

            It looks like there are many VPS options for BSDs, but not many shared hosting options

            1. 3

              I have no personal experience with them, but I know NearlyFreeSpeech upgraded their hardware, including disks, a year or so ago.

          2. 9

            Somewhere in there, there was a mention of a VPS, but otherwise it was framed as shared hosting vs. using cloud services. But I wonder: why not run your favorite Linux distro in a VPS? There is an abundance of suppliers, and even a real dedicated server is quite affordable at budget providers. I help admin a small forum, and we moved from shared hosting to a VPS because of papercuts from small restrictions that were not showstoppers but made the experience quite unpleasant. Just being able to install any package you want, or reboot the server, is wonderful after being stuck with a shared host.

            1. 6
              1. It’s wasteful if you only run a small handful of light sites.
              2. You’ll also have to maintain the server yourself, which increases the barrier to entry.
              3. Maintenance could be a hassle – patches that should be installed promptly and unexpected downtime that you don’t have time to fix for a couple of days.

              As an example, I’ve heard sysadmins say that they don’t want to do their work on their time off, so maintaining servers as a hobby is a no-go.

              1. 2

                Or from another perspective: You’ll also have to maintain the server yourself, which ~~increases the barrier to entry~~ provides a valuable learning experience people often miss out on today.

                1. 1

                  Ease of maintenance is a good question: VPS hosts usually have some images with manage-your-hosting web panels pre-installed; I have no clue how to compare the robustness of updates between the options…

                  Of course, I personally don’t care, because I do have scripts for my preferred way of managing updates on my VPS (one could say because I have a preferred way of handling updates)

                2. 3

                  (author here) This was briefly mentioned, but it is a FAQ, so I could have addressed VPSes


                  I’ve had a Linode VPS with Ubuntu/Debian since 2008 – so like 16 years now. This was BEFORE I really used shared hosting …

                  I still use it, but not for anything I actually want to be up all the time. Over time I found the “set and forget” nature of shared hosting a lot more reliable

                  That might just be me, but I don’t think so. I’ve noticed A TON of link rot due to:

                  “I set up a VPS, and wrote one or two good blog posts last year. But now I am doing other things, and the site went down and I don’t remember how to fix it”

                  I think it’s mainly monitoring, and putting other stuff on the VPS that’s unrelated, which can destabilize it


                  For a blog, I’d use Github pages before a VPS. But as mentioned, that only handles static sites, and scripting is useful (even for blogs)

                  I think people who recommend VPSes haven’t used either Heroku or PHP. The VPS is “standard advice”, but the experience is significantly worse, and the result is worse, IMO

                  1. 2

                    Edit: Not sure how, but I think I wanted to respond to https://lobste.rs/s/f5ziu7/comments_on_shared_unix_hosting_vs_cloud#c_v0de1u and somehow the UI confused me about which of your comments I was replying to.

                    “set and forget” nature of shared hosting a lot more reliable

                    In theory, yes. If it works, and if the company doesn’t go out of business or hit some other issue. The above post would not exist if it were so rosy in practice.

                    haven’t used either Heroku or PHP.

                    I wrote a bunch of apps in PHP, around 2000-2005. I remember setting up htaccess files in Apache, LAMP, and hosting WordPress and Drupal on shared PHP/CGI hosting back in the day, all that jazz. Before Rails and Node.js. There was a certain beauty and simplicity to it, sure. A simplicity that most of the ecosystem still hasn’t fully recovered …

                    Ubuntu/Debian since 2008 …

                    I might be cheating here, as a full Nix convert: I have all my systems’ configuration in a git repo, in a Nix flake. My 3 desktops, a bunch of laptops for extended family, my VPS, one MacBook Air, and every other piece of configuration.

                    The config I use on my VPS is the same NixOS config I use for my desktops, just with desktop stuff disabled and Nginx and some custom addons added. My VPS config is fully reproducible, and if my Steam games etc. are working on my desktop, I’m guaranteed that the much simpler headless VPS config works too. I don’t need to occasionally “do system upgrades etc.”, as that just happens as a byproduct of my normal setup. All that NixOS complexity bought me a lot of simplicity w.r.t. managing this zoo.

                    Otherwise, yes, a raw classic Linux box requires a bit of babysitting and monitoring. Logs fill up, disks fill up, it needs security upgrades, and stuff breaks or requires reinstalling and/or reconfiguring. It’s true. With NixOS it’s very easy to get to a point where it’s on auto-pilot, including “upgrades”.

                  2. 3

                    Yeah, the commoditization of VPSes seems like the elephant in the room. <$10/mo, and I don’t really see many downsides. You can run PHP/Jekyll/whatever, or “just” straight nginx. Varnish, WireGuard, whatever. Pretty great IMO

                    To me, it is shared hosting, though I understand the common/old vernacular.

                    1. 1

                      Yeah, I think this is the way to go. VPSes have gotten quite cheap and on top of that you get a better security and performance boundary (hardware virtualization vs shared kernel).

                      It may be slightly less efficient, but if you run a handful of sites yourself it becomes a rounding error quite quickly, and you get a lot more flexibility.

                    2. 8

                      I’ve complained about similar to this on Mastodon. I did a year and a half at AWS and got dumped into their “Containers” (EKS, ECS, Fargate, etc) teams. I went from being an “old-school sysop” type like the author describes to having to learn Kubernetes and cloud absolutely from scratch

                      It was miserable and it was frankly absolutely traumatic participating in the AWS metrics death marches (naturally I got PIP’d and thrown overboard after just over a year)

                      Something someone pointed out that has been lost in the transition to the cloud is the satisfaction. Everything is just numbers on a screen. CPU, memory. Charts in Grafana etc

                      Back in the day there was a genuine sense of pride and workmanship in setting up a rack of servers, drilling right down to the OS, tuning all the knobs and pushing all the dials. Really optimizing the thing you’re working on to fit the workload it’s after

                      Cloud vendors want to treat compute as an ephemeral abstraction. Just throw your perfect little app into a docker container and punt it into a kubernetes void to figure out how to run it. If it dies because it’s buggy, let Kubernetes restart it! Then wrap it in layers of bubble wrap with logging and metrics and all the other Sisyphean layers of abstraction and misery until it’s an unmanageable boulder of mud, tape and tears and it takes an entire SRE team to push it up a hill

                      My roommate likened it to Marx’s concept of a worker being alienated from his labor. You’re so divorced from your own acts of creation or operation from the computer that everything just melts into pointless sludge

                      1. 4

                        My roommate likened it to Marx’s concept of a worker being alienated from his labor.

                        I imagine it’s not just LIKE that, it IS that … but I haven’t read the right bits of Marx so I’m not sure. Can anyone tell me which bits of Marx to read please? (He wrote too much!)

                        1. 2

                          Cloud vendors want to treat compute as an ephemeral abstraction. Just throw your perfect little app into a docker container and punt it into a kubernetes void to figure out how to run it.

                          That’s because everything is a stateless web app! For state (i.e. the hard part), those just talk to a managed service! RDS, S3, etc. Are you making a non-stateless/non-web app? You’re out of luck. Can’t have those anymore…

                          I’m still keeping my state in sight and control.

                        2. 7

                          Heh, 1000 sites per server :-) When I was working on web servers at Demon Internet in the late 1990s, our shared hosting servers had 1000 web sites each. Our “homepages” static website hosting service (included in the dial-up subscription) had more than 50,000 web sites on a cluster of less than 10 servers. Demon was using BSD / Apache / CGI / Perl (it was before Linux was good at networking, and before the rise of PHP and MySQL). So there generally wasn’t a huge CPU load. I have absolutely no idea what contention ratios are like these days; I wonder how it has changed.

                          I’m a big fan of Mythic Beasts (not just because I’ve known the founders for decades!) After I took over Cambridge University’s DNS, I moved all the vanity domain registrations to Mythic. They could give me monthly billing which reduced the renewal workload from hours per month to minutes, by avoiding university finance office bullshit. I subsequently moved my personal hosting to Mythic as well.

                          Their job application process involves a kind of escape room challenge http://jobs.mythic-beasts.com/ which I am told is quite fun (tho I have not bothered to do it myself).

                          1. 2

                            Wow, these days it should be at least 10x the amount, considering how much the Linux kernel and commodity hardware has improved in 20-30 years!

                            Funny thing is you can get a sense of it from the shell:

                            $ wc -l /etc/passwd    # Dreamhost
                            5079 /etc/passwd
                            
                            $ wc -l /etc/passwd  # Mythic Beasts
                            458 /etc/passwd
                            

                            The Dreamhost box is very beefy though, with 128 cores/threads and 512 GB of RAM

                          2. 5

                            I mostly agree. I am confused how access to Apache-style logs is not standard anymore.

                            I think the main thing that changed for the better is the customer isolation requirement. CPUs now support all kinds of useful virtualization thingies that make isolation better.

                            People have shifted an incredible amount of complexity into the build environment à la Vercel, and yet completely refuse to think about the hosting implementation.

                            I finally gave fly.io a try, and it turns out it is a modern take on cgi-bin. One gets proper control over routing requests, logs in a somewhat standard cloud format, and Firecracker for isolation. Prices are not insane either. However, while their stack checks all the cool-modern-features boxes, it is not sufficiently stable and not at all interoperable with other providers

                            1. 1

                              Yeah I was interested in fly.io too, but they seem to not have their “state” figured out yet … It started out as an edge network, and then people wanted to host stateful apps, so they rolled out an SQL service that was apparently not well received.

                              I think I just want a modern Heroku … that also has a static file service, like Github Pages

                              1. 1

                                I think I just want a modern Heroku … that also has a static file service, like Github Pages

                                (shameless plug) Render is pretty much this. I’m the founder; happy to help.

                                1. 1

                                  Oh cool! The funny thing is that I linked one of your docs in the appendix of this post:

                                  https://render.com/docs/render-vs-heroku-comparison

                                  And that’s because I saw it on Hacker News a few days ago :-)

                                  https://news.ycombinator.com/item?id=42833080

                                  That doc was helpful, and helped me understand what Render is.


                                  What do you think about the points in this section: https://oils.pub/blog/2025/02/shared-hosting.html#whats-wrong-with-cloud-hosting

                                  1. Lack of protocols - like CGI or FastCGI
                                  2. Stability, e.g. over a 10 year period / The Lindy Effect

                                  I linked one of your docs in that section: https://render.com/docs/#quickstarts

                                  I’m wondering: if I use a new language like Inko, Zig, or YSH, how do I say `print "<p>Hello $name</p>"` and have it work?

                                  Heroku has build packs for that I guess, and the servers are plain HTTP rather than CGI or uwsgi.

                                  So I wonder if there is a protocol to drop in an arbitrary HTTP server or CGI, in any language.


                                  (BTW I noticed in that comment you worked at Stripe – I remember I visited their offices in SOMA in 2012, I think shortly after they moved from Palo Alto (?). Because I did one of their Capture the Flag contests from Hacker News. I definitely realized it was an extremely talented team then!)

                                  1. 1
                                    1. Lack of protocols: unfortunately, the industry seems to have converged on asking app developers to start an HTTP listener in a Docker container that contains all the dependencies needed for the app. This setup can run on any host that supports Docker, including Render, and it frees the platform from worrying about installing or managing application dependencies. I say ‘unfortunately’ because Docker isn’t great for developer ergonomics. As a result, providers who care about DX end up with some behind-the-scenes ‘dockerizing’ for certain languages and frameworks, similar to Render’s native environments, or Heroku’s buildpacks. My own preference is to meet developers where they are and support however they build their apps; Docker or otherwise. We’ll keep working on it.

                                    2. Lack of stability: the Lindy effect is real in hosting, which is why AWS is still the default for most businesses. However, the hyperscaler usability tax is also very real, pushing people to other providers. You want a host that doesn’t lock you in to their cloud by building vendor-exclusive features like Vercel does with Next.js. This is where Docker can be handy: as long as you manage your own dependencies in a Dockerfile, you can simply take your container to the next host and have them run your app without modification.

                                    Re: Stripe, that was a long time ago! Yes, we moved to SF (near 2nd and Mission) from Palo Alto in January 2012. I’m glad you found CTF fun. It was indeed a great group of people.

                                    1. 1

                                      I agree Docker has DX issues, but regardless of whether the platform chooses Docker, there are other issues with the app/platform interface, like:

                                      • What port or socket does the container listen on, so it can be proxied?
                                      • How does the platform get web logs out?
                                      • How does the platform check if the app is ready to serve?
                                      • How are replicas started? i.e. if the app can dynamically provision apps

                                      In my ideal world, the platform would support any Unix process that obeys a simple protocol – and that process may or may not be a Linux container, which may or may not be Docker

                                      I feel like those are orthogonal concerns


                                      I think this is where Heroku was going around when they were acquired. They started out with Ruby, and then started to make a polyglot and Unix-y platform.

                                      https://12factor.net/ - https://lobste.rs/s/ksbcmq/twelve_factor_app_2011

                                      So when I said I would like “a modern Heroku with a static file service”, I was talking about something Unix-y or “12 factor”, based on processes.

                                      And I also liked that the build packs were open source - the app and platform also have an interface at build time, which should be specified.

                                      I mentioned static files because Apache lets you seamlessly mix static and dynamic content – your .html can live next to your .php and .cgi. I think your docs rightly pointed out that this is missing from Heroku


                                      But I looked over the Render docs a bit more, and it seems to be architected more towards getting started easily in common languages, rather than neutral protocols:

                                      • There are runtimes native to the Render platform, for Node.js / Bun, Python, Ruby, Go, Rust, and Elixir
                                        • https://render.com/docs/native-runtimes
                                        • e.g. no PHP support, which kinda reminds me that the Google App Engine team had problems adding PHP support for many years – there was definitely a strong language bias there too, for various technical reasons
                                      • And then there are Docker containers
                                        • https://render.com/docs/docker
                                        • Although I didn’t see exactly how you’re supposed to build a Docker container, it says Render automatically detects Dockerfiles at the root of your repo and suggests a Docker runtime for your app during service creation – and there there are also like 10 or 20 apps as special cases

                                      I might try it just to “understand what I’m missing”, but from reading the docs, I imagine I’m not the kind of customer you’re targeting

                                      Similar to what I wrote in the blog – Dreamhost is for PHP hosting, it’s not for Unix hosting. I can tell that from the docs and support

                                      Likewise, I think Render is for app hosting, but not arbitrary apps/languages apparently. I’m happy to be corrected though! I’d like it if there are NO “native runtimes” – if every language is on equal footing (as mentioned in the blog post)

                                2. 1

                                  and then people wanted to host stateful apps, so they rolled out an SQL service that was apparently not well received.

                                  fyi, they seem to be working on a better sql service, to be launched within months. https://go.news.fly.io/deliveries/dgSN7QkDAOOaEeKaEQGUtLI1B0iFoL8oIlsSYqY=

                                  (i’m not a user of fly)

                              2. 4

                                It’s a sad fact that this is no longer a commodity

                                Glad to see Mythic Beasts get this recognition.

                                I use their hosting but hadn’t seen that they support daemons. The linked Python docs likely work for Go too, with https://pkg.go.dev/net#Listen and “unix”.

                                1. 3

                                  Yup, I’m pretty excited by this!! I am planning to write some Go HTTP servers

                                  Although Go actually works fine for CGI too, because the binary starts in 1 ms, unlike Python’s 30 - 300 ms

                                  And Go still has a non-deprecated CGI library!

                                  1. 1

                                    Agreed, I hadn’t heard of them before, and they look like an interesting option.

                                    I miss the old days of WebFaction being a nice solid developer-focused shared host before GoDaddy acquired them.

                                    1. 2

                                      BTW, I mention OpalStack in the post, and it’s by the founders of WebFaction: https://news.ycombinator.com/item?id=24809178

                                      This is another “hosting economics” issue … big companies often buy up smaller hosting companies, and then shut them down for whatever corporate reason …

                                      1. 1

                                        Didn’t realize this was the same people, I’ll check it out, thanks.

                                  2. 3

                                    Reading this makes me wonder: if you had to come up with a system for shared hosting that was easier to manage nowadays, what would it be? It doesn’t feel like FastCGI is the answer, but is there one?

                                    1. 2

                                      That’s a very good question, and I want to make CGI version 2 :-)

                                      CGI hasn’t been updated since 1997 apparently - https://en.wikipedia.org/wiki/Common_Gateway_Interface

                                      What I have in mind:

                                      • backward compatible with CGI 1.1
                                      • optionally supports persistent processes like FastCGI, but not through sockets and threads
                                        • I’d support a single-threaded while (1) server
                                        • Or perhaps optionally a fork() server
                                      • parses all the crap in HTTP into structured data for you:
                                        • URL escaping like %20
                                        • URL query params: x=42&y=99
                                        • multipart/MIME form POSTs - the format that the browser sends file uploads in, which is very messy
                                        • cookies - these have a few fields
                                      • maybe sandboxed by default, e.g.
                                        • database connections are a “capability”
                                        • persistent disk is a capability – the default is ephemeral disk?
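
                                      Most of that parsing already exists piecemeal in language stdlibs; the point would be doing it once, before the app runs. A rough Python illustration of the structured data meant above (the shape of the dict is just my guess, not a spec):

                                      ```python
                                      from http.cookies import SimpleCookie
                                      from urllib.parse import parse_qs, unquote

                                      def structure_request(path, query_string, cookie_header):
                                          # What a "CGI v2" could hand the app instead of raw strings:
                                          # escapes decoded, query params split, cookies parsed.
                                          return {
                                              "path": unquote(path),             # "%20" -> " "
                                              "params": parse_qs(query_string),  # "x=42&y=99" -> {"x": ["42"], "y": ["99"]}
                                              "cookies": {k: m.value for k, m in SimpleCookie(cookie_header).items()},
                                          }

                                      req = structure_request("/my%20page", "x=42&y=99", "session=abc; theme=dark")
                                      ```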

                                      I think it can be useful for newer languages like YSH, Inko, Zig, etc.

                                      It would relieve you of having to write, say, a multipart/MIME parser, which is very annoying. That parsing can happen in a different process.

                                      Instead you would just use a length-prefixed format for everything, like netstrings

                                      It could also be JSON/JSON8, since those parsers already exist
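
                                      Netstrings are about the simplest length-prefixed framing there is: `<decimal length>:<payload>,`. A tiny codec as a sketch:

                                      ```python
                                      def encode_netstring(data: bytes) -> bytes:
                                          # A netstring is "<decimal length>:<payload>," -- no escaping
                                          # needed, which is the whole appeal over HTTP-style framing.
                                          return str(len(data)).encode() + b":" + data + b","

                                      def decode_netstring(buf: bytes) -> tuple[bytes, bytes]:
                                          # Returns (payload, remaining bytes) so consecutive frames
                                          # can be peeled off a buffer one at a time.
                                          head, sep, rest = buf.partition(b":")
                                          n = int(head)
                                          payload, trailer = rest[:n], rest[n:]
                                          if not sep or trailer[:1] != b",":
                                              raise ValueError("malformed netstring")
                                          return payload, trailer[1:]
                                      ```

                                      The parser can never overrun a frame, which matters when untrusted processes talk to each other.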


                                      Actually almost 15 years ago I wrote almost exactly this with https://tnetstrings.info/ , used by the Mongrel 2 web server

                                      My implementation was actually used by a few people, but it wasn’t very good

                                      I still think this is sorely needed… it’s weird that nothing has changed since 1997, but I guess that’s because there’s a lot of money to be made in the cloud, and locking customers in

                                      The 12 factor app by Heroku (linked in post) is probably the only evidence I’ve seen of any hosting provider thinking about protocols and well-specified platforms, in the last 20+ years


                                      If anyone has a nascent programming language that needs “CGI” / web support, feel free to contact me! It would be nice to iron out this protocol with someone who wants to use it

                                      I think it can be deployed easily, because the first step would just be a simple CGI exec wrapper … So it would be a CGI script that execs your program, and hence can be deployed anywhere. This won’t be slow because Unix processes are not slow :-)

                                      Later it can be built into a web server

                                        1. 1

                                          I’d say that proxying is fine and that’s what Heroku does, but it’s also easier to write a CGI script than an HTTP server.

                                          It’s not easier to write a FastCGI script than an HTTP server, because you always need a FastCGI library, which is annoying.

                                          But I think we can preserve the good parts of FastCGI in a “CGI version 2”

                                          There is actually a third dimension I’m thinking of: “lazy” process management, which is similar to the cloud. It’s not just the config and the protocol.


                                          Also, WSGI / Rack / PSGI all use CGI-style requests, in process. So there a CGI-like protocol is still very natural for programming languages, more so than HTTP.

                                        2. 1

                                          Just for reference, CGI has no state, right? I’ve always kinda wondered if there was some trick that could be used that would get us to something kinda like CGI but without paying bootstrap costs per request (though maybe that’s… fine)

                                          1. 2

                                            Yes, CGI is stateless – it starts a process for every request.

                                            The simplest way to avoid that would be to have the subprocesses run in a while (1) loop as mentioned. So you can reuse your database connections and so forth between requests.

                                            And then a higher level would handle concurrency, e.g. starting 5 of these while (1) processes in parallel, and multiplexing between them.

                                            You will get an occasional “cold start”, which is exactly how “serverless” products work, so that seems fine. Or of course you can always warm them up beforehand.

                                            (FastCGI as deployed on Dreamhost has a weird quirk where it will start 2 processes, but then those 2 processes may each have say 4 threads, which to me seems to make provisioning more complex)
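
                                            The while (1) worker could be as small as this sketch: expensive setup happens once before the loop, and each request is one length-prefixed frame on stdin/stdout (the `<len>` + newline + body framing here is just an assumption for illustration, not an existing protocol):

                                            ```python
                                            import sys

                                            def serve_forever(handle, stdin=sys.stdin.buffer, stdout=sys.stdout.buffer):
                                                # One-time setup: DB connections, loaded config, etc. would
                                                # live here and be reused across requests, unlike plain CGI.
                                                state = {"requests_served": 0}
                                                while True:
                                                    header = stdin.readline()   # "<len>\n"
                                                    if not header:
                                                        break                   # supervisor closed the pipe
                                                    body = stdin.read(int(header))
                                                    state["requests_served"] += 1
                                                    response = handle(body, state)
                                                    stdout.write(str(len(response)).encode() + b"\n" + response)
                                                    stdout.flush()
                                            ```

                                            A supervisor would start N of these and multiplex requests between them; a crashed worker just costs one cold start.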

                                            1. 1

                                              bootstrap costs per [CGI] request (though maybe that’s… fine)

                                              I’m guessing that it’s rare to run into any real-world problems with CGI now that machines have enough memory that they no longer have to hit the disk in order to find the code to run for each request.

                                              1. 2

                                                It depends on the startup cost of the CGI program. You’ll run into problems at low load if it’s Python or if it’s library-heavy Perl. My link log is a CGI that ingests its entire database of tens of thousands of links on every request, which needed FastCGI when it was Perl; after rewriting with Rust+Serde it is fast enough that I could revert to much simpler CGI. At some point I might have to move it from a flat file to something with indexes…

                                                1. 1

                                                  CGI that ingests its entire database of tens of thousands of links on every request […] At some point I might have to move it from a flat file to something with indexes

                                                  If you use CGI anyway, can’t you embed the links in the binary at compile time? New links => recompile and swap the binary. Should just work, as there’s no state.

                                                2. 2

                                                  On an emotional level I dislike the idea of spinning up Python and doing parsing, compilation etc on every. Single. Request.

                                                  State and cache is “the enemy” for lots of stable systems but it would be neat to have something.

                                          2. 2

                                            There are places like SDF: https://sdf.org/ that do shared Unix shells and web hosting. Their focus is on community more than anything else though. SDF for instance specifically spells out FastCGI support: https://sdf.org/?faq?WEB?02

                                            1. 2

                                              Hm I have seen that page, but it’s easy to miss the FastCGI part! I thought it was just about CGI

                                              It would be nice to have sample apps – I have used FastCGI on Dreamhost successfully, but I actually didn’t get it to work on Mythic Beasts, and they don’t recommend it.

                                              It is not the same everywhere; there is non-trivial configuration

                                              I might try them, but honestly I find their pages pretty hard to read / navigate … lots of CAPS!

                                              1. 1

                                                Yes, I agree.

                                                SDF in general is just sort of hard to use. I forget every decade or so and try again. I never keep my membership for more than a year. I just end up not using it, because it’s such a PITA. Last I tried, their help/support system was some special forum thing they coded themselves as a TUI app that was an even bigger PITA.

                                                They even had (have?) a Nextcloud/Owncloud instance, or whatever the latest project name is, that members can use. If you really need that and don’t want to manage an instance yourself, it’s probably worth the hassle of SDF. I don’t need it that badly.

                                                There are other community-oriented Unix shell organizations around. I’m not really up to speed on any of the others either, though.

                                                I have plenty of unix shells at home and at work, so I don’t need more of them.

                                            2. 2

                                              Why share Unix hosting, when a whole VM is like $4/mo? It’s like trying to run a business making people share underwear or socks. The amount of extra effort trying to make it hygienic enough, etc. is just not worth it when everyone can just buy themselves their own socks and wear them exclusively.

                                              Unix/Linux is just a complex beast with a gnarly and wide surface area. In modern times of automatic exploitation, local privilege escalation, hardware bugs, etc., none of this makes any sense anymore. Yes, everyone getting a VM is relatively a waste, but it’s a waste of something we nowadays have an abundance of.

                                              The most important part of optimization is that you need to optimize what is scarce and the bottleneck from the perspective of the whole system. Focusing on “wasted memory by virtualized kernel” is wrong, if it isn’t your bottleneck. And given how cheap compute is, it is definitely not a bottleneck.

                                              I have a $6/mo VM at Hetzner running NixOS, and I’m running a whole zoo of stuff on it: blog, Matrix server, Radicle server, some of my little web apps, and a bunch of other stuff. It just works, all fits without a problem, and it’s all mine. I upgrade it when I want, run the versions of everything that I require, and customize what I need customized, from the kernel to every bit of configuration. The waste would be for me to have to look for individual “shared hosting” for each piece that’s on it, read their docs, work around all the little details of everything, and so on.

                                              1. 4

                                                I’ve had a VPS since 2008, and still use it for certain things

                                                I describe why I don’t use it for the blog here: https://lobste.rs/s/f5ziu7/comments_on_shared_unix_hosting_vs_cloud#c_wy9sjt

                                                and here: https://news.ycombinator.com/item?id=42969840

                                                It’s not about the cost. It’s about saving my time, and that I want it to actually be reliable


                                                NixOS does seem interesting, though there still seem to be many users who haven’t gotten over the hump. I think you probably save time if you already use Nix at your job (?), so you don’t have to learn it for self-hosting

                                                But learning it JUST to put up a blog (web server, SSL certs, config) and a Python script seems like a bit much

                                                1. 2

                                                  Anyway, I think my main point was that there’s simply not enough money in the business of shared hosting, while there’s a lot of risks and cost, so I don’t expect the situation of shared hosting to improve, irrespective of our preferences. Plus the tech fashion changed, and I doubt it will go back anytime soon.

                                                  1. 2

                                                    I wonder if it can go forward with some kind of extension of the current «oh this is a preinstalled image for your VPS with what you needed to get the shared-shell-hosting experience» to an explicitly advertised image+update-management story.

                                                    Sometimes I wonder what VPS hosters could provide so that those images-with-migrations could be a subscription service where, in case of your provider going down, you can retake control of your VPS rather than lose access, as happens with hosted versions of open source when a provider goes out of business.

                                                    1. 1

                                                      I can’t figure out what you mean. :)

                                                      1. 1

                                                        There are hosting providers that provide the hardware VPS. These need upfront capital to start. They will probably be reviewed based on reliability and honesty about performance.

                                                        There is the service which is basically «top-level Nginx/Apache version/config and some SQL DB version will be updated for you». If this is bundled with hosting, there are various incentive effects around hosting quality and system management quality, and then the reviews are kind of hard to filter because they mix the two kinds of issues.

                                                        I wonder if there is a way to separate «I pay for storage/RAM/CPU to them after reading the reviews about their reliability and performance» and «I pay subscription price for automatic updates to someone who has earned a system competence reputation elsewhere and made a plausible promise to debug and mass-fix any on-update issues their subscribers have».

                                                        So you get an image maybe with a simple management panel for the few user-configurable options and use it as a non-root user and have the updates applied, and you can press a button and get the root password but then you lose the stability promises. This probably needs the hosting provider to provide an «external-manager» role for VPS that can get a notification/confirmation when the VPS is used in the Rescue Console mode.

                                                        The thing is, if your update-subscription provider goes out of business, you are at least no worse off than if you had started with a plain VPS, and have a good chance that standard ways of doing system updates will work fine for a month or two while you pick the migration options.

                                                  2. 1

                                                    NixOS does seem interesting, though there still seem to be many users who haven’t gotten over the hump. I think you probably save time if you already use Nix at your job (?), so you don’t have to learn it for self-hosting

                                                    I’m so fed up with doing manual Linux admin that I’m going to rip my Hetzner box apart and have the thing built on NixOS. Along these lines: https://mtlynch.io/notes/simple-go-web-service-nixos/

                                                  3. 2

                                                    I have a NixOS VM and still have shared hosting.

                                                    If my VM breaks, or I want to do maintenance on it, I do it at my leisure. I enjoy that flexibility.

                                                    Shared hosting is for the sites/services that I have whose uptime can be someone else’s responsibility.

                                                  4. 2

                                                    I would like to mention GraalOS as a related development, as it’s not too well known (not affiliated, just follow GraalVM)

                                                    It would allow the safe hosting of applications without virtualization, in a way maybe similar to CGI, but with Graal-supported languages as scripts (JVM languages, JavaScript, Python, Ruby, but also LLVM languages).

                                                    The Oracle-written documentation/ads are pretty bad, so let me link to a HN comment which describes what it actually is: https://news.ycombinator.com/item?id=37613065

                                                    I think the idea is quite clever.

                                                    1. 2

                                                      I use and love NearlyFreeSpeech. Ever since they got TLS working on their service (which basically lines up with widespread client SNI support), I don’t have a bad word to say about them, and I have lots of good ones.

                                                      I think there could be a really interesting space for a thing like shared hosting but with containers that the users could upload. For example, the main reason I wouldn’t want to host one of my little django sites there is that it’d be so much work to twist the site until it worked there, and it might just run afoul of their limits if I ever accidentally submitted a bad title image and pillow ran amok resizing it. But I don’t really need a full VPS for that site; it could work just fine if I got to upload a container that could be run under strict limits, connect over TCP to some database, and access some shared filesystem.

                                                      I’d try it out if someone offered it at a nicer-than-VPS price.

                                                      1. 1

                                                        Nicer-than-VPS for any kind of hosting is clearly not happening. There are $1/month VPS options if you pay yearly… We need to figure out micropayments

                                                        Docker hosting exists but the price is always higher than my €3.5/month OVH VPS that hosts my personal-domain versions of my homepage and email.

                                                        1. 2

                                                          I think you mean “cheaper than VPS”.

                                                          People will clearly pay more for “nicer than VPS” – Heroku was famously expensive, but it was easier to use than a VPS, so people paid.

                                                          Many startups used it, racking up AWS-type bills of hundreds of thousands of dollars a month, and Recurse Center has used it for >10 years IIRC

                                                          Render is also more expensive than VPS - https://render.com/pricing


                                                          VPS is cheap because they are offloading the sys admin duties onto you. And human time is the thing that is expensive, not computer time.

                                                          Globally, this is less efficient. It’s easier/better for 1 sys admin to maintain 1000 or 5000 sites than it is for 1000-5000 people to maintain those sites. And the latter is the case with a VPS.

                                                          1. 1

                                                            I thought that the price factor at least needs to be pretty low to count as nicer-than-VPS and not as a separate kind of thing. (I guess for Docker hosting the factor is low, but you need to manage the image anyway)

                                                            The annoying part is that some «web management panel» images at VPS providers do promise that updates to the basics happen automatically (which would be a proper way to deduplicate the admin work if it were reliable), but it is very hard to find reviews on how reliable these are.

                                                      2. 2

                                                        I don’t really get the intention of this post. Is it a rant? Is it “you all don’t know what you’re missing”? Or is it more of a factual observation than I manage to grasp right now?

                                                        Yes, of course it sucks if something that some people find useful is not/hardly available anymore. But there’s usually a reason - in my case everyone I know who did this (back then) has long since moved on, mostly to VPS (because of the flexibility), some to static hosting, some to shared hosting. I think I can name like 3 people I know who were running cgi-bin or “custom fcgi” stuff in the last 5 years; everyone else wasn’t.

                                                        1. 1

                                                          We should get rid of the entire CGI thing and have a hosting standard where we can run anything compiled to WASM.

                                                          I’m locked into a standard hosting provider for another year and I tried to see if I could run anything other than PHP crap there and turns out… no. No way to run anything there that isn’t PHP crapware. So I guess that thing will be underused there and some server farmer will harvest a final year of margin off me.

                                                          1. 2

                                                            How would that work? I ask having a website that is CGI based (my blog) and a website as an Apache module (my online King James Bible). How would I write them as WASM?

                                                            1. 2

                                                              I haven’t worked out the exact details, but since this is possible with k8s already, I don’t see why it wouldn’t be possible at a lower level: a web server that serves a WASM app at a URL (similar to your Apache module, I guess).

                                                          2. 1

                                                            I think this is giving Unix way, way too much credit.

                                                            Wouldn’t it be silly if Unix came with a canned set of programming languages?

                                                            It does. They’re C, Unix shell and awk.

                                                            Moreover, imagine that each Unix machine had incompatible syscalls.

                                                            They do. That’s why you can’t just run a Linux binary on OpenBSD or vice-versa. The standard interface that’s relatively compatible across Unixen is not the syscalls (heck, almost everyone except Linux doesn’t even have stable syscalls across different versions of the same Unix!), it’s libc. Which is absolutely tied to a single programming language.

                                                            Having to go through libc means everyone needs to care about weird C-isms like varargs functions, null-terminated strings, errno and (not least due to how errno is commonly implemented) the C preprocessor. It’s not that Unix is meaningfully language-agnostic, it’s just that we’ve been in this situation long enough that everything else has evolved enough C mimicry to mostly work anyway.

                                                            Unix didn’t set out to be an open standard, it started out as a single, proprietary OS implementation. Then projects like GNU showed up trying to reimplement its API. The need for standardization (which eventually culminated in POSIX) only became apparent when users of different Unixen started getting annoyed by incompatibilities.

                                                            And we’re starting to see a similar sort of “predatory compatibility” with cloud services, at least the ones that have been around for a while. AWS S3 has a “proprietary”, “nonstandard” protocol, but it’s stable and well-documented enough that by now it has become the de-facto standard for key-value stores. Both open source reimplementations and commercial cloud services are simply providing the S3 API now. Similarly, Docker has been cannibalized to within an inch of its life by other, better container runtimes that reimplemented its API (now known as “the OCI runtime spec”).

                                                            1. 3

                                                              heck, almost everyone except Linux doesn’t even have stable syscalls across different versions of the same Unix!

                                                              Well, the largest non-Linux free Unix-like is probably FreeBSD and they do promise cross-version syscall compatibility, and seem to keep the promise. (Indeed, NetBSD and OpenBSD do not promise that)

                                                              Agree with the rest, though

                                                              1. 1

                                                                Yeah I should have said “libc calls”. (And Windows and OS X are also compatible/specified at the shared lib level, not at the syscall level)

                                                                Except that FreeBSD and Illumos actually have Linux x86 syscall emulation layers - https://docs.freebsd.org/en/books/handbook/linuxemu/

                                                                Illumos can run Docker containers that contain Linux binaries because of this


                                                                Actually that is one thing I want to experiment with – I have a habit of “doing everything twice” to make sure it conforms with specifications.

                                                                I want to see how complete the layer is … although my preliminary research suggests that you need a special version of Docker that maps unshare() to Illumos zones and so forth, so yeah it isn’t very easy to do portable sandboxing unfortunately

                                                                That is a huge thing missing from Unix that caused decades of downstream complexity

                                                              2. 1

                                                                Off-topic (if I may) on naming: “oils” sounds weird to me because it sounds plural but is actually not (I think). I catch myself having this thought almost every time when reading the blog ever since the project was renamed.

                                                                What do you think about this? Have you heard of this from others? Have you considered a different name?

                                                                I see you mention “Making it plural is a subtle change that also results in a different connotation” in a different post of yours, but this doesn’t make it better for me.

                                                                1. 3

                                                                  Plural seems appropriate because the project has many parts: OSH, YSH, J8 Notation, and more …

                                                                  https://www.oilshell.org/blog/2024/09/project-overview.html#thirteen-parts-of-the-oils-project

                                                                  The project is basically a collection of languages/protocols

                                                                  1. 1

                                                                    Hmm, maybe that makes sense, thanks.