1. 13

  2. 19

    I’m generally fine with calling a VPS “self-hosting”, mostly because when people talk about self-hosting, they seem to be trying to achieve one of the following:

    • Avoiding vendor lock-in by preserving the ability to change providers
    • Running a free version to avoid high SaaS charges
    • Running the software themselves for the pleasure of it, or for ideological reasons

    All of those are satisfied with a VPS, assuming a market of more-or-less interchangeable vendors selling Linux VMs. (Which does exist right now!)

    A smaller number of people want to own the actual hardware they store their data on, or simply enjoy running hardware for the fun of it. But that seems relatively less common.

    While I personally enjoy running datacenters, I accept that I’m an outlier and shouldn’t impose my weirdness on others. 😉

    1. 15

      Using SaaS is like going to a restaurant to eat.

      Self hosting on a VPS is like making a nutritious meal at home with groceries from the supermarket.

      Self hosting on your physical hardware is like making that meal from veggies and animals you grew and raised on your hobby farm.

      It’s almost as if… things aren’t black and white!

      Self-hosting on a VPS is better than using SaaS, in terms of maintaining control and flexibility, understanding the stack and data-security implications, etc. Hosting it on your own bare metal takes those benefits even further, plus ensures a more diverse and distributed Internet. Kudos to anyone who is doing that.

      You could take this even further…

      It only counts as self-hosting if you host on hardware that you own and that you built

      Or

      It only counts as self-hosting if you host on hardware that you own and that you built from scratch using individual IC components, wrote all firmware for, …

      etc.

      1. 13

        One thing is discussing whether “self-hosting on a VPS is good enough”; trying to redefine a term that’s been used nearly unequivocally for quite a while is another, and I don’t think that’s a useful discussion. No matter how purist, this ship has sailed, please don’t fight windmills.

        That said, I also disagree with the original hot take.

        If you don’t trust the hoster’s admins not to somehow remote into your Xen instance… surprise, they can open your cabinet and plug a USB drive in. If you rent a physical server, it may have a serial console attached.

        Where do you draw the line? I don’t claim to be right, but I don’t think I’m a lot more secure renting a physical server at the same hoster versus renting a VPS there.

        So unless you glue all your ports shut, deliver your case with case-intrusion detection enabled, or maybe run your own cabinet (and even then someone might break in)… then I might concede the point.

        Also sure, my self-hosted services at home are better secured from physical access, but also a lot less reliably hosted. No UPS, no multihoming, no DC-grade networking equipment. Also, 24h disconnects are kinda the norm here, so there’s always 1-5 min of downtime every day. Everything else besides physical access control is worse. So I’d gladly trade that for the VPS and call it self-hosting.

        1. 4

          With the disclaimer that I’m not personally particularly worried about having a VPS subverted, the threat model is a little different from a physical box. With a physical box, the colo facility can get into it.

          With a VPS, the colo facility can get into it, the hypervisor admins can get into it (often these are the same people as the colo facility, but not always), anyone who finds a privilege escalation bug in the hypervisor can get into it and anyone who breaks into the hypervisor admins’ system for administering the boxes can get into it. Historically, hypervisors have had privilege escalation bugs found in them - didn’t Xen have a lot of bugs related to random stuff like the emulated floppy drivers? And IIRC there was at least one high-profile incident in the news where Linode got broken into by people who used that access to break into VPSes belonging to Linode’s customers.

          edit: to be clear, I don’t consider “subverted by the company I’m paying to host the box” to be something worth worrying about (because they’ll promptly lose all customers and go bankrupt if they do that). I don’t really care about splitting the “does this really count as self-hosted?” hairs. I’m just pointing out that, if you’re paranoid, you may care about the fact that a VPS typically has a couple more layers of stuff which could have security holes in them that give away access to your box by accident.

          1. 2

            You’re absolutely right, I had forgotten about “people on the same physical host who could find their way in through the hypervisor” - but as I said, my main point is that “self-hosting” isn’t 100% about security.

            1. 2

              Physical server providers often use integrated management systems like HPE’s iLO or Dell’s iDRAC. Those can and do have privilege escalation bugs in them too. :-3

              1. 1

                Now those things genuinely terrify me. It seems most likely that the firmware on all of them is written with the same kind of abject negligence normally found in IoT gadget manufacturers or home router vendors.

            2. 2

              Also, there’s always the practical security aspect of opening up your home network to the rest of the world. I’m not too concerned if my VPS gets owned, but I’d be a lot more worried if my home server got owned.

            3. 5

              I consider self-hosting an activity which is mostly sysadmin stuff like configuration, backups, and updates. If you do this, then you are self-hosting.

              For me, that implies hosting on a VPS counts as self-hosting.

              1. 5

                There’s also security. The old rule was that the enemy controls anything they get physical possession of. They can backdoor the device with an implant, target vulnerabilities in I/O other than Ethernet, and (a rarer one) extract secrets via side channels requiring proximity.

                So, I prefer some things stay physically close to me with no untrustworthy people having access.

                1. 7

                  with no untrustworthy people having access

                  I wish I could trust myself not to mess up on occasion 8~)

                  1. 4

                    That is a great counterpoint. Nicely summarises the build vs buy decision for a lot of companies, too. :)

                    1. 2

                      Although, as you have pointed out before, the processes used to ensure verifiable builds for your software and services should help reduce that risk, and ideas like the Checklist Manifesto can help reduce complexity so that you can get it right.

                  2. 3

                    So, I prefer some things stay physically close to me with no untrustworthy people having access.

                    It’s only in the past few years that my vanity domain (which hosts my email and utterly useless personal website) stopped being hosted on a little box in my home office off my personal DSL/cable. It eventually got too expensive to get a static IP…

                    1. 4

                      Your most precious and secret stuff is probably at your house, though. That’s more my point.

                      My vanity domain was bought for $500 by someone else when I was broke. The cheap, barely-acceptable alternative is currently parked with some service provider. Even worse! :)

                  3. 3

                    This is why the https://freedombox.org project is focused on running on a small board at home.

                    1. 3

                      A large factor for some people is who can access the content being self-hosted. There’s an argument that if you own the dedicated hardware in the colo, only you can access the content. I’m not sure that argument holds water in many, or even more than a minority of, circumstances.

                      However, with a VPS or cloud system, there’s usually some way for someone other than you to gain access to the data and/or root on the system. For that reason I personally don’t class VPSes or cloud systems as self-hosting, but can see how some people would.

                      I have dedicated servers; I consider those self-hosted. I have boxes at home and at work; I’m not sure I consider them self-hosting, just home and work servers. I have VPSes dotted here, there and everywhere, along with the odd ephemeral cloud system. I don’t consider anything there self-hosted because of the control aspect.

                      But to be honest, I feel like any argument about it is splitting hairs. If you’re running your own software instance on something you control and are comfortable with the data storage, more power to you.

                      1. 2

                        Yeah, I self-host. I have a Raspberry Pi right next to my router that I put my experimental websites on. The tricky part is keeping my domain name pointed at my IP address, since my IP address isn’t static. I wrote a bash script that checks what its current IP address is and, if it’s different from the one stored in a file, makes an API call to my domain name service to update where it points.
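
                        A minimal sketch of that kind of updater (the endpoint, record path, and token below are hypothetical placeholders; the real call depends entirely on your DNS provider’s API):

                        ```bash
                        #!/usr/bin/env bash
                        # Update a DNS record only when the public IP has changed since the last run.
                        set -euo pipefail

                        STATE_FILE="$HOME/.last_public_ip"           # previously seen address
                        CURRENT_IP=$(curl -fsS https://ifconfig.me)  # ask an external service for our public IP

                        if [ "$CURRENT_IP" != "$(cat "$STATE_FILE" 2>/dev/null || true)" ]; then
                          # Placeholder endpoint/token/record: substitute whatever your registrar actually exposes.
                          curl -fsS -X PUT "https://api.example-dns.invalid/v1/domains/example.com/records/A/home" \
                            -H "Authorization: Bearer $DNS_API_TOKEN" \
                            -H "Content-Type: application/json" \
                            -d "{\"data\": \"$CURRENT_IP\"}"
                          echo "$CURRENT_IP" > "$STATE_FILE"
                        fi
                        ```

                        Run it from cron every few minutes and the record lags an address change by at most one interval.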

                        1. 1

                          I’m as skeptical of VPSes as of SaaS; neither is better. It’s easy to introspect a VM when you control the hypervisor, without the knowledge of any SSH users signed into it. For example, mount its disk at the hypervisor read-only. I’m maybe too paranoid, but I ended up with my own hardware, which I myself installed into a rack at a datacenter with very strict security. But even then, there are dangers around :)
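
                          For instance, on a KVM/QEMU host an operator could do roughly this (just a sketch; the image path and mount point are made up, and a running guest’s filesystem may be mid-write, but it is still readable):

                          ```bash
                          # Attach the guest's disk image read-only via the NBD kernel module,
                          # then mount a partition from it; the guest never notices a thing.
                          sudo modprobe nbd max_part=8
                          sudo qemu-nbd --read-only --connect=/dev/nbd0 /var/lib/libvirt/images/guest.qcow2
                          sudo mount -o ro /dev/nbd0p1 /mnt/inspect
                          # ...browse /mnt/inspect at leisure, then clean up...
                          sudo umount /mnt/inspect
                          sudo qemu-nbd --disconnect /dev/nbd0
                          ```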