1. 101
  1. 21

    The second is when your load is highly irregular.

    That is something that maybe should be explained better, because I frequently see wrong assumptions here. It doesn’t mean “you intend to grow, potentially by a lot”, and it also doesn’t mean “we have more/fewer users during weekdays/weekends/nighttime/…”. It means the differences are so huge that you cannot afford to simply own or lease the resources (which can also be handy in case of DDoS or sudden load spikes) on a month-to-month basis. Furthermore, you need to have the infrastructure (know-how, automation, applications that actually support this) in place and in shape in a way that doesn’t outweigh what you save by scaling up and down. It also means that you cannot benefit as much from reserving an instance for some time.

    And then, when you work through that calculation and compare it with the price of owning the resources (or renting them on a pay-per-month subscription), the cloud infrastructure should still come out cheaper. In practice this usually means you very frequently scale down by more than a factor of 10, and that this cannot be planned on a monthly basis or anything like that. Even then, classical/cheaper providers might have better-suited short-term options.
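    To make that concrete, here is a minimal back-of-the-envelope sketch of that comparison (all prices and the load profile are made up for illustration; your own hardware, traffic and labor costs are what actually decide this):

    ```python
    # Back-of-the-envelope: owned/leased fixed capacity vs. cloud capacity
    # that follows demand. Every number here is a placeholder.

    HOURS_PER_MONTH = 730

    def owned_cost(peak_servers, price_per_server_month):
        # You pay for peak capacity all month, whether it is used or not.
        return peak_servers * price_per_server_month

    def cloud_cost(hourly_load, price_per_server_hour):
        # You pay per server-hour, so idle hours are (mostly) free.
        return sum(servers * price_per_server_hour for servers in hourly_load)

    # Hypothetical load: 40 servers for 10% of the month, 2 servers otherwise,
    # i.e. you regularly scale down by far more than a factor of 10.
    load = [40] * int(HOURS_PER_MONTH * 0.1) + [2] * int(HOURS_PER_MONTH * 0.9)

    print("owned:", owned_cost(40, 150))     # hypothetical 150/month per leased server
    print("cloud:", cloud_cost(load, 0.80))  # hypothetical 0.80/hour per instance
    ```

    With this extreme profile the cloud column wins; flatten the peak to 4 servers instead of 40 and the owned column wins easily, which is exactly the point: only genuinely huge swings make the per-hour premium worth it.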

    I point that out because somehow people seem to think it works like that. Many people seem to be really out of touch with the raw expense side of (cloud/dedicated/hosting/vServer) hosting, and then they realize that cloud doesn’t mean you don’t need Ops people, and that both devs and ops still need to actually make these things work, and so on. It even goes so far that people are in denial when someone takes a closer look, sometimes even about the fact that they have DevOps/SRE people who do nothing all day but make sure things work on the cloud. Whole expensive teams are ignored.

    Of course it all depends on usage, per-case situations, etc., but when people use AWS like a really expensive 2000s vServer hoster, with their WordPress or off-the-shelf PHP webshop installed on a compute instance, and act like using the cloud makes them modern and better off, it gets bizarre. And when things get more complex, it seems like many companies are just using “the cloud” because they are told to, be it because it’s modern, considered best practice, or claimed to make things easier - it might, but sometimes it also makes things more complex. It feels like people using an electric can opener to slice bread because they consider it more modern and convenient. And I say that as someone who earns his living from people using the cloud.

    It feels like the marketing worked too well.

    Like I said, it all really depends on the use case, but I think it would be wise to get away from “you have to use the cloud, because it’s the future” or similar hand-wavy arguments. Of course the same is true for any other technology that is being hyped right now, but that would go off-topic. In other words: take a step back from time to time and reflect on whether what you are doing really makes sense in practice.

    1. 15

      I was running a webpage for school snow days. Mostly zero traffic, except on snow days, when it went up a lot. I found autoscaling worked poorly because the scaling takes ~10 minutes, during which time the existing servers would get overloaded and die. If I had it to do again, I would just make it a static site on S3 or something, so it wouldn’t need autoscaling or (expensive) overprovisioning - S3 is essentially cheap overprovisioning.
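      For reference, a minimal sketch of that kind of S3 setup with boto3 (the bucket name is hypothetical, and this assumes the content is fine to serve publicly via the bucket website endpoint; in practice you’d probably also put CloudFront in front of it):

      ```python
      import json
      import boto3

      s3 = boto3.client("s3", region_name="us-east-1")
      bucket = "example-snow-day-site"  # hypothetical bucket name

      # Create the bucket (us-east-1 needs no LocationConstraint) and enable
      # static website hosting.
      s3.create_bucket(Bucket=bucket)
      s3.put_bucket_website(
          Bucket=bucket,
          WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
      )

      # Allow a public-read bucket policy so the website endpoint can serve objects.
      s3.put_public_access_block(
          Bucket=bucket,
          PublicAccessBlockConfiguration={
              "BlockPublicAcls": True,
              "IgnorePublicAcls": True,
              "BlockPublicPolicy": False,
              "RestrictPublicBuckets": False,
          },
      )
      s3.put_bucket_policy(
          Bucket=bucket,
          Policy=json.dumps({
              "Version": "2012-10-17",
              "Statement": [{
                  "Effect": "Allow",
                  "Principal": "*",
                  "Action": "s3:GetObject",
                  "Resource": f"arn:aws:s3:::{bucket}/*",
              }],
          }),
      )

      # Upload the site; S3 absorbs the snow-day traffic spike for you.
      s3.upload_file("index.html", bucket, "index.html",
                     ExtraArgs={"ContentType": "text/html"})
      ```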

      1. 1

        I had a similar experience with a charity website. For one week of the year it would see several million visitors but barely see 100k a month for the rest of the year. In this case we provisioned multiple high core count servers behind a load balancer and rode out the storm without any degradation to service.

        The majority of that website was static content and nowadays I would do the same as you, build it as a static website and chuck it on S3; the dynamic content could then be loaded via a very lightweight backend with extensive use of caching.

        1. 2

          When Healthcare.gov flopped, the rescue team basically had to resort to a static page with a Go app that acted as a gatekeeper to let traffic trickle in as resources allowed: https://changelog.com/gotime/154. Not a universally applicable solution (most customers won’t wait in a queue to come back), but the principles behind why the original version flopped (brittle, overly complex, hard to reason about capacity) are pretty applicable to other circumstances.

      2. 2

        the same is true for any other technology that is being hyped right now

        I think the key thing is to remember that “right now” just means currently, at the time of discussion/evaluation. It’s not like the things being hyped in October 2022 are specifically overhyped way more than the things being hyped in 2015 were.

        1. 1

          Exactly. :)

        2. 1

          sometimes even about the fact that they have DevOps/SRE people who do nothing all day but make sure things work on the cloud. Whole expensive teams are ignored.

          I mean, as “an SRE person”, “the cloud” means I’m building higher-level abstractions and automating things rather than maintaining Ansible scripts and minding hosts. It doesn’t suffice to look at whether or not cloud shops employ SREs; you have to look at the value they deliver (I posit that I’m working on more valuable things because some cloud provider takes care of much of the tedium). Another way to look at it would be to compare how many SREs we would need to employ to provide the same foundational services (with comparable SLAs, etc.) and how that hypothetical expense compares with our cloud bill.

          1. 1

            And it’s even harder to get this all right on something like AWS where, to get the best pricing, you need to commit to some level of reservations with a Savings Plan. But if you overbuy, you’re throwing away money, and if you underbuy, you’re also wasting money paying for OnDemand instances.

            It kind of works out if you have highly variable load and can use a Savings Plan for your baseline and OnDemand for the peaks. Or, if your team is competent enough, Spot to really save some money (a rough sketch of that blended math is below).

            And then of course, there is always the risk that AWS just doesn’t have the instances you need when you need them. You can reserve capacity too, but then you’re locked into paying for it. Again, if you’re competent enough you can set up your app to use a different instance type depending on cost or availability… but then you’ve just added even more complexity.
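            All rates in the sketch below are placeholders, not real AWS pricing; it just illustrates how the committed baseline and the peak capacity combine:

            ```python
            # Rough blended-cost sketch: committed baseline plus peaks on
            # On-Demand or Spot. Every rate below is a placeholder.

            HOURS = 730  # hours in a month

            def monthly_cost(baseline, peak_extra, peak_hours,
                             committed_rate, on_demand_rate, spot_rate=None):
                # Baseline instances run all month at the committed (Savings Plan) rate.
                cost = baseline * HOURS * committed_rate
                # Peak-only instances run for a fraction of the month.
                peak_rate = spot_rate if spot_rate is not None else on_demand_rate
                cost += peak_extra * peak_hours * peak_rate
                return cost

            # Hypothetical: 10 baseline instances, 30 extra for ~70 peak hours a month.
            print("committed + on-demand:", monthly_cost(10, 30, 70, 0.06, 0.10))
            print("committed + spot     :", monthly_cost(10, 30, 70, 0.06, 0.10, spot_rate=0.03))
            print("all on-demand        :", monthly_cost(10, 30, 70, 0.10, 0.10))
            ```

            The spread between those lines is exactly the over/under-buying risk: commit to too much and the baseline term becomes money spent on idle capacity, commit to too little and more of the load lands on the OnDemand rate.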

          2. 16

            As little as I generally appreciate DHH’s overall approach, I think this is a good post explaining good decision-making. And as much as I appreciate the reality-of-the-cost-saving discussion in the responses, the real value of the post for me is the internet-societal-structural point at the end: those of us who host and run services have all been complicit in getting the centralised internet that we’ve got, simply by prioritising comparative complexity and price over value and worth for the world. I’ve never much liked Basecamp, or what I’ve seen of Hey, or the various seemingly self-aggrandising posts coming out of 37 Signals over time, but making value calls, explaining them clearly and putting your money where your mouth is like this is something that I rate pretty highly. Good on you, DHH.

            1. 8

              There is some irony in the fact that they don’t support anything other than completely moving your mail to them - one could say centralizing…

            2. 8

              I appreciate 37signals’ years of iconoclastic writing, but this is off topic.

              I can’t help but read this…

              We have a business model that’s incredibly compatible with owning hardware and writing it off over many years. Growth trajectories that are mostly predictable.

              …as basically saying they’re not expecting any spikes in users and they’re not particularly worried about serving more than a handful of regions. Which is fine. But to use AWS for years, dump them because you can’t justify the cost anymore, and then retroactively call it a moral choice is not only a bit hard to swallow, it’s also a kind of topical morass. What are we really talking about here? Business? Ethics? The One True Internet?

              1. 7

                I think this article is flawed in that it only considers the billing, not the people / on-site work you suddenly need when you’re not “cloud” anymore, and how much harder that labor is getting to find because we need less of it.

                1. 29

                  There are a lot of options between “the cloud” and “we literally own the land, the DC, the genset and the racks for our servers” - those options have been available for at least the last two decades, and have only gotten better in that time.

                  For example, plenty of places will happily colo your owned or long-term leased hardware, providing power, connectivity and remote hands when needed; your existing team of ops who were fighting with the Amazon Rube Goldberg machine can now be used to manage your machines using whatever orchestration and resource management approach works for your needs.

                  1. 3

                    We fit somewhere on that spectrum more towards the “we literally own everything” side, but not quite.

                    We do have our own location, generator, PDUs, CRAC units, etc. but you can pay vendors to do a lot of the work. Fan goes out on a PDU? Email the vendor and hold the DC door open for them.

                    I don’t know the exact cost of all this equipment, but a lot of it will last you a long time.

                  2. 12

                    I don’t think it fails to consider the people/on-site work you need at all.

                    They say:

                    Now the argument always goes: Sure, but you have to manage these machines! The cloud is so much simpler! The savings will all be there in labor costs! Except no. Anyone who thinks running a major service like HEY or Basecamp in the cloud is “simple” has clearly never tried. Some things are simpler, others more complex, but on the whole, I’ve yet to hear of organizations at our scale being able to materially shrink their operations team, just because they moved to the cloud.

                    It sounds to me like they thought about it and concluded that they’re spending just as much on labor to manage AWS as they would to manage servers in a colo.

                    1. 10

                      I’ve yet to hear of organizations at our scale being able to materially shrink their operations team, just because they moved to the cloud.

                      I think this quote is intended to address it. Especially since “own hardware” presumably doesn’t mean “a stack of machines in our office”, nor “we do everything ourselves”. (Now does that math indeed work out like that for them? shrug)

                    2. 6

                      I wonder if they’ll be using Oxide machines… sounds like they’re looking for what Oxide is selling

                      1. 4

                        Is Oxide actually shipping any racks yet?

                        1. 8

                          There’s a physical rack that visitors have taken photos of! This means they’re very close to the end of verification testing and moving into the Pilot stage

                          1. 3

                            They say by the end of the year, so in 2 months at most if all goes well.

                        2. 5

                          We’re paying over half a million dollars per year for database (RDS) and search (ES) services from Amazon. … Sure, but you have to manage these machines!

                          In addition to those machines you’ll also have to manage those services, which will be less trivial than managing just those machines. Especially at the usage levels that run up that big of an AWS bill.

                          1. 2

                            Yeah but you could buy suitable database machines and pay a good SRE half a million a year to spend a fraction of their time maintaining those databases. And databases really do benefit hugely from bare metal performance. NVMe attached SSD that doesn’t evaporate when your machine reboots is pretty great.

                            1. 2

                              You’re right. I used to work with some very nice database servers that cost around £80k per 4U server and we had two racks of them. Someone worked the numbers and it was far cheaper to keep the dedicated boxes than to shift anything to the “cloud.”

                              The servers would be replaced every six years or so, with them retiring from colo to our staging environment in the office until they became no longer required and someone lucky got to take them home.

                              It seems the hardware purchase has better tax treatment than cloud as well, because you get to write off the purchase cost against profit over many years as the hardware depreciates.

                          2. 4

                            I’m sure they’ve factored this in, but something that would make me a bit nervous about hardware servers is the need to make sure you always have spare capacity. Not counting basic redundancy (e.g. extra application servers for temporary failover), you would also have to plan for handling load spikes and hardware failures.

                            As far as I understand, the lead time for getting a new server could be on the order of days or weeks, so the consequence of underestimating your server needs could be running a degraded service for several days. With cloud you can tune your server capacity “just in time”, which would make me sleep easier at night at least.

                            (However, maybe the cost savings are so great you can run at something like 25% capacity as your baseline and still save money?)
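                            To put a rough number on that 25% question (with entirely made-up prices), you can solve for the utilization at which over-provisioned owned hardware still beats paying the cloud only for what you use:

                            ```python
                            # Break-even utilization for over-provisioned owned hardware vs. an
                            # idealized pay-for-what-you-use cloud. Both prices are placeholders.

                            owned_per_server_month = 150.0   # lease/colo + amortized hardware (hypothetical)
                            cloud_per_server_month = 450.0   # equivalent on-demand capacity (hypothetical)

                            def owned_monthly(avg_servers_needed, utilization):
                                # You provision for the peak, so provisioned = needed / utilization.
                                return (avg_servers_needed / utilization) * owned_per_server_month

                            def cloud_monthly(avg_servers_needed):
                                # Idealized cloud: pay only for the average you actually use.
                                return avg_servers_needed * cloud_per_server_month

                            for utilization in (0.25, 0.33, 0.50):
                                print(f"{utilization:.0%} baseline: owned {owned_monthly(10, utilization):.0f}"
                                      f" vs cloud {cloud_monthly(10):.0f}")
                            ```

                            With that (made-up) 3x per-unit price gap the break-even sits at about 33% utilization; the larger the real gap, the lower the baseline you can afford while still saving.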

                            1. 7

                              When I ran the numbers for our service, the cost for AWS was ~7x higher than colo, because we’re bandwidth heavy. There’s definitely room if your usage profile is the right shape and high enough scale to make the ops worthwhile.

                              1. 4

                                As far as I understand, the lead time for getting a new server could be on the order of days or weeks

                                Couple of thoughts:

                                • At bigger providers you can usually get new servers within minutes, at others within hours. Of course that’s different if you run your own datacenter, but even then there’s a very good chance that a single big additional server will cover you, because for a large number of businesses adding even one server with a couple dozen cores and 200 GiB of RAM gets you quite far. That’s something you get within minutes at big providers and can probably get quickly elsewhere as well. And if you can anticipate the need at all, none of this is an issue.
                                • It’s usually A LOT cheaper to hold additional capacity yourself than to buy it from the big cloud providers, and since you already keep that capacity running, it is available faster than a cloud provider can provision it.
                                • I’ve been in a situation where GCP (Google) was unable to add servers in a region for months (or more), which sucked because the contract we had signed specified the country.
                                • Yes, the savings are that great; that’s why I would highly recommend at least giving it some thought. I think you can typically go a lot higher than 25% capacity while still saving quite a bit of money. Depending on what you do, you might even benefit from reduced overhead if you scale load vertically rather than horizontally to a degree. Of course you want a system to fail over to, but there is typically some overhead involved in scaling out that you might not need to deal with when just scaling up - and scaling up is typically more expensive at cloud providers, because their calculation is naturally about how many compute instances they can squeeze into a physical server. If you know your load and have your planned free capacity, you might be able to get exactly the hardware you need, and for whoever runs the datacenter (you or some hosting company) it’s mostly just another one or two units in their rack.

                                Since the resources are less dynamic, the prices are too, which has the benefit of easier cost estimates, especially before starting a project. You need a basic grasp of your requirements plus generous additional capacity, and since things like traffic are part of the package there are fewer items that might spike or that you need to optimize for on the cost side. You can say you need that server and an equivalent one to fail over to, for example, and you get an exact price that’s also very stable. For some businesses that’s a benefit in itself.

                                1. 1

                                  Thanks for the great write-up! I’m not really in a position right now where physical servers would make much sense, but if/when I get there I’ll definitely keep this in mind. I’m a big fan of keeping things simple, and as was pointed out in the OP, cloud is not always as simple as the vendors make it seem. Additionally, the way cloud pricing works, you’re often incentivized to run a lot of small servers instead of a few big ones, which further increases operational complexity.

                                2. 2

                                  Couldn’t you still use cloud servers to cover emergencies?

                                  It would make sense to start with cloud servers, then buy servers to replace them after establishing that there will be steady demand.

                                  1. 1

                                    That depends, though. It might not make sense on the latency side. In the best case you use a hosting company with some basic semi-cloud offerings (i.e. hourly billed vServers). These are fairly common, and even places that mostly offer colo have them. Then you can just choose. I also tend to like starting out with essentially the cheapest dedicated server, if it’s affordable, and simply not having to worry during initial development while finding out the eventual resource usage.

                                    But that leads to another topic. In many situations that are some variation of the usual CRUD app, the thing that really takes the load is the database, while all the application does is wait for it to respond and then serialize to JSON. The amount of logic inside the actual application for such projects is frequently very small. So you are bound by the database anyway, and then the question is whether you run it yourself or use something managed. I tend to always go with running it myself for various reasons (it’s something I am comfortable with, I have hit shortcomings with managed instances at big providers, I see being able to use the latest features as a core advantage, etc.). Then, when it actually grows, I rent bigger servers and fail over to the bigger server. Databases are also usually something you can’t shrink again on managed instances - unless that has changed. And while there are mechanisms for growing, you usually need to know how you will grow for that to happen smoothly and not, for example, slow everything down by a lot while it grows. Finally, databases tend to be very expensive in managed/cloud solutions.

                                    1. 1

                                      Yeah, that makes sense - if you spend the upfront integration cost on a hybrid solution, I guess it lets you buy as much “earthquake insurance” as you think you need. (However, from reading the post it looks like it was this kind of solution they were migrating away from at Basecamp.)

                                  2. 5

                                    I’ve been questioning the incentives behind writing any free content online ever since internet became primarily an advertising platform.

                                    So what could be the motives behind this post? Why would David take time from his busy/fun life and care to share their decision to leave the cloud?

                                      I think they are trying to push buttons by announcing a decision that goes against the current grain, hoping people will get so outraged, stumped, or curious that they stop and pontificate on the topic. That will increase mind share of Basecamp and is an insidious form of marketing. This company is a grandmaster at this.

                                      That’s the lesson here, in my opinion. Forget about the technical merits of their decision. Think about how much this “controversial” announcement is going to increase their mindshare. And think about how you could use the same playbook for your own app, business, life…

                                    1. 3

                                      Is anyone from Oxide computing on Lobsters? What’s the Oxide response to this? Are they in the market to make a rack for DHH, or are they playing a different game?

                                      1. 4

                                        I suspect one Oxide rack would cover significantly more load than Hey deals with, at a corresponding multiple of cost. It’s almost certainly a better deal in terms of request/second/$$$ spent than AWS, but unless the workload can make effective, steady use of that capacity it’s not actually a win.

                                        Oxide is aiming for datacenter-scale workloads, and $500k/year in aggregate AWS spend isn’t anywhere close to that.

                                        1. 3

                                           My guesstimate for the price of an Oxide rack is $750k. Considering that HEY alone (and there’s Basecamp as well) spent $500k on databases (RDS and ES) alone, they likely spend more than the cost of a rack per year. And even if a single rack is significantly more powerful than they require, there is no harm in having more power than you need besides the extra cost, and I’d expect that cost to be not much larger than the premiums of cloud.

                                          1. 4

                                             While I mostly agree, I think at a certain size you kind of want to have a second rack to fail over to.

                                            1. 1

                                              Fair point. But I’d guess that they have enough costs in non-database compute and bandwidth that they can justify having two racks.

                                            2. 4

                                               I don’t have any information to confirm or dispute your estimate for the sticker price of a rack, but given a standard 4-5 year amortization schedule, installation and power, peering, etc., I can’t imagine paying much under 2x the sticker price inclusive. So $750k × 2 / 4 years = $375k/year, before you factor in any labor or licenses. (Buying hardware is also CapEx, not OpEx, which matters a lot for some firms.)

                                              It might be a wash, or it might come out slightly better for on-prem or cloud, but it probably isn’t a slam dunk.
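                                               A quick sanity check of that math, using only the guesstimates from this thread (not actual Oxide or AWS prices):

                                               ```python
                                               # The amortization arithmetic above, spelled out. All figures are
                                               # the guesstimates from this thread, not real quotes.

                                               sticker_price = 750_000        # guessed rack price
                                               inclusive_multiplier = 2       # install, power, peering, etc.
                                               amortization_years = 4         # standard 4-5 year schedule

                                               yearly_on_prem = sticker_price * inclusive_multiplier / amortization_years
                                               yearly_aws_databases = 500_000 # HEY's quoted RDS + ES spend per year

                                               print(yearly_on_prem)          # 375000.0, before labor and licenses
                                               print(yearly_aws_databases)    # databases only; rest of the AWS bill excluded
                                               ```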

                                               Also, at this point I’m mostly entertained that DHH, infamously the “Rails doesn’t need to scale b/c you can just throw servers at it” guy, is now worried about efficiency while running at a couple of orders of magnitude lower scale than Stripe, Shopify, or any number of similar “big Rails monolith” shops.

                                            3. 2

                                              So their potential customers are more like Fly.io or CloudFlare than someone using those services.

                                          2. 2

                                            What about renting physical servers? Does anyone of this size do that anymore? Is it any cheaper than renting servers in AWS?

                                            The thing I don’t like is thinking about disk hardware and drivers and that sort of thing. I kind of like the VM abstraction, but without VMs maybe :)

                                            Of course some people want control over the disk, especially for databases …

                                            1. 8

                                              You can certainly rent bare metal. This is most of OVH’s and Hetzner’s business, and both of them do it really cheaply. Hetzner only does cloud servers in the US, though; their bare metal stuff is Europe only.

                                              I find OVH’s stuff to be pretty janky, so I’d hesitate to do much at scale there. Equinix Metal is a pretty expensive, but probably higher-quality, option (I haven’t tried it).

                                              There still exist businesses everywhere on the spectrum from “an empty room to put your servers in” to “here is a server with an OS on it.”

                                              And even with your VM abstraction, someone has to worry about the disks and drivers and stuff. The decision is if you want to pay someone to do that for you, or do it yourself.

                                              For the last couple of years I rented a single bare metal server from OVH, installed FreeBSD on it, and just used it for hobby stuff. I used ZFS for the filesystem and volume management and bhyve for VMs. I ran several VMs on it, various Linux distributions and OpenBSD. It worked great, with very few issues. The server cost me about $60/month.

                                              But I eventually gave it up and just moved all that stuff to a combination of Hetzner Cloud and Vultr because I didn’t want to deal with the maintenance.

                                              1. 4

                                                In my experience it is cheaper than renting from AWS once you need more than a certain threshold. That threshold has stuck, rather consistently, at “about half a rack” from where I’ve observed it over the past 10 years or so.

                                                1. 1

                                                  OK interesting …

                                                  So isn’t the problem that, for whatever reason, the market doesn’t have consistent pricing for physical servers? It might be LESS than AWS, but I think it’s also sometimes “call us and we’ll negotiate a price”?

                                                  I can see how that would turn off customers

                                                  Looking at a provider that kelp mentioned, they do have pricing for on demand: https://metal.equinix.com/product/on-demand/

                                                  And then it switches to “call us”: https://metal.equinix.com/product/reserved/

                                                  I’m probably not their customer, but that sorta annoys me …

                                                  1. 3

                                                    Yeah basically a lot of stuff in the datacenter and datacenter server/networking equipment space is “call us” for pricing.

                                                    The next step past Equinix metal is to buy colocation space and put your own servers in it.

                                                    And the discounts get stupid steep as your spend gets larger. Like I’ve seen 60-70% off list price for Juniper networking equipment. But this is when you’re spending millions of dollars a year.

                                                    When you’re buying a rack at a time from HPE or Dell at $250K - $500K/rack (numbers from when I was doing this 5+ years ago) you can get them to knock off 20-40% or something.

                                                    It can be pretty annoying because you have to go through this whole negotiation and you’ll greatly overpay unless you have experience or know people with experience to know what an actual fair price is.

                                                    At enough scale you have a whole procurement team (I’ve hired and built one of these teams before) whose whole job is to negotiate with your vendors to get the best prices over the long term.

                                                    But if you’re operating at a much smaller scale, you can often get pretty good deals on one-off servers from ProVantage or TigerDirect, though the prices jump around a LOT. It’s kind of like buying a ThinkPad direct from Lenovo, where they are constantly having sales: if you don’t hit the right sale, you’ll greatly overpay.

                                                    Overall price transparency is not there.

                                                    But this whole enterprise discount thing also exists with all the big cloud providers. Though you get into that at the $10s of millions per year in spend. With AWS you can negotiate a discount across almost all their products, and deeper discounts on some products. I’ve seen up to 30% off OnDemand EC2 instances. Other things like EBS they really won’t discount at all. I think they operate EBS basically at cost. And to get discounts on S3 you have to be storing many many PB.

                                                    1. 3

                                                      But this whole enterprise discount thing also exists with all the big cloud providers. Though you get into that at the $10s of millions per year in spend.

                                                      AWS definitely has an interesting model that I’ve observed from both sides. In the small, they seem to like to funnel you into their EDP program that gives you a flat percentage off in exchange for an annual spend commitment. IME they like larger multi-year commitments as well, so you’ll get a better discount if you spend $6m/3 years than if you do three individual EDPs for $2m. But even then, they’ll start talking about enterprise discounts when you are willing to commit around $500k of spend, just don’t expect a big percentage ;)

                                                      When I once worked for a company with a very large AWS cloud spend - think “enough to buy time during Andy Jassy’s big keynote” - EDPs stopped being flat and became much more customized. I remember deep discounts to bandwidth, which makes sense because that’s so high margin for them.

                                                      1. 1

                                                        It can be pretty annoying because you have to go through this whole negotiation and you’ll greatly overpay unless you have experience or know people with experience to know what an actual fair price is.

                                                        This is a key bit that people don’t realize. When I worked for a large ISP and was helping spec a brand new deployment to run OpenStack/Kubernetes, I negotiated the list price down from $$MM to $MM. Mostly by putting out requests for bids to the various entities that sell the gear (knowing they won’t all provide the exact same specs/CPUs/hard drives), then comparing and contrasting, taking the cheapest four and making them compete for the business.

                                                        But it’s a lot of time and effort up front, and there has to be a ton of money handed over up front. With the cloud that money is spent over time, rather than front-loaded.

                                                      2. 2

                                                        I share your annoyance.

                                                        I think the threshold has been pretty consistent if you think of it in terms of what percentage of a rack they need to sell… people who rent servers out by the “unit” drop below the AWS prices once you occupy around half a rack. And yes, I’ve had to call them to get that pricing.

                                                        It’s a little annoying to have to call.

                                                        And furthermore, things can be cheaper at different points in a hardware cycle, so it’s a moving target.

                                                        I think some of it is down to people who peddle VMs being able to charge per “compute unit” but people who peddle servers (or fractions of servers) not being able to go quite that granular.

                                                        If you rent in “server” units, you need to be prepared to constantly renegotiate.

                                                    2. 2

                                                      There are certainly businesses which are built on the idea that they are value-added datacenters, where the value is typically hardware leasing and maintenance, fast replacement of same, networking, and perhaps some managed services (NTP, DHCP, DNS, VLANs, firewalls, load balancers, proxies…)

                                                      1. 1

                                                        “Single VM on a physical host” is a thing I’ve seen (for similar reasons you mention: a common/standard abstraction), not sure how often it’s used at this sort of scale though.

                                                        1. 1

                                                          The thing I don’t like is thinking about disk hardware and drivers and that sort of thing. I kind of like the VM abstraction, but without VMs maybe :)

                                                        I think you trade one thing for another; you just have to think about other things. If you want, you could also run your own VMs of course, but honestly you just have an OS there and that’s it, and if you don’t want to think about storage you can just run MinIO or SeaweedFS, etc. on some big storage server and add capacity as you go. And if you rent a dedicated server and it really does have a disk failure (the disks are usually in a RAID anyway), you just have the disk replaced and use your failover machine, like you’d use your failover if your compute instance started having issues.

                                                        It’s not like AWS and others don’t have errors; it’s just that you see them differently, and sometimes Amazon notices before you do and just starts that VM up somewhere else (that works in an automated fashion if you run your own VMs as well - it’s widely implemented, “old” technology). I see all of these as server errors, whether on a rented physical machine or a virtual machine; in both situations I’d open some form of ticket and have the failover handle it. It’s not like cloud instances are magically immune, and that stuff often breaks the abstraction as well. They might detect it, but there is also a certain percentage of VMs/instances that become unavailable, or worse, half-unavailable - clearly still doing something despite having been replaced - and in my opinion that is a lot more annoying than knowing about a hardware problem. With hardware issues you know how to react; with instances doing strange things it can be very hard to verify anything at that layer of abstraction, and eventually you will be passed along through AWS support. If you are lucky, of course, it’s enough to just replace the instance, but that is pretty much an option with hardware as well, and dedicated hosting providers certainly go the route of automating all these things and are pretty much there. There are hardware/bare-metal clouds and, to be fair, they are close but still lag behind. Here I think in terms of infrastructure as code; that is slowly coming to physical machines as well under the “bare metal cloud” label - it just wasn’t the focus so much. I really hope that hosters keep pushing that and customers use and demand it. AWS datacenters becoming pretty much equivalent to “the internet” is scary.

                                                        But if you use compute instances you created yourself (rather than something at an even higher level, in the realm of Heroku, Fly.io, etc.), it’s not an area where it makes a huge difference. Same problem, different level of abstraction. It’s probably slightly more noticeable on physical machines because of the aforementioned automated VM failover, though that only works in specific cases. In either case you will need someone who knows how the server works, be it virtual or physical.

                                                        2. 2

                                                          They’re still gonna make cloud apps. They’re just gonna run them on their own machines instead of on cloud services like AWS.