1. 21
  1. 11

    Getting fed up with people banging on about the price of EC2. Yes, it’s more expensive than hosting your own servers, but it sure beats getting called at 4am because a disk exploded. People think they can shove a server in a colo facility and be done with it, when the exact opposite is true.

    1. 7

      Moving your servers to a new data center is another significant opportunity cost.
      Our data center move occupied a substantial portion of the department for months.

      1. 2

        It’s always a tradeoff. If they only paid the one-time cost of the server and don’t have much overhead keeping it running (electricity, maintenance, hardware/disk replacement, ...), then it’s probably a huge win for them, as they can now run this 24/7 without watching a bill counter tick up on their Amazon account. But if keeping it running locally costs you more, or you need to scale faster, then of course go with the cloud solution.

        In the same way, I’m using root vservers where I get 0% steal and guaranteed hardware, but still run on a KVM system with RAID 10. This way I don’t have to deal with defective SSDs or a hardware outage, but I also don’t have to pay Amazon $160 USD/mo* if I want constant pricing. Because most of my stuff runs 24/7, “pay-per-usage” would mean “pay 24/7” at my current CPU usage and traffic. (Try running a syncthing relay on that...)

        *Current prices for lighthouse.

        Edit: And of course this all depends on the fact that I don’t have huge spikes or dynamic load.
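The tradeoff described above can be sketched with a quick break-even calculation. The $160/mo figure comes from the comment; the flat dedicated-server price and the hourly cloud rate below are illustrative assumptions, not real quotes:

```python
# Hedged sketch: break-even between a fixed-price dedicated server and
# pay-per-use cloud pricing for an always-on (24/7) workload.
# All prices here are illustrative assumptions, not vendor quotes.

HOURS_PER_MONTH = 730  # average hours in a month

def monthly_cloud_cost(hourly_rate: float, utilization: float = 1.0) -> float:
    """Pay-per-use cost; utilization=1.0 means the instance runs 24/7."""
    return hourly_rate * HOURS_PER_MONTH * utilization

dedicated_flat = 40.0   # assumed flat monthly price for a root vserver
cloud_hourly = 0.22     # assumed on-demand hourly rate (~$160/mo at 24/7)

always_on = monthly_cloud_cost(cloud_hourly)        # workload running 24/7
bursty = monthly_cloud_cost(cloud_hourly, 0.10)     # spiky load, 10% usage

print(f"cloud 24/7: ${always_on:.2f}/mo")
print(f"cloud 10%:  ${bursty:.2f}/mo")
print(f"dedicated:  ${dedicated_flat:.2f}/mo")
```

For an always-on workload the pay-per-use bill dwarfs the flat price, while for a bursty workload the ordering flips, which is the crux of the comment’s argument.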

        1. 2

          Another downside of pushing 4 million QPS to a single server—whether you’re using EC2 or something you host yourself—is that the blast radius of any single server doing 4 million QPS is massive. I’d much rather have a bunch of tiny servers doing the same overall QPS than a few large servers. Beyond the blast radius of a single server failing, this also allows you to have more fine-grained blue/green deployments.
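The blast-radius point reduces to simple arithmetic, assuming load is spread evenly across identical servers; a minimal sketch:

```python
# Hedged sketch: "blast radius" of a single server failure, assuming the
# total QPS is spread evenly across N identical servers.

def blast_radius_qps(total_qps: int, num_servers: int) -> float:
    """QPS lost (and clients affected) when one of num_servers fails."""
    return total_qps / num_servers

TOTAL_QPS = 4_000_000

# One big server: losing it drops all 4M QPS at once.
print(blast_radius_qps(TOTAL_QPS, 1))

# Forty small servers: a single failure affects only 100k QPS, and you can
# blue/green-deploy a few servers at a time instead of all-or-nothing.
print(blast_radius_qps(TOTAL_QPS, 40))
```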

          1. 1

            Almost all monoliths IME run a single primary database, so nearly everyone has the same issue: “if the primary DB goes down, almost everything goes down”.

            1. 1

              I guess I’m spoiled/have been spared this scenario by my employer.

          2. 1

            You can also get managed hardware (e.g. via Rackspace) where you are the only tenant, but Rackspace admins monitor and correct any hardware issues. We ran a $100m business on five production machines. It was probably about the same cost as EC2, but with much better disk I/O; we knew exactly how far they would scale, had no noisy neighbors, etc.

            1. 1

              I wonder if Rackspace would still be the best managed hardware provider for a fledgling company to use. Their website has so much marketing fluff these days, and they want you to talk strategy with a salesperson. If I were in a position to choose a managed dedicated hosting provider, I’d want a no-bullshit website with transparent pricing.

              1. 1

                For sure, this was 8 years ago. They were acquired by IBM years ago and so I’m sure all value has been erased from them by now.

          3. 4

            Am I reading it wrong, or do the benchmark numbers just amount to “more CPUs is faster than fewer CPUs”? The graphs look like a roughly linear increase until the thread count reaches the hyperthreaded core count, then a plateau. That’s not measuring EC2 overhead; it just says more cores make things go faster.
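The scaling pattern described above can be modeled in a few lines. The core count and per-core QPS below are made-up assumptions purely to illustrate the shape of the curve:

```python
# Hedged sketch of the scaling pattern described above: throughput grows
# roughly linearly with threads until the thread count reaches the
# (hyperthreaded) core count, then plateaus. Numbers are illustrative.

def modeled_qps(threads: int, cores: int, per_core_qps: int) -> int:
    """Idealized model: each extra thread adds capacity until cores run out."""
    return min(threads, cores) * per_core_qps

CORES = 16           # assumed vCPU / hyperthread count
PER_CORE = 250_000   # assumed per-core QPS

for t in (1, 4, 16, 32, 64):
    print(t, modeled_qps(t, CORES, PER_CORE))
# QPS climbs to the cap at 16 threads and stays flat beyond that --
# consistent with "more cores is faster", not with measuring EC2 overhead.
```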

            1. 2

              That’s what I’m seeing too. Goes to show how hard benchmarking is, and how easy it is to misread the results as saying what you want to see! Confirmation bias is a bitch.

            2. 2

              This is pretty crazy. Has anyone actually used the synchronized SQLite thing they’re talking about before?

              1. 2

                Bedrock looked interesting.

                I found this:

                “C++ as its primary stored procedure language”

                Why oh why would you want this?


                Stories with similar links:

                1. Scaling SQLite to 4M QPS on a Single Server (EC2 vs Bare Metal) via juef 4 years ago | 38 points | 10 comments