1. 30
  1.  

  2. 16

    A brief stint at a serverless startup showed me that there are a TON of incentives for dark patterns that trick people into using more resources than necessary to inflate the bill. When I brought up cost-saving measures, the response was almost always “we’ll just pass the cost on to the user, they tend to be OK with throwing money away”.

    It’s a tricky thing, because these people are willing to pay money to not think about things like capacity planning, but then they end up getting burned for not doing their capacity planning.

    1. 7

      these people are willing to pay money to not think about things like capacity planning, but then they end up getting burned for not doing their capacity planning

      That’s basically the motto of “the cloud”, serverless or otherwise, no?

      1. 6

        “we’ll just pass the cost on to the user, they tend to be OK with throwing money away”

        Also known as “we’re making less profit than we could easily otherwise be”, which is a strange situation for a business to find itself in.

      2. 11

        First, I think it’s super scummy that AWS doesn’t have an easy way of saying “Look, I’ve blown my budget–don’t warn me, just shut the damn thing down.” It took them a nontrivial amount of time to introduce budgets at all, as memory serves.

        Secondly, though, I did work for a bit at a place that had gone serverless. What was really annoying was that testing things was more complicated, deploying things was more complicated (>30 minutes because of numpy/scipy packaging for AWS)…just everything was harder than it had to be.

        For a quick “I need to munge a bunch of logs as they’re uploaded” job, maybe…but even then, the cost savings from a VPS doing the same thing are nontrivial in most cases.
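
        In practice, people end up wiring that kill switch up themselves. A minimal sketch with boto3, assuming a CloudWatch billing alarm (or an AWS Budgets alert) publishes to an SNS topic that invokes this function; the “stop everything running” scope and the EC2-only coverage are just for illustration, not something AWS offers out of the box:

        ```python
        # Rough sketch of a DIY "budget blown" kill switch (illustrative only).
        # Assumes a CloudWatch billing alarm or AWS Budgets alert publishes to an
        # SNS topic that invokes this Lambda. It only covers EC2 in one region;
        # every other billable service would need its own handling.
        import boto3

        ec2 = boto3.client("ec2")

        def handler(event, context):
            # Find every running instance in this region...
            paginator = ec2.get_paginator("describe_instances")
            instance_ids = [
                instance["InstanceId"]
                for page in paginator.paginate(
                    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
                )
                for reservation in page["Reservations"]
                for instance in reservation["Instances"]
            ]
            if instance_ids:
                # ...and stop them. Stopping (not terminating) keeps EBS volumes
                # around but halts the per-hour compute charges.
                ec2.stop_instances(InstanceIds=instance_ids)
            return {"stopped": instance_ids}
        ```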

        1. 6

          This is history repeating. The banks did the same thing, where they’d force an overdraft loan immediately with no option to just stop processing the payments. The Obama Administration got a law passed forcing them to let each customer choose. They called me asking, and I said “Heck no, don’t let it through if I have no money.”

          Amazon has no regulator forcing them not to screw customers over like that. So, they’ll happily do it. The worst-case scenario is all the big cloud vendors banding together to do such things, like in other oligopoly markets (esp. telecom).

          1. 4

            I half agree, but AWS has positioned themselves a lot like a utility. You pay for what you use, and if you don’t like paying for it, don’t use it. Sometimes it’s nice to be treated like an adult. I mean, my electricity bill spikes in the winter if I leave the heat on high, and I think there are ways to monitor that, but they’re not going to cut off my power even if I ask. For their part, AWS is apparently pretty forgiving about refunding unexpected charges.

            1. 2

              “Being treated like an adult” can include “let me set a threshold after which you shut the whole thing down and send me a text message saying EVERYTHING IS DOWN so I can figure it out.” AWS sometimes feels more like they’re trying to get you to make a poor decision by not giving you enough information.

            2. 1

              The tooling is in a super immature state. There are no significant barriers to fixing this. Eventually it will be silly to use anything that forces you to consider any capacity planning beyond price. There’s a ton of low-hanging fruit, like slow starts, real billing quotas etc… And I’m convinced it will be the way most people deploy their code in a few years.

              1. 2

                And I’m convinced it will be the way most people deploy their code in a few years.

                I’ll take that bet. This is a classic cyclical pattern – today’s microservice-architecture fad will condense back into a monolith fad in a few years when people decide they don’t like the complexity and overhead for most things, which will last a few years until they re-atomize into services as they decide they want the theoretical scalability benefits.

                Rinse and repeat.

                1. 1

                  Serverless IS the complexity-cutting fad. No more thinking about containers, replication controllers, capacity planning, etc… is a huge win for most engineers who just want to write code and have it deployed with as little friction as possible. There’s also no reason you can’t run a monolith on a serverless platform, and this is something I advocate for pretty strongly in some cases. Serverless is the best deployment platform (once tooling catches up, which it will soon). Monolith is nice when the politics of an org don’t get too much in the way (I view microservices as mostly a way to get around sandbox-politics in projects; most people don’t have a performance or reliability justification for distribution, and don’t have the skills to make services that actually improve their overall performance and reliability).

                  1. 0

                    Well, that’s certainly the kool-aid view of it. I don’t think it will set the world on fire for the same reasons every other PaaS hasn’t eaten everything – limited flexibility, high prices, and sub-par performance.

                    1. 1

                      It’s an inevitability of economies of scale, which drive hardware costs in favor of cloud providers. It is not a matter of if, but when. It sucks right now for a lot of reasons, none of which are significant challenges for them to fix. Azure Functions is already a huge improvement over Lambda in a number of ways.

                      1. 1

                        Nothing ever eats everything - there are still mainframes happily ticking away.

                        “The Cloud” is also more expensive and slower than owning your own kit, but it’s still become the dominant method of deploying new things across many sectors.

              2. 7

                This is a problem with pay-per-use function services even without an obvious mistake. For example, if you’re using Lambda to process client files, what happens when someone boneheaded (or malicious) at the client uploads thousands of cat pictures into the directory instead? Or if you’re calling external services which suffer a serious latency issue?
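
                One blunt mitigation, even before real billing quotas exist, is to cap the function’s reserved concurrency so a flood of bogus uploads gets throttled instead of fanning out into an unbounded bill. A quick sketch with boto3; the function name and the limit are made up for the example:

                ```python
                import boto3

                lambda_client = boto3.client("lambda")

                # Cap how many copies of the upload-processing function may run at once.
                # Excess invocations get throttled instead of scaling out without bound.
                lambda_client.put_function_concurrency(
                    FunctionName="process-client-upload",  # hypothetical function name
                    ReservedConcurrentExecutions=10,
                )
                ```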