1. 22
    1. 9

      It’s worth noting that in Cory O’Daniel’s “From $erverless to Elixir” article (the only one of the case studies linked there with a detailed cost breakdown), the vast majority of the savings came from avoiding AWS API Gateway.

      This matches my experience: when I’ve done back-of-the-envelope price estimates for systems that use API Gateway, the cost of the Lambda endpoints we were invoking (128MB instances, running for about 250ms on average) was much smaller than API Gateway’s per-request pricing. My notes say $0.515/Mreq for the Lambda charges versus $3.50/Mreq for API Gateway.
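      For reference, a minimal sketch of that back-of-the-envelope arithmetic (the rates are my assumptions, roughly $0.0000166667 per GB-second plus $0.20 per million requests for Lambda, and $3.50 per million requests for REST API Gateway; actual prices vary by region and over time):

      ```elixir
      # Rough cost per million requests, with assumed (not official) rates:
      #   Lambda compute:   $0.0000166667 per GB-second
      #   Lambda requests:  $0.20 per 1M requests
      #   API Gateway REST: $3.50 per 1M requests
      memory_gb  = 128 / 1024   # 128MB instances
      duration_s = 0.250        # ~250ms average
      requests   = 1_000_000

      gb_seconds      = memory_gb * duration_s * requests
      lambda_compute  = gb_seconds * 0.0000166667   # ~$0.52
      lambda_requests = 0.20
      api_gateway     = 3.50

      IO.puts("Lambda:      $#{Float.round(lambda_compute + lambda_requests, 3)}/Mreq")
      IO.puts("API Gateway: $#{api_gateway}/Mreq")
      ```

      With those assumptions the Lambda compute charge lands around $0.52/Mreq, in the same ballpark as the $0.515 figure above, and API Gateway’s flat $3.50/Mreq dominates the bill.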

      1. 1

        Just submitted that one so it can be merged. Suggestion by pushcx.

      2. 5

        I think this is slightly misleading. BEAM saves them the money, and they like the affordances in Elixir. I was pretty surprised until I saw that their baseline was Ruby. BEAM is a lot slower than most low-level languages, but Ruby is well known to be painfully slow. Last time I looked at the language benchmarks game results, it ran at a quarter of the speed of Squeak (Smalltalk). Squeak is written to be flexible and understandable at the expense of performance; Ruby eliminates the features of Smalltalk that add the worst performance overhead and still manages to be slow.

        1. 4

          Ruby is not too bad now compared to Smalltalk. One thing to keep in mind is that pre-1.9, Matz’s Ruby Interpreter (MRI) was the de facto standard, and it was slow. Switching to a bytecode interpreter (YARV, in 1.9) improved performance: https://benchmarksgame-team.pages.debian.net/benchmarksgame/fastest/ruby.html

          1. 3

            I’m not sure it’s only speed. Just by having cheaper parallelism (and ‘hanging’ connections) you can do a lot with the BEAM where a multi-threaded (or multi-process) server in Ruby would need a lot more memory (and probably CPU).
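            As a rough illustration (the process count and code shape are mine, not a benchmark), a BEAM process that just sits in `receive`, roughly like an idle connection, starts with a heap of only a few kilobytes, so spawning huge numbers of them is cheap:

            ```elixir
            # Spawn 100_000 lightweight processes that each block in `receive`,
            # standing in for idle "hanging" connections, then shut them down.
            parent = self()

            pids =
              for _ <- 1..100_000 do
                spawn(fn ->
                  receive do
                    :stop -> send(parent, :done)
                  end
                end)
              end

            Enum.each(pids, &send(&1, :stop))

            for _ <- pids do
              receive do
                :done -> :ok
              end
            end

            IO.puts("spawned and drained #{length(pids)} processes")
            ```

            Doing the same with one OS thread per connection would cost a much larger stack each (or force a bounded thread pool), which is where the extra memory and CPU in a Ruby server tend to come from.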

            1. 5

              That’s true. I first used Erlang during my PhD and wrote code that ran on my single-core PowerBook and scaled linearly to the 64-processor (MIPS) SGI box that I deployed it on. I have never done that in any other language. Unfortunately, per core it ran at 1/10th the speed of the C version. I definitely couldn’t do it in a Smalltalk-family language like Ruby without some very rigid discipline that would remove most of the value of such a language.

              It would be interesting to look at a baseline like Pony, which is much faster than BEAM for straight-line execution.

              1. 2

                It compounds, too. A lot of things become easier, like keeping a long-running pool of connections to your DB (sketched below), which, for example, makes caching on Postgres far more efficient.

                The cost of crashes is lower too, latency tends to hold up better under high load, etc. It compounds pretty fast into really tangible results.
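                As a concrete sketch of the connection-pool point (the app and module names are invented; this is just illustrative Ecto configuration), a long-lived pool means Postgres sees the same handful of sessions for the life of the app instead of constant connection churn:

                ```elixir
                # config/runtime.exs (illustrative; :my_app and MyApp.Repo are made up)
                import Config

                config :my_app, MyApp.Repo,
                  url: System.get_env("DATABASE_URL"),
                  # The same 10 long-lived connections serve every request,
                  # rather than opening and closing one per invocation.
                  pool_size: 10,
                  queue_target: 50,
                  queue_interval: 1_000
                ```

                Compare that with a per-invocation connection from a serverless function, where every cold start pays connection setup and the database loses any session-level warmth.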
