1. 23
  1.  

  2. 21

    Keep in mind 30k/min is 500 per second: not nothing, but certainly not something requiring exotic solutions.
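
    As a rough back-of-envelope, that rate leaves about 2 ms per record even on a single worker; the single-worker framing below is just for illustration, not anything from the article:

    ```go
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const perMinute = 30_000
    	perSecond := perMinute / 60                      // 500 records/s
    	budget := time.Second / time.Duration(perSecond) // ~2 ms per record for one worker
    	fmt.Println(perSecond, "records/s; per-record budget on one worker:", budget)
    }
    ```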

    1. 2

      I just had to migrate a Kinesis ingest service from Node.js because of this same aws-sdk-js memory leak last week. :-(

      1. 1

        What did you migrate to?

        1. 3

          Elixir was what I migrated to.

      2. 2

        Nothing whatsoever against Rust (I think its ownership model is a neat way to deal with concurrent data access), but it’s not like it’s the first non-garbage-collected language in history. They could have written this in C++ or half a dozen other mainstream compiled languages. They could probably even have written it in Java and run it with a low-pause garbage collector and hit their latency and throughput targets.

        That aside, if my experience with AWS is typical, they’re going to find themselves dealing with latency spikes regardless of what memory allocator their code uses, because the underlying EC2 instance their stuff is running on will hiccup every once in a while. Less often than their garbage collector was pausing, but probably still often enough that they’ll need to make their system tolerate random pauses. Or they could run on dedicated instances, but it doesn’t sound like that’d fit their setup too well.
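
        To make the pause-tolerance point concrete, here is a minimal producer/consumer sketch where a bounded buffer absorbs a brief stall instead of dropping records; the buffer size, stall length, and rates are invented numbers, not anything from their setup:

        ```go
        package main

        import (
        	"fmt"
        	"time"
        )

        func main() {
        	// Bounded buffer between ingest and processing; at 500 records/s,
        	// 4096 slots absorb roughly 8 s of consumer stall.
        	records := make(chan int, 4096)

        	// Consumer that occasionally stalls, standing in for a GC pause
        	// or an EC2 hiccup on the host.
        	go func() {
        		for r := range records {
        			if r%2000 == 0 {
        				time.Sleep(200 * time.Millisecond) // simulated pause
        			}
        			// real per-record processing would go here
        		}
        	}()

        	// Producer at roughly 500 records/s; it only blocks if a stall
        	// outlasts what the buffer can absorb.
        	for i := 0; i < 5000; i++ {
        		records <- i
        		time.Sleep(2 * time.Millisecond)
        	}
        	close(records)
        	fmt.Println("all records enqueued without dropping any")
        }
        ```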

        1. 2

          > They could probably even have written it in Java and run it with a low-pause garbage collector and hit their latency and throughput targets.

          On that same thought, I noticed they ruled out Go without even trying it. Given Go’s emphasis on low GC pauses, I’m curious if it actually would have worked.
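
          One rough way to get at that would be to just measure: the sketch below allocates garbage in the background while timing a synthetic request loop, then prints Go’s reported GC pause quantiles. The allocation rate, per-request cost, and iteration counts are invented for illustration; a real answer would need their actual workload.

          ```go
          package main

          import (
          	"fmt"
          	"runtime/debug"
          	"time"
          )

          func main() {
          	// Background allocator standing in for real per-record work,
          	// so the GC has something to do.
          	go func() {
          		var sink [][]byte
          		for {
          			sink = append(sink, make([]byte, 64<<10)) // 64 KiB per iteration
          			if len(sink) > 1024 {
          				sink = nil // drop references so old allocations become garbage
          			}
          		}
          	}()

          	// Time a synthetic "request" loop at roughly 500/s and record the
          	// worst single iteration, which is where a GC pause would show up.
          	worst := time.Duration(0)
          	for i := 0; i < 2500; i++ { // ~5 s of simulated traffic
          		start := time.Now()
          		time.Sleep(2 * time.Millisecond) // stand-in for per-request work
          		if d := time.Since(start); d > worst {
          			worst = d
          		}
          	}

          	stats := debug.GCStats{PauseQuantiles: make([]time.Duration, 5)}
          	debug.ReadGCStats(&stats)
          	fmt.Println("worst simulated request latency:", worst)
          	fmt.Println("GC pause min/25%/50%/75%/max:", stats.PauseQuantiles)
          }
          ```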