1. 76

  2. 16

    This is absolutely bonkers and completely against my initial intuition. I’d think that adding network round trips and spawning lambda environments would be waaaaay more costly than any improvements you would get from parallelization, but that’s obviously not the case.

    There is the fact that compiling is kind of an obviously parallelizable task anyway, so this is just scaling out the nodes, and I also bet that the custom runtime is somewhat optimized. Still, it completely changes my mental model of what is possible with AWS Lambda and FaaS in general. Very cool.

    1. 8

      I’m equal parts amazed and horrified.

      What other fun did you run into doing this?

      1. 8

        In experiments several years ago on EC2 I found that LLVM build times using ninja scale linearly out to around 48 to 64 CPUs. But that doesn’t get you anywhere near 90 seconds. From memory, about 4 minutes might have been the fastest I managed.

        Maybe they’ve improved the build system.

        Edit: they’re using much more stripped down builds than I was, -O0, only x86 code generation, other stuff.
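
        As a rough illustration of that kind of stripped-down configuration (the paths, job count, and exact cache variables here are placeholders, not the settings from either setup), a minimal sketch might look like:

        ```python
        # Configure a stripped-down LLVM build (Debug, i.e. -O0, with only the
        # X86 backend) and build it with ninja at a fixed parallelism.
        # Paths and the job count below are placeholders.
        import subprocess

        LLVM_SRC = "/path/to/llvm-project/llvm"   # hypothetical checkout location
        BUILD_DIR = "/path/to/build"
        JOBS = 64                                  # roughly where scaling flattened out

        subprocess.run([
            "cmake", "-G", "Ninja",
            "-DCMAKE_BUILD_TYPE=Debug",            # Debug compiles at -O0
            "-DLLVM_TARGETS_TO_BUILD=X86",         # x86 code generation only
            "-S", LLVM_SRC, "-B", BUILD_DIR,
        ], check=True)

        subprocess.run(["ninja", "-C", BUILD_DIR, f"-j{JOBS}"], check=True)
        ```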

        1. 3

          This is why people eventually end up using Bazel and remote build execution: it’s a lot faster and more economical.

          1. 2

            Very cool. Some questions: you mention that the Lambda instances are reused across invocations. Was the 90s measurement with a cold or warm start? How much overhead (in time) do you have for preparing the Lambdas and checking caches? (I’d expect this to matter for partial rebuilds.)

            Somewhat unrelated, but I wish CI would run test suites this way: deep integration into test frameworks such that you can run one test per Lambda function invocation if you’re really in a hurry.

            1. 2

              90s is a cold start; I re-create the Lambda function before benchmarks to make AWS flush any running instances. I could run a second build with everything warmed up and see how different it is – that might be an interesting comparison. I’d expect it to be substantial – uploading files from the client, especially, is a significant part of the latency.
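
              For context, here’s a rough sketch of what forcing a cold start by re-creating the function can look like with boto3; the function name, role ARN, runtime settings, and deployment package are hypothetical placeholders rather than the project’s actual deployment:

              ```python
              # Delete and re-create a Lambda function so subsequent invocations
              # start from fresh (cold) execution environments.
              # Every name and path below is a placeholder.
              import boto3

              lam = boto3.client("lambda")
              FUNCTION = "build-worker"             # hypothetical function name

              with open("worker.zip", "rb") as f:   # hypothetical deployment package
                  code = f.read()

              lam.delete_function(FunctionName=FUNCTION)
              lam.create_function(
                  FunctionName=FUNCTION,
                  Runtime="provided.al2",           # custom runtime
                  Handler="bootstrap",
                  Role="arn:aws:iam::123456789012:role/build-worker",  # placeholder
                  Code={"ZipFile": code},
                  MemorySize=3008,
                  Timeout=300,
              )
              ```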

            2. 1

              This is awesome! I was considering building something similar for offloading compilation of AUR packages for my Pinebook Pro. Is native (not cross-compiled) aarch64 support a goal?

              1. 1

                Lambda doesn’t support aarch64 functions, so I’m not sure how you could do ARM builds using it without some sort of cross-compilation. If there’s a high-performance ARM cloud functions provider out there, I’d potentially be interested in trying it.
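
                For the cross-compilation route, a minimal sketch (the sysroot path is a placeholder, and a real setup also needs aarch64 headers, libraries, and a linker):

                ```python
                # Cross-compile a single translation unit to an aarch64 object file
                # from an x86-64 host using clang's --target flag.
                import subprocess

                subprocess.run([
                    "clang", "--target=aarch64-linux-gnu",
                    "--sysroot=/path/to/aarch64-sysroot",   # placeholder sysroot
                    "-O2", "-c", "hello.c", "-o", "hello.o",
                ], check=True)
                ```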

                1. 1

                  That makes sense. I’m not familiar with Lambda, but I thought it might support a Graviton host.

                  1. 1

                    Could run it inside of qemu; it would just be a constant-factor slowdown.
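
                    For instance, with qemu user-mode emulation (the binary name and library prefix are placeholders):

                    ```python
                    # Run an aarch64 binary on an x86-64 host under qemu user-mode
                    # emulation; -L points at the aarch64 loader/library prefix.
                    import subprocess

                    subprocess.run([
                        "qemu-aarch64", "-L", "/usr/aarch64-linux-gnu",
                        "./aarch64-binary", "--version",
                    ], check=True)
                    ```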