1. 3

    It looks pretty cool to me! I think the thing I want to know the most (but haven’t really been able to find an answer to) is the performance of using async/await when compared to normal promises. I think there are two dimensions to this. The first, and slightly less useful one at present, is a comparison without any transpiling and relying on the browser native implementation. The second, and much more useful at present, is a comparison using transpiling and a generator runtime like regenerator. Namely, if I want to use async/await today, then I probably need to use regenerator. If I do that, what have I lost? (Other than perhaps good stack traces.)
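    The kind of comparison being asked about could be sketched roughly like this (function names and workload are made up for illustration): the same sequential sum written once as a `.then()` chain and once with async/await. Running this natively versus through a Babel/regenerator build would give a first-order answer.

    ```javascript
    // Same work expressed two ways; both resolve to n*(n+1)/2.
    function sumWithThen(n) {
      let p = Promise.resolve(0);
      for (let i = 1; i <= n; i++) {
        p = p.then((acc) => acc + i); // each step schedules a microtask
      }
      return p;
    }

    async function sumWithAwait(n) {
      let acc = 0;
      for (let i = 1; i <= n; i++) {
        acc += await Promise.resolve(i); // each await also round-trips the microtask queue
      }
      return acc;
    }

    // Crude timing harness; a real benchmark would warm up and repeat runs.
    async function bench(n) {
      const t0 = Date.now();
      await sumWithThen(n);
      const t1 = Date.now();
      await sumWithAwait(n);
      const t2 = Date.now();
      return { then: t1 - t0, await: t2 - t1 };
    }
    ```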

    1. 4

      I have no idea what the performance using regenerator is like, but I can comment on native performance. Right now, it’s mediocre to pretty bad, so entirely unusable in tight loops. Then again, the same is true for Promise itself. But yes, it’s worse for async/await, because that needs not only Promises but also generators, both of which have fairly fundamental perf issues. In most engines, generators don’t run in the optimizing JIT tier; I think V8 is currently the only exception (but I might be wrong). I’m not sure how much that matters relative to the required trip through the browser’s event loop/task queue.

      So in the general case, both Promise and async/await will always have non-negligible overhead. However, there are certain, pretty common, scenarios in which async/await should be able to perform substantially better, to the point of being optimizable into simple, fully JIT-compiled loops. SpiderMonkey’s bug 1317481 has more information, but the gist is this: because the resumption after an awaited Promise is fulfilled is guaranteed to be the first frame on the stack, we know that the current task will end upon encountering the await. If in this situation we can ensure that a) the awaited value is not a Promise or other thenable (a plain value just gets wrapped into an immediately resolved Promise) and b) no other tasks are enqueued in the microtask queue, we can simply resume execution immediately without an observable difference. (Note that this only holds from the second iteration on: the first time the await is encountered, there are other frames on the stack.)
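      An illustrative sketch of that fast-path shape (names made up): after the first iteration, the resumption after the `await` is the first frame on the stack, so when `items.shift()` yields a plain value (condition a) and the microtask queue is otherwise empty (condition b), an engine could in principle resume the loop immediately with no observable difference.

      ```javascript
      // Sums a queue of values, awaiting each one. The values are usually
      // plain (non-thenable), so per spec each is wrapped in an
      // already-resolved Promise; nothing else has to run before the loop
      // can continue.
      async function drain(items) {
        let total = 0;
        while (items.length > 0) {
          total += await items.shift();
        }
        return total;
      }
      ```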

      This might seem like it’d only hold for fairly esoteric circumstances, but I’m pretty sure it’ll be a very common scenario. E.g. a ReadableStream that has multiple chunks pending will result in such a tight loop.
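      A sketch of that ReadableStream case (assumes a runtime with WHATWG streams, e.g. modern browsers or Node ≥ 18): when several chunks are already buffered, each `reader.read()` resolves from data that is immediately available, so every iteration after the first matches the fast-path conditions above.

      ```javascript
      // Reads a stream to completion, collecting its chunks.
      async function readAll(stream) {
        const reader = stream.getReader();
        const chunks = [];
        for (;;) {
          const { done, value } = await reader.read();
          if (done) return chunks;
          chunks.push(value);
        }
      }
      ```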

      (Reply moved from top-level so it’s clear what I’m replying to.)

      1. 2

        Re: regenerator performance, if I’m reading http://kpdecker.github.io/six-speed/#test-generator right, the native generator implementations in recent versions of JS engines are faster than the Babel-transpiled ones. In all cases, however, the code using generators is slower than the same code rewritten to not use them at all.
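        The rough shape of that comparison (workload made up for illustration): the same sum written with a generator and as a plain loop. The generator version allocates an iterator and a `{ value, done }` result object per step, which is part of why it trails the hand-written loop even natively.

        ```javascript
        function* range(n) {
          for (let i = 0; i < n; i++) yield i;
        }

        // Iterates via the generator protocol: one next() call per element.
        function sumViaGenerator(n) {
          let total = 0;
          for (const i of range(n)) total += i;
          return total;
        }

        // Same result with a plain counting loop, no iterator machinery.
        function sumViaLoop(n) {
          let total = 0;
          for (let i = 0; i < n; i++) total += i;
          return total;
        }
        ```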