It is worth noting that 80ms is definitely noticeable in, for example, programs used as part of a command-prompt customization, and it comes in addition to any other latency imposed by the execution context (such as the latency MinGW imposes on starting processes).
That being said, it’s definitely not the highest VM startup latency out there; I think I’ve seen 200ms startup times for Node in the past.
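For anyone who wants to check this on their own machine, here’s a rough sketch of measuring bare interpreter startup from the shell. It assumes `python3` is on your PATH and that `date` supports `%N` (GNU coreutils); swap in `node -e ''` or whatever VM you care about. Numbers will of course vary by machine and by whether the binary is already in the page cache.

```shell
# Crude startup-latency measurement: time an interpreter running an empty
# program, so (almost) all of the elapsed time is VM startup overhead.
start=$(date +%s%N)                         # nanoseconds since epoch
python3 -c ''                               # empty program: startup cost only
end=$(date +%s%N)
echo "startup: $(( (end - start) / 1000000 )) ms"
```

Running it a few times and taking the minimum gives a fairer picture, since the first run pays extra for cold disk caches.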
My favourite is the typescript compiler:
~/tmp$ time tsc hello-world.ts
Adding three seconds of interpreter startup, JITing, and dependency reading/parsing before every single invocation of the compiler is really bloody noticeable.
(Sure, it’s not all just VM startup, but it’s all stuff which wouldn’t have to be done in a proper AOT-compiled language.)
It’s also worth noting that the author passes over in silence the two most glaring data points in their own graph, namely the 10^5 and 10^6 array sizes, where the JIT appears to take over 200ms longer than Go/native. Note that “0.1 second [100ms] is about the limit for having the user feel that the system is reacting instantaneously”. So 200ms is probably significant, and 80ms vs 2ms sounds exactly like the difference between seeming instantaneous and not.
I wonder how this would look if you tested a big system instead of a tiny example that just allocates an array and sorts it? I imagine the JITing overhead would be way bigger in realistic programs with more bytecode.
Yeah, and this benchmark is definitely highly biased towards the sort of thing a JIT should be good at, and it still takes a lot of data before using the JIT pays off.
Of course, microbenchmarks are usually not very informative, but even so.