I wish there were a more in-depth analysis of why certain benchmarks were faster or slower, and also a discussion of the benchmarking methodology (e.g., were there pauses to account for thermal throttling on both processors?).
How reliable are benchmarks that report times in the micro- and nanosecond range? Or are these derived from enough repetitions to bring down the noise?
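For what it's worth, Go's built-in benchmark harness already handles the repetition side: it runs the benchmark body b.N times, scaling N until the measurement is long enough to average out, and reports a mean ns/op. A minimal sketch of what such a microbenchmark looks like (the function name and workload here are invented for illustration, not taken from the post):

```go
package mypkg

import "testing"

// Package-level sink so the compiler can't discard the computed result.
var sink int

// BenchmarkSum is a hypothetical microbenchmark. The testing harness
// chooses b.N large enough for a stable total run time, then divides
// to report nanoseconds per operation.
func BenchmarkSum(b *testing.B) {
	data := make([]int, 1024)
	for i := range data {
		data[i] = i
	}
	b.ResetTimer() // exclude setup from the measurement
	for i := 0; i < b.N; i++ {
		total := 0
		for _, v := range data {
			total += v
		}
		sink = total
	}
}
```

Even then, a single run can be noisy; running it repeatedly (e.g. `go test -bench=Sum -count=10`) and comparing the samples with benchstat would show whether those micro- and nanosecond figures are stable or mostly jitter.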
These microbenchmarks are interesting in that they might help identify areas of the code that need a closer look. There are a few cases where the ARM CPU takes twice, or even ten times, as long as the Intel CPU.
What would be more interesting (to me) is also seeing how long it takes to run a real-world job. For example, use Hugo to generate a substantially large website and measure how long that takes.
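Even a thin wrapper around the Hugo CLI would do for that. A rough sketch under assumed conditions: `hugo` is on PATH, `./my-site` is a placeholder path to a large existing site, and the run count is arbitrary:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	const runs = 5 // several runs to smooth over disk caching and throttling effects

	for i := 0; i < runs; i++ {
		cmd := exec.Command("hugo") // assumes the hugo binary is on PATH
		cmd.Dir = "./my-site"       // placeholder: path to a substantially large site
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr

		start := time.Now()
		if err := cmd.Run(); err != nil {
			log.Fatalf("hugo build failed: %v", err)
		}
		fmt.Printf("run %d: %v\n", i+1, time.Since(start))
	}
}
```

Running that back to back on both machines (ideally with cool-down pauses in between) would say more about real workloads than nanosecond-level loops.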
Is this under Rosetta or native? A few of those benchmarks are catastrophically slower, and it seems like translation could cause that.
Native by the looks of it. My MacBook Air is arriving today, so I’m looking forward to running some benchmarks of my own.
Next blog post: “Ruby on Rails on M1: see why it may (or may not) be 10x faster!”