Genuinely surprised the original CLOCKSP program didn’t suffer from integer wraparound or anything trying to compute and print those performance numbers. :)
We can compare the clock speed of an original 6502 to a modern x86_64 CPU and, yes, CPUs have gotten a lot faster. But the fact that the emulated CPU’s effective clock speed is so much higher than the host CPU’s actual clock speed is evidence of how much more efficient CPUs have gotten, with caches and branch prediction and out-of-order execution (and Spectre and Meltdown). It’s easy to overlook those other factors, since they don’t have a nice big number attached to them, but wow, I’m impressed.
Very very cool. Reminds me of the 68000 FPGA work that’s been going on, where they re-implemented the CPU and are trying to clock it higher at the hardware level.
It’s a very cool project and an interesting read.
One point I would like to add: I think good performance absolutely requires predictable interrupts. For the author this works out, because his interrupt sources are timers and the VSYNC signal, both highly predictable. This becomes impossible if you simulate a wide range of systems with interrupt sources that can’t be pinned down so easily.
What I don’t understand: what happens when we know an interrupt is going to land inside a block? Is there a fallback to an interpreter?