Making my bet: over the next 15 years, RISC-V (or similar) industrial capacity gets to the point where using modern-ish fab tech on open hardware is viable. Software-to-hardware compilers improve enough to get within a factor of two of the performance of hand-rolled RTL, but with drastically less human labor. That opens up opportunities for more (high-quality) formal verification, decreasing the incidence of stuff like this.
Intel is like the Microsoft of hardware. Once open hardware becomes viable, it’s going to be vastly better, faster, and more reliable than “traditional” hardware, for the same reasons Linux, the BSDs, etc. are more reliable and (subjectively, of course) vastly better than Windows. Most of Intel’s market share will come from slow-moving tech-conservative behemoths. It’s the future version of “no one ever got fired for buying IBM”.
Does formal verification of hardware go beyond logic? The power management in modern CPUs can be pretty complex, and e.g. Ryzen already showed bugs that result from misapplied voltages. I would’ve liked some details about this HT bug but nope, we keep guessing.
Seeing how little formal verification is employed in open software, I have a hunch that it won’t be a huge thing in open hardware either, not in the near future at least.
Maybe in a simple microcontroller…
RISC-V PCs would be fun though.
Intel is a big user of formal verification, ever since the 1994 Pentium FDIV bug, which cost them $475 million.
Based on my discussions with Intel employees, they don't formally verify much of anything beyond some individual FUs, and they don't use much modern theorem-prover- or satisfiability-based verification; instead they rely on exhaustive property checking. This is far behind the cutting edge, both in capability and in cost.
Of course, their latest formal work might be kept secret.
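To make the distinction concrete, here's a toy sketch (an illustrative example I made up, not Intel's actual flow): exhaustive property checking verifies a property by enumerating every input, which is tractable for a narrow functional unit but grows exponentially with width, whereas SAT/theorem-prover methods reason symbolically instead of enumerating.

```python
# Exhaustive property checking of a toy 4-bit ripple-carry adder.
# The property: the gate-level adder agrees with integer addition
# on ALL inputs. Cost is 4**width checks, so this only works for
# small widths -- the limitation the comment above alludes to.

def ripple_carry_add(a, b, width=4):
    """Bit-by-bit ripple-carry adder over `width`-bit inputs."""
    carry, result = 0, 0
    for i in range(width):
        x = (a >> i) & 1
        y = (b >> i) & 1
        s = x ^ y ^ carry                    # sum bit of this stage
        carry = (x & y) | (carry & (x ^ y))  # carry out of this stage
        result |= s << i
    return result | (carry << width)         # final carry becomes the top bit

def check_adder_exhaustively(width=4):
    """Return a counterexample (a, b) if the property fails, else None."""
    for a in range(2 ** width):
        for b in range(2 ** width):
            if ripple_carry_add(a, b, width) != a + b:
                return (a, b)
    return None  # property holds for every input pair

print(check_adder_exhaustively(4))  # None -> verified for all 4-bit inputs
```

A symbolic tool (an SMT solver, or a theorem prover like the HOL Light mentioned below) would instead prove the same property for arbitrary width without enumerating inputs, which is why the enumeration-only approach reads as behind the cutting edge.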
I consider the "Formal Methods at Intel" presentation by John Harrison (Intel employee, HOL Light author) to be definitive. It mostly agrees with you, but there is a matter of perspective: maybe they don't verify much beyond FUs, but verifying FUs is still a lot more verification than almost anywhere else does.
I agree, but for a different reason. Intel is absolutely the world's best in "fab tech", and it will likely continue to be. IF fab tech continues to advance, Intel will continue to have the upper hand. I am pessimistic about Intel only because Moore's law seems to be ending.
Precisely. Moore's law is slowing down, which is why AMD is catching up faster than Intel can pull ahead, and why Intel had to slow its development cycle.
I’ve been considering building a desktop machine, but I’ve been a little put off by reports of flakiness in high-end hardware. First reports of segfaults on Ryzen CPUs, and now broken hyperthreading on these?
As someone once said: “The biggest performance boost your code will ever get is when it goes from not working to working”.