This is tricky when all our systems are distributed, all our software is written in a hurry, and most of us aren’t expert programmers. There are a couple of possibilities:
Programmers with the necessary level of mathematical sophistication to find bugs in published proofs of distributed algorithms will program circles around the rest of us, who will be stuck wasting all our time debugging the messes that we create; and consequently the software used by the vast majority of people will be written by a tiny minority of elite programmers. This resembles the situation with, say, relational database management systems, where lots of people have written some crap to store data on disk, but most people use SQLite, MariaDB, or Postgres.
We’ll make do with incorrect algorithms and patch them when they fail, trying to make sure that the failure modes are not too dangerous. This resembles the situation with most web sites.
Yeah, I thought the assumption that published methods are “settled” seemed odd and naive. Publication is merely the beginning of the peer-review process.
The idea that “if it’s in a published paper it must be right” is certainly naive, but I don’t think it’s odd or unusual. For people with extensive academic training, seeing a paper as one step in a long walk towards truth is second nature. For people without it, a paper in a good journal with respected names attached is treated as strong evidence that its contents are right. That heuristic isn’t actually a bad one, as these things go, but it still occasionally produces incorrect results.
This reminded me of the quicksort bug.
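Assuming this refers to the midpoint-overflow bug that Joshua Bloch described in 2006 (it lurked in “proven” binary search and mergesort implementations, including the JDK’s, for years), a minimal sketch of the bug and its standard fix:

```java
public class MidpointOverflow {
    // Buggy: the textbook midpoint. (low + high) overflows int when both
    // indices are large, producing a negative "midpoint" and, in a real
    // search, an ArrayIndexOutOfBoundsException.
    static int midBuggy(int low, int high) {
        return (low + high) / 2;
    }

    // Fixed: compute the offset first, so the intermediate value
    // never exceeds the int range.
    static int midFixed(int low, int high) {
        return low + (high - low) / 2;
    }

    public static void main(String[] args) {
        int low = 1_500_000_000, high = 2_000_000_000;
        System.out.println(midBuggy(low, high)); // negative: the sum wrapped around
        System.out.println(midFixed(low, high)); // 1750000000
    }
}
```

The point matches the thread: the algorithm’s proof was fine, but the proof reasoned over unbounded integers while the implementation ran on 32-bit ones.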