This is really interesting. It’s always been clear to me in scientific computing that, outside of repeatability and stability concerns, most of the math is quite fuzzy and could easily be replaced by less precise operations. Applying that broadly would be tricky, since stability concerns are very real in many standard algorithms, but there are plenty of tricks that let you “avoid division” in specialized ones.
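One classic division-avoiding trick is Newton-Raphson reciprocal refinement: to compute 1/a you iterate x ← x·(2 − a·x), which uses only multiplies and subtracts and converges quadratically from a decent seed. A minimal sketch (the function name is mine; the seed 48/17 − 32/17·a is the standard linear initial guess for a in [0.5, 1]):

```python
def recip_newton(a, x0, iters=4):
    """Approximate 1/a without dividing.

    Newton-Raphson on f(x) = 1/x - a gives the update
    x <- x * (2 - a * x): multiplies and subtracts only.
    Convergence is quadratic, so a few iterations suffice.
    """
    x = x0
    for _ in range(iters):
        x = x * (2.0 - a * x)
    return x

a = 0.75
seed = 48/17 - (32/17) * a  # linear initial guess, valid for a in [0.5, 1]
approx = recip_newton(a, seed)
error = abs(approx - 1/a)
```

This is essentially how hardware reciprocal and divide units work: a small lookup or linear fit for the seed, then a couple of multiply-based refinement steps.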
A 1% error is especially irrelevant for algorithms that inherently tolerate noise, like neural networks and computer vision. The quarter-billion or so neurons in the human visual cortex don’t compute arithmetic with 32 bits of precision either.
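You can see how much headroom there is below 1% by redoing a computation in IEEE-754 half precision, which carries only ~3 decimal digits. A quick sketch using Python’s `struct` format `'e'` to round values through float16 (the setup with random vectors is just an illustration):

```python
import random
import struct

def to_half(x):
    """Round a Python float through IEEE-754 half precision (float16)."""
    return struct.unpack('e', struct.pack('e', x))[0]

random.seed(0)
a = [random.uniform(0.0, 1.0) for _ in range(1000)]
b = [random.uniform(0.0, 1.0) for _ in range(1000)]

# Reference dot product in full double precision.
exact = sum(x * y for x, y in zip(a, b))

# Same dot product with every operand and product squeezed to float16.
approx = sum(to_half(to_half(x) * to_half(y)) for x, y in zip(a, b))

rel_err = abs(approx - exact) / abs(exact)
```

Even with every multiply done at ~11 bits of mantissa, the accumulated relative error of a 1000-term dot product lands orders of magnitude below 1%, which is exactly why inference hardware has moved to fp16, bf16, and int8.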