It's probably worth noting that if you include the log-details data in __le__ and use the same data (input) in __eq__, then everything is fine. Maybe the docs for total_ordering should clarify that the derived methods are correct only if the provided methods are themselves correct (i.e. follow the ordering laws).
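To make that concrete, here's a minimal sketch (the Event class and its fields are hypothetical, just for illustration) of what goes wrong when __eq__ and __lt__ look at different data, so total_ordering derives inconsistent operators:

```python
from functools import total_ordering


@total_ordering
class Event:
    """Hypothetical class whose __eq__ and __lt__ disagree on
    which data matters: __eq__ only looks at id, __lt__ only at
    timestamp. total_ordering derives __le__/__ge__ from them."""

    def __init__(self, id, timestamp):
        self.id = id
        self.timestamp = timestamp

    def __eq__(self, other):
        return self.id == other.id          # ignores timestamp

    def __lt__(self, other):
        return self.timestamp < other.timestamp  # ignores id


a = Event(1, 10)
b = Event(1, 20)  # same id, later timestamp

print(a == b)  # True  (ids match)
print(a < b)   # True  (timestamps differ) -- already contradicts ==
print(a <= b)  # True  (derived as: a < b or a == b)
print(a >= b)  # False (derived as: not (a < b)) -- yet a == b is True!
```

The fix is exactly what's described above: compare the same data in both methods (e.g. both use `(self.id, self.timestamp)`), and the derived operators come out consistent. total_ordering can't check the laws for you; garbage in, garbage out.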
But yeah, as you point out, the model is almost always some approximation and simplification of reality, never completely true in all aspects.
Something similar happens even with basic things like floating-point numbers - consider e.g. what fast math in C does (https://stackoverflow.com/questions/6430448/why-doesnt-gcc-optimize-aaaaaa-to-aaaaaa) or how Haskell's standard library is a lie with respect to typeclass laws (https://stackoverflow.com/a/15746633).
I guess the only conclusion is that everything is broken and can’t be fixed ;)
But also, we should accept the limitations of computers, try to do our best, and then the results might be just good enough.