This is an interesting write-up. I never thought of exploiting machine precision in that way.
For the method that fails, shouldn’t the author keep all the checks on x instead of f(x)? By the definition of a limit, we can control the tolerance in x, but we cannot control the tolerance of f(x), which in some cases can be several orders of magnitude larger than the original tolerance, depending on the function used.
If we want to keep a tolerance check, why not use something like:
`while x2 - x1 > tol`
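For instance, a minimal bisection sketch with the tolerance check on the interval width rather than on f(x) (the names `f`, `x1`, `x2`, and `tol` here are illustrative, not taken from the original post):

```python
def bisect(f, x1, x2, tol=1e-12):
    """Find a root of f in [x1, x2], assuming f(x1) and f(x2) have opposite signs."""
    if f(x1) * f(x2) > 0:
        raise ValueError("f(x1) and f(x2) must have opposite signs")
    # Stop when the bracketing interval itself is narrow enough,
    # so the tolerance directly bounds the error in x.
    while x2 - x1 > tol:
        mid = 0.5 * (x1 + x2)
        if f(x1) * f(mid) <= 0:
            x2 = mid  # root lies in the left half
        else:
            x1 = mid  # root lies in the right half
    return 0.5 * (x1 + x2)

root = bisect(lambda x: x * x - 2.0, 0.0, 2.0)
```

This guarantees the returned x is within tol of a true root, regardless of how steep or flat f is near that root.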