This can be considered a feature request to improve Rust’s support for LLVM’s Fast-Math Flags. Rust in fact provides access to this LLVM functionality, see https://doc.rust-lang.org/std/intrinsics/fn.fadd_fast.html (it enables all fast-math flags), but does not provide a command-line flag to turn normal floating-point operations into fast-math floating-point operations. It is debatable whether such a flag is a good idea.

Agreed. A more meaningful chart would be wall clock time to non-flawed result. fast-math has its place, but it doesn’t mean the non-fast math is slow math. It is fast for a reason.

As pointed out in the article, though, fast math isn’t necessarily lower precision. Fused multiply-add will actually improve your precision. For scientific computing, which is what IEEE floating point was designed for, you need exact reproducibility, though. For other things, speed is more important. There needs to be a way to tell the compiler what you want, at the function or scope level.

Edit: I think I misread your comment at first. My response is not really related. But I’ll leave this because it’s still true, IMHO.

There was a discussion about optimizing FMA in Rust, and an argument against it was that if `x*x - y*y` computes one side with higher precision, then the expression overall may end up with a noticeably worse result (e.g. the wrong sign in the `x==y` case).
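You can see this effect on stable Rust without any compiler flags by using `f64::mul_add`, which behaves like a hardware FMA (one rounding for the whole multiply-add). A sketch, assuming `x == y` so the exact answer is zero; the contracted form instead returns the rounding error of the side that was rounded first:

```rust
fn main() {
    let x = 1.0f64 / 3.0;
    let y = x; // x == y, so the exact value of x*x - y*y is 0.0

    // Plain evaluation: both squares round identically, so the
    // difference is exactly zero.
    let plain = x * x - y * y;

    // FMA-contracted evaluation: x*x is kept at full precision inside
    // the fused operation, while y*y has already been rounded, so the
    // result is the (nonzero) rounding error of y*y rather than 0.
    let fused = x.mul_add(x, -(y * y));

    println!("plain = {plain:e}, fused = {fused:e}");
    assert_eq!(plain, 0.0);
    assert!(fused != 0.0);
}
```

Whether the contracted result comes out above or below zero depends on which way `y*y` rounded, which is exactly the sign-flip hazard described above.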

Personally I’d love a “fast float without NaN” type.
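You can approximate the invariant half of that today with a newtype that rejects NaN at its boundaries. This is a hypothetical `NoNan` wrapper for illustration (crates like ordered-float offer something similar); it gives you the no-NaN guarantee but none of the fast-math codegen, since stable Rust has no way to attach `nnan` flags to a type:

```rust
/// Hypothetical sketch: an f64 that is statically known not to be NaN.
#[derive(Clone, Copy, PartialEq, PartialOrd, Debug)]
struct NoNan(f64);

impl NoNan {
    fn new(v: f64) -> Option<NoNan> {
        if v.is_nan() { None } else { Some(NoNan(v)) }
    }
    fn get(self) -> f64 {
        self.0
    }
}

impl std::ops::Add for NoNan {
    type Output = Option<NoNan>;
    // Adding two non-NaN floats can still produce NaN (inf + -inf),
    // so the checked addition returns an Option.
    fn add(self, rhs: NoNan) -> Option<NoNan> {
        NoNan::new(self.0 + rhs.0)
    }
}

fn main() {
    let a = NoNan::new(1.5).unwrap();
    let b = NoNan::new(2.5).unwrap();
    assert_eq!((a + b).unwrap().get(), 4.0);
    assert!(NoNan::new(f64::NAN).is_none());

    // inf + -inf would be NaN, so the checked add reports it:
    let inf = NoNan::new(f64::INFINITY).unwrap();
    let ninf = NoNan::new(f64::NEG_INFINITY).unwrap();
    assert!((inf + ninf).is_none());
}
```

The `Option` return on arithmetic is the awkward part; a real language-level type could instead let the compiler assume `nnan` and make the operators total.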

True, there are edge cases where accuracy/precision can be worse by introducing FMA. To be fair, though, if you care about accuracy/precision and you’re calculating `x*x - y*y` where `x ~= y`, then you’re doing it wrong.
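For the record, the usual fix when `x ~= y` is the factored form `(x - y) * (x + y)`, which avoids subtracting two nearly equal rounded squares. A small sketch with values chosen so the rounding is visible:

```rust
fn main() {
    let x = 100_000_001.0f64; // 1e8 + 1, exactly representable
    let y = 100_000_000.0f64; // 1e8, exactly representable

    // The exact value of x*x is 10_000_000_200_000_001, which needs 54
    // bits, so it is rounded before the subtraction and the true answer
    // 200_000_001 is lost.
    let plain = x * x - y * y;

    // Factored form: x - y, x + y, and their product are all exact here,
    // giving the exact difference of squares.
    let factored = (x - y) * (x + y);

    println!("plain = {plain}, factored = {factored}");
    assert_eq!(factored, 200_000_001.0);
    assert!(plain != factored); // the naive form is off by the rounding of x*x
}
```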
