The state of floating-point support in languages and compilers is really abysmally bad.
In steel bank common lisp, some optimisations appear to be aware of the potential existence of inf/nan and overflow, but others are not. There is an optimisation quality, sb-c::float-accuracy, which is nominally supposed to guard such optimisations, but in fact it guards only a couple of them.
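To make the hazard concrete (a sketch in C rather than lisp, and not specific to sbcl's optimiser), the usual algebraic rewrites are only valid if you can prove inf/nan never occur:

    /* Sketch of why 'obvious' algebraic rewrites are unsound once inf/nan
       are in play; an optimiser that applies them unconditionally changes
       IEEE 754 results. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double x = INFINITY, n = NAN;
        printf("inf * 0.0  = %f\n", x * 0.0);  /* nan, so x*0.0 -> 0.0 is wrong  */
        printf("inf - inf  = %f\n", x - x);    /* nan, so x-x   -> 0.0 is wrong  */
        printf("nan == nan = %d\n", n == n);   /* 0,   so x==x  -> true is wrong */
        return 0;
    }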
LLVM was apparently rife with floating-point miscompilations until they started formalising it with alive2 and cleaning them up.
In C, there is a pragma you are supposed to use in translation units where you change the rounding mode (etc.) at runtime: ‘#pragma STDC FENV_ACCESS ON’. In practice, as far as I can tell, compilers completely ignore this, and will silently break your code if you change the rounding mode (etc.). Oops.
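A minimal sketch of what that looks like in practice (assuming a hosted C99 implementation with <fenv.h>; the exact behaviour depends on compiler and flags):

    /* FENV_ACCESS is supposed to tell the compiler that the program reads
       or writes the dynamic FP environment.  Many compilers warn that the
       pragma is unsupported and ignore it, so the division below may be
       constant-folded at compile time under round-to-nearest, and the
       fesetround call silently has no visible effect. */
    #include <fenv.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON

    int main(void) {
        fesetround(FE_UPWARD);
        double third = 1.0 / 3.0;   /* rounded upward: 0.33333333333333337 */
        printf("%.17g\n", third);   /* a folding compiler prints 0.33333333333333331 */
        fesetround(FE_TONEAREST);
        return 0;
    }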
The newer gpu languages (spv, dxil, metal) seem to do somewhat better here, with more granular knobs for fp-related optimisations; there is a nice overview here (still not sure about rounding, though). That is somewhat ironic, considering how incredibly bad this used to be. However, it doesn’t save you from hardware malfeasance; afaik, you do not get correctly rounded division (nor sqrt?), and fma may be implemented as mul+add, rounding twice. Oops.
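The fma point can be illustrated on the cpu side as well; a small C sketch of the double-rounding gap, with values chosen so the separately rounded product loses the low bit:

    /* fma(a, b, c) rounds once; a*b + c rounds the product and then the
       sum, and the two can differ in the last place. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double a = 1.0 + 0x1p-27;        /* exactly representable */
        double b = 1.0 + 0x1p-27;
        double c = -(1.0 + 0x1p-26);

        double fused   = fma(a, b, c);   /* one rounding: 2^-54 */
        double twostep = a * b + c;      /* two roundings: 0.0 (unless the
                                            compiler contracts it into an fma,
                                            cf. -ffp-contract) */
        printf("fma:     %.17g\n", fused);
        printf("mul+add: %.17g\n", twostep);
        return 0;
    }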
As far as I can tell, java simply doesn’t give you access to any part of the dynamic floating-point environment; I’ll defer to kahan for thoughts on that.
Comparatively little effort and care have gone into the design and implementation of most other programming languages (one litmus test: java, c/c++, and vulkan are the only mainstream environments with defined concurrency semantics; c++ has some hangers-on by proxy of llvm, but all of the actual concurrency semantics work comes from c++). There are probably some other exceptions (fortran?), with which I am unfamiliar.
I believe Go has well-defined concurrency behavior.
Rust basically prevents mutable data from being visible to other threads at all, which to me seems well-defined.
Rust certainly has shared-memory concurrency. It is one of the many llvm hangers-on with nebulous or no semantics unto themselves. It does not ‘prevent mutable data from being visible to other threads’, though it does attempt to make such sharing explicit.
When you talk about “defined concurrency semantics” in the context of floating point, what specifically do you refer to? I think it means that the floating point environment (rounding mode, currently raised exceptions) must be saved and restored when there is a context switch to a different thread. Are there mainstream languages that screw this up?
It is unrelated to floating-point; it is a litmus test for the amount of care and attention that went into a language’s design. See boehm’s ‘threads cannot be implemented as a library’ for why this is important; many languages either attempt to implement threads as a library, or else inherit c++’s semantics by accident when their implementations target llvm.