Before I clicked the link I thought it was going to be about how the infinity of the reals is bigger than the infinity of the integers, but instead it was about IEEE floating point numbers and was pretty neat.
For many applications such as Monte Carlo simulations, 23-bit resolution (or even the full 32) isn’t enough and you want to be using 64-bit floats.
For example, let’s say that you’re doing the Box-Muller transform to generate two independent random normal variates:
x ~ U(0, 1], y ~ U(0, 1] =>
a = sqrt(-2*log(x)) * cos(2*pi*y), b = sqrt(-2*log(x)) * sin(2*pi*y).
Note that my interval is inclusive at the 1 rather than 0; this is to avoid taking a log of zero.
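A minimal sketch of the transform above in Python (names are mine, not from any particular library) — note the `1.0 - rng()` trick, which maps `random.random()`’s half-open [0, 1) onto (0, 1] so the log never sees zero:

```python
import math
import random

def box_muller(rng=random.random):
    """Return two independent standard normal variates via Box-Muller.

    x and y are drawn uniformly from (0, 1]: since rng() returns values
    in [0, 1), the shift 1.0 - rng() excludes 0, so log(x) is finite.
    """
    x = 1.0 - rng()
    y = 1.0 - rng()
    r = math.sqrt(-2.0 * math.log(x))
    return r * math.cos(2.0 * math.pi * y), r * math.sin(2.0 * math.pi * y)
```

In Python both draws are already 64-bit doubles; the truncation issue described here bites when the underlying uniform generator only fills 23 (or 32) bits of mantissa.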
The granularity of very low x (e.g. the fact that the absolute lowest value is 2^-23) actually biases this transform away from large variates: tail events (5+ sigma) become significantly rarer in the simulation than they should be. This isn’t an issue if you’re generating a few hundred or even a couple million random numbers, because 5+ sigma is already extremely rare, but it can be an issue if you need billions of random numbers and a faithful Monte Carlo.
Of course, that issue still exists in theory at 53 bits with doubles, but you’re now seeing the truncation at 8+ standard deviations, which is usually considered to be close enough to “will never happen”.
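The two ceilings quoted above follow directly from the transform: with the cosine pinned at ±1, the largest variate you can ever see is sqrt(-2*log(x_min)). A quick check for a smallest uniform draw of 2^-23 versus 2^-53:

```python
import math

def max_variate(bits):
    """Largest Box-Muller output when the smallest uniform draw is 2**-bits
    (taking the cos/sin factor at its extreme of +/-1)."""
    return math.sqrt(-2.0 * math.log(2.0 ** -bits))

print(max_variate(23))  # ~5.65 sigma: the 23-bit ceiling
print(max_variate(53))  # ~8.57 sigma: the 53-bit ceiling
```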
Also relevant/interesting:
http://mumble.net/~campbell/2014/04/28/uniform-random-float