After reading, I have to ask: what about a “reducing” technique? You could average the numbers in equal-sized chunks, then average those averages. Then you could even chunk the chunks’ averages and repeat the process as many levels down as you want to. And then the chunks could be as small as 2 each.
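The idea above can be sketched in a few lines. This is only an illustration of the "reduce in chunks of 2" scheme, and it assumes the list length is a power of two so every chunk really is equal-sized; with uneven chunks the result is no longer the true mean.

```python
# Sketch of the "reducing" idea: average adjacent pairs, then average
# those averages, recursing until one value remains. Assumes len(xs)
# is a power of two so all chunks are equal-sized.

def chunked_mean(xs):
    if len(xs) == 1:
        return xs[0]
    # average each adjacent pair; intermediate values stay in the
    # same magnitude range as the inputs, so no overflow from summing
    pairs = [(xs[i] + xs[i + 1]) / 2 for i in range(0, len(xs), 2)]
    return chunked_mean(pairs)

print(chunked_mean([2.0, 4.0, 6.0, 8.0]))  # 5.0
```

Note the per-pair sum `xs[i] + xs[i + 1]` can itself overflow for values near the float maximum, which is exactly the objection raised below.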
I guess this still assumes that the largest number in the original list is less than or equal to the maximum floating-point value, but other than that you stay roughly in the same space of precision as the original data.
So how do you compute the average of two maximum float values? Or just one? The instant you add anything positive to a max float it becomes inf.
aren’t doubles there for that?
There’s a max value for a double: 1.797693134862316e+308. How do you find the average of 1.797693134862316e+308 and 1.797693134862314e+308?
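For what it’s worth, you don’t need a wider type for this case: halving before adding (or the equivalent `a + (b - a) / 2`) keeps the intermediate result in range. A quick sketch, using `sys.float_info.max` and a nearby slightly smaller value as the two operands:

```python
import sys

a = sys.float_info.max   # 1.7976931348623157e+308, the largest finite double
b = a - 1e292            # a nearby, slightly smaller double

naive = (a + b) / 2      # a + b overflows to inf, so this is inf
safe = a / 2 + b / 2     # halve first, then add: stays finite

print(naive)             # inf
print(safe)              # a finite value between b and a
```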
But we are talking about floats, so that’s how you find the average between floats.
For average between doubles, you obviously need to use triples, then quadruples, etc.
Uh, triples and quadruples aren’t commonly-available data types. When people say “float” they almost always mean “double-precision float”, because that’s the most “native” type nowadays.
Anyway, it would be extremely wasteful to try to convert doubles to some rare, higher-precision floating point type to handle this case.
Oh c’mon, of course I know.
The point was that the solution to the float problem was to use doubles, but then the chain ends there…
A double is a float; single precision is very rarely used, and you’re not solving anything by doubling memory usage needlessly (if you can even double it, by finding a quad-precision implementation for your programming language).
And how about turning them into BigInts (scaled by a given precision P so there’s no fractional part), doing all your fancy calculations, and then dividing by P and converting the number back to float?
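A rough sketch of that, leaning on Python’s arbitrary-precision ints (the scale factor `P = 10**6` is an arbitrary choice, not anything canonical):

```python
# Fixed-point sketch: scale each float by P, round to an integer,
# sum exactly in arbitrary-precision integer space (no overflow),
# then divide once by len(xs) * P to get back to a float.

def fixed_point_mean(xs, P=10**6):
    total = sum(round(x * P) for x in xs)  # exact integer sum
    return total / (len(xs) * P)

print(fixed_point_mean([0.1, 0.2, 0.3]))  # 0.2
```

One caveat: the scaling step `x * P` is still a float multiply, so for inputs already near the double maximum it overflows before you ever reach integer land; you’d need to decompose the float exactly (e.g. via its mantissa and exponent) to make this fully general.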
A bit convoluted, but then it doesn’t become too hard.