1. 34
  1.  

  2. 11

    You have a list of floating point numbers. No nasty tricks - these aren’t NaN or Infinity, just normal “simple” floating point numbers.

    I would classify looking at numbers larger than 10^307 as a nasty trick.

    1. 4

      Or smaller than 10^-308. Or performing aggregate operations on more than a few thousand of them (though that’s not applicable here). Floating point numbers are a bizarre subset of the rationals that have a convenient representation and desirable distribution, but they’re not even a field; most math doesn’t work on them! The errors add up, especially near the boundaries of the range.
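
      For anyone who hasn’t run into this, the “errors add up” point is easy to see from a couple of lines of Python (the exact values here are just illustrative):

      # Addition on floats isn't associative: grouping changes the result.
      a = (0.1 + 0.2) + 0.3   # 0.6000000000000001
      b = 0.1 + (0.2 + 0.3)   # 0.6
      print(a == b)           # False

      # And near the top of the range, intermediate results overflow.
      print(1e308 + 1e308)    # inf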

      Of course, it’s frequently worth reminding people of this fact, and it’s certainly neat that Hypothesis will find edge cases in floating-point code; but, as the author notes, almost nobody cares.

    2. 6

      Praise be our lord, his name is Integer.

      1. 3

        You still need double-length integers for “exact” average, tho.

        1. 2

          So when you take the mean of a list of integers, do you use integer division? Is your final result also an integer? Do you sum the integers forever, use “infinite precision” arithmetic and return a fraction? I’m trying to think how this would work.
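
          One way I could imagine it working in Python, just as a sketch - ints are already arbitrary precision, so the sum is exact, and fractions.Fraction leaves the rounding decision to the caller:

          from fractions import Fraction

          def exact_mean(ints):
              # Exact sum (Python ints don't overflow), exact ratio as the result;
              # wrap it in float() at the end if an approximation is acceptable.
              return Fraction(sum(ints), len(ints))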

        2. 3

          At the cost of being a tad less correct:

          def mean(ls):
              return sum( map( lambda n: n/len(ls), ls ) )
          

          Edit: Whee! I recreated the second example!

          1. 3

            Isn’t that the second example?

            1. 2

              Hah. Good point, I completely missed it, thanks.

              Strangely enough, though, it doesn’t give any messages for me if I run it through hypothesis myself.

              1. 4

                It takes a fair few runs to find it in modern hypothesis. This appears to be a regression. I should probably have mentioned that in the article.

                1. 1

                  Need a hypothesis for hypothesis

                  1. 3

                    Hypothesis actually is tested with Hypothesis, but as with normal tests using Hypothesis you still need to come up with the things you want to test :-)

          2. 3

            I’ve been faced with this problem lately - finding a robust, online way to calculate the mean of a stream of numbers coming in. It is indeed a harder problem than it seems.

            My approach is to take an N-ary tree and prune the branches that aren’t needed. So, effectively, for X numbers I’d be keeping log_N(X) nested running averages: one of the last 1 to N values, updated on each insert; one of the last N to N×N values, updated every N inserts (i.e. the running average of the previous complete running averages); and so on, repeated log_N(X) times, plus balancing/rollover operations, which boil down to adding a new node at the top. At any point, the mean is an appropriately weighted combination of these running averages.

            Each running average involves summing P numbers, where P < N, and then dividing by N - so you need a double-double, but you should get “minimal” error overall; you shouldn’t get accumulating error or other bad things.
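
            Roughly, a sketch of that shape in (untested) Python - the names and the fan-out value are arbitrary:

            class CascadedMean:
                def __init__(self, fanout=16):
                    self.n = fanout
                    self.levels = [[]]  # levels[k] holds partial means, each covering n**k raw values
                    self.count = 0

                def add(self, x):
                    self.count += 1
                    self.levels[0].append(x)
                    k = 0
                    # Roll full buffers upward: the mean of N partials becomes one
                    # partial one level up, so every division is by the constant N.
                    while len(self.levels[k]) == self.n:
                        block_mean = sum(self.levels[k]) / self.n
                        self.levels[k] = []
                        if k + 1 == len(self.levels):
                            self.levels.append([])
                        self.levels[k + 1].append(block_mean)
                        k += 1

                def mean(self):
                    # Weight each buffered partial mean by the number of raw values it covers.
                    total = 0.0
                    for k, buf in enumerate(self.levels):
                        for m in buf:
                            total += m * self.n ** k
                    return total / self.count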

            If anyone knows of, or can think of, any holes/gotchas in this approach, I’d love to hear them.

            1. 3

              What goes wrong with the naive streaming approach of m_1=x_1, m_n = m_(n-1) * (n-1)/n + x_n/n (possibly with some standard fp math tricks I don’t know about)?
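
              For concreteness, that recurrence as (untested) Python:

              def streaming_mean(xs):
                  # m_1 = x_1, m_n = m_(n-1) * (n-1)/n + x_n/n
                  # (often rearranged as m += (x - m) / n, which needs only one division)
                  m = 0.0
                  for n, x in enumerate(xs, start=1):
                      m = m * (n - 1) / n + x / n
                  return m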

              1. 3

                The problem I’d see with the naive running average is that as n becomes large, (n-1)/n is going to round to 1 and 1/n effectively becomes zero, and you’re screwed. Plus errors accumulate with each insertion.

                The virtue I’d claim for my approach is that you’re never dividing by more than a fixed constant. And you can make most of the divisions be by that constant, which you can choose to be a power of 2, which should give minimal error if done appropriately.

              2. 2

                It seems like something that would be in the published literature, but Google Scholar isn’t finding much for me. This paper has something related, an addition algorithm that minimizes error: http://dx.doi.org/10.1109/ARITH.1991.145549 - maybe it could be inspiration, or there might be something useful in that journal / by that author?
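
                The much simpler classic in that family is Kahan (compensated) summation - probably not what that paper describes, but maybe a useful baseline: carry the rounding error forward explicitly and divide once at the end. Roughly:

                def kahan_mean(xs):
                    # Compensated (Kahan) summation: keep track of the low-order
                    # bits that a plain running sum would drop.
                    total = 0.0
                    c = 0.0
                    count = 0
                    for x in xs:
                        count += 1
                        y = x - c
                        t = total + y
                        c = (t - total) - y
                        total = t
                    return total / count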

                1. 2

                  Ah, it seems like anything that emulates arbitrary precision arithmetic would naturally guarantee exactness. And if you keep a running sum with arbitrary precision arithmetic and divide only at the end, the resulting algorithm is more or less identical to the approach I’ve been thinking of - if you break the process down into operations on regular floats.
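
                  Incidentally, Python’s math.fsum is an example of roughly that: it tracks the running sum as exact partial floats and rounds only once at the end, so for a non-streaming mean something like this is about as good as it gets without leaving floats:

                  import math

                  def fsum_mean(ls):
                      # math.fsum tracks exact partial sums internally, so effectively
                      # the only rounding left is the final division.
                      ls = list(ls)
                      return math.fsum(ls) / len(ls)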