1. 7

  2. 5

    The analysis would be more interesting if the algorithms described were designed with the same use case in mind.

    1. 3

      It’s not really fair to run the O(n) loop for Dave’s and the Rosetta Code versions. If you want a list, you’d want to rewrite those functions. I think Dave’s code is slower since it isn’t tail-recursive.

      But I think you can do better by eliminating the Enum.reverse call: Erlang can build up lists in a body-recursive style, avoiding the final reversal (which IIRC wastes even more memory than time). For example,

      def fibonacci_body(n), do: fibonacci_body_(n, 0, 1)
      def fibonacci_body_(0, _prev, _prevprev), do: []
      def fibonacci_body_(n, prev, prevprev) do
        [prev | fibonacci_body_(n - 1, prev + prevprev, prev)]
      end
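
      For reference, here is my guess at the shape of the tail-recursive `fibonacci` it’s being benchmarked against (a sketch, not the actual code from the post): it builds the list backwards in an accumulator, then reverses once at the end.

      ```elixir
      # Hypothetical tail-recursive version for comparison: accumulates
      # the sequence in reverse, then does one Enum.reverse at the end.
      defmodule FibTail do
        def fibonacci(n), do: fibonacci_(n, 0, 1, [])

        defp fibonacci_(0, _prev, _prevprev, acc), do: Enum.reverse(acc)

        defp fibonacci_(n, prev, prevprev, acc) do
          fibonacci_(n - 1, prev + prevprev, prev, [prev | acc])
        end
      end
      ```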

      The benchee results:

      Name                     ips        average  deviation         median         99th %
      fibonacci_body         11.72       85.31 ms    ±36.19%       79.03 ms      189.71 ms
      fibonacci              10.72       93.29 ms    ±12.31%       96.65 ms      116.30 ms

      Comparison:
      fibonacci_body         11.72
      fibonacci              10.72 - 1.09x slower
      1. 1

        Yes, I agree, it wasn’t a fair comparison. But that was kind of my conclusion at the end: if you want to solve a problem efficiently, it’s better to implement your own algorithm from scratch than to build on existing building blocks (e.g. Dave’s and Rosetta Code in this case).

        Thanks for those benchmarks, I didn’t realize the body recursive implementation would be faster. I think I may need to revisit this topic again.

      2. 3

        You should look up memoization: recording previous function calls and reusing the result when called again with the same arguments. You are doing something very similar by working incrementally on the list of results.

        1. 2

          Typical memoization isn’t something you can do with just pure functions, but it could be done with an Erlang process storing state. I wanted to keep my implementation simple, so I didn’t spawn a process to store state. If you look at Dave’s Gist that I linked to, it contains an exercise where he does use processes to make his implementation faster. Using a list is an easy way of getting the benefits of memoization with pure functions. The downside is that once the functions return, no state is retained, unlike memoization, which persists across calls.
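
          A minimal sketch of what process-backed memoization could look like (my own illustration using an Agent holding a map, not Dave’s actual exercise code):

          ```elixir
          defmodule MemoFib do
            # The Agent holds a map of n => fib(n); cached results
            # persist across calls for as long as the process lives.
            def start, do: Agent.start_link(fn -> %{0 => 0, 1 => 1} end, name: __MODULE__)

            def fib(n) do
              case Agent.get(__MODULE__, &Map.get(&1, n)) do
                nil ->
                  result = fib(n - 1) + fib(n - 2)
                  Agent.update(__MODULE__, &Map.put(&1, n, result))
                  result

                cached ->
                  cached
              end
            end
          end
          ```

          Because intermediate results are cached as the recursion unwinds, the naive double recursion becomes linear in n.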

          1. 2

            You can’t do the typical memoization by closing over mutable state, but you can memoize a pure function by making the state an explicit parameter and part of the return value, so fib(nth) -> integer becomes fib(nth, { nth: integer }) -> (integer, { nth: integer }). Now fib has a dictionary it can check for memoized values, and add values to by returning a tuple of the result plus a possibly larger dictionary. (Basically, the state monad.) This is a really lovely design in recursive code like the Fibonacci sequence.
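
            A sketch of that idea in Elixir, with the memo map threaded through explicitly (my own illustration; names are made up):

            ```elixir
            defmodule PureMemoFib do
              # fib/2 takes the memo map as a parameter and returns it
              # alongside the result, keeping the function pure.
              def fib(n, memo) when n < 2, do: {n, memo}

              def fib(n, memo) do
                case memo do
                  %{^n => cached} ->
                    {cached, memo}

                  _ ->
                    {a, memo} = fib(n - 1, memo)
                    {b, memo} = fib(n - 2, memo)
                    result = a + b
                    {result, Map.put(memo, n, result)}
                end
              end
            end
            ```

            A caller starts with an empty map, e.g. `{result, memo} = PureMemoFib.fib(30, %{})`, and can pass the returned map into later calls to keep the cache.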

            1. 2

              That’s true. I was thinking of memoization in the OO sense where the state is implicit and nothing has to be passed around.