1. 7
  2. 4

    The analysis would be more interesting if the algorithms described were designed with the same use case in mind.

    1. 2

      You should look up memoization: recording the results of previous function calls and reusing them when the function is called again with the same arguments. You are doing something very similar by building up the list of results incrementally.
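
      A minimal memoization sketch in Elixir (the module name MemoFib and the map-as-cache approach are my own illustration, not code from the thread): a map of already-computed results is threaded through the recursion, so each value is computed once and reused on later calls with the same argument.

      ```elixir
      defmodule MemoFib do
        # Entry point: seed the cache with the two base cases.
        def fib(n), do: fib(n, %{0 => 0, 1 => 1}) |> elem(0)

        # Cache hit: reuse the stored result instead of recomputing.
        defp fib(n, memo) when is_map_key(memo, n), do: {memo[n], memo}

        # Cache miss: solve both subproblems, threading the memo through,
        # then record the result before returning it.
        defp fib(n, memo) do
          {a, memo} = fib(n - 1, memo)
          {b, memo} = fib(n - 2, memo)
          result = a + b
          {result, Map.put(memo, n, result)}
        end
      end

      MemoFib.fib(30)  # => 832040
      ```

      Passing the map explicitly keeps this purely functional; a process-wide cache (ETS or an Agent) would instead let the memo survive across separate top-level calls.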

      1. 1

        It’s not really fair to run the O(n) loop for Dave’s and Rosetta Code’s versions: if you want a list, you’d want to rewrite those functions. I also think Dave’s code is slower because it’s not tail-recursive.

        But I think you can do better if you eliminate the Enum.reverse call. Erlang can build up lists efficiently in a body-recursive style, avoiding the final Enum.reverse (which IIRC wastes even more memory than time). For example,

        def fibonacci_body(n), do: fibonacci_body_(n, 0, 1)

        # Base case: the list of the first 0 Fibonacci numbers is empty.
        def fibonacci_body_(0, _prev, _prevprev), do: []

        # Body-recursive step: cons the current value directly onto the
        # recursive result, so no accumulator (and no final Enum.reverse)
        # is needed.
        def fibonacci_body_(n, prev, prevprev) do
          [prev | fibonacci_body_(n - 1, prev + prevprev, prev)]
        end
        

        The benchee results:

        Name                     ips        average  deviation         median         99th %
        fibonacci_body         11.72       85.31 ms    ±36.19%       79.03 ms      189.71 ms
        fibonacci              10.72       93.29 ms    ±12.31%       96.65 ms      116.30 ms
        
        Comparison: 
        fibonacci_body         11.72
        fibonacci              10.72 - 1.09x slower
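
        For reference, the fibonacci baseline in the benchmark is presumably the usual tail-recursive accumulator version; a sketch of what that might look like (the module name FibList and the exact clause shapes are my guesses, not the benchmarked code):

        ```elixir
        defmodule FibList do
          # Tail-recursive accumulator version: builds the list backwards,
          # then reverses once at the end -- exactly the Enum.reverse cost
          # the body-recursive variant avoids.
          def fibonacci(n), do: fibonacci(n, 0, 1, [])

          defp fibonacci(0, _prev, _next, acc), do: Enum.reverse(acc)

          defp fibonacci(n, prev, next, acc) do
            fibonacci(n - 1, next, prev + next, [prev | acc])
          end
        end

        FibList.fibonacci(8)  # => [0, 1, 1, 2, 3, 5, 8, 13]
        ```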