1. 19
  1. 8

    And profile, profile, profile when you do want to optimize, because oftentimes the slow spot isn’t where you think it is.
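
    A minimal sketch of what that measurement can look like, assuming Node.js with TypeScript; the function and data here are made up purely for illustration:

        // Time a suspected hot spot before deciding whether it needs work.
        import { performance } from "node:perf_hooks";

        function sumOfSquares(values: number[]): number {
          return values.reduce((acc, v) => acc + v * v, 0);
        }

        const data = Array.from({ length: 1_000_000 }, (_, i) => i);

        const start = performance.now();
        for (let run = 0; run < 100; run++) {
          sumOfSquares(data);
        }
        console.log(`100 runs took ${(performance.now() - start).toFixed(1)} ms`);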

    1. 3

      I agree that replacing built-in array methods probably warrants profiling, but you should also maintain constant vigilance against writing code that will obviously be slow to begin with.

      1. 7

        I think constant vigilance is falling into the same premature optimisation trap. Back when I taught CS, we had undergrads wasting their time on macro-optimisations (mostly data structure choices) and micro-optimisations (preincrement instead of postincrement and other weird tricks from C) before their code or algorithms were even correct. Most often, these optimisations had little to no benefit.

        Make the code correct. Then 99% of the time it will be fast enough anyway. For the remainder, profile it and see if there are quick and obvious improvements. If there aren’t obvious improvements you probably need to go and examine your algorithm and maybe read the literature.

        If you’re in a domain where it pays to care about performance then you’ll get to know what works well with experience.

        1. 2

          I generally think you’re right, but also that macro optimizations like data structure choices can be considered part of “correctness”. Not that you should agonize over small differences in best and worst case performance, but that you should be choosing data structures with generally appropriate performance characteristics.
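
          For example, a rough sketch in TypeScript (the collection and queries are hypothetical) of the kind of choice that falls under “generally appropriate performance characteristics”:

              // Repeated membership checks against a large collection.
              const ids: number[] = Array.from({ length: 100_000 }, (_, i) => i * 2);

              // Array scan: O(n) per lookup.
              function countHitsArray(queries: number[]): number {
                return queries.filter((q) => ids.includes(q)).length;
              }

              // Build a Set once; each lookup is then O(1) on average.
              const idSet = new Set(ids);
              function countHitsSet(queries: number[]): number {
                return queries.filter((q) => idSet.has(q)).length;
              }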

          1. 2

            I think a fair amount of accepted wisdom about data structures tends not to take processor pre-fetching or performance characteristics of the relevant language into account. Often the right data structure for the job is not obvious and if you just use vectors, sets and maps as appropriate you’ll usually be fine.

            Obviously you should think about what you’re going to write before you write it, but people get very excited about these details that I would wager make very little difference in most cases. Where it does matter you’ll pick that up in the design if you’re experienced enough or in review or evaluation if you’re not.

            E.g. if you do a lot of data science you’ll probably pick up that sparse arrays and vectors can be very useful; or if you’re searching a lot of strings you might read up on tries or DFA theory. But focussing on these issues early is usually going to be self-indulgent.

      2. 2

        I fell into this trap before. Some years ago, at my first job, I was in charge of designing and developing a Web broadcast planning tool. I replaced all the native array methods with hand-written forEach/map functions and some lodash ones.

        It was (a bit) faster. But I spent ~1 month doing it.
        The problem is, it added very little value but cost a month of development.
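
        For illustration, a hand-rolled map of the sort described (a TypeScript sketch, not the actual project code):

            // A hand-rolled map versus the native Array.prototype.map.
            function handRolledMap<T, U>(items: T[], fn: (item: T) => U): U[] {
              const result = new Array<U>(items.length);
              for (let i = 0; i < items.length; i++) {
                result[i] = fn(items[i]);
              }
              return result;
            }

            // Sometimes marginally faster on a given engine, but the native
            // call is clearer and the gap rarely matters outside a hot path.
            const doubledNative = [1, 2, 3].map((n) => n * 2);
            const doubledHandRolled = handRolledMap([1, 2, 3], (n) => n * 2);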

        1. 2

          Agree, exactly. I bet there are lots of similar stories out there.

        2. 1

          Go for the simplest implementation.

          That way, the optimizations you add during the initial implementation and the ones you add later during the optimization phase do not stack up.

          It will be easier to optimize if it is simple.

          1. 1

            I think we are missing, as a software dev construct, some classification of optimization techniques. There is a lot in between:

            • ‘do not create a file on a network drive, every time you receive web request’
            • ‘do not hand unroll your for-loops’

            The first seems like a good thing to avoid even without profiling. The second requires a ton of justification.
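
            As a sketch of the second category (hypothetical TypeScript, purely for illustration), a hand-unrolled sum next to the plain loop it replaces:

                // Plain loop: the default choice unless a profile says otherwise.
                function sumPlain(xs: number[]): number {
                  let total = 0;
                  for (let i = 0; i < xs.length; i++) total += xs[i];
                  return total;
                }

                // Hand-unrolled: four elements per iteration plus a mop-up loop.
                // Harder to read, easier to get wrong, and needs measurement to justify.
                function sumUnrolled(xs: number[]): number {
                  let total = 0;
                  let i = 0;
                  for (; i + 3 < xs.length; i += 4) {
                    total += xs[i] + xs[i + 1] + xs[i + 2] + xs[i + 3];
                  }
                  for (; i < xs.length; i++) total += xs[i];
                  return total;
                }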

            Until we have such a classification, I think it is best if we share our personal experiences and anecdotes as such, without assuming that we have found a rule or an axiom of some sort…