1. 15
  1.  

  2. [Comment removed by author]

    1. 8

      abstraction is a core concept of programming.

      is not the same thing as:

      abstraction is the core concept of programming.

      Too often someone proposes a code change that is “more abstract” without demonstrating what value it adds. Why is:

      area(R) -> 3.1415 * R * R.
      

      worse than:

      -define(PI, 3.1415).
      area(R) -> ?PI * R * R.
      

      Do you think that someone is going to change the value of PI?

      Forth programmers don’t bother; if they used a PI constant, they would miss easy opportunities to use fixed-point arithmetic:

      : area dup 5632 * * 7 / ;
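
      (If the 5632 looks magic: 5632 = 22 * 256, so R * R * 5632 / 7 is the 22/7 approximation of pi times R squared, kept as an integer scaled by 256. A rough sketch of the same fixed-point idea in Erlang, with area_fixed/1 as a purely illustrative name, would be:)

      %% Area using 22/7 for pi, scaled by 256 (8 fractional bits).
      area_fixed(R) -> R * R * 5632 div 7.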
      

      Abstraction is an important concept, but it is not our goal and should never be confused with our goal: to solve business problems efficiently and correctly. We miss opportunities and make it harder to be successful if we abstract too much. Knowing when not to abstract an interface is almost more important than knowing how to abstract it.

      1. 7

        Sandi Metz gave a related and very good talk about this. The central thrust was “a little duplication is a lot less costly than the wrong abstraction”. Having internalized that abstraction is a core programming concept, we’ve all become a bit too knee-jerk about applying it instantly everywhere we could possibly fit it, and then we pay the price in long-term maintainability when it turns out our abstraction was premature and actually ends up getting in our way.

        1. 3

          A little duplication is a lot less costly than the wrong abstraction.

          I have a feeling the OP takes DRY too seriously and ends up with insane abstractions all over the place. @dwc’s assessment is accurate: use it when you need it.

          Personally, I let repetition sit around until I find it’s too complicated and then I’ll turn it into an abstraction. It’s easier to find the useful patterns after you repeat yourself for a while.

          1. 2

            Personally, I let repetition sit around until I find it’s too complicated and then I’ll turn it into an abstraction. It’s easier to find the useful patterns after you repeat yourself for a while.

            This is sensible, but note there’s an element of taste here that makes following this advice difficult for junior developers (NB the language you used “too complicated” – how do they know if it’s too complicated?)

            I find anything that makes the program shorter as defined by source-code bytes is a better mechanism for identifying when to introduce abstraction or any other kind of functional utility. Yes, some people want to argue about lines or words or lexemes or whatnot, but I find we can usually keep that part of the discussion (and its advantages/disadvantages) separate.

            1. 2

              I find anything that makes the program shorter as defined by source-code bytes is a better mechanism…

              That’s a reasonable metric to work from and a good way to guide junior devs.

              “Too complicated” is one of those things that you get a feel for through experience. Certainly making the code smaller is a good thing, but there may be something outside of the code (like, say, debugging/monitoring) that warrants a little bit of bloat. (Maybe. I would proceed with caution but not rule it out.)

              Also, with letting the repetition sit, you tend to figure out what a decent abstraction would be after repeating it a few times. After thinking about what I wrote a bit more, I realized that’s the real value in waiting to implement an abstraction: actually seeing the pattern instead of imagining it.

        2. 9

          In the first example, pi is well known. In the second example, what’s 5632 doing there, and where did it come from? But even in the first case: that specific value of pi is fairly likely to change, because it’s dropping a lot of precision.

          Regardless, giving a value a name is roughly the same as giving it a descriptive comment. It’s documentation, not abstraction.
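
          (A tiny Erlang sketch of that point, with a made-up MAX_RETRIES value: naming it and commenting it tell the reader the same thing, and neither abstracts anything.)

          %% Named:
          -define(MAX_RETRIES, 5).
          max_retries() -> ?MAX_RETRIES.
          %% Commented:
          max_retries2() -> 5.  % maximum number of retries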

          1. 0

            Rubbish. If the value of pi is “fairly likely to change”, you’re focusing on something other than solving the business problem correctly and quickly. As a trivial and likely example: perhaps there’s a big friendly comment above it explaining that we need an estimate of the number of screens that are only 100 pixels wide. In such a case, changing the value of PI is worse than a waste of time: it makes the program slower and perhaps less correct.

            But by all means, argue with what you choose to imagine I’m saying instead of what I’m actually saying.

            1. 4

              Awww man, but I can never remember the Erlang gregorian seconds to epoch delta and I’m fed up of having to cut’n’paste it.
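
              (Though, for what it’s worth, that particular constant can be computed rather than remembered; a quick sketch in the Erlang shell:)

              %% Gregorian seconds at the Unix epoch:
              calendar:datetime_to_gregorian_seconds({{1970,1,1},{0,0,0}}).
              %% => 62167219200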

          2. 2

            I mean, you could have area(R), circum(R), and volume(R), and then decide to switch to more/less precision in PI. But I’d only introduce PI once there was more than one place, preferably 3+, that would use it.
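
            (Something like this, as a sketch, with a more precise PI purely for illustration:)

            -define(PI, 3.14159265358979).
            area(R)   -> ?PI * R * R.
            circum(R) -> 2 * ?PI * R.
            volume(R) -> 4 / 3 * ?PI * R * R * R.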

          3. 4

            I think you’ve covered this wonderfully.

            Often we want to try to remove any part of a problem that’s difficult and inevitable. Abstraction tries to address enforcing assumptions of a model while minimising the attack surface of the system’s environment. It’s a tricky solution to a hard problem, and we’ll necessarily get it wrong (there’s no such thing as a perfect set of assumptions, only a sufficient set).

            Throwing away a toolkit because it doesn’t cleanly and formally solve our problem without informal knowledge and iteration is a bit like refusing to wash dishes because they’ll be there tomorrow.

            Abstraction is not the set of mechanisms that allow you to define interfaces in a language, it’s the set of assumptions, implicit and explicit, that we try to enforce in our system.

            1. 1

              What do you consider to be the other core concepts of programming?

            2. 6

              without it, we would be writing software in byte code

              Without it, we wouldn’t be writing software – we’d be building circuits