1. 40

  2. 13

    I also sometimes fall into this trap. What has helped me is to distinguish between “I am writing a library” and “I am writing code specific to a single application”. The former invokes a mindset of reuse, generalized concepts, and generic code. The latter requires focus on the end goal and a willingness to accumulate application-specific code and data structures.

    1. 5

      I have found that this mindset / decision is crucial for all my projects in all languages. If I don’t know which it is, then my odds of stopping before I’ve produced something useful are basically 100%.

      1. 2

        Taking part in the Advent of Code using Haskell helped to cure me of this. Just write the code: abstractions can come later…

        1. 1

          Have you done any Project Euler? At least in my experience you accumulate a bunch of utilities while coding that are reusable across challenges.

          AoC is great as it’s so focused and each day’s challenge can be decoupled from the other.

        2. 2

          Makes me think about that post a few weeks ago about “The Wrong Abstraction.”

        3. 9

          Time for some nitpicking! You actually just need a Semigroup; you have no use for mempty, and it’s pointless to pad the list with mempty values, since mappend a mempty = a by the monoid laws.
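          (A minimal sketch of that identity law with plain Haskell lists, just to show why the padding is a no-op and <> alone suffices:)

          ```haskell
          -- Monoid identity law: x <> mempty == x and mempty <> x == x,
          -- so padding the shorter list with mempty before combining
          -- contributes nothing.
          main :: IO ()
          main = do
            print (([1, 2] <> mempty) :: [Int])  -- [1,2]
            print ((mempty <> [1, 2]) :: [Int])  -- [1,2]
          ```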

          1. 4

            Your comment reminded me of Data.These: since we don’t pad with mempty values, there’s a notion that “the zip of two lists will return partial values at some point”.

            And that led me to Data.Align, which has the exact function we are looking for:

            salign :: (Align f, Semigroup a) => f a -> f a -> f a 
            

            http://hackage.haskell.org/package/these-0.7.4/docs/Data-Align.html#v:salign

            (that weird notion is captured by align :: f a -> f b -> f (These a b))
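            A hand-rolled sketch of salign’s behavior on lists (the real salign lives in Data.Align and works for any Align functor; this is just to show the semantics): overlapping positions are combined with <>, and the tail of the longer list passes through unchanged.

            ```haskell
            -- Sketch of salign's semantics, specialized to lists: combine
            -- elements pairwise with <>, keep the leftovers of the longer
            -- list as-is. No mempty padding, so Semigroup is enough.
            salignList :: Semigroup a => [a] -> [a] -> [a]
            salignList xs []         = xs
            salignList [] ys         = ys
            salignList (x:xs) (y:ys) = (x <> y) : salignList xs ys

            main :: IO ()
            main = print (salignList [[1], [2]] [[10], [20], [30 :: Int]])
            -- [[1,10],[2,20],[30]]
            ```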

            1. 1

              Yeah this is exactly it. Good eye!

              1. 1

                It’s funny, because I think I poked the universe in a way that resulted in salign going into Data.Align: a year or so ago, someone mentioned malign in a Reddit r/haskell thread, I pointed out that malign only needed Semigroup, and one of the participants in the thread opened an issue requesting a malign alternative with the Semigroup constraint.

                Now I feel like a Semigroup evangelist :)

            2. 8

              This seems less Haskell-specific and more a lack-of-focus problem. I can refactor and add abstractions all day in Ruby too :)

              1. 2

                I don’t know if you know Haskell, but it is much worse in Haskell. Like 20x worse. People will try to say it’s a personal problem and that, yes, it happens in all languages, but I really don’t think it is.

                1. 4

                  I do all of my personal coding in Haskell. I don’t find the temptation to do useless extra abstractions much worse. I mean, it’s bad in every language because abstractions are fun, but one can also choose to get things done :)

              2. 5

                I’m slowly, painfully beating this over-generalization habit out of my system. It’s not easy, but I much rather think “why did I not generalize this?” when it turns out that there are other similar cases than think “OMG why did I generalize this!?” when there is only the one case.

                An easy first step is to follow the Rule of Three: do not abstract/generalize until you have seen three different instances of the same problem.

                1. 3

                  I think to put off this habit you have to realize that you’re only digging yourself into a ditch if you try to design everything before writing any code, especially if you’re not well acquainted with the problem space or with the tools you’re using.

                  Any design you might come up with would be flawed at best and completely wrong in relation to the requirements of your program at worst, so if you’re not sure how to proceed, it’s best to give up on designing everything ahead of time. The solution is to suck it up and bang out the damn code, and what needs to be done will hopefully become obvious.

                  1. 2

                    I don’t write Haskell, just Python and JS. But I can empathize with the author. I think sometimes it doesn’t matter that the thing didn’t get finished (except if it’s for work), because of the useful side-effect of learning something new. I get this when I start trying to do something with async Python or whatever new JS library is popular this week.

                    Other times I try to maintain laser focus on whether the thing works well enough. If it does, I move on. I used Flask, SQLAlchemy, SQLite, and React; I didn’t learn a thing, but I got it working in a weekend. This is really helpful for personal projects, where I have only a little time and nothing constraining my choices (no one’s mandating framework X).

                    1. 2

                      I ran into the same trap with Haskell. For me, using OCaml helped avoid it to some degree. My code might not be pure, but I find it much easier to incrementally improve my programs until I am satisfied with them.

                      1. 2

                        The lack of philosophical temptations, and of what evil tongues would call “academic wankery”, is one of the selling points of sometimes using Go or C instead of Haskell or C++, at least for me.

                        1. 1

                          I definitely find there’s a strong relationship between language complexity and bikeshedding. When you have a big language like Haskell or Scala, it’s easy to get distracted from solving the actual problem by trying to do it in the most “proper” way possible. This is also how you end up with design astronautics in enterprise Java, where people obsess over using every design pattern in the book instead of writing direct, concise code that’s going to be maintainable.

                          Nowadays I have a strong preference for simple and focused languages that use a small number of patterns that can be applied to a wide range of problems. That goes a long way in avoiding the analysis paralysis problem.