1. 15
  1.  

  2. 7

    The only reservation I have about this is if stupid code is repetitive or poorly organized.

    An example of “stupid” being repetitive is: not using templates or macros to handle many nearly identical definitions/declarations.
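
    A rough JS analogue (the original point is about C++ templates or C macros, and these accessor names are made up, so treat it only as a sketch): hand-writing each near-identical definition versus generating them from one place.

    // Repetitive version: each near-identical accessor written out by hand.
    function getTicker(el) { return el.ticker; }
    function getVal(el)    { return el.val; }
    function getSector(el) { return el.sector; }
    // "Templated" version: one factory generates the near-identical functions.
    function makeGetter(field) {
        return function (el) { return el[field]; };
    }
    var getTicker2 = makeGetter("ticker");
    var getVal2    = makeGetter("val");
    var getSector2 = makeGetter("sector");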

    An example of “stupid” being disorganized is: using module globals for your CLI program, not distinguishing between the state of main and the state of the subcommands.

    1. 4

      I disagree to an extent: oftentimes, DRYing stuff out results in settling on more brittle abstractions than just duplicating near-identical functionality. It also changes the maintenance burden and can stymie your efforts during the original development.

      An example of this would be writing out the logic for a state machine as a series of functions handling each state and returning function pointers to the next applicable state, thus allowing for a simple loop to handle running it.

      Once you’re completely finished, it’s easy to go back, point out all the redundant machinery, merge states, refactor, and make a neatly packed little gizmo. But until you’re 100% there, it’s easier and more maintainable just to have a handful of near-identically structured functions that you can tweak until they’re what you want.
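
      In JS terms, where functions are first-class (so no explicit function pointers are needed), a minimal sketch of that pattern could look like the following; the states and the little word-splitting task are made up purely for illustration:

      function makeMachine(input) {
          var ctx = { input: input, pos: 0, words: [], current: "" };
          // Each state does its bit of work and returns the next state
          // function; returning null means "halt".
          function skipSpaces() {
              while (ctx.pos < ctx.input.length && ctx.input[ctx.pos] === " ") ctx.pos++;
              return ctx.pos < ctx.input.length ? readWord : null;
          }
          function readWord() {
              while (ctx.pos < ctx.input.length && ctx.input[ctx.pos] !== " ") {
                  ctx.current += ctx.input[ctx.pos++];
              }
              ctx.words.push(ctx.current);
              ctx.current = "";
              return skipSpaces;
          }
          // The simple loop that runs the machine.
          var state = skipSpaces;
          while (state !== null) {
              state = state();
          }
          return ctx.words;
      }
      // makeMachine("tweak each state independently") -> ["tweak", "each", "state", "independently"]

      Each state function is near-identically structured, so tweaking one state doesn’t disturb the others, which is exactly the property being argued for here.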

      1. 2

        I agree with you. I think an intermediate case is what deriving does in Rust and Haskell. It sure is nice to have all those “obvious” instances/impls written for you, and since you don’t write them by hand, you can’t flip the sign or do other stupid stuff. The alternative to polymorphic equality or ordering is lots of little functions all over the place, or casts, so it’s not necessarily clearer or more efficient. On the other hand, it is code you never see, and it is not perfect, so sometimes you have to write the instance yourself.

    2. 5

      (I suggest removing the “- Thorsten Ball” bit from the title.)

      I like the writeup, but more concrete examples would help. So, towards that end, I’ll point out something from JS:

      Consider the task of looping over a list, transforming elements, removing elements that don’t match a particular criterion, and splatting the remainder into a map of some variety.

      The “naive” (in the author’s parlance, “stupid”) approach would be something like:

      //    List of form [ { ticker: "company name", val: 42 }, ...]
      function doList( list ) {
          var xformedList = list.map( function increaseShareholderValue(el) {
              return { ticker: el.ticker, val: el.val+34 };
          });
          var filteredList = xformedList.filter( function divestBadCompanies(el) {
              return el.val % 2 != 0;
          });
          var companyMap = {};
          filteredList.forEach( function goPublic(el){
              companyMap[el.ticker] = el.val;
          });
          return companyMap;
      }
      

      Now, there are good things about this approach:

      • It is painfully (even excruciatingly) obvious what it does.
      • It is understandable by anybody who knows basic ES5.
      • It is easy to pluck out and change any steps of the transform.
      • You can inspect the value at each step using breakpoints very easily.

      But, there are problems:

      • It’s slow. It updates each element only to throw away half of them later.
      • There’s function call overhead for every element, at every step.
      • Using a plain object literal as the map can mean weird things happen if a key ever collides with something inherited from Object.prototype.
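
      To make that last point concrete (my illustration, using a deliberately nasty key):

      // A plain {} inherits from Object.prototype, so certain keys behave oddly:
      var m = {};
      console.log(m["constructor"]);      // a function (inherited), not undefined
      console.log("constructor" in m);    // true, even though we never set it
      // Object.create(null) has no prototype, so it behaves like a real map:
      var m2 = Object.create(null);
      console.log(m2["constructor"]);     // undefined
      console.log("constructor" in m2);   // false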

      The optimized approach is going to look something like:

      function doListBetter( list ) {
          var companyMap = Object.create(null);
          for( var i = 0; i < list.length; i++ ) {
              var el = list[i];
              if ( el.val % 2 != 0) {
                  companyMap[el.ticker] = el.val + 34;
              }
          }
          return companyMap;
      }
      

      This approach is good, because:

      • It avoids function call overhead.
      • It processes the list only once.
      • It avoids a class of problems involving object literals.

      It’s bad, though because:

      • It’s easy to do a dumb thing with the for construct (an off-by-one, or a closure capturing the loop variable) and have it not behave as expected; see the sketch after this list.
      • As you increase the complexity of the filtering criteria, that test may no longer be runnable element-by-element like that (what if it suddenly requires the rest of the list, etc.).
      • As you increase complexity of the transform and storage, the logic starts to smear out and get harder to follow.
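
      On the first point, one classic way the for construct bites you (my example, not the author’s) is capturing a var-scoped loop index in a callback:

      function scheduleAll(list) {
          for (var i = 0; i < list.length; i++) {
              // `var i` is function-scoped, so every callback sees the final
              // value of i (list.length) and logs undefined, not each element.
              setTimeout(function () {
                  console.log(list[i]);
              }, 0);
          }
      }
      // ES5 fix: wrap the body in an IIFE that captures i; in ES6, just use `let i`.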

      So, even though the optimal version is faster, the maintainability stuff gets a lot, lot worse.

      Write the naive approach first, profile it when needed, and then fix it.

      1. 4

        Author here. Your comment is really interesting to me. Why? Because since I wrote the post I’ve thought a lot about including an example, and that example would look pretty much like the one you presented.

        In the past a few people said to me: “Oh, I know what you’re saying! Just use a for loop instead of map/forEach/reduce!” But what I meant comes much closer to the approach in your comment.

        1. 4

          Thank you! Feel free to use it with attribution.

      2. 5

        This was/is a requirement in high-assurance systems and security. The idea is that language features or modules that exceed a certain complexity are too hard to understand by humans or machines.

        1. 4

          The idea is that language features or modules that exceed a certain complexity are too hard to understand by humans or machines.

          I assume with C this would be certain kinds of pointer operations. With C++, are templates a feature that is nominally banned?

          1. 4

            That’s correct. Pointers are banned entirely in favor of call-by-value or references if possible. That was done in the high-assurance GEMSOS kernel, albeit in Pascal. Often you can’t do this, so various rules and checks on the pointers are used instead. The general approach for C or C++ is subsetting [1] the language down to the features that are simplest to analyze and have the least unpredictable behavior. Subsets include Power of 10 [2], MISRA-C/C++, High Integrity C++, and so on. Many tools can automatically spot non-conformance with things like MISRA-C/C++ or HIC++. Tools like Astree Analyzer are able to statically prove the absence of common vulnerabilities/errors in subsets of C. SPARK [3] does that for Ada, with the addition of formal semantics and specs to help automated provers do their job effectively.

            As far as C++ goes, its coding standards are more speculative, as it’s quite a complex language. I couldn’t even find a review of HIC++. I personally think templates shouldn’t be used, since they probably hurt program analysis. C- or LISP-style macros can be turned into regular code before running through an analysis pass. The only exception I’d make is if the template behavior is obvious to the human eye and the instantiations are checked in expanded form by hand; kind of a convenience or code-compacting method. I’ve gone further and discouraged use of C++ over C entirely, since a C subset + decent libraries + C’s tons of verification tools (esp. ones enforcing good style) are stronger on safety than just C++ and its libraries. Probably… A NASA lead used C over the alternatives for the same reason. C will continue to increase that lead, too, although C++’s modernization has been impressive.

            [1] http://vita.mil-embedded.com/articles/when-programming-language-technology-safety/

            [2] http://spinroot.com/gerard/pdf/P10.pdf

            [3] https://en.wikipedia.org/wiki/SPARK_(programming_language)

        2. 4

          Keep It Stupid, Stupid.

          1. 3

            As I like to put it: don’t make me think. I’m not good at it, and I’ll probably fuck it up.

            1. 3

              Stupid code is simple code. And simple code is more maintainable. This reminds me of the still-relevant “Code Simplicity” by Max Kanat-Alexander http://neverworkintheory.org/2012/05/03/a-review-of-code-simplicity.html

              1. 2

                My rule is this: at home you’re free to explore and expand your knowledge. At work your number one job is to write maintainable code, because over the long term your code costs more in maintenance than in any other aspect. It’s also nice if your code is bug-free and efficient, but over the lifetime of the code that’s probably less important than the cost to maintain it.

                What this means to me is that professionally written code must be simple and clear above all else.

                1. 1

                  This is one of the core principles of the Go community, and I think one of the reasons it is so successful. It was really refreshing, after only about a week with the language, to be able to dive into almost any Go code base, including the standard library, and clearly understand what the code was doing.

                  1. 1

                    Is simplicity/complexity/stupidity of code something objective? How does one define it?

                    I know there are measures like LOC or cyclomatic complexity. But these things don’t seem like complexity itself, rather symptoms people notice when observing codebases they experience as complex.

                    I wonder if the “complexity” of a piece of code might not be dependent on the community of programmers reviewing the piece of code.

                    For example, cyclomatic complexity can be reduced in some cases by using interfaces. But many programmers will experience a codebase with lots of interfaces as “complex”.
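
                    A tiny JS sketch of that trade-off (the shapes are made up): the first version keeps the branches in one place, the second hides them behind an “interface”, i.e. objects that all expose area().

                    // Branchy version: the cyclomatic complexity sits in the if/else chain.
                    function area(shape) {
                        if (shape.kind === "circle") return Math.PI * shape.r * shape.r;
                        if (shape.kind === "rect") return shape.w * shape.h;
                        throw new Error("unknown shape: " + shape.kind);
                    }
                    // "Interface" version: each shape computes its own area, so area2()
                    // has no branches, but the logic is now spread across the objects.
                    function circle(r)  { return { area: function () { return Math.PI * r * r; } }; }
                    function rect(w, h) { return { area: function () { return w * h; } }; }
                    function area2(shape) { return shape.area(); }

                    The metric goes down in the second version, but whether it reads as simpler depends a lot on who is reading it.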

                    1. 1

                      I think you’ve hit the nail on the head. It’s really, really hard to find common ground here, especially if you’re coming from different backgrounds, programming languages, and communities.

                      That being said, I believe there is something we can all agree on as being the “stupid”, “more obvious”, “duh, of course” version, compared to something more “clever” and “complex”. But maybe we can only decide that on a case-by-case basis and never derive a general rule. I tried doing the latter, but I think it’s still too vague.