1. 5

  2. 20

    Interestingly, you can replace the technical content of this post with almost anything related to good human behavior. That’s because at its core, the post is arguing against deontological ethics (the idea that whether something is good is determined by whether it adheres to a specific set of rules). However, I find the argument against the terms “good” and “bad” misplaced.

    As an alternative, you might try viewing things through the lens of virtue ethics, which argues that “good” and “bad” are useful terms, but are always relative to some particular goal. In human behavior overall that goal might be called “human flourishing.” And in the case of code, that would be something like “achieving its organizational purpose without causing excess detriment.” Or more concretely, “taking online orders with an acceptable failure rate.”

    Edit: The virtue ethics view is thus that any time you call something “good,” it means good for some particular thing, even if we aren’t always explicit about what that thing is.

    1. 3

      Thank you! Another entry for my “why programmers should study humanities” list.

    2. 13

      Clickbait title.

      Also, there is bad code. Can we stop with moral relativism in a field where we can actually measure how shitty code is? Where we all have direct personal experience with mudballs (often of our own creation)? Where we’ve all had to pay the price for badly-written code that badly performs while doing a bad thing badly?

      I’d rather read a “Your code is probably bad and you should probably feel bad. Here’s how to write less bad code.”

      (And before you ask, yes, all code is varying degrees of bad.)

      1. 6

        There are plenty of non-programmers who write “bad” code that created something useful for them. Telling them they should feel bad is the attitude I don’t like. Certainly you can teach them some of the skills you have that they don’t, but that means explaining why they should do better… and that ends up being tied to specific circumstances. You’re not going to tell someone writing Excel macros about formal verification methods: they don’t care and they don’t have time.

        Software is just a tool. If the tool succeeds at its purpose, then it’s a useful tool.

        1. 2

          non-programmers who write “bad” code that created something useful for them. Telling them they should feel bad is the attitude I don’t like

          No one suggested doing that?

          Code that does something useful can still be bad code, and its badness can even turn out to be costly to whoever wrote it, no matter how adorable that person is.

          You’re not going to tell someone writing Excel macros about formal verification methods

          No one has suggested that either. I don’t feel like looking it up to confirm, but this smells like strawmanning.

          Bad code does actually exist. Whether it achieves something useful in the real world is irrelevant to that.

      2. 5

        What a goofy article. It’s essentially just a bunch of straw man arguments attempting to defend crappy coding. Most people aren’t writing code for an obfuscated coding contest or working in an industry without established best practices.

        Just because NASA’s “best practices” aren’t the same as a small-time web developer’s “best practices” doesn’t mean that no “best practices” exist. It means a competent developer (or team) needs to understand their particular situation and decide for themselves the relevant “best practices” to use.

        If the value your code provided to your organization is sufficiently higher than the cost of maintenance, it can’t really be said to be “bad” code.

        That’s the only semi-reasonable argument in the whole article, and it doesn’t hold up because more readable, “better” code would decrease maintenance costs, making it an even better value for the company.

        1. 2

          Well, that last part is just one dimension of bad. It could also be that the code is aesthetically bad, not idiomatic, inefficient, ill-organized, or a plethora of other flavors of bad. Now, you might want to optimize for value and not care about all the other ways code can be bad, and that is perfectly okay. But if you decide to make all variables have people names instead of something meaningful, I’m gonna call the code pure crap no matter how valuable it is. “Bad” is a complex thing, and oftentimes a meaningless one unless qualified.
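
          The naming complaint can be made concrete with a contrived sketch (both functions here are invented for illustration):

          ```typescript
          // Variables named after people: the arithmetic is correct, but opaque.
          function total(alice: number, bob: number, carol: number): number {
            return alice * bob - carol;
          }

          // The same logic with meaningful names is self-documenting.
          function totalPrice(unitPrice: number, quantity: number, discount: number): number {
            return unitPrice * quantity - discount;
          }
          ```

          Both compute the same value; only one tells the reader what that value means.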

        2. 4

          This is another post that seems more like an attempt to attain Guru Status than to actually be good. For example, the post claims it will prove to you that there is no such thing as bad code. In the “Hard-to-read” section it lists three situations where hard-to-read code might be fine. Okay. But what if you write hard-to-read code that, for instance, costs more to maintain than the value it offers? How is that not “bad code”?

          This whole post is a list of situations where lower-quality code might be acceptable, but that hardly means bad code does not exist.

          1. 4

            But what if you write hard-to-read code that, for instance, costs more to maintain than the value it offers? How is that not “bad code”?

            The argument seems to be “if it works, it’s not bad.” Having worked with code that “achieves its goals” but is nearly impossible to maintain, I don’t buy that.

            I’m of the mind that you can tell people that their code is not good without being an asshole about it, so I don’t find the guru approach of the OP’s tone convincing.

          2. 3

            Both NASA’s techniques and formal verification lead to far fewer defects. So should they be best practices?

            Yes please!

            It depends: if you’re building the website for a local chain of restaurants, using these techniques is ridiculous.

            That’s just moving the meaning of “best practice” up one level. Instead of saying “best practice is to use formal verification”, we say “best practice is to use formal verification if you have nontrivial business logic.”

            1. 2

              He mentions tests and heavy methods of formal verification, but it’s not binary. One of the earliest tricks in formal methods was using languages built to improve safety that were easy to analyze with automated tooling. The most widely deployed version of that was Java, with its huge ecosystem of tools for testing, static analysis, concurrency analysis, and so on. However, for a web app for restaurants, one might instead extend the language to mitigate common web issues safely by default, as languages like Opa and Ur did.

              Opa at the least was also easy to use for rapid development. Another concept might be bolting capabilities similar to Ur or Opa onto Haskell web frameworks, which I think already have some type-safe constructions. So the restaurant app is developed quickly, suffers from fewer errors and little to no zero-days, and scales up. All because someone was looking for what assurance methods they could apply within the constraints.
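
              A minimal sketch of that “safe by default” idea, written in TypeScript rather than Opa’s actual API (escapeHtml and renderComment are invented names):

              ```typescript
              // Escape the characters HTML treats specially, so interpolated
              // user input cannot introduce markup. Ampersands go first so the
              // escapes themselves are not double-encoded.
              function escapeHtml(s: string): string {
                return s
                  .replace(/&/g, "&amp;")
                  .replace(/</g, "&lt;")
                  .replace(/>/g, "&gt;")
                  .replace(/"/g, "&quot;");
              }

              // A template helper that escapes every interpolated value by
              // default: injection would require an explicit opt-out, not an
              // accidental omission.
              function renderComment(author: string, body: string): string {
                return `<p><b>${escapeHtml(author)}</b>: ${escapeHtml(body)}</p>`;
              }
              ```

              With this, renderComment("mallory", "&lt;script&gt;alert(1)&lt;/script&gt;") yields inert text rather than a live script tag. Languages like Opa push this further by making the safe path the only convenient one.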

            2. 3

              There is definitely bad code. Just because that’s all some customers can afford, doesn’t make it good, any more than being low-cost makes a 10-year-old Hyundai a good car.

              1. 2

                Whilst there certainly is “bad code” out there, I dislike terms like “bad code” when they’re used in an unhelpful or dismissive way. The most common version seems to be “you’re doing X wrong” or “that’s not proper X” or “this doesn’t follow X best practices”.

                If someone’s asking for help with some code, e.g. on StackOverflow, then such comments are rarely useful on their own, or with an obligatory link to e.g. a TDD book or the Wikipedia page on SOLID or Category Theory for Programmers or whatever. There are two reasons for this:

                The first reason is it’s completely non-specific. Far better to suggest a particular, concrete change, explain why you think it’s an improvement, and maybe relate that to some particular practice or style. For example:

                I think the problem is due to quote marks in the strings, but it’s hard to check due to all the database stuff that’s going on around it. I’d suggest you move all of the string manipulation into a new function like fooHelper, and have foo call that function with the query result. That makes it easy to check the logic by running fooHelper from the REPL. You could also write automated tests to check how it works (i.e. a script which tries some examples and compares the output to what you expect; there are libraries to help with this, like fooUnit). By the way, functions which only transform input to output (like fooHelper) are called “pure”, and it can make things like testing easier if you do your calculations/transformations in pure functions, separate to your “effects” (like database access).
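
                The suggested split might look like the following sketch (foo, fooHelper, and the query are the hypothetical names from the example above):

                ```typescript
                // The pure part: transforms query results into output, with no
                // I/O. Easy to poke at from a REPL or cover with automated tests.
                function fooHelper(rows: string[]): string {
                  return rows.map((name) => name.trim()).join(", ");
                }

                // The effectful part: talks to the database, then delegates the
                // string manipulation to fooHelper.
                interface Db {
                  query(sql: string): string[];
                }

                function foo(db: Db): string {
                  const rows = db.query("SELECT name FROM users");
                  return fooHelper(rows);
                }
                ```

                A fake Db object is then enough to exercise foo itself, without a real database.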

                The second reason is that “perfect is the enemy of good”. I’d claim that all best practices either require judgement calls, which can be argued over, or they “shouldn’t be taken too far” (which, of course, is a judgement call!). Arguing over those judgement calls is fine, e.g. whether a given class has a Single Responsibility due to XYZ, but if we pretend those judgement calls don’t exist, and e.g. shout “Single Responsibility!” without clarification, then we can keep moving the goalposts without ever having to face the reality that some “best practices” can sometimes be inappropriate.

                I think lots of flame wars end up being unproductive precisely because they argue about principles in the abstract, like this second point. For example, arguing whether dynamic types with a perfect test suite are better or worse than static types rigorously proving a perfect specification, with all else (e.g. dev time, available talent pool, etc.) being equal. The correct answer is, of course, that these are unrealistic hypotheticals, and any serious discussion needs to take imperfections and tradeoffs into account, and hence needs to be more specific and constrained.