1. 111

  2. 16

    This isn’t unique to the developer world, but it certainly makes writing technical blog posts exhausting! It is so easy to get caught up trying to tie down every loose end in your writing to avoid misinterpretation. It’s probably better just to ignore those kinds of responses.

    1. 12

      Also: people don’t actually read what you’ve written, but will comment anyway. So if you’re not super careful with your title and first paragraph or two, you’ll keep receiving “criticism” that’s already addressed in the article itself, or people will seize on one thing you’ve said and argue heavily against it, even though more nuance is added later on.

      The worst was when I wrote “Why is no one signing their emails?”; the article is about institutions like Amazon, your bank, etc. not signing their emails so it’s easier to identify phishing and such. Many parts of PGP/GPG are quite hard, but verifying signatures is quite easy. Of course, many people responded to the title only, with commentary about how hard it is to use PGP to sign stuff. Okay … don’t disagree, but … that wasn’t what the article was about 🤷‍♂️
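
      Tangentially, the reason verifying is the easy half can be sketched with textbook RSA: producing a signature needs the private exponent, while checking one needs only the public key that anyone can hold. This is a toy illustration with tiny made-up numbers and messages, not how GPG actually works, and not remotely secure:

```python
# Toy RSA signing/verifying (illustrative ONLY: tiny key, no padding,
# not real PGP). All numbers and messages here are made up.
import hashlib

p, q = 61, 53                        # textbook-sized primes
n = p * q                            # public modulus (3233)
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent, kept secret

def sign(message: bytes) -> int:
    """Only the private-key holder can produce this."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone with the public key (n, e) can check it."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

msg = b"Your parcel has shipped"
sig = sign(msg)
print(verify(msg, sig))           # True
print(verify(b"phishing!", sig))  # an altered message fails the check
```

      Real PGP uses much larger keys and proper padding; the point is only that the checking side is cheap for the recipient.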

      As a result of all of this, I’ve become a much more careful reader before commenting myself, or simply abstaining from commenting 😅

    2. 9

      Related but not the same are thought-terminating clichés, or at least debate-terminating clichés: “Pick the best tool for the job.” OK… and? What does that statement add, precisely? Are we going to have a debate about what the best tool is for any specific job? Somehow, the people who use that statement never seem to.

      Statements like that one, which are “wise” without conveying any wisdom (“Motherhood Statements” as they’re called), are similar to the pseudo-rebuttals the blog post is talking about in that they end the discussion, or try to: If you counter the pseudo-rebuttal, you end up in a rabbit hole of arguing relevance, as opposed to the rabbit hole of arguing about whatever you were arguing about to begin with. If you counter the “wise” statement, you’re arguing against wisdom, which never seems to reflect well on you.

      1. 7

        Related but not the same are thought-terminating clichés, or at least debate-terminating clichés: “Pick the best tool for the job.” OK… and? What does that statement add, precisely? Are we going to have a debate about what the best tool is for any specific job? Somehow, the people who use that statement never seem to.

        I actually really like that cliche because I enjoy discussing how we determine what the right tool for the job is. Sadly that’s not what people use it for. As you say, it’s often a way to shut down discussion.

        1. 4

          Stuff like that is a “discussion stopper”; I’ve not seen that term being used in English that often, but it’s more common in Dutch. It’s annoying because a lot of the time no one can really disagree with it, but it also doesn’t advance the debate: it stops it.

          The worst I’ve seen was a former coworker who would simply state “I strongly disagree” in any discussion. Well, okay… now what?

          1. 2

            Stuff like that is a “discussion stopper”; I’ve not seen that term being used in English that often

            I see it used fairly often. Another one is “conversation ender.” Useful concept by whatever name you use.

            1. 3

              Oh, well, guess I haven’t been paying enough attention 😅

            2. 2

              It depends.

          2. 9

            One reason this seems to happen is context collapse: I may be writing on my blog or a mailing list with a known regular audience, but the Internet is public by default. As soon as my writing gets linked to on some other discussion site, the commenters probably don’t have the same background and implicit assumptions… and they miss the point.

            As a random example: on some discussion sites, I skip the comments on any kind of scientific computing or supercomputing links, because I find several of the likely comment threads tiresome: “ugh, Fortran”, “why can’t scientists write better code”, “why didn’t they run in AWS”, etc. That’s not to say those are even (necessarily) bad conversations… but they are unlikely to be useful to either the author or the intended audience of most of those links.

            Almost everyone writes with some audience in mind, and that audience is probably smaller than “the entire Internet”. But the Internet and commenting culture seem to award points for pulling things out of context and criticizing them.

            1. 12

              I’ve been meaning to write something similar for a while now, but to be honest this is much better phrased than I could have managed 😅 Saves me from writing it, so cheers!

              As a response to “what could be the cause of this”, I think perhaps one factor might be that some programmers (although far from all; I certainly don’t want to generalize!) are the “nerdy” type who don’t do so well in social situations, may have been bullied at school, and so forth, resulting in a kind of “unrecognized genius” complex, for lack of a better term.

              But what they are is smart, and this is one of the few areas where they can really assert their worth and get some leverage; this, combined with a lack of self-esteem and a desire to prove themselves, not just to others but also to themselves, leads to this kind of behaviour, at least in some cases.

              I don’t think this is a grand theory which explains everything, and to be honest I’m just reasoning from personal experience here, because that pretty much describes a younger version of myself (although I’m really not that smart). When I was younger I was the “but actually [..]” type of person. Reflecting on this now, it’s patently obvious to me I was a bit of a twat at times, but at the time I had a desperate, burning desire to prove myself and assert my worth, both to myself and to others. Now that I’m a bit older, hopefully a bit wiser, and also more secure in myself, it’s clear that this was silly and completely unnecessary, but that was not how it felt at the time.

              That said, different people probably exhibit the same behaviour for completely different reasons, so I don’t know if this describes 0.1%, 1%, 10%, or 50% of the cases. All I know for sure is that it describes my case (and even that I’m not 100% sure about, to be honest, as it’s hard to really examine the source of your own behaviour) 😅

              1. 5

                But what they are is smart

                They believe they’re smart, because when you’re young, being “smart” is measured by knowing the Right Answer.

                The problem with programming (and really with anything philosophical, with apologies to Russell) is that there is both a religious Right and a scientific (or objective) Right, and it’s the former that young programmers argue over on the Internet, because none of them have done any serious science about Programming.

              2. 3

                Is there an xkcd for this phenomenon?

                1. 2

                  https://xkcd.com/1731/

                  edit: this actually isn’t the same phenomenon – my bad

                  1. 1

                    Yet I see why you posted it. It seems to spring from the same need to be right / smart / etc.

                2. 1

                  Regarding the security point specifically: it would be nice if there were more rigorous frameworks for reasoning about such things.

                  E.g. “How do I take into account, model, and parameterize all possible attack vectors, from the NSA down to random hackers?”
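
                  As a very rough sketch of what such a framework might start from (every name here is hypothetical), simply enumerating attacker classes against system components, STRIDE-style, turns “did we consider X?” into a checklist question:

```python
# Hypothetical sketch: cross system components with attacker classes and
# STRIDE threat categories, producing a matrix of cases to explicitly
# accept or mitigate. All component and attacker names are made up.
from itertools import product

components = ["login form", "session store", "backup server"]
attackers = ["random hacker", "nation-state", "malicious insider"]
stride = ["Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service",
          "Elevation of privilege"]

matrix = list(product(components, attackers, stride))
print(len(matrix))  # 3 components * 3 attackers * 6 threats = 54 cases
for component, attacker, threat in matrix[:2]:
    print(f"{threat} of {component} by {attacker}: unreviewed")
```

                  A real threat model adds likelihoods and mitigations on top, but even the bare enumeration makes the “what about X?” debate bounded.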

                  1. 3

                    There are books on threat modeling that actually go into a variety of models.

                  2. 1

                    A beautiful illustration of straw men in computing. Couldn’t have written it better.

                    Regarding the point about how school teaches people to be factually right, and therefore encourages straw-man arguments: this is another reason experiential learning is such a boon over pure lecture-based learning! Having students learn like professionals do makes being relevant much more… relevant.

                    1. 1

                      Those statements seem to be contrarianism of the basic form that highlights the bad aspect of things that are for the most part good. https://www.lesswrong.com/posts/9kcTNWopvXFncXgPy/intellectual-hipsters-and-meta-contrarianism

                      And yes, they are factually correct.

                      1. 0

                        Hedging your writing just leads to boring prose imho. The “not all foo can bar” guys can safely be ignored.

                        Everyone who is considered a great communicator is also a very effective information compressor who knows that the right decompressors exist out there. There are also a lot of wrong decompressors but they don’t care about them. Few great communicators hedge their statements.

                        1. -2

                          This is again true. If every single developer on a code base is being 100% focused and 100% careful 100% of the time, then bug-free code is possible. Reality has shown time and time again that it is not possible; human beings are simply not capable of operating flawlessly for extended periods of time.

                          Tell that to Robert C. Martin :)

                          1. 4

                            As I understand it, Martin’s position is exactly that developers are prone to making mistakes (as it’s in the nature of the job), so you need certain processes and techniques to catch these mistakes (like TDD) to ensure reliability.

                            I don’t especially agree with Martin on a number of points, but unless I’m misinterpreting your comment, I’m not sure if your comment is an accurate reflection of Martin’s views.

                            1. 3

                              Martin’s position is that if there are defects, then it is the programmer’s fault, and that programmers should just be more disciplined/professional/rigorous, or else they should — and I’m not making this up — quit their jobs and never write code again.

                              Now, ask yourself why these defects happen too often. If your answer is that our languages don’t prevent them, then I strongly suggest that you quit your job and never think about being a programmer again; because defects are never the fault of our languages. Defects are the fault of programmers. It is programmers who create defects – not languages.

                              And what is it that programmers are supposed to do to prevent defects? I’ll give you one guess. Here are some hints. It’s a verb. It starts with a “T”. Yeah. You got it. TEST!

                              You test that your system does not emit unexpected nulls. You test that your system handles nulls at it’s inputs. You test that every exception you can throw is caught somewhere.

                              Why are these languages adopting all these features? Because programmers are not testing their code. And because programmers are not testing their code, we now have languages that force us to put the word open in front of every class we want to derive from. We now have languages that force us to adorn every function, all the way up the calling tree, with try!. We now have languages that are so constraining, and so over-specified, that you have to design the whole system up front before you can code any of it.

                              https://blog.cleancoder.com/uncle-bob/2017/01/11/TheDarkPath.html

                              Relying on the human to write the trivial tests that a smarter technology provides for you, 100% of the time, automatically, for free, is not a wise strategy for business. Perhaps nobody told Martin that people write tests even when they’re working with types.
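
                              For what it’s worth, the two aren’t mutually exclusive in practice. A small sketch (all names invented) of how they complement each other:

```python
# Sketch (hypothetical function): the Optional type makes the "unexpected
# null" case impossible to forget, while tests cover the behaviour that
# no type checker can verify.
from typing import Optional

def parse_port(raw: Optional[str]) -> int:
    if raw is None:     # the type signature forces this branch to exist
        return 8080     # assumed default, purely for the example
    port = int(raw)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

# tests still earn their keep on logic the types cannot express
assert parse_port(None) == 8080
assert parse_port("443") == 443
try:
    parse_port("70000")
except ValueError:
    print("range check caught")  # behaviour types alone would not catch
```

                              The type checker proves the null handling mechanically and for free; the tests cover the range logic. Neither replaces the other.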

                              1. 6

                                As I read that, his position is more akin to “if you don’t follow certain procedures when programming and just handwave defects away with ‘it’s the language’s fault’, you should not be programming”, not that people who write defects while following the established procedures should not be programming.

                                For example the last paragraph from that article:

                                Why did the nuclear plant at Chernobyl catch fire, melt down, destroy a small city, and leave a large area uninhabitable? They overrode all the safeties. So don’t depend on safeties to prevent catastrophes. Instead, you’d better get used to writing lots and lots of tests, no matter what language you are using!

                                He’s not blaming the Chernobyl engineers for making mistakes as such, but he’s blaming them for not following proper procedures and overriding safeties. This is an important difference, I think. In programming, he’s not blaming people for making mistakes and writing defects: he’s blaming them for not following procedures and using proper safeties (specifically: testing).

                                While I think he has some interesting points and perspectives, I don’t fully agree with all of that so I don’t really want to “defend” his views. But my reading of the article (combined with some other things I’ve read and seen from him) is that his positions are rather more nuanced than your summarisation (although I don’t think he’s always very good at conveying that nuance).

                                1. 2

                                  What does ‘overrode safeties’ mean in this context? Aren’t tests also part of the safety infrastructure? If your engineering team has a culture of disabling safeties in general, they will also have a habit of disabling or crippling tests, which is exactly what happened in Chernobyl: https://en.wikipedia.org/wiki/Chernobyl_disaster#Test_delay_and_shift_change

                                  The system would have no influence on the events that unfolded next, but allowing to run the reactor for 11 hours outside of the test without emergency protection indicated a generally low level of safety culture.

                                  Nowadays teams have a lot of options when it comes to enforcing safeties … code reviews, branch merge rules, push rules, code quality checkers, etc. If the team has a culture of systematically disabling all of these, then tests won’t save them.

                            2. 4

                              The best thing that could happen to software engineering this decade would be “Uncle Bob’s” retirement, and I mean this sincerely, not as a snark.

                              1. 1

                                I would argue that this is completely wrong, mainly because it doesn’t take into account knowledge and skills.

                                1. 5

                                  An abundance of knowledge and skill does not make one infallible.

                                  We’re only human; let’s not set unreasonable standards upon ourselves.

                                  1. 1

                                    Yes, that was my point. I wasn’t explicit enough about the conclusion.

                                2. 1

                                  The correct takeaway would be that no human is capable of writing correct code. Martin would do well to remember that they, too, are a soft and fallible meatbag, just like the rest of us.