1. 13

  2. 6

    This is a long article that essentially repeats the claim without evidence or solutions. The claim is false for basic testing but true for sophisticated testing. Let me illustrate by comparing the coding and testing steps:

    Coding: There’s a spec in their head of what the code is supposed to do. It usually has an input, may produce side effects, and may have an output. They write a series of steps in some functions to do that. They then run it on some input to see what it does. They’re already testing.

    Testing: Typing both correct and incorrect inputs into the same function to see if it behaves according to the mental spec. This can be as simple as tweaking the variables in a range or just feeding random data in (i.e. fuzzing). It takes less thought and effort than the coding above.
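
    To make that concrete, here’s a minimal sketch in Python. The parse_age function and its 0–150 spec are made up for illustration; the “testing” is nothing more than running what you just wrote over correct, incorrect, and random inputs and comparing the behavior to the spec in your head.

    ```python
    import random
    import string

    def parse_age(text):
        # Made-up function under test. The spec in your head: returns an int
        # between 0 and 150, or raises ValueError.
        age = int(text)
        if not 0 <= age <= 150:
            raise ValueError(f"age out of range: {age}")
        return age

    # Correct and incorrect inputs straight from the mental spec.
    for sample in ["0", "42", "150", "-1", "151", "abc", ""]:
        try:
            print(sample, "->", parse_age(sample))
        except ValueError as e:
            print(sample, "-> rejected:", e)

    # Feeding random data in, i.e. crude fuzzing.
    for _ in range(5):
        junk = "".join(random.choices(string.printable, k=8))
        try:
            parse_age(junk)
        except ValueError:
            pass  # rejection is fine; anything else would be a surprise
    ```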

    So, the claim starts out false. The mechanics of testing are already built into the run-it-and-see part of the coding phase. The rest is slight tweaks on that. The basic testing that will knock out tons of problems in FOSS or proprietary apps takes no special skill past willingness to do the testing. Now let’s look at a simple example of where testing might take extra knowledge or effort.

    https://casd.wordpress.ncsu.edu/files/2016/10/kuhn-casd-161026-final.pdf

    So, the government did some research. They re-discovered a rule that Hamilton wrote about for the Apollo program: most failures are interface errors between two or more interacting components, where the interfacing uses the components in ways they weren’t intended to be used. The new discovery, combinatorial testing, was that you can treat these combinations of interfaces as sets to test directly in various random or exhaustive combinations. Just testing all 2-way interactions can knock out 96% of faults per the empirical data. Virtually all faults drop off before the 6-way point.
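
    Here’s a rough sketch of what the 2-way idea looks like, with a made-up render_page function and parameter sets. A real covering-array generator would pack these pairs into far fewer test cases; the naive loops below just show what “exercise every pair of interacting values” means.

    ```python
    # Naive sketch of 2-way (pairwise) interaction testing on made-up inputs.
    from itertools import combinations, product

    params = {
        "browser": ["firefox", "chrome", "safari"],
        "os":      ["linux", "windows", "macos"],
        "locale":  ["en_US", "de_DE"],
    }

    def render_page(browser, os, locale):
        return f"{browser}/{os}/{locale}"  # stand-in for the system under test

    defaults = {name: values[0] for name, values in params.items()}

    # Every value combination of every *pair* of parameters, holding the rest
    # at defaults.
    for p1, p2 in combinations(params, 2):
        for v1, v2 in product(params[p1], params[p2]):
            args = dict(defaults, **{p1: v1, p2: v2})
            assert render_page(**args) is not None, f"failed on {args}"
    ```

    Holding the other parameters at defaults keeps the sketch simple; the published tools interleave pairs so one test case covers many pairs at once, which is how they stay tractable past 3-way.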

    Why is this sophisticated enough to deserve a claim like the one in the OP’s post? First, people don’t learn the concepts or mechanics during the course of programming. You have to run into someone who’s heard of it, be convinced of its value, learn to think in terms of combinations, and so on. Once you know the method, you might have to build some testing infrastructure to identify the interfaces & test them. There’s also probably esoteric knowledge about what heuristics to use to save time when combinations go past 3-way toward combinatorial explosion. So, combinatorial testing is certainly a separate skill whose application could frustrate the hell out of developers. Until they learn it and it easily knocks out boatloads of bugs. :)

    Regular testing, like pushing inputs outside their range? Nope. Vanilla stuff, the same as the coding you’re already doing. Easier than a lot of the coding, actually, since the concepts are so simple: basic arithmetic and conditionals on functions you already wrote. What stops basic testing from happening is just apathy. Incidentally, that also stops people from learning the sophisticated stuff for quite a while.
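
    For instance, something like this is about as hard as basic testing gets (apply_discount and its 0–100 range are invented for the example):

    ```python
    # "Arithmetic and conditionals" testing on a made-up function.
    def apply_discount(price, percent):
        return price - price * percent / 100  # the code someone just wrote

    # Sweep percent through and past its intended 0-100 range and sanity-check
    # the result with plain arithmetic.
    for percent in range(-10, 121, 10):
        result = apply_discount(50.0, percent)
        if not 0 <= result <= 50.0:
            print(f"suspicious result at percent={percent}: {result}")
    ```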

    1. 8

      Not strictly related, but this comment reminded me of one of my favorite tweets of all time:

      Trek Glowacki @trek

      Usually when I watch people who “don’t TDD” program, they’re TDDing in a browser/REPL/etc.

      Then throwing those tests away.

      https://twitter.com/trek/status/658066390067245056

      1. 2

        Well said. Same claim I’m making for basic testing.

      2. 5

        “They then run it on some input to see what it does.”

        While I agree this is what most developers do, I’ve run into several who’ll just code up what they think is correct and call it a day without ever executing (or even compiling!) it. They don’t tend to last long, but they exist.

        1. 3

          That’s part of the apathy I was referring to. People who refuse to even attempt validation of their efforts are beyond help as far as testing methods go. People might try to convince them, but that’s an orthogonal problem to training in testing methods. A people problem.

        2. 2

          “The basic testing that will knock out tons of problems in FOSS or proprietary apps takes no special skill past willingness to do the testing.”

          I half agree with you.

          The other half remembers that the most unskilled testers test for success, not for failure. And if you’re not testing for failure, then you’re doing little better than not testing at all.

          Testing for failure requires an intuitive leap, and it benefits from practice. That makes it a skill (or a component of a skill). It’s an easy-to-acquire skill, and one that should frankly be part of the core description of a developer, but it’s a skill nonetheless.
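
          In code, the gap looks something like this (parse_age and its spec are invented for the example):

          ```python
          # Success-only vs. failure-oriented testing on a made-up function.
          def parse_age(text):
              age = int(text)
              if not 0 <= age <= 150:
                  raise ValueError(f"age out of range: {age}")
              return age

          # Testing for success: only the happy path.
          assert parse_age("42") == 42

          # Testing for failure: feed inputs that *should* be rejected and
          # check that they really are.
          for bad in ["-1", "151", "forty-two", ""]:
              try:
                  parse_age(bad)
              except ValueError:
                  continue
              raise AssertionError(f"accepted invalid input: {bad!r}")
          ```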