1. 3
  1.  

  2. 2

    The information is fairly basic. I think Fagan’s Software Inspection Process (1970s) and Mills’ Cleanroom (1980s) made similar claims while actually delivering on them in their results. However, I thought a title with the word “physics” in it would be most appropriate for hardware development, like RTL or gate-level testing:

    http://electronicdesign.com/digital-ics/understanding-28-nm-soc-design-arm-based-cores

    There are physics issues all over it, especially in the Silicon Manufacturability section. Developers get the bonus of knowing a mistake might cost millions. :)

    1. 2

      TDD seems to me more like “experimental physics” that helps you understand the “theoretical physics” of a design.

      A specific TDD session works best when it follows a design discussion of the context in which the unit will exist. From that discussion, the requirements of the unit should be expressed as something like a “contract” (e.g., Hoare logic).

      Developing unit tests should assist in evolving the unit’s contract (the unit’s theory) and should be sufficient to demonstrate that the contract is valid.

      The programming language will determine the shape and effort of such tests. But no matter the language, it seems helpful to develop code and tests together, and to do so in small steps.
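
      For a rough illustration (not from the article; the unit, its contract, and the test below are hypothetical), such a contract can be written as pre/postcondition assertions, with a unit test growing alongside it, e.g. in Python:

      ```python
      # Sketch only: a tiny unit whose contract is expressed as assertions,
      # plus one unit test that demonstrates a clause of that contract.
      class BoundedStack:
          def __init__(self, capacity):
              assert capacity > 0                        # precondition
              self._items, self._capacity = [], capacity

          def push(self, item):
              assert len(self._items) < self._capacity   # precondition: not full
              self._items.append(item)
              assert self._items[-1] is item             # postcondition

          def pop(self):
              assert self._items                         # precondition: not empty
              return self._items.pop()


      def test_push_then_pop_returns_last_item():
          s = BoundedStack(capacity=2)
          s.push("a")
          s.push("b")
          assert s.pop() == "b"   # exercises the push/pop clauses of the contract
      ```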

      TDD is a way to combine top-down and bottom-up development. There are other ways to do the same; for example, using a REPL for exploration and then deriving tests from a review of the REPL transcript is not all that different from TDD. In some ways, TDD is like a REPL for language environments that lack a convenient REPL of their own.
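
      In Python, for instance, that transcript-to-test step can be almost mechanical: a few lines pasted from an interactive session become a doctest (the function here is hypothetical, purely for illustration):

      ```python
      def word_count(text):
          """Count whitespace-separated words.

          The examples below were copied from a REPL session and now run as
          tests via `python -m doctest this_file.py`:

          >>> word_count("a quick brown fox")
          4
          >>> word_count("")
          0
          """
          return len(text.split())
      ```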

      A helpful, fairly detailed outline of TDD:

      https://drive.google.com/file/d/0B0cKsRm-3yprZWFlc3Q2QVk1dzQ/view?usp=drivesdk

      1. 1

        While “Testing is a good idea!” is no revelation for most of Lobsters, I found the framing in terms of comparing time-to-discover, time-to-find, and time-to-fix to be an inspired way of summarizing the utility of tests.

        Possibly for use in explaining testing to non-technical management-types (and stripped of the physics metaphor).
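
        Something like the following back-of-envelope comparison (with made-up numbers; only the ratio matters) is roughly the level of detail I have in mind for that audience:

        ```python
        # Illustrative numbers only: the point is how the cost scales with
        # the delay between making a mistake and discovering it.
        defects = 20                  # defects introduced in an iteration (assumed)
        fix_now_minutes = 5           # caught by a test moments after the mistake
        fix_later_minutes = 90        # caught days later: reproduce, locate, then fix

        print("fix-as-you-go:", defects * fix_now_minutes, "minutes")
        print("debug-later:  ", defects * fix_later_minutes, "minutes")
        ```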

        1. 1

          I agree they are a good way to think about it.

          But I think if you’re having to justify your approach to management types, then you’re screwed. Development practices should be absorbed into estimates.

          1. 1

            To start, I completely agree with your statement about including practices in a timeline.

            However, I think you are interpreting my desire to explain “to non-technical management-types” as me explaining up to my managers, and I guess I should have added some context.

            Being at one of the surviving companies from a previous wave of startups, in an ecosystem that is very unsophisticated for its size, I regularly field questions from non-technical founders who are in over their heads. One of the most common topics is trying to understand literally anything at all about how software gets written, and how they can relate to or understand their technical team.

            Concrete example: I recently spoke with a founder whose product team consists of a full-time junior developer (who wanted tests but didn’t know to absorb them into an estimate) and a part-time, freelance “senior” who just wanted to ship and get paid. The senior wasn’t against tests, but he didn’t seem to want to spend time on them if the business side didn’t care. The founder came to me because he wasn’t sure who to believe, but he wanted to do the right thing. I spent 30 minutes over coffee explaining some basics, working up to the costs and benefits of various types of testing at various stages of a product lifecycle. The conversation continued in bits and pieces for about a week over email, and to the best of my knowledge, they settled on covering all use cases with integration tests, plus unit tests where appropriate. Better than nothing.

            So I present myself as a relatable “business bro” in order to be a sleeper agent for reliable software engineering. As depressing as it can get to have these basic conversations over and over, the war for best practices is being fought in marginal little battles like these.

            1. 1

              This is great. I’m going to copy you in this way in the future. :)

        2. 1

          I have the feeling (admittedly based on little more than the year it was written) that this was intended as evangelism. Even so, I can’t help feeling it presents something of a false dichotomy in setting up the two possibilities as “test-driven” or “debug later” - what about code that was tested at the REPL? What about development that was “test-first” but not actually “test-driven” (a fine distinction, but an important one)? What about code-first-then-write-tests-straight-afterwards development?

          None of which carping is intended as a slam against the author for something he wrote eight years ago, but it does feel like we should have moved on a bit since then.