1. 13
  1. 8

    I completely agree with the overall message. It is possible, hypothetically, to do everything with tests or everything with types, but there are rapidly diminishing returns on investment. Far better to invest in testing the things that are suited to testing and in type-checking the things that are suited to type-checking.

    Somewhat tangentially, I found this characterisation of typed programming in the conclusion to be a little off:

    You need to check that you are actually sending the right Integer to the Integer-accepting function. We haven’t found a way yet to create a type called RightInteger that holds the right Integer for each situation, and we suspect it’s impossible.

    When we check things (with tests or types), we’re imposing meaning onto our code and data by forbidding certain operations or representations: for example, that the second integer is never smaller than the first (say, because they’re the start and end timestamps of an event). Tests do this by checking that the code we’ve written seems to avoid doing those forbidden things (i.e. when the test suite exercised the code, the second value was never smaller than the first).

    To do this with types we can follow the principle that “invalid states should be unrepresentable”. In other words, we should replace the side-condition of “must not be smaller” with a different representation, such that the side-condition is trivially true for all values. In the case of the Integer pair, we might choose the second value to be unsigned (and hence non-negative), and treat it as a Difference (or Duration). This may be represented in memory in the same way as a pair of Integers, but the meaning and algorithms are different from those of start and end timestamps. I suppose it’s a case of “boolean blindness” (or in the author’s case “integer blindness”).
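    To make that concrete, here’s a minimal Python sketch of the Duration idea (the names `Duration` and `Event` are hypothetical, purely for illustration): instead of a pair of timestamps with a “must not be smaller” side-condition, the second component is a Duration whose constructor rejects negative values, so every Event that exists satisfies the invariant by construction.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Duration:
    """A non-negative span of time; negative spans are unrepresentable."""
    seconds: int

    def __post_init__(self):
        if self.seconds < 0:
            raise ValueError("Duration cannot be negative")

@dataclass(frozen=True)
class Event:
    """Start timestamp plus a Duration: end >= start holds for every Event."""
    start: int
    duration: Duration

    @property
    def end(self) -> int:
        return self.start + self.duration.seconds

e = Event(start=100, duration=Duration(30))
print(e.end)  # 130
```

    In memory this is still just two integers, but there is no longer any “end before start” state to write a test against: the side-condition has been traded for a representation where it is trivially true.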

    My point is that type-checking doesn’t work by divining meaning from ambiguous blobs of data (like “the right Integer for each situation”); it works by methodically applying situation-specific rules that we’ve told it. If we need different rules for each situation, then they’re probably not dealing with the same types (like Integer)!

    Just to clarify, I’m not saying that all code should avoid Integer, etc. (it’s all about bang-for-buck, and tests and types are complementary there). I’m just saying that we shouldn’t expect types to help us in that way, since that’s not what they’re about.

    1. 4

      Related: https://www.destroyallsoftware.com/talks/ideology

      Some people claim that unit tests make type systems unnecessary: “types are just simple unit tests written for you, and simple unit tests aren’t the important ones”. Other people claim that type systems make unit tests unnecessary: “dynamic languages only need unit tests because they don’t have type systems.” What’s going on here? These can’t both be right. We’ll use this example and a couple others to explore the unknown beliefs that structure our understanding of the world.

      1. 1

        Author here: Ideology by Gary Bernhardt is definitely part of the inspiration for this article; there’s no point in hiding it. And I suggest you all go and check out the talk: it’s really good, and goes way beyond programming.

      2. 4

        In a statically typed lang I don’t HAVE to use types, and I can use tests. I find that types are useful in reducing cognitive load and communicating intent, hopefully making invalid states unrepresentable in my code. I find that tests are useful in checking assumptions that are not actual business rules. You won’t usually see me using “object” around my code, and that’s because opting out is often actually harder than specifying the types.

        1. 3

          I liked this overall. It’s a difficult concept to get across to perhaps a junior audience. Kudos to the author for giving it a shot.

          I always go with types first when it comes to data. Things involving function signatures get a little more wonky because, to me at least, they start to get tied into your programming paradigm and architecture. Pure FP and the Onion Architecture go one way, FP and something like DDD another. OO is also a different world when it comes to fully specifying functions.

          There may be a higher level still, Language-Oriented Programming, where it’s impossible to break the program but care must be taken that it’s configured/programmed correctly. Adding that would be far, far too much for this one essay though. Good job, guys.

          1. 2

            Even in OO, function signatures matter; they’re just only expected to describe a total function under non-exceptional flow. Use your void functions for side effects and mutation, and use your return functions when you actually want to return things. If you start mixing these two you’ll just create heartache imho. “Why did you call ____??” “Well I didn’t think it would change stuff, it’s a function…”
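            A rough sketch of that command/query split in Python (the `Account` class and its methods are hypothetical, just to illustrate the convention): commands mutate and return nothing, queries return a value and leave state alone.

```python
class Account:
    """Command-query separation: commands mutate and return None,
    queries return a value without changing state."""

    def __init__(self, balance: int = 0):
        self._balance = balance

    def deposit(self, amount: int) -> None:
        """Command: changes state, deliberately returns nothing."""
        self._balance += amount

    def balance(self) -> int:
        """Query: reports state, never mutates it."""
        return self._balance

acct = Account()
acct.deposit(50)
print(acct.balance())  # 50
```

            The `-> None` on commands is the signature-level hint the comment above describes: a caller can tell from the type alone that calling it is for the side effect.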

            1. 1

              I agree that the signatures matter. They matter just as much. My point was only in regard to how you think of them as a replacement for unit tests. The entire idea of polymorphism – and duck typing – means that having a tight function sig doesn’t give you the same assurance that you would have, say, in Haskell.

          2. 2

            Don’t forget that they aren’t the only automated ways of writing correct software! You also have contracts, which are way underused in most code. Also, TDD biases towards manual oracle tests, while there are lots of other kinds of testing, just as there are lots of other kinds of types. For example, fuzzing and metamorphic testing.
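            To illustrate the metamorphic idea: you don’t need a hand-written oracle saying what the right answer is, only a relation that must hold between outputs. A stdlib-only Python sketch (the function name and the properties chosen are just illustrative assumptions):

```python
import random

def metamorphic_sort_check(trials: int = 100) -> None:
    """Metamorphic test: we don't know the 'right' sorted output for an
    arbitrary random list (no oracle), but we do know that sorting must be
    insensitive to input order and must preserve length."""
    rng = random.Random(42)  # fixed seed so the check is reproducible
    for _ in range(trials):
        xs = [rng.randint(-1000, 1000) for _ in range(rng.randint(0, 50))]
        shuffled = xs[:]
        rng.shuffle(shuffled)
        assert sorted(xs) == sorted(shuffled)  # order-insensitivity relation
        assert len(sorted(xs)) == len(xs)      # length-preservation relation

metamorphic_sort_check()
print("all metamorphic checks passed")
```

            No single trial states an expected output; the relations between runs do the checking, which is exactly what makes this complementary to manual oracle tests.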