The double entendre of “types” only gets worse when dynamic languages redefine “type safety” as well! The standard term for what they actually mean, I believe, is well-formedness of data. /rant
But there is something pretty interesting here that you can talk about using types. At the boundaries of your system, you often are forced to become completely concrete about how to interpret inputs and outputs. This seems at odds with the desire to have code be as generic as possible.
But why do we seek generality? It seems like it ought to be so that we can reuse code more, but I’d argue that the concept of parametricity gives a different and better answer: more generic code is more likely to be correct, since its type states its assumptions more clearly. With a type system we can even leverage parametricity theorems to be doubly sure that our generality is pulling its weight.
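To make that concrete, here's a minimal TypeScript sketch (the function name and values are my own invention). A fully parametric function can't inspect its elements, so its type alone rules out whole classes of bugs, and the “free theorem” says it must commute with `map`:

```typescript
// Fully generic: the type <A>(xs: A[]) => A[] forbids looking inside
// the elements. All this function can do is select, drop, or reorder.
function firstAndLast<A>(xs: A[]): A[] {
  return xs.length === 0 ? [] : [xs[0], xs[xs.length - 1]];
}

// The free theorem for this type: for any function g,
// firstAndLast(xs.map(g)) === firstAndLast(xs).map(g)
const xs = [1, 2, 3, 4];
const g = (n: number) => n * 10;

const lhs = firstAndLast(xs.map(g)); // [10, 40]
const rhs = firstAndLast(xs).map(g); // [10, 40]
console.log(lhs, rhs);
```

The point isn't this particular function; it's that the generic signature carries a proof obligation the compiler enforces, which a version hard-coded to `number[]` would not.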
This resolves the tension into a balance: a highly generalized core computation is “operated” by a concrete layer that interacts with the outside world. The core can be verified and ensured safe with ease (and unit tested very nicely), while the “real world runtime” focuses on interpreting outside signals and negotiating that complex boundary.
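A sketch of that split, with hypothetical names (`Order`, `parseOrders`, `totalBy` are all mine): the boundary parses untrusted input into a precise type once, and the core never sees raw data:

```typescript
type Order = { id: number; quantity: number };

// Generic core: pure, trivially unit-testable, knows nothing of JSON.
function totalBy<A>(items: A[], measure: (a: A) => number): number {
  return items.reduce((sum, a) => sum + measure(a), 0);
}

// Concrete boundary: interprets an outside signal (a JSON string) and
// refuses to hand malformed data inward.
function parseOrders(raw: string): Order[] | Error {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    return new Error("not JSON");
  }
  if (!Array.isArray(data)) return new Error("expected an array");
  const orders: Order[] = [];
  for (const item of data) {
    const o = item as { id?: unknown; quantity?: unknown };
    if (typeof o?.id === "number" && typeof o?.quantity === "number") {
      orders.push({ id: o.id, quantity: o.quantity });
    } else {
      return new Error("malformed order");
    }
  }
  return orders;
}

const parsed = parseOrders('[{"id":1,"quantity":2},{"id":2,"quantity":3}]');
if (!(parsed instanceof Error)) {
  console.log(totalBy(parsed, (o) => o.quantity)); // 5
}
```

Once `parseOrders` has done its job, bad data simply has no type it could inhabit on the inside, which is exactly the “prevent bad data from permeating the core” idea below.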
The “common pattern” seems to me to be just bad code.
The post does do a good job of ignoring its own title and presenting important notes on contracts and preventing bad data from permeating the core of the application.