I’m the furthest thing from being an expert in programming under the regime of a rich type system—I would appreciate hearing from the experts on how these approaches play out in practice.
The principle of making illegal states unrepresentable goes back much further than the contemporary interest in type theory, and the benefits have been clear for a long time. (We were talking about this back in the days when I was still a C++ programmer.) Clearly, the proposition is that the pseudo-mathematical nature of a well-founded and theoretically-grounded type system is supposed to result in greater benefits than these previous approaches—presumably in the areas of static verifiability and, e.g., automated testing.
That said, this particular post uses a business requirement as its motivating example. While, understandably, the example is chosen for illustrative purposes only, I wonder: what benefits does type-orientated programming give to modeling business logic?
In my mind, the limitation of fitting business constraints to a pseudo-mathematical structure is:
business constraints live within a non-mathematical domain of human laws and logic, which defy neat & structured modelling. E.g., it seems possible to me that a Contact must have either an EmailContactInfo or a PostalContactInfo unless it's some special case (like some VIP contact or a contact under a privacy-sensitive jurisdiction), in which case the constraint becomes more complex still. (This seems to be because computer programmes are limited representations of an infinitely profound human reality, and even the physical laws that govern natural phenomena are no match for the complexities that can be dreamt up by the human imagination and then enforced upon us by law or language.)
given the above, it would make sense to design this system as one that communicates with stored parties, thus encircling the modelled business constraint with a system-design constraint. In other words, in spite of the complexity of the actual business constraint, it would be nonsensical for a system designed for the sole purpose of performing some action to consider entities that it cannot process. What does it mean to store a Contact in a system whose sole purpose is to send mail or e-mail, if that datum has neither EmailContactInfo nor PostalContactInfo? However, in practice, management rarely wants to pay the cost of writing multiple systems (with the corresponding O(n²) complexity of integrating them), so you get stuck writing one system that has to encode these constraints very close to the actual business logic itself. In other words, your Contact tracking type gets used in places where it is meaningful not to have the …ContactInfo stored under certain special cases, because it's used for some purpose distinct from sending mail or e-mail. (Heaven forbid the system is exposed to business users for direct data entry, because you might then see users working around the programme constraints by filling in dummy data in required fields, totally defying the purpose of the constraints. Many large systems I've seen almost expect data entry to be messy and ad hoc, and don't even bother with any kind of fancy modelling.)
This seems to result in the following designs:
some outer system with relaxed constraints to capture information exactly as entered, retaining intention and provenance. Here the business constraints are modelled not in a pseudo-mathematical, language-level construct like a type system, but in a (runtime) application-level construct that allows for the flexibility of unplanned special cases, and whose design exposes this logic to an end-user in a human-verifiable form (e.g., it prints out a flow-chart of the constraints and their validation state in a form that a lawyer can read.)
multiple, very small inner systems which enforce constraints more rigidly, by circumscribing the subset of data they accept to those which they can process. Anything that defies being subjected to these constraints gets shunted to some secondary (potentially manual) process.
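To make the inner-system idea concrete, here is a minimal sketch in Python, under stated assumptions: the EmailContactInfo/PostalContactInfo names echo the article's example, while `DeliverableContact` and `admit` are invented for illustration. The gate only constructs a value for contacts it can actually process; everything else is signalled for the secondary, manual path.

```python
from dataclasses import dataclass
from typing import Optional, Tuple, Union

@dataclass(frozen=True)
class EmailContactInfo:
    address: str

@dataclass(frozen=True)
class PostalContactInfo:
    street: str
    city: str

# At least one delivery channel exists by construction.
Channel = Union[EmailContactInfo, PostalContactInfo,
                Tuple[EmailContactInfo, PostalContactInfo]]

@dataclass(frozen=True)
class DeliverableContact:
    name: str
    channel: Channel

def admit(name: str,
          email: Optional[EmailContactInfo] = None,
          postal: Optional[PostalContactInfo] = None) -> Optional[DeliverableContact]:
    """Gatekeeper for the inner system: returns a DeliverableContact,
    or None to mean 'shunt to the secondary (manual) process'."""
    if email and postal:
        return DeliverableContact(name, (email, postal))
    if email:
        return DeliverableContact(name, email)
    if postal:
        return DeliverableContact(name, postal)
    return None  # not processable here
```

Downstream code in the inner system then handles only `DeliverableContact` values, so the "no contact info" case simply cannot reach it.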
It seems to me that only the latter, inner systems would benefit from type-orientated programming. (It's often these systems that, in languages like Python, we put into a more rigid, restricted computational domain like a pandas.DataFrame or a numpy.ndarray.) The benefits at this layer seem obvious, but they also seem more minor, since these systems are necessarily more restricted in their scope—by nature, they're smaller and more narrowly purposed.
My question is:
I think this will answer your question: https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/
To the extent that your chosen link answers the given questions, it seems like the answer is “no”:
No, type-driven programming is not fundamentally different from dynamically-typed approaches. In particular, we ought to see both dynamically-typed and statically-typed type-driven approaches as the exact same approach, of declaring shapes for abstract data types and then asking our language to ensure for us that values have the correct shape.
The first bullet was “no”.
To give a concrete example, consider Zephyr ASDL, an abstract syntax description language. Zephyr ASDL is effectively a language for declaring algebraic data. CPython uses Zephyr ASDL for the reference Python abstract syntax; this one description is used both from statically-typed C and from dynamically-typed Python. Similarly, my own language Monte uses Zephyr ASDL not just for the next-generation abstract syntax; it can also be used for any user-level algebraic data. I recently used it for text-user-interface widgets and constructive solid geometry.
I hope that this example helps reveal to you that whether a type system is static or dynamic can depend on our vantage point.
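One way to picture the vantage-point idea in Python (a hedged sketch; `Point` is an invented example): the same declared shape can be enforced dynamically when a value is constructed, or read by a static checker such as mypy before the programme runs, without changing the declaration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    x: float
    y: float

    def __post_init__(self):
        # Dynamic vantage point: the declared shape is enforced
        # when a value is constructed at runtime.
        for field in (self.x, self.y):
            if not isinstance(field, (int, float)):
                raise TypeError("Point fields must be numeric")

# Static vantage point: a checker such as mypy reads the very same
# annotations before the programme runs, with no code changes.
origin = Point(0.0, 0.0)
```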
I understand that the vantage point is when you choose to do the type-checking–at runtime, or before that. Type-driven programming still offers a unique ability though, which is explained in the post I linked to: that is, you can parse less-structured data into more-structured data and then use the type of the parsed value as a guarantee (a proof) that some condition holds.
You could of course choose to not take advantage of this technique–it doesn’t work magically. But if you do it, then the guarantee it gives you is very real, and one you don’t easily get with dynamic typing.
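For illustration, the linked post's canonical example (Haskell's NonEmpty) can be sketched in Python, with invented names: the wrapper value is obtainable (by convention) only via the parser, so holding one acts as the proof that the check already happened.

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass(frozen=True)
class NonEmpty:
    """Holding a NonEmpty is the proof: the intended way to obtain
    one is parse_non_empty, which performs the check."""
    items: List[Any]

def parse_non_empty(xs) -> NonEmpty:
    # Parse, don't validate: check once at the boundary and return
    # a richer type instead of re-checking everywhere downstream.
    items = list(xs)
    if not items:
        raise ValueError("expected a non-empty sequence")
    return NonEmpty(items)

def head(ne: NonEmpty):
    # No emptiness check needed; the type carries the guarantee.
    return ne.items[0]
```

Of course, Python cannot stop someone constructing `NonEmpty([])` directly; closing that hole with an abstract constructor is exactly the extra guarantee a static type system provides.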
This is a great comment. I’ve recently written ten or twenty thousand words on this topic. I was going to come here and try to disambiguate all of the various issues, but you’ve done a pretty good job. I will attempt to be brief.
I think the mistake I see on both sides of this discussion is assuming totality of approach. That is, either turn this on or turn it off, now compare the two.
This fails for a variety of reasons, some of which you mention. However if we limit the scope to one important business decision at a time, it doesn’t fail at all. Various business groups may have various ideas of what makes a valid address, but the folks printing billing statements need their work done whether they agree with the other folks or not.
Making illegal states unrepresentable rocks, but it rocks only when appropriately scoped. By the way, this is exactly how we treat default local variables and system types: we start out with a job (a method), we pick the system types we need for that job, and then we forget about them for another job. It's only when we begin using shared structs and classes across various methods that we start running into any kind of problems.
The next logical question is: ok, then how do you manage types across hundreds of business decisions? I have an answer to that, but it’s too much to go into here. There are a couple of ways.
The “constructive” vs “predicative” distinction is useful: I’ve used it unconsciously before, and having it named is nice.
Probably some deeper things going on here
I agree. Though I think in the final example trying to answer it by connecting vague (albeit helpful) heuristics like declarative vs imperative ends up being an interesting misdirection. The reason putting the convoluted email/phone/address/etc business rules into a constructive form feels easier is that, at the end, you’ll have a list of legal cases:
Going through a list and checking each item is one of the few easy problems for the human-being-as-processor. We tend not to make mistakes there. Whereas translating the rules directly as written (declaratively) probably gets you into a tangle of multiply nested if… else statements. We tend not to be as reliable at reading or writing those.
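A hedged sketch of the contrast, with invented rules (suppose a VIP must be reachable both ways, and anyone else needs at least one channel):

```python
# Constructive form: each legal case is one row; checking a case is
# a single membership test over an explicit, reviewable list.
LEGAL_CASES = {
    # (has_email, has_postal, is_vip)
    (True,  True,  False),
    (True,  False, False),
    (False, True,  False),
    (True,  True,  True),
}

def is_legal(has_email: bool, has_postal: bool, is_vip: bool) -> bool:
    return (has_email, has_postal, is_vip) in LEGAL_CASES

# The same rules translated directly as written: nested branches,
# which are harder to read and to audit for missed cases.
def is_legal_nested(has_email: bool, has_postal: bool, is_vip: bool) -> bool:
    if is_vip:
        return has_email and has_postal
    else:
        return has_email or has_postal
```

Auditing the first form is exactly the easy go-down-the-list check; auditing the second means mentally simulating every path.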
is this about preventing sealand from getting a lawyer
For future readers: please note that this article is from 2015 and it’s pretty much outdated when it comes to C#.
I understood it first hand when, after lots of digging, testing and all, I realised that the tooling had been open-sourced and rapidly forgotten. Even in recent discussions with the C# devs, the feature is not deemed a priority.
So, in recent C#, Code Contracts are pretty much dead.
As an alternative, triggered by reading this article, I'm starting to use simpler Guards with a Conditional Debug, to ensure certain constraints at runtime. It's not a static checker, but better than nothing I guess.
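Not C#, but the same pattern can be sketched in Python (`require` is an invented helper): guard checks that the interpreter strips when run with `-O`, roughly analogous to a method marked `[Conditional("DEBUG")]` vanishing from release builds.

```python
def require(condition: bool, message: str = "constraint violated") -> None:
    # __debug__ is False when Python runs with -O, so these guards
    # cost almost nothing in an optimised build, much like a
    # DEBUG-only conditional method in C#.
    if __debug__ and not condition:
        raise AssertionError(message)

def send_invoice(amount: float) -> str:
    require(amount > 0, "amount must be positive")
    return f"invoiced {amount}"
```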
This article is from this year and doesn’t really have anything to do with C#. I think you’ve responded to the wrong submission.
You are indeed right, I messed up, sorry!