I haven’t read parts 1 through 4, and thus this is a bit mysterious to me. But I get the gist, I think, and the insight here seems really valuable and widely applicable.
We often have a bias, when working in typed languages, to shunt as much of the logic into the type system as we can. I think this arises out of a reasonable value judgment (compile-time errors are cheaper and safer to fix than runtime ones) but can result in unintended knock-on effects.
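To make the tradeoff concrete, here is a minimal C# sketch (a hypothetical illustration of my own, not code from the series): the same rule, “an order has at least one line item”, first pushed into the type system and then left as ordinary runtime logic.

```csharp
using System;
using System.Collections.Generic;

// Type-level encoding: an order cannot even be constructed without a
// line item, so the compiler rejects violations at every call site.
public sealed class NonEmptyList<T>
{
    public T Head { get; }
    public IReadOnlyList<T> Tail { get; }
    public NonEmptyList(T head, params T[] tail) { Head = head; Tail = tail; }
}

public sealed record TypedOrder(NonEmptyList<string> LineItems);

// Runtime encoding: the same rule lives in plain business logic, and
// violations surface only when an order is validated.
public sealed record Order(IReadOnlyList<string> LineItems);

public static class OrderRules
{
    public static void Validate(Order order)
    {
        if (order.LineItems.Count == 0)
            throw new ArgumentException("An order needs at least one line item.");
    }
}
```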
One of the corrupting influences, I think, is the general mystique of types. They are more challenging to get right and closer to the compsci domain that some (not all) of us feel is the ‘proper’ universe for an intelligent programmer to think within.
Another is the failure to distinguish between academic exercises with types and practical ones. Those of us on Lobsters or Hacker News are used to seeing all kinds of miraculous things encoded in type systems. It’s easy to fail to notice when that’s happening in a PLT research context or an educational one rather than a practical one.
One failure of this approach (encode as much as we can in the type system) is of course that business logic almost by definition changes over time. What was invariant on Monday may well change by Friday. In some rare cases, the engineers might even prevail upon Product to effect the opposite transformation.
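Continuing the hypothetical sketch from above: suppose Product decides on Friday that draft orders may have zero line items. The runtime rule admits a local, one-line edit; the type-level encoding does not.

```csharp
// Runtime version: relax the rule in one place.
public static void Validate(Order order, bool isDraft)
{
    if (!isDraft && order.LineItems.Count == 0)
        throw new ArgumentException("A placed order needs at least one line item.");
}

// Typed version: TypedOrder's NonEmptyList<string> must become an
// IReadOnlyList<string>, or the type must split into DraftOrder and
// PlacedOrder, and every construction and consumption site that relied
// on the old shape has to be revisited.
```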
This is not to say that everything should always be done at runtime; we can change our types, too. But it’s worth spending some time thinking about where the boundary between types and business logic should sit when we design our programs. That boundary might be different in every case. Looking for it, rather than assuming it always belongs as far to the typed side as possible, will make for better programs.
> I haven’t read parts 1 through 4, and thus this is a bit mysterious to me.
I didn’t know whether to post part 5, with the punchline but without the build-up, or part 1, hoping that readers would make it all the way to the payoff.
I find your analysis spot-on and well in line with the series: we have tools in our programming languages for establishing constraints, so it seems perfectly logical to use those tools to implement the problem’s constraints in code. But as the series shows, that can be hard to do in the first place, and as new requirements come in, nearly impossible to maintain.
I think these articles get at a crucial limitation of popular languages: the lack of efficient runtime dispatch. Type-wise, it’s great that OOP features exist, hierarchies and all that good stuff. However, when it comes to defining complex interactions between types that don’t fit neatly into a tree-shaped diagram, you’ll find yourself up the creek without a paddle. Sure, multiple dispatch (MD) can be implemented on top of a language, but having it built in is extremely desirable.
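For what it’s worth, here is roughly what “implemented on top of a language” can look like in C#, the series’ language: casting to dynamic defers overload resolution to runtime, against the runtime types of both arguments. A minimal sketch with made-up shape types, not anything from the articles:

```csharp
using System;

abstract class Shape { }
sealed class Circle : Shape { }
sealed class Square : Shape { }

static class Collisions
{
    // The dynamic casts defer overload resolution to runtime, so the
    // best Resolve overload is chosen by the runtime types of a and b:
    // a poor man's multiple dispatch.
    public static string Collide(Shape a, Shape b) =>
        Resolve((dynamic)a, (dynamic)b);

    static string Resolve(Circle a, Circle b) => "circle/circle";
    static string Resolve(Circle a, Square b) => "circle/square";
    static string Resolve(Square a, Circle b) => Resolve(b, a);
    static string Resolve(Square a, Square b) => "square/square";
}

static class Demo
{
    static void Main() =>
        Console.WriteLine(Collisions.Collide(new Square(), new Circle())); // circle/square
}
```

The costs are the usual ones: resolution goes through the runtime binder, and a missing type combination fails only at runtime rather than at compile time.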
I’m not so sure. Part 4 argues against trying to solve this problem with multiple dispatch (and not just because MD isn’t intentionally “built-in” to C#).