There simply isn’t a place where, if you put types, it makes your code harder to change.
I wish this were true. However, types can make your program slightly harder to change, simply in that if you want to change the type of something, you have to change the type annotation as well.
With modern type inference, you probably only ever need to write out types explicitly on system boundaries, and the rest of the program can be inferred safely.
This is certainly true of some languages! OCaml, Haskell, and F# all have (for the most part) fully inferred type systems. However, there are a lot of things you can't do within these inferred type systems that you'd totally be able to do in a less-inferred system. Subtyping, for one, but also bindings to polymorphic references.
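To make the subtyping trade-off concrete: TypeScript, whose type system is built around subtyping, deliberately does not attempt ML-style global inference of parameter types. A small sketch (the function name is just for illustration):

```typescript
// In an ML-family language the parameter type below would be inferred
// from the body alone. TypeScript instead requires the annotation:
// with noImplicitAny enabled, an unannotated parameter is a compile error.

// function double(x) { return x * 2; }  // error: 'x' implicitly has an 'any' type

// So the boundary is annotated by hand, and only the body is inferred:
function double(x: number): number {
  return x * 2;
}

console.log(double(21)); // 42
```

This is one concrete way subtyping and full inference pull against each other in practice.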
I totally agree with the overall inclination towards types and type inference, but it’s good to know where the limits are.
There simply isn’t a place where, if you put types, it makes your code harder to change.
I’m not sure what the author is trying to say here, because it seems obviously and trivially false to me. If I have a function that used to take a String, but now takes a struct/record/sum type with the String as one of its members, the change requires more work if you have to change a number of type declarations in addition to the code.
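A sketch of that exact refactor in TypeScript (all names here are illustrative): the change is more work, but the checker flags every affected call site.

```typescript
// Before (hypothetical): greet took a plain string.
// function greet(name: string): string { return "Hello, " + name; }

// After: it takes a record with the string as one of its members.
type User = { name: string; id: number };

function greet(user: User): string {
  return "Hello, " + user.name;
}

// Old call sites like greet("Ada") are now compile errors that must
// each be rewritten: extra work, but the checker lists every one.
console.log(greet({ name: "Ada", id: 1 })); // Hello, Ada
```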
I think the author, when he’s discussing strong typing, is actually making the assumption that the language in question has full type inference. He acknowledges, in a single throwaway line, that strong typing isn’t enough to make this model work (saying “look at C++”).
Some of the static in discussions of types comes from the fact that, plainly stated, only programming-language nerds have ever used a statically typed language with sane type inference. Most people's idea of static typing comes from exposure to dysfunctional explicit static types in C++ or Java.
Type inference is, in my view, absolutely necessary to make static typing worth the trouble in small and medium-sized projects. And even in large projects with tens or hundreds of active developers, where static typing sans inference does make certain kinds of errors easier to catch and harder to make, it takes type inference to make static typing feel like an aid rather than a hindrance. Most professional and amateur programmers have never seen it, though.
Even as industry is slowly catching up to the past 30+ years of programming language theory, it’ll take another ten or fifteen years before J. Random Hacker hears “static typing” and can reasonably be expected to think of Haskell rather than Java, as the author of the piece does. Until then, if we aren’t explicit about meaning “static typing with inference”, we can expect to get a lot of strange-seeming pushback.
Isn’t the common advice in e.g. Haskell to explicitly declare the types for each function, even if not strictly necessary? Even excellent type inference doesn’t prevent you from having to make changes to all relevant declarations then.
Though I now realize that ‘making code harder to change’ and ‘making code more work to change’ are not entirely the same thing. Having to do extra work imposes a small (mental) barrier, but it’s menial work. And as @ddellacosta suggests in the parallel comment, types may make finding the places to change easier, reducing the amount of searching (or following failing tests) to do. So it may even be less work. So I think I agree with the original statement that adding types doesn’t make your code harder to change.
I’m not sure what the common advice is in Haskell. The common advice in Scala seems to be that types should only be annotated in situations where it would make behavior (as opposed to type) ambiguous, and that attitude makes sense to me – it makes using static types precisely as low-effort as using duck types, but moves checking out of runtime to make debugging type mismatches easier.
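That annotate-only-at-boundaries convention translates to other inferred languages too; a TypeScript sketch (function and variable names are made up):

```typescript
// Only the function's boundary is annotated; the return type and every
// local binding are inferred. Changing the parameter type means editing
// exactly one signature, and the checker propagates the rest.
function total(prices: number[]) {
  const doubled = prices.map((p) => p * 2); // inferred: number[]
  return doubled.reduce((sum, p) => sum + p, 0); // inferred return: number
}

console.log(total([10, 20])); // 60
```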
We underestimate the effort involved in menial work to our own peril: programming, in a professional context, is >80% menial by both time and energy (since genuinely intellectually-challenging code is risky – hard to reason about, hard to document, hard for coworkers to learn how to understand, and hard for end users to get an intuition for). Eliminating sources of menial effort makes room for a greater amount of menial effort from other sources.
In this sense, I figure having the debugging benefits of strict typing without ever needing to fight the type checker increases productivity substantially: every unnecessary trivial piece of reasoning about types going on in a developer’s head could be replaced with a trivial piece of reasoning about business logic, optimization, UX, architecture, giving less money to Amazon, which movie to attend after work, or something else that’s ultimately more important.
This requires a perspective shift, I think. I found it to be a straightforward and obviously true statement, in that having type information does nothing other than illuminate where a change needs to be made in an expression, vs. having that information obscured. This is the case whether type inference makes an annotation unnecessary or we have a type annotation written out: the type checker is going to tell us what we need to change.
This is in contrast to a non-statically-typed language where the only way you can understand how types may have changed throughout a codebase is by hopefully exposing enough potential type errors in your tests (or if you have something like Clojure’s spec, through assiduous manual annotation and good practices) so that they don’t pop up at runtime. In this sense “there simply isn’t a place where, if you put types, it makes your code harder to change.”
Yes and see my response to the parallel comment: I was confusing ‘make it more work to change code’ with ‘making code harder to change’. Even if the former would be true, the latter could be false and your argument even makes the former unlikely.
Though of course my excellent test suite also illuminates where changes need to be made /s
When the author mentions OO:
I can’t help but reject the notion of an OO ideology. What does ideology even mean here? Let me guess:
If one tries to understand the theory behind OO, fails, and applies some of the ideas in an apparently chaotic way that merely resembles it.
But this, in my opinion, happens with most things one tries to learn. No blame to the student, for he is only interested in learning more!
Let me close with a poem:
Object-orientation without subclasses is like functions without arguments.
Inheritance is not important for OO programming at all. It’s about substitutability, and if done the way it was originally intended, message passing.
Inheritance and subclassing are different. With inheritance, one imposes a constraint declaratively (say, “these objects must be a subclass of those objects”), and that can be done, often without any verification whatsoever. Subclassing, in contrast, simply exists, even if one never explicitly implements it. Consider:
The expected object was Y, now we substitute for some object X. This substitution only makes sense when X is compatible with Y.
There exists a compatibility relation between Xs and Ys, let’s call it the “compatible” relation.
If X is compatible with Y, and Y is compatible with Z, then X is compatible with Z.
Every X is compatible with itself.
There exists a most compatible object (say, nil, which can always be substituted), and a least compatible object (say, the empty object, which is compatible only with itself).
Just from substitution, we arrive at subclassing: our “compatible” relation now takes the place of subclassing. This does not have to be checked by a compiler; it could equally persist only in the programmer’s mind.
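The relation sketched above can be modeled directly. A toy TypeScript version, where the object names and the declared hierarchy are made up purely for illustration:

```typescript
// A toy model of the "compatible" relation: reflexive, transitive,
// with a most-compatible object (Nil, which substitutes for anything)
// and a least-compatible object (Empty, compatible only with itself).
type Obj = "Nil" | "Empty" | "Cat" | "Animal" | "Thing";

// Hypothetical declared direct compatibilities.
const declared: Array<[Obj, Obj]> = [
  ["Cat", "Animal"],
  ["Animal", "Thing"],
];

function compatible(x: Obj, y: Obj): boolean {
  if (x === y) return true; // every X is compatible with itself
  if (x === "Nil") return true; // Nil can always be substituted
  if (x === "Empty") return false; // Empty matches only itself
  // Transitivity: follow the declared edges.
  return declared.some(([a, b]) => a === x && compatible(b, y));
}

console.log(compatible("Cat", "Thing")); // true, by transitivity
console.log(compatible("Thing", "Cat")); // false: the relation is not symmetric
```

Nothing here requires a compiler; the function is just the substitution rule written down, which is the point: subclassing falls out of substitutability.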
Consider how subclassing works differently for: languages with inheritance declarations, algebraically typed languages (cf. coercions), un(i)typed languages.
One can also argue that the notion of “compatibility” only makes sense whenever one works with errors. Again, errors can be kept implicit or explicitly declared as exceptions. Without errors, every object could be substituted for another, and everything is compatible!
Why do I call them classes of objects and not just sets of objects? Technical, mathematical terminology: objects are non-well-founded, non-hereditary, graph-like constructions, while most people assume set theory to be well-founded, hereditary, and tree-like.