This is roughly what most arguments about Go come down to.
The Go side says things like: generics are too complicated and, hey, you don’t seem to need them either.
The generics side says no, they actually solve a very important problem, and without them you’re pushing the complexity of maintaining type safety downstream.
It’s hard to say who’s right (I stand on the generics side), but that’s what makes it such a vicious debate.
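For what it’s worth, the “complexity pushed downstream” point can be sketched in a few lines of Go. This is a minimal, hypothetical example (the `MaxAny`/`Max` names are mine, not from the thread): pre-generics code takes `interface{}` and forces every caller to do runtime type assertions, while a type parameter (Go 1.18+) moves that check back to the compiler.

```go
package main

import "fmt"

// MaxAny is the pre-generics style: it accepts anything, so the
// type check happens at runtime, at every call site.
func MaxAny(a, b interface{}) interface{} {
	if a.(int) > b.(int) { // panics at runtime if a or b is not an int
		return a
	}
	return b
}

// Max uses a type parameter (Go 1.18+): the compiler rejects
// mismatched or unordered types at the call site.
func Max[T int | int64 | float64 | string](a, b T) T {
	if a > b {
		return a
	}
	return b
}

func main() {
	m := MaxAny(3, 7).(int) // the caller must assert the result type back
	fmt.Println(m)          // 7
	fmt.Println(Max(3, 7))  // 7, no assertion needed
	// Max(3, "x") would not compile; MaxAny(3, "x") compiles and panics.
}
```

Without generics, every caller of `MaxAny` carries that assertion (and the panic risk); with them, the same complexity lives once, in the compiler’s hands.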
It’s not a binary thing. Maybe for you and many others it is, but that isn’t universally true. One can enjoy simple type systems while also enjoying more expressive ones. (I’ll count myself as an example.)
My statement has nothing to do with whether one can enjoy it or not; it has to do with the binary decision of having generics or not, and where that decision puts the complexity.
You’ve missed my point. You formulated generics into an “us vs. them” issue. I’m saying that it doesn’t have to be that way. One can take either side depending on the context, the problems one is trying to solve, and the trade-offs one wants to optimize for.
Oh, then I phrased my post wrong. I’m talking just about the Go discussion, which is a binary generics or no generics question.
The discussion about adding generics to Go is not binary, “generics or no generics”. The discussion is about finding an implementation of generics that fits well with the language’s goals.
That said, I agree with the general idea of your first comment.
Just to clarify, I know the discussion isn’t binary, but having generics is: they either exist or they don’t. The discussion is all about complexity, though, and where it should live.
I agree with that statement :)
I always find that to be the interesting tradeoff: implementation complexity versus interface complexity. It’s rare that you can afford simplicity in both places, so you often have to make a choice.
On the side of implementation simplicity there’s the “thin layer” argument and the “leaky abstraction” argument. Both of these essentially suggest that abstractions are often strange, untrustworthy things in their own right and that the solution is to make it such that the underlying logic of a clean, simple implementation drives the use of your program.
On the side of interface simplicity there’s the “abstract types” argument and the “higher-order reasoning” argument. Both of these essentially suggest that the ability to reason about programs depends upon modularization and separation of concerns, and that the solution arises from spending your mental-effort budget on the connections between components, and, equally, on isolating the parts of a program that are subject to implementation change.
I think over the long term it tends to be that the interface simplicity argument wins when it has a chance to settle in, but the implementation simplicity argument can create greater value more quickly and thus settle deeply into things like standards (which somehow, ultimately, end up satisfying neither form of simplicity!).
I’m really doing nothing more here than cribbing the old MIT/New Jersey style distinction that’s even referenced in this article, but I like thinking of it in these terms; I feel a bit more affinity for them this way.
I’ve been thinking about the implementation-vs-interface complexity tradeoff a bit recently after re-reading “Worse is Better” the other day. From the article:
…this is the kind of bug that appears as an emergent behaviour of component-based systems. Every component in the pipeline is working entirely correctly, in the sense that they’re all performing exactly the operation they were instructed to perform. The bug comes from the way the pieces have been joined together.
This was a big issue at a recent gig working on a microservices-type system. I ended up spending most of my test-writing time on end-to-end tests, with relatively little spent on unit tests. There were simply so many different services involved in a given object’s lifecycle that even when every service was correct by its own lights, mismatches in assumptions between services caused endless issues. Even comprehensive and rigorous unit tests for a given component didn’t give me any confidence in how the component would function as part of the larger system.
But I’ve worked on monolithic big-ball-of-mud projects too, and they are certainly no more appealing. I think one of the reasons why “Worse is Better” has remained so popular is that most of us have found ourselves on each side of the debate at different times. Interface simplicity sounds great until you are trying to diagnose why Handoff between your Mac and your phone only works when the phase of the moon is just so and you hold your phone over your head and wave it in circles. Implementation simplicity sounds great until you use a library that gives you two ways to pass in a JDBC URL which turn out to use totally different parsers with different ideas of validity, and you spend half a day poking through its guts trying to figure out why they are inconsistent only to discover that the library author regards this as a feature. (That one’s still fresh.)
I tend to come down on the interface-simplicity side, but it’s often hard to build reliable abstractions that don’t leak, and that difficulty is routinely underestimated; as a result there are plenty of simple-but-broken interfaces out there, which is kind of the worst of both worlds.