I think that safe refactoring is probably the strongest case for strong static typing.
It’s not that it’s impossible to write reliable code in dynamically typed languages; it’s clearly quite possible. Most businesses just refuse to pay for it: they want more features, more cheaply, quality be damned. That creates a culture of sloppiness, and all of that is just as possible in a Haskell shop as in a Ruby or Java shop. The difference is that the compiler slows you down and guards you against unexpected breakages (especially bugs “between working code”) that you might not even be aware of. With dynamic typing and business-as-usual coding practices, you instead tend to get an accumulation of unsafe refactorings and technical debt.
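To make the “guards you against unexpected breakages” point concrete, here’s a toy sketch (all names invented for illustration): with exhaustiveness warnings turned on, a refactor that grows a sum type makes GHC point at every function that no longer covers all the cases, instead of letting the gap surface at runtime.

```haskell
{-# OPTIONS_GHC -Wall #-}

-- A toy payment type; the names are made up for this example.
data Payment = Card | Cash

describe :: Payment -> String
describe Card = "card"
describe Cash = "cash"

-- If a later refactor adds a constructor, say `Transfer`, then with
-- -Wall (or -Werror=incomplete-patterns) the compiler flags every
-- function like `describe` that no longer handles all constructors.
-- In a dynamic language the same change typically fails only when
-- the new case is actually hit in production.
```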
I’ve given up on trying to “sell Haskell”. If the code is going to live long (2+ years) and projects are going to be multi-developer, then you either need static typing or really strong programmers. I tend to think that Haskell’s the best choice for long-haul projects, but I also recognize that the quality of programmers is more important than the language itself.
I’ll second all that.
Refactoring is the biggest benefit I’ve personally felt when working in Haskell: if a possible complicating factor isn’t mentioned in the type signature, I don’t have to worry about it. (Except for the possibility of functions not being total. Oh well.)
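A small sketch of what “not mentioned in the type signature” buys you during a refactor (function names here are invented): a pure signature rules out whole categories of complicating factors, and anything that touches the outside world has to say so in its type.

```haskell
-- A pure function: the signature promises no I/O, no hidden state,
-- no exceptions thrown deliberately. (Totality, as noted above, is
-- the one thing it doesn't promise.)
normalize :: [Double] -> [Double]
normalize xs = map (/ total) xs
  where total = sum xs

-- Anything that can touch the outside world must say so in its type.
-- This is a stand-in for a real lookup, e.g. an HTTP call:
fetchRates :: IO [Double]
fetchRates = pure [1.0, 0.85, 110.0]

-- When refactoring a caller of `normalize`, you never have to ask
-- "does this secretly read a config file or mutate a cache?" --
-- the signature already answers that.
```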
Absolutely agreed that maintenance problems are ultimately caused by lack of good engineering practices, such as investing resources into code health. I want to add a minor caveat that good code shouldn’t require a highly experienced programmer to maintain it, and that hiring and retaining experienced people is easier when you can show that you care about the engineering values - it really does come down to that.
I haven’t yet convinced anyone who wasn’t already sold on it, but I’ve been trying to promote an idea that “engineering values” are the technology equivalent of what “production values” are to a movie: If you don’t care about them, things wind up being visibly shoddy in all sorts of ways.
> I’ve been trying to promote an idea that “engineering values” are the technology equivalent of what “production values” are to a movie: If you don’t care about them, things wind up being visibly shoddy in all sorts of ways.
I’d love to hear more about this. It’s a great analogy and I agree. There’s a difference between general morale and technical morale. General morale is, “Do people like it here?” That’s fickle. Have a layoff, and general morale goes bye-bye. Give out raises and bonuses, and it goes up. Technical morale is, “Do people trust the stuff we build here?” It’s much slower to change. The first is organizational cohesion and esteem; the second is organizational self-efficacy.
I think that the current generation of tech CEOs sees “visibly shoddy” as an internal problem. They’re willing to make compromises that hurt actual engineering capability but improve the optics, e.g. open-plan offices (which look busy to investors but are unhealthy and counterproductive). They tend to assume that they’ll sell the company or IPO before things start falling apart. Unfortunately for them, they’re often wrong. This isn’t a great time to IPO, for starters. Still, plenty of these founders and CEOs and CTOs know that their policies (focus on the young, low wages, long hours and micromanagement) lead to shoddy engineering. They just don’t think that they’ll “get caught”, because they’re used to being promoted away from the messes they’ve generated (it’s worked thus far, for them; that’s why they’re founders and middle managers).
I think you said most of what I would! Framing it as trust is a new point which I’ll try to incorporate, because it’s exactly right. I’d also echo what you said about engineering messes being long-term problems, and about there being more awareness than there used to be that they exist; that awareness just doesn’t always translate into taking them seriously.
The only thing I usually talk about that you left out is the scope of what these values are. It’s more than just code health; it’s whether tools are appropriate for the things they’re being used for; whether security and reliability are taken seriously; whether there are effective means to escalate when something shouldn’t be the way it is. It’s how much headcount the company invests into infrastructure and into operations, whether infrastructure is pleasant to use, and whether infrastructure teams are responsive to the needs of their client teams. It’s being able to say “this would get it done fast but cause trouble later, so we should do it the hard way” and be taken seriously. It’s even recognizing which decisions are going to be hard to change later vs. which ones can be made based on somebody’s whim.
I go back and forth on what to say about decision-making process - data-driven, personality-driven, some mix? - and about other factors which are less directly about engineering, and how they relate.
> Most businesses just refuse to pay for it: they want more features, more cheaply, quality be damned. That creates a culture of sloppiness.
This is the argument used by John Carmack for Haskell, roughly “given enough time and/or a large enough team, any code that compiles will get checked in”.
I agree that these are all great benefits of using Haskell, but I’ve always felt that the tooling for the language falls short. Refactoring should be where Haskell shines, yet no IDE or IDE plugin I’ve tried seems to handle renaming functions or modules properly across files. I shouldn’t have to set up 10 pages of Vim plugins to get a decent workflow going.
Oh, definitely. I think many Haskellers agree, but nobody wants to do the work…