Submitted because just about every opinion in it is wrong, but Martin is still influential so we’re going to see this parroted.
Sadly yes. Most bizarre is that he seems to be directly contradicting some positions he’s held re:professionalism and “real engineering”.
A sampler to save people having to read through the thing:
If your answer is that our languages don’t prevent them, then I strongly suggest that you quit your job and never think about being a programmer again; because defects are never the fault of our languages. Defects are the fault of programmers. It is programmers who create defects – not languages.
At some point, we agreed to stop using lead paint in our houses. Lead paint is perfectly harmless–defect-free, we might even say–until some silly person decides it’s paintchip-and-salsa o'clock or sands without a respirator, but even so we all figured that maaaaybe we could just remove that entire category of problem.
My professional and hobbyist experience has taught me that if a project requires a defect-free human being, it will probably be neither on time nor under budget. Engineering is about the art of the possible, and part of that is learning how to make allowances for sub-par engineers. Uncle Bob’s complaint, in that light, seems to suggest he won’t admit the realities of real-world engineering.
You test that your system does not emit unexpected nulls. You test that your system handles nulls at it’s inputs. You test that every exception you can throw is caught somewhere.
Bwahahahaha. The great lie of system testing, that you can test the unexpected. A suitably cynical outlook I’ve seen posited is that a well-tested system will still exhibit failures, but that they’ll be really fucking weird because they weren’t tested for (in turn, because they weren’t part of the mental model for the problem domain).
We now have languages that are so constraining, and so over-specified, that you have to design the whole system up front before you can code any of it.
Well, yes, that sort of up-front design is the difference between engineers and contractors.
More seriously, while I don’t agree with the folks (and oh Lord there are folks!) who claim that elaborate type systems will save us from having to write tests, I do think that we can make tremendous advances in certain areas with these really constrained tools.
It’s not as bad as having to up-front design “the whole system”. We can make meaningful strides at the layer of abstractions and system boundaries that we normally do, we can quickly stub in and rough in those things as we’ve always done, and still have something to show for it.
I’ve discussed and disagreed at length with at least @swifthand about this, the degree to which up-front design is required for “Engineering” and the degree to which that is even desirable today–but something we both agree on is that these type systems do have a lot to offer in making life easier when used with some testing. That’s probably a blog post for another day though.
And so you will declare all your classes and all your functions open. You will never use exceptions. And you will get used to using lots and lots of ! characters to override the null checks and allow NPEs to rampage through your systems.
And furthermore, frogs will fill the streets, locusts will devour the crops, the water will turn to blood and sulfur will rain from the sky! You know, the average day of a modern JS developer.
More likely, you’ll start at the bottom, same as we’ve always done, and build little corners of your codebase that are as safe as possible, and only compromise in the middle and top levels of abstraction. A lot of people will write shitty unsafe code, but it’s gonna be a lot easier to check it automatically and say “Hey, @pushcx was drunk last night and made everything unsafe…maybe we shouldn’t merge this yet” than it is to read a bunch of tests and say “yep, sure, :shipit:”.
In general, this kinda feels like Uncle Bob is starting to be the Bill O'Reilly of software development–and that makes me sad. :(
For some languages that step is called compiling.
I’m generally not a fan of the “but types are tests”-argument, but you rightly call that out.
“Nullness” is something that can be modeled for the compiler to easily analyse, so I don’t understand why he calls that out. (especially as non-null is such a prevalent default case and most errors of not passing a value are accidents).
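To make that concrete, a small Rust sketch (the `find_user` function is invented purely for illustration): when absence lives in the type, the compiler refuses to let you touch the value until the empty case is handled.

```rust
// Rust has no null for ordinary references; absence is opt-in via Option<T>.
// The compiler rejects any attempt to use the inner value without first
// handling the None case.

fn find_user(id: u32) -> Option<&'static str> {
    // Hypothetical lookup: only id 1 is known.
    if id == 1 { Some("alice") } else { None }
}

fn main() {
    // The match must cover both cases, or the program will not compile.
    match find_user(2) {
        Some(name) => println!("found {}", name),
        None => println!("no such user"),
    }

    // Or collapse the absent case to a default, like Kotlin's `?:`:
    let name = find_user(1).unwrap_or("anonymous");
    println!("{}", name);
}
```

The point is the direction of the default: you don’t check for null where you remember to, you handle absence wherever the type says it can occur.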
I wish I could upvote this comment a thousand times. Concise, funny, but also brutally true. You nailed it.
… plus a thorough type system lets the compiler make a whole bunch of optimizations which it might not otherwise be able to do.
Thank you for the thorough debunking I didn’t have the heart for.
Clean Code was a great, rightly influential book. But the farther we get from early 90s tools and understandings of programming, the less right Martin gets.
This post makes total sense if your understanding of types and interfaces is C++ and your understanding of safety is Java’s checked exceptions and both are circa 1995. I used them, they were terrible! But also great because they recognized a field of potential problems and attempted to solve them. Even if they weren’t the right solution (what first system is?), it takes years of experience with different experiments to find good solutions to entire classes of programming bugs.
This article attacks decent modern systems with criticisms that either applied to the problem 20 years ago or fundamentally misunderstand the problem. His entire case against middle layers of his system needing to explicitly list exceptions they allow to wander up the call chain is also a case in favor of global variables:
Defects are the fault of programmers. It is programmers who create defects – not languages.
Why prevent goto, pointer math, null? Why provide garbage collection, process separation, and data structures?
I guess that’s why this article’s getting such a strong negative reaction. The argument boils down to Martin not understanding the benefits of features that are now really obvious to the majority of coders, then writing really high-flying moral language opposed to that understanding.
It’s like if I opened a news site today to read an editorial about how not using restrictive seatbelts in cars is the only sane way to drive, and drivers who buckle their kids into car seats are monsters for deliberately crashing their cars. It’s so wrong I can barely figure out where the author went astray, much less hope to explain the misunderstanding, and the braying moral condemnation demolishes my desire to engage. Martin’s really wrong, but he’s not working towards shared understanding, so he’s only going to get responses from people who think that makes for a worthwhile conversation.
Java’s checked exceptions and both are circa 1995. I used them, they were terrible!
Interestingly for me, I came from a scripting language background and hated java checked exceptions with a passion. Because they felt tedious. It seemed lame that a large part of my programming involved IDE generated lists of exceptions. As I got more experienced and started writing software that I really want to not crash, I started spending a lot of mental effort tracking down what exceptions could be thrown in python and making sure I caught them all. Relying on/hoping documentation was accurate. I began to yearn for checked exceptions.
Ironically it seems like in java land they’ve mostly gone the route of magic frameworks and unchecked exceptions. So things like person.getName() can be used easily without worrying about whether or not the underlying runtime generated bytecode is using a straightup property access or if this attribute is being lazily initialized.
It seems like one of the simplest ways to retain your sanity is to uncouple I/O from your values and operate on simple Collections of POJOS. This gets into the arena of FP and monads, which use language level features to force this decoupling.
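A hedged sketch of that idea in Rust (the `Person` type and its fields are invented for illustration): parse at the I/O boundary, then keep the core logic a pure function over plain values in a simple collection.

```rust
// Plain data: no lazy initialization, no hidden I/O behind a getter.
struct Person {
    name: String,
    age: u32,
}

// The core logic is a pure function over a plain collection:
// trivially testable, with no framework or runtime magic involved.
fn adult_names(people: &[Person]) -> Vec<&str> {
    people
        .iter()
        .filter(|p| p.age >= 18)
        .map(|p| p.name.as_str())
        .collect()
}

fn main() {
    // In a real program this Vec would come from the I/O boundary
    // (a file, a database, a request body), parsed once, up front.
    let people = vec![
        Person { name: "Ada".into(), age: 36 },
        Person { name: "Tim".into(), age: 12 },
    ];
    assert_eq!(adult_names(&people), vec!["Ada"]);
}
```

Once the decoupling is done, whether `getName()` is a field read or something lazier is the boundary’s problem, not the logic’s.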
I also prefer the checked exception approach. Spent a lot of time with exceptions being thrown uncaught, got tired of it.
I would say that Go has shown there is a middle ground somewhere between 100% type-proven safety and unsafe yet efficient paradigms.
I’m pretty fond of Rust, or Haskell, but also enjoy less strict tools like JS or Ruby. Of course, I would rather like it if my auto-cruiser were written in Rust rather than Node, but one tool’s success does not mean the others are trash: I may be mistaken, but if Martin’s point is “type-safety sucks”, it seems you are just saying “non-type-safety sucks more”. I’m not convinced by either argument.
My point was that the people had deliberate reasons for the features they included or removed. I’m repeatedly asking “why” because Martin’s article dismisses the creators' reasons with an argument about personal responsibility and by characterizing them as punishments. The arguments Martin makes against these particular features also apply broadly to features he takes for granted.
I was writing entirely on the meta level of flaws in the article, not trying to argue for a personal favorite blend of safety/power features.
Yes. This. Exactly. Evolutionary language features AND engineering discipline. No need for either or, that’s just curmudgeonly.
then I strongly suggest that you quit your job and never think about being a programmer again; because defects are never the fault of our languages. Defects are the fault of programmers. It is programmers who create defects – not languages.
Another argument to be made is productivity, since he brought up jobs. Productive programmers create maximum output, in terms of correct software, with minimum labor. That labor includes both time and mental effort. The things good type systems catch are tedious things that take a lot of time to code or test for, and they get scattered all throughout the codebase, which adds effort when changing things in maintenance mode. Strong typing of data structures and interfaces will save time versus manually managing all that.
That means he’s essentially saying that developers using tools that boost their productivity should quit so less productive developers can take over. Doesn’t make a lot of business sense.
I wrote this a couple weeks ago, but I figure it’s worth repeating in this thread. I wrote a prototype in Rust to determine if using conjunctive-normal form to evaluate boolean expressions could be faster than naive evaluation. I created an Expr data type that represents regular, user-entered expressions and CNFExpr which forces conjunctive-normal form at the type system level. In this way, when I finished writing my .to_cnf() method, I knew that the result was in the desired form, because otherwise the type system would have whined. Great! However, it did not guarantee that the resulting CNFExpr was semantically equivalent to the original expression, so I had to write tests to give myself more confidence that my conversion was correct.
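To make the parent’s point concrete, here is a toy sketch in the same spirit (the names and representation are my own, not their actual code): if CNF is represented as a conjunction of clauses, each clause a vector of literals, then any value of that type is in CNF by construction. Whether the conversion preserves meaning still has to be tested.

```rust
enum Expr {
    Var(usize),
    Not(Box<Expr>),
    And(Box<Expr>, Box<Expr>),
    Or(Box<Expr>, Box<Expr>),
}

/// A literal: a variable index plus its polarity.
#[derive(Clone, Copy)]
struct Lit { var: usize, positive: bool }

/// Conjunction of clauses; each clause is a disjunction of literals.
/// The type cannot represent anything that isn't in CNF.
struct CnfExpr(Vec<Vec<Lit>>);

fn to_cnf(e: &Expr) -> CnfExpr {
    // `negate` pushes negation inward (De Morgan) as we recurse.
    fn go(e: &Expr, negate: bool) -> Vec<Vec<Lit>> {
        match e {
            Expr::Var(v) => vec![vec![Lit { var: *v, positive: !negate }]],
            Expr::Not(inner) => go(inner, !negate),
            // A conjunction's CNF is just the union of its parts' clauses.
            Expr::And(a, b) if !negate => {
                let mut cs = go(a, false);
                cs.extend(go(b, false));
                cs
            }
            // !(a | b) == !a & !b: also a union of clauses.
            Expr::Or(a, b) if negate => {
                let mut cs = go(a, true);
                cs.extend(go(b, true));
                cs
            }
            // A disjunction (or negated conjunction) distributes:
            // take the cross product of the two clause sets.
            Expr::Or(a, b) | Expr::And(a, b) => {
                let (ca, cb) = (go(a, negate), go(b, negate));
                let mut out = Vec::new();
                for x in &ca {
                    for y in &cb {
                        let mut clause = x.clone();
                        clause.extend(y.iter().copied());
                        out.push(clause);
                    }
                }
                out
            }
        }
    }
    CnfExpr(go(e, false))
}

fn eval(e: &Expr, assign: &[bool]) -> bool {
    match e {
        Expr::Var(v) => assign[*v],
        Expr::Not(i) => !eval(i, assign),
        Expr::And(a, b) => eval(a, assign) && eval(b, assign),
        Expr::Or(a, b) => eval(a, assign) || eval(b, assign),
    }
}

fn eval_cnf(c: &CnfExpr, assign: &[bool]) -> bool {
    c.0.iter().all(|cl| cl.iter().any(|l| assign[l.var] == l.positive))
}

fn main() {
    // (x0 | x1) & !x2: the type guarantees the *form* of the result;
    // only checking assignments guarantees the *meaning*.
    let e = Expr::And(
        Box::new(Expr::Or(Box::new(Expr::Var(0)), Box::new(Expr::Var(1)))),
        Box::new(Expr::Not(Box::new(Expr::Var(2)))),
    );
    let cnf = to_cnf(&e);
    for bits in 0..8u8 {
        let assign: Vec<bool> = (0..3).map(|i| bits & (1u8 << i) != 0).collect();
        assert_eq!(eval(&e, &assign), eval_cnf(&cnf, &assign));
    }
    println!("semantically equivalent on all assignments");
}
```

The type system proves the shape for free; the exhaustive loop in `main` is the test that the conversion preserves semantics. That division of labor is exactly what the parent describes.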
Testing and typing are not antagonists, they’re just different tools for making better software, and it’s extremely unnerving that someone like Uncle Bob, who has the ear of thousands of programmers, would dismiss a tool as powerful as type systems and suggest that people who think they are useful find a different line of work.
Thanks for the summary. Seems The Clean Coder has employed some dirty tricks to block Safari’s Reader mode, making this nigh on unreadable on my phone.
As a modern JS developer, I’ve started using Flow and TypeScript and have found that the streets have far fewer frogs now :)
More like the Ann Coulter of programming, inasmuch as it is increasingly clear that they spout skin-deep, ridiculous lines of reasoning to trigger people so that they get more publicity!
Remember, when one retorts the troll has already won. Don’t feed the troll!
A passing thought
defects are never the fault of our languages. Defects are the fault of programmers. It is programmers who create defects – not languages.
This brings to mind one of Henry Baker’s taunting remarks about our computing environments:
computer software are “cultural” objects–they are purely a product of man’s imagination, and may be changed as quickly as a man can change his mind. Could God be a better hacker than man?
Has it not occurred to him that these languages come from programmers themselves? Of course it has. So sure, defects are always the responsibility of people. Some are the fault of the application programmer; some are the fault of the people responsible for the language design. (And when one is knee-deep in writing some CRUD app whose technological choices are already set in stone, determining whose fault it is is of little use.)
The entire point of software is to do stuff that people used to do by hand. Why on earth should we spend boatloads of hours writing tests to prove things that can be proved in milliseconds by the type system? That’s what type systems are for. If we were clever enough to write all the right tests all the time, we’d be clever enough to just not introduce NPEs in the first place.
I had the same reaction reading this. He’s off his rocker. The whole point of Swift being so strongly typed is that we’ve learned if the language does not enforce it, then it’s not a matter of if those bugs will happen but how often we will waste time dealing with them.
The worst part to me is that right off the bat he recognizes these languages aren’t purely functional; implying that there is a big difference between a language that enforces functional programming and one that doesn’t. Of course there is, and the same thing goes for typing.
He has just posted a follow up… http://blog.cleancoder.com/uncle-bob/2017/01/13/TypesAndTests.html
Alas, he says this…
Types do not specify behavior. Types are constraints placed, by the programmer, upon the textual elements of the program. Those constraints reduce the number of ways that different parts of the program text can refer to each other.
No, Bob: types are a name you give to a bundle of behaviour. It’s up to you to ensure that the type you name has the behaviour you think it has.
But whenever you refer to a type, the compiler ensures you will get something with that bundle of behaviour.
What behaviour exactly? That’s the business of you to decide and tests to verify or illustrate.
Whenever you loosen the type system… you allow something that has almost, but not quite, the requested behaviour.
In every case I have investigated “Too Strict Type System”, I have come away with the feeling the true problem is “Insufficiently Expressive Type System” or “Type System With Subtle Inconsistencies” or worse, “My design has connascent coupling, but for commercial reasons I’m going to lie to the compiler about it rather than explicitly make the modules dependent.”
In which community is he influential, if I may ask? I’ve only learned of him through Lobsters.
I know him as a standard name in the Agile and Ruby communities, I think he’s well-known in Java but am not close enough to it to judge.
My college advisor loved talking about him and referencing him, but I think he’s mostly lost his influence with programmers today. At least, most people I know generally disagree with everything he’s written in the past decade.
Such blatant refusal (“everything is wrong”) seasoned with mockery (“parroted”) is exactly what has been stopping me from writing posts on this very topic.
Declaring that the responsibility for your inaction belongs to strangers leaps over impolite into outright manipulation. I pass.
This reads a lot like “get off my lawn.”
I’m unsure why he’s so upset about Swift’s optionals or Kotlin’s null shorthand. Once you grok the syntax/operators, they operate intuitively.
var l = s?.characters.count ?? 0
In Kotlin (length is a property, not a method — c'mon Bob):
var l = s?.length ?: 0
Both are completely safe. After either of those statements you’re guaranteed to have l assigned an integer, and in either case it will be the correct number of characters.
Are either of these examples worse than this:
size_t l = strlen(s);
Maybe, maybe not. But if s is null we now have undefined behavior. Personally, I’ll take the type safety.
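For comparison, roughly the same one-liner in Rust (a sketch, assuming `s` is an optional string): the fallback is explicit, and unlike `strlen` on a null pointer there is no way to reach undefined behavior.

```rust
fn main() {
    // `s` may be absent; the type says so, and the compiler enforces it.
    let s: Option<&str> = None;

    // Equivalent of Swift's `s?.characters.count ?? 0` and
    // Kotlin's `s?.length ?: 0`.
    let l = s.map(|s| s.chars().count()).unwrap_or(0);
    assert_eq!(l, 0);

    let s: Option<&str> = Some("hello");
    let l = s.map(|s| s.chars().count()).unwrap_or(0);
    assert_eq!(l, 5);
}
```

Same grok-the-operators cost as the Swift and Kotlin versions, same guarantee: `l` is always a well-defined integer.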
Maybe, maybe not. But if s is null we now have undefined behavior. Personally, I’ll take the type safety.
To be fair, it could also be a totally valid pointer whose null terminator is missing, ending down the road in a read of a protected page.
The shitty thing is that Pascal and Fortran and Ada and other contemporaries of C already knew the right answer to this problem.
I agree with you. But, using Bob’s ridiculous argument, the developer should be in charge of checking for the terminator and adding it if need be — or trusting that it will always be there.
Given that I have seen and written a lot of terrible pointer math, I remain unconvinced that testing for the terminator, possibly checking the remaining allocated memory, possibly calling realloc(), etc., before you get the length of a string is better than optional syntax.
And what is it that programmers are supposed to do to prevent defects? I’ll give you one guess. Here are some hints. It’s a verb. It starts with a “T”. Yeah. You got it. TEST!
The closed-mindedness of this statement astonishes me. How to reliably write correct software is by and large an unsolved problem, especially when you add the constraint “cheaply enough”. Anything that increases the likelihood and reduces the cost of finding errors must be considered welcome. Rejecting one approach because we are too emotionally invested in another is dangerously irresponsible.
This is a lesson I had to learn the hard way. At some point in time, I was rabidly anti-testing. It offended my sensibilities (and, frankly, it still does) to run my code against a few test cases, when I had already proven it correct on paper for all infinitely many cases. Until one day I made an error while transcribing a proven correct program: I replaced a variable called v with another called w. As luck would have it, both variables were of type int, so the type checker couldn’t find the error. The consequences were completely hilarious in retrospect, but back then I was furious.
All these constraints, that these languages are imposing, presume that the programmer has perfect knowledge of the system; before the system is written.
Now we know who hasn’t experienced the joy of refactoring typeful code.
And how do you avoid being punished? There are two ways. One that works; and one that doesn’t. The one that doesn’t work is to design everything up front before coding. The one that does avoid the punishment is to override all the safeties.
This contradicts my experience. Although I don’t use Swift or Kotlin, I use languages that are arguably even more typeful than them (Standard ML and Rust), and the last thing that would cross my mind is to work around the safety checks. Au contraire! I design my programs so that types catch as much as can reasonably be caught. (And, of course, manually prove correct and test the rest.)
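As an illustration of that design style (a made-up example, not from any real codebase): encode each legal state as its own variant so that illegal combinations can’t be constructed, and let exhaustiveness checking flag every spot that needs updating when a state is added.

```rust
// Instead of a struct full of maybe-set fields that only make sense in
// certain combinations, each legal state is its own enum variant:
// the illegal combinations simply cannot be constructed.
enum Connection {
    Disconnected,
    Connecting { attempt: u32 },
    Connected { session_id: u64 },
}

fn describe(c: &Connection) -> String {
    // Exhaustiveness: add a variant later and every match like this one
    // becomes a compile error until it is handled. No test is needed to
    // discover the omission.
    match c {
        Connection::Disconnected => "offline".to_string(),
        Connection::Connecting { attempt } => format!("connecting (try {})", attempt),
        Connection::Connected { session_id } => format!("session {}", session_id),
    }
}

fn main() {
    println!("{}", describe(&Connection::Connected { session_id: 42 }));
}
```

Nothing here needed up-front whole-system design; it’s just a local choice that lets the checker carry part of the correctness burden.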
This is a lesson I had to learn the hard way…
Knuth’s hilarious chestnut “beware of bugs in the above code; I have only proved it correct, not tried it” is, as typical, both useful and on-point. =)
Yeah, in retrospect it’s obvious, but sometimes one has to see to believe…
I think this Knuth quote is going to get really annoying in a decade or two when dependent types and proof systems mature from academic experiments into industry tools.
Dependent types are beautiful, but there’s a real danger that we end up locking ourselves into a mindset where types are the only legitimate verification technique. One of the main points of my original message is that we must keep our options open.
Types solve some problems beautifully:
But types also have serious weaknesses:
Type systems tend to be more convenient for proving things about (“no program in this language ever deadlocks”) than proving things in (“this specific program doesn’t deadlock”). So type systems normally tackle the kinds of problems that language designers, rather than language users, want to rule out by construction.
Type theory is fundamentally based on natural deduction: typing rules are deduction rules, not axioms. This is great for implementing a type system on a computer (which is dumb and can only execute pre-programmed rules anyway), but awful for humans to calculate in (since natural deduction proofs tend to be looooong, whereas axiomatic systems can be explicitly designed to shorten proofs).
The people using Esterel SCADE, SPARK Ada, Atelier-B, and Mercury are already annoyed. They figure it’s easier to build productivity or usability on top of their tooling for safe software than to turn usable, productive crud into something safe. Most industry either disagrees or refuses to really assess the tooling, consistently for years or over a decade depending on the method. Hence the… aggravation.
I think of Rust/SML/Haskell as free editions of 6-figure installs of Coverity/CodeSonar/Klocwork. And comparing a reasonably sized well-typed program’s maintenance reliability/cost vs a puddle of Python (etc) is just laughable.
Do you have a link to that? A couple searches don’t turn up anything.
Lol. That’s one way to look at it. Might even restate it similarly in the future when marketing such languages.
Regarding Mercury, it’s like a better Prolog with functional programming and performance boost.
At least one company uses it below. Many companies use Prolog, since it’s essentially executable specifications for the domain. I’m sure Mercury could have similar use.
Oh, it’s that Mercury! I was thinking it was a provable software framework. Nifty! I’ll take a look into it again, I keep wanting to write Prolog for things, but get stuck in the “boring business code that isn’t easily statable as horn clauses” bit.
And again, I’m not saying it’s verified code so much as cheating around that a bit by executing your specs in the logic. Changing the specs also requires no recode step, given that property.
One other thing that might help you is to remember there can be error in any technique of quality assurance. It can come from a lot of places; the important thing is that it might show up. Knowing this, it’s wise to have some redundancy, with one technique catching problems another missed; that counters a lot of it. It’s why the A1 class of the Orange Book required formal specs, proofs, reviews, info-flow analysis, testing, and pentesting. In high-assurance systems, authors would find that each of these caught at least one unique defect the others missed. Precise specs, human reviews, and tests discovered the most, consistently over time, at reasonable cost. Proofs discovered the most obscure or corner cases, at high cost. Note that type systems are a form of formal specification, but way weaker than what they were doing: quick and dirty specs that catch common, small issues.
Exactly. Help in finding errors must be appreciated from wherever it might come, because problems can come from a lot of sources too. Back then, I was so obsessed with one tool (proofs) that handles some problems (coming up with the right design) that I had neglected that other problems were possible too (in this case, making sure that the implementation agrees with the design).
This is absurd. The same arguments can be made in favor of getting rid of variables: did you write [ebp-12] instead of [ebp-16]? Your tests should have caught this!
Tests have at least two goals: quality assurance and design guidance. If you look at the problem from this broader perspective it’s not surprising that there are other tools that contribute to overall quality (e.g. types).
It’s a cruel irony that this man is rich because of his opinions.
Now, ask yourself why these defects happen too often. If your answer is that our languages don’t prevent them, then I strongly suggest that you quit your job and never think about being a programmer again; because defects are never the fault of our languages.
What even is there to say in response to such a statement? Virtually nobody with a background in PL believes this, should we all quit? Bob often presents himself as an authority when he shouldn’t, which I consider appallingly unprofessional.
There’s so much to unpack here, but I’m going to limit it to one.
These languages are like the little Dutch boy sticking his fingers in the dike. Every time there’s a new kind of bug, we add a language feature to prevent that kind of bug. And so these languages accumulate more and more fingers in holes in dikes. The problem is, eventually you run out of fingers and toes.
This is written right after complaining about null. It’s actually more work to add null to a type system. To use his analogy, designers intentionally poke holes (null) into their dike (type system), because it has been (incorrectly) perceived as more convenient for the end user.
I think he exclusively codes in machine code for architectures without MMU’s. Or he’s a hypocrite. ;)
So, I have to point out that it’s not the “Kotlin documentation” that calls null the billion dollar bug. They’re quoting the guy who invented null references, Tony Hoare, who called it the “Billion Dollar Mistake.” It’s kind of amazing that “Uncle Bob” is writing an article condemning language protections against NPEs without understanding that null references were a choice, and without knowing that Tony Hoare considers that choice a mistake.
I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn’t resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.
it’s not the “Kotlin documentation”
Kotlin’s type system is aimed at eliminating the danger of null references from code, also known as The Billion Dollar Mistake.
They’re quoting Hoare.
Why did the nuclear plant at Chernobyl catch fire, melt down, destroy a small city, and leave a large area uninhabitable? They overrode all the safeties. So don’t depend on safeties to prevent catastrophes. Instead, you’d better get used to writing lots and lots of tests, no matter what language you are using!
That analogy doesn’t make any sense. How exactly would “tests” have prevented the Chernobyl disaster? Or on the flipside, how do you override the safeties of modern languages? unsafePerformIO? That seems like a pretty terrible reason to question the entirety of your type system.
Ironic choice of example given that the Chernobyl disaster was literally caused by people executing a test:
During a hurried late night power-failure stress test, in which safety systems were deliberately turned off, a combination of inherent reactor design flaws, together with the reactor operators arranging the core in a manner contrary to the checklist for the stress test, eventually resulted in uncontrolled reaction conditions[.]
That’s a brilliant spin on it.
Kind of just feels like the writing of someone who’s backed the testing horse so much that he has to perceive change as a threat. It feels closed-minded and 2-dimensional. He’s forced to find ways to demonize language features which reduce reliance on testing so he contorts his whole worldview to make it so.
Whose job is it to manage the nulls. The language? Or the programmer?
It’s the programmer’s, of course. Not because choosing “the language” is false, but instead because that’s not a real choice.
The programmer chooses the system, methodology, and discipline to produce the end artifact. Their choice may be to use a language which helps them statically. Now we can ask whether a language’s static systems are helpful in their goals. We don’t have to force a dichotomy between types and testing.
If you’re choosing a language for a team then you’re as much choosing a culture as you are a technology. This choice of culture will permeate your team as it adopts the language and grows. Of course, the leaders of the team and the team itself are still the primary drivers of this culture—the language is merely a tool. Again.
To arrive at the conclusions Uncle Bob is drawing here, you seem to have to take a lot of responsibility off the programmer. You have to start believing that decisions are made ethically, because something is right or wrong, instead of via a process of having a bunch of options and (over time) finding ways to use them effectively to reach your end.
The up-front-design complaint is bizarre: it’s like he’s unwilling to contemplate refactoring (but only for these particular language features).
Also, I can squint and see the possibility of a meaningful discussion about the tradeoffs involved in Swift’s exception handling and Kotlin’s open, but marking nullability is such a simple win I’m really amazed to see him complaining about it.
Which is ironic since he’s a frequent proponent of refactoring.
Whose job is it to manage that risk? Is it the language and associated tooling’s job? Or is it the over worked, under staffed, programming department’s, who’s just trying to get the product out the door, job?
I hope it’s the tooling’s job.
Given a chance to take the shorter path, the easy path, we will take it to get the product out the door. Languages and tools that enforce and encourage better practices are little points of pain now, but greater sources of joy later.
I admire Uncle Bob’s work, but I agree. I think he should consider taking a step back from his “Stop wasting time inventing new languages!” rant and consider that we can have our cake and eat it too. It’s quite possible to exercise superior engineering discipline while leveraging a new programming language. Asserting that engineers lack discipline because they’re getting distracted by the bright and shiny is a red herring.
I haven’t used any programming languages with the nullable concept he’s railing against, but IMO innovating ways to protect programmers from needless bugs isn’t the dark path, it’s called evolution, and if we abandon it, we will be lost.
Discipline should be promoted for its own sake. Slamming new technologies because you’re cozy with Java and can’t see past that is… Unfortunate.
This article is ridiculous. You can believe in testing and still want your language to do compile-time type checking.
TLDR: You need to test your code, and it’s not the language’s job to prevent you from making programming mistakes.
I totally agree with this. The whole notion that statically typed languages somehow lead to fewer bugs than dynamic languages is completely false. Statically typed languages are useful because of performance gains; nobody should use a static language just to reduce bugs. You write tests to reduce bugs, and you do this in exactly the same manner for dynamic and statically typed languages.
Sure languages can prevent many bugs. Think of the “let” syntax in JS, Python not allowing assignment in a while clause. There are many cases where the language reduces the scope for bugs. I’d definitely make more bugs if I had to use jump statements instead of a nice and simple if else :)
What Martin is talking about here is the use of static typing to prevent bugs. I completely agree with him. Static typing doesn’t prevent your code from being buggy. You still need adequate testing. That testing will also pick up on any type issues, so there is little difference between testing a statically typed or a dynamic language.
Statically typed languages have certainly prevented bugs for me. Bugs that dynamic languages and unit tests would not have caught. I’m not sure where you get the idea that they wouldn’t. In fact you contradict yourself in the very next paragraph, so I’m a little confused about what you are trying to say here.