1. 18

For the love of God, do we have to label 50 year old software methods with some misused term from Category Theory?

1. 22

The term “monad” does apply here though, albeit without its full power. We should be glad to see that old techniques are being understood from another angle. Understanding monads has helped me see lots of patterns in code I write every day. It’s not merely a useless theoretical construct.

1. 6

I’m interested in hearing about what insights you got from “monad”. To me, it’s just pretentious.

1. 10

For me, monads are a concept that lets me think about computation that carries baggage from one statement to the next. The baggage can be errors, a custom state object, or context related to IO or async computations. Different things, but a similar shape in code. Recognizing the monad concept helps me understand the contours of an implementation, and it also helps me generate better designs. It goes both ways. I do think there’s value in using the word. It’s a way to compact thought.
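To make the error-baggage case concrete, here is a minimal sketch (my own illustration, not the commenter’s code; `safeDiv` and `pipeline` are made-up names) using Haskell’s Maybe:

```haskell
-- Sketch: the Maybe monad threads "has anything failed yet?" baggage
-- between statements, so each step can assume its input is valid.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Each step runs only if the previous one succeeded; the plumbing
-- (checking for Nothing) lives in the Monad instance, not here.
pipeline :: Int -> Maybe Int
pipeline n = do
  a <- safeDiv 100 n
  b <- safeDiv a 2
  safeDiv b 1
```

Swap Maybe for Either, State or IO and the do-block keeps the same shape, which is exactly the “similar shape in code” being described.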

1. 6

You may have reasons to dislike Haskell, but that shouldn’t mean you need to deprive yourself of the benefits of these fundamental concepts. Take the signatures of the following functions:

($)      ::                                     (a -> b) ->   a ->   b
(<$>)    ::                      Functor f =>   (a -> b) -> f a -> f b
(<*>)    ::                  Applicative f => f (a -> b) -> f a -> f b
(=<<)    ::                        Monad f => (a -> f b) -> f a -> f b
foldMap  ::         (Foldable t, Monoid m) => (a -> m  ) -> t a -> m
foldMap  ::     (Foldable t, Monoid (f b)) => (a -> f b) -> t a -> f b
traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)

These are all different ways of combining a function that may have some super powers (in a functor context, producing a functor context etc.) with some value that may have some super powers and obtain something with super powers. And note how you can combine them together to build a composite value. By the way, there are infinitely many like these, and new useful ones are discovered all the time.
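As a hedged illustration (the example names are my own), here are a few of these combinators specialized to one concrete functor, Maybe:

```haskell
-- A function whose result carries a "super power": possible failure.
half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing

ex1, ex2, ex3 :: Maybe Int
ex1 = (+1) <$> Just 4        -- Functor: apply a plain function inside
ex2 = Just (+1) <*> Just 4   -- Applicative: the function is wrapped too
ex3 = half =<< Just 4        -- Monad: the function produces a wrapped result

ex4 :: Maybe [Int]
ex4 = traverse half [2, 4, 6]  -- every element succeeds, so the list succeeds
```

The same four lines work unchanged if Maybe is replaced by lists, Either, IO, and so on; that substitutability is the reusability being claimed.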

Having an intuition for these combinators, combined with a knowledge of common types that satisfy Functor, Applicative, Monad, Foldable, Monoid and Traversable, helps you write reusable code. Instead of tying down the implementation to a specific combination of concepts, you recognize the actual pattern the first time, so your code becomes easier to understand* and reusable from the start, with less typing!

(*) Easier to understand for someone who has an intuition for these concepts. Because if you see a function with signature Applicative f => f A -> B -> f C, you immediately know that the function may decide to invoke the f effect you gave it (f A) a number of times and it may base that decision on B and it may also use your A and B to construct the C, but it’s impossible for it to use the A to decide on how to invoke the effects of f.

• a :: b: a has type b
• a -> b: a function from a to b
• f a: Type constructor f (for instance List) applied to type a (to produce “list of a”)
• a -> b -> c: Same as a -> (b -> c): a function that takes a and produces another function that will take b to produce c, a two-argument function in practice.
• c a => ... a ...: a is any type that has an instance (implementation) of the typeclass (interface) c
1. 5

Most people don’t know this, but the Haskell version of monad is a crude simplification of the category theory definition of monad. It’s a lot less general.

1. 4

It’s always possible that I am wrong, but to me, “monad” as used in Haskell is partly just a pompous way of describing well known and trivial programming patterns and partly a painful effort to permit the absolutely necessary use of state variables within a system based on the incorrect theory that one should program statelessly.

http://www.yodaiken.com/2016/12/22/computer-science-as-a-scholarly-discipline/

1. 11

Full disclosure, I’m a monad-lovin’ fool, but to me the utility of the concept is to see that some (yes) well-known and trivial programming patterns are in fact conceptually the “same thing”. It unifies ideas, that many programmers only learn by convention, underneath a single concept which has more than just the weight of convention behind it.

It’s totally possible to go overboard with it, or to over-rely on the flimsy “mathiness” of FP as a way of dismissing or trivializing real problems, but imo if you want a bedrock understanding of programming patterns that is based in something factual (as opposed to flavor-of-the-month tribalism or appeals to the current authority), FP is the way to go.

1. 5

I’m with you on the pompous and unenlightening terminology, but that’s a pretty unfair characterization of the role monads play in demarcating state-dependent code.

1. 4

I think there are a couple of real insights in FP, but the “math” continues the basic CS confusion between meta-mathematics/axiomatic and working mathematics - which is often about processes and algorithms and state. And the project falls for a common CS error, in which we note that doing X is hard, messy, error prone, so we create a system which doesn’t do X because that’s clean/light/well-defined “easy to reason about” (even if not in practice) and then we discover that X is essential, so we smuggle it back in awkwardly. I just don’t see the insight that the Category theory brings to understanding state.

1. 4

My experience with describing stateful systems is that it’s much more elegant and bug-free in pure languages because you get to reason about it explicitly. I don’t see many imperative languages letting me statically prove propositions about the system state (“this resource exists in that state at that point in the code”). You need equational reasoning and powerful type systems for that. Monads help.

The point of purity isn’t about avoiding state and mutability, it’s about avoiding implicit state.
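A minimal sketch of that distinction (hand-rolled and dependency-free; the State monad in the mtl package abstracts exactly this plumbing): the state dependency is visible in every type signature instead of hiding in a mutable variable.

```haskell
-- Explicit state: each function takes the old state and returns the
-- new one, so the dependency shows up in the types.
tick :: Int -> (Int, Int)   -- old counter -> (value observed, new counter)
tick n = (n, n + 1)

-- Threading the counter through three ticks by hand; the State monad
-- is just this pattern packaged up.
ticks3 :: Int -> (Int, Int)
ticks3 s0 =
  let (_, s1) = tick s0
      (_, s2) = tick s1
      (v, s3) = tick s2
  in (v, s3)
```

Nothing here can read or write the counter without saying so in its signature; that is the “explicit state” being argued for.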

1. 2

That’s great. Do you have a link to a paper/post which works out a proof of some non-trivial program?

1. 1

The most complex I’ve seen is probably the state management in my video game, where it ensures proper initialization, resource management and deallocation. It’s very simple on the user side: it consists only of writing and complying with the state transition specifications in the types, e.g. quitRenderer : (r : Var) -> ST m () [remove SRenderer r]. This is a method in the Draw interface whose type says that it removes the renderer from the context m. SRenderer is some Type defined by the implementation, in my case a more complex composite type, and if I were to implement that method without properly handling those subcomponents, it wouldn’t compile. There is much more to the “complying” part though. ST tracks your state as you further develop your function, and you can just print it out. It’s like a static debugger. You see how your states evolve as you unwrap and modify them without even running your code, and I can’t overstate how rarely additional changes are needed after testing, compared to other languages.

I haven’t worked on other serious Idris projects, I’ve just done the book exercises and went to code my game. I definitely recommend the book though.

2. 1

Yeah, I’m with you on the category theory, that could easily be obfuscatory math-envy (still looking for an example of a software situation where category-theory concepts carry more than their weight.) But separating out X as much as possible, and making it clear where X has to be mixed back in, that’s just modularization, usually a good policy.

1. 0

But modularization is hard and where module boundaries don’t work, creating a backdoor often just makes things less modular. To me, what would be more interesting than “pure functions” plus escape is to clarify exactly what, if any, state dependencies exist. I don’t want to bury e.g. file system pointer dependency in some black box, I want to know f(fd,y,z) depends on fd.seekpointer internally and fd.file contents externally, or, for a simpler example that f(x) has a dependency on time-of-day. I don’t see much serious work on trying to expose and clarify these dependencies.

1. 4

2. 1

I don’t have much problem with naming the idea of monads, though I definitely agree that pure functional programming is not a good way to program.

1. 1

I read your linked post. Since you don’t provide commentary, it’s hard to understand what your “Oh” is meant to imply. I’m guessing that you’re trying to highlight a contradiction between “Why functional programming matters” and the “Unix paper”. Could you explain your idea here?

1. 1

Hughes describes as a novel advance from FP something that was well known a decade earlier in systems programming, and described in one of the most famous CS papers ever published. The concept of programs streaming data over a channel, of course, also predates UNIX, which is why Dennis Ritchie, who did know the literature, did not make a claim as ignorant as Hughes’ - or the one in the paper discussed here.

1. 2

Hmm. I don’t know the literature either. Your critique seems true, given the context in your post: much of the first paragraph from “Why functional programming matters” quoted in your blog post isn’t novel in light of the “Unix paper” from more than a decade earlier.

That said, there are some novel things which lazy-evaluation brings to the table. Hughes hints at them in the section that you quoted:

1. It makes it practical to modularize a program as a generator that constructs a large number of possible answers, and a selector that chooses the appropriate one.

2. You can take this further: The “possible answers” may be large data structures which are expensive to compute, and lazy evaluation will only compute enough of the data structure to determine whether you want to use it, leaving the rest unevaluated when the “selector” rejects them.

Strict evaluation doesn’t give you either of these advantages unless you construct them explicitly. You can get #1 with blocking semantics on reading & writing unix pipes, like Ritchie showed. You probably can’t get #2 in a general way without lazy evaluation.
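A small sketch of point #2 (my own example, not from Hughes): the selector inspects only the head of each candidate, so the rest of every rejected candidate is never evaluated.

```haskell
-- Generator: an infinite list of candidate answers (n, its square, its cube).
candidates :: [[Int]]
candidates = [[n, n * n, n * n * n] | n <- [1 ..]]

-- Selector: keep the first candidate whose head passes a test. Laziness
-- means only the head of each rejected candidate is ever forced; the
-- squares and cubes of rejected n stay as unevaluated thunks.
selected :: [Int]
selected = head (filter (\c -> head c > 3) candidates)
```

Under strict evaluation the generator alone would loop forever; here, generator and selector compose as separate modules and only the demanded work happens.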

1. 1
2. 19

misused term

Can we please stop fighting jargon? Do other industries have people constantly calling for things to be named differently? Like, does the Fashion Industry have people fighting to rename “balaclava” to “winter weather knitted headwear” because it’s headwear named after a Battle, and not really descriptive?

If not “monad,” what’s a better alternative?

1. 8

It seems pretty clear to me that the purpose of this post is to shame Rob Pike; people already think he’s too stupid to understand the Hindley-Milner type system, so this just adds onto that.

1. 6

I don’t think the post shames Rob Pike. I think it’s meant to point out a pattern that shows up in the go stdlib also shows up elsewhere.

2. 2

I think in the pursuit of knowledge it makes sense to name & build understanding around the things that we already do, or observe nature doing. That’s what applying a “misused term” to “50 year old software methods” accomplishes.

1. 2

I don’t think jargon confers knowledge.

1. 1

I’d rather we name concepts than ceaselessly describe them in imprecise terms. That’s what the article is doing, and it serves the function that people can understand the pattern if they’ve already used it elsewhere, that communication is clearer because it is more concise, and that mistakes can easily be identified.

So, while jargon might not confer knowledge, it enables communication and facilitates thought.

1. 1

To me, there is nothing precise about the concept of monad as used in FP. It’s a collection of messy hacks. The Moggi paper is a travesty of Category Theory.

2. 1

You really need to relax.

1. 7

Not learnèd enough in Haskell yet to comment on the primary debate here, but I will say that I am in complete agreement with the author at least emotionally: the existing exceptions “story” with GHC IO is certainly confusing, but also it just seems really bad w/r/t writing reliable code.

Perhaps it’s just because I am a newbie, and I spend my days awaiting enlightenment with open arms, but as it is I would feel 100x more comfortable writing “reliable” (as in, doesn’t unexpectedly terminate) code with Swift — or Java for that matter — than Haskell. And that is after reading absolutely everything I’ve been able to find about Haskell exceptions, trying to understand the whole “no, unchecked exceptions are actually good, you see!” point of view.

1. 3

Especially for a language and ecosystem so otherwise obsessed with type-level safety of every kind… I just don’t get it

1. 9
1. Sending vibes and good feelings your way. You’re not alone in having these kinds of feelings, even though they are rarely discussed! Feel free to reach out if you ever need someone to talk to, etc.

2. I’ve also had issues (of a smaller magnitude, perhaps) with productivity, “focus”, etc. at previous jobs and in school. Just my opinion: it’s important to not internalize these criticisms. When I had these problems in the past, I got in the (bad) habit of ascribing them to me, myself (as in, “I am just not good at X” or “I am just not good at working hard”).

As mentioned, wisely, elsewhere in this thread, productivity, “focus”, getting things done, etc. are skills that can be systematized and practiced! Don’t let criticisms in these areas (even if they are valid!) become a part of your identity just because they relate to “softer” skills that feel less changeable than say, some pure tech skill.

True, there may be things that are harder for you for genetic or biological reasons. But it is always possible to work to improve these skills, and either way it’s no reflection on your quality as a person.

3. Speaking from a purely dispassionate, “paying the bills” point of view, it is probably in your best interest to work as a developer somewhere if you can manage it — the pay is probably better than you could get by switching industries, and five years of experience is not as small a number as it sounds!

In my experience, the “difficulty” of a job in the technical sense is not really related to the stress level produced by that job. I’ve had jobs that were technically unchallenging but very stressful, and I’ve had a blast working on very technically challenging teams. I think you might find that jumping over to an “easier” job is not any less stressful, or at least that it’s always a bit of a roll of the dice either way.

But also, as mentioned elsewhere in the thread, try to relax and unwind and un-burnout yourself before making any decisions like this, if you can.

1. 1

I’m a total Haskell novice, but when I read this a while ago I was left pretty underwhelmed.

1. 2

I prefer to pull configuration together in one place. I’ll usually use RecordWildCards for the actual settings record construction. I want to know statically that it’s all there.

data App = App
  { appSettings    :: !AppSettings
  , appStatic      :: !Static -- ^ Settings for static file serving.
  , appConnPool    :: !ConnectionPool -- ^ Database connection pool.
  , appHttpManager :: !Manager
  , appLogger      :: !Logger
  , appPostmark    :: !PostmarkSettings
  }

makeFoundation :: AppSettings -> IO App
makeFoundation appSettings = do
  appHttpManager <- newManager
  appLogger <- newStdoutLoggerSet defaultBufSize >>= makeYesodLogger
  appStatic <-
    (if appMutableStatic appSettings then staticDevel else static)
    (appStaticDir appSettings)
  let appPostmark = postmarkSettings (appPostmarkToken appSettings)
  let mkFoundation appConnPool = App {..}
  {... etc ...}

You can be cleverer than PartialOptions/Options and turn that into a type argument but it’s not worth it.

1. 3

Screenshots look very nice! The “design” of the app seems quite… understated? But in an appealing way.

1. 7

Poor framework choices cause a bad development experience in any language. This is particularly prone to happen in Java because most of the “de facto standard” frameworks are rubbish. To make elegant Java projects, you have to ignore 90% of the internet. Hopefully not this comment.

1. 10

In my experience, to write elegant Java:

• Learn SML, Haskell, Lisp, or at least Python. By no means should you learn idiomatic Java. If you already know it, unlearn as much as you can.
• Use Java 8+.
• Never use any dependency with more than a handful of public classes.
• Never use anything which uses reflection, annotations or inversion of control.

At this point, you’ve basically got a very verbose Python that’s screaming fast and has basic static typechecking. I honestly prefer it.

Also, for what it’s worth, I think (though I’m sure as hell not going to trawl through Spring’s garbage fire of “documentation” to confirm) that the author didn’t have to write as much boilerplate as they actually did; there are magic annotations to autogenerate at least some of that, like the mapper and probably the DAO as well. Note that this has absolutely no impact on their point (because none of this is discoverable without dealing with the pain yourself, cf. aforementioned garbage fire).

1. 2

Learn SML, Haskell, Lisp, or at least Python. By no means should you learn idiomatic Java. If you already know it, unlearn as much as you can.

This gave me pause. So you’re advocating for people writing code that will be all but unmaintainable by the rest of the team in the name of elegance?

I agree that learning other programming languages can make you a stronger programmer in your language of choice, but this strikes me as really unfortunate hyperbole.

1. 9

Given the specific list “SML, Haskell, Lisp, or at least Python”, I think what whbboyd’s actually going for here is “get comfortable writing simple programs that mostly pass values around, preferring simple uses of generics (i.e. parametric polymorphism¹) or interfaces (i.e. duck typing²) over doing unpleasant stuff with deep inheritance hierarchies, and not being scared of lambdas”, because that’s roughly the intersection (not union) of those PLs’ feature sets.

(¹ fancy word for “when you put a thing into a list-of-Ducks, the thing you put in is a Duck, not an arbitrary Object” and “when you take a thing out of a list-of-Ducks, you get a Duck back, not a mysterious Object reference that has to be cast to Duck and which might throw an exception when you do that”.)

(² non-fancy word for typeclasses ^_~)

This is an argument for writing less-complicated Java. I’d expect the resulting programs to be more comprehensible, not less, and they might actually look simplistic.

This is not an argument for rewriting all your Java programs with the control flow implemented as Church-encoded data structures. That would be wildly incomprehensible.

1. 4

Precisely this. Java’s typical idioms are not conducive to writing elegant code, so learning better idioms instead will put you in a better place, and shouldn’t significantly affect the ability of competent teammates to understand and maintain your code.

2. 4

I agree that there is some hyperbole here, but I also do think the above commenter has a point. Java can be nice if you subtract the cultural gunk that has built up around “patterns”. The jOOQ blog covered it well, I think.

1. 9

This really seems more like the tragedy of “100% slavish adherence to policy”.

Having 100% coverage is pretty much essential if I am going to trust your library or framework as a dependency.

For business applications, be as sloppy as you want^Wcan get away with.

1. 17

I don’t think I’ve actually looked at test coverage or even the number of tests in a dependency I’ve used. Is it common for people to post those things? I might be too cynical, but I wouldn’t trust any of those things to convince myself that a library isn’t broken. It’s almost certainly broken.

1. 2

For new projects that I care about I usually check coverage on my dependencies, especially if it’s something like math or algorithms. Those things are comparatively easy to test and are super important to get right.

Think of it this way:

If you have 3 dependencies, each of which has 50% code coverage, and your application has 100% code coverage–the final artifact is only 62.5% covered.
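That 62.5% figure implicitly assumes the app and each dependency are the same size. As a sketch, line-weighted coverage is just a weighted average:

```haskell
-- Line-weighted coverage over (lines of code, coverage fraction) pairs.
overallCoverage :: [(Double, Double)] -> Double
overallCoverage parts =
  sum [loc * cov | (loc, cov) <- parts] / sum [loc | (loc, _) <- parts]

-- One fully covered app plus three half-covered, equally sized deps.
example :: Double
example = overallCoverage [(1, 1.0), (1, 0.5), (1, 0.5), (1, 0.5)]  -- 0.625
```

Plug in real LoC numbers for the weights and the figure moves accordingly, in either direction.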

For serious engineering, we have to be responsible for our dependencies–or at least aware of their quality.

1. 2

Wouldn’t it depend on the size of the app and its dependencies? If it were a 97-line app and those dependencies were left-pad, right-pad, and center, wouldn’t you be at 97% coverage?

1. 2

I usually assume that my dependencies are similar to or greater than my main application in LoC. If they’re simple little things, I might as well just write them myself or use the standard library (you are using a language with a decent standard library, yes? ;) ).

To wit: there is probably a lot more code in the Rails or Angular frameworks than in the apps using them, ditto jQuery.

1. 2

Fair. I just checked the project I’m working on (in Go) and our dependencies are vastly larger than our own code.

2. 1

If you have 3 dependencies, each of which has 50% code coverage, and your application has 100% code coverage–the final artifact is only 62.5% covered.

This is certainly nit-picking, but an argument could be made that your artifact does not necessarily use the full dependency.

2. 11

Having 100% coverage is pretty much essential if I am going to trust your library or framework as a dependency.

See “The Impossibility of Complete Testing”: what really matters is coverage of the input domain, not coverage of code lines.

1. 3

A common refrain on this theme is “100% coverage tells you nothing”, since the tests could be bad.

I find the inverse more compelling (“90% coverage tells you that 10% of the code doesn’t have any tests”).

1. 2

Of course, is that untested 10% because it’s simple enough that tests are pointless, or because it’s complex enough that nobody knows how to test it?

I don’t think coverage is a useful metric on its own. You need to know more about a project than just which lines of code get exercised by its test suite to properly judge it.

1. 1

I think the simple stuff should be (incidentally) covered by your integration tests, and it’s useful to see when it isn’t (often it’s a sign of dead code that can be deleted, or of a “shadow feature” that’s not officially supported but users care about).

Anyone have examples of simple code not used by a feature? Devtool scripts come to mind.

I don’t think coverage is a useful metric on its own

It’s not; 100% coverage is the absence of information about untested code.

2. 2

Totally naive question here, isn’t really what you’re looking for 100% code coverage from the point-of-view of the public API surface? If the public API of a library is “100% covered”, I’m not interested in knowing further if there are tests covering every private method where a library developer has needed to populate a hash table!

1. 2

That’s not sufficient because there could be all kinds of weird cases where internally things are broken but externally it still appears (given a handful of tests) that there is complete conformance to an API.

Code coverage is not the same as API compliance, and they are not substitutes for one another.

1. 4

Perhaps part of the problem is considering it from the perspective of a one-dimensional metric. Problems are more likely to arise from the interaction of related features, and it’s easy to get “100%” coverage that tests each in isolation but does not test the complete combinatoric space at all.

2. 1

Some testing libraries don’t let you test private methods at all, which kind of makes sense.

I remember running into this with PHPUnit while working on a Laravel contract last summer.

1. 11

I feel like this article could be tightened up to present its argument more persuasively (tldr: tsx — React plus TypeScript — files are type-checked, whereas other common templating formats, even Angular 2’s, which professes a deep integration with TypeScript, are not), but I still thought it was great, particularly the animations.

After experiencing some disappointment with Angular 2 on a previous project, I’ve been working with React plus TypeScript recently and it is worlds better having your “templates” participating in type-checking. It’s actually the thing you want, in my opinion.

1. 7

Thank you for the feedback. I added a tldr to the top.

1. 4

even Angular 2’s, which professes a deep integration with TypeScript, are not)

I believe that this is outdated as of today (literally), thanks to TS 2.3’s language service plugin API which Angular now hooks into.

1. 11

No kidding! I didn’t realize. Well, that’s pretty interesting.

In retrospect my original comment fell victim to the classic error: never assume that Angular 2 doesn’t contain a feature unless you have re-confirmed within the last 24 hours that it still does not contain that feature.

1. 3

Wow. I’d love to be able to update this article with some animations that show inline editor errors in Angular templates.

edit: aaannd there’s the vscode plugin https://marketplace.visualstudio.com/items?itemName=Angular.ng-template

1. 2

And here’s a full talk from ng-conf going over the language service: https://www.youtube.com/watch?v=ez3R0Gi4z5A

1. 27

There are loads of companies working on things that aren’t evil.

My recommendation would be to look at companies that do boring things, nothing like machine learning. Go work for an inventory tracking system for vegetables. A company that makes metro cards.

There are also a couple of open source projects out there that have a “consulting/development” angle. For example, I think EdX works like this.

I think it’s easier to find this in small teams. 10-20 people. Might be harder to find in some places more than others.

Bootstrapped startups might also be a good place to look: people who were never tempted to get investors because they’re fine growing slowly.

1. 15

Seconding the point about looking at boring things. I’ve worked mostly for what would be considered “good” companies (in the nonprofit sector, for example), and the work is almost always boring or un-glamorous.

This can be pretty frustrating! But every time I’ve looked into trying to find something more “interesting” technically, I am left disappointed by the kinds of ends I’d be working in service of, as described by the OP.

I’m not sure of the correspondence here, but my theory is that “good” orgs more frequently need boring things: they want their CRUD app to work a little better, want their spreadsheet workflow to be more automated, etc. I think this is because one way to measure the “goodness” of your work is to measure how “close” to actual people it is. In other words, how person-centric it is.

It’s a democratization of tech thing. The smaller a group is, the more likely they are to have comparatively mundane technical problems. Similarly, most people are not technical, so chances are that any one individual’s technical problem of choice is also going to be fairly boring.

To put it another way, there are probably some investment banks who’d love to have you crunch some stuff through some “interesting” ML and put together an analysis for them. But if you stopped a random individual person on the street, their most pressing concern would probably be for you to help them fix their iPhone.

I think the challenge for programmers is to recalibrate our senses of self-worth so that they are oriented along how much we are helping people, not how fancy the work we are doing is. I am finding that this is extremely difficult to do!

1. 3

I mostly agree, and would restate it in a slightly different way.

Filtering jobs on both cool tech stacks solving challenging problems for top pay and morally pure business purpose isn’t going to leave you with many jobs. You’re gonna have to bend on a few of those points. It’s probably not too tough to find something you don’t mind doing morally if you can handle doing basic CRUD webapps in Java or PHP or something for low to middling pay.

1. 3

if you can handle doing basic CRUD webapps in Java or PHP or something for low to middling pay.

Yeah, but then why do this shit at all? Why not get a real career, at that point? (Unfortunately, I may be too old for that.)

1. 4

I second the emotion you’re hinting at here (doing cool shit is definitely a prime motivator), but this was the other thing I was hoping to get across in my post. One reason to do basic CRUD webapps “at all” under these circumstances may be that you’ve found a good org of good people, and when you deliver a basic CRUD app to them, they may be delighted.

It’s another yardstick available by which you could measure your worth. It’s not easy to get in that mindset, I agree with that. But I think it’s available if one wants it.

1. 2

I’m guessing you never tried to build automatic programming tools that automate CRUD apps. When I had to do some, I built my 4GL toolkit I occasionally reference. Even boring jobs can be executed in interesting new ways. Try doing them in Prolog, Flow-based Programming, etc. I’d say automate them without telling the boss. You have more time to bullshit around with cool stuff. Alternatively, pick an awesome language that compiles to the language for CRUD apps.

CRUD jobs suck. I agree there. I’m just saying even the most boring of all IT jobs present opportunities. I haven’t even mentioned homebrew, static analysis and refactoring tools some companies allow people to attempt.

1. 1

What’s a “real career”, and why isn’t tech one? Seriously, if you know of something that has better overall pay, working conditions, and lack of lifestyle interferences, with fewer formal qualifications, please tell us, and I’ll go do it.

I have my beefs with the tech world, but I’ve done a lot of different things, and I think it’s hard to beat overall. No job, career, or field is perfect, but life is much easier if you can recognize the good, tolerate the bad, and go home to something else at the end of the day.

I also think it can help to work somewhere where you can see and talk to your users. You can temper disappointment at not using cool trendy languages and building deeply innovative things by seeing every day how much time and frustration you’re saving your users. I’ve done things like cut the loading time of a screen from 2-4 minutes to <100ms (short version: multiple layers of N+1 queries in NHibernate replaced with a single SQL query) just because it was annoying me when I was doing testing, and seen how thrilled the users were at how much faster it was.

1. 1

Because technology can drive tremendous positive change in places like that.

2. 3

There’s some truth to this. Small companies tend to have tight budgets and want specific things done, and there isn’t a lot of room for experimentation or growth. If you can solve a specific, usually boring, problem in time, great. They probably don’t have the resources to justify a full-time role doing only interesting work. Large companies, on the other hand, tend to have ample budgets, but the power is often held by sociopaths who belong in prison.

This said, there are a lot of terrible small companies out there, too. There are plenty of efforts and people that I would not feel good about helping. The moral argument for capitalism is that business is potentially positive-sum, but the reality on the ground is that 90+ percent of the people play it zero-sum, and that seems to be true whether we’re talking about corporate executives (whom everyone hates) or small-business proprietors (whom we like to see as “the good guys”).

3. 6

Go work for an inventory tracking system for vegetables.

That might have been intended as a joke example. The funny part is I actually used one of those things at a prior job. The IT department there found their job boring outside of the people that had to go to production locations for fixing equipment, deploying cables, etc. They had some diversity in work. However, the whole IT group got paid to help the business deliver products to customers that they wanted. Also, to do it more effectively. None of them would write a post like the OP.

They’re not an exception. As far as I can tell, most businesses in the U.S., at least, provide some kind of benefit to consumers that keeps them in business, and most IT work is about delivering that benefit. The scheming bullshit in corrupt companies, startups, or purely-evil companies seems to be the minority, even if some of those outfits are massive. One can avoid a lot of it by simply not applying for jobs at such companies.

1. 6

This is a long article that essentially repeats the claim without evidence or solutions. The claim is false for basic testing but true for sophisticated testing. Let me illustrate by comparing the coding and testing step:

Coding: There’s a spec in their head of what the code is supposed to do. This usually has an input, may produce side effects, and may have an output. They write a series of steps in some functions to do that, then run it on some input to see what it does. They’re already testing.

Testing: Typing both correct and incorrect inputs into the same function to see if it behaves according to the mental spec. This can be as simple as tweaking variables across a range or just feeding in random data (i.e. fuzzing). It takes less thought and effort than the coding above.

So, the claim starts out false. The mechanics of testing are already built into the runtime part of the coding phase; the rest are slight tweaks. The basic testing that will knock out tons of problems in FOSS or proprietary apps takes no special skill beyond the willingness to do it. Now let’s look at a simple example of where testing might take extra knowledge or effort.

https://casd.wordpress.ncsu.edu/files/2016/10/kuhn-casd-161026-final.pdf

So, the government did some research. They re-discovered a rule that Hamilton wrote about during the Apollo program: most failures are interface errors between two or more interacting components, where one component uses another in ways it wasn’t intended. The new discovery, combinatorial testing, was that you can treat these combinations of interfaces as sets to test directly, in various random or exhaustive combinations. Per their empirical data, just testing all 2-way interactions can knock out 96% of faults, and virtually all faults drop off before the 6-way point.

Why is this sophisticated enough to deserve a claim like in OP’s post? First, people don’t learn the concepts or mechanics during the course of programming. You have to run into someone that’s heard of it, be convinced of its value, think in terms of combinations, and so on. Once you know the method, you might have to build some testing infrastructure to identify the interfaces & test them. There’s also probably esoteric knowledge about what heuristics to use to save time when combinations go past 3-way toward combinatorial explosion. So, combinatorial testing is certainly a separate skill whose application could frustrate the hell out of developers. Until they learn it and it easily knocks out boatloads of bugs. :)

Regular testing that pushes inputs outside their expected range? Nope. Vanilla stuff, the same as the coding you’re doing. It’s easier than a lot of the coding, actually, since the concepts are so simple: basic arithmetic and conditionals on functions you already wrote. What stops basic testing from happening is just apathy. Incidentally, that also stops people from learning the sophisticated stuff for quite a while.

1. 8

Not strictly related, but this comment reminded me of one of my favorite tweets of all time:

Trek Glowacki‏ @trek

Usually when I watch people who “don’t TDD” program, they’re TDDing in a browser/REPL/etc.

Then throwing those tests away.

1. 2

Well said. Same claim I’m making for basic testing.

2. 5

They then run it on some input to see what it does.

While I agree this is what most developers do, I’ve run into several who’ll just code up what they think is correct and call it a day without ever executing (or even compiling!) it. They don’t tend to last long, but they exist.

1. 3

That’s part of the apathy I was referring to. People who refuse to even attempt validation of their efforts are beyond help as far as testing methods go. People might try to convince them, but that’s an orthogonal problem to training in testing methods. A people problem.

2. 2

The basic testing that will knock out tons of problems in FOSS or proprietary apps takes no special skill past willingness to do the testing.

I half agree with you.

The other half remembers that, for the most unskilled testers, they test for success, not for failure. And if you’re not testing for failure, then you’re doing little better than not testing at all.

Testing for failure requires an intuitive leap, and it benefits from practice. That makes it a skill (or a component of a skill). It’s an easy-to-acquire skill, and one that should frankly be part of the core description of developer, but it’s a skill nonetheless.

1. 33

I’m an Ocaml user and, except for a few rare situations, I’ve found I much prefer a result type to exceptions. My response is based on Ocaml, which may not be the same as F#, so if these points don’t apply there, ignore them.

Some points I disagree with the author on:

AN ISSUE OF RUNTIME

I didn’t really understand the example here. How is the author accessing an optional value? In Ocaml we have to use an accessor that would throw an exception if the value is not present or pattern match the value out. This doesn’t seem to have anything to do with exceptions or results, just an invalid usage of an option.

AN AWKWARD RECONCILIATION

This is the case in Ocaml as well, which is why many libraries try to make exceptions never escape the API boundary. But writing combinators for this is really quite easy. A function like (unit -> 'a) -> ('a, exn) result is available in all the various standard libraries for Ocaml.

BOILERPLATE

The author should be using the standard applicative or monadic infix combinators. Maybe F# doesn’t allow that. In Ocaml the example would look like:

let combine x y z =
  pure (fun x y z -> (x, y, z)) <*> x <*> y <*> z

WHERE’S MY STACKTRACE?

This is the one I disagree with quite a bit. If I am using exceptions then yes, I want stacktraces, because an exception is a nearly unbounded GOTO. But the value result types give me is that I know, from the types, what errors a function can produce, and that I have to handle them. That makes stacktraces much less valuable, and the win of knowing what errors are possible and being forced to handle them is worth more. I’d much rather have that than stacktraces.

THE PROBLEM WITH IO

The problem here doesn’t have anything to do with exceptions, it’s that the return type should be a result where the Error case is a variant of the various ways it can fail. Ocaml makes this much much easier because it has polymorphic variants.

STRINGLY-TYPED ERROR HANDLING

Yeah, use a variant not a string.

INTEROP ISSUES

This can indeed be a problem. It’s also a problem with exceptions, though.

1. 9

100% agreed. Debugging from a stack trace is far more complicated than having good error handling through compiler enforced types.

1. 3

Ditto. This has been the case in every FP language I’ve worked with: it takes more time to work with a stack trace and recover anything valuable from it than to lean on the compiler and type enforcement at compile time.

2. 2

WHERE’S MY STACKTRACE?

This is the one I disagree with quite a bit. If I am using exceptions then yes, I want stacktraces, because an exception is a nearly unbounded GOTO. But the value result types give me is that I know, from the types, what errors a function can produce, and that I have to handle them. That makes stacktraces much less valuable, and the win of knowing what errors are possible and being forced to handle them is worth more. I’d much rather have that than stacktraces.

This is a case where you can eat your cake and have it too. Java has checked exceptions, which the compiler enforces are handled. When calling a function that can throw a checked exception, the caller either has to handle the exception in a try block or include in its own signature that it can throw an exception of the specified type.

You can also do the opposite and add the stack trace to the result type. Most languages provide some way to obtain a stack trace at runtime, so all you need to do is attach the stack trace to the error when it is instantiated.

1. 4

Checked exceptions in Java are a nice experiment but a rather colossal failure, unfortunately. Since the compiler cannot infer checked exceptions, you have to retype them all out at each level, and it becomes unwieldy. The situation is even worse with lambdas, where one has to turn a checked exception into an unchecked one.

1. 3

Is it simply type inference on function declarations that you see as the difference here? I am curious because as a Java programmer by day, I don’t see a ton of difference between “-> Result<FooVal, BarException>” and “FooVal someFunc() throws BarException { … }”.

Granted the implementation is quite different (unwinding the stack and all that), but is it simply ergonomics that makes the latter a “colossal failure” in your mind?

1. 3

No, the difference is that results are just types and values. From that you get all the great stuff that comes with types and values. For example:

• Type inference. I only specify the types of my functions at API boundary points.
• Aliasing types. If I have a bunch of functions that return the same error I can just do type err = ..... rather than type all of the errors out each time.
• They work with lambdas!
• They work with parametric polymorphism. I can write a function like 'a list -> ('a -> ('b, 'c) result) -> ('b list, 'c) result.
• And, probably most importantly, it does not add a new concept to the language.

That checked exceptions do not compose with lambdas in Java basically tells me they are dead. All the Java code I’m seeing these days makes heavy use of lambdas.

1. 2

Gotcha, thanks for the reply. I don’t disagree strongly, but I feel like what you are arguing for is Java, minus checked exceptions, plus more pervasive type inference, plus type aliases, plus several other changes. Which, that’d be pretty cool, but I think at this point we’re discussing sweeping language changes as opposed to the merits of checked exceptions strictly.

For example, simply replacing checked exceptions in modern Java with use of a Result would (at least as far as I can imagine) still result in a lot of verbosity. You’d just be typing “Result<Foo, Bar>” a lot as opposed to typing “throws Bar” a lot.

Not to be overly argumentative or anything. But “colossal failure” seems a little strong to me! :)

1. 7

“How is this different from nil?” is the inevitable question I get from rubyists upon learning about the Maybe monad. Until a flash of inspiration the other day, I didn’t quite have a good explanation for this question. Sure, I had an explanation, but I don’t think it was very convincing to rubyists. It went something like: “Because it forces you to explicitly handle the nil case rather than accidentally let it through.”

That much only explains why you would prefer Maybe to unrestricted nullability. It doesn’t explain why you would prefer Maybe to, say, Ceylon and Kotlin’s type-tracked nullability. To understand the latter, consider the type of the join function:

join :: Monad m => m (m a) -> m a
-- specialized to Maybe
join :: Maybe (Maybe a) -> Maybe a

Monads nest, because they’re type constructors like any other. On the other hand, nullability doesn’t nest, because it’s a hard-coded special case in the language.

1. 3

The example I like to give is to think of any real world situation where Maybe (Maybe a) makes sense and is a meaningful thing to return. These are actually pretty easy to come by, especially if you’re also talking about Either because then Either e1 (Either e2 a) forces the issue.

At this point, join becomes a highly meaningful operation and you can talk about how Kotlin-style nullability doesn’t let you talk about that. It becomes a tradeoff between convenience and expressiveness, and expressiveness can be defended by picking out real-world examples where nested errors are important to distinguish (see that Either e1 (Either e2 a) example again).

1. 2

For me, the justification for preferring Maybe to nil is much simpler: The basic building blocks of our programs must have nice algebraic properties. The introduction of special cases should be delayed until you have enough information about your problem domain to justify it. Language designers in general don’t have this sort of information, so language features should be as uniform as possible.

1. 2

Yeah, I honestly agree with you but think that kind of argumentation puts the cart ahead of the horse. Anyone who buys the value judgement won’t really need convincing as to why to include Maybe, no?

2. 2

I’m not sure I completely grok this comment, but I like it.

I’ve twice written “Maybe” implementations for languages with unrestricted nullability (C# and TypeScript, back when TypeScript lacked nullability annotations of its own), and it’s been a very interesting learning experience doing so.

Both types were naive implementations that broke associativity just like java.util.Optional does.

What I think is interesting is that at first it seemed like this breakage was the most appealing feature of the type — that is, by funneling all sorts of function calls through “Maybe”, each individual function could be assured of receiving non-null arguments, since any nulls would be “consumed” by the chaining machinery. Sounds great! Guard the “borders” of your module with Maybes and then everything inside can be your Happy Place!

Knowing what I know now, I probably would not have written that code. As you say: nullability is a special case in the language. It doesn’t seem possible to circumvent it without incurring breakages somewhere else.

1. 1

In general, when confronted with special cases, you can either embrace them (e.g., the way Common Lisp treats NIL) or pretend they don’t exist (e.g., the way idiomatic Scala treats null), but you can’t really fix them (e.g., the ways in which “perfect” forwarding in C++ fails to actually be perfect). That’s why IMO one should think twice before introducing special cases: you are imposing on the users of your code the responsibility to handle them all correctly.

1. 88

Submitted because just about every opinion in it is wrong, but Martin is still influential so we’re going to see this parroted.

1. 76

Sadly yes. Most bizarre is that he seems to be directly contradicting some positions he’s held re:professionalism and “real engineering”.

A sampler to save people having to read through the thing:

If your answer is that our languages don’t prevent them, then I strongly suggest that you quit your job and never think about being a programmer again; because defects are never the fault of our languages. Defects are the fault of programmers. It is programmers who create defects – not languages.

At some point, we agreed to stop using lead paint in our houses. Lead paint is perfectly harmless–defect-free, we might even say–until some silly person decides it’s paintchip-and-salsa o'clock or sands without a respirator, but even so we all figured that maaaaybe we could just remove that entire category of problem.

My professional and hobbyist experience has taught me that if a project requires a defect-free human being, it will probably be neither on-time nor under-budget. Engineering is about the art of the possible, and part of that is learning how to make allowances for sub-par engineers. Uncle Bob’s complaint, in that light, seems to suggest he doesn’t admit to the realities of real-world engineering.

You test that your system does not emit unexpected nulls. You test that your system handles nulls at it’s inputs. You test that every exception you can throw is caught somewhere.

Bwahahahaha. The great lie of system testing, that you can test the unexpected. A suitably cynical outlook I’ve seen posited is that a well-tested system will still exhibit failures, but that they’ll be really fucking weird because they weren’t tested for (in turn, because they weren’t part of the mental model for the problem domain).

We now have languages that are so constraining, and so over-specified, that you have to design the whole system up front before you can code any of it.

Well, yes, that sort of up-front design is the difference between engineers and contractors.

More seriously, while I don’t agree with the folks (and oh Lord there are folks!) who claim that elaborate type systems will save us from having to write tests, I do think that we can make tremendous advances in certain areas with these really constrained tools.

It’s not as bad as having to up-front design “the whole system”. We can make meaningful strides at the layer of abstractions and system boundaries that we normally do, we can quickly stub in and rough in those things as we’ve always done, and still have something to show for it.

I’ve discussed and disagreed at length with at least @swifthand about this, the degree to which up-front design is required for “Engineering” and the degree to which that is even desirable today. But something we both agree on is that these type systems do have a lot to offer in making life easier when used with some testing. That’s probably a blog post for another day though.

And so you will declare all your classes and all your functions open. You will never use exceptions. And you will get used to using lots and lots of ! characters to override the null checks and allow NPEs to rampage through your systems.

And furthermore, frogs will fill the streets, locusts will devour the crops, the water will turn to blood and sulfur will rain from the sky! You know, the average day of a modern JS developer.

More likely, you’ll start at the bottom, same as we’ve always done, and build little corners of your codebase that are as safe as possible, and only compromise in the middle and top levels of abstraction. A lot of people will write shitty unsafe code, but it’s gonna be a lot easier to check it automatically and say “Hey, @pushcx was drunk last night and made everything unsafe…maybe we shouldn’t merge this yet” than it is to read a bunch of tests and say “yep, sure, :shipit:”.

~

In general, this kinda feels like Uncle Bob is starting to be the Bill O'Reilly of software development–and that makes me sad. :(

1. 39

You test that your system does not emit unexpected nulls. You test that your system handles nulls at it’s inputs. You test that every exception you can throw is caught somewhere.

For some languages that step is called compiling.

1. 6

I’m generally not a fan of the “but types are tests”-argument, but you rightly call that out.

“Nullness” is something that can be modeled so the compiler can easily analyse it, so I don’t understand why he calls it out (especially as non-null is such a prevalent default case, and most errors of not passing a value are accidents).

1. 3

I wish I could upvote this comment a thousand times. Concise, funny, but also brutally true. You nailed it.

1. 3

… plus a thorough type system lets the compiler make a whole bunch of optimizations which it might not otherwise be able to do.

2. 32

Thank you for the thorough debunking I didn’t have the heart for.

In general, this kinda feels like Uncle Bob is starting to be the Bill O'Reilly of software development–and that makes me sad. :(

Clean Code was a great, rightly influential book. But the farther we get from early 90s tools and understandings of programming, the less right Martin gets.

This post makes total sense if your understanding of types and interfaces is C++ and your understanding of safety is Java’s checked exceptions and both are circa 1995. I used them, they were terrible! But also great because they recognized a field of potential problems and attempted to solve them. Even if they weren’t the right solution (what first system is?), it takes years of experience with different experiments to find good solutions to entire classes of programming bugs.

This article attacks decent modern systems with criticisms that either applied to the problem 20 years ago or fundamentally misunderstand the problem. His entire case against middle layers of his system needing to explicitly list exceptions they allow to wander up the call chain is also a case in favor of global variables:

Defects are the fault of programmers. It is programmers who create defects – not languages.

Why prevent goto, pointer math, null? Why provide garbage collection, process separation, and data structures?

I guess that’s why this article’s getting such a strong negative reaction. The argument boils down to Martin not understanding the benefits of features that are now really obvious to the majority of coders, then writing really high-flying moral language opposed to that understanding.

It’s like opening a news site today to an editorial about how not using restrictive seatbelts in cars is the only sane way to drive, and how drivers who buckle their kids into car seats are monsters for deliberately crashing their cars. It’s so wrong I can barely figure out where to start if I hoped to explain the misunderstanding, but the braying moral condemnation demolishes my desire to engage. Martin’s really wrong, but he’s not working toward shared understanding, so he’s only going to get responses from people who think that makes for a worthwhile conversation.

1. 5

Java’s checked exceptions and both are circa 1995. I used them, they were terrible!

Interestingly, I came from a scripting-language background and hated Java’s checked exceptions with a passion, because they felt tedious. It seemed lame that a large part of my programming involved IDE-generated lists of exceptions. As I got more experienced and started writing software that I really wanted not to crash, I started spending a lot of mental effort tracking down what exceptions could be thrown in Python and making sure I caught them all, relying on (and hoping) the documentation was accurate. I began to yearn for checked exceptions.

Ironically, it seems like Java land has mostly gone the route of magic frameworks and unchecked exceptions. So things like person.getName() can be used easily without worrying about whether the underlying runtime-generated bytecode uses a straight-up property access or whether the attribute is being lazily initialized.

It seems like one of the simplest ways to retain your sanity is to uncouple I/O from your values and operate on simple Collections of POJOS. This gets into the arena of FP and monads, which use language level features to force this decoupling.

1. 1

I also prefer the checked exception approach. Spent a lot of time with exceptions being thrown uncaught, got tired of it.

2. 1

Why prevent goto, pointer math, null? Why provide garbage collection, process separation, and data structures?

I would say that Go has shown there is a middle ground somewhere between 100% type-proven safety and unsafe-yet-efficient paradigms.

I’m pretty fond of Rust and Haskell, but I also enjoy less strict tools like JS or Ruby. Of course, I would rather my auto-cruiser were written in Rust than in Node, but one tool’s success does not mean the others are trash. I may be mistaken, but if Martin’s point is “type-safety sucks”, it seems you are just saying “non-type-safety sucks more”. I’m not convinced by either argument.

1. 6

My point was that the people had deliberate reasons for the features they included or removed. I’m repeatedly asking “why” because Martin’s article dismisses the creators' reasons with an argument about personal responsibility and by characterizing them as punishments. The arguments Martin makes against these particular features also apply broadly to features he takes for granted.

I was writing entirely on the meta level of flaws in the article, not trying to argue for a personal favorite blend of safety/power features.

3. 5

More seriously, while I don’t agree with the folks (and oh Lord there are folks!) who claim that elaborate type systems will save us from having to write tests, I do think that we can make tremendous advances in certain areas with these really constrained tools.

Yes. This. Exactly. Evolutionary language features AND engineering discipline. No need for either or, that’s just curmudgeonly.

1. 4

then I strongly suggest that you quit your job and never think about being a programmer again; because defects are never the fault of our languages. Defects are the fault of programmers. It is programmers who create defects – not languages.

Another argument to be made is productivity, since he brought up a job. Productive programmers create maximum output, in terms of correct software, with minimum labor; that labor includes both time and mental effort. The things good type systems catch are tedious to code or test for by hand, and those manual checks get scattered throughout the codebase, which adds effort when changing things in maintenance mode. Strong typing of data structures and interfaces will save time versus manually managing all that.

That means he’s essentially saying that developers using tools that boost their productivity should quit so less productive developers can take over. Doesn’t make a lot of business sense.

1. 4

Bwahahahaha. The great lie of system testing, that you can test the unexpected. A suitably cynical outlook I’ve seen posited is that a well-tested system will still exhibit failures, but that they’ll be really fucking weird because they weren’t tested for (in turn, because they weren’t part of the mental model for the problem domain).

I wrote this a couple weeks ago, but I figure it’s worth repeating in this thread. I wrote a prototype in Rust to determine if using conjunctive-normal form to evaluate boolean expressions could be faster than naive evaluation. I created an Expr data type that represents regular, user-entered expressions and CNFExpr which forces conjunctive-normal form at the type system level. In this way, when I finished writing my .to_cnf() method, I knew that the result was in the desired form, because otherwise the type system would have whined. Great! However, it did not guarantee that the resulting CNFExpr was semantically equivalent to the original expression, so I had to write tests to give myself more confidence that my conversion was correct.

Testing and typing are not antagonists, they’re just different tools for making better software, and it’s extremely unnerving that someone like Uncle Bob, who has the ear of thousands of programmers, would dismiss a tool as powerful as type systems and suggest that people who think they are useful find a different line of work.

1. 3

Thanks for the summary. Seems The Clean Coder has employed some dirty tricks to block Safari’s Reader mode, making this nigh on unreadable on my phone.

1. 4

And furthermore, frogs will fill the streets, locusts will devour the crops, the water will turn to blood and sulfur will rain from the sky! You know, the average day of a modern JS developer.

As a modern JS developer, I’ve started using Flow and TypeScript and have found that the streets have far fewer frogs now :)

2. 2

More like the Ann Coulter of programming, in so much as it is increasingly clear that they spout skin-deep, ridiculous lines of reasoning to trigger people so that they get more publicity!

Remember, when one retorts the troll has already won. Don’t feed the troll!

~

A passing thought

defects are never the fault of our languages. Defects are the fault of programmers. It is programmers who create defects – not languages.

This brings to mind one of Henry Baker’s taunting remarks about our computing environments:

computer software are “cultural” objects–they are purely a product of man’s imagination, and may be changed as quickly as a man can change his mind. Could God be a better hacker than man?

Has it not occurred to him that these languages come from programmers themselves? Of course it has. So sure, defects are always the responsibility of people. Some are the fault of the application programmer; some are the fault of the people responsible for the language design. (And when one is knee-deep in writing some CRUD app whose technological choices are already set in stone, determining whose fault it is is of little use.)

3. 26

The entire point of software is to do stuff that people used to do by hand. Why on earth should we spend boatloads of hours writing tests to prove things that can be proved in milliseconds by the type system? That’s what type systems are for. If we were clever enough to write all the right tests all the time, we’d be clever enough to just not introduce NPEs in the first place.

1. 6

I had the same reaction reading this. He’s off his rocker. The whole point of Swift being so strongly typed is that we’ve learned if the language does not enforce it, then it’s not a matter of if those bugs will happen but how often we will waste time dealing with them.

The worst part to me is that right off the bat he recognizes these languages aren’t purely functional, implying that there is a big difference between a language that enforces functional programming and one that doesn’t. Of course there is, and the same thing goes for typing.

1. 2

He has just posted a follow up… http://blog.cleancoder.com/uncle-bob/2017/01/13/TypesAndTests.html

Alas, he says this…

Types do not specify behavior. Types are constraints placed, by the programmer, upon the textual elements of the program. Those constraints reduce the number of ways that different parts of the program text can refer to each other.

No Bob, Types are a name you give for a bundle of behaviour. It’s up to you to ensure that the type you name has the behaviour you think it has.

But whenever you refer to a type, the compiler ensures you will get something with that bundle of behaviour.

What behaviour exactly? That’s for you to decide, and for tests to verify or illustrate.

Whenever you loosen the type system…. you allow something that has almost, but not quite, the requested behaviour.

In every case I have investigated “Too Strict Type System”, I have come away with the feeling the true problem is “Insufficiently Expressive Type System” or “Type System With Subtle Inconsistencies” or worse, “My design has connascent coupling, but for commercial reasons I’m going to lie to the compiler about it rather than explicitly make the modules dependent.”

1. 1

In which community is he influential, if I may ask? I’ve only learned of him through Lobsters.

1. 4

I know him as a standard name in the Agile and Ruby communities, I think he’s well-known in Java but am not close enough to it to judge.

1. 3

My college advisor loved talking about him and referencing him, but I think he’s mostly lost his influence with programmers today. At least, most people I know generally disagree with everything he’s written in the past decade.

2. 1

Such blatant refusal (“everything is wrong”) seasoned with mockery (“parroted”) is exactly what has been stopping me from writing posts on this very topic.

1. 2

Declaring that the responsibility for your inaction belongs to strangers leaps over impolite into outright manipulation. I pass.