Followup from the author: https://memo.barrucadu.co.uk/blub-crisis.html
Good on the author for recanting. This sort of thing though is why a lot of people have a dim view of the Haskell community, and there seems little chance that will change.
The fact that Go lacks sum types and generics really is a pretty big flaw of Go, and every language akin to Go that lacks these features. “In Go you just can’t do that, the type system isn’t strong enough. So a lot of things which are (or can be) a compile-time error in Haskell are a runtime error in Go, which is just worse.” completely holds up. There’s nothing arrogant about saying this.
What I still find especially amusing is that Go’s designers first claimed that no one needs parametric polymorphism, and then implemented “generic” collections right in the standard library by silently downcasting users’ values to interface{} on insertion.

If that isn’t the Blub paradox in action, I don’t know what is. ;)
Please consult the original sources, and don’t perpetuate silly arguments online.
This has been there for around 10 years. I don’t even program Go and I know that.
https://golang.org/doc/faq#generics
We can argue about whether Rob Pike actually said the exact words “no one needs generics”; I assume that is the silly argument you are referring to. The point remains: they intentionally left polymorphism and generic collections out of the first versions of the language, then implemented laughably type-unsafe generic collections in the standard library.
It’s dishonest to claim that someone said something, and then say “we can argue about whether they said it”.
They probably didn’t say it. If they did, provide a source.
It’s equally dishonest to say “[allows you to do what generics allow], even if less smoothly” when the issue at hand is an irrecoverable loss of type safety.
I don’t see why it should matter whether they ever said those exact words, when the way they acted with the implementation shows it better. I don’t see a point in continuing this discussion; if you want to offer the Go team your services as a libel attorney, feel free, I’m waiting for the summons.
Sounds like you’ve never designed or implemented a programming language.
You could leave out generics for reasons other than thinking “I don’t think that anyone needs generics”. Your conclusion doesn’t follow.
Yes, but that doesn’t support your claim.
To be sure, turning runtime errors into compile-time errors is a good thing, and strong (expressive, etc.) type systems are typically how you do that. But stronger type systems aren’t strictly or generally better than weaker ones: you have to pick a metric to measure them against, and there are lots of metrics where the costs of dialing up the strength (chiefly complexity) start giving you negative returns well before you hit, e.g., Hindley-Milner or whatever.
It’s easy to see how general claims like this — that this property of Go in comparison to Haskell makes Go “just worse” — are seen as arrogant. The author presumes that his personal set of language judgment criteria are somehow universal.
What sort of thing? I have to admit that I didn’t find the post recant-worthy at all–it seemed quite balanced and respectful of Go, honestly–and I’m confused both by the follow-up post by the author as well as your comment. In fact a lot of these criticisms are the same as what I’ve heard from others, including (mostly) those from non-Haskell programmers.
What do you think they did wrong here?
I think the author originally wrote a very reasonable article and while I don’t agree with their analysis I respect it. Their follow-on invocation of the Blub stuff was humble and also good.
In other arenas, I have seen complaints from Haskellers that are not nearly so balanced and which certainly are not followed up with a moment of reflection. That’s what I was commenting on. I’ve had the good fortune to interact with some folks who are very kind and patient–but I think a lot of folks’ first exposure to that language and its proponents is not through those people. A parallel might be drawn to early (and even recent) evangelism of Linux.
So, you actually have no real criticisms of this piece? Why say this if you thought this was a “very reasonable article?”
Because the original article exists in a spectrum alongside other, less helpful critiques. If I find a donut shop I like, it’s not weird to point out other donut shops that I don’t.
I certainly could’ve worded it better, I suppose. :)
It’s kinda surreal to think of something like Haskell being someone’s blub language. But, his arguments make sense.
My theory is that Blubs come in three tiers. :)
First we have the Blub class. It’s the “average language of its time”. Language X is in the Blub class if:
Most people agree that using a language less powerful than X would be a really bad idea.
Large numbers of people think X is as powerful as any language will ever need to be.
There exists a sizable population who think X isn’t powerful enough.
Then we have the Hyper-Blub class. It’s the upper limit of what is well- and widely-understood enough to be practical at a given time. When people argue with Blub fans, they point at one of these languages. It’s realistically possible to start a company and make a product in a Hyper-Blub language.
Finally we have a diverse class of Meta-Blub languages that even Hyper-Blub fans may find strange and impractical.
I believe as of now, Haskell (and the Haskell/ML family in general) is slowly moving from the Hyper-Blub to the Blub class. There are people taking up Rust or Swift for reasons unrelated to their type system features and getting exposed to those features for the first time. Indeed, Swift docs carefully avoid the standard terminology and try to explain everything in “blubby” terms.
I’m seeing brave people who are treating dependently typed languages as Hyper-Blubs rather than Meta-Blubs. We may be living in the middle of a great Blub shift.
Can I suggest calling them “Blub”, “Super Blub” and “Super Duper Blub”? Mostly just for prosody, but also because this way hints that the tower might continue to grow in the future. ❤️
I don’t think the tower grows. Today’s Hyper-Blubs become tomorrow’s Blubs, while Meta-Blubs become Hyper-Blubs.
For example, Smalltalk was in Meta-Blub territory until, in the mid-2000s, Ruby and Python created a large population of people whose Blub is a dynamic OO language not too unlike the legendary “slow and slightly funny” Smalltalk.
Even if it doesn’t, I like the suggestion that it might.
So, in your ontology, then, Forth/Postscript/Factor would be a Meta-Blub, probably along with Common Lisp then?
I may be mistaken, but I can’t see anything that puts any of those high in the Blub hierarchy.
Take Coq, a Meta-Blub, for example. CompCert is a C compiler whose optimizations are proven to preserve the program’s semantics; if they don’t, it fails to build. That’s a constraint you cannot express in a less powerful language.
I don’t remember Forth and friends ever enabling things that weren’t possible before them. That might be why they are dead or relegated to narrow roles like PostScript.
Gotcha. I’m unsure I’d go with Meta-Blub vs Hyper-Blub. I’d probably call Meta-Blub “Ultra-Blub”, or go with “Super Blub” and “Hyper Blub”. Meta, to me at least, indicates a difference in kind, not just degree.
Might be good to add the year (2016) to the title.
What I find in so many of these “Go considered harmful” posts is that the authors almost always ignore or dismiss the most important feature of Go: concurrent programming. Yes, Go lacks generics, and yes, Go does garbage collection, and yes, Go lacks enums, and all these things that people like to complain about….
But if you want to write code with tens of thousands of communicating sequential processes, you either learn Erlang or you take your existing procedural knowledge and Go to town…
(Go’s lack of enums does bother me to no end…)
The idea that Go’s concurrency is somehow a really well designed system only holds water when you compare it to the languages it’s trying to replace, C and C++. Go has one big hammer, goroutines communicating via channels, and that’s all you get. Haskell, on the other hand, has separate notions of concurrency and parallelism. It also has many more primitives which allow you to build much more understandable systems with the properties you need. Simple synchronisation between threads? Maybe an MVar is enough. Need to asynchronously send data between two threads? Use a Chan. Need more complex transactional communication? STM lets you compose your communication with computation and get atomicity of arbitrarily complex operations.
I’m not familiar enough with Haskell to know anything about concurrency, so correct me if I’m wrong here.
It looks like Haskell’s concurrency is based on true threads, which puts a relatively tight limit on their number. Go’s concurrency allows an immense number of coroutines, hundreds of thousands even. When you can decompose problems into an essentially unlimited number of processes, it allows some problems to be solved very elegantly.
Haskell’s threads are not OS threads; they are one of the lightest-weight green-thread implementations of any language, even lighter than Erlang’s. Running hundreds of thousands of threads simultaneously is fine. The way network services work in Haskell is by spawning one or more threads per connection and, if needed, using the various synchronisation types to coordinate between threads. The IO system is based on the host OS’s event-based IO libraries, so handling data from thousands of files, connections, etc. results in very cheap interleaving of thread execution while being asynchronous in the background.
Interesting, thank you for informing me. I played around with Haskell many many years ago. Maybe I should get back into it.
I use neither, but Haskell has at least as good a concurrency story as Go. Go’s channels still lead to plenty of bugs and the lack of generics prevents the creation of libraries of reusable concurrency patterns.
I have seen precisely one (1) highly-concurrent network system implemented in Haskell in my career. If your claim were true, I would expect to have seen many more. What metric are you using for comparison?
You mean apart from all the web services written using Warp? Or Facebook’s Sigma spam filtering? Or just about any program I have ever written to do anything remotely useful in Haskell? Concurrency in Haskell is so easy, we don’t talk about it. In my opinion, it’s a toss-up between Haskell and Ada when you want to talk about flexible concurrency; Go’s primitives are downright primitive, and it’s been shown that some quite trivial patterns are actually impossible to implement in Go.
I have never encountered a web service written in Warp. I’ve heard of Sigma, and I remember it precisely because using Haskell was so notable. Whereas I guess I’ve personally owned, maintained, reviewed, or otherwise interacted with in a meaningful way, easily, 500 Go services.
How would you know?
On what basis do you claim that if Haskell had at least as good a concurrency story as Go, the expected result would be that you would see many more highly concurrent Haskell services (and recognise them as such)?
You say you’ve owned, maintained, and reviewed Go services, so I guess you’re a Go developer; are you also a Haskell developer?
Moreover, what does it prove? Do you know anything about Haskell’s concurrency story that isn’t just a lazy guess based on proxies like popularity?
I mean, by now I’d hoped we all agreed that a good language is not the same as a popular language. Haskell has a multicore GC, channels, MVars, and, more importantly, values are immutable by default (so you don’t risk sharing mutable data over channels, which is a risk in Go). My impression is that Go might make simple things simple, but not particularly safe or easy in real cases, where people have to pull in “classic” shared-memory primitives for performance and receive very little help from the language in exchange.
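To illustrate the sharing risk mentioned above, here’s a small, contrived Go sketch: sending a slice over a channel copies only the slice header, so sender and receiver still share the backing array, and nothing in the type system flags the aliasing.

```go
package main

import "fmt"

func main() {
	ch := make(chan []int, 1)
	data := []int{1, 2, 3}

	// The send copies the slice header, not the elements; the
	// backing array remains shared with the sender.
	ch <- data
	data[0] = 99 // sender mutates after the send

	received := <-ch
	fmt.Println(received[0]) // prints 99: the "message" was mutated
}
```

With an actual concurrent sender this same aliasing becomes a data race; in Haskell, immutable-by-default values sent over a Chan can’t be mutated out from under the receiver.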
My point is that “goodness” isn’t a general or objective property of a programming language, it exists only in the context of some criteria.
Permitting shared mutable access to data, for example, doesn’t make a language objectively or generally less-good.
You could also learn Pony, or Elixir (which is still based on Erlang, but a far sight less alien in syntax).