1. 24
  1. 14

    Why did Haskell’s popularity wane so sharply?

    What is the source for the claim that Haskell’s popularity is declining so sharply? Is there really any objective evidence for this, I mean numbers, statistics, etc.?

    It’s anecdotal and just my personal impression from observing the Haskell reddit for 10 years, but I have never seen so many Haskell resources, conferences, books, and even job postings as now. I don’t at all have the impression that the language is dying. It has accumulated cruft, has some inconsistencies, and is struggling to get a new standard proposal out, but other than that I have the impression that it attracts quite a few people who come up with new ideas.

    1. 2

      Haskell had glory days when SPJ/Marlow were traveling to various conferences talking about the new language features. Milewski’s posts, LYAH, Parsec, STM, and Lenses are from that era. The high-brow crowd was of course discussing Lenses. Sure, these things drove adoption, and there’s a little ecosystem for the people who jumped on the Haskell bandwagon back then.

      What innovation has it had over the last 5 years? The community couldn’t agree on how to implement any of the features of a respectable dependent-type system, so they invented a bunch of mutually incompatible flags, and destroyed the language. Thanks to the recent hacking, GHC is plastered with band-aids.

      It’s true that you can’t explain these things with some points on a pretty graph, but that doesn’t make it anecdotal. Look at the commits going into ghc/ghc, and look at the activity on the bread-and-butter Haskell projects: lens, trifecta, cloud-haskell. Maintenance mode. Where are the bold new projects?

      1. 23

        These assertions about Haskell are all simply false. There are plenty of problems with Haskell; we don’t need to invent ones that aren’t real.

        The community couldn’t agree on how to implement any of the features of a respectable dependent-type system, so they invented a bunch of mutually incompatible flags, and destroyed the language. Thanks to the recent hacking, GHC is plastered with band-aids

        The reason GHC didn’t just turn on all flags by default is that many of them are mutually incompatible, so your individual .hs file has to pick a compatible set of language features it wants to work with.

        You keep saying this in multiple places, but it’s not true. Virtually no GHC extensions are incompatible with one another. You have to work hard to find pairs that don’t get along, and they involve extremely rarely used extensions that serve no purpose anymore.

        The community is also not divided on how to do dependent types. We don’t have two camps and two proposals to disagree about. The situation is that people are working together to figure out how to make them happen. GHC also doesn’t contain bad hacks for dependent types; avoiding this is exactly why building out dependent types is taking time.

        That being said, dependent types work today with singletons. I use them extensively. It is a revolution in programming. It’s the biggest step forward in programming that I’ve seen in 20 years and I can’t imagine life without them anymore, even in their current state.
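
        To give a flavor, here’s a minimal sketch in the GADTs/DataKinds style that singletons builds on (not the singletons library itself; Vec and vhead are my own illustrative names):

        ```haskell
        {-# LANGUAGE DataKinds #-}
        {-# LANGUAGE GADTs #-}
        {-# LANGUAGE KindSignatures #-}

        -- Length-indexed vectors: the type tracks the length as a Peano natural.
        data Nat = Z | S Nat

        data Vec (n :: Nat) a where
          VNil  :: Vec 'Z a
          VCons :: a -> Vec n a -> Vec ('S n) a

        -- A total head: calling this on an empty vector is a compile-time error,
        -- not a runtime crash.
        vhead :: Vec ('S n) a -> a
        vhead (VCons x _) = x
        ```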

        Look at the commits going into ghc/ghc, and look at the activity on the bread-and-butter Haskell projects: lens, trifecta, cloud-haskell. Maintenance mode. Where are the bold new projects?

        Haskell is way more popular today than it was 5 years ago, 10 years ago, and 20 years ago. GHC development is going strong; for example, we just got linear types, a huge step forward. There’s been significant money lately from places like cryptocurrency startups. For the first time I regularly see Haskell jobs advertised. What is true is that the percentage of Haskell questions on Stack Overflow has fallen, but not the absolute number; Stack Overflow itself exploded in size.

        Even the community is much stronger than it was 5 years ago. We didn’t have Haskell Weekly News back then, for example. Just this year a category theory course was taught at MIT in Haskell, making both topics far more accessible.

        Look at the commits going into ghc/ghc

        Let’s look. Just in the past 4 years we got: linear types, a new low-latency GC, compact regions, deriving strategies and deriving via, much more flexible kinds, all sorts of amazing new plugins (type-checker plugins, source plugins, etc.) that extend the language and provide reliable tooling that was impossible 5 years ago, much better partial type signatures, visible type applications (both at the term level and the type level), injective type families, TypeInType, and a strict-by-default mode. And much more!
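
        Two of those in miniature (a hedged sketch using only base; Score is a made-up type):

        ```haskell
        {-# LANGUAGE DerivingVia #-}
        {-# LANGUAGE TypeApplications #-}

        import Data.Monoid (Sum (..))

        -- DerivingVia: reuse Sum Int's Semigroup/Monoid instances for our newtype.
        newtype Score = Score { getScore :: Int }
          deriving (Semigroup, Monoid) via (Sum Int)

        main :: IO ()
        main = do
          print (read @Int "42")                -- visible type application
          print (getScore (Score 1 <> Score 2)) -- prints 3
        ```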

        This totally changed Haskell. I don’t write Haskell the way I did 5 years ago, virtually nothing I do would work back then.

        It’s not just GHC. Tooling is amazing compared to what we had in the past. Just this year we got HLS, so Haskell now works beautifully in all sorts of editors, from Emacs to VS Code to Vim.

        look at the activity on the bread-and-butter Haskell projects: lens, trifecta, cloud-haskell. Maintenance mode. Where are the bold new projects?

        lens is pretty complete as it is and is just being slowly polished. Haskell packages like lens are based on a mathematical theory, and that theory has been fully worked out. That’s the beauty of Haskell: we don’t need to keep adding to lens.
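
        If it’s been a while, this is the sort of thing that has been stable for years (a tiny sketch; Point and its fields are made up):

        ```haskell
        {-# LANGUAGE TemplateHaskell #-}

        import Control.Lens

        data Point = Point { _px :: Double, _py :: Double } deriving Show

        makeLenses ''Point  -- generates the px and py lenses

        -- One composable vocabulary for getting, setting, and modifying:
        example :: Point
        example = Point 1 2 & px .~ 10 & py %~ (+ 1)
        ```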

        I would never use trifecta today; megaparsec is way better. It’s seen a huge amount of development in the past 5 years.
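
        For instance, a sketch of a small megaparsec parser (the grammar is made up; error messages and source positions come for free):

        ```haskell
        import Data.Void (Void)
        import Text.Megaparsec
        import Text.Megaparsec.Char
        import qualified Text.Megaparsec.Char.Lexer as L

        type Parser = Parsec Void String

        -- A comma-separated list of integers, e.g. "1, 2, 3".
        ints :: Parser [Int]
        ints = L.decimal `sepBy` (char ',' <* space)

        main :: IO ()
        main = parseTest ints "1, 2, 3"  -- prints [1,2,3]
        ```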

        There are plenty of awesome Haskell packages: Servant for the web, Persistent for databases, miso for the frontend. 5 years ago I couldn’t dream of deploying a server and frontend that share a type-checked API. For bold new ideas, look at all the work going into neural network libraries that provide type safety.
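
        To make “type-checked API” concrete, here’s a minimal Servant sketch (the User type and route are made up):

        ```haskell
        {-# LANGUAGE DataKinds #-}
        {-# LANGUAGE DeriveGeneric #-}
        {-# LANGUAGE TypeOperators #-}

        import Data.Aeson (ToJSON)
        import GHC.Generics (Generic)
        import Servant

        data User = User { userId :: Int, userName :: String }
          deriving (Generic, Show)

        instance ToJSON User

        -- The whole HTTP API as a type; a handler (or a generated client)
        -- that disagrees with this shape simply doesn't compile.
        type API = "users" :> Capture "id" Int :> Get '[JSON] User

        server :: Server API
        server uid = pure (User uid "example")
        ```

        servant-client can then derive a client from the same API type, which is what keeps server and frontend in sync.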

        I’m no fanboy. Haskell has plenty of issues. But it doesn’t have the issues you mentioned.

        1. 1

          Right. Most of my Haskell experience is dated: from over five years ago, and the codebase is proprietary, so there are few specifics I can remember. I’m definitely not the best person to write on the subject. In any case, I’ve rewritten the Haskell section of the article, with more details. Thanks.

          1. 6

            By my definition, a “dying language” is one that is losing popularity or losing interest. For Haskell this is absolutely not clear. Also, your section is about “why Haskell is bad”, not “why it is dying”. People do not talk about Haskell as they used to, in my opinion, but I still see a lot of activity in the Haskell ecosystem, and it doesn’t really look like it’s dying.

            I think it is easier to agree that Clojure is dying, looking at Google Trends for example: https://trends.google.com/trends/explore?cat=5&date=all&geo=US&q=haskell,clojure

            But Haskell looks more like a language that will never die, yet probably never become mainstream either.

            1. 5

              I’m definitely not the best person to write on the subject. In any case, I’ve rewritten the Haskell section of the article, with more details. Thanks.

              Great! Although there are still many claims that are factually untrue.

              I think this is just a sign that you’ve been away from the community for many years now, and don’t see movement on the things that were hot 5-10 years ago. Like “The high-brow crowd was obsessed with transactional memory, parser combinators, and lenses.” Well, that’s over. We figured out lenses and have great libraries; we figured out parser combinators and have great libraries. The problems people are tackling now for those packages are engineering problems, not so much science problems. Like: how do we have lenses and good type errors? And there, we’ve had awesome progress lately with custom error messages (https://kodimensional.dev/type-errors) that you would not have seen 5 years ago.
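
              To make “custom error messages” concrete, here’s the flavor (a made-up instance using GHC’s TypeError; not taken from that post):

              ```haskell
              {-# LANGUAGE DataKinds #-}
              {-# LANGUAGE FlexibleContexts #-}
              {-# LANGUAGE TypeOperators #-}
              {-# LANGUAGE UndecidableInstances #-}

              import GHC.TypeLits (ErrorMessage (..), TypeError)

              -- Replace an inscrutable "no instance" failure with a
              -- domain-specific message of the library author's choosing.
              instance TypeError ('Text "A Bool is not a Semigroup."
                                  ':$$: 'Text "Did you mean All or Any?")
                    => Semigroup Bool where
                (<>) = error "unreachable"
              ```

              Now writing `True <> False` fails at compile time with that message instead of a generic missing-instance error.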

              The science moved on to other problems.

              The issue is that different extensions interact in subtle ways to produce bugs, and it’s very difficult to tell if a new language extension will play well with the others (it often doesn’t, until all the bugs are squashed, which can take a few years).

              This still isn’t true at all. As for the release cadence of GHC, again, things have advanced amazingly. New test environments and investments have resulted in regular GHC releases; we see several per year now!

              In Atom, the Haskell addon was terrible, and even today, in VSCode, the Haskell extension is among the most buggy language plugins.

              That was true a year ago; it is not true today. HLS merged all efforts into a single cross-editor package that works beautifully. All the basic IDE functionality you would want is a solved problem now, and the community is moving on to fun things like code transformations.

              Then there’s Liquid Haskell that allows you to pepper your Haskell code with invariants that it will check using Z3. Unfortunately, it is very limited in what it can do: good luck checking your monadic combinator library with LH

              That hasn’t been true for about 3 years. For example: https://github.com/ucsd-progsys/liquidhaskell/blob/26fe1c3855706d7e87e4811a6c4d963d8d10928c/tests/pos/ReWrite7.hs

              The worst case plays out as follows: the typechecker hangs or crashes, and you’re on the issue tracker searching for the issue; if you’re lucky, you’ll find a bug filed using 50~60% of the language extensions you used in your program, and you’re not sure if it’s the same issue; you file a new issue. In either case, your work has been halted.

              In 15 years of using Haskell I have never run into anything like this. It is not the common experience. My code is extremely feature-heavy, uses many features only available in the latest compiler, and has 20-30 extensions enabled. Yet this just doesn’t happen.

              There is almost zero documentation on language extensions. Hell, you can’t even find the list of available language extensions with some description on any wiki.

              Every single version of GHC has come with a list of the available extensions, all of which have a description and most of which have example code: https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/glasgow_exts.html You can link to the manual, which neatly explains everything, rather than to the git repo.

              Looking at the big picture: first, this is a poor way to do software development; as the number of language extensions increases, your testing burden increases exponentially.

              This is only true if you can’t prove how extensions interact, or more fundamentally, that they don’t interact.

              Second, the problem of having a good type system is already solved by a simple dependent type theory; you study the core, and every new feature is just a small delta that fits in nicely with the overall model.

              That’s totally untrue. There is no such general-purpose language today. We have no idea how to build one.

              As opposed to having to read detailed papers on each new language extension. And yes, there’s a good chance that very few people will be able to understand your code if you’re using some esoteric extensions.

              Again, that’s just not true. You don’t need to know how the extensions are implemented. I have not read a paper on any of the extensions I use all the time.

              In summary, language extensions are complicated hacks to compensate for the poverty of Haskell’s type system.

              That’s just the wrong way to look at language extensions. Haskell adds features through extensions because the design is so good. Other languages bolt changes onto the language itself, forcing you onto some new variant, because their core is too brittle and needs fundamental changes. Haskell’s core is so solid we don’t need to break it.

              However, PL research has shifted away from Haskell for the most part

              That’s again totally factually untrue. Just look at Google Scholar: the number of Haskell papers per year is up, not down. The size of the Haskell workshop at ICFP is the same as 5 years ago.

              Moreover, there are no tools to help you debug the most notorious kind of bug seen in a complicated codebase: memory blowups caused by laziness.

              Again, that’s not factually true.

              We have had a heap profiler for two decades, and in the past few years we got ThreadScope to watch processes in real time. We have systematic ways to find such leaks quickly: you just cap the heap so the program stops as soon as a leak appears (https://github.com/ndmitchell/spaceleak). We also got stack traces in the past few years, so we can locate where issues come from. And in the past few years we got Strict and StrictData.
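
              The textbook leak and its fix, for the record:

              ```haskell
              import Data.List (foldl')

              -- The classic laziness leak: without optimizations, lazy foldl
              -- builds ten million thunks before forcing any of them.
              leaky :: Integer
              leaky = foldl (+) 0 [1 .. 10000000]

              -- The fix: a strict left fold (or Strict/StrictData at scale).
              fine :: Integer
              fine = foldl' (+) 0 [1 .. 10000000]
              ```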

              As for the code examples: I can pick two lines of any language out of context and you’ll have no idea what they do.

              Who cares what every extension does for every example? That’s the whole point! I have literally never looked at a piece of Haskell code and wondered what an extension does. I don’t need to know. GHC tells me when I need to add an extension and it tells me when an extension is unused.

              How many more language features are missing?

              Extensions are not missing language features.

            2. 1

              GHC also doesn’t contain bad hacks for dependent types; avoiding this is exactly why building out dependent types is taking time.

              Honestly, I’d much prefer a simple core model, like that of HoTT.

              1. 3

                Honestly, I’d much prefer a simple core model, like that of HoTT.

                I’d love that too! As would everyone!

                But the reality is, we don’t know how to do that. We don’t even know how to best represent computations in HoTT. It might be decades before we have a viable programming language. We do have dependent types that work well in Haskell today, that I can deploy to prod, and that prevent countless bugs while making code far easier to write.

                1. 1

                  I think HoTT with computations is “cubical type theory”? It’s a very active area currently.

                  As for dependent types as the backend for advanced type-level features, I think that’s what Dotty/Scala 3 is about. It’s definitely not the only way to do it, but it’s also not decades away. Idris 2 is also an interesting effort.

            3. 4

              Dependent types aren’t that useful for production software, and full-blown dependent types are really contrary to the goals of Haskell in a lot of ways. Any language that’s >20 years old (basically 30) is gonna have some band-aids. I’m not convinced that Haskell is waning in any meaningful way, except that people don’t hype it as much on here/HN. Less hype and more doing is a good thing, imho.

              1. 3

                Reminds me of the days when people said FP and complete immutability weren’t useful for production software. It is true that there is no decent general-purpose language that implements dependent types, but that’s beside the point.

                It’s true, hype is a poor measure.

                1. 4

                  Yeah, that’s an interesting comparison, but I think it’s a totally different situation. Immutability and dependent types are both things you use to make certain assumptions about your code: immutability lets you know that some underlying value won’t change, while dependent types let you make more general statements/proofs of some invariant.

                  The big difference is that immutability is a simplification. You’re removing complexity by asserting some assumption throughout your code. Dependent types, generally, are adding complexity: you have to provide proofs of some statement externally, or you have to build the proof of your invariants intrinsically into your constructions.

                  IMHO, that’s a huge difference in the power-to-weight ratio of these two tools. Immutability is really powerful and fairly lightweight. Dependent types are not really that powerful and incredibly heavy. I’m not saying dependent types are worthless. Sometimes you really, really want that formal verification (e.g. compilers, cryptography, etc.). But the vast majority of code doesn’t need it, and you’re just adding complexity, something I think should be avoided in production software.
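
                  To make that “complexity tax” concrete, here’s a tiny Haskell sketch (Vec and Add are my own illustrative names): even appending two length-indexed vectors requires defining type-level addition first, and anything fancier requires proving facts about it.

                  ```haskell
                  {-# LANGUAGE DataKinds #-}
                  {-# LANGUAGE GADTs #-}
                  {-# LANGUAGE KindSignatures #-}
                  {-# LANGUAGE TypeFamilies #-}

                  data Nat = Z | S Nat

                  -- Vectors whose type tracks their length.
                  data Vec (n :: Nat) a where
                    VNil  :: Vec 'Z a
                    VCons :: a -> Vec n a -> Vec ('S n) a

                  -- Type-level addition, needed just to *state* append's type.
                  type family Add (n :: Nat) (m :: Nat) :: Nat where
                    Add 'Z     m = m
                    Add ('S n) m = 'S (Add n m)

                  vappend :: Vec n a -> Vec m a -> Vec (Add n m) a
                  vappend VNil         ys = ys
                  vappend (VCons x xs) ys = VCons x (vappend xs ys)
                  ```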

                  1. 3

                    Tl;dr: I have a good amount of experience with dependently typed languages, and I write Haskell for a living. After all of my experience, I have come to the conclusion that dependent types are overhyped.

                    1. 1

                      I’ve started writing a post on dependent types. Here’s an early draft: https://artagnon.com/articles/dtt

                    2. 3

                      What about Ada?

              2. 10

                Neat! Some languages I think are missing:

                • Algol (which led to Pascal and C)
                • COBOL (blech)
                • C# (mostly interesting as a refinement of Java)
                • Forth
                • APL (which led to K)
                • Prolog
                • TypeScript
                • SQL

                I think there’s just as much to be learned from “This is a language that failed to take off” as from “This is a language that, in spite of any of its numerous failings, is in widespread use today”.

                1. 5

                  APL (which led to K)

                  Indeed, the APLs are a big omission imo. Even if you want to downgrade them based on their relative obscurity today, they deserve mention for their lasting influence on everything used in ML and data science, from R to Julia to NumPy.

                  1. 2

                    There was an article about that about a month ago:

                    https://lobste.rs/s/wu1bdw/numpy_another_iverson_ghost_2018

                2. 7

                  That Simula 67 is missing makes this diagram potentially misleading for newcomers. The biggest influence on Smalltalk was Simula (see Alan Kay’s HOPL paper), and both C++ and Java derive directly from Simula 67 [1,2]. Omitting it reinforces the false belief that Smalltalk is the foundation of the concepts of object orientation. Alan Kay didn’t get his Turing Award for the ideas of object orientation (see its citation); Nygaard and Dahl got their Turing Awards for exactly that.

                  ———

                  [1] Java object model https://dl.dropboxusercontent.com/s/lmfmvdsgw3lqoew/2017-10-03_15-33-11.png

                  [2] “And my idea was very simple: to take the ideas from SIMULA for general abstraction for the benefit of sort of humans representing things… so humans could get it with low level stuff, which at that time was the best language for that was C, which was done at Bell Labs by Dennis Ritchie. And take those two ideas and bring them together so that you could do high-level abstraction, but efficiently enough and close enough to the hardware for really demanding computing tasks. And that is where I came in. And so C++ has classes like SIMULA but they run as fast as C code, so the combination becomes very useful.” — Bjarne Stroustrup

                  1. 3

                    Thanks for the read. I’ve added Simula now :)

                    1. 2

                      Merci bien for a thought provoking diagram/article :)

                  2. 7

                    Algol, Simula and COBOL are glaring omissions. Algol has basically influenced every C-like language out there. Simula was the start of the object idea. COBOL is still running and written today.

                    Fine. It’s opinionated. It’s a poorly informed opinion, then.

                    1. 1

                      Thanks to you, and others on this thread, I’ve been doing some reading on ALGOL and Simula. Indeed, they are glaring omissions, and I’ve included them now. I’m still not convinced COBOL is worth including though.

                      1. 4

                        I think omitting COBOL would be a mistake. It’s still used with literally billions of lines of code still running in production. Whether this is a good or bad thing is up for discussion. COBOL as a language has been updated as recently as 2014, with companies such as IBM and Micro Focus still producing and supporting compilers and other optimizers for it.

                        How it fits into any kind of lineage map is another point of discussion, but perhaps it should be a minor influence on another omission: BASIC. BASIC was on pretty much every home computer for over a decade and was the shell for many of those systems. It’s how an entire generation of computer users learned to program, for better or worse.

                    2. 6

                      I can’t visually parse the chart. The distinctions between color and line thickness require more visual acuity than I have.

                      You’re by no means required to make your publications accessible, but you might consider making it a little easier on those of us who lack your visual acuity :)

                      Also, C as a novel root language with no predecessor? BCPL? B? Algol? :)

                      1. 2

                        Hehe, sorry :) I’m not sure how to improve the readability though: it’s quite a dense chart, and I just went for the path of least resistance. Much of my site is filled with commutative diagrams in xypic, so I figured: why not use that? Any other solution would require a significant time investment :(

                        Pretty much all the root languages have some predecessors, but I’ve got to cut the chart off to some point. If it’s any consolation, I think all the root languages are sufficiently novel to appear in bold. I’ve tweaked the wording to “no significant predecessor” though.

                        1. 6

                          No significant predecessor? ALGOL is probably the most influential language of all time.

                          1. 2

                            No worries, I do totally understand, and it’s a nice piece of work. Good on you for publishing it!

                            FWIW I love this stuff. I actually own the three-volume ACM HoPL (now four, but I haven’t bought the fourth yet because it’s ungodly expensive), and they’re great :)

                        2. 6

                          Lives in France. Likes Coq and OCaml. Checks out.

                          1. 5

                            Obviously missing a lot of languages, but Prolog seems like a major omission. It was one of the inspirations for Erlang; in fact, Erlang was prototyped in Prolog before being rewritten in C for performance reasons.

                            Joe Armstrong even helped the author of Seven Languages in Seven Weeks learn Prolog as part of writing the book:

                            https://www.bennadel.com/blog/2060-seven-languages-in-seven-weeks-a-pragmatic-guide-to-learning-programming-languages-by-bruce-tate.htm

                            1. 1

                              Historically speaking, I’m of the opinion that Prolog isn’t that important. I know that it’s at least mentioned in many university-level PL courses, but why? It didn’t influence the design of any of the other languages on that chart: I didn’t know Erlang was prototyped in Prolog, but as far as language design goes, I don’t think Erlang derives inspiration from Prolog.

                              What other omissions did you have in mind?

                              1. 4

                                Well, lots of languages: ALGOL, Scheme, Self, Simula, SQL, C#, COBOL, Pascal, etc. (Edit: didn’t see Self before! Originally looked at the graphic on mobile. My bad.)

                                Prolog I bring up because it is an early logic programming language, which is another (major? minor?) paradigm, and it is a parent to Erlang, which is on the list.

                                Ultimately, this is an opinionated list. So whatever floats your boat.

                                I don’t think Erlang derives inspiration from Prolog.

                                Erlang started out as a modified Prolog.

                                http://erlang.org/faq/academic.html#idp32827664

                                1. 3

                                  Scheme is arguably a major language, but it didn’t have the kind of influence that I imagined it would. The chart does show LISP, CL, and their influences, if that’s any consolation. The others, I still can’t justify crowding the chart with.

                                  I added Prolog: thanks for the read.

                                  Self was added after I got some comments about it. Sorry, the site isn’t really made for mobile: there are way too many large commutative diagrams that don’t render well on mobile.

                                2. 3

                                  don’t think Erlang derives inspiration from Prolog

                                  Seeing how it’s almost the weekend, I’m gonna leave this here. But I’m also gonna say that when Joe Armstrong was thinking about this new language, he wrote down the rules, and somebody else told him he had done “an algebra” and that he could embed it in Prolog easily (not sure where the quote comes from).

                                  1. 1

                                    Thanks for the interesting read. It convinced me to include Prolog in the chart :)

                              2. 5

                                Rust may be important in being a practical language filling a niche that desperately needed it, but I don’t think it has made history yet. Rust itself is a derivative language that merely popularized concepts from other languages.

                                Rust is a language that mostly cribs from past languages. Nothing new.

                                https://venge.net/graydon/talks/intro-talk-2.pdf

                                1. 1

                                  Agreed. It’s the newest language I was willing to include (albeit, a little reluctantly).

                                  1. 1

                                    There are ways of making history other than by innovating. But even given that, I think Rust does innovate by combining those features.

                                    I don’t know exactly what you’re talking about though, since you’ve cut out some work for the reader. Time to watch that talk!

                                  2. 4

                                    The graph gave me some good chuckles. :-)

                                    The only reason I don’t object to C++ not being categorized as “Unlikely to influence anything in the future, due to remarkably poor design” is that its remarkably poor design does have influence: it tells people what not to do in future languages.

                                    I think the author is missing the point a bit in the Rust to C++ comparison though, with its “look at what C++ can do, but Rust can’t!”.

                                    I mean, … yeah, that’s the idea of not adding everything to the language?

                                    1. 4

                                      C++ wasn’t designed from the beginning to be the way it is today; it has evolved to stay relevant over its 30+ year history. As much grief as the C++ standards committee gets, they’ve done a good job significantly improving the language over the last decade, while maintaining compatibility even for projects with 5-6 million lines of existing code. In terms of influence, RAII and const/non-const member functions (i.e. &self, &mut self) in Rust come directly from C++.

                                      As many features as it has, quite a few of them are vestigial for practical purposes. Every team I’ve been on has actually used a subset of the language: very limited inheritance, no exceptions, no RTTI, and few (for specific purposes) templates. Since the downturn of OOP hype, a lot of C++ you find these days is much closer to FP than to OOP.

                                      1. 2

                                        laughs

                                        I must admit that I have a soft spot for C++. Pragmatically speaking, it’s a very effective language for engineering, for people willing to invest the time to read the std documentation. Yes, I agree that no future language would want to keep wedging in features like this, but I’m impressed C++ is able to do it at this pace without breaking the language.

                                      2. 3

                                        It will be interesting to see what other languages are influenced by Elm over the next decade or so. It’s already influenced both Rust and Reason error messages.

                                        1. 2

                                          Just because the world has more polyglots in it today doesn’t mean Ruby is dying… sure, it had a huge spike because of Rails, and those ideas percolated everywhere, but Ruby as a language unto itself is more popular than ever.

                                          Ruby has been a dramatic influence on both Elixir and Crystal, and many people love its goal of ‘developer happiness’. History will judge the impact it’s had 30 years from now, but I think it’s going to fare well.

                                          1. 2

                                            Why does the diagram lack PHP, but include Ruby?

                                            1. 1

                                              I’m not sure Java should be included in this graph. The graph focuses on language design, and IMO the main point of Java is not the language itself but the runtime. The language was intentionally stripped of many features in order to “dumb it down”, so it’s probably true the language won’t influence anything, but it’s also true that the Java ecosystem has influenced the .NET environment, and both the JRE and .NET are being used to actually run businesses, unlike the majority of languages on the OP’s list. Of course the OP can think whatever he likes, but I believe that a “remarkably bad design” wouldn’t have allowed the JRE ecosystem to grow to such a size.

                                              But again, I’m not a big fan of the Java language; it’s just that saying Java is bad, slow, and poorly designed makes me think the person saying such things doesn’t know Java that well.

                                              1. 1

                                                Agreed. I included Java precisely for the JVM. See the (*) on top of Java.