1. 32

  2. 19

    (Cross-posted with Reddit.)

    Haskell doesn’t suck. It’s quite good. However, I’ve got a major gripe that leaves me skeptical. It’s not with the language. It’s not really with the community. It’s just with reality. So, here goes.

    The difficulty of the language is not that bad, except for the fact that there will probably never be a time when anyone but an elite programmer can bet her career on Haskell. For that reason, I don’t see it taking off. I don’t see managers being willing to adopt it and I don’t see many programmers being able to put in the time necessary to learn it well. I hope that I’m wrong on this.

    There are enough Python jobs that you can bet your career on it. You could bet your career on Java, although you’ll have to become a manager in 10 years to keep sane. At this point, it wouldn’t be ridiculous to bet your career on Clojure. A Java programmer who’s remotely as good as the average Haskell programmer can get $200/hour consulting gigs, reliably. In the Haskell world, that doesn’t exist.

    Having tried to build a Haskell team and failed (cheap company, sparse talent pool, short-sighted management averse to investing in people) I would say without reservation that if I were in the same job today, I’d use Python or Clojure. From a management perspective, using “the best language” is less important than using a good enough language with low risk.

    The situation is possibly good for employers. There are a lot of absolutely brilliant Haskellers who’ll work for relatively average salaries. If I were a CTO at a stat-arb fund with reasonable confidence of being there for 5 years, I might bet on Haskell, in the same way that Jane Street bet on OCaml. That said, the decisions that drive language adoption are rarely made from positions of comfort. I don’t want to blame “Haskell’s community” because that sounds like I’m blaming the people, which I’m not. It’s a great language and a strong community, but the sad fact is that it’s difficult enough and the career rewards are sparse enough that few people are going to invest the time to understand it well.

    This would have been less of a problem 30 years ago when people could expect to be at a company for 5+ years. Back then, learning a hard language was an upfront cost but people wouldn’t be averse to doing it because they had a sense that it would pay off. In today’s economy of mandatory job hopping, people don’t want to bet their careers on “weird” languages that require extra effort to learn.

    1. [Comment removed by author]

      1. 16

        The other thing is mandatory job hopping. I haven’t seen this so much, probably due to my physical location and also my career path. Is job hopping mandatory in some areas?

        I’m going to make a bold claim for michaelochurch.

        If you are a technologist - not on a managerial track - and you haven’t switched jobs in the last five years, you are absolutely leaving money on the table.

        I don’t have numbers to back this up, just a boatload of anecdotes, both personal and those that have been shared. But the consensus is this: if you hop, you get significant raises every time, and typically end up hopping up for a better title.

        Additionally, hopping forces you to stay sharp and avoid falling into the trap of becoming an SME in something that will never translate to a new position (think: an antiquated build process, a custom framework, etc.).

        1. 4

          If you are a technologist - not on a managerial track - and you haven’t switched jobs in the last five years, you are absolutely leaving money on the table.

          Correct. Sadly, this can be true even if you’re switching jobs a lot. When the market softens, the job-hoppers will be hardest hit.

          When you’re doing well and you’re happy, you don’t get raises, because management knows you’ve got a good thing going. Why would they give you a raise when you already have the “perk” of being on a real project instead of the career-negative, demoralizing garbage that most programmers are assigned to work on? When you’re miserable, you can’t get a raise because you’re probably not doing well enough for management to care if you leave.

          This kind of stuff is why it’s becoming clear to me that we as technologists need to think about organizing, or at least being less disorganized. It wouldn’t be a traditional labor union; it would have to be more like the creative guilds of Hollywood.

        2. 3

          As a developer, I view languages (definitely plural) as tools rather than as a worldview that I might be judged by.

          I do, too. That said, others are going to judge you, and if you’re not as good as they are in the language in which you’re judged, it can go bad for you. I would never typecast myself as “A $LANG programmer”, but that’s what other people do.

          This is one reason why I hate submitting code samples. I’m a good programmer, but I’ve never gotten a better job than I would have otherwise gotten because of one. I was once told that my code sample was among the best the company had ever seen… and still given a junior level position. (This was a few years ago.) In other words, no upside.

          Since you mention it several times, can you give some more detail on how picking something like Haskell (or whatever) might kill your career?

          That would be an exaggeration. I think we both agree that it’s silliness to commit to one language for all purposes. That said, it’s hard to reach the level, in multiple languages, where you can make a case for yourself as a truly elite programmer. To function at the highest levels, and reliably so, you do have to pick and choose.

          It won’t kill your career, of course, to know and use Haskell. However, the jobs get rare once you’re older and languages like Java and Python are safer bets.

          The other thing is mandatory job hopping. […] Is job hopping mandatory in some areas?

          It’s really rare to find management and a company that will keep investing in you, and supply bigger and better challenges, for 5 years. If you find such an arrangement, stick with it for as long as it lasts. However, my experience is that most jobs turn to shit before that. Even if you have a great manager, how do you know that he’s going to stick around? In technology, bad managers tend to keep their jobs longer than good ones do.

        3. 4

          How do you think Python— or more relevantly Clojure— got to the point where you’d feel safe enough to “bet” on it?

          Side note: who even writes in only one language these days?

          1. 5

            How do you think Python— or more relevantly Clojure— got to the point where you’d feel safe enough to “bet” on it?

            Python is a B+-at-everything language that gets out of your way. It has limitations and it has traits that will annoy you if you take it into production, but no one doubts that it can be used in production. It’s also managed to succeed in web development as well as data science, in addition to being the go-to language for managers-who-still-program.

            With Clojure, that was intentional, and Cognitect (formerly Relevance) played a major role. The community was built with practicality as a core value. It had the opportunity to learn from the failures of Common Lisp (both as a language and as a community).

            who even writes in only one language these days?

            No one. But people tend to be judged as coders based on language-specific competencies rather than general CS knowledge– in part, because the bottom 75% of programmers (the open-plan Scrum brogrammers) don’t know any CS– so people who want to distinguish themselves as engineers (and note that, after a certain point, it gets easier to just climb the management ladder) have to think about that.

            1. 5

              I very clearly remember when people did doubt Python could be used in production. It took years of slowly gained momentum (the 2.x series) and some right-place, right-time luck (Django). My point being that languages don’t appear fully formed, production-worthy, and “career bet-able.”

              (n.b. I too get no impression Haskell will become an industrial language. If only because the community seems to explicitly not want to be so.)

              And, by the by, I do agree with many of your talking points around unionization. Open-plan is often harmful. Scrum is micromanagement. Jumping from harmful organisational and managerial practices to a class of people though? You haven’t given any evidence of the existence of these “open-plan Scrum brogrammers.” The way you use that term— it seems like you’re trying to manufacture an Other and appeal to tribalism as opposed to make a valid point.

              1. 1

                Jumping from harmful organisational and managerial practices to a class of people though? You haven’t given any evidence of the existence of these “open-plan Scrum brogrammers.” The way you use that term— it seems like you’re trying to manufacture an Other and appeal to tribalism as opposed to make a valid point.

                They exist. They’re the ones who drive out women and minorities with sexist, racist, and exclusionary comments. They’re the “useful idiots” from a tech boss’s perspective; they buy into the macho-subordinate culture and make it (seem to?) work. They create the “youthy” aura that makes companies (even if they’re actually underperforming) attractive to VCs. (“I don’t know what these 200 people are doing, but they’re in an open-plan environment and it’s loud so I know some kind of work is being done.”)

                The horrible culture couldn’t be imposed on programmers if there weren’t some people who supported and were even attracted to it. Of course, there are a lot of young, male programmers who aren’t into this culture at all. Even by 28, I could see that it was toxic.

              2. 2

                Don’t talk about Common Lisp in the past tense :-(

            2. 3

              I’m not sure I get what’s so difficult about Haskell, except for the paradigm shift, which is present with other functional languages as well. I mean, the struggle I’ve seen people go through when learning bog-standard object-oriented coding makes me think that learning Haskell as a first language would be about as easy as learning C# as a first language.

              I had to write a parser for a take-home job interview task and I used Haskell even though I had no experience at all with parsers and very little with Haskell. It went very well, and as soon as it compiled it worked, which is more than I can say for every other language I know… (Though this could be due to bitemyapp’s book.)

              1. 10

                I wrote a short piece on that over on Quora when I still used that site.

                Before I paste it, I’ll remark that yes, it’s not really that difficult. But to the extent that it is, this is why. :)

                Speaking as a long-term user of Haskell, I do note that it occupies several high-concept niches at once.

                Beginners coming from dynamically-typed imperative languages must simultaneously understand algebraic datatypes, typeclasses, the lack of Smalltalk-style object orientation, static typechecking, pure functions, higher-order functions, recursion as a complete replacement for iteration, and updating of immutable data structures. Oh, and the syntax is like very little else, with significant indentation that has lots of corner cases I’m not aware of any tutorial covering.
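
                To make that concrete, here’s a sketch (all names invented for illustration) that bundles several of those hurdles into a dozen lines: an algebraic datatype, a derived typeclass instance, recursion in place of iteration, and “updating” an immutable structure by building a new one:

                ```haskell
                -- Hypothetical example: an algebraic datatype, a derived Show
                -- instance, pure recursive functions instead of loops, and an
                -- "update" that builds a new value rather than mutating.

                data Tree = Leaf Int | Node Tree Tree deriving (Show)

                -- Recursion as the replacement for iteration: no loop, no counter.
                total :: Tree -> Int
                total (Leaf n)   = n
                total (Node l r) = total l + total r

                -- "Updating" an immutable structure: the old tree is untouched.
                double :: Tree -> Tree
                double (Leaf n)   = Leaf (n * 2)
                double (Node l r) = Node (double l) (double r)

                main :: IO ()
                main = do
                  let t = Node (Leaf 1) (Node (Leaf 2) (Leaf 3))
                  print (total t)          -- prints 6
                  print (total (double t)) -- prints 12
                ```

                Each piece is small on its own; the point above is that a newcomer meets all of them at once.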

                Having mastered these things, intermediate Haskellers will discover that performance characteristics are very difficult to get clear explanations of.

                All of this with everyone who advocates for the language pushing monads as the reason to even learn the language at all, so that no amount of advice to “leave that abstract concept for later; it’s of limited importance” will be the slightest deterrent.

                And the error messages are unhelpful to the point of absurdity. Coders who aren’t comfortable asking for help are never going to learn the difference between a type error caused by a missing comma, and an identically-reported type error caused by an extra parameter!

                I like the language a lot; it was my first choice for nearly any task, for a period of eight years, and I’ve even been paid to work in it, which is fairly rare and was a great experience. But if I’m trying to help a friend advance her expertise as a programmer in general, I will generally suggest a language which has a real, static typesystem, but leaves out most of the other hurdles. Lately, Rust seems to be this, although I have very limited familiarity with it myself.

                1. 1

                  That’s an excellent answer.

                  Why’d you leave Quora, if you don’t mind my asking?

                  1. 8

                    It felt too self-indulgent posting there - too much looking for attention, not enough sense of community.

                  2. 1

                    What has your experience been with Rust since then?

                    My impression is that it’s not simpler than Haskell, maybe even harder … it just requires different high-concept niches, which often have fewer tutorials and explanations than Haskell’s.

                    If you consider Haskell error messages to be bad, you haven’t seen anything yet. Rust still does very poorly in this regard. I think the hard problem they have is that many errors are not and cannot be localized, because multiple places cause an error in combination, while each line alone would be fine.

                    1. 2

                      Please file bugs about bad error messages! We care about making them better.

                      1. 1

                        I haven’t had a chance to use it for anything significant. It’s still on my list of things to learn at some point.

                    2. 3

                      It’s not that hard, but keep in mind that:

                      • employers don’t train people anymore and, although it absolutely should be otherwise, it’s not socially acceptable at most companies to use “working time” to gain skills. You’re supposed to know all the job skills on Day 0.
                      • there are hundreds of things arguably worth learning that are competing for a programmer’s time. Spend a month with scikit-learn and you can talk your way on to a data science job earning $150k. Spend a month with Haskell and you’ve gained a lot in general but you’re probably not qualified for the few “real” Haskell jobs out there.
                      • it’s significantly harder to learn than the next HackerNews.js framework. It’s probably not beyond the intellectual capacity of most serious programmers, but it’s beyond the reach of the open-plan brogrammers for whom Scrum was designed.
                      1. 1

                        […] very little with Haskell. It went very well, and as soon as it compiled it worked, which is more than I can say for every other language I know… (Though this could be due to bitemyapp’s book.)

                        If you can rely on the existing libraries nothing seems too hard, but a lot of fundamental work seems to require a much higher degree of skill than it does in other languages. Writing a little web framework like Sinatra doesn’t seem that hard in Ruby, but designing the monad transformer stack to do it in Haskell seems really difficult.
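
                        To illustrate the kind of design work involved, here is a minimal, hypothetical sketch of such a stack (all names invented; real frameworks make different choices), using only the transformers library that ships with GHC: ReaderT for the per-request environment, StateT for handler state, IO at the bottom.

                        ```haskell
                        -- Hypothetical "framework" monad: ReaderT over StateT over IO.
                        import Control.Monad.Trans.Class  (lift)
                        import Control.Monad.Trans.Reader (ReaderT, ask, runReaderT)
                        import Control.Monad.Trans.State  (StateT, modify, runStateT)

                        -- The per-request environment a handler can read.
                        data Request = Request { path :: String }

                        -- Read a Request, thread an Int hit counter, do IO.
                        type Handler = ReaderT Request (StateT Int IO)

                        -- A handler: inspect the request, bump the counter, log.
                        hello :: Handler String
                        hello = do
                          req <- ask                    -- ReaderT layer
                          lift (modify (+ 1))           -- StateT layer, one lift down
                          lift (lift (putStrLn ("serving " ++ path req)))  -- IO
                          pure ("hello from " ++ path req)

                        main :: IO ()
                        main = do
                          (body, hits) <- runStateT (runReaderT hello (Request "/greet")) 0
                          putStrLn body   -- hello from /greet
                          print hits      -- 1
                        ```

                        Even this toy version forces decisions (layer order, where IO sits, how much lifting to expose), which is exactly the design burden being described.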

                      2. 3

                        I think your points are all sound. Practically, Haskell is hard for new people because:

                        • You can not sprinkle printf wherever you would like.
                        • Error handling has an “as you like it” quality.
                        • The range of abstractions available for IO is quite broad when you start looking at web services and API integrations.

                        Haskell can be very productive because the defect rate tends to be very low and the quality and re-usability of abstractions is high. However, productivity is not a priority for the community as a whole. Some examples of features which were overlooked for a long time (or still are) include:

                        • Doc comments in GHCi.
                        • Scripting (in other words, not having to compile your package at all).
                        • Record field overloading and row typing as an accepted mode of polymorphism.
                        • Abandoning laziness by default (laziness proved to introduce a great deal of confusion).
                        • Mainlining extensions that “everyone uses”.
                        • Bringing some unity to IO, error handling and concurrency.

                        Now it is fair to say that Haskell did not start out with the intention of being the most productive language, but rather a very principled one, with an eye to teaching computer science. However, as the project developed, a great deal of research went into performance-related topics as opposed to productivity-related ones. HPC is a niche area, though, and the impact of those features would never extend very far relative to features targeted at web developers or systems engineers.

                        1. 3

                          You can not sprinkle printf wherever you would like.

                          Not to be a pain in the ass, but this is in the base library that comes with every GHC install and doesn’t add IO to your type: http://hackage.haskell.org/package/base-4.8.2.0/docs/Debug-Trace.html

                          1. 1

                            You are totally right; but say I want to sprinkle debug logging or metrics wherever I like. There is a need for this kind of instrumentation as part of the runtime, given that we don’t want to allow IO everywhere.

                            1. 2

                              but say I want to sprinkle debug logging or metrics wherever I like

                              You can, though.

                              1. 1

                                How?

                                Edit: I ask as I’m just learning Haskell.

                                1. 2

                                  here’s a program I just wrote:

                                  data Cat = Cat { lives :: Int } deriving (Show)
                                  
                                  curiosity :: Cat -> Cat
                                  curiosity cat =
                                    cat { lives = lives cat + 1 }
                                  
                                  main = do
                                    let cat = curiosity (Cat 9)
                                    print cat
                                  

                                  If you run this, you’ll get a cat with ten lives! Something in my program has gone horribly wrong!

                                  Here’s how I can debug it with trace:

                                  import           Debug.Trace (trace)
                                  
                                  data Cat = Cat { lives :: Int } deriving (Show)
                                  
                                  curiosity :: Cat -> Cat
                                  curiosity cat =
                                    trace ("My cat has " ++ show (lives cat) ++ " lives") $
                                    cat { lives = lives cat + 1 }
                                  
                                  main = do
                                    let cat' = curiosity (Cat 9)
                                    print cat'
                                  

                                  Now I can see the following output:

                                  My cat has 9 lives
                                  Cat {lives = 10}
                                  

                                  I can even alter the program to:

                                  curiosity cat =
                                    let newCat = cat { lives = lives cat + 1 } in
                                    trace ("My cat had " ++ show (lives cat) ++ 
                                           " lives but now it has " ++
                                           show (lives newCat) ++ " lives") newCat
                                  

                                  And get the even more helpful:

                                  My cat had 9 lives but now it has 10 lives
                                  Cat {lives = 10}
                                  

                                  Yay trace!

                      3. 4
                        • Lack of modularity (typeclass issues being just one of the symptoms of that)
                        • Lack of tooling (feels a bit like JavaScript … what is this week’s build tool that everyone should migrate to?)
                        • Lack of common “language core” (there is Haskell-Haskell and GHC-Haskell, and dozens of varying language pragmas)

                        More subjectively, using :: for type annotations really sucks; some recent Haskell-inspired languages have addressed this, though. Names of common operations are still a mess, even after AMP and related efforts.

                        1. 3

                          In a way I think Haskell would need a successor now, distilling a “good” subset of the language into one that I would feel comfortable building a company on.

                          Probably that would mean pruning the freely definable operator space, deciding which language extensions to keep as a default and which to throw out, defining commonly used type classes in the prelude, etc.

                        2. 4

                          <sarcasm>Sounds like that thread needs a Real Haskell Expert to explain why none of those things are actually problems.</sarcasm>

                          I’ve tried Haskell a few times over the years, and the number one thing that bothers me each time is the attitude of many Haskell users and community members.

                          Go in #haskell or ask on SO how to solve any of the problems from that thread, and without a doubt the first replies will be along the lines of “why would you want to do that?!?” like you’re totally stupid for even asking.

                          1. 1

                            Go in #haskell or ask on SO how to solve any of the problems from that thread, and without a doubt the first replies will be along the lines of “why would you want to do that?!?” like you’re totally stupid for even asking.

                            [sarcasm] So #haskell is the new comp.lang.lisp? [/sarcasm]

                            I saw that in the early Perl community, among others, where the expert practitioners have forgotten what it’s like to be a newbie and are either fatigued by questions or puffed up on themselves and feel justified in responding with hostility. Sometimes the hostility is justified, but a lot of the time it’s what a colleague introduced to me as an “XY problem” (ob StackExchange link), where the person raising the question isn’t a troll or otherwise clueless but is focused on some detail and fails to describe the actual goal.

                            1. 3

                              Yes, in a lot of technical forums I’ve participated in, there’s plenty of blame to go around for both querents and responders. It really does take a lot of patience to be helpful… but, then, it would be nice if people who don’t want to be patient would find a social forum that isn’t an advice forum also.

                          2. 4

                            One comment in the Reddit thread contains a great explanation of why parametric polymorphism is hard to implement (the explanation can be useful the next time someone says generics are a solved problem and Go should already have them):

                            The compilation model (used by GHC) for polymorphism means that abstraction isn’t free. If I write data Foo = Foo !Int, the Int is stored unboxed in Foo. (That’s if you even remember to use strictness annotations.) But if I write data Bar a = Bar !a, an Int is stored boxed in Bar Int, which requires allocating at construction, following an indirection at access, and has at least a triple space overhead: the indirection itself and a header for the garbage collector’s convenience. Memory bandwidth and caches are increasingly the bottleneck for performance-sensitive code these days and this sort of space-waste, pointer-chasing and fragmentation is not very advantageous. (You can work with unboxed data in Haskell but it’s cumbersome and requires duplication and type system effort – Data.Vector.Unboxed, data families, and so on.)

                            This is necessary because polymorphic code has to be able to work with data of any type, which is solved by making every type have the same in-memory representation – a pointer.

                            A possible alternative would be to use monomorphization together with “intensional type analysis” (note: nothing to do with “intensional type theory” as a form of dependent types), wherein anything whose types can be instantiated statically is unboxed (in other words: anything that would be unboxed in C++ or Rust, in other words: anything you could define in C++ or Rust), and anything where the types can’t be known statically (polymorphic recursion, existentials, higher-rank types) would have the size and alignment of the type stored or passed in as runtime values, with memory accesses calculated from them at runtime. This would have the effect of making first-order polymorphic code (everything in Haskell 98 minus polymorphic recursion) exactly as fast as monomorphic code, probably in exchange for making higher-order polymorphism (existentials, RankNTypes, and polyrec) slower.
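
                            The Foo/Bar contrast from the quote is easy to write down. This is a sketch of the declarations only, not a benchmark; the UNPACK pragma is the standard way to make the inline layout explicit under optimization:

                            ```haskell
                            -- Sketch of the representation point above, not a benchmark.
                            -- With -O, GHC stores the strict monomorphic Int inline,
                            -- while the strict polymorphic field remains a pointer to a
                            -- boxed value, because `a` must have a uniform representation.

                            data Foo = Foo {-# UNPACK #-} !Int  -- Int unboxed into the constructor

                            data Bar a = Bar !a                 -- still a pointer, even at Bar Int

                            getFoo :: Foo -> Int
                            getFoo (Foo n) = n                  -- direct read, no indirection

                            getBar :: Bar Int -> Int
                            getBar (Bar n) = n                  -- follows a pointer to a boxed Int

                            main :: IO ()
                            main = do
                              print (getFoo (Foo 42))
                              print (getBar (Bar 42))
                            ```

                            Both reads look identical in source; only Foo’s avoids the pointer chase, which is the quote’s point about abstraction not being free.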

                            1. 4

                              For compiled languages, there’s also the ML approach, which compiles one copy of a function for each concrete type it’s ever called with. But that requires whole-program analysis, which is another can of worms implementation-wise.

                              1. 1

                                That is probably true, but Python is successful while “boxing all the things”. Boxing is more of a reason why C++ gets chosen over Haskell than a reason why Haskell is uncommon outside of compiler construction / prototyping.

                              2. 2

                                Because nobody has used it to spend a weekend writing a browser that doesn’t suck. ;)

                                1. [Comment removed by author]

                                  1. 1

                                    And even easier with parinfer

                                  2. 2

                                    Poor mechanical sympathy and high cognitive load imposed by lazy evaluation. An interesting experiment though.

                                    Also, Cabal Hell, but that’s not intrinsic to the language itself.