1. 29

  2. 6

    Like C++, Haskell is (in part) a research project with a single initial Big Idea and a few smaller ones.

    That is an interesting point of view.

    1. 9

      C++ took a bunch of language features from a mix of well-designed, research, and popular languages. It’s so kludgy precisely because it’s many languages bolted together on a foundation (C) that, unlike say LISP, wasn’t designed for that. So the quoted description is justifiable.

      1. 8

        Honestly, the more I learn about C++ the more I see that it did improve C in a lot of subtle ways. In particular, the semantics of references are very nice. It also manages to be much better typed.

        But even given that, I’d rather see Rust succeed. It’s nasty and complicated too, but in return you get safety.

        1. 5

          It isn’t all that clear to me that C++’s references are a net improvement over pointers. On the plus side, they can never be null. On the minus side:

          • They can still become dangling.
          • Their non-first-class nature creates problems, e.g., a class containing non-static reference members gets its copy and move assignment operators implicitly deleted.
          • They are a form of indirection that isn’t immediately visible in the code. How much does it improve your life not to have to type those pesky ampersands in what would otherwise be std::swap(&a, &b)?
          • Rvalue references are just the wrong thing. AFAICT, they have two main use cases: implementing move semantics and implementing forwarding. Regarding the former: using a reference is wrong, you should move the object itself. Regarding the latter: think about why there is no notion of forwarding in Rust.
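
          For concreteness, here is a minimal sketch of the reference-member bullet above (the `Holder` type is my own, purely illustrative). A reference cannot be reseated after initialization, so the implicit assignment operators disappear, while copy construction stays trivial; and the `std::swap` call shows the invisible indirection at the call site:

          ```cpp
          #include <cassert>
          #include <type_traits>
          #include <utility>

          // Hypothetical example type: a class with a non-static reference member.
          struct Holder {
              int& ref;
          };

          // The implicitly declared copy/move *assignment* operators are deleted,
          // because the reference member cannot be reseated after initialization...
          static_assert(!std::is_copy_assignable_v<Holder>);
          static_assert(!std::is_move_assignable_v<Holder>);

          // ...while copy *construction* remains trivial (the reference itself is copied).
          static_assert(std::is_trivially_copy_constructible_v<Holder>);

          int main() {
              int a = 1, b = 2;
              std::swap(a, b);  // pass-by-reference: no ampersands at the call site
              assert(a == 2 && b == 1);
              return 0;
          }
          ```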
      2. 3

        This is very similar to my experience but I wouldn’t compare Haskell to C++.

        C++ hit feature targets at all costs over 30+ years, including an era when really bad ideas (e.g., OOP for everything) were in vogue. Haskell is complex, but it’s principled, and ideas have to be proven out before they make it into the core language for the long haul.

        1. 9

          FWIW, GHC does have some language features that are relatively controversial for reasons that aren’t easy to dismiss out of hand: e.g. GeneralizedNewtypeDeriving being unsafe for a while, implicit parameters, and arrow notation. And I’ve met at least one person who really, really hated GADTs (which on balance I didn’t agree with, but their reasons were pretty good).

          None of the above have been folded into the core language; they’re all behind {-# LANGUAGE … #-} pragmas. However, that’s not saying much, because practically nothing got folded back into Haskell2010. At the same time there are language extensions (like rank-n types) that are de facto standard, because they’re used all over the ecosystem and multiple Haskell implementations include them.

          1. 6

            Implicit parameters are hardly used anywhere, at least. Arrow notation, though, probably really is just a kludge :(

          2. 1

            To be fair, asserting “functional for everything” is no less silly than “OOP for everything”. Each era has its extremists. In the end a balanced approach of some sort is often more effective, but it’s also less exciting to the bandwagoneers.

            1. 15

              I dunno. I’m perhaps a little suspect to make this argument, but I’d want to suggest that there’s a different character to “functional everything” that may give it much more longevity than “OO everything”.

              In particular, FP as a culture has a lot of practices that might end up being a little faddy. On the other hand, perhaps the core driving principle is to “dramatically simplify language semantics through a focus on values and functions”. The “dramatic simplification” is the important part. FP does not introduce any metaphor-driven frameworks for thinking about programming. If anything, it endeavors to make programming more abstract and complex. In doing so, however, it bleeds out a lot of accidental complexity.

              The result is better scaling for managing complexity, because you just generate less of it to begin with. This is opposed to carefully structuring ways to hide complexity: instead, you are forced to pay real penalties for it and are therefore driven to simpler solutions.

              Overall, that idea seems to me to (a) have longevity and (b) be the core of FP-as-a-culture.

              1. 4

                I agree with this. There are innumerable things in FP that might end up being bogus in the small sense (arrows already seem to be heading that way, for example), but the larger sense is a great deal more cohesive and basic than OO. Examining any tenet of OO in depth seems to lead me down a road to either reject the idea, find that it’s a halfway implementation of a functional idea, or to realize that it’s entirely preferential but has been enshrined in Received Wisdom because no one has thought about it.

                1. 5

                  My experience has been that it is easier to implement good OO ideas in FP than in OO. That’s why I switched from OO languages to Haskell.

                  For a long time I have been looking for an example of good OO design that’s hard to replicate or do better in FP and I haven’t found one. Such an example can probably only be found by someone who’s an expert in both OO and FP though. If anyone knows one please let me know!

                  1. 6

                    I think there’s one thing that I really want implemented in Haskell to continue along this train: an “Actor” system. Erlang style “let it fail” error handling drives toward supervisor hierarchies that lead to really robust “upper levels” of application organization. I think this could be achieved in Haskell and completely replace the standard “big ball of IO” application running.

                    I know Cloud Haskell has demonstrated some of this, but they are also tackling distribution which I think is nice but not really necessary in order to accomplish many of these goals. I spent a few hours and tapped out a sketch for some basics of this (https://github.com/tel/hotep/blob/master/src/Hotep.hs) but I think it needs to go a lot further with design.

                    But then I also think again that this could be an OO concept better implemented in Haskell.

              2. 2

                What’s extreme about using values for everything?

            2. 2

              the most generic implementation of the code tends to be very inefficient

              I’ve found this to be true in the data analytics space regardless of the language being used. Some composition of generalized functions yields the numerically correct result but often very inefficiently.

              1. 2

                The really big missing piece is the equivalent of ccache for Haskell.

                Doesn’t GHC do this out of the box? If you haven’t changed a file, GHC will use the interface and object files that are already compiled, and it’s much faster.