1. 35
  1.  

  2. 11

    I’ve switched most of my own code from Haskell to SML and OCaml over the last year, and I have to agree: it’s a very nice world to live in :) I can’t say it’s all perfect, but I’m willing to overlook the annoyances for simple, clean, modular functional programming.

    1. 3

      Out of curiosity, why did you switch? Personally I hear more stories about people migrating towards Haskell, not away from it.

      1. 8

        Well, part of the reason I switched was just to try something new. I’d spent a long time writing the same sort of hobby projects in Haskell and felt I was reaching a point of diminishing returns with what it was teaching me. Haskell has a huge and vibrant culture, but Haskell is not all of functional programming.

        The biggest change was in how I express abstractions. In Haskell I would slot them into the abstractions the language already provides: finding the applicatives in my program, factoring things into some stack of monads, making my DSL a monoid. In ML I reach for what the language supports directly: modules! My programs now use far simpler tools locally but have a much richer structure in the large, because they’re decomposed better into modules, functors and signatures. I think this makes them a lot more readable, and I’m far happier with the results. I sometimes miss the fancier aspects of Haskell, but (much to my surprise) I never ended up missing the sugary bits like do-notation or type classes.
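
        To make the modules point concrete, here’s a tiny, made-up sketch in OCaml syntax of the kind of structure I mean (the QUEUE signature and the module names are invented on the spot, not taken from any real project):

        ```ocaml
        (* A signature describes what a component needs or provides. *)
        module type QUEUE = sig
          type 'a t
          val empty : 'a t
          val push  : 'a -> 'a t -> 'a t
          val pop   : 'a t -> ('a * 'a t) option
        end

        (* One small implementation, sealed behind the signature. *)
        module ListQueue : QUEUE = struct
          type 'a t = 'a list
          let empty = []
          let push x q = q @ [x]
          let pop = function [] -> None | x :: rest -> Some (x, rest)
        end

        (* A functor: a larger component parameterized over any queue. *)
        module Scheduler (Q : QUEUE) = struct
          let run_all jobs =
            let rec go q =
              match Q.pop q with
              | None -> ()
              | Some (job, rest) -> job (); go rest
            in
            go (List.fold_left (fun q j -> Q.push j q) Q.empty jobs)
        end

        (* Plug an implementation into the functor and use it. *)
        module S = Scheduler (ListQueue)
        let () = S.run_all [ (fun () -> print_endline "job 1"); (fun () -> print_endline "job 2") ]
        ```

        The code inside each module stays simple and first-order; the interesting structure is in which modules satisfy which signatures and plug into which functors.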

        Honestly, the biggest thing I miss is having lots of vocal, smart people building software in the same language I am. The communities are far smaller and quieter, so there’s less of that sense of communal pride and camaraderie. Also, I have to write more utils.

        Haskell is still my third favorite language though :)

        1. 1

          That was a great story, thanks for sharing! What are your number 1 and number 2 languages?

      2. 2

        As someone who “committed” to OCaml without ever really exploring SML, I’d be interested in hearing whether there were ever times you felt SML was a better choice for the program at hand.

        1. 9

          SML is arguably a nicer language than Haskell or OCaml just because it’s a small core with relatively few corners that make no sense. I like to think of it as the intersection of the sane parts of most functional languages, even though historically it predates most of them. It’s got a nice module language, and the expression language is minimal but “enough” for everything that I want. I’ve also written a few compilers for ML, so I know the language well enough that I don’t get surprised by what’s in there; I can hold most of the language in my head and regularly exercise all of it. I also like the fact that there are multiple implementations and an exceedingly well-defined language; stability is arguably a practical reason, but it also just appeals to me aesthetically.

          This means it’s great whenever I’m writing something that has essentially zero dependencies. Things like simple compilers and example programs (particularly for teaching, since the language has fewer sharp edges) are quite pleasant to write in SML because I don’t have to fight with the language. On the other hand, the second I need that one library because I want to snarf some JSON and generate sexps on the other end, or even do simple things like produce nice colorized or Unicode output, I’m basically dead in the water. There’s some tragic irony in SML being wonderful to build abstractions in when there’s almost no one to share them with! The lack of ecosystem, build tools, package management, etc., means that it’s hard to build things which depend on other things. As soon as I’m writing something that isn’t self-contained, it’s Haskell or OCaml time.

          TLDR: SML when able, OCaml/Haskell when not

          1. 5

            SML is arguably a nicer language than Haskell or OCaml just because it’s a small core with relatively few corners that make no sense.

            Yeah. My go-to analogy is that SML is related to Haskell the way C is related to Java. SML and C are both self-contained languages that can fit inside your head. Java and Haskell are both very rich environments with many bells and whistles and libraries.

            (side note: the fact that SML has complete formal semantics is also pretty amazing. It’s not immediately useful to me, but it is to some, and it’s a unique and impressive achievement.)

            The downsides to that are exactly as you describe: lack of libraries, lack of community. There has been some effort in the past few years to change that. https://github.com/standardml is one of the centers of that, which came out of the #sml channel on freenode. It’s still pretty meager, though.

            1. 3

              The most useful bit of the SML semantics, to me, isn’t so much the formal part (though that’s nice), but that the language at least has a very well-defined semantics, and it’s tested in practice by having multiple independent implementations (SML/NJ, MLton, Poly/ML, etc.). OCaml, by contrast, is one of the defined-by-its-only-implementation style languages. There are some advantages to that, such as easier experimentation with new features, but I like the multi-implementation approach. (This is also one of the things I like about Common Lisp, which has an even broader range of independent implementations with different characteristics.)

            2. 2

              Thanks, that’s about what I suspected. It is, as you say, sad; there are a lot of ML dialects out there (AliceML and Mythryl looked especially interesting, and MLton’s support for writing libraries that can be called from other languages sounded intriguing), but what I’m mostly interested in is writing desktop apps, and for both CLI and GUI apps it seems to be OCaml or nothing. I experiment with F# every so often, as it seems more promising for cross-platform GUIs, but I’ve never actually gotten it working on Linux.

              1. 1

                Yep! I try to steer clear of the tiny pockets of SML dialects because it seems like a bit of a hellscape of abandoned languages, but there have been a lot of small, fragmented pushes in interesting directions there.

        2. [Comment removed by author]

          1. 10

            Having looked deeply at both the GHC runtime and the OCaml runtime, I can safely say that ocamlc needs on the order of 10+ person-years to get it up to the point of GHC in terms of the optimizer, GC, and multicore support. The compiler frontend has a lot of nice things going, but the backend is not state of the art in 2016.

            1. 7

              Framing! What you say is not state of the art is a plus for me. Haskell seems to require a lot of optimizations to make it fast, but OCaml does pretty well for itself with its fairly dumb compiler and is, IMO, much more operationally friendly because of it. I have not looked at the compiler code at all, but as a consumer I’m fairly happy with the output. I think multicore adds a huge amount of complexity, though, so we’ll see how that goes.

              1. 4

                Haskell seems to require a lot of optimizations to make it fast

                Not in my experience, but it’s easy to write naive code in any language you don’t know well.

                I find it a lot easier to write efficient, correct concurrent code that does what I want in Haskell than I would trying to bash Async or Lwt into place in OCaml (for a rough idea of what the OCaml side looks like, see the sketch at the end of this comment).

                That’s not going to stop me enjoying my forays into SML and OCaml, but I can’t use them for work.

                <grumble>Wish more people had paid attention to Concurrent ML</grumble>
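
                For anyone who hasn’t touched either library, here’s a rough, hypothetical sketch of cooperative concurrency with OCaml’s Lwt (the task names and delays are made up; it’s only meant to show the shape of the code, not how any real project looks):

                ```ocaml
                (* Assumes the lwt and lwt.unix packages. *)
                open Lwt.Infix

                (* A task that waits and then reports; Lwt_unix.sleep yields to the scheduler. *)
                let task name delay =
                  Lwt_unix.sleep delay >>= fun () ->
                  Lwt_io.printlf "%s finished after %.1fs" name delay

                (* Run both tasks under one event loop; they interleave cooperatively. *)
                let () =
                  Lwt_main.run (Lwt.join [ task "fetch" 0.5; task "parse" 0.2 ])
                ```

                Lwt gives you concurrency on a single core; shared-memory parallelism is where the GHC runtime currently has the clear edge.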

                1. 4

                  Sorry, I was not referring to optimizations the developer has to write, but to the fact that the compiler has to perform a lot of optimizations. IMO, it is much easier to predict how an OCaml program will run than a Haskell program, which is a huge operational win.

                  1. 1

                    IMO, it is much easier to predict how an OCaml program will run than a Haskell program, which is a huge operational win.

                    Up to a point. Upper and lower bounds matter more. You can only celebrate an upper and lower bound that are close together for so long before you have to admit you’ve either made a slow language or an obnoxiously inexpressive one. (I’m not saying OCaml is either of these things, but imagine what a language that took this to the extreme would be like.)

                    I haven’t found Haskell code to be that unpredictable perf-wise. Streaming analytics, ad servers… our latency bounds were predictable, we never had a memory leak, and nothing really ever surprised us.

                    Don’t get me wrong, it’s not perfect. There are a few mistakes that are common’ish among beginners writing little bench-test programs (CAFs, i.e. constant applicative forms, that never happen in real programs; not specifying a type; using lists as if they were vectors), but these are early misunderstandings that go away quickly.

                    1. 4

                      Definitely, it’s a matter of degree rather than a binary. So far OCaml is much more within my comfort zone.

                      1. 3

                        Have you not had the space leak experience? There’s a lot I like about Haskell but the laziness is too big a source of uncertainty for me. Hoping I can start using Idris soon.

                        1. 4

                          Have you not had the space leak experience?

                          In production code? No. In experimental code designed to tickle the heap, or when helping other people? Sure.

                          The way people write little one-off programs is (unfortunately) susceptible to leaks, because they’ll often have large CAFs in their module, whereas real-world projects aren’t pulling their working-set data from something textually and statically defined in a module.

                          I’ve often thought it’d be better if GHC just didn’t create CAFs without a pragma asking for it, but they lean towards not messing up the asymptotics. Anyhoo, this is why we covered it in the Haskell book: it often trips people up.

                          Only other thing is to use a streaming library for large data, long running computations, or indefinite computations. Doing that has precluded any other leaks that might’ve otherwise happened, I think.

                          1. 1

                            Only other thing is to use a streaming library for large data, long running computations, or indefinite computations. Doing that has precluded any other leaks that might’ve otherwise happened, I think.

                            I’m not sure I follow? I would think processing a stream into a single value would be a case where it’s very easy to build up a long chain of thunks that all evaluate to the same thing and get a space leak.

                2. 1

                  I would be interested in a bit more specifics. I have a relatively good idea of what the OCaml runtime is like (in particular, I appreciate the fact that it is clean C code that is readable and hackable), but I have never looked deeply into the GHC runtime. I know it has a decent parallel GC, and that it recently got some work to reduce GC pauses (coincidentally, Damien Doligez worked on GC latency improvements for OCaml’s GC, to be included in the coming release).

                  I am thus easily convinced on the multicore aspect (getting a good multicore runtime requires a horrendous amount of work), but do you have some specifics on the ‘GC’ angle? I am also curious about which aspects of the optimizer you think would be important for the kind of OCaml programs you would be interested in writing. The kind of programs I work with (symbolic manipulation of ASTs, mostly) are rather hard to optimize, as they tend to be memory-access-bound. I know OCaml could improve its unboxing on numerical code (and one thing I miss from Haskell is explicit support in the source language for unboxed values, though that had a negative impact in terms of language complexity as well), and there is ongoing work to specialize higher-order functions (the ‘flambda’ intermediate representation), but I wondered if you had other things in mind.
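
                  To illustrate what “unboxing on numerical code” means here, a small illustrative sketch (the type and function names are invented; this is just the usual OCaml representation story, not anything about a specific program):

                  ```ocaml
                  (* A float array is stored flat, so a.(i) reads a raw double
                     instead of following a pointer to a boxed float. *)
                  let dot (a : float array) (b : float array) =
                    let s = ref 0.0 in
                    for i = 0 to Array.length a - 1 do
                      s := !s +. a.(i) *. b.(i)
                    done;
                    !s

                  (* A record whose fields are all floats is also stored flat... *)
                  type vec2 = { x : float; y : float }

                  (* ...but add any non-float field and the float is stored as a
                     pointer to a separately allocated box. *)
                  type sample = { label : string; value : float }
                  ```

                  Haskell’s unboxed types and UNPACK pragmas give this kind of control explicitly in the source language, which is roughly what I was referring to above.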

                3. 9

                  You might like F#! It’s my favorite language to work in these days. It’s essentially “OCaml.NET” for most purposes. It lacks OCaml’s functors (parameterized modules), and its OO system is designed for interop with .NET languages, but most everything else is the same.

                  Bonus – there is no GIL! Here is a parallel Mandelbrot generator of mine. Changing Array.init to Array.Parallel.init is sufficient to parallelize across all cores.

                  1. 4

                    I’m not sure what kind of programs you generally write, but it has never directly been a problem for me. Can you not get over it just on principle, or is it an actual technical impediment? And I think the next release, or the release after that, will have multicore support. I’m not exactly sure whether it will be good or not (I’m really concerned about memory semantics), but you’ll no longer have the GIL as an excuse, so you’ll have to find a new one ;)

                    1. [Comment removed by author]

                      1. 12

                        In defense of OCaml over Haskell, I think OCaml is significantly simpler, both as a language and as an implementation. Reasoning about how OCaml code will perform on a system is fairly straightforward. Haskell certainly has merits, but even with a GIL I find OCaml’s simplicity a big selling point.

                        1. 1

                          One could argue that a GIL encourages a programming style (tightly coupled threads) that’s undesirable for most use cases, so it’s better to simply not have the option.

                        2. 2

                          To me, the bigger problem around it is the way they’ve put it off and pretended it’s not an issue for so long. They’ve done the same thing with other things, like Unicode and UTF-8 support in the standard library.

                          Leaving stuff out and not implementing things keeps the language and standard library simple, but it makes things more complicated in the long run when people have to implement it themselves or come up with workarounds.

                          1. 5

                            To offer another perspective, the OCaml community is, and has been, rather small. It was not until recently that they had the human resources and money to work on multicore. Up until then, a lot of the discussion against it centered on concern that no one was sure how to do it safely without impacting performance. Getting a memory model as confusing as C++’s or Java’s would not be good. So, to give it a positive spin, the OCaml community has been conservative so as not to give users a worse experience.

                            As for Unicode, I don’t really have any knowledge about it. I know there are a few Unicode libraries. The idea of Unicode support in the language has come up a few times, and the general view has been that libraries are sufficient. But maybe it’s a net negative to have multiple Unicode libraries out there.

                            1. 2

                              To me, the bigger problem around it is the way they’ve put it off and pretended it’s not an issue for so long.

                              As a contributor to the OCaml implementation, I find this characterization rather unfair. OCaml has been mostly maintained by volunteers for a long while, and I don’t think one can be blamed for not deciding to embark on a multi-person-year adventure that one would not directly benefit from. When people decide to contribute work on those topics, they usually have a reasonable view of the situation, and I have seen little “pretense” on these issues.