1. 32

  2. 24

    For those not wanting to watch 6 minutes of video, the summary is:

    Haskell has traditionally been safe but also useless, meaning that it’s free of side effects (and what’s the point of a computer program that has no side effects?). He contrasts this with C, Java, and C#, which are unsafe but useful, and points out that all languages are developing in the direction of safe and useful. For C/Java/C# this means going from unsafe to safe; for Haskell it means going from useless to useful. To do this for Haskell, they’ve figured out how to keep side effects separate from pure code, and that concept has been slowly leaking back into C#.
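
    Here is a rough sketch of what “keeping side effects separate from pure code” looks like in Haskell (my illustration, not from the talk): the type of a function tells you whether it can perform effects at all.

      pureLength :: String -> Int     -- no side effects are possible here
      pureLength = length

      readAndCount :: IO Int          -- effects are confined to values of type IO
      readAndCount = do
        line <- getLine               -- the effectful part
        return (pureLength line)      -- hand off to pure code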

    1. 4

      Excellent summary, that is exactly what I took away from it.

      1. 4

        and what’s the point of a computer program that has no side effects?

        A side effect is anything a function does other than produce a value. We don’t need those to write programs; Haskell is the perfect example of treating IO as values.
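
        To make “treating IO as values” concrete, here is a minimal sketch (my example): an IO action is an ordinary value that does nothing until it is actually run.

          greet :: IO ()
          greet = putStrLn "hello"        -- defining this prints nothing

          main :: IO ()
          main = do
            let actions = [greet, greet]  -- a plain list of IO values; still nothing printed
            sequence_ actions             -- the effects happen only when the actions are run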

        1. 5

          The act of producing a value is a side effect. If you print it to a screen, it’s a side effect. If you set the exit code of your process, that’s a side effect. If you want to do anything like communicate with a database or a service or a keyboard, those are all side effects.

          Even non-termination is a side-effect (one which happens to be uncontrolled in Haskell).

          1. 7

            The act of producing a value is a side effect.

            What does that mean?

            I’ve seen some people say similar things, but their definition of a side-effect amounts to “something does something”, which is next to useless and not what side-effect actually means!

            1. 5

              I think the sentences that followed it explained it very well.

              To actually produce a value using a program running on a computer, it must perform a side effect. This is the standard definition in the literature, including Peyton Jones’s.

              Edit: I think I see how this is confusing. Technically, any observable effect of running a program is a side effect. Modifying the exit code of a process is observable and is technically a side effect.

              However, this tends not to be a definition of super-practical use. In context, the term “no side effects” is often used to mean something more like “it doesn’t really touch state that matters.” This is also the way “monad” is often used – people will say something is a monad when it has the right type, even though it doesn’t strictly satisfy the monad laws.
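
              For reference, the laws being alluded to are the monad laws; stated equationally (a sketch, for any Monad m):

                return a >>= f    ==  f a                       -- left identity
                m >>= return      ==  m                         -- right identity
                (m >>= f) >>= g   ==  m >>= (\x -> f x >>= g)   -- associativity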

              The point I was making was really that you were nitpicking unproductively. Serious language designers should recognize that the goal in language design is to actually write programs, not debate grammatical points.

              1. 5

                I don’t think I’m nitpicking, it’s an important distinction to make and people often get it wrong. I don’t think it’s unproductive to point out the problem, especially since you’re now wondering about it.

                To actually produce a value using a program running on a computer, it must perform a side effect. This is the standard definition in the literature, including Peyton Jones’s.

                Here’s a screenshot of the awesome Functional Programming in Scala book. Doing something other than “returning a value” really is the formal definition! Pinky promise that I’m not making things up.

                Serious language designers should recognize that the goal in language design is to actually write programs, not debate grammatical points.

                I don’t know what this means. Are you imagining a dichotomy between talking about what ‘side-effect’ means and enabling the writing of programs? What does it mean to be a serious language designer?

                1. 1

                  Purity is a bit up to the language to define, however. For instance, evaluation in Haskell can force thunks and cause the execution of other code to change observably. It’s simply a matter of Haskell preference to decide that evaluation time is “unobservable” from the point of view of the language semantics.

                  There’s still a super important distinction to be had around purity. It becomes clearest when you have strong types, and thus a consistent and total notion of canonical values. In that case, one rough image of “purity” is to say that the only thing you get out of running a function is the canonical value of its result. In a more mathematically rigorous sense you might write

                  The function f : A -> B is pure iff
                  
                    const () . f == const ()
                  

                  In other words, when I throw away the result of computing the function f using const (), the result is somehow “equal” to const (). The exact choice of const () and the exact meaning of equality shift around, though, and provide slight variations on what purity means. For instance, we might replace const () with

                  trivial :: a -> ()
                  trivial a = a `seq` ()
                  

                  which is a more restrictive form of “purity” since non-termination is now a side-effect.
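
                  To make the difference concrete, here is a small sketch of my own: a function that never terminates passes the const () test but fails the trivial one.

                    loop :: Int -> Int
                    loop x = loop x    -- bottom: never produces a result

                    -- const () . loop  ==  const ()   (the result is thrown away unevaluated)
                    -- trivial  . loop  /=  trivial    (trivial forces the result, so it diverges)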

                  1. 2

                    How is it a preference for Haskell to say that evaluation time is unobservable? If Haskell allowed functions to observe evaluation time, then those terms would not be referentially transparent! It’s not about what a language is “allowed” to observe.

                    Haskell works under the definition that bottom is a valid value and seq is not referentially transparent. If you change the definition of Haskell terms, you could say that bottom is not a value and then you get to say that non-termination is a side-effect. I’m definitely with you on doing that!
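
                    The standard example of how seq interferes with equational reasoning, as a sketch (assuming GHC’s usual semantics): two functions that are extensionally “the same” are told apart by seq.

                      f, g :: Int -> Int
                      f = undefined         -- bottom at the function type
                      g = \x -> undefined   -- a lambda, so already in weak head normal form

                      -- f `seq` ()  diverges (forcing f hits bottom)
                      -- g `seq` ()  ==  ()  (forcing g succeeds immediately)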

                    1. 1

                      No, of course. Haskell is internally consistent with its definition of pure (except where it isn’t, and those places are forbidden). I’m just relating that the definition of side effect is often at least a little bit fuzzy. It’s a good argument to have as to what the tradeoffs of stricter and stricter definitions of purity are.

                      I’d love to have non-termination as an effect, but I’m also perfectly happy to admit that it’s a tradeoff.

            2. 2

              So under this definition, what isn’t a side effect? I don’t disagree that most of the things you mentioned are side-effects, but if we extend the notion so that even producing a value is a side-effect, then this is just a truism, since every conceivable computation is effectful. It’s more interesting to study the topic under the constrained definition of observable vs. unobservable effects.

              1. 2

                No, once you accept the definition that everything is a side effect, it lets you focus on adding language support for controlling the side effects that matter.

                Writing

                  x = 1
                  y = 2
                  x = 1

                is side-effectful according to Haskell’s definitions (you modified x), but it’s also completely unimportant. You haven’t gained anything from pointing that out. Talking about side-effects is only useful in a highly contextual discussion about making it easier to perform some specific task in a programming language.

                1. 5

                  You do gain things from pointing it out, though. Huge things!

                  By pointing out something is a side-effect, you’re also pointing out that you’ve lost referential transparency. By pointing out that you’ve lost referential transparency, you’re also pointing out that you’ve lost the substitutional model of programs. By pointing out you’ve lost the substitutional model of programs, you’re also pointing out that you’ve lost equational reasoning.

                  And I really don’t want to lose equational reasoning. I do functional programming because I highly value treating my programs as sets of equations. It allows me to treat types as theorems, allows abstraction without implications and is just an extremely simple form of formal reasoning which people can do in their heads or on paper!
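
                  A small sketch of what losing substitution costs (my example, not the parent’s): in pure code a name and its definition are interchangeable, but once effects are involved, substituting the definition changes the program.

                    -- Pure: x can be replaced by its definition anywhere it appears.
                    pureTwice :: Int
                    pureTwice = let x = 1 + 2 in x + x   -- == (1 + 2) + (1 + 2) == 6

                    -- Effectful: "substituting" the action runs the effect twice,
                    -- so these are different programs.
                    once :: IO Int
                    once = do
                      x <- readLn
                      return (x + x)                     -- one read, doubled

                    twice :: IO Int
                    twice = do
                      x <- readLn
                      y <- readLn
                      return (x + y)                     -- two reads, summed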

                  1. 1

                    When I was writing C and IA-32 assembler in kernel mode I didn’t care that I didn’t have equational reasoning and I didn’t want equational reasoning. I was altering state by writing a program that altered state in exactly the way I wanted it to.

                    You’re describing a very specific way of writing a program. What you don’t seem to accept is that it is not the only way to write programs, or sometimes even the best way to write programs. Sometimes those two ways of programming are even two parts of the same program. Maybe I want referential transparency sometimes, and not others. It’s the job of the language to help me perform the task I’m working on in the best way possible in that context.

                    1. 3

                      That seems a little like an appeal to moderation. Can you clear it up for me and give an explicit reason not to want a substitution model for your programs?

                      I know I’d definitely take referential transparency when describing a kernel, if I could get it. It would absolutely give me more confidence that I’m altering state in the exact way I’d want to. I may not be able to achieve it, but that doesn’t mean I wouldn’t want to!

                        1. 4

                          I accept that we can’t have referential transparency with assembly. That code doesn’t make me want to give it up, though, it makes me want to get to a place where we can do what assembly can while maintaining referential transparency!

                          Can we? Who knows? I just know I’m not satisfied…

                          Oh, I just realised that we could define this stuff inside of a referentially transparent EDSL and could probably do the same stuff. Almost like what Microsoft has been playing with.
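
                          A very small sketch of that EDSL idea (all names here are made up for illustration): describe the low-level operations as pure data, and keep the actual effects in a separate interpreter.

                            data Instr
                              = Poke Int Int   -- write a value at an "address"
                              | Peek Int       -- read an "address"
                              deriving Show

                            type Program = [Instr]

                            -- Building a program is pure and referentially transparent...
                            prog :: Program
                            prog = [Poke 0 42, Peek 0]

                            -- ...only the interpreter performs effects; here it just prints each instruction.
                            run :: Program -> IO ()
                            run = mapM_ print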

                          1. 2

                            what Microsoft has been playing with.

                            Hey, I never said that typing was inappropriate.

                            (Also, read the discussion on that paper. It’s quite funny – basically Andrew is trying to describe in the kindest words how crappy their dev experience was).

          2. 2

            Great tl;dw (didn’t watch)

            There was something that Simon Peyton Jones once said (though I’m having trouble finding the link) where he mentioned that you can parse the unofficial Haskell motto: “avoid success at all costs” in two ways. The first way, and probably the way that most people take it is “do everything you can to not be successful.” Haskellers sometimes joke that we’re starting to fail at that. But Simon mentions that you can also parse it in another way (my paraphrasing): “avoid doing unprincipled things just to become popular.” So it could be: “avoid: ‘success at all costs’.” And I think this video hints at that. Haskell started “safe but useless” and is moving carefully up the usefulness axis.

          3. 5

            Despite the link-baity title, this is how I introduce many people to Haskell. For those who don’t know who the speaker is, he’s:

            1. One of the major designers of Haskell The Language
            2. One of the main implementers of GHC, the main Haskell compiler
            3. One of the designers of C--, an intermediate language for functional compilers
            4. One of the most visible and well-respected members of the Haskell/PLT community
            5. An incredibly nice and genuine guy

            If you’re interested in Haskell, I’d strongly urge you to watch a few of his lectures (lots on YouTube).

            1. 4

              That video has been sitting in my favorites for a long time now; I wish there were more Peyton Jones material around, he’s awesome! He could use some better slides and less Comic Sans though.

              Still, it’s something I link to everyone I know who has ever gotten into programming.

              1. 3

                I’ve heard he uses Comic Sans because it’s easier for people with reading difficulties (dyslexics like myself) to read. I hate the font, but it is definitely easier for me to read than most people’s choices.

                1. 2

                  He mentions his reasons at the end of one of his talks about education in the UK. (“Naff” is British slang for “tacky”; I had to look it up.)

                  But nobody’s ever been able to tell me what’s wrong with it. I think it’s a nice legible font, I like it. So until someone explains to me … I understand that it’s meant to be naff, but I don’t care about naff stuff, it’s meant to be able to read it. So if you’ve got some rational reasons why I should not, then I’ll listen to them. But just being unfashionable, I don’t care.

                  Here’s the video.

                  1. 2

                    If there’s one thing SPJ doesn’t care about, it’s being fashionable, both in language design and clothing; those knitted jumpers he always has are fantastic.

              2. 2

                It’s really great watching intelligent people converse. I’m curious where Simon Peyton Jones would put Erlang in his matrix. I assume higher than Haskell in terms of usefulness, but where in terms of safety?

                Edit: extra commas