1. 1

    This is really cool. Thanks for the post!

    1. 2

      It’s hard to google for but I remember hearing about a language (ALGOL?) which specified a “typeset syntax.” Keywords are set in bold, identifiers in italic, etc. It seems that this could be a very readable sort of syntax highlighting as well.

      1. 2

        I’ve used AsciiDoc for a book that I’ve been writing. I found that it lies somewhere between LaTeX and Markdown in terms of verbosity and features. It’s more or less syntax sugar on top of DocBook XML.

        1. 3

          While we’re on the subject, does anyone have a great recommendation for a book on SQL? I don’t have to write a ton of super complex queries in my job, but once every month or two, some task calls for a good bit of SQL writing, and I’d like to get a better foundation than just “what I’ve picked up over the years plus Google”.

          1. 9

            Not a book recommendation but a couple pieces of advice which helped me shift out of the procedural mindset:

            • Think about the problem in terms of sets and transformations rather than individual instances.

            • When formulating a query, start with the SELECT and write a list of what you want to see in the result set. This is the goal you’re working towards. Add in appropriate joins to build a set of the data you require. This is your starting point. Figure out the intermediate set transformations required to get from start to finish. Coincidentally, this made the ordering of SELECT, FROM, and WHERE click. I was previously thinking in terms of FROM, WHERE, and SELECT.
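            As a runnable sketch of that workflow (using SQLite so it is self-contained; the users/orders schema and all column names are invented for illustration):

```python
import sqlite3

# Invented users/orders schema, just to make the workflow concrete.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 5.0), (3, 2, 7.5);
""")

# Step 1: the SELECT list is the goal -- each user's name and total spend.
# Step 2: the join builds the set of data required -- users matched to orders.
# Step 3: GROUP BY is the intermediate transformation from order rows
#         to per-user totals.
rows = conn.execute("""
    SELECT u.name, SUM(o.total) AS spent
    FROM users u
    JOIN orders o ON o.user_id = u.id
    GROUP BY u.name
    ORDER BY u.name
""").fetchall()

print(rows)  # [('Ada', 15.0), ('Grace', 7.5)]
```

            The SELECT list came first, the join built the working set, and the GROUP BY was the one set transformation needed to get from start to finish.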

            Hopefully that’s not too elementary. Coming from a similar background I’d never really had that spelled out to me.

            I came across the advice in Joe Celko’s SQL for Smarties, which I think is probably too basic for your needs. I haven’t read anything else by him so can’t vouch but “Joe Celko’s Thinking in Sets: Auxiliary, Temporal, and Virtual Tables in SQL” might be helpful? I’ve also heard good things about “The Essence of SQL” but it’s out of print so good luck finding a copy!

            1. 1

              I find it amazing how differently you approach this than I do, yet I would assume we would still end up writing very similar queries.

              1. 1

                How do you approach it?

                1. 4

                  I tend to think of the database like a tree or perhaps a map (as in google not reduce). I look at the fields I know I need to return, and then mentally map them across the database. I start my query by working from the trunk, or root node if you don’t like wood, of my tree and then map out the query from there. The root node isn’t always the same table; so that can vary based upon the required fields. After selecting all the fields I need from the root, I proceed to join the next table required by my SELECT. That isn’t always direct, and many times, there are tables in between. The process repeats until I have all the tables I need.
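                  For instance, a sketch of that tree walk (the tables are invented; orders here exists in the query purely as a bridge between the root and the leaf fields):

```python
import sqlite3

# Invented schema: customers is the root of the tree, items holds the
# leaf fields we need, and orders sits in between.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders    (id INTEGER PRIMARY KEY, customer_id INTEGER);
    CREATE TABLE items     (id INTEGER PRIMARY KEY, order_id INTEGER, sku TEXT);
    INSERT INTO customers VALUES (1, 'Ada');
    INSERT INTO orders    VALUES (10, 1);
    INSERT INTO items     VALUES (100, 10, 'X-1'), (101, 10, 'X-2');
""")

# Root first, then join outward toward the fields in the SELECT.
# items is not directly reachable from customers, so the intermediate
# orders table is joined in only to bridge the two.
rows = conn.execute("""
    SELECT c.name, i.sku
    FROM customers c
    JOIN orders o ON o.customer_id = c.id   -- bridge table
    JOIN items  i ON i.order_id   = o.id
    ORDER BY i.sku
""").fetchall()

print(rows)  # [('Ada', 'X-1'), ('Ada', 'X-2')]
```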

                  This line of thinking has lent itself well to pretty much every dataset I’ve encountered. Words like “set”, “transformation”, and “instance” never even crossed my mind.

                  1. 1

                    Now obviously words like “set” and “instance” have a great deal of meaning in database land, but as far as writing queries go, those aren’t words I tend to think of.

              2. 1

                I use CTEs a lot in Postgres, so I find that I work towards the final result in a different way, more like I would in code - by treating the query as a series of data transformations. So, for example, if I want to produce a result for a set of users on a range of days, I write a CTE to get the users, then another one to generate the range of dates, then another one to combine them to get all possible “user-days”, then another to retrieve the data for each and process it, and so on.

                This results in very readable queries - way better than having subqueries all over the place. There are performance caveats to using CTEs so sometimes I have to structure the query differently, but it works well for a lot of them.
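                A runnable sketch of that pipeline (SQLite here rather than Postgres, and the table names are invented; the shape of the CTE chain is the point):

```python
import sqlite3

# Invented schema; each CTE below is one transformation step.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO users VALUES (1, 'Ada'), (2, 'Grace');
""")

rows = conn.execute("""
    WITH RECURSIVE
    picked_users AS (                -- step 1: get the users
        SELECT id, name FROM users
    ),
    days(day) AS (                   -- step 2: generate the range of dates
        SELECT '2017-01-01'
        UNION ALL
        SELECT date(day, '+1 day') FROM days WHERE day < '2017-01-03'
    ),
    user_days AS (                   -- step 3: all possible "user-days"
        SELECT u.name, d.day FROM picked_users u CROSS JOIN days d
    )
    SELECT name, day FROM user_days ORDER BY name, day
""").fetchall()

print(len(rows))  # 2 users x 3 days = 6 rows
```

                Each step reads like one line of a data-transformation pipeline, with no nested subqueries in sight.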

              3. 5

                The docs for Postgres are amazing, and a good resource for this. It will call out which parts of SQL it’s explaining are Postgres-specific/not in the SQL standard.

                https://www.postgresql.org/docs/current/static/sql.html

                1. 2

                  I’m very much a first-principles thinker, so I hope this recommendation isn’t off the mark for your needs. I really liked “Relational Theory for Computer Professionals” by C.J. Date. The book is roughly broken up into parts; the first is an introduction to relational theory and what that’s all about. This is the best intro to relational algebra that I’ve ever seen; a close second is the Stanford Database online class (you can just study the SQL part of the course). The second part of the book takes what you now know about relational algebra and shows how it fits with SQL.

                  This helped me peel away some of the syntactic confusion around SQL syntax and made the whole concept make more sense.

                  1. 1

                    My 2 favorite resources over the years have been “SQL for Web Nerds” (http://philip.greenspun.com/sql/) and the official Postgres docs.

                    1. 1

                      You can also check out this book: https://twitter.com/BonesMoses/status/832983048266330113 and his blog, which is full of very good SQL.

                    1. 9

                      An explanation that I find enlightening goes something like this: “Maybe is a typed nil.” That is, it lets you tell the difference between, say, the lack of a string and the lack of an integer (as well as the lack of a polymorphic type, ‘a’, but now I’m getting carried away). From a tweed-jacket beard-stroking philosophical perspective, this is kinda interesting, no? If I search a database for a user and get nil, and I likewise search a database for a telephone number, also getting nil, it makes sense that these “nope-flavored” answers would be incompatible. What nil does, then, is take the lack of a user and obscure even that little bit of information, leaving only a jagged nope! in its wake.
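                      A rough analogue in a gradually typed setting (Python type hints here; the lookup functions are invented for illustration). It only approximates Maybe, since the distinction lives in the annotations rather than the values, but it shows the same idea of nope-flavored answers:

```python
from typing import Optional

# Hypothetical lookups, invented for this sketch. Each "nope" keeps its
# flavor in the type signature, even though the runtime value is bare None.
def find_user(user_id: int) -> Optional[str]:
    users = {1: "spock"}          # a missing user is "a missing str"
    return users.get(user_id)

def find_phone(user_id: int) -> Optional[int]:
    phones = {1: 5551234}         # a missing phone is "a missing int"
    return phones.get(user_id)

# At runtime both misses collapse to the same jagged None...
print(find_user(2), find_phone(2))  # None None

# ...but a static checker such as mypy still sees Optional[str] versus
# Optional[int], so mixing the two "nopes" is flagged before the code runs:
#     name: Optional[str] = find_phone(1)   # error: incompatible types
```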

                      1. 1

                        “An explanation that I find enlightening goes something like this “Maybe is a typed nil.” ”

                        Like “nan” ?

                        1. 1

                          Yeah, that’s a really good example! The use of NaN in the wild follows this pattern. We don’t see people writing functions where this happens:

                          function getName(id) {
                            var result = lookup(id);
                            if(result) {
                              return result;
                            } else {
                              return NaN;
                            }
                          }
                          

                          So the concept of a typed null, at least for numbers, is out there.

                      1. 3

                        I’ve been thinking about this sort of thing in Ruby - the language makes a lot of tradeoffs for dynamic, polymorphic everything that is little-used in practice.

                        1. 6

                          Three years ago I attempted to introduce such an optimisation in MRI, the canonical Ruby VM. I looked for monomorphic call sites and had them compiled to custom opcodes invoking statically determined methods. I hoped to introduce some savings by removing expensive method lookups dominating MRI’s profiling graphs.

                          It all looked promising in theory, but in practice I didn’t achieve a lot, as documented in my MSc thesis. Some synthetic benchmarks showed promise, but more real world scenarios—e.g. simple Sinatra or Rails apps under load—weren’t affected by my changes. MRI’s built-in inline method caches were already doing a very good job. Moreover, I’m happy to see that people are still working on making them more efficient.

                          1. 1

                            Great research! I would never have expected that practical result, that’s really interesting.

                            1. 1

                              I can’t seem to find a PDF of this. I’m super interested, as my day job is working on JIT technology, currently being proven on Ruby, and we’re always on the lookout for reading material.

                            2. 2

                              I don’t remember which story it was, but there was one posted here a month or so back about trying to optimize JavaScript by removing type checks. The observation the author made was that the type of something is almost always static for its entire life; only rarely is the dynamism actually used. This makes a lot of sense if you consider the human element of programming: if everything was dynamic all the time it would be impossible for any programmer to keep track of what is going on.

                              My opinion is that many people default to dynamically typed and move to static types over time, but we should probably have languages which make it easier to default to static types and provide a dynamic escape hatch. While I have not used it, I think C# offers this with its dynamic type.

                              1. 9

                                This made me finally learn about two Haskell features that I’d heard about. Haskell has (at least) two different mechanisms for doing dynamic sorts of things. The first use case is when you have incorrect code in your program, but you’d like to compile it anyway. You know that you aren’t actually going to use that part of the code and so it shouldn’t matter. In that case, use the compiler flag -fdefer-type-errors. This gets you pretty much what you have in a dynamic language. Blatant type errors are only a problem if you actually try to run that code:

                                aString :: String
                                aString = 42
                                
                                main :: IO ()
                                main = putStrLn "Hello world!"
                                

                                This will compile with just a warning. The second case is when you want explicit dynamic types. Here you use the Dynamic type from the standard library:

                                import Data.Char (toUpper)
                                import Data.Dynamic

                                vals :: [Dynamic]
                                vals = [ toDyn 1
                                       , toDyn "cat"
                                       , toDyn 3.14159
                                       ]
                                
                                useString :: [Dynamic] -> String
                                useString list =
                                  let str :: String
                                      str = fromDyn (list !! 1) ""
                                   in map toUpper str
                                

                                And this behaves like this:

                                > useString vals
                                "CAT"
                                

                                All-in-all, since the bulk of code is (monomorphically) typed, defaulting to static types, with dynamic escape hatches like the above sounds nice!

                                1. 5

                                  It’s always interesting to me that Haskell has such a nice dynamic type which sees almost no use.

                                2. 3

                                  I wonder if you’re thinking of Simple and Effective Type Check Removal through Lazy Basic Block Versioning – Maxime Chevalier-Boisvert, Marc Feeley, ECOOP 2015.

                                  Even if you weren’t, it’s a good read; an interesting technique for exploiting this observation in a novel JIT compilation strategy.

                                  Edit: Updated link, which didn’t have PDF.

                              1. 4

                                Just bought the MEAP. I’m really liking it so far. I think I’m finally starting to wrap my mind around dependent types. The printf function is so cool.

                                1. 4

                                  I also made a video on how to implement the printf function in Idris if you’re interested:

                                  https://youtu.be/fVBck2Zngjo

                                  1. 3

                                    I’ve also been enjoying the MEAP. It has the most readable explanation of totality I’ve found so far. (Someone asked me to define this on Twitter a while back, and I had a hard time finding an explanation aimed at programmers instead of mathematicians.) I especially liked this bit: “…knowing that a function is total allows us to make much stronger claims about its behaviour based on its type.” Very clear and focused on the practical benefits to the programmer.

                                    1. 2

                                      Have to say I agree with the totality explanation being quite good. Overall it is a pretty good start.

                                  1. 4

                                    I found these slides while wandering around (in a daze) after watching this video: https://www.youtube.com/watch?v=w-I6XTVZXww

                                    It seemed completely baffling that doing a sum: 1 + 2 + 3 + ... could yield -1/12. After reading Everything and More, it is as interesting to hear people’s reactions to notions of infinity as it is to ponder the mathematics itself. I’m also less willing to trust my intuition when it comes to infinity. The Wikipedia page about the sum of the natural numbers starts off easily enough, but then veers off into some really heady reasoning by Ramanujan and finally pulls in the zeta function.

                                    I just wanted to share this interesting gem with others.

                                    1. 6

                                      FWIW, I think it’s a mistake to think of the equation

                                      1+2+3+… = -1/12

                                      as relating to notions of or intuitions about infinity. Rather, we begin by studying the complex-valued function

                                      \sum_n n^{-s}.

                                      Where is this defined? Whenever the series converges, i.e., when the real part of s is greater than 1. Next, by a process called analytic continuation, we define a function, the Zeta function, which has the property that

                                      Zeta(s) = \sum_n n^{-s}

                                      whenever Re(s) > 1. A key fact of complex analysis is that any two smooth (complex-differentiable) complex functions which agree on any disk, no matter how small, must be equal everywhere they are defined. This tells us that there is only one possible extension of our series to the entire complex plane, namely the zeta-function we defined.

                                      Finally, one can show that Zeta(-1) = -1/12. If we were to plug in s = -1 to our series, we’d get

                                      1+2+3+…=-1/12,

                                      giving the “formula” we wanted, but this doesn’t have much to do with notions of infinity. It’s just a way to assign a value to a power series.
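                                      For the curious, one concrete route to that value (a sketch, taking Riemann’s functional equation as given, together with the standard facts sin(-π/2) = -1, Γ(2) = 1, and Zeta(2) = π²/6) goes:

```latex
\zeta(s) = 2^{s}\,\pi^{s-1}\,\sin\!\left(\frac{\pi s}{2}\right)\Gamma(1-s)\,\zeta(1-s)
\quad\Longrightarrow\quad
\zeta(-1) = \frac{1}{2}\cdot\frac{1}{\pi^{2}}\cdot(-1)\cdot 1\cdot\frac{\pi^{2}}{6} = -\frac{1}{12}
```

                                      Note that no infinite sum is ever evaluated at s = -1; the value comes entirely from the analytically continued function.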

                                      P.S.: Thanks for posting this — it’s great to see the string-theory perspective on the number theory.

                                      1. 0

                                        That does it. Math is broken.

                                        Edit: Indeed it is broken! From Wikipedia:

                                        In particular, the step 4c = 0 + 4 + 0 + 8 + · · · is not justified by the additive identity law alone. For an extreme example, appending a single zero to the front of the series can lead to inconsistent results. [1]

                                        That paragraph is followed by some insanity, then this:

                                        A summation method that is linear and stable cannot sum the series 1 + 2 + 3 + … to any finite value. (Stable means that adding a term to the beginning of the series increases the sum by the same amount.) […] The methods used above to sum 1 + 2 + 3 + … are either not stable or not linear.

                                        The idea that 1 + 2 + 3 plus more non-zero numbers sums up to something less than zero is simply utter nonsense.

                                        Wikipedia has yet another gem:

                                        In the primary literature, the series 1 + 2 + 3 + 4 + ⋯ is mentioned in Euler’s 1760 publication De seriebus divergentibus alongside the divergent geometric series 1 + 2 + 4 + 8 + ⋯. Euler hints that series of this type have finite, negative sums, and he explains what this means for geometric series, but he does not return to discuss 1 + 2 + 3 + 4 + ⋯. In the same publication, Euler writes that the sum of 1 + 1 + 1 + 1 + ⋯ is infinite.

                                        So there you go, 1 + 2 + 3 + … is a negative number, but 1 + (1 + 1) + (1 + 1 + 1) + … is infinite.

                                        Right. Yeah.

                                        If some system of math tells you otherwise, I think it’s more likely the case that there’s some sort of logical error in reasoning within that system.

                                      1. 9

                                        I’ll admit that if the object under discussion is JavaScript, then this may actually be a bit of a no-brainer. But if we expand the argument out a bit, I think we can have some more interesting conversations about testing value.

                                        For example, what class of errors can we detect with tests, and perhaps as importantly, which do we actually catch? I’ve seen codebases littered with assertions and tests which verify that something explodes in the expected way if some constructor receives a null argument. OK, are unit tests the best way to approach that? Even short of using a language with no nulls, could we do better by adopting some form of gradual typing? (A solution which may actually apply to JS, for example.)

                                        Let’s move on and assume that we’ve eliminated (somehow) the class of errors which could be categorised as “simple mechanical”, such as the previous. We’ll never get invalid types of things, or non-things, passed to functions which expect specific things. Now we might be into places which are more awkward: what does happen to my address validation function if I pass it a string of length 2^16? An empty string? A string containing control characters? A string containing unusual Unicode planes? We could reach for unit testing again, but could we ever really feel like we’ve written enough tests? No, something like property-based testing is much more likely to be valuable than our manual efforts.
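                                        To make that concrete, here is a hand-rolled sketch of a property-based test (the validator and its rule are invented for illustration; a real project would reach for a library such as Hypothesis or QuickCheck):

```python
import random
import string

# Hypothetical validator under test; the rule (non-empty, printable,
# at most 100 chars) is invented purely for this sketch.
def validate_address(s: str) -> bool:
    return 0 < len(s) <= 100 and all(c in string.printable for c in s)

# Hand-rolled property test: throw empty, oversized, and
# control-character-laden strings at the function, and check properties
# rather than hand-picked cases.
random.seed(0)
alphabet = string.printable + "\x00\x07\x1b"   # include some control chars
for _ in range(1000):
    n = random.randrange(0, 300)
    s = "".join(random.choice(alphabet) for _ in range(n))
    result = validate_address(s)
    assert isinstance(result, bool)            # never raises, always a bool
    if result:                                  # accepted strings obey the rule
        assert 0 < len(s) <= 100

print("1000 random cases passed")
```

                                        A thousand generated cases cover more of the awkward input space than any hand-written list of examples, which is the point of the property-based approach.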

                                        Now we’re probably in to the world where “simple and complex mechanical” errors are removed (to some degree). Now, operational. Are we ever likely to catch complex concurrency errors with unit testing? Probably not, most people assume unit tests to be isolated, single threaded, etc. and so by definition we’re not going to be finding races or interesting edges. That’s really got to be some level of integration/system test.

                                        What does this leave us with? Well, we’ve still got actual business logic of course. Unit tests now then? Hmmm. Maybe… Who’s writing them? The same person as writing the code? OK. Am I likely to come up with the logical inconsistencies to write tests for them, if I didn’t when writing the code I’m testing (regardless of the order I write the code/tests)? Indeed, how well do I understand what I’m building? If there’s a specification, then I’m probably better off trying to move things back towards mechanical verification of compliance (perhaps even proofs in tractable cases). If there isn’t, then what we’re testing is my understanding of the requirements - am I able to write tests which highlight my lack of understanding? Probably we reach the realm of philosophy now, but…

                                        OK, so this is a bit tongue in cheek. Of course unit testing can have value. But I see far too much uncritical valuation of testing, particularly unit testing, without true consideration of the possibilities or flaws. Testing is hard and a lot of modern testing theory promises to reduce it to mere mechanical effort. It never has been and never will be if it’s going to have lasting value.

                                        1. 3

                                          If there’s a specification, then I’m probably better off trying to move things back towards mechanical verification of compliance (perhaps even proofs in tractable cases). If there isn’t, then what we’re testing is my understanding of the requirements - am I able to write tests which highlight my lack of understanding?

                                          I’ve worked on a project with dedicated BAs who could always tell me exactly what should happen in any specific scenario, but whenever they tried to write a specification it didn’t correspond to their examples. In the end I just asked them for examples and wrote the simplest code that made those test cases pass. So maybe the use case is when the domain itself isn’t formally understood?

                                          1. 5

                                            Though it didn’t turn out to be more than a week or two of headache, I have sometimes found myself given examples by a client having the flavor of the voting paradox. There’d be examples like “if such and such happens, then Concern A trumps Concern B,” “but if so and so happens, then Concern B trumps Concern A,” and so on like that. Pretty soon we’d arrive at a result that the client found to be really unexpected and he’d point out the bug. Then we’d walk back through some examples that showed nope, this is consistent with the (implied) rules. I have come to prefer to deduce from principle rather than induce by example, if for no other reason than it seems to cut to the chase a lot faster. But, like you said, it doesn’t help if the domain isn’t really understood.

                                        1. 2

                                          This definitely falls under the “hobby” end of your request, but there’s a novel about debugging that I really enjoyed. Try “The Bug” by Ellen Ullman

                                          1. 4

                                            Although it seems lesser known, AsciiDoc should probably be mentioned. As in the article, while Markdown can be seen as syntax sugar for HTML, AsciiDoc is like syntax sugar for DocBook. It supports the full range of things that are required in more long-form writing like footnotes, cross-references, external include files, and bibliographies. I’ve been using it to write a tech book for Manning and have been pretty pleased. The syntax overhead is similar to Markdown and in some cases the two formats use the same notation.

                                            The two implementations that I know about are the original, and asciidoctor. A big bummer, for me at least, is that AsciiDoc isn’t an input format to pandoc. This means there’s no clear way into that swiss-army knife world.

                                            1. 3

                                              A big bummer, for me at least, is that AsciiDoc isn’t an input format to pandoc. This means there’s no clear way into that swiss-army knife world.

                                              DocBook is though, so you could do asciidoc -b docbook -o - file.txt | pandoc -f docbook -t [whatever] :-)

                                            1. 2

                                              Where in the article do they go over writing a Slack bot?

                                              1. 7

                                                I don’t. That’s just something I pulled out of the oven, Julia Child-style. The real point of this article is just how to use the stack command. The bot is really simple. Here’s the entire source (slightly modified):

                                                {-# LANGUAGE OverloadedStrings #-}
                                                module Main where
                                                
                                                import Data.Monoid ((<>))
                                                import System.Environment (getEnvironment)
                                                import Web.Spock.Safe
                                                
                                                main :: IO ()
                                                main = do
                                                    env <- getEnvironment
                                                    let port = maybe 8080 read $ lookup "PORT" env
                                                
                                                    runSpock port $ spockT id $
                                                        post "spock" $ do
                                                            comment <- param "text"
                                                            text $ case comment of
                                                                Just t  -> "\"" <> t <> "\" " <> "highly illogical, captain"
                                                                Nothing -> "I didn't understand that."
                                                

                                                That just gets the environment variable, PORT, to run the server on and then runs the app. The app responds to a POST at http://some.url.here/spock with a param of text and then builds a response based on that. That’s enough to handle a slash command in Slack. Like I said, really simple.

                                                1. 3

                                                  Sorry for coming off as linkbait. :(

                                                  1. 2

                                                    Eh, haters gonna hate. You got 12 positive votes.

                                                    Are you the OP? It’s great to see more people using Haskell.

                                                    1. 3

                                                      twopoint718 is the author, I’m the OP. We work together at Bendyworks.

                                                      1. 2

                                                        Haskellers in Madison? I lived there for a year. The lakes are beautiful.

                                                        I’m in Chicago and I’m also very excited about Haskell. We should get coffee or a beer if you’re ever in town (and I’ll reach out if I’m ever in your neck of the woods).

                                                    2. 1

                                                      I didn’t see it as linkbait, but I was also surprised by the article not documenting any of the development. I think a title change would solve the issue.

                                                  1. 3

                                                    Was this talk recorded anywhere?

                                                      1. 3

                                                        Sad but true. As an engineer who works primarily in Ruby I’d really like to see Ruby become statically typed. Unit tests would become much less of a nightmare.

                                                        1. 1

                                                          I have embarked on a “just use Haskell for ALL THE THINGS” [1] approach for the past 6 months. Honestly, I’m starting to really really freak out having to use Python and Ruby or Perl or whatever for work. It’s kind of weird to catch myself thinking: why is there so much unneeded drama to using classes? Ugh. I keep wanting Hoogle so I can just find the function that implements the function shape I want, which is generally always there, or if it isn’t, it isn’t too hard to build.

                                                          I think my time of taking dynamic languages as the best thing ever is definitely reaching a middle.

                                                          1. 1

                                                            I feel like I’m in the same phase. I read so much about Haskell over the past year or so and recently, I started doing the same thing. Like John Carmack mentioned in his QuakeCon talk: (paraphrasing) “I’ve set out to write my 10,000 lines of Haskell to really learn about it in a practical way.”

                                                            1. 1

                                                              My problem is that I can’t just slip in Haskell code at work… I think I’d certainly ruffle some feathers. :)

                                                          2. 1

                                                            In other words: has Betteridge’s law of headlines been disproved? No.

                                                            1. 1

                                                              That video link was broken, but I think this is the same video: https://www.youtube.com/watch?v=M8x0bc81smU

                                                            1. 12

                                                              The author has an interesting point but deliberately biases the whole discussion by defining type systems such as Hindley-Milner as “human friendly” and traditional ones as “compiler friendly”, then circularly arguing that the ones he defines as human friendly are more friendly to humans.

                                                              If you’re going to employ biased language to make an argument you could just as easily describe Hindley-Milner as “mathematician friendly” and C-style ones as “programmer friendly”. But that would get us nowhere.

                                                              What I’d love to see is an empirical study of the effect that type systems have on programmer productivity, software reliability and code maintenance. Does anyone know if there are any studies out there like that?

                                                              1. 8

                                                                Yes, it turns out there are a lot of studies on this. After hearing some particularly strong claims about this sort of thing recently, I did a literature review and wrote up notes on each study, just for my own benefit.

                                                                The notes are way too long to post as a comment here (the notes are over 6000 words, i.e., longer than the average Steve Yegge blog post), and they’re actually pretty boring. Let me write a short summary. If anyone’s interested in the full results, I can clean them up and turn them into a blog post.

                                                                1. Most older controlled studies have a problem in their experimental methodology. Also, they often cover things most people probably don’t care about much anymore, e.g., is ANSI C safer than K&R C?

                                                                2. Some newer controlled studies are ok, but because of their setup (they compare exactly one thing to exactly one other thing) and the general lack of similar but not identical studies, it’s hard to tell how generalizable the results are. Also, the effect size these studies find is almost always very small, much smaller than the difference in productivity between programmers (who are usually selected from a population that you’d expect to have substantially smaller variance in productivity than the population of all programmers).

                                                                3. People rarely study what PL enthusiasts would consider “modern” languages, like Scala or Haskell, let alone languages like Idris. I don’t like that, but it’s understandable, since it’s probably easier to get funding to study languages that more people use.

                                                                4. There are a number of studies that mine data from open source repositories. They come up with some correlations, but none of them do anything to try to determine causation (e.g., instrumental variable analysis). Many of the correlations are quite odd, and suggest there are interesting confounding variables to discover, but people haven’t really dug into those.

                                                                For example, one study found the following in terms of safety/reliability of languages: (Perl, Ruby) > (Erlang, Java) > (Python, PHP, C), where > indicates more reliable and membership in the same tuple indicates approximately equal reliability. Those weren’t the only languages studied; those particular results just jumped out at me. The authors of the study then proceeded to aggregate data across languages to determine the reliability of statically typed v. dynamically typed languages, weakly typed v. strongly typed, and so on and so forth.

                                                                That analysis would be interesting if the underlying data was sound, but it doesn’t pass the sniff test for me. Why are languages that are relatively similar (Perl/Ruby/Python) on opposite ends of the scale, whereas some languages that are very different end up with similar results? The answer to that is probably a sociological question that the authors of the study didn’t seem interested in.

                                                                This seems to echo the results of the controlled studies, in that the effect a language has on reliability must be relatively small for it to get swamped by whatever confounding variables exist.

                                                                5. There are lots of case studies, but they’re naturally uncontrolled and tend to be qualitative. Considering how little the other studies say, the case studies make for some of the most interesting reading, but they’re certainly not going to settle the debate.

                                                                1. 3

                                                                  Anecdotally, SML’s type system is not particularly friendly to programmers (it’ll beat you over the head over and over again until you satisfy its exacting requirements) but extremely supportive of correctness; Java’s type system isn’t especially friendly to anyone due to its inflexibility, inexpressiveness, and extraordinary amount of required boilerplate; and Python’s type system (or lack thereof) is incredibly friendly to programmers (“sure, you can do that! Why not! Treat this int as a dict, I don’t care!”) but allows all sorts of bugs ranging from the blatantly obvious to the deviously subtle to make it into running code.

                                                                  I’ve seen a couple of studies of programmer productivity in various languages/environments (not, IIRC, looking specifically at type systems), but none well-founded or conclusive enough for me to remember or recommend them.

                                                                  1. 4

                                                                    Under the dispatching section the author talks really briefly about branching based on the type of the first argument. There’s a really good section on this in HaskellWiki about using existential types to create something like the visitor pattern: http://www.haskell.org/haskellwiki/Existential_type#Dynamic_dispatch_mechanism_of_OOP

                                                                    1. 3

                                                                      Okay, out of curiosity, could someone implement the:

                                                                      float myDivide3(float a, float b, proof(b != 0) p) {
                                                                          return a / b;
                                                                      }
                                                                      

                                                                      function given in the article?

                                                                      1. 5

                                                                        Well, here’s the code in Agda. If you know a little FP it shouldn’t be too terrible.

                                                                        I reimplemented just about everything except numbers for “learning purposes”. In practice this could be shortened by using the stdlib much more heavily; in particular we could hack out the definitions of Eq, not, False, ProofPair, and so on.

                                                                        Update: Here’s the version with the stdlib.
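                                                                        For readers without Agda handy, the same idea can be approximated in plain Haskell with a smart constructor. This is a hedged sketch of my own, not the linked code: without dependent types we can’t pass a literal proof term, but hiding the constructor of a wrapper type gives a similar guarantee. All names here (NonZero, nonZero, myDivide3) are illustrative.

                                                                        ```haskell
                                                                        -- The constructor would be kept private in a real module, so the
                                                                        -- only way to obtain a NonZero is to pass the runtime check below.
                                                                        newtype NonZero = NonZero Float

                                                                        nonZero :: Float -> Maybe NonZero
                                                                        nonZero 0 = Nothing
                                                                        nonZero x = Just (NonZero x)

                                                                        -- Division can now only be called with a checked, non-zero divisor.
                                                                        myDivide3 :: Float -> NonZero -> Float
                                                                        myDivide3 a (NonZero b) = a / b

                                                                        main :: IO ()
                                                                        main = print (fmap (myDivide3 6) (nonZero 2))   -- prints Just 3.0
                                                                        ```

                                                                        Unlike the Agda version, the “proof” here is checked at runtime rather than compile time, but the unsafe call simply can’t be expressed.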

                                                                      1. 24

                                                                        For those not wanting to watch 6 minutes of video, the summary is:

                                                                         Haskell has traditionally been safe but also useless, meaning that it’s free of side-effects (and what’s the point of a computer program that has no side effects?). He contrasts this with C, Java, and C#, which are unsafe but useful, and points out that all languages are developing in the direction of safe and useful. For C/Java/C# this means going from unsafe to safe, and for Haskell it means going from useless to useful. To do this for Haskell, they’ve figured out how to keep side effects separate from pure code, and this concept has been slowly leaking back into C#.

                                                                        1. 4

                                                                          Excellent summary, that is exactly what I took away from it.

                                                                          1. 4

                                                                            and what’s the point of a computer program that has no side effects?

                                                                            A side-effect is a function which does something other than produce a value. We don’t need those to write programs; Haskell is the perfect example of treating IO as values.
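                                                                             To make “IO as values” concrete, here’s a minimal illustration of my own (not from any of the linked material): IO actions are ordinary values that can be stored in a list and rearranged, and nothing runs until the program’s single entry point executes them.

                                                                             ```haskell
                                                                             -- Building this list performs no effects; these are just values.
                                                                             greetings :: [IO ()]
                                                                             greetings = [putStrLn "hello", putStrLn "world"]

                                                                             main :: IO ()
                                                                             main = sequence_ (reverse greetings)   -- effects happen only here:
                                                                                                                    -- prints "world" then "hello"
                                                                             ```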

                                                                            1. 5

                                                                              The act of producing a value is a side effect. If you print it to a screen, it’s a side effect. If you set the exit code of your process, that’s a side effect. If you want to do anything like communicate with a database or a service or a keyboard, those are all side effects.

                                                                              Even non-termination is a side-effect (one which happens to be uncontrolled in Haskell).

                                                                              1. 7

                                                                                The act of producing a value is a side effect.

                                                                                What does that mean?

                                                                                 I’ve seen some people say similar things, but their definition of a side-effect is “something does something”, which is next to useless and not what side-effect actually means!

                                                                                1. 5

                                                                                  I think the following sentences explained it very well.

                                                                                  To actually produce a value using a program running on a computer it must perform a side effect. This is the standard definition in the literature, including Peyton-Jones’s.

                                                                                  Edit: I think I see how this is confusing. Technically, any observable effect of running a program is a side effect. Modifying the exit code of a process is observable and is technically a side effect.

                                                                                  However, this tends not to be a definition of super-practical use. In context, the term “no side effects” is often used to mean something more like “it doesn’t really touch state that matters.” This is also the way “monad” is often used – people will say something is a monad when it has the right type but doesn’t strictly follow the category rules.

                                                                                  The point I was making was really that you were nitpicking unproductively. Serious language designers should recognize that the goal in language design is to actually write programs, not debate grammatical points.

                                                                                  1. 5

                                                                                    I don’t think I’m nitpicking, it’s an important distinction to make and people often get it wrong. I don’t think it’s unproductive to point out the problem, especially since you’re now wondering about it.

                                                                                    To actually produce a value using a program running on a computer it must perform a side effect. This is the standard definition in the literature, including Peyton-Jones’s.

                                                                                    Here’s a screen shot of the awesome Functional Programming in Scala book. Doing something other than “returning a value” really is the formal definition! Pinky promises that I’m not making things up.

                                                                                    Serious language designers should recognize that the goal in language design is to actually write programs, not debate grammatical points.

                                                                                    I don’t know what this means, are you imagining a dichotomy between talking about what ‘side-effect’ means and enabling the writing of programs? What does it mean to be a serious language designer?

                                                                                    1. 1

                                                                                      Purity is a bit up to the language to define, however. For instance, evaluation in Haskell can force thunks and cause the execution of other code to change observably. It’s simply a matter of Haskell preference to decide that evaluation time is “unobservable” from the point of view of the language semantics.

                                                                                      There’s still a super important distinction to be had around purity. It becomes clearest when you have strong types and thus a consistent and total notion of canonical values. In that case, one rough image of “purity” is to say that the only thing you get out of running a function is the canonical value of its result. In a more mathematically rigorous sense you might write

                                                                                      The function f : A -> B is pure iff
                                                                                      
                                                                                        const () . f == const ()
                                                                                      

                                                                                      In other words, when I throw away the result of computing the function f using const () the result is somehow “equal” to const (). The exact choice of const () and the exact meaning of equality shift around, though, and provide slight variations on what purity means. For instance, we might replace const () with

                                                                                      trivial :: a -> ()
                                                                                      trivial a = a `seq` ()
                                                                                      

                                                                                      which is a more restrictive form of “purity” since non-termination is now a side-effect.
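                                                                                      To see the difference the two definitions make, here’s a small runnable demo of my own construction: `const ()` never looks at its argument, while `trivial` forces it, so a bottom value such as `undefined` is observable only under the stricter definition.

                                                                                      ```haskell
                                                                                      import Control.Exception (SomeException, evaluate, try)

                                                                                      trivial :: a -> ()
                                                                                      trivial a = a `seq` ()

                                                                                      main :: IO ()
                                                                                      main = do
                                                                                        -- const () ignores its argument entirely, so even bottom is fine:
                                                                                        print (const () undefined)
                                                                                        -- trivial forces its argument, so bottom now escapes:
                                                                                        r <- try (evaluate (trivial undefined)) :: IO (Either SomeException ())
                                                                                        putStrLn (either (const "bottom observed") (const "no effect") r)
                                                                                      ```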

                                                                                      1. 2

                                                                                        How is it a preference for Haskell to say that evaluation time is unobservable? If Haskell allowed functions to observe time then those terms would not be referentially transparent! It’s not about what a language is “allowed” to observe.

                                                                                        Haskell works under the definition that bottom is a valid value and seq is not referentially transparent. If you change the definition of Haskell terms, you could say that bottom is not a value and then you get to say that non-termination is a side-effect. I’m definitely with you on doing that!

                                                                                        1. 1

                                                                                          No, of course. Haskell is internally consistent with its definition of pure (except where it isn’t, and those places are forbidden). I’m just relating that the definition of side effect is often a little bit fuzzy at least. It’s a good argument to have as to what the tradeoffs of stricter and stricter definition of purity are.

                                                                                          I’d love to have non-termination as an effect, but I’m also perfectly happy to admit that it’s a tradeoff.

                                                                                2. 2

                                                                                  So under this definition, what isn’t a side effect? I don’t disagree that most of the things you mentioned are side-effects but if we extend this notion such that even producing any value is a side-effect then this is just a truism since every conceivable computation is effectful. It’s more interesting to study the topic under the constrained definition of observable vs unobservable effects.

                                                                                  1. 2

                                                                                    No, once you accept the definition that everything is a side effect, it lets you focus on adding language support for controlling the side effects that matter.

                                                                                    Writing

                                                                                        x = 1
                                                                                        y = 2
                                                                                        x = 1

                                                                                    is side-effectful according to Haskell definitions (you modified x), but it’s also completely unimportant. You haven’t gained anything from pointing that out. Talking about side-effects is only useful in a highly contextual discussion about making it easier to perform some specific task in a programming language.

                                                                                    1. 5

                                                                                      You do gain things from pointing it out, though. Huge things!

                                                                                      By pointing out something is a side-effect, you’re also pointing out that you’ve lost referential transparency. By pointing out that you’ve lost referential transparency, you’re also pointing out that you’ve lost the substitutional model of programs. By pointing out you’ve lost the substitutional model of programs, you’re also pointing out that you’ve lost equational reasoning.

                                                                                      And I really don’t want to lose equational reasoning. I do functional programming because I highly value treating my programs as sets of equations. It allows me to treat types as theorems, allows abstraction without implications and is just an extremely simple form of formal reasoning which people can do in their heads or on paper!
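                                                                                      As a tiny concrete example (my own, not from the thread) of the equational reasoning being defended here: because `double` is pure, `double x + double x` can be rewritten to `2 * double x` by substituting equals for equals, and the two programs mean the same thing.

                                                                                      ```haskell
                                                                                      double :: Int -> Int
                                                                                      double n = n + n

                                                                                      -- If double also performed an effect (say, printing), this rewrite
                                                                                      -- would change the program's behaviour; purity is what licenses it.
                                                                                      main :: IO ()
                                                                                      main = print (all (\x -> double x + double x == 2 * double x) [-100 .. 100])
                                                                                      -- prints True
                                                                                      ```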

                                                                                      1. 1

                                                                                        When I was writing C and IA-32 assembler in kernel mode I didn’t care that I didn’t have equational reasoning and I didn’t want equational reasoning. I was altering state by writing a program that altered state in exactly the way I wanted it to.

                                                                                        You’re describing a very specific way of writing a program. What you don’t seem to accept is that it is not the only way to write programs, or sometimes even the best way to write programs. Sometimes those two ways of programming are even two parts of the same program. Maybe I want referential transparency sometimes, and not others. It’s the job of the language to help me perform the task I’m working on in the best way possible in that context.

                                                                                        1. 3

                                                                                          That seems a little like an appeal to moderation. Can you clear it up for me and give an explicit reason not to want a substitution model for your programs?

                                                                                          I know I’d definitely take referential transparency when describing a kernel, if I could get it. It would absolutely give me more confidence that I’m altering state in the exact way I’d want to. I may not be able to achieve it, but that doesn’t mean I wouldn’t want to!

                                                                                            1. 4

                                                                                              I accept that we can’t have referential transparency with assembly. That code doesn’t make me want to give it up, though, it makes me want to get to a place where we can do what assembly can while maintaining referential transparency!

                                                                                              Can we? Who knows? I just know I’m not satisfied…

                                                                                              Oh, I just realised that we could define this stuff inside of a referentially transparent EDSL and could probably do the same stuff. Almost like what Microsoft has been playing with.
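                                                                                              A hedged sketch of what such an EDSL might look like (entirely my own toy illustration, not Microsoft’s actual work): the “effectful” program is just a data value, and nothing happens until a separate interpreter runs it, so building and transforming programs stays referentially transparent.

                                                                                              ```haskell
                                                                                              -- A toy instruction set as plain data; constructing a [Instr] has no effects.
                                                                                              data Instr = Load Int | Add Int

                                                                                              -- A pure interpreter over a single accumulator register.
                                                                                              run :: [Instr] -> Int
                                                                                              run = foldl step 0
                                                                                                where
                                                                                                  step _   (Load n) = n
                                                                                                  step acc (Add n)  = acc + n

                                                                                              main :: IO ()
                                                                                              main = print (run [Load 1, Add 2, Add 3])   -- prints 6
                                                                                              ```

                                                                                              Because `run` is pure, you can reason equationally about the instruction list itself, e.g. prove rewrites like collapsing adjacent `Add`s, before anything executes.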

                                                                                              1. 2

                                                                                                what Microsoft has been playing with.

                                                                                                Hey, I never said that typing was inappropriate.

                                                                                                (Also, read the discussion on that paper. It’s quite funny – basically Andrew is trying to describe in the kindest words how crappy their dev experience was).

                                                                              2. 2

                                                                                Great tl;dw (didn’t watch)

                                                                                There was something that Simon Peyton Jones once said (though I’m having trouble finding the link) where he mentioned that you can parse the unofficial Haskell motto: “avoid success at all costs” in two ways. The first way, and probably the way that most people take it is “do everything you can to not be successful.” Haskellers sometimes joke that we’re starting to fail at that. But Simon mentions that you can also parse it in another way (my paraphrasing): “avoid doing unprincipled things just to become popular.” So it could be: “avoid: ‘success at all costs’.” And I think this video hints at that. Haskell started “safe but useless” and is moving carefully up the usefulness axis.

                                                                              1. 2

                                                                                Odd presentation aside, this definitely piqued my interest in Idris, in particular the ?wtf variable used to start interactively proving the theorem embedded in the types.

                                                                                I was curious enough to go look for official Idris docs (here’s the wiki on github) - I was hoping for just a slightly deeper explanation along the same lines, but mostly ended up in the deep end and gave up. It’d be great to see a similar “here’s how to use dependent types and interactive proving in a real program”, with a little more detail and a little less trollage :)

                                                                                1. 4

                                                                                  Paul Callaghan wrote an amazing series of posts in the PragPub Magazine about functional programming. Later on in the series he dips into a few posts about dependent types in Idris:

                                                                                  1. http://pragprog.com/magazines/2013-03/uncle-bob-and-functional-programming
                                                                                  2. http://pragprog.com/magazines/2013-04/dependent-types
                                                                                  3. http://pragprog.com/magazines/2013-05/dependent-types-part-ii
                                                                                  4. http://pragprog.com/magazines/2013-06/unification
                                                                                  5. http://pragprog.com/magazines/2013-07/dependent-types-iii

                                                                                  I’m including the part of the series covering Idris and dependent types, but if you are interested, he has earlier articles which are very much worth reading.