1. 13

    Rich has been railing on types for the last few keynotes, but it looks to me like he’s only tried Haskell and Kotlin, and hasn’t used them a whole lot. Some of his complaints look like complete strawmen to anyone with solid experience of a type system as sophisticated as Haskell’s, and others are better addressed in languages with type systems different from Haskell’s, such as TypeScript.

    I think he makes lots of good points; I’m just puzzled as to why he’s seemingly ignoring a lot of research in type theory while designing his own type system (clojure.spec), and, if he’s not, why he thinks other models don’t work either.

    1. 14

      One nit: spec is a contract system, not a type system. The former is often used to patch up a lack of the latter, but it’s a distinct concept you can do very different things with.

      EDIT: to see how they can diverge, you’re probably better off looking at what Racket does than what Clojure does. Racket is the main “research language” for contracts and does some pretty fun stuff with them.

      1. 4

        It’s all fuzzy to me. They’re both formal specifications, and they overlap in a lot of ways. Many of the types people describe could be written as pre/post conditions and invariants in contract form for specific data or the functions on it. And a contract system extended to handle things beyond simple Booleans will embed enough logic to do what advanced type systems do.

        Short of Pierce or someone formally defining them, I don’t know, as a formal-methods non-expert, that contract and type systems in their general forms are fundamentally that different, since they’re used the same way in a lot of cases. Interchangeably, it would appear, if each uses equally powerful and/or automated logics.

        1. 13

          It’s fuzzy but there are differences in practice. I’m going to assume we’re using non-FM-level type systems, so no refinement types or dependent types for full proofs, because once you get there all of our intuition about types and contracts breaks down. Also, I’m coming from a contract background, not a type background. So take everything I say about type systems with a grain of salt.

          In general, static types verify a program’s structure, while contracts verify its properties. Like, super roughly, static types are whether a program is sense or nonsense, while contracts are whether it’s correct or incorrect. Consider how we normally think of tail in Haskell vs, like, Dafny:

          tail :: [a] -> [a]

          method tail(s: seq<T>) returns (o: seq<T>)
            requires |s| > 0
            ensures [s[0]] + o == s

          The tradeoff is that verifying structure automatically is a lot easier than verifying semantics. That’s why historically static typing has been compile-time while contracts have been runtime. Often advances in typechecking subsumed use cases for contracts. See, for example, how Eiffel used contracts to ensure “void-free programming” (no nulls), which is subsumed by optionals. However, there are still a lot of places where they don’t overlap, such as in loop invariants, separation logic, (possibly existential contracts?), arguably smart-fuzzing, etc.

          Another overlap is refinement types, but I’d argue that refinement types are “types that act like contracts” versus contracts being “runtime refinement types”, as most successful uses of refinement types came out of research in contracts (like SPARK) and/or are more ‘contracty’ in their formulations.

          1. 3

            Is there anything contracts do that dependent types cannot?

            1. 2

              Fundamentally? Not really, nor vice versa. Both let you say arbitrary things about a function.

              In practice contracts are more popular for industrial work because they so far seem to map better to imperative languages than dependent types do.

              1. 1

                That makes sense, thanks! I’d never heard of them. I mean, I’ve probably seen people throw the concept around, but I never took it for an actual thing.

        2. 1

          I see the distinction when we talk about pure values and sum and product types. I wonder if the IO monad, for example, isn’t kind of more on the contract side of things. Sure, it works as a type, and type inference algorithms work with it, but the side-effect thing makes it seem more like a pattern.

        3. 17

          “I’m just puzzled as to why he’s seemingly ignoring a lot of research in type theory”

          Isn’t that his thing? He’s made proud statements about his disinterest in theory. And it shows. His jubilation about transducers overlooked that they are just a less generic form of ad-hoc polymorphism, invented to abstract over operations on collections.

          1. 1

            wow, thanks for that, never really saw it that way but it totally makes sense. not a regular clojure user, but love lisp, and love the ML family of languages.

          2. 6

            “seemingly ignoring a lot of research in type theory”

            I’ve come to translate this utterance as “it’s not Haskell”. Are there languages that have been hurt by “ignoring type theory research”? Some (Go, for instance) have clearly benefited from it.

            1. 11

              I don’t think Rich is nearly as ignorant of Haskell’s type system as everyone seems to think. You can understand this stuff and not find it valuable, and it seems pretty clear to me that this is the case. He’s obviously a skilled programmer whose perspective warrants real consideration; people who are enamored with type systems shouldn’t be quick to write him off even if they disagree.

              I don’t like dynamic languages fwiw.

              1. 3

                I don’t think we can assume anything about what he knows. Even Haskellers here are always learning new things about Haskell’s type system and its uses. He spends most of his time in a LISP, so it’s safe to assume he knows more LISP benefits than Haskell benefits until we see otherwise in the examples he gives.

                Best thing to do is probably come up with a lot of examples to run by him at various times/places, and see what he says for/against them.

                1. 9

                  I guess I would want to hear what people think he’s ignorant of, because he clearly knows the basics of the type system: sum types, typeclasses, etc. The Clojure reducers docs mention requiring associative monoids. I would be extremely surprised if he didn’t know what monads were. I don’t know how far he has to go for people to believe he really doesn’t think it’s worthwhile. I heard Edward Kmett say he didn’t think dependent types were worth the overhead, that the power-to-weight ratio simply wasn’t there. I believe the same about Haskell as a whole. I don’t think it’s insane to believe that about most type systems, and I don’t think Hickey’s position stems from ignorance.

                  1. 2

                    Good examples supporting the idea that he knows the stuff. Now we just need more detail to further test the claims on each aspect of language design.

                    1. 1

                      From the discussions I see, it’s pretty clear to me that Rich has a better understanding of static typing and its trade-offs than most Haskell fans.

              2. 10

                I’d love to hear in a detailed fashion how Go has clearly benefited from “ignoring type theory research”.

                1. 5

                  Rust dropped GC by following that research. Several languages had race freedom with theirs. A few had contracts or type systems with similar benefits. Go’s developers ignored that to do a simpler, Oberon-2- and C-like language.

                  There were two reasons. The first, which dmpk2k already gave and Rob Pike has stated, is that it was designed so anyone from any background could pick it up easily right after Google hired them. Simplicity and consistency also make it easy for them to go to work immediately on codebases they’ve never seen. This fits both Google’s needs and those of companies that want developers to be replaceable cogs.

                  The other is that the three developers had to agree on every feature. One came from C. One liked stuff like Oberon-2. I don’t recall the other. Their consensus was never going to be an OCaml, Haskell, Rust, or Pony. It was something closer to what they liked and understood well.

                  If anything, I thought at the time they should’ve done something like Julia, with a mix of productivity features, high C/Python integration, a usable subset people stick to, and macros for just when needed. Much better. I think a Noogler could probably handle a slightly-more-advanced language than Go. That team wanted otherwise…

                  1. 2

                    I have a hard time with a number of these statements:

                    “Rust dropped GC by following that research”? Did C++ also follow research to “drop GC”? What about C? I’ve seen plenty of type-system conversation related to Rust, but nothing that I would attribute directly to “dropping GC”. That seems like a bit of a simplification.

                    Is there documentation that Go developers ignored type research? Has the Go team stated that? Or that they never cared? I’ve seen Rob Pike talk about wanting to appeal to C and C++ programmers, but nothing about ignoring type research. I’d be interested in hearing about that being done and what they thought the benefits were.

                    It sounds like you are saying that the benefit is something familiar and approachable. Is that a benefit to the users of a language or to the language itself? Actually, I guess the question is more: is the benefit that familiarity and approachability made Go accessible to a broad swath of programmers, and that allowed it to gain broad adoption?

                    If yes, is there anything other than anecdotes (which I would tend to believe) to support that assertion?

                    1. 9

                      “That seems like a bit of a simplification.”

                      It was. Topic is enormously complex. Gets worse when you consider I barely knew C++ before I lost my memory. I did learn about memory pools and reference counting from game developers who used C++. I know it keeps getting updated in ways that improve its safety. The folks that understand C++ and Rust keep arguing about how safe C++ is with hardly any argument over Rust since its safety model is baked thoroughly into the language rather than an option in a sea of options. You could say I’m talking about Rust’s ability to be as safe as a GC in most of an apps code without runtime checks on memory accesses.

                      “Is there documentation that Go developers ignored type research? Has the Go team stated that? Or that they never cared?”

                      Like with the Rich Hickey replies, this burden of proof is backwards, asking us to prove a negative. If we’re assessing what people knew or did, we should assume nothing until we see evidence in their actions and/or informed opinions. Only then do we believe it. I start by comparing what I’ve read of Go to Common LISP, the ML’s, Haskell, Ada/SPARK, Racket/Ometa/Rascal on the metaprogramming side, Rust, Julia, Nim, and so on. Go has almost nothing in it compared to these. It looks like a mix of C, Wirth’s stuff, CSP-like concurrency from the 1970’s-1980’s, and maybe some other things. Not much past the 1980’s. I wasn’t the first to notice, either. The article gets the point across despite the problems its author apologized for.

                      Now, that’s the hypothesis from observing Go’s features vs other languages. Let’s test it on intent first. What was the goal? Rob Pike tells us here, with Moray Taylor having a nicer interpretation. The quote:

                      The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.

                      It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical.

                      So, they’re intentionally dumbing the language down as much as they can while making it practically useful. They’re doing this so smart people from many backgrounds can pick it up easily and go right to being productive for their new employer. It’s also gotta be C-like for the same reason.

                      Now, let’s look at its prior inspirations. In the FAQ, they tell you the ancestors: “Go is mostly in the C family (basic syntax), with significant input from the Pascal/Modula/Oberon family (declarations, packages), plus some ideas from languages inspired by Tony Hoare’s CSP, such as Newsqueak and Limbo (concurrency).” They then make an unsubstantiated claim, in that section at least, that it’s a new language across the board to make programming better and more fun. In reality, it seems really close to a C-like version of the Oberon-2 experience one developer (can’t recall which) wanted to recreate, with concurrency and tooling for aiding large projects. I covered the concurrency angle in another comment. You don’t see a lot of advanced or far-out stuff here: decades-old tech that’s behind current capabilities. LISP’ers, metaprogrammers, and REBOL’s might say behind old tech, too. ;)

                      Now, let’s look at execution of these C, Wirth-like, and specific concurrency ideas into practice. I actually can’t find this part. I did stumble upon its in-depth history of design decisions. The thing I’m missing, if it was correct, is a reference to the claim that the three developers had to agree on each feature. If that’s true, it automatically would hold the language back from advanced stuff.

                      In summary, we have a language designed by people who mostly didn’t use cutting-edge work in type systems, that employed nothing of the sort, that looks like languages from the 1970’s-1980’s and counts them as ancestors, that is admittedly dumbed down as much as possible so anyone from any background can use it, and that maybe involved consensus among people who didn’t use cutting-edge stuff (or even much cutting-edge work from the 90’s onward). They actually appear to be detractors of a lot of that stuff, if we take the languages they pushed as reflecting their views on what people should use. Meanwhile, the languages I mentioned above used work from the 1990’s-2000’s, giving them capabilities Go doesn’t have. I think the evidence weighs strongly in favor of that being because the designers didn’t look at it, were opposed to it for technical and/or industrial reasons, couldn’t reach a consensus, or some combo.

                      That’s what I think of Go’s history for now. People more knowledgeable, feel free to throw out any resources I might be missing. It just looks to be a highly practical, learn/use-quickly, C/Oberon-like language made to improve onboarding and productivity of random developers coming into big companies like Google. Rob Pike even says that was the goal. Seems open and shut to me. I thank the developers of languages like Julia and Nim for believing we were smart enough to learn a more modern language, even if we have to subset them for inexperienced people.

                  2. 4

                    It’s easy for non-LtU programmers to pick up, which happens to be the vast majority.

                    1. 3

                      Sorry, that isn’t detailed. Is there evidence that it’s easy for these programmers to pick up? What does “easy to pick up” mean? To get something to compile? To create error-free programs? “Clearly benefited” is a really loaded term that can mean pretty much anything to anyone. I’m looking for what the stated benefits are for Go. Is the benefit to Go that it is “approachable” and “familiar”?

                      There seems to be an idea in your statement that using any sort of type theory research will inherently make something hard to pick up. I have a hard time accepting that. I would, without evidence, be willing to accept that many type-system ideas (like a number of them in Pony) are hard to pick up, but the idea that you have to ignore type theory research to be easy to pick up is hard for me to accept.

                      Couldn’t I create a language that ignores type system theory but uses an unfamiliar syntax, and end up with something that is not easy to pick up?

                      1. 5

                        I already gave you the quote from Pike saying it was specifically designed for this. As far as the how, I think one of its designers explains it well in those slides. The Guiding Principles section puts simplicity above everything else. Next, a slide says Pascal was a minimalist language designed for teaching non-programmers to code. Oberon was similarly simple. Oberon-2 added methods on records (think simpler OOP). The designer shows Oberon-2 and Go code, saying Go is C’s syntax with Oberon-2’s structure. I’ll add benefits like automatic memory management.

                        Then, the design link said they chose CSP because (a) they understood it well enough to implement it and (b) it was the easiest thing to implement throughout the language. Like Go itself, it was the simplest option rather than the best along many attributes. Lots of people picked up SCOOP (super-easy but with overhead), and probably even more picked up Rust’s method grounded in affine types. Pony is itself doing clever stuff using advances in language design. Go would ignore those since (a) its designers didn’t know them well from way back when and (b) they would’ve been more work than their intent/budget could take.

                        They’re at least consistent about simplicity for easy implementation and learning. I’ll give them that.

                    2. 3

                      It seems to me that Go was clearly designed to have a well-known, well-understood set of primitives, and that design angle translated into not incorporating anything fundamentally new or adventurous (unlike Pony and its impressive use of object capabilities). It looked already old at birth, but it feels impressively smooth, in the beginning at least.

                      1. 3

                        I find it hard to believe that CSP and goroutines were a “well-understood set of primitives”. Given the lack of usage of CSP as a mainstream concurrency mechanism, I think saying that Go incorporates nothing fundamentally new or adventurous is selling it short.

                        1. 5

                          CSP is one of the oldest ways people modeled concurrency. I think it was built on Hoare’s monitor concept from years before, which Per Brinch Hansen turned into Concurrent Pascal; he built the Solo OS with a mix of it and regular Pascal. It was also typical in high-assurance work to specify the main system in something like Z or VDM, with concurrency done in CSP and/or some temporal logic. Then SPIN became the dominant way to analyze CSP-like stuff automatically, with a lot of industrial use for a formal method. Lots of other tools and formalisms existed, though, under the banner of process algebras.

                          Outside of verification, the introductory text that taught me about high-performance, parallel computing mentioned CSP as one of basic models of parallel programming. I was experimenting with it in maybe 2000-2001 based on what those HPC/supercomputing texts taught me. It also tied into Agent-Oriented Programming I was looking into then given they were also concurrent, sequential processes distributed across machines and networks. A quick DuckDuckGo shows a summary article on Wikipedia mentions it, too.

                          There were so many courses teaching it and folks using it that experts in language design and/or concurrency should’ve noticed it long ago and tried to improve on it for their languages. Many did, some doing better: Eiffel SCOOP, ML variants like Concurrent ML, Chapel, Clay with Wittie’s extensions, Rust, and Pony are examples. Then you have Go doing something CSP-like (circa 1970’s) in the 2000’s and still getting race conditions and such. What did they learn? (shrugs) I don’t know…

                          1. 10


                            I’m going to take the 3 different threads of conversation we have going and try to pull them all together in this one reply. I want to thank you for the time you put into each answer. So much of what appears on Reddit, HN, and elsewhere is throwaway short stuff that often feels lazy, or like communication wasn’t really the goal. For a long time, I have appreciated your contributions to lobste.rs because there is a thoughtfulness to them and an attempt to convey information and thinking that is often absent in this medium. Your replies earlier today are no exception.

                            Language is funny.

                            You have a very different interpretation of the words “well-understood primitives” than I do. Perhaps it has something to do with anchoring when I was writing my response. I would rephrase my statement this way (and I would still be imprecise):

                            While CSP has been around for a long time, I don’t think that, prior to Go, it was a well-known or familiar concurrency model for most programmers. From that, I would say it wasn’t “well-understood”. But I’m reading quite a bit into what “well-understood” means here, based on context. I’m taking it to mean “widely understood by a large body of programmers”.

                            And I think your response, Nick, actually makes me believe that more. The languages you mention aren’t ones that I would consider familiar or mainstream to most programmers.

                            Language is fun like that. I could be anchoring myself again. I rarely ask questions on lobste.rs or comment. I decided to on this occasion because I was really curious about a number of things from an earlier statement:

                            “Go has clearly benefited from ‘ignoring type theory research’.”

                            Some things that came to mind when I read that and I wondered “what does this mean?”

                            “clearly benefited”

                            Hmmm, what does benefit mean? Especially in reference to a language. My reading of benefit is “doing X helped the language designers achieve one or more goals in a way that had acceptable tradeoffs”. However, it was far from clear to me that this is what people meant.

                            “ignoring type theory research”

                            ignoring is an interesting term. This could mean many things and I think it has profound implications for the statement. Does ignoring mean ignorance? Does it mean willfully not caring? Or does it mean considered but decided not to use?

                            I’m familiar with some of the Rob Pike and Go early history comments that you referenced in the other threads. In particular related to the goal of Go being designed for:

                            The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.

                            It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical.

                            I haven’t found anything, though, that shows there was a willful disregard of type theory. I wasn’t attempting to get you to prove a negative; I’m just curious. Has the Go team ever said something that would fall under the heading of “type system theory, bah, we don’t need it”? Perhaps they have. And if they have, is there anything that shows a benefit from that?

                            There’s so much loaded into those questions, though. So I’m going to make some statements, possibly open to being misconstrued, about what I’m hearing from your responses.

                            “Benefit” here means “helped make popular”, because Go, on its surface, presents a number of familiar concepts for the programmer to work with. No individual primitive feels novel or new to most programmers, except perhaps the concurrency model. And on first approach, even that model is fairly straightforward in what it asks the programmer to grasp. Given Go’s stated goals from the quote above, it allows programmers to feel productive and “build good software”.

                            Even as I’m writing that though, I start to take issue with a number of the assumptions that are built into the Pike quote. But that is fine. I think most of it comes down to for me what “good software” is and what “simple” is. And those are further loaded words that can radically change the meaning of a comment based on the reader.

                            So let me try again:

                            When people say “Go has clearly benefited from ‘ignoring type theory research’”, what they are saying is:

                            Go’s level of popularity is based, in part, on it providing a set of ideas that should be mostly familiar to programmers who have some experience with the Algol family of languages, such as C, C++, Python, Ruby, etc. We can refine that further: of the Algol family, we are really talking about languages whose type systems make few if any guarantees (like C’s). Go put this familiarity first as its primary goal, and because of that, it is popular.

                            Would you say that is a reasonable summation?

                            When I asked:

                            “Is there documentation that Go developers ignored type research? Has the Go team stated that? Or that they never cared?”

                            I wasn’t asking you to prove a negative. I was very curious whether any such statements existed; I’ve never seen any. I’ve drawn a number of conclusions about Go based mostly on the Rob Pike quote you provided earlier. I was really asking “has everyone else as well?”, or do they know things that I don’t know.

                            It sounds like we are both mostly operating on the same set of information. That’s fine. We can draw conclusions from that. But I at least feel good in now saying that both of us are inferring things from what appears to be a mostly shared set of knowledge, and not that I am ignorant of statements made by Go team members.

                            I wasn’t looking for proof. I was looking for information that might help clear up my ignorance in the area.

                            1. 2

                              I appreciate that you saw I was trying to put effort into it being productive and civil. Those posts took a while. I appreciate your introspective and kind reply, too. Now, let’s see where we’re at with this.

                              Yeah, it looks like we were using words with different meanings. I was focused on “well understood” by the PLT types who design languages and the folks studying parallelism. Rob Pike, at the least, should be in both categories, following that research. Most programmers don’t know about it. You’re right that Go could’ve been the first time it went mainstream.

                              You also made a good point that it’s probably overstating it to say they never considered it. I have good evidence they avoided almost all of it where other designers didn’t. Yet they may have considered it (how much we don’t know), assessed it against their objectives, and decided against all of it. The simplest approach would be to just ask them in a non-confrontational way. Another possibility is to look at each designer’s work for any indication that they were considering or using such techniques elsewhere; if that’s absent, saying they didn’t consider it in their next work would be reasonable. Another angle would be to look, as with C’s developers, at whether they had a personal preference for simpler or barely-any typing, consistently avoiding developments in type systems. Since that’s lots of work, I’ll leave it at “unknown” for now.

                              Regarding its popularity, I’ll start by saying I agree its simple design reusing existing concepts was a huge element of that. It was Wirth’s philosophy to do the same thing for educating programmers, and Go adapted that philosophy to the modern situation. Smart move. I think you shouldn’t underestimate the fact that Google backed it, though.

                              There were a lot of interesting languages over the decades with all kinds of good tradeoffs. The ones with major, corporate backing and/or on top of advantageous foundations/ecosystems (eg hardware or OS’s) usually became big in a lasting way. That included COBOL on mainframes, C on cheap hardware spreading with UNIX, Java getting there almost entirely through marketing given its technical failures, .NET/C# forced by Microsoft on its huge ecosystem, Apple pushing Swift, and some smaller ones. Notice the language design is all across the board here in complexity, often more complex than existing languages. The ecosystem drivers, esp marketing or dominant companies, are the consistent thread driving at least these languages’ mass adoption.

                              Now, mighty Google claims it’s backing a new language for its massive ecosystem. It’s also designed by celebrity researchers/programmers, including one whom many in the C community respect. It might also be a factor in whether developers get a six-digit job. These are two major pulls plus a minor one, each of which in isolation can draw in developers. Two of them, esp. employment, will automatically create a large number of users if people think Google is serious. Both also have ripple effects, where other companies copy what the big company is doing so as not to get left behind. That makes the pull larger.

                              So, as I think about your question, I have that in the back of my mind. I mean, those effects pull so hard that Google’s language could be a total piece of garbage and still have 50,000-100,000 developers just going for a gold rush. Simplifying the design to make it super-easy to learn, and to make existing code easy to maintain, just turbocharges that effect. Yes, I think the design and its designers could lead to a significant community without Google. I’m just leaning toward it being a major employer with celebrity designers and fanfare causing most of it.

                              And then those other languages started getting uptake despite advanced features or learning troubles (esp Rust). That shows the Go team could’ve done better on typing using such techniques if they had wanted to and/or known about them. As I said, that’s unknown. Go might be the best they could do given their background, constraints, goals, or whatever. It’s good that at least four different groups made languages that push programming further into the 90’s and 2000’s instead of just the 70’s to early 80’s. There are at least three creating languages closer to C that are generating a lot of excitement. C++ is also getting updates making it more like Ada. Non-mainstream languages like Ada/SPARK and Pony are still getting uptake, even if smaller.

                              If anything, the choice of systems-type languages is exploding right now with something for everyone. The decisions of Go’s language authors aren’t even worth worrying about since that time can be put into more appropriate tools. I’m still going to point out that Rob Pike quote to people to show they had very, very specific goals which made a language design that may or may not be ideal for a given task. It’s good for perspective. I don’t know the designers’ studies or their tradeoffs, and (given the alternatives) they barely matter past personal curiosity and PLT history. That also means I’ll remain too willfully ignorant about it to clear up anyone’s ignorance. At least till I see some submissions with them talking about it. :)

                              1. 2

                                Thanks for the time you put into this @nickpsecurity.

                                1. 1

                                  Sure thing. I appreciate you patiently putting time into helping me be more accurate and fair describing Go designers’ work.

                                  1. 2

                                    And thank you, I have a different perspective on Go now than I did before. Or rather, I have a better understanding of other perspectives.

                  3. 6

                    I don’t see anything of substance in this comment other than “Haskell has a great type system”.

                    I just watched the talk. Rich took a lot of time to explain his thoughts carefully, and I’m convinced by many of his points. I’m not convinced by anything in this comment because there’s barely anything there. What are you referring to specifically?

                    edit: See my perspective here: https://lobste.rs/s/zdvg9y/maybe_not_rich_hickey#c_povjwe

                    1. 3

                      That wasn’t my point at all. I agree with what Rich says about Maybes in this talk, but it’s obvious from his bad Haskell examples that he hasn’t spent enough time with the language to justify criticizing its type system so harshly.

                      Also, what he said about representing the idea of a car with information that might or might not be there in different parts of a program might be correct for Haskell’s type system, but in languages with structural subtyping (like TypeScript) or row polymorphism (like Ur/Web) you can easily have a function that takes a car record which may be missing some fields, fills some of them in, and returns a record with a few more fields than the input, like Rich described at some point in the talk.
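                      To make the Haskell side of that comparison concrete, here’s a minimal sketch (record and field names are made up) of why this is awkward without structural subtyping or row polymorphism: the partially-known car and the fully-known car are two distinct nominal types, and moving between them is an explicit conversion.

```haskell
-- Hypothetical "car" records. Without structural subtyping, the
-- partially-known car and the fully-known car are two distinct
-- nominal types, each with its own field names.
data PartialCar = PartialCar
  { pMake  :: String
  , pModel :: Maybe String  -- may not be known yet
  } deriving Show

data Car = Car
  { cMake  :: String
  , cModel :: String        -- definitely known now
  } deriving Show

-- Filling in the missing field is an explicit conversion between
-- types; with row polymorphism the record would simply gain a field.
fillModel :: String -> PartialCar -> Car
fillModel fallback p = Car
  { cMake  = pMake p
  , cModel = maybe fallback id (pModel p)
  }

main :: IO ()
main = print (fillModel "Corolla" (PartialCar "Toyota" Nothing))
```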

                      I’m interested to see where he’s gonna end up with this, but I don’t think he’s doing himself any favors by ignoring existing research in the same fields he’s thinking about.

                      1. 5

                        But if you say that you need to go to TypeScript to express something, that doesn’t help me as a Haskell user. I don’t start writing a program in a language with one type system and then switch into a language with a different one.

                        Anyway, my point is not to have a debate on types. My point is that I would rather read or watch an opinion backed up by real-world experience.

                        I don’t like the phrase “ignoring existing research”. It sounds too much like “somebody told me this type system was good and I’m repeating it”. Just because someone published a paper on it, doesn’t mean it’s good. Plenty of researchers disagree on types, and admit that there are open problems.

                        There was just one here the other day!


                        I’ve found that the applicability of types is quite domain-specific. Rich Hickey is very clear about what domains he’s talking about. If someone makes general statements about type systems without qualifying what they’re talking about, then I won’t take them very seriously.

                    2. 4

                      I don’t have a good understanding of type systems. What is it that Rich misses about Haskell’s Maybe? Does changing the return type of a function from Maybe T to T not mean that you have to change code which uses the return value of that function?

                      1. 23

                        Does changing the return type of a function from Maybe T to T not mean that you have to change code which uses the return value of that function?

                        It does in a way, but I think people sometimes overestimate the amount of changes that are required. It depends on whether or not you really care about the returned value. Let’s look at a couple of examples:

                        First, let’s say that we have a function that gets the first element out of a list, so we start out with something like:

                        getFirstElem :: [a] -> a
                        getFirstElem = head

                        Now, we’ll write a couple of functions that make use of this function. Afterwards, I’ll change my getFirstElem function to return a Maybe a so you can see when, why, and how these specific functions need to change.

                        First, let’s imagine that I have some list of lists, and I’d like to just return a single list that has the first element; for example I might have something like ["foo","bar","baz"] and I want to get back "fbb". I can do this by calling map over my list of lists with my getFirstElem function:

                        getFirsts :: [[a]] -> [a]
                        getFirsts = map getFirstElem

                        Next, say we wanted to get an idea of how many elements we were removing from our list of lists. For example, in our case of ["foo","bar","baz"] -> "fbb", we’re going from a total of 9 elements down to 3, so we’ve eliminated 6 elements. We can write a function to help us figure out how many elements we’ve dropped pretty easily by looking at the sum of the lengths of the lists in the input lists, and the overall length of the output list.

                        countDropped :: [[a]] -> [b] -> Int
                        countDropped a b =
                          let a' = sum $ map length a
                              b' = length b
                          in a' - b'

                        Finally, we probably want to print out our string, so we’ll use print:

                        printFirsts =
                          let l = ["foo","bar","baz"]
                              r = getFirsts l
                              d = countDropped l r
                          in print l >> print r >> print d

                        Later, we decide that we want to change our program to look at ["foo","","bar","","baz"], and we see that our program crashes! Oh no! The problem is that head doesn’t work on an empty list, so we’d better go and update it. We’ll have it return a Maybe a so that we can capture the case where we actually got an empty list.

                        getFirstElem :: [a] -> Maybe a
                        getFirstElem = listToMaybe  -- listToMaybe comes from Data.Maybe

                        Now we’ve changed our program so that the type system will explicitly tell us whether we tried to take the head of an empty list or not, and it won’t crash if we pass one in. So what refactoring do we have to do to our program?

                        Let’s walk back through our functions one-by-one. Our getFirsts function had the type [[a]] -> [a] and we’ll need to change that to [[a]] -> [Maybe a] now. What about the code?

                        If we look at the type of map we’ll see that it has the type map :: (c -> d) -> [c] -> [d]. Both the old getFirstElem ([a] -> a) and the new one ([a] -> Maybe a) fit the shape c -> d: in both cases c ~ [a]; in the first case d ~ a, and in the second d ~ Maybe a. In short, we had to fix our type signature, but nothing in our code has to change at all.

                        What about countDropped? Even though our types changed, we don’t have to change anything in countDropped at all! Why? Because countDropped never looks at any values inside the lists: it only cares about the structure of the lists (in this case, how many elements they have).

                        Finally, we’ll need to update printFirsts. The type signature here doesn’t need to change, but we might want to change the way that we’re printing out our values. Technically we can print a Maybe value, but we’d end up with something like [Just 'f',Nothing,Just 'b',Nothing,Just 'b'], which isn’t particularly readable. Let’s update it to replace Nothing values with spaces:

                        printFirsts :: IO ()
                        printFirsts =
                          let l = ["foo","","bar","","baz"]
                              r = map (fromMaybe ' ') $ getFirsts l  -- fromMaybe comes from Data.Maybe
                              d = countDropped l r
                          in print l >> print r >> print d

                        In short, from this example, you can see that we can refactor our code to change the type, and in most cases the only code that needs to change is code that cares about the value that we’ve changed. In an untyped language you’d expect to still have to change the code that cares about the values you’re passing around, so the only additional change that we had to make here was a very small update to the type signature (but not the implementation) of one function. In fact, if I’d let the type be inferred (or written a much more general function) I wouldn’t have had to do even that.

                        There’s an impression that the types in Haskell require you to do a lot of extra work when refactoring, but in practice the changes you are making aren’t materially greater or different from the ones you’d make in an untyped language; it’s just that the compiler will tell you about the changes you need to make, so you don’t need to find them through unit tests or program crashes.

                        1. 3

                          countDropped should be changed. What to change it to will depend on your specification, but as a simple inspection, countDropped ["", "", "", ""] [Nothing, Nothing, Nothing, Nothing] will return -4, which isn’t likely to be what you want.

                          1. 2

                            That’s correct in a manner of speaking, since we’re essentially computing the difference between the number of characters in all of the substrings minus the length of the printed result. Since [""] = [[]] but is printed as " ", we print one extra character (the space) compared to the total length of the string, so a negative “dropped” value is sensible.

                            Of course the entire thing was a completely contrived example I came up with while I was sitting at work trying to get through my morning coffee, and really only served to show “sometimes we don’t need to change the types at all”, so I’m not terribly worried about the semantics of the specification. You’re welcome to propose any other more sensible alternative you’d like.

                            1. -3

                              That’s correct in a manner of speaking, since …

                              This is an impressive contortion, on par with corporate legalese, but your post-hoc justification is undermined by the fact that you didn’t know this was the behavior of your function until I pointed it out.

                              Of course the entire thing was a completely contrived example …

                              On this, we can agree. You created a function whose definition would still typecheck after the change, without addressing the changed behavior, nor refuting that in the general case, Maybe T is not a supertype of T.

                              You’re welcome to propose any other more sensible alternative you’d like.

                              Alternative to what, Maybe? The hour-long talk linked here is pretty good. Nullable types are more advantageous, too, like C#’s int?. The point is that if you have a function and call it as f(0) when the function requires its first argument, but later the requirement is “relaxed” so the argument becomes optional, all the places where you wrote f(0) will still work and behave in exactly the same way.

                              Getting back to the original question, which was (1) “what is it that Rich Hickey doesn’t understand about types?” and, (2) “does changing the return type from Maybe T to T cause calling code to break?”. The answer to (2) is yes. The answer to (1), given (2), is nothing.

                              1. 9

                                I was actually perfectly aware of the behavior, and I didn’t care because it was just a small toy example. I was just trying to show some examples of when and how you need to change code and/or type signatures, not write some amazing production quality code to drop some things from a list. No idea why you’re trying to be such an ass about it.

                                1. 3

                                  She did not address question (1) at all. You are reading her response to question (2) as implying something about (1) that makes your response needlessly adverse.

                            2. 1

                              This is a great example. To further reinforce your point, I feel like this is one place Haskell really shows its strength in these refactors. It’s often a pain to figure out what the correct types for parts of your program should be, but once you know this and make a change, the Haskell compiler becomes a real guiding light when working through a refactor.

                            3. 10

                              He explicitly makes the point that “strengthening a promise”, that is from “I might give you a T” to “I’ll definitely give you a T” shouldn’t necessarily be a breaking change, but is in the absence of union types.

                              1. 2

                                Half baked thought here that I’m just airing to ask for an opinion on:

                                Say as an alternative, the producer produces Either (forall a. a) T instead of Maybe T, and the consumer consumes Either x T. Then the producer’s author changes it to make a stronger promise by changing it to produce Either Void T instead.

                                I think this does what I would want? This change hasn’t broken the consumer because x would match either alternative. The producer has strengthened the promise it makes because now it promises not to produce a Left constructor.
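                                One way to make the half-baked idea concrete (using Void from Data.Void rather than forall a. a, which serves the same purpose here): the consumer, written against an arbitrary error type, keeps compiling after the producer strengthens its promise.

```haskell
import Data.Void (Void)

-- New, stronger producer: Void has no values, so the Left case
-- can never actually occur.
produce :: Either Void Int
produce = Right 42

-- Consumer written against an arbitrary error type x. It would have
-- compiled against the old producer (say, Either String Int) and it
-- still compiles against the strengthened one.
consume :: Either x Int -> Int
consume (Right n) = n
consume (Left _)  = 0  -- dead code once x is Void

main :: IO ()
main = print (consume produce)
```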

                                1. 4

                                  When the problem is “I can’t change my mind after I had insufficient forethought”, requiring additional forethought is not a solution.

                                  1. 2

                                    So we’d need a way to automatically rewrite Maybe t to Either (forall a. a) t everywhere - after the fact. ;)

                            4. 2

                              Likewise, I wonder what he thinks about Rust’s type system to ensure temporal safety without a GC. Is safe, no-GC operation in general or for performance-critical modules desirable for Clojure practitioners? Would they like a compile to native option that integrates that safe, optimized code with the rest of their app? And if not affine types, what’s his solution that doesn’t involve runtime checks that degrade performance?

                              1. 7

                                I’d argue that GC is a perfectly fine solution in vast majority of cases. The overhead from advanced GC systems like the one on the JVM is becoming incredibly small. So, the scenarios where you can’t afford GC are niche in my opinion. If you are in such a situation, then types do seem like a reasonable way to approach the problem.

                                1. 3

                                  I have worked professionally in Clojure but I have never had to make a performance critical application with it. The high performance code I have written has been in C and CUDA. I have been learning Rust in my spare time.

                                  I argue that Clojure and Rust both have thread-safe memory abstractions, but Clojure’s solution has more (theoretical) overhead. This is because while Rust uses ownership and affine types, Clojure uses immutable data structures.

                                  In particular, get/insert/remove for a Rust HashMap is O(1) amortized while Clojure’s corresponding hash-map’s complexity is O(log_32(n)) for those operations.

                                  I haven’t made careful benchmarks to see how this scaling difference plays out in the real world, however.
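                                  To make the persistence tradeoff concrete, here’s a small Haskell analogue using Data.Map (also a persistent tree, with the same O(log n) flavor as Clojure’s maps): “updating” returns a new version and leaves the old one intact, which is what makes lock-free sharing safe, at the cost of the mutable table’s amortized O(1) operations.

```haskell
import qualified Data.Map.Strict as Map

-- A persistent (immutable) map: inserting returns a NEW map and
-- leaves the original untouched, so other threads holding v1 never
-- observe the change.
v1 :: Map.Map Int String
v1 = Map.fromList [(1, "a"), (2, "b")]

v2 :: Map.Map Int String
v2 = Map.insert 3 "c" v1  -- a new version; v1 is unchanged

main :: IO ()
main = do
  print (Map.size v1)  -- the original still has 2 entries
  print (Map.size v2)  -- the new version has 3
```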

                                  1. 4

                                    Having used clojure’s various “thread safe memory abstractions” I would say that the overhead is actual, not theoretical.

                              2. 2

                                Disclaimer: I <3 types a lot, Purescript is lovely and whatnot

                                I dunno, I kinda disagree about this. Even in the research languages, people are opting for nominal ADTs. Typescript is the exception, not the rule.

                                His wants in this space almost require “everything is a dictionary/hashmap”, and I don’t think the research in type theory is tackling his complaints (the whole “place-oriented programming” stuff and positional argument difficulties ring extremely true). M…aybe row types, but row types are not easy to use compared to the transparent and simple Typescript model in my opinion.

                                Row types help to solve issues generated in the ADT universe, but you still have the nominal typing problem, which is his other thing.

                                His last keynote was very aggressive, and I think people wrote it off because it felt almost ignorant, but I think this keynote is extremely on point once he gets beyond the Maybe railing in the intro.

                              1. 22

                                I’d take Rich Hickey’s opinions on type systems with a grain of salt, a hefty lemon wedge, and about a pint of vodka.

                                a to a… List of a to list of a… It means nothing! It tells you nothing!

                                — Rich Hickey, Effective Programs.

                                I understand that he’s everyone’s hero, but to be a programmer of such stature and to fail to grok parametricity that badly is frankly embarrassing. If anyone else were to spout such nonsense, they’d be immediately dismissed as a charlatan.

                                Luckily, I do know some hardcore Clojurists who were also confused and disappointed by that part of his talk.

                                There is definitely a cult following around Hickey, as has been pointed out to me by some Clojurists. Should we worship him? Maybe Not.

                                1. 13

                                  Not a Hickey disciple, but… I don’t think he’s talking about parametricity at all—the specific form of the type signature is irrelevant to his point. He’s saying that any type signature doesn’t tell you what the function does; only the name tells you that. Saying that the signature tells you nothing is hyperbole, but he’s emphasizing that to humans, information in the name >> information in the type signature.

                                  1. 6

                                    He’s saying that any type signature doesn’t tell you what the function does

                                    I’m kind of a static-typing weenie, but to be fair to Hickey, this is more true in Clojure than it is in Haskell, if you imagine a hypothetical version of Clojure that is statically typed but which still doesn’t track effects using the type system. (This is, admittedly, kind of a strawman.)

                                    Take the type signature [a] -> [a]. In Haskell, there aren’t many possibilities for what this function could be; it could be id or reverse, and it could be something that selects/reorders elements of the list based purely on their indices, but it couldn’t be a random shuffle function, because that would have signature [a] -> IO [a] or similar. But in a language where you can have IO anywhere, [a] -> [a] could just as well be a shuffle function. So what I’m trying to say is, Haskell gives you more static guarantees than Clojure along multiple axes, not just in types ;-)

                                    That being said, even a powerful type system does not excuse you from using descriptive names for your identifiers!

                                    1. 12

                                      Sure, I’ll buy that, to a degree. Names are important.

                                      However, he uses a function name reverse as his example. Funnily enough, the function you use to reverse a list in Haskell is called… reverse.

                                      Furthermore, he goes on to say that modelling the real world with types doesn’t work in practice, and that you need tests instead. Since when are these two tools mutually exclusive? Does he not know that we also write tests in Haskell?

                                      he’s emphasizing that to humans, information in the name >> information in the type signature.

                                      I’d say this is certainly true of people who haven’t invested any time in learning what types mean and what constraints they enforce. To the people who have though, it provides a wealth of information, e.g., that the first type signature example that slipped out of Hickey’s mouth — a -> a — is an incredibly specific signature that can only have one possible implementation.

                                      1. 10

                                        I agree with you, but just want to be sure we’re arguing about what he’s actually saying. The a -> a is pretty clearly just him misspeaking when reading [a] -> [a] off the slide. You’re confirming his point – you know that reverse reverses the list because of the name, not the type signature. The type signature of a function called reverse couldn’t be anything else.

                                        He doesn’t see sufficient value in the additional information from the type signature to compensate for what he sees as the overhead of keeping types fully conformant. So much so that he hyperbolically claims the additional information is “nothing”.

                                        1. 7

                                          But the type [a] -> [a] tells us something very important about the function. It encodes that this function doesn’t care about the type of the elements, and that no ordering properties of those elements matter. We don’t know if the list will be truncated, extended, reversed, or anything like that. But we do know that if it were extended, and there are no other arguments of type a floating around, those extra elements must come from the input list.

                                          We can further refine the type to encode the length of the list, something like List a n -> List a n. In this case, we know even more about the function. This process can be repeated a few times to eventually include nearly all properties we want the function to obey. Do I think that’s a substitute for tests, the name of the function, or even documentation for the function? Of course not. But I don’t see how keeping a type checker happy is any more onerous than passing unit tests, doc tests, style guidelines, etc.
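                                          The “extra elements must come from the input list” point is a consequence of parametricity: every f :: [a] -> [a] satisfies the free theorem f . map g == map g . f. We can’t prove the theorem in code, but we can spot-check instances of it (a toy sketch, not a proof):

```haskell
-- Parametricity in action: a function of type [a] -> [a] can only
-- rearrange, drop, or duplicate elements, so applying it must
-- commute with mapping over the elements.
commutes :: Eq b => ([b] -> [b]) -> (Int -> b) -> [Int] -> Bool
commutes f g xs = f (map g xs) == map g (f xs)

main :: IO ()
main = do
  print (commutes reverse show [1, 2, 3])
  print (commutes (take 2) (* 10) [1, 2, 3, 4])
```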

                                          1. 3

                                            Sure, and I think he chose a bad example to make his point (more hyperbole), because [a] -> [a] really doesn’t have that many choices for useful things it could do. (I just checked Hoogle and there’s cycle, init, tail, and reverse. I guess there could also be shuffle.)

                                            If the example was something like GeneralLedger -> String -> CurrencyAmount -> [Transaction] his point would be better served.

                                            But a few minutes later he presents his example of adding a new value to a union type and not wanting to waste time updating all the case statements for that type, because only a few functions will ever see the new value in real operation. I think that brings out a difference in implementation philosophy that is underlying the whole argument. In a strongly-typed system you want to prove to the compiler exhaustively that no error case exists. In a weakly-typed system you only have to prove that to yourself, and if you “know” a code path will never receive a certain combination of data, you don’t have to do any work to support it. The strong-typing advocate would say the only reason you’re saving work is that you designed the types wrong in the first place and need to fix them. The weak-typing advocate would ask why they even have to do that much work when the code is doing fine, as verified by their tests.
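                                            A small sketch of that tradeoff in Haskell terms (hypothetical type names): with exhaustiveness warnings on, the compiler flags every case expression that doesn’t yet handle a newly added alternative, and the “weak-typing” position amounts to reaching for an explicit catch-all instead.

```haskell
{-# OPTIONS_GHC -Wincomplete-patterns #-}

-- A small sum type; imagine Suspended being added later.
data Status = Active | Inactive | Suspended
  deriving (Show, Eq)

-- With -Wincomplete-patterns (or -Werror), the compiler points at
-- every pattern match that doesn't handle Suspended. The escape
-- hatch is a catch-all, which silences the check -- and the help:
describe :: Status -> String
describe Active   = "active"
describe Inactive = "inactive"
describe _        = "unknown"

main :: IO ()
main = mapM_ (putStrLn . describe) [Active, Inactive, Suspended]
```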

                                            1. 3

                                              But a few minutes later he presents his example of adding a new value to a union type and not wanting to waste time updating all the case statements for that type, because only a few functions will ever see the new value in real operation.

                                              When he made this point, it reminded me of this great point that Kris Jenkins made in his recent talk:

                                              When you have a type error, and you always get a type error, the question is whether you get it from QA or your users or your compiler.

                                              I think rebeccaskinner also made a good point earlier in this thread that the cost of making these changes is massively overstated, and I think this is people’s instinct after working with technology that doesn’t make change easy. If you want to make changes across a system in a dynamic language, you need to rely on human discipline to have written all the tests. Writing these tests manually is never going to be as quick as a compiler writing them for you. With a good compiler [and I know you know this — I’m just thinking out loud in a friendly way :) ], these kinds of changes go from being nearly impossible to just tedious.

                                              1. 3

                                                For sure — the counterargument in the talk is that the strongly-typed compiler makes me do work (specifically, adding a case for the new union type alternative to all the places that are never going to receive that alternative) that I would never have done or tested, because I “know” it will never happen. (Note how I keep putting “know” in quotes. :) )

                                                It seems like the real philosophical difference is more about whether you want to be forced to write a program that provably covers every situation, or you want the freedom to fail to cover some situations you “know” aren’t relevant to actual usage. Kind of similar to error returns vs. exceptions. Or heap vs. static string buffers. And in all those cases the intuition about the pro/con tradeoff is different.

                                                1. 1

                                                  I’d argue types only force you to cover the cases you are choosing to enforce. While actively discouraged, nothing is stopping you from converting all your values to something like Strings and manipulating them that way. Even in Haskell, nothing is stopping you from dropping in a catch-all _ -> error "FIXME". Anecdotally, any time I’ve been too lazy to handle a case, it has always bitten me down the line.

                                                  1. 2

                                                    While this is true, imagine how much more failure you would encounter when applying the same human laziness to a technology which enforces far fewer constraints.

                                              2. 1

                                                I guess there could also be shuffle.

                                                Think about this for a moment. How would this work? Given referential transparency, what would happen on subsequent calls to this function?

                                                For it to be shuffle, it would need a random seed. You could either pass that in, which would make it something like f :: Seed -> [a] -> [a], or you would have to do it in IO, which would make it f :: [a] -> IO [a].
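                                                  A toy sketch of the first option, threading a seed explicitly (using a hand-rolled LCG rather than the random package, just to keep it self-contained):

```haskell
-- A pure shuffle has to take its randomness as input, matching the
-- f :: Seed -> [a] -> [a] shape. (The impure version would instead
-- be shuffleIO :: [a] -> IO [a].)
type Seed = Int

-- Toy linear congruential generator; not cryptographic, just a
-- deterministic stream of pseudo-random seeds.
nextSeed :: Seed -> Seed
nextSeed s = (s * 1103515245 + 12345) `mod` 2147483648

-- Repeatedly pick an element by pseudo-random index and remove it,
-- so the result is always a permutation of the input.
shuffle :: Seed -> [a] -> [a]
shuffle _ [] = []
shuffle s xs =
  let i = nextSeed s `mod` length xs
      (before, x : after) = splitAt i xs
  in x : shuffle (nextSeed s) (before ++ after)

main :: IO ()
main = print (shuffle 42 [1 .. 5 :: Int])
```

Same seed, same “shuffle”, which is exactly what referential transparency demands.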

                                                1. 1

                                                  Yeah, something was bugging me about that, which is why I weaseled with “I guess”. :) On the other hand, I think Hickey is arguing against even less-pure systems than Haskell. [a] -> [a] with invisible effects allowed could be a lot of things!

                                          2. 12

                                            Since when are these two tools mutually exclusive? Does he not know that we also write tests in Haskell?

                                            People getting into types vs tests arguments should always keep in mind that the Haskell community invented property based testing!
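                                            For anyone who hasn’t seen it, the shape of property-based testing is easy to sketch by hand. QuickCheck automates the input generation (and shrinks counterexamples), but the core idea is just “check a law on lots of generated inputs”; here is a hand-rolled miniature with a toy generator, not QuickCheck’s actual API:

```haskell
-- A property: a law the code should obey for ALL inputs, not just
-- the handful a unit test would enumerate.
prop_reverseInvolutive :: [Int] -> Bool
prop_reverseInvolutive xs = reverse (reverse xs) == xs

-- Toy "generator": deterministic pseudo-random lists from an LCG.
genLists :: Int -> [[Int]]
genLists n =
  [ take (s `mod` 7) (iterate step s) | s <- take n (iterate step 1) ]
  where
    step s = (s * 1103515245 + 12345) `mod` 1000

-- Run the property over many inputs; report the first counterexample.
checkProp :: ([Int] -> Bool) -> Int -> Either [Int] ()
checkProp p n = case filter (not . p) (genLists n) of
  (cex : _) -> Left cex
  []        -> Right ()

main :: IO ()
main = print (checkProp prop_reverseInvolutive 100)
```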

                                            1. 2

                                              They made it popular. It was called specification- or model-based test generation before that label existed. It was sporadic, though. People doing contracts call it contract-based. This problem is why you see me using 2-4 terms when I say it here.

                                              1. 1

                                                To my understanding, model-based testing differs from property-based testing in roughly the same way end-to-end testing differs from unit testing.

                                                I’m also unfamiliar with contract testing used in that way. Do you have any good links?

                                                1. 2

                                                  The models are often for a subset of the functionality or attributes. Gets them closer to unit testing. Comparisons depend on model coverage. Another thing is Model-Driven Development hit buzzword status covering everything from UML tools to Alloy extensions. That expanded what people called model-driven/based… anything.

                                                  Far as contracts, pretty similar to how people are doing property-based testing. Just much broader since contracts got picked up by many crowds. Just typing any of this into DuckDuckGo and Google will get you examples. Try this for example. Another thing I do is add current year in quotes, do a few pages, subtract one, repeat,…, repeat. Works for many types of CompSci once you know subfield’s buzzwords.

                                            2. 5

                                              [a] -> [a] could be identity, could be reverse, could be tail, could be “every other element”, could be “list where it’s the first element repeated over and over again”

                                              It does reveal a bit, in that you know that whatever happens in there depends only on list operations and not on the elements, but his argument (which is pretty true in “enterprise-y software”) is that in practice you have such a large set of possible functions inhabiting certain types that it doesn’t bring that much info to the table.

                                              For example, if I give you a signature BlogPost -> String -> BlogPost, you might be able to say “well this probably isn’t doing any IO”, but it could be about setting a title, it could be about swapping the content, if the object contains a list of comments it could be appending to that…. Lots of “real world”[0] domain objects are huge and have a lot of data built in. This also explains his complaints about place-oriented programming (your object has 100 fields, do ADTs provide value at that level? You end up wanting for nicer tools to work on that)

I think he has a thesis he is convinced about, but the fact that he’s also spending so much time on spec shows that he gets that there’s… something there. The bashing gets a bit tired, though.

[0]: “Real world” like “software built to run some internal business operations”. He used to work on something about scheduling radio broadcasts, where messy constraints came in all the time.

                                              1. 4

                                                [a] -> [a] could be identity, could be reverse, could be tail, could be “every other element”, could be “list where it’s the first element repeated over and over again”

Technically, [a] -> [a] could have infinitely many implementations. I also believe that, technically, a function with this signature could have the side effect of crashing your program through infinite recursion.
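To make that concrete, here’s a small sketch (the function names are mine) of a few of the infinitely many inhabitants of `[a] -> [a]`, including one that typechecks but never terminates:

```haskell
-- A few of the infinitely many functions that inhabit [a] -> [a].
-- All of these typecheck; the signature alone can't distinguish them.
ident, rev, tl, everyOther, firstForever, loop :: [a] -> [a]
ident xs = xs                                  -- identity
rev = reverse                                  -- reverse
tl [] = []                                     -- tail (total version)
tl (_:xs) = xs
everyOther (x:_:rest) = x : everyOther rest    -- every other element
everyOther xs = xs
firstForever [] = []                           -- first element, forever
firstForever (x:_) = repeat x
loop xs = loop xs                              -- typechecks, never returns

main :: IO ()
main = do
  print (ident [1, 2, 3])                 -- [1,2,3]
  print (rev [1, 2, 3])                   -- [3,2,1]
  print (tl [1, 2, 3])                    -- [2,3]
  print (everyOther [1, 2, 3, 4, 5])      -- [1,3,5]
  print (take 3 (firstForever [7, 8, 9])) -- [7,7,7]
```

What the signature does rule out is any behavior that depends on what the elements actually are.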

                                                That said, I don’t think Rich Hickey compares the two approaches on fair terms. It isn’t fair to say “type systems don’t help because if you try hard enough you can break them. What you need to do instead is [lots of hand-waving here] make things simple!”

                                                There are definitely a few logical fallacies being made, and I’m struggling to keep up with them all.

                                                The example I outlined above: is that a Straw Man, or is it Special Pleading, or perhaps something else?

                                                When Rich Hickey dismisses the value of a powerful type system by saying “oh it doesn’t really work in practice”, is this the Anecdotal logical fallacy? Because anecdotally, this stuff works for me and for many people I’ve worked with, in practice. It could also perhaps be Ambiguity, or No True Scotsman.

                                                And the people defending his take on this? Appeal to Authority.

                                                I say this being totally aware that I may be committing the Fallacy Fallacy, but I’m yet to be convinced that an accumulation of design aids does not yield a net benefit.

                                          3. 5

                                            Well, it’s true. The type signature tells you nothing. This reminds me… when I checked out Dylan some time ago I was actually rather shocked that in the type signature for a function definition there’s a spot to name the return value.

                                            define method sum-squares (in :: <list>) => (sum-of-element-squares :: <integer>)

That name has no programmatic use. It can’t be referred to in the body of the function. It has no special semantics. It is simply documentation. But the fact that it’s there speaks volumes about what was important to the designers (people like David Moon, Lisp veteran and principal designer of the Common Lisp Object System).

Mocking “list a to list a” isn’t out of ignorance and has context. It’s a reference to a very famous, 30-year-old paper by Philip Wadler called Theorems for Free. This one paper inaugurated an entire sect of theoreticians proving properties of programs solely based on their type signatures, and led to people claiming that, yes, there is actually a fair amount a type signature can tell you, without regard even to what the arguments represent or the name of the function. If there’s a cult here, it’s this particular cult of type worship. It’s hair-shirt junk theory, useless for the working programmer, useless for people interested in making expressive programming languages.
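For context, the “free theorem” Wadler derives for this signature says that any f :: [a] -> [a] commutes with map: it can rearrange, drop, or duplicate elements, but never inspect them. A quick check of one instance (everyOther is my own example inhabitant, toUpper an arbitrary choice of g):

```haskell
import Data.Char (toUpper)

-- Wadler's free theorem for any f :: [a] -> [a]:
--   f . map g == map g . f
everyOther :: [a] -> [a]   -- an arbitrary inhabitant; name is mine
everyOther (x:_:rest) = x : everyOther rest
everyOther xs = xs

main :: IO ()
main = do
  -- the theorem holds for every g; toUpper is just one instance
  print (everyOther (map toUpper "abcdef") == map toUpper (everyOther "abcdef"))  -- True
```

Whether that property is useful to a working programmer is, of course, exactly the point being argued here.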

                                            1. 3

                                              That name has no programmatic use. It can’t be referred to in the body of the function. It has no special semantics. It is simply documentation.

                                              I find this surprising and interesting! There are some languages that let you name the output parameter for use in verification, such as to say “the returned z is going to be between inputs x and y.” This is the first I’m hearing of using it purely for documentation.

                                          1. 2

                                            I’m on Mastodon https://mastodon.xyz/web/accounts/8213 but I’ve still had trouble finding likeminded people.

                                            1. 7

I think that’s a good insight that defensive programming makes sense most at I/O boundaries. I do think if you are in an untyped language it helps to create checks at all API boundaries to enforce assumptions.

                                              1. 14

As someone who does ML in C++, built a production, petabyte-level computer vision system, and has interviewed many ML people, the reason is pretty obvious: ML people can’t code. Seriously, many are math people who are academic and have very little experience building production systems. It’s not their fault; it’s not what they trained for or are interested in. These high-level APIs exist to address their needs.

                                                1. 5

                                                  I want to emphasize one thing that might make my previous comment more clear. The hard part of ML isn’t programming.

                                                  The hard part of ML is data collection, feature selection, and algorithm construction.

                                                  The only part where programming matters is building the training software and execution software. However most ML people care about the former, not the latter.

                                                  1. 2

                                                    ML has certainly been growing fast. I see this as mirroring what’s happening in CS in general nowadays, with ML simply being the foremost frontier, and a hype word to boot.

                                                    However, I would temper your statement with an aspect of what u/zxtx said. Even those who are capable of building a project in a low-level language, won’t always want to. It’s nice to be able to dodge the boilerplate while hacking on something new. And that goes for those who understand the low-level stuff, too. So I’m not too surprised that people aren’t using low-level languages for everyday ML development. (Libraries are another story, of course.)

                                                    Perhaps you know more about how to write ML projects in C++ without getting mired in boilerplate, though. Was this ever a problem for you? Or does low-level boilerplate generally not get in your way?

                                                    1. 1

                                                      Great question. I am not mired in boilerplate because C++ is a high level language. I use it because the transition from prototype to production is very smooth and natural.

I think ultimately it’s not fashionable to learn. I’m finding most younger programmers simply don’t have proficiency in it, meaning they haven’t developed the muscle memory, so it feels slower. The computer field is all about pop culture, which I believe is the actual answer to OP’s question now that I think about it. In other words, Python and R are fashionable and that’s why they are being used.

                                                      1. 4

                                                        I think ultimately it’s not fashionable to learn

It’s not just fashion: it’s an incredibly complicated language. It’s so complicated that Edison Design Group does the C++ front-ends for most commercial suppliers, just because those suppliers know they’d screw it up themselves. Some C++ alternatives are easier to learn or provide extra benefits for the extra effort.

On top of that, it had really slow compiles compared to almost any language I was using when I considered C++. That breaks a developer’s mental state of flow. To test the problem, I mocked up some of the same constructs in a language designed for fast compiles, and compilation sped way up. It was clear C++ had fundamental design weaknesses. The designers of the D language confirmed my intuition with design choices that let it compile fast despite having many features and being C-like in style.

                                                        1. 1

It’s true the compile times are slow, but it doesn’t kill flow because you don’t need to compile while you program, only when you want to run and test. I would argue any dev style where you quickly switch between running and coding slows you down and takes you out of flow anyway.

In regards to it being complicated, this is true. However, C++17 is much more beginner friendly. Even though 1980s C++ was arguably harder to learn than today’s, millions learned it anyway because of fashion. Don’t underestimate the power of fashion.

And lastly, D has its own design flaws, like introducing garbage collection. Why, in a language that has RAII, do you need or want garbage collection? Nobody writing modern C++ worries about leaking memory.

                                                          1. 1

                                                            “Don’t underestimate the power of fashion.”

You just said it’s out of fashion. So, it needs to be easier to learn and more advantageous than languages that are in fashion. I’m not sure that’s the case. The hardest comparison is against Rust, where I don’t know which will come out ahead for newcomers. I think reduced temporal errors is a big motivator to get through the complexity.

                                                            “And lastly , D has it’s own design flaws like introducing garbage collection.”

You can use D without garbage collection. Article here. The Wirth languages all let you do that, too, with a keyword indicating the module was unsafe. So, they defaulted to the safest option, with the developer turning it off when necessary. Ada made GCs optional, defaulting to none (my memory is fuzzy), since real-time with no dynamic allocation was the most common usage. There are implementations of reference counting for it, and a RAII-like thing called controlled types, per some Ada folks on a forum.

                                                            So, even for C++ alternatives with garbage collection, those targeting the system space don’t mandate it. Feel free to turn it off using other methods like unsafe, memory pools, ref counting, and so on.

                                                            1. 2

Sorry, I had a very hard time grokking your response. What I meant was that Python and R are used for ML not because of technical reasons, but because they’re fashionable. There is social capital behind those tools now. C++ was fashionable from the late 80s to the late 90s in programming (not ML). Back then, Lisp and friends were popular for ML!

                                                              Do you mind clarifying your response about fashion ?

In regards to D, I still think garbage collection, even though it’s optional, is a design flaw. It was such a flaw that if you turned it off, you could not use the standard library, so they were forced to write a new one.

C++ is such a well-designed language that you can do pretty much any kind of programming (generic, OOP, functional, structured, actor) with it, and it’s still being updated and improved without compromising backwards compatibility. Bjarne is amazing. By this point, most language designers go off and create a new language, but not Bjarne. I would argue that’s why he is one of the greatest language designers ever. He was able to create a language that has never stopped improving.

Now WebAssembly is even getting web developers interested in C++ again!

                                                              1. 2

I was agreeing it went out of fashion. I don’t know about young folks, seeing as it happened during the push by managers toward Java and C#. Those kept getting faster, too. Even stuff like Python replaced it for prototyping, sometimes production. Now there are C/C++ alternatives with compelling benefits, with at least one massively popular. The young crowd is all over this stuff for jobs and/or fun, depending on the language.

So, I just don’t see people going with it a lot in the future past the social inertia and optimized tooling that cause many to default to it. The language improvements recently have been great, though. I liked reading Bjarne’s papers, too, since the analyses and tradeoffs were really interesting. Hell, I even found a web application framework using it.

                                                            2. 1

                                                              I would argue any dev style where you quickly switch between running and coding slows you down and takes you out of flow anyway.

I have to disagree. REPL-driven development has only grown more and more useful over time. Now, when I say this, you may think of languages with high overhead such as Python and Clojure, and cringe. But nowadays you can get this affordance in a language that also leaves room for efficient compilation, such as Haskell. And don’t forget that even Python has tools for compiling to efficient code.

                                                              If you feel like having your mind opened on this matter, Bret Victor has done some very interesting work liberating coding from the “staring at a wall of text” paradigm. I think we’re all bettered by this type of work, but perhaps there’s something to be said for keeping the old, mature standbys in close proximity.

                                                              1. 2

Sorry, just to clarify: I LOVE REPL development! Bret Victor’s work is amazing. What I mean is anything that takes you out of the editor. For example, if you have to change windows to compile and run.

                                                                REPLs are completely part of the editor and live coding systems don’t take you out of the flow. But if you need to switch out of the editor and run by hand, then it takes you out of flow because it’s a context switch.

                                                                1. 2

                                                                  100% agreed. Fast compilation times can’t fix a crappy development cycle.

                                                                  1. 2

                                                                    I think the worst example of anti-flow programming is TDD. A REPL is infinitely better.

                                                                    1. 1

                                                                      Do the proponents of TDD knock REPLs? In my opinion, REPL-driven is just the next logical step in TDD’s progression.

                                                                      1. 2

                                                                        no, I’m knocking TDD ;-)

                                                          2. 3

                                                            How do you feel about platforms like https://onnx.ai/ which make it easy to write a model in a language like python, but have it deployed into a production system likely written in C++?

                                                            1. 2

I think they are great but don’t go far enough, because we are entering a new paradigm where we write programs that write programs. People need to go further and write a DSL, not just a wrapper in Python. I think a visual language where you connect high-level blocks into a computation graph would be wonderful. And then feed it data and have it learn the parameters.

                                                              1. 2

So intuitively a DSL is the correct approach, but as can be seen with systems like Tensorflow, it leads to these impedance mismatches with the host language. This mismatch slows people down and ultimately leads them to systems that just try to extend the host language, like PyTorch.

                                                                1. 1

I guess when I think of DSLs, I don’t think of host languages. I’m thinking more about languages that exist by themselves, specific to a domain. In other words, there isn’t a host language like in Tensorflow.

                                                                  1. 1

                                                                    Well for Tensorflow, I mean something like python as the host language.

                                                                    1. 1

wow, rereading my last sentence I can see how it had the opposite meaning than I intended. I meant I was thinking of DSLs without a host language, unlike Tensorflow.

                                                        2. 2

                                                          Would you mind shedding some light on what this petabyte level computer vision system is? I’m very curious!

                                                          1. 3

It was a project within HERE Maps to identify road features and signs to help automate map creation. Last time I was there, it processed dozens of petabytes of LiDAR and imagery data from over 30 countries. It’s been a couple of years, so I can’t tell you where it’s at today.

                                                        1. 5

The main reason is that machine learning until very recently mostly existed in a research context. In that context, development speed is everything. What matters is how quickly you can design and run experiments. With a good FFI to C, both R and Python hit a good-enough place for that. Historically, a lot of the numpy/scipy effort was about having an open-source Matlab, another dynamic language. Most of R’s growth is also fairly recent.

Although it can seem all the action is in Python now, many machine learning tools were written for Java (Weka, Mallet, etc.). I’d love to see a data science ecosystem in Haskell or OCaml or Rust.

                                                          1. 4

                                                            Funny you should mention Haskell, as that was exactly what I had in mind! I personally keep an eye on this doc to track whether Haskell’s ML picture has improved. It’s updated regularly, though some progress might not make it into the doc.

                                                            1. 5

                                                              Well there is also the https://www.datahaskell.org/ initiative

                                                          1. 2

I mean, questioning Apple design guidelines goes back decades: https://www.nngroup.com/articles/anti-mac-interface/

I really love this kind of work. For me the issue is that the UI innovations we see on the web and mobile never came back to the desktop. On the web, UI and application logic really have been decoupled for many services. On mobile, with the Android and iOS APIs, there is an amazing amount of interoperability.

But why hasn’t the desktop gotten this stuff? Why are applications there so siloed?

                                                            1. 1

Neither of those things is true.

                                                              Android and iOS apps are extremely siloed. On Windows or Linux, the end user can unconditionally access all persistent data through the file browser. On the new mobile platforms, everything is sandboxed, and the only way to get data between apps is with explicitly-added “intent” routing that’s keyed by mimetype to guarantee that it’ll never do anything that the developer didn’t intend.

Worse, you claim that there’s been no movement of ideas from mobile to desktop, completely ignoring both Windows 8 (which tried to merge them entirely) and the fact that the new versions (Windows 10, macOS Sierra) have adopted features like app stores, pervasive sandboxing, and a global notification mailbox.

                                                              1. 1

So I haven’t used Windows in decades, so I had no idea what was going on there. While you might view the intents as a restriction, I think they are a richer interaction medium than files. The sandboxing is a good feature and defines clear ways applications gather resources and communicate. This is much better than ad-hoc protocols between programs that are closed and proprietary.

                                                                1. 2

                                                                  I own an iPhone. I have pictures I want to get off the device. Ideally, I would hook the phone up to my Linux box, have it show up as a storage device and use cp to move the files to Linux (and then rm to remove them off the iPhone). But no, I can’t do that. I have to hook the iPhone up to the Mac, run Photos, import them into Photos [1], then export the “full” images [2].

                                                                  Yes, I am not a typical user. But Apple has made it hard to use the flow I’ve used for twenty years.

                                                                  I hate hate hate this siloed, intent based system Apple and Google are forcing upon us.

[1] I don’t even want to use Photos. I have my own system for storing photographs that I developed in the late 90s.

                                                                  [2] The keyboard shortcut used to export the full data. Now it helpfully strips EXIF data, thus I have to use the menu to explicitly select “full” of which there is no keyboard shortcut.

                                                                  1. 1

                                                                    That does suck. What I’m surprised about is why not use an application that exposes the API you want? On Android, I make use of Airdroid quite a bit.

                                                                    I know everyone thinks I’m wrongheaded in my top remark, but really all I was trying to say is that efforts in mobile have tried to move away from WIMP. I don’t think we have a great solution yet. Even programmer-friendly interfaces like Emacs have serious discoverability problems.

                                                                    1. 1

                                                                      The “application” I want to use is cp and rm, both of which are standard “applications” under Unix. I can script them so it happens pretty much automatically (by also using the mount application to make the “files” on the camera visible to Linux).

But nooooooo! There’s some new, media-specific USB protocol used to suck images down from smartphones (and tablets). My older digital cameras work the old way (showing up as a storage device, files and all).

                                                            1. 8

Filled out the survey. I spent a few months trying to get Haskell to work for me, but I found it a frustrating experience. I got the hang of functional programming fairly quickly but found the Haskell libraries very hard to work with. They very rarely give examples of how to do the basic stuff, and require you to read 10,000 words before you can understand how to use the thing. I wanted to do some ultra-basic XML parsing, which I do in Ruby with Nokogiri all the time, but with the Haskell libraries I looked at it was just impossible to quickly work out how to do anything. And whenever I ask a question to other Haskell devs, they just tell me it’s easy and to look at the types.

                                                              1. 3

There are often way too few examples, yeah :( And type sigs are definitely not the best way to learn. That said, once you get it up and running, parsing XML in Haskell is quite nice (we use xml-conduit for this at work).
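In the spirit of more examples: here’s a minimal sketch of pulling element text out with xml-conduit (the toy document and element names are made up, and error handling is skipped since parseText_ throws on malformed input):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Text.XML (parseText_, def)
import Text.XML.Cursor (fromDocument, element, content, ($//), (&/))

main :: IO ()
main = do
  -- a toy document, inlined for the example
  let doc = parseText_ def
        "<feed><entry><title>One</title></entry>\
        \<entry><title>Two</title></entry></feed>"
      cursor = fromDocument doc
      -- "for every descendant <title>, collect its text content"
      titles = cursor $// element "title" &/ content
  print titles  -- ["One","Two"]
```

The cursor axes take some getting used to, but they read a lot like Nokogiri’s CSS/XPath queries once they click.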

Someone actually took it upon themselves to write better docs for containers at https://haskell-containers.readthedocs.io/en/latest/ and shared their template for ReadTheDocs: https://github.com/m-renaud/haskell-rtd-template in case anyone else feels inspired :)

                                                                1. 3

I agree. The language is beautiful, but we need to put more work into making libraries easier to understand and use. What makes it even worse for newbies is that, as an experienced developer, I can recognize when a library is using a familiar pattern for configuration or state management, but a newcomer has to figure out the pattern itself at the same time.

                                                                  You shouldn’t have to piece together the types or, worse, read the code, to understand how a library works. I dislike the “I learned it this way, so you should too” attitude I often see. We can do better.

                                                                  1. 5

                                                                    I agree too. Hackage suffers from the same disease as npm: it’s a garbage heap that contains some buried gems. The packages with descriptive names are rarely the good ones. Abandoned academic experiments rub elbows with well engineered, production-ready modules. Contrast with Python’s standard library and major projects like Numpy: a little curation could go a long way.

                                                                  2. 3

I think the challenge is that unless the documentation includes an example, or even exists at all, it can be hard to know how to interact with many libraries. While reading the types is often the way you figure it out, I wish more libraries pointed me towards the main functions I should be working with.

                                                                    1. 2

                                                                      It’s a skill to look at the types, but it is how I do Haskell development. I’d love to teach better ways to exercise this skill.

                                                                      1. 6

I started to get the hang of it, but it really felt like the language was used entirely for academic purposes rather than actually getting things done, and every time I wanted to do something new, people would point me to a huge PDF for something simple that took me 3 minutes to work out in Ruby.

                                                                        1. 2

                                                                          I use Haskell everywhere for getting things done. Haskell allows a massive amount of code reuse and people write up massive documents (e.g. Monad tutorials) about the concepts behind that reuse.

                                                                          I use the types and ignore most other publications.

                                                                      2. 1

Ruby and Haskell are at opposite ends of the documentation spectrum.

Ruby libs usually have great guides but very poor API docs, so if you want to do something outside the examples in the guide, you have to look at the source. Methods are usually undocumented too, and it’s hard to figure out what’s available and where to look due to heavy use of include.

Haskell libs have descriptions of each function and type, and thanks to types you can be sure what a function takes and what it returns. Haddock renders source docs to nice-looking pages. However, there are usually no guides, getting-started docs, or high-level overviews (or the guides are in the form of academic papers).

                                                                        I wish to have best of both worlds in both languages.

When I started to learn Haskell, the first thing I wanted to do for my project was to parse XML too. I used hxt and that was really hard: it’s not a standard DOM library (though it probably has great stream-processing capabilities), and it’s based on arrows, which is not the easiest concept when you are writing your first Haskell code. At least hxt has decent design; I remember that the XML libs in Python’s standard library are not much easier to use. Nokogiri is probably the best XML lib ever if you don’t use gigabyte-sized XML files.

                                                                      1. 3

                                                                        A long, long time ago, I had a boss who would say “This Linux thing will never catch on because no one knows how to pronounce it. LIE-nux, LEE-nooks… DEEB-ian, deb-EE-an… it’s doomed.”

                                                                        He may have been right for the desktop… (-:

                                                                        1. 1

                                                                          I consistently say “Mac Oh-Es Ex” but apparently the consensus is “Mac Oh-Es Ten”?

                                                                          1. 2

                                                                            “Tencode Ecks Beta Six”

                                                                            1. 1

                                                                              I say it Os-Es-Ex, and no one has corrected me.

                                                                              1. 2

                                                                                If you walk around Apple HQ saying “oss ecks” (with the “oss” being the same sound as in “hoss” or “cross”), people get really mad at you. I’ve also been saying “eye oss” for so long I’ve forgotten it was initially a joke, and have probably weirded out a few coworkers after I switched jobs.

                                                                                1. 3

                                                                                  Thank goodness they dropped it and it is plain old macOS now ;-)

                                                                              2. 1

                                                                                Not just consensus, that’s how Apple employees pronounced it in keynotes.

                                                                                Now, I know that’s basically the “GIF argument”, but “OS Ten comes after OS Nine” actually makes sense, unlike “Jraphics Interchange Format” :)

                                                                                (and now it’s pronounced mac-O-S anyway)

                                                                            1. 2

                                                                              I got to see Stallman speak the other week in Illinois. His idealism about Free and Open Source is pretty hardcore. The man refuses to buy an Amtrak ticket because you can’t pay in cash and he doesn’t want to be tracked. He pays for his transit cards in cash instead of using a card at the machine.

                                                                              During the lecture he said he decided early on, if he couldn’t work and earn a living entirely using free and open source software, he would rather wait tables (emphasizing that waiting tables is a very respectable profession).

                                                                              Looking at his background in this article though, he grew up in a different world, got in early on some ground floors, and made some early strides that are difficult to achieve today. I recently lost my job and am very reluctant to go back on the market and go through the entire interview process again.

                                                                              I would love to work entirely in open source software, and have been working on getting back into a PhD program (I currently hold a masters). I work with a professor I knew from an old startup; he uses some of the software I wrote (BigSense.io) in his classes, and we’re currently working on some new tutorials, Docker containers, and lab manuals. He’s still struggling to get funding though, and even if any of it comes in, I’d be lucky to get a stipend that would cover my rent.

                                                                              The big question is, now, today, in 2018, how does one live entirely off FOSS development? I feel like, of the big developers for things like the Linux kernel, over 50–60% probably work in the OSS divisions of Red Hat, Microsoft, AMD, or Intel (I have several friends I graduated with in Intel’s division out in Hillsboro/Portland).

                                                                              What types of grants and fellowships should I be applying for if I want to be able to work on FOSS full-time?

                                                                              1. 4

                                                                                If you want to apply for a job writing entirely FOSS (or doing other work for a fully-FOSS organization), there are a number of job boards for that:

                                                                                You and @zxtx are right about some of the other options: if you start working on a big FOSS project that lots of companies use, and you’re good enough, one of those companies will eventually offer you a job out of the blue.

                                                                                Personally, I would recommend starting your own company that sells FOSS in some way. There are a number of great business models for this (and some not great ones, which are unfortunately used a lot) - I gave a talk about these at LibrePlanet this year:

                                                                                My own example is https://jmp.chat/ (and to a lesser extent https://ossguy.com/ss_usb/ ). I was fortunate enough to have a few months of runway to try it out, and it ended up working out. In general, you do need some runway for any of the options listed above that are not the job boards.

                                                                                1. 2

                                                                                  I think it depends on the kind of work you do. The popular approach is to maintain an open-source library important to a company, so that your paid work is effectively maintaining that library. Many developers use Patreon, and for some niches like scientific computing there are non-profits like NumFOCUS which fund open source projects. It’s not a straightforward path, but there are openings.

                                                                                1. 3

                                                                                  So to what extent is “probabilistic programming” a new/different programming paradigm, and to what extent is it something like a DSL for setting up various probabilistic algorithms?

                                                                                  Not to imply I’d dismiss it if it was the latter.

                                                                                  1. 2

                                                                                    People have approached it from both sides, so some systems have a flavor more like one or the other.

                                                                                    A very simplified story, which you could poke holes in but which I think covers some part of the history: the earliest systems (from where the name came) thought of themselves, I believe, as adding a few probabilistic operators to a “regular” programming language. So you mostly programmed “normally”, but wherever you lacked a principled way to make a decision, you could use these new probabilistic operators to leave some decisions to a system that would automagically fill them in. If you want to analyze the resulting behavior, though, it’s somewhat natural to view the entire program as a complicated model class, and the whole thing therefore as a way of specifying probabilistic models. In which case, if you want the system to have well-understood behavior, and especially if you want efficient inference, there’s a tendency to constrain the outer language more, ending up with something that really looks more like a DSL for specifying models. Lots of possible places to pick in that design space…
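                                                                                    A minimal Python sketch of that “probabilistic operators” view (the names `flip` and `route_request` are hypothetical, purely for illustration): one operator defers a decision to the system, and running the otherwise ordinary program many times traces out the distribution the whole program implicitly specifies.

```python
import random

def flip(p=0.5):
    """A hypothetical probabilistic operator: the programmer leaves
    this binary decision to the system rather than writing a rule."""
    return random.random() < p

def route_request():
    """An otherwise 'normal' program with one probabilistic choice:
    lacking a principled routing rule, send ~10% of traffic to the
    new backend."""
    if flip(0.1):
        return "new-backend"
    return "old-backend"

# Viewed from the outside, the whole program specifies a distribution
# over outcomes, which we can estimate by repeated execution.
random.seed(0)
samples = [route_request() for _ in range(10_000)]
print(samples.count("new-backend") / len(samples))  # roughly 0.1
```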

                                                                                    1. 2

                                                                                      I think it’s useful to think about it as something like logic programming. Logic programming is useful when the answer you want can be posed as the solution to a series of logical constraints. Probabilistic programming shines when your problem can be posed as the expectation of some conditional probability distribution. Problems that benefit from that framing are particularly well-suited to probabilistic programming.

                                                                                      I think its use in practice will, like SQL or Datalog, resemble using a DSL, since you don’t need probabilistic inference everywhere in your software; but in principle, as it is just an extension of deterministic programming, it does not need to be restricted in this way.
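                                                                                      To make the “expectation of a conditional probability distribution” framing concrete, here is a hedged Python sketch (the function names are mine, not from any particular system): estimating E[total | total ≥ 8] for two dice by rejection sampling, i.e. running the model repeatedly and averaging only the runs where the condition holds.

```python
import random

def model():
    """Two probabilistic choices: a pair of fair dice."""
    x = random.randint(1, 6)
    y = random.randint(1, 6)
    return x + y

def conditional_expectation(condition, n=100_000):
    """Rejection sampling: run the model n times, keep only the
    samples satisfying the condition, and average the survivors."""
    kept = [s for s in (model() for _ in range(n)) if condition(s)]
    return sum(kept) / len(kept)

random.seed(1)
print(conditional_expectation(lambda total: total >= 8))  # near the exact value 140/15 ≈ 9.33
```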

                                                                                    1. 2

                                                                                      I’ve done this tour. While it looks like Brucker was let go last year for giving tours on his days off, it was remarkably accessible.

                                                                                      1. 17

                                                                                        Hi Lobsters, author here.

                                                                                        I wanted to give a little bit of background on the motivation behind this post. For a while, I’ve been making academic posters using PowerPoint, Keynote, or Adobe Illustrator, and while it’s possible to get a high-quality result from these tools, I’ve always been frustrated by the amount of manual effort required to do so: having to calculate positions of elements by hand, manually laying out content, manually propagating style changes over the iterative process of poster design…

                                                                                        For writing papers (and even homework assignments), I had switched to LaTeX a long time ago, but for posters, I was still using these frustrating GUI-based tools. The main reason was the lack of a modern-looking poster theme: there were existing LaTeX poster templates and themes out there, but most of them felt 20 years old.

                                                                                        A couple weeks ago, I had to design a number of posters for a conference, and I finally decided to take the leap and force myself to use LaTeX to build a poster. During the process, I ended up designing a poster theme that I liked, and I’ve open-sourced the resulting theme, hoping that it’ll help make LaTeX and beamerposter slightly more accessible to people who want a modern and stylish looking poster without spending a lot of time on reading the beamerposter manual and working on design and aesthetics.

                                                                                        1. 4

                                                                                          Yes, I use LaTeX or ConTeXt for most of my writings, apart from notes in plain text.

                                                                                          No, I just don’t think TeX is a great fit for posters. Probably because I am a control freak about making posters: I really want my prominent content/figures exactly where they are supposed to be, and exactly how large I want them to be on the poster. Sometimes I ferociously shorten my text just to be able to get the next section a little higher, so the section title does not fall off the main poster viewing area. So, yes, I still use Pages.

                                                                                          I guess the difference is whether I am more focused on explaining things, for which I use LaTeX, or more focused on laying out text blocks and figures, at which GUI-based tools excel.

                                                                                          1. 2

                                                                                            I often want something in between: I want to click and draw arrows and figures, but have that turned into LaTeX code so I can still style around it.

                                                                                        1. 3

                                                                                          I think the value of ML comes in very particular scenarios.

                                                                                          1. Tasks where business logic would be too cumbersome to implement. For example, language detection of a document and, for that matter, most NLP and image recognition tasks.

                                                                                          2. Tasks where it’s easy to collect data in hindsight. For example, recommendation systems, detecting nudity in videos, and predicting user demographics.

                                                                                          3. Tasks that are simply statistical tasks. A/B testing and election forecasting involve doing some amount of calculation and there is no equivalent in terms of SQL and business logic.

                                                                                          4. Tasks which have an optimization component. This could be deciding what prices to set for which users, or predicting server utilization to save power.

                                                                                          All these settings can and should be combined with business logic but it’s unlikely to be enough.
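                                                                                          As a concrete instance of point 3, a minimal sketch (with made-up numbers) of the calculation behind a two-variant A/B test — the kind of “simply statistical” task that has no natural SQL or business-logic equivalent:

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: how many standard errors apart
    are the conversion rates of variants A and B?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se                        # z-score of the lift

# Hypothetical experiment: B converts 5.8% vs A's 5.0% over 10k users each.
z = ab_test_z(500, 10_000, 580, 10_000)
print(round(z, 2))  # a |z| above ~1.96 is significant at the 5% level
```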

                                                                                          1. 5

                                                                                            If you found this article interesting, I suggest checking out http://www.fakenewschallenge.org. There they thought long and hard about what a fake news dataset should look like, and there are some models trained against it.

                                                                                            1. 3

                                                                                              I’m restricting myself to predictions that seem to be already happening, so consider this in the spirit of Gibson’s “the future is already here, it’s just not evenly distributed yet.” Also, historically I get all my predictions spectacularly wrong, so this might be more useful as a set of things that won’t happen.

                                                                                              Virtual Reality

                                                                                              As pixel density increases in the next 5 years you are going to see VR in more common places. Expect to see stuff like Google Daydream being used on flights to watch movies. Expect to see pop-up sports centers doing VR games and events. Think stuff like laser tag, mini golf, or squash.

                                                                                              Programming Languages

                                                                                              Over a 5–10 year horizon, expect to see gradual typing enter the mainstream. As the languages figure out a way not to check the annotations constantly at runtime, they will start to have a performance profile like today’s scripting languages.
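                                                                                              As a rough illustration of what gradual typing looks like (here in Python, where annotations are optional, checked statically by external tools like mypy, and ignored by the interpreter at runtime):

```python
from typing import List

def mean(xs: List[float]) -> float:
    """Annotated: a static checker can verify callers pass a list of floats."""
    return sum(xs) / len(xs)

def describe(data):
    """Unannotated: effectively typed 'Any' and left to dynamic checking."""
    return f"mean={mean(data):.2f}"

# Typed and untyped code interoperate freely, and the annotations add no
# runtime checking cost: CPython does not evaluate them against the
# values actually passed in.
print(describe([1.0, 2.0, 3.0]))  # mean=2.00
```

TypeScript takes the same route by a different mechanism: annotations are checked statically and then erased entirely before the code runs.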

                                                                                              Expect in 5 years the first practical applications of dependent types to appear in the wild.


                                                                                              In 5 years, a large set of users will do all their computing through their cell phones. They will have accessories for connecting them to a bigger screen and keyboard, but no dedicated desktop/laptop.


                                                                                              In 5 years, another cryptocurrency will supplant Bitcoin by basically fixing all the engineering issues in the protocol, the governance issues in the project, and delivering on the claims the technology originally made. This won’t be obvious until about a year before it happens. Most blockchain startups will be out of business or have been acquired by a bank at this point.

                                                                                              Machine Learning

                                                                                              In 5 years, you will start to see machine learning algorithms and systems become truly engineered. Between the need for fairness in ML, GDPR, and a general need for reliable diagnostics in devops, expect lots of work on making ML systems easier to debug and engineer, with strong reliability guarantees.

                                                                                              Expect to see more regular products using image recognition. People will create fan-films featuring celebrities that didn’t actually act in them, or are no longer alive.

                                                                                              In 5 years, conversational agents will still be a niche without clear successes.

                                                                                              In 5 years, there will be a startup which compellingly writes news articles in multiple languages.

                                                                                              ML will start making inroads into other areas of CS. Expect to hear about new state of the art results where deep learning is used to augment network protocols, program synthesis, theorem provers, and compiler optimizations.

                                                                                              With the exception of small geofenced areas, we will not have self-driving cars widely deployed.

                                                                                              1. 2

                                                                                                Should you do this, I’ve found this config works best for making Emacs load light and quick:


                                                                                                1. 4

                                                                                                  Recovering from errors is overrated imo; just give me one good error message. Trying to recover from errors just gives you a bunch of false later errors caused by the original one. This post addresses parse errors, but doesn’t really help with all the false semantic errors you get when a syntax error obscures a symbol or type definition.

                                                                                                  1. 2

                                                                                                    Recovery doesn’t necessarily mean you continue parsing. The idea is to use as much of the parser’s state as possible to construct a useful error message. Often in parsers, things are too low-level to give a particularly helpful message; structuring the parser so that some higher-level information is preserved can really help guide the end user.
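                                                                                                    A small Python sketch of that structuring idea (the grammar is a toy I invented for illustration): the parser keeps a stack of what it is currently parsing, so a low-level failure like “expected a digit” can be reported with the higher-level context around it.

```python
class ParseError(Exception):
    pass

class Parser:
    """Toy parser for bracketed comma-separated integers, e.g. '[1,2,3]'.
    A context stack turns low-level failures into higher-level messages."""

    def __init__(self, text):
        self.text, self.pos, self.context = text, 0, []

    def fail(self, msg):
        # Report the low-level error together with the stack of
        # higher-level constructs we were in the middle of parsing.
        where = ""
        if self.context:
            where = " while parsing " + ", inside ".join(reversed(self.context))
        raise ParseError(f"{msg} at position {self.pos}{where}")

    def expect(self, ch):
        if self.pos >= len(self.text) or self.text[self.pos] != ch:
            self.fail(f"expected {ch!r}")
        self.pos += 1

    def parse_int(self):
        self.context.append("an integer")
        start = self.pos
        while self.pos < len(self.text) and self.text[self.pos].isdigit():
            self.pos += 1
        if self.pos == start:
            self.fail("expected a digit")
        self.context.pop()
        return int(self.text[start:self.pos])

    def parse_list(self):
        self.context.append("a bracketed list")
        self.expect("[")
        items = [self.parse_int()]
        while self.pos < len(self.text) and self.text[self.pos] == ",":
            self.pos += 1
            items.append(self.parse_int())
        self.expect("]")
        self.context.pop()
        return items

print(Parser("[1,2,3]").parse_list())  # [1, 2, 3]
```

                                                                                                    On bad input like `[1,,3]`, this reports something like “expected a digit at position 3 while parsing an integer, inside a bracketed list” — the same low-level check, but framed by what the parser was trying to do.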

                                                                                                  1. 3

                                                                                                      So this feels like a referentially-transparent R. I’m curious what you think your FFI story will look like. Or is the intention to implement things like BLAS, HTTP, CSV, JSON, etc. in the language rather than in libraries? I feel like new serialization formats and other ways to get and munge data keep appearing, and historically they would initially enter a system through the FFI.

                                                                                                    1. 1

                                                                                                      No FFI!

                                                                                                      That’s tantamount to a user-defined library.