  2. 23

    It’s interesting that he suggests that you wouldn’t write a Haskell app on PS4. I happen to be in gamedev, and although the common wisdom is that everything in this field is hyper-bummed, hand-tuned C++ and assembly, more and more those languages are retreating to the core engine and in some cases fragmenting and dissipating completely. Most (by any metric) games these days are written in pure C# on Unity. Unity itself is written in C++, but few companies splurge for the source license. Even most of the existing top-tier games (World of Warcraft, Call of Duty…) are written primarily in an embedded extension language (Lua or similar) that exposes primitives at a higher level to the game designers and scripters.

    But intriguingly, even that is undergoing revision – one designer I interviewed recently had been working on a recently released top-tier AAA game in the running for best game of 2016, and he said that their engine was literally Racket all the way down to the metal, from DSLs used to create the scripting interface all the way down to the shader compilers. And another dev I interviewed from another top tier AAA gaming company also said their engine was almost entirely Racket as well now. I almost didn’t believe it until he showed me screenshots of the debug interface.

    Similarly, I know from personal experience that many mobile game studios are using or experimenting with Erlang on the backend – including me! So the notion that these languages are so esoteric as to be implausible isn’t informed by facts on the ground; at least, not this particular ground.

    1. 3

      IIRC, Crash Bandicoot was written using a Scheme dialect.

      1. 3

        one designer I interviewed recently had been working on a recently released top-tier AAA game in the running for best game of 2016, and he said that their engine was literally Racket all the way down to the metal, from DSLs used to create the scripting interface all the way down to the shader compilers. And another dev I interviewed from another top tier AAA gaming company also said their engine was almost entirely Racket as well now.

        That’s unexpected! Can you mention the game and company names?

        1. 9

          I’m speculating without any direct knowledge, but Uncharted 4 was recently released to critical acclaim and was developed by Naughty Dog, who famously used AOT-compiled Lisp to write games like Crash Bandicoot and Jak and Daxter.

          1. 1

            Thanks for the reference.

        2. [Comment removed by author]

          1. 10

            Uncharted 4 and a game I can’t disclose.

          2. 1

            I once worked at a company that wrote extensive amounts of PHP to host websites. One day I suggested that we rewrite the entire core of the infrastructure in order to make it faster and more stable, but all I got back was some grumbles about how much time it would consume.

            I’m also soon going to be working somewhere where Angular is the core of the frontend, but I have no idea how I could convince them to consider switching to React without breaking down barriers and starting fires.

            1. 4

              I don’t know what your specific situation was like, but I think the response you got back from the PHP company was valid. A big rewrite is something you should only take on if you can make a really good ROI on it, or if you’re pulling in enough money to cover a failure if it happens. Rewrites do need to happen (probably more often than they do), but there should be a pretty healthy dose of “no way” in order to elicit a compelling reason.

          3. 32

            The unspoken criterion here is that the author values getting stuff done now over making things better for the future. I’m one of those people who uses OCaml, against the grain, because I think it’s a better language. I’ve had to spend some time making the tooling a bit better and building some libraries that didn’t exist. But I did, and it works fine. Luckily, thousands of other people valued using a tool they think is better as well, so OCaml has a reasonable number of libraries for doing random stuff. At some point some small group of people had to say Python, or Perl, was better than what existed, despite the ecosystem not existing, and build it. How else does a language get there?

            So yes, the author is right. If you have a problem that needs solving now, use something that works now. But if you have the time and the desire to make something better, make it better. Don’t listen to this anti-intellectual drivel.

            1. 28

              anti-intellectual

              This being the key word. The amount of hostility in this industry directed towards people interested in solving long-term problems (especially academics) is unreal.

              1. 21

                Absolutely. The comments against PLT researchers, questioning whether what they’re doing is even worthwhile, are so frustrating. PLT research does not exist to make better production-ready languages today. It exists to explore ideas in the world of programming languages that can be used to make the great languages of tomorrow, or next year, or 10 years from now.

                1. 24

                  Further: academia does not exist for the sole purpose of industry to consume & monetize. I see shades of this anytime academia is discussed among industry.

                  I feel like industry weighs too heavily on PL research as it is, especially with the uptick in research on problems like, “how do we bolt type checking onto languages that fundamentally reject it?”

                  1. 12

                    The attempted inclusion of Software Transactional Memory into C# is a great example of this. “We concluded STM is impractical because we tried to use it in a language that assumes a mutable memory model everywhere.” Well yeah…

                    1. 4

                      It could have been the case that STM works great with C#. It just didn’t turn out that way. I think “STM doesn’t work well with a pervasively mutable language” is a valuable result. Sure, it’s different from “STM is impractical”, but if you are using a pervasively mutable language it’s roughly “STM is impractical for me”, which is nice to know. I think publishing negative results should be encouraged.

                2. 10

                  The amount of hostility in this industry directed towards people interested in solving long-term problems (especially academics) is unreal.

                  “Don’t argue semantics” (Words don’t have meanings, stop arguing)

                  “This is academic” (meaning pointless).

                  “MVP” (don’t even think about long-term implications)

                  1. [Comment removed by author]

                    1. 9

                      Nowhere in particular I am guessing, just stereotypical things people say that devalue our work.

                      If you’re looking for something concrete that embodies my complaints (besides this article), check out Uncle Bob’s latest blog post. “Your research in PL is as significant as your favorite color, because <ignorant bullshit>!”

                      1. 3

                        There are a lot of problems with that article, but yours is, I think, an unfair summary.

                        1. 19

                          If you are objecting to “Your research in PL is as significant as your favorite color”: he literally says that no future language will yield measurable reduction in workload, and then ends with: “So then why are there always new languages being invented? … Ah, so it’s really just a matter of your favorite color.” So I will stand by that statement.

                          If you are objecting to “because <ignorant bullshit>”: Bob’s never used an ML-family language given his reaction to learning Swift, so he is not qualified to speak of the future of PL, nor was he qualified to talk about the future of type systems in his previous post. I think it is unprofessional to behave in such a manner. If he won’t respect researchers enough to even look at what we are doing before he declares our work irrelevant, then why should I treat his arguments with respect?

                          1. 5

                            just stereotypical things people say that devalue our work

                            What kind of research do you do? (just curious)

                            1. 5

                              So, first, you are assuming that ML-family languages are the future of programming languages. That’s a reasonable statement, but then again, the future was once Smalltalk and Prolog, and it was obvious to everyone that that was the case. The fact that ML-family languages right now look promising is hardly a guarantee that they’ll be useful for actual software engineering–similarly, carbon fiber is great but we still build a lot of stuff out of wood (which, while buggier, is handier for practical matters).

                              Second, the core argument was not “PLT is pointless”. The core argument was “We are past the point of diminishing returns, and people keep inventing new languages because they keep searching for the perfect language.”

                              Note that you need not dismiss PLT in order to acknowledge the tremendous rehashing and duplication of effort that most language designers undertake. Most programming languages are made to scratch a particular itch with some existing development system, and so are largely the same plus or minus some matters of taste.

                              Lastly, you seem to keep taking the article personally as an attack on researchers: it’s not. If anything, it’s just pointing out that the existing lines of development and invention (again, not research!) have run dry, and we probably are better off sticking with what we know. On that last point, I think he’s wrong, but that’s because he isn’t comparing languages in a meaningful way. I can agree, though, that the paradigms in use now are pretty well explored.

                              1. 8

                                you are assuming that ML-family languages are the future of programming languages

                                No, I’m saying you need to be familiar with them to have a credible opinion on this subject. Would say the same for any other major family of languages.

                                The fact that ML-family languages right now look promising

                                From your perspective I am following trends, from my perspective industry is finally starting to catch up. Also this is irrelevant, trendy or not I would expect someone making the grandiose claims Bob is making to be familiar with them.

                                Second, the core argument was not “PLT is pointless”. The core argument was “We are past the point of diminishing returns, and people keep inventing new languages because they keep searching for the perfect language.”

                                If the benefit of new languages is virtually immeasurable, then yes, PLT is pointless. Why would anyone spend time and money on PLT if this were the case?

                                Lastly, you seem to keep taking the article personally as an attack on researchers: it’s not.

                                Are you trying to dismiss my argument by calling me emotional? I cited it as an example of something I consider anti-intellectual.

                                If anything, it’s just pointing out that the existing lines of development and invention (again, not research!)

                                I don’t think this interpretation is supported by the source material. “No future language will give us 50%, or 20%, or even 10% reduction in workload over current languages. The short-cycle discipline has reduced the differences to virtual immeasurability.” He did not qualify his statement, so I don’t understand why you feel compelled to qualify it for him.

                        2. 4

                          Live experience, sadly.

                      2. 2

                        “People have constructed bridges that don’t immediately collapse using nothing but intuition and guesswork. Why bother thinking about whether there’s a better way to do things?”

                        1. 2

                          The amount of hostility in this industry directed towards people interested in solving long-term problems (especially academics) is unreal.

                          I am with you. I think there are two forces at play. First, we’ve accepted the attitudes of our colonizing culture. That’s because we think that we’re well-paid, which we are in comparison to average people, but when you consider the obscene compensation given to the non-producers who run this industry, we’re underpaid in comparison. The second is an expression of jealousy. Once the corporate programmers realize that they’ve been duped and that most of them will never work on the kinds of projects that brought us into the industry, many swing around 180 degrees and take a sour-grapes attitude: “the cool stuff” doesn’t matter “in the real world”.

                        2. 12

                          Don’t listen to this anti-intellectual drivel.

                          You’re wrong here, and I think that you marred an otherwise good comment with this sloppy statement.

                          The unspoken criteria for this is that the author values getting stuff done now over making things better for the future.

                          That’s nowhere in the text, and more importantly, it ignores how we actually make things better. There are basically three ways to hit a project, right?

                          1. Use the terrible tools you have and bludgeon your way to a solution. This works well if you are very fast at bludgeoning. Unfortunately, it means that you’ll have to duplicate the effort every time you run across those problems in the future.
                          2. Improve the tools you have enough to get to a solution more easily. This works well if you have a repeatable set of problems, if the tools are easy to extend, and if everyone who uses the tools is better off. However, this can’t fix the case where the tools are fundamentally wrong for a task.
                          3. Throw away the tools entirely and build new tools. This works well if you can make tools that exactly fit the class of problem correctly or if you are bringing a fundamentally new approach to the problem that existing tools fail at. Sadly, the new tools are not guaranteed to actually be any better or even more usable than the old tools, they are guaranteed to require retraining, and they probably aren’t compatible with the old tools at all.

                          The first case is just plowing ahead with code. The second case is building libraries. The third case is building new languages.

                          At this point, I would argue that the problems the vast majority of us face can be addressed within the confines of existing languages, either directly or with libraries. New languages are bringing some interesting ideas out (hail ML et al), but even they aren’t nearly as groundbreaking as things we had 30-40 years ago (declarative languages like Prolog, concatenative languages like Forth) for normal development.

                          There’re three main camps in language dev and invention right now (and I do mean development, not PLT–these aren’t always nonacademic folks).

                          People retrofitting old languages with shiny stuff for whatever reason: These would be the folks bolting more and more features onto C++ and JavaScript. Sometimes, as with arrow syntax in JS, this is helpful. Sometimes, as with the many many ways of declaring an integer in C++, this is bad. Sometimes, as with the oddly-named Bag/Map/etc. additions to Perl 6, this is merely fucking weird. In all cases, though, there is more complexity added to codebases–almost never does a language group remove a feature, and complexity explodes. This work tends to end up injuring end users more than it helps, because the bad old code never ever goes away.

                          People writing new research languages: These would be folks doing pure research at schools or labs, answering questions like “Can we build a fault tolerant language? Yes, Erlang.”, “Can we build a language that has better contractual guarantees and security tricks? Yes, Spec#.”, “Can we build a language with channels and emulation? Yes, Inferno.” and so on. These usually have ideas that are useful later.

                          People writing new languages for aesthetic reasons/lulz: The final category, which brought us languages like Dart, CoffeeScript, YAML, Perl, Ruby, Python, D, and so forth. These languages sometimes make things easier in the long run, but as often as not just mislead and distract the existing folks in the field.

                          Bob’s rant is basically aimed at the third category, it seems to me.

                          1. 7

                            I agree with your post, but of course:

                            New languages are bringing some interesting ideas out (hail ML et al), but even they aren’t nearly as groundbreaking as things we had 30-40 years ago (declarative languages like Prolog, concatenative languages like Forth) for normal development.

                            This is the purest pedantry, but Robin Milner was working on ML in the ‘70s.

                            1. 4

                              No, no, thank you for the correction!

                            2. 8

                              I find your comment far too generous to this prog21 post. Let me defend my claim that this post is anti-intellectual, as that seems to be the contentious part of my comment.

                              Here are some quotes from the article:

                              There’s little thought process needed here. Each language is the obvious answer to a question. They’re all based around getting real work done, yet there’s consistent agreement that these are the wrong languages to be using.

                              … but more importantly, as with Keith’s crisis, the wrong criteria are being used.

                              It hasn’t been stopping people from making great things with the language.

                              Can PLT even be trusted as a field?

                              The gist of these are centered on getting “real work” done. That people who like the languages that the author doesn’t think are useful are using the wrong criteria. That people are still accomplishing things with the inferior languages (as described in the post). And finally: can the field of programming language theory even be trusted? A research field that was founded on lambda calculus, lambdas which Java and C++ have only recently received (to great acclaim), is possibly the untrusted part of this equation??

                              One interpretation is that this is just rhetoric; however, the author has not given us any good reason in this post to think that. Another interpretation is that these statements look alarmingly similar to those used by people who don’t think doing basic science is worth it. Why go to space when we have problems on earth? Why build the LHC when we can spend that money on real-world problems now? Why research programming languages when we have JavaScript? The criteria we are given by the author for how to decide what language to use are all about solving things in the present. The experiment at the end is about having a problem now and solving it now.

                              Finally, he breaks the world into two types of people: those who are in the now and will use tools in the now and those that he uses the pejorative term “dilettante” to describe. One could say that the author does not intend for this to be a complete enumeration of people, but again the author gives us no way to know that. He never hints at a more complicated world where people are willing to solve problems with existing technology but also trying to move technology forward in other ways.

                              In the company I keep, those who make these arguments against basic science, oversimplify, and create false dichotomies are often called “anti-intellectuals”. If you disagree with that term, that’s fine, just replace it with whatever word you would use to describe that kind of behaviour.

                              I usually enjoy prog21 posts. He does a good job of taking a complicated world and making it feel simpler and approachable. In this post, however, his elegance misses its mark. I would have really enjoyed this post if it were about striking a balance. Like the 70-30 or 70-20-10 rules, which split your time between the status quo, research related to the status quo, and far-out research.

                              1. 6

                                You’re of course welcome to your opinion, but I still disagree.

                                I think that you have misread this article as saying there is no point to PLT stuff or learning lots of languages. My reading is that the author has found (after trying lots of languages) that the things that make a language “bad” according to PLT standards seem to not be useful in predicting failure. He backs up this assertion by pointing out the large body of work done in languages that lack features that PLT folks would say are crucial–I’ll chip in with your own example of C++/Java getting massive amounts of work done without lambdas. Your own comment there is exactly what he’s talking about: PLT things that should be showstoppers but oddly haven’t been.

                                He then goes on to suggest (again, backed up by personal experience) a more predictive set of metrics for languages, with an explanation for each. PLT folks cannot describe to me why a lambda is critical in a language, however the author’s points about being at implementors' mercy ring true. These aren’t “anti-intellectual” arguments–if anything, the author is suggesting a more rigorous way of evaluating languages based on practical concerns instead of merely counting language paradigms and features!

                                You also suggest that the article presents a false dichotomy between research and get-it-done programming, or between mundane people and dilettante–in my reading, dilettante was used to describe a set of people, not one of two or however many. I think you just misread that.

                                1. 7

                                  Your own comment there is exactly what he’s talking about: PLT things that should be showstoppers but oddly haven’t been.

                                  Who has credibly said this is a showstopper? This view that the PLT community looks out their windows wearing blinders is nonsense.

                                  You also suggest that the article presents a false dichotomy between research and get-it-done programming, or between mundane people and dilettante–in my reading, dilettante was used to describe a set of people, not one of two or however many. I think you just misread that.

                                  I’m not sure what distinction you are making from what I said.

                                  1. 4

                                    PLT folks cannot describe to me why a lambda is critical in a language

                                    Since someone gave a non-PLT perspective explanation, here is my PLT explanation:

                                    First-class functions (aka “lambdas”) are the NAND-gates of compositional computing: you can macro-express any (pure) data structure with them. They are a uniquely fundamental construct.

                                    To macro-express X in terms of Y means there is a local transformation mapping occurrences of X to equivalent constructs in Y. It is a formal way of talking about programming language expressiveness. The “local” criterion means you can package up the construct into a reusable package, which I don’t think I need to motivate. This terminology comes from On the expressive power of programming languages by Felleisen.
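                                    To make “macro-express” concrete, here is a toy sketch (mine, not taken from the paper): Church-encoded pairs, where the entire “pair” data structure is built from nothing but first-class functions. All names are illustrative.

```python
# Church-encoded pairs: the data structure is macro-expressed
# using only first-class functions. Names are illustrative.

def pair(a, b):
    # A "pair" is just a function waiting for a selector.
    return lambda select: select(a, b)

def first(p):
    # Select the left component.
    return p(lambda a, b: a)

def second(p):
    # Select the right component.
    return p(lambda a, b: b)

p = pair(1, 2)
print(first(p), second(p))  # 1 2
```

                                    The transformation is local: each occurrence of a pair operation maps independently to a lambda term, so the encoding can be packaged up and reused, which is exactly the criterion above.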

                                    We are well aware you can get work done without lambdas. In fact, there are (non-macroexpressible) algorithms for eliminating them called closure conversion and defunctionalization. The existence of such algorithms doesn’t mean that lambdas are useless, it means they have an implementation strategy.
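                                    For the curious, here is a hypothetical sketch (my own toy example) of defunctionalization: every closure becomes a plain record of its captured variables, interpreted by a single dispatcher.

```python
# Defunctionalization, sketched: closures become plain data plus
# one 'apply' dispatcher. All names here are hypothetical.
from dataclasses import dataclass

# With first-class functions we would write: lambda x: x + n
# and: lambda x: x * factor. Defunctionalized, each closure is
# a record of what it captured:

@dataclass
class AddN:
    n: int

@dataclass
class MulBy:
    factor: int

def apply(f, x):
    # One dispatcher interprets every "function" tag.
    if isinstance(f, AddN):
        return x + f.n
    if isinstance(f, MulBy):
        return x * f.factor
    raise TypeError(f"unknown function tag: {f}")

print(apply(AddN(3), 4))   # 7
print(apply(MulBy(3), 4))  # 12
```

                                    Notice this transformation is not local: introducing a new lambda anywhere in the program means editing the central dispatcher, which is why it does not macro-express first-class functions.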

                                    Note that you can remove the pure qualification above by adding delimited continuations, which are the NAND-gates of side-effects. If you see people raving about monads, it is because they are a different way of expressing delimited continuations. See Representing Monads by Filinski.

                              2. 6

                                the author touches on this “programming languages vs getting things done” topic frequently. at least one of his posts, slumming with basic programmers, ranks high on my list of favourite programming posts. but this post does seem closer to the $100M language post, which makes the same mistake of (essentially) not distinguishing between one-shot and iterated games.

                                (having been a big fan of his blog in general for some years now, i’d hesitate to call him anti-intellectual, but this does seem to be a blind spot of his)

                                1. 5

                                  To be clear: I don’t think the author is but I think this post has the characteristics of being anti-intellectual.

                                2. 3

                                  Don’t listen to this anti-intellectual drivel.

                                  I hate software anti-intellectualism. (I got downvoted heavily for pro-intellectualism in a recent thread.) That said, I don’t think it’s fair to the OP to assign him the “anti-intellectual” label. He has actually tried a bunch of languages, and settled on the “worse” but more successful languages being the right choice for him. There’s nothing wrong with that.

                                  I dislike his disingenuous dichotomy about “language dilettantes” and

                                  [t]he kind of person who’d jump in and start writing a book rather than dreaming about being a famous novelist.

                                  Of course, the “real-world task to accomplish” target is poorly specified. Is it a long-term or short-term project? Is it application-level or infrastructural? Is it going to go multi-developer? When it comes to the trade-offs involved in language choice, those factors matter immensely. He seems to be overly focused on the short-term projects, and he ought to know better. Still, I don’t think he qualifies as a representative of the archetypal anti-intellectual tech boss, insofar as he’s actually tried a number of languages.

                                  1. [Comment removed by author]

                                    1. 1

                                      I actually felt like @burntsushi had a valid viewpoint, but it felt like a lot of the downvoting of me and upvoting of him was a case of people Voting Waldo. I could be biased by having been attacked by the YCs on Reddit, though. Lobsters is generally better than that, so it’s possible that I just made my point poorly.

                                      The greater conflict here is an echo of the Federalists vs. Jeffersonians conflict of early 19th century America. You have idealists who think that everyone should program– the same idealists who believed in the moral supremacy of the agrarian life– and the realists who tend (like me) toward cultural and intellectual conservatism and who think that protecting the field is of higher priority than opening it up. (Both are valuable, but we should decide what we stand for before increasing our numbers.) The Federalists were a disliked minority party in their own time, but proven to be right. Similarly, people like me who rail against open-plan Scrum culture (i.e. the devaluation of skills we spent decades acquiring) are seen as an elitist minority, but my bet is that we’ll be viewed as having been correct.

                                    2. 21

                                      I got downvoted heavily for pro-intellectualism in a recent thread.

                                        That’s a ridiculous spin. I know I didn’t downvote you for being in favor of “intellectualism.” I downvoted you because of your presumption that you’re qualified to make a whole bunch of assessments on who should not only be allowed in this industry, but who should even be a programmer in the first place. That’s not “pro-intellectualism,” that’s arrogance of the highest caliber. I also downvoted you for your excessive stereotyping and false dichotomies.

                                      If you were just “pro intellectualism,” then you’d be doing your part to encourage the exploration of new ideas. Instead, you’re bitching about “scrum drones.” IMO, doing the latter while pretending it’s the former is part of what makes you so poisonous.

                                      For instance, I have zero problems with the anti-intellectual complaints in this thread. I totally buy @apy’s argument, hook, line and sinker. I even identify with it to an extent. But there’s a huge chasm between the anti-intellectual complaints in this thread and the supposed anti-intellectual complaints that are in your comments.

                                      1. 2

                                        This has gone off-topic and I’d be happy to take the debate to a more appropriate channel, if you know of one.

                                  2. 9

                                    Getting stuck too easily in a local maximum is bad. Searching indefinitely for the global maximum is bad. You just have to be willing to stick most of the time, and hop around other times. Now I just need a name for it… pretend tempering?

                                    1. 10

                                      I have mixed views about this.

                                      On one hand, the languages that are “good enough” now (see: Python) were once elite, fringe languages. If we apply this reasoning, we’re stuck with Java. I don’t think that anyone wants that. There’s a lot of social value, as @apy notes, in people investing time and energy into languages. It’s the standard trade-off between non-local jumps and local maximization: you need both. And if you’re going to be taking on a long-term project where the choice of language will affect multiple people every day for the next 10+ years, it’s probably the right choice to use OCaml or Haskell and copy over or rewrite the external libraries that you’ll need.

                                      On the other, I’ve learned the hard way that using elite, fringe languages can make it harder to keep current in other fields. If you want to be a top-notch data scientist and be recognized for it, you should probably work in Python. Haskell and Clojure and Scala are nowhere near competitive on libraries, so you’re going to spend a lot of time building infrastructure. That’s important work, but it won’t establish you as a data scientist. If you want to be a high-end data engineer for whom there are 7 jobs in the whole country at any given time, you can do data science in Haskell. If you want to win the data science game, you should probably focus on the Python stack, which is almost always going to be good enough for you to do your job.

                                      A further problem with “elite” languages is the political fight involved in getting them adopted. Often, it’s just not worth it. I say this as one who’s done it on multiple occasions. It’s demoralizing, and it draws energy away from the work.

                                      1. 2

                                        prog21.dadgum exemplifies “git r dun” in programming.