1. 6

    Programming Languages: Application and Interpretation (PLAI) is pretty good, and has the added benefit of being free online.

    http://cs.brown.edu/~sk/Publications/Books/ProgLangs/2007-04-26/

    Essentials of Programming Languages is another good intro PLT book.

    Programming Language Pragmatics is a good book, and it’s useful. I have a copy. If I lost it, I’d replace it. I refer to it occasionally.

    Whether it is a good choice as the primary text for a PLT class depends on the specific PLT class.

    Programming Language Pragmatics is basically a large collection of small sections about specific programming language features. Each feature is introduced, described, and several code snippets in different languages are provided to illustrate the use of that feature (by the end of the book, dozens of languages have been mentioned). What is conspicuously absent is the theoretical basis for the feature and any real detail about how the feature is actually implemented. (TLDR: There’s a reason the book is called “Programming Language Pragmatics” rather than “Programming Language Theory.”)

    If your PLT course is about “learn about using a bunch of different programming language features,” then Programming Language Pragmatics makes a lot of sense as a primary text.

    Personally, I think that’s a perfectly reasonable subject for a course, but I wouldn’t call that course “Programming Language Theory.”

    If your PLT course is about “learn the theoretical basis of programming languages and use that theory to implement a simple programming language and several variations of it,” or something similar, then I think Programming Language Pragmatics is a poor choice - that just isn’t what the book is about. It might be handy if you’re having trouble understanding what the pieces you’re building do, but it won’t really help you build them.

    As an example, you mention type systems. Programming Language Pragmatics only has a few pages total on type systems, type checking, and type inference. There’s no mathematical description of types, no discussion of how to actually DO type checking, and no discussion of how to actually DO type inference. The entire section basically boils down to “some programming languages have types, and will make sure that the types match up - some languages will even figure out the types for you!”
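    For contrast, here is roughly what actually DOING type checking looks like, sketched as a toy checker. This is purely illustrative Python (the expression encoding and names are made up, and appear in none of the books mentioned):

```python
# Toy type checker for a tiny expression language: numbers, booleans,
# addition, and conditionals. Illustrative only -- not from PLAI or TAPL.

def type_of(expr):
    """Return "num" or "bool" for a well-typed expression; raise otherwise."""
    if isinstance(expr, bool):        # test bool before int: bool subclasses int
        return "bool"
    if isinstance(expr, int):
        return "num"
    if expr[0] == "+":
        _, left, right = expr
        if type_of(left) == "num" and type_of(right) == "num":
            return "num"
        raise TypeError("+ expects two nums")
    if expr[0] == "if":
        _, cond, then, els = expr
        if type_of(cond) != "bool":
            raise TypeError("if condition must be a bool")
        t1, t2 = type_of(then), type_of(els)
        if t1 != t2:
            raise TypeError("if branches must have the same type")
        return t1
    raise TypeError(f"unknown expression: {expr!r}")

print(type_of(("if", True, ("+", 1, 2), 0)))  # num
```

    Type inference is the same idea run in reverse: instead of checking given annotations, the checker generates constraints and solves them by unification, which is exactly the material TAPL develops formally.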

    1. 9

      Please note that there’s also a second edition of PLAI, which is also available at the same link:

      http://cs.brown.edu/~sk/Publications/Books/ProgLangs/2007-04-26/

      I think the second edition is much better than the first. (Of course, I’m a bit biased!) It’s the result of teaching the first edition for about a decade, finding much better ways of explaining its concepts, and eventually transcribing those better ways back into the book.

      The language of implementation is also slightly different. This has some advantages and disadvantages.

      Incidentally, as of a week or two ago the second edition has been translated into Chinese, though that may not be of much interest to people on an English-language thread. (-:

      1. 3

        This was what we used in our first level PL class (at Cal Poly), and I just want to say thanks for writing such an easy to approach book!

        While there wasn’t much about types, I found it was perfect for the initial dip to get the context of types while making a basic PL.

        1. 2

          My pleasure — thanks! There isn’t much on types because I didn’t see the value in producing a watered-down version of TAPL. Rather, I show people the notation and what they need to know so that they can read TAPL.

        2. 2

          I’ll have to take a look at the second edition, I enjoyed the first.

          Thank you for your generosity in making such a valuable resource available at no cost.

          1. 1

            Thank you kindly! It’s a delight.

        3. 1

          So I “think” the course is a bit of both. But I’ve only had the intro so far, and I’m doing the first exercise tonight, so I don’t yet have a full picture of how the course will go.

          For instance, most of the intro talked about BNF, programming paradigms and a short intro of different languages. The teacher did mention hoping that everyone would, at the very least, understand closures perfectly by the end of the course.
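          (The closure understanding such courses aim for is usually demonstrated with something like the classic counter; a Python sketch, not taken from any course:)

```python
# Classic closure demo: each call to make_counter creates a fresh `count`
# that the returned function closes over, outliving the enclosing call.
def make_counter():
    count = 0
    def increment():
        nonlocal count
        count += 1
        return count
    return increment

a = make_counter()
b = make_counter()
a(); a()
print(a())  # 3 -- a's captured count is independent of b's
print(b())  # 1
```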

        1. 3

          Really nice tutorial! Has anyone else found material covering the implementation of some other common language features (objects, methods, etc.)?

          1. 3
            1. 3

              Thanks (-:. See also the follow-up: http://papl.cs.brown.edu/

              1. 2

                I went through a few chapters, and I’m curious about the focus of this book - it seems to spend less time than PLAI on the details of implementing interpreters, and more on building up the foundations a la HtDP. Is it meant to eventually supersede PLAI, or is it more of a companion volume for people who want to start from an earlier point?

                1. 3

                  It’s actually a hybrid. Pretty much all of the content of PLAI is in there, though mostly in the second half. The first half is like a very brief version of HtDP. But there is also content that is not in either book: e.g., the material on tables is driven by language features we added to Pyret.

                  The real goal is to eventually interleave these parts: “here is some programming, now here’s the corresponding PL content; here’s more programming, here’s the corresponding PL content”. I was starting to do that, but realized there are some open research questions I have about how to present semantics. So with Preston Tunnell Wilson I am now investigating those issues. Once they are sufficiently resolved, they will feed back into the restructuring of PAPL.

                  Hope that makes sense.

          1. 2

            Is there really an “emerging idea of language-oriented programming, or LOP” as the article states? The way I see it, most modern languages are carefully crafted to balance their features (in their type and effect systems) such that I see little room for the modularity that LOP would require. I’ve never heard the term before. It sounds like we have extended the hierarchy upwards but I am not yet convinced:

            • LOP
            • Embedded DSL
            • Framework
            • Library
            • Functions, Methods
            1. 4

              I imagine people made similar arguments against objects in the early 80s. When languages did not natively provide support for objects, it was so inconvenient that people hardly ever used them, so they necessarily felt “we have extended the hierarchy upwards but I am not yet convinced”. Now we can’t get them out of our languages even if we try. (-:

              Where languages make it — by design or by accident — easy to extend the language, language extension is rife. The paper mentions the case of JavaScript. Though JS was not invented with any meaningful metaprogramming capabilities, it left enough hooks that people have gone off and created all sorts of sub- and super-languages around it. This is also true in Ruby (“Ruby DSL” is a whole thing in itself), because Ruby also provides such hacks.
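              Concretely, the kind of hook being described: in Python, `__getattr__` plays roughly the role Ruby’s `method_missing` plays for Ruby DSLs. The routing mini-language below is made up purely for illustration:

```python
# A tiny made-up routing "sub-language" grown out of one hook.
# __getattr__ intercepts any unknown attribute (.get, .post, ...),
# much as method_missing does in Ruby DSLs.
class Routes:
    def __init__(self):
        self.table = {}

    def __getattr__(self, verb):            # called only for missing attributes
        def register(path, handler):
            self.table[(verb.upper(), path)] = handler
            return self                     # return self to allow chaining
        return register

app = Routes()
app.get("/users", lambda: "list users").post("/users", lambda: "create user")
print(sorted(app.table))  # [('GET', '/users'), ('POST', '/users')]
```

              Nobody designed a "routing language" feature here; the hook was enough for one to surface, which is the point about JS and Ruby.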

              Furthermore, a growing number of new languages have been adding macros: Scala, Julia, etc. You can view what has happened in Racket as a natural destination of where macros end up. We’re just ahead of the curve by about 20 years; there’s a good chance that as people start to use macros more in those other languages, they will slowly recapitulate all the lessons that we’ve learned, and end up creating similar solutions.

              1. 1

                When a language has an advanced type or module system, this cannot be easily extended. The language can still implement a macro system to accommodate developing patterns (like deriving RPC interfaces, supporting logging, or serialisation) to help cut down boilerplate code, but there are limits to that. A term like LOP suggests a language can be assembled from building blocks, and my lack of conviction is around that aspect.

                1. 2

                  Well, no, LOP does not imply that a “language can be assembled from building blocks” — that is sometimes a consequence, but it isn’t part of the definition. The point is simply that every program has lots of small languages that are itching to surface, and languages should make it possible to do so — not in an ad hoc way, but in a way that lets those languages be turned into abstractions in their own right.

              2. 3

                It sounds like metaprogramming with DSLs, just with a new method. Language-oriented programming might be a more approachable term for that, though. If I heard it, the first things I’d think of would be tools such as Rascal and OMeta that let one arbitrarily define, operate on, or use languages. That covers the language part. As far as integration goes, a number of techs supporting DSLs… probably a lot of LISPs and REBOL… had a powerful language underneath one could drop into.

                So, this seems like a new instance of old ideas that they’re giving a new name. I do like how they’ve integrated it into a general-purpose, GUI-based programming environment instead of it being its own thing like Rascal. An old idea I had was that researchers should do more experiments building Rascal or OMeta alternatives in Racket, leveraging what they already have, to see how far one can stretch and improve it.

                1. 2

                  Terra is another language in the “make DSLs” approach to programming - although more geared towards lower-level programming, I think.

              1. 4

                I’d love to see a comparison between this and quorum.

                1. 3

                  We’re really different. We’re very focused on getting the semantics right, less focused on syntactic details. So a direct comparison would not be very meaningful.

                  1. 1

                    I keep thinking about putting an evidence package from safe and verified systems to give them some language features to study. Especially like contracts or Ada’s built-in safety features.

                    1. 3

                      Sure, though I’m not sure there’s a lot of those we haven’t heard of. We’re deeply familiar with safety semantics of several languages (and have created our share of them). We’re also very familiar with contracts, especially from Racket (I’m one of the co-originators).

                      1. 1

                        I was talking about the Quorum people getting the evidence packet. Not y’all, unless you do that too.

                        And welcome to Lobsters! :)

                        1. 3

                          No, send it to them, that’s fine (-:. And thanks — first time on here, but I see some familiar faces…

                  1. 2

                    The post talks about developing multiple languages for a single project and using them all together. Wouldn’t this mean that before starting on a project you would have to learn every language used in it, and risk messing things up by forgetting that something that works one way in one section of the project works differently in another?

                    1. 3

                      Languages develop organically. You start by writing functions; you abstract these into libraries; then you start to notice patterns in the use of the libraries and try to abstract those; and eventually you notice that the abstractions don’t quite work out, and what you’d really like is a language for assembling the pieces offered by the libraries.

                      What Racket does is blur the line between library and language (as the paper explains). A library can export “language constructs” just as well as it can export functions.

                      For instance, consider file descriptors. You provide a library of files, including ways to open and close them. You notice that people are constantly screwing up, opening the file and forgetting to close it. You could provide a function that does both, and takes as an argument what to do in-between, but that’s unwieldy. So you can provide a construct (with-file, if you will) that lets people say what they want to do, and does the file opening and closing automatically. At some point you may even realize everyone should be using with-file only, and there’s no need to provide explicit opening and closing functions at all (after all, people might also close files before opening them). Voilà, you’ve gone from functions to a little language for dealing with files…seamlessly.
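                      The same progression replayed in Python (an analogy only; the `with_file` name just echoes the comment, and Python eventually baked this very construct in as the `with` statement):

```python
import os
import tempfile
from contextlib import contextmanager

# Step 1: the "unwieldy" higher-order function -- it takes, as an
# argument, what to do in-between the open and the close.
def call_with_file(path, mode, body):
    f = open(path, mode)
    try:
        return body(f)
    finally:
        f.close()

# Step 2: the construct -- opening and closing happen automatically
# around whatever the caller writes in the block.
@contextmanager
def with_file(path, mode):
    f = open(path, mode)
    try:
        yield f
    finally:
        f.close()

path = os.path.join(tempfile.gettempdir(), "with_file_demo.txt")
call_with_file(path, "w", lambda f: f.write("hello"))
with with_file(path, "r") as f:
    print(f.read())  # hello
```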

                      1. 2

                        Yes, but you get used to it the same way you learn to use a library. In a way, after you learn Racket, you lose respect for most contemporary dynamic languages…

                        The only reason to use them is that they have more batteries.

                      1. 7

                        At what point does the type become rich enough that it might as well just be the implementation?

                        Or to be less facetious: When do we start seeing mainstream languages with integrated relational and constraint programming?

                        1. 2

                          The flaw in your facetious statement is the definite article, the. The type can be an implementation: a slow (or otherwise inefficient), but ascertained-to-be-correct implementation, while “the implementation” is a fast, less-likely-to-be-precise one.

                          I don’t see this as connected to relational and constraint programming. These are ideas I use on a regular basis; there are other times when I use relations and constraints; and when I’m doing one I’m rarely (by choice) doing the other.

                          Though Racket comes with Datalog built-in, if that’s really your thing. (-:
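                          For the curious, the relational flavor in question can be sketched as a naive bottom-up Datalog evaluation. This Python fixpoint is illustrative, not Racket’s implementation:

```python
# Two Datalog rules, evaluated bottom-up to a fixpoint:
#   ancestor(X, Y) :- parent(X, Y).
#   ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
parent = {("alice", "bob"), ("bob", "carol"), ("carol", "dan")}

ancestor = set(parent)                          # first rule
while True:                                     # second rule, iterated
    derived = {(x, z)
               for (x, y) in parent
               for (y2, z) in ancestor
               if y == y2}
    if derived <= ancestor:                     # nothing new: fixpoint reached
        break
    ancestor |= derived

print(("alice", "dan") in ancestor)  # True
```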

                        1. 1

                          Reminds me of C++ template expansion errors in Visual Studio 6.

                          1. 1

                            Yes, except this post is about reconstructing run-time information, not template-time information.

                          1. 1

                            Interesting how a language that so many seem to either loathe, love, or find utterly boring keeps inspiring new tools and paradigms all the time.

                            Must be something to it, despite all the naysayers.

                            1. 1

                              Which one?

                            1. 5

                              Hi, Pyret developer here (I hack on error messages)! If anyone has any questions, I’ll do my best to answer or grab someone who can!

                              1. 4

                                A couple questions, about the check and where blocks:

                                1. Do you know if they’re consciously inspired by the Design by Contract ideas from Eiffel (and other languages)?
                                2. The front page says “These assertions are checked dynamically” – are those conditions checked at compile time, first execution of each function, only during a separate testing run, or what?
                                1. 2

                                  check, where and examples blocks all exist to support the ‘examples’ and ‘testing’ phases of the functional design recipe from How to Design Programs. The placeholder expression syntax (...) and doc blocks are other features Pyret has to encourage the design recipe.

                                  Testing statements are executed in the order they appear in the program alongside other top-level expressions. If you run:

                                  print(1)
                                  
                                  check:
                                    print(2)
                                  end
                                  
                                  print(3)
                                  

                                  The result is that 1, 2, and 3 are printed (in that order).

                                  1. 1

                                    Interesting. For the where expressions on function definition: are those executed when the function definition is first encountered, or when the function itself is first executed?

                                    1. 2

                                      At the point when the definitions are “encountered” is a good way to think about it. So, the former.

                                1. 2

                                  Indeed. (-: