1. 47

I’m not sure I agree with the statement that LLVM is bad for functional languages - maybe for purely functional or lazy languages, e.g. Haskell, but for something like ML I think it’s definitely a valid option for a backend. That said, while I’m working on my own compiler for a functional language (built on LLVM), perhaps I haven’t gotten far enough to see the problems at scale.

The dismissal of effect systems seems like valid criticism, but it comes across as overly fatalistic.

Type checker ergonomics are what I’m most concerned with at the moment in my own language. I haven’t found much discussion on the topic (that isn’t just another dynamic-vs-static trash fire), so I’d be interested to hear more on this.

I couldn’t find the talk for these slides, but if anyone knows where to find it, I’d love to watch it.

  1.  

  2. 14

    I find slide #12 odd.

    There is interest in proper engineering in some circles of academia, namely the research groups within industrial companies. Interestingly, these groups are the antithesis of what the author writes about companies: they are not bound by quarterly returns.

    Finally, there is a big hobbyist and enthusiast community that is interested in participating, they just can’t provide infrastructure.

    The slide makes it seem like these are three silos and a new programming language has to come out of one of these.

    A combination of these is very powerful though: Rust specifically is rooted in all three. It is a programming language built by the research arm of a company, initially as a research project, but geared towards solving problems that company has at scale. This interplay is very powerful, as each and every language feature of Rust can be traced to problems that Mozilla has.

    But, and that’s where it’s rather unique: it was built with community participation from 0.1.0 on (there’s history before that). Indeed, the language has gone through fundamental changes through hobbyist involvement and feedback. Even today, 75% of all contributions are from non-Mozillians.

    All of this is made easier, though, by the huge amount of infrastructure Mozilla provides: CI, build tooling, hosting for the odd website you might have, hosting for the package manager.

    So, yes, individually, the next big language doesn’t have many chances to come out of one of these silos. But smashing the silos has a lot of potential. And Mozilla is actually good at doing this.

    P.S.: For anyone interested in some history, this is a talk by a community contributor who built many features that were subsequently removed for 1.0 (they are not bitter about it; they think Rust is a better language for it). https://www.youtube.com/watch?v=olbTX95hdbg&list=PL85XCvVPmGQic7kjHUfjHb6vz1H5UTEVG

    1. 6

      Yeah, I found that strange too - I’ve worked at a few companies which had R&D departments, oftentimes focused on software, and it doesn’t seem far-fetched that a programming language could grow out of one of those departments (and as you said, they already have).

      A lot of these slides seemed to take an “everything sucks” point of view - and I can’t tell if that was in the interest of stirring up conversation, or whether it’s actually their attitude about these things. Either way, I can’t help but feel there are better ways to get people thinking about the issues involved in making practical programming language improvements.

    2. 6

      Very weird to see Haskell listed as an industry language and OCaml as an academic one, no?

      1. 1

        It does mirror my experience. While you do see some applications of OCaml in industry (Jane Street et al), I’ve observed Haskell to have more real-world penetration simply because the ecosystem is more mature. (Not having multiple core libraries to choose from, what feels like a more stable toolchain, etc. probably helps.)

      2. 5

        I really enjoy effect systems, though the verbosity is rough. PureScript ends up with some gnarly annotations if you go all in.

        One thing I’d really like to see is a move towards tagging and working with partial information. Particularly: being able to “overlay” types onto things that are already built.

        For example, you might have an internal HTMLString tag/row, and some third-party lib provides concatHtml but doesn’t know about your types. If you could add type annotations over this without having to rewrite all your code (super useful when working on larger projects to avoid churn just to try out new typechecks), then you could tag things simply.

        We do a lot of stuff with wrapping/unwrapping nowadays, but then we hit the “monad stack” problem pretty quickly. Does HTMLString . EvenLengthString mean the same thing as EvenLengthString . HTMLString?
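        Since Rust features heavily in this thread, here is a minimal sketch of the overlay idea with a Rust newtype (HtmlString, concat_html, and concat_html_tagged are all hypothetical names, not from any real library). The tag is free at runtime, but without language support for overlaying types you still write the adapter by hand - which is exactly the wrapping/unwrapping churn described above:

        ```rust
        // Hypothetical third-party function: it only knows about plain
        // Strings and nothing about our HTML tagging convention.
        fn concat_html(parts: &[String]) -> String {
            parts.concat()
        }

        // Our zero-cost "overlay" tag: a newtype wrapper around String.
        #[derive(Debug, Clone, PartialEq)]
        struct HtmlString(String);

        // Hand-written adapter: unwrap, call the untagged function, re-wrap.
        fn concat_html_tagged(parts: &[HtmlString]) -> HtmlString {
            let raw: Vec<String> = parts.iter().map(|p| p.0.clone()).collect();
            HtmlString(concat_html(&raw))
        }

        fn main() {
            let parts = [
                HtmlString("<b>".into()),
                HtmlString("hi".into()),
                HtmlString("</b>".into()),
            ];
            println!("{:?}", concat_html_tagged(&parts));
        }
        ```

        (In Haskell the analogous adapter over a newtype can often be derived mechanically with Data.Coerce instead of written by hand, which gets closer to the “annotate without rewriting” idea.)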

        We have a lot of tools out there for types and building packages/modules, and I think we can expand static analysis beyond just current types.

        1. 2

          I’m really intrigued by them as well, but mostly from the point of view of optimizations (for example, memory management) or interesting kinds of static analysis. I haven’t read anything that makes me think user-defined effects are very useful in practice, but I’d love to read some good examples.

          Custom effect handlers can offer some interesting options, such as the concurrent scheduler often shown in papers, but when I compare that model to how much more intuitive working with processes in Erlang is, I can’t help but feel like it is a bit of a “square peg in a round hole” situation.

          To me the promise of effect systems (and effect inference) is a compiler which is able to statically determine a lot more interesting properties about a program, and thus able to optimize it better, or warn about things like conflicting effects. I’m really interested in this myself, because I know I could extend my type system with effects if I really wanted to go that direction, but beyond the memory management aspect, I’m not sure what I would be trying to accomplish with it.

          I think my issue with tagging/gradual typing is that you lose a lot of the guarantees/benefits you get with a system designed around static types (whether inferred or otherwise). You basically have a system where everything is effectively dynamically typed anyway, because some of the code may be working with typed values, but then passing them off to code which has no type information, and has to rely on tags. One of the benefits of statically typed languages (in my opinion) is that the type system gives you enough information to do some really nice optimizations (such as unboxing primitives), and also the guarantee that if your type system is sound, and your program compiles, you can be certain it will work.

          You mention using a value of type HTMLString with some third party lib providing a concatHtml function. How are you proposing to make it work with your types? Are you assuming it has no types and you can provide your own type signature for it that works with your HTMLString type, à la Haskell typeclasses/instances? Or are you suggesting that your HTMLString type could be made compatible with whatever type concatHtml is annotated with? I think a number of gradual typing systems do something like the former (i.e. decorate untyped functions with their missing annotations), but in practice I’ve heard mostly complaints about this approach - most developers don’t want to spend time doing that for an entire third-party library, let alone multiple, and while some industrious community members will certainly do so, it doesn’t prevent the feeling of “the type system doesn’t really protect me”. The latter seems unlikely to work, other than in the case of subtyping, at least without heavy restrictions (e.g. allowing types which are structurally the same, or types which are basically aliases for the same thing).

          I think a lot of the gradual typing approach is focused on initial productivity, right at the beginning of a project, where things are constantly shifting, sometimes in big ways - and I think that having a more dynamic system at that point is valuable, but it doesn’t take long for things to settle down to a point where a static type system is staying out of the way but doing a good job helping you make changes/preventing mistakes. Rather than gradual typing, it would be nice if you could defer typechecking until the point where it’s required and be able to tell the typechecker “I know this looks wrong, but let it fly for now”. I think this combination would allow quite a bit of the flexibility desired near the beginning of a project. Unfortunately I think it introduces a ton of complexity into a compiler, which I think is fine, but can also make it difficult to implement and balance with all of the other concerns (error reporting, soundness, codegen).

        2. 4

          On type checker ergonomics, I recommend “Improving Type Error Messages in OCaml” (2015), especially the “Related work” section.

          1. 1

            Thanks for the link!

          2. 3

            Is there a video to go with the slides? A few of the bullet-points leave something to be desired in terms of context and clarity.

            1. 1

              I couldn’t find one, and I looked for a while. I agree it’s lacking a larger point/context that must have been established in the actual talk but didn’t make it into the slides. Hopefully somebody knows where it can be found.

            2. 3

              I really enjoyed reading this, thank you :). I’ve been squarely in the “build Smalltalk on the Mac” camp, because after all macOS is NeXTSTEP, and NeXTSTEP was a compromise to do Alto development on a Unix system. I still think that NeXTSTEP and ObjC were two of the finest compromises in computing. There have been others doing more or different things in this space: F-Script, Objective Smalltalk, StepTalk, the GNU Smalltalk/Objective-C bridge, and probably even more that I haven’t heard of.

              I’m still interested in and excited by developments from that camp, but am not (any longer) expecting some Alan Kay-esque “blue plane” event in which this makes programming an order of magnitude faster, easier or otherwise better and a group of programmers break away and form some commune for those who (subjectively speaking) want to “do software right”.

              I have lots of inputs into that, but this is too narrow a margin for a full proof, so I want to focus on one of them: collectively, software practitioners (and separately, software engineering academics) are a conservative group. Whatever your favourite metric for programming language popularity, the languages in vogue are mostly either currently promoted by incumbent platform vendors (Google and Go/Kotlin [rather than IntelliJ]; Apple and Swift; Mozilla and Rust) or are decades old and came from the vendors (Netscape and JS; MS and C#/VB; Oracle/Sun and Java; AT&T and C/C++; Apple/NeXT and ObjC). The favoured tools (except in a few specific domains) tend to be of the traditional edit text -> [optionally compile] -> run -> [either insert a print statement or maybe run a debugger] variety, even the new ones like VSCode, Nuclide, etc. Tools like LightTable/Playgrounds or CodeBubbles merely make that flow more efficient.

              That the platform vendors have so much clout, present and historical, reflects a fundamental reality that if you follow the money you no longer find yourself in developer tools R&D.

              1. 2

                Interesting slides, but I feel there are diminishing returns in programming language research. There are still some interesting areas, but there haven’t been many obvious gains in a while, especially not since functional programming has gone mainstream.

                I’ve used a lot of languages over the years, in a variety of situations, and one thing I’ve found is that developers generally understand the pain points and limitations of the language they’re using and take measures to alleviate them.

                The major problems in software development nowadays simply aren’t due to programming languages. Programming languages won’t fix missed requirements, feature creep, poor development practices, crappy architecture, poor communication, etc.

                Security is one area where languages can help a bit, but even there the biggest screw ups (like Equifax) usually boil down to people problems, infrastructure problems, or poor architecture.

                1. 2

                  I’m secretly hoping we see a bunch of progress in APLs.

                2. 2

                  There is no dependently typed language compiler mature enough to compile itself.

                  ATS2 is written in ATS and compiles with the original ATS compiler (which bootstraps from C), as far as I know.

                  1. 2

                    This reddit comment sums it up pretty well I think.

                    1. 2

                      I’ve been thinking about these issues a lot, at least in the past.

                      The economic incentive may not be there, but waiting for megacorps to fix things belies a failure of imagination IMO. The question I’ve been wrestling with is: how do we bring the costs of language innovation down sufficiently such that a dedicated group of hobbyists can make meaningful contributions to the state of the art? Are metasystems helpful here?

                      I think the big error of this presentation is taking too much consideration for the concerns of programmers who, for all intents and purposes, probably aren’t going to try out an experimental language anyway. What they’re saying is mostly after-the-fact rationalization for justifying fashion. And that is okay. I just don’t think we should kid ourselves into thinking that they actually care about moving the needle forward. They don’t. They just want to use what everyone else is using. They’ll learn things, but only when absolutely necessary. Therefore, I don’t think it is productive to give much credence to what they have to say because they don’t have much skin in the game to begin with. That shifts the audience to enthusiasts, who tend to be a lot more forgiving.

                      It should be possible for a dedicated group of enthusiasts to innovate and disrupt the PL space. Gating progress behind the comment section and the low imagination of megacorps only fosters an environment of mediocrity and ‘pragmatism.’

                      1. 1

                        I think this conflates engineering as “PLT” with software engineering as the diverse array of techniques, tools, and processes we use to make stable software. Even if radically new languages aren’t coming out, we can still advance the field in other ways.