1. 61
    1. 46

      I would happily make a long-term wager that small, bespoke languages will not replace high-level languages in any meaningful way. I have three reasons for believing this:

      1. The relative time/effort savings of “using the right language for the job” are not as great as they seem.
      2. The cost of working with multiple languages is greater than it seems.
      3. Political forces of organizations, and the individual force of laziness, favor high-level languages.
      1. 9

        Very good points. It might very well be the case that “little languages” will take their place next to Lisp as one of those ideas that showed impressive results but failed to make a meaningful dent in the industry.

        1. 8

          As someone who has done a lot of research on program semantics, static analysis and verification, I have gone from thinking little DSLs are ugly to thinking they are the future (as you can see from my other comments in Lobsters).

          The key idea is that verifying programs written in Turing-complete languages is hard. We should either try to automate as much as possible (and here static analysis might be the most practical approach, but it is somehow less popular than other techniques) or switch to DSLs, where verification is much easier.

          I think what could make DSLs take off is some tooling to construct them (like Racket) paired with some tooling for building verifiers.

          1. 4

            An interesting related example to this big languages vs DSLs debate is the design of Julia vs Python in the context of ML.

            Julia has taken the big Turing-complete language route, whereas Python is just a framework to build little DSLs (or not so little, JAX has around 200 operators). Julia is undoubtedly more elegant, but it’s a lot trickier to implement differentiable programming on a huge language vs a tiny DSL.

            So, the end result is that nobody uses Julia to build large ML models because autodiff is not robust enough. You can find a very long and illuminating discussion here: https://discourse.julialang.org/t/state-of-machine-learning-in-julia/74385/2

        2. 8

          What’s interesting about “little languages” in a Lisp system is that they are no different than libraries. If you use that same argument in non-Lispy languages, “libraries are little languages,” then “little languages” are already incredibly successful and are absolutely the future. The API is a “little language.”

          Personally, I want these “library languages” to be able to provide more to the higher-level languages they are designed for, but I seem to be on the losing side of this, too. I recently wrote a small wrapper on top of Go’s database/sql to make it easy to do transactions across our database abstraction, and, of course, the only way to make this work reasonably was to take a func() error as a sort of “block” and on error rollback, else commit. But of course, most Go programmers would cringe at this, and even I cringe at it in some ways.

        3. 3

          Lisp is a funny comparison to make.

          One thing I like about Lisp (Common Lisp, specifically) is that the language lets you embed “little languages” and DSLs, so this trade-off doesn’t need to exist there - if I need a new “language” to describe something, I can create it and use it easily without messing around with new tooling, editor support, etc. It’s basically no different than using a library.

      2. 2

        I think I mostly agree here… but a couple points:

        1. Your usage of “high-level” seems off here. Do you mean “general purpose” or just “big”, maybe, instead? I’d argue that many/most of the little languages in question are also high-level by any common definition.
        2. I think one thing this misses is another route to success for little languages, which is to bypass “traditional software development” altogether and allow end users and domain experts to put together things that are “good enough.” There’s a long history of things like Access and Excel, up to IFTTT and the glut of modern “low-code/no-code” projects. I’d argue some of these absolutely have (and will continue to have) that sort of success.
      3. 1

        i might argue with #1, but #2 and #3 are definitely huge factors. if a ton of work is put into reducing the cost of working with multiple languages industry-wide i can see the strategy becoming more common.

    2. 22

      I never really like the “we need something as elegant as Maxwell’s equations” argument, because as soon as you substitute anything into the equations you get a gigantic mess that’s so big everybody uses alternate approaches to solve stuff. PDEs are no joke.

      More relevant to your actual point, IMO the biggest barrier for little languages is tooling adoption. LSP/Treesitter plugins are 1) a lot of work to make, and 2) enormous productivity boosters. That pushes people to stay with big languages with extant tooling.

      EDIT: I really wouldn’t trust the STEPS program’s claims that they got feature-parity of a 44,000 LoC program with only 300 lines of Nile, not without carefully evaluating both programs. Alan Kay really likes to exaggerate.

      1. 3

        Yeah, tooling-wise you’re really swimming upstream as users won’t even have syntax highlighting from day one.

        I guess the upside is that certain tools become much easier to create for little languages (e.g. Bret Victor’s graphical Nile debugger; writing a simplified regular expression engine could realistically be an undergraduate homework project). But it’s still work that needs to be done, and it’s the kind of work that won’t pay anyone’s bills.

      2. 3

        Regarding Nile vs. Cairo—yes, if you read the fine print the graphics rendering pipeline actually works out to be much more, as that doesn’t include some of the systems underlying Nile, like Gezira. Looking at the STEPS final report you can see that the total amount of graphics code ends up being somewhere closer to 2000-3000 lines. As I mentioned briefly in the article, part of that size reduction was probably due to replacing hand-rolled optimizations with JITBlt.

        However, if you think about it, JITBlt isn’t really connected to the whole “little language” concept at all; you could’ve probably achieved the same thing by adding a JIT compiler as a library in Cairo, or by using some really gnarly C++ template tricks, ending up with a significant size reduction of the Cairo code base instead.

        (There might be some other caveats as well that I’m not aware of—for example, there could be a lot of backwards-compatibility code in Cairo that they could just skip implementing, as Nile is a green-field project.)

      3. 1

        EDIT: I really wouldn’t trust the STEPS program’s claims that they got feature-parity of a 44,000 LoC program with only 300 lines of Nile, not without carefully evaluating both programs. Alan Kay really likes to exaggerate.

        The claim about Cairo really stood out to me. We run Cairo, nobody runs Nile, after all. If there is this amazing library that does all the same things in just 300 lines of code, then it would have been a no-brainer to adopt it when it came out… unless there are some details that we would rather not talk about.

        1. 1

          I see 3 such potential “details”:

          • Saying that Nile’s performance “competes” with Cairo may mean it is comparable, within the same order of magnitude. Up to twice as slow. And even if it were only 5% slower people would probably refuse to switch.
          • The 300 lines don’t count the rest of the STEPS compilation toolchain, which I believe takes about 2K lines of code. They’re just not counted here because those 2K lines are used for the whole system, not just Nile.
          • As far as I know Nile doesn’t have a C API. This one is probably a deal breaker.
      4. 1

        The link is paywalled for me, is there an alternate source?

    3. 21

      Oh is it time to hype DSLs again? That makes sense, as we’re all starting to get a little embarrassed about the levels of hype for functional programming.

      I guess next we’ll be hyping up memory safe object oriented programming.

      1. 16

        I’m just sitting here with my Java books waiting for the pendulum to swing back…

        1. 9

          I’m going to go long on Eiffel books.

          1. 6

            I think a language heavily inspired by Eiffel, while fixing all of its (many, many) dumb mistakes, could go really far.

            1. 2

              I’ve just started learning Eiffel and like what I’ve seen so far; just curious, what do you consider its mistakes?

              1. 8
                1. CAT-calling
                2. Bertrand Meyer’s absolute refusal to use any standard terminology for anything in Eiffel. He calls nulls “voids”, lambdas “agents”, modules “clusters”, etc.
                3. Also his refusal to adopt any PL innovations past 1995, like all the contortions you have to do to get “void safety” (null safety) instead of just adding some dang sum types.
            2. 1


      2. 14

        I, personally, very much doubt full-on OOP will ever come back in the same way it did in the 90s and early 2000s. FP is overhyped by some, but “newer” languages I’ve seen incorporate ideas from FP and explicitly exclude core ideas of OOP (Go, Zig, Rust, etc.).

        1. 5

          I mean, all of those languages have a way to do dynamic dispatch (interfaces in Go, trait objects in Rust, vtables in Zig as of 0.10).

          1. 13

            And? They also all support first-class functions from FP but nobody calls them FP languages. Inheritance is the biggest thing missing, and for good reason.

            1. 12

              This, basically. Single dynamic dispatch is one of the few things from Java-style OO worth keeping. Looking at other classic-OO concepts: inheritance is better off missing most of the time (some will disagree), classes as encapsulation are worse than structs and modules, methods don’t need to be attached to classes or defined all in one batch, everything is not an object inheriting from a root object… did I miss anything?

              Subtyping separate from inheritance is a useful concept, but from what I’ve seen the world seldom breaks down into such neat categories to make subtyping simple enough to use – unsigned integers are the easiest example. Plus, as far as I can tell it makes most current type system math explode. So, needs more theoretical work before it wiggles back into the mainstream.

              1. 8

                I’ve been thinking a lot about when inheritance is actually a good idea, and I think it comes down to two conditions:

                1. The codebase will instantiate both Parent and Child objects
                2. Anything that accepts a Parent will have indistinguishable behavior when passed a Child object (LSP).

                I.e. a good use of inheritance is to subclass EventReader with ProfiledEventReader.

                1. 10

                  Take a cookie from a jar for using both LSP and LSP in a single discussion!

                2. 4

                  Inheritance can be very useful when it’s decoupled from method dispatch.

                  Emacs mode definitions are a great example. Nary a class nor a method in sight, but the fact that markdown-mode inherits from text-mode etc is fantastically useful!

                  On the other hand, I think it’s fair to say that this is so different from OOP’s definition of inheritance that using the same word for it is just asking for confusion. (I disagree but it’s a reasonable argument.)

                3. 2

                  Inheritance works wonderfully in object systems with multiple dispatch, although I’m not qualified to pinpoint what it is that makes them click together.

                4. 1

                  I’ve lately come across a case where inheritance is a Good Idea; if you’re plotting another of your fabulous blog posts on this, I’m happy to chat :)

                5. 1

                  My impression is that inheritance is extremely useful for a peculiar kind of composition, namely open recursion. For example, you write some sort of visitor-like pattern in a virtual class, then inherit it, implement the visit method or what have you, and use this to recurse between the abstract behavior of traversing some structure and your use-case-specific code. Without open recursion you have to basically reimplement a vtable by hand, and it sucks.

                  Well, that’s my only use of inheritance in OCaml. Most of the code is just functions, sum types, records, and modules.

                6. 1

                  Forest for the trees? When you want to create a framework that has default behaviour that can be changed, extended or overridden?

              2. 4
                • obj.method syntax for calling functions — a decent idea worth keeping.
                • bundling behavior, mutable state, and identity into one package — not worth doing unless you are literally Erlang.
                1. 3

                  IMO there is a fundamental difference between Erlang OO and Java OO to the point that bringing them up in the same conversation is rarely useful. Erlang actively discourages you from having pellets of mutable state scattered around your program: sure, threads are cheap, but that state clump is still a full-blown thread you need to care for. It needs rules on supervision, it needs an API of some kind to communicate, etc, etc. Erlang is at its best when you only use threads at a concurrency boundary, and otherwise treat it as purely functional. Java, in contrast, encourages you to make all sorts of objects with mutable state all over the place in your program. I’d wager that MOST non-trivial methods in Java contain the “new” keyword. This results in a program with “marbled” state, which is difficult to reason about, debug, or apply any kind of static analysis to.

              3. 2

                In all honesty, you sound quite apologetic toward what could arguably be considered objectively bad design.

                Attaching methods to types essentially boils down to scattering data (state) all over the code and writing non-pure functions. I honestly cannot understand how anyone would think this is a good idea, other than being influenced by trends, cults, or group thinking.

                Almost the same could be said about inheritance. Why would fitting a data model into a single universal tree be a good idea? Supposedly to implicitly import functionality from parent classes without repeating yourself. Quite a silly way to save a line of code. Especially considering the languages that do it are rather verbose.

                1. 5

                  I honestly cannot understand how anyone would think this is a good idea, other than being influenced by trends, cults, or group thinking.

                  Here’s a pro tip that has served me well over many years. Whenever I see millions of otherwise reasonable people doing a thing that is obviously a terribly stupid idea, it is always a lack of understanding on my part about what’s going on. Either I am blind to all of the pros of what they are doing and only see the cons, or what they’re doing is bad at one level but good at a different level in a way that outbalances it, or they are operating under constraints that I don’t see or pretend can be ignored, or something else along those lines.

                  Billions of lines of successful shipped software have been written in object-oriented languages. Literally trillions of dollars of economic value have been generated by this software. Millions of software developers have spent decades of their careers doing this. The thought that they are all under some sort of collective masochistic delusion simply does not pass Hanlon’s Razor.

                  1. 1

                    To be honest, the more I study OOP (or rather, the hodgepodge of features and mechanisms that are claimed by various groups to be OOP), the less room I see for a genuine advantage.

                    Except one: instantiation.

                    Say you have a piece of state, composed of a number of things (say a couple integers, a boolean and a string), that represent some coherent whole (say the state of a lexer). The one weird trick is that instead of letting those be global variables, you put them in a struct. And now you can have several lexers running at the same time, isn’t that amazing?

                    Don’t laugh, before OOP was popular very prominent people thought it was a good idea to have global state in Lex, Yacc, or error handling (errno). So here’s my current guess: the success we attribute to OOP doesn’t really come from any of its overly hyped features. It comes from a couple very mundane, yet very good programming practices it adopted along the way. People attributed to the hyped stuff (such as inheritance) a success that was earned mostly by avoiding global variables.

                    Abstract data types are amazing, and used everywhere for decades, including good old C. The rest of OOP though? Contextual at best.

                  2. 1

                    It has been the opposite for me.

                    • typecasting everything to and from Object in early versions of Java
                    • EJB 2
                    • the Bower package manager. Its creator wrote on Stack Overflow that he was confused when he created the project and that it was essentially useless.
                    • the Ruby gems security incident
                    • the left-pad fiasco
                    • Apache web server .htaccess configs

                    I could go on with more esoteric examples to an ever growing list.

                    All of these had critics screaming long before they happened: why?

                2. 3

                  Many decisions are only clearly good or bad in retrospect.

            2. 6

              Inheritance is the biggest thing missing, and for good reason.

              That reason being “inheritance was the very first mechanism for subtyping, ADTs, and code-reuse, and people using it got ideas for better mechanisms from it.” ;)

              1. 1


            3. 3

              The first versions of Simula and Smalltalk didn’t have inheritance either. Self and other prototypal object-oriented languages don’t use traditional inheritance either. We still call all of them object-oriented.

              Honestly, it’s well beyond time that we retire all programming language paradigm terms. Modern languages simply aren’t organized into paradigms the way older, simpler languages were.

              It’s like we’re looking at a Honda Accord and arguing over whether it’s a penny farthing or a carriage. The taxonomy no longer makes sense.

        2. 1

          Ah yes and that’s why it’s ripe to have a come back. :)

          Seriously though I expect that the next incarnation will be “oop without inheritance” or something. Probably combined with some large corporation “inventing” gc-less memory management.

          1. 2

            The good parts of OOP never really left. We already have that exact language: Rust. It has formal interfaces (Traits), encapsulation, polymorphism, and gc-less memory management.

            1. 10

              The main thing about OOP that needs to die is the idea that OOP is a coherent concept worth discussing on its own. Talk about the individual concepts as independent things! It’s much more productive.

              1. 1

                Talk about the individual concepts as independent things!

                IMO OOP these days really means inheritance and an object lifecycle. All the other concepts aren’t really unique to OOP.

                1. 3

                  I think “OOP” generally means “features of object-oriented languages that I don’t like” to a lot of people. The people using those languages don’t generally get into paradigm arguments.

                  (Personally, I consider inheritance to be common in OOP languages but not a particularly interesting or salient part of them. Many early OOP languages didn’t have inheritance and prototypal ones have an entirely different code reuse model.)

                  1. 1

                    For some people “OOP” means “features of languages I do like”. For instance I’ve seen people include templates/generics/parametric polymorphism and unnamed functions as core parts of OOP… having learned CamlLight (OCaml without the “O”) in college, I confess I was quite astonished.

                2. 2

                  You say that but it means different things to different people. I don’t disagree that your definition would be a good one if you could get people to agree on it, but I can’t assume that when other people say “OOP” that’s what they’re talking about.

        3. 1

          I think it will come back, rediscovered as something new by a new generation disillusioned with whatever has been the cool solves-everything paradigm of the previous half decade. Perhaps this time as originally envisaged with a “Scandinavian school” modeling approach.

          Of course it never left as the first choice for one genre of software… the creation of frameworks featuring default behavior that can be overridden, extended or changed.

          Those languages you mention (Go, Zig, Rust) are primarily languages solving problems in the computer and data sciences, computing infrastructure and technical capability spaces. Something is going to be needed to replace or update all those complex aging ignored line-of-business systems.

      3. 11

        There isn’t really any need to “hype” DSLs because they’re already widely used in all domains of programming:

        • front end: HTML / CSS / JavaScript, and most JS web frameworks introduce a new DSL (multiple JSX-like languages, Svelte, etc.)
        • back end: a bajillion SQL variants, a bazillion query languages like Redis
        • builds: generating Ninja, generating Make (CMake, Meson, etc.)
          • there are at least 10 CI platforms with their own YAML DSLs, with vars, interpolation, control flow, etc.
        • In games: little scripting languages for every popular game
        • Graphics: scene description languages, shader languages
        • Compilers: LLVM has its own TableGen language, languages for describing compiler optimizations and architecture (in the implementation of Go, a famously “not DSL” language), languages for describing VMs (Ruby)
        • Machine Learning: PyTorch, TensorFlow, etc. (these are their own languages, on top of Python)
        • Distributed computing: at least 10 MapReduce-derived frameworks/languages; there are internal DSLs in Scala for example, as well as external ones
        • Mathematics and CS: Coq, Lem, etc.

        All of these categories can be fractally expanded, e.g. I didn’t mention the dozens of languages here: https://github.com/oilshell/oil/wiki/Survey-of-Config-Languages – many of which are commonly used and featured on this site

        If you think you don’t use DSLs, then you’re probably just working on a small part of a system, and ignoring the parts you’re not working on.

        ALL real systems use tons of DSLs. I think the real issue is to mitigate the downsides.

        1. 1

          Oh yes but at the same time if you haven’t seen the hype for DSLs then you haven’t spent long enough in the industry to go through that part of the hype cycle. DSLs are what they are and it looks like we might be entering a hype cycle where people want to make them out to be much more.

          1. 3

            I don’t agree, I’ve been in the industry for 20+ years, there are plenty of things more hyped than DSLs (cloud, machine learning, etc.)

            DSLs are accepted standard practice, and widely used, but often poorly understood

            I’m not getting much light from your comments on the subject – you’ve made 2 claims of hype with no examples

            1. 2

              Here’s an example of recent hype https://www.codemag.com/Article/0607051/Introducing-Domain-Specific-Languages

              Here’s some hype from the year 2000 https://www.researchgate.net/publication/276951339_Domain-Specific_Languages

              Arguably the hype for 4GLs was the prior iteration of that specific hype.

              I’m not arguing that DSLs are bad - I’m saying that they’re one of the things on the roster of perfectly good things that periodically get trumpeted as the next big thing that will revolutionize computing. These hype cycles are characterized by attempts to make lots of DSLs when there isn’t a strong need for it or any real payoff to making a language rather than a library.

      4. 4

        I know it might sound a bit controversial, but the way I see it we need to reach a new level of abstraction in order for large-scale software development to be sustainable. Some people might say AI is the way forward, or some other new programming technique. Either way I don’t think we’ll get there by incrementally improving on the paradigms we have—in order to reach the next level we’ll have to drop some baggage on the way up.

        1. 4

          I mean, humans aren’t getting better at grokking abstraction, so I don’t know that “new levels of abstraction” are the way forward. Personally, I suspect it means more rigor about the software development process–if you’re building a tall tower, maybe the base shouldn’t be built with a “move fast and break things” mentality.

          1. 3

            Grokking abstractions isn’t the problem; at the end of the day, abstractions are just making decisions for the users of an abstraction. Over-abstraction is the root of many maintainability woes IMO; the more a programmer knows about what’s actually going on underneath the better, but only to the degree that it’s relevant.

        2. 3

          I’ve heard it before. DSLs have their place, and some people love them while others hate them. This is one of a rotating cast of concepts that you’ll eventually see rehyped in 10 years.

    4. 7

      I think the main thrust of this argument depends on the claim that existing little languages like SQL and regular expressions are somehow simpler than “big” languages, and thus better suited for understanding/implementation. But I think to lump SQL and regexes together shows a lack of understanding.

      SQL is a horrible example of a “little language.”

      It has an incredibly powerful/complicated interface which is notoriously hard for people to understand. It even has control structures: loops, if/else, procedure calls…

      And a real database is as much of an optimizing compiler as a compiler for C++. You’re just trading dataflow analysis for relational algebra, etc.

      If SQL is a little language, then that bodes poorly for new little languages staying little!

      1. 2

        The link you give for “control structures” describes PL/pgSQL, which is not part of SQL.

    5. 6

      However, there are still many unanswered questions, such as: How should these little languages talk to each other? Should they compile to a common intermediate representation? Or should different runtimes exist in parallel and communicate with each other via a common protocol?

      This portion at the end asks the good, hard questions!

      I definitely believe in the ability of DSLs to raise the level of abstraction, and ensure composition (languages compose!). Abstraction is really the only way we have of building bigger systems that are reliable and fast.

      But there is such a thing as a language that’s too little, and there is the tendency for such languages to evolve badly (into general purpose languages) and haphazardly (SQL is mentioned in this thread).

      One observation I made early on in Oil is that shell, Awk, and Make have highly overlapping concerns, and have evolved badly. The combination of the 3 is worse than just one language.

      Example Code in Shell, Awk, and Make

      (Perl recognized that, but ironically I’d say both shell and make are more widespread than Perl these days – e.g. shell is #6 in growth on Github in 2022 – https://octoverse.github.com/2022/top-programming-languages )

      I still think it makes sense to consolidate little languages in this case, although I’m learning first hand that making a language runtime is hard :)

      As for the common protocol, a minimum is that shells need to support some JSON, since that has become the de-facto glue almost everywhere (e.g. tools like Clang, which have nothing to do with the web, output JSON for the compile commands database, etc.) Oil also has a protocol with Unix domain sockets to pass file descriptors between processes, which can improve startup time, another barrier to composing little languages.

      Anyway the overall point is that if you want more little languages, you also want a better shell for interop.

      It’s a similar argument for having all of C++, Rust, Swift, Go, Zig, etc. Everybody wants all systems to be written in their favorite language – that’s why they argue about it so much. But in reality the future is more heterogeneity in terms of programming languages, and it will only be reconciled with glue (shell-like languages, which includes Make).

      1. 2

        It feels like web-tech may lead us towards both the common protocol and the common runtime…

        Like you say, JSON has become an almost universal exchange format. I think WASM may eventually become the common runtime. Initially many languages are just targeting it as a way to get a foothold in the web app space, but I think as that continues there will be a push for more standardized interop between them all.

    6. 5

      Some of the points about SQL, like not writing an explicit algorithm, apply to Prolog and similar languages, which are more general than “little languages”.

      1. 2

        Good point—I initially had a Prolog example in the article but dropped it in favour of shell scripts, as I thought they better illustrated the idea of “little languages”. But Prolog might actually be more of a happy medium; I don’t have high hopes of it ever becoming mainstream but I’d be very happy to be proven wrong!

    7. 5

      I highly recommend watching Guy Steele’s Growing a Language for the counter-argument.

      The pertinent question is: where do you want to develop new behavior? In the compiler, or in library code? I assert that compiler features are inherently more risky, and much harder to write.

      1. 3

        Perhaps there’s an apt analogy to be made between the friction involved with changing the compiler from underneath the code written for it, and changing words in a closed linguistic class from under a spoken language’s speakers. Changes at the foundation level seem to shake things up for people’s thought process (how they grok syntax, how they think about their programs) more so than adding sugar at the top of the stack.

        I think it’s just one of the challenges of designing any formal method of human expression, let alone programming languages: the goalposts are always shifting, but no one wants to update the rulebook. So they just invent a new sport instead. It’s a silly dance we do, but there may be no good way around it. Just one area of many where we “elegance”-loving coders perhaps need to tolerate some amount of inevitable inelegance.

    8. 5

      Isn’t it at least as good to have a language that allows little-language expressiveness without loss of big-language functionality?

      A number of languages are good at letting you define rich DSLs that also happen to be valid programs in the larger language, at the cost of some constraints on syntax. LISP comes immediately to mind, of course, but rich DSLs are also common in other languages like Kotlin.

      1. 2

        This is a valid point. My counter-argument would be that in some cases you want that loss of big-language functionality, e.g. to make static analysis easier. Some problems I have with macro-based DSLs are:

        1. The runtime usually doesn’t know or care whether you started off from a macro or not, since it only sees the expanded code. This means that e.g. error messages may lack context, or be worded in a confusing manner, etc. Macros are a wafer-thin abstraction, and unpredictable things can happen when you break it.
        2. You don’t know what “rules” from the original language apply in the macro world. An example that comes to mind is the Cascalog DSL for Hadoop. Since Hadoop ships JARs to different machines for running its distributed MapReduce, any function references you use need to be named; supplying a lambda function will not work. This breaks the expectations from the Clojure “host” language in a rather brusque manner, where you’re used to treating named functions and lambdas as interchangeable.

        Don’t get me wrong—I really like macros and basically think all languages should default to using S-expressions and macros, but I’m not 100% convinced about building DSLs with them. Racket seems interesting for this very reason—it actually does allow you to take things away, by letting you “opt out” of the standard library.

        1. 2

          That’s quite an interesting argument. Perhaps once you get to a certain point with macros it’s better to reimplement them as a proper language. You can still use s-expressions though - “just” pass unevaluated lists to a “compiler” function that converts the code to a state machine.

          The SRE syntax does this, and there are plenty of examples of assemblers that take s-expression syntax and produce bytecode (e.g. sassy for x86, my own bpf assembler and glls for GL shaders).
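          To make the “pass unevaluated lists to a compiler function” idea concrete, here is a minimal sketch in Python (all names hypothetical, with tuples standing in for s-expressions): a tiny SRE-flavoured pattern language whose `compile_pattern` function turns nested data into matcher functions.

```python
# A toy SRE-style pattern language: nested tuples act as the
# "s-expressions", and compile_pattern() turns them into matchers.
# Supported forms (a hypothetical subset): "literal",
# ("seq", ...), ("or", ...), ("star", p).

def compile_pattern(pat):
    """Compile a pattern into a function (text, pos) -> set of end positions."""
    if isinstance(pat, str):  # literal string
        def lit(text, pos):
            end = pos + len(pat)
            return {end} if text.startswith(pat, pos) else set()
        return lit
    op, *args = pat
    if op == "seq":  # match each sub-pattern in order
        parts = [compile_pattern(a) for a in args]
        def seq(text, pos):
            ends = {pos}
            for part in parts:
                ends = {e2 for e in ends for e2 in part(text, e)}
            return ends
        return seq
    if op == "or":  # match any alternative
        alts = [compile_pattern(a) for a in args]
        def alt(text, pos):
            return {e for a in alts for e in a(text, pos)}
        return alt
    if op == "star":  # match the inner pattern zero or more times
        inner = compile_pattern(args[0])
        def star(text, pos):
            ends, frontier = {pos}, {pos}
            while frontier:
                frontier = {e2 for e in frontier for e2 in inner(text, e)} - ends
                ends |= frontier
            return ends
        return star
    raise ValueError(f"unknown operator: {op}")

def matches(pat, text):
    """True if the whole text matches the pattern."""
    return len(text) in compile_pattern(pat)(text, 0)

# ("seq", "a", ("star", ("or", "b", "c"))) is roughly the regex a(b|c)*
pat = ("seq", "a", ("star", ("or", "b", "c")))
print(matches(pat, "abcb"))  # True
print(matches(pat, "abd"))   # False
```

          The “program” stays ordinary host-language data, so you keep the s-expression ergonomics while the compiler function is free to reject or transform anything it doesn’t understand.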

    9. 5

      Having used Ruby a few times now, with its own opinionated approach to “little languages”, my thought is simple: It’s easier to learn a big language than it is to learn two little languages. 🤷‍♂️

      1. 2

        Very much this: big languages tend to win usage. They’re also more work to implement, so you’re at less risk of many competing implementations (cf. Scheme).

    10. 4

      Not convinced… and I say that as someone who has designed a little language in the past.

      • There is always general-purpose stuff you want to do as well… so the language gets big.
      • The documentation / tutorials / examples / debuggers / tools / compiler warnings / … are non-existent compared to a big language like Ruby.
      • If I were to do it again, I’d use a general-purpose language like Ruby, provide enough classes and modules to represent the domain, and use ordinary Ruby to glue it together.
    11. 3

      Having spent a significant amount of my professional life working with a web-scale (sorry for the term) DSL in the form of Elm, I’ve seen both sides: the side where people are very happy and content with a language that is optimised for doing frontend web stuff in one particular way, and the side where people reach the limit of what Elm can do and start asking themselves “why can’t I use Elm on the backend?” or “why can’t I use websockets inside Elm?” or “why can’t I write JS interop in the way I want?”

      My response to those people while I was active in the Elm community was that Elm does things one way, and it does those things well. To do other things, use other options. There is no rule that says you can’t use different things in different places.

      But I think it’s a common thing with DSLs: if you adopt it for some small part of your code, it becomes hard to maintain. It’s harder for your team of TS developers to understand fully. It’s harder to keep the DSL engine in sync with the rest of your application. And in the best case scenario, as with Elm, people will love the language design so much that they’ll want to use it elsewhere, and be frustrated that they can’t. That frustration isn’t a bad thing, though. It leads to inspiration and libraries and new language features being developed.

    12. 3

      To be honest, I don’t think small languages will take over programming.

      What I do think will happen is that small and/or academic languages will act as laboratories for abstractions, and that, depending on the cost of a given abstraction, some of them will be folded into existing languages (like async), or will have a strong enough upside to spawn their own, new language (Rust and the borrow checker seems like the most successful example here).

      Like, we’ve already passed through what maximal flexibility looks like, now it’s a matter of figuring out the most useful guard rails.

    13. 2

      This makes me think of The Programming Language Wars, a thought-provoking* paper I read for a class. Skip to Section 2.2 if you - like me - generally don’t have the patience to casually read research papers.

      * Jury’s out on the, erm, interesting choices of quotes.

    14. 2

      Hey @chreke, you said

      The end result of STEPS was KSWorld, a complete operating system including both a document editor and spreadsheet editor, which ended up being about 17 000 lines of code.

      Do you know if the source code is available? Can people actually download and run KSWorld?

      1. 1

        I don’t think so unfortunately, which I think is kind of weird. However, you can find Gezira, JITBlt and Nile on Dan Amelang’s GitHub page: https://github.com/damelang

    15. 2

      I agree that domain-specific languages will become more prevalent, and DSL-specific tooling in IDEs and CLIs will come along to embrace it.

      Every business problem has several ‘dimensions’ as I would call them:

      a) business analogies (e.g. how we model business-specific entities and workflows)

      b) time component - how we model evolution/change

      c) constraints component – how we model rules and constraints that apply to ( a ) and ( b )

      d) integration component – how we reflect our internal ‘product’ onto the outside world

      Those 4 dimensions each require a DSL in a way, and then there is a ‘Combined’ DSL that integrates the language of these 4 dimensions together into a cohesive, executable model.

      For each one of these 4 + 1 dimensions we need debugging tools, model checking tools, visualization tools, documentation tools, and iterative development tools.

      We are probably 30-40 years away from the above (as it requires educational, academic, and industry support) – but we will get there, as software will become more and more of an engineering discipline that does not tolerate constant patching, fixing and ‘fail-fast’ operating models.

    16. 2

      DSLs mean very different things to many people, we should talk about them that way.

      • It could be a whole new textual language with its own runtime
      • It could be a whole new textual language that compiles into the base language
      • It could be a macro
      • It could be JS-style proxies, or similar Ruby libraries
      • It could even just be a library (like xstate)

      All of these are “languages”, in the sense that they have different semantics & sometimes syntax than the language they’re used alongside.
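      The “just a library” end of that spectrum can be surprisingly language-like. Here is a minimal sketch in Python (all names hypothetical, in the spirit of libraries like xstate): the state machine is plain data, and a tiny class supplies its semantics.

```python
# A state-machine DSL that is "just a library": the machine itself is
# plain data (a dict of transitions), interpreted by a small engine.
# All names are hypothetical, loosely in the spirit of xstate.

class Machine:
    def __init__(self, initial, transitions):
        self.state = initial
        # transitions maps (current_state, event) -> next_state
        self.transitions = transitions

    def send(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            raise ValueError(f"no transition for {event!r} in state {self.state!r}")
        self.state = self.transitions[key]
        return self.state

# The "program" in this little language is just a dict literal.
traffic_light = Machine(
    initial="red",
    transitions={
        ("red", "timer"): "green",
        ("green", "timer"): "yellow",
        ("yellow", "timer"): "red",
    },
)

print(traffic_light.send("timer"))  # green
print(traffic_light.send("timer"))  # yellow
```

      Because the definition is ordinary data, the host language’s tooling (types, linters, debuggers) still applies, which is exactly the trade-off this spectrum of DSL styles is about.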

      I agree with what hwanye said, tooling is what actually makes a DSL good or not. If you have a whole new textual language & there’s no LSP, or if it doesn’t generate good type information for a language like TypeScript for example, it’s not going to be a pleasant experience.

      If the DSL requires doing lots of work to convert data between the base language & the DSL, it’s often gonna feel like a pain to work with - using SQL with a library that only returns tuples for rows feels like this.
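      Python’s built-in sqlite3 module shows both sides of that boundary: by default rows come back as bare tuples, but setting `row_factory` to `sqlite3.Row` gives name-based access, which makes the SQL-to-Python crossing much less painful.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada')")

# Default: rows are plain tuples -- positional access only, which
# silently breaks if the column order in the query ever changes.
row = conn.execute("SELECT id, name FROM users").fetchone()
print(row)  # (1, 'Ada')

# With a row factory, columns become addressable by name.
conn.row_factory = sqlite3.Row
row = conn.execute("SELECT id, name FROM users").fetchone()
print(row["name"])  # Ada
```

      The SQL itself is unchanged in both cases; only the data-conversion layer at the boundary differs, and that layer is what decides whether the DSL feels pleasant.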

      If the DSL’s semantics are complex because it relies on obscure features in the base language, it’s likely going to have poor types or horrible stack traces, and it might be difficult to debug.

      These are not small issues, but dear god I would kill for great DSLs in certain places like state machine definitions, and I don’t want to go back from using a DSL like Svelte for UIs.