1. 41

The recent announcement from the Go team led me to put into words some of the ideas I have been having recently about programming language design.

If my choice of the PLT tag was inappropriate (or, after reading the article, you don’t agree with my intentional omission of the Go tag), I’d be happy to hear about it.

  2. 19

    I clicked expecting clickbait, but was pleasantly surprised to find a thoughtful, nuanced take. I don’t necessarily agree (I think the long-term danger to Go is not refusing to learn from other languages, but rather in bolting on features from them), but it was worth the time to read.

    I’ll second the suggestion of Tcl. While I myself prefer Lisp, Tcl is an amazing little language which really doesn’t deserve to be as little-known as it is. It is a bit long in the tooth, but it has some great ideas. Rebol, too, is worth a look.

    1. 4

      If you like Rebol you might want to look at a more modern take on it:

      https://www.red-lang.org/p/about.html

      I haven’t used it much myself but it looks like a promising, impressive full-stack language.

      1. 2

        An early web framework, the one AOLServer used, was done with Tcl. That makes its obscurity even stranger given AOL’s popularity and traffic.

      2. 13

        Regarding this point:

        Scheme seems to be an interesting compromise.

        Scheme is relevant to this post, I think, but in a slightly different way. It has basically tried all the responses to the dilemma outlined, at various points and sometimes simultaneously. Various Schemes added more or fewer things. The 6th revision (R6RS) tried to standardize large amounts of similar functionality, but was rejected by significant parts of the community for being too big/complex. Racket split off at this point, mostly going the “big” direction and rebranding itself as no longer a Scheme. The 7th revision (R7RS) responded by issuing two language standards, one following each path, named R7RS-large and R7RS-small. Various implementations have chosen either of those standards, or some other point near or between them, or stuck with R6RS, or stuck with R5RS, etc.

        I definitely think all this experimentation is interesting, but I’d argue the jury is still out on whether any kind of stable compromise in the design space has been reached.

        1. 13

          At least R7RS seems to be pretty much universally accepted, and makes it possible to write portable libraries. We’re not quite there yet, but I believe Scheme is definitely close to the point where it is easy enough to write portable code. As for all the other points, as I was reading the article I kept thinking “Scheme fits the bill” all the time, until of course the author mentioned it.

        2. 9

          Clojure seems to have everything you want from such a language and nothing more: a stable and small core, great tooling, and it discourages macros, so no fancy DSLs, etc.

          1. 7

            Have you considered Tcl?

            1. 2

              I’m somewhat embarrassed to say that I haven’t taken more than a passing glance at it. It certainly seems to check the boxes I was looking for.

              How decoupled is it from Tk? If I wanted to write a web application, how much support would I find? (Wapp looks cool).

              Edit: There are quite a few options: https://wiki.tcl-lang.org/page/Category+Webserver

              1. 6

                Tk is a separate extension from Tcl, like Expect and TLS are separate extensions. There are many ways to use Tcl for web applications.

                1. 3

                  It got used for OpenACS (and, relatedly, AOLServer): https://en.wikipedia.org/wiki/ArsDigita_Community_System. OpenACS still seems to be around, so it must be useful to someone.

                  1. 2

                    Ran into OpenACS twice in my career, both times it’s been used for applications which were more than 10 years old at the time and would have crossed the 15 year mark by now. Impressively enough, at least one of them seems to be still online! Can’t check the second one because I’ve lost access to that intranet since.

              2. 6

                I disagree that simplicity is a worthwhile goal for a programming language. Simplicity has no value for the user.

                Easiness can be a goal because it means newbies are quickly productive. Python is an example.

                Safety can be a goal depending on the use case.

                You can try simplicity as a proxy metric to achieve a safe and easy language. However, if you shift the complexity into the libraries instead (fancy macros), then the resulting code is neither safe nor easy.

                Easy and safe code is valuable. If you need to sacrifice some language simplicity for it then you should probably do so.

                1. 2

                  Simplicity has no value for the user.

                  Wrong. Managing complexity and abstraction is THE central problem in programming. A simpler language has the benefit that its abstractions are easier to understand and work with, and therefore (ideally) less complexity is created.

                  1. 1

                    A simple abstraction is usually not easier to understand. It is usually more leaky and has more pitfalls.

                    1. 1

                      [citation needed]

                      EDIT: I’ll provide what I’m basing my knowledge on I guess.

                      1. It’s literally the first damn thing that Sussman says in the SICP recordings.
                      2. Something fundamental, repeatedly shown by cogsci, is that the human brain has a limited ‘stack space’. A good programmer will understand the complexity and tame it in a way that allows them to simplify that (otherwise it’s not an abstraction, it’s bloat). However on some level that programmer still needs to know how it translates to the layer of abstraction below them. The simpler the abstraction, the less work it is to understand how it works, the less work to fix it when it breaks

                      because

                      fundamentally

                      all abstractions

                      are

                      in some way

                      leaky

                      1. 1

                        You might know Rich Hickey’s classic talk Simple Made Easy. The definition there:

                        Simple is often erroneously mistaken for easy. “Easy” means “to be at hand”, “to be approachable”. “Simple” is the opposite of “complex” which means “being intertwined”, “being tied together”. Simple != easy.

                        Let us consider an example: I want to store some data persistently on disk. We can use the file system or SQLite. SQLite is certainly more complex, since it introduces the mental overhead of SQL and more code to implement it. However, we recently had this article about the perils of file systems, which recommends:

                        If you want something lightweight and simple that you can use in most places you’d use a file, SQLite is pretty good.

                        So I would argue that SQL is an easier abstraction compared to the simpler file system abstraction.
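
                        To make the tradeoff concrete, here is a rough Go sketch of both approaches. It assumes the third-party mattn/go-sqlite3 driver, and the file and table names are made up:

                          package main

                          import (
                              "database/sql"
                              "fmt"
                              "os"

                              _ "github.com/mattn/go-sqlite3" // assumed third-party driver
                          )

                          func main() {
                              // Simple: the file system. Almost no machinery, but atomicity,
                              // partial writes, and querying are all left to you.
                              if err := os.WriteFile("counter.txt", []byte("42"), 0o644); err != nil {
                                  panic(err)
                              }

                              // More complex (SQL, a driver), but easier: SQLite handles
                              // atomic writes and structured queries for you.
                              db, err := sql.Open("sqlite3", "app.db")
                              if err != nil {
                                  panic(err)
                              }
                              defer db.Close()

                              if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)`); err != nil {
                                  panic(err)
                              }
                              if _, err := db.Exec(`INSERT OR REPLACE INTO kv VALUES ('counter', '42')`); err != nil {
                                  panic(err)
                              }

                              var v string
                              if err := db.QueryRow(`SELECT v FROM kv WHERE k = 'counter'`).Scan(&v); err != nil {
                                  panic(err)
                              }
                              fmt.Println(v) // prints 42
                          }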

                        Another example: how do you serialize the data you want to store? Not at all, just dumping the memory contents (simple)? Or by using some serialization library (easier, but more complex)?

                        Another example: what does a language’s string data type look like? C’s null-terminated char pointer is simple. Explicitly storing the length in addition is more complex, but avoids problems and is thus easier.

                        Using the numerics of your hardware (32-bit words, IEEE floating point) directly is simple. Wrapping them in some BigNum data type is more complex but easier, because you don’t need to consider overflows anymore.

                        The last two examples can be applied to language and library design. Does your language come with a length-aware string datatype? Does your language use an arbitrary-precision number data type?
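
                        For what it’s worth, Go answers the first question with a length-aware string type and the second with an opt-in library type rather than a default. A small sketch of both points:

                          package main

                          import (
                              "fmt"
                              "math/big"
                          )

                          func main() {
                              // A Go string stores its length explicitly, like a Pascal string:
                              // len() is O(1) and cannot run past the end the way C's strlen can.
                              fmt.Println(len("hello")) // 5

                              // Simple: hardware numerics. The caller must think about
                              // overflow at every step.
                              var a int32 = 2000000000
                              fmt.Println(a + a) // wraps around: -294967296

                              // More complex machinery, but easier: big.Int never overflows.
                              b := big.NewInt(2000000000)
                              fmt.Println(new(big.Int).Add(b, b)) // 4000000000
                          }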

                        1. 1

                          You deliberately misunderstand the idea of simplicity, ignoring my clear intent, and then you build false examples based on that.

                          There are different types of complexity. Complexity of semantics, complexity of implementation, complexity of structure, etc. “Simple” is something opposed to one or more of those.

                          Why is a Pascal String more complex than a C String? They’re simply different ways of saying the same thing. They both have around the same semantic and structural complexity, but it’s offloaded in different places. Getting the length of a C String is more difficult than getting the length of a Pascal String. Splitting a Pascal String from one array into many arrays is a difficult thing.

                          Likewise, the semantics of a floating point calculation, and the implementation of rounding, mean that floating point is much more complex in implementation than a bignum implementation.

                          There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.

                          – C.A.R. Hoare
                2. 6

                  Since you set up the context of the post as being about Go, I was wondering if you would do a comparison with Rust and find that you preferred it. But seeing your criteria later on, I suppose that’s not the direction you want to go in. Maybe Elixir, then? They have a great community with a BDFL (José Valim), great tooling (mix) and documentation (hexdocs), a small core language with most constructs (like def, if, …) being built out of hygienic macros, limited amount of operator overloading so no crazy DSL syntax, and of course a great stdlib and a widely known ‘killer app’ web framework, Phoenix.

                  1. 2

                    Relevantly, Elixir was also recently declared feature-complete:

                    As mentioned earlier, releases was the last planned feature for Elixir. We don’t have any major user-facing feature in the works nor planned. I know for certain some will consider this fact the most exciting part of this announcement!

                    It is also important to highlight that there are two main reasons why we can afford to have an empty backlog.

                    First of all, Elixir is built on top of Erlang/OTP and we simply leverage all of the work done by Ericsson and the OTP team on the runtime and Virtual Machine. The Elixir team has always aimed to contribute back as much as possible and those contributions have increased in the last years.

                    Second, Elixir was designed to be an extensible language. The same tools and abstractions we used to create and enhance the language are also available to libraries and frameworks. This means the community can continue to improve the ecosystem without a need to change the language itself, which would effectively become a bottleneck for progress.

                  2. 5

                    the language must avoid making it too easy to “silo” into a custom DSL that other programmers must learn to work on your program

                    Why? Why are DSLs such a bad thing? Don’t they help mitigate the “feature-grafting” you were bemoaning?

                    This includes a good debugger (bonus points for what Common Lisp does by not unwinding the stack!)

                    Anyone got a link? I’ve never heard about this before.

                    The inclusion of some kind of Web framework (similar to net/http in scale and simplicity) and image processing in the standard library would help with initial adoption.

                    This reminds me of Racket!

                    1. 4

                      I clarified my point regarding DSLs a little elsewhere in this comment thread. Yes, they do help prevent “bolt-on syndrome,” but at the same time they decrease interoperability throughout the stack by encouraging writing towers of incompatible abstractions. I think it hurts CL’s library ecosystem (but that could also have to do with its lack of “leadership,” which isn’t an issue for Racket or CHICKEN).

                      From A Road to Common Lisp:

                      In Common Lisp you can certainly choose to panic on or ignore errors, but there’s a better way to work. When an error is signaled in Common Lisp, it doesn’t unwind the stack. The Lisp process will pause execution at that point and open a window in your editor showing you the stack trace. Your warrior’s sword is hovering over the monster, waiting for you. At this point you can communicate with the running process at the REPL to see what’s going on. You can examine variables in the stack, or even run any arbitrary code you want.

                      Once you figure out the problem (“Oh, I see, the calculate-armor-percentage function returns 0 if a shielding spell ran out during the same frame”) you can fix the code, recompile the problematic function, and restart the execution of that function (or any other one!) in the call stack! Your warrior’s sword lands, and you move back to what you were doing before.

                      You don’t have to track down the bug from just a stack trace, like a detective trying to piece together what happened by the blood stains on the wall. You can examine the crime as it’s happening and intervene to save the victim. It’s like if you could run your code in a debugger with a breakpoint at every single line that only activates if something goes wrong!

                      1. 2

                        In Common Lisp you can certainly choose to panic on or ignore errors, but there’s a better way to work. When an error is signaled in Common Lisp, it doesn’t unwind the stack.

                        I’ve read about this before and understand why it would be a great feature in production software – almost like a superpower. What I don’t understand is why such power was not enough for everyone in the world to have switched to CL immediately, and also why none (afaik) of the languages created after CL replicated that feature. Is the feature very expensive or complex? Does it make error handling needlessly complicated for the common case?

                        1. 2

                          What I don’t understand is why such power was not enough for everyone in the world to have switched to CL immediately, and also why none (afaik) of the languages created after CL replicated that feature.

                          I think the short version is that it requires features which Lisp has but other languages don’t: dynamic scope, to enable different functions to mutate the context; closures, which were uncommon decades ago; macros, to make the whole thing legible; a runtime.

                          Does it make error handling needlessly complicated for the common case?

                          Well, I think that this (from the HyperSpec) is pretty legible:

                          (handler-case (read s)
                            (end-of-file (c)
                              (format nil "~&End of file on ~S." (stream-error-stream c))))
                          

                          HANDLER-CASE evaluates one expression, returning its value if none of the indicated conditions are signaled, or the value of the matching error handler if one is; any other errors bubble up as normal. This is a pretty good analogue for:

                            try:
                                s.read()
                            except EndOfFile as e:
                                print("End of file on %s" % e.stream)
                          

                          Edit: part of what I’m getting at above is that this is a pretty clear case of the Blub Paradox. Folks who don’t understand dynamic scope don’t understand why they would want it, and don’t appreciate how it makes things like error handling better — to the point that they fail to consider a useful feature useful.

                          1. 1

                            APL keeps the )SI stack and you can inspect variables and make changes. In general I guess the cost of doing that is probably too high, such that dumping a core file with the help of the OS became the common thing.

                          2. 2

                            I think there are certainly cultural issues or “lack of leadership” that cause incompatibilities like using different data representations, or having poor or misleading documentation (which both exist at the level of functions as well as the level of macros), but I think there are also technical hurdles and abstraction models that were not yet solved by the time Common Lisp’s macro system (its means of syntactic abstraction) was designed. If you compare Common Lisp and Racket, for example, Racket has huge advantages – including a hygienic macro system, a DSL for creating robust macros with good error reporting (syntax/parse), higher-order contracts to enforce invariants across boundaries, etc – that make its DSLs and abstractions much more robust and interoperable. I think there is still a lot of room to improve here, but I think Racket has already demonstrated that large-scale language-oriented programming can be interoperable, understandable, well documented, and empowering. I think powerful DSL programming has a promising future (inside and outside of Racket), and shouldn’t be ruled out due to poor results from early attempts.

                            1. 1

                              My personal view on DSLs is that they’re great for writing new applications in and not so great if people start writing libraries in them. Racket’s approach of explicitly forking off DSLs using #lang is a fairly good way of helping devs realize how big a step they’re making when they create a DSL.

                              Out of curiosity, does “DSL” mean the same thing to CL devs that it does to everybody else? Or does it mean something different in the context of that particular Lisp?

                              Also, that is an excellent feature, but… is that the default behavior for any and all CL applications? Because I can see plenty of cases where opening a debugger opens you up to way more vulnerabilities than stopping everything and printing an error. Useful for development but a liability for release, IMO.

                              1. 2

                                My gut wanted to say “yes” for a moment, until I decided that I’m not at all sure what “everybody” thinks the term “DSL” means. I think it’s a very fuzzy term with many interpretations – I’ve certainly seen a lot of people disagree on or be unsure about what it means.

                                My guess is that CL people view DSLs roughly similar to how people in other Lisp communities view them. But there’s a lot of variance among different people I’ve talked to.

                                My view on what a DSL is is mostly flavored by Racket. I think there is a pretty broad spectrum of what might be considered a DSL. A simple library of functions for a particular domain may be considered a DSL – it may not change the semantics of function evaluation or anything, but it gives you a vocabulary to talk about that domain. On the heavier end you have external DSLs or heavyweight Racket #langs – full of weird syntax, weird semantics, possibly difficult to interoperate with. But I think most Lispers think primarily of DSLs somewhere in the middle – a macro (or set of macros) that compiles a custom form (tailored to some domain, even a micro-domain like “the domain of loops” or “the domain of pattern matching”) down to the base language, maybe using specialized functions built to implement the macro’s run-time semantics. This covers a broad spectrum from CL’s loop to Racket’s match to Minikanren – all have semantics specialized to some degree, but all are able to be embedded and interoperate cleanly with the host language.

                                But I think a lot of people have other views on what usually makes a DSL – Python or Ruby people might think about dynamic class-oriented metaprogramming, such as introspecting on a class and stuffing it with a bunch of implicit methods, instead of static macro metaprogramming (which is what I tend to do and think about), some think about eval based approaches, and others may only think of external DSLs like SQL or Bash. One’s opinion of what makes a DSL, how much of a big step it is to make or use one, and how well it interoperates may vary a lot based on what kind of DSLs you’re used to.

                                Do others think there is broad consensus on what a DSL is? Because I think I see several camps who largely mean different things by the term. I certainly have a definition that I give people which seems to be largely shared especially by the Racket community, but most people I run into outside of Racket and other Lisps tend to have very different views.

                                1. 2

                                  From my understanding the “lighter” ones are known as eDSLs (embedded DSLs) and characterised by being hosted entirely within the source language. Some Haskell libraries (the kind that live within their own types) call themselves eDSLs, and I would call Ruby blocks eDSLs as well. “DSL” includes everything else – they may or may not have their own character-level parsers. So the main difference seems to be one of implementation rather than the “size” of the domain involved.

                                  At least, that’s the primary distinction I’ve seen, but it definitely gets fuzzy. Is Awk a DSL? Are Racket #langs all DSLs? (edit: yes and no, I reckon)

                                  I don’t think CLers make a distinction between libraries and (e)DSLs because the act of writing a program is thought of as defining and using a problem-specific DSL.

                          3. 3

                            What about a different approach: build as feature-complete a language as you can at the time, and when that’s done, only update the standard library. While the initial effort to learn the language will be high, there won’t be any need to re‑learn the language, just to keep up with new standard library abstractions.

                            1. 7

                              For a programming language that followed something like this model, see Lua.

                            2. 2

                              I think this also happens the other way around with “hard” languages. A language starts out to fix a specific, difficult problem, but this makes the language unpleasant, so it adds convenience features until it’s no longer solving the original problem well.

                              This is how we’ve tried to replace C and C++ for so long, and ended up with C#, Java, D, Go, and a bunch of other vaguely C-like languages with garbage collectors that made them nice, but unsuitable as direct C/C++ replacements.

                              1. 5

                                C# was never meant to replace C and C++. It was meant to copy Java.

                                Java may have initially meant to replace C and C++; however, we’ve since realized it occupies a separate, but equally important, niche.

                                1. 6

                                  I’d say Java did replace C/C++ for a significant niche. Think about all the Java code that exists. Without Java most of it would probably have been written in C++. Java is a good thing because now all the enterprise web services are at least safe from buffer overflows and out of bounds bugs.

                                  Java was not suitable to replace C/C++ in the embedded niche (although they certainly tried). Hopefully Rust can do that now.

                                  1. 3

                                    C# was never meant to replace C and C++. It was meant to copy Java.

                                    That’s a common trope, but incorrect. Microsoft was using a safe dialect of C to simplify writing COM components; C# evolved out of that. Its support for raw pointers and memory management, distinction between virtual and nonvirtual functions, etc. way back in v1.0 all belie that. Wikipedia has more. So yes, I think C# was very much intended to replace C and C++, at least for working with COM, and would agree it failed to do so (although I love the result).

                                    1. 1

                                      Ok, I’ll give you that. But I still don’t feel that it was ever going to replace C, even if that was their intention. I’m glad they realized that and pivoted to furthering the development of the CLR. That was a good contribution.

                                2. 2

                                  I was interested in using some minimal language like Go as a compilation target for other, more “DSL”-focused languages; curious to hear your thoughts on it, OP: http://andrewjudson.com/programming/2019/04/19/read-only.html

                                  1. 4

                                    Interesting! I think that sounds like a very good idea. I’ve been interested in writing an S-expression syntax for Go for a while now, in a similar vein to Hy for Python and Fennel for Lua, because Go gets a lot of the work of writing a Lisp out of the way upfront (GC, type system, etc).

                                    I wonder if this might work as a technique for quickly generating a Go codebase, which could then be turned into the canonical project so as to avoid having two codebases (a one-way operation). I know this isn’t exactly what your post proposed, but it would alleviate the concern of having to maintain two distinct codebases. Debugging might be easier that way, too.
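
                                    As a sketch of that one-way step: the front end could emit ordinary Go source and run it through gofmt so the output is fit to become the canonical tree. The s-expression below is hypothetical; go/format is the real package:

                                      package main

                                      import (
                                          "fmt"
                                          "go/format"
                                      )

                                      func main() {
                                          // Hypothetical output of an s-expression front end, e.g.
                                          // (defn Greet ((name string)) string (+ "hello, " name))
                                          src := []byte("package greet\nfunc Greet(name string) string {return \"hello, \"+name}")

                                          // format.Source gofmt-formats the generated code so it
                                          // reads like hand-written Go.
                                          out, err := format.Source(src)
                                          if err != nil {
                                              panic(err)
                                          }
                                          fmt.Print(string(out))
                                      }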

                                    Thank you for sharing! (And I also had no idea how popular the Minima theme for Jekyll was—I’ve now made some changes to my own CSS to spice mine up)

                                    1. 2

                                      It wouldn’t be the first time a Lisp has been used that way. The .NET garbage collector was prototyped in Common Lisp, with an aim to cross-compile it into C++.

                                    2. 3

                                      I think the core issue here is more that we haven’t come as far in terms of having a philosophy of good design for macros and DSLs as we have for functions. All of these problems exist at the function level as well – do I need to hunt down and deeply understand every function call that I read in source code to understand a program? No, I can frequently elide reading function definitions because we have established conventions around function design, naming, documentation, etc that help us sufficiently understand a functional abstraction without worrying about its implementation details. In a similar way to your proposal, we could say we want to inline all function calls or see the resulting assembly code to be able to read code that uses function abstractions – and sometimes we need to do that to understand poorly written code! But the conventions we have established make this less necessary. I think with better tooling, more experience and communication about good practices, etc, we can achieve this with syntactic abstractions and DSLs as well.

                                    3. 2

                                      I can see Clojure not being too far off. The core is somewhat simple, it has power for abstractions, and while new core features are sometimes polarising, like spec, you can just replicate the functionality in some other way, like schema, because it’s built in the language. I think lisp in general is really strong in this regard, because language extensions can be almost optional, if the community decides something isn’t working for them.

                                      1. 3

                                        Not a great article. Your DSL problem sounds like a non-problem; all nontrivial programs function to some degree like a DSL. And I mean seriously: you can’t choose a Python module to function like net/http? Again, a real non-problem. Who cares when the tooling came around, as long as you have it?

                                        Your “perfect language” is probably in the set {Python, Lua, Racket, Go}.

                                        1. 14

                                          I think it’s a really great article, it voices some things I wanted to write down, but couldn’t find the time.

                                          A few things from my consideration on keeping languages small:

                                          • Do not only consider the cost of adding a feature, but also the cost of removing it.
                                          • If 10% of users would gain 20% more utility from a feature being added, that still means that the other 90% lose utility, because they still need to learn and understand the feature they didn’t ask for. It’s likely that the equation ends up being negative for most features if you account for that.
                                          • Don’t focus at being great at something. Focus on not being bad at anything.
                                          • Not every problem needs a (language-level) solution. If something is verbose, so be it.
                                          • Allow people to save code by being expressive, not by adding short-cuts for every individual annoyance.
                                          • Design things by writing the code you want your users to write. Then make that work.
                                          • Have a way to deprecate, migrate, and remove language and library elements from day one.

                                          And a few of the standard ones:

                                          • Eliminate special-cases.
                                          • Something that can be a library should never be a language feature.
                                          • Make sure all features are orthogonal to each other.
                                          • The 80/20 rule doesn’t apply to language design.
                                          • Make things correct first. Correct things are simple. Simple things are fast. – Focusing on “fast” first means sacrificing the other two.
                                          1. 3

                                            If 10% of users would gain 20% more utility from a feature being added, that still means that 90% lose utility. It’s likely that the equation ends up negative if you consider that those 90% still need to learn and understand the feature they didn’t ask for.

                                            You don’t lose utility from a feature being added. That’s nonsensical.

                                            1. 23

                                              You don’t lose utility from a feature being added. That’s nonsensical.

                                              You definitely can for some features. Imagine what would happen if you added the ability to malloc to Java, or the ability to mutate a data structure to Erlang.

                                              But of course this doesn’t apply to most features.

                                              1. 1

                                                if you added the ability to malloc to Java

                                                Java has that already? Various databases written in Java do allocate memory outside the GC heap. You can get at malloc via JNI, as well as using the direct ByteBuffers thing that they kinda encourage you to stick to for this.

                                                1. 4

                                                  Java has that already?

                                                  Yes, and when it was added it was a huge mistake.

                                                  Everyone I know who uses the JVM won’t touch JNI with a ten-foot pole.

                                                2. 1

                                                  I think it pretty much applies to all features.

                                                  For whatever utility you get out of a feature, you have to take into account that when users had to learn 50 features before to use the language, they now need to understand 51.

                                                  This issue is usually discarded by those who propose new features (expert users), because they have already internalized the 50 features before. Their effort is just “learn this single new thing”, because they know the rest already.

                                                  But for every new user, the total amount of stuff to learn just increased by 2%.

                                                  That doesn’t sound like much, until you consider that, whatever language you use, 99.99% of the people out there don’t know your language.

                                                  It’s hard to offset making things worse for 99.99% by adding a “single” great new feature for the 0.01%.

                                                  1. 2

                                                    For whatever utility you get out of a feature, you have to take into account that when users had to learn 50 features before to use the language, they now need to understand 51.

                                                    Yes, but this is a completely different category from “this language had an important feature, and by adding this new feature, we destroyed the old feature”.

                                                    Adding mutability to Erlang doesn’t just make the language more complicated; it destroys the fundamental feature of “you can depend on a data structure being immutable”, which makes the language dramatically worse.

                                                    1. 1

                                                      but this is a completely different category

                                                      Yes, but this is the category I had in mind when I wrote the list.

                                                      The point the GP mentioned is above listed under “And a few of the standard ones”:

                                                      Make sure all features are orthogonal to each other.

                                                3. 7

                                                  Don’t just think about the code you write; think about the code you need to read that will be written by others. A feature that increases the potential for code to become harder to read may not be worth the benefit it provides when writing code.

                                                  1. 7

                                                    C++ comes to mind. I think it was Ken Thompson who said it’s so big you only [need to] use a certain subset of it, but the problem is that everyone chooses a different subset. So it could be that you need to read someone else’s C++ but it looks like a completely different language. That’s no good!

                                                    1. 7

                                                      You don’t lose utility from a feature being added.

                                                      That’s nonsense. Consider the case of full continuations, as in Scheme: implementing them rules out certain performance optimisations, which makes all code, even code which doesn’t directly use them, run more slowly. Granted, this can be somewhat mitigated with a Sufficiently Smart Compiler™, but not completely.

                                                      1. 4

                                                        “Lose utility” is not the right framing. It’s more like increased cognitive overhead.

                                                        1. 3

                                                          You certainly pay a cost, though. That’s indisputable.

                                                          1. 2

                                                            Maybe “utility” is the wrong word for the thing you lose but you definitely lose something. And the amount of that thing you lose is a function of how non-orthogonal the new feature is to the rest of the language: the less well integrated the feature is, the worse your language as a whole becomes.

                                                        2. 8

                                                          Thanks for the feedback. While I haven’t worked on any Common Lisp program large enough to have turned itself into a DSL, I also know that for any given task there are usually a few libraries, each of which covers no more than 80% of the use-cases for such a library. Whether this is caused by the language itself or its community, I don’t know, but I think it has more to do with the way that CL encourages building abstractions.

                                                          As for the fact that Python doesn’t have a net/http equivalent in its standard library, I remember this being a somewhat major driver for Go’s adoption. You could build a simple website without having to choose any kind of framework at all. It was really easy to get something together quickly and test-drive the language, which is super important for getting people to use it. Also, having something that creates a shared base for “middleware” and frameworks on top of the standard library had to have led to better interoperability within the early Go web ecosystem.
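
                                                          For anyone who hasn’t tried it: the whole “choose a framework” step really does reduce to the standard library. This minimal sketch is a complete web app:

                                                            package main

                                                            import (
                                                                "fmt"
                                                                "log"
                                                                "net/http"
                                                            )

                                                            func main() {
                                                                // A complete, deployable web app with no third-party framework.
                                                                http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
                                                                    fmt.Fprintf(w, "you requested %s\n", r.URL.Path)
                                                                })
                                                                log.Fatal(http.ListenAndServe(":8080", nil))
                                                            }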

                                                          I will concede that good tooling shortly after launch is the least important point, but really spectacular tooling is a good enough selling point for me to use a language on its own, so I think it does matter, since it allows people to write larger programs without waiting so much for the language to mature.

                                                          It appears that I did a poor job of communicating that my list of points were geared towards new languages today (or ones of a similar age to Go), but I will absolutely play with Tcl and continue to investigate other existing options.

                                                          1. 1

                                                            As for the fact that Python doesn’t have a net/http equivalent in its standard library

                                                            Well, there technically is http, with the http.client and http.server modules; it’s just so old that its abstractions are no longer abstract. It seems that nowadays Python’s standard library needs updated abstractions, but those would now see little use, as there are third-party libraries providing them (e.g. requests).

                                                        3. 1

                                                          I think Go does very well on abstractions. It does a great deal to prevent DSL silos, mostly by exploiting the fact that many parts are made for simplicity: when you build abstractions you are a lot more likely to think about them, rather than just saying “I will make a class around that library” and then, through not really understanding the layer you are trying to abstract, ending up with an approach that, other than being a different way to do the same thing, is slower, less powerful, and always requires hacks because it tries to hide complexities or design choices that are inherent to the problem.

                                                          Oftentimes this leads to simple libraries for a set of problems turning into huge frameworks, with mechanisms added that let you deal with completely unrelated problems, which people then start doing, because they already know that DSL/framework/abstraction. You end up with the equivalent of running UDP on top of HTTP, simply because HTTP is well known, so to speak.

                                                          While a programming language cannot fix that per se, it helps to create a mindset where “The bigger the interface, the weaker the abstraction” holds, and where hiding too much complexity is generally frowned upon.
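
                                                          A minimal sketch of that proverb, using the standard library’s one-method io.Reader interface (the CountBytes helper is my own invention):

                                                            package main

                                                            import (
                                                                "fmt"
                                                                "io"
                                                                "strings"
                                                            )

                                                            // CountBytes accepts anything satisfying the one-method io.Reader
                                                            // interface: files, sockets, strings, gzip streams, and so on.
                                                            // The smaller the interface, the more implementations fit it.
                                                            func CountBytes(r io.Reader) (int64, error) {
                                                                return io.Copy(io.Discard, r)
                                                            }

                                                            func main() {
                                                                n, _ := CountBytes(strings.NewReader("hello"))
                                                                fmt.Println(n) // 5
                                                            }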

                                                          This is not to say that this is the one way to go. It is just that with a focus on adding expressiveness and features to both languages and libraries, you are eventually going to end up there. There are already a great number of programming languages in this area, and many that started out differently have shifted there.

                                                          Simplicity, rather than quickly having something because everything is already a function or part of a major framework, is an approach that hasn’t been used much in languages popular enough that most people know about them, especially among languages that emerged in the last few decades.

                                                          And when most programmers do and learn things a certain way, and therefore work around all sorts of problems in an area, there will be a bias, because that way is already there and familiar. Starting out with a different approach to things is very hard, and unless “it’s by Google” or another big name, the chances of it simply never making it out of infancy are very high. If Go had been done independently, I think it might well be where Plan 9 is for operating systems in terms of popularity. Limbo, Newsqueak, and Alef, which are in a sense its predecessors, could very well be examples of that.