1. 5

    Computer science advances at the pace of algorithms and discoveries. Languages will always come and go, unless a language springs from a good theory.

    If you want to understand why this should be true, just look at the history of mathematics. Read about algebraic notations, the kinds of abacuses that have been used, slide rules, mechanical calculators. You will find that what we have today is a small fragment of what used to exist, and what remains was carried to the present because there are few obviously better ways to do the same things.

    On this basis, I’d propose that the current “top 20” by RedMonk cannot form any kind of long-running status quo. It’s a large list of programming languages rooted in the same theory (JavaScript, Java, Python, PHP, C#, C++, Ruby, C, Objective-C, Swift, Scala, Go, TypeScript, Perl, Lua).

    In 30 years there is going to be only one, and I think it will fall along the C or JavaScript axis. They are syntactically close, and a lot of software was and still is written in these languages. Although even more is written in C++, it is far too contrived to survive without reducing back to something like C.

    CSS may have some chance of surviving, but it’s quite different from the rest. About Haskell I’m not sure. I think typed lambda calculus will appear, or reappear, in a better form elsewhere. The language will be similar to Haskell though, and may bear the same name.

    The Unix shell and its commands will probably survive, while PowerShell and DOS will wither. Windows already seems to have its days numbered. Sadly, it was not because of the open-source movement; Microsoft simply botched things up again.

    R seems like a write-and-forget language, but it is rooted in Iverson’s notation. Perhaps the notation itself will stay around, but not its current instances.

    I think that hardware becoming more concurrent and diverging from the linear execution model will permanently shake up this list in the short term. The plethora of programming languages that prescribe a rigid evaluation strategy will simply not survive. Though I have a bit of bias toward thinking this way, so I may not be a reliable source for peering into the future.

    But I think this is better than looking at programming language rankings.

    1. 8

      I think, most importantly, we haven’t even seen anything like the one language to rule them all. I expect that language to be in the direction of Conal Elliott’s work compiling to categories.

      A language that is built around category theory from the start: you have many different syntactic constructs, and the ones you use in a given expression determine the properties of the category that the expression lives in. Such a language could locally have the properties of all the current languages and could provide optimal interoperation.

      BTW, I think we won’t be calling the ultimate language a “programming language” because it’ll be as good for describing electrical circuits, mechanical designs and biological systems as for describing programs. So I guess it’ll be called something like a specification language.

      1. 4

        “we haven’t even seen anything like the one language to rule them all. “

        That’s exactly what the LISPers always said they had. Their language could be extended to do anything. New paradigms and styles were regularly backported to it as libraries. It’s also used for hardware development and verification (ACL2).

        1. 3

          Well, it’s hard to say anything about LISPs in general since the family is so vast and academic, and it’s especially hard for me, since my contact with any LISP is quite limited. But from my understanding of the common usage of LISP, it doesn’t qualify.

          First of all, I think dropping static analysis is cheating, but I don’t intend to tap into an eternal flame war here. What I mean when I say “the properties of the current languages” is: no implicit allocations, borrow checking and inline assembly as in Rust; purity and parametricity as in Haskell; capability security as in Pony; and so on. And not only the semantics of these, but also compilers taking advantage of these semantics to provide static assistance and optimizations (like using the stack instead of the heap, laziness and strictness analysis, etc.).

          And I’m also not just talking about being able to embed these into a given language; you should also be able to write code such that, if it’s simple enough, it is usable under many of them. For instance, it’d be hard to come up with language semantics in which the identity function cannot be defined, so the definition id x = x should be usable under any local semantics (after all, every category needs identity morphisms). You should also be able to write code that interfaces between these local semantics without leaving the language and its static analysis.

          I know you can embed these things in LISP, expose enough structure from your LISP code to perform static analysis, get LISP to emit x86 assembly etc. etc. But, IMHO, this doesn’t make LISP the language I’m talking about. It makes it a substrate to build that language on.

      2. 2

        I think one major difference between math and computer science – and why we’re not going to see a lot of consolidation for a while (not even in 30 years, I don’t think) – is that code on the internet has a way of sticking around, since it does more than just sit in research papers or provide a tool for a single person.

        I doubt we’ll see 100% consolidation any time soon, if for no other reason than that it’s too easy to create a new programming language.

        Hardware changes might shake up this list, though I think it’ll take 30 years for that to be realized, and a lot of programming languages will fall out of it.

        We’re definitely still going to have COBOL in 30 years, and Java, and C. The rest, I’m unsure of, but I’ll bet that we’ll be able to recognize the lineage of a lot of the top 30 when we look in 30 years.

        1. 1

          R seems like a write-and-forget language. But it roots to Iverson’s notation.

          Did you mean to write J or APL? I understand R as the statistics language.

        1. 4

          So it works by tracing calls to open in the tools that are being invoked. I would be curious how well that works, especially for languages not from the C family. A link to a medium-size project using it would be convincing.

          1. 2

            Google is working on a high-performance editor called Xi and I remember having read an in-depth discussion of its data structures.

            1. 4

              Unrelated: I recently read an article on Haskell programming asserting that one should never write a parser by hand and should always use a parser combinator library. Yet on the other end of the spectrum, I see a lot of people claiming you should never use a parser generator, as they universally produce awful error messages, and should always write your own. Is this actually a contradiction? Are error messages from parser combinator libraries as bad as the ones from yacc? I’ve never used a parser combinator library, as I’ve never needed to do any parsing in Haskell.

              Ontopic:

              The article here says:

              if the BNF production A recognizes a language, and B recognizes a different language, then the production A | B will recognize the union of the two languages recognized by A and B. As a result, swapping the order of an alternative from A | B to B | A will not change the language recognized.

              However, most implementations of parser combinators do not satisfy these kinds of properties! Swapping the order of the alternatives usually can change the parse tree returned from a parser built from parser combinators.

              Is it not the case that the property that swapping ‘A | B’ and ‘B | A’ will not change the language recognised and the property that swapping ‘A | B’ and ‘B | A’ will not change the parse tree of parsing a particular string are quite different things? Grammars formally are not instructions for building parse trees, they’re string predicates.
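              The distinction can be made concrete with a minimal sketch (a toy, not any particular library) of two choice operators: a fully backtracking `alt`, whose recognized language is order-independent, and a PEG-style ordered `first`, which commits to the left alternative:

```python
def lit(s):
    """Parser for the literal string s; returns a list of (value, rest) parses."""
    def parse(inp):
        return [(s, inp[len(s):])] if inp.startswith(s) else []
    return parse

def alt(p, q):
    """Unordered choice: returns *all* parses, so alt(p, q) and alt(q, p)
    recognize the same language, like the BNF production p | q."""
    return lambda inp: p(inp) + q(inp)

def first(p, q):
    """PEG-style ordered choice: commits to p's parses if there are any."""
    def parse(inp):
        r = p(inp)
        return r if r else q(inp)
    return parse

a, ab = lit("a"), lit("ab")

# Unordered choice: swapping alternatives only permutes the parse list.
print(alt(a, ab)("ab"))    # [('a', 'b'), ('ab', '')]
print(alt(ab, a)("ab"))    # [('ab', ''), ('a', 'b')]

# Ordered choice: swapping alternatives changes which parse you get.
print(first(a, ab)("ab"))  # [('a', 'b')]
print(first(ab, a)("ab"))  # [('ab', '')]
```

With `first`, the grammar author’s ordering becomes semantically significant, which is exactly the property the quoted passage says BNF alternatives should not have.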

              1. 2

                It is desirable that the combinator A | B recognise the same language as B | A, as that is more declarative; otherwise, hard-to-detect problems can creep in. For an implementation of parser combinators this is difficult to guarantee if efficiency is a concern. Yacc has a global view of the grammar, so the property is easier to guarantee in a Yacc-generated parser than in a combinator-based one, because each combinator typically has only a local perspective.

                1. 3

                  The standard Haskell parser combinator library Parsec does not have commutative disjunction; only ReadP managed that. Second, the PEG formalism is biased towards A by design, and for a reason: this reduces ambiguity and makes parsing certain aspects of programming languages easier.

                  1. 1

                    Second, the PEG formalism is biased towards A by design, and for a reason: this reduces ambiguity and makes parsing certain aspects of programming languages easier.

                    I understand that pattern matching also has a first-match policy and I don’t complain about this. I am still not convinced it is the right choice for parsing a language that is typically much larger than a runtime value deconstruction. In Parsing: a timeline, Jeffrey Kegler writes about PEG:

                    But PEG is, in fact, pseudo-declarative – it uses the BNF notation, but it does not parse the BNF grammar described by the notation: PEG achieves unambiguity by finding only a subset of the parses of its BNF grammar. And, as with its predecessor GTDPL, in practice it is usually impossible to determine what the subset is. This means that the best a programmer usually can do is to create a test suite and fiddle with the PEG description until it passes. Problems not covered by the test suite will be encountered at runtime.

                    Yacc has admittedly its own problems with shift-reduce conflicts.

                2. 2

                  It’s definitely hard to produce good error messages from a parser generator, and especially from a parser combinator library, because the parser is built up dynamically and there’s no preprocessing stage.

                  The system in this paper does have a first-class ADT representation of grammars though, which is translated into executable code via staging (similar to my PEG library).

                  1. 1

                    I believe your point about swapping A | B for B | A is exactly the problem the authors describe with parser combinators; for most parser generators, the resulting code should behave exactly the same. For any tool, the problem only arises if there’s ambiguity; if all strings are recognized by at most one of A and B, then clearly A | B and B | A will behave the same. Parser combinators have a tough time detecting ambiguity, and so will often just take the first one that matches, so in the case where there’s a set of strings recognized by both A and B, A | B and B | A will be treated differently by most parser combinator libraries. Tools like yacc and bison tend to not tolerate ambiguity, which means for each string, only one of A or B will recognize it, so A | B and B | A will behave the same way.

                  1. 8

                    Wow great to see this! I remarked after one of the chapters in craftinginterpreters that I didn’t find a lot of literature on bytecode VMs.

                    There are a lot of books on compilers, but not as much for interpreters (i.e. the bytecode compiler + dynamic runtime combo that many languages use).

                    There are the Lua papers, a few blog posts about Python, some papers about the predecessor to OCaml, and craftinginterpreters, but not much else I could find. I found this book recently, but it’s not specifically about the compiler / VM:

                    http://patshaughnessy.net/ruby-under-a-microscope

                    Anyway I am glad to see another addition to this small space! :)

                    I’m hoping to find some time to really push into my compiler / VM project this winter: http://www.oilshell.org/blog/2018/03/04.html .

                    Basically the idea is that Python has a “dumb” compiler and a very rich runtime. (This fact has been impressed upon me by hacking on its source code!). I want to make it have more of a smart compiler and small/dumb runtime.

                    1. 5

                      There’s also Nils M Holm’s books: http://t3x.org/

                      1. 2

                        That’s just pure gold :O

                        1. 1

                          Which parts discuss bytecode interpreters? I see a lot of different things there, including native code compilers, but no bytecode interpreters.

                          1. 3

                            One of the features of the T3X compiler is a portable bytecode interpreter. Here’s the source, I think you’ll like it: https://www.t3x.org/t3x/t.t.html

                        2. 4

                          There are a lot of books on compilers, but not as much for interpreters (i.e. the bytecode compiler + dynamic runtime combo that many languages use).

                          Not a book, but you still might find this paper interesting: Engineering Definitional Interpreters:

                          Abstract: A definitional interpreter should be clear and easy to write, but it may run 4–10 times slower than a well-crafted bytecode interpreter. In a case study focused on implementation choices, we explore ways of making definitional interpreters faster without expending much programming effort. We implement, in OCaml, interpreters based on three semantics for a simple subset of Lua. We compile the OCaml to x86 native code, and we systematically investigate hundreds of combinations of algorithms and data structures. In this experimental context, our fastest interpreters are based on natural semantics; good algorithms and data structures make them 2–3 times faster than naïve interpreters. Our best interpreter, created using only modest effort, runs only 1.5 times slower than a mature bytecode interpreter implemented in C.
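                          For a toy illustration of the two styles the abstract contrasts (a sketch, not the paper’s OCaml code), here is a tree-walking (definitional) evaluator next to a bytecode compiler plus stack VM for the same tiny arithmetic language:

```python
# Expressions are nested tuples: ("lit", n), ("add", e1, e2), ("mul", e1, e2).

def eval_tree(e):
    """Definitional style: walk the tree, dispatching on each node."""
    tag = e[0]
    if tag == "lit":
        return e[1]
    left, right = eval_tree(e[1]), eval_tree(e[2])
    return left + right if tag == "add" else left * right

def compile_expr(e, code=None):
    """Flatten the tree into postfix bytecode (a list of (opcode, arg) pairs)."""
    code = [] if code is None else code
    if e[0] == "lit":
        code.append(("PUSH", e[1]))
    else:
        compile_expr(e[1], code)
        compile_expr(e[2], code)
        code.append(("ADD" if e[0] == "add" else "MUL", None))
    return code

def run(code):
    """A stack-machine loop: no tree traversal at run time, just a flat dispatch."""
    stack = []
    for op, arg in code:
        if op == "PUSH":
            stack.append(arg)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if op == "ADD" else a * b)
    return stack.pop()

expr = ("add", ("lit", 2), ("mul", ("lit", 3), ("lit", 4)))
print(eval_tree(expr))          # 14
print(run(compile_expr(expr)))  # 14
```

The compiled form trades the clarity of the recursive evaluator for a flat instruction stream, which is where bytecode interpreters get their speed.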

                          1. 3

                            Wow thanks! This is exactly the kind of thing I’m looking for.

                            For example, even the first sentence is somewhat new to me. Has anyone else made a claim about how much slower a tree interpreter (I assume that’s what they mean by definitional) is than a bytecode interpreter? I’ve never seen that.

                            I know that both Ruby and R switched from tree interpreters to bytecode VMs in the last 5-8 years or so, but I don’t recall what kind of speedup they got. That’s something to research (and would make a good blog post).

                            Anyway, I will be reading this and following citations :-) I did find a nice paper on the design of bytecode VM instructions. Right now, choosing instructions seems to fall in the category of “folklore”. For example, there are plenty of explanations of Python’s bytecode, but no explanations of WHY it is the way it is. I think it was basically ad hoc evolution.

                            1. 1

                              I did find a nice paper on the design of bytecode VM instructions.

                              Would you mind sharing what that paper is? I’d love to read more about how to do this properly.

                              1. 3

                                I’m pretty sure this is the one I was thinking of:

                                Abstract Machines for Programming Language Implementation

                                http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.68.234

                                We present an extensive, annotated bibliography of the abstract machines designed for each of the main programming paradigms (imperative, object oriented, functional, logic and concurrent). We conclude that whilst a large number of efficient abstract machines have been designed for particular language implementations, relatively little work has been done to design abstract machines in a systematic fashion.

                                A good term to search for is “abstract machine” (rather than bytecode interpreter). The OCaml paper is called the “ZINC Abstract Machine” and it’s quite good.

                                There is a bunch of literature on stuff like the SECD machine, which is an abstract machine for Lisp, and which you can find real implementations for.

                                There seems to be less literature on stack / register bytecode interpreters. The Lua papers seem to be the best read in that area.

                                This guy is asking a similar question: https://stackoverflow.com/questions/1142848/bytecode-design

                                1. 1

                                  This is great, thank you for sharing!

                        1. 3

                          There is a middle ground. A mechanism for changing something can be convenient for the user, which usually means more work for the developer, since it needs UI and documentation. Or it can be inconvenient enough that it is effectively hidden, yet still changeable. Mechanisms for the latter include environment variables, the X Window resource mechanism, or setting variables in a built-in language.
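                          As a tiny sketch of the cheapest mechanism in that list, a hidden-but-changeable knob can be a single environment-variable lookup with a sane default (`MYAPP_CACHE_MB` here is a hypothetical name, not any real application’s variable):

```python
import os

# Hidden but changeable: no UI, no documentation burden, just an escape hatch.
# MYAPP_CACHE_MB is a hypothetical knob; users who need it can set it,
# everyone else silently gets the default.
def cache_size_mb(environ=os.environ):
    try:
        return int(environ.get("MYAPP_CACHE_MB", "256"))
    except ValueError:
        return 256  # ignore garbage values rather than crash

print(cache_size_mb({}))                        # 256
print(cache_size_mb({"MYAPP_CACHE_MB": "64"}))  # 64
```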

                          1. 4

                            I like the way of thinking these moments reveal.

                            Though, I prefer to rebase to bring changes from master into my feature branch.

                            1. You can merge in both directions.
                            1. 1

                              A rebase can force you to resolve many conflicts, each of which has a chance of introducing a bug. A merge can result in fewer conflict resolutions and be preferable for that reason.

                              1. 1

                                True in general, but it depends on your commit workflow; I generate a clean history locally before I rebase, so there are rarely multiple conflicts for the same thing. Git rerere helps too.

                                1. 1

                                  I would agree, but it’s less likely to happen when I rebase often.

                              1. 3

                                I’m surprised Revelation is not mentioned. It’s full-featured and can import/export – the only drawback is that it’s not cloud-based and has no mobile/web interface. But for desktop use, it’s really good.

                                1. 1

                                  It is part of many Linux distributions, so it’s easy to install. But the latest commit on the GitHub repository, f574668 on 20 Sep 2013, does not instil confidence that it is maintained.

                                1. 15

                                  I’m using 1Password with local sync over the built-in web server. 1Password also supports syncing via Dropbox, iCloud and, most recently, 1Password’s own servers. I want nothing of that, but it means that I can’t use 1Password on Linux. What’s great about 1Password is that it is highly polished – using it adds very little friction. I understand that I could manage all passwords encrypted in Git, but the integration would never be as good, and there is a lot of risk that it would somehow not be as secure as it sounds.

                                  1. 5

                                    I recently switched to Enpass, which is a conceptual clone of 1Password. Reason for switching was Linux support.

                                    1. 7

                                      This is a closed-source app that has not yet received a lot of scrutiny. Using it for truly sensitive information requires quite a bit of trust. They claim to use SQLite with encryption – which I would trust, but of course there is a lot of code around it that would have to be trusted as well.

                                      1. 2

                                        The first thing it tried to do when I ran it was reach out to Google Analytics. I said enough of that, and promptly uninstalled it.

                                      2. 2

                                        1Password (at least the hosted version) has linux support via both 1password-x and 1password-cli. I quite enjoy the CLI and generally find that it has a better user experience than LastPass.

                                    1. 2

                                      This is impressive, but I do wonder: in dense urban areas where buildings blocking GPS signals is a problem, isn’t the visibility of WiFi access points and cell towers a good-enough additional signal?

                                      1. 8

                                        The main attraction of Slack is not its IRC compatibility – if it were, we would all be using IRC and Slack would have no reason to exist. Slack won its large following (first among technical users) because its total feature set^ and usability were found convincing. While I would have liked Slack to continue to offer these gateways, they were never the central features. I therefore find the accusation of bait-and-switch overblown.

                                        ^ Ease of use, OS support, integrations, inline expansion of images, URL, tweets, emoji support, search, signup process, UI design, …

                                        1. 4

                                          Interesting. My impression was that in the beginning Slack got popular among designers (UX, graphical) who still had fond memories of Stewart Butterfield’s Flickr, and could get the tech crowd to go along because they could keep using their existing IRC/XMPP clients. But like I said in my other comment, I don’t actually know.

                                          1. 1

                                            As I see it, usability is all about the clients available. IRC does not mean glass tty at all.

                                            And I wonder why would technical users run away from technical tools.

                                          1. 15

                                            I was hoping for something else from this.

                                            Given the title, I was hoping for a deep insight into the tradeoffs of macros and why those tradeoffs are worthwhile in Rust. Instead, it’s more of a listing of “things you can do with Rust macros”. There’s nothing wrong with that.

                                            I’d really love to see a deep pros/cons of macros in Rust. There probably aren’t many people who could write what I’d like to see though. You’d need a deep understanding of macros across a variety of languages, a strong grounding in programming language theory, and a strong grounding in Rust.

                                            1. 10

                                              Rust’s macros currently don’t really work nicely with the module system – they all live in a global namespace and you have to use the odd #[macro_use] and #[macro_export] attributes to import and export them. It’s quite ugly. This will be fixed in Macros 2.0 though – from what I understand they are learning from Racket – I think it might be called ‘staged macro compilation’ or something… it is a non-trivial problem to solve though. Also, macros at the top level are not hygienic; I think this will also be fixed in Macros 2.0.

                                              Another annoyance is the APIs surrounding procedural macros - it’s not as nice as working with Racket-style reader macros. You’re more working with a token stream, rather than an AST. Also, the API does not enforce hygiene as far as I know. Again, this should also be fixed in Macros 2.0.

                                              Another frustration is that they don’t work nicely with rustfmt - because rustfmt doesn’t really know how the original macro author wanted to format them.

                                              Another issue is that they have a restricted syntax that doesn’t allow you to create really nice-looking language extensions. So a match! replacement would have to look like:

                                              my_match!{ expr;
                                                  pattern1 => expr1,
                                                  pattern2 => expr2,
                                              }
                                              

                                              Rather than:

                                              my_match! expr {
                                                  pattern1 => expr1,
                                                  pattern2 => expr2,
                                              }
                                              

                                              This is because we don’t want tooling to have to understand macros in order to parse Rust code. I don’t think there is any solution for this.

                                              All in all, I think macros are a nice addition to Rust, but they still feel a little ‘bolted on’ to the language, and could do with some improvement in the future. The Rust team knows this, and they are working hard to make those improvements!

                                              1. 1

                                                Thanks @brendan!

                                                1. 5

                                                  No worries! I’m not super up-to-speed with macro theory, alas. But here are some RFCs (no prior art or references to the literature are given):

                                              2. 5

                                                This comment is a pretty good rundown of issues with the current macro approach.

                                                The list is pretty good, I’d only add the call-site syntax (!) to the list of problems.

                                                1. 2

                                                  Just something I’ve observed: some of the popular languages are either strongly typed and have macros, or loosely typed and don’t, e.g. C/C++ vs. JavaScript. However, Java is strongly typed but doesn’t have macros, so take my observation with a grain of salt :)

                                                  1. 7

                                                    My intuition is that everybody hates boilerplate, but static languages tend to address the problem with macros, while dynamic languages address it with reflection and runtime metaprogramming.

                                                    Java is a notable outlier in that it addresses the boiler-plate problem with IDEs.

                                                    1. 1

                                                      “Java is a notable outlier in that it addresses the boiler-plate problem with IDEs.”

                                                      Smalltalk partly solved boilerplate with the IDE and live coding. In the Java ecosystem, boilerplate was the solution to C/C++’s problems. The IDEs then ensure that the various modules have the correct amounts and types of boilerplate. Or something like that.

                                                    2. 1

                                                      Haskell has no macros but Scheme does. Anyway, strong typing prohibits runtime type inspection, and this leads to some repetitive boilerplate code, for example to derive serialisation code from type declarations. Macros can make this simpler. In the case of type-driven code, type classes (Haskell) are powerful enough to not require macros. Another case for macros is code instrumentation, and here I believe type classes would not be enough.

                                                      1. 5

                                                        Haskell does have macros - they’re called (somewhat confusingly) ‘Template Haskell’. I prefer type-directed code generation using type classes, but this is often at the expense of compilation time, when compared to Template Haskell. Hopefully this will be improved at some stage…

                                                        1. 1

                                                          Where are you putting TemplateHaskell in this categorization? I believe it’s the recommended way to handle lenses, and is also used in the Yesod ecosystem.

                                                    1. 1

                                                      I’m not fully understanding what issue is being described here. Is it that the archive URLs are unreliable, i.e. the “Source code (zip / tar.gz)” URL?

                                                      1. 2

                                                          The hash of the auto-generated tar files is not stable. I assume the compression level or the tar implementation used to create them changes.

                                                        1. 1

                                                          And what about the zip files?

                                                          1. 3

                                                            Same problem with zip files.

                                                            The OpenBSD ports tree stores checksums of release artifacts to ensure authenticity of code that is being compiled into packages.

                                                            GitHub’s source code links create a new artifact on demand (using git-archive, I believe). When they upgrade the tooling that creates these artifacts, the output for existing artifacts can change, e.g. because the order of paths inside the tarball or zip changes, or compression level settings have changed, etc.

                                                            This means that verifying the authenticity of a GitHub source link download against a known hash is no better than downloading a tarball and comparing its hash against that of another, distinct tarball created from the same set of input files. Hashes of two distinct tarballs or zip files are not guaranteed to match even if the set of input files used to create them is the same.
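                                                            This is easy to demonstrate with Python’s standard library: two gzipped tarballs of identical file contents hash differently when only archive metadata changes (here, the mtime recorded in the gzip header):

```python
import gzip, hashlib, io, tarfile

def make_targz(data, gzip_mtime):
    """Build a .tar.gz of a single file in memory; only metadata varies."""
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb", mtime=gzip_mtime) as gz:
        with tarfile.open(fileobj=gz, mode="w") as tar:
            info = tarfile.TarInfo("hello.txt")
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

data = b"hello\n"
a = make_targz(data, gzip_mtime=0)
b = make_targz(data, gzip_mtime=1)

# Same input files, different archive bytes -> different checksums.
print(hashlib.sha256(a).hexdigest() == hashlib.sha256(b).hexdigest())  # False
# Yet the tar payloads inside are byte-identical.
print(gzip.decompress(a) == gzip.decompress(b))  # True
```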

                                                            1. 1

                                                              Thank you for the detailed response! I understand the issue now.

                                                              There are likely tradeoffs from GitHub’s perspective on this issue, which is why they create a new artifact on demand. They maintain a massive number of repositories on their website, so they probably can’t just store all those artifacts for long periods of time as one repository could potentially be gigantic. There are a number of other reasons I can think of off the top of my head.

                                                              Why not have the checksum run against the file contents rather than the tarball or zip?

                                                              1. 3

                                                                Why not have the checksum run against the file contents rather than the tarball or zip?

                                                                One reason is that this approach couldn’t scale. It would be insane to store and check potentially thousands of checksums for large projects.

                                                                It is also harder to keep secure because an untrusted archive would need to be unpacked before verification, see https://lobste.rs/s/jdm7vy/github_auto_generated_tarballs_vs#c_4px8id

                                                                I’d rather turn your argument around and ask why software projects hosted on github have stopped doing releases properly. The answer seems to be that github features a button on the web site and these projects have misunderstood the purpose of this button. Meanwhile, other projects which do understand the issue actively try to steer people away from the generated links by creating marker files in large friendly letters: https://github.com/irssi/irssi/releases

                                                                I’d rather blame the problem on a UI design flaw on github’s part than on the best practices that software integrators in the Linux and BSD ecosystems have followed for ages.

                                                                1. 2

                                                                  Some more specifics on non-reproducible archives: https://reproducible-builds.org/docs/archives/.

                                                                  Why not have the checksum run against the file contents rather than the tarball or zip?

                                                                  Guix can do something like that. While it’s preferred to make packages out of whatever is considered a “release” by upstream, it is also possible to make a package based directly on source by checking it out of git. Here’s what that looks like.
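                                                                  The underlying idea can be sketched as hashing the directory tree itself rather than an archive of it. The sketch below is illustrative only; Guix actually hashes a canonical serialization of the tree (its “nar” format), but the principle is the same: walk the files in a deterministic order and hash paths plus contents.

```python
import hashlib
import os

def tree_hash(root):
    """Hash a directory by visiting files in sorted order and feeding
    each relative path and its contents into a single digest.
    (Illustrative sketch; not Guix's actual serialization.)"""
    digest = hashlib.sha256()
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()  # force a deterministic traversal order
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            digest.update(rel.encode("utf-8") + b"\x00")
            with open(path, "rb") as fh:
                digest.update(fh.read() + b"\x00")
    return digest.hexdigest()
```

                                                                  Two checkouts with identical files get the same hash no matter how, or whether, they were ever packed into a tarball.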

                                                        1. 2

                                                          Is there really an “emerging idea of language-oriented programming, or LOP” as the article states? The way I see it, most modern languages are carefully crafted to balance their features (in their type and effect systems) such that I see little room for the modularity that LOP would require. I’ve never heard the term before. It sounds like we have extended the hierarchy upwards but I am not yet convinced:

                                                          • LOP
                                                          • Embedded DSL
                                                          • Framework
                                                          • Library
                                                          • Functions, Methods
                                                          1. 4

                                                            I imagine people made similar arguments against objects in the early 80s. When languages did not natively provide support for objects, it was so inconvenient that people hardly ever used them, so they necessarily felt “we have extended the hierarchy upwards but I am not yet convinced”. Now we can’t get them out of our languages even if we try. (-:

                                                            Where languages make it — by design or by accident — easy to extend the language, language extension is rife. The paper mentions the case of JavaScript. Though JS was not invented with any meaningful metaprogramming capabilities, it left enough hooks that people have gone off and created all sorts of sub- and super-languages around it. This is also true in Ruby (“Ruby DSL” is a whole thing in itself), because Ruby also provides such hacks.

                                                            Furthermore, a growing number of new languages have been adding macros: Scala, Julia, etc. You can view what has happened in Racket as a natural destination of where macros end up. We’re just ahead of the curve by about 20 years; there’s a good chance that as people start to use macros more in those other languages, they will slowly recapitulate all the lessons that we’ve learned, and end up creating similar solutions.

                                                            1. 1

                                                              When a language has an advanced type or module system, it cannot be easily extended. The language can still implement a macro system to accommodate developing patterns (like deriving RPC interfaces, supporting logging, or serialisation) to help cut down boilerplate code, but there are limits to that. A term like LOP suggests a language can be assembled from building blocks, and my lack of conviction is around that aspect.

                                                              1. 2

                                                                Well, no, LOP does not imply that a “language can be assembled from building blocks” — that is sometimes a consequence, but it isn’t part of the definition. The point is simply that every program has lots of small languages that are itching to surface, and languages should make it possible to do so — not in an ad hoc way, but in a way that lets those languages be turned into abstractions in their own right.

                                                            2. 3

                                                              It sounds like metaprogramming with DSLs, just with a new method. Language-oriented programming might be a more approachable term for that, though. If I heard it, the first things I’d think of would be tools such as Rascal and Ometa that let one arbitrarily define, operate on, or use languages. That covers the language part. As far as integration goes, a number of techs supporting DSLs… probably a lot of LISPs and REBOL… had a powerful language underneath that one could drop into.

                                                              So, this seems like a new instance of old ideas given a new name. I do like how they’ve integrated it into a general-purpose, GUI-based programming environment instead of it being its own thing like Rascal. An old idea I had was that researchers should do more experiments building alternatives to Rascal or Ometa in Racket, leveraging what they already have, to see how far one can stretch and improve it.

                                                              1. 2

                                                                Terra is another language in the “make DSLs” approach to programming, although more geared towards lower-level programming, I think.

                                                            1. 2

                                                              I am too lazy to define (and remember) macros in my editor but consider tab-expansion (Emacs has dynamic abbrevs, Vim has C-p) a killer feature. I don’t understand why mainstream text editors, like the ones built into mail applications, don’t have this.
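                                                              For what it’s worth, the core of that feature is tiny. A hypothetical Python sketch (not Emacs’s actual algorithm, which also scans other buffers and cycles through candidates):

```python
import re

def dabbrev(buffer_text, prefix):
    """Complete `prefix` to the most recently seen word in the buffer
    that starts with it; fall back to the prefix itself."""
    for word in reversed(re.findall(r"\w+", buffer_text)):
        if word.startswith(prefix) and word != prefix:
            return word
    return prefix

# dabbrev("configure the configuration once", "conf") -> "configuration"
```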

                                                              1. 2

                                                                A phone network isn’t an organisation’s memory either. Treating it that way would be a mistake, and the same holds for Slack. I think there is a hint in “instant messaging”. That aside, I agree that Slack can lead to distractions but I don’t think that moving to email (as suggested) would solve that problem.

                                                                1. 2

                                                                  This is very well written and is worth reading for the section about prior work alone. I would have liked a closer look at LambdaPi since I am not familiar with it.

                                                                  1. 3
                                                                    1. Henk: A Typed Intermediate Language by Erik Meijer and Simon Peyton Jones contains a tutorial covering the Lambda Cube and can serve as a brief introduction to Pure Type Systems.
                                                                    2. Lambda Pi: A Tutorial Implementation of a Dependently Typed Lambda Calculus by Andres Löh, Conor McBride and Wouter Swierstra is a worked introduction to implementing a dependently-typed lambda calculus, and has the resulting implementation available in Haskell which can be useful to follow along.

                                                                    I’d recommend reading those in order. The Henk paper in particular is great, and despite being from 1997 I haven’t personally found any better introduction to the topic.

                                                                    (That is, I recommend reading Henk before the actual link regarding LambdaPi or the reorganisation.)

                                                                  1. 2

                                                                    The article is worth it if only for pointing out this tweet

                                                                    1. 1

                                                                      It’s funny, I think that, too. But what is this adding beyond that? I’d like to read better ideas. I believe people are trying a lot of ideas like staging, containers, reproducible builds, … We also have to consider what kind of software a company is creating: software that ships to customers or software that is pushed to a server.

                                                                    1. 5

                                                                      Most problems are not solved by cutesy writing either. This is an opinion piece that disguises itself as an analysis.

                                                                      1. 1

                                                                        I really feel that many problems are solved, but not well documented, or at least the books and articles are still very obscure.

                                                                      1. 8

                                                                        The general approach reminded me of the X Window System: a client-server architecture and a very general design with an additional focus on performance. I was surprised that this was not led by a study of the operations that an editor should support, based on the tasks a user has. Are multiple cursors a promising new way? Should we better support fuzzy search beyond regexps? We probably want handling of variable-width fonts and font attributes. How do we integrate IDE-like features? How do we handle tables and images? Equations? Isn’t this also an aspect of performance: how fast a task can be solved given the tools available?

                                                                        1. 9

                                                                          We’re thinking about most of those questions. I personally think multiple cursors are very powerful and useful, but recognize this is a question of user preference. Rich text (including variable width) is high on the roadmap, and support for Language Server not far behind. I’m not sure about equations, that seems possibly beyond the scope of the project, but it might be interesting to see how far that could go as a plugin.

                                                                          And yes, how fast you can get your work done is ultimately the proper measure. It’s just a little harder to quantify, even with an Arduino :)