1. 34

I am looking for an FP language which everyone thinks will last at least 100 years.

I am of the belief that FP is very good for hobbyist programming: reuse is very high. Until now I’ve used Rust and TypeScript as my daily drivers. These are not languages I can step away from and come back to 10 years later without having to relearn things or read up on what changed. They are “work” languages. I understand now why C remains a strong choice for many.

I am looking for a real FP language, by which I mean one supporting composition, partial application, and first-class functions.
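
For concreteness, here is roughly what those three things look like, sketched in Standard ML purely as an illustration (any ML-family language looks much the same):

    (* First-class functions: functions are ordinary values. *)
    val double = fn x => 2 * x

    (* Partial application: add is curried, so (add 10) is itself a function. *)
    fun add (x : int) y = x + y
    val addTen = add 10

    (* Composition: o is the Basis Library's composition operator. *)
    val doubleThenAddTen = addTen o double

    val () = print (Int.toString (doubleThenAddTen 5) ^ "\n")  (* prints 20 *)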

It may be built on top of a systems language, but nothing else; this means C, C++, or Rust.

I am looking for something closer to Haskell than to Lisp/Scheme in terms of syntax.

The best contender I have found so far, though, has been Scheme.

Type system is a bonus but not necessary.

    1. 31

      I think both OCaml and Haskell are here to stay and are ‘safe’ bets. Erlang will probably still be kicking as well.

      1. 8

        I’d like to think Haskell is a contender, but it’s mutating quite fast, and I’m not entirely convinced we can build all the older Haskell programs today, which doesn’t bode well for a timescale an order of magnitude longer.

        1. 11

          Some of the programs are intentionally not buildable because older versions of the compiler used to accept code that wasn’t meant to be accepted.

    2. 22

      Perhaps SML (Standard ML)? Unusually, it has a formal specification of its typing rules and operational semantics, as well as a standard library, all of which are somewhat set in stone. By that I mean that, apparently, those specifications haven’t changed in 25 years, even though the language is still widely used in certain areas (such as proof assistants). That said, there have been efforts to improve/extend it, under the umbrella “Successor ML”.

      1. 10

        Yeah, for the particular needs that OP is describing, SML is ideal. You can take any SML program written in the last 25 years and compile it with any SML compiler (modulo implementation-specific extensions), and that will remain the case for the next 100 years, because of the specification.

        Now, whether that is a good thing for the language ecosystem and adoption is another question entirely.

        1. 7

          I agree. SML is a dead language, which is what the OP is asking for, yet it is functional and reasonably capable. It does not have a very active community, and indeed OCaml, F# or GHC Haskell are probably more pleasant to use in practice, but they are not stable as languages.

          1. 11

            Precisely: “dead” is a feature here, if we mean dead as in “no longer being extended”.

            All things considered, SML is 100% what I’m going to be looking at. It looks great!

            EDIT: I think everyone should take a minute to read https://learnxinyminutes.com/docs/standard-ml/ … It is beautiful! FFI looks very easy too. Next step is to implement an RSS feed manager with SML + SQLite.
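
            As a taste of the FFI: this part is outside the Definition and entirely implementation-specific, so the sketch below assumes MLton, where _import binds a C function directly and the file has to be compiled with FFI allowed (e.g. mlton -default-ann 'allowFFI true' abs.sml, with the filename just for the example):

                (* MLton-specific sketch: bind C's abs(3). Int32.int matches a typical C int. *)
                val c_abs = _import "abs": Int32.int -> Int32.int;

                (* ~ is SML's negation sign in literals. *)
                val () = print (Int32.toString (c_abs ~42) ^ "\n")  (* prints 42 *)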

            1. 4

              It’s also not wasted time. SML is used as the foundation for a lot of PLT research because it is so formally defined and provides a nice basis for further work.

            2. 3

              To be fair, Haskell98 is also quite old and dead. It’s hard to get a recent compiler into pure H98 mode (especially wrt stdlib) but old ones still exist and don’t need to change. Haskell2010 is the same just less old. Only “GHC Haskell” is a moving target, but it’s also the most popular.

              1. 2

                I would care more if there were a Haskell98 whose performance people were actively improving.

                1. 3

                  Why? Do you find GHC’s H98 under-performing?

      2. 7

        OCaml and Haskell are well ahead in terms of features as well as adoption. F# and Scala are not going anywhere. That said, I really like SML and am glad that MLton, MLKit, Poly/ML, SML/NJ, and SML# are all alive. If only someone (I don’t think I have the chops for it) could resuscitate SML.NET.

        1. 4

          OCaml and Haskell are well ahead in terms of features

          I think OP wants stability over features. Some people consider new features “changes”, because what is idiomatic may change.

      3. 2

        I think this is the right answer. I have had similar impulses to the OP and always came back to Standard ML as the thing that comes closest to fulfilling this purpose. I just wish it were less moribund and a little closer to Haskell in terms of aesthetics.

    3. 12

      PSA: before proposing your favorite language as a better alternative, make sure to read the previous story to avoid rehashing the same arguments. Thanks :)

      1. 2

        hard agree with this comment, and also it would be great if we could apply the ML tag

        1. 2

          you can use the suggest link under a submission to change which tags are applied to it

          1. 2

            I never provide feedback on posts frequently enough to remember this, so thank you! will comply

            edit: actually I don’t have a suggest link for this one…

    4. 10

      Cool blog post Ozymandias. After having my code broken enough times by linux distros I started asking myself the same question. I think the most important things you left out are:

      1. Binary. Something like x86-64 is likely to survive. Even with a stable language like C, there’s still a good chance ABIs will drink your milkshake. So it’s nice to have portable code in addition to portable binaries.
      2. Operating systems. Your binaries should run on multiple operating systems. That way, if one falls out of fashion, they’ll still run on the others. My binaries run on seven operating systems.
      3. Tools. Vendor tools, not just dependencies. For example, I always keep a vendored copy of the GCC binaries in my repo. That way my software can build deterministically for everyone, anytime. Compilers break all the time, even for supposedly stable languages. But the hardware architecture that lets old compiler binaries keep running isn’t going to break.
      4. Time. That’s my biggest issue right now. I need to be doing more to audit for instances where I’m using long or long double nanoseconds, which will probably only last another 80 years. There’s also Y2036 and a bunch of other ones I need to audit for. Leap seconds are evil. I have to use a zip structure so my binaries can have their tzdata updated if needed. Printing timestamps in general seems to be the biggest hurdle to making a program that can survive until the heat death of the universe.

      https://github.com/jart/cosmopolitan

      1. 7

        Binary. Something like x86-64 is likely to survive.

        What makes you say that? x86-64 is barely 20 years old. UNIX C code from the ’70s will have a good chance of working on a modern *NIX system, but running a PDP-11 binary from the same era is a lot more difficult.

    5. 8

      I am looking for an FP language which everyone thinks will last at least 100 years.

      What do you mean by “will last at least 100 years”?

      Do you mean a program written today will also work 100 years from now? If so, any language (functional or not) would do, because virtualization exists and will exist 100 years from now. (We are still using COBOL…) :)

      Or, do you mean a library written today can still be used 100 years from now, perhaps with minimal modifications? Personally I don’t believe this is possible, especially if the library does anything which is not strictly an algorithm juggling some primitives.

      In any case, my belief is that the longevity of any language is heavily impacted by its complexity, namely:

      • simple concepts and syntax – Lisp/Scheme or even Go (although not functional) will perhaps fare better than C++, Java or Rust; Erlang could fit, but without its BEAM and OTP I don’t see Erlang as a “language” (there are no other implementations);
      • self-contained interpreter – the simpler the installation the better (i.e. one binary); a counterexample would be Python or many Scheme interpreters, which you can’t even move to another folder without breaking;
      • interpreted or byte-code compiled (as opposed to native compilation); WASM perhaps;

      As for a particular language, I’m hoping Scheme – I’m kind of invested, having written my own Scheme interpreter – however, I don’t think it will be the one… (Looking at today’s language popularity, I’m afraid it’s going to be JavaScript…) :)

      I don’t think this 100 year language has been created yet…

      1. 7

        To put the 100-year requirement in perspective:

        • FORTRAN is from 1957
        • Algol is from 1958
        • Lisp is from 1958
        • COBOL is from 1960

        Of these, surviving Lisp dialects still have quite a lot in common with the original, Fortran 90 is almost a completely different language (and is being slowly displaced by a mix of C++, Julia, and Python), Algol is basically dead, and COBOL is a zombie that will probably keep groaning forever but no one actually wants to use it.

        C is a relative newcomer, but 1972 C won’t compile with a C23 compiler (thankfully - K&R functions without prototypes are an abomination that should have died in 1989) and it’s increasingly hard to hire C programmers as the language is displaced by C++. C++ is even newer, from 1985, and it’s starting to be squeezed in the systems space by Rust and by a large number of applications languages at the higher-abstraction end.

        If Fortran survives, then we still have 35 years to go - more than a third of the total required time - before we can claim that the industry has produced a single 100-year language. I recall a talk in the late ’90s where the speaker said ‘I don’t know what the syntax or semantics of the language that we’ll be using for HPC in 2030 will be, but I know that the language will be called Fortran’. A language called Fortran will almost certainly be around in 2057, but it may not be recognisable to someone who wrote FORTRAN I. GCC dropped support for FORTRAN 77 a while ago, and I don’t think it ever supported FORTRAN I, so it’s not even clear that Fortran has been a 50-year language yet.

        All of that said, I don’t really agree with the goal. I don’t want a language that remains the same for that long a time. The computers that I’m programming now have almost nothing beyond the superficial level in common with the machines from the ’70s. Abstractions that made sense in the ‘90s are increasingly more of a problem than a help. Similarly, the problem domains that are important have shifted. COBOL has a fantastic set of built-in functionality for driving text terminal interfaces, working with binary-coded decimal, and interfacing with record-oriented filesystems. I don’t care about any of these things today.

      2. 6

        I don’t think this 100 year language has been created yet…

        It’s called Fortran.

        1. 7

          Right. It will be called Fortran. What will the language actually be? That’s a different question.

      3. 5

        I don’t see Erlang as a “language” (there are no other implementations);

        I see your point, but how can it be anything other than a language, since you use it to express programs? It’s just not a standardized one.

        1. 2

          For me, when it comes to programming, I see things this way:

          • the language – the syntax and semantics decoupled from a particular implementation, with clearly documented semantics; (C, C++, JavaScript, Scheme, CommonLisp, even WASM;) (and perhaps even Python with PyPy or Starlark, Go and Rust with GCC, fit in here;)
          • the compiler / interpreter – the code that actually makes things written in “the language” work; (Clang, GCC, JVM, BEAM in case of Erlang, etc.)
          • the standard library, and third-party libraries – which actually provide functionality to be used from within “the language”; (OTP in case of Erlang;)
          • the tooling – cargo, hex, npm, maven, etc.; some languages are practically impossible to use without one of these (especially Rust);

          Now, when speaking in general, when one says “language” one actually means “the language + the compiler / interpreter + the standard library + the tooling”.

          However, in the context of this topic, about the 100 year language, “the compiler”, “the standard library”, “the tooling” will most likely not survive… (Take Java: we had Ant, then Maven, and I’m not even sure what we have now. Take Go: until standardizing on modules there were a couple of tools to manage dependencies. Etc.)

          The only thing that will survive (if even that) is perhaps only “the language”. And thus, having more than one implementation is important. (And given how many Scheme interpreters are out there, including mine :), I think it can be a good candidate… That or C.) :)

      4. 2

        Lambda calculus has been around for almost a century. Alonzo Church’s original encodings of Peano numerals can be directly translated into modern functional programming tools, e.g. Haskell:

        -- Lambda calculus: zero and its successor
        0 ≡ λf. λx. x
        succ ≡ λn. λf. λx. f (n f x)
        
        -- Haskell
        zero = \f -> \x -> x
        suc n = \f -> \x -> f (n f x)
        
        -- Back to a machine integer: toInt (suc (suc zero)) == 2
        toInt n = n (+ 1) (0 :: Int)
        
    6. 8

      While it’s certainly true that our field has far more inertia than forty years ago, I’m personally of the opinion that the “100 year program” is not yet the right way to achieve “100 years of usefulness”, and that the right way to get there is still the “100 year specification”.

      I struggled with this myself for the past year or so, after moving my notes/knowledge management system (man that sounds fancy…) to a third solution in about twenty years (HTML via Netscape, later Mozilla Composer, then a wiki, then Markdown). I decided that, at this rate, moving it a fourth time is inevitable, and I’d rather do it for good, so I just wrote my own thing.

      My own aims are a lot more modest: I’m aiming for the thirty or forty years of reasonably good health that I expect I still have left.

      Except for terminal programs written in C, I don’t think there’s any 40-year-old tech stack that’s still usable today except through emulators, and even if it were, I don’t exactly want to keep my shit frozen in 2022; that defeats the whole purpose of accumulating knowledge and developing wisdom.

      Instead, I settled for a system that will be easy enough to re-implement if I ever need it, and that will still remain useful even while I’m in-between implementations. For example, the “markup” language I use is barely-annotated plain text, supporting hyperlinks and media, but with a little less visual noise than Markdown. If something better comes up and I want to migrate to it, I expect a conversion script would be a few dozen lines. An incomplete implementation (i.e. all of the critical features I need day-to-day, but about 50% of the optional ones) of a WYSIWYG-ish editor for it, including some fancy-ass stuff like in-line LaTeX equation support, is about 1,200 lines of really bad Python, much of which is Tkinter boilerplate. (At its peak it was about 5,000 lines but I ended up dropping a bunch of stuff I figured is better handled separately or not at all, like file browsing and media access and hyperlinking based on unique identifiers).

      It runs really fast, too – even with a hand-rolled, extremely inefficient parser, and a mountain of Tkinter workarounds, it happily chugs along on some of my longer documents (a few hundred pages) on which the likes of TinyMCE choke instantly, and the only things it doesn’t support 1:1 are tables and inline display of some types of media, like video. The former isn’t critical for me (I’m okay with plain text tables), and the latter is 100% deliberate (i.e. I don’t want to poorly re-implement video or music players in my program, the way browsers do – the implementation effort with the current tech stack would be minimal, but I think the results would be worse than an optional preview + an “open” button that launches a proper player, which is what it does now). I expect the complete implementation would be about 2,500 lines, with a few warts that I can live with (e.g. document headings aren’t numbered automatically – a feature that’s not hard to implement but I just don’t care about it enough to bother). I wasn’t even planning to use this, I hastily threw together a Tkinter implementation as a proof-of-concept, expecting this would be the initial prototype that I’d use to validate my approach before implementing it using something less obtuse (Qt, or GNUstep if I could remember enough Objective-C). I was expecting it to be slow to the point of uselessness but nope.

      The initial implementation effort was about two weeks of working for a few hours in the evening, but much of it was spent figuring out how I want various things done, and I’ve thrown away about 80% of what I’ve implemented in that time. Re-implementing all this from scratch is something that I can probably do over the course of two or three evenings, in any tech stack that I know well. In fact, the first incarnation was based on TinyMCE – it took a long time because I had to (poorly) reimplement all sorts of browser wrappers over native features, but I could’ve probably pulled it off if I didn’t value my time. Two or three evenings even once a year is something I can live with, and is certainly comparable with the effort of resurrecting thirty year-old technology.

      1. 1

        Very true, but I disagree with “100 years of usefulness”. A specification can unfortunately lead to many incompatible implementations. A single, understandable implementation which works for 100 years is much more useful: there are no arguments over semantics, which natural language is bad at pinning down.

        I have thought a lot about specs over programs - it led me to playing with Coq for a year before realizing that programs need to be understood by regular people too.

        Regardless, your reasoning is very nice to see and I appreciate it :) Pretty much what I’ve thought for a while.

        1. 2

          Very true, but I disagree with “100 years of usefulness”. A specification can unfortunately lead to many incompatible implementations

          Oh, absolutely! I’d personally prefer a 100-year program written in a 100-year language over my contraption as well. The reason I went a different route is that I think our industry is currently at a stage where it can’t deliver that, not without compromises that I’m not willing to make, like relying on potentially obsolete hardware or emulators for it. I’m definitely cheating: my 100-year program is actually 20 five-year programs that just do the same thing. I think twenty implementations compatible enough to be useful are a problem that’s easier to solve in our current landscape and with the kind of time I expect to have on my hands, but it’s definitely the worse version.

          FWIW, I don’t think our tech landscape’s main problem here is at the language layer, but at the system layer. My first jab at a system of this kind was actually an attempt to solve some of the more irksome bugs in Seamonkey Composer. Their C++ dialect still compiles fine and it’s a fair bet it’ll still compile fine by the time Seamonkey Composer turns fifty. But the tech stack around it is so diverse, and based on so many components maintained by volunteers, that I expect most of them will be gone long before that. That’s why I eventually – and very grumpily – shifted strategy from “using stable programs that are likely to stick around and remain useful” to “writing programs that I can maintain and rewrite myself”.

          Edit: I suspect a careful choice of dependencies might alleviate most of that, though. Portable libraries that an individual developer can port by themselves, for example, might make tech churn a moot point. Similarly, huge systems that large commercial players are likely to depend on for legacy systems might work remarkably well, albeit for different reasons. Lots of tools we use now will probably be dead and buried by 2052, but it’s a pretty safe bet that programs which can run JavaScript and render DOMs will still be around (although, eww!)

      2. 1

        FWIW I have been using the same notes / knowledge management system since 2004. I wrote my own wiki, which started out as a CGI in Python 2.2 or so and is now a WSGI app in Python 2.7. It uses sqlite and doesn’t have any dependencies outside the Python stdlib.

        I still use it every day, and while there are many things I could improve about it, I’m pretty sure it will last until I die :-/

        The good thing about a web app is that it is divorced from the rest of the system by textual protocols. So as long as web browsers exist, this program will be usable.

        The Python 2.7 tarball has few dependencies and I think it will be compilable on Unix systems essentially forever…

        I think the focus on the programming language is perhaps overblown. The real issue for code longevity is DEPENDENCIES, and the language implementation is a special case of that.

        Also, being able to build your own software from source is a big boon to longevity. And that’s part of my interest in shell!

        1. 1

          The middle solution (the wiki – or, rather, its final incarnation; I started with an off-the-shelf wiki) was pretty much the same thing here, too – it was essentially a werc clone written in Python rather than rc & friends.

          My main reason for seeking an alternative was that I really wanted something with a semblance of WYSIWYG editing. A few years ago I decided I’d start using this for things that I was still doing on paper notebooks – history, mythology, mainly – and at that point the whole wiki thing blew up. Most of my notebooks in these areas are very much living documents that I never just read or just write – I usually do both. It was pretty important for me to be able to scroll back through a document and see pictures inline, to quickly get from a picture to its folder with dozens of other pictures, and so on. Writing notes in one window and seeing the output in another window just didn’t cut it anymore.

          Before that I mostly archived write-once notes on purely technical subjects and a regular wiki was more than enough for that, yeah – the preview pane always showed basically the same thing but with prettier fonts.

          1. 1

            Yeah I see that for sure … My wiki is text only, but I started using https://stackeditpro.com for the blog because WYSIWYG editing is useful there. Vim is great for code, but prose and hyperlinks and images benefit from a GUI editor.

            So it does seem like some kind of in-browser wiki GUI makes sense. StackEdit was surprisingly good and fast and local-first – but it got a bit janky lately. I still use it though.

            Even though the browser is a very weird GUI platform, it will probably last longer than Windows, and definitely longer than OS X, which has constant churn.

            I think a Linux GUI could last a long time too, but after looking at a bunch of Linux distro internals, I feel like they have a lot of “tech debt”. They also have churn – I am hesitant about the direction Ubuntu is going in. I use Ubuntu because it seems like they test on real hardware.

            Writing a GUI that works in all of Firefox/Chrome/Safari seems a little more future proof. But it’s probably something I will never have time to do :-/

    7. 7

      I’d second OCaml. I have even run bare OCaml without an OS/kernel in the past; it is very portable. While Haskell isn’t going to go anywhere, even GHC can’t compile itself with an arbitrarily old version of GHC, so that would fail the 100-year test, IMO.

      1. 4

        OCaml can’t be bootstrapped from source either (it requires a binary, although that binary is cross-platform), but I agree it doesn’t matter in practice. There is an effort to bootstrap OCaml from source, if you are interested in such things: https://github.com/Ekdohibs/camlboot.

        1. 3

          If memory serves, OCaml only needs a C compiler to build. I’m guessing C compilers will be around in 100 years…

    8. 6

      JavaScript. Straight-up, unadulterated, statically hosted, prototypically architected, build-system-free, untyped JavaScript. There are programs I’ve left completely unattended for 10 years that still work on every computer I own, including the 15-year-old ones. Nothing this side of C comes close.

      A PL is only as long-term viable as its runtime environment and toolchain. No ‘proper’ FP langs seem to care.

      1. 3

        Just as your simple JavaScript code is still running 10 years later, so will any code written in the simplest form of a language and self-contained (without external dependencies), for example:

        • C – if one takes the simplest C code, one can build it without issues on most OSes, and I bet that will still be true years from now… (there are even a few C compilers out there that are very portable;)
        • Scheme, especially R5RS – there are many Scheme implementations able to run the core parts of R5RS, and if not, one could cobble together a Scheme interpreter in a matter of weeks, if not days;

        So in this regard, the longevity of a code base is determined not so much by the programming language as by the way in which the language was used.

        1. 1

          Yes.

    9. 5

      Trying to think why Common Lisp isn’t the right answer, though SML is a good one.

    10. 5

      Interesting post! Did you consider Common Lisp? I’m not contesting the choice of Standard ML, but CL is something I’ve heard people mention when the question of a “100 year language” comes up. I’m not a CL user myself, but reports indicate a similar experience to yours, e.g. that the language is stable and there’s very little breakage in the ecosystem.

      1. 1

        CL would seem to be discounted for the same reasons as Scheme – it carries many of the same flaws

        1. 1

          Good point; with any Lisp you pick, you’ll have to deal with a lot of parentheses. However, as opposed to Scheme, there aren’t that many CL implementations that I know of, and there’s an ANSI standard that I guess all current implementations follow.

          I’m not really a good CL advocate as I haven’t used it, but Steve Losh puts forth some good arguments for it here: https://stevelosh.com/blog/2018/08/a-road-to-common-lisp/

    11. 5

      I’m not against people using SML, but I feel like the language the post author might have wanted but for some reason missed is the other ML—OCaml. ;)

      • It has all the features of SML since they share common origin, but it’s evolving and has many useful features that SML doesn’t (polymorphic variants, local module opens, monadic let operators…).
      • MLton is a very slow compiler (it’s whole-program optimizing, which as a side effect also makes it slow) and lacks a REPL, which is why people often use a second compiler for experimentation, e.g. SML/NJ, which is interactive but can’t produce standalone binaries. OCaml bootstraps itself in ~10 minutes, but the binaries it produces are neither slow nor bloated. It also has a built-in bytecode target and a REPL.
      • SML’s packaging is still in its infancy, while OPAM has long been the standard way to install OCaml itself and its packages. There are quite a lot of packages, too.
      • There’s a musl compiler flavor readily available for building eternal Linux binaries.
      • There’s also a mature tool for trans-compiling to JS that can make a JS version of OCaml’s REPL.

      I can also personally attest to it being at least 20% of a 100-year programming language. The Lua-ML project has been dormant for almost two decades. It also makes rather intricate use of the module system and functors to make the standard library of the Lua interpreter reconfigurable.

      When I set out to resurrect it, there were very few things I had to update for modern compiler versions, mostly changing String to Bytes when a mutable string was intended. I don’t remember any runtime errors caused by a new compiler. In the compiler, that change was preceded by a years-long deprecation, too.

      Generally, both compiler maintainers and library authors take compatibility very seriously. Modules removed from the standard library become OPAM packages. Many third-party library packages have the upper bounds of their dependencies open rather than pinned, and that rarely fails.

      Of course, it’s not a perfect language or a perfect ecosystem, but to me it’s quite close.

      1. 2

        but it’s evolving and has many useful features that SML doesn’t (polymorphic variants, local module opens, monadic let operators…).

        I think OP would consider this an anti-feature. The language should NOT evolve. The longer it’s been static the better :)

        1. 1

          It’s evolving in a backward-compatible manner, so I see no problem with it. Besides, one can always opt out of further evolution by vendoring a specific compiler. ;)

          There are tools in the OPAM repo for preparing a tarball with everything including the compiler to reproduce a program build from scratch, so it’s pretty easy to make an “eternal snapshot” of your dependencies.

    12. 4

      There are a lot of difficult trade-offs here, but IMO the clear winner for “functionalish thing that will still be around more or less as-is for the indefinite future” is Scheme. Probably R5RS specifically.

      It just is what it is. The spec is quite small and understandable. Countless people have implemented it, in various ways, to various degrees of completion and/or quality (no doubt several people reading this have had a go at some point). It’s pretty aggressively distilled down to its essence and has a philosophy of not bolting on genuine new language features unless it’s completely unavoidable.

      Of course, it’s Scheme, so… yes, parentheses, and dynamically typed, and many of the above things that make it a good contender for this “long-lived stability” criterion are also the reasons why it’s not the most practical language to use, and there is quite a lot of implementation variability if you’re not careful to restrict yourself to the actual standard, and and and…

      Still though, I have a very hard time imagining a future world where one couldn’t grab a copy of SICP or whatever and fire up some ~r5rs environment or another and get on with programming in much the same way they could have any number of decades in the past, much like C. Scheme is forever, and the almost religious philosophy of not adding special case syntax and piling on complexity to solve problems keeps it that way. One could make similar arguments for many other languages, especially those with a more academic background, but Scheme seems relatively unique in being something defined precisely in what are more or less timeless papers, while also being widely implemented and not a dead language in practice (like, say, SML).

      Being a dead language could be considered an advantage here (as another thread is diving into), but I don’t think that is universally true. A live language community doesn’t necessarily mean that the language itself will tend to break. That’s at least partially an outcome of the language design itself. Love it or hate it, the hyper-minimalist “lambda is all you get” anti-syntax s-expressions thing of the Scheme world keeps that at bay much more than, say, in the ML-derived world.

      Long-term language stability, I think, mainly derives from having a relatively small set of core concepts that everything else is built on top of. You can learn and understand more or less everything that’s in C, or in Scheme, now or 30 years ago or 30 years from now and they’ll still be more or less the same as they’ve always been. It’s the “concept” part that really clarifies this for me: you can even see it in the discussions in programming language communities. C and Scheme forums almost never have topics about entirely new concepts in the language (beyond the superficial anyway, I’m using a sort of capital-C “Concepts” here), the language just is what it is. They are also very stable languages. Meanwhile, C++ and Rust forums are constantly packed with whole new concepts for the language, or why existing ones are bad, or (etc etc etc). They are notoriously unstable languages. This is not a coincidence.

      I guess that rambling rant boils down to: find a language where the answer to “what fundamental or syntactic constructs is the community likely to try and add to the language in the future?” is “probably none at all”.

    13. 4

      I think there’s a decent argument to be made for Clojure for two reasons:

      1. It runs on the JVM. There’s a high likelihood that JVMs will still be running in 100 years, given how much Fortran is running right now.
      2. Clojure is a remarkably stable language, with very few breaking changes.

      1. 2

        I was disappointed not to see more comments supporting Clojure, and then I saw yours, but that didn’t make me happier, because I expected you’d bring other arguments to the discourse.

        • Clojure is great not because it runs on the JVM but because it is capable of running on the JVM. And on top of JavaScript. And also .NET. And soon, on top of other language platforms. Clojurists today are figuring out Python and R interop. ClojureDart is coming soon. There already exist Clojure-like languages that compile/transpile/interpret to Lua, Golang, and other PLs. The hosted nature of Clojure makes it a pretty compelling choice for future-proofing projects. Even if the JVM falls out of favor and goes away, the majority of Clojure functions in your codebase could probably be reused on a different platform.

        • Clojure doesn’t have strong ties with any programming paradigm. You can use object-orientation (where it makes sense), meta-programming (if desired), logic, etc. When tomorrow new paradigms are invented, they most likely can be adapted for Clojure too.

        • Clojure has made simplicity and pragmatism a cornerstone of its philosophy. There aren’t too many gotchas or things that don’t really make sense. And most other PLs are riddled with some weird constructs and made-up shit like “dunder methods”, “iiffies”, “eieios”, etc. Who wants to deal with all that garbage in the future?

    14. 4

      These are not languages I can step away from and come back to 10 years later without having to relearn things or read up on what changed.

      I don’t know a single language that this wouldn’t apply to.

      1. 4

        C. Though, not functional.

      2. 3

        common lisp and sml are very much applicable. frozen standard, multiple conforming implementations, changes mostly in libraries. even smalltalk qualifies that way. it is not that people program pharo very differently from how they did squeak or older implementations.

    15. 4

      These are not languages I can step away from and come back to 10 years later without having to relearn things or read up on what changed.

      This is highly anecdotal, and in direct conflict with your requirement of “closer to Haskell than Lisp/Scheme” and “built on top of C, C++ or Rust”, but I think Clojure might be worth taking into consideration.

      I have a personal project that’s 10+ years old, and I tend to work on it at a cadence of a couple of days of work a year — or less. Obviously, my progress on the project is not something to write home about, but every time I come back to it, I’m surprised at how stable the language and the ecosystem are, and how easy it is to pick up where I left off.

      Sure, the language changes, but it does so slowly enough to avoid “FOMO” even if I don’t upgrade to the latest version. Libraries and tools come & go, but even the ones that have been replaced by something new & hip still work — they’re either actively maintained or stable enough. The JVM is a decent platform that values backwards compatibility while continuously improving. The community is small, but strong. You can target JavaScript as a platform should you need so, and write parts in Java if performance is a concern.

      For what it’s worth, the project I’m talking about is a database-backed HTTP service with a very simple frontend. 100 years is a big ask for anything computer-related, but I could vouch for the 10-year viability of Clojure.

    16. 4

      Out of all the languages I’ve worked with during my career, Erlang and POSIX shell have felt the most ‘solid’. I can look at a shell script or Erlang program written in the last 30 years and be confident that I

      1. can understand it
      2. can run it on modern systems

      Erlang definitely doesn’t shine in the pure-functional department. Its type system is very lispy, not offering the wonders that ML-style systems give you. … yet despite all that, it’s a very fun and elegant programming environment, and I could see it hanging around long-term.

    17. 3

      I would like to add Prolog: even though it’s logic programming, some of its idioms are also used in FP. It’s already 50 years old, and the Prolog core is still the same. It was standardized in the 90s and there are lots of systems.

      The bad news is that the ISO standard is a bit small when it comes to real-world applications, and outside the standard, systems can implement a lot of different things.

      1. 1

        I like Prolog too, I’ve used it quite a bit, but it has different use cases than your usual prog lang. I could see both Prolog and SML living forever.

    18. 3

      Bare Scheme can be a bit trying, although Guile is a very useful tool. If I were going for Scheme, I’d go for Racket: https://racket-lang.org/

      An interesting contender, although I dislike its reliance on the JVM, is Clojure.

      Multiparadigm languages like Ruby and D allow your laundry list, and everything else!

      So the question then becomes: do you want a full no-mutable-state, no-side-effects FP language, or only composition, partial application, and first-class functions?

      If you want the first, then you want Clojure or Haskell, not Scheme.

      If you want “code is data” then Clojure or https://factorcode.org/ (Factor is a very interesting one on the code is data front)

    19. 3

      Lisp.

    20. 3

      Lisp is over 60 years old. Even if you don’t start the timer with John McCarthy’s first version, it’s clearly an idea with staying power.

      @wizeman’s suggestion of ML is good for many of the same reasons - it’s already stood the test of time.

    21. 3

      One possibility, if all you’re interested in is longevity, is Coq. It’s a year older than Haskell (making it nearly 33), and for nearly all of that time it has had many fancy features that Haskell still lacks. It’s widely used in some mathematical research communities, and I believe its popularity has steadily grown. The primary downside is that it doesn’t do I/O by default (though there are now ways of getting around that, and e.g. extracting OCaml programs from your Coq code).

      1. 2

        Been there, done that; Coq is great for many reasons, but the language I’m looking for needs to be somewhat performant and I/O-capable. I still love Coq regardless.

    22. 3

      Surprised no one has shared https://www.unison-lang.org/ already, but I smiled when I saw how optimistic you are about 100 years.

      Aren’t we in a “Cambrian explosion” age of languages?
      Nothing but useful concepts and stable ways to represent them will remain in 100 years.
      Referential transparency might be one of them.
      Equational reasoning might become mainstream.
      Algebraic effects might stick around to compose effectful programs.

      Learn any language(s) that contain those concepts and you’ll be covered if you plan to code a thing at age 120. PS: my 2 cents, I have no idea what I’m doing :)
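
      To make one of those concrete: referential transparency is what lets you reason equationally, replacing pure expressions with their values. A minimal sketch in Standard ML (inc and dbl are just made-up names for the example):

          (* Because inc and dbl are pure, map fusion holds:             *)
          (*   List.map inc (List.map dbl xs) = List.map (inc o dbl) xs  *)
          (* and either side can be rewritten into the other, step by    *)
          (* step, from the definitions alone.                           *)
          fun inc x = x + 1
          fun dbl x = 2 * x
          val a = List.map inc (List.map dbl [1, 2, 3])  (* [3, 5, 7] *)
          val b = List.map (inc o dbl) [1, 2, 3]         (* also [3, 5, 7] *)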

    23. 3

      100 years is a very long timeline. Just think about tech 100 years ago for example. :P

      It’s possible computing will be so different that current programming languages won’t have any relevance in that paradigm. I would think that even in 50 years things will be so different that current ideas will be largely obsolete.

      1. 5

        In 1922 there were no electronic computers, but 1972 had COBOL, Fortran, C, IBM System/360, ASCII, and the beginnings of Unix and ARPANET. It’s hard to make predictions, especially about the future, but my guess is that anything with sufficient install base is essentially immortal and will never go away until civilization ends somehow. So, Unicode, HTML, CSS, JavaScript, PDF, JPEG, PNG, etc. will always exist in one form or another.

    24. 2

      FWIW Python 2.7 is pretty much frozen in time and is a very capable language …

      SML is nice but I haven’t seen many (any?) big or common programs written in it. One reason may be that a language isn’t a sufficient foundation for useful programs (100-year or otherwise) – you also need a definition of an operating system.

      On the other hand, I’m sure there are many >100K and >1M line codebases still running on Python 2.7 (mostly inside companies or big organizations).

      Also it feels like this 2003 essay deserves a mention: http://www.paulgraham.com/hundred.html

      In fact I remember Larry Wall saying it was partly an inspiration for Perl 6 (now Raku).

      And the two languages from its author: Arc and Bel

      http://arclanguage.org/

      http://www.paulgraham.com/bel.html (apparently defined in a single big text file: https://sep.yimg.com/ty/cdn/paulgraham/bellanguage.txt?t=1595850613&)

      I’ve never used them, but the design motivation is related. They’re trying to “define away” most of the implementation details of Scheme, i.e. bootstrap the language from a smaller set of axioms. So presumably they won’t change in 100 years.

      But I think the real reason that something won’t change is that a lot of working software depends on it. There are large portions of Unix and the web that will never change.

      This is mostly food for thought… If you are having fun with SML, then that is what is important :-) But it seems like there is a lot of personal taste involved. Invoking the “Lindy Effect”, I’d choose Unix / C / Shell / Fortran / Lisp / SQL, since those languages are extremely old and still widely used.

      So it seems like there are 2 directions here:

      1. Finding a language IMPLEMENTATION that has not changed very much and is likely to exist in the future. That’s why I pointed to Python 2.7.

      2. Building your own language (like Arc or Bel, though of course it doesn’t have to be a Lisp). I feel like if you take the 100-year goal very seriously, it will eventually lead to this :-)

      1. 1

        SML is nice but I haven’t seen many (any?) big or common programs written in it.

        Several large provers are written in it (like HOL4, Twelf, and Isabelle, which admittedly I am most familiar with), and it’s used in that space in industry (as a prover implementation). When MLWorks (related to LispWorks and other Harlequin products) was new, it was used in that space as well as in financial modeling, where nowadays we see Haskell, OCaml, Scala, & F# used. There are definitely many large SML code bases that will be floating around for quite some time to come.

    25. 1

      Did you consider OCaml? It seems to have cannibalized most of SML’s audience these days.

    26. [Comment removed by author]