1. 57
  1.  

    1. 52

      Correct me if I am wrong, but I don’t believe C was “designed” to be a 100-year language. It kind of fell into that space through being what Unix was written in; it is only thought of like that in hindsight. It seems more important to solve some real problems with your language and get a solid user base, and then worry about long-term stability (probably around 1.0, like they said).

      1. 45

        As someone put it: C is for UNIX the same way JS is for the web.

        You don’t really get there through merit, it just happens.

        1. 13

          Counterpoint:

          This significantly understates the real appeal of C at the time, even and especially to people who had alternative languages. A great illustration of this is C on the early Macintosh. You see, unlike environments like MS-DOS (which had no language associated with it, just assembler), the early Macintosh systems already had a programming language; they were designed to be programmed in Pascal (and the Mac ROMs were originally written in Pascal before being converted to assembler).

          This was more than just an issue of Apple’s suggested language being Pascal instead of C. The entire Mac API was designed around Pascal calling conventions and various Pascal data structures; it really was a Pascal API. Programming a Mac in C basically involved swimming upstream against this API, which meant dealing with things like non-C strings (if I remember right, Mac ROM strings were one length byte plus data). I believe that Mac C compilers had to introduce a special way of declaring that a C function should have the Pascal calling convention so that it could be used as a callback function.
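          A rough sketch of what that looked like from the C side (illustrative only; these are not the actual Mac Toolbox headers or type names):

          ```c
          #include <stdio.h>
          #include <string.h>

          /* A classic Mac "Pascal string": first byte is the length, then up to
           * 255 bytes of data, with no NUL terminator. */
          typedef unsigned char Str255[256];

          /* Convert a C string to a length-prefixed Pascal string. */
          static void c_to_pascal(const char *src, Str255 dst) {
              size_t len = strlen(src);
              if (len > 255) len = 255;
              dst[0] = (unsigned char)len;
              memcpy(dst + 1, src, len);
          }

          /* Convert back: a Pascal string has no terminator, so C code has to
           * copy it before any str* function can touch it. */
          static void pascal_to_c(const Str255 src, char *dst) {
              memcpy(dst, src + 1, src[0]);
              dst[src[0]] = '\0';
          }

          int main(void) {
              Str255 pstr;
              char back[256];
              c_to_pascal("Hello, Macintosh", pstr);
              pascal_to_c(pstr, back);
              printf("len=%u text=%s\n", pstr[0], back);
              /* Mac C compilers also grew a `pascal` keyword so a C function
               * could be handed to the Toolbox as a callback, roughly:
               *   pascal void MyAction(ControlHandle c, short part);
               */
              return 0;
          }
          ```

          Every call into the Toolbox meant shuffling strings between these two representations, which is the “swimming upstream” described above.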

          Despite all of this, C crushed Pascal to become by far the dominant programming language on the Macintosh. I don’t think it even took all that long. Programmers didn’t care that dealing with the API issues was a pain; working in C was worth it to them. It didn’t matter that Pascal was the natural language to write Mac programs in, or that it was a perfectly good language in its own right. C was enough better to displace Pascal in a hostile environment.

          C did not win just because it was at the right place at the right time. C won in significant part because it was (and is) a genuinely good language for the job it does. As a result it was the language that a lot of pragmatic people picked if you gave them anything like a choice.

          1. 5

            That’s a good anecdote/example.

            But I would say it’s kind of evidence of the general principle of OSes/platforms being more important. C was “proven” by writing the Unix kernel and user space. As long as it was good enough for that, it was good enough for the analogous things on Macs and Windows.

            Then this language of OSes became the language of apps because of the “viral” effect.

            It was good enough that for a long time it basically had a monopoly on portable kernels and “high performing production quality” apps. Unix kernels, Windows NT kernels, word processors, spreadsheets, etc.

            Even today we still use nginx, Apache, sqlite, redis, etc. which are all written in plain C.

          2. 2

            C did not win just because it was at the right place at the right time. C won in significant part because it was (and is) a genuinely good language for the job it does

            Why do you think that is the case? What did it do differently compared to, say, Pascal at the time? (Especially since the equivalent of that Pascal API situation is just as prevalent nowadays; it has simply become a C-based one, and that’s how programs are supposed to do FFI. Not due to any merit, but because it’s C.)

            1. 2

              In case you haven’t read it, I recommend Why Pascal is Not My Favorite Programming Language (1981) by Brian Kernighan. A very influential article in the Pascal vs C debate.

          3. 2

            This significantly understates the real appeal of JavaScript at the time, even and especially to people who had alternative languages. A great illustration of this is JavaScript in desktop applications. You see, unlike environments like MS-DOS (which had no language associated with it, just assembler), the desktop operating systems already had a programming language; they were designed to be programmed in C.

            This was more than just an issue of their suggested language being C instead of JavaScript. The entire API was designed around C calling conventions and various C data structures; it really was a C API. Programming a desktop OS in JavaScript basically involved swimming upstream against this API, which meant dealing with things like manual memory management. I believe that desktop JavaScript runtimes had to introduce a special way of declaring that a C function should have the JavaScript calling convention so that it could be used as a callback function.

            Despite all of this, JavaScript crushed C to become by far the dominant programming language on desktops. I don’t think it even took all that long. Programmers didn’t care that dealing with the API issues was a pain; working in JavaScript was worth it to them. It didn’t matter that C was the natural language to write desktop programs in, or that it was a perfectly good language in its own right. JavaScript was enough better to displace C in a hostile environment.

            JavaScript did not win just because it was at the right place at the right time. JavaScript won in significant part because it was (and is) a genuinely good language for the job it does. As a result it was the language that a lot of pragmatic people picked if you gave them anything like a choice.

            1. 2

              And… yes? JavaScript has taken off because it’s a reasonable high-level language with lots of libraries and computers can support such things now. Garbage collection and strings are important, and JavaScript has them.

        2. 5

          Yep, it’s the advantage to being the default in a new platform that goes through an inflation phase.

          1. 11

            Yeah, basically operating systems and platforms have all the influence, and programming languages are just along for the ride.

            Most people learn C# to make Windows apps, Objective-C or Swift to make iOS apps, Kotlin/Java to make Android apps, etc.

            Or JS for the web, and C and shell for Unix.

            Shell was more niche during the days of Windows desktops apps, but it came back big with the cloud, virtual machines, containers, distributed systems, etc.

            PHP is another good example. It wasn’t even a language at first – it was a bunch of CGI utils.

            I also learned that MATLAB wasn’t a language either at first – it was a bunch of linear algebra routines.


            But I do think the bar has been raised in recent years, so hopefully we will get some better languages, not just the minimum hacked on top of a popular operating system :-)

            That said, I think stability is a very good goal. I think Go has done very well here.

            In contrast the recent distutils/setuptools deprecation mess in Python makes me a little sad. I don’t fault the current maintainers, but it seems like the original authors didn’t really design for stability / for the future, which leads to a big ongoing headache.

      2. 9

        Yes, I think that’s exactly what happened. My reading is: Hare copies a lot of language design from C because it’s considered “proven”. Hare aims to match or even exceed C’s ability to endure the test of time. Hare does not try to copy C’s design process: it is explicitly going to stop more or less all language evolution, whereas C is continuing to evolve.

        I’m happy it’s at least not copying C’s most detrimental and easy-to-fix defects, such as:

        • Unchecked array indexing
        • Header files (as opposed to a module system)
        • Leaving way too many things undefined in order to placate obsolete or dangerous implementations
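        For the first point, a minimal sketch of what “unchecked” means in practice (hypothetical example):

        ```c
        #include <stdio.h>

        /* C itself performs no bounds check: `a[i]` compiles without any
         * diagnostic for any i, and an out-of-range access is undefined
         * behaviour. Any checking is the programmer's job, e.g.: */
        static int get_checked(const int *a, int n, int i, int fallback) {
            return (i >= 0 && i < n) ? a[i] : fallback;
        }

        int main(void) {
            int a[4] = {1, 2, 3, 4};
            /* a[7] would compile just as happily -- that's the defect. */
            printf("%d\n", get_checked(a, 4, 2, -1));  /* 3 */
            printf("%d\n", get_checked(a, 4, 7, -1));  /* -1 */
            return 0;
        }
        ```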
      3. 2

        Being what Unix was written in, C didn’t have to advertise its elegance to gain a user base. In a modern context, stability and simplicity can be a selling point that distinguishes the language from the many competing alternatives.

        1. 3

          I find it weird that people call C “simple” when its standard has over 500 pages. And you do actually need to know everything in the standard to program in C, otherwise your program will release nasal demons for some bizarre technical reason.
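          A classic example of the kind of rule buried in those 500 pages (a sketch; what the UB version actually does depends on compiler and flags):

          ```c
          #include <limits.h>
          #include <stdio.h>

          /* A natural-looking overflow check -- but signed overflow is
           * undefined behaviour ("nasal demons"), so an optimizing compiler
           * is allowed to assume x + 1 > x always holds and compile this
           * down to `return 0`. */
          int overflow_check_ub(int x) {
              return x + 1 < x;
          }

          /* The well-defined version compares against the limit instead. */
          int overflow_check_ok(int x) {
              return x == INT_MAX;
          }

          int main(void) {
              printf("%d\n", overflow_check_ok(INT_MAX)); /* 1 */
              printf("%d\n", overflow_check_ok(42));      /* 0 */
              return 0;
          }
          ```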

          1. 2

            But that kind of bullshit never really happened in the early days. Compilers were more straightforward and less eager to exploit the technicalities of the “standard” (which initially didn’t even exist; C got popular years before it was finally standardized by ANSI). Back then, it really was more of a “what you see is what you get” kinda language which was reasonably close to the metal.

            People used to use inline assembly a lot more as well. I often think that if you did that nowadays, all hell would break loose due to these nasal demons.

          2. 1

            I don’t think most C programmers know everything in the standard. One indication of simplicity is the existence of compilers written in relatively few lines of code, such as cproc, tcc, and scc.

            1. 2

              That raises the question: if C can be described in relatively few lines of code, what does the standard need 500 pages for?

              Yeah, most C programmers don’t know the standard, which means that most C programs presumably have a lot of nasal demons lurking in them.

              1. [Comment removed by author]

              2. 1

                That raises the question: if C can be described in relatively few lines of code, what does the standard need 500 pages for?

                Chances are it doesn’t.

                Yeah, most C programmers don’t know the standard, which means that most C programs presumably have a lot of nasal demons lurking in them.

                I wouldn’t presume that.

    2. 28

      Hare does not, and will not, support any proprietary operating systems.

      Good luck. I doubt it will become a 100 year language without people being able to hack on code on their Macs or Windows machines. But I could be wrong of course :-)

      1. 4

        Presumably, if it becomes popular, someone other than ddv will port it to Windows, Mac OS, and so forth. This decision is certainly going to increase friction for adoption, though.

        1. 10

          There are thankless jobs, and then there’s being the maintainer of a hostile fork of a ddv project…

          1. 5

            Now I’m thinking about creating a fork of SourceHut that soft-wraps text in mailing lists.

        2. 3

          Presumably, because it’s a ddv project, it won’t become popular. All I ever read about the guy is friction over his opinions.

          1. 5

            Drew has many other projects which have achieved significant traction, most notably Sourcehut and Sway/wlroots.

        3. 2

          Except they say: we don’t want those changes. We will never upstream them.

          Nobody will burn their fingers on this. What would be the point of maintaining a port that the original project doesn’t want?

      2. 3

        Well, Darwin is open source :-)

      3. 3

        Will it work on ReactOS? And then just happen to work on Windows?

      4. 2

        Yeah, as much as I like Drew’s work and some aspects of Hare’s design, this is probably going to be the reason it doesn’t gain traction. That being said, with WSL, Windows support is probably less important than ever, but the lack of macOS support will hurt it drastically.

      5. 1

        I think the hope is that people stop using macOS and Windows within 100 years.

    3. 23

      The feature freeze thing seems a little short-sighted. C has lasted a long time without making fundamental changes, but it’s made a lot of incremental changes that have improved type safety, runtime safety*, or in some cases just developer quality of life. I can think of at least one outright semantic bug (exposed array values) in C89 which got fixed in C99. Even the grumpiest, most conservative C programmers I’ve met—and this is a field where the baseline is high—embraced C99 eventually, admittedly in most cases after C11 landed.

      More generally, I’m not sure if C’s pace of development has much to do with its longevity. Its dead contemporaries moved slowly too.

      * Yes, really. It was much worse when function calls weren’t type-checked and there was no snprintf.

      1. 3

        I still see new projects in C89.

        1. 7

          XScreenSaver just started allowing non-C89 patches in the last month or so.

      2. 3

        If I need to write in standard C, I only write C89. If I were in a situation where I wanted to write C but with extra non-standard sugar, I would just use a C++ compiler. This is better than writing in so-called C99 because everywhere that C99 is supported, C++ is also supported, but not everywhere that C++ is supported is C99 supported.

        1. 19

          Non-standard sugar … so-called C99

          Dude, it’s an official ISO standard (ISO/IEC 9899:1999). What’s your beef against it?

          Personally, if I were forced to write C code (ugh) and couldn’t even declare variables mid-block or use “//” comments, I’d be rooting through the medicine cabinet for something to OD on. That’s taking retro-computing a bit too far.

          1. 6

            C11 is also a standard and is the first version where you can write multithreaded code without relying on non-standard extensions or undefined behaviour.
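            For illustration, a minimal C11 threading sketch using the standard `<threads.h>` API (note the threads library is optional: implementations may define `__STDC_NO_THREADS__`, and some toolchains shipped it years after C11):

            ```c
            #include <stdio.h>
            #include <threads.h>   /* standard C11 threads, no pthreads needed */

            static int counter = 0;
            static mtx_t lock;

            /* Each worker increments the shared counter 1000 times under a
             * standard mutex -- no compiler-specific extensions involved. */
            static int worker(void *arg) {
                (void)arg;
                for (int i = 0; i < 1000; i++) {
                    mtx_lock(&lock);
                    counter++;
                    mtx_unlock(&lock);
                }
                return 0;
            }

            int main(void) {
                thrd_t t1, t2;
                mtx_init(&lock, mtx_plain);
                thrd_create(&t1, worker, NULL);
                thrd_create(&t2, worker, NULL);
                thrd_join(t1, NULL);
                thrd_join(t2, NULL);
                mtx_destroy(&lock);
                printf("%d\n", counter);  /* 2000 */
                return 0;
            }
            ```

            Before C11 you would have reached for pthreads, Win32 threads, or compiler builtins for the same thing.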

            I’m curious what supports C++ but not C99. Visual Studio used to be the exception, but that changed several years ago.

            1. 4

              I’m curious what supports C++ but not C99.

              And how old the C++ version supported is…

              1. 2

                Good point. A few years ago, I used armcc with the mBed SDK and it supported C99 but only C++98. Going from C++14 everywhere else to C++98 there was incredibly painful.

            2. 3

              I’m curious what supports C++ but not C99. Visual Studio used to be the exception, but that changed several years ago.

              That was a particularly annoying example that was fully addressed in Visual C++ 2015. Depending on how old you are that is either recent or ages ago. There are other examples of course in the legacy domain. Systems that came up in the 90s are another example.

              C11 is also a standard and is the first version where you can […]

              Important to note that if you are targeting contemporary systems, there is no reason to use C99 or C11: any compiler that supports those standards supports C++98 or C++11. If you are not targeting contemporary systems, that is also not a reason to use C99 or C11, since those compilers didn’t exist back then.

          2. 3

            Dude, it’s an official ISO standard (ISO/IEC 9899:1999). What’s your beef against it?

            There’s no beef of course. It was just a concise way of implying that it’s a pointless standard from a practical standpoint. Why would you target C99 when you have a C++98 compiler available? Why would you target C11 when you have a C++11 compiler available? Based on your second comment I think we both agree that writing in C is less productive than writing in C++, so in a world where more powerful C++ compilers are prevalent why would you target C99 or C11? Just restrict yourself to the C++ subset that you like.

            Personally, if I were forced to write C code (ugh) and […]

            There are legitimate situations where you must write C code or it’s prohibitively expensive to use a newer language. E.g. you are writing low-level code that must be portable to a wide variety of legacy systems that don’t have C++ compilers. Another example is bootstrapping a proprietary platform that only provides a system C89 compiler. E.g. the official way to bootstrap current versions of GCC is system C89 compiler -> Old GCC written in C89 -> New GCC written in C++

            1. 9

              Why would you target C99 when you have a C++98 compiler available?

              So I don’t have to deal with C++.

            2. 1

              If the build result is supposed to be a C shared library (for use with other C applications, or another language’s FFI), do you have to jump through any hoops to generate the library when compiling as C++ instead of C? Does all the C++ name mangling go away if you aren’t using C++ classes?

              1. 1

                C++ has native support for C’s ABI. Just use extern "C".
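                For example (hypothetical library and function names), a C++ translation unit exporting an unmangled, C-callable symbol:

                ```cpp
                // mylib.cpp -- built as C++, consumed through a plain C ABI.
                // extern "C" gives these functions C language linkage, so a C
                // program (or another language's FFI) can link against the
                // unmangled name "mylib_add".
                extern "C" {

                int mylib_add(int a, int b) {
                    return a + b;   // the body may freely use C++ internally
                }

                }  // extern "C"
                ```

                The matching C header would declare `int mylib_add(int, int);`, typically wrapped in an `#ifdef __cplusplus` guard so the same header works from both languages.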

          3. 1

            One of my CS professors said the lack of a K&R book about C99 reflected how the authors felt about C99 in some way, but didn’t go into detail. I always wondered if there was substance to that…

    4. 40

      AFAIK it’s not even a 0-year programming language yet, because it’s really a pet project that no one uses. Nothing wrong with that per se; I have some pet projects that no one uses too. But it’s hard not to point it out when looking at such grandiose goals.

      I would instead be worried about, and focused on, becoming more than just a pet project, as I find it extremely unlikely that there’s a niche for Hare, especially given the overlap with Zig. I’m more of a Rust dev, but if I had a need for something more C-like, I would definitely go with Zig over Hare, for so many reasons.

      1. 9

        Given Hare has 87 contributors I wouldn’t necessarily call it a pet project…

        Source: cd hare && git shortlog -sn | wc -l

        1. 2

          Fair. Some people’s pet projects are way more popular than mine, for example. :D

      2. 2

        I do think there’s a niche for Hare. It’s the extended circle of ddevault’s friends and like-minded folks who like plan9, hobbyist C, wayland, sourcehut, and all the other projects in the extended ddevaultverse. On the other hand I don’t see more than a few dozen (maybe hundreds??) people using Hare in any serious fashion, because I don’t think this circle is bigger than that.

        And it’s fine! It’s what scratches their itch, just like Jakt might be used only by the SerenityOS/Ladybird group of people. There’s just no chance it gets bigger than that small niche.

        1. 1

          Why wouldn’t such people just use C?

          1. 4

            For me, it would be because C’s undefined-behaviour situation is becoming more untenable by the year. Even the simplest possible use case for it (constant-time cryptographic code with zero dependencies) requires significant effort, with sanitizers and Valgrind, and in a couple of cases even those aren’t enough.

          2. 1

            I suspect there’s a few reasons:

            • Hare is objectively better;
            • It’s fun to hack on a language that you also use. These people are driven by fun, at least in part, they’re not clocking in at a 9-to-5 job when they’re writing Hare;
            • They have tried modern languages (say, Zig, Rust, Go, etc.) and realized how lacking C can be. That’s clearly why Hare was started in the first place. It’s hard not to miss type-safe unions and basic modules, for example, once you’ve experienced them.
    5. 21

      Putting aside whether they succeed at becoming a 100 year language (I think that for a software project to live 100 years, adaptability will be essential), I find it refreshing to see a group who willingly accept that they will be an inferior choice for certain domains in order to embody the values that they find important.

      1. 4

        Fully agree. I also really appreciate that they are applying some taste and are discerning in what features they add instead of blindly adding everything and the kitchen sink, like most languages seem to have taken to nowadays (looking at you, async/await meme).

        1. 2

          I also really appreciate that they are applying some taste and are discerning in what features they add instead of blindly adding everything and the kitchen sink

          Cannot agree more with you there! When I was a 20-year-old programmer, I loved all the advanced (and usually complex) features that languages would throw at me: macros in Common Lisp, metaclasses in Python, higher-kinded types in Haskell, etc. Now that I’m in my 40s, I appreciate simple languages that emphasize simple constructs (functions, arrays, structs, loops) and don’t try to shove in every feature that is considered “modern” or “necessary for scalability”.

    6. 16

      Is this number a reference to Paul Graham’s essay The Hundred-Year Language? I think it’s safe to say by now that Arc will not be used in 2103. Maybe Hare will fare better.

      1. 5

        Was Arc even being used in 2013 by anyone other than Graham?

    7. 12

      An admirable goal for sure. It does remind me of Standard ML, which lives up to many of those points (the language and standard library haven’t evolved since 1997). Although it saddens me, the result has been that Standard ML is, for all intents and purposes, dead.

      Another apt comparison would be Common Lisp, with similar conclusions.

      1. 7

        CL is not trendy, some of its first Google results were not attractive, but CL might be much more lively than one thinks.

        I’ve been cursed and I deployed a couple web apps for professional purposes into the wild. Oops. Yes I could have done them in Python or in Rust, but no thanks I was more effective, for them, in CL, thank you very much oh my god.

        1. 3

          Writing CL is so much fun. As fast to write as Python, but so much less restarting (even compared to Python auto-reloading environments like Django runserver). And the debugging is so much better, especially with SLY. For personal projects, it’s completely replaced Python for me. I don’t even use a whole bunch of advanced CL features yet.

        2. 2

          It is great to see all this CL usage. Thanks for the links! The awesome-lisp-companies list is a good one. Is there a list of open source projects in CL?

          What’s popular in open source gets adopted in companies too. The devs who make decisions about tech stacks tend to draw on their experience in open source: if technology X is popular in open source, it will get adopted in companies because the devs like working on X. So a list of awesome open-source Lisp projects would definitely be useful.

          1. 2

            There is a list of CL software in general: https://github.com/azzamsa/awesome-cl-software You’ll find a mix of stuff: the usual suspects (pgloader, Maxima, music notation software, Lem, Nyxt…), proprietary tools (Grammarly), some not-so-awesome stuff that’s only there to fill out a category, nice-to-see games and other geek projects (ballish, cl-pkr…), old links (reddit v1), and surprisingly old but still-used professional software (PTC 3D CAD designer).

          2. 1

            Is there a list of open source projects in CL?

            I’ve found GitHub’s code search is a good starting point for that type of question. It doesn’t cover everything, but it’s a good place to begin.

      2. 6

        Although it saddens me, the result has been that Standard ML, for all intents and purposes, is dead.

        Why are we tying “deadness” to the freshness of the specification or the last commit time of an implementation? Things can be complete. Things can be bug free enough, or fast enough, or whatever enough to be useful without constant change.

        That said, the last release of SML/NJ was on August 1st, and MLton was last released in 2021. These are the two major implementations of SML; neither is dead. They do (I assume) have small communities and a small set of useful libraries to leverage, making them less than practical for a large set of people, however.

        Edit: It seems as though there’s also been some effort to produce “Successor ML” (inspired by HaMLet), and both SML of NJ and Mlton are adding features from it to their implementations…

        1. 5

          I’d tie deadness to “number of lines of code written in it”.

          1. 3

            Lines of code only matter if someone is running them for reasons other than preserving a dusty deck.

            1. 3

              This is tough. You want number of lines written in a language (over some time period), but it also has to factor in how many projects those lines are being written for. If there’s only code written for the implementation of the compiler, for instance, it’s probably about dead (or just born, and that’s where the time component comes in).

              1. 2

                Yeah, the writing or maintenance of the code can be just navel-gazing or preserving a museum piece, if it isn’t being run to do something useful. Code is only valuable if the benefit you get from using it outweighs the costs of maintaining it.

          2. 3

            Pretty much exactly my point. SML/NJ and MLton both seem to be happily chugging along, with or without updates to the specification – which do appear to be happening, albeit extremely slowly due to the small community.

      3. 4

        Another apt comparison would be Common Lisp, with similar conclusions.

        I seriously doubt that CL would be any more popular just due to getting updated standards. They never did any additional standards because the process was so slow and expensive, but outside of that, CL has absolutely been evolving. There are many extensions and a good degree of cross-implementation compatibility. Hell, many libraries written today will work in all current implementations, but in zero implementations from 10 years ago. I think that speaks to the language evolving and new stuff gaining adoption.

        Edit: Actually, I even think CL is more alive today than, say, 10 years ago.

        1. 2

          The design of CL makes updating the standard almost unnecessary. New behavior and constructs can always be added with macros, and outdated constructs don’t need to be used. Language extension is done in libraries (packages), and developers choose whether to use them.

          I think that property bothers some people, but it seems to work okay in practice.

          1. 1

            Many, many things cannot be added via macros. Though portability libraries have a great track record so far for making it easy to use features that have to be implemented in each implementation.

      4. 4

        I think I see people, apparently Common Lisp users, arguing for using Common Lisp every few months or so. I’m not sure I realized anyone still used Standard ML until I read https://lobste.rs/s/ilpdug/til_addlicense_tool_automate_license.

        1. 5

          I mean, I love both languages, and will probably keep coding in them no matter what. I’ve even written Standard ML and gotten paid for it. But Standard ML is very much not alive. I think Common Lisp is faring better (it is used at Google, as far as I know), but not a lot.

          1. 14

            Common Lisp is also somewhat exempt from being “frozen” because of how much language-extending macros are, and always have been, a core part of what makes Common Lisp…well, Common Lisp. Things like ASDF and Quicklisp are so common at this point that they might as well be part of the standard, despite not being so. That means I can grab any mainstream CL implementation and immediately have access to a huge number of libraries that fix/paper over/extend various issues in Common Lisp, which in turn means that the Common Lisp I’m writing in 2023 looks meaningfully different than what I wrote back in college. (Off the top of my head, and focusing purely on language-level changes, things like iterate, Alexandria, and Bordeaux Threads are just in my default utility pack.)

            Beyond that, though, while the Common Lisp spec is frozen, the actual implementations definitely aren’t. SBCL routinely adds new features and functionality that some of the libraries I just mentioned end up relying on, enabling them to do things (or at least do them efficiently) that a strict ANSI Common Lisp might not be able to.

            Combine all these things, and Common Lisp is about as frozen as the north pole in summer. Things might move slower, and sure, there are some bits that no one wants to change, but the overall feel is still very much evolving.

          2. 7

            But Standard ML is very much not alive.

            Some large and important theorem provers (which are being actively developed) are implemented in Standard ML.

            The latest Poly/ML release was less than 2 months ago. The maintainer still seems to be making improvements and appears to be responsive to submitted PRs.

            MLton hasn’t had a release recently, but its GitHub repo is very active. The latest commit was 3 hours ago (as of this comment).

            SML/NJ’s latest release was 3 months ago.

            So while I agree that Standard ML as a language is effectively dead (is that a bug or a feature?), it’s still being actively used and the most important Standard ML implementations still seem to be improving.

            Also note that all of these Standard ML implementations have their own (non-standard) extensions (as libraries, at least, but also some language extensions I think), which are also being actively used. This includes heap save/restore, lightweight threads/continuations, native threads and FFI mechanisms which you can use to interface with code written in other languages. So it’s not like you are necessarily stuck in 1997 forever if you decide to use Standard ML… :)

            Also, isn’t Standard ML still the only significant programming language which has an official formal specification?

          3. 7

            What most of the other commenters are failing to understand is that the liveness of a language has more to do with an active community generating new libraries and supporting software than with spec updates. I agree with you; I rather like Standard ML the language, although I wish higher-order functors were part of the standard. But it’s not very enjoyable to program in, because there is basically no ecosystem to base your work on.

            1. 3

              I agree with you. I started doing Standard ML after starting to work with F#. I fell in love with the language; however, it does show its age and lack of libraries during development. It’s also quite awkward that the two most popular compilers have different build systems. SuccessorML features are also not supported across the board, so you have to be careful to support all the major compilers.

              Shameless plug to my general experience with SML in 2023: https://gluer.org/blog/2023/thoughts-on-standard-ml-as-of-2023/

      5. 3

        Standard ML may be dead, but its effective successor is OCaml, which is still very much alive and kicking (and even trending nowadays).

    8. 32

      I aim to become a 100-year person, but uh, you never know for sure.

      1. 53

        I plan to live forever, or die trying!

    9. 10

      To add to what Celeritas said about C’s longevity being an accident and not intentional: the requirements of computing 50, let alone 100, years from now will differ heavily in ways you won’t anticipate. If you have enough critical mass, though, you could accidentally dictate systems design. C’s design and lifespan have influenced CPUs, after all - probably for the worse.

    10. 4

      This essay has, IMHO, one extremely large oversight: bootstrappability. If I want to be able to use a language 100 years from now, it needs to have grown in everywhere like an invasive weed (C, JS), or have a regularly-exercised bootstrap path. Hare aims to become self-hosting, and if that’s not managed extremely carefully it’s easy to lose the bootstrappability. I have been hacking on a (still private) side project that’s meant to be part of a bootstrap chain, and was sad that I had to reject Zig as the implementation language because of its (clever, but still blob-using) WASM-based bootstrap. (It’s currently in C99 for convenience, but once it’s up and stable, I’ll either target the intersection of C99 and an old C++ standard, or maybe even C89.)

      1. 5

        Zig ships with a single-file WASM interpreter too, though. So it still just requires a C compiler.

        It’d arguably be better to target something like Nguyen and Kay’s Chifir (the “Cuneiform” VM), which can be implemented in a page of code.

        1. 2

          Yes, and it’s a very clever way around many of the problems, but that WASM build of the Zig compiler still acts as an opaque binary blob as far as assessing bootstrappability is concerned. This is why you see projects working on full-source bootstrap doing things like bootstrapping GNU Flex from Heirloom Lex.

          1. 1

            Does bootstrappability mean no platform dependence (and)or no obfuscated source?

            1. 1

              https://bootstrappable.org/ is a good site to read, as are the blog posts from the Guix project.

              If I need to be able to use this language 100 years from now, I need to be able to build it, because what happens if all the bindists are no longer available? I would also like to be able to trust any binaries used in the boot process, and they should be as simple as possible. The Guix people have got this trusted seed down to a 357-byte program called hex0.

    11. 4

      Does anyone know if Rust has this goal, not by name but almost as a backronym? I know where the name originally comes from, but I’ve heard someone say “software that rusts,” as a sort of stability feature. Write once, run forever (and I believe I understand the limits of that). Does anyone think this is a perceived or a real benefit of Rust projects, especially feature-complete ones?

      1. 7

        Does anyone know if Rust has this goal

        Not quite as ambitiously, Carol Nichols (now on the Rust Project Leadership Council) gave a talk titled “Rust: A Language for the Next 40 Years” (YouTube; I haven’t watched it myself).

      2. 2

        Certainly. The very second post on the Rust blog was titled Stability as a Deliverable. Three years later, the project introduced Rust 2018 via a novel edition mechanism. Crates declare the edition they’re written in, and the Rust project makes several promises:

        • stability: the compiler will always compile code written in any edition it knows about. So your old projects continue to build even as you update your compiler.
        • no ecosystem fracturing: crates can import functions/data structures/etc from crates written in different editions. So editions can be mixed freely.
        • no stagnation: thanks to the above two promises, Rust can use a new edition to introduce backwards-incompatible changes without breaking existing code.

        Also, Rust double-checks against accidental breaking changes by doing ‘crater runs’: every stable release of the compiler gets tested by building every crate on crates.io.
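        As a concrete sketch of how small the mechanism is from a user’s perspective (the crate name here is hypothetical): the edition is just a key in a crate’s Cargo.toml, and dependencies written against other editions need no special handling.

        ```toml
        # Cargo.toml for a hypothetical crate pinned to the 2018 edition
        [package]
        name = "my-crate"     # hypothetical name
        version = "0.1.0"
        edition = "2018"

        [dependencies]
        # This dependency may itself be written in the 2015 or 2021
        # edition; cargo links editions together freely.
        serde = "1.0"
        ```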

    12. 3

      The last 100 years have seen multiple revolutions in programming (yes, I know), and the next 100 years will likely see even more.

      Here are some obstacles I see for trying to design a 100-year language now in 2023.

      • The fundamental paradigm of interacting with a CPU: machine language. The machine language as described in the programming manual for a modern CPU is growing increasingly removed from what the CPU actually does, and often interferes with maximizing the throughput of the silicon hardware. This is where clean-sheet designs like the Mill CPU may see dramatic performance gains for the same silicon process technology. What kind of language would best fit such drastically different paradigms for machine code?
      • Speed of memory vs. the CPU. SDRAM has been a bottleneck for quite a while, and it shows no sign of getting better soon. Will we instead move to system designs where a smaller CPU is embedded on the RAM die itself? What’s the best way to program a typical desktop system or phone that has a thousand cores, all with high bandwidth to their own local memory but slower access to global memory?
      • 3D. Silicon process technology is still fundamentally 2D for various reasons (various process nodes are called 3D, but we are still laying out the circuits in a 2D plane with routing in multiple layers). If we move beyond silicon photolithography to molecular nanotechnology, and create truly 3D designs with processors embedded in memory in all directions, do convenient fictions like a linear address space make sense anymore?
      • Who is doing the programming? Ten or twenty years from now, we might have mostly AGI systems writing programs, which go from a high-level formal description (or just vague intent expressed in a human native language) directly to machine code. What, if any, intermediate representation would be appropriate then? (Probably the formal specification.)
    13. 3

      Should be functional by default then; mutation should be considered a specially-controlled side-effecting optimization going forward
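      For what it’s worth, some existing languages already lean this way; a small Rust sketch of the idea (bindings are immutable by default, and mutation is an explicit opt-in):

      ```rust
      fn main() {
          let x = 1; // bindings are immutable by default
          // x += 1; // would not compile: cannot assign twice to an immutable variable
          let mut y = 10; // mutation must be requested explicitly with `mut`
          y += x;
          println!("{}", y); // prints 11
      }
      ```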

    14. 3

      Happy to see the term “100-year X” catching on: http://len.falken.directory/100-year-programs.txt - I feel Zig isn’t there, but could become a 100-year language with the right moves.

      It’s interesting: I feel Hare ticks all the boxes that Standard ML has ticked for me:

      • Standard ML even has a good type system - something I was willing to live without.
      • Standard ML even compiles programs to small binaries.
      • Standard ML has an easy-to-use FFI.
      1. 4

        How are those three properties related to longevity? In particular, the size of binaries seems unrelated to longevity.

      2. 1

        Which Standard ML implementation do you use? I, honestly, had to wonder today why I am not using it…

        1. 1

          I use Poly/ML for my tiny projects. SML# and MLKit are also actively maintained, in addition to MLton.

    15. 3

      Narrator: “It wouldn’t”