Threads for bshanks

  1. 2

    The goal is to cover the entire RV64 user-level instruction set […] RV32 is currently out of scope, but might be nice in the future.

    Isn’t RV64 an extension of RV32, with some additional instructions for larger integers? I feel like you’d have most of the RV32I, at least, by virtue of having done RV64 in the first place.

    Also I am not so familiar with Sourcehut – where is the actual draft guide?

    1. 2

      Also I am not so familiar with Sourcehut – where is the actual draft guide?

      i’m not sure but i think it’s here: https://hg.sr.ht/~icefox/riscv-reference/browse/instructions.toml

      1. 1

        Isn’t RV64 an extension of RV32, with some additional instructions for larger integers? I feel like you’d have most of the RV32I, at least, by virtue of having done RV64 in the first place.

        At first glance, yes. But at second glance this is not quite true; or at least, while it may be true in practice, RV64 is not a strict extension of RV32 in principle. From some of the commentary in the spec, section 1.3:

        The four base ISAs in RISC-V are treated as distinct base ISAs. A common question is why is there not a single ISA, and in particular, why is RV32I not a strict subset of RV64I?

        …[reasons]…

        A related question is why there is a different encoding for 32-bit adds in RV32I (ADD) and RV64I (ADDW)? The ADDW opcode could be used for 32-bit adds in RV32I and ADDD for 64-bit adds in RV64I, instead of the existing design which uses the same opcode ADD for 32-bit adds in RV32I and 64-bit adds in RV64I with a different opcode ADDW for 32-bit adds in RV64I. This would also be more consistent with the use of the same LW opcode for 32-bit load in both RV32I and RV64I. The very first versions of RISC-V ISA did have a variant of this alternate design, but the RISC-V design was changed to the current choice in January 2011. Our focus was on supporting 32-bit integers in the 64-bit ISA not on providing compatibility with the 32-bit ISA, and the motivation was to remove the asymmetry that arose from having not all opcodes in RV32I have a *W suffix (e.g., ADDW, but AND not ANDW). In hindsight, this was perhaps not well-justified and a consequence of designing both ISAs at the same time as opposed to adding one later to sit on top of another, and also from a belief we had to fold platform requirements into the ISA spec which would imply that all the RV32I instructions would have been required in RV64I.

        Basically, teasing out all the details that may differ is more work than I want to do for now. Plus a spec for RV64 will be 99% valid for RV32 anyway, so I don’t see a great benefit to separating the two. I’m more interested in phone/desktop-class RISC-V stuff than embedded right now, though I confess I may be in the minority there.
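
        To make the ADD/ADDW distinction concrete, here’s a small Python sketch (my own illustration, not from the spec or the draft guide) of the two RV64 behaviors, modeling registers as unsigned 64-bit patterns:

        ```python
        MASK64 = 0xFFFF_FFFF_FFFF_FFFF
        MASK32 = 0xFFFF_FFFF

        def add64(a, b):
            # RV64 ADD: full 64-bit wraparound addition.
            return (a + b) & MASK64

        def addw(a, b):
            # RV64 ADDW: add the low 32 bits, then sign-extend the
            # 32-bit result into the full 64-bit register.
            r = (a + b) & MASK32
            if r & 0x8000_0000:
                r |= 0xFFFF_FFFF_0000_0000
            return r

        # 0x7FFFFFFF + 1 overflows as a 32-bit value:
        # ADD  leaves 0x0000000080000000 (a positive 64-bit number),
        # ADDW leaves 0xFFFFFFFF80000000 (-2^31, sign-extended).
        ```

        So a program that only ever uses ADD on values that fit in 32 bits behaves the same on both ISAs, but the encodings and register-level results genuinely diverge.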

      1. 2

        It saddens me when open source projects/languages move to a proprietary solution for community support. It makes me reluctant to sign up, and I worry that the project not owning the data will become a problem later. I feel similarly about Julia, which has the same approach. Maybe I just don’t understand the advantage of going in that direction. Also, those websites do not work without JavaScript, and I don’t understand why that is a requirement.

        1. 10

          I think this is using the Discourse discussion platform, which is open source: https://github.com/discourse/discourse

          1. 2

            Well, now my statement makes a lot less sense; for some reason I thought this was the same as Disqus. Thank you for pointing it out!

          2. 6

            Well, the main community links are to the mailing lists: https://lists.racket-lang.org/

            Understandably many people prefer Discourse to mailing lists though.

            Also those websites do not work without Javascript,

            Discourse instances usually also support a mailing list mode.

            1. 2

              I did not know this either. Although I don’t get why the text content cannot be rendered in a browser without JavaScript, it’s nice to know there is at least some alternative.

          1. 5

            There is SWEET16 but that was used for actual serious work so maybe it doesn’t meet your criteria.

            There are plenty of One Instruction Set Computers to choose from. Subleq seems to be the design most often discussed.

            I wanted to link to some documentation of The Pinky Processor but all I could find was a post that briefly mentions it. As far as I remember, the interesting thing about the design was that its addressing granularity was not based on bytes like we’re used to but was instead done by bits. You could specify any bit in memory with a single address, and there were no alignment requirements, so you could put a 4-bit field followed immediately by a 13-bit field, followed immediately by a 5-bit field, and so on. If anybody can find the original specification document I’d love to see it.

            Edited to add: The Gray-1: a computer made from just memory chips.
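
            As a toy model of that bit-granular addressing (my own sketch; I have no Pinky documentation to check it against), memory is just a flat array of bits, and fields of arbitrary width pack back to back with no alignment:

            ```python
            class BitMemory:
                """Toy bit-granular memory: addresses name individual bits, not bytes."""

                def __init__(self, size_bits):
                    self.bits = [0] * size_bits

                def store(self, addr, width, value):
                    # Write `width` bits of `value` starting at bit address `addr`,
                    # least significant bit first.
                    for i in range(width):
                        self.bits[addr + i] = (value >> i) & 1

                def load(self, addr, width):
                    # Read `width` bits starting at bit address `addr`.
                    v = 0
                    for i in range(width):
                        v |= self.bits[addr + i] << i
                    return v

            mem = BitMemory(64)
            # Pack a 4-bit, a 13-bit, and a 5-bit field back to back.
            mem.store(0, 4, 0xA)
            mem.store(4, 13, 0x1234)
            mem.store(17, 5, 0x15)
            ```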

            1. 2

              Each and every link has brought a smile; I especially like the Gray-1 computer.

              1. 1

                Here’s some Pinky Processor stuff: https://github.com/tslug/pinky/

              1. 1

                I think one thing that makes Perl hard to read is Perl’s idea of list/scalar context, which I believe is a form of ad-hoc polymorphism. jbert posted this link introducing scalar/list/void context in Perl: https://www.perlmonks.org/?node_id=738558

                The article makes it sound like sigils are just clarifying what type things are, and wouldn’t that improve readability? But because of context, sigils in Perl aren’t just like type annotations that help clarify and double-check what’s already going on, rather they are actually choosing which of the overloaded meanings of the annotated expression is being run.

                Some other programming languages have ad-hoc polymorphism based on how the value of an expression is later used, as well. In my opinion this can be powerful but can make things less readable; you can no longer understand code by “single-stepping” with your eyes from the first thing computed to the last.

                1. 1

                  But because of context, sigils in Perl aren’t just like type annotations that help clarify and double-check what’s already going on, rather they are actually choosing which of the overloaded meanings of the annotated expression is being run.

                  I like that description. They don’t describe the variable at all, since you can change sigils on foo at will. They’re closer to casting an expression.

                1. 3

                  More papers, and comments on which ones to read, at https://www.cs.tufts.edu/~nr/c--/papers.html

                  I believe this is the same as the ‘Papers’ section of the archived C– website that adamo pointed to.

                  1. 17

                    I think this mini-renaissance of Ada awareness is in part due to the “mass discovery” of safe systems programming led by Rust.

                    I like everything I’ve read about Ada, and I think SPARK is its killer feature. That being said, in the realm of “safe systems programming” I don’t think Ada’s going to be able to withstand the onslaught of Rust – Rust has the modern tooling, the ecosystem, and all the momentum.

                    (This pains me to say because I’m not the biggest Rust fan and I really like what I’ve seen of Ada, but if I’m going to learn one strict language, it seems like the best one career-wise would be Rust. I’d love to be proven wrong, though; the world needs to ring with the keyboards of a thousand languages.)

                    1. 11

                      Ada’s been flying under the radar doing its own thing, until AdaCore started open sourcing things and pushing awareness a few years ago. GNAT is actually a mature ecosystem (gpr, gnat studio, gnatdoc, gnattest, gnatpretty, etc.) and has libraries which go back decades; it’s just not well known. The issue is that there had been no unifying system for distributing libraries before, which has hampered things, but Alire is fixing that now and also streamlining working in the language as well.

                      Rust is doing well for good reason. It has an easy way to start, build, test and share things, which is exactly why Alire is based on the work that community (and the npm community) has done. The big philosophical difference I’ve found between writing Rust and Ada is that Rust focuses on traits/types, whereas Ada focuses on functions since it relies predominantly on function overloading and separation into packages (modules), though you can use ML-style signatures. It’s interesting because in Ada, in general (except for tasks/protected objects), types aren’t namespaces for functions, whereas Rust relies on this with traits. In this way, Ada feels more to me like a functional language like Haskell and Rust feels more like an OOP language like Java. I’ve always found this bizarre because it should be the opposite.

                      Ada’s big advantage is how its general conceptual models are much more similar to C or C++ (think type-safe C with function overriding, with classes if you want them), but it builds in a much stronger type system and concurrency types as well. Built-in pre/postconditions, invariants, and type range checks also go a long way toward correctness, even if you don’t want to fully commit to formal verification with SPARK. Note also that SPARK isn’t an all-or-nothing thing with Ada code either; you can verify individual components.

                      1. 7

                        While I’m generally a huge user of Rust and a proponent of it for many CPU-bound workloads, Rust really doesn’t have much in the way of safety features except when you compare it strictly to C and C++. Rust has a lot of antipatterns that the ecosystem has broadly adopted around error handling and error-prone techniques in general, and the overall complexity of the language prevents a lot of subtle issues from being cheap to find in code review. Building databases in Rust is not all that different from building them in C++, except that the search scope is a bit narrower when I discover memory corruption bugs during development.

                        Ada (and much more so, SPARK) is a completely different tool for a completely different class of problems. The proof functionality of Why3 that you gain access to in SPARK has no usable analogue in Rust. The ability to turn off broad swaths of language features is one of the most powerful tools for reducing the cost to build confidence in implementations for safety critical domains. There are so many Rust features I personally ban while working on correctness-critical projects (async, most usage of Drop, lots of logical patterns that have nothing to do with memory safety directly, many patterns for working with files that must be excruciatingly modeled and tested to gain confidence, etc.), but with Ada it’s so much easier to just flip off a bunch of problematic features using profiles.

                        Ada SPARK faces zero competition from Rust for real safety critical work. It is simply so much cheaper to build confidence in implementations than with Rust. I love Rust for building things that run on commodity servers, but if I were building avionics etc. I think it would be a poor business choice, TBH.

                        1. 3

                          The ability to turn off broad swaths of language features is one of the most powerful tools for reducing the cost to build confidence in implementations for safety critical domains

                          This is something I never thought about, but yeah, Ada is the only language I’ve even heard of that lets you globally disable features.

                          1. 2

                            antipatterns that the ecosystem has broadly adopted around error handling and error-prone techniques […]

                            What kind of antipatterns are you thinking of? A lot of people appreciate the Result type and the use of values instead of hidden control flow for error handling.

                            1. 3

                              Try backed by Into is far, far worse than Java’s checked exceptions in terms of modeling error handling responsibilities. I’ve written about its risks here. Async increases bugs significantly while also increasing compile times and degrading performance to unacceptable degrees that make me basically never want to rely on broad swaths of essentially untested networking libraries in the ecosystem. It makes all of the metrics that matter to me worse. There are a lot of things that I don’t have the energy to write about anymore, I just use my own stuff that I trust and increasingly avoid relying on anything other than the compiler and subsets of the standard library.

                              1. 1

                                I think your concerns are valid, but it’s basically like saying “don’t write int foo() throws Exception” in Java.

                                There is a strong tendency in the Rust community to throw all of our errors into a single global error enum.

                                citation needed there, I guess? Each library does tend to have its own error type, which means you’re going to have to compose it somewhere, but it’s not like everyone returns Box<dyn Error>, which would be the single global error type (although that’s what anyhow is for applications). The right balance should be that functions return a very precise error type, but that there’s Into implementations into a global foo::Error (for library foo) so that if you don’t care about which errors foo operations raise, you can abstract that. If you do care then don’t upcast.

                                I think async is overused, but claiming it degrades performance is again [citation needed]?

                            2. 1

                              antipatterns that the ecosystem has broadly adopted around error handling and error-prone techniques in general

                              I’m interested to hear some of these problematic antipatterns around error handling and error-prone techniques in general?

                            3. 3

                              You’re right about the tooling, ecosystem and momentum, but Rust is a very long way from provability. I wonder if we’ll ever see something SPARK-like but for a subset of Rust.

                              1. 4

                                The question is how much provability matters. My guess is not a whole lot in most cases.

                                I feel like Rust’s momentum is largely a combination of, “C and C++ have too many footguns,” “Mutable shared state is hard to get right in a highly concurrent program,” and, “Everyone is talking about this thing, may as well use it for myself.”

                                1. 5

                                  Provability is starting to matter more and more. There are very big companies getting involved in that space: Nvidia has started using SPARK, Microsoft has been funding work on things like Dafny and F* for a long time, Amazon uses TLA+… Provability is just getting started :).

                                  1. 3

                                    I think provability definitely has some applications, largely in the space that Ada is used for currently – safety-critical systems mostly. I want to be able to prove that the software in between my car’s brake pedal and my actual brakes is correct, for example.

                                    1. 18

                                      I want to be able to prove that the software in between my car’s brake pedal and my actual brakes is correct, for example.

                                      I think we’ve shown pretty conclusively you can’t solve the car halting problem.

                                      1. 17

                                        I want to be able to prove that the software in between my car’s brake pedal and my actual brakes is correct, for example.

                                        If you’re going to use 11,253 read/write global variables (yes, eleven thousand, not a typo) in your car software with very limited and broken failsafes against bit flips and other random faults while intentionally ignoring OS errors, then you’re going to run into problems no matter which tool you use.

                                        Just running codesonar on the Toyota software resulted in:

                                        • 2272 - global variable declared with different types
                                        • 333 - cast alters value
                                        • 99 - condition contains side-effect
                                        • 64 - multiple declaration of a global
                                        • 22 - uninitialized variable

                                        The throttle function was 1300 lines of code. With no clear tests.

                                        These issues are only partly technical problems IMO; something like Ada can help with those, but the far bigger problem isn’t technical. People choose to write software like this. Toyota chose not to take the reports seriously. They chose to lie about all sorts of things. They chose not to fix any of this. The US gov’t chose not to have any safety standards at all. Regulators chose not to take action until there really wasn’t any other option.

                                        And then it took a four-year investigation and an entire team of experts to independently verify the code before action finally got taken. You didn’t need any investigation for any of this. You just needed a single software developer with a grain of sense and responsibility to look at this for a few days to come to the conclusion that it was a horribly unacceptable mess.

                                        (And yes, you do need a proper investigation for legal accountability and burden of proof, and more in-depth analysis of what exactly needs fixing, but the problems were blatantly obvious.)

                                        I’d wager all of this would have failed with Ada as well. There weren’t even any specifications for large parts of the code (and other specifications didn’t match the code), there wasn’t even a bug tracker. If you’re going to cowboy code this kind of software then you’re asking for trouble. Do you think that Ada, TLA+, or SPARK would have fared better in this kind of culture? At the end of the day a tool is only as good as your willingness to use it correctly. Software people tend to think in terms of software to solve these problems. But if you look a bit beyond all the software bugs and WTFs, the core problem wasn’t really with the language they used as such. They knew it was faulty already.

                                        Does provability matter? I guess? But only if you’re willing to invest in it, and it seems abundantly clear that willingness wasn’t there; they willingly wrote bad software. It’s like trying to solve the VW emission scandal by using a safer programming language or TLA+. That’s not going to stop people from writing software to cheat the regulatory tests.

                                        It’s kind of amazing how few faults there were, because in spite of the many problems it did work, mostly, and with some extra care it probably would have always worked. More guarantees are always good, but to me this seems to indicate that maybe provability isn’t necessarily that critical (although it can probably replace parts of the current verification tools/framework, I’m not so sure it has a lot of additional value beyond economics).

                                        Summary of the reports: one, two.

                                        1. 1

                                          Does provability matter? I guess?

                                          Great, I’m glad we agree.

                                          More seriously: sure, culture is a problem; I certainly wasn’t trying to discount it.

                                          But this is an AND situation: to have proved-correct software you need a culture that values it AND the technology to support it. Writing off provability as a technology because the culture isn’t there yet is nonsensical. It actively undermines trying to get to the desirable state of proved-correct safety-critical software.

                                          I appreciate you wanted to have a rant about software culture, and believe me, I love to get angry about it too. You’re just aiming at something other than what I said.

                                          1. 1

                                            I appreciate you wanted to have a rant about software culture, and believe me, I love to get angry about it too. You’re just aiming at something other than what I said.

                                            It wasn’t intended as a rant, it was to demonstrate that at Toyota things were really bad and that this is why it failed at Toyota specifically. This wasn’t about “software culture”, it was about Toyota.

                                            Writing off provability as a technology because the culture isn’t there yet is nonsensical. It actively undermines trying to get to the desirable state of proved-correct safety-critical software.

                                            Of course you need to consider the actual reasons it failed before considering what the solution might be. Overall, the existing systems and procedures work pretty well, when followed. When is the last time your accelerator got stuck due to a software bug? It’s when you choose not to follow them that things go wrong. If Toyota had just followed their own procedures we wouldn’t be talking about this now.

                                            Great, I’m glad we agree.

                                            I don’t think we agree at all, as I don’t think that provability will increase the average quality of this kind of software by all that much, if at all. The existing track record for this kind of software is already fairly good and when problems do arise it’s rarely because of a failure in the existing procedures as such, it’s because people decided to not follow them. Replacing the procedures with something else isn’t going to help much with that.

                                            It may replace existing test procedures and the like with something more efficient and less time-consuming, but that’s not quite the same.

                                            And the Toyota problem wasn’t even necessarily something that would have been caught, as it was essentially a hardware problem inadequately handled by software. You can’t prove the correctness of something you never implemented.

                                            1. 1

                                              Proving software correct is categorically better verification than testing it with examples. If we can achieve provably-correct software, we should not throw that away just because people are not even verifying with example testing.

                                              Programming languages that are provable increase the ceiling of potential verification processes. Whether a particular company or industry chooses to meet that ceiling is an entirely different discussion.

                                  2. 3

                                    Have you run across any good SPARK resources? I’m interested in learning not only about how to use it, but also the theory behind it and how it’s implemented.

                                    1. 4

                                      https://learn.adacore.com/ should be a good starting point to learn SPARK. If you’re interested in how it’s implemented, reading https://www.adacore.com/papers might be a good way to learn.

                                  1. 29

                                    I agree with the points raised in the article. One thing to add is that many tools built in academia are there to support the paper, not to take on a life of their own. Given the short life of a thesis, there is not much one could do to make these tools gain much traction (all examples in the article show this).

                                    There is another weird thing in our industry — I’ve seen many companies embracing the latest shiny tools and frameworks (k8s, react, containers, …), yet when it comes to thinking about and implementing ways to speed up developer work by improving build times or reorganizing dependencies, that is always a task for the intern, the least suitable person to handle such a thing.

                                    1. 10

                                      yet when it comes to thinking and implementing ways to speed up developer work by improving build times or reorganize dependencies, that is always a task for the intern, the least suitable person to handle such a thing.

                                      That has not been my experience. I’ve been tasked with that sort of work many times, and given permission to pursue it many more. Currently a senior UI engineer at Microsoft, but previously worked at a 500 person EdTech company for several years, and had a brief, but enjoyable stint at Todoist. All three were very happy to give that work to an experienced developer.

                                      Have I had managers who were averse to that? Yes. But it has been the exception.

                                      1. 3

                                        Agreed with this. My experience at both large and small tech firms has been that when a senior person wanted to work on developer productivity, it was almost always welcomed. Maybe it’s different at companies whose product is something other than what the developers are building, though.

                                      2. 7

                                        The problem in academia is that it is controlled by where the professor can get money from. There is a lot more grant money for making shiny new tools than for maintaining and adapting old tools, and your academic life depends really strongly on how much grant money you can bring in (determines how many students you can support, how many conferences you can send your students to, and what equipment you can give your students). (New papers are also indirectly about grant money. You need new papers to bring in grant money, and you can’t write new papers simply by maintaining or updating existing tools).

                                        1. 3

                                          I agree, and it’s also about promotion criteria; it would be hard to get tenure if all you ever worked on was maintaining old tools. I think the solution is to create a new tenure-track academic role which is evaluated, promoted, and funded based on the maintenance and creation of useful open-source artifacts (rather than the traditional goal of the discovery of novel knowledge).

                                      1. 6

                                        Another solution to this problem would be to create a new kind of academic role whose function is to create and maintain open-source tools/content. Economically, the academic system is fundamentally well-suited to solve any societal tragedy-of-the-commons regarding intellectual public goods. However, the current system is more narrowly focused on the goal of discovering novel knowledge, which is only one special case of intellectual public goods.

                                        We should create a role of “open source professor” whose job would be to create and/or maintain open artifacts which are useful to society, rather than to make new discoveries. This would have applications beyond just developer tools, too.

                                        1. 3

                                          Kinesis Freestyle Edge, split and tented to about 35 degrees by using a book stand.

                                          Main keyboard criteria for me are: split, tentable to between 30 and 50 degrees, >= 72 keys (more keys better though). Nice-to-haves that the Freestyle has: 95 keys, mechanical keys, backlight, somewhat portable/compact, softwareless remapping/macros. Nice-to-haves that the Freestyle is missing: thumb clusters (like the Kinesis Advantage, except i would like them to be closer), mouse movement layer (like the Keyboardio Model 01), built-in tenting to >= 30 degrees, contoured/sculpted/concave keywells (like the Kinesis Advantage), columnar/ortholinear layout (like the Kinesis Advantage), maybe trackpoint, palm key (like the Keyboardio Model 01).

                                          I also tried the Kinesis Advantage2 (86 keys), but i guess my hands are too small because the thumb clusters were too far away, and it isn’t tented to a high enough angle for me, also it’s extremely bulky. I also tried the Keyboardio 01 (tented via book stand) but its 64 keys were not nearly enough; all that chording/layer-switching slowed me down. I used to happily use the SafeType vertical keyboard, but it didn’t include arrow keys; they recommended a separate, horizontal keypad, but the frequent wrist rotation between vertical and horizontal caused pain. I am averse to the idea of building my own keyboard.

                                          For mouse i use a Contour Rollermouse Red in front of the keyboard.

                                            1. 5

                                              Wonderful article! It articulates and supports the concept extremely well.

                                              It’s also very relevant to the programming language I’m designing. It supports the full range of substructural types, with full type inference of required capabilities (using a type class / traits system). This means, for example, that code that does not move values on the stack(s) will not require the “Move” trait, and can thus be trivially compiled to use stack allocation.

                                              And linear types will be used for I/O, so that all functions remain pure, but you don’t need the complexity of Monads like in Haskell. All such capabilities/effects will be inferred and tracked by the type system.

                                              1. 1

                                                And linear types will be used for I/O, so that all functions remain pure, but you don’t need the complexity of Monads like in Haskell.

                                                Are you referring to an API like this?

                                                getLine : s -o (s, String)
                                                putLine : String -> s -o s
                                                
                                                main : s -o (s, Int)
                                                main = ...
                                                
                                                1. 1

                                                  It’s a (the first?) multi-stack concatenative language, so you don’t have to explicitly thread linear values through every function call. All functions are inherently linear functions from one state of the (implicit) multi-stack to the next state of the multi-stack.

                                                  The surface syntax will likely be simpler, but the fully explicit type for a “print line” term will be something like:

                                                  (forall s1, s2, w WriteStr . $: s1 * Str $stdout: s2 * w -> $: s1 * Str $stdout: s2 * w )
                                                  

                                                  Where the identifiers between $ and : are the stack identifiers (empty for the default stack), the variable w is qualified by the WriteStr trait, and the stack variables s1 and s2 can expand to any product type.

                                                  The stack variables make the function (multi-)stack polymorphic. If you’re interested in learning more about stack polymorphism, I highly recommend this blog post from Jon Purdy: http://evincarofautumn.blogspot.com/2012/02/why-concatenative-programming-matters.html?m=1

                                                  In order to clone, drop, or move a value, it must possess the Clone, Drop, or Move trait, respectively. In this way, we can support the full range of substructural types, making the type system quite expressive! This makes it possible to express things like a file handle that must be opened before being written to and an open file handle that must be closed before it can be dropped.
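
                                                  A toy run-time sketch of those trait-gated rules (plain Python with invented names, purely illustrative; the real language would reject the offending program statically at type-checking time rather than raising an error at run time):

```python
# Toy model: each value carries the set of structural traits it implements,
# and clone/drop are only permitted when the matching trait is present.

class Val:
    def __init__(self, name, traits):
        self.name = name
        self.traits = set(traits)

def clone(stack):
    # Duplicate the top of the stack; requires the Clone trait.
    v = stack[-1]
    if "Clone" not in v.traits:
        raise TypeError(v.name + " is not Clone")
    stack.append(v)

def drop(stack):
    # Discard the top of the stack; requires the Drop trait.
    v = stack.pop()
    if "Drop" not in v.traits:
        raise TypeError(v.name + " is not Drop")

# An ordinary string-like value: freely clonable and droppable.
stack = [Val("greeting", {"Clone", "Drop"})]
clone(stack)
drop(stack)

# An open file handle: linear, so it must be consumed (closed) rather
# than discarded; a plain drop is an error.
stack = [Val("open_file", {"Move"})]
err = None
try:
    drop(stack)
except TypeError as e:
    err = str(e)
print(err)  # -> open_file is not Drop
```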

                                                  1. 1

                                                    That’s a dense-looking type signature. Let me try to make sense of it:

                                                    So ‘print line’ transitions two stacks - the default stack and a stack called stdout. ‘print line’ expects a Str on top of the default stack, and a WriteStr dictionary on the stdout stack (which I assume contains functions that can actually make a system call to print to stdout). ‘print line’ does not change the default stack or the stdout stack.

                                                    Is that right?


                                                    Purity for applicative languages is something like equivalence between programs f <x> <x> and let y = <x> in f y y for all f and <x> (where <x> is some source code, like 1 + 2 or print("hello")).
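
                                                    To make that concrete in an impure language (Python, used only as an illustration; this is not the language under discussion): any side effect in <x> breaks the equivalence, because the first form evaluates <x> twice.

```python
# Illustration of the purity criterion: `f <x> <x>` evaluates the
# expression twice, while `let y = <x> in f y y` evaluates it once,
# so any side effect in <x> distinguishes the two programs.

log = []

def effectful():
    # Stand-in for an expression with a side effect, like print("hello").
    log.append("effect")
    return 1

def f(a, b):
    return a + b

# f <x> <x>: two evaluations, two effects.
r1 = f(effectful(), effectful())
n1 = len(log)

# let y = <x> in f y y: one evaluation, one effect.
log.clear()
y = effectful()
r2 = f(y, y)
n2 = len(log)

print(r1, n1, r2, n2)  # -> 2 2 2 1: same results, different effect counts
```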

                                                    What’s the equivalent in a concatenative, stack-based language? I’d guess an equivalence between programs f <x> <x> and f dup <x>. It doesn’t look like your ‘print line’ has this property. (I might be wrong about what to expect from a pure concatenative language, though.)

                                                    The reason I bring this up is because given what you’ve told me, I don’t think that this is true of your language yet:

                                                    And linear types will be used for I/O, so that all functions remain pure, but you don’t need the complexity of Monads like in Haskell.

                                                    1. 1

                                                      Is that right?

                                                      Essentially, yes. (There’s a minor detail that the WriteStr dictionary is not actually on the stack. Rather a type that implements the WriteStr trait is on the stack. In this case, it’s effectively the same thing, though.)

                                                      Purity for applicative languages is something like equivalence between programs f <x> <x> and let y = <x> in f y y for all f and <x> (where <x> is some source code, like 1 + 2 or print("hello")).

                                                      What’s the equivalent in a concatenative, stack-based language?

                                                      Let’s make the example concrete. In Haskell you would have

                                                      f (putStrLn "hello") (putStrLn "hello")
                                                      

                                                      is equivalent to

                                                      let y = putStrLn "hello" in f y y
                                                      

                                                      In this linear concatenative language, you might correspondingly have

                                                      ["hello" putStrLn] ["hello" putStrLn] f
                                                      

                                                      is equivalent to

                                                      ["hello" putStrLn] clone f
                                                      

                                                      Note that terms between brackets are “quoted”. That is, they are pushed onto the stack as an unevaluated function, which can be later evaluated using apply.
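
                                                      As a toy model of that equivalence (Python closures standing in for quotations; the real language’s quotations are typed, unevaluated programs, so this is only an approximation):

```python
# Toy model: quotations are unevaluated programs pushed onto the stack;
# `clone` duplicates the top of the stack, and applying a quotation runs it.

out = []

def put_str_ln(s):
    # Stand-in for putStrLn; records output instead of printing.
    out.append(s)

def f(q1, q2):
    # f simply applies both quotations it receives.
    q1()
    q2()

# ["hello" putStrLn] ["hello" putStrLn] f
stack = [lambda: put_str_ln("hello"), lambda: put_str_ln("hello")]
q2, q1 = stack.pop(), stack.pop()
f(q1, q2)

# ["hello" putStrLn] clone f
stack = [lambda: put_str_ln("hello")]
stack.append(stack[-1])  # clone: allowed only if the quotation is Clone
q2, q1 = stack.pop(), stack.pop()
f(q1, q2)

print(out)  # -> ['hello', 'hello', 'hello', 'hello']: both forms agree
```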

                                                      I believe this fulfills the notion of purity you’re describing, unless I’m misunderstanding something.

                                                      Edit: To be clear, cloning is only allowed here because both "hello" and putStrLn, and thus ["hello" putStrLn], implement the Clone trait. If you tried to clone a function that closed over a non-Clone type, then it would not type check.

                                                      Edit 2: And because it might not be clear where linearity comes in, the value on top of the $stdout stack that implements WriteStr is linear. So when you go to evaluate putStrLn, you must thread that value through the calls. Thanks to the multi-stack concatenative semantics, such value threading is completely implicit.

                                                      This has an advantage over Haskell’s monad system in that there’s no need for monad composition and all the associated complexity (which I have a lot of trouble wrapping my head around).

                                                      1. 2

                                                        I believe this fulfills the notion of purity you’re describing, unless I’m misunderstanding something.

                                                        That seems right.

                                                        Thanks for the chat! You’ve given me a lot to think about.

                                                2. 1

                                                  Neat! Is there a website?

                                                  1. 1

                                                    No, not yet. It will probably be a bit before it’s ready for that, but I do hope to get there as quickly as possible.