1. 1

    I don’t follow. Why have a special language? If the only observable result is the linked list, then how can it ever be non-deterministic? I don’t understand any of this. There might be something here, but I can’t make sense of it. The conflation of a protocol with an implementation is what is most confusing. Why can’t I execute some Ruby and give you an answer, if all we’re doing is communicating results to a linked list?

    1. 1

      At the end of the day it all boils down to some data on a linked list on IPFS, that’s true.

      When given a linked list of data you must still interpret it, though; one must give the data some semantics. In this case that means specifying a state machine: code for functions which maintain some state. The accumulated state is the semantics of the machine (e.g. which issues are currently open), and depending on that state some of the future inputs will either be disallowed or have different effects (e.g. you can only close an issue if you opened it originally or you are an admin).
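
      (To make that concrete, here is a minimal sketch in Haskell rather than Radicle, with all the types and names invented for illustration: the “interpretation” is just a pure fold of the inputs into a state, with the permission rule from the example baked into the transition.)

          import qualified Data.Map as Map
          import qualified Data.Set as Set

          -- Hypothetical inputs that might appear on the chain.
          data Input
            = OpenIssue  Int String   -- issue id, author
            | CloseIssue Int String   -- issue id, author

          -- The accumulated state: which issues are open, and who opened them.
          data MachineState = MachineState
            { openIssues :: Map.Map Int String  -- issue id -> original opener
            , admins     :: Set.Set String
            }

          -- One transition; an input that violates the rules leaves the state
          -- unchanged here (a real machine might reject it outright instead).
          step :: MachineState -> Input -> MachineState
          step st (OpenIssue i who) =
            st { openIssues = Map.insert i who (openIssues st) }
          step st (CloseIssue i who) =
            case Map.lookup i (openIssues st) of
              Just opener
                | who == opener || who `Set.member` admins st ->
                    st { openIssues = Map.delete i (openIssues st) }
              _ -> st

          -- The semantics of the machine is just the fold of all inputs so far.
          replay :: MachineState -> [Input] -> MachineState
          replay = foldl step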

      The idea of the Radicle language is that, using the whole “code is data” thing from LISP, the specification of the state machine’s semantics is just part of the data inputs too. This means that Radicle machines have a semantics/protocol all of their own. It boils every domain-specific protocol down to a single universal protocol: “this is a valid sequence of Radicle expressions”.
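
      (Again just a toy sketch, not Radicle’s actual evaluator: if the inputs are s-expressions and a def form extends the environment, then the code that gives later inputs their meaning can itself arrive as earlier inputs on the same chain.)

          import qualified Data.Map as Map

          -- Toy s-expressions: everything on the chain is just data like this.
          data SExpr = Atom String | SList [SExpr]

          type Env = Map.Map String SExpr

          -- The single universal rule: how one expression updates the environment.
          -- A (def name body) input stores code that later inputs can refer to,
          -- so the machine's domain-specific semantics are themselves chain data.
          stepExpr :: Env -> SExpr -> Env
          stepExpr env (SList [Atom "def", Atom name, body]) = Map.insert name body env
          stepExpr env _                                     = env

          replayChain :: [SExpr] -> Env
          replayChain = foldl stepExpr Map.empty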

      You could use Ruby instead of Radicle, but Ruby is not deterministic, so people running the code in the linked list might get different results, and thus disagree on what the “state” is. Some nodes might even think the chain is invalid while others don’t.

      1. 1

        Still doesn’t make sense. If we run the same Ruby code with the same inputs then where is the non-determinism?

        1. 4

          “Same inputs” is a tall order for arbitrary Ruby code. It includes things like:

          • Same initial clock state
          • Same filesystem state
          • Same system RNG state, and same inputs to that RNG (including things like incoming network packet timings and user input timings)
          • Same I/O with the exact same timings (which otherwise may impact process scheduling or just timing in a detectable manner)

          You can try to sandbox all this out, but at that point, you’re borderline writing a new language (you’ve tossed or stubbed much of the standard library, and in the case of Ruby probably a few language features, as well)—which is what the author did, just based on CL, not Ruby.
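
          (A small Haskell aside, with invented names, just to illustrate the point; Haskell happens to make the dependence on this hidden state explicit in the type, which is more or less the property being asked for here.)

              import System.CPUTime (getCPUTime)     -- both imports are from base
              import System.Environment (lookupEnv)

              -- Same code, same argument, yet two runs (or two nodes) can disagree:
              -- the answer also depends on state outside the program.
              nonDeterministic :: Int -> IO Int
              nonDeterministic x = do
                t    <- getCPUTime        -- depends on timing/scheduling
                home <- lookupEnv "HOME"  -- depends on the machine's environment
                pure (x + fromIntegral t + maybe 0 length home)

              -- Every node agrees on this one: no clock, RNG, filesystem or I/O in reach.
              deterministic :: Int -> Int
              deterministic x = x * 2 + 1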

          1. 2

            You can try to sandbox all this out, but at that point, you’re borderline writing a new language (you’ve tossed or stubbed much of the standard library, and in the case of Ruby probably a few language features, as well)

            We actually did consider this approach with Lua (since it is already quite minimal), but in the end we thought a LISP was more appropriate, because the code is more naturally thought of as data (e.g. EDN), and also because we wanted more of a pure/functional flavour, which seemed more natural for programming deterministic state machines.

    1. 4

      Sorry to go off on a tangent, but it’s unfortunate that they chose to implement their own Lisp-like language rather than write in Lisp in the first place. Writing it from scratch misses the point of Lisp as a DSL.

      Judging by their example code, they could’ve written a few Common Lisp macros and had nearly identical state machine code. Then they could’ve focused entirely on the domain specific parts of the language and piggybacked on CL for the “boring” parts like arithmetic, looping, etc.

      Granted Lisp isn’t very popular, but it’s more so than Radicle and there are many books and tutorials and pre-existing libraries (of varying quality, of course). But now their team will have to support a whole language ecosystem just to support the distributed state machine work they’re actually interested in.

      1. 3

        We needed, for this entire architecture to work, a language that is deterministic. If there were some compiler extension that restricted CL to only the deterministic parts, that could have worked. But (and I have to admit I’m relatively ignorant about CL here) I’d be surprised if that were easy, and even then most existing tutorials and libraries probably wouldn’t work anyway.

        1. 2

          I’m not sure I understand the use of “deterministic” here. Which parts of CL are non-deterministic?

          In any case, I used CL as the example because it’s what I use, but a simpler Lisp with macros, like Scheme, should also work.

          1. 4

            By “deterministic” I mean that the language should be such that, for any program P written in it, and for any input I to that program, the result of running P on I, P(I), should always be the same between runs, for everyone. So no IO, no platform-specific behavior, no concurrency, no randomness, no weak refs that allow observing garbage collection results, etc.
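
            (A small illustration of the “platform-specific behavior” clause, in Haskell since that’s what Radicle is written in; the names are made up. Even a pure expression can differ between machines if the language leaks platform details, so a deterministic language has to pin them down.)

                import Data.Int (Int64)

                -- Differs between 32-bit and 64-bit builds of the same program,
                -- because Int's width is platform-dependent.
                platformDependent :: Int
                platformDependent = maxBound

                -- Pinning the width down gives the same answer everywhere.
                pinnedDown :: Int64
                pinnedDown = maxBound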

            1. 2

              The state machines defined using Radicle need to be deterministic, but that doesn’t require the language defining the state machines to be deterministic.

              1. 9

                That’s true, but then that would require a separate proof for each machine (or the risk of a mistake). The idea behind Radicle is a special purpose programming language in which all programs are automatically deterministic state machines.
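
                (Roughly, the hoped-for property is that the host only ever accepts a pure transition function, so there is nothing per-machine left to prove; a hedged sketch in Haskell, with invented names:)

                    -- If a machine can only ever be a pure transition function,
                    -- the type already rules out I/O, clocks and RNGs per machine.
                    newtype Machine st input = Machine (st -> input -> st)

                    runMachine :: Machine st input -> st -> [input] -> st
                    runMachine (Machine f) = foldl f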

              2. 1

                Why not just use a pure language?

          2. 2

            chose to implement their own Lisp-like language rather than write in Lisp in the first place.

            I’m not sure it’s from scratch… it’s a “Lisp dialect in the flavor of Black”, combined with the fact that Black’s flavour is…

            Black is an extension of Scheme with a reflective construct exec-at-metalevel. It executes its argument at the metalevel where the interpreter that executes user programs is running. There, one can observe, access, and even modify the metalevel interpreter in any way. Because the metalevel interpreter determines the operational semantics of the language, Black effectively allows us to observe, access, and modify the language semantics from within the same language framework.

            i.e. I’m curious to know to what extent it’s “implement their own Lisp-like language” vs. a “Scheme minus mutability” with a Black program that mutates Black.

            After all Scheme isn’t exactly a massive language.

            1. 1

              After all Scheme isn’t exactly a massive language.

              It’s … a fair bit bigger than that. That doesn’t even implement all of r4rs. The most recent standard, r6rs (from 2007), expanded the language so significantly that many (real, non-toy) Schemes still haven’t implemented everything.

              I agree with your overall point, just that your example is misleading.

              1. 1

                The most recent standard is r7rs and as I understand it, it makes the language smaller than r6rs.

                1. 1

                  I forgot about r7rs. IIRC, that added so many things to the language that they split it into two specs: the big one and the small one (the latter of which is closer to r5rs, again IIRC).

              2. 1

                What I was getting at is the language isn’t written in Lisp or Scheme, but is a Lisp-like language written in Haskell. The power of Lisp as a DSL is that macros allow the language to be seamlessly extended using the language itself. Instead of writing an entire compiler, they could have written a few macros.

                The nondeterminism argument that jkarni and jameshh brought up kind of makes sense, but seems like it would be easy to circumvent by generating Radicle code at runtime.

                1. 1

                  Well, the meta-level stuff is interesting and does more than a macro could do… and it’s not available in the Lispy things I have used…

                  I wonder whether they use it, and if so what for…

            1. 3

              Looks very interesting! I’m just reading about their Lisp-like ‘utility language’ (sounds reminiscent of Ethereum…)

              I self-host my git repos (and use Artemis for issue tracking inside the repos), and have been pushing them (unpacked), along with git2html pages, to IPFS with IPNS names for a while. Unfortunately I find the main Go IPFS implementation to be too resource-intensive to keep running at the moment :(

              1. 10

                Unfortunately I find the main Go IPFS implementation to be too resource-intensive to keep running at the moment :(

                We are using our own Haskell library for hosting on IPFS; we’d be curious to know if it works better than the Go one. They should be compatible.