Threads for tonyg

  1. 5

    I have become something of a stuck record about this, but syntax is boring: semantics is where it’s at.[1] When are two values (denoted by syntax) the same? When are they different? For example: Is B0 00 the same as A0 the same as B1 00 00? Is A1 the same as B9 00 00 80 3F? When are two dictionaries the same? (Are duplicate keys permitted? Is ordering of keys important?) Is +Inf encoded AE the same as +Inf encoded using tags B8, B9 or BA?

    Aside from equivalences, I have other questions: Can I represent pure binary data that isn’t a string? What is a “tag” (bytes 8A-8F, FF)? What is an “attr”? Why does a typed array end in 00? What happens if a constrained system with a short LRU cache is presented with a document using a large LRU index?

    [1]: Hence my work on Preserves

    1. 1

      Thanks for those excellent questions!

      Equivalences are there on purpose: you can select a fast or a small representation, for example, and some representations are not available in TypedArrays. A typed array ends in 00 because that denotes an empty chunk (note that chunking is allowed here). Muon allows adding tags (see the GitHub repo) with additional info about object sizes inside the document, to enable efficient queries (the parser can entirely skip uninteresting parts).
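      The chunk-termination rule can be sketched in a few lines. This is an illustrative reading of "00 denotes an empty chunk", with a hypothetical one-byte length prefix standing in for Muon's actual size encoding:

```python
import io

def read_chunked(stream):
    # Read a chunked byte sequence: a one-byte length prefix, then that many
    # payload bytes; a zero-length chunk ends the sequence. The one-byte
    # prefix is a simplification for illustration -- Muon's real size
    # encoding differs -- but the "00 == empty chunk == terminator" idea
    # is the same.
    out = bytearray()
    while True:
        n = stream.read(1)[0]
        if n == 0:
            return bytes(out)
        out.extend(stream.read(n))

# two chunks (3 bytes, then 2 bytes) followed by the 00 terminator
payload = read_chunked(io.BytesIO(bytes([3, 1, 2, 3, 2, 4, 5, 0])))
assert payload == bytes([1, 2, 3, 4, 5])
```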

      LRU size is an application-specific detail, but it can also be explicitly encoded in the document if needed.

      1. 1

        From what I can tell, this encoding is not bijective. I know it’s not a terribly important thing to ask, but I do wish it had that property. Otherwise, this looks very nice!

        1. 2

          Do you mean that it should be free of equivalent representations?

          1. 2

            Yeah, it means that. It also means that every value has exactly one representation, and every representation decodes in only one way. Right now I’m using bencode, which is a very nice serialization format: it’s great for a) binary data and b) being bijective. One nice side effect of this, which is how I think it’s being used in BitTorrent, is that you can encode an object, take its digest, and compare digests to know whether you have the same thing.
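            The canonical-bytes property bencode gets from sorted dictionary keys fits in a few lines of Python (a minimal encoder sketch, not a full bencode implementation):

```python
import hashlib

def bencode(v):
    # Minimal bencode encoder (ints, byte strings, lists, dicts). Dict keys
    # are emitted in sorted order, which is what makes the encoding
    # canonical: equal values always serialize to identical bytes.
    if isinstance(v, int):
        return b"i%de" % v
    if isinstance(v, bytes):
        return b"%d:%s" % (len(v), v)
    if isinstance(v, list):
        return b"l" + b"".join(bencode(x) for x in v) + b"e"
    if isinstance(v, dict):
        return b"d" + b"".join(bencode(k) + bencode(x) for k, x in sorted(v.items())) + b"e"
    raise TypeError(type(v))

# same dict, different construction order: identical bytes, identical digest
a, b = {b"b": 2, b"a": 1}, {b"a": 1, b"b": 2}
assert bencode(a) == bencode(b) == b"d1:ai1e1:bi2ee"
assert hashlib.sha1(bencode(a)).digest() == hashlib.sha1(bencode(b)).digest()
```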

            1. 1

              But it’s also not true for JSON, e.g.:


              is the same as

              1. 1

                Actually JSON doesn’t specify one way or the other whether those two documents are the same.

                1. 1

                  Oh yeah, definitely not true for JSON. I mean, look at how many JSON libraries offer to sort keys for you, and you can see that people want to use JSON that way (I guess mainly for things like caching, where you want the JSON to encode the same way twice).

        2. 1

          I think an additional thing to consider is evolution of the semantics, and the ability to reason about contextual equivalence and about picking representatives of equivalence classes… Yes, I am also working on this kind of stuff. My design is finally stable; the implementation hit a snag when my devices were stolen, but it is finally crawling ahead again. Mostly there is just chaos in written form, but I am very willing to explain and discuss, and the chaos will get sorted (it has already been sorted out in my head, at least to a sufficient extent to have confidence in my roadmap).

        1. 11

          The graph that compares the various formats after gzip is kind of a killer - it seems that compression makes the differences between the various formats more-or-less irrelevant, at least from the perspective of size. For applications where size is the most important parameter and a binary format is acceptable, I think I might just tend to prefer gzipped JSON and feel happy that I probably won’t need any additional libraries to parse it. If I got concerned about speed of serialization and deserialization I’d probably just resort to dumping structs out of memory like a barbarian.

          1. 6

            The main issue with gzipped documents is that you must unpack them before use. That hurts badly if you need to run a query on a big document (or a set of documents). I recommend reading the BSON and UBJSON specs, which explain this in detail.

            1. 4

              The syntax you’ve chosen requires a linear scan of the document in order to query it, too, so it’d be a question of constant-factor rather than algorithmic improvement, I think?

              1. 2

                constant factors matter!

              2. 2

                That makes sense; I had been blinded to that use-case by my current context. I’ve been looking at these kinds of formats recently for the purpose of serializing a network configuration for an IoT device, to be transmitted with ggwave. If I were working on big documents where I needed to pick something out mid-stream, I could definitely see wanting something like this.

                1. 2

                  I’m currently doing a similar thing, but for OTA upgrades of IoT devices: packing multiple files into Muon and compressing them using Heatshrink. It’s then unpacked and applied on the fly on the device end.

                  1. 1

                    I think I’ll also publish it in the following weeks

                2. 1

                  You can stream gzip decompression just fine. Not all queries against all document structures can be made without holding the whole document in memory but for a lot of applications it’s fine.

                3. 3

                  Yeah, json+gzip was so effective at $job, we stopped optimizing there. Another downside not mentioned by the other replies, though: gzip can require a “lot” (64KB) of memory for its dictionary, so for example, you couldn’t use that on an arduino.
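                  For what it’s worth, DEFLATE’s dictionary is negotiable: zlib’s wbits parameter trades compression ratio for memory, down to a 512-byte history window. A Python sketch of the general idea (Heatshrink makes a similar trade with even smaller windows):

```python
import zlib

# DEFLATE's history window is 2**wbits bytes: the default wbits=15 needs a
# 32 KiB history buffer (plus other state), but both sides can agree on
# less. wbits=9 shrinks the history to 512 bytes, at some cost in ratio.
data = b"some repetitive JSON-ish payload " * 100

co = zlib.compressobj(level=9, wbits=-9)   # raw deflate, 512-byte window
small = co.compress(data) + co.flush()

do = zlib.decompressobj(wbits=-9)
assert do.decompress(small) == data
assert len(small) < len(data)
```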

                  1. 2

                    BTW, you can use Heatshrink compression on arduino (as I do currently)

                  2. 2

                    The main advantage of a binary format isn’t size, but speed and the ability to store bytestrings.

                    1. 1

                      I have encountered similar situations and thought the same, but one place this was relevant was ingesting data in WASM. Avoiding serde in Rust, or JSON handling in another language, makes for significantly smaller compiles. Something like CBOR has been a good fit since it is easy/small to parse, but this looks interesting as well.

                    1. 4

                      most of the same semantics [as JSON]

                      JSON doesn’t really have any semantics to speak of: when are two JSON values equal? When are they different? You could do better than JSON here by defining an equivalence relation over JCOF terms.

                      1. 3

                        Maybe it would’ve been better to say “the JSON data model”, which is what CBOR calls it. I’ll consider updating the readme.

                        1. 7

                          Well, call it what you will, there’s no there there :-) The JSON data model is not well-defined enough to really be said to exist. I ranted a little on this topic here:

                          1. 2

                            I took the time to try to define semantics, in a way I think is consistent with how JSON parsers and serializers are often implemented:

                            I would love some feedback on that. In particular, is everything clear enough? Should I have a more thorough explanation of how exactly numbers map to floating point values? On the one hand, it would’ve been nice; but on the other hand, correct floating point parsing and serialization is so complicated that it’s nice to leave it up to a language’s standard library, even if that results in slight implementation differences. (While doing research on how other languages do this, I even found that JavaScript’s number-to-string function has implementation-defined results.)

                            1. 2

                              That’s really nice. You probably don’t have to pin down text representation of floats further, but you might say something like “the IEEE754 double value closest to the mathematical meaning of the digits in the number” if you like. It’s a bit thorny, still, depressingly, isn’t it! For preserves I pointed at Will Clinger’s and Aubrey Jaffer’s papers. It might also be helpful to give examples of JCOF’s answers to the questions I wrote down in my rant linked upthread. Also useful would be to simply point at the relevant bit of the spec for comparing two doubles: for preserves I chose to use the totalOrder predicate from the standard, because I wanted a total ordering, not just an equivalence, but I think the prose you have maps more closely to compareQuietEqual from section 5.11.

                              1. 1

                                I actually originally had wording to the effect of “the IEEE 754 double value closest to the meaning of the digits”, but I tried to figure out if that’s actually what JavaScript’s parseFloat does, which is when I found out that JavaScript actually leaves it up to the implementation whether the value is rounded up or down after the 20th digit. So for the string "2.00000000000000000013" (1 being the 20th significant digit), it’s implementation-defined whether you get the float representing 2.0000000000000000001 or 2.0000000000000000002, even though the former is closer. I could try to copy the JavaScript semantics, as that probably represents basically what’s achievable on as broad a range of hardware as is reasonable. I certainly don’t think I should be more strict than JavaScript. Though I was surprised that JavaScript apparently doesn’t require that you can round-trip a float perfectly with parseFloat(num.toString()).

                                I also originally tried looking into how IEEE 754 defines equality, thinking I could defer to that instead of talking about values being bit-identical, and I found the predicate compareQuietEqual in table 5.1 in section 5.11. I was never able to find a description of what compareQuietEqual actually does, however, nor did I find anything else which describes how “equality” is defined. If you have any insight here, I’d like to hear. (Additionally, my semantics would want to consider -0 and 0 to not be the same; this is actually why I use the phrase “the same” rather than “compare equal”. I wouldn’t want a serializer to encode -0 as 0.)
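                                The -0 versus 0 distinction is easy to make concrete: IEEE 754’s quiet-equal predicate says they compare equal, while a bits-based notion of “the same” (which is what totalOrder used as an equivalence amounts to) keeps them apart. A small Python sketch:

```python
import math
import struct

# IEEE 754's quiet-equal predicate says -0.0 == 0.0, but the two values have
# different bit patterns, so a bits-based notion of "the same" distinguishes
# them.
def same_double(x, y):
    return struct.pack(">d", x) == struct.pack(">d", y)

assert -0.0 == 0.0                       # compares equal
assert not same_double(-0.0, 0.0)        # but is not "the same"
assert math.copysign(1.0, -0.0) == -1.0  # the sign bit is really there
```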

                                I also noticed that JavaScript doesn’t mention compareQuietEqual; it defines numbers x and y to be equal if, among other things, “x is the same Number value as y”, where “the Number value for x” is defined to be the same as IEEE 754’s roundTiesToEven(x). And roundTiesToEven is just a way to go from an abstract exact mathematical quantity to a concrete floating point number. So that, to me, sounds like JavaScript is using bitwise equality, unless it uses “the same” to mean “compares equal according to compareQuietEqual”.

                                It always seems that once you dig deep enough into the specs underpinning our digital world, you find that at the core, it’s all just ambiguous prose and our world hangs together because implementors happen to agree on interpretations.

                                Regarding the questions, my semantics answer most of them, but I would need to constrain float parsing to be able to answer the second one. The answers are:

                                • are the JSON values 1, 1.0, and 1e0 the same or different? They are all the same, since they parse to the same IEEE 754 double precision floating point numbers.
                                • are the JSON values 1.0 and 1.0000000000000001 the same or different? Currently ambiguous, since I don’t define parsing rules. If we used JavaScript’s rules, they would be different, since they differ in the 17th significant digit, and JavaScript parseFloat is exact until the 20th.
                                • are the JSON strings “päron” (UTF-8 70c3a4726f6e) and “päron” (UTF-8 7061cc88726f6e) the same or different? They are different, since they have different UTF-8 code units.
                                • are the JSON objects {"a":1, "b":2} and {"b":2, "a":1} the same or different? They are the same, since order doesn’t matter.
                                • which, if any, of {"a":1, "a":2}, {"a":1} and {"a":2} are the same? Are all three legal? The first one is illegal because keys must be unique. The second and third are different, since the value of key “a” is different.
                                • are {"päron":1} and {"päron":1} the same or different? They are the same if both use the same UTF-8 code point sequence for their keys.

                                Once we have the float parsing thing nailed down, it would be a good idea to add updated answers to the readme.
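                                The answers above can be condensed into a toy “the same” check (a hypothetical illustration, not part of any spec, assuming numbers have already been parsed to doubles and duplicate keys rejected at parse time):

```python
import struct

def same_json(a, b):
    # A toy version of "the same": numbers compared by their IEEE 754 double
    # bits (so -0.0 is not the same as 0.0), strings by exact code points
    # (no Unicode normalization), arrays elementwise, objects as unordered
    # key->value maps. Assumes numbers were parsed to doubles and duplicate
    # keys were rejected by the parser.
    if isinstance(a, bool) or isinstance(b, bool):
        return a is b
    if isinstance(a, float) and isinstance(b, float):
        return struct.pack(">d", a) == struct.pack(">d", b)
    if isinstance(a, list) and isinstance(b, list):
        return len(a) == len(b) and all(map(same_json, a, b))
    if isinstance(a, dict) and isinstance(b, dict):
        return a.keys() == b.keys() and all(same_json(v, b[k]) for k, v in a.items())
    return type(a) is type(b) and a == b

assert same_json({"a": 1.0, "b": 2.0}, {"b": 2.0, "a": 1.0})  # order-insensitive
assert not same_json(-0.0, 0.0)                               # bitwise doubles
assert not same_json("p\u00e4ron", "pa\u0308ron")             # code points differ
```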

                                1. 1

                                  I think IEEE-754 floats are one area where the binary formats win over text. CBOR can represent IEEE-754 doubles, singles and halves exactly (including +inf, -inf, 0, -0, and NaN). When I wrote my own CBOR library, I even went so far as to use the smallest IEEE-754 format that would round-trip (so +inf would be encoded as a half-float, for instance).

                                  For Unicode, you may want to specify a canonical form (say, NFC or NFD) to ensure interoperability.

                                  1. 1

                                    +1 for binary floats.

                                    Re unicode normalization forms: I’d avoid them at this level. It feels like an application concern, not a transport concern to me. Different normalization forms have different purposes; the same text sometimes needs renormalizing to be used in a different way; etc. Sticking to just sequence-of-codepoints is IMO the right thing to do.

                                    1. 1

                                      I won’t specify a Unicode canonicalization form, since that would require correct parsers to contain or depend on a whole Unicode library, and it would mean different JCOF implementations which operate with different versions of Unicode are incompatible. Strings will remain sequences of UTF-8 encoded code points which are considered “the same” only if their bytes are the same.
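                                      The composed/decomposed distinction from the examples upthread, made concrete: the two spellings of “päron” render identically but are different code-point sequences, hence different under byte equality, even though NFC maps one onto the other.

```python
import unicodedata

# Same rendered text, different code point sequences, different bytes.
composed = "p\u00e4ron"     # U+00E4; UTF-8 70 c3 a4 72 6f 6e
decomposed = "pa\u0308ron"  # 'a' + U+0308; UTF-8 70 61 cc 88 72 6f 6e

assert composed != decomposed                    # different under code-point equality
assert composed.encode() == b"p\xc3\xa4ron"
assert decomposed.encode() == b"pa\xcc\x88ron"
assert unicodedata.normalize("NFC", decomposed) == composed  # same text after NFC
```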

                                      Regarding floats, I agree that binary formats have an advantage there, since they can just output the float’s bits directly. Parsing and stringifying floats is an annoyingly hard problem. But I want this to remain a text format. Maybe I could represent the float’s bits as a string somehow though; base64 encode the 8 bytes or something. I’ll think about it.

                                      1. 1

                                        Hexfloats are a thing!

                                        For preserves text syntax, I didn’t offer hexfloats (yet), instead escaping to the binary representation.
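                                        For example, Python exposes hexfloats on every float, and the round-trip is exact:

```python
# Hexfloats carry a double's bits exactly in text form, sidestepping decimal
# rounding entirely.
x = 0.1
s = x.hex()
assert s == "0x1.999999999999ap-4"
assert float.fromhex(s) == x           # exact round-trip
assert (-0.0).hex() == "-0x0.0p+0"     # even the sign of zero survives
```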

                                    2. 1

                                      consider -0 and 0 to not be the same

                                      Aha! Then you do want totalOrder after all, I think. When used as an equivalence it ends up being a comparison-of-the-bits IIRC. See here, for example.

                                      1.0 =?= 1.0000000000000001

                                      Wow, are you sure this is ambiguous for IEEE754 and Javascript? Trying it out in my browser, the two parseFloat to identical-appearing values. I can’t make it distinguish between them. What am I missing?

                                      Per Wikipedia on IEEE754 (not Javascript numbers per se): doubles have “from 15 to 17 significant decimal digits precision […] If an IEEE 754 double-precision number is converted to a decimal string with at least 17 significant digits, and then converted back to double-precision representation, the final result must match the original number.” I used this info when cooking up the example.

                                      Oh, wow, OK, I’ve just found RoundMVResult in the spec. Perhaps it’s aimed at those implementations that use, say, 80-bit floats for their numbers? But no, that can’t be right. What am I missing?

                                      3 extra decimal digits is… about 10 bits of extra mantissa. Which gets us pretty close to the size of the mantissa of an 80-bit float. So maybe that’s the reason. Hmmm.
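                                      A quick check (in Python, whose float() is correctly rounded, much like parseFloat in practice) confirms both observations:

```python
# Doubles near 1.0 are spaced 2**-52 apart (~2.22e-16), so 1.0000000000000001
# (off by 1e-16, less than half a ULP) rounds to exactly 1.0 -- which is why
# the browser can't tell the two strings apart.
assert float("1.0") == float("1.0000000000000001")
assert float("1.0000000000000002") != 1.0  # the next representable double

# And the 17-digit guarantee in the other direction: repr() emits enough
# digits to recover the exact double.
x = 0.1 + 0.2
assert float(repr(x)) == x
```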

                            2. 2

                              One detail is the meaning of an object that uses the same key twice - what should that mean?

                            1. 3

                              @tonyg, since you didn’t get many comments here I won’t feel too bad asking a somewhat off-topic question: have you seen Arcan? If so, what do you think of it? There was an Arcan release post linked here on the same day you posted this.

                              1. 2

                                Thank you for highlighting that! Arcan looks extremely relevant to my interests. It also looks like kind of a lot, so I’ll need to sit down and go through the website properly to get a better feeling for it. But yes, it looks really neat.

                              1. 3

                                I love the use of a string table; this seems pretty novel compared to other options. Obviously the text-based nature of your format sets it apart from binary JSON alternatives such as MessagePack and CBOR.

                                1. 4

                                  I extended my test suite/benchmark to compare JSON, JCOF, MessagePack and CBOR. MessagePack and CBOR get only modest size gains compared to JSON:

                                    JSON: 299 bytes
                                    jcof: 134 bytes (0.448x)
                                    msgp: 217 bytes (0.726x)
                                    cbor: 221 bytes (0.739x)
                                    JSON: 8315 bytes
                                    jcof: 2093 bytes (0.252x)
                                    msgp: 5666 bytes (0.681x)
                                    cbor: 5678 bytes (0.683x)
                                    JSON: 219635 bytes
                                    jcof:  39650 bytes (0.181x)
                                    msgp: 194685 bytes (0.886x)
                                    cbor: 194811 bytes (0.887x)
                                    JSON: 56812 bytes
                                    jcof: 23132 bytes (0.407x)
                                    msgp: 46817 bytes (0.824x)
                                    cbor: 46866 bytes (0.825x)
                                    JSON: 37960 bytes
                                    jcof: 11923 bytes (0.314x)
                                    msgp: 31887 bytes (0.840x)
                                    cbor: 31882 bytes (0.840x)
                                    JSON: 244920 bytes
                                    jcof:  87028 bytes (0.355x)
                                    msgp: 199004 bytes (0.813x)
                                    cbor: 198669 bytes (0.811x)
                                    JSON: 51949 bytes
                                    jcof: 37480 bytes (0.721x)
                                    msgp: 39948 bytes (0.769x)
                                    cbor: 39530 bytes (0.761x)

                                  This makes sense, precisely because CBOR and MessagePack lack a string table and an object shapes table, instead including all the key strings for every object.

                                  I think there could be some modest gains from going with a binary format, but honestly, there’s not that much to gain unless you go with a format which operates on bits rather than bytes.
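                                  The string-table intuition is easy to demonstrate: deduplicating repeated strings into one table and referring to them by index shrinks exactly the kind of data these benchmarks contain. A hypothetical illustration (not JCOF’s actual syntax):

```python
import json

# Pull every string into one table and replace each occurrence with a small
# integer index, so repeated keys like "name"/"color"/"size" are paid for
# once instead of once per record.
records = [{"name": "item%d" % i, "color": "green", "size": "large"} for i in range(100)]

plain = json.dumps(records, separators=(",", ":"))

strings = sorted({s for r in records for s in list(r) + list(r.values())})
index = {s: i for i, s in enumerate(strings)}
tabled = json.dumps(
    [strings, [[[index[k], index[v]] for k, v in r.items()] for r in records]],
    separators=(",", ":"),
)

assert len(tabled) < len(plain)  # the tabled form is smaller on repetitive data
```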

                                  1. 7

                                    Just gzipping the JSON gives very good results:

                                       790 Jul 15 23:49 circuitsim.json.gz
                                     14524 Jul 15 23:49 comets.json.gz
                                      6813 Jul 15 23:49 madrid.json.gz
                                     35831 Jul 15 23:49 meteorites.json.gz
                                      8503 Jul 15 23:49 pokedex.json.gz
                                      5933 Jul 15 23:49 pokemon.json.gz
                                       163 Jul 15 23:49 tiny.json.gz
                                    1. 5

                                      Yes, using a proper compression algorithm will always produce smaller data than just using a more efficient data encoding. If all you care about is the compressed size, and you don’t worry about the uncompressed size, you probably don’t need JCOF, CBOR, MessagePack, or any other serialization format which tries to be “JSON but smaller”. Clearly there is a desire for a more efficient way to encode JSON-like data.

                                      That said though, there is some space saving even when gzipping:

                                        162 tiny.json.gz
                                        139 tiny.jcof.gz (0.858x)
                                        806 circuitsim.json.gz
                                        696 circuitsim.jcof.gz (0.863x)
                                       5946 pokemon.json.gz
                                       4248 pokemon.jcof.gz (0.714x)
                                       6634 madrid.json.gz
                                       5645 madrid.jcof.gz (0.851x)
                                       7483 pokedex.json.gz
                                       7404 pokedex.jcof.gz (0.989x)
                                      14120 comets.json.gz
                                      14023 comets.cbor.gz (0.993x)
                                      35829 meteorites.json.gz
                                      33152 meteorites.jcof.gz (0.925x)

                                      The size reduction in tiny, circuitsim, pokemon and madrid when going from gzipped JSON to gzipped JCOF is about on par with the size reduction you get from going from uncompressed JSON to uncompressed CBOR or MessagePack, and both of those formats sell themselves as being smaller and more concise than JSON.

                                    2. 3

                                      The big downside of a string table is that it prevents streaming generation. Or, at least, requires you to stream, in parallel, into two buffers and then combine them, which means that you can’t stream through a compression or encryption algorithm, you need to buffer the data and then compress. It does allow streaming on the receive side, but requires that you keep the entire string table in memory until you have processed an entire document. If you want to extract a single node from a tree, it’s more expensive. That eliminates a lot of use cases where JSON is a good choice.

                                      If you’re using it as a message format for a well-defined protocol, then the comparison might want to include things like FlatBuffers and ASN.1, which are specifically optimised for this kind of use case, rather than as a generic JavaScript data serialisation format.

                                  1. 3

                                    I wrote this because I’m mid-way through building a native Smalltalk X11 protocol implementation for Squeak. (Where’s the tilting-at-windmills tag?)

                                    1. 3

                                      Hah! There was a Squeak goodie for X11 forever and a half ago, albeit predating all the cool stuff; I remember using it when I (briefly) tried to make Squeak be my whole OS back in the late 90s, effectively treating the Linux underneath as a convenient but ultimately ignorable VM. (There was also a VNC client that was quite handy.) Good luck with a fresh implementation!

                                      1. 1

                                        Thanks! Do you happen to have a link around to the goodie? I am having trouble finding it on the internet; perhaps it has fallen off (!)

                                        1. 2

                                          I don’t; I’m sorry. There’s a remote chance I have it on a backup from around then, but those are all on tape; I’m not gonna be able to find it quickly. The fact I could find the VNC client in three seconds and can’t find the X11 one is making me wonder if I’m misremembering, too, though I don’t think I am.

                                          1. 2

                                            Ah well. I do recall Joe Armstrong’s X protocol implementation in Erlang from back in the day:

                                            And yay, the wayback machine has Joe’s original page:

                                      2. 2

                                        I haven’t seen anyone do this before and that’s made me a bit sad because this was one of the original promises of XCB: the C binding was almost incidental, bindings for any other language were intended to be generated in the same way. I’d love to see a Squeak native version.

                                        1. 1

                                          Yes, I was a bit surprised that I couldn’t find any other bindings based on XCB! At first I thought I just wasn’t searching properly, but I’m starting to think there really aren’t any. I think the gap that xcb-shim fills explains some of the reasons why.

                                            1. 1

                                              Awesome, thank you very much!

                                      1. 4

                                        If I want to define the API via language independent IDL . . .

                                        Do you? Why?

                                        IDLs are schemas that define a formal protocol between communicating parties. They represent a dependency, typically a build-time dependency, coupling producers and consumers. That can bring benefits, definitely. But if I can’t call your API without fetching a specific file from you, somehow, and incorporating it into my client, that’s also a significant cost! Open networks are supposed to work explicitly without this layer of specificity. HTTP is all I need to talk to any website, right?

                                        HTTP+JSON is pretty language-agnostic. I can even do it at the shell. JSON-RPC is nowhere near as mature, or as supported. What does this extra layer get you? What risks does it actually reduce? And are those risks relevant for your use cases? Why do you think e.g. Stripe doesn’t require JSON-RPC for its API?

                                        IMO the only place that IDLs make sense is in closed software ecosystems — i.e. corporate environments — that have both experienced specific pains from loosely-defined JSON APIs, and can effectively mandate IDL usage across team boundaries. Anywhere else, I find it’s far more pain than pleasure.

                                        1. 3

                                          Heh, I actually was thinking about your similar comment in another thread when asking; thanks for elaborating!

                                          I think I agree with your reasoning, but somehow still come to the opposite conclusion.

                                          First, I agree that JSON-RPC is a useless layer of abstraction. I’ve had to use it twice, and both times the value was negative. In this question, I am asking not about JSON-RPC 2.0, but about an RPC which works over HTTP, encoding payloads in JSON. I do have an RPC, rather than REST, style in mind though.

                                          I also couldn’t agree more that the issue with build-time dependencies is worth solving. Build-time deps are exactly the problem I see at my $dayjob. We have a JSON-over-HTTP RPC interface, and the interface is defined “in code” – there’s a bunch of structs with #[derive(Serialize)]. And this leads people to thinking along the lines of “I need to use the API. The API is defined by these structs. Therefore, I must depend on the code implementing the API.” This wasn’t explicitly designed for; it’s just the path of least resistance if you don’t define the API explicitly and your language has derives.

                                          That being said, I think there has to be some dependency between producer and consumer? Unless you go full HATEOAS, you somehow need to know which method to call and (in a typed language, for ergonomics) which shape the resulting JSON would have. For Stripe, I need to fetch to figure out what’s available. And, again, there is

                                          And, if we need at least an informal doc for the API, I don’t see a lot of drawbacks in making it more formal and writing, say, literal TypeScript rather than free-form text, provided that the formalism is lightweight. The biggest practical obstacle there seems to be the absence of such lightweight formalisms.

                                          So, the specific “why”s for me wanting an IDL in an open decentralized ecosystem are:

                                          • reifying “this is the boundary exposed to the outside world” in some sort of specific file, so that it is abundantly clear that, if you are changing this file, you might break the API and external clients. You could do that in the “API is informally specified in code” scenario, but that requires discipline, and discipline is a finite resource, running out especially quickly in larger teams.
                                          • providing a low-friction way to document and publish promises regarding the API to the outside world. Again, with some discipline, documenting the API in a file would work, but it seems to me that doc comments require less effort for upkeep than separate guides.
                                          • making sure that details about the language the implementation is written in don’t accidentally leak. E.g., APIs probably shouldn’t use integers larger than 32 bits because they won’t work in JavaScript, but, if my impl language is Rust, I might not realize that, as native serialization libraries would make u64 just work. More generally, in Rust just slapping derive(Serialize) always works, and often little thought is given to the fact that the resulting JSON might be quite ugly to work with in any non-Rust language (or just to look at).
                                          • Maaaaybe generating stub clients and servers from the spec.
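                                          The large-integer point is worth making concrete: JavaScript numbers are IEEE 754 doubles, so integer ids above 2^53 silently collide. Using Python floats as a stand-in for JS numbers:

```python
# JavaScript numbers are IEEE 754 doubles, so integers are only exact up to
# 2**53; a 64-bit id that serializes fine from Rust can silently collide in
# a JS client. Python floats stand in for JS numbers here.
MAX_SAFE = 2**53 - 1  # Number.MAX_SAFE_INTEGER

assert float(2**53) == float(2**53 + 1)        # two distinct ids, one double
assert float(MAX_SAFE) != float(MAX_SAFE - 1)  # still exact below the limit
```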
                                          1. 3

                                            That being said, I think there has to be some dependency between producer and consumers?

                                            Yeah! When a client calls a server there is definitely a dependency there. And as you note, there has to be some kind of information shared between server and client in order for that call to succeed. I suppose my point is about the… nature? of that dependency. Or maybe the efficacy of how it’s modeled?

                                            At the end of the day, you’re just sending some bytes over a socket and getting some bytes back. An IDL acts as a sort of filter that prevents a subset of those byte-sets from leaving the client if they don’t satisfy criteria the server is assumed to enforce. Cool! That reduces a class of risk that would otherwise result in runtime errors.

                                            1. What is impact of that risk?
                                            2. What benefits do I get from preventing that risk?
                                            3. What costs do I incur by preventing that risk?

                                            I suppose I’m claiming that, outside of a narrow set of use cases, and certainly for general-purpose public APIs, the answers to these questions are: (1) quite low, (2) relatively few, and (3) very many. (Assuming you’re careful enough to not break existing consumers by modifying the behavior of existing endpoints, and etc. etc.)

                                            reifying “this is the boundary exposed to the outside world” in some sort of specific file, so that it is abundantly clear that, if you are changing this file, you might break the API and external clients. You could do that in the “API is informally specified in code” scenario, but that requires discipline, and discipline is a finite resource, running out especially quickly in larger teams.

                                            I get that desire! The problem is that the IDL is a model of reality, not the truth, and, like all models, it’s a fiction :) Which can be useful! But even if you publish an IDL, some dingus can still call your API directly, without satisfying the IDL’s requirements. That’s the nature of open ecosystems. And there’s no way for you to effectively mandate IDL consumption with plain ol’ HTTP APIs, because (among other reasons) HTTP is by construction an IDL-free protocol. So the IDL is in some sense an optimistic, er, optimization. It helps people who use it — but those people could just as easily read your API docs and make the requests correctly without the IDL, eh? Discipline is required to read the API docs, but also to use the IDL.

                                            . . . document and publish … API [details] . . . making sure that details about the language [ / ] implementation don’t accidentally leak . . .

                                            IDLs convert these nice-to-haves into requirements, true.

                                            generating stub clients and servers

                                            Of course there is value in this! But this means that client and server are not independent actors communicating over a semantically-agnostic transport layer, they are two entities coupled at the source layer. Does this model of your distributed system reflect reality?

                                            I dunno really, it’s all subjective. Do what you like :)

                                            1. 1

                                              Convinced! My actual problem was that for the thing I have in mind the client and the server are implemented in the same code base. I test them against each other and I know they are compatible, but I don’t actually know how the JSON on the wire looks. They might silently exchange xml for all I know :-)

                                              I thought to solve this problem with an IDL which would be close to on-the-wire format, but it’s probably easier to just write some “this string can deserialize” tests instead.

                                              I’d still prefer to use an IDL here if there were an IDL geared towards “these are docs to describe what’s on the wire” rather than “these are types to restrict and validate what’s on the wire”, but it does seem there isn’t such a descriptive thing at the moment.

                                              1. 1

                                                docs to describe what’s on the wire [vs] types to restrict and validate what’s on the wire

                                                Is there a difference here? I suppose, literally, “docs” wouldn’t involve any executable code at all, would just be for humans to read; but otherwise, the formalisms necessary for description and for parsing seem almost identical to me.

                                                1. 1

                                                  an IDL which would be close to on-the-wire format

                                                  (Just in case you missed it, this is what Preserves Schema does. You give a host-language-neutral grammar for JSON values (actually Preserves, but JSON is a subset, so one can stick to that). The tooling generates parsers, unparsers, and type definitions, in the various host languages.)

                                                  1. 1

                                                    If your client and server exist in the same source tree, then you don’t have the problems that IDLs solve :)

                                                    edit: For the most part. I guess if you don’t control deployments, IDLs could help, but certainly aren’t required.

                                                    1. 1

                                                      They are the server and a client — I need something to test with, but I don’t want that specific client to be the only way to drive the thing.

                                                      1. 1

                                                        They are the server and a client — I need something to test with, but I don’t want that specific client to be the only way to drive the thing.

                                                        I test them against each other and I know they are compatible, but I don’t actually know how the JSON on the wire looks. They might silently exchange xml for all I know :-)

                                                        It seems you have a problem :)

                                                        If the wire protocol isn’t specified or at least knowable independent of the client and server implementations, then it’s an encapsulated implementation detail of that specific client and server pair. If they both live in the same repo, and are reasonably tested, and you will never have other clients or servers speaking to them, then no problem! It’s a closed system. The wire data doesn’t matter, the only thing that matters is that client-server interactions succeed.

                                                        If all you want to do is test interactions without creating a full client or server component, which I agree is a good idea, you for sure don’t need an IDL to do it. You just tease apart the code which speaks this custom un-specified protocol from the business logic of the client and server.

                                                        package protocol
                                                        package server // imports protocol
                                                        package client // imports protocol

                                                        Now you can write a mock server and/or mock client which also import protocol.
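The layout above can be sketched concretely. This is a toy Python analogue (all names invented) of the same separation: the shared protocol module is the only code that knows the wire encoding, so a mock server and a real client agree on the format by construction.

```python
# Hypothetical sketch: a shared "protocol" layer owning the wire format,
# imported by both real and mock peers. Names and format are invented.
import json

# --- protocol: the only code that knows the wire encoding ---
def encode_request(method, params):
    return json.dumps({"method": method, "params": params}).encode("utf-8")

def decode_request(raw):
    msg = json.loads(raw.decode("utf-8"))
    return msg["method"], msg["params"]

# --- a mock server that imports only the protocol layer ---
def mock_server(raw):
    method, params = decode_request(raw)
    if method == "add":
        return json.dumps({"result": sum(params)}).encode("utf-8")
    return json.dumps({"error": "unknown method"}).encode("utf-8")

# --- a client call routed through the mock instead of a socket ---
raw = encode_request("add", [1, 2, 3])
reply = json.loads(mock_server(raw).decode("utf-8"))
```

Swapping `mock_server` for a real socket-backed server changes nothing in the client, which is the point of teasing the protocol code apart.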

                                                        1. 2

                                                          This (extracted protocol with opaque wire format) is exactly the situation I am in. I kinda do want to move to the world where the protocol is knowable independently of the specific client and server, hence I am seeking an IDL. And your comment helped me realize that just writing tests against the specific wire format is a nice intermediate step towards making the protocol specification – this is an IDL-less way to actually see what the protocol looks like, and fix any weirdness. As in, after writing such tests, I changed JSON representations of some things because they didn’t make sense for an external protocol (it’s not stable/public yet, so it’s changeable so far).

                                            1. 1

                                              You might find Preserves Schema interesting [1,2]. I’ve been using it (uh, and defining and implementing it :-) ) for multi-language interop in a few different settings including RPC. It’s still early days for it, though.


                                              1. 2

                                                We need more and better options for building reactive systems. We need reactive worlds we can run on the server-side, and the browser-side, and a common jargon and pattern language with which to connect them.

                                                But we need them to be more than spooky action at a distance. We need to make sure we’re not papering over fundamental sync/async disconnects, or hiding real interface points just to make a good demo.

                                                1. 2

                                                  Yep! This is what I’ve been working on recently. I was intrigued by the OP because they are exploring (rigorously!) the styles of programming that are natural for Syndicate, too. For example, a little table display program [1] in Syndicate is reminiscent of their program.

                                                1. 1

                                                  I think a measurement of watt consumption would be more useful and provide a better idea about drain. As is, it depends on battery size, age, etc.

                                                  1. 3

                                                    I think the point of this exercise was to see how it looks on a standard device – so… whatever battery the PinePhone comes with, and not a very old one (judging by the looks of the curve, and assuming the underlying device doesn’t do any fancy adjustment in reported levels which I’m guessing it doesn’t).

                                                    FWIW, while power (or, more commonly for battery-powered devices, current) measurements are obviously more useful for examining consumption, battery discharge curves are in fact a very useful data source, especially for consumer devices. They allow you to derive answers to questions like “will the user be able to answer an email if they have 10% battery left?” with less number-wrangling and less guessing about what happens at the end of the discharge curve than if you had to derive them from power consumption figures.

                                                    (Although you generally need both if you want to make anything more than educated guesses. EE101 intros like to pretend you can mostly get one from the other but IRL it’s… not really like that. So I’d definitely love to see the power consumption profile for these things, too :-D.)

                                                    1. 3

                                                      You’re right that it’d be useful. It’s very early days, though! I ran this little experiment to see what kind of a baseline I could expect from the dumbest possible power management strategy of “do nothing”.

                                                      Once I have the userland doing something more interesting in a stable-ish way, I’ll be looking into the details more.

                                                    1. 3

                                                      Everything old is new again; I’m heavily reminded of Zephyr ASDL, which Oil, Python, and Monte all use for AST schemata. It seems like there should be a simple adapter from Preserves to ASDL.

                                                      1. 2

                                                        Yes, Zephyr ASDL was something I looked into when thinking about schemas. It was definitely an influence. Another strong influence is RELAX NG, though I’ve not properly lifted the stuff about sequencing and interleave from the latter yet! One difference that I can see between Preserves schemas and Zephyr is that Preserves schemas include not only a description of (roughly) algebraic types, but also a (kind of invertible) connection between those types and surface syntax.

                                                        So for example NamedAlternative = [@variantLabel string @pattern Pattern] in the metaschema matches a two-element sequence (array) with a string in slot 0 and something that parses to a Pattern in slot 1, but the parse of a NamedAlternative results in a record containing a string-valued “variantLabel” field and a Pattern-valued “pattern” field; whereas NamedSimplePattern_ = <named @name symbol @pattern SimplePattern> again results in a record with a “name” and a “pattern” field, but matches records with label named that have two fields parsing as a symbol and a SimplePattern respectively. And serializing the two types produces syntax that parses back to them.
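The invertible connection described above can be illustrated with a toy. This is not the real Preserves Schema tooling, just a hand-rolled Python analogue of the parse/unparse pair such a schema implies for `NamedAlternative`:

```python
# Illustrative only: a toy Python analogue of the invertible
# parse/serialize pair that Preserves Schema tooling generates.

def parse_named_alternative(value):
    # Surface syntax: a two-element sequence [string, pattern].
    label, pattern = value
    assert isinstance(label, str)
    # Parsed form: a record with named fields.
    return {"variantLabel": label, "pattern": pattern}

def unparse_named_alternative(record):
    # Serializing produces syntax that parses back to the same record.
    return [record["variantLabel"], record["pattern"]]

# A stand-in Pattern value; in the real metaschema this would itself
# be parsed recursively rather than passed through opaquely.
surface = ["ok", {"kind": "SimplePattern"}]
record = parse_named_alternative(surface)
roundtrip = unparse_named_alternative(record)
```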

                                                        It’d probably be straightforward to extract something very close to an ASDL definition from a Preserves schema.

                                                      1. 2

                                                        It would be great if this article and the one on Preserves’ data model had more information on the rationale and goals, as it’s not immediately apparent what problem Preserves is addressing and what makes it different from the rest. Looks interesting, though!

                                                        1. 2

                                                          Thanks! That’s a comment that keeps coming back, it’s good feedback. I’ll try to write something up on the motivation.

                                                        1. 5

                                                          Can it be less ambient?

                                                          A recent topic in capability theory is whether Dataspaces’ way of modeling ambient shared scratchpads (the “data spaces” themselves) needs its exemption from capability-safety. At the same time, a common complaint about systemd/D-Bus/GNOME/etc. is that there is an ambient message bus which has many powerful clients connected by default. These have the same taste to me, when imagining a capability-aware language being used to implement PID 1; is there some ambient authority that could be removed here?

                                                          I might well not understand Dataspaces, and I’m happy to learn more about your plan.

                                                          1. 3

                                                            Absolutely. So the theory up until recently had exactly one dataspace as execution context & communications medium for each actor in a tree.

                                                            But now I’ve reworked things to include capabilities, I’ve also moved away from that perspective. Now a dataspace is an object-in-a-vat like any other. Capabilities secure access to dataspaces or to any other object-in-a-vat.

                                                            There’s no longer exactly one privileged dataspace per actor. My early impressions of this new style of “syndicated actor” programming are that it will lead to many more smaller more tightly-focussed task- or domain-specific dataspaces interconnected in a loose web, within and among machines.

                                                            Programs connect to a server and upgrade access from Macaroon-like datastructures (basically sturdyref-like) to more ephemeral references.

                                                            There’s a little (completely undocumented) proof-of-concept in

                                                            Hang on, I’ll do a quick screencast and post it here.

                                                            1. 3
                                                          1. 2

                                                            This is a very nice project! Now I feel very curious about what a daemon soup looks like in Syndicate-lang, and about the extension of object-capabilities to syndicated actors. The latter may have suggestions for many system designs outside the current scope of your project – I was just reading about Matrix Spaces, which seem to be trying to couple spatial intuitions with access/permissions, and I wonder what Chris Webber’s wonderings about ocaps in the Fediverse might suggest.

                                                            1. 1

                                                              Yes indeed! I’m excited to find out what it will be like.

                                                              I’ve actually been discussing all this stuff somewhat regularly with Chris Webber. I really like his stuff and our discussions are always useful and interesting.

                                                            1. 2

                                                              Thanks @gasche :-)

                                                              Hi, I’m the author, AMA!

                                                              1. 2

                                                                Self taught dev here. I’ve been really enjoying reading your dissertation but I’m getting stuck at the type theory. What’s out there for getting up-to-speed on how to read CS proofs?

                                                                1. 3
                                                                  1. 2

                                                                    Hi! @gasche’s recommendation is solid, and I’d also like to recommend the Redex book [1] [2], the first half of which is a course in modern small-step operational semantics. That book, plus the standard type systems text [3], should be heaps to be getting on with :-)

                                                                    [1] Felleisen, Matthias, Robert Bruce Findler, and Matthew Flatt. Semantics Engineering with PLT Redex. Cambridge, Massachusetts: MIT Press, 2009.


                                                                    [3] Pierce, Benjamin C. Types and Programming Languages. MIT Press, 2002.

                                                                1. 2

                                                                  Can’t we just implement actors inside the Linux kernel?

                                                                  1. 2

                                                                    Sure; we can model actors as Linux processes. This gives us the mutable state, isolated turn-taking and I/O, and ability to connect to other actors.

                                                                    In terms of improving the security around each process so that they behave more like isolated actors, Capsicum/CloudABI was a possibility and is available on e.g. FreeBSD, but on Linux, eBPF is the API that folks are currently using.

                                                                    1. 2

                                                                      This is the complaint I commonly hear about processes in Linux - it allocates too much memory … can that be solved? Assuming it allocates even 1kb per process, you can’t run millions of actors. Supervising is also tricky, it seems.

                                                                      1. 3

                                                                        Yes, the kernel isn’t designed for millions of processes. (Though it is only software, so it could be changed…) One approach I find interesting is recursive layering of actor systems: where a runtime-for-actors is itself an actor. This gives you a tree of actors. With a non-flat system, the vanilla actor model of addressing and routing doesn’t work anymore, so you have to do something else; that’s part of what motivated my research (which in turn led to this Java actor library…). But it does solve the issue of not being able to run millions of actors directly under supervision of the kernel. You’d have one actor-process acting (heh) as an entire nested actor system with millions of actor-greenthreads within it.

                                                                        (Edited to add: 1kb * 1 million = 1 gigabyte. Which doesn’t seem particularly excessive.)
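The recursive-layering idea above can be sketched in a few lines. This is a minimal, invented Python model (not the Java library from the OP): a runtime-for-actors is itself an actor, so systems nest into a tree, and routing through envelopes replaces a flat global address space.

```python
# Sketch (invented names): a runtime-for-actors that is itself an actor,
# so actor systems nest into a tree rather than one flat address space.

class Actor:
    def receive(self, message):
        raise NotImplementedError

class Counter(Actor):
    """A leaf actor with a bit of private mutable state."""
    def __init__(self):
        self.count = 0
    def receive(self, message):
        self.count += message

class NestedSystem(Actor):
    """An actor whose behaviour is to host and route to child actors."""
    def __init__(self):
        self.children = {}
    def spawn(self, name, actor):
        self.children[name] = actor
    def receive(self, message):
        # Message envelope: (child name, payload). With non-flat
        # addressing, explicit routing like this replaces vanilla
        # actor-model global addresses.
        name, payload = message
        self.children[name].receive(payload)

root = NestedSystem()
inner = NestedSystem()       # a whole actor system, hosted as one actor
root.spawn("inner", inner)
c = Counter()
inner.spawn("c", c)
root.receive(("inner", ("c", 5)))   # routed down two levels
```

In a real system each `NestedSystem` would own a mailbox and scheduler of its own; here the delivery is synchronous just to show the tree shape.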

                                                                  1. 5

                                                                    This core library is very similar to the promise and vat cores of E, Monte, etc. I am not really surprised that it is relatively small and simple. The difficulty will come when wiring up vats to I/O, and converting blocking I/O actions into intra-vat actions which don’t mutate actors.

                                                                    1. 3

                                                                      What difficulties did you have in mind? I’ve done it before, largely following the Erlang playbook, and didn’t have any particular trouble. It does mean that users of the system have to really buy in to the Actor style - naively using it won’t work well - but that’s a benefit rather than a drawback :-)

                                                                      1. 6

                                                                        Many issues come to mind. It’s quite possible that we overcomplicated the story of I/O in Monte, and of course our reference implementation is horribly messy, like a sawblade caked with sawdust. I don’t think that you have to deal with any of these, but they all came up in the process of making a practical flavor of E, so I figure that they’re worth explaining.

                                                                        • A few community members wanted Darwin and Windows support, so rather than writing my own glue over ((RPython’s version of) Python’s version of) BSD sockets, I bound libuv. This probably was a mistake, and I constantly regret it. But we don’t have to write many reactors/event-loops, so it’s a wash.
                                                                        • We have a rather disgusting amount of glue. IPv4 and IPv6 are distinct; sockets, files, and pipes are all streams but with different implementation details.
                                                                        • I/O needs to be staged. For example, on file resources, the method .getContents() :Vow[Bytes] will atomically read a file into memory, returning a promise for a bytestring, but the underlying I/O might need to be broken up into several syscalls and the intermediate state has to live somewhere. Our solution to this was a magic with io: macro system which turns small amounts of Python into larger amounts of Python, adding state storage and error-handling.
                                                                        • Backpressure isn’t free. Our current implementation of perfect backpressure requires about 900 lines of nasty state machines, promise-routing, and error handling (in Monte) and mostly it makes slowness.
                                                                        • I/O must happen between vat turns. This is probably obvious to you, but it wasn’t obvious to us that we can’t just have a vat where all the I/O happens, and instead we have to explicitly interleave I/O. The scheduler I wrote is a baroque textbook algorithm which can gallop a bit but is not great at throughput.
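The staging problem in the list above (one logical read broken into several bounded operations, with intermediate state living somewhere) can be sketched without any macro system. This is a hedged, invented Python analogue where a generator holds the intermediate state and suspends between partial reads, loosely analogous to resolving a `Vow[Bytes]` across vat turns:

```python
# Hedged sketch: staging one logical read into several smaller "syscalls",
# with the intermediate state living in a generator rather than a macro.

CHUNK = 4  # pretend syscall-sized read

def staged_get_contents(data):
    """Yields after each partial read; a driver resumes it between turns."""
    buf = b""
    offset = 0
    while offset < len(data):
        chunk = data[offset:offset + CHUNK]  # one bounded "syscall"
        buf += chunk
        offset += len(chunk)
        yield None           # suspend: let other turns run
    yield buf                # final value, like resolving the promise

def drive(gen):
    # A toy scheduler: run the staged I/O to completion.
    result = None
    for step in gen:
        if step is not None:
            result = step
    return result

result = drive(staged_get_contents(b"hello world"))
```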

                                                                        But there’s also thoughts on how to make it better.

                                                                        • We used to joke that TLS is difficult but stunnel is easy. Similarly, HTTP is hard, but shelling out to curl is easy. Maybe we should use socat instead of binding many different pipe-like kernel APIs. After all, if we ever have Capsicum or CloudABI support, then we’ll have all I/O multiplexed over a single pipe anyway.
                                                                        • We can do per-platform support modules, and ask community members to be more explicit in indicating what they want. If we stopped using libuv, then I think that we could isolate all of the Darwin-specific fixes.
                                                                        • Because Monte is defined as a rich sugar on a kernel language, including high-level promise-handling idioms, we can redefine some of those idioms to improve the semantics of promises. We’ve done it before, simplifying how E-style when-blocks are expanded.
                                                                        1. 2

                                                                          Thanks for these! Very interesting. And looking at the Monte website, there’s a lot for me to learn from there too.

                                                                          Re: backpressure: good generic protocols for stream-like interactions are, it seems to me, still an open research area. So far I’ve had good-enough results with explicit credit-based flow control, and circuit-breaker-like constructs at system boundaries, but it still feels pretty complex and I find it easy to get it wrong.
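The explicit credit-based flow control mentioned above can be shown in miniature. This is an invented Python sketch (not any particular library's API): the receiver starts the sender with a credit window; the sender buffers locally when credit runs out, and flushes when credit flows back.

```python
# Minimal sketch of explicit credit-based flow control (invented names):
# the receiver grants credit; the sender may only emit while credit remains.

class Receiver:
    def __init__(self, window=2):
        self.inbox = []
        self.window = window    # initial credit granted to the sender
    def deliver(self, item):
        self.inbox.append(item)
    def process_one(self):
        # Consuming an item frees one unit of credit for the sender.
        return self.inbox.pop(0)

class Sender:
    def __init__(self, receiver):
        self.receiver = receiver
        self.credit = receiver.window
        self.pending = []
    def send(self, item):
        if self.credit == 0:
            self.pending.append(item)   # back-pressure: hold locally
            return False
        self.credit -= 1
        self.receiver.deliver(item)
        return True
    def grant(self, n=1):
        # Receiver signals credit after processing; flush anything held.
        self.credit += n
        while self.pending and self.credit:
            self.send(self.pending.pop(0))

r = Receiver(window=2)
s = Sender(r)
s.send("a"); s.send("b")
blocked = s.send("c")        # no credit left: buffered in s.pending
processed = r.process_one()  # receiver consumes "a"
s.grant(1)                   # credit flows back; "c" is flushed
```

Real implementations additionally need the error handling and promise routing mentioned above, which is where the line count explodes.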

                                                                          Re: I/O staging: in the Erlang world, there’d likely be a new actor for managing the state of the transfer. Is this kind of approach not suitable for Monte?

                                                                          Re: I/O and turns. My approach to this is that “real” I/O is managed by special “device driver” actors(/vats), like Erlang, but that the I/O calls themselves actually take place “outside” the actor runtime. Special gateway routines are needed to deliver events from the “outside world” into the actor runtime. In the OP Java library, the “gateway” is to schedule a Turn with Turn.forActor() (if you don’t have a properly-scoped old Turn object already) or myTurn.freshen() (if you do).

                                                                          1. 3

                                                                            I/O staging: in the Erlang world, there’d likely be a new actor for managing the state of the transfer. Is this kind of approach not suitable for Monte?

                                                                            It probably would work in a hypothetical future Monte-in-Monte compiler which could replace various low-level data objects (Double in particular) with pure-Monte implementations. In that world, we’d have to generically solve the state-storage problem as a function of memory layouts, and then we could implement some complex I/O in terms of simpler I/O.

                                                                            Thanks for your hard work. I like learning from other actor systems.

                                                                    1. 5

                                                                      As ever, homoiconicity isn’t the point. Automated, scriptable refactoring tools, though - those are nice! Good to see an example here from Clojure land. Smalltalk also has (or, can have) good support for such things. For an example, see the library that underpins the RefactoringBrowser in Squeak Smalltalk, the Refactoring Engine. You can use it for ad-hoc refactorings from a Workspace window.

                                                                      1. 2

                                                                        Super cool! I have wanted to get into ST recently. At the moment the thing that most prevents me is not having a recent vm on OpenBSD.

                                                                        I did pick up an M1 Mac recently though, maybe working under Rosetta will be fast enough.

                                                                        1. 2

                                                                          Looks like Cog can be built for OpenBSD:

                                                                          I haven’t tried it myself! But there’s a screenshot in that thread that shows Squeak running, so it could be a promising line of investigation.

                                                                          Also, the aarch64 build of Cog works pretty well. (Not sure about M1 specifically.) The aarch64 build of Cog is what’s driving squeak-on-a-phone.

                                                                          1. 1

                                                                            Oh awesome! ty for digging this up! Last I knew it took a bunch of patches :D