I suggest running any new formats or protocols by the LANGSEC people to make sure their tools can turn them into secure, predictable parsers, and modifying them if not.
In this case the authors have made an announcement there, and at least Tony Arcieri lurks in that community besides. See the (currently quite short) discussion thread here.
Good to know they’re already talking.
I was hoping for something a little bolder – maybe some entity definitions including, e.g., required fields and such. Sort of a protocol-buffers-over-JSON kind of thing.
Protocol Buffers 3 does away with required/optional, but it now round-trips through JSON natively.
Can someone elaborate on when and why it would make sense to handle this logic within the JSON document itself? The article says “Its primary intended use is in cryptographic authentication contexts”, but it’s not clear to me why you’d need an alternative to JSON for that use case.
It’s needed to cleanly disambiguate binary data from Unicode strings in a content-aware hashing setting, assuming you want the same digests to be computed for (T)JSON data and, e.g., protos. This is particularly important if your document contains fields like cryptographic hashes or public keys; otherwise, to verify the protos you’d have to round-trip all the binary fields through Base64(url) to compute their hashes.
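To make that concrete, here’s a small sketch of the idea (not the official parser; the `name:tag` postfix syntax and the `d`/`s` tags follow my reading of the TJSON draft, and the field names are made up). A `d`-tagged member decodes to raw bytes before hashing, so the digest is over the bytes themselves rather than their base64url text form:

```python
import base64
import hashlib
import json

def decode_member(name, value):
    """Split a TJSON-style tagged member name and decode its value.

    "d" marks base64url-encoded binary data (unpadded, per the draft),
    "s" a plain Unicode string. Other tags are omitted for brevity.
    """
    bare, _, tag = name.partition(":")
    if tag == "d":
        padded = value + "=" * (-len(value) % 4)  # restore base64 padding
        return bare, base64.urlsafe_b64decode(padded)
    if tag == "s":
        return bare, value
    raise ValueError(f"unknown tag: {tag!r}")

doc = json.loads('{"pubkey:d": "SGVsbG8", "name:s": "alice"}')
decoded = dict(decode_member(k, v) for k, v in doc.items())

# The binary field hashes as raw bytes, not as its base64url text form,
# so the same digest can be computed from, e.g., a protobuf encoding.
digest = hashlib.sha256(decoded["pubkey"]).hexdigest()
```

With plain JSON you couldn’t tell `"SGVsbG8"` apart from a string that just happens to look like base64, which is exactly the ambiguity the tags remove.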
Neat! Inline tagging seems like it might be useful in some contexts.
I wonder if you could express hexadecimal, Base32, etc. with JSON Schema instead (example), though, which also supports required fields and conditional dependencies and is implemented in several languages.
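Something along these lines, say (a hypothetical schema – the field names and the hex pattern are just for illustration):

```json
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "type": "object",
  "properties": {
    "pubkey": {
      "type": "string",
      "pattern": "^[0-9a-f]+$",
      "description": "binary data, hex-encoded"
    },
    "name": { "type": "string" }
  },
  "required": ["pubkey"]
}
```

You’d still have to decode the hex yourself before hashing, though, which I suppose is the part TJSON wants to make unambiguous without an external schema.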
The goal of TJSON is to be a self-describing, schema-free format.
I don’t get why the types should be self-describing; it looks like that introduces a lot of clutter.