1. 16
    1. 6

      I really enjoy when someone comes up with a solution I’ve just never even come close to thinking of. The surprise is fun.

      Here, it was the realization that we can try to solve the “updating non-optional fields” challenge by treating readers and writers separately. Time will tell if that’s actually a good idea, but it’s at least a new point in the design space. Very cool.

      Also this library seems surprisingly well fleshed-out. I expected more alpha-quality software, but it seems like everything’s in order. Great documentation.

      I wonder how the performance stacks up against other encoding frameworks. It would be cool to see it added to this comparison benchmark:


      1. 1

        Thanks for the kind words! It was a pleasant surprise waking up and seeing Typical on Lobsters this morning.

        1. 1

          I learnt about it at ZuriHac. Tie, which generates Haskell server stubs from OpenAPI, was presented there, and since we at work generate OpenAPI from applicative-style Haskell schemas (like autodocodec, but not type-class based), I am curious about this space.

          Still looking for comprehensive literature on this. I heard that Designing Data-Intensive Applications covers Avro, but I’d like to see some coverage comparing more options.

    2. 3

      This is a data serialization format and library (like Protocol Buffers) with first-class support for algebraic data types (like Rust enums). It has a novel approach to ensuring compatibility between schema versions when adding fields to structs or cases to choice types.

      In the case of adding a field to a struct and marking it “asymmetric”:

      • new readers assume the field is optional
      • new writers assume the field is required
      • old writers still out in the field just keep writing messages without the new field
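      One way to picture the asymmetry is as two different generated types for the same struct. This is just a hypothetical sketch in TypeScript (not Typical's actual generated code), with invented names like `SendEmailRequestOut` and `ccCount`:

      ```typescript
      // Writers see the asymmetric field as required...
      interface SendEmailRequestOut {
        to: string;
        cc: string[]; // asymmetric: must be supplied when writing
      }

      // ...but readers see it as optional, because old writers may
      // still emit messages without it.
      interface SendEmailRequestIn {
        to: string;
        cc?: string[]; // asymmetric: may be absent when reading
      }

      // A reader is therefore forced to handle the absent case.
      function ccCount(message: SendEmailRequestIn): number {
        return message.cc?.length ?? 0;
      }
      ```

      The point of the split is that the type checker pushes the "field might be missing" burden onto readers while simultaneously guaranteeing that every new writer populates the field.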

      In the case of adding a case to a choice:

      • new writers that use the new case MUST specify a fallback option
      • new readers will happily consume the new case
      • old readers automatically handle cases they don’t know by reading the fallback option (this can be a chain that will eventually bottom out to a case that the old reader does know how to handle)
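      The fallback chain can be sketched roughly like this (a toy model in TypeScript, not Typical's wire format or API; `Payment`, `Message`, and `read` are invented for illustration):

      ```typescript
      type Payment =
        | { kind: "cash" }
        | { kind: "card" }
        | { kind: "crypto" }; // newly added case

      // A message carries the preferred case first, followed by its
      // chain of fallbacks.
      interface Message {
        chain: Payment[];
      }

      // A reader takes the first case in the chain it recognizes, so
      // an old reader skips cases it doesn't know about.
      function read(message: Message, known: Set<string>): Payment {
        for (const candidate of message.chain) {
          if (known.has(candidate.kind)) {
            return candidate;
          }
        }
        throw new Error("no known case in fallback chain");
      }

      // A new writer using "crypto" specifies "card" as its fallback,
      // which in turn bottoms out at "cash".
      const message: Message = {
        chain: [{ kind: "crypto" }, { kind: "card" }, { kind: "cash" }],
      };
      ```

      A reader that knows all three cases gets `crypto`; one that only knows `cash` and `card` gets `card`; one that only knows `cash` bottoms out at `cash`.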

      The author intends for people to use this feature to gradually update readers and writers over time. Once the updates have rolled out over a sufficient period of time or a suitable fraction of users have upgraded, the asymmetric qualifiers can be removed.

    3. 2

      @lettuce are you still working on this thing? Other than asymmetric fields, is there a compelling advantage to choice over Thrift’s union type? Asking as someone who’s never used Thrift but is considering serialization libraries & schema languages for a TypeScript/ADT-first company.

      1. 3

        are you still working on this thing?

        Absolutely, in the sense that Typical is a member of my portfolio of projects that I actively maintain. In terms of feature development, Typical has reached a stable point where things are not changing (e.g., you can count on the binary format not changing in breaking ways).

        Other than asymmetric fields, is there a compelling advantage to choice over Thrift’s union type?

        Typical’s choice types are roughly equivalent to Thrift’s “strict unions” (in Credit Karma’s Thrift to TypeScript code generator)—both support exhaustive pattern matching, which is the proper elimination principle for coproducts. Thrift’s default unions are quite weak in terms of what guarantees you get from the type checker, leaving the critical invariant (that exactly one field is set) up to a runtime check.
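        Concretely, in TypeScript this is the difference between a discriminated union checked exhaustively and a bag of optional fields validated at runtime. A minimal sketch (the `Shape`/`area` names are invented, but the `never` trick is the standard way to get the compiler to enforce exhaustiveness):

        ```typescript
        type Shape =
          | { kind: "circle"; radius: number }
          | { kind: "square"; side: number };

        function area(shape: Shape): number {
          switch (shape.kind) {
            case "circle":
              return Math.PI * shape.radius ** 2;
            case "square":
              return shape.side ** 2;
            default: {
              // If a new case is added to Shape, this assignment stops
              // type-checking, forcing every match to be updated.
              const unreachable: never = shape;
              throw new Error(`unhandled case: ${JSON.stringify(unreachable)}`);
            }
          }
        }
        ```

        With a default-style Thrift union, nothing at compile time stops you from forgetting a case; the "exactly one field is set" invariant only surfaces as a runtime failure.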

        However, you wouldn’t want to use Thrift’s strict unions with exhaustive pattern matching for RPCs, because there is no way to safely add/remove cases as your code evolves over time. I know you said “other than asymmetric fields”, but asymmetric fields are the key feature that allows schema changes to be made safely.