1. 25
  1. 8

    I’ve been using gRPC for about the past four years, and by far the most important thing I wish we had understood at the outset is that you can use gRPC with a variety of other encodings.

    Protobuf is a mess (https://reasonablypolymorphic.com/blog/protos-are-wrong/) and you should avoid it if at all possible; you can still use gRPC without it.
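
    For anyone curious what that looks like concretely, here’s a minimal sketch in Go, assuming grpc-go (the thread doesn’t settle on a language); the “json” codec name and the use of encoding/json are purely illustrative, and the same hook works for any encoding:

    ```go
    package jsoncodec

    import (
        "encoding/json"

        "google.golang.org/grpc/encoding"
    )

    // jsonCodec satisfies grpc-go's encoding.Codec interface, so RPC payloads
    // get serialized with encoding/json instead of protobuf.
    type jsonCodec struct{}

    func (jsonCodec) Marshal(v interface{}) ([]byte, error)      { return json.Marshal(v) }
    func (jsonCodec) Unmarshal(data []byte, v interface{}) error { return json.Unmarshal(data, v) }
    func (jsonCodec) Name() string                               { return "json" }

    func init() {
        // Makes the codec available under the "json" content-subtype; clients
        // opt in per call or per connection with grpc.CallContentSubtype("json").
        encoding.RegisterCodec(jsonCodec{})
    }
    ```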

    1. 4

      I’ve been working with Protobufs for a few years now. The author’s criticisms are valid, but I’d change the emphases. The formal “niceness” of the type system has generally not been an issue in my usage: optionals, enums, and submessages built from ints and strings cover practically everything.

      What is more of a problem is that nearly every field in a Protobuf definition ends up optional, and I believe fields are effectively optional by default in the latest version. That’s best practice for preventing deserialization failures when a field is omitted, especially when the send-side schema is a newer version that no longer requires it. But optionals force you to check for presence, which at best is annoying and at worst encodes a brittle schema in code. They also make the type hard to use in interior code, since you’re never sure which fields are filled out. The author touches on the problem of having separate transport and internal types; optionals are a major obstacle to using Protobufs for internal types.

      At the crux of it, I think the problem is that Protobufs are implicitly meant for senders and receivers with evolving versions of the schema, yet provide no extra features to make that evolution smooth. There have been plenty of horror stories about systems crashing because of mismatched proto expectations.
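
      To make the presence-checking pain concrete, here’s a minimal sketch in Go, assuming the pointer-per-field shape that protoc-gen-go emits for proto3 `optional` scalars; the UserUpdate message and its fields are made up for illustration:

      ```go
      package main

      import "fmt"

      // Hypothetical stand-in for what protoc-gen-go generates for a proto3
      // message with `optional` scalar fields: each one becomes a pointer,
      // and presence has to be checked by hand before use.
      type UserUpdate struct {
          Name *string
          Age  *int32
      }

      func apply(u *UserUpdate) {
          // Every optional field forces a nil check; forgetting one either
          // panics on dereference or silently treats "absent" as the zero value.
          if u.Name != nil {
              fmt.Println("new name:", *u.Name)
          }
          if u.Age != nil {
              fmt.Println("new age:", *u.Age)
          }
      }

      func main() {
          name := "alice"
          apply(&UserUpdate{Name: &name}) // Age omitted: the receiver has to cope either way
      }
      ```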

      1. 2

        Um, ever since I understood it, I’ve seen all-optional as an important feature, not a bug. To me it carries the same “zero by default” semantics as struct fields in Go. The trick is to define your fields so that the zero value is a perfectly sensible and meaningful common default. For some values, like string names, zero (the empty string) might not make much sense, but then you probably need more complex validation anyway, so validation annotations with a code generator for them (protoc-gen-validate, I think it’s called) might be the next useful step. And for even more advanced cases you have to check them in code anyway, unless the expectation is that protobufs should be a formal proof language.

        That said, as described in https://aip.dev/203, there are annotations like ‘REQUIRED’, though they have quite nuanced semantics, and those make a lot of sense to me as such. Here I do miss an automatic protoc generator for their validation, but hopefully one will appear sooner or later (and I sometimes wonder whether it would be so hard to write one myself).
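
        A minimal sketch of that “design fields so zero is meaningful” idea, using plain Go structs as stand-ins for decoded messages (the message and field names are made up):

        ```go
        package main

        import "fmt"

        // ListRequest: both zero values are sensible defaults, so all-optional
        // decoding is harmless here.
        type ListRequest struct {
            PageSize int32  // 0 means "let the server pick a page size"
            Filter   string // "" means "no filter"
        }

        // CreateUser: an empty name is never valid, so the zero default has to
        // be caught by explicit validation (the kind of check a
        // protoc-gen-validate-style annotation could generate).
        type CreateUser struct {
            Name string
        }

        func validate(c CreateUser) error {
            if c.Name == "" {
                return fmt.Errorf("name is required")
            }
            return nil
        }

        func main() {
            fmt.Println(validate(CreateUser{})) // an omitted field shows up as "", caught only by this check
        }
        ```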

      2. 2

        I wish I’d known this before I left my last gig.

        I also dislike protobuf, but although it’s a mess, I think the one exception to “you should avoid it if at all possible” is if you’re using Java on both ends. That doesn’t make it any less of a mess, but it’s mostly a mess in exactly the same way that the Java type system is a mess, so it ends up being beneficial.

        Or at least that’s what I heard from the cloud devs doing Java on the other (unfortunately only theoretical) end of the gRPC interface that I was working on from Rust.

        1. 3

          My experience comes from using it with the JVM on both ends (but Clojure rather than Java) and we still had a lot of headaches. In particular the part where you send it a nil in an integer field and it silently converts it to zero is mind-bogglingly bad. Or where you send a negative number in a field that’s defined as an unsigned int and it silently accepts it; what the hell.

          1. 2

            Sounds like annotating deserialized stuff in Kotlin as non-null, only for the deserialization system to ignore that for obvious reasons. Those are the times I miss Rust so much.

            1. 2

              Woof. I guess I was slightly more fortunate. We ended up only sending and receiving with Rust, so everything was super explicit to get things into the types needed by the generated protobuf message structs.

              So many messages with foo and foo_is_set fields though.

          2. 2

            Just curious if you recommend any specifically? I’ve tried once or twice to look into gRPC, but each time I was put off by protobuf.

            1. 2

              We use EDN at work to communicate between Clojure services; that’s the only one I have experience with, but I’ve heard people like msgpack too.

              1. 1

                Would you link any references to using gRPC with EDN or msgpack encodings?

                1. 1

                  I believe you guys have multiple good reasons, so could you tell me why you don’t just use JSON with a schema validator instead? I find the inspectability, flexibility, and interoperability of JSON make it a better choice over almost anything else, so in an attempt not to be a frog in a well: under what constraints does it make more sense to use gRPC?

                  1. 2

                    JSON would have been a big improvement over protobuf too, but being able to seamlessly encode UUIDs and Dates directly was more important to us than being able to support non-Clojure services.

                    I’m only talking about the encoding within gRPC; whether to use gRPC vs REST is a completely different question that unfortunately was made above my pay grade. If it were up to me we would have used EDN over REST.

                    1. 1

                      Got it. Thank you. Maybe they have some magical wisdom I’m missing.

            2. 4

              Helpful article!

              I recently tried and liked buf, which seems to have a lot of overlap with the topics mentioned here.

              https://github.com/bufbuild/buf

              (Apologies if this tooling is redundant for gRPC. I’m using Protobufs with Twirp, but there was enough overlap with what it suggested that I wanted to mention it.)

              1. 4

                Just had a thought/question: if the topic of “best practices” applies to X, does that mean the UX for X is not completely refined?

                1. 3

                  In general, I’d say yes. This has come up with patterns before, too: a lot of the patterns advertised in books for Java weren’t needed for Smalltalk (usually due to Smalltalk having lambdas with nonlocal return and being fully dynamic), and in turn, a lot of GoF patterns have notes that they aren’t needed/don’t apply in Common Lisp.

                  Where I think it can show up legitimately is when a tool is radically different from the one most people know, and so people try to bend it in ways it’s not designed for. I’ve had a perennially half-finished book called “Windows Is Not Linux,” for example, that goes into common mistakes Linux admins make when dealing with Windows. But that kind of situation is an exception, not the rule.

                  1. 2

                    I had this thought too and have been observing this phenomenon. I would say yes. This article and the comments here at Lobsters read like this to me:

                    #1: don’t use it.

                  2. 4

                    There’s also Google’s API styleguide: https://google.aip.dev/general

                    All public Google Cloud APIs conform to these guidelines unless they are exempted or differ for historical reasons.

                    1. 2

                      I absolutely recommend it; it’s a lot to read and understand, but it encodes tons of clearly hard-won experience. Notably, I see it first of all as a guide on how to design and evolve a good REST API, with gRPC being more of an optimization detail.