If I’d written an article called “Protobuffers Are Wrong”, its content would be mostly mutually exclusive with this article, because the #1 problem with Protobuf is performance: the need to do a ton of dynamic allocations is baked into its API. That’s why I designed FlatBuffers to fix that. Most of the rest of Protobuf is actually rather nice, so I retained most of it, though I made some improvements along the way, like better unions (which the article actually mentions).
At the time Protobuf was designed, Google was mostly C++. It is not that unnatural to arrive at a design like Protobuf: 1) start with the assumption that reading serialized data must involve an unpacking step into a secondary representation; 2) make your serialized data tree-shaped in the general case; 3) allow arbitrary mutation of the representation in any order. From these three it follows, even in C++, that 4) the in-memory representation must be a dynamically allocated tree of objects. FlatBuffers questions 1) and 3) :)
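To make the two access models concrete, here is a rough Go sketch; the pb and fb packages stand in for hypothetical generated code (they are not from the article or this thread):

```go
package example

import (
	"google.golang.org/protobuf/proto"

	fb "example.com/schema/flat" // hypothetical FlatBuffers-generated package
	pb "example.com/schema/pb"   // hypothetical protobuf-generated package
)

// Assumption 1) above: Protobuf parsing unpacks the wire format into a
// freshly allocated object tree before any field can be read.
func protobufName(buf []byte) (string, error) {
	person := &pb.Person{} // allocation here, plus more inside Unmarshal
	if err := proto.Unmarshal(buf, person); err != nil {
		return "", err
	}
	return person.GetName(), nil
}

// FlatBuffers drops assumptions 1) and 3): the generated accessors read
// fields straight out of the original byte slice, read-only, no unpacking.
func flatbuffersName(buf []byte) string {
	person := fb.GetRootAsPerson(buf, 0)
	return string(person.Name())
}
```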
To me it sounds like your issue is with whichever protobuf implementation you were playing with when you checked it out.
There are protobuf libs that will do the job fine without all the allocations. Are you aware there are other implementations, and that protobuf is by now a bit of a protocol in and of itself… ?
Link to these magical allocation-less Protobuf implementations?
At least internally to Google, Protobuf allocs are a huge cost, which they’ve so far been unable to eliminate. The best they can do is arenas. If it was easy to fix, they would have done it by now.
I can imagine ways in which you could read a Protobuf without allocations, but it would be a) completely incompatible with the current API, b) without O(1) or random access to data (unlike FlatBuffers), and c) without mutation. That would thus be entirely useless to most users of Protobuf.
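For what it’s worth, a read-only sequential pass over the wire format is possible in Go today with the low-level protowire package. A sketch (the field number is made up), which also illustrates exactly the limitations described above, i.e. no random access and no mutation:

```go
package example

import "google.golang.org/protobuf/encoding/protowire"

// sumField3 scans the raw wire format and sums every varint stored in field
// number 3, without building any message objects. The scan is strictly
// sequential, and the buffer is never modified.
func sumField3(buf []byte) (uint64, error) {
	var sum uint64
	for len(buf) > 0 {
		num, typ, n := protowire.ConsumeTag(buf)
		if n < 0 {
			return 0, protowire.ParseError(n)
		}
		buf = buf[n:]
		if num == 3 && typ == protowire.VarintType {
			v, m := protowire.ConsumeVarint(buf)
			if m < 0 {
				return 0, protowire.ParseError(m)
			}
			sum += v
			buf = buf[m:]
			continue
		}
		m := protowire.ConsumeFieldValue(num, typ, buf) // skip other fields
		if m < 0 {
			return 0, protowire.ParseError(m)
		}
		buf = buf[m:]
	}
	return sum, nil
}
```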
I’m aware of nanopb (https://github.com/nanopb/nanopb). Using it without malloc is only possible in very limited situations, where you might as well have used an even simpler serialization method. It has some serious limitations and is… slow. Compare that with FlatBuffers, which can be used with even less memory, is very fast, and can also be used with more complex datasets.
I use nanopb quite effectively, so none of your issues bother me in the slightest. Nevertheless, it demonstrates that it’s quite possible to use Protobufs without any of the original issues you claim make it unsuitable.
Protobufs are an attempt at a solution for a problem that must be solved at a much lower level.
The goal that Protocol Buffers attempt to solve is, in essence, serialization for remote procedure calls. We have been exceedingly awful at actually solving this problem as a group, and we’ve almost every time solved it at the wrong layer; the few times we haven’t solved it at the wrong layer, we’ve done so in a manner that is not easily interoperable. The problem isn’t (only) serialization; the problem is the concept not being pervasive enough.
The absolute golden goal is having function calls that feel native. It should not matter where the function is actually implemented. And that’s a concept we need to fundamentally rethink all of our tooling for, because it is useful in every context. You can have RPC in the form of IPC: why bother serializing data manually if you can have a native-looking function call take care of all of it for you? That requires a reliable, sequential, datagram OS-level IPC primitive. But from there, you could technically scale this all the way up: your OS already understands sockets and the network—there is no fundamental reason for it to be unable to understand function calls. Maybe you don’t want your kernel to serialize data, but then you could have usermode libraries help along with that.
This allows you to take a piece of code, isolate it in its own module as-is, and call into it from a foreign process (possibly over the network) without any changes at the call sites other than RPC initialization for the new service. As far as I know, this has rarely been done right, though Erlang/OTP comes to mind as a very positive example. That’s the right model, building everything around the notion of RPC as native function calls, but we failed to do so in UNIX back in the day, so there is no longer an opportunity to get it into almost every OS easily by virtue of being the first one in an influential line of operating systems. Once you solve this, the wire format is just an implementation detail: whether you serialize as XML (SOAP, yaaay…), CBOR, JSON, protobufs, flatbufs, msgpack, some format wrapping ASN.1, whatever it is that D-Bus does, or some abomination involving punch cards should be largely irrelevant and transparent to you in the first place. And we’ve largely figured out the primitives we need for that: lists, text strings, byte strings, integers, floats.
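A minimal Go sketch of the “call sites don’t change” idea, with all names hypothetical: the caller depends only on an interface, and whether the implementation is in-process or a remote proxy is invisible at the call site.

```go
package payments

import "context"

// Charger is what calling code is written against; callers neither know
// nor care where the implementation lives.
type Charger interface {
	Charge(ctx context.Context, accountID string, cents int64) error
}

// localCharger does the work in-process.
type localCharger struct{}

func (localCharger) Charge(ctx context.Context, accountID string, cents int64) error {
	// ... perform the charge directly ...
	return nil
}

// rpcClient stands in for whatever transport and codec are chosen; the wire
// format (protobuf, CBOR, JSON, ...) hides behind it as an implementation detail.
type rpcClient interface {
	Call(ctx context.Context, method string, args ...any) error
}

// remoteCharger satisfies the same interface but ships the call to another
// process; call sites do not change when this is swapped in.
type remoteCharger struct{ client rpcClient }

func (r remoteCharger) Charge(ctx context.Context, accountID string, cents int64) error {
	return r.client.Call(ctx, "Charger.Charge", accountID, cents)
}
```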
Trying to tack this kind of thing on after the fact will always be language-specific. We’ve missed our window of opportunity; I don’t think we’ll ever solve this problem in a satisfactory manner without a massive platform shift that occurs at the same time. Thanks for coming to my TED talk.
I’ve been thinking along the same lines. I’m not really familiar with Erlang/OTP but I’ve taken inspiration from Smalltalk which supposedly influenced Erlang. As you say it must be an aspect of the operating system and it will necessitate a paradigm shift in human-computer interaction. I’m looking forward to it.
I’ve been finding myself thinking this way a lot recently, but I’ve also been considering a counterpoint: all software is fundamentally just moving data around and performing actions on it. Trying to abstract moving data and generalizing performing actions always just gets me back to “oops you’re designing a programming language again.”
Instead, I’ve started to try and view each piece of software that I use as a DSL for a specific kind of data movement and a specific kind of data manipulation. In some cases, this is really easy. For example, the jack audio framework is a message bus+library for realtime audio on linux, dbus does the message bus stuff for linux desktopy stuff, and my shell pipelines are a super crude data mover with fancy manipulation tools.
Rampant speculation: the lack of uniformity in IPC/RPC mechanisms boils down to engineering tradeoffs and failure modes. Jack can’t use the same mechanism that my shell does because jack is realtime. dbus shouldn’t use full-blown HTTP with SSL to send a 64-bit int to some other process. Failure modes are even more important: a local function call fails very differently from an RPC over a TCP socket, which fails very differently from an RPC over a UDP socket, which fails very differently from a multicast broadcast.
I feel like the abstractions and programming models we have/use leak those engineering tradeoffs into everything and everybody ends up rolling their own data movers and data manipulator DSLs. From my limited exposure, it seems like orgs that are used to solving certain kinds of problems end up building DSLs that meet their needs with the primitives that they want. You say those primitives are “lists, text strings, byte strings, integers, floats”, but I’d just call all of those (except maybe floats) “memory” which needs some interpretation layer/schema to make any sense of. Now we’re back into “oops I’m designing an object system” or “oops I’m coming up with rust traits again” because I’m trying to find a way to wrangle memory into some nice abstraction that is easily manipulable.
In conclusion, I keep finding myself saying things very similar to what you’ve written here, but when I’ve explored the idea I’ve always ended up reinventing all the tools we’ve already invented to solve the data movement and data manipulation problems that programs are meant to solve.
cap’n proto offers serialisation and RPC in a way that looks fairly good to me. Even does capability-based security. What do you think is missing? https://capnproto.org/rpc.html
Cap’n proto suffers from the same problem as Protobuffers in that it is not pervasive. As xorhash says, this mechanism must pervade the operating system and userspace such that there is no friction in utilizing it. I see it as similar to the way recent languages make it frictionless to utilize third-party libraries.
Well, the fundamental problem IMHO is pretending that remote and local invocations are identical. When things work you might get away with it, but mostly they don’t. What quickly disabuses you of that notion is that some remote function calls have orders of magnitude higher turnaround times than local ones.
What does work is asynchronous message passing with state machines, where failure modes need to be carefully reasoned about. Moreover, it is possible to build a synchronous system on top of async building blocks, but not the other way around…
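A minimal sketch of the second claim (synchronous on top of async), using plain Go channels as the async building block; the timeout branches are where the failure modes have to be faced explicitly:

```go
package rpcsketch

import (
	"errors"
	"time"
)

// request is the async primitive: a message that carries its own reply channel.
type request struct {
	payload string
	reply   chan string
}

// call wraps asynchronous message passing in a synchronous-looking API:
// send a message, then block until the reply arrives or a deadline passes.
func call(inbox chan<- request, payload string, timeout time.Duration) (string, error) {
	req := request{payload: payload, reply: make(chan string, 1)}
	select {
	case inbox <- req:
	case <-time.After(timeout):
		return "", errors.New("send timed out")
	}
	select {
	case resp := <-req.reply:
		return resp, nil
	case <-time.After(timeout):
		return "", errors.New("no reply before the deadline")
	}
}
```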
I have done some decent digging on both and they honestly look pretty similar in purpose and design. Do you have any opinions or references for important differences?
I honestly can’t remember exactly why I chose CBOR over MessagePack, and yes, they are very similar. A big one for me was that CBOR is backed by an RFC, but on the other hand MessagePack’s wider use may make it preferable depending on the situation. There are probably more important things to worry about though!
When we had to choose between MessagePack and CBOR we landed on msgpack because it is so much simpler and more straightforward. The spec is trivial to read and understand, and there are many libraries available in a variety of languages. CBOR seems ambiguous and complex by comparison, particularly with tags being optional for implementations to interpret and weirdly arbitrary (MIME?).
There is also some history between the msgpack community and the CBOR guy, but that isn’t a technical point.
Whatever its faults, there are plenty of good reasons to use protobuf that are not related to scale.
Protobuf is a well known, widely used format with support in many different programming languages. And it’s the default serialization format used by gRPC.
The only more boring choice would be RESTful JSON. But gRPC is actually easier to use for an internal service. You write a schema and get a client and server in lots of different languages.
And you also get access to an entire ecosystem of tools, like Lyft’s Envoy, automatic discovery, CLI/graphical clients, etc.
Maybe instead of using an atypical serialization format (or, god forbid, rolling your own), it would be better to spend your innovation tokens on something more interesting.
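As a sketch of that schema-first workflow: given a hypothetical Greeter service (mirroring the canonical gRPC hello-world example), the generated Go client is used like this; the import path of the generated package is a placeholder.

```go
// greeter.proto (hypothetical):
//
//   service Greeter {
//     rpc SayHello (HelloRequest) returns (HelloReply);
//   }
//
// After running protoc with the Go and gRPC plugins, the generated client
// is used as shown below.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	pb "example.com/hello/gen/greeterpb" // hypothetical generated package
)

func main() {
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	reply, err := pb.NewGreeterClient(conn).SayHello(ctx, &pb.HelloRequest{Name: "world"})
	if err != nil {
		log.Fatal(err)
	}
	log.Println(reply.GetMessage())
}
```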
I second ASN.1. Although underappreciated by modern tech stacks, it is used quite widely, e.g. in X.509, LDAP, SNMP and very extensively in telecom (SS7, GSM, GPRS, LTE, etc). It is suitable for protocols that need a unique encoding (distinguished encoding rules, DER) and for protocols where you can’t keep all the data in memory and need to stream it (BER).
It has some funny parts that might be better done away with, e.g. the numerous string types that nobody implements or cares about. I find it hilarious to have such things as TeletexString and VideotexString.
Support in today’s languages could be better. I suspect that Erlang has the best ASN.1 support of any language. The compiler, erlc, accepts ASN.1 modules straight up on the command line.
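A small illustration of the DER point using Go’s standard-library encoding/asn1 (which implements only a DER subset); the Record type here is made up:

```go
package main

import (
	"encoding/asn1"
	"fmt"
)

// Record becomes an ASN.1 SEQUENCE; with DER, the same value always
// produces the same bytes, which is what certificate-style protocols need.
type Record struct {
	Serial  int
	Subject string
}

func main() {
	der, err := asn1.Marshal(Record{Serial: 42, Subject: "example"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%x\n", der)

	var back Record
	if _, err := asn1.Unmarshal(der, &back); err != nil {
		panic(err)
	}
	fmt.Println(back.Serial, back.Subject)
}
```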
Nobody should use BER ever again, and people should use generated parsers rather than (badly) hand-rolling it. All of the certificate problems that I have seen are not fundamental to ASN.1, but rather badly hand-rolled implementations of BER.
XDR/sunrpc predates even that by ~a decade I believe, and its tooling (rpcgen) is already available on most Linux systems without installing any special packages (it’s part of glibc).
But gRPC is actually easier to use for an internal service. You write a schema and get a client and server in lots of different languages.
Swagger/OpenAPI is so much better than gRPC in that respect that it is borderline embarrassing. No offense intended.
It’s human readable and writeable. You can include as much detail as you want. For example, you can include only the method signatures, or you can include all sorts of validation rules. You can include docstrings. You have an interactive test GUI out of the box which you don’t need to distribute; all they need is the URL for your Swagger spec. There are tools to generate client libraries for whatever languages you fancy, certainly more than gRPC offers, in some cases multiple library generators per language.
But most importantly, it doesn’t force you to distribute anything. There is no compile step necessary. Simply call the API via HTTP; you can even forge your requests by hand.
In a job I had, we replaced a couple of HTTP APIs with gRPC because a couple of Google fanboys thought it was critical to spend time fixing something that just worked with whatever Google claims to be the be-all-end-all solution. The maintenance effort for those APIs easily jumped up an order of magnitude.
gRPC with protobuf is significantly simpler than a full-blown HTTP API. In this regard gRPC is less flexible, but if you don’t need those features (i.e. you really are just building an RPC service), it’s a lot easier to write and maintain. (I’ve found Swagger to be a bit overwhelming every time I’ve looked at it.)
Why was there so much maintenance effort for gRPC? Adding a property or method is a single line of code and you just regenerate the client/server code. Maybe the issue was familiarity with the tooling? gRPC is quite well documented and there are plenty of stack-overflowable answers to questions.
I’ve only ever used gRPC with python and Go. The python library had some issues, but most of them were fixed over time. Maybe you were using a language that didn’t play nice?
Also, this has nothing to do with Google fanboyism. I worked at a company where we used the Redis protocol for our RPC layer, and it had significant limitations. In our case, there was no easy way to transfer metadata along with a request. We needed the ability to pass through a trace ID, and we also wanted support for cancellation and timeouts. You get all that out of the box with gRPC (in Go you use context). We looked at other alternatives, and there were either missing features we wanted or the choice was so esoteric that we were afraid it would present too much of an upfront hurdle for incoming developers.
I guess we could’ve gone with thrift. But gRPC seemed easier to use.
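A short Go sketch of those two features, assuming a made-up x-trace-id metadata key: deadlines and cancellation come from context, and metadata rides along in the outgoing context that every generated gRPC client method accepts.

```go
package rpcsketch

import (
	"context"
	"time"

	"google.golang.org/grpc/metadata"
)

// withTraceAndTimeout attaches a per-call deadline and a trace ID to the
// context; pass the returned ctx as the first argument of any generated
// gRPC client method.
func withTraceAndTimeout(parent context.Context, traceID string) (context.Context, context.CancelFunc) {
	ctx, cancel := context.WithTimeout(parent, 2*time.Second)
	ctx = metadata.AppendToOutgoingContext(ctx, "x-trace-id", traceID)
	return ctx, cancel
}
```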
And it’s the default serialization format used by gRPC.
We got stuck using protobufs at work, and they’ve been universally reviled by our team as being a pain in the neck to work with, merely because of their close association with gRPC. I don’t think the people making the decision realized that gRPC could have the encoding mechanism swapped out. Eventually we switched to a better encoding, but it was a long, tedious road to get there.
What problems have you had with protobufs? All the problems the original post talks about come from the tool evolving over time while trying to maintain as much format compatibility as possible. While I agree the result is kind of messy, I’ve never seen any of those quirks actually cause significant problems in practice.
The two biggest complaints I’ve heard about gRPC are “Go gRPC is buggy” and “I’m annoyed I can’t serialize random crap without changing the proto schema.” Based on what I know about your personal projects, I can’t imagine you having either problem.
Part of the problem is that the Java API is very tedious to use from Clojure, and part of the problem is that you inherit certain properties of golang’s pants-on-head-stupid type system into the JVM, like having nils get converted into zeroes or the empty string. Having no way to represent UUIDs or Instants caused a lot of tedious conversion. And like golang, you can forget about parametric types.
(This was in a system where the performance implications of the encoding were completely irrelevant; much bigger bottlenecks were present several other places in the pipeline, so optimizing at the expense of maintainability made no sense.)
But it’s also just super annoying because we use Clojure spec to describe the shape of our data in every other context, which is dramatically more expressive, plus it has excellent tooling for test mocks, and allows us to write generative tests. Eventually we used Spec alongside Protobufs, but early on the people who built the system thought it would be “good enough” to skip Spec because Protobufs “already gives us types”, and that was a big mistake.
Thanks for the detailed reply! I can definitely see how Clojure and protobufs don’t work well together. Even without spec, the natural way to represent data in Clojure just doesn’t line up with protobufs.
While I actually like the idea in this article for a simpler basis that heavily emphasizes coproducts/sum types, this is not OK:
Protobuffers were obviously built by amateurs because they offer bad solutions to widely-known and already-solved problems.
Obviously, they were most likely professional software developers, even if they made a couple of choices that the author disagrees with.
It would also be nice if the author drafted out the consequences of their proposed design.
In particular, they rant about fake compatibility claims. I might be missing something, but I think that without default values their design does make evolving messages difficult in simple cases. You cannot enhance a “product” with a field because that would be incompatible. One way around this would be to use a versioned “coproduct” at the top level and emit deserialization errors if there are unknown variants.
That somewhat works, but I think that there are use cases that can be solved much better. Compatibility often depends on whether the value is read or written. Obviously, REST APIs often use the same message for both - which comes with its own bag of problems. If you are only concerned about maintaining compatibility with message readers, additional fields which are ignored might be just fine.
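A rough sketch of that “versioned coproduct” workaround; JSON and the type names here are stand-ins chosen purely for brevity, not a claim about what the wire format should be:

```go
package messages

import (
	"encoding/json"
	"fmt"
)

// V1 and V2 are the known variants; adding a field means adding a variant
// rather than silently extending an existing "product".
type V1 struct {
	Name string `json:"name"`
}

type V2 struct {
	Name  string `json:"name"`
	Email string `json:"email"`
}

// Envelope is the versioned "coproduct" at the top level.
type Envelope struct {
	Version int             `json:"version"`
	Body    json.RawMessage `json:"body"`
}

// Decode rejects unknown variants instead of filling in default values.
func Decode(data []byte) (any, error) {
	var env Envelope
	if err := json.Unmarshal(data, &env); err != nil {
		return nil, err
	}
	switch env.Version {
	case 1:
		var v V1
		return &v, json.Unmarshal(env.Body, &v)
	case 2:
		var v V2
		return &v, json.Unmarshal(env.Body, &v)
	default:
		return nil, fmt.Errorf("unknown message version %d", env.Version)
	}
}
```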
Despite map fields being able to be parameterized, no user-defined types can be
all protobuffers can be zero-initialized with absolutely no data in them? Scalar fields get false-y values—uint32 is initialized to 0 for example, and string is initialized as “”
Waaait a second. I can think of another infamous Google designed thing where both of these are the case…
Meh. They may be “wrong” but they’re a godsend compared to a crappy XML format I had to deal with where everything was a string. At least protobuf gives you more options than stringly typed.
Maintain a separate type that describes the data you actually want, and ensure that the two evolve simultaneously.
Isn’t this something you actually don’t want to do with things like enums? Plus, Java enums have major issues anyway, such as being inextricably order-dependent for their ordinal values, and proto at least provides a way out of that flavor of bad with its slightly more sane generation of enums.
Also, the author doesn’t point out any real library that’s a solution, only pseudocoding a lisp-like proto replacement. All of these “proto is bad” rants I’ve seen don’t offer alternatives that I can actually use for my day job. And they usually don’t talk about the advantages, which include not needing to bikeshed a serialization format when you have real work to do.
They may be “wrong” but they’re a godsend compared to a crappy XML format I had to deal with where everything was a string.
In theory nothing prevents you from encoding a Protobuf-defined structure as XML.
which include not needing to bikeshed a serialization format when you have real work to do
If that were the case, then everyone would use ASN.1, which despite a few problems (bloat) is still IMHO a nice format to use. I would love to see an ASN.2 published that removes most of the ASN.1 problems.
In theory nothing prevents you from encoding a Protobuf-defined structure as XML.
The problem is more that new fields required parsing if they were intended to be anything but a string, and we had to write the corresponding generator, also by hand. But part of that is that the XML was not specced formally anyway.
then everyone would use ASN.1
In my experience, protobuf tooling feels much more modern than ASN.1 tooling.
Protobuf as a general purpose serialization format is not good. As far as I can tell, all implementations require loading the entire dataset of a message into memory (multiple times, depending on the language) to deserialize, which means anything larger than a few megabytes is impractical. That’s not really what it was intended for, but there are many serialization formats out there that can handle streaming data while also in general being better at everything else.
all implementations require loading the entire dataset of a message into memory (multiple times, depending on the language)
(Some of) the dynamic language bindings use upb now to avoid this problem. Historically protobufs have been primarily for C++ and Java, with token support for other languages. I suppose now that Google Cloud exposes all their APIs over gRPC, first class support for other languages matters more.
means anything larger than a few megabytes is impractical. That’s not really what it was intended for
You’re half right. Protobufs absolutely do support that use case, but zero-copy types haven’t been open sourced for some reason. Search for “Cord” and “StringPiece” if you’re interested. You can find numerous references to them throughout Google open source projects.
Very interesting if you skim the ranty bit at the start. Though can anyone direct me to an explanation of coproduct types and how they differ from sum types that is better/more complete than Stack Overflow?
I use protobufs extensively in various projects, and for me they are just fine. I have none of the issues of the author of the article - I can put the libs anywhere/everywhere I want, and they solve lots of problems relating to transport of data between independent nodes.
Also, since they have enough metadata on board, they’re a pretty interesting way to derive a quick UI for an editor. So I use them not only as a transport layer, but as a description of what the user should see, in some cases.
Perhaps my view is too broad in scope beyond the horizon, but even though I can accomplish all of the above with something like JSON or XML, I still prefer the performance and ease of use of pbufs where I’m using them.
So, I think the argument is lost on me. Although, there are other ways to accomplish all of the above too, which I might learn about for my next project …
The “google is never wrong” cargo cult is pretty strong out there.
It’s a sad reflection of the intellectual blinders many in the industry have.
Your comment and also some remarks in the article suggest to me that Protobuf was designed for Java and never lost that bias.
You might want to look into QNX, an operating system written in the 80s.
AHEM OSI MODEL ahem
/offgetlawn
Consider CBOR
CBOR is so overlooked, I really wish it had as much attention as MessagePack.
https://tools.ietf.org/html/rfc7049
There’s a comparison in the RFC:
https://tools.ietf.org/html/rfc7049#appendix-E.2
CBOR is an excellent “efficient/binary JSON” but it’s, well, JSON. The main attraction of these protobuf style things is the schema stuff.
I’d love to see a typed functional style schema/interface definition language that uses CBOR for serialization…
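For reference, using CBOR from Go looks just like the JSON workflow; this sketch uses one third-party library (fxamacker/cbor), which is an arbitrary choice rather than an endorsement from the thread:

```go
package main

import (
	"fmt"

	"github.com/fxamacker/cbor/v2" // one of several Go CBOR libraries
)

// In the absence of a schema language, the usual pattern is exactly the
// JSON one: declare the types in the host language and let the codec map
// them to CBOR's primitives (maps, arrays, strings, ints, floats).
type Reading struct {
	Sensor string  `cbor:"sensor"`
	Value  float64 `cbor:"value"`
}

func main() {
	blob, err := cbor.Marshal(Reading{Sensor: "temp", Value: 21.5})
	if err != nil {
		panic(err)
	}
	var back Reading
	if err := cbor.Unmarshal(blob, &back); err != nil {
		panic(err)
	}
	fmt.Printf("%+v (%d bytes)\n", back, len(blob))
}
```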
ASN.1, anyone? For gods’ sake, it is one of the oldest protocol description formats out there, and for some reason people are still missing out on it.
If certs taught one thing to people, it is that no one in their right mind should EVER use ASN.1 again.
I love ASN.1. I guess other people prefer long text descriptions of objects instead of a dotted-number notation.
I know https://capnproto.org/ handles sum and product, not sure about the other parts. Next time I’m near a laptop I’ll check the rest.
What bloat do you think exists? Too many obsolete types? Large wire format?
Obsolete types mostly, and a hell of a lot of them. CUPER provides quite a small wire format.
I’m pretty sure coproduct types/sum types/variants/tagged unions are all synonymous (https://en.m.wikipedia.org/wiki/Tagged_union)
In the case of this article, there is none. In general, a coproduct is more general than a sum type.