I’m fascinated to see what becomes of this project. I’ve been similarly frustrated by the lack of good, reliable gRPC tooling and have been following buf closely for that reason. I like that they don’t seem to want to replace gRPC altogether, just offer their own take on what it ought to be that you can opt in to.
I like Cap’n Proto, but it’s a good thing to stay compatible with the ecosystem that clearly has the more mature debugging tooling.
gRPC is an IDL-based protocol, and like all IDL-based protocols, it relies on communicating parties sharing knowledge of a common schema a priori. That shared schema provides benefits: it reduces a category of runtime risks related to incompatibilities, and it can, sometimes, improve wire performance. That schema also carries costs, chief among them that it requires producers and consumers to share a dependency graph, usually one that’s enforced at build time. That represents a coupling between services. But isn’t one of the main goals of a service-oriented architecture to decouple services?
Over many years, and across many different domains, I’ve consistently found that, for service-to-service communication, informally-specified HTTP/JSON APIs let teams work at very high velocity, carry negligible runtime risk over time, and basically never represent a performance bottleneck in the overall system. Amusingly, I’ve found many counterexamples — cases where gzipped HTTP+JSON APIs significantly outperformed gRPC and/or custom binary protocols.
I’m sure there are situations where gRPC is the right tool for the job! But all my experience suggests it’s a far narrower set of use cases than is commonly understood, almost all in closed software ecosystems. But maybe I’m missing some angle?
I’ve found cases where gRPC works better than gzipped HTTP+JSON, but I have to agree that they’re very limited. Specifically, I worked on a rate limiter with a “frontend” service that talks to a backend that actually keeps track of the counts. The requests were very repetitive (increment in-flight count, decrement in-flight count), and only a few specific fields changed between them. The service received very high request throughput, and its repetitive requests made a gRPC implementation a lot faster (and less compute-heavy, since it avoided any de/compression) than an HTTP+JSON implementation. We ran rigorous tests and found anywhere from 3-20x speedups depending on the type of load we were receiving.
I think gRPC matters more for services that see high scale, but if you’re working at a shop that has a few high-scale services, it may still make sense to standardize on gRPC just so that the high-scale services don’t work completely differently from the rest of the shop. gRPC has a lot less tooling and guidance around it than HTTP+JSON (how many channels to create, how interceptors should work, etc.), so it pays to develop that expertise in-house. When we decided to use gRPC for a few services, it was painful having to learn its ecosystem of debugging and monitoring tools, especially when there were so many easily available tools and well-documented RFCs for HTTP+JSON on the general net.
EDIT: If low latency is important to your service, though, gRPC staves off a lot of the overhead inherent in setting up and transmitting/receiving an HTTPS stream. If latency is of the utmost importance (say you’re building an SDP/VoIP signaling layer), gRPC may be the way to go.
Is this still true with HTTP/2?
It seems like it’s only Go right now?
Also, it would be interesting to learn how this compares to other RPC solutions, like Twirp.
Compared to Twirp, this can also do streaming. They plan to release a TypeScript version in the next few (?) months, and maybe more languages (Python?) later, based on the developer comments on HN and Reddit.