1. 66
    1. 24

      I am confused about why the REST crowd is all over gRPC and the like. I thought the reason REST became a thing was that people didn’t really think RPC protocols were appropriate. Then Google decides to release a binary (no less) RPC protocol, and all of a sudden everyone thinks RPC is what everyone should do. SOAP wasn’t even that long ago. It’s still used out there.

      Could it be just cargo cult? I’ve yet to see a deployment where the protocol is the bottleneck.

      1. 14

        Because a lot of what is called REST ends up as something fairly close to an informal RPC over HTTP in JSON, maybe with an ad-hoc URI call scheme, and with those semantics, actual binary RPC is mostly an improvement.

        (Also, everyone flocks to Go for services and discovers that performant JSON is a surprisingly poor fit for that language.)

      2. 14

        I’d imagine that the hypermedia architectural constraints weren’t actually buying them much. For example, not many folks even do things like cacheability well, never mind building generic hypermedia client applications.

        But a lot of the time the bottleneck is around delivering new functionality. RPC-style interfaces are cheaper to build, as they’re conceptually closer to “just making a function call” (albeit one that can fail halfway through), whereas hypermedia-style interfaces require a bit more planning, or at least thinking in a way that I’ve not often seen.

        1. 10

          There has never been much, if anything at all, that is hypermedia-specific about HTTP. It’s just a simple text-based stateless protocol on top of TCP, and in this day and age that alone buys anyone more than any binary protocol. I cannot see why anyone would want to use a binary protocol over a human-readable (and writeable) text one, except in very rare situations of extreme performance or extreme bandwidth optimisation, which I don’t think are common even among tech giants.

          Virtually every computing device has a TCP/IP stack these days; $2 microcontrollers have it. Text protocols were a luxury in the days when each kilobyte came at a high cost. We are 20-30 years past that time. Today, even in the IoT world, HTTP and MQTT are the go-to choices for virtually everyone; no one bothers to buy into the hassle of an opaque protocol.

          I agree with you, but I think the herd is taking the wrong direction again. My suspicion is that the whole REST hysteria was a success because it meant JSON over HTTP, which are easy-to-grasp, reliable technologies, not because of the alleged architectural advantages, as you well pointed out.

          SOAP does provide “just making a function call”; I think the reason it lost to RESTful APIs was that requests were not easy to assemble without resorting to advanced tooling, and implementations in new programming languages were demanding. I think gRPC suffers from these problems too. It’s all fun and games while developers are hyped “because Google is doing it”; once the hype dies out, I picture an old, embarrassing beast no one wants to touch, along the lines of GWT, App Engine, etc.

          1. 9

            I cannot see why anyone would want to use a binary protocol over a human-readable (and writeable) text one, except in very rare situations of extreme performance or extreme bandwidth optimisation.

            Those are not rare situations, believe me. Binary protocols can be much more efficient in both bandwidth and code complexity. In version 2 of the product I work on, we switched from a REST-based protocol to a binary one and greatly increased performance.

            As for bandwidth, I still remember a major customer doing their own Wireshark analysis of our protocol and asking us to shave some data off the connection setup phase, because they really, really needed the lowest possible bandwidth.

          2. 2

            hypermedia-specific about HTTP

            Sure, but the framing mostly comes from Roy Fielding’s thesis, which compares network architectural styles, and describes one for the web.

            But even then, you have the constraints around uniform access, cacheability and a stateless client, all of which are present in HTTP.

            just a simple text-based stateless protocol

            The protocol might have comparatively few elements, but that has just meant that other folks have had to specify their own semantics on top. For example, header values are (mostly) just byte strings, so in some sense it’s valid to send Content-Length: 50, 53 in a response to a client. Interpreting that and maintaining synchronisation within the protocol is hardly simple.
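
            To make that concrete, here’s a toy sketch (mine, not how any particular server behaves) of two lenient parsers reading that same header line and disagreeing about where the body ends:

            ```go
            // Hedged sketch: two equally "reasonable" readings of a folded
            // Content-Length header. If a proxy takes the first value and the
            // origin server takes the last, they disagree about where the body
            // ends and the connection desynchronises.
            package main

            import (
                "fmt"
                "strings"
            )

            func firstValue(v string) string {
                return strings.TrimSpace(strings.Split(v, ",")[0])
            }

            func lastValue(v string) string {
                parts := strings.Split(v, ",")
                return strings.TrimSpace(parts[len(parts)-1])
            }

            func main() {
                header := "50, 53" // the raw byte string from "Content-Length: 50, 53"
                fmt.Println("parser A frames a body of", firstValue(header), "bytes")
                fmt.Println("parser B frames a body of", lastValue(header), "bytes")
            }
            ```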

            herd is taking the wrong direction again

            I really don’t think that’s a helpful framing. Folks aren’t paid to ship something elegant, they’re paid to ship things that work, so they’ll not want to fuck about too much. And while it might be crude and inelegant, chunking JSON over HTTP achieved precisely that.

            By and large, gRPC succeeded because it lets developers ignore a whole swathe of issues around protocol design. And so far, it’s managed to avoid a lot of the ambiguity and interoperability issues that plagued XML-based mechanisms.

      3. 3

        Cargo Cult/Flavour of the Week/Stockholm Syndrome.

        A good portion of JS-focussed developers seem to act like cats: they’re easily distracted by a new shiny thing. Look at the tooling. Don’t blink, it’ll change before you’ve finished reading about what’s ‘current’. But they also act like lemmings: once the new shiny thing is there, they all want to follow the new shiny thing.

        And then there’s the generic ‘tech’ worker “well, if it works for Google…” approach that has introduced so many unnecessary bullshit complications into mainstream use, and let slide so many egregious actions by said company. It’s basically Stockholm syndrome. Google’s influence is actively bad for the open web and makes development practices more complicated, but (a lot of) developers lap it up like the aforementioned Lemming Cats chasing a saucer of milk that’s thrown off a cliff.

      4. 2

        Partly for sure. It’s true for everything coming out of Google. Of course this also leads to a large userbase and ecosystem.

        However, I personally dislike REST. I don’t think it’s a good interface, and I prefer functions and actions over forcing everything (even if sometimes very elegantly) into modifying a model or resource. But it really depends on the use case: there certainly is standard CRUD stuff where it’s the perfect design, and that’s the most frequent use case!

        However, I was really unhappy when SOAP essentially killed RPC-style interfaces, because it brought problems that are not inherent in RPC interfaces.

        I really liked JSON-RPC as a minimal approach. Sadly it didn’t really pick up (only much later, inside Bitcoin, etc.), which led to lots of ecosystems and designs being built around REST.

        Something that has also been very noticeable, with REST being the de facto standard way of doing APIs, is that oftentimes it’s not really followed. Many, I would say most, REST APIs have very RPC-style parts. There’s also a lot of mixing up of HTTP+JSON with REST, and of RPC with protobufs (or at least some binary format). Sometimes those “mixed”-pattern HTTP interfaces have very good reasons to be the way they are. Sometimes “late” feature additions simply don’t fit into the well-designed REST API, and one would have to break so many rules anyway that it becomes questionable whether the parts worth preserving are worth their cost. But that’s a very specific situation, one that typically only arises years into a project, often triggered by the business side of things.

        I was happy about gRPC because it made people give RPC another shot. At the same time, I’m pretty unhappy about it being unusable for applications where web interfaces need to interact. Yes, there are “gateways” and “proxies”, and while they are probably well designed in one way or another, they come at a huge price that essentially turns them into a big hack, which is also a reason why there are so many gRPC-alikes now. None, as far as I know, has a big ecosystem. Maybe Thrift. And there are many approaches not mentioned in the article, like webrpc.

        Anyway, while I don’t think RPC (and certainly not gRPC) is the answer to everything, I also don’t think RESTful services are, nor GraphQL.

        I really would have liked to see what JSON-RPC would have turned into if it had gotten more traction, because I can imagine it working well for many applications that now use REST. But that’s more a curiosity about an alternate reality.

        So I think that, like all Google projects (Go, TensorFlow, Kubernetes, early Angular, Flutter, …), gRPC has a huge cargo-cult mentality around it. I do, however, think there are quite a lot of people who would have loved to build something like it themselves, if doing so could have guaranteed that it wouldn’t end up with a single person or company using it.

        I also think the cargo cult is partly the reason why contenders aren’t picking up. In cases where I choose RPC over REST, I certainly default to gRPC simply because there’s an ecosystem. I think a competitor would have a chance, though, if it managed a much simpler implementation, which most do.

        1. 1

          I can’t agree more with that comment! I think the RPC approach is fine most of the time. Unfortunately, SOAP, gRPC and GraphQL are too complex. I’d really like to see something like JSON-RPC, with a schema language for defining messages (like the Protobuf or GraphQL IDL), used in more places.
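
          For reference, the JSON-RPC 2.0 envelope really is tiny. Here’s a rough Go sketch of it; the user.get method and its params are made up:

          ```go
          // Minimal sketch of the JSON-RPC 2.0 envelope; the "user.get" method
          // name and its params are hypothetical.
          package main

          import (
              "encoding/json"
              "fmt"
          )

          type Request struct {
              JSONRPC string          `json:"jsonrpc"`
              Method  string          `json:"method"`
              Params  json.RawMessage `json:"params,omitempty"`
              ID      int             `json:"id"`
          }

          type Error struct {
              Code    int    `json:"code"`
              Message string `json:"message"`
          }

          type Response struct {
              JSONRPC string          `json:"jsonrpc"`
              Result  json.RawMessage `json:"result,omitempty"`
              Error   *Error          `json:"error,omitempty"`
              ID      int             `json:"id"`
          }

          func main() {
              req := Request{JSONRPC: "2.0", Method: "user.get", Params: json.RawMessage(`{"id":7}`), ID: 1}
              out, _ := json.Marshal(req)
              fmt.Println(string(out)) // {"jsonrpc":"2.0","method":"user.get","params":{"id":7},"id":1}
          }
          ```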

          1. 2

            Working in a place that uses gRPC quite heavily, the primary advantage of passing protobufs instead of just JSON is that you can encode type information in the request/response. Granted, you’re working with an extremely limited type system derived from golang’s also extremely limited type system, but it’s WONDERFUL to be able to express to your callers that such-and-such field is a User comprised of a string, a uint32, and so forth, rather than having to write application code to validate every field in every endpoint. I would never trade that in for regular JSON again.
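
            To illustrate what I mean (a rough sketch with a made-up User shape, not our actual schema), this is roughly the hand-rolled validation that a generated protobuf type lets you skip:

            ```go
            // Hedged sketch of the per-endpoint validation boilerplate plain JSON
            // forces on you; a protobuf-generated type gives you the string and
            // uint32 fields checked at decode time instead. The User shape is
            // hypothetical.
            package main

            import (
                "encoding/json"
                "errors"
                "fmt"
                "math"
            )

            type User struct {
                Name string
                Age  uint32
            }

            func decodeUser(body []byte) (User, error) {
                var raw map[string]any
                if err := json.Unmarshal(body, &raw); err != nil {
                    return User{}, err
                }
                name, ok := raw["name"].(string)
                if !ok {
                    return User{}, errors.New(`"name" must be a string`)
                }
                age, ok := raw["age"].(float64) // JSON numbers decode as float64
                if !ok || age < 0 || age > math.MaxUint32 || age != math.Trunc(age) {
                    return User{}, errors.New(`"age" must be a uint32`)
                }
                return User{Name: name, Age: uint32(age)}, nil
            }

            func main() {
                u, err := decodeUser([]byte(`{"name": "alice", "age": 42}`))
                fmt.Println(u, err)
            }
            ```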

            1. 1

              Strong typing is definitely nice, but I don’t see how that’s unique to gRPC. Swagger/OpenAPI, JSON Schema, and so forth can express “this field is a User with a string and a uint32” kinds of structures in regular JSON documents, and can drive validators to enforce those rules.
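
              For example, here’s a rough sketch (using gojsonschema as one of several validator libraries; the User schema here is made up) that enforces that shape on a plain JSON document:

              ```go
              // Hedged sketch: a JSON Schema expressing "a User with a string name
              // and a uint32 age", enforced on a plain JSON document with the
              // gojsonschema library. The schema and document are made up.
              package main

              import (
                  "fmt"

                  "github.com/xeipuuv/gojsonschema"
              )

              const userSchema = `{
                "type": "object",
                "required": ["name", "age"],
                "properties": {
                  "name": {"type": "string"},
                  "age":  {"type": "integer", "minimum": 0, "maximum": 4294967295}
                }
              }`

              func main() {
                  schema := gojsonschema.NewStringLoader(userSchema)
                  doc := gojsonschema.NewStringLoader(`{"name": "alice", "age": -1}`)

                  result, err := gojsonschema.Validate(schema, doc)
                  if err != nil {
                      panic(err)
                  }
                  if result.Valid() {
                      fmt.Println("document is valid")
                      return
                  }
                  for _, desc := range result.Errors() {
                      fmt.Println("-", desc)
                  }
              }
              ```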

    2. 8

      This is interesting and I am prolly going to use it in our services. Could you talk more about the underlying protocol?

      Also, aren’t these two sentences contradictory:

      So we rewrote gRPC and migrated our live network. DRPC is a drop-in replacement that handles everything we needed from gRPC

      and

      It’s worth pointing out that DRPC is not the same protocol as gRPC, and DRPC clients cannot speak to gRPC servers and vice versa.

      How is it a drop-in replacement if both client and server need to be changed?

      1. 15

        This is interesting and I am prolly going to use it in our services. Could you talk more about the underlying protocol?

        The wire format used is defined here.

        Logically, a sequence of Packets is sent back and forth over the wire. Each Packet has an enumerated kind, a message id (to order messages within a stream), a stream id (to identify which stream), and a payload. To bound the payload size, Packets are split into Frames, which are marshaled and sent.

        A marshaled Frame has a single header byte, the varint-encoded stream and message ids, a varint-encoded length, and that many bytes of payload. The header byte contains the kind, whether it is the last frame for a Packet, and a control bit reserved for future use (the implementation currently ignores any frame with that bit set).

        Because there’s no multiplexing at this layer, the reader can assume the Frames come in contiguously with non-decreasing ids, limiting the memory and buffer space required to a single Packet. The Frame-writing code is as simple as appending some varints, and the reading code is about 20 lines; neither has any code paths that can panic.
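
        As a rough illustration (simplified, with an assumed bit layout for the header byte rather than the exact one the drpcwire code uses), writing a Frame looks something like this:

        ```go
        // Hedged sketch of the framing described above: one header byte, then
        // varint-encoded stream and message ids, a varint length, and the
        // payload bytes. The header bit layout here is an assumption.
        package main

        import (
            "encoding/binary"
            "fmt"
        )

        type frame struct {
            kind      byte   // enumerated packet kind
            done      bool   // set on the last frame of a packet
            streamID  uint64 // identifies the stream
            messageID uint64 // orders messages within the stream
            payload   []byte
        }

        // appendFrame marshals a frame onto buf and returns the extended buffer.
        func appendFrame(buf []byte, f frame) []byte {
            header := f.kind << 1 // assumed layout: kind in the upper bits,
            if f.done {           // "last frame" flag in the low bit
                header |= 1
            }
            buf = append(buf, header)
            buf = binary.AppendUvarint(buf, f.streamID)
            buf = binary.AppendUvarint(buf, f.messageID)
            buf = binary.AppendUvarint(buf, uint64(len(f.payload)))
            return append(buf, f.payload...)
        }

        func main() {
            wire := appendFrame(nil, frame{kind: 2, done: true, streamID: 1, messageID: 1, payload: []byte("hello")})
            fmt.Printf("% x\n", wire)
        }
        ```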

        How is it a drop-in replacement if both client and server need to be changed?

        There’s a drpcmigrate package that has some helpers to let you serve both DRPC and gRPC clients on the same port by having the DRPC clients send a header that does not collide with anything. You first migrate the servers to do both, and can then migrate the clients.

        The drop-in-replacement part refers to the generated code being source-compatible at the API level, in the sense that you can register any gRPC server implementation with the DRPC libraries with no changes required. The semantics of the streams and so on are designed to match.

        Sorry if this was unclear. There are multiple dimensions of compatibility, and it’s hard to talk about them clearly.

        1. 8

          (Not the one who asked.) Thank you for the comment. Do you plan to document the protocol somewhere other than in the code? It seems to me that an RPC protocol needs a healthy ecosystem of implementations in many languages to be viable long term :-)

          (edit: I mean document it cleanly, with request/reply behavior, and talk about the various transports that are supported.)

          1. 6

            You are absolutely right that having good documentation is essential, and I’ve created a GitHub issue to keep track of the most important bits I can think of off the top of my head. To be honest, I’ve always struggled a bit with documentation, but I’m trying to work on it. In fact, the current CI system will fail if any exported symbol is missing a doc string. :) Thanks for bringing this up.

        2. 2

          Thanks for the explanation! Are there any plans for a Python client/server?

          1. 4

            I do hope to get more languages supported, and I’ve created an issue to add Python support. I don’t currently work with any Python code, so help will probably be needed to make sure it “fits” correctly with existing codebases.

            1. 1

              I have started looking into the drpcwire code; I will wait for the documentation. Meanwhile, is there any place, like IRC or Discord, where drpc/storj developers hang out? The Python issue is tagged help-wanted, so can I be of any help?

    3. 5

      I was just about to ask whether it supported passing metadata, but I decided to RTFC instead, and lo and behold, it supports metadata via context.Context. Very nice.

    4. 3

      The worst part of gRPC is its crappy protobuf notation. This tool doesn’t address any of that.

      I’m wondering why I got banned when I tried to promote another RPC tool with a throwaway account.

      1. 24

        I’m wondering why I got banned when I tried to promote another RPC tool with a throwaway account.

        Sockpuppeting on Lobsters is heavily frowned upon.

      2. 13

        promote … with a throwaway account.

        That. Don’t use throwaway accounts and shill projects.

      3. 4

        Do you mean the service definitions being .proto files? If so: DRPC has a very modular design, and all of the functionality is implemented in terms of only the interfaces defined in the main module. Because of that, you can effectively write the generated glue by hand, or generate it some other way. Here’s an example of that: https://play.golang.org/p/JQcS2A9S8QX

      4. 1

        I’m wondering why I got banned when I tried to promote another RPC tool with a throwaway account.

        Which tool were you trying to promote?

        1. 6

          He got banned again, so we may never know.

          1. 13

            Who thinks it’s a good idea to tell everybody they’re a spammer?

            1. 2

              Growth hackers.

        2. 4

          The moderation log answers your question, fwiw.

        3. 4

          You can check the moderation log for timestamp 2021-04-17 12:40 -0500

        4. 1

          Click through to their username, which includes a ban reason.

    5. 2

      I personally hate the implementation and the generated code of gRPC Java in particular. My problem with the whole philosophy is that gRPC tries to hide the fact that it’s built on top of HTTP/2. Accessing headers, the capability to modify headers/trailers, and the weird way of attaching context are all telltale signs that this part was not thought through and that these protocol-level concepts were afterthoughts.

      1. 2

        Your problem with the whole philosophy seems strangely unfounded. I’m not a gRPC apologist by any stretch, but it seems strange to state (much less to base your entire disagreement on the idea) that gRPC “tries to hide the fact that it’s built on top of HTTP/2” when that fact is not just clearly stated up front but described in great depth.

        1. 3

          “Hide” as in abstract away from the developer, not as in hide the fact from people’s awareness. But yeah, being an RPC protocol, ideally everything should be abstracted away.

          What is confusing is: what exactly is the advantage for Java developers compared to RMI? The fact that one is 25 years old and the subject of mockery by fellow developers, while the other is relatively hip?

          I frankly find the gRPC philosophy extremely flawed, if not completely nonexistent.

          1. 2

            If you are in a Java-only environment, RMI might be just fine for you, but the point of any RPC framework is to make remote procedure calls independent of your language, so that you can add new consumers and services written in other languages down the line. The reality is that projects often live longer than expected, requirements change, and a new project with a new language might help you solve the new problems; that’s where RMI and other ecosystem-bound RPC fall short, and where gRPC, Thrift & Co. help.

            1. 2

              Well, the same argument applies to this one (DRPC) then, because it has only a single (Go) implementation and no specification docs.

    6. 1

      I, FWIW, am personally biased towards IETF XDR. It provides the minimal thing required to exchange data between nodes separated by a network; everything else is up to the endpoints.

      Frankly, the idea of making a procedure call over the network horrifies me :) It just hides so many failure modes…