1. 78
  1.  

  2. 39

    This sadly reminds me of one of the basic laws of what somebody once described to me as “software thermodynamics”:

    Any sufficiently replaceable system will eventually be supplanted by a system too large to remove.

    1. 10

      plus

      Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can. (Zawinski’s law)

      1. 2

        Hm, that reminds me of bash too :-/

        It accreted so many features from other shells, and so many programs, build systems, and autocompletion scripts grew to depend on its features, that it takes tremendous effort to remove…

      2. 28

        gRPC is Protocol Buffers running over HTTP/2. It’s got a “g” at the beginning of the name to remind you that the only time it’s acceptable to use is when you are actually working for Google inside a Google-owned building eating Google-branded food and breathing Google-branded air.

        Have worked with protobufs, can confirm this is true.

        1. 5

          What’s so bad about protobufs?

          1. 20

            The code generated is ridiculously complex for what it does. The implementation is ridiculously complex for what it does. The generated code in all languages except C++ (and maybe Java) is unidiomatic, verbose, and fits poorly with the code written in the language. The proto3 generated code is a regression over proto2, since you can no longer tell if values are present or not: all fields are optional, but you can’t tell whether they were filled in unless you wrap each one of them in its own message.

            And then there’s gRPC, which takes this complexity to a new level, and adds lots of edge cases and strange failure modes because of the complexity.

            And to top it off, while they’re a bit faster than JSON, they’re pretty slow for a binary protocol.

            1. 6

              Protobufs certainly has its dusty corners, but there is a rationale for dropping required fields.

              1. 5

                My complaint wasn’t about dropping required fields. I agree with that: Required is a pain for compatibility. My complaint was that they broke optional on top of that.

                message Foo {
                    int32 x = 1;
                }
                

                In proto2, you could check if x was set:

                if(foo.has_x()) { use(foo.get_x()) }
                

                In proto3, there’s no has_x(), so an unset x is indistinguishable from x=0. You need to write:

                message Integer {
                    int32 val = 1;
                }
                
                message Foo {
                    Integer x = 1;
                }
                

                And then check:

                if(foo.get_x() != null) { use(foo.get_x().get_val()) }
                

                Note that in addition to being just plain clunky and introducing the potential to forget to set ‘val’ within the wrapper message, it’s inefficient – in languages with value types, like C++, Rust, Go, …, you’re now adding an extra allocation.

                1. 1

                  That does seem annoying, but they may be re-adding optional in version 3.13.

                  1. 2

                    Which is kind of telling…

            2. 9

              The footnote links to https://reasonablypolymorphic.com/blog/protos-are-wrong/index.html which goes into that.

              1. 5

                Kenton Varda’s response to this rant is worth reading.

                1. 9

                  I stopped reading at

                  This article appears to be written by a programming language design theorist who, unfortunately, does not understand (or, perhaps, does not value) practical software engineering.

                  Typical Googler stuff.


                  The comment in the original article is so on point:

                  I now consider it to be a serious negative on someone’s resume to have worked at Google.

                  1. 6

                    While it is often perfectly valid to opt for a solution which works over one which is elegant, I get the impression that words like “pragmatic” are increasingly being used as an anti-intellectual “excuse” for not doing something properly and ignoring well-studied solutions, simply because they “weren’t invented here”, or are proposed by people who the developer disagrees with or simply doesn’t associate with.

                    1. 3

                      Yep.

                      I come from an environment where “pragmatic” is only used sarcastically, and that’s honestly quite refreshing.

                      If someone says “the software is pragmatic”, I assume it’s buggy as hell.

                2. 4

                  This article is a typical FP-hardliner complaint that something isn’t “correct enough” because it doesn’t use a Haskell-like type system. The last section is kind of good, though.

            3. 11

              Why not fork etcd using the last commit before the gRPC changes? Surely, if this author is right, there is still a market for a simple, easy-to-use, consensus-driven database with an HTTP API.

              1. 11

                On the one hand, sure, decent idea, but on the other hand, I think the point of this article is that there’s much more of a market for complex, overengineered solutions which stroke your ego by telling you that your situation is just like Google’s and that it justifies putting up with a great deal of tedium.

                Following the market is how we got into this mess in the first place.

                1. 5

                  I believe that the ecosystem is getting the behavior it incentivizes. :)

                  It seems that there is a lot more money to be made playing ball with this than there is in actually engineering things efficiently.

                  1. 7

                    Full agreement except I’d replace “money to be made” with “money to be milked from credulous investors”

                    1. 11

                      My hot take is basically that investors are going to wise up to this when remote work becomes more common and when some sufficiently large chunk of tech gets serious about unionizing. That day there will be a reckoning and our compensation will be readjusted to be more in-line with other white-collar (or possibly even blue-collar) professions.

                      So, make hay while the sun shines and try to think of responses to the future generations who are annoyed that they can’t make as much money as we did.

                      1. 3

                        “You had to be there, yo.”

                        1. 1

                          I think that thought is deserving of a bit more than a ‘hot take’. Our entire industry seems to be marrying increased expectations to diminishing returns.

                      2. 1

                        It seems that there is a lot more money to be made playing ball with this than there is in actually engineering things efficiently.

                        “Here’s a pile of dirt, build your own solution.” – “But, I don’t have the tools for this…” – “That’s OK… I’m building a new shovel that will allow you to dig new holes to shovel dirt into.”

                        The market cap in complex tools is unstoppable.

                      3. 1

                        A simple solution can usually only cover a limited set of use cases. A more complex solution can often cover a wider range of use cases, and while it may arguably be more difficult (or “worse”) for various simpler use cases, it rarely makes them impossible. In that sense, a complex solution is “better”. I think this explains most of the drift towards complexity, and you see this in many projects not necessarily because of the market or whatnot, but just because a project wants to be useful for many cases. It can be kind of a difficult trade-off to make.

                        At any rate, I don’t think that a complicated etcd really takes away anything from a “simple etcd”. Like apg said, just fork the old etcd or some such and let etcd be etcd.

                      4. 2

                        if the public API has changed, then for every client or consumer you want to use, you potentially have to fork that too, and now you have to maintain it. it’s not really the same thing if you have to give up the ecosystem, and there’s probably a lot in the ecosystem that expects the newer stuff.

                        1. 1

                          I am pretty sure curl was an acceptable etcd client back in the day.

                          1. 1

                            sure but at that rate why bother with forking an old version of etcd at all? if you don’t care about the ecosystem at all you might as well just build your own thing at that point.

                            1. 3

                              Because it solved the hard part (consensus). Why would you build that again and again?

                              1. 1

                                The hard part was figuring out Raft. That’s been figured out. You don’t have to start over from Lamport’s papers and figure out Raft from scratch. You can import etcd’s Raft implementation as a package if that’s all you want out of etcd. Indeed, many projects do exactly that. That’s a totally fine conclusion.
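
                                For reference, a minimal sketch of what that looks like – the import path is an assumption that varies by etcd version (go.etcd.io/etcd/raft/v3 around etcd 3.5), and the config numbers are just placeholder tuning values:

                                package main
                                import "go.etcd.io/etcd/raft/v3"
                                func main() {
                                    // In-memory log storage; real applications persist entries themselves.
                                    storage := raft.NewMemoryStorage()
                                    node := raft.StartNode(&raft.Config{
                                        ID:              0x01,
                                        ElectionTick:    10,
                                        HeartbeatTick:   1,
                                        Storage:         storage,
                                        MaxSizePerMsg:   4096,
                                        MaxInflightMsgs: 256,
                                    }, []raft.Peer{{ID: 0x01}})
                                    defer node.Stop()
                                    // From here the application drives the state machine itself: call
                                    // node.Tick() on a timer, read node.Ready() to persist entries and
                                    // send messages to peers, apply committed entries, then node.Advance().
                                }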

                                A lot of the value of using something that’s widely used like etcd is that when someone new comes along and you say “we use etcd”, they can say “I’m familiar with its features and with building things with etcd” or “I have a good library that uses that” or “I built a tool for that” and get up and running quickly. But, no, surprise: whether their knowledge or techniques or libraries or tools work on your forked version depends on whether or not they worked for the version that you’ve decided is “good”. If all of those tools still work on your forked version and this isn’t a problem, then the argument for forking is pretty weak because the old APIs that you like are still supported.

                                1. 1

                                  Then we’re not disagreeing I think. Of course etcd uses Raft, that’s what I meant. It does what it does and uses a well-known working implementation for consensus.

                                  I just don’t see a point in reinventing 90% of etcd using Raft as well AND having a different api. Unless it’s so different it brings something new to the table.

                                  1. 1

                                    Just a nitpick: Paxos was invented by Lamport. Raft was invented as an alternative to Paxos by Ongaro and Ousterhout (Stanford University): https://raft.github.io/raft.pdf

                                    1. 1

                                      yes, I know, that’s … the same paper that I linked. When I say “figure out raft” I didn’t mean “implement raft from the whitepaper”, I meant “author the original raft whitepaper”. I can see how my original statement was kinda ambiguous though.

                                      1. 1

                                        ah, sorry, should’ve read it more closely. I agree with you, tho – the hard part was Raft, that was done elsewhere. One can now add any functionality to it (which, ironically, is what the original post complains about). However, lately I’ve been thinking that specialized/custom software wins over any general, popular software, just for the sake of simplicity and understanding.

                                        1. 1

                                          have you read Fast key-value stores: An idea whose time has come and gone? It basically argues that point, it’s a really good read. I work on a stateful server at work that keeps its state in memory and replicates it to other nodes in its cluster with CRDTs, it’s a lot of fun and it works! But also it’s a multiplayer game server so I don’t really have to persist anything which makes the problem a lot easier.
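
                                          For the curious: the trick with CRDTs is that merge is commutative, associative, and idempotent, so replicas converge no matter how (or how often) they exchange state. A toy grow-only counter in Go – purely illustrative, not the actual game-server code – looks like this:

                                          // Grow-only counter: each node increments its own slot; merging
                                          // takes the element-wise max, so applying merges in any order
                                          // (or repeatedly) yields the same result.
                                          type GCounter struct {
                                              Counts map[string]uint64
                                          }

                                          func NewGCounter() *GCounter { return &GCounter{Counts: map[string]uint64{}} }

                                          func (c *GCounter) Inc(node string) { c.Counts[node]++ }

                                          func (c *GCounter) Merge(other *GCounter) {
                                              for node, n := range other.Counts {
                                                  if n > c.Counts[node] {
                                                      c.Counts[node] = n
                                                  }
                                              }
                                          }

                                          func (c *GCounter) Value() (total uint64) {
                                              for _, n := range c.Counts {
                                                  total += n
                                              }
                                              return
                                          }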

                                2. 2

                                  if you don’t care about the ecosystem at all you might as well just build your own thing at that point.

                                  The author was happy with etcd before they went and made it all complicated. No reason to make something new. Reuse what was previously good.

                            2. 1

                              Then I will instead go with Consul which, while being a little bit more than etcd, keeps a simple HTTP API.

                              1. 0

                                Why even do that?

                                etcd still has an http api
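
                                e.g. via the JSON/gRPC-gateway – a rough sketch against a local etcd, where the /v3/ path prefix and the 2379 port are assumptions that differ across versions, and keys/values are base64-encoded in the JSON API:

                                package main
                                import (
                                    "fmt"
                                    "io"
                                    "net/http"
                                    "strings"
                                )
                                func main() {
                                    // PUT foo=bar through the HTTP gateway ("foo" -> "Zm9v", "bar" -> "YmFy").
                                    putResp, err := http.Post("http://127.0.0.1:2379/v3/kv/put", "application/json",
                                        strings.NewReader(`{"key": "Zm9v", "value": "YmFy"}`))
                                    if err != nil {
                                        panic(err)
                                    }
                                    putResp.Body.Close()
                                    // Read it back with a range request.
                                    resp, err := http.Post("http://127.0.0.1:2379/v3/kv/range", "application/json",
                                        strings.NewReader(`{"key": "Zm9v"}`))
                                    if err != nil {
                                        panic(err)
                                    }
                                    defer resp.Body.Close()
                                    body, _ := io.ReadAll(resp.Body)
                                    fmt.Println(string(body))
                                }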

                              2. 9

                                If you are running a truly enormous system and want to have off-the-shelf orchestration for it, Kubernetes may be the tool for you. For 99.9% of people out there, it’s just an extra layer of complexity that adds almost nothing of value.

                                In the industry, we call this Job Security

                                1. 9

                                  Upvoted because it’s interesting, and because I think there’s a lot of truth in this piece. However, I found a lot of the assertions extreme. To pick a random example: it seems silly to say that HTTP/3 is “strictly worse” for everybody except megacorps, or that no one cares about TCP head-of-line blocking. HTTP/3 solves actual performance problems on the web.

                                  Now, I think it would be completely fair to say that the web isn’t the only consumer of HTTP (although, I mean, HTTP/1.1 is still available right there if you need it), or to argue that the costs of HTTP/3 outweigh the benefits. But that’s not what the author said, and it undermines their point which, again, I mostly agree with.

                                  1. 22

                                    The question is: which users actually have those problems? Which users are harmed by complicated specs that favor particular orgs, and which users benefit from shaving percents off of their server farms?

                                    It’s important to remember that a lot of the “problems” that get talked about in tech have important context.

                                    1. 10

                                      Again, I mostly agree. I’m not really criticizing the point that the article made, just that I wish it had made the point in a more honest/balanced way. The hardline no-nuance stance the article takes is a disservice to the underlying point that you’re referring to. It’s much better to admit there are upsides and then say “the downsides still outweigh the upsides”.

                                      1. 3

                                        That’s a totally reasonable take, well put!

                                      2. 5

                                        Can’t speak about HTTP/3, but just as a side note, QUIC in my experience is really nice to write protocols for. It has lightweight streams that can be reliable or unreliable, and are multiplexed over a single real connection for you. This takes out a large amount of the work of defining framing and messaging, and gives you more flexibility than TCP. Maybe if we had QUIC 20 years ago we wouldn’t be shoving everything over HTTP as the easy messaging option.

                                        1. 2

                                          Realistically speaking, users and developers aren’t harmed by complicated specs at all, since they are protected by libraries. Complicated specs only harm library developers. It’s not like people implement HTTP/1.1 themselves. (Yes it is valuable you can do HTTP/1.1 yourself when needed, but that is not a normal scenario.)

                                          Also, while this does not apply to HTTP/3, HTTP/2 is easier to handle correctly than HTTP/1.1: binary protocol vs. text protocol. A text protocol is easier to experiment with, but a fixed-size binary protocol is overall better for production.

                                          1. 7

                                            Complex specs do harm users because it results in less choice in the ecosystem. Every feature like this could be valuable in itself, but increases the barrier to developing another browser, server, etc. which results in less diversity. Not all libraries can be used everywhere, and even when they can be shoe-horned in, the application may suffer in reliability or performance.

                                            On the other hand, relatively simple specs like JSON or HTML give a lot of choice (HTML can at least be partially implemented in a productive way).

                                            1. 7

                                              Complicated specs only harm library developers.

                                              That kind of harm tends to “trickle down” to developers who use said libraries, and then to their users. Complicated specs result in libraries that are slow to write, difficult to test, and very large and opaque. Even when they’re open source, it’s very hard (and usually impossible) for anyone except the people working on them full-time to add fixes or new features. It’s not like library developers have magical immunity to the perils of complicated specs.

                                              1. 3

                                                This is not always the case. TCP is a complicated spec: the original may not be, but TCP-as-currently-used certainly is. But TCP users are protected by the socket API, and TCP’s complexity mostly does not trickle down to its users. Good abstractions are possible, and implementation complexity DOES NOT imply interface complexity. I detest the widespread attitude against implementation complexity.
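
                                                To make it concrete, this is more or less the whole surface a TCP user touches, regardless of how much machinery (handshakes, retransmission, congestion control, …) runs underneath – sketched in Go with example.com as a placeholder host, though the C socket API is barely bigger:

                                                package main
                                                import (
                                                    "fmt"
                                                    "net"
                                                )
                                                func main() {
                                                    // Dial/Write/Read/Close is the entire interface; everything
                                                    // TCP actually does stays hidden behind it.
                                                    conn, err := net.Dial("tcp", "example.com:80")
                                                    if err != nil {
                                                        panic(err)
                                                    }
                                                    defer conn.Close()
                                                    fmt.Fprint(conn, "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
                                                    buf := make([]byte, 4096)
                                                    n, _ := conn.Read(buf)
                                                    fmt.Printf("%s", buf[:n])
                                                }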

                                                1. 4

                                                  I don’t mean that complexity trickles down to its users, I mean that the bugs, slow improvement pace and/or frequent patching – all inherent to anything that’s difficult to implement – trickle down. Libraries don’t just pop into existence, somebody has to write them, and the people who write them are just as susceptible to making mistakes as other programmers. The more complex the specification, the more likely it is that it will have mistakes.

                                                  (Edit:) having spent quite a few years dealing with… not the best libraries, I think this is a pretty big deal. When something breaks in an application, you can’t just tell users that, uh, it’s not us, it’s a library we’re using that has some bugs and, uh, you know what, here’s a link to their bug tracker, go talk to them. If it’s your firmware that crashes, it’s your bug to fix, no one cares that it’s not in any code that you’ve written. The more complex the library is, the harder it is to fix, it doesn’t matter how well-written it is (although a poorly-written library definitely adds its own difficulty to that of the spec). But it’s not magical armor. Libraries have bugs, just like anything else.

                                              2. 6

                                                In my career, the devs I rate the most are those that understand the entire stack top to bottom on a deep level.

                                                People that effortlessly fire up Wireshark or strace to get to the bottom of some strange bug.

                                                Even someone whose entire work life revolves around React will at some point need to interact with a backend. If they don’t know what’s actually going on, the most trivial problem can get them stuck. Or worse, make bad unscalable arch decisions because it appears to work fine in their dev setup.

                                                Libraries speed up development, but they mustn’t replace understanding.

                                          2. 3

                                            I’ve noticed that projects that come out of Google are crazy complicated.

                                            Example: Angular. Dear Zeus, who designed this nightmare? RxJS and observables everywhere (and with TypeScript in the mix, you end up fighting the transpiler as an added bonus when you do anything slightly complex). Inability to build shareable libraries. Two separate form modules. A complicated component lifecycle. Crazy build times. The bizarre parameters they favor over query strings, on and on.

                                            The cherry on top is the awful docs and the issue that the showcase site for Angular https://angular.io/docs regularly fails to load properly.

                                            Forget turtles, it’s clown shoes all the way down.