1. 25
  1.  

  2. 14

    I’m trying to think of protocols or standards that have been implemented by many parties, even competitors, and that have been regularly updated and improved over years or decades, and I’m drawing a blank. I’m curious to find strategies that have worked. It feels like a lot of “successful” standards are immediately trapped into stagnation by their popularity. Even standards designed with extensibility in mind inevitably end up needing some vital change that’s backwards incompatible, and so the next version ends up stuffing everything into a comment (early inline js), tolerating invalid data (unknown html tags used to add stylesheet links), inventing special constructions (docstrings, js pragmas), etc. The alternative seems to be forming a committee and drafting an RFC or other standard that is considered fast-moving if a new version comes out once per decade (and that’s if committee members don’t destroy it by using it as a stalking horse or proxy war).

    HTML has been a nightmare. IP stalled for decades at IPv4. DNS stalled for longer. CSS had a rough start and still moves in fits and starts, but the “core and modules” system of CSS3 (and “CSS4”) has done well over the last decade. Email has deliberate extensibility in its headers and protocols… but it took about 15 years after basic client-server encryption became an obvious necessity for it to become ubiquitous, end-to-end encryption will probably never happen, and IMAP makes me miss the 90s browser wars. Jabber replaced a mess of proprietary protocols for a few years before fracturing back into walled gardens.

    I feel like I’m missing something obvious, or am totally ignorant of some industrial CNC standard or avionics protocol or something. Can someone point out a good example of a long-lived protocol with regular improvement? Or maybe my scale is wrong: it takes 5+ decades to replace hardware standards (adding grounding pins to the NEMA electrical outlet standard), so maybe I should be thrilled it only takes 1 or 2 decades in software.

    1. 20

      I think the problem is that you’re thinking about this the wrong way, and that a lot of other people are too.

      The fundamental question is: what is the benefit of a protocol or standard that constantly improves?

      I think an answer to this is “not a hell of a lot”.

      IPv4 has worked, and worked well enough, for decades. IPv6 has failed in a lot of ways because people kept piling on shit to make it spiffier and more academic and awesome, and in so doing kept it from ever being easy to roll out or from ever being quite finished. Likewise, something isn’t “stalled” if it is continuing to deliver value.

      HTML isn’t that gross, especially once CSS came out. It’s as good as it ever was for displaying documents, and it’s a reasonable approximation of a 2D scenegraph with automatic layout capabilities. Certain implementations were terrible, but that’s the fault of the vendors, not of HTML.

      The main takeaway here is that both worked, and worked well enough, and it was worth more to freeze them than to keep updating them. Protocols are centered around conversations, and if the subject matter of a conversation doesn’t change (e.g., how to send and receive byte buffers with KV metadata, as in HTTP) there is no reason to continually bolt on things that are outside of that.
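
      To make that framing concrete, here’s a minimal sketch (Python standard library, with a placeholder host and path, nothing from the thread) of what such a conversation amounts to: key-value metadata wrapped around a byte buffer in each direction.

      ```python
      # Sketch of the "conversation": KV metadata (headers) around a byte
      # buffer (the body), in both directions. Host and path are placeholders.
      import http.client

      conn = http.client.HTTPSConnection("example.com")
      body = b"hello, bytes"
      conn.request("POST", "/upload", body=body, headers={
          "Content-Type": "application/octet-stream",   # the KV metadata
      })
      resp = conn.getresponse()
      print(resp.status, dict(resp.getheaders()))        # more KV metadata
      payload = resp.read()                              # another byte buffer
      conn.close()
      ```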

      1. 8

        Thank you, I appreciate this response. To unpack what I meant by “stalled” in the case of CSS: at points, obvious “next features” that people wanted had to be kludged around for years (flexbox addressed most of the missing layout/grid features), and support was painfully uneven, especially for the first ten years or so.

        1. 2

          Ah, thank you for your clarification!

      2. [Comment removed by author]

        1. 9

          WiFi too. Lots of vendors, lots of versions over time (802.11: a, b, g, n, ac).

          1. 3

            Ethernet and WiFi are really good examples.

            Maybe hardware standards (like Ethernet and WiFi) make progress faster than software standards (like IP, DNS, HTML and CSS) because hardware vendors have a strong incentive to converge and agree on new features and specifications: they need them to convince customers to buy their new products?

            I think big players like Google, Facebook, Twitter, Microsoft and others agreed on HTTP 2.0 because it’s a net win for them in terms of network/server usage and user experience.

          2. 10

            USB is a reasonably good example of standards that are well thought-out and long-lived, and yet manage to productively evolve, often with impressive backwards compatibility.

            The set of OS semantics we broadly call Unix has lasted a long time

            PostScript

            The versioning schtick of TeX and Metafont is aimed at answering this question; whether they can be called a big success is a different question, though.

            …?

            1. 4

              > IP stalled for decades at IPv4

              I think that’s more a case of “if it ain’t broke…”. TCP has had extensions, options, and ongoing development. IPv6 was standardised long in advance of its actual need (maybe that’s the problem…).

              > Email has deliberate extensibility in headers and protocols…

              These were retrofitted in a backwards-compatible way: RFC 821 doesn’t know about EHLO, and RFC 822 doesn’t know about MIME or charsets in headers.
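
              A concrete example of that retrofit is RFC 2047 “encoded words”, which wrap non-ASCII header text in plain ASCII so that RFC 822-era software passes it through unchanged (smtplib’s ehlo_or_helo_if_needed() does the analogous dance for EHLO vs. HELO). A quick sketch with Python’s standard library:

              ```python
              # RFC 2047 encoded-words: non-ASCII header values are wrapped in ASCII
              # (=?charset?encoding?payload?=) so RFC 822-only software passes them through.
              from email.header import Header, decode_header

              subject = Header("prøtøcøl news", charset="utf-8")
              wire = subject.encode()        # pure ASCII on the wire, e.g. '=?utf-8?b?...?='
              print(wire)
              print(decode_header(wire))     # -> [(decoded_bytes, 'utf-8')]
              ```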

              > IMAP makes me miss the 90s browser wars.

              Interesting. The protocol is opinionated, but I’ve not followed recent developments - what’s the problem here?

              > […email…] end-to-end encryption will probably never happen

              S/MIME and PGP have been standardised for a long time. I think that’s not a protocol failure but an incentive/commercial/UX failure. (One can argue that the protocol forces poor UX, which is perhaps fair but I’m not sure I understand that well enough).

              On balance, I’d say the RFC approach has worked well. I don’t know how healthy the current IETF RFC system is, but in the past lots of people put in the effort to build interoperable systems that could run at “internet scale”.

              I actually think the problem is that once Google search demonstrated you can scale a “single website” to “internet scale”, the assumption that you need to implement scalable, interoperable protocols to do big things on the internet was broken, perhaps reducing the incentive for and importance of standardisation efforts.

              1. 2

                SCSI

                1. 1

                  We should just stick with Gopher

                  1. 1

                    USB

                    SCSI

                    SAS

                    ATA

                    PCI bus

                    x86 instruction set

                    C and C++ languages

                    POSIX

                  2. 13

                    Full disclosure: I work on finagle, and in particular on its http2 implementation, so I have been thinking a decent amount about protocols and http, and I am probably more caremad than I normally am. Also I’m sick, so I’m grumpier than usual.

                    So the title is really a MacGuffin; it has nothing to do with the rest of the article. It’s also senseless. The original statement about Lisp was interesting because it compared programming languages to other programming languages, and in particular it said something about metaprogramming. Making the same statement about APIs and HTTP is nonsense, because HTTP sits somewhere between the session layer and the application layer, and the APIs that the author is talking about are all built on top of HTTP. Really, the author’s complaint is that some people aren’t using obscure features of the spec, not that everyone reimplements streaming or headers.

                    More generally, HTTP is a tool, and it should be used to the extent that it’s useful. If the authors didn’t find content-range queries useful, or if they didn’t know about them, who cares? They were able to solve their business problem. The reason we follow specs is compatibility with other implementations, not because we will be rewarded for following the spec. In particular, resumable uploads and downloads are tricky to get right: if you control both sides of an upload or a download, then who cares how closely you adhere to the spec? The parts of a protocol that you should obey are the ones that the implementations you must interoperate with care about. For many, that means things like apache, nginx, okhttp, or web browsers.
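
                    For what it’s worth, the interoperable core of resumable downloads is small. Here’s a rough sketch (Python standard library, placeholder URL) of a client resuming from a byte offset, assuming nothing about the server beyond whether it answers 206 or 200:

                    ```python
                    # Resume a download from a byte offset using an HTTP Range request.
                    # A server that ignores ranges replies 200 with the full body instead.
                    import urllib.request

                    def resume(url, have_bytes):
                        req = urllib.request.Request(url, headers={"Range": f"bytes={have_bytes}-"})
                        with urllib.request.urlopen(req) as resp:
                            if resp.status == 206:           # Partial Content: range honoured
                                return resp.read()           # just the missing tail
                            return resp.read()[have_bytes:]  # full 200 response; slice locally

                    # tail = resume("https://example.com/big.iso", have_bytes=1_048_576)
                    ```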

                    I don’t think exhorting people to obey a protocol spec is worth anyone’s time. If it solves your problem, great! Clearly all of these companies have done quite fine without these features, and haven’t missed out on any browser performance improvements they would have gained by following the spec. In fact, the reason no one uses a given feature might be that some browser implementation handles it poorly, and it’s better to use a more tried-and-true method, one that a browser won’t try to handle specially.

                    The author asks, “why didn’t you just extend HTTP?” But the resumable uploads the author complains about are an extension of HTTP, so it becomes even less clear what the complaint is.

                    TL;DR: the spec is only as useful as it is a document for uniting implementations. If most implementations don’t care about a use case, or handle it poorly, then the spec is useless there and you should just do your own thing.

                    1. 3

                      Everything about this article annoys me: the over-the-top sarcastic tone, the 5 lines of blank space between each line of text, the one- and two-sentence paragraphs. The author doesn’t understand the original quote, and the comparison doesn’t work (nobody reimplements HTTP; they build on it). If he’s going to complain about this, he should really complain that people are even using a stateless protocol like HTTP for things that need state.

                      1. 2

                        There are a number of people who swear up and down that HTTP 1.1 is a dead, failed technology, but who seldom consider how much mileage they could get out of it if they actually took the time to understand it. People just aren’t keen on reading these days.

                        1. 2

                          At least one HTTP storage provider, humyo.com (https://en.wikipedia.org/wiki/Humyo) (disclaimer: I used to work there), did just this. Access to the platform was over WebDAV (GET/DELETE/PUT/MKCOL/PROPFIND/COPY/MOVE). We didn’t support locking, and we supported only as much of PROPFIND as we needed.

                          There were private extensions to effectively do rsync-over-HTTP for efficient desktop sync of changing files, but the core access to the platform (by both the web UI and the desktop client) was WebDAV. We also supported access via third-party WebDAV clients.

                          Ranged GET and PUT were both supported (and used for upload resumption and by smart-enough video players).
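
                          For anyone wondering what that looks like from the client side: roughly the sketch below. This is a hypothetical client against a server that, like the one described above, accepts Content-Range on PUT as a private extension; core HTTP doesn’t define partial PUTs, so it only works when both ends agree (host and path are placeholders).

                          ```python
                          # Hypothetical resumable upload against a server that accepts
                          # Content-Range on PUT as a private extension (core HTTP does not
                          # define partial PUTs, so this only works when both ends agree).
                          import http.client

                          def put_chunk(host, path, chunk, offset, total):
                              conn = http.client.HTTPSConnection(host)
                              end = offset + len(chunk) - 1
                              conn.request("PUT", path, body=chunk, headers={
                                  "Content-Range": f"bytes {offset}-{end}/{total}",
                              })
                              status = conn.getresponse().status
                              conn.close()
                              return status

                          # put_chunk("dav.example.com", "/f/video.mp4", b"...", 0, 10_000_000)
                          ```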

                          1. 2

                            Excellent, excellent article. Leonard Richardson gave a talk along similar lines 8 years ago called Justice Will Take Us Millions Of Intricate Moves, in which he makes the important point that “the web” is really three technologies, all working together for you: HTML, HTTP, and URIs. He worked on Launchpad, which I think of as a sort of GitHub also-ran; fortunately, the designers of GitHub’s API also deeply get HTTP and use it to its fullest advantage. Best API, best protocol.

                            1. 1

                              How many clients support range requests in PUTs? If I’m writing a Dropbox client in each of Ruby, Python, Clojure, Erlang, Perl, and Rust, how many would allow me to interface with this extended PUT syntax?

                              1. 1

                                I don’t know about the other languages, but I think the Common Lisp HTTP library drakma can support it; I haven’t actually tried it, though.