I think a big part of what happened was that the “pop culture” version of REST turns out to be really useful. Just trying to use HTTP methods in a reasonable way, exploit caching, and use the basics of response codes (no status 200 error codes…) delivers a ton of benefits: compared to not doing it, it’s “giving ice water to people in hell”.
So you want a word to identify these rather basic but important features, and that word tends to crowd out the much harder ideas whose impact is less obvious.
In hindsight, I may have just repeated tef’s point. I am not 100% sure, because I find his writing hard to distill into a capsule version.
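For whatever it’s worth, here is a minimal sketch of that “pop culture” checklist, assuming Flask and a made-up /orders resource (none of this is from the article; it’s just to make the basics concrete):

```python
# A hypothetical toy service: sensible methods, real status codes, cacheable GETs.
from flask import Flask, jsonify, request

app = Flask(__name__)
ORDERS = {"42": {"id": "42", "status": "shipped"}}

@app.get("/orders/<order_id>")
def get_order(order_id):
    order = ORDERS.get(order_id)
    if order is None:
        # A real 404, not a 200 with an error payload.
        return jsonify(error="order not found"), 404
    # A safe, idempotent GET that intermediaries are allowed to cache.
    return jsonify(order), 200, {"Cache-Control": "public, max-age=60"}

@app.post("/orders")
def create_order():
    body = request.get_json(silent=True) or {}
    new_id = str(max((int(k) for k in ORDERS), default=0) + 1)
    ORDERS[new_id] = {"id": new_id, **body}
    # 201 Created plus a Location header, instead of a generic 200.
    return jsonify(ORDERS[new_id]), 201, {"Location": f"/orders/{new_id}"}
```

None of that is “real” REST in the thesis sense, but it’s the part that delivers the ice water.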
This essay is confusing to me. It seems unkind and part of some new conversation I’m not part of.
For a bit of history, Roy’s thesis was finished in 2000. Around 2001 I was doing some high-profile work at Google with SOAP and RPC-style Internet APIs. The REST folks gave us a bit of a hard time (kindly), saying there was a better way: just use HTTP verbs and a particular paradigm for API design. Roy’s thesis was a helpful articulation of that paradigm. And broadly speaking, it won. SOAP is certainly dead. There are still plenty of RPC-style APIs on the Internet, but they tend to be designed with state and documents at the center rather than RPC verbs.
I think part of the problem is touched on in the post – REST was seen as promising a universal API that didn’t require people to design and program one-off clients for interacting with specific sites’ APIs, especially for the case of unattended programs/machines interacting with those APIs. The thinking I remember was: that’s what SOAP/Web Services was supposed to be for, so that must be what the alternative is for, too.
But it never delivered on that. No matter how perfectly, accurately, true-to-the-thesis “RESTful” your API is, clients are still going to be one-off and/or need manual up-front programming. Because REST is more or less “just build web pages”. Even if what you’re returning is JSON with relative URLs in it instead of HTML with anchor tags in it, there’s nothing in-band that can tell a machine when it should follow those links, or which ones it should follow, or any of the other things that are key behaviors for automated clients/consumers.
And so that information is out of band, either encoded in some other server-side system (some kind of metadata about the API) or encoded in the client. We’re back to square one on the thing people were trying to do in the first place: either agree on several large tomes’ worth of metadata specs (the old-school approach from the SOAP days) or build one-off clients for everything.
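To make that concrete: even a link-bearing, HAL-ish response (the shape below is made up) only moves the URLs in-band; the decision about which link to follow still has to be programmed into the client up front.

```python
import json

# Hypothetical response body, roughly HAL-shaped; not from any real API.
response_body = json.dumps({
    "id": "42",
    "status": "shipped",
    "_links": {
        "self":    {"href": "/orders/42"},
        "cancel":  {"href": "/orders/42/cancellation"},
        "invoice": {"href": "/orders/42/invoice"},
    },
})

order = json.loads(response_body)
# The URL comes from the document, but the knowledge that this client should
# follow "invoice" rather than "cancel", and what to do once it gets there,
# is out-of-band and hard-coded right here.
next_href = order["_links"]["invoice"]["href"]
print("next request:", "GET", next_href)
```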
The fact that we got slightly better ergonomics out of it is nice. It is a lot easier to whip up a one-off client than it used to be, and the “RESTful” schema specs are somewhat nicer to produce and consume than the alternatives, though neither of those really is due to “REST”. So it’s hard to say that “REST” was the thing that won – it feels more like JSON and YAML won out over XML, and simpler schema/metadata specs won out over the SOAP/WS-* stack, but not in a way that fundamentally changed us all over to a hypermedia-native approach.
Absolutely. If nothing else, the paper, and what was distilled from it, has helped many folks understand HTTP better and use it for what it is: an incredibly well-thought-out protocol at the application level, not just the transport level. That alone is a fabulous side effect.
Spoiler: It’s not “dead” or “wrong”, and there need not be any arguments one way or another about ideal URLs and HTTP method usage. It’s what Roy Fielding had in mind writing his thesis, and it’s damn good.
[two hours later] people discover the thesis and re-enact the fall of the tower of babel:
Last time I checked, the thesis is ca. 130 pages. I think one would need a bit more than two hours to read that carefully (though the 2hr reference may be unrelated, I still sense disdain for reading things carefully).
this methodology doesn’t really seem to work, given that no-one actually understands rest or restful very well.
This is not how the suitability of the methodology and the validity of its application are assessed. Furthermore, an academic thesis only needs to be understood by experts in the field (specifically, PhDs); non-experts who could only dedicate two hours to reading it are not its intended audience. The methodology alluded to was successfully used to analyze existing architectural “styles”: https://www.ics.uci.edu/~fielding/pubs/dissertation/net_arch_styles.htm#sec_3_1
The term ‘restful’ is a complete lost cause.
The term “AI” is a lost cause too. Blame marketing folks, not academics.
there’s no real immediate benefit to having a precise technical understanding
I am afraid this is not how academia works either. Imagine someone told you that the term “quantum physics” is a lost cause for precisely the same reason.
What if we called that detail “representational state transfer” […] let’s write a […] thesis about […] what sorts of constraints got us there
In academia, this is quite a serious accusation. He is hinting that the REST principles/constraints were known before Roy Fielding’s thesis, essentially questioning the contribution of the dissertation. A small nit on top: suggesting that REST imitated the known “constraints [that] got us there” perfectly fits the definition of mimesis, quite the opposite of anti-mimesis. Also, in academia, when industry uses some academic output, we call it “adoption” or “technology transfer”. To call it anti-mimetic sounds quite pretentious (though “antimemetic” may be fine if you insist).
Bottom line: the author of the blog post does not seem to be well-versed in academic work. While some criticism of how REST is adopted in the wild, the dangers of applying REST incorrectly, and the benefits of applying only a subset of REST principles could all be valid points, it is incorrect to apportion blame to the dissertation for these reasons alone.
Finally, the least successful thesis is the one read only by the author and the opponent (not even the supervisor or the rest of the committee). Dr. Roy Fielding has nothing to worry about in this regard.
I think defending “everyone misunderstood and misapplied this” with “ah, who cares about practitioners, I bet some PhDs read it and understood it in theory” seems… to reinforce what the article is getting at. HATEOAS/REST is an amazing principle that can be well understood in theory but has been realized in practice once, maybe twice (most notably in the form of web browsers), and most practitioners who claim to love it actually don’t know what it is. What some academics somewhere may or may not do is really quite beside the point if it doesn’t have an effect on practitioners.
This article worked for me in that it was, in equal measure, nonsense, funny, intriguing, wrong, right and lots more (not to mention the (deliberate?) two different spellings of “antimemetic”). A rant is allowed to be all those things and more.
Wonderful post.
I mean, almost every API out there today is RPC-style, often with “API wrappers”, which are needed because each API’s RPC is bespoke.
Though SOAP-style RPC is also back, in the form of GraphQL.
People really like thinking in terms of RPCs.
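For what it’s worth, the difference shows up in how the calls are spelled. A sketch with made-up endpoints (not any real API):

```python
import requests

# RPC style: one generic method (POST), the real verb lives in the URL or body.
requests.post("https://api.example.com/getUser", json={"userId": 123})
requests.post("https://api.example.com/deactivateUser", json={"userId": 123})

# Resource style: the HTTP method carries the verb, the URL names the thing.
requests.get("https://api.example.com/users/123")
requests.delete("https://api.example.com/users/123/activation")
```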
I feel like this article only makes sense if you’re already familiar with HATEOAS.
You’ll find a wonderful description of hypermedia by the author of HTMX here: https://hypermedia.systems/hypermedia-reintroduction/
This is an extremely cool article.
Is this what’s happening nowadays? This sounds horrible.
No, it’s not.
The post author was expressing themselves rhetorically.
Doesn’t Gmail have an API you need to use if you want to support it properly?