The term REST almost immediately diverged from its author’s definition. This happens.
I think the really important part of REST is the use of HTTP as an object-oriented API, with URLs as objects and, um, methods as methods. Obvious, but in contrast to a lot of early usage where APIs got shoehorned into a single URL, usually with …/cgi_bin/foo.pl?… in it.
The insistence on HTML is interesting, but really limits what can be done client-side. It worked for early browsers when JS was primitive, but it seems arbitrarily limiting now — it explicitly limits the browser to being a “dumb pipe”.
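To make the first point concrete, the contrast is roughly this (the paths and object names are invented for illustration, not from any particular API):

    GET    /users/42    ~  users[42].get()
    PUT    /users/42    ~  users[42].update(...)
    DELETE /users/42    ~  users[42].delete()
    POST   /users/      ~  users.create(...)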
You’re describing Level 2 of the Richardson Maturity Model, which is, generally, about as far as most REST APIs got.
I wouldn’t say he’s insisting on HTML, but on hypermedia which can be interpreted by a generic client. It’s just the case that HTML is the best fit for that in the existing tech stack. I don’t think that it explicitly limits the browser to being a “dumb pipe”, but it does imply that the model of “thick client -> network API -> backend” is not really REST-compatible. A client for a REST app is a hypermedia browser, not a VM that’s running a thick client.
I suppose XML plus XSLT would be a possible middle ground that would work in current browsers without having to resort to ugly HTML parsing. The data returned is XML, and if it’s a browser that downloads it an XSLT is applied to turn it into presentable HTML; if the data is downloaded by something else they just ignore the XSLT and use the data directly.
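A minimal sketch of that approach, with invented resource and stylesheet names: the server returns plain XML plus an xml-stylesheet processing instruction, which browsers apply automatically and which any other client can simply ignore.

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="/xslt/order.xsl"?>
    <!-- Hypothetical resource: a browser renders this through order.xsl;
         any other client just parses the XML and ignores the stylesheet hint. -->
    <order href="/orders/42/">
      <status>shipped</status>
      <link rel="cancel" href="cancel/"/>
    </order>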
That would certainly be one way to do it. The knowledge of the semantics of the app is conveyed in the data type, in this case the XML schema. Another way would be HTML with RFC 8288 relation types.
I buy the argument that REST now means the opposite of the intended meaning. I also think that if we used the original definition, approximately no one needs a “REST API” because they’re not made for programmatic use, and most people just want JSON RPC over HTTP instead.
I think RPC with JSON encodings is what most people want; it’s what’s compatible with the thick-client architecture web apps are written with today.
On the other hand, I kind of think they ought to want a REST API, because over the last 10 years the web dev community has been tying itself in absolute knots over the issue of state management, with each attempted solution adding another layer of intractable complexity. Whereas with a REST API designed around HATEOAS, state management is just not a problem, because every response sends you the application state.
A big frustration with HATEOAS is that it can be incompatible with some useful deployment models, e.g. exposing the same API application via two hostnames, ports, URL path prefixes, etc. The need for response generation to understand how to send a requestor back through the same network route used to reach the API in the first place can be similarly complex. It’s not an intractable problem, but it feels like initial implementations deemed API response generation the purview of the application. If we had middle boxes (very present in the REST literature) that transformed responses to/from a form imbued with global identity versus one that is logically isolated, it would make HATEOAS feel a lot more maintainable.
Using relative URLs helps a lot here, though it can still get a bit complicated in some cases. But interestingly, the utility of relative URIs is why I actually disagree with the common “REST” advice of never using a trailing slash. Indeed, I say you should ALWAYS use a trailing slash on resources.
Imagine you go to /users/me/ and it has links on it like “subresource/” and “child”. The relative link now works without the generated document needing to know that it lives at /users/me/. You can go to other users by linking to “../someone-else/”, so even if you were mounted somewhere else on a new path prefix it can still work.
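For illustration, this is how standard URL resolution treats those relative links (a quick sketch using the WHATWG URL API; example.com and the paths are just the hypothetical ones from above):

    // With a trailing slash, relative links resolve under the resource itself.
    new URL("subresource/", "https://example.com/users/me/").href;
    // => "https://example.com/users/me/subresource/"
    new URL("../someone-else/", "https://example.com/users/me/").href;
    // => "https://example.com/users/someone-else/"

    // Without the trailing slash, "me" is treated as a file-like segment,
    // so the same relative link escapes up into the parent collection.
    new URL("subresource/", "https://example.com/users/me").href;
    // => "https://example.com/users/subresource/"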
I’d argue that, on the contrary, they’re better suited for programmatic use. Having hypermedia formats like HTML or JSON-LD+Hydra is what enables programs to understand an otherwise opaque ad-hoc vocabulary; without them, a human has to hardcode that knowledge into yet another client. The inverse of programmable.
I don’t understand what that means. Humans have to program computers. Computers cannot program themselves. Computers can share vocabularies, schemas, endpoints, etc. with each other, but at the end of the day, a human being has to make the decision to call some API or not. How does hypermedia change any of that? You can standardize a DELETE verb, but you can’t standardize “do delete spam; don’t delete user data.” It’s just the wrong level for standardization.
Here’s an example problem I solved recently: there was a spam page that listed spam messages in an inbox for me. I got so much spam that clicking each message was time consuming, but the page had no “select all” option. I worked around this by sniffing the network traffic, learning what endpoints it was calling, and just writing a CLI to call those endpoints myself.
How would that process be different or easier with real REST?
You wouldn’t have to sniff anything, since the endpoints it calls are declared with forms on the UI you use. You can form.submit() each delete button.
I mean this in the most literal sense: How? What would that look like in the real world? How is it easier than what I did?
As it was the network calls were just GET which returned a JSON list of messages and a DELETE sent to an endpoint that had the message ID in it. Why would that be simpler in a world where REST won?
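Roughly what that CLI amounted to, sketched with fetch; the endpoint paths and JSON field names here are made up, since the real ones aren’t shown:

    // Hypothetical endpoints and payload shape, standing in for the sniffed ones.
    const base = "https://mail.example.com/api";

    async function purgeSpam() {
      const res = await fetch(`${base}/spam/messages`);   // GET -> JSON list
      const messages = await res.json();                  // e.g. [{ id: "abc" }, ...]
      for (const msg of messages) {
        await fetch(`${base}/spam/messages/${msg.id}`, { method: "DELETE" });
      }
    }

    purgeSpam();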
document.querySelectorAll(".spam input[type=checkbox]").forEach(function(e) { e.checked = true; }); (you might not even need the .spam parent and then this little thing would just select all on the page, whatever fits your actual thing)
That’s your user agent extension to select all. Then you can click delete. It isn’t all that different, since you’re still looking for the container element or whatever rather than watching network traffic, but since the hypertext format is relatively standardized you can do pretty generic extensions too.
PS kinda hard to say REST lost considering the massive success of the world wide web, including generic user agent extensions.
That was my first approach, BTW, but whatever React does to the elements makes it not work. 🙃
I personally agree that Plain Old Webpages With Forms are pretty good and people use React and SPAs etc. too much. But making a CLI to work with a POWWF (as opposed to a browser extension) would not be easier, and in many cases would be harder, than making one to work with JSON-over-HTTP RPC. And the reason people overuse SPAs isn’t just that they’re trendy; it’s because it solves a business problem to have specialized frontend and backend engineers who communicate over formally defined interfaces. The end product isn’t as nice in some ways as a well-made MPA, but it’s hard to blame business for wanting to decouple their teams even if it makes the product strictly worse.
One of the missing pieces to make HATEOAS work in practice is that it can be difficult to trust that your application wants to make those API calls. A misconfigured node that redirected a staging app to a production API would be annoying in the conventional model and potentially catastrophic in the HATEOAS model. Similarly for MITM attacks.
I think the difference is that you’d teach the client to interpret a document type once, and re-use it in multiple places across your application (assuming you have a problem amenable to that kind of re-use).
real REST
If they used a standard vocabulary (e.g. a kind of IMAP encoding a la JMAP) then you’d be able to pull a client off the shelf. If not, then you’d still need to write some logic to interpret and act based on the documents you get. But assuming that this is an alternative universe where people actually took hypertext applications and ran with them, you might even have some framework that handles a lot of the state management for you. But this is not that universe.
Maybe another example might be Jim Webber’s REST-bucks example. Although again, it very much lives in an alternate reality where coffee vendors have standardized on a common set of base document formats.
What did you mean by “not made for programmatic use” then?
but the page had no “select all” option
Good example of where hypermedia could have been used to help the agent discover the “delete all spam” action and drive to a new state, all without adding more coupling than having to know what “delete all spam” means.
It’s an internal API for a website. They don’t want me to program against it. I can’t imagine them choosing to document it. I also still am extremely unclear what form you imagine this documentation taking. Is it just HTML? Is REST just another name for web scraping? Because people do that every day.
Well, sort of, yes. REST is a formalization of what Fielding observed in the wild web 1.0 days. A traditional www site is a REST system.
and one big reason why people can do that is thanks to hypermedia’s discoverability (generic hypermedia formats and Uniform Interface), allowing the same spider bot (or any hypermedia agent really, like you+firefox) to traverse the whole web with a single client.
Though probably not JSONRPC, because of course that exists. It looks like an alternative to XML-RPC using JSON as the transport.
approximately no one needs a “REST API” because they’re not made for programmatic use, and most people just want JSON RPC over HTTP instead.
I’ve had this thought several times in the last few years. I feel like most of the web APIs I’ve worked on were really not intended to be navigated ad-hoc by a client, so why bother limiting ourselves to the 6 or so HTTP “verbs” and then having to contort our business concepts into noun-ified words that go with the HTTP verbs?
I mean, it’s nice that a programmer can go from job to job and have a common industry language/pattern to get up to speed quicker. So, there’s definitely a social advantage to being REST-ish just for the sake of convention.
Does anyone have an example of discoverability of urls working?
I’ve implemented a (JSON over HTTP) API which tried to be HATEOAS in that a client was expected to discover URLs from a root, which it could then manipulate to achieve certain goals.
I think we had two developer teams using the API (one local, one remote) and the remote one just hardcoded the URL fragments they discovered so they didn’t need to start a session and walk down to discover the correct URLs. The idea was we could change implementation and API versions and clients would handle it, but obviously this broke it.
In hindsight, latency is king and I don’t blame the remote devs for doing that, I’m just curious if anyone ever got this to work (and how)? I guess returning fully randomised URLs in the discovery phase is one way….
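The intended flow was roughly this (a sketch; the rel name and JSON link shape are invented, not the real API):

    // Hypothetical HATEOAS-style discovery: only the root URL is hardcoded.
    async function findOrdersUrl(rootUrl) {
      const root = await (await fetch(rootUrl)).json();
      // e.g. { links: [{ rel: "orders", href: "/v2/orders/" }, ...] }
      const link = root.links.find((l) => l.rel === "orders");
      return new URL(link.href, rootUrl).href;
    }

    // The remote team skipped this walk and hardcoded the paths instead,
    // which is exactly what broke when the URLs changed.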
Yes, but probably not one you are thinking about: the web.
Maybe to help with a bit of context, this is from Fielding’s original thesis:
The Web is intended to be an Internet-scale distributed hypermedia system, which means considerably more than just geographical dispersion. The Internet is about interconnecting information networks across organizational boundaries. Suppliers of information services must be able to cope with the demands of anarchic scalability and the independent deployment of software components. – intro to Fielding’s thesis
This is the idea of Anarchic Scalability.
If the client and server belong to different organizations, the server cannot force the client to upgrade, nor can the client force the server not to.
You cannot force a client to use your service, you can’t stop a thousand from deciding to use your service… and there is always another web site one click away…
So how are you going to design a system that allows the server to upgrade and change… without “flag days” arranging for all clients to upgrade at the same time as the server?
Conversely, if you’re all part of the same organization… isn’t there something simpler than REST you can do? The downside is that the boss of your team and their team is usually so far from the technical side that they can’t understand the problem.
This article tries to say a lot but meanders, and is sorely lacking in actual practical lessons from application development. The theory is mostly irrelevant, that’s why REST doesn’t mean REST.
As originally conceived, REST is too naive, and it only seemed to be a good idea during that brief period that client-side JS was focused on progressive enhancement: serve the exact same HTML for script and noscript, then sprinkle on extra behavior.
The very idea of having all state serialized in HTML implies that no other changes will be made by anyone else. It effectively requires the client to have an indefinite lock, or otherwise the HTML could become stale and lead to 404s or 403s, or result in silent last-write-wins after completing an action. So it is impractical for a connected world.
The more important question with REST is whether the API actually works by passing state objects wholesale back and forth, or whether most of the work is done via POST requests which perform specific mutations. The only true RESTful JSON API is really a key-value store like CouchDB, which is theoretically pure but practically not sufficient.
APIs are really about enforcing policy, something which is usually done in the boilerplate of writing handlers for individual server methods or mutations. A good solution for APIs should treat policies as first class things. GraphQL has the same issue, it solves the reading-data part, but leaves the writing-data part up to individual implementations.
It wasn’t really aimed at what we now call APIs, but at ways of extending the then-current web (so indeed, lobste.rs is a fully functioning REST application, if you’ll excuse tunneling all commands over POST). As another example, WebDAV was widely used as a method of doing online data sync, and was based on the same principles described in the thesis.
And ultimately, each of the constraints enables certain abilities–for example the focus on caching makes certain kinds of disconnected operation easier.
Ultimately, I half remember Fielding (or someone similar) describing REST as being designed for applications that last on the scale of decades, as the focus on document exchange rather than RPC style leans towards interactions with less coupling.
Conversely, the vast majority of end-user applications built today have fairly tight control of both the client and the server (think web or mobile apps). In that case, you can get away with supporting older clients for a far shorter span of time. For example, for a web app, you can relatively easily force the entire page to reload, and voila, you have your new client version.
Ultimately, I think the discrepancy comes about because the original REST style solves problems that most current developers don’t care about, whether for economic or other reasons.
Kind of the opposite. The theory is so ubiquitously relevant that we only have a handful of systems that have been designed that way because it works so well. When an alternative to the web shows up, like Gemini, the first question everyone asks is, “Why not the web?”
implies that no other changes will be made by anyone else. It effectively requires the client to have an indefinite lock, or otherwise the HTML could become stale and lead to 404s or 403s, or result in silent last-write-wins after completing an action.
No, it implies that the client will receive a full description of possible actions to take from where it is now in the last payload it received. If that payload is different from last week, that’s not the client’s problem. Stable URIs is a different issue, and not necessary to REST.
Nor is last-write-wins the only option for REST. You can also have semantics which are “there’s a token in the URI for writing, and if there’s been an update since that token was provided, your request fails but returns the new result and an updated URI+token that will let you write”. Or potentially a lot more URIs if there are now new options available. You could also have any other conflict resolution scheme that you want.
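A rough sketch of that kind of scheme, with an invented endpoint and payload shape:

    // Hypothetical token-in-URI conflict handling, as described above.
    async function updateThing(writeUrl, body) {
      const res = await fetch(writeUrl, { method: "PUT", body: JSON.stringify(body) });
      if (res.status === 409) {
        // Conflict: the response carries the current state and a fresh write URI+token.
        const { current, writeUrl: newWriteUrl } = await res.json();
        return { ok: false, current, retryAt: newWriteUrl };
      }
      return { ok: true, current: await res.json() };
    }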
APIs are really about enforcing policy
One aspect that some people want from APIs is enforcing policy. Or maybe it’s better to drop API and speak of RESTful interfaces, since they’re not just for programming applications against. The difference here is that an RPC API expects a policy to be set from outside and obeyed by all involved. A RESTful interface expects that policy is dictated by what’s available while traversing hypermedia. If I provide a link to do something, it’s allowed in the policy. If I don’t, it’s not. The client does not know for certain ahead of time. If you are talking about large, long-lived systems with different parts controlled by a huge number of unrelated entities, it is a fact of life that you won’t know ahead of time.
If that seems useless to the problem of making a client and server you both control stay in sync with each other, it’s because it is. But like people complaining about the complexities of relational databases when they only need to save some data in a file, that’s not what it’s for.
The theory is relevant because RPC is usually worse than alternatives.
Having to press your resources into HTTP verbs and paths sucks and is just pointless work. RPC is easier, more flexible and usually gets you generated client stubs.
That’s just the classic case of people thinking they are smart for using words they obviously don’t understand. Reflection barely happens. People do what sounds good.
That’s true for IT, but also for other areas in life.
And the people who know better just tag along, because what’s the point? In the best case you can reap a profit by becoming the expert, if you can confidently say “Well, REST actually means…”.
Another example is the DevOps Engineer and what they actually do in real life. People kinda realize it sometimes and then they call it SRE or something.
But then… does it really matter? Pretty much all discussions use these terms in a way where either it doesn’t matter or they are purely philosophical anyway.
I kinda agree that JSON is not a native hypermedia, but HTML isn’t a complete one either. Have you ever tried to encode any method other than GET or POST in pure HTML? Well, you can’t. So it turns out HTML is not a fully realized hypermedia format either. The OP links to another 7 posts trying to convince us that HTML is the one true REST format and neglects to mention that you can only encode half of the method semantics.
The author insists that the client needs all sorts of special knowledge to interpret JSON payloads but that HTML is somehow natively understood. Well, it’s not, if the client is not a browser. The client can very well understand some JSON with a schema that supports linking and method encoding and whatever. And that API is very much RESTful even though not every client can use it.
HTML is a native hypermedia in that it has native hypermedia controls: links and forms. JSON does not. You can impose hypermedia controls on top of JSON, but that hasn’t been as popular as people expected.
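Imposing controls on top of JSON usually looks something like this HAL-style sketch (a rough illustration, not any particular spec verbatim; the URLs are made up):

    // A JSON payload with hypermedia controls bolted on: the client follows
    // rels it understands instead of constructing URLs itself.
    const message = {
      subject: "hello",
      _links: {
        self:   { href: "/messages/42" },
        delete: { href: "/messages/42", method: "DELETE" },
        next:   { href: "/messages/43" },
      },
    };

    // A generic client only needs to know the link vocabulary, not the URL layout.
    fetch(message._links.delete.href, { method: message._links.delete.method });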
I agree entirely that HTML is a limited hypermedia, and, in particular, that it is silly that it doesn’t support the full gamut of HTTP actions. This is one of the four limitations of HTML that htmx is specifically designed to fix (from https://htmx.org/):
Why should only <a> and <form> be able to make HTTP requests?
Why should only click & submit events trigger them?
Why should only GET & POST methods be available?
Why should you only be able to replace the entire screen?
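For example, with htmx any element can carry those hypermedia controls (the attribute names are from the htmx docs; the URL is made up):

    <!-- A button that issues a DELETE and swaps out just the affected row. -->
    <button hx-delete="/messages/42"
            hx-target="closest tr"
            hx-swap="outerHTML"
            hx-confirm="Delete this message?">
      Delete
    </button>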
I get what htmx is trying to achieve. However, it doesn’t help with the REST narrative OP presents. It tries to convince us that REST is good and everyone is wrong about it (which is fine). But it also tries to convince us that HTML is the way while also being a thing on top of HTML to make it actually fulfil its role in REST.
Let’s assume for the sake of the argument that htmx is the actual hypermedia format that REST requires. Does it make REST useful? To actually use the REST API we need a very special kind of agent: a conforming web browser with scripting enabled.
Given that constraint it’s no wonder no one actually implements REST APIs. We have a whole lot of clients that are not browsers: mobile clients that implement native UI, and IoT devices that cannot run a browser. And if we need to build an API for those that is not REST (by the OP’s definition) anyway, then why bother building a separate REST API for the browser?
I like the idea of REST. I believe its ideas are valuable and can guide API design. Insistence on a particular hypermedia format (HTML but, I guess, really meaning htmx) is misguided.
I never knew that REST ever meant anything other than http+json (trying to be stateless) before reading this. Thanks for posting!
If we’re talking about rest principles, I’ll throw in my two cents: graphql fulfills all of the rest criteria.