There’s also the IETF HTTP Signatures draft which addresses some of these issues and lets you additionally sign key headers of your choice. It does present some issues with buffering requests though (and being a draft, support is somewhat limited and subject to change).
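To make the "sign key headers of your choice" idea concrete, here's a minimal sketch loosely modeled on the draft's signing-string construction. The secret, header values, and covered-header list are all made up for illustration; the draft also supports asymmetric keys, which a real deployment would more likely use.

```python
import base64
import hashlib
import hmac

def signing_string(method, path, headers, covered):
    """Build a signing string in the style of the HTTP Signatures draft:
    one `name: value` line per covered header, plus the special
    `(request-target)` pseudo-header covering the method and path."""
    lines = []
    for name in covered:
        if name == "(request-target)":
            lines.append(f"(request-target): {method.lower()} {path}")
        else:
            lines.append(f"{name}: {headers[name]}")
    return "\n".join(lines)

# Hypothetical shared secret and request values, purely for the demo.
secret = b"my-shared-secret"
headers = {"host": "example.com", "date": "Tue, 07 Jun 2016 20:51:35 GMT"}
covered = ["(request-target)", "host", "date"]

msg = signing_string("GET", "/foo", headers, covered).encode()
signature = base64.b64encode(hmac.new(secret, msg, hashlib.sha256).digest())

# The signature travels alongside the list of covered headers, so the
# verifier can rebuild exactly the same signing string and compare.
print(signature.decode())
```

Note that anything *not* in the covered list can be altered in transit without invalidating the signature, which is why choosing the right headers to cover matters.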
My off-the-cuff reaction: Nooooo!
I’ve been watching with dismay as every protocol I learn to love starts out simple and then becomes more and more complex within a relatively short span, and the progression isn’t linear. It’s near exponential :(
Witness dear old HTTP. You used to be able to debug it with netcat. Now? Forget about it. HTTPS started down the road to perdition, and now there’s HTTP/2 and beyond, with ever more levels of complexity and no more ASCII.
I’m very willing to embrace the idea that I, as a simple-minded technology tinkerer, shouldn’t be considered the most important use case when talking about protocol design, but sheesh, why can’t we ever appreciate simple, straightforward things anymore?
I can understand the idea of signing API responses in high-impact applications - I certainly wouldn’t want any question at all about the responses I get from a piece of medical technology returning a scan - but why not save the complexity for when it’s really warranted, rather than making blanket statements about APIs as a general case?
I share your dismay at protocol complexification. IMHO the slight improvement in speed isn’t worth the added complexity and the resulting near-monocultures in implementations. Same with HTML - you can do fancy things, but it’s impossible for anyone to implement a new web engine unless they can afford many man-years of highly skilled developer effort.
About API signatures, I suppose the request needs to be included in the signature - knowing that “yes” is a correct answer doesn’t say much unless you know the question that was asked.
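The point about binding the answer to the question can be sketched like this (a hypothetical HMAC scheme, not any particular API's actual format): the server signs the request together with the response, so a replayed “yes” attached to a different question fails verification.

```python
import hashlib
import hmac

SECRET = b"server-signing-key"  # hypothetical shared secret for the demo

def sign_response(request: bytes, response: bytes) -> bytes:
    """Sign the response *bound to* the request that produced it.
    The NUL separator keeps (req, resp) pairs from colliding."""
    mac = hmac.new(SECRET, request + b"\x00" + response, hashlib.sha256)
    return mac.hexdigest().encode()

def verify(request: bytes, response: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign_response(request, response), sig)

sig = sign_response(b"GET /allowed?user=alice", b"yes")
assert verify(b"GET /allowed?user=alice", b"yes", sig)        # genuine pair
assert not verify(b"GET /allowed?user=mallory", b"yes", sig)  # replayed answer
```

Without the request in the signed material, the last check would pass: the signed “yes” could be presented as the answer to any question.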
The HTTP2 protocol isn’t actually that complicated. It’s quite implementable by a single person working for not-very-long.
Well, that depends on your definition of web browser, I suppose. If you want something that has a chance to replace one of the established players for general web usage, there are many hours of work ahead of you.
As for HTTP2, you first need to implement TLS… There is value in being able to telnet to port 80 and type in a request by hand, IMHO.
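That telnet-era workflow can be sketched with nothing but the standard library: spin up a throwaway HTTP/1.1-era server, then send the exact ASCII bytes you would have typed into netcat or telnet. The handler and port are invented for the demo.

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello\n"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Hello)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The entire request is human-readable ASCII -- exactly what you would
# type by hand into a netcat or telnet session.
raw = (f"GET / HTTP/1.1\r\n"
       f"Host: 127.0.0.1:{server.server_port}\r\n"
       f"Connection: close\r\n\r\n").encode("ascii")

with socket.create_connection(("127.0.0.1", server.server_port)) as s:
    s.sendall(raw)
    reply = b""
    while chunk := s.recv(4096):
        reply += chunk

server.shutdown()
print(reply.decode("ascii", "replace").splitlines()[0])  # status line, e.g. "HTTP/1.0 200 OK"
```

With HTTP/2 the equivalent exchange is binary frames, usually under TLS with ALPN, so there is nothing left to type by hand.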
HTTP2 servers are required by the specification to support HTTP 1.1 are they not?
It seems to me that this is not really about APIs in general, but about newsworthy information that’s meant to be redistributed and relied upon by parties who don’t know anything about the original caller of the API. Of course, that’s just a subset of APIs. It might also be relevant as defense in depth for very sensitive APIs.
For an “ordinary” (non-news, non-critical) API, you often can just rely on things like HTTPS to verify that you’re talking to the right server, and you’re not republishing the data, so there’s no question about whether third parties should trust it (or perhaps you’re republishing the data to other systems within your company).
Maybe I’m on the wrong side of this, like people who used to argue that HTTPS everywhere was excessive. However, I think not: the pervasive use of HTTPS is already significant protection, and that debate was about zero verification vs. some, while this is about adding a further level of verification on top.
This sounds untenable, in the face of EU data deletion at gunpoint “right to be forgotten” legislation.