Aggressive use of hypermedia pays too much in terms of traversal cost. Aggressive use of GraphQL would require the implementation of a graph database engine at the endpoint of every GraphQL-enabled service, bringing in even less tractable issues of traversal and query optimization in a situation where that’s really not what you need.
You don’t need a graph database; PostgreSQL can do just fine.
Or you can have an intermediate tier that sources data from existing APIs (like Falcor). The data source is beside the point as long as you have something that can parse and run GraphQL queries.
I’m not sure what the advantages are of creating a GraphQL API over creating a SQL API – which my gut feeling says would be a bad idea, but the two ideas seem functionally equivalent.
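To make the "functionally equivalent" point concrete, here's a toy sketch (my own illustration, not any real library): a flat GraphQL-style field selection maps fairly mechanically onto a SELECT statement, which is where the two ideas overlap.

```python
# Toy translator: a flat GraphQL-like selection -> SQL. Illustrative only;
# real GraphQL allows arbitrary nesting, which is where it stops mapping
# cleanly onto a single SELECT.

def gql_to_sql(entity, fields, **filters):
    """Translate a flat field selection with equality filters into SQL."""
    where = " AND ".join(f"{k} = {v!r}" for k, v in filters.items())
    sql = f"SELECT {', '.join(fields)} FROM {entity}"
    return f"{sql} WHERE {where}" if where else sql

# { user(id: 1) { name email } }
# ~ SELECT name, email FROM user WHERE id = 1
print(gql_to_sql("user", ["name", "email"], id=1))
```

The divergence only shows up once selections nest, at which point the SQL side needs joins (and a planner to order them).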
I don’t mean that you need to be running a literal graph database; instead, I mean that the queries you can perform via GraphQL need the same optimizations and calculations that a graph database requires. That is, if there are multiple paths to a target, you have to choose the optimal one, or else pay a sometimes-large performance penalty.
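A minimal sketch of the kind of optimization meant here, with hypothetical in-memory data: resolving `{ posts { author { name } } }` naively does one author lookup per post (the classic N+1 problem), while a batched plan dedupes and fetches each distinct author once. Picking the cheaper plan is exactly the work a query engine would otherwise do for you.

```python
# Hypothetical data for illustration.
POSTS = [{"id": 1, "author_id": 10},
         {"id": 2, "author_id": 10},
         {"id": 3, "author_id": 11}]
AUTHORS = {10: "alice", 11: "bob"}

lookups = 0

def fetch_author(author_id):
    """Stand-in for a per-row datastore hit; counts how often it's called."""
    global lookups
    lookups += 1
    return AUTHORS[author_id]

# Naive resolver: one fetch per post -> N lookups (here, 3).
naive = [fetch_author(p["author_id"]) for p in POSTS]
naive_lookups = lookups

# Batched resolver: fetch each distinct author once -> 2 lookups.
lookups = 0
cache = {i: fetch_author(i) for i in {p["author_id"] for p in POSTS}}
batched = [cache[p["author_id"]] for p in POSTS]

print(naive_lookups, lookups)  # naive does more work for the same result
assert naive == batched
```

Libraries like DataLoader exist precisely to automate this batching behind GraphQL resolvers.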
For my next API, I will probably try GraphQL – possibly whether I want to or not!
That said, I think we will start seeing more RPC systems (gRPC et al.) tunneling over HTTP(S) again. Technology often goes in circles/cycles, and it seems like there is a good deal of angst (at least in the circles I am in) against all the “REST noodling” (REST/hypermedia/etc.).
I agree with the author’s closing opinions, but I’m not convinced that a move to a new paradigm is necessary to strengthen convention and productivity. The REST-ish, RPC-over-HTTP with JSON serialisation approach seems to be working pretty well. I’d argue that the tooling the author is suggesting is already available to most APIs using this approach, if the developers care to implement Swagger and then use swagger-codegen to generate their client SDKs.
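For reference, the swagger-codegen workflow mentioned above is a one-liner; this assumes a spec file at `./api.yaml` (a hypothetical path) and swagger-codegen installed on the PATH:

```shell
# Generate a Python client SDK from a Swagger/OpenAPI spec.
# -i: input spec, -l: target language, -o: output directory.
swagger-codegen generate -i api.yaml -l python -o ./client-sdk
```

Swap `-l python` for any other supported target (java, typescript-fetch, go, …) to get an SDK in that language from the same spec.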