1. 4

  2. 2

    After reading the article, I still don’t understand why it’s not OData, or how to constrain GraphQL queries to avoid DoS and poorly optimized queries. All I can tell is that it looks more like SPARQL than SQL/OData: more “data-oriented”, with fewer implementation details like tables.

    The server implements resolvers that fulfill specific graph queries—the client cannot ask for anything the server does not explicitly handle.

    If the query language supports arbitrary filtering and arbitrary joins (well, maybe filtering is limited to ==), then what are the differences from SQL? Yes, you can limit what can be joined and which fields support filtering, but you could do the same if your API spoke SQL, by parsing queries, looking into the WHERE clauses and joins, and returning 4xx if the client requests too much. But that is a very bad way to design APIs.

    Maybe I have the wrong picture of GraphQL? As I understand it, it works like SQL: the client sends a query, which looks like this:

    {
      human(id: 1002) {
        name
        appearsIn
        starships {
          name
        }
      }
    }
    

    It lists which fields to return, which associated entities to join, which of their fields to return, and which filters to apply. The server replies with the requested entities and their fields, arranged hierarchically. Queries are unconstrained; if you want to constrain them, you have to do it at the application level, by analyzing queries. If you don’t want clients to issue recursive queries that return infinite amounts of data, you have to construct an “antivirus” that detects bad queries. Am I right?

    Maybe GraphQL just has bad documentation? For example:

    Along with functions for each field on each type:

    function Query_me(request) {
      return request.auth.user;
    }
    
    function User_name(user) {
      return user.getName();
    }
    

    Wut? Why freaking getters as part of the protocol?

    1. 3

      If the query language supports arbitrary filtering and arbitrary joins (well, maybe filtering is limited to ==), then what are the differences from SQL? Yes, you can limit what can be joined and which fields support filtering, but you could do the same if your API spoke SQL, by parsing queries, looking into the WHERE clauses and joins, and returning 4xx if the client requests too much. But that is a very bad way to design APIs.

      Agreed.

      Maybe I have the wrong picture of GraphQL? As I understand it, it works like SQL: the client sends a query, which looks like this: It lists which fields to return, which associated entities to join, which of their fields to return, and which filters to apply. The server replies with the requested entities and their fields, arranged hierarchically.

      You certainly could design your GraphQL API to simply represent every model as a GraphQL Node, expose every attribute as a GraphQL field, and every join as a GraphQL edge – in much the same way as you could design a REST API that way. In that case, yes, GraphQL is basically web SQL.

      But there’s no reason to do that, really, and every reason to carefully design your GraphQL API to be an API – an abstraction designed for the construction of the front end, which doesn’t necessarily correspond in any meaningful way to the implementation of the backend, e.g.:

      query {
        starship_captains {
          shipName
          showName
        }
      }
      

      might, under the hood, be dealing with a Human table, a ships table, a show table, etc. It all depends on what your front end needs – this is the meat of API design, after all. It’s better, in my mind, to think of it as a kind of batch REST API protocol than as any kind of SQL-for-the-frontend.
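      To make that concrete, here’s a rough sketch of what a backend resolver for that `starship_captains` field might look like. The table names, the in-memory `db`, and the field layout are all invented for illustration – the point is just that the GraphQL shape the client sees doesn’t have to match the storage:

      ```javascript
      // Hypothetical backend storage: three separate "tables" the client
      // never sees directly. Names and shapes here are invented.
      const db = {
        humans: [{ id: 1, name: 'Han Solo', shipId: 10, showId: 100, isCaptain: true }],
        ships:  [{ id: 10, name: 'Millennium Falcon' }],
        shows:  [{ id: 100, name: 'Star Wars' }],
      };

      // Resolver for the top-level starship_captains field: joins the
      // internal tables into the flat shape the front end asked for.
      function resolveStarshipCaptains() {
        return db.humans
          .filter((h) => h.isCaptain)
          .map((h) => ({
            shipName: db.ships.find((s) => s.id === h.shipId).name,
            showName: db.shows.find((s) => s.id === h.showId).name,
          }));
      }
      ```

      The client only ever sees `shipName` and `showName`; whether those come from one table, three tables, or another service entirely is the server’s business.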

      Wut? Why freaking getters as part of the protocol?

      No. The documentation on that page seems very confusing. It is demonstrating what resolvers for those types would look like on the backend, written in JavaScript.

      Queries are unconstrained; if you want to constrain them, you have to do it at the application level, by analyzing queries.

      Not really – or at least, not in the way I’m understanding your sentence. You don’t peer at the text of a query and try to discern whether or not it touches something it shouldn’t, like some kind of PHP SQL sanitization library circa 1996. You constrain queries in a couple of ways. Firstly, simply via the design of your API, in what you do and do not expose: sensitive fields you never want to pass to a front end shouldn’t be made part of a GraphQL node definition in the first place, in much the same way that you’d exclude them from the definition of your JSON response serialization or w/e in a REST API.

      Secondly, when a field or node should be exposed to some users but not to others, you do permission checks at the resolver level. Those poorly introduced functions in the document you linked to are backend resolvers – they’re functions you provide to your GraphQL library, which are called every time it needs the values for those fields to build the response. During response rendering, you are thus gating each access to a field via your own access code, which is where you would implement the relevant permission and presentation logic. This is, again, not notably different from how you would normally construct a REST API.
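      A resolver-level permission check might look something like this sketch – the `User_email` name, the `context.currentUser` shape, and the admin flag are all assumptions for illustration, not anything mandated by GraphQL:

      ```javascript
      // Hypothetical field resolver with an access check baked in. Most
      // GraphQL libraries pass resolvers the parent object, the field
      // arguments, and a per-request context; exact signatures vary.
      function User_email(user, args, context) {
        const viewer = context.currentUser;
        // Only the user themselves, or an admin, may read the email field.
        if (!viewer || (viewer.id !== user.id && !viewer.isAdmin)) {
          return null; // or throw a GraphQL error, per your conventions
        }
        return user.email;
      }
      ```

      Because the library calls this function for every occurrence of the field in every response, the check can’t be bypassed by writing a cleverer query.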

      If you don’t want clients to issue recursive queries that return infinite amounts of data, you have to construct an “antivirus” that detects bad queries. Am I right?

      No. You tell the GraphQL library a max call depth – stack depth, basically. 3 is pretty common. If the GraphQL library finds itself nesting the response any further than that, it aborts and you can send the client whatever response you’d like.
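      Real servers do this on the parsed query AST – typically by plugging a depth-limit validation rule into the GraphQL library – but the idea is simple enough that a toy version fits in a few lines. This sketch just counts brace nesting in the raw query text, which is a stand-in for what the library does, not how it actually does it:

      ```javascript
      // Toy depth check: reject any query whose selection sets nest deeper
      // than maxDepth. Production libraries walk the parsed AST instead of
      // counting braces, but the effect is the same: the query is refused
      // before any resolver runs.
      function exceedsMaxDepth(query, maxDepth) {
        let depth = 0;
        for (const ch of query) {
          if (ch === '{') {
            depth += 1;
            if (depth > maxDepth) return true;
          } else if (ch === '}') {
            depth -= 1;
          }
        }
        return false;
      }
      ```

      With a limit of 3, `{ human { name } }` passes while a query nested four selection sets deep is rejected up front, so a recursive query can never expand into an unbounded response.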

    2. 1

      Just reading the title… my thought is… no… but it is awfully darned similar, and comparing it to OData when explaining it to someone aware of OData is a pretty great place to start.