Definitely an interesting article; I learned a few things. But I think it is addressing a very specific type of “service”. For example, the author mentions wanting to perform authentication in nginx and then proxy the connection to an existing instance of the server running locally, speaking HTTP over unix sockets. I have never run such a setup, though I suspect it isn’t uncommon. But when I think of “running a service” I typically think of it running on a remote machine, not locally, communicating over a structured protocol like gRPC, Thrift, D-Bus, or JSON-RPC, and being queried from my application code (for example, $service->getAdvertisementForUser). Many of these requested behaviors don’t seem very applicable to that type of service.
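For reference, a minimal sketch of the local half of that setup: a small Python HTTP service bound to a unix socket, the kind of backend nginx could proxy to after handling auth. The socket path and handler are hypothetical, not taken from the article.

```python
# Minimal sketch, assuming the setup described above: a local HTTP service
# bound to a unix socket, sitting behind a proxy (e.g. nginx) that handles
# auth. The socket path is hypothetical.
import os
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

SOCKET_PATH = "/run/example-service.sock"  # hypothetical path

class UnixHTTPServer(HTTPServer):
    # Reuse the stdlib HTTP server, but bind to AF_UNIX instead of TCP.
    address_family = socket.AF_UNIX

    def server_bind(self):
        # Clean up a stale socket file left over from a previous run.
        try:
            os.unlink(self.server_address)
        except FileNotFoundError:
            pass
        super().server_bind()

class Handler(BaseHTTPRequestHandler):
    def address_string(self):
        # Unix-socket peers have no host:port; avoid indexing an empty address.
        return "unix-client"

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello from behind the proxy\n")

if __name__ == "__main__":
    UnixHTTPServer(SOCKET_PATH, Handler).serve_forever()
```

On the nginx side, something like a location block with `proxy_pass http://unix:/run/example-service.sock:;` plus an auth mechanism such as `auth_request` would complete the picture.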
Hi, author here. I definitely focused on HTTP; it is so widely used that it makes sense as a default. I agree that these other protocols can be great choices as well, especially for internal services where you can stick to the same stack across an organization. However, I think most of the same concerns apply, even if the mechanism is different. For example, you probably still want to offload auth from the service itself into a proxy (or a common library). Rate limiting may not be required for internal services, but if you need it, I would again recommend providing the same sort of hooks so that it can be done by a proxy or common library.

Even receiving your socket from your caller is still relevant if you use static ports, as it ensures that services can’t bind each other’s ports (by error or malice) and that you don’t need to start them with elevated permissions. For setups like Docker it isn’t as relevant (although you could view listening on Docker’s isolated network interface as basically equivalent, because at the end of the day it is the Docker daemon that binds your listening socket to the outside world).
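To make the socket-passing point concrete, here is a rough sketch of what “receiving your socket from your caller” can look like, assuming the systemd socket-activation convention (LISTEN_PID/LISTEN_FDS, inherited fds starting at 3); the fallback address is hypothetical.

```python
# Rough sketch of accepting a pre-bound listening socket from the caller,
# assuming the systemd socket-activation convention. The service never
# binds a static port itself, so it needs no elevated permissions and
# cannot grab another service's port.
import os
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

SD_LISTEN_FDS_START = 3  # first inherited fd under the systemd convention

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok\n")

def inherited_socket():
    # systemd sets LISTEN_PID and LISTEN_FDS when it hands us sockets.
    if os.environ.get("LISTEN_PID") == str(os.getpid()) \
            and int(os.environ.get("LISTEN_FDS", "0")) >= 1:
        return socket.socket(fileno=SD_LISTEN_FDS_START)
    return None

if __name__ == "__main__":
    sock = inherited_socket()
    # Fall back to binding ourselves (e.g. for local development) only when
    # no socket was passed in; the port here is hypothetical.
    server = HTTPServer(("127.0.0.1", 8080), Handler,
                        bind_and_activate=sock is None)
    if sock is not None:
        server.socket.close()  # discard the unbound socket the server made
        server.socket = sock   # the caller already bound and listened on this
    server.serve_forever()
```

The same shape works for any supervisor that passes file descriptors to its children; only the environment-variable handshake is systemd-specific.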
What you describe is exactly the setup the article covers; the only difference is the addition of nginx (or equivalent) as an intermediary.