  2. 4

    Took me a while to figure out what bothered me about this post – it makes the deployment choice for components (e.g. threads vs processes vs machines) sound almost trivial. It’s anything but. If a component is deployed in-process, perhaps using green or native threads, it’s reasonable to use a blocking, fine-grained API that communicates with native domain objects. If it’s deployed as a REST service, that means using an asynchronous API, coarsening the interface to balance out the additional latency, choosing a serialization format, adding more monitoring… the list goes on.
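
    To make the contrast concrete, here’s a rough sketch (the component and type names are purely hypothetical, not from the post) of how differently the same capability might be shaped under the two deployments:

    ```typescript
    // Hypothetical component, invented for illustration.

    interface User { id: string; name: string; managerId?: string; }
    interface UserWithManagersDto { user: User; managers: User[]; }

    // In-process deployment: a fine-grained, blocking API working directly
    // with domain objects is perfectly reasonable.
    interface UserDirectoryLocal {
      getUser(id: string): User;
      getManager(user: User): User;
    }

    // REST deployment: latency and partial failure push you toward an
    // asynchronous, coarser-grained API plus explicit DTOs for serialization.
    interface UserDirectoryRemote {
      // One round trip fetches the user and their management chain together.
      getUserWithManagers(id: string): Promise<UserWithManagersDto>;
    }
    ```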

    For a high-level whiteboard conversation, this assumption is fine. When building a production system, it’s not.

    1. 1

      I agree with you that the devil is in the details, and there are a ton of details that need to be considered before we can stop caring whether something is in-process or not. This holds for GC pressure, thread-pool usage, memory usage, CPU consumption, context switches – practically any resource.

      On the subject of programming model, I wonder if it might be possible to end up with the best of both worlds.

      For fine- vs coarse-grained APIs, we can have an abstraction which automatically batches for us – consider Promise Pipelining (à la Cap’n Proto, or E), or Haxl’s automatic batching (calling it that is redundant, but I don’t have a better name for it). We can imagine a system where we program against a fine-grained, non-blocking interface, with the understanding that the abstraction will batch calls for us, planning the query as efficiently as possible.
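
      A minimal sketch of that batching idea (loosely in the spirit of Haxl or a DataLoader – every name here is invented for illustration, and error handling is omitted): callers make fine-grained, non-blocking calls, and the abstraction coalesces everything requested in the same tick into one coarse request:

      ```typescript
      class BatchingLoader<K, V> {
        private pending = new Map<K, Array<(v: V) => void>>();
        private scheduled = false;

        constructor(private batchFetch: (keys: K[]) => Promise<Map<K, V>>) {}

        // Callers use a fine-grained, non-blocking interface...
        load(key: K): Promise<V> {
          return new Promise<V>((resolve) => {
            const waiters = this.pending.get(key) ?? [];
            waiters.push(resolve);
            this.pending.set(key, waiters);
            if (!this.scheduled) {
              this.scheduled = true;
              // ...and the loader coalesces everything requested in this
              // microtask tick into one coarse request behind the scenes.
              queueMicrotask(() => this.flush());
            }
          });
        }

        private async flush(): Promise<void> {
          const batch = this.pending;
          this.pending = new Map();
          this.scheduled = false;
          const results = await this.batchFetch([...batch.keys()]);
          for (const [key, waiters] of batch) {
            const value = results.get(key)!;
            waiters.forEach((resolve) => resolve(value));
          }
        }
      }

      // Usage (fetchUsersByIds is hypothetical): three fine-grained calls
      // become a single batched fetch.
      // const users = new BatchingLoader(fetchUsersByIds);
      // const [a, b, c] = await Promise.all(
      //   [users.load("1"), users.load("2"), users.load("3")]);
      ```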

      With asynchronous vs synchronous, we should in theory be able to keep the benefits of synchronous execution while writing in an asynchronous style. A sufficiently sophisticated runtime could figure out when executing a call synchronously will pay off – for instance when the target component is in-process – and adjust, so that although the code is written in an asynchronous style, it actually executes synchronously.
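
      A minimal sketch of that (again with invented names), assuming the caller always programs against an asynchronous interface: the in-process binding short-circuits to an already-settled promise, so no I/O, scheduling or serialization is actually paid for, while the remote binding pays for the network hop behind the same interface:

      ```typescript
      interface PriceService {
        quote(sku: string): Promise<number>;
      }

      // In-process deployment: the promise is already settled when returned,
      // so execution is effectively synchronous.
      class LocalPriceService implements PriceService {
        constructor(private prices: Map<string, number>) {}
        quote(sku: string): Promise<number> {
          return Promise.resolve(this.prices.get(sku) ?? 0);
        }
      }

      // Remote deployment: the same interface, now paying for latency,
      // serialization and partial failure.
      class RemotePriceService implements PriceService {
        constructor(private baseUrl: string) {}
        async quote(sku: string): Promise<number> {
          const res = await fetch(`${this.baseUrl}/quote/${encodeURIComponent(sku)}`);
          const body = (await res.json()) as { price: number };
          return body.price;
        }
      }
      ```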

      With that said, it definitely depends on how far you’re willing to go up the abstraction scale, and it might not be worth it.

    2. 2

      Deployment affects the properties of communication between components (e.g. reliability and latency in a microservice architecture), and a common abstraction is likely to end up being the lowest common denominator. There is therefore a cost to this flexibility that should be taken into account. Ultimately, we’re talking about accounting for changing requirements, and that requires judgement rather than blanket ‘flexibility is always good’ statements.

      What bothers me about this article, as with much of Robert Martin’s writing, is that good ideas are presented as truth without a balanced view of the tradeoffs involved.