1.

    This exacerbated another problem: uWSGI is confusing. It’s amazing software, no doubt, but it ships with dozens and dozens of options you can tweak. So many options meant plenty of levers to pull, but the lack of clear documentation meant we were frequently left guessing at the true intent of a given flag.

    This is the biggest drawback of uWSGI, to the point where we are looking at replacing it in our stack. I personally have had good experiences with Gunicorn. It has good documentation and what seems like a reasonable feature set compared to uWSGI, which, for instance, ships with not one but two different async task runners built in!
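    For comparison, a minimal Gunicorn setup fits in a single Python config file. Here is a rough sketch (the module path `myproject.wsgi` is a placeholder for your real app):

    ```python
    # gunicorn.conf.py -- a rough sketch; "myproject.wsgi" is a placeholder.
    import multiprocessing

    wsgi_app = "myproject.wsgi:application"
    bind = "0.0.0.0:8000"

    # The (2 x cores) + 1 rule of thumb from the Gunicorn docs.
    workers = multiprocessing.cpu_count() * 2 + 1

    # Kill and restart any worker that stays silent for 30 seconds.
    timeout = 30
    ```

    Run it with `gunicorn -c gunicorn.conf.py`, and that is more or less the whole surface area.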

    I would have liked it if the original post had talked a bit about how the load on the system broke down (I/O vs. CPU bound), as that would affect how well their results transfer to other people looking to benefit from their work.

    1.

      You shouldn’t expect reasonable latencies at 80% CPU utilization, or at the very least you should expect latency to be quite volatile at that point on the curve. For an M/M/1 queue, mean latency is 1/(1 - ρ) where ρ is utilization (mean service time and utilization normalized to 1, so latency units are multiples of the mean service time); here is the plot: https://www.wolframalpha.com/input/?i=plot+1%2F%281+%E2%88%92+x%29+from+0+to+1. Note that mean latency at 80% utilization is ~5x the mean service time.
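      To make that concrete, here is a quick sketch of that curve at a few utilization levels:

      ```python
      # Normalized M/M/1 mean latency: W = 1 / (1 - rho), in units of the
      # mean service time (service rate normalized to 1).
      def mm1_mean_latency(rho: float) -> float:
          if not 0.0 <= rho < 1.0:
              raise ValueError("utilization must be in [0, 1)")
          return 1.0 / (1.0 - rho)

      for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
          print(f"{rho:.0%} utilization -> {mm1_mean_latency(rho):.0f}x mean service time")
      ```

      That prints 2x at 50%, 5x at 80%, 10x at 90%, and 100x at 99%; past ~80% the curve goes nearly vertical, which is where the volatility comes from.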