1. 13

  2. 6

    If you do this, I think you’ll have one pg connection per user connected. For this to scale beyond, say, 100 users (depending on your db config and how many connections it allows), you’ll want to pool the connections. You probably just want one pg connection that is LISTENing and can pass the messages on to the http sse connections. Not sure of the easiest way to do that.
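The single-listener fan-out described above can be sketched with a small broadcaster that hands each SSE client its own queue. This is a minimal stdlib-only sketch: the actual PostgreSQL side is stubbed out (with asyncpg, for example, you would register `publish` via `conn.add_listener(channel, callback)`); the class and method names here are hypothetical, not from any particular library.

```python
import asyncio

class NotifyBroadcaster:
    """Fans messages from a single source (e.g. one pg LISTEN
    connection) out to many per-client queues (one per SSE response)."""

    def __init__(self) -> None:
        self._subscribers: set[asyncio.Queue] = set()

    def subscribe(self) -> asyncio.Queue:
        # Each SSE handler calls this once and then awaits q.get()
        # in a loop, writing each payload to its response stream.
        q: asyncio.Queue = asyncio.Queue()
        self._subscribers.add(q)
        return q

    def unsubscribe(self, q: asyncio.Queue) -> None:
        self._subscribers.discard(q)

    def publish(self, payload: str) -> None:
        # In a real app this is the notification callback on the one
        # long-lived LISTEN connection.
        for q in self._subscribers:
            q.put_nowait(payload)

async def main() -> list[str]:
    b = NotifyBroadcaster()
    q1, q2 = b.subscribe(), b.subscribe()
    b.publish("hello")  # simulate a NOTIFY arriving
    return [q1.get_nowait(), q2.get_nowait()]

print(asyncio.run(main()))  # prints ['hello', 'hello']
```

With this shape, the number of DB connections stays at one regardless of how many SSE clients are attached.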

    1. 3

      Does connection pooling work with listen/notify?

      1. 3

        It does not.

        Edit: to be more precise, it does not work with transaction pooling; it does work with session pooling. Issue for those who care: https://github.com/pgbouncer/pgbouncer/issues/655
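For anyone running PgBouncer in front of the database, the relevant setting is `pool_mode`. A minimal fragment (assuming the rest of the config is already in place):

```ini
[pgbouncer]
; Transaction pooling breaks LISTEN/NOTIFY because the server
; connection is swapped out between transactions; session pooling
; pins one server connection per client session, so LISTEN survives.
pool_mode = session
```

Note that session pooling gives up most of the connection-sharing benefit, which is why the single-dedicated-listener approach discussed elsewhere in this thread is usually preferred.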

      2. 2

        You probably just want one pg connection that is LISTENing and can pass the messages on to the http sse connections.

        +1

        Not sure of the easiest way to do that.

        You have to be able to control the number of connections to the DB, invariant to client HTTP/SSE request load. Typically this means deploying a fixed number of application servers, which each maintain a small (1-4) number of connections to the DB, and serve client LISTEN requests by muxing them over those pre-established connections. I’m not sure how a connection-oriented tool like pgbouncer would be able to solve this problem.

      3. 3

        This is a really great tutorial.

        Does anyone know the performance/resource impact of creating a dedicated async PostgreSQL connection for every active streaming HTTP endpoint?

        1. 2

          We’re finally seeing the payoff from the move to ASGI?