1. 17
  1. 1

    Holy shit. What is the Pythonic way?

    1. 7

      They’re all different use cases. There isn’t “one way to do it” since there are multiple “it”s.

      1. 5

        asyncio is an embarrassment and though it is the officially blessed library the truly Pythonic way is to ignore it and use trio

        Here’s a post on some of the issues with asyncio: https://web.archive.org/web/20171206104600/https://veriny.tf/asyncio-a-dumpster-fire-of-bad-design/

        And another one (by the eventual author of trio): https://vorpus.org/blog/some-thoughts-on-asynchronous-api-design-in-a-post-asyncawait-world/

        I used to have some links to what made Trio great but seem to have lost them. The documentation is pretty good: https://trio.readthedocs.io/en/stable/design.html And in practice, I’ve spent unknown amounts of time debugging crazy asyncio issues because some task threw a StopIteration when it shouldn’t have and caused some other coro to fail silently, or whatever the problem of the day is. I’ve used Trio less, but these kinds of problems just don’t seem to happen.
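        A minimal stdlib-only sketch of the silent-failure mode described above (the coroutine names are invented for illustration): an asyncio Task that raises, but that nothing ever awaits, just parks its exception on the Task object instead of propagating it.

```python
import asyncio

async def boom():
    # a coroutine that fails; nothing ever awaits it
    raise ValueError("lost")

async def main():
    task = asyncio.get_running_loop().create_task(boom())
    await asyncio.sleep(0)  # yield once so the task runs and raises
    return task

task = asyncio.run(main())
# The error is parked on the Task; unless some code awaits it or calls
# .exception(), it only surfaces as a "Task exception was never retrieved"
# log line at interpreter shutdown.
print(task.done(), type(task.exception()).__name__)  # True ValueError
```

        Under Trio, by contrast, a child task crashing propagates out of its nursery automatically, which is why these bugs tend not to linger.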

        1. 4

          I like Trio, but the ecosystem around it seems in its infancy. I made a library for the Quart web framework that supports both the asyncio and Trio event loops. This library communicates with Redis, so I had to get an adapter for both.

          For asyncio, aioredis is an easy pick; for Trio, however, it was hard to find anything. I ended up copying some unmaintained toy project into my project as-is since I could not find a better alternative (which is working fine so far, but def. not an optimal situation). This effectively splits the community in half.

          Using Trio for my professional work is a hard sell because of that reason, even though I reckon Trio is probably the better implementation.

          1. 1

            Yes, most libraries are written for asyncio, which is kind of a shame.

            However, trio-asyncio lets you use them from inside your trio app: https://trio-asyncio.readthedocs.io/en/latest/

            It’s kind of a nightmare to port a large existing app from asyncio to trio, and probably isn’t worth it, but I think the existence of trio-asyncio means there are few reasons to start a new asyncio app.

          2. 1

            asyncio is a worse-is-better API. It’s good enough to be productive despite its warts, and that means any replacement will have to keep a substantial compatibility layer or else stay in relative obscurity, as most better projects do once a worse-is-better API becomes popular.

            1. 6

              asyncio is a worse-is-better API. It’s good enough to be productive despite its warts

              I’m going to try not to sound too bitter, but asyncio is not even worse-is-better

              It’s not simple, for either implementers or users. It’s not correct, given how it’s essentially impossible to cleanly handle cancellations and exceptions. And it’s not consistent: it’s a collection of layers that have slowly accreted, and occasionally you have to drop down and deal with the raw generators.

              that’s going to require any replacement to keep a substantial compatibility layer or else stay in relative obscurity

              I would have agreed with you a few years ago; the things you’re saying are generically correct. But now we have Trio. Trio is simpler both in implementation and in programming model; it just makes sense.

              It also has a lovely compatibility layer: https://trio-asyncio.readthedocs.io/en/latest/

              1. 2

                I’m going to try not to sound too bitter

                It’s okay. I avoid asyncio Protocols and transports like the plague - they’re just too fragile for me to handle.

                Thank you for the trio asyncio adapter. I wonder if I can integrate it into my sanic + uvloop + aioredis + asyncpg project

          3. 1

            Awaiting bare coroutines in the simplest procedural case; asyncio.get_running_loop() to get the event loop reference and loop.create_task(coroutine) to make Tasks to schedule in parallel for fan-out operations.

            At a previous job, I actually plugged a ProcessPoolExecutor into loop.run_in_executor as the default executor and used it to do truly parallel fan-outs for a project that indexed TomTom product data via an API key: determine the new releases, download the 7z archives for map shapes in parallel, unpack and repack them into an S3-backed tar of shapefile tar.gzs, and index the offsets into a database so it was trivial to fetch exactly the country/state/city polygon on the fly for use in our data science and staged polygon updating. It was pretty nifty, actually: archive packing was CPU-bound, which necessitated process concurrency, and my hack allowed me to develop single-process first (using a ThreadPoolExecutor descendant class which just scheduled coroutines in parallel) and then go full multiprocess for the things that ate CPU.
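            The fan-out pattern described above can be sketched roughly like this (the archive names and the repack function are invented placeholders). A thread pool stands in during development; swapping in concurrent.futures.ProcessPoolExecutor, as the comment describes, gives true CPU parallelism, provided the worker functions are picklable top-level functions and the entry point sits behind a __main__ guard.

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

def repack(archive: str) -> str:
    # stand-in for the CPU-bound unpack/repack step
    return archive.replace(".7z", ".tar.gz")

async def fan_out(archives):
    loop = asyncio.get_running_loop()
    # Develop against a thread pool first; swapping in a
    # ProcessPoolExecutor here is the only change needed to go
    # fully multiprocess for the CPU-heavy work.
    with ThreadPoolExecutor() as pool:
        futures = [loop.run_in_executor(pool, repack, a) for a in archives]
        return await asyncio.gather(*futures)

results = asyncio.run(fan_out(["map-us.7z", "map-de.7z"]))
print(results)  # ['map-us.tar.gz', 'map-de.tar.gz']
```

            loop.set_default_executor(pool) achieves the same swap globally, so call sites can pass None as the executor and not care which pool they run on.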