1. 12

    I feel like this misses the mark at a basic level: I don’t want to write async rust.

    I want to write concurrent rust and not have to worry about how many or which executors are in the resulting program. I want to write concurrent rust and not worry about which parts of the standard library I can’t use now. I want to write concurrent rust and not accidentally create an ordering between two concurrent functions.

    1. 19

      I feel like those wants don’t align with rust’s principles. Specifically, rust has a principle of making anything that comes with a cost explicit. It doesn’t automatically allocate, it doesn’t automatically take references, it doesn’t automatically lock things, etc. What you’re suggesting sounds like making transforms that come with costs implicit. That’s a reasonable tradeoff in many languages, but not rust.

      1. 12

        Sure. This initiative seems really great for people who end up choosing to use async Rust specifically because they need it for their high-performance application, and it sounds like they’ll really get a huge benefit out of this sort of work!

        But I feel like a lot of people don’t actually want to use async Rust, and just get forced into it by the general ecosystem inertia (“I want to use crate X, but crate X is async, so guess I’m async now (or I’m using block_on, but that still requires importing tokio).”). These people (hi, I’m one of them!) are going to be difficult to win over, because they don’t actually want to care about async Rust; they just want to write code (for which async Rust is always going to be net harder than writing sync Rust, IMHO).

        1. 5

          I think you’re describing a desire for the Rust ecosystem, whereas the proposal in the OP is about the language. I’ve also been there, wanting to use library X only to find out it’s async. This, to me, isn’t a language problem; it’s that someone (including myself) hasn’t written the library I want in a sync context.

          I don’t believe anything in the proposal directly relates to the situation you described.

          1. 1

            That’s a very fair point! :)

      1. 6

        I gained an appreciation for remote X when I realized you could use it to run web browsers in virtual machines. I neither like nor trust browsers all that much, and sometimes I’ve found it to be worth the trouble to sandbox them.

        I do like low-power small form factor computers, and I keep meaning to experiment with using a Raspberry Pi as an X terminal for accessing a browser hosted on the dedicated server I have running in a data center.

        1. 1

          Ah, but from what I understand about X11 the “sandboxed” browser can still read everything on the screen / all keyboard input, no?

          1. 2

            There are various ways to protect yourself from that with X today (and for the last decade…), including a fully sandboxed nested X server (e.g. Xephyr) and the “untrusted” client mode (see https://www.x.org/wiki/Development/Documentation/Security/), which is what you get if you ssh -X into the VM.

            1. 1

              It may depend on your threat model. I tend to only have one browser tab open at a time, and I only run X11 for the web. As a rule I don’t keep browsers open unless I’m actively using them.

              The thing that got me interested in running browsers in VMs was this CVE from 2015. My main concern was keeping the browser away from the filesystem.

          1. 22

            This was something I realised when writing Rust on STM32 for the first time: it was like “woah, why is everything so complicated??”. Turns out, microcontrollers have a significant amount of stuff hiding underneath the Arduino libraries…

            1. 8

              I’m not sure what the point of this story is? That two nerds trying to impress one another have overengineered the simplest problem to infinity? That most programmer interviews are bollocks? Something about how the company is too woke, or not woke enough? I’m thoroughly confused.

              1. 17

                https://aphyr.com/posts/340-reversing-the-technical-interview and its follow up posts might provide some more context :)

                1. 2

                  People used to try to come up with clever ways to do fizzbuzz to show how clever they were. That’s not in vogue anymore, so people instead veil it as a criticism of contemporary technical interviews instead. Now they not only get to show how clever they are, but also how many fancy words they have in their eclectic lexicon.

                  Some of it is pretty cool, but I would prefer to get straight to the clever part, without the short story.

                1. 39

                  I don’t think this article is very useful for a general audience, and I don’t think it makes a convincing argument about what you think it does. I do think it’s a pretty decent article though.

                  This article is basically “rust is bad because I find its closures confusing and they’re becoming really common”. Now that’s a pretty reasonable complaint; rust closures are complex. But this article isn’t really thorough enough to make a convincing argument that this is an issue with the language itself, rather than with how the author is trying to do things.

                  As someone who uses these features regularly, I have to say that

                  • The closure types (Fn, FnMut, FnOnce) make sense once you understand the borrow checker; I think these fall into the category of unavoidable complexity.
                  • You should almost always use move |args| body instead of |args| body, and doing this will make closures a whole lot easier to use (see the sketch after this list). I really wish this was the default (but understand why it isn’t).
                  • Callbacks are usually a mistake in rust because of the memory model, with exceptions, but this is also one of those unavoidable complexity things.
                  • Unnameable types annoy me too.
                  • I think there are some legitimate complaints about async rust, but these aren’t them. I haven’t really sat down and thought about how to articulate the complaints about async rust that I have, but I would be looking at subjects like Pin, Waker, lack of standardization, lack of generators, closures being too opaque, and so on.
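
                  To make the first two points concrete, here’s a toy sketch (my own example, nothing from the article) of the three closure traits and of what move changes:

                  ```rust
                  fn main() {
                      let name = String::from("world");

                      // Fn: captures `name` by shared reference; callable many times.
                      let greet = || println!("hello, {}", name);
                      greet();
                      greet();

                      let mut count = 0;
                      // FnMut: captures `count` by mutable reference.
                      let mut bump = || count += 1;
                      bump();

                      // FnOnce: the body moves `name` out, so it can only run once.
                      let consume = move || drop(name);
                      consume();

                      // `move` captures by value, which is usually what you want when
                      // the closure outlives the current scope (threads, spawned tasks).
                      let data = vec![1, 2, 3];
                      let handle = std::thread::spawn(move || data.iter().sum::<i32>());
                      assert_eq!(handle.join().unwrap(), 6);
                  }
                  ```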

                  I hope this article will be useful for the people working on rust documentation and tooling, because it’s pretty obvious that we could do a better job of helping people with this subject (if just because it is one of the harder topics in one of the harder languages out there).

                  1. 12

                    You’re right in that the article doesn’t really go into enough depth to conclusively prove the points made – that’s partially because it was a long enough article anyway, and it’s hard to get the balance right :)

                    This article is basically “rust is bad because I find its closures confusing and they’re becoming really common”. Now that’s a pretty reasonable complaint; rust closures are complex.

                    This isn’t really what I wanted to get across, although I can certainly see why you’d come to that conclusion (again, article isn’t that thorough). The point was not that I find Rust’s closures confusing (I mean, I do a little, but I’m more than used to it now ;P), but that a lot of Rust’s design decisions around ownership, borrowing, and how closures are represented (i.e. opaque types) make their ergonomics really bad.

                    And then that doing asynchronous programming is fundamentally equivalent to making a bunch of closures (because async stuff ends up being some form of CPS transform). I definitely agree that this point could have been proved more conclusively – but then again, the “what colour is your function” post already talked about that a lot.
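
                    (A hand-waving sketch of what I mean by that, nothing like the real compiler output: an async fn is compiled into an anonymous state-machine type, which is morally a closure over its live locals.)

                    ```rust
                    use std::future::Future;
                    use std::pin::Pin;
                    use std::task::{Context, Poll};

                    // What you write: `async fn add_one(x: u32) -> u32 { x + 1 }`
                    // Roughly what the compiler produces: a state machine whose `poll`
                    // resumes where it left off. The real output also handles await
                    // points, locals captured across suspensions, and pinning.
                    enum AddOne {
                        Start(u32),
                        Done,
                    }

                    impl Future for AddOne {
                        type Output = u32;

                        fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
                            match std::mem::replace(self.get_mut(), AddOne::Done) {
                                AddOne::Start(x) => Poll::Ready(x + 1),
                                AddOne::Done => panic!("polled after completion"),
                            }
                        }
                    }
                    ```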

                    I think there are some legitimate complaints about async rust, but these aren’t them.

                    I mean, yeah, I could definitely have talked more about the 2021 async library ecosystem specifically – the 2017 post did more of that (but, uh, for 2017). This was an attempt to try and identify some sort of root cause for all of the ecosystem instability & churn (in the spirit of this recent Lobsters comment by @spacejam), instead of fixating on specific issues that will probably go away (and be replaced with other similar issues >_<)

                    I do actually write async Rust code for my day job (and have been using async stuff since it came out in 2016/7). As I attempted to say, it’s not terrible – but I do think it’s all built on rather precarious foundations, and that leads to a fair deal of instability.

                    1. 3

                      I do actually write async Rust code for my day job

                      Hm, that’s weird. Why do you have to do async then? Can’t you just stick with threads?

                      1. 3

                        It wasn’t my choice; the work codebase had already started using it in earnest before I joined :)

                        (which is ok – I wouldn’t have used it myself due to the ecosystem volatility and pain points, but doing it as a team at work isn’t that bad, and it’s not just me who has to maintain it when it breaks)

                        1. 2

                          Hm, that’s weird. Why do you have to do async then? Can’t you just stick with threads?

                          Async and threads don’t really solve the same problems and aren’t interchangeable. The alternative to async would more likely be to hand-roll a reactor loop.

                          1. 2

                            To quote the original article:

                            Was spinning up a bunch of OS threads not an acceptable solution for the majority of situations?

                            I tend to agree that “just use threads” would probably be a solution for many problems. Though “just” might mean that someone needs to write, once, an epoll loop that handles HTTP and offloads complete requests/responses to a thread pool.
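
                            Something shaped roughly like this, say with the mio crate (an untested sketch; the address is a placeholder and the HTTP parsing is elided):

                            ```rust
                            // Assumes mio = { version = "0.8", features = ["os-poll", "net"] }
                            use mio::net::TcpListener;
                            use mio::{Events, Interest, Poll, Token};

                            const LISTENER: Token = Token(0);

                            fn main() -> std::io::Result<()> {
                                let mut poll = Poll::new()?;
                                let mut events = Events::with_capacity(1024);

                                let mut listener = TcpListener::bind("127.0.0.1:8080".parse().unwrap())?;
                                poll.registry()
                                    .register(&mut listener, LISTENER, Interest::READABLE)?;

                                loop {
                                    poll.poll(&mut events, None)?;
                                    for event in events.iter() {
                                        if event.token() == LISTENER {
                                            // Accept until the listener would block. A real server
                                            // would register each stream, buffer until a complete
                                            // HTTP request is parsed, hand it to a thread pool, and
                                            // write the response back when the socket is writable.
                                            while let Ok((_stream, _addr)) = listener.accept() {}
                                        }
                                    }
                                }
                            }
                            ```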

                            1. 2

                              Using OS threads in this way generally doesn’t scale well. We already know this from years of experience in many other languages. In concrete terms, to avoid terminology issues: if the codebase were an HTTP server handling 10K concurrent TCP connections on a machine with 32 hardware CPU cores, then a thread per TCP connection (10K threads) isn’t a great way to scale, but a thread per CPU core dedicated to async connection handling is (~32 threads, each handling ~312 connections with some kind of nonblocking event loop, or with async/await, which is just the fashionable way to factor event-loop-driven programs now).

                              1. 6

                                We already know this from years of experience in many other languages.

                                I’d like to challenge this: people have been writing web apps with Django & Rails for years, and that works. async/await can simultaneously be a tremendous help for niche use-cases and unnecessary for the majority of use-cases.

                                I haven’t seen a conclusive benchmark that says: “if you are doing HTTP with a database, async/await needs X times less CPU and Y times less RAM”. Up until earlier this year, I hadn’t even seen a conclusive micro benchmark comparing threads and async/await (there’s https://github.com/jimblandy/context-switch now). I don’t agree that there’s anything “we know” here: I don’t know, and I’ve been regularly re-researching this topic for several years.

                                EDIT: to put some numbers into discussion, 10k threads on Linux give on the order of 100MB overhead: https://github.com/matklad/10k_linux_threads.
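
                                (For anyone who wants to reproduce the flavor of that at home — this is a crude approximation of mine, not the linked benchmark — something like the following plus watching RSS in /proc is enough; you may need to raise ulimits first:)

                                ```rust
                                use std::thread;
                                use std::time::Duration;

                                fn main() {
                                    // Spawn 10_000 mostly-idle threads, then inspect memory use
                                    // externally (e.g. grep VmRSS /proc/<pid>/status).
                                    let handles: Vec<_> = (0..10_000)
                                        .map(|_| thread::spawn(|| thread::sleep(Duration::from_secs(60))))
                                        .collect();
                                    for h in handles {
                                        h.join().unwrap();
                                    }
                                }
                                ```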

                                1. 1

                                  My point isn’t so much about the numbers themselves or some particular 10K limit. We could re-make the argument at some arbitrary, higher threshold. The point is about how the SMP machines of today work, and how synchronization primitives and context-switching and locking work, etc. In general, irrespective of programming language, thread-per-cpu scales better at the limit than thread-per-connection (or substitute “connection” for some other logical object whose count can grow much larger than the CPU count as you try to scale up).

                                  Obviously, if one doesn’t care about scalability, then none of this debate matters. You can always claim that the problems you care about or are working on simply don’t stretch current hardware’s capabilities and you don’t care, but that’s kind of a cop-out when discussing language features built to enable concurrency in the first place.

                                  1. 1

                                    I do think that specific thresholds matter; the words “majority of situations” / “many problems” are important.

                                    Of course you can handle more connections per cycle as you go down the levels of abstraction, from OS threads to stackful coroutines to stackless coroutines to manually coded state machines to custom hardware.

                                    The question, for a specific application, is: where is the threshold at which this stops being important and is dominated by other performance concerns? It doesn’t make sense to make the networking layer infinitely scalable if the bottleneck is the database you use.

                                    The current fashion is to claim that you always need async, at least when doing web stuff, and that threads are just never good enough. That’s a discussion about culture, not a discussion about a language feature.

                                    Language-feature wise, I would lament the absence of other kinds of benchmarks: how dynamically-dispatched+match based resumption compares to a hand-coded event loop, how thread-per-core works with mandatory synchronized wake ups, etc.

                                2. 2

                                  C10K has been solved for years. I mean, you’re not wrong that having a full thread stack for every connection is expensive, but 10k connections isn’t the point at which thread-per-connection breaks.

                      1. 25

                        Note a couple things:

                        • With the 2021 edition, Rust plans to change closure capturing to only capture the necessary fields of a struct when possible, which will make closures easier to work with.
                        • With the currently in-the-works existential type declarations feature, you’ll be able to create named existential types which implement a particular trait, permitting the naming of closure types indirectly (both changes are sketched below).
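
                        Roughly, for both (my sketch; the second one still requires a nightly feature gate as of this writing):

                        ```rust
                        // Nightly-only sketch for the second bullet.
                        #![feature(type_alias_impl_trait)]

                        // Edition-2021 disjoint capture: the closure captures only `s.len`,
                        // so the mutable borrow of the sibling field `s.buf` still compiles.
                        struct Session {
                            buf: Vec<u8>,
                            len: usize,
                        }

                        fn demo(s: &mut Session) {
                            let report = || println!("len = {}", s.len);
                            s.buf.push(0); // an error on edition 2018, fine on 2021
                            report();
                        }

                        // Existential type declaration: an otherwise unnameable closure
                        // type gets a name that can appear in signatures.
                        type Adder = impl Fn(u32) -> u32;

                        fn make_adder(n: u32) -> Adder {
                            move |x| x + n
                        }
                        ```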

                        My general view is that some of the ergonomics here can currently be challenging, but there exist patterns for writing this code successfully, and the ergonomics are improving and will continue to improve.

                        1. 12

                          I have very mixed feelings about stuff like this. On the one hand, these are really cool (and innovative – not many other languages try and do async this way) solutions to the problems async Rust faces, and they’ll definitely improve the ergonomics (like async/await did).

                          On the other hand, adding all of this complexity to the language makes me slightly uneasy – it kind of reminds me of C++, where they just keep tacking things on. One of the things I liked about Rust 1.0 was that it wasn’t incredibly complicated, and that simplicity somewhat forced you into doing things a particular way.

                          Maybe it’s for the best – but I really do question how necessary all of this async stuff really is in the first place (as in, aren’t threads a simpler solution?). My hypothesis is that 90% of Rust code doesn’t actually need the extreme performance optimizations of asynchronous code, and will do just fine with threads (and for the remaining 10%, you can use mio or similar manually) – which makes all of the complexity hard to justify.

                          I may well be wrong, though (and, to a certain extent, I just have nostalgia from when everything was simpler) :)

                          1. 9

                            I don’t view either of these changes as much of a complexity add. The first, improving closure capturing, to me works akin to partial moves or the disjoint borrows the language already supports, making it more logically consistent, not less. For the second, Rust already has existential types (impl Trait). This is enabling them to be used in more places. They work the same in all places though.

                            1. 11

                              I’m excited about the extra power being added to existential types, but I would definitely throw it in the “more complexity” bin. AIUI, existential types will be usable in more places, but it isn’t like you’ll be able to treat them like any other type. They’re this special separate thing that you have to learn about for its own sake, but also in how it’s expressed in the language proper.

                              This doesn’t mean it’s incidental complexity or that the problem isn’t worth solving or that it would be best solved some other way. But it’s absolutely extra stuff you have to learn.

                              1. 2

                                Yeah, I guess my view is that the challenge of “learn existential types” is already present with impl Trait, but you’re right that making the feature usable in more places increases the pressure to learn it. Coincidentally, the next post for Possible Rust (“How to Read Rust Functions, Part 2”) includes a guide to impl Trait / existential types intended to be a more accessible alternative to Varkor’s “Existential Types in Rust.”
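
                                As a teaser, the stable subset already covers two positions with quite different meanings (my example, not from the upcoming post):

                                ```rust
                                use std::fmt::Display;

                                // Argument position: just sugar for a generic type parameter.
                                fn print_all(items: impl IntoIterator<Item = impl Display>) {
                                    for item in items {
                                        println!("{}", item);
                                    }
                                }

                                // Return position: an existential type. Callers know only "some
                                // Iterator<Item = u32>" and cannot name the concrete type (here
                                // it involves an unnameable closure type).
                                fn evens(upto: u32) -> impl Iterator<Item = u32> {
                                    (0..upto).filter(|n| n % 2 == 0)
                                }
                                ```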

                                1. 6

                                  but you’re right that making the feature usable in more places increases the pressure to learn it

                                  And in particular, by making existential types more useful, folks will start using them more. Right now, for example, I would never use impl Trait in a public API of a library unless it was my only option, due to the constraints surrounding it. I suspect others share my reasons too. So it winds up not getting as much visible use as maybe it will get in the future. But time will tell.

                              2. 2

                                eh, fair enough! I’m more concerned about how complex these are to implement in rustc (slash alternative compilers like mrustc), but what do I know :P

                                1. 7

                                  We already use this kind of analysis for splitting borrows, so I don’t expect this will be hard. I think rustc already has a reusable visitor for this.

                                  (mrustc does not intend to compile newer versions of rust)

                                  1. 1

                                    I do think it is the case that implementation complexity is ranked unusually low in Rust’s design decisions, but if I think about it, I really can’t say it’s the wrong choice.

                                2. 3

                                  Definitely second your view here. The added complexity and the trajectory mean I don’t feel comfortable using Rust in a professional setting anymore. You need significant self-control to write maintainable Rust; that’s not a good fit for large teams.

                                  What I want is Go-style focus on readability, pragmatism and maintainability, with a borrow checker. Not ticking off ever-growing feature lists.

                                  1. 9

                                    The problem with a Go-style focus here is: what do you remove from Rust? A lot of the complexity in Rust is, IMO, necessary complexity given its design constraints. If you relax some of its design constraints, then it is very likely that the language could be simplified greatly. But if you keep the “no UB outside of unsafe and zero cost abstractions” goals, then I would be really curious to hear some alternative designs. Go more or less has the “no UB outside of unsafe” (sans data races), but doesn’t have any affinity for zero cost abstractions. Because of that, many things can be greatly simplified.

                                    Not ticking off ever-growing feature lists.

                                    Do you really think that’s what we’re doing? Just adding shit for its own sake?

                                    1. 8

                                      Do you really think that’s what we’re doing?

                                      No, and that last bit of my comment is unfairly attributed venting, I’m sorry. Rust seemed like the holy grail to me, I don’t want to write database code in GCd languages ever again; I’m frustrated I no longer feel confident I could corral a team to write large codebases with Rust, because I really, really want to.

                                      I don’t know that my input other than as a frustrated user is helpful. But I’ll give you two data points.

                                      One: I’ve argued inside my org - a Java shop - to start doing work in Rust. The learning curve of Rust is a damper. To me and my use case, the killer Rust feature would be reducing that learning curve. So, what I mean by “Go-style pragmatism” is things like append.

                                      Rather than say “Go does not have generics, we must solve that to have append”, they said “let’s just hack in append”. It’s not pretty, but it means a massive language feature was not built just to scratch that one itch. If “all abstractions must have zero cost” is forcing the introduction of language features that in turn make the language increasingly hard to understand, perhaps the right thing to do is, sometimes, to break the rule.

                                      I guess this is exactly what you’re saying, and I guess - from the outside at least - it certainly doesn’t look like this is where things are headed.

                                      Two, I have personal subjective opinions about async in general, mostly as it pertains to memory management. That would be fine and well, I could just not use it. But the last few times I’ve started new projects, libraries I wanted to use had abandoned their previously synchronous implementations and gone all-in on async. In other words, introducing async forked the crate community, leaving - at least from my vantage point - fewer maintained crates on each side of the fork than there were before the fork.

                                      Two is there as an anecdote from me as a user, i.e. my experience of async was that it (1) makes the Rust hill even steeper to climb, regressing on the no. 1 problem I have as someone wanting to bring the language into my org; (2) it forked the crate community, such that the library ecosystem is now less valuable. And, I suppose, (3) it makes me worried Rust will continue adding very large core features, further growing the complexity and thus making it harder to keep a large team writing maintainable code.

                                      1. 8

                                        the right thing to do is, sometimes, to break the rule.

                                        I would say that the right thing to do is to just implement a high-level language without obvious mistakes. There shouldn’t be a reason for a Java shop to even consider Rust; it should be able to just pick a sane high-level language for application development. The problem is, this spot in the language landscape is currently conspicuously vacant, and Rust often manages to squeeze in there despite the zero-cost abstraction principle being antithetical to app dev.

                                        That’s the systems dynamic that worries me a lot about Rust: there’s pressure to make it a better language for app development at the cost of making it a worse language for systems programming, for lack of an actual reasonable app dev language one can use instead of Rust. I can’t say it is bad. Maybe the world would be a better place if we had just a “good enough” language today. Maybe the world would be better if we wait until an “actually good” language is implemented.

                                        So far, Rust resisted this pressure successfully, even exceptionally. It managed to absorb “programmers can have nice things” properties of high level languages, while moving downwards in the stack (it started a lot more similar to go). But now Rust is actually popular, so the pressure increases.

                                        1. 4

                                          I mean, we are a java shop that builds databases. We have a lot of pain from the intersection of distributed consensus and GC. Rust is a really good fit for a large bulk of our domain problem - virtual memory, concurrent B+-trees, low latency networking - in theory.

                                        2. 4

                                          I guess personally, I would say that learning how to write and maintain a production quality database has to be at least an order of magnitude more difficult than learning Rust. Obviously this is an opinion of mine, since everyone has their own learning experiences and styles.

                                          As to your aesthetic preferences, I agree with them too! It’s why I’m a huge fan of Go. I love how simple they made the language. It’s why I’m also watching Zig development closely (I was a financial backer for some time). Zero cost abstractions (or, in Zig’s case, memory safety at compile time) isn’t a necessary constraint for all problems, so there’s no reason to pay the cost of that constraint in all cases. This is why I’m trying to ask how to make the design simpler. The problem with breaking the zero cost abstraction rule is that it will invariably become a line of division: “I would use Rust, but since Foo is not zero cost, it’s not appropriate to use in domain Quux, so I have to stick with C or C++.” It’s antithetical to Rust’s goals, so it’s really really hard to break that rule.

                                          I’ve written about this before, but just take generics as one example. Without generics, Rust doesn’t exist. Without generics (and, specifically, monomorphized generics), you aren’t able to write reusable high performance data structures. This means that when folks need said data structures, they have to go implement them on their own. This in turn likely increases the use of unsafe in Rust code and thus significantly diminishes its value proposition.
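
                                          To illustrate what I mean by monomorphized (a toy example of mine):

                                          ```rust
                                          // A generic function is compiled once per concrete T it is used
                                          // with, so each instantiation is as fast as a hand-written copy:
                                          // no boxing, no dynamic dispatch.
                                          fn largest<T: PartialOrd + Copy>(items: &[T]) -> Option<T> {
                                              let mut best = *items.first()?;
                                              for &item in &items[1..] {
                                                  if item > best {
                                                      best = item;
                                                  }
                                              }
                                              Some(best)
                                          }

                                          fn main() {
                                              // Two separate machine-code instantiations are generated:
                                              // largest::<i64> and largest::<f64>.
                                              assert_eq!(largest(&[3i64, 9, 2]), Some(9));
                                              assert_eq!(largest(&[0.5f64, 0.25]), Some(0.5));
                                          }
                                          ```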

                                          Generics are a significant source of complexity. But there’s just no real way to get rid of them. I realize you didn’t suggest that, but you brought up the append example, so I figured I’d run with it.

                                          1. 2

                                            I guess personally, I would say that learning how to write and maintain a production quality database has to be at least an order of magnitude more difficult than learning Rust.

                                            Agree, but precisely because it’s a hard problem, ensuring everything else reduces mental overhead becomes critical, I think.

                                            If writing a production db is 10x as hard as learning Rust, but reading Rust is 10x as hard as reading Go, then writing a production grade database in Go is 10x easier overall, hand wavingly (and discounting the GC corner you now have painted yourself into).

                                            One thing worth highlighting is why we’ve managed to stick to the JVM, and where it bites us: most of the database is boring, simpleish code. Hundreds of thousands of LOC dealing with stuff that isn’t on the hot-path. In some world, we’d have a language like Go for writing all that stuff - simple and super focused on maintainability - and then a way to enter “hard mode” for writing performance critical portions.

                                            Java kind of lets us do that; most of the code is vanilla Java, and then critical stuff can drop into unsafe Java, like in the virtual memory implementation. The problem with that in the JVM is that the inefficiency of vanilla Java causes GC stalls in the critical code, and that unsafe Java is horrifying to work with.

                                            But, of course, then you need to understand both languages as you read the code.

                                        3. 4

                                          I think the argument is to “remove” async/await. Neither C nor C++ has async/await, and people write tons of event-driven code with them; they’re probably the pre-eminent languages for that. My bias for servers is to have small “manual” event loops that dispatch to threads.

                                          You could also write Rust in a completely “inverted” style like nginx (I personally dislike that, but some people have a taste for it; it’s more like “EE code” in my mind). The other option is code generation which I pointed out here:

                                          https://lobste.rs/s/rzhxyk/plain_text_protocols#c_gnp4fm

                                          Actually that seems like the best of both worlds to me. High level and event driven/single threaded at the same time. (BTW the video also indicated that the generated llparse code is 2x faster than the 10 year old battle-tested, hand-written code in node.js)

                                          So basically it seems like you can have no UB and zero cost abstractions, without async/await.

                                          1. 5

                                            After our last exchange, I don’t really want to get into a big discussion with you. But I will say that I’m quite skeptical. The async ecosystem did not look good prior to adding async/await. By that, I mean, that the code was much harder to read. So I suppose my argument is that adding some complexity to language reduces complexity in a lot of code. But I suppose folks can disagree here, particularly if you’re someone who thinks async is overused. (I do tend to fall into that camp.)

                                            1. 2

                                              Well it doesn’t have to be a long argument… I’m not arguing against async/await, just saying that you need more than 2 design constraints to get to “Rust should have async/await”. (The language would be a lot simpler without it, which was the original question AFAICT.)

                                              You also need:

                                              3. “ergonomics”, for some definition of it
                                              4. textual code generation is frowned upon

                                              Without constraint 3, Rust would be fine with people writing nginx-style code (which I don’t think is a great solution).

                                              Without constraint 4, you would use something like llparse.

                                              I happen to like code generation because it enables code that’s extremely high level and efficient at the same time (like llparse), but my experience with Oil suggests that most “casual” contributors are stymied by it. It causes a bunch of problems in the build system (build systems are broken and that’s a different story). It also causes some problems with tooling like debuggers and profilers.

                                              But I think those are fixable with systems design rather than language design (e.g. the C preprocessor and JS source maps do some limited things)

                                              On a related note, Go’s philosophy is to fix unergonomic code with code generation (“go generate” IIRC). There are lots of things that are unergonomic in Go, but code generation is sort of your “way out” without complicating the language.

                                    2. 1

                                      I didn’t know about the first change. That’s very exciting! This is definitely something that bit me when I was first learning the language.

                                    1. 13

                                      Damn, this describes very well the sentiment I’ve been having in my spine towards Rust for the last few years. It kinda seems like async was its sharkjump moment.

                                      (Common Lisp is pretty nice, though. We have crazy macros and parentheses and a language ecosystem that is older than I am and isn’t showing any signs of changing…)

                                      Something is subtly wrong about Common Lisp as well. On paper, it seems like a perfect ecosystem for 99% of the problems out there. It has an unsurpassed runtime, pretty good performance (on SBCL) for such a dynamic language and better stability than anything. What’s wrong with it is that nobody’s using it despite these virtues.

                                      1. 5

                                        If I’d hazard a guess, it’s probably because CL never really figured out its deployment story (and it doesn’t really integrate well with its environment in other respects because the spec is so generic). Going from “runs inside my Emacs/SLIME setup” to “runs in production” is surprisingly non-trivial (as opposed to golang where you can scp a static binary).

                                        There are other reasons, though (see: the Lisp Curse, “worse is better”, etc).

                                      1. 4

                                        I found that managing yubikey ssh identities with gpg-agent was painless. I’ve never encountered issues with the agent forgetting the key, though I have a gpg-connect-agent updatestartuptty /bye in my shellrc file.

                                        1. 2

                                          I mean, if it works for you, great! My experience with most of the gpg stack has been mostly negative, mainly because of the “jack of all trades, master of none” effect: gpg tries to support all the key types, ways of connecting to smartcards (directly and via pcscd), … – and at least for me, seems to suffer in usability as a result (too many moving parts and things that can go wrong). I think my machine has about 3 different types of GnuPG keyring, managed by different pieces of software (GNOME Keyring, pacman, gpg-agent itself, …). When it works, it’s great, but when it breaks…

                                          The PIV solution is interesting in that it seems to be much better designed (in contrast to the GPG API which is a complete tire fire, as far as I understand it) – and yubikey-agent seems to be a relatively simple Go app that just proxies authentication/signing commands through to pcscd and back.

                                          1. 1

                                            I didn’t mean my comment as a dismissal of your article. I was just offering an alternate point of view on using the Yubikey with plain GPG. :)

                                        1. 1

                                            Interesting that this claims a maximum of 8 digits. I have a yubikey 4c nano holding a gpg-agent based key with a 30-digit PIN. (I hope!)

                                          1. 4

                                            What happens if you try to activate the key using only the first 8 digits?

                                            1. 1

                                              Interesting – I didn’t realise the gpg-agent-based solution had better PIN support in that regard! Personally, my PIN is rather short; given that it’s not directly encrypting anything (and the hardware enforces a lockout after 3 failed tries), I don’t think this is the end of the world :)

                                            1. 1

                                              Nice! As someone who writes a fair deal of Common Lisp, this is a really handy feature to have :)

                                                1. 7

                                                  This is a part of why I really love Common Lisp. Many of its libraries (like bordeaux-threads, the de facto threading library, and usocket, the BSD sockets thing) were last updated something like 10-20 years ago, and they still just work.

                                                  This kind of ecosystem stability is refreshing when compared to a language like Rust, where all my code became unidiomatic / dependent on now-stale libraries within half a year… (!)

                                                  1. 3

                                                    But all your Rust code still works, doesn’t it? So it must be you love Common Lisp for some other reasons: not that bordeaux-threads still works, but that bordeaux-threads is still idiomatic.

                                                  1. 2

                                                    If it uses WhatsApp web, shouldn’t that mean that your phone has to always be turned on, online with WhatsApp running for you to use this?

                                                    1. 2

                                                      It does! However, you can run WhatsApp in a VM – my personal setup involves an android-x86 QEMU VM that uses libvirt. Thanks to virt-manager / SPICE’s USB redirection feature, you can even plug in a UVC webcam to scan the QR code…

                                                      1. 2

                                                        Interesting, so you’d just run it on a device that’s always on. I’ve heard from past experiments, that WhatsApp is inclined to ban accounts that try to circumvent their infrastructure, but this should be safe right? Or might they become suspicious if an account is always logged-on, always active?

                                                        1. 2

                                                          It’s an open question – but I’ve been doing this for a while now and they haven’t seemed to notice…

                                                          They definitely do ban people who don’t use their official mobile app, but so far they don’t seem to crack down on unofficial web client usages.

                                                    1. 9

                                                      I’ve noticed the same issue with Electron apps on my low-RAM devices. Anything with 4GB or less of RAM won’t let you run more than 2 instances of these programs without chugging into swap space or, worse, OOM-killing.

                                                      Particularly worrying is that most of my messaging apps are exactly like that: Riot/Element, FB Messenger, WhatsApp, Telegram (this last one is actually pretty optimized and doesn’t eat too much). Long gone are the days when an XMPP bridge would solve the issue, as most of the content is now images, audio messages, animated GIFs, emojis and other rich content.

                                                      Thanks for the article; at least now I know that I can replace one of the culprits with a daemonized, non-Electron app and just use the phone as a remote control.

                                                      1. 9

                                                        As far as I am aware, Telegram is not Electron, it is actually a Qt based app.

                                                        1. 7

                                                          Long gone are the days where an XMPP bridge would solve the issue, as most of the content is now images, audio messages, animated GIFs, emojis and other rich content.

                                                          I’m not sure what you mean. Most XMPP clients today (like Conversations, Dino, etc.) gracefully handle all of the items you mentioned, and with much less resources than a full web browser would require. I definitely recommend XMPP bridges when possible where the only alternative is an “app” that is really a full web browser.

                                                          1. 4

                                                            Of those listed, I think Riot will maybe disappear at some point. Riot has (amazingly) managed to have native desktop clients pop up: Quaternion, gomatrix and nheko are all packaged for my Linux distribution.

                                                            1. 3

                                                              I understand the desire to use something browser-ish and cross-platform. I don’t fully understand why Electron (hundreds of MB footprint) is so popular over Sciter (5 MB footprint).

                                                              1. 1

                                                                Electron is fully free, Sciter is closed-source with a Kickstarter campaign in progress to open-source it.

                                                                For the large companies, the price of something like Sciter should be a non-issue. If I were reviewing a proposal to use it, though, I’d be asking about security review and liability: HTML/CSS/JS have proven to be hard to get secure; Electron leverages the sugar-daddy of Google maintaining Chrome with security fixes; what is the situation with Sciter like?

                                                                Ideally, the internal review would go “okay, but if we only connect to our servers, and we make sure we’re not messing up TLS/HTTPS, then the only attack model is around user-data from other users being rendered in these contexts, and we have to have corner-case testing there no matter which engine we use, to make sure there are no leaks, so this is all manageable”. But I can see that “manageable” might not be enough to overcome the initial knee-jerk reactions.

                                                              2. 2

                                                                Long gone are the days where an XMPP bridge would solve the issue

                                                                I use Dino on desktop to replace the bloated Discord & WhatsApp clients, and it works fine (with inline images, file sharing, etc working too).

                                                                Disclaimer: I did, however, write the WhatsApp bridge :p

                                                                1. 1

                                                                  Isn’t the reason that XMPP isn’t useful more to do with these services wanting to maintain walled gardens? And further, isn’t that a result of the incentives in a world of “free” services?

                                                                1. 3

                                                                  Nice! As a chat systems nerd, I’m always interested in people trying to do something new in this space :)

                                                                  I like that a form of catchup is baked into the protocol from day 1 – being able to implement the equivalent of a message broker’s “durable subscription” is very valuable for not dropping messages on the floor (like in IRC where your connection drops). Minimalism is also a worthy goal – XMPP and Matrix do indeed try to promise the world, and are extensible enough that you can send anything over them, having a deliberately spartan alternative is a nifty idea.

                                                                  One thing that does seem lacking is any sort of discussion around how bridging to other protocols might work – would it just be a special case of s2s (as in XMPP), or would you design out a special extension for it (as in Matrix)?

                                                                  1. 1

                                                                    This is not a full federated your-server-talks-to-every-other-server type thing.

                                                                    There is no s2s. It seems it is meant to compete with IRC-ish c2s only and use centralised servers.

                                                                    1. 1

                                                                      Yeah, because centralized servers are simple to do and relatively hard to break. And because I don’t know a whole lot about decentralized chat systems. Seems like with a centralized chat system you have to trust the owners/operators of the server your client talks to, while with a decentralized chat system you need to trust the owners/operators of every server your own server talks to.

                                                                      Frankly, if I want to talk in the channel #foo@example.com then I see no reason to have to go through an intermediary home-server rather than just talk to example.com directly, and if you want a global topic #foo that any server can participate in then that seems Hard Enough I Don’t Want To Bother. And for bridging to other protocols, I’ve never seen it done well enough to be compelling enough to bother.

                                                                      My mind may be changed on these points, however. Most importantly, authentication and any potential account metadata is decentralized, so no matter whose server you’re talking to, you can still own your own identity. (This part I HAVE thought about.)

                                                                      1. 1

                                                                        Yeah, because centralized servers are simple to do and relatively hard to break. And because I don’t know a whole lot about decentralized chat systems.

                                                                        I’ve been idly wondering how one would do a decentralized chat system since I saw your sketch here. I think one would partition messages across peers with a distributed hash table, and have messages point to the most recent message posted in the chat/channel they’re in like you’ve discussed. That seems to lead to messages forming a DAG that we need some way of making eventually consistent, so maybe one needs some kind of CRDT for the messages? It’s fun to think about.
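
                                                                        A sketch of the message shape I’m imagining (entirely hypothetical, just thinking out loud):

                                                                        ```rust
                                                                        use std::collections::HashSet;

                                                                        // Each message names the DAG heads its author had seen, so two
                                                                        // peers' histories merge by set union: effectively a grow-only
                                                                        // set CRDT, with ordering recovered from the parent links.
                                                                        #[derive(Clone, PartialEq, Eq, Hash)]
                                                                        struct Message {
                                                                            id: [u8; 32],           // content hash
                                                                            parents: Vec<[u8; 32]>, // heads observed when posting
                                                                            author: String,
                                                                            body: String,
                                                                        }

                                                                        fn merge(ours: &HashSet<Message>, theirs: &HashSet<Message>) -> HashSet<Message> {
                                                                            ours.union(theirs).cloned().collect()
                                                                        }
                                                                        ```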

                                                                        1. 1

                                                                          For sure, except in a Global Network like the original IRC dreams, chatrooms effectively end up being single-server. Though it’s useful as an admin to be able to choose the server my chatroom is on without requiring my users to connect to Yet Another Server.

                                                                          If you have decentralized authentication/identity and account metadata (and 1:1 chats, if those are supported) then you are a fully federated type thing.

                                                                          1. 2

                                                                            Now I want to build something like this (IRC) distributed and federated. Great, as if I don’t have enough unfinished projects.

                                                                            1. 2

                                                                              I’d love to hear your thoughts on design, to be honest. Like I said, I don’t know much about distributed chat systems, but learning more would be nice.

                                                                            2. 1

                                                                              The primary use case of multiple servers is for operators to be able to bounce instances or have some die without losing what might be an important piece of infrastructure. Anything else, including possible load sharing or ping time optimization, is at best a nice side effect.

                                                                              1. 1

                                                                                Right, sorry, I should be careful to say “centralized” in this context. The number of “servers” is a red herring.

                                                                              2. 1

                                                                                Sometimes I wonder if federated is bad for implementations, but federated identity on whatever centralized/decentralized server we want is good.

                                                                                1. 1

                                                                                  Like OpenID? That never took off. Nowadays Facebook is probably the largest identity provider online.

                                                                        1. 4

                                                                          My go-to solution is to simply pop the front wheel off, and then lock the bike with a U-lock. This has worked pretty well for me in London thus far; I guess it (a) makes getting away with your stolen bike significantly harder, (b) makes it look like someone else has already gotten to it, and (c) makes someone tinkering with it to try and angle-grind the lock look even more suspicious.

                                                                          1. 13

                                                                             IRC’s lack of federation and agreed-upon extensibility is what drove me to XMPP over a decade ago. Never looked back.

                                                                            1. 12

                                                                               Too bad XMPP was effectively embraced/extended/extinguished by Google, in no small part thanks to the lack of message acknowledgement in the protocol, which translated to lost messages and zombie presence. This was especially bad across servers, so it paid to be on the same server (which typically became Google) as the other endpoint.

                                                                               I did resist that, but unfortunately most of my contacts were on the Google server, and I got isolated from them when Google cut the cord. Ultimately, I never adopted Google Talk (out of principle), but XMPP has never been the same after that.

                                                                              End to end encryption is also optional and not the default, which makes XMPP not much of an improvement over IRC. My hopes are with Matrix taking off, or a truly better (read: fully distributed) replacement like Tox gaining traction.

                                                                              1. 5

                                                                                 Showerthought: decentralised protocols need to have some kind of anti-network effect baked into them somehow, where there’s some kind of reward for staying out of the monoculture. I dunno what this actually looks like, though. Feels like the sort of thing some of the blockchain people might have a good answer for.

                                                                                1. 7

                                                                                  That’s a fascinating idea and I disagree. :D Network effects are powerful for good reason: centralization and economies of scale are efficient, both in resources like computer power, and in mental resources like “which the heck IRC network do I start a new channel on anyway”. What you do need is ways to avoid lock-in. If big popular network X starts abusing its power, then the reasonable response is to pick up your stakes and go somewhere else. So, that response needs to be as easy as possible. Low barriers to entry for creating new servers, low barriers to moving servers, low barriers to leaving servers.

                                                                                    I expect that for any human system you’re going to end up with something like Zipf’s law governing the distribution of who goes where; I don’t have a good reason for saying so, it’s just so damn common. Look at the population of Mastodon servers for example (I saw a really good graphic of sizes of servers and connections between them as a graph of interconnected bubbles once, I wish I could find it again). In my mind a healthy distributed community will probably have a handful of major servers/networks/instances, dozens or hundreds of medium-but-still-significant ones, and innumerable tiny ones.

                                                                                  1. 3

                                                                                    More and more these days I feel like “efficiency” at a large enough scale is just another way to say “homogeneity”. BBSes and their store-and-forward message networks like FidoNet and RelayNet were certainly less efficient than the present internet, but they were a lot more interesting. Personal webpages at some-isp.com/~whoever might have been less efficient (by whatever metric you choose) than everyone posting on Facebook and Twitter but at least they actually felt personal. Of course I realize to some degree I’m over-romanticizing the past (culturally, BBSes and FidoNet especially, as well as the pre-social-media internet, were a lot more white, male, and cishet than the internet is today; and technologically, I’d gnaw my own arm off to not have to go back to dialup speeds), and having lowered the bar to publish content on the internet has arguably broadened the spectrum of viewpoints that can be expressed, but part of me wonders if the establishment of the internet monoculture we’ve ended up with, where the likes of Facebook basically IS the entire internet to the “average” person, was really necessary to get there.

                                                                                  2. 3

                                                                                    I think in a capitalist system this is never going to be enough. What we really need is antitrust enforcement to prevent giant corporations from existing / gobbling up 98% of any kind of user.

                                                                                2. 3

                                                                                    This! Too bad XMPP never really caught on after the explosion of social media; it’s a (near) perfect protocol for real time text-based communication, and then some.

                                                                                  1. 21

                                                                                      It didn’t simply “not catch on”, it was deliberately starved by Facebook and Google disabling federation between their networks and everyone else. There was a brief moment around 2010 when I could talk to all my friends on gTalk and Facebook via an XMPP client, so it did actually work.

                                                                                    (This was my personal moment when I stopped considering Google to be “not evil”.)

                                                                                    1. 3

                                                                                        It was neat to have federation with gtalk, but when that died I finally got a bunch of my contacts off Google’s weak xmpp server and onto a better one, and onto better clients, etc. Was a net win for me.

                                                                                      1. 5

                                                                                        What are “better clients” these days for XMPP? I love the IDEA of XMPP, but I loathe the implementations.

                                                                                        1. 6

                                                                                          Dino, Gajim, Conversations. You may want to select a suitable server from (or check your server via) https://compliance.conversations.im/ for the best UX.

                                                                                        2. 5

                                                                                          I don’t have that much influence over my contacts :-)

                                                                                          1. 6

                                                                                            This.

                                                                                            Network effects win out over the network itself, every time.

                                                                                            1. 1

                                                                                              I guess neither do I? That’s why it took Google turning off the server to make them switch

                                                                                          2. 3

                                                                                            IIRC it was Facebook that was the bad actor here: it started letting the communication go only one way to siphon users from gTalk, which forced Google’s hand.

                                                                                            1. 5

                                                                                              Google was playing with Google+ at that moment and wanted to build a walled garden, which included its chat app(s). They even invented some “technical” reasons why XMPP wasn’t at all workable (after it had been working for them for years).

                                                                                              1. 2

                                                                                                It was weird ever since Android was released. The server could federate with other servers just fine, but Google Talk for Android spoke a proprietary C2S protocol, because the regular XMPP C2S involves keeping a TCP connection perpetually open, and that can’t be done on a smartphone without unacceptable power consumption.

                                                                                                I’m not sure that truly counts as a “good” technical reason to abandon S2S XMPP, but it meant that the Google Talk server was now privileged above all other XMPP servers in hard-to-resolve ways. It made S2S federation less relevant, because servers were no longer interchangeable.

                                                                                                1. 1

                                                                                                  I’m not sure the way GTalk clients talked to their server had anything to do with how the server talked to others. Even if it did, they could’ve treated it as a technical problem that needed solving rather than as an excuse to drop the whole thing.

                                                                                                  1. 2

                                                                                                    Dropping federation was claimed at the time (fully plausibly, imo) to be about spam mitigation. There was certainly a lot of XMPP spam around that time.

                                                                                                  2. 1

                                                                                                    I have been using regular XMPP c2s on my phones over mobile data continuously since 2009, when I got my first smartphone. Battery life has never been an issue. I think the battery-life concern can be real if you have tonnes of TCP connections open, but for a single XMPP session the battery impact is a myth.

                                                                                                2. 3

                                                                                                  AFAIK Facebook never had federated XMPP, just a barely-working c2s bridge.

                                                                                                  1. 1

                                                                                                    To make sure my memory wasn’t playing any tricks on me, I did a quick Google search. Facebook did use XMPP:

                                                                                                    To make Facebook Chat available everywhere, we are using the technology Jabber (XMPP), an open messaging protocol supported by most instant messaging software,

                                                                                                    From: https://www.facebook.com/notes/facebook-app/facebook-chat-now-available-everywhere/297991732130/

                                                                                                    I don’t remember the move they did on Google to siphon users though, but I remember thinking it was a scummy move.

                                                                                                    1. 2

                                                                                                      That link is talking about their c2s bridge. You still needed a Facebook account to use it. It was not federated.

                                                                                                3. 2

                                                                                                  That might be your experience but I’m not sure it’s true for the majority.

                                                                                                  From my contact list of like 30 people, 20 weren’t using GTalk in the first place (and no one used FB for this; completely separate type of folks), and they all stopped using XMPP independently, not because of anything Google did. And yes, there were interop problems with those 5, but overall I see XMPP’s decline in popularity as kinda orthogonal to Google, not related.

                                                                                                  1. 3

                                                                                                    There’s definitely some truth to that, but still, my experience differs greatly. The majority of my contacts used GTalk back in the day, and once that was shut off, they simply migrated to more popular, walled-garden messaging services. That was the point at which maintaining my own self-hosted XMPP VPS instance became unjustifiable in terms of monthly cost and time, simply because there was no one left I could talk to.

                                                                                                4. 4

                                                                                                  I often hear this, but I’ve been doing most of my communicating over XMPP continuously for almost 20 years, and it just keeps getting better while the community continues to expand and get work done.

                                                                                                  When I first got a JabberID the best I could do was use an MSN gateway to chat with some highschool pals from Gaim and have them complain that my text wasn’t in fun colours.

                                                                                                  Now I can chat with most of my friends and family directly to their JabberIDs because it’s “just one more chat app” to them on their Android phone. I can send and receive text and picture messages with the phone network over XMPP, and just this month started receiving all voice calls to my phone number over XMPP. There are decent clients for every non-Apple platform and lots of exciting ecosystem stuff happening.

                                                                                                  I think good protocols and free movements move more slowly because there is so much less money and attention, but there’s also less flash-in-the-pan fad adoption and less being left high and dry by corporate M&A, and over time, when the apps you used to compete with are long gone, you stand as what is left and still working.

                                                                                                  1. 4

                                                                                                    My experience tells me that the biggest obstacle to introducing open and battle-tested protocols to the masses is the insane friction of installing yet another app and opening yet another account. Most people simply can’t be bothered.

                                                                                                    I used to do a lot of fun stuff with XMPP back in the day, just like you did, but nowadays it’s extremely hard to get the non-geek people around me to join the bandwagon for pretty much anything outside the usual FAANG mainstream. Open protocols, federation, etc. remain very foreign concepts to many ordinary people, for reasons I could never fully grasp.

                                                                                                    Apparently, no one has ever solved that problem, despite many of them trying so hard.

                                                                                                    1. 2

                                                                                                      I don’t really use XMPP, but I know that “just one more chat app” never works with almost everyone in my circle of friends. Unfortunately I still have to use Facebook Messenger to communicate with some people.

                                                                                                    2. 3

                                                                                                      When I was building stuff with XMPP, I found it a little difficult to grasp. At its core it was a very good idea, and it continues to drive how federation works in the modern world. I’m not sure whether the difficulty comes from the fact that it used XML and couldn’t be carried over JSON, protobuf, or any other lightweight serialization format, or from the extensive list of proposals/extensions in various states of completion that made the topology of the protocol almost impossible to visualize. But in my opinion it’s not a “perfect” protocol by any means. There’s a good (technical) reason why most IM service operators moved away from XMPP after a while.

                                                                                                      I do wish something would take its place, though.

                                                                                                      1. 5

                                                                                                        Meanwhile it takes about a page or two of code to make an IRC bot.
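
                                                                                                        (For illustration, here is roughly what that page of code looks like: a minimal Python sketch of an IRC bot on raw sockets. The server, nick, and channel names are placeholders, and a robust bot would wait for the 001 welcome before joining.)

                                                                                                            import socket

                                                                                                            HOST, PORT = "irc.example.net", 6667   # placeholder server
                                                                                                            NICK, CHANNEL = "demobot", "#demo"     # placeholder identity

                                                                                                            sock = socket.create_connection((HOST, PORT))

                                                                                                            def send(line: str) -> None:
                                                                                                                sock.sendall((line + "\r\n").encode("utf-8"))

                                                                                                            # Register, then join (a robust bot waits for the 001 welcome first).
                                                                                                            send(f"NICK {NICK}")
                                                                                                            send(f"USER {NICK} 0 * :{NICK}")
                                                                                                            send(f"JOIN {CHANNEL}")

                                                                                                            buf = b""
                                                                                                            while True:
                                                                                                                data = sock.recv(4096)
                                                                                                                if not data:
                                                                                                                    break                                  # server closed the connection
                                                                                                                buf += data
                                                                                                                while b"\r\n" in buf:
                                                                                                                    raw, buf = buf.split(b"\r\n", 1)
                                                                                                                    msg = raw.decode("utf-8", "replace")
                                                                                                                    if msg.startswith("PING"):             # answer keepalives
                                                                                                                        send("PONG" + msg[4:])
                                                                                                                    elif f"PRIVMSG {CHANNEL} :!hello" in msg:
                                                                                                                        send(f"PRIVMSG {CHANNEL} :hello!")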

                                                                                                        1. 4

                                                                                                          XMPP has gotten a lot better, to be fair – a few years ago, the situation really was dire in terms of having a set of extensions that enabled halfway decent mobile support.

                                                                                                          It isn’t a perfect protocol (XML is a bit outdated nowadays, for one) – but crucially, the thing it has shown itself to be really good at is the extensibility aspect: the core is standardized as a set of IETF RFCs, and there are established ways to extend the core that protocols like IRC and Matrix really lack.

                                                                                                          IRC has IRCv3 Capability Negotiation, sure, but that’s still geared toward client-server extensibility — XMPP lets you send blobs of XML to other users (or servers) and have the server just forward them, and provides a set of mechanisms to discover what anything you can talk to supports (XEP-0030 Service Discovery). This means, for example, you can develop A/V calls as a client-to-client feature without the server ever having to care about how they work, since you’re building on top of the standard core features that all servers support.
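
                                                                                                          (As a concrete illustration of that discovery mechanism, here is a sketch of the XEP-0030 disco#info query one entity sends to ask another what it supports. The JID is made up; the query namespace is the one XEP-0030 defines.)

                                                                                                              import xml.etree.ElementTree as ET

                                                                                                              # Build a disco#info IQ: "what features do you support?"
                                                                                                              iq = ET.Element("iq", {"type": "get", "id": "disco1",
                                                                                                                                     "to": "juliet@capulet.example/balcony"})
                                                                                                              ET.SubElement(iq, "query",
                                                                                                                            {"xmlns": "http://jabber.org/protocol/disco#info"})
                                                                                                              print(ET.tostring(iq, encoding="unicode"))
                                                                                                              # The reply enumerates <feature var="..."/> elements, which is how
                                                                                                              # a client can learn that a peer supports, say, A/V calls before
                                                                                                              # any server-side logic gets involved.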

                                                                                                          Matrix seems to be denying the idea that extensibility is required, and thinks it can get away with having One True Protocol. I don’t necessarily think this is a good long-term solution, but we’ll see…

                                                                                                          1. 4

                                                                                                            Matrix has the Spec Proposal process for moving the core spec forward. And it has namespacing for extension (with “m.” reserved as the core prefix; everything else should use a reverse-domain prefix like “rs.lobste.*”). What do you think is missing?

                                                                                                            1. 1

                                                                                                              Okay, this may have improved since I last checked; it looks like they at least have the basics of some kind of dynamic feature / capability discovery stuff down.

                                                                                                            2. 2

                                                                                                              IRCv3 has client-to-client tags that can carry up to 4096 bytes of arbitrary data per message; they can be attached to any message or sent as a standalone TAGMSG.

                                                                                                              This is actually how emoji reactions, thread replies, and things like read/delivery notifications are implemented, and some clients have already prototyped using it for handshaking WebRTC calls.
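
                                                                                                              (A sketch of what that looks like on the wire, assuming the draft reply/react tag names from the IRCv3 message-tags examples; the msgid and channel here are made up.)

                                                                                                                  # Client-only tags start with '+'; the server relays them untouched.
                                                                                                                  def react(target: str, msgid: str, emoji: str) -> str:
                                                                                                                      return f"@+draft/reply={msgid};+draft/react={emoji} TAGMSG {target}"

                                                                                                                  print(react("#channel", "abc123", "👍"))
                                                                                                                  # => @+draft/reply=abc123;+draft/react=👍 TAGMSG #channel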

                                                                                                              1. 4

                                                                                                                Sure. However, message tags are nowhere near ubiquitous; some IRC netadmins / developers even reject the idea that arbitrary client-to-client communication is a good thing (ref).

                                                                                                                You can get arbitrary client-to-client communication with ircv3 in some configurations. My point is that XMPP allows it in every configuration; in fact, that’s one of the things that lets you call your implementation XMPP :p

                                                                                                              2. 1

                                                                                                                I have been using XMPP on mobile without issue since at least 2009

                                                                                                          2. 2

                                                                                                            How is IRC not federated? It’s transparently federated, unlike XMPP/Email/Matrix/ActivityPub/… that require a (user, server) tuple for identification, but it still doesn’t have a central point of failure or just one network.

                                                                                                            1. 3

                                                                                                              IRC is not federated because a user is required to have a “nick” on each network they want to participate in. I have identities on at least 4 different disconnected IRC networks.

                                                                                                              The IRC server-to-server protocol that allows networks to scale is very nice, and in an old-internet world with few bad actors, having a single global network would have been great. But since we obviously don’t have a single global network, and since members of different networks cannot communicate with each other, it is not a federated system.

                                                                                                              1. 3

                                                                                                                Servers in a network federate, true. But it’s not an open federation like email, where anyone can participate in a network by running their own server.

                                                                                                            1. 1

                                                                                                              It’s not good, it’s just not bad. Especially for simple use cases, it makes simple things easy. Hard things are still pretty damn hard though.

                                                                                                              Simplest example I can think of, if I’m remembering it right: each message includes its length. That is the length of the whole message, not just the payload; it’s payload+header. So if you want to, say, make sure that a payload is small enough for the server to accept, or else split it into multiple messages, your target size varies depending on the server and channel.

                                                                                                              1. 6

                                                                                                                Close, but no cigar! The actual issue is that you send a message to your IRC server like

                                                                                                                PRIVMSG eta :hi there!
                                                                                                                

                                                                                                                but it gets relayed to the other user like

                                                                                                                :randomuser!~ident@host PRIVMSG eta :hi there!
                                                                                                                

                                                                                                                Worse, it might get relayed to another server like

                                                                                                                :42A00001 PRIVMSG 43A00002 :hi there!
                                                                                                                

                                                                                                                (where 42A00001 and 43A00002 are user UIDs).

                                                                                                                And all of these have the same length restriction (512 bytes including the trailing CRLF, per RFC 1459).

                                                                                                                So basically, it’s Quite Difficult to determine how long you should make your message. If you figure out what your nick!user@host is and do maths based off that, you’re probably fine, but you might not be if the server-to-server link stuffs things up.
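
                                                                                                                (A sketch of the arithmetic involved, taking the classic 512-byte line limit from RFC 1459, CRLF included. As noted above, an S2S link that rewrites your prefix can still invalidate the estimate.)

                                                                                                                    LINE_LIMIT = 512  # classic IRC limit, including the trailing "\r\n"

                                                                                                                    def max_payload(prefix: str, target: str) -> int:
                                                                                                                        # Budget for the *relayed* form:
                                                                                                                        # ":<prefix> PRIVMSG <target> :<payload>\r\n"
                                                                                                                        overhead = len(f":{prefix} PRIVMSG {target} :\r\n".encode("utf-8"))
                                                                                                                        return LINE_LIMIT - overhead

                                                                                                                    print(max_payload("randomuser!~ident@host", "eta"))  # bytes left for text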

                                                                                                                1. 1

                                                                                                                  Aha, thank you for the correction! It has been a long time, but it was basically that issue that made me give up on writing my own IRC client lib once upon a time.

                                                                                                              1. 21

                                                                                                                This is the primary reason I find programming in Common Lisp so enjoyable, as opposed to Rust (a probably superior language, but having to wait for 300 dependencies to compile / get linked is a far cry from compiling individual functions in a running Lisp image)

                                                                                                                1. 4

                                                                                                                  Ha! I’m currently coming at this from converting a small C++ project to Rust and I’m finding Cargo to be so much more enjoyable than CMake :)

                                                                                                                  1. 1

                                                                                                                    The lack of IDE support for better C++ build systems (e.g. Bazel) is a boat anchor on C++. I’m struggling right now to find a CMake replacement which supports both Visual Studio and CLion on Windows and Linux.

                                                                                                                1. 2

                                                                                                                  Personally I’m not entirely sure that event loops are a strictly better solution (at least if your language does the whole ‘async tasks’ concurrency thing) - if you’re calling poll() yourself, it’s fine, but I’m inclined to believe async threads are almost as bad…

                                                                                                                  1. 4

                                                                                                                    Async brings its own problems, and scheduling fairness is a big one (as is to be expected with co-operative scheduling). I’m also not at all convinced, at least in Python land, that many applications really are faster under asynchronous IO (whether explicit like asyncio or implicit like gevent). My personal comparisons of web frameworks, for example, show that uWSGI is considerably quicker than any of the uvloop-based approaches, often by a lot.

                                                                                                                    1. 3

                                                                                                                      Only a few niche workloads fare better with a userspace scheduler than with threads; usually it’s just load balancers and proxies that have tons of connections sitting around doing nothing. In many cases it’s totally fine to spin up 10k threads on most servers to deal with clients that are doing enough work to keep the server busy (see the sketch below).

                                                                                                                      A big part of doing a job is feeling intellectually stimulated, though, and async probably makes a lot of engineers happier by giving them more work to do to get their stuff done, like how dogs are often happier eating food out of those “puzzle bowls” that make them lick all kinds of corners before getting their reward.
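
                                                                                                                      (A minimal sketch of that thread-per-connection model: a blocking echo server where each client simply gets its own OS thread, no event loop. The address and port are arbitrary.)

                                                                                                                          import socketserver

                                                                                                                          class Echo(socketserver.StreamRequestHandler):
                                                                                                                              def handle(self):
                                                                                                                                  for line in self.rfile:   # blocks only this client's thread
                                                                                                                                      self.wfile.write(line)

                                                                                                                          # ThreadingTCPServer spawns one thread per connection.
                                                                                                                          with socketserver.ThreadingTCPServer(("127.0.0.1", 9999), Echo) as srv:
                                                                                                                              srv.serve_forever()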

                                                                                                                      1. 1

                                                                                                                        I’ve observed that uwsgi configured with the right number of workers can go a long way. However, some workloads rely heavily on external APIs (e.g., OAuth2 stuff), where almost everything consists of waiting for HTTP calls to complete. For that kind of app (I call them “gateways” or “proxies”), something like aiohttp (async-based) may make sense. That’s not the typical workload, though.
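
                                                                                                                        (A sketch of why async fits that gateway shape, using aiohttp as mentioned above: the calls to the hypothetical upstream URLs overlap instead of queueing behind one another.)

                                                                                                                            import asyncio
                                                                                                                            import aiohttp

                                                                                                                            UPSTREAMS = [  # hypothetical external APIs the gateway must consult
                                                                                                                                "https://auth.example/token",
                                                                                                                                "https://profile.example/me",
                                                                                                                            ]

                                                                                                                            async def fetch(session: aiohttp.ClientSession, url: str) -> str:
                                                                                                                                async with session.get(url) as resp:
                                                                                                                                    return await resp.text()

                                                                                                                            async def main() -> None:
                                                                                                                                async with aiohttp.ClientSession() as session:
                                                                                                                                    # All upstream calls wait concurrently on one thread.
                                                                                                                                    bodies = await asyncio.gather(
                                                                                                                                        *(fetch(session, u) for u in UPSTREAMS))
                                                                                                                                    print([len(b) for b in bodies])

                                                                                                                            asyncio.run(main())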

                                                                                                                        1. 1

                                                                                                                          I see the rationale there (I think that’s how nginx works), but increasing the number of workers is also pretty easy and means you can keep writing normal Python.