Threads for eta

  1. 1

    also relevant: ttyd (https://github.com/tsl0922/ttyd), which I’ve used to run weechat before, and actually seems somewhat more maintained (?)

    1. 1

      You might want to look into glowing-bear with weechat, I used it for quite some time and it was great!

    1. 19

      Maybe it’s too simple? This comment was part of the “My first impressions of web3” discussion we are having in parallel:

      A protocol moves much more slowly than a platform. After 30+ years, email is still unencrypted; meanwhile WhatsApp went from unencrypted to full e2ee in a year. People are still trying to standardize sharing a video reliably over IRC; meanwhile, Slack lets you create custom reaction emoji based on your face.

      1. 19

        I don’t want to discount Moxie’s otherwise entirely correct (IMHO) observation here but it’s worth remembering that, like everything else in engineering, this is also a trade-off.

        IRC is an open standard, whereas WhatsApp is both a (closed, I think?) standard and a single application. Getting all implementations of an open standard on the same page is indeed difficult and carries a lot of inertia. Getting the only implementation of a closed standard on the same page is trivial. However, the technical capability to do so immediately also carries the associated risk that it’s not necessarily the page everyone wants to be on. That’s one of the reasons why, when WhatsApp is as dead as Yahoo! Messenger, people will still be using IRC. This shouldn’t be taken to mean that IRC is better than either WhatsApp or Slack – just that, for all its simplicity, there are in fact many good reasons why it outlasted many more capable and well-funded platforms, not all of them purely technical and thus not all of them outperformable by means of technical capability.

        It’s also worth pointing out that, while Slack lets you create custom reaction emoji based on your face, the standard way to share video reliably over both IRC and Slack is posting a Youtube link. More generally, it has been my experience that, for any given task, a trivial application of a protocol that’s better suited for that task will usually outperform any non-core extensions of a less suitable protocol.

        1. 6

          It’s also worth pointing out that, while Slack lets you create custom reaction emoji based on your face, the standard way to share video reliably over both IRC and Slack is posting a Youtube link. More generally, it has been my experience that, for any given task, a trivial application of a protocol that’s better suited for that task will usually outperform any non-core extensions of a less suitable protocol.

          I mean, I’m not uploading an entire 10 minute video, but likely something short from my camera. To do so from Slack, I press the button and pick it out of my camera roll. From IRC, I have to upload it somewhere, copy the link, and paste that…

          1. 1

            Fair point, that’s not the kind of video I was thinking of. You’re right!

            1. 1

              The simplest way for me to share pics among my friends on IRC is to upload the images to our Discord, then share the image from there. Sad, but true.

              A big thing hobbling IRC is the lack of decent mobile support without paid services (IRCloud) or using a bouncer.

          2. 10

            meanwhile, Slack lets you create custom reaction emoji based on your face.

            This is exactly why email survived all the IMs du jour that have come and gone. A clear vision of what matters, what the core functionality is, and which problem it sets out to solve. All of which Slack lacks.

            1. 13

              This is exactly why email survived all the IMs du jour that have come and gone.

              Does this really matter, though? I’ve probably used 10 messaging platforms in the last 25 years, some concurrently for different purposes. They each served their purpose. The transition was, in some cases, a little rocky, but only for a short time. I don’t really think my life would have been meaningfully improved by having used only a single messaging system during that time period.

              1. 4

                I’ve probably used 10 messaging platforms in the last 25 years, some concurrently for different purposes

                Mostly because all the disruptors bootstrapped their network effect by integration with XMPP and IRC and then dropped it when they had enough market share?

                The flipside of Moxie’s (excellent) observation is that the slow-moving protocols are easy to support - so that is where the network effects live. If the world is split between centralised networks X and Y (which will refuse to interoperate with each other) and you can get the core functionality with the (more basic) protocol, then there is value to the core protocol (you can speak to both X and Y).

                1. 7

                  There are several components to a messaging system:

                  • A client, that the end-user interacts with.
                  • A server (optionally) that the client communicates with.
                  • A protocol that the client and server use to communicate (or that clients use to communicate with each other in a P2P system).
                  • A service, which runs the server and (often) provides a default client.

                  In my experience, the overwhelming majority of users conflate all four of these. Even for email, they see something like Hotmail or GMail as a combination of all of these things, and the fact that Hotmail and GMail can communicate with each other is an extra; they have no idea that it works because email is a standard protocol. The fact that WhatsApp doesn’t let you communicate with any other chat service doesn’t surprise them, because they see the app as the whole system. The fact that there’s a protocol, talking to a server, operated by a service provider, just isn’t part of their mental model of the system.

                  I read a paper a couple of years ago that was looking at what users thought end-to-end encryption meant. It asked users to draw diagrams of how they thought things like email and WhatsApp worked and it pretty much coincided with my prior belief: there’s an app and there’s some magic, and that’s as far as most users go in their mental models.

                  1. 3

                    …then there is value to the core protocol (you can speak to X and Y)

                    But if those two apps won’t talk to one another, why would they speak the shared protocol? Plus, most people don’t choose protocols, they choose applications, and most people don’t choose based on “core functionality”, they choose based on differentiated features. I’m not unsympathetic here, I kicked and screamed about switching from Hipchat to Slack (until Hipchat ceased to exist, of course), but I watched people demand the features that Slack offered (threaded conversations, primarily). They didn’t care about being able to choose their clients, or federation, or being at the mercy of a single company. They cared about the day-to-day user experience.

                2. 11

                  eh, I think it’s potentially more that email is so deeply embedded into society that it’s difficult to remove, rather than about any defining characteristics of the protocol itself :p

                  1. 10

                    I’m still not sure though if you can call it “survived” when the lowest common denominator for email is still

                    • no dmarc
                    • no SPF
                    • MS blackholes at random (status code OK, mail gone; support can “upgrade” your IP so that you at least get a “denied”, and then you’re told to subscribe to some trust service for email delivery)
                    • google doesn’t like you sometimes
                    • german telekom doesn’t trust new IPs, wants an imprint to whitelist, no SPF
                    • there’s some random blacklist out there marking IPV4s as “dynamic” since 2002, give them money to change that (no imprint)
                    • and if you want push notification the content goes through your mobile OS provider in plaintext
                    • if you want SMTP/IMAP there is no second factor login at all, so you’re probably using the same credentials that everything else uses etc

                    So everyone goes to AWS for a mail sling or some other service. Because the anti-spam / trust model is inherently broken and centralized. Yes, email is still alive and I wouldn’t want to exchange it for some proprietary messenger, but it’s increasingly hard to even remotely self-host this, or even to let some company that isn’t one of the big 5 host it for you (because one bad customer in the IP range and you’re out) - except if you don’t actually care whether your emails can be received or sent.
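
                    To make the SPF/DMARC points above concrete: both are just DNS TXT records. A minimal sketch (the domain, report address, and policy values here are placeholders, not a recommendation):

                    ```
                    example.com.        IN TXT "v=spf1 mx -all"
                    _dmarc.example.com. IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
                    ```

                    The SPF record says “only this domain’s MX hosts may send mail for it”; the DMARC record tells receivers what to do when SPF/DKIM alignment fails, and where to send aggregate reports.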

                  2. 10

                    It is. To properly use IRC, you don’t just need IRC, you need a load of ad-hoc things on top. For example, IRC has no authentication at all. Most IRC servers run a bot called NickServ. It runs as an operator and if you want to have a persistent identity then you register it by sending this bot the username and a password. The bot then kicks anyone off the service if they try to use your username but don’t first notify the bot with your password. This is nice in some ways because it means that a client that is incredibly simple can still support these things by having the user send the messages directly. This is not very user friendly. There’s no command vocabulary and no generalised discovery mechanism, there’s just an ad-hoc set of extensions that you either support individually or you punt to the user. This also means that you’re limited to plain text for the core protocol.

                    Back in the ’90s, Internet Explorer came with a thing called MS Comic Chat, which provided a UI that looked like a comic on top of IRC. Every other user saw some line noise in your messages because it just put the state of the graphics inline in the message. If IRC were slightly less simple then this could have been embedded as out-of-band data and users of other IRC clients could have silently ignored it or chosen to support it.

                    I’m still a bit sad that SILC never caught on. SILC is basically modernised IRC. It had quite a nice permissively licensed client library implementation and I wrote a SILC bot back when it looked like it might be a thing that took over from IRC (2005ish?). It was basically dead by around 2015 though. It had quite a nice identity model: usernames were not unique but public keys were and clients could tell you if the David that you’re talking to today was the same as the one you were talking to yesterday (or even tell you if the person decided to change nicknames but kept their identity).

                    1. 5

                      authentication

                      SASL is a thing now. And clients can stay simple by letting the user enter IRC commands directly.

                      Nickserv is bad mainly because it is an in-band communications mechanism; that has security implications. I have seen people send e.g. ‘msg nickserv identify hunter2’ to a public channel (mistakenly omitting the leading ‘/’).
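
                      For reference, a rough sketch of what the SASL PLAIN exchange looks like on the wire (nick and password are made up; the payload is the base64 of “\0user\0hunter2”, per RFC 4616):

                      ```
                      C: CAP REQ :sasl
                      S: CAP * ACK :sasl
                      C: AUTHENTICATE PLAIN
                      S: AUTHENTICATE +
                      C: AUTHENTICATE AHVzZXIAaHVudGVyMg==
                      S: 903 user :SASL authentication successful
                      C: CAP END
                      ```

                      Because the credentials ride inside the registration handshake rather than inside an ordinary chat message, the “oops, sent my password to the channel” failure mode goes away.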

                      1. 3

                        For example, IRC has no authentication at all.

                        There’s no command vocabulary and no generalised discovery mechanism

                        There is. https://ircv3.net/specs/extensions/capability-negotiation https://ircv3.net/specs/extensions/sasl-3.1

                        Just about every client and almost every network (the only notable exceptions: OFTC, Undernet, EFnet, IRCnet) support both.

                        If IRC were slightly less simple then this could have been embedded as out-of-band data and users of other IRC clients could have silently ignored it or chosen to support it.

                        Today it could be done cleanly on many networks, thanks to https://ircv3.net/specs/extensions/message-tags
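
                        For illustration, a tagged message looks something like this (the vendor tag name here is invented; client-only tags carry a leading +):

                        ```
                        @+example.com/comic-state=abc123 :alice!a@host PRIVMSG #chan :Hello!
                        ```

                        A client that doesn’t understand the tag simply ignores it and renders the plain message body, which is exactly the out-of-band behaviour the parent comment wished for.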

                    1. 5

                      Strong agree; in my experience, there just isn’t any good FOSS email software that enables me to be productive, especially when it comes to things like managing email rules competently.

                      Fastmail lets me hit “create rule from message” and sort my mail far more effectively, especially since I can do it on my phone instead of having to SSH in and write a Sieve script…

                      1. 2

                        I prefer hosting my own e-mail, but I do agree that if you’re going to use hosted e-mail, go with Fastmail or Protonmail or some other service that you pay for, making you the customer and not the cattle.

                        There is no increased privacy using those services, of course. E-mail is a shit show when it comes to privacy or confidentiality, but I do trust a paid provider not to censor inbound e-mail. I don’t trust Google or Microsoft (LinkedIn is already silently deleting people’s posts without notification).

                        1. 2

                          This won’t fit your mobility requirement, but Emacs + mu4e (the client) + isync (to fetch mail from the server) is pretty good. You can write your filters in lisp which is at least slightly better than Sieve, and your user experience is only limited by your imagination and lisp skills.

                          But it’s also a niche solution :D

                          1. 2

                            Dovecot and Thunderbird (with an extension, as with everything else useful in Thunderbird) support the ManageSieve extension, which allows you to edit Sieve scripts directly from the mail client. This is a bit better than editing over SSH (the server can be asked to validate the script before you save the rules) but I’ve not found anything for writing Sieve scripts that’s better than writing them by hand. Outlook, for all its faults, is quite good at suggesting rules from a message and I’d love to see that kind of functionality in ManageSieve clients.
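
                            For anyone who hasn’t seen one, the kind of Sieve rule ManageSieve lets you edit is short and declarative; a minimal sketch per RFC 5228 (the address and folder name are placeholders):

                            ```
                            require ["fileinto"];

                            # File list traffic into a dedicated folder.
                            if address :is "to" "announce@example.com" {
                                fileinto "Lists/announce";
                            }
                            ```

                            The server-side validation mentioned above means a ManageSieve client can reject a script like this before saving it if, say, the require line is missing.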

                          1. 7

                            I know enough of ddevault to understand why he went with IRC instead of Matrix. But I think it is the wrong choice. There’s a reason why sr.ht uses git instead of CVS (or RCS). Similarly IRC should be replaced with Matrix.

                            1. 12

                              I’m not sure Matrix is obviously better than IRC, especially not on a protocol level; it might have more features than IRC right now, but I think half of the point of the project was to try and fix that disparity (?)

                              We already have lots of Matrix clients that work pretty well; why should people not be allowed to work on IRC clients, too, especially since we don’t have as much development going on there?

                              1. 9

                                I find that a very weird statement. If you’re talking about an org of a certain size, you can say it makes sense that they choose X over Y. But this is a very small org providing a service to its paying customers, most probably because they use IRC themselves. Nothing should be replaced if people are happy to use it.

                                1. 6

                                  Matrix is still, AFAIK, Not Great™ to operate, because the server guzzles resources and lacks a lot of moderation features (which can be hard to implement due to the DAG).

                                  1. 2

                                    So on a resource front I think it definitely depends on the server implementation you use.

                                    I’ve moved to using the conduit home server implementation which is using 500MB RES in some high volume channels.

                                    I don’t have numbers on hand but dendrite and synapse both used gigs of RES mem iirc.

                                    Admittedly, an ircd’s usage is no doubt much lower.

                                    As far as mod features go, yeah, Matrix could use more things in the spec, which will probably be hard to implement.

                                    1. 2

                                      FWIW I’m running Synapse with a ton of open channels across multiple homeservers and I’m not running into any resource issues on a VPS with 2 GB of RAM (previously 1, but it was running a bit tight) and some swap. I expect Dendrite, the new Go homeserver, to cut resource usage down significantly once it stabilizes.

                                      1. 1

                                        Synapse has improved a lot.

                                      2. 4

                                        I don’t think this is a relevant critique. I do, personally, think that Matrix is the better protocol and I use it myself, but sr.ht uses IRC themselves and is just offering a service to its paying customers to use IRC. If you find that valuable, you can pay for it or use it along with your existing sr.ht account, and if you don’t, you don’t need to. 🤷 If they used XMPP instead, they could even offer similar XMPP services.

                                        1. 3

                                          Do you mind clarifying what parallels you’re drawing between CVS vs Git and IRC vs Matrix?

                                          1. 7

                                            CVS and IRC are hosted on a central server, Git and Matrix are distributed/federated.

                                            1. 10

                                              That makes sense; I’m not sure that alone is an argument that Matrix is an unqualified better choice than IRC, though. Matrix is a very heavy ecosystem with relatively few implementations; actually setting up a homeserver is an arduous process, and the homeservers tend to be pretty resource-intensive, which presents scalability issues, especially for a more “independent” service not backed by a cloud monopolist with compute resources coming out the wazoo. IRC is not federated, but the relative ease with which IRC servers are spun up, and their undemanding operational requirements, make the IRC ecosystem far more effectively decentralised than Matrix.

                                              Matrix also seems not to fit a lot of the ‘ethos’ that sourcehut espouses: in-house developed software that’s for-purpose and aims to be pragmatic in terms of both use and design, often using technologies and workflows that free software developers already regularly use. IRC fits into this category much more neatly than Matrix does. It also feels to me, personally, that the advantages of federation are not as pronounced in real-time (synchronous) chat as in source control.

                                              1. 1

                                                IRC is not federated

                                                what do you mean by this? IRC networks consist of many interconnected servers run by different people.

                                                1. 3

                                                  IRC is a closed federation. Matrix/XMPP are open federations (that can be limited by allow/denylists or firewalls)

                                                  1. 1

                                                    IRC servers can have allow/denylists and firewalls too. What makes it closed and the others open?

                                                    1. 1

                                                      IRC servers must trust each other, so only trusted servers are allowed in the federation. Matrix, thanks to the state resolution protocol, can work without server administrators trusting each other. Much like email. Spam can still be a problem.

                                                      1. 1

                                                        So Matrix and SMTP servers are vulnerable to spam attacks if they allow untrusted servers, and IRC servers are vulnerable to more attacks like kills from malicious operators. So yes, IRC requires a higher level of cooperation between servers, but it’s a matter of degree.

                                                        1. 1

                                                          Federation between untrusted servers means spam cannot be solved on the server level. Matrix has some plans to solve spam with a reputation-based system.

                                                          1. 1

                                                            I don’t know what you mean by “on the server level.”

                                                  2. 1

                                                    IRC can be distributed through server-to-server connections, but IRC is not a federated protocol because these networks share a common view of users and channels for as long as they are connected, and there is no way to bridge communications with other networks at the level of the protocol itself.

                                                    Compare and contrast with XMPP and Matrix, where it’s perfectly possible to communicate with others on federated servers that have absolutely no relation to your homeserver, and there is clear delineation of the ownership of identities and rooms.

                                                    1. 2

                                                      I don’t see any substance to the idea that IRC servers can’t federate with each other while XMPP/Matrix servers can. All federated protocols form networks which can become fragmented by mismatches between software and policies.

                                                      You also contrast “a common view of users and channels” with a “clear delineation of the ownership of identities and rooms.” This arises from a fundamental difference in what the protocols offer, namely that XMPP offers persistent identities while IRC does not, but that has no bearing on whether a protocol supports federation. For a non-federated contrast to IRC/XMPP/Matrix, see ICQ.

                                                2. 3

                                                  A more obvious choice in that case might be XMPP. Or even SMTP (see delta chat)

                                                  1. 1

                                                    IRC is federated

                                                3. 3

                                                  perhaps you could lay out why ddevault would disagree, considering that he is unable to respond here (due to a series of incidents that lobste.rs has decided to keep secret)

                                                1. 9

                                                  I think we did this already in https://lobste.rs/s/3vt0s6/why_asynchronous_rust_doesn_t_work – but I would also note, as many people did then, and with the benefit of additional months writing and using async Rust, that indeed it does work, for myself and many others. It’s not perfect, but things are pretty good and they continue to improve. No programming language is going to meet everybody’s expectations or needs.

                                                  1. 3

                                                    Agreed on both counts; after the minor disaster that was the public reaction to the last post, I wasn’t really going to write another (potentially needlessly inflammatory…) post like it (!).

                                                    Ironically, I use async Rust at my day job (and did at the time of writing), so it certainly works; the title was poorly chosen (and probably didn’t lead to people understanding the point I actually wanted to make).

                                                  1. 12

                                                    I feel like this misses the mark at a basic level: I don’t want to write async rust.

                                                    I want to write concurrent rust and not have to worry about how many or which executors are in the resulting program. I want to write concurrent rust and not worry about which parts of the standard library I can’t use now. I want to write concurrent rust and not accidentally create an ordering between two concurrent functions.

                                                    1. 19

                                                      I feel like those wants don’t align with rust’s principles. Specifically, rust has a principle of making anything that comes with a cost explicit. It doesn’t automatically allocate, it doesn’t automatically take references, it doesn’t automatically lock things, etc. What you’re suggesting sounds like making transforms that come with costs implicit. That’s a reasonable tradeoff in many languages, but not rust.

                                                      1. 12

                                                        Sure. This initiative seems really great for people who end up choosing to use async Rust specifically because they need it for their high-performance application, and it sounds like they’ll really get a huge benefit out of this sort of work!

                                                        But I feel like a lot of people don’t actually want to use async Rust, and just get forced into it by the general ecosystem inertia (“I want to use crate X, but crate X is async, so guess I’m async now (or I’m using block_on, but that still requires importing tokio).”). These people (hi, I’m one of them!) are going to be difficult to win over, because they don’t actually want to care about async Rust; they just want to write code (for which async Rust is always going to be net harder than writing sync Rust, IMHO).

                                                        1. 5

                                                          I think you’re describing a desire for the Rust ecosystem, whereas the proposal in the OP is about the language. I’ve also been there, wanting to use library X only to find out it’s async. This, to me, isn’t a language problem; it’s that someone (including myself) hasn’t written the library I want in a sync context.

                                                          I don’t believe anything in the proposal directly relates to the situation you described.

                                                          1. 1

                                                            That’s a very fair point! :)

                                                      1. 6

                                                        I gained an appreciation for remote X when I realized you could use it to run web browsers in virtual machines. I neither like nor trust browsers all that much, and sometimes I’ve found it to be worth the trouble to sandbox them.

                                                        I do like low-power small form factor computers, and I keep meaning to experiment with using a Raspberry Pi as an X terminal for accessing a browser hosted on the dedicated server I have running in a data center.

                                                        1. 1

                                                            Ah, but from what I understand about X11, the “sandboxed” browser can still read everything on the screen / all keyboard input, no?

                                                          1. 2

                                                              There are various ways to protect yourself from that with X today (and for the last decade…), including a fully sandboxed X window (e.g. Xephyr) and an “untrusted” client; see https://www.x.org/wiki/Development/Documentation/Security/ which is used if you ssh -X into the VM.

                                                            1. 1

                                                              It may depend on your threat model. I tend to only have one browser tab open at a time, and I only run X11 for the web. As a rule I don’t keep browsers open unless I’m actively using them.

                                                              The thing that got me interested in running browsers in VMs was this CVE from 2015. My main concern was keeping the browser away from the filesystem.

                                                          1. 22

                                                            This was something I realised when writing Rust on STM32 for the first time: it was like “woah, why is everything so complicated??”. Turns out, microcontrollers have a significant amount of stuff hiding underneath the Arduino libraries…

                                                            1. 8

                                                              I’m not sure what the point of this story is? That two nerds trying to impress one another have overengineered the simplest problem to infinity? That most programmer interviews are bollocks? Something about how the company is too woke, or not woke enough? I’m thoroughly confused.

                                                              1. 17

                                                                https://aphyr.com/posts/340-reversing-the-technical-interview and its follow up posts might provide some more context :)

                                                                1. 2

                                                                  People used to try to come up with clever ways to do fizzbuzz to show how clever they were. That’s not in vogue anymore, so people instead veil it as a criticism of contemporary technical interviews instead. Now they not only get to show how clever they are, but also how many fancy words they have in their eclectic lexicon.

                                                                  Some of it is pretty cool, but I would prefer to get straight to the clever part, without the short story.

                                                                1. 39

                                                                    I don’t think this article is very useful for a general audience, and I don’t think it makes a convincing argument about what you think it does. I do think it’s a pretty decent article, though.

                                                                    This article is basically “rust is bad because I find its closures confusing and they’re becoming really common”. Now that’s a pretty reasonable complaint; rust closures are complex. But this article isn’t really thorough enough to make a convincing argument that this isn’t more of an issue with how the author is trying to do things, rather than the language itself.

                                                                  As someone who uses these features regularly, I have to say that

                                                                  • The closure types (Fn, FnMut, FnOnce) make sense if you understand the borrow checker, I think these fall into the category of unavoidable complexity.
                                                                    • You should almost always use move |args| body instead of |args| body, and doing this will make closures a whole lot easier to use. I really wish this was the default (but I understand why it isn’t).
                                                                  • Callbacks are usually a mistake in rust because of the memory model, with exceptions, but this is also one of those unavoidable complexity things.
                                                                  • Unnameable types annoy me too.
                                                                  • I think there are some legitimate complaints to be made about async rust, but these aren’t them. I haven’t really sat down and thought about how to articulate the complaints about async rust that I have, but I would be looking at subjects like Pin, Waker, lack of standardization, lack of generators, closures being too opaque, and so on.
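                                                                  To make the first two bullets concrete, here is a minimal sketch of the three closure traits and of `move` capturing; the helper names (call_twice and friends) are made up for illustration:

```rust
// Accepts any closure callable repeatedly through a shared reference.
fn call_twice<F: Fn() -> i32>(f: F) -> i32 {
    f() + f()
}

// Accepts a closure that mutates its environment; note the `mut` binding.
fn call_twice_mut<F: FnMut()>(mut f: F) {
    f();
    f();
}

// Accepts a closure that consumes what it captured: one call only.
fn call_once<F: FnOnce() -> String>(f: F) -> String {
    f()
}

fn main() {
    let x = 10;
    // Fn: `x` is captured by shared reference (it's Copy, so cheap).
    assert_eq!(call_twice(|| x), 20);

    let mut count = 0;
    // FnMut: `count` is captured by mutable reference.
    call_twice_mut(|| count += 1);
    assert_eq!(count, 2);

    let name = String::from("once");
    // FnOnce: `move` transfers ownership of `name` into the closure,
    // and returning it consumes the capture.
    assert_eq!(call_once(move || name), "once");
}
```

                                                                  The compiler infers the least restrictive trait a closure can implement from how it uses its captures; `move` only changes how things are captured, not which trait ends up implemented.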

                                                                  I hope this article will be useful for the people working on rust documentation and tooling, because it’s pretty obvious that we could do a better job of helping people with this subject (if only because it is one of the harder topics in one of the harder languages out there).

                                                                  1. 13

                                                                    You’re right in that the article doesn’t really go into enough depth to conclusively prove the points made – that’s partially because it was a long enough article anyway, and it’s hard to get the balance right :)

                                                                    This article is basically “rust is bad because I find it’s closures confusing and they’re becoming really common”. Now that’s a pretty reasonable complaint, rust closures are complex.

                                                                    This isn’t really what I wanted to get across, although I can certainly see why you’d come to that conclusion (again, article isn’t that thorough). The point was not that I find Rust’s closures confusing (I mean, I do a little, but I’m more than used to it now ;P), but that a lot of Rust’s design decisions around ownership, borrowing, and how closures are represented (i.e. opaque types) make their ergonomics really bad.

                                                                    And then that doing asynchronous programming is fundamentally equivalent to making a bunch of closures (because async stuff ends up being some form of CPS transform). I definitely agree that this point could have been proved more conclusively – but then again, the “what colour is your function” post already talked about that a lot.
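                                                                    As a sketch of that equivalence: a hand-rolled CPS version of a two-step computation passes the “rest of the program” around as closures, which is loosely what async/await automates (all names here are invented for illustration):

```rust
// Continuation-passing style by hand: each step takes a closure (the
// continuation) representing the rest of the computation, instead of
// returning a value directly.
fn fetch_number<F: FnOnce(i32)>(k: F) {
    k(21) // stand-in for a value that would arrive asynchronously
}

fn double<F: FnOnce(i32)>(n: i32, k: F) {
    k(n * 2)
}

fn run_pipeline() -> i32 {
    let mut result = 0;
    // The nested closures are the "callback pyramid" that async/await
    // flattens back into straight-line code.
    fetch_number(|n| double(n, |d| result = d));
    result
}

fn main() {
    assert_eq!(run_pipeline(), 42);
}
```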

                                                                    I think there are some legitimate complaints about async rust, but these aren’t them.

                                                                    I mean, yeah, I could definitely have talked more about the 2021 async library ecosystem specifically – the 2017 post did more of that (but, uh, for 2017). This was an attempt to try and identify some sort of root cause for all of the ecosystem instability & churn (in the spirit of this recent Lobsters comment by @spacejam), instead of fixating on specific issues that will probably go away (and be replaced with other similar issues >_<)


                                                                    I do actually write async Rust code for my day job (and have been using async stuff since it came out in 2016/7). As I attempted to say, it’s not terrible – but I do think it’s all built on rather precarious foundations, and that leads to a fair deal of instability.

                                                                    1. 3

                                                                      I do actually write async Rust code for my day job

                                                                      Hm, that’s weird. Why do you have to do async then? Can’t you just stick with threads?

                                                                      1. 4

                                                                        It wasn’t my choice; the work codebase had already started using it in earnest before I joined :)

                                                                        (which is ok – I wouldn’t have used it myself due to the ecosystem volatility and pain points, but doing it as a team at work isn’t that bad, and it’s not just me who has to maintain it when it breaks)

                                                                        1. 2

                                                                          Hm, that’s weird. Why do you have to do async then? Can’t you just stick with threads?

                                                                          Async and threads don’t really solve the same problems and aren’t interchangeable. The alternative to async would more likely be to hand-roll a reactor loop.

                                                                          1. 2

                                                                            To quote the original article:

                                                                            Was spinning up a bunch of OS threads not an acceptable solution for the majority of situations?

                                                                            I tend to agree that “just use threads” would probably be a solution for many problems. Though “just” might mean that someone needs to write an epoll loop once to handle HTTP and offload complete requests/responses to a thread pool.
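                                                                            A toy sketch of the thread-pool half of that design, using only std; a real server would feed the channel from an epoll/mio readiness loop rather than a plain for loop, and all names here are made up:

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Complete requests go into a channel; a fixed pool of workers drains it.
fn run_pool(n_workers: usize, n_jobs: usize) -> usize {
    let (tx, rx) = mpsc::channel::<String>();
    let rx = Arc::new(Mutex::new(rx));

    let workers: Vec<_> = (0..n_workers)
        .map(|_| {
            let rx = Arc::clone(&rx);
            thread::spawn(move || {
                let mut handled = 0usize;
                loop {
                    // Lock only for the duration of pulling one request.
                    let msg = rx.lock().unwrap().recv();
                    match msg {
                        Ok(req) => {
                            // "Handle" the request.
                            let _response = format!("200 OK for {}", req);
                            handled += 1;
                        }
                        Err(_) => break, // channel closed: no more work
                    }
                }
                handled
            })
        })
        .collect();

    // The stand-in for the event loop: feed complete requests in.
    for i in 0..n_jobs {
        tx.send(format!("request {}", i)).unwrap();
    }
    drop(tx); // hang up so workers see the channel close and exit

    workers.into_iter().map(|w| w.join().unwrap()).sum()
}

fn main() {
    assert_eq!(run_pool(4, 100), 100);
}
```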

                                                                            1. 2

                                                                              Using OS threads in this way generally doesn’t scale well. We already know this from years of experience in many other languages. To put it in concrete terms and avoid terminology issues: if the codebase were an HTTP server handling 10K concurrent TCP connections on a machine with 32 hardware CPU cores, a thread per TCP connection (10K threads) isn’t a great way to scale, but a thread per CPU core dedicated to async TCP connection handling (~32 threads, each handling ~312 connections using some kind of nonblocking event loop, or async/await, which is just the fashionable way to factor event-loop-driven programs now) is a great way to scale.

                                                                              1. 7

                                                                                We already know this from years of experience in many other languages.

                                                                                I’d like to challenge this: people have been writing web apps with Django & Rails for years, and that works. async/await can simultaneously be a tremendous help for niche use-cases and unnecessary for majority of use-cases.

                                                                                I haven’t seen a conclusive benchmark that says: “if you are doing HTTP with a database, async/await needs X times less CPU and Y times less RAM”. Up until earlier this year, I hadn’t even seen a conclusive micro benchmark comparing threads and async/await (there’s https://github.com/jimblandy/context-switch now). I don’t agree that there’s anything “we know” here: I don’t know, and I’ve been regularly re-researching this topic for several years.

                                                                                EDIT: to put some numbers into discussion, 10k threads on Linux give on the order of 100MB overhead: https://github.com/matklad/10k_linux_threads.

                                                                                1. 1

                                                                                  My point isn’t so much about the numbers themselves or some particular 10K limit. We could re-make the argument at some arbitrary, higher threshold. The point is about how the SMP machines of today work: how synchronization primitives, context switching, locking, and so on behave. In general, irrespective of programming language, thread-per-CPU scales better at the limit than thread-per-connection (or substitute “connection” for some other logical object whose count can grow much larger than the CPU count as you try to scale up).

                                                                                  Obviously, if one doesn’t care about scalability, then none of this debate matters. You can always claim that the problems you care about or are working on simply don’t stretch current hardware’s capabilities and you don’t care, but that’s kind of a cop-out when discussing language features built to enable concurrency in the first place.

                                                                                  1. 1

                                                                                    I do think that specific thresholds matter; the words “majority of situations” / “many problems” are important.

                                                                                    Of course you can handle more connections per cycle as you go down the levels of abstraction, from OS threads to stackful coroutines to stackless coroutines to manually coded state machines to custom hardware.

                                                                                    The question, for a specific application, is where the threshold lies at which this stops being important and becomes dominated by other performance concerns. It doesn’t make sense to make the network layer infinitely scalable if the bottleneck is the database you use.

                                                                                    The current fashion is to claim that you always need async, at least when doing web stuff, and that threads are just never good enough. That’s a discussion about culture, not a discussion about a language feature.

                                                                                    Language-feature wise, I would lament the absence of other kinds of benchmarks: how dynamically-dispatched+match based resumption compares to a hand-coded event loop, how thread-per-core works with mandatory synchronized wake ups, etc.

                                                                                2. 2

                                                                                  C10K has been solved for years. I mean, you’re not wrong that having a full thread stack for every connection is expensive, but 10k connections isn’t the point at which thread-per-connection breaks.

                                                                      1. 25

                                                                        Note a couple things:

                                                                        • With the 2021 edition, Rust plans to change closure capturing to only capture the necessary fields of a struct when possible, which will make closures easier to work with.
                                                                        • With the currently in-the-works existential type declarations feature, you’ll be able to create named existential types which implement a particular trait, permitting the naming of closure types indirectly.
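                                                                        For the first bullet, a small sketch of the manual workaround that the 2021 edition automates: borrowing the field yourself so the closure captures only that field, not the whole struct (names here are hypothetical):

```rust
struct Widget {
    name: String,
    count: u32,
}

fn bump_disjoint() -> (String, u32) {
    let mut w = Widget { name: "button".to_string(), count: 0 };

    // Pre-2021 workaround: borrow just the field outside the closure so
    // the closure captures only that reference, not all of `w`. The 2021
    // edition's disjoint capture does this automatically.
    let count = &mut w.count;
    let mut bump = move || *count += 1;

    // `w.name` stays usable while the closure holds `w.count`.
    let name_snapshot = w.name.clone();
    bump();
    bump();

    (name_snapshot, w.count)
}

fn main() {
    assert_eq!(bump_disjoint(), ("button".to_string(), 2));
}
```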

                                                                        My general view is that some of the ergonomics here can currently be challenging, but there exist patterns for writing this code successfully, and the ergonomics are improving and will continue to improve.

                                                                        1. 12

                                                                          I have very mixed feelings about stuff like this. On the one hand, these are really cool (and innovative – not many other languages try and do async this way) solutions to the problems async Rust faces, and they’ll definitely improve the ergonomics (like async/await did).

                                                                          On the other hand, adding all of this complexity to the language makes me slightly uneasy – it kind of reminds me of C++, where they just keep tacking things on. One of the things I liked about Rust 1.0 was that it wasn’t incredibly complicated, and that simplicity somewhat forced you into doing things a particular way.

                                                                          Maybe it’s for the best – but I really do question how necessary all of this async stuff really is in the first place (as in, aren’t threads a simpler solution?). My hypothesis is that 90% of Rust code doesn’t actually need the extreme performance optimizations of asynchronous code, and will do just fine with threads (and for the remaining 10%, you can use mio or similar manually) – which makes all of the complexity hard to justify.

                                                                          I may well be wrong, though (and, to a certain extent, I just have nostalgia from when everything was simpler) :)

                                                                          1. 9

                                                                            I don’t view either of these changes as much of a complexity add. The first, improving closure capturing, to me works akin to partial moves or support for disjoint borrows in the language already, making it more logically consistent, not less. For the second, Rust already has existential types (impl Trait). This is enabling them to be used in more places. They work the same in all places though.
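                                                                            For instance, return-position impl Trait already lets a function hand back an otherwise-unnameable closure type (a minimal sketch; make_adder is a made-up name):

```rust
// The closure's concrete type can't be written down, but `impl Trait`
// lets the signature say "some type implementing Fn(i32) -> i32".
fn make_adder(step: i32) -> impl Fn(i32) -> i32 {
    move |x| x + step
}

fn main() {
    let add3 = make_adder(3);
    assert_eq!(add3(4), 7);
    assert_eq!(make_adder(10)(5), 15);
}
```

                                                                            The in-the-works feature extends this kind of existential type to named type aliases, so the same hiding works outside return position.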

                                                                            1. 11

                                                                              I’m excited about the extra power being added to existential types, but I would definitely throw it in the “more complexity” bin. AIUI, existential types will be usable in more places, but it isn’t like you’ll be able to treat it like any other type. It’s this special separate thing that you have to learn about for its own sake, but also in how it will be expressed in the language proper.

                                                                              This doesn’t mean it’s incidental complexity or that the problem isn’t worth solving or that it would be best solved some other way. But it’s absolutely extra stuff you have to learn.

                                                                              1. 2

                                                                                Yeah, I guess my view is that the challenge of “learn existential types” is already present with impl Trait, but you’re right that making the feature usable in more places increases the pressure to learn it. Coincidentally, the next post for Possible Rust (“How to Read Rust Functions, Part 2”) includes a guide to impl Trait / existential types intended to be a more accessible alternative to Varkor’s “Existential Types in Rust.”

                                                                                1. 6

                                                                                  but you’re right that making the feature usable in more places increases the pressure to learn it

                                                                                  And in particular, by making existential types more useful, folks will start using them more. Right now, for example, I would never use impl Trait in a public API of a library unless it was my only option, due to the constraints surrounding it. I suspect others share my reasons too. So it winds up not getting as much visible use as maybe it will get in the future. But time will tell.

                                                                              2. 3

                                                                                eh, fair enough! I’m more concerned about how complex these are to implement in rustc (slash alternative compilers like mrustc), but what do I know :P

                                                                                1. 7

                                                                                  We already use this kind of analysis for splitting borrows, so I don’t expect this will be hard. I think rustc already has a reusable visitor for this.

                                                                                  (mrustc does not intend to compile newer versions of rust)

                                                                                  1. 1

                                                                                    I do think it is the case that implementation complexity is ranked unusually low in Rust’s design decisions, but if I think about it, I really can’t say it’s the wrong choice.

                                                                                2. 4

                                                                                  Definitely second your view here. The added complexity and the trajectory mean I don’t feel comfortable using Rust in a professional setting anymore. You need significant self-control to write maintainable Rust, which is not a good fit for large teams.

                                                                                  What I want is Go-style focus on readability, pragmatism and maintainability, with a borrow checker. Not ticking off ever-growing feature lists.

                                                                                  1. 9

                                                                                    The problem with a Go-style focus here is: what do you remove from Rust? A lot of the complexity in Rust is, IMO, necessary complexity given its design constraints. If you relax some of its design constraints, then it is very likely that the language could be simplified greatly. But if you keep the “no UB outside of unsafe and zero cost abstractions” goals, then I would be really curious to hear some alternative designs. Go more or less has the “no UB outside of unsafe” (sans data races), but doesn’t have any affinity for zero cost abstractions. Because of that, many things can be greatly simplified.

                                                                                    Not ticking off ever-growing feature lists.

                                                                                    Do you really think that’s what we’re doing? Just adding shit for its own sake?

                                                                                    1. 9

                                                                                      Do you really think that’s what we’re doing?

                                                                                      No, and that last bit of my comment is unfairly attributed venting, I’m sorry. Rust seemed like the holy grail to me, I don’t want to write database code in GCd languages ever again; I’m frustrated I no longer feel confident I could corral a team to write large codebases with Rust, because I really, really want to.

                                                                                      I don’t know that my input other than as a frustrated user is helpful. But I’ll give you two data points.

                                                                                      One; I’ve argued inside my org - a Java shop - to start doing work in Rust. The learning curve of Rust is a damper. To me and my use case, the killer Rust feature would be reducing that learning curve. So, what I mean by “Go-style pragmatism” is things like append.

                                                                                      Rather than say “Go does not have generics, we must solve that to have append”, they said “let’s just hack in append”. It’s not pretty, but it means a massive language feature was not built to scratch that one itch. If “all abstractions must have zero cost” is forcing the introduction of language features that in turn make the language increasingly hard to understand, perhaps the right thing to do is, sometimes, to break the rule.

                                                                                      I guess this is exactly what you’re saying, and I guess - from the outside at least - it certainly doesn’t look like this is where things are headed.

                                                                                      Two, I have personal subjective opinions about async in general, mostly as it pertains to memory management. That would be fine and well, I could just not use it. But the last few times I’ve started new projects, libraries I wanted to use had abandoned their previously synchronous implementations and gone all-in on async. In other words, introducing async forked the crate community, leaving - at least from my vantage point - fewer maintained crates on each side of the fork than there were before the fork.

                                                                                      Two being there as an anecdote from me as a user, i.e. my experience of async was that it (1) makes the Rust hill even steeper to climb, regressing on the no. 1 problem I have as someone wanting to bring the language into my org; (2) it forked the crate community, such that the library ecosystem is now less valuable. And, I suppose, (3) it makes me worried Rust will continue adding very large core features, further growing the complexity and thus making it harder to keep a large team writing maintainable code.

                                                                                      1. 9

                                                                                        the right thing to do is, sometimes, to break the rule.

                                                                                        I would say that the right thing to do is to just implement a high-level language without obvious mistakes. There shouldn’t be a reason for a Java shop to even consider Rust, it should be able to just pick a sane high-level language for application development. The problem is, this spot in the language landscape is currently conspicuously void, and Rust often manages to squeeze in there despite the zero-cost abstraction principle being antithetical to app dev.

                                                                                        That’s the systems dynamic that worries me a lot about Rust: there’s pressure to make it a better language for app development at the cost of making it a worse language for systems programming, for lack of an actual reasonable app dev language one can use instead of Rust. I can’t say it is bad. Maybe the world would be a better place if we had just a “good enough” language today. Maybe the world would be better if we wait until an “actually good” language is implemented.

                                                                                        So far, Rust resisted this pressure successfully, even exceptionally. It managed to absorb “programmers can have nice things” properties of high level languages, while moving downwards in the stack (it started a lot more similar to go). But now Rust is actually popular, so the pressure increases.

                                                                                        1. 4

                                                                                          I mean, we are a java shop that builds databases. We have a lot of pain from the intersection of distributed consensus and GC. Rust is a really good fit for a large bulk of our domain problem - virtual memory, concurrent B+-trees, low latency networking - in theory.

                                                                                        2. 5

                                                                                          I guess personally, I would say that learning how to write and maintain a production quality database has to be at least an order of magnitude more difficult than learning Rust. Obviously this is an opinion of mine, since everyone has their own learning experiences and styles.

                                                                                          As to your aesthetic preferences, I agree with them too! It’s why I’m a huge fan of Go. I love how simple they made the language. It’s why I’m also watching Zig development closely (I was a financial backer for some time). Zero cost abstractions (or, in Zig’s case, memory safety at compile time) isn’t a necessary constraint for all problems, so there’s no reason to pay the cost of that constraint in all cases. This is why I’m trying to ask how to make the design simpler. The problem with breaking the zero cost abstraction rule is that it will invariably become a line of division: “I would use Rust, but since Foo is not zero cost, it’s not appropriate to use in domain Quux, so I have to stick with C or C++.” It’s antithetical to Rust’s goals, so it’s really really hard to break that rule.

                                                                                          I’ve written about this before, but just take generics as one example. Without generics, Rust doesn’t exist. Without generics (and, specifically, monomorphized generics), you aren’t able to write reusable high performance data structures. This means that when folks need said data structures, they have to go implement them on their own. This in turn likely increases the use of unsafe in Rust code and thus significantly diminishes its value proposition.

                                                                                          Generics are a significant source of complexity. But there’s just no real way to get rid of them. I realize you didn’t suggest that, but you brought up the append example, so I figured I’d run with it.
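                                                                                          A toy illustration of that point: one generic definition, monomorphized into a specialized copy per element type at compile time, with no boxing or dynamic dispatch (this Stack is a deliberately trivial stand-in for a real high-performance structure):

```rust
// Minimal generic stack: the compiler stamps out a concrete, fully
// inlined version for each T it's used with.
struct Stack<T> {
    items: Vec<T>,
}

impl<T> Stack<T> {
    fn new() -> Self {
        Stack { items: Vec::new() }
    }
    fn push(&mut self, item: T) {
        self.items.push(item);
    }
    fn pop(&mut self) -> Option<T> {
        self.items.pop()
    }
}

fn main() {
    let mut ints: Stack<i32> = Stack::new();
    ints.push(1);
    ints.push(2);
    assert_eq!(ints.pop(), Some(2));

    // Same source, different monomorphized instantiation.
    let mut words: Stack<&str> = Stack::new();
    words.push("hi");
    assert_eq!(words.pop(), Some("hi"));
}
```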

                                                                                          1. 2

                                                                                            I guess personally, I would say that learning how to write and maintain a production quality database has to be at least an order of magnitude more difficult than learning Rust.

                                                                                            Agree, but precisely because it’s a hard problem, ensuring everything else reduces mental overhead becomes critical, I think.

                                                                                            If writing a production db is 10x as hard as learning Rust, but reading Rust is 10x as hard as reading Go, then writing a production grade database in Go is 10x easier overall, hand wavingly (and discounting the GC corner you now have painted yourself into).

                                                                                            One thing worth highlighting is why we’ve managed to stick to the JVM, and where it bites us: most of the database is boring, simpleish code. Hundreds of thousands of LOC dealing with stuff that isn’t on the hot-path. In some world, we’d have a language like Go for writing all that stuff - simple and super focused on maintainability - and then a way to enter “hard mode” for writing performance critical portions.

                                                                                            Java kind of lets us do that; most of the code is vanilla Java; and then critical stuff can drop into unsafe Java, like in the virtual memory implementation. The problem with that in the JVM is that the inefficiency of vanilla Java causes GC stalls in the critical code.. and that unsafe Java is horrifying to work with.

                                                                                            But, of course, then you need to understand both languages as you read the code.

                                                                                        3. 4

                                                                                          I think the argument is to “remove” async/await. Neither C nor C++ has async/await, and people write tons of event-driven code with them; they’re probably the pre-eminent languages for that. My bias for servers is to have small “manual” event loops that dispatch to threads.

                                                                                          You could also write Rust in a completely “inverted” style like nginx (I personally dislike that, but some people have a taste for it; it’s more like “EE code” in my mind). The other option is code generation which I pointed out here:

                                                                                          https://lobste.rs/s/rzhxyk/plain_text_protocols#c_gnp4fm

                                                                                          Actually that seems like the best of both worlds to me. High level and event driven/single threaded at the same time. (BTW the video also indicated that the generated llparse code is 2x faster than the 10 year old battle-tested, hand-written code in node.js)

                                                                                          So basically it seems like you can have no UB and zero cost abstractions, without async/await.

                                                                                          1. 5

                                                                                            After our last exchange, I don’t really want to get into a big discussion with you. But I will say that I’m quite skeptical. The async ecosystem did not look good prior to adding async/await. By that, I mean, that the code was much harder to read. So I suppose my argument is that adding some complexity to language reduces complexity in a lot of code. But I suppose folks can disagree here, particularly if you’re someone who thinks async is overused. (I do tend to fall into that camp.)

                                                                                            1. 2

                                                                                              Well it doesn’t have to be a long argument… I’m not arguing against async/await, just saying that you need more than 2 design constraints to get to “Rust should have async/await”. (The language would be a lot simpler without it, which was the original question AFAICT.)

                                                                                              You also need:

                                                                                              3. “ergonomics”, for some definition of it
                                                                                              4. textual code generation is frowned upon

                                                                                              Without constraint 3, Rust would be fine with people writing nginx-style code (which I don’t think is a great solution).

                                                                                              Without constraint 4, you would use something like llparse.

                                                                                              I happen to like code generation because it enables code that’s extremely high level and efficient at the same time (like llparse), but my experience with Oil suggests that most “casual” contributors are stymied by it. It causes a bunch of problems in the build system (build systems are broken and that’s a different story). It also causes some problems with tooling like debuggers and profilers.

                                                                                              But I think those are fixable with systems design rather than language design (e.g. the C preprocessor and JS source maps do some limited things)

                                                                                              On a related note, Go’s philosophy is to fix unergonomic code with code generation (“go generate” IIRC). There are lots of things that are unergonomic in Go, but code generation is sort of your “way out” without complicating the language.

                                                                                    2. 1

                                                                                      I didn’t know about the first change. That’s very exciting! This is definitely something that bit me when I was first learning the language.

                                                                                    1. 13

                                                                                      Damn, this describes very well the sentiment I’ve been having in my spine towards Rust for the last few years. It kinda seems like async was its shark-jump moment.

                                                                                      (Common Lisp is pretty nice, though. We have crazy macros and parentheses and a language ecosystem that is older than I am and isn’t showing any signs of changing…)

                                                                                      Something is subtly wrong about Common Lisp as well. On paper, it seems like a perfect ecosystem for 99% of the problems out there. It has an unsurpassed runtime, pretty good performance (on SBCL) for such a dynamic language and better stability than anything. What’s wrong with it is that nobody’s using it despite these virtues.

                                                                                      1. 5

                                                                                        If I had to hazard a guess, it’s probably because CL never really figured out its deployment story (and it doesn’t really integrate well with its environment in other respects because the spec is so generic). Going from “runs inside my Emacs/SLIME setup” to “runs in production” is surprisingly non-trivial (as opposed to golang, where you can scp a static binary).

                                                                                        There are other reasons, though (see: the Lisp Curse, “worse is better”, etc).

                                                                                      1. 4

                                                                                        I found that managing yubikey ssh identities with gpg-agent was painless. I’ve never encountered issues with the agent forgetting the key, though I have a gpg-connect-agent updatestartuptty /bye in my shellrc file.
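For context, a minimal sketch of how that line typically sits in a shellrc (the `SSH_AUTH_SOCK` export is an assumption about a standard gpg-agent SSH setup, not something the comment specifies):

```shell
# Point SSH at gpg-agent's ssh-agent emulation (assumes enable-ssh-support
# is set in gpg-agent.conf; adjust for your setup).
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"

# Tell gpg-agent which tty to use for pinentry, so the agent doesn't
# lose track of where to prompt when a new terminal is opened.
gpg-connect-agent updatestartuptty /bye >/dev/null
```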

                                                                                        1. 2

                                                                                          I mean, if it works for you, great! My experience with most of the gpg stack has been mostly negative, mainly because of the “jack of all trades, master of none” effect: gpg tries to support all the key types, ways of connecting to smartcards (directly and via pcscd), … – and at least for me, seems to suffer in usability as a result (too many moving parts and things that can go wrong). I think my machine has about 3 different types of GnuPG keyring, managed by different pieces of software (GNOME Keyring, pacman, gpg-agent itself, …). When it works, it’s great, but when it breaks…

                                                                                          The PIV solution is interesting in that it seems to be much better designed (in contrast to the GPG API which is a complete tire fire, as far as I understand it) – and yubikey-agent seems to be a relatively simple Go app that just proxies authentication/signing commands through to pcscd and back.

                                                                                          1. 1

                                                                                            I didn’t mean my comment as a dismissal of your article. I was just offering an alternate point of view on using the Yubikey with plain GPG. :)

                                                                                        1. 1

                                                                                          Interesting that this claims a maximum number of 8 digits. I have a yubikey 4c nano holding a gpg-agent based key with a 30-digit PIN. (I hope!)

                                                                                          1. 4

                                                                                            What happens if you try to activate the key using only the first 8 digits?

                                                                                            1. 1

                                                                                              Interesting – I didn’t realise the gpg-agent-based solution had better PIN support in that regard! Personally, my PIN is rather short; given that it’s not directly encrypting anything (and the hardware enforces a lockout after 3 failed tries), I don’t think this is the end of the world :)

                                                                                            1. 1

                                                                                              Nice! As someone who writes a fair deal of Common Lisp, this is a really handy feature to have :)

                                                                                                1. 7

                                                                                                  This is a part of why I really love Common Lisp. Many of its libraries (like bordeaux-threads, the de facto threading library, and usocket, the BSD sockets thing) were last updated something like 10-20 years ago, and they still just work.

                                                                                                  This kind of ecosystem stability is refreshing when compared to a language like Rust, where all my code became unidiomatic / dependent on now-stale libraries within half a year… (!)

                                                                                                  1. 3

                                                                                                    But all your Rust code still works, doesn’t it? So it must be you love Common Lisp for some other reasons: not that bordeaux-threads still works, but that bordeaux-threads is still idiomatic.

                                                                                                  1. 2

                                                                                                    If it uses WhatsApp web, shouldn’t that mean that your phone has to always be turned on, online with WhatsApp running for you to use this?

                                                                                                    1. 2

                                                                                                      It does! However, you can run WhatsApp in a VM – my personal setup involves an android-x86 QEMU VM that uses libvirt. Thanks to virt-manager / SPICE’s USB redirection feature, you can even plug in a UVC webcam to scan the QR code…

                                                                                                      1. 2

                                                                                                        Interesting, so you’d just run it on a device that’s always on. I’ve heard from past experiments that WhatsApp is inclined to ban accounts that try to circumvent its infrastructure, but this should be safe, right? Or might they become suspicious if an account is always logged on, always active?

                                                                                                        1. 2

                                                                                                          It’s an open question – but I’ve been doing this for a while now and they haven’t seemed to notice…

                                                                                                          They definitely do ban people who don’t use their official mobile app, but so far they don’t seem to crack down on unofficial web client usages.

                                                                                                    1. 9

                                                                                                      I’ve noticed the same issue with Electron apps on my low-RAM devices. Anything with 4GB or less of RAM can’t run more than two of them at once without chugging into swap space or, worse, getting OOM-killed.

                                                                                                      Particularly worrying is most of my messaging apps are exactly like that: Riot/Element, FB Messenger, WhatsApp, Telegram (this last one is actually pretty optimized and doesn’t eat too much). Long gone are the days where an XMPP bridge would solve the issue, as most of the content is now images, audio messages, animated GIFs, emojis and other rich content.

                                                                                                      Thanks for the article; at least now I know I can replace one of the culprits with a daemonized, non-Electron app and just use the phone as a remote control.

                                                                                                      1. 9

                                                                                                        As far as I am aware, Telegram is not Electron, it is actually a Qt based app.

                                                                                                        1. 7

                                                                                                          Long gone are the days where an XMPP bridge would solve the issue, as most of the content is now images, audio messages, animated GIFs, emojis and other rich content.

                                                                                                          I’m not sure what you mean. Most XMPP clients today (like Conversations, Dino, etc.) gracefully handle all of the items you mentioned, and with much less resources than a full web browser would require. I definitely recommend XMPP bridges when possible where the only alternative is an “app” that is really a full web browser.

                                                                                                          1. 4

                                                                                            Of those listed, I think Riot will maybe disappear at some point. Riot has (amazingly) managed to have native desktop alternatives pop up: Quaternion, gomatrix and nheko are all packaged for my Linux distribution.

                                                                                                            1. 3

                                                                                              I understand the desire to use something browser-ish and cross-platform. I don’t fully understand why Electron (hundreds of MB footprint) is so popular over Sciter (5 MB footprint).

                                                                                                              1. 1

                                                                                                                Electron is fully free, Sciter is closed-source with a Kickstarter campaign in progress to open-source it.

                                                                                                                For the large companies, the price of something like Sciter should be a non-issue. If I were reviewing a proposal to use it, though, I’d be asking about security review and liability: HTML/CSS/JS have proven to be hard to get secure, Electron leverages the sugar-daddy of Google maintaining Chrome with security fixes, what is the situation with Sciter like?

                                                                                                                Ideally, the internal review would go “okay, but if we only connect to our servers, and we make sure we’re not messing up TLS/HTTPS, then the only attack model is around user-data from other users being rendered in these contexts, and we have to have corner-case testing there no matter which engine we use, to make sure there are no leaks, so this is all manageable”. But I can see that “manageable” might not be enough to overcome the initial knee-jerk reactions.

                                                                                                              2. 2

                                                                                                                Long gone are the days where an XMPP bridge would solve the issue

                                                                                                                I use Dino on desktop to replace the bloated Discord & WhatsApp clients, and it works fine (with inline images, file sharing, etc working too).

                                                                                                                Disclaimer: I did, however, write the WhatsApp bridge :p

                                                                                                                1. 1

                                                                                                                  Isn’t the reason that XMPP isn’t useful more to do with these services wanting to maintain walled gardens? And further, isn’t that a result of the incentives in a world of “free” services?

                                                                                                                1. 3

                                                                                                                  Nice! As a chat systems nerd, I’m always interested in people trying to do something new in this space :)

                                                                                                                  I like that a form of catchup is baked into the protocol from day 1 – being able to implement the equivalent of a message broker’s “durable subscription” is very valuable for not dropping messages on the floor (like in IRC where your connection drops). Minimalism is also a worthy goal – XMPP and Matrix do indeed try to promise the world, and are extensible enough that you can send anything over them, having a deliberately spartan alternative is a nifty idea.

                                                                                                                  One thing that does seem lacking is any sort of discussion around how bridging to other protocols might work – would it just be a special case of s2s (as in XMPP), or would you design out a special extension for it (as in Matrix)?

                                                                                                                  1. 1

                                                                                                                    This is not a full federated your-server-talks-to-every-other-server type thing.

                                                                                                                    There is no s2s. It seems it is meant to compete with IRC-ish c2s only and use centralised servers.

                                                                                                                    1. 1

                                                                                                                      Yeah, because centralized servers are simple to do and relatively hard to break. And because I don’t know a whole lot about decentralized chat systems. Seems like with a centralized chat system you have to trust the owners/operators of the server your client talks to, while with a decentralized chat system you need to trust the owners/operators of every server your own server talks to.

                                                                                                                      Frankly, if I want to talk in the channel #foo@example.com then I see no reason to have to go through an intermediary home-server rather than just talk to example.com directly, and if you want a global topic #foo that any server can participate in then that seems Hard Enough I Don’t Want To Bother. And for bridging to other protocols, I’ve never seen it done well enough to be compelling enough to bother.

                                                                                                                      My mind may be changed on these points, however. Most importantly, authentication and any potential account metadata is decentralized, so no matter whose server you’re talking to, you can still own your own identity. (This part I HAVE thought about.)

                                                                                                                      1. 1

                                                                                                                        Yeah, because centralized servers are simple to do and relatively hard to break. And because I don’t know a whole lot about decentralized chat systems.

                                                                                                                        I’ve been idly wondering how one would do a decentralized chat system since I saw your sketch here. I think one would partition messages across peers with a distributed hash table, and have messages point to the most recent message posted in the chat/channel they’re in like you’ve discussed. That seems to lead to messages forming a DAG that we need some way of making eventually consistent, so maybe one needs some kind of CRDT for the messages? It’s fun to think about.

                                                                                                                        1. 1

                                                                                                          For sure, except in a Global Network like original IRC dreams, chatrooms effectively end up being single-server. Though it’s useful as an admin to be able to choose the server my chatroom is on without requiring my users to connect to Yet Another Server.

                                                                                                                          If you have decentralized authentication/identity and account metadata (and 1:1 chats, if those are supported) then you are a fully federated type thing.

                                                                                                                          1. 2

                                                                                                            Now I want to build something like this (IRC) distributed and federated. Great, as if I don’t have enough unfinished projects.

                                                                                                                            1. 2

                                                                                                                              I’d love to hear your thoughts on design, to be honest. Like I said, I don’t know much about distributed chat systems, but learning more would be nice.

                                                                                                                            2. 1

                                                                                                                              The primary use case of multiple servers is for operators to be able to bounce instances or have some die without losing what might be an important piece of infrastructure. Anything else, including possible load sharing or ping time optimization, is at best a nice side effect.

                                                                                                                              1. 1

                                                                                                                Right, sorry, I should be careful to say “centralized” in this context. Number of “servers” is a red herring.

                                                                                                                              2. 1

                                                                                                                                Sometimes I wonder if federated is bad for implementations, but federated identity on whatever centralized/decentralized server we want is good.

                                                                                                                                1. 1

                                                                                                                                  Like OpenID? That never took off. Nowadays Facebook is probably the largest identity provider online.