Threads for arathorn

    1. 20

      I’m a bit surprised by the negative comments here. The sluggishness of Element is one of the worst problems for adoption. I am very glad it’s being worked on.

      And indeed, I just tried out Element X, it’s very fast and comparable to Whatsapp and co, very impressive!

      1. 4

        It’s a step in the right direction, but I think many feel burned by the performance of Element. Since switching to FluffyChat my view of Matrix has totally changed for the better: cleaner UX and far better performance when fetching updates on already-joined rooms, rivaling that of sliding sync.

        1. 1

          it’s a bit unfortunate if folks won’t try Element X (which is lightyears ahead of both Element and FluffyChat) because of bad experiences on Element. :/

    2. 24

      I wish the Matrix team would have more focus. They’re working on all this new experimental stuff - a new client, a new server, etc. - all the while the existing stuff is severely broken in many ways.

      1. 16

        I think you’ve entirely missed the point: we’ve focused specifically on fixing the existing severely broken stuff by writing a client to replace the old broken client. We haven’t written a new server; we added an API to the existing server via a shim, so we could focus and implement faster. There are no new features in Matrix 2.0 (other than native group VoIP) - everything else is either removing stuff (the broken old authentication code in favour of Native OIDC), or fixing stuff (the horrific performance problems, by introducing Sliding Sync and Faster Joins).

        1. 2

          With the new server, I was thinking of Dendrite. It’s good you’re fixing Element with Element X, but it feels like it’s been in beta forever, while people keep running into problems with the old Element.

          1. 1

            Synapse (the 1st gen server) has simply had the most focus, by far - Dendrite has ended up being a test bed for experimentation. Synapse has improved unrecognisably over the years and now is basically boring stable tech.

        2. 1

          What about issues related to e2ee and verification? Some of these have been open for a very long time, and I’ve personally experienced many of these, for years. It definitely gives the impression that the Matrix team has lost focus when problems like these exist in a core feature for Matrix (e2ee).

          https://github.com/vector-im/element-android/issues/5305

          https://github.com/vector-im/element-android/issues/2889

          https://github.com/vector-im/element-android/issues/1721

          There are tons more of these issues in your bug tracker, some even reported against the new Rust crypto thing; these are just some of the ones I am subscribed to. Is functional E2EE a priority for Matrix?

          1. 1

            I can’t remember when I last had this issue because of Matrix doing something wrong, so maybe it’s just not happening that often. Personally, from my experiments, I wouldn’t say it even exists.

          2. 1

            We rewrote E2EE on a single audit-ready Rust codebase rather than chasing the combinatoric explosion of bugs across the various separate web, iOS & Android implementations, which took ages, but has finally landed - with the exception of Web, which should merge next week: https://github.com/vector-im/element-web/issues/21972#issuecomment-1705224936 is the thing to track. Agreed that this strategy left a lot of people in a bad place while the rewrite happened, but hopefully it will transpire to be the right solution in the end.

    3. 14

      I guess I’m very late to the party here - sorry for not spotting it earlier. I’ll try to quickly cover the points raised:

      • Canonical JSON isn’t the disaster it’s made out to be. The “the spec doesn’t actually define what the canonical json form is strictly” statement is just false: https://spec.matrix.org/v1.8/appendices/#canonical-json is the link. Sure, it can be frustrating to find that different languages’ JSON emitters are hard to canonicalise (and we certainly had some dramas back when we allowed floats in Canonical JSON, given different precision etc), but it’s a wart rather than a catastrophe. The Dendrite devs fixed their bugs on this years ago (although one of them still likes to kvetch about it). In future we can and will switch to a canonical binary format (e.g. the MIMI IETF work is rather quaintly fixated on using TLS Presentation Layer as a binary format).

      • Decentralised rooms are a feature, not a bug. Just like decentralised VCS repositories are a feature of Git and friends. Yes, this means that buggy implementations can get splitbrained, and rooms can be splitbrained due to netsplits, but these days problematic splits are very rare indeed. We fixed the primary mistakes which caused these over 5 years ago: https://github.com/matrix-org/matrix-spec-proposals/blob/f714aaadd011ac736d779f8460202a8d95799123/proposals/1442-state-resolution.md. The complaints about “not being able to guarantee that data is deleted in a decentralised system!” are asinine, obviously: there is no way to force data to be deleted on other folks’ computers, short of utterly evil DRM.

      • In terms of room memberships not being deletable: this is mitigated by MSC4014, which provides per-room pseudo IDs so that the memberships are pseudonymous, so it doesn’t necessarily matter much that you can’t delete them. This has now been implemented in Dendrite.

      • Meanwhile, other state can be deleted by upgrading the room and discarding the previous version of the room (upgrading rooms between versions is a fairly common operation, albeit one we need to make more seamless). We’re also working on encrypting room state e2ee via MSC3414, which then makes the lack of fine-grained deletion less important.

      • Stuff about the DAG being hard to linearise because it is deliberately allowed to be split into discontiguous chunks is just not true, and misses one of the nicest bits of Matrix: that you don’t have to replicate the full DAG to participate in a room (thus allowing fast lazyloaded joins etc). The “depth” parameter has been obsolete since 2018 and is marked as such in the SS spec, and tiebreaking on forgeable timestamps is only done as a totally arbitrary, non-security-related deterministic tiebreaker.

      • Similarly, the fact remote servers can send old messages into a room (which may or may not be “fake”; who’s to say?) is a feature. Just like it’s a feature for email queues to be able to be flushed, or for users to send email from 1971 if they try hard enough.

      • A valid criticism (at last) is that E2EE is fragile. We’re fixing this both by making the current implementations more robust (adopting the decent and audited matrix-rust-sdk implementation) and by reworking how device lists work in the context of MLS and MIMI in the long run. Definite mea culpa on this one.

      • Another valid criticism is about lack of authed media and the fact that remote media gets cached on a user’s server when they view it. We’re working on this currently (despite the accusations that we’re ignoring GH issues…)

      • Finally; yes, moderation needs more work. Not for the reason mentioned here (state resets causing moderation problems are incredibly rare since we fixed state res in 2018), but because we still need things like IP or CIDR based banning, and better tooling for moderators rather than server admins. We’ve just added someone from the Synapse team to work fulltime on moderation tooling (a few weeks ago) though, so expect progress there. Also, the Mjolnir moderation bot (while feeling alarmingly eggdrop-like) does work pretty well - but it’s not exactly mass-market yet.

      Hope this provides some clarity on the mix of questionable and legit points raised in the post. Sorry if any of it is incoherent; has been written off the top of my head on mobile.

      1. 2

        The “the spec doesn’t actually define what the canonical json form is strictly” statement is just false: https://spec.matrix.org/v1.8/appendices/#canonical-json is the link. Sure, it can be frustrating to find that different languages’ JSON emitters are hard to canonicalise …

        That link says that e.g.

        Numbers in the JSON must be integers in the range [-(2**53)+1, (2**53)-1], represented without exponents or decimal places, and negative zero -0 MUST NOT appear.

        which suggests that the subsequent example

        {
           "a": -0,
           "b": 1e10
        }
        

        should fail to parse. But instead, it’s said that from that JSON payload

        The following canonical JSON should be produced:

        {"a":0,"b":10000000000}
        

        The literal 1e10 may represent the same value as the literal 10000000000, so maybe we can infer that exponents and decimal places can be allowed as input, so long as the output transforms them to a valid value per the spec. But the literal -0 does not represent the same value as the literal 0; they are different values. An implementation that rejected input with a number literal -0 would be reasonable; why should it be transformed to 0? And what about the number literal 1.0? Should it be rejected, or transformed to 1? What about 1.00000001?

        This is why I say that the spec is at least somewhat ambiguous.
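
        For what it’s worth, here’s a rough sketch (in Python) of one possible reading of the spec plus its example: accept -0 and exponent forms on input, and normalise them on output. This is just my own interpretation for illustration, not what any particular homeserver actually does:

        import json

        def canonical_json(value) -> bytes:
            """Encode `value` as Matrix canonical JSON: sorted keys, no insignificant
            whitespace, UTF-8 output, and integer-only numbers in the +/- 2**53 range."""
            def normalise(v):
                if isinstance(v, float):
                    if v != int(v):
                        raise ValueError("non-integer numbers are not allowed")
                    v = int(v)  # 1e10 -> 10000000000, -0.0 -> 0
                if isinstance(v, int) and not isinstance(v, bool):
                    if not -(2 ** 53) + 1 <= v <= 2 ** 53 - 1:
                        raise ValueError("integer out of range")
                if isinstance(v, dict):
                    return {k: normalise(x) for k, x in v.items()}
                if isinstance(v, list):
                    return [normalise(x) for x in v]
                return v

            return json.dumps(
                normalise(value), ensure_ascii=False, separators=(",", ":"), sort_keys=True
            ).encode("utf-8")

        # Note that json.loads already turns the literal -0 into the int 0 and 1e10 into
        # a float, so whether such inputs get "rejected" depends on the parser as much as
        # on the canonicaliser.
        print(canonical_json(json.loads('{"a": -0, "b": 1e10}')))  # b'{"a":0,"b":10000000000}'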

      2. 1

        The complaints about “not being able to guarantee that data is deleted in a decentralised system!” are asinine, obviously: there is no way to force data to be deleted on other folks’ computers, short of utterly evil DRM.

        Technical details aside, do people not have the right to delete stuff they’ve created?

        1. 3

          Of course people have the right to delete stuff they’ve created! They’re even guaranteed it under GDPR. Which is why Matrix supports deleting messages (aka redactions), and why all well-behaved Matrix implementations uphold deletions, as explained below: https://lobste.rs/s/wvi9xw/why_not_matrix#c_2eqdof.

          The quibbling point from the original post is that you can’t guarantee that there aren’t malicious servers in the room who will ignore deletion requests. Just as you can’t guarantee that there aren’t malicious users busily publishing screenshots of your undeleted conversations too.

          1. 2

            Good to hear!

            The quibbling point from the original post is that you can’t guarantee that there aren’t malicious servers in the room who will ignore deletion requests. Just as you can’t guarantee that there aren’t malicious users busily publishing screenshots of your undeleted conversations too.

            Couldn’t you do a kind of check?

            Say someone sends a (valid) delete request to server S1 for entity ID 123. The server S1 dutifully deletes entity ID 123, and (verifiably) forwards that delete request to peer servers S2 and S3. After some time, S1 could query S2 and S3 for entity ID 123, or for some higher order collection that would have included entity ID 123, if it still existed. If the response included entity ID 123, then the corresponding server would be marked as malicious, penalized, and, eventually, maybe, removed from the network altogether.
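
            Something like this, perhaps - purely a sketch of the idea above; nothing like it exists in the Matrix spec, and the strike threshold and fetch_event helper are made up:

            import time

            STRIKE_LIMIT = 3  # arbitrary: strikes before we stop trusting a server
            strikes: dict[str, int] = {}

            def audit_deletion(event_id, peers, fetch_event, delay=3600):
                """After forwarding a delete for `event_id` to `peers`, wait a while and
                then ask each peer for it again.  `fetch_event(server, event_id)` is a
                hypothetical helper that returns the content if the peer still serves it,
                or None once the peer has honoured the deletion."""
                time.sleep(delay)
                for server in peers:
                    if fetch_event(server, event_id) is not None:
                        strikes[server] = strikes.get(server, 0) + 1
                        if strikes[server] >= STRIKE_LIMIT:
                            print(f"{server} keeps serving deleted events; consider defederating")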

            edit: But, yeah, this reflects a core problem with decentralized systems, I guess! Users both want and need to delegate their trust to a well-defined authority, so that issues like this one (and others) can be authoritatively resolved.

            1. 1

              To be fair, a hostile server could lie about this to you while still keeping the record for the nasties.

              But yes, that’s a great idea, and something Matrix home servers could definitely do to at least deal with bad implementations.

              In my experience though, Matrix doesn’t even seem to support deleting events on the same home server. Now maybe that’s my client not implementing redaction correctly, or it’s the server, but that’s exactly the problem Matrix is constantly facing here: it’s moving fast and changing fundamentals all the time. Clients and servers are barely catching up, and there isn’t actually an end-to-end implementation that fixes all the issues that @arathorn described here (e.g. that MSC being implemented in Dendrite and not Synapse). In my case, it was FluffyChat failing to redact an event on a Synapse server, but I suspect the level of chaos out there is much worse than being described here.

              1. 1

                To be fair, a hostile server could lie about this to you while still keeping the record for the nasties.

                Sure. At the end of the day, once you send data to a node you don’t control, it’s, well, outside of your control. Anything you try to do to control it is gonna be best-effort at, well, at best.

                issues … chaos

                Of course! Opting in to decentralization necessarily means opting out of anything that relies on a central authority. Deletable content is one example — no decentralized system can ever provide true deletes just by definition — but there are countless others. No point trying to paper over the chaos!

      3. 1

        Similarly, the fact remote servers can send old messages into a room (which may or may not be “fake”; who’s to say?) is a feature. Just like it’s a feature for email queues to be able to be flushed, or for users to send email from 1971 if they try hard enough.

        There’s so much to unpack here…

        First, wait what? It’s “a feature […] for users to send email from 1971 if they try hard enough”?? What possible use case is that supposed to represent? As someone who’s been running mail servers for decades (but not since 1971, thank god), I sure wish we could just say “sorry, you can’t send 10 year old emails” or something. That’s a ludicrous feature to have. And yes, I do think offline access is a valuable feature, just not that we should be allowed to inject supposedly 50 year old messages in a stream and expect things to make any sort of sense.

        Also, I find this sort of tone concerning:

        (which may or may not be “fake”; who’s to say?)

        well… surely we should have some way of authenticating users somewhere, somehow, no? Isn’t that what E2EE is supposed to cover?

        To take a step back here, I find Matrix’s approach to those problems and critiques in general to be quite cavalier and, to a certain extent, paternalistic, in the sense that you assume we don’t know what we’re talking about. For years now I’ve been hearing similar responses: “yes, E2EE is fragile, my bad, gotta fix this soon pinky promise”, “moderation needs work”, “there’s an MSC implemented in Dendrite that fixes everything, no biggie”. I’m sorry, but that just doesn’t fly for me anymore.

        Just last week I sent myself an attachment from FluffyChat on Android to my own avatar from OFTC, in the feeble hope I might have been able to get an HTTP link for a file in a print shop (don’t ask). It worked in the sense that my IRC client got it, but I didn’t see the link from FluffyChat, so it actually failed to do what I needed. I was hoping I could then delete this message and get rid of that (public!) file (even though it’s hidden behind a secret URL). That didn’t work either: I deleted the message in FluffyChat, and it’s still out there, on that home server.

        In other words, I tried to redact a message from a well-known, mainstream client, and it didn’t work: the message is still there, on that very home server (not the federation!).

        It’s one thing to argue that decentralized systems are hard, and that moderation on federated system is really hard. But this, this is different. Those are basic interoperability issues for basic features that currently Do Not Work in Matrix, and makes me unlikely to recommend people switch to Matrix.

        It actually makes me even more uncomfortable with the whole thing seeing how the Matrix lead responds to such criticism, barely acknowledging any of those issues, while, I suspect, being painfully aware of how accurate many of those are internally.

        How honest are we being with ourselves here?

        1. 1

          Missed this at the time; responding for posterity: I’m not trying to be cavalier here. In my opinion there are three legit points that this article raised, which I listed at the end of my response: fragile E2EE (fixed by matrix-rust-sdk), media repo problems (lack of auth, lack of DELETE, lack of ability to disable caching) and lack of moderation tooling (although that’s improved in the last 15 days thanks to community efforts: https://matrix.org/blog/2023/09/15/this-week-in-matrix-2023-09-15/#department-of-trust-safety-shield).

          I continue to believe that the canonical JSON complaints are a wart rather than a serious defect; meanwhile state resolution is generally reliable these days; and if I appear paternalistic and imply that the author doesn’t know what they’re talking about, it’s because they lead with stupid points like “you can’t guarantee that other servers will delete your data”.

          So, I’m trying to be honest with myself, and prioritise appropriately. For instance, the media repo issues were already actively being worked on, but have been bumped still higher in priority.

      4. 1

        The complaints about “not being able to guarantee that data is deleted in a decentralised system!” are asinine, obviously: there is no way to force data to be deleted on other folks’ computers, short of utterly evil DRM.

        There is comfort in knowing a system at least tries to guarantee deleting something, versus hoping it can delete something. People like knowing they can remove things from well-intentioned people’s servers. It’s not asinine to hope for the best, even if you’re not planning for the worst.

        1. 4

          Totally, well-behaved Matrix servers do delete data on request! The way it works is that the DAG signs a hash of the data, not the data itself, so if folks want to delete nodes in the DAG then they send a “redaction” event, which the servers in the room apply to discard the underlying data in question. The details are at https://spec.matrix.org/v1.8/client-server-api/#redactions, and Matrix servers implement these by default. You’d have to maliciously tweak the server as an admin not to uphold them.
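
          For the curious, the client-side half of that is roughly one call to the redaction endpoint of the client-server API. A minimal sketch with Python’s requests library (homeserver URL, token, room and event IDs are all placeholders):

          import uuid
          from urllib.parse import quote

          import requests

          HOMESERVER = "https://matrix.example.org"  # placeholder
          ACCESS_TOKEN = "syt_example_token"         # placeholder
          ROOM_ID = "!someroom:example.org"          # placeholder
          EVENT_ID = "$some_event_id"                # placeholder

          # PUT /_matrix/client/v3/rooms/{roomId}/redact/{eventId}/{txnId} sends an
          # m.room.redaction event, asking every server in the room to strip the
          # target event's content.
          txn_id = uuid.uuid4().hex
          resp = requests.put(
              f"{HOMESERVER}/_matrix/client/v3/rooms/{quote(ROOM_ID, safe='')}"
              f"/redact/{quote(EVENT_ID, safe='')}/{txn_id}",
              headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
              json={"reason": "deleting my message"},
          )
          resp.raise_for_status()
          print(resp.json()["event_id"])  # event id of the redaction event itself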

    4. 4

      I really want Matrix to be a viable option for safe, private hosting of small communities. There really, really needs to be a solid non-corporate option, there’s been a lot of distressing stuff going on with the corporate chat platforms lately.

      It is not a viable option, and this essay lists a large slice of the serious issues that would have to be addressed for me to see it as one. Furthermore most of these issues are baked into the spec and cannot be changed at this point.

      1. 7

        Many of these issues are either non-issues or grossly overstated; guess I’ll have to go through responding to them. Meanwhile, the spec is mutable (we’re on room version 11 already, and pretty much everything can change between room versions), and we’ve been steadily fixing stuff over the years (including some of the stuff incorrectly flagged here as still being problematic).

        1. 5

          you are of course welcome to reply here, and I want to say that I support your project and I very much want it to be better than it is, but like, I’ve dug into the details of a lot of this myself. wishing it to be better doesn’t actually make it better. I hope that, as you suggest, your migration path is good enough to get out of this hole. the world will be significantly brighter if you do someday.

    5. 3

      Agree with quite a few of these points, but I see the lack of deniability in chat as a feature. Also, malicious servers not deleting events is a very small problem if you look at the fact that any client could keep history as well. If you are going to say something that you will regret, definitely don’t write it down.

      1. 6

        Ironically, Matrix has deniability. Encrypted messages aren’t signed; there’s no way to prove that the other user in the conversation didn’t spoof the transcript if they could have colluded with the server to put a copy of the spoofed transcript there too.

        1. 3

          Encrypted messages are signed by the sender’s homeserver. With the move to Matrix P2P this will be equivalent to being signed by the sender’s client.

    6. 5

      The biggest issue I’ve had with Matrix is that it’s extremely slow to sync when I haven’t used it in a while, which then decreases my likelihood to use it again, which means the next time I sign in I’ll have the same problem. It’s a cycle.

      1. 5

        This is being fixed in Element X with “sliding sync”.

        1. 1

          Can a server sliding sync with the ‘main’ server or is this just clients?

          1. 4

            Sliding sync is just for the client-server API, doesn’t affect server-server API, AFAIK

            1. 1

              Bummer. Then the issue of needing to duplicate the entire history isn’t solved for servers. This makes self-hosting an expensive endeavor if you want to join some big groups unless you want to constantly combat the storage issues with scripts as an admin.

              1. 7

                Matrix has never needed to replicate the entire history when you join a room. It does the last ~20 messages and then pulls in others on demand. It does however need to replicate the “room state” - i.e. who’s in it, what the room permissions are, etc. We switched to lazyloading for this back in ~March; the project was imaginatively called Faster Remote Room Joins and has shipped in Synapse. (It doesn’t lazyload as much as it could or should, but the infrastructure is there now).

                1. 1

                  That is great to hear. I’ve been hearing differently from other sources when I asked in the past.

              2. 3

                IIRC federation doesn’t need to duplicate the entire history. It can fetch old messages from another server as needed. But state events are needed so that the currently valid auth state can be resolved.

    7. 8

      I really want matrix to succeed, but the issues are plentiful.

      The fact that self-hosting Synapse in a performant manner is no trivial feat (this is slowly improving), compounded by the fact that no mobile client yet supports sliding sync (ElementX when), makes my user experience in general very miserable. Even the element-desktop client has horrible performance, unable to make use of GPU acceleration on nearly all of my devices.

      1. 12

        unable to make use of GPU acceleration on nearly all of my devices

        As an IRC user, do I want to know why an instant messaging client would need GPU acceleration? :x

        1. 8

          It’s nothing particularly novel to matrix: rendering UIs on the CPU tends to use more battery than the hardware component whose entire goal is rendering, and it’s hard to hit the increasingly-high refresh rates expected solely via CPU rendering.

          1. 3

            A chat application ought to do very infrequent redraws, basically when a new message comes in or whenever the user is composing; worst case, when a 10fps gif is being displayed. I find it concerning that we now need GPU acceleration for something as simple as a chat to render itself without feeling sluggish.

            1. 8

              Rendering text is one of the most processor-intensive things that a modern GUI does. If you can, grab an early Mac OS X machine some time. Almost all of the fancy visual effects that you get today were already there and were mostly smooth, but rendering a window full of text would have noticeable lag. You can’t easily offload the glyph placement to the GPU, but you can render the individual glyphs and you definitely can composite the rendered glyphs and cache pre-composited text blocks in textures. Unless you’re doing some very fancy crypto, that will probably drop the power consumption of a client for a plain text chat protocol by 50%. If you’re doing rich text and rendering images, the saving will be more.
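
              To illustrate the caching idea (render each glyph once, reuse the bitmap when compositing lines), here’s a toy sketch: rasterize_glyph stands in for whatever the real font library (FreeType, CoreText, …) would do, and a real client would keep the cached bitmaps in GPU texture atlases rather than an in-process cache like this:

              from functools import lru_cache

              def rasterize_glyph(font: str, size: int, char: str) -> bytes:
                  # Stand-in for the expensive part (hinting, anti-aliasing, ...):
                  # pretend every glyph is a size x size alpha-coverage bitmap.
                  return bytes(size * size)

              @lru_cache(maxsize=4096)
              def cached_glyph(font: str, size: int, char: str) -> bytes:
                  # Rasterise once and keep reusing the bitmap; on a GPU this would be
                  # an upload into a texture atlas rather than an in-memory cache.
                  return rasterize_glyph(font, size, char)

              def composite_line(font: str, size: int, text: str) -> list[bytes]:
                  # Glyph *placement* (shaping, kerning) still happens per frame on the
                  # CPU; only the per-glyph rasterisation is amortised by the cache.
                  return [cached_glyph(font, size, ch) for ch in text]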

              1. 4

                The downside with the texture atlas approach is that the distribution of glyphs in the various cached atlases in every process tends to be substantially re-invented across multiple graphics sources and makes up quite a bit of your local and GPU RAM use. The number of different sizes, styles and so on aren’t that varied unless you dip into some kind of opinionated networked document, and even then the default is default.

                My point is that there is quite some gain to be had by somehow segmenting off the subsurfaces and somewhat splitting the load – a line-packing format in lieu of the pixel buffer one, with the LTR/RTL toggles, codepoint or glyph-index lookup (so the client needs to know at least the GSUB of the specific font-set) and attributes (bold, italic, colour, …) going one way, and kerning feedback for picking/selection the other.

                That’s actually the setup (albeit there’s work to be done specifically in the feedback / shaping / substitution area) done in arcan-tui. Initial connection populates font slots and preferred size with a rough ‘how does this fit a monospaced grid w/h’ hint. Clients using the same drawing properties share a glyph cache. We’re not even at the atlas (or worse, SDF) stage, yet the savings are substantial.

                1. 3

                  The downside with the texture atlas rugged approach is that the distribution of glyphs in the various cached atlases in every process tend to become substantially re-invented across multiple graphics sources and make out quite a bit of your local and GPU RAM use

                  I’m quite surprised by this. I’d assume you wouldn’t render an entire font, but maybe blocks of 128 glyphs at a time. If you’re not doing sub-pixel AA (which seems to have gone out of fashion these days), it’s 8 bits per pixel. I’d guess a typical character size is no more than 50x50 pixels, so that’s around 300 KiB per block. You’d need quite a lot of blocks to make a noticeable dent in the > 1GiB of GPU memory on a modern system. Possibly less if you render individual glyphs as needed into larger blocks (maybe the ff ligature is the only one that you need in that 128-character range, for example). I’d be really surprised if this used up more than a few tens of MiBs, but you’ve probably done the actual experiments so I’d be very curious what the numbers are.
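
                  (Back-of-envelope for the numbers above, assuming 8-bit alpha-only bitmaps:)

                  glyphs, w, h = 128, 50, 50              # one block of 50x50 px glyphs
                  print(glyphs * w * h / 1024)            # 312.5 KiB per 128-glyph block
                  print(100 * glyphs * w * h / 2 ** 20)   # ~30.5 MiB even for 100 blocks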

                  That’s actually the setup (albeit there’s work to be done specifically in the feedback / shaping / substitution area) done in arcan-tui. Initial connection populates font slots and preferred size with a rough ‘how does this fit a monospaced grid w/h’ hint. Clients using the same drawing properties share a glyph cache. We’re not even at the atlas (or worse, SDF) stage, yet the savings are substantial.

                  That sounds like an interesting set of optimisations. Can you quantify ‘substantial’ at all? Do you know if Quartz does anything similar? I suspect it’s a bit tricky if you’ve got multiple rounds of compositing, since you need to render text to some texture that the app then renders into a window (possibly via multiple rounds of render-to-texture) that the compositor composes onto the final display. How does Arcan handle this? And how does it play with the network transparency?

                  I recall seeing a paper from MSR at SIGGRAPH around 2005ish that rendered fonts entirely on the GPU by turning each bezier curve into two triangles (formed from the four control points) and then using a pixel shader to fill them with transparent or coloured pixels on rendering. That always seemed like a better approach since you just stored a fairly small vertex list per glyph, rather than a bitmap per glyph per size, but I’m not aware of any rendering system actually using this approach. Do you know why not? I presume things like font hinting made it a bit more complex than the cases that the paper handled, but they showed some very impressive performance numbers back then.

                  1. 3

                    I’m quite surprised by this. I’d assume you wouldn’t render an entire font, but maybe blocks of 128 glyphs at a time. If you’re not doing sub-pixel AA (which seems to have gone out of fashion these days), it’s 8 bits per pixel.

                    You could’ve gotten away with an alpha-coverage-only 8-bit texture had it not been for those little emoji fellows; someone gave acid to the LOGO turtles and now it’s all technicolour rainbow – so full RGBA it is. While it is formally not a requirement anymore, there are old GPUs around and you can still get a noticeable difference when textures are a nice power-of-two (POT), so you align to that as well. Then come the quality nuances when rendering scaled: since accessibility tools like these zoom in and out, you want those to look pretty and not alias or shimmer too badly. The better way for that is still mip-mapping, so there is a point to rastering at a higher resolution, switching that mipmap toggle and having the GPU sort out which sampling level to use.

                    That sounds like an interesting set of optimisations. Can you quantify ‘substantial’ at all? Do you know if Quartz does anything similar?

                    There was already a big leap for the TUI cases not having WHBPP*2 or so pixels to juggle around, render to texture or buffer to texture and pass onwards (that could be another *4 because GPU pipelines and locking semantics you easily get drawing-to, in-flight, queued, presenting).

                    The rest was that the font rendering code we have is mediocre (it was 2003 and all that ..) and some choices that don’t fit here. We cache on fonts, then the rasterizer caches on resolved glyphs, and the outliner/shaper caches on glyph lookup. I don’t have the numbers available, but at napkin level I got it to around 50-75% overhead versus the uncompressed size of the font. Multiply that by the number of windows open (I drift towards the upper two digits of active CLI shells).

                    The size of a TPACK cell is somewhere around 8 bytes or so, using UCS4 even (you already needed the 32-bit due to having font-index addressing for literal substitution), then add some per-line headers. It also does I and P frames so certain changes (albeit not scrolling yet) are more compact. I opted against trying to be overly tightly packed as that has punished people in the past and for the network case, ZSTD just chews that up into nothing. It’s also nice having annotation-compact text-only intermediate representation to juggle around. We have some subprojects about to leverage that.

                    Do you know if Quartz does anything similar? I suspect it’s a bit tricky if you’ve got multiple rounds of compositing, since you need to render text to some texture that the app then renders into a window (possibly via multiple rounds of render-to-texture) that the compositor composes onto the final display. How does Arcan handle this? And how does it play with the network transparency?

                    I don’t remember what Quartz did, or how their current *Kits work, sorry.

                    For Arcan itself it gets much more complicated and is a larger story, as we are also our own intermediate representation for UI components and nest recursively. The venerable format-string-based ‘render_text’ call at the Lua layer forces rasterisation of text locally, as some genius thought it a good idea to allow arbitrary embedding of images and other video objects. There’s a long checklist of things to clean up, but that’s after I close down the network track. Thankfully a much more plastic youngling is poking around in those parts.

                    Speaking of networking – depending on the network conditions we outperform SSH when it starts to sting. The backpressure from things like ‘find /’ or ‘cat /dev/random’ resolves and renders locally and with actual synch in the protocol you have control over tearing.

                    I recall seeing a paper from MSR at SIGGRAPH around 2005ish that rendered fonts entirely on the GPU by turning each bezier curve into two triangles (formed from the four control points) and then using a pixel shader to fill them with transparent or coloured pixels on rendering.

                    AFAIR @moonchild has researched this more than me as to the current glowing standards. Back in ‘05 there was still a struggle getting the text part to behave, especially in 3D. Weighted channel-based hinting was much more useful for tolerable quality as well, and that was easier as a raster preprocess. Eventually Valve set the standard with SDFs, which is still(?) the dominant solution today (recently made its way natively into FreeType), along with quality optimisations like multi-channel SDFs.

                    1. 1

                      Thanks. I’m more curious about the absolute sizes than the relative savings. Even with emoji, I wouldn’t expect it to be a huge proportion of video memory on a modern system (even my 10-year-old laptop has 2 GiB of video memory). I guess it’s more relevant on mobile devices, which may have only this much total memory.

                      1. 1

                        I will try and remember to actually measure those bits myself, can’t find the thread where C-pharius posted it on Discord because well, Discord.

                        The savings are even more relevant if you hope to either a. at least drive some machines from an FPGAd DIY graphics adapter instead of the modern monstrosities, b. accept a 10-15 year rollback in terms of available compute should certain conflicts escalate, and c. try to consolidate GPU processing to a few victim machines or even VMs (though the latter are problematic, see below) – all of which I eventually hope for.

                        I layered things such that the Lua API looks like a balance between ‘animated display postscript’ and ‘basic for graphics’ so that packing the calls in a wire format is doable and asynchronous enough for de-coupling. The internal graphics pipeline also goes through an intermediate-representation layer intended for a wire format before that gets translated to GL calls for the same reason – at any time, these two critical junctions (+ the clients themselves) cannot be assumed/relied upon to be running on the same device / security domain.

                        Public security researchers (CVE/bounty hunters) have in my experience been pack animals as far as targeting goes. Mobile GPUs barely did their normal job correctly, and absolutely not securely, for a long time, and little to nothing could be heard. From DRM (as in corporate malware) unfriendly friends I’ve heard of continuous success bindiffing Nvidia blobs. Fast > Features > Correct > Secure seems generally to be the priority.

                        With DRM (as in direct rendering manager) the same codebase hits BSDs and Linux alike, and for any VM compartmentation, VirGL cuts through it. The whole setup is massive. It evolves at a glacial pace and its main job is different forms of memcpy where the rules for src, dst, size and what happens to the data in transit are murky at best. “Wayland” (as it is apparently now the common intersection for several bad IPC systems) alone would’ve had CVEs coming out the wazoo had there been an actual culture around it; we are still waiting around for conformance tests, much less anything requiring more hygiene. Fuzzing is non-existent. I am plenty sure there are people harvesting and filling their barns.

                      2. 1

                        An amusing related curiosity I ran across while revisiting a few notes on some replated topic - https://cgit.freedesktop.org/wayland/wayland/tree/NOTES?id=33a52bd07d28853dbdc19a1426be45f17e573c6b

                        “How do apps share the glyph cache?”

                        That’s the notes from the first Wayland commit covering their design axioms. Seems like they never figured that one out :-)

          2. 3

            Ah, that makes sense, thanks. I’m definitely sympathetic to the first problem.

        2. 1

          With irssi I’m using GPU acceleration, because my terminal emulator is OpenGL based.

      2. 4
        1. 1

          Sadly I’m blocked by no support for SSO

          1. 4

            as your link says:

            Account creation and SSO will come with OIDC. OIDC will come in September.

            the code’s there and works; it just needs to be released and documented. NB that shifting to native OIDC will be a slightly painful migration though; some of the old auth features may disappear until reimplemented in native OIDC, which may or may not be a problem for you.

      3. 4

        If you’re on Android, note that an early release of Element X just hit the Play Store yesterday: https://element.io/blog/element-x-android-preview/.

    8. 12

      Bleurgh - looks like video.fosdem.org is having problems. As others have commented, https://www.youtube.com/watch?v=eUPJ9zFV5IE is the same talk (with chapters!) on youtube.

    9. 9

      Very interesting, but I would love to hear more details on why Matrix is not as scalable. They hint at the merge operations but I don’t understand why that is a problem.

      1. 18

        We’d like to know too :) Matrix as protocol isn’t inherently unscalable at all. It’s true that every time you send a message in matrix you effectively are merging the state of one chatroom with the state of another one - very similar to how you push a commit in Git. Generally this is trivial, but if there’s a merge conflict, it’s heavier to resolve. The Synapse python implementation was historically terrible at this, but has been optimised a lot in the last 12 months. The Dendrite go implementation is pretty fast too.

        There’s an interesting optimisation that we designed back in 2018 where you incrementally resolve state (so called ‘delta state res’), where you only resolve the state which has changed rather than considering all the room state (i.e. all key-value pairs of data associated with the room) en masse. https://matrix.org/_matrix/media/v1/download/jki.re/ubNfLtrmXZMmlGjJZYPnlHHy and https://github.com/matrix-org/synapse/pull/3122 give a bit of an idea of how that works. It would be really cool if Process One is doing something like that with ejabberd, but in practice we suspect that they’ve just done an efficient implementation of the current state res algorithm. We’ve pinged them on Twitter to see if they want to discuss what they’re up to :) https://twitter.com/matrixdotorg/status/1580549591807975430
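
        To make the “merge conflict” analogy a bit more concrete: room state is essentially a map from (event type, state key) to an event, and resolution only has to do real work on the keys where the forks disagree. A very rough sketch, with resolve_conflict standing in for the actual state resolution rules (power levels, auth chains and so on):

        StateMap = dict[tuple[str, str], str]  # (event_type, state_key) -> event_id

        def merge_state(forks: list[StateMap], resolve_conflict) -> StateMap:
            """Merge the state maps of several forks of a room's DAG.  Keys on which
            every fork agrees are copied straight through; only the conflicted keys
            go through `resolve_conflict`, the stand-in for the heavy algorithm."""
            merged: StateMap = {}
            for key in set().union(*forks):
                candidates = {fork[key] for fork in forks if key in fork}
                if len(candidates) == 1:
                    merged[key] = candidates.pop()  # unconflicted: trivial
                else:
                    merged[key] = resolve_conflict(key, candidates)  # the slow path
            return merged

        The incremental (“delta”) variant described above is then about only re-running this over the keys that have actually changed since the last resolution, rather than over the whole map every time.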

        1. 11

          There’s an interesting optimisation that we designed back in 2018 where you incrementally resolve state

          Is it really so hard to see why a protocol that cares about conversation state is more difficult to scale than a protocol that completely ignores it? Seems almost tautological to me.

          1. 15

            Matrix is certainly more complex to scale (as our inefficient first-gen implementations demonstrated), but I think folks are conflating “it’s complex to write an efficient implementation” with “it doesn’t scale”. It’s like pointing out that writing an efficient Git implementation is harder than writing an efficient CVS implementation; hardly surprising given the difference in semantics.

            In practice, you can definitely write a Matrix implementation where all operations (joining, sending, receiving, etc) are O(1) per destination, and don’t scale with the amount of state (i.e. key value pairs) in a room. And to be clear, Matrix never scales with the amount of history in a room; history is always lazyloaded so it doesn’t matter how much scrollback there is.

            Historically, joining rooms in Matrix was O(N) with the number of the users in that room, but we’ve recently fixed this with “faster remote joins”, which allows the room state to get lazily synced in the background, thus making it O(1) with size of room, as it should be. https://github.com/matrix-org/matrix.org/blob/80b36d13c3097ffb5ba33572d9011e71940f1486/gatsby/content/blog/2022/10/2022-10-04-faster-joins.mdx is a shortly-to-be-published blog post giving more context, fwiw.

            1. 9

              The post doesn’t say “Matrix doesn’t scale”, just that XMPP and MQTT scale better. This is because they’re solving dramatically simpler problems. I don’t see any problem with that claim.

            2. 4

              As an aside, from that draft,

              whereas it used to take upwards of 12 minutes to join Matrix HQ […] this is now down to about 30 seconds (and we’re confident that we can reduce this even further).

              Holy cow they did it! Woo! So proud of the Synapse team :)

              1. 2

                On the technical side, that’s genuinely impressive work. On the product side, I can’t help but compare with iMessage, signal, WhatsApp and discord being closer to one second.

                1. 3

                  the target is indeed <1s, and should still be viable. we’ve shaved the number of events needed to join #matrix:matrix.org from ~70K to ~148 iirc, which should be transferred rapidly.

        2. 11

          We’d like to know too :) Matrix as protocol isn’t inherently unscalable at all

          I suspect that this is a question of relative scale. A lot of users of eJabberd are using it as a messaging bus, rather than a chat protocol and so sending a message is likely to be on the order of a few hundred BEAM VM instructions. This is especially true of MQTT, where you don’t have the XML parsing overhead of XMPP and you can parse the packet entirely with Erlang pattern matching. If it’s a deferred message then you may write to Mnesia, but otherwise it’s very fast. In contrast, something that keeps persistent state and does server-side merging is incredibly heavy. That doesn’t mean that it isn’t the right trade off for the group collaboration scale, but it definitely means that you wouldn’t want to use Matrix as the control plane for a massively networked factory, for example.

          1. 5

            I guess it will be interesting to benchmark. To use the git v. cvs example again, I think it’s possible to have an efficient (but complex) merging system like git which outperforms a simple “it’s just a set of diffs” VCS. We certainly use Matrix successfully in some places as a general purpose message bus, although when we need faster throughput we typically negotiate a webrtc datachannel over Matrix (e.g. how thirdroom.io exchanges its world data).

            1. 5

              The analogy isn’t really matched to this context though. SIP or XMPP or MQTT doesn’t involve diffs or storage or really even state in the basic use case, whereas Matrix is always diffs and merges.

              1. 4

                Also git and CVS are programs and file formats with (roughly) one implementation, whereas MQTT and Matrix are protocols. The semantics of protocols place an upper bound on the efficiency of any potential implementation.

        3. 9

          No one said it was unscalable, just that it was harder. If it takes a dedicated team multiple years and a full reimplementation to scale it, and even just joining a room is still slow, that says something.

          I currently run Dendrite unfederated, in part (though not solely) because I don’t want someone to accidentally bring down my small server by joining a large channel somewhere else. I still think Matrix is a good idea, but “scaling Matrix is hard” should be a pretty uncontroversial statement.

          1. 0

            The OP said “Matrix is not as scalable”. My point is that yes, it’s harder to scale, but the actual scalability is not intrinsically worse. It’s the same complexity (these days), and the constants are not that much worse.

    10. 5

      I’m a little worried about the constant focus on features when the basic experience of just text chats and such can be really rough, in terms of inconsistency and client/server performance. It needs more polish, but what attention is there to give when you’re on the next big thing?

      1. 7

        literally the whole point of the original post is to spell out the emphasis we’re putting on perf and usability atm. Only the second half of the post talks about new features (which happens in entirely different teams)

    11. 4

      While the way to go forward is interesting, and Matrix is an excellent piece of engineering, it lacks in several areas still, which I would have expected to work flawlessly by now. This is from my own experimenting with the protocol and surrounding server/clients:

      • 1:1 calling is lacking; TURN appears to rarely work. Calling from mobile (Android) to Desktop and the reverse works <50% of the time
      • If you’re logged into both mobile (Android) and desktop, an incoming call picked up by Desktop will continue ringing on the mobile until it drains the battery. The call pickup event is apparently not properly handled when the mobile lockscreen is active.
      • Recently, my notifications and message history on mobile have been completely off. It appears to be “receiving” messages minutes after they appear on my desktop. Comparing mobile and desktop message history reveals completely out-of-order messages, or simply not-received-on-mobile ones

      It feels to me that every time the core features are stable, at least on mobile, something happens in the matrix world that triggers new features that replace the stable features with something new (often incomplete) leaving core functionality flawed. It happened with Riot -> RiotX, and it seems to happen again with Element -> Element X. This is one of the reasons I’ve stopped reporting issues with mobile clients, as there’s no point, a “new” mobile client will most likely soon appear and the cycle will start again.

      1. 6

        Agreed that RiotX on Android still has some major bugs (which is frustrating, given how long it’s been around). ElementX on Android will likely not be a rewrite however: it is “just” replacing the Kotlin SDK with the Rust SDK, and replacing the calling implementation with Element Call (so we get native conferencing as well as 1:1) - which should address both issues you’ve mentioned here.

        1. 1

          Will the voice/video call functionality be part of the Rust SDK? How are you implementing WebRTC on the mobile platforms, where presumably running a browser engine’s WebRTC stack inside a web view wouldn’t be a good solution? Are you using the Google WebRTC C++ library directly, do you have some kind of wrapper over it, or are you using a different WebRTC implementation?

          1. 4

            currently the plan is to run webrtc inside a webview, as per the matryoshka section of the OP. the current mobile apps use libwebrtc directly, with limited success (as you can see from the original comment here), so given we’re switching to multiway native Matrix VoIP the idea is to switch to embedding the Element Call webapp, and then replace that with native impls only if performance actually requires it. So far, webrtc in a webview is actually working fine, and avoids us having to build a new native impl of the relatively complex multiway calling on each platform.

            Alternatively, others are very welcome to go wild with libwebrtc or webrtc-rs on top of matrix-rust-sdk or others. After all, it’s all open…

      2. 4

        The slow receiving messages on mobile will probably be fixed by the new sync. Mobile clients don’t currently keep syncing all the time in order to save battery. When a push notification arrives, the mobile client needs to sync in order to receive all the relevant data. That takes a long time and lots of data on sync v2. Exactly what sync v3 is supposed to fix.
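
        Conceptually, instead of “give me everything since my last sync token”, the client asks for a window of the most relevant rooms and a small slice of each. A request shaped loosely like the sliding sync proposal (MSC3575) might look something like this - the field names are taken from the proposal and may well have evolved since:

        # Illustrative only -- loosely modelled on MSC3575 ("sliding sync"); treat the
        # exact endpoint and field names as subject to change.
        sliding_sync_request = {
            "lists": {
                "visible_rooms": {
                    "ranges": [[0, 20]],       # only the ~20 rooms currently on screen
                    "sort": ["by_recency"],
                    "timeline_limit": 10,      # just enough events to render previews
                    "required_state": [        # minimal state needed for the room list
                        ["m.room.name", ""],
                        ["m.room.avatar", ""],
                    ],
                },
            },
        }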

    12. 4

      https://matrix.org/blog/2022/08/15/the-matrix-summer-special-2022#wysiwyg

      However, given that users are now used to WYSIWYG in Teams and Slack, we’ve now decided to have another go at it

      You link to a blog post that shows issues with Slack WYSIWYG, but creating richly formatted messages in MS Teams is also an incredible daily source of frustrations, with no way to opt-out and just use Markdown without that awful dynamic interpolation. I hope just writing Markdown to format messages will still be an option in Element when/if this WYSIWYG editor is introduced.

      1. 4

        don’t worry - markdown will not be going away. this is just adding wysiwyg as an option for those who want it (and for parity with Teams)

    13. 11

      These days, I am unsure what Matrix is heading for. This post explains that they want to have VoIP video conferencing and decentralised virtual reality. Then I open the lobste.rs comments and the first thing I see is a comparison to IRC.

      It seems as if Matrix’ mission statement today is going far beyond the goal to open up walled text message gardens. From this post it looks as if they want to make Matrix a decentralised platform for everything. The post talks explicitly about the success of the open web and how Matrix strives to copy it, and that makes me think: don’t we already have the open web? It’s built on a protocol called HTTP. Does this mean Matrix wants to replace HTTP?

      If Matrix is indeed inferior even to IRC (I cannot judge as I do not use Matrix) in the domain IRC occupies (text messaging), such a wide approach seems doomed.

      1. 10

        We’ve always tried to be clear that Matrix is a general purpose comms protocol for realtime data - not just chat. For instance, right from the original launch in Sept 2014 we had VoIP signalling in there too, and did a very basic demo of 3D over Matrix on day 1 too: https://techcrunch.com/video/animatrix-presents-disrupt-sf-2014-hackathon/

        The post talks explicitely about the success of the open web and how Matrix strives to copy it, and that makes me think: don’t we already have the open web? It’s built on a protocol called HTTP. Does this mean Matrix wants to replace HTTP?

        Obviously we’re not trying to replace HTTP. Matrix is an API layered on top of HTTP (or other transports) to provide a communication layer for realtime data. If anything it competes with ActivityStreams as a way to link streams of activity over the open web - except with a completely different architecture. The reason for invoking the open web is that we simply want to be the comms layer for the open web: a global realtime virtual space where folks can chat, talk, interact, and publish/subscribe to realtime data of any kind.

        W3C simply doesn’t provide an API for that, yet - and if they did, hopefully it might be Matrix.

      2. 4

        The open web is not a federated, eventually consistent database. That’s what Matrix provides; see https://matrix.org/ for more info. This post is an update for people following the blog, so it doesn’t cover the introduction.

        Text chat is the first application, but matrix can be used for much more.

    14. 9

      I’ve really enjoyed using Matrix, hopefully when sync is faster I can lure more of my friends to use it.

      1. 3

        meh… Startup (initial connection delay) is considerably slower than IRC. It’s a bit of a downgrade in terms of overall polish of clients too (but again, IRC is so much more bare-bones, so it’s simpler by design).

        1. 5

          IRC is nice, but I prefer Matrix over it due to the ease of use: I got my fiancee to use Matrix with me on my homeserver, but I doubt I would’ve ever got her to use IRC. :)

          1. 2

            I got my parents to use IRC in early 2000s. It wasn’t that difficult, just installed an irc client that automatically connects to a server and channel.

            Matrix would be a hard sell now that Telegram and Whatsapp exist.

            1. 8

              the point of the OP is that Matrix clients have to be better than TG or WhatsApp to win, and that’s what we’re aiming for.

              1. 4

                In my experience using Matrix for the last several years, most Matrix clients seem to be struggling to keep up in terms of features/functionality. Which is unfortunate, because the official clients that are web/browser based are slow and frustrating to use on older devices. That said, the Android Element.io app is not too bad :)

                I think supporting multiple (unofficial…) clients is very important, since it prevents “vendor lock in”, which can totally happen even when the protocol is federated if everyone ends up depending on the official client and (hypothetically, but not totally impossible…) it is sold/acquired by some nefarious company in the future… I don’t know if the current situation is from Matrix being some quickly moving target, or if implementing the features is just… hard. In any case, it’s not great having such limited options for clients.

                1. 5

                  I wouldn’t say that “most” matrix clients are struggling to keep up in features/functionality - it’s more that we’re still figuring out some features (as per the OP) and everyone (including Element) is playing catchup to a fast moving target.

                  Totally agreed that vendor lockin is a total antigoal. Element is not the “official” Matrix client - it’s just a client, like Netscape was just a browser in the early days of the web. It happens to be written by the team who created Matrix, but the two are separate these days: matrix.org/foundation v element.io/careers.

                  In terms of native Desktop apps, we’re hoping matrix-rust-sdk will power a new generation of excellent native apps - ElementX iOS supports macOS too, for instance, and Fractal-next is already a GTK app based on rust-sdk.

                  1. 2

                    and Fractal-next is already a GTK app based on rust-sdk.

                    Yeah… but that doesn’t even support E2EE[1]. I consider that to be a major feature of Matrix, without it Matrix is just a slower way to exchange unencrypted text online. Last time I looked, a few Matrix clients were struggling to implement E2EE.

                    1. https://gitlab.gnome.org/GNOME/fractal/-/issues/717
                    1. 2

                      If you look at the checkboxes on that bug, all the hard bits are already done (thanks to leaning on matrix-rust-sdk). You can literally use Fractal-next for E2EE today. They just need to hook up UI for the remaining edge cases (eg key backups). In terms of why that hasn’t happened yet… it’s a FOSS project; PRs welcome.

                  2. 1

                    Just want to give shoutout to Nheko! I use it daily on my desktop.

    15. 3

      Thank you for writing this up, there’s an excellent amount of detail here. The proliferation of silo’ed chat protocols has been one of my pet peeves and has definitely (thus far) been heading the wrong direction; as bad as the proprietary protocols were back in the aughts, they were at least neutrally interoperable in their heyday – these days most companies are downright hostile in how they enforce their ToS when it comes to third-party clients etc.

      I’m hoping the relatively smaller ecosystems (e.g. Discord) take note and at least loosen their ToS to allow for calling user APIs without fear of a perma-ban.

      A couple of additional questions: is it known if companies might attempt to limit the exposure of their APIs to EU markets only, or does the DMA cover that explicitly? Is the DMA a pre-requisite for fully scaling out use of Matrix Bridging Services – i.e. does the interoperability climate pre-DMA preclude you from offering bridging as a commercial service?

      1. 3

        Thanks for the positive feedback :)

        is it known if companies might attempt to limit the exposure of their APIs to EU markets only, or does the DMA cover that explicitly?

        I don’t believe the DMA covers that explicitly, but IANAL. Much like some sites decided to cut off EU traffic rather than implement GDPR, I guess it’s possible that the gatekeepers might only offer open APIs to EU IP addresses - but it feels like the negative PR of doing so (and the theatre of doing so, given how easy it is to get a EU IP) would not be worth it.

        Is the DMA a pre-requisite for fully scaling out use of Matrix Bridging Services – i.e. does the interoperability climate pre-DMA preclude you from offering bridging as a commercial service?

        Any kind of bridging to a closed service from Matrix (or XMPP) is pretty miserable today, given you have to do adversarial interoperability, which massively reduces the interest in building bridges or relying on them. So yes, DMA would be transformative for bridging and interop in general :)

        1. 1

          So yes, DMA would be transformative for bridging and interop in general :)

          How much of this do you suspect will be bridges for alternative open protocols vs alternative clients? Also, how do you foresee abuse/spam issues being handled?

    16. 1

      Personally I’d feel a lot more excited about this if you could give a working demo. I’m glad to see you’re doing VRM support though. I really hate having to re-rig my avatar for every chat app.

      1. 1

        So we’re in the middle of switching stacks from matrix-js-sdk to hydrogen-sdk (hence https://thirdroom.io/ being a bit of a mess right now, although if you try hard enough it might work). There’s a video of the initial demo up at https://www.youtube.com/watch?v=e26UJRCGfGk&t=2263s however. I posted the intro post for the project today because we’re finally working on it full-time as of this week.

      2. 4

        I’m really interested in why this is getting downvoted. Third Room is proposing an open decentralised virtual world built on Matrix as an interoperable open standard, as a much needed alternative to the cryptocurrency/NFT-driven approaches or proprietary silos from the big players. It’s basically trying to extend Matrix’s potential as the realtime layer of the open Web to cover spatial environments too.

        What am I missing? I posted this in good faith.

        1. 3

          I can’t speak for others (I didn’t flag it), but it could be the combination of the dread word “metaverse”, lingering anti-Matrix dislike, and the general vacuous “marketing look” of the page that triggered them.

          1. 2

            interesting; i hadn’t realised there was lingering anti-Matrix dislike. in terms of the page coming across as marketing - it’s written by a bunch of hardcore devs (mainly Robert, who was a lead dev on mozilla hubs and altspace vr and nowadays at Element) and was intended as a manifesto for the project, whose code already exists at https://github.com/matrix-org/thirdroom (i.e. it isn’t vapourware, and is explicitly anti-NFT and anti metaverse-hype).

            In terms of metaverse having become a dirty word… well, that’s depressing. Would be ironic if stupid industry hype and FB have gone and wrecked it for everyone :(

            Either way, i think i understand the rationale; will steer well clear in future.

            1. 2

              I apologize for complaining about the copy. It looks different from when I first glanced at it. Maybe I was just confusing it with some other site.

              I am just guessing about anti-Matrix sentiment. Please don’t take it as gospel.

              I sympathize with the desire to have free and open access to the Metaverse but I personally don’t want the Metaverse to become a thing at all :D

              1. 2

                I’m kind of in the same boat; I don’t want the Metaverse to be a thing… but if it has to be, this would be the least bad way.

              2. 2

                No problem. I’ve just spotted that the downvotes are +4, -1 off-topic and -3 spam. I guess spam is a catch-all for stuff which rubs people up the wrong way in this context? To be clear, this is entirely non-profit and FOSS and non-commercial (and governed by the non-profit https://matrix.org/foundation), so from a spam perspective it’s not like it’s trying to sell something.

                In terms of the metaverse not becoming a thing… i’d argue that it’s already here; whether that’s in Google Docs, or Figma, or HackMD, or minecraft, or ActivityPub or Matrix. It’s just a question of whether there should be an open standard for interoperating between them - whether that’s for multiplayer 2D, 3D, document, code content or something else.

                1. 1

                  In terms of the metaverse not becoming a thing… i’d argue that it’s already here; whether that’s in Google Docs, or Figma, or HackMD, or minecraft, or ActivityPub or Matrix

                  OK, that’s one definition. For me, it’s that awful future presented by Mark Zuckerberg where we’re all having meetings with TVs strapped to our faces.

                  1. 2

                    I too do not want to live in a world where facebook has hijacked the word metaverse.

                    (which is what the project is about :)

    17. 8

      Why not jump from the old and quirky IRC protocol to e.g. Matrix? Also, Matrix is an open federation, so this kind of grab shouldn’t be possible.

      1. 15

        We are ourselves old and quirky.

        Freenode had a ~25 year run, which is significantly better than the median free tier on an online service.

        1. 5

          It is indeed quite the accomplishment. But IRC is clearly on the decline.

          1. 1

            IRC works well on very slow connections like dialup and on archaic machines, but unfortunately not on unstable connections. My main complaint is the lack of even a small chat log without using third-party services/software; any small disruption makes me lose messages on my rural internet connection.

      2. 7

        My main issue with Matrix is the lack of a client that I can run easily on extremely low-powered hardware. Just about all the major, well-supported clients are built on Electron. Compare that to IRC: you can have a useful IRC client in just about ~5k lines of C (yes, I’ve written my own IRC client).
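
        To illustrate the point (a toy sketch only - in Python rather than C for brevity, and the server, nick and channel below are just placeholders), an IRC client is essentially lines over a TCP socket plus answering PINGs:

        ```python
        import socket

        # Illustrative placeholders only.
        HOST, PORT = "irc.libera.chat", 6667
        NICK, CHANNEL = "demo-nick-12345", "#test"

        sock = socket.create_connection((HOST, PORT))
        reader = sock.makefile("r", encoding="utf-8", newline="")
        writer = sock.makefile("w", encoding="utf-8", newline="")

        def send(line: str) -> None:
            writer.write(line + "\r\n")
            writer.flush()

        # Register, then join once the server sends the 001 welcome numeric.
        send(f"NICK {NICK}")
        send(f"USER {NICK} 0 * :demo client")

        joined = False
        for raw in reader:
            line = raw.rstrip("\r\n")
            print(line)
            if line.startswith("PING"):            # answer keepalives or get dropped
                send("PONG" + line[len("PING"):])
            elif not joined and " 001 " in line:   # registration complete
                send(f"JOIN {CHANNEL}")
                joined = True
        ```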

        1. 4

          I have to wonder if this is really the limiting factor for IRC - if we’re measuring protocols based on what you can write on a coke can, IRC might win, but is that what people actually want?

          1. 2

            i do. the irc client i use is very fast and configurable. i don’t want to run a full web browser and 100000 tons of javascript just to exchange text with people. i currently use the weechat-matrix plugin for weechat to access matrix but it is unmaintained and missing many features i guess.

            1. 4

              weechat-matrix isn’t unmaintained - it’s stable. the author is prioritising matrix-rust-sdk, but weechat should work great.

              1. 2

                Yeah ok, ‘unmaintained’ is a little strong. My point is, nothing new is being added to improve support for matrix (multiline messages, etc) and there are lots and lots of quirks, having used it daily for many months now. And the author has made it clear they have no interest in improving the existing plugin while they go off and RWIR…

                It’s “good enough” for me, but a far cry from supporting everything matrix has to offer. That’s the case for almost all matrix clients though, as I’m sure you are aware.

            2. 3

              Fwiw I heard yesterday about https://github.com/poljar/weechat-matrix-rs . When it’s cooked it might be a good way for me to try Matrix seriously.

              1. 1

                Yeah, that has been around for a bit, and seems to be progressing along slowly. I don’t think it’s anywhere close to replacing the old python version of the plugin in its current state, and seems to be a long ways off from being there.

                1. 1

                  Ah, good to know, thank you. Will keep an eye on it :)

        2. 1

          https://github.com/tulir/gomuks is roughly 15k lines of Go (+ non-trivial LOC from dependencies of course).

      3. 4

        try hosting matrix

        1. 3

          What makes you think I haven’t?

          1. 6

            Everyone I’ve talked to personally who’s tried this has nothing but horror stories when it comes to running their own homeserver. The consensus I’ve heard is that it’s only practical if you have staff to look after it or if you prevent your channels from federating.

            I admire the vision but they have a long way to go before actually realizing the benefits of a decentralized system.

            1. 4

              I’ve had very few issues running it myself. I have Synapse, Postgres, and Nginx running along with IRC, Discord, and Slack app services on a 2 GB VPS. Other than the occasional upgrade, I’ve had minimal issues. I manage everything through my service manager, so usually it’s as simple as running an upgrade task and then restarting the service. That said, I have a lot of experience running web services, so that might contribute.

            2. 2

              I’ve configured Synapse both by hand and using https://github.com/spantaleev/matrix-docker-ansible-deploy/. Both work well provided you read the documentation.

          2. 1

            you are using matrix.org

            1. 3

              I have an account on matrix.org, true. That doesn’t prevent me from having accounts elsewhere. A matrix.org account is sometimes useful.

              1. 3

                w.kline.sh/

                I run my own too (@sumit:battlepenguin.im). It works pretty well, and I even have bridges working. Overall I think it’s way easier to stand up than XMPP (everything is over HTTP; there is that weird federation port, but you can now use a normal Let’s Encrypt cert and stick it behind a Traefik or HAProxy frontend).
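
                As a sketch of what that looks like in practice (the domain names below are made up): other homeservers discover where to route federation traffic by fetching the .well-known document over plain HTTPS, which is what lets everything sit behind the ordinary reverse-proxy frontend:

                ```python
                import requests

                # Hypothetical setup: user IDs look like @alice:example.org, but the
                # homeserver actually listens behind a Traefik/HAProxy frontend at
                # matrix.example.org:443.
                SERVER_NAME = "example.org"

                # Other homeservers fetch this document to learn where federation
                # traffic should go, so no dedicated federation port needs exposing.
                resp = requests.get(f"https://{SERVER_NAME}/.well-known/matrix/server", timeout=10)
                resp.raise_for_status()
                print(resp.json())  # expected shape: {"m.server": "matrix.example.org:443"}
                ```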

                I will say, scaling it would be difficult. I’ve heard other people complain about larger Matrix servers with a lot of users, and matrix.org has had issues with theirs even after multiple huge refactors that dropped CPU usage. I think Matrix would be way better if there were multiple server implementations, like ActivityPub has (Mastodon, Pleroma, PeerTube, etc.), but it looks like development on the Go implementation is still slow going.

      4. 1

        Yes, go to Matrix, let the eternal September end here.

    18. 1

      I enabled the beta and feel Spaces is a much more polished version of Communities. In the Spaces beta discussion on the orange site, @Arathorn said that Discord-style communities are the boring obvious bit.

      For what it’s worth, the thing I find most exciting about Spaces is that they provide a decentralised hierarchical namespace with decentralised access controls for every room (ie pubsub topic) in Matrix. So it’s like we’ve sprouted an openly federated global hierarchical filing system for freeform realtime data streams of all flavours - where people can go crazy defining their own trees, applying their own curation ideals; perhaps we’ll even see a single global tree emerge (although the implementation may need some more optimisation first).

      It’s like a multiplayer hybrid of DMOZ and USENET and the read/write Web all rolled together. Once we start storing more interesting data streams than instant messages in it (eg forums, email, bulletin boards, DOMs, scene graphs, ticker data, IOT sensor data…) it really gets interesting :)

      “wow we accidentally created the realtime read/write web”

      What is the non-obvious and interesting part? The rest of his comment went over my head.

      1. 2

        I was trying to explain that while we wrote spaces to let users group their rooms together, in practice you can create hierarchies of spaces to group all the rooms together. For instance, I could create a space called #root:matrix.org and then a space within it called #opensource:matrix.org and then a space within that called #linux:matrix.org and then fill the space in that with all the linux chatrooms I know about. I could then give ops to other linux experts in the #linux space, and they could delegate ops onwards… until you’ve built a hierarchy that contains all the best chatrooms anyone knows about. It’s a multiplayer way to curate and categorise all the conversations of the world, including those bridged in from other networks and platforms.
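
        As a rough sketch of what this looks like at the API level (the homeserver URL, access token and example.org aliases below are placeholders, not real rooms): a space is just a room created with type m.space, and the parent/child links are m.space.child state events:

        ```python
        import requests
        from urllib.parse import quote

        # Hypothetical homeserver, access token and domain; nothing here is real.
        HS = "https://matrix.example.org"
        HEADERS = {"Authorization": "Bearer syt_example_access_token"}

        def create_space(name: str, alias_localpart: str) -> str:
            """Create a space: an ordinary room whose creation type is m.space."""
            resp = requests.post(
                f"{HS}/_matrix/client/v3/createRoom",
                headers=HEADERS,
                json={
                    "name": name,
                    "room_alias_name": alias_localpart,  # "opensource" -> #opensource:example.org
                    "creation_content": {"type": "m.space"},
                },
            )
            resp.raise_for_status()
            return resp.json()["room_id"]

        def add_child(parent_id: str, child_id: str) -> None:
            """Link a child room/space into a parent with an m.space.child state event."""
            resp = requests.put(
                f"{HS}/_matrix/client/v3/rooms/{quote(parent_id)}/state/m.space.child/{quote(child_id)}",
                headers=HEADERS,
                json={"via": ["example.org"]},
            )
            resp.raise_for_status()

        root = create_space("Root", "root")
        opensource = create_space("Open Source", "opensource")
        linux = create_space("Linux", "linux")
        add_child(root, opensource)
        add_child(opensource, linux)
        ```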

        Hope that makes more sense!

        1. 1

          That makes more sense, thank you for taking the time.

    19. 4

      Some interesting comments from the project lead over at HN.

      And here’s the user-facing post. I created a space for Haskell here (it includes the IRC bridge to #haskell).

      1. 1

        I created a space for Haskell here (it includes the IRC bridge to #haskell).

        URL changed to: https://matrix.to/#/#haskell-space:matrix.org

        1. 1

          How did you set the URL of the space? Been wanting to do this for Pikelet, which is currently a random hash…

          1. 1

            You add an alias for the room that is the space. The UI is probably missing currently, but you can use the API. https://matrix.org/docs/spec/client_server/r0.6.1#put-matrix-client-r0-directory-room-roomalias
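
            Roughly, that call from the spec looks like this (the homeserver, token, alias and room ID below are placeholders):

            ```python
            import requests
            from urllib.parse import quote

            # Hypothetical values - substitute your own homeserver, token and IDs.
            HS = "https://matrix.example.org"
            TOKEN = "syt_example_access_token"
            ALIAS = "#pikelet-space:example.org"       # the alias you want the space to have
            ROOM_ID = "!abcdefghijklmnop:example.org"  # the space's underlying room ID

            # PUT /_matrix/client/r0/directory/room/{roomAlias}, as linked above.
            resp = requests.put(
                f"{HS}/_matrix/client/r0/directory/room/{quote(ALIAS)}",
                headers={"Authorization": f"Bearer {TOKEN}"},
                json={"room_id": ROOM_ID},
            )
            resp.raise_for_status()
            ```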

            1. 1

              Is there a way to send this request from Element in the browser, or do I need to do some more involved shenanigans for this?

              1. 5

                More involved shenanigans, sadly. Adding aliases to Spaces is top of the list for the next wave of work in the beta though.

                1. 1

                  No worries, looking forward to it! Thanks for your efforts!