1. 43
  1.  

    1. 39

      is it just me, or does this article sound like it was (part-)written by an LLM? the constant use of bullet points and “summarizations” makes it somewhat hard to parse…

      at any rate, I also found the MDN article very helpful for learning how to use SSEs; I agree they’re probably not as used as they should be!

      1. 24

        We really need an AI slop flag reason.

        1. 2

          “Spam” seems applicable enough, if the operators would not like to add another reason.

        2. 12

          In particular the back-and-forth between “everything just works; it’s easy and well-supported everywhere” and then later “well, except it doesn’t work with some load balancers. and it might not work with older browsers” felt like … not a thing a human would write. Especially when you click on the link about older browsers, and you see that the truth is the opposite of what the article says; it’s extremely well-supported even among older versions of browsers.

          1. 5

            Yeah, I got a whiff of LLM from this one.

            1. 2

              Definitely has a vibe of generated content. I use a lot of genAI myself, and I make a point of filtering the content before posting it.

              1. 2

                Oh yes, breaking everything down to lists of points like that smells strongly like ChatGPT.

                1. 1

                  Huh. It’s true that it looks LLM-ish, but I actually enjoy this style. I like terse, bullet-point-driven messages. If I wanted to read a novel, I would pick something from the shelf, and I find most technical content out there to be needlessly verbose and colorful.

                  1. 3

                    I agree that much technical content is needlessly verbose and can be reduced to a couple of bullet points without loss. That’s not an argument for terse bullet points. It’s an argument that the people writing the content don’t understand the topic well enough to write something nuanced.

                  2. 13

                    SSE works seamlessly with existing HTTP infrastructure.

                    No! Anything along the path that has a buffer which doesn’t periodically flush itself can delay delivery indefinitely! You must triple-check that you are doing everything right along the entire path. Same as with WS.

                    1. 17

                      While that is true in theory, in practice the X-Accel-Buffering: no response header is typically enough to fix buffering issues, even for third-party proxies outside of your control. Most importantly, now that HTTPS is the norm, you largely no longer need to consider public proxies or ISP-level middleboxes: you control most of the stack yourself.

                      I found SSE to be shockingly well supported and painless in practice for how trivial it is.
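
                      For what it’s worth, here’s roughly what that looks like in a bare Node handler (a sketch; the header name is the real nginx one, but the route, timings, and event shape are made up):

                      ```javascript
                      // Minimal SSE endpoint sketch. "X-Accel-Buffering: no" tells nginx-style
                      // proxies not to buffer this response; the periodic comment line acts as
                      // a keep-alive so idle intermediaries don't time the connection out.
                      function sseFrame(event, data) {
                        // The wire format is fixed by the SSE spec: field lines, blank-line delimited.
                        return `event: ${event}\ndata: ${data}\n\n`;
                      }

                      function sseHandler(req, res) {
                        res.writeHead(200, {
                          "Content-Type": "text/event-stream",
                          "Cache-Control": "no-cache",
                          "X-Accel-Buffering": "no",
                        });
                        const ping = setInterval(() => res.write(": ping\n\n"), 15000);
                        req.on("close", () => clearInterval(ping));
                        res.write(sseFrame("hello", "connected"));
                      }

                      // require("http").createServer(sseHandler).listen(8080);
                      ```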

                      1. 4

                        A bunch of non-browser things use the “long poll” model, where you start an HTTP request and the server sends keep-alive packets but doesn’t reply until some event happens, which may be hours later. It’s mostly a good approach; the only downside is that most FaaS infrastructure seems incapable of supporting it, so you need something persistent to terminate the connection.
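
                        A rough sketch of that model on the server side, assuming Node and a made-up “update” event source:

                        ```javascript
                        const { EventEmitter } = require("events");
                        const events = new EventEmitter(); // stand-in for whatever produces updates

                        // Hold the response open until an event fires or the timeout passes;
                        // an empty reply tells the client to just poll again.
                        function longPollHandler(req, res, timeoutMs = 30000) {
                          const finish = (body) => {
                            clearTimeout(timer);
                            events.removeListener("update", finish);
                            res.end(body);
                          };
                          const timer = setTimeout(() => finish(""), timeoutMs);
                          events.once("update", finish);
                        }
                        ```

                        A real version also needs the keep-alive packets mentioned above, so intermediaries don’t kill the idle connection.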

                        1. 1

                          I was misremembering the problem and its specifics slightly: it was buffering and Cloudflare, which caused connection drops because Cloudflare assumed that no response from the server meant a dead app. Fixing the buffering would have helped to a degree, but it wasn’t a complete solution; instead you had to send bogus data as a faux keep-alive, or stop using Cloudflare.

                          (I’m having to work this out without much info since I was only a consumer of the problematic API, not a provider)

                          1. 5

                            Both SSE and WebSockets require keep-alives in practice. That’s unrelated to Cloudflare. Cloudflare does support the header I mentioned.

                      2. 7

                        See also “comet” for those who remember the heady days of web 2.0 and AJAX, 15+ years ago…

                        1. 3

                          Of note: if you’re using HTTP/1, the browser’s limit is 6 connections per host.

                          1. 5

                            This might be perfectly fine; 6 connections is a lot, but if someone has multiple tabs open to your site you can hit it, and it is annoying when you do - not only do the SSEs fail to connect on new tabs (or existing tabs fail to reconnect), but the regular page load might too! It acts like the site is down.

                            (What grinds my gears about this is that the 6-connection limit is arbitrary: it doesn’t actually come from the spec, which says “Clients that use persistent connections SHOULD limit the number of simultaneous connections that they maintain to a given server.” So OK, they put in a limit (though it’s important to realize that “SHOULD” is not a requirement; it should be evaluated in the conditions you actually apply it to). But then it says “A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy.” You can understand that back when servers weren’t designed for long-lived connections, but now? And regardless, where does the 6 come from? I guess they reevaluated it later (good), but then arbitrarily stopped reevaluating, alas.)

                            Anyway, it isn’t really important, because it is very easy to work around: set up a wildcard subdomain pointing at your server and make each connection go to a random one.

                            My live deployment literally does replace("mydomain.com", "stream" + Math.floor(Math.random() * 100) + ".mydomain.com") when establishing the EventSource. It took a few minutes to set up (I like wildcard subdomains anyway, tbh, but even if you don’t, it isn’t hard to set one up), and then the problem just completely disappeared, even with the users who leave tabs open for ages and end up with dozens of them. It’s been years now.
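
                            In case it helps anyone, that trick condensed into a helper (the domain names and the shard count of 100 are obviously placeholders; you need wildcard DNS and a certificate covering *.mydomain.com):

                            ```javascript
                            // Pick one of 100 hypothetical streamNN subdomains at random, so each tab
                            // lands on a different host and gets its own HTTP/1.1 connection budget.
                            function shardedStreamUrl(base) {
                              const shard = "stream" + Math.floor(Math.random() * 100);
                              return base.replace("mydomain.com", shard + ".mydomain.com");
                            }

                            // const source = new EventSource(shardedStreamUrl("https://mydomain.com/events"));
                            ```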

                            1. 8

                              It is not an arbitrary limit though: it is one that was found by testing to be the highest number of connections that can still work without trashing throughput. Those experiments have been repeated at least for 4G connections (and the limit was actually way too high for 3G).

                              I found out about those limits and experiments when I was working on the Chromium network stack about a decade ago (when 4G LTE was still the newest thing around), but it seems to be impossible to find those experiment results anymore.

                              edit: I realized that I didn’t mention what goes wrong with higher connection counts over slower/higher-latency links. Unsurprisingly, it is TCP and its congestion control. That is also the reason why QUIC/HTTP3 is not built on top of TCP: a major goal was to be able to increase the connection count per client.

                              1. 3

                                Does this apply to usually-idle connections too? I can understand saturating the physical link with multiple active connections, but one just sitting there idle rarely (if ever) transmitting anything ought not do much to the throughput (and if it does, you can close it at any time too).

                                I know almost nothing about how 3G/4G stuff actually works, though, except that it radios the stuff (and my understanding is that’s why events on phones tend to be sent down a different mechanism, so the radio can be turned off more often to save power… but again I know almost nothing, so don’t believe me lol).

                              2. 3

                                This is a cool workaround. I’ve always thought the 6-connection limit was kind of just this arbitrary “fuck SSE” thing, but that workaround is reasonable.

                                1. 3

                                  You can also work around that arbitrary limit by fronting your SSE stream with an additional BroadcastChannel and enough metadata in each SSE for each tab to determine which ones it should care about. It is much more complicated but it is available when you don’t control the infrastructure.
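
                                  Something like this, as a sketch (the channel name and message shape are made up; a real version also needs leader election, e.g. via the Web Locks API, so that exactly one tab owns the EventSource):

                                  ```javascript
                                  // All tabs of the same origin share this channel (browser API,
                                  // also available in recent Node for testing purposes).
                                  const channel = new BroadcastChannel("sse-fanout");

                                  // The one tab that owns the SSE connection rebroadcasts every event.
                                  function startLeaderTab(url) {
                                    const source = new EventSource(url);
                                    source.onmessage = (e) => channel.postMessage(e.data);
                                    return source;
                                  }

                                  // Every other tab just listens on the shared channel and filters
                                  // events by whatever metadata the server attaches.
                                  function startFollowerTab(onEvent) {
                                    channel.onmessage = (e) => onEvent(e.data);
                                  }
                                  ```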

                              3. 1

                                I think SSEs add the same level of complexity as WebSockets, without the full bidirectional connection.

                                If I had to push messages down to the client, I would rather implement WS and have the full feature set available if necessary (I’m aware of the rule of least power).

                                This is how I see it: if you need to push data down to the client, I assume you have some process running outside of the request/response cycle. I will also assume that you have multiple servers handling client connections. The clients need updates from the processes running outside of the req/res cycle.

                                In that scenario, with SSE or WS, you will need a way to “know” which server holds which client connection, so you can push updates to the correct client. In most cases that means your server is either polling a DB or message broker, or listening to a pub/sub system for new messages.

                                Am I missing something? What are the use cases that make SSE simpler to implement?
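
                                To make the scenario concrete, here is a sketch of the routing layer described above (the broker is abstracted as an EventEmitter so the sketch is self-contained; all names are made up). Note that this part is the same whether the client-facing leg is SSE or WS; SSE just replaces the socket upgrade and framing with a plain HTTP response:

                                ```javascript
                                const { EventEmitter } = require("events");

                                const bus = new EventEmitter(); // stand-in for your message broker subscription
                                const clients = new Map();      // userId -> SSE response stream on THIS server

                                // Called when a client opens its EventSource against this server.
                                function register(userId, res) {
                                  clients.set(userId, res);
                                }

                                // Every server subscribes to the bus, but each one only writes
                                // to the sockets it actually holds.
                                bus.on("message", ({ userId, data }) => {
                                  const res = clients.get(userId);
                                  if (res) res.write(`data: ${data}\n\n`);
                                });
                                ```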