1. 13

    Everyone I know has been on Telegram/WhatsApp for years, I really don’t see any interest in carrier-based messaging coming back.

    1. 3

      Same. I’m in Europe, and know nobody who uses SMS regularly, or even iMessage.

      1. 3

        Ironic how they brought that upon themselves by being so stingy with SMS pricing

        1. 5

          You can also use stickers in Telegram, have group chats with moderation, have avatars, talk to people without giving them your phone number, and so on. It’s like IRC vs Discord. Even in a world where SMS was free from day one I don’t think it’d last.

          1. 2

            That’s true, however SMS is still noticeably more popular in NA than in Europe. Their greediness applied to MMS too, which had many more features than plain text but died instantly because it was actually more expensive than physical mail.

          2. 4

            I did a calculation about 20 years ago that put the price of SMS at over £500/MiB. It hasn’t changed very much unless you are on an unlimited-SMS plan. It was cheaper to send a fax to Antarctica than to send a page of text via SMS; the pricing was insane. On the plus side, the price of data is far more reasonable, so even a protocol whose overhead for short messages is a few thousand percent is much cheaper than SMS.

            I’m using Signal for pretty much everything that I used to use SMS for, and now that it’s basically free (it is free on WiFi), I use it a lot more. It also helps that Signal supports clients on multiple devices, so I can use the desktop app when I’m sitting at a computer with a real keyboard and only use the mobile version when I’m out. SMS is intrinsically tied to a single endpoint, which was fine in a world where people owned a single device, but when people use a phone, a tablet, a work computer and a personal laptop it just doesn’t work.

        2. 1

          It bothers me that I must use Telegram for one group, WhatsApp for another and Signal for the last one. Each of these messaging apps was built as an application rather than on a standard. It looks like that is what RCS is, so if I could use a single app for all messaging, I’d be happy 👍

        1. 9

          Can someone sum up for me why one might like QUIC?

          1. 13

            Imagine you are visiting a website and need to fetch these files:

            • example.com/foo
            • example.com/bar
            • example.com/baz

            In HTTP 1.1 you needed a separate TCP connection for each of them to fetch them in parallel, and browsers capped the number of connections per host, IIRC at about 4 in most cases. That meant that if you tried to fetch example.com/qux and example.com/quux in addition to the above, one of the requests had to wait. It didn’t matter that one of the first 4 might take a long time and block the pipe; the waiting request would do nothing until a resource was fully fetched. So if by chance a slow resource was requested before the fast ones, it could slow down the whole page.

            HTTP 2 fixed that by allowing multiplexing: fetching several files over the same pipe. You no longer need multiple connections. However, there is still a problem. Since TCP is a single stream of data, every packet before the current one must be received before a given frame can be processed. A single missing packet can therefore stall resources that have already arrived, because everything waits for the straggler, which may need to be retransmitted over and over.

            HTTP 3 (aka HTTP over QUIC with a few bells and whistles) is based on UDP, with the streaming built on top of that. That means each “logical” stream within a single “connection” can be processed independently; a toy demonstration follows the list below. It also adds a few other things:

            • always encrypted communication
            • multihoming (which is useful, for example, for mobile devices that can “reuse” a connection when switching networks, for example from WiFi to cellular)
            • reduced handshake for encryption
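
            To make the head-of-line blocking difference concrete, here is a toy model of delivery order in Python. It is a sketch, not real protocol code: four packets on one connection carry two streams, and the first packet of stream A is lost and only arrives last, after retransmission.

              # Toy model of head-of-line blocking, not real protocol code.
              # Each packet: (stream, connection_seq, stream_seq, payload).
              packets = [
                  ("B", 2, 1, "b1"),
                  ("A", 3, 2, "a2"),
                  ("B", 4, 2, "b2"),
                  ("A", 1, 1, "a1"),  # the retransmitted straggler arrives last
              ]

              def in_order_delivery(trace, seq_of):
                  """Deliver payloads strictly in sequence order, stalling at gaps."""
                  buffered, next_seq, delivered = {}, 1, []
                  for pkt in trace:
                      buffered[seq_of(pkt)] = pkt[3]
                      while next_seq in buffered:
                          delivered.append(buffered.pop(next_seq))
                          next_seq += 1
                  return delivered

              # TCP: one sequence number space for the whole connection, so
              # nothing is delivered until the lost packet finally arrives.
              print(in_order_delivery(packets, seq_of=lambda p: p[1]))

              # QUIC: each stream is sequenced independently, so stream B is
              # usable immediately; only stream A waits for its retransmission.
              for name in ("A", "B"):
                  trace = [p for p in packets if p[0] == name]
                  print(name, in_order_delivery(trace, seq_of=lambda p: p[2]))
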
            1. 9

              Afaik, multihoming is proposed but not yet standardized. I know of no implementation that supports it.

              QUIC does have some other nice features though

              • QUIC connections are independent of IP addresses. I.e. they survive IP address changes
              • Fully encrypted headers: added privacy and also flexibility. Makes it easier to experiment on the Internet without middleboxes interfering
              • Loss recovery is better than TCP’s
              1. 4

                Afaik, multihoming is proposed but not yet standardized

                That is true, however it should be clarified that this only applies to using multiple network paths simultaneously. As you mentioned, QUIC does fix the layering violation of TCP connections being identified partially by their IP address. So what OP described (reusing connections when switching from WiFi to cellular) already works. What doesn’t work yet is having both WiFi and cellular on at the same time.

                1. 3

                  Fully encrypted headers

                  Aren’t headers already encrypted in HTTPS?

                  1. 8

                    HTTP headers, yes. TCP packet headers, no. HTTPS is HTTP over TLS over TCP. Anything at the TCP layer is unencrypted. In some scenarios, you start with HTTP before you redirect to HTTPS, so the initial HTTP request is unencrypted.

                    1. 1

                      They are. If they weren’t, it’d be substantially less useful considering that’s where cookies are sent.

                      e: Though I think QUIC encrypts some stuff that HTTPS doesn’t.

                  2. 3

                    Why is this better than just making multiple tcp connections?

                    1. 5

                      TCP connections are not free: they require handshakes at both the TCP and TLS levels. They also consume resources at the OS level, which can be significant for servers.
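
                      As a rough latency sketch (assuming TLS 1.3’s one-round-trip handshake; TLS 1.2 needed two, and server processing time is ignored):

                        # Back-of-the-envelope handshake latency before the first
                        # request can be sent. Assumes TLS 1.3; rtt_ms is made up.
                        rtt_ms = 50

                        tcp_tls = (1 + 1) * rtt_ms  # TCP SYN/SYN-ACK, then TLS handshake
                        quic = 1 * rtt_ms           # transport + crypto in one exchange
                        quic_resumed = 0 * rtt_ms   # 0-RTT resumption: data in first flight

                        print(f"TCP+TLS: {tcp_tls} ms, QUIC: {quic} ms, "
                              f"resumed QUIC: {quic_resumed} ms")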

                      1. 4

                        Significantly more resources than managing quic connections?

                        1. 4

                          Yes, QUIC uses UDP “under the table”, so creating a new stream within an existing connection is essentially free: all you need to do is generate a new stream ID (no communication between the participants is needed to create a stream). So from the network stack’s viewpoint it is “free”.
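
                          A sketch of why that is, based on RFC 9000’s stream numbering (the helper below is illustrative, not a real API): the two low bits of a stream ID encode who opened the stream and whether it is bidirectional, so opening one is just bumping a local counter.

                            # Illustrative sketch of QUIC stream-ID allocation (RFC 9000 §2.1).
                            # Bit 0: initiator (0 = client, 1 = server).
                            # Bit 1: directionality (0 = bidirectional, 1 = unidirectional).
                            def next_stream_id(counter, is_client, unidirectional):
                                return (counter << 2) | (0 if is_client else 1) | (2 if unidirectional else 0)

                            # A client opening three bidirectional streams just picks IDs 0, 4, 8;
                            # no round trip to the peer is needed before sending data on them.
                            print([next_stream_id(n, True, False) for n in range(3)])  # [0, 4, 8]
                            print([next_stream_id(n, False, True) for n in range(3)])  # [3, 7, 11]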

                          1. 3

                            Note that this is true for current userspace implementations, but may not be true in the long term. For example, on FreeBSD you can do sendfile over a TLS connection and avoid a copy to userspace. With a userspace QUIC connection, that’s not possible. It’s going to end up needing at least some of the state to be moved into the kernel.

                      2. 5

                        There are also some headaches it causes around network congestion negotiation.

                        Say I have 4 HTTP/1.1 connections instead of 1 HTTP/2 or HTTP/3 connection.

                        Stateful firewalls use 4 entries instead of 1.

                        All 4 connections independently ramp their speed up and down as their independent estimates of available throughput change. I suppose in theory a TCP stack could use congestion information from one to inform behaviour on the other 3, but in practice I believe they don’t.

                        HTTP/1.1 requires half-duplex transfer on each connection (don’t send the second request until the entirety of the first response arrives; can’t start sending the second response before the entirety of the second request arrives). This makes it hard for individual requests to reach max throughput, except when the bodies are very large, because the data flows in each direction keep slamming shut and then opening all the way back up.

                        AIUI having 4 times as many connections is a bit like executing a tiny Sybil attack, in the context of multiple applications competing for bandwidth over a contended link. You show up acting like 4 people who are bad at using TCP instead of 1 person who is good at using TCP. ;)

                        On Windows the number of TCP connections you can open at once by default is surprisingly low for some reason. ;p

                        HTTP/2 and so on are really not meant to make an individual server be able to serve more clients. They deliberately spend more server CPU on each client in order to give each client a better experience.

                      3. 2

                        In theory, HTTP 1.1 allowed pipelining requests: https://en.wikipedia.org/wiki/HTTP_pipelining which allowed multiple simultaneous fetches over a single TCP connection.

                        I’m not sure how broadly it was used.
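
                        For illustration, pipelining is simple enough to do by hand with a raw socket. A hypothetical sketch (example.com stands in for any HTTP/1.1 server): both requests are written before anything is read back.

                          # Minimal HTTP/1.1 pipelining sketch: two requests go out before
                          # any response is read. Responses still arrive strictly in
                          # request order, so a slow first response delays the second.
                          import socket

                          HOST = "example.com"  # placeholder host
                          requests = (
                              f"GET / HTTP/1.1\r\nHost: {HOST}\r\n\r\n"
                              f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
                          ).encode()

                          with socket.create_connection((HOST, 80)) as sock:
                              sock.sendall(requests)  # one write, no waiting in between
                              response = b""
                              while chunk := sock.recv(4096):
                                  response += chunk

                          print(response.count(b"HTTP/1.1"), "status lines on one connection")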

                        1. 4

                          Pipelining still requires each document to be sent in order; a single slow request clogs the pipeline. Also, from Wikipedia, it appears it was never broadly used due to buggy implementations and limited proxy support.

                          1. 3

                            QUIC avoids head-of-line blocking. You do one handshake to get an encrypted connection but after that the packet delivery for each stream is independent. If one packet is dropped then it delays the remaining packets in that stream but not others. This significantly improves latency compared to HTTP pipelining.

                        2. 5

                          A non-HTTP-oriented answer: it gives you multiple independent data streams over a single connection using a single port, without needing to write your own framing/multiplexing protocol. Streams are lightweight, so you can create basically as many of them as you desire, and they will all be multiplexed over the same port. Whether streams are ordered or sent in unordered chunks is up to you. You can also choose to transmit data unreliably; this appears to be a slightly secondary feature, but at least the implementation I looked at (quinn) provides operations like “find my maximum MTU size” and “estimate RTT” that you will need anyway if you want to use UDP for low-latency unreliable stuff such as sending media or game data.

                        1. 1

                          As somebody who thinks Dream seems like a decent, well-intentioned guy, I prefer to err on the side of caution. If any of the six runs is conceivably possible when considered separately, then it seems to me conceivably possible that all of them could occur consecutively.

                          Furthermore, assuming not all of the six consecutive streams analyzed in the paper were new records, I find it weird that he would have activated this potential hack/mod for six consecutive streams. If I were him, I’d imagine that I’d use such a hack less consistently, in order not to draw attention to myself.

                          Sure, ego, or whatever, impairs your judgment. But this supposed cheating doesn’t really seem like the result of an impaired judgment, the way it usually looks when top-level speed runners start cheating for no obvious reason.

                          Then again, I tend to give people the benefit of the doubt, perhaps too much, especially if I find them sympathetic.

                          1. 2

                            The thing is, a single “god run” is a lot more probable than a string of really good runs. If the seed needs you to get, let’s say, 4 ender pearls, a single “god run” that gets all 4 from 4 trades (a 100 percent success rate) is around 0.0005% probable, which is about 88 thousand times more likely than Dream’s streak of runs. Those streams contained multiple run attempts (a single run takes ~15 minutes). Such a run might draw a lot more scrutiny due to its obvious nature, but the string of highly successful runs is more suspicious in the end.
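
                            The arithmetic checks out, assuming the pre-nerf 1.16.1 barter table, where the ender pearl trade has weight 20 out of a 423 total:

                              # Chance that 4 piglin trades in a row all give ender pearls,
                              # assuming the 1.16.1 barter weights (pearls: 20 of 423 total).
                              pearl_chance = 20 / 423
                              god_run = pearl_chance ** 4
                              print(f"{god_run:.4%}")  # ~0.0005%, matching the figure above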

                            1. 2

                              These streams were recorded after he suddenly returned to speedrunning 1.16.2, having previously quit because the runs were too luck-dependent. It’s understandable, as it’s extremely frustrating to do everything right just to have your run ruined by bad luck. Presumably, he thought that the increase in odds would be enough to let him finally get a run with good luck, and that the manipulation would not be noticeable, or would be extremely hard to prove. That of course turned out to be incorrect.

                            1. 7

                              For fairness, we should find some way to include Dream’s perspective.

                              My perspective on his perspective is that he goes through a lot of handwaving and psychological arguments to explain his situation. The speedrun team’s paper has a basic statistical argument which convinces me that something is unexplained, but I don’t feel like Dream has an explanation. But without a clear mechanism for how cheating was accomplished, it’s premature to conclude anything.

                              In a relative rarity for commonly-run games, the Minecraft speedrunning community allows many modifications to clients. It complicates affairs that Dream and many other runners routinely use these community-approved modifications.

                              1. 5

                                But without a clear mechanism for how cheating was accomplished, it’s premature to conclude anything.

                                This is the argument that always confuses me. At the end of the day, Minecraft is just some code running on someone else’s computer. Recorded behavior of this code is extremely different from what it should be. There are about a billion ways he could have modified the RNG, even live on stream with logfiles to show for it.

                                1. 1

                                  I like to take a scientific stance when these sorts of controversies arise. When we don’t know how somebody cheated, but strongly suspect that their runs are not legitimate, then we should not immediately pass judgement, but work to find a deeper understanding of both the runner and the game. In the two most infamous cheating controversies in the wider speedrunning community, part of the resolution involved gaining deeper knowledge about how the games in question operated.

                                2. 3

                                  But without a clear mechanism for how cheating was accomplished

                                  Are you asking for a proof of concept of how to patch a Minecraft executable or mod to get as lucky as Dream was?

                                  1. 3

                                    Here’s one:

                                    • open the Minecraft 1.16.4 jar in your choice of archive program
                                    • go to /data/minecraft/loot_tables/gameplay/piglin_bartering.json
                                    • increase the weight of the ender pearl trade
                                    • delete META-INF like in the good old days (it contains a checksum)
                                    • save the archive

                                    Anyone as familiar with Minecraft as Dream would know how to do this.
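
                                    The same edit can also be scripted. A hypothetical sketch (the weight value is made up, and the loot-table layout is from memory of the 1.16 data format):

                                      # Rebuild the jar with a boosted ender-pearl barter weight
                                      # and without META-INF. The weight chosen is illustrative.
                                      import json
                                      import zipfile

                                      SRC, DST = "1.16.4.jar", "1.16.4-modded.jar"
                                      TABLE = "data/minecraft/loot_tables/gameplay/piglin_bartering.json"

                                      with zipfile.ZipFile(SRC) as src, zipfile.ZipFile(DST, "w") as dst:
                                          for name in src.namelist():
                                              if name.startswith("META-INF/"):
                                                  continue  # drop the signature, old-school style
                                              data = src.read(name)
                                              if name == TABLE:
                                                  table = json.loads(data)
                                                  for entry in table["pools"][0]["entries"]:
                                                      if entry.get("name") == "minecraft:ender_pearl":
                                                          entry["weight"] = 100  # any value above vanilla
                                                  data = json.dumps(table).encode()
                                              dst.writestr(name, data)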

                                  2. 2

                                    But without a clear mechanism for how cheating was accomplished, it’s premature to conclude anything.

                                    We have a clear mechanism: he modded his game. That’s why, when he was asked for his game logs, he deleted them. Just from the odds alone, he is 100.00000000% guilty.

                                    1. 3

                                      As the original paper and video explain, Minecraft’s speedrunning community does not consider modified game clients to be automatically cheating. Rather, the precise nature of the modifications used is what determines whether someone cheated.

                                      While Dream did admit to destroying logs, he did also submit supporting files for his run. Examining community verification standards for Minecraft speedruns, it does not seem like he failed to follow community expectations. It is common for speedrunning communities to know about possible high-reliability verification techniques, like input captures, but to also not require them. Verification is just as much about social expectations as about technical choices.

                                      From the odds alone, Dream’s runs are probably illegitimate, sure, but we must refuse to be 100% certain, due to Cromwell’s Rule; if we are completely certain, then there’s no point in investigating or learning more. From the paper, the correct probability to take away is 13 nines of certainty, which is relatively high. And crucially, this is the probability that our understanding of the situation is incomplete, not the probability that he cheated.

                                      1. 4

                                        But you said there’s no clear mechanism for how cheating was accomplished. Changing the probability tables through mods is a fairly clear and simple mechanism, isn’t it?

                                  1. 19

                                    Dream’s uploads are spectacles. In his recent Youtube uploads, there are videos where he attempts (and typically completes) a full any% run while being hindered by other players. These videos include flashy playing, like narrowly escaping pursuers with good movement, or setting explosive traps and successfully luring other players into them. Some of these uploads have now been supplemented with their original recordings, which helps dispel the idea that they were pre-planned; this video is an edited-down 50-minute summary of this original three-hour run. We would be remiss not to notice that these videos have tens of millions of views; this investigation is into a very popular member of the community.

                                    There is something to be said for luck. One person’s luck is another person’s attention to detail. It is always possible that speedrunners intuitively make microdecisions which unconsciously influence their times. A variety of techniques have been discovered this way. I run Link to the Past, where wall pumping was discovered by accident and eventually became an entire collection of movement quirks.

                                    I enjoy the paper’s careful attention to self-doubt and essential epistemic uncertainty. It doesn’t just make the conclusions more solid, but also reflects the internal struggles of the speedrunning community to always be looking for better ways to validate their athletic feats.

                                    I also, as a reverse engineer of Minecraft, appreciate the valuable understanding that Vanilla MC does not use RNG responsibly. Minecraft’s RNG correlates all random events, commutes nothing (no observable random behaviors commute), and does not mix player input into the RNG, so the entire game experience includes a hidden variable which non-deterministically but reliably shapes gameplay, akin to classic 8- and 16-bit consoles. In my from-scratch implementation of Java’s RNG, we can clearly see that it is a linear congruential generator and is vulnerable to the same sorts of tight-gameplay predictability as older consoles. At the same time, the RNG is called so unreasonably often that it is nearly useless for predicting anything. In the category any% Set Seed, where the world seed is fixed, the geometry can still be randomized if the player does not follow specific pre-planned movement.
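
                                    For reference, the core of java.util.Random fits in a few lines; the constants below are the documented LCG parameters, and this snippet is a from-memory sketch rather than my full implementation:

                                      # java.util.Random's core: a 48-bit linear congruential generator.
                                      MULT, INC, MASK = 0x5DEECE66D, 0xB, (1 << 48) - 1

                                      class JavaRandom:
                                          def __init__(self, seed):
                                              self.seed = (seed ^ MULT) & MASK  # Java scrambles seeds this way

                                          def next_bits(self, bits):
                                              self.seed = (self.seed * MULT + INC) & MASK
                                              return self.seed >> (48 - bits)

                                      r = JavaRandom(42)
                                      print([r.next_bits(32) for _ in range(3)])
                                      # Recover any one 48-bit state and every past and future output
                                      # is fixed; only the sheer volume of calls during normal gameplay
                                      # makes prediction impractical in practice.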

                                    Some parts of the paper remind me of how powerfully expressive a vernacular language can be. For example, in the context of which items might be useful for a player to obtain via cheating:

                                    Obsidian is an option, possibly allowing for nether travel. Another option is string, which runners have recently started hoping for with the advance of “hypermodern” strats that involve skipping villages – but hypermodern strats were not well-developed during the time that Dream ran, and Dream did not go for string.

                                    The conflation of strategy and tactics into “strat”, the reference to items like “obsidian” and “string” not just as elements of an inventory but as entire portions of a run’s path, the word “village” as a reference to a possible but undetermined place in spacetime, and the delightfully compact phrase “Dream did not go for string” to indicate both empirical truths and also the runner’s emotional state and intent, are all interesting bits of shorthand.

                                    1. 2

                                      I have long been convinced that while the “manhunt” series videos are not cheated, they are not fully truthful either. Given the amount of luck in these videos, it seems pretty incredible that each one is more spectacular and unlikely than the last. There is no episode in which he just dully dies from a creeper in a cave, or falls into a ravine, or gets killed in the first five minutes. When he loses, it’s always at the end of the game, exactly where it would make the video most entertaining… except when competing against people outside of his circle of friends. The runs have also never been streamed.

                                      I think the most sensible explanation is that, despite his denying it, there are some agreements in place. For example, restarting the run if the ending is too anticlimactic, or if he dies before a certain point. That would also explain the number of narrow escapes, as his opponents would want to deliberately avoid killing him before the video is long enough to upload.

                                    1. 2

                                      I have personally stopped using debuggers for crashes altogether; I rarely find them useful anymore. I compile my programs with asan (AddressSanitizer) and ubsan (UndefinedBehaviourSanitizer), which insert an enormous number of checks directly into the binary. The overhead is not insignificant (2-7x), but in exchange they can not only detect corruptions that would otherwise only cause a crash later, they can also pinpoint the exact type of error (use after free, buffer overflow, null dereference, overflow, etc.), summarize the layout of your data in memory (“read of size 4 at 8 bytes right of array x”) and track the lifecycle of objects and where they were allocated.

                                      With memory bugs taken care of, the major class of problems left for me is then application bugs. For this I often use bpftrace, which lets you attach lightweight probes to collect data when a function is entered or exited without having to restart the binary. It is especially useful with lower level software, as you can use it to collect information from both kernel space and arbitrarily many different user space processes at the same time.