1. 54

  2. 7

    Then the theoretical limit a server can support on a single port is 2⁴⁸, roughly 280 trillion, because each connection is distinguished by the client’s (IP, port) pair over IPv4. This goes to 2¹⁴⁴ over IPv6, which far exceeds the estimated 2⁸⁰ atoms in the entire observable universe.
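
    Spelling the bound out (this assumes each connection to the server’s one address and port is distinguished only by the client’s address and port):

    ```latex
    \underbrace{2^{32}}_{\text{client IPv4 addresses}} \times \underbrace{2^{16}}_{\text{client ports}} = 2^{48} \approx 2.8 \times 10^{14},
    \qquad
    2^{128} \times 2^{16} = 2^{144} \ \text{over IPv6}
    ```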

    1. 10

      According to https://educationblog.oup.com/secondary/maths/numbers-of-atoms-in-the-universe, it’s not 2^80 but on the order of 10^80 (2.4 × 10^78, to be more exact, as taken from the article), which works out to approx. 2^260, so IPv6 is still not enough to cover it all. But I agree with the general idea that the IPv6 address space should be sufficient for humankind in the observable future.
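
      For anyone checking the arithmetic:

      ```latex
      \log_2\!\left(2.4 \times 10^{78}\right) = \log_2 2.4 + 78 \log_2 10 \approx 1.26 + 259.11 \approx 260.4
      ```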

      1. 4

        I hope we get there one day. For now I’m stuck unsupported: https://test-ipv6.com/ 0/10 :(

        1. 5

          Surprised to see I’m 0/10, too. As far as I know this has never impacted me. Given that the IPv4 scarcity worries turned out like the Peak Oil scare did, can someone remind me why I should still care about IPv6? (I’m only half joking)

          1. 8

            IPv4 scarcity is not the only reason to care about v6 (having millions of IPs per server can be very useful, to give just one example), but it’s also not a fake problem. v4 scarcity is getting worse all the time. T-Mobile LTE users don’t even get a NAT’d v4 anymore, just a sort of edge translation so they can mostly reach v4-only hosts (this breaks a lot of WebRTC stacks if the server is v4-only, for example).

            1. 2

              T-Mobile LTE users don’t even get a NAT’d v4 anymore

              Forgive me for being ignorant here, but I thought NAT was pretty much the bandaid for mitigating the symptoms of IPv4 address exhaustion (at least on the client side). Is there some fundamental limit to how many users can be behind a NAT, and is T-Mobile doing a type of translation different from standard NAT in order to get around it?

              1. 5

                Yes, T-Mobile isn’t using a standard NAT or CGNAT at all. They use 464XLAT if you want to look up the tech.

                1. 1

                  There are limits to NAT, but it’s mostly either port exhaustion or too much translation happening on a single router. Layered NAT can solve that, but it degrades performance. There is probably a point at which IPv6 would be cheaper to run than layers and layers of NAT, but I don’t know if that time is coming any time soon.
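
                  To put a rough number on the port-exhaustion side (a back-of-the-envelope sketch assuming the simplest NAT, where every active flow pins one source port on the shared public address, and a hypothetical 1,000 subscribers per address):

                  ```latex
                  \text{concurrent flows per public IPv4} \le 2^{16} - 1 = 65{,}535,
                  \qquad
                  \frac{65{,}535}{1{,}000\ \text{subscribers}} \approx 65\ \text{flows each}
                  ```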

                  1. 0

                    CGNAT means you share an IPv4 address; it makes hole punching even worse, but most things can be made to work.

              2. 3

                10/10 here

                1. 1

                  Thanks for the link — that’s new to me. I get 10/10 at home; I’m not fond of Comcast/Xfinity the company but I’m happy to see they’re giving me up-to-date service.

                  So does this mean that I could run a publicly-accessible server from home without NAT shenanigans and with a stable IPv6 address?

                  1. 2

                    Yeah. Once enabled, your router (depending on the router) will usually hand out addresses from the assigned /64 to devices on your network. You can live a NAT-free life! Just be careful to firewall.
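
                    A minimal sketch of what that looks like, assuming the machine already has a global IPv6 address and the firewall allows inbound TCP on the (arbitrary, hypothetical) port 8080; the server binds directly to a global address and sees the client’s real address, with no NAT in between:

                    ```c
                    /* Minimal IPv6 TCP listener -- error handling mostly omitted for brevity. */
                    #include <arpa/inet.h>
                    #include <netinet/in.h>
                    #include <stdio.h>
                    #include <sys/socket.h>
                    #include <unistd.h>

                    int main(void) {
                        int lfd = socket(AF_INET6, SOCK_STREAM, 0);
                        struct sockaddr_in6 addr = {0};
                        addr.sin6_family = AF_INET6;
                        addr.sin6_addr   = in6addr_any;       /* all of the host's (global) v6 addresses */
                        addr.sin6_port   = htons(8080);       /* hypothetical port */
                        if (bind(lfd, (struct sockaddr *)&addr, sizeof addr) < 0) { perror("bind"); return 1; }
                        listen(lfd, 16);

                        for (;;) {
                            struct sockaddr_in6 peer;
                            socklen_t len = sizeof peer;
                            int cfd = accept(lfd, (struct sockaddr *)&peer, &len);
                            char ip[INET6_ADDRSTRLEN];
                            inet_ntop(AF_INET6, &peer.sin6_addr, ip, sizeof ip);
                            /* With no NAT in the path, this is the client's actual global address. */
                            printf("connection from [%s]:%u\n", ip, ntohs(peer.sin6_port));
                            close(cfd);
                        }
                    }
                    ```

                    The firewall caveat above still applies: with no NAT there is also no implicit “deny inbound,” so the router or host firewall has to do that job.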

              3. 2

                Each server socket needs two file descriptors

                Is this always the case? I made an IOCP/io_uring-based HTTP/2 server, and it would be cool to claim it can handle 1 million concurrent connections.

                Also, even without io_uring, why isn’t one socket per connection enough? They allow for both reading and writing.

                1. 2

                  More of the quote is:

                  Each server socket needs two file descriptors:

                  A buffer for sending

                  A buffer for receiving

                  You don’t need two file descriptors (unless you want to count the listening fd, which I don’t think they meant). It seems like they were conflating the descriptor and the underlying socket, maybe by analogy with pipes, which do need 2 descriptors if you want bi-directional communication.

                  1. 1

                    That’s my bad. I mis-remembered; for some reason I thought there were two file descriptors. After reviewing the docs, I realized I was wrong. Both methods return only one file descriptor.

                    From beej’s networking guide:

                    accept() returns the newly connected socket descriptor, or -1 on error, with errno set appropriately.

                    The new socket descriptor to be used in subsequent calls, or -1 on error (and errno will be set accordingly).
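
                    A toy illustration of that point (loopback echo on a hypothetical port, error handling omitted): the one descriptor accept() returns is used for both receiving and sending.

                    ```c
                    #include <arpa/inet.h>
                    #include <netinet/in.h>
                    #include <sys/socket.h>
                    #include <sys/types.h>
                    #include <unistd.h>

                    int main(void) {
                        int lfd = socket(AF_INET, SOCK_STREAM, 0);
                        struct sockaddr_in a = {0};
                        a.sin_family      = AF_INET;
                        a.sin_port        = htons(9000);               /* hypothetical port */
                        a.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
                        bind(lfd, (struct sockaddr *)&a, sizeof a);
                        listen(lfd, 1);

                        int cfd = accept(lfd, NULL, NULL);             /* one fd per connection */
                        char buf[512];
                        ssize_t n;
                        while ((n = recv(cfd, buf, sizeof buf, 0)) > 0)
                            send(cfd, buf, (size_t)n, 0);              /* same fd for the reply */
                        close(cfd);
                        close(lfd);
                    }
                    ```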

                  2. 2

                    There was a Slashdot article over 10 years ago about WhatsApp handling over a million connections on a single machine with FreeBSD / Erlang. Given that io_uring should be more efficient and computers are a lot faster, I would be pretty shocked if you couldn’t handle that many. I’d expect RAM for TLS protocol state to be your limiting factor, though if those connections are actually doing anything then network bandwidth might start to be (WhatsApp connections were mostly idle). Even with 40 GigE, a million connections works out to only about 40 Kbit/s (roughly 5 KB/s) per connection, and at those aggregate rates NUMA issues and the LLC sizes start to be important concerns. Netflix was saturating 40 GigE with TLS a few years back; the difficult thing was DMAing data into L3, encrypting it, and DMAing the encrypted data out again before the next DMA from disk started evicting things from L3 and slowed everything to a crawl.
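
                    Rough numbers behind that, for what it’s worth (the per-connection TLS/buffer memory figure is an assumption, not from the WhatsApp write-up):

                    ```latex
                    \frac{40\ \text{Gbit/s}}{10^{6}\ \text{connections}} = 40\ \text{kbit/s} \approx 5\ \text{KB/s per connection},
                    \qquad
                    10^{6} \times \sim\!50\ \text{KB of TLS + buffer state} \approx 50\ \text{GB of RAM}
                    ```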

                  3. 1

                    While running similar tests on a Mac against a WebSocket application, I’ve also encountered crashes at a certain limit, with no diagnostic info either. I wonder if anyone could shed light on how these crashes could be investigated and whether there is a solution.