1. 64
    1. 37

      GitHub still not supporting IPv6 is a real shame. More than once I’ve had to proxy parts of a deployment for IPv6-only hosts just because of that.

      1. 16

        Last time I checked, Azure’s policy on IPv6 was a mess. IPv6 addresses cost the same amount as v4 addresses, and they sold them individually rather than as /64s (I think a /120, which is only 256 addresses, was the largest they sold), which is a pain to work with.

      2. 3

        Wow, even my Gitea instance has an edge over Github in this.

    2. 21

      “x is a disaster” after very little experience. That’s an orange-site sentiment.

      FWIW I’ve used IPv6 in production for many years and feel positive. My comparable setup is: the iron has an IPv4 address and provides outgoing IPv4 via NAT to all virtual hosts; spinning up a new virtual host assigns an IPv6 address but no IPv4, so there’s no incoming IPv4, and… that’s all. I don’t even try to avoid using outgoing IPv4, and I never need to think about whether I have addresses on hand.

      Sometimes people need to access one of those servers and can’t. That happened to me only this summer. They accept that it’s their problem, not mine (and while I feel that this sentence should end with a smiley I’m not sure which).

      1. 23

        That’s an orange-site sentiment

        Not in this case. It is a disaster if all you wanted was a normal, working host, and you got an ipv6-only thingy that can’t even fetch things from GitHub by default. That’s a major no-go if you expect it to work like any other machine. Whether that really makes ipv6 a “disaster”, or is more a statement about how slow-moving corporate networks are, is up to the reader. IPv6 certainly “just works” if you have dual stack. But for now, only then.

        1. 16

          The point is ipv6 is not the “disaster”. The fact that so few network-oriented businesses are willing to show technical leadership is.

          1. 24

            It felt pretty clear to me that “ipv6 is a disaster” isn’t referring to technical flaws; the implication is “ipv6 adoption is a disaster”.

            The post I’m actually interested in reading is about why even big names like GitHub haven’t bothered to fix it.

            1. 4

              Just speculation, but if you have a working CDN with load balancing, blacklists, and rate limiting, it can be very annoying to adopt ipv6. Even more so when you suddenly need not just ipv6 support but /64 matching (for blacklisting or rate limiting). It’s a giant change in your infrastructure, and we all know what brought Facebook down: a broken core router configuration.
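
              To illustrate what /64 matching means in practice, here’s a minimal sketch (Python; the rate_limit_key helper is hypothetical): key blacklists and rate limits on the /64 prefix rather than the full address, since a v6 host can typically hop freely within its /64.

              ```python
              import ipaddress

              def rate_limit_key(addr: str) -> str:
                  """Collapse an address to blocklist/rate-limit granularity:
                  the full address for IPv4, the /64 prefix for IPv6."""
                  ip = ipaddress.ip_address(addr)
                  if ip.version == 4:
                      return str(ip)
                  # strict=False masks off the host bits instead of raising
                  return str(ipaddress.ip_network(f"{ip}/64", strict=False))

              assert rate_limit_key("203.0.113.7") == "203.0.113.7"
              # two addresses in the same /64 share one bucket
              assert rate_limit_key("2001:db8::1") == rate_limit_key("2001:db8::2:3")
              ```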

            2. 2

              Another great example of people leaving lengthy comments about what they think the headline of the article implies without engaging with any part of the contents, probably not even reading past the headline.

          2. 13

            Counter-argument:

            Resources are limited and time is finite. Rolling out IPv6 to satisfy the aesthetics and “best practices” of people who are unlikely to have a direct stake of real size is a bad idea.

            As much as everybody complains about it, the current system works and is a hell of a lot simpler than IPv6. Maybe, if a technology has taken more than two decades (27 years if you ask Wikipedia) to get adopted, there’s some flaw in the technology.

            If you are suggesting that Github or Twitch or others lack technical leadership, I think the empirical data disagrees with your proposed metric.

            1. 5

              the current system works and is a hell of a lot simpler than IPv6.

              Is it? The blog post has a picture of the IPv6 header, and it looked like it has far fewer fields. At a glance, the core of it looks simpler. What are the more complicated parts?

              1. 6

                IPv6 is quite complex. Much of the complexity is in the required supporting protocols: NDP, MLD, ICMPv6, and so on.

                Skimming the table of contents of RFC 4861 may give some sense of this.

                Edit: And about headers specifically - the post didn’t mention IPv6 extension headers.
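
                To be fair, the fixed header itself really is simple; the complexity hangs off it via the extension chain and the companion protocols. A sketch of parsing the 40-byte fixed header (Python; extension headers, if present, chain off next_header and aren’t handled here):

                ```python
                import struct

                def parse_ipv6_fixed_header(pkt: bytes) -> dict:
                    """Parse the 40-byte IPv6 fixed header (RFC 8200)."""
                    vtf, payload_len, next_header, hop_limit = struct.unpack("!IHBB", pkt[:8])
                    return {
                        "version": vtf >> 28,          # always 6
                        "traffic_class": (vtf >> 20) & 0xFF,
                        "flow_label": vtf & 0xFFFFF,
                        "payload_length": payload_len,
                        "next_header": next_header,    # 6 = TCP, 58 = ICMPv6, 0/43/44 = extension headers
                        "hop_limit": hop_limit,
                        "src": pkt[8:24],              # addresses are 16 bytes each
                        "dst": pkt[24:40],
                    }
                ```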

            2. 4

              So what about an ipv8 that doubles ipv4 space, and reserves the last half for ipv4 compatibility?

              1. 2

                One of the transition mechanisms (6to4?) did something like this. It mapped the entire IPv4 address space into the v6 address space. This came with a lot of problems. First, you still need to handle different packet formats, so the boundary involves some translation. Then you have protocols that embed an address in the message, which doesn’t get translated. Then you get the problem that things on the IPv4 side look like they’re on the v6 side, but can’t connect to most hosts there.
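
                To make the mapping concrete, here’s a sketch with Python’s ipaddress module, using the well-known IPv4-mapped prefix and the 6to4 embedding:

                ```python
                import ipaddress

                v4 = ipaddress.IPv4Address("192.0.2.1")

                # IPv4-mapped (RFC 4291): the v4 address sits in the low 32 bits of ::ffff:0:0/96
                mapped = ipaddress.IPv6Address(f"::ffff:{v4}")
                print(mapped.ipv4_mapped)  # 192.0.2.1

                # 6to4 (RFC 3056): the v4 address is embedded right after the 2002::/16 prefix,
                # so every public v4 /32 implies a whole v6 /48
                six_to_four = ipaddress.IPv6Address((0x2002 << 112) | (int(v4) << 80))
                print(six_to_four)  # 2002:c000:201::
                ```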

                1. 5

                  The entirety of IPv4 is mapped into IPv6 not just once, but at least five times, for different transition mechanisms: https://en.m.wikipedia.org/wiki/IPv6_address#Transition_from_IPv4

                2. 1

                  These require your connection hit a bridge along the way, right? Instead of being able to be switched and routed in the same hardware.

                  1. 1

                    Yes, though the bridge can be on one of the endpoints or on an ISP’s network. I believe the idea for several of these was that ISPs offering both IPv4 and v6 would bridge the networks so that their v4 addresses were all visible and servers could move to v6 and still support legacy v4-only clients.

        2. 1

          If what you wanted was a normal, working foo, and you got something the designers of foo considered weird, and it was a pain to use, would you consider that a failure of foo, or would you consider it more of a PEBKAC case?

          Dual stack was chosen as the migration path when the various IPv6 designs were compared and the one we have won over the others.

          1. 6

            If you have dual stack, you might as well just run ipv4, because cost-wise you already needed the ipv4. Access-wise you only need one (ipv4: the v6 people will already need something to get v4 access, but not vice versa). Complexity-wise, you’re better off with only one.

            So what exactly do I gain by having ipv6 compatibility, apart from internet karma? Everyone else (relevant) seems to get away with not caring about ipv6.

            I run dual stack on my hosts, and I don’t pay extra for ipv4, but I only do so because I’m running nftables and the complexity stays manageable. I also can’t use the potential of my /64 (for containers etc.), because then I lose ipv4 compatibility. So it’s basically the same as ipv4, except I need to address and debug everything twice.

            1. 1

              If cost is what you care about: having outbound IPv4 costs me nothing, while having inbound IPv4 would incur a per-VM cost.

              1. 1

                it’s always about inbound - and the costs of complexity

                1. 1

                  The cost of complexity is actually why I started doing that. Getting a public IPv4 address onsite required meetings and negotiation and justification, costly complex stuff, so I wrote a script that set up a new VM with a publicly reachable IPv6 address, a DNS entry and NAT for outbound IPv4.

                  1. 1

                    Sounds good, but how do you handle access purely from IPv4? Because that’s my main problem: making things accessible to both - and thus losing the freedom of my /64.

                    1. 1

                      I don’t bother with that.

                      It can be a problem. Earlier this summer I held a workshop where I said something like “I made an example site showing [blah blah], it’s at [link]” and some of the participants couldn’t access it because they had no IPv6. But mostly it’s okay. Either because the people who are supposed to use it have IPv6, or because they don’t but perceive it as their problem rather than mine.

      2. 5

        I can’t imagine any metric for an internet protocol by which IPv6 would be anything other than a complete disaster.

        I mean I guess the packet header is mostly well designed?

    3. 15

      I don’t think the price will be high enough for companies to move all that soon.

      I do, however, think that with IPv6 being old, being required by various institutions (the US military, if I remember correctly), and with IPv4 these days held together by a ton of hacks, it might be worthwhile to consider “shaming” everything that calls itself either internet-related (ISPs, cloud providers) or modern, cutting-edge, etc. (from Apple to GitHub, basically).

      In my humble opinion it’s also a question of trust. Do I really want to trust software or a service that hasn’t managed proper IPv6 support in decades? Even if it wasn’t their first priority, there were events like IPv6 Launch Day. Everyone with at least some pride in how they run their network should at least enable it. It’s not rocket science, and many individuals and organizations support it. About half of all clients connect via IPv6.

      There are even technical reasons! ;)

      Software-wise, I think IPv6 support works excellently as a first indicator of quality and of how seriously I can consider something production-ready. If there is no proper IPv6 support, that tends to be a good sign that nobody has used it for anything too serious yet. Of course nothing stands or falls on that alone. It’s just an indicator. But in my opinion it happens to work pretty well.

      Internally, I imagine how long it takes to properly support IPv6 can be a good indicator of how well your infrastructure and teams work.

      1. 3

        The problem is that it is never a “requirement” for anyone. When you compare two products or two vendors, you never have “does it use ipv6?” as a hard requirement (except if you are in the WAN business and buying hardware).

        Even if you offer that feature to your users, it’s very unlikely they’ll use it if they can use ipv4. At best, they’ll configure both…

        1. 14

          Well, Apple requires in app review, for both iOS and macOS, that the app’s networking work in an IPv6-only environment (I’ve been hit by the problem, so at least sometimes they check for it). I remember someone telling me that the requirement came about because some of the networking that iDevices do between themselves (Internet Sharing, iirc) is IPv6-only under the hood (but that was many years ago).

          cf. Apple Dev Forums

        2. 4

          Not disagreeing with any of that. It’s basically what I meant by having pride, showing you know what you are doing, and seeing it more as indicative of quality.

          It is similar with portability of software. Usually that’s an indicator for good software quality even if it’s something I will never use on another OS or platform.

          If something only works on one system, even if it’s the one I’m using, it makes a worse impression on me than something that is used across a wide array of platforms.

          Of course these are just indicators, but when you have a whole slew of options I tend towards the ones that work everywhere and support ipv6. Good changelogs and good official docs are other indicators for me.

          1. 2

            Having ‘pride’ and ‘quality’ is all well and good, but since IPv6 provides me little or nothing over IPv4, and would be a huge amount of work to implement and support, I appreciate options that focus on things that matter. At best, IPv6 support is a “nice to have” in my world.

            1. 9

              Given the rise of kubernetes and docker swarm and the like, and people still managing to conflict with internally used 172.16/12 networks, I can’t fathom how anyone can claim IPv6 provides little utility. Just being able to yeet anything into the big ULA space, or hand a /64 block to a whole subnet, has to be massive utility. I know my employer’s network team fights me when I ask for a /22 prefix (v4), so they must feel the pain too.
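
              And grabbing that ULA space really is that cheap. RFC 4193 just asks for fd00::/8 plus 40 random bits, which already yields a /48 of your own; a sketch:

              ```python
              import ipaddress
              import secrets

              # RFC 4193: fd00::/8 + a 40-bit random global ID = a /48 of your own,
              # i.e. 2**16 /64 subnets, with vanishingly small collision odds
              global_id = secrets.randbits(40)
              ula = ipaddress.IPv6Network(((0xFD << 120) | (global_id << 80), 48))
              print(ula)  # e.g. fd3c:98a1:7f00::/48
              ```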

              1. 1

                still managing to conflict with internally used 172.16/12 networks

                There are nearly 18m RFC1918 v4 addresses (10/8, 172.16/12, 192.168/16). Either your networking folks haven’t figured out how to hide one RFC1918 space from another, or you are running kubernetes and docker swarm environments where you need more than that and they all need to route to each other individually?
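
                The arithmetic behind that figure, for the record:

                ```python
                # 10/8 + 172.16/12 + 192.168/16 (RFC 1918)
                print(2**24 + 2**20 + 2**16)  # 17891328, i.e. nearly 18 million
                ```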

                Yes, I’ve run into networking teams that jealously guard ‘their’ IP spaces (and I’ve seen them do it with v6 as well, which is unfathomable). But that is a social/political issue, not a technical issue.

            2. 4

              and would be a huge amount of work to implement and support

              How come?

              1. 3

                If you’re interacting with addresses in any more detail than struct sockaddr_storage, it becomes a bunch of extra codepaths you have to build and test. This isn’t just in C-family languages either; it’s kind of annoying in Crystal, Ruby, Elixir, and presumably others too.

                1. 8

                  In C, things like getaddrinfo let you totally abstract away the details of the network protocol and just specify service names and host names. The OS and libc will transparently switch between IPv4, IPv6, or things like IPX and DECNET without you caring. The socket API was designed specifically to avoid embedding protocol knowledge in the application and to allow transport agility (when it was created, IP was not the dominant protocol and it needed to support quite a large number of them).

                  Moving from TLS to QUIC or HTTP/1.1 to HTTP/2 is far more work in the application.
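
                  Python’s socket module wraps the same interface, so a sketch of a protocol-agnostic connect never has to name v4 or v6:

                  ```python
                  import socket

                  def connect(host: str, service: str) -> socket.socket:
                      """Try whatever families getaddrinfo offers, in order, without
                      the application ever mentioning IPv4 or IPv6."""
                      err = None
                      for family, type_, proto, _, addr in socket.getaddrinfo(
                              host, service, type=socket.SOCK_STREAM):
                          try:
                              sock = socket.socket(family, type_, proto)
                              sock.connect(addr)
                              return sock
                          except OSError as exc:
                              err = exc
                      raise err or OSError(f"no addresses found for {host!r}")

                  conn = connect("example.com", "https")  # v6 or v4, whichever answers first
                  ```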

                  1. [Comment removed by author]

                  2. 1

                    The OS and libc will transparently switch between IPv4, IPv6, or things like IPX and DECNET without you caring.

                    Have you ever actually tried to do that? I mean beyond lowest-common-denominator, best effort packet delivery. For IPX or DECNet, it wasn’t much fun.

                    edit: clarity

                    1. 2

                      It’s been a very long time, but back in the win16 days I wrote WinSock code that worked with IP and IPX. The only thing that made it hard was that there was no getaddrinfo back then so I had to have different code paths for name lookup.

    4. 11

      It’s pretty hard for IPv6 to succeed on a personal level when Fucking Verizon doesn’t support it for home users. They were claiming they would deploy it to all their customers in like… 2015 or something, and then just shrugged and never mentioned it again. AFAICT Comcast is similar. So something absurd like 70% of the US population just aren’t gonna have an IPv6 address.

      Yes, I can use an HE tunnel like it’s fucking 2003 again, but all in all I’m pretty sick of it.

      That said, it’s pretty shameful that github and datadog don’t even support it. Every VPS I’ve paid for since like 2010 has come with IPv6 addresses by default. …that said, now that I look at it, Sourcehut doesn’t have an IPv6 address either. Maybe time to consider Codeberg, hm?

      1. 9

        EC2 VMs don’t get public IPv6 by default, and thus a lot of stuff running on AWS just never gets around to adding IPv6. Even Twitch, now owned by Amazon, still doesn’t have it.

      2. 4

        Comcast sucks, but they have had IPv6 support across their network since at least 2020.

      3. 3

        That’s interesting. I’m both a Verizon wireless and AT&T business customer, and Verizon is the only provider that doles out both ipv6 and ipv4 addresses to my LTE router upon connection. AT&T business never provides ipv6.

        So it’s clearly not a technical problem for Verizon. They’re choosing not to give ipv6 addresses to home users.

        1. 2

          It’s a slow rollout. My understanding is that Verizon is going region-by-region to reduce the support burden. My residential Verizon Fios connection near NYC magically started receiving an IPv6 address about a year ago, and it was seamless enough that I didn’t even realize for a while that most of my traffic was going over IPv6. It has been smooth sailing ever since.

          1. 3

            FWIW - I’m on Verizon Fios in the Boston area, and it appears that I have IPv6 running.

      4. 3

        I wonder if maybe all of these evil vendors aren’t throwing their customers under the IPv6 bus simply because it doesn’t matter to the vast majority of them. 98% of their customer base doesn’t know or care what an IP address is, and as long as TikTok streams and games can be played, why would they? There just isn’t a value proposition for the average user that demands IPv6.

        1. 4

          You’d think that it makes life easier for them as network admins, though.

          1. 1

            If supporting both is more expensive than supporting one, and your choice is between dual stack or v4 only, … how would it be easier? I’m not savvy about why v6 has taken so long, but it certainly seems like a network-effects kind of thing.

          2. 1

            It doesn’t. Which is why we network admins haven’t jumped all over v6. We’re lazy… if it were easier, v6 would be the de facto standard everywhere, and would have been for years.

    5. 5

      I really enjoyed the ipv4 to ipv6 header comparison.

      No more checksum, so routers don’t have to do a recalculation for every packet

      IIRC they either do it in hardware, or just straight up don’t bother.

      Also, if you’re using ip6tables, give nftables a chance: its inet family lets you write the rules once for both ipv4 and ipv6.

      1. 14

        As I recall (from discussions of IPv6 almost a decade ago, sigh), modern networks do a lot of error correction at the lower protocol levels. IPv4 packets basically never fail their checksums because there’s less error detection in that checksum than in the layer one down. This means that the checksum calculation adds overhead and latency but doesn’t buy any robustness. Modern WiFi, for example, has a bunch of forward error correction so that dropped packets can be reconstructed. A small checksum on top of that is a total waste of time and space.
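
        For reference, the dropped field is the RFC 1071 ones’-complement checksum (not a true CRC); every hop has to recompute or incrementally patch it, because decrementing the TTL changes the header. A sketch:

        ```python
        def ipv4_header_checksum(header: bytes) -> int:
            """RFC 1071: ones'-complement sum of the header's 16-bit words."""
            if len(header) % 2:
                header += b"\x00"
            total = sum(int.from_bytes(header[i:i + 2], "big")
                        for i in range(0, len(header), 2))
            while total > 0xFFFF:                 # fold the carries back in
                total = (total & 0xFFFF) + (total >> 16)
            return ~total & 0xFFFF                # IPv6 simply dropped this field
        ```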

    6. 4

      The comments here are interestingly US-centric. AFAIK most mobile devices on earth are IPv6-only because their providers couldn’t get big IPv4 blocks.

      Azure networking was barely keeping their head above water when I worked at MS a couple years ago, so I’m not surprised they haven’t done IPv6 yet.

    7. 4

      I admire the optimism, but I don’t believe the “we can fix it” part of the title.

      Interesting content though. The state of IPv6 is somehow even worse than I thought.

    8. 3

      Seems like this blog adds ?ref=matduggan.com to all links indiscriminately, even stuff like the Datadog installer script or links to some GitHub repo.

      1. 14

        Oh good catch. I’ll remove that. For some reason the Ghost CMS sets that on by default. Should be off now.

    9. 3

      I was wondering if there is a solution for ingress traffic, not just egress (nat64 seems like a great solution for egress).

      The only thing I can think of is to just slap Cloudflare in front of it. But I think that either 1) is just a dumb DNS server or 2) proxies traffic with SSL interception.

      Any idea what to do there?

      1. 4

        There aren’t too many ways to get clever with ingress: fundamentally, something needs to speak IPv4 and proxy to an IPv6 backend. You can set up a 1:1 NAT, which is easy to implement and efficient but costs an IPv4 address (which is what we’re probably trying to avoid). You can do layer-7 termination like you mentioned, but it’s sometimes not desirable to have TLS terminate on a third party. A very good option is to terminate layer 4 but peek far enough into the initial packets to see the SNI and determine which backend to route to. I’ve frequently seen this done internally at large companies, but I’m not aware offhand of any third party that offers it as a service.
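
        The SNI peek works because the server_name extension travels in the clear in the ClientHello, so a proxy can read it before any TLS state exists. A sketch (length checks mostly elided; a real proxy would MSG_PEEK these bytes, pick a backend, then splice the connection through untouched):

        ```python
        def sni_from_client_hello(data: bytes) -> str | None:
            """Pull the server_name (RFC 6066) out of a raw TLS ClientHello."""
            if len(data) < 43 or data[0] != 0x16:       # 0x16 = handshake record
                return None
            pos = 5 + 4 + 2 + 32                        # record hdr, handshake hdr, version, random
            pos += 1 + data[pos]                        # skip session_id
            pos += 2 + int.from_bytes(data[pos:pos + 2], "big")  # skip cipher_suites
            pos += 1 + data[pos]                        # skip compression_methods
            end = pos + 2 + int.from_bytes(data[pos:pos + 2], "big")
            pos += 2
            while pos + 4 <= end:                       # walk the extensions
                ext_type = int.from_bytes(data[pos:pos + 2], "big")
                ext_len = int.from_bytes(data[pos + 2:pos + 4], "big")
                if ext_type == 0:                       # 0 = server_name
                    name_len = int.from_bytes(data[pos + 7:pos + 9], "big")
                    return data[pos + 9:pos + 9 + name_len].decode("ascii", "replace")
                pos += 4 + ext_len
            return None
        ```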

        1. 3

          I’ve frequently seen this done internal to large companies, but I’m not aware offhand of any third parties that offer it as a service

          yeah, that’s basically what I am looking for: a shared-ipv4 traffic proxy as a service for common protocols on default ports. Ideally not just SSL on port 443, but more.

          1. 1

            For this to make sense, it would really have to be offered/run by the cloud/server hosting provider. I continue to be surprised (and frustrated) that Hetzner doesn’t already offer something like this, at least for HTTP (Host:) and TLS (SNI proxy). I mean, they have their load balancer offering, but if what you actually want is 1:N IPv4 -> backend proxying with one backend per hostname, not a load balancer per se, the pricing doesn’t make sense compared to N:N IPv4 addresses.

            For other protocols, it’d be a reasonable start to be able to rent specific ports rather than the whole IP. A whole range of protocols simply doesn’t allow this kind of inspection of the intended destination from the first client transmission; support for the PROXY header is also not widespread outside of web server software, and handing an already-established TCP connection over to the backend would presumably require some pretty deep network-stack voodoo.
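
            The PROXY header itself is almost comically simple, which makes the patchy support all the more frustrating; v1 (per the haproxy spec) is one text line prepended before any forwarded bytes. A sketch, with a hypothetical helper:

            ```python
            import socket

            def proxy_v1_line(client: socket.socket) -> bytes:
                """PROXY protocol v1: announce the original source and destination
                to the backend before relaying the client's own bytes."""
                src_ip, src_port = client.getpeername()[:2]
                dst_ip, dst_port = client.getsockname()[:2]
                fam = "TCP6" if ":" in src_ip else "TCP4"
                return f"PROXY {fam} {src_ip} {dst_ip} {src_port} {dst_port}\r\n".encode("ascii")
            ```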

            Of course, if you have a bunch of servers in the same data centre or cloud location, and your provider supports private networks (e.g. Hetzner vSwitch) you can use bog-standard IPv4 NAT and port forwarding on a single public address, but it will of course put a heavy traffic load on whatever server is assigned the IP. Again, the hosters could help out with this by performing the NAT on their routers, but I’m guessing their kit might not be designed for this kind of workload.

            (FWIW, Hetzner’s pricing on IPv6 doesn’t make any sense either; extra /56 nets on servers are apparently a one-off non-recurring fee, but on vSwitches a /64 costs €10/month. Either this is an organically grown pricing structure with not much thought behind it, or it’s some kind of market segmentation thing, or it’s precisely this “last mile” virtual network routing workload mismatch.)

          2. 1

            I’ve seen a public service that does this before but I can’t find it now :/

    10. 2

      I liked the little interview. That’s a nice touch.

      Listing IPsec as an advantage of ipv6 is weird tho. IPsec works on ipv4 (but also: just use WireGuard).

    11. 1

      The bar for movement is high, and IPv6 has never hit that bar.

      • A CRC for the payload? No, just move that up a layer.
      • Session information? No, just move that up a layer.
      • Streams? In-order delivery? Resends? Authentication? Yada yada? Nah, move that up a layer or two.

      So we get slightly faster routing and a fix to a silly address space issue that we already have fixes to.

      Meh

      1. 9

        a fix to a silly address space issue that we already have fixes to

        Workarounds for a certain subset of the issues, not fixes.