1. 32
  1. 23

    The funny thing was, most engineers at the company (myself included) had no idea this happened until people started linking this article in chat! Amazingly well-handled by our SRE team.

      1. 4

        Mods, can we just change this to point at the incident report?

        1. 8

It’s not clear-cut to me that that’s the right call? They’re both easy to get to as it stands; if I changed the main link, I’d feel it necessary to link to the Wired article from a comment. And Wired did add a bunch of background and context to this one.

          1. 5

            Yea, I agree. @friendlysock could submit the Github post independently. Maybe it’d get upvoted more.

I don’t recall the Github post stating it was “the biggest recorded.” That does sound a bit sensational, but the article also adds context and is geared at a more general audience.

            The one thing I haven’t seen in this or other threads is the big question: why? For the lawls? An ex-employee? Why bring down Github? Any evidence of which person/group might have done it and their motivations?

            1. 2

I wouldn’t expect to see answers to that unless somebody has actively claimed credit for it. It’s rare to ever know who’s responsible for DDoSes, and any researchers who are investigating surely don’t want to share what they’ve figured out right now. There are also the separate questions of who technologically enabled it vs. who paid for it.

      2. 12

On Monday, February 26th, I received an abuse report from another network operator that turned out to be a memcached reflection attack. We resolved that issue the same day. A couple of days later we saw a similar but larger spike in traffic. While investigating that issue I noticed one of our customer VPSes was sending a lot of UDP traffic to GitHub, though GitHub was not the only target. The Wednesday incident was faster for us to resolve, having seen the same thing on Monday. With two related incidents in the same week, we published a notice about memcached amplification to our users.
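For anyone wondering why memcached reflection hits so hard: a tiny spoofed UDP request can elicit a response approaching memcached’s default 1 MB item limit. A rough back-of-the-envelope sketch (the byte counts are illustrative assumptions, not measurements from either of my incidents):

```python
# Rough amplification estimate for memcached UDP reflection.
# A spoofed request is on the order of tens of bytes, while the
# response can approach memcached's default 1 MB max item size.
REQUEST_BYTES = 15            # e.g. 8-byte UDP frame header + "stats\r\n"
MAX_RESPONSE_BYTES = 1 << 20  # memcached's default max item size (1 MB)

amplification = MAX_RESPONSE_BYTES / REQUEST_BYTES
print(f"~{amplification:,.0f}x amplification")
```

The exact ratio depends on what’s cached on the exposed server, but even conservative numbers explain how a modest botnet of spoofed senders can generate terabit-scale floods.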

        In both of our cases we had users running Zimbra, which bundles/uses memcached. Zimbra published their own memcached notice yesterday.

Since January, when Meltdown/Spectre were disclosed, I’ve been on a cloud-provider “consortium” Slack server, and we all chattered about memcached amplification on Wednesday. DigitalOcean, OVH, and Linode all experienced both inbound and outbound traffic related to this attack, and decided to block UDP port 11211 at their network edge. Linode, at least, escalated to their transit providers as well. I understand that NTT (also one of my transit providers) took the same action.
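For the curious, port 11211/UDP traffic is easy to recognize on the wire: memcached’s UDP mode prefixes each datagram with an 8-byte frame header (request ID, sequence number, datagram count, reserved), followed by the plain ASCII command. A minimal sketch of the kind of tiny probe packet an attacker spoofs, assuming the standard frame layout from memcached’s protocol docs:

```python
import struct

def memcached_udp_probe(command: bytes, request_id: int = 0) -> bytes:
    """Build a memcached UDP datagram: 8-byte frame header + ASCII command.

    Header fields (four big-endian 16-bit values): request ID, sequence
    number, total datagrams in this message, reserved (zero).
    """
    header = struct.pack("!HHHH", request_id, 0, 1, 0)
    return header + command

# A spoofed "stats" request is only 15 bytes on the wire, yet can
# trigger a response many thousands of times larger.
probe = memcached_udp_probe(b"stats\r\n")
print(len(probe))
```

That fixed header makes the traffic straightforward to match in edge filters, which is presumably why blanket blocks on 11211/UDP were a quick, low-collateral mitigation for the big providers.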

I had the luxury and pleasure of being able to reach out individually to the sysops on my network and fix the problem at its source, so I decided not to block this traffic at my network edge. I’ve routinely had great experiences working with my customers to resolve issues like this, and my Monday and Wednesday incidents were no different. I’m thankful to both of them for their frankness, their willingness to share operational details of their systems, and their help in keeping the internet running.

        I’m sorry about this one, GitHub. I’m glad you were able to resolve it so quickly.