1. 38
    1. 25

      That headline is pretty confusing. It seems more likely that Twitter itself was compromised than that tons of individual users (billionaires, ex-leaders, etc.) were?

      1. 19

        You’re right. This is a case of the Verge reporting what they’re seeing, but the scope has grown greatly since the initial posts. There have since been similar posts from several dozen prominent accounts, and Gemini replied that it has 2FA enabled.

        Given the scope, this likely isn’t accounts being hacked. I suspect that either the platform or an elevated-rights Twitter content admin has been compromised.

        1. 12

          Twitter released a new API today (or was about to release it? It’s not entirely clear to me what the exact timeline is here); my money is on that being related.

          A ~$110k scam is a comparatively mild result considering the potential for such an attack, assuming there isn’t some 4D chess game going on as some are suggesting on HN (personally, I doubt there is). I don’t think it would be an exaggeration to say that in the hands of the wrong people, this could have the potential to tip election results or even get people killed (e.g. by encouraging the “Boogaloo” people and/or exploiting the unrest relating to racial tensions in the US from some strategic accounts or whatnot).

          As an aside, I wonder if this will contribute to the “mainstreaming” of digital signing to verify the authenticity of what someone said.

          1. 14

            or even get people killed

            If the Donald Trump account had tweeted that an attack on China was imminent there could’ve been nuclear war.

            Sounds far-fetched, but this very nearly happened with Russia during the cold war when Reagan joked “My fellow Americans, I’m pleased to tell you today that I’ve signed legislation that will outlaw Russia forever. We begin bombing in five minutes.” into a microphone he didn’t realize was live.

            1. 10

              Wikipedia article about the incident: https://en.wikipedia.org/wiki/We_begin_bombing_in_five_minutes

              I don’t think things would have escalated to a nuclear war that quickly; there are some tensions between the US and China right now, but they don’t run that high, and a nuclear war is very much not in China’s (or anyone’s) interest. I wouldn’t care to run an experiment on this though 😬

              Even in the Reagan incident, things don’t seem to have escalated quite that badly (at least, in my reading of that Wikipedia article).

              1. 3

                Haha. Great tidbit of history here. Reminded me of this ’80s gem.

              2. 2

                You’re right - it would probably have gone nowhere.

          2. 6

            I wonder if this will contribute to the “mainstreaming” of digital signing to verify the authenticity of what someone said

            It’d be nice to think so.

            It would be somewhat humorous if an attack on the internet’s drive-by insult site led to such a thing, rather than the last two decades of phishing attacks targeting financial institutions and the like.

          3. 3

            I wonder if this will contribute to the “mainstreaming” of digital signing to verify the authenticity of what someone said.

            A system built into the browser could provide this kind of 2FA while being transparent to users.

            1. 5

              2FA wouldn’t help here: the tweets were posted via user impersonation functionality, not direct account attacks.

              1. 0

                If you get access to Twitter, or the Twitter account, you still won’t have access to the person’s private key, so your tweet is not signed.
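
                A minimal sketch of that idea (assuming the PyNaCl library; key generation, key distribution, and the UI are all hand-waved here):

                ```python
                # The author signs the tweet text with a key Twitter never sees; readers
                # verify it against the author's published public key. A hijacked account
                # or a compromised internal tool can still post, but it cannot produce a
                # valid signature.
                from nacl.signing import SigningKey
                from nacl.exceptions import BadSignatureError

                signing_key = SigningKey.generate()   # lives only on the author's device
                verify_key = signing_key.verify_key   # published, e.g. in the profile bio

                tweet = b"Send me bitcoin and I'll send back double!"
                signed = signing_key.sign(tweet)      # signature + message

                try:
                    verify_key.verify(signed)
                    print("valid signature: the key holder wrote this")
                except BadSignatureError:
                    print("invalid signature: do not trust this tweet")
                ```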

                1. 9

                  Right, which is the basic concept of signed messages… and unrelated to two-factor authentication.

                  1. 2

                    2FA, as I used it, means authenticating the message via two factors: the first being access to the Twitter account, and the second being a cryptographic signature on the message.

                    1. 3

                      Twitter won’t even implement editing of published tweets. Assuming they’d add something that implicitly calls into question their competence at stewarding people’s tweets is a big ask.

                      1. 2

                        I’m not asking.

          4. 2

            A ~$110k scam

            The attacker could just be sending coins to himself. I really doubt that anyone falls for a scam where someone they don’t know says “give me some cash and I’ll give you double back”.

            1. 15

              I admire the confidence you have in your fellow human beings, but I am somewhat surprised the scam made so little money.

              I mean, there’s talk about Twitter insiders being paid for this so I would not be surprised if the scammers actually lost money on this.

            2. 10

              Unfortunately, people do. I’m pretty sure I mentioned this a few months ago, but a few years ago a scammer managed to convince a notary to transfer almost €900k from his escrow account by impersonating the Dutch prime minister with a @gmail.com address and some outlandish story about secret agents, code-breaking savants, and national security (there’s no good write-up of the entire story in English AFAIK; I’ve been meaning to do one for ages).

              Why do you think people still try to send “I am a prince in Nigeria” scam emails? If you check your spam folder you’ll see that’s literally what they’re still sending (also many other backstories, but I got 2 literal Nigerian ones: one from yesterday and one from the day before that). People fall for it, even though the “Nigerian Prince” is almost synonymous with “scam”.

              Also, the 30 minute/1 hour time pressure is a good trick to make sure people don’t think too carefully and have to make a snap judgement.

              As a side-note, Elon Musk doing this is almost believable. My friend sent me just an image overnight, and when I woke up to it this morning I genuinely wondered whether it was true or not. Jeff Bezos? Well….

              1. 12

                People fall for it, even though the “Nigerian Prince” is almost synonymous with “scam”.

                I’ve posted this research before but it’s too good to not post again.

                Advance-fee scams are high-touch operations. You typically talk with your victims over phone and email to build up trust as your monetary demands escalate. So anyone who realizes it’s a scam before they send money is a financial loss for the scammer. But the initial email is free.

                So instead of more logical claims, like “I’m an inside trader who has a small sum of money to launder”, you go with a stupidly bold claim that anyone with a tiny bit of common sense, experience, or even the ability to google would reject: foreign prince, huge sums of money, laughable claims. Because you are selecting for the most gullible people with the least amount of work.

        2. 5

          My understanding is that Twitter has a tool to tweet as any user, and that tool was compromised.

          Why this tool exists, I have no idea. I can’t think of any circumstance where an employee should have access to such a tool.

          Twitter has been very tight-lipped about this incident and that’s not a good look for them. (I could go on for paragraphs about all of the fscked up things they’ve done)

        3. 5

          or an elevated-rights Twitter content admin

          I don’t think content admins should be able to make posts on other people’s accounts. They should only be able to delete or hide stuff. There’s no reason they should be able to post for others, and the potential for abuse is far too high for no gain.

          1. 6

            Apparently some privileges allow internal Twitter employees to remove MFA and reset passwords. Not sure how it played out, but I assume MFA had to be disabled in some way.

        1. 5

          That’s a good article! Vice has updated that headline since you posted to report that the listed accounts got hijacked, which is more accurate. Hacking an individual implies that the breach was of something under their control: phone, email, etc. This is a Twitter operations failure which resulted in existing accounts being given to another party.

    2. 23

      The story’s catching a lot of “off-topic” flags. I mostly left it up so there’s a single story people can click “hide” on, because it’s clear we’re going to see this a dozen times in the next few days as news trickles out. Also, when this story was posted yesterday it wasn’t clear this was social engineering rather than a zero-day, and we do consider those topical.

      Years ago we removed the ‘news’ tag. Still, today a lot of ‘security’ stories are news because they’re about new vulnerabilities. This feeds back into my concern about the recurring idea to remove the culture/practices/etc tags. With a week to reflect on my response there, I think it boils down to: any time we decide to restrict topicality we amplify two problems.

      First, there are Big Event stories like this one that seem worth leaving up. Call it the “heckler’s promo”, because it’s the inverse of the heckler’s veto. A story affects the entire industry and will be submitted over and over, so we make an exception. We can bite the bullet and stop making exceptions, but we know submitters and some number of readers will be outraged at the removal of stories that feel so important in the moment or are so highly emotionally charged. This very often results in angry tweets and PMs to mods about how we’re removing stories to enforce our opinions about politics and business. Having evil motives endlessly imputed to us is personally draining, and more generally not great for the site’s reputation, so that’s why I harp on the need to be able to point to a clear, public standard such that even someone who is angry at having their story or comment removed can accept that it fell within the bounds. The other benefit of drawing a bright line is that it reduces moderator discretion, power, and potential mistakes. (Edit to add sentence:) The more clearly we can draw lines, the more confident users can be that they’re contributing well and getting treated fairly.

      Second, and much harder, stories often touch on multiple topics, and it’s not clear when to say that a story has so much of the tagged topic that it should be removed on those grounds. When it’s most of the story? Half? One sentence? Implied by popular knowledge about the topic? When it’s likely to prompt a rehash of a contentious topic? We give ‘security’ pretty wide leeway on ‘news’ but entrepreneurship and business almost none. This problem benefits from having a bright-line rule for all the same reasons as the previous one, and because people assume our definition of topicality is the same as other, more popular sites like r/programming and Hacker News. (And: we’ve gotten a lot of off-topic business stories in the last week or so; I don’t know what’s up with the spike.)

      Hope this helps explain why things look the way they do and is a useful framework for future changes.

      1. 2

        What about adding an off-topic or unrelated or breaking-news or squirrel tag for all these types of articles that people love to submit but that don’t really fit? Too problematic in that it encourages people to submit them?

        Just wondering how the people who want to bring up stories like this and those of us who couldn’t be bothered to care can both be appeased. It might remove some of those angry PMs and tweets if you could just say “moved this story to the spam category”. Ooh, there you go: call the label “spam”. >.<

        1. 5

          Something like this gets proposed every couple months. I don’t see any reason it would fix anything; the discussion would only shift terms slightly from “why’d you delete this” to “why did you give it the tag of shame” with the same open questions about when to apply it, plus the comments there would show up on /comments, topics would spill over into other threads, etc. Maybe someone wants to take a page from the chat room playbook and run their own site with different rules that are closer to but still broader than Lobsters itself.

        2. 1

          It might encourage more submissions, but at least an ‘offtopic’ tag could be hidden by users and incur a hotness penalty.

      2. 1

        Cloudflare

        But why did @alynpost merge the Twitter stuff with the Cloudflare outage?! That’s completely unrelated…

        1. 3

          I did accidentally conflate story submissions xbl6uc and uptmet and merging them was incorrect. It’s undone.

          1. 1

            Thanks :)

    3. 11

      This reminds me of an incident I saw reported several months ago where the Saudi Arabian government paid some Twitter employees with ties to Saudi Arabia to use their access to Twitter’s systems to get the personal information of some anti-Saudi-government Twitter users. This is a good reminder that technology companies that store unencrypted user data (or encrypted data + the keys for it) are vulnerable to social engineering attacks just as much as to more traditional hacking; and that companies that run globally-important platforms like Twitter are awfully big targets.

      1. 5

        I agree that user-side encryption where users own the keys would solve this issue, but historically users have been unable to participate well in public key infrastructures. Even if you get browsers on board, think of the education required to deal with bootstrapping initial keys, revocation, and rotation. Users barely understand TLS and what the green lock means. I don’t think it would work. I could be wrong; I know South Korea and Estonia both have ways of strongly authenticating citizens.

        I think there is a different, more mundane angle here. Why is a single Twitter administrator allowed to do any of this on their own? All internal actions of this severity must require another person at Twitter to say “yep, go ahead”, preferably someone more senior. Requesting too many such internal actions should automatically trigger an alarm.

        Finally, there should be some actions that are almost impossible for administrators to do. Why is there even an internal tool in the first place that allows impersonating users? You may respond “sure, but there will be some internal API that triggers tweets; in the end a sufficiently knowledgeable insider can use it”. That’s fine: then lock down that API hard with authentication, authorization, and auditing of who is using the API and when, with alarms on unusual usage. Humans should be prohibited from using it, and prohibited from accessing machines that can use it.

        This is all the realm of security threat modeling… and I’m not claiming this is easy; it’s painful to do, and teams hate doing it. But this is why security and threat modeling matter.
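
        A rough sketch of the two-person rule plus audit trail described above (the function name, fields, and overall API shape here are my own assumptions for illustration, not anything Twitter actually exposes):

        ```python
        import logging
        from dataclasses import dataclass
        from typing import Optional

        # Every privileged request, allowed or denied, goes to an audit log that
        # alerting can watch for unusual volume or repeated denials.
        audit_log = logging.getLogger("internal-actions")

        @dataclass
        class Approval:
            approver: str
            is_senior: bool

        def reset_account_mfa(requester: str, account: str,
                              approval: Optional[Approval]) -> bool:
            """Privileged action that only runs with a senior second person's approval."""
            audit_log.info("mfa-reset requested by=%s account=%s", requester, account)
            if approval is None or not approval.is_senior or approval.approver == requester:
                audit_log.warning("mfa-reset DENIED requester=%s account=%s", requester, account)
                return False
            audit_log.info("mfa-reset approved by=%s for=%s account=%s",
                           approval.approver, requester, account)
            # ... perform the actual reset here ...
            return True
        ```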

    4. 7

      Twitter has announced preliminary investigation results indicating that this was a social engineering attack on internal tooling.

      1. 3

        I hope they will make a detailed post explaining what happened once they finish their forensic analysis and restore the system.

        1. 2

          This might be a case where their desire for transparency conflicts with their exposure to liability. It’ll be interesting to see how it plays out.

    5. 6

      This raises the question: which past tweets from these accounts were also fake?

      What’s worse is that there isn’t even a way to know, since these accounts don’t seem to have been hacked themselves (an internal tool was).

      Imagine if Russia (or Iran) did this and instead of asking for Bitcoin, started a war, or crashed the markets.

      The cherry on top - Twitter is likely shielded from much accountability thanks to Section 230 (speculation, IANAL).

      1. 2

        Section 230 gets too much hate. It’s a necessary part of our over-consolidated internet.

    6. 5

      Brian Krebs has an article up attempting to ID the perpetrators and method. He does excellent work, but I take any breaking news story with a grain of salt; there’s always a lot of confusion in the early days.

      1. [Comment removed by author]

    7. 3

      Another effect of this breach is that leaked screenshots of Twitter admin panels featuring “Search blacklist” and “Trends blacklist” buttons are now making the rounds. Not that the existence of those should surprise anybody here, but more that Twitter will now face increased pressure to own up to their shadowbanning practices.

    8. 2

      Shameless plug: all the companies (Google, Microsoft, …) are telling us to trust them. But I believe that we should trust ourselves instead of relying on third parties. They always change when business interests change. This is where web3 comes into play. Technologies like IPFS and the SAFE Network are coming. Looking at the scaling issues, I guess web3 will take at least 5 more years. But this kind of p2p technology is already possible with a small-scale mesh: mesh networks within our devices or families. From the beginning I hated the idea of storing passwords in a third-party password manager; later, I fell into the same trap because managing a lot of passwords is difficult. So I’m building an open-source p2p password manager. It replicates the passwords between your devices, instead of storing everything in the vendor’s cloud. It’s halfway to a closed beta release. I would like to hear everyone’s feedback on this idea.
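
      A minimal sketch of the core idea (illustrative only, not the actual project code; it assumes the PyNaCl library): each entry is encrypted on the device with a vault key that is shared only between your own devices, so replication peers only ever see ciphertext.

      ```python
      from nacl.secret import SecretBox
      from nacl.utils import random as random_bytes

      # The vault key is generated once and synced only between *your* devices.
      vault_key = random_bytes(SecretBox.KEY_SIZE)
      box = SecretBox(vault_key)

      entry = b"example.com: correct horse battery staple"
      ciphertext = box.encrypt(entry)   # this is all a replication peer ever sees

      # On another of your devices, holding the same vault_key:
      assert SecretBox(vault_key).decrypt(ciphertext) == entry
      ```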

      Thanks

      1. 1

        It replicates the passwords between your devices, instead of storing everything in the vendor’s cloud. It’s halfway to a closed beta release. I would like to hear everyone’s feedback on this idea.

        We need more of this kind of thing! Telling people not to store their shit “in the cloud” is only half the story. We also need easy to use(!) alternatives we can point to when they ask “so how should I do it?”

    9. 2

      There is a very interesting new article on it: “Hackers Tell the Story of the Twitter Attack From the Inside” https://www.nytimes.com/2020/07/17/technology/twitter-hackers-interview.html

    10. 2

      Given this and https://blog.twitter.com/en_us/topics/company/2020/information-operations-june-2020.html

      Any reason to trust what you read on Twitter?

      1. -1

        You shouldn’t trust any web page that uses sticky banners, that’s for sure.

    11. [Comment removed by author]