Per their email to customers:

We’re sending this note because people are now asking if this could happen with Keybase teams. Simple answer: no. While Keybase now has all the important features of Slack, it has the only protection against server break-ins: end-to-end encryption.

This is a facile false equivalence on the part of Keybase. Slack’s incident was caused by code injection in their client application. If an attacker achieved code injection in a Keybase client, the breach would be exactly as bad as Slack’s.

End-to-end encryption is worth little if the client doing the encryption/decryption is compromised, and Keybase’s implicit claim that end-to-end encryption protects against compromised clients is dangerously inaccurate.

where did you see that the injection was client side? I’m wondering if I’m parsing the disclosure incorrectly, but I’m not seeing that spelled out explicitly.

From Slack’s post:

In 2015, unauthorized individuals gained access to some Slack infrastructure, including a database that stored user profile information including usernames and irreversibly encrypted, or “hashed,” passwords. The attackers also inserted code that allowed them to capture plaintext passwords as they were entered by users at the time.
You’re of course correct about the vulnerability of keybase clients. They talk about that here: https://keybase.io/docs/server_security
EDIT: After a reread of the Keybase post, I’m not seeing anywhere that they claim Keybase can 100% protect against client side attacks, but their assertion about server side attacks is true. Where did you see that they’re claiming e2e crypto protects against client attacks?
You can’t do that without injecting code into the client… Plus, modification of server-side code is usually not called “injection” at all.
A) Yes, you can (by modifying server code) - basically no sites hash passwords before sending them over the wire.
B) Modifying running code on the server without changing the code on disk is usually called injection in my experience. This happened at Twitter (a remote code execution in the Rails app was exploited to add an in-memory-only middleware that siphoned off passwords).
basically no sites hash passwords before sending them over the wire.

Is there a good scheme for doing that?
You can’t just hash on the client, because then the server is just directly storing the credential sent by the client, i.e. as far as the server-side is concerned, you are back to merely directly storing passwords in the clear.
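To make the pitfall concrete, here is a minimal Python sketch (the usernames, passwords, and use of SHA-256 are illustrative, not any real site’s scheme) of why client-side hashing alone just moves the goalposts: the stored hash becomes the credential itself.

```python
import hashlib

# Toy illustration: the client hashes the password and the server
# stores exactly what the client sent.
def client_login_token(password: str) -> str:
    return hashlib.sha256(password.encode()).hexdigest()

server_db = {"alice": client_login_token("hunter2")}  # stored verbatim

def server_check(user: str, token: str) -> bool:
    return server_db.get(user) == token

# Normal login works:
assert server_check("alice", client_login_token("hunter2"))

# But an attacker who dumps the database can replay the stored value
# directly -- the hash *is* the credential, so nothing was gained.
stolen = server_db["alice"]
assert server_check("alice", stolen)
```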
You can implement a scheme where the client proves knowledge of the password by using it to sign a token sent by the server (as in APOP, HTTP Digest Auth, etc.). But then the server needs to have the plaintext password stored, otherwise it can’t check the client’s proof of its knowledge.
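A minimal sketch of such a challenge-response exchange, assuming an HMAC-based proof in the spirit of APOP/Digest (illustrative only, not the exact wire format of either protocol). Note the server-side table holds the plaintext password, which is exactly the problem:

```python
import hashlib
import hmac
import secrets

# The server must keep the plaintext password to verify the proof.
server_secrets = {"alice": "hunter2"}

def make_challenge() -> str:
    return secrets.token_hex(16)

def client_response(password: str, challenge: str) -> str:
    # Client proves knowledge of the password by MACing the nonce.
    return hmac.new(password.encode(), challenge.encode(),
                    hashlib.sha256).hexdigest()

def server_verify(user: str, challenge: str, response: str) -> bool:
    expected = client_response(server_secrets[user], challenge)
    return hmac.compare_digest(expected, response)

ch = make_challenge()
assert server_verify("alice", ch, client_response("hunter2", ch))
assert not server_verify("alice", ch, client_response("wrong", ch))
```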
So either way, the server is storing in the clear whatever thing the client needs to authenticate itself.
The advantage of the usual scheme where the client sends the actual password and the server then stores a derivative of it is that the client sends one thing, and then the server stores another, and the thing the client needs to send cannot be reversed out of the thing the server has stored. That yields the property that exfiltrating the stored credentials data from the server doesn’t allow you to impersonate its users.
But to get this property, the server must know what the actual password is – at least at one instant in time – because the client needs to prove knowledge of this actual password. So you cannot get away with never sending the actual password to the server.
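The usual scheme described above can be sketched like this, using Python’s stdlib scrypt as the slow, salted derivative (the cost parameters are illustrative choices):

```python
import hashlib
import hmac
import os

# The usual scheme: the client sends the actual password (over TLS);
# the server stores only a salted, slow hash of it.
def store(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = store("hunter2")
assert verify("hunter2", salt, digest)
assert not verify("wrong", salt, digest)
# The stored digest is not what the client sends, so dumping it does
# not let an attacker log in -- they would have to invert scrypt first.
```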
Well, that’s not the only way to get that property. The other way is public key cryptography.
Of course, going in that direction runs into entirely different trust issues: if you ship the crypto code to the client, you might as well not bother. Notably, “send the actual password to the server” avoids that whole issue too.
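For completeness, a toy illustration of the public-key route, using textbook RSA with tiny, insecure, hard-coded numbers (purely didactic; a real deployment would use a vetted signature scheme and library). The point is that the server stores only the public key, so exfiltrating its database gains an attacker nothing:

```python
import hashlib

# Textbook RSA with toy primes -- for illustration only, never for real use.
p, q = 61, 53
n = p * q          # 3233
e = 17             # public exponent (server stores e, n)
d = 2753           # private exponent, e*d = 1 mod phi(n) (client keeps this)

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    # Client side: proves identity using the private exponent d.
    return pow(h(msg), d, n)

def verify(msg: bytes, sig: int) -> bool:
    # Server side: checks the proof using only the public key (e, n).
    return pow(sig, e, n) == h(msg)

challenge = b"nonce-123"
assert verify(challenge, sign(challenge))
```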
If you don’t want the server to know the password, you can use client certs, which have worked since the nineties. The browser UI for it is universally horrid, though, and whatever terminates your TLS needs to provide info to your application server.
This is a hurdle in both development and production, and – coupled with the browser UX being bad – it has left client certs criminally underused.
Oh, right. I mentioned public key crypto myself… but I still didn’t even think of client certificates.
The missing feature is some mechanism to create the TLS certificate.
Like a “Do you want to use secure passwordless authentication with that site?” prompt that creates a user@this.site CSR, uploads it for signing in a POST request, gets the signed cert back, and stores it for the next time this.site asks for it.
… at which point you need a mechanism to extract your credential from the browser and sync it across devices and applications. Hmm.
Yes, unless you remember them all (and how many is that?!) – mind space wasted…
I use a USB key on which my passwords are stored, and add that to my physical keyring.
I am now more vulnerable to physical access and less exposed to remote attackers.
It is not perfect. It is working.
Private key derivation from the password gives you a private key, from which you may derive a public key, and get public key crypto in javascript in the browser.
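A sketch of that derivation, using stdlib scrypt to stretch the password into a 32-byte seed. The salt value and the final signing-key step are assumptions for illustration; the actual keypair construction needs a crypto library (e.g. PyNaCl), so it is only indicated in a comment:

```python
import hashlib

# Stretch a password into a deterministic 32-byte seed suitable for
# seeding e.g. an Ed25519 keypair. Cost parameters are illustrative.
def seed_from_password(password: str, salt: bytes) -> bytes:
    return hashlib.scrypt(password.encode(), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)

salt = b"user@this.site"   # per-user public salt (an assumption here)
seed = seed_from_password("hunter2", salt)
assert len(seed) == 32

# Same password + salt -> same seed, so the keypair is reproducible on
# any device without the private key ever being stored or uploaded:
assert seed == seed_from_password("hunter2", salt)
# e.g. signing_key = nacl.signing.SigningKey(seed)   # with PyNaCl
```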
So… you are trusting JavaScript code uploaded from the server while doing that (uh oh). If that is compromised, it can upload the password somewhere else in the clear. If you trust the server, just send the password in the clear to it (within TLS, using TLS certificates).
Even today, Slack sends credentials from the client to the server in plaintext (just like almost every other website).
Try it yourself: https://tmp.shazow.net/screenshots/screenshot_2019-07-21_3d7d.png
Having a remote code execution to modify the server-side code to consume the plain-text passwords that their server receives and exfiltrate them would work just fine.
Who knows what else they might have modified.
That seems like a huge assumption about the architecture of Slack; unless you work there, I wouldn’t assert that. It doesn’t even seem very plausible — why only infect a subset of clients? Do Electron apps get served off of a server somewhere? (Unless I’m massively misunderstanding them, no.) And if popping a shell on a database server gave an attacker lateral access to push malicious code to clients, Slack has HUGE problems.
Ah, I didn’t even think about these. Slack is accessible as a normal web site too, I was only thinking about that.
Also, my assumption was that “as they were entered by users” meant “letter by letter, keylogger style” :D
Occam’s razor: which is more likely?
You’ve been targeted by an undetectable son-of-Stuxnet cyberweapon.
A cloud chat company is lying about their security.
This fails to take into account the risk of delaying the remedy.
There’s also the cost of the remedy to consider.
I read that as advertising, as a statement of particularly high company security values.
This reads like an ad for keybase chat.
That’s a foolish and dangerous position to take, especially for the CEO of a security company.
Umm… it is an ad for keybase chat. :)
The hilarious part here is they rightly point out that 2FA can’t protect against server compromise, but they also tell you to upload your private key to their servers, because apparently server compromise would never happen to THEM.
but they also tell you to upload your private key to their servers

You don’t have to upload your private key to use their stuff. You can do that, and it’ll unlock some things like “signing” from the web. Obviously, this isn’t a good idea if you’re really concerned about security… But they’re downplaying PGP support these days anyway, in favor of their own protocols. Keybase uses per-device secret keys (local to the device, mind you) and some Merkle-tree “magic” to build an identity trust: you can only add a new device by using an existing device to authorize the new device, among other things.
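To illustrate the general idea (this is NOT Keybase’s actual construction, just a minimal sketch): a Merkle tree lets a single published root hash commit to the full set of device-key statements, so a server that silently injects or swaps a device key changes the root and can be caught by anyone tracking it.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    # Hash the leaves, then pairwise-hash levels up to a single root.
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])       # duplicate last node if odd
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical device-key statements for one user:
devices = [b"alice-laptop:pubkey1", b"alice-phone:pubkey2"]
root = merkle_root(devices)

# A silently added device key changes the published root, so the
# tampering is detectable:
assert merkle_root(devices + [b"attacker:pubkeyX"]) != root
```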
aren’t keys you upload encrypted end to end?
Yes, but they can be decrypted in the browser. In the case they’re complaining about with slack, the server had been compromised since 2015. If that had happened with keybase, they would have had ample opportunity to alter the browser code to send the decrypted copy to the attacker. It’s a double standard.
🤔 I agree that Keybase is susceptible to attacks against its web application, but it’s not a double standard? The situation you’re describing is not what Slack themselves said — in their case a database host got popped and software was installed to read plaintext passwords OTA — there’s nothing about client attacks in there. (Also, Slack never said that logger was there beyond 2015 — if it was, I hope they’d reset more folks’ passwords and make that super clear.) Keybase would solve for the attack we can see with the information that we have now. I blame Slack partially for an unclear disclosure, but it’s been interesting to see people react so strongly to this post. Does Keybase have a history of bad action? I did some searching and couldn’t find anything.
I find it odd that CEO Super-Secure didn’t change their password in Slack after the widely publicized 2015 breach, even if they didn’t get a notice from Slack that they were included.
My thoughts exactly, especially since he totally threw out $5K of computer equipment…
They only discuss keybase outages and yet $5k of equipment was ‘thrown away’?
And the end of the article talks about importing Slack groups into Keybase chat?
Might be time for me to get off Keybase.
Because that $5k of equipment was also being used for things much more important than pinging people about outages. He couldn’t rule out the possibility that his Slack account had been compromised by someone breaking into his computer, so he nuked the computer.
Good point. But the whole thing? I might have just thrown out the storage. The whole thing seems quite disingenuous anyway, so I’m not sure I believe any part of it fully.
He couldn’t know if the Intel Management Engine was compromised. Or the firmware on the Ethernet card. Or the firmware on the USB controllers. Or …
Aside from shoddy technical merit, this post (and the email I didn’t get for whatever reason) is a major PR fail straight from a cryptography 101 or security 101 course.
Great reason to avoid Slack for anything more than just silly /gif chats between breaks.
Does he plan to throw away expensive hardware — which requires underpaid employees’ work to extract rare materials, causing ecological disaster — every time another Facebook Messenger alternative’s database gets exposed to the public?
I hope the next hardware he gets is more trustworthy, so he can do a full BIOS-level factory reset instead of throwing it away.
What about spending this money on hiring someone to maintain a private chat server instead?
I don’t get that sense of security.
Sorry, I’m sitting down and breathing. No need to get aggressive. Paranoia is good too when it comes to privacy, after all… But damn, $5,000…
I got the email and then an update later requiring another 112+MB or so download. Only Keybase…