Real secure messaging software. The standard and best answer here is Signal,
Oh please. They aren’t even close to sharing the same level of functionality. If I want to use Signal, I have to commit to depending on essentially one person (moxie) who is hostile towards anyone who wants to fork his project, and who completely controls the server/infrastructure. And I’d have to severely limit the options I have for interfacing with this service (1 android app, 1 ios app, 1 electron [lol!] desktop app). None of those are problems/restrictions with email.
I don’t know what the federated, encrypted ‘new’ email thing looks like, but it’s definitely not Signal. Signal is more a replacement for XMPP, if perhaps you wanted to restrict your freedom, give away a phone number, and rely on moxie.
I think Matrix is getting closer to being a technically plausible email and IM replacement.
The clients don’t do anything like html mail, but I don’t think I’d miss that much, and the message format doesn’t forbid it either.
If you can’t send patches to mailing lists with them then they’re not alternatives to email. Email isn’t just IM-with-lag.
Email can be exported as text and re-parsed by Perl or a different email client.
Until that functionality is available, I won’t consider something a replacement for email.
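On the export-and-re-parse point, a minimal sketch using Python’s standard mailbox module (the file name is hypothetical):

```python
import mailbox

# An mbox export parses with the standard library alone: no special
# client, server or account is needed to get the data back out.
for msg in mailbox.mbox("list-archive.mbox"):
    print(msg["From"], msg["Subject"])
```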
In all fairness: cmcaine says “Matrix is getting closer”.
Matrix is a federated messaging platform, like XMPP or email. You could definitely support email-style use of the system; it’s just that the current clients don’t. The protocol itself would be fine for email, mailing lists and git-send-email.
The protocol also gives you the benefits of good end-to-end encryption support without faff, which is exactly what general email use and PGP don’t give you.
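As a rough illustration of what git-send-email-style use could look like on top of the protocol, a sketch assuming the third-party matrix-nio client library; the homeserver, credentials, room ID and patch file are all hypothetical:

```python
import asyncio
from nio import AsyncClient  # third-party Matrix client library

async def send_patch() -> None:
    client = AsyncClient("https://matrix.example.org", "@alice:example.org")
    await client.login("hunter2")  # placeholder credentials
    with open("0001-fix-typo.patch") as f:
        patch = f.read()
    # A patch is just a text event; a mailing-list-style client could render
    # and apply these the way an MUA handles git-send-email output.
    await client.room_send(
        room_id="!devlist:example.org",
        message_type="m.room.message",
        content={"msgtype": "m.text", "body": patch},
    )
    await client.close()

asyncio.run(send_patch())
```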
Adding patch workflow to Matrix is no different to adding it to XMPP or any other messaging solution. Yes, it is possible but why?
I can understand you like Matrix but it’s not clear how Matrix is getting closer to e-mail replacement with just one almost-stable server implementation and a spec that’s not an IETF standard. I’d say Matrix is more similar to “open Signal” than to e-mail.
“Getting closer” is a statement towards the future, yet all of your counter arguments are about the current state.
If only I knew the future I’d counter that, but given that the future is unknown I can only extrapolate from the present and the past. Otherwise Matrix may be “getting closer” to anything.
Do you have any signs that Matrix is getting e-mail patch workflow?
Mailing lists could move to federated chatrooms. They moved from Usenet before, and in some communities moved to forums before the now common use of Slack.
I’m not saying it would be the best solution, but it’s our most likely trajectory.
Mailing lists existed in parallel with Usenet.
Both still exist :)
I do think, actually, that converting most public mailing lists to newsgroups would have a few benefits:
It’d make their nature explicit.
It’d let us stop derailing designs for end-to-end encryption with concerns that really apply only to public mailing lists.
I could go back to reading them using tin.
Snark aside, I do think the newsgroup model is a better fit for most asynchronous group messaging than email is, and that it’s dramatically better than chat apps, whether you read that to mean Slack or any of the myriad superior alternatives to it. But that ship sailed a long time ago.
Mailing lists are more useful than Usenet. If nothing else, you have access control to the list.
Correct, and the younger generation unfamiliar with Usenet gravitated towards mailing lists. The cycle repeats.
Mailing lists don’t use slack and slack isn’t a mailing list. Slack is an instant messaging service. It has almost nothing in common with mailing lists.
It’s really important to drive this point home. People critical of email have a lot of good points. Anyone that has set up a mail server in the last few years knows what a pain it is. But you will not succeed in replacing something you don’t understand.
The world has moved on from asynchronous communication for organizing around free software projects. It sucks, I know.
Yeah. Not everyone, though.
Personally I think that GitHub’s culture is incredibly toxic. Only recently have there been tools added to allow repository owners to control discussions in their own issues and pull requests. Before that, if your issue got deep linked from Reddit you’d get hundreds of drive by comments saying all sorts of horrible and misinformed things.
I think we’re starting to see a push back, at last, from this GitHub/Slack culture to open, federated protocols like SMTP and plain git. Time will tell. Certainly there’s nothing stopping a project from moving to {git,lists}.sr.ht, mirroring their repo on GitHub, and accepting patches via mailing list. Eventually people will realise that this means a lower volume of contributions but with a much higher signal-to-noise ratio, which is a trade-off some will be happy to make.
Only recently have there been tools added to allow repository owners to control discussions in their own issues and pull requests. Before that, if your issue got deep linked from Reddit you’d get hundreds of drive by comments saying all sorts of horrible and misinformed things.
It’s not like you used to have levers for mailing lists, though, that would stop marc.org from archiving them or stop people from linking those marc.org (or kernel.org) threads. And drive-bys happened from that, too. I don’t think I’m disputing your larger point. Just saying that it’s really not related to the message transfer medium, at least as regards toxicity.
Sure, I totally agree with you! Drive-bys happen on any platform. The difference is that (at least until recently) on GitHub you had basically zero control. Most people aren’t going to sign up to a mailing list to send an email. The barrier to sending an email to a mailing list is higher than the barrier to leaving a comment on GitHub. That has advantages and disadvantages. Drive-by contributions and drive-by toxicity are both lessened. It’s a trade-off I think.
I guess I wasn’t considering a mailing list subscription as being meaningfully different than registering for a github account. But if you’ve already got a github account, that makes sense as a lower barrier.
Matrix allows sending in the clear, so I suppose this has the “eventually it will leak” property that the OP discussed?
(A separate issue: I gave up on Matrix because its e2e functionality was too hard to use with multiple clients)
and across UA versions. I got hit by this back when I still used it: it derived the key from the browser user agent, so when OpenBSD changed how the browser presented itself I was suddenly unable to read old conversations :)
Oh! I didn’t know that!
Functionality is literally irrelevant, because the premise is that we’re talking about secure communications, in cases where the secrecy actually matters.
Of course if security doesn’t matter then Signal is a limited tool, you can communicate in Slack/a shared google doc or in a public Markdown document hosted on Cloudflare at that point.
Signal is the state of the art in secure communications, because even though the project is heavily driven by Moxie, you don’t actually need to trust him. The Signal protocol is open and it’s basically the only one on the planet that goes out of its way to minimize server-side information storage and metadata. The phone number requirement is also explicitly a good design choice in this case: as a consequence Signal does not store your contact graph - that is kept on your phone in your contact store. The alternative would be that either users can’t find each other (defeating the point of a secure messaging tool) or that Signal would have to store the contact graph of every user - which is a way more invasive step than learning your phone number.
even though the project is heavily driven by Moxie, you don’t actually need to trust him
Of course you must trust Moxie. A lot of Signal’s privacy features amount to trusting them not to store certain data that they have access to. The protocol allows for the data not to be stored, but it gives no guarantees. Moxie also makes the only client you can use to communicate with his servers, and you can’t build it yourself, at least not without jumping through hoops.
The phone number issue is what’s keeping me away from Signal. It’s viral, in that everyone who has Signal will start using Signal to communicate with me, since the app indicates that they can. That makes it difficult to get out of Signal when it becomes too popular. I know many people who cannot get rid of WhatsApp anymore: they still need it for some small group, but because their phone number is their ID they cannot leave the larger group either; you’re either on WhatsApp completely or you’re not. Signal is no different.
And how can you see that a phone number is able to receive your Signal messages? You have to ask the Signal server somehow, which means that Signal then is able to make the contact graph you’re telling me Signal doesn’t have. They can also add your non-Signal friends to the graph, since you ask about their numbers too. Maybe you’re right and Moxie does indeed not store this information, but you cannot know for sure.
What happens when Moxie ends up under a bus, and Signal is bought by Facebook/Google/Microsoft/Apple and they suddenly start storing all this metadata?
Signal is a 501(c)(3) non-profit foundation in the US; Moxie does not control it and is not able to sell it. In theory every organization can turn evil, but there is still a big difference between non-profits, which are legally not allowed to do certain things, and corporations, which are legally required to serve their shareholders, mostly by seeking to turn a profit.
And how can you see that a phone number is able to receive your Signal messages? You have to ask the Signal server somehow, which means that Signal then is able to make the contact graph you’re telling me Signal doesn’t have.
There are two points here that I’d like to make, one broader and one specific. In a general sense, Signal does not implement a feature until they can figure out how to do it securely, leaking as little information as possible. This has been the pattern for basically every feature that Signal has. Specifically, phone numbers are the same: The Signal app just sends a cryptographically hashed, truncated version of phone numbers in your address book to the server, and the server responds with the list of hashes that are Signal users. This means that the Signal server can know whether any one person is a Signal user, but not their contact graph.
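A toy model of the discovery scheme as described above; the SHA-1-and-truncate details are illustrative, not Signal’s actual wire format:

```python
import hashlib

def truncated_hash(phone: str) -> bytes:
    # Hash the normalised number and keep only a short prefix.
    return hashlib.sha1(phone.encode("utf-8")).digest()[:10]

# Client side: hash every number in the local address book.
address_book = ["+15551230001", "+15551230002", "+15551230003"]
query = {truncated_hash(p) for p in address_book}

# Server side: intersect with the hashes of registered users, answer only that.
registered = {truncated_hash(p) for p in ["+15551230002", "+15559990000"]}
matches = query & registered
print(len(matches), "of your contacts are users")  # the graph stays on the phone
```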
Every organization can also be bought by an evil one. Facebook bought WhatsApp, remember?
The Signal app just sends a cryptographically hashed, truncated version of phone numbers in your address book
These truncated hashes can still be stored server-side, and used to make graphs. With enough collected data, a lot of these truncated hashes can be reversed. Now, I don’t think Signal currently stores this data, let alone does data analysis on it. But Facebook probably would, given the chance.
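The reversal is cheap because the input space is tiny; a sketch of the enumeration, reusing the toy hash from above:

```python
import hashlib

def truncated_hash(phone: str) -> bytes:
    return hashlib.sha1(phone.encode("utf-8")).digest()[:10]

target = truncated_hash("+15551234567")  # a hash the server chose to keep

# Roughly 10^10 possible numbers is nothing by cryptographic standards, so a
# stored hash can simply be brute-forced back into the number it came from.
for n in range(1_230_000, 1_240_000):  # a small slice of the space for the demo
    candidate = f"+1555{n:07d}"
    if truncated_hash(candidate) == target:
        print("reversed:", candidate)
        break
```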
Every organization can also be bought by an evil one. Facebook bought WhatsApp, remember?
WhatsApp was a for-profit company; 501(c)(3)s work under quite different conditions. Not saying they can’t be taken over, but this argument doesn’t cut it.
No, it’s an absolutely terrible choice, just like it is a terrible choice for ‘two factor authentication’.
Oh but Signal users can always meet in person to re-verify keys, which would prevent any sim swap attack from working? No, this (overwhelmingly) doesn’t happen. In an era where lots of people change phones every ~1-2yr, it’s super easy to ignore the warning because 99% of the time it’s a false positive.
The alternative would be that either users can’t find each other (defeating the point of a secure messaging tool)
This is a solved problem. I mean, how do you think you got the phone numbers for your contacts in the first place? You probably asked them, and they probably gave it to you. Done.
Careful there… you can’t say bad things about electron in here….
Linking to a google search is both pedantic and unhelpful. Pedantic because it means that you care more about telling your readers “Why don’t you just figure it out” instead of providing the information directly (and you even took the extra time to make it a link!), and unhelpful because google search is now heavily impacted by previous user searches, localisation, etc…
For me, a French hobbyist using duckduckgo on a daily basis, LARP is an “Algorithm and Problem solving” software. I’m still not sure what the author means here, but it definitely pissed me off.
I thought about Live Action Role Playing, and interpreted LARP Security as a snobby way of saying “Security Theatre”. It’s not a term I would use myself.
It’s probably just a setup for this zinger:
Users are encouraged to rotate their PGP keys in the same way that LARPers are encouraged to sharpen their play swords: not only does nobody do it, but the whole system would probably fall apart if everyone did.
Makes sense indeed. My point about linking to a google search rather than giving a real definition still stands though.
Phrases like that are one reason I suggested the “rant” tag for this post. It comes across as pretty hostile to me.
I wasn’t sure what the author meant by this either.
The dozen person company I work at uses PGP to encrypt their emails. It works fine, no problems. Somewhat better security for basically no work. No, it won’t thwart the NSA, so what? I don’t get what the fuss is about.
“PGP is easy to use, you just have to never make any mistakes, ever.”
– https://www.kalzumeus.com/about/
(via this thread: https://twitter.com/patio11/status/1230372274257006592)
I previously made a PGP key available. I’m deprecating it, for reasons described here. As that essay predicts, I have had mostly negative experiences with encrypted emails, including receiving 5+ private keys of security researchers.
That’s funny. Colin Percival’s reply is icing on the cake:
“I talk to a lot of security people, and this has never happened to me. I guess people trust you more than they trust me?”
I disagree with this post.
If messages can be sent in plaintext, they will be sent in plaintext.
That’s not a bad thing. If the person you are communicating with doesn’t care about encrypting the conversation then they don’t care about the confidentiality of the conversation and encrypting the messages is a waste of everyone’s time, a waste of computing resources and false security anyway. You cannot have secure communication between parties that do not care about security, in my opinion.
Metadata is as important as content, and email leaks it.
Metadata is not important data. That I sent a message to someone at a particular time is known to someone whether it’s a peer-to-peer service in which case it’s all the links between us, or a centralised service in which case it’s the service provider. Either way, everyone is aware I sent a message, and in both cases someone knows who I sent a message to. You should not trust anyone with genuinely important information that genuinely needs to be encrypted. If it’s a centralised service, then the service provider knows, and you should assume everyone knows.
A centralised messaging service might have end-to-end encrypted messages but it cannot have end-to-end encrypted metadata for the obvious reason that it can tell where it has got messages from and it can tell where the messages it’s relaying are meant to go.
If you are sending anything over the public internet, who sent the message, when it was sent, how big it is, who it is to, etc. is public information. The metadata of your Signal messages is public information.
Every archived message will eventually leak.
If I send you a message, and you decrypt it, you are responsible for its confidentiality. I cannot in any technological way prevent you from leaking the message. You can take a screenshot of the programme showing you the message, you can take a photo of your phone or computer screen, you can print it out, you can read it aloud, you can copy it by hand onto some paper. Part of sending someone a message is acknowledging that, regardless of the social norms around confidentiality, there’s no technological means of forcing someone to keep a secret. There are no Unbreakable Vows in the real world.
This really goes back to the first point: if the person you are communicating with doesn’t care about security, you are not communicating securely no matter how many technological safeguards you try to put in place. At some point, they will fuck up, because they do not care. Auto-deleting messages just give a false sense of security. They’re like password expiry dates: all they do is encourage people to write things down. I’d rather my message was archived on their encrypted hard drive than written down on a piece of paper ‘because it will disappear otherwise’.
etc. etc. It just goes on, repeating the same false and misleading statements about how encryption and trust and communication actually work. Ultimately, if you cannot trust the person you are communicating with and you cannot trust that they care about security, then your communications with them are not secure and the only form of ‘security’ you should bother with is the basic TLS level of just ensuring that dragnet surveillance won’t be able to passively eavesdrop on your conversations. But any kind of targeted surveillance will easily compromise your conversation, so don’t give yourself a false sense of security trying to harden yourself against something that you inherently cannot prevent.
First of all, you completely ignored the forward secrecy and key rotation part. That’s kind of the killer, here – in order to support these things, you have to actually negotiate a key, which pretty much wrecks the whole “regular email with an encrypted payload” thing. The rest of this discussion is almost completely irrelevant compared to the lack of forward secrecy.
But anyway…
There are actual solutions for the metadata problem, particularly Mix networks like the recently-brought-to-my-attention Loopix system and the older MixMinion. I would rather use Loopix than Signal, all else being equal, because metadata is really useful. Tell me who you’re with, and I can pretty much figure out who you are even if I can’t actually read your email. And that’s before we start asking uncomfortable questions about the Subject line.
I’d rather use a mixnet than Signal, but I’d still rather use Signal than PGP, for mostly the same reason that the author gave, and the reason that you didn’t address. The UX for the widely-available PGP email systems is really, really bad because they fail open. A well-designed security system should fail by erroring out, not by falling back to plaintext. If someone makes a mistake while using the encryption system, it is not proof that they don’t care. It is, at worst, proof of ignorance, and everyone is ignorant when they start out. It is easy to accidentally send an unencrypted email in most encryption-enabled clients. It is almost impossible to accidentally use email instead of Signal, specifically because the applications are completely separate.
This is the same reason why I would run a Tor hidden service, even if I myself don’t care about my own anonymity, if I expected a lot of anonymity-sensitive users. If someone tries to use an onion site without actually using Tor, the site will not load, and they will immediately notice and correct the mistake. Even if your site offers both an onion address alongside a clearnet address, it at least acts as double-confirmation; someone who wants to use Tor would have to simultaneously use a clearnet browser and a clearnet address in order to accidentally leak their identity to me.
I agree with you that auto-deleting messages are dumb, but seriously, the lack of forward secrecy ought to be enough. The part about making it harder to accidentally send a plaintext message is just a UX improvement that you take at the same time that you do the forward secrecy fix.
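To make the forward-secrecy point concrete, here is a toy symmetric ratchet; it is nothing like the full Signal protocol, but it shows why key rotation needs negotiated, evolving state that a static PGP keypair doesn’t have:

```python
import hashlib
import os

def ratchet(chain_key: bytes) -> tuple[bytes, bytes]:
    # Derive a one-off message key, then replace the chain key outright:
    # leaking today's state reveals nothing about earlier messages.
    message_key = hashlib.sha256(chain_key + b"\x01").digest()
    next_chain_key = hashlib.sha256(chain_key + b"\x02").digest()
    return message_key, next_chain_key

chain_key = os.urandom(32)  # would come from an initial key negotiation
for i in range(3):
    message_key, chain_key = ratchet(chain_key)
    print(f"message {i} key:", message_key.hex()[:16], "...")
```

Each message key is used once and is underivable from any later state, which is exactly the property a long-lived PGP key cannot offer.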
First of all, you completely ignored the forward secrecy and key rotation part. That’s kind of the killer, here – in order to support these things, you have to actually negotiate a key, which pretty much wrecks the whole “regular email with an encrypted payload” thing.
I’m not going to nitpick every single line of the post.
The rest of this discussion is almost completely irrelevant compared to the lack of forward secrecy.
I honestly don’t think that forward secrecy is that important.
There are actual solutions for the metadata problem, particularly Mix networks like the recently-brought-to-my-attention Loopix system and the older MixMinion. I would rather use Loopix than Signal, all else being equal, because metadata is really useful. Tell me who you’re with, and I can pretty much figure out who you are even if I can’t actually read your email.
I think this is really just security through obscurity. It’s not a bad thing, of course. But it seems to me that it really just protects metadata from dragnet surveillance in the same way that SMTPS/IMAPS protects data from dragnet surveillance. It doesn’t really prevent any targeted attacks.
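For readers who haven’t met the mix idea, a toy layered-encryption sketch (using the third-party cryptography package; real designs like Loopix add padding, batching and cover traffic on top):

```python
from cryptography.fernet import Fernet

# One key per relay: the sender wraps the message so that each relay can peel
# exactly one layer and learns only the next hop, never sender plus recipient.
keys = {name: Fernet.generate_key() for name in ("mix1", "mix2", "mix3")}

packet = Fernet(keys["mix3"]).encrypt(b"to=alice@example.org|hello")
packet = Fernet(keys["mix2"]).encrypt(b"next=mix3|" + packet)
packet = Fernet(keys["mix1"]).encrypt(b"next=mix2|" + packet)

# The first relay sees where to forward, but not the destination or the body.
inner = Fernet(keys["mix1"]).decrypt(packet)
next_hop, remainder = inner.split(b"|", 1)
print(next_hop)  # b'next=mix2'; remainder is still ciphertext
```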
I’d rather use a mixnet than Signal, but I’d still rather use Signal than PGP, for mostly the same reason that the author gave, and the reason that you didn’t address. The UX for the widely-available PGP email systems is really, really bad because they fail open. A well-designed security system should fail by erroring out, not by falling back to plaintext.
I’ve never had any issues with the UI for PGP email. I see the same complaints about PGP all the time and frankly people seem to want a magical solution to a problem that doesn’t have any solution: public key encryption is a fairly complicated thing and users need to understand it to use it. You can’t get around that.
If someone makes a mistake while using the encryption system, it is not proof that they don’t care. It is, at worst, proof of ignorance, and everyone is ignorant when they start out. It is easy to accidentally send an unencrypted email in most encryption-enabled clients. It is almost impossible to accidentally use email instead of Signal, specifically because the applications are completely separate.
I want it to be easy to send an unencrypted email, because I want to send lots of unencrypted emails. I don’t want all my emails to be encrypted. Most of them don’t need to be encrypted. Emails I’m sending to public mailing lists don’t need to be and shouldn’t be encrypted, as that just provides a false sense of security.
I don’t want the applications to be completely separate. I think instant messaging (which is what Signal is and is for) should be separate from email, and for instant messaging things like always-on encryption probably does make sense. But email isn’t just for messaging privately to people you know, it’s used for loads of things where encryption isn’t appropriate, like mailing lists.
I know some people think that mailing lists should be replaced with online forums or reddit or something, but I personally like mailing lists a lot more than reddit or even than old phpBB-style forums.
This is the same reason why I would run a Tor hidden service, even if I myself don’t care about my own anonymity, if I expected a lot of anonymity-sensitive users. If someone tries to use an onion site without actually using Tor, the site will not load, and they will immediately notice and correct the mistake. Even if your site offers both an onion address alongside a clearnet address, it at least acts as double-confirmation; someone who wants to use Tor would have to simultaneously use a clearnet browser and a clearnet address in order to accidentally leak their identity to me.
I agree with you that auto-deleting messages are dumb, but seriously, the lack of forward secrecy ought to be enough. The part about making it harder to accidentally send a plaintext message is just a UX improvement that you take at the same time that you do the forward secrecy fix.
To me this is just optimising for stupid. If someone opens your website in Firefox instead of the Tor browser, they obviously don’t really care about their anonymity that much. If they really were worried they were being tracked by a government entity or something they’d be careful, constantly.
Related: http://mixmaster.sourceforge.net/
Wrong / misleading in more than one way:
If messages can be sent in plaintext, they will be sent in plaintext.
That’s not a bad thing. If the person you are communicating with doesn’t care about encrypting the conversation then they don’t care about the confidentiality of the conversation and encrypting the messages is a waste of everyone’s time, a waste of computing resources and false security anyway. You cannot have secure communication between parties that do not care about security, in my opinion.
It is possible to use unsafe mechanical equipment in a safe way. That doesn’t mean adding safety features is a bad idea.
Yep, people can (and do) still jam the dead man’s switch, remove the covers etc, but on average fewer people get maimed and killed now than before.
Metadata is as important as content, and email leaks it.
Metadata is not important data.
If you can prove I sent a message to someone I was not supposed to be talking to, I’m in trouble even if you cannot tell the exact contents of the message.
That I sent a message to someone at a particular time is known to someone whether it’s a peer-to-peer service in which case it’s all the links between us, or a centralised service in which case it’s the service provider. Either way, everyone is aware I sent a message, and in both cases someone knows who I sent a message to.
Emphasis mine. This does not necessarily follow. It depends on your threat model.
You should not trust anyone with genuinely important information that genuinely needs to be encrypted. If it’s a centralised service, then the service provider knows, and you should assume everyone knows.
Again: Threat model.
If you are sending anything over the public internet, who sent the message, when it was sent, how big it is, who it is to, etc. is public information.
I had a hard time figuring out what you meant here; one interpretation that makes it correct is if by “over the public internet” you mean sending it by mail. You can do a lot to make sure this isn’t public information.
The metadata of your Signal messages is public information.
No. I’m not the biggest Signal fan (not open, Signal fans are seriously annoying etc), but let’s stick to the facts:
Signal messages including metadata are encrypted in transit and discarded afterwards.
Just like my 20-year-old Hotmail messages aren’t public information, neither is the metadata from my Signal messages.
Metadata is not important data. That I sent a message to someone at a particular time is known to someone whether it’s a peer-to-peer service in which case it’s all the links between us, or a centralised service in which case it’s the service provider.
The amount of stuff that can be worked out just from metadata might surprise you. And the fact that modern secure messaging systems are working to make it harder for third parties to snoop on even the graph of who talks to whom should be a hint that maybe it is an important part of security.
There’s no reasonable expectation of privacy with metadata. It doesn’t matter what data can be worked out AT ALL. That’s not what’s in question.
If I send you a letter, that I sent you a letter is not private. The postman knows, the government is allowed to know, it might as well be public. What I sent you is private. Nobody is allowed to look in. We should be turning those legal privacies into technical, mathematical privacy. We shouldn’t be inventing new types of privacy and just assuming, without any actual debate, that privacy maximalism is philosophically correct.
If I send you a letter, that I sent you a letter is not private.
Only if you put your name on the outside of the envelope, surely? Otherwise, yes, it’s completely private who and where the letter originated from because …
Most of the author’s objection to encrypted email appears to stem from integration issues.
As an example, look at age, a tool the author proposes as an alternative. Were it integrated with an email client, that email client would still have the ability to send unencrypted attachments.
Getting serious about the usability of integrated encryption is the real answer, IMO, not abandoning the ability to encrypt your emails. A more opinionated cryptographic message syntax, a default refusal to send plaintext messages to contacts for which the email client has a key, and a refusal to reply in plaintext to an encrypted email would address the concerns articulated in the post (at least the ones that aren’t bugs) without driving people towards a proprietary messaging system that requires them to share phone numbers before they can use it.
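A sketch of that fail-closed default; the helper names are hypothetical, and the point is only that plaintext to a known-key contact requires an explicit override:

```python
class EncryptionRequired(Exception):
    """Raised instead of silently falling back to plaintext."""

def encrypt(body: str, key: str) -> str:
    return f"<ciphertext under {key}>"   # stand-in for real crypto

def transmit(recipient: str, payload: str) -> None:
    print("->", recipient, payload)      # stand-in for SMTP submission

def send(recipient: str, body: str, keyring: dict,
         force_plaintext: bool = False) -> None:
    key = keyring.get(recipient)
    if key is not None:
        transmit(recipient, encrypt(body, key))   # normal, encrypted path
    elif force_plaintext:
        transmit(recipient, body)                 # explicit opt-out only
    else:
        # Fail closed: no key and no explicit override means no send at all.
        raise EncryptionRequired(f"no key for {recipient}")
```

The inversion matters: today’s clients make plaintext the default and encryption the extra step, which is precisely the fail-open behaviour being criticised.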
Another point of the author is that only the body of the email is encrypted, while the envelope remains plaintext. So no matter how hard you try to enforce encryption, the metadata would still appear in clear text, because that is how the protocol works.
While not a big deal most of the time (everyone knows Alice and Bob have been chatting for a long time, but nobody knows the topic), it can be an issue on its own: for example, when a journalist talks every day to the same contact, you might assume that this is their secret source.
An application like Signal hides the sender information, so the central servers only know the destination, not the sender. There is still some metadata, however, like the source IP address, albeit more difficult to correlate with the sender.
The issue with this post is that the author praises an application using central servers controlled by a single company as a way to provide fully secure channels. It would have been more serious IMO to discuss protocols rather than apps here, especially peer-to-peer or decentralized technologies like the tox protocol, or even IRC with SSL.
Tox is a nice protocol, but it is very bound to the network (NAT, UDP…), so it is harder to implement and tinker with. (Yeah, http://ratox.2f30.org/ rocks, we know that already.)
I am curious about what https://messaginglayersecurity.rocks/ can bring to all this. It protects the end users against the man in the middle, which means you can have untrusted relays carry still-secure messages until they reach the end users.
OTOH, as of today, tox is probably the most reasonable compromise between security (good), convenience (really not bad, easier than PGP), and availability (a lot of clients out there, quite a real user base).
Regarding chat systems, the IRC protocol is a good universal endpoint, with a lot of clients and apps available, and this permits having a local crypto-chat-protocol <-> IRC gateway listening on 127.0.0.1, so that the IRC client can additionally connect to the local relay, and finally solve this: https://xkcd.com/1810/
Thanks for the tox link. That one wasn’t on my radar. Can you summarize (or link to a summary of) why it might be preferable to matrix?
For email, I think I consider the plaintext envelope to be tractable. Much like snail mail, certain opsec efforts are required to prevent an observer from knowing that I contacted someone. On email, it’d include using a special-purpose mailbox that’s not associated with my meatspace identity. On snail mail, it’d be a PO box or similar.
And FFS, don’t use BCCs if you send encrypted email to multiple recipients. (I have personally written a tool to spin through a mailbox and uncloak those.) And if you can stomach it, stop encrypting your sent mail to yourself. As with BCCs, you’d need to audit your MUA to make sure that’s safe, and it’s not safe on any MUA I’ve ever checked personally.
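The uncloaking works because each recipient of a PGP message normally gets its own visible public-key-encrypted session-key packet; a sketch of the idea (mailbox path hypothetical, gpg assumed installed):

```python
import mailbox
import subprocess

for msg in mailbox.mbox("inbox.mbox"):
    raw = msg.as_bytes()
    start = raw.find(b"-----BEGIN PGP MESSAGE-----")
    end = raw.find(b"-----END PGP MESSAGE-----", start)
    if start == -1 or end == -1:
        continue
    armored = raw[start:end] + b"-----END PGP MESSAGE-----"
    # --list-packets needs no private key: recipient key IDs sit in the clear,
    # so BCC'd recipients show up unless --throw-keyids was used to encrypt.
    packets = subprocess.run(["gpg", "--batch", "--list-packets"],
                             input=armored, capture_output=True).stdout
    keyids = [line for line in packets.decode(errors="replace").splitlines()
              if "keyid" in line]
    print(msg["Message-Id"], keyids)
```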
I agree that this would be a much more interesting post if it was talking at the protocol level instead of product level. But I also do think that signed and encrypted email is salvageable through some combination of education and integration improvements. If we stopped thinking of handing out an email address as being safer than handing out a postal address, the metadata problem shrinks significantly, IMO.
If you wish to PGP encode it (but please only do so if privacy is very urgent, since it is inconvenient) use this pgp key.
I cannot say that I disagree with the statement or the sentiment. We’re supposed to be PGP aware in NetBSD, but from the looks of it, most folk do seem to find it as pointless as the author of the above statement.
Computers are already such a mess to begin with–die tampering, UEFI bugs, AMT, spec ex, apple/google closed-ecosystems, massive OS attack surfaces, etc–that sounding the alarm about something in the application layer as ubiquitous as e-mail seems more like attention whoring than constructive advice. Then he/she goes on to recommend some proprietary rube-goldberg machine as a replacement. No thank you.
If I had something sensitive to send, I’d PGP it–if only because this person says not to.
There’s a lot to like and a lot to dislike in this post. But in any case, the most important part to me is the link to age. I’ve heard a lot of complaints about gpg lately, and agree with a lot of them, but my question has always been, “so what’s a good alternative for signing and encryption then?”
Combine age with minisign, and I think I may have finally found an alternative to gpg.
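A sketch of that combination driven from Python; both CLIs must be installed, and the file names and recipient string are placeholders:

```python
import subprocess

# One-time setup: age-keygen writes the private key and prints the public
# "age1..." recipient string.
subprocess.run(["age-keygen", "-o", "age-key.txt"], check=True)
recipient = "age1..."  # placeholder: paste the real public key here

# Encrypt to the recipient, then publish a detached minisign signature:
# gpg's two everyday jobs, split across two small single-purpose tools.
subprocess.run(["age", "-r", recipient, "-o", "notes.txt.age", "notes.txt"],
               check=True)
subprocess.run(["minisign", "-Sm", "notes.txt.age"], check=True)  # -> .minisig
```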
Key handling is what I miss in both minisign and age. Not that GPG’s keychain is anything to be proud of, but minisign just expects you to handle the keys yourself, while age conveniently does key discovery via a Microsoft-acquired cloud service which was never even intended as a key store (GitHub), and trusts it blindly.
How is PGP dead? The author mentions it, links to one vulnerability, and moves on.
It seems to me that if I have someone’s public key, and they have mine, I can use GPG to encrypt messages and send them back and forth. Aside from this being complicated, if followed on both sides it should work.
Thanks for linking this. I wish I was around to contribute to the discussion back then.
I don’t think the author should discount PGP, if that’s his complaint. I think his use case is very different than mine.
For regular stuff, encrypted email means transport encryption between me and my email server. That’s enough. When I have something important, I use PGP. When I need to sign a GitHub commit, I use PGP.
It’s both hard to send and read an encrypted email. I do it rarely, but when I do, I trust that PGP works.
The author’s concerns may be valid for him, but they conflate privacy and encryption. I don’t need forward secrecy for this purpose. I’d rather have a shared key than manage session keys for all eternity. That’s outside the scope of PGP, although I could use PGP to negotiate session keys if I really wanted.
I don’t care that PGP leaks metadata, the intent is to identify a message to me, so having the subject visible is a feature not a bug. If I wanted anonymity I would use a fake subject and distribute it in ways that preserve my identity.
There are many flaws, but for situations where I just must have encryption, PGP works. I have set it up with desktop clients before and it works well with stuff like Outlook. But since I mainly use webmail, I don’t set it up because it seems kind of pointless to trust a third party with my private key. That would make me have to have a super private key or something.
I don’t think you can use age for the first use case, “prove I am me”, as age will only allow you to encrypt. Age will also not help you in proving that a key belongs to a certain user.
Signal currently requires phone numbers for all its users. It does this not because Signal wants to collect contact information for its users, but rather because Signal is allergic to it: using phone numbers means Signal can piggyback on the contact lists users already have, rather than storing those lists on its servers.
On one hand, <mind_blown.gif>. On the other hand, acquiring a phone number anonymously is kind of a bottleneck. US-focused tutorials suggest jumping through a set of hoops to get a Google Voice phone number anonymously (and using Google’s services for anonymity seems… idk, kind of risky?). The set of hoops is manageable, but there’s a lot of tricky parts. I haven’t seen an EU-based guide to an anonymous phone number other than “get somebody else to register the number, dunno, pay a homeless person or something ¯\_(ツ)_/¯”.
For the contact-list piggybacking, an email address would work just the same, would allow more anonymous IDs, easily replacing IDs or using multiple IDs to compartmentalize, wouldn’t be tied to a particular country, etc.
I thought for a moment it would be less secure than SMS as identity confirmation, but then SMS is not even considered good 2FA, and a reliable confirmation code would require a more involved protocol anyway; the same protocol could be used over email.
The weaknesses in the underlying OpenPGP standard (specifically, OpenPGP’s lack of mandatory integrity verification) enable one of the attacks given in the paper. Despite its pre-existing weaknesses, OpenPGP can still be used reliably within certain constraints. When using PGP to encrypt or decrypt files at rest, or to verify software with strict signature checking, PGP still behaves according to expectation.
Long term: Update OpenPGP and S/MIME standards. The EFAIL attacks exploit flaws and undefined behavior in the MIME, S/MIME, and OpenPGP standards. Therefore, the standards need to be updated, which will take some time.
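In script form, the “strict signature checking” the quote above carves out looks something like this; the paths and the dedicated keyring are illustrative:

```python
import subprocess

# Verify against a pinned keyring only, and accept nothing short of a
# machine-readable VALIDSIG status line: no guesswork, no soft fallback.
result = subprocess.run(
    ["gpg", "--batch", "--no-default-keyring",
     "--keyring", "./release-keys.gpg",
     "--status-fd", "1",
     "--verify", "release.tar.gz.sig", "release.tar.gz"],
    capture_output=True, text=True,
)
if result.returncode != 0 or "VALIDSIG" not in result.stdout:
    raise SystemExit("signature verification FAILED")
print("signature OK")
```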
I agree that it is PGP/GPG’s role to be a strong protocol and protect against these kinds of attacks. Can that happen if you also sign the email after encryption?
But sure, crypto foot-guns are not something we want to be directly exposed to.
Oh please. They aren’t even close to sharing the same level of functionality. If I want to use Signal, I have to commit to depending on essentially one person (moxie) who is hostile towards anyone who wants to fork his project, and who completely controls the server/infrastructure. And I’d have to severely limit the options I have for interfacing with this service (1 android app, 1 ios app, 1 electron [lol!] desktop app). None of those are problems/restrictions with email.
I don’t know what the federated, encrypted ‘new’ email thing looks like, but it’s definitely not Signal. Signal is more a replacement for XMPP, if perhaps you wanted to restrict your freedom, give away a phone number, and rely on moxie.
I think Matrix is getting closer to being a technically plausible email and IM replacement.
The clients don’t do anything like html mail, but I don’t think I’d miss that much, and the message format doesn’t forbid it either.
If you can’t send patches to mailing lists with them then they’re not alternatives to email. Email isn’t just IM-with-lag.
Email can be exported as text and re-parsed by Perl or a different email client.
Until that functionality is available, I won’t consider something a replacement for email.
In all fairness: cmcaine says “Matrix is getting closer”.
Matrix is a federated messaging platform, like XMPP or email. You could definitely support email-style use of the system it’s just that the current clients don’t support that. The protocol itself would be fine for email, mailing lists and git-send-email.
The protocol also gives you the benefits of good end-to-end encryption support without faff, which is exactly what general email use and PGP don’t give you.
Adding patch workflow to Matrix is no different to adding it to XMPP or any other messaging solution. Yes, it is possible but why?
I can understand you like Matrix but it’s not clear how Matrix is getting closer to e-mail replacement with just one almost-stable server implementation and the spec that’s not an IETF standard. I’d say Matrix is more similar to “open Signal” than to e-mail.
“Getting closer” is a statement towards the future, yet all of your counter arguments are about the current state.
If I only knew the future I’d counter argument that but given that the future is unknown I can only extrapolate the current and the past. Otherwise Matrix may be “getting closer” to anything.
Do you have any signs that Matrix is getting e-mail patch workflow?
Mailing lists could move to federated chatrooms. They moved from Usenet before, and in some communities moved to forums before the now common use of Slack.
I’m not saying it would be the best solution, but it’s our most likely trajectory.
Mailing lists existed in parallel with Usenet.
Both still exist :)
I do think, actually, that converting most public mailing lists to newsgroups would have a few benefits:
Snark aside, I do think the newsgroup model is a better fit for most asynchronous group messaging than email is, and think it’s dramatically better than chat apps. Whether you read that to mean slack or any of the myriad superior alternatives to slack. But that ship sailed a long time ago.
Mailing lists are more useful than Usenet. If nothing else, you have access control to the list.
Correct, and the younger generation unfamiliar with Usenet gravitated towards mailing lists. The cycle repeats.
Mailing lists don’t use slack and slack isn’t a mailing list. Slack is an instant messaging service. It has almost nothing in common with mailing lists.
It’s really important to drive this point home. People critical of email have a lot of good points. Anyone that has set up a mail server in the last few years knows what a pain it is. But you will not succeed in replacing something you don’t understand.
The world has moved on from asynchronous communication for organizing around free software projects. It sucks, I know.
Yeah. Not everyone, though.
Personally I think that GitHub’s culture is incredibly toxic. Only recently have there been tools added to allow repository owners to control discussions in their own issues and pull requests. Before that, if your issue got deep linked from Reddit you’d get hundreds of drive by comments saying all sorts of horrible and misinformed things.
I think we’re starting to see a push back from this GitHub/Slack culture at last back to open, federated protocols like SMTP and plain git. Time will tell. Certainly there’s nothing stopping a project from moving to {git,lists}.sr.ht, mirroring their repo on GitHub, and accepting patches via mailing list. Eventually people will realise that this means a lower volume of contributions but with a much higher signal to noise ratio, which is a trade-off some will be happy to make.
It’s not like you used to have levers for mailing lists, though, that would stop marc.org from archiving them or stop people from linking those marc.org (or kernel.org) threads. And drive-bys happened from that, too. I don’t think I’m disputing your larger point. Just saying that it’s really not related to the message transfer medium, at least as regards toxicity.
Sure, I totally agree with you! Drive-bys happen on any platform. The difference is that (at least until recently) on GitHub you had basically zero control. Most people aren’t going to sign up to a mailing list to send an email. The barrier to sending an email to a mailing list is higher than the barrier to leaving a comment on GitHub. That has advantages and disadvantages. Drive-by contributions and drive-by toxicity are both lessened. It’s a trade-off I think.
I guess I wasn’t considering a mailing list subscription as being meaningfully different than registering for a github account. But if you’ve already got a github account, that makes sense as a lower barrier.
Matrix allows sending in the clear, so I suppose this has the “eventually it will leak” property that the OP discussed?
(A separate issue: I gave up on Matrix because its e2e functionality was too hard to use with multiple clients)
and across UA versions. When I still used it I got hit when I realized it derived the key using the browser user agent, so when OpenBSD changed how the browser presented itself I was suddenly not able to read old conversations :)
Oh! I didn’t know that!
Functionality is literally irrelevant, because the premise is that we’re talking about secure communications, in cases where the secrecy actually matters.
Of course if security doesn’t matter then Signal is a limited tool, you can communicate in Slack/a shared google doc or in a public Markdown document hosted on Cloudflare at that point.
Signal is the state of the art in secure communications, because even though the project is heavily driven by Moxie, you don’t actually need to trust him. The Signal protocol is open and it’s basically the only one on the planet that goes out of it’s way to minimize server-side information storage and metadata. The phone number requirement is also explicitly a good design choice in this case: as a consequence Signal does not store your contact graph - that is kept on your phone in your contact store. The alternative would be that either users can’t find each other (defeating the point of a secure messaging tool) or that Signal would have to store the contact graph of every user - which is a way more invasive step than learning your phone number.
Of course you must trust Moxie. A lot of the Signal privacy features is that you trust them not to store certain data that they have access to. The protocol allows for the data not to be stored, but it gives no guarantees. Moxie also makes the only client you can use to communicate with his servers, and you can’t build them yourself, at least not without jumping hoops.
The phone number issue is what’s keeping me away from Signal. It’s viral, in that everyone who has Signal will start using Signal to communicate with me, since the app indicates that they can. That makes it difficult to get out of Signal when it becomes too popular. I know many people that cannot get rid of WhatsApp anymore, since they still need it for a small group, but cannot get rid of the larger group because their phone number is their ID, and you’re either on WhatsApp completely or you’re not. Signal is no different.
And how can you see that a phone number is able to receive your Signal messages? You have to ask the Signal server somehow, which means that Signal then is able to make the contact graph you’re telling me Signal doesn’t have. They can also add your non-Signal friends to the graph, since you ask about their numbers too. Maybe you’re right and Moxie does indeed not store this information, but you cannot know for sure.
What happens when Moxie ends up under a bus, and Signal is bought by Facebook/Google/Microsoft/Apple and they suddenly start storing all this metadata?
Signal is a 501c3 non-profit foundation in the US, Moxie does not control it nor able to sell it. In theory every organization can turn evil but there is still a big difference between non-profits who are legally not allowed to do certain things vs corporations who are legally required to serve their shareholders, mostly by seeking to turn a profit.
There are two points here that I’d like to make, one broader and one specific. In a general sense, Signal does not implement a feature until they can figure out how to do that securely and with leaking as little information as possible. This has been the pattern for basically almost every feature that Signal has. Specifically, phone numbers are the same: The Signal app just sends a cryptographically hashed, truncated version of phone numbers in your address book to the server, and the server responds with the list of hashes that are signal users. This means that Signal on the server side knows if any one person is a Signal user, but not their contact graph.
Every organization can also be bought by an evil one. Facebook bought WhatsApp, remember?
These truncated hashes can still be stored server-side, and be used to make graphs. With enough collected data, a lot of these truncated hashes can be reversed. Now I don’t think Signal currently stores this data, let alone do data analysis. But Facebook probably would, given the chance.
WhatsApp was a for-profit company, 501(c)3 work under quite different conditions. Not saying they can’t be taken over, but this argument doesn’t cut it.
No, it’s an absolutely terrible choice, just like it is a terrible choice for ‘two factor authentication’
Oh but Signal users can always meet in person to re-verify keys, which would prevent any sim swap attack from working? No, this (overwhelmingly) doesn’t happen. In an era where lots of people change phones every ~1-2yr, it’s super easy to ignore the warning because 99% of the time it’s a false positive.
This is a solved problem. I mean, how do you think you got the phone numbers for your contacts in the first place? You probably asked them, and they probably gave it to you. Done.
Careful there… you can’t say bad things about electron in here….
Linking to a google search is both pedantic and unhelpful. Pedantic because it means that you care more about telling your readers “Why don’t you just figure it out” instead of providing the information directly (and you even took the extra time to make it a link!), and unhelpful because google search is now heavily impacted by previous user searches, localisation, etc…
For me, a french hobbyist using duckduckgo on a daily basis, LARP is an Algorithm and Problem solving software. I’m still not sure what the author mean here, but it definitely pissed me off.
I thought about Live Action Role Playing, and interpreted LARP Security as a snobby way of saying “Security Theatre”. It’s not a term I would use myself.
It’s probably just a setup for this zinger:
Makes sense indeed. My point about linking to a google search rather than giving a real definition still stands though.
Phrases like that are one reason I suggested the “rant” tag for this post. It comes across as pretty hostile to me.
I wasn’t sure what the author meant by this either.
The dozen person company I work at uses PGP to encrypt their emails. It works fine, no problems. Somewhat better security for basically no work. No, it won’t thwart the NSA, so what? I don’t get what the fuss is about.
“PGP is easy to use, you just have to never make any mistakes, ever.”
– https://www.kalzumeus.com/about/
(via this thread: https://twitter.com/patio11/status/1230372274257006592)
That’s funny. Colin Percival’s reply is icing on the cake:
“I talk to a lot of security people, and this has never happened to me. I guess people trust you more than they trust me?”
I disagree with this post.
That’s not a bad thing. If the person you are communicating with doesn’t care about encrypting the conversation then they don’t care about the confidentiality of the conversation and encrypting the messages is a waste of everyone’s time, a waste of computing resources and false security anyway. You cannot have secure communication between parties that do not care about security, in my opinion.
Metadata is not important data. That I sent a message to someone at a particular time is known to someone whether it’s a peer-to-peer service in which case it’s all the links between us, or a centralised service in which case it’s the service provider. Either way, everyone is aware I sent a message, and in both cases someone knows who I sent a message to. You should not trust anyone with genuinely important information that genuinely needs to be encrypted. If it’s a centralised service, then the service provider knows, and you should assume everyone knows.
A centralised messaging service might have end-to-end encrypted messages but it cannot have end-to-end encrypted metadata for the obvious reason that it can tell where it has got messages from and it can tell where the messages it’s relaying are meant to go.
If you are sending anything over the public internet, who sent the message, when it was sent, how big it is, who it is to, etc. is public information. The metadata of your Signal messages is public information.
If I send you a message, and you decrypt it, you are responsible for its confidentiality. I cannot in any technological way prevent you from leaking the message. You can take a screenshot of the programme showing you the message, you can take a photo of your phone or computer screen, you can print it out, you can read it aloud, you can copy it by hand onto some paper. Part of sending someone a message is acknowledging that, regardless of the social norms around confidentiality, there’s no technological means of forcing someone to keep a secret. There are no Unbreakable Vows in the real world.
This really goes back to the first point: if the person you are communicating with doesn’t care about security, you are not communicating securely no matter how many technological safeguards you try to put in place. At some point, they will fuck up, because they do not care. Auto-deleting messages just give a false sense of security. They’re like password expiry dates: all they do is encourage people to write things down. I’d rather my message was archived on their encrypted hard drive than written down on a piece of paper ‘because it will disappear otherwise’.
etc. etc. It just goes on, repeating the same false and misleading statements about how encryption and trust and communication actually work. Ultimately, if you cannot trust the person you are communicating with and you cannot trust that they care about security, then your communications with them are not secure and the only form of ‘security’ you should bother with is the basic TLS level of just ensuring that dragnet surveillance won’t be able to passively eavesdrop on your conversations. But any kind of targeted surveillance will easily compromise your conversation, so don’t give yourself a false sense of security trying to harden yourself against something that you inherently cannot prevent.
First of all, you completely ignored the forward secrecy and key rotation part. That’s kind of the killer, here – in order to support these things, you have to actually negotiate a key, which pretty much wrecks the whole “regular email with an encrypted payload” thing. The rest of this discussion is almost completely irrelevant compared to the lack of forward secrecy.
But anyway…
There are actual solutions for the metadata problem, particularly Mix networks like the recently-brought-to-my-attention Loopix system and the older MixMinion. I would rather use Loopix than Signal, all else being equal, because metadata is really useful. Tell me who you’re with, and I can pretty much figure out who you are even if I can’t actually read your email. And that’s before we start asking uncomfortable questions about the Subject line.
I’d rather use a mixnet than Signal, but I’d still rather use Signal than PGP, for mostly the same reason that the author gave, and the reason that you didn’t address. The UX for the widely-available PGP email systems is really, really bad because they fail open. A well-designed security system should fail by erroring out, not by falling back to plaintext. If someone makes a mistake while using the encryption system, it is not proof that they don’t care. It is, at worst, proof of ignorance, and everyone is ignorant when they start out. It is easy to accidentally send an unencrypted email in most encryption-enabled clients. It is almost impossible to accidentally use email instead of Signal, specifically because the applications are completely separate.
This is the same reason why I would run a Tor hidden service, even if I myself don’t care about my own anonymity, if I expected a lot of anonymity-sensitive users. If someone tries to use an onion site without actually using Tor, the site will not load, and they will immediately notice and correct the mistake. Even if your site offers both an onion address alongside a clearnet address, it at least acts as double-confirmation; someone who wants to use Tor would have to simultaneously use a clearnet browser and a clearnet address in order to accidentally leak their identity to me.
I agree with you that auto-deleting messages are dumb, but seriously, the lack of forward secrecy ought to be enough. The part about making it harder to accidentally send a plaintext message is just a UX improvement that you take at the same time that you do the forward secrecy fix.
I’m not going to nitpick every single line of the post.
I honestly don’t think that forward secrecy is that important.
I think this is really just security through obscurity. It’s not a bad thing, of course. But it seems to me that it really just protects metadata from dragnet surveillance in the same way that SMTPS/IMAPS protects data from dragnet surveillance. It doesn’t really prevent any targeted attacks.
I’ve never had any issues with the UI for PGP email. I see the same complaints about PGP all the time and frankly people seem to want a magical solution to a problem that doesn’t have any solution: public key encryption is a fairly complicated thing and users need to understand it to use it. You can’t get around that.
I want it to be easy to send an unencrypted email, because I want to send lots of unencrypted emails. I don’t want all my emails to be encrypted. Most of them don’t need to be encrypted. Emails I’m sending to public mailing lists don’t need to be and shouldn’t be encrypted, as that just provides a false sense of security.
I don’t want the applications to be completely separate. I think instant messaging (which is what Signal is and is for) should be separate from email, and for instant messaging things like always-on encryption probably does make sense. But email isn’t just for messaging privately to people you know, it’s used for loads of things where encryption isn’t appropriate, like mailing lists.
I know some people think that mailing lists should be replaced with online forums or reddit or something, but I personally like mailing lists a lot more than reddit or even than old phpBB-style forums.
To me this is just optimising for stupid. If someone opens your website in Firefox instead of the Tor browser, they obviously don’t really care about their anonymity that much. If they really were worried they were being tracked by a government entity or something they’d be careful, constantly.
Related: http://mixmaster.sourceforge.net/
Wrong / misleading in more than one way:
It is possible to use unsafe mechanical equipment in a safe way. That doesn’t mean adding safety features is a bad idea.
Yep, people can (and do) still jam the dead man switch, remove the covers etc but on average less people get maimed and killed now than before.
If you can prove I sent a message to someone I was supposed to not tell I’m in trouble even if you cannot tell the exact contents of the message.
Emphasis mine. This does not necessarily follow. It depends on your threat model.
Again: Threat model.
I had a hard time figuring out what you meant here. One interpretation that makes it correct is if by “over the public internet” you mean sending it by mail. You can do a lot to make sure this isn’t public information.
No. I’m not the biggest Signal fan (not open, Signal fans are seriously annoying etc), but let’s stick to the facts:
Signal messages, including metadata, are encrypted in transit and discarded afterwards.
Just like my 20-year-old Hotmail messages aren’t public information, neither is the metadata from my Signal messages.
The amount of stuff that can be worked out just from metadata might surprise you. And the fact that modern secure messaging systems are working to make it harder for third parties to snoop on even the graph of who talks to whom should be a hint that maybe it is an important part of security.
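As a toy illustration of how much falls out of metadata alone, here is a sketch that rebuilds a who-talks-to-whom graph from nothing but the From/To headers of an mbox file (the path is a placeholder); no message body is ever read, and any relay on the path sees the same thing:

```python
# Toy sketch: reconstruct a communication graph from headers alone.
import mailbox
from collections import Counter

edges = Counter()
for msg in mailbox.mbox("INBOX.mbox"):  # path is a placeholder
    sender = (msg.get("From") or "").strip()
    for rcpt in (msg.get("To") or "").split(","):
        if rcpt.strip():
            edges[(sender, rcpt.strip())] += 1

# The ten heaviest edges already say a lot about who matters to whom.
for (src, dst), n in edges.most_common(10):
    print(f"{src} -> {dst}: {n} messages")
```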
There’s no reasonable expectation of privacy with metadata. It doesn’t matter what data can be worked out AT ALL. That’s not what’s in question.
If I send you a letter, that I sent you a letter is not private. The postman knows, the government is allowed to know, it might as well be public. What I sent you is private. Nobody is allowed to look in. We should be turning those legal privacies into technical, mathematical privacy. We shouldn’t be inventing new types of privacy and just assuming, without any actual debate, that privacy maximalism is philosophically correct.
Only if you put your name on the outside of the envelope, surely? Otherwise, yes, it’s completely private who and where the letter originated from because …
Most of the author’s objection to encrypted email appears to stem from integration issues.
As an example, look at age, a tool the author proposes as an alternative. Were it integrated with an email client, that email client would still have the ability to send unencrypted attachments.

Getting serious about usability of integrated encryption is the real answer, IMO, not abandoning the ability to encrypt your emails. A more opinionated cryptographic message syntax, a default refusal to send plaintext messages to contacts for which the email client has a key, and a refusal to reply in plaintext to an encrypted email would address the concerns articulated in the post (at least the ones that aren’t bugs) without driving people towards a proprietary messaging system that requires them to share phone numbers before they can use it.
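A hedged sketch of such a policy — the function and its arguments are made up for illustration, not any real client’s API:

```python
# Hypothetical policy check for the behaviour described above.
def may_send_plaintext(recipient: str, keyring: set,
                       replying_to_encrypted: bool) -> bool:
    if replying_to_encrypted:
        return False  # never downgrade an encrypted thread to plaintext
    if recipient in keyring:
        return False  # a key is on file, so encryption is mandatory
    return True       # genuinely no way to encrypt to this contact

assert may_send_plaintext("bob@example.com", set(), False)
assert not may_send_plaintext("bob@example.com", {"bob@example.com"}, False)
assert not may_send_plaintext("bob@example.com", set(), True)
```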
Another of the author’s points is that only the body of the email is encrypted, while the envelope remains plaintext. So no matter how hard you try to enforce encryption, the metadata will still appear in clear text, because that is how the protocol works.
While not a big deal most of the time (everyone knows Alice and Bob have been chatting for a long time, but nobody knows the topic), it can be an issue on its own; for example, when a journalist talks every day to the same contact, you might assume that contact is their secret source.
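You can see the plaintext envelope directly by building a message with Python’s standard library; everything set as a header below travels in the clear, with the body standing in for the encrypted part (addresses and subject are placeholders):

```python
# The envelope of an encrypted email stays in the clear; only the body
# part would be ciphertext.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "meet at the usual place"  # visible to every relay
msg.set_content("-----BEGIN PGP MESSAGE-----\n...\n-----END PGP MESSAGE-----")
print(msg)  # the headers print in plaintext, exactly as a relay sees them
```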
An application like Signal hides the sender information so the central server only knows who the destination is, not the sender. There is still some metadata, however, like the source IP address, albeit more difficult to correlate with the sender.
The issue with this post is that the author praises an application using central servers controlled by a single company as a way to provide fully secure channels. It would have been more serious, IMO, to discuss protocols rather than apps here, especially peer-to-peer or decentralized technologies like the Tox protocol, or even IRC with SSL.
Tox is a nice protocol, but it is very bound to the network (NAT, UDP…), so it is harder to implement and tinker with. (Yeah, http://ratox.2f30.org/ rocks, we know that already.)
I am curious about what https://messaginglayersecurity.rocks/ can bring to all this. It protects end users against the man in the middle, which means you can have untrusted relays carry still-secure messages until they reach the end users.
OTOH, as of today, Tox is probably the most reasonable compromise between security (good), convenience (really not bad, easier than PGP), and availability (a lot of clients out there, and an actual user base).
Regarding chat message systems, the IRC protocol is a good universal endpoint, with a lot of clients and apps available, and this permits having a local crypto-chat-protocol <-> IRC gateway listening on 127.0.0.1, so that the IRC client can connect to the local relay and finally solve this: https://xkcd.com/1810/
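A toy sketch of that gateway idea, with the encrypted side reduced to a stub (a real gateway would speak the full IRC protocol, registration and PING/PONG included):

```python
# Toy sketch only: accept a single local IRC client and hand each PRIVMSG
# payload to a stand-in for the encrypted transport.
import socket

def encrypt_and_forward(target: str, text: str) -> None:
    # Stand-in for the crypto-chat-protocol side of the gateway.
    print(f"[gateway] would encrypt and forward to {target}: {text}")

with socket.create_server(("127.0.0.1", 6667)) as srv:
    conn, _ = srv.accept()
    with conn:
        buf = b""
        while data := conn.recv(4096):
            buf += data
            while b"\r\n" in buf:
                line, buf = buf.split(b"\r\n", 1)
                parts = line.decode(errors="replace").split(" ", 2)
                if parts[0].upper() == "PRIVMSG" and len(parts) == 3:
                    encrypt_and_forward(parts[1], parts[2].lstrip(":"))
```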
Thanks for the tox link. That one wasn’t on my radar. Can you summarize (or link to a summary of) why it might be preferable to matrix?
For email, I think I consider the plaintext envelope to be tractable. Much like snail mail, certain opsec efforts are required to prevent an observer from knowing that I contacted someone. On email, it’d include using a special-purpose mailbox that’s not associated with my meatspace identity. On snail mail, it’d be a PO box or similar.
And FFS, don’t use BCCs if you send encrypted email to multiple recipients. (I have personally written a tool to spin through a mailbox and uncloak those.) And if you can stomach it, stop encrypting your sent mail to yourself. As with BCCs, you’d need to audit your MUA to make sure that’s safe, and it’s not safe on any MUA I’ve ever checked personally.
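For the curious, the uncloaking is possible because a PGP message embeds the key ID of every recipient it was encrypted to (unless the sender used hidden recipients). A rough sketch of the idea, not the actual tool, shelling out to gpg (the mbox path is a placeholder):

```python
# Scan a mailbox and print the recipient key IDs embedded in each PGP
# message, which unmask BCC'd recipients. Assumes gpg is installed.
import mailbox
import re
import subprocess

for msg in mailbox.mbox("INBOX.mbox"):
    body = msg.get_payload(decode=True) or b""
    if b"BEGIN PGP MESSAGE" not in body:
        continue
    packets = subprocess.run(
        ["gpg", "--list-packets"], input=body,
        capture_output=True).stdout.decode(errors="replace")
    keyids = re.findall(r"keyid ([0-9A-Fa-f]+)", packets)
    print(msg.get("Message-Id"), "encrypted to key IDs:", sorted(set(keyids)))
```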
I agree that this would be a much more interesting post if it was talking at the protocol level instead of product level. But I also do think that signed and encrypted email is salvageable through some combination of education and integration improvements. If we stopped thinking of handing out an email address as being safer than handing out a postal address, the metadata problem shrinks significantly, IMO.
FWIW, I just noticed the following amusing snippet on openbsd.org:
http://www.openbsd.org/security.html#reporting
I cannot say that I disagree with the statement or the sentiment. We’re supposed to be PGP aware in NetBSD, but from the looks of it, most folk do seem to find it as pointless as the author of the above statement.
Computers are already such a mess to begin with (die tampering, UEFI bugs, AMT, speculative execution, Apple/Google closed ecosystems, massive OS attack surfaces, etc.) that sounding the alarm about something in the application layer as ubiquitous as email seems more like attention whoring than constructive advice. Then he/she goes on to recommend some proprietary Rube Goldberg machine as a replacement. No thank you.
If I had something sensitive to send, I’d PGP it, if only because this person says not to.
There’s a lot to like and a lot to dislike in this post. But in any case, the most important part to me is the link to age. I’ve heard a lot of complaints about gpg lately, and agree with a lot of them, but my question has always been, “so what’s a good alternative for signing and encryption then?”
Combine age with minisign, and I think I may have finally found an alternative to gpg.
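For the record, a minimal sketch of that combination, driving both CLIs from Python (the age recipient string and file name are placeholders):

```python
# Sign a release artifact with minisign, then encrypt it with age.
# Assumes both CLIs are installed and a minisign key pair already exists.
import subprocess

ARTIFACT = "release.tar.gz"
AGE_RECIPIENT = "age1..."  # placeholder age public key

# Sign with minisign (writes release.tar.gz.minisig next to the artifact).
subprocess.run(["minisign", "-Sm", ARTIFACT], check=True)

# Encrypt the artifact for the recipient with age.
subprocess.run(["age", "-r", AGE_RECIPIENT, "-o", ARTIFACT + ".age", ARTIFACT],
               check=True)
```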
What I miss in both minisign and age is key handling. Not that GPG’s keychain is anything to be proud of, but minisign just expects you to handle the keys yourself, while age conveniently integrates with a Microsoft-acquired cloud service which was never even intended as a key store (GitHub) to do key discovery, and trusts it blindly.
I find that for release signing, you have to handle the keys yourself anyway, and the small keys used by signify/minisign are definitely easier to handle.
PGP is dead and shouldn’t be used; however, I haven’t found a great alternative to sign/encrypt my stuff.
How do I prove I am me speaking without a PGP signed message? How do I encrypt blobs of text?
Any recommendations?
But https://dot.kde.org/2020/02/18/gpg4kde-gpg4win-approved-transmission-processing-national-classified-information 😋
Also, this: GnuPG can now be used to perform notarial acts in the State of Washington.
Woah! So I guess they really are faced with no serious alternatives.
So is PGP really dead afterall? I mean, the government uses it. ;)
How is PGP dead? The author mentions it, links to one vulnerability, and moves on.
It seems to me that if I have someone’s public key, and they have mine, I can use GPG to encrypt messages and send them back and forth. Aside from this being complicated, if followed on both sides it should work.
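For reference, that round trip really is small once the keys are exchanged; a sketch driving gpg over stdin/stdout (the address is a placeholder, and the recipient’s public key must already be imported and trusted locally):

```python
# Encrypt for a recipient, then decrypt, assuming keys were exchanged
# beforehand. gpg reads stdin and writes stdout when given no file args.
import subprocess

ciphertext = subprocess.run(
    ["gpg", "--encrypt", "--armor", "--recipient", "bob@example.com"],
    input=b"meet at noon", capture_output=True, check=True).stdout

plaintext = subprocess.run(
    ["gpg", "--decrypt"], input=ciphertext,
    capture_output=True, check=True).stdout
print(plaintext.decode())
```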
From the same author, and discussion here:
https://lobste.rs/s/7rkfsu/pgp_problem
Thanks for linking this. I wish I was around to contribute to the discussion back then.
I don’t think the author should discount PGP, if those are his complaints. I think his use case is very different from mine.
For regular stuff, encrypted email means encryption in transit between me and my email server. That’s enough. When I have something important, I use PGP. When I need to sign a GitHub commit, I use PGP.
It’s hard to both send and read an encrypted email. I do it rarely, but when I do, I trust that PGP works.
The author’s concerns may be valid for him, but they conflate privacy and encryption. I don’t need forward secrecy for this purpose. I’d rather have a shared key than manage session keys for all eternity. That’s outside the scope of PGP, although I could use PGP to negotiate session keys if I really wanted to.
I don’t care that PGP leaks metadata, the intent is to identify a message to me, so having the subject visible is a feature not a bug. If I wanted anonymity I would use a fake subject and distribute it in ways that preserve my identity.
There are many flaws, but for situations where I just must have encryption, PGP works. I have set it up with desktop clients before and it works well with stuff like Outlook. But since I mainly use webmail, I don’t set it up because it seems kind of pointless to trust a third party with my private key. That would make me have to have a super private key or something.
I think OP’s beef with PGP in general is that it’s a huge ball of mud that tries to do everything.
For every use case of PGP, there are alternatives that are more modern and more debuggable.
My hope is that modern tools like https://age-encryption.org/ will catch on.
I don’t think you can use Age for the first use case, “prove I am me”, as Age will only allow you to encrypt. Age will also not help you in proving that a key belongs to a certain user.
On one hand, <mind_blown.gif>. On the other hand, acquiring a phone number anonymously is kind of a bottleneck. US-focused tutorials suggest jumping through a set of hoops to get a Google Voice phone number anonymously (and using Google’s services for anonymity seems… idk, kind of risky?). The set of hoops is manageable, but there are a lot of tricky parts. I haven’t seen an EU-based guide to an anonymous phone number other than “get somebody else to register the number, dunno, pay a homeless person or something ¯\_(ツ)_/¯”.
For the contact-list piggybacking, an email address would work just the same, would allow more anonymous IDs, easily replacing IDs or using multiple IDs to compartmentalize, wouldn’t be tied to a particular country, etc.
I thought for a moment it would be less secure than SMS as identity confirmation, but then SMS is not even considered good 2FA, and a reliable confirmation code would require a more involved protocol, and the same protocol could be used over email (a sketch follows below).
Is there something I’m missing here?
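For concreteness, a minimal sketch of such a confirmation-code handshake; nothing in it depends on the channel, so deliver() (a stub here) could equally be SMS or SMTP:

```python
# A channel-agnostic confirmation-code handshake. In a real system the
# code would be stored server-side with an expiry, not returned like this.
import secrets

def deliver(address: str, code: str) -> None:
    print(f"(would send {code} to {address})")  # stand-in for SMTP or SMS

def start_verification(address: str) -> str:
    code = f"{secrets.randbelow(10**6):06d}"  # 6-digit one-time code
    deliver(address, code)
    return code

def check(expected: str, submitted: str) -> bool:
    # Constant-time comparison to avoid trivial timing leaks.
    return secrets.compare_digest(expected, submitted)

issued = start_verification("user@example.com")
print(check(issued, issued))
```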
I agree that GPG/PGP is an overly complex protocol that should be replaced by something much nicer (and not directly mixed with the email workflow).
But EFAIL is NOT a GPG/PGP failure. It is an email client failure: clients naively open all URLs that are presented to them.
From https://www.eff.org/deeplinks/2018/05/pgp-and-efail-frequently-asked-questions#pgp_broken
and further (https://efail.de/):
(my emphasis)
I agree that it is PGP/GPG’s role to be a strong protocol and protect against these kinds of attacks. Can that happen if you also sign the email after encryption?
But sure, crypto footguns are not something we want to be directly exposed to.