This list of deficiencies reads like a slightly obscured, writ-large version of “don’t roll your own crypto”.
Ah, but don’t you know, they “aren’t” rolling their own crypto, per their FAQ:

Is Session rolling its own cryptography?
No, Session does not roll its own cryptography. Session uses Libsodium, a highly tested, widely used, and highly-regarded crypto library. Libsodium is completely open-source.
heavily rolls eyes
I like libsodium. It’s a great library of cryptography algorithms.
It doesn’t come, batteries included, with a full protocol for end-to-end encryption built-in. And so anyone who uses libsodium for e2ee is necessarily rolling their own crypto.
I’ve only heard of Session in passing over the years. This result is not surprising.
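To make the gap concrete, here is a minimal sketch of the decisions libsodium leaves to you even after it hands you a correct primitive. The `Sodium` interface below is a placeholder, not a real binding; `crypto_box_easy` itself is a real libsodium primitive (authenticated public-key encryption), but everything else here is invented for illustration.

```kotlin
// Placeholder interface standing in for a libsodium binding; crypto_box_easy
// is a real libsodium primitive, but this Kotlin surface is illustrative.
interface Sodium {
    fun cryptoBoxEasy(msg: ByteArray, nonce: ByteArray,
                      theirPublicKey: ByteArray, mySecretKey: ByteArray): ByteArray
}

// Everything below is protocol design, and libsodium answers none of it.
fun sendMessage(sodium: Sodium, msg: ByteArray, nonce: ByteArray,
                theirPublicKey: ByteArray, mySecretKey: ByteArray): ByteArray {
    // 1. How did we learn theirPublicKey? (trust establishment, key verification)
    // 2. Is it long-term or ephemeral? (forward secrecy, or none)
    // 3. Who guarantees this nonce is never reused under this key? (nonce management)
    // 4. What stops an attacker replaying this ciphertext later? (replay protection)
    // 5. How do key rotation, multi-device, and groups work? (session state)
    return sodium.cryptoBoxEasy(msg, nonce, theirPublicKey, mySecretKey)
}
```

Answering those five questions, consistently and safely, is the part people mean when they say you are rolling your own crypto.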
i initially thought you were being overzealous, until i read the first paragraph of the article
using libsodium to implement the documented signal protocol, i think would be fine. it does have risks, and should have some review before relying on it, but i wouldn’t really call that “rolling your own crypto”. and having a clean-room re-implementation would probably be good for the ecosystem
…but that’s not what they’re doing. they’re implementing their own protocol, and a cursory glance at their reasoning suggests that they want a decentralized messenger and care about security as an afterthought. which would be fine if they branded it that way, and not as an alternative to signal
This may be a little off topic, but I dislike the phrase “don’t roll your own crypto”.
“Don’t roll your own crypto” is, in itself, a very ambiguous term.
I’ve seen the phrase thrown around both at people who just use GnuTLS in C and at people implementing some hash algorithm themselves. One of these I find very valid, while the other is an area where I would just use libsodium.
There are so many layers in crypto where you can apply the phrase that I find that refuting (their claims with) this phrase is in itself meaningless unless you know what the authors intended. In this case it may as well be about claims regarding resistance against certain side-channel attacks.
I’ve always asked myself how I can identify the moment I’ve arrived at a skill level where I’m allowed to “roll my own crypto”, under each possible interpretation people use.
edit: added intended missing meaning in (…)
Absolutely. And the advice, taken to its logical extreme, would result in zero cryptography ever being developed.
It’s supposed to be along the same lines as advice given from lawyers to their kids that are considering a career in law. They say, “Don’t be a lawyer,” and if their kid isn’t dissuaded and can argue why they’d succeed as a lawyer, then maybe they should be one.
“Don’t roll your own crypto” is along the same lines. I currently do it professionally, but I also have a lot of oversight into the work I do to ensure it’s correct. Detailed specifications, machine-assisted proofs, peer review, you name it. I never start with code; I always start with “here’s why I’m doing this at all” (which includes closely related ideas and why they do not work for the problem I’m solving) and a threat model for my specific solution.
It can take months, or even years, to get a new cryptography design vetted and released with the appropriate amount of safety.
When it comes to cryptography protocol design, the greatest adversary is often your own ego.
I always read the advice as an incomplete sentence, which ends with “unless you know what you’re doing”, which is coincidentally like other safety advice, right? “This thing that you’re about to do is risky and dangerous unless you know how to do it, and in some cases, even if you do. Avoid doing it if you can. Exercise caution and care otherwise.” No?
I always viewed it as “don’t ship your own” - feel free to roll your own to play around with, but be cautious and get some review before putting it into production.
One piece of advice I’ve heard is: Before trying to build crypto, learn how to break crypto. Cryptopals is a good resource for that. It’s mindbending to learn about all the weird ways that crypto can fall apart.
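In that spirit, a taste of the genre: a crude single-byte-XOR breaker of the kind Cryptopals has you write early on. The scoring heuristic is deliberately naive; this is a warm-up sketch, not a reference solution.

```kotlin
// Recover a single-byte XOR key by scoring candidate plaintexts for
// English-ness (Cryptopals set 1 has a challenge in this shape).
fun breakSingleByteXor(ct: ByteArray): Pair<Int, String> =
    (0..255).map { key ->
        val pt = ct.map { ((it.toInt() and 0xFF) xor key).toChar() }.joinToString("")
        // crude frequency score: count the most common English characters
        val score = pt.count { it in "etaoin shrdlu" }
        Triple(key, pt, score)
    }.maxByOrNull { it.third }!!.let { Pair(it.first, it.second) }

fun main() {
    val secret = "attack at dawn".map { (it.code xor 0x42).toByte() }.toByteArray()
    val (key, pt) = breakSingleByteXor(secret)
    println("key=0x%02x plaintext=%s".format(key, pt)) // key=0x42, original text
}
```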
At least one of my online friends agrees.
I think it’s more like, don’t roll your own crypto: don’t do it by yourself, collaborate with other experts, get lots of review from many angles.
I remember many moons ago that an expert in security and crypto actually published a list of cryptographic choices that should be your default. I wonder if this rings a bell for someone; it would be nice to recover that document, publish it here, and see what this community would say in terms of freshening it up.
I might be wrong, but I think that in the beginning the phrase “don’t roll your own crypto” meant “do not try to come up with cryptographic algorithms on your own; use something tested, built by someone who knows what they are doing”. I think the best way to describe what Soatok is putting forward is “don’t skip the default practices of security” or “don’t wave away cryptographic defaults in the name of a watered-down threat model”.
But maybe I am too off?
You might be thinking of “cryptographic right answers” from @tptacek (2018 version, 2024 post-quantum version)
YES!!! You found it! Thank you @zimpenfish!
There’s also What To Use Instead of PGP from the very blog this Lobsters thread is about.
It was also posted on Lobsters.
Maybe I’m paranoid, but it reads to me like a plausibly deniable honeypot.
I think that’s a very reasonable concern. Particularly in light of the very first issue @soatok cites: the removal of PFS from the protocol. I’m on record as being skeptical of the “just use Signal” advice that seems frequently deployed as a thought-terminating measure in discussions about encrypted communication, but if I wanted to make something that was like Signal but really a deniable honeypot, Session makes the same choices I would. It seems like a downgrade from Signal in every important way.
Unrelated: the “Ad-blocker not detected” message at the bottom of the post made me laugh quite a bit. I use some tracker-blocking extensions (and browser configs) but I don’t run a real ad blocker in this browser. But many sites give me an “ad blocker detected” message and claim I need to turn off my non-existent ad blocker to support them. This site telling me I’m not running enough of one struck me as very funny.
Sure, it’s plausible.
But I find that basically every time Soatok (or any security researcher) exposes an application that advertises itself as “secure/private” on the box for its glaringly bad practices, people (myself included) immediately go to “this is so stupid it has to be a honeypot”.
Are they all honeypots? (Genuinely, maybe yes.) Or is it just stupidity?
i used to think that people sending a knockoff paypal payment link from a TLD i’ve never heard of was an obvious scam
then i tried to buy a monitor from hewlett packard via customer support, and i found out who these scammers are imitating
I would posit stupidity. Previous honeypots that weren’t takeovers of server operators have been somewhat targeted: Anom required a buy-in of the phone (as evidence you’re a criminal), Playpen required you be a pedophile (or at least, hanging out online with pedophiles) to be caught in the net, Hansa was a drug market, etc. Creating a general-purpose backdoored app to en masse catch criminals seems to cast quite a wide net when the same arrest portfolio can probably be gathered by doing the same thing to Twitter DMs with a backdoor and a secret court order. I wouldn’t put it past law enforcement but it seems like a mega hassle vs. targeted honeypots and backdoors.
If it were a honeypot (or backdoor), it’s certainly too much hassle for legitimate law enforcement purposes like the ones you described. You’d want this for someone you couldn’t reach through normal court (even a secret rubberstamp like FISA) channels.
This would be more like something you’d use for getting information from a group operating under some legal regime that’s not friendly to you gathering that information. Getting it in place, then convincing the group you were interested in to migrate off, say, Telegram, might be one approach.
The interesting thing in this case (IMO) is that the fork removes things that were:
Already implemented by an easy-to-reuse library
Not costly in terms of performance or cognitive overhead
Seemingly beneficial to the stated goals of their product
and without articulating the upside to their removal very persuasively. Sure, stupidity is always a possibility. But it feels more like they want to add some features that they don’t want to talk about. On the less-nefarious end of that spectrum, I could imagine that it is as simple as supporting devices that don’t work with the upstream, but that they don’t want to discuss in public. It’s also easy to imagine wanting to support some middle scanner-type box on a corporate network that PFS would break. But it could also be something like being able to read messages from a device where you can maintain passive traffic capture/analysis but can’t (or would prefer not to) maintain an ongoing endpoint compromise without detection. e.g. You have compromised a foreign telco and can pull long term key material off a device when its owner stays in your hotel, but you can’t or won’t leave anything running on there because the risk of detection would then be too high.
That’s just all speculation about when it might serve someone with an effectively unlimited budget to do something like this. Stupidity is certainly more common than such scenarios.
Hence “plausibly deniable.”
Only the first bit could charitably be attributed to “don’t roll your own crypto”. The rest was just obtuse idiocy or malevolence. Calling the library-provided “encrypt this chunk with symmetric encryption using this key” function and then providing a public key… that’s not about rolling your own crypto.
The author mentions the encrypt function and links to the wrong overload of that function. The one actually being called is the latter overload, of the same name, which takes the public key but then also uses an ephemeral, locally generated private key to compute a shared secret. This shared secret, computable only by the request creator and the Service Node, is then passed into the function that the author is linking to.
‘Our code is so obfuscated that working out how it works’ is not the flex you think it is. If you’re writing a crypto library, any numpty should be able to understand the control flow because, if they can’t, then the people who can understand the underlying crypto won’t be able to review it.
If Soatok, who has read and reviewed a load of crypto libraries, can’t read your code and understand it, this tells me that one of two things is true:
You have intentionally obfuscated your code to hide something malicious.
Your code is unintentionally so hard to follow that the people who understand the underlying crypto cannot effectively review it.
Neither of these engenders trust.
I don’t think using function overloading to decide between “cryptographic fuckup”, “implementation fuckup”, and “good implementation” is ever a good choice.
The category you’re looking for in this case is actually at the intersection of the two, i.e. “implementation fuckup”.
Good catch, updated it.
That was tongue-in-cheek, really, but I guess the fact that it’s also reads as a good catch is, uh, a little worrying in this context, too? :-D
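For readers following along, here is a stand-in sketch of the overload situation; the signatures mirror the parameter names discussed in the defense below, but the bodies are placeholders, not Session’s code.

```kotlin
// Two overloads distinguished only by the static type of the key argument.
object Encryptor {
    fun encrypt(plaintext: ByteArray, symmetricKey: ByteArray): ByteArray {
        TODO("AES-GCM under symmetricKey (stand-in body)")
    }
    fun encrypt(plaintext: ByteArray, hexEncodedX25519PublicKey: String): ByteArray {
        TODO("derive an ephemeral shared secret, then call the overload above")
    }
}

fun demo(plaintext: ByteArray, recipientPk: String) {
    // Which function runs is decided by the static type of the second
    // argument. Trivial for the compiler; easy for a reviewer to misread,
    // especially outside an IDE where inferred types aren't displayed.
    Encryptor.encrypt(plaintext, recipientPk) // String -> public-key overload
}
```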
I think it requires much more work to say where the truth lies, and I would caution against rushing to judgment…
On this particular point - they say in their response:
The code for both encrypt functions, which are side-by-side in the code, can be viewed here
and both encrypt() functions are in the same file, right next to each other, and the parameter names seem meaningful: symmetricKey vs. hexEncodedX25519PublicKey, and they have different data types. But I would not call it obfuscated – it could be written better, of course, everything could be, but realizing which function is called should be easy here.
I am not sure whether getsession.org is worth using. (BTW: the name Session is terribly ambiguous.) But completely dismissing it on the basis of some weak algorithms or bugs? Algorithms that are OK today may (will) become weak tomorrow. Bugs might also be discovered later. They can even be introduced later, and software that was perfect can become unusable or even compromised in the next release. So for me this quality is more important in the long term:
Is the software able to evolve and able to keep its cryptography up to date? (not just today, but in the next five or ten years)
Ed25519 Keypairs generated from their KeyPairUtilities object only have 128 bits of entropy, rather than the ~253 bits (after clamping) you’d expect from an Ed25519 seed.
Oh that sounds…bad…
What this code is doing (after decryption):
Grab the public key from the payload.
Grab the signature from the payload.
Verify that the signature on the rest of the payload is valid… for the public key that was included in the payload.
…uhh…
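What makes this circular, sketched below: a signature check against a key shipped in the same payload proves only that someone signed it, because an attacker who swaps the body just signs it with their own key and ships that key alongside. This assumes a JDK with Ed25519 support (Java 15+); the payload layout and helper are invented for illustration.

```kotlin
import java.security.KeyFactory
import java.security.Signature
import java.security.spec.X509EncodedKeySpec

// Illustrative payload layout: [public key | signature | body].
// (Assume pubBytes arrives as a DER-encoded SubjectPublicKeyInfo.)
fun verifySelfSigned(payload: ByteArray, keyLen: Int, sigLen: Int): Boolean {
    val pubBytes = payload.copyOfRange(0, keyLen)           // attacker-controlled
    val sig = payload.copyOfRange(keyLen, keyLen + sigLen)  // attacker-controlled
    val body = payload.copyOfRange(keyLen + sigLen, payload.size)
    val pub = KeyFactory.getInstance("Ed25519")
        .generatePublic(X509EncodedKeySpec(pubBytes))
    return Signature.getInstance("Ed25519").run {
        initVerify(pub); update(body); verify(sig)
    } // "true" only means: *someone* signed this. Not: *who you expected*.
}
// The fix is to compare the embedded key against an expected key obtained
// out-of-band (pinned, or derived from the peer's identity) before trusting it.
```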
When encrypting payloads for onion routing, it uses the X25519 public key… as a symmetric key, for AES-GCM.
??? how even.
This is like, the list of things to not do in your crypto! How do you even get this many things in a row wrong.
The reason Session performs Ed25519 key generation this way, with 128-bits of entropy instead of 256-bits, isn’t just for fun. It’s so that Session can use 13-word mnemonic seed phrases instead of 25-word mnemonic seed phrases (called Recovery Passwords in Session). These shorter mnemonic seed phrases are easier for users to write down and save.
“The reason I use 8 character passwords isn’t just for fun. They are easier for me to write down and save.”
Come on. Surely the number of people who would write down a 13-word seed phrase but wouldn’t write down a 25-word seed phrase is negligible. IME the part that requires effort isn’t having to write down a few words, but figuring out how to securely store the phrase after you’ve written it down.
They could at least offer this as an option.
Maybe 13 words can just be remembered, while 25 are too difficult for most people?
However, even if this is the reason, I would expect that people could choose whether they prefer a) a stronger algorithm or b) being able to recover their identity off the top of their head.
Of course, you are delegating that decision to users, and it is kind of a responsibility-dodging attitude… But maybe we should not pretend that one size fits all, or that even the safest bullet-proof system is also fool-proof and easy to use for anyone without any knowledge and discipline.
Remembering secrets is more about frequency of use than about length. You should not expect anyone to remember a recovery passphrase because they are very infrequently used, which is why they are supposed to be written down.
Just imagine you have to cross some state borders… I think there are use cases for it – you cannot take any hardware with you, or if you can, you cannot trust it anymore after passing border control… same with your written notes (maybe you can hide 25 words in your shoe or make little dots in a book to mark the words…). And you are looking for the best available solution based on what you can remember. After passing the border, you want to find a trustworthy computer and recover your identity in order to communicate with your old friends.
Maybe a 13-word mnemonic is such a solution? Maybe not. I would probably choose some reliable hash or PBKDF function that works with variable-length input.
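Back-of-the-envelope numbers for the 13-vs-25-word trade-off discussed above. A Monero-style mnemonic (a 1626-word list with the last word acting as a checksum) would produce exactly the figures in the article; treat that wordlist and checksum convention as assumptions here, given Session’s Loki/Monero heritage.

```kotlin
import kotlin.math.log2

fun main() {
    val bitsPerWord = log2(1626.0)           // ≈ 10.67 bits per word (assumed wordlist)
    val seedBits13 = (13 - 1) * bitsPerWord  // one word assumed to be a checksum
    val seedBits25 = (25 - 1) * bitsPerWord
    println("13 words ≈ %.0f bits of entropy".format(seedBits13)) // ≈ 128
    println("25 words ≈ %.0f bits of entropy".format(seedBits25)) // ≈ 256
    // The quibble upthread is about *how* you would attack this; the
    // observation itself is just that the seed space is 2^128 instead of
    // the ~2^252 you'd expect from a full Ed25519 seed.
}
```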
@pushcx: is this the kind of response that should be merged into kk5ogc? Between the discussion still being active there and the author’s decision not to link the post they were responding to, without explanation, it might be better co-located with that context.
Really not a fan of this merge of the two comment sections. It’s chaotic and harder to understand who’s replying to what.
I started working on the documentation, database model, and UI of story merging on a recent office hours stream. It is my highest priority office hours project but delayed by the demands of the UK OSA.
It’s interesting how their defense for not using PFS comes at the very end and reads like a smokescreen.
It’s also interesting how that same defense doesn’t discuss one of the most obvious reasons you’d want PFS. Suppose you have an adversary who’s almost always passive. That is, they can observe and record traffic to one of your endpoints, perhaps because they are the government that controls the local telco. Suppose further that you occasionally visit that country either for business or to see family and friends who live there. Now imagine that while you’re there, you leave your device in your hotel at times, and that this adversarial government can access your device and image it while you do so, but that government is unwilling to attempt to install software on your device because they don’t want to be detected.
I think we can all name people and governments who are similarly situated.
It would make surveillance easier and less risky for such a government adversary if the people using a chat service chose a protocol that didn’t offer PFS. It would be significantly harder for, say, Citizen Lab to expose operations that didn’t require ongoing endpoint compromise. And Citizen Lab has been a problem for a number of entities that likely also have passive traffic capture capabilities.
The weak dismissal of PFS is interesting in the context of a thought experiment like this.
I think the simplest example is that you can delete messages on your device (*) – but without PFS those deleted messages can be reconstructed from the recorded traffic. Not so much with PFS, even if they have access to your device afterwards. For which your example is a pretty good scenario.
(*) let’s ignore methods to extract old data
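To make that scenario concrete, a minimal sketch of the property under discussion, using the JDK’s built-in X25519 key agreement (Java 11+). The function name and the surrounding protocol are illustrative only; real protocols (like Signal’s ratchet) also authenticate the exchange, which this omits entirely.

```kotlin
import java.security.KeyPairGenerator
import java.security.PublicKey
import javax.crypto.KeyAgreement

// Without PFS: one long-term key decrypts *all* recorded traffic, past and
// future. With an ephemeral key per exchange: the key material is destroyed
// after use, so recordings stay sealed even if the device is later imaged.
fun ephemeralSharedSecret(theirPublic: PublicKey): ByteArray {
    val mine = KeyPairGenerator.getInstance("X25519").generateKeyPair()
    val secret = KeyAgreement.getInstance("X25519").run {
        init(mine.private)
        doPhase(theirPublic, true)
        generateSecret()
    }
    // mine.private goes out of scope here and is never persisted; an
    // adversary imaging the device tomorrow finds nothing that decrypts
    // yesterday's capture.
    return secret
}
```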
Related article (just for the record): Don’t Use Session (Signal Fork)
They go over the original post in such detail, but they won’t link to it. That is a bit weak.
Unrelated, but I was just working on some C code that forks, starts a new process session and then uses signals to communicate back with the parent. I have no idea what Session and Signal are, so for a moment this title made me very confused ;)
Signal is a private messaging app with mass market appeal. It’s open source and encrypts messages on your device to your conversation partners’ devices.
Session is a fork of Signal, which is what’s being criticized in the article.
That said, your observation is spot on: We suck at naming things, and sometimes that leads to humorous confusion.
Even the name of the blog post confused me for a moment. I thought maybe Session was the name of some software that Signal had forked, and that the post was going to advise against using Signal’s fork of Session. But no, almost the opposite!
Seems like a solid response. I’m not a cryptography expert though.
Hmm, I’m not so sure.
Session’s response in the first section about ed25519 keygen is quibbling about how you attack reduced-entropy ed25519, which is largely irrelevant to the existence of the reduced-entropy vulnerability. They don’t say they will fix the fuckup.
I would have to read more code to investigate the other sections, and I don’t care enough to bother so I’ll wait for Soatok’s forthcoming reply.
Legitimate, well-written findings! I like how Soatok casually drops actual security audits as blog posts.
I have to admit that this post makes good points. But I want to remind folks that this author is highly biased towards Signal, which I criticized the last time one of their articles was posted here
My recommendation is: don’t use Session and don’t use Signal. @dijit summarized my thoughts about Signal in their comments; it’s just missing a mention of the MobileCoin conflicts of interest
I … never thought I’d find myself defending this, as I’ve been angrily accused of anti-signal bias for my skepticism of the company and their policies …
But I think calling this author “highly biased towards Signal” is really over-stating things. This author certainly seems to believe that Signal is the current best option for encrypted real-time messaging. But calling them “highly biased towards signal” makes it sound like they’re advocating for Signal for reasons other than wanting to secure the communications of members of their community.
I don’t believe that’s the case. I might (and do, IIRC) find grounds to argue with their threat model. Or to disagree with the tradeoffs they’ve chosen. But I don’t think their preference for signal is related to anything other than sincerely held and well-considered beliefs about how best to secure the communications of people who really need to do that. I won’t advocate for Signal, but I think your warning about the author may create a misimpression that I do not believe is warranted.
When every single post from them starts with “this is not signal so that’s enough reason not to use it” it does make one wonder…
I think this is a misunderstanding (at best) or possibly a mischaracterization. The author isn’t saying “this is not signal”, he’s saying “this doesn’t even qualify to compete with Signal”. It’s not a subtle semantic difference.
The statement that’s actually made in these posts does make room for alternatives that aren’t Signal, but only if they’re sufficiently secure from an applied cryptography perspective. For the ones that we’ve read about, they didn’t meet the bar.
Your comment is reframing his posts as “this is not signal so that’s enough reason not to use it”, which is a wildly different proposition. If he was saying that, he’d get a lot more pushback from cryptography nerds like myself.
This post is the first one I’ve seen from this author that primarily makes legitimate complaints about the cryptography in use. However, even in this one, where there was good content to come, they couldn’t help but start the post with a snide “oh, also they don’t do PFS so it’s trash” comment.
I’m not a cryptography nerd, so I’m not gonna pretend to completely understand all of them, but TFA seems to be giving more than a few points about why they’re knocking Session for not doing PFS. One of those points is that Signal does do PFS, and Session just kind of decided to rip it out, if I’m understanding correctly.
Again, I’m not knowledgeable enough to really refute any of the other reasons they mention, but I definitely see removal of a security feature as highly suspicious and potentially misguided.
Can I please ask you why you don’t share my concerns?
I think you are trolling
Definitely not trolling. Most of the articles in those search results are what led me to this conclusion
I think I know the feeling you’re arguing from – but “they don’t do PFS so it’s trash” is the technical argument for why it’s not playing on the same level as Signal. (Which, combined with the other security flaws, might break their users’ necks, as you can now decrypt things later on.)
I agree. Any messaging platform that demands you have an Android or iOS primary device and requires a SIM card (which many countries require an ID or passport to get) should be seen as a non-starter. We do not need to further strengthen that duopoly or demand folks carry a phone for their messaging needs. I honestly regret getting my extended family on Signal like 6–7 years ago.
I strongly dislike being in the position of having to trust Signal. There are a number of options that lack its downsides if you don’t care about leaking metadata, which is a hell of a concession to have to make.
Signal also leaks metadata – to Signal, at least. It’s down to who you want to leak it to.
Right. You have to take it on faith that Signal’s servers discard your metadata after use like they say they do. XMPP and Matrix will “leak” it to servers involved in the conversation which mostly don’t even pinky-promise like Signal. You could run a private, non-federated XMPP or Matrix server, of course.
You don’t have to trust it; you can use SimpleX
Is it just me, or am I getting an ActivityPub message instead of the blog post when I view it?
Works here. But it might be doing content negotiation and misinterpreting your browser’s Accept header – that’s something people regularly get subtly wrong.
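One plausible shape of that mistake, sketched below; this is not a claim about this blog’s actual code, and the media types and helper names are illustrative. Suppose a blog serves both HTML and ActivityPub JSON from the same URL:

```kotlin
// Naive: fires whenever the type appears anywhere in the Accept header,
// ignoring order and q-values. Variants of this serve ActivityPub JSON
// to things that would rather have HTML (or vice versa).
fun contentTypeNaive(accept: String): String =
    if ("application/activity+json" in accept) "application/activity+json"
    else "text/html"

// Honouring the client's preference order requires actually parsing, e.g.:
//   Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
fun contentType(accept: String): String {
    val prefs = accept.split(",").map { part ->
        val pieces = part.trim().split(";").map { it.trim() }
        val q = pieces.drop(1)
            .firstOrNull { it.startsWith("q=") }
            ?.removePrefix("q=")?.toDoubleOrNull() ?: 1.0
        pieces[0] to q // (media type, quality)
    }.sortedByDescending { it.second }
    for ((type, _) in prefs) when (type) {
        "text/html" -> return "text/html"
        "application/activity+json" -> return "application/activity+json"
    }
    return "text/html" // sensible default for wildcard/unknown Accepts
}
```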
If anyone is seriously interested in studying or even auditing this source code, I recommend opening it in an IDE with Kotlin support (IntelliJ IDEA / Android Studio) to avoid embarrassing mistakes. Due to type inference and other language characteristics, the source code is not very readable in a plain-text editor (or with plain syntax highlighting alone).
Yes, it means downloading a lot of… stuff and setting up an isolated environment (untrusted code). There is also the question of whether using type inference (and overloaded functions) is good for safety – maybe explicit types would be better, because then changing the type of a return value would not magically change behavior on the other side of the program.
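One concrete way explicit types help with exactly the mixups discussed in this thread, sketched with Kotlin value classes; the names are illustrative and this is not Session’s code. A symmetric key and an X25519 public key are both “just” 32 bytes, so ByteArray-typed APIs happily accept one where the other belongs:

```kotlin
// Distinct wrapper types make the confusion a compile error at ~zero runtime cost.
@JvmInline value class SymmetricKey(val bytes: ByteArray)
@JvmInline value class X25519PublicKey(val bytes: ByteArray)

fun aesGcmEncrypt(plaintext: ByteArray, key: SymmetricKey): ByteArray =
    TODO("AES-GCM; stand-in body")

fun demo(plaintext: ByteArray, recipient: X25519PublicKey) {
    // aesGcmEncrypt(plaintext, recipient)  // <- no longer compiles
    // aesGcmEncrypt(plaintext, SymmetricKey(recipient.bytes)) // still possible,
    // but the confusion is now spelled out at the call site for reviewers.
}
```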
This feels like FUD given that I don’t trust Signal.
A fork should be fine, if we are trusting Signal when they do the same things.
Did you read the article?
Your comment is basically whataboutism. The fact that Signal needs to improve their “trustability” (I agree on that) doesn’t relate to Session’s security. They’ve opened gaping holes compared to Signal.
Trustworthiness
Thanks, words are hard!
They don’t do the same things: did you read the parts where Soatok explained how Session changed Signal’s cryptography and fucked things up?
Except that it cites a real downgrade vs signal right at the top: Session drops support for PFS with a really flimsy justification. I am also a signal skeptic, but that doesn’t mean that these concrete criticisms of this fork are FUD.
I’ve flagged this as off-topic (it seemed like the closest of the flag options). It seems pretty clear that you are pushing your own blog post instead of engaging with the OP in any way. And the fact that you don’t trust Signal—however valid your reasons are—has no bearing on whether this article about a Signal fork is FUD.