The cited law doesn’t seem like it’s about combating abuse. It seems like it’s about providing a way to obtain evidence for prosecution. This article seems to fail to address the particular concerns the law is trying to address.
I believe backdoors are fundamentally unworkable and wrong. But the arguments against them need to do better than this.
The point of the article is to show that there is no viable solution to the particular concern the proposed law raises (see also https://lobste.rs/s/ntyvtw/combating_abuse_matrix_without#c_52uuvc), and to propose an alternative which could help mitigate abuse across the board if embraced properly.
go on then, we’re all ears :)
I think these arguments go off the rails in two ways:
They treat the backdoor proposals as a black and white proposition.
They propose solutions to a different problem than the proponents are trying to solve.
If I had the audience to effect change here, I would focus my arguments on the tradeoffs, and I would be honest about them. We can’t shine a light on bad behavior like child exploitation, hate groups, and terrorism without sacrificing privacy/security in the long term. What we are actually talking about are tradeoffs that society has to make.
To be clear, the authorities don’t actually care about 1:1 GPG-encrypted communication between bad parties here. For them that’s a straw man argument: it requires such a rare skillset that it is, by definition, a smaller problem. The danger in their eyes is the mass enablement of more people to hide their bad behavior, and the accompanying fear that the ability to hide will result in an explosion of bad behavior because hiding it becomes safer and easier.
When we refuse to engage them on that point, we lose the argument. One small part of the argument is touched on in your article: the cat is out of the bag for something like Matrix. You only need one competent administrator in a different jurisdiction to provide a way for others to hide.
But the other part of the argument that seems to be shied away from is that we regard the risk to society of a backdoor as greater than the risk to children that good privacy and encryption poses. Unless you convince the rest of society that this is the case, and get them to own it, you will lose the argument.
They treat the backdoor proposals as a black and white proposition.

That’s because they are a black and white proposition. There’s no such thing as half a backdoor. And “the ability to decrypt anything on command, and we promise to have a good reason” is a backdoor, no matter how many times politicians pretend otherwise.
I think you misunderstand what I mean by black and white. Ubiquitous encryption is not an unambiguous good in many people’s eyes. It’s a tradeoff.
Governments care about the ability to spy on their citizens. They are hiding this agenda behind the more politically potent cause of preventing child abuse.
But you shouldn’t be trying to convince the government. The people you need to convince are the governed. The government are on one side of the argument and you are on the other.
[Comment removed by author]
Maybe I am not understanding the solution, but how will a reputation score help when I get served a subpoena for the chat logs of a room that I host on my server? As a server operator, I will either have to comply with the subpoena and provide a copy of the cleartext logs, or face the consequences of not providing them.
I can see how a reputation system could help administrators be more selective of the content/rooms/users they host, but it doesn’t really help in addressing the request made in the international statement that inspired the post.
In other words, as soon as someone starts operating a public-facing Matrix server, they become responsible for the content that others put up on it. The question then is whether Matrix should provide a backdoor setting that lets administrators access the logs if they do not want to risk going up against the government when asked for them (see what happened to the Lavabit guy, for example).
So, anything which lets a server admin cough up cleartext logs is obviously a backdoor, and any general-purpose backdoor is obviously flawed. Therefore we have to offer an alternative: in this instance, something that can help investigation and prevention instead, by letting folks identify and flag bad content themselves. The authorities’ role then becomes one of infiltrating and preventing abuse at the time, rather than busting privacy in retrospect. Frankly, the legal processes have no choice but to evolve to reflect the reality of the technology, rather than trying to turn back the tide and stuff E2EE back in its bottle.
I hope you’re right that the legal processes will evolve, but I have to admit it looks like this latest political push for surveillance transcends national and partisan boundaries… I’m upset at the prospect of living in the world it would create, if successful.
Thanks for offering a way forward. Good luck.
EDRi has a nice document about encryption workarounds for law enforcement: https://edri.org/files/encryption/workarounds_edriposition_20170912.pdf
I suggested most of this to a senior Mozilla person at the last MozFest. I think it’s a bunch of good ideas and fairly obvious to anyone who’s looking at the intersection of distributed systems and moderation :)
The devil is always in the details and I hope you’re able to get a nice user experience! Good luck!
I’m probably missing something, but I fail to see how this solution helps combat the propagation of child pornography. If I’m reading this correctly, it basically assumes that enough poorly-vetted participants will join and down-score before leaving (what happens to the score if they leave, which they have to?), and/or that there will be enough external indicators to find them, which would really just punt the problem elsewhere. It seems to me to be based on the usual bad assumption that there will be enough “good actors” present to deter “bad actors” (whatever that means).
What am I missing?
I was thinking the same. People joining chats with illegal material will typically be the ones who are also interested in keeping it on the down low.
i think the confusion is over the idea of “down-score”. so say that you’re a govt chasing child abusers around Matrix: if you infiltrate a bad room, you might well publish a blocklist of the hashes of the users in that room, or the hash of that room ID, or the hash of content in that room. you could then leave the room, and that blocklist will hang around for as long as people want to use it. if people trust your list (and trust you not to abuse it by overreach or retaliation or whatever) then they will continue to use it as a way to block content they don’t want off the servers. this score then lasts for as long as people choose to trust it; if folks got ‘unfairly’ maligned by innocently lurking in a child abuse room then they’ll need to abandon that identity.
in terms of external indicators: if you’re a random ‘good’ user on some room directory and stumble across a room full of obnoxious stuff, then you might flag it to the authorities, who would check it, ban it, and blocklist it. in practice, the external indicators do exist in order for people to find these rooms, so this seems to be a valid assumption (and not a case of punting the problem elsewhere).
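To make the blocklist mechanism described above concrete, here is a minimal Python sketch, assuming a simple hash-and-subscribe model. The names (hash_id, ReputationList, the example room IDs) are invented for illustration; this is not Matrix’s actual reputation API, just a sketch of the idea that the published list only contains hashes and that anyone can choose whether to subscribe to it.

```python
# Hypothetical illustration only: hash_id, ReputationList, and the example
# identifiers below are invented for this sketch and are not part of
# Matrix's actual reputation proposal or API.
import hashlib


def hash_id(matrix_id: str) -> str:
    """Hash an identifier (user ID, room ID, or content hash) so a blocklist
    can be shared without republishing the raw identifiers themselves."""
    return hashlib.sha256(matrix_id.encode("utf-8")).hexdigest()


class ReputationList:
    """A published list of hashed identifiers that subscribers choose to trust."""

    def __init__(self, publisher: str):
        self.publisher = publisher              # whoever publishes/signs the list
        self.blocked_hashes: set[str] = set()   # hashes of flagged rooms/users/content

    def block(self, matrix_id: str) -> None:
        # e.g. an investigator adds a room they infiltrated and verified
        self.blocked_hashes.add(hash_id(matrix_id))

    def is_blocked(self, matrix_id: str) -> bool:
        # a subscribing server or client checks rooms/users/content against the list
        return hash_id(matrix_id) in self.blocked_hashes


# Usage: a server admin subscribes to a list they trust and filters on it.
agency_list = ReputationList(publisher="@investigators:example.org")
agency_list.block("!abuseroom:example.com")

print(agency_list.is_blocked("!abuseroom:example.com"))   # True -> reject/hide
print(agency_list.is_blocked("!knitting:example.org"))    # False -> leave alone
```

Because only hashes are published, subscribers can filter matching rooms or users without the list itself redistributing the abusive identifiers, and the list carries weight only for as long as people choose to trust its publisher.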
Thank you for your explanation. I understand now how this would help with content blocking, but not really how it would help collect evidence against perpetrators. Blocking content is not nearly as important as stopping its creation (at least when it comes to child abuse). Could be I need to think about it more.
I continue to disagree about punting. External indicators exist in large part because, so far, e2e encryption hasn’t been used as widely as its proponents would like it to be. I don’t see how one can argue that e2e encryption protects you from snooping without it also stopping legitimate cases of observation. You’d basically be left with serendipitous discoveries of entry points to those networks.
I understand now how this would help with content blocking, but not really how it would help collect evidence against perpetrators.

By making it easier to identify abusive content, you can either filter it out (as an end user) or infiltrate and investigate it (as law enforcement).
External indicators exist in large part because, so far, e2e encryption hasn’t been used as widely as its proponents would like it to be.

It’s a contradiction in terms to e2e encrypt publicly visible content. If you’ve built a community for whatever purpose, you need to advertise it somehow - which means telling random people about it, which means consciously relaxing your encryption to do so.
One thing to note: governments aren’t thinking in terms of “backdoors.” They’re thinking in terms of “frontdoors,” which I think is a rather different paradigm of thought. We in the technical realm know that these kinds of things are “backdoors,” but non-technical politicians don’t understand the difference between a legit frontdoor and backdoor. Effectively, there’s cognitive dissonance between how we in the technical world think and how they in the legislative world think.