Data isn’t like physical goods:
- When you steal it, the original owner still has it
- It’s common to hold data you don’t own (e.g. photos of other people’s children that were sent to the adult)
- The theft hurts the data’s owner, not the data’s custodian.
Storing other people’s sensitive personal data that you don’t need is negligence, plain and simple.
I worked for the company that ran Siri, with access to all the voice data. I worked for Salesforce with production root access. I worked for Army intelligence and the NGA. In each case I had access to plenty of data I could simply have walked away with, and the companies and organizations would have been blamed for negligence.
There are these people, called security professionals, that companies can hire to prevent these sorts of problems from happening. (Hi!)
Those people, like me, are tearing their hair out in frustration after hearing story after story about companies that think it’s “OK” to have customer data just sitting on their servers, waiting to be exploited by hackers or by the company itself.
This is completely unnecessary. You can design secure applications and services where only the users themselves can read the data. It is possible. There are people working for pennies, dimes, or literally nothing at all who are doing a better job of protecting user data than mega-rich companies like Facebook, simply because they care. No excuses!
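To make that concrete, here is a minimal sketch of one such design, assuming a key derived from the user’s passphrase on their own device so the server only ever stores ciphertext. It uses Python’s `cryptography` package; the function names are illustrative, not any particular product’s API.

```python
# A sketch of client-side encryption: the key never leaves the user's device,
# so a breach of the server exposes nothing readable.
# Requires the `cryptography` package (pip install cryptography).
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    # Scrypt is deliberately expensive, to slow down passphrase guessing.
    kdf = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1)
    return base64.urlsafe_b64encode(kdf.derive(passphrase))

def encrypt_for_upload(passphrase: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # random per record; stored alongside the ciphertext
    token = Fernet(derive_key(passphrase, salt)).encrypt(plaintext)
    return salt, token     # this pair is all the server ever needs to see
```

The trade-off is well understood: if the user loses the passphrase, the data is gone, and that is precisely the property that makes the stored data worthless to an attacker.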
Whatever amount of blame we place on the attacker, there’s plenty of blame to share with other parties.
I mean, if they’re telling the truth that they don’t intend to use it - which is strongly suggested by the way they carefully released a redacted subset of the data - then I do think the penetration test itself was ethical, although the handling of the actual data wasn’t.
The ethics of the manner of disclosure are certainly questionable, but giving the company a chance to keep it quiet would not have been appropriate, since the issue was what they stored, not just their perimeter security. Also, they didn’t describe the vulnerabilities used, only the existence of the data, which puts users in substantially less danger. They ultimately did all the users a favor by disclosing in a way that prompted the company to add security.
It’s conceivable they disclosed as a form of advertisement to potential buyers, but that really does seem unlikely because there’s no need for it.
A point usually glossed over far too quickly in these disclosure debates: it’s still possible that they were the only attacker to access the data, but we will never know. With most of the other breaches you name, the public found out perhaps a year after the initial compromise, because the data went on sale and researchers compared records to figure out where it might have come from. That hasn’t happened here.
I haven’t seen an allegation that this involved insider access, so I assume the point of that comparison is that because horrifying practices are the industry standard, those practices are okay. I have to say I have no sympathy for that. The entire industry needs to wake up to the concept that nobody whose job doesn’t directly require it should be able to access user data, and that such access should be logged and audited.
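For illustration, a minimal sketch of what “logged and audited” can look like: deny by default, require a recorded justification, and write an audit entry before any user data is released. The `employee` object, its `roles` and `id` attributes, and the `user_data` role name are all hypothetical stand-ins for whatever an identity system actually provides.

```python
# Deny-by-default access to user data, with an audit trail for every decision.
import logging
from functools import wraps

audit_log = logging.getLogger("audit")

class AccessDenied(Exception):
    pass

def requires_user_data_access(func):
    @wraps(func)
    def wrapper(employee, user_id, *, justification, **kwargs):
        if "user_data" not in employee.roles:
            audit_log.warning("DENIED %s -> user %s", employee.id, user_id)
            raise AccessDenied(f"{employee.id} lacks the user_data role")
        # Every grant is recorded: who, whose data, and why.
        audit_log.info("GRANTED %s -> user %s: %s",
                       employee.id, user_id, justification)
        return func(employee, user_id, **kwargs)
    return wrapper

@requires_user_data_access
def read_user_record(employee, user_id):
    ...  # the actual data fetch would go here
```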
If someone breaks into a building, steals records, and tells the New York Times about it, they’re just a thief. Would we blame the locksmith for easily picked locks, or the company for not having enough guards on duty?
Depends on how bad the company’s practices were. If they had a standard lock from a professional locksmith who’s a member of the relevant professional body, probably not. If they left the records in a pile on the sidewalk, or didn’t have any locks on their doors at all, we’d probably blame the company. IMO an SQL injection vulnerability has been more like the latter since about 2003.
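For readers outside the field, SQL injection really is the “no locks at all” case, and the fix has been a one-line change for as long as the vulnerability has been famous. A minimal sketch using Python’s built-in sqlite3 module; the table and data are invented for illustration.

```python
# The unlocked-door query vs. the standard fix, in Python's built-in sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_email_vulnerable(name: str):
    # DON'T: attacker input is spliced into the SQL itself.
    # name = "x' OR '1'='1" makes the WHERE clause always true
    # and returns every user's email.
    return conn.execute(
        f"SELECT email FROM users WHERE name = '{name}'").fetchall()

def find_email_safe(name: str):
    # DO: a parameterized query; the driver treats `name` purely as data.
    return conn.execute(
        "SELECT email FROM users WHERE name = ?", (name,)).fetchall()

print(find_email_vulnerable("x' OR '1'='1"))  # leaks everything
print(find_email_safe("x' OR '1'='1"))        # returns nothing
```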
In information security, we also have a phenomenon where what “everybody does” (and it IS most companies, though definitely not everybody) is completely unacceptable. That makes it a lot harder to establish legal liability among other things.
Audio clips too, apparently.
More-or-less everyone already agrees that breaking and entering is bad; it doesn’t need to be discussed.
The concept that you are responsible for securing the PII you hold is a new one, and it needs to be talked about over and over until it becomes an old one.
Completely agreed.