1. 36
  1.  

  2. 7

    The issue is interesting because there are both moral and legal questions to consider. On the legal front (as is actually addressed in another post on the same blog), given the vagueness of the CFAA and the lack of applicable case law, it is difficult to know what (if any) criminal liability a security researcher would face for publishing the exploit. Given that the exploit (in the specific case of the pacemaker) has the potential for lethality, it may even become an issue of homicide or something similar. It’s a situation where the legality is unclear enough that any researcher is likely to be wary of posting the exploit.

    On the moral front, it becomes an issue of imperfect information. Before the exploit is published, it is often impossible to know which is more likely to happen first: the company patching the exploit or the exploit being used to kill people. Given the imperfect information and the unavailability of alternative options, is it immoral to share the exploit’s existence? Consider this: whether or not the exploit is shared publicly, it exists. The individual who has found it is the person we collectively want to find it: a security researcher interested only in the exploit being fixed. If they do not share it, then the pool of people likely to find it has shrunk by one benign member, increasing (however slightly) the proportion of that population likely to exploit the vulnerability maliciously. The researcher knows this, and given the knowledge that keeping quiet increases the likelihood of the exploit being used maliciously before it can be patched, isn’t it moral to speak up?

    Security researchers do incredibly important work as watchdogs over the technology we rely on. Their ability not only to study these exploits but also to act to force their patching should not be stymied by concerns about embarrassment, nor by bureaucratic nonsense. All should be able to agree that public disclosure is the worst of the available options, but when all other options have been exhausted and the exploit remains, we should affirm the right (both morally and legally) of researchers to disclose it.

    1. 2

      Really, I’d think going to the press and telling people with remotely configurable pacemakers that they need to disconnect them would be the better move. If the focus isn’t on the health of the patient, why even disclose the vulnerability in the first place? The Hippocratic oath takes over even if you aren’t a doctor. Do no harm; do what is best for the patient.

      1. 2

        This is getting OT, but I’d like to dig into the issue.

        The Hippocratic oath takes over even if you aren’t a doctor

        It does? I mean, if I never swore by it, why should I be bound by it? It’s not the Geneva Conventions or something that is enforced by any real law, IIRC.

        I don’t disagree that doing our best to protect the ill is the right way; I just can’t agree with that specific part.

        1. 4

          While you’re right that people who haven’t sworn the Hippocratic oath aren’t bound by it, aren’t we still socially bound to minimize the suffering of those around us? It’s not exactly the same, but I imagine that’s what animatronic was getting at, however the original sentence was phrased.

          1. 1

            aren’t we still socially bound to minimize the suffering of those around us?

            For me this comes from moral bonds rather than social ones, but I agree. If that was more or less what @animatronic meant, then okay.

        2. 1

          Perhaps I should clarify. I do not think that public disclosure is the best option, in fact I think it’s the worst. Yet if all other options have failed, and the researcher is left with no other avenue to instigate the fixing of a security hole, I believe that public disclosure is necessary for the public good.

          This assumes several things:

          1. The researcher has made a serious effort to use other avenues of disclosure
          2. Not disclosing the vulnerability would cause more harm than disclosing it
          3. The disclosure is done in a way that minimizes harm and facilitates a rapid solution to the problem

          There is a difference between “hey everyone here’s a security hole and how to exploit it kthxbai” and “I’ve found a security hole in X, which I reported to Y to no avail, it can be patched with Z.” Researchers should always seek to minimize damage and protect public safety and security, and as such should not be beholden to the whims of organizations willing to let these vulnerabilities stand at the public’s risk.

      2. 3

        Barnaby Jack was set to talk at Black Hat last year about a vulnerability in pacemakers before his passing. I’m sure someone over there could be of help.

        1. 3

          I am wondering what luck you would have with a lawsuit against the company for negligence.

          1. 1

            IANAL, but I’m pretty sure that, to have standing to sue the manufacturer, you would have to be a person whose affected pacemaker (or whatever device) was actually hacked and who suffered quantifiable injury because of it.

          2. 2

            It is a tricky question. I would say that it is morally acceptable to drop the 0day if the company is not willing to patch the bug. If the exploit is that obvious, a determined hacker with malicious intent is likely to discover it anyway and use it or sell it on the black market. By making the flaw public, doctors and patients are at least informed about the risks of using the device and may put pressure on the company to fix the security hole. As long as the security researcher makes every effort to get the manufacturer to patch the flaw before disclosing it, the manufacturer, and not the researcher, would be culpable for any deaths.

            1. 3

              I think going to the press about a manufacturer failing to patch or mitigate would be way more productive than dropping a 0day.

            2. 1

              Actually, to me the situation is pretty clear: if this can be done, then the author should go for it. If this is hypothetical talk, or the circumstances required for this attack are extraordinary, then again IMHO it should still be presented, though of course that doesn’t make it as interesting as it sounds.

              I see no moral issues here: if this guy can do it, there are probably others with much more sinister intentions who will carry out the attack no matter what (as already shown in the TV series Homeland, season 2, IIRC). We might argue that there’s a moral obligation to actually show a situation where this attack could take place in the real world.

              1. 1

                The problem is that dropping a pacemaker 0day is so horrific that most people would readily agree it should be outlawed. But, at the same time, without the threat of a 0day, vendors will ignore the problem.

                I don’t think vendors of medical equipment think the same way.

                Source: worked in medical tech for a while.

                1. 1

                  They didn’t fix it for a year. I can’t think of a better example of not giving a shit.