  1.

    It is hard for me to think about this without also thinking about the CFAA, the Computer Fraud and Abuse Act. After all, the CFAA criminalizes actions taken against vulnerable software, and software liability is about trying to find somebody to take responsibility for vulnerable software.

    Currently, a challenge to the CFAA is pending before the Supreme Court. The question is simple, and I’ll quote it verbatim from the source:

    “Whether a person who is authorized to access information on a computer for certain purposes violates Section 1030(a)(2) of the Computer Fraud and Abuse Act if he accesses the same information for an improper purpose.”

    This strikes close to software liability, as well. I could paraphrase it gently as “whether a person who publishes software to be used by another becomes liable if the software is usable for an improper purpose,” and argue that, just as people should not be criminalized for accessing computers which are publicly available but not well-secured, people should not be penalized for publishing software which is not well-hardened.

    1.

      Let’s try an analogy:

      Should bridge builders be liable for building bridges that are not well-hardened?

      1.

        That is a terrible analogy. Let me try a better one: Should radio vendors be liable for building radios that are not well-hardened?

        First, let me analyze my analogy. Under the current regulatory regime in the USA, when a radio vendor sells a radio, they must also certify that the radio does not cause harmful interference to other radios, and that it will accept any interference it receives from other sources. This means that, paradoxically, there are devices on the market which exist solely to broadcast small bubbles of personal FM transmission without a license. This is because all radios are embedded within a single ambient electromagnetic field, and we must all share it.

        Similarly, we might imagine that, when a software vendor publishes a software package, they might also certify that the package does not attempt to harm the hardware it runs on, but that, paradoxically, the package can be instructed to do certain kinds of harm to that hardware. Further, the package might not attempt to broadcast malicious messages out to the Internet, but it will act exactly according to any messages that it receives, including malicious ones. Should these software vendors be penalized for mere publication, as long as they explicitly attach this sort of warranty (really, a disclaimer of warranty)?

        The part where your analogy falls down is that bridges are public infrastructure implemented physically, and so we have very different expectations for who to assign liability to in case of disaster. In particular, we never just blame the builders and designers of the bridge, even in case of engineering error; we also include the managers and governors who failed to exercise proper oversight. Moreover, a bridge falling down is usually a memorable natural disaster which is accompanied by mourning, regret, and ceremony; computers are broken into on a minute-by-minute basis around the globe, in automated industrial fashion, by advanced persistent threats. The societal responses are totally different, because a bridge collapse is humans versus the environment, while a hacking attempt is humans attacking humans. Indeed, in the particular bridge disaster I chose, another bridge was reinforced because of it; we improved our infrastructure in response to the calamity.

        1.

          I don’t think it’s a terrible analogy. I think your analogy is good, too.

          Not all bridges are public infrastructure. And to generalize a bit, most construction projects are not publicly funded.

          I totally agree that computer systems, due to their global connectedness and difficult-to-track threats, are subject to far more human attacks than bridges and buildings. As long as you take that into account, can the analogy not provide some insight?

    2.

      I don’t like this article. It ignores at least two possible regulatory regimes:

      1. No security liability for software vendors at all.
      2. Enforced disclosure, with liability for false disclosures. This is basically how most financial products are regulated.

      It also doesn’t really investigate the mode of liability (strict, negligence, etc.).

      Fundamentally, making software is not like making food: best practices are not known, “safe” is not well defined, and mistakes are inevitable.

      If we think making software is like making drugs (we’re better off not having it unless it’s proven safe), then it should be regulated like medical devices or drugs.

      If we don’t think it’s like drugs or medical devices, then we need to wrestle with the problem that flaws are inevitable, that their impacts vary, that the likely impact depends on what the software is, and that users often have a huge part to play.

      We also need to wrestle with different levels of transparency. For example, with open-source software it is at least possible to understand what’s in the software.

      I suspect that we’re better off narrowly defining the exact evil being targeted and working from there.