
Full Disclosure: I’m affiliated with the projects mentioned in the article.


    The piece of speculative fiction linked from the article (“The Big Hack”) is good, but not as good as “Car Wars” by Doctorow.

    NB: I am aware that the protagonist in “Car Wars” deliberately disables auto-auto updates. It’s still the better piece, if for no other reason than that it covers machine learning.


      I like the idea of this project. I even wanted to do a secure-update project a long time ago, but funding authorities weren’t interested in it: it just required using what we already knew instead of creating something new that produced more Ph.D.s and such. ;) Glad people are working on it now; it’s quite critical. I did take issue with this comparison:

      “It is designed so that two entirely different parties are responsible for any software update. It is similar to the two-man rule used to launch nuclear missiles.”

      It’s… nothing like that. People often focus on the two people turning keys. They don’t think about the context in which that happens. Now, I wasn’t super-deep into that part of the military, so I might be wrong about something here or just out of date, but here’s what I picked up studying it:

      1. The people in the room have a high clearance. That means they were thoroughly investigated. They also had to answer questions where dishonesty might put them in prison.

      2. They were authenticated at the door.

      3. They are monitored on the inside. That may or may not stop malicious behavior, but it makes punishing any criminals who survive much easier.

      4. The keys are physically separated far enough that one person can’t turn both. Turning both requires conning or physically assaulting a person who is prepared to use lethal force to prevent exactly that. Tampering is a super-high-risk action.

      5. The systems mostly use dedicated, monitored lines kept off the Internet, running on ancient computers that are the embodiment of security through obscurity or obfuscation. I’m leaning toward obscurity: most people don’t know how to hack them, and it’s pricey (real machines) or problematic (emulators) to learn these things well enough to hack them, especially once you add whatever protocols the secret cables are running.

      This combination means you need well-funded people with a lot of time on their hands and rare skills to hack this stuff, and even then the checks in place only get them so far. Sandia redid a bunch of these systems with a more modern, robust design. In-person malice is highly traceable, committed by highly vetted people, carries long sentences, and might be met with lethal force. That creates a much stronger deterrent.

      Creating all of these attributes for an Internet-connected environment doing software updates is possible, but it’s very unlikely to be what anybody is really doing. The article lists the first party as software running in the cloud, which isn’t rated very secure by about any measure; maybe if they use HSMs, but those have their own issues. The next component is “authorized developers,” which is pretty vague. If the private sector makes that concrete, we’d have the very people shipping shoddy software and insecure updates responsible for that “factor” in a secure update process.

      So it’s one factor of unknown trust combined with one that’s been untrustworthy so far, being used to establish two factors of trust. I don’t buy the nuclear comparison. Sadly, it might still improve the current situation a lot in terms of what the overall project brings to the insecure embedded field.
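      To make concrete what a software “two-man rule” would actually amount to, here’s a rough sketch of a two-party update check: the payload installs only if it verifies under two independently held signing keys. This is illustrative only; the key names, signature sources, and apply_update() hook are made up, not the article’s project’s actual API.

      ```python
      # Two-party ("two-man rule") update check: BOTH the cloud service and an
      # authorized developer must have signed the exact same payload.
      # All names here are hypothetical; this is not any real project's API.
      from cryptography.exceptions import InvalidSignature
      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

      def two_party_ok(payload: bytes,
                       cloud_sig: bytes, cloud_key: Ed25519PublicKey,
                       dev_sig: bytes, dev_key: Ed25519PublicKey) -> bool:
          """Return True only if both parties signed this exact payload."""
          for sig, key in ((cloud_sig, cloud_key), (dev_sig, dev_key)):
              try:
                  key.verify(sig, payload)  # raises InvalidSignature on mismatch
              except InvalidSignature:
                  return False
          return True

      # if two_party_ok(update_bytes, sig_a, key_a, sig_b, key_b):
      #     apply_update(update_bytes)  # hypothetical installer hook
      ```

      The check itself is the easy part. The hard question is whether those two keys are held by genuinely independent, well-vetted parties, and that is exactly where the nuclear analogy breaks down.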