1. 58

  2. 24

    Oh yeah. I really hate the overuse of computers and the lack of analog failsafes. The worst thing is that it’s not only consumer Internet of Shit devices. Critical infrastructure is apparently built the same way, and HOW DID THAT EVER GET APPROVED ANYWHERE?! We now have government people screaming about “cyber warfare” and the like because there have already been incidents of power grids being hacked. How anyone ever agreed to give full control of everything to computers is beyond me. Worse, not just computers — general purpose computers running general purpose operating systems. What the hell?! Critical infrastructure should be controlled by, like, FPGAs that only do the control stuff and nothing more. Not Windows machines with tons of I/O ports that will happily run malware from USB sticks. Yeah, if you want to add super smart deep learning predictive buzzword magic things to improve the control — make it OPTIONAL. If the smart computer shuts down, the system should still work in a more basic, old-school way.

    1. 6

      General purpose computers are not the best, but I’m more worried about isolation. In the worst cases a company might have a single network, with their client databases sitting alongside their development work and their “cloud” offerings. Then, if you can pwn a lowly receptionist’s computer (just leave a USB stick in the parking lot), you have the keys to the castle.

      Multiple networks are good, but they are frequently all connected to the internet anyway. I rarely hear about air gaps being used for anything but NSA or similar level operations. I think most companies could benefit from a few “offline only” restricted networks. Just simple air gapping can negate some of the risks of using general purpose hardware with proprietary OSs/software.

      1. 6

        I have an acquaintance who has air-gapped his business PCs (accounting system, etc.) since the ’80s. I used to think he was over the top. Nothing gets connected to those PCs.

        Upgrades are VERY infrequent, although I’m not sure how he handles them exactly. I don’t know how he does backups.

        Clearly this doesn’t scale past a business where you have one book-keeper who can get on THE accounting PC and do their thing.

        Regardless, he doesn’t complain about malware. I suspect he sleeps pretty well at night.

        1. 4

          I rarely hear about air gaps being used for anything but NSA or similar level operations.

          Really? While I’ve personally avoided these places, I’ve known a number of people who needed to leave the secure area to print out entries from Stack Overflow on paper, and then hope that what they printed out contained the information they needed to solve their programming challenge when they returned to their desks. This was private-sector-but-government-contract work, but still a million miles away from “NSA or similar level operations”.

          1. 3

            I’ve known a number of people who needed to leave the secure area to print out entries from Stack Overflow on paper, and then hope that what they printed out contained the information they needed to solve their programming challenge when they returned to their desks.

            It’s because the people who developed those rules saw clever attacks coming, so they put physical, electrical, and then digital separation between different levels of security. Now we’ve got air-gap-jumping malware, people hitting printers, leaks through light bulbs, and who knows what else. The rule you just described might stop a bunch of them from working between those rooms, so long as nobody is using USB sticks or something.

            1. 2

              Just use your smartphone.

              1. 3

                Another solution is to give each desk two network ports: one computer on the internet, one computer on an internal network. That has some issues, though. You have to trust the user not to plug the wrong device into the wrong network or “link all the things!”, so some government agencies actually use a unique network socket and plug for each network, making it physically impossible to hook up the wrong devices. You still want to disable USB/Firewire/eSATA/HDMI/whatever ports that could be used for communication.

                As Nick mentioned, you still have to consider whether these computers might be communicating over back channels such as speakers and microphones, fan speed control, CPU/monitor frequency, HDD noise, or temperature monitors. Hackers are very creative, apparently. Like the bad kids in class, you may have to physically separate these computers in different rooms and sneakernet data or printouts.
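                For what it’s worth, here’s one common way to kill the USB-storage vector on a Linux box. This is only a sketch; the modprobe.d mechanism and the usb-storage driver name are standard, but check your distro before relying on it:

                ```shell
                # Tell modprobe to run /bin/false instead of loading the USB
                # mass-storage driver, so plugged-in sticks never show up as disks.
                echo 'install usb-storage /bin/false' | sudo tee /etc/modprobe.d/99-no-usb-storage.conf

                # Unload the driver if it's already in memory.
                sudo modprobe -r usb-storage
                ```

                Note this only blocks storage devices; a malicious stick posing as a USB keyboard needs separate countermeasures (or epoxy in the port).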

          2. 3

            The situation seems almost exactly analogous to the situations that the FCC and electrical codes were intended to prevent.

            A company isn’t allowed to sell an electronic device that emits strong interference to radio receivers. There are tests these things are supposed to pass. A company isn’t allowed to sell an electrical switch that kills one out of a thousand people. There are tests, similarly.

            But somehow, the concept of “Internet interference” isn’t a thing. Obviously, it should be.

            1. 2

              Who do you sue when the software fails?

              1. 2

                There are various security certification programs around the world… and they’re mostly complete shitshows. Sometimes “compliance” makes actual security worse.

              2. 2

                I’m okay with using computers for critical infrastructure – if and only if the entire stack is formally verified from silicon up. Almost nothing around today meets that standard.

                1. 2

                  I note you said almost. In case someone thought nothing did, I’ll note a few products and projects that could do it but just didn’t get any market demand outside defense. The first was the Computational Logic, Inc. (CLI) verified stack, which resulted in the FM9001 processor. Their tooling also evolved into the ACL2 project, which is used in a lot of hardware verification.

                  https://link.springer.com/content/pdf/10.1007%2FBFb0021724.pdf

                  Next I saw was VAMP, done with the Verisoft project. It’s a DLX-style core (MIPS-like). I don’t know its I.P. status, but someone built on it relatively recently.

                  http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=FB6FCE66F64A71373F63D3B2A5609A6B?doi=10.1.1.217.2251&rep=rep1&type=pdf

                  In high-assurance security, Rockwell-Collins decided to do a fault-tolerant stack machine (the AAMP7G) with a separation kernel built in. Their neat tooling let them verify a bunch of it, plus prove that programs are implemented correctly in assembly. Its registers are tripled, with voting to spot bit flips. They use it in cross-domain solutions (i.e. guards), among other things.

                  https://www.rockwellcollins.com/-/media/Files/Unsecure/Products/Product_Brochures/Information_Assurance/Crypto/AAMP7G_data_sheet.ashx

                  http://www.ccs.neu.edu/home/pete/acl206/papers/hardin.pdf

                  That’s about as good as it gets for an individual component. On top of that, it would seem better to go with a NonStop-style setup if you’re trying to mitigate more risks as far as reliability goes, and CHERI or SAFE for security. I want to see someone combine the best of all of them one day w/ NUMA support. That would be The Ultimate System. :)

                  1. 2

                    Sweet, thanks for the specific examples.

                    Edit: Wow, the VAMP is really impressive. Fully verifying a Tomasulo scheduler must have been a giant PITA. That’s the single most complicated component I’ve ever had the pleasure of building in hardware.

                  2. 1

                    Yeah, verified code and minimal stacks would be ideal. Like, a verified control application running on seL4 (which is itself verified). Even better — verified FPGA circuits when possible instead of general purpose processors. Also, for bonus points, add classic reliability tricks like having multiple implementations and a simple circuit deciding between their commands.
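                    To make that last trick concrete, here’s a minimal sketch of the voting idea (all names and the two-of-three rule here are my own illustration, not from any real system): several independent implementations each compute a command, and a trivial majority vote picks what reaches the actuator.

                    ```python
                    from collections import Counter

                    def majority_vote(commands):
                        """Return the command agreed on by a strict majority of replicas.

                        If no majority exists, refuse to choose: the safe response is
                        usually to fail into a known-safe state, not pick arbitrarily.
                        """
                        winner, count = Counter(commands).most_common(1)[0]
                        if count * 2 <= len(commands):
                            raise RuntimeError("no majority; fail to safe state")
                        return winner

                    # Three hypothetical, independently written controller implementations.
                    def controller_a(temp): return "OPEN_VALVE" if temp > 90 else "HOLD"
                    def controller_b(temp): return "OPEN_VALVE" if temp > 90 else "HOLD"
                    def controller_c(temp): return "HOLD"  # imagine a bug or bit flip here

                    print(majority_vote([c(95) for c in (controller_a, controller_b, controller_c)]))
                    # prints OPEN_VALVE: the faulty replica is outvoted by the other two
                    ```

                    The appeal of the design is that the voter itself stays simple enough to implement as a tiny verified circuit, while the redundant implementations behind it can be arbitrarily complex.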

                    1. 1

                      Even better — verified FPGA circuits when possible instead of general purpose processors.

                      That’s possibly more complex, with who knows what level of verification for the FPGA itself. Hardware is easier to fully verify given its binary nature, and there are good tools for it. However, we already have languages, compilers, and CPUs which are individually verifying or verified. Might as well target our code to them, since verifying the correspondence between our part and theirs might be a smaller job than verifying a whole, new piece of hardware.

                      Since you’re interested in that, though, there is an architectural concept in between doing hardware and working on CPUs. Looks like nobody has submitted it. I’ll try to remember to submit it Monday, when more people are reading and might see it. Made a note.

                2. 10

                  Fun article, but the author has a skewed definition of “firmware”. If you look at the controllers sitting in engines or satellites, “no instructions in it that aren’t pertinent to the job at hand” isn’t what you’ll find. Satellite controllers usually run an RTOS with preemptive threading and all that “fanciness”. Some of them run Lisp. And Linux supports this sort of environment with the RT patchset.

                  Now, if you want to argue that there’s no way a bunch of VC’d twentysomethings should be trusted to write real-time reliable code for a fire alarm, that’s a fair argument. The failure mode of the alarms apparently backs you up. It helps to have an accurate idea of what competent control software looks like, though.

                  1. 2

                    Some of them run Lisp.

                    Actually, that article says they do not run Lisp anymore.

                    This is a very interesting talk by the same author about something slightly different but which touches on the same points: https://www.youtube.com/watch?v=_gZK0tW8EhQ

                  2. 5

                    The author updated the post, in response to criticism of the article’s details, with this statement:

                    “Simply put, if they’d done their due diligence and done their job competently, the reported problems would not have happened. Decades of perfectly functional, reliable alarms prove that. They took a solved problem and de-solved it, made a product worse than it was, and it was a product that couldn’t tolerate that. Excuses excuses, I’m not interested in them and neither is a family without parents or possessions or a home.”

                    Well said. I agree. Regardless of the author’s accuracy on the details, the general point stands: it was a solved problem, they “de-solved it” (I like that), and the crap solution made things worse. They probably should’ve left this area alone.

                    1. 2

                      They are angry that it was posted to HN, though, albeit only after the fact.

                    2. 3

                      This article is a work of speculative fiction. Be sure to read the author’s disclaimer at the end:

                      “This is my opinion. I do not work in this industry or anywhere in hardware or software development. I base this on my autodidactic knowledge of these topics. I do not personally own a Nest Protect, and I don’t know how much of the information I’ve read is relevant to the first version vs. what might be unique to the second version. I did some research, but I wouldn’t say it was extensive because this article is about the industry as a whole”

                      Suggest “rant”.

                      1. -1

                        Nice article.