Reposting a few comments I made on HN about this.
Opponent: “It’s all fine and well until one of those improperly configured devices is a medical device or something critical.”
Me: “It would be their fault. The high-assurance industry has been telling the SCADA and medical industries to get their shit together for a long time. This included pentests showing it could all be destroyed. They even have people at conferences talking about it, with products or basic advice to deal with it.
The reason it’s all still vulnerable is that they… don’t… care. They turn whatever small amount of money the security would’ve cost into profit. I mean, in some cases we’re talking about one-way remote monitoring that could be done with a data diode for nearly impenetrable security. Cheap as hell if you homebrew it on cheap, embedded boxes. Likewise for a FOSS VPN if two-way is required. Instead, it’s a costly system connected to the wide-open Internet to save a few hundred dollars. They just don’t care.
So, you have to make them care. The customers don’t as much since they often don’t know better. Those that do are apathetic since it will be someone else’s problem. That’s the best moment for regulation to step in to force a solution. There’s no regulation, though. Courts seem unreliable on this, but there’s still some hope there. So, your options are waiting for them to hit you, paying exorbitant costs for DDoS mitigation due to problems others are creating (i.e., externalizing), or maybe a criminal just smashes the insecure devices until people stop buying them or manufacturers start securing them. So, I like what’s going on, given nothing else is reducing risk as effectively.”
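The one-way monitoring idea above is simple enough to sketch. This is a minimal, hypothetical illustration (the function names and the telemetry schema are mine, not from any real product): the sender fires readings over UDP and never reads a reply, which is the software analogue of what a hardware data diode enforces physically, while the receiver validates everything before trusting it.

```python
# Hypothetical sketch of one-way telemetry, as a hardware data diode
# would enforce physically. UDP here gives the monitored device no
# application-level channel back from the monitor.
import json
import socket

def send_reading(reading: dict, host: str = "127.0.0.1", port: int = 9999) -> bytes:
    """Serialize a telemetry reading and fire it one-way over UDP."""
    payload = json.dumps(reading).encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload, (host, port))  # fire and forget: no recv() anywhere
    sock.close()
    return payload

def parse_reading(payload: bytes) -> dict:
    """Receiver side: validate external input before trusting it."""
    reading = json.loads(payload.decode("utf-8"))
    if not isinstance(reading.get("temp_c"), (int, float)):
        raise ValueError("malformed telemetry")
    return reading
```

The point of the sketch is the asymmetry: even if the monitoring station is compromised, there is no inbound path to the device it watches.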
One person said security was really hard. Improving it for IoT, though, has easy solutions.
“Put OpenBSD and OpenSSH on them with configuration explained in a good book on the subject. Write your apps in a memory-safe language that validates external input. The End [for the vast majority of attacks in the IoT space]. It’s not as hard as you detractors claim. They just don’t care.”
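As a toy illustration of the “memory-safe language that validates external input” half of that advice, here is a hedged sketch (the command set is made up): anything not on a short allow-list gets rejected before it can touch the device.

```python
# Hedged sketch of allow-list input validation on an IoT control channel.
# The commands are hypothetical; the point is that input not on the
# allow-list is rejected outright rather than interpreted.
ALLOWED_COMMANDS = {"status", "reboot", "get_temp"}

def handle_request(raw: bytes) -> str:
    """Validate an external request before acting on it."""
    try:
        text = raw.decode("ascii")
    except UnicodeDecodeError:
        return "ERR bad encoding"
    cmd = text.strip().lower()
    if cmd not in ALLOWED_COMMANDS:
        return "ERR unknown command"  # never shell out or eval raw input
    return "OK " + cmd
```

This is the cheap end of the spectrum: no parser generators, no crypto, just refusing to act on anything unexpected.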
Some brought up regulations or how engineers do things in safety-critical fields. Predictably, people who haven’t looked hard claim it can’t be done for software safety or security. I point out it was done for both… is still being done for safety… with good results.
“The only time vendors ever delivered secure or safe solutions was when sound regulations were forced on them with a requirement they were followed before a purchase was made. That was TCSEC and DO-178B respectively.”
Many also compare it to things like stealing cars. I argue it’s more like smashing risky, defective products that keep harming things.
“Your counter and metaphor don’t really apply here. Let’s look at why:
A car is a necessity that costs a ton to replace. An Internet-connected camera or TV isn’t. They could just as easily not buy an Internet-enabled appliance.
These devices are being used as weapons when people leave them lying around insecure. Leaving loaded guns lying around is a bit closer, minus the lethality.
With cars, we have efforts on safety and security on the user side, on the manufacturer side, and in the law. There’s no effort by these users to buy secure IoT, no minimum protections at manufacturing, and no laws putting liability on users or manufacturers where it should be. Now, it’s more like a car with defective parts that make it hit other cars. A city’s worth are affected, with nobody taking action, but people are told armored cars are available for a fortune.
So, these comparisons to highly damaging thefts of legit goods from innocent people are nonsense. These are defective products damaging innocent people. Nobody with power to prevent or punish it legitimately is doing anything. I’m happy that a vigilante is reducing risk to Internet hosts, plus putting a cost on those responsible for that risk.”
Finally, I tried to brainstorm worst-case scenarios of a home device killing or really harming someone. Didn’t think of many, and most can’t be done by bricking. They exist, though.
“I was particularly thinking of baby monitors during an emergency. That was the most important household device I could think of in terms of harm. Maybe turn off the freezer on an IoT fridge while people are on vacation, then back on just before they return so the spoiled meat refreezes. Maybe turn off the power in a household with IoT home automation and someone on life support of some kind.
I’m only having a few possibilities come to mind that are life-threatening. Most are just annoying or a financial drain. If we add painful, maybe make an epileptic’s screen on a SmartTV blink fast, like the attack on the web site. Turn off people’s alarm clocks enough that they get fired and lose health insurance before a major operation. I’m really having to stretch it here.”
All you have to do is wait until some IoT insecurity issue causes somebody to die and then we can fix it. Hopefully it is a cute little girl.
That’s how it works with humans.
Possibly true. The Ford Pinto comes to mind. That thing burned up many, many people before the problem was solved.
I honestly think that a lot of the commenters on HN for this story rather missed the point of what janit0r was doing.
It’s probable. They also tend to oversimplify on the moral end. The situation is complex if one is trying to get a solution. Yet so many were utterly certain his approach was evil whereas allowing the status quo to continue was good. They didn’t seem to realize that doing nothing is the same as doing something (letting many services get DDoSed) when there’s something one could do about it.
I recognize that what janit0r is doing here is illegal, but I would rather we get a law on the books that made it legal (provided it were restricted to trivial, disclosed, non-0-days), and made it standard practice that in order to sell your IoT device, it has to survive those sorts of tests. Maybe make it part of FCC certification or something.
We’re not talking about 0-days here. We’re talking about devices so badly configured they made the news (as used by Mirai), and, if janit0r is being honest, the devices that get bricked rather than patched are at a comically bad level of security, where the password could not be changed even if he wanted to (see what the article has to say about Plan B). In this case, taking them down rather than waiting for them to be used in another DDoS seems like an attractive idea.
It’s like a bunch of stereos or loudspeakers left on outdoors that can be remotely tuned and activated by anyone who looks at a well-known GitHub repository, and not expecting them to be used to blast crazy loud music and/or disturb the peace. If something like that were pulled off and wasn’t stopped by law enforcement, of course people would start destroying them by various means.
I don’t know of a way to do it legally, but I think a constant background scan of this sort would do a great service in terms of forcing IoT manufacturers’ hands, rather like the malware landscape around XP forced Microsoft to get better at securing Windows.
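The legal, self-audit flavor of such a scan is easy to sketch: just check whether a device you own still answers on telnet (TCP 23), which was Mirai’s main way in. This is a minimal sketch, not a scanner; the hostname in the usage comment is a placeholder, and it should only be pointed at hosts you control.

```python
# Minimal self-audit sketch: does a host I own still expose a TCP port?
# Telnet (port 23) was Mirai's main vector. Only scan hosts you control.
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("camera.local", 23) -- hostname is a placeholder
```

A cron job running this against your own address space, with an alert on any hit, is roughly the defensive half of what janit0r is doing offensively.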