He’s right. The shared trust model the internet was built on is due for an overhaul. What’s not clear to me is how we keep the open playground we have now and yet provide some controls to keep the threats we’re seeing now more easily in check.
I strongly agree with your assertion.
There are people thinking about this, especially in the Wireless Community Network scene (and surrounding scenes), as well as people doing F2F and darknet (not the usual suspects there) stuff and wanting to bring concepts up a layer or two.
I really hope that the future will still be open, but with the problems fixed, rather than relying on centralization and politics as a hack. By “the future” I mean developments that we, as the “community” (the user/developer base), work on and already support to a large degree.
There are projects like Netsukuku, cjdns and many others that go in that direction. I know they are far from perfect, and people sometimes laugh at them, but they are still the best we’ve got.
About “The internet era of fun and games is over”: the article mentions the IoT-based attacks. This might be a bit cynical, but that, together with some other emerging problems (huge amounts of unsecured, never-updated cloud services), makes it feel like the era of fun and games is just starting for some people. As Schneier mentions, complexity is the worst enemy of security, yet complexity throughout the industry is growing at an incredible speed, often unnecessarily.
I disagree with the statement that the government needs to be part of the solution, or at least not much more than it already is. What I mean by that is that it shouldn’t be the government making rules about how to secure a system, but rather rules about responsibility, which essentially already exist. Culpable negligence on the side of “cheap devices” (the phrase used in the article) was the reason for various recent problems. And at least some degree of negligence is the reason for most successful attacks on the internet.
What I want to say is that one should be careful about government-defined standards for security. As Schneier and others say, security is a process, not just a static set of rules. That’s why I think it would be more reasonable for the government not to define such a set itself (as with PCI, for example), but to incentivize the result (good security) by punishing the loss of customer data or a DDoS enabled by IoT manufacturers.
Security evolves, and having just a static set of rules neither helps with that evolution nor is a reasonable approach to security. I might be wrong, but I think such a static set of rules about security could lead to both companies doing only the bare minimum and an industry that tries to achieve compliance with the bare minimum of effort and resources. While low resource usage could be seen as something good, in this case it will likely lead to compliant negligence in the corners that aren’t mentioned: either because it’s such an edge case, because it’s a recent development, or simply because a company’s systems and infrastructure are specific enough.
I think it could potentially lead to a stagnation of research in this area, as new concepts are usually not covered by standards, and when you approach a problem from a novel position you may require a different set of best practices and may obsolete existing ones.
Now, despite what I wrote, I don’t think standards and best practices are bad. I mostly think that it might be very hard or even impossible to create a set of rules that is somewhat flexible for the real world, yet covers more than the basics that most people with even a passing interest in IT security know anyway. And those basics are probably insufficient against many of the attacks we have seen and will be seeing in the future. So it might be better than nothing, yet not really a big part of a solution.
In other words, I think governments are needed, but one should keep one’s scientific skepticism about any proposed solutions, especially when it comes to shortcuts that might have no effect, or even a counterproductive one, in the real world.
Hadn’t heard about Netsukuku, thanks for that. That this work is happening is heartening, but the key is adoption. These projects have to become widely used enough to make a dent and re-create the community elsewhere.
What I mean by that is that it shouldn’t be the government making rules about how to secure a system, but rather rules about responsibility, which essentially already exist.
I agree that a government-mandated security checklist is probably not the right answer. Responsibility is the way to go. But in what sense does that system already exist?
Other engineering professions have a “professional engineer” license, which is not only a proof of competency but a code of ethics. Ideally we would all behave ethically anyway, but holding a license like this could provide more leverage against unreasonable demands of management.
Take the Volkswagen emissions scandal. Allegedly management had no role in the decision to subvert emissions tracking, and it was entirely the fault of “a couple of rogue engineers”. If that is true, those programmers should lose their (hypothetical) licenses. If not, perhaps being able to say “I won’t do this because it’s unethical and I could lose my license” would have helped them push back against management.
which is not only a proof of competency
Licenses are neither sufficient nor necessary for a demonstration of competency. The only thing a license says is that you were able to pass an exam that some (probably small) group of people invented.
Right, that’s fair. I should have said evidence of competency. The responsibility part is what I care more about.
I’m with Bruce on bringing in regulations or liability. I know it brings a ton of speculation and counterpoints when brought up. I think the ones I see on HN are worth addressing collectively, since they turn up constantly on any forum. They reflect a consensus of concerns among IT and business people in general. Here goes, with the concerns paraphrased.
“I don’t know if security for software could be encoded into regulations.”
“If it could be, I’m not sure it could lead to more secure software.”
It was done successfully before. Resulted in most secure products ever to exist. A few still use such methods with excellent results during pentests. Also preempted major vulnerabilities in popular stuff. Such methods would’ve also prevented a good chunk of Snowden leaks and TAO catalog. Bell, of Bell-LaPadula security model, describes it here:
Examples included Boeing SNS server (going strong 20+ years), BAE XTS-400, Aesec GEMSOS, esp KeyKOS, a secure VMM, an embedded OS in Ada, GUI’s immune to keylogging/spoofing, databases immune to external leaks, and so on. CompSci projects with practical bent had even more stuff. Such research continues today but is a trickle compared to, say, Java extensions or machine learning stuff. Same thing happened with DO-178B, etc for safety-critical markets: tons of high-quality components showed up with many reusable.
“So then I guess you also want a Personal Computer or Home Security System to be $1,000,000.”
“You’re mentioning an industry funded by about 1/6th of the Federal Budget related to war and consequences if things go wrong. Just cuz it worked for the DOD doesn’t mean it will work for anything else.”
If we’re increasing the baseline, there are companies the size of startups that do it all the time while remaining lean in expenses and fast in development. An example of a low-defect methodology was Cleanroom, which varied from reducing costs, to neutral, to slightly increasing them. Complexity could still be high, as Cleanroom just forced better structuring. Cost and speed were overall unaffected on average, since up-front quality reduced debugging and maintenance-phase costs so much. Finally, for high assurance, applying it to well-understood problem areas… such as TCB’s, VPN’s, or compilers… ranged from a 35-50% premium on top of the normal development process. A nearly unhackable Windows at $150 instead of $100 sounds like a steal.
Main drawback of highest security is loss of development speed. The amount of rigor on problems with unknowns means you are going to spend quite a bit of time modeling them, analyzing them, prototyping them, pentesting that, and so on. Lipner, when leading the high-assurance VAX VMM effort, said it took “two to three quarters” to implement a major change in the product. Probably weeks to a month with the crud approach common in industry. They also had fewer features due to (a) the need to figure out how to secure them and (b) especially the complexity and inherent insecurity of many standard features. High-assurance systems would need to pick ZeroMQ over OLE or CORBA, JSON over XML, native apps over web apps, Modula-3 over C++, musl over whatever GNU builds, and so on. Throwing stuff together with no thought about the effect of its architecture, coding style, or implementation tooling will go down in a huge way for products with high-assurance requirements. This would be opposed by a lot of people. Even “INFOSEC” people that I see. ;)
Quick note on pricing. Remember that the extra cost gets spread out among large numbers of users. If one survives the initial development, then the resulting software gets cheaper as the user base grows. The idea that we’d each be spending a million for Windows or Oracle is ridiculous. On a per-customer basis, it would probably have the same ridiculous price it has now if built to medium assurance under Cleanroom with a safe language.
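To make the amortization point concrete, here’s a back-of-the-envelope sketch. All figures are hypothetical, chosen only to show the shape of the math, not real development costs:

```python
# Illustrative amortization of a security premium over a user base.
# The base cost, premium, and customer count are made-up numbers.

def per_customer_cost(base_dev_cost, assurance_premium, customers):
    """Total development cost (base plus premium fraction) divided across customers."""
    total = base_dev_cost * (1 + assurance_premium)
    return total / customers

base = 100_000_000      # hypothetical baseline development cost ($)
premium = 0.50          # 50% premium, the upper end of the 35-50% range above
users = 10_000_000      # hypothetical customer base

baseline = per_customer_cost(base, 0.0, users)    # $10.00 per customer
assured = per_customer_cost(base, premium, users)  # $15.00 per customer
print(f"per-customer: ${baseline:.2f} baseline vs ${assured:.2f} with premium")
```

The point being that even a 50% premium on the whole development effort shows up as a few dollars per copy once spread over millions of users.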
“The certification costs might kill my startup.”
Editing to add this risk I left off. It’s real. One alternative for readers to consider is to apply the evaluation criteria in court, after harm is done. The idea is you don’t pay anything upfront: just apply security methods, with documentation that you did. If your product causes harm and you’re sued, the court can ask a licensed/certified evaluator who knows INFOSEC to look at your product’s design, code, configuration, and so on to compare it against published standards. If deviations resulted in the harm, then you’re found liable. The fines or damages awarded go up with the amount of deviation plus the severity of the harm. Companies can also differentiate on security by getting such evaluations ahead of time, like some already do. That’s optional, though.
“The free market crowd would argue that the surviving companies will bake in the right security if consumers demand it. If companies don’t take it seriously, either their customers aren’t demanding it, or they will be replaced by companies that do a good job at it.”
I argue this myself. However, let’s include the caveats of cartel and otherwise malicious behavior against consumers that reduce the impact of that preference:
Companies lie to customers about how necessary these vulnerabilities are. They condition them to expect it. They also charge them for fixes. It takes almost no effort to knock out the common ones, with only a 30-50% premium for high assurance of specific components. Even premium producers often don’t do either, and those that do are so rare that most consumers or businesses might never have heard of them.
Years of lock-in via legacy code, API’s, formats, patents, etc. mean consumers often don’t have a choice, or only have a few if they want the modern experience. Many times specific choices will even be mandated by groups like colleges. The market created the problem and now milks a captive audience out of its money. It won’t solve that problem no matter what consumers want.
First-mover advantage, lock-in techniques like obscure formats, and patents combine by themselves to give rise to the situation we’re in. Two of those are protected by government, so they will need to be solved at the government level. Or just force the incumbents to provide increased security in whatever they lock us into.
“I mean the only reason lawmakers and regulators are not all over this issue is because they don’t realise how bad things are.”
They realize it. Impenetrable systems are also impenetrable to the FBI and NSA, which advise against the good stuff being mandated. The bribes they take from COTS vendors also brought in a preference for insecure solutions from said vendors. Everyone paying them wants to maximize profit, too. Costly rewrites would cut into that.
So, they’re willingly covering their ears while sitting on their asses. At least those on major committees.
“What would a better version of the situation look like that would preserve benefits of the Internet while dealing with events like massive DDOS’s?”
A combo of per-customer authentication at the packet level, DDoS monitoring, and rate limiting (or termination) of specific connections upon DDoS or malicious activity. That by itself would stop a lot of these right at the Tier 3 ISP level. Trickle those suckers down to dialup speeds with a notice telling them their computer is being used in a crime, with a link to helpful ways of dealing with it (or a support number).
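A minimal sketch of the per-subscriber throttling part of that idea, using one token bucket per customer. The class names, rates, and the `flag_as_compromised` helper are all made up for illustration; a real ISP would do this in the data plane, not in Python:

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter, one per subscriber."""
    def __init__(self, rate_bps, burst):
        self.rate = rate_bps            # tokens (bytes) refilled per second
        self.capacity = burst           # maximum burst size in bytes
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, nbytes):
        """Refill based on elapsed time, then pass or drop this packet."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

# On detected malicious activity, the ISP swaps the subscriber's bucket
# for a "dialup-speed" one instead of cutting them off entirely.
NORMAL  = lambda: TokenBucket(rate_bps=12_500_000, burst=1_500_000)  # ~100 Mbit/s
PENALTY = lambda: TokenBucket(rate_bps=7_000, burst=1_500)           # ~56 kbit/s

buckets = {"subscriber-42": NORMAL()}

def flag_as_compromised(subscriber):
    """Trickle a flagged subscriber down to dialup speeds (hypothetical hook)."""
    buckets[subscriber] = PENALTY()
```

The key property is that the penalized customer keeps enough bandwidth to read the notice and fetch cleanup instructions, while their usefulness in a flood drops to nearly nothing.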
As far as design goes, they could put a cheap knockoff of an INFOSEC guard in their modems, with CPU’s resistant to code injection. Include accelerators for networking functions and/or some DDoS detection (esp low-layer flooding) right at that device.
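As a toy model of the low-layer flood detection such a modem-level guard could run, here’s a sliding-window counter that flags sources exceeding a packet-rate threshold. The window and threshold values are made-up numbers, and real hardware would do this with counters, not a Python deque:

```python
from collections import Counter, deque
import time

class FloodDetector:
    """Counts packets per source over a sliding time window and flags any
    source that exceeds a threshold (e.g. low-layer flooding)."""
    def __init__(self, window_s=1.0, threshold=1000):
        self.window = window_s
        self.threshold = threshold
        self.events = deque()           # (timestamp, src) pairs, oldest first
        self.counts = Counter()         # live per-source counts in the window

    def observe(self, src, now=None):
        """Record one packet from src; return True if src looks like a flood."""
        now = time.monotonic() if now is None else now
        self.events.append((now, src))
        self.counts[src] += 1
        # Expire events that have aged out of the window.
        while self.events and now - self.events[0][0] > self.window:
            _, old = self.events.popleft()
            self.counts[old] -= 1
        return self.counts[src] > self.threshold
```

A positive result would feed the rate-limiting machinery upstream rather than drop traffic outright, since bursty-but-legitimate traffic can trip simple counters like this.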
Here’s an old one from the high-assurance field, albeit with a medium rating, that did what I’m describing in an Ethernet card computer:
A modern implementation could probably be done as a cheap clone and security-enhanced mod of this product: