People rarely value reliability. It’s a waste of time to spend any energy on it unless you are contractually liable for the problems that pop up in what you sell, or in what you buy to support customers. Even then, there are often perverse incentives to build shitty things, because who would want to spend money on a support contract for something that works? It makes much more sense to build for the interface today.
Huge opportunities for taking advantage of externalities! Great time to sell snake oil :)
My definition of success: damage is unlikely, businesses aren’t forced to shutter when they get targeted by an attack, and development is not dramatically hindered by policy.
IoT poses a huge threat to the stability of the internet, but manufacturers don’t pay any of the costs of a massive zombie-toaster attack. One useful function of government is to compel people who impose costs on others to cover the bill. If you’re making money by building these DDoS agents with toaster functionality, maybe you should pay the bill when people use them to cause damage. Maybe you should have the option of buying coverage from an insurance company that charges you a regular fee based on its assessment of how likely your products are to be weaponized and what a likely attack would cost it to cover, plus a markup. You probably won’t pay very much if you are using memory-safe, well-tested components that are regularly updated. Such a market will appear quickly once there is any money in it whatsoever, and open-source responses will follow that further improve price/performance.
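The pricing logic such an insurer might use can be sketched as a toy model. All probabilities and damage figures here are made-up illustrations, not real actuarial data:

```python
# Toy model of risk-priced liability insurance for an IoT vendor.
# p_weaponized and expected_damage are hypothetical inputs the insurer
# would estimate from an audit of the vendor's devices.

def annual_premium(p_weaponized, expected_damage, markup=0.2):
    """Insurer's expected payout for one year, plus a markup."""
    expected_payout = p_weaponized * expected_damage
    return expected_payout * (1 + markup)

# A vendor shipping unpatched, memory-unsafe firmware:
risky = annual_premium(p_weaponized=0.30, expected_damage=1_000_000)

# The same vendor after moving to memory-safe, regularly updated components:
careful = annual_premium(p_weaponized=0.02, expected_damage=1_000_000)

print(f"risky vendor:   ${risky:,.0f}/yr")    # $360,000/yr
print(f"careful vendor: ${careful:,.0f}/yr")  # $24,000/yr
```

The point of the sketch is just that the premium scales linearly with assessed weaponization risk, so demonstrable engineering practices (memory safety, regular updates) translate directly into lower recurring costs.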
I’m not sure that containers, microservices, MAC, or defense in depth in general are all that effective against the IoT existential threat: if you can cause the toaster to send a packet where you want, that’s the end game. Model checking is great if you can afford it, and we should absolutely be doing it for massively deployed, rarely changing core libraries, but it’s pretty cumbersome to have feature development blocked on it. Looking at the things that commonly get weaponized, so much of the damage would have been prevented by using a memory-safe language. Memory-safe languages have tended to be too slow, though, for things like web browsers, parsers, and Flash VMs. I hope Rust or something like it starts seeing much more widespread use in our widely deployed systems, but the incentives aren’t currently there.
I’m pretty sure nobody wants to pay for the damage caused by weaponized toasters, so maybe real liability for those responsible for their existence should be a thing.
The four ways that government generally influences business are taxes, regulation, lawsuits, and subsidies. In the world of IoT, only regulation is capable of improving the situation. For taxes, how would one formulate a tax that discourages the sale of insecure devices? For subsidies, attempting to subsidize secure devices would be very expensive. Lawsuits don’t scale when everyone is making insecure devices, and there’s no guarantee the government would actually win those suits. That leaves regulation: requiring that businesses secure their devices to meet predefined standards, and defining consequences if those standards aren’t met. Unfortunately, the current political climate in the US is strongly anti-regulation, so regulation at the federal level is unlikely to happen without some large-scale and substantially harmful event tied to the sale of insecure devices.
As a non-governmental alternative, some have suggested private certification of device security. The problem is that insecure devices would remain on the market, and would likely sell for less because they don’t bear the cost of going through the certification process. While some consumers would choose the certified secure devices, I don’t imagine most would, so this option would be insufficient to address the problem.
On the ISP side, I don’t think that level is the place to tackle this, if for no other reason than the battles that have been and are still being fought to get ISPs to treat all data the same. Encouraging them to muck around with what is being sent through their pipes seems to be asking for trouble from organizations that have shown themselves to be untrustworthy stewards of crucial communication infrastructure.
Unfortunately, the one you left off applies: influence of government by the private sector in the form of bribes. The industry has successfully lobbied against software liability and regulation for quite a while, including in defense contracting. The prior model, which actually produced systems with strong security, was overridden by the COTS acquisition mandate (pushed through via lobbying) and NSA’s politics.
So, it could get better, but there are people paying for it to remain the same.
Yeah, I was addressing ways in which the government can influence business, not the ways in which business can influence government. Going from government to business, a bribe would instead be called a subsidy.
Interestingly enough, they already subsidize development of software and security techniques. They just don’t mandate that it get mainstream deployment, or be released as FOSS to benefit America as a whole, which limits the benefit to the company doing the development. The good news is that many CompSci groups are doing it anyway; we just need more people building on that stuff.
A partially broken system can pay for its own repairs and maintenance a lot of the time, so economically it makes sense to deliver a partially working system first.
The perception of reliability is highly valued. That’s where the cultural expectations and political games that programmers hate come into play. The problem is that technical reliability requires technical excellence and that tends to require creative excellence and that is the opposite of (superficial) personal reliability.
Technical reliability is a system that rarely falls down and does no harm when it does. It matters objectively. Personal reliability is showing up for meetings on time, not pissing people off, and not showing displeasure or having a drop in performance when decisions are made against one’s career goals: stuff that matters only because people who have power decided that it does.
In other words, so long as programmers are judged based on superficial personal reliability, technical reliability will get the shaft. They pull against each other. Law firms and investment banks demand high personal reliability but have a low quality of technology: if you demand that smart people throw down extra hours and cover gaps, there’s no need to do it with machines. Everyone probably knows that, but I don’t see it changing, given that most programmers report to non-technical businessmen who hear about “zombie toasters” and think it’s ridiculous… oblivious to the fact that their cars are computers that drive, and that their phones are computers that make calls, and that their lives depend on often-ancient programs that route electrical power and that run the ventilation systems necessary for making office buildings habitable.