1. 33

  2. 20

    I don’t think this is controversial. The benefits of producing insecure software outweigh the costs. Or rather, the costs are so externalized and diffuse that there’s no appreciable cost to the producer of said software.

    1. 12

      The car industry went through this. Without a doubt the benefits of cars still outweighed the cost of them being death traps: https://en.wikipedia.org/wiki/Unsafe_at_Any_Speed

      And before that the rail industry. Trains were revolutionary, but it was accepted that even such a basic task as stopping the train would regularly get people killed: https://en.wikipedia.org/wiki/Brakeman

      In comparison, software vulnerabilities don’t seem that bad! But we should still sort this out, since we rely on software more and more, and it will get serious eventually.

      1. 6

        That’s safety, not security, though. Security is tougher because it is an adversarial game with opponents that make counter-moves.

        It’s telling that despite hundreds (thousands?) of years of history, bank buildings and homes are still very regularly robbed.

        1. 1

          But that’s the point of the article. Apart from a few very, very rare cases (Therac-25), bad software doesn’t kill people.

          1. 14

            The point of the article is “benefits outweigh the costs” but your point “the costs are externalized so even if they were high it wouldn’t matter” is a lot more subtle and much closer to the truth.

        2. 7

          Yes. Until there is an organized response to the problem, or a use of governmental power to associate developing insecure software with loss of profits, there will not be any meaningful action on the issue.

        3. 11

          A very simplistic analysis of a much more complex matter. Just to begin with: calling cloud computing/artificial intelligence/online shopping, as they are, “societal gains” without any measure of for whom and when is simplistic and naive.

          The same technologies he describes as societal gains also caused monopolies in the hands of a few mega-corporations and increased the income gap. Those are not societal gains! What is the societal gain of Amazon as a one-stop shop for everything in the USA, one that controls most of the infrastructure of the world’s IT? A lot of money for Jeff Bezos and the destruction of competition?

          The “artificial intelligence” systems we have so far are extremely biased, and when applied to things that have real-life consequences, such as healthcare or finance, they create difficulties for groups that were already at a disadvantage, such as people of color.

          From the narrow point of view of this author, a white male in tech, they might be societal gains, but to the world at large, it’s debatable.

          I am NOT saying software has not created positive outcomes for society. What I am saying is that it’s complicated, and economic and social forces aligned with technological advancements create complex situations where big groups have to deal with consequences that others don’t. The implications of modern technology in our economic framework are catastrophic.

          I’ll agree with him when we manage to put “stop climate change” on the list of “societal gains” that software has achieved.

          1. 1

            caused monopolies in the hands of a few mega-corporations and increased the income gap. Those are not societal gains!

            They also are not societal losses. It is the way of things that companies become monopolies and stagnate and become irrelevant. Income inequality doesn’t hurt people; it just means everybody is getting richer. Through most of history, if you told someone you had a way to make everybody have more money, but the downside was that royalty would be twice as rich, nobody would care that the income gap was larger. They’d just be happy that instead of starving to death they are overeating to a slower and rarer death.

            1. 3

              Income inequality doesn’t hurt people

              That’s patently untrue. It causes:

              • Psychological harm - people are social animals. Poor people who live around other poor people are, by and large, the happiest on the planet - much happier than most wealthy folk who live with the awareness that others have very different lives.
              • Institutional harm - most democracies don’t have a way to protect against the very wealthy e.g. running propaganda to push a policy line that benefits their business.

              Now, if you had said that the harm of income inequality was not as great as the benefits of increased overall wealth, I could see a reasonable argument being had.

              1. 1

                “It is the way of things that companies become monopolies and stagnate and become irrelevant.”

                It’s hard to argue with you, because the problem is that you made some big statements that aren’t true. The inequality point has already been argued by someone else; I’d just like to add that there are in fact still people dying of starvation, homeless people, countries in endless wars… but ok. The “it is the way of things” claim is a weird assumption. To begin with, how long do you think contemporary financial capitalism has existed? It is not the way of things; this illusion of post-history is actually a lack of vision, since you’re talking about the last 100 years at most. And so far, this has not actually been the way of things: income concentration has remained in the hands of the same families since the Renaissance. Not to even mention racial inequality. The companies of the last few decades may have changed names, but not actual ownership…

                Second, you presume that things will keep going as they are, when we in fact know that we are moving toward an ecological catastrophe and that the current economic framework is self-destroying.

                You’re so deep in your ideological convictions that it’s hard to talk to you; you’re as extreme as a hard-left person saying that Stalin wasn’t so bad after all.

                1. 1

                  The companies of the last few decades may have changed names, but not actual ownership…

                  Much of this is too involved to get in a discussion about today, much as I’d love to, but I do have to challenge this point.

                  The 1% in America change every generation or two. The same percentages may be extremely wealthy, but which families are involved absolutely does change.

                  As for businesses, they absolutely are owned by different groups. Sometimes you have the obvious empires - the fast food chains all owned by the same people, the financial centers all owned by the same people (disclaimer: I work at such a financial center) - but those are rare enough, rarely outlast the owner, and are constantly changing anyway.

            2. 6

              I think this article is using the wrong measure, or at least unclear about what it is saying.

              The question is not whether the net value of software is positive, but whether the marginal cost of securing software is greater than the marginal benefit. It clearly isn’t for some companies: Equifax lost billions in market cap.

              1. 2

                Equifax lost billions in market cap.

                That’s an interesting example. Did Equifax lose market share? I.e., was the company’s ability to extract rents from the credit-verification process in any way impeded? Sure, stockholders (among them no doubt the leaders of the company) lost money when the stock went down, but was the company’s bottom line affected in any meaningful way?

                Equifax’s customers (the ones seeking to learn the credit-worthiness of the people who were affected by the breach) weren’t affected. Who cares if the bank from which you’re seeking a loan uses Equifax? Would you forgo a cheaper mortgage from one that does, just out of principle - especially if for all you know the competitors are just as bad?

                1. 11

                  I need to write this up at some point, because the belief that Equifax saw no consequences is really common. The short of it is that Equifax lost several hundred million, multiple executives lost jobs, and the loss of stock market value is important, because

                  1. the stockholders own the company (albeit with all the caveats you have to attach to that) and
                  2. many key players are compensated in stock

                  Everyone involved was incentivized to do better, but they had zero control of their own tech processes.

                  One thing I need to look into further is what drove the stock price to take such a big hit - what does the market expect about Equifax’s future profits? I can’t answer your question about what the impact was to Equifax’s market share, so thanks for that!

                  P.S. None of this is to say that they shouldn’t have faced more consequences.

                  1. 6

                    I look forward to reading anything you write up. I’m sure it’s all a bit more complex than I imagined, and I’d love to know more!

                    1. 4

                      A lot has happened since the news broke. I’m also interested in a detailed write-up on what costs and effects the breach had.

                  2. 1

                    Absolutely, & I think the reason so much software is insecure is that so many devs & managers make the same mistake in reasoning as OP (i.e., doing a cost/benefit analysis of bad software versus no software, instead of comparing bad software to good software, and thus accidentally constructing a set of practices and norms built on the assumption that good software is not possible).

                    The benefits of making marginally better software are generally worth the cost to society as a whole, since the ease of reproduction is an incredible force multiplier: a single developer might take a man-week to fix a particularly nasty bug or a man-month to do a complete refactor, but only days of runtime across only hundreds of running copies need to exist for that engineer time to be amortized in saved machine time.
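
                    As a back-of-envelope sketch of that amortization argument (all numbers below are made up, purely for illustration):

```python
# Back-of-envelope amortization of a bug fix, with made-up numbers:
# one engineer-week (~40 hours) spent on a fix, versus a small
# per-copy saving across a few hundred running copies.
fix_cost_hours = 40.0           # hypothetical: one engineer-week on a nasty bug
savings_per_copy_per_day = 0.1  # hypothetical: hours of machine/user time saved daily
copies = 500                    # hypothetical: deployed running copies

days_to_break_even = fix_cost_hours / (savings_per_copy_per_day * copies)
print(days_to_break_even)  # 0.8 - the week of work pays for itself in under a day
```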

                    Other resources that are harder to quantify are arguably even more important: How much electricity are you saving & what’s the impact on carbon release? How much stress have you saved in the non-technical users who have no choice but to use the application as it exists? A few minutes of work can improve these metrics by orders of magnitude.

                    Now, the costs & benefits to society as a whole are very externalized from the perspective of a for-profit business. It’s almost never worthwhile for an individual developer to make their software any better than ‘good enough’, or for a company to do any more than slightly better than the competition. The bad-software-vs-no-software mindset is aligned in praxis with practices like rent-seeking through frequent upgrades, writing obfuscated code for the sake of job security, hiring only inexperienced developers, letting upper management control the tech stack based on third-hand hype, wasteful project management styles like scaling scrum to hundreds of developers, and shipping big monolithic applications or using web tech.

                    1. 2

                      I like the analogy of user data / personal information to toxic waste. In the “good old times”, industries dumped effluent into waterways without any sort of intervention or regulation. After a while, legislation was enacted to prevent this.

                      GDPR is a step in this direction, I think. As flawed as it is, it does recognize consumers as having rights outside the dictates of a corporation’s EULA or Terms of Service.

                  3. 5

                    I’m trying to understand if the analysis is normative (“why we should not try to make software secure”) or descriptive (“this is why software security is hard to improve”). If descriptive, then it misses the point explained by many above about externalized cost: even when insecurity ends up costing money, much of the cost is externalized and hard to measure. What is the true cost of the Equifax breach?

                    If normative, then I am pretty sure I disagree. For one, it incentivizes companies to externalize the costs of insecurity aggressively. But even more importantly, it ignores black swan events: an attack by a hostile power could kill millions, or tens of millions, even if previous breaches only killed hundreds.

                    1. 5

                      This is partly true (i.e. cost-benefit analysis). Three other causes and/or solutions:

                      (a) Businesses or individuals that want secure, usable software often can’t buy it because the market mostly doesn’t produce it. Vendors don’t produce it because most people don’t buy it; most are too greedy or risk averse to aim for tiny, uncertain, niche markets. There is a market segment for more reliable, private, on-site, and/or secure stuff in various areas that’s not in any of these visualizations, though.

                      (b) Software EULAs are scheming bullshit - specifically, the liability part that makes it harder or impossible to sue vendors for defective software. Lawsuits might address the reliability and security problems of software by putting an incentive for investing in them on the executives’ balance sheets. I bet a lot of people would’ve sued if these EULA provisions didn’t exist.

                      (c) Regulations. In many industries, regulations form to make sure the job is done safely. TCSEC did that for computer security with financial incentives: the market immediately produced the most secure OSes to ever exist, and stopped the second the regulations were changed. In safety/reliability, DO-178B and similar regulations had software vendors using every measure they could to make their products flawless, to avoid the high re-certification cost in the event of a failure to get past reviewers. A graphics driver for Radeons was designed robustly - a first, I think! So, regulations worked, they’d likely work again, and they shouldn’t be dismissed like the article does.

                      So, those are the three things to focus on where small wins can make big changes. User demand is the hardest. I recommend pushing liability and regulations on at least the basics of computer security. These might include memory safety, using secure logins vs telnet, the ability to get systems patched (see the mobile situation), validation of all user input, and so on - general principles that go a long way and that the market often ignores.

                      1. 3

                        I think this is correct, if you replace “societal” with “commercial”. Otherwise the loss side turns out way bigger. Identity theft, stalking, stress, bigger financial losses, loss of privacy, etc.

                        On privacy one can extend a lot, given the fact that I can now type someone’s email address into something like haveibeenpwned.com and similar sites and, in many cases, get an overview of their interests for the last few decades.

                        I also think we are (still very slowly) starting to see political effects due to bad security, with information that was not intended to be accessible (unlike marketing information) being used by individuals or groups to gain or stabilize power, which I personally see as a very big societal impact.

                        Protection of data privacy, which is impossible with bad security, has long been seen as integral to a functioning society and politics. Things like the Bauta (a kind of Venetian mask) show this. We complain about governments and companies having too much insight into people, yet due to the lack of software security, the same insights seem to be easily accessible to anyone with fairly basic knowledge, and the bar to gaining this information in huge amounts is low, given a little bit of criminal energy.

                        I also think that the approach of “only giving information to big companies” is bad. It just reinforces the thinking that security does not matter for small companies. Maybe the best approach would be to make it so that accessing certain private data and reporting that to the authorities results in a financial penalty, which goes to the person reporting the issue to the government. I know there are other approaches, similar to PCI, but I think the measures that can be prescribed that way are way too static. Security depends on many factors, and simply working through a certain set of rules might be better than nothing; depending on the software it might not be fitting at all, and given that there are better or simply newer approaches, there’s a chance they won’t be covered.

                        Of course this is in no way perfect, but in my opinion it’s better than the current status quo, where investing in security is essentially a (financial) net loss for a lot of companies.

                        In addition, I think the majority of issues we see in the area of software security come not from the lack of a secure language, some compiler flag, the underlying operating system, and oftentimes not even from horribly outdated machines/containers/… (even though they exist), but simply from very badly written software. An indicator of that is that there are still many cases where passwords are stored unhashed. Fixing this is absolutely minimal effort, usually only involving fetching a bcrypt or scrypt library and updating the lines where the password is stored and verified accordingly. There are many other such cases, such as switching off a port, or using parameterized arguments (prepared statements) in SQL queries rather than string interpolation, etc. These involve a very minimal amount of investment, and, for example in the case of prepared statements, might have benefits beyond security. Oftentimes thinking about these things will also make it easier to spot bugs and will increase maintainability and extensibility.
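
                        To give a sense of how small that investment is, here is a minimal sketch of both fixes using only the Python standard library (scrypt via hashlib rather than a third-party bcrypt package; the table, names, and scrypt parameters are illustrative, not a tuning recommendation):

```python
import hashlib
import os
import sqlite3

def hash_password(password, salt=None):
    # Store a salted scrypt digest instead of the plain-text password.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, digest):
    # Recompute the digest with the stored salt and compare.
    return hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1) == digest

# Parameterized SQL: values travel separately from the query text, so
# attacker-controlled input cannot change the query's structure.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, salt BLOB, pw BLOB)")
salt, digest = hash_password("hunter2")
conn.execute("INSERT INTO users VALUES (?, ?, ?)", ("alice", salt, digest))
row = conn.execute("SELECT salt, pw FROM users WHERE name = ?", ("alice",)).fetchone()
print(verify_password("hunter2", row[0], row[1]))  # True
```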

                        And yes, there are of course more involved attacks, and there are differences between kinds of attacks, but here how interesting a target is to an attacker scales a lot more with the size (and usually income/funding) of a company.

                        1. 3

                          Agree with the analysis and as the other comment mentions, there are clear reasons for this.

                          “Number of people killed by bad software” is an interesting one. There are certainly the classic stories of things gone wrong (Therac-25 comes to mind), but I imagine that a software failure often occurs somewhat invisibly to the people whose lives depend on it - hidden amongst a tumult of other failures.

                          The Boeing MAX planes also come to mind. Certainly, compared to other things that can kill us, software is a negligible slice, but it’s not zero.

                          As we continue to have more software in the world and depend on it for more and more, the number can only go up. I wonder though if it will rise disproportionately with the distribution of new software systems: will we care less about safety as we grow?

                          1. 3

                            The Boeing MAX planes also come to mind.

                            I believe that wasn’t really a software failure. Oh, it was definitely an engineering clusterfuck because they wanted to save money on re-certification:

                            • Aerodynamically unstable design (so they could make bigger, more fuel efficient reactors).
                            • Botched redundancy (the left computer used the left sensor, the right computer used the right sensor, and there was no way to tell which computer was right when one sensor (inevitably) went south).
                            • Limited pilot training, that hides the differences of the MAX under the carpet.
                            • Difficult to override automatic controls (the pilots basically have to lift weights to be able to counter the nosedive).
                            • […]

                            Forgot where I saw it, but a pilot wrote a painstakingly detailed review of the debacle. If someone can find the link…

                            That said, whether it was a software failure or something else doesn’t really matter. We make stuff, and bad things happen when it breaks. Software shouldn’t be treated any differently. (And in the case of the MAX, they certainly expected software to compensate for the physical shortcomings of the plane. Too bad it didn’t, I guess…)

                            1. 3

                              so they could make bigger, more fuel efficient reactors

                              I think you just meant to write “engines” here.

                              1. 2

                                Crap, I did.

                                1. 1

                                  I figured you did :D An older name for a jet plane in Swedish is “reaplan” (“plan” is plane, and “rea” is from “reaktionsmotor”) and it has the same root.

                              2. 1

                                Fair. I was considering the software overcompensation for a physical failure as a software failure, but as mentioned, “tumult of other failures” might be over-blaming the software.

                                1. 2

                                  Note: I believe the software people ought to have noticed this: each computer relied on one sensor, and then you have to resolve the conflict whenever they disagree - with only two systems, not three as is commonly seen in vote-based redundancy systems. Actually, I’m pretty sure a number of engineers, software or otherwise, did notice something fishy was going on. They probably told their hierarchy too. Yet someone somewhere still decided to go through with this.
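
                                  A toy sketch of that distinction (illustrative only, obviously not avionics code): with three inputs a single bad sensor is outvoted, while with two you can detect disagreement but not resolve it.

```python
def vote(readings):
    # Median of an odd number of sensor readings: one wild value is outvoted.
    return sorted(readings)[len(readings) // 2]

def agree(a, b, tolerance=1.0):
    # With only two sensors you can flag a disagreement,
    # but you cannot tell which of the two is the faulty one.
    return abs(a - b) <= tolerance

print(vote([5.0, 5.1, 90.0]))  # 5.1: the faulty 90.0 reading is outvoted
print(agree(5.0, 90.0))        # False: fault detected, but unresolvable
```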

                            2. [Comment removed by moderator pushcx: Insults and attacks are not OK.]

                              1. [Comment removed by author]

                                1. [Comment removed by moderator pushcx: Insults and attacks are not OK.]