You see this in a lot of computing domains that have genuinely hard problems, not just security. Many practitioners in our profession claim that they “hate all programming languages” or that “security is impossible, so why try”. Half of it is that progress in computing is really glacially slow: fads come and go quickly these days, while very little of the fundamental bedrock of our computing infrastructure improves on an observable timescale. Things do improve; it just typically happens on the order of decades.
Nihilism is easy; putting in years of work to incrementally improve the state of computing is hard.
One way to feel a bit better is to pull out a good book from 20 years ago. It’s neat to see which problems got solved faster than anyone expected, and not so neat to see which problems persist (only under newer names, like technical debt)…
progress in computing is really glacially slow
Maybe from an individual’s perspective, but I think it’s worth considering that the art of software engineering has only really existed for roughly the length of a human’s lifetime.
We’re just getting started.
I can only approve of the language-centric tone of the recommendations, but as a nitpick it should be pointed out that “propositions as types” is not actually directly related to producing more robust (or secure, or correct) software. It is an interesting and scientifically useful connection, but it does not by itself belong in a list of techniques meant to make your programs safer. More precisely, it relates programs written in some strongly typed languages to proofs of certain properties in some logic, but those properties are not about the programs themselves. (It can serve as a guiding inspiration for designing type systems that guarantee interesting properties, as in the work on temporal logics for example, but that is a fairly far-fetched second-order effect.)
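To make the distinction concrete, here is a minimal Haskell sketch of the correspondence (the function names are illustrative, not from any library). Under Curry–Howard, a type is read as a proposition and a total program of that type as a proof of it:

```haskell
-- The type (a, b) -> a corresponds to the proposition (A ∧ B) → A;
-- writing a total function of this type *is* the proof.
andElimLeft :: (a, b) -> a
andElimLeft (x, _) = x

-- a -> b -> a corresponds to A → (B → A).
weaken :: a -> b -> a
weaken x _ = x
```

Note what was proved: tautologies of propositional logic, not any safety or security property of the code itself, which is exactly the gap the comment above points out.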
I agree with this bit,
Often, the problems are economic, political, and even inter-personal.
But as a result, I think the list, which is focused mostly on techniques for engineering more secure software, is only a part of the picture. Another part is incentives, especially for large companies: why should they care about security breaches? In many cases, the breaches aren’t even technical ones caused by software flaws, but purely the result of company processes. For example, a huge list of personal information is leaked because the data-management strategy chosen by the company is “mail around Excel files to a large number of employees”, which makes it nearly inevitable that eventually some employee will sell or accidentally give away one of the files. Aggregation of personal data for marketing purposes is another company process that makes big leaks more likely, though in that case software flaws are often involved in the actual leak. I think we need to find better ways to address the incentives here if the situation is to improve.
The solution is rigorous caring, across the board. This is only accomplished by regulatory scrutiny or customers that radically reject insecurity. People who are passionate burn out or wind up hiring people who care less: the law of large numbers drives all large companies to mediocrity.