1. 32

  2. 10

    As noted on another thread, sound regulation is the solution, since it has already worked at getting the supply side to produce what buyers need:

    “Just a regulation forcing high-quality development practices followed by strong review will drive quality up. That was evidenced by the TCSEC for security and currently by DO-178C for safety in aerospace. The latter created an entire ecosystem of tooling, including model-based development, static analyzers, certified compilers (or insert tool here), safer languages, graphics stacks, and even companies expediting certification-oriented tasks. Each component is done as well as it can be, since re-certification likely costs more than removing the defect before certification. Then they re-use what components they can, to further reduce certification costs to just the new components plus integration specs/code. It’s a proven model.”

    For aerospace (DO-178B and DO-178C), suppliers started writing lean, well-documented, well-tested software that went through all sorts of automated analysis. For security (TCSEC), the market produced several high-assurance systems with kernel designs formally verified against specifications, rigorously analyzed and tested, and with excellent results during NSA pentesting, which took two years on average. Evaluators would occasionally find a problem that the vendor would fix immediately, but most problems showed up in components done with less assurance. Most commercial and FOSS software, even the security-focused kind, had many, many more vulnerabilities than B3- or A1-class systems.

    The TCSEC ultimately failed and was disbanded, mainly for political reasons, while DO-178C and similar standards are currently applied only in safety-critical subfields. A new standard for assurance of software security could be made by building on DO-178C with TCSEC-like requirements for assurance activities, especially keeping DO-178C’s style of not being too prescriptive about exactly how software is produced or evaluated. Flexibility, plus different levels of requirements based on criticality, are both important.

    https://en.wikipedia.org/wiki/DO-178B

    https://en.wikipedia.org/wiki/DO-178C

    https://en.wikipedia.org/wiki/Trusted_Computer_System_Evaluation_Criteria

    1. 8

      This article joins a pantheon of writings that all aim to be the Unsafe at Any Speed of the software industry, including books like Geekonomics: The Real Cost of Insecure Software. Organisations such as the ACM and BCS attempt to “professionalise” software production by publishing ethics codes for practitioners to adhere to. In other engineering disciplines, engineers go to jail for unethical practices.

      But at what point, if ever, does the tipping point occur? When does this frustration move from “You, the person reading this, whether you work in the media or tech or unloading container ships or selling falafels, need to learn how computers work, and start demanding they work better for you.” to a sea change in the industry? If the threat of a prison sentence isn’t sufficient motivation to put more care, care that some people believe is warranted, into our work, then what is?

      1. 9

        It is indeed surprising how slow the change in attitude seems, given the fact that leaks and other major security issues are happening at an increasing and alarming pace.

        It’s not just consumers bearing the brunt of it – e.g. Equifax and Yahoo breaches. It’s also powerful institutions:

        • the DNC had its e-mail hacked and leaked
        • major corporations, like Sony during The Interview incident
        • major governments, like Iran having its nuclear program sabotaged by Stuxnet
        • celebrities getting their phones hacked and their pictures leaked

        I feel like there is maybe some ignorance of what’s out there; maybe there’s not enough communication from computer security experts to policy makers and users.

        It seems like some institutions should be more willing to bear very large costs for increased security, e.g. performance costs. Often, you can buy better hardware to compensate, and honestly hardware is cheap compared to people and cleaning up after security incidents.

        One thing I wonder about is simply having an entire Linux distro compiled with ASAN [1]. This would catch Heartbleed-type attacks and the recent Optionsbleed attack. And I think you can also compile the Linux kernel with it (KASAN).

        I know that Android developers at Google run entire phones built with ASAN as a “soak test”, so it’s certainly possible. The phone is apparently still usable.

        The speed cost is not even 2x, although the memory hit might be more like 4x. But still, you can imagine that there are plenty of servers running with 2 GB of RAM that could just use 8 GB. It might not make sense for all applications, but there are certainly situations where it does.
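
        To make that concrete, here is a minimal sketch (a hypothetical snippet, not from any real codebase) of the class of bug an ASAN build catches:

          // Minimal sketch: an out-of-bounds heap read, the Heartbleed failure mode.
          // Build: clang++ -fsanitize=address -g asan_demo.cpp -o asan_demo
          #include <cstdio>

          int main() {
              char *buf = new char[8];
              for (int i = 0; i < 8; ++i) buf[i] = 'a';
              // One byte past the allocation: a normal build silently reads
              // adjacent heap memory; an ASAN build aborts right here with a
              // heap-buffer-overflow report instead of leaking the byte.
              char c = buf[8];
              std::printf("%c\n", c);
              delete[] buf;
              return 0;
          }

        Run normally, this prints whatever byte happens to sit next to the buffer; under ASAN it dies loudly at the bad read, which is exactly the behavior you want fleet-wide.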

        Also I’ve been looking into Alpine Linux lately, and it certainly seems like a big step up from Debian/Ubuntu, security-wise. The fact that Debian postinstall scripts will start daemons and open ports for you is inexcusable IMO. It’s an insecure practice done in the name of a trivial amount of “usability”.

        It does seem true that there are a large number of corporations and developers relying on open source software, but they’re not aware of what they’re using, and not contributing back.

        Equifax seems like a great example. I wonder what their stewardship of Struts is, given that they depend on it so heavily. They probably just have some guy download it every once in a while, and they probably don’t test alpha releases, etc. I could be wrong in their case, but there are certainly a lot of organizations like that.

        Another thing I always wonder about is having a good query language for the trusted computing base. For example, if I do “sudo apt install struts”, a Linux distro should be able to tell me “You are transitively depending on 120 million more lines of code than you were before, committed by 576 developers in 1,023 commits, etc.” And maybe “here are their PGP keys” and whatnot.

        And I don’t think there is even a good query language for CVEs. People may be reluctant to upgrade their distro because of breakage, so it would be nice to have a command to query bugs fixed BEFORE upgrading.

        [1] https://en.wikipedia.org/wiki/AddressSanitizer

      2. 6

        To get out of this hole, we need massive improvements in both tooling and regulation.

        Tooling:

        • We need better type systems, and they need to work as an assistant to developers rather than a hindrance. Not just scalar/record types (int / Person), but problem-domain types and unit types (meters vs. feet); see the sketch below.
        • We need method/function contracts that do not go away in production. And we need the ability for the tools to generate checking code by default, and optionally remove it once some sort of formal proof system has been satisfied.
        • We need to stop pretending that memory and concurrency safety is something to be managed by people and offload it to compilers and runtimes. And stop turning off the runtimes for “performance”.
        • Where speed is truly a top priority, which is rare, we need unbreakable walls between “fast unmanaged code that has no capabilities” and “managed safe code that can access things”.
        • We need far better, and more transparent metaprogramming facilities. Not just something like macros or templates, but tools to model business logic, as well as to write new rules that check/enforce business logic on the “meta-program”, rather than reporting only on things that go wrong once it’s been “expanded”.
        • We need much better isolation when using third-party libraries or FFI to existing C/C++ code during the transition to better code.

        I could go on…
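
        As a rough sketch of the first two bullets in today’s C++ (the names Meters, Feet, and require are illustrative, not a proposed API): unit types can make a meters-vs-feet mix-up a compile error, and a precondition check can stay on in production:

          // Rough sketch (illustrative names, not a proposed API): unit types as
          // distinct compile-time types, plus a contract check that is NOT
          // compiled out of release builds the way assert() is under -DNDEBUG.
          #include <cstdio>
          #include <cstdlib>

          struct Meters { double value; };
          struct Feet   { double value; };

          // Conversions must be explicit; passing Feet where Meters is expected
          // is a compile error, not a runtime surprise.
          Meters to_meters(Feet f) { return Meters{f.value * 0.3048}; }

          // A precondition check that survives production builds.
          void require(bool cond, const char *msg) {
              if (!cond) {
                  std::fprintf(stderr, "contract violation: %s\n", msg);
                  std::abort();
              }
          }

          Meters descent_per_second(Meters altitude, double seconds) {
              require(seconds > 0.0, "seconds must be positive");
              return Meters{altitude.value / seconds};
          }

          int main() {
              Feet alt{1000.0};
              // descent_per_second(alt, 10.0);  // compile error: Feet is not Meters
              Meters rate = descent_per_second(to_meters(alt), 10.0);
              std::printf("descend at %.2f m/s\n", rate.value);
              return 0;
          }

        A real unit-types library would also give you dimensional arithmetic, and a real contract system would let a prover discharge the check and only then delete it; the point is that none of this requires exotic tooling.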

        Regulation:

        • Collecting and sharing dossiers on users without users being able to see what has been collected / inferred / shared (and with whom) about them needs to stop.
        • Users need to be able to correct or destroy such dossiers with due process.
        • Commercial (closed or open) software and services cannot absolve themselves of their duties to users via contract, nor of the users’ ability to seek redress when they cock up.
        • Embedded software that destroys users’ otherwise-legal rights to resell or repair physical goods, or removes users’ legal rights to fair use of informational goods, needs to be illegal.

        Aaaaaand, I know (almost) all of this is a pipe dream in our current political / economic climate.

        Uncle Bob is wrong about many things, but he’s right about the rapidly approaching point where shitty software begins to regularly kill tens or hundreds of people at a time, rather than just “inconveniencing the masses”.

        1. 3

          The tooling is there already, for those who want it. Everything you describe exists in modern strongly-typed functional languages - heck, almost all of it exists in ML.

          1. 2

            We need far better, and more transparent metaprogramming facilities. Not just something like macros or templates, but tools to model business logic…

            I’ve recently been wondering if these two are inversely correlated. Most of the most metaprogramming-friendly languages I know are also not statically-typed.

            … as well as to write new rules that check/enforce business logic on the “meta-program”, rather than reporting only on things that go wrong once it’s been “expanded”.

            Cough have you considered TLA+? ;)

            1. 2

              I’ve tried to learn TLA+ several times over the years. All the documentation I can find is either mathematics unreadable to me, completely trivial hello-world-level examples, or only 10% complete. I also picked up the book “Formal Development of a Network-Centric RTOS”, which I’m about a third of the way through, and I don’t feel like I’ve gleaned much extra knowledge. I suspect I may have reached the limits of my smarts on this topic without taking a month off to learn enough to read the documents.

              1. 2

                I took a graduate class that used TLA+ and I learned nothing. I could blame the material and the professor, but honestly it was mainly a lack of effort on my part and the state of my health at the time.

                I am happy that hwayne is solving this very problem (that most TLA+ documentation is not very helpful) for us!

          2. 5

            Start asking lawmakers why you have to give up otherwise inalienable consumer rights the second you touch a Turing machine.

            It is, at least in principle, possible to analyze a bridge design and prove that it won’t (if constructed as designed) fall down.

            The same is not possible, even in principle, for a Turing machine: the halting problem (and Rice’s theorem more generally) means no algorithm can decide a non-trivial behavioral property for all programs.

            That said… the practice remains far from the boundaries imposed by the principles.
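
            For anyone who hasn’t seen it, the standard diagonalization sketch of why no general halting decider can exist:

              Suppose a total decider $H$ exists with
              $$H(P, x) = \begin{cases} 1 & \text{if program } P \text{ halts on input } x \\ 0 & \text{otherwise.} \end{cases}$$
              Define $D(P)$: loop forever if $H(P, P) = 1$, otherwise halt. Then
              $$D(D) \text{ halts} \iff H(D, D) = 0 \iff D(D) \text{ does not halt},$$
              a contradiction, so no such $H$ exists.

            Proving one particular program correct can still be possible; deciding correctness for all programs is not.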

            1. 1

              Most programs don’t need to be Turing-complete though. Total languages are a potential way forward.

            2. 3

              I think straightforward regulation may make things even worse:

              • “Serious” companies where each developer has lots of certificates and each change is accompanied by 100 MS Word documents make even worse software than unprofessional-looking startups. Their infrastructure is especially rotten (a recent example is Equifax running an ancient Struts; at least it was not COBOL). Hospital devices that are heavily regulated and cost millions have security as bad as Chinese IoT lamps.
              • If each snapshot of code requires government certification, open-source and free software will be mostly outlawed: buy certified software from large vendors instead.
              • Updates to the final product and its dependencies will be slowed down dramatically.
              • Developers will go to jail for introducing pointer bugs while still being required to use C/C++ (there are still almost no alternatives for lots of tasks).
              • You will be required to wear a suit and work from 8 AM in a cubicle, on Windows, where you can’t install programs and the network is MITMed.
              • We would return to the wonderful world of Delphi and COBOL shops making boxed software.
              1. 1

                Greenberg’s detailed and riveting story focuses largely on the politics of hacking and the conflict between an increasingly imperialist Russia and Ukraine, with an eye toward what it means for America. For people who respond to such attacks, like FireEye and CrowdStrike, these kinds of events are bread and butter.

                It’s fascinating that the “Ruskies did it” narrative caught on so quickly. Who needs facts when the CrowdStrike co-founder and CTO - moonlighting as a think-tank agitator - gives you “wink-wink, nudge-nudge” group names? Who needs scientific attribution when CrowdStrike’s employer decided the narrative beforehand? Who needs confusion when some of the software is actually a tool of choice for the Chinese?

                Let’s get on with the Red Scare and increase the bridge-building budgets! America, it’s them bad Russians.