1. 61

  2. 17

    I think this mini-renaissance of Ada awareness is in part due to the “mass discovery” of safe systems programming led by Rust.

    I like everything I’ve read about Ada, and I think SPARK is its killer feature. That being said, in the realm of “safe systems programming” I don’t think Ada’s going to be able to withstand the onslaught of Rust – Rust has the modern tooling, the ecosystem, and all the momentum.

    (This pains me to say because I’m not the biggest Rust fan and I really like what I’ve seen of Ada, but if I’m going to learn one strict language, it seems like the best one career-wise would be Rust. I’d love to be proven wrong, though; the world needs to ring with the keyboards of a thousand languages.)

    1. 11

      Ada’s been flying under the radar doing its own thing until AdaCore started open-sourcing things and pushing awareness a few years ago. GNAT is actually a mature ecosystem (gpr, GNAT Studio, gnatdoc, gnattest, gnatpretty, etc.) and has libraries that go back decades; it’s just not well known. The issue is that there had been no unifying system for distributing libraries, which has hampered things, but Alire is fixing that now and also streamlining working in the language as well.

      Rust is doing well for good reason. It has an easy way to start, build, test and share things, which is exactly why Alire is based on the work that community (and the npm community) has done. The big philosophical difference I’ve found between writing Rust and Ada is that Rust focuses on traits/types, whereas Ada focuses on functions since it relies predominantly on function overloading and separation into packages (modules), though you can use ML-style signatures. It’s interesting because in Ada, in general (except for tasks/protected objects), types aren’t namespaces for functions, whereas Rust relies on this with traits. In this way, Ada feels more to me like a functional language like Haskell and Rust feels more like an OOP language like Java. I’ve always found this bizarre because it should be the opposite.
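
      A sketch of that philosophical difference from the Rust side: the trait makes the type the namespace its operations live in, whereas Ada would express the same thing as an overloaded free function `Area` for each type inside a package. All names here are illustrative:

      ```rust
      // In Rust, behavior hangs off the type: the trait makes `area`
      // a method resolved through the type's namespace.
      trait Shape {
          fn area(&self) -> f64;
      }

      struct Circle { r: f64 }
      struct Square { side: f64 }

      impl Shape for Circle {
          fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
      }
      impl Shape for Square {
          fn area(&self) -> f64 { self.side * self.side }
      }

      fn main() {
          let c = Circle { r: 1.0 };
          let s = Square { side: 2.0 };
          // Method-call syntax: the receiver's type picks the impl.
          println!("{} {}", c.area(), s.area());
      }
      ```

      The Ada rendering of the same idea would be two overloaded `Area` functions declared next to their types in a package, called as plain functions rather than methods.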

      Ada’s big advantage is that its general conceptual model is much closer to C or C++ (think type-safe C with function overloading, with classes if you want them), but it builds in a much stronger type system and concurrency types as well. Built-in pre/postconditions, invariants, and type range checks also go a long way toward correctness, even if you don’t want to fully commit to formal verification with SPARK. Note that SPARK isn’t all-or-nothing either: you can verify individual components of an Ada codebase.
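
      As a rough illustration of what those checks buy you, here is a hypothetical Rust sketch of the discipline Ada gives you natively with range types and `Pre`/`Post` aspects; the `Percent` type and `scale` function are invented for the example:

      ```rust
      // Emulating an Ada range type (`type Percent is range 0 .. 100;`):
      // the range check happens once, at the construction boundary.
      #[derive(Debug, Clone, Copy, PartialEq)]
      struct Percent(u8);

      impl Percent {
          fn new(v: u8) -> Option<Percent> {
              if v <= 100 { Some(Percent(v)) } else { None }
          }
      }

      // Ada would attach `with Post => scale'Result <= Total` to the
      // declaration; in Rust the contract lives in a runtime assertion.
      fn scale(p: Percent, total: u32) -> u32 {
          let out = total * u32::from(p.0) / 100;
          debug_assert!(out <= total); // postcondition check
          out
      }

      fn main() {
          let half = Percent::new(50).unwrap();
          println!("{}", scale(half, 200));
          assert!(Percent::new(150).is_none()); // out of range, rejected
      }
      ```

      The difference is that Ada checks these declaratively across the whole program (and SPARK can discharge them statically), while in Rust each check is hand-rolled.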

      1. 7

        While I’m generally a huge user of Rust and a proponent of it for many CPU-bound workloads, Rust really doesn’t have much in the way of safety features except when you compare it strictly to C and C++. Rust has a lot of antipatterns that the ecosystem has broadly adopted around error handling and error-prone techniques in general, and the overall complexity of the language prevents a lot of subtle issues from being cheap to find in code review. Building databases in Rust is not all that different from building them in C++, except that the search scope is a bit narrower when I discover memory-corruption bugs during development.

        Ada (and, much more so, SPARK) is a completely different tool for a completely different class of problems. The Why3 proof functionality you gain access to in SPARK has no usable analogue in Rust. The ability to turn off broad swaths of language features is one of the most powerful tools for reducing the cost of building confidence in implementations for safety-critical domains. There are many Rust features I personally ban while working on correctness-critical projects (async, most usage of Drop, lots of logical patterns that have nothing to do with memory safety directly, many patterns for working with files that must be excruciatingly modeled and tested to gain confidence, etc.), but with Ada it’s so much easier to just flip off a bunch of problematic features using profiles.
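
        The Ada side of that is literally one line (`pragma Profile (Ravenscar)` or a `pragma Restrictions` list); the nearest Rust equivalent is crate-level lint attributes, a much narrower mechanism. A hedged sketch (the `halve` function is just filler to make the crate compile):

        ```rust
        // Crate-level lint attributes are the closest Rust analogue to
        // Ada's `pragma Restrictions (...)` / `pragma Profile (Ravenscar)`,
        // though they cover far fewer language features.
        #![forbid(unsafe_code)]       // hard error on any `unsafe` block
        #![deny(clippy::unwrap_used)] // only enforced under `cargo clippy`

        fn halve(x: u32) -> u32 {
            x / 2
        }

        fn main() {
            println!("{}", halve(84));
        }
        ```

        Notice there is no crate-level switch for banning async, Drop, or particular I/O patterns; those bans end up enforced by convention and review rather than the compiler.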

        Ada SPARK faces zero competition from Rust for real safety-critical work. It is simply so much cheaper to build confidence in implementations than in Rust. I love Rust for building things that run on commodity servers, but if I were building avionics, etc., I think Rust would be a poor business choice, TBH.

        1. 3

          The ability to turn off broad swaths of language features is one of the most powerful tools for reducing the cost to build confidence in implementations for safety critical domains

          This is something I never thought about, but yeah, Ada is the only language I’ve even heard of that lets you globally disable features.

          1. 2

            antipatterns that the ecosystem has broadly adopted around error handling and error-prone techniques […]

            What kind of antipatterns are you thinking of? A lot of people appreciate the Result type and the use of values instead of hidden control flow for error handling.

            1. 3

              Try backed by Into is far, far worse than Java’s checked exceptions in terms of modeling error handling responsibilities. I’ve written about its risks here. Async increases bugs significantly while also increasing compile times and degrading performance to unacceptable degrees that make me basically never want to rely on broad swaths of essentially untested networking libraries in the ecosystem. It makes all of the metrics that matter to me worse. There are a lot of things that I don’t have the energy to write about anymore, I just use my own stuff that I trust and increasingly avoid relying on anything other than the compiler and subsets of the standard library.

              1. 1

                I think your concerns are valid, but it’s basically the equivalent of saying “don’t write int foo() throws Exception” in Java.

                There is a strong tendency in the Rust community to throw all of our errors into a single global error enum.

                Citation needed there, I guess? Each library does tend to have its own error type, which means you’re going to have to compose it somewhere, but it’s not like everyone returns Box<dyn Error>, which would be the single global error type (although that’s what anyhow provides for applications). The right balance should be that functions return a very precise error type, but that there are Into implementations into a global foo::Error (for library foo), so that if you don’t care about which errors foo operations raise, you can abstract that away. If you do care, then don’t upcast.
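
                That balance can be sketched concretely; the `ParseError` type, `Error` enum, and both functions below are invented for illustration:

                ```rust
                use std::num::ParseIntError;

                // A precise, per-operation error type...
                #[derive(Debug, PartialEq)]
                struct ParseError(String);

                // ...and the library-wide enum you upcast into when you
                // don't care which operation failed.
                #[derive(Debug, PartialEq)]
                enum Error {
                    Parse(ParseError),
                }

                impl From<ParseError> for Error {
                    fn from(e: ParseError) -> Self {
                        Error::Parse(e)
                    }
                }

                // The function itself returns the precise type.
                fn parse_port(s: &str) -> Result<u16, ParseError> {
                    s.parse().map_err(|e: ParseIntError| ParseError(e.to_string()))
                }

                // A caller that doesn't care upcasts with `?` via the From impl.
                fn configured_port(s: &str) -> Result<u16, Error> {
                    Ok(parse_port(s)?)
                }

                fn main() {
                    // Precise handling when you care:
                    assert!(parse_port("x").is_err());
                    // Abstracted handling when you don't:
                    assert_eq!(configured_port("8080"), Ok(8080));
                }
                ```

                The caller chooses the granularity: match on `ParseError` directly, or let `?` fold it into `Error` and handle everything uniformly.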

                I think async is overused, but claiming it degrades performance is again [citation needed]?

            2. 1

              antipatterns that the ecosystem has broadly adopted around error handling and error-prone techniques in general

              I’m interested to hear some of these problematic antipatterns around error handling and error-prone techniques in general?

            3. 3

              You’re right about the tooling, ecosystem and momentum, but Rust is a very long way from provability. I wonder if we’ll ever see something SPARK-like but for a subset of Rust.

              1. 4

                The question is how much provability matters. My guess is not a whole lot in most cases.

                I feel like Rust’s momentum is largely a combination of, “C and C++ have too many footguns,” “Mutable shared state is hard to get right in a highly concurrent program,” and, “Everyone is talking about this thing, may as well use it for myself.”

                1. 5

                  Provability is starting to matter more and more. There are very big companies getting involved in that space: Nvidia has started using SPARK, Microsoft has been funding work on things like Dafny and F* for a long time, Amazon uses TLA+… Provability is just getting started :).

                  1. 3

                    I think provability definitely has some applications, largely in the space that Ada is used for currently – safety-critical systems mostly. I want to be able to prove that the software in between my car’s brake pedal and my actual brakes is correct, for example.

                    1. 18

                      I want to be able to prove that the software in between my car’s brake pedal and my actual brakes is correct, for example.

                      I think we’ve shown pretty conclusively you can’t solve the car halting problem.

                      1. 17

                        I want to be able to prove that the software in between my car’s brake pedal and my actual brakes is correct, for example.

                        If you’re going to use 11,253 read/write global variables (yes, eleven thousand, not a typo) in your car software, with very limited and broken failsafes against bit flips and other random faults, while intentionally ignoring OS errors, then you’re going to run into problems no matter which tool you use.

                        Just running codesonar on the Toyota software resulted in:

                        • 2272 - global variable declared with different types
                        • 333 - cast alters value
                        • 99 - condition contains side-effect
                        • 64 - multiple declaration of a global
                        • 22 - uninitialized variable

                        The throttle function was 1300 lines of code. With no clear tests.

                        These issues are only partly technical problems IMO; something like Ada can help with those, but the far bigger problem isn’t technical. People chose to write software like this. Toyota chose not to take the reports seriously. They chose to lie about all sorts of things. They chose not to fix any of this. The US government chose not to have any safety standards at all. Regulators chose not to take action until there really wasn’t any other option.

                        And then it took a four-year investigation and an entire team of experts to independently verify the code before action finally got taken. You didn’t need any investigation for any of this. You just needed a single software developer with a grain of sense and responsibility to look at this for a few days to come to the conclusion that it was a horribly unacceptable mess.

                        (And yes, you do need a proper investigation for legal accountability and burden of proof, and more in-depth analysis of what exactly needs fixing, but the problems were blatantly obvious.)

                        I’d wager all of this would have failed with Ada as well. There weren’t even any specifications for large parts of the code (and other specifications didn’t match the code), there wasn’t even a bug tracker. If you’re going to cowboy code this kind of software then you’re asking for trouble. Do you think that Ada, TLA+, or SPARK would have fared better in this kind of culture? At the end of the day a tool is only as good as your willingness to use it correctly. Software people tend to think in terms of software to solve these problems. But if you look a bit beyond all the software bugs and WTFs, the core problem wasn’t really with the language they used as such. They knew it was faulty already.

                        Does provability matter? I guess? But only if you’re willing to invest in it, and it seems abundantly clear that willingness wasn’t there; they willingly wrote bad software. It’s like trying to solve the VW emission scandal by using a safer programming language or TLA+. That’s not going to stop people from writing software to cheat the regulatory tests.

                        It’s kind of amazing how few faults there were, because in spite of the many problems it did work, mostly, and with some extra care it probably would have always worked. More guarantees are always good, but to me this seems to indicate that maybe provability isn’t necessarily that critical (although it can probably replace parts of the current verification tools/framework, but I’m not so sure if it has a lot of additional value beyond economics).

                        Summary of the reports: one, two.

                        1. 1

                          Does provability matter? I guess?

                          Great, I’m glad we agree.

                          More seriously: sure, culture is a problem; I certainly wasn’t trying to discount it.

                          But this is an AND situation: to have proved-correct software you need a culture that values it AND the technology to support it. Writing off provability as a technology because the culture isn’t there yet is nonsensical. It actively undermines trying to get to the desirable state of proved-correct safety-critical software.

                          I appreciate you wanted to have a rant about software culture, and believe me, I love to get angry about it too. You’re just aiming at something other than what I said.

                          1. 1

                            I appreciate you wanted to have a rant about software culture, and believe me, I love to get angry about it too. You’re just aiming at something other than what I said.

                            It wasn’t intended as a rant, it was to demonstrate that at Toyota things were really bad and that this is why it failed at Toyota specifically. This wasn’t about “software culture”, it was about Toyota.

                            Writing off provability as a technology because the culture isn’t there yet is nonsensical. It actively undermines trying to get to the desirable state of proved-correct safety-critical software.

                            Of course you need to consider the actual reasons it failed before considering what the solution might be. Overall, the existing systems and procedures work pretty well, when followed. When is the last time your accelerator got stuck due to a software bug? It’s when you choose not to follow them that things go wrong. If Toyota had just followed their own procedures, we wouldn’t be talking about this now.

                            Great, I’m glad we agree.

                            I don’t think we agree at all, as I don’t think that provability will increase the average quality of this kind of software by all that much, if at all. The existing track record for this kind of software is already fairly good and when problems do arise it’s rarely because of a failure in the existing procedures as such, it’s because people decided to not follow them. Replacing the procedures with something else isn’t going to help much with that.

                            It may replace existing test procedures and the like with something more efficient and less time-consuming, but that’s not quite the same.

                            And the Toyota problem wasn’t even necessarily something that would have been caught, as it was essentially a hardware problem inadequately handled by software. You can’t prove the correctness of something you never implemented.

                            1. 1

                              Proving software correct is categorically better verification than testing it with examples. If we can achieve provably-correct software, we should not throw that away just because people are not even verifying with example testing.

                              Programming languages that are provable increase the ceiling of potential verification processes. Whether a particular company or industry chooses to meet that ceiling is an entirely different discussion.

                  2. 3

                    Have you run across any good SPARK resources? I’m interested in learning not only about how to use it, but also the theory behind it and how it’s implemented.

                    1. 4

                      https://learn.adacore.com/ should be a good starting point to learn SPARK. If you’re interested in how it’s implemented, reading https://www.adacore.com/papers might be a good way to learn.

                  3. 3

                    Pretty surprising to see this kind of article (basically PL fanboying) on the blog of BlackBerry, the cell phone maker. Normally corporate blogs are more oriented toward teaching something specific or talking about experience, whereas this is more of a love letter.

                    As I write this, I have a serious Ada program consisting of 19 source code files containing 2,985 comment lines and 6,253 lines of Ada code open on my desktop. This forms part of a semi-formal verification suite of a new QNX development in progress at the moment. In the past I would have written this program in C or Python. Writing it in Ada has sometimes been infuriating because I turn on all compiler warnings and specify that warnings are to be treated as errors. However, Ada’s characteristics have saved me hours of debugging. Getting a clean compilation takes a little longer, but that is more than compensated by the reduced debugging effort.

                    It’s possible that this “new QNX development” is a work project he’s using Ada on? Hard to tell.

                    1. 10

                      BB basically no longer makes phones and is entirely security software now; plus I think they still own QNX.

                      1. 8

                        Indeed. This is a post from a developer at QNX which in my mind is a whole different entity than Blackberry the cellphone maker.

                        1. 1

                          Wasn’t very clear to me from his linked bio:

                          Chris Hobbs is the author of the definitive book on functional safety, Embedded Software Development for Safety-Critical Systems. He has advised some of the largest industrial control and automotive companies as they pursued safety-certification standards.

                          But thanks for clarifying.

                          1. 2

                            QNX is used in a lot of auto infotainment systems, and I think they’re slowly absorbing onboard sensors and the like into the mix, thus the need for safety-critical development.