1. 10

  2. 3

    My main experience of MISRA is that they are trying to force people to write in a language that isn’t C. It might be compiled by a C compiler, but it’s an arcane (mostly) subset of C.

    Admittedly C is a shitty language for safety critical software, but the fix isn’t coding standards.

    The fix is to use a language that doesn’t have all those footguns.

    eg. D or Rust.

    1. 7

      Which D or Rust compilers possessing certifications necessary for safety-critical applications do you suggest for this? Do they support the target architecture used?

      I don’t have fond memories of the tools we worked with when I worked on a car component’s firmware, yet I understand why accountability is important in safety critical systems, and very few vendors are willing to pay the cost of verifying their toolchain and selling that verification. Open Source simply does not cut it there, with the famous “NO WARRANTY WHATSOEVER” phrase in each and every license. Being responsible for your own code is burden enough, when you may even end up in jail if your product malfunctions in some disastrous way.

      I also think that C is not a very good language for writing robust code, as it has too many traps, yet currently it is one of the few languages mature enough (with regards to tooling and accumulated knowledge of its pitfalls) for such work (Ada being another one I know of). In the longer run I’m sure Rust will make its way to the embedded and safety critical scene, but it is not there yet. It has not matured enough for other vendors to build a compiler and have it audited and certified. It also does not have enough developers yet, so the customer side is not ready to adopt it either. Maybe a decade later.

      Until then, coding standards are a tool to ensure better quality in the languages we already possess, and they are known to work, albeit admittedly with great overhead.

      1. 2

        certifications necessary for safety-critical applications. Open Source simply does not cut it there, with the famous “NO WARRANTY WHATSOEVER” phrase in each and every license

        I always wondered about that.

        One of the reasons I switched wholly to open source was that I was entirely uninterested in warranties. I wanted my code to work, not someone to blame when it didn’t.

        I found, over a decade ago, that no-warranty open source code was usually less buggy than paid-for code.

        Especially compilers.

        Why? More users, more people creating compelling test cases, more people looking at the code.

        I have received patches for everything from libraries I use, down to the kernel, in my mailbox, the next day… simply by writing a compelling test case for a bug that was bothering me… and submitting that to an email list.

        In other cases I fixed it myself.

        In the closed-source projects where massive amounts of money were paid… the usual response is “it will be fixed in the next release”…

        As promised by a customer service rep who actually hadn’t a clue what I was talking about.

        But then it’s almost irrelevant. Yes, I have been hit by compiler bugs in my life. About three orders of magnitude less often than by common-or-garden homegrown bugs.

        What finds and fixes bugs before they even get to release builds? Compiler warnings. These days they are bloody marvellous. Every new release of gcc comes up with more. Every release I run around our code base fixing up the new warnings. Every time I find a couple of real live bugs.

        Alas, the C language design disables a lot of what the compiler could warn about.
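        A minimal sketch of the kind of bug a warning pass catches (the function is hypothetical, but the diagnostic is real): gcc’s -Wall, via -Wparentheses, flags the classic “=” where “==” was meant.

        ```c
        #include <assert.h>
        #include <stdio.h>

        /* Hypothetical check with a classic typo: "=" where "==" was meant.
           gcc -Wall (via -Wparentheses) warns "suggest parentheses around
           assignment used as truth value" on the condition below. */
        static int any_errors(int error_count) {
            if (error_count = 0)   /* bug: assigns 0, so the branch never runs */
                return 1;
            return 0;
        }

        int main(void) {
            /* Because of the typo, five errors are silently reported as none. */
            assert(any_errors(5) == 0);
            printf("bug demonstrated: 5 errors reported as none\n");
            return 0;
        }
        ```

        Without the warning enabled, this compiles cleanly and ships.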

        Unit tests help too, especially with valgrind. Valgrind is worth way more than any certified compiler (or coding standard).

        GNU D is a front end for the gcc compiler, so in principle D supports whichever targets the backend supports (ie. more than any other compiler). There is some fine print around runtime library support for odd targets though. Support for ARM does exist.

        1. 3

          Well, we also met some bugs in the Green Hills compiler we used on a project, and actually the code was compiled with multiple compilers and large parts were tested on multiple architectures. C also had this advantage: you could cross-check compilers. (In-system tests were single-compiler tested, but 95% of the code was tested on multiple compilers on the project I worked on.)

          Still, Ada is the only other mature and established embedded language I can recall with multiple compilers available… (Avoiding vendor lock-in, at least as an option, is also important on a large project.)

          edit: BTW, oftentimes regulators may also demand that you use certified tooling. (I’m not totally sure, but lots of regulations apply to the automotive industry; AFAIK developing according to MISRA was a regulatory requirement.)

          1. 1

            C also had this advantage, that you could cross check compilers.

            D has two compilers, DMD and the GNU D compiler, so you have that advantage: two open source compilers.

            Certainly if you’re multithreading (heaven forfend, wash your mouth out if you say “safety critical” and “multithreaded” together), running your code under every damn target, OS, compiler and number of CPUs you can find works wonders at shaking bugs out of the tree.

            BTW oftentimes regulators may also demand you use certified tooling

            BTW oftentimes regulators may also demand you use certified tooling suffer from regulatory capture

            1. 3

              I see you really want to believe, yet still that is not how the industry works. Safety critical embedded development is quite different from other areas of the software industry. It is more conservative, independently of whether regulatory capture is involved or not. Projects move slowly and a lot of stuff has to be decided up front.

              I don’t know why you mention multi-threaded programming; it was not considered anywhere in the discussion. Using multiple architectures and compilers can help catch other subtle bugs around memory alignment, endianness and such things, which are common in low level code. Or in the compilers themselves.

              D might have two open source compilers, but that is only one aspect of the problem. D has not been proven in that field, there are no experts and seniors in D, and the industry is not backing it. However much better it may be (I believe it is not revolutionary enough to justify the investment; Rust is more promising), nobody will risk their billion-euro project for this, when C or Ada is already known and proven with all its pros and cons.

              A new generation of programmers needs to come and they must push safer languages, and their tooling must meet the regulatory requirements. If you don’t like regulators and government, so be it, yet those are the rules, and if you want to sell stuff as an automotive/aerospace component, you must play by those rules.

              1. 2

                I’ve been on this hamster wheel for a few turns of the cycle.

                The way it works is the bureaucracy ramps up and up and the costs ramp up and up…..

                ( Here’s a very very old joke….

                What do you call a MilSpec mouse?

                An Elephant )

                Until the users note that commodity devices are way way way cheaper, way way more functional, and actually way less buggy.

                The regulations and red tape remain in place, but the users aren’t actually using those devices.

                Eventually someone renames the class of devices the users are using so the old regs don’t apply… and slowly the cycle starts up on the new class of devices.


                Gone round that wheel a couple of times. On the “users are starting to ignore us and all the regs” phase of the current one.

        2. 1

          Which D or Rust compilers possessing certifications necessary for safety-critical applications do you suggest for this? Do they support the target architecture used?

          Ada/SPARK has you covered. I think they also have compilers to C in the event the target isn’t supported directly.

          1. 1

            Sure, but it may be more difficult to find an experienced workforce for Ada. (Note that I also mentioned Ada as an available mature tool.)

            1. 1

              The bootstrap problem is best solved by just teaching people how to use the better tools, like Jane St does. They can do mock projects or just non-critical stuff while learning.

              1. 4

                If you have people expert in those better tools, and confidence in those tools. Being revolutionary on the internet is easy, but when billions of euros are at stake you tend to be more conservative. Especially if you are German. :)

                For non-critical stuff it was a free-for-all; we used Python, Java, Excel/VBA, even PHP. For safety critical stuff only approved tools were used.

                Lots of Simulink was used BTW, as most of the complexity was in the control system. Simple stuff like hardware drivers was fine in C, and I understand the decision, as it was easy to kickstart the project back in the past with C.

              2. 1

                Ada is not used in automotive as far as I know. Apparently, Toyota made a push a few years ago but it did not catch on.

          2. 3

            Michael Wong called it “MISRAable code”. 😉

            1. 1

              A common anti-pattern I have seen in “MISRAable code” is that it’s correct in the small and wrong in the large. ie. much thought and care had been invested in ticking every rule for that particular function, but when you look at all the invocations of it, you know things aren’t quite going to work, and certainly not reliably.

              1. 4

                I find MISRA warnings often have a quick fix which makes the code worse. However, if you think further, they may also reveal a real design issue. Of course, design (or architecture) issues are more work to fix, so the quick fix is tempting.

                Example: We use object-oriented state machines a lot. This means every state is a member variable. At some point someone wanted to iterate over the states and thus put them in an array. Our checker then came up with a lifetime issue. The quick fix was some casting trickery. The real problem is the design: we have ownership confusion. Is a state object owned by the array or by the machine object?

                (Btw learning some Rust has helped me for C++. Concepts like ownership and borrowing are useful terms.)
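                A minimal sketch (in C terms, all names hypothetical) of one design that resolves such ownership confusion: the machine owns its states outright, and the array is an explicitly non-owning view used only for iteration.

                ```c
                #include <assert.h>
                #include <string.h>

                typedef struct {
                    const char *name;
                } state_t;

                #define STATE_COUNT 3

                typedef struct {
                    /* The machine owns its states: they live and die with it. */
                    state_t idle;
                    state_t running;
                    state_t fault;
                    /* Non-owning pointers, purely for iteration; never freed. */
                    const state_t *all[STATE_COUNT];
                } machine_t;

                void machine_init(machine_t *m) {
                    m->idle.name    = "idle";
                    m->running.name = "running";
                    m->fault.name   = "fault";
                    /* Pointers, not copies: ownership stays with the machine. */
                    m->all[0] = &m->idle;
                    m->all[1] = &m->running;
                    m->all[2] = &m->fault;
                }

                int main(void) {
                    machine_t m;
                    machine_init(&m);
                    /* Iterating via the view refers to the machine-owned states. */
                    assert(m.all[1] == &m.running);
                    assert(strcmp(m.all[2]->name, "fault") == 0);
                    return 0;
                }
                ```

                Because the array holds pointers into the owning object rather than copies, there is exactly one lifetime to reason about.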

                1. 2

                  I know the situation you are talking about. This also depends on the review culture of the org. We have thrown out lots of “clever” (MISRA-passing) code at the review phase.

                  MISRA is just one element in a puzzle, and while some of its rules are totally straightforward (eg. don’t use if(a() == b()) because the order of evaluation is unspecified, which can cause problems if the calls have side-effects), usually they should make you stop and consider what you are about to do and why. They are indicators of code smells. Just as you said: the real problem usually is the design.
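                  A minimal sketch of that particular hazard, with hypothetical helpers a() and b() sharing a counter; the two sections below simulate the two evaluation orders a compiler may legally pick for a() == b():

                  ```c
                  #include <assert.h>
                  #include <stdio.h>

                  static int counter;

                  /* Hypothetical helpers with shared state: exactly the
                     side effects that make if (a() == b()) order-dependent. */
                  static int a(void) { counter += 1; return counter; }
                  static int b(void) { return counter * 10; }

                  int main(void) {
                      /* Order 1: a() evaluated first. */
                      counter = 0;
                      int ra = a();          /* counter becomes 1, ra == 1  */
                      int rb = b();          /* rb == 1 * 10 == 10          */
                      assert(ra == 1 && rb == 10);

                      /* Order 2: b() evaluated first. */
                      counter = 0;
                      int rb2 = b();         /* rb2 == 0 * 10 == 0          */
                      int ra2 = a();         /* counter becomes 1, ra2 == 1 */
                      assert(rb2 == 0 && ra2 == 1);

                      /* Same expression, different operand values per order. */
                      printf("(%d==%d) vs (%d==%d)\n", ra, rb, ra2, rb2);
                      return 0;
                  }
                  ```

                  The same source expression compares 1 to 10 under one order and 1 to 0 under the other, which is why the rule tells you to pull the calls out into sequenced statements.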

            2. 3

              Just by following Directive 4.12 (“Dynamic memory allocation shall not be used”) you can eliminate quite a few footguns, I think.

              As the article says, MISRA “wrote out any and all ways to make a mistake”, which is a treasure trove on its own, whether you enforce MISRA-C in your project or not.

              1. 1

                Translation: Thou shalt effectively re-invent (poorly) your own schemes to solve the problems hand-carved out of a raw, emasculated C.

                Yes, malloc() is an atrocious design; I could rant on for hours about its flaws.

                What C needs to provide, out of the box, is a collection of better, safer, more effective tools, not merely a ban on the sharp ones.

                C++ is heading in the right direction here…


                ..they are aiming for machine checkable standards complete with supporting libraries of better alternatives.

                1. 2

                  I have never seen the need for dynamic allocation in safety critical code. Everything had fixed buffer sizes, fixed sampling rates, fixed response times.

                  Could you provide an example where dynamic allocation is necessary in safety critical context?

                  1. 2

                    Typically where you have a resource limited device that has different modes and can only be in one at a time.

                    Yup, the joys of embedded, we will always be pushed to deliver mechanically smaller, lower power, cheaper unit cost devices.


                    No matter what Moore’s law gives us.

                    Another common place is resource pools. eg. in an IP connected device, you’ll have a fixed size pool of RAM for packets. If you get flooded with more packets than you have in your pool, you just drop them. So note…

                    • Malloc is a shitty solution, as flooding with packets becomes a trivial DoS attack on the rest of your system, so MISRA is right so far: don’t do that.
                    • The packets are typically allocated when/where they arrive in the stack and deallocated when/where they leave (or are acked). ie. It’s dynamic allocation.

                    Yup, I’m most familiar with the insides of an IP stack, but I bet other packet based protocols will have something similar.
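                    A minimal sketch of such a fixed-size pool (sizes and names hypothetical): allocation is dynamic in the sense that packets come and go at runtime, but all memory is reserved statically up front, and pool exhaustion just means dropping the packet.

                    ```c
                    #include <assert.h>
                    #include <stdbool.h>
                    #include <stddef.h>

                    /* Illustrative sizes; a real stack derives them from the
                       hardware budget. */
                    #define POOL_SIZE    4
                    #define PACKET_BYTES 1500

                    typedef struct {
                        unsigned char data[PACKET_BYTES];
                        size_t len;
                        bool in_use;
                    } packet_t;

                    static packet_t pool[POOL_SIZE];   /* reserved up front */

                    /* Grab a free packet; NULL means the pool is exhausted and
                       the caller drops the packet, keeping memory use bounded. */
                    packet_t *packet_alloc(void) {
                        for (size_t i = 0; i < POOL_SIZE; i++) {
                            if (!pool[i].in_use) {
                                pool[i].in_use = true;
                                pool[i].len = 0;
                                return &pool[i];
                            }
                        }
                        return NULL;
                    }

                    void packet_free(packet_t *p) {
                        if (p != NULL)
                            p->in_use = false;
                    }

                    int main(void) {
                        packet_t *pkts[POOL_SIZE];
                        for (size_t i = 0; i < POOL_SIZE; i++) {
                            pkts[i] = packet_alloc();
                            assert(pkts[i] != NULL);
                        }
                        /* Flood: extra packets are simply dropped. */
                        assert(packet_alloc() == NULL);
                        packet_free(pkts[0]);
                        assert(packet_alloc() == &pool[0]); /* slot reused */
                        return 0;
                    }
                    ```

                    Unlike malloc, the worst case here is a dropped packet, never heap exhaustion that starves the rest of the system.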

                    1. 2

                      Well, when I was working on auto parts (5+ years ago) our component did not need an IP stack. (Thank God!) I understand that nowadays everything is connected, but are those devices also safety critical? Anyway, where I worked MISRA was a fine aid in our work; it gave a sane baseline for automatically checkable quality, so the testers could focus on tests for larger scale logic problems (as mentioned in a different thread).

                      1. 1

                        I’m not saying MISRA doesn’t make things safer, it certainly does make things safer than using C without anything else.

                        I’m saying it’s a bad tool for the job. It’s a bandaid, it’s security theater.

                        There are a lot better ways of making things safer.

                        Alas, where MISRA makes things less safe, it allows butt coverers to say, “Job Done, the safety risks with using C have been addressed.”

                        Which is simply not true, not even close.

                        I use an IP stack as an example, as I have the code in front of me. But I bet any packet based protocol built on top of, say, CAN bus will have the same.

                        1. 3

                          MISRA should be part of a safety net, not the sole item providing safety. Safety critical code must be designed and implemented in such a manner. The regulations usually also require development practices, independent audits, tests, requirements traceability, and safety experts who are liable for the safety of the system.

                          If at your project safety criticality is claimed merely after passing a set of static tests for coding style (MISRA), then that sure is security theater, but it is also a serious deviation from the norms of the European automotive industry. With such an attitude a component cannot qualify for roadworthiness in the EU.

                          Our CAN stack used fixed sized buffers, the whole system was hard realtime, no dynamic allocation happened.

            3. 3

              Just yesterday I watched this CppCon talk which gives some background about MISRA and safe C++.

              1. 1

                Does anybody know why none of the German car makers are MISRA members? Do they use another coding standard?

                1. 5

                  I worked for one German auto industry company, and we did work according to MISRA-C:2003, if I recall the specific revision well. We had static analysis enforcing MISRA, and in the extremely rare cases when a violation was deemed necessary, an extensive safety evaluation was carried out, and the violation had its justification, tradeoffs and risks documented and approved, which was also audited by independent experts.

                  I found MISRA a pretty good collection of rules, helping to create more robust software.

                  1. 3

                    In this MISRA document, appendix A is an example of “in the extremely rare cases when a violation was deemed necessary extensive safety evaluation was carried out, and the violation had its justification, and its tradeoffs, risks documented”.

                    1. 2

                      Wow. Thanks a lot for this real world use case.

                    2. 3

                      The talk mentions the AUTOSAR standard and that it tries to modernize MISRA. AUTOSAR is big in Germany, so maybe the involvement is indirect. I asked around at work but got no answers.

                      MISRA compliance is important for us (and our customers). However, we do have an internal standard based on MISRA and a few other ones. I guess as a big organization, the Not-Invented-Here syndrome is stronger.

                      1. 1

                        Thanks a lot. Somehow I managed to ignore AUTOSAR while being interested in secure software. (Maybe that’s caused by spending too much time on English forums - where, it seems, MISRA dominates.)