1. 12

  2. 7

    Before reading the article:

    When assessing this, look at each system’s Trusted Computing Base: its privileges, size, complexity, and assurance activities. Java was a no at a glance since it pulls in an entire platform’s worth of attack surface. However, on a case-by-case basis, a Java program using battle-tested libraries will win over a random C program because C’s memory unsafety causes a larger share of vulnerabilities. With a high-integrity/security implementation, Java programs would almost always win until you’re talking covert/side channels. Then you need the low-level control that C or Ada can give you, or a Java modification like Jif/Sif. And so on.

    After reading the article:

    re C vs Java privilege models. I don’t think this is right. The fact that Java would run all kinds of applications in its platform meant it usually had more privilege. We saw something similar with web apps, too. Whereas each C app, being its own thing with its own needs, made it easier to apply external privilege controls. The low-level control, without being locked into a pre-existing platform, also lets one use more compile- and run-time mitigations for C apps.

    re Java has more rules due to more subsystems. OK, this ties into my TCB claim: more stuff equaled more attacks. That said, Java apps don’t actually have to use everything that’s in the platform. An app that doesn’t use a database won’t need that rule. The author notes this later. So this illustrates why I don’t like the author’s methodology of defining C vs Java security by looking at the rules for all C vs Java apps. Security evaluations always defined the security of a specific system against a specific security policy, considering everything in its lifecycle. So the claim would be X program in C vs Y program in Java on (security goals here).

    re Java JNI. Is that really a Java rule or a Java + C rule? The latter probably deserves its own category given that it’s multi-language development. M.L.D. allows so-called abstraction-gap attacks, where one language has one representation/behavior, the other has an incompatible one, and dangerous interactions follow. Analyzing and mitigating these is the bleeding edge of current research in verification. So here be dragons if you’re doing it in production under real-world constraints. ;)
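
    To make the abstraction-gap point concrete, here’s a rough sketch of just the Java side of a JNI boundary (the class, method, and library names are made up). Nothing in Java’s type or memory model constrains what the C implementation does on the other side of the `native` call:

    ```java
    // Hypothetical sketch of the Java side of a JNI boundary. The names
    // (NativeBuffer, nativeFill, "nativebuf") are invented for illustration.
    public final class NativeBuffer {
        static {
            // Loads a hypothetical C library; everything past this point
            // trusts that C code to respect Java's object model.
            System.loadLibrary("nativebuf");
        }

        private final byte[] data;

        public NativeBuffer(int size) {
            this.data = new byte[size];
        }

        // Declared in Java, implemented in C. The JVM checks nothing about
        // what the C implementation does with 'data' or 'length': if the C
        // side writes past 'length' bytes, Java's bounds guarantees are gone.
        private native int nativeFill(byte[] data, int length);

        public int fill() {
            // Java-side validation can't see what the native code does, so
            // the abstraction gap sits on the other side of this call.
            return nativeFill(data, data.length);
        }
    }
    ```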

    re validate method arguments and generate random numbers. These should be library, not language, problems. No surprise they apply to both languages.
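
    Both rules can be satisfied with standard library pieces rather than anything language-specific; a minimal Java sketch (the class and method names are invented for illustration):

    ```java
    import java.security.SecureRandom;
    import java.util.Objects;

    public final class TokenService {
        // SecureRandom, not java.util.Random: unpredictable output is a
        // library choice, not something the language gives you for free.
        private static final SecureRandom RNG = new SecureRandom();

        // Argument validation done up front, with the standard library,
        // rather than ad hoc checks scattered through the code.
        public static byte[] newToken(int lengthInBytes) {
            if (lengthInBytes <= 0) {
                throw new IllegalArgumentException("lengthInBytes must be positive");
            }
            byte[] token = new byte[lengthInBytes];
            RNG.nextBytes(token);
            return token;
        }

        public static String describe(String owner) {
            Objects.requireNonNull(owner, "owner");
            return "token owned by " + owner;
        }
    }
    ```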

    re C’s biggest problem is memory corruption but Java has marginally fewer high-severity rules. About 80% of the vulnerabilities that hackers can actually use are memory corruption. Setting aside its TCB dependency, that gives Java roughly 80% less risk overall vs equivalent programs in C. The author should’ve made that observation immediately instead of one that minimizes Java’s value.
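
    A tiny example of what that figure means in practice: the same off-by-one bug that can silently corrupt adjacent memory in C just throws at the faulting access in Java (illustrative sketch):

    ```java
    public final class OffByOne {
        public static void main(String[] args) {
            int[] values = new int[4];
            // Classic off-by-one: '<=' instead of '<'. In C this could
            // silently overwrite whatever sits after the array; in Java the
            // runtime's bounds check turns it into an
            // ArrayIndexOutOfBoundsException at the bad access.
            for (int i = 0; i <= values.length; i++) {
                values[i] = i;
            }
        }
    }
    ```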

    re closest analogue. Nah, root and admin programs are typically smaller, more focused, and have fewer features than the Java platform. The closest analogue to the Java platform would be POSIX. I don’t think this kind of comparison even really matters, though. It’s the specifics of the program that will create or reduce risk.

    re wrapping up. Didn’t read it. I’ll stop here to say this write-up illustrates perfectly why their rules are misleading for comparing languages’ security, why they’re probably still good for dodging security problems in applications, and why we should still evaluate system/app/component security on a case-by-case basis across the lifecycle. Basically, what they were doing in the 1980s-1990s. Also, there are program analyzers for C and Java, especially commercial ones, that can find violations of many of these rules automatically. Use them if you can.
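
    For example, here’s the sort of Java that security-focused analyzers (SpotBugs with the Find Security Bugs plugin, or commercial equivalents) typically flag automatically; the snippet is illustrative, not taken from any real codebase:

    ```java
    import java.io.IOException;
    import java.util.Random;

    public final class FlaggableExamples {
        // Pattern security-oriented analyzers commonly flag: a predictable
        // PRNG used where unpredictability matters (session IDs, tokens, ...).
        static String weakSessionId() {
            return Long.toHexString(new Random().nextLong());
        }

        // Another commonly flagged pattern: building an OS command from
        // unvalidated input and handing it to a shell.
        static void ping(String hostFromUser) throws IOException {
            Runtime.getRuntime().exec(new String[] {"sh", "-c", "ping -c 1 " + hostFromUser});
        }
    }
    ```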

    1. 3

      Really interesting and worthwhile analysis! I very much appreciate the work CERT does on secure coding, although I think the definition could be widened to cover languages more holistically.

      One note on interpreting these results: the analysis rests on an embedded severity assignment, and that assignment may not be correct for your system. For example, they class denial-of-service attacks as low severity; however, that may not hold if your system is, say, an emergency response system where a failure in availability would impact the ability of emergency services to respond to requests.

      It’s important in any security system which does its own severity assignment (which many do, for example the popular commercial static code analyzers) to not take those assignments at face value. Consider the categories for your own system and potentially re-assign severity to enable proper prioritization of vulnerabilities based on your system needs and threat model (remember, risk is [severity] x [likelihood based on threat model]).