1. 61

  2. 17

    I found the code base size statistics interesting as well:

    • OpenBSD: 2,863,505 loc
    • NetBSD: 7,330,629 loc
    • FreeBSD: 8,997,603 loc
    1. 6

      Here are some interesting FreeBSD 11 code stats from The Design and Implementation of the FreeBSD Operating System (2nd edition): https://imgur.com/a/sF3aV

      1. 4

        ZFS is a beast!

        1. 2

          I think it’s actually bigger than all the evaluated secure kernels combined. It might be bigger than the TCBs of all the security-oriented filesystems combined. I’m less sure about that, since at least one of them might have been big. I’d have blocked it from a secure system before, but now I’m certain: user-mode or it’s gone.

        2. 2

          oh wow. I remember someone claiming that most of FreeBSD code is Ethernet drivers… they were wrong :D

          1. 3

            It may be true! These stats don’t include any drivers or machine-dependent code (“not shown are 2,814,900 lines of code for the hundreds of supported devices”).

            Not sure where the discrepancy comes from: the linked slides claim ~9M LOC, while the book claims, for amd64, 1.5M (machine-independent) + 0.5M (machine-dependent) + 2.8M (drivers) ~= 4.8M. (I doubt there’s twice as much machine-dependent code for other archs.)
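
            Spelling that arithmetic out in a trivial Python sketch (the component counts are the book’s amd64 figures as quoted above, the total is the slides’ claim; nothing here is a new measurement):

            ```python
            # FreeBSD LOC per the book (amd64), in millions of lines, as quoted above.
            machine_independent = 1.5
            machine_dependent = 0.5
            drivers = 2.8

            book_total = machine_independent + machine_dependent + drivers  # ~4.8M
            slides_total = 9.0  # ~9M LOC claimed in the linked slides

            print(f"book: ~{book_total:.1f}M, slides: ~{slides_total:.1f}M, "
                  f"unaccounted: ~{slides_total - book_total:.1f}M")
            ```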

            1. 1

              Oh. Somehow I thought it was machine-dependent on one page and independent on the other.

              1. 1

                You’re right, but machine-dep code stats don’t include drivers (see the update in my comment above).

      2. 13

        TL;DR:

        • OBSD clear winner (they have massively reduced their attack surface over the years)
        • NBSD clear loser
        • FBSD is somewhere in between
        1. 1

          I wonder where DragonFly BSD would fall on that continuum.

        2. 11

          I wrote fixes for most of the wifi issues Ilja reported and it was tremendous fun! This kind of work is very valuable. Thanks!

            1. 2

              OpenBSD’s defect density is great vs FreeBSD, as expected. They do have different sizes, with the latter supporting in-demand features of higher complexity. Just multiplying the size difference by OpenBSD’s defect rate still gives them a very low rate. Still looking great. The other thing would be to look at the defect rate of OpenBSD features that are similar in complexity to what’s in FreeBSD, the kind they’d usually avoid. I have no prediction for that since I don’t have data. Just curious, since the complex stuff really tests developers.

              OpenSSH comes to mind as a good example of what the difference between minimalist and complex apps might look like for OpenBSD developers. It’s had something like 90 vulnerabilities on the CVE list, with who knows how many bugs (defects). At least for vulnerabilities, divide lines of code by number of vulnerabilities. Then do the same counting only the vulnerabilities that aren’t DoS’s or otherwise minor, i.e. the ones that could lead to code injection or something similarly serious, to get the defect rate that matters more for security.
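
              Here’s the kind of back-of-the-envelope math I mean, as a quick Python sketch. Only the ~90 total CVE figure comes from above; the OpenSSH line count and the split between serious and DoS-only issues are illustrative guesses, not measured numbers:

              ```python
              # Back-of-the-envelope vulnerability density: lines of code per reported vulnerability.
              # Only the ~90 CVE total is from the discussion above; the rest are illustrative guesses.

              def loc_per_vuln(loc: int, vulns: int) -> float:
                  """Lines of code per reported vulnerability (higher is better)."""
                  return float("inf") if vulns == 0 else loc / vulns

              openssh_loc = 100_000   # rough guess at OpenSSH's size, not a measured count
              total_cves = 90         # ballpark figure cited above
              serious_cves = 40       # hypothetical subset: excludes DoS-only and other low-impact issues

              print(f"All CVEs:     1 per {loc_per_vuln(openssh_loc, total_cves):,.0f} LOC")
              print(f"Serious only: 1 per {loc_per_vuln(openssh_loc, serious_cves):,.0f} LOC")
              ```

              The exact numbers don’t matter; the point is that filtering out the DoS-only issues gives you the rate that matters for security.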

              1. 1

                Given the lines of code mentioned there, could it be that in OpenBSD it’s really just the kernel, while FreeBSD includes lots of external code (compiler, etc.)?

                1. 10

                  I doubt it.

                  OpenBSD’s style is fairly minimal compared to other OSs. There are far fewer bells and whistles, exposed features, and layers of abstraction in components such as device drivers, for instance.

                  Code is rarely redundant, whereas in e.g. Linux it is not difficult to find redundant implementations (several competing wifi stacks, e.g. madwifi vs mac80211) or driver code copy-pasted and then adjusted many times (rtlwifi). And I have seen a Linux wifi driver (brcmsmac) which supports a single chipset family and is larger than all of OpenBSD’s existing wifi drivers and wireless stack combined!

                  Granted, FreeBSD is better than Linux in this regard, but they’re not actively fighting code bloat as much either.

                  1. 1

                    I understand that OpenBSD is very minimal (especially when you only count the kernel), but the FreeBSD source code being 900 times bigger according to those numbers is a bit surprising.

              2. 2

                As usual, it’s affected by the fact that OpenBSD’s are just counted as bugs they fix unless someone proves they can beat the mitigations on top of the flaw. The kind of people who beat mitigations and those who just report bugs in C code are usually different, with the former rare and focused on stuff with market share. Or in spy agencies. I’d like to see numbers showing bugs that are potential vulnerabilities vs what was proven to be exploitable, to get a better idea of the general defect rate.

                Far as going by published vulnerabilities, the proprietary BSD in SCC’s Sidewinder Firewall would top all but OpenBSD with its 15 vulnerabilities. It also had built-in MAC. SELinux was a knock-off of its model of type enforcement. So, it gets No 2 if we include non-FOSS.

                1. 3

                  SELinux was a knock-off of its model of type enforcement

                  I vaguely recall reading (around the time SELinux was released) that the Fluke OS was a research-oriented OS which prototyped FLASK, apparently done as a joint venture between the NSA, the University of Utah, and SCC. SELinux was an implementation of FLASK (also originally released by the NSA) ported to the Linux kernel. I don’t know that calling it a “knock-off” of SCC’s model tells the complete story.

                  1. 2

                    You’re closer to the truth but not quite. Here’s my motivation:

                    A. Most people reading about BSDs don’t usually care about that work when I post it. So, I didn’t do the backstory.

                    B. The real product was the LOCK system with type enforcement at the hardware level, an IOMMU, and high-assurance security. The one aimed at the big market was retrofitting that to a BSD (Sidewinder’s), based on the work you described. SELinux was a small team at Mitre doing a research prototype not intended for production, since it didn’t even have the full features of a CMW [1]. They said so in the FAQ.

                    Based on that, it seemed like a knock-off of highly-secure or well-built systems like the two predecessors at SCC. Its field record isn’t as good either. So, I call it a knock-off.

                    [1] http://web.ornl.gov/~jar/doecmw.pdf

                  2. 2

                    potential vulnerabilities vs what was proven to be exploitable, to get a better idea of the general defect rate

                    I agree here, but keep in mind that this means going from “only found problems” to “only found problems and only found exploits”. That can be really hard if you have to peel through layers of security on different ends.

                    Security alone is already way more complex than that. In reality your security could even be weakened by wrong or outdated documentation.

                    Now that doesn’t mean at all that any of this is worthless, quite the opposite, but just that measuring something already hard to measure, where numbers have the potential to give wrong impressions, becomes even more fuzzy if you combine it with more numbers that could give wrong impressions in even more ways.

                    So I’d say more numbers are good, but also that one needs to take care not to misinterpret them and think that some numbers fix an issue when they don’t. Otherwise the outcome will likely be hype; I’ve seen it more than once that optimizing too much on the numbers side both resulted in worse quality and gave a false sense of security.

                  3. 2

                    CVE and vulnerability statistics are nonsense. How often do we have to explain this?

                    Classic: https://www.youtube.com/watch?v=3Sx0uJGRQ4s

                    1. 6

                      Hmm, they didn’t just count CVEs; according to the slides, they did a three-month audit of the BSDs and then drew conclusions based on the bugs they found. So, although close, it’s not exactly “vulnerability statistics”.

                      1. [Comment removed by author]

                        1. 4

                          A set of requirements, good design, implementation, and strong verification of each by independent parties. That’s what was in the first security certifications. The resulting systems were highly resistant to hackers. At the B3 or A1 level, that usually showed during the first pentests, where evaluators would find very little or nothing in terms of vulnerabilities.

                        2. 2

                          That’s a great presentation despite deficiencies I’ll overlook, especially on the relationship between what vulnerability researchers focus on and what the CVE lists show. A good example of this I’ve been discussing in another thread is OpenVMS. It lives up to its legendary reliability as far as I can tell so far, but I learned that its security was an actual legend: a mix of myth and reality. The reality was a better architecture for security than its competitors back in the day, attention to quality in implementation, and low CVE counts in practice, with a famous DEFCON result. I figured what was actually happening is that most hackers didn’t care about it or just couldn’t get their hands on the expensive system (same with IBM mainframes and minicomputers). I predicted they’d find significant vulnerabilities in it, which happened at a later DEFCON. So: nice work, highly reliable, and not as secure as advertised by far. ;)

                          Another good example to remember is the Linux kernel. I slam it on vulnerabilities, but that’s because they (esp. Linus) don’t seem to care that much. The vulnerability count itself is heavily biased by its popularity, like Windows once was before Lipner, with his high-assurance security background, implemented the Security Development Lifecycle. I’ll especially note the effect of CompSci and vendors of verification/validation tools. They love hitting Linux since it’s a widely-used codebase with open code. Almost every time I see a new tool for static analysis, fuzz testing, or whatever, they apply it to the Linux kernel or major programs in the Linux ecosystem. They inevitably find new stuff since the code wasn’t designed for security or simplicity like OpenBSD or similar projects. So, there’s more to report just because there’s more eyeballs and analysis in practice instead of just in “many eyeballs” theory. The same amount of attention applied to other projects might have found a similar amount of vulnerabilities, more, less, or who knows what.

                          1. 3

                            nickpsecurity:

                            So, there’s more to report just because there’s more eyeballs and analysis in practice instead of just in “many eyeballs” theory

                            That was one of the conclusions from Ilja as well, if I read it right:

                            “Say what you will about the people reviewing the Linux kernel code, there are simply orders of magnitude more of them. And it shows in the numbers”

                        3. 1

                          Is this pdf mirrored anywhere?

                          1. 3

                            Here you go. Please note, that’s on my private server so don’t spread it around as the canonical source.

                            1. 1

                              Thanks!