This document was interesting for the combination of things in it. It’s the latest from one of the founders of INFOSEC, Roger Schell, who with Paul Karger did landmark security evaluations such as the MULTICS one, helped establish the first standards for securing systems (with real systems built to them), and with a Burroughs engineer convinced Intel to add security features to the 286. The recent work introduces the fundamental concepts (eg reference monitors, TCB subsets) invented in high-assurance security to deal with the effect of complexity on security. It describes how subversion risk is mitigated throughout the lifecycle by the TCSEC criteria. It gives numerous examples, both with and without their GEMSOS system. It shows that legacy software such as Linux can be run with full MLS security. Then, since it’s also a heavily-biased piece of marketing, they encourage adoption of COTS, high-assurance products such as GEMSOS (which they license), closing with a claim that they have “no conflict of interest.” That was probably a joke by Roger Schell.
In another piece I didn’t share, one with more marketing, Schell noted several things about the recent works:
The methods for bolting security onto monolithic kernels (esp Windows and the UNIX’s) consistently failed to work. Smart people build a clever, lightweight mitigation that ignores the root cause. Once it’s popular or there’s big money involved, other smart people come up with a clever attack that bypasses the mitigation. Schell and Karger called this “penetrate and patch,” predicting it would fail. The only mitigations unbroken right now are those with too few users or too little money at stake for breakers to bother. I call it the “Mac is immune to viruses” effect until I have a better name. :)
Schell notes that the simpler separation kernels that dominate high-assurance right now don’t provide end-to-end policy enforcement at the application level. They just enforce separation. Each component and interaction (or the middleware) must be proven secure on its own. Whereas, if you could tolerate the older policies (eg MLS, Type Enforcement), the security kernels enforced them for everything in the system. They were also used in distributed systems by essentially labeling the messages.
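To make the contrast concrete, here’s a toy sketch (my own illustration, not from the document) of the kind of MLS check a security kernel’s reference monitor applies to every access, in the Bell-LaPadula style: every subject and object carries a sensitivity label, and the kernel mediates reads and writes against those labels. The level names and function names are made up for illustration.

```python
from enum import IntEnum

class Level(IntEnum):
    # Hypothetical sensitivity labels, ordered low to high
    UNCLASSIFIED = 0
    CONFIDENTIAL = 1
    SECRET = 2
    TOP_SECRET = 3

def can_read(subject: Level, obj: Level) -> bool:
    """Simple security property: no read up.
    A subject may only read objects at or below its own level."""
    return subject >= obj

def can_write(subject: Level, obj: Level) -> bool:
    """*-property: no write down.
    A subject may only write objects at or above its own level,
    so high data can never leak to a lower level."""
    return subject <= obj

# A SECRET process may read CONFIDENTIAL data but not write to it.
assert can_read(Level.SECRET, Level.CONFIDENTIAL)
assert not can_write(Level.SECRET, Level.CONFIDENTIAL)
# It may, however, write upward to TOP_SECRET.
assert can_write(Level.SECRET, Level.TOP_SECRET)
```

The point of the contrast: a security kernel applies checks like these to *every* subject and object in the system (and, in distributed versions, to labeled messages), whereas a separation kernel only guarantees isolation between partitions and leaves policy like this to the applications or middleware.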
The security kernels were reusable. This goes for separation kernels and low-level runtimes, too. A common objection to an A1-class system, which includes formal verification, is that it requires specialists, takes a long time, and costs plenty of resources to build or change. Critics add that only slow-changing or ultra-critical systems should use such an approach. Schell et al anticipated this early on by making things such as GEMSOS, STOP, and LOCK generic enough for many applications. Some of the integrations were really cruddy or inefficient, but they got the job done. Costly, high assurance plus reuse equals low-to-medium cost for later projects.
The new stuff doesn’t include full lifecycle protection against subversion. INTEGRITY-178B did, as required for EAL6+, assuming the politics in Common Criteria didn’t let them hand-wave something. Not a strong assumption. Most of these separation kernels are just using formal verification with code review. Reaching down to the source or object code with full formal verification is a step up from the security kernels of old. However, they lack covert-channel analysis, secure composition, exhaustive testing, external pentesting (that I’ve seen), and highly-secure repos. The build-from-source and secure-distribution aspects vary project by project. None have had years of field use in systems that nation-states might try to attack, like the old ones did. So, they do less than the old A1-class kernels in terms of overall assurance, plus they aren’t field-proven. Those deficiencies must be improved.
Now, to counter him, I’d point out that research in capability systems, language-oriented security, crypto-oriented protection against RAM attacks, CPU-level enforcement of any of that, and so on shows their methods are likely dated. That, plus new classes of attack and new verification methods, means one would have to re-analyze those systems to see what assurance they have today. That said, the patterns still work today. The systems from that time are still stronger in security than most produced today. They would, by design, be easier to implement and verify with today’s methods. So, there’s definitely stuff worth learning or imitating even from that mid-’80s to early-’90s era approach to INFOSEC. Especially given it worked while their critics’ systems mostly got hacked in simple ways. :)