Nice submission. I’d love to see regular debates like these in the various subfields, since each has plenty worth debating.

I was doing knock-offs of the Tanenbaum vs. Torvalds debate, pushing microkernel-based systems. I get Linus’s points, since there’s definitely sense in them; I advocated the Nizza architecture specifically to restrict how much of the system is structured like a distributed system. Yet timing-related errors kept cropping up even in the Linux kernel between syscalls, showing the monolith had similar problems. On the other side, I found Tanenbaum et al. were correct that most opponents didn’t know anything about microkernels past Mach. The field-proven benefits of QNX, BeOS, KeyKOS, and later Minix 3 were unknown to them. That shows their opposition to microkernels owed less to technical merit than to following the crowd.
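
To make the microkernel point concrete, here’s a minimal Python sketch (illustrative only, nothing to do with Nizza’s or QNX’s actual code): the “kernel” does nothing but pass messages, and a file service runs as an ordinary user-level process that clients reach by request/reply instead of a direct call.

```python
# Minimal sketch of microkernel-style structure: services are user-level
# processes and the kernel's only job is message passing. All names here
# are illustrative, not taken from any real system.
import queue, threading

requests, replies = queue.Queue(), queue.Queue()   # kernel-mediated channels

def file_server():
    """User-level server: a crash or restart here doesn't take the 'kernel' down."""
    files = {"/etc/motd": b"hello from user space\n"}
    while True:
        op, path = requests.get()
        if op == "read":
            replies.put(files.get(path, b""))
        elif op == "shutdown":
            break

threading.Thread(target=file_server, daemon=True).start()

# What a monolith does with a direct syscall becomes send-request / wait-for-reply.
requests.put(("read", "/etc/motd"))
print(replies.get())
requests.put(("shutdown", None))
```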

Another was N-version programming. I pushed this as a counter to both failure and subversion. The first problem I noticed was that different solutions to the same simple protocol often looked pretty similar. I hypothesized that we’re taught to use arrays, control flow, stacks, GCs, and so on in similar ways more often than not. My enhancement was to require the tools (especially languages or VMs) to have internal structures as different as possible, along with different coding styles; only the input-to-output mapping should be the same. On the hardware side, different suppliers using different sub-components and process nodes if possible, and the same for power supplies and maybe cabling. As far as subversion, I proposed having ideologically opposed people work together, with, say, Israelis coding and Palestinians reviewing, trading places occasionally. I still think N-version can help so long as each submission is high quality but internally different, as I describe.
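
For anyone unfamiliar with the basic N-version mechanism, here’s a minimal sketch assuming the simplest possible voter: three deliberately different implementations of one input-to-output mapping, with a majority vote over their results. The variants and the voter are illustrative stand-ins, not anyone’s real submissions.

```python
# N-version sketch: several independently structured implementations of the
# same mapping, plus a majority voter. A faulty version is outvoted (or, with
# no majority, the disagreement itself is surfaced as an error).
from collections import Counter

def sum_v1(xs):                 # iterative, explicit accumulator
    total = 0
    for x in xs:
        total += x
    return total

def sum_v2(xs):                 # recursive, divide and conquer
    if not xs:
        return 0
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return sum_v2(xs[:mid]) + sum_v2(xs[mid:])

def sum_v3(xs):                 # library/builtin path
    return sum(xs)

def vote(versions, xs):
    """Return the majority answer; lack of a majority exposes a faulty version."""
    results = Counter(v(xs) for v in versions)
    answer, count = results.most_common(1)[0]
    if count <= len(versions) // 2:
        raise RuntimeError(f"no majority: {dict(results)}")
    return answer

print(vote([sum_v1, sum_v2, sum_v3], [1, 2, 3, 4]))   # 10
```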

Also, shout out to @vyodaiken for making the list. Your countering the formal methodist with NonStop was a perfect reply! :) Those systems just run and run and run despite failures at every level. Through formal verification? No, just systematic use of mitigations in every layer, developed from at least a thousand instances of failure that customers reported in their systems. It’s an empirical method of adapting a solution with a combo of informal reasoning and field data. It shows one doesn’t need formal methods to make something work nearly perfectly in practice.
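
Here’s a minimal sketch of one such layer of mitigation, the classic process-pair idea: a primary checkpoints its progress to a backup, and the backup takes over from the last checkpoint when the primary dies. This is my simplification for illustration, not Tandem/HP code.

```python
# Process-pair sketch: checkpoint after each step, resume from the last
# checkpoint on failure, so crashes cost a takeover rather than lost work.
import random
random.seed(2)

class ProcessPair:
    def __init__(self):
        self.checkpoint = (0, 0)            # (next index to process, running total)

    def primary(self, items):
        i, total = self.checkpoint
        while i < len(items):
            if random.random() < 0.2:       # simulated crash of the primary
                raise RuntimeError("primary process failed")
            total += items[i]
            i += 1
            self.checkpoint = (i, total)    # checkpoint to the backup after each step
        return total

    def run(self, items):
        while True:
            try:
                return self.primary(items)
            except RuntimeError:
                print("takeover: backup resumes from", self.checkpoint)

print(ProcessPair().run(list(range(10))))   # prints 45 despite repeated crashes
```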

That said, formal verification of protocols for things similar to NonStop always found errors, especially corner cases that humans and tests missed. It’s more accurate to say that combining the empirical and formal methods can increase correctness over using just one of them. Due to cost, apply it only to the most important parts, like replication or failover in NonStop. Four teams applied it to CPUs of varying sizes, with no errors showing up during FPGA or silicon testing. That didn’t happen for non-formally-verified chips, which also were rarely first-pass silicon. So that’s a further argument that formal verification can improve something even as good as NonStop.
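
As a toy illustration of what protocol verification catches, here’s an exhaustive walk of every reachable state of a naive two-node failover protocol (my simplification, not NonStop’s actual protocol), checking the invariant “never two primaries.” The brute-force search immediately turns up the split-brain interleaving that test suites tend to miss.

```python
# Toy model checking: enumerate all reachable states of a naive failover
# protocol and report any state violating the "at most one primary" invariant.
INITIAL = ("primary", "backup")     # state = (role of node 0, role of node 1)

def next_states(state):
    for i in (0, 1):
        if state[i] != "down":      # any live node may crash at any point
            crashed = list(state); crashed[i] = "down"
            yield tuple(crashed)
        if state[i] == "backup":    # naive rule: a backup that misses a
            promoted = list(state)  # heartbeat promotes itself, whether or
            promoted[i] = "primary" # not the old primary is really dead
            yield tuple(promoted)

def check(initial):
    seen, frontier = {initial}, [initial]
    while frontier:
        state = frontier.pop()
        if state.count("primary") > 1:   # invariant violated
            return state
        for nxt in next_states(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None

print("split brain reached:", check(INITIAL))   # ('primary', 'primary')
```

A real tool would also check liveness and fairness, but even this brute-force search covers interleavings that ordinary testing rarely hits.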

As for Code-Pointer Integrity, that was a great piece of research I pushed as an interim solution in a few places. They wisely used segments like NaCl and security kernels did before them. Lacking a security-focused CPU, one should use any hardware-accelerated forms of isolation available to the best of their ability; the CPI technique used them much more efficiently than prior work. The attack paper focused on the watered-down version instead of the strong version, which of course made it easier to find faults. As usual, we’d have been much better off if attackers had instead thrown their brainpower at attacking the strongest stuff. So the CPI people rightly countered by saying so.
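
For readers who haven’t seen CPI, here’s a minimal sketch of the core idea only: code pointers live in a separate protected region that ordinary, corruptible data can’t reach, and every indirect call is resolved and checked through it. Real CPI enforces that region with hardware segments; the Python object below just stands in for it, and all names are mine.

```python
# Conceptual CPI sketch: code "pointers" are kept in a safe region and every
# indirect call goes through a check, so a corrupted handle fails closed
# instead of redirecting control flow.
class SafeRegion:
    """Stand-in for the hardware-isolated region holding all code pointers."""
    def __init__(self):
        self._table = {}                     # opaque handle -> function

    def register(self, fn):
        handle = len(self._table)            # only this handle sits in "normal" memory
        self._table[handle] = fn
        return handle

    def icall(self, handle, *args):
        fn = self._table.get(handle)
        if fn is None:                       # forged/corrupted handle: fail closed
            raise RuntimeError("code-pointer integrity violation")
        return fn(*args)

safe = SafeRegion()

def greet(name):
    return "hello, " + name

h = safe.register(greet)
print(safe.icall(h, "world"))                # legitimate indirect call

try:
    safe.icall(h + 7, "world")               # attacker-corrupted "pointer"
except RuntimeError as e:
    print("blocked:", e)
```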

edit: reposted this comment to the comment section on the OP’s site in case anyone wants to discuss it there, too.