
    There’s a lot to like in this paper. It’s very interesting. I look forward to seeing what they build. I do see some problems.

    “The Multics system, for example, did not have a kernel at all.”

    Multics had a kernel. It was a microkernel of significant size, and it partly used the address-space protection mechanism that the author doesn’t like. It was Per Brinch Hansen who came up with the concept, also called a nucleus, for Solo OS. The original motivation was keeping everything else in the system consistent in how it was described and how it interacted. Even Windows had a microkernel in it for that purpose. So, the LISP alternative needs to do that without a microkernel/nucleus, or with a different kind of microkernel/nucleus.

    “But it’s very hard to protect oneself against defective software. There can be defects in the checkpointing code or in the code for logging transactions, and in the underlying filesystem. We believe it’s a better use of developer time to find and eliminate defects than aim for a recovery as a result of defects.”

    I have no idea where this view came from, as it’s unsupported by empirical evidence. There are a number of transactional databases, strongly-consistent protocols, backup/checkpointing tools, recovery methods, and so on that work very well. There are OpenVMS and NonStop clusters that have run for years to nearly two decades. There’s also the principle in safe/secure software that it’s best to combine prevention (their method), detection, and recovery. They should do all three or have a good reason for not doing so.

    For instance, they might recommend that users run backup software that snapshots their data as often as they feel necessary, plus periodically test restores to make sure they work. Maybe they build checks into the filesystem to catch problems there. Maybe they do something like SQLite does, or just embed SQLite the way IBM i embeds DB2. They should have some recommendation other than ignoring backup and restore entirely in favor of attempting perfect software.
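    The “snapshot plus test the restore” advice can be sketched with SQLite’s online backup API. This is just an illustration, not anything from the paper; the table and data are made up, and in-memory databases stand in for real files:

    ```python
    import sqlite3

    # Live database with some data in it (in practice, a file on disk).
    src = sqlite3.connect(":memory:")
    src.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
    src.execute("INSERT INTO notes (body) VALUES ('hello')")
    src.commit()

    # Take an online, consistent snapshot while the source stays usable.
    snapshot = sqlite3.connect(":memory:")  # would be a backup file path
    src.backup(snapshot)

    # Test the restore: the snapshot must pass an integrity check and
    # actually contain the data, not just exist on disk.
    assert snapshot.execute("PRAGMA integrity_check").fetchone()[0] == "ok"
    assert snapshot.execute("SELECT body FROM notes").fetchone()[0] == "hello"
    ```

    The point of the last two lines is the part people skip: a backup you’ve never restored from is just a hope.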

    EDIT as I read: They later do a checkpointing solution like EROS. That’s a good idea. So, the paper is just inconsistent: they start out talking like recovery doesn’t matter, change their mind, and implement a decent solution. Moving on…

    “In LispOS, the normal mode of execution is supervisor mode. the code executed by the user is translated to machine code by a trusted compiler which is know to not generate code that, if executed, might represent a risk to integrity.”

    That’s an old concept that I’ve never believed in. Here are a few problems where some kind of isolation mechanism is nice, at least as a fall-back: a bitflip or other hardware fault does arbitrary things to code or memory that break the language’s safety model; a compiler bug or optimization eliminates a necessary check or security mitigation; a developer screws up a permission in a system that mixes ACLs and capabilities; the developer was malicious. They seem to be ignoring the first issue while intentionally assuming the others won’t happen, since compilers and developers are perfect. They must work with some exceptional people and tools. Probably running on VLISP or Myreen’s LISP 1.5 in Isabelle/HOL. ;)