1. 2

  2. 1

    Seibel suggests that software is fractal, in that it keeps getting more complex as you drill down through the layers, but Peter disagrees. He makes the case that the kinds of complexity seen at different layers are themselves different, which doesn’t really fit the definition of a fractal.

    I thought this was really interesting and something I hadn’t consciously thought about, but that I think you come to know implicitly.

    It makes me wonder: what would things have to look like to make knowledge more transferable? For example, debugging a CLI tool is a much different experience from debugging something that does RPCs. Maybe Plan9 was going in the right direction? But then that doesn’t really solve debugging web apps. A previous chapter had the idea that instead of the web we should have just started with remote rendering, like an X11 client, and then optimized from there. Maybe that would have made things more uniform as well?

    1. 3

      So consider something like ls. It calls libc functions like printf and readdir; those make syscalls into the kernel; the VFS layer calls UFS, which calls SCSI (via some further indirection). At each layer, you call one function and it turns into another hundred lines of code. It looks like a fractal just by line-count expansion.
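
      The top of that stack can be sketched in a few lines of C. This is a minimal, hypothetical ls (no flags, no sorting), just to show how little code sits at the surface compared to everything it fans out into below:

      ```c
      /* A minimal ls sketch: one synchronous loop, but each library call
       * expands into far more code in the layers beneath it
       * (libc -> syscall -> VFS -> filesystem -> driver). */
      #include <dirent.h>
      #include <stdio.h>

      int main(int argc, char **argv)
      {
          const char *path = argc > 1 ? argv[1] : ".";

          DIR *d = opendir(path);          /* libc wrapper over the open syscall */
          if (d == NULL) {
              perror("opendir");
              return 1;
          }

          struct dirent *ent;
          while ((ent = readdir(d)) != NULL)   /* batches a kernel syscall under the hood */
              printf("%s\n", ent->d_name);     /* buffered I/O over write(2) */

          closedir(d);
          return 0;
      }
      ```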

      But ls is entirely synchronous and single-threaded, and the SCSI code is the complete opposite. So the understanding you build reading ls (the patterns you see, the techniques you learn, its design) doesn’t transfer. Other than some very basic stuff like C having pointers, it’s all different.

      The good news is you probably don’t need to understand all these layers. I think there’s something to be said for good abstractions. The syscall interface is actually pretty good: it doesn’t leak too much of the underlying complexity upward. Maybe you need to take a peek inside readdir, and can’t treat that as a black box, but that’s as far as you need to go.
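
      To make the “peek inside readdir” concrete: on Linux, readdir(3) is built on the getdents64(2) syscall. This sketch is Linux-specific and assumes the linux_dirent64 record layout documented in the getdents64 man page; it reads one raw batch of entries, doing by hand the buffering that readdir normally hides:

      ```c
      /* One layer beneath readdir(3): calling getdents64(2) directly.
       * Linux-specific; linux_dirent64 matches the layout in getdents64(2). */
      #define _GNU_SOURCE
      #include <fcntl.h>
      #include <stdio.h>
      #include <sys/syscall.h>
      #include <unistd.h>

      struct linux_dirent64 {
          unsigned long long d_ino;
          long long          d_off;
          unsigned short     d_reclen;
          unsigned char      d_type;
          char               d_name[];   /* NUL-terminated name follows */
      };

      int main(void)
      {
          int fd = open(".", O_RDONLY | O_DIRECTORY);
          if (fd < 0) { perror("open"); return 1; }

          char buf[4096];
          long n = syscall(SYS_getdents64, fd, buf, sizeof buf);
          if (n < 0) { perror("getdents64"); return 1; }

          /* Walk the packed records the kernel handed back. */
          for (long off = 0; off < n; ) {
              struct linux_dirent64 *e = (struct linux_dirent64 *)(buf + off);
              printf("%s\n", e->d_name);
              off += e->d_reclen;
          }

          close(fd);
          return 0;
      }
      ```

      Seeing that one syscall hand back a packed buffer of variable-length records is about as deep as the abstraction asks you to look; everything below it stays behind the interface.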

      1. 1

        So what you’re saying is all layers should be 100% declarative with explicit dependencies, as that is the common denominator…

        But more seriously, where I think things currently fall apart for me is anything involving distributed systems. Even something simple like setting a breakpoint is an ordeal, and that’s before you start getting all serverless. I don’t actually know how people debug microservices with a traditional debugger. But I guess we’re moving toward standardized printf debugging these days.