1. 23

This seems a little more relevant in light of Spectre and Meltdown.

  2. 11

    That’s such a good idea that mainframes have been doing it for a long time: Channel I/O. It’s part of why their throughput was so much higher, with 90+% utilization versus the single digits that are common on GHz PCs. The CPUs for computation just keep running on whatever is ready to compute while the I/O processors soak up all the interrupts, running their I/O programs (i.e. channel programs) for that. I read somewhere they also used tricks such as self-modifying code, presumably to make them use less RAM or cache. I recommended for years bringing back channel I/O in simplified form so we could use different assurance techniques and hardware mitigations for the two different styles of programs, on top of the performance and simplicity benefits. Embedded is already doing it now, with a mix of high-performance and low-power microcontrollers in one SoC tackling this use case.
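
    Very roughly, the channel-program idea has a shape like the sketch below; the structure names and the doorbell are made up for illustration rather than taken from any real mainframe. The CPU builds a small list of command words, kicks off the I/O processor with a single store, and only checks a completion word when it actually wants the result.

    ```c
    #include <stdint.h>

    enum ccw_op { CCW_READ, CCW_WRITE, CCW_NOP };

    /* One "channel command word" (hypothetical layout). */
    struct ccw {
        enum ccw_op op;
        void       *buf;
        uint32_t    len;
        uint32_t    flags;              /* e.g. CCW_CHAIN: continue to the next entry */
    };
    #define CCW_CHAIN 0x1u

    /* Imagined channel interface: a doorbell plus status words the CPU only polls. */
    struct channel_regs {
        struct ccw *volatile program;   /* CPU stores the head of its command list here */
        volatile uint32_t    done;      /* written by the I/O processor when finished */
        volatile uint32_t    error;
    };

    /* CPU side: hand the whole program to the channel, then go back to computing. */
    static void start_channel_program(struct channel_regs *ch, struct ccw *prog)
    {
        ch->program = prog;             /* a single store kicks it off; no interrupt involved */
    }

    static int channel_finished(const struct channel_regs *ch)
    {
        return ch->done != 0;           /* checked whenever the CPU is ready for the data */
    }
    ```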

    Another architecture in high-assurance security that tried to isolate interrupts from the main CPU a bit was the SEED architecture (see Section 3), used in the Sandia Secure Processor, aka the “Score” processor, which natively ran Java bytecode in a fault-handling fashion.

    1. 3

      Good point. It’s just that ubiquitous multiprocessors make this relevant to microprocessors again. SoCs with specialized I/O or specialized compute (for DSP) processors have been around for a long time. The problem they always face, however, is the impedance mismatch between the OS on the compute processors and the OS on the microcontrollers, both in terms of programming environment and just moving data and control back and forth. IBM did not have that problem with their I/O channels. Also: I don’t want the I/O processors to absorb the interrupts, I want the interrupts to go away. (It is kind of amazing how the last 30 years of microprocessor design have been a rediscovery of all the techniques used in mainframes.)
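
      To make “the interrupts go away” concrete, the sort of mechanism I have in mind is just a polled single-producer/single-consumer ring in shared memory, sketched below with my own made-up names; a real version would also need memory barriers or C11 atomics on weakly ordered cores. The I/O core pushes completions at its own pace and the compute core drains them whenever it polls, so neither side ever takes an interrupt.

      ```c
      #include <stdint.h>

      #define RING_SLOTS 256u                 /* power of two so the index masking works */

      struct spsc_ring {
          volatile uint32_t head;             /* written only by the producer (I/O core) */
          volatile uint32_t tail;             /* written only by the consumer (compute core) */
          uint64_t slot[RING_SLOTS];          /* payload, e.g. completed-buffer descriptors */
      };

      /* Producer side, on the I/O core. Returns 0 if the consumer has fallen behind. */
      static int ring_push(struct spsc_ring *r, uint64_t v)
      {
          uint32_t h = r->head;
          if (h - r->tail == RING_SLOTS)
              return 0;
          r->slot[h & (RING_SLOTS - 1)] = v;
          r->head = h + 1;                    /* publish only after the slot is written */
          return 1;
      }

      /* Consumer side, on the compute core: called from a poll loop, never an ISR. */
      static int ring_pop(struct spsc_ring *r, uint64_t *out)
      {
          uint32_t t = r->tail;
          if (t == r->head)
              return 0;                       /* nothing pending right now */
          *out = r->slot[t & (RING_SLOTS - 1)];
          r->tail = t + 1;
          return 1;
      }
      ```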

      1. 1

        Yeah, the mismatch was a big problem. That still seems true with new work on network-plane functions and OSes. I’m interested in what you think of SEED, given your idea of little controllers plus FIFO queues is similar to theirs. Hell, unless I speed-read it wrong, both of your ideas might be combined: you isolate the main CPUs while the I/O cores do something like SEED, keeping consistency with the main CPUs’ data types, memory layout, etc.

    2. 3

      What if it were phased in? Like writing the OS to mask interrupts and poll on the existing design, so new chips could be designed to capitalize on the fact that no one uses the interrupts? (Rough sketch below.)

      I ask because it seems like a cardinal sin for hardware to make backwards-incompatible changes.
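
      Something like the sketch below is what I mean by phasing it in; the register names and bits are invented for illustration, but it only uses what existing hardware already has (masking the device’s interrupt and reading a status register), while a future chip could drop the interrupt line entirely.

      ```c
      #include <stdint.h>

      /* Hypothetical device register layout and bits, purely for illustration. */
      #define DEV_CTRL_IRQ_DISABLE  (1u << 0)
      #define DEV_STATUS_DATA_READY (1u << 0)

      struct dev_regs {
          volatile uint32_t ctrl;
          volatile uint32_t status;
          volatile uint32_t data;
      };

      /* Same silicon as today: the driver just masks the interrupt at setup time. */
      static void dev_setup_polled(struct dev_regs *dev)
      {
          dev->ctrl |= DEV_CTRL_IRQ_DISABLE;
      }

      /* Called from the OS's normal scheduling/poll path instead of an ISR. */
      static int dev_poll_once(struct dev_regs *dev, uint32_t *out)
      {
          if (!(dev->status & DEV_STATUS_DATA_READY))
              return 0;                       /* nothing yet; go run something useful */
          *out = dev->data;                   /* assume reading data clears DATA_READY */
          return 1;
      }
      ```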