
I found this while Googling just the other day.

Apparently in the 90s this group came up with some general-purpose methods of running arbitrary parallel (MIMD) programs on SIMD hardware.

The approach seems fascinating in itself. At the same time, as far as I can tell, a significant factor in the rise of deep learning has been how well it maps onto simple GPU computing. I wonder whether other machine learning techniques could get a boost if they leveraged the MOG approaches.

Of course, "arbitrary" means different instructions for different kernels/data-sets. One would still be limited by the memory model of the chips. Still, this also seems to relate to the periodic discussions you see about "new classes of chips", since it shows the ability of GPUs to emulate arbitrary parallelism.
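
To make the core trick concrete, here's a minimal sketch in CUDA of the classic MIMD-on-SIMD interpreter idea (my own hypothetical illustration, not the actual MOG implementation; the opcodes and names are made up). Each lane keeps its own program counter into its own instruction stream, and the only shared control flow is a fetch-decode-dispatch loop; lanes whose current opcodes differ get serialized by the hardware's divergence masking:

    // Sketch only, not MOG's real code: a per-lane MIMD interpreter.
    // Every SIMD lane holds its own program counter into its own
    // instruction stream; the shared control flow is one dispatch loop.
    enum Op { OP_ADD, OP_SUB, OP_JMP, OP_HALT };

    struct Insn { int op; int arg; };

    __global__ void mimd_interpret(const Insn *programs,
                                   int insns_per_thread, int *result)
    {
        int tid = blockIdx.x * blockDim.x + threadIdx.x;
        const Insn *prog = programs + tid * insns_per_thread; // per-lane program
        int pc = 0;          // per-lane program counter
        int acc = 0;         // per-lane accumulator
        bool halted = false;

        while (!halted) {
            Insn i = prog[pc++];
            switch (i.op) {  // lanes on different opcodes diverge here;
                             // the hardware masks and serializes them
            case OP_ADD:  acc += i.arg;  break;
            case OP_SUB:  acc -= i.arg;  break;
            case OP_JMP:  pc = i.arg;    break;
            case OP_HALT: halted = true; break;
            }
        }
        result[tid] = acc;
    }

The obvious cost is that, in the worst case, each loop iteration steps through every distinct opcode present in the warp, which is presumably exactly the overhead the 90s work had to attack.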
