1. 44
  1. 4

    How fast or slow was the execution time of training etc? I’m curious!

    1. 5

      Thanks! The execution time for the prediction with the Lisp program was 4 seconds on QEMU and 2 minutes on the i8086 emulator Blinkenlights (https://justine.lol/blinkenlights/), on a 2.8 GHz Intel i7 CPU. On a 4.77 MHz IBM PC, I believe it should run about 590 times slower than on QEMU, which comes out to roughly 40 minutes. The training time on TensorFlow for 1000 epochs was 6.5 seconds on a 6 GB GTX 1060 GPU. The memory usage of the Lisp program fits into 64 KiB, including the SectorLISP binary, the S-expression stack for the entire Lisp program, and the additional stack used for evaluating the program, meaning it should run in the boot process of the original hardware.
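
      The 590× factor looks like a back-of-the-envelope estimate from the clock-speed ratio alone (2.8 GHz vs. 4.77 MHz), ignoring IPC and memory-speed differences; a quick Python sketch of that arithmetic:

      ```python
      # Rough runtime estimate for a 4.77 MHz IBM PC, scaling the
      # 4-second QEMU run on a 2.8 GHz i7 by clock speed alone
      # (ignores IPC, memory speed, and emulation overhead).
      qemu_seconds = 4.0
      i7_hz = 2.8e9
      pc_hz = 4.77e6

      slowdown = i7_hz / pc_hz              # clock-speed ratio
      pc_minutes = qemu_seconds * slowdown / 60

      print(f"slowdown: {slowdown:.0f}x")              # ~587x
      print(f"estimated time: {pc_minutes:.0f} min")   # ~39 min
      ```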

    2. 4

      This was a fun read. I love Lisp articles and I’m just learning how to build neural networks in the Coursera deep learning class so this is all super fresh in my brain. Thanks for sharing!

      1. 3

        Thanks! I’m happy that you enjoyed it! It was interesting how the notation for the final neural network function closely resembled that of modern frameworks such as TensorFlow, which this project also uses for training. It shows how far Lisp’s capability for abstraction extends, even in pure Lisp.
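
        To illustrate that point (a hypothetical sketch, not code from the article): in a functional style, a dense layer is just a higher-order function, and a model is a composition of such functions, which is the same shape a framework’s functional API has:

        ```python
        def step(x):
            # Hard-threshold activation for a tiny illustrative neuron
            return 1.0 if x >= 0 else 0.0

        def dense(weights, bias, activation):
            # A layer is just a function from an input vector to an output
            return lambda xs: activation(sum(w * x for w, x in zip(weights, xs)) + bias)

        # Layers compose like ordinary functions: output = layer2(layer1(x)),
        # which mirrors how pure Lisp would express the same network
        and_gate = dense([1.0, 1.0], -1.5, step)

        print(and_gate([1.0, 1.0]))  # 1.0
        print(and_gate([1.0, 0.0]))  # 0.0
        ```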