    I had to pause the video to write something about a common mistake he made that really needs more attention:

    “Discrete vs continuous computing. What distinguishes computing from other sciences dealing with the real world is that it is the study of the discrete. Whole point is getting a continuous system to act discrete. Hybrid is dealing with discrete system and describing its interaction with continuous environment. You can use the same methods to analyze both. Essence of computing is discreteness. Essence of perception is turning environment into discrete version.”

    This is wrong. The immediate answer to Meijer’s question is that there are two forms of computation: discrete (e.g., digital) and continuous (e.g., analog). Shannon and others even developed computational models for general-purpose analog machines. In fact, many early computers were analog systems that operated on real numbers continuously. Here’s a great summary for anyone interested in early analog computers and how they work:

    http://oro.open.ac.uk/5795/1/bletchley_paper.pdf

    Note: It jumped out at me that the simple analog missile simulator took 1 minute to do what took digital computers nearly 100 hours to simulate. Think about the implications. :)

    A few quick points of interest about them. Analog computers directly implement mathematical functions with one to a few components per function; the same function would take many more “gates” in digital. The analog circuits run at full speed and never miss anything they are sensing, while the digital ones only act when the clock hits them and sleep otherwise. Those two traits alone are why analog remained popular in control and monitoring systems well into the digital era. The simplicity of the components also means they use less power (sometimes an eighth) and have low unit cost.
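
    To make the contrast concrete, here’s a rough Python sketch (not a circuit simulator; the frequency and step size are arbitrary choices of mine). The classic analog patch for y'' = -w²·y wires up two integrators plus a gain/inverter, roughly one component per mathematical operation, and simply runs; a digital machine approximates the same continuous behavior by stepping a clocked loop:

    ```python
    # Rough sketch (not a circuit simulator). The analog-computer patch for
    # y'' = -w^2 * y is two integrators and a gain/inverter -- about one
    # component per mathematical operation, running continuously. A digital
    # machine instead approximates that behavior with a clocked loop.
    import math

    W = 2.0 * math.pi   # oscillation frequency in rad/s (arbitrary for the demo)
    DT = 1e-4           # digital time step -- the "clock" the analog patch doesn't need
    T_END = 1.0

    def digital_emulation() -> float:
        """Discrete-time approximation of the two-integrator analog patch."""
        y, dy = 1.0, 0.0            # initial conditions set on the integrators
        t = 0.0
        while t < T_END:
            ddy = -W * W * y        # gain + inverter stage
            dy += ddy * DT          # first integrator
            y += dy * DT            # second integrator
            t += DT
        return y

    if __name__ == "__main__":
        approx = digital_emulation()
        exact = math.cos(W * T_END)  # what the analog patch produces directly
        print(f"digital approximation: {approx:.4f}, continuous solution: {exact:.4f}")
    ```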

    Drawbacks of analog are many. They operate by directly manipulating physical properties such as voltage, so the signal is the number. A little scary to represent a million dollars in voltage. ;) There are limits to the precision you can get, and noise creeps in, which designs must constantly battle to counter. They have no memory, although it can be emulated to a degree. Digital computers can simulate basically anything and remember the results, whereas analog machines are limited to combinations of the mathematical primitives they implement.

    The regularity of digital also meant synthesis and verification kept getting more automated over time, with development largely reduced to high-level design plus interaction with black boxes. Analog design works directly with the physical materials, so it mostly has to be done by hand, and it gets harder at each smaller process node. Less confidence that a design will work on the first run or two, as digital usually does, means analog can cost a lot more in mask production and fab runs. I have little data on how that plays out in practice in deep sub-micron analog, though. Oh yeah, it’s also worth noting that all digital circuits are analog underneath, wired up in a constrained way to pretend to be little Lego blocks. Digital tamed analog but didn’t replace it. ;)
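
    A toy illustration of the precision point above (the tolerance and noise figures here are made up for illustration, not measurements of any real analog machine): a dollar amount encoded as a voltage on a 0–10 V scale versus stored as an exact count of cents.

    ```python
    # Toy illustration of analog precision limits: a dollar amount encoded as a
    # voltage with ~0.1% gain error and a little additive noise, versus an exact
    # integer count of cents. The numbers are made up for illustration.
    import random

    FULL_SCALE_VOLTS = 10.0
    FULL_SCALE_DOLLARS = 1_000_000.0
    TOLERANCE = 0.001       # 0.1% gain error (assumed)
    NOISE_VOLTS = 0.002     # a couple of millivolts of noise (assumed)

    def analog_roundtrip(dollars: float) -> float:
        """Encode dollars as a voltage, apply component error and noise, decode."""
        volts = dollars / FULL_SCALE_DOLLARS * FULL_SCALE_VOLTS
        volts *= 1.0 + random.uniform(-TOLERANCE, TOLERANCE)   # gain error
        volts += random.gauss(0.0, NOISE_VOLTS)                # noise
        return volts / FULL_SCALE_VOLTS * FULL_SCALE_DOLLARS

    if __name__ == "__main__":
        digital_cents = 100_000_000                  # exact: $1,000,000.00
        recovered = analog_roundtrip(1_000_000.0)    # typically off by hundreds of dollars
        print(f"digital (exact):   ${digital_cents / 100:,.2f}")
        print(f"analog round trip: ${recovered:,.2f}")
    ```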

    The result was that digital dominated in general-purpose computing, with most study going to discrete and Boolean systems. That’s where Lamport’s mistake formed. Simultaneously, continuous and analog development kept going in both computer science and the marketplace (e.g., Texas Instruments). The resulting computers confined analog circuits to power management, I/O, some pre-processing, and so on. The constant battle for maximum performance per watt then led to a resurgence of analog in mixed-signal ASICs, where more functionality is done in analog for the raw speed at low wattage that the old analog computers had. They dominate in things like mobile SoCs. Niche work continued on general-purpose analog computers, models, and FPGA-like solutions (links below). Other researchers, and even I, noticed that the brain seems to operate a lot like analog components, so they built ultra-efficient neural nets out of analog circuits; one was a whole wafer.

    So, we’re all better off if we remember that computation can be either discrete or continuous, and invest R&D in both. That nature is continuous kind of implies we should’ve been spending more on analog. The coprocessor model seems like the easiest route for now: analog accelerators embedded in an otherwise digital system to do what they’re good at. FPAAs deserve more research, too. Personally, I want to see more work on general-purpose analog computers, as discoveries there might surprise us. :)
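
    Here’s a hypothetical sketch of that coprocessor split, with a made-up AnalogAccelerator stand-in (a noisy software model, not any real device’s API): the digital host keeps control flow, memory, and exact bookkeeping, and hands the multiply-accumulate work to the analog unit.

    ```python
    # Hypothetical coprocessor-model sketch. AnalogAccelerator is a software
    # stand-in for an analog unit: fast and low-power in principle, but only
    # ~3 significant digits of accuracy. Not a driver for any real hardware.
    import random

    class AnalogAccelerator:
        RELATIVE_ERROR = 1e-3   # assumed precision limit of the analog unit

        def dot(self, a: list[float], b: list[float]) -> float:
            """Multiply-accumulate, the kind of math the analog side is good at."""
            exact = sum(x * y for x, y in zip(a, b))
            return exact * (1.0 + random.uniform(-self.RELATIVE_ERROR, self.RELATIVE_ERROR))

    def classify(weights: list[float], sample: list[float], accel: AnalogAccelerator) -> bool:
        # The digital host decides what to compute and interprets the result;
        # thresholding is cheap and exact on the digital side.
        return accel.dot(weights, sample) > 0.0

    if __name__ == "__main__":
        accel = AnalogAccelerator()
        print("positive class:", classify([0.5, -1.2, 0.3], [1.0, 0.2, 2.0], accel))
    ```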

    Analog co-processor 400x faster than digital (2005) http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.113.3740&rep=rep1&type=pdf

    General-purpose analog deployed in production (old) http://www.comdyna.com/gp6intro.htm

    GPAC and Turing equivalence (2012) https://arxiv.org/abs/1203.4667

    60 million synapses on a whole wafer using analog circuits http://web1.kip.uni-heidelberg.de/Veroeffentlichungen/download.php/4713/ps/1856.pdf

    Siegelmann et al.’s page on analog NNs, super-Turing computation, and the BSS model of computation over the reals http://binds.cs.umass.edu/anna_cp.html