This is a nice summary of some very old ideas, useful as a kind of "history of the field" for people who entered the neural network field recently. The hypothesis that synaptic dynamics are very important for computation is old, and it is one of those ideas that come in and out of fashion.

I suspect that, as far as connections between neurons go, the key aspect for artificial computation is how synaptic weights change with activity. The biological details are interesting, but Hebbian learning captured the essence very early in the game.
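
As a minimal sketch of what I mean by "the essence" (my own illustration, not from the article; the toy setup and all names are mine):

```python
import numpy as np

# Classic Hebbian update: a weight grows in proportion to the correlation
# between presynaptic and postsynaptic activity.
rng = np.random.default_rng(0)

n_inputs = 5
w = rng.normal(scale=0.1, size=n_inputs)  # small random initial weights
eta = 0.01                                # learning rate

for _ in range(1000):
    x = rng.random(n_inputs)  # presynaptic activity
    y = w @ x                 # postsynaptic activity (linear neuron)
    w += eta * y * x          # "cells that fire together wire together"

# Note: the pure rule is unstable (weights grow without bound); variants such
# as Oja's rule, w += eta * y * (x - y * w), add a decay term to bound it.
print(w)
```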

The whole tenor of the article is another instance of the "if only we modeled what actual biological systems do more faithfully, we would do better" hypothesis. It has been very popular, but it is not clear that it has stood the test of time.

In some sense, yes: if we could reproduce a brain atom by atom, we would have another brain. However, the essence of modeling is knowing what to leave out. Scientists have devoted much effort to accurately modeling synaptic dynamics, and it is not clear that this has solved any essential computational problems.

There is some follow-up reading you may want to do:

1. Neuromorphic engineering. This was very hot for a while; the core promise was more efficient, low-power chips, often achieved by mimicking spiking neurons and synaptic dynamics.
2. The Blue Brain Project, which originally aimed for extremely accurate simulations of mouse brains. It provided some entertaining public drama, because of some of the individuals involved and possibly some jealousy over how much funding the EU gave it (e.g. https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/).

Also, this is kind of an advertisement for a research program at Stanford U., which is somewhat of a gray area, but I guess people indirectly advertise products and services here all the time.