This is a nice summary of some very old ideas, and it is good for people who entered the neural network field recently to have as a kind of “history of the field”. The hypothesis that synaptic dynamics are very important for computation is old, and is one of those ideas that comes in and out of fashion.
I suspect that as far as connections between neurons go, the key aspect for artificial computation is how synaptic weights change with activity. The biological details are interesting but Hebbian learning captured the essence very early in the game.
The whole tenor of the article is another instance of the “if only we modeled more faithfully what actual biological systems do, we would do better” hypothesis. It has been very popular, but it is not clear that it has stood the test of time.
In some manner, yes, if we could reproduce a brain atom by atom we would have another brain. However, the essence of modeling is knowing what to leave out. Scientists have devoted much effort to accurately modeling synaptic dynamics, and it is not clear that this has solved any essential computational problems.
There is some follow-up reading you may want to do:
Also, this is kind of an advertisement for a research program at Stanford U., which is somewhat of a gray area, but I guess people indirectly advertise products and services here all the time.