While I deeply admire Karpathy's work, this guide gets just about everything wrong in terms of scope. It's a tutorial on gradient descent, and nothing more; anyone with a university calculus credit can suss that out eventually. The interesting / fun parts of hacking on ANNs are input handling, neuron types, evolutionary strategies, spatial optimisation, and avoiding overfitting... none of which are covered.
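For context, the gradient descent being referred to boils down to a very short loop: repeatedly step a parameter against the derivative of a loss. A toy sketch (my own illustration, not code from the guide):

```python
# Minimise f(w) = (w - 3)^2 by gradient descent; the minimum is at w = 3.
def grad(w):
    return 2 * (w - 3)  # derivative of (w - 3)^2

w = 0.0          # starting guess
lr = 0.1         # learning rate
for _ in range(100):
    w -= lr * grad(w)  # step opposite the gradient

print(round(w, 4))  # converges toward 3
```

That single update rule, applied layer by layer via the chain rule, is essentially the whole of backpropagation; the rest of the craft lies in the topics listed above.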
Hint number one: they're called "artificial neural networks" because they started off as a failed model of real neural networks.