Now change the world a little bit and watch your nice, dramatically over-fitted little neural network completely freak out. ^_^;;
This is beautifully presented! The diagram of half-precision floats communicates the internals super well. Are self-driving cars using genetic algorithms in the real world and crashing into everything for a few generations? I would imagine they would take an unsupervised approach so you're not required to have every kind of stop sign and car labeled.
Thanks for the feedback! Yeah, I don’t think that the Genetic Algorithm would be a real-world solution for self-driving cars. I was using it in this side project just to get some intuition about how the algorithm works and to check whether the cars will actually evolve into something interesting or not.
This is super interesting, though I feel like it's also a great example of how deeply your sensor infrastructure affects the results. It would be interesting to see some discussion of why there are 8 sensors around the car, and how that shapes how the car sees the world.
And then something like letting the car move the sensors around too could be very interesting… super cool in any case!
EDIT: thinking about this a bit more… it’s kind of insane that the car doesn’t know where the parking spot is in this example! Like, you look at the sensors and it knows nothing about the parking location!
Did I miss something or does the car not actually know where the target is? It just looks like it has overfitted to keeping some distance to the left and centering forwards and backwards?
Yep. It also basically uses the Genetic Algorithm to find the coefficients of a linear regression.
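To make that concrete, here's a minimal sketch of the idea: evolving the coefficients of a linear policy with a genetic algorithm. Everything here is hypothetical (the population size, mutation rate, and the stand-in fitness function are all made up; real fitness would come from simulating the parking attempt), it just shows the select/crossover/mutate loop.

```python
import random

random.seed(0)

N_SENSORS = 8
# Assumed "ideal" coefficients, used as an illustrative stand-in for
# "how well did the car park with these weights".
TARGET = [0.5, -1.2, 0.3, 0.0, 0.9, -0.4, 0.1, 0.7]

def fitness(genome):
    # Negative squared error to the target weights: higher is better.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Perturb each coefficient with small Gaussian noise at the given rate.
    return [g + random.gauss(0, 0.2) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    cut = random.randrange(1, N_SENSORS)
    return a[:cut] + b[cut:]

# Random initial population of coefficient vectors.
population = [[random.uniform(-2, 2) for _ in range(N_SENSORS)]
              for _ in range(50)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # elitist selection: keep the top 10
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(40)]
    population = parents + children

best = max(population, key=fitness)
```

With elitism the best fitness never regresses, so after enough generations the coefficients drift toward whatever the fitness function rewards, which is exactly the over-fitting behavior discussed above.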
To do this with RL, you'd probably need a network a few layers deep, with the target position passed as an input alongside the sensor readings. It should be pretty simple to modify the author's code to do so.
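For what that input layout might look like, here's a hedged sketch of a tiny two-layer policy network that takes the 8 sensor readings plus the target's position relative to the car. The layer sizes, the (dx, dy) target encoding, and the two outputs (steering, throttle) are all assumptions for illustration, not the author's actual code.

```python
import math
import random

random.seed(0)

# Assumed shapes: 8 sensors + (dx, dy) to the parking spot -> 2 actions.
N_IN, N_HIDDEN, N_OUT = 8 + 2, 16, 2

# Random untrained weights; RL (or a GA) would tune these.
w1 = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HIDDEN)]
w2 = [[random.uniform(-1, 1) for _ in range(N_HIDDEN)] for _ in range(N_OUT)]

def forward(sensors, target_dx, target_dy):
    # Concatenate sensor readings with the target offset, then run
    # two dense layers with tanh activations.
    x = sensors + [target_dx, target_dy]
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    return [math.tanh(sum(w * hi for w, hi in zip(row, h))) for row in w2]

# Hypothetical usage: flat sensor readings, target 1.5 units right, 0.7 back.
action = forward([0.2] * 8, 1.5, -0.7)  # [steering, throttle], each in (-1, 1)
```

The key difference from the linear-policy setup is that the network can actually condition its behavior on where the parking spot is, instead of memorizing one fixed spot.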