Interesting article. My suspicion of deep learning was always around its lack of symbolic capabilities. I recently posted an article that outlined the issues behind the first AI winter, and the current bubble seems to be repeating the same kinds of mistakes. Researchers are fooling themselves into thinking they're addressing real questions about intelligence when in reality they're just building better and better pattern matchers. Nothing wrong with that approach, but the hype doesn't line up with reality, and that's usually when bubbles pop.
Exactly, most of what passes for AI really is just function estimation. In that context the term AI is a misnomer.
But to be clear, function estimation has had some real-world success, e.g., voice recognition and translation. It simply is not the magic sauce that people want to believe it is.
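To make "function estimation" concrete, here's a toy sketch (my own illustration, not from the thread): recovering a simple linear function from samples by least squares. The data and coefficients are made up for the example.

```python
# Toy "function estimation": fit y = a*x + b to samples by least squares.
# The samples below were generated from y = 2x + 1, so the fit should
# recover a = 2, b = 1.

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares slope and intercept.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

print(a, b)  # 2.0 1.0
```

Speech recognition and translation are this same idea scaled up enormously: learn a mapping from inputs to outputs that fits observed examples.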
The current "AI" techniques can work and still not be intelligence.
What is intelligence though? Could it be the case that it’s nothing more than trillions of dumb function estimators?
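The "trillions of dumb function estimators" idea can be sketched with a toy example (illustrative only, hand-set weights): each unit below is a crude threshold estimator that can only draw a straight line, yet composing three of them computes XOR, which no single such unit can represent.

```python
# Composition of "dumb" estimators: single threshold units are linearly
# limited, but composing them yields XOR.

def unit(weights, bias):
    """A single threshold unit: a crude function estimator."""
    return lambda xs: 1 if sum(w * x for w, x in zip(weights, xs)) + bias > 0 else 0

# Layer 1: two hidden units.
h_or   = unit([1, 1], -0.5)    # fires when x1 OR x2
h_nand = unit([-1, -1], 1.5)   # fires unless x1 AND x2

# Layer 2: AND of the hidden units; the composition is XOR.
out = unit([1, 1], -1.5)

def xor(x1, x2):
    return out([h_or([x1, x2]), h_nand([x1, x2])])

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))  # 0 only on the diagonal, 1 off it
```

Whether stacking enough of these ever amounts to intelligence is exactly the open question.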
Even allowing for function composition, the current AI wave is proving that assertion false. Driving a car, for instance, is turning out to be a bit more than an overcomplicated mathematical function.