    The article presents no evidence that composability (which it never quantifies) helps any more than more formally studied forms of machine learning.

    Here’s an example of a flawed argument from the article about why neural networks can’t do NLP: “Pixels come from dumb sensors, but as we have seen, words come from people with rich models of both the world and likely listeners.”


      Last I checked, existing AI approaches can only produce expert systems; they aren’t able to do effective transfer learning or self-directed learning. A general-purpose AI needs to be able to learn independently to solve novel problems it hasn’t previously encountered, and to do that efficiently it must be able to leverage existing learning from other contexts. As the article states:

      > The set of possible situations is effectively infinite because situations are composed of combinations of an almost infinite set of possible pieces. The only way to match that complexity is to be able to dynamically compose pieces to fit the situation—we need the ability to combinatorially choose model pieces to match the combinatorial complexity of the environment.

      Meanwhile, I’m curious what specifically you’re claiming is flawed in the argument that neural networks doing NLP have only a limited and superficial understanding of the content. That’s demonstrably the case.