1. 6
  1.  

  2. 5

    This part is rather overstated:

    I think most contemporary AI researchers agree that Logic-based AI is dead.

    Few contemporary AI researchers think logic-based AI is the whole story, but not many think it is 0% of the story, either. In terms of research and applications, if anything there’s a bit of a resurgence in the past 10 years in logical, constraint-based, and related methods.

    Of course such methods are used for problems they're suited to. For example, because Maersk, a large shipping/logistics company, is the local 900-lb gorilla, many Danish AI researchers work on projects with it. Many of those projects end up using inference and constraint-solving tools in various ways, because global logistics tends to be well suited to such methods.

    In terms of tooling, two current-generation classes of tools to come out of logic-programming and constraint-solving research are satisfiability-modulo-theories (SMT) solvers, like the recently open-sourced Z3, and answer-set programming systems, like Potassco. Both get quite a bit of use, with SMT the more widespread. There is also a lot of work on hybrid logic/statistical systems, like Alchemy, though these are still more experimental.
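
    To make "constraint-solving tools" concrete, here is a minimal sketch using Z3's Python bindings (the z3-solver package); the berth/shipment variables are invented for illustration and aren't from any real logistics project:

    ```python
    # Toy scheduling problem (made up for illustration): assign two shipments
    # to berths so that they don't collide and shipment A is handled first.
    from z3 import Int, Solver, sat

    berth_a, berth_b = Int("berth_a"), Int("berth_b")

    s = Solver()
    s.add(berth_a >= 1, berth_a <= 3)   # each shipment goes to one of berths 1-3
    s.add(berth_b >= 1, berth_b <= 3)
    s.add(berth_a != berth_b)           # no two shipments share a berth
    s.add(berth_a < berth_b)            # shipment A must be handled before B

    if s.check() == sat:
        print(s.model())                # e.g. [berth_a = 1, berth_b = 2]
    else:
        print("no feasible assignment")
    ```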

    1. 1

      In terms of research and applications, if anything there’s a bit of a resurgence in the past 10 years in logical, constraint-based, and related methods.

      IMO that's a controversial statement. I'm not aware of any impactful results coming from those research areas in the past ~10 years, where 'impactful' can probably be roughly approximated by 'was a big deal at NIPS/ICML/ECML/AISTATS' - the premier venues for machine learning research.

      1. 2

        I'm talking about AI rather than specifically machine learning, which is a sub-area of AI. Yes, logic is not big in machine learning. If you check out premier AI venues like AAAI, ECAI, IJCAI, or JAIR, the situation is quite different.

    2. 4

      This is a disappointing article on an interesting topic.

      Dismissing logic is clearly a mistake, but honestly none of the topics is looked at with any depth or insight.

      What would be good is an article that looked at the strengths of each method, then compared and contrasted them, imagined ways to unify them, considered the barriers to unifying them, and so forth.

      In particular, artificial neural networks are apparently quite good at recognizing particular static patterns once they've been laboriously trained to do so - deep learning seems to be mostly a refinement of this. What's obviously needed is a way to combine this ability with some higher degree of logic - to allow searching for patterns subject to various logical constraints. (I know efforts to do this exist, but they are definitely in their infancy.)
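
      As a rough illustration of what "patterns subject to logical constraints" could look like, here is a toy Python sketch; the label probabilities are made-up stand-ins for a trained network's outputs, and real hybrid systems do this far less naively:

      ```python
      # Pick the most probable joint labelling of two image regions that also
      # satisfies a hand-written logical constraint.
      from itertools import product
      from math import prod

      labels = ["cat", "dog", "leash"]

      # Stand-ins for per-region softmax outputs of some trained network
      # (invented numbers, purely for illustration).
      region_probs = [
          {"cat": 0.6, "dog": 0.3, "leash": 0.1},
          {"cat": 0.2, "dog": 0.3, "leash": 0.5},
      ]

      def consistent(assignment):
          # Hand-written constraint: a leash only appears alongside a dog.
          return "leash" not in assignment or "dog" in assignment

      # Brute-force search over joint labellings, keeping only consistent ones.
      best = max(
          (a for a in product(labels, repeat=len(region_probs)) if consistent(a)),
          key=lambda a: prod(region_probs[i][lab] for i, lab in enumerate(a)),
      )
      print(best)  # ('cat', 'dog'); the unconstrained argmax ('cat', 'leash') is ruled out
      ```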

      1. 1

        Agree entirely. I found the continuous unsupported off-the-cuff assertions very off-putting.