We have previously discussed some of what Norvig talks about, like impossibility theorems in consensus, how explainability is not enough, the robustness of adversarial images, how the curse of high dimensionality afflicts reasoning, and how it might not matter.

I wonder about the degree to which Norvig is right when he says:

> We’re seduced by our low-dimensional metaphors. You look in a textbook, and it says, “Okay, now we’ve mapped out the space. ‘Cat’ is here [ASL: circle], and ‘dog’ is here [ASL: circle (other hand)], and maybe there’s a tiny little spot in the middle [ASL: small] where you can’t tell the difference, but mostly we’ve got it all covered.” And if you believe that metaphor, then you say “We’re nearly there, and there’s only going to be a couple adversarial inputs.” But I think that’s the wrong metaphor. What you should really say is, “It’s not a 2D flat space [ASL: wall] that we’ve got mostly covered. It’s a million-dimensional space [ASL: big] and ‘cat’ is this string that goes out in this crazy path [ASL: point-insane?], and if you step a little bit off the path in any direction [ASL: go? (repeated)], you’re in nowhere-land and you don’t know what’s going to happen.”

I can’t read all of the signs, but I still find this bilingual metaphor really interesting. Did we create these gaps into which we can fall, or were they always there? It is quite possible that our own rationality and reasoning are similarly delicate paths, which can stray into wrong, alien regions. It also suggests that motivated reasoning is a danger for us and computers alike, since a motive is, in effect, a constant vector in high-dimensional semantic space; being consistently perturbed in a single direction could be sufficient to step into Norvig’s “nowhere-land”.
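
To make the “constant vector” intuition concrete, here is a toy numerical sketch (mine, not Norvig’s or Fridman’s): two made-up unit vectors stand in for “cat” and “dog”, a point starts firmly on the “cat” side, and a fixed random “motive” direction gets added in tiny increments. The dimension, step size, and cosine-similarity bookkeeping are all illustrative assumptions, not anything from the interview.

```python
# Toy sketch of "motivated" drift: a constant perturbation direction, applied
# in individually negligible steps, walks a point away from every prototype.
# All values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
d = 1_000

# Two unit-norm prototype directions standing in for "cat" and "dog".
cat = rng.normal(size=d)
cat /= np.linalg.norm(cat)
dog = rng.normal(size=d)
dog /= np.linalg.norm(dog)

# Start at a point that is clearly "cat", plus a little noise.
x = cat + 0.01 * rng.normal(size=d)

# The "motive": one fixed random direction, applied in tiny increments.
motive = rng.normal(size=d)
motive /= np.linalg.norm(motive)
step = 0.02  # roughly 2% of the starting norm per step

def cos(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(f"start     cos(cat)={cos(x, cat):+.2f}  cos(dog)={cos(x, dog):+.2f}")
for i in range(1, 201):
    x = x + step * motive  # each step is individually negligible
    if i in (10, 50, 200):
        print(f"step {i:>4} cos(cat)={cos(x, cat):+.2f}  cos(dog)={cos(x, dog):+.2f}")
# After a couple hundred tiny steps the point resembles neither "cat" nor
# "dog": it has been perturbed, consistently in one direction, into nowhere-land.
```

The similarity to “cat” decays steadily while the similarity to “dog” never rises; the point ends up resembling nothing it started near, even though no single step was large enough to notice.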

I think that Fridman might miss the mark somewhat when they say:

> When you say “problem-solving”, you really mean taking some kind of – maybe collecting – some kind of data set, cleaning it up, and saying something interesting about it, which is useful in all kinds of domains.

This isn’t wrong, but I think it narrows the scope of AI research. Part of the original hope behind expert systems, and part of what drives AI’s seasonal hype cycles, is the difficulty of accepting that computers only do what they are told to do. We cannot instruct humans to be free and imaginative, and we cannot do it for computers either, but we hoped that computers could somehow learn to learn (to learn to learn to …) and overcome this infinite regress. I sympathize greatly with this conclusion, but we should always keep in mind that if we accept it in full, then we are doomed to Lucas’ conception of minds and machines, where humans are always the superior mathematicians and AIs are no better than pocket calculators.
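
For contrast, the “problem-solving” loop that Fridman describes fits comfortably into a few lines, which is part of why it reads as a narrowing. A deliberate caricature follows; the file name and column names are invented for illustration.

```python
# A caricature of "collect a data set, clean it up, and say something
# interesting about it".  "observations.csv", "category", and "value" are
# made-up placeholders, not anything referenced in the interview.
import pandas as pd

df = pd.read_csv("observations.csv")  # collecting some kind of data set
df = df.dropna().drop_duplicates()    # cleaning it up
summary = df.groupby("category")["value"].agg(["mean", "std", "count"])
print(summary.sort_values("mean", ascending=False))  # saying something interesting about it
```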