1. 7
  1.  

  2. 6

    We are so quick to dream about the concept of super-intelligence, yet we don’t even understand just plain old boring intelligence.

    I think this is the wrong frame. Intelligence (and super-intelligence) is a capability, and capability doesn’t require understanding. You can use muskets without understanding them, as the Māori did. The Castle Bravo hydrogen bomb still exploded, at roughly two and a half times its predicted yield, even though its designers were badly mistaken about lithium-7. We create capable deep learning models by trial and error, but we don’t have a good theoretical understanding of why they work. Aspirin has been sold since 1899, but its mechanism of action was unknown until 1971.

    1. 3

      I tried to write a point-by-point rebuttal of this piece, but there was so much to poke holes in that I gave up a few sections in. Its general point, that we aren’t close to general AI and that current AI is a “mathematical trick” for optimizing pattern recognition, is certainly under vigorous debate right now and, depending on the practitioner or researcher you ask, probably true. It’s certainly what I believe, anyway: we’re a long way from general AI. I’m just not sure what to make of this article, since it could condense its meandering into a few sentences and still get the point across.

      1. 3

        I think most of the article makes good points. I’m not sure what you’d poke holes in. It does ramble, but I think the general gist is sound.

        This article may not be as in-depth as “On the Imminence and Danger of AI”, but it makes a lot of the same points.

        There are a lot of scientists who think that if they can just figure out which part of our brain architecture leads to sentience, we can replicate that part. But looking at the regulatory network of even a single bacterium shows how insanely complex biology is! It’s not a matter of finding the missing piece.

        Another issue is that people assume intelligence can scale. Say we create a sentient machine: can we just add more CPU, more brain, to make it go faster? A true general-purpose AI may not be able to multiply numbers any faster than we can, and there may be no way to scale it except to give it access to ordinary machines.

        I agree this article rambles a bit, but the philosophical questions around what “intelligence” is are important. We train our machines to give us the outputs we want. Train software to tell cats from dogs and the output is always either/or: what happens when you throw in a bird? (There’s a small sketch of this failure mode at the end of this comment.)

        The big question is that of goal setting. How do we choose our goals? Will we create machines that will one day be able to choose their own goals? To set their own evolutionary fitness, outside of any constraints we put on their environments?
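
        A minimal sketch of that either/or failure mode, with made-up numbers (the logits below are invented purely for illustration): a two-class softmax always splits its probability mass between “cat” and “dog”, so even a bird comes back with a confident-looking label.

        ```python
        import numpy as np

        def softmax(logits):
            """Turn raw scores into probabilities that sum to 1."""
            exps = np.exp(logits - np.max(logits))
            return exps / exps.sum()

        # Hypothetical raw scores a cat-vs-dog model might emit for a bird photo.
        # The values are invented; the point is only that there are exactly two
        # outputs, so "neither of the above" is not an option.
        bird_logits = np.array([2.1, 0.3])
        probs = softmax(bird_logits)
        for label, p in zip(["cat", "dog"], probs):
            print(f"{label}: {p:.3f}")
        # cat: 0.858
        # dog: 0.142  -- a confident answer for something that is neither
        ```

        Nothing in the usual training objective asks the model to flag inputs unlike anything it was trained on; that has to be added deliberately, for example with a rejection threshold or an explicit “other” class.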

        1. 4

          I don’t know what the numbers look like, but I’d be willing to bet that only a minority of AI researchers and practitioners believe our current ML trajectory will lead to the emergence of AGI, due in large part to limitations exposed by the no-free-lunch (NFL) theorems and the curse of dimensionality. Likewise, many of the currently successful topologies, like GANs, are not based on prior biological art and sit solidly within the ML discipline itself.

          The big question is that of goal setting. How do we choose our goals? Will we create machines that will one day be able to choose their own goals? To set their own evolutionary fitness, outside of any constraints we put on their environments?

          Perhaps, but there are many more mundane AI goals to tackle first that can have real value: optimal PID tuning, sun-tracking swivelling solar arrays, self-balancing platforms, sensor denoising, supply-chain prediction, automated drone flight. There’s a lot of value in AI that has nothing to do with AGI itself; indeed, to many, AGI is the least interesting goal of the lot.

          If anything, I think there’s a large danger lurking in the homogeneity of our datasets and in the implicit biases of practitioners. This can lead to anything from having no input data at all on entire ethnicities to missed post-stratification because observed likelihoods line up with “prior” experience (a small sketch of what post-stratification corrects follows at the end of this comment). Rather than spending mental effort trying to ground AI epistemologically, I’d rather we understand and effectively communicate the dangers of our increasingly algorithm-dependent world today than worry about AGI arriving on some unclear timeline.

          As for the article itself, I think it falls into the common trap of assuming that, because a subset of AI practitioners throw gobs of compute at garbage models far more complex than the problem domain calls for, every AI practitioner is like that. Most practitioners understand how difficult hyperparameter search is, understand the real pitfalls of deep NN topologies and overfitting, and are sensitive to the sheer effort that must go into data cleaning before an algorithm can produce meaningful predictions. Judging an entire field by its weakest members does the field an injustice.
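
          As promised above, a rough post-stratification sketch with invented numbers (the group names, rates, and sample sizes are all hypothetical): when one group dominates the sample, the naive pooled average is pulled toward that group, and re-weighting group means by their actual population shares corrects it.

          ```python
          import numpy as np

          # Hypothetical setup: two groups with different true rates, but the
          # sample over-represents group A (a "homogeneous dataset").
          true_rate = {"A": 0.30, "B": 0.60}
          population_share = {"A": 0.50, "B": 0.50}
          sample_size = {"A": 900, "B": 100}

          rng = np.random.default_rng(0)
          samples = {g: rng.binomial(1, true_rate[g], n) for g, n in sample_size.items()}

          # Naive estimate: pool the sample and take the mean.
          naive = np.concatenate(list(samples.values())).mean()

          # Post-stratified estimate: average within each group, then weight each
          # group mean by its share of the population, not its share of the sample.
          post_stratified = sum(population_share[g] * samples[g].mean() for g in samples)

          print(f"naive:           {naive:.3f}")            # near 0.33, dragged toward group A
          print(f"post-stratified: {post_stratified:.3f}")  # near the population value of 0.45
          ```

          The same arithmetic is why “observed likelihoods lining up with prior experience” can hide a problem: the naive number looks stable and plausible, yet it mostly reflects whoever happened to be in the data.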

          1. 3

            Nope, the no-free-lunch theorems present no barrier to AGI. This is basic: no algorithm can be effective against all possible problems, but that’s not an issue, since you only need to be effective against real-world problems. NFL, if applied as you want, would also prove human intelligence is impossible, which is absurd.
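
            For reference, a rough sketch of what the theorem actually says (Wolpert and Macready’s optimization version, stated informally): averaged over all possible objective functions, any two search algorithms see the same distribution of results, which says nothing about the structured subset of problems the real world actually poses.

            ```latex
            % No-free-lunch theorem for search/optimization (Wolpert & Macready, 1997),
            % stated informally: for any two algorithms a_1 and a_2, summing over every
            % possible objective function f, the probability of observing a given
            % sequence of cost values d^y_m after m evaluations is identical.
            \sum_{f} P\!\left(d^{y}_{m} \mid f, m, a_{1}\right)
              = \sum_{f} P\!\left(d^{y}_{m} \mid f, m, a_{2}\right)
            ```

            The sum over every conceivable f is doing all the work; restrict attention to problems with real-world structure and the theorem makes no claim at all.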

            1. 3

              NFL, if applied as you want, would also prove human intelligence is impossible, which is absurd.

              Or that our current model of neurons/NNs/search algorithms is not how human intelligence works. I’ll freely admit that I’m not particularly strong on the AGI aspect of ML because I’ve only read a few papers on it, so take that as you will.

          2. 3

            Will we create machines that will one day be able to choose their own goals?

            Why would we do that? Then they would pursue their own goals instead of ours, which is presumably not what we want.

            1. 2

              Good thing we know how to precisely specify our own goals then! :)